
AI Video Generator
An AI video tool developed by ETH’s Media and Methods Lab that empowers lecturers to produce authentic teaching videos. It combines the presenter’s voice, appearance, and a scripted lecture, and its intuitive interface supports a smooth workflow from model training to final export while maintaining a personal and engaging presence.

Project Contribution
The project was already underway when I joined, with the technical foundation in place. My role was to create a clear and accessible design tailored to ETH lecturers. I was responsible for planning the user flow and designing the desktop UI of the AI video tool, ensuring an intuitive experience for users with varying levels of technical familiarity. The UI design has been handed over to the developer, and the project is still in development.
User Flow Planning
Developed a user-friendly navigation structure tailored to academic use cases.
UI Design
Designed a clear and accessible desktop interface for ETH lecturers.
Developer Handoff
Delivered design assets and specifications for implementation.
The Challenge
A central challenge of the project was finding the right structure for the user flow: whether to design the tool as a single page or divide it into multiple steps. A single-page layout offered simplicity but quickly became overloaded, as all input fields and the video output shared the same space, leading to visual clutter and a potentially overwhelming experience for users.
A multi-step layout, in contrast, risked fragmenting the process and making it difficult to return to earlier inputs. The solution was a balanced two-step flow: the first screen focuses on input, gathering all the data needed to train the AI model, while the second screen is dedicated to generating videos. This approach keeps the interface focused, reduces cognitive load, and preserves a clear, logical workflow.

Structured Workflow for Better Results
Generating videos takes time and patience. That’s why the process prioritises audio and image generation first, before focusing on the final video. Users are encouraged to finalise and approve their audio and image selections by the scripting stage, because once the script is submitted, video generation becomes significantly more time-consuming and resource-intensive.

UI Component Preview
The UI component combines a slider for intuitive adjustment, a manual input field for precise values, and a toggle to quickly enable or disable a parameter. While the tool is currently being coded and tested in the backend, the generation process is already functioning. The interface design and interactive prototype will follow in the next development phase.


Slider + Input Field
Allows intuitive adjustments with the slider and precise control through manual input.
Toggle Button
Enables quick activation or deactivation of parameters for faster testing and feedback.

Combined Component
Slider, input field, and toggle merged into one compact, user-friendly control.
First Tool Test
Initial backend tests show the tool is functioning and generating results as expected. Fine-tuning and prototyping can now begin.
