
Volkswagen Group of America

Designing an AI-powered HMI that uses gaze detection and voice interactions to reveal points of interest.

Jan 2025 - May 2025

In January 2025, I was onboarded onto a team of 6 to design Interactive Discovery, an AI-powered in-car system that helps drivers and passengers explore their surroundings safely and effortlessly. It combines gaze detection, voice interaction, and a dynamic HMI (human-machine interface) to surface real-time information about nearby points of interest.


By leveraging voice commands and large language models (LLMs), the system proactively guides users toward meaningful discoveries—without requiring them to take their hands off the wheel or eyes off the road.

I was the lead product designer alongside 2 designers, 3 engineers, and 1 product manager.


I was responsible for the end-to-end design of the prototype: user flows, wireframes, and the final screens, as well as the system logic for multimodal interactions (voice + visual output). Throughout the process, I collaborated with developers on UI feasibility and design handoff.
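To make that multimodal logic concrete, here's a minimal sketch (in TypeScript, with hypothetical names, not the production code) of how a gaze sample plus a voice question could resolve to a single POI that drives both the spoken reply and the card shown on the center screen.

```typescript
// Hypothetical sketch of the multimodal flow: a gaze sample plus a voice
// utterance resolve to one POI, which drives both the voice reply and the
// card shown on the center screen. All names here are illustrative.

interface GazeSample {
  bearingDeg: number; // direction the user is looking, relative to the car
  timestamp: number;
}

interface Poi {
  id: string;
  name: string;
  category: "cafe" | "restaurant" | "landmark" | "residential" | "event";
  distanceMeters: number;
}

interface DiscoveryResponse {
  spoken: string; // short answer read aloud by the voice agent
  card: { title: string; subtitle: string; ctas: string[] };
}

// Stubbed POI lookup; in the real system this would combine the car's
// position with the gaze bearing and query map data.
function resolvePoi(gaze: GazeSample): Poi {
  return { id: "poi-1", name: "Ferry Building", category: "landmark", distanceMeters: 220 };
}

function handleDiscoveryQuery(utterance: string, gaze: GazeSample): DiscoveryResponse {
  const poi = resolvePoi(gaze);
  return {
    spoken: `That's the ${poi.name}, about ${Math.round(poi.distanceMeters)} meters ahead.`,
    card: {
      title: poi.name,
      subtitle: `${poi.category} · ${poi.distanceMeters} m`,
      ctas: ["More Info", "Save"],
    },
  };
}

// Example: the driver asks "What's that?" while glancing to the right.
console.log(handleDiscoveryQuery("What's that?", { bearingDeg: 15, timestamp: Date.now() }));
```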

Understanding the Space

Humans are becoming more comfortable interacting with voice agents in their day-to-day lives (think Siri, Alexa, or Perplexity, to name a few).

Natural language models have become woven into our everyday routines—helping us search, shop, organize, and connect with minimal friction.

Car Tech Doesn't Cut It.

Yet when we step into the car, that ease often disappears. Maps and in-car systems today are largely practical tools, built for getting from point A to point B. They rely on manual input, pre-set navigation, or clunky interfaces, leaving little room for spontaneous discovery.

Discovery on the Road

But driving is an experience filled with curiosity. We’re always scanning our surroundings, especially in new locations and cities.


Drivers and passengers frequently spot intriguing places on the road, like a bustling café, a striking landmark, or a farmers market about to open. And yet, accessing that information in real time is often difficult, distracting, or unsafe.

AI for (Safe) Discovery

Introducing Interactive Discovery — a voice-first experience that brings exploration into the car. With a simple question like “What’s that?”, users can engage with their surroundings effortlessly. Within seconds, a voice agent responds, and contextual information is displayed on the car’s center screen without the driver taking their eyes off the road.


But the goal here isn't just to answer the user's questions — it’s about enriching the journey. By weaving AI into the in-car experience, we can bridge the gap between where we are and what we're curious about, allowing for a new kind of exploration that's intuitive and hands-free.


This realization led us to define our problem statement:

Discovering Use Cases

When we first began designing Interactive Discovery, we grounded our work in a simple question: When and why would someone in a car want to engage with the world around them? We wanted to ensure that we were designing for moments of curiosity and spontaneity that people experience on a typical drive.


Would a driver notice a landmark and wonder what it was? Would a passenger want to explore coffee shops nearby? These kinds of questions helped us uncover the core use cases and emotional triggers behind exploration.

From there, we began to define key categories that mapped to real-world behavior, from single businesses and residential properties to nature spots, cultural landmarks, and large venues. Each of these needed to feel discoverable, relevant, and rewarding to engage with in context.


To guide my design, I looked at Apple CarPlay as a reference. Its interface uses large touch targets, clear visuals, and minimal text, making it extremely easy to use while driving. This helped me simplify my own designs and focus on what works best in a car: quick, glanceable, and distraction-free interactions.

Creating a Framework

We created a standardized card framework to guide what types of content would be displayed and how they'd be structured. Drawing inspiration from Adobe’s design system and card anatomy, we focused on building a clear, intuitive content hierarchy—ensuring that each layer of information felt organized, readable, and easy to scan in a driving context.

This content hierarchy was designed to prioritize clarity and scannability, especially in a glance-based environment like an in-car display.


We placed the preview image and title at the top to immediately capture attention and establish context. Metadata and content areas follow, offering supporting details in digestible chunks. Finally, actions and CTAs are anchored at the bottom for easy access if the user chooses to engage further.
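As a rough illustration of that anatomy, the card could be modeled as a single typed structure whose field order mirrors the visual hierarchy. The field names below are assumptions for the sketch, not our production schema.

```typescript
// Illustrative card model: field order mirrors the top-to-bottom visual
// hierarchy (preview + title first, metadata and content next, CTAs last).

interface PoiCardModel {
  // 1. Immediate context
  previewImageUrl: string;
  title: string;

  // 2. Supporting metadata, kept short enough to scan at a glance
  metadata: {
    category: string;      // e.g. "Café"
    distanceLabel: string; // e.g. "0.3 mi ahead"
    status?: string;       // e.g. "Open until 6 PM"
  };

  // 3. Digestible content chunks (summary, highlights, etc.)
  content: string[];

  // 4. Actions anchored at the bottom of the card
  ctas: Array<{ label: "More Info" | "Save"; enabled: boolean }>;
}

const exampleCard: PoiCardModel = {
  previewImageUrl: "https://example.com/cafe.jpg",
  title: "Blue Bottle Coffee",
  metadata: { category: "Café", distanceLabel: "0.3 mi ahead", status: "Open until 6 PM" },
  content: ["Popular spot for pour-over coffee with outdoor seating."],
  ctas: [
    { label: "More Info", enabled: true },
    { label: "Save", enabled: true },
  ],
};

console.log(exampleCard.title);
```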

Next Iterations

We used the card framework as a focused lens to guide our iterations, helping us stay aligned on what content needed to be prioritized and how it should be structured. It gave us a clear foundation to evaluate what changes improved usability while keeping the overall experience familiar, intuitive, and easy to process in motion.

After testing multiple layout options, we chose the final design for its balance of simplicity and usability. Compared to earlier versions, it prioritizes showing only key information and is better suited for the in-car environment. One important decision was to make the CTAs large enough for accessibility but not overly prominent — using a neutral gray kept them visible without overpowering the content.

Applying the Framework to Audi's HMI

Later in the project, I was also tasked with designing an Audi-specific interface, aligning with the broader goal of making this system adaptable and appealing for automotive brands to integrate into their future vehicles.

Audi's Infotainment UI

You can see that it has a darker, more muted color palette than our original designs. But since our UI framework and core elements were already in place, I was able to scale the design across other visual languages without a full redesign.

Designing for Other Use Cases

I also explored what these cards would look like for our other use cases, such as residential properties, events, and landmarks, and considered additional features and functionality we could surface to make the experience more descriptive.

I explored a potential integration with platforms like Zillow or Trulia to support home buyers browsing neighborhoods. As users drive through residential areas, the system could surface real-time information about nearby listings—such as pricing, availability, and property status—offering a convenient shortcut compared to manually searching on a phone.


For event venues and public spaces, we identified a common moment of curiosity: when drivers notice a large crowd or activity and wonder what's happening. By providing real-time data about ongoing events, the system can satisfy that curiosity instantly.

Seeing the UI Live

Seeing the UI in a real environment was important: it showed us how different widgets and components interact within the navigation screen, and it helped inform later design decisions.

Live Demo of Current Audi Screens

For example, when testing the map screen, we realized that the map widget had to take up a minimum of two columns. That gave us a clear constraint to work around: the map had to be either full screen or two-thirds of the page, which saved us from designing something that wasn't technically possible.
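That constraint is simple to express as a layout check. The sketch below assumes a three-column navigation grid, consistent with the two-thirds-or-full-screen rule above; the numbers are illustrative, not Audi's actual spec.

```typescript
// The navigation screen is treated as a 3-column grid; the map widget must
// span at least 2 columns, i.e. either 2/3 of the screen or full screen.
// Column counts are assumptions based on what we observed in the live demo.

const TOTAL_COLUMNS = 3;
const MAP_MIN_COLUMNS = 2;

function isMapSpanFeasible(mapColumns: number): boolean {
  return mapColumns >= MAP_MIN_COLUMNS && mapColumns <= TOTAL_COLUMNS;
}

console.log(isMapSpanFeasible(1)); // false — the layout we had to rule out
console.log(isMapSpanFeasible(2)); // true  — map at 2/3, card in the remaining column
console.log(isMapSpanFeasible(3)); // true  — full-screen map
```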

Adapting for Audi

We initially explored a compact version that would live as a small widget and expand into a full view, then moved on to larger versions, which we decided to prioritize. Going from the small widget to the expanded view would take another prompt or click, and we wanted to keep the experience as simple as possible.

The third card on the right is the finalized Audi design. We shifted the close button to the left so it would sit closer to the driver, and we settled on two CTAs: “More Info” and “Save.” “More Info” expands the card into a full-screen view, provided the driver doesn’t have ongoing navigation, while “Save” stores the card in a library that can be accessed later in a parked state.
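Here's a minimal sketch of that CTA logic, with hypothetical names rather than the prototype's actual code: "More Info" only expands to full screen when there's no ongoing navigation, and "Save" adds the card to a library reviewed later in a parked state.

```typescript
// Illustrative handling of the two finalized CTAs. The guard on active
// navigation reflects the rule that the full-screen view is only offered
// when the driver isn't following a route.

type CardView = "widget" | "fullscreen";

interface VehicleState {
  hasActiveNavigation: boolean;
}

interface SavedLibrary {
  items: string[]; // POI ids saved for later, browsable in a parked state
}

function onMoreInfo(current: CardView, vehicle: VehicleState): CardView {
  // Expand only when no route guidance is running; otherwise stay compact.
  return vehicle.hasActiveNavigation ? current : "fullscreen";
}

function onSave(poiId: string, library: SavedLibrary): SavedLibrary {
  // Avoid duplicate saves; the library is surfaced again when parked.
  return library.items.includes(poiId)
    ? library
    : { items: [...library.items, poiId] };
}

// Example: the driver taps "More Info" mid-route, so the card stays compact.
console.log(onMoreInfo("widget", { hasActiveNavigation: true })); // "widget"
console.log(onSave("poi-1", { items: [] }));                      // { items: ["poi-1"] }
```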

We designed Audi-specific cards for categories like cafés, restaurants, and landmarks. Each card followed our core content structure but was visually adapted to match Audi’s darker style and paired with the custom icons.

Before I joined the project, a set of initial designs had already been created, but we definitely weren’t limited to them. I saw these early explorations as a strong starting point, but also as an opportunity to push the experience further. There were clear areas for refinement: improving visual hierarchy, simplifying navigation, and creating a more scalable, intuitive layout that could better support in-car interactions.

With the previous designs as a foundation, I started exploring new directions to refine the experience. I tried a variety of approaches, from simple, glanceable snapshots to more detailed previews that offered richer context.


Along the way, I experimented with different card sizes, layouts, and CTA styles, considering how elements like color and information density could impact usability. A key focus was finding the right balance between simplicity and depth, deciding what to show immediately, and what could be revealed with further interaction.

Early on, I realized I was unintentionally designing for a mobile interface. The layouts, text density, and interaction patterns made sense on a phone screen—where users can scroll, tap, and read at their own pace.


But inside a car, attention is limited, and an HMI requires a radically simplified UI: larger touch targets, minimal text, and clear, glanceable visuals. This shift in mindset helped us refocus the design around what's actually usable—and safe—in motion.

Expanding to Full Screen

To address the limitations of the initial card layout, we explored a full-screen experience to support deeper engagement with POIs. The goal was to provide users with a more informative and immersive view when selecting “More Info,” offering details that could guide them to new destinations or surface personalized recommendations. This expanded view allowed us to experiment with layout variations, additional modules like nearby spots and curated guides, and even a user’s Reveal history—helping to shape a more exploratory and context-rich experience.


Full Screen Explorations

We developed a design spec to guide the information architecture of the full-screen view, ensuring it could flexibly adapt to a wide range of POI types. The goal was to surface the most relevant and actionable content depending on context—whether the user tapped into a simple location like a café or a more complex space like the Ferry Building. Core details such as the name, type, distance, and status offer bite-sized info, while LLM-generated summaries and contextual modules (e.g., “Happening Now” or “See What's Inside”) enrich the user’s understanding and engagement. The interface also dynamically adjusts the level of depth, providing richer content, live insights, or related recommendations where appropriate—making the system feel both intelligent and tailored to each discovery moment.


This design spec ultimately guided our final modal designs, helping us determine the right balance of content to display per POI. We chose not to include the Reveal History in the card itself, as that feature already exists in a separate part of the in-car dashboard UI and is better suited as a dedicated experience outside of individual points of interest.
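To illustrate how the spec might translate into a content model, here's a sketch in which each POI type maps to the modules the full-screen view would render. The module names come from the spec above; the mapping logic itself is an assumption for illustration, not the spec's exact rules.

```typescript
// Illustrative mapping from the design spec to a module list: core details
// are always present, while richer modules appear only when the POI type
// and available data call for them.

type PoiType = "cafe" | "restaurant" | "landmark" | "event_venue" | "residential";

type ModuleId =
  | "core_details"     // name, type, distance, open/closed status
  | "llm_summary"      // short LLM-generated description
  | "happening_now"    // live events or activity
  | "see_whats_inside" // vendors, exhibits, amenities
  | "nearby_spots"
  | "curated_guides";

function modulesFor(type: PoiType, hasLiveEvents: boolean): ModuleId[] {
  const modules: ModuleId[] = ["core_details", "llm_summary"];

  if (hasLiveEvents) modules.push("happening_now");

  // Complex, multi-tenant spaces (e.g. the Ferry Building) warrant more depth.
  if (type === "landmark" || type === "event_venue") {
    modules.push("see_whats_inside", "curated_guides");
  }

  modules.push("nearby_spots");
  return modules;
}

console.log(modulesFor("cafe", false));
// ["core_details", "llm_summary", "nearby_spots"]
console.log(modulesFor("event_venue", true));
// ["core_details", "llm_summary", "happening_now", "see_whats_inside", "curated_guides", "nearby_spots"]
```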

High-Fidelity Modals






Here’s the prototyped version of this interaction, showing how the experience transitions from the basic card to the full-screen view. In addition to manual input, users can activate this expanded view through a voice command—for example, by saying “Tell me more” or “More info.” This reduces the need for physical interaction, supporting safer, less distracted driving while still giving users access to deeper information. The entire experience is coupled with the voice user interface (VUI), allowing for a seamless, hands-free exploration of content that's tailored to the driving environment.
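As a rough sketch of how those utterances could map to the same expand action as the on-screen CTA (the phrase list and matching are assumptions; the actual system routed intents through the voice and LLM stack):

```typescript
// Illustrative intent matching: a few expansion phrases trigger the same
// action as tapping "More Info", keeping voice and touch paths equivalent.

const EXPAND_PHRASES = ["tell me more", "more info", "more information"];

type Intent = "expand_card" | "none";

function matchIntent(utterance: string): Intent {
  const normalized = utterance.trim().toLowerCase();
  return EXPAND_PHRASES.some((p) => normalized.includes(p)) ? "expand_card" : "none";
}

console.log(matchIntent("Tell me more about that place")); // "expand_card"
console.log(matchIntent("Navigate home"));                 // "none"
```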


Custom Audi Icons

To align with Audi’s visual language, we created a custom set of icons for each point of interest category. These icons were designed to match Audi’s minimalist, monochrome aesthetic—ensuring they felt native to the brand’s HMI while maintaining clarity and recognizability at a glance.




The In-Car Experience.

To show what this experience would look like inside the car cabin, I mocked up the design inside an Audi Q6, showcasing how the experience looks and feels in context. This helped simulate real-world usage — from glanceable information to intuitive interactions — giving stakeholders a clearer sense of how drivers would engage with the system on the road.

Key Takeaways.

Working on Interactive Discovery pushed me to think beyond traditional screen-based design and consider how products function in motion and in context. One key takeaway was understanding the importance of close collaboration with engineers, especially in an in-car environment where technical feasibility, system limitations, and safety regulations shape what’s possible.


Testing prototypes in a real vehicle and seeing the system live helped me deeply appreciate how design decisions translate into real-world usability. It also emphasized the need for precision and intentionality in every interaction, especially when balancing visual UI with voice-first experiences.

What did I learn?

Information depth should match user curiosity

Tiny changes — like shifting a button’s position, switching a carousel to a grid, or removing a distracting background — led to noticeably smoother user experiences. This project reminded me that great UX isn’t just about big redesigns; it’s about the accumulation of small, thoughtful decisions that reduce friction, increase clarity, and build user confidence.

Balancing design and technical constraints

Small decisions, like moving the generative AI panel or redesigning the template flow, had major implications for user experience. I learned to think critically about what users see first, and how layout can direct focus or cause confusion.

Design systems scale clarity

In redesigning the toolbar, I learned that minimal doesn’t mean limited. The challenge was to make core tools easily accessible while still preserving flexibility — and to match user expectations by borrowing from familiar text editor interfaces.

You don’t know until you see it

Every moment of user confusion was a design opportunity. This project taught me to treat friction not as failure, but as a prompt — to ask why it exists and how we can reduce it in a way that feels natural, not forced.

Looking back, this project deepened my understanding of what it means to design systems that are both context-aware and scalable. Designing for an in-car experience pushed me to think not only about clarity and safety, but also how a single design framework could adapt across different visual languages—like transitioning from our initial prototype to Audi’s branded interface. This experience reinforced that good design isn't just about what works in isolation, but what holds up across environments, platforms, and user expectations, all without compromising on usability or identity.


Configuring the VUI (Voice User Interface)

The first half of this experience is the visual UI that displays on the screen; the second half, which completes it, is the voice-activated response. We explored different levels of interaction after a point of interest (POI) was revealed: from simple answers to proactive suggestions or layered follow-up questions.

VUI Response Pathing

This work was important because not every user wants the same level of engagement. Some might only want a quick fact, while others are open to a richer, ongoing conversation. By workshopping these flows with the developers in charge of the LLMs, we made sure the system could adapt its tone and depth of response to each user.
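As a rough sketch of that response pathing, the depth of the reply could be steered by a per-user engagement level passed to the LLM. The level names and prompt wording below are assumptions for illustration, not the team's actual configuration.

```typescript
// Illustrative response pathing: the same POI question is answered at
// different depths depending on how engaged the user wants to be.

type Engagement = "quick_fact" | "suggestion" | "conversation";

interface VuiResponsePlan {
  maxSentences: number;
  offerFollowUp: boolean;
  systemPrompt: string;
}

function planResponse(level: Engagement): VuiResponsePlan {
  switch (level) {
    case "quick_fact":
      return {
        maxSentences: 1,
        offerFollowUp: false,
        systemPrompt: "Answer in one short sentence. Do not ask follow-up questions.",
      };
    case "suggestion":
      return {
        maxSentences: 2,
        offerFollowUp: false,
        systemPrompt: "Answer briefly, then proactively suggest one related place or action.",
      };
    case "conversation":
      return {
        maxSentences: 4,
        offerFollowUp: true,
        systemPrompt: "Answer conversationally and invite a follow-up question.",
      };
  }
}

// Example: a user who only wants quick facts gets a one-sentence plan.
console.log(planResponse("quick_fact"));
```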