As the Lead UX Researcher on the vivoGPT project, I led the UX research and contributed to the UX concept and design for a company-wide, AI-powered, text-based chatbot integrated into the company's main communication tool. The chatbot enables employees to interact with OpenAI’s GPT technology directly in their workplace environment to receive context-relevant support and complete tasks efficiently. This was the first time UX research was introduced into the project, which had already launched with minimal user feedback.
When I joined the vivoGPT initiative, the product had already gone live but had never been validated with real users. The design was not grounded in user needs, and there was no understanding of employee behaviors, contexts of use, or expectations from an AI chatbot at work.
My primary challenge—and key contribution—was to create and lead a complete UX research strategy from scratch. This included building a user-centered research plan, conducting generative and evaluative research, prioritizing user needs, and translating findings into UX design concepts.
Later in the project, I also contributed to designing user flows for creating a custom assistant and for browsing and selecting existing assistants tailored for company-wide use.
To guide the development of vivoGPT, I applied a user-centered Design Thinking approach and led the team through an iterative UX process embedded within Agile methodology. My goal was to ensure the continuous integration of user insights into the product roadmap and feature prioritization.
As the Lead UX Researcher (and UX Designer in the later stages), I was responsible for:
Defining the UX research strategy and roadmap
Planning and conducting mixed-method user research
Collaborating cross-functionally with product owners, developers, and internal stakeholders
Synthesizing findings and facilitating workshops to prioritize actionable iterations
Contributing to UX concept and design for new features and user flows
July 2024 - April 2025
Figma, Miro, Confluence, Jira, Microsoft Teams
When I joined the vivoGPT project, one of the first things I noticed was the absence of user input in the product’s development. Although the AI chatbot was already live and accessible to all employees, no research had been conducted to understand how people were actually using it—or if it was solving their problems at all. I knew that to bring real value, we had to take a step back and build a foundation based on genuine user needs and behaviors.
To foster a user-centric development process by validating current usage, identifying user needs, understanding behaviors, and improving usability and value for employees using vivoGPT in their daily work.
To do that, I designed and led a comprehensive UX research strategy from scratch. I started by mapping out two main research phases: generative and evaluative. My goal was to first uncover the “why” behind employee behavior—what they needed, what frustrated them, and how they imagined AI could support their daily tasks—and then validate and test how well our solution aligned with these expectations.
Generative Research
Conducted during the discovery phase to explore user needs, behaviors, attitudes, and expectations.
Methods Used:
Stakeholder Interviews: Gathered early expectations, business goals, and existing assumptions from internal stakeholders to align the research direction with strategic objectives.
User Interviews: 10 structured remote sessions with employees across departments to uncover motivations, tasks, goals, and challenges related to using AI chatbots in their daily work.
Quantitative Survey: Distributed company-wide (N=28) to collect data on usage frequency, task types, frustrations, and feature expectations. Provided a “what” layer to complement the qualitative findings.
User Needs Prioritization Workshop: A cross-functional session with the vivoGPT team to collaboratively identify and prioritize user needs and feature ideas based on research findings.
Evaluative Research
Used during and after feature development to test usability, learnability, and satisfaction with the integrated chatbot.
Methods Used:
Usability Testing: 6 first-time users from different departments participated in remote sessions to test key tasks: onboarding, prompting, file uploads, and group chat use.
Planned/Follow-up:
Ongoing usability tests designed for two groups: first-time users, to assess improvements in onboarding and usability, and existing users, to compare vivoGPT with other LLMs and AI assistants.
A/B testing to compare two different versions of the Assistants tab.
To gain a deeper understanding of our users’ behaviors, attitudes, needs, goals, and pain points when interacting with an AI text-based chatbot, we conducted user interviews and distributed a survey to also collect quantitative data.
Understand behaviors and attitudes of employees using a text-based chatbot
Determine tasks users want to complete using a chatbot
Identify new possibilities where vivo employees could benefit from the usage of generative AI
Document needs, goals and pain points of employees while using a chatbot
As part of sorting and mapping the data to surface key insights, I created an affinity map, which allowed me to sort our data into manageable clusters based on four main themes: Current Use of AI; Pain Points, Frustrations, and Impediments; Wishes and Needs for Future Work; and Integration of AI in the Company’s Environment.
I merged all the insights gathered from both studies and presented them to my team. Both studies yielded similar results. This multi-method approach allowed us to triangulate and complement both data sources, achieving a high degree of validity.
After gathering the insights from both studies, I organized a team workshop where the whole vivoGPT team collaborated to identify and prioritize the user needs and requirements derived from our studies and translate them into features and actionable iterations. We ended up with a matrix of all user needs and requirements ranked by effort and value, which became a roadmap for actionable product iterations. It was in this session that we collectively identified “Assistants” as a high-value concept worth exploring.
In the evaluative phase, I designed and ran a remote usability test with six first-time users. They were asked to complete a series of real-world tasks—like uploading a file for summarization or using the chatbot in a group chat. My goal was to assess not just whether the features functioned, but also whether they were intuitive, trustworthy, and helpful.
The findings revealed several usability issues, as well as areas of confusion around data confidentiality. I synthesized these findings using a rainbow spreadsheet and clustered them into four categories: major errors, minor errors, positive observations, and “interesting to know.” One key insight was that most users were unsure what kind of data was safe to input into the chatbot—a risk for both user confidence and compliance.
This led to immediate design changes. We created a new “Confidentiality” section within the chatbot’s Info tab, revised onboarding communication, and clarified messaging across all touchpoints.
Throughout this entire process, I positioned UX research not as a standalone phase, but as a continuous, strategic practice embedded in the team’s agile workflow. I established a research cadence that continues beyond the initial release:
Regular usability tests with both new and experienced users
A/B testing to compare iterations and validate new directions such as the Assistants tab
Evidence-based decision-making embedded across product and design teams
This end-to-end approach—creating the first UX research plan, conducting multi-method research, and collaborating on actionable design improvements—ensured that vivoGPT evolved from a tool with unclear value into a trusted AI companion tailored to employees' real needs.
Following the research, I helped map both the current and ideal user journeys. This exercise clarified where users experienced friction and guided us in reducing unnecessary steps while lowering cognitive load across both touchpoints.
Once the research insights were prioritized and accepted, I contributed to the design of two core user flows:
Custom Assistant Creation Flow:
A guided process that allows users to define the assistant's purpose, tone, and capabilities based on specific work needs.
Browse & Select Existing Assistant Flow:
A streamlined flow for users to explore and use prebuilt assistants designed for common tasks across the company.
Collaborating with the UI designers, we ensured:
Visual consistency within the communication tool environment
Scalable and accessible UI patterns for future assistant types
Clear information architecture for trust-building and transparency
This project reinforced the critical role of UX research in shaping AI products. Key takeaways include:
The value of introducing UX Research early, even in already-launched products
The importance of building trust and transparency in AI-driven tools
The impact of continuous iteration and validation using both qualitative and quantitative methods
The effectiveness of collaborative workshops to align cross-functional teams around user needs