
My role
Senior Product Designer
Team
Product Manager
Back End Engineer
2 Front End Engineers
About
Dixa is a Conversational Customer Service Platform (B2B/SaaS) that enables companies to interact with their customers via multiple channels, such as telephone, email, and messaging.
Project overview
Dixa AI Co-Pilot is an initiative focused on empowering customer support agents through AI. Our cross-functional team set out to improve agent efficiency, impacting key metrics such as Average Handle Time (AHT), First Contact Resolution (FCR), Net Promoter Score (NPS), and boosting engagement across support interactions.
Using a continuous discovery and delivery approach, we introduced features like conversation summaries, translation tools, and response improvement capabilities. This iterative process allowed us to adapt quickly and refine the experience based on real-world usage.
The project led to a 39% reduction in AHT, along with increasing adoption rates and consistently positive feedback through multiple iterations.
Context
Brief introduction
Due to increasing competitiveness in the emerging AI landscape, Dixa recognized the need to embrace AI technology to maintain its market leadership. As our team gained familiarity with AI, its potential impact on the customer support market became increasingly evident, which elevated the project to a strategic priority aimed at strengthening Dixa's competitive edge. Given its significance, rapid development and implementation of AI tools were essential to enable learning and shape Dixa's AI-driven future.
Design process
Short timeframe, big ambitions
Because AI was a relatively new field for us, we began with a solid foundation of user feedback and insights, which guided our initial development efforts. Given the urgency to capitalize on the AI momentum, we prioritized "safe" but impactful features that could deliver immediate value and feedback.
Our initial hypothesis to give us guidance in this project was:
If we focus on supporting agents in their daily workflows, helping them perform their tasks more effectively, efficiently, and with the aid of automation, we will not only lay a strong foundation for integrating AI into the platform, but also uncover valuable opportunities for future AI-driven enhancements.
First experiment
Our first experiment was summaries. The summarization feature was designed to condense the conversation (ticket), enabling agents to quickly grasp the context. The reasoning for this was:
Technical feasibility of summarization
Chosen as the first experiment due to its ease of implementation and quick testing potential.
Insight from previous research
Earlier studies had highlighted this pain point for agents, which supported choosing summarization as the initial experiment.
User pain point addressed by this experiment
One of the obstacles for CS agents in achieving good AHT metrics was the need to read through entire conversations to understand the context.



Feature implementation and feedback

Building on the good reception of the summarization feature, we applied the same iterative approach to expand the initiative, rebranding it as “Agent Co-Pilot”, a suite of AI-powered tools to support agents in their daily work. The process, grounded in Lean Design principles, revealed strong potential for continuous discovery and delivery. Features like translation, summarization, and text correction were prioritized for their direct impact on handling time, writing quality, and problem resolution, all developed through a fast feedback loop with internal and external stakeholders.
Our process: Designing through iteration

1. Ideation & prototyping
Using Dixa’s design system, I crafted the experience. Through design critiques and iterations, we refined the project, addressing platform limitations like the busy agent interface and small screen optimization. My focus was on ensuring new features didn’t disrupt the user experience.
The design decisions were driven by key principles: ensuring the interface worked seamlessly across different screen sizes, avoiding further clutter on an already overloaded page, and making the new features easy to use and fast to access.
2. Feedback (internal)
User tests were conducted with Dixa’s Support Agents, whose daily interactions provided critical insights that shaped the final solutions. Feedback from designers and PMs also played a key role in refining the product.
3. Go live
An initial group of 4-5 customers received early Beta access to help monitor performance and provide direct feedback. This number grew with each iteration.
4. Feedback (live)
Multiple feedback channels were established to maintain a continuous dialogue with team leads and agents. Since direct interviews were challenging, I also prioritized quick surveys to efficiently gather key insights, both quantitative and qualitative.

Solutions and experiments
Design, iterate, improve & repeat
These are a few examples of the designs created, all powered by LLMs. Below, I showcase the initial versions alongside key improvements made throughout the process. Each iteration, whether large or small, was tested and refined based on the earlier steps, ultimately shaping the ideal user experience and serving as a foundation for other features.
Summaries
Once in use, some agents found the summaries too long or hard to scan, and expressed a preference for more focused, actionable content. In response, we kicked off a new round of iteration based on their feedback.
We restructured the output to highlight only the key elements:
Contact reason
Provided solution
Final resolution, if applicable
This shift toward clarity and conciseness made summaries easier to trust and act on, and was reinforced by positive qualitative feedback from agents across multiple teams.
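For illustration, here is a minimal sketch of how a summary prompt could be structured around those three elements. The function names and the call_llm placeholder are assumptions made for the sketch, not Dixa's actual implementation.

```python
# Hypothetical sketch: a summary prompt built around the three elements
# agents asked for. "call_llm" stands in for whatever LLM endpoint the
# backend uses; it simply takes a prompt string and returns text.

def build_summary_prompt(conversation_text: str) -> str:
    return (
        "Summarize this customer service conversation in at most three short bullets:\n"
        "1. Contact reason\n"
        "2. Provided solution\n"
        "3. Final resolution (only if the issue was resolved)\n"
        "Keep each bullet to one sentence.\n\n"
        f"Conversation:\n{conversation_text}"
    )

def summarize(conversation_text: str, call_llm) -> str:
    # Any function that sends a prompt to an LLM and returns a string works here.
    return call_llm(build_summary_prompt(conversation_text))
```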

Translations
This feature proved to be the most impactful and valuable for our customers, as demonstrated by both qualitative and quantitative metrics. It quickly became a central focus of our efforts moving forward.


Feedback from Agents:
Translation is effective, often better than Google Translate.
Occasionally, it’s unclear which language the user is using.
Translating an entire conversation can be frustrating due to repeatedly opening the modal.
New experiments based on feedback
1st experiment: Introduced a "Quick translate to conversation's language" button to auto-translate replies into the conversation's language; adoption increased by 70% compared to the previous version.
2nd experiment: Moved the button outside the menu for easier access, which led to higher adoption than the previous version (62%).
3rd experiment: Allowed agents to pin their preferred tools in the reply area, increasing customization and efficiency.
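As a rough illustration of how the quick-translate idea fits together, the sketch below detects the conversation's language once and then translates the agent's draft into it without reopening a modal. Every name here, including the call_llm placeholder, is hypothetical rather than a description of Dixa's codebase.

```python
# Hypothetical sketch: detect the conversation's language from recent
# customer messages, then translate the agent's draft into that language.
# "call_llm" is a placeholder for any prompt-in, text-out LLM call.

def detect_language(customer_messages: list[str], call_llm) -> str:
    sample = "\n".join(customer_messages[-3:])  # last few customer messages
    return call_llm(
        "Return only the ISO 639-1 code of the language used in these messages:\n"
        f"{sample}"
    ).strip()

def quick_translate(agent_draft: str, target_language: str, call_llm) -> str:
    return call_llm(
        f"Translate the following reply into {target_language}, "
        "preserving tone, formatting, and any placeholders:\n\n"
        f"{agent_draft}"
    )
```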

Improve Reply
This option was designed to help agents refine their responses or adjust the tone of conversations. It was generally well received, but through feedback, two areas for improvement were identified.

Feedback from Agents:
• Easy to use. Adoption of the Formal and Friendly tones was the highest, totaling more than 60%.
• Sometimes the tone felt too rigid: too formal or too friendly, with no freedom to adapt.
• Even after experimenting with tones, some companies found it difficult to reach the right tone because of their specific brand voice.
New experiment: An open field where agents could ask the AI for specific changes.
Feedback: Despite the open field option, results were underwhelming, as many agents lacked the skills to craft effective prompts.
New experiment: Writing style guide
We introduced a writing style guide feature that allowed customer support leaders to set pre-defined prompts within the “Improve Reply” tool.
Our hypothesis: by enabling CS leaders to define tone and style in advance, agents would be able to refine their responses more quickly and consistently, while staying aligned with the brand’s voice.
This gave teams greater control over communication quality and ensured replies reflected their tone of voice.
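To make the hypothesis concrete, here is a minimal sketch of how a leader-defined style guide could be prepended to the "Improve Reply" prompt, so agents get on-brand rewrites without writing prompts themselves. The example style text and the call_llm placeholder are illustrative assumptions, not the shipped implementation.

```python
# Hypothetical sketch: CS leaders define a writing style guide once; the
# "Improve Reply" action injects it into the rewrite prompt for every agent.
# "call_llm" is a placeholder for any prompt-in, text-out LLM call.

def improve_reply(agent_draft: str, style_guide: str, call_llm) -> str:
    prompt = (
        "Rewrite the reply below so it follows this writing style guide:\n"
        f"{style_guide}\n\n"
        "Keep the meaning and all factual details unchanged.\n\n"
        f"Reply:\n{agent_draft}"
    )
    return call_llm(prompt)

# Example of a leader-defined guide (illustrative only):
brand_style = "Warm but concise. Address the customer by first name. Avoid exclamation marks."
```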
Feedback:
CS leaders responded positively and began adopting the feature. While it showed strong potential, there was still room to improve prompt clarity and effectiveness, making it an ongoing work in progress.

Continuous discovery
As the project matured, we kept up our continuous discovery and delivery efforts while deepening our exploration of AI opportunities with customers. Through interviews and close collaboration with stakeholders, including customers, team leads, and agents, we identified pain points in daily workflows, gathered AI-related needs, and built a focused, value-driven backlog. Some key opportunities that emerged included more automated responses and stronger integration with the knowledge base. We shared potential solutions and prioritized them collaboratively, ensuring alignment with real user needs and expectations.
Growth
Space to grow
While the AI Copilot added value to Dixa’s platform, it faced challenges in adoption and engagement. Despite broad access, agents' usage of certain features varied across companies, revealing areas for improvement.
Key observations included:
Lack of structured onboarding: Many companies enabled the Copilot without a clear introduction for agents, leading to inconsistent usage.
Skepticism towards AI: Some agents were hesitant to rely on AI due to trust issues, partly from occasional AI hallucinations.
Low feature awareness: Agents were often unaware of the full range of Copilot's capabilities.
We also identified a gap in our strategy for promoting these features, as the Copilot’s growth depended primarily on sales presentations rather than organic adoption.
Initiatives
To boost engagement with the AI Copilot, we hypothesized that offering more ways to communicate the feature to our customers would be key.
Try Before You Buy: We offered a 30-day trial, allowing agents to explore the Copilot and learn its benefits. This also helped generate leads for the sales team.
Targeted communication: I proposed modals and emails for leaders and agents, highlighting Copilot’s advantages, encouraging engagement, and offering usage tips.
Onboarding: Recognizing the need for better onboarding, I advocated for in-app guidance to support users. We explored third-party tools to streamline this without requiring human interaction or long development.


Impact
Feedback and results
The Co-Pilot project has become a vital add-on for Dixa, making a significant contribution to new contracts and partnerships. While certain features are more popular with customers, the value of Co-Pilot as a comprehensive set of tools is clear. Though there isn’t a one-size-fits-all metric, as each customer has unique needs and perceptions of value, several have shared significant benefits through feedback.
Optimized cross-language support
The translations feature boosted agent agility, reducing the need for multilingual hires and improving cross-language support.
39% reduction in AHT
By enhancing agent agility, the feature reduced Average Handle Time by up to 39%, with some companies seeing a drop from 5:45 to 3:30 minutes.
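(A drop from 5:45 to 3:30 is a reduction from 345 to 210 seconds, i.e. 135/345 ≈ 39%.)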
Organic sales & Co-pilot awareness
After launching the “Try Before You Buy” page, Co-Pilot licenses sold more organically, showing potential for further growth.