AI for impact.com
Building Trust using AI Agents
How did we start?
At Affluent, innovation was always part of our DNA. When AI began emerging as a real opportunity, we wanted to be at the forefront, ahead of our competitors.
When I began working on this project, I wanted to make sure I designed with behavior in mind. I wanted to know how people respond to AI, what tones feel reliable, and what interactions actually build trust. That shift from “FAQ bot” to “assistant you can count on” is what made adoption stick.
Summer was peak season at Affluent, making it essential to alleviate the customer success team's workload. Tackling repetitive "how-to" and "where" inquiries enabled faster resolutions during high-demand periods.
While building the in-app AI agent, I also designed a Slack-based copilot for account and agency managers. Research showed Slack was the most trusted daily workspace, so bringing the assistant into that environment made interactions feel seamless and credible. This companion experience reduced friction and positioned the AI not just as a help tool, but as an embedded partner in the workflow.
In the discovery phase of this project, I combined stakeholder interviews, audits, and user interviews to uncover not only what information users needed, but also how they expected AI to communicate. This set the foundation for Alfie's tone, flows, and voice, shaping the conversational UX (CUX).

From this research, a few truths stood out.
Users trusted our AI agent more when it explained why results looked the way they did, not just what the numbers were. They wanted a consistent, neutral tone that felt professional but approachable. And when Alfie didn't have data, honesty mattered: fallback answers that acknowledged gaps while offering alternatives built more confidence than evasive responses.

Before
Chat provides raw numbers but lacks tone and anticipation. Users receive data, but no help interpreting what it means or what to do next.

After
Applies research on tone, behavior, and predictiveness: explains the why behind results, highlights trends, and offers clear next steps (report or export). This shifts the experience from functional to trustworthy and actionable.
Technical Findings & Testing
Through these conversations and testing, I gained a high-level overview of what Alfie could and couldn’t do in V1. For example:
Limitations: Alfie could display campaign tables but couldn’t analyze or explain the data without added structure. It also could not schedule reports to go out.
Bugs: At times, responses would echo the user’s request instead of providing an answer.
Discrepancies: Interchangeable terms like “brand” vs. “client” caused data mismatches.
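One common mitigation for the "brand" vs. "client" mismatch is normalizing interchangeable terms to a single canonical name before they reach the data layer. A minimal sketch of that idea (the synonym list and function names here are hypothetical illustrations, not Alfie's actual implementation):

```python
# Map interchangeable user-facing terms to one canonical entity name
# so equivalent queries resolve to the same data. (Hypothetical sketch.)
SYNONYMS = {
    "client": "brand",
    "advertiser": "brand",
    "brand": "brand",
}

def normalize_entity(term: str) -> str:
    """Return the canonical name for a user-supplied entity term."""
    cleaned = term.strip().lower()
    return SYNONYMS.get(cleaned, cleaned)

print(normalize_entity("Client"))  # -> brand
```

Centralizing the mapping means a new synonym only has to be added in one place, rather than patched in every query path.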
Handling Edge Cases
Understanding these constraints helped me design with more empathy for both users and the system. Instead of overpromising, I focused on creating clear fallback flows, a consistent advisory tone, and conversation templates that kept users informed — even when the data wasn’t available.
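The fallback pattern described above can be sketched as a simple template: acknowledge the gap honestly, then offer actionable alternatives instead of a dead end. (A hypothetical illustration of the conversation template, not Alfie's production logic.)

```python
def fallback_response(metric: str, alternatives: list[str]) -> str:
    """Honest fallback: acknowledge missing data, then offer next steps."""
    options = "\n".join(f"- {a}" for a in alternatives)
    return (
        f"I don't have {metric} data for that period yet.\n"
        f"Here's what I can do instead:\n{options}"
    )

print(fallback_response(
    "conversion",
    ["Show the most recent week with data", "Export the raw campaign table"],
))
```

Keeping the acknowledgment and the alternatives in one template is what turns a dead end into a guided next step.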

Before
Dead end, no guidance, user feels frustrated and loses trust.

After
Applies Transparency Bias (users trust honesty) and Choice Architecture (gives actionable alternatives instead of a dead end).
Wireframes
15%
Increase in User
Acquisition
AI Copilot (Preview)
Meet Users Where They're At
Feedback from clients emphasized the importance of integrating tools into existing workflows, like Slack, to drive adoption and engagement.
Speak Their Language
Creating friendly and approachable chatbot responses wasn’t just about improving the user experience—it directly supported key business objectives like boosting platform adoption and reducing the workload on customer success teams.
Feedback at the Forefront
Implementing a feedback system with thumbs-up and thumbs-down options allowed us to continuously refine the chatbot based on real user input.
