The decision to buy AI usually happens well above the people who will use it. A CEO says "explore AI," a head of CX evaluates vendors and signs the contract, and the front-line team wakes up with a new tool in their workspace.
This isn't a criticism; it's just how organizations work. But it creates a gap between the people who buy AI and the people who rely on it every day. That gap is where implementations go wrong.
Without readiness, adoption fails in predictable ways
When a front-line team inherits a tool they didn't choose, they rarely have the full context. They don't know why this vendor, why now, or what problem it's supposed to solve. Without that clarity, people fill in the blanks themselves, usually with fear: is this here to replace me?
For AI implementation teams, fear breeds resistance, sometimes as active pushback, sometimes as passive foot-dragging. Without training, no one knows how to use the tool, so it sits there and adoption stalls. Without trust, your team will override the AI or ignore it entirely, which defeats the purpose of investing in it. And without buy-in, they won't flag problems when the AI fails customers. Quality declines, customers feel it, and by the time the numbers reflect the sentiment, the damage to your brand is already done.
Klarna is an extreme version of this. They eliminated their 800-person support team to adopt AI, but someone still had to manage the AI, and the AI still had to handle real customer problems. Neither was ready, so service quality declined, and customers noticed. Now Klarna has hired humans back and is focused on making them ready to work with AI.
What people readiness actually looks like throughout implementation
People readiness isn't a training session you schedule before launch and check off. It happens in phases that map to the implementation timeline and goals, from before you buy through launch and into ongoing operations. Skip a phase and you create the problems described above: fear, resistance, low adoption, and no feedback loop.
Phase 1: Before you buy
Front-line teams should help scope what problems to automate. They're the ones who know which questions come up constantly and have clear answers, and which ones require judgment, empathy, or context a bot shouldn't handle, or where a human should stay in the loop.
For example, budgeting software company YNAB involved its support team when evaluating Forethought. The team tested two AI platforms on complex tickets with vague wording and unfamiliar language, and the people who see those tickets every day assessed which tool handled them better.
Success metrics also need to be aligned before you buy, not after. An executive sponsor probably cares about deflection and cost savings. The implementation manager cares about CSAT, resolution quality, and team capacity. These aren't always in conflict, but when they are, you want to know before launch, not when someone's being held accountable for a number they didn't agree to.
Even if front-line staff can't be in the room during vendor selection, they should have a voice in the problems the AI is solving. Phil Lynch, who leads AI programs at ActiveCampaign, a marketing automation company, puts it simply: "If you're starting with Forethought, I would advise first talking to the teams who will benefit most from its implementation, usually your customer-facing teams. They know the real pain points customers experience day to day."
Phase 2: At launch
People fear what they don't understand, and they assume the worst when no one explains what’s changing. Launch is where you prevent that fear through transparency: what's changing, what's not, and what this means for their day-to-day work.
Be honest. Yes, some roles may evolve, but new roles also emerge. The people who've spent years as lead agents or subject-matter experts often know better than anyone how a bot should interact with customers, which makes them ideal candidates for managing AI. But they need to know that path exists, and they need training to walk it.
Phase 3: Ongoing
Implementations need ongoing help from tech and product teams to fine-tune integrations. If you operate on a "sign and dash" approach, support teams feel like the tool was tossed over the fence. Consistent partnership throughout optimization is one of the biggest drivers of long-term buy-in.
Airtable's support team was skeptical of Forethought at first, particularly regarding hallucinations and potential job loss. The shift came when they realized they could see hallucinations directly in the tool, trace them back to conflicting or incomplete content in the knowledge base, and actually fix them. Once they saw they could control the outcomes, the attitude changed. As one manager put it, "the attitude became 'let's go crazy with autoflows.'" Existing team members were upskilled to focus on supporting automation initiatives, and skeptics became advocates.
Your team's buy-in makes your AI more effective
When people are bought in, they're enthusiastic about AI helping them do their job better. They actively want to improve the solution you implement because better AI means easier work for them and better experiences for customers. They flag issues with confidence that their feedback will make the system smarter.
Without people readiness, you don't get that enthusiasm. You get compliance at best. The feedback loop never starts, problems compound in the background, and you end up wondering why your AI investment isn't delivering what you expected.