What Leaders Said About AI and Culture at Top Workplaces

April 28, 2026

The USA Today Top Workplaces conference in April brought together some sharp CHROs, CEOs, and talent leaders. The unofficial theme? AI is remaking how we hire, onboard, and manage. But it's also unsettling people.

Not the leaders. The people inside their companies.

The tension was palpable. Everyone in the room got that AI is a productivity lever. The payoff is real: faster hiring, smarter matching, fewer bad hires. The risk is equally real: if people don't trust how you're using it, your best talent walks, and your culture can take a hit. That's IF you don't manage the process with empathy, intention, and, well, true leadership.

This is where it gets interesting. The leaders we heard from weren't dodging the trade-off. They were naming it directly.

Employees aren't afraid of AI—they're afraid of secret AI. Transparency is what turns a productivity tool into a trust-building one.

Three Themes That Kept Coming Up

1. The AI-Human Boundary Gets Redrawn Every Week

One CHRO on the panel said something that stuck: "AI flags candidates. Humans decide." But that's not quite how it works in practice, is it?

What we heard from multiple leaders was that the boundary between AI recommendation and human choice is blurrier than it sounds. When an AI system screens 500 candidates down to 15, you're already making a decision. The AI isn't just filtering noise. It's shaping your candidate pool in ways that are hard to reverse or even see.

One CEO talked about transparency the way a security person talks about a breach: as the thing you have to get ahead of, or it'll explode on you later. He'd made a call to publish, on the careers page, exactly how AI was being used in recruiting. Not defensively. Just clearly. "We use AI to help screen for skills and cultural fit. Here's what that means. Here's what it doesn't."

The reason? Candidates ask. And if they're asking, your future employees are wondering. The ones who stay should know how they got hired.

2. The Manager's Job is Fundamentally Different Now

This one came up in almost every breakout. The manager role isn't shrinking. It's transforming.

Historically, managers oversaw work assignment, performance review, and promotion decisions. AI is automating chunks of those responsibilities.

What isn't being automated? The stuff that matters most: building trust, spotting when someone's struggling, knowing when a policy is killing morale, defending your team in a budget meeting, making a risky hire on potential.

One head of talent said managers are now "translation layers." They take AI insights (this person scores high on onboarding readiness, that team's retention is drifting) and they convert that into action. Not because the AI is wrong. Because the AI sees patterns in data. Managers see people.

But here's the catch: most managers weren't trained for this. They're trained to do the old job. The companies winning right now are the ones running hard at re-skilling their people leaders. Teaching them how to read an AI output. When to trust it. When to override it. How to explain it to their team without sounding like a robot or a villain.

3. Transparency Is a Retention Tool, Not a PR Stunt

The leaders who leaned hardest into explaining AI to their teams weren't doing it for ESG points.

They were doing it because it showed up in their turnover numbers.

A CHRO from a mid-market tech company walked through their onboarding process. They'd integrated AI into day-one orientation. Not to spy. To help. "Here's why we're using AI. Here's what it can't do. Here's how you can push back on it."

The outcome? New hires felt less anxious about the tool, faster. They understood it wasn't replacing them. And the ones who couldn't get comfortable with it? They self-selected out earlier, which is a better outcome than hiring someone who resents your tech stack.

The message was consistent across panels: employees aren't afraid of AI. They're afraid of secret AI. Opacity creates paranoia. Clarity creates buy-in. Recent research from Gallup and Bentley University confirms this: transparency around AI use is a meaningful factor in whether employees trust their employer's technology strategy.

Culture isn't AI-proof anymore—it's AI-dependent. The companies treating them as separate strategies are already paying for it.

What This Means for Your AI-and-People Strategy

If you're still deciding how to roll out AI in talent acquisition or people operations, the leaders at Top Workplaces gave you a roadmap.

Start with the boundary question. What is your AI system doing? What's it not doing? Where does human judgment take over? Write it down. Be specific. "We use AI to identify skills gaps" is too vague. "We screen resumes for technical keywords and years of relevant experience" is operational.

Invest in your managers. Whatever budget you'd allocate to AI tools, allocate a matching chunk to manager training. They're the ones who operationalize this stuff. They're also the ones who'll lose your best people if they can't explain why AI just flagged an employee for a PIP. Gallup's 2026 State of the Global Workplace report found that employees are 8.7x more likely to view work as transformed by AI when their managers actively support its use.

Be transparent early. Not just with new hires. With current employees. If you're using AI to predict attrition, make a choice: run it silently and hope it works, or let your managers know so they can do something about it. The leaders at the conference chose the second path. It's harder. It also works.

Culture isn't AI-proof. It's AI-dependent now. The companies that treated their culture as separate from their AI strategy are regretting it. Culture scales with AI when you've baked transparency, manager judgment, and human connection into the process. It corrodes when you haven't.

The Question That Cut Through Everything

One moment stood out. A CEO asked the panel: "How do you know if you're using AI to enable your culture, or just automate it away?"

No one had a canned answer. That's the right kind of question to be asking right now.

The answer probably isn't in a single policy or tool. It's in what you do when you find out that AI just made a decision that feels wrong. Do you question it, or do you accept it because it came from the algorithm? Do you ask your team what they think, or do you announce the output and move on?

The leaders at Top Workplaces were getting that distinction. The culture winners are the ones asking that question regularly.

If you're navigating this tension between AI adoption and culture, IQTalent helps talent leaders build AI-informed recruiting processes that keep the human piece intact. Reach out and let's talk through your approach.

FAQ: AI and Culture at Scale

Q: How are CHROs using AI in recruiting without creating bias?

A: The leaders we heard from aren't relying on AI to eliminate bias. They're using it to flag bias they can then address. That means testing your system before launch, auditing outcomes quarterly, and treating AI as a tool to inform human judgment, not replace it. One CHRO runs a "bias audit" every quarter, comparing hiring outcomes against applicant pools to spot patterns the AI might've introduced.
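The CHRO didn't share the exact mechanics, but a quarterly audit like that often boils down to comparing selection rates across applicant groups. Here's a minimal sketch in Python, assuming you can export applicant and hire counts per group; the group labels, numbers, and the "four-fifths" threshold are illustrative choices, not anything the panel prescribed.

```python
# Minimal sketch of a quarterly "bias audit": compare selection rates across
# applicant groups and flag any group whose rate falls below 80% of the highest
# (the common "four-fifths" rule of thumb). Counts here are hypothetical;
# plug in whatever dimensions you actually track.

applicants = {"Group A": 412, "Group B": 298, "Group C": 157}   # screened-in pool
hires      = {"Group A": 37,  "Group B": 25,  "Group C": 6}     # offers accepted

# Selection rate = hires / applicants for each group
rates = {g: hires[g] / applicants[g] for g in applicants}
benchmark = max(rates.values())

print(f"{'Group':<10}{'Applicants':>12}{'Hires':>8}{'Rate':>8}{'Impact ratio':>14}")
for group, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    impact_ratio = rate / benchmark
    flag = "  <-- review" if impact_ratio < 0.80 else ""
    print(f"{group:<10}{applicants[group]:>12}{hires[group]:>8}{rate:>8.2%}{impact_ratio:>14.2f}{flag}")
```

A flagged group isn't proof of bias on its own; it's the prompt for the human review the panelists kept coming back to.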

Q: Should companies tell employees they're using AI in hiring?

A: The consensus at Top Workplaces was a clear yes. Employees find out anyway. Better to explain it upfront than have them discover it and feel misled. Transparency also reduces anxiety. People are less afraid of a tool they understand than one they suspect is being used behind their backs.

Q: What's the biggest mistake companies make with AI and culture?

A: Implementing AI without involving managers in the rollout. Managers are the ones who have to interpret outputs, explain decisions to their teams, and catch when the AI is making mistakes. If you skip manager buy-in, you end up with a system that creates friction instead of reducing it.

Q: How do you train managers to work effectively with AI systems?

A: Most companies are doing this on the fly. The better approach: start with a clear playbook showing when to use AI outputs, when to override them, and how to explain the system to your team. Then run managers through scenarios. "An AI system flagged a high performer for attrition risk. What do you do?" That kind of thing.

Q: Can AI help protect company culture during growth?

A: Yes, but only as a diagnostic tool. AI can show you early warning signs of culture drift (turnover patterns, engagement metrics, hiring mismatches). But fixing culture requires human leadership. Use AI to spot the problem. Use managers and leadership to solve it.
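The tooling leaders described is more sophisticated than this, but the underlying diagnostic can be as simple as comparing a team's recent turnover against its own trailing baseline. A toy sketch with made-up team names, counts, and a 1.5x threshold, just to show the shape of the check:

```python
# Minimal sketch of a "culture drift" early-warning check: flag teams whose
# latest quarterly voluntary turnover runs well above their own trailing baseline.
# All names, counts, and the 1.5x threshold are hypothetical.

exits_by_team = {            # quarterly voluntary exits, oldest quarter first
    "Platform": [2, 1, 2, 6],
    "Support":  [3, 3, 4, 4],
    "Sales":    [1, 2, 1, 1],
}
headcount = {"Platform": 40, "Support": 55, "Sales": 30}

for team, exits in exits_by_team.items():
    baseline = sum(exits[:-1]) / len(exits[:-1]) / headcount[team]  # avg of prior quarters
    latest = exits[-1] / headcount[team]
    note = " -- worth a conversation" if baseline > 0 and latest > 1.5 * baseline else ""
    print(f"{team}: turnover {latest:.1%} vs. baseline {baseline:.1%}{note}")
```

The flag is the start of the work, not the end of it: the fixing still belongs to managers and leadership.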