AI Didn't Break Hiring. It Just Showed Us What Was Already Broken
I've been thinking a lot about what's actually happening right now in hiring, and I keep coming back to a conversation I had with a CHRO last month who told me something I can't shake: "We're laying people off and freezing hiring at the same time we're saying we can't find talent. I know how that sounds. But I also can't explain to my CEO how to fix it."
That's not a confession of incompetence. That's someone naming a systems failure out loud.
Over the past year, I've watched this pattern repeat itself: companies simultaneously shrinking their early-career pipelines while complaining they can't find the right people; HR teams under pressure to move faster and reduce risk while the signals they've relied on for decades get weaker by the month; hiring decisions that feel increasingly improvised even as the process gets longer and more elaborate.
None of this is accidental, and honestly, none of it is primarily about AI.
AI didn't break hiring; it exposed how fragile it already was
For years, hiring depended on a shared set of assumptions that were always a bit shaky but felt workable. Résumés were imperfect, but they differentiated. Interviews were subjective, but they felt informative. Credentials stood in as proxies for capability because most of the time they were "good enough," and when they weren't, well, that was just the cost of doing business.
That era is over, and I think we all know it even if we're not ready to say it yet.
AI didn't just polish résumés—it erased differentiation. It didn't just help candidates prepare for interviews—it amplified performance in ways that are genuinely hard to distinguish from real capability. The result is not more deception, though there's plenty of that. The result is less clarity. Everyone looks qualified. Confidence goes down, not up. And when that happens, organizations don't redesign their systems. They retreat. They shrink cohorts, delay decisions, rely more heavily on known networks and contingent labor, add more interviews and more screens and more process, mistaking repetition for rigor.
HR gets left in an impossible position: accountable for outcomes but forced to rely on signals they don't fully trust.
Where hiring actually breaks
I've been thinking about where hiring actually breaks, and here's what I keep seeing: most failures are not decided in the interview room. They're decided upstream, before anyone talks to a candidate. Roles open under pressure instead of clarity. Job descriptions get recycled instead of redefined. Intake conversations focus on speed instead of alignment. Success criteria remain implicit instead of explicit. By the time candidates arrive, evaluation is already improvisation. People are assessed against moving targets. Someone "feels right." Someone "seems like a fit." When a decision starts to feel risky, the risk has usually been engineered weeks earlier.
Adding automation or speed doesn't fix that; it often amplifies it.
Skills-based hiring without infrastructure is just a belief statement
And then there's skills-based hiring: nearly every organization now claims to support it, but very few can explain how it actually works in practice. What skills matter for this role? Demonstrated how? Evaluated by whom? Against what criteria? In what context? Without clear answers, skills language becomes a belief statement instead of a decision system. It sounds progressive, but it doesn't change outcomes. In high-stakes moments, leaders revert to what feels safer even when they know it's flawed, and I understand why: the alternative feels like stepping into empty space without knowing if there's ground underneath.
This gap between intention and execution is one of the defining tensions in hiring right now, and it's what keeps me up at night because it's affecting real people making real decisions about real lives.
What actually rebuilds confidence
In the midst of all this uncertainty, though, one pattern shows up consistently, and it's actually hopeful: when a hiring decision feels genuinely risky, people don't trust process. They trust evidence. Not credentials. Not polish. Not how many interviews were conducted. They want to see work—real work, produced under realistic constraints, something observable, something that can be discussed and questioned and interpreted against shared expectations.
But here's the thing: raw work alone is not enough, and this is where most skills-based hiring efforts fall apart. Without structure, without defined criteria, without context for interpretation, work samples simply introduce a new form of subjectivity. One reviewer sees "excellent," another sees "acceptable," and no one can explain the difference because the standards were never articulated. The difference is not whether work exists. The difference is whether it can be evaluated consistently and responsibly, and that requires infrastructure most organizations don't have.
Judgment is the capability we're failing to see
I've also been thinking about judgment, which is the capability we are systematically failing to see. As AI takes over more execution, differentiation is shifting: the advantage no longer comes from producing outputs faster; it comes from framing problems well, making tradeoffs, responding to feedback, and deciding under uncertainty. That capability has a name. It's judgment. And judgment does not show up cleanly on résumés. It rarely surfaces in short interviews. It reveals itself through sustained work over time, under real constraints.
Most hiring systems are not designed to surface judgment. They're designed to infer it. That inference is no longer reliable.
Why early-career hiring is freezing
This is most visible in early-career hiring, which is where I started this work. I have two kids who will enter the workforce in a world where their capabilities matter less than their ability to prove them, and that feels wrong to me in ways I can't fully separate from the professional analysis. Organizations are not pulling back on junior talent because it lacks value; they're pulling back because leaders don't trust how those roles are being evaluated in an AI-assisted world. Rather than redefining expectations or redesigning evaluation systems, hiring gets paused, delayed, or outsourced. In the short term that feels rational. In the long term it is genuinely costly: when early pipelines stall, capability gaps don't disappear; they surface later, under more pressure, at higher cost, with fewer options.
Trust has to be designed now
So here's what I think is actually happening: trust can no longer be treated as an assumption that emerges from process. It has to be designed into the system. That means defining what good work looks like before hiring begins, creating opportunities for candidates to demonstrate how they think and decide, evaluating that work against clear, role-aligned criteria, and applying human judgment consistently, with evidence to support it. Automation can support those systems, absolutely. But it cannot replace them. Speed alone does not rebuild trust. Transparency does. Evidence does. Clear expectations do.
The companies that struggle most over the next few years will not be the ones without access to talent—they'll be the ones making high-stakes decisions with low-confidence signals. The companies that adapt will be the ones willing to confront an uncomfortable truth: many hiring decisions have been built on inference, not evidence, for a long time, and that worked well enough when the proxies held up but falls apart when they don't.
What comes next
Rebuilding trust in hiring is not about adding more tools, implementing another platform, or bringing in another assessment vendor. It's about redesigning how decisions are made from the ground up. That is harder and slower and less exciting than most people want to hear, but it's also the work that actually matters.
At Work Ready Partners, we've spent the past few months documenting what skill verification actually means in practice, where common hiring signals break down, and how real work can be evaluated before a hire is made. Not as theory, but as an operational response to the trust gap organizations are now living with, whether they're ready to name it that way or not. If this is something you'd like to explore, our new playbook defines exactly what skill verification is and how you might implement it in your organization.
2026 will not reward louder claims about innovation or AI transformation. It will reward clearer evidence, and I think that shift is already starting whether we're ready for it or not.