AI in HR: Practical Applications, Common Mistakes, and How to Get Started in 2026

Written by Ridge Carpenter | May 15, 2026

Most HR organizations have heard the pitch for AI by now, but surprisingly few have seen it pay off. According to SHRM's 2026 State of AI in HR report, more than half of organizations have not adopted AI in their HR functions and have no plans to do so this year. Among those that have, usage clusters around a handful of tasks, with much of the technology's broader value still untapped.

Because Kelly works across both talent strategy and workforce operations, we see AI from multiple angles: how employers want to use it, how workers experience it, and where the operational realities complicate its promise.

As AI product manager at Kelly, I oversee how the organization adopts AI, from the tools we build to the governance that surrounds them. A major part of that job is deciding what we say no to. In my experience, AI delivers results when it's applied to a specific kind of problem: areas where the variety or volume of inputs has outpaced an individual's ability to manage them. Applied to the right problems with the right oversight, it frees HR teams to spend more of their time on the work that requires a human mind.

Let’s take a closer look at where that's happening now, what goes wrong when organizations get it backward, and how to build the governance and readiness foundations that make adoption successful.

Where AI in HR is delivering results

The volume problem looks different depending on the HR function. AI in recruiting looks like managing hundreds of applicants against dozens of role attributes. In workforce analytics, it means synthesizing signals that no single person could hold in view at once. In employee development, it shows up as the conversion of raw, unstructured information into something actionable. What connects all three is the same pattern: given the right problem and the right oversight, AI tools give people back the capacity to exercise human judgment amid competing signals and demands.

Recruiting and candidate evaluation

Recruiting is where most HR organizations are putting AI to work first, by a wide margin: 27% of HR professionals say their organization uses AI in recruiting.

At Kelly, we use AI to standardize resume and job description formats so that all required components are in place early in the process. For example, every job description needs to include a salary range for compliant posting; an AI tool can check for those details and flag gaps before they become problems. Beyond formatting, AI handles candidate summarization at scale, evaluating large applicant pools against multiple role attributes simultaneously.
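
To make that concrete, here's a minimal sketch of what a completeness check like this could look like. The required sections and the salary-range pattern are illustrative assumptions, not Kelly's production rules; a real check would likely pair rules like these with an LLM pass for subtler gaps.

```python
import re

# Rule-based completeness check for job descriptions. The required
# sections and salary pattern below are illustrative only.
REQUIRED_SECTIONS = ["responsibilities", "qualifications", "location"]

# Matches ranges like "$65,000 - $80,000" or "$65k to $80k".
SALARY_RANGE = re.compile(
    r"\$\s?\d[\d,]*k?\s*(?:-|–|to)\s*\$?\s?\d[\d,]*k?", re.IGNORECASE
)

def flag_gaps(job_description: str) -> list[str]:
    """Return the components that must be fixed before posting."""
    text = job_description.lower()
    gaps = [s for s in REQUIRED_SECTIONS if s not in text]
    if not SALARY_RANGE.search(job_description):
        gaps.append("salary range (required for compliant posting)")
    return gaps

print(flag_gaps("Responsibilities: ... Qualifications: ... Location: Troy, MI"))
# -> ['salary range (required for compliant posting)']
```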

Scaled outreach is another area gaining traction. AI interviewing, AI-assisted coaching, and skills-to-role matching are changing how we connect candidates to opportunities and how quickly we can do so at scale. These applications rely on different types of AI. Generative tools handle language-heavy tasks like transcript summarization and outreach drafting, while skills-to-role matching relies on machine learning models that map how competencies relate to one another for a more specific picture of fit.
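
To illustrate the distinction, here's a toy sketch of the machine-learning side. Production matching systems use learned skill embeddings; the one-hot vectors and tiny skill vocabulary below are stand-ins for illustration.

```python
import numpy as np

# Toy skills-to-role matching: candidates and roles become vectors over a
# shared skill vocabulary, and fit is cosine similarity between them.
SKILLS = ["python", "sql", "recruiting", "negotiation", "payroll"]

def skill_vector(skills: set[str]) -> np.ndarray:
    return np.array([1.0 if s in skills else 0.0 for s in SKILLS])

def fit_score(candidate: set[str], role: set[str]) -> float:
    a, b = skill_vector(candidate), skill_vector(role)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

role = {"python", "sql"}
candidates = {"ana": {"python", "sql", "payroll"}, "raj": {"negotiation"}}
print(sorted(candidates, key=lambda c: fit_score(candidates[c], role), reverse=True))
# -> ['ana', 'raj']
```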

The capacity to screen at volume matters more now because the volume of applications is accelerating. According to Kelly's (Dis)honest Job Search survey, 79% of job seekers now use AI tools in their applications. When candidates are using AI to apply at scale, organizations need AI-assisted screening to keep pace. The alternative is to ask recruiters to spend their time on volume management rather than the work that actually requires them: evaluating fit, building candidate relationships, and guiding hiring decisions.

Workforce analytics and planning

Workforce analytics is an area where deterministic AI tools tend to be the stronger fit, simply because of the sheer number of signals involved. In this context, that means machine learning models, not generative chatbots.

When a client needs a hire for a specific role, the analysis has to account for:

  • Geographic factors, including where the role is located, where qualified talent is concentrated, and how those two maps overlap.
  • Industry and specialization factors, such as the vertical the role sits in, how niche the required skill set is, and whether adjacent industries are competing for the same candidates.
  • Temporal factors, like the client's hiring timeline, how urgently the seat needs to be filled, and what level of experience that urgency allows.

Those factors balance differently for every hire, depending on where the role sits and how specialized the work is.

When so many variables are in play, it can be difficult for a human mind to hold them all at the same time. An AI tool with the right data integrity can take on that cognitive load. That means human expertise, whether from an account lead, recruiter, or workforce strategist, gets applied across more clients and candidates, with more discernment, than any individual could sustain on their own.
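
As a hypothetical illustration of how those factors might combine, here's a toy fill-difficulty score. The signals, weights, and scales are invented for the sketch; a production model would learn them from placement data rather than hard-coding them.

```python
from dataclasses import dataclass

# Toy weighted score combining the geographic, industry, and temporal
# factors listed above. All weights and scales are illustrative.
@dataclass
class RoleSignals:
    local_talent_density: float   # 0 = scarce locally, 1 = abundant
    skill_niche: float            # 0 = commodity skill, 1 = highly niche
    cross_industry_demand: float  # 0 = no competition, 1 = heavy competition
    urgency: float                # 0 = flexible timeline, 1 = fill now

def fill_difficulty(r: RoleSignals) -> float:
    weights = {"talent": 0.35, "niche": 0.25, "competition": 0.2, "urgency": 0.2}
    return (weights["talent"] * (1 - r.local_talent_density)
            + weights["niche"] * r.skill_niche
            + weights["competition"] * r.cross_industry_demand
            + weights["urgency"] * r.urgency)

print(f"{fill_difficulty(RoleSignals(0.2, 0.8, 0.6, 0.9)):.2f}")  # -> 0.78
```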

Employee development and performance

AI is also showing up in how organizations develop and evaluate their people. One application that's gaining traction is the use of large language models to standardize goal-setting across a workforce. At Kelly, we distribute pre-written prompts during goal-review cycles that help employees rephrase their individual goals in actionable, trackable formats like SMART goals or OKRs, and check that those goals connect to company-level strategy. These prompts run through GRACE, an internal AI tool that gives employees across the organization secure access to AI without needing individual accounts. Without a tool like GRACE, getting every employee to write goals in a consistent, trackable format would require significant manual effort.
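
A minimal sketch of what such a pre-written prompt might look like is below. GRACE is an internal tool, so `call_llm` here is a hypothetical stand-in for whatever enterprise AI interface your organization provides.

```python
# Hypothetical goal-rewriting prompt in the spirit described above.
# `call_llm` is a placeholder, not a real API.
SMART_PROMPT = """Rewrite the goal below as a SMART goal
(Specific, Measurable, Achievable, Relevant, Time-bound).
Then state in one sentence how it supports this company priority: {priority}

Goal: {goal}"""

def rewrite_goal(goal: str, priority: str, call_llm) -> str:
    return call_llm(SMART_PROMPT.format(goal=goal, priority=priority))

# rewrite_goal("Get better at client calls",
#              "Grow repeat business with existing accounts", call_llm)
```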

On the coaching side, AI-driven analysis of call recordings is producing insights that were previously difficult to quantify. We've seen this with our sales and recruiting teams. AI can measure which team members demonstrate the strongest active listening behaviors on their calls and identify where others could improve. Active listening used to be treated as a qualitative judgment call, something a manager might notice but couldn't put numbers to. Now that we can quantify it, we've found that following the best practices these tools surface already correlates with stronger sales margins.

More broadly, a large share of AI's value in development comes from converting unstructured data, like those call recordings, into structured formats, like coaching scorecards or follow-up questions. With AI handling that conversion, people have more room to be present in conversations, build relationships, and think critically about what comes next.
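
As a rough illustration of that conversion, the sketch below turns a transcript into a crude scorecard. Counting questions and talk share is only a coarse proxy for active listening; the tools described above use far richer signals.

```python
# Toy transcript-to-scorecard conversion: unstructured turns in, a
# structured coaching summary out. Metrics are illustrative proxies.
def coaching_scorecard(transcript: list[tuple[str, str]], rep: str) -> dict:
    rep_words = sum(len(t.split()) for s, t in transcript if s == rep)
    all_words = sum(len(t.split()) for _, t in transcript)
    questions = sum(t.count("?") for s, t in transcript if s == rep)
    return {
        "talk_share": round(rep_words / all_words, 2),
        "questions_asked": questions,
        "flag": "talks too much" if rep_words / all_words > 0.6 else "ok",
    }

call = [("rep", "What outcome matters most to you this quarter?"),
        ("client", "Cutting time-to-fill for our warehouse roles."),
        ("rep", "So speed on industrial roles is the priority?")]
print(coaching_scorecard(call, "rep"))
# -> {'talk_share': 0.73, 'questions_asked': 2, 'flag': 'talks too much'}
```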

What employers get wrong about AI adoption

AI adoption failures rarely trace back to the technology. They trace back to misaligned incentives, underestimated costs, and an incomplete understanding of how AI changes the way people work.

Part of the problem is that organizations and their employees aren't having the same conversation about AI's role. According to Kelly's Re:work Report, three in five business professionals and industrial executives say they would replace workers who resist AI, but only 43% of workers believe their employer would follow through. When the two sides disagree on something that basic, the conditions for thoughtful implementation (clear roles, shared expectations, human-led oversight) are hard to establish.

Optimizing for the wrong outcomes

The idea that AI should handle all low-level tasks so people can focus on complex ones sounds like a clean division of labor. In practice, it can backfire. Call center employees, for instance, may end up handling only the calls that are too complex, too urgent, or too contentious for an AI agent. The result is a workforce that spends the entire day on the highest-stress interactions with no lower-intensity work to balance them out. That's a worse employee experience and, in many cases, results in worse service quality.

The ideal model for AI use can be described as a centaur: the human does the thinking, and the technology makes them faster. The person remains in charge, activating AI tools deliberately and under their own supervision. But that model can easily get inverted. When AI starts dictating the work and the person is there to carry out what the tool prescribes, the worker becomes a gap-filler rather than a decision-maker. The process becomes fragile, and the person replaceable.

A related version of this phenomenon is what I'd call accountability sinks, where AI processes are structured so that a human only steps in when something goes wrong. That might look efficient on paper, but it means the person's role is to absorb fault rather than to guide outcomes. At Kelly, we try to structure things so that a person is in charge throughout the process, not just when it starts to break.

Underestimating hidden costs and gradual skill erosion

The visible costs of AI adoption are easy to budget for. It’s the hidden ones that get organizations into trouble.

Software maintenance is a common blind spot. Building or configuring an AI tool is the visible effort, the part of the iceberg above the waterline. What sits below is the maintenance that follows: keeping the tool consistent with other systems, updating it as regulations change, responding to shifts in how adjacent tools operate. This problem is compounding as more teams use AI itself to build custom software, often without accounting for the long-term cost of maintaining what they've built.

Operational costs are another blind spot. AI interviewing tools expand the number of candidates an organization can screen, often reaching people who wouldn't have been contacted through traditional processes. When a portion of those candidates opt out of the AI interview and request a human interviewer instead, that volume lands on recruiters as additional workload, not a return to pre-AI levels. If the opt-out rate is significant, your capacity planning has to account for it, or your recruiters end up overextended. These kinds of costs are easy to miss in an initial implementation plan, and they add up across a workforce.
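
A back-of-the-envelope version of that capacity math, with invented numbers, looks like this:

```python
# Capacity planning with invented numbers: AI screening widens the funnel,
# and the opt-out share of that wider funnel lands on recruiters.
pre_ai_screens = 200   # candidates the team screened by hand before AI
ai_screens = 1000      # candidates the AI tool can now reach
opt_out_rate = 0.15    # share of candidates requesting a human interviewer

human_interviews = ai_screens * opt_out_rate
print(human_interviews)  # 150.0: nearly the old screening workload, on top
                         # of reviewing AI outputs for the other 850
```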

There's also a more subtle cost that compounds over time: skill erosion. We all experienced a version of this when GPS navigation became standard, and most of us lost the ability to navigate without it. The same pattern applies to workplace skills like writing, planning a discussion, or synthesizing information from multiple sources. If people delegate the tasks that form the core of their role, organizations face a gradual hollowing out of the expertise those roles are supposed to carry. Combine that with the fact that only 26% of employees strongly agree that their organization encourages learning new skills, and the erosion compounds quickly. That pattern also reframes what learning efforts need to accomplish. Upskilling matters, but so does the deliberate retention of skills that AI makes easy to delegate.

Building a governance framework for AI in HR

When I think about AI governance, I start by identifying where the policy resides for each tool. The answer depends on the tool, and that's where many organizations stumble. They try to write one policy to cover everything, when what they need is a structure that accounts for different levels of control.

Three levels of AI governance

The three governance tiers range from tools where compliance is built into the product by design to tools where the organization has almost no control at all.

  • Level one: Enterprise-level tools. These are core systems that the whole organization uses, like an HRIS or an applicant tracking system, purchased from a partner firm as foundational infrastructure. As much as possible, compliance and governance are built into these tools by design. Authentication, data handling, and usage guardrails are all structural. It should be difficult to use these tools in a non-compliant way.

  • Level two: Team-level tools. These are vendor-specific solutions that serve a single department or function. Maybe it's a tool the legal team uses, or something specific to one arm of the hiring process. Because these tools are narrower in scope, we don't have the same latitude to demand that governance be embedded in the product itself. Instead, we work with the vendor to configure the tool to our specifications, using intake processes like IT architecture reviews and AI impact assessments to set guardrails before anything goes live.

  • Level three: User-level tools. This is where things get complicated. User-level tools are the ones available to anyone with an internet connection. The most common example today is a personal ChatGPT account. We don't control these tools, and we can't configure them. They're still worth using, but the governance approach has to be different. Instead of protocols tied to specific tools, we focus on principles: helping people understand why certain types of data shouldn't go into a non-enterprise tool, not just telling them to avoid it. The goal is for someone to be able to apply the same reasoning to any new tool that comes along, because they will keep coming along.
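
One way to operationalize the tiers is a simple tool registry that intake reviews can consult. The sketch below is hypothetical; the tool names and review lists are illustrative, not Kelly's actual inventory.

```python
from enum import Enum

# Hypothetical registry mapping tools to the three governance tiers above,
# so reviews can look up what oversight applies to any given tool.
class Tier(Enum):
    ENTERPRISE = 1  # compliance built into the product by design
    TEAM = 2        # vendor-configured; guardrails set at intake
    USER = 3        # uncontrolled; governed by principles and training

REGISTRY = {
    "applicant_tracking_system": Tier.ENTERPRISE,
    "legal_contract_review_tool": Tier.TEAM,
    "personal_chatgpt_account": Tier.USER,
}

REQUIRED_REVIEWS = {
    Tier.ENTERPRISE: ["procurement", "security", "built-in guardrail audit"],
    Tier.TEAM: ["IT architecture review", "AI impact assessment"],
    Tier.USER: ["data-handling training", "acceptable-use principles"],
}

print(REQUIRED_REVIEWS[REGISTRY["legal_contract_review_tool"]])
# -> ['IT architecture review', 'AI impact assessment']
```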

What the regulatory patchwork means for compliance

AI hiring regulations are proliferating at the state level, with no federal legislation in place yet. States including Illinois, Colorado, and California have introduced or enacted laws addressing AI in employment decisions, with requirements ranging from candidate notification and bias audits to impact assessments and human oversight mandates. NYC's Local Law 144 was an early municipal example, requiring annual independent bias audits for automated hiring tools. And organizations with global operations should note that the EU AI Act classifies employment AI as high-risk, with full enforcement beginning August 2026.

The specifics of each law are worth paying attention to, and organizations operating across multiple jurisdictions will need to track labor law updates closely. From a governance perspective, the takeaway is that organizations with a clear tiered structure are better positioned to adapt when new regulations arrive, because they already know where each tool sits in their stack and what level of oversight applies.

How to start adopting AI in HR and what to keep in mind

For organizations that haven't yet adopted AI into their HR practices, or those looking to expand, I'd recommend starting with two questions: are you ready, and where should you begin?

Know your processes, then start where signal outpaces capacity

The single best predictor of whether AI adoption will succeed is whether the organization has documented, standardized processes before AI enters the picture. If you're figuring out your process while bringing AI in, the relationship gets messy. You also need clean, well-organized data, and a numeric target for what you're trying to change. Without those, AI implementation becomes: "step one: use AI, step two: who knows, step three: profit." That approach does not tend to end well.

Once your foundations are in place, the best entry point is wherever you have a lot of information coming in that needs to be structured for a specific purpose. A transcript from a recorded meeting that needs to become a recap document for leadership. A chain of emails that needs to be turned into a set of action items. A complex report that needs to be distilled for a different audience. Each of these is a one-to-many problem: one person, many inputs. AI handles that relationship well when it's given specific instructions about what the output should look like.
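
Here's a minimal sketch of what "specific instructions about the output" can mean in practice for one of those one-to-many tasks, the emails-to-action-items case, using an explicit JSON shape. As before, `call_llm` is a hypothetical stand-in for an approved AI interface.

```python
import json

# One-to-many structuring: many emails in, one typed action list out.
# The prompt pins down the output shape so results are machine-checkable.
PROMPT = """Extract action items from the emails below. Return only a JSON
array shaped like: [{{"owner": "...", "action": "...", "due": "... or null"}}]

Emails:
{emails}"""

def action_items(emails: str, call_llm) -> list[dict]:
    raw = call_llm(PROMPT.format(emails=emails))
    return json.loads(raw)  # fails loudly if the model ignores the format
```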

Treat AI as a tool

The one-liner I come back to most often is that AI is a tool, not a worker. Treating it like a word processor or an internet browser will be much more productive than treating it like a coworker you're chatting with and asking to do things. The browser framing gets people thinking about how they'll use AI with different data sources to perform tasks, rather than what they can ask it for generally. It also guards against the kind of implicit decision delegation that creates liability exposure and erodes the human judgment that HR work depends on.

Where AI in HR goes from here

Most organizations have access to AI tools. The adoption gap closes when those tools are embedded in human-led processes with clear governance, realistic cost models, and a deliberate approach to what gets automated and what doesn't. At Kelly, that’s the approach behind tools like GRACE, our internal AI interface, and Helix, our workforce visibility and analytics platform. The technology helps teams manage volume and complexity, while people remain responsible for interpreting outputs and guiding decisions.

For the full data on AI adoption, skills gaps, and workforce planning across industries, download the Kelly Global Re:work Report.