Key Takeaways
AI in recruitment reduces admin work such as note-taking, updating progress and status, and filtering on experience and qualifications, but it fails at assessing soft skills, leadership, and cultural fit.
AI hiring bias remains a major issue, with legal cases and scrutiny involving major firms such as Amazon and Goldman Sachs.
AI hiring failures span industries – from finance, legal, and healthcare to retail and hospitality.
Businesses must balance AI automation with human judgment to ensure ethical, accurate hiring.
AI cannot replace executive recruiters for C-suite and leadership hiring: these specialists maintain large networks of senior contacts built over thousands of hours, giving them a depth of understanding about fit that no algorithm can match.

Businesses today are increasingly questioning how AI is used in recruitment and whether AI hiring bias impacts talent acquisition. While AI-powered tools can reduce time spent on basic tasks, their limitations are significant and often overlooked. A McKinsey report found that 65% of organisations now regularly use AI in at least one business function, nearly doubling in just ten months, but this rapid adoption brings risks as well as opportunities.
How is AI in hiring impacting recruitment?
AI is used in recruitment for automating administrative tasks, CV screening, interview scheduling, predictive analytics, and job advert optimisation. It helps speed up processes but has limitations in assessing soft skills, cultural fit, and leadership potential.
How can businesses balance AI and human recruitment?
Recruiters have long struggled with the sheer volume of CVs, applications, and communications required in the hiring process, and AI-powered tools are reducing the time spent on repetitive tasks such as communications and calendar bookings. Some of the most common applications include:
Automation of communications – Chatbots and AI-driven messaging can help streamline candidate interactions, from interview scheduling to status updates, although many candidates dislike these systems, which can hurt engagement.
Data management – Keeping candidate databases clean, updated, and GDPR-compliant can be handled by AI-driven solutions.
Predictive analytics – AI can analyse hiring patterns to anticipate recruitment needs before they arise.
Lead generation – AI can analyse data to identify businesses with hiring needs, giving recruiters a competitive edge.
When used correctly, AI can free up recruiters to focus on what really matters: engaging with candidates, building relationships, and providing strategic value to clients. But AI is not a silver bullet, and its effectiveness depends entirely on how it is trained and applied. It should not be assumed that what it produces is correct. It still makes errors, and often repeatedly.
What are the biggest risks of AI in hiring?
The main risks include AI bias, misclassification of candidates, elimination of high-quality candidates through rigid keyword filtering, and a lack of human judgment in evaluating personality and team fit. AI also amplifies pre-existing bias embedded in the underlying data. According to the World Economic Forum, over 85% of HR leaders acknowledged that AI hiring tools need human oversight to avoid discrimination, while 65% of candidates did not trust AI-driven hiring processes (2023 Future of Work Report).
The False Promise of AI Screening
Candidates have reported receiving instant rejection messages within minutes of applying. At the same time, job seekers who understood how to game the system by copying job descriptions into their applications managed to bypass AI filters. If hiring managers rely solely on AI tools without human oversight, they are likely eliminating high-quality candidates before they even reach an interview.
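To make this failure mode concrete, here is a deliberately simplified sketch in Python of how naive keyword screening behaves. The keywords and CVs are hypothetical, and real applicant-tracking systems are more sophisticated, but the underlying weakness is the same: matching phrases rather than understanding experience.

```python
# Illustrative sketch of a naive keyword-based CV screen (hypothetical;
# real applicant-tracking systems are more complex, but fail the same way).
REQUIRED_KEYWORDS = {"stakeholder management", "agile", "python"}

def passes_screen(cv_text: str, threshold: int = 2) -> bool:
    """Pass a CV if it mentions at least `threshold` required keywords."""
    text = cv_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits >= threshold

# A strong candidate who describes the same skills in different words:
strong_cv = "Led cross-functional scrum teams; built data pipelines in Python."
# A weaker candidate who simply pastes phrases from the job advert:
gamed_cv = "Stakeholder management, agile, Python. Stakeholder management."

print(passes_screen(strong_cv))  # rejected despite relevant experience
print(passes_screen(gamed_cv))   # passes by echoing the job advert
```

The strong CV describes agile delivery as "scrum" and so scores only one keyword hit, while the copied-and-pasted CV sails through, which is exactly the behaviour candidates have learned to exploit.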
A LinkedIn Workforce Report found that 62% of UK hiring managers believe AI recruitment tools have led to higher rejection rates for qualified candidates, particularly in mid-to-senior roles (2024).
Why do AI screening tools reject good candidates?
One of the biggest concerns around AI in recruitment is bias. AI systems are only as good as the data they are trained on. If that data contains historical biases, the AI will reinforce them. The now-infamous case of Amazon’s AI recruitment tool is a perfect example. Trained on a decade of hiring data, the AI consistently downgraded CVs from women applying for leadership roles because past hires had been predominantly male.
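The Amazon case can be illustrated with a toy model. The data and scoring below are entirely hypothetical (Amazon's actual system was far more complex): a scorer that learns word weights from past hiring outcomes will penalise any word that appears mainly in rejected applications, even when the word says nothing about ability.

```python
# Toy illustration (hypothetical data) of how a model trained on biased
# historical hires reproduces that bias. Weights are simple word counts
# learned from past outcomes, not a real recruitment model.
from collections import Counter

# Historical outcomes: past hires were predominantly male, so words that
# correlate with female applicants rarely appear among "hired" examples.
hired     = ["captain chess club", "led engineering team", "chess club member"]
not_hired = ["captain women's chess club", "women's engineering society lead"]

def word_weights(positives, negatives):
    """Weight = (count in hired CVs) - (count in rejected CVs)."""
    weights = Counter()
    for cv in positives:
        weights.update(cv.split())
    for cv in negatives:
        weights.subtract(cv.split())
    return weights

weights = word_weights(hired, not_hired)

def score(cv_text: str) -> int:
    return sum(weights[w] for w in cv_text.split())

# Two near-identical CVs; a single word changes the ranking.
print(score("captain chess club"))          # scores higher
print(score("captain women's chess club"))  # scores lower, penalised
```

The model has learned nothing about chess or leadership; it has simply learned that "women's" was associated with past rejections, which is essentially the pattern reported in the Amazon case.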
How do I know if AI hiring tools are biased?
AI is only as effective as the quality of data it processes, and we know that bias is often deeply embedded within that data. If businesses don’t work to remove it, AI will speed up processes but simply automate discrimination at scale. Below are some examples of hiring failures when relying on AI for recruitment.
LinkedIn: Had to adjust its AI hiring tools due to algorithmic bias.
European Parliament’s AI Ethics Concerns: Policymakers have flagged AI-based hiring tools for failing to meet fairness and transparency standards.
The Netherlands: AI in government hiring unfairly rejected ethnic minority applicants by prioritising Dutch-sounding names, leading to legal scrutiny.
UK: AI recruitment tools failed GDPR compliance, with concerns over biased data storage and lack of transparency, prompting regulatory intervention.
Japan: AI screening tools misclassified international applicants due to language nuances and formatting errors, leading to qualified candidates being overlooked.
China: AI used in state-owned enterprises struggled to assess leadership and innovation, filtering out candidates who didn’t use predefined key phrases.
USA: Multiple US companies faced legal action due to AI recruitment tools unfairly rejecting candidates based on algorithmic bias.
Below we share some of the more recent examples of recruitment and AI failures.
Financial Sector – AI Bias in Banking & Investment Hiring
Goldman Sachs AI Bias Case: AI-driven hiring tools in investment banking have been criticised for unintentionally prioritising candidates from elite universities, reinforcing class and racial biases.
Many AI systems in high-stakes financial recruitment rely heavily on historical hiring data, which inadvertently excludes candidates from non-traditional backgrounds.
Legal Industry – AI Struggles with Specialist Roles
AI struggles to assess complex legal experience, as it relies on rigid keyword-matching rather than evaluating actual case expertise.
A report from legal hiring firms found that AI frequently rejected candidates with strong courtroom experience because their CVs lacked specific terminology used by AI filters.
In some cases, AI recruitment systems misclassified legal specialisations, leading to the wrong candidates being shortlisted.
Healthcare Sector – AI Fails in Medical Hiring
AI-driven hiring software in hospitals failed to recognise the qualifications of international medical graduates, leading to highly skilled doctors being rejected due to formatting or missing keywords.
In nursing recruitment, AI struggled to assess interpersonal skills and bedside manner, leading to poor candidate matches despite strong technical credentials.
A major healthcare provider reported that AI filters mistakenly deprioritised candidates with career gaps, even when those gaps were due to maternity leave or advanced medical training.
Retail & Hospitality – AI Fails to Account for Soft Skills
AI-driven hiring for customer-facing roles struggled to assess emotional intelligence, problem-solving, and communication skills, which are critical in industries like retail and hospitality.
AI systems that filter applicants based on rigid job descriptions have been found to reject candidates with cross-industry experience, even when those skills are highly transferable.
The Candidate Perspective: AI in Job Applications
Candidates have quickly adapted to using AI tools to draft their CVs and cover letters, ensuring they contain the right keywords to get past AI screening tools. A Gartner survey revealed that 69% of HR professionals have received job applications containing AI-generated content, with over half estimating that 25-50% of applications now involve AI assistance; some candidates even use chatbots to refine their interview answers.
A well-written, keyword-rich CV obviously does not guarantee that the candidate has the skills or experience required. This means the wrong candidates are being selected for interview, which wastes everyone's time. Combine that with candidates using AI tools during remote video interviews that listen to the questions and suggest answers, and the process becomes even harder to monitor. All the more reason why thoroughly vetting and knowing a candidate before the interview process begins is vital.
AI’s Role in the Workplace: Train In-House or Recruit?
The reasoning for training in-house is logical: internal employees understand company culture, have existing relationships, and may adapt AI tools easily to their specific work environments. McKinsey predicts that by 2030, up to 30% of current work hours could be automated, increasing pressure on businesses to either retrain employees or recruit AI specialists. They also report that 40% of organisations anticipate re-skilling more than 20% of their workforce due to AI adoption, highlighting the growing need for structured training programmes. While training is an option, it is not always the best route because:
Not everyone can be retrained. Some employees may lack the aptitude or interest to develop technical AI skills.
It’s not always cost-effective. Training takes time and money, and in some cases hiring an expert might be the better investment. There is also an opportunity cost: other projects may need to be put on hold.
In-house training can reinforce existing biases. If the AI tools are trained on internal data that already contains bias, the problem compounds rather than resolves.
There is a risk of companies limiting their growth by refusing to bring in external talent with fresh ideas and perspectives, particularly for businesses that require high-level AI skills. We explore this further in our article here.
Ethical Considerations and the Role of HR
The CIPD emphasises the critical role HR professionals play in the ethical adoption of AI. They advocate for a proactive yet measured approach, encouraging HR teams to engage with AI innovations while upholding ethical standards. This involves ensuring AI applications are free from biases, maintain transparency, and complement human decision-making rather than replace it.
Why AI cannot replace human recruiters
At the CIPD Recruitment Conference 2024, discussions focused on AI’s role in transforming recruitment. Experts highlighted how AI can create a more personalised and efficient hiring process by matching candidates with roles more precisely, and felt it could process vast amounts of data to identify patterns in hiring success, predict candidate success rates, and influence the screening process.
However, caution was advised against over-reliance on AI. Without human input, businesses risk missing out on high-quality candidates who may not fit rigid algorithmic criteria but would thrive in a role due to their adaptability, potential, and cultural fit. AI should complement recruitment, not replace human judgement. A really good recruiter will be able to ‘see’ the skills not written in a CV and pick up on potential.
What should businesses consider before implementing AI hiring tools?
According to the Recruitment & Employment Confederation (REC), 90% of recruiters utilise AI tools for crafting job descriptions. However, AI's application diminishes in later recruitment stages, with only 7% using it for onboarding and 5% for interview analysis. As AI continues to evolve, transparency and accountability will be essential. Companies should evaluate AI bias risks, the accuracy of AI screening processes, hidden costs, compliance with employment laws, and whether human recruiters remain central to decision-making. Here is where AI can complement the process.
| AI Strengths | Human Strengths |
| --- | --- |
| Basic communications automation (scheduling, updates) | Shortlisting candidates for interview or initial candidate assessments |
| Data management for GDPR compliance | Understanding cultural fit and long-term team cohesion |
| Simple predictive analytics for identifying hiring trends | Building relationships and long-term candidate trust |
| Automating repetitive tasks (CV parsing, interview scheduling) | Emotional intelligence, soft skills & leadership assessment |
| Improving job adverts based on pre-defined parameters | Evaluating soft skills & adaptability |
| Creating a longlist of potential passive candidates based on predefined terms | Identifying high-potential candidates beyond CV data |
| | Advising businesses on strategic hiring needs |
| | Ensuring diversity and inclusion beyond algorithmic filters |
What are the hidden costs of AI recruitment?
Licensing fees for AI hiring platforms.
Training time for staff to use AI tools properly: AI is not plug-and-play, and training diverts focus from strategic hiring efforts.
Potential lost hires from AI rejection mistakes: AI screening can incorrectly reject qualified candidates due to biased algorithms or overly rigid filtering.
Ongoing AI maintenance & updates.
Lack of human intuition resulting in costly mismatches that could have been avoided with human expertise.
Experienced recruiters don’t just fill vacancies; the best bring trusted strategic insight
Bad hiring decisions cost businesses thousands in lost productivity, salary costs, and cultural disruption, not to mention the time needed to replace the hire. In some cases this can total three times a candidate’s salary. Experienced recruitment consultants bring their own established candidate networks, emotional intelligence, intuition, and an understanding of team dynamics that no algorithm can replicate. They not only understand your business but can identify the right type of candidate for it.
Experienced recruiters don’t just fill vacancies; they act as trusted advisors to businesses, help organisations define hiring strategies, anticipate future talent needs, and assess which roles will bring the most value to a growing team. Louisa Plint, founder of Auxeris, highlights that often companies approach recruitment with a preconceived notion of their hiring challenges, only to realise through advisory discussions that the real issue lies elsewhere.
Why is AI problematic in executive search?
AI’s effectiveness declines further still in executive search and headhunting. Leadership hires require a deep understanding of soft skills, leadership capabilities, and strategic fit, elements that AI simply cannot quantify. Executive recruitment is relationship-driven, relying on broad networks of senior talent, trust, discretion, and an ability to match candidates beyond a checklist of skills. For C-suite and senior roles, the human element remains indispensable. Businesses that attempt to automate these critical hiring decisions risk misjudging cultural alignment, long-term vision, and leadership potential, areas where human expertise is essential and the investment will drive significant ROI.
The Future of AI in Recruitment
AI is not a silver bullet, and making sweeping changes that assume it is will only land your business in trouble. AI still makes errors, it needs human oversight, and we are not yet at a point where it can be left to run tasks unsupervised. Crucially, real relationships matter more than ever in recruitment. They always have, and they always will.
The best use of AI is as an enabler, not a decision-maker. Human oversight ensures every hire is a long-term investment. If your business is serious about finding the best talent, then working with a third-party recruiter isn’t optional, it’s essential. And if your business is serious about hiring the best talent while using AI responsibly, let’s talk about how Auxeris can help. Get in touch.