Your next boss may not be human: How AI is deciding who gets hired, rejected and laid off



Artificial intelligence is rapidly reshaping hiring across the US, with many companies now using AI to screen resumes, reject applicants and assist in workforce decisions. A new MyPerfectResume survey reveals growing concerns that while AI improves speed and efficiency, it may also overlook qualified candidates and weaken human judgment in critical employment decisions.

A job application used to begin with a human moment. A recruiter scanning a resume. A hiring manager noticing an unusual career path. A conversation about potential, personality, or promise. Now, for millions of workers, that first encounter may never happen at all. Before a recruiter reads a single line, before a manager notices experience or ambition, an algorithm may have already made the decision.

A new 2026 survey by MyPerfectResume reveals just how deeply artificial intelligence has moved into hiring and workforce decisions across the US. The findings paint a picture of a corporate world that is becoming faster, more automated and increasingly dependent on machine-driven judgment, even when employers themselves admit the systems are not always accurate.

The survey, conducted through Pollfish among 1,000 US hiring managers and HR professionals, suggests that AI is no longer simply assisting recruiters behind the scenes. It is now shaping who gets noticed, who gets filtered out and, increasingly, who remains employed.

And the most uncomfortable question hanging over modern workplaces is becoming impossible to ignore: if algorithms are now making life-changing career decisions, who is accountable when they get it wrong?

The first recruiter is now an algorithm

For many job seekers today, the hiring process no longer begins with a handshake or a recruiter’s call. It begins with software. According to the survey, 73% of employers now use AI in hiring decisions. Even more striking, 65% said AI systems automatically reject candidates before any human review takes place. That means thousands of resumes may disappear into digital silence long before a hiring manager ever sees them.

The scale of rejection is significant. Around 26% of employers said AI systems reject between 1% and 25% of applicants automatically. Another 25% said the systems reject between 26% and 50%. Meanwhile, 11% reported rejection rates between 51% and 75%, while 3% said AI eliminates more than 75% of applicants before human involvement. Only 5% said AI does not reject candidates at all.

The numbers expose a hiring process that is increasingly built around efficiency and speed. But they also raise a troubling possibility: how many capable workers are being filtered out simply because they failed to satisfy an algorithm’s narrow criteria?

Even employers admit AI may be missing good candidates

What makes the findings more striking is that many employers themselves appear unconvinced of AI’s reliability. Nearly 47% admitted AI systems may have filtered out candidates they personally would have advanced in the hiring process. In other words, almost half of hiring professionals acknowledge that automation may already be costing companies qualified talent.

The issue highlights one of the biggest tensions in modern hiring: the conflict between efficiency and judgment. AI systems are designed to scan resumes quickly, identify keywords, rank applicants and reduce manual workload. For companies dealing with thousands of applications, that speed is attractive. But hiring has never been purely mathematical. A resume gap may reflect caregiving responsibilities. Frequent job changes may signal adaptability rather than instability. An unconventional background may reveal creativity instead of risk. Algorithms, however, often struggle with nuance.

That creates a dangerous possibility: candidates are evaluated less as individuals and more as patterns of data. And as AI systems become more deeply integrated into recruitment pipelines, rejected applicants may never know whether they were denied by a human assessment or an automated assumption.

AI is now moving beyond hiring into layoffs

The survey also reveals that AI’s role is expanding far beyond recruitment. More than half of employers, around 52%, said they now use AI for workforce planning decisions such as restructuring and role evaluation. Another 28% said they are considering adopting AI for those purposes.

This marks a major shift in how corporate decisions are being made. Artificial intelligence is no longer just helping companies hire people. It is beginning to influence decisions about which roles are valuable, which departments should shrink and, potentially, which employees stay or leave.

That transition raises uncomfortable ethical questions. Can an algorithm truly understand employee performance in complex human environments? Can software account for mentorship, emotional intelligence, leadership or workplace relationships? And should systems trained on historical corporate data be trusted to make decisions that directly affect livelihoods?

The survey suggests employers themselves remain divided. While 51% said they were confident AI is used fairly in layoffs and restructuring decisions, 23% expressed doubts. Another 26% said they do not use AI in layoff-related decisions at all. The split reveals a corporate world still uncertain about how much trust these systems deserve.

AI is now judging behaviour, not just qualifications

One of the most revealing parts of the report concerns how AI is being used to make subjective assessments about workers themselves. According to the survey, 51% of employers use AI to flag what they describe as “risky” candidates, including job-hoppers or applicants with employment gaps. Another 12% said they are considering adopting such systems.

This represents a significant shift in workplace technology. AI is no longer simply matching skills to job descriptions. It is now attempting to interpret behaviour, predict reliability and assess professional character.

That raises difficult questions about fairness. What happens to workers who changed jobs frequently during economic uncertainty? What about parents who stepped away from careers for caregiving? Or employees who took breaks for mental health, education or personal crises? Human recruiters may recognise context. Algorithms may simply recognise patterns.

Critics have long warned that AI systems can inherit bias from the historical data they are trained on. If previous hiring trends favoured certain career paths or penalised employment gaps, AI tools may quietly reproduce those same patterns at scale. The danger is not always obvious discrimination. Sometimes it is the silent elimination of people who do not fit a preferred template.

The future of work may become less visible and less human

The MyPerfectResume survey captures a workplace entering a new phase of automation. On one hand, AI promises efficiency. Companies can process applications faster, reduce administrative work and make quicker decisions. On the other hand, the findings reveal growing unease around transparency, fairness and accountability. Workers are increasingly navigating systems where they may never know why they were rejected, flagged or overlooked.
Employers, meanwhile, are placing greater trust in technologies they themselves admit are imperfect. The result is a hiring culture where decisions may become faster, but not necessarily wiser.

And beneath the statistics lies a larger question that will define the future of work: when algorithms become gatekeepers to opportunity, who ensures the gate is fair? Because once hiring, promotions and layoffs begin moving through invisible systems, the greatest risk may not simply be automation itself. It may be the gradual disappearance of human judgment from decisions that shape human lives.


