2025-07-15 17:13:00
www.zdnet.com

A recent survey from Resume Builder finds that six in ten managers are using AI to make crucial decisions about their direct reports, including which employees are promoted and which are fired.
The survey polled 1,342 managers in the US, 60% of whom reported relying on AI to make decisions about their employees: 78% and 77% used the technology to award raises and promotions, respectively, while 66% and 64% used it to determine layoffs and terminations, respectively.
More than 20% "frequently let AI make final decisions without human input," though most managers also said they would step in if AI offered a recommendation they disagreed with.
Specifically, managers reported using AI tools for a range of tasks related to their direct reports, including making training material and employee development plans. Although 91% reported using the technology to assess their reports’ performance, Resume Builder’s survey questions did not clarify what these assessments entail.
Nearly half (46%) of the managers surveyed were also “tasked with assessing if AI can replace their reports,” Resume Builder noted. Of those, 57% found AI could take over a position, and 43% went ahead and replaced a human role with AI. Resume Builder did not provide details on what kinds of positions managers reported replacing.
When it comes to which AI tools are most popular among managers, those surveyed cited the usual suspects: 53% use ChatGPT most often, while 29% opt for Microsoft Copilot. Gemini had about 16% of the vote, and the remaining 3% of managers use another tool.
The survey also noted that two-thirds of those who use AI to manage their direct reports lack formal AI training. However, given how rapidly AI tools have entered workplaces, there are no agreed-upon standards for what adequate training even is — a problem exacerbated by an ongoing lack of regulation.
The report’s authors warned about the risks of using AI blindly.
“While AI can support data-driven insights, it lacks context, empathy, and judgment,” Resume Builder’s chief career advisor Stacie Haller said in the report. “Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture, and maintain trust among employees.”
Still, what “ethical” implementation means remains opaque. Resume Builder did not include any guidelines for what this looks like, nor did the survey ask managers to report their own definitions or instincts on where using AI to manage is more or less appropriate.
“Ethical usage of AI in management would need radical transparency for the employees, giving them a voice in the decision-making of what system should be used and showing exactly why — and most importantly, how — they are evaluated,” Hilke Schellmann, AI expert and author of “The Algorithm,” told ZDNET.
Schellmann added that employees should have a way to appeal decisions made by algorithms, especially when they can be as consequential as a layoff. “Honestly, the best way to use AI in management is to use algorithms that help employees — and are not accessible to management,” she added, “but there seems to be no appetite for that, at least in the US.”
Ethics and controversy
AI tools have become common in hiring and other HR functions. Resume Builder's survey found that most managers reported being encouraged by their companies to manage reports with AI, most commonly to improve efficiency, reduce overhead, and evaluate data more quickly.
But as many critics have pointed out, these sensitive use cases are where AI’s biases can be most damaging. In 2021, New York City passed Local Law 144 to address AI bias. One of the first laws of its kind, it requires any automated employment decision tools (AEDT) to be routinely audited for bias — at least once a year when in use — and to have the results of that audit published.
However, the law has been criticized for defining AEDTs too narrowly, which has allowed both enforcement and company compliance to dwindle.
Without explicit worker protections or mandated avenues for employees to appeal outcomes, AI use in personnel is essentially at the discretion of individual companies. Regulators could turn their attention to this AI use case by creating more robust transparency requirements and processes that companies need to adhere to if they are using AI tools in ways that could impact employees.
Privacy concerns
One 2023 paper from the Society for Human Resource Management (SHRM) notes that employees should have a right to know when and how AI is being used, ask questions, and opt out where applicable — something Local Law 144 also requires, specifically in hiring. It’s unclear how many of the managers surveyed, if any, have made their reports aware of their AI use.
The Resume Builder survey did not ask questions pertaining to the information managers shared with AI tools about their direct reports. If managers include performance details, salary, and other potentially sensitive data with chatbots, especially without employees’ consent, they could be creating a serious privacy problem that employees don’t have control over.
How employees can advocate for themselves
What can employees do if they are concerned about AI-generated decisions affecting the future of their role? It depends on how AI is being used on them.
“We see AI being used for more and more surveillance of employees, starting with hourly workers to white collar ones,” Schellmann pointed out. “I would suggest that workers band together and work with their unions and write in their bargaining agreements that surveillance technology has to be disclosed and needs co-decision making with representatives of the union.”
Beyond surveillance, employees should ask their managers, where applicable, for transparency on how AI tools are being used — although norms around feedback and how managers come to conclusions even without AI tools may make that difficult to navigate.