Abstract
Experts recommend the use of artificial intelligence (AI)-enabled tools in operational decision making to improve efficiency, surface insights that humans might miss, and reduce decision stress among executives. While AI-enabled tools can arguably be used to fully automate operational decisions such as recruitment screening, organisations that retain human oversight of the final decision are better positioned to address a variety of ethical, operational, and compliance concerns.
The promise of AI in human resources (HR) lies in efficiency: streamlining recruitment by scanning resumes, assessing communication style, and ranking candidates, enabling leaner HR teams (Forbes 2025). AI-enabled technologies analyse intricate datasets to formulate recommendations that HR operations can use to streamline processes such as recruitment and upskilling, as well as to inform operational and strategic decisions (Shrestha et al. 2019). Zerilli et al. (2019) suggest that as workers grow accustomed to using AI-enabled tools for decision making, a contemporary version of the "control problem" will likely emerge. The control problem refers to the tendency for humans to become complacent, over-reliant, or unduly diffident when faced with the outputs of a reliable autonomous system (Bainbridge 1983). A cautionary example is Amazon's now-retired AI recruitment tool, which was found to downgrade women's resumes after being trained on a decade of male-dominated hiring data (Dastin 2018).
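
To make the human-oversight arrangement concrete, the following is a minimal, hypothetical sketch (not drawn from the article, and all class and function names are illustrative) in which an AI model only produces a ranked shortlist and a human reviewer makes every final call, the pattern the abstract argues keeps organisations better positioned on ethics and compliance:

```python
# Hypothetical human-in-the-loop screening sketch: the model recommends,
# a human decides. The scoring function is a stand-in, not a real model.
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    resume_text: str


def score_candidate(candidate: Candidate) -> float:
    """Placeholder for an AI screening model's relevance score in [0, 1]."""
    return min(1.0, len(candidate.resume_text) / 1000)  # stand-in heuristic


def ai_shortlist(candidates: list[Candidate], shortlist_size: int) -> list[Candidate]:
    # The AI only *recommends* a shortlist; it never rejects anyone outright.
    ranked = sorted(candidates, key=score_candidate, reverse=True)
    return ranked[:shortlist_size]


def human_review(shortlisted: list[Candidate]) -> list[Candidate]:
    # The final accept/reject stays with a human, mitigating the "control
    # problem" of deferring uncritically to a normally reliable system.
    approved = []
    for candidate in shortlisted:
        answer = input(f"Advance {candidate.name}? [y/n] ")
        if answer.strip().lower() == "y":
            approved.append(candidate)
    return approved


if __name__ == "__main__":
    pool = [
        Candidate("A. Example", "Ten years of HR analytics experience..."),
        Candidate("B. Sample", "Recent graduate, internship in recruiting..."),
    ]
    print(human_review(ai_shortlist(pool, shortlist_size=2)))
```

The design choice worth noting is that the AI's output is advisory by construction: rejection can only happen in `human_review`, so an undetected model bias (as in the Amazon case) cannot eliminate a candidate without a human seeing them.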
| Field | Value |
|---|---|
| Original language | English |
| No. | 14 |
| Specialist publication | AIB Review |
| Publication status | Published - 23 Jun 2025 |