Why AI Triggers Psychological Reactions
Artificial intelligence (AI) has emerged as both a productivity enhancer and a psychological disruptor. For workers and organisations, AI signals profound transformation, often arriving faster than traditional systems can absorb. As the PwC (2018) report outlines, the disruption unfolds across three waves: algorithmic, augmentation, and autonomy. This progressive infiltration of AI into jobs generates widespread uncertainty, and uncertainty is fertile ground for fear, resistance, and challenges to trust.
The emergence of artificial intelligence (AI) represents one of the most significant technological transformations in organisational life since the Industrial Revolution. Once relegated to science fiction, AI has rapidly evolved into a ubiquitous force shaping decision-making, productivity, job design, and even organisational identity. From the early stages of rule-based expert systems to the current generation of large language models and predictive analytics, the development of AI has not followed a linear path. Rather, it has accelerated exponentially—what was once a theoretical possibility is now deeply embedded in recruitment, learning and development, performance monitoring, and strategic planning across industries.
From the standpoint of industrial-organisational psychology, this emergence has far-reaching implications not just for how work is done, but for how people feel about their work. AI is not merely a technological disruption—it is a psychological one, reshaping trust, motivation, professional identity, and perceptions of control. The organisational context, long studied for its influence on human behaviour, is being redefined by algorithms that often operate invisibly, yet decisively.
Historically, technological changes—such as the mechanisation of factory work or the digitisation of office labour—unfolded over decades, allowing organisations and workers time to adapt. The pace of AI-driven transformation is radically different. According to the PwC (2018) three-wave model of automation, we are now transitioning from the "augmentation wave," characterised by human-AI collaboration, into the "autonomy wave," in which AI systems increasingly make decisions with minimal human oversight. This shift is not just technical but emotional: it challenges workers’ sense of value, agency, and future readiness.
In parallel, research by METR (Model Evaluation and Threat Research) has shown how modern AI systems are rapidly improving at long-horizon tasks: multi-step, context-sensitive activities that previously required human oversight. METR's benchmarks evaluate AI not just on narrow tasks such as image recognition, but on more complex challenges such as strategic reasoning and dynamic adaptation over time. These developments suggest that AI is no longer just automating routine or repetitive work; it is beginning to encroach on roles traditionally considered "safe" from automation: project management, client interaction, leadership support, and creative problem solving.
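To make the idea of a long-horizon benchmark concrete, the sketch below illustrates one way a "50% time horizon" could be estimated: the task length at which an agent succeeds on roughly half of its attempts. The data and the fitting routine are hypothetical and offered only as an illustration of the concept, not as METR's published methodology.

import numpy as np

# Hypothetical results: task length expressed as the time a skilled human
# would need (minutes), and whether the AI agent completed each task.
task_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480], dtype=float)
succeeded = np.array([1, 1, 1, 1, 1, 1, 0, 1, 0, 0], dtype=float)

def fit_logistic(x, y, lr=0.1, steps=20000):
    """Fit P(success) = sigmoid(a + b * log2(x)) by plain gradient descent."""
    a, b = 0.0, 0.0
    lx = np.log2(x)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(a + b * lx)))
        a -= lr * np.mean(p - y)
        b -= lr * np.mean((p - y) * lx)
    return a, b

a, b = fit_logistic(task_minutes, succeeded)
# The 50% horizon is the task length t at which a + b * log2(t) = 0.
horizon = 2 ** (-a / b)
print(f"Estimated 50% task-completion horizon: {horizon:.0f} minutes")

The psychological point is not the arithmetic but the trend it summarises: as this horizon lengthens, tasks that once felt securely human move inside the range of automation.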
The acceleration of AI capabilities, particularly in these cognitively complex domains, introduces significant uncertainty into the workplace. As I/O psychologists have long recognised, uncertainty is a key driver of workplace stress. It activates threat appraisal mechanisms, prompts defensive coping behaviours, and can undermine psychological safety—the very foundation upon which trust, collaboration, and innovation depend. When employees are unsure whether they will be replaced, monitored, or outperformed by AI systems, their ability to engage meaningfully in their work diminishes. Even if their jobs are not immediately at risk, the perceived instability can be just as damaging.
This uncertainty often manifests as fear, which I/O psychology should treat not as irrational but as evolutionarily grounded. Fear, particularly in the context of job security, autonomy, and identity, serves as a warning system, a cognitive-emotional response to the erosion of control. When AI is introduced without transparency or input, it can trigger what organisational theorists call identity threat: the sense that one's role is being devalued or replaced (Klug, Selenko, Hootegem, Sverke, & De Witte, 2024). For example, employees who have built their careers on expertise may feel deeply unsettled by systems that mimic or outperform their judgment.
Trust, then, becomes a central variable in the human-AI relationship. Research on trust in automation highlights that trust must be calibrated; it cannot be assumed or imposed (Lee & See, 2004). Over-trust leads to complacency, while under-trust results in rejection or sabotage. For AI to be accepted, it must not only be accurate but also understandable, predictable, and controllable. From an I/O psychology perspective, this means involving workers in the design and implementation of AI systems, providing transparent reasoning behind AI-driven decisions, and ensuring mechanisms for oversight and feedback (Parker & Grote, 2020).
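As a toy illustration of calibration (the numbers and the 0.1 tolerance are invented for this example, not drawn from Lee and See, 2004), one can compare how often people rely on a system with how often that system is actually correct:

def calibration_check(reliance_rate: float, reliability: float, tol: float = 0.1) -> str:
    """Classify the gap between an operator's reliance and the system's actual reliability."""
    gap = reliance_rate - reliability
    if gap > tol:
        return "over-trust (complacency risk)"
    if gap < -tol:
        return "under-trust (disuse or rejection risk)"
    return "approximately calibrated"

# A hypothetical AI screening tool that is correct 80% of the time:
print(calibration_check(reliance_rate=0.95, reliability=0.80))  # over-trust
print(calibration_check(reliance_rate=0.50, reliability=0.80))  # under-trust
print(calibration_check(reliance_rate=0.82, reliability=0.80))  # calibrated

The point of the sketch is that calibration is a relationship, not a fixed level of trust: the same degree of reliance can be healthy for one system and dangerous for another.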
Furthermore, resistance to AI is not simply a problem to be solved—it is data. Resistance often reveals a misalignment between technology and organisational culture, or between algorithmic goals and human values. As outlined in sociotechnical systems theory, the success of AI integration depends on the joint optimisation of technical and social systems (Read, Salmon, Lenné, & Stanton, 2015). If organisations pursue technical efficiency at the expense of employee wellbeing, trust, or fairness, they may achieve short-term gains at the cost of long-term engagement and ethical legitimacy.
Sociotechnical systems (STS) theory posits that organisations comprise both social and technical subsystems that must be jointly optimised to achieve sustainable effectiveness. Originating in work at the Tavistock Institute in the 1950s, STS emphasises that neither technology nor human factors should dominate; optimal performance emerges from their interaction. It challenges purely technocratic or purely humanistic approaches by advocating a balance between efficient technical design and humane work structures. There are well-documented cases in which new technology was introduced yet no productivity gain followed, largely because the social system was not developed alongside the technical one. The classic example is the British coal mines, where the longwall mining method disrupted existing social structures and decreased productivity despite its technical efficiencies, highlighting the need to co-design the social and technical elements of work systems (Trist & Bamforth, 1951).
Key counterpart theories include Contingency Theory, which argues that organisational effectiveness depends on aligning internal structures with external environments, and Systems Theory, which provides a broader lens of interrelated parts working toward a common goal. Additionally, Actor-Network Theory (ANT) expands on the STS perspective by viewing both human and non-human elements (like technologies) as actors within a network, blurring the line between the social and the technical. Together, these theories enrich our understanding of how organisations evolve in complex, dynamic environments.
The I/O psychologist’s role in this evolving landscape is both critical and complex. We are not just observers—we are translators between the algorithmic and the human. We must evaluate how AI is introduced, whether it aligns with evidence-based practices for behaviour change and motivation, and how it impacts key organisational outcomes such as psychological safety, fairness perceptions, and meaningful work.
In sum, the emergence of AI represents more than just a new toolkit—it marks a new era of organisational psychology, one where machines no longer serve only as tools, but as semi-autonomous actors within human systems. The pace of this change, as documented by METR and global research reports, is accelerating beyond what many organisations are prepared for. The challenge for I/O psychology is to ensure that as AI evolves, it does so in a way that enhances—not undermines—the human experience of work. To ignore the psychological realities of fear, uncertainty, and trust is to risk building efficient systems that no one wants to be part of. Our task, then, is to humanise the machine before the machine dehumanises the organisation.
Klug, K., Selenko, E., Hootegem, A., Sverke, M., & De Witte, H. (2024). A lead article to go deeper and broader in job insecurity research: Understanding an individual perception in its social and political context. Applied Psychology. https://doi.org/10.1111/apps.12535
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
Parker, S. K., & Grote, G. (2020). Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Applied Psychology, 1171–1204. https://doi.org/10.1111/apps.12241
PwC. (2018). Will robots really steal our jobs? An international analysis of the potential long term impact of automation. PricewaterhouseCoopers LLP, 1–43.
Read, G. J. M., Salmon, P. M., Lenné, M. G., & Stanton, N. A. (2015). Designing sociotechnical systems with cognitive work analysis: putting theory back into practice. Ergonomics, 58(5), 822–851. https://doi.org/10.1080/00140139.2014.980335
Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38.