Machine learning and the human
Machine learning (ML), a subset of artificial intelligence, refers to systems that improve their performance by learning from data without being explicitly programmed. In the organisational context, ML is now deeply embedded in core functions such as recruitment, performance evaluation, workflow optimisation, and predictive analytics. Its rise has significant implications for both organisational structure and individual psychology. Understanding its principles and limitations requires not only technical literacy but also a grounding in sociotechnical theory, cognitive psychology, and ethical decision-making frameworks.
At its core, machine learning is driven by data. It learns through exposure to training data, using statistical inference to recognise patterns and make predictions. There are several forms of ML, including supervised learning (where the model is trained on labelled data), unsupervised learning (which identifies hidden patterns without predefined labels), and reinforcement learning (where models improve through feedback over time). The more diverse and representative the data, the more reliable the model’s predictions. A foundational principle of ML is its ability to generalise, that is, to apply learned patterns to new, unseen situations. However, this generalisation is only as robust as the quality of the data and the model’s ability to avoid overfitting.
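To make the distinction between memorisation and generalisation concrete, the sketch below trains a supervised classifier on labelled data and then compares its accuracy on the training set against a held-out test set. This is a minimal, illustrative example assuming Python with scikit-learn; the dataset and model are arbitrary choices for demonstration, not a recommendation.

```python
# A minimal supervised-learning sketch (Python + scikit-learn, illustrative).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled data: pixel features X, digit labels y.
X, y = load_digits(return_X_y=True)

# Hold out unseen examples so we can measure generalisation,
# not just memorisation of the training set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A large gap between these two scores is a classic sign of overfitting:
# the model has learned the training data more faithfully than the
# underlying pattern.
print("training accuracy:", model.score(X_train, y_train))
print("test accuracy:    ", model.score(X_test, y_test))
```

The held-out test set plays the role of the “new, unseen situations” described above: a model is only as useful as its performance on data it was never trained on.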
Another essential aspect of ML is its iterative improvement. Most models are trained through optimisation methods such as backpropagation and gradient descent, mechanisms that refine the model by progressively minimising prediction error. These models are continuously retrained with new data to remain relevant and accurate in dynamic settings. The aim is to automate decision-making with speed, consistency, and, ideally, greater accuracy than humans can provide, especially in environments saturated with data.
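As a concrete illustration of this optimisation loop, the following sketch fits a simple linear model by gradient descent: at each iteration, the parameters are nudged a small step in the direction that reduces the mean squared error. It assumes Python with NumPy and is purely illustrative; backpropagation applies the same idea to multi-layer networks by propagating error gradients backwards through each layer via the chain rule.

```python
# A minimal gradient-descent sketch (Python + NumPy, illustrative):
# fit y ≈ w*x + b by repeatedly nudging w and b against the error gradient.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # noisy data from y = 3x + 1

w, b = 0.0, 0.0
lr = 0.1                                      # learning rate (step size)
for step in range(1000):
    pred = w * x + b
    error = pred - y
    # Gradients of mean squared error with respect to w and b.
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Move each parameter a small step in the direction that reduces error.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")        # approaches w ≈ 3, b ≈ 1
```

Each pass through the loop is one iteration of refinement; “retraining with new data”, as described above, simply means running this kind of optimisation again as fresh observations arrive.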
However, placing machine learning into the heart of organisational life demands more than technical efficacy. It must be understood through a sociotechnical lens. According to sociotechnical systems theory, organisations are composed of two interdependent subsystems: the technical (tools, systems, procedures) and the social (people, roles, culture). ML, while a technological advancement, cannot be effectively integrated without considering its disruptive influence on social systems. It often changes who makes decisions, how work is allocated, and how performance is judged. These shifts have profound consequences for employee identity, agency, and workplace dynamics.
A key concern raised by sociotechnical theory is that misalignment between the social and technical subsystems can create dysfunction. For instance, introducing an AI hiring tool without employee involvement or transparency can lead to decreased trust, resentment, and even resistance. The most effective implementations of ML, then, are not simply technical upgrades—they are co-designed processes that consider human behaviour, communication norms, and cultural readiness.
Human factors psychology provides further insight into why machine learning systems may generate friction. Cognitive load theory suggests that complex, poorly designed interfaces, or systems that demand constant interpretation, can overwhelm users. When employees are tasked with monitoring or interpreting ML outputs, especially without training, they may either rely on the system blindly (automation bias) or reject it altogether (algorithm aversion). These reactions are rooted in cognitive limitations. In dual-task environments, where employees must both perform their job and oversee AI-generated suggestions, performance can suffer due to divided attention and mental fatigue.
Psychological trust in ML is not automatic. According to Lee and See’s (2004) model of trust in automation, users are more likely to trust a system that is perceived as reliable, understandable, and aligned with their goals. When these criteria are not met, users may undertrust the system and bypass it, even when it is functioning accurately. Conversely, overtrust can lead to dangerous complacency, where users fail to question incorrect outputs. Trust must be calibrated through careful design and feedback mechanisms.
The literature on resistance to change also provides a helpful framework. ML systems, by automating decisions previously made by humans, often create change that employees neither initiate nor control. This lack of agency can lead to high levels of resistance, particularly when ML is perceived as threatening job security, status, or professional autonomy. Organisational psychologists must be attuned to these dynamics and take steps to facilitate buy-in, such as involving employees in the implementation process and offering clear, evidence-based justifications for change.
ML implementation disrupts established patterns of human behaviour in the workplace by altering decision authority, workflows, and the perceived value of human expertise. Resistance to this change is not simply obstinacy; it is often a rational response to uncertainty, identity threat, or fear of obsolescence. Classic I/O psychology literature, such as Lewin’s Change Theory and Oreg’s work on resistance to change, underscores the psychological costs of imposed transformation. Employees may experience a loss of control, learned helplessness, or cognitive dissonance if new systems conflict with their values or role identity.
Organisational psychologists can play a pivotal role by designing and facilitating psychologically safe change processes. This includes:
· Conducting readiness assessments to gauge emotional and cognitive reactions to ML adoption
· Framing change narratives that highlight augmentation rather than replacement, preserving employee identity and status
· Involving employees in shaping how ML is introduced—co-designing tasks, feedback loops, and decision boundaries
· Providing training and growth pathways that reaffirm competence and future relevance
· Using data ethically and transparently to ensure trust in how decisions are made and monitored
When behavioural change is viewed not just as a compliance issue but as a psychological process, I/O psychologists can ensure that ML systems support, rather than erode, individual wellbeing and organisational culture.