A Lack of Trust in Artificial Intelligence in New Zealand Workplaces: A Human Factors and Organisational Psychology Perspective

As artificial intelligence (AI) technologies become increasingly integrated into workplace systems worldwide, questions about trust have moved from the periphery of technical design to the centre of organisational and psychological concern. Nowhere is this more evident than in the New Zealand context, where AI uptake remains cautious and inconsistent. While global reports such as KPMG’s Trust in Artificial Intelligence: Global Insights 2025 highlight growing usage of AI, they also reveal a persistent and widening trust gap between human users and AI systems. This paper contends that in New Zealand, we do not so much mistrust AI as we fail to trust it appropriately. That is, trust has not yet been cultivated because the organisational, cognitive, and cultural conditions necessary for its formation have not been met. Drawing on theoretical frameworks from human factors and industrial-organisational (I-O) psychology, including the three-layered model of trust developed by Hoff and Bashir (2015), this paper explores the dispositional, situational, and learned components of trust that are currently lacking in New Zealand workplaces.

Trust in AI: Conceptual Foundations

Trust in automation has long been a central concern in human factors research, particularly in contexts where human-machine teaming is essential. Lee and See (2004) defined trust in automation as "the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability" (p. 54). This formulation highlights both the cognitive and emotional components of trust, which must be carefully calibrated to the actual capabilities and reliability of the system in question. Miscalibrated trust leads either to misuse (overreliance stemming from overtrust) or to disuse (rejection stemming from undertrust), both of which can degrade performance, increase risk, and undermine safety.

Hoff and Bashir (2015) expanded this work through their three-layered model of trust in automation, identifying dispositional trust, situational trust, and learned trust as interdependent elements that influence how users relate to AI systems. Dispositional trust refers to an individual’s general propensity to trust automation, shaped by stable characteristics such as culture and personality. Situational trust is context-dependent, influenced by environmental factors such as task complexity, workload, and risk. Learned trust is developed through direct experience with a system and reflects an evolving assessment of its reliability, transparency, and performance. In the New Zealand workplace context, all three layers of trust are currently underdeveloped or misaligned, resulting in low levels of trust that constrain the effective deployment of AI.

Dispositional Trust: Cultural Conservatism and Technological Reluctance

New Zealanders have historically exhibited high levels of interpersonal trust and low tolerance for institutional opacity or overreach (Miller, 2019). While these traits contribute to cohesive, participative workplaces, they also produce cultural conditions that hinder dispositional trust in non-human systems such as AI. The cultural preference for transparency, relational decision-making, and face-to-face accountability clashes with the often opaque and data-driven nature of AI systems. This dynamic reflects the vulnerability at the core of Lee and See’s (2004) definition of trust: judgements grounded not in empirical performance data but in affective discomfort with the perceived inscrutability of the system.

In particular, many New Zealand workplaces, especially small-to-medium enterprises (SMEs), operate within flat hierarchies that rely on mutual engagement and participatory change. The imposition of AI systems without sufficient explanation or collaboration violates these norms, engendering resistance and undermining the formation of baseline trust. In the absence of a national strategy for AI literacy or culturally aligned design, dispositional trust remains stunted.

Situational Trust: Poor Integration and Organisational Ambiguity

Beyond general dispositions, trust also varies with context. In New Zealand, situational trust in AI is frequently undermined by poor integration into existing workflows, lack of role clarity, and ambiguous governance structures. For instance, many workers report being expected to use AI tools (e.g., ChatGPT, Copilot) without adequate training, feedback, or oversight mechanisms. According to the KPMG report, while AI use is increasing, fewer than half of employees can explain how AI is used in their role — a stark indicator of automation without understanding.

From an I-O psychology perspective, this mismatch between expectation and support produces classic symptoms of psychological reactance and cognitive dissonance. Workers may outwardly comply with AI directives while inwardly distrusting their validity, creating a gap between behaviour and belief. Moreover, the erosion of autonomy, competence, and relatedness — three core psychological needs identified in Deci and Ryan’s (1985) Self-Determination Theory — further impairs trust formation and reduces engagement.

Situational trust is also distorted by the lack of clear accountability structures. When responsibility is diffused between human workers and algorithmic systems, uncertainty about blame and consequence breeds risk aversion or complacency. In short, New Zealand workplaces are asking employees to oversee AI systems as a form of dual-task monitoring — but without providing the institutional scaffolding necessary for informed oversight.

Learned Trust: The Absence of Experience-Based Calibration

Perhaps most critically, learned trust — the only form of trust that evolves through direct interaction with a specific AI system — is rarely fostered in New Zealand workplaces. Trust must be earned through transparency, predictability, and demonstrable benefit. However, AI systems are often deployed as turnkey solutions with little user input, minimal feedback loops, and insufficient time for iterative learning.

The result is poor trust calibration. Employees either overtrust AI systems because of an inflated sense of technological authority or undertrust them because of early errors or misalignment with local values. In both cases, reliance becomes distorted. Overtrust leads to automation bias — blind acceptance of AI outputs — while undertrust produces friction, disengagement, and the abandonment of potentially useful tools.

Trust, in this sense, is not absent; it is unformed. The infrastructure for developing learned trust — including robust onboarding, contextual training, user feedback mechanisms, and transparent system auditing — is largely missing from the current AI integration landscape in Aotearoa.

Conclusion

New Zealand’s cautious approach to AI is not irrational; it is a culturally and psychologically informed defence against premature overtrust. However, this caution must now be transformed into strategic trust-building. As Hoff and Bashir (2015) argue, trust in automation cannot be mandated — it must be cultivated through alignment across dispositional, situational, and learned dimensions. Until this alignment is achieved, the country’s workplaces will continue to struggle not with distrust, but with a profound absence of trust readiness.

To move forward, New Zealand organisations must invest in culturally responsive AI literacy programmes, participatory system design, and long-term governance frameworks that clarify responsibility and enable feedback. Only then can trust in AI emerge — not as an assumed input to digital transformation, but as a measurable and sustainable outcome.
