Abstract
Across vocational education institutions in England and Australia, educators are adopting artificial intelligence (AI) out of necessity rather than through policy directive. Staff managing 47-hour marking loads within 36.25 paid hours have discovered that AI tools can reduce administrative time by up to 80%, even when their use operates beyond formal policy frameworks. This study utilises uncertainty reduction theory to explore how vocational education and training (VET) leaders engage with proactive (anticipating future possibilities) and retroactive (interpreting observed behaviours) processes to navigate technological disruption. Through qualitative semi-structured interviews, the research investigates how technological uncertainty intersects with broader sector challenges, including recruitment, workload, and professional recognition. Analysis reveals leaders managing complex information flows about technology adoption occurring outside formal channels. With teaching staff averaging 55–57 years of age, leaders describe facilitating information exchange between generations, with some educators lacking fundamental computer skills whilst others bring industry-derived technological confidence. VET administrators recognise educator resilience emerging through crisis-driven technological adaptations, despite persistent structural constraints. The research demonstrates organisational uncertainty management through recursive cycles linking observation with planning. Successfully integrating AI requires balancing informal experimentation with formal compliance, protecting staff whilst maintaining regulatory adherence within risk-averse cultures. Addressing technological uncertainty and structural workforce challenges must occur simultaneously.
Keywords: Vocational education and training, uncertainty reduction theory, artificial intelligence, teacher shortage, technological disruption
1 Introduction
The vocational education and training (VET) sector is standing at a crossroads, caught between technological acceleration and a workforce system in crisis. In both England and Australia, teachers describe workloads that far exceed contractual expectations. Reports document staff working beyond contracted hours, marking and preparing lessons without remuneration (Department for Education 2018). Ninety-six percent of English VET organisations report difficulties filling essential posts (Association of Colleges 2022), and Australian research points to shortages across nearly every discipline (Tyler & Dymock 2021). Industry continues to offer higher salaries (Tully 2023). These factors are compounded by a persistent societal perception that, as Billett (2020, 164) observed, vocational teaching endures a “profound, persistent” lack of esteem that leaves the profession invisible in mainstream educational debate. In this environment, artificial intelligence (AI) has entered daily practice not through national strategy but through the pragmatic adaptations of teachers seeking to keep pace under duress: a lecturer tests an automated-marking tool; another drafts materials using text generation to reduce paperwork.
Although the literature on teacher shortages (Misselke et al. 2024; Smith 2024) and educational technology adoption is growing, little of it explores how leaders manage uncertainty when these pressures converge. The convergence of workforce crisis and technological opportunity has produced more ambiguity than certainty. The World Economic Forum (2023) predicted that 44% of the global labour force will need reskilling within five years, while vocational educators themselves are projected to be among the fastest-growing professional groups between 2023 and 2027. This paradox reflects what Bakhshi et al. (2023) term a double transformation: teachers must prepare others for AI-shaped workplaces even as they adapt their own professional practice to the same technologies. Earlier waves of automation displaced routine work; generative AI now reaches into complex and creative domains (Productivity Commission 2024). For VET providers, the challenge is no longer only what to teach, but how educator identity transforms when part of expert judgement is delegated to machines.
Research points to positive attitudes towards AI but limited implementation (Ridzuan & Junaidi 2023). This study addresses that gap by applying uncertainty reduction theory (URT; Berger & Calabrese 1975) to examine how VET leaders in England and Australia interpret and respond to technological disruption during structural crisis. URT views uncertainty as something managed through communication and sense-making, encompassing both proactive and retrospective processes. Drawing on qualitative interviews, the analysis traces how leaders observe grassroots AI adoption, what strategies they use to balance informal experimentation with institutional responsibility, and how demographic and policy contexts shape organisational responses. The question is not simply how uncertainty is endured, but how it becomes a routine part of leadership work.
2 Background and Theoretical Framework
VET providers now operate in a state of what Weick and Sutcliffe (2015) describe as chronic crisis. Long-term stress, limited funding, and constant reform have become normalised conditions. To understand technological uncertainty within this context, one must first examine the structural challenges that underpin it. Guthrie et al. (2017) and Billett (2020) trace many of these challenges to underinvestment, inadequate professional development, vague career pathways, and a policy position that has marginalised VET within mainstream education. In England, vacancies persist in high-demand STEM areas, producing what Hodgson and Spours (2015) call patchwork provision: learning quality that depends as much on postcode as policy. This produces not only inequality but exhaustion; leaders speak of fatigue as a structural condition rather than a passing phase.
Australia’s version of the problem presents similar patterns despite different contexts. Casual and sessional contracts have multiplied since the 1990s (Nakar 2025; Robertson 2008), splintering the workforce. Many teachers work outside their original trade, caught between professional conscience and compliance. Unsurprisingly, morale and well-being suffer (Nakar 2019; Nakar & Du Plessis 2023). Clayton et al. (2015) linked this situation to competitive funding formulas that reward efficiency while demanding evidence of performance, creating systems that appear lean yet run permanently close to collapse. Black and Yasukawa (2014) named this condition sustainable unsustainability, a paradox that resonates throughout the sector. Added to this is the expectation that teachers remain both craft experts and pedagogical specialists. That dual professionalism, while admirable, leaves them vulnerable to the lure of higher-paid industry roles (Wheelahan & Moodie 2011).
These structural challenges interact with policy environments characterised by instability and frequent reform. In England, Coffield et al. (2008) warned that reform fatigue erodes institutional learning; the evidence since has confirmed this analysis. Fletcher and Perry (2017) observed how colleges, constrained by funding cuts, reduced staffing and narrowed curricula simply to maintain viability. Australia’s policy cycles differ in form but not in effect, characterised by contestable funding, regulatory tightening, and repeated restructuring (Noonan 2016). Neoliberal governance threads through both. Ball (2012) described performativity regimes that force constant proof of productivity; Gleeson and James (2007) placed VET within new public management logics that promote entrepreneurialism while obscuring under-resourcing. Nakar and Olssen (2021) described Australian leaders facing ethical compromises between educational purpose and organisational survival. Even the repeated Productivity Commission reviews (2011, 2017) have barely reduced the volatility. Argyris and Schön’s (1978) concept of single-loop learning (adjustment without critical reflection) captures the situation precisely.
AI arrives into this context as both promise and complication. Simulations, adaptive feedback, and personalised learning represent well-rehearsed possibilities (Bekiaridis & Attwell 2024). Systematic reviews spanning 2011–2020 reveal that while AI research in higher education has grown substantially, empirical studies remain concentrated in computer science and STEM disciplines, with limited attention to vocational education contexts (Zawacki-Richter et al. 2022). More critically, a subsequent review found that educators themselves remain largely absent from AI implementation research, raising concerns about technology-driven rather than pedagogy-driven adoption (Zawacki-Richter et al. 2023). Yet, as practice shows, enthusiasm outruns enactment. Students often welcome the idea of AI but rarely encounter it in class (Ridzuan & Junaidi 2023). Teachers, meanwhile, balance hope with scepticism about readiness (Seufert 2024). Professional development demand is rising fastest around automated marking and planning (Nyaaba & Zhai 2024). Ethical concerns relating to data privacy, bias, and plagiarism remain salient (Bekiaridis & Attwell 2024). To date, AI applications in curriculum design or assessment frameworks remain more unrealised potential than routine practice (Kong et al. 2024).
Grassroots innovation research sheds light on this phenomenon. Von Hippel (2005) observed that users frequently adapt technologies when formal systems cannot keep pace. Silic and Back (2014) called this “shadow IT”: the quiet bending of rules to get work done. In education, such improvisation is rarely subversive; it is a means of coping. Gasser and Palfrey (2012) warned that these ad hoc solutions often collide with formal infrastructures, what they term interoperability challenges. Studies of VET practice are limited but consistent: teachers differ sharply in digital confidence (Bound 2011), often citing lack of time and training (Datnow & Park 2018). Nakar (2025) shows how precarious employment deepens that divide, leaving temporary staff with little continuity to build capability. Informal adoption, therefore, is less a rebellion and more an act of survival.
URT provides an analytical framework for examining how leaders interpret this dynamic. Berger and Calabrese (1975) applied it to first meetings between individuals, yet its logic transfers to organisations. Uncertainty, they argued, is an uncomfortable state that sparks communication aimed at prediction and control. They distinguished cognitive uncertainty (what to think) from behavioural uncertainty (what to do). Later, Berger (1997) refined the idea, separating proactive efforts to anticipate from retroactive efforts to interpret. In VET settings, leaders move constantly between the two: watching new practices appear, then trying to make sense of them after the fact. URT also identifies three broad strategies: passive observation, active inquiry through others, and direct engagement. All three are visible in the data gathered for this study.
Further adaptations of the theory add nuance. Kramer (2004) explored URT within organisational socialisation, identifying structures that either clarify or cloud newcomers’ understanding. Brashers (2001) pushed further, arguing that uncertainty is not always an enemy; sometimes people preserve it for tactical reasons. That insight matters here. Leaders may choose to leave the boundaries of AI use deliberately loose, valuing innovation over regulation. Weick’s (1995) notion of sense-making extends this thinking: people act, then interpret, constructing order retrospectively. Within VET, uncertainty runs across multiple planes: technological, pedagogical, regulatory, and demographic. Ashby’s (1958) principle of requisite variety reminds us that any organisation must develop internal complexity to match its environment. In that sense, the challenge is not to eliminate uncertainty but to work productively within it.
3 Methodology
This study set out to understand how VET leaders in England and Australia make sense of uncertainty around AI adoption during what many now describe as crisis conditions. The research followed an interpretivist tradition, grounded in the assumption that knowledge is co-constructed through human meaning rather than discovered as fixed fact (Creswell & Poth 2018). Understanding organisational behaviour, therefore, requires close attention to how leaders themselves talk about their experiences: what they emphasise, question, or leave unsaid.
The interview protocol was developed for a broader study examining VET leadership in England and Australia. Questions were refined through expert review by three academics specialising in VET research and qualitative methodology, then piloted with one VET leader to ensure effective elicitation of leadership narratives. The protocol explored multiple dimensions of VET leadership including workforce challenges, policy implementation, and organisational change. During analysis, themes relating to uncertainty management and AI adoption emerged as particularly salient. This paper represents focused thematic analysis of those emergent patterns, applying uncertainty reduction theory (Berger & Calabrese 1975) retrospectively as an analytical framework to examine how leaders navigate ambiguity around technology adoption during crisis conditions.
A qualitative design suited these aims. Sixteen semi-structured interviews were conducted online between March and September 2025, each lasting between 60 and 90 minutes. Participants were drawn from VET colleges, TAFE institutes, private training providers, and registered training organisations. The sample (eight from England and eight from Australia) covered principals, deputy principals, curriculum managers, and senior teaching-and-learning directors. Most (11 of the 16) had worked as teachers, bringing a dual perspective as both practitioners and leaders. Their experience ranged from 8 to 34 years, which provided a rich temporal view of sector change.
Selection followed purposive sampling to ensure breadth of perspective. All participants held leadership responsibility for teaching, learning, or staff management in institutions delivering government-funded VET programmes and had at least 2 years of leadership experience. The small but balanced cohort allowed for detailed comparison across national systems without losing contextual depth. Interviews invited participants to reflect on how AI tools were appearing in their organisations, how decisions were made (or deferred) about their use, and what forms of guidance or governance existed. Prompts encouraged storytelling and reflection rather than yes-or-no answers. Conversations were recorded with consent, transcribed verbatim, and anonymised. Pseudonyms replaced names; identifying details were removed to protect confidentiality. Ethical approval was secured through both participating institutions’ research-ethics processes, and each participant signed an informed-consent statement after receiving full study information.
Data analysis followed Braun and Clarke’s (2006) reflexive thematic approach. Five phases shaped the process: immersion in the transcripts, generation of initial codes, construction of a thematic framework informed by URT, iterative indexing and charting, and synthesis through cross-case interpretation. Themes were refined through repeated reading and discussion until analytic saturation was reached. Reflexivity remained integral throughout; notes were kept after each interview to capture immediate impressions and potential researcher bias. Limitations of the design are acknowledged: the modest sample size, the absence of direct observation of classroom or leadership practice, and the historical specificity of 2025, a period when AI tools were still emerging rapidly and public discourse was fluid. Nevertheless, the data offer a detailed snapshot of leadership sense-making at a pivotal moment.
4 Findings
Analysis of interview data revealed five themes that demonstrate how VET leaders navigate uncertainty surrounding AI adoption in crisis conditions.
4.1 Crisis as Catalyst for Informal Innovation
Across both national contexts, leaders described conditions that had ceased to feel merely difficult and had become, as one participant stated, “a real serious challenge” (Si, England). All participants characterised current working conditions as being at crisis level, using phrases such as “massively challenging” (Xi, England), “It prohibits the capacity of the education system to fuel the economy” (Sarah, England), “a chronic problem” (Ali, Australia), and “a big challenge” (Mike, Australia). Workloads, staffing gaps, and compliance demands converged to create what many called crisis mode. Teachers were described as working 12-hour days, juggling administration, pastoral care, and teaching in ways that simply did not fit within paid hours. One English participant explained, “The admin side is too much” (Xi, England); another in Australia said bluntly, “People are leaving; the load is unbearable” (Ase, Australia). One Australian participant described workloads where “you are there from 7:00 in the morning till 7:00 at night. 12-hour days were not unusual” (Mac, Australia). English participants noted teachers who “work their socks off” facing a “burden of assessment” (John, England).
Into this pressure, AI entered quietly but purposefully. Leaders recounted discovering staff already using tools such as ChatGPT for lesson planning, marking, or generating feedback, often without formal approval. One principal recalled: “I walked past a classroom and saw a teacher grading essays with some sort of AI assistant. Nobody had asked permission. They just needed help.” Another described finding out through casual corridor conversation that several staff had subscribed to premium versions of generative tools using their own funds. The pattern was consistent: crisis created demand, and technology filled the gap before policy could respond. Participants described AI as having the potential to “ease people’s workloads” (Sarah, England; Steve, Australia), with examples including AI supporting formative assessments, contextualising subject content, and helping with planning and marking. Participants noted that AI is seen as a benefit and an improvement to reduce workload (John, Si, Sarah, England; Steve, Usha, Australia).
This informal adoption carried risks. Leaders spoke cautiously about intellectual property, data privacy, and inconsistency in academic standards. Yet they also acknowledged a pragmatic truth: without these tools, some teachers would simply break. One participant connected workload pressures to informal AI adoption, describing how educators facing “47 hours of marking for a 36.25-hour week” (Simon, Australia) turn to AI tools despite organisational policies against their use. On one hand, informal technology adoption occurring outside policy created uncertainties about quality assurance, equity, data protection, and regulatory compliance; on the other, suppressing adoption threatened to eliminate coping mechanisms staff had developed, accelerating the workforce crisis through burnout and departure.
4.2 Observation Without Control: Retroactive Sense-Making
Leaders described managing AI adoption through observation rather than direction. Most lacked comprehensive data about which tools staff were using, how often, or for what purposes. Instead, they pieced together understanding through fragments: a comment in a staff meeting, a question about policy, an email asking for technical support. Leaders described this indirect awareness building as creating uncertainty about the extent and nature of technology adoption. Without systematic information, leaders were left to infer patterns from partial evidence, wondering which staff members were using which tools, for what purposes, with what safeguards, and with what effects. This uncertainty was compounded by recognition that direct enquiry might be unwelcome or unproductive. One Australian participant described how “individual educators, facing immense workload pressures, are using generative text AI daily to manage their tasks” (Simon, Australia). Another Australian participant noted that whilst some organisations have a “blanket” policy against using AI, educators use it anyway to manage impossible workloads (Mac, Australia).
Yet this approach also generated anxiety. Leaders worried about undetected errors, biased outputs, or violations of student privacy. Leaders reflected on why formal information channels had failed to capture technology adoption occurring in practice. Participants identified several contributing factors: the absence of policies addressing AI, which created ambiguity about whether disclosure was expected; staff reluctance to reveal adaptations that might be viewed as rule-breaking; time pressures that made formal reporting seem an additional administrative burden; and cultural norms that privileged teacher autonomy over central oversight. Participants described how remote and flexible working arrangements created conditions where direct observation of teaching practices becomes limited. One Australian participant noted, “The thing is now with all my educators … they all work from home. I get them to come on campus one day a month” (Mac, Australia), creating conditions where leaders must infer practices from outputs rather than from direct observation. The outcome was an atmosphere of tacit awareness, a shared but unspoken understanding that AI use was happening beneath the radar.
4.3 Demographic Divides and Differentiated Responses
Workforce demographics shaped how technological uncertainty played out. Leaders consistently described teaching staff as older, with many in their mid-to-late careers, often close to retirement. Leaders described varied digital capabilities and comfort levels with new technologies across their workforces. Participants noted workforce characteristics, with one English leader stating, “A lot of our staff that come from industry have done the job for a very long time. They come into education at the back end of their career … they are looking to retire” (John, England), whilst an Australian participant observed that VET teaching is “perceived as an ‘older industry’ with an ageing workforce” (Steve, Australia).
This variation created what participants described as differential comfort with technology adoption. Leaders expressed concern about ensuring equitable access to technology support and avoiding creating advantages or disadvantages based on staff technological confidence. One English participant noted concerns about staff who are “scared of AI” and how this “can add stress” (Ronan, England), whilst an Australian participant described the challenge that “some staff may be frustrated if not tech-friendly” (Usha, Australia).
4.4 Structural Constraints on Proactive Planning
Despite the prevalence of reactive sense-making, there were also attempts to act proactively to anticipate challenges and design policy. Participants described drafting guidance papers, piloting AI applications for assessment, or convening small working groups to explore workload reduction. Participants described attempts at proactive approaches, including developing an AI policy that promotes being “welcoming of, not scared of” AI (Ronan, England), “piloting AI for workload support” including “helping with formative assessments, contextualising subjects like maths for vocational areas, developing lesson content prompts” (Ronan, England), and “having a task group working on implementing AI to ease people’s workloads” (Si, England).
Yet such initiatives repeatedly collided with structural barriers. Time, funding, and policy clarity were in short supply. English leaders described waiting for ministerial direction that never arrived; their Australian counterparts spoke of shifting regulatory expectations across states. One English participant noted that whilst AI implementation is underway, “the VET organisation had done very little on AI until recently and is not doing nearly enough yet” (Ronan, England), acknowledging the gap between aspiration and capacity. English participants described how recent policy changes regarding online learning “contradict the potential offered by digital tools and AI” (Asha, England), illustrating how external policy contradictions can undermine momentum and limit organisational capacity for strategic technological planning. Proactive uncertainty reduction, therefore, remained sporadic, hemmed in by the very resource shortages that had fuelled informal adoption in the first place.
4.5 Continuous Cycles: From Resolution to Renewed Uncertainty
A final pattern concerned time and rhythm. Leaders did not describe progressing from uncertainty to certainty. Instead, they spoke of cycles: observe a new tool, interpret its implications, adjust policy, then watch as the next tool arrives and the process begins again. The evolution of technology, changing staff practices, shifting regulatory environments, and persistent crisis conditions created circumstances where leaders maintained continuous awareness rather than achieving stable certainty. One English participant described, “We get an initiative and whatever you think of it for the beginning, good or bad, we go for an initiative, we get some, we start to get some traction, and then we get a change of policy” (Leeza, England), illustrating how evolving external conditions require continuous reassessment. Australian participants noted that the “government’s decision to stop free TAFE funding could happen suddenly”, with one recalling learning on a Monday of a funding cessation decided the previous Friday (Mac, Australia), creating conditions where even recently developed responses require immediate revision.
This continuous adjustment demanded a particular kind of leadership capability: not command, but attentiveness. Leaders developed tolerance for ambiguity, accepting that perfect information was unattainable. Within this ongoing process, participants identified organisational resilience: the capacity to adapt, to learn, and to maintain functioning despite uncertainty and constraint.
5 Discussion
The evidence from this study shows that AI adoption in vocational education is unfolding not through neat stages of strategic planning but through the messy pragmatism of crisis management. Recent research on AI in education confirms this pattern, with Zawacki-Richter et al. (2023) documenting how educators remain largely absent from AI implementation research, raising concerns about technology-driven rather than pedagogy-driven adoption. Leaders describe AI entering their institutions not as a policy initiative but as a lifeline for exhausted staff. This observation challenges traditional innovation models such as Rogers’ (2003) innovation-decision process, which assume an orderly sequence from awareness to trial to institutionalisation. In the VET environment, adoption often begins at the end of that sequence, at the point of necessity. The data reaffirm what Guthrie et al. (2017) termed a structural-deficit condition: systemic under-resourcing that forces innovation as a survival tactic rather than a strategic choice.
This reframing alters how we think about compliance and control. Informal AI use in VET does not fit the idea of “shadow IT” (Silic & Back 2014), where unauthorised tools simply replace authorised ones. Here, teachers are filling functional gaps that organisations cannot close by other means. Coffield et al. (2008) documented how excessive accountability demands exhaust institutional capacity; the present study finds that such pressure also generates a parallel system of pragmatic workarounds. Leaders, aware of this, practise a kind of managed tolerance: keeping formal policy intact while allowing quiet deviation. Institutional theorists would recognise this as decoupling, the coexistence of official narratives and unofficial practice (Meyer & Rowan 1977). Yet the ambiguity is productive as well as problematic; without it, organisations might simply stop functioning.
The communication patterns described by participants highlight URT’s relevance at organisational scale. Leaders rarely possess full information about staff technology use; instead, they piece together meaning from fragments, an exercise in retroactive uncertainty reduction. The process resembles Weick’s (1995) idea of fragile knowing, where understanding depends on social trust rather than formal data. Kramer’s (2004) insight that information sharing requires safety appears apt: staff disclose experimentation only when confident that honesty will not invite punishment. The resulting information environment could be called collaborative ambiguity: everyone knows innovation is occurring, but no one defines it too precisely, a pattern which Brashers (2001) predicted. Sometimes people preserve ambiguity because it allows action that transparency would forbid. In these institutions, maintaining a measured level of “not knowing” has become a collective survival skill.
Demographic variation adds further texture. Ageing workforces and mixed digital confidence complicate leaders’ efforts to create shared approaches. The cautious optimism Seufert (2024) identified among educators reappears here, coupled with Ridzuan and Junaidi’s (2023) observation of a persistent gap between perceived usefulness and practical engagement. These findings align with recent work by Nyaaba and Zhai (2024), who documented significant variations in pre-service teachers’ readiness for AI integration, with digital confidence correlating strongly with prior technology exposure and age demographics. Similarly, Kong et al. (2024) found that developing AI literacy requires sustained, differentiated support rather than one-size-fits-all professional development, reinforcing the need for leaders to navigate diverse staff capabilities. Leaders must therefore manage two layers of uncertainty: the external unpredictability of technology and the internal diversity of workforce readiness. Addressing this second layer requires more than technical training: it demands cultural work, including building trust, facilitating peer learning, and creating time for reflection in a sector that rarely has time for anything.
Attempts at proactive planning repeatedly ran into structural barriers. Resource scarcity, unstable policy, and conflicting mandates limited even the most motivated leaders. These findings question the assumption, common in management literature, that intention guarantees capability. As Black and Yasukawa (2014) noted, VET operates within sustainable unsustainability, a system functioning at permanent stretch. Under such conditions, uncertainty management depends less on vision statements and more on adaptive improvisation. Pardo and Poquet’s (2023) call for sociotechnical alignment (balancing technological ambition with social and resource conditions) illustrates what remains missing. Until those enabling conditions exist, organisational efforts will continue to oscillate between progress and pause.
Perhaps the most striking insight is temporal. Leaders in this study did not describe moving from uncertainty to certainty; instead, they spoke of cycles—observation, interpretation, adjustment, and repetition. URT assumes that uncertainty reduction leads towards stability (Berger & Calabrese 1975). What the present findings show is closer to Weick and Sutcliffe’s (2015) model of continuous awareness: the skill lies in staying alert rather than in reaching closure. Leaders cultivate responsiveness as an organisational capability, accepting that each resolution generates the next question. In this sense, uncertainty is not an obstacle to overcome but a condition to inhabit intelligently. Smith (2024) provides complementary evidence of this pattern in Australian VET, demonstrating how workforce shortage narratives interact with technological disruption to create what he terms “compounding crisis conditions” that normalise improvisation as standard practice. The risk, as Bakhshi et al. (2023) warn in their analysis of the future of skills, is that when systems depend on individual adaptation rather than structural support, inequality deepens between those with resources to manage uncertainty and those without.
Such adaptability, however, should not be mistaken for adequate resourcing. When leaders demonstrate capacity to manage under constrained conditions, policymakers may interpret this adaptability as evidence that additional support is unnecessary (Coffield et al. 2008). Wheelahan and Moodie (2011) remind us that teacher shortages and low status are not natural phenomena but political outcomes. Celebrating organisational flexibility without addressing underlying resource deficits risks perpetuating the conditions that necessitate such adaptation. True innovation in VET will require system-level reform, stable policy horizons, investment in teacher development, and parity of esteem with academic education so that experimentation becomes choice, not necessity.
Theoretically, the study extends URT in several ways. It demonstrates that proactive and retroactive uncertainty reduction often operate simultaneously, not sequentially. It also reveals that uncertainty can generate organisational learning rather than paralysis. Within VET, uncertainty is neither wholly aversive nor wholly strategic: it is routine. The data show how leaders convert flux into a form of attentiveness, a capability for sense-making under constraint. This reconceptualisation aligns with Argyris and Schön’s (1978) account of double-loop learning—organisations learning not only how to act but why they act as they do. In the end, the VET sector’s fragility has forced its leaders into precisely the reflective practice that policy rhetoric so often demands but seldom enables.
This study focused on VET leadership experiences in England and Australia, two nations sharing common neoliberal policy frameworks and English-language colonial educational histories. This geographic focus was deliberately chosen to examine similar systems experiencing similar workforce crises, but future research must extend to diverse global contexts. The modest sample size, whilst sufficient for rich qualitative analysis, limits statistical generalisation. The study relied on interview data rather than direct observation of leadership practice, meaning accounts represent participants’ interpretations of events rather than observed behaviours. The research was conducted during 2025, a period of rapid AI development, meaning findings capture a specific historical moment in technological evolution.
Future research should expand geographical scope to include Asian VET contexts, where different cultural frameworks, technological infrastructure trajectories, and educational governance models may produce distinct patterns of AI adoption and uncertainty management. Countries such as Singapore, South Korea, Japan, and emerging VET systems in Southeast Asia offer valuable comparative cases for understanding how cultural values around authority, innovation, and risk influence leadership responses to technological disruption. Additionally, comparative research across diverse global contexts would illuminate whether uncertainty reduction processes identified in this study represent universal leadership challenges or culturally specific phenomena shaped by Anglosphere policy assumptions. Research examining teacher and student perspectives on AI adoption would complement leadership accounts, whilst longitudinal studies tracking organisational responses over multiple years could reveal how initial uncertainty management strategies evolve into stable practices or generate new forms of ambiguity. Participatory action research involving VET leaders, teachers, and policy stakeholders could co-design interventions addressing the structural barriers identified in this study, testing whether coordinated support across system levels enables proactive rather than reactive uncertainty management.
6 Conclusion
This research set out to examine how VET leaders in England and Australia navigate uncertainty around AI adoption while operating within enduring workforce crises. Five patterns emerged: crisis as catalyst, observation without control, demographic divides, structural constraint, and continuous cycles of adjustment. Together, they depict leadership not as command but as constant translation, turning ambiguity into temporary coherence. The findings challenge the idea that technological change in education proceeds through deliberate strategy, suggesting instead that crisis has become the true incubator of innovation. Teachers adopt AI informally to stay afloat; leaders interpret, adapt, and rebuild policy around these ground-level experiments. Such adaptive practice sustains the system but also exposes its precarity. Without stable funding and workforce renewal, flexibility alone will not suffice.
For countries confronting similar labour shortages and digital transitions, these insights matter. They show that uncertainty management is now a central professional skill, not an afterthought. To sustain genuine innovation, policymakers must create conditions that allow uncertainty to be explored safely rather than merely survived. Future research could extend this work by examining teacher and student experiences of AI use, mapping the informal networks through which knowledge spreads, and identifying mechanisms that turn reactive adaptation into deliberate design. As the boundaries between human expertise and machine capability continue to blur, the experiences of VET leaders offer an instructive lesson: uncertainty is not the opposite of knowledge but its constant companion. The task is to learn to live with it thoughtfully, creatively, and without losing sight of the people doing the work.
References
Argyris, C., & Schön, D. A. (1978). Organisational Learning: A Theory of Action Perspective. Addison-Wesley.
Ashby, W. R. (1958). Requisite variety and its implications for the control of complex systems. In: Cybernetica, 1, 2, 83–99.
Association of Colleges. (2022). AoC college workforce survey summary of findings 2020/21. Online: https://d4hfzltwt4wv7.cloudfront.net/uploads/files/AoC-Workforce-Survey-2020-21-Finaldocument.pdf (retrieved 11.03.2026).
Bakhshi, H., Downing, J., Osborne, M., & Schneider, P. (2023). The future of skills: Employment in 2030. Pearson.
Ball, S. J. (2012). Performativity, commodification and commitment: An I-spy guide to the neoliberal university. In: British Journal of Educational Studies, 60, 1, 17–28. Online: https://doi.org/10.1080/00071005.2011.650940 (retrieved 11.03.2026).
Bekiaridis, G., & Attwell, G. (2024). Artificial intelligence and vocational education and training: Opportunities, challenges and ethical considerations in the post-COVID era. In: Research in Post-Compulsory Education, 29, 1, 75–94.
Berger, C. R. (1997). Planning Strategic Interaction: Attaining Goals Through Communicative Action. Lawrence Erlbaum Associates.
Berger, C. R., & Calabrese, R. J. (1975). Some explorations in initial interaction and beyond: Toward a developmental theory of interpersonal communication. In: Human Communication Research, 1, 2, 99–112. Online: https://doi.org/10.1111/j.1468-2958.1975.tb00258.x (retrieved 11.03.2026).
Billett, S. (2020). Perspectives on enhancing the standing of vocational education and the occupations it serves. In: Journal of Vocational Education and Training, 72, 2, 161–169. Online: https://doi.org/10.1080/13636820.2020.1751247 (retrieved 11.03.2026).
Black, S., & Yasukawa, K. (2014). Working around the official script: Teachers’ literacies of practice in times of change. In: Journal of Educational Administration and History, 46, 3, 287–305.
Bound, H. (2011). Vocational education and training teacher professional development: Tensions and context. In: Studies in Continuing Education, 33, 2, 107–119. Online: https://doi.org/10.1080/0158037X.2010.515572 (retrieved 11.03.2026).
Brashers, D. E. (2001). Communication and uncertainty management. In: Journal of Communication, 51, 3, 477–497. Online: https://doi.org/10.1111/j.1460-2466.2001.tb02892.x (retrieved 11.03.2026).
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. In: Qualitative Research in Psychology, 3, 2, 77–101. Online: https://doi.org/10.1191/1478088706qp063oa (retrieved 11.03.2026).
Clayton, B., Gribble, C., & Jonas, P. (2015). The Nature of Differences Between Government-Funded and International Student Markets: A Study of the International VET Market. National Centre for Vocational Education Research.
Coffield, F., Edward, S., Finlay, I., Hodgson, A., Spours, K., & Steer, R. (2008). Improving Learning, Skills and Inclusion: The Impact of Policy on Post-Compulsory Education. Routledge.
Creswell, J. W., & Poth, C. N. (2018). Qualitative Inquiry and Research Design: Choosing Among Five Approaches (4th ed.). Sage Publications.
Datnow, A., & Park, V. (2018). Opening or closing doors for students? Equity and data use in schools. In: Journal of Educational Change, 19, 2, 131–152. Online: https://doi.org/10.1007/s10833-018-9323-6 (retrieved 11.03.2026).
Department for Education. (2018). College Staff Survey 2018 (Research Report). HMSO.
Fletcher, M., & Perry, E. (2017). The Impact of Changes to 16–19 Funding in England. Education Policy Institute.
Gasser, U., & Palfrey, J. (2012). Interop: The Promise and Perils of Highly Interconnected Systems. Basic Books.
Gleeson, D., & James, D. (2007). The paradox of professionalism in English further education: A TLC project perspective. In: Educational Review, 59, 4, 451–467.
Guthrie, H., McNaughton, A., & Gamlin, T. (2017). Initial teacher education for VET teachers: Preparing teachers for VET in schools and for young people at risk. In: International Journal of Training Research, 15, 2, 136–150.
Hodgson, A., & Spours, K. (2015). An ecological analysis of the dynamics of 14–19 education systems in England: From weakly collaborative arrangements to strongly collaborative local learning ecologies. In: Journal of Education and Work, 28, 1, 41–61. Online: https://doi.org/10.1080/13639080.2013.805186 (retrieved 11.03.2026).
Kong, S. C., Cheung, W. M. Y., & Tsang, O. (2024). Evaluating an artificial intelligence literacy programme for developing university students’ conceptual understanding, literacy, empowerment and ethical awareness. In: Educational Technology & Society, 27, 1, 16–30.
Kramer, M. W. (2004). Managing Uncertainty in Organisational Communication. Lawrence Erlbaum Associates.
Meyer, J. W., & Rowan, B. (1977). Institutionalised organisations: Formal structure as myth and ceremony. In: American Journal of Sociology, 83, 2, 340–363. Online: https://doi.org/10.1086/226550 (retrieved 11.03.2026).
Misselke, L., Schmidt, T., Nakar, S., & Khan, S. I. (2024). Who will teach that class? Perspectives on teacher shortages from English and Australian vocational education and training sectors. In: Education + Training. Advance online publication.
Nakar, S. (2019). Impact of ethical dilemmas on wellbeing of teachers in vocational education and training in Queensland, Australia. In: International Journal of Training Research, 17, 1, 35–49. Online: https://doi.org/10.1080/14480220.2019.1602122 (retrieved 11.03.2026).
Nakar, S. (2025). Understanding Ethical Dilemmas Faced by the Casual Workforce in Vocational Education and Training. In Harris, J., Spina, N., Smithers, K., Blackmore, J. & Gurr, S. K. (eds.): Casualisation, the Gig Economy, and Piece Work in Education: Dilemmas for Leaders in Times of Increasing Precarity. Routledge, 61–89. Online: https://doi.org/10.4324/9781003511144 (retrieved 11.03.2026).
Nakar, S., & Du Plessis, A. (2023). Teaching out-of-field in vocational education and training in Australia: Implications for teacher wellbeing. In: International Journal of Training Research, 21, 2, 156–174.
Nakar, S., & Olssen, M. (2021). The effects of neoliberalism: Teachers’ experiences and ethical dilemmas to policy initiatives within vocational education and training in Australia. In: Policy Futures in Education, 19, 8, 927–947. Online: https://doi.org/10.1177/14782103211040350 (retrieved 11.03.2026).
Noonan, P. (2016). VET Funding in Australia: Background, Trends and Future Directions. Mitchell Institute.
Nyaaba, M., & Zhai, X. (2024). Preparing teachers for AI: Investigating pre-service teachers’ understanding and perceptions of artificial intelligence in education. In: Education and Information Technologies. Advance online publication.
Pardo, A., & Poquet, O. (2023). Learning analytics adoption: Revisiting the diffusion of innovations model to inform policy and practice. In: LAK23: 13th International Learning Analytics and Knowledge Conference. ACM, 1–11. Online: https://doi.org/10.1145/3576050.3576052 (retrieved 11.03.2026).
Productivity Commission. (2011). Vocational Education and Training Workforce (Research Report). Australian Government.
Productivity Commission. (2017). Shifting the Dial: 5-year Productivity Review (Inquiry Report No. 84). Australian Government.
Productivity Commission. (2024). 5-year Productivity Inquiry: From Learning to Growth (Inquiry Report Vol. 6). Australian Government.
Ridzuan, F., & Junaidi, J. (2023). Exploring the acceptance of artificial intelligence in vocational education: A study using the technology acceptance model. In: Journal of Technical Education and Training, 15, 2, 45–58. Online: https://doi.org/10.30880/jtet.2023.15.02.004 (retrieved 11.03.2026).
Robertson, I. (2008). VET teacher education: A case study of casualisation? In: Education Research and Perspectives, 35, 1, 65–83.
Rogers, E. M. (2003). Diffusion of Innovations (5th ed.). Free Press.
Seufert, S. (2024). Artificial intelligence in vocational education: Opportunities and challenges from the perspective of educators. In: International Journal for Research in Vocational Education and Training, 11, 1, 1–21. Online: https://doi.org/10.13152/IJRVET.11.1.1 (retrieved 11.03.2026).
Silic, M., & Back, A. (2014). Shadow IT: A view from behind the curtain. In: Computers & Security, 45, 274–283. Online: https://doi.org/10.1016/j.cose.2014.06.007 (retrieved 11.03.2026).
Smith, E. (2024). The narrative of a VET workforce shortage in Australia: Reality, myth or opportunity? In: Education + Training, 66, 5, 494–509.
Tully, P. (2023). Mission impossible? A strategic approach to improving the recruitment and retention of further education and training teachers in England. In: Education + Training, 66, 5, 510–523.
Tyler, M., & Dymock, D. (2021). Attracting Industry Experts to Become VET Practitioners: A Journey, not a Destination. National Centre for Vocational Education Research.
Von Hippel, E. (2005). Democratising Innovation. MIT Press.
Weick, K. E. (1995). Sensemaking in Organisations. Sage Publications.
Weick, K. E., & Sutcliffe, K. M. (2015). Managing the Unexpected: Sustained Performance in a Complex World (3rd ed.). Wiley.
Wheelahan, L., & Moodie, G. (2011). Rethinking Skills in Vocational Education and Training: From Competencies to Capabilities. NSW Board of Vocational Education and Training.
World Economic Forum. (2023). The Future of Jobs Report 2023. Online: https://www.weforum.org/publications/the-future-of-jobs-report-2023/ (retrieved 11.03.2025).
Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2022). Artificial intelligence in higher education: A systematic review of empirical research from 2011 to 2020. In: International Journal of Educational Technology in Higher Education, 19, 1, 1–27.
Zawacki-Richter, O., Marín, V. I., Staubitz, T., Bond, M., & Gouverneur, F. (2023). Systematic review of research on artificial intelligence applications in higher education: Where are the educators? In: International Journal of Educational Technology in Higher Education, 20, 1, 1–25.