Intentionality, transparency and context matter when adopting AI, explains Chris Webb, a Career Consultant at The University of Huddersfield with a specialist interest in AI.
Dr. Louise Rutherford and Ben Williams from Unseen Group, Dan Brieger, Head of Talent Assessment at Legal & General, and I delivered a session at the ISE Student Recruitment Conference, entitled ‘AI at the Coalface’, which looked at what education providers and employers might need to consider to prepare early career professionals for an AI-imbued workplace.
The conversations that came out of the conference and my work in this space have repeatedly taken me back to the question that is currently being asked by many universities and employers alike – how do we make AI adoption work for us?
Differing agendas
Part of the challenge of AI adoption in the early careers space is that there are a range of different agendas at play.
From a business perspective, organisations are seeking to capitalise on the potential efficiency and cost-saving gains (Fortune, August 2025). Read ISE’s article on how employers are using AI to recruit.
For education providers, academic integrity has been a chief concern, alongside conflicting perspectives (Leon Furze, July 2025) regarding the best way to prepare students to inhabit a world where AI use is part and parcel of everyday life.
There are also early career professionals - students, graduates, school leavers, apprentices - who are being forced to contend not only with the presence of AI in their studies and day-to-day lives but also in relation to job opportunities.
Some of the media and social media scaremongering about AI’s role in recruitment processes and its potential impact on graduate job vacancies (The Guardian, June 2025) is not always an accurate reflection of the reality facing new entrants to the labour market, as reflected in ISE and Group GTI research – have graduate jobs really nosedived?

Image created by Chris Webb, 2024.
What do we mean by ‘AI’?
To make matters even more confusing, there is also the ongoing challenge of figuring out what we even mean by the term ‘AI’, given its catch-all use by many individuals.
Many businesses have been using technology like machine learning to enhance work processes for far longer than GenAI has been in the headlines (Forbes, July 2023).
However, it’s much trickier to find widespread, scalable and genuinely valuable use cases for GenAI that aren’t based on the activity of a skilled employee who understands both the tech and how to get the most out of it for their specific context (Fortune, August 2025).
Strategic approach
With all this in mind, it’s hardly a surprise that many individuals and organisations are still grappling with the question of how to make AI adoption work for them.
Despite uncertainty over where this technology is going and uneven implementation across industries and businesses (ONS, 2023), the near ubiquitous presence of GenAI in our daily lives and the ever-increasing number of users amongst younger generations (The Guardian, July 2025) necessitates some form of strategic thinking when it comes to AI adoption. Even a ‘direction of travel’ or ‘stated approach’, rather than a rigid set of policies, is useful.
One neat example is the first-of-its-kind Community Charter on Data and AI that has been created in the Liverpool City Region (Liverpool City Region Combined Authority, July 2025) with input from members of the public. It focuses on guiding principles like Security, Accountability and Transparency that need to be taken into consideration when using AI and big data for projects that impact citizens across the region.
This is similar to what we’ve seen from some businesses who’ve publicly outlined their stated approach to AI use (EY, 2023).
Practical considerations
A ‘charter’ or set of ‘principles’ concerning AI adoption is only really one part of the picture. Here are other practical considerations:
- An organisation’s ethical stance on AI adoption - e.g. will we aim to have token limits or publish our energy usage associated with AI?
- IT governance policies relating to the use of any enterprise AI technology - on a smaller scale, are we taking a tool audit to see which AI products staff are actually using for their work?
- An approach to R&D concerning AI technology - e.g. do we have an R&D team focusing on small-scale pilots or evaluating the impact of AI implementation and if so, what benchmarks are we working towards?
- A stated intention for how we want to talk about AI adoption with staff and clients - does transparency around AI use look different when addressing these specific stakeholders?
- A framework for supporting teams, services and staff within an organisation to develop the awareness and competence with AI tools needed to support the work being undertaken - how do we want to train or empower people to use AI in a way that is relevant to our business?
Part of the challenge with helping staff or early career professionals establish a baseline understanding of AI is:
- This is a frontier technology and therefore there are a limited number of ‘experts’ who can confidently deliver professional development on this topic.
- To be an effective user of AI, individuals require time and an ability to develop capability in both AI literacy (understanding and critically analysing AI systems and tools) and AI fluency (the ability to use AI systems and tools safely, effectively and responsibly), not just a one-off training session (Marc Watkins / Rhetorica, July 2025).
- Context matters – for example, a marketing team is likely to use AI quite differently to the way that a finance team might, for a variety of reasons (quality or accuracy of data, regulatory requirements, nature of work, usefulness of applications etc.)
For strategic, long-term approaches to AI to work, both for individuals and organisations, frameworks that encompass both the big picture and digital upskilling are necessary.
Framework for AI adoption
Career development professionals like Leigh Fowkes and myself have been using The Foresee Framework (Fowkes, L. & Webb, C., 2024), which helps teams and services to reflect on their level of comfort and understanding of AI across four domains – confidence, competence, criticality and context – and to identify collective and individual training needs related to their specific work.
Meanwhile, Cato Rolea (ECCTIS), Dr. Tom Newham and Dr. Claudia Bordogna (Nottingham Trent University) have been developing the CAREERGenAI Framework to help frontline colleagues integrate GenAI directly into career development and employability practices.

The Foresee Framework – created by Fowkes, L. & Webb, C. (2024)
While many organisations will decide that they are best placed to determine what sort of framework or approach works best, it’s becoming increasingly clear that businesses and individuals cannot simply wait for someone else to come up with a generalised solution to help them adopt AI.
Intentionality, transparency and context matter, and understanding the perspectives of education providers, businesses, talent acquisition teams and young professionals is going to be key to ensuring that AI adoption works effectively for all of the stakeholders.