What AI can and can’t do to support early talent wellbeing

23 February 2026

Ahead of ISE’s Apprenticeships Conference, AI specialist, author and speaker Erica Farmer considers whether AI can genuinely support the wellbeing of early talent or if we’re asking it to fix a human problem.

Wellbeing is no longer a side conversation in early careers. It’s front and centre. Graduate and apprentice programmes are under pressure:

  • Economic uncertainty
  • Cost of living concerns
  • Social comparison culture
  • Career anxiety
  • Hybrid working
  • Performance expectations

Add AI disruption into that mix and it’s no surprise that many early career professionals are feeling stretched before they’ve even found their feet.

The question isn’t whether wellbeing matters. The question is whether we’re equipping organisations with the right tools to support it and whether AI has a responsible role to play.

The wellbeing challenge in early careers

Our early career talent population faces a unique set of pressures:

  • Transition shock from education to corporate life
  • Imposter syndrome, amplified by high-achiever environments
  • Always-on digital expectations
  • Uncertainty about the future of jobs in an AI-enabled world
  • Limited psychological safety to admit struggle

Employers are responding with wellbeing policies, EAPs, resilience workshops and mental health first aiders. All important, all necessary and, when supported in the right way, they can make a real, tangible difference to people and organisations.

But here’s the uncomfortable truth: many interventions are reactive. They kick in when someone is already struggling or aren’t available when people need them.

AI, when used thoughtfully, offers the potential to move some of this support upstream. Not to replace human care. Not to automate empathy. But to extend support in practical, scalable ways and to complement the interventions already in place.

Where AI can genuinely help

Used responsibly, AI can support wellbeing in three meaningful ways.

1. Reducing cognitive load

Early career professionals are often overwhelmed by information: onboarding documentation, policies, systems, performance frameworks and more. AI copilots can:

  • Summarise complex information
  • Help prioritise tasks
  • Draft emails or reports
  • Support reflective thinking

That reduction in friction matters. Less time wrestling with admin means more headspace for learning and connection.

2. Creating low-risk reflection spaces

Most of us, especially early career professionals, hesitate to ask ‘basic’ questions for fear of judgement. AI tools can provide:

  • Private rehearsal space before difficult conversations
  • Coaching-style prompts for reflection
  • Support in drafting development plans
  • Scenario practice for feedback discussions

It’s not therapy. It’s not counselling. But it can be a bridge, especially where confidence is fragile.

3. Personalised learning and career development

AI can:

  • Tailor development pathways
  • Spot patterns in engagement data
  • Nudge learners at the right time
  • Provide adaptive support based on needs as a personal coach or tutor

When thoughtfully implemented, this can make early careers programmes feel less generic and more human. Ironically, well-designed AI can make large organisations feel smaller.

Where AI cannot (and should not) replace humans

We need to think really carefully about this and ensure guardrails and supervision are always in place. We need to understand the risks, especially when the bad days start to outnumber the good days. AI can do a lot, but it can’t:

  • Build psychological safety
  • Replace line manager empathy
  • Diagnose mental health conditions
  • Make moral judgements about complex human situations
  • Understand cultural nuance without bias

There is a real danger in outsourcing emotional labour to machines. And if AI becomes a substitute for leadership capability, it will damage trust.

Wellbeing is relational. It’s cultural. It’s systemic. Technology can support the environment, but it cannot be the environment.

The ethical considerations we must not ignore

The conversation becomes even more complex when AI intersects with wellbeing data. Employers need to ask:

  • What data are we collecting?
  • Who has access to it?
  • Is it anonymised properly?
  • Are we creating surveillance under the banner of care?
  • Have we been transparent with employees?

Early career professionals are particularly sensitive to fairness and ethics. Missteps risk long-term reputational damage.

There’s also bias. AI systems are trained on historical data. If past systems embedded inequality, AI can amplify it.

Using AI in wellbeing contexts requires governance, clear boundaries and human oversight.

The bigger question: what are we trying to solve?

Sometimes organisations reach for AI because it feels innovative and can feel personal because of the way it’s trained. You can speak to it, personalise it, even have it refer to itself by a nickname. How cute! But the real questions are:

  • Are we trying to fix workload problems with chatbots?
  • Are we masking leadership gaps with automation?
  • Are we confusing efficiency with care?

AI can reduce pressure. It can personalise support. It can provide scalable tools. But it cannot compensate for poor management, unrealistic performance expectations or toxic culture.

Wellbeing strategy still starts with leadership behaviour, job design and psychological safety. AI should sit inside that ecosystem and not attempt to replace it.

Early careers leaders and professionals need to continue to support students, apprentices and graduates by teaching them how to support their own wellbeing and mental health.

AI should not be a substitute for teaching these fundamental skills, but rather support them by amplifying and explaining how to use tools and techniques that support wellbeing, such as journaling and mindfulness.

Moving from hype to responsible practice

The opportunity here is not to ask whether AI is ‘good’ or ‘bad’ for wellbeing. It’s to ask:

  • How do we design AI-enabled support that strengthens, rather than weakens, human connection?
  • How do we equip early career professionals to use AI in ways that build resilience rather than dependency?
  • How do we train managers to understand both the potential and the limitations?

These are the conversations that matter.

In summary, AI is not the solution to the wellbeing crisis. But used wisely, it can be part of a more human, more sustainable approach to supporting early talent. Strategies are explored in my forthcoming book AI for People Professionals.

The future of wellbeing won’t be purely technological. And it won’t be purely human either. It will be about how intelligently, and ethically, we blend the two. If we get that balance right, we don’t just support student wellbeing, we strengthen the foundations of the workforce itself.

You can hear more from Erica at ISE’s Apprenticeships Conference on 4 March where, in partnership with Gen Healthy Minds and Hewlett Packard, she will explore:

  • Practical use cases that support career development
  • Boundaries that protect psychological safety
  • Ethical guardrails that maintain trust
  • Tools you can implement immediately
