Who Gets Left Behind in an AI-Driven Evaluation World?

“Can you all please stop using AI? There’s going to be no jobs left for us.”

That was the comment from a friend’s teenage daughter last year, watching a group of adults “experimenting” with new AI features. And it’s stayed with me ever since.


We talk a lot about what AI can do in evaluation.

- Transcribe interviews.
- Speed up analysis.
- Summarise data.
- Draft reports.

And those things matter.

But we’ve been wondering if we’re talking enough about what it might undo. If we automate the tasks that help us develop research skills, what does using AI mean for the next generation of researchers?

Because many of the tasks AI is starting to replace are the sort of tasks early-career evaluators rely on to learn.

- Coding transcripts.
- Spotting patterns.
- Writing first drafts.

The messy, hands-on work where you build judgement over time.

So if those opportunities disappear, where do people learn?


And this isn’t just about skills; it’s about access and representation.

Because the evaluation sector already struggles with diversity.

If entry-level roles shrink, who gets locked out?

Whose perspectives are we losing before they even have a chance to shape the field?

We don’t think the answer is to reject AI.

But we do think it’s worth considering what this means for the next generation of evaluators, and how we create opportunities for young people to even know that careers in evaluation and broader social research exist.


If we’re not careful, we risk building a profession that’s faster, but even less diverse than it is today.

We’re really keen to connect with others thinking about this.

Tell us:

  • How are you thinking about this in your organisation or practice?

  • Are you seeing changes already in entry-level opportunities?

  • And what would it take to protect, or even reimagine, those pathways?

Email jami@brightimpact.co.uk