Podcast | AI In L&D: How To Move Beyond The Gimmicks

Author:
Gary Stringer
PUBLISHED ON:
September 21, 2023
PUBLISHED IN:
Podcast

The buzz around AI for L&D has never been higher, but the conversation is surface-level and dominated by gimmicky ideas.

This conversation goes beyond that, diving into the challenges and practicalities of using AI for L&D. From the ethics to the context and the core principles you need to get it right.

Watch the episode

Listen to the episode

Running order


0:00 Intro to the show
1:12 Meet Egle
5:28 Why is organisational context so crucial?
12:03 Lessons from speaking to business leaders
16:35 Why motivation and environment are key
23:11 Privacy, bias and the ethics of AI in L&D
31:46 Does more information mean better decision making?
35:11 How is AI currently perceived in your business?
39:23 Audience Q&A

Five big lessons on using generative AI in L&D


Egle explained why she’s so passionate about the topic of AI and L&D, and it's really useful context before we dive into our five big takeaways.

“Suddenly, AI went from being this fun little thing to a useful tool with the potential to actually cut the time from idea to execution by an order of magnitude.

“So my ears naturally perked up and it was clear that this technology would have massive implications for our industry and learning in general.” - Egle Vinauskaite

But how do we turn that potential into action and what are the challenges?

1. Organisational context is crucial for AI and learning to work together


In this LinkedIn post, Egle discussed the importance of context, explaining that:

“Using AI for the sake of it, without a clear 'why', won't necessarily enhance your learning experience or its production process. In fact, if done haphazardly, it might disrupt your workflows and output entirely.” - Egle Vinauskaite

Egle explained the early reality check she got when speaking with L&D pros about how AI repositories and tools could be useful for them:

“Yeah this is exciting, but we cannot use it.”

That reluctance came down to a reliance on internal content and privacy rules preventing them from using ChatGPT, a lack of data infrastructure ruling out data analysis, or a very conservative company culture.

The reality was that these organisations either weren’t ready for such rapid change or needed a higher level of sophistication in functionality, integrations and security.

Egle gave us a great example of the need for context in AI: the chatbot.

Simple on the surface: we ask a question, we get an answer.

But what if there’s an internal company procedure I need to follow to perform the skill?

Could the bot actually surface that procedure?

A lot of the time, the answer is no, so the content remains generic.

At the same time, there’s the chance that people are getting ambiguous or even slightly weird answers - there’s certainly no consistency either - and we have to ask if we’re comfortable with that.
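
To make the contrast concrete, here’s a minimal, hypothetical sketch of the difference between a generic chatbot answer and one grounded in an organisation’s own procedure. Nothing here comes from the episode: the toy document store, the example procedure text and the `ask_model` stub are all assumptions, standing in for whatever internal systems and approved model an organisation actually uses.

```python
# Hypothetical illustration: grounding a chatbot answer in an internal procedure.
# The "document store" here is just a dict; in practice it might be a wiki,
# an LMS or a vector database. The point is that the bot only stops being
# generic once it can see the organisation's own content.

INTERNAL_PROCEDURES = {
    "expense claim": (
        "Submit receipts via the Finance portal within 14 days, "
        "then have your line manager approve the claim."  # assumed example text
    ),
}

def build_prompt(question: str) -> str:
    """Attach the relevant internal procedure to the question, if one exists."""
    for topic, procedure in INTERNAL_PROCEDURES.items():
        if topic in question.lower():
            return (
                f"Answer using this internal procedure:\n{procedure}\n\n"
                f"Question: {question}"
            )
    # No internal context found: the model can only give a generic answer.
    return f"Question: {question}"

def ask_model(prompt: str) -> str:
    """Stub: wire this to whichever LLM your organisation has approved."""
    return f"[model response to]\n{prompt}"

if __name__ == "__main__":
    print(ask_model(build_prompt("How do I file an expense claim?")))
```

Even in this toy form, the gap is visible: without the internal procedure in the prompt, the model has nothing organisation-specific to work with, which is exactly why the answers stay generic.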

2. Lessons learned from speaking with business leaders about generative AI

Egle shared her experience of speaking with a small group of business leaders, and two interesting findings about why and how they’re using generative AI.

“AI adoption is often driven by the business and not the other way around… I found out that it's not L&D going out to the business and pushing for AI.

“It’s often some sort of AI council at the top that is deciding on this wholesale company AI strategy and if L&D is going to be part of it, and then pulling it in if and when.” - Egle Vinauskaite

She also discovered that, quite often, the organisations experimenting heavily with AI in L&D are the same ones whose customer-facing products and services depend on AI. There’s a competitive imperative to adopt it company-wide.

Again, this hammers home the point that AI shouldn’t be used for its own sake, but to solve real problems.

3. Motivation cannot be separated from learning, and so it’s part of our foundation


If people aren’t motivated to learn, there’s very little chance of your learning efforts working - regardless of whether they’re tapping into AI or not.

“Learning design cannot be separate from motivation. These two have to come together. You cannot be an instructional designer and only care about using correct instructional interactions in a course, even if they're evidence based.

“You have to care about what problem you're solving for someone, what is the most effective remedy, and how to persuade them that you've got the value and you're going to solve the problem for them.” - Egle Vinauskaite

If we’re not solving for motivation, no amount of tech or AI novelty will help people retain or apply that information. In practice, that means:

  1. Always understand the pain point someone needs to overcome or the goal they want to reach.
  2. Help that person build clarity and a strong understanding of that goal.
  3. Once they’re convinced, create the scaffolding and support they need.

We often hear people talk about being human with AI, and this is a very human issue.


4. Ethics, privacy and bias in using AI for L&D


“People data can potentially unlock some of the most powerful use cases of AI in L&D.

“But the crux of the issue is that it's some of the most sensitive data. So we need to tread carefully here for obvious reasons.” - Egle Vinauskaite

Egle explained that there are a lot of layers and facets to this conversation, so it’s helpful to look at three things:

1. Privacy: We need to give generative AI access to internal data to get the most from it, but how do we make sure that data is safe?

2. Bias: AI can help us create learning pathways and will probably help us uncover hidden growth opportunities, but how do we assess people's needs accurately and ensure that these opportunities are distributed fairly?

3. General ethics: Should we use the data to its full potential? And if so, do employees have the right to know how it’s being used and the ability to consent to that?

5. Do learners trust AI recommendations?


Collaboration between AI and humans isn’t as simple as more information = better decisions, as Egle explained in this LinkedIn post.

Most interesting is the learner sentiment towards recommendations from AI.

“The post was related to what we call algorithm aversion, which is a tendency for people to discount algorithmic recommendations.

“And some early findings suggest that people don't seem too happy to collaborate with AI, and they tend to discount AI's assistance, even if it's proven that it outperforms them.” - Egle Vinauskaite

We might expect that, when making difficult or complex decisions, we’d welcome rational, objective input from AI and believe it improves our chances of success.

But there is hesitation, and it highlights that the relationship isn’t as simple as we might hope.