Augmenting humans with AI is becoming the way of the world. In the coming blog articles (and an upcoming webinar), I will dive into how AI can be integrated into learning, especially into learning transfer.
This topic feels incredibly relevant right now. It's timely, it's pressing, and, if I'm honest, good friends in Europe insisted that I needed to talk about it. But beyond their encouragement, a recent personal experience pushed me to explore the idea further.
The strange case of the AI-generated podcast
My media habits are a little unusual. I don't subscribe to any paid television service, but I do have a YouTube subscription, and I spend a lot of time learning from and enjoying the content there.
A few weeks ago, there was a lot of buzz around Google NotebookLM, a tool that could generate AI-hosted podcasts that sound as though they are voiced by humans but are entirely synthetic. I was intrigued by the breakthrough and its implications.
Then something unexpected happened.
I clicked on a YouTube video that looked interesting, from a channel I don't normally follow. As soon as I hit play, I recognized the voices, not as real people, but as AI-generated. The video, however, made no mention of this. It was presented as a conversation between two humans.
Cue an emotional rollercoaster.
Was I imagining it? Could these really be real people? I felt a sense of discomfort, even betrayal. There was no transparency, and that left me questioning everything I heard. But here's the twist: the content was excellent. The tool was impressive. The learning was solid.
However, despite all that, I didn't like the experience.
Why? Because trust was broken. Instead of fully engaging with the content, I was distracted by feelings of mistrust, frustration, and even anger at the lack of transparency. It was a powerful lesson for me as a learning professional.
The need for transparency in AI-augmented learning
So where does that leave us when it comes to augmenting humans with AI in learning?
First and foremost, it reinforces the need for transparency. At Coach M, we are intentional about making sure that learners always know when they are interacting with AI versus a human.
For example:
• A “rescue” command – if a learner needs human support, they can type “rescue” to be instantly connected to a person.
• Explicit human signatures – whenever a human joins a conversation in Coach M, the message always starts with: “Hi, it's Emma (human) here.”
No guesswork. No ambiguity. Just absolute clarity.
This isn't just courtesy – it is fundamental to maintaining trust in learning environments. If learners are distracted by concerns about who (or what) they are engaging with, it interferes with their ability to focus on the content itself. And when learning is about transferring knowledge into action, that is a serious problem.
What is the next step?
In the next blogs, I will share practical strategies for augmenting learning with AI – how we do it, what is working with our clients, and how we are embracing this shift in learning design.
If you are as fascinated by this topic as I am, I would love to have you along for the journey. Keep an eye out for the upcoming blogs – and if you want to dive deeper, book your place in our webinar, “Augmenting learning teams with AI training.”
I can't wait to share more. See you there!
PS – This blog post was created with the help of ChatGPT. Full transparency. To find out more about how, read about it here.
