Hedra is one of the image-to-video tools that is particularly good at synchronizing with audio. Many AI video tools can show people walking or dancing, but fewer can lip-sync realistic conversations. Since I tested Hedra last year, I wanted to experiment with it again to see how Hedra's Character-3 model has improved compared to previous versions. I tested the same character with different emotions.

Hedra character video example
Here is my example. It combines two clips: one with calm audio and one showing the same character frustrated. (Email readers: if the video doesn't appear below, try watching it on YouTube.)
How I created it
I'm currently using Hedra's free plan, so I wanted to save my credits for video generation only. Therefore, I generated my character's image and the audio outside of Hedra. However, you can do everything in this single tool.
First, I generated the character image in Midjourney. You need an image with a clear face to generate a video. I deliberately included glasses in my character's image because sometimes they trip up these tools and generate strange artifacts.
Editorial photo, Latina woman wearing a red button-down shirt and wire-rimmed glasses, sitting in a modern office conference room, looking at the camera, neutral expression

I also generated my audio separately in ElevenLabs. Again, this was to save my Hedra credits during my experiment; it would be easier to simply generate the audio directly in Hedra.
From there, generating a video is easy. With external assets like these, I just uploaded my image as the starting frame and my audio in the "Audio script" section. For the second clip, I added a "bored, frustrated" text prompt to define the emotion.

On the free tier, you're limited to a maximum of 20 seconds per generation. That's enough for a few sentences of dialogue.
Hedra for scenarios
These dialogue excerpts come from my New Hire with an Attitude branching scenario. (That's why there are two different versions of similar information; they're the consequences of different decisions you make while trying to resolve the conflict.)
I think you could use this to create multiple clips and show different versions in a scenario. The videos still have flaws, but the hand gestures were fairly natural (if a little too frequent). Even the reflections on the glasses worked well. It would be good enough to generate dialogue for an interactive video scenario, at least for some purposes.
See my previous Hedra experiments
If you're interested in seeing how Hedra has improved, see my previous article sharing my experiments creating videos from AI images.
Upcoming events
Set the Stage: Make eLearning Relevant and Authentic with Scenarios
Wednesday, October 29, 2025 at 12:00 p.m. Pacific / 3:00 p.m. Eastern
"What does this have to do with me?" That's often the first question learners ask about eLearning. Many instructional designers struggle with dry or dense content. Often, training feels disconnected from learners' daily work. Because learners see it as irrelevant, they're less likely to remember their training and less motivated to change their behavior. In this webinar, you'll learn how to create relevant and authentic scenarios to enhance your eLearning and improve performance.
Register for this free webinar through the Training Mag Network.