Rodolfo Roth's profile

Fast explainer videos made with AI and MetaHuman

In this video, we'll see how AI and MetaHuman technology can help us create engaging explainer videos. By combining these two technologies, we can produce videos much faster and with more accurate descriptions. If you're looking to improve your video explanations and reach more people, this is the video for you!

Well, for an animation that was 100% done with AI, it's not too bad, is it?

In just two and a half days, without any manual animation, I decided to test some new things. It all started with a 3D scan done with #lumaai, which served as the basis for creating my #metahuman. Then, I used the features available in the tool to make it as close as possible to my real appearance.

After that, I trained an artificial intelligence tool with my way of speaking and created a "voice" for this character. I wrote an initial text and asked #ChatGPT to review and make it more dynamic.
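The ChatGPT review step can also be scripted against the OpenAI chat-completions REST API. A minimal sketch, assuming a valid API key; the model name and the prompt wording here are my own choices, not taken from the original workflow:

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"  # standard chat endpoint

def build_review_request(script_text: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion payload asking the model to make a script more dynamic.

    The system prompt is a hypothetical example, not the one used in the post.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a script editor. Rewrite the text to be more "
                        "dynamic and conversational, keeping the original meaning."},
            {"role": "user", "content": script_text},
        ],
    }

def request_review(payload: dict, api_key: str) -> str:
    """Send the payload and return the revised script (live network call; needs a real key)."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Separating payload construction from the network call keeps the prompt easy to tweak and test without spending API credits.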

With the text and voice ready, I generated the voiceover using #ElevenLabs. Then, I used features from #nvidia #omniverse, such as #Audio2Face and #Audio2Gesture, to generate lipsync and body movements, respectively.
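The voiceover generation can be automated through the ElevenLabs text-to-speech REST API. A minimal sketch, assuming a valid API key and the voice ID of the cloned voice (taken from the ElevenLabs dashboard); the model ID and output filename are my assumptions:

```python
import json
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1"

def build_tts_request(text: str, voice_id: str, api_key: str) -> urllib.request.Request:
    """Build a text-to-speech request for a cloned voice.

    voice_id identifies the voice in your ElevenLabs account; the model_id
    below is an assumed choice.
    """
    payload = {"text": text, "model_id": "eleven_multilingual_v2"}
    return urllib.request.Request(
        f"{API_BASE}/text-to-speech/{voice_id}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
    )

def synthesize(text: str, voice_id: str, api_key: str,
               out_path: str = "voiceover.mp3") -> None:
    """Send the request and save the returned audio bytes (live network call)."""
    req = build_tts_request(text, voice_id, api_key)
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        f.write(resp.read())
```

The saved audio file can then be fed to Audio2Face and Audio2Gesture as the driving track.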

Although the body movement turned out "okay" in Audio2Gesture, when I retargeted the rig in Unreal, the hands ended up out of position due to differences between the models. Nevertheless, I made some adjustments and hid the awkward finger positions to smooth things out a bit.

Could I have filmed the movement with a camera/webcam and tracked the body? Yes, but in this case, I wanted to explore the full potential of artificial intelligence resources as part of the study. (I still plan on conducting another study using #motioncapture).
I also used #LiveLink to add an extra layer of facial expressions since the lipsync generated by AI is more neutral.

The result may not be perfect, but as a study of the workflow, it's okay, haha.
Thank you to those who have read this far.