Easier Lipsynch- ala Toonboom's engine?
Does anyone know of a way to use Toon Boom's automatic lip sync capabilities and bring the results into AS? Seems like some genius out there could easily write some sort of plugin for that... I know it's not perfect, but for doing quick lip sync it's much easier than Papagayo.
It analyzes the audio file and then automatically places the phonemes in the timeline- so you don't really have to do anything manually. Obviously you need to remove any music/sound effects from the audio stream first, but it does a pretty decent job. I don't know if it outputs any type of file though- it runs within the program so perhaps it doesn't.
There has to be some other third-party speech-analysis software that can do the same thing... (I'll have to Google.) Then it would just be a matter of taking that output and turning it into a DAT file somehow. Simple, right? lol
Hmm- after a search I found Magpie Pro
It says it supports Moho. Anyone know if it works with the .anme extension?
Yes, it works! At least the export function does- it saves to a standard DAT file. I can't test how well the voice recognition works because I'm at work, but I'll try it when I get home.
But for $250, I dunno if it's worth it. If it can handle lengthy voiceovers then it might be, but from what I could tell it only gives you a single-line text input to enter the dialog, which it uses to help match up the phonemes when it analyzes the audio. If it can only handle one- or two-liners, then I'll stick with Papagayo.
I tried Toon Boom and it does an OK job of lip sync. There is no way to export to a DAT file, but you can render out a sequence of pictures and then import them into AS. The downside is there are only 7 phonemes. Good lip sync will have expressions along with the mouth movement, and that takes time and effort. No program will do it for you.
Dale
I prefer to do all my stuff by hand. It's slower but a lot more accurate, less limited, smoother, and it just looks better.
I've worked out a pretty good system using bones that I used in my last Average Joe cartoon:
http://www.nranimation.com/averagejoeseries.htm
It's not perfect yet but it's getting there.
I'll post something later on how I did the lip sync.
Even though I use Papagayo I agree with Spoooze!
Papagayo gets the "base" lip sync down. It is almost like a rough pencil sketch for lip sync. Now that I know where the "words" and "phonemes" are I can "do it by hand" from that point since I use bones and actions. I can add more detail and change the default phoneme mouth shapes.
The one drawback to using AS for lip sync is not having "textual" feedback on what the dialog is. I hope someday to create a script that will use the DAT file as a reference in AS to "print" the dialog on screen as you scrub through the animation.
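The reading half seems simple enough. Here's a rough Python sketch of just the parsing (not an AS script), assuming the switch file Papagayo exports starts with a "MohoSwitch1" header line followed by one "frame phoneme" pair per line - as far as I can tell only the phonemes end up in that file, not the words themselves:

```python
# Rough sketch: read a Papagayo/Moho switch data (DAT) file into a
# frame -> phoneme dictionary. Assumed layout:
#   MohoSwitch1
#   1 rest
#   12 AI
#   18 E
#   ...
# The file name below is just for illustration.

def load_switch_dat(path):
    frames = {}
    with open(path) as f:
        header = f.readline().strip()
        if not header.startswith("MohoSwitch"):
            raise ValueError("Doesn't look like a Moho switch data file")
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            frames[int(parts[0])] = parts[1]
    return frames

if __name__ == "__main__":
    for frame, phoneme in sorted(load_switch_dat("dialog.dat").items()):
        print(frame, phoneme)
```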
-vern
That would be a really useful script, Vern, especially because my main dislike about Papagayo (did I spell it right that time...) is that it suffers from severe lag when dragging the text around, on my computer at least. How much computer memory does it take to stretch and squish letters/words?!
This is how I think Papagayo should work: for each sentence, the program lets you specify the start and stop point in the timeline. Basically, instead of just dumping all your text into the timeline at once, it would let you add it piece by piece (maybe even word by word). This would save a lot of time and would probably help with the lagging.
The idea for my script wouldn't be like Papagayo or replace it.
Basically it would read a Papagayo DAT file that was already done and display the words of the dialog typed in Papagayo on screen in AS, on the frames associated with the WAV file.
You would still use Papagayo for the initial creation of the lip sync, but you could tweak it in AS more easily. Once the DAT file is loaded into AS it is difficult to "hear" the dialog and know exactly what word is being spoken as you scrub through, if you want to tweak the lip sync.
So my script idea would read the DAT file in, and as you scrub through the timeline the words or phoneme "text" would appear on screen. For instance, I could have a word appear over a timeline segment and have each letter highlight on the frames where it is "spoken", based on the DAT file.
Many times I get "lost" in the audio of the AS file and have no clear idea what word is being spoken at that spot. Having this feedback for tweaking would be great. Since I use bone motion and actions for lip sync I can exaggerate the poses for certain words; I can't do this with Papagayo. This script wouldn't replace Papagayo though.
The problem is that I don't have the skills to duplicate in AS the process Papagayo uses to match the written dialog with the WAV file.
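The scrubbing lookup itself looks easy enough, though: for whatever frame you're on, find the last phoneme keyed at or before it. A minimal sketch of that logic in plain Python (the real thing would have to be an AS Lua script that draws the text in the workspace):

```python
# Sketch of the per-frame lookup the script would need while scrubbing:
# given the current frame, return the phoneme keyed at or before it.
import bisect

class PhonemeTrack:
    def __init__(self, frame_to_phoneme):
        # frame_to_phoneme: dict built from the DAT file (see earlier sketch)
        self.frames = sorted(frame_to_phoneme)
        self.phonemes = [frame_to_phoneme[f] for f in self.frames]

    def at_frame(self, frame):
        """Phoneme active at 'frame', or None if before the first key."""
        i = bisect.bisect_right(self.frames, frame) - 1
        return self.phonemes[i] if i >= 0 else None

# e.g. track = PhonemeTrack(load_switch_dat("dialog.dat"))
#      track.at_frame(14)  ->  the phoneme keyed at frame 12
```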
-vern