

Mikdog wrote:
I find I get more accurate lip-synching by scrubbing along the timeline and doing it manually without Papagayo.

Same here.
Quoting an earlier post:
4. This is the most important and most seldom used part: the third layer is the individual phonemes, and they must be positioned correctly to make the lips move correctly with the song. If the singer holds a long "A" sound, the AI phoneme is placed at the beginning. The next phoneme is placed at the next sound change, whatever it may be. Papagayo will then hold the AI sound until it reads the next lip change.

This is one of the reasons I like using bones for posing mouth positions with actions when possible.
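To make that "hold until the next change" behaviour concrete, here's a small Python sketch. It's not an official Papagayo or Moho tool, just an illustration under the assumption that the exported Moho switch-data file starts with a "MohoSwitch1" header followed by one "frame phoneme" pair per line; the function names and sample keys are made up for the example.

```python
# Minimal sketch: read Papagayo-style Moho switch data and show how a
# phoneme key is held until the next keyed sound change.
def load_switch_data(path):
    """Return a list of (frame, phoneme) keys sorted by frame."""
    keys = []
    with open(path) as f:
        header = f.readline().strip()
        if header != "MohoSwitch1":          # assumed export header
            raise ValueError(f"Unexpected header: {header!r}")
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                keys.append((int(parts[0]), parts[1]))
    return sorted(keys)

def phoneme_at(keys, frame):
    """The last key at or before `frame` stays active until the next key,
    so a long 'AI' needs only one key placed where the sound starts."""
    current = "rest"                          # assumed default mouth shape
    for key_frame, phoneme in keys:
        if key_frame > frame:
            break
        current = phoneme
    return current

if __name__ == "__main__":
    # Hypothetical keys: 'AI' keyed once at frame 10 and held until the
    # next sound change at frame 34.
    keys = [(1, "rest"), (10, "AI"), (34, "O"), (40, "rest")]
    for f in (5, 10, 20, 33, 34):
        print(f, phoneme_at(keys, f))
```

The point is just that a sustained vowel needs a single key at its start; nothing else has to be keyed until the next sound change, which is exactly how Papagayo holds it.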
Quoting an earlier post:
One time I actually tried to record the voice track separately by speaking the lyrics in time with the music. I used MY voice as the lip-sync audio file so the words were clear. It worked sort of okay... but it was hard to do. I had to load my voice file along with the original file into an audio editor and sync them up so I could swap them easily.

If you're going to lip sync a song, you have to sing it. Just speaking it won't work. The inflections and sustains would all be wrong on the waveform. Since I'm a rotten singer, I wouldn't try it anyway.
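For anyone who wants to line the spoken guide track up with the original song with less trial and error in the audio editor, a cross-correlation pass can suggest a starting offset. This is a rough sketch under my own assumptions (mono WAV files at the same sample rate, placeholder file names), not part of the workflow described above.

```python
# Rough sketch: estimate how far a spoken guide track is offset from the
# original song so the two can be lined up before swapping them.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def to_mono(data):
    """Collapse stereo to mono and convert to float for correlation."""
    data = data.astype(np.float64)
    return data.mean(axis=1) if data.ndim > 1 else data

def estimate_offset_seconds(song_path, guide_path):
    rate_a, song = wavfile.read(song_path)
    rate_b, guide = wavfile.read(guide_path)
    if rate_a != rate_b:
        raise ValueError("Resample so both files share one sample rate")
    song, guide = to_mono(song), to_mono(guide)
    # Peak of the full cross-correlation gives the lag (in samples) at which
    # the guide best matches the song. For a whole song you may want to
    # downsample both tracks first to keep this fast.
    corr = correlate(song, guide, mode="full")
    lag = int(np.argmax(corr)) - (len(guide) - 1)
    return lag / rate_a

if __name__ == "__main__":
    offset = estimate_offset_seconds("song.wav", "spoken_guide.wav")
    print(f"Delay the guide track by {offset:+.3f} s to line it up with the song")
```

A positive result means the guide's words start earlier than the song's, so you'd pad roughly that much silence onto the front of the guide track before swapping it in; a negative result means trimming that much off instead.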
Quoting the previous reply:
If you're going to lip sync a song, you have to sing it. Just speaking it won't work. The inflections and sustains would all be wrong on the waveform. Since I'm a rotten singer, I wouldn't try it anyway.

Uh... yes... I'm not a singer either... that's why I said "speaking" the lyrics... to have called it singing would have been a horrific lie and an insult to the actual singers of the world... well... maybe not Britney Spears.