Last spring, Alisa and I made a 1-minute animated short that served as the title sequence for a feature film. Sadly, we can't show the animation publicly yet, but I bring it up to talk about our editorial workflow on the project. Our workflow varies slightly from project to project depending on the visual style and final delivery requirements, but the following notes may give you a few ideas to try:
1. First, establish the final format. This particular job was rendered at 1080p and 24fps. Normally, we like to render our own projects at 720p because it's just a lot easier to work with, but since this short was going to be attached to a live action film from another production company, 1080p it was.
2. From Anime Studio Pro, we like to render out multiple passes and masks for compositing and effects, so we always render to PNG with embedded alpha channels. We make heavy use of ASP's Layer Comp system, which is similar to the one in Photoshop and a whole lot easier than breaking out individual comp scenes manually--especially in ASP 11, where you can batch render all the Layer Comps in your file with a single click. If you use a compositing program like Fusion or After Effects, this feature is highly recommended.
For our most basic scenes, we render sky, background, character and foreground passes; more advanced setups include control masks for effects. This project had as many as 8 masks per scene. (For some of our 3D animated productions, we may output a dozen or more layers and masks.)
By the way, Fusion is free right now for non-commercial use. These days I seem to be using After Effects a lot, but I used Fusion for 12 years when I was a staff artist/animator at Rhythm & Hues. After R&H, I used Fusion for several freelance jobs, and I still use it for personal work--Scareplane, for example, was entirely composited in Fusion. I highly recommend it.
3. The rendered PNG layers are brought into After Effects for compositing and effects (like motion blur, glows, DOF, color correction, etc.) It really doesn't matter which compositing program you use--each program has unique strengths, but they all use layers and masks for source material. But if you don't need compositing, you can go directly to your editor.
Anyway, from AE, we write out .avi files using the free Lagarith codec. Lagarith is a good lossless format, ideal for archival files. It's compressed, but because it's lossless, the quality is identical to uncompressed. (And certainly much better quality than rendering to .mp4.) The downside is that it's very processor intensive, so it's not very good for realtime playback. More on how to work around that in the next section.
4. Since Lagarith is not so great for realtime playback, we need to transcode it to something more friendly for editorial use. On our last two projects we used Virtual Dub, with a script that scales the clips to half size and converts them to Divx .avi. Then we set up a batch Wizard preset that takes the Lagarith clips and saves them to a 'Proxy' directory. Once the presets have been made, you simply run the batch render and in a few seconds you'll have fully playable proxy files for editing. How fast is this process? Well, our last production had 22 scenes, some of which were several hundred frames long--Virtual Dub can transcode the entire 1080p production in just a couple of minutes.
(Note: you can render directly to Lagarith .avi from ASP if you don't need to break out scenes for compositing. For that matter, I think ASP 11 will let you set up batch presets for different formats, so you can probably render your master file and proxy file directly from ASP with just a couple of mouse clicks.)
This is important: be sure your master files and proxy files have identical names, and they should both be .avi. More on the reason why in the next section.
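I can't paste our Virtual Dub job settings here, but the same half-size proxy pass can be sketched with ffmpeg driven from Python. Everything in this sketch--the folder names, the codec choice--is an illustrative assumption, not our exact setup; the point is that the proxy keeps the master's exact filename.

```python
# Sketch of the half-size proxy pass using ffmpeg in place of Virtual Dub.
# Folder names and codec settings are illustrative assumptions, not our exact job setup.
import subprocess
from pathlib import Path

def proxy_cmd(src, dst):
    """Build an ffmpeg command that scales a master clip to half size
    and writes a lightweight, easily playable proxy."""
    return [
        "ffmpeg", "-y",
        "-i", str(src),
        "-vf", "scale=iw/2:ih/2",      # half width and height (1080p -> 540p)
        "-c:v", "mpeg4", "-q:v", "3",  # fast MPEG-4 proxy, standing in for Divx
        str(dst),
    ]

def transcode_all(master_dir, proxy_dir):
    """Transcode every .avi master into the Proxy folder, keeping filenames identical."""
    master_dir, proxy_dir = Path(master_dir), Path(proxy_dir)
    proxy_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(master_dir.glob("*.avi")):
        subprocess.run(proxy_cmd(src, proxy_dir / src.name), check=True)
```

Because the proxy is written with the same name as the master, swapping one for the other later is painless.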
5. For organizational reasons, we have a footage folder that contains three folders: Master, Proxy and Edit. We render the Lagarith master clips to the Master folder, and the Divx proxies to the Proxy folder (surprise!). The Edit folder is what our editorial program uses as its source footage. You can call the folders whatever you want, but this works well for us.
Initially, the Edit folder contains copies of everything that's in the Proxy folder. Because the files are very small, they play back very quickly in a program like Vegas, even if you're altering them with fx or other plugins. Even though the proxy files are half-size, Vegas will automatically scale them up to the target size--in this case, from 540p to 1080p. (Note: We're using Vegas here, but I think this should work with Movie Studio too--from what I recall, the programs are nearly identical.)
After the editing is done and I'm ready to render final formats, I copy the files from the Master directory to the Edit directory, overwriting the proxies in there. (This is why the files need to be identically named.) When you open the project in Vegas, it will automatically replace the low-res proxies with the hi-res masters and render the edit using the highest quality data. (Alternatively, you can just rename the Master or Proxy folder to Edit--that may be quicker if you have a slow network, and Vegas doesn't care since it's just looking for files in a folder called Edit.)
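The master-over-proxy swap is really just a name-preserving file copy. A minimal sketch in Python (the Master/Edit folder names match the layout described above; any folder names would work as long as the filenames match):

```python
# Sketch of the master-over-proxy swap: copy the hi-res masters into the Edit
# folder, overwriting the identically named proxies so the editor relinks to them.
import shutil
from pathlib import Path

def promote_masters(master_dir, edit_dir):
    """Overwrite each proxy in edit_dir with the identically named master clip.
    Returns the list of filenames that were replaced."""
    master_dir, edit_dir = Path(master_dir), Path(edit_dir)
    promoted = []
    for master in sorted(master_dir.glob("*.avi")):
        shutil.copy2(master, edit_dir / master.name)  # same name -> editor picks up hi-res
        promoted.append(master.name)
    return promoted
```

Since the editor only matches on filename, it never knows the difference--it just finds higher-quality data at the same paths.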
It's worth pointing out that Vegas can generate its own proxy files for items already on the timeline. This works well for live action footage that isn't going to change, but it's a bit of a pain for animation that may need to go through multiple revisions, because you have to delete the proxies and re-render every changed file. This isn't technically different from what I do with Virtual Dub, but I find it's faster and easier to keep track of things when writing specially formatted files to discrete directories. Whichever method you choose is up to you--I would try both and see which works best for you.
6. Finally, after final color corrections and audio tweaks, we output to whatever format we need. On our last job, we rendered Sony AVC with a Computer RGB to Studio RGB levels adjustment for Vimeo-compatible files. You absolutely need to do this for Vimeo or YouTube videos, otherwise the footage will look washed out after uploading. The Vimeo version was for online client previews--we like to use Vimeo to get quick client approvals, and also because it lets us secure the file with private links and password protection. Then we render out an Avid DNxHD .mov for the client's editorial system and put it in a Dropbox for them. We also rendered a MainConcept .mp4 for realtime playback on portable devices like iPad, Android and Windows tablets. This is handy for showing friends and colleagues, and for impromptu presentations when reeling in the next paying gig. Finally, we render a Lagarith version for archival purposes.
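For anyone wondering what that Computer RGB to Studio RGB adjustment actually does: it remaps the full 0-255 range into the 16-235 'studio' range the encoder expects, which is why skipping it leaves footage looking washed out. Here's the standard 8-bit full-to-limited range math (the general formula, not Vegas's internal implementation):

```python
def computer_to_studio(v):
    """Remap a full-range 8-bit value (0-255) to studio/limited range (16-235)."""
    return round(16 + v * (235 - 16) / 255)
```

So pure black (0) lands at 16 and pure white (255) at 235--exactly the headroom and footroom the upload encoder assumes is there.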
It's true that uncompressed is often recommended for archival, but we chose Lagarith files because they are much smaller with no loss in quality. (If anybody can recommend other good lossless archival formats, please let me know. Years ago we used HuffYUV, and we switched to Lagarith for x64 compatibility and additional features. That's worked out for us for a few years now, but we're always looking out for better ways to do things.)
As for audio syncing, here's what we do:
First we create an animatic in Vegas (or your edit program of choice), with all our audio and music, edited as close to final as we can at this stage. An animatic is basically a version of your movie with the storyboard cut in. You can have some limited animation here too but don't go crazy--save the effort for final animation.
Next, I mark all the edit points. Once the markers are set, I batch render the entire edit to individual 'reference' clips. These files can be loaded into your animation program as 'background' clips to animate to. Alternatively, you can batch render only the audio and load those files in to animate to. So long as your animation program and editorial program use the same format settings, the output from the animation program will stay in sync.
Finally, after you write out each animation file, simply over-cut the imported 'finals' on a 'final animation' layer above your 'animatic' layer. Everything should be in sync but after you get the final footage in, you'll probably want to make final adjustments to the audio. Because of this, you may want to keep all your audio clips (dialog, music, sfx) unflattened so you can slip and tweak the individual clips as needed.
Sorry, I know that was a long post. I probably still missed a few things I meant to talk about, so feel free to ask any questions.
Bear in mind that the above workflow is just what we did on our last production, and not everything mentioned here may be relevant to your project. In fact, since we frequently change our visual style, we're constantly re-evaluating our workflow from project to project. But I hope the example above gives you a few ideas for minimizing technical issues and getting the best possible output quality. We try to keep the technical parts of our work foolproof because computer animation is hard enough just drawing and animating stuff. Once a solid workflow is established, we don't have to think about it so much, and we're almost free to focus on just the fun parts of the job.
Hope this helps.
G.