While I did the final fine-tuning, color correction, and asset passes, I would rely heavily on a headset, as that gave me a clearer picture of how things were looking. I did most of my color correction in Adobe Premiere, but for more complicated grading I would bounce out to DaVinci Resolve.
All the audio editing was done in a program called Reaper. The audio recording equipment was a Sennheiser AMBEO VR mic that recorded spatially, a lavalier mic on the subject, and a shotgun mic that captured ambient sound plus the subject. All those sources were converted to spatialized audio using the Facebook Spatial Workstation plugins in Reaper, and then I would track those audio channels to their on-screen sources in real time. With a kind of keyframe-writing mode turned on, I would watch the video in a preview and follow the source with my mouse, mapping each sound to its location. The end result was spatialized audio. The only tracks that remained in head-locked stereo were voiceover and music.
Audio editing was always the last step, as the picture was usually locked by then. We would then export essentially five different things. The first three were a ProRes video master with no audio, a spatialized audio track from Reaper, and a stereo audio track from Reaper. Then, using FFmpeg on the command line, I would encode the video to a compressed H.264 version. I would then take the H.264 file and both audio tracks and mux them together using the FB360 tool. That gave me the final export, with video and all audio combined, which is what we used for delivery. Our delivery format was usually an MKV container.
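For reference, the encode step was a single FFmpeg command. Here's a minimal sketch of that pass; the filenames, CRF value, and pixel format are illustrative placeholders rather than our exact settings, and the final mux with the audio tracks happened in the FB360 tool, not FFmpeg:

```
# Sketch of the encode pass: compress the ProRes master to H.264.
# Filenames and quality settings are placeholders, not our exact values.
ffmpeg -i master_prores.mov \
       -c:v libx264 -crf 18 -pix_fmt yuv420p \
       -an \
       video_h264.mp4
```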
We experienced a number of challenges along the way that we learned from. In the first season we had some big hiccups with color rendition across multiple headsets. Early on, the Oculus Quest didn't have a Link cable for live view, so I would attempt to grade an image, then export a still frame to a headset like the Oculus Go to check it. I would realize it didn't quite match, then add an adjustment curve to compensate for the variation in that color display, only to find everything looked different on yet another headset. I eventually created compensation LUTs so that I could preview what the image would look like on each headset. That problem has gone away now with Oculus Link; the image I see in the headset is much closer to what the export will be, which really sped up the coloring process. Having a headset with a live view (and a system that can run it) is definitely key to getting the editing process up to a reasonable speed.
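As a rough illustration of that compensation workflow (not our exact pipeline), a LUT can be baked into a still frame with FFmpeg's lut3d filter before loading it onto a headset for review. The .cube file and filenames below are hypothetical:

```
# Sketch: apply a hypothetical headset-compensation LUT to a graded
# frame so it can be checked on the target display.
ffmpeg -i graded_frame.png -vf lut3d=quest_comp.cube preview_quest.png
```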
How long did post-production take?
Our end product would typically run about 15-20 minutes, cut down from around 4-6 hours of captured footage. Post-production typically took about a month, though that timeline has sped up considerably since then. We were relatively early with this type of 3D 180 VR content, so there was a lot of going back and forth, trying different things, experimenting with editing styles and graphics, and just getting the tech to work.