360 Camera Technology

In this section, you'll learn more about your options for 360 camera technology from VR pioneers and our friends at Radiant Images.

Written by Michael Mansouri from Radiant Images

Radiant's core philosophy is simple: even with limits on technology, there can never be limits placed on human imagination.

Our company got its start just as content production began its transition from film to digital. Recognizing a need for expertise and creative thinking to help guide filmmakers through this evolution, Radiant found its niche by positioning itself on the cutting edge of emerging technologies and acting, in essence, as the technology arm for production companies and filmmakers worldwide.

Radiant has remained at the forefront of camera technology as it has advanced from HD to stereoscopic to 4K, 6K, and 8K, to HDR and HFR, to virtual reality, augmented reality, and most recently, the currently-developing worlds of volumetric, depth extraction, and light field capture.

We celebrate that by working with content creators not only on technology but also on methods that support their vision and creativity. In our early days, when Radiant focused on 360-degree content, we utilized five or six RED One cameras on a circular rig to capture scenery and create compelling 360 videos.

As the technology advanced, pushed forward by the Oculus Rift headset, the first publicly-released virtual reality head-mounted display, Radiant focused on working with creatives to construct the content to go inside VR headsets.

And as headset technology advanced, enabling much finer resolution and more pixels per degree, for example, camera technology followed suit. These advancements have come primarily in post-production and stitching: computational solutions such as optical flow have liberated filmmakers to experiment and shoot more freely, capturing content as they wish without worrying about the painstaking stitching process.

This major breakthrough came in 2016, when most major camera manufacturers began offering optical flow solutions in computationally stitched cameras such as the Nokia OZO, Z CAM, Kandao, Insta360 Pro, Jaunt ONE, Samsung Round, and Google Jump, among others. We have also seen further advancements through much higher-resolution cameras and optics.

Selecting a virtual reality camera is no different from selecting a camera for a traditional 2D shoot. Choosing lenses and camera bodies is usually the first step when starting a 2D production. Is the shoot mostly low-light or practical lighting, similar to movies such as Stanley Kubrick’s “Barry Lyndon” or Gaspar Noé’s “Enter the Void”? Is this a run-and-gun type of capture or a more cinematic pace? These challenges define the camera and type of lens, allowing you to determine, for example, whether a zoom lens would serve better than a prime lens.

All filmmakers make these selections during pre-production after the director, cinematographer and producer determine the following:

  • Budget
  • Locations
  • Subjects (i.e., what they intend to capture)
  • The mood and “look” of the content
  • Framing and proximity to subjects (i.e., wide, close, long, far away, etc.)
  • Distribution targets (i.e., IMAX, theatrical, TV, YouTube, etc.)

With the introduction of all of these expanded choices for camera solutions — from consumer to prosumer, all the way to high-end professional cameras — I feel that the filmmaking process is less and less about just technology, and much more focused on methods. Since these are important and frequently asked questions, I always try to look at the challenge or situation from a fresh perspective, with no preconceived solutions — just new ideas about which tools can assist us the best.

I do not have a go-to camera. We are rarely asked for cookie-cutter solutions. Normally, people who engage us are asking much deeper questions — like how to capture deeper and richer experiences — rather than how to auto-stitch or auto-expose cameras.

Movies like “Baraka” and “2001: A Space Odyssey” were not conceived or finalized because they were easy or cheap to make. Instead, the filmmakers looked at deeper, more complex questions, such as how these films would make an impact and affect the conscious as well as the subconscious mind.

Other advancements have come from display technology. While most people think that 360 can only be viewed in headsets, there are many more solutions in display technologies, from handsets to headsets to dome theaters, as well as room-scale headsets (enabling movement in and out of spaces), using products like Oculus Rift, Oculus Go, Samsung HMD Odyssey, and mixed reality headsets from Microsoft.

These advanced headsets require completely new, innovative capture methods. Until now, the screen has acted as the ‘fourth wall’ separating the audience from the action in front of their eyes. With these new capture methods, filmmakers can remove this wall and bring their audience into their stories and environments, free to move around and visually experience scenes and subjects.

Once in the virtual space, a viewer will be able to move freely with six degrees of freedom, commonly known as 6DoF. Simply put, this is the ability to move forward and back, side to side, and up and down, and to swivel, tilt, and pivot, giving viewers a lifelike, immersive sensation of virtual presence within a scene.
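The six axes above (three translational, three rotational) can be sketched as a simple pose structure. The class and method names here are hypothetical, purely for illustration of how a 6DoF pose is commonly represented:

```python
from dataclasses import dataclass
import math

@dataclass
class Pose6DoF:
    """A viewer's pose: three translational and three rotational degrees of freedom."""
    x: float = 0.0      # side to side
    y: float = 0.0      # up and down
    z: float = 0.0      # forward and back
    yaw: float = 0.0    # swivel (turn about the vertical axis), radians
    pitch: float = 0.0  # tilt (look up or down), radians
    roll: float = 0.0   # pivot (lean left or right), radians

    def step_forward(self, distance: float) -> "Pose6DoF":
        """Move along the current heading in the horizontal plane."""
        return Pose6DoF(
            x=self.x + distance * math.sin(self.yaw),
            y=self.y,
            z=self.z + distance * math.cos(self.yaw),
            yaw=self.yaw, pitch=self.pitch, roll=self.roll,
        )

# A viewer who has swiveled 90 degrees and then steps one meter forward
# ends up one meter to the side of where they started.
pose = Pose6DoF(yaw=math.pi / 2).step_forward(1.0)
```

Headset runtimes track exactly this kind of pose every frame, which is what distinguishes 6DoF from the rotation-only (3DoF) viewing of a fixed 360 video.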

The ability to achieve this result has progressed from theory to proof of concept to what are now three distinct capture methods: volumetric, depth extraction (or depth mapping), and light fields.

For volumetric capture, Microsoft, 8i, and Fraunhofer have each built volumetric video studios featuring anywhere from 48 to 106 cameras placed around a cylindrical stage to capture light and movement from all viewpoints; an advanced algorithm then creates a point cloud, or set of data points in space. Point clouds are generally produced by 3D scanners, which measure a large number of points on the external surfaces of surrounding objects; however, we can also generate point clouds from over-sampling, or from the disparity between cameras, which gives us depth and volume information.

While point clouds can be rendered and used directly, they are often converted to polygon or triangle mesh models, or to CAD models, through a process commonly referred to as surface reconstruction.

With depth mapping, the Facebook x24 and x6, as well as the Kandao Obsidian, are cameras designed for 6DoF that triangulate the subject matter to extract accurate depth and bring rich detail to life in live-action VR.
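As a rough illustration of the triangulation these multi-lens rigs perform, a rectified stereo pair recovers depth from the disparity between matching pixels via the standard relationship depth = focal length × baseline / disparity. The function and the numbers below are hypothetical, not drawn from any particular camera:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulate the distance to a point seen by a rectified stereo pair.

    focal_px     -- focal length of the cameras, in pixels
    baseline_m   -- distance between the two lenses, in meters
    disparity_px -- horizontal shift of the point between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Example: a 1000 px focal length, a 10 cm baseline, and a 50 px disparity
# place the point 2 meters from the rig.
d = depth_from_disparity(1000.0, 0.10, 50.0)
```

Nearby objects shift more between lenses (large disparity, small depth), while distant objects barely shift at all, which is why baselines and lens counts matter so much for these 6DoF camera designs.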

Light field capture was pioneered as a commercial product by Lytro, which recently sold to Google; Google continues to work intently in this area, along with Fraunhofer. We at Radiant have propelled light field technology forward with the creation of Meridian, a light field system that captures live-action 6DoF while maintaining the look and feel of high-quality cinematography.

Radiant’s Meridian consists of 24 perfectly synchronized, equidistant Sony RX0 cameras mounted inside a modular panel. Each 4x3-foot panel of cameras captures the light passing through the area, or frame, from various angles and vantage points. Acting as ‘windows’ into the virtual world, the portable panels can be expanded based on shooting requirements.

Even with these new advances, we are just scratching the surface of what can be achieved in immersive content. Much like humanity's first encounters with electricity, we are still not sure of its limits and capabilities, and the same is true for 6DoF technology. Radiant looks forward to continued collaboration with content creators and to exploring this technology's fullest potential.