With a plethora of 360 video cameras hitting the consumer market over the past twelve months, including the Ricoh Theta S and the Kodak SP360 (plus its newer 4K version), to name a few, a lot of people are starting to have fun exploring the world of 360 video. However, while the price of these cameras has finally become affordable, many people have decided to remain on the fence for one simple reason – they don’t know how to create, edit and share 360 videos. In this short overview article I’m going to outline the basics of 360 video. I should make it clear that I won’t be offering detailed step-by-step instructions on how to create a 360 video – this is simply an introduction for those who are curious about 360 video but don’t know where to start.
What is a 360 video and how is it made?
A 360 video is a video that captures a full 360° view in all directions, both horizontal and vertical. When viewed properly, e.g. using YouTube’s 360 feature or some other video hosting site that caters for 360 footage, you’ll be able to pan around the video clip – up, down, left and right. A true 360 video should not contain any blacked-out areas; these occur when the field of view (FoV) wasn’t fully covered. At the time of writing it is not possible to capture a full 360° FoV using a single lens. The minimum is two lenses positioned back-to-back, each of which has at least a 180° FoV on both axes. The video footage from each separate video stream (captured by each lens) will appear spherical (180×180), and software is then required to join the two streams together and remove (or at least minimize) the geometric warping and distortion caused by using spherical lenses. Examples of cameras that use two back-to-back spherical lenses to create 360 videos include the Ricoh Theta S, the Kodak SP360 and SP360 4K (though with these you’ll need to buy two units along with a housing to position them back-to-back) and Nikon’s soon-to-be-released ‘KeyMission 360’ camera, which includes two spherical back-to-back lenses in a single unit (see image below).
However, it’s important to bear in mind one obvious point: you don’t need to buy a special 360 camera in order to make 360 videos. Granted, cameras like the Ricoh Theta S make everything much simpler for the user, as you just need to press a button to start recording footage from both spherical lenses simultaneously. But you still need to stitch the two video streams together to create a single 360 video. And Kodak’s SP360, despite being marketed as a 360 camera, doesn’t actually capture a full 360° FoV. To do that you need two of these devices back-to-back (along with a housing unit for them), and then you need software to stitch both video streams together.
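The stitching that this software does is essentially a projection problem: for each viewing direction in the final 360 frame, work out which lens saw it and where it landed in that lens’s image. A minimal sketch of that lookup for an idealised equidistant (“f-theta”) fisheye lens – the square image size and the 190° lens FoV here are hypothetical, and real stitchers also blend the overlap region and correct lens-specific distortion:

```python
import math

def fisheye_pixel(lon, lat, image_size, fov_deg=190.0):
    """Map a viewing direction (longitude, latitude in radians) to a pixel
    (u, v) in an equidistant fisheye image centred on lon = 0.
    Returns None if the direction falls outside this lens's FoV."""
    # Direction as a unit vector: x forward (lens axis), y right, z up.
    x = math.cos(lat) * math.cos(lon)
    y = math.cos(lat) * math.sin(lon)
    z = math.sin(lat)
    theta = math.acos(x)                      # angle away from the lens axis
    half_fov = math.radians(fov_deg) / 2
    if theta > half_fov:
        return None                           # this direction belongs to the back lens
    # Equidistant model: image radius grows linearly with the angle theta.
    r = theta / half_fov * (image_size / 2)
    phi = math.atan2(z, y)                    # angle around the lens axis
    u = image_size / 2 + r * math.cos(phi)
    v = image_size / 2 - r * math.sin(phi)
    return u, v
```

Looking straight down the lens axis lands exactly in the image centre, while a direction behind the camera returns None – that’s the region the second, back-facing lens has to cover, and the few degrees beyond 180° on each lens provide the overlap the stitcher blends.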
In which case, given that you’re still faced with the task of stitching video streams together, you might be wondering what the point of buying a 360 camera is if, strictly speaking, it’s not really a 360 camera in the first place! Why not (you may be thinking) simply get your hands on some 360 video stitching software and set up your own custom rig? If you’ve thought that then you’re certainly not alone. A lot of people have been disappointed with the low-res output of many 360 cams on the market and have started thinking outside the box.
Thinking of creating your own custom 360 video recording rig?
To do this you’ll need the following:
- A sufficient number of identical video recording devices (DSLRs, GoPros, smartphone cameras, etc.) to cover a full 360 FoV.
- A rig that enables you to position each device in such a way that each one captures an overlapping FoV so that when all the video streams are stitched together you’ll have a seamless 360 video.
Sounds simple, right? Well, not quite. Let’s assume you do indeed have access to a large number of (say, six or more) video recording devices. The first thing you’ll need to know is the FoV (field of view, or angle of view if you want to be pedantic about terms) of each device – the angular range that the camera lens captures. This matters because you need enough devices, set up in the right configuration, to capture the entire surrounding scene: left and right as well as up and down. And you need your devices to be evenly spaced apart, not simply to avoid missing parts of a scene (which would prevent your software from stitching all the video streams together, rendering your valiant attempts null and void), but also to avoid amplifying geometric distortion in your resulting 360 video (e.g. if two devices are too close together while others are much further apart then you’ll get stretching in one area but not in others). In general it’s pretty safe to assume that the minimum number of cameras in your rig will be six, with one pointing in each direction (forwards, backwards, left, right, up and down). But each device would need a very wide FoV in that configuration, so realistically you’ll need more.
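As a back-of-envelope check on that claim, you can work out how wide each lens would have to be for a given number of cameras. A sketch, assuming the cameras are evenly spaced in a horizontal ring and you leave a margin of overlap for the stitching software (the 20% default is an assumption, matching the common calculator default mentioned later):

```python
def min_fov_per_camera(num_cameras, overlap=0.20):
    """Minimum horizontal FoV (degrees) each camera needs so that a ring of
    `num_cameras` evenly spaced cameras covers 360 degrees horizontally,
    with `overlap` (a fraction of each FoV) shared with its neighbours."""
    # Each camera must cover its 360/N slice plus the overlap margin
    # that the stitching software needs to blend adjacent streams.
    return (360.0 / num_cameras) / (1.0 - overlap)

# Four cameras in a ring would each need a 112.5 degree lens;
# six cameras bring that down to a more realistic 75 degrees.
```

And that is only the horizontal ring – the up- and down-facing cameras still have to be added on top, which is why real rigs so often end up with six or more devices.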
Capturing the full FoV
So, how do you find out the FoV of your device? Strictly speaking, what the lens tells you is its focal length, not its FoV: a 24mm lens on a full-frame camera gives a horizontal FoV of roughly 74°, and the shorter the focal length, the wider the FoV. If you’re using a full-frame camera things are simple, as the focal length printed on the lens is the effective focal length (if you’re using a zoom lens, just make sure all the lenses are set to their widest setting, i.e. shortest focal length). However, if you’re not using a full-frame camera then things get more complicated due to what’s known as the crop factor: if your camera’s sensor is smaller than a full-frame one then it effectively crops out the corners and sides of the image, narrowing the FoV. The amount of cropping that takes place is known as the ‘crop factor’, and multiplying your focal length by it gives the full-frame-equivalent focal length.
Here is a list of the most common crop factors:
| Camera type | Crop factor |
|---|---|
| Full-frame DSLR | 1x (i.e. no change) |
| Nikon DSLRs with DX sensors | 1.5x |
| Canon DSLRs with APS-C sensors | 1.6x |
| Micro Four Thirds (most mirrorless cameras) | 2x |
| Enthusiast compact cameras | 4x |
| Most compact cameras | 6x |
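If you’d rather compute the FoV than look it up, the standard formula is FoV = 2·atan(sensor dimension / (2 × focal length)), and the crop factors above simply scale the effective focal length. A quick sketch (36mm is the width of a full-frame sensor; the function name is my own):

```python
import math

def horizontal_fov(focal_length_mm, crop_factor=1.0, sensor_width_mm=36.0):
    """Horizontal field of view in degrees for a given focal length.
    The crop factor scales the effective focal length, narrowing the FoV
    on smaller-than-full-frame sensors."""
    effective_focal = focal_length_mm * crop_factor
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * effective_focal)))

# A 24mm lens on full-frame covers roughly 74 degrees horizontally;
# the same lens on a 1.5x crop body covers noticeably less (about 53 degrees).
```

Swap in 24mm for `sensor_width_mm` if you want the vertical FoV of a landscape-oriented full-frame camera instead.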
Of course, you don’t have to spend time tediously calculating the FoV for each of your devices. You can simply work out how to arrange all your devices to capture a full 360° FoV through trial and error by positioning each camera and looking through the viewfinder. It might not be as precise as working it out on paper or with a calculator first, but it’ll still get the job done, and you can always tweak your setup later on.
However, if you do work out the FoV of the video recording devices you’ll be using (and they should all be the same model to ensure the resulting 360 video looks seamless once stitched together!) then you can use this site to quickly work out how many of those devices you’ll need to capture 360 video. Scroll down to the section titled ‘360° Panorama Calculator’ to see four fields you need to fill in, just like in the screenshot below.
Select the camera type from the drop-down box, then the focal length (taking the crop factor into account as explained earlier), then the camera orientation (portrait or landscape), and finally the percentage of overlap you want for each camera’s FoV. You’ll notice that 20% is selected automatically, and you’re probably best off leaving it like that to ensure there are no stitching problems between the video streams later on. Then hit the ‘calculate’ button and it’ll tell you how many images are required – which is the same number of devices you’ll need for your 360 camera rig!
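The logic behind that calculator boils down to dividing 360° by the portion of each camera’s FoV that adds new (non-overlapping) coverage. A rough sketch of the horizontal-ring case, assuming you’ve already worked out each camera’s horizontal FoV:

```python
import math

def cameras_needed(horizontal_fov_deg, overlap=0.20):
    """Number of cameras needed in a horizontal ring to cover 360 degrees,
    given each camera's horizontal FoV and the desired overlap fraction."""
    # Only the non-overlapping portion of each FoV contributes new coverage;
    # round up, since a fractional camera isn't an option.
    unique_coverage = horizontal_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / unique_coverage)

# e.g. wide 120 degree lenses with 20% overlap need a ring of 4 cameras,
# while modest 60 degree lenses need 8.
```

Remember this only covers the horizontal band – the real calculator also accounts for rows of cameras tilted up and down to close off the poles.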
Making a rig
So, you’ve worked out how many devices you’ll need by calculating the FoV of each camera (or simply through trial and error), and you know how far apart and at what angles the cameras should be positioned, right? Good. Now you just need to create a custom rig for them all. If you’ve got access to a 3D printer then all your problems are solved! If not, you’ll just have to make one yourself. The simplest option would be to use a small lightweight tripod for each camera, positioned closely together. Or you could get more creative, like this guy.
Okay, so let’s assume you’ve managed to get your hands on a bunch of identical devices that shoot video (e.g. six GoPros or DSLRs, or you’ve even got all your friends to let you use their iPhone 6 smartphones in unison), come up with a simple or elaborate rig system for them, and finally used them all at the same time to record something that captures a full 360° FoV. Congratulations! But now you’re faced with turning all of those separate video streams into a single 360 video. Budding 360 videographers have several options available to them.

The most popular 360 video editing program is Kolor’s Autopano Video. It’s currently the most impressive video editing suite for 360 footage we’ve come across. It has an easy-to-use interface that allows you to add different video streams, which it automatically stitches together. You can then adjust the colour, layer masks and horizon position, as well as change the projection and orientation of the video, transitioning from a ‘little planet’ (polar panorama) view to fisheye and several others. In short, it’s an extremely impressive software suite that offers more easy-to-use features than any other we’ve come across. Check out this excellent tutorial on using it to create and edit 360 videos. However, it’s also extremely expensive. You actually need two of their software packages: Autopano Video and Autopano Giga. Buying them together currently costs 688.85 euros ($758.43 or £533.31 at the time of writing), which isn’t cheap, and those prices don’t even include tax.
Another option is Video-Stitch, but that costs $1,050 (£732) at the time of writing. The cheapest option is to use PTGui. While this isn’t actually a 360 video editing package – it’s only designed for stitching still images into 360 panoramas/photospheres – you can use it to batch-create panoramas. So if you’re patient you can save some money (but spend a lot more time) by turning each video clip into a sequence of still images and then batch-converting those into panoramas. You can then rename the resulting images so they’re numbered sequentially and use any old video editing software to convert the (sequentially numbered) image sequence into a single 360 video. By the way, the sequential renaming of the files is crucial here, as otherwise your video editing program won’t recognize them as an image sequence!
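That sequential-renaming step is easy to script rather than do by hand. A minimal sketch, assuming all your stitched panorama frames sit in one folder and sort into the correct order by filename (the `frame_` prefix and zero-padded numbering are just one common convention video editors accept):

```python
import os

def rename_sequentially(folder, prefix="frame_", ext=".png"):
    """Rename every image in `folder` to frame_00001.png, frame_00002.png, ...
    in sorted filename order, so a video editor will recognise the folder
    as an image sequence."""
    images = sorted(f for f in os.listdir(folder) if f.lower().endswith(ext))
    for i, name in enumerate(images, start=1):
        new_name = f"{prefix}{i:05d}{ext}"
        # Note: a plain rename like this assumes the originals don't already
        # use the target names, otherwise files could be clobbered.
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))
```

Run it once on the output folder before importing the sequence into your editor; the zero-padding keeps frame 10 from sorting before frame 2.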
Sharing your 360 videos
If you’ve used a 360 video camera like the Ricoh Theta S or Kodak SP360 and used its own software to stitch the two video streams into a single 360 one, then you can upload the result straight to YouTube and view it in 360 mode, as YouTube recognizes those files as 360 videos. However, if you’ve used your own custom rig and PTGui to create a 360 video, or if you’ve simply added some finishing touches to your 360 video using a different software package (e.g. Adobe Premiere Pro, Final Cut Pro, After Effects, etc.), then the metadata that tells YouTube it’s a 360 video will have been stripped away. The only way to get it back so YouTube recognizes it as a 360 video (enabling viewers to pan around it instead of seeing a big, flat, distorted panorama!) is to download and use YouTube’s own metadata app. To do this, go to the Google help page (here) and scroll down to the ‘Prepare for upload’ section, where you’ll find a link to download both the Windows and Mac versions of their metadata program. It’s a tiny file you need to unzip and run. You’ll then be presented with a small window like the screenshot below.
You then use it to select and open your 360 video and tick the optional boxes. Then click ‘save as’ and the app will create a duplicate of your original file with the necessary metadata for you to upload to YouTube. Once that video has finished uploading and processing, it’ll be viewable as a 360 video.
In this short article I’ve talked about creating your own custom 360 video camera rig and things to consider like FoV, briefly outlined some of the main ways to turn multiple streams of footage into a single 360 video clip, and explained how to ensure that YouTube recognizes your video as a 360 one (and displays it accordingly). Hopefully some of you are now suitably inspired to go out, create your own 360 camera rig setup and turn those separate video streams into a 360 video! Before doing so, though, here are a few extra tips:
- Camera settings: make sure all the devices you’re shooting with (GoPros, iPhones, DSLRs, etc.) have the same settings before you start filming. That means the shutter speed, ISO, white balance/colour settings and frame rate (and any other settings your devices have, such as sharpness) should be identical. If they’re not, you might have problems stitching the separate video streams together at all, and even if stitching works there may be noticeable differences in the resulting 360 video (e.g. part of the sky being brighter or darker than the rest).
- Distortions: if you just care about creating a 360 video with as few cameras as possible, then by all means go with wide-angle lenses. But bear in mind that the narrower the FoV of each device, the less geometric distortion there’ll be, which gives you more realistic footage with a better stereoscopic projection (i.e. 3D effect) when your 360 video is viewed through a VR headset.