Taking a TV newsroom 360 – Part 1
360-degree video, or spherical video, isn’t new, but it is just on the edge of becoming accessible. Accelerometers inside our phones make it possible to watch it without VR goggles, and several companies are already selling consumer-grade 360 cameras.
That means there’s an opportunity to use this technology to tell engaging and informative stories.
So, why is this post labeled “Part 1”? Because I want to collect the thoughts I have so far and compare them with the sessions I plan to attend at the upcoming ONA conference.
At work we’ve been floundering about and experimenting blindly, and so far I’ve mostly learned what we shouldn’t do with this technology. But we have arrived at a few rules of thumb that do work well:
- There are two kinds of audio tracks: live narration and radio-style storytelling. Live narration drastically simplifies the editing process, or removes it entirely. Radio-style audio makes more traditional broadcast storytelling possible, but the video must complement the audio without much editing.
- Editing sequences in the traditional 2D sense (wide shot, medium, close-up) is impossible, because everything is always in view.
- In general, avoid video edits. You never know which way the viewer is looking or when a cut might interrupt them.
- Silent tours don’t work; they try the viewer’s patience. Viewers need some kind of guide to tell them when the next scene is coming and what they should pay attention to.
- These cameras are omnidirectional, and so are their microphones. A secondary audio recording device is a good idea, but it requires editing beyond what the companion phone apps offer (see the muxing sketch after this list).
- 360 cameras have terrible resolution at a distance. You need to be very close to the action.
- If the photographer isn’t a desired character in the video, they need to control the camera remotely or hold it above their head (and use a hat to cover their bald spot).
- With the consumer-grade cameras from Ricoh, Samsung and LG, the images from two fisheye lenses are stitched together to create the 360 image, but the stitched file still has edges. That means the image from one of the lenses is split down the middle. Make sure you don’t face that lens for stand-ups.
- After editing, you need a third-party tool to re-insert the necessary metadata so that hosts like YouTube and Facebook can display the video properly (sketched below, after this list).
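On the secondary-audio point above: once the external recording is cleaned up, the simplest route we’ve found is to mux it under the original video without re-encoding, using ffmpeg. Here’s a minimal Python sketch of that step; the filenames are hypothetical, and it assumes ffmpeg is installed and on your PATH.

```python
import subprocess

def replace_audio(video_in: str, audio_in: str, video_out: str) -> None:
    """Swap the camera's scratch audio for an external recording."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_in,    # stitched 360 video from the camera app
            "-i", audio_in,    # cleaned-up track from the external recorder
            "-map", "0:v:0",   # take the video from the first input
            "-map", "1:a:0",   # take the audio from the second input
            "-c:v", "copy",    # copy the video stream, no re-encode
            "-c:a", "aac",     # encode the audio to a web-friendly codec
            "-shortest",       # stop when the shorter stream ends
            video_out,
        ],
        check=True,
    )

# Hypothetical filenames from one of our test shoots.
replace_audio("theta_stitched.mp4", "zoom_recorder.wav", "story_cut.mp4")
```

Because the video stream is copied rather than re-encoded, there’s no quality loss, but the spherical metadata doesn’t reliably survive the remux, which brings us to the last rule.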
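For that metadata step, the tool we’ve used is Google’s open-source Spatial Media Metadata Injector (github.com/google/spatial-media), which can be run from the command line. Here’s the same kind of hedged sketch, again with hypothetical filenames, assuming a clone of that repo as the working directory.

```python
import subprocess

# Inject the spherical metadata tags that YouTube and Facebook look for;
# "-i" tells the tool to inject rather than just examine the file.
subprocess.run(
    [
        "python", "spatialmedia", "-i",
        "story_cut.mp4",      # the edited, muxed cut from above
        "story_cut_360.mp4",  # upload this version
    ],
    check=True,
)
```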
And here are a few samples of my experiments:
Good
Not so good