Yesterday, we released Hyperlapse, a new app that lets you shoot and share time lapse videos. Time lapse is a technique in which frames are played back at a faster rate than that at which they were recorded. It lets you enjoy a sunset in 15 seconds, or watch fog drift over hills like water flowing over rocks. Because time lapse exposes the motion in our daily lives that would otherwise be invisible, it can be mesmerizing to watch.

Hyperlapse is a special form of time lapse in which the camera itself also moves. Traditionally, shooting a hyperlapse is an arduous process that involves careful planning, a lot of equipment, and professional video editing software. With Hyperlapse, our goal was to radically simplify that process: you get a single record button and a playback screen where you select the playback speed. To achieve fluid camera motion, we used Cinema, the video stabilization algorithm that also powers Instagram video. In this article, we describe the stabilization algorithm and the technological challenges we encountered while reducing a complex time lapse workflow to a simple, easy-to-use interface.
Cinema Stabilization Algorithm
Video stabilization is a key part of capturing beautiful, smooth video. In the film industry, it is achieved with camera rigs that physically isolate the camera from the operator's movement. Because we can't expect an Instagram user to carry such a rig to capture the world's moments, we developed Cinema, which uses the smartphone's built-in gyroscope to measure and remove unwanted shake. The diagram below illustrates the Cinema stabilization pipeline: we feed the gyroscope readings and the video frames into the stabilizer and get back a set of new camera orientations. These orientations are used to synthesize a smooth camera motion with the shifts and shakes eliminated.
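The core idea of gyro-based stabilization can be sketched in a few lines. This is a minimal, hypothetical illustration, not Instagram's actual implementation: integrate the gyroscope's angular-velocity samples into a per-frame camera angle, low-pass filter that path to get a smooth "virtual camera," and apply the difference as a per-frame correction.

```python
# Hypothetical sketch of gyro-based smoothing (1-D, one rotation axis).
# Names and the simple exponential filter are assumptions for illustration.

def integrate_gyro(angular_velocities, dt):
    """Accumulate angular-velocity samples (rad/s) into absolute angles."""
    angles, theta = [], 0.0
    for w in angular_velocities:
        theta += w * dt
        angles.append(theta)
    return angles

def smooth_path(angles, alpha=0.9):
    """Exponential moving average as a simple low-pass filter."""
    smoothed, s = [], angles[0]
    for a in angles:
        s = alpha * s + (1 - alpha) * a
        smoothed.append(s)
    return smoothed

def corrections(angles, smoothed):
    """Per-frame rotation to apply so the shaky path follows the smooth one."""
    return [s - a for a, s in zip(angles, smoothed)]
```

Each correction is the rotation the renderer would apply to that frame so the output follows the smooth path instead of the shaky one.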
The video below demonstrates how the Cinema algorithm warps each frame to counteract camera shake. The area inside the white outline is the region visible in the output video. Note that the edge of the warped frame never crosses the white border: the stabilization algorithm computes the smoothest camera path it can while guaranteeing that no frame is ever warped so far that areas outside it become visible in the final video. This also means we need to crop, or zoom in, to leave a buffer around the visible area. That buffer gives us room to shift the frame against hand shake without empty regions appearing in the final video.
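The crop-buffer constraint can be illustrated with a toy 1-D model (hypothetical, not the real warp geometry): with a zoom factor z, the visible window is width/z wide, leaving a margin on each side, and a stabilizing shift is only legal if it never drags the frame edge into that window.

```python
# Toy 1-D model of the crop buffer; function names are illustrative.

def max_legal_shift(frame_width, zoom):
    """Largest correction (pixels) before a blank area would become visible."""
    visible = frame_width / zoom
    return (frame_width - visible) / 2

def clamp_shift(shift, frame_width, zoom):
    """Clamp a proposed stabilizing shift to the legal buffer."""
    limit = max_legal_shift(frame_width, zoom)
    return max(-limit, min(limit, shift))
```

For a 1920-pixel-wide frame at 2x zoom, the visible window is 960 pixels wide, so the stabilizer may shift the frame by up to 480 pixels in either direction before the edge would show.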
In Hyperlapse, you drag a slider to choose the time lapse speed after you have recorded the video. For example, a 6x speed corresponds to keeping every 6th frame of the input video and playing the result back at 30 fps, so the video plays 6 times faster than the original.
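The frame selection just described is straightforward; this small sketch (ours, not the app's code) shows the arithmetic behind the slider:

```python
# A 6x time lapse keeps every 6th frame and plays the result at 30 fps.

def select_frames(frames, speed):
    """Keep every `speed`-th frame of the input."""
    return frames[::speed]

def output_duration(num_input_frames, speed, fps=30):
    """Duration (seconds) of the resulting time-lapse clip."""
    return len(range(0, num_input_frames, speed)) / fps
```

Two seconds of input at 30 fps (60 frames) at 6x yields 10 frames, i.e. a third of a second of output.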
We adapted the Cinema algorithm to compute orientations based only on the retained frames, which means the blank-area constraint applies only to those frames. As a result, we can output a smooth video even when the input becomes violently shaky over time. You can watch the video below for a better understanding.
As mentioned before, we need to zoom in to create some buffer against hand shake without letting any blank areas (regions outside the input frame, for which we have no pixel data) appear in the output video. Every video stabilization algorithm trades resolution for stability; Cinema, however, zooms intelligently based on how shaky the input video actually is. You can watch the video below for a better understanding.
The video on the left is only slightly shaky because it was shot while standing still. In this case we only need a small zoom, since little buffer is required to counteract the shake. The video on the right was shot while walking, so the camera shakes much more and we need to zoom in further to leave enough room for a smooth result. This is the trade-off between resolution and smoothness of camera motion, because zooming in shrinks the visible area. Our adaptive zoom algorithm minimizes camera shake while preserving as much effective resolution as each video allows.
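One simple way to make this trade-off adaptive, sketched below under assumptions of our own (the 1-D shift model and the percentile-coverage rule are illustrative, not Cinema's actual method), is to pick the smallest zoom whose crop margin absorbs most of the measured shake rather than a fixed worst-case zoom:

```python
# Hypothetical adaptive zoom: smallest zoom whose per-side margin covers
# `coverage` of the observed shift magnitudes.

def choose_zoom(shifts, frame_width, coverage=0.95):
    """Return the zoom factor whose margin absorbs `coverage` of |shifts|."""
    magnitudes = sorted(abs(s) for s in shifts)
    idx = min(int(coverage * len(magnitudes)), len(magnitudes) - 1)
    needed_margin = magnitudes[idx]
    # Per-side margin = (frame_width - frame_width / zoom) / 2,
    # so zoom = frame_width / (frame_width - 2 * margin).
    return frame_width / (frame_width - 2 * needed_margin)
```

A nearly still video (tiny shifts) yields a zoom close to 1.0, while a walking shot with large shifts forces a bigger zoom, matching the left/right comparison above.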
Putting it all together
“The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.” – Tom Cargill, Bell Labs
In the early stages of Hyperlapse development, we decided we wanted a slider to adjust the speed of the time lapse. We also wanted the feedback to be immediate, to make the experience feel effortless even while complex calculations run underneath. Every time you move the slider, we perform the following steps:
1. We request frames from the decoder at the new playback rate.
2. We use the Cinema stabilization algorithm in the background thread to calculate the optimal zoom and a set of orientations for the new zoom and time lapse amount.
3. We keep playing the video while waiting for the new stabilization data to arrive, interpolating the previously computed orientations to yield orientations for the newly kept frames.
4. Once the new orientations arrive from the stabilizer, we swap out the old set for the new one.
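The steps above can be sketched as a small update routine. Everything here is illustrative (the function names and the nearest-neighbor resampling stand-in for "interpolation" are our assumptions, not the app's real architecture): serve provisional orientations immediately, then swap in the freshly computed set when the background stabilizer delivers it.

```python
# Hypothetical sketch of the slider-update flow.

def provisional_orientations(old_orientations, speed):
    """Stand-in orientations for the newly kept frames: resample the
    previously computed per-frame orientations at the new stride."""
    return old_orientations[::speed]

def on_slider_change(old_orientations, speed, run_stabilizer):
    """Return orientations to play with right now.

    `run_stabilizer(speed)` stands in for the background Cinema pass; it
    returns None while still computing, or the fresh orientation set."""
    provisional = provisional_orientations(old_orientations, speed)
    fresh = run_stabilizer(speed)
    return fresh if fresh is not None else provisional
```

Playback never stalls: the provisional set keeps the video moving, and the swap in step 4 happens whenever the stabilizer finishes.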
We perform the steps above every time you move the slider, without interrupting playback. The end result is an app that feels light and responsive. We can't wait to see the creativity Hyperlapse unlocks in our community, now that you can shoot a hyperlapse at the touch of a button.