The cinema mode of the new iPhone 13 is Apple’s big bet this year. It aims to go beyond a portrait mode for video, automatically shifting focus to people or objects according to their prominence in the scene, all in real time and with the depth data saved for post-processing, something Apple achieves thanks to the A15 Bionic.
In an interview with TechCrunch, Apple Vice President Kaiann Drance and Human Interface designer Johnnie Manzari revealed details about how cinema mode works at a technical level and which elements were taken into account when developing it.
Apple’s cinema mode, explained
Apple says that developing cinema mode was “much more challenging” than developing portrait mode. Video requires more information about the depth of the scene so that the blur can work on people, pets, objects and so on, and that depth data has to be processed continuously and in real time, even while the camera and subjects are moving.
On top of that, cinema mode is rendered in Dolby Vision HDR, although it is limited to Full HD, and the effect has to be shown live in the preview while we are recording. The goal was an automatic mode capable of detecting the elements of a scene, adjusting focus accordingly and processing everything in real time.
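To get a sense of what “continuous depth in real time” means in terms of public APIs, here is a minimal Swift sketch using AVFoundation’s AVCaptureDepthDataOutput, which can deliver a per-frame depth map alongside the video stream on supported iPhones. Apple’s actual cinema mode pipeline is private; this only illustrates the kind of live depth stream the effect is built on, and the class name and configuration are assumptions for the example.

```swift
import AVFoundation

// Illustrative sketch only: streaming per-frame depth with AVCaptureDepthDataOutput.
// Whether depth can be delivered, and at what resolution, depends on the device
// and its active capture format.
final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    private let depthOutput = AVCaptureDepthDataOutput()

    func configure() {
        session.beginConfiguration()
        session.sessionPreset = .hd1920x1080   // cinema mode is likewise limited to 1080p

        // The dual-wide camera can produce depth from parallax between its lenses.
        guard let device = AVCaptureDevice.default(.builtInDualWideCamera,
                                                   for: .video,
                                                   position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        if session.canAddOutput(depthOutput) {
            session.addOutput(depthOutput)
            depthOutput.isFilteringEnabled = true           // temporally smoothed depth
            depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        }
        session.commitConfiguration()
    }

    // Called for every depth frame while the session runs; a real pipeline would
    // feed this into segmentation and blur stages on the GPU / Neural Engine.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        _ = converted.depthDataMap   // CVPixelBuffer with per-pixel depth values
    }
}
```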
Every available resource is used to process this mode in real time, but power consumption was taken into account so that it does not use too much battery
Apple indicates that cinema mode uses both the phone’s CPU and GPU, as well as the Neural Engine. It also relies on the new sensor-shift stabilization and takes advantage of the extra brightness of the sensor and the lens, which now has an f/1.5 aperture. In short, Apple says the main elements of its cinema mode are the following:
- Recognition and tracking of subjects
- Focus lock
- Rack focus (organic focus movement from subject to subject)
- Image overscan and in-camera stabilization
- Synthetic bokeh (lens blur; see the sketch after this list)
- Post-shot edit mode that lets you modify your focus points even after shooting
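Two of the items above, subject recognition and synthetic bokeh, can be loosely approximated with public frameworks. The hedged Swift sketch below uses Vision’s person segmentation (iOS 15+) to build a matte for one frame and Core Image to blur only the background; the function name and blur radius are invented for illustration, and this is not Apple’s actual cinema mode pipeline.

```swift
import Vision
import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical single-frame approximation of "segment the subject, blur the rest".
func fakeCinematicFrame(_ frame: CIImage) -> CIImage? {
    // 1. Subject recognition: per-pixel person mask from Vision.
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced
    let handler = VNImageRequestHandler(ciImage: frame, options: [:])
    try? handler.perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // Scale the low-resolution mask up to the frame size.
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: frame.extent.width / mask.extent.width,
        y: frame.extent.height / mask.extent.height))

    // 2. Synthetic bokeh: blur the whole frame, then composite the sharp
    //    subject back on top wherever the mask marks a person.
    let blurred = frame.applyingGaussianBlur(sigma: 12).cropped(to: frame.extent)
    let blend = CIFilter.blendWithMask()
    blend.inputImage = frame          // sharp subject
    blend.backgroundImage = blurred   // blurred background
    blend.maskImage = mask            // white where a person was detected
    return blend.outputImage
}
```

A real video pipeline would of course run this per frame with temporal smoothing and depth-aware blur rather than a flat Gaussian, which is precisely the part Apple offloads to the A15 Bionic.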
In other words, the A15 Bionic processes all the information in the scene in real time, with the main objective of segmenting objects and people so they can be selectively isolated from the background. It even uses the phone’s own accelerometer to predict the movements of the person recording.
If we are recording one person and move slightly toward another, the second person comes into focus thanks to the iPhone’s own accelerometer
That is, if we are recording one person and move slightly to focus on another, the new subject is already in focus before we have even finished the movement, because cinema mode anticipates the scene. It also analyzes people’s gaze to understand whether they are about to enter the scene or not.
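Apple has not documented how this anticipation works internally; the article only mentions the accelerometer and gaze analysis. As a purely speculative illustration, the Swift sketch below uses Core Motion’s device-motion stream (which fuses the accelerometer and gyroscope) to flag a sustained pan toward a new subject, the kind of signal a rack-focus decision could take into account. All names and thresholds here are invented.

```swift
import CoreMotion

// Speculative sketch: detect that the person filming is swinging the camera
// toward another subject, so a focus transition could start early.
final class PanAnticipator {
    private let motion = CMMotionManager()
    var onLikelyPan: ((Double) -> Void)?   // called with the yaw rotation rate (rad/s)

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { [weak self] data, _ in
            guard let data = data else { return }
            // A sustained rotation around the vertical axis suggests a pan; a real
            // system would combine this with gaze and subject-tracking signals
            // before committing to a rack focus.
            let yawRate = data.rotationRate.z
            if abs(yawRate) > 0.5 {        // invented threshold
                self?.onLikelyPan?(yawRate)
            }
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```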
In short, cinema mode is taking its first steps, just as portrait mode once did, and it puts all of the iPhone’s resources at your disposal.
Via | TechCrunch