Apart from incorporating “the best camera system to date” in its smartphones, one of the most interesting novelties of the just-announced iPhone 13 is Cinematic Mode, which appears both in the advanced iPhone 13 Pro and 13 Pro Max and in the more modest iPhone 13 and 13 mini. It is a feature that lets you record with a shallow depth of field and add focus transitions between subjects, something that can also be done after the fact.
In other words, it is something like Portrait mode landing in the world of video. We have certainly seen it before on some high-end Android phones (which have also experimented with the possibilities of HDR video), but Apple is now introducing it in a big way by making it possible to apply the effect after the footage has been recorded; in practice, the iPhone 13 models are “the only devices that allow you to edit the depth-of-field effect on a video after recording.”
That is useful if we are not happy with the result we get in situ, which should already be good: the phone can detect which person is in the foreground and which is in the background, and shift focus from one to the other when relevant. Of course, this can also be done manually (simply by tapping on the subject we want in focus), both directly in the Photos app and in iMovie for iOS (and, coming soon, in iMovie for macOS and Final Cut Pro).
But what is most striking is undoubtedly that the phone can do it automatically. For example, when a new subject enters the scene, or when a subject in the foreground looks toward another further behind, the iPhone itself shifts focus and adjusts the depth of field. To see what this means in practice, take a look at the following video:
As you have seen, the result is impressive and, without a doubt, as Apple itself says, this feature can “change the story” of our videos. It adds possibilities that can give our home movies an almost cinematic quality. But where does all this come from? Well, let Apple tell us about it:
Before creating the iPhone’s Cinematic Mode, we thoroughly studied the selective-focus techniques that great filmmakers use to bring excitement and suspense to their stories.
On a Hollywood shoot, controlling focus is the job of a whole team of specialists. The director of photography decides what to focus on and when, while the camera assistant takes care of the smooth transitions, the timing, and making sure everything comes out sharp and in focus.
Now imagine getting the iPhone to do both.
And how do you get it?
The answer is both simple and very complex: creating a depth map of the scene. Again, let Apple tell us what it has done:
The first step was to generate complex depth data that would allow Cinematic Mode to calculate the exact distance of the people, animals and other elements that appear in the scene. For video recording, this data must be produced continuously at 30 frames per second.
Then we taught the Neural Engine to work like a film crew would. That is, to make decisions on the fly about what to focus on and to generate smooth transitions whenever there is a change. Of course, you can always take control and adjust the focus yourself, either while recording or when editing the video afterwards.
We needed a chip with power up to the challenge. And the A15 Bionic nails its role.
In short, running machine learning algorithms, rendering autofocus changes, allowing manual adjustment, and grading each frame in Dolby Vision, all in real time, requires stratospheric power.
It is as if you carried a film studio in your pocket.
Without a doubt, as Apple says, the power of the new A15 Bionic plays an important role in handling all this data, but we should not underestimate the importance of the new optics, or of the image sensors, with special relevance to the LiDAR scanner, Apple’s version of 3D ToF (time-of-flight) sensors.
These work like a submarine’s sonar: they precisely measure the distance to an object by projecting a beam of infrared light toward it; after bouncing off the objects, the light returns to the sensor, and the round-trip time allows the distance to be calculated.
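The underlying arithmetic of that time-of-flight principle is simple enough to sketch. The function and figures below are purely illustrative (they are not Apple’s actual sensor API or specifications); the only physics involved is halving the round-trip path of the light pulse:

```python
# Minimal sketch of the time-of-flight principle behind a ToF/LiDAR sensor.
# Names and numbers are illustrative assumptions, not Apple's actual API.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the object: the light travels there and back,
    so the one-way distance is half the round-trip path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after ~13.34 nanoseconds bounced off something
# roughly two metres away.
print(tof_distance(13.34e-9))
```

The hard part in a real sensor is, of course, timing pulses at nanosecond resolution for an entire grid of points, 30 times per second.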
In addition, according to DPReview, the depth map could also draw on the differences the camera detects between what the wide and ultra-wide lenses capture or, even, use a dual-pixel sensor with split photodiodes (something the brand has never confirmed).
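The two-lens idea DPReview speculates about rests on classic stereo geometry: the further a feature shifts between the two views (the disparity), the closer it is. Here is a hedged sketch of that relation; the focal length, baseline and disparity values are invented for illustration and are not actual iPhone specifications:

```python
# Sketch of depth from stereo disparity, the principle a wide/ultra-wide
# camera pair could exploit. All figures are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo relation: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed values: 1500 px focal length, 1 cm baseline between the lenses,
# and a feature shifted 10 px between the two views -> 1.5 m away.
print(depth_from_disparity(1500.0, 0.01, 10.0))
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why stereo depth gets noisy for distant objects.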
But, beyond the hardware, achieving a feature like this takes much more, and this is where computational photography comes in; it is surely here that the new iPhone 13 cameras make their greatest advance. It is the advanced machine learning algorithms that are responsible for the camera automatically focusing on the subject of greatest interest, and for readjusting the focus when that subject looks away.
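To make the idea concrete, here is a toy sketch of the kind of logic Apple describes: each frame, pick the most “interesting” subject from the depth map, then ease the focus distance toward it instead of snapping, which is what produces the gradual rack-focus look. The scoring rule and smoothing factor are invented for illustration; Apple has not published its actual algorithm:

```python
# Toy sketch of per-frame subject selection plus a smooth focus transition.
# Heuristics here are assumptions, not Apple's published method.

def pick_subject(subjects: list[dict]) -> dict:
    """Naive priority: prefer subjects facing the camera, then the closest.
    (False sorts before True, so facing subjects win the min().)"""
    return min(subjects, key=lambda s: (not s["facing_camera"], s["depth_m"]))

def smooth_focus(current_m: float, target_m: float, alpha: float = 0.2) -> float:
    """Exponential smoothing: move a fraction of the remaining distance each
    frame, yielding a gradual rack focus rather than an abrupt jump."""
    return current_m + alpha * (target_m - current_m)

subjects = [
    {"name": "foreground", "depth_m": 1.2, "facing_camera": False},
    {"name": "background", "depth_m": 3.5, "facing_camera": True},
]
focus = 1.2  # currently focused on the foreground subject
target = pick_subject(subjects)["depth_m"]  # background subject faces us
for _ in range(3):  # a few frames into the transition
    focus = smooth_focus(focus, target)
print(round(focus, 3))  # part-way from 1.2 m toward 3.5 m
```

A real system would of course run detection, gaze estimation and depth estimation per frame on the Neural Engine; the sketch only shows the decision-and-transition shape of the problem.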
This sounds great on paper but, as always, we will have to test how accurate it is in all kinds of situations and in the hands of all kinds of inexperienced users. Either way, the fact that the effect can be edited after recording is certainly something that could revolutionize the way videos are made, not only with phones but with any type of device.
Indeed, as the aforementioned DPReview article points out, freeing filmmakers, cinematographers, camera operators and the rest from the work of pulling focus, so they can concentrate on creativity, is certainly something extraordinary. And on top of that, the iPhone 13 offers this feature to all kinds of audiences in a simple way. Can you ask for more?
Cover photo | Captured from the iPhone 13 Pro video “Hollywood in your pocket” | Apple