Google has presented the new Pixel 6 and Pixel 6 Pro, two phones poured into the world of photography that dare, unapologetically, to compete with the latest iPhone. The battle between the two giants will be exciting. But don't be blinded by 50 MP or a much more competitive price: let's see what they offer us photographers.
The Pixel line has earned a name in mobile photography. The problem is that for three generations it kept the same 12.2 MP sensor, the now-forgotten Sony IMX363, which has equipped more than 100 phones in recent years, such as the Nokia X7 or the Xiaomi Mi Mix 3.
That is a long time in a market that devours everything. So if Google wanted the Pixels to be considered candidates for the throne of mobile photography again, it had to bet on something entirely new.
And so it has been. Thanks to the new Tensor processor, developed by Google, computational photography reaches new levels of quality. Of course, we won't be able to confirm that until we can test it. For the moment, **it seems it will be difficult to see in Spain, since it will not go on sale there until the beginning of next year**.
Two phones with cameras that leave the competition behind
If you want to sell a phone nowadays, it has to carry the best possible camera module. To offer different focal lengths (instead of relying on poor digital zoom), manufacturers have chosen to incorporate two or three sensors, each behind its own lens. Google has, of course, followed this trend.
The Pro version has three sensors in addition to the front camera; the basic version has just two plus the front one:
- Main: 50 MP f/1.85 with optical stabilization.
- Ultra-wide: 12 MP f/2.2.
- Telephoto: 48 MP f/3.5 with 4x optical zoom (Pro version only). The lens uses a folded-optics design, with a prism that bends the light 90º. Combined with Super Res Zoom, it reaches magnifications of up to 20x.
- Front: 8 MP (Pixel 6) / 11.1 MP (Pixel 6 Pro).
The camera module is very reminiscent of Apple's approach. But instead of betting on the 12 MP that iPhones are accustomed to, Google equips them with sensors of many more megapixels.
Until we test them we cannot confirm anything, but two things are clear:
- The sensors are much larger (Google quotes a size of 1/1.31 inches).
- Computational photography comes in through the front door to deliver images of sufficient quality.
One of the most striking technologies in these phones is pixel binning, which groups adjacent pixels together to create a larger effective light-gathering surface. It is something we have already seen in the Huawei P20 Pro or the Xiaomi Mi 6X.
This technology increases the effective pixel size in certain situations, such as low-light scenes, to improve noise and color. It is, in effect, an admission that nothing beats a large sensor for these purposes. But since one does not fit, they approximate it through computational photography and try to achieve similar quality.
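To make the idea concrete, here is a minimal pure-Python sketch of 2x2 binning (illustrative only: real sensors bin in hardware on the Bayer mosaic, and `bin_2x2` is our own toy helper, not anything from Google's pipeline):

```python
# Toy sketch of 2x2 pixel binning: each output pixel is the sum of a
# 2x2 block of sensor pixels, trading resolution for light-gathering
# area. Real sensors do this in hardware on the colour-filter mosaic.

def bin_2x2(image):
    """Sum each 2x2 block of a grayscale image given as a list of rows."""
    h, w = len(image), len(image[0])
    binned = []
    for y in range(0, h - h % 2, 2):
        row = []
        for x in range(0, w - w % 2, 2):
            row.append(image[y][x] + image[y][x + 1] +
                       image[y + 1][x] + image[y + 1][x + 1])
        binned.append(row)
    return binned

# A dim 4x4 capture becomes a brighter 2x2 one: each binned pixel
# collects four times the signal of a single photosite.
dim = [[1, 2, 1, 2],
       [2, 1, 2, 1],
       [1, 2, 1, 2],
       [2, 1, 2, 1]]
print(bin_2x2(dim))  # → [[6, 6], [6, 6]]
```

The output image has a quarter of the pixels, but each one carries four times the signal, which is exactly the low-light trade the article describes.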
The advantages of computational photography
So far we have covered the physical news. But mobile photography depends on software. Everything we previously had to do with editing programs, the Google Pixel 6 solves on its own with the invaluable help of the new Tensor chip.
It allows, for example, improving video with new algorithms such as HDRnet and offering 4K at 60 fps. That means it is capable of performing precise calculations on 498 MP per second.
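The 498 MP/s figure follows directly from the 4K frame size, as a quick sanity check shows:

```python
# 4K UHD is 3840 x 2160 pixels; at 60 frames per second the chip must
# touch every pixel of every frame, which is where 498 MP/s comes from.
width, height, fps = 3840, 2160, 60
pixels_per_second = width * height * fps
print(pixels_per_second)                # 497664000
print(round(pixels_per_second / 1e6))   # 498 (megapixels per second)
```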
But let's return to photography and highlight the main new features:
- Magic Eraser: lets you erase, just by tapping on it, any object that spoils the photograph, replacing it with a convincing background. And it works on any photograph.
- Face Unblur: solves the problem of motion in low-light scenes. Before shooting, the camera analyzes the scene with FaceSSD to look for faces in the frame. If it detects one, it fires up a second camera and shoots with both at the same time, usually the ultra-wide and the main one. It then combines the two results to reduce noise and merges the sharper face into the final file. Finally, it even corrects possible motion blur in the rest of the image.
- Motion mode: a mode that simulates long-exposure images in nature, in the city or at night. That is, it uses machine learning and computational photography to achieve the silky-water effect or car light trails at night in a single shot, without a long exposure.
- Quick Tap to Snap: a function that ties the front camera to the Snapchat app. Double-tap the back of the phone and the front camera activates automatically, ready to capture anything we find interesting.
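The Face Unblur description hinges on picking the sharper of two simultaneous face crops. Here is a toy sketch of that selection step, using Laplacian variance as a focus measure, a common trick in imaging; the function names are our own, and Google's real pipeline involves alignment and ML-based merging far beyond this:

```python
# Toy version of "merge the sharper face": score each face crop with
# the variance of a 4-neighbour Laplacian (higher = more edges = sharper)
# and keep the better one.

def laplacian_variance(img):
    """Focus measure: variance of the 4-neighbour Laplacian response."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def pick_sharper(face_a, face_b):
    """Return whichever face crop scores higher on the focus measure."""
    if laplacian_variance(face_a) >= laplacian_variance(face_b):
        return face_a
    return face_b

# A smooth (motion-blurred) patch vs. one with a hard edge:
blurry = [[4, 4, 4, 4], [4, 5, 5, 4], [4, 5, 5, 4], [4, 4, 4, 4]]
sharp  = [[0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 9, 9], [0, 0, 9, 9]]
print(pick_sharper(blurry, sharp) is sharp)  # → True
```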
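Motion mode's core trick, approximating one long exposure by combining a burst of short ones, can also be sketched in a few lines. This is a toy on 1-D rows of pixels under the assumption that the frames are already aligned; the real feature additionally segments the subject and uses machine learning:

```python
# Toy long-exposure simulation: average N aligned short exposures
# pixel by pixel. Moving highlights smear into a trail while static
# pixels keep their value.

def simulate_long_exposure(frames):
    """Average a burst of aligned frames (each a list of pixel values)."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

# A bright droplet moving one pixel per frame over a dark background
# becomes an even streak across the whole row:
burst = [
    [9, 0, 0, 0],
    [0, 9, 0, 0],
    [0, 0, 9, 0],
    [0, 0, 0, 9],
]
print(simulate_long_exposure(burst))  # → [2.25, 2.25, 2.25, 2.25]
```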
All that remains is to test everything we have seen and find out whether the bar for photographic quality in a phone has been raised once again. Google wants to come back through the front door of mobile photography, and we can't wait to try these phones.