After a multitude of leaks that revealed much about their cameras ahead of time, the Google Pixel 6 and Google Pixel 6 Pro were presented on October 19. Google went all out on photography with its two new phones, relying mainly on a new processor, the Tensor, designed to squeeze every drop of artificial intelligence out of the devices.
And since photography has long been important to Google, the company dedicated much of its presentation to explaining what the new Google Pixel 6 phones are capable of photographically. A pair of phones with two cameras for the "little" brother and three cameras for the "older" brother, but few differences between them beyond the strictly physical.
These are the cameras of the Google Pixel 6: technical specifications
Let's start with the cameras themselves, the photographic hardware of the Google Pixel 6. Here it should be noted that the two phones share two identical cameras, while the older one, the Google Pixel 6 Pro, adds a third camera with a telephoto lens. Apart from the difference between front cameras (8 megapixels in one and 11.1 megapixels in the other), that third camera is what separates the two models. So let's go in order, starting with the main camera.
For the main camera, Google has opted for a 50-megapixel Octa PD Quad Bayer sensor. It measures 1/1.31 inches and has 1.2-micron pixels, which become 2.4-micron effective pixels when pixel binning groups them into blocks of four. The sensor is paired with a fairly bright f/1.85 lens with an 82º field of view. The focal length is 26 millimeters, so this is a wide-angle camera, and it includes optical stabilization as well as omnidirectional PDAF focusing.
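The binning described above can be sketched in a few lines. This is a minimal, illustrative model (grayscale values in a list of lists, invented numbers), not the sensor's actual readout pipeline: averaging each 2x2 block halves the linear resolution (50 MP becomes roughly 12.5 MP) while the effective pixel pitch doubles from 1.2 to 2.4 microns.

```python
def bin_2x2(pixels):
    """Average each 2x2 block of a sensor readout into one larger 'pixel'."""
    binned = []
    for r in range(0, len(pixels), 2):
        row = []
        for c in range(0, len(pixels[0]), 2):
            block = (pixels[r][c] + pixels[r][c + 1] +
                     pixels[r + 1][c] + pixels[r + 1][c + 1])
            row.append(block / 4)  # averaged signal: better SNR per output pixel
        binned.append(row)
    return binned

# A toy 4x4 readout becomes a 2x2 binned image.
sensor = [[100, 102,  50,  52],
          [ 98, 100,  48,  50],
          [ 10,  12, 200, 202],
          [  8,  10, 198, 200]]
print(bin_2x2(sensor))  # → [[100.0, 50.0], [10.0, 200.0]]
```

Averaging four noisy readings improves the signal-to-noise ratio of each output pixel, which is why binned shots tend to look cleaner in low light.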
We come to the second camera, which packs a 12-megapixel sensor with 1.25-micron pixels. It is paired with an f/2.2 ultra-wide-angle lens. Google does not specify its focal length anywhere, although based on the sample photographs shown we assume it falls between 13 and 16 millimeters. This camera does differ slightly between the two models: in the Pixel 6 we get a 114º field of view while in the Pixel 6 Pro we get 106.5º, with gentler distortion.
Omnidirectional PDAF and laser focus for both models
Finally, we come to the third camera, the one found only on the Google Pixel 6 Pro. Here the North American company employs a 48-megapixel, 1/2-inch sensor with 0.8-micron pixels. The f/3.5 lens is a telephoto offering 4x optical zoom. Google claims it can achieve what it calls Super Resolution with a hybrid zoom of up to 20x, which should keep photos taken at that magnification from losing too much quality.
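A back-of-the-envelope calculation shows why multi-frame processing matters at 20x. Using only the figures in the article (48 MP sensor, 4x optics), the remaining zoom has to come from cropping, and the cropped area shrinks quadratically; the details of Google's Super Res Zoom upscaling are not public, so this is just the raw-data side of the story:

```python
SENSOR_MP = 48      # telephoto sensor resolution (from the spec sheet)
OPTICAL_ZOOM = 4    # optical magnification of the lens

def cropped_megapixels(total_zoom):
    """Megapixels of raw sensor data left after digitally cropping past the optics."""
    digital_factor = total_zoom / OPTICAL_ZOOM  # crop beyond what the lens provides
    return SENSOR_MP / digital_factor ** 2      # cropped area shrinks quadratically

print(cropped_megapixels(4))   # → 48.0 (pure optical zoom, full sensor)
print(cropped_megapixels(20))  # → 1.92 (only ~2 MP of raw data at 20x)
```

With under 2 MP of real data at 20x, the final image quality depends heavily on how well the computational upscaling reconstructs detail.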
In addition to these cameras, Google uses an LDAF (Laser Detection Auto Focus) sensor in both models; that is, laser-assisted focusing. Google combines this with the phase-detection focus of both main sensors, so we should get better-focused photos and much faster focusing, on top of other software "tricks" used by the company, such as the Face Unblur feature we will see later. Broadly speaking, these are the cameras of the Google Pixel 6 and Google Pixel 6 Pro.
And this is the computational photography built into the Pixel 6
As we have already mentioned on a previous occasion, Google has designed a new processor whose main virtue is executing artificial intelligence code much faster and with lower energy consumption. Among other aspects of the phone, the new Google Tensor has a decisive effect on photography, providing the processing power for the different modes and functions that debut on the Pixel 6 in this area. Let's take a look.
We start with the Magic Eraser. This function was already previewed at a past Google I/O but was not released until now, arriving with the Pixel 6. Put briefly, we can remove objects from a photograph simply by tapping on them. An algorithm takes care of erasing the object and filling the space it leaves with coherent textures. That is, we can erase a glass on a table and Google will fill the hole with wood, or erase a bird and Google will fill the hole with sky.
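The core idea of "filling the hole with coherent textures" is inpainting. Magic Eraser uses learned models, but the concept can be sketched with a naive diffusion-style fill that repeatedly replaces masked pixels with the average of their neighbors (grayscale image as a list of lists; all values here are illustrative):

```python
def inpaint(image, mask, passes=10):
    """Fill masked pixels by repeatedly averaging their 4-neighbors."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(passes):
        for r in range(h):
            for c in range(w):
                if mask[r][c]:
                    neighbors = [img[r + dr][c + dc]
                                 for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                 if 0 <= r + dr < h and 0 <= c + dc < w]
                    img[r][c] = sum(neighbors) / len(neighbors)
    return img

# A flat 'sky' of value 200 with a dark 'bird' at the center, marked by the mask.
image = [[200] * 5 for _ in range(5)]
image[2][2] = 0
mask = [[r == 2 and c == 2 for c in range(5)] for r in range(5)]
print(inpaint(image, mask)[2][2])  # → 200.0, the 'bird' blends into the sky
```

Diffusion fills work for smooth regions like sky; reproducing textured wood grain is exactly why production erasers rely on learned generative models instead.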
Another Pixel 6 function is known as Motion Mode. Here we are talking about simulating movement using artificial intelligence algorithms. As if we were capturing a long exposure, Google can add a sensation of movement to objects, blurring them in the final result. As the accompanying photograph shows, you can photograph a stopped train and have Google make it appear to be moving, or create a silky water effect from a static photograph. We'll see how well this works when we can test it first-hand.
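The blurring half of that effect is easy to sketch. Averaging each pixel with its horizontal neighbors mimics what a long exposure does to a subject moving sideways; Google's Motion Mode additionally segments the subject with AI so only it gets smeared, which this toy one-row example ignores:

```python
def motion_blur_row(row, radius=1):
    """Box-blur one row of pixels horizontally, simulating sideways motion."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

edge = [0, 0, 0, 255, 255, 255]   # a hard edge, as on a sharp, static subject
print(motion_blur_row(edge))      # → [0.0, 0.0, 85.0, 170.0, 255.0, 255.0]
```

The hard edge smears across three pixels, which is the visual cue our eyes read as movement; a larger radius reads as a faster subject or a longer exposure.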
We continue with Real Tone, which Google presents as its most inclusive approach to correctly representing the skin tones of people of color. And everyone else's, of course. Again supported by AI, Google claims it can identify with great precision the areas of the image containing faces and apply corrections to them so that skin color is not thrown off by the different lighting in the scene. Google claims it can deliver optimized images even when people with different skin colors or tones appear in the same photo.
Related to the previous mode is the new per-scene white balance correction algorithm. In addition to letting us adjust it manually for each photograph, Google claims its new automatic adjustment is more precise even in shots as tricky as backlit portraits. Again, a mode we are looking forward to testing thoroughly to see whether it is as effective as Google says.
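For context, the classic baseline for automatic white balance is the "gray world" assumption: the average color of a scene should come out neutral gray, so each channel is scaled until it does. Google's per-scene algorithm is AI-based and far more sophisticated; this sketch only shows the baseline idea it improves on:

```python
def gray_world(pixels):
    """pixels: list of (r, g, b) tuples; returns rebalanced pixels."""
    n = len(pixels)
    avg = [sum(p[ch] for p in pixels) / n for ch in range(3)]  # per-channel mean
    gray = sum(avg) / 3
    gains = [gray / a for a in avg]  # boost channels that read too low, cut the rest
    return [tuple(min(255.0, p[ch] * gains[ch]) for ch in range(3)) for p in pixels]

# A scene with a warm (reddish) color cast.
warm = [(200, 150, 100), (100, 75, 50)]
balanced = gray_world(warm)
print(balanced)  # → [(150.0, 150.0, 150.0), (75.0, 75.0, 75.0)]
```

Gray-world fails exactly where the article says Google's method shines, such as backlit portraits, because a scene dominated by one color (a sunset, a face against bright sky) violates the neutral-average assumption.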
And finally, portraits. Specifically, the new Face Unblur mode. Here Google plays with the two main cameras on each phone (the only two on the Pixel 6). The company says that when taking portraits, it captures simultaneous images with both cameras and fuses them to improve the sharpness and focus of the faces in the photograph. Photographs keep a sense of movement in other areas, but faces should come out perfectly in focus.
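The fusion step can be sketched with a simple rule: for each pixel, keep the value from whichever frame is locally sharper (higher contrast with its neighbors). Face Unblur restricts this to detected faces and handles alignment between the two cameras, none of which this toy single-row example attempts:

```python
def local_sharpness(img, r, c):
    """Sum of absolute differences to horizontal neighbors; higher means sharper."""
    s = 0
    for dc in (-1, 1):
        if 0 <= c + dc < len(img[0]):
            s += abs(img[r][c] - img[r][c + dc])
    return s

def fuse_sharpest(a, b):
    """Per pixel, keep the value from the locally sharper of two aligned frames."""
    return [[a[r][c] if local_sharpness(a, r, c) >= local_sharpness(b, r, c)
             else b[r][c]
             for c in range(len(a[0]))] for r in range(len(a))]

sharp  = [[0, 255, 0]]    # crisp edge: the 'face' is in focus in this frame
blurry = [[80, 120, 80]]  # same content, motion-blurred in the other frame
print(fuse_sharpest(sharp, blurry))  # → [[0, 255, 0]], the crisp frame wins
```

Picking the sharper source per region is the same intuition behind focus stacking; the hard parts in a phone are aligning two lenses with different fields of view and doing it fast enough for a casual snapshot.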
More information | Google