Google I/O 2017 was a great edition of the annual developer event: in addition to bringing more news about Android Oreo, it laid the foundations of Google's artificial intelligence applied to image recognition and condensed it into what we now know as Google Lens.
In addition to reviewing what image recognition would be able to do with photo noise, Sundar Pichai, CEO of Alphabet, said (at minute 10 of the presentation): “very soon, if you take a picture of your daughter at a baseball game, and there is something obstructing it, we can do the hard work, remove the obstruction and have the image of what matters to you in front of you”. He closed that part of the presentation with a “we are clearly at an inflection point with vision”.
The image below is the original photograph to which Google said it would soon be able to apply all its image-recognition intelligence:
And this is the image as fixed by Google, which Pichai showed off:
This is what Google Photos achieves almost 5 years later
The pity of Pichai's statements is that, although we users believed Google had the technology to make the demonstration real, the feature did not arrive “very soon”. In fact, as of 2022 it still has not arrived as it was shown, integrated in some way into Google Lens or the Google Photos service. However, a feature that uses this technology has made it to the Google Pixel 6. It is called ‘Magic Eraser’, and we have already tested it and compared it against Photoshop, with very good impressions.
Having already run real-world tests with our own photos, we now wanted to put the Magic Eraser to the test on exactly what Google, almost five years ago, said it would be able to do. This is the result of two different attempts:
As we can see, both results are “not bad”, but they are a long way from what Google claimed to have “very soon” in 2017. Google also presented it as if the fence would be recognised automatically, whereas we had to trace every metal bar by hand to remove it. The Magic Eraser is capable of detecting objects and suggesting their deletion, but here it suggested nothing, so the selection was 100% manual.
Both in the sand and on the girl's body you can clearly see where we have been erasing, even after going back over the areas that were left with the most streaks. There are also artifacts and poor edits in critical areas, such as the face, which in the original demo came out perfect. The feature is therefore still not up to the level of that demonstration, although there is something to bear in mind.
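Google has not published how the Magic Eraser fills in the erased region; it almost certainly uses a learned generative model running on the Tensor SoC. As a rough illustration of the general idea behind object removal, here is a toy sketch of classical diffusion-based inpainting: the selected (masked) pixels are repeatedly replaced by the average of their neighbours until the surrounding texture "flows" into the hole. The `inpaint` function and the sample image are our own invention for illustration, not Google's code.

```python
import numpy as np

def inpaint(image, mask, iterations=200):
    """Toy diffusion inpainting: fill masked pixels by repeatedly
    averaging their four neighbours until the background flows in.
    image: 2-D float array; mask: boolean array, True where to erase."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()  # crude initial guess for the hole
    for _ in range(iterations):
        # average of the four neighbours of every pixel
        avg = (np.roll(out, 1, axis=0) + np.roll(out, -1, axis=0) +
               np.roll(out, 1, axis=1) + np.roll(out, -1, axis=1)) / 4.0
        out[mask] = avg[mask]  # only the selected pixels are updated
    return out

# A flat grey "background" with a bright obstruction in the middle
img = np.full((9, 9), 100.0)
img[3:6, 3:6] = 255.0
mask = img > 200              # select the obstruction, as we did by hand
filled = inpaint(img, mask)   # masked area converges to the background
```

On a uniform background this works perfectly, but on textured areas like sand or a face this kind of smoothing produces exactly the visible smears and artifacts we describe above, which is why modern erasers use learned models instead.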
For this test we relied on the AI power of the Google Tensor SoC, whereas Google Lens and other Google Photos features are processed not on the device but on Google's servers, which are far more powerful than any smartphone. It would be interesting to see what those are capable of today.