Photographer Chase Jarvis rightly said that the best camera is the one you have with you. That is why smartphones quickly began to replace compact cameras, even though the quality of their snapshots was, at first, much worse. The phone is always in your pocket, ready to capture any scene that catches your eye, and as the quality of its photos improved, its adoption grew exponentially.
In fact, compact camera manufacturers went from selling more than 120 million units in 2010 to fewer than 20 million a year since 2018. The few still being sold are high-end models, with features a phone cannot imitate. At least, until it can.
Anyone with even a minimal knowledge of photography will be frowning at that last sentence, and rightly so. A sensor smaller than a shirt button will never capture as much light as a DSLR, nor offer the versatility of the interchangeable lenses available for a professional camera. It's physics, and neither Apple nor Samsung, however deep their pockets, can break its laws.
But today, in the age of digital and social media, what matters most is not what is, but what appears to be. And this is where computational photography and artificial intelligence come into play to seemingly break the laws of physics on your iPhone's camera.
Why smartphone cameras are already more software than hardware
Smartphone cameras long ago reached the quality standard once expected of digital compacts, and most of us did not expect substantial further improvements, given the unbreakable physical barriers discussed above. But thanks to computational photography, the latest iPhone, Samsung Galaxy, and Google Pixel take snapshots that look as if they came from a professional camera. Hence the "Pro" nickname some models carry.
Every time you take a photo with an iPhone 14 Pro, the phone captures the scene with each of its lenses and then recreates the image by combining information from all of them. Apple calls this process Deep Fusion, and it is essentially an advanced version of HDR: one that keeps every part of the scene well lit without losing an iota of detail. It lets anyone take pictures worthy of a magazine cover, but it can also make them look very artificial.
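Apple's Deep Fusion pipeline is proprietary, but the underlying multi-frame idea can be sketched with classic exposure fusion. Here is a minimal illustration in Python with OpenCV; the file names are placeholders, and this is the textbook Mertens technique, not Apple's actual algorithm:

```python
import cv2
import numpy as np

# Several frames of the same scene, shot at different exposures.
# (Hypothetical file names; any aligned bracketed set will do.)
frames = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]

# Align the frames first, since handheld shots never overlap perfectly.
aligner = cv2.createAlignMTB()
aligner.process(frames, frames)

# Mertens exposure fusion: weights each pixel by contrast, saturation,
# and well-exposedness, then blends the frames into one image without
# needing the camera's response curve.
fused = cv2.createMergeMertens().process(frames)

# The result is a float image in [0, 1]; scale back to 8-bit to save.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

The result keeps the shadows from the overexposed frame and the highlights from the underexposed one, which is exactly the "everything well lit" look the article describes.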
A traditional camera lens (just like the human eye) can only capture light through a single aperture at any one time. With the iPhone Pro camera, however, faces stay lit even when shot against the light, and skies look like something out of a video game.
The camera of an iPhone, or of any other high-end smartphone, is already more software than hardware. There are no speed dials or focus rings anymore; you just point and shoot. The device takes charge of turning you into a National Geographic reporter or a Vogue photographer. Each image is processed and altered without you realizing it, chasing an ideal that not even our eyes can capture.
Such is the computational advance on the iPhone that the screen shows you the scene as your eyes see it only until you capture it. Once the photograph is taken, what you see is the idyllic, improved version made by Apple. The process is not only invisible to the user but also irreversible. Most consumers like it that way: they do not want faithful photographs, they want spectacular ones.
But in reality, photography was never realistic
For some purists, the photos taken by the iPhone camera will seem unrealistic, oversaturated, or marred by obvious processing flaws, and they would prefer their camera to be less "smart". The truth is that this debate is inherent to photography itself, as an activity and as an artistic expression. Since its invention, some intellectuals have warned how dangerous it was for art to kneel before external reality. Because for artists, as Nabokov said, "reality" only makes sense in quotation marks.
Photography could not be art, and it was dangerous, they believed, because it deprives us of the painter's intention, the only thing that can make us see what a mechanical device will never perceive. Yet photography did not end painting; it set it free. No one had to waste their time faithfully portraying people or landscapes anymore. Impressionism is, in part, the celebration of this liberation: photography would portray reality as the public sees it, and painting as the artist feels it.
It soon became clear that there was no such distinction, and that a lens is not impartial either, because it is directed by a person's eye. Photography was art too, and it became a powerful medium for transmitting ideas, influencing others' behavior, and portraying, better than any brush, the human condition as we enjoy and suffer it.
Why not paint the photographs?
If photography drinks from painting, and painting from photography, why not extend the capabilities of cameras with algorithms that not only control exposure and tonality but also paint over them? The most amazing advances in artificial intelligence are already out there: DALL-E, Midjourney, and Stable Diffusion can generate or modify images from natural-language prompts.
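Of the three, Stable Diffusion is openly available, so a few lines of Python with the Hugging Face diffusers library are enough to see the idea in action. A minimal sketch, assuming a machine with an NVIDIA GPU; the model checkpoint and prompt are just one possible choice:

```python
import torch
from diffusers import StableDiffusionPipeline

# Download the open Stable Diffusion weights from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# A plain-language prompt is the only "command" the model needs.
prompt = "a neon-lit alley at night, rain on the pavement, cinematic"
image = pipe(prompt).images[0]
image.save("alley.png")
```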
This allows digital paintings and designs to be created in seconds, and their astonishing skill and speed have already stirred both passion and fear. More than a few digital illustrators have voiced their concerns: they are afraid a machine will replace them, just as the camera replaced the cheap portrait painters of its day.
But this technology is not merely artistic; it can also improve photographs. With algorithms that turn any problem into a mathematical prediction, it can rescue out-of-focus images or zoom into them without losing sharpness.
Yes, just like in the series CSI, which used to make those of us who knew a little about computers laugh. That was science fiction, because you cannot extract information that simply is not there: the detail needed to enlarge a photograph without losing sharpness was missing. But we now have algorithms capable of producing a very accurate approximation of what an image would look like at a higher resolution, or if it had been properly focused.
Strictly speaking, this is no longer computational photography but image generation using artificial intelligence. It works much like our brain: if we see a blurry photograph of a ladybug, our mind can imagine what it would look like in sharp focus. Now computers can too, offering us the improved version in a few seconds.
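As an illustration, OpenCV's contrib package ships support for pretrained super-resolution networks; the sketch below upscales an image fourfold with EDSR. The model file must be downloaded separately, and this is just one of many such techniques, not the one any particular phone uses:

```python
import cv2

# Pretrained EDSR weights, downloaded beforehand from the
# opencv_contrib repository (TensorFlow .pb format).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")
sr.setModel("edsr", 4)  # network name and upscaling factor

img = cv2.imread("ladybug.jpg")

# The network predicts the missing high-frequency detail instead of
# merely interpolating between existing pixels, which is all that
# naive digital zoom can do.
upscaled = sr.upsample(img)
cv2.imwrite("ladybug_4x.jpg", upscaled)
```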
The camera of the latest Samsung smartphones does not capture the moon, it generates it
Samsung has already begun to apply these techniques, as its own users have discovered, sparking controversy on Reddit in a thread about the photographs of the moon that the new Galaxy S23 Ultra is capable of taking. "They look fake," say several owners of the phone.
They look fake because, in reality, they are. The impressive zoom is aided by a neural network trained on thousands of high-quality photographs of the moon, so that when the user takes a snapshot, an algorithm adds the texture and details of our natural satellite: details the network knows are there, but that the lens, with its physical limitations, could not capture.
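Samsung has not published its pipeline, but the basic principle, recognize the moon and blend in detail learned elsewhere, can be caricatured in a few lines. Everything here is hypothetical: the file names, the circle-detection parameters, the reference texture standing in for the network's learned detail, and the blend weight:

```python
import cv2
import numpy as np

shot = cv2.imread("phone_moon_shot.jpg")            # the user's blurry capture
reference = cv2.imread("learned_moon_detail.jpg")   # stand-in for learned detail
gray = cv2.cvtColor(shot, cv2.COLOR_BGR2GRAY)

# Find the bright disc of the moon in the frame.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                           minDist=1000, param1=100, param2=30,
                           minRadius=20, maxRadius=400)

if circles is not None:
    # Toy example: assumes the disc sits fully inside the frame.
    x, y, r = np.round(circles[0, 0]).astype(int)
    # Resize the high-detail reference to the detected disc and blend
    # it over the capture: detail the sensor never actually recorded.
    patch = cv2.resize(reference, (2 * r, 2 * r))
    roi = shot[y - r:y + r, x - r:x + r]
    shot[y - r:y + r, x - r:x + r] = cv2.addWeighted(roi, 0.4, patch, 0.6, 0)

cv2.imwrite("enhanced_moon.jpg", shot)
```

The real system presumably uses a neural network rather than a fixed texture, but the effect is the same: the output contains information the sensor never captured.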
The manufacturer has stated that its algorithms use photographs representative of what the human eye can capture. So, strictly speaking, it is not lying, as some users conclude. But the algorithms are not improving the photograph; they are generating it outright. "The moon photos from the Galaxy are fake. Samsung's marketing is deceptive. It is adding detail where there is none," says the user who started the discussion.
Misleading or not, the photograph is spectacular, which is what the consumer wants. Those who prefer to handle the camera the traditional way can always fall back on RAW mode.
These techniques could be adopted in more settings and by other manufacturers. A phone could learn its owner's face from photos taken with the rear camera and use that information to improve every selfie taken with the front one. It could also introduce filters that completely change the lighting of a scene, transforming, say, a bathroom into a neon-lit alley, or giving an image taken at the office the tones of one shot at sunset by the sea.
For Apple, Samsung, and Google, training these algorithms is easy: they can learn from all the photos uploaded to the internet and from those their users will take. No image will be out of focus or overexposed. They will all be perfect and wonderful.
But if every photograph comes out spectacular and perfect, no photograph will ever surprise us again.