Adobe, like every software company, is moving toward artificial intelligence so as not to be left behind. One result of that push is the Neural Filters in Adobe Photoshop. The latest update improves the behavior of the Depth Blur filter, so let's analyze it and see whether we can forget about the diaphragm or not.
It is designed for photographs taken with mobile phones. Since their lenses lack an adjustable diaphragm, this filter can improve on the results of the camera app. Best of all, we can apply it to any file.
The filter still sits in the Beta section of the Neural Filters. That means Adobe keeps testing it and relies on us users to guide it until it is perfect, although it may bother many of us to work as guinea pigs without any compensation.
For this reason, and after playing with it on three different computers, all with an i7 processor and at least 16 GB of RAM, I have run into many malfunctions. The filter freezes the program; sometimes it works, sometimes it doesn't.
The results are better than in the previous version. And according to Piximperfect, it is faster because it no longer runs in the cloud: all operations are done on our computer, so we no longer have to worry about sharing our photos.
The new version of the 'Depth Blur' neural filter
This new version consumes a lot of resources, and the effect takes time to appear on screen. Performance depends on the graphics card we have: if the filter detects that it cannot use the GPU, it falls back to the processor.
We just have to activate the filter and wait for it to do its work. That is its main virtue, and its main problem if we want professional results: there is no way to paint a mask to tell it exactly what we want to blur. It works on its own, so to speak. And sometimes it gets it wrong, and there is no choice but to accept the results.
But this time it has new features that give us some hope. Whether the final finish is perfect depends on luck, or rather on the photograph itself. So here are the steps to achieve the best result:
- We choose the right photograph. In this case it is a sculpture I found in a Madrid neighborhood.
- After developing it in Adobe Lightroom, I open it as a smart object in Photoshop (Photo > Edit In > Open As Smart Object).
- It is the right time to blur the photo, so I go to Filter > Neural Filters and activate Depth Blur. If you haven't downloaded it yet, now is the time.
- For it to work correctly, it is best to check the Focus on subject option. It is the new function that allows a more reliable result.
- If we want to exaggerate the effect, we can raise the Blur intensity parameter up to 100.
- And with the Focal range parameter we can simulate the blur of the lens we want. In this case I have no choice but to go up to 100 to avoid problems with the edge of the sculpture at the bottom.
- To finish, we can change the Temperature, Saturation, or Brightness of the background, but I can't find a photographic reason to do so.
- The most interesting parameter is Grain, which lets you restore noise in the blurred area so it looks more natural.
- We can choose among several outputs for the result. If we start from a smart object, the best choice is, of course, Smart Filter. But if our computer struggles with all the information it is moving, there are other options, such as New Layer, that speed up the filter.
If we are not convinced by the result, we can always check the Depth Map option, which is very useful for other interesting effects within the photograph. Tell us how it works for you, because we have had a really hard time with it.
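The Depth Map output hints at how this kind of filter works under the hood: each pixel is blended between a sharp and a blurred version of the image, weighted by its estimated depth. This is not Adobe's actual implementation, just a minimal NumPy sketch of the idea, with a naive box blur standing in for a proper lens blur and the intensity normalized to 0–1 instead of Photoshop's 0–100 slider:

```python
import numpy as np

def box_blur(img, radius=3):
    """Naive box blur on a 2D grayscale array (stand-in for a lens blur)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    padded = np.pad(img, radius, mode="edge")
    # Blur rows, then columns, with a uniform kernel.
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def depth_blur(img, depth, intensity=1.0):
    """Blend sharp and blurred pixels by normalized depth.

    depth: 2D array where low values = near (kept sharp) and
    high values = far (blurred). intensity is 0-1, not 0-100.
    """
    span = depth.max() - depth.min()
    d = (depth - depth.min()) / (span + 1e-8)   # normalize to 0-1
    w = np.clip(d * intensity, 0.0, 1.0)        # per-pixel blur weight
    return (1 - w) * img + w * box_blur(img)
```

With a gradient depth map, the "near" rows stay untouched while the "far" rows take the blurred pixels, which is roughly what the Focal range slider controls in the real filter.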
When it works well (we don't know for sure where the bug is), it will be one of the most interesting filters in the program. But while it remains in beta, there is little we can do with it.