Photoshop, one of the most popular editing programs in the Adobe suite, has been updated with the integration of Adobe Firefly, the company’s powerful AI model that creates images from a short text description and promises to save hours of work.
The company has built its AI model into a Photoshop feature called Generative Fill, which works similarly to DALL-E and other AI-powered platforms that generate imagery from a text description. In Adobe’s case, the feature can also remove or add elements in a photo using AI, as well as expand an image beyond its original borders, a capability DALL-E also offers.
Unlike the OpenAI or Midjourney models, which simply create an image with no way to edit it afterward, Generative Fill renders its results on a new “generative layer,” so users can keep refining their photos in a more intuitive way. The feature will also be integrated into all of Photoshop’s selection tools. For now, however, Generative Fill is only available in the desktop version of Photoshop, and it is still in beta.
Here’s an example of the results Photoshop’s new AI feature can generate, including expanding an image and creating new elements within a photograph.
Photoshop gets other AI-powered features
In parallel, Adobe has announced several other features to improve the editing experience in Photoshop, such as a new presets mode. It lets users preview and apply filters to an image far more intuitively, without having to adjust tones and other settings manually. At launch, Photoshop ships with 32 presets that can be easily applied, undone, and edited if necessary.
Another AI-powered feature coming to Photoshop is the Remove tool, which deletes elements from an image simply by brushing over them. It works especially well on large objects.
Also powered by artificial intelligence and machine learning is a new contextual taskbar. It appears while an image is being edited and surfaces shortcuts to the next steps the user is likely to take. For example, when an object is selected, the contextual taskbar shows options for refining the selection, masking, creating a layer, and so on.
Lastly, Adobe has updated the gradients feature. Creating a gradient is now easier, and new on-canvas controls make it more customizable. This and the rest of the features will roll out to the app progressively.