Adobe Firefly is already a hit with Photoshop users of all stripes, but this next iteration could make the generative image tool, which does not rely on publicly scraped images, indispensable.
This week at Adobe Max in London, the company announced an update to Photoshop's Firefly generative AI tool that deepens the integration and vastly expands its capabilities through the adoption of a new model: Firefly Image 3 Model.
I’ve been using Adobe Firefly almost since its introduction, and while its power is impressive, its use is simple. You select a photo, area, or image object in Photoshop, and then describe in the text prompt area what you want to appear. I’ve used it to extend tabletops to help fill out a 16:9 aspect-ratio image, and to add a metallic apple to this Getty Images photo (below).
Image 3 Model, however, will add much finer control and new image workflows to Firefly’s generative arsenal. Among the key enhancements are:
- Reference Image
- Generate Background
- Generate Similar
- Enhance Detail
Perhaps most useful is Firefly’s ability to help virtually anyone fill a blank canvas with pro-level imagery.
In a demonstration I watched, Adobe executives started with a blank canvas and then described in a prompt the scene they wanted: a bear playing guitar by a campfire. What’s new here, though, is how the enhanced generative tools include the ability to use reference images, set a content type, and add effects, all before generation begins.
A render still takes about 20 to 25 seconds to complete, but there are now more options, and the image detail, even when zoomed in, is far superior to what I’ve seen from the Firefly Image 1 and 2 models.
The new reference image skills are especially powerful. With the bear, we selected its acoustic guitar, and from a series of reference guitar images chose an electric one and then had Firefly replace the original guitar. Firefly not only put the new guitar in place, but it properly lit it and adjusted the bear image so that its grip matched the fresh instrument.
It’s also now just as easy to swap out a background for something new. We started with a bottle of pink perfume, removed the background, and asked Firefly to put the bottle in pink water. The result was a dramatically lit bottle floating on a sea of pink water. Because the background is just an object, swapping it for snow or sand was also easy. In each instance, the lighting and the blend between the bottle and its new background looked visually perfect.
With reference images, you can start with an idea of how you want to replace a fill, surface, or object, and let those images drive each iteration. Firefly makes it all look natural and good.
The new Generate Similar looks like a great way to iterate on an initial image idea.
Adobe’s VP of Generative AI, Alexandru Costin, told me the new model is now better at understanding longer prompts, and takes advantage of an improved style engine. The results are also more natural because Firefly Image 3 Model doesn’t default to what might look most aesthetically pleasing, like producing all portraits in “golden hour light.”
It can use references to match styles, but also image structure, and then build countless variations.
Firefly Image 3 Model will arrive first in Adobe Photoshop desktop beta and the Firefly web app in beta today (April 23, 2024). General availability is expected in Photoshop and Firefly later this year.