Adobe's Project Stardust provides a glimpse into its upcoming AI photo editing engine.

Adobe's Project Stardust AI photo editor, which leaked earlier this month, has now been officially unveiled at the company's MAX conference. The sneak preview showcases a next-generation, AI-driven photo-editing engine, powered by the newly launched Firefly Model 2, that lets users effortlessly remove objects and people from a scene, alter backgrounds, and more. The fundamental idea is to let anyone explore their creativity in image editing using Adobe's AI tools.

To clarify, this is currently only a "sneak" from Adobe: a public glimpse at technology the company is developing behind the scenes, which may or may not make it into a final product. But since Stardust essentially repackages many existing Firefly-based AI tools, it's likely we'll see more of this in the near future. It merges Adobe's object recognition models with existing AI-powered features like generative fill (something has to replace an object once it's moved, after all).
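
Adobe hasn't published any of Stardust's internals, but the segment-then-fill loop the sneak implies is easy to sketch. The toy Python below is purely illustrative: the function names and the mean-color "fill" are my own stand-ins, not anything from Adobe or Firefly. It just shows the shape of the pipeline, where a recognition model proposes a mask for each object and removing an object reduces to inpainting the pixels it occupied.

```python
import numpy as np

def segment_objects(image):
    """Stand-in for the object-recognition step: return one boolean
    mask per detected object. A real system would run a segmentation
    model here; this placeholder just marks a fixed rectangle."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[10:40, 10:40] = True
    return [mask]

def generative_fill(image, hole):
    """Stand-in for generative fill: paint the vacated pixels with the
    image's mean color instead of generated content."""
    filled = image.copy()
    filled[hole] = image[~hole].mean(axis=0).astype(image.dtype)
    return filled

def remove_object(image, mask):
    """Removing an object reduces to inpainting its footprint."""
    return generative_fill(image, mask)

photo = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
for obj_mask in segment_objects(photo):
    photo = remove_object(photo, obj_mask)
```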

In many respects, this mirrors Google's efforts with its Magic Editor on Android. Both tools strive to simplify what was once laborious image editing work.

The director of product management for Project Stardust at Adobe gave me a brief live demonstration of the service last week. To begin, users can upload their own photo or have Firefly generate one. Firefly then automatically analyzes the image in the background and creates layers for the various objects it identifies. Moving things around is as simple as dragging and dropping, with the AI tools filling in the gaps. Just like in Photoshop today, it's also straightforward to add new objects to a scene, with the service offering four different options for every prompt. In total, Adobe claims, over a dozen different AI models are employed to power Stardust's various features.
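
Again purely as illustration, that workflow maps onto a simple data structure. In the sketch below, the Layer class, move_layer, and generate_candidates are hypothetical names of my own invention, not Stardust's actual API: every detected object becomes a layer with its own offset, dragging a layer means updating that offset and filling the hole it leaves behind, and the four-options-per-prompt behavior is just sampling the generator several times and letting the user pick.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Layer:
    """One detected object, lifted onto its own movable layer."""
    pixels: np.ndarray      # cropped object pixels
    mask: np.ndarray        # the object's footprint in the original image
    offset: tuple = (0, 0)  # where the layer currently sits

def move_layer(canvas, layer, dx, dy):
    """Dragging a layer: update its offset, then fill the footprint it
    vacated (mean color here; a real editor would call a generative
    model). Re-compositing the layer at its new position is omitted."""
    layer.offset = (layer.offset[0] + dx, layer.offset[1] + dy)
    canvas = canvas.copy()
    canvas[layer.mask] = canvas[~layer.mask].mean(axis=0).astype(canvas.dtype)
    return canvas

def generate_candidates(prompt, n=4):
    """Sample the generator n times so the user can pick a favorite.
    Placeholder: returns labels rather than rendered images."""
    return [f"{prompt} (variation {seed})" for seed in range(n)]

canvas = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
mask = np.zeros(canvas.shape[:2], dtype=bool)
mask[10:40, 10:40] = True
dog = Layer(pixels=canvas[10:40, 10:40].copy(), mask=mask)
canvas = move_layer(canvas, dog, dx=15, dy=0)
print(generate_candidates("a golden retriever on the beach"))
```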

Perhaps it's indicative of the rapid advancement of this technology that I watched the demo and wasn't even astounded by what, undoubtedly, would have seemed like magic just a few years ago. Now, with Google already showcasing similar capabilities (even if not actually releasing them), it feels like the conversation has already shifted from being amazed by the technology to considering its long-term implications for photography.