Qualcomm Demonstrates Groundbreaking On-Device AI Image Generation Using Stable Diffusion


Artificial intelligence continues to advance by leaps and bounds. Qualcomm’s demonstration of a world-first on-device AI image generator, built with full-stack optimization, drives home just how fast the technology is moving and how many industries it promises to reshape along the way.

Hand holding Android smartphone with prominent lenses. Photo by Zana Latif

As Qualcomm explains in the blog post detailing the advancement, one popular way to build generative AI platforms is the foundation model: a massive neural network trained on reams of data. Qualcomm has taken that process and run it on an Android smartphone instead. Think of it this way: the more common approach relies on a cloud network, while Qualcomm’s advance localizes all of that on a smartphone handset. As “pretty big deals” go, this is right up there.

Of course, implications for other industries aside, shrinking these massive networks down to run locally opens up a whole new world of possibilities for photography, from generating photos to editing them on the fly. In other words, this kind of advance could push smartphone photography forward in a big way, among other areas.

From Qualcomm’s blog post:

“Our vision of the connected intelligent edge is happening before our eyes as large AI cloud models begin gravitating toward running on edge devices, faster and faster. What was considered impossible only a few years ago is now possible. This is compelling because on-device processing with edge AI provides many benefits, including reliability, latency, privacy, efficient use of network bandwidth, and overall cost.

Although the Stable Diffusion model seems quite large, it encodes a huge amount of knowledge about speech and visuals for generating practically any imaginable picture. Furthermore, as a foundation model, Stable Diffusion can do much more than image generation with text prompts. There are a growing number of Stable Diffusion applications, such as image editing, in-painting, style transfer, super-resolution, and more that will offer a real impact. Being able to run the model completely on the device without the need for an internet connection will bring endless possibilities.”

You can check out some examples in the post at this link.
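For readers curious what’s happening under the hood, diffusion models like Stable Diffusion generate images by starting from pure random noise and iteratively refining it toward a coherent picture. Below is a purely illustrative toy sketch of that denoising loop, not Qualcomm’s optimized on-device implementation: the real model uses a learned neural-network denoiser, whereas here a hypothetical stand-in `target` plays the role of the network’s prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(target, num_steps=50, noise_scale=0.1):
    """Toy reverse-diffusion loop: start from Gaussian noise and
    iteratively denoise toward `target`, which stands in for what a
    trained network would predict at each step."""
    x = rng.standard_normal(target.shape)           # step 0: pure noise
    for t in range(num_steps):
        predicted_clean = target                     # a real model predicts this
        step = (t + 1) / num_steps                   # simple linear schedule
        x = (1 - step) * x + step * predicted_clean  # blend toward prediction
        # re-inject a little noise, fading out as denoising finishes
        x += noise_scale * (1 - step) * rng.standard_normal(x.shape)
    return x

target = np.zeros((8, 8))   # stand-in for a tiny 8x8 "image"
image = generate(target)
```

By the final step the blend weight reaches 1.0 and the re-injected noise fades to zero, so the output converges on the stand-in target. In the real model, running this loop dozens of times per image is exactly the compute load Qualcomm’s full-stack optimization squeezes onto a phone.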

Qualcomm goes on to broaden the discussion to other devices, including laptops and “virtually any other device powered by Qualcomm Technologies.” That’s quite a broad brush, indeed.

Any thoughts you might have on AI and photography are welcome in the comments.

Check out some of our other photography news on Light Stalking at this link right here.


About Author

Kehl has been our staff photography news writer since 2017 and has over a decade of experience in online media and publishing. You can get to know him better here and follow him on Insta.
