In what could be a pretty huge update to its Google Lens service, the search engine giant just announced it would integrate its Multitask Unified Model, or MUM, technology into Lens to help users find what they are looking for across a variety of formats.
Naturally, eCommerce is one of the many elements driving this, as product identification is listed among the many features Lens offers users. Beyond that, there's also on-the-fly translation, copying text to your computer, and getting help with your homework, just to name a few.
Integrating Lens with MUM will allow users to ask Google questions about their search results, TechCrunch reports, allowing for a depth of subject exploration that is currently only really possible via text.
One example offered by the company that demonstrates this in action is using Lens to identify a broken part on a bike, or even skipping the identification stage entirely and simply querying how to fix the scanned broken part. Google Lens will then use this visual information to find relevant data on the web that tells the user not only what the part is but also how to potentially repair it.
That’s some pretty impressive stuff and a kind of preview of what cameras will be doing for us in the future.
Again, we can’t forget the eCommerce aspect and, in this regard, MUM will allow users to ask Google Lens for items with a pattern similar to the one shown, or any other range of attributes they are trying to match. Of course, one of our favorite features, and one that can’t be hyped enough, is Lens’ ability to identify plants and animals.
Have you used Google Lens? Let us know your thoughts in the comments below.
Don’t forget to check out our other photography news on Light Stalking.