In its constant pursuit of innovation, Google has made a notable announcement about the technology that powers both its Assistant and Lens features.
The technology race for smartphones is intense, and Google is no latecomer – the company vies with Apple every year on multiple fronts of smartphone innovation. That competition produced Google Lens, an impressive feature found on Google’s flagship smartphone, the Pixel 2.
It is driven by artificial intelligence that uses visual algorithms to identify whatever object your smartphone’s camera is pointed at, such as famous landmarks.
It can also scan barcodes and look up books, movies, restaurant reviews – a whole gamut of things. If there is an algorithm for it, Google Lens will know a little something about what it is seeing.
Google Lens’s initial implementation was somewhat clunky: it required the user to take a picture of the object first and then ask Google Lens to scan it, a time-consuming process that most people simply will not bother with. But with the announcement of Google Lens integration into Assistant, users no longer have to snap a picture first and then scan it, as the feature will be accessible directly within the Assistant app.
The experience is far more seamless: just point the camera at what you want to identify, tap the Lens icon in Assistant, and it will provide all the information it can find. This Swiss Army knife combination of features is characteristic of the company, which is always striving toward a level of innovation that others both admire and envy.
To use an imperfect analogy, it is, in essence, a visual counterpart to what Google’s search engine does with text.
The new Lens integration will roll out first to English-language Pixel phones in the United States and then gradually to other regions of the world.