When we turn to Google for answers as if it were an oracle, typing a few words is not always enough. How many times have we wished we could combine text with a visual search? That is now becoming possible thanks to the latest improvements to Google Lens. The Mountain View company wants users to be able to do more than type a few words into the search box, aiming to resolve our doubts and revolutionize the search experience.
Google has announced its intention to modernize the search engine by adding several functions to Google Lens. The tool will now let you search with words and photos at the same time. In addition, it will incorporate a new “Things to know” box with subtopics related to your search.
This is made possible by MUM, a machine learning model capable of gathering information from formats other than text, such as images, audio and video, and of transferring that knowledge across 75 languages. With it, Google can now take context beyond text-based search terms into account, so users will be able to ask more precise questions and receive more complete answers.
Google also spares you from leaving the search engine: you can get all the information you need at a glance, without clicking through to the source link, because Google’s artificial intelligence will have already extracted the relevant information from the search results.
For example, if someone wants information on how to repair a puncture, they just have to point the camera at the wheel and type “how to repair”. Google is also trying to boost Google Shopping, since we are more likely to buy products we can identify: we will get a new discovery feed showing items of a similar style or in different colors.
In other news, the image recognition service is also coming to Chrome. The integration will let you select images, videos or text and search for related content without leaving the page.
Photos | @aitanax