Google's Multisearch Feature Combines Images and Text
April 08, 2022

You know that frustrating feeling when you're searching for some weird knickknack but can't describe that thing well enough in words for Google to understand?
Well, you're not alone.
According to Google, 66% of shoppers have failed to find a product when using only words to describe it. So to help those who can't put their finger on what they're looking for, Google introduced multisearch, which combines text and images to help you find what you need.
To use Google's multisearch tool, you have to have the Google app on your iOS or Android device.
Once you open the app, tap the Lens camera icon, then either use an image you have saved or snap a pic in real time. Next, swipe up and hit the "+ Add to your search" button to add text.
Let's say you're in front of something that you're not familiar with. You can ask a question about that object, and Google Lens will tell you what it is.
You can search for things in different colors, brands, or by visual attributes.
If you take a screenshot of a dress in orange, you can add the search query "green" to find it in a different color.
You can take a pic of your furniture and search for matching items. Or take a photo of a plant and search for "care instructions" so you don't kill it this time—we're kidding, you probably have a green thumb.
How is Google able to pull this powerful search feature off?
It's all thanks to Google's MUM (Multitask Unified Model), the most advanced AI search model Google is currently developing. It's pretty cool to see MUM in action in this new search feature for our phones.
The multisearch tool is in beta for now, but you can try it out with English-language search queries in the US.