One of the most interesting announcements at Google’s Search-focused keynote in September was a big upgrade to Lens that lets you take a photo and ask questions about it. That upgrade, “multisearch” in Google Lens, is now available to beta test on Android and iOS.
Multisearch is Google’s “entirely new way to search” that aims to address how you sometimes don’t “have all the words to describe what you were looking for.” You start by taking a picture with Google Lens (or importing an existing one), then swipe up on the results panel and tap the new “Add to your search” button at the top.
This lets you ask a “question about an object in front of you or refine your search by color, brand or a visual attribute.” Examples include:
- Screenshot a stylish orange dress and add the query “green” to find it in another color
- Snap a photo of your dining set and add the query “coffee table” to find a matching table
- Take a picture of your rosemary plant and add the query “care instructions”
Fashion and home decor use cases are prominently highlighted today, with Google noting that the “best results [are] for shopping searches” currently. Still, the last example above means you don’t have to first use Lens to identify a plant and then run a separate text search for “care instructions.”
Google credits the “latest advancements in artificial intelligence” with making Lens multisearch possible. That said, it’s not yet using MUM (the Multitask Unified Model) and can’t handle complex queries. MUM, for example, powered the demo where you could take a picture of broken bicycle gears and get instructions on how to repair them.
Google Lens multisearch is officially available today as a beta in English in the US. You can access it from the Google app on Android and iOS.