Google recently held its annual Search On event, which included several updates to both Search and Maps. The event showcased advances that reveal Google’s ambition of building an ‘internet of places’: applying its recognition and indexing tech to the real world.
Google’s next drive is to overlay digital data onto the real world, an ambition stemming from the immense value the company has created by indexing the web for more than 20 years.
Google wants to replicate that online success offline. It is assembling the tech components it needs to index the physical world in a similarly useful way, building a corresponding knowledge graph as part of its plans.
Doing the groundwork
So far, the initiatives under this plan include long-established projects such as Google Lens and Live View, which let users identify objects and navigate using AR overlays. Behind these front-end products are unique data assets that took the company years to build, such as Maps’ Street View and Google’s vast, indexed image libraries.
Taking the concept to the next level are four product updates, all revealed at the Search On event and all designed to advance Google’s reach and utility. They are:
1 – Multisearch Near Me
This lets users combine multiple search inputs to find what they’re looking for. For example, you can start a search for a shoe using an image (or a live Google Lens feed), then refine the search with text (e.g., “the same shoe in blue”).
For instance, say you discover a shoe on Instagram. You can use that image to identify the shoe with Google Lens, then use Multisearch Near Me to find local stores that sell the same or similar shoes.
2 – Search with Live View
Live View is Google’s 3D/AR urban navigation feature. It uses the Street View image database to localize your device, pinpointing where you’re standing and overlaying AR directional arrows on your route.
3 – Language Translation
The company also previewed upcoming updates to Google Translate that will let users activate it on the go using their cameras. These updates will also make translations more integrated: rather than the usual pop-ups that translate text visually (or audibly), the translated text will be visually blended into the scene itself.
4 – Immersive View
Finally, Google’s Immersive View is more about front-end sex appeal. Also previewed at Google I/O, Immersive View will feature stylized bird’s-eye views of different locales. Use cases include travel planning and general ‘wanderlust’.
Whatever all of this adds up to, you can bet that, with the power of Google behind the scenes, the Metavearth is almost certainly going to wind up ‘a thing’.