Over the past few years, there has been a significant shift on the Internet from text-based content to image and video-based content. Google is looking to capitalize on these mediums in the search market by taking advantage of machine learning. Simply put, Google Lens uses images or your device's camera to run Google searches and pull up relevant information about whatever it sees.
Here is just a sample of what Google Lens can do:
- Identify the species of a flower the camera is focused on.
- Connect to a wireless network by scanning the SSID sticker on the router.
- Translate text the camera is pointed at into another language.
- Obtain information about local restaurants, stores, and other establishments.
Google Lens integrates with both the Google Assistant and Google Photos apps. Through Google Assistant, you can add an event to your calendar simply by pointing your camera at an information board. Through Google Photos, you can look up details about a business, such as its opening and closing times. If you have someone's business card, you can even call them directly just by scanning it.
Google plans to bring Lens to more apps in the future, adding even more functionality.
What are your thoughts on Google Lens? Do you think these features will change the way that you go about your daily tasks? Let us know in the comments.