How Changes to Google’s Algorithm Will Affect How We Search
The world’s most visited site, boasting some 3.2 billion searches every day, is 25 years old this year. As consumers, we use Google to seek new knowledge, to fact-check and, most generally, as a problem-solving tool. Some have even called Google our “second brain.” Google Search is such a widely used tool, so ingrained in our daily routines, that an analysis of the frequency of search terms can indicate economic, social and health trends. We know this because data on the frequency of popular search terms can be openly queried via Google Trends and has been shown to correlate with everything from flu outbreaks to unemployment levels. Not only that, Google can surface this information faster than traditional reporting methods and surveys.
Let’s take a quick walk down search memory lane. In its earliest days, search was simpler, but less effective, than it is now.
The evolution of search is an interesting one. Some of us remember the days when we would “search” for businesses, services and even individuals’ phone numbers by way of the printed phone directory. While some phone books still exist, they are few and far between.
It is interesting to consider that this very traditional “directory” was once dominated by ads and generated tremendous revenue from advertisers paying for bold listings and/or large color display ads.
Digital search engines came into existence in the 1990s (think: Yahoo!, AltaVista, Ask Jeeves and eventually Google). The interfaces were very basic: a logo, a search bar and perhaps a directory of relevant sites. Functionality varied from engine to engine, and results were incomplete at best, since there were far fewer websites on the internet than there are now. Early search engines were also slow and often unreliable.
Fast forward 20-plus years, and much of the new search engine power is driven by AI, or artificial intelligence. It might seem paradoxical at first glance, but we’ve put the “human” in search by offloading tasks to machine intelligence and deep learning. Constantly learning feedback loops inform our searches and their relevance in real time. How we see and understand pictures, once the sole purview of humans, is now understood by digital systems and used to augment our searches.
“The sum total of human knowledge is available.”
In May 2022, Google held its Marketing Live keynote, touching on numerous advances across the entire digital and search marketing space. One of the key focal points was the considerable advances in visual search. It wasn’t long ago that machines could parse words but not pictures. Now, artificial intelligence has become a game-changer, opening up a world where searches can be initiated with photographs of all kinds, readable by machines.
This is a stunning advance for online intelligence.
The internet is filled with a seemingly infinite expanse of data and information about every conceivable topic. For a single search engine like Google, this represents a massive amount of data to sort through to provide the most relevant results for an individual searcher. As a result, technology has had to evolve to the point where massive amounts of data can be sifted, organized and provided upon request—without a human.
New Technologies, New Breakthroughs
During the keynote address, Jerry Dischler, VP & GM of Ads at Google, explained the new advances underpinning visual search. Pictures are now machine-readable: you can literally take a picture of something, upload it to Google, and have search results returned based on that picture. Ads are now integrated across platforms and surfaces, including YouTube Shorts and Shopping. The formerly static mobile SERP (Search Engine Results Page) is now a scrollable feed, with results drawn from many different media and ranked on factors like search history, personal interests and algorithmically identified signals.
It can sound overwhelming, but we’ve grown gradually used to an increasingly robust interface since the earliest age of search engines. Knowledge panels, powered by Google’s Knowledge Graph, sit to the right of the results for business queries to provide quick information, along with sponsored ads, answered questions, business profiles, reviews and similar products through Shopping. Video results are pulled from YouTube, and all of this is integrated into a seamless interface of returned information for any given query.
Computer technologies like machine vision and natural language understanding are transforming search into a more helpful, more sensory-based experience. Where search could once be conducted only via text or speech, it can now be driven by sounds, pictures and other aspects of our lived environment. This makes it broadly applicable not only to the majority of customers’ needs but to users with accessibility needs as well. Search has become an experience powered by experiences.
For example, using Google Lens, you can snap a photo and find relevant results for your car, your home or other needs. I planted a perennial last summer, and when it began to grow this spring, I could not recall its name. I couldn’t possibly know how to best care for it if I couldn’t even name it! I simply snapped a photo and was able to identify it from the image alone. Pretty remarkable.
Google’s new multisearch feature lets you refine your search results, find similar products, and automatically surface tutorials or other helpful pages, so you not only find the right product but learn how to use and integrate it as well. By extending queries so that people can seek information intuitively rather than linearly, we can now find what we need, when we need it, along with whatever else we’ll need, so we won’t be stranded with only partially helpful results.
These results are powered by Google’s breakthrough in machine intelligence, the Multitask Unified Model (MUM). MUM is a long-gestating foundation for Google’s visual search. The ability to machine-read pictures and other graphic media, and to pair them with intuitively related results, means online search has finally begun to resemble the way humans actually think. It works amazingly well for finding patterns, swatches, matches, designs, fabrics and a host of other identifiable features that put us closer to the results we seek.
Part of the attraction of this new visual interface is that it helps people meet their content needs through modes of perception not previously utilized in online spaces. More visual searches mean more visual ads to go with them, and for marketers and businesses, this transforms how we think about advertising and how we can reach more people through visual search.
For businesses looking to capitalize on these new developments, it will be important to:
- Maintain an attractive visual presence
- Customize ads and ad copy to stay relevant and competitive
- Redesign ad campaigns based on factors such as search intent and pictorial/graphic representations of queries
- Formulate value chains for searches based on other sensory perceptions and how those run together
- Continue to innovate as a brand to stay flexible and agile in an ever-evolving online search space
Many businesses may elect to pursue the more traditional forms of ads and searches, and there’s no shame in that. But for a competitive, realistic and successful ad campaign in the modern world, it is imperative to constantly evaluate and update based on these changes, which allow for higher-quality reach and results. The power of Google advertising lies in its ability to reach people where they are, when they’re searching, and based on their unique interests. Targeting user needs through every possible search modality will keep a business and its marketing fresh and relevant, and it will prevent being left behind by changes that other companies are eager to adapt to.
Google has made search more visual than ever before, and for businesses looking to compete, it will require an equally robust visual advertising strategy designed to maximize ROAS (Return on Ad Spend).
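For readers unfamiliar with the metric, ROAS is simply revenue attributed to a campaign divided by what the campaign cost. A minimal sketch, with illustrative figures:

```python
def roas(ad_revenue: float, ad_spend: float) -> float:
    """Return revenue earned per dollar of ad spend (ROAS)."""
    if ad_spend <= 0:
        raise ValueError("ad spend must be positive")
    return ad_revenue / ad_spend

# Illustrative example: a campaign that spent $2,500 and drove
# $10,000 in attributed sales returns $4 for every $1 spent.
print(f"ROAS: {roas(10_000, 2_500):.1f}x")
```

A visual-first campaign is “maximizing ROAS” when changes like richer image ads or better query targeting raise that ratio, not merely raw traffic.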