As competition between generative AI models like GPT, Bard, LLaMA, and others heats up, so has speculation about the future of search. Is search still relevant in light of chat-driven “answer engines” that use large language models (LLMs) to create a more human-centered discovery experience?
At first glance, these technologies’ more conversational approach, direct style, and the ability to perform like a full-stack assistant make them a strong contender to disrupt the search space. The technology remains rough around the edges, though, with a variety of aspirational but untested consumer and business use cases that make it feel more like a solution in search of a problem than a technology about to drive deep behavioral change.
The question is more than academic, though. Search engines are the window through which we’ve come to explore and understand our world. As such, any substantive change in how we use them represents a multi-trillion-dollar disruption to the business ecosystem that would impact virtually every person and organization on the planet.
In this post, we’ll break down the search and answer engine landscapes and explore what a possible evolution would mean, including a primer on how search and answer engines work and the possible economics of answer engines.
To be clear, I am a marketing technologist, not an engineer or SEO specialist. Over my career, however, I’ve worked closely alongside both, translating how these technologies work into layperson’s terms and putting them to use in my own work daily.
In all that time, it would be hard to find an existential tech crisis as deep and wide as the one search engines now face from the likes of GPT and others. However, this paradigm shift isn’t due to a huge disruption in the space. In fact, both search and answer engines function on the same basic premise.
You, a human, are curious or trying to accomplish a task and need additional information to do so. They, a portal of some kind, parse millions of terabytes of data to get you the best, highest-quality, most relevant answer – often within a fraction of a second.
While modern search engines use a combination of algorithmic feedback (adjustments based on user behavior) and artificial intelligence to source the best answer to your query, usually, they stop there. On the other hand, answer engines use generative AI to aggregate and repackage that data and large language models to give it back to you in a more personable and friendly way – namely through a Q&A-style chat.
On its face, an evolution to this type of product seems like a no-brainer. Time is, after all, our most valuable asset. The thought of an all-purpose assistant able to put answers directly in our hands instead of searching for them is intoxicating.
When you look more closely, however, the technology raises a number of substantive questions, ranging from the slightly humorous to the deeply problematic.
When ChatGPT launched, one of the most common responses was excitement about the platform’s ability to generate clear, concise, and compelling answers without forcing us to dig through a six-thousand-word essay to find them.
Over the last decade or so, inbound marketing strategies have become pop-culture shorthand for everything wrong with the internet: a seven-line pasta recipe, prefaced by a multi-page, “Eat, Pray, Love”-style novella about one’s summer trip to Rome.
It’s a bed of Google’s own making though – the logical conclusion of website owners reverse engineering the algorithm to drive search traffic in a way that enables them to increase sales of everything from ad space to groceries to legal services.
As both the progenitor and police force of the problem, the company has gone through decades of announcements telling us how to optimize for the algorithm, followed by a parade of updates designed to better spot and demote low-quality, spammy, and even machine-generated content that tries to game it.
Finding a middle ground while fending off those who use “black hat” tactics is an incredibly thin, incredibly gray line – one that often forces content creators to write as much for the machines assessing a piece of content’s value as for the humans reading it.
At first glance, large language models appear to cut past this problem outright. You ask a question; you get an answer. You can even iterate on your questions in a way that search won’t allow. It’s pretty neat stuff and definitely leads us into a new era of the web.
What’s less clear is the economics of it all.
Since answer engines aggregate data from many different sources and present it within the context of their own chat interface, the user likely never sees the underlying source material. As of now, they don’t even see a citation for it.
This raises a basic but important question of supply and demand: novellas about your post-breakup trip to Italy aside, in an ecosystem where there’s almost no incentive to create content, why do it? Will you receive less spam? Undoubtedly. Will creators of all types and sizes also begin gating their content from answer engines? Same answer.
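To illustrate what that gating might look like in practice, a site owner could disallow AI crawlers in robots.txt while still permitting traditional search indexing. This is a sketch only – the AI user-agent token below assumes each platform publishes a dedicated crawler name, which will vary by vendor:

```
# Illustrative robots.txt sketch: block a hypothetical answer-engine
# crawler while leaving traditional search crawling untouched.

User-agent: GPTBot        # assumed AI-crawler token; actual names vary
Disallow: /

User-agent: Googlebot     # traditional search indexing still allowed
Allow: /
```

Whether publishers adopt this at scale is exactly the supply-and-demand question raised above.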
This conundrum isn’t limited to more typically spam-laden long-tail search though.
In a world where answer engines simply provide a long-form synopsis of your company, achievements, and so on, what value would building a website even confer? Why not just hand over a JSON file that describes your business in pure code?
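That thought experiment isn’t far-fetched – schema.org structured data already lets a business describe itself in machine-readable JSON-LD. A minimal sketch, with all values hypothetical:

```json
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Bistro",
  "url": "https://example.com",
  "telephone": "+1-555-0100",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL"
  },
  "openingHours": "Mo-Su 11:00-22:00"
}
```

An answer engine could, in principle, ingest a file like this directly – no website required.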
If, instead, answer engines do end up providing links to various other sites (a seemingly reasonable solution), do they offer a measurable enough improvement over the search engine experience to drive behavioral change? It remains to be seen.
One thing we can say for sure is that even disruptors are subject to the laws of supply and demand. While recent innovations in AI have captured our attention, disincentivizing the creation of the intellectual property that powers it is going to cause problems down the line.
Human communications are nuanced, to say the least. The developmental process needed to create shared understanding begins forming at a young age and is one of the only aspects of our brains that continues evolving until we die.
Mastering them takes a lot of practice (as any parent can tell you), and most of us still screw it up regularly.
This year, we saw an impressive jump in AI’s ability to mimic the otherwise incredibly complex landscape of human communication – one marked principally by its ability to provide thoughtful, easy-to-understand, natural-language answers.
While impressive on its face, understanding the technology’s potential impact on search requires a deeper exploration of style vs. substance. That includes an understanding of intent – the ability to respond appropriately based on the implicit meaning of a prompt, rather than its explicit wording.
Search engines – led by Google – have spent a lot of time tackling this question over the last two decades and have gotten pretty good at delivering a relevant result. Most searches, for example, never go past the first couple of listings; as the phrase goes, “Page 2 is where brands go to die”.
We’ll apply the same standard here. Save a few instances of GPT making up answers (supposedly improved in the latest version, GPT-4), answer engines seem to be mostly on par with search’s ability to find and deliver factual data.
Dethroning search, which some in the space have posited as an early use case for GPT, will take a little more though. That’s because, when it comes down to brass tacks, search isn’t exclusively about returning correct data. It’s about returning actionable data.
What makes data actionable? More often than not, how it’s presented.
Take, for example, restaurant reviews. Scrolling through search engine results isn’t a particularly effective method of comparing menus, reviews, accommodations, and the like. It offers no filtering, and its text-based format doesn’t account for the old saying that people “eat with their eyes”.
This is a big part of the reason Amazon and Yelp were so successful (each in their heyday). It wasn’t about the ability to deliver data; it was about the ability to deliver it in a way that was most useful for a particular use case. Google later addressed the challenge with a combination of tools ranging from onsite review microdata to owned platforms like Google Shopping, Reservations, Flights, etc.
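The “review microdata” mentioned above typically takes the form of schema.org markup embedded in a page, which search engines use to render rich results like star ratings. A minimal JSON-LD sketch, with all values hypothetical:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "212"
  }
}
```

Markup like this is how search engines bridged the gap between plain text listings and the richer, use-case-specific displays that made platforms like Yelp compelling.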
In recent years, search engines have begun responding to objective questions with their own similar, albeit less mature, generative AI. When asked what two times two is, for example, Google demonstrates that it understands the intent of the question by providing the answer dynamically in search rather than linking to a static landing page dedicated to that specific question.
Search engines’ ability to understand the searcher’s intent and change modalities to return data in a way that allows the user to take action is what separates the two platforms.
Would it be hard for chatbots to replicate this functionality? No. It does raise a significant question, though: if we assume that the future of discovery is multi-modal, do chat-forward models (such as GPT) present any apparent gain in functionality over search engines’ already multi-modal offering?
Here, I think Microsoft’s recent integration of ChatGPT into a multimodal search engine that allows users to pick how they want to ask a question is an interesting first step. Eventually, however, it will be incumbent upon the platform to understand and adapt quickly based solely on the user’s language and behavior.
For those of us old enough to remember, Google’s big breakthrough wasn’t its technology. It was its simplicity. While competitors, like Yahoo and AOL, offered a casino-esque screen full of banner ads, videos, sports scores and news, Google offered a simple box in the middle of a white screen that prompted you to ask a question.
As with search before it, the contest between search and answer engines will likely boil down to which offers a better user experience. Either way, it’s hard to believe that our appetite for options is going anywhere. As humans, context is important to us, and having multiple sources of information will never go out of style.
Answer engines are, of course, much more than the next evolution of search. They’re part of the shift towards a more human-centered and interactive discovery experience that will come to power our interactions with everything from personal assistants to autonomous vehicles.
While acknowledging the technology’s promise, experts have also flagged a variety of potentially problematic downstream implications specifically related to its application in the search sphere. They range from the impact of polluted data to the easy proliferation of disinformation.
In sum, they boil down to one overarching methodological difference in the way each serves data. While search engines return familiar, hierarchically arranged lists, answer engines return single aggregated answers. At first glance, this seems like a simple UI/UX change. A look under the covers reveals something riskier.
Search engines filter data first for junk, then for trust, then for relevance. Finally, you, the searcher, are asked to provide human feedback about the results. The process is purposeful and meant to have algorithms take on the heavy lifting, while giving you, a human, subjective oversight over your discovery process.
Answer engines use many of the same basic tactics in the early stages of aggregating your answer. However, unlike search engines, they replace human oversight with artificial intelligence to determine the relative veracity of a particular answer.
To its credit, ChatGPT’s AI has responded well when inaccuracies were pointed out by users. However, that ability assumes that most users are subject matter experts capable of deciphering when they’ve been served incorrect information and not laypeople who are likely to take its answers at face value.
While answer engines are generally capable of delivering multiple points of view (just as search engines are capable of delivering incorrect resources, especially on controversial topics), it’s less clear how they’ll handle various types of moral ambiguities, logical fallacies, and outright misinformation (of which millions are created daily) that plague the very data they’ll use to inform their answers.
All of these challenges are illustrated well in reporting from the Washington Post (and corroborated by many others), in which journalists were able to get ChatGPT to present factually incorrect and even made-up data rather than provide multiple options.
In all documented cases, these were relatively harmless missteps brought on by intentionally misleading the AI. However, when multiplied over millions of users, the vulnerability demands deeper consideration – a more extensive thought experiment examining scenarios in which information bears the earmarks of high-quality data but is based on logical fallacies.
OpenAI’s co-founder, Sam Altman, recently hinted that future iterations of its model would focus heavily on individualization. While having a deeply personalized AI assistant is a compelling value proposition, the potential for a personalized mis- and disinformation machine tailored around our psychographic profile should set off alarms.
This scenario isn’t some far-fetched dystopian thought experiment though; it’s a close parallel to the factors that allowed for the weaponization of social media we’ve already seen over the last five to ten years and warrants deep consideration.
While a single-answer format makes more sense in light of a deeply integrated, voice-centric IoT ecosystem – best described in Google’s 2015 “Micro-moments” article as focusing on arm’s-length questions like “I want to go,” “I want to know,” and “I want to do” – applying it to the web at large is a risky proposition.
Some have positioned answer engines as the inevitable successor to search. And, by any measure, it would be hard to deny the existential threat that generative AI and large language models present.
Before sunsetting your SEO strategy, however, marketers should work to better understand how these tools work, their current limitations, and potential long-term opportunities, including those that build on the current search ecosystem rather than replace it.
Google’s product lead for Bard, Jack Krawczyk, recently said that these “are large language models, not knowledge models. They are great at generating human-sounding text; they are not good at ensuring their text is fact-based”.
The larger implication is that once this year’s quantum leap in progress dies down, these technologies will become more of a productivity tool – one that amplifies the usefulness of Google’s existing suite of offerings (including search) and may well spawn a category of new ones.
At the risk of being that one guy who called the internet a “passing fad,” though, I not only agree with him, but would go one step further. There’s a glut in the AI world right now. We have a lot of newfound power and aren’t sure what to do with it. While we’re anxious to use this shiny new tool to work its magic, answer engines, for the time being, seem to be better in theory than practice.
For more information on managing your search or answer engine marketing strategy, let’s chat.