I had the great fortune to attend the Cognitive Colloquium in early October of this year at the IBM Watson Research Center in Yorktown Heights, NY. It was one of those life-changing moments when you feel like you’re sitting on top of a mountain and you can see much more distant horizons. In my case, the horizon I saw involved using some of my mental energy to solve the grand problems of digital content using the methods of cognitive computing.
What are these methods? Well, at IBM, we describe cognitive computing as a cluster of practices that use machine learning, natural language processing and high-performance computing to change the way computers work and how humans work with them. Heady stuff, I know.
Before you abandon this post for more comfortable pursuits, please consider a ready example of these methods in Watson, the supercomputer that competed on Jeopardy! in 2011 and beat the top champions the show had ever had. The IBM team taught Watson the rules of the game, and he proceeded to improve his play through many months of live competition leading up to the televised match. He used natural language processing to understand the clues presented by the host and to devise likely questions for them. He used machine learning to get better and better at the game. He is now being employed in medicine, marketing and several other domains, including our line of work.
Job 1 for my new mission was to read Nobel laureate Daniel Kahneman’s thick book Thinking, Fast and Slow. Kahneman was a keynote speaker at the Cognitive Colloquium, and his talk triggered several new insights in me about the relationship between human psychology and content strategy. As I read the book (primarily on my train ride between my home in Beacon, NY and Grand Central), I continued to solidify these insights. I can now articulate several of them. In the interest of space, I will cover just one here, the one most useful for the content strategists who are likely to read this post. If you’re still interested, please read on.
(If you’re interested in the complete set, look for my forthcoming book: Outside-In Marketing: Using Big Data to Drive Your Content Marketing. I also highly recommend reading Kahneman when you find yourself with a hundred hours or so of unstructured time.)
The central framework of Thinking, Fast and Slow
The central thesis of Kahneman’s life’s work, which spans more than forty years of research with practitioners in fields too numerous to list, is a kind of mental dualism: our minds have two distinct systems, which Kahneman calls System 1 and System 2.
System 1 is the set of processes that happen automatically, in a flash. They are so automatic that afterwards we often can’t recall intending to do them; we just do them. Examples include the habits of driving, like putting on your turn signal before a turn: you don’t have to think about it, you just do it. Most of our lives, and much of our communication, is governed by System 1. We face so much uncertainty in life, and it comes at us so fast, that we need a system to make sense of it in the rough. Kahneman calls System 1 “a machine for jumping to conclusions,” because that is what it does: it judges things automatically, before all the data are available.
System 2 is the logical, systematic part of our minds, the part cognitive scientists have modeled since the discipline was conceived. Though it is accurate and precise, it is slow and lazy. There are times when we doubt the knee-jerk responses System 1 provides, and those are the times we engage System 2 to analyze all the facts at hand and make a reasoned decision. But System 2 is so lazy that we don’t use it as much as philosophers and other idealists like to believe. Kahneman documents decisions made by experts in a variety of fields that are based almost entirely on System 1 thinking, laced with the biases System 1 uses to jump to conclusions.
Kahneman was the keynote speaker at the Cognitive Colloquium because his framework serves as a new way to model human thinking. As he said, “If you want to build systems that think like humans, start with understanding how humans think.”
Computers have always been devices that need to be right all the time, without fail. So of course we patterned them after System 2 thinking. The trouble is, it takes huge supercomputers to do somewhat ordinary human tasks, like scanning encyclopedic knowledge for a likely question that matches a cryptic answer. Watson takes up a decent-sized room and consumes massive amounts of electricity. The machines of tomorrow need to get ever smaller and more efficient, approaching the efficiency of the human brain. To do that, we need to build systems that do much of their work like System 1: fast and imprecise. Only when accuracy is needed will they engage System 2.
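To make that architecture concrete for the programmers in the audience, here is a minimal sketch of the fast/slow pattern in Python. It is my own illustration, not any IBM or Watson API: the function names, the toy heuristic and the confidence threshold are all assumptions for demonstration.

```python
# A sketch of the fast/slow pattern: a cheap System 1-style heuristic
# answers most requests, and an expensive System 2-style check runs
# only when the heuristic is unsure. All names here are illustrative.

def fast_guess(clue: str) -> tuple[str, float]:
    """System 1: a cheap pattern match that returns an answer and a
    confidence between 0 and 1. Imprecise, but nearly free."""
    if "president" in clue.lower():
        return "Who is George Washington?", 0.9
    return "What is unknown?", 0.2

def slow_verify(clue: str) -> str:
    """System 2: slow, exhaustive analysis. Accurate but costly; this
    stands in for full retrieval and scoring of a large corpus."""
    return "Who is the exhaustively verified answer?"

def answer(clue: str, threshold: float = 0.8) -> str:
    guess, confidence = fast_guess(clue)
    # Engage the slow path only when the fast path is unsure,
    # mirroring how System 2 is recruited only when System 1 balks.
    return guess if confidence >= threshold else slow_verify(clue)

print(answer("This president appears on the one-dollar bill"))
```

The design choice worth noting is the threshold: most requests never touch the expensive path, which is exactly the efficiency gain described above.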
Practice: How do users interact with websites?
Beyond the implications of Kahneman’s work for cognitive computing, some of his work has more direct practical applications for content strategy. Indeed, his framework can be used to approximate how users consume websites. Consider this scenario:
Lizzy is a highly educated millennial who works as an editor in the publishing field. She searches for “structured mark-up” in Google and gets a ton of results. She scans the first search engine results page (SERP) and clicks the most likely link without really reading the results. When she lands on the page, she scans it to determine if it is worth the effort. She decides that it is, and begins reading the long-form content on the page.
What does Lizzy’s mental state look like? She uses both System 1 and System 2 on her information journey. System 1 is the primary mechanism of her scanning and clicking behavior. Scanning SERPs and clicking is so familiar to Lizzy that it’s like using a turn signal while driving: she doesn’t need to think about it. System 2 is what she uses to read and digest the content.
A whole UX discipline has grown out of the imperative “Don’t make me think.” If you make Lizzy think when she lands on your page, you force her to engage System 2, which is slow and lazy. Not only is Lizzy in a hurry; she doesn’t want to waste mental energy either. If you force her to think, she will jump to the conclusion that your page is not relevant before even engaging System 2, and she’ll bounce back to the search engine to try another result.
When Lizzy does find your page relevant, she is ready to engage System 2. That means providing enough data, case studies and other supporting material to help her complete her information task. Once she engages System 2, she does not want to go back to the SERP again; ideally, she can get everything she needs on your site. At that point, long-form content is exactly what she needs.
For the longest time, we have had a raging debate in our field about whether users read on the web. All kinds of studies showed that “users don’t read” on the web; they just scan. I have tried to replicate these studies on ibm.com, with mixed results. After analyzing the results, I came to a conclusion that seems obvious after the fact: if you get the Lizzy use case right, users do read on the web. They’ll even download a longish whitepaper and read it if it is relevant and compelling. But if you don’t get the Lizzy use case right, they bounce off your page before reading, regardless of how closely related the content is to the query.
I have not done a complete analysis. Provisionally, the studies suggesting that users just scan on the web suffer from the fallacy of small samples, what Kahneman calls belief in “the law of small numbers.” They happened to choose content that was not easy to scan as the basis for the studies, which forced users to do something they were not willing to do: engage System 2 before deciding whether the content was worth their time and attention. Because those users never engaged System 2, they never “read” in those studies.
As pages improve and the body of evidence approaches critical mass, similar studies have come to different conclusions. Thanks to Kahneman, we now have a framework for understanding these studies. The inflection point between scanning and reading seems to be a System 1 process that determines whether or not a page is worth a user’s time and attention.
Theory: Digital content relevance works like typical human psychology
Those of you who are familiar with my work know I have based much of it on Relevance Theory, which is a kind of psychology of communication. It is the keystone of my book Audience, Relevance, and Search: Targeting Web Audiences with Relevant Content. The theory defines relevance as a sliding scale with two extent conditions, which I sketch below:
1. The stronger the cognitive effect in the audience, the more relevant the linguistic artifact is to that audience.
2. The more effort a linguistic artifact requires, the less relevant it is.
A cognitive effect is just a change in the mind of the audience. When we learn or are influenced or make a decision, there is a corresponding cognitive effect. Most of these are small and incremental. Some are breakthroughs. All things considered, breakthroughs are more relevant than small changes to our attitudes. The actual theory is quite a bit more complex than this, but we can gloss over that complexity for the time being.
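Sperber and Wilson state these conditions comparatively rather than as an equation, but a common informal gloss treats relevance as rising with cognitive effect and falling with processing effort. A toy scoring function, offered purely as an illustration of that gloss and not as their formulation:

```python
def relevance(cognitive_effect: float, processing_effort: float) -> float:
    """Toy gloss of the two extent conditions: relevance rises with
    cognitive effect and falls with processing effort. The ratio form
    is an informal convention, not Sperber and Wilson's own model."""
    return cognitive_effect / max(processing_effort, 1e-9)

# A breakthrough that costs little effort is far more relevant than
# a small attitude shift that demands a slog through dense text:
print(relevance(cognitive_effect=10.0, processing_effort=1.0))  # 10.0
print(relevance(cognitive_effect=2.0, processing_effort=8.0))   # 0.25
```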
As I read Kahneman’s book for the first time, it struck me that Sperber and Wilson—the authors of Relevance Theory—were describing communication in terms of System 1 and System 2. They just hadn’t made that connection. When they talk about cognitive effects, they are talking about System 2. Relevance Theory is based on work by H.P. Grice that describes how we reason when we communicate. Because reasoning falls into System 2, cognitive effects are, by definition, System 2 processes.
The extent condition that interests me more is the one about effort. It seems to me that determining whether a page is nominally relevant (that is, whether it is worth the effort at all) is a System 1 process. The content buried within an opaque UX could answer Lizzy’s questions exactly, but she will judge it irrelevant in a flash if it lacks the visual cues System 1 requires: tight, punchy headings, bolded keywords and so on. In short, all the things Google’s algorithm looks for.
The one correction I would make to Relevance Theory after reading Thinking, Fast and Slow is to reverse the order of the extent conditions. I would put the one about effort first, because on the web, a page is functionally irrelevant if it doesn’t convince System 1 to devote the effort. And if it requires too much effort in the moment, it loses relevance fast. Only after a page is deemed worth the effort do users judge to what extent it is relevant. If the page helps Lizzy make a breakthrough about structured mark-up, it is highly relevant to her.
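One way to operationalize that reordering is a two-stage check: a System 1 gate on effort cues first, then a System 2 judgment of cognitive effect only for pages that pass. The cue names and thresholds below are my own illustrative assumptions, not measured ranking factors:

```python
# Sketch of the reordered extent conditions: effort acts as a
# System 1 gate, and cognitive effect is judged only for pages
# that pass it. Cues and thresholds are illustrative assumptions.

SCANNABILITY_CUES = ("punchy_headings", "bolded_keywords", "short_paragraphs")

def passes_effort_gate(page: dict) -> bool:
    """System 1: a fast, imprecise check on visual effort cues."""
    cues_present = sum(1 for cue in SCANNABILITY_CUES if page.get(cue))
    return cues_present >= 2  # jump to a conclusion from partial evidence

def judged_relevance(page: dict) -> float:
    """Effort first: a page that fails the gate is functionally
    irrelevant, however well its content matches the query."""
    if not passes_effort_gate(page):
        return 0.0  # Lizzy bounces before reading a word
    # Only now does System 2 engage and weigh the actual content.
    return page.get("cognitive_effect", 0.0)

opaque_page = {"cognitive_effect": 9.0}  # great content, no visual cues
scannable_page = {"punchy_headings": True, "bolded_keywords": True,
                  "cognitive_effect": 6.0}
print(judged_relevance(opaque_page))     # 0.0
print(judged_relevance(scannable_page))  # 6.0
```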
The blog medium prevents me from saying more here. All I hope to do is plant a few seeds in the minds of enterprising readers who can take these thoughts further than I can in this medium. As I said, I will have a great deal more to say in my book when it comes out this year. In the meantime, if even one reader has a mountain-top experience with this post, it has done its job.
James Mathewson is the program director for global search and content marketing at IBM.