With apologies to Lewis Carroll … “Beware the Chatterbot, my son! The jaws that bite, the claws that catch”.
ChatGPT is currently being talked about as an existential threat across many sectors. So, should the insight sector be worried? There are four key reasons why it shouldn’t:
- Insight comes from human intelligence (HI), not artificial intelligence (AI)
- ChatGPT relies on what has been written; real behavior is commonly driven by far more than is expressed
- It can’t do the key thing brands need insight for
- It can’t replace the security of a human being delivering the research
ChatGPT is AI, not HI: lessons from the Chinese room
It’s easy to interact with ChatGPT and really feel like you’re having a conversation with a human intelligence (HI). But you’re not. Consider the Chinese room proposed by the philosopher John Searle. He described a person sitting in a room with letterboxes in and out. Chinese characters are fed in, and the person’s task is to produce Chinese characters in response. They get feedback on which responses are good and which are bad.
Over time they become adept at responding correctly, until it becomes possible to feed in a note in Chinese and get a meaningful response out. To an observer, the system looks like it understands Chinese, but as Searle pointed out, all of this can happen with the person inside having no actual understanding of Chinese. ChatGPT is the same: words go in and words come out, but ChatGPT doesn’t understand the meaning of what it has produced. Insight is derived from understanding, not regurgitation, however human it feels.
We are more than what we say
As a psychologist who has dealt with non-conscious processes for 30 years, I know that a vast amount of our behavior is motivated by mental processes that lie beneath conscious awareness, and hence are impossible to express. Some years ago, I worked with a charity that supported people with facial disfigurements. Their key challenge was that, anecdotally, people with a disfigurement had poorer educational and career outcomes, and it was believed this was due to discrimination. Despite this, whenever they did research, people vehemently denied ever discriminating on looks. Yet an Implicit Association Test revealed a very strong unconscious bias, one that people would not admit to themselves or to a researcher.
Taking this example, ChatGPT would look at what people said about their beliefs and conclude that people don’t discriminate, because they said they didn’t. It doesn’t ‘understand’ that what people say may not correspond to their behavior, because it can’t deduce anything beyond the words. It takes that spark of human understanding to read between the lines and grasp why we don’t, or indeed can’t, express what motivates our behavior.
ChatGPT does not give brands what they really want: prediction
ChatGPT in its rawest form searches vast text databases, sorts and pattern-matches language structures, and returns a meaningful summary. But, by definition, this means it can only tell you about the past, or more precisely, what other people have written, accurately or not, about the past.
So ChatGPT does not do the key thing brands need from research: prediction. Will that pack work? Will consumers like that new product? Will that ad sell? Prediction is at the heart of what the insight sector does, and it remains a uniquely human quality. ChatGPT can’t take that leap in creative thinking to see beyond the data and predict outcomes. For example, imagine having a cream tea with your friends and ChatGPT.
The last pastry is offered to someone who you know likes pastries. They say, “Oh no, I really mustn’t”. Based on the linguistic input alone, ChatGPT would predict that the person would not eat the pastry. The human minds around the table would predict a different outcome.
ChatGPT, by definition, can only tell you what has happened; it takes human qualities such as understanding, consideration and empathy to predict.
Insight is a ‘people business’ for a good reason
Whenever I meet someone starting out in the insight sector, I tell them that the most important thing to remember is that brands don’t buy research findings, they buy confidence: confidence to make a decision that needs to be made. For better or for worse, a researcher’s job is to take responsibility for decisions, taking the plaudits if things go well but, more importantly, the blame if they go badly.
Imagine anyone being grilled by the board as to why a new product has flopped. Currently, the response would be “A respected research company provided evidence it would work”. This may not get them off the hook entirely, but due diligence can be seen to have been done and they may be forgiven. Now imagine the reaction if the response was “I asked a chatbot that said it would work”. Which situation would you rather be in?
Having the safety net of a body of evidence provided by a research company with a known track record (and other people to ‘chuck under the bus’ if necessary), or admitting that the buck stopped with you? The security of having an organization or person to blame will always be psychologically preferable to those responsible for the choices brands need to make.
ChatGPT is a useful tool
ChatGPT does have a place in insight. It can potentially interview people and react to their responses; it can analyze large amounts of data, particularly transcripts, which is an arduous task at the best of times; it could even do literature reviews and help write proposals and debriefs. But can it replace a researcher?
I was once asked in a workshop to summarize my job without telling people my profession. I jokingly said, “I ask people questions they can’t answer, then tell other people what they didn’t mean”. A little frivolous, I know, but there is a truth in there: being a researcher does require an understanding of the human condition. It is this we use to take those leaps and see beyond what people say, as we know it is not always what they do. Only human minds have a theory of mind, an ability to put ourselves into another’s mindset and situation, giving us the ability to understand other people’s intentions.
We can go beyond the face value of the words or data collected and take the creative leaps that allow us to predict outcomes. ChatGPT only reports what has happened, or, importantly, what other people have rightly or wrongly said happened. Nor can it replace the security of a human being responsible for a decision, and, importantly, one who can be blamed if it all goes wrong. Anyone trying to replace research with ChatGPT will soon realize the key value research adds, underlining why human beings delivering insight are so important to businesses.
ChatGPT clearly is a useful tool, but to anyone who thinks research can be replaced by ChatGPT, I say again “Beware the Chatterbot, my son!”