Entries for 2021-08-12

(3617) Evolutionary strategy

Tibor bá’s translation online

I will send an excerpted translation of the article to the VIP subscribers

Social scientists’ terrifying new theory: the use of fake news and conspiracy theories is an evolutionary strategy

___________________________________________________________________________

A social scientist’s terrifying new theory: Fake news and conspiracy theories as an evolutionary strategy

 

2021 is the 89th year

Political misinformation — whether „fake news,” conspiracy theories or outright lying — has often been attributed to widespread ignorance, even though there are numerous examples of 20th-century propaganda aimed at those most attentive to politics. Books like Edward S. Herman and Noam Chomsky’s „Manufacturing Consent” began to challenge that notion, as did the 1991 study of media coverage of the first Gulf War with the memorable bottom line, „the more you watch, the less you know.” In the age of social media, scholarly explanations have shifted to discussions of „motivated reasoning,” which could be defined by Paul Simon’s line from „The Boxer”: „A man hears what he wants to hear and disregards the rest.”

But the ignorance perspective has a deep hold on us because it appeals to the Enlightenment notion that we are motivated to pursue truth. We are „the thinking animal,” right? The important part of that expression may be „animal.” Human beings have an evolutionary history, and deception is commonplace in the animal world because it confers evolutionary advantage. There’s good reason to believe we’re not so different, other than the fact that humans are ultra-social creatures. In ancestral and evolutionary terms, being part of a successful social group was every bit as essential as food and water. So deception among humans evolved from group conflicts. That’s the thesis of a recent paper called „The Evolutionary Psychology of Conflict and the Functions of Falsehood” by the Danish political scientists Michael Bang Petersen and Mathias Osmundsen and American anthropologist John Tooby.

While the paper aligns with the „motivated reasoning” perspective, its focus goes deeper than the psychological mechanisms that produce and reproduce false information. These researchers are trying to elucidate the functions of those mechanisms, that is, to answer the question of why they evolved in the first place. I interviewed Petersen three years ago, about a previous paper, „A ‘Need for Chaos’ and the Sharing of Hostile Political Rumors in Advanced Democracies,” which was summarized on Twitter thusly: „Many status-obsessed, yet marginalized individuals experience a ‘Need for Chaos’ and want to ‘watch the world burn.'” That paper provided crucial insight into prolific spreaders of misinformation and why they do what they do. But that individualist account was only part of the story. This new paper seeks to illuminate the evolutionary foundations and social processes involved in the spread of outright falsehoods. So I had another long conversation with Petersen, edited as usual for clarity and length.

Over the past decade or so, it’s become more common to regard the spread of political misinformation, or „political rumors,” as they’re sometimes called, as the result of „motivated reasoning” rather than ignorance. But your new paper proposes a broad evolutionary account of the social functions behind that motivated reasoning. Tell me what led you to write it, and what you set out to do.

One of our major goals with this research is to try to understand why it is that people believe things that other people believe are completely bizarre. I think it’s clear to everyone that that problem has gained more prominence within the last few decades, especially with the advent of social media. It seems that those kinds of belief systems — belief in information and content that other people would say is blatantly false — are becoming more widespread. It can have some pretty dire consequences, as we could see for example with the storming of the Capitol on Jan. 6.

So what we’re trying to understand is why people believe things that must be false. The traditional narrative is, „Well, if you believe false things, then you must be stupid. It must be because you haven’t really made an effort to actually figure out what is going on.” But over the last few decades, more and more research has accumulated that suggests that’s not the case. In fact, the people who are responsible for spreading misinformation are not those who know the least about politics. They actually know quite a lot about politics. In that sense, knowledge doesn’t guard against believing things that are false.

What we’re trying to do is to say, „Well, if it’s not because people are ignorant, then what is it?” In order to understand that, we utilize the framework of evolutionary psychology, basically trying to understand: Could there be anything adaptive about believing false information? Could this in some way be functional? Is it actually sort of on purpose that false information is believed and spread, rather than being an accident?

Before you discuss human evolution, you have a section on nonhuman animals. What can we learn from deception and conflict in the animal world?

I think that’s an important stepping stone, to look at the animal world, because most people would say that what animals do is the product of biological evolution, and has some sort of evolutionary advantage. And what we can see in animals is that they spread false information all the time when they are engaged in conflict.

One obvious example is that animals try to appear larger than they are when they are engaged in conflict with other animals. That is, of course, to send a signal to the other animals: „You shouldn’t mess with me, and if we actually get into a real fight, I will win.” So animals are trying to get the upper hand in conflict situations by sending false signals.

So how does that change, or not change, when we look at humans?

First, that is also what we should expect humans to do: if they can send false signals that are advantageous to them, then they should do it. What we then discuss is that there are certain constraints on the degree of falsehood in animal communication. The constraint is that communication systems evolved in the first place because they are helpful for both individuals, both organisms, involved in the exchange. So before a communication system can evolve, it should be adaptive for both the sender and the receiver. That means that even in conflict situations you cannot send blatant falsehoods. There are some kinds of reality constraints.

We are then saying that in some situations, with regard to humans and human evolution, these constraints don’t operate. That’s because if we look at nonhuman animals, the conflict is often between two individuals, but in human conflict it’s often between two groups, and the members of one group are cooperating with each other against the other group. That means there might be certain advantages, within one group, to spreading misinformation and falsehoods, if that can give them an upper hand in the conflict with the other group. Then we go on to discuss a number of ways in which that might be true.

You identify three functions of information sharing: group mobilization for conflict, coordination of attention, and signaling commitment. You argue that accomplishing these goals efficiently is what gets selected, in evolutionary terms, not truth or veracity. Can you give an example of each, starting with mobilization?

When you want to mobilize your group, what you need to do is point out that we are facing a problem, and your way of describing that problem needs to be as attention-grabbing as possible before you can get the group to focus on the same thing. In that context, reality is seldom as juicy as fiction. By enhancing the threat — for example, by saying things that are not necessarily true — you are in a better situation to mobilize and coordinate the attention of your own group. The key thing is that it may actually be to your group’s advantage: if everyone is in agreement that we don’t like these other guys, then we need to make sure that everyone is paying attention to that other group. So by exaggerating the actual threat posed by the other group, you can achieve more effective mobilization.

The key to understanding why this makes sense, why this is functional, is that one needs to distinguish between interests and attention. A group can have a joint set of interests, such as, „Well, we don’t like this other group, we think we should deal with this other group in some way.” But on top of that interest or set of interests, there is the whole coordination problem. You need to get everyone to agree that this is the time to deal with that problem. It’s now, and we need to deal with it in this way. It’s in that sort of negotiation process where it can be in everyone’s interest to exaggerate the threat beyond reality, to make sure that everyone gets the message.

You’ve more or less answered my next question about coordination. So what about signaling commitment? How does falsehood play a role there?

I think these are the two major problems, the mobilization on the one part and then the signaling on the other part. When you’re a member of the group, then you need other group members to help you. In order for that to take place, you need to signal that, „Well, I’m a loyal member of this group. I would help you guys if you were in trouble, so now you need to help me.”

Humans are constantly focused on signals of loyalty: „Are they loyal members of the group?” and „How can I signal that I’m a loyal member?” There are all sorts of ways in which we do that. We take on particular clothes, we have gang tattoos and all sorts of other physical ways of expressing loyalty to the group.

But because we humans are exceptionally complex, another way to signal our loyalty is through the beliefs that we hold. We can signal loyalty to a group by having a certain set of beliefs, and then the question is, „Well, what is the type of belief through which we can signal that we belong?” First of all, it should be a belief that other people are not likely to have, because if everyone has this belief, then it’s not a very good signal of group loyalty. It needs to be something that other people in other groups do not have. The basic logic at work here is that anyone can believe the truth, but only loyal members of the group can believe something that is blatantly false.

There is a selection pressure to develop beliefs or develop a psychology that scans for beliefs that are so bizarre and extraordinary that no one would come up with them by themselves. This would signal, „Well, I belong to this group. I know what this group is about. I have been with this group for a long time,” because you would not be able to hold this belief without that prehistory.

I believe we can see this in a lot of the conspiracy theories that are going around, like the QAnon conspiracy theory. I think we can see it in religious beliefs too, because a lot of religious beliefs are really bizarre when you look at them. One example that we give in the text is the notion of the divine Trinity in Christianity, which has this notion that God is both one and three at the same time. You would never come up with this notion on your own. You would only come up with that if you were actually socialized into a Christian religious group. So that’s a very good signal: „Well, that’s a proper Christian.”

Right. I was raised Unitarian. As a secular Jew in Northern California at that time, the only place we could have a home was a Unitarian fellowship. It was filled with secular Jews, definitely not „proper Christians.”

Yes, I went to a private Catholic school myself, so I’ve been exposed to my portion of religious beliefs as well. But there’s another aspect that’s very important when it comes to group conflict, because another very good signal that you are a loyal member is holding beliefs that the other group would find offensive. A good way to signal that I’m loyal to this group and not that group is to take on a belief that is the exact opposite of what the other group believes. So that creates pressure not only to develop bizarre beliefs, but also to develop bizarre beliefs that say this other group is bad or evil, or that are really opposed to the particular values that it holds.

This suggests that there are functional reasons both for spreading falsehoods and for signaling with these falsehoods. I think one of the key insights is that we need to think about beliefs in another way than we often do. Quite often we think about the beliefs that we have as representations of reality, so the reason why we have the belief is to navigate the world. Because of that, there needs to be a pretty good fit or match between the content of our beliefs and the features of reality.

But what we are arguing is that a lot of beliefs don’t really exist for navigating the world. They exist for social reasons, because they allow us to accomplish certain socially important tasks, such as mobilizing our group or signaling that we’re loyal members of the group. This means that because the function of the beliefs is not to represent reality, their veracity or truth value is not really an important feature.

In the section „Falsehoods as Tools for Coordination” you discuss Donald Horowitz’s book, „The Deadly Ethnic Riot.” What does that tell us about the role of falsehood in setting up the preconditions for ethnic violence?

„The Deadly Ethnic Riot” is an extremely disturbing book. It’s a systematic review of what we know about what happens before, during and after ethnic massacres. I read this book when I became interested in fake news and misinformation circulating on social media; it was recommended to me by my friend and collaborator Pascal Boyer, who is also an evolutionary psychologist. Horowitz argues that you cannot and do not have an ethnic massacre without a preceding period of rumor-sharing. His argument is exactly what I was trying to argue before, that the function of such rumors is actually not to represent reality. The whole function of the rumors is to organize your group and get it ready for attack. You do so by pointing out that the enemy is powerful, that it’s evil and that it’s ready to attack, so you need to do something now.

One of the really interesting things about the analysis of rumors in this book is that, if you look at the content of the rumors, that’s not so much predicted by what the other group has done to you or to your group. It’s really predicted by what you are planning to do to the other group. So the brutality of the content of these rumors is, in a sense, part of the coordination about what we’re going to do to them when we get the action going — which also suggests that the function of these rumors is not to represent reality, but to serve social functions.

What struck me when I read Horowitz’s book was how similar the content of the rumors he describes in these ethnic massacres all over the world is to the kind of misinformation being circulated on social media. This suggests that a lot of what is going on on social media is also not driven by ignorance, but by these social functions.

One point you make is that to avoid being easily contradicted or discredited, these kinds of „mobilization motivations should gravitate towards unverifiable information: Events occurring in secret, far away in time or space, behind closed doors, etc.” This helps explain the appeal of conspiracy theories. How do they fit into this picture?

When we look at falsehoods there is a tension. On one level, there is a motivation to make it as bizarre as possible, for all the reasons we have been talking about. On the other hand, if you are trying to create this situation of mobilization, you want the information to flow as unhindered as possible through the network. You want it to spread as far as possible. If you’re in a situation where everyone is looking at a chair and you say, „Well, that chair is a rock,” that’s something that will hinder the flow of information, because people will say, „Well, we know that’s really a chair.”

So while there is this motivation or incentive to create content as bizarre as possible, there is also another pressure or incentive to avoid the situation where you’re being called out by people who are not motivated to engage in the collective action. That suggests it’s better to develop content about situations where other people have a difficult time saying, „That’s blatantly false.” So that’s why unverifiable information is the optimal kind of information, because there you can really create content as bizarre as you want, and you don’t run the risk of being called out.

We see a similar kind of tactic when conspiracy theorists argue, „Well, we are only raising questions,” where you are writing or spreading the information but you have this plausible deniability, which is also a way to avoid being called out. Conspiracy theories are notorious exactly for looking for situations that are unverifiable and where it’s very difficult to verify what’s up and what’s down. They create these narratives that we also see in ethnic massacres, where we have an enemy who is powerful, who is evil and who is ready to do something that’s very bad. Again, that completely fits the structure of mobilizing rumors that Horowitz is focusing on. So what we’ve been arguing, here and elsewhere, is that a lot of conspiracy theories are really attempts to mobilize against the political order.

In the section „Falsehoods as Signals of Dominance” you write that „dominance can essentially be asserted by challenging others,” and argue that when a given statement „contradicts a larger number of people’s beliefs, it serves as a better dominance signal.” I immediately thought of Donald Trump in those terms. For example, he didn’t invent birtherism, and when he latched onto it he didn’t even go into the details — there were all these different versions of birther conspiracy theories, and he didn’t know jack-shit about any of them. He just made these broad claims, drawing on his reputation and his visibility, and established himself as a national political figure. I wonder if you can talk about that — not just about Trump, but about how that works more generally.

Yes, I can confess that I too was thinking about Donald Trump when writing that particular section of the paper. So I will talk a little bit about Donald Trump, but I will get to the general case. I think one of the first examples for me of that tactic was during the presidential inauguration in 2017, where the claim was that there were more people at Trump’s inauguration than Obama’s inauguration, and everyone could clearly see that was false.

Just to start with that particular observation, I think with that sort of denial — for example, „This is not racism, this is not sexism,” or whatever — part of the function is again to have plausible deniability, whereby you can make sure that the information spreads, that everyone who needs to hear it will hear it and it’s not really being blocked. Because you could say that outright racism or outright sexism would be something that would stop the spread of the information. So people who are in a mobilization context are always caught in this cross-pressure between making sure that the signal is as loud as possible, and that it is disseminated as widely as possible. Often there is this tension between the two that you need to navigate. I think looking at and understanding that conflict and that tension is an important theoretical next step.

As we say numerous times in the chapter, this is a theoretical piece where we are building a lot of hypotheses which are in need of empirical evidence. So I think one important next step is to gain and develop the empirical evidence or empirical tests of these hypotheses, to see what actually seems to hold up, and what may be misguided.

One thing I’m very interested in personally is to look into who uses these tactics more than others — who is most motivated to engage in these kinds of tactics to win conflicts. This is a line of work we have been pursuing, and one thing we are finding is that people who are seeking status are the most motivated to use these kinds of tactics to gain that status.

I always like to end by asking: What’s the most important question I didn’t ask? And what’s the answer?

I think the most important question that you may not have asked is this: We started out talking about motivated reasoning, so what is the difference between what we are bringing to the table and the traditional theories of motivated reasoning? Those argue that you hold certain beliefs because they feel good. You like to believe certain things about your group because it gives you self-esteem. You like to believe the other group is evil because that also helps you feel good about your group. When social scientists abandoned the ignorance argument for those kinds of beliefs and looked into social function, they said, „Well, the social function of these beliefs is to make you feel good about yourself.”

What we are saying is that while it is probably true that these beliefs make you feel good about yourself, that’s not really their function, that’s not their real purpose. We’re saying that evolution doesn’t really care whether you feel good or bad about yourself. Evolution cares about material benefits and, in the end, reproductive benefits. So the beliefs that you have should in some way shape real-world outcomes.

We are arguing that these false beliefs don’t just exist to make you feel good about yourself, but exist in order to enable you to make changes in the world, to mobilize your group and get help from other group members. I think that’s an important point to think more about: what it is that certain kinds of beliefs enable people to accomplish, and not just how those beliefs make them feel.

___________________________________________________________________________