
20 September, 2018

Psychology Makes the Strategist



Military activity is never directed against material force alone; it is always aimed simultaneously at the moral forces which give it life, and the two cannot be separated.
Carl von Clausewitz, On War

I have a new double-book review up at Strategy Bridge. This time both books were written by the same person: King's College (London) professor of war studies Kenneth Payne. The books are his 2015 The Psychology of Strategy: Exploring Rationality in the Vietnam War and his more recent Strategy, Evolution, and War: From Apes to AI. Here is how I introduce the topic:
A new science of human behavior has emerged over the past two decades. This new science has linked together the research of neuroscientists, cognitive and evolutionary anthropologists, decision theorists, social and cross cultural psychologists, cognitive scientists, ethnologists, linguists, endocrinologists, and behavioral economists into a cohesive body of research on why humans do what they do. Research in this field rests on two propositions about the human mind. The first, that the mind is embodied; the second, that it is evolved.

When behavioral scientists say the mind is embodied, they mean the mind is a biological thing and the study of decision making cannot be divorced from the architecture of the biological machinery that makes the decisions. Their research suggests most of the mind’s machinery works under the hood, below the level of conscious awareness. Researchers have their favorite object of study: for some it is hormones and emotions, for others it is specialized cognitive modules evolved in the deep human past to solve problems faced by our hominid ancestors, and for yet others it is culturally created cognitive gadgets impressed into the biological structure of brains at an early age by the societies in which we grew up. When behavioral scientists say these attributes of human psychology are evolved, they mean only that, as a biological thing, the human mind was created by the same evolutionary process that crafted the function and form of every other living thing. “Nothing in biology makes sense except in the light of evolution” (as one famous biologist declared several decades ago), and this is as true for the study of the human mind as it is for the study of bacteria or butterflies.

What does this have to do with war or strategy? Everything, answers Kenneth Payne, professor in the War Studies department at King’s College London. In the last three years, Payne has published two books on the subject. The first, The Psychology of Strategy: Exploring Rationality in the Vietnam War, uses the Vietnam War as its central case study; the second, Strategy, Evolution, and War: From Apes to Artificial Intelligence, extends the themes of the first book deep into the wars of humanity’s evolutionary past and forward into the less human wars of its future. The reasoning behind Payne’s books is simple: strategic decision making is human decision making. Like all aspects of human behavior, powerful insights about the nature of strategy can be gained by viewing it through the lens of behavioral science. [1]
I am extremely sympathetic to Payne's approach (this is why I jumped at the chance to get review copies of his two books). Any theory of military strategy that is not informed by behavioral science on the one hand and organization science on the other is a dead end. This is not a new insight: as Payne writes at some length in his books (and as I mention in a footnote in this review), Clausewitz was obsessed with the psychological aspects of war and built his theory of war around them. The difference between Clausewitz's day and our own is that we have a much stronger understanding of how the mind works than was available at the turn of the 19th century. It seems foolish to ignore this new knowledge. Clausewitz certainly would not have.

I encourage you to read the rest of the review. Payne's books are interesting—they cover everything from warfare among chimpanzees to the role emotion plays in political decision making to the implications of using AI to augment human decision making in battle. But as I argue, I think they may be less useful for what they prove (for as Payne admits, they prove precious little) than for the avenues of research they open up:
Payne’s books are full of small asides that—if properly investigated—could become their own books. Here are three potentially fruitful research questions that occurred to me as I read through these two books.

1. In one of the more intriguing passages of The Psychology of Strategy, Payne suggests:
Insofar as honour is the goal for states embroiled in war, the fighting itself can tend to the ritualized and stylized, rather than the conception of ‘total’ war offered in parts of Clausewitz’ writing.… Display and attention to rules become integral parts of strategy. Societies have more latitude to fight according to their cultural precepts, rather than to adjust them in pursuit of efficiency. They can acquire armed forces and develop ways of fighting that seem in tension with strategic conditions facing them.

The contrast Payne sees between wars of honor and more total conceptions of war has striking parallels with patterns military historians have described independently. J.E. Lendon, Pier Mackay, and Stephen Morillo have described this exact contrast in their analyses of wars between the poleis of ancient Greece, the kings of medieval Europe, and the European empires of the 18th century. But if stylized wars of honor are a real phenomenon, what determines when armies and states fight them instead of wars dominated by fear or interest? Why were the first ten years of the duel between Athens and Sparta defined by Greek honor norms, when these same norms had so little power to shape behavior in the later years of the conflict? What, in short, can the study of human psychology teach us about the durability of norms of war?

2. Cross-cultural psychology is a burgeoning subfield of psychology. Psychologists, and more than a few anthropologists, have discovered that human beings from different cultures often have different cognitive profiles, including the psychological biases to which they fall victim. As anthropologist-cum-psychologist Joseph Henrich noted, “Many researchers want to study those psychological processes that make us uniquely human. The problem is, at this point, there has been so little systematic comparative experimental research across diverse populations that we currently lack any reliable way to know when we are tapping innate psychological processes, or the products of centuries of cultural evolution.”
This critique is relevant to almost all the evidence Payne presents. Indeed, Henrich and a team of cross-cultural psychologists suggest in a forthcoming research article that optimism bias, one of the biases Payne discusses at length, is not similarly manifested in East Asian and Western populations. One must ask: Is Payne’s psychology of strategy really just the psychology of Western strategy?

This may cause some to question the utility of Payne’s entire work. I see it instead as an opportunity to extend Payne’s general research program. For the last three decades scholars have tried to create viable theories of strategic culture that might explain patterns in strategic decision making across cultures. This literature has been plagued with many problems; one of its key failings is that most of it does not explain how strategic culture is transmitted from one generation to the next, nor does it describe the mechanism by which culture actually changes decision making.

Refocusing these debates on the cognitive differences of decision makers may make progress possible. Psychology might be the missing key to the puzzle. It is easy to imagine a robust line of research that attempts to ferret out which elements of human psychology are most relevant to strategy, tests through laboratory and field studies which of these elements are cognitive gadgets unique to certain cultures and which are genetically ingrained human universals, and then uses these results as a lens through which to test strategic history.

3. Another new and fascinating line of research in the behavioral sciences is the study of what researchers have dubbed folk sociology. As cognitive scientist Pascal Boyer has described, “In all human societies, people have some notion of what social groups are, how they are formed, what political power consists of.” Linguistically, this folk sociology is expressed through metaphors. For example, we talk about groups of people as if they were unitary agents (“the American administration is angry with China”), and we talk about political power as if it were a physical force (“the Republicans bowed under popular pressure” or “the Conservatives crushed Labour”) even though neither of these things is true. Despite its inaccuracy, this way of talking is natural and appears in multiple languages. Boyer and his compatriots suggest this is because the cognitive resources we use to understand these concepts originally evolved for other purposes—in this case, understanding the behavior of actual unitary agents and intuitive models of physics, respectively. They have traced many ways in which this folk sociology has a powerful effect on the way humans understand and interact with political institutions and economic markets.

Is there such a thing as folk strategic theory? If Payne is correct, and warfare was a source of selection pressure throughout the evolution of humanity, then it is likely we have developed cognitive modules that channel our understanding of violence, strategy, and war into certain metaphors and mental conceptions.[2]
Readers interested in the citations for the various books referenced and quotations reproduced in this section should read the footnotes of the original piece over at Strategy Bridge. If the topic strikes your fancy, also consider purchasing Payne's two books.

------------------------------------------------------------------

[1] Tanner Greer, "#Reviewing The Psychology of Strategy & Strategy, Evolution, and War," Strategy Bridge (18 September 2018).

[2] Ibid.

13 September, 2018

A Small Note on the Terror of Uncertainty

Freedom of men under government is to have a standing rule to live by, common to every one of that society, and made by the legislative power erected in it; a liberty to follow my own will in all things, where that rule prescribes not: and not to be subject to the inconstant, uncertain, arbitrary will of another man.
John Locke, Second Treatise of Government, section 22 (1689)

This week's post on Xinjiang and the many things one can do there to be thrown into a political reeducation camp has been picked up by Foreign Policy. The FP version of the article has been published under the title "48 Ways to Get Sent to a Chinese Concentration Camp."

The material will be familiar to readers of this blog as most of the article is a direct adaptation of an earlier blog post here. However, I did make one significant point in the FP version of the essay that I did not make here:
A central element of this campaign is uncertainty. It is difficult to judge which of these items are official policy and which are simply the result of ad hoc decisions made by local officials. This is likely by design. One Uighur interviewee told HRW how he simply stopped using his smartphone because he could not tell which websites were allowed and which might incriminate him; another described how she stopped talking to neighbors and strangers altogether because she did not want to unintentionally say something that might bring the police to her door. Vagueness breeds fear. Fear makes the people subject to the Communist Party’s campaigns easier to control.[1]
I wish I could claim credit for this particular insight, but as the epigraphs placed at the top of this post evince, it is an old one. It does, however, help make sense of some of the more mysterious items on the list.

------------------------------------------------------------------

[1] Tanner Greer, "48 Ways to Get Sent to a Chinese Concentration Camp," Foreign Policy (13 September 2018).

11 September, 2018

Things That Will Get You Thrown in a Chinese Political Education Camp

"People have to tell the crowd what their families did, just like during the Cultural Revolution."
—"Ainagul," 52, who left Xinjiang in 2017 and whose son is in
 a political education camp (interviewed May 18, 2018).
"A wife denounces her husband, an imam who was imprisoned for extremism, ... saying something about him propagating Wahhabism; and then a kid who denounces his father for having prayed and read the Quran. [There were also] people who have exceeded the birth quotas, the couple and their kids were crying as the authorities announced the huge fines against them. This is called a ‘Looking Back’ (回头看) exercise, looking back at what bad things people had done in the past 20 years.“
—"IIham," who left Xinjiang in 2017 (interviewed June 7, 2018).

Earlier this month, Human Rights Watch published a 125-page report on the crisis in Xinjiang: Eradicating Ideological Viruses: China’s Campaign of Repression Against Xinjiang’s Muslims. It does not make pleasant reading. The report consists mostly of excerpts from interviews that Human Rights Watch researchers conducted with 58 ethnic Uyghurs or Kazakhs, living in nine countries, who have managed to flee from Xinjiang over the last two years. This is the largest interview set yet published. The accounts published by Human Rights Watch largely corroborate other evidence we have gleaned so far from the five main streams of information that we have about what is happening in Xinjiang: 1) journalist accounts from within Xinjiang itself, 2) social media posts and online job advertisements in Chinese, 3) official state media photographs, statistics, or proclamations, 4) satellite imagery, and 5) other interviews with Kazakhs or Uyghurs who have been able to escape from China after all of this began. SupChina has put together a good round-up of all of the English-language material available before September 2018.

There is a lot of material in the Human Rights Watch report. I want to focus on a tiny slice of it: the reasons Uyghurs or Kazakhs report that they are being thrown into "political education camps" (that is, a gulag with Chinese characteristics).

Things which may cause you to be detained without trial and locked away in an education camp indefinitely, in Xinjiang, China, 2018:
  • Owning a tent
  • Owning welding equipment
  • Owning extra food
  • Owning a compass
  • Owning multiple knives
  • Abstaining from alcohol
  • Abstaining from cigarettes
  • Wailing, publicly grieving, or otherwise acting sad when your parents die
  • Performing a traditional funeral
  • Inviting more than 5 people to your house without registering with the police department
  • Wearing a scarf in the presence of the PRC flag
  • Wearing a hijab (if you are under 45)
  • Going to a mosque
  • Praying
  • Fasting
  • Listening to a religious lecture
  • Telling others not to swear
  • Telling others not to sin
  • Eating breakfast before the sun comes up
  • Arguing with an official
  • Sending a petition that complains about local officials
  • Not allowing officials to sleep in your bed, eat your food, and live in your house
  • Not having your government ID on your person
  • Not letting officials take your DNA
  • Not letting officials scan your irises
  • Not letting officials download everything you have on your phone
  • Not making voice recordings to give to officials
  • Speaking your mother language in school
  • Speaking your mother language in government work groups
  • Speaking with someone abroad (with skype, etc.)
  • Speaking with someone who has traveled abroad
  • Having traveled abroad yourself
  • Merely knowing someone who has traveled abroad
  • Publicly stating that China is inferior to some other country
  • Having too many children
  • Having a VPN
  • Having Whatsapp
  • Watching a video filmed abroad
  • Wearing a shirt with Arabic lettering on it
  • Wearing a full beard
  • Wearing any clothes with religious iconography
  • Not attending mandatory propaganda classes
  • Not attending mandatory flag raising ceremonies
  • Not attending public struggle sessions (Cultural Revolution style)
  • Refusing to denounce your family members in these public struggle sessions
  • Being related to anyone who has done any of the above
  • Trying to kill yourself when detained by the police
  • Trying to kill yourself when in the education camps proper.
Something terrible is happening in Xinjiang.

NOTE: Also see my earlier post, "Moral Hazards and China."

EDIT (13 Sep 2018):
An expanded version of this post has been published in Foreign Policy magazine. See also my follow-up post here.

02 September, 2018

So Why Did They Publish Them? - A Few Notes on the Latest Batch of Fail-to-Replicates

The big news this week is a fresh study in Nature Human Behaviour that reports the results of a team that sought to replicate 21 high-profile experiments in social psychology, all originally published in the journals Nature or Science between 2010 and 2015. The study has garnered a lot of headlines. You can read takes by Science Magazine, The Washington Post, Ars Technica, The Atlantic, Science Trends, and many others with a bit of Google searching. Popular interest is driven by the study's result: the research team was able to replicate only 13 of the 21 experiments.

I am going to assume that readers are familiar with the general outlines of the "reproducibility crisis" (if you are not, Susan Dominus' New York Times Magazine long-read on the crisis, "When the Revolution Came for Amy Cuddy," is a good place to start). What is most interesting about this study is not that it found more experiments that failed to replicate. These days that is old hat. What is new about this study is that the experimenters asked a pool of 200 psychologists to predict which studies would fail to replicate and which ones would not. They did this both by survey and by prediction market. What did they discover? This graphic (pulled from the paper) tells the story:


You will notice that researchers did a very good job predicting which studies would fail to replicate. The studies the majority predicted would fail to replicate were the same studies that actually failed to replicate. What does this mean? Psychologists can tell the difference between good studies and bad ones. But that raises another question: if psychologists can sift the wheat from the chaff, why is so much chaff being published?

I have read some uncharitable answers to this question on Twitter. I think these answers are unnecessarily uncharitable. But before I explain why, let me offer you a challenge: go visit the website 80,000 Hours and see if you can predict which experiments will replicate and which will not. The folks at 80,000 Hours have created a neat quiz which presents the results, methodology, and sample size of each experiment to you, and allows you to guess whether the results replicated before you see the real outcomes.

OK, are we back? I took the quiz before I read the original paper or any of the news coverage about it. Despite this, I got an almost perfect score: I only guessed two wrong, and both of those I labeled as "not sure." How did I score so well? My predictions followed a rough rule of thumb: if the study 1) involved "priming," or 2) seemed to fly against my own experience dealing with humans in day-to-day life, I predicted it would not replicate.

You can find a suitable definition of "priming" at NeuroSkeptic. Basically, it refers to attempts to unconsciously influence perception and decision making by exposing subjects to subtle stimuli. For example, a famous set of studies found that placing a picture of eyes on a wall will increase the honesty and generosity of those exposed to it. You can criticize studies like this from two angles. On the one hand, they simply do not seem to describe how the actual humans you know go about living their lives. That is one of the reasons studies like this garner so much attention: they are counter-intuitive. In the age of the TED talk, cleverly subverting people's intuitions is a high-prestige endeavor. But I tend to be extremely skeptical of any psychological study that makes unusually counter-intuitive claims. Why? Because for the greater part of humanity's evolutionary history, the single most important selection pressure on human beings was the ability to intuit the behavior and intentions of other human beings. Being able to understand and predict other humans' behavior is critical to our survival. It is something we are naturally good at (though only a few very perceptive and articulate individuals are skilled at communicating these intuitions to others). I am thus usually very suspicious of any study which claims that our intuitions have led us to make faulty assessments of others' behavior.

On the other hand, my demand for especially strong evidence from priming studies is also informed by advances in other fields of the behavioral sciences. Over the last two decades, a substantial amount of research has been done on the relationships between genetics and behavior, hormones and behavior, and life history and behavior. All three streams of research suggest that much of our behavior (say, our propensity to be honest) is determined days or even years before the actual moment of decision. It is difficult, though not entirely impossible, to square this research with social priming studies that suggest humans live in constant churn, buffeted about by a never-ending stream of imperceptible stimuli.

Now I want to be clear: none of the above means I reject all counter-intuitive findings, or even all social priming findings, out of hand. It does mean I ask for an unusually high standard of evidence before accepting them. I do not think I would have demanded this same standard of evidence back in 2010, however. This is why I am less harsh on the editors of Science and Nature than many seem to be. By 2016 it was clear that the "surveillance cue" studies that had psychologists pinning eyes up on walls were failing to replicate. In 2010 the replication wave had not yet hit, and scientists were not trained to ask themselves whether their studies would replicate. Things have changed. Psychologists now constantly ask themselves if their studies will replicate; more importantly, they have a large body of failed studies to learn from. If you have paid any attention to these developments, you will have learned which kinds of studies do not replicate: those with tiny sample sizes, those that rely on superficial social priming, and those whose results are counter-intuitively flashy. But that body of failed studies did not exist in 2010. It is not fair to judge the scientists of that time against data that has only become available in ours.