12 April, 2021

For God and Progress: Notes On Training the Medical Mind

William Osler, teaching at the bedside.

Understanding changing perceptions of “great works”—what books are included in a canon at a given moment in history, why certain works make the cut while others fall by the wayside, and tracking down the individuals responsible for these decisions—is a hobby of mine. I have written about it many times on the Scholar's Stage. This week I came across an interesting example of canon formation in action. William Osler was one of the founding physicians of Johns Hopkins Medicine, creator of the hospital residency system, inventor of much of the hands-on medical pedagogy still used in medical schools today, and one of the most famous doctors of his day. On the final page of Aequanimitas, a collection of Osler’s lectures and orations published in 1904, is a list of books that Osler believed should be on every medical student’s bookshelf. He suggests that while in medical school young doctors-to-be should spend the last thirty minutes of their night reading from this chosen library.

 This is the list: 

  1. The Old and New Testaments 
  2. Shakespeare 
  3. Montaigne 
  4. Plutarch’s Lives 
  5. Marcus Aurelius 
  6. Epictetus 
  7. Religio Medici 
  8. Don Quixote 
  9. Emerson 
  10. Oliver Wendell Holmes—Breakfast Table series. [1] 

It is difficult to imagine such a list pushed on medical students by any dean today! I fear starting with Osler’s bedside library might give you the wrong impression of Osler’s priorities; this is, after all, taken from the last page of Osler’s book. Osler was practical minded: he had these words stenciled in every one of his medical textbooks at Johns Hopkins:

The knowledge which a man can use is the only real knowledge, the only knowledge which has life and growth in it and converts itself into practical power. The rest hangs like dust about the brain or dries like rain drops off the stones.[2]
To find this “real knowledge” Osler instructed students to “Divide your attentions equally between books and men.”[3] “To study the phenomena of disease without books is to sail an uncharted sea,” Osler argued, but “to study books without patients is not to go to sea at all.”[4] Thus Osler’s pioneering pedagogy, which brought medical students into hospitals as they learned. But Osler’s injunction to study men went beyond medical patients:
The strength of a student of men is to travel—to study men, their habits, character, mode of life, their behavior under varied conditions, their vices, virtues, and peculiarities. Begin with a careful observation of your fellow students and of your teachers; then, every patient you see is a lesson in much more than the malady from which he suffers. Mix as much as you possibly can with the outside world, and learn its ways. Cultivated systematically, the student societies, the students’ union, the gymnasium, and the outside social circle will enable you to conquer the diffidence so apt to go with bookishness and which may prove a very serious drawback in after-life. I cannot too strongly impress upon the earnest and attentive men among you the necessity of overcoming this unfortunate failing in your student days.[5]
The “books” Osler urged his students to study were not limited to his bedside library. He gave long lectures on the urgency of keeping up with advances in science and medicine that would occur during a doctor’s life.[6] Of these sciences, Osler believed biology the most critical:
Biology touches the problems of life at every point, and may claim, as no other science, completeness of view and a comprehensiveness which pertains to it alone. To all whose daily work lies in her manifestations the value of a deep insight into her relations cannot be overestimated. The study of biology trains the mind in accurate methods of observation and correct methods of reasoning, and gives to a man clearer points of view, and an attitude of mind more serviceable in the working-day-world than that given by other sciences, or even by the humanities. Year by year it is to be hoped that young men will obtain in this Institute a fundamental knowledge of the laws of life.
To the physician particularly a scientific discipline is an incalculable gift, which leavens his whole life, giving exactness to habits of thought and tempering the mind with that judicious faculty of distrust which can alone, amid the uncertainties of practice, make him wise unto salvation. For perdition inevitably awaits the mind of the practitioner who has never had the full inoculation with the leaven, who has never grasped clearly the relations of science to his art, and who knows nothing, and perhaps cares less, for the limitations of either.[7]

I excerpt Osler at such length only to show that he was not some fancy fart bitter over the low number of STEM students enrolling in his seminar on the poetics of gender in 1970s Chicano literature. This was a man who valued practical knowledge above all, a man who was enamored with science and its possibilities—and a man who wanted all of his medical students to read Shakespeare and Emerson. 


Here is the passage where Osler introduces the necessity of liberal learning:

The medical man, perhaps more than any other man, needs that higher education of which Plato speaks,—"that education in virtue from youth upwards, which enables a man eagerly to pursue the ideal perfection." It is not for all, nor can all attain to it, but there is comfort and help in the pursuit, even though the end is never reached.
…Like a good many other things, it comes in a better and more enduring form if not too consciously sought. The all-important thing is to get a relish for the good company of the race in a daily intercourse with some of the great minds of all ages. Now, in the spring-time of life, pick your intimates among them, and begin a systematic cultivation of their works. Many of you will need a strong leaven to raise you above the dough in which it will be your lot to labour. Uncongenial surroundings, an ever-present dissonance between the aspirations within and the actualities without, the oppressive discords of human society, the bitter tragedies of life, besides the hidden springs of which we sit in sad despair—all these tend to foster in some natures a cynicism quite foreign to our vocation, and to which this inner education offers the best antidote. Personal contact with men of high purpose and character will help a man to make a start—to have the desire, at least, but in its fulness this culture—for that word best expresses it—has to be wrought out by each lesson that you will enjoy.
The practice of medicine is an art, not a trade; a calling, not a business; a calling in which your heart will be exercised equally with your head. Often the best part of your work will have nothing to do with potions and powders, but with the exercise of an influence of the strong upon the weak, of the righteous upon the wicked, of the wise upon the foolish.
To you, as the trusted family counsellor, the father will come with his anxieties, the mother with her hidden grief, the daughter with her trials, and the son with his follies. Fully one-third of the work you do will be entered in other books than yours. Courage and cheerfulness will not only carry you over the rough places of life, but will enable you to bring comfort and help to the weak-hearted and will console you in the sad hours when, like Uncle Toby, you have "to whistle that you may not weep." [8]
Osler’s sentimentality is offered without apology. Words like his sound saccharine to the cynics of our century, but they were par for the course at the turn of the last. No decade of American history was as effusively earnest as the 1890s; it was a time when men thought in moralisms and even vain intellectuals chased after homely, middle-class ideals. Like Osler, its leading figures were fervent devotees of both God and Progress. Osler saw his profession as the embodiment of both of these absolutes. No irony, no self-deprecation, no titters and tutters about doctors and their follies are to be heard from him! Hear him discourse to the nurses of the nation on the rightness of their chosen career:
Practically there should be for each of you a busy, useful, and happy life; more you cannot expect; a greater blessing the world cannot bestow. Busy you will certainly be, as the demand is great, both in private and public, for women with your training. Useful your lives must be, as you will care for those who cannot care for themselves, and who need about them, in the day of tribulation, gentle hands and tender hearts. And happy lives shall be yours, because busy and useful; having been initiated into the great secret—that happiness lies in the absorption in some vocation which satisfies the soul; that we have here to add what we can to, not to get what we can from, life.[9]
Imagine this sort of declaration coming from the lips of a professor of medicine today: how strange it would sound! In our day, those who quest for transcendence do not go to med school (much less nursing school). For the American millennial, med school is instead a portal to bourgeois respectability (and personal misery). But Osler is all in on transcendence. He unironically describes the path of medicine as the path of Christ. Nurses and doctors have consecrated themselves as hands of the Lord and heralds of the future. Theirs is to comfort the afflicted, succor the needy, and heal the sick. Osler put the matter bluntly in an address to medical students in Minneapolis:
My message is chiefly to you, Students of Medicine, since with the ideals entertained now your future is indissolubly bound. The choice lies open, the paths are plain before you. Always seek your own interests, make of a high and sacred calling a sordid business, regard your fellow creatures as so many tools of trade, and, if your heart's desire is for riches, they may be yours; but you will have bartered away the birthright of a noble heritage, traduced the physician's well-deserved title of the Friend of Man, and falsified the best traditions of an ancient and honourable Guild.[10]

 Osler was no patsy. That he felt this admonition necessary is evidence of an acute awareness that not all doctors were as committed to the life of righteousness as he. This brings us full circle, back to Osler’s bedside library. 

He saw in these books a tool to instill within his students both the character traits and guiding ideals needed for a life in medicine. These traits include the “cheerfulness and courage” endorsed in the passage above, but also equanimity (for moments of pressure and crisis), wisdom (for the grieving patient), optimism (in the face of death and dying), and charity (an absolutely necessary trait, Osler argues, for peace of mind in a profession afflicted with large egos and petty jealousies). He also writes that

Nothing will sustain you more potently than the power to recognize in your humdrum routine, as perhaps it may be thought, the true poetry of life—the poetry of the commonplace, of the ordinary man, of the plain, toilworn woman, with their loves and their joys, their sorrows and their griefs. The comedy, too, of life will be spread before you, and nobody laughs more often than the doctor at the pranks Puck plays upon the Titanias and the Bottoms among his patients.[11]

Behind Osler’s broad conception of a doctor’s role was an equally broad conception of the education a doctor needed to perform it. The metaphor Osler favored is the leaven in the bread-dough. He believed that studying great works of literature and philosophy would lead inborn virtues to bloom large and quick—or at least, larger and quicker than they otherwise would if left unleavened. A doctor who has internalized these works was not guaranteed to live a better life, but on balance such a doctor was more likely to choose right when the “best traditions of an ancient and honorable guild” were arrayed against the allures of lucre, envy, and prestige.

Thus far I have emphasized the contrast between Osler’s view of the medical man and the narrower confines of the modern medical school. But his vision of a liberal education also stands in contrast to the manner in which most great works are approached by specialists in literature, poetry, rhetoric, and history today. Few professors would turn to Plutarch or Cervantes for “courage and cheerfulness.” Among the tempted few, fewer still are those who would proclaim their inspiration loudly. The modern academic mode is analytic and detached. What they teach is doomed to “hang like dust about the brain or dry like rain drops off the stones.”

 But it may be misleading to contrast Osler only with the present day. The truth is that his attitude towards the great works was itself a rather new development in anglophone thought. Let us contrast Osler’s recommended reading with a list composed by a different doctor some fifty years earlier. John Brown, once the most eminent doctor of Edinburgh, gives this advice on the training of young doctors-to-be:

But it may be asked, how are the brains to be strengthened, the sense quickened, the genius awakened, the affections raised — the whole man turned to the best account for the cure of his fellow-men? How are you, when physics and physiology are increasing so marvellously, and when the burden of knowledge, the quantity of transferable information, of registered facts, of current names—and such names!—is so infinite: how are you to enable a student to take all in, bear up under all, and use it as not abusing it, or being abused by it?
You must invigorate the containing and sustaining mind, you must strengthen him from within, as well as fill him from without; you must discipline, nourish, edify, relieve, and refresh his entire nature; and how? We have no time to go at large into this, but we will indicate what we mean: encourage languages, especially French and German, at the early part of their studies; encourage not merely the book knowledge, but the personal pursuit of natural history, of field botany, of geology, of zoology; give the young, fresh, unforgetting eye, exercise and free scope upon the infinite diversity and combination of natural colours, forms, substances, surfaces, weights, and sizes—everything, in a word, that will educate their eye or ear, their touch, taste, and smell, their sense of muscular resistance; encourage them by prizes, to make skeletons, preparations, and collections of any natural objects; and, above all, try and get hold of their affections, and make them put their hearts into their work.
Let there be no excess in the number of classes and frequency of lectures. Let them be drilled in composition; by this we mean the writing and spelling of correct plain English (a matter not of every-day occurrence, and not on the increase)—let them be directed to the best books of the old masters in medicine, and examined in them,—let them be encouraged in the use of a wholesome and manly literature. We do not mean popular or even modern literature—such as Emerson, Bulwer, or Alison, or the trash of inferior periodicals or novels—fashion, vanity, and the spirit of the age, will attract them readily enough to all these; we refer to the treasures of our elder and better authors. If our young medical student would take our advice, and for an hour or two twice a week take up a volume of Shakspere, Cervantes, Milton, Dryden, Pope, Cowper, Montaigne, Addison, Defoe, Goldsmith, Fielding, Scott, Charles Lamb, Macaulay, Jeffrey, Sydney Smith, Helps, Thackeray, etc., not to mention authors on deeper and more sacred subjects — they would have happier and healthier minds, and make none the worse doctors. 
If they, by good fortune—for the tide has set in strong against the literae humaniores—have come off with some Greek or Latin, we would supplicate for an ode of Horace, a couple of pages of Cicero or of Pliny once a month, and a page of Xenophon. French and German should be mastered either before or during the first years of study. They will never afterwards be acquired so easily or so thoroughly, and the want of them may be bitterly felt when too late.[12]

There are several points of interest here. Notice first that with the exception of Cervantes and Montaigne, the writers were all British. They are poets, essayists, and novelists all, artists of the English language. Not included are philosophers or political theorists who wrote in English (like John Locke or Thomas Hobbes). Ancient authors are included almost as a philological exercise; they are to be read in their original tongue, and only a few pages a day. While these readings are described as “wholesome and manly,” the main reason given for studying them is not to leaven moral virtues (like Osler’s “courage and cheerfulness”) but to produce “happier and healthier minds.” Brown’s educational program is not designed to shape the soul so much as it is to “strengthen the brain” and “quicken the sense”—the nineteenth century gloss for what we today might call “critical thinking.” 

Brown’s conception was closer to the nineteenth century norm. In England and America both, an education in “great books” meant reading through the greatest poets and prose-writers of the English language. If a book like Dante’s Inferno, a staple of 20th century great books collections, was taught in university, it was to students of Italian. Thus even in 1907 Arnold Bennett would write up a list of two hundred great works without including a single work in translation.[13] The study of these writers was often justified in fairly utilitarian terms: Thomas Jefferson advised his nephew to read “Milton's Paradise Lost, Shakspeare, Ossian, Pope's and Swift's works” not for their insight but “in order to form your style in your own language.”[14] This is why Brown connects his list of wholesome, manly writers to “the writing and spelling of correct plain English”—one read the great poets, novelists, and essayists of the past to become a better wordsmith.

Running parallel to the study of the English language was an education in the classics, meaning the literature of Greece and Rome. But here too, study was more philological than philosophical. The crown jewels of the nineteenth century education were Horace and Cicero.[15] Neither makes Osler’s list, and neither is given much space in the twentieth century great works curricula. I do not read Latin, but my secondhand understanding is that as stylists Horace and Cicero tower over their fellow Romans. Both served as the inspiration for some of the greatest poems and orations of the English language. However, (from firsthand experience this time) neither author’s wordplay translates especially well into English; I have not yet found a translation of Horace or Cicero that I would describe as beautiful. I suspect that those who cannot read Latin will never enjoy these two writers the way the great minds of the nineteenth century did. 

At the dawn of the twentieth century these two distinct educational traditions—the “masters” of the English language and the “classics” of Greece and Rome—began to merge. Osler’s reading list is a data point in this transition, but also an example of why this transition occurred. Osler was a pioneer of the new university system. These were universities as we know them today: houses of learning divvied up into distinct (and multiplying!) academic departments staffed by professors who are expected to engage in research as a professional pursuit. Skill in ancient Greek was of little use to students in these departments, and the mandatory study of Greek and Latin was slowly eliminated from admissions tests.[16]

Yet the growing specialization of academic fields was not without its problems.[17] Many worried that learning would grow too fractured, intellectual life too fragmented. Others feared that an education system that simply churned out technicians would endanger democracy and liberty. One solution to these problems was that endorsed by Osler: the close reading of a new canon of “classic” texts.

 I sometimes think of this as the transition from Horace to Homer. Homer was read avidly over the eighteenth and nineteenth centuries, but it is difficult to find him in any of the educational lists or syllabi written over these centuries. Homer’s Greek was archaic even in classical times, and the length of his epics did not make them an ideal vessel for language study. But when the study of the classics transitioned to the study of classics in translation, this was no longer a barrier. In this new environment, a gripping narrative work like The Iliad had an advantage over Horace’s lyrics. Homer became a staple of the new syllabi, and Horace was reduced to an optional afterthought.[18] 

This transition to reading classics in translation also opened the door to writers like Dante, Molière, and Tolstoy, who did not fit any of the traditional language-based categories. The canon soon grew beyond Great Britain and the ancient world. Famous British prose writers—like Addison, Lamb, and Johnson—were squeezed out of the curriculum to make room for these new additions. Further pressure on the masters of English prose came when philosophy and theology were folded into this new conception of the canon. As the new canon was defined by intellectual and moral categories instead of philological ones, the inclusion of philosophy made sense. The main cost was a severe reduction in the English poetry and prose students were expected to be familiar with. This is still true today: Augustine, Aquinas, Rousseau, and Kant are not exactly household names, but our intellectuals are expected to have a vague idea of what they wrote, even if they have not read their works themselves. No such expectations exist for Edmund Spenser, Henry Fielding, or Thomas De Quincey. 

Osler’s bedside library arrives at the beginning of the Horace-to-Homer transition. He does not include Homer or any long work of political philosophy in his list. Works like that do not meet the goals he has set out for the bedside library. His reading list is carefully tailored: it is meant for men who will embark on a career in medicine, not politics. They must be books that can be read thirty minutes at a time before bed. As he recommends this list to all his students, they must also be easy for philosophical or historical novices to pick up. His list largely follows these requirements. 

Two of Osler’s chosen authors were themselves doctors. None offer extended narratives or complex, belabored arguments (Don Quixote comes closest here, but even it is divided up into a dozen smaller narratives, not unlike a television serial). One of Osler’s students could pick up a meditation of Emerson’s one day, a life of Plutarch the next, then an essay of Montaigne’s the day following, and not suffer from confusion. These are works whose subdivisions can be read straight through or as standalone pieces. They are all very much concerned with practical ethics and practical spirituality. Most could be defined as character studies or wisdom literature; all were well known for their aphoristic acumen. 

 In addition to the Bible, Osler’s list of authors includes two Stoics, one other Roman, three Renaissance men, and two American transcendentalists. Another way to look at that: in Osler’s library we see the foundational ethics of Rome and Jerusalem, the Renaissance attempt to synthesize these pagan and Christian values into one organic whole, and the American update on this synthesis for the democratic age. The enlightenment and the middle ages are not represented, subsumed instead under the eras that follow them. 

There is a certain optimism in this progression. The Roman voices come from the empire at its height, not the Republic in decline; the Renaissance thinkers wrestle over and glory in the expanding frontiers of their age; the Americans speak for a nation brimming with energy and self-confidence. Foundations are followed by synthesis, a synthesis soon brightened and democratized. There are no voices here from human civilization's descents into darkness. Osler’s library is the story of mankind climbing towards the light. 

Osler lived just before the lights went out. From this side of that black chasm, the defining egoist of Osler’s century was not Emerson but Nietzsche. To our ears, the voices of the transcendentalists are crowded out by the prophecies of Marx, the warnings of Dostoevsky, and the murky imaginaries of Melville and Conrad. Twentieth century compilers of great works would turn to these dark visionaries to make sense of their lived reality. They would champion the shattered order of Thucydides and the nightmarish disasters of the Greek tragedians over the sober reflections of Plutarch and the Stoics; they pulled Augustine and Dante out of the dark ages and set them as the equal of any artist of the Renaissance. Their canon was larger than Osler’s bedside library, but also grimmer. They lived with less confidence in God and Progress. 

 As to what ten books the doctors of today should read to complete their learning, I cannot say. I am no doctor. But I cannot help but think Osler is right: the physician is called to “exercise an influence of the strong upon the weak, of the righteous upon the wicked, and the wise upon the foolish,” and must prepare his or her soul to do so. Perhaps that requires a canon tinged darker than Osler’s little library. But perhaps not. Perhaps the pendulum has swung too far. Maybe what the doctors and nurses of our day need most is the brazen idealism of Osler himself. If so, Osler’s Aequanimitas would be a worthy first entry in the bedside library of our own doctors-to-be. 


To read more of my notes on the history of great works and their canons, see: "A Few More Notes on the Dearth of Great Works," "Do the Great Books Have a Place in the 21st Century?" "Longfellow and the Decline of American Poetry," "A Non-Western Canon: What Would a List of Humanity's Hundred Greatest Thinkers Look Like?,"  "On Adding Phrase to the Language." To get updates on new posts published at the Scholar's Stage, you can join the Scholar's Stage mailing list, follow my twitter feed, or support my writing through Patreon. Your support makes this blog possible.



 [1] William Osler, Aequanimitas, With Other Addresses to Medical Students, Nurses, and Practitioners of Medicine (Philadelphia: P. Blakiston's Son & Company, 1904), 389.

[2] Ibid., 215.

[3] Ibid., 220.

[4] William Osler, "The Student Life," orig. pub. 1921. Available at Quotidiana, ed. Patrick Madden, 19 January 2007.

[5] Ibid.

[6] Cf. Osler, Aequanimitas, 278-280.

[7] Ibid., 97.

[8] Ibid., 383.

[9] Ibid., 19.

[10] Ibid., 42.

[11] Ibid.

[12] John Brown, Horae Subsecivae, 7th ed. (Edinburgh: Edmonston and Douglas, 1871 [orig. ed. 1858]), 400.

[13] Arnold Bennett, Literary Taste: How to Form It, With Detailed Instructions for Collecting a Complete Library of English Literature (Hodder: 1909), reprint edition, available at DJ McAdam's personal website.

[14] Thomas Jefferson to Peter Carr, Paris, 19 August, 1785. Available at the Avalon project.

 [15] Mary Rosner, "Cicero in Nineteenth-Century England and America," Rhetorica 4, no. 2 (1986): 153–82; Lyon Rathbun, “The Ciceronian Rhetoric of John Quincy Adams,” Rhetorica 18, no. 2 (2000): 175–215; "English Literature and the Latin Classics: Review of Horace and the Chief Poets of the Nineteenth Century," Classical Weekly 12, no. 23 (21 April 1921); Stephen Harrison, "The Reception of Horace in the Nineteenth and Twentieth Centuries," in The Cambridge Companion to Horace (Cambridge: Cambridge University Press, 2007), 334-347.

[16] For late 19th century changes in university curricula, see Christopher Stray, Classics Transformed: Schools, Universities, and Society in England, 1830-1960 (Oxford: Clarendon Press, 1998); John Thelin, A History of American Higher Education, 3rd ed. (Baltimore: Johns Hopkins University Press, 2019).

[17] Osler himself laments this in a passage that sounds eerily similar to today's complaints:

The extraordinary development of modern science may be her undoing. Specialism, now a necessity, has fragmented the specialities themselves in a way that makes the outlook hazardous. The workers lose all sense of proportion in a maze of minutiae. Everywhere men are in small coteries intensely absorbed in subjects of deep interest, but of very limited scope. Chemistry, a century ago an appanage of the Chair of Medicine or even of Divinity, has now a dozen departments, each with its laboratory and literature, sometimes its own society. Applying themselves early to research, young men get into backwaters far from the main stream. They quickly lose the sense of proportion, become hypercritical, and the smaller the field, the greater the tendency to megalocephaly. The study for fourteen years of the variations in the colour scheme of the thirteen hundred species of tiger-beetles scattered over the earth may sterilize a man into a sticker of pins and a paster of labels; on the other hand, he may be a modern biologist whose interest is in the experimental modification of types, and in the mysterious insulation of hereditary characters from the environment.
Osler, Aequanimitas, 50. 

[18] Another factor which accounts for this change, which I do not have time to go into at length here, is the slow decline of oratory as a spring of glory and a source of entertainment, and the correspondingly small amount of attention educators gave to rhetoric and wordplay as time went on.

01 April, 2021

Welcome to the Decade of Concern

We’re looking at that big bow wave and wondering how the heck we’re going to pay for it, and probably thanking our stars we won’t be here to have to answer the question.

— Brian McKeon, Deputy Under-Secretary of Defense for Policy [2016]
The most dangerous concern is [the use] of military force against Taiwan... My opinion is this problem is much closer to us than most think.
—John Aquilino, Admiral, Indo-Pacific Command [2021]

The 2020s do not look good.

This weekend I read two large reports that look at the present and future of the U.S. military’s force structure. Together they present a disturbing picture of the decade to come. Both of these reports are squarely focused on the United States military and a constellation of problems it will soon face. Neither is written by an expert in Asian military affairs; both write with shared assumptions about the nature of “great power competition” with China, but neither report is about China. Neither attempts to contrast their predictions for the United States with likely developments across the Pacific. But it is precisely those developments that make the trends traced in Mark Cancian's U.S. Military Forces in FY 2021: The Last Year of Growth? and Mackenzie Eaglen and Hallie Coyne’s The 2020s Tri-Service Modernization Crunch so alarming. In particular, the trends described in these reports should have alarm bells going off in Taipei. Taiwan—and any country that might be called on to defend it—is entering a dangerous decade.

Defense planning talk can lead eyes to glaze over. Little surprise! Debates over this topic quickly get bogged down with acronyms, accounting terms, and references to opaque production and planning cycles. One helpful way for ordinary citizens to get a handle on these issues—at least, for ordinary citizens of my generation—is to think of defense planning as a bit similar to a real time strategy (RTS) computer game, like Warcraft or Age of Empires. Games of that sort force the player to decide how they will spend constrained resources. Do you spend your gold or vespene gas (or whatever else it is the game uses in lieu of money) on the production of new fighting units, the development of new technologies that will improve your kingdom, or on increasing the scale of resource extraction? That is the usual tradeoff present in most RTS games, though some games will add in additional wrinkles. Gamers know that the right balance between these three is key: if they choose poorly at the beginning of a gaming session, they will be dealing with the consequences of their bad investment for the rest of the game.

Though they operate on a vastly larger scale, the senior officers and civilian leaders in charge of the Pentagon’s purse strings are not that different from Starcraft e-sports stars. Like the RTS gamer, these leaders must plan over time in a world of constrained resources. Tradeoffs are inevitable.

Defense planners find themselves trying to balance three competing priorities. The first of these is the development of new technologies and fighting platforms. To fight with the technology of the future, one must pay for its development now. In defense planning lingo, this is usually called “modernization.” You might think of this as the ‘research tree’ found in most RTS games.

The second category is the procurement of new platforms that have already been developed. Sometimes these purchases mean an absolute increase in the number of platforms fielded; other times they simply mean buying new vehicles or ships to replace those retiring from service. This is spending on “force structure” — the idea being that your military force is properly structured to accomplish the strategic goals laid out for it. Again, there is an easy analogue with the RTS gamer, who must carefully choose which units will be the most valuable additions to their army based on the type of enemy they are fighting.

The third category does not have such an easy analogue in most RTS games (though you do see a similar mechanism in many turn-based strategy games). This category covers the costs of maintenance, training, and operations. Maintenance includes the regular repairs needed to fix the wear-and-tear of normal use, but also technological upgrades—say, installing a new weapons system or radar array on an aircraft that has been in service for many years. It also includes the cost of training exercises and other measures (such as fueling, inspections, deployment, etc.) that keep these platforms and the military units they are attached to “ready” to join the fight. Thus in peacetime this category is often described with the word “readiness.” However, combat operations are also usually included as part of this category when budgets are drawn up. Depending on the tempo and intensity of the war in question, combat operations might swallow this entire section of the budget (and much more besides). Defense planning documents often call this the “operations and maintenance” section of the budget.

There are other factors that might determine how money is spent—the desire to maintain a resilient industrial base or to please a senator, for example—but the vast majority of spending decisions are an attempt to balance the competing demands of modernization, force structure, and readiness. A military branch that spends all of its money on modernization will have superior technology in the long term but nothing to fight with in the here-and-now. A military optimized for force structure, on the other hand, risks mortgaging the long term away for the sake of near-term gains. But even those near-term gains might not be near enough: because new platforms are expensive and slow to construct, a military too focused on optimizing force structure might find itself blindsided and unprepared if it has not spent an equal amount of money on maintaining readiness and upgrading old legacy platforms while new ones are being built. Finally, a force that spends all of its money on the maintenance and operations of the moment will be ready for a fight today, but will struggle to compete with advancing adversaries in the future, and may be overwhelmed by the rising costs of operations as its technology and platforms begin to age.
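For readers who think best in code rather than in budget documents, the three-way tradeoff can be caricatured as a toy simulation. Everything in the sketch below (the starting fleet, the cost curves, the retirement rule) is invented for illustration; none of these numbers come from Cancian or Eaglen.

```python
def run_decade(split, budget=100.0, years=10):
    """Simulate `years` budget cycles under a fixed spending split.

    split is a (modernization, force_structure, readiness) tuple of
    budget shares summing to 1. All constants are made up.
    Returns (fleet_size, average_age, tech_level).
    """
    mod, force, ready = split
    fleet, avg_age, tech = 20.0, 10.0, 1.0    # start: 20 units, average age 10
    for _ in range(years):
        tech += mod * budget / 100.0          # modernization buys future tech
        upkeep = fleet * (1 + avg_age / 20.0) # older fleets cost more to keep ready
        shortfall = max(0.0, upkeep - ready * budget)
        retired = min(fleet, shortfall / 5.0) # unfunded upkeep retires units early
        survivors = fleet - retired
        new_units = force * budget / 10.0     # each new platform costs 10
        fleet = survivors + new_units
        # survivors age one year; new units enter the fleet at age zero
        avg_age = survivors * (avg_age + 1) / fleet if fleet else 0.0
    return fleet, avg_age, tech
```

Run an all-readiness split and the fleet survives intact but ages and never modernizes; run an all-procurement split and unfunded maintenance retires old units about as fast as new ones arrive; only a balanced split grows the fleet and the technology level together. Crude, but it is the same dilemma the reports describe.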

I apologize to experienced nat-sec hands for this introductory, simplified overview, but this issue is important—important enough that Americans outside the defense industry need to understand it. That small backgrounder should be enough context for these two reports to make sense.

Mark Cancian’s report is the more sober of the two; he does not argue a case so much as identify current trends and explain the sort of tradeoffs facing each of the U.S. military’s main branches. Eaglen's report is just as well sourced as Cancian’s, but more argumentative. She and her research assistant believe a crisis is around the corner. They want you to believe it too. Eaglen is also more willing to endorse specific solutions to the crises she sees. The two reports’ findings are complementary, however, and I will quote liberally from both of them below.

Let us start with Eaglen. She describes the basic problem quite dramatically:
Fleets of ships, aircraft, vehicles, and other equipment are reaching the end of their service lives, hitting the edge of their upgrade limits, and losing combat relevance. As great-power competition accelerates, the United States is offering a free and open window of opportunity and advantage to its adversaries. Unless policymakers take concrete steps now, defense leaders will continue America’s sleepwalk into strategic insolvency and its consequences. The aptly named “Terrible 20s” have arrived. The intention of this report is not to propose ideal or preferred defense investments. Rather, it aims to deliver an unvarnished overview of the existing modernization bill before the Pentagon today, forcing an overdue confrontation with reality…. In 2016, popular military blogger and Navy Cmdr. CDR Salamander (ret.) coined the phrase “Terrible 20s” to describe the modernization challenges before the US military this coming decade. He offered an ominous overview of the next 10 years as “that horrible mix of debt bombs, recapitalizing our SSBN [ballistic missile submarines] fleet, and the need to replace and modernize legacy aircraft, ships, and the concepts that designed them.” It is a bracing and accurate summary of the following analysis.[1]

She also includes a fun graphic to illustrate the problem: 

Figure 2, The 2020s Tri-Service Modernization Crunch (2021)

How did this happen? It started with a Clinton-era decision to focus on upgrading legacy platforms instead of developing or purchasing new ones:
By the end of the Bill Clinton administration, the Pentagon had laid out a strategy to update and replace the Reagan-era fleets. This plan hinged on justifying end strength reductions across the services with the increases in combat power delivered by new and improved military technologies.
When explaining this reasoning for the American Enterprise Institute in 2007, Robert Work used the example of advancements made to the shipboard vertical launch systems (VLS). In 1989, 108 large surface combatants carried 1,525 VLS cells, with an aggregate magazine capacity of 7,133 battle-force missiles. By 2004, the Large Surface Combatant (LSC) fleet shrunk to 71, but it carried 6,923 VLS, with a fleet magazine capacity of 7,539 battle-force missiles. More revolutions in satellite-guided weapons, unmanned aerial vehicles, missile defense systems, and improved targeting and radar technology are also cited as demonstrable examples of key new battlefield technologies from the Clinton years, even as modernization spending on procurement and R&D plummeted from its peak in FY85 to a new low a decade later. [2]

During the Bush years, force structure was focused on winning the war at hand, and modernization was once again put off:
In the 2000s, Pentagon leaders focused understandably on the wars but did so while planning too optimistically in realizing ambitious technology transformations that would take decades to materialize. As a result, not enough investment was made in the conventional platforms required to maintain a ready force and strong conventional deterrent through the 2020s. In fact, rosy assumptions about revolutions in military affairs and the promises of technology solutions tomorrow became a justification to drastically slash those same aging fleets and inventories of ships, aircraft, and vehicles the troops use every day to sail, fly, and drive to accomplish their missions. Now the military is facing a decade of staggering modernization cost.[3]

Then came Robert Gates’ fight against “Next War-itis” and the sequester years:
Politically vulnerable because of outside pressures, new programs stood no chance and were killed en masse by the new administration. President Obama felt liberal pressure to curtail the military-industrial complex and defense spending, while Gates took personal offense at a military bureaucracy still focused on preparing for conventional conflict instead of pouring its full energy into the ongoing counterinsurgencies and counterterror operations in Iraq and Afghanistan. The bureaucracy scaled down its plans below its own requirements and sought to shield programs from permanent death by keeping their pilot flames lit. These choices created a second round of cancellations near the turn of the decade that dwarfed the Rumsfeld cluster….

In 2011, following the hollow buildup of the 2000s, Congress and the president’s failure to agree on entitlement and other reforms resulted in the BCA. Two years later, the BCA led to the sequestration of 2013, which swung a budgetary axe mostly on discretionary funding, half of which sustains the US military. The Pentagon responded largely by canceling dozens of programs permanently and delaying almost everything else except for present-day needs. Leaders calculated they could accept risk in the mid to long term, as long as large swaths of troops were still engaged in ongoing conflicts and another large part stood ready to fight on a moment’s notice.[4]

Thus for three decades America traded away modernization and longer-term force structure procurement for the sake of maintaining readiness and battlefield operations. The long wars forced some of this tradeoff on the services (cue Eaglen: “Today, the US military is in the middle of a future that was mortgaged to pay for the wars of yesterday”), but political foolishness played just as large a part.[5] The thing to emphasize here is the long-term consequence of poor decision making by national elites. Because procurement and development programs run so long, mistakes made in 2003 or 2013 reverberate decades later. Today we enter the 2020s with a military built during the 1980s.

But as Cancian makes clear, the temptation to further defer force structure procurement and modernization lingers with us:
From the service perspective, the key tension for force structure will be between the desire to cut size to invest in modernization and the need to maintain day-to-day deployments for crisis response, ongoing operations, and allied and partner engagement. If the forces get too small, then the operational tempo required to maintain these deployments will stress personnel. This would hurt sustainability of the all-volunteer force, particularly if the economy recovers and recruiting and retention get more challenging as a result of competition for labor. The Biden administration, like every administration before it, will pledge to support service members, so it will need to heed complaints about stress…. 
The Biden administration will be particularly conflicted here because of its often-stated desire to reassert U.S. global leadership. The United States cannot be a global leader if it pulls its forces back from global deployments. Some strategists have argued that a “virtual” or intermittent presence from the United States can substitute for forward stationing or continuous rotations. However, critics point out that virtual presence is actual absence. Knowing that a carrier is in Norfolk does not have the same impact as seeing 90,000 tons sail into one’s harbor.[6]

Deferring modernization and procurement like this carries a financial cost. Eaglen explains why:
As a result of failing to undertake necessary modernization, the military instead pays for aging platforms to stay in the force. In fact, the problem mirrors our broader national challenge with net interest. Just as a quarter or more of debt growth over the next decade will be net interest on the debt itself, the military has begun to pay more to keep old equipment running, which makes it increasingly difficult to invest in new platforms. It’s a vicious cycle, often called an “acquisition death spiral.” [7]

The fiscal consequences of this can be seen in the percentage of current expenditures that go towards maintenance and upgrades of legacy systems: 


Why these costs are so severe makes more sense when you see just how old many of our principal military platforms really are. For one example, here is Cancian’s tally of the Air Force fleets:
Some fleets are in relatively good shape: the transport fleet (21 years, on average) because of acquiring C-17s and C-130s, the special operations fleet (12 years) because of its high priority, and the UAVs/RPVs (6 years) because of large wartime purchases. Other fleets are old: fighter/attack (29 years old), bomber (42 years), tanker (49 years), helicopter (32 years), and trainers (32 years). All the older fleets (except for some specialty aircraft) have programs in place for modernization, but the programs have been delayed, are expensive, and may take years to implement fully. [8]

But now this system of pushing platforms just one more decade past their due date has reached its limits. Many of the old legacy systems simply cannot be rolled through one more decade of use. Even if they could, the money spent on drawing out the life of a legacy system would be better spent on modernization and force structure changes. This is the logic behind the US Marine Corps’ decision to get rid of their tank battalions, for example. Theirs is a purposeful attempt to shed platforms that the service does not think will be useful in a conflict with China. But other draw-downs are simply the product of poor planning.

Consider the Navy's hopes to drastically increase the number of nuclear attack submarines it can put in the water:


Cancian explains what you see:
Attack submarines (SSNs) receive strong support from strategists because their firepower and covertness are useful in great power conflicts. Thus, they are likely to receive strong support in the next administration, whether that is a Trump or Biden administration. However, submarines are expensive (about $3.3 billion each in the current version), so increasing production is difficult…

Numbers dip in the late-2020s and early-2030s, bottoming at 42 boats as Los Angeles-class boats built during the 1980s retire. Secretary Esper said that the new plan intends to extend the service life of additional older submarines, but the Navy tends to retire old ships early in order to buy new ships…
The obvious solution is to build more submarines, but having two submarine construction programs operating simultaneously puts pressure on both the shipbuilding account and the submarine industrial base. The FY 2020 Navy 30-year shipbuilding plan showed a capacity for three total submarines per year, attack (SSN) or ballistic missile (SSBN) submarines, although the Navy did not always fund to the total capacity. Esper called for building three Virginia-class submarines per year in addition to SSBNs as soon as possible, but the industrial base will need a lot of funding and lead time to get to that level of production….The Navy cannot build enough new submarines quickly enough to significantly mitigate the trough. What it can do is accelerate the rate at which it gets to its target level. [9]

If you are familiar with the war games and simulations American military officers run to game out Taiwan contingencies, Cancian’s info-graphic should disturb you. Attack submarines are widely viewed as a crucial component of the American conventional deterrent in any potential cross-straits dust-up. Stealthy and submersible, nuclear submarines are one of the few platforms we expect to reliably pierce the A2/AD death zone that will project out thousands of kilometers from the Chinese coast. Yet their numbers are set to fall through most of the 2020s. Worst of all, there is very little we can do about it. The time to avert this crisis was back in 2015.

Another set of platforms American strategists anticipate U.S. forces will rely on to pierce the A2/AD bubble are our stealth bomber fleets. Here is what Cancian has to say about that:
Since no new aircraft are being produced, the bomber force continues to age (currently 43 years on average), though various upgrade programs keep the aircraft flying and operationally relevant, for example, new engines for the B-52s and a new defensive system for the B-2s. The Air Force would like to divest some of the B-1s early but has run into congressional opposition. The B-21 Raider program continues in development, with budget demands seeming to stabilize: $2.9 billion in FY 2020 and $2.8 billion in FY 2021 and remaining at that level through FY 2025. Because the B-21 has a mid-2020s fielding date (“Initial Operating Capability”), the legacy B-52s, B-1s, and B-2s will comprise the bomber force for many years to come. Details are uncertain, however, because the B-21 remains a classified program. [10]

Here the pattern repeats. Eventually the B-21 will come online and be purchased in large numbers. But even small-batch purchases will not begin until the mid-2020s at the earliest, and the fleet, as with the submarines, will require time to slowly grow in size. Until then we can only expect the stealth bomber fleet to degrade as the existing systems age and the Air Force tries to remove the oldest platforms to save on cost.

All of this assumes that the money can be found for the post-2030 expansion of the bomber and submarine fleets. Yet part of the reason we got into this mess in the first place is that we spent the last decade pitching plans like these, which project growth in force structure—but only in the distant future. Representative Mike Gallagher rightly complained about this sort of thing back in 2019:

Yet, the Navy’s FY20 shipbuilding budget represents an overall decrease of 1.5 percent from the previous year. While the Navy submitted a 30-Year Shipbuilding Plan along with its budget that reached 355 ships for the first time in more than two decades, much of this growth happens in the outyears—the Pentagon’s version of “the check is in the mail.” Despite reaching 355 ships roughly 20 years faster than the FY19 shipbuilding plan, the new document only adds one additional ship over its first five years compared to last year’s plan. [11]

But even if the money can be found, the problem I am highlighting here will not go away. Consider the US Marine Corps’ transformation from a “tip of the spear” ready-force able to deploy anywhere in the world to a long-range artillery force stationed on the islands of the West Pacific. This transformation does not require any extra money from Congress. What it does require is time. The USMC has called its plan “Force Design 2030.” Perhaps this force design really is the perfect ticket for deterring the PLAN—but if so, it will not be complete for another decade.

Similar things could be said about the US Army's attempt to obtain more long-range munitions, the Navy’s plans to remake the surface fleet as a more distributed force centered on lighter-tonnage ships, or the fleets of unmanned submersibles and aircraft that are supposed to sustain the Navy and Air Force’s lethal edge through mid-century. In each case, the modernization of the future force is gained by slimming down the current one. This is necessary, but it comes with a catch: that future force doesn't fully arrive until the 2030s.

Can we wait that long? I am not sure we can. When Captain James Fanell (ret.), an intelligence analyst with the U.S. Navy, labeled the 2020s the “Decade of Concern” based on his projections of the PLA Navy's growing capabilities, he was treated as something of a pariah.[12] But now that Admiral Philip Davidson, INDOPACOM’s outgoing commander, has declared that he believes the PLA will be capable of assaulting Taiwan within six years, Fanell’s judgment seems prescient.[13] The 2020s will see both the growth of Chinese military power to new heights and a temporary nadir in American capacity to intervene in any conflict in China’s near abroad.

The “temporary” part of that equation is important. Historians of the First World War and the Pacific War trace the origins of those conflicts to pessimistic assessments of a changing balance of power.[14] The belligerency of imperial Japan and Wilhelmine Germany rested on a belief that their position vis-à-vis their enemies could only decline with time. Any statesman who believes that a temporary military advantage over an enemy will soon erode has a strong incentive to fight it out before the erosion sets in.

And that is the problem. Commander Salamander’s “Terrible ‘20s” and Captain Fanell’s “Decade of Concern” are the same decade. In the mid-2020s the United States will be struggling to pay for the Pentagon’s “modernization crunch.” The Navy, Marine Corps, and Air Force will be midway through a transition to a new, counter-China force structure. The number of attack submarines and stealth bombers that the United States can put in the field will be at an absolute low.

It is at this moment we project the PLA will be capable of executing a cross straits invasion.

This does not make conflict inevitable. But if the Chinese have concluded that military means are the only way to bring about Taiwan’s integration into the People’s Republic of China, Beijing's leaders will soon face powerful pressure to escalate towards war. To wait until the 2030s or 2040s to sabre-rattle is to wait for the U.S. military’s counter-China modernization and procurement programs to run their course. There will be a terrific temptation to “resolve” the problem before these programs have been implemented.

If you are Taiwanese the implications of all of this should be obvious. The clock is ticking. The terrible ‘20s have begun. 


For more of my writing on U.S. force structure and strategy, see my posts "Questions on the Future of the U.S. Marine Corps" and "Against the Kennan Sweepstakes." If on the other hand it is Taiwanese military affairs that has caught your interest, consider reading "All Measures Short of a Cross-Strait Invasion," "Why Taiwanese Leaders Put Political Symbolism Above Military Power," and "Losing Taiwan is Losing Japan." To get updates on new posts published at the Scholar's Stage, you can join the Scholar's Stage mailing list, follow my twitter feed, or support my writing through Patreon. Your support makes this blog possible.

[1] Mackenzie Eaglen and Hallie Coyne, The 2020s Tri-Service Modernization Crunch (Washington DC: American Enterprise Institute for Public Policy Research, 2021), 1, 3.

[2] Ibid., 7.

[3] Ibid., 5.

[4] Ibid., 18, 22.

[5] Ibid., 22.

[6] Mark Cancian, U.S. Military Forces in FY 2021: The Last Year of Growth?, CSIS Defense Outlook (Washington DC: Center for Strategic and International Studies, 2021), xi.

[7] Eaglen and Coyne, Tri-Service Modernization Crunch, 15.

[8] Cancian, U.S. Military Forces, 80.

[9] Ibid., 53-54.

[10] Ibid., 85.

[11] Mike Gallagher, “State of (Deterrence by) Denial,” Washington Quarterly 32, no. 2 (Summer 2019), 35.

[12] The most mature statement of this position is found in James Fanell, “Now Hear This—The Clock Is Ticking in China: The Decade of Concern Has Begun,” Proceedings, October 2017; for Fanell's most recent assessment of the PLA Navy's growth and development, see James Fanell, “China’s Global Navy—Today’s Challenge for the United States and the U.S. Navy,” Naval War College Review 73, no. 4 (2020): article 4.

[13] Mallory Shelbourne, “Davidson: China Could Try to Take Control of Taiwan In ‘Next Six Years,’” USNI News, March 9, 2021.

[14] See David Stevenson, Cataclysm: The First World War As Political Tragedy (New York: Basic Books, 2005), ch. 1; David Herrmann, The Arming of Europe and the Making of the First World War (Princeton: Princeton University Press, 1997); Michael Barnhart, Japan Prepares for Total War: The Search For Economic Security, 1919-1941 (Ithaca: Cornell University Press, 1991).

29 March, 2021

Notes From All Over - March 2021

My newest "Notes From All Over"—a collection of the best essays, news items, blog posts, podcast episodes, and scientific articles that I read this month, and recommend you read as well—is now posted to Patreon. This will be a recurring monthly feature in the future. Subscribers are welcome to go read it!

26 March, 2021

A Few More Notes on the Dearth of Great Works

Over the last two years I have written a few pieces on the flagging vitality of American intellectual life. This week Ross Douthat wrote up a response to these pieces. It has prompted a few thoughts.

1. In the comment thread for "Where Have All the Great Works Gone?" a common objection to the premise was raised several times: isn't the problem simply that there are now too many books? The logic here is easy enough to grasp: when Tolstoy was at his height, the world of Russian fiction was much smaller than the world of American letters today. In Tolstoy's day a major work made waves. But now that small pond has grown into a vast ocean; if a Tolstoy were to write in today's choppy waters, his novel could only make a few ripples before sinking out of sight.

This is obviously more true the further back in time you go. Chaucer is remembered as the father of English literature because there really are no other 14th century contenders for the title. After a few million novels have been published, however, it is hard for anything new to stand out. But I doubt this logic works as well as my readers think it does: on the one hand, book sales still follow a Pareto distribution ("0.25% of [nonfiction books] account for 50% of the sales"), and single books (say, Black Swan or How to Be an Antiracist) regularly become household names. This is true even for fiction, though there the winning works of 21st century fiction have almost entirely been part of the "YA" genre.

On the flipside, there were still hundreds of thousands of books being published every year in the early 20th century, but that did not stop the creation of "great works" then. If this argument were correct we should see the production of "great works" peter out as time moves forward. But the last decades of the 19th century and the first decades of the 20th were arguably some of the most intellectually vital in Western history.

However, there is a more sophisticated version of this argument, one with more validity to it. Sean Manning--who blogs at Books and Swords--made this point to me in an email discussion we had on the piece. He writes:

Don't forget about the power of exponential growth.  The number of works published on ancient Persia [his academic specialty] has been increasing at about 5%/year since 1945 (and it was increasing a few percent a year in the interwar period).  It was just plain possible for a polyglot to read the important books in more fields in 1900 than in 2000.  This force affects other areas of culture too: in the 1950s you could just about know all the rock music, or science-fiction novels, or fine-art photography, or what have you, that was being produced in your region. By the 21st century that was impossible.

From this perspective, the important thing about the rising number of books published is not that it becomes ever more difficult to sort the wheat from the chaff, but that it becomes ever more difficult to cross between fields and write something of interest to multiple domains of inquiry. This is a good point, and it gels with Douthat's speculations on the same question. 

2. Here is Douthat:

My own favored explanation, in The Decadent Society, is adapted from Robert Nisbet’s arguments about how cultural golden ages hold traditional and novel forces in creative tension: The problem, as I see it, is that this tension snapped during the revolutions of the 1960s, when the Baby Boomers (and the pre-Boomer innovators they followed) were too culturally triumphant and their elders put up too little resistance, such that the fruitful tension between innovation and tradition gave way to confusion, mediocrity, sterility.

I may be over-influenced here by the Catholic experience, where I think the story definitely applies. As R.R. Reno argued in a 2007 survey of the so-called “heroic generation” in Catholic theology, the great theologians of the Vatican II era displayed their brilliance in their critique of the old Thomism, but then the old system precipitously collapsed and the subsequent generation lacked the grounding required to be genuinely creative in their turn, or eventually even to understand what made the 1960s generation so significant in the first place....

 I think this frame applies more widely, to various intellectual worlds beyond theology, where certain forms of creative deconstruction went so far as to make it difficult to find one’s way back to the foundations required for new forms of creativity. Certainly that seems the point of a figure like Jordan Peterson: It’s not as a systematizer or the prophet of a new philosophy that he’s earned his fame, but as popularizer of old ideas, telling and explicating stories (the Bible! Shakespeare!) and drawing moral lessons from the before-times that would have been foundational to educated people not so long ago. Likewise with the Catholic post-liberals, or the Marx reclamation project on the left. It’s a reaching-backward to the world before the 1960s revolution, a recovery that isn’t on its own sufficient to make the escape from repetition but might be the necessary first step.

One might combine the two arguments like so: previous eras of 'creativity' were possible because intellectuals of the past had a common canon to react to. This canon stretched across many disciplines (the loss of which forces intellectuals down into intellectual silos that grow too quickly to climb out of) and across the normal divides of time and culture. That last point is important. Past cultures lived by values and concepts wildly different from our own. An education in past minds always has a subversive element to it.

This blog is in many ways an attempt at the project Douthat has outlined here. In my case, much of what I am trying to "recover" are works and ways of thought that never were part of the American tradition. Thus my essays about Ibn Khaldun's asabiyah or the wisdom of imperial Chinese poetry. One way out of the postmodern doldrums might be a 21st century Renaissance. Where Renaissance thinkers incorporated lost Greek works into their culture, injecting their intellectual life with a shot of alien values and ideas, we might do the same with the best of Asia's great civilizations. Like the Greek works the Renaissance elevated, the great debates of, say, the Song Dynasty cohere as a whole. They are strange, but sensible. They have a structure and depth of their own, even if it is utterly different from our own. Perhaps that mix is exactly what we need today.

3. With all that said, I still like my original musings for what went wrong. Here was one of my points:

The professionalization of intellectual pursuit is another problem. Melville would never have written Moby Dick if he had spent years enrolled in an MFA program instead of spending years at sea. Men and women who in past ages would have observed humanity up close (or at least who would have been forced through a broad but rigorous education in classics) instead cloister themselves in ivory towers. Their intellectual energy is channeled into ever more specialized academic fields and cautiously climbing a bureaucratic and over-managed academic ladder. Could that social scene ever produce a great work?  

Douthat begins his reaction to my piece by reflecting on the spate of reviews that have followed a new biography of Philip Roth. "In the sheer energy and delight of the Roth reviews," says he, "you can feel a cultural pulse that contemporary fiction rarely stirs." I have never been able to stomach more than a few chapters of Roth myself; I will cheerfully admit I have never finished one of his novels. One of the reviews Douthat highlights explains exactly why:

The dominant literary style in America is careerism. This is neither a judgment nor a slur. For decades it has simply been the case that novelists, story writers, even poets have had to devote themselves to managing their careers as much as to writing their books. Institutional jockeying, posturing in profiles and Q&As, roving in-person readership cultivation, social-media fan-mongering, coming off as a good literary citizen among one’s peers—some balance of these elements is now part of every young author’s life. It’s a matter of necessity and survival, above and beyond the usual dealings with editors, agents, and Hollywood big shots. The ways writers used to mythologize themselves have either expired or been discarded as toxic. In the old gallery there were patrician men of letters (Howells, Eliot), abolitionists (Stowe), adventurers (Melville, London, Hemingway), madmen (Poe), shamans (Whitman), aristocrat expatriates (James), bohemian expatriates (Stein, Baldwin, Bishop), playboy expatriates (Fitzgerald), denizens of café society (Wharton), romantic provincials (Cather, Thomas Wolfe), small-town chroniclers (Anderson), country squires (Faulkner), suburban squires (Cheever, Updike), vagabonds (Algren), cranks (Pound), drunks (West, Agee, Berryman), dandies (Capote, Tom Wolfe), decadents (Barnes), butterfly-chasing foreigners (Nabokov), cracked aristocrats (Lowell), recluses of uncertain eccentricity (Salinger, Pynchon, DeLillo), committed radicals (Steinbeck, Rexroth, Wright, Hammett, Hellman, Paley), disabused radicals (Ellison, Mary McCarthy), radicals turned celebrities (Mailer, Sontag), activist women of letters (Morrison), alienated children of immigrants (Bellow), neo-cowboys (Cormac McCarthy), hipsters (Kerouac), junkies (Burroughs), and hippies (Ginsberg). In the end there is only the careerist, the professional writer who is first, last, and only a professional writer. The original and so far ultimate careerist in American literature was Philip Roth.

But what can the careerist hope to learn of life? What have they experienced, what have they seen, what wisdom have they gained, that is worth writing about? Intellectuals who have devoted their life to careerist climbing are crimped, stunted beings. They are fine as people, sure, but as observers of the human heart? On Twitter I once called this "21st century America's Brooklyn writer crisis." Too many of our "high literature" novelists went straight from a childhood in suburbia to an MFA to some decrepit Brooklyn apartment, having seen nothing of the world. They have spent life interacting with no one except other members of this class of professional navel-gazers. Of course these people don't matter. They have nothing to say to anyone outside of their own milquetoast milieu.

Academia is similar: just as careerist, just as cloistered, just as divorced from 'big questions.' If intellectual vitality is to return to American life, it is people outside these networks that will restore it.

EDIT (26 March 2021): A reader points to this piece by Musa al-Gharbi, on the source of intersectional ideology in the 1960s and 1970s:

While academics in the social sciences and (especially) humanities are most frequently attributed with the rise of the concepts and approaches listed above – they may be getting way too much credit. In fact, most of the people listed on the chart, who created and established these innovations, were practitioners in fields like psychiatry and law (and occasionally, activists outside the university, such as in the example of the ‘safe zone project’ or with the mainstreaming of ‘trigger warnings.’).

As Nassim Nicholas Taleb explained in The Black Swan, this is par for the course: rather than being a font of innovation themselves, academics tend to systematize, rationalize, extrapolate from various innovations that were produced by people outside their field, or outside of academia altogether.

It is at the intersections of action and thought that genuinely new ideas are born, not inside the institutions ostensibly created to generate ideas themselves. The incentives inside these institutions do not lend themselves to creativity, nor to ideas with wide application.


If this post on inter-generational intellectual history caught your interest, you might consider some of my older posts on similar problems: "On Living in the Shadow of the Boomers," "Longfellow and the Decline of American Poetry," "A Non-Western Canon: What Would a List of Humanity's Hundred Greatest Thinkers Look Like?," "Book Notes: On Strategy, a History," and "On Adding Phrases to the Language." To get updates on new posts published at the Scholar's Stage, you can join the Scholar's Stage mailing list, follow my Twitter feed, or support my writing through Patreon. Your support makes this blog possible.


23 March, 2021

On Laws and Gods


It would take gods to give men laws. 

—Jean-Jacques Rousseau (1762)

This is going to be one of those essays where I throw a lot of things on the wall and see what sticks. We are going to range today from the American Revolution to Cambodian spirits to hunter-gatherer conceptions of authority to the collapse of the Qin dynasty to Icelandic law disputes and back. Strap yourself in.

I have recently been reading Hannah Arendt’s On Revolution. To read Arendt is to gain an appreciation for the powers of a comprehensive “classical” education. Each of her books is a tour de force through the millennia. Effortlessly she leaps across time, moving fluidly from Roman law to the debates in the St. Petersburg soviet. She knows not only the great philosophers, novelists, and political theorists of this three-thousand-year span, but all of the minor writers these great writers were reading themselves. Very few thinkers today are capable of such a performance.

If there is a weakness to Arendt’s books, full as they are of untranslated quotations in French or classical Greek, it is this: she knows nothing of, and has little interest in, the world outside her own tradition. This is a bit of a shame—one of the underlying themes of Arendt’s early work is that the totalitarian upheavals of the 20th century fundamentally broke this tradition, leaving Western man with no legs to stand on. One might assume she would have more interest in looking outside it.

The problems that spring from limiting herself to the West (and a limited West at that; one finds that she systematically discounts or ignores old Germanic works—like the Icelandic sagas—even when their message is in many ways complementary to her own political ideals) are not hard to find.

Consider Arendt’s discussion of “the absolute” in On Revolution. In the period preceding the French and American revolutions (the primary topic of On Revolution), many a European monarchy descended into absolutism. Under the absolutist schema, kings were not only given absolute power, but were conceived as a transcendent source of political authority. But then we see a funny thing: after French revolutionaries deposed their absolutist monarch they proceeded to justify their own ruling power as the expression of an undifferentiated “general will”; Lenin’s merry band of regicides would claim their crimes were the laws of universal history realized. There is an easy and ironic parallel between these revolutionaries and the regimes they overthrew. The revolutionaries’ need to base their authority in some transcendent absolute, one might argue, is simply a continuation of the old absolutist claims of Tsars and Sun Kings. The source of sovereignty changes, but the old understanding of sovereignty does not.

The trouble with this argument—at least in Arendt’s view—is the American revolution:
That the problem of an absolute is bound to appear in a revolution, that it is inherent in the revolutionary event itself, we might never have known without the American Revolution. If we had to take our cue solely from the great European revolutions: from the English Civil War in the 17th century, the French Revolution in the 18th, and the October Revolution of the 20th, we might be so overwhelmed with historical evidence pointing unanimously to the interconnection of absolute monarchy followed by despotic dictatorships as to conclude that the problem of an absolute in the political realm was due exclusively to the unfortunate historical inheritance, to the absurdity of an absolute monarchy, which had placed an absolute, the person of the Prince, into the body politic, an absolute for which the revolutions then erroneously and vainly tried to find a substitute…

[But] the revolution grew out of a conflict with a limited monarchy. In the government of King and Parliament from which the colonies broke away there was no potestas legibus soluta, no absolute power absolved from laws. [Yet] the revolutions, even when they were not burdened with the inheritance of absolutism, as in the case of the American Revolution, still occurred within a tradition which was partly founded on an event in which the ‘word had become flesh’, that is, on an absolute that had appeared in historical time as a mundane reality. It was because of the mundane nature of this absolute that authority as such had become unthinkable without some sort of religious sanction, and since it was the task of the revolutions to establish a new authority, unaided by custom and precedent and the halo of immemorial time, they could not but throw into relief with unparalleled sharpness the old problem, not of law and power per se, but of the source of law which would bestow legality upon positive, posited laws, and of the origin of power which would bestow legitimacy upon the powers that be. [1]

King George III was many things, but never an absolute monarch. Yet the American revolutionaries, with no practical experience of “the absolute,” were nonetheless convinced that their revolution and the new government it created needed a transcendent foundation. Thus “the Laws of Nature and of Nature's God” talk in the Declaration of Independence. Thus all those John Adams quotes about “Religion and Morality alone.” Thus the four pages of examples Arendt digs up to show just how worried the American revolutionaries were about locating an 'absolute' source of authority that might stand behind their new regime.[2]

Arendt sees the scramble for a transcendent source of authority as one of the defining features of the modern age—one of the few features that unites what we today call “early modern Europe” with the Europe of her day. The revolutionaries and the absolutist monarchs were grasping for solutions to the same problem: the problem of how to justify political authority once the Christian God has been kicked out of politics. As she writes:
European absolutism in theory and practice, the existence of an absolute sovereign whose will is the source of both power and law, was a relatively new phenomenon; it had been the first and most conspicuous consequence of what we call secularization, namely, the emancipation of secular power from the authority of the church. Absolute monarchy, commonly and rightly credited with having prepared for the rise of the nation state, has been responsible, by the same token, for the rise of the secular realm with a dignity and splendor of its own ….

But the truth of the matter was that when the Prince had stepped into the pontifical shoes of Pope and Bishop, he did not for this reason assume the function and receive the sanctity of a Bishop or Pope; in the language of political theory, he was not a successor but a usurper, despite all the new theories about sovereignty and the divine right of Princes. Secularization, the emancipation of the secular realm from the tutelage of the church, inevitably posed the problem of how to found and constitute a new authority, without which the secular realm, far from acquiring a new dignity of its own, would have lost even the derivative importance it had held under the auspices of the Church.[3]

This is where things get interesting. For Arendt clearly does not believe this is a universal problem in human politics, but a crisis created by the very specific path that Western Europe took in the Middle Ages. All of these problems revolving around legitimacy and authority—which in a different work she labels the crisis “of the rulers and the ruled”—did not exist in the pre-Christian age.[4] The Greek polis and the Roman res publica did not face these questions, Arendt maintains, except when they devolved into tyranny. Read, for example, her description of law in the Greek polis:

So much was law thought to be something erected and laid down by men without any transcendent authority or source that pre-Socratic philosophy, when it proposed to distinguish all things by asking whether they owe their origin to men or are through themselves what they are, introduced the terms nomō and physei, by law or by nature. Thus, the order of the universe, the kosmos of natural things, was differentiated from the world of human affairs, whose order is laid down by men since it is an order of things made and done by men. This distinction, too, survives in the beginning of our tradition, where Aristotle expressly states that political science deals with things that are nomō and not physei.[5]

In the essay I just quoted Arendt is less eager to blame Europe’s transition from human law to natural law entirely on Christianity, but her antipathy towards natural law shines through nonetheless.[6] She views it as an unhealthy understanding of authority, a deviation from human sociality as it should be, a perverse historical accident that has stuck Western politics in the wrong frame for the better part of a millennium. This is characteristic for Arendt. In many ways her project can be understood as a quest to identify the wrong turns in Western thought, trace the dooms they have led us to, and excise them from our consciousness. Only after we have cast out the intellectual baggage of the last thousand years will we be able to look back on the world that came before—the world of the Romans and Greeks—and see them for the glory that they were, instead of as the shadows tradition would have them be.

This attitude—hostility towards tradition mixed in with unabashed celebration of the past—is one of the things that makes Arendt refreshing to read. She does not fit in any of the standard 21st century culture war molds. But her mold has its own limits. By restricting herself to one tradition of politics, understood as a linear journey through time, Arendt confines her investigations in too small a compass. She has access to the time series but not the cross section. Sometimes it is the cross section you want.

Take, for example, this fuss over transcendent authority. Was this really a Christian invention? Is the problem unique to a collapsing Christian world? The truth, I suspect, is the opposite of what Arendt insinuates: modernity’s drive for a transcendent source of authority is the historical norm. It is not the Christians nor the moderns who are exceptional, but the Greek and Romans who got by on such an unusual understanding of law as an artificial creation.

Let us visit a non-Western society that I am passably familiar with: the average Khmer village out in the Cambodian countryside. Four authorities have claim on Khmers’ daily conduct. The first, and that which will be most familiar to Western readers, is the Cambodian government. But its hand is weak. Many times over the last few centuries—such as during the crises of the 1830s-40s—its authority disappeared altogether. [7] The next source is the one most embodied in living, breathing individuals: the Buddhist sangha and its precepts. These two are at times intertangled. I find it interesting that when Cambodian government officers wish to explain new laws to Khmer villagers, they do so with analogy to the Buddhist code of ethics. [8] In a country like Cambodia, the concept of religious law precedes its secular counterpart.

But both government regulations and the Buddhist dhamma are ‘more honour'd in the breach than the observance.’ The actual guiding force of Cambodian life is tradition, epitomized in Khmer proverbs and in didactic poems known as chbab (ច្បាប់), on the one hand, and the dictates of the neak ta (អ្នកតា) and the other spirits that occupy the Cambodian countryside on the other. Arendt concedes the importance of tradition in making proper law abiders of otherwise unruly men, though Cambodians take the principle to an extreme that would likely surprise her. I have always found it interesting that the word “chbab” is used somewhat indiscriminately—you will hear it used to describe a didactic poem, a social code, or a government command. Once again we see government authority being piggybacked on older, less political conceptions of authority. But this makes sense really, for a country where most of its people take the following proverb as a self-evident truth: “Do not walk the twisted path. Do not take the straightest route. Follow the path that your ancestors took (ផ្លូវវៀចកុំបោះបង់ ផ្លូវណាត្រង់កុំដើរហោង ដើរដោយផ្លូវគន្លង តម្រាយចាស់បុរាណ។).”

If Arendt can meet the Cambodians half way on tradition, she has more trouble with the neak ta. There is nothing like the neak ta in her work. Khmer are for the most part sincere Buddhists, but it is a Buddhism thoroughly imbued with Hindu deities and more local powers. Some might describe these powers as the typical spirits of “animism.” Mountains, forests, rivers, lakes, villages, temples, and even individual houses all have—or perhaps all are—their own spirits.

The most important of these beings are the neak ta, or the “Grandfathers.” The neak ta are territorially bound, described by Khmer as "the masters of the water and the earth." Every village in Cambodia has one (or is one—Khmer I have talked to are not consistent on this point). Usually the spirit will choose to reside in an object convenient for veneration: rocks, bodhi trees, or statues made specially to hold the neak ta’s presence. The neak ta must be respected. When his will is known it must be obeyed. If one is to clear farmland or hunt in a forest, then the neak ta’s permission must be gained. To defy a neak ta is to court disaster. Those who do not please the neak ta will be punished with injury, illness, or failing crops. Those who gain its favor, on the other hand, will be blessed with good weather, bumper crops, healthy pregnancies, lucky lottery tickets, and the return of lost water buffalo.

The state may not have the power to regulate interpersonal behavior, but the neak ta certainly do. To pick one example: some neak ta frown on infidelity. If one member of a household is sick, the time has come to find out if any other member of the household has been unfaithful. To placate the neak ta the offender must admit to their crime publicly and make penance with the angered master of water and earth, usually through offerings. I am not aware of any anthropological or sociological study that has measured the total amount the average Khmer family offers to the neak ta per year, but it is not small. The neak ta of forests might be given alcohol (poured on the ground) before villagers begin hunting trips or patrols to keep out illegal loggers. The neak ta of a village might be given an offering before fields are planted or a new child is born. At least once a year, the entire village will gather together for a feast in honor of the neak ta. Villagers will also come together to build a proper home for the neak ta or hire a sculptor who can better represent him.[9]

Schiller’s longing for a world where “the Naiad of each mossy fountain / played and sported in its silver tide” is the wish of a man who has never feared the malice of a mountain stream.[10] The Greeks knew better: even Achilles could not fight the river. The spirits of the animist may do good, but their goodness is of their own choosing. To find divinity in stream and tree is to see in every tree and every stream a potential tyrant in waiting.

But does this apply outside the Cambodian case? One might suppose that Khmer attitudes towards spirit tyrants reflect their relationship with human ones. Cambodians are intensely conscious of hierarchies, after all; I personally have never experienced another culture that puts higher value on knowing one’s place. Perhaps the Khmer have simply built a spirit realm that mirrors their social world?

Hogwash! Or so might say Marshall Sahlins, whose essay “The Original Political Society” addresses this theme. Sahlins is not writing about Cambodians. The subjects of his essay are the forager and horticulturalist societies famous for their egalitarian social structure. How do a people who have no priests, no kings, no hereditary castes or governments, nor even any “big men,” conceive of the spirits? Cue Sahlins:
Human societies are hierarchically encompassed—typically above, below, and on earth—in a cosmic polity populated by beings of human attributes and metahuman powers who govern the people’s fate. In the form of gods, ancestors, ghosts, demons, species-masters, and the animistic beings embodied in the creatures and features of nature, these metapersons are endowed with far-reaching powers of human life and death, which, together with their control of the conditions of the cosmos, make them the all-round arbiters of human welfare and illfare. Even many loosely structured hunting and gathering peoples are thus subordinated to beings on the order of gods ruling over great territorial domains and the whole of the human population. There are kingly beings in heaven even where there are no chiefs on earth….
There are no egalitarian human societies. Even hunters are ordered and dominated by a host of metaperson powers-that-be, whose rule is punitively backed by severe sanctions. The earthly people are dependent and subordinate components of a cosmic polity. They well know and fear higher authority—and sometimes they defy it. Society both with and against the state is virtually a human universal. [11]

  Consider, for example, the Chewong of Malaysia:

Chewong are a few hundred people organized largely by kinship and subsisting largely by hunting. But they are hardly on their own. They are set within and dependent upon a greater animistic universe comprised of the persons of animals, plants, and natural features, complemented by a great variety of demonic figures, and presided over by several inclusive deities.

Though we conventionally call such creatures “spirits,” Chewong respectfully regard them as “people” (beri)—indeed, “people like us” or “our people.” The obvious problem of perspective consists in the venerable anthropological disposition to banish the so-called “supernatural” to the epiphenomenal limbo of the “ideological,” the “imaginary,” or some such background of discursive insignificance by comparison to the hard realities of social action. Thus dividing what the people join, we are unable to make the conceptual leap—the reversal of the structural gestalt—implied in Howell’s keen observation that “the human social world is intrinsically part of a wider world in which boundaries between society and cosmos are non-existent.”

“There is no meaningful separation,” she says, “between what one may term nature and culture or, indeed, between society and cosmos.” So while, on one hand, Howell characterizes the Chewong as having “no social or political hierarchy” or “leaders of any kind,” on the other, she describes a human community encompassed and dominated by potent metapersons with powers to impose rules and render justice that would be the envy of kings.

“Cosmic rules,” Howell calls them, I reckon both for their scope and for their origins. The metahuman persons who mandate these rules visit illness or other misfortune, not excluding penalty of death, on Chewong who transgress them. “I can think of no act that is rule neutral,” Howell writes; taken together, “they refer not just to selected social domains or activities, but to the performance of regular living itself." Yet though they live by the rules, Chewong have no part in their enforcement, which is the exclusive function of “whatever spirit or non-human personage is activated by the disregard of a particular rule.”

Something like a rule of law sustained by a monopoly of force. Among hunters. [12]

Sahlins continues on in this way, cycling through one example after another. Inuit hunters, New Guinean horticulturists, Australian aborigines, and Amazonian lowlanders are all considered. In all of these societies the same pattern is found. Each lives beneath the shadow of supernatural force. The spirits—Sahlins prefers the term “metahumans”—are organized in hierarchies. Some spirits rule over others; those at the highest rung have the power to order those lower down to do as they bid (say, to hurt a human who has broken the higher spirit’s rules). These spirit rulers do care about rules. They prescribe hundreds. When these groups make contact with Westerners, they are quick to appropriate the word and concept of “law” as the term best fit to describe the nature of the rules that bind them. Though nomadic and egalitarian, possessing little of their own, these societies know that the spirits do “own” possessions. This includes land: if the dominion of individual spirits is limited, it is territorially bound. But what lies within their territory is theirs. Humans are merely granted the chance to use a spirit’s possessions at the spirit’s pleasure. These possessions can include humans themselves. Titles like “father,” “mother,” “grandfather,” “grandmother,” and so forth are given to these beings, not because they are the literal ancestors of any individual human who fears them, but because humans’ shared obeisance to these rulers is what makes them all part of one community. “Family” is the natural metaphor foraging life provides for that sort of shared relationship.

All of this mirrors what I said earlier about the neak ta (and some things about them I have not said: “their name [“The Grandfathers”] suggests ancestors,” Philip Coggan notes, “but no one living is related to them and the name signifies no more than that through the neak ta the village becomes a family.”)[13] Those familiar with ancient states will see a different set of parallels. Replace offerings with taxes and you have a king. On Kings is the book Sahlins coauthored with David Graeber precisely to explore the point. They argue:
As is true of big-men or shamans, access to the metaperson authorities on behalf of others is the fundamental political value in all human societies so organized. Access on one’s own behalf is usually sorcery, but to bestow the life-powers of the god on others is to be a god among men. Human political power is the usurpation of divine power. This is also to say that claims to divine power, as manifest in ways varying from the successful hunter sharing food or the shaman curing illness, to the African king bringing rain, have been the raison d’être of political power throughout the greater part of human history….

It also follows that kings are imitations of gods rather than gods of kings—the conventional supposition that divinity is a reflex of society notwithstanding. In the course of human history, royal power has been derivative of and dependent on divine power. Indeed, no less in stateless societies than in major kingdoms, the human authorities emulate the ruling cosmic powers—if in a reduced form. Shamans have the miraculous powers of spirits, with whom, moreover, they interact. Initiated elders or clan leaders act the god, perhaps in masked form, in presiding over human and natural growth. Chiefs are greeted and treated in the same ways as gods. Kings control nature itself. What usually passes for the divinization of human rulers is better described historically as the humanization of the god. As a corollary, there are no secular authorities: human power is spiritual power—however pragmatically it is achieved. Authority over others may be acquired by superior force, inherited office, material generosity, or other means; but the power to do or be so is itself deemed that of ancestors, gods, or other external metapersons who are the sources of human vitality and mortality. [14]

Both Sahlins’ description of forager societies’ attitudes towards their supernatural rulers and his account of the earliest political authority tally well with what we understand about "early civilizations" one layer of complexity up the scale from the chieftainships and "big-men" Sahlins has made a career of studying. I am thinking here of the kind of states created by the Egyptians, Mesopotamians, Yoruba, Inka, and Maya. [15] Though separated from each other by oceans and deserts, these civilizations shared much in common with each other and with the less stratified societies Sahlins is most interested in. Here again we see a universe imbued with divinity, kings who serve as the interface between their communities’ spirit rulers and human subjects, and a plethora of laws justified by divine sanction. Remember how one of the world’s earliest recorded law codes begins:
When Anu the Sublime, King of the Anunaki, and Bel, the lord of Heaven and earth, who decreed the fate of the land, assigned to Marduk, the over-ruling son of Ea, God of righteousness, dominion over earthly man…. then Anu and Bel called by name me, Hammurabi, the exalted prince, who feared God, to bring about the rule of righteousness in the land, to destroy the wicked and the evil-doers; so that the strong should not harm the weak; so that I should rule over the black-headed people like Shamash, and enlighten the land, to further the well-being of mankind. [16]

It is difficult to say, however, that these peoples were really grounding their laws in Arendt’s “absolute.” In Bruce Trigger’s comparative study of early civilizations, he makes clear that these deities and spirits were not ‘transcendent’ in the way we usually use the term:
In early civilizations, it was assumed that the natural world, of which humans were a part, was suffused with and animated by supernatural powers. There was no distinction between the natural and the supernatural: everything was alive, conscious, and interrelated. Humans lived in a realm composed only of other beings; there were no things. Nature functioned because it was animated by powers that behaved much like human beings but were usually much stronger and therefore could determine human destiny…. People who lived in early civilizations believed that, because the natural world, which they inhabited and were a part of, was animated by spirits, it was possible for them to interact with the natural/supernatural realm in much the same way that they interacted with other human beings, especially very powerful ones…. This meant that what we identify as the natural and the supernatural were regarded not as separate from but as intimately linked to and interpenetrating the social realm.[17]

In other words, the spirits who provided suprahuman backing to a human order were not essentially different from the humans backing it. They were more powerful and knowledgeable than humans, but still fallible and limited. They could be tricked, and occasionally, killed. Nor were the upholders of the natural order especially moral beings: “There is no evidence that in any of the early civilizations… people believed that a unified moral order pervaded the universe,” and this is abundantly clear in the actions of the spirits, who were often petty, jealous, cruel, disloyal, or outright malicious.[18] The spirits were not honored for their goodness but for their potency. But this power was contingent, not transcendent. Today there is Cronus; tomorrow there will be Zeus. One falls and one rises due to nothing more than a mismatch in force and guile.

On the other hand, Arendt occasionally admits that the transcendent source of moral authority need not be that transcendent, nor especially moral. Consider her discussion of “the absolute” as an outgrowth of Hebrew law:
The whole problem of an absolute which would bestow validity upon positive, man-made laws was partly an inheritance from absolutism, which in turn had fallen heir to those long centuries when no secular realm existed in the Occident that was not ultimately rooted in the sanction given to it by the church, and when therefore secular laws were understood as the mundane expression of divinely ordained laws. This however is only part of the story…. What mattered was that—the enormous influence of Roman jurisprudence and legislation upon the development of medieval as well as modern legal systems and interpretations notwithstanding—the laws themselves were understood to be commandments, but they were construed in accordance with the voice of God, who tells men: thou shalt not. Such commandments obviously could not be binding without a higher, religious sanction. Only to the extent that we understand by law a commandment to which men owe obedience regardless of their consent and mutual agreements, does the law require a transcendent source of authority for its validity, that is, an origin which must be beyond human power. [19]

Now even this giver of the “thou shalts” was first depicted and understood as a sovereign not altogether different from those we have discussed already: one deity among many, embodied, distinguished from other divinities not by his goodness but by his power. [20] Neither Yahweh, lord of storms, nor any of the examples we have discussed so far seem “absolute” by modern standards, but they are sufficient for the definition Arendt gives here of a “source of authority” whose “origin must be beyond human power.”

But perhaps it is no accident that modern standards of the absolute emerged. The more thinking a civilization gets to doing, the less appealing rule by jealous spirit lords seems to be. The partial, petty Zeus of the Iliad becomes the steady author of Stoic law and logos. The champion deity of the Israelite tribes is elevated to an omniscient and omnipotent origin of all creation. This shift away from arbitrary rule by self-interested divinity towards a grander moral order upheld by universal, impartial power also occurred in imperial China.

The Chinese case is interesting because it is an example of a high civilization whose journey follows a similar path and echoes similar themes to the West, despite having no knowledge of Israelite commandments or Greek natural philosophy. When the Chinese emerge onto the historical stage 3,000 years ago, we find them ruled by kings who believe themselves to be the link between their human community and a pantheon of deities whose favor or disfavor must be gained through ritual and sacrifice. Over time the Chinese understanding of the suprahuman foundations of human order would grow less anthropomorphized and partial. The Zhou would claim that their authority was grounded in the choice of Heaven itself— a conception of transcendent support for earthly order badly tarnished by the five centuries of war that followed the Zhou's collapse.[20] While later dynasts “shared [this] vision of a dynamic cosmic order in which the superhuman domain and human world were in dynamic correspondence” and believed that Heaven and Earth “supervised and guided human affairs” by endowing China’s rulers “with the cosmic mission of bringing harmony and prosperity to the human world and transforming the hearts and minds of his subjects,” the nature of this cosmic order was increasingly understood in terms similar to the “laws of nature and of nature's god” proposed by the Western tradition.[21]

The solution that Chinese thinkers developed both to explain the disorder that followed the collapse of Zhou authority and to reestablish a link between the natural order and the human one when China was again in the hands of a single, stable dynasty is known as “correlative cosmology.” The exact details of this cosmology would differ over time, but Peter Bol's gloss does a good job of capturing its main tenets:
The theory supposed that all things and processes belonged to specific categories as a result of the qualities of the energy-matter (qi) that constituted them and that things of the same category "resonated" with one another (for example, strings tuned to the same note on different instruments all resonate when one is plucked). Cosmic resonance theory held further that the categories into which all things and activities fell were valid for both the natural and the human order.

It was this last proposition that had great political significance: since the natural order was of itself constant, predictable, and harmonious, any aberration must be the result of human actions. And since the emperor was the central figure in the human order and had the power to orchestrate his own and others' actions, it followed that when those in government went against the proper order of things, heaven-and-earth responded with unusual events (which could be taken as portents of even greater harm to come) or even with full-fledged natural calamities, such as floods and earthquakes. It was incumbent on the emperor to adhere to the rules of the natural order (which the organization of antiquity was assumed to reflect) and to ensure that all society conformed.

If properly managed, the human realm would be perfectly coterminous with nature, and all creation would spontaneously function in a harmonious manner. In effect, this made the ruler, the huang di (the term usually translated as "emperor" but more literally the "august thearch"), the master of the universe, answerable only to the cosmic order.[22]

This understanding of the divine superstructure supporting human authority lasted just under a thousand years before falling victim to the crises of the Tang dynasty. Then a combination of cataclysmic political upheaval and new theories of metaphysics and physics made the old correlative understanding of the universe untenable. For a century or so Heaven no longer spoke to the human world; nature did not resonate with its movements. Much as in our own time, the intellectuals of this age saw the cosmos as a cold and uncaring witness to human folly.[23] But this was a temporary interregnum: by the Song dynasty a new understanding of natural order had arisen. This allowed human law to be understood as an extension of what Chinese philosophers were calling heaven's reason (tianli 天理):
[In the early Song dynasty there was] a transformation of the political system comparable to that in Europe between the sixteenth and early eighteenth centuries. That is, the early imperial vision of a powerful ruler who commanded the populace and kept nature on course, a ruler who mediated between heaven and man and was the center around which all revolved, whose rituals had the power to move heaven and humanity, lost credibility. Instead, the ruler became a more human figure, who was expected to cultivate himself through learning in the style of the literati and whose ability to maintain the support of the populace depended on his success in managing the government so that it served the common good….
The way of heaven-and-earth was simply another name for li, the necessary principles of a thing's coherent operation. All things, being products of heaven-and-earth, embody li, and all things are inherently coherent. And tianli ("heavenly li"), the very coherence of the universe as a whole and all things within it, is equally endowed in all human beings and can provide an innate moral sensibility. The mind mediates a person's perception of the li in things and of one's own nature. The sages, being born with minds of the purest qi, fully and completely perceived the li of things and were in accord with tianli when they responded to events. Thus, they created civilization incrementally in response to the human condition of the moment. Civilization was not in any sense an artificial construct, since, thanks to the sages, it was in full accord with li. [24]

Thus in the latter half of imperial history, justice was understood as the “alignment of heavenly reason, state law, and human relations,”[25] and it was upon cosmic reason that earthly authority was based. Though this understanding would be challenged by various intellectuals over the next few centuries, it lasted as the metaphysical foundation for imperial order right up to the end of the Qing, China's final imperial dynasty.

This superficial survey of Chinese views of natural law demonstrates that Arendt’s intuition about the genealogy of absolute politics is not correct. All over the globe, in all states of human society, we see rules and laws resting on sources “beyond human power.” In the case of civilizations with long intellectual traditions, like that of imperial China, the pattern broadly follows that of the West, where the “source beyond human power” becomes increasingly abstract and universal. This is not an artifact of the Great Schism, nor even a unique inheritance of the Abrahamic tradition. Where there are laws, there are humans who will try to ground them in a great beyond.

It is worth asking why this must be so. Here I find the one Chinese dynasty that never claimed a divine mandate to be of interest. Qin Shi Huang was the first ruler in China's "imperial" history, the dynast who picked up all of China's warring pieces and welded them together into one empire. He may be the sole emperor in Chinese history to have never claimed the title “Son of Heaven” (tianzi 天子). In none of the steles that he erected to celebrate his accomplishments do we find any reference to Heaven's mandate or divine support. Qin Shi Huang seemed to think that his accomplishments—ending centuries of war by forcing the entire civilized world into his new empire—were self-legitimizing. The early Qin elite earnestly believed that their state had embarked on a revolutionary political project, one which eclipsed all precedents and could not rely on previous models of political authority. Through the force of their arms and the wisdom of their laws they—not Heaven!—had built a new order for the ages. [25]

Their new order lasted for 15 years.

Historians have described the collapse of the Soviet Union as the byproduct of a political system built on an "absolute" its people no longer believed in.[26] With the Qin, it seems, there never was an absolute to believe in. For both the Qin and the Soviets there were of course intervening economic and political causes of collapse, but I find it hard to argue that belief had nothing to do with their misfortunes. Preceding every fallen ancien regime is a period of disillusionment with the myths that held it up. There is a rebellious twitch in every man humiliated by naked power. We gladly bow to men only when they have a god behind them.

But that still leaves open the question of those rare societies that operate on fully human theories of authority not grounded in the beyond. The Greeks were not the only exception to the rule. When Njáll Þorgeirsson warns his Icelandic countrymen “with law our land shall rise, but with lawlessness laid waste,” it is not to the consequences of divine displeasure nor to the dictates of natural law that he appeals.[27] What else could he say? This was a people without Leviathan. The goði of Iceland met together once every two years to review and amend the laws of their society.[28] They knew these laws were made by men. Lacking kings, governors, or even a government, the laws of the Icelandic Free State could only be enforced with the consent of the community they governed. There was no distinction here between the rulers and the ruled: the laws of medieval Iceland were made by the ruled for the ruled.

When reading the Icelandic Sagas one is reminded of Alexis de Tocqueville's reflections on the nature of law and authority in antebellum America:
There are countries where a power in a way external to the social body acts on it and forces it to march on a certain track. There are others where force is divided, placed at once in society and outside it.

Nothing like this is seen in the United States; there society acts by itself and on itself. Power exists only within its bosom; almost no one is encountered who dares to conceive and above all to express the idea of seeking it elsewhere. The people participate in the drafting of laws by the choice of the legislators, in their application, by the election of the agents of the executive power; one can say that they govern themselves, so weak and restricted is the part left to the administration, so much does the latter feel its popular origin and obey the power from which it emanates. The people reign over the American political world as does God over the universe. They are the cause and the end of all things; everything comes out of them and everything is absorbed into them.…  [The American] sees in the public fortune his own, and he works for the good of the state not only out of duty or out of pride, but I would almost dare say out of cupidity.
One does not need to study the institutions and history of Americans to know the truth of what precedes; mores advertise it enough to you. The American, taking part in all that is done in this country, believes himself interested in defending all that is criticized there.[29]

Tocqueville, as worried about the source of proper authority in a world stripped of the old verities as Arendt, hoped the Americans' solution to the problem would save his French homeland. This hope was misplaced. It could not even save America. In time the Americans would face their own crisis of the "rulers and the ruled." The townships of the antebellum and colonial eras preserved the spirit of the Greek polis, but that spirit was restricted by the new constitution to a certain level of American life. Arendt honors the U.S. constitution for its “capacity to guarantee constitutional liberties,” but condemns it for failing to “enable the citizen to become a ‘participator’ in public affairs.” Under the constitution's federal order, “The most the citizen can hope for is to be represented.” [30]

In Tocqueville's day this mattered little, for the American Leviathan was small in size, limited in scope, and the American people's loyalties lay close to home. As the American population grew larger, more urbanized, and less regional, as the federal government increased in strength, and as international affairs came to dominate American politics, the "public freedom" of popular rule dwindled away. Public freedom has many enemies, but few harder to slay than size. The Icelandic free state, the Greek polis, and the American townships practiced politics at a human scale. When politics grows beyond the human scale, polities must nail themselves to an authority beyond the human. In America's case, this was done through transcendent notions like "We the People" and "nature's law and nature's God."

But what happens if the people no longer believe they are a "We"? What occurs when they find nothing natural in their laws? What follows when they lose their faith in nature's God? A past America, whose sense of order was grounded in the everyday experience of order-building, might weather such moments without too much worry. But Americans no longer build. Their laws are not theirs, but someone else's. They have grown too large and unbalanced for it to be otherwise. They can no longer escape the problem of the ruler and the ruled.

 That problem looms largest when the transcendent ties connecting the ruled, their rulers, and the laws that bind them begin to weaken. Such a nation is as Tocqueville described:

Sometimes a moment arrives in the lives of peoples when old customs are changed, mores destroyed, beliefs shaken, the prestige of memories faded away, and when, however, enlightenment remains incomplete and political rights are badly secured or restricted. Then men no longer perceive the native country except in a weak and doubtful light; they no longer place it in the soil, which has become a lifeless land in their eyes, nor in the usages of their ancestors, which they have been taught to regard as a yoke; nor in the religion which they doubt; nor in the laws they do not make, nor in the legislator whom they fear and scorn. They therefore see it nowhere, no more with its own features than with any other, and they withdraw into a narrow and unenlightened selfishness.[31]

Beneath Tocqueville's veiled speech is a finger pointed straight at his own country. The moment he speaks of had already occurred, in the last years of the tottering ancien regime. Such moments do not long last. Laws will be had. Where laws are had, gods will be found to guide them. Let us hope our coming gods will treat us more kindly than the gods of the last upheaval treated the souls entrusted to them.


If you would like to read some of my other trips through the minds of the human past, you might find the posts "Vengeance as Justice," "On Cultures that Build," "The Growth Revolution," "Islamic Terrorism in Context," "A Short History of Han-Xiongnu Relations," and "Three Centuries of American Political Culture" of interest. To get updates on new posts published at the Scholar's Stage, you can join the Scholar's Stage mailing list, follow my twitter feed, or support my writing through Patreon. Your support makes this blog possible.


 [1] Hannah Arendt, On Revolution (New York: Penguin, 2006 [or. ed 1963]), 149, 148, 151.

[2] ibid., 177-78, 182-184.

[3] ibid., 151.

 [4]  Hannah Arendt, “The Great Tradition: II. Ruling and Being Ruled,” Social Research, Hannah Arendt’s Centenary: Political and Philosophical Perspectives, Part II, 74, no. 4 (Winter 2007): 941–54.

 [5]  Hannah Arendt, “The Great Tradition I. Law and Power,” Social Research, Hannah Arendt’s Centenary: Political and Philosophical Perspectives, Part I, 73, no. 3 (Fall 2007), 716.

[6] ibid., 718.

[7] A good examination of how failing royal power in the 18th and 19th centuries might have affected Cambodian conceptions of authority is found in David Chandler, "Normative Poems (chab) and Pre-colonial Cambodian Society," in Facing the Cambodian Past (Chiang Mai: Silkworm Books, 1996), 45-61.

[8] On p. 89 of Courtney Work's “Chthonic Sovereigns? ‘Neak Ta’ in a Cambodian Village,” Asia Pacific Journal of Anthropology 20, no. 1 (2019) we find this charming vignette:

Once the blessings and offerings [to the neak ta] were complete, the village head took the microphone. This was unprecedented, as he rarely took a leading role, even at village meetings. He read rather stiffly, from a prepared document that began by explicitly comparing the government with Buddhism. ‘The government’, he said, ‘has five precepts, just like Buddhism’.

The Buddhist precepts are the moral codes. Regular laypersons are expected to adhere to five. These are: Do not kill, do not steal, do not engage in sexual misconduct, do not lie, and do not consume intoxicating substances. More serious practitioners, like achar and elders, hold ten precepts, and monks hold 277. Few lay people actually adhere to these, and only elders are expected to observe the precepts.

The village head went on to enumerate the government’s precepts: no stealing, no trafficking in addictive drugs, no domestic violence, no trafficking in women or children, and no lawless behaviour. He continued, explaining how the government provides safety and services to the people, but that with the development of new roads villagers will be more at risk from strangers. The village police, he said, do not patrol in the evenings and so the people have to join together to protect each other and to warn the police if strangers are in the village.

I find it interesting that when the Khmer Rouge imposed its rule upon the countryside it also resorted to supranatural rule as the ready metaphor for its own authority. Villagers were told that "the organization" (as the Khmer Rouge called themselves) were "the masters of the water and the earth." They were, in essence, claiming the authority of the neak ta as their own.

See Ian Harris, Cambodian Buddhism: History and Practice (Honolulu: University of Hawaii Press, 2005), 176.

 [9] This portrait of the neak ta is informed by basically every article that has been written on the neak ta in English (there is an entire book on them in French, but alas, I cannot read it). But the most useful, and from which most of these specific examples are drawn, are Philip Coggan, Spirit Worlds: Cambodia, The Buddha, and the Naga (Oxford: John Beaufoy Publishing, 2015), 27-41; Courtney Work, “Chthonic Sovereigns?”: 74–92 and “‘There Was so Much’: Violence, Spirits, and States of Extraction in Cambodia,” Journal of Religion and Violence 6, no. 1 (2018), 55-72; Ann Yvonne Guillou, “An Alternative Memory of the Khmer Rouge Genocide: The Dead of the Mass Graves and the Land Guardian Spirits [Neak Ta],” South East Asia Research, Life after Collective Death in South East Asia Research: Part 1 – The (Re-)Fabrication of Social Bonds, 20, no. 2 (2012): 207–26.

[10] Friedrich Schiller, "The Gods of Greece," translated by Francis Levenson Gowan, in Translations from the German and Original Poems (London: Thomas Davison Whitefriars, 1824), p. 42.

[11] David Graeber and Marshall Sahlins, On Kings (Chicago: Hau Books, 2018),

[12] Marshall Sahlins, “The Original Political Society,” Hau: Journal of Ethnographic Theory 7, no. 2 (2016), no pg numbers.

[13] Coggan, Spirit Worlds, 27.

[14] Graeber and Sahlins, On Kings,

 [15] Bruce Trigger, Understanding Early Civilizations: A Comparative Study (Cambridge: Cambridge University Press, 2003), pp. 70-74, 79-88, 409-433, 436-456, 467-475. The main difference between those early civilizations and the less complex societies that preceded them is that these peoples believed that “the gods depended no less on the material support of human beings” (through sacrifice and offerings) than “society as a whole depended on the strength and goodwill of the gods.” See p. 484.

[16]  “The Code of Hammurabi,” translated by L.W. King, available at the Avalon Project.

[17] Trigger, Understanding Early Civilizations, 412. Sahlins and Courtney Work make similar points about the spirits of foraging societies and the neak ta, respectively. See notes 12 and 9.

[18] Trigger, Understanding Early Civilizations, 436.

[19] Arendt, On Revolution, 181.

[20] For a very concise overview of Yahweh as depicted in the earliest materials, see Jo Ann Hackett, "'There Was No King In Israel': The Era of the Judges," in Michael Coogan, ed., The Oxford History of the Biblical World (Oxford: Oxford University Press, 2001), 156-161. For a larger account, see Thomas Romer, The Invention of God (Cambridge: Harvard University Press, 2015) and Mark Smith, The Early History of God: Yahweh and the Other Deities of Ancient Israel, 2nd ed. (Grand Rapids, MI: Eerdmans, 2002).

[20] On the origins of the concept of "Heaven's Mandate" see David Pankenier, “The Cosmo-Political Background of Heaven’s Mandate,” Early China 20 (1995), pp. 121-176; Edward Shaughnessy, “Western Zhou History,” in Cambridge History of Ancient China, edited by Michael Loewe and Edward L. Shaughnessy (Cambridge, UK: Cambridge University Press, 1999), 292–93, 314–17. On the difficulties posed to this theory by the collapse of the Zhou and the various attempts by Chinese thinkers to overcome it, see Yuri Pines, Envisioning Eternal Empire: Chinese Political Thought of the Warring States Era (Honolulu: University of Hawaii Press, 2009), and “Contested Sovereignty: Heaven, the Monarch, the People, and the Intellectuals in Traditional China,” in The Scaffold of Sovereignty: Global and Aesthetic Perspectives on the History of a Concept, ed. Zvi Ben-Dor Benite, Stefanos Geroulanos, and Nicole Jerr (New York: Columbia University Press, 2017), 80-101.

[21] Jiang Yonglin, The Mandate of Heaven and the Great Ming Code (Seattle: University of Washington Press, 2011), 33.

[22] Peter Bol, Neo-Confucianism in History (Cambridge: Harvard University Press, 2008), 66, 155.

[23] Ibid., 113, 69. On the failing faith in old models of heaven see also Fang Li-Tian, "Liu Zongyuan and Liu Yuxi Theories of Heaven and Man" in Tang Yi-Jie, Li Zhen, George F. McLean, eds., Man and Nature: The Chinese Tradition and the Future (Washington, DC: The Council for Research in Values and Philosophy, 1989), 25-32; Michael Fuller, An Introduction to Chinese Poetry: From the Canon of Poetry to the Lyrics of the Song Dynasty (Cambridge: Harvard University Press, 2018), ch. 8; Peter Bol, "This Culture of Ours": Intellectual Transitions in T'ang and Sung China (Stanford: Stanford University Press, 1994).

[24] Xu Xiaoqun, Heaven Has Eyes: A History of Chinese Law (Oxford: Oxford University Press, 2020), 11.

[25] Yuri Pines, “The Messianic Emperor: A New Look at Qin’s Place in China’s History” in Yuri Pines, Lothar von Falkenhausen, Gideon Shelach and Robin D.S. Yates, eds., Birth of an Empire: The State of Qin revisited (Berkeley: University of California Press., 2014), especially pp. 268-269.

[26] Though they have sharp disagreements with each other, this is essentially the argument of Yuri Slezkine, House of Government: A Saga of the Russian Revolution (Princeton: Princeton University Press, 2017); Andrzej Walicki, Marxism and the Leap to the Kingdom of Freedom: The Rise and Fall of the Communist Utopia (Stanford: Stanford University Press, 1997); Martin Malia, “A Fatal Logic,” The National Interest 31 (1993): 80-90.

[27] Saga of Burnt Njal (ch. 70).

[28] Jesse L. Byock, Viking Age Iceland (New York: Penguin Books, 2001), ch. 9 (Kindle Locations 2660-2670).

[29] Alexis de Tocqueville, Democracy in America, translated by Harvey Mansfield (Chicago: University of Chicago Press, 2000), 55, 227.

[30] Arendt, On Revolution, 260.

[31] Tocqueville, Democracy, 225.