Gordon Wood Reconsidered

Much could and should be said in praise of Steven F. Hayward’s careful consideration of Gordon Wood’s historical oeuvre, but it is also true that in certain regards Hayward misses the mark (“The Liberal Republicanism of Gordon Wood,” Winter 2006/07). He is right, for example, to highlight the use to which a great many law professors have put the opening and closing chapters of Wood’s first book, The Creation of the American Republic; he is admirably sensitive to the manner in which Wood’s later work, The Radicalism of the American Revolution, signals a change in perspective on his part; and he is correct in noting that Wood thinks the legal Left’s appropriation of his work utterly wrong-headed. But he misses the chief reason why Wood rejects their overtures. Wood’s original view—jettisoned, I believe, in his second book—was that the Revolution was initially anti-capitalist, and it would be correct to call his stance at that time neo-Beardian. But, even then, he refrained from making the jump from the putative communitarianism of the revolutionary generation to the notion that they would have embraced the administrative state subsequently championed by Mark Seidenfeld, Morton Horwitz, Cass Sunstein, and the like. These legal scholars may entertain the notion that there is something communitarian about the modern bureaucratic state. Wood understands and has always understood that this presumption is preposterous. Whether the revolutionary generation was ever genuinely communitarian in its orientation can be debated (and has been). No one can argue, however, that they were friendly to the state and its functionaries as such, and Wood’s account makes it perfectly clear that from the start, in political matters, jealousy was their watchword.

In similar fashion, Hayward is on the mark when he criticizes Wood’s understanding of the discussion of virtue that took place in the founding period, when he attacks him for neglecting the significance of the Declaration of Independence, and when he faults his assault on the very idea of original intent. In general, in his work, Wood fails to weigh the relative significance of pronouncements, treating everything said, whether public or private, whether official or partisan, as equally significant. He is loath to contemplate the possibility that some of the arguments presented in the revolutionary period were profoundly insightful, had a decisive impact, and might even have permanent significance. This failing derives from the species of intellectual historicism which links his work with that of his teacher, Bernard Bailyn, and which renders it almost indistinguishable from the linguistic historicism of J.G.A. Pocock, Quentin Skinner, and the other adherents of the so-called Cambridge School. Those who, in the name of scholarly objectivity, resolutely treat argument as ideology are destined to discount the significance of argument and to misapprehend the nature of politics.

But if Hayward is correct in most of what he says, he is wrong in one crucial particular. Wood’s “enduring appeal” does not derive from the discussion of “republicanism” in the early chapters of The Creation of the American Republic. Nor does it owe anything to the discussion of “liberalism” in the final chapters of that work. Among historians, at least, the argument with which Wood framed that book is regarded as passé—and rightly so. For what remains valuable—and perhaps invaluable—are the chapters in between, which form the core of the book. These deal with the experiments in the constitution of republican government conducted within the states in the period stretching from 1776 to 1787, with the debates that took place in this period concerning the political architecture proper for a modern republic, with the calling of the Federal Convention, and with the framing and ratification of the Constitution. These chapters, which Hayward does not even mention, are splendid; and, in the book concealed within his book, one can see Wood treating statesmanship and human reflection at the highest level with something approaching the respect that it deserves. When faced with genuine grandeur, to his very great credit, he simply cannot help himself.

There is one other point that should be made. Bernard Bailyn, Gordon Wood, Edmund Morgan, John Murrin, Jack Greene—the historians of yesteryear who illuminated the American Revolution and the American Founding—are all now retired or semi-retired, and with one exception have not been replaced. Ten years ago, I was a visiting professor at Yale University. In order to teach a lecture course on the American Revolution, I had to submit a syllabus and propose a brand new course: none on this subject, I was told, had been taught in the decade that had passed since Edmund Morgan retired. On campus after campus, the subject has been quietly dropped; and where it has not been dropped, it soon will be, as the older generation passes from the scene, for it has become almost as unfashionable as military and diplomatic history. The only really important book published in this field since the beginning of the new millennium—Max Edling’s A Revolution in Favor of Government—was started in England and finished on the European continent by a Swede. Whatever defects we may be inclined to attribute to Gordon Wood and the other titans of the generation now retired, this much must be said: they had a better grasp of the central importance of statesmanship—especially statesmanship at the highest level—than do those who now strut and fret upon the academic stage, ranting piously and pompously about gender, race, and class. One need only survey recent issues of The William and Mary Quarterly, the principal journal devoted to early American history, to see just how bad things have become.

Paul A. Rahe
University of Tulsa
Tulsa, OK


Steven F. Hayward replies:

I do not have much of a quarrel, nor do I want one, with the one person who has covered the entire space-time continuum of republicanism far beyond Gordon Wood’s compass—especially since Professor Rahe affirms my chief criticism concerning Wood’s intellectual historicism. About Rahe’s specific complaint that I have misattributed the best reason for Wood’s enduring appeal, I want merely to reiterate that while Wood’s thesis may be “passé” among academic historians, it is still au courant among intellectuals in other fields, especially law and the “civic republican revival” advanced by such political thinkers as Michael Sandel. I accept Rahe’s criticism that I neglected to give due weight and consideration to Creation’s fine middle chapters. One of the things that rankles about the use and abuse of Wood in other fields is that most of the time it seems as though none of the people who appeal to Wood has actually read with care any of his chapters—raising the ironic specter that Wood is being treated with the same intellectual historicism with which he himself approached the American Founding.

* * *

Shakespeare and Postmodernism

Paul Cantor’s cover essay, “Playwright of the Globe,” contrasting real people’s continuing appreciation of Shakespeare with the abuse meted out by the suicide bombers of the literary criticism establishment, is especially welcome (Winter 2006/07). Ever the gentleman, Cantor leaves open the possibility that they are simply mistaken about a complex matter (having “a false conception of culture”), whereas I regard them as either fools or knaves, or most likely, both.

The point I wish to make concerns the implicit or explicit historicism of these pseudo-scholars whose stock-in-trade is debunking. As Cantor puts it, “the idea [is] that all thought is not simply conditioned by historical circumstances but is actually determined by them,” adding that “in its strict and only logically consistent form, historicism must—and usually does—insist on the uniqueness of all cultures.” Actually, the logic of its premises terminates in complete solipsism, for no two persons of any given time and place fully share identical experiences, attitudes, feelings, and education, such as would bind them together in a mutual but exclusive understanding of each other and the world. Simply imagine a suitable variety of “Elizabethans”—Warwickshire dairy maid, Thames boatman, old Oxford don, young Cornish tin miner, rich London mercer, poor London trollop, Sheriff of Nottingham, Lord Chancellor, Queen’s Lady, the Queen herself-all may speak English of a sort, but if the premises that supposedly make “cultures” unique and in principle unintelligible to each other were valid, each person would be, following Plato’s famous analogy, a “cave” unto himself, with no exit (such as that provided by dialogue with others, especially with the best and brightest of other places and times).

Thus, if Lit-Crit’s High Priests understood the implications of their historical-cultural relativism, and actually took them seriously—as the fools do not the former, and the knaves do not the latter—they would have to admit that anything they claimed about Shakespeare’s views, about the times in which he lived, and any supposed relationship between the two, would necessarily be intellectually worthless. For with their being as hermetically sealed in their own corner of space and time, as they allege Shakespeare to have been in his, these would be things they could not possibly know anything about. Their understanding would necessarily be, virtually by their own definition, a misunderstanding, though of indeterminable relation to the historical reality about which they presume to speak—pointlessly, then, one must add. Ironic enough that they profess to be experts on something which, were their belittling treatments to be believed, would not be worthy of expertise. When their radically relativistic theories of culture and knowledge are applied reflexively—as intellectual probity demands—all their claims dissolve into incoherent mush. Thus it is with all radically relativistic epistemologies. A logical nicety too subtle, apparently, for those who find the postmodern mindset so satisfying to their vanity.

Leon Harold Craig
University of Alberta
Edmonton, Alberta
Canada

* * *

Getting Original Intent Right

The Claremont Review of Books (Winter 2006/07), under the heading of “The Disputed Question,” published my critique of Professor Michael Uhlmann’s review, in a previous issue, of Johnathan O’Neill’s book Originalism in American Law and Politics. I objected to Uhlmann’s indiscriminate praise (as I thought) of a book that celebrated an “original intent” jurisprudence that was utterly alien to those who framed and ratified the original Constitution. The Constitution is in crisis because—as Professor Uhlmann and I agree—constitutional law has become almost entirely estranged from the Constitution. Liberal activist justices have been encouraged by the idea of a “living Constitution” to say what the law ought to be rather than what it is. The late Chief Justice William Rehnquist, and after him Justice Antonin Scalia, Judge Robert Bork, and many of the legal scholars considered conservative, have adopted what they call a jurisprudence of “original intent” as a counter to the “living Constitution.”

The meaning of genuine original intent was, however, set forth by Abraham Lincoln, in his commentary on the verse from Proverbs, “A word fitly spoken is like apples of gold in pictures of silver.”

The assertion of that principle (‘that all men are created equal’) was the word ‘fitly spoken’ that has proved an ‘apple of gold’ to us. The Union and the Constitution are the pictures of silver, subsequently framed around it. The picture was made, not to conceal or destroy the apple, but to adorn or preserve it. The picture was made for the apple—not the apple for the picture.

As Lincoln said, the relationship of the Declaration and the Constitution was one of ends and means. To deny the integrity of the two together is to leave the Constitution open to whatever ends or purposes the Court might wish. This is the essence of the doctrine of the “living Constitution.”

What Prof. Uhlmann and I do not agree upon is why liberal judicial activism (the “living Constitution”) has been so successful, and why the appeal by conservative justices to a jurisprudence of original intent has been such a failure. His admission that the conservatives’ rejection of the Declaration undercuts the metaphysical ground of their originalism is altogether inadequate. In fact, their rejection of the Declaration undercuts not only the metaphysical but the moral foundation of the Constitution. Without the distinction between the principles of the Constitution and the compromises of the Constitution no moral case for originalism is possible, nor is any case possible against the living constitution. Unless conservative jurists can break out of this box, and restore to constitutional interpretation its Lincolnian integrity, the Union Lincoln saved will face a disintegration from within, at least as deadly as that represented by Dred Scott, secession, and slavery.

Harry V. Jaffa
Claremont, CA

* * *

The Civil War’s Lost Cause

Professor Glen Thurow wants to know why in my book I do not come down clearly on “the cause” of the Civil War, exonerating the Union and excoriating the Confederacy (“God of Battles,” Winter 2006/07). But this is an issue that poses a real ethical question without criteria for definitive judgments. In civil wars, where significant numbers of belligerent citizens align themselves under warring banners, with substantial territories and competent leadership on both sides, it often becomes difficult to discern with finality who is the unjust aggressor and who the just defender.

Who is right and who is wrong in Iraq’s civil war? The Shia? The Sunnis? Did Norway have a right to secede from Sweden? Does modern-day Chechnya have a right to secede from Russia? These are moral issues that recur in civil wars throughout the years, with no definitive answers. Did the South have a right to secede from the North? Yes. And no. Only a civil war would determine the answer.

Although he never takes a stand, I suspect Thurow thinks otherwise. He would like me to judge the Confederate case for a defensive war as unjust. But to my thinking, this argument is incontrovertible only if one ascribes a transcendent—even mystical—holiness to the Union as “God’s New Israel.” Northern evangelical Republicans certainly had no problem with this premise during the war and used it frequently to justify their cause (as did the South, in virtually identical language, for their Confederacy). But I myself find such a perspective absurd, if not downright idolatrous. Lying at the center of my argument, alongside just war judgments, is an account of how the war was sacralized into the occasion for a full-blown nondenominational civil religion, existing alongside of and equal in power to Christian and Jewish denominations.

Thurow apparently has no problem with any instances of unjust conduct in the Civil War (at least by the North). War, as General William Tecumseh Sherman observed, is hell. So get over it. What happened happened and the only moral issue of moment was the preservation of the Union. And here, given America’s sacred destiny, no step was too draconian or immoral if it led to the triumph of the federal nation-state.

Where Thurow (following Lincoln) would like me to accept the war’s conduct, and embrace “the cause,” I can’t. In his view this renders me a softie who would prefer a “genteel” war to total war. Well, so be it. I would rather have a genteel war with honor (and lose it if necessary) than a dirty war fought in a deliberately unjust manner.

In the end, Prof. Thurow’s critique is a throwback to the old triumphalist good guy/bad guy histories of the Civil War, in which the South is so utterly wrong because of slavery, and the North so utterly right because of emancipation (ignoring Northern racism and Jim Crow in the process), that hard questions don’t have to be addressed. And it is precisely this mentality that has empowered America to fight endless subsequent wars in the name of “freedom,” without ever really engaging questions of just conduct and protection of innocents. Today, when our country is still bemused by its belief that 9/11 made and makes any and all anti-terrorist actions appropriate, one cannot help asking: if a leader as humane and noble as Lincoln could pay such small heed to the still, small voice of conscience, what can we expect of our present leadership? Pretty much what we are getting.

Harry S. Stout
Yale University
New Haven, CT


Glen E. Thurow replies:

It was not I, but Professor Stout, who claimed to be able to show us whether the Civil War was a just war on the part of either the North or the South. I did not argue that he should have exonerated the Union and excoriated the Confederacy, but rather that he ought to have engaged the arguments actually made by participants in the struggle. Instead of examining the causes of the American Civil War as understood at the time, Stout (as he makes clear in his letter) assumes from the beginning that it is impossible to determine who has justice on his side in any civil war. Hence he does not need to look at what actually led to this war, and doesn’t, since he already knows that there could not possibly be a just cause involved. On the crucial question of whether the South had a right to secede, he says, “only a civil war would determine the answer.” Might makes right?

In his letter he makes the astounding assertion that one might be able to find the Southern cause unjust only if one ascribes mystical holiness to the Union. Why does he not look at what people of the time actually argued? In his First Inaugural Address, Abraham Lincoln makes a very powerful argument, not that the seceding states were violating the holiness of the Union, but that their secession undermined the rule of law and was based on the principle of anarchy. The president might have been wrong, but if Stout wishes to write a serious moral analysis of the Civil War, he needs to confront Lincoln’s argument.

Stout’s confusions are nowhere more evident than in his judgment of Lincoln. On the one hand his Lincoln, by issuing the Emancipation Proclamation, not only turned the war into a semi-religious crusade resulting in the unjust deaths of tens of thousands, but forged a nation that has acted unjustly for 150 years afterwards. Yet, Stout says, the president was “humane and noble.” He cannot have it both ways.

* * *

The G.I. Bill

Thomas Bruscino begins his review of my book on the G.I. Bill with a cogent summary, but then he proceeds to dismiss my central finding: that veterans who benefited from the Bill’s education and training provisions became particularly active citizens in postwar America (“No Soldier Left Behind,” Winter 2006/07). He insists that these high rates of civic involvement must have emanated not from the G.I. Bill but rather from the military service that preceded it. He criticizes me, therefore, for not comparing veterans with non-veterans.

Bruscino may not be aware that other scholars (most notably, Kent Jennings and Gregory Markus) have already conducted such analyses. Contrary to what he expects, they have found that a record of military service does not, in itself, lead to heightened civic involvement. While many aspects of military service prepare servicemembers for subsequent civic involvement, as Bruscino and I both agree, evidently other aspects—perhaps associated with serving in combat—counteract these positive forces and actually reduce individuals’ likelihood of participating.

My purpose was to compare individuals who were alike in all the regards we can measure but who differed in terms of whether or not they used the G.I. Bill for education or training. On that basis, I was able to isolate what turn out to be the striking and largely unintended civic effects of the policy. Through statistical analysis of hundreds of World War II veterans whom I surveyed, I found that, in comparing two non-black males with the same socio-economic background and the same level of education—one who used the G.I. Bill for education or training and the other who did not—the G.I. Bill beneficiary proceeded to participate in 50% more civic organizations and 30% more political activities during the postwar era. On net, such effects yielded a powerful impact, given that, as Robert Putnam has noted, fully 80% of the males of the “civic generation” were veterans and among them about half used the G.I. Bill’s education and training benefits. I do not claim that G.I. Bill usage functioned as a “silver bullet,” by itself explaining the intense civic engagement of the postwar decades. Nonetheless, the provisions’ effects in invigorating democracy in that era are impressive.

Compared to that period, when ordinary Americans felt connected to the political system and regularly made their voices heard to elected officials, participatory citizenship has declined sharply over recent decades, particularly among less advantaged individuals. No doubt the causes for this shift are multifaceted, but it makes sense to consider the role that public policies may play. Scholars have found that receiving Social Security benefits fosters higher rates of political involvement among low-income senior citizens than their other characteristics would lead us to expect; welfare receipt, conversely, appears to depress the likelihood of voting, even when controlling for other factors. The G.I. Bill still stands as one of America’s landmark policies: it offered educational opportunity to those who had served the nation and it prompted their subsequent involvement as more active and vocal citizens.

Suzanne Mettler
Syracuse, NY


Thomas A. Bruscino, Jr., replies:

The fundamental flaw in Suzanne Mettler’s thesis can be seen in her response to my review. She argues that there is little difference between veterans and non-veterans when it comes to civic participation, but that G.I. Bill recipients—all of whom were veterans—were “more active and vocal citizens.” Which is it?

The answer, as I point out, can only be found in a thorough comparison of World War II veterans (G.I. Bill recipients and non-recipients) and non-veterans from the same age cohort. Research by Jeremy Teigen, Samuel Stouffer, and E.M. Schreiber has found statistically significant distinctions between World War II-era veterans and non-veterans in several areas, including voting participation, tolerance, and isolationism. (Jennings and Markus looked at Vietnam veterans, explicitly stating that their findings differed from studies on veterans of World War II.) This conclusion is at odds with Mettler’s own preconceived notion that government welfare programs can make better citizens.

But even if we accept her findings and agree that G.I. Bill recipients participated more than veteran non-recipients, there is still a hole in the research: we have nothing to which to compare the results; no non-veterans received G.I. Bill benefits. And when non-World War II veterans have been on the receiving end of large government welfare programs, the results have been more than a little disappointing.

* * *

The Coming Pandemic

Virtually everything passing for “fact” in Mark Helprin’s scarifying essay on pandemic flu, “The Worst Generation Faces the Greatest Peril” (Fall 2006), was dispatched in my Weekly Standard cover story of November 21, 2005, “Fuss and Feathers: Pandemic Panic over the Avian Flu.” The rest was dismissed in my follow-up in the December 25 issue, “The Chicken Littles Were Wrong: The Bird Flu Threat Flew the Coop.” Further, Helprin’s call for spending 2.5% of the national budget, or about 1% of GDP, to stave off this will-o’-the-wisp, when defense spending has risen by only 0.8% of GDP since 9/11, is downright distressing.

The very idea of comparing the 1918-1919 pandemic (the Spanish Flu) to a 21st-century one is fatuous. Back then, antibiotics and antibacterial vaccines that could prevent the deaths caused by secondary infections were still decades away. Yet despite the myth that most people died directly from the virus in that pandemic, overwhelming evidence indicates that, as with all flus, it was opportunistic bacteria that ultimately killed the great majority of victims.

Helprin also ignores the existence of antivirals, even though we are now building huge stockpiles of two—Tamiflu and Relenza—that could be tremendously effective against avian flu H5N1 because they specifically target neuraminidase (the “N” part of the name). Studies at St. Jude Children’s Research Hospital have shown that H5N1 appears to express the highest level of neuraminidase of any flu since 1957, and that the drugs can effectively kill two birds with one stone, both preventing a person from getting the flu and, if he does get it, preventing him from transmitting it.

A review of four such studies in the American Journal of Epidemiology showed that preventive administration of Relenza reduced the chance of becoming infected by 75%, reduced the chance of transmission by 19%, and reduced the severity of illness by 52%. For Tamiflu, preventive administration reduced the chance of becoming infected by 81%, reduced the chance of transmission by 80%, and reduced the severity of illness by 56%.

Helprin warns of the possibility of reassortment, whereby humans (or one of the few types of animals that can contract human flu) also contract avian flu, and the two merge to form a super-hybrid with the worst aspects of both. Yet H5N1 was first discovered in Scottish chickens in 1959 and has therefore had almost half a century to reassort or mutate into a form readily transmissible between humans; it has failed to do so. That hardly supports Helprin’s exhortation that avian pandemic flu “will almost certainly strike in one form or another; it could strike tomorrow….”

A study using one of those few animal species, ferrets, appeared in the August 8, 2006 issue of the Proceedings of the National Academy of Sciences. The ferrets were infected with several H5N1 strains in addition to a common human influenza virus of the kind that circulates almost every year. The infected animals were then either placed in the same cage with uninfected ferrets, to test transmissibility by close contact, or in adjacent cages with perforated walls, to test spread of the virus by respiratory droplets. None of the secondary ferrets contracted either a reassorted virus or even H5N1 itself, mimicking what we’ve seen in humans.

Separately, the scientists used gene splicing to create a hybrid virus. They found that these hybrids also did not pass easily between the animals. Moreover, ferrets injected with the reassorted virus were less severely ill than were those that received pure H5N1. Reassortment appears to have weakened the virus.

Helprin claims that the “mortality of Avian Flu has been calculated on the basis of isolated cases, the subjects of attentive care, and reads far lower” than if it went pandemic. In fact, the data of which he speaks all come from Third World countries with Third World medical practices and comprise only those who became sick enough to go to the hospital. As to the true death rate, a study of residents of a rural district in Vietnam, published in January 2006 in the Archives of Internal Medicine, found a mortality rate for those infected with avian flu of about 1 in 140, or 0.71%—in the same range as seasonal human flu.

Meanwhile, at least six different drug companies have vaccines for H5N1 in testing or even in production while awaiting regulatory approval. At least one country has ordered enough for every citizen. Every day that passes doesn’t bring us a bit closer to a pandemic, but rather closer to having ever larger and more effective stockpiles of drugs to both prevent and treat avian flu. If only this were true of our national security against the forces of Islamist terror. That’s where the money he would throw down the gullet of the faux flu needs to go.

Michael Fumento
Arlington, VA


Mark Helprin replies:

Dare I defend myself from an attack so magisterial that it dispatched my arguments a year before I made them? And we can have no doubt that it did, because the author’s authority is two articles that he himself wrote.

His arguments suggest both careless reading and careless writing. For example, he gratuitously assumes that other observers don’t know that flu deaths are often caused by secondary infections, and that the “antibiotics and antibacterial vaccines” that did not exist at the time of the 1918 pandemic suffice to render those infections of no great import. Not all secondary infections are bacterial; a large proportion of deaths is caused directly by viral flu pathogens; and, in regard to the sub-category of bacterial secondary infections, resistance to antibiotics is building so rapidly that they cannot be an effective line of defense in a pandemic. Even if they could, in a mass outbreak medical personnel, equipment, and the drugs themselves would not be available to the vast majority of patients.

Antivirals such as Tamiflu and Relenza are subject to the same degradation as antibiotics, as is shown by the steadily growing viral resistance to them in Japan. The government stockpile, assuming that it is not severely reduced by efforts to contain an outbreak somewhere else in the world, will not cover eight out of ten Americans. And what does it say about Mr. Fumento’s argument that he is content because a study found that Relenza at one time reduced transmissibility (of which virus?) by 19%?

The claim that H5N1 has been known for half a century and has not reassorted into a dangerous variant ignores the immense increases in Chinese animal husbandry, the reduction of waterfowl habitat, and other factors, which have multiplied the opportunities for reassortment by orders of magnitude. Nor does it look beyond the H5N1 pathogen to the many others, known and unknown, that have arisen and will arise in such conditions. Yes, reassortment may weaken lethality, but it may not.

As for the assertion that the death rates in advanced countries would be low because of advanced medical care, in the absence of the necessary preparations I advocate, medical care would largely disappear within the first weeks of a mass outbreak, given the tight margins in the supply of hospital beds, ventilators, personnel, and drugs, as anyone who has ever waited in an emergency room would understand.

There is a broad range in estimations of mortality. Kathleen Gensheimer et al. in Emerging Infectious Diseases project “89,000-207,000 deaths” in the United States, and “Hundreds of thousands to millions [of casualties].” Sarah A. Lister of the Congressional Research Service cites “CDC estimates that, in the United States…a pandemic could cause more than 200,000 [deaths].” The World Health Organization (WHO) summarizes expert estimates for worldwide mortality as having “ranged from 2 million to over 50 million.” Dr. Shigeru Omi, WHO director for Asia and the Pacific, in the New York Times, estimated “at least 2 to 7 million, maybe more—20 million or 50 million, or in the worst case, 100 million.” Dr. Michael T. Osterholm, director of the University of Minnesota Center for Infectious Disease Research and Policy, estimates that a pandemic could kill up to 1.7 million people in the U.S. and as many as 177 million worldwide. Henry L. Niman, Ph.D., president of Recombinomics, believes a pandemic could potentially kill a billion people if the 72% mortality rate seen in recent confirmed H5N1 cases prevailed.

The tremendous range of expert estimates is due to the uncertainty of the variables. The least “authoritative” in the samples above is also the highest, but the most “authoritative,” that of the Centers for Disease Control and Prevention (CDC), was offered by Martin Meltzer, a health economist in the Office of Surveillance of the CDC’s National Center for Infectious Diseases, who based his projections on the small-bore Hong Kong Flu of 1968, and who calls his own numbers “conservatively low.”

Nor is there consensus in regard to the death toll from the 1918-19 Influenza Pandemic, with a range of 20 million to 50 million deaths worldwide, and the usual citation of 500,000 deaths in the United States, although this latter figure has been elevated by some estimates to 675,000. An article in Johns Hopkins Public Health dismisses the lower estimates: “The global death toll from the 1918 flu was long pegged at 20 million, but most experts now think that grossly low. They talk of 50 million, perhaps 100 million.”

It is possible that continuing research will extend the hopeful progress to date, and that the newly emerging pathogens will be (almost) neutralized by antiviral vaccines. But these vaccines are not available now, and Mr. Fumento’s position has not been, is not, and apparently will not be favorable to allocating the hundreds of millions and billions of dollars that are necessary to make them so, and that have brought about the very progress he cites as if it reinforced his line of reasoning.

What is most peculiar and arbitrary is that he apparently believes appropriations for the military and those for biological preparations to be alien competitors, and that my call for spending $100 billion per year (which he wrongly calls 1% of GDP) would cripple defense. The roughly $5 billion planned for biological defenses is approximately one one-hundredth of what we have spent, thus far and over and above the defense budget, on the wars in Afghanistan and Iraq. Because the reassortment that he claims, with no basis whatsoever, will not occur in nature can be accomplished in the laboratory—yielding a wide variety of potential combinations (such as a pathogen with the lethality of Ebola and the transmissibility of the common cold)—biological defenses are obviously a very important part of the war, even if fatal to his argument.