It is a peculiarity of American government that after more than 200 years no fixed system exists for selecting the president of the United States. Almost every nomination contest brings with it a different arrangement for the schedule of primaries, the allocation of delegates, and the regulation of campaign finance. No one can get used to the system before it has changed again.

This year was surely no exception. The start date of the process was moved up to January 3, with the Iowa caucus virtually ushering in the New Year. There was an unprecedented bunching of more than 20 contests, including both California and New York, on February 5, dubbed “Tsunami Tuesday,” to distinguish it from “Mega-super Tuesday” (March 2) of 2004, and from merely “Super Tuesday” before that. The nation is running out of superlatives. With so many contests scheduled up front, most of the candidates formally announced their candidacies and began intense fundraising in the spring of 2007, a full year and a half before the final election. A battery of debates followed, saturating the cable networks throughout the summer and fall.

If, as Alexis de Tocqueville once remarked, the presidential election period marks a kind of “national crisis” in which the political elites are distracted from the normal business of governing, then America is courting danger to the point of obsession.

Given this frontloaded schedule, most analysts predicted that the nomination decisions would be resolved by early February. Things didn’t work out that way, especially for the Democrats. What was supposed to be a mad dash turned into a grueling marathon. Judgments about the system seemed to shift as time went on. The early chorus of objections that the contests would be settled too quickly, precluding adequate deliberation, gave way by May to plaintive complaints that the campaign had gone on too long, risking division within the party and the nation. By June, all seemed to be for reform, though as usual not the same reform nor for the same reasons.

Is this any way to pick the men (and women) who would be president?

Rules of the Game

To invoke the wooden terminology of political science, the presidential nomination system has not achieved full “institutionalization.” In an institutionalized process, rules precede the activity to be governed and structure it in patterned ways; and they remain in place long enough to produce reasonably clear effects. In sports, that’s known as the “rules of the game.” Nothing approaching this salutary authority governs the nominating process. Candidates and their supporters regularly scheme to change state laws and (in the case of the Democrats) party rules to try to benefit their campaigns, though they are frequently burned by their own machinations. Because each year’s arrangement is different, the candidates must make new strategic assessments. (Rudy Giuliani, for example, disastrously concluded that the 2008 schedule allowed him to ignore all the contests up to Florida, a decision netting him one delegate for his $59 million investment.) Non-institutionalization has become, all on its own, a factor that influences who is nominated, helping candidates who judge better—or perhaps just guess rightly—the effects of the rules on their race. Whether this skill is a sound predictor of presidential performance is another matter.

Imagine what the rest of the world must think at observing this spectacle. As late as November 20, 2007, six weeks before the provisional start of the campaign, the secretary of state of New Hampshire was preparing to move his state’s primary date up (to before Christmas!) to preempt a possible move by the state of Michigan. Two of the largest states (Michigan and Florida) went on to hold primary elections that, in the case of the Democrats, were initially not recognized by the national party because they contravened national party rules—which of course did not prevent Hillary Clinton, who had signed off on the rules, from solemnly demanding, as a matter of highest principle, that the people’s voice be heard. And it was truly bizarre that Puerto Rico, a U.S. territory with no vote in the presidential election, had been allocated almost twice as many delegates in the Democratic Party as West Virginia, a state potentially critical to the general election. (The excuse was that since actual delegates after a certain point in the process do not matter anyhow, it would be clever to make a symbolic gesture to Hispanic voters.) Finally, there was Rush Limbaugh’s “Operation Chaos,” which asked his legions of dittoheads to vote in Democratic races to gum up the results.

Far from being a model, our presidential selection process is unworthy of a banana republic. To add insult to injury, it is unclear where, if anywhere, the effective authority resides to implement any serious reform. Each state can change its own laws, but not the laws of any other state. Each national party—or, in the ideal case, the two of them working together—can influence state laws, but they are loath to take on this assignment; and the states, in any case, are not obliged to listen. As for the federal government, it is disputed to this day whether Congress has the constitutional power to legislate in this domain.

How did we get ourselves into this situation? No one, surely, could have planned it this way. And no one ever did.

The Founders’ Intent

If blame must be assigned, a portion of it must go to the founders for instituting a system that never fully succeeded in managing the problem. But it was certainly not for want of trying. The Constitution, contrary to popular impression, created not three but four national institutions: the presidency, the Congress, the Supreme Court, and the presidential selection system (centered on the Electoral College). The question of presidential selection was just that important to the founders, and they created a system that was meant to institutionalize the process from start to finish—from the gathering and winnowing phase up through the final election. The Constitution, in other words, was intended to control “nominating” as well as electing: the electors, meeting in their respective states, would vote for two people for president (at least one of whom had to come from another state), thus nominating and electing at the same time. When the votes were collected and opened on the floor of the House of Representatives, the winner (if he had a majority) would become president, and the runner-up would become vice president.

The nominating plan, as matters turned out, worked as intended only when there was no real need for it. The electors twice nominated the one individual in American history, George Washington, whose choice was never in doubt. By the time Washington stepped down, national political parties, which the founders never expected, had begun to impose their influence on the electors’ nominating function, promoting the parties’ own candidates for president and vice president and effectively removing this process from constitutional jurisdiction.

Despite this failure, the founders introduced a comprehensive way of looking at the selection process that continued to exercise a broad influence. One of their simplest but most important principles was to consider the presidential selection system a means to an end, not an end in itself. Its purpose was to elevate a meritorious person to the presidency, in a way that promoted the Constitution’s design for the office. Their explanation of the system did not celebrate the process as a positive event in its own right, much less as the consummation of American democracy. They focused instead on the need to avoid the many potential problems and dangers attendant on the choice of a chief executive.

The principal objective was to choose a sound statesman, someone “pre-eminent for ability and virtue,” in the words of The Federalist, by a method that satisfied republican standards of legitimacy. (The system, with electors to be chosen by the state legislatures or the public, was a remarkably democratic arrangement for its day.) How to identify a person of “virtue” was the crux of the issue. The best way would be a judgment based largely on the individual’s record of public service, as determined finally by the electors. The founders’ intent was above all to prevent having the decision turn on a demonstration of skill in the “popular arts” as displayed in a campaign. They were deeply fearful of leaders deploying popular oratory as the means of winning distinction; this would open the door to demagoguery, which, as the ancients had shown, was the greatest threat to the maintenance of moderate popular government. By demagoguery, the founders did not mean merely the fomenting of class envy, or harsh, angry appeals to regressive forces; they also had in mind the softer, more artful designs of a Pericles or a Caesar, who appealed to hopeful expectations, “those brilliant appearances of genius and patriotism, which, like transient meteors, sometimes mislead as well as dazzle” (Federalist 68). The greatest demagogues would be those who escaped the label altogether.

The selection system was also designed to promote the more elusive goal of shaping the tone of the nation’s political class. By sending a clear signal of how and how not to be considered for the presidency, the system was intended to structure the careers of the most spirited leaders, discouraging them from cultivating the popular arts and encouraging them to establish strong records of public service. The task was to make virtue the ally of interest, in order to avoid the danger expressed in Alexander Pope’s couplet: “The same ambition can destroy or save / And makes a patriot as it makes a knave.”

King Caucus

The founders’ nomination system could not survive the advent of political parties during Washington’s second administration. Since that time, some four or five (depending on how you count them) methods of selecting presidential candidates have been tried: the congressional caucus (1796-1820); a brief interlude of nonpartisan self-selection (1824-1828); the national nominating convention under the control of party organizations (1832-1908); a “mixed” system balancing popular choice with the previous convention system (1912-1968); and the modern system of popular choice (since 1972).

When national parties took control of the nominating function in 1796, it was not by design but in fulfillment of the very logic of a party in a democratic system. To win power through election requires a party to concentrate support behind one person. Otherwise, party supporters might divide their votes among several, allowing a candidate from the opposition party to win. Political parties did not seek to subvert the Constitution’s aims for the selection system, but in assuming the function of nominating they added a criterion for selection—fidelity to party principles—that was and still is in some tension with the constitutional spirit of the presidency.

But how would the parties decide—or who among them would decide—on the nominees for executive office? At first, the task fell, faute de mieux, to a meeting of the delegation of the party’s members of Congress. The originators of the congressional caucus, whoever they were, never viewed themselves as founding a new institution, however. Only afterwards, when the caucus was challenged beginning in the mid-1810s, did its defenders begin to think in these terms. They justified it modestly on the practical grounds that the caucus was the only arrangement available at the time. It also served the party’s purposes, and gave the choice to a group well suited to judge the candidates’ qualities. In this sense, at least, it kept the founders’ goal in mind.

The caucus came under criticism for placing too much power in the hands of a small Washington group, in contrast to the more popular plan envisaged in the Constitution, which had left nomination to electors from each state. In a brilliant public relations ploy, someone branded the institution “King Caucus,” a label that helped to ensure the institution’s demise. A further, more trenchant criticism was that the caucus involved members of Congress in the task of selecting the president, which, as John Quincy Adams noted, “places the President in a state of undue subserviency to the members of the legislature.” (Adams might well have had in mind the strong signals sent by some Republican members of Congress to President James Madison in 1812 to go to war with Great Britain.)

The greatest problem the caucus confronted, however, stemmed from growing opposition to party nomination of any kind. When the Federalist Party collapsed as James Monroe assumed the presidency in 1817, the nation was left with only one party. Monroe responded not just by declaring victory for the Democratic-Republicans, but by calling for the elimination of all vestiges of partisanship, which he called the “curse of the country.” His aim, which won the day among much of the political class in Washington, was the restoration of the founders’ original idea of nonpartisanship. This was the meaning of the Era of Good Feelings.

Monroe’s position put enormous pressure on the caucus system, which many Americans now wished to jettison outright. Even those who still believed in political parties found themselves on the defensive. If there were only one party, and all presidential aspirants were faithful members, what need was there any longer to subordinate the individual to the party? There was also the undeniable fact that with only a single party, the caucus was not merely nominating a presidential candidate but picking the president as well. The institution found itself with fewer and fewer supporters, and as the 1824 election approached the “King” was well on his way to exile.

Putting Party Over Person

The 1824 election took place without binding party nominations and featured, as a result, a multiplicity of candidates (Adams, Andrew Jackson, William Crawford, and Henry Clay). With no formal starting point, electoral activity began very early, and with so many contenders no one received an electoral majority. The election had to be decided in the House, where Clay backed Adams (and became secretary of state), earning him Jackson’s wrath as the “Judas of the west.” One might compare this contest to the early stage of certain modern nomination races, where each candidate, devising his own strategy and platform, vies to carve out enough votes from limited constituencies to finish among the top two or three contenders, enabling him or her to go on. The appeals can be narrow and demagogic.

Most who look back on this election treat it as an aberration. But it only became so because of certain deliberate steps taken afterwards that changed the character of the nomination system. The changes were orchestrated by one of the great institution builders of American history, Martin Van Buren. The inevitable consequence of a nonpartisan system, Van Buren argued, would be a repetition of the general outcome of 1824, with many candidates participating and an electoral majority forming only in the rare circumstance of an extraordinarily popular candidate (as Andrew Jackson proved to be in 1828). Leading a coterie of leaders from the old Jeffersonian party, Van Buren set out to rescue the nation from the system of 1824. His aim was not merely to revive his own party but to restore two-party competition—indeed, not just to restore it, but to make it into a permanent and respectable part of the political system. Even his partisan goal, which was second to his grander institutional reform, required founding and supporting an opposition party. One party alone would be like a single hand trying to clap. It would not work.

Van Buren criticized the nonpartisan system because it removed all restraints on individual ambition and opened the presidential election to an endless campaign, which would be conducted on the basis of popular leadership and demagoguery. He feared that such an election process would divide and destroy the nation, most likely by fomenting sectional appeals. Van Buren’s solution could be summed up in one phrase: put party over person. National parties, established on safe and broad national principles, would be the gateways through which anyone seeking the presidency would have to pass. This system could hardly depart more dramatically from the original constitutional plan, but it was inspired by the founders’ aim of managing ambition and controlling popular leadership for the common good.

As an institution builder Van Buren understood that the task of resuscitating parties could not rely on a change in the Constitution. It was an extra-legal task. Van Buren’s ingenious and paradoxical stratagem was to try to connect his project to the most powerful, popular—and polarizing—force in the nation, Andrew Jackson, who, to make matters more difficult, favored a politics of personalism and Monroe’s related nonpartisan idea. But Van Buren persisted. If the old party could be connected to Jackson, another party in opposition would be quick to follow. Van Buren’s plan did not fully “take” in 1828—Jackson won on a personal appeal—but by 1832, with mounting opposition (and with Van Buren having insinuated himself into Jackson’s good graces), Jackson concluded that he needed the Democratic Party. He became its nominal founder and great champion.

From early on Van Buren concluded that the caucus system was finished and that a new system of nomination was needed. This was the Age of Jackson, in which people demanded a greater role in choosing their leaders. The new institution Van Buren proposed was the party convention, a meeting consisting of a large number of delegates chosen from the states. Not only was the convention more democratic than the caucus, it allowed a large number of politicians to meet face to face to work out the arrangements, including the choice of the nominee, that secured party harmony. As time went on, of course, the conventions also occasionally became the forums for revealing and intensifying factional differences.

Parties were self-created associations, not official public bodies. They determined their own procedures and rules. One of the most unusual and fateful innovations was the Democrats’ two-thirds rule for nomination, adopted in 1832, which effectively gave a veto to any geographical section. The rule also produced some conventions of notorious duration; the longest, in 1924, selected the otherwise forgettable John W. Davis after 103 ballots. Party conventions, which operated at all levels of government, became the institution that performed the great “public” function of nomination. Yet they had the legal status of fully private entities in which, as the political scientist V.O. Key once colorfully noted, “it was no more illegal to commit fraud…[than] in the election of officers of a drinking club.” Following sober reassessment, some states began the gradual process of bringing parties under the control of state legislation. But national parties and the national conventions operated beyond the jurisdiction of any state, and no one at the federal level at the time ever conceived that Congress had the authority to regulate them. A national function of the highest importance—nominating presidential candidates—was carried out by an institution that no political authority could regulate.

Popular Leadership

Like the congressional caucus earlier, the party convention eventually came under fire. Once again, the criticism was only partly directed against its intrinsic flaws as a nominating mechanism; the larger objection was that the convention embodied the alleged defects of the parties of the day. American parties, in the eyes of their Progressive critics at the turn of the 20th century, were thoroughly corrupt, more interested in victory than principle, and willing as a result to settle for weak, pliable candidates. Nominations, it was charged, were settled either by emotional swings among convention delegates or by secret deals struck by party bosses and machine leaders in smoke-filled rooms (tobacco in those days).

As a remedy, the Progressives advocated going over the heads of the party organizations to the people. They prescribed two reforms: nomination by primary elections, in which a wholly new kind of party would form around the victorious nominee, and independent personal candidacies of the sort represented by Ross Perot in 1992. The Progressives’ oft-proclaimed faith in democracy, including direct democracy, was no doubt genuine, but it came wrapped within a concept of popular leadership. The heart and soul of their theory of governance was the idea of a special relation between the leader and the public. Popular leadership, located in the presidency, would modify the spirit of our antiquated constitutional system, overcoming the separation of powers and breathing new life into a moribund mechanical structure. As Woodrow Wilson put it, “The nation as a whole has chosen him, and is conscious that it has no other political spokesman…. Let him once win the admiration and confidence of the country, and no other single force can withstand him….”

To win this admiration and confidence, the leader had to be selected on his own merits, without the help or constraint of the traditional, decentralized party organization. All elements of popular leadership should be on display in the nomination process, to be judged by the public. The Progressives sought to reverse Van Buren’s principle of placing the party above the leader. Though sincere in their wish to restore something of the founders’ notion of statesmanship, they had in mind statesmanship of a different kind, recognized by different means. Progressives sought to raise the selection process to the status of a grand, exalted good, a centerpiece of republican government, celebrated for discovering the leader and educating the public—who would choose among the candidates on the basis of their positions on issues and their appeal as leaders, revealed especially through their rhetorical prowess. It is only natural, wrote Wilson, that “orators should be the leaders of a self-governing people.” Hand in hand with oratory came the celebration of openness in all matters, “an utter publicity about everything that concerns government.”

And what of the risks the new method entailed? In an enlightened age, with a vigilant press, the reformers were confident that dangerous appeals had little chance of succeeding: “Charlatans cannot long play statesmen successfully when the whole country is sitting as critic,” declared Wilson.

By 1912, another important turning point in presidential selection, 14 states had adopted some kind of primary elections, often including various methods to instruct or bind the delegates. By accepting these delegates, even with qualifications, the parties effectively consented to an alteration of the nomination process. This system was immediately put to use on the Republican side by Teddy Roosevelt. Roosevelt forced a reluctant President William H. Taft out of the White House and into an active campaign, most notably in the Ohio primary, where Taft immediately proceeded to label T.R. a “demagogue.” Wilson, too, campaigned in some Democratic primaries. In the end, however, the primaries were not decisive in either contest. Taft bested T.R. at the convention, despite T.R.’s greater success in the primaries, and Wilson, with nowhere near the two-thirds needed for nomination, had to await his fate at the Democratic convention, where he was chosen by party leaders after four long days of contest and 46 ballots.

But the issue of a new nominating process was now squarely placed on the national agenda. In his acceptance speech at the Progressive convention, Roosevelt proclaimed “the right of the people to rule,” going on to declare, “We should provide by national law for presidential primaries.” Wilson took the same tack in his first State of the Union address, urging “the prompt enactment of legislation which will provide for primary elections throughout the country at which the voters of the several parties may choose their nominees for the Presidency without the intervention of nominating conventions.” He also suggested bringing party functions under a federal legislative regime. Conventions would be held—after the nomination, he proposed—consisting not of elected delegates, but of the nominees for Congress, the national committee members and the presidential candidate, so that “platforms may be framed by those responsible to the people for carrying them into effect.”

The Present System

Two things occurred to halt the enactment of an all-primary system. The first was opposition in Congress to national legislation, mainly on grounds that the federal government had no authority to regulate the selection of delegates to party conventions. The second was the rapid decline after the war of the Progressive movement as a whole. States lost interest in establishing new primary elections. Later on, liberal leaders in the Democratic Party, most notably Franklin D. Roosevelt, preferred to work with party bosses rather than pursue procedural reforms through state legislation. The national party on its own, however, at FDR’s insistence, eliminated the two-thirds rule in 1936.

This loss of enthusiasm in the Progressive impulse resulted in the rise, in an organic fashion, of a new, “mixed” system of presidential selection. It consisted of an uneasy synthesis of the original party convention idea and the new theory of nomination by primaries. Candidates could pursue a limited primary strategy to impress party leaders and claim the mantle of the people’s choice, as John F. Kennedy did in 1960, but the ultimate authority to nominate still lay with the convention. Proof of this point occurred in 1968, when Hubert Humphrey won the Democratic nomination without entering any primaries.

But his selection also proved that the Progressive idea, under its new name of “Reform,” had won the battle of legitimacy. Many Democrats regarded Humphrey’s nomination as tainted. As a result, enough states adopted presidential primaries to make them the main component in the nomination of candidates. King Convention was dethroned. Since 1976, all nominees of both parties have been selected in advance of the party conventions by a choice of the people in primaries and open caucuses. This year, the possibility that the Democratic Convention might play a role in nominating its party’s candidate was enough to provoke horror in the minds of most party leaders. The convention today serves a purely public relations function: to showcase the party nominee in a speech that is now the effective platform of the party.

The nation has been trying since 1972 to work out the new principle’s logic. As long as the convention made the ultimate decision, when states held their primary or caucus was a relatively minor matter, because all state delegations might have their say at the convention. With the decision now made before the convention, states have been trying to preserve their influence by shifting their primaries nearer to the campaign’s start, so that their citizens might vote before the race was over. Of course, some states might decide to hang back, betting that if there is a split they may be in a good spot later on to play a major role—like Wisconsin, North Carolina, and Oregon this year.

Fraught with Unknowns

Barack Obama’s selection as the Democratic nominee fulfills at least one hope—the Progressive longing to pick candidates who during the campaign exhibit all the arts of popular leadership and oratory. Granted, by the very nature of politics, those who have experience and connections will under almost any electoral process possess an advantage. Most of the nominees chosen under this system could just as easily have been chosen under the past two systems. But the present system has certainly opened the door to other possibilities. Candidates like George Wallace, Jesse Jackson, Pat Buchanan, Pat Robertson, and Howard Dean, to name only a few, have competed, sometimes with promising results, as popular leaders. Many had thin records of public service and treated the presidency as largely an entry-level political position.

Any selection system that permits choice—unlike, for example, selection by seniority or primogeniture—by definition does not determine the outcome; it only influences it. This makes it impossible to attribute a particular result to the system’s formal properties alone. But two nominees now seem to be clear “products” of the new system: Jimmy Carter and Barack Obama. Neither won on the basis of a substantial record of public service or high previous standing in his party. Their victories were due to their performance as popular leaders. Carter was a Jimmy one-note, repeating a mantra of promising never to tell a lie that resonated perfectly with the electorate’s mood in 1976. Obama, more the maestro than fiddler, has composed his work using a more complex register, alternating motifs of change and hope.

Indeed, Obama’s campaign has forged what for many Democrats is an almost spiritual bond between the leader and his followers. The strength and depth of this personal appeal became so apparent and so alarming to others that the candidate felt compelled to declare, “It’s not about me”—this, in the extraordinary context of a party acceptance speech he delivered in a football stadium before more than 80,000 in attendance. Obama has also made a conspicuous point of rejecting the low “popular arts” in favor of a more high-minded rhetoric, which many critics nevertheless suspect to be a cleverer form of artful popular leadership.

Yet Obama is a candidate whom many Americans feel they hardly “know” or can confidently place. Despite or because of this, he has become for many the object of their dreams and the vessel of their hopes. His nomination affords the opportunity for the observer to gaze squarely into the heart of the new method of leadership selection. Choosing a president is always a process fraught with unknowns, and more so in this case than in almost any other. If Barack Obama is elected president, fondly can we hope, and fervently must we pray, that the country has avoided the greatest potential danger of its fractured electoral system.

This essay is part of the Taube American Values Series, made possible by the Taube Family Foundation.