Flagging Online Falsehoods

A remedy for foreign disinformation attacks

To redress the problem posed by the Russian use of Facebook and other online platforms to manipulate U.S. public opinion during the 2016 campaign, it is necessary that the remedial measures be both effective and protective of free speech.  While the Russian government itself has no First Amendment right to spread propaganda in the United States, it will be impossible to insulate America from all messaging with ties to foreign sources.

Consider the German political philosopher Jürgen Habermas. If Congress tried to ban publication of his works in the United States because he is a foreigner, that surely would violate the First Amendment.  Americans have a right to hear his views, or read his words, even though he is not a U.S. citizen.  And even if it could be argued that the dissemination of his ideas in America might influence the outcome of the next election, that premise would provide no valid basis to suppress those ideas in order to prevent Americans from finding them persuasive.

On the contrary, Americans have the right—as they have always had—to be persuaded by foreign thinkers, whether Adam Smith (British), Karl Marx (German), or Alexis de Tocqueville (French).  Americans have no fear of foreign thought just because it emanates from abroad.

The internet does not change this.  Foreigners can choose to send political messages to Americans through Facebook or Twitter, rather than through newspapers and magazines.  If Congress tried to ban the Times of London from these shores, that would be no more valid than if Congress tried to ban The New York Times. This proposition is equally true with respect to the print and online versions of either publication.

While Facebook and Twitter happen to be American companies, that makes no difference for First Amendment purposes.  If either were a foreign-owned firm but Americans chose to use it for sharing messages, these messages still would be protected in the same way as letters written by Americans to the editor of the Guardian or books by American authors published by foreign-owned presses (like the Hachette group) but distributed by domestic booksellers.

The problem with Russia’s use of online platforms was not its foreignness, but its falsity.  To be sure, foreigners—individuals and corporations as well as governments—may be barred from engaging in express advocacy for or against the election of an American candidate (“Vote for Smith,” “Vote against Jones,” and the functional equivalent of such express electioneering).  That’s because foreigners are not members of “our national political community” (to quote the relevant court decision on this point) and can be barred from participating directly in America’s elections.

But much of the messaging that apparently came from Russian sources did not involve direct electioneering.  Instead, it involved political topics in general—race relations, immigration, gun regulation, and so forth—rather than the election of candidates.  While these messages were intended to affect election outcomes, that alone doesn’t make them electioneering for First Amendment purposes.  If these generally political, but not specifically electoral, messages were sent by Americans, and if they were not demonstrably false, then they would be fully protected by the First Amendment.  Their point of view would not matter: for gun control or against, pro-choice or pro-life, liberal or conservative, or whatever.  This would be so whether these political messages were in print or online.  And if it turned out that the same generally political, but not specifically electoral, message had a foreign rather than American author, that fact alone would not change the message’s protection under the First Amendment.

What matters for First Amendment purposes is whether or not the message is a demonstrably false statement of fact.  If it is, it has a very different First Amendment status than if it is either a true factual assertion or an expression of opinion not purporting to assert a fact.   Yes, the Supreme Court has made clear that an utterance does not lose all First Amendment protection just because it is a false statement of fact.  Congress, for example, cannot criminalize lying about whether one is a Medal of Honor recipient just because Congress wants to punish this outrageous lie.  But preventing Congress from criminalizing false speech without a specific pressing need for doing so does not mean that Congress has no tools to combat deliberate falsehoods designed to influence American elections.

One possible tool at Congress’s disposal would be to require Facebook, Twitter, and other online platforms to provide better ways for users of these platforms to challenge a posted message as false.  Although both Facebook and Twitter enable users to “report” objectionable content, these “reports” are not publicly displayed in the same way that “likes” or “retweets” are.  Facebook does permit a user to affix an “angry face” icon next to a message, but that is not the same as questioning the message’s factual veracity or labeling it as outright false.

Suppose Facebook and Twitter permitted users to click a “???” icon next to messages that they either doubted or disputed, with the number of times this “???” icon has been clicked displayed right underneath the message (in the same way that the number of “likes” currently is displayed).  This addition to each site would alert users that others viewed the message as dubious or untrue.  Moreover, whenever a message received a certain number of these “???” clicks, Facebook or Twitter could be required to investigate the veracity of the message and, if finding it to be demonstrably false, could be required to post its own warning: “False!”, with links to the underlying documentation of its falsity.  If Facebook or Twitter failed to affix this “False!” label to a demonstrably false message, an aggrieved party could have a statutory right to a court order requiring the online platform to remedy this lapse.  In the context of an election campaign, a candidate would qualify as an aggrieved party based on a showing that the false message potentially could affect the election even if the message’s content was not specifically electioneering.  (To the extent that Facebook, Twitter and other online platforms adopted this kind of new regime, their doing so would obviate the need for detailed congressional regulation on this point.  Most beneficial would be for Congress and these firms to cooperate to develop regulatory practices that would best facilitate the ability to warn viewers about online falsehoods.)

Decades ago, the Supreme Court rejected the proposition that a newspaper could be required to give a candidate the right to reply in the paper itself to an editorial critical of the candidate.  But the proposed right of rebuttal in that case was not limited to false statements; rather, it was triggered by any criticism of the politician, however true or however much a matter of opinion.  That proposed interference with the newspaper’s editorial prerogative was inconsistent with First Amendment freedom.  Requiring online platforms to provide a better way to challenge demonstrably false messages on their sites would be an altogether different, and much more narrowly tailored, response to the spreading of pernicious falsehoods than would obliging newspapers to provide all politicians a general right of reply.

A well-designed system of this type would deter a repeat of what happened in 2016.  All messages, whether or not obviously from Russian (or other foreign) sources, could be flagged as “???” or “False!”, thereby minimizing the risk of Americans being duped by them.  While theoretically the system might be confined to only obviously foreign-sourced messages, that limitation would be a mistake.  Russia or other malevolent nation-states (like North Korea) can be sophisticated in making their messages look like they have American authors. Facebook and Twitter will not be able to police all the innocuous-sounding entities, like “Americans for a Better America”, to determine whether they are Russian (or other foreign) agents in disguise.  It is much easier just to look at the content of the message and make a determination whether or not it is demonstrably false.

Similarly, it would be a mistake to limit this new flagging mechanism to paid advertisements on the online platforms.  Much of Russia’s pernicious misuse of Facebook involved not paid ads, but rather unpaid ordinary messaging (what the online platforms call “organic” content).  Any new measures, to be effective, must extend to falsehoods in the form of unpaid, as well as paid, online messaging.  (So far, Facebook’s efforts at self-regulation seem focused solely on paid content, and would impose new rules on political messages regardless of whether they are false, true, or opinion.  In this respect, Facebook seems to be missing the key point of what is at stake in responding to the kind of disinformation campaign that Russia apparently waged last year.)

Moreover, Americans deserve a measure of protection from domestic, as well as foreign, disinformation.  Even if this disinformation cannot be criminalized, it can be flagged when demonstrably false.  The flagging is a form of “counterspeech,” which is the appropriate First Amendment remedy for the dissemination of falsehood.  Both domestic and foreign falsehoods deserve to be flagged in this way, so that American voters can best judge for themselves what online messaging to believe.

Just as Americans have the right to receive truths and opinions from foreign and domestic sources, so are they entitled to an equal level of protection from foreign and domestic fabrications designed to subvert the free exchange of ideas.  The best protection against another Russian disinformation attack is a system that adequately protects against disinformation attacks from any source.

Of X-Rays, CT Scans, and Gerrymanders

Progress in the detection of malignant redistricting.

I’m not a doctor, but I think this analogy is sound, and all the more so after today’s oral argument in Gill v. Whitford, the Wisconsin redistricting case in the Supreme Court.

Think of a disease that was undetectable before the development of medical imaging technologies, a type of brain tumor perhaps. It is still very much a disease even if undetectable.

After years of frustration, along comes the invention of a breakthrough medical technology that for the first time permits the detection of some of these tumors.  X-rays once did not exist, but now they can be used to spot some malignant tumors.  And for a while, the only available imaging technology is an x-ray.

But then CT scans are invented, and MRIs.  They are improved forms of imaging, able to detect tumors more accurately than x-rays.  The disease is the same; just the way to detect the presence of the disease has improved.

I would suggest that in 2004, when the Supreme Court decided Vieth v. Jubelirer, not even the equivalent of x-rays existed to detect a malignant gerrymander.  The absence of appropriate detection techniques did not mean that gerrymanders were not a cancer on democracy.  On the contrary, all the Justices on the Court at the time recognized that they were.  There just was not an available tool by which to identify when redistricting had become pathological.

After Vieth, social scientists invented a new detection technology, called the “efficiency gap,” and used it to identify the pathology of Wisconsin’s redistricting map at issue in Gill.  The efficiency gap is far from a perfect detection device; that much is clear from the district court’s trial in Gill as well as various social science commentaries on the efficiency gap’s technical properties.  It is like x-rays in this respect: imperfect, but still capable of detecting disease in some circumstances.

Since the development of the “efficiency gap,” social scientists have been hard at work on creating improved detection devices, such as the mean-median test and computer simulations.  These might be considered the CT scans and MRIs in the field of redistricting.  There has been incredibly rapid progress in this field, maybe even more rapid than with medical imaging. In both fields, there are likely to be even better techniques in the future.  Still, all the while the definition of the disease to be detected remains constant.

What struck me from reading the transcript of today’s oral argument were two points. First, there remains virtually no dispute about the nature of the disease.  As the two attorneys defending Wisconsin’s map both conceded, if there were no issue of detection involved—if the malignant cancer were on the surface of the body politic, so to speak—there would be no doubt about its unconstitutionality.  In response to Justice Kennedy’s question (page 26) about an explicit state rule “that’s saying all legitimate factors must be used in a way to favor party X or party Y,” the Wisconsin legislature’s lawyer (page 27) said “Yes.  It would be unconstitutional,” and the state’s Solicitor General agreed (page 63).  Thus, the only issue is whether the current status of available detection devices permits identification of malignancies that are not on the surface in the same way.

Which leads to the second point about the oral argument.

There was considerable use of the term “outlier” to define a gerrymander that would be subject to judicial invalidation.  Justice Breyer (page 12) included the concept as a key component of his effort to articulate a judicially workable test: after identifying whether a map drawn by a partisan legislature was demonstrably skewed against a party, the trial court would ask: “is this an extreme outlier with respect to asymmetry?” Justice Kagan (page 49) asked the plaintiffs’ attorney: “Mr. Smith, are you suggesting that we should be looking for outliers or are you suggesting that we should be trying to filter out all manner of partisan consideration, or is it some place in between?” Mr. Smith’s response (page 50): “Your Honor, the word ‘outlier’ is probably an appropriate one.” Then, confirming this initial thought, he added: “Certainly, we don’t think … that all partisanship is unconstitutional.  What you need is a method by which the extreme gerrymander . . . can be identified and held unconstitutional.”

This focus on the concept of an “outlier” map seems to me important with respect to both understanding the relevant constitutional standard for judges to enforce as well as developing the measurement technique used to enforce the standard.  The very word “outlier” entails a distinction between norm and deviation.  That distinction makes sense in terms of the role that partisan politics is entitled to play in the redistricting process.  Normal partisanship is routine and constitutionally unobjectionable.  It poses no First Amendment problem.  It is a symptom of healthy, competitive democratic contestation between political parties in a free society governed by First Amendment values.  But just as normal cell growth in a human body can turn malignant, so too can normal partisan contestation metastasize into malignant gerrymandering.  The constitutional value at stake, rooted in First Amendment freedom, is to protect the norm from this kind of malignant deviation.

This basic conceptual distinction between norm and outlier serves to answer a major question raised by Chief Justice Roberts (and echoed by others, including Justice Gorsuch).  “It is just not, it seems, a palatable answer to say the ruling was based on the fact that EG [efficiency gap] was greater than 7 percent,” the Chief Justice observed. (Page 38.)  Then, he added, crucially: “That doesn’t sound like language in the Constitution.” His concern, obviously, is with the apparent arbitrariness of such an efficiency gap cutoff and, most significantly, how to link that cutoff with a governing constitutional principle.  His concern, it must be noted, is one reason why the efficiency gap is just an early-generation detection technique, like an x-ray, most likely to be superseded by the rapid development of improved detection techniques.

To answer the Chief Justice’s question, the judicial task—as required by First Amendment principles—is to distinguish, on the one hand, normal partisan redistricting that is reflective of healthy democratic competition from, on the other hand, the pathology of extreme gerrymandering by which one party has subverted the electoral competition between differing political ideas.  That judicial task does not require identifying a cutoff in terms of an amount of partisan asymmetry in a map.  Instead, it requires identifying a map that is outside the norm of politically plausible maps for a given state.

How then to identify whether or not a map is an outlier in this way?  That is where the development of the latest detection techniques is relevant, and some of these advances have been made even after the trial of Gill itself. Without delving into all the technical details here, these latest developments use increased computing power to build upon a basic statistical insight: that of a so-called “normal distribution,” in which a random sample of data will tend to cluster around a mean in the shape of the familiar “bell curve,” with the norm being the bulk of the area under the curve and the outliers being the two “tails” of the curve.  This technique can be used to detect whether a state’s actual map is within the norm, or instead is an outlier in the tail of the distribution.  This inquiry is not arbitrary in the way that a 7 percent cutoff in an efficiency gap score arguably is; rather, it is directly tied to the First Amendment distinction between norm and deviation in the operation of healthy partisan contestation.
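The tail-based idea can be illustrated with a few lines of Python.  This is only a sketch under assumed inputs: the function name, the two-standard-deviation cutoff, and the scores themselves are hypothetical illustrations, not figures from the litigation or from any actual simulation study.

```python
import statistics

def is_outlier(actual_score, simulated_scores, z_cutoff=2.0):
    """Return True if the actual map's bias score falls in the tails
    of the distribution of scores from simulated alternative maps."""
    mean = statistics.mean(simulated_scores)
    stdev = statistics.stdev(simulated_scores)
    # Standardize: how many standard deviations from the center?
    z = (actual_score - mean) / stdev
    return abs(z) > z_cutoff

# Hypothetical scores: simulated neutral maps cluster near zero bias.
simulated = [-0.02, -0.01, 0.0, 0.01, 0.02]
print(is_outlier(0.20, simulated))  # far out in the tail: True
print(is_outlier(0.01, simulated))  # within the norm: False
```

In practice, analysts need not assume a bell curve at all; they can compare the actual map's score directly against the percentiles of the simulated distribution.  The z-score version above simply mirrors the bell-curve intuition described in the text.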

As I’ve discussed previously, one of the amicus briefs in Gill that most lucidly elucidates the statistical ideas of norm and outlier was submitted by Eric Lander, a molecular biologist and founding director of the Broad Institute at MIT and Harvard.  It is noteworthy that this Lander brief was mentioned twice in today’s oral argument: once by Justice Breyer (page 12) as part of his invocation of the outlier concept; and the other time by the plaintiffs’ attorney, Paul Smith (page 56), agreeing that its distinction between norm and outlier will become central to the constitutional inquiry: “I think it will become part of how these cases are decided.”

When I first read the Lander brief, I thought it somewhat surprising that the most significant amicus brief, of the multitude submitted, might have been by a molecular biologist, rather than by someone specializing in politics.  But since it turns out that the task here is to distinguish between healthy and malignant redistricting, so that the cancer of extreme gerrymandering does not destroy the body politic, perhaps it is not surprising after all.

The Oral Argument in the Gerrymandering Case: Questions That Could Matter

Gill v. Whitford looks to be a case for which oral argument might make a difference.

It is often said that oral arguments rarely make a difference to the outcome of a Supreme Court case, that the Justices’ minds are essentially made up before oral argument begins.

But Gill v. Whitford, the blockbuster partisan gerrymandering case from Wisconsin, looks to be one of those rare cases for which what transpires during oral argument genuinely has a chance to be outcome-determinative.

There are two reasons for this.   First, Justice Kennedy—whose vote is widely understood as crucial to determining whether or not the Constitution is interpreted as containing a judicially enforceable constraint on the deliberately partisan manipulation of legislative districts—has made clear from his own previous opinions on the topic that he is genuinely torn between two opposing views: on the one hand, the need to identify some such constraint; and on the other, the inability to do so thus far. Even if Justice Kennedy goes into Tuesday’s oral argument tentatively leaning towards one side or the other (having read all the briefs filed in the case), there is a significant possibility that what is said during the argument could push him back in the opposite direction.   There is little doubt that even now, so far into the litigation of this issue, Justice Kennedy is still very much open to persuasion on this issue. It is, of course, the task of the Supreme Court advocate to be persuasive when and where the opportunity exists, and there may be moments in Tuesday’s argument—in responding to one of Justice Kennedy’s questions, or even one of another Justice’s—when the advocate can make a point that either dislodges a previous expectation based on the reading of the briefs or instead solidifies a tentative understanding.

The second reason is that, even after all the briefs (or maybe because of all of them), there is still much that is uncertain and unsettled about the litigation of this monumental lawsuit, and thus there are important points that the oral argument can clarify or pin down in ways that might be helpful to one side or the other. For example, how important is the so-called “standing” issue, upon which the state of Wisconsin places much emphasis in its briefs, but which received relatively less attention in the district court (and virtually no discussion in the media’s consideration of the case)? In other words, could this particular lawsuit fail not because of an invalid theory on the merits of the claim, but because the plaintiffs did not identify specific districts that were harmed as a result of the statewide gerrymander (and thus did not attempt to link specific plaintiffs with a district-specific injury, even if the unconstitutionality of the gerrymander had a statewide character)?

Another point of uncertainty concerns the relationship between (1) the degree to which a redistricting map is skewed in favor of one political party and (2) possible permissible explanations for that skew, like longstanding geographic and demographic circumstances that cause (for example) Democrats to cluster in cities while Republicans are more dispersed in exurban areas. If a state legislature under control of one political party draws a map with a significant skew in that party’s favor, is the state then obligated to show that it was unable to achieve its permissible redistricting objectives with any less of a skew? In its briefs, the state seems to understand the district court (and the plaintiffs) as having adopted this position, which would amount to something like a “necessity” or “least restrictive alternative” analysis in other areas of constitutional law (typically those subject to the so-called “strict scrutiny” standard of judicial review). For example, on page 14 of its reply brief, the state defines the relevant portion of plaintiffs’ test this way: “was it impossible for the Legislature to draw a map that scored better, while still complying with other requirements?” And again, continuing on to the very next page of the same reply brief, the state repeats: “Plaintiffs define their third element as whether ‘alternative district maps’ could have been drafted that have less partisan symmetry on some metric, while still complying with traditional redistricting principles and other requirements.”

But maybe the state’s understanding of this point is incorrect and, instead, the district court (and the plaintiffs) set forth a position that operates more like a “reasonable relationship” test that is also familiar in other areas of constitutional law (those governed by a lower standard of judicial review than “strict scrutiny”)? In other words, on this view a state’s map would be constitutionally valid, even if skewed in favor of the party that drew the map (and intentionally so), as long as the map bore some “reasonable relationship” to permissible redistricting criteria; there would be no constitutional requirement of being the least skewed map that satisfies those permissible criteria. So, which of these two different understandings of the district court’s (and plaintiff’s) position is correct? Is a remand necessary, or unnecessary, to clarify this important point? These questions are ones for which Tuesday’s oral argument potentially could be extremely significant.

Related to this uncertainty about the appropriate legal standard for when “geography justifies skew” (to put the point colloquially and somewhat over-simplistically) is the nature of the relevant evidence concerning this particular legal issue. There is much discussion, especially among multiple amicus briefs, concerning the possibility of using computer simulations to identify a distribution of possible maps that conform to the state’s permissible redistricting criteria. This distribution then can be used to determine whether the state’s actual map has a degree of partisan skew that is, or is not, an outlier compared to other possible maps compliant with the state’s permissible criteria. (I have previously discussed this statistical approach and the amicus briefs that emphasize it.)

But questions remain about the relationship of this kind of statistical evidence and this specific lawsuit over this particular Wisconsin map. Was any such statistical evidence based on computer simulations introduced in the district court’s trial of this case and, if not, what is the consequence? Is it part of a plaintiff’s burden in challenging a redistricting map as a partisan gerrymander, according to the appropriate constitutional standard to be identified in this litigation, to provide statistical evidence of this nature—in order to demonstrate the state’s map to be an outlier in its degree of partisan skew (compared, again, to a myriad of other possible maps that would achieve all of the state’s permissible redistricting goals at least as well or better)? Or is the state obligated to provide statistical evidence showing that its map is not such an outlier, at least if a plaintiff is able to present a prima facie case (using other types of evidence) that the state’s map has a significant partisan skew that is both intentional and unwarranted? Is a remand required for further consideration of how this particular type of statistical evidence should bear upon an evaluation of this particular map’s constitutionality?

These are just some of the many questions that could be raised in Tuesday’s oral argument and, depending upon how they are handled by the advocates on both sides, potentially could make a difference in the Court’s disposition of the pending appeal.

If I myself had the opportunity to frame a question for each side, here’s what it would be:

For the state: do you accept the premise, as accepted by all the opinions in Vieth, that an extreme partisan gerrymander is unconstitutional in principle, the obstacle simply being the ability to distinguish in practice extreme partisanship from run-of-the-mill partisanship, which is inevitably acceptable; and if you accept this premise, then if new statistical techniques do in fact enable us to distinguish extreme from run-of-the-mill partisanship in a way that we could not before, must you necessarily concede that the constitutional question is justiciable, with the only remaining inquiry being whether your map is or is not extreme according to the new statistical technique?

For the plaintiffs: to what extent is the district court’s position, and the position that you are advocating in this Court, the same or different from the position of some of your amici who appear to advocate an “outlier” test based on a statistical technique using computer simulations; insofar as the positions are different, what is this Court supposed to do with this particular case at this stage of the lawsuit (assuming we find the amici persuasive on this point); and if there is no difference, why is there so much discussion about the possibility of using these types of computer simulations and the role they can and should play in litigation of partisan gerrymandering claims?

We will soon know what questions the Justices actually ask, see how the advocates respond to them, and have at least an initial impression of how effective these responses appear—especially in their effort to convince Justice Kennedy one way or the other. And maybe there will even be something of a surprise: like the possibility that another Justice, like Chief Justice Roberts, might appear open to persuasion (on something like an “extreme outlier” test, for example) in a way that had not been previously anticipated.

The Missing Link in Gerrymandering Jurisprudence

The key advance is the ability to identify whether a redistricting map is an extreme outlier in the degree of its partisan bias.

The key advance is the ability to identify whether a redistricting map is not merely biased against a political party but whether it is an extreme outlier in the degree of its partisan bias relative to other maps that might have been drawn to achieve the mapmaker’s permissible redistricting objectives.

The difficulty up to now, in framing a constitutional challenge to partisan gerrymandering, has been one of linking together two necessary components of a complete claim.  One component is the metric for identifying when a redistricting map deviates from impartial fairness to the competing political parties.  The other component is the standard for determining when a partisan motive for drawing the particular district lines runs afoul of a federal constitutional requirement.

As a policy matter, it is easy to establish a metric for identifying redistricting maps that deviate from neutrality between the parties.  Indeed, as political scientists and statisticians frequently explain, as they do in multiple amicus briefs submitted to the Supreme Court in Gill v. Whitford, the pending case from Wisconsin, there is no shortage of such metrics.  One such metric is the so-called “mean-median difference” (or, as some prefer, “average-median difference”). This metric measures a party’s share of the vote in each district in the map and, listing the districts in order of the party’s vote share from largest to smallest, then compares the party’s vote share in the median district—the district that is the midpoint of the list—with the party’s share of the vote across the entire map (which is the same as the party’s share of the vote in an “average” district, controlling for different turnout rates across districts).  To the extent that the party’s share in the median district is smaller than the party’s overall (or average) share, the map is structurally biased against the party.

To consider an extremely simple example: suppose there are five districts, each with 20 voters, for a total of 100 voters.  Suppose these 100 voters split 60%-40% between Party A and Party B, but the district-specific splits are:

District     A      B
1           20      0
2           20      0
3            8     12
4            8     12
5            4     16

District 3 is the median district, and Party A’s share of the vote in that district is only 40% (8 of 20 votes cast), whereas Party A’s overall vote share is 60% (60 out of 100, or an average of 12 votes across the five 20-voter districts).  This difference between 40% and 60% measures the map’s structural bias against Party A.  Thus, measuring a map’s deviation from neutrality is just straightforward arithmetic—as Princeton mathematician Sam Wang is eager to emphasize.
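The arithmetic of the five-district example can be checked with a short Python function.  This is a minimal sketch of the metric as described above: the function name and the equal-size-district simplification are mine, matching the stylized example rather than any real dataset.

```python
def mean_median_difference(a_votes, district_size):
    """Party A's overall (average) vote share minus its share in the
    median district; a positive value marks structural bias against A."""
    shares = sorted(v / district_size for v in a_votes)
    median_share = shares[len(shares) // 2]  # middle district (odd count)
    mean_share = sum(a_votes) / (len(a_votes) * district_size)
    return mean_share - median_share

# Party A's votes in the five 20-voter districts from the example.
print(mean_median_difference([20, 20, 8, 8, 4], 20))  # 60% - 40%, i.e. 0.20
```

Running it reproduces the 20-percentage-point bias computed in the text.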

But what does this arithmetical observation have to do with federal constitutional law?  It is easy to argue, as a policy matter, that a redistricting map is undesirable insofar as it exhibits this kind of bias against either of the two major political parties that compete head-to-head in legislative elections in order to win governing control in the legislature.  A fair map would harbor no such bias (at least not long-term, in election after election).  But the federal Constitution contains no explicit requirement that legislative maps be neutral with respect to the competing political parties.  Indeed, the most important electoral feature of the federal Constitution—the Electoral College system for presidential elections—egregiously deviates from any such conception of partisan neutrality, as the result in 2016 most recently demonstrates.  (Hillary Clinton’s share of votes in the median state—and states are districts for Electoral College purposes—was far below her vote share overall or in an “average” state.)

Thus, measuring a map’s partisan bias is easy.  The difficulty is linking this measurement to constitutional law.

We can come at the linkage problem from the other direction.   There is no doubt that an extreme partisan gerrymander violates the Constitution.  As Justice Kennedy vividly put it in his Vieth concurrence: “If a State passed an enactment that declared ‘All future apportionment shall be drawn so as most to burden Party X’s rights to fair and effective representation, though still in accord with one-person, one-vote principles,’ we would surely conclude the Constitution had been violated.”  The problem, however, has been how to tell when a partisan gerrymander that is not so explicitly blatant contravenes constitutional law.  This problem is compounded by the Court’s previous pronouncements that some degree of partisanship in the drawing of district lines is constitutionally permissible.  When the mapmaker does not expressly announce a desire to go “too far” in a partisan direction, how is the judiciary to determine from the map itself whether it reflects an excessive degree of partisanship?

In short, the constitutional principle is clear: egregious partisan gerrymandering violates the First Amendment right of political parties to participate in politics free from government efforts to suppress that political participation.  The challenge is how to measure a partisan gerrymander that is egregious rather than merely routine partisan tinkering with district lines.

The difficulty, again, is one of linkage.  Measuring partisan bias, independent from constitutional principle, is easy.  Articulating the constitutional principle, independent from measurement, is straightforward.  It is the marriage of principle and measurement that has proved elusive.

Until now.

As Justice Kennedy also anticipated, the increasing power of computer technology has enabled the development of new statistical techniques that can identify whether a redistricting map is an outlier compared to all possible maps that would achieve a mapmaker’s constitutionally permissible objectives, including compactness and respect for existing political subdivisions.  A computer can do this by drawing thousands, even millions, of alternative maps, all of which are constrained by the stipulated set of constitutionally permissible criteria, and then the computer can measure the degree of partisan bias for each of these alternative maps using the same voting data applicable to the actual map under consideration.  For example, the computer could calculate the mean-median difference for each of these alternative maps.  (In other words, the computer could measure for each possible map the extent to which a party’s vote share in the median district diverges from the party’s overall, or average, vote share.)

Crucially, the key metric is not the absolute value of mean-median difference for the actual map, or how much this difference deviates from the ideal of zero, the score of a perfectly neutral map.  Instead, the key metric is where the mean-median score of the actual map falls within the distribution of mean-median scores of all the alternative maps that the computer is able to draw.  If the score for the actual map falls outside the normal range of scores for all these maps—falls, in other words, along the tails of the distributional curve—then the actual map is an outlier in terms of the degree of its partisan bias.
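The distribution-based test described here can be sketched as follows.  The ensemble below is simulated noise standing in for the computer-drawn alternative maps, and the 2.5% tail cutoff is an illustrative choice of mine, not a threshold drawn from the briefs or the cases.

```python
# Sketch of the outlier test: locate the actual map's mean-median score
# within the distribution of scores from a computer-drawn ensemble of
# alternative maps. All numbers here are hypothetical illustrations.
import random

def percentile_rank(ensemble_scores, actual_score):
    """Fraction of ensemble maps whose score falls below the actual map's."""
    below = sum(1 for s in ensemble_scores if s < actual_score)
    return below / len(ensemble_scores)

def is_outlier(ensemble_scores, actual_score, tail=0.025):
    """True if the actual map falls in either tail of the distribution."""
    rank = percentile_rank(ensemble_scores, actual_score)
    return rank < tail or rank > 1 - tail

# Illustrative ensemble: most alternative maps score near 0.05 (a modest
# tilt from political geography), while the actual map scores 0.20.
random.seed(0)
ensemble = [random.gauss(0.05, 0.03) for _ in range(10_000)]

print(is_outlier(ensemble, 0.20))  # True: the actual map sits far in the tail
print(is_outlier(ensemble, 0.05))  # False: a typical map for this state
```

Note that the test compares the actual map to the ensemble, not to a score of zero: a map scoring 0.05 passes here even though it is not perfectly neutral, which mirrors the point about natural geographic advantage discussed below.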

The distributional approach of this statistical technique, it is important to understand, does not judge—even indirectly—an actual map with respect to a standard of perfect neutrality.  In a given state, it might well be the case that the normal distribution of possible maps drawn by the computer does not center on maps with mean-median scores of zero.  Instead, geographic factors applicable to the particular state might cause the typical map drawn by the computer (in other words, the mode of the computer’s distribution of maps) to have a mean-median score disadvantageous to one political party.  This could occur, for example, if one party’s voters are geographically clustered in tight political subdivisions, while the opposing party’s voters are more advantageously dispersed throughout the state.  All the maps generated by the computer would reflect this natural geographical advantage of one political party.  Still, the process of generating these alternative maps would determine whether or not the actual map was an outlier even with respect to this natural geographical advantage, or instead fell within the normal range of partisan bias given this natural geographical advantage.

Thus, this new computer-assisted statistical approach can be used to identify what the constitutional principle was looking for: an egregious partisan gerrymander.  Strictly defined, and precisely measured, an egregious partisan gerrymander is one that is identified as an outlier using this new computer-generated statistical technique.

Several amicus briefs in Gill invoke this new statistical technique as the method for enabling the Court to articulate a judicially manageable standard to identify unconstitutional gerrymanders.  One brief that discusses the technique in particular detail—and does so lucidly—is submitted on behalf of Eric Lander, the President of the Broad Institute of MIT and Harvard.  The ACLU’s brief, in turn, does an effective job linking the statistical technique to the First Amendment’s requirement that the government regulate political competition between parties without improperly giving one party an excessive competitive advantage.

For Justices on the Court who are historically minded in their overall constitutional jurisprudence, and who thus wish to ground the constitutional analysis of partisan gerrymandering on relevant historical considerations, the new computer-generated statistical technique also can be linked to a history-based approach.  How so? First, the relevant history demonstrates that the original Gerry-mander of 1812—along with all partisan manipulations of legislative maps that are similarly egregious—has been regularly and vigorously condemned as inconsistent with the fundamental principles of popular sovereignty established in the original Constitution and reaffirmed in the Fourteenth Amendment.  Indeed, throughout the nineteenth century, the very practitioners of these egregious partisan gerrymanders recognized that they were acting contrary to constitutional principles, but the pressure of partisan politics prevented them from adhering to the Constitution as they knew they should.  This point is made effectively in an amicus brief submitted by a group of distinguished historians, and it is also emphasized in my own recent scholarship.

Second, the unconstitutionality of the original Gerry-mander can generate a judicially manageable test for evaluating modern redistricting maps in two ways.  The first way, which I have explored in a contribution to a William & Mary Law Review redistricting symposium, is more direct.  It measures the degree to which the original Gerry-mander was a distortion of district lines, and requires a mapmaker to justify any new map that is equivalently or even more distorted.  The other way is more indirect.  It identifies the original Gerry-mander as the archetype of egregiously partisan districting and, in condemning the archetype itself as quintessentially unconstitutional, necessarily also condemns as unconstitutional the whole class of egregiously partisan gerrymanders of which the original Gerry-mander is the archetype.  The way to measure whether a redistricting map is egregiously partisan, apart from having districts as distorted as the original Gerry-mander, is to determine whether it is an outlier according to the new computer-generated statistical technique.

Using the statistical technique in this way is consistent with what I have termed “particularistic,” rather than “universalistic” reasoning in constitutional cases.  (In my William & Mary contribution, I explain how particularistic reasoning lends itself to historically-oriented constitutional analysis, whereas universalistic reasoning lends itself to more philosophically-oriented approaches to constitutional interpretation.)  One of the best examples of particularistic reasoning in Supreme Court jurisprudence is the invocation of the Sedition Act of 1798 as the basis for holding that the First Amendment constrains a state’s use of its libel law to suppress criticism of government officials.  But this exercise of particularistic reasoning did not yield the conclusion that only state laws that are exactly congruent with the Sedition Act of 1798 are unconstitutional.  Instead, the Court appropriately identified the Sedition Act as the archetype of a larger class of laws comparably suppressive of political dissent and thus necessarily comparably unconstitutional.   Once the archetype was determined to be unconstitutional—because it had been deemed so “in the court of history”—that constitutional determination was an anchor, and it became necessary for the Court to craft a contemporary doctrine for which the archetypal determination served as a foundation but which treated the entire relevant class of politically suppressive laws in a coherent and principled way.

So too with respect to the archetype of the original Gerry-mander.  Its unconstitutionality is established in the “court of history,” but that determination simply generates the necessity of crafting the contemporary doctrine that renders unconstitutional all comparably egregious partisan gerrymanders.  The new computer-generated statistical technique can identify the outliers that form the class of egregiously partisan maps that are unconstitutional according to the principle derived from the archetype.

Thus, the new statistical technique can provide the missing link between principle and measurement that heretofore has been so elusive.  Whether grounded in historical analysis, by focusing on the archetype of the original Gerry-mander, or instead rooted in reasoning philosophically based on general First Amendment principles (as the ACLU brief does), it is possible to articulate the relevant constitutional principle as the prohibition of egregious partisan gerrymanders, not the purging of all partisanship from redistricting.  Once this principle is articulated, the new statistical technique can be employed to determine whether the map under review is an outlier relative to all possible maps that might be drawn to achieve the map’s constitutionally permissible redistricting goals.  If the map is indeed an outlier, and if the mapmaker cannot justify it as appropriate despite its outlier status, then the map should be condemned as inconsistent with the fundamental constitutional principle at stake.

In this way, the missing link finally has been found.

The Vare Precedent in the Senate and Its Relevance to the Trump-Russia Inquiry

Analysis prompted by McClatchy news report.

In Pennsylvania’s 1926 U.S. Senate election, the Republican candidate William Vare beat his Democratic opponent, William Wilson, by over 170,000 votes. Yet the U.S. Senate never seated Vare. Why not? Because his campaign had engaged in significant campaign improprieties, including spending massive sums of money in ways the Senate considered corrupt. The Senate that denied Vare the seat, by the way, was in the hands of a Republican majority, and the effort to keep Vare from obtaining the fruits of improper campaign activity was led by George Norris, the progressive Republican from Nebraska.

One can debate whether Norris and his fellow Republicans were correct in denying Vare the seat. Although there was some tampering with ballots that affected the vote count, this kind of fraud—directly manipulating the vote tally itself—was not nearly enough to wipe out Vare’s six-figure margin of victory. Rather, Vare was denied the seat on the theory that the real votes actually cast for him had been improperly polluted by his corrupt campaign expenditures.

It is dangerous in a democracy to deny voters their choice of which candidate to put in office on the ground that the electorate’s actual decision was tainted by misinformation. It’s like the Senate telling the citizens of Pennsylvania, “you can’t have the candidate you said you wanted because you were misled, and therefore the choice you actually expressed can’t be accepted as a genuine or authentic choice.” Because the refusal to seat Vare seems to have rested on this kind of reasoning, my initial view was that the Senate made a mistake in this case.

But I’ve come to appreciate that there is force to the other side of the argument—to Norris’s position, in other words. In some circumstances, surely, it is appropriate to void the outcome of an election because of malevolent disinformation injected into a campaign at a critical moment with the intent to affect the result. The classic hypothetical that election law scholars often consider involves “dirty tricks” designed to suppress the vote, like telling voters that their polling place has been changed or that Election Day has been changed to a different day. That kind of deliberate fraud about the voting process is one step removed from deliberately destroying valid votes that have been cast. Since the latter is certainly grounds for voiding an election, the former is thought to be as well—if it can be proved that enough actual voters were misled by the deliberate fraud about the voting process, such that the votes that would in fact have been cast were wrongfully suppressed and thus not included in the count (as they should have been). Indeed, as recently as 2013, in deciding a case involving a “robocall” effort to suppress votes by telling voters their polling place had been changed, a Canadian court indicated that this kind of impropriety would be a basis for voiding a parliamentary election if there were evidence that it affected enough votes to make a difference. See McEwing v. Attorney General of Canada, 2013 FC 525 (2013).

But deliberate fraud about the mechanics of the voting process—like a voter’s polling location—is one thing. Deliberate fraud about an opposing candidate is another. Imagine a candidate’s campaign falsely accuses an opponent of running a pedophile ring, intentionally disseminating this falsehood in the hope that it will suppress turnout in favor of the opponent. (This hypo obviously draws upon the “pizzagate” fabrication during last fall’s presidential election, but it is emphatically a hypo for purposes of present discussion insofar as the fabrication is attributed to the candidate’s campaign.) Would that circumstance also be the basis for voiding an election, assuming it could be shown that this deliberate falsehood actually had the suppressive effect that was intended? Reaching this conclusion requires a judgment that the voters who stayed home, or who changed which candidate they supported, because of this deliberately false message should not have let it affect their calculation about whom to vote for: they were wrongly duped. That kind of judgment is understandable, but it does involve second-guessing the voter’s choice—and thus arguably is invading the domain of voter sovereignty, territory that might be considered sacrosanct in a democracy.

Or maybe it should not matter whether or not there is solid proof the deliberate lie about an opposing candidate suppressed, or swayed, enough votes to turn the result of the election. Maybe it should be enough that the attempt was made. Given the inherent malevolence of one candidate spreading deliberately calumnious falsehoods about an opponent in an effort to win the election, maybe the malevolent candidate should be prohibited from profiting from that malevolence, and thus prevented from taking office based solely upon proof of the malevolence itself. In 2010, a British court reached a decision along these lines when it voided an election, denying a winning candidate a seat in Parliament upon proof that the candidate falsely accused his opponent of association with violent Islamic militants. See Watkins v. Woolas, 2010 WL 4339493, upheld in relevant part on review, R v. Parliamentary Election Court, [2011] A.C.D. 20 (Dec. 3, 2010).

Whatever one thinks of these judicial decisions from Canada and Britain, our fellow democracies, the Vare precedent makes clear that the U.S. Senate would have the power to reach a similar result in reviewing one of its own elections (as would the U.S. House of Representatives). Thus, if a candidate for U.S. Senate were to win an election after engaging in a campaign to suppress votes for the opposing candidate by knowingly fabricating a false report that the candidate’s opponent was running a pedophile ring, one can imagine a latter-day George Norris—John McCain comes to mind, for example—leading the effort to nullify the election, preventing the ostensibly winning candidate from holding a seat in the Senate despite having received more actual votes, because the election was indelibly tainted by the candidate’s deliberately calumnious falsehoods.

I raise these points about the Vare precedent and its potential ongoing relevance to improprieties in a contemporary or future U.S. Senate (or U.S. House) election in order to contrast (1) the power under the Constitution given each chamber of Congress to nullify an election to that chamber with (2) the absence of any comparable constitutional provision concerning a presidential election. I consider this contrast now in light of the McClatchy news report that there are investigations into whether the Trump campaign was involved with Russian efforts to disseminate deliberately false reports about Hillary Clinton and to target those deliberate falsehoods (like the Canadian “robocall” disinformation campaign) in a way designed to maximize the likelihood that they would suppress votes for Clinton in key battleground locations. I hasten to say that the allegations in the McClatchy news report are far from proven—there is indeed no specific available evidence to corroborate what is allegedly under investigation—and so what follows is simply based on a hypothetical assumption that a presidential candidate’s campaign was involved in this kind of deliberate disinformation activity.

The main observation that I wish to make is that the Constitution fails to provide, in the context of a presidential election, any institution with authority comparable to the power of the Senate (or House) to judge the elections of its own members. That omission, of course, is because the Founders did not conceive of presidential elections as Americans in the twenty-first century do.   The Electoral College picks the president. Voters only pick the electors. That means this: if Russian disinformation “polluted” the 2016 presidential election in any way comparable to Vare’s corruption polluting his 1926 Senate victory, it was because the Russian disinformation “polluted” the vote for the electors from Michigan, Wisconsin, and Pennsylvania. But once those electors cast their votes for president on December 19, their job was over. They could not be unseated from their state-level office (their office being that of “presidential elector” in their particular state, and their sole function since fulfilled). There would be no way, therefore, for Congress to undo the votes for president that these since-dismissed electors (who are state, not federal, officials) already had cast, which are constitutionally distinct from the ballots that voters earlier had cast for the electors themselves. A problem affecting the ballots that voters cast in November is constitutionally moot after the electors have discharged their constitutionally separate duty to cast their Electoral College votes for president. That’s the lesson from the resolution of the disputed Hayes-Tilden election of 1876. (For more details, see chapter 5 of Ballot Battles.)

The key point, then, is that for presidential elections Congress lacks a power of the kind that the Senate exercised in the Vare case. To be clear, the Senate’s power in the Vare case was not contingent upon exercising it before Vare was seated. Rather, the Senate could have seated him first and then unseated him afterwards, upon making the same judgment that his election was indelibly polluted by impropriety. Thus, in a future case, if a Senator’s election were to benefit from a Russian disinformation campaign, but the Senate did not come to understand until months after the Senator was sworn into office how that disinformation campaign operated in terms of micro-targeting particular voters through innovative use of social media technologies, the Senate would have the power to unseat that Senator based on the new information coming to light about the way in which the election had been improperly tainted. By contrast, Congress has no power to reach back and undo a presidential election in the same way.

The Constitution, of course, does provide the power for Congress to impeach and remove a president. But this impeachment power is analytically distinct from the kind of power to judge an election that the Senate exercised in the Vare case. (The Senate has a separate expulsion power that is analytically comparable to impeachment.) For one thing, the impeachment power necessarily focuses on wrongdoing committed by the President himself. The Senate’s elections power is not so limited. The Senate can void an election to that chamber for wrongdoing committed by the candidate’s campaign, or by campaign supporters (domestic or foreign) on behalf of the candidate, even if the candidate himself (or herself) had no personal knowledge of the wrongdoing. The impeachment power, moreover, is limited to “high crimes and misdemeanors.” Whatever that language means, and whatever latitude the House and the Senate have in interpreting it, the words “crimes” and “misdemeanors” connote the kind of penal wrongdoing for which an individual could be sent to prison. Once again, the elections power that the Senate exercised in the Vare case is not so strictly limited: even if the campaign expenditures there were not strictly “against the law” in the way that could give rise to penal liability, the Senate was entitled to make the elections-related judgment that these expenditures were inherently corrupt and improper in a way that tainted the election itself—so that the election result could not stand even if no one was at risk of going to jail for the same campaign activity.

Whenever the dust settles on the 2016 election (and that may not be for a long while), one question that is likely to remain for the future is whether it is wise for the United States to lack an institution for judging the validity of a presidential election comparable to the power of each chamber of Congress to judge the validity of its own elections. Is it wise, in other words, either to leave Congress powerless to determine the validity of a presidential election that arguably has been tainted with an intentionally malevolent disinformation campaign—or, in the alternative, for Congress to be forced to invoke the impeachment power, which was designed for other purposes and ill-suited as a substitute for judging the validity of an election, as the only available means of addressing an impropriety that may have tainted the outcome of a presidential election?

It is way too early to make definitive assessments on these issues, in light of the fast-unfolding nature of news concerning what happened in the 2016 election. But, in light of these same news reports, it is not too early to begin thinking about these issues.

The Electoral Fix We Really Need

The Electoral College winner should be the majority choice in each state that counts towards that Electoral College victory.

The Electoral College winner should be the majority, not just plurality, choice in each state that counts towards those 270 or more Electoral College votes—and this reform is one that every state currently has the constitutional power to adopt on its own.

As January 20 approaches, and with it the inauguration of the new president, my inbox contains two items that underscore the need for reforming the method by which the nation elects its presidents. The first item is the package of reprints for my recent article, Third-Party and Independent Presidential Candidates: The Need for a Runoff Mechanism, which was a contribution to a symposium held last fall by the Fordham Law Review. The second, and very much related, is an article by Eric Maskin and Amartya Sen in the latest issue of the New York Review of Books, entitled The Rules of The Game: A New Electoral System.

Maskin and Sen are both Nobel prize-winning economists at Harvard University, who have written, together and individually, leading texts on the mathematical properties of various voting systems. The new Maskin-Sen article is an expansion upon an op-ed-type piece that these authors wrote in the New York Times last April. I cited that piece in my Fordham contribution, and so it is not surprising that there are significant affinities between their newly expanded article and the Fordham contribution. But now that the 2016 presidential election is finally over, it is worth highlighting these common points and their implications for the future.

First, a major point made by both Maskin-Sen and the Fordham piece is that the nation’s existing system for presidential elections does not adequately handle the existence of third-party or independent candidates. Maskin-Sen cite the familiar “spoiler” role that Ralph Nader played in 2000, “attract[ing] nearly 100,000 votes in Florida, mostly at Al Gore’s expense, giving George W. Bush the presidency.” My Fordham article, extending beyond this one example, reviews the totality of presidential elections to identify the large number of instances—over ten percent, including most significantly Theodore Roosevelt’s “Bull Moose” run as the Progressive Party candidate in 1912—in which the presence of a third-party or independent candidate likely determined which of the two major-party candidates won the White House.

In the end, was 2016 an election of this type? Did Jill Stein’s candidacy cause Donald Trump’s victory in the Electoral College over Hillary Clinton? That’s debatable. Jill Stein did receive more votes in each of the three states crucial to Trump’s Electoral College victory—Michigan, Pennsylvania, and Wisconsin—than the final number of votes by which Trump defeated Clinton in those states (as shown in this table).

But we do not know whether, if Jill Stein’s name had not been on the ballot in these three states, her voters would have voted for Clinton instead—or at least enough of them to put Clinton ahead of Trump in all three states, thereby changing the outcome of the election as a whole. Stein’s voters might have stayed home instead of voting for Clinton. Indeed, given the narrow gap in Pennsylvania between Stein’s vote (49,941) and Trump’s margin of victory (44,292), it seems doubtful that Clinton would have ended up with more votes than Trump in Pennsylvania if Stein had not been on the ballot there.

Moreover, Stein was not the only third-party candidate in 2016. Gary Johnson was the Libertarian candidate, and he got substantially more votes than Stein in all three of these pivotal states: 172,136 in Michigan; 146,715 in Pennsylvania; and 106,674 in Wisconsin. Given that Johnson was formerly the Republican governor of New Mexico (and his running-mate, Bill Weld, formerly the Republican governor in Massachusetts), it is at least conceivable that Johnson pulled more votes from Trump than Stein siphoned from Clinton. If we are going to be intellectually honest in analyzing the possibility that Stein as a third-party candidate may have made a difference in the outcome of the election, then we must equally consider the possibility that Johnson’s presence as another third-party candidate may have neutralized (or even more than offset) Stein’s role in this respect. If we are honest, we must acknowledge that we just don’t know what the result would have been if Trump and Clinton were the only two names on the ballot in Michigan, Pennsylvania, and Wisconsin—or indeed, in other close states, including those like New Hampshire that Clinton won—when voters went to the polls on November 8, 2016.

But this uncertainty is itself significant. It indicates the deficiency of our nation’s existing electoral system, pursuant to which we cannot be sure that the candidate who won a majority of Electoral College votes was supported by a majority of citizen-voters on November 8 in each of the states that make up that candidate’s Electoral College majority. The problem, as Maskin and Sen observe, is not just that Clinton won 2,864,974 more votes than Trump nationwide. Even if we believe that the Electoral College appropriately reflects federalism values in allocating each state the same number of Electoral College votes as its combined number of Senators and Representatives in Congress—so that Michigan has 16, Pennsylvania has 20, and Wisconsin has 10 (and so forth)—the real problem with the results of the 2016 presidential election is that Trump received only 47.3% of the vote in Michigan, only 48.6% of the vote in Pennsylvania, and only 46.5% of the vote in Wisconsin. A majority of voters in each of these three pivotal states—crucial to Trump’s win in the Electoral College—voted against Trump and for another candidate instead.

To be sure, Clinton won only 46.4% of the vote in Minnesota, only 46.8% in New Hampshire, and only 47.9% in Nevada, to take three states for which she received all of their Electoral College votes. But two wrongs don’t make a right. It just means that, for many states, we cannot be confident that the candidate who actually received the state’s Electoral College votes is the candidate who would have received them if the rules required a candidate to obtain a majority of citizen-votes when compared head-to-head against another candidate, using some sort of runoff procedure to select a winner whenever the list of candidates on the November ballot numbers more than two. In other words, the wrong candidate may have received the Electoral College votes in each of these states, insofar as there may have been another candidate on the ballot whom a majority of participating voters actually preferred, at the time they cast their ballots, over the candidate who was awarded that state’s Electoral College votes.

This observation about the deficiency of existing electoral rules leads to another key point made by both Maskin-Sen and my Fordham article. There exists an alternative electoral mechanism that would remove this deficiency and the uncertainty it causes. In other words, we could have used an electoral mechanism in 2016 that would have told us definitively whether Stein’s being on the ballot (and/or Johnson’s) affected whether or not Trump prevailed over Clinton.

This alternative electoral mechanism would have given each voter the option of ranking candidates in addition to their first-choice candidate. It would have been an option, not a requirement. Thus, a Stein voter could have ranked Clinton as a second choice—or not, as that particular Stein voter preferred. Likewise, a Johnson voter could have ranked Trump second, or not, or even ranked Trump second and Stein third, to make clear that Clinton was that voter’s last choice for president.

When using ballots that enable voters to provide this kind of optional ranking, there are somewhat different mathematical methods of aggregating the various preferences expressed by the voters into a single composite result. My Fordham article advocated the use of one of these mathematical methods, often called Instant Runoff Voting. As this name implies, this mathematical calculation works by eliminating the candidate with the fewest first-place votes, identifying the second-choice candidates of those voters who most preferred this eliminated candidate, and then redistributing those voters’ ballots to the remaining candidates to see if now one of them is preferred by a majority of voters. If not, the next least-preferred candidate is eliminated using the same redistribution procedure, and so on, until one candidate obtains a majority of votes.
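The elimination-and-transfer procedure just described can be sketched in Python. The candidate names and ballot counts below are hypothetical, chosen only to show the mechanics; rankings past the first choice are optional, as on the ballots the article describes.

```python
# Sketch of Instant Runoff Voting. Each ballot is a list of candidate
# names, highest preference first; a ballot may rank only one candidate.
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-place
    votes, transferring each of that candidate's ballots to its next
    surviving choice, until one candidate holds a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tallies = Counter({c: 0 for c in candidates})
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total:  # strict majority of active ballots
            return leader
        candidates.remove(min(tallies, key=tallies.get))

# 100 ballots: 45 rank X first; 40 rank Y first; 15 rank Z first, Y second.
ballots = [["X"]] * 45 + [["Y"]] * 40 + [["Z", "Y"]] * 15
print(instant_runoff(ballots))  # Y wins after Z is eliminated (55 to 45)
```

In the example, X leads the first round 45-40-15, but once Z is eliminated, Z’s 15 ballots transfer to their second choice Y, giving Y a 55-vote majority.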

Instant Runoff Voting is used in some big-city elections around the country (including San Francisco and Minneapolis), as well as for legislative elections in Australia. Maine just adopted Instant Runoff Voting for most of its elections (although, regrettably, not its presidential elections). Maskin-Sen embrace Instant Runoff Voting as desirable, especially in comparison to the existing deficient system, although they favor a different mathematical method for aggregating the voter preferences expressed on ranked-choice ballots. Their favored method is often called the Condorcet calculation, after the eighteenth-century French philosopher who first advocated it.

Condorcet calculation looks at the rankings indicated on each voter's ballot and uses them to compare each candidate against every other candidate in a series of head-to-head matchups. Thus, with Trump, Clinton, Johnson, and Stein all on the ballot (among other candidates), a computer could examine each voter's ballot and then determine overall whether voters prefer Trump to Clinton, Trump to Johnson, Trump to Stein, Clinton to Johnson, Clinton to Stein, and so forth. If one candidate beats all others in this series of head-to-head matchups, that candidate is known as the Condorcet winner.

As Maskin-Sen acknowledge, the Condorcet method is imperfect insofar as sometimes, at least theoretically, there will be no Condorcet winner: no candidate who prevails over all others in the series of head-to-head matchups. Imagine an election with three candidates: Rock, Paper, and Scissors. Scissors beats Paper, Paper beats Rock, and Rock beats Scissors. While each individual voter's ranking of candidates won't be cyclical in this way, it is possible that the electorate's rankings as a whole have this cyclical character when aggregated using the Condorcet calculation. As a remedy for this defect, Maskin-Sen suggest that Instant Runoff Voting could be used in those situations where no Condorcet winner is identifiable. (As a mathematical method, Instant Runoff Voting always produces a single identifiable winner, and thus does not suffer from the same imperfection.)
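The head-to-head tallying that the Condorcet calculation performs, including the possibility that no winner emerges, can be sketched as follows. The ballots and candidate names are hypothetical, and the convention that an unranked candidate sits below every ranked one is an assumption for the sketch.

```python
from itertools import combinations

def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other candidate head-to-head,
    or None when the electorate's aggregate preference is cyclical."""
    def rank(ballot, c):
        # A candidate left off a ballot ranks below every ranked candidate.
        return ballot.index(c) if c in ballot else len(ballot)
    losses = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_votes = sum(rank(bal, a) < rank(bal, b) for bal in ballots)
        b_votes = sum(rank(bal, b) < rank(bal, a) for bal in ballots)
        if a_votes != b_votes:
            losses[b if a_votes > b_votes else a] += 1
    unbeaten = [c for c in candidates if losses[c] == 0]
    return unbeaten[0] if len(unbeaten) == 1 else None

# The cyclical electorate from the text: every candidate loses one matchup,
# so no Condorcet winner exists.
cycle = ([["Rock", "Scissors", "Paper"]] * 3
         + [["Paper", "Rock", "Scissors"]] * 3
         + [["Scissors", "Paper", "Rock"]] * 3)
print(condorcet_winner(cycle, ["Rock", "Paper", "Scissors"]))
```

A non-cyclical electorate, by contrast, yields the single candidate whom a majority prefers in every pairing.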

Usually, the winner as calculated by Instant Runoff Voting is the same candidate as the Condorcet winner, a point stressed by Rob Richie, the executive director of FairVote.org. Thus, as a practical matter, it would make little difference whether a government adopted Condorcet calculation with Instant Runoff Voting as a backstop for the rare circumstance in which no Condorcet winner is identifiable, as Maskin-Sen recommend, or instead simply adopted Instant Runoff Voting as its electoral procedure. Indeed, suppose Instant Runoff Voting were adopted as the rule, with one exception: in the rare instance in which a Condorcet winner is identifiable but would not be the candidate chosen by Instant Runoff Voting, the Condorcet calculation supersedes. That use of Instant Runoff Voting would be exactly equivalent to the Maskin-Sen recommendation.
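The combined rule that Maskin-Sen recommend, Condorcet calculation with Instant Runoff Voting as the backstop, can likewise be sketched as a single procedure. Everything here is illustrative: the ballot format and the hypothetical Rock/Paper/Scissors electorate (which has no Condorcet winner, so the runoff backstop decides) are assumptions for the sketch, not anyone's actual election data.

```python
from collections import Counter
from itertools import combinations

def majority_choice(ballots, candidates):
    """Condorcet calculation first; if no Condorcet winner exists,
    fall back to Instant Runoff Voting (the Maskin-Sen-style combination)."""
    def rank(ballot, c):
        return ballot.index(c) if c in ballot else len(ballot)
    # --- Condorcet step: look for a candidate unbeaten head-to-head. ---
    losses = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_votes = sum(rank(bal, a) < rank(bal, b) for bal in ballots)
        b_votes = sum(rank(bal, b) < rank(bal, a) for bal in ballots)
        if a_votes != b_votes:
            losses[b if a_votes > b_votes else a] += 1
    unbeaten = [c for c in candidates if losses[c] == 0]
    if len(unbeaten) == 1:
        return unbeaten[0]
    # --- No Condorcet winner: Instant Runoff Voting as the backstop. ---
    remaining = set(candidates)
    while len(remaining) > 1:
        tallies = Counter()
        for bal in ballots:
            top = next((c for c in bal if c in remaining), None)
            if top is not None:
                tallies[top] += 1
        leader = max(remaining, key=lambda c: tallies.get(c, 0))
        if tallies.get(leader, 0) * 2 > sum(tallies.values()):
            return leader
        remaining.discard(min(remaining, key=lambda c: tallies.get(c, 0)))
    return remaining.pop()

# A cyclical electorate: the Condorcet step finds no winner,
# so the runoff backstop settles the outcome.
ballots = ([["Rock", "Scissors", "Paper"]] * 4
           + [["Paper", "Rock", "Scissors"]] * 3
           + [["Scissors", "Paper", "Rock"]] * 2)
print(majority_choice(ballots, ["Rock", "Paper", "Scissors"]))
```

When a Condorcet winner does exist, the backstop never runs, which is why this combined rule and plain Instant Runoff Voting usually coincide.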

The key point, at least for present purposes, is not to quibble over either the relative merits of Instant Runoff Voting and Condorcet calculation or the exact operational relationship between these two mathematical methods. Instead, the key point is that both are vastly preferable to the existing electoral system because both (especially if used in combination as described above) enable the identification of the candidate whom the majority of voters prefer when compared to any other potential runner-up candidate also on the ballot at the same time. Moreover, as both Maskin-Sen and my Fordham article explain, if this kind of ranked-choice ballot had been used for the 2016 presidential election, then we would have known definitively whether Trump or Clinton would have been preferred by a majority of voters in each state if the electorate's choice were just between the two of them.

(The website Vox commissioned an analysis of the 2016 presidential election that purports to show that Clinton would have beaten Trump using either Instant Runoff Voting or Condorcet calculation. In my judgment, however, this analysis is not particularly relevant, for two reasons: first, it relies upon a post-election public opinion survey, not actual election results; second, it purports to analyze the overall nationwide outcome between Clinton and Trump under these alternative electoral methods. But, as discussed above, the constitutionally relevant question given the Electoral College is how Clinton and Trump would have fared on a state-by-state basis if voters had been given ranked-choice ballots and those rankings had been aggregated using either Instant Runoff Voting or the Condorcet method. If that had happened, or if we could confidently reproduce what the state-by-state results would have been in that situation—which, as far as I know, we cannot do, despite the Vox analysis—then we would know whether or not Trump was the majority's choice in the states that gave him his Electoral College victory.)

This kind of ranked-choice ballot not only provides the definitive determination of the majority's preference, which democracy deserves, but also enables a robust multiplicity of candidates without their causing problems. Jill Stein, or Ralph Nader, or Teddy Roosevelt in 1912, can occupy a spot on the ballot—giving voters a wider array of options—without any distorting effect. When a ranked-choice ballot is used, a vote for Stein in no way undermines the capacity to determine the electorate's relative preference between Trump and Clinton. The same point applies to a vote for Nader and the electorate's relative preference between Bush and Gore. And so forth.

This observation leads to another: it's not just "fringe" candidates, like Jill Stein, who can be added to the ballot without adverse consequences. It is also moderate, mainstream candidates whom voters might prefer to the two major-party nominees, particularly if, as in 2016, the two major parties happen to nominate candidates who are highly unpopular with broad swaths of the electorate. Both Maskin-Sen and my Fordham article invoke the example of Michael Bloomberg: a mainstream moderate who potentially might have been competitive against both Trump and Clinton, but who decided against entering the race for fear of drawing more support from Clinton than Trump. With a ranked-choice ballot, Bloomberg could have run without this worry. Any voter who preferred Bloomberg to Clinton, but who also strongly preferred Clinton to Trump, could have made these rankings known. Bloomberg would have won if enough voters preferred him to both Clinton and Trump; otherwise, his presence in the race would not have affected any voter's relative preference between Clinton and Trump, and Clinton would have beaten Trump as long as more voters preferred her to him in enough states to equal or exceed 270 Electoral College votes (as may have been the case, although we cannot be sure for the reasons already explained).

Finally, the most important point made by both Maskin-Sen and my Fordham article is this: under the Constitution as it currently exists, any state has the power to adopt ranked-choice ballots to determine which candidate wins the state's Electoral College votes. There is absolutely no need for a constitutional amendment to adopt this highly beneficial change. And while it would be most desirable if all fifty states made this move (just as it was desirable a century ago for all states to adopt the secret Australian ballot), there is no need for the states to act in concert. Any single state on its own could gain the benefit of adopting ranked-choice ballots for the allocation of its Electoral College votes, even if no other state makes the same move. That situation would be analytically equivalent to one state choosing to adopt all-mail balloting, as Oregon did, without waiting for other states to adopt this electoral innovation. (In making this comparison, I'm not taking a position on the wisdom of all-mail balloting; I'm only making the point that, as a matter of constitutional power, Oregon was perfectly entitled to do this on its own as its method of conducting its participation in presidential elections, even as other states continued to use traditional Election Day polling places.)

There is thus no doubt whatsoever about the current authority of each state, entirely on its own initiative, to adopt ranked-choice ballots to determine which candidate wins the state's Electoral College votes. Under Article Two of the Constitution, each state's legislature can choose the manner of appointing that state's presidential electors. It would be an exercise of this unquestionable constitutional power for a state legislature to enact a law providing that the state's presidential electors will be those pledged to the candidate who is the choice of the majority of voters as determined using ranked-choice ballots—rather than, as current law provides, those pledged to the candidate who is merely preferred by a plurality of voters. (This change in state law wouldn't address the issue of so-called "faithless electors," who deviate from the candidate to whom they are pledged. But as the 2016 election demonstrated, even when the major-party candidates are as unpopular as Trump and Clinton were, the risk of "faithless electors" changing the Electoral College outcome is extraordinarily remote.)

It is greatly regrettable that states did not adopt this electoral change previously. But as there is no constitutional obstacle to their doing so, we can only hope that they muster the political will to make this change sometime soon—preferably before 2020. For anyone concerned about the health of American democracy in the aftermath of the 2016 presidential election, this reform is worth rallying around. Given the power of the presidency, and thus the importance of the office, we should make sure that the candidate who wins the Electoral College is the one actually preferred by a majority of voters in each state necessary for that Electoral College victory.