Math has always, in fact, been implicated in how “fairness” is construed in American electoral politics. While this might seem like a contemporary post-digital question, it has a much longer history dating back to the origins of American representative democracy. Numbers and quantification — so often taken to be objective, unbiased, and merely descriptive — can actually end up formalizing political arguments. Trying to put a number on fairness is important, but it is an act of judgment, not neutral description.

Walter Willcox, a former statistician in the Census, advocated for the method of “major fractions.” Harvard mathematician Edward Huntington promoted that of “equal proportions”; many mathematicians supported him. Throughout the 1920s, both men testified in several hearings in front of the Committee on the Census, composed detailed reports outlining their respective positions, wrote letters to other scholars seeking allies and supporters, and brought their feud to the educated public in a series of papers published in Science. For Willcox, the fair solution was not the one that satisfied the mathematical community, but the one that met the approval of Congress and the American people.

The problem, however, is not simply geometric, especially when it comes to partisan gerrymandering. One of the most well-accepted principles of gerrymandering today is that of partisan symmetry, first advanced by Gary King and Robert Browning. If maps carved up by politicians are not drawn strategically, with the purpose of advancing one party over the other, then they should fall within the bell curve of all possible maps. Is a given map an outlier, lying at the tail of the bell curve, or does it fit squarely in the middle of the distribution? Such an approach was impossible a decade or two ago, when neither the expertise nor the computational capacity was available.
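The bell-curve comparison described above can be sketched as a toy Monte Carlo experiment. Everything below is hypothetical: made-up precinct returns and a naive map generator that ignores contiguity and the other legal criteria real samplers must respect.

```python
import random

def seats_won(districts):
    """Count districts in which party A outpolls party B.
    Each district is a list of (votes_a, votes_b) precinct pairs."""
    wins = 0
    for precincts in districts:
        votes_a = sum(p[0] for p in precincts)
        votes_b = sum(p[1] for p in precincts)
        wins += votes_a > votes_b
    return wins

def random_map(precincts, n_districts, rng):
    """Toy stand-in for map generation: shuffle the precincts and cut
    the list into equal-sized districts. Real samplers also enforce
    contiguity, compactness, and equal population."""
    shuffled = precincts[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // n_districts
    return [shuffled[i * size:(i + 1) * size] for i in range(n_districts)]

rng = random.Random(0)
# Hypothetical precinct-level returns: (votes for A, votes for B).
precincts = [(rng.randint(200, 800), rng.randint(200, 800)) for _ in range(60)]

# The "bell curve" of seat outcomes over many randomly generated maps.
ensemble = [seats_won(random_map(precincts, 6, rng)) for _ in range(2000)]

# Where would a hypothetical enacted plan yielding 1 seat for party A
# fall within that distribution?
share_at_or_below = sum(s <= 1 for s in ensemble) / len(ensemble)
print(f"random maps with at most 1 seat for A: {share_at_or_below:.1%}")
```

If the enacted plan sits far in the tail of this distribution, it is an outlier in exactly the sense the researchers describe.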
It was impossible, Willcox insisted, to isolate the math element of the problem from democratic contexts. He wrote in Science in 1929: “the choice of a method seems to me to be of little importance compared with the need of securing congressional compliance with the constitution. I would gladly abandon my preference for the method of major fractions if I thought another method had a better chance of acceptance by Congress and the country.” For him, fairness was contextual. The optimal solution from a mathematical standpoint was not necessarily the best political one. The fact is that the role of mathematicians becomes apparent only during moments of disagreement, when political, legal, and scientific rationales clash.

In particular, if Congress adopted the method of equal proportions, then over the decades smaller states would benefit from greater representation than larger ones. The measurement of disparity and the measurement of bias were inherently connected, but they represented two different ideals about who should benefit from reapportionment. Both Willcox and Huntington knew this, and yet each man fervently (and wrongly) argued that his method was unbiased.

A new reapportionment bill was signed into law in June 1929. In its final formulation, the bill did not specify whether equal proportions or major fractions should be used.

Under the 1910 census, approximately every 212,019 (92,228,496/435) people in the United States were entitled to one representative; dividing a state’s population by that figure gives its quota, which for Massachusetts works out to 15.88.

However, various states have additional provisions. Given the various provisions of a given state, is there a way for geometers to find the optimal solution, where “optimal” concurs with some mathematical measurement? If the courts decide to adopt some of these new measures, then future fights over gerrymandering will lean on a computational definition of fairness. The measurement of fairness in this case is both operative and political.

AUGUST 10, 2018

HOW DO YOU measure fairness? Fundamental fairness in American democracy seems easy, at least as the US Supreme Court interpreted it in 1964: one person, one vote. If the population of California is twice that of Colorado, then following the Constitution, California should have twice as many members of Congress. But this simplicity is misleading. When voters are allocated to districts, the districts can then drastically affect the composition of the legislatures representing them after Election Day. According to federal laws, districts need to be of equal population (not geographic) size and comply with the Voting Rights Act of 1965.

The method Congress had settled on would probably have remained in effect indefinitely, and the entire debate would have been avoided, if not for some curious anomalies. Statisticians in the Bureau of the Census began noticing them toward the end of the 19th century. Neither of the obvious fixes, ignoring the fractions or rounding them, works: you might end up with either too many or not enough representatives. Some methods, meanwhile, carried their own tilt: the method of major fractions benefited larger states. Apportionment for the next decade remained based on the outdated 1910 census.

These latest approaches, while promising, still avoid the fundamental question: is fairness something you can mathematically optimize? Conflating the two will only lead to greater confusion and potential abuse of democracy by the numbers. This is not to say that the Supreme Court should not choose a measurement.

¤

The challenges of partisan gerrymandering are not new, nor is the hope that mathematics can offer a cure. For example, in the 2016 election in North Carolina, a statewide voter preference of 46.4 percent for Democrats resulted in a congressional map with 10 Republican congressmen and only three Democrats. In other words, despite winning only 53.6 percent of the votes, Republicans won almost 77 percent of the seats. The judges’ decision to send both cases back to lower courts is revealing.

Keep in mind that the Constitution dictates that the seats in the House of Representatives be apportioned after each decennial census in proportion to the size of the population in each state. Thomas Jefferson, Alexander Hamilton, Daniel Webster, and John Adams tried to tackle it, each offering a methodology for apportionment. This means that the number is always open to debate and argumentation, as it should be.

For example, when he testified before the Committee on the Census in January 1927, Willcox argued that one of the strongest arguments “in favor of the method of major fractions is that it seems to me to hold the balance between the large state and the small state.” Huntington feverishly disputed this claim in the pages of Science five months later: “The mathematical evidence, which was seriously misrepresented in the recent hearings, clearly indicated that the method of equal proportions is the one method which has no bias in favor of either the smaller or the larger states.”
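The North Carolina seats-versus-votes arithmetic quoted earlier in this section is easy to verify. A minimal check, using only the figures given in the text:

```python
# 2016 North Carolina congressional results, as quoted in the text.
dem_vote_share = 0.464
rep_vote_share = 1 - dem_vote_share        # 53.6 percent of the votes
rep_seats, dem_seats = 10, 3               # the resulting 13-seat delegation

rep_seat_share = rep_seats / (rep_seats + dem_seats)

print(f"Republican vote share: {rep_vote_share:.1%}")  # 53.6%
print(f"Republican seat share: {rep_seat_share:.1%}")  # 76.9%
```

Ten of 13 seats is 76.9 percent, the “almost 77 percent” figure in the text.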

Maine’s quota, similarly, was 3.5, and Arkansas’s 7.42. But in this time of great demographic change, it was in the interest of rural states with declining populations to maintain the status quo. Surely, one congressman insisted, it was crucial to redistribute the seats among the states to account for these changes. Instead, the bill instructed that if Congress failed to act, the last method used would be used again. The Census Bureau was instructed to provide Congress with the results of both methods.

In a sense, the two men disagreed on how to measure the “amount of inequality” between any two states. There was no way to eliminate all inequality, but it was also unclear what the most “just” way to measure and minimize it was. The linguistic shift from “fairness” to “bias” is telling. Finally, Willcox and Huntington also disagreed on the nature of the problem itself and what sort of expertise was required to solve it. Throughout the decade, Huntington maintained that the problem of apportionment was purely about arithmetic. For Willcox, by contrast, the problem was as much about politics as arithmetic.

But at what point does fiddling with district maps amount to a constitutional violation? Though their approaches are different, they share a belief that advances in computer science and mathematics over the past decades have at long last made it possible to provide mathematical solutions to the gerrymandering problem. The fate of the legislative branch hangs in the balance, and who gets to decide fair representation will undoubtedly shape the US government for decades to come. In other words, if the Democrats win 60 percent of the seats with 55 percent of the total votes, then the Republicans should win roughly 60 percent of the seats if they receive 55 percent of the votes. If, on the other hand, 55 percent of the votes gives Republicans 80 percent of the seats, symmetry has been violated. A map drawn by legislators could thus be compared to randomly generated maps.
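Partisan symmetry, as described above, can be expressed as a small predicate. The function name and the five-point tolerance below are illustrative choices of mine, not part of King and Browning’s formulation:

```python
def violates_symmetry(seat_share_a, seat_share_b, tol=0.05):
    """Given the seat shares each party would win with the SAME vote
    share (say, 55 percent), symmetry requires the two to be roughly
    equal. Returns True when they differ by more than `tol`."""
    return abs(seat_share_a - seat_share_b) > tol

# 55% of the votes gives Democrats 60% of the seats. Symmetry holds only
# if 55% would also give Republicans roughly 60% of the seats.
print(violates_symmetry(0.60, 0.60))  # False: symmetric
print(violates_symmetry(0.60, 0.80))  # True: 55% of votes, 80% of seats
```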
In oral arguments in a case coming out of Wisconsin, Justice Alito described partisan gerrymandering as “distasteful,” but quickly cautioned that if the Court were to rule in the case, it would have to agree upon some standard — some measurement (ideally not subject to interpretive argument) — for deciding whether a given map represents voters fairly. The principle of partisan symmetry states that given roughly equal conditions, the result of an election should be the same regardless of which party is in control. Arguing for a geometrically optimized solution is thus insufficient.

In the midst of these debates, a technical disagreement emerged, which revolved not around how to count the population or limit the size of the House, but rather around the method of apportionment — namely, how could state seats in the House be divided in practice? Since you can’t send .5 of a representative from Maine, there needs to be a procedure for smoothing out these fractions. Willcox had a different idea of how disparity should be measured; the voice of legal and political scholars, he argued, was just as important as those of the mathematicians. At first glance, it isn’t obvious why the two measurements of disparity proposed by Huntington and Willcox would lead to different apportionments, but if you work through several examples, the differences become clear.

¤

The current debate about gerrymandering follows similar logic. The Court heard two cases of alleged partisan gerrymandering this year (Gill v. Whitford, Benisek v. Lamone). The latest ruling by the Court did not close the judicial debate over partisan gerrymandering but merely postponed it. But neither should the Court argue against a quantitative strawman. Even if no objective standard exists, this does not imply that there are no standards.

For example, during the 1920s, Congress was unable to reach an agreement. Some congressmen argued that the size of the House of Representatives should be increased to accommodate population growth (a common practice until then); others argued that only native or naturalized citizens should be counted for the purpose of apportionment (as a response to the flood of immigrants); a small number insisted that the disenfranchisement of African Americans in the South should be counted against the total population of Southern states (fulfilling Section Two of the 14th Amendment, which states that when the right to vote is denied to law-abiding citizens, the basis of representation should be reduced to account for this infringement). In the 1920s, as Congress struggled to pass an apportionment bill, two scholars devised two new methods.

To figure out how many representatives Massachusetts would send to Congress, the next step was to look at the total population of the state, which was 3,366,416, and divide that number by 212,019. What are you going to do with these remainders? The optimal solution, according to Huntington, was the one that minimized this ratio of average district sizes between any two states. The “efficiency gap” seeks to measure each party’s “wasted votes” (votes are considered “wasted” if they are cast for a winning candidate in excess of the bare majority needed to carry a seat, or if they are cast for a candidate who does not win a seat).
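The efficiency gap just described can be computed directly. Below is a sketch under the usual two-party simplification; the helper name and the toy district returns are mine, not from the Court filings:

```python
def efficiency_gap(district_results):
    """Efficiency gap: a vote is 'wasted' if it is cast for a losing
    candidate, or is in excess of the bare majority needed to win.
    The gap is the difference in the two parties' wasted votes,
    divided by all votes cast. `district_results` is a list of
    (votes_a, votes_b) pairs; ties are not handled specially here."""
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in district_results:
        district_total = votes_a + votes_b
        needed = district_total // 2 + 1      # bare majority
        if votes_a > votes_b:
            wasted_a += votes_a - needed      # winner's surplus votes
            wasted_b += votes_b               # all of the loser's votes
        else:
            wasted_b += votes_b - needed
            wasted_a += votes_a
        total += district_total
    return (wasted_a - wasted_b) / total

# Two districts: party A wins one narrowly, party B wins the other
# overwhelmingly — A wastes 24 votes to B's 74.
print(efficiency_gap([(55, 45), (20, 80)]))  # -0.25
```

The sign records which party’s votes are being used more efficiently; a perfectly balanced map, where both parties waste equally, scores zero.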
If the average district in Massachusetts is 212,000 (population divided by number of representatives) and in Missouri 223,000, then the goal is to ensure that the ratio between the former and the latter is as close to one as possible (because if they were equal, the ratio would be exactly one). For example, if a New York state resident’s share of a representative was .00000472, and a South Dakota resident’s share was .0000052, a fair representation, on Willcox’s telling, would be one that ensured that the absolute difference between the two numbers was as close to zero as possible.

Recently, mathematicians and computer scientists from Duke University, Tufts University, and Princeton University entered the fray. Quantification is useful, and the suggestions put before the Court are persuasive and rigorous. Ironically, while mathematical solutions seem to promise universality, there may in the end not even be a national consensus on fairness.

¤

When districts are drawn by politicians for partisan advantage, this is known as partisan gerrymandering. One solution researchers are testing uses supercomputers to randomly generate thousands of possible maps for a given state, ranking them according to a given measurement: for example, the number of seats a given political party would be expected to win under each map, or the so-called efficiency gap.

But unlike in mathematics textbooks, when it comes to people, numbers do not add up neatly — and that is the crux of the problem. Luckily for Congress, after the 1930 census, Huntington’s and Willcox’s methods were in agreement, and so the debate on method was postponed until the 1940s, when Huntington prevailed.

Clearly, there is no singular fair method of apportionment and no such thing as a fair redistricting map, at least not if by “fair” we mean some practical objective measurement rather than, say, a theoretical possibility. The perennial challenge, though, is that quantification and its opaque rigors can all too easily be strategically deployed and conflated with the interpretive work of politics. Who, in other words, gets to decide what fairness is?

¤

The judges, however, avoided answering the principal question at the core of both cases: is there a standard according to which the courts can rule a map to be unconstitutional? The decision may be left to states to adjudicate. For example, districts need to be compact and contiguous and to respect communities of interest. Perhaps the stakes have never been as high as they are today, when conservatives control the executive branch, and the Supreme Court may, after Justice Kennedy’s retirement, sway rightward for a generation.

By 1850, Congress settled on the method known as the “Hamilton method”: once you allocate all the whole numbers in each state’s quota, you simply arrange the fractions in decreasing order and begin allocating the “remaining” seats until you reach the desired size of the House. By 1926, one congressman urged his colleagues to act; he pointed out that the district of Los Angeles, which had one representative, had more than a million inhabitants, while some other districts around the country had as few as 180,000 inhabitants. Since a perfect solution was now recognized as impossible — with some states inevitably better represented than others — the question became how to measure disparities between states. The mathematics of relative and absolute differences is the key to the divergence in their approaches. At its core, the issue was less about the ins and outs of the mathematical theory than it was about the possibility of objectively measuring fairness. It is this fight over method that is akin to current debates over gerrymandering.

Alma Steingart is a lecturer in the Department of the History of Science at Harvard University. She earned her PhD from MIT in the Program in History | Anthropology | Science, Technology, and Society (HASTS), and served as a junior fellow in the Harvard Society of Fellows.
Justice Gorsuch and Chief Justice Roberts likewise asked whether explicit criteria could be articulated; if not, Roberts feared, the “status and integrity” of the Court could be harmed by wading into the thicket of politics. Chief Justice Roberts’s description during oral arguments of the current measurement in front of the Court, known as the efficiency gap, as “sociological gobbledygook” makes plain that the question of expertise is very much still with us today. The question of fairness undergirds representative democracy. Such a measurement, like the one about bias between large and small states in the 1920s debates, operates both as a mathematical and a political metric. In other words: our attempts to quantify fairness are anything but futile, but they should be acknowledged and implemented as normative rather than descriptive. The problem can be approached geometrically. But what would that even mean?

This problem was first recognized soon after the first US census in 1790, with many of the greatest American thinkers of that generation lending their minds to the problem. What exactly is the source of the problem? Should you ignore all fractions? Or should you round up and down depending on the fraction’s size?

From a purely mathematical point of view it was clear, to Huntington’s eye, how fairness should be measured; he was convinced that he had shown that his solution was the correct one. Both men argued that their method represented the fairest solution to the problem of apportionment, but as their dispute wore on, it became clear that fairness was a slippery concept indeed, with more than one definition. The fairest solution, according to Willcox, and the one he believed most closely followed the aims of the Constitution, would ensure that an individual’s “share” of a representative (number of representatives divided by population) in one state would be as close as possible to an individual’s share in another state.
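The two measures of disparity can be put side by side using the figures quoted in the text: Huntington compared average district sizes by their ratio, Willcox compared individual shares by their difference. A minimal sketch:

```python
# Huntington: the relative difference between average district sizes
# (the Massachusetts and Missouri figures quoted in the text).
ma_avg_district, mo_avg_district = 212_000, 223_000
ratio = max(ma_avg_district, mo_avg_district) / min(ma_avg_district, mo_avg_district)
print(round(ratio, 4))        # 1.0519 (a ratio of exactly 1 would be perfectly fair)

# Willcox: the absolute difference between individual "shares" of a
# representative (the New York and South Dakota figures in the text).
ny_share, sd_share = 0.00000472, 0.0000052
difference = abs(ny_share - sd_share)
print(f"{difference:.8f}")    # 0.00000048 (zero would be perfectly fair)
```

Minimizing the ratio over all pairs of states and minimizing the absolute difference over all pairs of residents are different objectives, which is why the two methods can hand out seats differently.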
No one denies that politicians in both parties practice gerrymandering. For mathematicians, redistricting can be thought of as a geometrical optimization problem.

Huntington argued that the correct measurement is the relative difference between average districts in each state. At first, the argument focused on how to correctly measure the disparity between any two states, but as the dispute marched on, a more complex argument emerged. Some methods tended to favor larger states and others smaller states. Balancing the impact of apportionment between the large and small states could be understood as a mathematical reading of the constitutional provision, but it inevitably smuggled political agendas into apportionment. “However widely scholars may differ on political questions they surely should be able to present a united front on questions of arithmetic,” Huntington exclaimed. Willcox, however, did not concede. But the debate did not end here.

In 1910, the population of the United States was 92,228,496 and the size of the House of Representatives was 435. Throughout the decade, various bills were drafted by Congress, but no consensus was reached. Most curious of all was the “Alabama Paradox”: all things being equal, increasing the number of seats in the House by one would result in Alabama losing a representative.
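The arithmetic running through this history can be made concrete. The sketch below reproduces the 1910 quota calculation from the text, implements the largest-remainder (“Hamilton”) method described earlier, and then uses hypothetical populations (chosen purely for illustration, not census figures) to trigger the Alabama Paradox:

```python
# The 1910 quota arithmetic from the text: one seat per
# 92,228,496 / 435 people, and Massachusetts's resulting quota.
people_per_seat = 92_228_496 / 435
print(int(people_per_seat))                    # 212019
print(round(3_366_416 / people_per_seat, 2))   # 15.88

def hamilton(populations, seats):
    """Largest-remainder apportionment: give each state the whole-number
    part of its quota, then hand out the leftover seats in decreasing
    order of the fractional remainders."""
    total = sum(populations.values())
    quotas = {s: p * seats / total for s, p in populations.items()}
    alloc = {s: int(q) for s, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    largest_remainders = sorted(quotas, key=lambda s: quotas[s] - alloc[s],
                                reverse=True)
    for s in largest_remainders[:leftover]:
        alloc[s] += 1
    return alloc

# Hypothetical three-state country that exhibits the Alabama Paradox:
# growing the "House" from 10 to 11 seats costs state C a representative.
pops = {"A": 6_000, "B": 6_000, "C": 2_000}
print(hamilton(pops, 10))   # {'A': 4, 'B': 4, 'C': 2}
print(hamilton(pops, 11))   # {'A': 5, 'B': 5, 'C': 1}
```

With 10 seats, C’s large remainder (0.43) wins it the spare seat; with 11 seats, A’s and B’s remainders (0.71 each) jump ahead of C’s (0.57), and C loses a representative even though every population stayed the same.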