Introduction
“Why can’t we have nice things?” Perhaps there’s been a time when you’ve pondered exactly this question. And by nice things, you weren’t thinking about hovercraft or laundry that does itself. You were thinking about more basic aspects of a high-functioning society, like adequately funded schools or reliable infrastructure, wages that keep workers out of poverty or a public health system to handle pandemics. The “we” who can’t seem to have nice things is Americans, all Americans. This includes the white Americans who are the largest group of the uninsured and the impoverished as well as the Americans of color who are disproportionately so. “We” is all of us who have watched generations of American leadership struggle to solve big problems and reliably improve the quality of life for most people. We know what we need—why can’t we have it?
“Why can’t we have nice things?” was a question that struck me pretty early on in life—growing up as I did in an era of rising inequality, seeing the wealthy neighborhoods boom while the schools and parks where most of us lived fell into disrepair. When I was twenty-two years old, I applied for an entry-level job at Demos, a research and advocacy organization working on public policy solutions to inequality. There, I learned the tools of the policy advocacy trade: statistical research and white papers, congressional testimony, litigation, bill drafting, media outreach, and public campaigns.
It was exhilarating. I couldn’t believe that I could use a spreadsheet to convince journalists to write about the ideas and lives of the people I cared most about: the ones living from paycheck to paycheck who needed a better deal from businesses and our government. And it actually worked: our research influenced members of Congress to introduce laws that helped real people and led to businesses changing their practices. I went off to get a law degree and came right back to Demos to continue the work. I fell in love with the idea that information, in the right hands, was power. I geeked out on the intricacies of the credit markets and a gracefully designed regulatory regime. My specialty was economic policy, and as indicators of economic inequality became starker year after year, I was convinced that I was fighting the good fight, for my people and everyone who struggled.
And that is how I saw it: part of my sense of urgency about the work was that my people, Black people, are disproportionately ill served by bad economic policy decisions. I was going to help make better ones. I came to view the relationship between race and inequality as most people in my field do—linearly: structural racism accelerates inequality for communities of color. When our government made bad economic decisions for everyone, the results were even worse for people already saddled with discrimination and disadvantage.
Take the rise of household debt in working- and middle-class families, the first issue I worked on at Demos. The volume of credit card debt Americans owed had tripled over the course of the 1990s, and among cardholders, Black and Latinx families were more likely to be in debt. In the early 2000s, when I began working on the issue, bankruptcies and foreclosures were rising and homeowners, particularly Black and brown homeowners, were starting to take equity out of their houses through strange new mortgage loans—but the problem of burdensome debt and abusive lending wasn’t registering on the radar of enough decision makers.
Few politicians in Washington knew what it was like to have bill collectors incessantly ringing their phones about balances that kept growing every month. So, in 2003, Demos launched a project to get their attention: the first-ever comprehensive research report on the topic, with big, shocking numbers about the increase in debt. The report included policy recommendations about how to free families from debt and avoid a financial meltdown. Our data resulted in newspaper editorials, meetings with banks, congressional hearings, and legislation to limit credit card rates and fees. Two years later, Congress took action—and made the problem of rising debt worse. Legislators passed a bankruptcy reform bill supported by the credit industry that made it harder for people ever to escape their debts, no matter how tapped out they were after a job loss, catastrophic medical illness, or divorce. The law wasn’t good for consumers, did nothing to address the real problems in family finances, and actually made the problem worse. It was a bad economic policy decision that benefited only lenders and debt collectors, not the public. This was a classic example of the government not doing the simple thing that aligned with what most Americans wanted or what the data showed was necessary to solve a big problem. Instead, it did the opposite. Why?
Well, for one thing, our inability to stop bankruptcy reform made me realize the limits of research. The financial industry and other corporations had spent millions on lobbying and campaign donations to gin up a majority in Congress, and many of my fellow advocates walked away convinced that big money in politics was the reason we couldn’t have nice things. And I couldn’t disagree—of course money had influenced the outcome.
But I’ll never forget something that happened on the last day I spent at the Capitol presenting Demos’s debt research to members of Congress. I was walking down the marble hallway of the Russell Senate Office Building in my new “professional” shoes—I was twenty-five years old—when I stopped to adjust them because they kept slipping off. When I bent down, I was near the door of a Senate office; I honestly can’t remember if it belonged to a Republican or a Democrat. I heard the bombastic voice of a man going on about the deadbeats who had babies with multiple women and then declared bankruptcy to dodge the child support, using the government to avoid personal responsibility. There was something in the senator’s invective that made my heart rate speed up. I stood and kept moving, my mind racing. Had we advocates entirely missed something about the fight we were in? We had been thinking of it as a class issue (with racial disparities, of course), but was it possible that, at least for some of the folks on the other side of the issue, coded racial stereotypes were a more central player in the drama than we knew?
I left Capitol Hill, watching the rush hour crush of mostly white people in suits and sneakers heading home after a day’s work in the halls of power, and felt stupid. Of course, it’s not as if the credit card companies had made racial stereotypes an explicit part of their communications strategy on bankruptcy reform. But I’d had my political coming-of-age in the mid-1990s, when the drama of the day was “ending welfare as we know it,” words that helped Bill Clinton hold on to the (white) political center by scapegoating (Black) single mothers for not taking “personal responsibility” to escape poverty. There was nothing explicit or conclusive about what I’d overheard, but perhaps the bankruptcy reform fight—also, like welfare, about the deservingness and character of people with little money—was playing out in that same racialized theater, for at least one decision maker and likely more.
I felt frustrated with myself for being caught flat-footed (literally, shoe in hand!) and missing a potential strategic vulnerability of the campaign. I’d learned about research and advocacy and lobbying in the predominantly white world of nonprofit think tanks, but how could I have forgotten the first lessons I’d ever learned as a Black person in America, about what they see when they see us? About how quick so many white people could be to assume the worst of us…to believe that we wanted to cheat at a game they were winning fair and square? I hadn’t even thought to ask the question about this seemingly nonracial financial issue, but had racism helped defeat us?
Years later, I was on a conference call with three progressive economists, all white men. It was 2010, and we were plotting the research strategy for a huge project about the national debt and budget deficit. Both measures were on the rise, as the Great Recession had decimated tax revenue while requiring more public spending to restart the economy. The Tea Party had burst onto the political stage, and everyone, from conservative politicians to the kind of Democrats who had President Obama’s ear, was saying that we needed a “grand bargain” to create a dramatically smaller government by 2040 or 2050, including cuts to Social Security, Medicaid, and Medicare. We were preparing the numbers to show that such a bargain would be the death blow to a middle class that was already on its knees, and to offer an alternative budget proposal that would include a second stimulus and investments to grow the middle class. Toward the end of our planning call, I cleared my throat into the speakerphone. “So, when we’re talking about the fiscal picture in 2040 or 2050, we’re also talking about a demographic change tipping point, so where should we make the point that all these programs were created without concern for their cost when the goal was to build a white middle class, and they paid for themselves in economic growth…and now these guys are trying to fundamentally renege on the deal for a future middle class that would be majority people of color?” Nobody spoke. I checked to see if I’d been muted. No—the light on the phone was still green. Finally, one of the economists spoke into the awkward silence.
“Well, sure, Heather. We know that, and you know that, but let’s not lead with our chin here. We are trying to be persuasive.” I found the Mute button again, pressed it, and screamed.
Then I laughed a little, and sighed. At least that economist had said the quiet part out loud for once. He was just expressing the unspoken conventional wisdom in my field: that we’d be less successful if we explicitly called out the racial unfairness or reminded people that the United States had deliberately created a white middle class through racially restricted government investments in homeownership and infrastructure and retirement security, and that it had only recently decided that keeping up those investments would be unaffordable and unwise. What was worse, I didn’t have the confidence to tell my colleagues that they were wrong about the politics of it. They were probably right.
Nearly all the decision makers in our target audience were going to be white, from the journalists we wanted to cover our research to the legislative staff we’d meet with to the members of Congress who would vote on our proposal. Even under a Black president, we were operating within a white power structure. Before long, the Tea Party movement used the language of fiscal responsibility but the cultural organizing of white grievance to force a debt ceiling showdown, mandate blunt cuts to public programs during a fragile recovery, and stall the legislative function of the federal government for the rest of Obama’s presidency. Was it possible that even when we didn’t bring up race, it didn’t matter? That racism could strengthen the hand that beat us, even when we were advocating for policies that would help all Americans—including white people?
ON THE DAY Donald Trump was to take the oath of office in 2017, I’d been the president of Demos for three years. I was gearing up to fight against the onslaught that Trump’s incoming administration portended for civil rights and liberties, for immigrants and Muslims, and for the Black Lives Matter movement that he had gleefully attacked in his campaign. But as an economic policy advocate, I also knew that the Trump agenda—from repealing the Affordable Care Act to cutting taxes for big corporations and the wealthy (apparently the concern about the national debt expired with the Obama presidency) to stopping action on climate change, which would have catastrophic economic and social costs for the country and the world—was going to do damage across the board. It would create more economic inequality. Why would white voters have rallied to the flag of a man whose agenda promised to wreak economic, social, and environmental havoc on them along with everyone else? It just didn’t add up.
The inadequacy of the tool I was bringing to this question, economic policy research, felt painfully obvious. Contrary to how I was taught to think about economics, everybody wasn’t operating in their own rational economic self-interest. The majority of white Americans had voted for a worldview supported not by a different set of numbers than I had, but by a fundamentally different story about how the economy works; about race and government; about who belongs and who deserves; about how we got here and what the future holds. That story was more powerful than cold economic calculations. And it was exactly what was keeping us from having nice things—to the contrary, it had brought us Donald Trump.
So, I made an unexpected decision. I decided to hand over the reins at Demos and start plotting a journey, one that would take me across the country and back again over the next three years. I began calling experts not on public policy but on public opinion, the psychology and the political proclivities of people: what makes us see the world in certain ways, what compels us to act, what drives us toward or against certain solutions to our big problems. Before I left, I had Demos partner with a critical race scholar and a linguist to develop our own public opinion research on race, class, and government. Most important, I leaned on the relationships I’d built over the years with grassroots and labor organizers, who introduced me to Americans of all backgrounds who were willing to talk to me about how they were making sense of one another and their futures. I remained guided by the same mission I had when I started at Demos nearly two decades prior: changing the rules to bring economic freedom to those who lack it today. But I wouldn’t be treating the issues as cut-and-dried dollars-and-cents questions, but questions of belonging, competition, and status—questions that in this country keep returning to race.
In my gut, I’ve always known that laws are merely expressions of a society’s dominant beliefs. It’s the beliefs that must shift in order for outcomes to change. When policies change in advance of the underlying beliefs, we are often surprised to find the problem still with us. America ended the policy of enforced school segregation two generations ago, but with new justifications, the esteem in which many white parents hold Black and brown children hasn’t changed much, and today our schools are nearly as segregated as they were before Brown v. Board of Education. Beliefs matter.
So, what is the stubborn belief that needs to shift now for us to make progress against inequality? I found my first clues in a series of psychology studies. Psychologists Maureen Craig and Jennifer Richeson presented white Americans with news articles about people of color becoming the majority of the population by 2042. The study authors then asked the subjects to “indicate their agreement with the idea that increases in racial minorities’ status will reduce white Americans’ status.” The people who agreed most strongly that demographic change threatened whites’ status were most susceptible to shifting their policy views because of it, even on “race-neutral policies” like raising the minimum wage and expanding healthcare—even drilling in the Arctic. The authors concluded that “making the changing national racial demographics salient led white Americans (regardless of political affiliation) to endorse conservative policy positions more strongly.” I immediately thought of the deficit project and of my white colleagues’ resistance to stating the obvious about demographic change for fear it would backfire and make austerity more popular. Six years later, there it was, that fear corroborated in a psychological experiment: thinking about a more diverse future changed white Americans’ policy preferences about government.
It was a dramatic finding, but it still wasn’t clear to me why white people would view the presence of more people of color as a threat to their status, as if racial groups were in a direct competition, where progress for one group was an automatic threat to another. And it was even more baffling to me how that threat could feel so menacing that these white people would resist policies that could benefit them, just because they might also benefit people of color. Why would they allow a false sense of group competition to become a self-defeating trap?
But then again, they weren’t getting that idea out of nowhere. This zerosum paradigm was the default framework for conservative media—“makers and takers,” “taxpayers and freeloaders,” “handouts,” and “special favors”; “they’re coming after your job, your safety, your way of life.” Without the hostile intent, of course, aren’t we all talking about race relations through a prism of competition, every advantage for one group mirrored by a disadvantage for another? When researching and writing about disparities, I was taught to focus on how white people benefited from systemic racism: their schools have more funding, they have less contact with the police, they have greater access to healthcare. Those of us seeking unity told that version of the zero-sum story; the politicians seeking division told the other version—is it any wonder that many white people saw race relations through the lens of competition?
But was that the real story? Black people and other people of color certainly lost out when we weren’t able to invest more in the aftermath of the Great Recession, or tackle climate change more forcefully under President Obama, or address the household debt crisis before it spiraled out of control—in each case, at least partly because of racist stereotypes and dog whistles used by our opposition. But did white people win? No, for the most part they lost right along with the rest of us. Racism got in the way of all of us having nice things.
If I looked back at all the vexing problems I’d worked on in my career (student debt, workers’ rights, money in politics, unfair taxes, predatory lending, low voter turnout), would I find the fingerprints of racism on all our setbacks and defeats? It is progressive economic conventional wisdom that racism accelerates inequality for communities of color, but what if racism is actually driving inequality for everyone?

THIS BOOK RECOUNTS my journey to tally the hidden costs of racism to us all. It starts where my own journey began, trying to understand how the rules of our economy became so tilted toward the already wealthy and powerful.
The people of our country are so productive and generate so much wealth, but most of the gains go to a small number, while most families struggle to stay afloat. I traveled to Mississippi and sat with factory workers trying to unite a multiracial workforce to bargain collectively for better pay and benefits. I talked to white homeowners who had lost everything in a financial crisis that began with the predatory mortgages that banks first created to strip wealth from Black and brown families. I heard from white parents and students who feared that segregated white schools would render them ill equipped for a diverse world. To understand when white America had turned against government, I traveled to one of the many places where the town had drained its public swimming pool rather than integrate it.
In each of these places, the white people’s neighbors and co-workers of color struggled more because of racism: the Latinx factory worker is paid less for the same work; Black homeownership rates are near thirty-year lows while white levels are on their way back; the Black child in the segregated school has far more barriers to overcome; the loss of public goods at the time of integration means that families of color never got to enjoy that kind of government largesse.
As the descendant of enslaved Africans and of a line of Black Americans who were denied housing, equal education, jobs, and even safety from white lynch mobs, I am well aware that the ledger of racial harms is nowhere near balanced. I know the risks I’m taking by widening the aperture to show the costs of white supremacy to our entire society. This book amasses evidence for a part of the story I believe we are neglecting at our peril, but rather than shift focus from racism’s primary targets, I hope this story brings more people’s eyes—and hearts—to the cause. Black writers before me, from James Baldwin to Toni Morrison, have made the point that racism is a poison first consumed by its concocters.
What’s clearer now in our time of growing inequality is that the economic benefit of the racial bargain is shrinking for all but the richest. The logic that launched the zero-sum paradigm—I will profit at your expense—is no longer sparing millions of white Americans from the degradations of American economic life as people of color have always known it. As racist structures force people of color into the mines as the canary, racist indifference makes the warnings we give go unheeded—from the war on drugs to the financial crisis to climate disasters. The coronavirus pandemic is a tragic example of governments and corporations failing to protect Black, brown, and Indigenous lives—though, if they had, everyone would have been safer.
I’ll admit that my journey was deeply personal, too. At its best, this country brings together all the world’s peoples and invites them to make something new. Collisions of cultures have stretched the branches of my Black family over the years, so that it now includes white, Asian, and Latinx people, too. I started my journey when I was pregnant with a child whose grandparents would be Black, white, and South Asian. I’m sure some part of me doesn’t want to believe that oppression of people of color really is an unalloyed good for white people, making us truly separate and intrinsically at odds—because then the multiracial America that made my son possible is doomed.
The logical extension of the zero-sum story is that a future without racism is something white people should fear, because there will be nothing good for them in it. They should be arming themselves (as they have been in record numbers, “for protection,” since the Obama presidency) because demographic change will end in a dog-eat-dog race war. Obviously, this isn’t the story we want to tell. It’s not even what we believe. The same research I found showing that white people increasingly see the world through a zero-sum prism showed that Black people do not. African Americans just don’t buy that our gain has to come at the expense of white people. And time and time again, history has shown that we’re right. The civil rights victories that were so bitterly opposed in the South ended up being a boon for the region, resulting in stronger local economies and more investments in infrastructure and education.
The old zero-sum paradigm is not just counterproductive; it’s a lie. I started my journey on the hunt for its source and discovered that it has only ever truly served a narrow group of people. To this day, the wealthy and the powerful are still selling the zero-sum story for their own profit, hoping to keep people with much in common from making common cause with one another. But not everyone is buying it. Everywhere I went, I found that the people who had replaced the zero sum with a new formula of cross-racial solidarity had found the key to unlocking what I began to call a “Solidarity Dividend,” from higher wages to cleaner air, made possible through collective action. And the benefits weren’t only external. I didn’t set out to write about the moral costs of racism, but they kept showing themselves.
There is a psychic and emotional cost to the tightrope white people walk, clutching their identity as good people when all around them is suffering they don’t know how to stop, but that is done, it seems, in their name and for their benefit. The forces of division seek to harden this guilt into racial resentment, but I met people who had been liberated by facing the truth and working toward racial healing in their communities.
At the end of my journey to write this book, a multiracial coalition voted to end Donald Trump’s presidency, with historic turnout levels despite a pandemic, and racial inequality topping the list of voter concerns.
That coalition included millions of white voters, particularly the college-educated and the young. Yet the majority of white voters still supported an impeached president who lied to Americans on a daily basis, whose rhetoric and policies made him a hero of white supremacist terror groups, and who mismanaged and downplayed a pandemic that cost more than 200,000 American lives in less than a year. Rather than ending the soul-searching of the Trump era, the 2020 election raised new questions about how much suffering and dysfunction the country’s white majority is willing to tolerate, and for how elusive a gain.
I’m fundamentally a hopeful person, because I know that decisions made the world as it is and that better decisions can change it. Nothing about our situation is inevitable or immutable, but you can’t solve a problem with the consciousness that created it. The antiquated belief that some groups of people are better than others distorts our politics, drains our economy, and erodes everything Americans have in common, from our schools to our air to our infrastructure. And everything we believe comes from a story we’ve been told. I set out on this journey to piece together a new story of who we could be to one another, and to glimpse the new America we must create for the sum of us.

Chapter 1
AN OLD STORY: THE ZERO-SUM HIERARCHY
Growing up, my family and my neighbors were always hustling. My mother had the fluctuating income of a person with an entrepreneur’s mind and a social worker’s heart. My dad, divorced from my mom since I was two, had his own up-and-down small business, too, and soon a new wife and kids to take care of. If we had a good year, my mom, my brother, and I moved into a bigger apartment. A bad spell, and I’d notice the mail going unopened in neat but worrisome piles on the hall table. I now know we were in what economists call “the fragile middle class,” all income from volatile earnings and no inherited wealth or assets to fall back on. We were the kind of middle class in the kind of community that kept us proximate to real poverty, and I think this shaped the way I see the world. My mother took us with her to work in Chicago’s notorious Robert Taylor public housing projects while she gave health lessons to young mothers, and some of my earliest playmates were kids with disabilities in a group home where she also worked. (It seemed she was always working.) We had cousins and neighbors who had more than we did, and some who had far less, but we never learned to peg that to their worth. It just wasn’t part of our story.
I did learn, though, to ask “why,” undoubtedly to an annoying degree. In the back seat of the station wagon facing the rear window, I asked why there were so many people sleeping on the grates on Lower Wacker Drive downtown, huddled together in that odd, unsunny yellow lamplight. Why did the big plant over on Kedzie have to close, and would another one open and hire everybody back? Why was Ralph’s family’s furniture out on the curb, and where did their landlord think Ralph was going to live now?
My father turned eighteen the year the Voting Rights Act was signed; my mother did when the Fair Housing Act was signed three years later.
That meant that my parents were in the first generation of Black Americans to live full adult lives with explicitly racist barriers lowered enough for them even to glimpse the so-called American Dream. And just as they did, the rules changed to dim the lights on it, for everyone. In the mid-1960s, the American Dream was as easy to achieve as it ever was or has been since, with good union jobs, subsidized home ownership, strong financial protections, a high minimum wage, and a high tax rate that funded American research, infrastructure, and education. But in the following decades, rapid changes to tax, labor, and trade laws meant that an economy that used to look like a football, fatter in the middle, was shaped like a bow tie by my own eighteenth birthday, with a narrow middle class and bulging ends of high- and low-income households.
This is the Inequality Era. Even in the supposedly good economic times before the COVID-19 pandemic that began in 2020, 40 percent of adults were not paid enough to reliably meet their needs for housing, food, healthcare, and utilities. Only about two out of three workers had jobs with basic benefits: health insurance, a retirement account (even one they had to fund themselves), or paid time off for illness or caregiving. Upward mobility, the very essence of the American idea, has become stagnant, and many of our global competitors are now performing far better on what we have long considered to be the American Dream. On the other end, money is still being made: the 350 biggest corporations pay their CEOs 278 times what they pay their average workers, up from a 58-to-1 ratio in 1989, and nearly two dozen companies have CEO-to-worker pay gaps of over 1,000 to 1. The richest 1 percent own as much wealth as the entire middle class.
I learned how to track these numbers in my early days working at a think tank, but what I was still asking when I decided to leave it fifteen years later was: Why? Why was there a constituency at all for policies that would make it harder for more people to have a decent life? And why did so many people seem to blame the last folks in line for the American Dream—Black and brown people and new immigrants who had just started to glimpse it when it became harder to reach—for economic decisions they had no power to influence? When I came across a study by two Boston-based scholars, titled “Whites See Racism as a Zero-Sum Game That They Are Now Losing,” something clicked. I decided to pay the study authors a visit.
It was a hot late-summer day when I walked into the inner courtyard at Harvard Business School to meet with Michael Norton and Samuel Sommers, two tall and lean professors of business and psychology, respectively. Harvard Business School is where some of the wealthiest people in America cemented their pedigrees and became indoctrinated in today’s winner-take-all version of capitalism. It is an overwhelmingly white club, admittance to which all but guarantees admittance to all other elite clubs. Nonetheless, that’s where we sat as these two academics explained to me how, according to the people they’d surveyed, whites were now the subjugated race in America.
Norton and Sommers had begun their research during the first Obama administration, when a white Tea Party movement drove a backlash against the first Black president’s policy agenda. They had been interested in why so many white Americans felt they were getting left behind, despite the reality of continued white dominance in U.S. life, from corporations to government. (Notwithstanding the Black president, 90 percent of state, local, and federal elected officials were white in the mid-2010s.) What Norton and Sommers found in their research grabbed headlines: the white survey respondents rated anti-white bias as more prevalent in society than anti-Black bias. On a scale of 1 to 10, the average white scoring of anti-Black bias was 3.6, but whites rated anti-white bias as a 4.7, and opined that anti-white bias had accelerated sharply in the mid-1970s.
“We were shocked. It’s so contrary to the facts, of course, but here we are, getting calls and emails from white people who saw the headlines and thanked us for revealing the truth about racism in America!” said Norton with a dry laugh.
“It turns out that the average white person views racism as a zero-sum game,” added Sommers. “If things are getting better for Black people, it must be at the expense of white people.”

“But that’s not the way Black people see it, right?” I asked.
“Exactly. For Black respondents, better outcomes for them don’t necessarily mean worse outcomes for white people. It’s not a zero sum,” said Norton.
As to why white Americans, who have thirteen times the median household wealth of Black Americans, feel threatened by diminished discrimination against Black people, neither Sommers nor Norton had an answer that was satisfying to any of us.
“There’s not really an explanation,” said Professor Sommers.

I NEEDED TO find out. I sensed that this core idea that’s so resonant with many white Americans—there’s an us and a them, and what’s good for them is bad for us—was at the root of our country’s dysfunction. One might assume that this kind of competitiveness is human nature, but I don’t buy it: for one thing, it’s more prevalent among white people than other Americans. If it’s not human nature, if it’s an idea that we’ve chosen to adopt, that means it’s one that we can choose to abandon. But if we are ever to uproot this zero-sum idea, we’ll need first to understand when, and why, it was planted. So to begin my journey, I immersed myself in an unvarnished history of our country’s birth.

THE STORY OF this country’s rise from a starving colony to a world superpower is one that can’t be told without the central character of race—specifically, the creation of a “racial” hierarchy to justify the theft of Indigenous land and the enslavement of African and Indigenous people. I use quotes around the word racial when referring to the earliest years of the European colonialization of the Americas, because back then, the illusory concept of race was just being formed. In the seventeenth century, influential Europeans were starting to create taxonomies of human beings based on skin color, religion, culture, and geography, aiming not just to differentiate but to rank humanity in terms of inherent worth. This hierarchy—backed by pseudo-scientists, explorers, and even clergy—gave Europeans moral permission to exploit and enslave. So, from the United States’ colonial beginnings, progress for those considered white did come directly at the expense of people considered nonwhite. The U.S. economy depended on systems of exploitation—on literally taking land and labor from racialized others to enrich white colonizers and slaveholders. This made it easy for the powerful to sell the idea that the inverse was also true: that liberation or justice for people of color would necessarily require taking something away from white people.
European invaders of the New World believed that war was the only sure way to separate Indigenous people from the lands they coveted. Their version of settler colonialism set up a zero-sum competition for land that would shape the American economy to the present day, at an unforgivable cost. The death toll of South and North American Indigenous people in the century after first contact was so massive—an estimated 56 million lives, or 90 percent of all the lands’ original inhabitants, through either war or disease—that it changed the amount of carbon in the atmosphere.
Such atrocities needed justification. The European invaders and their descendants used religious prejudices: the natives were incurable heathens and incompatible with the civilized peoples of Europe. Another stereotype that served the European profit motive was that Indigenous people wasted their land, so it would be better off if cultivated by productive settlers.
Whatever form these rationales took, colonizers shaped their racist ideologies to fit the bill. The motive was greed; cultivated hatred followed.
The result was a near genocide that laid waste to rich native cultures in order to fill European treasuries, particularly in Portugal, Spain, and England—and this later fed the individual wealth of white Americans who received the ill-gotten land for free.
Colonial slavery set up a zero-sum relationship between master and enslaved as well. The formula for profit is revenue minus costs, and American colonial slaveholders happened upon the world’s most winning version of the formula to date. Land was cheap to free in the colonies, and although the initial cost of buying a captured African person was high, the lifetime of labor, of course, was free. Under slavery’s formative capitalist logic, an enslaved man or woman was both a worker and an appreciating asset. Recounts economic historian Caitlin Rosenthal, “Thomas Jefferson described the appreciation of slaves as a ‘silent profit’ of between 5 and 10 percent annually, and he advised friends to invest accordingly.” With sexual violence, a white male owner could literally create even more free labor, indefinitely, even though that meant enslaving his own children. The ongoing costs of slave ownership were negligible: just food and shelter, and even these could be minimized. Take the record of Robert Carter of the Nomini Hall plantation in early 1700s Virginia: He fed his enslaved workers “less than they needed and required them to fill out their diet by keeping chickens and by working Sundays in small gardens attached to their cabins. Their cabins, too, he made them build and repair on Sundays.” It stands to reason that the less the slaveholder expended making his bound laborers’ lives sustainable, the more profit he had. The only limit to this zero-sum incentive to immiserate other human beings was total incapacity or death; at that point, theoretically, Black pain was no longer profitable.
Then again, by the nineteenth century, owners could purchase life insurance on their slaves (from some of the most reputable insurance companies in the country) and be paid three-quarters of their market value upon their death. These insurance companies, including modern household names New York Life, Aetna, and U.S. Life, were just some of the many northern corporations whose fortunes were bound up with slavery. All the original thirteen colonies had slavery, and slavery legally persisted in the North all the way up to 1846, the year that New Jersey passed a formal emancipation law. Even after that, the North-South distinction meant little to the flow of profits and capital in and out of the slave economy. Wealth wrung from Black hands launched the fortunes of northeastern port cities in Rhode Island; filled the Massachusetts textile mills with cotton; and capitalized the future Wall Street banks through loans that accepted enslaved people as collateral. In 1860, the four million human beings in the domestic slave trade had a market value of $3 billion. In fact, by the time war loomed, New York merchants had gotten so rich from the slave economy—40 percent of the city’s exporting businesses through warehousing, shipping insurance, and sales were Southern cotton exports— that the mayor of New York advocated that his city secede along with the South.
In very stark and quantifiable terms, the exploitation, enslavement, and murder of African and Indigenous American people turned blood into wealth for the white power structure. Those who profited made no room for the oppressed to share in the rewards from their lands or labor; what others had, they took. The racial zero sum was crafted in the cradle of the New World.

OF COURSE, CHATTEL slavery is no longer our economic model. Today, the zero-sum paradigm lingers as more than a story justifying an economic order; it also animates many people’s sense of who is an American, and whether more rights for other people will come at the expense of their own. It helped me understand our current moment when I learned that the zero sum was never solely material; it was also personal and social, shaping both colonists’ notions of themselves and the young nation’s ideas of citizenship and self-governance.
The zero sum was personal because the revolutionary ideal of being a free person (a radical, aspirational concept with no contemporary parallel) was abstract only until it was contrasted with what it meant to be absolutely unfree. According to historian Greg Grandin:
At a time when most men and nearly all women lived in some form of unfreedom, tied to one thing or another, to an indenture, an apprentice contract, land rent, a mill, a work house or prison, a husband or father, saying what freedom was could be difficult.
Saying what it wasn’t, though, was easy: “a very Guinea slave.”
The colonists in America created their concept of freedom largely by defining it against the bondage of the Africans among them. In the early colonial years, most European newcomers were people at the bottom of the social hierarchy back home, sent to these shores as servants from orphanages, debtors’ prisons, or poorhouses. Even those born in America had little of what we currently conceive of as freedom: to choose their own work and education or to move at will. But as the threat of cross-racial servant uprisings became real in the late 1600s—particularly after the bloody Bacon’s Rebellion, in which a Black and white rebel army burned the capital of colonial Virginia to the ground—colonial governments began to separate the servant class based on skin color.
A look through the colonial laws of the 1680s and early 1700s reveals a deliberate effort to legislate a new hierarchy between poor whites and the “basically uncivil, unchristian, and above all, unwhite Native and African laborers.” Many of the laws oppressing workers of color did so to the direct benefit of poor whites, creating a zero-sum relationship between these two parts of the colonial underclass. In 1705, a new Virginia law granted title and protection to the little property that any white servant may have accumulated—and simultaneously confiscated the personal property of all the enslaved people in the colony. The zero sum was made quite literal when, by the same law, the church in each parish sold the slaves’ confiscated property and gave the “profits to the poor of the parish,” by which they meant, of course, the white poor.
Just how unfree were the enslaved Africans in early America? The lack of freedom extended to every aspect of life: body, mind, and spirit; it invaded their family, faith, and home. The women could not refuse the sexual advances of their masters, and any children born from these rapes would be slaves the masters wouldn’t have to purchase at market. Physical abuse was common, of course, and even murder was legal. A 1669 Virginia colony law deemed that killing one’s slave could not amount to murder, because the law would assume no malice or intent to “destroy his own estate.” In a land marked by the yearning for religious freedom, enslaved people were forbidden from practicing their own religions. The Christianity they were allowed to practice was no spiritual safe haven; the Church condoned their subjugation and participated in their enslavement. (In colonial Virginia, the names of slaves suspected of aiding runaways were posted on church doors.) Black people in bondage were not allowed the freedom to marry legally and had no rights to keep their families intact. Tearing apart families by selling children from parents was so common that after Emancipation, classified ads of Black people seeking relatives buoyed the newspaper industry. In sum, the life of a Black American under slavery was the living antithesis of freedom, with Black people subject to daily bodily and spiritual tyranny by man and by state. And alongside this exemplar of subjugation, the white American yearning for freedom was born.
Most Euro-Americans were not, and would likely never be, the wealthy aristocrat who had every social and economic privilege in Europe. Eternal slavery provided a new caste that even the poorest white-skinned person could hover above and define himself against. Just imagine the psychic benefit of being elevated from the bottom of a rigid class hierarchy to a higher place in a new “racial” hierarchy by dint of something as immutable as your skin color. You can imagine how, whether or not you owned slaves yourself, you might willingly buy into a zero-sum model to gain the sense of freedom that rises with the subordination of others.
Racial hierarchy offered white people a reprieve from the class hierarchy and gave white women an escape valve from gender oppression.
White women in slaveholding communities considered their slaves “their freedom,” liberating them from farming, housework, child rearing, nursing, and even the sexual demands of their husbands. Historian Stephanie E. Jones-Rogers’s They Were Her Property: White Women Slaveholders in the American South reveals the economic stake that white women had in chattel slavery. In a society where the law traditionally considered married women unable to own property separate from their husbands’, these women were often able to keep financial assets in human beings independent of their husbands’ estates (and debts). In addition to relative financial freedom, slavery gave these women carte blanche to use and abuse other humans.

They Were Her Property recounts stories of white women reveling in cruelties of the most intimate and perverse sort, belying the myth of the innocent belle and betraying any assumption that womanhood or motherhood would temper depravity, even toward children. An image that will never leave my mind is Professor Jones-Rogers’s description of a white mother rocking her chair across the head of a little enslaved girl for about an hour, while her daughter whipped the child, until the Black girl’s face was so mangled that she would never again in life eat solid food.
FROM THE ECONOMY to the most personal of relationships to the revolution itself, early America relied on a zero-sum model of freedom built on slavery. The colonies would not have been able to afford their War of Independence were it not for the aid provided by the French, who did so in exchange for tobacco grown by enslaved people. Edmund S. Morgan, author of American Slavery, American Freedom: The Ordeal of Colonial Virginia, wrote “To a large degree it may be said that Americans bought their independence with slave labor.” Its freedom purchased, the newborn nation found itself on the verge of creating something entirely novel in the world and not at all guaranteed to succeed: a new nation of many nations, made chiefly of people from European communities that had long been at war. To forge a common basis for citizenship in this conglomerate country, a new, superseding identity would need to emerge. This citizenship would guarantee freedom from exercises of state power against one’s home or religion, free movement and assembly, speech, and most significantly, property. Citizenship, in other words, meant freedom.
And freedom meant whiteness. In the founding era, northerners’ ambiguity about slavery in their own states didn’t stop them from profiting from the slave economy—or from protecting its survival in the Constitution. Ten out of the eleven passages in the U.S. Constitution that referred to slavery were pro-slavery. The founders designed the new U.S. Congress so that slave states gained bonus political power commensurate with three-fifths of their enslaved population, without, of course, acknowledging the voice or even the humanity of those people. It was to this slavocratic body that the Constitution delegated the question of who could be an American citizen and under what terms. The First Congress’s answer, in the 1790 Naturalization Act, was to confine citizenship to “free white persons,” encoding its cultural understanding of whiteness as free—in opposition to Blackness, which would be forever unfree. Though people of African descent were nearly one-fifth of the population at the first Census, most founders did not intend for them to be American. For the common white American, the presence of Blackness—imagined as naturally enslaved, with no agency or reason, denied each and every one of the enumerated freedoms—gave daily shape to the confines of a new identity just cohering at the end of the eighteenth century: white, free, citizen. It was as if they couldn’t imagine a world where nobody escaped the tyranny they had known in the Old World; if it could be Blacks, it wouldn’t have to be whites.

WITH EACH GENERATION, the specter of the founding zero sum has found its way back into the American story. It’s hard for me to stand here as a descendant of enslaved people and say that the zero sum wasn’t true, that the immiseration of people of color did not benefit white people. But I have to remind myself that it was true only in the sense that it is what happened—it didn’t have to happen that way. It would have been better for the sum of us if we’d had a different model. Yes, the zero-sum story of racial hierarchy was born along with the country, but it is an invention of the worst elements of our society: people who gained power through ruthless exploitation and kept it by sowing constant division. It has always optimally benefited only the few while limiting the potential of the rest of us, and therefore the whole.
In decade after decade, threats of job competition—between men and women, immigrants and native born, Black and white—have perennially revived the fear of loss at another’s gain. The people setting up the competition and spreading these fears were never the needy job seekers, but the elite. (Consider the New York Herald’s publishing tycoon, James Gordon Bennett Sr., who warned the city’s white working classes during the 1860 election that “if Lincoln is elected, you will have to compete with the labor of four million emancipated negroes.”) The zero sum is a story sold by wealthy interests for their own profit, and its persistence requires people desperate enough to buy it.

THAT SAID, WHENEVER the interests of white people have been pitted against those of people of color, structural racism has called the winner. So, how is it that white people in 2011, when Norton and Sommers conducted their research, believed that whites were the victims? I tried to give their perspective the benefit of the doubt. Perhaps it was affirmative action. The idea of affirmative action looms large in the white imagination and has been a passion among conservative activists. Some white people even believe that Black people get to go to college for free—when the reality is, Black students on average wind up paying more for college through interest-bearing student loans over their lifetimes because they don’t have the passed-down wealth that even poorer white students often have. And in selective college admissions, any given white person is far more likely to be competing with another white person than with one of the underrepresented people of color in the applicant pool.
Is it welfare? The characters of the white taxpayer and the freeloading person of color are recurring tropes for people like Norton and Sommers’s survey respondents. But the majority of people receiving government assistance, like the majority of people in poverty, are white; and people of color pay taxes, too. The zero-sum idea that white people are now suffering due to gains among people of color has taken on the features of myth: it lies, but it says so much.
The narrative that white people should see the well-being of people of color as a threat to their own is one of the most powerful subterranean stories in America. Until we destroy the idea, opponents of progress can always unearth it and use it to block any collective action that benefits us all. Today, the racial zero-sum story is resurgent because there is a political movement invested in ginning up white resentment toward lateral scapegoats (similarly or worse-situated people of color) to escape accountability for a massive redistribution of wealth from the many to the few. For four years, a tax-cutting and self-dealing millionaire trumpeted the zero-sum story from the White House, but the Trump presidency was in many ways brought to us by two decades of zero-sum propaganda on the ubiquitous cable news network owned by billionaire Rupert Murdoch. This divide-and-conquer strategy has been essential to the creation and maintenance of the Inequality Era’s other most defining feature: the hollowing out of the goods we share.

Chapter 2

RACISM DRAINED THE POOL
The United States of America has had the world’s largest economy for most of our history, with enough money to feed and educate all our children, build world-leading infrastructure, and generally ensure a high standard of living for everyone. But we don’t. When it comes to per capita government spending, the United States is near the bottom of the list of industrialized countries, below Latvia and Estonia. Our roads, bridges, and water systems get a D+ from the American Society of Civil Engineers. With the exception of about forty years from the New Deal to the 1970s, the United States has had a weaker commitment to public goods, and to the public good, than every country that possesses anywhere near our wealth.
Observers have tried to fit multiple theories onto why Americans are so singularly stingy toward ourselves: Is it a libertarian ideology? The ethos of the western frontier? Our founding rebellion against government? When I first started working at Demos in my early twenties, the organization had a project called Public Works that tried to understand antigovernment sentiment and find ways of communicating that would overcome it.
Community-based advocates who were fighting for things like food stamps, public transit, and education funding sought the project’s help as they faced resistance both in their legislatures and when knocking on doors. Public Works’ research revealed that people have fuzzy ideas about government, not understanding, for example, that highways, libraries, and public schools are, in fact, government. The project encouraged advocates to talk about government as “public structures” that build economic opportunity, with a goal of activating a mindset of “citizens” as opposed to “consumers” of public services.
As I sat in Demos’s staff meeting listening to the two people leading the project present their research, I took notes and nodded. I was just an entry-level staff person not involved in the project, but when the presentation wrapped, I raised my hand. The two presenters were white, liberal advocates from Texas who had spent their lives pushing for economic fairness and opportunity for children. I had no research experience in communications, but having grown up in the 1980s and ’90s, I had the impression that every time anybody in politics complained about government programs, they invoked, explicitly or otherwise, lazy Black people who were too reliant on government. So, I had to ask, “Did race ever come up in your research?” It turned out they hadn’t even asked the question.
The organization eventually stopped working on the Public Works project. Years later, when I set out on my journey to find the roots of our country’s dysfunction, I had a chance to come at the question again—but this time, informed by conversations with community organizers, social scientists, politicians, and historians who did ask the question, I was able to discover a more convincing rationale for why so many Americans had such a dim view of government.

IN 1857, A white southerner named Hinton Rowan Helper published a book called The Impending Crisis of the South: How to Meet It. Helper had taken it upon himself to count how many schools, libraries, and other public institutions had been set up in free states compared to slave states. In Pennsylvania, for instance, he counted 393 public libraries; in South Carolina, just 26. In Maine, 236; in Georgia, 38. New Hampshire had 2,381 public schools; Mississippi, 782. The disparity was similar everywhere he looked.
Helper was an avowed racist, and yet he railed against slavery because he saw what it was doing to his fellow white southerners. The slave economy was a system that created high concentrations of wealth, land, and political power. “Notwithstanding the fact that the white non-slaveholders of the South are in the majority, as five to one, they have never yet had any part or lot in framing the laws under which they live,” Helper wrote. And without a voice in the policy making, common white southerners were unable to win much for themselves. In a way, the plantation class made an understandable calculation: a governing class will tax themselves to invest in amenities that serve the public (schools, libraries, roads and utilities, support for local businesses) because they need to. The wealthy need these assets in a community to make it livable for them, but also, more important, to attract and retain the people on whom their profits depend, be they workers or customers.
For the owners in the slave economy, however, neither was strictly necessary. The primary source of plantation wealth was a completely captive and unpaid labor force. Owners didn’t need more than a handful of white workers per plantation. They didn’t need an educated populace, whether Black or white; such a thing was in fact counter to their financial interest. And their farms didn’t depend on many local customers, whether individuals or businesses: the market for cotton was a global exchange, and the factories that bought their raw goods were in the North, staffed by wage laborers. Life on a plantation was self-contained; the welfare of the surrounding community mattered little outside the closed system.
With his book, Hinton Rowan Helper aimed to destroy that system. He even took on the most common objection to abolition at the time: the question of how to compensate slave owners for their losses (which President Lincoln managed for District of Columbia slave owners loyal to the Union during the Civil War, at three hundred dollars per enslaved person). But Helper argued that owners should actually have to compensate the rest of the white citizens of the South, because slavery had impoverished the region. The value of northern land was more than five times the value of southern land per acre, he calculated, despite the South’s advantage in climate, minerals, and soil. Because the southern “oligarchs of the lash,” as he called them, had done so little to support education, innovation, and small enterprise, slavery was making southern whites poorer. Today, according to the U.S. Census Bureau, nine of the ten poorest states in the nation are in the South. So are seven of the ten states with the least educational attainment. In 2007, economist Nathan Nunn, a soft-spoken Harvard professor then in his mid-thirties, made waves with a piece of research showing the reach of slavery into the modern southern economy. Nunn found that the well-known story of deprivation in the American South was not uniform and, in fact, followed a historical logic: counties that relied more on slave labor in 1860 had lower per capita incomes in 2000.
He was building on global comparative research by Stanley Engerman and Kenneth Sokoloff, which found that “societies that began with relatively extreme inequality tended to generate institutions that were more restrictive in providing access to economic opportunities.” Nunn’s research showed that although of course slave counties had higher inequality during the era of slavery (particularly of land), it wasn’t the degree of inequality that was correlated with poverty today; it was the fact of slavery itself, whether on large plantations or small farms. When I talked to Nathan Nunn, he couldn’t say exactly how the hand of slavery was strangling opportunity generations later. He made it clear, however, that it wasn’t just the Black inhabitants who were faring worse today; it was the white families in the counties, too. When slavery was abolished, Confederate states found themselves far behind northern states in the creation of the public infrastructure that supports economic mobility, and they continue to lag behind today. These deficits limit economic mobility for all residents, not just the descendants of enslaved people.

A FUNCTIONING SOCIETY rests on a web of mutuality, a willingness among all involved to share enough with one another to accomplish what no one person can do alone. In a sense, that’s what government is. I can’t create my own electric grid, school system, internet, or healthcare system—and the most efficient way to ensure that those things are created and available to all on a fair and open basis is to fund and provide them publicly. If you want the quality and availability of those things to vary based on how much money an individual has, you may argue for privatization—but even privatization advocates still want the government, not corporations, to shoulder the investment cost for massive infrastructure needs. For most of the twentieth century, leaders of both parties agreed on the wisdom of those investments, from Democratic president Franklin D. Roosevelt’s Depression-era jobs programs to Republican president Eisenhower’s Interstate Highway System to Republican Richard Nixon’s Supplemental Security Income for the elderly and people with disabilities.
Yet almost every clause of the American social contract had an asterisk.
For most of our history, the beneficiaries of America’s free public investments were whites only. The list of free stuff? It’s long. The Homestead Act of 1862 offered 160 acres of expropriated Indigenous land west of the Mississippi to any citizen or person eligible for citizenship (which, after the 1790 Naturalization Act, was only white immigrants) if they could reach the land and build on it. A free grant of property! Fewer than six thousand Black families were able to become part of the 1.6 million landowners who gained deeds through the Homestead Act and its 1866 southern counterpart. Today, an estimated 46 million people are propertied descendants of Homestead Act beneficiaries.
During the Great Depression, the American government told banks it would insure mortgages on real estate if they made them longer-term and more affordable (offering tax deductions on interest along the way)—but the government drew red “Do Not Lend” lines around almost all the Black neighborhoods in the country with a never-substantiated assumption that they would be bad credit risks.
The New Deal transformed the lives of workers with minimum wage and overtime laws—but compromises with southern Democrats excluded the job categories most Black people held, in domestic and agricultural work. Then the GI Bill of 1944 paid the college tuition of hundreds of thousands of veterans, catapulting a generation of men into professional careers—but few Black veterans benefited, as local administrators funneled most Black servicemen to segregated vocational schools. The mortgage benefit in the GI Bill pushed the postwar homeownership rate to three out of four white families—but with federally sanctioned housing discrimination, the Black and Latinx rates stayed at around two out of five, despite the attempts of veterans of color to participate.
The federal government created suburbs by investing in the federal highway system and subsidizing private housing developers—but demanded racial covenants (“whites only” clauses in housing contracts) to prevent Black people from buying into them. Social Security gave income to millions of elderly Americans—but again, exclusions of job categories left most Black workers out, and southern congressmembers opposed more generous cash aid for the elderly poor. You could even consider the New Deal labor laws that encouraged collective bargaining to be a no-cost government subsidy to create a white middle class, as many unions kept their doors closed to nonwhites until the 1960s.
Between the era of the New Deal and the civil rights movement, these and more government policies worked to ensure a large, secure, and white middle class. But once desegregation lowered barriers, people with power (politicians and executives, but also individual white homeowners, business owners, shop stewards, and community leaders) faced the possibility of sharing those benefits. The advantages white people had accumulated were free and usually invisible, and so conferred an elevated status that seemed natural and almost innate. White society had repeatedly denied people of color economic benefits on the premise that they were inferior; those unequal benefits then reified the hierarchy, making whites actually economically superior. What would it mean to white people, both materially and psychologically, if the supposedly inferior people received the same treatment from the government? The period since integration has tested many whites’ commitment to the public, in ways big and small.

THE AMERICAN LANDSCAPE was once graced with resplendent public swimming pools, some big enough to hold thousands of swimmers at a time. In the 1920s, towns and cities tried to outdo one another by building the most elaborate pools; in the 1930s, the Works Progress Administration put people to work building hundreds more. By World War II, the country’s two thousand pools were glittering symbols of a new commitment by local officials to the quality of life of their residents, allowing hundreds of thousands of people to socialize together for free. A particular social agenda undergirded these public investments. Officials envisioned the distinctly American phenomenon of the grand public resort pools as “social melting pots.” Like free public grade schools, public pools were part of an “Americanizing” project intended to overcome ethnic divisions and cohere a common identity—and it worked. A Pennsylvania county recreation director said, “Let’s build bigger, better and finer pools. That’s real democracy. Take away the sham and hypocrisy of clothes, don a swimsuit, and we’re all the same.” Of course, that vision of classlessness wasn’t expansive enough to include skin color that wasn’t, in fact, “all the same.” By the 1950s, the fight to integrate America’s prized public swimming pools would demonstrate the limits of white commitment to public goods.

IN 1953, A thirteen-year-old Black boy named Tommy Cummings drowned in Baltimore’s Patapsco River while swimming with three friends, two white and one Black. The friends had been forced to swim in the dangerous waterway because none of the city’s seven public pools allowed interracial swimming. Tommy was one of three Black children to die that summer in open water, and the NAACP sued the city. It won on appeal three years later, and on June 23, 1956, for the first time, all Baltimore children had the chance to swim with other children, regardless of skin color. Public recreation free from discrimination could, in the minds of the city’s progressive community, foster more friendships like the one Tommy was trying to enjoy when he drowned. What ended up happening, however, was not the promised mingling of children of different races. In Baltimore, instead of sharing the pool, white children stopped going to the pools that Black children could easily access, and white adults informally policed (through intimidation and violence) the public pools in white neighborhoods.
In America’s smaller towns, where there was only one public pool, desegregation called into question what “public” really meant. Black community members pressed for access to the public resource that their tax dollars had helped to build. If assets were public, they argued, they must be furnished on an equal basis. Instead, white public officials took the public assets private, creating new private corporations to run the pools. The town of Warren, Ohio, dealt with its integration problem by creating the members-only Veterans’ Swim Club, which selected members based on a secret vote. (The club promptly selected only white residents of the town.) The small coal town of Montgomery, West Virginia, built a new resort pool in 1942 but let it lie untouched for four years while Black residents argued that the state’s civil rights law required equal access. Unable to countenance the idea of sharing the pool with Black people, city leaders eventually formed a private “Park Association” whose sole job was to administer the pool, and the city leased the public asset to the private association for one dollar. Only white residents were allowed admission. Warren and Montgomery were just two of countless towns—in every region in America, not just the South—where the fight over public pools revealed that for white Americans, the word public did not mean “of the people.” It meant “of the white people.” They replaced the assets of a community with the privileges of a club.
Eventually, the exclusion boomeranged on white citizens. In Montgomery, Alabama, the Oak Park pool was the grandest one for miles, the crown jewel of a Parks Department that also included a zoo, a community center, and a dozen other public parks. Of course, the pool was for whites only; the entire public parks system was segregated. Dorothy Moore was a white teenage lifeguard when a federal court deemed the town’s segregated recreation unconstitutional. Suddenly, Black children would be able to wade into the deep end with white children at the Oak Park pool; at the rec center, Black elders would get chairs at the card tables.
The reaction of the city council was swift—effective January 1, 1959, the Parks Department would be no more.
The council decided to drain the pool rather than share it with their Black neighbors. Of course, the decision meant that white families lost a public resource as well. “It was miserable,” Mrs. Moore told a reporter five decades later. Uncomprehending white children cried as the city contractors poured cement into the pool, paved it over, and seeded it with grass that was green by the time summer came along again. To defy desegregation, Montgomery would go on to close every single public park and padlock the doors of the community center. It even sold off the animals in the zoo. The entire public park system would stay closed for over a decade. Even after it reopened, they never rebuilt the pool.
I went to see Oak Park for myself in 2019 and walked the grounds looking for signs of what used to be. I was able to spot the now-barren rock formation where the zoo’s monkeys used to climb. I asked the friendly women in the parks office where the pool had been, but nobody was quite sure. Oak Park used to be the central gathering place in town for white Montgomery; on that hot afternoon, I was one of only four or five people there. Groundskeepers outnumbered visitors. I noticed an elderly white couple sitting in a car in the parking lot. They saw me approaching and stared without welcome. I stood for a beat, smiling at the car window, before the man reluctantly rolled it down.
“Hi, sir, ma’am,” I ventured, getting nods in return. They appeared to be in their eighties. “Are you from around here?” More nods. “I am doing a project and was wondering if you remember when there used to be a big pool here?” The couple looked at each other, still wary.
“Yes, of course,” the man replied curtly.
“Do you remember where it was?” They hesitated, and then the woman pointed straight ahead to where they’d been looking moments before. I took a sharp breath of excitement. Had I interrupted them reminiscing about the pool? Maybe they’d met there as teenagers? I leaned forward to ask more, but the man recoiled and rolled up his window.
I backed off. Where the woman had pointed was a wide, level expanse rimmed with remembering old oak trees. The only sounds were the trilling of birds and the far-off thrum of a lawn mower.
The loss of the Oak Park pool was replicated across the country. Instead of complying with a desegregation order, New Orleans closed what was known as the largest pool in the South, Audubon Pool, in 1962, for seven years. In Winona, Mississippi, if you know where to look, you can still see the metal railings of the old pool’s diving board amid overgrown weeds; in nearby Stonewall, a real estate developer unearthed the carcass of the segregated pool in the mid-2000s. Even in towns that didn’t immediately drain their public pools, integration ended the public pool’s glory years, as white residents abandoned the pools en masse.
Built in 1919, the Fairground Park pool in St. Louis, Missouri, was the largest in the country and probably the world, with a sandy beach, an elaborate diving board, and a reported capacity of ten thousand swimmers.
When a new city administration changed the parks policy in 1949 to allow Black swimmers, the first integrated swim ended in bloodshed. On June 21, two hundred white residents surrounded the pool with “bats, clubs, bricks and knives” to menace the first thirty or so Black swimmers. Over the course of the day, a white mob that grew to five thousand attacked every Black person in sight around the Fairground Park. After the Fairground Park Riot, as it was known, the city returned to a segregation policy using public safety as a justification, but a successful NAACP lawsuit reopened the pool to all St. Louisans the following summer. On the first day of integrated swimming, July 19, 1950, only seven white swimmers attended, joining three brave Black swimmers under the shouts of two hundred white protesters. That first integrated summer, Fairground logged just 10,000 swims—down from 313,000 the previous summer. The city closed the pool for good six years later. Racial hatred led to St. Louis draining one of the most prized public pools in the world.
Draining public swimming pools to avoid integration received the official blessing of the U.S. Supreme Court in 1971. The city council in Jackson, Mississippi, had responded to desegregation demands by closing four public pools and leasing the fifth to the YMCA, which operated it for whites only. Black citizens sued, but the Supreme Court, in Palmer v. Thompson, held that a city could choose not to provide a public facility rather than maintain an integrated one, because by robbing the entire public, the white leaders were spreading equal harm. “There was no evidence of state action affecting Negroes differently from white,” wrote Justice Hugo Black. The Court went on to turn a blind eye to the obvious racial animus behind the decision, taking the race neutrality at face value. “Petitioners’ contention that equal protection requirements were violated because the pool-closing decision was motivated by anti-integration considerations must also fail, since courts will not invalidate legislation based solely on asserted illicit motivation by the enacting legislative body.” The decision showed the limits of the civil rights legal tool kit and forecast the politics of public services for decades to come: If the benefits can’t be whites-only, you can’t have them at all. And if you say it’s racist? Well, prove it.
As Jeff Wiltse writes in his history of pool desegregation, Contested Waters: A Social History of Swimming Pools in America, “Beginning in the mid-1950s northern cities generally stopped building large resort pools and let the ones already constructed fall into disrepair.” Over the next decade, millions of white Americans who once swam in public for free began to pay rather than swim for free with Black people; desegregation in the mid-fifties coincided with a surge in backyard pools and members-only swim clubs. In Washington, D.C., for example, 125 new private swim clubs were opened in less than a decade following pool desegregation in 1953. The classless utopia faded, replaced by clubs with two-hundred-dollar membership fees and annual dues. A once-public resource became a luxury amenity, and entire communities lost out on the benefits of public life and civic engagement once understood to be the key to making American democracy real.
Today, we don’t even notice the absence of the grand resort pools in our communities; where grass grows over former sites, there are no plaques to tell the story of how racism drained the pools. But the spirit that drained these public goods lives on. The impulse to exclude now manifests in a subtler fashion, more often reflected in a pool of resources than a literal one.

AS SOMEONE WHO’S spent a career in politics, where the specter of the typical white moderate has perennially trimmed the sails of policy ambition, I was surprised to learn that in the 1950s, the majority of white Americans believed in an activist government role in people’s economic lives—a more activist role, even, than contemplated by today’s average liberal. According to the authoritative American National Elections Studies (ANES) survey, 65 percent of white people in 1956 believed that the government ought to guarantee a job to anyone who wanted one and to provide a minimum standard of living in the country. White support cratered for these ideas between 1960 and 1964, however—from nearly 70 percent to 35 percent—and has stayed low ever since. (The overwhelming majority of Black Americans have remained enthusiastic about this idea over fifty years of survey data.) What happened?
In August 1963, white Americans tuned in to the March on Washington (which was for “Jobs and Freedom”). They saw the nation’s capital overtaken by a group of mostly Black activists demanding not just an end to discrimination, but some of the same economic ideas that had been overwhelmingly popular with whites: a jobs guarantee for all workers and a higher minimum wage. When I saw that white support for these ideas crumbled in 1964, I guessed it might have been because Black people were pushing to expand the circle of beneficiaries across the color line. But then again, perhaps it was just a coincidence, the beginning of a new antigovernment ideology among white people that had nothing to do with race?
After all, white support for these government commitments to economic security has stayed low through all the subsequent years of ANES data, even through a sea change in racial attitudes. As the civil rights movement successfully shifted cultural norms and beliefs, it became rarer and rarer to hear the argument that people of color were biologically inferior. That kind of “old-fashioned,” biological racism waned relatively quickly over the decades (by 1972, 31 percent of white people subscribed to it; by 1986, just 14 percent). Surely, then, racism couldn’t still be lowering support for government antipoverty efforts today.
It turns out that the dominant story most white Americans believe about race adapted to the civil rights movement’s success, and a new form of racial disdain took over: racism based not on biology but on perceived culture and behavior. As professors Donald R. Kinder and Lynn M. Sanders put it in their 1996 deep dive into public opinion by race, Divided by Color: Racial Politics and Democratic Ideals, “today, we say, prejudice is preoccupied less with inborn ability and more with effort and initiative.” Kinder and Sanders defined this more modern manifestation of anti-Black hostility among whites as “racial resentment.” They measured racial resentment using a combination of agree/disagree statements on the ANES that spoke to the Black work ethic, how much discrimination Black people had faced as compared to European immigrants, and whether the government was more generous to Blacks than to whites. They found that “although whites’ support for the principles of racial equality and integration have increased majestically over the last four decades, their backing for policies designed to bring equality and integration about has scarcely increased at all. Indeed in some cases white support has actually declined.” I wasn’t surprised to read that Kinder and Sanders found that people with high racial resentment opposed racial public policies such as nondiscriminatory employment and college quotas. The researchers couldn’t explain this correlation away using demographic characteristics or other beliefs, like abstract individualism or opposition to government intervention in private affairs; nor could they pin it to a genuine material threat. 
But my data analyst colleague Sean McElwee and I found that white people with high levels of resentment against Black people have become far more likely to oppose government spending generically: as of the latest ANES data in 2016, there was a sixty-point difference in support for increased government spending based on whether you were a white person with high versus low racial resentment. Government, it turned out, had become a highly racialized character in the white story of our country.
When the people with power in a society see a portion of the populace as inferior and undeserving, their definition of “the public” becomes conditional. It’s often unconscious, but their perception of the Other as undeserving is so important to their perception of themselves as deserving that they’ll tear apart the web that supports everyone, including them.
Public goods, in other words, are only for the public we perceive to be good.
I could understand how, raised in an explicitly white-supremacist society, a white New Dealer could turn against the Great Society after the civil rights movement turned government from enforcer of the racial hierarchy to upender of it. But how to explain the racial resentment and the correlated antigovernment sentiments by the 1980s? By then, white folks had seemed to acclimate themselves to a new reality of social equality under the law. The overt messages of racial inferiority had dissipated, and popular culture had advanced new norms of multiculturalism and tolerance. What stopped advancing, however, was the economic trajectory of most American families—and it was on this terrain that racial resentment dug in.
While racial barriers were coming down across society, new class hurdles were going up. It began immediately after the federal civil rights victories of the mid-1960s, when President Johnson accurately predicted that, by signing these bills into law, he had given away the South. Over the next decade, the New Deal–era social contract that existed between white power-brokers in government, business, and labor came to a painful end. It had never been a peaceful one, but over the 1940s, ’50s, and ’60s, its signatories had generally seen a mutual benefit in ensuring better and better standards of living for white men and their families as they moved up from the tenement and the factory to the suburb and the office. Economic growth and wage growth were high, as were taxes (which hit their peak as a percentage of the economy in 1965). The biggest industries were highly regulated, and antitrust protections worked to prevent monopolies. During these years, the leaders in government, big business, and organized labor were often white men only years or a generation away from the same circumstances as the guy on the shop floor; perhaps this accounted for the level of empathy reflected in decisions to, for example, pay low-skilled workers middle-class wages and benefits, or spend hundreds of billions to make homeownership possible to millions with no down payment. Perhaps managers still saw themselves in workers, people they considered their fellow Americans. I often picture it literally—three white men seated in a room, signing a contract: Walter Reuther of the United Automobile Workers; Charles Wilson, the General Motors chief executive; and President Dwight Eisenhower. Their handshakes seal the deal for a broad, white middle class. Then, in the mid-sixties, there’s a commotion at the door. Women and people of color are demanding a seat at the table, ready to join the contract for shared prosperity. 
But no longer able to see themselves reflected in the other signatories, the leaders of government and big business walk out, leaving workers on their own—and the Inequality Era was born.
That era began in the 1970s, but the policies cohered into an agenda guided by antigovernment conservatism under the presidency of Ronald Reagan. Reagan, a Californian, was determined to take the Southern Strategy (launched by President Nixon) national. In southern politics, federally mandated school integration had revived for a new generation the Civil War idea of government as a boogeyman, threatening to upend the natural racial order at the cost of white status and property. The Reagan campaign’s insight was that northern white people could be sold the same explicitly antigovernment, implicitly pro-white story, with the protagonists as white taxpayers seeking defense from a government that wanted to give their money to undeserving and lazy people of color in the ghettos. (The fact that government policy created the ghettos and stripped the wealth and job opportunities from their residents was not part of the story. Nor was the fact that people of color pay taxes, too, often a larger share of their incomes due to regressive sales, property, and payroll taxes.) My law professor Ian Haney López helped me connect the dots in his 2014 book Dog Whistle Politics: How Coded Racial Appeals Have Reinvented Racism and Wrecked the Middle Class. Reagan’s political advisers saw him as the perfect carrier to continue the fifty-state Southern Strategy that could focus on taxes and spending while still hitting the emotional notes of white resentment. “Plutocrats use dog-whistle politics to appeal to whites with a basic formula,” Haney López told me. “First, fear people of color. Then, hate the government (which coddles people of color).
Finally, trust the market and the 1 percent.” This type of modern political racism could operate in polite society because of the way that racial resentment had evolved, from biological racism to cultural disapproval: it’s not about who they are; it’s about what some (okay, most) of them do. He went on, “Dog-whistle politics is gaslighting on a massive scale: stoking racism through insidious stereotyping while denying that racism has anything to do with it.” For a few moments in a tape-recorded interview in 1981, however, the right-wing strategist for Presidents George H. W. Bush and Ronald Reagan, Lee Atwater, admitted to the plan:
You start out in 1954 by saying, “Nigger, nigger, nigger.” By 1968 you can’t say “nigger”—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites….“We want to cut this,” is much more abstract than even the busing thing, uh, and a hell of a lot more abstract than “Nigger, nigger.”
In the 1980s, Republicans deployed this strategy by harping on the issue of welfare and tying it to the racialized image of “the inner city” and “the undeserving poor.” (An emblematic line from President Reagan, “We’re in danger of creating a permanent culture of poverty as inescapable as any chain or bond,” deftly suggests that Black people are no longer enslaved by white action, but by their own culture.) Even though welfare was a sliver of the federal budget and served at least as many white people as Black, the rhetorical weight of the welfare stereotype—the idea of a Black person getting for free what white people had to work for—helped sink white support for all government. The idea tapped into an old stereotype of Black laziness that was first trafficked in the antebellum era to excuse and minimize slavery and was then carried forward in minstrel shows, cartoons, and comedy to the present day. The welfare trope also did the powerful blame-shifting work of projection: like telling white aristocrats that it was their slaves who were the lazy ones, the Black welfare stereotype was a total inversion of the way the U.S. government had actually given “free stuff” to one race over all others. To this day, even though Black and brown people are disproportionately poor, white Americans constitute the majority of low-income people who escape poverty because of government safety net programs. Nonetheless, the idea that Black people are the “takers” in society while white people are the hardworking taxpayers—the “makers”— has become a core part of the zero-sum story preached by wealthy political elites. Whether it’s the more subtle “47 percent” version from millionaire Mitt Romney or the more racially explicit Fox News version sponsored by billionaire Rupert Murdoch, it works. In 2016, the majority of white moderates (53 percent) and white conservatives (69 percent) said that Black Americans take more than we give to society. We take more than we give.
Seeing this high a number among white moderates jogs a memory: I’m in the seventh grade, for the first time attending an almost all-white school.
It’s a government and politics lesson, and the girl next to me announces that she and her family are “fiscally conservative but socially liberal.” The phrase is new to me, but all around me, white kids’ heads bob in knowing approval, as if she’s given the right answer to a quiz. There’s something so morally sanitized about the idea of fiscal restraint, even when the upshot is that tens of millions of people, including one out of six children, struggle needlessly with poverty and hunger. The fact of their suffering is a shame, but not a reason to vote differently to allow government to do something about it. (We could eliminate all poverty in the United States by spending just 12 percent more than the cost of the 2017 Republican tax cuts.) The media’s inaccurate portrayal of poverty as a Black problem plays a role in this, because the Black faces that predominate coverage trigger a distancing in the minds of many white people.
As Professor Haney López points out, priming white voters with racist dog whistles was the means; the end was an economic agenda that was harmful to working- and middle-class voters of all races, including white people. In railing against welfare and the war on poverty, conservatives like President Reagan told white voters that government was the enemy, because it favored Black and brown people over them—but their real agenda was to blunt government’s ability to challenge concentrated wealth and corporate power. The hurdle conservatives faced was that they needed the white majority to turn against society’s two strongest vessels for collective action: the government and labor unions. Racism was the ever-ready tool for the job, undermining white Americans’ faith in their fellow Americans. And it worked: Reagan cut taxes on the wealthy but raised them on the poor, waged war on the unions that were the backbone of the white middle class, and slashed domestic spending. And he did it with the overwhelming support of the white working and middle classes.
The majority of white voters have voted against the Democratic nominee for president ever since the party became the party of civil rights under Lyndon Johnson. The Republican Party has won those votes through sheer cultural marketing to a white customer base that’s still awaiting delivery of the economic goods they say they want. Despite the dramatic change in white Americans’ support for government antipoverty efforts, the typical white voter’s economic preferences are still more progressive than those of the Republican politicians for whom they vote. I looked at the two economic issues that have been top priority for Republicans in Washington since 2008, healthcare and taxes. Republican politicians have thoroughly communicated their positions on these issues to their base through campaign ads, speeches, and the conservative media echo chamber, so one would think that their voters would get the message. That message is: cut taxes whenever possible and oppose government involvement in healthcare.
But 46 percent of Republicans polled in the summer of 2020 actually supported a total government takeover of health insurance, Medicare for All—even after a Democratic primary where the idea was championed by a Democratic Socialist, Vermont senator Bernie Sanders. Zero Republican politicians support this policy, and almost all voted in 2017 to repeal the relatively modest government role in healthcare under the Affordable Care Act. On taxes, nearly half of Republican voters support raising taxes on millionaires by 4 percent to pay for schools and roads, but the Republican Congress of 2017 reduced taxes by more than a trillion dollars, mainly on corporations and the wealthy. In the Inequality Era brought to us by racist dog-whistle politics, white voters are less hostile to government policies that promote economic equality than the party they most often vote into power. But vote for them they do. Racial allegiance trumps.
Most white voters will deny that racism has anything to do with their feelings about government. And many political pollsters will believe them.
For instance, in fall 2009 focus groups, conservative anti-Obama Republicans mentioned race only in order to complain that they couldn’t express their opposition to Obama without being labeled racists. The influential Democratic pollsters Stan Greenberg and James Carville, who were conducting the focus groups, took them at their word, writing in their summary of the findings, “The press and elites [who] continue to look for a racial element that drives these voters’ beliefs…need to get over it.” But they were missing how political race-craft works. There is such a strong cultural prohibition on being racist (particularly during the color-blind triumphalism in the wake of Obama’s election) that it’s important to look at what voters feel and perceive, not just what they say. Race isn’t a static state; it’s better understood as an action, and one of its chief functions is to distance white people from people who are “raced” differently. When race is introduced in this fashion to white voters, it activates seemingly race-neutral reactions such as demonization, distrust, zero-sum thinking, resistance to change, and resource hoarding. Note how Greenberg and Carville followed the section in their memo advising commentators to “get over” the role of race in opposition to President Obama:
They are actively rooting for Obama to fail as president because they believe he is not acting in good faith as the leader of our country. Only 6 percent of these conservative Republican base voters say that Obama is on their side, and our groups showed that they explicitly believe he is purposely and ruthlessly executing a hidden agenda to weaken and ultimately destroy the foundations of our country.
Experts on the way racialized thinking operates would read the same comments and see the fingerprints of racism all over them. In studying the same anti-Obama sentiment during the same period, psychologist Eric Knowles and his colleagues devised experiments to minimize the silencing impact of social desirability (that is, giving answers you know society wants you to give); to analyze based on implicit, not explicit, bias; and to control for other rationales such as ideology and partisanship. With all that stripped away, racial prejudice remained. They explained, “People may fail to report the influence of race on their judgments, not because such an influence is absent, but because they are unaware of it—and might not acknowledge it even if they were aware of it.” There are many white Americans who think of themselves as nonracist fiscal conservatives and who are sincerely “unaware” of the influence of race on their judgments, as Knowles describes. Then there are the increasing numbers of white Americans who are aware of the influence of racism and yet do not acknowledge it—further still, they claim that it’s the liberals and the people of color who are the racists. This is the narrative they receive from millionaire right-wing media personalities, and hysteria over Obama’s secret plan for racial vengeance was one of their mainstay narratives during his presidency. Here’s Rush Limbaugh: “Obama has a plan. Obama’s plan is based on his inherent belief that this country was immorally and illegitimately founded by a very small minority of white Europeans who screwed everybody else since the founding to get all the money and all the goodies, and it’s about time that the scales were made even….It’s always been the other way around. This is just payback. This is ‘how does it feel’ time.”
It sounds a lot like Greenberg and Carville’s focus group respondents, but with the race part dialed all the way up. Here’s Glenn Beck: “Have we suddenly transported into 1956 except it’s the other way around?…Does anybody else have a sense that there are some that just want revenge? Doesn’t it feel that way?” Or Bill O’Reilly: “I think Mr. Obama allows historical grievances—things like slavery, bad treatment for Native Americans, and U.S. exploitation of Third World countries—to shape his economic thinking…leading to his desire to redistribute wealth, thereby correcting historical grievance.” Just what were the anti-white comeuppance policies Obama was pushing to merit these reactions? Economic recovery from the financial crisis and the radical idea that wealthy people and businesses depended on public investments such as roads and the internet.
Racism, then, works against non-wealthy white Americans in two ways.
First, it lowers their support for government actions that could help them economically, out of a zero-sum fear that it could help the racialized “undeserving” as well. Yet racism’s work on class consciousness is not total—there are still some New Deal–type economic policies that the majority of white Americans support, like increasing the federal minimum wage and raising taxes on the wealthy. But the racial polarization of our two-party system has forced a choice between class interest and perceived racial interest, and in every presidential election since the Civil Rights Act, the majority of white people chose the party of their race. That choice keeps a conservative faction in power that blocks progress on the modest economic agenda they could support.
Political scientists Woojin Lee and John Roemer studied the rise of antigovernment politics in the late 1970s, ’80s, and early ’90s and found that the Republican Party’s adoption of policies that voters perceived as anti-Black (opposition to affirmative action and welfare, harsh policing and sentencing) won them millions more white voters than their unpopular economic agenda would have attracted. The result was a revolution in American economic policy: from high marginal tax rates and generous public investments in the middle class such as the GI Bill to a low-tax, low-investment regime that resulted in less than 1 percent annual income growth for 90 percent of American families for thirty years. According to Roemer and Lee, the culprit was racism. “We compute that voter racism reduced the income tax rate by 11–18 percentage points.” They conclude, “Absent race as an issue in American politics, the fiscal policy in the USA would look quite similar to fiscal policies in Northern Europe.” In the social democracies of Northern Europe, families are far more economically secure; middle-class workers there don’t have American families’ worries about their healthcare, retirement, childcare, or college for their kids. But if government tried to secure these essential public benefits for families in the United States, in the political culture of the last two generations, it would signal a threat to the majority of white voters.
Government help is for people of color, the story goes. When you cut government services, as Reagan strategist Lee Atwater said, “blacks get hurt worse than whites.” What’s lost in that formulation is just how much white people get hurt, too.

Chapter 3

GOING WITHOUT
For generations, college-going white Americans could count on public money from their governments, whether federal or state, to pay most if not all of their costs of higher education. The novel idea of flourishing public colleges—at least one in every state—took shape in the 1860s, when the U.S. government offered the states over ten million acres of land taken from Indigenous people to build on or to sell for institutions of higher education for their citizens. More free federal money for higher education came with the GI Bill, which paid tuition plus living expenses for World War II veterans and swelled college coffers: in 1947, veterans made up 50 percent of U.S. college admissions. (Racist program administration and educational segregation left Black veterans in the South largely excluded from these opportunities, however.) Public commitment to college for all was a crucial part of the white social contract for much of the twentieth century. In 1976, state governments provided six out of every ten dollars of the cost of students attending public colleges. The remainder translated into modest tuition bills—just $617 at a four-year college in 1976, and a student could receive a federal Pell Grant for as much as $1,400 against that and living expenses. Many of the country’s biggest and most respected public colleges were tuition-free, from the City University of New York to the University of California system. This massive public investment wasn’t considered charity; an individual state saw a return of three to four dollars back for every dollar it invested in public colleges. When the public meant “white,” public colleges thrived.
That’s no longer the case. Students of color comprised just one in six public college students in 1980, but they now make up over four in ten.
Over this period of growth among students of color, ensuring college affordability fell out of favor with lawmakers. State legislatures began to drastically cut what they spent per student on their public colleges, even as the taxable income base in the state grew. More and more Americans enrolled nonetheless, because other policy decisions in the labor market made a college degree necessary to compete for a middle-class job. By 2017, the majority of state colleges were relying on student tuition dollars for the majority of their expenses. The average public college tuition has nearly tripled since 1991, helping bring its counterpart, skyrocketing student debt, to the level of $1.5 trillion in 2020. This represents an alarming stealth privatization of America’s public colleges.
The rising cost of college feels to most Americans like so many aspects of our economy: unexplained and unavoidable. But at Demos, we researched the causes of rising tuition and linked them squarely to the withering government commitment to public funding. The federal government for its part slowly shifted its financial aid from grants that didn’t have to be repaid (such as Pell Grants for low-income students, which used to cover four-fifths of college costs and now cover at most one-third) to federal loans, which I would argue are not financial aid at all. Yes, student loans enable Americans to pay their college bills during enrollment, but the compounding interest means they must pay at least 33 percent more on average than the amount borrowed. Millions of students are also paying double-digit interest on private loans.
The new “debt-for-diploma system,” as my former Demos colleague Tamara Draut called it, has impacted Black students most acutely, as generations of racist policies have left our families with less wealth to draw on to pay for college. Eight out of ten Black graduates have to borrow, and at higher levels than any other group. In my high school, the seniors had a tradition of posting college admissions letters on the school counselor’s wall: right side up for acceptance, sideways for waitlist, and upside down for rejection. So much bravado in that transparency, and yet nobody was putting their financial aid letters on the wall. I borrowed five figures for college and nearly six for law school, including a high-interest private loan that my grandmother had to cosign. At forty years old, I’m still paying it all off, and I don’t know a single Black peer who’s not in the same boat, even those whose parents were doctors and lawyers. Because wealth is largely shaped by how much money your parents and grandparents had, Black young adults’ efforts at higher education and higher earnings aren’t putting much of a dent in the racial wealth gap. This generation was born too late for the free ride, and student loan repayment is making it even harder for Black graduates’ savings and assets to catch up. In fact, white high school dropouts have higher average household wealth than Black people who’ve graduated from college.
As with so many economic ills, student debt is most acute among Black families, but it has now reached 63 percent of white public college graduates as well and is having ripple effects across our entire economy. In 2019, the Federal Reserve reported on what most of my generation knows: student debt payments are stopping us from buying our first home, the irreplaceable wealth-building asset. It’s even contributing to delays in marriage and family formation. And by age thirty, young adults with debt have half the retirement savings of those who are debt-free.
Fundamentally, we have to ask ourselves, how is it fair and how is it smart to price a degree out of reach for the working class just as that degree became the price of entry into the middle class? And how is it fair or smart to create a new source of debt for a generation when that debt makes it harder for us to achieve the hallmarks of middle-class security: a house, marriage, and retirement savings? There is neither fairness nor wisdom in this system, only self-sabotage. Other countries learned from the midcentury American investment in higher education and have now raced ahead. A third of developed countries offer free tuition, and another third keep tuition lower than $2,600. In the United States, recent policy proposals to restore free college are generally popular, though race shapes public opinion. There’s a 30-percentage-point gap in support for free college between white people on the one hand (53 percent) and Black and Latinx Americans on the other (86 and 82 percent). The most fiercely opposed?
The very people who benefited the most from the largely whites-only free college model and who now want to pull the ladder up behind them: older, college-educated (white) Republicans.
In the story of how America drained the pool of our public college system, racism is the uncredited actor. The country’s first ambitious free college system, in California, was created in 1868 on a guarantee of no tuition and universal access; this public investment helped launch California’s rise as an economic giant and global hub of technology and innovation. But the state’s politics shifted radically in the 1970s, spurred by a backlash to the civil rights policy gains of the 1960s and, in an important harbinger of national trends, rising resentment of immigration and demographic change. The older, wealthier, and whiter political majority began voting for ballot initiatives opposing civil rights, fair housing, immigration, and taxes. In 1978, a ballot initiative known as Proposition 13 drastically limited property taxes by capping them at 1 percent of the property’s value at purchase, limiting increases and assessments, and requiring a supermajority to pass new taxes. Property tax revenue from corporate landowners and homeowners in the state dropped 60 percent the following year. The impact was felt most acutely in public K–12 schools; California went from a national leader in school funding to forty-first in the country. But Prop 13 also swiftly destroyed the local revenue base for California’s extensive system of community colleges and put them in direct competition for state funding with the more selective state schools and universities. The resulting squeeze accelerated the end of the free college era in California. Between 1979 and 2019, tuition and fees at the four-year public colleges increased eight-fold.
Dog-whistling was ever-present in the campaign to win Proposition 13, from flyers claiming that lower property taxes would put an end to busing for integration purposes to messaging questioning why homeowners should pay for “other people’s children.” Conservative columnist William Safire put it most directly, however, when he endorsed the proposition in The New York Times: “An underlying reason is the surge in the number of illegales—aliens fleeing poverty in Mexico—who have been crossing the border by the hundreds of thousands….As one might expect, property taxpayers see themselves giving much more than they are getting; they see wage-earners, both legal and illegal, getting more in services than they pay for in taxes.” A decade later, voters in Colorado, another state with a growing Latinx immigrant population, passed a constitutional amendment severely limiting taxes. TABOR (Taxpayer’s Bill of Rights) has forced Coloradans to go without a long list of public services—including, for two years, children’s vaccines, when the state couldn’t afford to purchase them—and the state has dropped to forty-seventh place in higher education investments.

THE RISE IN student diversity shifted the politics of state education spending across the country. As part of the antigovernment fervor in the 1980s and ’90s, spending on the welfare of youth fell out of favor, but meanwhile, legislatures were tripling their expenditures on incarceration and policing.
By 2016, eighteen states were spending more on jails and prisons than they were on colleges and universities. The path to this system of mass incarceration is another story of racist policymaking creating unsustainable costs for everyone.
The loss of good factory jobs in the mid-1970s hit the cities first, and with cities, their segregated Black residents. Instead of responding to the economic problem with economic development, jobs programs, and stronger safety nets, the federal government cut back massively on urban social spending in the 1980s. In its place, it waged a drug war.
Dehumanizing and unpitying stereotypes about the dangers of drug use in the inner cities fueled a new era of harsher sentencing and post-release penalties to create a system of mass incarceration. While the so-called crack epidemic is far behind us, the system rolls on, and today, more than 1.25 million people are arrested each year for drug possession. These are not kingpins or high-level dealers; more than four times as many people are arrested for possessing drugs as for selling drugs, often in amounts so tiny they can only be intended for personal use. In 2016, the number of arrests for marijuana possession exceeded the total number of arrests for all violent crimes put together.
The racist nature of our mass incarceration system has been well documented. White and Black people are equally likely to use drugs, but the system is six times as likely to incarcerate Black people for a drug crime. Sentences for possession of crack cocaine, which is more widely used by African Americans than whites, are about eighteen times harsher than penalties for the powder version of the drug, which is used more often by whites. For decades before policy changes in 2010, this sentencing disparity was about one hundred to one.
Over the last twenty years, however, a striking change has taken place.
Getting locked up over drugs and related property crimes has become more and more common among white people and less so among Black folks. A primary factor in this shift is, as The New York Times wrote, “Mostly white and politically conservative counties have continued to send more drug offenders to prison, reflecting the changing geography of addiction. While crack cocaine addiction was centered in cities, opioid and meth addiction are ravaging small communities” in largely white locales. The “pathology” long ascribed to urban communities as integral and immutable characteristics of Black life (drug addiction, property crimes to support a habit, broken families) has now moved, with deindustrialization, into the suburbs and the countryside. By 2018, an estimated 130 people were dying every day from opioid overdoses, and over 10 million people were abusing prescription opioids.
The option to treat poverty and drug addiction as a public health and economic security issue rather than a criminal one has always been present.
Will our nation choose that option now that white people, always the majority of drug users, make up a soaring population of people for whom addiction takes over? The woes that devastated communities of color are now visiting white America, and the costs of incarceration are coming due in suburban and rural areas, squeezing state budgets and competing with education. It’s not a comeuppance but a bitter cost of the white majority’s willingness to accept the suffering of others, a cost of racism itself.

AS RACIALIZED AS the politics of government spending has become, the victims of this new higher education austerity include the majority of white students.
When Demos was working to build the research case for debt-free college, we partnered with a then-small online group organizing students and graduates with debt, called the Student Debt Crisis. Now more than one million members strong, the Student Debt Crisis—run by Natalia Abrams, a white Millennial grad of the University of California, Los Angeles—speaks for an indebted generation, lifting up the stories it collects in an online story bank. “We recently polled the activists on our list, and about seventy percent identify as white,” Abrams told me.
Josh Frost is thirty-nine and works full time at a news station and part time at a gas station. He pays three-quarters of his salary toward his student debt while living with his parents. Though he did everything society told him to do, he’s nearing forty but feels like adulthood is passing him by: “I’m watching everyone I know start families and buy homes,” he said.
Emilie Scott from St. Paul, Minnesota, needed to go to college to fulfill her goal of becoming a teacher. She worked four jobs while studying, to keep her borrowing low, but still graduated with $70,000 in private and public student loans. Four years after graduation, making payments of $600 a month, she has paid off $28,000, but because of interest rates close to 10 percent, her remaining balance is $65,000. “This is madness,” she says. “How can I keep up with this? And for how long?” Unfortunately for Emilie, more than three million senior citizens who still owe $86 billion in student loans can attest that the “madness” doesn’t really end. Seniors with student loans are more likely to report rationing medical care, and the government garnishes Social Security payments for seniors in default.
Robert Settle Jr. was sixty years old in 2016, and although he is completely disabled and lost all his savings in the financial crisis of 2008, he is still being sued for $60,000 in private student loans he obtained while working for a master’s degree to advance his career. Robert points his ire at the government for allowing this system to flourish, and he is eager to tell his story. “I want the entire country to see how a disabled, elderly couple is treated by our federal government!” The saddest, most common refrain in dozens of interviews and testimonials from borrowers is “I wish I had never gone to college.” If growing cynicism about higher education is the result of this sudden and total shift from public to private, then our entire society will bear the cost.
THE SUDDEN DEMISE of our public college system and the growing scourge of student debt are recent phenomena, but there may be no question that has vexed Americans for longer than why our healthcare system isn’t better: more affordable, less complex, more secure for everyone. We pay more individually and as a nation for healthcare and have worse health outcomes than our industrialized peers, all of whom have some version of publicly financed universal coverage. But the United States doesn’t—even though the closest thing we have to European-style single-payer care, Medicare for the elderly, is successful economically and popular with its beneficiaries.
(Even the theoretical opposition to universal healthcare is weaker than it seems; at the height of public opposition to the Affordable Care Act, eleven out of twelve of the bill’s provisions polled with majority support.) In the modern era, more elections have been won, lost, and fought on healthcare than on any other single issue besides the overall economy. Why can’t we fix this?
In some ways, the story of America’s healthcare dysfunction comes back to the pool. Health insurers use that exact term when they refer to the number of people in the “risk pool” of a plan. A high number minimizes the risk posed by any individual’s health costs. Whether we’re talking about insurance or drug trials or vaccines or practice improvements, in health, the key is getting everybody in. Healthcare works best as a collective endeavor, and that’s at the heart of why America’s system performs so poorly. We’ve resisted universal solutions because when it comes to healthcare, from President Truman’s first national proposal in 1943 to the present-day battles over Medicaid expansion, racism has stopped us from ever filling the pool in the first place.

UNITED STATES SENATOR Claude Pepper of Florida was a towering figure topped with bright red hair. He gained national prominence when he became President Harry Truman’s most reliable southern Democratic champion for national health insurance. According to Jill Quadagno, who tells the story in her book One Nation, Uninsured, Claude Pepper was “a farm boy from the red clay country of eastern Alabama who never saw a paved road until he went to college [and] entered public life because he believed that government could be a force to enhance the greater good.” That farm boy never would have anticipated he’d grow up to be public enemy number one of the American Medical Association. The AMA is a trade group of doctors known to most Americans now for its labeling on consumer products, but in the 1940s, it acted as a scorched-earth lobbying group whose leadership viewed any kind of insurance that mediated between the patient and the doctor with suspicion. Government insurance, with the potential for cost-rationing, portended a threat to the profitability of the entire medical profession. The AMA launched the first modern public relations and lobbying campaign to paint government insurance as a threat not to doctors’ finances, however, but to the entire American way of life. They labeled the idea socialist.
Racism gave the accusation of socialism added power. Red-baiting tapped into many white Americans’ fears about what it would mean in the United States to mandate equality: an end to white supremacy.
Segregationists regularly tried to marginalize the issue of civil rights by calling groups like the NAACP “a front and tool by subversive elements… part and parcel of the Communist conspiracy.” Claude Pepper’s cause of universal healthcare would ultimately get him lumped in with “the communist-inspired doctrine of racial integration and amalgamation.” After Pepper came out as a champion of government-funded healthcare and other liberal programs, he became a prime target in the 1950 election.
The effort to unseat him was a coordinated campaign led by a wealthy businessman who opposed Pepper’s economic liberalism, including his support for higher wages and taxes. In an early example of plutocratic dog-whistle politics (that is, using racism to further an economic agenda), a group of anti-Pepper businessmen “collected every photo of Pepper with African Americans…and charged that northern labor bosses were ‘paying ten to twenty dollars to blacks to register’ and vote for him.” The physicians’ lobby joined in, running newspaper ads with a photo of Senator Pepper with Paul Robeson, the Black actor and Communist activist. The racist red-baiting campaign worked. Universal healthcare’s biggest Senate champion lost his 1950 race by more than sixty thousand votes. The bare-knuckle assault on universal health insurance signaled the beginning of the end of the New Deal Democrats’ reign in national politics.
Liberal southern Democrats who saw the transformative potential in government action, like Claude Pepper, were a dying breed, and Harry Truman could not get the segregationist caucus of southern “Dixiecrat” Democrats in his party behind his vision of national healthcare. Truman was the first president to champion civil rights since Reconstruction, desegregating the armed forces and forming a President’s Committee on Civil Rights. The southern Democratic bloc saw the civil rights potential in his healthcare plan—which was designed to be universal, without racial discrimination—as too great a cost to bear for the benefit of bringing healthcare to their region. As Jill Quadagno writes, “If national health insurance succeeded, it would be without the support of the South.” Needless to say, it would not succeed. Truman declined to run again in 1952, and national health insurance receded from the legislative agenda for the next decade.
To be clear, the beneficiaries of Truman’s universal coverage would have been overwhelmingly white, as white people at the time made up 90 percent of the U.S. population. Few Americans, Black or white, had private insurance plans, and the recent notion that employers would provide it had yet to solidify into a nationwide expectation. The pool of national health insurance would have been mainly for white Americans, but the threat of sharing it with even a small number of Black and brown Americans helped to doom the entire plan from the start.
After the defeat of Truman’s proposal, unions increasingly pressed employers for healthcare benefits for workers and retirees. By the 1960s, as part of his “war on poverty,” President Johnson created a generous federal healthcare program for the elderly (an even whiter population than the overall population) in Medicare and a less generous patchwork for low-income people and children, Medicaid. Johnson’s Congress conceded to leave whether and how to offer Medicaid to the individual states, in a compromise with racism that curtailed the program’s reach for decades.
Medicaid was intended to insure all Americans living in poverty by 1970, but by 1985, the Robert Wood Johnson Foundation estimated that less than half of low-income families were covered. Then corporations began cutting back on offering health benefits to their employees in the 1980s, and the number of uninsured skyrocketed. As of 2020, we still have no universal health insurance.
The closest the United States has come to a universal plan is the Affordable Care Act, created by a Black president carried into office with record turnout among Black voters and passed with no congressional votes from the Republican Party. The Affordable Care Act created state-based markets for consumers to comparison-shop healthcare plans, with federal subsidies for moderate- to middle-income purchasers. It also stopped private insurance companies from some of their most unpopular practices, such as denying insurance to people with preexisting conditions, dropping customers when they were sick, and requiring young adults to leave their parents’ insurance before age twenty-six. But Congress rejected the reform ideas that would have relied the most on Americans swimming in one national pool: a federal “public option” plan and collective bargaining to lower prescription drug costs. The idea of a Truman-style national health insurer never made it to a vote. As comparatively modest as it was, Obamacare has been deeply unpopular with the majority of white voters.
White support remained under 40 percent until after the law’s namesake left office, and as of this writing, it has yet to surpass the 50 percent mark.
Blame President Obama—not for strategic missteps; blame him for being Black. Numerous social science studies have shown that racial resentment among white people spiked with the election of Barack Obama.
When the figurehead of American government became a Black man in 2009, the correlation between views on race and views on government and policy went into overdrive. Professor Michael Tesler, a political scientist at Brown University, conducted research on the way that race and racial attitudes impacted Americans’ views of the Affordable Care Act in 2010.
He concluded that whites with higher levels of racial resentment and more anti-Black stereotypes grew more opposed to healthcare reform after it became associated with President Obama. “Racial attitudes…had a considerably larger impact on our panel respondents’ health care opinions in November 2009 than they did before Barack Obama became the Democratic nominee for president,” Tesler explained in a Brown University interview. He also ran an experiment to try to disassociate health reform proposals from Obama. “The experiments…revealed that health care policies were significantly more racialized when they were framed as part of President Obama’s plan than they were for respondents told that these exact same proposals were part of President Clinton’s 1993 reform efforts.” Three researchers out of Stanford looked at anti-Black prejudice and people’s willingness to support Barack Obama and his healthcare policies.
What Eric Knowles, Brian Lowery, and Rebecca Schaumberg found was that respondents who held strong implicit biases were less likely to support Obama and more opposed to his healthcare plan, usually citing policy concerns. Like Tesler, they also tried attributing the same plan details to Bill Clinton and found that the link between healthcare opinion and prejudice dissolved. “In sum,” they wrote, “our data support the notion that racial prejudice is one factor driving opposition to Obama and his policies.” Of course, you don’t have to look far upstream to find the racialized rhetoric that filled this story in white people’s minds: in 2010, Rush Limbaugh’s line on the ACA was “This is a civil rights bill, this is reparations, whatever you want to call it.” Rep. Joe Wilson was so certain that the ACA would benefit undocumented immigrants that he shouted “You lie!” in the middle of President Obama’s State of the Union address.

RURAL AMERICA IS experiencing a quiet crisis. Rural hospitals account for one in seven jobs in their areas, but over the past ten years, 120 rural hospitals have closed, dealing a body blow to the economy and health of the country’s mostly white, overwhelmingly conservative rural communities. A quarter of the remaining rural hospitals are at risk of closing. One thing that all of the states with the highest hospital closures have in common is that their legislatures have all refused to expand Medicaid under Obamacare.
Texas leads the country in rural hospital closures, with twenty-six hospitals permanently closing or whittling down services since 2010. The state has half the hospitals it had in the 1960s. In 2013, an eighteen-month-old died after choking on a grape because her parents couldn’t reach the nearest hospital in time. The outrage from that story swept the state, but it was short-lived. What would reopen the hospitals, according to Don McBeath, an expert in rural medicine who now does government relations for a Texas network of rural hospitals called TORCH, is something that the powers that be in the state capital are dead set against, and that’s Medicaid—both fully funding Texas’s share and expanding eligibility for it, as Congress intended with the Affordable Care Act.
I caught up with McBeath by phone at the beginning of the coronavirus outbreak and asked him why Texas’s hospital system was in crisis. “There’s no question, [it’s the] uninsured in Texas,” he said, and then let his characteristic sarcasm slip in. “I mean, we’re running about twenty-five, twenty-six percent before the Affordable Care Act. And now—oh gosh, yeah, we’re down to a whopping sixteen percent. And so that is a huge problem. When you know that one out of every six, seven people walking in your hospital is not going to pay you…” I asked him if, as my research had suggested, Medicaid expansion would help shore up the rural hospital system. “I’m sure you’re aware, Texas has probably one of the narrowest Medicaid coverage programs in the country,” he said. I was aware. I’d had to double-check the figure because I couldn’t believe it was so low, but in fact, if you make as little as four thousand dollars a year, you’re considered too rich to qualify for Medicaid in Texas, and even that has exclusions, as McBeath explained.
“I hear this all the time: even some of my friends will go, ‘Oh, those lazy bums. They need to get off that Medicaid and go to work.’ And I go, ‘Excuse me? Who do you think’s on Medicaid?’ First of all, there’s no men on Medicaid, period, in Texas. No adult men, unless they have a disability and they’re poor. And there’s no non-pregnant women, I don’t care how poor they are.” Failing to insure so many people leaves a lot of unpaid medical bills in the state, and that drains the Texas hospital system. The conservative majority in the Texas legislature has been so opposed to the idea of Medicaid that they shortchange the state’s hospitals in compensating for the few (mostly pregnant women) Medicaid patients they see. Then, by rejecting Obamacare’s Medicaid expansion, they lose out on federal money that would insure about 1.5 million Texas citizens. As a result of this and some federal policies, including budget cuts in the government sequestration that the Tea Party forced during Obama’s second term, rural healthcare is rapidly disappearing. Texas politicians’ government-bashing is both ideological and strategic; they benefit politically by stopping government from having a beneficial presence in people’s lives—as white constituents’ needs mount, the claim that government is busy serving some racialized other instead of them becomes more convincing.
McBeath sighed. “The thing that we’ve seen in this state is, our politicians have so demonized the term ‘Medicaid expansion,’ that they’ll never reverse course on that as long as they’re in control. And we can prove to them all day long that [it] may be the way to go. But…we quit barking up that tree, because we’re not gonna get anywhere.” What the mostly white and male conservatives in the Texas legislature are doing to sabotage the state’s healthcare system doesn’t hurt them personally—the state provides their health insurance—but it’s costing their state millions. Before Medicaid expansion, working-class people had basically no option for health insurance; most low-wage employers don’t offer coverage, and buying it on your own cost an average of $4,000 a year for families. If you were under sixty-five, this left only Medicaid, whose rules were mostly set by the states in another southern congressional compromise, leading to an average income eligibility cap of just $8,532 for a family of three, well below the minimum wage for one earner. If you lived in a southern state, the likelihood of your being eligible was even lower.
Alabama: $3,910; Florida: $6,733; Georgia: $7,602; Mississippi: $5,647; Texas: $3,692—these are the paltry annual amounts that a parent in a southern state must earn less than in order to qualify for Medicaid in 2020; adults without children are usually ineligible. When the Affordable Care Act passed in 2010, it expanded qualification for Medicaid to 138 percent of the poverty level for all adults (about $30,000 for a family of three in 2020) and equalized eligibility rules across all states. But in 2012, a Supreme Court majority invoked states’ rights to strike down the Medicaid expansion and make it optional. Within the year, the lines were drawn in an all-too-familiar way: almost all the states of the former Confederacy refused to expand Medicaid, while most other states did. Without Medicaid expansion, people of color in those states struggle more—they are the ones most likely to be denied health benefits on the job—but white people are still the largest share of the 4.4 million working Americans who would have Medicaid if the law had been left intact. So, a states’ rights legal theory most often touted to defend segregation struck at the heart of the first Black president’s healthcare protections for working-class people of all races.
Expanding Medicaid should be a no-brainer for states, cost-wise. The federal government paid 100 percent of the cost for the first few years and 90 percent into the future. The states that expanded saw hundreds of thousands of their working-class citizens go from being uninsured—where an accident can cause bankruptcy and preventable illnesses can become fatal—to being able to afford to see a doctor. The benefits don’t stop with individual people, though. Stable Medicaid funding has allowed rural medical clinics in expansion states to thrive financially. In Arkansas, the first southern state to accept expanded Medicaid, a health clinic in one of the poorest towns in the country has constructed a new building, created jobs, and served more patients, creating measurable improvements in the community’s health. Terrence Aikens, an outreach employee at the clinic, told a reporter in 2020, “What we’ve experienced in the last few years has been nothing short of amazing.” Why wouldn’t a state’s politicians take free money to have such amazing health and economic outcomes in their communities, including rural ones with disproportionate conservative representation and fewer options for economic activity? It’s not that it’s unpopular; expanding Medicaid has polled higher than Obamacare since the bill passed. The answer is all too familiar: racism. Colleen M. Grogan and Sunggeun (Ethan) Park of the University of Chicago found that racialization affects state Medicaid decision making. First, they found that just after the Supreme Court decision that made it optional, Medicaid expansion had robust support among Black and Latino Americans at 82 and 65 percent respectively, but slightly below-majority support among white Americans, at 46 percent. Across the country, state-level support for Medicaid expansion ranged between about 45 and 55 percent, and interestingly, some of the highest support was found in the South (where the larger Black populations drove up the average).
But that larger Black population also prompted a sense of group threat and backlash from the white power structure; Grogan and Park found “as the percent of the black population increases, the likelihood of adoption decreases.” The zero-sum story again. As with the public swimming pools, public healthcare is often a benefit that white people have little interest in sharing with their Black neighbors. Grogan and Park’s model found that it didn’t matter whether a state’s communities of color supported the expansion if the white community, with its greater political power and representation, did not. “State adoption decisions are positively related to white opinion and do not respond to nonwhite support levels,” they concluded.
When I pointed out this study to Ginny Goldman, veteran community organizer in Texas, she threw her hands up. For ten years, Ginny built a nonprofit called the Texas Organizing Project (TOP), which aimed to improve Texas’s democracy by engaging residents of color in issue activism and elections. Ginny gives one the impression of being battery-powered.
She’s a fast talker with eyes that size you up quickly—but she’s just as quick to reveal her deep compassion for people who struggle. The idea that it wouldn’t matter how much Black and brown Texas supported better healthcare if white Texas did not—“it just flies in the face of everything that we spend our time doing,” she told me. Ginny tells her members, “There’s power in numbers. You’re the majority. You have to organize. You’ve got to get out. You’ve got to vote. You’ve got to be loud!” But then, as the study I showed her suggested, “there’s actually this tiny sliver of a minority of people who will outdo you.” The thing is, that “tiny sliver” is overrepresented in Texas government.
Texas is considered a majority-people-of-color state (41 percent non-Hispanic white, 40 percent Latinx, 13 percent Black, and 5 percent Asian in 2018), but the legislature is two-thirds white and three-quarters male. After the Supreme Court Obamacare decision, TOP joined a coalition advocating for Texas to expand Medicaid above its abysmally low income cap of $3,692, so that more working- and middle-class Texans could be insured.
The upside for Texas was clear: one out of every five nonelderly Texans lacks health insurance, the highest percentage in the country. The problem is not just about poverty, either; the state has the highest uninsured rate for families earning less than $50,000 a year (who would be eligible for expanded Medicaid), but also the highest uninsured rate for those making over $100,000 (who may qualify for middle-income subsidies under Obamacare). The uninsured are disproportionately Latino, but there are over a million white non-Hispanic Texans without any healthcare coverage.
The benefits of Medicaid expansion would be widespread; the cost, minimal. But as soon as the Supreme Court made it optional for states, the state’s governor, Rick Perry, “did his announcement, which was like, ‘Hell, no, I’m not taking the money,’ ” Ginny recalled.
I asked her what arguments Texas’s leaders made for turning away free money to help solve that state’s worst-in-the-nation health coverage crisis.
She recalled accompanying TOP members to state legislative hearings, “and Republican legislators would say, you know, ‘These folks are gonna come out of the woodwork like bugs.’ These freeloaders who are just gonna come out from everywhere. And comparing them to insects and bugs…rodents.
Asking for stuff.” This particular racist trope, the language of infestation, is usually deployed against immigrants and, in the current immigration debates, those from Latin America. In Texas, Latinx people are the largest group of uninsured. But Ginny saw how some in the very community that would be most helped by Medicaid expansion were inclined to oppose it because of anti-Black stereotypes about Medicaid. “When we first started to collect postcards and signatures and support around this, I remember Latino organizers coming back to the office and [saying], ‘We’re not doing very well.’ Because a lot of the folks [in] Latino communities were like, ‘We don’t want a handout. We work for what we earn. We’re not asking for anything for free.’ ” That’s a late-stage benefit of a forty-year campaign to defund and degrade public benefits; in the end, they’re so stigmatized that people whose lives would be transformed by them don’t even want them for fear of sharing the stigma.
“I think Republicans were pretty good at what they’re always good at, right? Pitting communities against each other and using a lot of dog whistle politics around, like, ‘Medicaid equates to Black freeloading people,’ ” Ginny said with a sigh. “And unfortunately, that’s resonating. Or hitting some of the underlying tension that already exists between African American and Latino communities. So, we had to fight a lot against that.” The Texas Organizing Project campaigned within the deserving-versus-undeserving narrative it was dealt. “Our talking points were overwhelmingly, like, ‘We work. Why can’t we have healthcare?’ So, we always had to go out of our way to kind of counter this sense of, you know, just this idea that people are wanting ‘free stuff’ and not carrying their part.” If you’re trying to understand Texas’s healthcare crisis, Ginny believes you can’t ignore conservative white Texans’ resistance to the first Black president. “Abbott, who’s the governor now, was the attorney general then.
And he would just say, ‘I get up in the morning. I go to the office. I sue President Obama, and I come home.’ It was just so apparent that it was like, ‘We can’t have Obama having any success, even if it makes perfect sense, economic sense and healthcare sense, for the state. There’s just no way it’s happening.’ ” (Interestingly, the strategy to deny President Obama victories did make some inroads, even with the president’s base. Ginny recalls that “when people still weren’t getting healthcare, especially first-time voters and communities of color that we were organizing, they blamed it on Obama at first….Because the Republicans’ megaphone’s just always bigger than ours.”) The struggle for affordable healthcare for the working class continues in Texas and, as of summer 2020, twelve other states. Meanwhile, conservatives threaten Obamacare with court challenges, legislative repeals, and regulatory rollbacks. The healthcare advocates I spoke with could still remember people they knew who died because, without insurance, they couldn’t afford to see a doctor. Toward the end of our conversation, Ginny began to cry while recalling “a funeral of a woman who was the daughter of one of the first people I ever organized with. This African American man whose daughter died because she had lupus,” a highly treatable disease.
“She had to wait and wait and wait until it got really, really bad. Then she could go to the emergency room. Then they would give her something, then they would turn her out. And she can never get ongoing care. She can never get medication or any treatment. She worked all sorts of low-wage jobs, from one to another. She has kids. And she’s dead, at my age. In her late forties.” Ginny’s voice shook with both grief and rage. “People are dying because they would choose…a political victory over an actual victory that serves millions of people.”

FOR RON POLLACK, a fifty-year leader in the fight for healthcare, housing, and other antipoverty measures, the person he won’t forget is John, a Texan he met over twenty years ago, during the Clinton administration’s lost battle for universal healthcare. “John told us the story about his wife,” Ron told me.
“He worked for a radio station. Wasn’t making a lot of money. He talked about his wife being a very strong person. And she started getting stomach pains, and John would say to his wife, you know, ‘We’ve got to see a doctor.’ “And she said, ‘Well, we don’t have the money to see a doctor. I’m gonna be okay.’ And this persisted for months, until one day, she collapsed in excruciating pain, and she was taken by ambulance to a hospital. And they found this huge tumor in her stomach. And it was a very advanced tumor. And it was clear that she was going to pass away from this. And while she was at her deathbed, we were gonna start the bus trip” across Texas to advocate for universal healthcare.
“And John was encouraged by his wife to get on the bus for three weeks to join us. And John said, ‘Well, I don’t feel comfortable leaving you like this.’ She said, ‘The last favor I have of you is that you join the bus trip.
And tell my story, so that nobody experiences what we have experienced.’ “We were about halfway in the trip, and his wife died. And he flew back to bury her. And then he flew back to rejoin the bus. And he told his story at the White House. And of course, there wasn’t a dry eye in the place,” Ron said, his voice getting weak with the weight of the memory.
He cleared his throat. “So, yes, there have been quite a few people I have met—too numerous to count—who paid the ultimate price for their inability to get health coverage.” After a moment, I asked him about John’s racial background.
“He was white.” —
RON POLLACK WAS born in 1944, during the golden era when the government erected the structures that created a white middle class composed of the sons and daughters of millions of European immigrants, like his own parents. He became a young man during the brief part of that century when all three branches of the U.S. government took major leaps toward realizing the American Dream for all her children, regardless of race or status—and it forever shaped him.
“I grew up in the period when, in the 1960s, there was this tiny glimmer of hope that we were going to do something serious about poverty in America.” Ron was the student body president at the (then tuition-free) City University of New York when his classmate Andrew Goodman was murdered on a trip to Mississippi during Freedom Summer, and from then on, he threw himself into what would become a lifelong fight to extend America’s promise to all. He founded the country’s premier anti-hunger organization, the Food Research and Action Center, whose litigation and advocacy helped create the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) and expand the food stamp program that feeds nearly forty million Americans. His organization Families USA was at the forefront of both the Clinton and Obama healthcare reform battles that eventually won healthcare for twenty-five million people, and at seventy-six years old, he has lately turned his passions to preventing evictions and ensuring affordable housing.
Ron Pollack is not a household name by any means, but he has dedicated his life to extending the public web of protection around more Americans, and tens of millions of Americans are better off because he has.
When I got the chance to talk with him, I wasn’t so sure that he’d be willing to speak about the often-invisible headwinds of racism in his antipoverty efforts; I’ve known many a white liberal who was uncomfortable talking about racism’s impact on politics. But he never shied away from naming racism, and when I asked him my final question, he shared with me the vision for America that has guided him through five decades of advocacy.
Ron said that, in his vision, “nobody in this country is deprived of the necessities of life—whether it’s food, whether it’s healthcare, whether it’s housing—in a country that’s as wealthy as ours.” To realize this vision, he said, “I wish there was a greater consciousness about how we’re all in this together. For those people who are opposed to [government aid] out of an animus to people who look different than they are…that lack of social solidarity causes harm to their own communities.
“If we didn’t have these sharp divisions based on race, we could make enormous progress in terms of making sure that people are not hurting as badly as they are, [or] deprived of what clearly are the necessities of life.
And I would like to think it was possible if we had a sense of social solidarity.” Ron Pollack’s diagnosis of the United States—that we suffer because our society was raised deficient in social solidarity—struck me as profoundly true, and, true to my optimistic nature, I suppose, I found the insight galvanizing. I began to think of all that a newfound solidarity could yield for our country, so young, so full of promise and power. Starting with healthcare and public college, I began to see the Solidarity Dividends waiting to be unlocked if more people would stop buying the old zero-sum story that elites use to keep us from investing in one another.

THE STORIES OF the declining public university and the shuttered public pools have parallels across the country, from the infrastructure we have (collapsing bridges and poisoned pipes) to the public goods we are desperately missing (universal childcare and healthcare). There’s a way to tell these stories without race, as my early colleagues at Demos attempted to do. That story makes sense—up to a point. But when I became the organization’s president and had the chance to lead a new research project over a decade later, the Race-Class Narrative Project, I made sure to add race to the investigation. Working with my old law professor Ian Haney López and the linguistics expert Anat Shenker-Osorio, I discovered that if you try to convince anyone but the most committed progressives (disproportionately people of color) about big public solutions without addressing race, most will agree…right up until they hear the countermessage that does talk, even implicitly, about race. Racial scapegoating about “illegals,” drugs, gangs, and riots undermines public support for working together. Our research showed that color-blind approaches that ignored racism didn’t beat the scapegoating zero-sum story; we had to be honest about racism’s role in dividing us in order to call people to their higher ideals.
A coalition of grassroots organizations in Minnesota decided to use our research to develop messages during the 2018 election, an election they knew would be a referendum on demographic change in their historically white state. Once a northern Tea Party hotbed, Minnesota had more than its share of right-wing politicians who campaigned on fear about the state’s growing Latinx and African Muslim population of refugees and immigrants. The coalition’s organizers used insights from our Race-Class Narrative Project to develop a campaign they called “Greater Than Fear.” Minnesota organizer Doran Schrantz recalled, “We decided to emphasize the joy and benefit of things we do together, like sharing meals at a potluck and the feeling of shared accomplishment that every Minnesotan knows: digging each other out of the snow.” But instead of an all-white cast for these campaign videos and posters, the Greater Than Fear storytelling included faces of all shades pushing a minivan out of the snow, and Muslim women in hijabs sharing dishes of homemade food with white women at a potluck. The narratives weren’t communicating simply feel-good diversity; they offered a familiar and values-based way for door-knocking canvassers and volunteers at phone banks to open up conversations about what all Minnesotans needed: healthcare, better infrastructure, more funding for education. And the campaign specifically called out the opposition strategy of dog-whistling by using humor, asking voters to send in pictures of their dogs to show real dogs standing up to racist dog-whistling. The Minnesota Democratic Farmer-Labor Party (the state Democratic Party) gubernatorial ticket adopted the Greater Than Fear messaging for its campaign in its final weeks, and won—the Farmer-Labor Party also took control of the statehouse, allowing the advocates in the Greater Than Fear coalition to have a bigger say in shaping the state’s budget.
The budget that emerged was a repudiation of the conservative policy of prioritizing “corporations, the wealthy, and insurance companies [while] underfunding our schools, transportation, health care,” according to a statement by the majority leader when the budget was introduced. The House Speaker boasted that the budget would freeze college tuition and take steps toward enacting a public healthcare option, “honest investments” for families “no matter where in the state you live, or what you look like.” This kind of cross-racial public investment could be a new governing ethos for America, a country that has always linked what we give to who you are. As we become a nation with no racial majority, we will need more of this spirit to create a new basis for investing in ourselves, broadly and without prejudice.

Chapter 4
IGNORING THE CANARY
When Janice and Isaiah Tomlin married in 1977, they promised each other that by their first anniversary, they’d become the first people in their families to own their own home. Janice’s mother had always dreamed of owning a house, but segregation limited the options for Black people in Wilmington, North Carolina. On the salaries of an elementary school teacher and an auto mechanic, the Tomlins saved up about a thousand dollars for a down payment and bought a two-bedroom, bright blue house on Creecy Avenue for $11,500. They moved in right before their anniversary.
All the houses on Creecy have inviting front porches, and that’s where Janice was sitting when I met her and Isaiah in the summer of 2020. A consumer attorney I’ve known for nearly two decades, Mike Calhoun, introduced me to them and connected us by a video call on their porch instead of the trip I’d planned to take to visit them, due to the pandemic.
Janice told me about her introduction to the neighborhood: One night before they moved in, she was in the house painting over the dreadful lime green the previous owners had chosen for all the rooms, when she needed to make a call. The Tomlins’ phone wasn’t hooked up yet, but a nice white lady who lived next door offered to let her come in and use hers to call Isaiah. “And I thought, ‘Gee, this is so nice. This is so kind,’ ” she recalled.
Her soon-to-be neighbor was out on the porch when Isaiah arrived later that night. Tall and deep mahogany–skinned, Isaiah was decidedly not what the white neighbor was expecting to see. Janice is fair-skinned and had just pressed her hair straight. “If you could have seen her face.” Janice whistled as Isaiah laughed at the memory. “I will tell you that she was gone within months. Never spoke to us again, and was gone within months…and I thought, ‘Oh, did we do that?’ ” She nodded to herself. “We did.” More Black families moved in, and by the late 1990s, Janice said, it was a Black neighborhood. Janice and Isaiah were raising two children and kept improving their dream house. As the equity grew and the neighborhood changed, the phone calls started coming in from people marketing refinance loans. It just so happened that Janice was determined to send her children to parochial school, and like many parents, she looked to their nest egg to help finance the tuition.
In the early spring of 1998, a company called Chase Mortgage Brokers had called the Tomlins multiple times, so Janice made an appointment to go in. “The very first meeting, the lady was so—I look back now—exceptionally kind. Just overbearing with kindness and patience,” Janice recalled. “And I’m a question person. I ask a lot of questions. And she sat and she listened to me.” For all Janice’s questions, however, there were some answers she wouldn’t get from Chase—not until they showed up as evidence in a class-action predatory lending lawsuit. It turns out that Chase held itself out as a broker, someone a borrower hires to find them the best loan and who has a fiduciary duty to the borrower under North Carolina law. But Chase had a secret arrangement with just one lending company, Emergent. The exceptionally kind salesperson received kickbacks for every Emergent loan she sold, and no matter how low an interest rate a borrower might have qualified for, if the salesperson could sell them a higher-priced loan, she received even more of a kickback.
The salesperson at Chase also hid from Janice the extent of the high-cost fees that would be taken out of the Tomlins’ home equity at signing.
The included costs amounted to 12 percent of the loan on day one.
Unbeknownst to them, the Tomlins had refinanced their dream home with a subprime mortgage with an annual interest rate in the double digits, unrelated to their credit scores. This last point was important, because the official justification for the exorbitant cost of subprime mortgages was that higher costs were necessary for lenders to “price for the risk” of defaults by borrowers with poor credit.
But lenders have no duty to sell you the best rate you qualify for—the limit is whatever they can get away with. I asked Janice, “Had you ever been late on your payments or missed a mortgage payment?” Her warm voice turned firm. “Never.” “Never,” I repeated. “That was very important to you?” “Very important. Never late,” she said, shaking her head emphatically.
Subprime would become a household word during the global financial crisis of 2008. I first came across the term when I started working at Demos in 2002 and when, as part of my outreach about our consumer debt research, I went to community meetings with dozens of borrowers just like the Tomlins, disproportionately Black homeowners who were the first to be targeted by mortgage brokers and lenders. The loans are called subprime because they’re designed to be sold to borrowers who have lower-than-prime credit scores. That’s the idea, but it wasn’t the practice. An analysis conducted for the Wall Street Journal in 2007 showed that the majority of subprime loans were going to people who could have qualified for less expensive prime loans. So, if the loans weren’t defined by the borrowers’ credit scores, what did subprime loans all have in common? They had higher interest rates and fees, meaning they were more profitable for the lender, and because we’re talking about five- and six-figure mortgage debt, those higher rates meant massively higher debt burdens for the borrower.
If you sell someone a prime-rate, 5 percent annual percentage rate (APR) thirty-year mortgage in the amount of $200,000, they’ll pay you back an additional $186,512—93 percent of what they borrowed—for the privilege of spreading payments out over thirty years. If you can manage to sell that same person a subprime loan with a 9 percent interest rate, you can collect $379,328 on top of the $200,000 repayment, nearly twice over what they borrowed. The public policy justification for allowing subprime loans was that they made the American Dream of homeownership possible for people who did not meet the credit standards to get a cheaper prime mortgage. But the subprime loans we started to see in the early 2000s were primarily marketed to existing homeowners, not people looking to buy— and they usually left the borrower worse off than before the loan. Instead of getting striving people into homeownership, the loans often wound up pushing existing homeowners out. The refinance loans stripped homeowners of equity they had built up over years of mortgage payments.
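The gap between those two figures follows directly from the standard fixed-rate amortization formula. A minimal sketch of the calculation behind the $200,000 example above (the function name is my own, not from the text):

```python
def total_interest(principal: float, annual_rate: float, years: int = 30) -> float:
    """Total interest paid over the life of a fixed-rate, fully amortizing mortgage."""
    r = annual_rate / 12                 # monthly interest rate
    n = years * 12                       # number of monthly payments
    monthly_payment = principal * r / (1 - (1 + r) ** -n)
    return monthly_payment * n - principal

# The same $200,000 loan at a prime vs. a subprime rate:
prime = total_interest(200_000, 0.05)    # ≈ $186,512 in interest
subprime = total_interest(200_000, 0.09) # ≈ $379,328 in interest
print(round(prime), round(subprime))
```

Run against the chapter’s numbers, the formula reproduces them: roughly $186,512 in interest at 5 percent APR versus roughly $379,328 at 9 percent, a difference of nearly $193,000 on an identical principal.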
That’s why these diseased loans were tested first on the segment of Americans least respected by the financial sector and least protected by lawmakers: Black and brown families.
In the latter half of the 1990s, the share of mortgages that were subprime nearly doubled. By 2000, half of the refinance loans issued in majority-Black neighborhoods were subprime. Between 2004 and 2008, Black and Latinx homeowners with good credit scores were three times as likely as whites with similar credit scores to have higher-rate mortgages. A 2014 review of the pre-crash mortgage market in seven metropolitan areas found that when controlling for credit score, loan-to-value, debt-to-income ratios, and other risk factors, Black and Latinx homeowners were 103 percent and 78 percent, respectively, more likely to receive high-cost mortgages.

AT THE CLOSING, Janice saw that her interest rate was high, but the sales rep reassured her. “She told me…that I could come back in and we could lower the interest rate once I had paid on it for a certain amount of time. [I]t was like a perk for me; the interest rate will be lower. So, I thought, ‘Well, this is good. It sounds like she’s doing everything on my behalf.’ ” Then there was the God part. Janice’s sweet voice grew an edge as she said, “She had figured me out.” Janice had told the broker that they were looking to refinance in order to free up money to pay for their children’s Christian schooling. “And so, she talked about her Christian faith, which resonated with me. I remember the crosses that she had in the office.” The sales rep had touched Janice’s hand and told her, “I know that God must have sent you to us. We’re here for you.” Janice shook her head at the memory of “this person who is talking about God…and is trying to show me that she’s giving me probably the best deal that I can get… “I wasn’t taught to doubt people who presented themselves as God-fearing people. So, I didn’t doubt.” She and Isaiah signed the paperwork.
Soon after, the address the Tomlins sent their monthly payments to began to change, frequently—the loans were being repeatedly sold—but, Janice says, “we were just trucking along and making the payments.” It wasn’t until Isaiah had a chance encounter with a local attorney that the Tomlins learned just how predatory their refinance loan was. That lawyer, Mallard “Mal” Maynard, was helping Isaiah recover a stolen tractor when Isaiah mentioned his refinance loan. (Unlike his wife, Isaiah had never had a good feeling about the salesperson or Chase.) Maynard asked if they still had the paperwork, and Janice did. “Of course I do,” she’d told her husband. “I’m a schoolteacher. I keep papers.” Mal Maynard had joined our conversation on the porch. “I got copies of his paperwork and it just blew me away,” he told me.
“What blew you away about it, Mal?” I asked. “It wasn’t the monthly payment, right, because it sounds like the monthly payment was reasonable.” “It was the equity stripping. It was the yield-spread premiums. It was the origination fee. It was the duplicative fees. They had lots of duplicative fees with words that really made no sense as to what they were for.” Chase had even charged the Tomlins a discount fee, which is what a borrower might pay a broker to get a lower rate than they qualify for—which was absurd, given that the Tomlins’ rate was higher than they qualified for.
“But Mal, they weren’t the only ones, right?” “Oh, no. That was just the tip of the iceberg when I ran into Janice and Isaiah. Started looking at the Registered Deeds Office and tracking down dozens and then hundreds of other similar loans,” Maynard said. He pointed to Janice. “She’s being modest. She was the lead plaintiff [for] thirteen hundred folks [whose homes] she helped save…who had gone through this same thing.” Overcoming their shame to be named plaintiffs in a class-action lawsuit wasn’t easy for Janice and Isaiah. “In the courtroom…there would be someone to make a mockery of my ignorance. That was really hard to swallow,” Janice admitted. “But I knew that, in the end, there would be others who would benefit from it.” As I listened to their story, my mind kept wandering back to what I’d seen early in my career, and to the millions of families who weren’t so lucky. The way the Tomlins’ story began—an American Dream deferred by segregation, white flight, a Black neighborhood targeted by unscrupulous lenders, and the steering of responsible Black homeowners into equity-stripping predatory mortgages—could have been cut and pasted from a report of hundreds of Black middle-class neighborhoods across the country in that era. In the early 2000s, the economy had recovered quickly from the tech bubble and 9/11–related recession, housing prices seemed to know no limit, and financial sector profits were soaring. At Demos, when we did our first visit to Congress with copies of our research report on debt called Borrowing to Make Ends Meet, a Democratic Senate staffer told us point-blank not to bother, that the banks “owned the place.” We were laughed out of the offices of Republican members of Congress with our passé talk about regulation. The consensus to loosen the rules on Wall Street investment houses and consumer banks had become bipartisan during the Clinton administration, and the proof, it seemed, was in the profits.
But through my job, I had a front-row seat to what was really driving it all, a tragedy playing out in Black and brown communities that would later take center stage in the global economy.
I’ll never forget a trip I took to the Mount Pleasant neighborhood in Cleveland, Ohio. On a leafy street, residents told me how, a few years back, house by house, each homeowner—over 90 percent of them Black, with a few Latinx and South Asian immigrants—had opened an envelope, answered a knock on the door, or taken a call from someone with an offer to help consolidate their debt or lower their bills. In the ensuing years, with quiet shame and in loud public hearings—with supportive aldermen, pastors, and lawyers outmatched by the indifference of bankers and regulators with the power to help them—residents had fought to keep their homes. But by 2007, the block I was on had only two or three houses still in the hands of their rightful owners. I excused myself from the group and walked around the corner, barely getting out of their eyesight in time to fall to my knees, chest heaving. It was the weight of the history, the scale of the theft, and how powerless we had proven to change any of it. These were properties that meant everything to people whose ancestors—grandparents, in some cases—had been sold as property. To this day, it’s hard for me to think about it without emotion.
That’s why, as I looked at the Tomlins smiling at each other on their porch more than a decade later, it was like I’d slipped into the world as it could have been—as it should have been. With the relative rarity of a lightning strike—an available and dogged lawyer, a well-timed suit in a state with good consumer protections, and a particularly corrupt and inept defendant—the Tomlins had saved their home and protected more than a thousand other working- and middle-class homeowners in their state. Had more Black families targeted by subprime lenders in those early years found the Tomlins’ happy ending, history would have turned. The mortgage market would have learned its lesson about subprime mortgages earlier in the 2000s, and the worst excesses would have been checked before they spun out of control and toppled the entire economy, causing $19.2 trillion in lost household wealth and eight million lost jobs—and that was just in the United States. The earliest predatory mortgage lending victims, disproportionately Black, were the canaries in the coal mine, but their warning went unheeded.

BEFORE THE CORONAVIRUS-DRIVEN downturn of 2020, the financial crisis of 2008 (and the ensuing Great Recession) was widely considered to be the single most traumatic event in the financial life of the nation since the Great Depression. Less commonly known is that we’re beginning to understand how the tail effects may even eclipse those of the Depression in terms of lost wealth. In 2017, the country had four hundred thousand fewer homeowners than in 2006, although the population had grown by some eight million households since then. Homeownership rates reversed their historic pattern of steady increases, shrinking from 69 percent in 2004 to less than 64 percent in 2017. More than a decade after the crash, the typical family in their prime years has still not recovered the level of wealth held by people the same age in previous generations. Families headed by Millennials, who entered adulthood during the Great Recession, still have 34 percent less wealth than previous generations. They will likely never catch up.
The blast radius of foreclosures from the explosion on Wall Street was far-reaching and permanent. By way of comparison, in 2001, about 183,000 home foreclosures were reported across the nation. By 2008, a record 983,000. In 2010, a new record: more than 1,178,000. An accounting on the tenth anniversary of the crash showed 5.6 million foreclosed homes during the Great Recession. Although homeowners of color were represented out of proportion to their numbers in society, the majority of these foreclosed homes belonged to white people.
From lost homes, the losses cascaded out. Nearby houses lost value: some 95 million households near foreclosed homes lost an estimated $2.2 trillion in property value. Local communities brought in less tax revenue, which led to widely felt cuts in school funding, vital services, and public jobs. It was a contagion, and not just metaphorically. Even the dispossessed homes themselves spread sickness, as demolitions of vacant houses sent decades-dormant lead toxins as far as the wind would carry them; in Detroit, a surge in childhood lead poisoning would mark the decade after the recession. One study identified home foreclosures as the likely cause of a sharp rise in suicides during 2005–2010, while another found that the Great Recession triggered “declining fertility and self-rated health, and increasing morbidity, psychological distress, and suicide” in the United States. In 2017, an examination of all third through eighth graders in the United States revealed “significantly reduced student achievement in math and English language arts” linked to the Great Recession. Between December 2007 and early 2010, 8.7 million jobs were destroyed.

AMY ROGERS IS a white woman whose life has been forever changed because of the Great Recession. In 2001, she and her husband bought their first home, a three-bedroom house she describes as funky and long on character. She had her own savings and the money her late parents had left her ($50,000) to put into the purchase to keep their mortgage payments modest. By 2005, Amy had a great job for the first time in her life, one with a good salary and benefits, working for her county government. Then she discovered that without her knowledge, her husband had pulled all the equity out of the house and used it for his own purposes. Shocked, she began divorce proceedings. In 2007, the divorce became final, and Amy got the house refinanced in her own name. But she had to buy out her husband’s debt to do so. “Having had the house for seven years,” she said, “we owed more than we had paid. I took on $275,000 or so of debt.” As it turns out, the booming county Amy worked for was the home of the city whose fortunes had risen with the rise of the financial sector in the 1990s and 2000s, nicknamed “Banktown.” Charlotte, North Carolina, was the headquarters for large national banks that were growing by leaps and bounds in the lead-up to the crisis, including Bank of America and Wachovia. But the year after her divorce became final, all the construction of houses and office towers ground to a halt. The city began to cut back. By 2009, government employees like Amy were feeling it hard. “The first thing they did was reduce our benefits, and take away our holidays, and put us on furlough without pay. Then they gave us pay cuts. Then, after amputating us one limb at a time, I got fired.” She was able to get COBRA to extend her healthcare, but the monthly cost soared from a subsidized $80 to $779. 
Her mortgage payment, on a conventional thirty-year mortgage, stayed at $1,200 a month—manageable, but just barely, based on unemployment insurance, alimony, and the little bits of income she could pull together through freelance jobs.
As part of its belt-tightening, the local government reassessed properties and revalued Amy’s $255,000 house at $414,000, which almost tripled her property taxes. Six months after she was laid off, Amy realized she wasn’t going to be able to manage both her mortgage and her increased property taxes. Things were “starting to snowball,” she said.
She called the owner of her mortgage, Wells Fargo, and told them that although she had not yet been late with a payment, her financial situation had changed and she wanted to sign up for one of the programs it offered to reduce borrowers’ mortgage payments. “Then they start putting me through the wringer,” Amy said. Although she had no credit problems, Wells Fargo told her she needed to attend credit counseling. “Okay, fine, I go,” Amy said. “I went to the ‘Save Your Housing’ fair. I went to the Housing Finance Agency. I went and did every single thing that was out there to do. Wells Fargo had me jumping through hoops for three years.” The Obama administration had started a number of programs, she recalled, to enable people to extend the term of their mortgage or, in some cases, reduce the interest rate or even the principal. “I went for everything,” Amy said. “And everywhere I went, they blocked me and said, ‘You can’t apply for this [program] if you’re under consideration for that. You can’t apply for that while you’re under consideration for this. Oh, that program is over.’ And it went around and around for months and months and months.
“The minute you go and you ask for help, even if you’re not late [making your payments], your credit score drops one hundred points. So, what that meant was that my Exxon card that I’d had since 1984, [which] had six or seven hundred dollars on it for oil changes and tires—all of a sudden, they jack up the interest rate to thirty-five percent. I’ve never been late, but I’m now a ‘high-risk borrower.’ “My unemployment’s running out, and I’m selling jewelry to make the mortgage payments. And I realize they’re going to take the house anyway.” Amy put her home up for sale. “I owed altogether two hundred seventy-five thousand, and we brought them offers within ten thousand of that, and Wells Fargo turned every one of them down.” The bank would not accept a sale price any less than the full amount owed, nor would they take possession of the house instead of foreclosing on it.
“I did everything I could to avoid foreclosure,” said Amy, “knowing what that would do to my credit and my employability. So, [at this point,] I’m fifty-five years old. I’m doing piecemeal work everywhere and paying self-employment tax and COBRA and just going down in flames.” In 2013, Wells Fargo finally foreclosed on Amy Rogers. She was one of 679,923 Americans to experience foreclosure that year. But the shocks didn’t end there. When her house went up for auction, Wells Fargo bought it—from itself—for $304,000. Why such a high price for a house that was selling for $275,000? “Because every time they sent me a letter from a lawyer or made a phone call, they billed me,” Amy said. “They wanted to recoup all their costs to foreclose on me.” As the final step in the foreclosure process, “the sheriff in his big hat and his big car drives up to your house in broad, damn daylight, comes and knocks on your door and serves you with an eviction notice,” said Amy.
“That is a dark day.” She sold or gave away most of her belongings and moved into a small rental condo, where she lived until she had to move again in 2017. When Amy shared these details a year later, she said the rent on her new place was affordable, but the rundown neighborhood was gentrifying, so she feared the landlord would soon raise the rent. “I pay over ten thousand dollars a year in rent,” she said. “I earn about twenty-four thousand.” “When they foreclosed on me for the house,” said Amy, “they got [everything]. I got zero. They ruined my credit. And they ruined my employability, because any employer you go to work for now does a credit check on you. I couldn’t get a job for ten dollars an hour in Costco. I tried.
“I paid into that house for thirteen years. I’ve worked every day of my life since I’m seventeen years old. And now, today, I’m sixty-three years old, I’m unemployable, I work three part-time jobs, and I’m praying I can last long enough to get Medicare so I’ll have some health coverage.” Every part of Amy’s story was one that I knew well from my research and advocacy at Demos, from the jacked-up credit card rate, to the insufficient foreclosure prevention programs (I lobbied staff at the U.S.
Treasury Department to improve them), to the job discrimination against people with weak credit (we wrote a bill banning the practice). Not a single part of her story surprised me, but it moved me still. I was grateful that she’d been willing to share her story with me, knowing it would be made public. There’s so much shame involved in being in debt. In my experience with the bankers on the other end, however, shame is hard to find, even over their discriminatory and deceptive practices. Amy sighed. “I’ve kept it under wraps for ten years,” she said, “too afraid of the way the world would perceive me.” “If I could leave anybody who’s gone through this with one message,” she said, “it is this: Do not say, ‘I lost my house.’ You did not lose your house. It was taken away from you.” The people who took Amy’s house could do so with impunity in 2013 only because they had been doing it to homeowners of color for over a decade already, and had built the practices, corporate cultures, and legal and regulatory loopholes to enable that plunder back when few people cared.
Subprime mortgages and the attitude of lender irresponsibility they fomented would, we now know, later spread throughout the housing market.
But to truly understand where the crisis began, we have to go back earlier than the 1990s, to the reason it was so easy for lenders to target homeowners of color in the first place.

THE EXCLUSION OF free people of color from the mainstream American economy began as soon as Black people emerged from slavery after the Civil War.
Black people were essentially prohibited from using white financial institutions, so Congress created a separate and thoroughly unequal Freedman’s Bank, managed (and ultimately mismanaged into failure) by white trustees. In the century that followed, the pattern of legally enforced exclusion continued in every segment of society, from finance to education to employment to housing. In fact, the New Deal era of the early 1930s—a period of tremendous expansion of government action to help Americans achieve financial security—was also a period in which the federal government cemented residential segregation through both practice and regulation.
“We think of the New Deal and all the great things that came out of it— and there were many—but what we don’t talk about nearly as often is the extent to which those great things were structured in ways that made sure people of color didn’t have access to them,” said Debby Goldberg, a vice president of the National Fair Housing Alliance. I worked closely with the advocates at NFHA back in the early 2000s. Debby is an advocate who spends her days fighting the latest attempts to roll back commitments to fair housing, and she has an encyclopedic knowledge of the history of American homeownership, and the housing policies that had paved the way for the subprime mortgage crisis. In 1933, during the Great Depression, the U.S. government created the Home Owners’ Loan Corporation. Debby explained, “Its role was to buy up mortgages that were in foreclosure and refinance them, and put people back on their feet. It did a huge amount of that activity—billions of dollars’ [worth] within a short period of time in the thirties.” Perhaps this agency’s most lasting contribution was the creation of residential security maps, which used different colors to designate the level of supposed investment risk in individual neighborhoods. A primary criterion for defining a neighborhood’s risk was the race of its residents, with people of color considered the riskiest. These neighborhoods were identified by red shading to warn lenders not to invest there—the birth of redlining. (A typical assessment reads: “The neighborhood is graded ‘D’ because of its concentration of negroes, but the section may improve to a third class area as this element is forced out.”) The redlining maps were subsequently used by the Federal Housing Administration, created in 1934. 
In its early years, Goldberg explained, the FHA subsidized the purchase of housing “in a way that made it very easy for working-class white people, who had previously been renters and may never have had any expectation of becoming a homeowner, to move to the suburbs and become a homeowner because it was often cheaper than renting. Both the structure and the interest rate of the mortgage made it possible for people to do that with very little savings and relatively low income.
“But the FHA would not make or guarantee mortgages for borrowers of color,” she said. “It would guarantee mortgages for developers who were building subdivisions, but only on the condition that they include deed restrictions preventing any of those homes from being sold to people of color. Here we have this structure that facilitated…white homeownership, and therefore the creation of white wealth at a heretofore unprecedented scale—and [that] explicitly prevented people of color from having those same benefits. To a very large degree, this was the genesis of the incredible racial wealth gap we have today.” In 2016, the most recent available authoritative data, the typical white family in America had about $171,000 in wealth, mostly from homeownership—that’s about ten times that of Black families ($17,600) and eight times that of Latinx families ($20,700). That kind of wealth is self-perpetuating. I thought of Amy, who on a modest income had still been able to afford a house with a low monthly payment largely because of $50,000 from her parents.
Learning this history was crucial to me in my early days at Demos. In order to help craft new laws to change the world we inhabited, I needed to understand how government decisions had shaped it. I underwent a steady process of unlearning some of the myths about progressive victories like the New Deal and the GI Bill, achievements that I understood to have built the great American middle class. The government agencies most responsible for the vast increase in homeownership—from about 40 percent of Americans in 1920 to about 62 percent in 1960—were also responsible for the exclusion of people of color from this life-changing economic opportunity. Of all the African Americans in the United States during the decades between 1930 and 1960, fewer than 2 percent were able to get a home loan from the Veterans Administration or the Federal Housing Administration.
The civil rights movement brought changes to housing laws, but lending practices changed more slowly. For instance, although the Fair Housing Act of 1968 outlawed racially discriminatory practices by banks, it would take another twenty-four years for the Federal Reserve System, the central bank of the United States, to monitor and (spottily) enforce the law.
It is little wonder, then, that a fringe lending market flourished to offer credit and reap profits from people of color who were excluded from the mainstream financial system. These included rent-to-own contracts for household appliances and furniture and houses bought on contract. These contracts enabled Black people to buy on the installment plan—and lose everything if they missed a single payment. Unlike a conventional mortgage, land contracts did not allow buyers to build equity; indeed, they owned nothing until the final payment was made. And because the loans were unregulated, peddlers of these early forms of subprime mortgages could charge whatever exorbitant rates they chose. My great-grandmother bought the apartment building where I was born on a predatory contract.
In the 1970s, residents of redlined neighborhoods—including, actually, some white working-class as well as African American and Latinx activists—banded together to demand access to credit and economic investment in their communities. These local groups were backed and coordinated by community organizing networks such as the Chicago-based National People’s Action and by national organizations that ranged from the National Urban League to the Catholic Church.
As a result of this activism, Congress passed reforms to the discriminatory lending market in the 1970s, finally giving residents tools to combat redlining. One reform was the 1975 Home Mortgage Disclosure Act (HMDA), which required financial institutions to make public the number and size of mortgages and home loans they made in each zip code or census tract, so that patterns of discrimination could be easily identified. Another was the 1977 Community Reinvestment Act (CRA), which required financial institutions to make investments in any community from which they received deposits. (For example, a 1974 survey of federally insured Chicago-area banks revealed that in several communities of color, for each dollar residents deposited in local banks, the community received only one penny in loans.) The CRA enabled community and civil rights groups to monitor whether banks were fulfilling their obligations—and to challenge the banks when they fell short. In 1978, two of the earliest formal complaints against banks that failed to meet their CRA obligations resulted in a total of more than $20 million in home loans for low-income residents of Brooklyn, as well as the St. Louis area, where financial institutions admitted they had been making loans in only one neighborhood—a neighborhood that was all white. But 1978 also saw an ominous sign of a coming wave of deregulation when a Supreme Court decision interpreted the National Bank Act to mean if a lender was in one of the few states without any limits on interest rates, it could lend without limits nationwide, effectively invalidating thirty-seven states’ consumer protections—and Congress declined to amend the law. That’s why, today, most of your credit card statements come from South Dakota and Delaware, states with lax lending laws.
By the mid-1990s, the financial sector had become the component of the economy that produced the most profits, supplanting manufacturing. The financial sector also became the biggest spender in politics, contributing more than one hundred million dollars per election cycle since 1990 to federal candidates and political parties, on both sides of the aisle.
Translating unprecedented profits into unprecedented influence netted the industry carte blanche with legislators and regulators, who were often eyeing lucrative jobs as lobbyists or banking consultants after their tours of duty in government service. The deregulatory revolution in financial services was also spurred by antigovernment, pro-market libertarian and neoliberal economic thinking that gained a popular common sense, particularly among white people, with rising distrust of an activist government.
By the end of the 1990s, a bipartisan majority voted to repeal most of Glass-Steagall, the law that had protected consumer deposits from risky investing for decades since the Great Depression. Free of restraints, the financial sector grew wildly and with few rules.
This growth included an explosion of mortgage brokers and nonbank holding companies like those that pursued the Tomlins, many of which were not subject to the CRA and were unregulated and unaccountable to anything but the bottom line. Most important, there was no single regulator whose primary responsibility was to protect consumers; the four federal banking regulators’ primary purpose was to ensure that banks were doing well— which put the profit machine of subprime directly at odds with the regulators’ secondary consumer protection responsibilities.
The upshot for the lending market was the unchecked growth of loans and financial products that were predatory in nature, meaning they benefited the lender even when they often created a net negative financial situation for the borrower, imposed harsh credit terms out of proportion to the risk the lender took on, and used deception to hide the reality of the credit terms from the borrower. And this formula was tried and tested on Black homeowners.
Doris Dancy became a witness in a federal fair lending lawsuit based on what she saw as a credit manager for Wells Fargo in Memphis during the boom. “My job was to find as many potential borrowers for Wells Fargo as possible. We were put under a lot of pressure to call these individuals repeatedly and encourage them to come into the office to apply for a loan.
Most—eighty percent or more—of the leads on the lists I was given were African American.” The leads came from lists of Wells Fargo customers who had credit cards, car loans, or home equity loans with the company.
“We were supposed to try and refinance these individuals into new, expensive subprime loans with high interest rates and lots of fees and costs,” Dancy explained. “The way we were told to sell these loans was to explain that we were eliminating the customer’s old debts by consolidating their existing debts into one new one. This was not really true—we were not getting rid of the customer’s existing debts; we were actually just giving them a new, more expensive loan that put their house at risk.
“Our district manager pressured the credit managers in my office to convince our leads to apply for a loan, even if we knew they could not afford the loan or did not qualify for the loan….I know that Wells Fargo violated its own underwriting guidelines in order to make loans to these customers.
“Many of the mostly African American customers who came into the offices were not experienced in applying for loans….Our district manager told us to conceal the details of the loan. He thought that these customers could be ‘talked into anything.’ The way he pressured us to do all of these unethical things was as aggressive as a wolf. There was no compassion for these individuals who came to us trusting our advice.” Mario Taylor, another Wells Fargo credit manager in Memphis, explained how the bank applied pressure to its almost entirely African American prospects. “We were instructed to make as many as thirty-five calls an hour and to call the same borrower multiple times each day,” he said. “Some branch managers told us how to mislead borrowers. For example, we were told to make ‘teaser rate’ loans without informing the borrower that the loan was adjustable….Some managers…changed pay stubs and used Wite-Out on documents to alter the borrower’s income so it would look like the customer qualified for the loan. Borrowers were not told about prepayment penalties [or]…about astronomical fees that were added to the loan and that Wells Fargo profited from.” A common misperception then and now is that subprime loans were being sought out by financially irresponsible borrowers with bad credit, so the lenders were simply appropriately pricing the loans higher to offset the risk of default. And in fact, subprime loans were more likely to end up in default. If a Black homeowner finally answered Mario Taylor’s dozenth call and ended it possessing a mortgage that would turn out to be twice as expensive as the prime one he started with, is it any wonder that it would quickly become unaffordable? This is where the age-old stereotypes equating Black people with risk—an association explicitly drawn in red ink around America’s Black neighborhoods for most of the twentieth century— obscured the plain and simple truth: what was risky wasn’t the borrower; it was the loan.
Camille Thomas, a loan processor, testified that “many of these customers could have qualified for less expensive or prime loans, but because Wells Fargo Financial only made subprime loans, managers had a financial incentive to put borrowers into subprime loans with high interest rates and fees even when they qualified for better-priced loans.” The bank’s incentives to cheat its customers were rich. Elizabeth Jacobson, a loan officer from 1998 to 2007, explained the incentive system.
“My pay was based on commissions and fees I got from making [subprime] loans….In 2004, I grossed more than seven hundred thousand in sales commissions,” nearly one million in 2020 dollars. “The commission and referral system at Wells Fargo was set up in a way that made it more profitable for a loan officer to refer a prime customer for a subprime loan than make the prime loan directly to the customer.” Underwriters also made more money from a subprime than a prime loan.
Looking at these numbers, one could be tempted to minimize the role of racism and chalk it up to greed instead. I’m sure that most of the people involved in the industry would claim not to have a racist bone in their body —in fact, I heard those exact words from representatives of lending companies in the aftermath of the crash. But history might counter: What is racism without greed? It operates on multiple levels. Individual racism, whether conscious or unconscious, gives greedy people the moral permission to exploit others in ways they never would with people with whom they empathized. Institutional racism of the kind that kept the management ranks of lenders and regulators mostly white furthered this social distance. And then structural racism both made it easy to prey on people of color due to segregation and eliminated the accountability when disparate impacts went unheeded. Lenders, brokers, and investors targeted people of color because they thought they could get away with it. Because of racism, they could.
Loan officer Tony Paschal was one of the few African American employees in his section at Wells Fargo in Virginia. “Wells Fargo’s managers were almost entirely white, and there was little to no opportunity for advancement for minorities,” he testified. “Wells Fargo also discriminated against minority loan applicants by advising them that the interest rate on their loan was ‘locked,’ when in fact, Wells Fargo had the ability to lower the interest rate for the applicant if the market rates dropped prior to the loan closing,” and, he said, the bank often made this adjustment for white applicants.
“I also heard employees [of the Mortgage Resource division] on several occasions mimic and make fun of their minority customers by using racial slurs. They referred to subprime loans made in minority communities as ‘ghetto loans’ and minority customers as…‘mud people.’ ” In addition, he said, his branch manager used the N-word in the office—not in 1955, but in 2005.
Testimonies of Wells Fargo’s corruption abound, but that bank was far from alone in its exploitation of Black and brown people through the aggressive marketing of subprime mortgage loans. As one of many examples, Countrywide Financial Corporation agreed in 2011 to pay $335 million to settle claims that it overcharged more than two hundred thousand Black and Latinx borrowers for their loans, and steered some ten thousand borrowers of color into risky subprime loans instead of the safer and cheaper conventional loans for which they qualified. According to an analysis conducted by the U.S. Department of Justice of 2.5 million mortgage loans made from 2004 to 2008 by Countrywide, Black customers were at least twice as likely as similarly qualified whites to be steered into subprime loans; in some markets, they were eight times more likely to get a subprime loan than white borrowers with similar financial histories.
So much profit and so little accountability. The country’s most ubiquitous bank, Bank of America, bought the infamous Countrywide in June 2008. AmeriQuest, BancorpSouth, Citigroup, Washington Mutual, and many other banks and financial companies contributed to a wave of foreclosures that shrank the wealth of the median African American family by more than half between 2005 and 2009 and of the median Latino family by more than two-thirds.

THERE WAS A TIME—years, in fact—when the epidemic of home foreclosures could have been stopped. Bank regulators and federal policy makers were well aware of what was happening in communities of color, but despite pleas from local officials and community groups, they did nothing to stop the new lenders and their new tactics that left so many families without a home.
Between 1992 and 2008, state officials took more than nine thousand legal, regulatory, and policy actions to try to stop the predatory mortgage lenders that were devastating their communities and their tax bases. But Washington wouldn’t listen. The Federal Reserve—“the one entity with the authority to regulate risky lending practices by all mortgage lenders”—took no action at all, and the Office of the Comptroller of the Currency, the regulator in charge of national bank practices, took one action: preemption, to make sure that no state’s consumer protections applied to its national banks.
In the virtually all-white realm of federal bank regulators and legislators, there was a blindness in those early years. Lisa Donner is a slight woman whose speech is peppered with almost involuntary little laughs, which I decided, after years of working in the consumer protection trenches with her, was a defense mechanism, a release valve for the pressure of having seen all the injustice she’s seen. She got her start organizing working-class New Yorkers of color around affordable housing and foreclosure prevention with the Association of Community Organizations for Reform Now (ACORN) thirty years ago. She’s now the executive director of Americans for Financial Reform, the David founded in the wake of the crash to take on Wall Street’s lobbying Goliath and create a new regulatory structure to prevent a crash from happening again. Lisa has sat across the table from more financial regulators and bankers than probably anyone else in the country.

I got in touch with her to reminisce about what it was like in the early days of the subprime phenomenon, when families like the Tomlins were being targeted by the block. The regulators were “just refusing to see that there was a problem at all,” Lisa said with one of her little laughs. “Because it wasn’t their neighbors or their neighborhood or people who looked like them, or people they knew, in the elite decision-making circles.”

I have many such memories, but I’ll never forget a meeting with a young blond Senate banking committee staffer in 2003. After hearing our research presentation, she said with a sad little shake of her head, “The problem was we put these people into houses when we shouldn’t have.” I marveled at the inversion of agency in her phrasing. Who was the “we”? Not the hardworking strivers who had finally gotten their fingers around the American Dream despite every barrier and obstacle. No, the “we” was well-intentioned people in government—undoubtedly white, in her mental map.
Never mind that most of the predatory loans we were talking about weren’t intended to help people purchase homes, but rather, were draining equity from existing homeowners. From 1998 to 2006, the majority of subprime mortgages created were for refinancing, and less than 10 percent were for first-time homebuyers. It was still a typical refrain, redolent of long-standing stereotypes about people of color being unable to handle money—a tidy justification for denying them ways to obtain it.
Lisa Donner understood the work that race was doing in shifting blame for irresponsible lending and deception onto the borrower. “Race was a part of weaponizing the ‘It’s the borrower’s fault’ language,” she said to me.
Conservative pundit Ann Coulter asserted it clearly, in capital letters, in the headline of one of her nationally syndicated columns: THEY GAVE YOUR MORTGAGE TO A LESS QUALIFIED MINORITY. Another conservative columnist, Jeff Jacoby, wrote, “What does it mean when Boston banks start making many more loans to minorities? Most likely, that they are knowingly approving risky loans in order to get the feds and the activists off their backs.” By 2008, Jacoby was declaring the financial crisis “a no-win situation entirely of the government’s making.” When asked during the market panic on September 17 about the root causes of the crisis, billionaire and then New York City mayor Michael Bloomberg told a Georgetown University audience that the end of redlining was to blame. “It probably all started back when there was a lot of pressure on banks to make loans to everyone….Redlining, if you remember, was the term where banks took whole neighborhoods and said, ‘People in these neighborhoods are poor; they’re not going to be able to pay off their mortgages. Tell your salesmen don’t go into those areas,’ ” Bloomberg said.
“And then Congress got involved—local elected officials, as well—and said, ‘Oh that’s not fair; these people should be able to get credit.’ And once you started pushing in that direction, banks started making more and more loans where the credit of the person buying the house wasn’t as good as you would like.” A man who’d made his fortune in financial information did not know that the mortgages at the root of the crisis were usually refinances, not home purchases, and that creditworthiness was often beside the point.
But he knew enough of the elite conventional wisdom to blame the victims of redlining.
The public conversation and the media coverage of the subprime mortgage crisis started out racialized and stayed that way. We’ve had so much practice justifying racial inequality with well-worn stereotypes that the narrative about this entirely new kind of financial havoc immediately slipped into that groove. Even when the extent of the industry’s recklessness and lack of government oversight was clear, the racialized story was there, offering to turn the predators themselves into victims.

After the crash, conservatives were quick to blame the meltdown on people of color and on the government for being too solicitous of them. Ronald Utt of the Heritage Foundation claimed that “some portion of the problem—perhaps a significant portion—may stem from ‘predatory borrowing,’ defined as a transaction in which the borrower convinces the lender to lend too much.” With this banker-as-victim tale, the casting was familiar: undeserving and criminal people of color aided and abetted by an untrustworthy government. A conservative member of the U.S. Financial Crisis Inquiry Commission (FCIC), Peter Wallison, wrote a vitriolic dissent from the commission’s conclusion that the crisis was the result of insufficient regulation of the financial system. Calling that conclusion a “fallacious idea,” he claimed that “the crisis was caused by the government’s housing policies,” specifically a set of policies called “Affordable Housing Goals.” Banks, he said, “became the scapegoat.” And so many pundits blamed the Community Reinvestment Act for the financial crisis that the FCIC had to devote pages of its report to conclusively refuting the CRA’s role.

Jim Rokakis was the treasurer of Cuyahoga County, Ohio, from 1997 to 2011 and saw all the devastation. In 2006, he went to his U.S. Attorney’s office having amassed boxes of evidence he hoped would lead to a RICO conspiracy case about widespread mortgage industry fraud in the mostly Black and immigrant neighborhoods in and around Cleveland.
In a room of men in suits from the FBI and other government agencies, Rokakis was sure that his impressive display of charts and graphs, foreclosure data maps, and transcripts was spelling out a slam-dunk case for prosecuting a man named Roland Arnall. Arnall was one of President George W. Bush’s top donors (Bush had nominated him as ambassador to the Netherlands in 2006) and the CEO of the country’s biggest subprime lender at the time, AmeriQuest, and its subsidiary Argent Mortgage. But the racialized story was blinding to the government agents; they just couldn’t see that the wealthy and well-connected white man was the criminal.
“At one point, one of the U.S. attorneys…turned and said, ‘Well, who’s the victim?’ And I lost it. I said, ‘You’re the victim! We’re all the victims!
Don’t you get it? I’m here because Argent did this! I’m here because Roland Arnall and his minions have gutted Cleveland!” Unable to see those with power as sufficiently blameworthy, the federal prosecutors declined to pursue Rokakis’s case. He told me that county prosecutors did end up using his data to prosecute lower-level people. “But it was too late. Had they gotten to this early, and gotten to the foundation of this tree, Arnall and his executives, that tree would have withered and died.”

Lisa Donner saw how blame-shifting to borrowers of color was so effective after the crash that it stopped the Obama administration from mounting a full-throated campaign to save Black wealth. “People who knew better let that language”—it was the borrowers’ fault; they took out loans they couldn’t afford—“control the politics of the response,” she recalled. “A whole bunch of Obama administration folks let that incredibly racialized story and their fear of the story—even if they didn’t believe the story themselves—give us the recovery that we got. Which was one that increased inequality and economic vulnerability.”

The Obama administration staff wasn’t wrong about the perils of white public opinion and its political implications. In a study conducted in President Obama’s final year in office, researchers found that simply switching the race of a man posing in front of a home with a Foreclosure sign from white to Black made Trump-supporting whites angrier about government mortgage assistance programs and more likely to blame individuals for their situation.
But back in the early 2000s, when I was digging through the data and immersing myself in the stories of loss, at first I didn’t understand how the lenders were getting away with it, mostly escaping unharmed while making loans that were designed to fail. Why was the system not self-correcting, when the loans so quickly became unaffordable for people and ended up in foreclosure? Then I discovered that the secret was mortgage securitization: lenders were selling mortgages to investment banks who bundled them and sold shares in them to investors, creating mortgage-backed securities.
Instead of mortgage originations being driven by how much cash from deposits banks had to lend, now the driver was the virtually limitless demand from Wall Street for new investments. Unscrupulous financial companies could sell predatory mortgages they knew would sink the homeowner, package up those mortgages, and sell them to banks or Wall Street firms, which would then sell them to investors who could then resell them to still other investors—each of the sellers collecting fees and interest and then passing on the risk to the next buyer. Wall Street brokers even came up with a lighthearted acronym to describe this kind of hot-potato investment scheme: IBGYBG, for “I’ll be gone, you’ll be gone.” If someone gets burned, it won’t be us.
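The hot-potato economics of IBGYBG can be sketched in a few lines. The parties and fee levels below are purely illustrative assumptions; the point is only that every link in the chain is paid up front, while the default risk travels to whoever holds the loan last.

```python
# Toy model of the originate-to-distribute chain: each party collects its
# fee and passes the loan, and its default risk, to the next buyer.
# Names and fee levels are illustrative only.

loan_principal = 200_000

chain = [
    ("originator",      0.020),  # points and fees collected at closing
    ("investment bank", 0.010),  # structuring fee for bundling the loan
    ("first investor",  0.005),  # markup on resale to the next investor
]

fees_banked = 0.0
for party, fee_rate in chain:
    fee = loan_principal * fee_rate
    fees_banked += fee
    print(f"{party}: collects ${fee:,.0f}, passes the default risk downstream")

# Whether or not the homeowner ever repays, these fees are already banked;
# only whoever holds the loan when it defaults absorbs the loss.
print(f"total collected before any default: ${fees_banked:,.0f}")
```

Each seller's profit is locked in at the moment of sale, which is why a loan designed to fail could still be a rational product for everyone except the borrower and the final holder.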
Securitization cut the tie of mutual interest between the lender and the borrower. Before securitization, however reluctant lenders had been to offer mortgages to people of color, once the loan was made, both parties had a vested interest in making sure it was properly serviced and repaid. Now that connection had been severed. The homeowner’s loss could be the investor’s gain.
Such financial malfeasance was allowed to flourish because the people who were its first victims didn’t matter nearly as much as the profits their pain generated. But the systems set up to exploit one part of our society rarely stay contained. Once the financial industry and regulators were able to let racist stereotypes and indifference justify massive profits from demonstrably unfair and risky practices, the brakes were off for good. The rest of the mortgage market, with its far more numerous white borrowers, was there for the taking. Having learned how profitable variable rates and payments could be by testing them out on borrowers of color in the 1990s, lenders created a new version for the broader market. These were adjustable-rate mortgages called “option ARMs.” Jim Vitarello, formerly of the U.S. Government Accountability Office, described option ARMs as “the rich man’s subprime loans.” A significant proportion of these, he said, went to white middle-class people. The average FICO credit score of borrowers who got option ARM mortgages was 700, which made them eligible for prime loans. (More than half of the $2.5 trillion in subprime loans made between 2000 and 2007 also went to buyers who qualified for safer, cheaper prime loans.) What made option ARMs so appealing to this clientele was the choice.
Debby Goldberg explained: “With an option ARM, you could pick what you wanted your monthly payment to be based on. Was it going to be enough to pay off your whole mortgage? Or was it going to be only the interest? Or was it going to be not even the interest? In that case, you had a loan that was negatively amortizing—you were building up more and more debt, because you weren’t even paying the full interest on the loan that you had on your home.” These whiter, higher-wealth option ARM borrowers were coming in at the peak of a housing boom and could see only the upside. But borrowers could choose their payments for only so long, a couple of months to a couple of years, before the lender reset the terms so that borrowers had to pay off the full amount of the loan during the remaining years of the mortgage. “And you’d have a huge increase in your monthly payment,” Goldberg explained, “because you’d go from not even paying the full amount of interest that you owed to paying a higher principal balance…plus all the interest.” That gamble worked only if the housing prices kept climbing.
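A hypothetical set of numbers makes Goldberg's point concrete. The sketch below assumes a $300,000 option ARM with an actual note rate of 7 percent, a minimum payment priced off a 1 percent rate, and a recast after three years of minimum payments; all figures are illustrative, not drawn from actual loans.

```python
def monthly_payment(principal, annual_rate, months):
    """Fully amortizing payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

BALANCE = 300_000  # hypothetical option-ARM balance
RATE = 0.07        # the loan's actual note rate

# The "minimum" option is priced off a much lower rate, so it doesn't
# even cover the interest actually accruing each month.
minimum = monthly_payment(BALANCE, 0.01, 360)

# Pay only the minimum for three years: the unpaid interest is added to
# the balance each month -- negative amortization.
balance = BALANCE
for _ in range(36):
    balance = balance * (1 + RATE / 12) - minimum

# Recast: the now-larger balance must amortize over the remaining term.
recast = monthly_payment(balance, RATE, 324)

print(f"minimum ${minimum:,.0f}, balance after 3 years ${balance:,.0f}, "
      f"recast payment ${recast:,.0f}")
```

Under these assumptions the borrower owes more after three years of on-time payments than on day one, and the recast payment is more than double the minimum: Goldberg's "huge increase in your monthly payment."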
By 2006, up to 80 percent of option ARM borrowers chose to make only the minimum monthly payments. Housing values began to stall and slide in some areas, and immediately, more than 20 percent of these borrowers owed more than their house was worth. The option ARMs were ticking time bombs now nestled alongside other kinds of trap-laden mortgages buried in securities owned by pension funds and mutual funds across the globe. And it wasn’t just homeowners who were dangerously leveraged; the Federal Reserve had loosened the requirements on the five biggest investment banks, so they had been investing in securities based on debt with borrowed money thirty and forty times what they could pay back.
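The danger in those leverage ratios is simple arithmetic. The toy balance sheet below is hypothetical, not any particular firm's figures; it shows why, at roughly 33-to-1 leverage, a mere 3 percent decline in asset values consumes essentially all of a firm's own capital.

```python
# A hypothetical balance sheet for a firm leveraged about 33 to 1.
assets = 100.0              # securities held, in arbitrary units
leverage = 33               # ratio of assets to the firm's own equity
equity = assets / leverage  # ~3.0 units of the firm's own money
borrowed = assets - equity  # ~97.0 units funded with debt

# A modest 3 percent decline in the value of those securities...
loss = assets * 0.03
remaining_equity = equity - loss  # ...wipes out nearly all the equity.

print(f"equity before: {equity:.2f}, after a 3% loss: {remaining_equity:.2f}")
```

The borrowed 97 units still have to be repaid in full, so the entire loss lands on the thin slice of equity, which is why a moderate fall in housing-linked securities could bankrupt firms of that size.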
In late 2007, when interest rates rose and housing prices started falling, the mortgage market at the center of the economy began to crumble. Wall Street firms that had bet heavily on the IBGYBG formula knew better than to trust the other investment houses that had done the same, and suddenly the market froze. By the time the housing market reached bottom, housing prices would fall by over 30 percent and all five of the major investment houses would either go bankrupt or be absorbed in a fire sale.
With the banks and the houses went the jobs. In the recession that followed, “people were losing their jobs or having their hours cut back,” Goldberg recalled. “In that situation, you had mortgages that were perfectly safe and [that,] in ordinary circumstances, should have been sustainable, but people just couldn’t afford them anymore because they had lost income.
And they couldn’t sell their home because home values were going down all across the country.” It was a vicious circle. This third wave of the financial disaster crested in 2008–2009, the period generally designated as the Great Recession, but the devastation that wave created continues even now.

THE WAVE SWEPT over Susan Parrish, a white woman living in Vancouver, Washington, and changed her life in ways she couldn’t have imagined. In 2011, Susan was fifty-one years old, recently divorced, and living in the three-story house where she and her ex-husband had raised their three children. She worked as the communications manager of a nonprofit organization. “My ex-husband was a teacher, so between us we were making close to one hundred thousand dollars a year,” Susan said. “We were both working full time. We were doing okay.”

Then she got laid off from her job. The recession had already eaten away at her organization; several staff members had been laid off in 2009 and a few more shortly before Susan was let go. “I started looking for other work right away,” Susan recalled. To tide her over, she applied for unemployment insurance. “I had never filed for unemployment in my life. It was an all-new thing for me….I had to go in and sit in these classes with people from all walks of life. It was just sobering to see how many people were out of work at that time.” The classes were about how to write a résumé and conduct a job interview—things Susan had thought she was good at. “But in the years since, that’s proven to be untrue,” she said. “I haven’t been able to secure a job that pays anywhere near what I made.”

Immediately after she got laid off, Susan realized she wouldn’t have the money to make the next house payment. She knew she had to sell the house—and she also knew how difficult that would be, given that she and her ex-husband had tried to sell it both before and after their divorce.
“Thankfully,” she said, “I was able to sell it, but I only made seventeen hundred dollars, which was just enough to pay first month’s rent and a deposit on a six-hundred-square-foot, one-bedroom apartment in a not-great part of town. It was all I could afford.” And she couldn’t afford it for long.
“I was unemployed for three and a half months. [Finally,] I was hired as a news clerk, and it was about twenty thousand less than I had been making.
I took it because I didn’t see any other prospects.” Six months later, she was promoted to the role of education reporter. “It was good, but it was still not a living wage, and I couldn’t afford to rent an apartment now, because housing was so expensive.”

“I moved five times in five years,” Susan said, “three times in three months. I spent three months living in a backyard shed/artist studio with no heat or running water or toilet or kitchen, because I had to save money to get my car fixed. That was hard, but I was glad I had the chance to live cheap so I could afford to get my car fixed.”

Susan was by all measures middle class—a college degree, a white-collar journalism job—but the Great Recession had pushed her to the brink of homelessness. “I’ve tried not to be bitter,” she said. She could not find a place to live that she could afford. “After I lived in that backyard shed, my retired minister and his wife offered me their mother-in-law suite at below-market prices,” she said. She lived there for four years, becoming part of their family.

At almost sixty, Susan had a life very different from the one she imagined before the recession. She lived with her partner in an RV—323 square feet in size—on a ranch he owned. Ten years after the recession, she was still freelancing and looking for full-time work.

SUSAN’S STORY OF cascading loss and downward mobility has been replicated millions of times across the American landscape due to the financial industry’s actions in the 2000s. While the country’s GDP and employment numbers rebounded before the pandemic struck another blow, the damage at the household level has been permanent. Of families who lost their houses through dire events such as job loss or foreclosure, over two-thirds will probably never own a home again. Because of our globally interconnected economy, the Great Recession altered lives in every country in the world.
And all of it was preventable, if only we had paid attention earlier to the financial fires burning through Black and brown communities across the nation. Instead, the predatory practices were allowed to continue until the disaster had engulfed white communities, too—and only then, far too late, was it recognized as an emergency. There is no question that the financial crisis hurt people of color first and worst. And yet the majority of the people it damaged were white. This is the dynamic we’ve seen over and over again throughout our country’s history, from the drained public pools, to the shuttered public schools, to the overgrown yards of vacant homes.
Being among the outmatched and unheeded few who tried to prevent the catastrophe that would become the Great Recession was an experience that would forever shape my understanding of the world. I saw how money can obscure even the most obvious of truths. I learned that in order to exploit others for your own gain, you have to first sever the tie between yourself and them in your mind—and racist stereotypes are an ever-ready tool for such a task. But when I watched the CNN ticker tape announce the fall of Lehman Brothers on September 15, 2008, I was struck by an even deeper truth: ultimately, it’s impossible to sever the tie completely. Wall Street had recruited the brightest technological minds—those who a generation ago would have been putting a man on the moon or inventing vaccines—to engineer a way to completely insulate wealthy people and institutions from the pain inflicted by their profits. Ultimately, they failed, and so did one of the oldest and most successful financial firms in U.S.
history, setting off a financial contagion we still feel today.
It wasn’t until years later that my research would reveal just how literally the country’s original economic sin was connected to the financial crisis of 2008. The first mortgages and collateralized debt instruments in the United States weren’t on houses, but on enslaved people, including the debt instruments that led to the speculative bubble in the slave trade of the 1820s. And the biggest bankruptcy in American history, in 2008, was the final chapter of a story that began in 1845 with the brothers Lehman, slave owners who opened a store to supply slave plantations near Montgomery, Alabama. The brothers were Confederate Army volunteers who grew their wealth profiteering during the Civil War, subverting the cotton blockade, buying cotton at a depressed price in the Confederacy and selling it overseas at a premium. They first appeared on what would become Wall Street by commodifying the slave crop, cofounding the New York Cotton Exchange. Although the company would later diversify its business beyond the exploited labor of African Americans, like so much of American wealth, Lehman Brothers would not have existed without it.
One hundred fifty years later, a product created out of a synthesis of racism and greed yet again promised Lehman Brothers unprecedented profits, and delivered, for a short while: heavy investments in securitized toxic loans brought it the highest returns in its history from 2005 to 2007.
Even as defaults on mortgages began to skyrocket across the country, the company’s leadership held on to the idea that it could endlessly gain from others’ loss. Lehman’s CFO asserted boldly on a March 2007 investor conference call that rising mortgage defaults wouldn’t spread beyond the subprime customers, and the market believed him. So many wealthy—and yes, white—people assumed that the pain could be contained on one side of the imaginary human divide and transmuted into ever-higher profits on their side. During the same summer that I stood in the middle of the street with a foreclosure map that exposed the devastation behind almost every Black-owned door, Lehman would go on to underwrite more mortgage-backed securities than any other firm in America.

By the end of the next summer the illusion had been broken. In a free fall that began on a weekend in mid-September, Lehman Brothers would go on to lose 93 percent of its stock value. A company born out of a system that treated Black people as property died from self-inflicted wounds in the course of destroying the property of Black people. Lehman’s fate provides no justice for the enslaved people whose misery the company enabled in the nineteenth century, nor for the dispossessed homeowners ruined by Lehman-owned mortgages in the twenty-first century, but it is a reminder that a society can be run as a zero-sum game for only so long.

BACK ON CREECY Avenue with the Tomlins, I felt like I was glimpsing not only an alternate past in which more borrowers had their just resolution, but maybe an alternate society in which more people had their values.
I was asking Janice and Isaiah about the court case when the lawyer Mal Maynard jumped in. “I gotta throw in my two cents here. Of course, Janice is one of my all-time heroes. One of the greatest days I’ve ever had in court was in Winston-Salem in…the North Carolina Business Court, which is always a bad [place] for consumer cases. There was a judge who was really famous for being very, very hard on the class actions, especially when they were filed by consumers.” After Janice took the stand, the skeptical judge asked her, basically, why she was there—why she was willing to swear an oath to represent the interests of over a thousand people she didn’t know.
Janice continued in her own words: “I just remember telling [the judge] that every morning when I walked into my classroom before we started our day, I taught my second-graders to place their hands on their hearts and quietly say the Pledge of Allegiance.
“And I had taught them that when you give allegiance to something, you say that ‘I honor this’ and that ‘I have faith in it.’ And I knew that if I taught that to my children, that I best be living by it myself.” Maynard continued: “And from that moment forward, she transformed Judge Tennille. She really did. He believed in us, and he believed in our case from that point forward. There was still a lot of hard-fought litigation, but he knew, and Janice convinced him, that this was really legitimate, heartfelt work that was being done by her and by the lawyers in the case.” A far-off look in Janice’s eyes made me wonder what else had been guiding her that day. Finally she said, “My daddy used to say, ‘Drop a little good in the hole before you go.’ That sticks with me. I was just trying to be a good citizen. And I was just letting that judge know that I had no other reason to come here….Because somebody’s name had to be there. Did I want it to be our names? No, I did not.” Maynard and the Tomlins’ suit for deceptive, unfair, and excessive fees and breach of fiduciary duty to the borrower prevailed in 2000, with a settlement of about $10 million. “So, borrowers all over North Carolina got checks, thanks to Janice and Isaiah,” Maynard said with pride.
Janice allowed a small smile. “That’s a lot of people being served. You know? It was more than worth our names being in the newspaper. We should have been so very embarrassed at the end of that, but I wasn’t, because I felt like I had put a little good in the hole.”

Chapter 5

NO ONE FIGHTS ALONE
On August 4, 2017, a group of workers at a Nissan auto factory in Canton, Mississippi, held a historic vote on whether to join the United Auto Workers (UAW), a move that would bring their wages, job security, and benefits closer to those of the unionized factories in the Midwest. Pro-union activists had spent ten years organizing and campaigning, but in the end, their side lost by five hundred votes.
When I first read about this—autoworkers voting against unionizing—it struck me as a “man bites dog” story. I grew up in the Midwest, where driving a foreign car was seen as low treason, where the people who built American cars had the best jobs around, and where everybody knew that this was because the Big Three (GM, Chrysler, and Ford) had to negotiate with workers through the United Auto Workers. News coverage about the Nissan “no” vote referenced racial divisions in the plant, and I felt I had to learn more about what had happened. I flew into Medgar Evers Airport in Jackson, Mississippi, and drove up I-55 until I saw the bright white Nissan water tower come up on the horizon. The plant itself soon came into view.
Set back from the highway, with few windows, it extended along four miles of road.
After circling the plant, I continued driving a few miles more to the worker center, where I was to meet Sanchioni Butler, the UAW organizer.
The worker center was in a storefront in a strip mall that held vacant stores and a health clinic. When I walked in, I told a man sitting at the front desk that I had an appointment with Sanchioni, and he asked me to have a seat.
In the lobby was a coffee machine, a small fridge, and a dozen chairs around the walls, half-full with workers from the plant: four Black men, two Black women, and one white man, standard-issue Styrofoam coffee cups in most hands. They were all post-shift, leaning back heavily in their folding chairs—though, as we talked, their voices regained the energy their bodies had given up for the day. Most were wearing black T-shirts with REMEMBER on the front and a list on the back: INJURED COWORKERS, FROZEN PENSIONS, INEQUALITY FOR PATHWAYS WORKERS, REDUCED HEALTHCARE, WORKERS FIRED WITHOUT CAUSE, along with a name, DERRICK WHITING—a man who had collapsed on the plant floor and later died. Some wore a similar shirt with a positive message: WE DESERVE: A NEGOTIATED PENSION, FAIRNESS FOR PATHWAYS, A SAFER PLANT, BETTER HEALTHCARE PLANS, UNION REPRESENTATION. One woman’s shirt had a pair of clasped hands and the words NO ONE FIGHTS ALONE emblazoned across the back.
I introduced myself. Melvin, a Black man with a quick smile, an obvious charmer, rose to shake my hand first and adjusted the chairs so that I could sit in the middle of everyone. Earl was the oldest of the bunch, with gray in his mustache and a sharp crease in his pants that set his attire apart from the work pants and jeans around the room. Rhonda was the youngest-looking person in the room, wearing a gray-and-black camouflage hat with an American flag on it. The daughter of a union worker, she had kind eyes and a dimple that appeared with her tight-lipped smile. Johnny was a tall white man in a cutoff shirt with a sleeve of tattoos and a contrary attitude.
(When I asked him what he did at the plant, he said wryly, “It’s the same job. I’ve been doing the same job for fourteen years.”) Over the course of the first morning I spent in the worker center, half a dozen more people would come through and join the conversation. Almost all were Black men and women, a few white guys, and, I noticed, no white women. The name “Chip” came up a couple of times, and I jotted it down: a white guy who had caved to pressure in the final weeks and, they said, switched sides. Talking to the group about what problems they hoped a union would solve, I heard about their having to pay thousands of dollars in deductibles before their health coverage kicked in. I heard about the frozen pensions for those who had been with Nissan in the beginning, when benefits were more generous, and the insecure 401(k) for everybody else. I heard stories of women eight months pregnant being denied light duty and forced to lift fifty-pound parts. I heard, in a level of detail I wish I could forget, about the way an assembly line part can tear off a finger, yanking it away and taking the tendon with it, up and out, all the way up to the elbow.

IT WAS JARRING to hear auto plant jobs described this way, as everybody knows that manufacturing jobs are the iconic “good jobs” of the American middle class. But the truth is factory jobs used to be terrible jobs, with low pay and dangerous conditions, until the people who needed those jobs to survive banded together, often overcoming violent oppression, to demand wholesale change to entire industries: textiles, meatpacking, steel, automobiles. The early-twentieth-century fights to make good jobs out of dangerous ones—the fights, in fact, to create the American middle class—could never have been waged alone. Desperate for work and easily exploited, workers had power only in numbers. One worker could ask for a raise and be shown the door, behind which dozens of people were lined up to take his place. But what remade American work in the industrial era was the fact that companies didn’t face individual pleas for improvement; they faced mass work stoppages, slowdowns, armed protests, and strikes that forced employers to the bargaining table. The result was jobs with better pay, benefits, and safety practices and upward mobility for generations of Americans to come. These victories were possible only when people recognized their common struggles and linked arms.
And linking arms for those workers usually meant forming a union. The first time I heard the word union, it was from my father’s best friend, Jim Dyson, a man I called “Uncle Jimmy.” Uncle Jimmy was a union man in my neighborhood on the South Side of Chicago, and he wore the power of that backing like a wool coat in winter. He exuded pride in his work. His family had things that I knew, even as a child, were prized—vacations to the dunes in Wisconsin every year, braces for his daughter when the dentist said she needed them, a good-sized house that we all gathered in to watch football on Sundays. I remember once being in the back seat of my dad’s car and watching Uncle Jimmy through the open window, in a neighborhood not our own. He was standing on the street, joking and backslapping with a group of Polish- and Italian-accented men. I’d never seen that before. Later, I would put it together: Uncle Jimmy was in a union, the only place where men like that would know and trust one another in segregated Chicago.

AT THE CANTON worker center, the men and women had that same ease as they told me a shared story of the way the company kept workers apart and vying for positions at the plant. They spoke of a very clear, though informal, ranking of jobs at Nissan. First, there was a hierarchy of job status. On the top tier were the so-called “Legacy” workers, who started at Nissan when the company first came to Canton, making front-page news by offering a pay and benefits package that was generous by Mississippi standards. A few years later, the company contracted out those exact same jobs to subcontractors like Kelly Services, at about half the pay, a practice I still can’t believe is legal. Kelly is a temporary employment agency, and Nissan classifies the jobs as such—but I spoke to workers who had been full-time “temps” for more than five years. These workers, earning about $12 an hour with no benefits, were on the bottom tier. In between the top and bottom tiers were workers on a program Nissan called “Pathway,” where temp workers were put on a path to full-time status, though never at the Legacy level of pay and benefits. The result was that thousands of workers did the same job with the same skill, side by side on the line, but management kept the power to assign workers to different categories—meaning different pay, different benefits, different work rules.
Labor experts call this kind of stratification a tactic: create a sense of hierarchy and you motivate workers to compete with one another to please the bosses and get to the next category up, instead of fighting together to get rid of the categories and create a common, improved work environment for everyone. Though the company has been reluctant to publicly release exact numbers about its staffing, estimates put the share of non-employee “Pathway” and temporary workers at the plant as high as 40 percent. The non-employee workers were not allowed to cast a ballot in the union drive, which silenced the voices of the lowest-paid and most precarious workers.
As Sanchioni Butler would later tell me, “I think that it’s fair to say that if I’m working side by side with you, we’re doing the same work. I think I should be paid the same for sure….That’s the cry of the two-tier Pathway person.” The workers also spoke of an invisible ladder of difficulty that stretched from the assembly line, where workers maneuvered parts at the relentless pace of machines, to the “offline” jobs—for instance, as pre-delivery inspectors (PDI) who walk around a finished car checking off items on a list. Earl wanted me to understand the difference between being on the line or offline: “Those PDI jobs are so cush, those folks can leave work and go straight to the happy hour—they don’t even have to go home and shower. That’s how you can tell how cush the job is.” Everyone I spoke to—white, Black, management, and production—admitted that the positions got whiter as the jobs got easier and better paid.
In the face of a possible cross-racial organizing drive, it seemed to be a company strategy to make white workers feel different from, better than, the Black majority in the plant. Rhonda had been working in the physically demanding “trim” section for years. When I asked her about conversations with white workers in her section, she shrugged. “I don’t have any white workers in my section.” After hearing the descriptions of the racial ranking when it came to the most physically taxing jobs, I had a hard time squaring that reality with one of the other things I heard repeated in my conversations with workers, particularly the antiunion white workers: that management said the Black workers were lazy, and that’s why they wanted a union. If they were so lazy, why were they doing all the hardest, most relentless and dangerous jobs, the ones that also happened to be the lowest-paying?
When I finally got in to see her, I asked Sanchioni Butler, the UAW organizer, about this contradiction. A Black woman with a determined expression, Sanchioni was once a “regular hourly worker” in parts distribution at a unionized Ford plant in Memphis, Tennessee, but she signed up one day for a UAW class on organizing and got hooked. “I’m second-generation union,” she told me with some pride. Knowing the difference that her father’s good union job had made in her family, she moved deeper south to help factory workers organize in Georgia and then Canton, where she’d been for the whole Nissan campaign. She said the claim that the Black workers were lazy was something she came up against all the time, and to deal with it, she would ask the person to compare that stereotype with what they saw at the plant: “I’ll say, ‘Okay, let’s be real. You’re working on the assembly line. You have to keep up with the line; it’s constant, repetitive movement. How can you be lazy in a job like that?’ And then they’ll say, ‘You know what, you’re right.’ But [I have to get] them to think about the seed that the company is dropping to divide these people.”

IN THE TWO-HUNDRED-YEAR history of American industrial work, there’s been no greater tool against collective bargaining than employers’ ability to divide workers by gender, race, or origin, stoking suspicion and competition across groups. It’s simple: if your boss can hire someone else for cheaper, or threaten to, you have less leverage for bargaining. In the nineteenth century, employers’ ability to pay Black workers a fraction of white wages made whites see free Black people as threats to their livelihood. In the early twentieth century, new immigrants were added to this competitive dynamic, and the result was a zero sum: the boss made more profit; one group had new, worse work, and the other had none. In the war years, men would protest the employment of women. Competition across demographic groups was the defining characteristic of the American labor market, but the stratification only helped the employer. The solution for workers was to bargain collectively: to band together across divisions and demand improvements that lifted the floor for everyone.
America’s first union that had the ambition to organize workers across the economy into collective bargaining—the only exclusions were the morally questionable categories of “bankers, land speculators, lawyers, liquor dealers and gamblers”—was the Knights of Labor. Their motto was “That is the most perfect government in which an injury to one is a concern of all.” When the Knights began organizing in the volatile years of Reconstruction, they recruited across color lines, believing that to exclude any racial or ethnic group would be playing into the employers’ hands.
“Why should working men keep anyone out of the organization who could be used by the employer as a tool in grinding down wages?” wrote the official Knights newspaper in 1880. With Black workers in the union, white workers gained by robbing the bosses of a population they might exploit to undercut wages or break strikes; at the same time, Black workers gained by working for and benefiting from whatever gains the union won. The Knights also included women in their ranks. A journalist in 1886 Charleston, South Carolina, reported on the Knights’ success in organizing members in that city: “When everything else had failed, the bond of poverty united the white and colored mechanics and laborers.” This cross-racial win-win was the elusive holy grail of worker organizing: everybody in, nobody undercutting the cause, all fighting for a prize that would be shared by all. It wasn’t easy. When the Knights of Labor held its 1886 national convention in Richmond, Virginia, a New York branch agitated to integrate the facilities in town. The conflict grew so public that it cleaved opinion along racial lines, both within the order and among the watching public. In covering the convention, The New York Times concluded, “It is generally considered that the Order will never contain any considerable strength among the white population in the South.” But the Knights stuck together. The union spread throughout the country during the 1880s, boasting seven hundred thousand members at its peak, including many southern chapters where an estimated one-third to one-half of its members were Black. But its reign lasted only a decade as the 1890s saw the birth of Jim Crow, the end of Black-white fusion politics under Reconstruction, and the promotion of white supremacy as a cultural and political force to unite whites across class. 
Meanwhile, employers held the line on worker demands and essentially militarized, funding standing militias and building local armories for National Guard troops increasingly deployed as strikebreakers. An 1886 explosion in Chicago’s Haymarket Square during a demonstration for shorter working hours created a public backlash against the violence increasingly accompanying labor unrest. By the late 1880s, the less radical and more discriminatory American Federation of Labor had replaced the Knights as the primary labor organization in the country. The AFL allowed affiliates to exclude Black workers, ensuring that the version of organized labor that grew in the early twentieth century had self-defeating prejudices in its DNA. In the 1920s, the leaders of the AFL endorsed eugenicist beliefs about southern and eastern European immigrants and supported racist anti-immigration policies. Exclusion of Black workers was so prevalent in many AFL unions that whites in early unions saw Black people as synonymous with strikebreakers; because unions rejected Black workers, employers routinely brought them in as substitute workers to cross picket lines.
Only when external pressures forced racial and gender integration on unions—labor shortages during World War II and the Great Migration of African Americans into the industrial Midwest put more women and people of color in factories—did the barriers begin to fall. A more radical faction of unions split off from the AFL to form the Congress of Industrial Organizations (CIO) in 1935, with an explicit commitment to interracial unity and to organizing entire industries regardless of “craft” or job title. As the great Black historian W.E.B. Du Bois wrote in 1948,
Probably the greatest and most effective effort toward interracial understanding among the working masses has come about through the trade unions….As a result [of the organization of the CIO in 1935,] numbers of men like those in the steel and automotive industries have been thrown together, fellow workers striving for the same objects. There has been on this account an astonishing spread of racial tolerance and understanding. Probably no movement in the last 30 years has been so successful in softening race prejudice among the masses.

IT WAS IN these years of cross-racial organizing that unions experienced a Solidarity Dividend, with membership climbing to levels that let unions set wages across large sectors of the economy. More and more of the country’s workforce joined a union on the job, with membership reaching a high-water mark of one out of every three workers in the 1950s. The victories these unions won reshaped work for us all. The forty-hour workweek, overtime pay, employer health insurance and retirement benefits, workers’ compensation—all these components of a “good job” came from collective bargaining and union advocacy with governments in the late 1930s and ’40s. And the power to win these benefits came from solidarity—Black, white, and brown, men and women, immigrant and native-born.

OF COURSE, THAT was then. It’s hard to find a good union job today, and it’s not because nonunion labor is so rich with benefits. Almost half of adult workers are classified as “low-wage,” earning about $10 an hour, or $18,000 a year, on average. Less than half of private employers offer health insurance. Only 12 percent of private-sector workers have the guaranteed retirement income of a traditional pension. Since its 1950s peak, the share of workers covered by a collectively bargained work contract has fallen every decade. Today, it’s just one out of every sixteen private-sector workers. I was born at a time when the loss of union factory jobs on the South Side of Chicago was changing everything: families split up and moved, stores closed and schools cut programs, folks turned to illegal work, and neighbors stopped sitting on their porches after the streetlights came on.
As factory unions weakened, it felt like everything else did, too.
So, as I sat and listened to the men and women at the Nissan worker center describe the future they had glimpsed, fought for, and lost, I knew that what they’d been struggling for could have made a difference in their lives and in the trajectory of inequality in America. The share of workers in a union has directly tracked the share of the country’s income that goes to the middle class, and as union density has declined, the portion going to the richest Americans has increased in step. A worker under a union contract earns over 13 percent higher wages than a nonunionized worker with the same education, job, and experience. And it’s not just the factory jobs. Even today’s typically lower-paying service jobs can be made into good jobs by bargaining: In Houston in 2006, thousands of striking janitors won a union contract, a near-50 percent pay raise, health insurance, and an increase in guaranteed hours. In Las Vegas, hotel kitchen and hospitality workers joined a union and earned four dollars more per hour than the national median, along with better benefits. In 2020, during the COVID-19 pandemic, thirty-six thousand casino and resort employees in Las Vegas, including those who had been furloughed, maintained their health benefits until March 2021, thanks to union bargaining. Interestingly, the benefit of unionization spreads beyond just the workers fortunate enough to be in the contract; having a higher standard in any industry forces employers to compete upward for labor. Economists have calculated that if unions were as common today as they were in 1979, weekly wages for men not in a union would be 5 percent higher; for non-college-educated men, 8 percent higher.
If that bump sounds small, compare that to the fact that, since 1979, wages for the typical hourly worker have increased only 0.3 percent a year.
Meanwhile, pay for the richest 1 percent has risen by 190 percent.
This is not to say that unions are perfect. Like any human institution, a union can suffer from infighting, bureaucratic waste, pettiness, discrimination, and corruption. (And, in fact, leaders at the UAW and Fiat Chrysler were accused of misusing corporate and union funds in the last few years.) Unions have been accused of being outdated, and the democratic rules do make it easy for them to prioritize the needs of current members (in most cases, older, whiter, more male members) over changing strategies to organize new workers. But forty years of declining union membership and rising inequality have proven that we still haven’t figured out a better way to ensure that the people who spend all day baking the pie end up with a decent slice of it.
And so, the questions loom: If joining a union is such a demonstrable good, why are unions on the decline? Why would any worker hoping to better her lot in life oppose a union in a vote? People close to the issue usually offer two answers that have nothing to do with race: bare-knuckle antiunion tactics by corporations and job insecurity. Both have merit.
Business did begin to organize in the early 1970s to influence politics and set a common strategy in a way it never had before. Business lobbies like the Chamber of Commerce, the National Association of Manufacturers, and the Financial Services Roundtable expanded and created political action committees to give targeted donations to candidates willing to press their antiunion agenda. What broke labor? Many people will point to the PATCO air traffic controllers’ strike of 1981, when Ronald Reagan shocked the country by firing more than eleven thousand striking controllers rather than negotiate.
PATCO signaled to the private sector that it was “open season” on unions. It also came after Federal Reserve chairman Paul Volcker’s aggressive efforts to combat high inflation had tightened credit and weakened the job market; years of high unemployment had diminished the bargaining power of unions to the point where the PATCO defeat was even possible.
But labor’s breaking point came less suddenly than that single blow: over the course of the late 1970s, businesses had begun to freely flout the laws protecting workers’ rights to organize, accepting fines and fees as a tolerable cost of doing business. Today, one in five unionizing drives results in charges that employers illegally fired workers for union activity, despite federal protections. It’s illegal to threaten to close the workplace rather than be forced to bargain with your employees, but the majority of businesses facing union drives do it anyway. The Nissan employees attested that they heard those threats on constant repeat from the plant’s TV screens and loudspeakers, and the National Labor Relations Board issued a complaint against Nissan about illegal tactics.
A backdrop of economic insecurity makes these tactics more powerful.
A job—no matter what the pay or conditions—can seem better than the ever-present threat of no job at all. And it’s true that labor’s enemies were aided and abetted by new rules of global competition and technological change that made American jobs less secure. As large American companies began to automate and look to nations in the Global South for labor in the 1970s and ’80s, the threat and reality of job losses proved powerful in forcing unions to make concessions and slow the pace of new organizing.
The North American Free Trade Agreement (NAFTA) in 1994, the normalization of trade with China in 2000, and other trade policies supported by multinational corporations accelerated the decline of the most union-dense industry in the private sector: manufacturing. After 2001, the country lost 42,400 factories in just eight years. The United States doesn’t build much anymore; in 2017, the total value of our exports, relative to the size of our economy, was among the lowest in the world.
BUT OTHER COUNTRIES have also faced globalization and automation and still maintained high rates of unionization. So, why did Americans allow their government and corporations to collude on attacking unions and depleting union membership? As it turns out, it wasn’t all Americans. Somewhere along the line, white people stopped defending the institutions that, more than almost any other, had enabled their prosperity for generations.
According to Gallup, public approval of unions was the highest in 1936, the year the question was first asked, and in 1959, but it began to trend downward in the mid-1960s. The era of declining support was one in which one of the country’s most visible unions, the United Auto Workers, was staking its reputation on backing civil rights, supporting the March on Washington for Jobs and Freedom of 1963 and using its political clout to press the Democratic Party for civil rights. The unionized capital of American manufacturing, Detroit, had also become an epicenter of Black cultural and economic power, and white people were abandoning the city for federally subsidized and racially exclusive suburbs.
White men began to leave the unionized working class as well. More and more white men moved into the professional and managerial ranks of big nonunion industries like financial services and technology, and the recession of the 1970s put the white working class on the ropes with years-long lockouts and high unemployment. Over time, a slightly higher share of Black workers remained in unions than did white workers—today, membership is at 12.7 percent for Black workers versus 11.5 percent for white, with Asian American and Latinx workers at around 10 percent. (That’s because the public sector moved earlier and more successfully to integrate its workforce than the private sector, leading to a higher share of Black workers in public service, where government neutrality in union drives has kept the antiunion attacks more at bay. Today, the public sector has a unionization rate more than five times higher than that in the private sector.) Detroit reappeared between the lines of the Gallup data I was studying when I noticed that the all-time-lowest approval of unions happened in August 2010. The country was still reeling from the financial crisis, and the Obama administration had just saved the domestic auto industry by extending federal loans (popularly seen as bailouts) to GM and Chrysler. Resentment about the auto rescue drove support for unions down in 2010, but interestingly, white people—already slightly less favorable toward unions than people of color but, up to this point, still showing majority approval—had the most negative response to Obama’s Detroit rescue. White approval of unions fell from 60 percent to just 45 percent in 2010.
Where was this new antiunion narrative among white people coming from? The right-wing media echo chamber made the auto rescue an explicit racial zero-sum story: to them, it was a racially redistributive socialist takeover. Rush Limbaugh would explain, “[Obama] doesn’t see himself as a capitalist reformer saving a stupid automobile company. He sees this as his opportunity to take it away from the people who founded it and give it to the people he thinks have a moral right to it because somehow they have been taken advantage of, used, exploited, paid unfairly, what have you.
Yeah, it’s socialist.” But more broadly, starting with Ronald Reagan and accelerating under Newt Gingrich’s conservative takeover of Congress in 1994, the political party to which most white voters belong began to consider unions the enemy. When Wisconsin governor Scott Walker attacked the collective bargaining rights of public employees in 2011, the rhetoric about taxpayers paying for teachers’ bloated benefits was redolent of the “welfare queen” charges.
When I was growing up in the Midwest, the union was a symbol of strength: the union could make or break politicians; the union had the backs of men like Uncle Jimmy; and the “union guys” in the city were the tough guys. The international union fight song, “Solidarity Forever,” has “The union makes us strong” as its refrain. At Nissan in Canton, the antiunion forces won in part by turning the union into a sign of weakness, a refuge for the “lazy.” Messages linked the union with degrading stereotypes about Black people, so that white workers wouldn’t want any part of it. Even Black workers might think they were too good to “need” a union.
At the worker center, I asked Melvin about how unions are perceived where he lives. “The people that we see, as soon as they see UAW, and even if you bring up union, they just think color. They just see color. They think that unions, period—not just UAW—they just think unions, period, are for lazy Black people….And a lot of ’em, even though they want the union, their racism, that hatred is keeping them from joining.” Johnny agreed with Melvin’s assessment of his fellow white workers.
“They get their southern mentality….‘I ain’t votin’ [yes] because the Blacks are votin’ for it. If the Blacks are for it, I’m against it.’ ” I looked around the office, which included posters from various UAW Nissan rallies invoking the civil rights movement and the March on Washington. I wondered how the explicit embrace of civil rights imagery and language had played with the white co-workers they were trying to organize. It seemed like a catch-22: the majority of the plant was Black, and the base of the worker organizing needed to be, too—so, invoking Black struggle made sense. But particularly in the South, white workers might not see anything for themselves in a campaign redolent of the civil rights movement; at worst, the association could trigger the zero-sum reaction.
The word union itself seemed to be a dog whistle in the South, code for undeserving people of color who needed a union to compensate for some flaw in their character. As the workers spoke, I realized that it couldn’t be a coincidence that, to this day, the region that is the least unionized, with the lowest state minimum wages and the weakest labor protections overall, was the one that had been built on slave labor—on a system that compensated the labor of Black people at exactly zero.
The leadership of the labor movement has long known that the South was its Achilles’ heel. After World War II, the CIO launched an ambitious campaign to organize the South. It believed that the region’s low wages and hostility to unions threatened labor’s gains in the North and West and that the antilabor politics of the southern congressional delegation would continue to cripple national legislation for workers’ rights. But the campaign unofficially known as Operation Dixie failed spectacularly, and racism did it in.
Given the reign of white supremacy in the South, the CIO faced a choice about which fights to take on: it could champion equal rights on the job in order to recruit Black workers and change the consciousness of white workers, or it could accommodate Jim Crow and rely on white workers’ class interest as a basis for organizing. A cadre of more progressive CIO organizers tried the former, organizing the union-enthusiastic Black workers in integrated and majority-Black industries like tobacco, lumber, and food processing. But the CIO leadership took a different tack: they bet on the segregated white textile industry and assiduously avoided any talk of social justice or equality. The CIO’s southern organizing campaign director said that as far as he was concerned, “there was no Negro problem in the South.” The white southern workers’ hostility to what nonetheless felt like a northern incursion caught the CIO organizers by surprise. By the end, only about 4 percent of the region’s textile workers had organized into a union, and the CIO unofficially admitted defeat.
The failure of Operation Dixie would shape American industry and society for generations, as the South’s business owners would remain mostly unbothered by organizing among the region’s multiracial working class, up until the present day. As a result, southern comfort with people working for nothing hasn’t changed much in the past two hundred years.
The five U.S. states that have no minimum-wage laws at all are in the South: Alabama, Louisiana, Mississippi, South Carolina, and Tennessee.
Georgia has a minimum wage, but it is even lower than the federal one.
As unions succumbed to attacks in the 1980s, more employers felt free to relocate to places where workers didn’t demand high wages—both overseas and in the U.S. South. After 2000, American factories began to shutter at a faster pace, but not all regions suffered equally. The number of jobs in the industrial Midwest has never recovered, while at the same time, the number of jobs in the South grew by 13.5 percent. Japanese automakers Nissan and Toyota opened factories in the United States to step up competition with Detroit, but they planted them primarily in the nonunionized South. The foreign auto industry moved to the American South for the low wages, and once there, it drove worker pay down even further. From 2001 to 2013, the pay of workers at auto parts plants in Alabama dropped by 24 percent; in Mississippi, it was 13.6 percent. There just seemed to be no bottom, with wages for the same jobs in the same industry falling year after year. Southern states simply lacked any countervailing force, any way for workers to push back as a group against downward mobility. The later you were hired, the worse it got.
This low-wage southern labor model is no longer confined to the geographic South, nor to manufacturing. (For the past two decades, the biggest driver of retail markets in the United States has been southern-based Walmart, the country’s largest private employer by far. As Walmart expanded from Arkansas, it brought its fiercely low-wage and antiunion ethos with it—and local wages and benefits tumbled in its wake.) The wage difference between workers in the industrial Midwest and the South was nearly seven dollars an hour in 2008; three years later, wage cuts in the Midwest had slashed the regional difference in half. As journalist Harold Meyerson puts it, “the South today shares more features with its antebellum ancestor than it has in a very long time. Now as then[,] white Southern elites and their powerful allies among non-Southern business interests seek to expand to the rest of the nation the South’s subjugation of workers.” To a large degree, the story of the hollowing out of the American working class is a story of the southern economy, with its deep legacy of exploitative labor and divide-and-conquer tactics, going national.

WHEN I WENT back to my hotel after the first day of conversations with Nissan workers, I was dismayed and perplexed. I had assumed that white solidarity with Black workers would be in the white workers’ self-interest, but after listening to a day of stories about the ways that white workers were given special advantages at the plant, I wasn’t so sure. Maybe the status quo—where being white actually did make it easier for you to get ahead, where a mostly white management could arbitrarily act to the benefit of those with whom they felt a kinship—was actually better for the average white worker than a union and its rules. I thought about Trent, a verbose pro-union white guy still working on the line, because, as he said proudly, he was too mouthy for white management to have promoted him to an easier job. But when describing his fellow white workers, he had said: “The unions are for putting people on equal ground. Some people see that as a threat to their society.” As Earl had said, “Even the white guys on the line, they felt they would lose some power if we had a union. The view is, white people are in charge, I’m in charge.” I realized that I had been naive to think that the benefits of the union would be obvious to white workers. Having grown up in the Midwest, I knew that the Nissan plant workers were getting a bad deal compared to unionized autoworkers—lower pay, uncertain retirement, no job security, no way to bargain for better conditions at all. But the white workers in Canton were still getting, or had the promise of getting, a better deal than someone.
The company was able to redraw the lines of allegiance—not worker to worker, but white to white—for the relatively low cost of a few perks. A white worker starting a job on the line would quickly learn the unwritten racial rules. He’d see that he could get promoted to a “cushier” job if he played his cards right, and that included not signing a union card. No matter that nobody on the plant floor, no matter how cushy their job, had a real pension or the right to bargain for improvements at the plant. They could be satisfied with a slightly better job that set them just above the Black guys on the line, more satisfied by a taste of status than they were hungry for a real pension, better healthcare, or better wages for everyone.
I had traveled to Mississippi with a few books in my suitcase, and one of them was W.E.B. Du Bois’s seminal 1935 work, Black Reconstruction in America: 1860–1880. I pulled it out and read again its most famous passage, which had never felt truer to me than it did that evening. Du Bois was describing the Black and white southern workforce of the late nineteenth century:
There probably are not today in the world two groups of workers with practically identical interests who hate and fear each other so deeply and persistently and who are kept so far apart that neither sees anything of common interest.
It must be remembered that the white group of laborers, while they received a low wage, were compensated in part by a sort of public and psychological wage. They were given public deference and titles of courtesy because they were white. They were admitted freely with all classes of white people to public functions, public parks, and the best schools. The police were drawn from their ranks, and the courts, dependent on their votes, treated them with such leniency as to encourage lawlessness. Their vote selected public officials, and while this had small effect upon the economic situation, it had great effect upon their personal treatment and the deference shown them. White schoolhouses were the best in the community, and conspicuously placed, and they cost anywhere from twice to ten times as much per capita as the colored schools. The newspapers specialized on news that flattered the poor whites and almost utterly ignored the Negro except in crime and ridicule.
On the other hand, in the same way, the Negro was subject to public insult; was afraid of mobs; was liable to the jibes of children and the unreasoning fears of white women; and was compelled almost continuously to submit to various badges of inferiority. The result of this was that the wages of both classes could be kept low, the whites fearing to be supplanted by Negro labor, the Negroes always being threatened by the substitution of white labor.

THE WAGES OF whiteness seemed to be good enough for a majority of the workers eligible to vote at the Nissan plant, a group that excluded the most precarious (and disproportionately Black) temporary workers. As difficult as it was for me to put myself in the shoes of an antiunion white factory worker, in order to truly understand what had happened to the American working class, I had to try. Perhaps, when it comes down to it, I wrote in my notebook, the being matters more than the having. Often, what we have (a nice house, cash in the bank, a good car) is a simple way of telling ourselves and others who we are. Work and our rewards for it are the means; the end is, above a certain level of subsistence, our sense of self-esteem. So, perhaps to a white person facing this tradeoff, some tangible financial benefits are easy to give up if you already have what money buys in our society, which is belonging and status.
White people today, particularly outside the South, often distance themselves from slavery and Jim Crow by insisting that their immigrant ancestors had nothing to do with these atrocities and, in fact, themselves faced discrimination but were able to overcome it. (In fact, this popular belief is one of the core ideas contributing to white racial resentment against Black people and newer immigrants of color.) But the Irish, Germans, Poles, Slavs, Russians, Italians, and other Europeans who came to the United States underwent a process of attaining whiteness, an identity created in contrast to the Blackness of unfree and degraded labor. As immigrants, these groups had an opportunity to ally themselves with abolition and, later, equal rights and to fight for better social and economic conditions for all workers. They chose instead, with few exceptions, the wages of whiteness.
Irish immigrants present a clear example. Upon the first wave of immigration in the 1820s, the jobs and even neighborhoods the Irish had access to were full of Black people, enslaved or just barely not. As ditch diggers in the expanding Deep South, domestic servants, dock workers, and livery drivers, the Irish were thrown into a labor pool where they worked shoulder to shoulder with Black people. One of the most influential Irish nationalist leaders of the 1830s and ’40s, Daniel O’Connell, was an abolitionist who railed against slavery and the exploitation of all workers; he saw that any society that treated some workers in that way would always abuse others.
But the zero-sum story proved too powerful. Every day, Irish immigrants heard from employers that they would hire either Irish or Black workers for the most menial and labor-intensive jobs, whichever group they could pay the least. They heard the zero sum from Democratic Party leaders whose strategy was twofold: become the anti-Black, pro-slavery foil to the Republicans and recruit Irish men as voters in large cities. Think about it: if you came to a country and saw the class of people in power abusing another group, and your place in relation to both groups was uncertain, wouldn’t you want to align yourself with the powerful group, and wouldn’t you be tempted to abuse the other to show your allegiance? As they fought to be considered more white than Black, Irish people gained a reputation in Black neighborhoods as brutal enforcers of the racial hierarchy, attacking those beneath them to ensure their place. “Irish attacks on blacks became so common in New York City that bricks were known as Irish confetti,” wrote historian Michael Miller Topp. In the Civil War Draft Riots that took place in New York City over four days in July 1863, more than a thousand Irish immigrants in mobs attacked the Black community, including children in an orphanage. They caused so much carnage and terror that the Black population of New York decreased by 20 percent afterward. The logic of the massacre, as much as one can find logic in mass atrocity, was the zero sum: the Irish rioters didn’t want to go fight in a war that might free millions of Black workers to come north and compete for their jobs.
David Roediger, a history and African American studies professor at the University of Kansas, wrote the books The Wages of Whiteness and Working Toward Whiteness: How America’s Immigrants Became White, which recount the process by which European immigrants were sold membership to the top category of the racial caste system. The price they paid was acquiescence to an economic caste system. Immigrants who were able to attain whiteness got the voting rights (even before becoming citizens in many places, including New York City), jobs, and education that American citizenship offered. Even as they struggled with the exploitation of sweatshops and slums, becoming “white” afforded them a civic and social esteem that could constantly be compared against the Black second-class citizens one rung below them.
Roediger uses the term “Herrenvolk republicanism” to describe American society as a representative democracy only for those considered, like Aryans under Nazism, to be part of the master race. In his words, “Herrenvolk republicanism had the advantage of reassuring whites in a society in which downward mobility was a constant fear—one might lose everything but not whiteness.” In order to keep what Du Bois called a psychological wage, white workers needed not to contest too strongly for more material wages. To fight for a fairer system, the working class would have needed collective action, which has always been in tension with the pull of American racism.

I WAS ON the phone one afternoon with Robin DiAngelo, the white writer who coined the expression “white fragility,” when she took a personal digression from the topic we were discussing. DiAngelo and her two sisters were raised in poverty by their single mom. “She was not able to feed, house, or clothe us,” DiAngelo recalled. “I mean, we were flat out. We lived in our car. We were not bathed. My mother could not take care of us. And yet, anything I ever wanted to touch, like food someone left out—I was hungry, right?—I was reprimanded: ‘Don’t touch that. You don’t know who touched it, it could have been a colored person.’ ‘Don’t sit there. You don’t know who sat there, it could have been a colored person.’ That was the language—this was the sixties. The message was clear: If a colored person touched it, it would be dirty. But I was dirty. Yet in those moments, the shame of poverty was lifted. I wasn’t poor anymore. I was white.”
Robin’s story called to mind a study coauthored by Michael Norton, one of the professors who identified the increasingly zero-sum mindset among white people. Norton and his colleagues would call the psychology behind DiAngelo’s mother’s warnings “last place aversion.” In a hierarchical system like the American economy, people often show more concern about their relative position in the hierarchy than their absolute status. Norton and his colleagues used games where they gave participants the option to give money to either people who had more money than they had, or those who had less. In general, people gave money to those who had less—except for people who were in the second-to-last place in the money distribution to begin with. These players more often gave their money to the people above them in the distribution so that they wouldn’t fall into last place themselves.
The study authors also looked at real-world behaviors and found that lower-income people are less supportive of redistributive policies that would help them than logic would suggest. Even though raising the minimum wage is overwhelmingly popular, people who make a dollar above the current minimum “and thus those most likely to ‘drop’ into last place” alongside the workers at the bottom expressed less support. “Last-place aversion suggests that low-income individuals might oppose redistribution because they fear it might differentially help a last-place group to whom they can currently feel superior,” the study authors wrote. That superior feeling, however, doesn’t fill your stomach, as DiAngelo learned as a child.

THE STORIES I heard on the ground in Mississippi painted a picture of an economic pessimism that sapped the aspirations of most workers, white and Black. The Nissan job was one of the best in the state, and the company threatened that, with a union, the plant would disappear. Workers heard this messaging 24/7 on video screens inside the plant, on leaflets (paid for by the Koch brothers) left in mailboxes at their homes, and in every statewide politician’s talking points on the radio and local news. One of the most successful threats was that with a “yes” union vote, Nissan would repossess the company cars that many workers had leased. No matter that a union would have given workers a say in this and all other benefits; “the threat of your wife or Momma’s car going missing was scary enough for some,” Melvin told me with a sigh. I thought about how, in addition to the legacy of centuries of racist programming, white antiunion workers were also dealing with at least forty years of economic programming that told them that the best days for unions and for the middle class were behind them and that everyone was on their own to grab what they could from a dwindling pie.
On my last morning in Canton, I had breakfast in my hotel’s lobby with Chip Wells, the man who’d become a controversial figure at the plant in the final weeks before the election. He had been an outspoken white voice on the pro-union organizing committee—so much so that he became the target of intimidation from antiunion people at the plant. He showed me a snapshot on his phone of his employee ID photo printed out as a poster in the security guards’ room—someone had drawn a teardrop tattoo onto his left cheek and some kind of cross or swastika on his forehead. One of the people he worked alongside in the generally antiunion maintenance department (one of the better-paid departments) began to joke about Chip “gettin’ hurt or fallin’.” Eventually, Chip said he thought, “ ‘Maybe he ain’t joking,’ you know? ’Cause he’s from Natchez, and that’s where they actually had [slave] auction blocks and stuff.” The pressure eventually got to him, and at a rally one Sunday, Chip donned an antiunion shirt, signaling to the cheers of white workers around him that he had switched sides. Chip told me that during the time he was “anti,” as they called it, he was struck by the zero-sum mindset of the people on his team. “The idea’s that if you uplift Black people, you’re downin’ white people. It’s like the world has a crab-in-a-barrel mentality.
Every time somebody’s gettin’ on top, we gotta pull them down ’cause they might try to do us wrong or keep us down.” White workers who supported the union, like Trent and Johnny, told me that their view was different from that of most of their white peers because they saw their interests as the same as those of the Black workers. In their telling, everybody would benefit from better healthcare, plant safety, pay raises, job security, retirement benefits, and a fair system for promotions to replace what they call the “buddy buddy” system, where who you hunt with matters more than your work ethic. With the yes/no, Black/white divide so stark, I wanted to know how a Black pro-union worker would approach a white worker on the opposite side. Melvin broke it down for me.
“You find out what you have in common—that common ground. And whenever you’re tired, and they’re tired, it’s the same. We bleed the same. We get tired the same. We sweat the same. When it’s hot, you hot. You just find out that common ground. And that’s how we reach the white workers. Some, you may not ever reach. Some, they look at me and they just look at you like you a nigger. No matter what.
“And this union organizing stuff, we have to be some of the most insane people. Because you take even the most racist, the most hateful people, and you’re willin’ to put your job on the line to fight for them, you understand? We have to be losin’ our minds. So, it doesn’t matter. The ones that hate you the most are the ones that you fight for the most. And you care about ’em the most. You keep ’em close to you. You keep ’em close to you. But this is how you deal with your white co-workers. They are people, too. Hey, look, they got kids, you got kids. Y’all just find that common ground.” When Melvin finished speaking, there were plenty of nods around the room at the worker center, but Earl was less confident that white folks wanted to share any kind of ground with him. “If we all have better pay and better benefits, maybe now I can buy a house in Deerfield,” a white enclave in their area, he said, eyebrows arched.
“Maybe I would move in next to an Ole Miss law grad, and Saturday morning, when he comes out to get his paper, I’d be there watering my lawn. Maybe I’ll be able to demand better education for our kids.” Earl knew the exact amount by which the governor had cut the education budget in the most recent session, mentioning it multiple times. “The governor just cut five-point-three million dollars from the education budget. The union could have fought that in the statehouse.”
The guys at the worker center wanted me to know that the problems didn’t stop at the factory door. They felt that nobody had a long-term commitment to the state—not the company, not the politicians. They thought a lot about what would happen to the workers after their bodies inevitably wore down. “Folks will need disability, but the governor moves to cut that, make it harder to get on disability. That drives the whole state down,” said Trent. They told me about the favors Nissan got in the statehouse, including hundreds of millions in tax breaks at a time when the state’s schools are chronically underfunded. In order to appease the community, Nissan made donations in lieu of taxes to local schools, but the worker group saw that as a sham as well. “The company gets to decide where those voluntary school payments get made—and they end up directed to the districts where the upper management’s kids go.” Earl connected the dots: “You don’t need someone to be educated if you want them to work menial jobs and feel lucky for doing it. It goes hand in hand with the goals of Nissan.”
At the end of a long week in Mississippi, I boarded a small regional plane out of the state. As I sat looking over my notes, a grief I’d held at bay during all my conversations with those extraordinary everyday people grabbed at me.
What they wanted for themselves, their children, and their community—what they wanted even for the people in the plant who despised them—was a little more say over the decisions that shaped their lives. And they’d been defeated, by a powerful, profitable corporation and the very old zero-sum story.

BUT THERE’S ANOTHER story woven through the history of worker struggle in America, of people refusing to fight alone and winning the Solidarity Dividend of better jobs, despite the odds. Over the past decade, that story gained a new chapter, written by some of the least likely, lowest-paid workers in our economy. The movement began on November 29, 2012, when about two hundred fast-food workers rallied just outside Times Square, in the heart of Manhattan. They were hourly workers at the bottom of the pay scale, almost all brown and Black, mostly young adults, often with children. They worked at Burger King, McDonald’s, Subway, and Sbarro, but they had either walked off the job or not gone in at all in order to attend an unprecedented one-day strike across the city. They chanted slogans like “One-two-three-four, time for you to pay more! Five-six-seven-eight, don’t you dare retaliate!” Without a union’s protection, they could have been fired upon their return to work, but they gained courage in numbers. Their demand? A raise from the minimum wage of $7.25 an hour to $15 an hour and a union. It was audacious.
I’ll count myself among the gobsmacked. My Demos colleagues and I had been pressing the case to raise the minimum wage for years, using research and advocacy to argue that a poverty wage that hadn’t kept up with rising costs was contributing to economic inequality, rising debt, and bankruptcies. But the consensus advocacy goal had been a raise to $10.10, and even that modest an increase had gone precisely nowhere—little press attention, no big rallies, unsigned bills languishing in statehouses and congressional committees. Then, seemingly out of nowhere, a new, game-changing goal appeared that said to some of the country’s poorest workers: “This could change your life.” Within a year, what would become the Fight for $15 had spread across the country. Everyday workers gained the courage to demand more with the organizing support of local antipoverty community organizers and the Service Employees International Union (SEIU). Many workers credited the 2011 Occupy Wall Street movement for raising their consciousness about the unfairness of working in poverty for profitable corporations. In fact, fast food was the most unequal industry in the economy; Demos research calculated an over one-thousand-to-one average CEO-to-worker pay gap.
Then, in November 2013, the impossible happened: a $15-an-hour victory, won by airport baggage handlers, jet fuelers, food vendors, and wheelchair attendants. These were subcontractors at the Seattle-area SeaTac Airport, making around $9.70 an hour, a poverty wage for the Seattle area. The diversity was wide-ranging—Black Americans, white Americans, immigrants from Greece, Uzbekistan, Haiti, Vietnam, Somalia, the Philippines, and elsewhere. While the noncitizen/citizen divide could have been an opening for a divide-and-conquer strategy, the organizers focused on training immigrants in their rights and teaching them the broader story of income inequality in America, a story that was reinforced by the native-born workers. Veteran employees could recall when their airport service jobs had been decent ones, just a decade before; but with sudden deregulation and subcontracting, almost overnight the pay and conditions for the exact same work had plummeted. This sudden change—and its clear origin in management decisions—made it easier for workers to place the blame on the companies that abruptly changed the rules to squeeze more profit, not on the new, immigrant workers. The multihued group of activists, supported by the local SEIU and Seattle-area community groups, won a ballot initiative in the airport town, Sea-Tac, to raise airport worker wages to $15. The margin of victory was just 77 votes.
Sensing momentum, however, the coalition of supporters made a wild bet that they could win in an even bigger fight, in Seattle itself. By that time, in the spring of 2013, Seattle fast-food workers of every color were walking out in one-day strikes and organizing across the city. By August 29, on a national day of action that coincided with the fiftieth anniversary of the March on Washington, the streets of sixty cities teemed with fast-food workers demanding higher wages. But they weren’t alone: retail workers from department stores like Macy’s and chains like Victoria’s Secret also joined in. A year later, the demonstrations would include adjunct professors with graduate degrees. By May 2014, the Seattle City Council voted to make theirs the first American city to raise its minimum wage to $15 an hour.

TERRENCE WISE NEVER thought that this would be his life at age forty. It’s not that he was surprised to be working in fast food: his mother raised him on a Hardee’s paycheck, so he grew up knowing that big chain restaurants offered hard but honest and always available work. Too many family bills to juggle caused Terrence to drop out of high school to work full time, and in the twenty years since, he’d barely seen a raise. Even though he worked so many hours that he was always missing his three daughters, he hadn’t been able to avoid a spell of homelessness. All that, though depressing, seemed pretty much the norm in America. What Terrence never expected, though, was that he’d find himself in the leadership of a global movement, speaking at the White House and testifying before the U.S. Congress.
Though he’d been an honors-track student and won awards for public speaking in high school, all that promise was far from his mind on the Sunday in 2012 when three people—two Black, one white—walked into his Burger King and asked him to imagine more.
“It was a Domino’s worker, McDonald’s worker, and a Subway worker,” Terrence recalled to me with a smile in his voice. The three workers asked him: Do you think fast-food workers should earn a living wage, vacation, and health benefits? “Well, I hadn’t seen a doctor—at that point, it’d been years, over a decade—so, yeah,” he recalls telling the workers. “We deserve the opportunity for benefits, paid time off, sick days, things that we don’t have.” They told him that they were organizing their fellow fast-food workers across the city. Terrence told them to count him in, and by the end of the day, he had signed up the six co-workers on his shift to join Stand Up KC (Kansas City), a group that would eventually join in the national Fight for $15.
Terrence got his first taste of collective action’s power when he and his co-workers wrote up a petition and confronted the supervisor to demand simple safety improvements: stock up the first-aid kit, fix the broken wheels on the grease trap, replace the hoses that were leaking hot grease (“just simple things that we know billion-dollar corporations like McDonald’s, Burger King, can afford”)—and it worked. The first time he went out on a one-day walkout protest from his job at Burger King, he came back, and his boss gave him a raise of a dollar or so, after having refused one for years.
“So, I’ve seen the power of coming together and organizing, and how it can make change. And I’ve definitely lived the life of when we were not organized…and how life just deteriorated over the years.” Part of what had kept the fast-food workers in Kansas City unorganized was a racial and cultural divide. Historically one of the country’s most segregated cities along lines of Black and white, Kansas City had also seen an increase in the Latino population in the late 1990s, after NAFTA.
Workers of different cultures didn’t communicate much; language was a barrier in some instances, and there were rumors that Latino managers were giving Latino workers higher pay and sick days. Getting workers out of the stores, into each other’s homes, and sharing their stories helped dispel these myths. From the beginning, Stand Up KC named racism as a common enemy. Its first printed banner read: UNITED AGAINST RACISM—GOOD JOBS FOR ALL.
The message has resonated with Terrence. “We’ve got to build a multiracial movement, a different kind of social justice movement for the [twenty-first] century. And we’ve got to talk about it, multiracial organizing and how to build the movement, you know?
“We’ve got to have a new vision for America. We’re building the Fight for $15 and a union movement, and we’ve got to have a new identity for the working class. What do we do every day in this country? All of us get up and go to work. We make this country run. And now, more than ever, workers are producing more wealth than we’ve received, you know? We’re being exploited across the board,” he told me over the phone, and I could just picture him bringing a crowd to its feet.
Many of the signs that workers carried in their Stand Up KC rallies and strikes made it clear that cross-racial solidarity was the point. RACIAL UNITY NOW: WE WON’T FIGHT EACH OTHER, read one sign. BLACK, WHITE, BROWN: WE FIGHT WAGE SLAVERY AND RACIAL DIVISION, read another. And another, BLACK, WHITE, BROWN: DEFEAT MCPOVERTY, DEFEAT HATE. I thought back to Canton; the UAW’s message about race had invoked civil rights for Black workers, but the fast-food message explicitly included white people in the coalition and named division, not just racial oppression, as a common enemy. That story helped transform the way Bridget Hughes saw the world.
Bridget is a white woman whose Irish ancestry shows up in her reddish hair and whose Missouri accent is slight but unmistakable to those who know how to listen for it. She has three children and has worked in fast food for over a decade. Like Terrence, she was an honors student with college potential, but she had to drop out to support her family when her mother got sick. When Bridget was first approached by a co-worker at Wendy’s about joining Stand Up KC, she was skeptical, to say the least. “I didn’t think that things in my life would ever change. They weren’t going to give fifteen dollars to a fast-food worker—that was just insane to me.” But she went to the first meeting anyway. When a Latinx woman rose and described her life—three children in a two-bedroom apartment with plumbing issues, the feeling of being “trapped in a life where she didn’t have any opportunity to do anything better”—Bridget was moved.
“I was really able to see myself in her. And at that point, I decided that the only way we was gonna fix it was if all of us came together. Whether we were white, brown, Black. It didn’t matter.” For Bridget to see herself in a Latinx worker was a breakthrough. She admitted, “When I first joined the movement, I had been fed this whole line of ‘These immigrant workers are coming over here and stealing our jobs…not paying taxes, committing crimes, and causing problems.’ [It was] other white people in my family who believe these kind of racist ideas. You know, us against them.” But she said she saw her bosses at Wendy’s target Latinx workers, falsely promising them a raise if they didn’t join the strikes. “They knew that if our Latino workers joined with our Black and white workers, that we’d have our strength in numbers, and that we was gonna win.” Since joining Stand Up KC, Bridget’s worldview has changed. “In order for all of us to come up, it’s not a matter of me coming up and them staying down. It’s the matter of, in order for me to come up, they have to come up, too—because we have to come up together. Because honestly, as long as we’re divided, we’re conquered. The only way that we’re going to succeed is together.” And they did succeed: Stand Up KC lobbied the city council to raise the local minimum wage to $13 an hour. (Almost immediately, the Republican state legislature passed a law forbidding any municipality from requiring a wage higher than the state’s $7.70, a move that would be replicated by Republican legislatures across the country.) As I got to know more of the workers and leaders in the Fight for $15, it became clear to me that they had thought through their racial analysis. In their protest signs, speeches, and demands, they weren’t just talking about class issues while tacking on comments about racial pay disparities; they were explicitly saying that overcoming racism was crucial to their class-based goal.
They understood that Black workers would likely be their most dedicated base of organizers. Indeed, the Fight for $15’s first two national days of action were set to commemorate anniversaries of the civil rights movement. And in the early days, many chapters cross-organized with Black Lives Matter activists, recognizing that the people who were active against police brutality were often working in fast food and retail by day. The stakes of raising pay were explicitly racial in Birmingham, Alabama, in 2016. The mostly Black city government dared to raise the minimum wage for work within the city limits to $10.10 to escape what city officials and activists called “the Jim Crow economy.” In response, the mostly white state legislature quickly voted to block the pay raise from going into effect. The advocates sued, and a federal court sided with the city, finding that the white state legislature had acted with racial animus against the Black city government. Birmingham wasn’t wrong to say that making people work for so little that they can’t meet their needs is redolent of the Jim Crow economic order: the twenty-one states that have kept their minimum wages at the lowest possible level ($7.25) have some of the largest African American populations in the country. Most people of color are operating in a poverty-wage economy; nationwide, the majority of African Americans and Latinos earn less than $15 an hour. But white people are still suffering from that same economy, and in great numbers.
While only a third of white workers earn less than $15 an hour, they are still the majority of under-$15 workers, and thus will be the largest group to benefit from the organizing spearheaded by workers of color. This fight for decent pay has, like many labor struggles before it, exposed the fact that workers of color suffer the most acute economic injustices, but most of the people harmed in a wage structure built on racism are white. And like every truly successful labor movement, it has found its reach and its strength because of cross-racial solidarity.
What distinguished the autoworkers’ organizing drive from the fast-food campaign? Both were explicit about the racial dynamics at play, both invoked the civil rights struggle, and both had Black workers in leadership.
But management’s divide-and-conquer strategy did not work against the Fight for $15. When the movement first emerged, there were press reports and social media stories about workers, usually white, earning around $15 who felt that their work would be degraded if “burger flippers” earned just as much. Fast-food work, after all, is one of the lowest-status jobs in the economy. With the face of the fast-food worker in early press coverage being Black, I remember worrying that age-old stereotypes about the low value of Black work (and Black life) would doom the audacious campaign.
But this time would be different. The multiracial American working class had shouldered deindustrialization, deunionization, the financial crisis, and the squeeze of unaffordable housing and healthcare. At a time of record corporate profits, these service workers had become the driving force of the American economy, working underpaid jobs that couldn’t be outsourced and that required human touch, voice, and judgment. Those fighting for $15 undoubtedly had less to lose—the legacy Nissan workers who could vote made more than twice what fast-food workers were paid and, in a way, could “free-ride” on a wage floor lifted by decades of labor organizing in Detroit. But the campaign’s strategies were also different. By inviting white workers to see how the powerful profited from selling them a racist story that cost everybody (“whether brown, Black or white,” as workers so often said), the Fight for $15 had managed to win the support of whites as well.
By almost every financial measure, the Fight for $15 has been a success, creating a Solidarity Dividend that reversed a trend of two generations of stagnant and declining wages for the lowest-paid workers. Many of the workers who rallied on that first day in 2012 near Times Square went on to testify at the New York State Capitol in 2016, when they won a statewide $15 minimum wage. So, too, did workers in states including California, Connecticut, Illinois, Maryland, Massachusetts, New Jersey, and Washington, D.C., as well as cities that include Flagstaff, Arizona; the Twin Cities; and Seattle, all of which have raised, or committed to raising, wages to $15 an hour. In addition to these policy wins, workers won private wage increases at giant employers including Walmart, Bank of America, McDonald’s (which also announced it would stop lobbying against minimum wage increases), and Amazon. All this progress was won against the prevailing business and conservative argument that raising the minimum wage would hurt exactly the workers who went on strike across the country—and that raising it to something approaching a living wage would be catastrophic. Instead, the result has been $68 billion more in the pockets of 22 million low-paid workers. It turns out that more money in people’s pockets is not just good for rich people when it comes to tax cuts—and that employers could have afforded it all along. There was no drop in employment in places with wage increases, and in fact, many places have found the opposite. The more elusive goal for the Fight for $15 and a union has been the last part: the union. With high turnover and hundreds of shops per city, organizing workers restaurant by restaurant might take decades. The key to unionization of the thousands of franchises is for the law to recognize that umbrella corporations like McDonald’s are joint employers with the franchise owners, as they set virtually all the terms of business.
The SEIU made this case before the courts, the National Labor Relations Board agreed with them in 2015, and it looked like American fast-food workers were going to join their counterparts in many European countries in having a way to bargain for higher wages and benefits. But one of the first moves from the Trump administration was to reverse the Obama NLRB decision on franchise joint employment.
The majority of workers in American fast food come from the same white, working-class pool of voters who went overwhelmingly for Trump, a man whose campaign was dominated by promises to fight for the (white) working class and punish immigrants. When I spoke with her after the 2016 election, Bridget connected Trump’s election to the urgency of Stand Up KC’s cross-racial organizing: “Kind of the whole point of this movement is for white workers to understand that racism affects white workers as well.
Because it keeps us divided from our Black and our brown brothers and sisters. So, we need to understand that as white workers, we, too, need to stand up and fight against racism.”

AS I WAS wrapping up my last visit to the worker center in Canton, Mississippi, I took the time to walk around the space. An unadorned storefront had been transformed with posters, printed and hand-drawn; photos from rallies; and pictures of workers’ kids. People came in for coffee and company after their overnight shifts ended at dawn, and they’d come in before work to gear up for the long shift ahead. I recalled something Chip had said that morning as I got ready to leave our breakfast. First, he wanted me to know that despite his visible defection in the last weeks, he’d stayed true when the time came: “I got in that booth, and it was very liberating to vote yes.” Second, even though he was afraid he wouldn’t be welcome anymore at the worker center, he needed me to know about the solidarity he felt there. “I felt a sense of belonging, of love, of togetherness, friendship,” he said, with emotion in his voice. “We went through a lot together, and did a lot together, and accomplished a lot….I loved it. I loved goin’ over there….It was, I guess, utopia without havin’ utopia.”

Chapter 6
NEVER A REAL DEMOCRACY
I got into public policy out of concern for what was happening in American economic life, but I learned to look further upstream to what was wrong with our democracy when I joined Demos, an organization whose name means “the people of a nation.” It’s the root word of democracy.
Working alongside voting rights lawyers and experts on campaign finance rules, I learned how our democracy is even less equal than our economy—and the two inequalities are mutually reinforcing. When I think about the nice things we just can’t seem to have in America, a functioning, representative democracy is probably the most consequential.

“I BELIEVE IF you can’t have your fundamental right of voting, what do you have? You don’t have nothin’.” These words could have been spoken by a Black person during the march from Selma to Montgomery for voting rights in 1965, but they were spoken in 2017 by Larry Harmon, a middle-aged white Ohioan. A Navy veteran and software engineer, Larry has a round face, a salt-and-pepper beard, and eyebrows that are quick to flight when he’s incredulous about something—which he was often as Demos and the ACLU represented him in a case that went all the way to the U.S. Supreme Court. It was a case that aimed to strike down a process that had imperiled Larry’s right to vote, a right he’d be the first to admit that he, unlike his Black fellow citizens, never thought he’d have to fight for.
Democracy is a secular religion in America; faith in it unites us. Even when we are critical of our politics, we wouldn’t trade our form of government for any other, and we have even gone to war to defend it from competition with rival systems. Yet our sacred system allows a Larry Harmon to lose his opportunity for self-governance as easily as one lets a postcard fall in with the grocery circulars and wind up in the trash.
The truth is, we have never had a real democracy in America. The framers of the Constitution broke with a European tradition of monarchy and aspired to a revolutionary vision of self-governance, yet they compromised their own ideals from the start. Since then, in the interest of racial subjugation, America has repeatedly attacked its own foundations.
From voter suppression to the return of a virtual property requirement in a big donor-dominated campaign finance system, a segment of our society has fought against democracy in order to keep power in the hands of a narrow white elite, often with the support of most white Americans.
A recent study by political scientist Larry M. Bartels found that Republicans who score high in what he calls “ethnic antagonism”—who are worried about a perceived loss of political and cultural power for white people in the United States—are much more likely to espouse antidemocratic, authoritarian ideas such as “The traditional American way of life is disappearing so fast that we may have to use force to save it,” and “Strong leaders sometimes have to bend the rules to get things done.” Three out of four Republicans agreed that “it is hard to trust the results of elections when so many people will vote for anyone who offers a handout,” a stunning opinion reflecting the way that decades of anti-immigrant, antipoor, anti-Black, and antigovernment political messaging can tip over into an antipathy toward democracy itself at a time of demographic change.
Then again, the antidemocratic concept of minority rule—and rule by only the wealthiest of white men, in fact—was the original design of American government, despite any stated “self-evident truths” about equality to suggest the contrary. When the Constitution was ratified, the majority of white men were excluded from participating in this vaunted new system of representation, given that every one of the original thirteen states limited the franchise to men wealthy enough to own property. I’ve found it’s easier to understand the sorry state of today’s elections if one starts by unlearning the grade school narrative of the framers’ commitment to equality and democracy and recognizing that the framers left holes in the bedrock of our democracy from the outset, in order to leave room for slavery.
The South won the Three-fifths Compromise in the Constitution, giving southern states added power in Congress based on a fraction of the nonvoting population of Black people and diminishing the legislative power of white people in free states. Possibly the most consequential of the founding racist distortions in our democracy was the creation of the Electoral College in lieu of direct election of the president. James Madison believed that direct election would be the most democratic, but to secure slave states’ ratification of the Constitution, he devised the Electoral College as a compromise to give those states an advantage. As a result, the U.S. apportions presidential electoral votes to states based on their number of House and Senate members. With the South’s House delegations stacked by the three-fifths bonus, the region had thirteen extra electors in the country’s first elections, and Virginia was able to boost its sons to win eight of the first nine presidential contests. The three-fifths clause became moot after Emancipation and Black male suffrage at the end of the Civil War, but the Electoral College’s distortions remain. An Electoral College built to protect slavery has sent two recent candidates to the White House, George W. Bush and Donald J. Trump, who both lost the popular vote. The Electoral College still overrepresents white people, but in an interesting parallel to the free/slave tilt from the original Constitution, not all white people benefit. The advantage accrues to white people who live in whiter, less-populated states; white people who live in larger states that look more like America are the ones underrepresented today.

AS THE FREE population of the new country skyrocketed—tripling that of the Revolutionary era in just four decades—states began to reconsider the property limitations on the franchise. In the South, the growing threat of both slave revolts and cross-racial uprisings by Africans and landless white men helped convince the plantation aristocracy that it might be better if all free white men, not just the richest, had a stake in defending a white-supremacist government. But in state after state in the North, the push for universal suffrage among men regardless of class came in the form of a zero sum, at the expense of the few Black men who had heretofore been allowed to vote. Property requirements were eliminated in the 1820s, ’30s, and ’40s in the same stroke that removed the tenuous voting rights of free Black citizens, so that only 6 percent of the free Black population lived in states that allowed them to vote by the early 1860s. This move to make real the republic’s promise of self-government, but only for those with white skin, sent a powerful zero-sum message that white equality would be purchased with white supremacy. Universal white male suffrage redefined the meaning of human worth in a society with whipsawing economic vicissitudes: wage-earning white men no longer needed to be wealthy to find esteem in the eyes of their society. They just needed to be white. In many states and territories in the nineteenth century, white-skinned immigrants didn’t even need to become citizens to be granted the prized right of citizenship, the vote.
Anti-Blackness gave citizenship its weight and its worth. Perhaps that helps explain why so many whites reacted to the post–Civil War possibility of Black citizenship not with debate but with murderous violence. John Wilkes Booth made up his mind to assassinate President Abraham Lincoln after he heard him advocate for voting rights for Black men. “That means nigger citizenship. That is the last speech he will ever make….By God, I’ll put him through,” Booth declared. He assassinated Lincoln three days later.
In the years that followed, federal troops traveled across the South registering seven hundred thousand recently freed Black men. The white backlash to Black suffrage was immediate, and not just by elites who saw their political privilege threatened. In Colfax, Louisiana, for example, when a pro-Reconstruction candidate supported by Black voters won a fiercely contested gubernatorial race in 1872, the following spring, a mob of armed white men attacked the courthouse where the certification of the election had been held, killing about one hundred Black people who were trying to defend the building, and setting the courthouse on fire. The white citizens murdered their neighbors and burned the edifice of their own government rather than submit to a multiracial democracy.

THE NEXT ONE hundred years in American history were shaped by relentless assaults on the right of Black and Indigenous Americans to vote and by elite efforts to prevent class-based interracial resistance. Because the Fifteenth Amendment barred states from denying the right to vote based on color, class served as a proxy. The Reconstruction era saw movements of impoverished white farmers making common cause with Black freedmen in political parties and populist alliances sometimes known as “Fusion.” Their aim was to break the grip the plantation oligarchy had on government and the economy, provide interest rate relief to debtors, raise taxes for public works, and resist railroad land grabs. The ruling class fought against the cross-racial populists with a campaign for “white supremacy,” promising material and other advantages to whites who broke with Blacks—and violent intimidation to those who didn’t.
When they won, the white supremacists attacked the franchise first. In 1890, unsure that one barrier to the ballot would suffice to control growing Reconstruction-era Black political power, Mississippi implemented literacy tests, new registration rules, standards for “good character,” poll taxes, and more. Other states soon created similar laws, and poor white voters were caught up in the dragnet. For instance, poll taxes, usually in the range of one to two dollars (two dollars in 1890 being almost fifty-seven dollars in today’s money), required cash of poor white, Black, and Indigenous people who were often sharecroppers with little cash to their names. In some places, grandfather clauses exempted whites whose grandfathers could vote before the war; in others, candidates or party officials would pay white voters’ taxes for them in exchange for their loyalty. But in many places, the poll tax continued to work almost as effectively to disenfranchise poor white people as it did Black people, and the result was a slow death of civic life. After several southern states adopted the menu of voter suppression tactics, turnout of eligible white voters throughout the region plummeted. In the presidential election of 1944, when national turnout averaged 69 percent, the poll tax states managed a scant 18 percent.
Some of the voter manipulation tactics of the post–Civil War era remain in full force today. The requirement that we register to vote at all before Election Day did not become common until after the Civil War, when Black people had their first chance at the franchise. Throughout its history, writes legal scholar Daniel P. Tokaji, “voter registration has thus been a means not only of promoting election integrity, but also of impeding eligible citizens’ access to the ballot.” Today, the burdensome and confusing registration process is particularly onerous on people who move frequently (young people, people of color, and low-income people) or who may not know about lower-profile, off-cycle election dates before the registration deadlines, which are as much as thirty days before the election in some states. One of the top barriers to voting, the registration requirement kept nearly 20 percent of eligible voters from the polls in 2016.
Over six million Americans are prohibited from voting as a by-product of the racist system of mass incarceration. (The only states that allow people with felony convictions to vote even while they’re in prison are Maine and Vermont, the two whitest states in the nation.) Many felony disenfranchisement laws were enacted after the Civil War alongside new Black Codes to criminalize freedmen and women. “Some crimes were specifically defined as felonies with the goal of eliminating Blacks from the electorate,” as legal scholar Michelle Alexander wrote. These included petty theft in Virginia and, in Florida, vagrancy, which was a notorious catchall used to send into prison labor any Black person in a public space without a white person to vouch for him. In 1890, Mississippi designated crimes such as bigamy and forgery as worthy of disenfranchisement, but not robbery or murder. The disenfranchisement laws, combined with discriminatory policing and sentencing, hit their target and today ensnare one in thirteen African American voters. But their reach is wider than their aim: one in fifty-six non-Black voters is impacted as well. In Florida, voters in 2018 overturned the state’s lifetime disenfranchisement of people with felony convictions by ballot measure, enabling more than a million people to regain their voting rights—the majority of whom are white. Desmond Meade is the visionary founder of the Florida Rights Restoration Coalition. I got to know him during the multiyear ballot initiative campaign because of Demos’s partnership with FRRC. We had frank conversations about the headwinds of racism and the challenges of creating a multiracial coalition on an issue as charged as criminal justice in such a conservative state. But like so many Black leaders I’ve known, Desmond had a vision with an irresistible breadth, and it attracted the grassroots energy of people from all walks of life. 
I asked him to put me in touch with one of FRRC’s white activists, and a week later, I was on the phone with a woman named Coral Nichols.
Coral is a white woman in her early forties from Largo, Florida, and is among the hundreds of thousands of white Floridians denied the right to vote under the state’s Reconstruction-era felony disenfranchisement law.
While she was still under probation, Coral started volunteering with FRRC—“because we’ve served our time, and we should be given the opportunity to belong,” she explained to me. Coral went door-to-door in her county encouraging local citizens to do what she could not—vote on a ballot initiative to restore voting rights to people like her. Coral could tell that a lot of people she spoke to had a preconceived notion about people with felony convictions: “They think that most felons are monsters. They don’t see the depth of a personal story, which is why I think that stories are so important.” Race played a role, too—and that’s why Coral always chose to canvass alongside an African American “brother or sister,” as she put it. “It was important that we were united together. When we encountered any type of stereotype, what could break the stereotype was what was standing in front of them.” Amendment 4 passed with 65 percent of the vote on November 6, 2018, and on April 19, 2019, Coral finally got released from the ten years of probation that followed her incarceration and was free to register to vote.
In reaction to Amendment 4, Florida’s Republican governor and legislature passed a state law that required people with a felony history to pay all outstanding fines and fees before voting. This move—redolent of the poll tax—is particularly troubling in Florida, where it is nearly impossible for returning citizens to find out what the state thinks they owe and where “there is no database…to be able to check all the different court costs that might be outstanding,” as one county supervisor of elections testified. The restrictive new law was challenged in court but upheld by a federal appeals court in September 2020. Coral is among approximately eighty-five thousand returning citizens who registered to vote before the new restrictive law went into effect and who must prove they have paid up before they can vote.

THINK ABOUT IT: today, no politician worries that their position in a representative government is illegitimate even if only a minority of citizens votes in their election. They should. What does it mean when the officials who set policy in our name are elected by so few of us? We shouldn’t take these low standards for granted. Our election system is full of unnecessary hurdles and traps—some set by malice and some by negligence—but I would argue that all are a product of the same basic tolerance for a compromised republic that was established at our founding, in the interest of racial slavery. Countries less boastful of their democracies do much better. In Australia, voting is mandatory, and nearly 97 percent of Australians are registered, compared to about 70 percent registration and 61 percent voting in the United States. Canada and Germany don’t make voting compulsory, but their registration rates are about 93 and 91 percent, respectively.
America’s fifty states, and even counties within them, confuse and discourage voters with an archaic patchwork of varying laws, rules, and practices. In some states, you can go to the polls on Election Day and sign up as you vote. In others, you have to register thirty days before an election, a deadline you’re likely to know only if you’ve missed it. In some states—a growing number since the COVID-19 pandemic—you can vote at home and mail in your ballot, while in others, you have to provide an excuse for why you could not go in person. Not surprisingly, Americans at all educational levels are deeply uncertain about their own states’ election laws. In states that prohibit early voting, only 15 percent of residents are aware of this restriction. In states that allow same-day registration, only a quarter of their residents know it. Around half of Americans are unsure whether their state permits them to vote if they have unpaid utility bills or traffic tickets—prohibitions that no states have adopted (yet).
To see what U.S. democracy would be like without the distorting factor of racism, we can look to the states that make it easiest to vote, which are some of the whitest. Oregon, for example, was judged the easiest state in which to vote by a comprehensive study. In Oregon, everyone votes by mailing in a ballot, and Oregon was the first state in the nation to adopt automatic voter registration (AVR), which means rather than making voters figure out how, when, and where to register, Oregon uses information the state already has, for instance from the DMV, to add eligible voters to the rolls. North Dakota, another largely white state, boasts of being the only state without any requirement of voter registration. Until a 2018 voter ID law aimed at Indigenous North Dakotans, you could simply have a poll worker vouch for you at the polling place. Mississippi, the state with the highest percentage of Black citizens, is dead last of the fifty states in terms of ease of voting.

FOR MOST OF America’s history, voter suppression was strongest in the Jim Crow states where the Black population threatened white political control. But after the election of the first African American president, every state became a potential threat to white control. A new wave of voter suppression, funded by a coterie of right-wing billionaires, crashed into states like Florida, North Carolina, Ohio, and Wisconsin—swing states that could turn a presidential election.
These same billionaires funded a lawsuit, Shelby County v. Holder, to bring a challenge to the Voting Rights Act’s most powerful provision.
Decided by a 5–4 majority at the beginning of President Obama’s second term, Shelby County v. Holder lifted the federal government’s protection from citizens in states and counties with long records of discriminatory voting procedures. Immediately across the country, Republican legislatures felt free to restrict voting rights. North Carolina legislators imposed a photo ID law that “target[ed] African Americans with almost surgical precision,” because it was based on research that pinpointed the kinds of identification to which white people had greater access and then allowed only those forms of ID. Texas introduced a voter ID law that essentially let the state design its own electorate, requiring photo IDs that over half a million eligible voters lacked and specifying what kinds of IDs would be permitted (gun permits, 80 percent of which are owned by white Texans) and denied (college IDs, in a state where more than 50 percent of students are people of color). Alabama demanded photo IDs from voters, such as a driver’s license, and within a year, it closed thirty-one driver’s license offices, including in eight out of ten of the most populous Black counties. Between the 2013 Shelby decision and the 2018 election, twenty-three states raised new barriers to voting. Although about 11 percent of the U.S. population (disproportionately low-income people, seniors, and people of color) do not have access to photo IDs, by 2020, six states still demanded them in order for people to vote, and an additional twenty-six states made voting much easier if you had an ID.
These policies were targeted primarily to disadvantage people of color, but such broad brooms have swept large numbers of white people into the democratic margins as well. In general, about 5 percent of white people in the United States lack a photo ID. Within certain portions of the white population, however, the numbers increase: 19 percent of white people with household incomes below $25,000 have neither a driver’s license nor a passport. The same is true of 20 percent of white people ages 17–20. Of the fifty thousand already-registered Alabama voters estimated to lack proper photo ID to vote in 2016, more than half were white.
Anti-voting lawmakers perhaps weren’t intending to make it harder for married white women to vote, but that’s exactly what they did by requiring an exact name match across all forms of identification in many states in recent years. Birth certificates list people’s original surnames, but if they change their names upon marriage, their more recent forms of ID usually show their married names. Sandra Watts is a married white judge in the state of Texas who was forced to use a provisional ballot in 2013 under the state’s voter ID law. She was outraged at the imposition: “Why would I want to vote provisional ballot when I’ve been voting regular ballot for the last forty-nine years?” Like many women, she included her maiden name as her middle name when she took her husband’s last name—and that’s what her driver’s license showed. But on the voter rolls, her middle name was the one her parents gave her at birth, which she no longer used. And like that, she lost her vote—all because of a law intended to suppress people like Judge Watts’s fellow Texan Anthony Settles, a Black septuagenarian and retired engineer.
Anthony Settles was in possession of his Social Security card, an expired Texas identification card, and his old University of Houston student ID, but he couldn’t get a new photo ID to vote in 2016 because his mother had changed his name when she remarried in 1964. Several lawyers tried to help him track down the name-change certificate in courthouses, to no avail; his only recourse was to go to court for a new one, at a cost of $250.
Elderly, rural, and low-income voters are more likely not to have birth certificates or to have documents containing clerical errors. Hargie Randell, a legally blind Black Texan who couldn’t drive but who had a current voter registration card used before the new Texas law, had to arrange for people to drive him to the Department of Public Safety office three times, and once to the county clerk’s office an hour away, only to end up with a birth certificate that spelled his name wrong by one letter.
Possibly the most insidious anti-voting innovation to appear after the Obama election was the purge of unwitting voters already registered to vote. In 2015, Larry Harmon’s elected secretary of state, Jon Husted, used a purge process to eliminate two hundred thousand registered Ohio voters from the rolls in the state’s twenty most populous counties, all in the name of list maintenance to prevent voter fraud. As in most states, these high-population counties were also the ones whose residents were most likely to be people of color and to vote Democratic.
Here’s how the purge process worked. If an Ohio voter failed to vote during a two-year period—say, he voted in the presidential election but sat out the midterms—the state mailed the voter a postcard to verify his address. If the voter didn’t return the postcard, the state launched a process that, unless the person cast a ballot within the next four years, would result in his name being purged from the rolls: no longer considered a valid voter in the state. There are a number of problems with this approach, starting with the fact that in the United States, voting is not a use-it-or-lose-it right.
What’s more, as Secretary Husted knew perfectly well, the vast majority of people who receive these address-verification postcards in the mail do not return them. In 2012, Ohio went to the trouble and expense to send out 1.5 million address-verification notices to people who hadn’t voted in 2011—out of a total of only 7.7 million registered voters. Presuming a change in registration for almost one out of every five registered voters is a remarkably wasteful effort, given that only about three out of every one hundred people move out of a registrar’s jurisdiction in any given year.
Of the 1.5 million postcard recipients, 1.2 million never responded. This should have been a clue that something was wrong with the state’s notification process, not with the voters. Or perhaps the process was working precisely as intended: people of color, renters, and young people are significantly less likely to respond to official mail than are white people, homeowners, and older people, as the Census Bureau had discovered.
“I’ve lived in Ohio my entire life,” explained Larry Harmon, “except for when I served in the Navy, and even then, I paid Ohio taxes.” Yet, in 2015, Larry felt like he’d been disappeared in the eyes of the state. “When I went to vote, I went into the hall and I looked up my name, and I looked and I looked, but I didn’t see my name.
“While I was at work on my lunch hour, I tried to google to see, did I do something wrong?…I didn’t quite understand why I wouldn’t be on the list; I’d voted there before.” Then he ran across information on Ohio’s purge of inactive voters. “I didn’t think I was required to vote in every election!” Larry said, incredulity in his voice.
He had been voting since 1976, mostly in presidential elections. His reasons for skipping the 2012 election were, like those of so many Americans, both personal and political: a combination of a lack of inspiration and the pressures of real life. “I think I went through a period after my mother’s death that I wasn’t interested in voting, and I didn’t think it did a whole lot of good, so I didn’t vote for one presidential election and, they told me, one midterm election.” But in 2015, Larry was closely following an issue that he knew would be on the ballot—a proposal to legalize marijuana but to concentrate the industry in a few corporate hands. He was opposed to the idea and was eager to have his say. And the more Larry thought about being denied the opportunity to vote, the more upset he became. “I thought, ‘Well, jeez. You know, I pay my taxes every year, and I pay my property taxes, and I register my car.’ So, the state had to know I’m still a voter. Why should we fight for the country if they’re gonna be taking away my rights? I mean, I’m a veteran, my father’s a veteran, my grandfather’s a veteran. Now they aren’t giving me my right to vote, the most fundamental right I have?”

Lawyers at my organization learned of Ohio’s singularly aggressive purging practice—no other state initiated a purge process after just one missed federal election—and concluded that it violated federal law, the National Voter Registration Act of 1993. Most commonly known as the “motor voter” law because it made registration more available at DMVs and other government offices, the law also bars states from a number of burdensome voter registration practices, including purging registered voters for not voting. In early 2016, we took Ohio to court and, over the next two years, battled the case all the way up to the Supreme Court.
On January 10, 2018, I was in Washington, D.C., to watch the oral arguments before the Supreme Court for the case, which was now called Husted v. A. Philip Randolph Institute (APRI). APRI, a membership group of Black trade unionists whose volunteer activities include voter registration, was a plaintiff, along with Larry Harmon and the Ohio Coalition for the Homeless. The early morning was chilly as I walked to the Court building with my colleague Stuart Naifeh, who had argued the case successfully in the lower court. As we climbed the wide stone steps, I looked up at the words inscribed in marble above the Court’s columns: EQUAL JUSTICE UNDER LAW. I couldn’t help contrasting those stirring words with the mess Ohio had made of its voting system.
As another Demos colleague, Chiraag Bains, later wrote in a Washington Post op-ed, “In the United States, if you don’t buy a gun for several years, you do not lose your Second Amendment right to bear arms.
If you never write a letter to the editor or participate in a street demonstration, you retain your full First Amendment rights to free speech.
If you skip church for years on end, the government cannot stop you from finally attending a service.” But despite our contemporary reverence for the idea of equality under the law, the truth is the Constitution wasn’t written with an affirmative right to vote for all citizens. It’s always been a power struggle to create a representative electorate, and currently, the forces against equality have the upper hand. Purges and other kinds of voter suppression are forms of racial oppression that vitiate the goal of democracy, and white voters like Larry Harmon end up being collateral damage in a trap not set for them.

Across the country, states purged almost 16 million voters between 2014 and 2016. Some 7 percent of Americans report that they or a member of their household went to their polling place only to be told that their name was not on the voter roll, even though they knew they were registered.

In the courtroom, we didn’t hear much about race—Demos was arguing that Ohio’s purge process violated federal election law, not civil rights law or the Equal Protection Clause—until Justice Sonia Sotomayor, the country’s first Latinx Supreme Court justice, spoke to what she called the “essence of this case.” “It appears as if what you’re reading is that the failure to vote is enough evidence to suggest that someone has moved….[I]s that a reasonable effort to draw that conclusion, when [what] you do results in disenfranchising disproportionately certain cities where large groups of minorities live?” she asked. “There’s a strong argument…that at least in impact, this is discriminatory.” In the end, a conservative majority of the Supreme Court ruled against Harmon and allowed Ohio’s secretary of state to continue deregistering voters for elections to come.

WHERE DID JON Husted get the idea to purge up to a million of his own state’s voters? There is a playbook of anti-voting tactics drawn up by a connected set of benign-sounding organizations such as the legislation-drafting network of conservative lawmakers, the American Legislative Exchange Council, and the legal organizations Project on Fair Representation and the Public Interest Legal Foundation, all of which are funded in turn by a group of radical right-wing millionaires and billionaires, chief among them fossil fuel baron Charles Koch. (Until his death in 2019, Charles’s brother David was his partner in these efforts, and the two men, among the richest in the world, were widely known as “the Koch brothers.”) Over the past fifty years, the Kochs organized vast sums of money to advance a vision for America that includes limited democracy, a rollback of civil rights, and unfettered capitalism.
We wouldn’t know much about the radical aims of the Koch brothers, whose political spending was often as secretive as their charitable giving was public (the dance hall at New York’s Lincoln Center, for example, is the David H. Koch Theater), if it weren’t for journalists like Jane Mayer and a little-known history professor named Nancy MacLean. In 2013, MacLean, a professor at Duke University, found a neglected storehouse of papers in the archives of James Buchanan, an influential economist who had recently died. The Buchanan papers became the basis for her award-winning book Democracy in Chains: The Deep History of the Radical Right’s Stealth Plan for America.
MacLean has thick brown hair that she often pushes impatiently out of her eyes as she speaks, which she does at a breakneck pace. Her words chase one another out of her mouth, accelerating to a crescendo at the end of every packed sentence. Considering the astonishing revelations that are flying at you that quickly, the whole experience of a MacLean conversation can leave you feeling like you’ve just been picked up by a twister and dropped in an entirely different universe.
Sadly, though, the universe she describes is ours. Her work has exposed an influential movement of radical right-wing libertarians opposed to the very idea of democracy. Through five decades of money and organizing, this movement has permeated conservative media and the Republican Party with its fringe, self-serving vision of an undemocratic society. Its goal is a country with concentrated wealth and little citizen power to levy taxes, regulate corporate behavior, fund public goods, or protect civil rights. The obstacle to this goal is representative democracy.
That’s why the hundreds of millionaires in the Koch network have taken aim at the rules of democracy, funding think tanks, legal organizations, public intellectuals, and advocacy groups to promote a smaller and less powerful electorate and weaker campaign finance laws. Since 2010, the groups they fund have spurred more than one hundred pieces of state legislation to make it harder to vote, almost half of which have passed; launched dozens of lawsuits attacking both voter protections and controls on big money in politics (including both Shelby County v. Holder and the case resulting in the notorious “corporations are people” decision, Citizens United); and invested in technology to allow extreme partisan gerrymandering. The scale of their organization is as large as a political party, but they use front groups and shell companies to keep their funding mostly secret. The core philosophy that unites their economic aims with their attacks on a multiracial democracy is that a robust democracy will lead to the masses banding together to oppose property owners’ concentration of wealth and power.
On its face, the aim of this movement is not white supremacy. Professor MacLean says they’re about “property supremacy.” But racism has long been useful to the movement. James Buchanan was awarded a Nobel Prize for his ideas about taxes, the size of government, and the deficit, but he first made his name in 1959 by offering a way for Virginia to resist desegregating public schools after Brown v. Board of Education. Buchanan co-wrote a memo to Virginia legislators advocating the use of public funds for private (and therefore segregable) schools, which he argued could be economically efficient if the state used the revenue from public assets like school buildings. Buchanan and his co-author wrote, “We believe every individual should be free to associate with persons of his own choosing. We therefore disapprove of both involuntary (or coercive) segregation and involuntary integration.”

Many of today’s right-wing political actors take their libertarian economic philosophy from people like Buchanan and their funding from the Koch brothers network. The success of their policy agenda hinges on an unrepresentative electorate, because their vision can’t garner majority support. Their unpopular ideas include lowering taxes on the wealthy (64 percent of Americans want higher wealth taxes), slashing government spending and eliminating public transit (70 percent want a big infrastructure plan paid for by a wealth tax), and drastically minimizing the government’s role in health insurance (56 percent support a fully public single-payer system).
This is where racism becomes strategically useful. Whatever the Koch movement operatives (which now include many Republican politicians) believe in their hearts about race, they are comfortable with deploying strategic racism because popular stereotypes can help move unpopular ideas, including limiting democracy. Take, for example, the widespread unconscious association between people of color and criminals; anti-voting advocates and politicians exploited this connection to win white support for voter suppression measures. They used images of brown and Black people voting in ads decrying “voter fraud,” which has been proven repeatedly to be virtually nonexistent and nonsensical: it’s hard enough to get a majority of people to overcome the bureaucratic hurdles to vote in every election; do we really think that people are risking jail time to cast an extra ballot?
Nonetheless, the combination of the first Black president and inculcation through repetition led to a new common sense, particularly among white Republicans, that brown and Black people could be committing a crime by voting. With this idea firmly implanted, the less popular idea—that politicians should change the rules to make it harder for eligible citizens to vote—becomes more tolerable.
And this opens the door to a complete undermining of American democracy. As one of the architects of today’s right-wing infrastructure, Paul Weyrich, said in a 1980 speech, “I don’t want everybody to vote.
Elections are not won by a majority of people; they never have been from the beginning of our country and they are not now. As a matter of fact, our leverage in the elections quite candidly goes up as the voting populace goes down.” Adherents to this belief system “see democracy as essentially infringing on economic liberty, and particularly the economic liberty of the most wealthy and corporations,” MacLean told me.
Voter suppression, an age-old racist tactic, has been reanimated in recent years by subtly anti-Black and anti-brown propaganda, but is now useful against a broad base of white people who could be in a multiracial coalition with people of color. MacLean recalled, “The voter suppression legislation in many cases, certainly in [my state of] North Carolina, didn’t only aim at African Americans. It also aimed in particular at young people. And this older generation of white conservatives…understand[s] that young people are not liking these ideas….That young people are…raising questions about the inequities in the way that capitalism is operating.
“So, for example, in my state, they took pains to eliminate a program that led to the automatic registration of high school students….They took aim at early voting, which tends to be something that many young people also use. And frankly, many white people prefer, too.
“They also moved polling sites away from campuses,” said MacLean.
“A really egregious example of that was in Boone, North Carolina, which is a predominantly white community in the western mountains….The Republicans in charge moved the polling place from the campus, which is right in the city and very convenient to lots and lots of people….[T]hey moved it halfway down the mountain to a place where there was no parking, no public transportation, and it was dangerous to walk along the road to get to this place.”

IN 1956, TWENTY-FOUR-YEAR-OLD Air Force captain Henry Frye went to register to vote in Richmond County, North Carolina. The state had enacted a literacy test in 1899 as part of the White Supremacy Democrats’ defeat of the Fusion Party populists, but Frye, an honors college graduate, was more than literate. Nonetheless, he was turned away. The test as administered by the white clerk? Name all the signers of the Declaration of Independence.
Nearly a decade later, North Carolina’s voter suppression law and hundreds of similar restrictions across the country finally fell under the Voting Rights Act of 1965, Dr. Martin Luther King Jr.’s crowning achievement and the cause for which marchers were beaten on Bloody Sunday in Selma, Alabama. It’s hard to overstate the difference that the Voting Rights Act made in the country’s journey toward true democracy. The year before its passage, less than 10 percent of eligible African Americans in Mississippi were registered; five years later, that figure was almost 60 percent. In 1962, only 36 percent of Black North Carolinians were registered; one year after the Voting Rights Act, it had grown to 50 percent. Throughout the South, about one million new African American voters registered within a few years of the Voting Rights Act’s taking hold.

Back in Richmond County, North Carolina, the law freed Henry Frye to become a voter, then to run and win a campaign for the state legislature. He would go on to become the chief justice of the North Carolina Supreme Court.

The fear that drives the violence and mendacity of American voter suppression is rooted in a zero-sum vision of democracy: either I have the power and the spoils, or you do. But the civil rights–era liberation of the African American vote in the South offered a Solidarity Dividend for white people as well. The elimination of the poll tax in particular freed up the political participation of lower-income white voters. Indeed, white voters in Georgia and Virginia had challenged the poll tax requirement, but the courts upheld it in 1937 and 1951. After the civil rights movement knocked down voting barriers, white as well as Black registration and turnout rates rose in former Jim Crow states. And a fuller democracy meant more than just a larger number of ballots; it meant a more responsive government for the people who hadn’t been wealthy enough to have influence before.
It meant a break, finally, from what the southern political scientist V. O. Key described in 1949 as the stranglehold of white supremacy, single-party politics, and the dominance of the Bible Belt planter class.
“When you talk about the effects of the Voting Rights Act and political participation, just going to the ballot and casting your vote is only one step,” economist Gavin Wright told me. He’s the author of Sharing the Prize, which details the economic benefits the civil rights movement brought to the entire South, whites included. “What the Black political leadership got, and economic leadership, was a seat at the table.” With that seat, they won investments in public infrastructure, including hospitals, roads, schools, and libraries that had been starved when one-party rule allowed only the southern aristocracy to set the rules. More voters of all races meant more competitive elections; for the first time since the end of Reconstruction, a white supremacy campaign wasn’t enough. Candidates had to promise to deliver something of value to southern families, white and Black. In Sharing the Prize, Wright writes that “after the Voting Rights Act…southern…gubernatorial campaigns increasingly featured nonracial themes of economic development and education.”

Pre–civil rights Alabama was a quintessential example of racist inequality starving the public. Nearly half the state’s citizens over age twenty-five had no more than an elementary school education in 1960. This was the case for two out of three Black Alabamians, but also for two in five white Alabamians. After the Voting Rights Act swelled the electorate, Gov. Albert Brewer faced arch-segregationist George Wallace and hoped to appeal to a modern-day Fusion coalition of the white middle class, newly enfranchised Black Alabamians, and working-class whites outside the retrograde former plantation counties in the Black Belt. So, he called a 1969 Special Session on Education that passed twenty-nine bills and appropriated an unprecedented one hundred million dollars toward education in the state.
Brewer narrowly lost in a runoff, but the impact of the educational investments he spearheaded continued.

IN ORDER TO prevent a thriving multiracial democracy, the same movement that puts up barriers to voting has hacked away at the safeguards against money flooding into elections. It’s not often thought of this way, but the current big-money campaign finance system is a linchpin of structural racism, and the stealth movement to create it has been driven by people who often also work against government action to advance civil rights and equality. (Fifty years after libertarian economists laid out the case for school privatization instead of integration, a Koch brothers–founded libertarian group helped dismantle one of the country’s first and few remaining voluntary school integration systems, in Wake County, North Carolina.) Most people who wonder why our politics are so corrupt can’t draw the line from racist theories of limited democracy to today’s system, but the small group of white men who are funding the effort to turn back the clock on political equality can lay claim to a long ideological pedigree: from the original property requirement to people like John C. Calhoun, who advocated states’ rights and limited government in defense of slavery, to the Supreme Court justices who decided Shelby County and Citizens United.
Over the past few decades, a series of money-in-politics lawsuits, including Citizens United, have overturned anticorruption protections, making it possible for a wealthy individual to give more than $3.5 million to a party and its candidates in an election cycle, for corporations and unions to spend unlimited sums to get candidates elected or defeated, and for secret money to sway elections. The result is a racially skewed system of influence and electoral gatekeeping that invalidates the voices of most Americans. When you consider the impact that the flow of money and lobbying has on policy making, it’s no exaggeration to say that the white male property requirement for having a say in government is still the default mode of business. One pair of political scientists stated, “Economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.” They conclude that “in the United States…the majority does not rule—at least in the causal sense of actually determining policy outcomes.” Another political scientist found that “senators’ [policy] preferences diverge dramatically from the preference of the average voter in their state…unless these constituents are those who write checks and attend fundraisers.” Still another wrote that the preferences of people “in the bottom one-third of the income distribution have no discernable impact on the behavior of their elected representatives.”

Since the early 1970s—not coincidentally, shortly after the 1965 Voting Rights Act began to dramatically increase the voting participation of African Americans—the donor class in America has grown more powerful and more secretive, but the number of donors who give contributions large enough to require tracking (above $200) is minuscule, less than 1.2 percent of the entire adult population. Their outsize donations totaled more than 71 percent of all campaign contributions during the 2018 election cycle.
This tiny coterie of elite donors who hold such sway over our political process do not look or live like most Americans. Obviously, they are wealthier than the rest of us; of donors who gave more than five thousand dollars to congressional candidates in 2012–2016, 45 percent are millionaires, while millionaires comprise only 3 percent of the U.S. population. As a team of New York Times reporters described in an exposé of the 158 families who dominated funding for the 2016 presidential election, “They are overwhelmingly white, rich, older and male, in a nation that is being remade by the young, by women, and by black and brown voters.” The big-money campaign finance system is like so much of modern-day structural racism: it harms people of color disproportionately but doesn’t spare non-wealthy white people; it may be hard to assign racist intent, but it’s easy to find the racist impacts.
Two-thirds of Americans consider it a major problem that “wealthy individuals and corporations” have “disproportionate influence” in our elections. Though the impact is most acutely felt among people of color whose voices are the least represented, the reach is widespread enough that there’s a powerful Solidarity Dividend waiting to be unlocked for all of us.
After a history of high-profile corruption cases earned the state the nickname “Corrupticut” and led to the imprisonment of a sitting governor in 2004, Connecticut passed a sweeping campaign finance reform measure.
The Connecticut Citizens’ Election Program offered candidates the chance to qualify for public grants to fund their campaigns if they could collect enough grassroots donations from people in their district, in increments of five to one hundred dollars. In the first years after the reform, the change was dramatic. Candidates spent most of their campaigning time hearing the concerns of their constituents instead of those of wealthy people and check-writing lobbyists. James Albis, representative of East Haven, recalled, “I announced my reelection bid in February, and by April, I was done fundraising. So, from April to November, I could focus only on talking to constituents. Without public financing, I would have been fundraising through that entire period.” Corporate lobbyists had less sway over legislators’ agendas. Reform lifted the wealth requirement from running for office, too. “Public financing definitely made the legislature more diverse. There are more people of color, more young people, more women, and more young women,” noted the secretary of state, Denise Merrill.
One of those people was state senator Gary Holder-Winfield, an African American activist and former electrical construction manager for a power plant, who describes himself as “the candidate who wasn’t supposed to win.” He was in the first class of legislators who ran under the Citizens’ Elections system. He explains, “I didn’t come from money. I am a candidate of color, and I wasn’t a candidate for the political party or machine apparatus. I didn’t have the nomination, and I was actually able to defeat the person that had the nomination by talking about issues that he wasn’t.
“I’m beholden to the people who have been saying for a long period of time that we don’t have a voice,” he said. As I listened to Senator Holder-Winfield talk about his neighborhood constituents telling him they wanted him to focus on unfairness in the juvenile justice system, I realized I was hearing about something rare: a functioning representative democracy. “I don’t know what the issue is going to be, but I know where it’s going to come from,” Holder-Winfield explained. “It’s revolutionary in the way that it works.” I couldn’t help but think about the myth I’d learned as a child about the American Revolution creating that kind of bottom-up, egalitarian democracy—untrue then but within our grasp today.
Connecticut’s Solidarity Dividend was almost immediate. In the first legislative cycles after public financing, the more diverse (by measures of race, gender, and class) legislature passed a raft of popular public-interest bills, including a guarantee of paid sick days for workers, a minimum wage increase, a state Earned Income Tax Credit, in-state tuition for undocumented students, and a change to an obscure law championed by beverage distributor lobbyists that resulted in $24 million returning to the state—money that could contribute to funding the public financing law.
Despite regular efforts to curtail it, Connecticut’s Citizens’ Election Program has endured for over a decade, highly popular with both Connecticut residents and candidates, 73 percent of whom opted into the system in 2014. This kind of reform has national popular support as well; among the most potent opposition messages is that it’s taxpayer money for politicians. Senator Holder-Winfield has a response to that: “Yeah, we are using the public’s money, but it’s the public’s government, and if you want it to remain the public’s government, you might have to use the public’s money. Otherwise, you’re going to have government by the few who have been paying for government.”

OUR POLITICAL SYSTEM has been rigged, from the drafting of the Constitution onward, chiefly to diminish Black political participation. This flawed system has also limited the choices and voices of poorer white Americans and thwarted working-class coalitions that could have made economic and social life richer for all. A genuine, truly representative democracy is still an aspiration in America, but the vision of it has propelled waves of communities to claim a right from which they were excluded in our founding slavocracy: class-blind suffrage in 1855; Black male enfranchisement in 1870; women’s suffrage in 1920; the full enfranchisement of Native Americans in 1962; the Voting Rights Act of 1965; and the inclusion of young adults in 1971. Professor Nancy MacLean, who has studied how powerful the opposition to democracy is, continues to be optimistic. “I do think that something is happening now,” she told me, “where not only is the audaciousness of the push to change the country from the right accelerating at a rapid pace that’s waking a lot of people up, but also I think good people of all backgrounds and commitments are starting to…get into action to try to defend democracy….And I’m heartened by the way that I see people, including so many white people, also recognizing that…we are all bound to one another. When one of us is hurting, that’s going to come along and hurt everyone.”

Chapter 7

LIVING APART
A white boy with freckles telling me I was one of the good ones. Girls jumping double-dutch during recess, and the jelly sandals of Ayesha, who did her best to teach me how. In the cafeteria, knowing I should want the latest Lunchables but secretly preferring my hot lunch, served by Gracie, who reminded me of my aunt. The lonely walk up to the top floor to sit in reading class with the big kids in eighth grade, when I was just in third. The scratch of the phonograph before “Lift Ev’ry Voice and Sing,” which we sang after the Pledge of Allegiance instead of “The Star-Spangled Banner.” The white and Chinese boys rapping “Never date a girl with a weave, why? Because a girl with a weave has got a trick up her sleeve!” at me while they flicked my braids in homeroom. The lines I had to remember, and the things I had to forget, to keep auditioning for plays with all-white characters. What it felt like to sit in class debating the Supreme Court cases that had treated my ancestors’ humanity as a subject for debate.
These are the memories that flashed when I forced myself to think of all the places where I’ve been on the spectrum of integration, from an entirely Black school to a virtually all-white one; from a suburb famous for its integration to a 98 percent Black neighborhood; and at a law school where there were four times as many Asian American students as Black ones. At the age of eleven, I ended up in an all-white rural town for boarding school, a sacrifice my parents made to give a restless child access to elite educational circles, despite the culture shock they knew would await her. I was lonely, and it was hard, but eventually, I found loving teachers and some other misfits—a Puerto Rican girl from Queens, a pair of sisters from Hong Kong, a gangly white girl who shared my love of female-driven fantasy books like The Mists of Avalon—who helped me find my footing.
As I moved through school, then college, law school, and work, I was never again in as overwhelmingly white an institution as that early country school, but (like many Black people) I was often navigating largely white worlds. I learned to expect to be the “only” Black person in white rooms, the one who would force a new racial awareness. I know that these experiences of racial proximity and distance profoundly influenced me, for good and for bad—that I had to subtly redefine myself with each move and, more important, often wildly reevaluate what I thought of others. This is one of those truths that we Americans know without a doubt and yet like to deny: Who your neighbors, your co-workers, and your classmates are is one of the most powerful determinants of your path in life. And most white Americans spend their lives on a path set out for them by a centuries-old lie: that in the zero-sum racial competition, white spaces are the best spaces.
White people are the most segregated people in America.
That’s a different way to think about what has perennially been an issue cast with the opposite die: people of color are those who are segregated, because the white majority separates out the Black minority, excludes the Chinese, forces Indigenous Americans onto reservations, expels the Latinos. Segregation is a problem for those on the outside because what is good is reserved for those within. While that has historically been materially true, as government subsidies nurtured wealth inside white spaces and suppressed and stripped wealth outside, I wanted to investigate the damage done to all of us, including white people, by the persistence of segregation. The typical white person lives in a neighborhood that is at least 75 percent white. In today’s increasingly multiracial society, where white people value diversity but rarely live it, there are costs—financial, developmental, even physical—to continuing to segregate as we do. Marisa Novara, a Chicago housing official, put it this way: “I think as a field, we use the word segregation incorrectly. I think we tend to use it as if it’s a synonym for places that are low-income, where Black and brown people live. And we ignore all of the places that are majority white, that are exclusive enclaves, as if those are not segregated as well.”

FEW PEOPLE TODAY understand the extent to which governments at every level forced Americans to live apart throughout our history. Our governments not only imposed color restrictions on where people could live and work, but also where we could shop and buy gas, watch movies, drink water, enter buildings, and walk on the sidewalk. The obsession with which America drew the color line was all-consuming and absurd. And contrary to our collective memory, segregation didn’t originate in the South; nor was it confined to the Jim Crow states. Segregation was first developed in the northern states before the Civil War. Boston had a “Nigger Hill” and “New Guinea.” Moving west: territories like Illinois and Oregon limited or barred free Black people entirely in the first half of the 1800s. In the South, white dependence on Black labor, and white need for physical control and access to Black bodies, required proximity, the opposite of segregation. The economic imperative set the terms of the racial understanding; in the South, Blacks were seen as inferior and servile but needed to be close. In the North, Black people were job competition, therefore seen as dangerous, stricken with a poverty that could be infectious.
The Reconstruction reforms after the Civil War should have ended segregation. Congress passed a broad Civil Rights Act in 1875, banning discrimination in public accommodations. During Reconstruction, many southern cities had “salt-and-pepper” integration, in which Black and white people lived in the same neighborhoods and even dined in the same restaurants. Multiracial working-class political alliances formed in North Carolina, Alabama, and Virginia. As it did after Bacon’s Rebellion, though, the wealthy white power structure reacted to the threat of class solidarity by creating new rules to promote white supremacy. This time, they reasoned that everyday physical separation would be the most powerful way to ensure the allegiance of the white masses to race over class.
In 1883, the U.S. Supreme Court struck down the 1875 Civil Rights Act, and the Black Codes of Jim Crow took hold, with mirrors in the North. In the words of the preeminent southern historian C. Vann Woodward, “Jim Crow laws put the authority of the state or city in the voice of the street-car conductor, the railway brakeman, the bus driver, the theater usher, and also into the voice of the hoodlum of the public parks and playgrounds. They gave free rein and the majesty of the law to mass aggressions that might otherwise have been curbed, blunted or deflected.” Any white person was now deputized to enforce the exclusion of Black people from white space, a terrible power that led to decades of sadistic violence against Black men, women, and children.
For the next eighty years, segregation dispossessed Native Americans, Latinos, Asian Americans, and Black Americans of land and often life. No governments in modern history save Apartheid South Africa and Nazi Germany have segregated as well as the United States has, with precision and under the color of law. (And even then, both the Third Reich and the Afrikaner government looked to America’s laws to create their systems.) U.S. government financing required home developers and landlords to put racially restrictive covenants (agreements to sell only to white people) in their housing contracts. And as we’ve already seen, the federal government supported housing segregation through redlining and other banking practices, the result of which was that the two investments that created the housing market that has been a cornerstone of building wealth in American families, the thirty-year mortgage and the federal government’s willingness to guarantee banks’ issuance of those loans, were made on a whites-only basis and under conditions of segregation.
Even after the Supreme Court ruled in 1948 that governments could no longer enforce racial covenants in housing, the government continued to discriminate under the pretext of credit risk. Planners for the Interstate Highway System designated Black and brown areas as undesirable and either destroyed them to make way for highways or located highways in ways that separated the neighborhoods from job-rich areas. The effects of these policy decisions are no more behind us than the houses we live in.
Recent Federal Reserve Bank of Chicago research has found, with a granular level of detail down to the city block, that the refusal to lend to Black families under the original 1930s redlining maps is responsible for as much as half of the current disparities between Black and white homeownership and for the gaps between the housing values of Black and white homes in those communities. Richard Rothstein, author of the seminal book on segregation, The Color of Law: A Forgotten History of How Our Government Segregated America, reminds us that there is no such thing as “de facto” segregation that is different from de jure (or legal) segregation. All segregation is the result of public policy, past and present.
Instead of whites-only clauses in rental advertisements and color-coded maps, today’s segregation is driven by less obviously racially targeted policies. I’ve often wondered how our suburbs became so homogenous, with such similar house sizes and types. It turns out that, like so much of how we live, it was no accident: after the Supreme Court invalidated city ordinances banning Black people from buying property in white neighborhoods in 1917, over a thousand communities rushed to adopt “exclusionary zoning” laws to restrict the types of housing that most Black people could afford to buy, especially without access to subsidized mortgages (such as units in apartment buildings or two-family homes).
These rules remain today, an invisible layer of exclusion laid across 75 percent of the residential map in most American cities, effectively banning working-class and many middle-income people from renting or buying there. Exclusionary zoning rules limit the number of units constructed per acre; they can outright ban apartment buildings; they can even deem that a single-family house has to be big enough to preserve a neighborhood’s “aesthetic uniformity.” The effect is that they keep land supply short, house prices high, and multifamily apartment buildings out. In 1977, the Supreme Court failed to recognize that these rules were racial bans recast in class terms, and the impact on integration—not to mention housing affordability for millions of struggling white families—has been devastating. Today, the crisis surrounding housing affordability in the United States is reaching a fever pitch: the majority of people in the one hundred largest U.S. cities are now renters, and the majority of those renters spend more than half their income on rent. Homeownership rates are falling for many Americans as costs continue to increase, construction productivity continues to decline, and incomes don’t keep pace. Nationwide, the typical home costs more than 4.2 times the typical household income; in 1970, the same ratio was 1.7.
One solution many cities are investigating or implementing is an increase in the housing supply by limiting or eradicating single-family zoning. While the net effect of increasing housing supply doesn’t always lead automatically to greater affordability without additional policy changes, the lasting legacy of the racism designed into American property markets did increase costs for all Americans.

I WAS BORN on the South Side of Chicago, in a neighborhood that is still a working middle-class community, full of teachers and other public servants who found doors open in government that were closed in the private sector.
There were also lots of owners of small businesses with an ethos that they’d rather make their own way than be “last hired, first fired,” as the saying goes, in a white person’s shop. The apartment where I was born is located in a four-story brick building that my great-grandmother Flossie McGhee bought on a “land sale contract,” one of the notorious high-interest contracts whites sold to Black homebuyers lacking access to mortgages due to redlining and bank discrimination. (In the 1960s, 85 percent of Black homeowners bought on contract.) As I discuss earlier, when you bought on contract, you built no equity until the end and could be evicted and lose everything if you missed a single payment. Against all odds, Grandma Flossie kept the payments coming with money she made by combining jobs as a nanny to white families with a lucky streak with the numbers. In our neighborhood of Chatham/Avalon, as far as I can recall and Census data can confirm, there were no white people within a fifteen-block radius of us.
Chicago is one of the most segregated cities in America, by design. Before the 1948 racial covenant Supreme Court decision, 80 percent of the city of Chicago carried racial covenants banning Black people from living in most neighborhoods, a percentage that was similar in other large cities around the country, including Los Angeles.
A few times a week after school, I would visit my paternal grandparents, Earl and Marcia McGhee, a Chicago police officer and Chicago public schools social worker. They lived on the other side of the “L,” in a neighborhood known as “Pill Hill” because of all the single-family houses belonging to doctors from the neighboring hospital. Over there, it was the picture of success in brick and concrete: houses with manicured lawns, single-car garages, and monogrammed awnings over the doorsteps.
But it was almost all-Black, too; a few Jewish families hung on into the 1970s, but there were none on my grandparents’ block when I was a kid. It was our own American Dream, hard-won and, for many who remember its glory days, almost utopian.
I asked my grandma Marcia about what the segregated South Side was like in those days. “We had a common history, all of us: parents who came up from terror and sharecropping to…” She laughed. “To deeds and degrees. In just one generation. And nobody gave us a thing. They were always trying to take, in fact. So, you’d walk down the street and see the new car in the driveway, the kids in the yard, and everybody was happy for each other’s success, and you knew everybody’d be there for each other when you were down.” I was in kneesocks when Earl and Marcia McGhee were hosting regular card games and Democratic Party meetings in their finished basement on Bennett Avenue in Pill Hill, but I remember it as she does. There was a feeling that although the energy of the civil rights movement had dissipated, it hadn’t completely moved on, but had settled in the fibers that connected us. Folks like my parents and grandparents had their day jobs, but they all knew that no matter what you were doing, you were also doing it at least in part for the betterment of the community.
I never knew why the South Side where I grew up was so Black, or that it hadn’t always been. In the 1950s, Chatham’s population was over 90 percent white. Ten years later, it was more than 60 percent Black. By the time I was born there, in 1980, the population had been over 90 percent African American for a decade. But when I left home in middle school for an almost entirely all-white boarding school in rural Massachusetts, I learned two things about where I came from. The first was that the thickness of my Black community—close-knit, represented in civic institutions, and economically dynamic—was rare. In Boston, Black meant poor in a way I simply had never realized. The everyday sight of Black doctors and managers (particularly native-born) was a rarity in that old-money city where Black political power had never gained a hold and where negative stereotypes of Blackness filled in the space. Second, I learned that although we knew about white people even if we didn’t live with them—they were co-workers, school administrators, and of course, every image onscreen—segregation meant that white people didn’t know much about us at all.
For all the ways that segregation is aimed at limiting the choices of people of color, it’s white people who are ultimately isolated. In a survey taken during the uprisings in Ferguson, Missouri, after the police killing of Michael Brown, an unarmed Black teenager, the majority of white Americans said they regularly came in contact with only “a few” African Americans, and a 2019 poll reported that 21 percent “seldom or never” interacted with any people of color at all. In 2016, three-quarters of white people reported that their social network was entirely white.
This white isolation continues amid rising racial and ethnic diversity in America, though few white people say they want it to—in fact, quite the opposite. Diversity has become a commonly accepted “good” despite its elusiveness; people seem to know that the more you interact with people who are different from you, the more commonalities you see and the less they seem like “the other.” Research repeatedly bears this out. Take, for example, a meta-analysis that examined 515 studies conducted in 38 countries from the 1940s through 2000, which encompassed responses from 250,000 people. The social psychologist Linda Tropp explained the findings of this research in 2014, when she testified before the New York City Council in a hearing about the city’s school system, the most segregated in the United States. “Approximately ninety-four percent of the cases in our analysis show a relationship such that greater contact is associated with lower prejudice.” What’s more, she said, “contact reduces our anxiety in relation to other groups and enhances our ability to empathize with other groups.” This is the strange paradox with white attitudes toward integration: in the course of two generations and one lifetime, white public opinion went from supporting segregation to recognizing integration as a positive good.
Ask most white people in the housing market, and they will say they want to live in racially integrated communities. But they don’t. Professor Maria Krysan and her colleagues from the University of Illinois looked into how people of all races think about diversity in their neighborhood housing choices. In their study, white people could even specify how much diversity they wanted: a neighborhood with about 47 percent white people. Black and Latinx people who participated in Krysan’s study also knew what level of diversity they sought: areas that are 37 percent Black and 32 percent Latinx, respectively. I find it fascinating that all three groups say that, ideally, they want to live in communities in which they do not constitute a majority. Yet researchers found that while Black and Latinx people actually search for housing in neighborhoods that match their desired levels of diversity, white people search in neighborhoods that are 68 percent white, and they end up living in areas that are 74 percent white. They say they want to be outnumbered by people of color; instead, they end up choosing places where they outnumber others three to one. Somewhere in between their stated desires and their actions is where the story of white racial hierarchy slips in—sometimes couched in the neutral-sounding terms of “good schools” or “appealing neighborhoods” or other codes for a racialized preference for homogeneity—and turns them back from their vision of an integrated life, with all its attendant benefits. It’s a story that the law wrote in the mind and on the land through generations of mandated segregation.
In another study, Professor Krysan and her colleagues showed white and Black people videos of identical neighborhoods, with actors posing as Black and white residents, and asked them to rate the neighborhoods. They found that “both the racially mixed and the all-black neighborhood were rated by whites as significantly less desirable than the all-white neighborhood. The presence of African Americans in a neighborhood resulted in a downgrading of its desirability.” The white people’s judgment wasn’t about class—they didn’t use the actors’ race as a proxy for how nice the houses were or how well the streets were maintained or what people on the block were doing outside—because all those cues remained the same in the videos. It was simply the presence of Black people that made them turn from the neighborhood. Krysan’s experiment was a video simulation, but the real-world patterns of persistent white segregation bear it out. White people are surely losing something when they end up choosing a path closer to their grandparents’ racially restricted lives than the lives they profess to want for their children.
PUBLIC POLICY CREATED this problem, and public policy should solve it. Because of our deliberately constructed racial wealth gap, most Black and brown families can’t afford to rent or buy in the places where white families are, and when white families bring their wealth into Black and brown neighborhoods, it more often leads to gentrification and displacement than enduring integration. The solution is more housing in more places that people can afford on the average incomes of workers of color. What gets in the way is objections about the costs—to real estate developers, to public budgets, and to existing property owners.
But what about the costs we’re already paying? Frustrated by the usual hand-wringing over the costs of reform in Chicago, Marisa Novara and her colleagues at Chicago’s Metropolitan Planning Council and the Urban Institute decided to flip the ledger. They asked instead, what is the cost of segregation to Chicago? They analyzed quality-of-life indicators that were correlated with segregation in the one hundred biggest cities and compared them to Chicago’s, which allowed them to see how their city would benefit from not even eliminating segregation—but just from bringing it down to the not-very-good American average.
The findings are stark. Higher Black-white segregation is correlated with billions in “lost income, lost lives, and lost potential” in Chicago. The city’s segregation costs workers $4.4 billion in income, and the area’s gross domestic product $8 billion. As compared to a more integrated city, eighty-three thousand fewer Chicagoans are completing bachelor’s degrees—the majority of whom (78 percent) are white. That means a loss of approximately $90 billion in total lifetime earnings in the city. Reducing segregation to the national median would have an impact on Chicago’s notoriously high homicide rate—by an estimated 30 percent—increasing safety for everyone while lowering public costs for police, courts, and corrections facilities; raising real estate values; and preserving the income, tax revenue, and priceless human lives of the more than two hundred people each year who would be saved from a violent death. By reducing the segregation between white and Latino residents, the researchers found, Chicago could increase life expectancy for both.

Our local economies and public health statistics aren’t the only realms in which to measure the costs of segregation; the costs are environmental as well. The environmental justice movement has long established that industry and government decision makers are more likely to direct pollutants, ranging from toxic waste dumps to heavy truck traffic, into neighborhoods where people of color, especially Black people, live. This injustice has typically been understood as a life-and-death benefit of white privilege: white people can sidestep the poisoned runoff of our industrial economy. But less well known is the fact that segregation brings more pollution for white people, too. It turns out that integrated communities are less polluted than segregated ones.
It’s a classic racial divide-and-conquer, collective action problem: the separateness of the population leaves communities less able to band together to demand less pollution in the first place, for everyone. An environmental health scientist from the University of California, Berkeley, Rachel Morello-Frosch, conducted a major study examining pollutants that are known carcinogens and found that more segregated cities had more of them in the air. As she explained it to me, “In those segregated cities, white folks are much worse off than their white counterparts who live in less segregated cities, in terms of pollution burden.” I marveled at the force of the finding: segregated cities have higher cancer-causing pollutants—for white people, too—than more integrated ones. Professor Morello-Frosch was quick to add: “And it’s not explained by poverty….That effect remains even after you’ve taken into account the relative concentrations of poverty.”

“THE WAY WE talk about ‘good schools’ and ‘good neighborhoods’ makes it clear that the absence of people of color is, in large part, what defines our schools and neighborhoods as good,” writes Robin DiAngelo. This old belief—which few white families would consciously endorse today—remains undeniably persistent as long as all public schools aren’t “good schools.” So, why aren’t they? This thorny but solvable problem is rooted in our country’s centuries-old decisions to segregate communities as well as decisions we continue to make today about how we draw our school districts and how we fund public schools.
Although the federal government kicks in a small portion, schools are financed primarily by local and state taxes, so the wealth of the community you live in will determine how well resourced your local schools are. White communities tend to draw their district boundaries narrowly, in order to make ultra-local and racially and socioeconomically homogenous districts, enabling them to hoard the wealth that comes from local property taxes.
Meanwhile, areas with lower property values serve greater numbers of children of color with fewer resources. Nationwide, overwhelmingly white public school districts have $23 billion more in funding than overwhelmingly of-color districts, resulting in an average of $2,226 more funding per student. If we recall how much of white wealth is owed to racist housing subsidies, the decision to keep allowing local property taxes to determine the fate of our children becomes even less defensible.
Of course, even these all-white, high-income public school districts are rare, as most white parents know. Increasingly, public education has been hollowed out by the way that racism drains the pool in America: public goods are seen as worthy of investment only so long as the public is seen as good. Today, the majority of public school students in the United States are children of color. Why? Because a disproportionate number of white students are enrolled in private schools, comprising 69 percent of K–12 private school enrollment. The boom in private schools, particularly in the South and West, occurred as a reaction to school integration in the 1950s and ’60s. Unsurprisingly given so many private schools’ advent as “segregation academies,” today, almost half of private school kids attend schools that are essentially all white.
The pricing up and privatization of public goods has a cost for us all—most white families included. A house in a neighborhood unencumbered by the systemic racism found in public schools serving children of color will cost significantly more. In the suburbs of Cincinnati, a house near a highly rated school cost 58 percent more per square foot than a nearby house with the same one-story design and high ceilings, just in a different school district. The national picture is consistent, according to the real estate data firm ATTOM Data, which looked at 4,435 zip codes and found that homes in zip codes that had at least one elementary school with higher-than-average test scores were 77 percent more expensive than houses in areas without. Paying a 77 percent premium may be fine for white families with plenty of disposable income and job flexibility, but it’s a tax levied by racism that not everyone can afford. That’s why so many families feel like they’re in an arms race, fleeing what racism has wrought on public education, with the average person being priced out of the competition.
ATTOM Data calculated that someone with average wages could not afford to live in 65 percent of the zip codes with highly rated elementary schools. (CNN covered that study with a blunt headline that would be surprising to few people: YOU PROBABLY CAN’T AFFORD TO LIVE NEAR GOOD SCHOOLS.) Families who can afford a house near a “good” school, in turn, get set up for a windfall of unearned cash: a 2016 report found that homeowners in zip codes with “good” schools “have gained $51,000 more in home value since purchase than homeowners in zips without ‘good’ schools.” In order to chase these so-called good schools, white families must be able and willing to stretch their budgets to live in increasingly expensive, and segregated, communities. This is a tangible cost both of systemic racism and of often unconscious interpersonal racism: fear itself. These white parents are paying for their fear because they’re assuming that white-dominant schools are worth the cost to their white children; essentially, that segregated schools are best.

BUT WHAT IF the entire logic is wrong? What if they’re not only paying too high a cost for segregation, but they’re also mistaken about the benefit? Here’s where things get interesting. Compared to students at predominantly white schools, white students who attend diverse K–12 schools achieve better learning outcomes and even higher test scores, particularly in areas such as math and science. Why? Of course, white students at racially diverse schools develop more cultural competency—the ability to collaborate and feel at ease with people from different racial, ethnic, and economic backgrounds—than students who attend segregated schools. But their minds are also improved when it comes to critical thinking and problem solving. Exposure to multiple viewpoints leads to more flexible and creative thinking and greater ability to solve problems.
The dividends to diversity in education pay out over a lifetime. Cultural competency is a necessity in today’s multicultural professional world, and U.S. corporations spend about eight billion dollars a year on diversity training to boost it among their workforce. In the long run, research reveals that racially diverse K–12 schools can produce better citizens—white students who feel a greater sense of civic engagement, who are more likely to consider friends and colleagues from different races as part of “us” rather than “them,” who will be more at ease in the multicolor future of America in which white people will no longer be the majority. The benefits of diversity are not zero-sum gains for white people at the expense of their classmates of color, either. Amherst College psychology professor Dr. Deborah Son Holoien cites several studies of college students—the largest of which included more than seventy-seven thousand undergraduates—in which racially and ethnically diverse educational experiences resulted in improvements in critical thinking and learning outcomes, and in the acquisition of intellectual, scientific, and professional skills. The results were similar for Black, white, Asian American, and Latinx students.
All this untapped potential. All these perverse incentives pulling us apart, two generations after segregation’s supposed end. I felt compelled to look again at the 1954 Supreme Court decision that should have changed everything, Brown v. Board of Education of Topeka. Brown struck down state and local laws that racially segregated public schools and rejected the premise of “separate but equal,” which had been the law of the land since the Court’s 1896 decision in Plessy v. Ferguson. The NAACP (and, later, the NAACP Legal Defense and Educational Fund) had been litigating against segregation since the 1930s, focusing less on why segregation was wrong and more on the government’s failure to guarantee “equal” or even sufficient facilities, resources, and salaries at Black colleges and public schools. It was a strategy to ratchet up the public cost of state segregation.
But in the end, what led to a historic unanimous decision from the Supreme Court was disapproval not of inequality but of separateness itself.
In Brown, the civil rights lawyers employed the expertise of social scientists to argue that it was segregation and the message it sent, which reinforced the notion of human hierarchy, that hurt children more than mere out-of-date books and unheated classrooms ever could. Thirty-two experts submitted an appendix to the appellants’ briefs detailing the damage of segregation to the development of “minority” children. The facts in this appendix were the indelible details—most memorably the Black children learning to prefer white dolls—that formed the moral basis for the Court’s decision in Brown. And Brown gave rise to a progeny of cases over the following decades, cases protecting brown and Black children from the sting of inferiority, an inferiority signaled by being excluded from white schools.
But there was another path from Brown, one not taken, with profound consequences for our understanding of segregation’s harms. The nine white male justices ignored a part of the social scientists’ appendix that also described in prescient detail the harm segregation inflicts on “majority” children. White children “who learn the prejudices of our society,” wrote the social scientists, were “being taught to gain personal status in an unrealistic and non-adaptive way.” They were “not required to evaluate themselves in terms of the more basic standards of actual personal ability and achievement.” What’s more, they “often develop patterns of guilt feelings, rationalizations and other mechanisms which they must use in an attempt to protect themselves from recognizing the essential injustice of their unrealistic fears and hatreds of minority groups.” The best research of the day concluded that “confusion, conflict, moral cynicism, and disrespect for authority may arise in [white] children as a consequence of being taught the moral, religious and democratic principles of justice and fair play by the same persons and institutions who seem to be acting in a prejudiced and discriminatory manner.” As Sherrilyn Ifill, president and director-counsel of the NAACP Legal Defense Fund, reminded us on the sixty-second anniversary of the decision, this profound insight—that segregation sends distorting messages not just to Black and brown but also to white children—was lost in the triumphalism of Brown. She wrote, “I believe that we must have a public reckoning with the history of the full record presented to the Court in Brown, which predicted with devastating clarity the mind-warping harm of segregation on white children.” The now-lost rationale for why segregation must fall—the rationale that included the costs to us all—might have actually uprooted segregation in America. 
After all, arguing that Black and brown children suffered from not being with white children affirmed the reality of unequal conditions, but once the argument was divorced from the context of legal segregation, it also subtly reaffirmed the logic of white supremacy. Today, it’s that logic that endures—that white segregated schools are better and that everyone, even white children, should endeavor to be in them.
It’s a bit of a platitude that children don’t see race, that they must learn to hate. It’s in fact the subject of one of the most popular tweets of all time, from Barack Obama, who captioned a photo of himself looking into a window at the faces of four children: two white, one Asian, and one Black: “No one is born hating another person because of the color of his skin or his background or his religion.” But I think about my own childhood, which was filled with judgments, conflicts, and alliances around race, memories from as early as nursery school. The truth is, children do learn to categorize, and rank, people by race while they are still toddlers. By age three or four, white children and children of color have absorbed the message that white is better, and both are likely to select white playmates if given a choice.
While still in elementary school, white children begin to learn the unspoken rules of our segregated society, and they will no longer say aloud to a researcher who asks them to distribute new toys that “these kids should get them because they’re white.” Instead, they’ll come up with an explanation: “These [white] kids should get the new toys because they work harder.” For all my efforts to enumerate the costs of segregation, the loss is incalculable. “The most profound message of racial segregation for whites may be that there is no real loss in the absence of people of color from our lives,” wrote Robin DiAngelo. “Not one person who loved me, guided me, or taught me ever conveyed that there was loss to me in segregation; that I would lose anything by not having people of color in my life.”

MY JOURNEY INTRODUCED me to families who are discovering the Solidarity Dividend in integration. Because the dominant narrative about school quality is color blind—the conversation is about numerical test scores and teacher-student ratios, not race or culture, of course—it’s easy to walk right into a trap set for us by racism. It’s an easy walk for millions of white parents who don’t consider themselves racist. It was even an easy walk for Ali Takata, a mother of two who doesn’t even consider herself white.
“Full disclosure I’m fifty percent white,” she wrote in an introductory email. “I’m Hapa—Japanese and Italian. My husband is Sri Lankan, born in Singapore and raised in Singapore and England. Even though we are a mixed Asian family,” she freely acknowledged, “I’ve approached public school as a privileged [half-white] person. Depending on the situation, I am white-passing, although it’s always hard to know how people perceive me.” Ali and her family moved from the San Francisco Bay Area to Austin, Texas, so her husband could begin a new job at the University of Texas. Ali researched the area, using school-rating resources such as GreatSchools.org to find what she then considered “a good neighborhood and a good school” for their two daughters, who were in preschool and first grade, respectively.
“Austin is divided east and west,” Ali said. “And the farther west you go, the wealthier and the whiter the city becomes. The farther east you go, the more impoverished and browner and blacker the people are.” This was no accident, she explained. “The 1928 Austin city plan segregated the city, forcing the Black residents east. Then Interstate Thirty-five was built as a barrier to subjugate the Black and brown residents even further. So… historically I-35 was the divide between east and west.” Ali’s family could have paid less for a home in East Austin, where the school ratings were lower, but instead, they found a house in what she described as “a white, wealthy neighborhood” on the west side of Austin.
Ali herself had grown up in a similar community, in a suburb of Hartford, Connecticut. As someone who is part white and part Asian, she had never felt totally comfortable there. But everything was nudging her to choose a similar world for her kids: the social conditioning, the data, even the signals that our market-based society sends about higher-priced things simply being better.
So, the Austin neighborhood Ali chose in order to find a “good” school ended up being very much like suburban Connecticut in the 1980s. “I recall specifically feeling like something was wrong with my eyes,” said Ali.
“Where were the Asian people? And where were the Black people? They were virtually invisible here….And it was just because I live on the west side.” She sent her kids to the local public school, whose student body reflected the neighborhood. “I will say that the first year was great,” Ali said. “I found the people very welcoming….It took me about a year to find a niche at that school, among the white wealthy people. But I did, you know. And I called them friends.” And yet, certain aspects of the school’s culture began to disturb her.
Parents were deeply involved in the school—not only fundraising and volunteering, but intruding into the school day in ways that seemed to Ali like “helicopter parenting.” Parents tried to “micromanage the teachers and curriculum,” Ali saw, to “insert themselves into the inner workings of the school, and to assume that ‘I know just as much or more than the teacher or administrator.’ ” It slowly dawned on her that many of the behaviors of both students and parents that she found off-putting were expressions of white privilege. “I feel like there’s a way in which we upper-middle-class parents…want [our kids] to be unencumbered in their lives,” including, she feels, by rules. “It’s this entitlement. And it’s this feeling of…is there a rule? I don’t need to respect this rule. It doesn’t pertain to me.” By her children’s third year in the school, Ali realized, “ ‘I just can’t do this. This is not me.’ It just—I felt kind of disgusted by the culture.” It was everywhere, and yet she didn’t have a name for it until she became involved in an affinity group for parents choosing integrated schools. “The competitiveness, complete with humble bragging. The insularity and superficiality, the focus on ‘me and my kid only,’ ” Ali said. “By staying at [that] school, I was supporting a white supremacy institution. That felt so wrong.” Yet virtually her entire social circle in Austin was composed of parents who were active in the school and immersed in its values.
She began to research alternatives, visiting eight public schools on Austin’s East Side, where her daughters would not be “surrounded by all that privilege,” Ali said. “I was going to make this decision to desegregate my kids. You know, if the city wasn’t going to do it, there’s no policy around it, then I was going to do it. It was a very lonely process. I didn’t talk to anybody about it except for my husband.” The next fall, Ali and her husband transferred their daughters, then in second and fourth grade, to a school that was 50 percent African American, 30 percent Latinx, 11 percent white, 3 percent Asian, and 5 percent students of two or more races. Eighty-seven percent of the students were economically disadvantaged.
Ali’s daughters are mixed Asian, with features and skin tones that make it unlikely they will be perceived by others as white, the way Ali sometimes is. At their old school or the new one, she said, “My girls will always go to school with kids who look different from them.” Still, “I did not want to raise my girls in such a homogenous, unrealistic community….I wanted them to experience difference.” The new school, she said, is predominantly “Black and brown, and that is what…permeates the school. There’s music playing right when you walk in. Fun music, hip-hop music. And there’s a step team.” Ali values that her children will not grow up ignorant of the culture of their peers on the other side of town, but the advantages of the new school go much deeper than music and dance. “It’s also more community-focused, which is antithetical to the white, privileged culture” of making sure my child gets the best of everything.
As for the parents in the West Austin neighborhood, “there has been a deafening silence around my decision.” When she runs into some of her former friends, they may talk about how their children are doing, but they don’t ask her anything at all about hers. “The white community I left felt stifling and oppressive,” Ali said. “That part surprised me. My profound relief surprised me. I had no idea that living my values would feel so liberating.” Transferring to a new school in which they are surrounded by kids with different experiences and frames of reference has had its bumps, but it “has been an eye-opening experience for [my girls], I think,” Ali said. “And it has brought up really healthy [family] discussions…about wealth and class and how it feels for them…to be called [out] for being the rich kids….I think it’s been an amazing experience.”

Integrated Schools is a nationwide grassroots effort to empower, educate, and organize parents who are white and/or privileged like Ali, parents who want to shift their priorities about their children’s education away from centering metrics like test scores or assumptions about behavior and discipline and toward contributing to an antiracist public educational system. The movement acknowledges that “white parents have been the key barrier to the advancement of school integration and education equity.” Through resources including reading lists and guides for awkward conversations, along with traditional community organizing and coalition-building tactics, the movement encourages parents not to view “diversity primarily as a commodity for the benefit of our own children” and not to view schools that serve primarily students of color as “broken and in need of white parents to fix them.” Rather, the goal of leveraging parents’ choices about schools should be to disrupt segregation because of the ways it distorts our democracy and corrodes the prospects of all our children.
The group offers tools and tips to enable parents to live their values and to raise antiracist children who can help build an antiracist future.
As for Ali Takata, she lost a circle of friends but gained something far more valuable. “Through my experience at the new school, I’ve been able to see how steeped in white upper-middle-class culture I had been,” she said. And now, “Oh my goodness, I cannot believe the peace I feel with my decision and my life.”

SENDING HER TWO children to the local public schools twenty-something years ago wasn’t so much a decision for Tracy Wright-Mauer, a white woman who moved to Poughkeepsie, New York, when her husband got a job at IBM. It was more of a decision not to act, not to pull her children away from the urban neighborhood she fell in love with, with its beautiful old homes. “My husband and I, we didn’t consciously say, ‘Okay. We’re going to…be, you know, be the integrators,’ or anything. We just didn’t think not to buy a house in the district, and we didn’t think, ‘Oh, well, I’ll send my kids to private school [because] the school doesn’t look like my kids.’ ” The most thought she ever gave to it was when other white parents would ask her questions such as “Well, when you get to middle school, are you going to send them to private school?” or “What about high school? You’re going to send them to Lourdes, right?” referring to the nearly 90 percent white Catholic school.
Many of these white parents had purchased their houses in Spackenkill, a wealthy part of Poughkeepsie that fought for school district independence in the 1960s and ’70s. Spackenkill successfully sued to keep its district separate from the larger city, walling off its richer tax base (including the revenues from the IBM headquarters). One can find similar stories all across the country, with predominantly white school districts drawing narrower boundaries to serve far fewer children (typically just fifteen hundred) than majority-of-color, low-income districts that serve an average of over ten thousand. It’s a hoarding of resources by white families who wouldn’t have such a wealth advantage if it weren’t for generations of explicit racial exclusion and predation in the housing market.
A few years ago, Tracy was cleaning her house and came across her daughter’s second-grade class photo: fifteen smiling prepubescent boys and girls in their Photo Day best. She snapped a picture and posted it on Facebook, and one of her Black friends pointed out to her that “other than the teachers, [Fiona] was the only white kid in the class.” Her daughter, Fiona, is now in college; her son, Aidan, is wrapping up high school.
They’re both the products of what parents in the Integrated Schools Facebook group Tracy now belongs to call “Global Majority” public schools, and “both have learned to discuss race,” she offered. “They talk about it all the time. They discuss class. They discuss racism and equity, and they just are really, really engaged with their friends about these subjects. And, you know, I think it’s pretty awesome.” I had to ask Tracy the million-dollar question: Were they good schools?
What about the standardized test scores, the yardstick by which all quality is measured? Tracy didn’t pause: “Maybe I’m an anomaly. I think other parents look to the test scores…to judge a school. Just because the test scores are not, you know, the highest in the state, or in the top ten, it doesn’t mean to me that the kids aren’t getting really great teachers and being challenged and doing interesting things in their classes.” Her son, Aidan, who was graduating the year we spoke, is the only white guy in his friend group, and all his friends were going on to college. “His friends are smart kids who work hard, and they do well on their SATs, and they’re very motivated.” I was able to reach Fiona, a freshman on a rowing scholarship at Drexel University in Philadelphia, in her dorm room. I asked her what it had been like going to a high school where just 10 percent of the student body was white. Fiona recalled it making for some uncomfortable conversations with white kids in other school districts. They’d go something like this: “I’d say, ‘Oh, I’m from Poughkeepsie [High],’ and they’d be like, ‘Oh, I’m so sorry.’ Which someone actually said to me.” I cringed. “It’s really just disappointing. Because I love Poughkeepsie [High], and I loved my time there, and the friends I made.” Fiona said her direction in life had been influenced by how she learned to see the world at Poughkeepsie High; she credited the experience with giving her the skills to be an advocate. “It helped me empathetically. I don’t know if I want to be a politician, or if I want to work with some environmental justice organization, but empathy has a lot to do with that: looking at both sides of the story and not trying to put a Band-Aid over something, but getting to the root of the problem. I think that’s where my skills lie.
And…a lot of that comes from where I grew up and where I went to school.” Fiona’s now at a college where more than half of the students are white, and just 8 and 6 percent are African American and Latinx, respectively. It’s a big shift. Many of her white peers are just not as comfortable around people of color. “If there’s a roomful of Black people and [we walk in and] we’re the only white people? I think they sort of say—like, ‘Oh, like, let’s leave.’ Or they say, like, if we’re out at night, ‘Oh, this is, like, a little sketchy.’ Things like that, I notice.” But she also doesn’t want to suggest that white kids who grew up in segregated schools are hopeless when it comes to race. “I think one of the downfalls of growing up in a homogeneous setting is that the process of understanding…racial inequalities and recognizing one’s own privilege can be very uncomfortable and might take longer, but it doesn’t mean they don’t get there.” In that way, Fiona feels lucky. “I got to spend my time with people who didn’t look like me, and that didn’t really matter. And I hope to strive to feel that way throughout my whole life. To not be surprised when I’m in a diverse group of people, and just be like, ‘This is normal. This is how it’s supposed to be.’ ”