Left, Right, or Fed Up: How a Drop in Critical Thinking Warps Our Elections—and What AI Might Do About It

Have you noticed—regardless of whether you lean left, right, or somewhere in between—that a surprising number of public officials no longer strike us as the sharpest minds in the room? Some seem proudly unversed in basic policy, some treat fact-checking as optional, and a few appear to view elective office as little more than a stage for showmanship. Some mornings, the headlines read less like coverage of a functioning government in DC and more like a cafeteria food-fight report from the local middle school, complete with taunting and name-calling. 

That uneasy sense—“How did that candidate get the job?”—isn’t just dinner-table grumbling. It points to a deeper shift in the nation’s reasoning skills, one that shows up in hard data and may explain why campaigns built on slogans and spectacle so often outperform those built on sober analysis. The decline permeates the whole system, thinning the talent pool from which parties draw their candidates and dulling the judgment of the voters who choose them.

In this blog post, we’ll review the evidence to confirm whether a decline is truly underway; in the next post, we’ll explore how the emerging wave of AI might worsen—or possibly correct—those trends.

Measuring critical thinking trends in the US adult population

What we really want to know is whether Americans can still think critically—that is, look past any tribal alliances and biases, question their sources, weigh evidence, spot weak arguments, and reach conclusions that stand up to scrutiny. No national survey measures that skill head-on or year after year. Instead, researchers track three stand-ins that together capture much of what critical thinking requires:

  1. Literacy

  2. Numeracy (working with numbers and simple statistics)

  3. Digital problem-solving (navigating unfamiliar software to find and judge information)

The best source for all three is the Programme for the International Assessment of Adult Competencies (PIAAC). About every six to eight years the OECD sends trained interviewers to thousands of U.S. homes, testing adults on those very skills and comparing the results with more than thirty other countries.

Below are the headline results from the OECD’s PIAAC assessments. 

The left and middle panels trace literacy and numeracy for U.S. adults (ages 16–65) across the last three survey waves—2012/14, 2017, and 2023—while the right-hand panel gives the first nationwide reading on digital problem-solving, a module introduced in the most recent cycle. The direction could not be clearer: both literacy and numeracy have slipped markedly since 2017, and the declines are large enough to be judged statistically significant at the 95-percent confidence level.

Let's examine what these scores actually mean. PIAAC groups every test-taker into three broad literacy bands.


Key point: the average U.S. literacy and numeracy scores on the earlier charts sit firmly inside Level 2—good for everyday tasks, but shaky once ideas get layered.

Drilling Deeper into the Numbers

Let’s dig deeper: the next charts display the share of U.S. adults in each skill band. (Percentages may not sum to exactly 100 percent because of rounding.)


Here is how each of these bands maps, approximately, onto grade level:


Some conclusions:
  • At the 95-percent confidence level, fewer than half of American adults reach Level 3, the band where solid critical thinking begins.

  • The Level 1 share (middle-school comprehension or less) has climbed by about one-third since 2017, meaning nearly three in ten adults now operate at or below a middle-school reading level.

Together, these figures confirm a decade-long slide that leaves most citizens—and many would-be office-holders—without the background typically expected of 9th- or 10th-graders.
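To see how the “one-third” increase and the “three in ten” figure fit together, here is a back-of-the-envelope check. The 22 percent starting share is a hypothetical round number chosen for illustration, not the official PIAAC figure—the actual shares are in the charts above.

```python
# Back-of-the-envelope check: a one-third rise from a hypothetical
# 2017 Level 1 share of 22% lands near "three in ten".
share_2017 = 0.22                      # illustrative, not the official figure
share_2023 = share_2017 * (1 + 1/3)    # a one-third increase

print(f"Implied 2023 share: {share_2023:.1%}")
```

Any starting share in the low twenties, grown by a third, ends up just under 30 percent—which is why the two ways of stating the trend agree.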

Note that the same pattern appears in every subgroup within the PIAAC data—gender, age, race and ethnicity, education level, employment status, even self-reported health—showing that the trend is not confined to any single slice of the population. (A drill-down is available here, on the PIAAC website, if you'd like to examine these trends yourself.)

Beyond PIAAC: Converging Signals of a National Skills Shortfall

Comprehensive skill surveys are scarce. Apart from PIAAC itself, most projects mine the same PIAAC test scores, enrich them with regional, health, or income data, and draw fresh conclusions. Yet whatever lens they use, they keep arriving at the same message: too many U.S. adults read, compute, and problem-solve below the level a healthy democracy requires, and the decline is speeding up.


Spotlight: The Barbara Bush Foundation’s Literacy Gap Map

This interactive study layers PIAAC data onto county-level records for health, income, and education. A few take-aways:

  • 54 % of adults (≈ 130 million people) read below a sixth-grade level.

  • Counties with the weakest literacy are also hotspots for poor health, high poverty, and low economic mobility.

  • As can be seen in the map below, a clear regional split appears: the Midwest and Northeast post higher scores, while the South and West show sharp swings from county to county. (The PIAAC scores in the map represent the percentage of the adult population scoring below basic literacy.)

Want to explore more? Zoom to your own state or metro area here. The exercise can be eye-opening—especially when you compare low-literacy counties with the districts that elected your “favorite” headline-making officials.

Here is an example for my home state of Delaware:


More Threads, Same Fabric

There are several other sources of analysis; most use the underlying PIAAC data and come to similar conclusions. Here are a few in the table below:


Different data analyses (by economic loss, regional spread, international standing) all circle back to one conclusion: the foundational skills that power critical thinking are eroding for a large share of the U.S. population.

Conclusion: When the Skills We Lose Shape the Leaders We Get

Taken together, the numbers tell an uncomfortable story: nearly three in ten American adults now read and reason at—or below—the level expected of middle-school students, while barely four in ten reach the band where real critical thinking begins. Because political parties recruit from the same talent pool that fills the voting booths, a shrinking supply of strong reasoners shows up twice—first in the electorate and then in the slate of candidates.

  • Voters who struggle to weigh evidence lean harder on slogans, party colors, and cable-news passion plays.

  • Candidates who struggle with the same skills default to sound bites, avoid detailed policy debates, and treat fact-checking as optional stagecraft.

  • The feedback loop tightens: each election elevates a few more officials whose public arguments rarely climb above Level 2, further lowering the standard for the next round of campaigns.

If we do nothing, the “brain gap” will keep widening, and the quality of public decision-making will fall with it—no matter which side of the aisle you prefer.


Up Next: AI—Amplifier or Antidote?

In the next post we’ll examine how the fast-moving world of artificial intelligence could magnify this reasoning deficit—or, with the right guard-rails, bridge it:

  • How large language models can flood low-skill information ecosystems with persuasive junk—or become free tutors that teach source-checking.

  • Why default settings (proof-of-origin badges, citation demands) matter more than any single breakthrough in model size.

  • What schools, libraries, and everyday users can do right now to tilt AI toward raising, rather than eroding, our collective thinking skills.

Stay tuned—because the same tools that generate viral rumors in seconds could also rebuild the habits of inquiry that a healthy democracy can’t live without.




