Overview:

An educator uses historical and contemporary political examples to teach students how to critically evaluate data, recognize bias, distinguish correlation from causation, and build stronger, evidence-based arguments to foster statistical literacy and thoughtful discourse.

I created a class called “Math Applications.” It is designed for students who have completed their math requirements but are not interested in taking pre-calculus or above. Before I teach any principles of descriptive statistics, I show students the famous photograph of Truman holding up a newspaper with the headline “Dewey Defeats Truman.”

I explain the basic history of the erroneous headline, published by the Chicago Tribune on November 3, 1948. I then ask the students to come up with as many possible reasons as they can for how the Tribune reached that conclusion.

The students do a good job of predicting many of the reasons. Without knowing the history, they correctly infer that the Chicago Tribune’s editors did not like Truman and that political bias influenced the paper’s position. They also correctly assume that the paper favored polling data that confirmed its own biases.

What my students didn’t know, and this is not a dig at my students, is why the polling data was wrong. Poorly designed sampling, including an overrepresentation of telephone owners at a time when many households did not have telephones, led to overconfidence and an incorrect prediction.

The students laughed at the arrogance of the newspaper for ignoring such a huge swath of the population. “Serves them right,” one of my students told me after I explained what happened. When I asked whether this could ever happen again, they assured me that, now that we all have smartphones and are technologically savvy, this kind of error could not recur.

Then I have the students discuss the polling data from the 2016 election.    

Students immediately make the connection between the arrogance shown by the media in both eras and the same kind of poorly designed sampling that failed to accurately reflect the population.

Students walk away from those discussions a bit humbled.  They see that for all of our technological sophistication, we suffer from motivated reasoning.  

With the goal of protecting ourselves from our own biases and from the biases of the media, I teach students a few important concepts:

  • Correlation, and the distinction between correlation and causation
  • The hierarchy of evidence: intuition and anecdotes, well-designed studies, cleanly devised experiments, and meta-analyses of multiple studies offer different levels of information; each can be useful, but each needs appropriate context
  • Sample size, and making sure that the sample reflects the true population
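The telephone-sampling failure described above can be made concrete with a short simulation. This is my own toy sketch, not something from the classroom materials: the electorate, the 35% telephone-ownership rate, and the preference splits are all invented numbers, chosen only to show how a biased sampling frame can flip a predicted winner even though a simple random sample recovers the truth.

```python
import random

random.seed(0)

# Hypothetical electorate. Assume telephone ownership is correlated with
# candidate preference: phone owners lean toward candidate B, non-owners
# toward candidate A. All rates here are invented for illustration.
population = []
for _ in range(100_000):
    owns_phone = random.random() < 0.35            # 35% own telephones
    if owns_phone:
        prefers_a = random.random() < 0.40         # phone owners: 40% favor A
    else:
        prefers_a = random.random() < 0.63         # non-owners: 63% favor A
    population.append((owns_phone, prefers_a))

def support_for_a(sample):
    """Fraction of a sample that favors candidate A."""
    return sum(prefers for _, prefers in sample) / len(sample)

# True population support for A (about 55% by construction).
print(f"true support for A: {support_for_a(population):.1%}")

# A "telephone poll" samples only phone owners -- a biased sampling frame.
phone_owners = [person for person in population if person[0]]
biased_sample = random.sample(phone_owners, 1_000)
print(f"telephone poll:     {support_for_a(biased_sample):.1%}")

# A simple random sample of the whole population recovers the truth.
random_sample = random.sample(population, 1_000)
print(f"random sample:      {support_for_a(random_sample):.1%}")
```

The telephone poll confidently predicts the wrong winner, and no amount of extra phone calls fixes it: the error comes from who is reachable, not from how many people are asked.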

Whenever a major story comes up that seems to have emotional resonance, I ask, “Can I use this story to teach?”

The story of Trump and Tylenol offers such an opportunity. Trump opined that pregnant women should “tough it out” rather than take Tylenol because of the association between acetaminophen and autism. When he implied a causal link between Tylenol and autism, he was making a claim that can be statistically evaluated.

Whenever I use politics as a teaching moment, I have to be careful.  It is very important to me that I do not alienate or discourage students from participating in discourse because they are worried that I will disapprove of their own political beliefs.  I don’t want my own biases to come out either way; students should be able to come to their own conclusions so long as they can point to the data that support them. 

Towards that end, I teach students how to “Steel Man” an argument. A Steel Man argument requires you to search for the strongest evidence you can find in support of a claim you do not believe. The goal is that if you can find the strongest evidence supporting a claim and are still able to poke holes in the argument, then you can be reasonably confident that you are not simply engaging in confirmation bias.


For example, one of the studies being cited is titled “Acetaminophen use during pregnancy, behavioral problems, and hyperkinetic disorders,” published in the Journal of the American Medical Association.


If students can read the entire article, that is wonderful, but a basic statistical understanding is enough to read the abstract and see that, while there is compelling data from a very large sample of mothers across Denmark collected over a long period, the study also has weaknesses. For example, observational data from a single country, with potential confounders such as other health conditions, may or may not generalize to other countries.

The point of this exercise is to recognize both the strengths and the weaknesses of the data. Ideally, students would be able to articulate the closest thing to the truth that we currently understand: there is compelling evidence of a possible correlation between acetaminophen use and negative infant outcomes, and that evidence suggests pregnant women should be cautious and judicious in their use of acetaminophen. Despite this evidence of correlation, it would be incorrect to assume a causal relationship at this point. The majority of the data comes from observational studies, animal-model experiments, and biomarkers found in umbilical tissue. In other words, based on the evidence presented, there is insufficient evidence to determine a causal relationship.
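To make the correlation-versus-causation point concrete, here is a toy simulation of my own devising, not anything from the studies discussed: a hidden confounder, such as an underlying illness, independently raises both medication use and the adverse outcome. Exposure has no causal effect at all in this model, yet the exposed group still shows a markedly higher outcome rate. Every rate below is an invented number chosen for illustration.

```python
import random

random.seed(1)

# Toy model: a hidden confounder C (e.g., an underlying illness)
# independently raises both exposure E (medication use) and outcome O.
# E has NO causal effect on O here, yet E and O will correlate.
n = 50_000
rows = []
for _ in range(n):
    c = random.random() < 0.30                    # confounder present in 30%
    e = random.random() < (0.70 if c else 0.20)   # illness drives exposure
    o = random.random() < (0.15 if c else 0.05)   # illness drives outcome
    rows.append((e, o))

def rate(outcomes):
    """Fraction of a group that had the adverse outcome."""
    return sum(outcomes) / len(outcomes)

exposed   = [o for e, o in rows if e]
unexposed = [o for e, o in rows if not e]

print(f"outcome rate, exposed:   {rate(exposed):.1%}")
print(f"outcome rate, unexposed: {rate(unexposed):.1%}")
# The exposed group's outcome rate is roughly double the unexposed
# group's, even though exposure does nothing in this model -- the entire
# association is carried by the confounder C.
```

An observational study sees only the exposure and the outcome, so it cannot distinguish this scenario from a genuine causal effect without controlling for the confounder; that is exactly why the observational evidence in the acetaminophen literature cannot, by itself, establish causation.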

By engaging in this kind of exercise, students learn to fight weak ideas with stronger ideas instead of resorting to ad hominem attacks or the other forms of unproductive vitriol that have degraded political debate over the past few decades.

Statistical literacy is vital because bias is a bipartisan problem. Our job is not to make our children choose sides. Our job is to help them learn to extract the kernels of truth that we discover and incorporate those into as accurate a picture of the world as they can express. If we do that, we will have served our children well.

Selim Tlili teaches 8th grade science at the Speyer Legacy School in Manhattan. He earned his bachelor’s in biology from SUNY Geneseo and his master’s in Environmental Public Health from Hunter College. His first book in science education, Sketching for Science, will be published in January. Follow his writing at selim.digital and instagram.com/sketchingforscience.
