Just the Facts: Charter High School Performance in Memphis, TN - July 30, 2014
My school has a strong data-driven culture. We use data not just to identify grade-level or class-wide trends, but trends among individual students. It's not enough for me to know that the entire class scored, say, 80 percent on a specific assessment; I need to be able to identify the high, middle, and low performers on each assessment and reflect on what has made them successful. This is a wonderful practice because it helps me identify what the strong performers are doing well and consider how I can use that information to help the weaker performers improve.
I think this offers a perfect analogy for what we need to be doing with charter schools here in Tennessee.
Last week I wrote a profile of charter schools in Tennessee with the purpose of establishing that, collectively, charters in Tennessee are in fact the real deal: in the aggregate, they are doing very well compared to traditional public schools. From a public policy perspective, this is a vital question to answer because it informs our view of where best to spend our limited public resources.
But it's not enough just to know that collectively we're doing well. Just as in my classroom, it's important to identify who is doing well so that we can study and potentially replicate their success, especially when it comes to serving students from low-income backgrounds.
This piece compares charter schools to three different groups of schools to identify which individual schools are best serving this student population. When we look closely at charters, particularly high schools here in Memphis, it turns out that we have a lot to be excited about.
Summary of Methods (a more detailed explanation can be found at the end)
To create a valid measure, I first decided to compare individual Memphis charter high schools to traditional public high schools using data from the Tennessee Department of Education's 2013 Report Card. The report card allows anyone to break out achievement data by category, in this case economically disadvantaged students only (defined as students living at or below 185 percent of the poverty level).
I then compared each charter school's performance to that of traditional public schools using three measures:
1) The average, for each subject, of a group of peer schools with similar poverty rates
2) The legacy MCS average for each subject
3) The average, for each subject, of a group of the best optional schools in Memphis (schools with our version of an honors program)
To determine how each charter high school performs relative to each measure, I simply subtracted the average of each comparison group from each charter's achievement data. The resulting number is the difference. Lists of the schools in each group can be found at the end of this piece.
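The subtraction step described above can be sketched in a few lines of Python. Note that the school names, subject, and scores below are hypothetical placeholders for illustration, not the actual 2013 Report Card figures.

```python
# Sketch of the comparison method: subtract each comparison group's
# average from a charter school's achievement score for one subject.
# All numbers here are hypothetical placeholders, not real Report Card data.

charter_scores = {"Charter A": 72.0, "Charter B": 55.0}  # % proficient, one subject

comparison_averages = {
    "peer_schools": 60.0,      # avg of schools with similar poverty rates
    "legacy_MCS": 58.0,        # legacy Memphis City Schools average
    "optional_schools": 75.0,  # avg of the best optional (honors) schools
}

def differences(score, averages):
    """Return the school's score minus each comparison group's average."""
    return {group: round(score - avg, 1) for group, avg in averages.items()}

for school, score in charter_scores.items():
    print(school, differences(score, comparison_averages))
```

A positive difference means the charter outscored that comparison group; a negative difference means it fell below the group's average.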
Before going into the data, I want to add a few notes. First, I want to fully acknowledge that I teach in one of the schools in question, The Soulsville Charter School, but this post is in no way affiliated with the school, nor was it commissioned by them. It is entirely my independent creation.
Second, I chose achievement data rather than growth data for this first comparison because a charge often levied against charters is that while they grow students, they don't cut it on an absolute achievement scale. I think this is an important point to examine, and to my knowledge it hasn't been undertaken by any official studies here in Tennessee.
Third, my goal is not to advocate for an expansion of charter schools, nor to call for shutting down any individual ones based on this data, especially given that I've used achievement data from only one year. All I'm trying to do is identify who is doing well educating students in poverty, using one measure from one year, so that we know whom we should be studying.
All of the data is shown in the table below (green cells = higher-than-average achievement, red = lower), followed in the next section by analysis:
The Analysis by Comparison Group:
In analyzing the data I decided to break it up further by comparison measure. To do this, I placed all six schools together on one graph for each measure. On each graph, a bar above zero indicates higher-than-average scores, while a bar below zero indicates lower-than-average scores relative to the comparison group.
Comparison #1, Peer Schools: this compares each charter school to the average of a group of schools serving a similar percentage of economically disadvantaged students. Three schools stand out as having higher-than-average scores: Memphis Academy of Health Sciences (MAHS), Power Center Academy (PCA), and The Soulsville Charter School. These schools displayed above-average achievement in all three accountability subjects. Notably, all three scored 18 percentage points higher than the peer average in Algebra II (a very difficult test), Biology, and English I. Two other schools performed at or near the peer school average: Memphis Academy of Science and Engineering (MASE) and Memphis Business Academy (MBA). In both cases these schools scored higher than the comparison group average in at least two subjects. One school, City University, scored below the peer average in all five EOC subjects.
I should note that there has been some discussion about the most appropriate measure for comparison: free and reduced-price lunch, or free lunch only. Some people contend that the reduced-price lunch population can vary enough between schools to produce completely different environments. To be safe, I also ran the same analysis using schools with similar populations of free-lunch students only, but the results were negligibly different, only 2-3 percent in most cases. For this reason, and because the sample size was larger for all comparison groups, I kept the free and reduced-price lunch peer comparisons.
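The peer-group construction behind this comparison (and the free-lunch-only robustness check) can be sketched as follows. The tolerance band, school names, and percentages here are hypothetical assumptions for illustration; the post does not specify the exact band used.

```python
# Sketch: build a peer group of schools whose economically disadvantaged
# (ED) percentage falls within a tolerance band of the charter's ED rate.
# Schools, percentages, and the 5-point band are hypothetical, not actual
# Memphis data or the author's exact selection rule.

schools = [
    {"name": "High School 1", "ed_pct": 88.0, "score": 52.0},
    {"name": "High School 2", "ed_pct": 91.0, "score": 48.0},
    {"name": "High School 3", "ed_pct": 70.0, "score": 65.0},
]

def peer_group(target_ed_pct, schools, tolerance=5.0):
    """Schools within +/- tolerance percentage points of the target ED rate."""
    return [s for s in schools if abs(s["ed_pct"] - target_ed_pct) <= tolerance]

def peer_average(peers):
    """Mean achievement score of the peer group."""
    return sum(s["score"] for s in peers) / len(peers)

peers = peer_group(90.0, schools)   # e.g., for a charter with a 90% ED rate
print([s["name"] for s in peers])   # High Schools 1 and 2 qualify; 3 does not
print(peer_average(peers))          # (52.0 + 48.0) / 2 = 50.0
```

Swapping the `ed_pct` field for a free-lunch-only percentage reruns the same selection for the robustness check described above.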