Data Visualization #20—“Lying” with statistics

In teaching research methods courses, a tool that I’ve used to help students understand the nuances of policy analysis is to ask them to assess a claim such as:

In the last 12 months, statistics show that the government of Upper Slovobia’s policy measures have contributed to limiting the infollcillation of ramakidine to 34%.

The point of this exercise is two-fold: 1) to teach them that the concepts we use in social science are almost always socially constructed, and that we should first understand the concept—how it is defined, measured, and used—before moving on to the next step of policy analysis. When the concepts used—in this case, infollcillation and ramakidine—are ones nobody has ever heard of (because I invented them), step 1 becomes obvious. How are we to assess whether a policy was responsible for something when we have zero idea what that something even means? Because the concept is usually a familiar one—homelessness, polarization, violence—we often skip right past this step and focus on the next step (assessing the data).

2) The second point of the exercise is to help students understand that assessing the data (in this case, the 34% number) cannot be done adequately without context. Is 34% an outcome that was expected? How does that number compare to previous years, to the situation under previous governments, or to the situation in neighbouring countries with similar governments? (The final step in the policy analysis would be to set up an adequate research design to determine the extent to which the outcome was attributable to policies implemented by the Upper Slovobian government.)

If there is a “takeaway” message from the above, it is this: whenever you hear a numerical claim being made, first ask questions about the claim that fill in the context, and only then proceed to evaluate the claim.

Let’s have a look at how this works, using a real-life example. During a recent episode of Real Time, host Bill Maher used his New Rules segment to admonish the public (especially its more left-wing members) for overestimating the danger that the COVID-19 virus poses to US society. He punctuated his point with the following statistical claim:

Maher claims not only that the statistical fact that 78% of COVID-19 fatalities in the USA were people assessed to have been “overweight” means the virus is not nearly as dangerous to the general US public as has been portrayed, but also that political correctness run amok is the reason that raising this issue (which Americans are dying, and why) in public is verboten. We’ll leave aside the latter claim and focus on the statistic—78% of those who died from COVID-19 were overweight.

Does the fact that more than 3-in-4 COVID-19 deaths in the USA were individuals assessed to have been overweight mean that the danger to the general public from the virus has been overhyped? Maher wants you to believe that the answer to this question is an emphatic ‘yes!’ But is it?

Whenever you are presented with such a claim, follow the steps above. In this case, that means 1) understanding what is meant by “overweight” and 2) comparing the statistical claim to some sort of baseline.

The first is relatively easy—the US CDC has a standard definition for “overweight”, which can be found here: https://www.cdc.gov/obesity/adult/defining.html. Assuming that the definition is applied consistently across the whole of the USA, we can move on to step 2. The first question you should ask yourself is “is 78% low, or high, or in-between?” Maher wants us to believe that the number is “high”, but is it really? Let’s look for some baseline data with which to compare the 78% statistic. The obvious comparison is the incidence of “overweight” in the general US population. Only when we find this data point will we be able to assess whether 78% is a high (or low) number. What do we find? Going back to the US CDC website, we find this: “Percent of adults aged 20 and over with overweight, including obesity: 73.6% (2017-2018).”

So, what can we conclude? The proportion of US adults dying from COVID-19 who are “overweight” (78%) is almost the same as the proportion of the US adult population that is “overweight” (73.6%). Put another way, the likelihood of randomly selecting a US adult who is overweight versus one who is not overweight is 73.6/26.4 ≈ 2.79. If one were to randomly select an adult who died from COVID-19, one would be 78/22 ≈ 3.55 times more likely to select an overweight person than a non-overweight person. Ultimately, in the USA at least, as of the end of April overweight adults are dying from COVID-19 at a rate that is roughly equal to their proportion of the general adult US population.
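The comparison is easy to verify directly in R:

## Odds of drawing an overweight adult vs. a non-overweight adult
73.6 / 26.4  ## in the general adult population: ~2.79
78 / 22      ## among COVID-19 deaths:           ~3.55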

We can show this graphically via pie charts. For many reasons, the use of pie charts is generally frowned upon. But in this case, where there are only two categories—overweight and non-overweight—pie charts are a useful visualization tool that allows for easy visual comparison. Here are the pie charts, with the R code that produced them below:

Created by: Josip Dasović

We can clearly see that the proportion of COVID-19 deaths from each cohort—overweight, non-overweight—is almost the same as the proportion of each cohort in the general USA adult population. So, a bit of critical analysis of Maher’s claim shows that he is not making the strong case that he believes he is.

# Here is the required data frame
covid.df <- data.frame("ADULT"=rep(c("Overweight", "Non-overweight"),2), 
                       "Percentage"=c(0.736,0.264,0.78,0.22),
                       "Type"=rep(c("Total Adult Population","COVID-19 Deaths"),each=2))

library(ggplot2)

# Now the code for side-by-side pie charts:

ggpie.covid <- ggplot(covid.df, aes(x="", y=Percentage, group=ADULT, fill=ADULT)) +
  geom_bar(width = 1, stat = "identity") +
  scale_fill_manual(values=c("#33B2CC","#D71920"),name ="ADULT CATEGORY") + 
  labs(x="", y="", title="Percentage of USA Adults who are Overweight",
       subtitle="(versus percentage of USA COVID-19 deaths who were overweight)") + 
  coord_polar("y", start=0) + facet_wrap(~ Type) +
  theme(axis.text = element_blank(),
        axis.ticks = element_blank(),
        panel.grid  = element_blank(),
        plot.title = element_text(hjust = 0.5, size=16, face="bold"),
        plot.subtitle = element_text(hjust=0.5, face="bold"))

ggsave(filename="covid19overweight.png", plot=ggpie.covid, height=5, width=8)


Data Visualization #19—Using panelView in R to produce TSCS (time-series cross-section) plots

As part of a project to assess the influence, or impact, of Canadian provincial governments’ ruling ideologies on provincial economic performance, I have created a time-series cross-section summary of my party ideology variable across provinces over time. A time-series cross-section research design is one in which there is variation across space (the cross-section) and also over time. The time units can literally be anything, although in comparative politics they are often years. The cross-section units can be countries, cities, individuals, states, or (in my case) provinces. Here is a snippet of the data structure in my dataset (a data frame in R):

canparty.df[1:20,c(2:4,23)]

Year                    Region                      Pol.Party party.ideology
1981                   Alberta Progressive Conservative Party              1
1981          British Columbia            Social Credit Party              1
1981                  Manitoba Progressive Conservative Party              1
1981             New Brunswick Progressive Conservative Party              1
1981 Newfoundland and Labrador Progressive Conservative Party              1
1981               Nova Scotia Progressive Conservative Party              1
1981                   Ontario Progressive Conservative Party              1
1981      Prince Edward Island Progressive Conservative Party              1
1981                    Quebec                Parti Quebecois              0
1981              Saskatchewan           New Democratic Party             -1
1982                   Alberta Progressive Conservative Party              1
1982          British Columbia            Social Credit Party              1
1982                  Manitoba           New Democratic Party             -1
1982             New Brunswick Progressive Conservative Party              1
1982 Newfoundland and Labrador Progressive Conservative Party              1
1982               Nova Scotia Progressive Conservative Party              1
1982                   Ontario Progressive Conservative Party              1
1982      Prince Edward Island Progressive Conservative Party              1
1982                    Quebec                Parti Quebecois              0
1982              Saskatchewan Progressive Conservative Party              1

To get the plot below, we use the R code at the bottom of this post. But a couple of notes: first, the year data are not in true date format. Rather, they are periods, which I have conveniently labelled as years. In other words, what is important for the analysis that I will do (the generalized synthetic control method) is to periodize the data. Second, because elections can occur at any point during the year, I had to decide which party is coded as having been in government in a given year.

Since my main goal is to assess economic performance, and because economic policies take time to be passed and implemented, I decided to use June 30th as a cutoff point. If a party was elected prior to that date, it is coded as having governed the province for that whole year. If the election was held on July 1st (or later), then the incumbent party is coded as having governed the province in the year of the election, and the new government is coded as having started its mandate the following year.
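As a sketch of that coding rule (the inputs here are invented for illustration; my actual coding followed this logic):

## June 30th cutoff: a party elected on or before June 30th is coded as
## governing for the whole election year; otherwise its mandate is coded
## as starting the following year.
election.month <- 9   ## e.g., a hypothetical September election
election.year  <- 1982
mandate.start  <- ifelse(election.month <= 6, election.year, election.year + 1)
mandate.start         ## 1983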

Here’s the plot, and the R code below:

library(gsynth)
library(panelView)
library(ggplot2)

ggpanel1 <- panelView(Prop.seats.gov ~ party.ideology + Prov.GDP.Cap, data = canparty.df, 
          index = c("Region", "Year"), main = "Provincial Ruling Party Ideology", 
          legend.labs = c("Left", "Centre", "Right"), col=c("orange", "red", "blue"), 
          axis.lab.gap = c(2,0), xlab="", ylab="")
## I've used Prop.seats.gov and Prov.GDP.Cap b/c they are two 
## of my IVs, but any other IVs could have been used to 
## create the plot. The important part is the party.ideology 
## variable and the two index variables--Region (province) 
## and Year.

## Save the plot as a .png file

ggsave(filename="ProvRulingParty.png", plot=ggpanel1, height=8,width=7)

Data Visualization #18—Maps with Inset Maps

There are many different ways to make use of inset maps. The general motivation behind their use is to focus in on an area of a larger map in order to expose more detail about a particular area. Here, I am using the patchwork package in R to place a series of inset maps of major Canadian cities on the map that I created in my previous post. Here is the map and a snippet of the R code below:

Created by: Josip Dasović

You can see that a small land area contains just under 50% of Canada’s electoral districts. Once again, in a democracy, citizens vote. Land doesn’t.

In order to use the patchwork package, it is helpful to first create each of the city plots individually and store those as ggplot objects. Here are examples for Vancouver and Calgary.

## My main map (sf) object is named can_sf. Here I create plots using only 
## districts in the Vancouver, then Calgary, areas, respectively. I also limit the
## districts to those that comprise the "red" (see map) 50% population group.

library(ggplot2)
library(sf)

yvr.plot <- ggplot(can_sf[(can_sf$prov.region=="Vancouver and Northern Lower Mainland" &
                             can_sf$Land.50.Pop.2016==1) |
                            (can_sf$prov.region=="Fraser Valley and Southern Lower Mainland" &
                               can_sf$Land.50.Pop.2016==1),]) + 
  geom_sf(aes(fill = Land.50.Pop.2016), col="black", lwd=0.05) + 
  scale_fill_manual(values=c("red")) +
  theme(axis.text.x=element_blank(),
        axis.text.y=element_blank(),
        axis.ticks.x = element_blank(),
        axis.ticks.y = element_blank(),
        legend.position = "none",
        plot.margin=unit(c(0,0,0,0), "mm"))

yyc.plot <- ggplot(can_sf[can_sf$prov.region=="Calgary" &
                            can_sf$Land.50.Pop.2016==1,]) + 
  geom_sf(aes(fill = Land.50.Pop.2016), col="black", lwd=0.05) + 
  scale_fill_manual(values=c("red")) +
  theme(axis.text.x=element_blank(),
        axis.text.y=element_blank(),
        axis.ticks.x = element_blank(),
        axis.ticks.y = element_blank(),
        legend.position = "none",
        plot.margin=unit(c(0,0,0,0), "mm"))

I have created a separate ggplot2 object for each of the cities that are inset onto the main Canada map above. The final R code looks like this:

## Add patchwork library
library(patchwork)

## Note: gg.50 is the original map used in the previous post.

gg.50.insets <- gg.50 + inset_element(yvr.plot, left = -0.175, bottom = 0.25, 
                            right = 0, top = 0.45, on_top=TRUE, align_to = "plot",
                            clip=TRUE) +
  labs(title="VANCOUVER") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yeg.plot, left = -0.175, bottom = 0, 
                right = -0.025, top = 0.2, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="EDMONTON") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yyc.plot, left = -0.1, bottom = 0, 
                right = 0.20, top = 0.25, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="CALGARY") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yxe.plot, left = 0.125, bottom = 0, 
                right = 0.275, top = 0.15, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="SASKATOON") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yqr.plot, left = 0.225, bottom = 0, 
                right = 0.45, top = 0.175, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="REGINA") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(ywg.plot, left = 0.375, bottom = 0, 
                right = 0.55, top = 0.175, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="WINNIPEG") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yyz.yhm.plot, left = 0.725, bottom = 0, 
                right = 1.15, top = 0.25, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="TORONTO/\nHAMILTON") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yul.plot, left = 1, bottom = 0, 
                right = 1.175, top = 0.175, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="MONTREAL") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
        panel.border = element_rect(colour = "black", fill=NA, size=1)) +
  inset_element(yqb.plot, left = 0.95, bottom = 0.175, 
                right = 1.2, top = 0.375, on_top=TRUE, align_to = "plot",
                clip=TRUE) +
  labs(title="QUEBEC\nCITY") +
  theme(plot.title = element_text(hjust = 0.5, size=9, vjust=-1, face="bold"),
  panel.border = element_rect(colour = "black", fill=NA, size=1))

ggsave(filename="can_2019_50_pct_population_insets.png", plot=gg.50.insets, height=10, width=13)

Data Visualization #17—Land Doesn’t Vote, Citizens Do

As I have noted in previous instalments of this data visualization series, standard electoral-result area maps often obfuscate and misrepresent rather than clarify and illuminate. This is especially the case in electoral systems based on single-member districts in which there is a single winner in every district (also known as ‘first-past-the-post’). Below you can see an example of a typical map used by the media (and others) to represent an electoral outcome. The map below represents the results of the 2019 Canadian federal election, after which the incumbent Liberal Party (with Justin Trudeau as Prime Minister) was able to salvage a minority government (they had won a resounding majority in the 2015 election).

There are two key guiding constitutional principles of representation regarding Canada’s lower house of parliament—the House of Commons: i) territorial representation, and ii) the principle of one citizen-one vote. As of the 2019 election these principles, combined with other elements of Canadian constitutionalism (like federalism) and the course of political history, have created the current situation in which the 338 federal electoral districts vary not only in land area, but also in population size, and often quite dramatically (for a concise treatment on this topic, click here).

Because a) “land doesn’t vote,” and b) we only know the winner of each electoral district (based on the colour) and not how much of the vote that candidate received, the map below misrepresents the relative political support of parties across the country. As of 2016, more than 2/3 of the Canadian population lived within 100 kilometres of the US border, on a total land mass that represents only 4% of Canada’s total.

I’ve created a map that splits Canada in two—each colour represents 50% of the total Canadian population. You can see how geographically-concentrated Canada’s population really is.

The electoral ridings in red account for half of Canada’s population, while those in light grey account for the other half. Keep this in mind whenever you see traditional electoral maps of Canada’s elections.
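For what it’s worth, a variable like Land.50.Pop.2016 can be constructed by sorting the districts from most to least densely populated and flagging those that cumulatively contain half of the national population. Here is a minimal sketch of that logic; the Population and Land.Area column names are stand-ins I’m using for illustration, not necessarily the names in my own data frame:

## Flag the most densely populated districts that together hold
## 50% of the national population (illustrative column names).
can_sf$density <- can_sf$Population / can_sf$Land.Area
can_sf <- can_sf[order(-can_sf$density), ]
can_sf$Land.50.Pop.2016 <- factor(
  as.numeric(cumsum(can_sf$Population) <= 0.5 * sum(can_sf$Population))
)  ## stored as a factor so scale_fill_manual() treats it as discrete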

## This is code for the first map above. This assumes you 
## have created an sf object named can_sf, which has these 
## properties:

```
Simple feature collection with 6 features and 64 fields
Geometry type: MULTIPOLYGON
Dimension:     XY
Bounding box:  xmin: 7609465 ymin: 1890707 xmax: 9015737 ymax: 2986430
Projected CRS: PCS_Lambert_Conformal_Conic
```

library(ggplot2)
library(sf)
library(tidyverse)

gg.can <- ggplot(data = can_sf) +
  geom_sf(aes(fill = partywinner_2019), col="black", lwd=0.025) + 
  scale_fill_manual(values=c("#33B2CC","#1A4782","#3D9B35","#D71920","#F37021","#2B2D2F"),name ="Party (2019)") +
  labs(title = "Canadian Federal Election Results \u2013 October 2019",
       subtitle = "(by Political Party and Electoral District)") +
  theme_void() + 
  theme(legend.title=element_blank(),
        legend.text = element_text(size = 16),
        plot.title = element_text(hjust = 0.5, size=20, vjust=2, face="bold"),
        plot.subtitle = element_text(hjust=0.5, size=18, vjust=2, face="bold"),
        legend.position = "bottom",
        plot.margin = margin(0.5, 0.5, 0.5, 0.5, "cm"),
        legend.box.margin = margin(0,0,30,0),
        legend.key.size = unit(0.65, "cm"))

ggsave(filename="can_2019_all.png", plot=gg.can, height=10, width=13)

Here is the code for the 50/50 map above:

#### 50% population Map
gg.50 <- ggplot(data = can_sf) +
  geom_sf(aes(fill = Land.50.Pop.2016), col="black", lwd=0.035) + 
  scale_fill_manual(values=c("#d3d3d3", "red"),
        labels =c("Electoral Districts with combined 50% of Canada's Population",
                "Electoral Districts with combined other 50% of Canada's Population")) +
  labs(title = "Most of Canada's Land Mass in Uninhabited") +
  theme_void() + 
  theme(legend.title=element_blank(),
        legend.text = element_text(size = 11),
        plot.title = element_text(hjust = 0.5, size=16, vjust=2, face="bold"),
        legend.position = "bottom", #legend.spacing.x = unit(0, 'cm'),
        legend.direction = "vertical",
        plot.margin = margin(0.5, 0.5, 0.5, 0.5, "cm"),
        legend.box.margin = margin(0,0,0,0),
        legend.key.size = unit(0.5, "cm"))
#       panel.border = element_rect(colour = "black", fill=NA, size=1.5))

ggsave(filename="can_2019_50_pct_population.png", plot=gg.50, height=10, width=13)

Data Visualization #16—Canadian Federal Equalization Payments per capita

My most recent post in this series analyzed data related to the federal equalization program in Canada using a lollipop plot made with ggplot2 in R. The data that I chose to visualize—annual nominal dollar receipts by province—give the reader the impression that over the last five-plus decades the province of Quebec (QC) has been far and away the main recipient of these federal transfer funds. While this may be true, the plot also misrepresents the nature of these financial flows from the federal government to the provinces, because the data do not take into account the wide variation in population amongst the 10 provinces. For example, Prince Edward Island (PEI) as of 2019 has a population of about 156,000 residents, while Quebec has a population of approximately 8.5 million, or about 55 times as many. That is to say, a better way of representing the provincial receipt of equalization funds is to calculate an annual per capita (i.e., for every resident) value rather than a provincial total.

For the lollipop chart below, I’ve not only calculated an annual per capita measure of the amount of money received by each province, I’ve also controlled for inflation, since a dollar in 1960 was worth a lot more (and could buy many more resources) than a dollar today. Using Canadian GDP deflator data compiled by the St. Louis Federal Reserve, I’ve created the plotting variable—annual real per capita federal equalization receipts by province, with a base year of 2014. Here, the message of the plot is no longer Quebec’s dominance but a story in which Canadians (regardless of where they live) are treated relatively equally. Of course, in any given year, Canadians in some provinces receive no equalization payments at all.
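The transformation itself is simple. Here is a sketch of the calculation, assuming (illustratively) columns named value (nominal receipts), deflator (GDP deflator indexed to 100 in 2014), and population; only real.value.per.cap is the actual name used in the plotting code below:

## Deflate nominal dollars to the 2014 base year, then divide by population
## (value, deflator, and population are illustrative column names).
eq.pop.df$real.value.per.cap <-
  (eq.pop.df$value / (eq.pop.df$deflator / 100)) / eq.pop.df$population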

Here’s the plot, and the R code below it:

Source: https://open.canada.ca/data/en/dataset/4eee1558-45b7-4484-9336-e692897d393f
require(ggplot2)
require(gganimate) 

gg.anim.lol3 <- ggplot(eq.pop.df[eq.pop.df$Year!="2019-20",], aes(x=province, y=real.value.per.cap, label=real.value.per.cap.amt)) + 
  geom_point(stat='identity', size=14, aes(col=as.factor(zero.dummy))) +  #, fill="white"
  scale_color_manual(name="zero.dummy", 
                     #      labels = c("Above", "Below"), 
                     values = c("0"="#000000", "1"="red")) + 
  labs(title="Per Capita Federal Equalization Entitlements (by Province): {closest_state}",
       subtitle="(Real $ CAD—2014 Base Year)",
       x=" ", y="$ CAD (Real—2014 Base Year)") +
  geom_segment(aes(y = 0, 
                   x = province, 
                   yend = real.value.per.cap, 
                   xend = province), 
               color = "red",
               size=1.5) +
  scale_y_continuous(breaks=seq(0,3000,500)) +
  theme(legend.position="none",
        plot.title =element_text(hjust = 0.5, size=23),
        plot.subtitle =element_text(hjust = 0.5, size=19),
        axis.title.x = element_text(size = 16),
        axis.title.y = element_text(size = 16),
        axis.text.y =element_text(size = 14),
        axis.text.x=element_text(vjust=0.5,size=16, colour="black")) +
  geom_text(color="white", size=4) +
  transition_states(
    Year,
    transition_length = 1,
    state_length = 9
  ) +
  enter_fade() +
  exit_fade()

animate(gg.anim.lol3, nframes = 610, fps = 10, width=800, height=680, renderer=gifski_renderer("equal_real_per_cap_lollipop.gif"))  

Data Visualization #15—Canadian Federal Equalization Payments over time using an animated lollipop graph

There is likely no federal-provincial political issue that stokes more anger amongst Albertans (and is as misunderstood) than equalization payments (entitlements) from the Canadian federal government to the country’s 10 provinces. Although some form of equalization has always been part of the federal government’s policy arsenal, the current equalization program was initiated in the late 1950s with the goal of providing, or at least helping achieve, an equal playing field across the country in terms of basic levels of public services. As Professor Trevor Tombe notes:

Regardless of where you live, we are committed (indeed, constitutionally committed) to ensure everyone has access to “reasonably comparable levels of public services at reasonably comparable levels of taxation.”

Finances of the Nation, Trevor Tombe

For more information about the equalization program, read Tombe’s article and the links he provides. The most basic misunderstanding of the program is the belief that while some provinces receive payments from the federal government (the ‘have-nots’), the other, more prosperous, provinces (the ‘haves’) are the source of these payments. You often hear the phrase “Alberta sends X $billion to Quebec every year!” That’s not the case. The funds are generated from federal revenues (mostly income tax) and disbursed from this same pool of resources. The ‘have’ provinces don’t “send money” to other provinces. The federal government collects tax revenue from all individuals, and if a province has a higher proportion of high-earning workers than another, it will generally receive less back from the federal government than its workers send to Ottawa. (To reiterate, read Tombe for the particulars.)

Using data provided by the Government of Canada, I have decided to show the federal equalization outlays over time using what is called a lollipop chart. I could have used a bar chart, but I like the way the lollipop chart looks. Here’s the chart and the R code below:

Created by Josip Dasović

The data source is here: https://open.canada.ca/data/en/dataset/4eee1558-45b7-4484-9336-e692897d393f, and I am using the table called Equalization Entitlements.

N.B.: The original data, for some reason, had Alberta abbreviated as AL (rather than AB), so I had to edit my final data frame and the gif.
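(That relabelling can also be done in R rather than by hand; a one-line sketch, assuming the melted province column variable is a factor:)

## Recode the mislabelled Alberta abbreviation in the factor levels
levels(melt.eq.df$variable)[levels(melt.eq.df$variable) == "AL"] <- "AB"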

## You'll need these two libraries
require(ggplot2)
require(gganimate)

gg.anim.lol1 <- ggplot(melt.eq.df, aes(x=variable, y=value, label=amount)) + 
  geom_point(stat='identity', size=14, aes(col=as.factor(zero.dummy))) +  #, fill="white"
  scale_color_manual(name="zero.dummy", 
                     values = c("0"="#000000", "1"="red")) + 
  labs(title="Canada—Federal Equalization Entitlements (by Province): {closest_state}",
       x=" ", y="Millions of nominal $ (CAD)") +
  geom_segment(aes(y = 0, 
                   x = variable, 
                   yend = value, 
                   xend = variable), 
               color = "red",
               size=1.5) +
  scale_y_continuous(breaks=seq(0,15000,2500)) +
  theme(legend.position="none",
        plot.title =element_text(hjust = 0.5, size=23),
        axis.title.x = element_text(size = 16),
        axis.title.y = element_text(size = 16),
        axis.text.y =element_text(size = 14),
        axis.text.x=element_text(vjust=0.5,size=16, colour="black")) +
  geom_text(color="white", size=4) +
  transition_states(
    Year,
    transition_length = 2,
    state_length = 8
  ) +
  enter_fade() +
  exit_fade()

animate(gg.anim.lol1, nframes = 630, fps = 10, width=800, height=680, renderer=gifski_renderer("equal.lollipop.gif"))  

Data Visualization #13—Roulette and Temperature with R code

In the most recent post in my data visualization series, I made an analogy between climate, weather, and the spins of a roulette wheel, demonstrating that short-term randomness does not mean we can’t make accurate long-term predictions.
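As a quick illustration of why the long-term prediction works: an American roulette wheel has 38 pockets (0, 00, and 1 through 36), and with 00 coded as a second 0 (as in the data-frame setup below) the long-run mean of the spins can be computed directly:

mean(c(rep(0, 2), 1:36))  ## 666/38 ≈ 17.53, the value the cumulative mean settles on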

Towards the end of that post I appended an animation of 1000 random spins of a roulette wheel, plotting each of the 1000 individual outcomes. I chose to show only one outcome at a time as the animation cycled through all 1000 spins. In this post, I want to show you how to keep all of the outcomes from disappearing. Rather than having the value of each spin appear and then disappear, I will change the code slightly so that every spin’s outcome stays on the plot, but faded, so that the focus remains on the newest spin value. Here’s what I mean:

Created by Josip Dasović

Here is the R code for the image above:

## These are the packages needed to draw, and animate, the plots.
library(ggplot2)
library(gganimate)
library(dplyr)  # needed for cummean function

## Set up a data frame for the 1000 random spins of the roulette wheel

mywheel <- c(rep(0,2), 1:36)  # a vector with the 38 wheel values (0 and 00 both coded as 0)
wheel.df<-data.frame("x"=1:1000,"y"=sample(mywheel,1000,rep=T))

## Plot, then animate the result of 1000 random spins of the wheel

## the code to plot
gg.roul.1000.point <- ggplot(wheel.df, aes(x, y)) + 
  geom_point(show.legend = FALSE, size=2, colour="firebrick4") +  # colour set outside aes() so the points are actually firebrick4
  theme_gray() + 
  labs(title = "1000 Random Spins of a Roulette Wheel", 
       x = expression("the"~n^th~"roll of the wheel"), 
       y = 'Value of a single spin') +
  theme(plot.title = element_text(hjust = 0.5, size = 14, color = "black")) +
  scale_y_continuous(expand = c(0, 0)) +
  transition_time(wheel.df$x) +
  shadow_mark(past = T, future=F, alpha=0.2)

## the code to animate
gg.roul.anim.point <- animate(gg.roul.1000.point, nframes=500, fps=25, width=500, height=280, renderer=gifski_renderer("gg_roulette_1000.gif"))  
 
## Now plot and animate a line chart that depicts the cumulative mean from spin 1 to spin 1000.

gg.roul.1000.line <- ggplot(wheel.df, aes(x, y = cummean(y))) +
  geom_line(show.legend = FALSE, size=1, colour="firebrick4") +
  theme_gray() +
  ggtitle("Cumulative Mean of Roulette Wheel Spins is Stable over Time") +
  theme(plot.title = element_text(hjust = 0.5, size = 14, color = "black")) +
  labs(x = expression("the"~n^th~"roll of the wheel"), 
       y = 'Running (i.e., cumulative) Mean of all Rolls at Roll n') +
  scale_y_continuous(expand=c(0,0), limits=c(0,36)) +
  transition_reveal(wheel.df$x) +
  ease_aes('linear') 

gg.roul.anim.line <- animate(gg.roul.1000.line, nframes=500, fps=25, width=500, height=280, renderer=gifski_renderer("cummean_roulette_1000.gif"))  

## Now combine the plots into one figure, using the magick library

library(magick)

a_mgif <- image_read(gg.roul.anim.point)
b_mgif <- image_read(gg.roul.anim.line)

roul_gif <- image_append(c(a_mgif[1], b_mgif[1]),stack=TRUE)
for(i in 2:500){
  combined <- image_append(c(a_mgif[i], b_mgif[i]),stack=TRUE)
  roul_gif <- c(roul_gif, combined)
}

## Save the final file as a .gif file

image_write(roul_gif, "roulette_stacked_point_line_500.gif")

Stay tuned for a Python version of this chart.

Data Visualization #9–Non-ideal use of Stacked Bar Plots

Stacked bar plots (charts) are a very useful data visualization type…when used correctly. In an otherwise excellent report on the “Escalating Terrorism Problem in the United States” from the Center for Strategic and International Studies, there is a problematic stacked bar chart (actually, a stacked percentage chart) that should have been replaced by a grouped bar chart (or something else). Here is the, in my opinion, problematic chart:

The reason I believe this chart is problematic is that it can obscure the nature (and trend) of the underlying data. The chart above is consistent with any number of underlying data patterns. As an example, let’s look at 2019 and 2020. We have the following percentage breakdown over the two years:

Type of Violence   2019   2020
Ethnonationalist     3%     0%
Left-wing            4%     0%
Other                0%     0%
Religious           30%     7%
Right-wing          63%    93%

While it is obvious that ethnonationalist and left-wing violence decreased (they are 0% in 2020), it is not clear whether right-wing and religious violence increased or decreased in absolute terms. Does right-wing violence in 2020 comprise 93% of 14 acts of terrorist violence? Or 93% of 200 acts? We don’t know. To be fair to the authors of the report, they do provide a breakdown in absolute numbers later in the report. Still, I believe a more appropriate use of a stacked bar/percentage chart is when the absolute number of instances is (relatively) static over the time/area of comparison. A grouped bar chart, sketched below, keeps those absolute levels visible.
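Here is a minimal sketch of the grouped-bar alternative; the incident counts below are invented for illustration and are not taken from the CSIS report:

library(ggplot2)

## Invented counts of incidents by type and year (NOT the report's data)
terror.df <- data.frame(
  Year  = rep(c("2019", "2020"), each = 2),
  Type  = rep(c("Religious", "Right-wing"), 2),
  Count = c(20, 42, 4, 52)
)

ggplot(terror.df, aes(x = Year, y = Count, fill = Type)) +
  geom_col(position = "dodge") +  ## grouped, not stacked: absolute levels stay visible
  labs(title = "Grouped bars preserve absolute counts")

With position = "dodge", the bars sit side by side, so a reader can see both the shares and the underlying levels.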

Here’s an example from college football. The Pacific-12 conference has two divisions–North and South. Every year, each of the 6 teams in each division plays 4 of the teams in the other division, for a total of 24 inter-divisional games per year. In addition, there is the PAC12 Championship Game, which pits the winners of the two divisions against each other at the end of the year. Therefore, there are 25 total inter-divisional PAC12 football games every year. A stacked percentage chart can be used to gauge the relative winning percentages of the two divisions against each other since the establishment of the PAC12 conference in 2011 (when Utah and Colorado were added).

Created by Josip Dasović

Here, each of the years refers to a total of 25 inter-divisional games. We can easily gauge the relative quality of the respective divisions by comparing the percentage of games won by each between 2011 and 2019. We see that the North (which, by the way, produced 8 of the 9 PAC12 champions during this period) has generally been stronger. In 6 of the 9 years, the North won a greater percentage of the inter-divisional games than did the South. And even in the years when the South won a greater percentage of the inter-divisional games, it wasn’t by much. A sketch of how such a chart can be built follows.
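The win counts below are invented for illustration (each year always sums to the 25 inter-divisional games):

library(ggplot2)

## Invented win counts: each year's North + South wins total 25 games
pac12.df <- data.frame(
  Year     = rep(c("2011", "2012"), each = 2),
  Division = rep(c("North", "South"), 2),
  Wins     = c(14, 11, 15, 10)
)

ggplot(pac12.df, aes(x = Year, y = Wins, fill = Division)) +
  geom_col(position = "fill") +  ## stack each year's bar to 100%
  scale_y_continuous(labels = scales::percent) +
  labs(y = "Share of the 25 inter-divisional games won")

Because the yearly total is fixed at 25, the percentages here do not hide any movement in the absolute counts, which is exactly the condition under which a stacked percentage chart is appropriate.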

So, use stacked percentage charts only when it is appropriate.

Data Visualization #7–Treemaps using US Counties and 2016 Presidential Vote

While we’re still waiting on the availability of official county-level results of the 2020 US Presidential Election*, I thought I’d create a treemap of the county-level results from the 2016 election. You may be thinking to yourself, “What is a treemap?”

Treemaps are ideal for displaying large amounts of hierarchically structured (tree-structured) data. The space in the visualization is split up into rectangles that are sized and ordered by a quantitative variable.

Link to Source

Treemaps, therefore, can help us visualize the relationships within our quantitative data in a unique, visually-pleasing, and meaningfully effective manner. Let’s see how with the example of the US 2016 Presidential Election.

Here’s a picture of then newly-elected President Donald Trump looking at a map given to him by his advisers depicting the results of the 2016 election. This specific depiction of the results overstates the extent of the support across the USA for Trump in the 2016 election. As those in the know often say, “land mass does not vote.” Indeed, if one were ignorant of US politics and US political demography, one would most likely be perplexed, looking at that map, to be told that the “blue” candidate actually won 3 million more votes than the “red” candidate.

Here is my reproduction of these data, using publicly-available data from MIT Election Data and Science Lab, 2018, “County Presidential Election Returns 2000-2016”, https://doi.org/10.7910/DVN/VOQCHQ, Harvard Dataverse, V6, UNF:6:ZZe1xuZ5H2l4NUiSRcRf8Q== [fileUNF]. I’ve added the R code at the end of this post.

We can see that the vast majority of counties are small, and that voters in these counties were more likely to have voted for Trump than for Clinton. Indeed, Clinton won fewer than 16% of all counties.

The problem with this map is that it essentially dichotomizes quantitative data into qualitative data. To be precise, the decision whether to colour a county blue or red is made simply on the basis of whether, of those who voted, more voted for Trump or for Clinton. If a county voted 51-50 for Trump, it gets coloured red. If a county voted 1,000,000-100,000 for Clinton, it gets coloured blue. And, to make things even more confusing, the amount of red that each county receives is related ONLY to county land area, and doesn’t take account of the number of voters.
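Schematically, that colouring decision is a single comparison per county. A tiny invented example (the column names are made up for illustration):

## Winner-take-all colouring throws away the margin: a 510-500 squeaker
## and a 10-to-1 blowout get equally decisive colours (invented data).
county <- data.frame(trump_votes   = c(510, 100000),
                     clinton_votes = c(500, 1000000))
county$colour <- ifelse(county$trump_votes > county$clinton_votes, "red", "blue")
county$colour  ## "red" "blue"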

As is the case in many parts of the world today, the US is increasingly split demographically, with those living in rural areas (and suburbs/exurbs) voting for the conservative party (Republican) and those in urban areas voting for the liberal party (Democratic). We see this clearly in the map above. The problem with US counties is that they are uniform neither in land area nor in population. There are apartment buildings in New York City and Los Angeles that have more residents than some counties.

We can use treemaps to more “accurately” depict electoral outcomes. By accurately, I mean that the visual representation of the data more closely reflects how many voted for each candidate (party).

The first example below represents the vote at the county level and describes two quantitative variables. The size of each rectangle represents the total number of voters in each county: the larger the rectangle, the greater the number of voters in that county. The second variable, which is mapped using the colour scale, represents the difference, in raw vote totals, between the two candidates. Reddish shades denote a county that was won by Trump, while bluish shades represent counties won by Clinton.

There are a couple of things to notice. First, there is a wide disparity in the total number of voters across the counties. Second, we see that most of the counties have shades that are only very lightly blue (or red) and look mostly white. This is because the range of the variable must be so expansive in order to include outliers like Los Angeles and Cook counties. Thus, in the vast majority of US counties, the raw vote-total differences between Trump and Clinton are in the thousands. This is why Trump was able to win more than 84% of US counties and still lose the popular vote by more than 3 million votes.

Our next (and final) treemap is similar to the one above except that the scale for the colouring is not the raw vote difference between Trump and Clinton in each county, but the percentage-point differential in vote between the two candidates.

We see much more red and blue in this map because the scale is confined to a range from a 100% Trump win to a 100% Clinton win. Notice the striking disparity in where the blue and red colours, respectively, are found. The reddish shades dominate in small-population counties (in the top-right corner of each state subgroup), while the bluish shades dominate in large-population counties (in the bottom-left corners of each state subgroup). Finally, the larger (greater-population) counties tend to be much smaller geographically than the less-populous counties, which is why the map on Trump’s desk looks like it does.

gg.geom.uscounty <- ggplot(us_df_final_2163) +
        geom_sf(aes(fill = winner), col="black", lwd=0.1) + 
        scale_fill_manual(values=c("blue","red"), labels=c("Clinton","Trump"), breaks=c("Democrat","Republican")) + # breaks...to get rid of NA
        labs(title = "US 2016 Presidential Election Results by County ('Lower 48')") +
        theme_void() + 
        coord_sf(xlim = c(-1900000,2400000), ylim = c(-2050000, 625000)) +
        theme(legend.title=element_blank(),
              legend.text = element_text(size = 12),
              plot.title = element_text(hjust = 0.5, size=16, vjust=2),
              legend.position = "bottom",
              plot.margin = margin(0.5, 0.5, 0.5, 0.5, "cm"),
              legend.box.margin = margin(0,0,30,0),
              legend.key.size = unit(0.75, "cm"))

gg.geom.uscounty

R code for the treemaps (this is for the “total vote” variable; replace that variable with a “percentage-vote” variable, with appropriate limits and breaks (-100, 100), because you are then working with percentages):

library(ggplot2)
library(treemapify)  ## geom_treemap() and friends come from the treemapify package

gg.tree.tot <- ggplot(us_df_final, aes(area = totalvote, fill = vote_win_diff, label=NAME, subgroup=State.Name)) +
        geom_treemap() +
        geom_treemap_subgroup_border(colour="black", size=2) +
        geom_treemap_subgroup_text(place = "centre", grow=F, alpha = 0.5, colour =
                                           "black", fontface = "italic", min.size = 0) +
        geom_treemap_text(colour = "black", place = "center", reflow = T) +
        scale_fill_distiller(type = "div", palette=5, direction=1, guide="coloursteps", limits=c(-2000000,2000000), breaks=seq(-2000000,2000000, by=500000),
        labels=c("2000000","1500000","1000000","500000","0","500000","1000000","1500000","2000000")) +
        labs(title = "US 2016 Presidential Election by County (Areas Proportional to Total Votes in County",
             fill="Difference\u2013County Vote Totals between Trump (red) & Clinton (blue)") + 
        theme(legend.key.height = unit(0.75, 'cm'),
              legend.key.width = unit(2.35,"cm"),
              legend.text = element_text(size=8),
              plot.title = element_text(hjust = 0.5, size=14, vjust=1),
              legend.position = "bottom") +
        guides(fill = guide_coloursteps(title.position="top", title.hjust = 0.5),
               size = guide_legend(title.position="top", title.hjust = 0.5))    

* The electoral process that determines who becomes president of the United States is complicated. In effect, it is a series of elections run by individual states, not a single federally-run election as in most other presidential systems.

Data Visualization #6–US Counties are [essentially] Meaningless in Presidential Elections

The inspiration (so to speak) for this latest instalment of my data visualization series is a meme that I have seen spreading across social media in the wake of the recent US presidential election. The meme, in essence, notes the number of US counties (there are over 3,000) that were “won” by the incumbent, Donald J. Trump. Indeed, it seems the challenger, Joe Biden, ironically won the most votes of any US presidential candidate in history while simultaneously “winning” the lowest percentage of counties (about 17%) of any winner of the presidency ever.

Why did I place “won” in quotation marks? Two reasons: first, I assume the authors of this meme mean that Trump “won” these counties by winning (at least) a plurality of the vote in each. Which, I suppose, is true. The more important reason is that US counties are effectively meaningless when it comes to determining who wins the US presidency. They matter only insofar as receiving more votes than one’s opponent in any individual county helps increase the odds of winning what is important–a plurality of the vote in the state (or in congressional districts in the cases of Nebraska and Maine). Counties have no official weight in determining Electoral College votes, and it doesn’t matter how many counties a candidate wins as long as they reach at least 270 electoral votes. Counties in the USA vary in population from fewer than 100 residents (Kalawao County, Hawaii) to over 10,000,000 (Los Angeles County, California). So, discussing who “won” more counties is essentially meaningless.

Here’s an example of how absurd referring to counties won becomes. The aforementioned Los Angeles County is a county that Joe Biden handily “won” in November, by a margin of 72.5% to 27.5% for Donald Trump. In short, Trump was walloped by Biden in LA County. Yet, when you compare Trump’s vote in LA County (about 1.15 million) to his total vote in all of the states (and DC) it might shock you to learn that Trump won more votes in LA County than he won in 25 individual states (and in DC). For example, Trump won more total votes in LA County (which, remember, he lost 72.5%-27.5%) than he won in the state of Oklahoma, where he won all 6 Electoral College votes. Moreover, Biden won more votes in LA County alone than Donald Trump won in each of all but three states–Florida, Texas, and Ohio. To be clear, for example, Biden won more total votes in LA County (which, alone, didn’t win him a single Electoral College vote) than Trump won in North Carolina (for which Trump won 15 Electoral College votes).

Here is a bar plot that I’ve created to visualize these data (click on the image to open a larger version). The yellow bar at the far right represents the number of votes won by Biden in LA County (just over 3 million). The other yellow bar represents the votes won by Trump in LA County (just over 1 million). Every other bar represents the number of votes won by Trump in each of the states (and DC) listed below (Texas, Ohio, and Florida are missing because Trump won more votes in each of those states than Biden won in LA County). The red bars are states won by Trump, while the blue bars represent states won by Biden. Remember, each of the bars (except the one on the far right) represents the number of votes Donald Trump won in that state (or in LA County).

Created by: Josip Dasović