Global Warming ‘Hiatus’ Expected to End by 2030

For this week’s seminar, we read and discussed (amongst other things) a general (i.e., non-academic) article in The Guardian regarding the recent so-called hiatus in global warming. (Here’s another look at the same issue from The Economist.) The issue arises from recent global surface temperature data. To wit:

Between 1998 and 2013, the Earth’s surface temperature rose at a rate of 0.04°C a decade, far slower than the 0.18°C increase in the 1990s. Meanwhile, emissions of carbon dioxide (which would be expected to push temperatures up) rose uninterruptedly. This pause in warming has raised doubts in the public mind about climate change. A few sceptics say flatly that global warming has stopped. Others argue that scientists’ understanding of the climate is so flawed that their judgments about it cannot be accepted with any confidence. (From The Economist)

As the article quoted above goes on to note, there are many compelling scientific accounts for why global surface temperatures have not risen as quickly as in the past, though the author argues that they, in combination, explain too much. To understand what that means, please read the article yourself.

We viewed a video by climate scientist Matt England, in which he explained one plausible reason for this ‘hiatus’–the changing trade winds in the Pacific Ocean.

After having viewed Professor England’s explanation–more heat than normal was being trapped in deeper layers of the western Pacific Ocean–some students wondered when that extra trapped heat might once again rise to the surface. Not being a climate scientist, I did not know the answer. I now know, however, that some scientists predict this to occur by about 2030.

The Atlantic Ocean has masked global warming this century by soaking up vast amounts of heat from the atmosphere in a shift likely to reverse from around 2030 and spur fast temperature rises, scientists said.

The theory is the latest explanation for a slowdown in the pace of warming at the Earth’s surface since about 1998 that has puzzled experts because it conflicts with rising greenhouse gas emissions, especially from emerging economies led by China.

But, if you read the linked article carefully, you’ll notice that the study and explanation cited have nothing to do with the Pacific Ocean. Indeed, the study is by a group of scientists based at the University of Washington:

“We’re pointing to the Atlantic as the driver of the hiatus,” Ka-Kit Tung, of the University of Washington in Seattle and a co-author of Thursday’s study in the journal Science, told Reuters.

The study said an Atlantic current carrying water north from the tropics sped up this century and sucked more warm surface waters down to 1,500 metres (5,000 feet), part of a natural shift for the ocean that typically lasts about three decades.

It said a return to a warmer period, releasing more heat stored in the ocean, was likely to start around 2030. When it does, “another episode of accelerated global warming should ensue”, the authors wrote.

So, what do we take from these two different studies? Is the article in The Economist correct that the current warming hiatus is ‘over-explained’? Is this just another example of scientists blindly whacking away at a piñata, hoping to hit upon an explanation? Or is this another episode of how science is done in the real world? Theory and data combine to make predictions, which may be more or less accurate. When anomalies occur (that is, when predictions are not quite accurate), scientists go about finding new data and developing new theories to improve upon existing theories and knowledge. Or is this just a loosely linked cabal of money-seeking scientists trying to make off like bandits with our tax (i.e., research) money and blithely destroying our freedom while they’re at it?

Climate of Doubt–Example of a QIP

Before heading off into the Vancouver evening last week, we watched the first half of the PBS documentary Climate of Doubt, which examined how a coalition of powerful moneyed interests, in alliance with like-minded citizens’ groups and (mostly) Republican politicians, was able to stymie US congressional efforts to address some of the potential negative consequences of a warming planet.

The documentary takes us through a case of public opinion formation that fits Charles Lindblom’s definition of circularity perfectly. According to Lindblom, government policies that reflect the will of the general public may nonetheless be considered undemocratic if those opinions are formed as the result of undue influence by powerful interest groups and large corporations (read: Exxon Mobil). I have embedded the documentary below, so please watch the remainder. I’ll also use my first post of the semester to provide an example of what a suitable QIP might look like (after the fold).


Islam, the Koran, and Women’s Rights from the Perspective of Muslim Women

For those of you who are writing on the influence of Islam on the prospects for democratization in predominantly Muslim countries, here is an interesting video, which asks Muslim women about their views on the compatibility of Islam with women’s rights and democracy. This is a nice complement to the Fish article that we read two weeks ago. Here is an illuminating quote from one of the women interviewed in the film:

“First of all I didn’t understand why my brother didn’t have to do housework and I have to do housework…as a little girl it did not make sense to me. Just because he’s a boy he doesn’t have to do housework?!? So for me the questioning was from the family, but the family never used religion to justify why [boys didn’t have to do housework], so I always knew it was culture and tradition.”

“We wanted to break the monopoly, that only the ulama, only the religious authorities, have the right to talk about Islam and define what is Islam and what is not Islam.”

Zainah Anwar
Co-founder, Sisters in Islam
Kuala Lumpur, Malaysia

Here is the very interesting video, which is about 26 minutes long. Throughout this film many of the concepts that we have learned this semester are brought into play.

http://vimeo.com/88043539

Joseph Chan on Confucianism and Democracy

http://www.youtube.com/watch?v=CsdI4J-lv_c

In IS210 today, we viewed a short clip from this interesting lecture by Professor Joseph Chan given at Cornell University. Professor Chan of the University of Hong Kong talks about the shared moral basis of contemporary Chinese society. With Leninism/Marxism/Maoism being discredited amongst most Chinese, the search begins for a new moral basis/foundation for society.

As Professor Dick Miller says in his introductory remarks:

In China, as in the United States, people feel a great need for an adequate, shared, ethical basis for public life. There, as here, people don’t think that freedom to get as rich as you can is an adequate basis.

So, what is that basis, if the official ruling ideology of the political regime no longer seems legitimate? Liberal democracy? Confucianism? There are adherents in China of both of these as the proper ethical foundation. What does Professor Chan have to say about the compatibility of Confucian ideals with democracy? Watch and find out. It’s a very informative lecture.

How many of the world’s inhabitants have become free since 1945?

One of the empirical facts of the post-WWII era has been the inexorable rise not only in the number of democratic states, but also in the number of the world’s denizens who reside in democracies. We’ve probably all seen the Freedom House World Maps of Freedom, which are published on an annual basis.

Freedom House World Map of Freedom 2014

That’s great for providing a quick visual idea of how many of the world’s states have democratic regimes. But, it doesn’t tell us how many of the world’s inhabitants live in democracies. This clever cartogram by Gleditsch and Ward does this. Cartograms bend and mis-shape world maps on the basis of the values of the underlying variable–in this case, population. What do you think? The map below shows a dramatic rise since 1945 in both the number of states and the number of the world’s citizens who live in democracies. You’ll note that this map is from 2002 data, and there have been some important changes, notably Russia’s slide back toward autocracy in the last decade or so. Also, look at how massive India and China are (population-wise)!

[Cartogram by Gleditsch and Ward: the world’s population living in democracies, 2002 data]

Proportional Representation versus Plurality

In IS210 we will discuss the relative merits of the two most frequently instituted electoral systems: proportional representation and plurality (also called majority or “first-past-the-post”) systems.

In advance, here is a chart that I’ve created, which shows the electoral results (in terms of number of seats won in the House of Commons) of the 2011 Canadian Federal election. The bottom of the chart contains the actual number of seats won, while the top lists the hypothetical number of seats each party would have won if Canada’s electoral system were one of proportional representation. So, Canada’s electoral system is working as it should, correct?

[Chart: 2011 Canadian federal election seats, actual versus hypothetical PR allocation]
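For anyone curious how the hypothetical PR row could be generated, here is a minimal sketch in R. The vote shares are approximate, and the chart above may well use a more sophisticated allocation rule than simple rounding, so treat this as an illustration only.

# Approximate 2011 vote shares (%) and actual seats; 308 seats in the House of Commons.
votes  <- c(Conservative = 39.6, NDP = 30.6, Liberal = 18.9, Bloc = 6.0, Green = 3.9)
actual <- c(Conservative = 166,  NDP = 103,  Liberal = 34,   Bloc = 4,   Green = 1)

# Naive proportional allocation: seat share equals vote share, rounded.
pr_seats <- round(votes / 100 * 308)

rbind(actual, pr_seats)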

Obstacles to Democratization in North Africa and the Middle East

In conjunction with this week’s readings on democracy and democratization, here is an informative video of a lecture given by Ellen Lust of Yale University. In her lecture, Professor Lust discusses new research that comparatively analyzes the respective obstacles to democratization in Libya, Tunisia, and Egypt. For those of you in my IS240 class, it will demonstrate how survey analysis can help scholars find answers to the questions they ask. For those in IS210, this is a useful demonstration of comparing across countries. [If the “start at” command wasn’t successful, you should forward the video to the 9:00 mark; that’s where Lust begins her lecture.]

Maybe there’s a use for Pie Charts, after all.

Pie charts have been justifiably criticized for one very important reason (and many less important ones): pie charts are bad at “the one thing they’re ostensibly designed to do,” and that is to show the relationship of parts to the whole. Check out this site for some egregious examples of failing to represent one’s data clearly.

A student of mine in IS240 (Intro to Research Methods in Intl. Studies) may have unknowingly redeemed the besmirched reputation of the pie chart. The key, though, is that she used the pie chart (along with some clever colour manipulation) to compare results across pie charts, not within them.
Here are three pie charts, depicting the answers to a question in the World Values Survey that taps into the concept of homophobia. The potential response set for this question was ordinal in nature, ranging from 1 to 10, with 1 representing the most homophobic response and 10 the least. Using a colour ramp, this student produced the pie charts you can see below. Essentially, the charts are easy to compare across countries: the more red you see, the more homophobic the responses to that question!
Very nicely done! The R-code to produce these is below. You’ll need v202 and v2 of the World Values Surveys in a data frame (which we have called four.df):
[Pie charts: homophobia question responses for Canada, Italy, and Thailand]
Here is the R code to produce three separate PDF files, one with each chart:
piecolor <- colorRampPalette(c("red", "white"))  # piecolor(10) returns 10 colours running from red to white
names  <- c("canada", "italy", "thailand")
Cnames <- c("Canada", "Italy", "Thailand")

for (i in 1:3) {
  pdf(file = paste0(Cnames[i], ".pdf"))  # e.g., "Canada.pdf"
  pie(table(factor(four.df$v202[four.df$v2 == names[i]])),
      col = piecolor(10), main = Cnames[i])
  dev.off()
}
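As a small variation on the above (not part of the original assignment, and assuming the same four.df, v202, and v2 variables), you could also put all three pies on a single page, which makes the red-to-white ramps even easier to compare at a glance:

# Draw the three pies side by side in one PDF.
pdf(file = "three_pies.pdf", width = 9, height = 3)
par(mfrow = c(1, 3))   # one row, three plotting panels
for (i in 1:3) {
  pie(table(factor(four.df$v202[four.df$v2 == names[i]])),
      col = piecolor(10), main = Cnames[i])
}
dev.off()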

A new Measure of State Capacity

In a recent working paper by Hanson and Sigman, of the Maxwell School of Citizenship and Public Affairs at Syracuse University, the authors explore the concept(s) of state capacity. The paper’s title, Leviathan’s Latent Dimensions: Measuring State Capacity for Comparative Political Research, complies with my tongue-in-cheek rule about the names of social scientific papers. Hanson and Sigman use statistical methods (specifically, latent variable analysis) to tease out the important dimensions of state capacity. Using a series of indexes created by a variety of scholars, organizations, and think tanks, the authors conclude that there are three distinct dimensions of state capacity, which they label i) extractive, ii) coercive, and iii) administrative state capacity.

Here is an excerpt:

The meaning of state capacity varies considerably across political science research. Further complications arise from an abundance of terms that refer to closely related attributes of states: state strength or power, state fragility or failure, infrastructural power, institutional capacity, political capacity, quality of government or governance, and the rule of law. In practice, even when there is clear distinction at the conceptual level, data limitations frequently lead researchers to use the same empirical measures for differing concepts.

For both theoretical and practical reasons we argue that a minimalist approach to capture the essence of the concept is the most effective way to define and measure state capacity for use in a wide range of research. As a starting point, we define state capacity broadly as the ability of state institutions to effectively implement official goals (Sikkink, 1991). This definition avoids normative conceptions about what the state ought to do or how it ought to do it. Instead, we adhere to the notion that capable states may regulate economic and social life in different ways, and may achieve these goals through varying relationships with social groups…

…We thus concentrate on three dimensions of state capacity that are minimally necessary to carry out the functions of contemporary states: extractive capacity, coercive capacity, and administrative capacity. These three dimensions, described in more detail below, accord with what Skocpol identifies as providing the “general underpinnings of state capacities” (1985: 16): plentiful resources, administrative-military control of a territory, and loyal and skilled officials.
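To give a flavour of what latent variable analysis does, here is a toy sketch in R. It is emphatically not Hanson and Sigman’s model (they work with many more indicators and a more sophisticated estimation approach); the indicator names below are invented, and the point is simply that factor analysis can recover a small number of underlying dimensions from a larger set of noisy measures.

# Toy example: nine invented indicators driven by three latent dimensions.
set.seed(42)
n <- 500
extractive     <- rnorm(n)
coercive       <- rnorm(n)
administrative <- rnorm(n)

indicators <- data.frame(
  tax_ratio   = extractive     + rnorm(n, sd = 0.5),
  tax_income  = extractive     + rnorm(n, sd = 0.5),
  rev_central = extractive     + rnorm(n, sd = 0.5),
  mil_spend   = coercive       + rnorm(n, sd = 0.5),
  mil_pers    = coercive       + rnorm(n, sd = 0.5),
  police      = coercive       + rnorm(n, sd = 0.5),
  bur_qual    = administrative + rnorm(n, sd = 0.5),
  corruption  = administrative + rnorm(n, sd = 0.5),
  census_freq = administrative + rnorm(n, sd = 0.5)
)

# A three-factor solution; the loadings should cluster neatly by dimension.
factanal(indicators, factors = 3, rotation = "varimax")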

Here is a chart that measures a slew of countries on the extractive capacity dimension:

[Chart: extractive capacity scores by country]

Research Results, R coding, and mistakes you can blame on your research assistant

I have just graded and returned the second lab assignment for my introductory research methods class in International Studies (IS240). The lab required the students to answer questions using the help of the R statistical program (which, you may not know, is every pirate’s favourite statistical program).

The final homework problem asked students to find a question in the World Values Survey (WVS) that tapped into homophobic sentiment and determine which of four countries under study–Canada, Egypt, Italy, Thailand–could be considered to be the most homophobic, based only on that single question.

More than a handful of you used the code below to try and determine how the respondents in each country answered question v38. First, here is a screenshot from the WVS codebook:

[Screenshot: WVS codebook entry for question v38]

Students (rightfully, I think) argued that those who mentioned “Homosexuals” amongst the groups of people they would not want as neighbours can be considered to be more homophobic than those who didn’t mention homosexuals in their responses. (Of course, this may not be the case if there are different levels of social desirability bias across countries.) Moreover, students hypothesized that the higher the proportion of mentions of homosexuals, the more homophobic is that country.

But, when it came time to find these proportions some students made a mistake. Let’s assume that the student wanted to know the proportion of Canadian respondents who mentioned (and didn’t mention) homosexuals as persons they wouldn’t want to have as neighbours.

Here is the code they used (four.df is the data frame name, v38 is the variable in question, and country is the country variable):


prop.table(table(four.df$v38=="mentioned" | four.df$country=="canada"))

FALSE     TRUE
0.372808 0.627192

Thus, these students concluded that almost 63% of Canadian respondents mentioned homosexuals as persons they did not want to have as neighbours. That’s downright un-neighbourly of us allegedly tolerant Canadians, don’tcha think? Indeed, when compared with the other two countries (Egyptians weren’t asked this question), Canadians come off as more homophobic than either the Italians or the Thais.


prop.table(table(four.df$v38=="mentioned" | four.df$country=="italy"))

FALSE      TRUE
0.6106025 0.3893975

prop.table(table(four.df$v38=="mentioned" | four.df$country=="thailand"))

FALSE      TRUE
0.5556995 0.4443005

So, is it true that Canadians are really more homophobic than either Italians or Thais? This may be a simple homework assignment, but these kinds of mistakes do happen in the real academic world, and fame (and sometimes even fortune; yes, even in academia a precious few can make a relative fortune) is often the result, as seemingly unconventional findings tend to get noticed. There is an inherent publishing bias towards results that seem to run contrary to conventional wisdom (or bias). The finding that Canadians (widely seen as amongst the most tolerant of God’s children) are really quite homophobic (I mean, close to 2/3 of us allegedly don’t want homosexuals, or any LGBT persons, as neighbours) is radical, and a researcher touting these findings would be able to locate a willing publisher in no time!

But, what is really going on here? Well, the problem is a single incorrect symbol that changes the findings dramatically. Let’s go back to the code:


prop.table(table(four.df$v38=="mentioned" | four.df$country=="canada"))

The culprit is the | (“or”) character. What these students are asking R to do is to search their data and find the proportion of all responses for which the respondent either mentioned that they wouldn’t want homosexuals as neighbours OR the respondent is from Canada. Oh, oh! They should have used the & symbol instead of the | symbol to get the proportion of Canadians who mentioned homosexuals in v38.

To understand visually what’s happening let’s take a look at the following venn diagram (see the attached video above for Ali G’s clever use of what he calls “zenn” diagrams to find the perfect target market for his “ice cream glove” idea; the code for how to create this diagram in R is at the end of this post). What we want is the intersection of the blue and red areas (the purple area). What the students’ coding has given us is the sum of (all of!) the blue and (all of!) the red areas.

To get the raw number of Canadians who answered “mentioned” to v38 we need the following code:


table(four.df$v38=="mentioned" & four.df$v2=="canada")

FALSE  TRUE
7457   304

[Venn diagram: respondents who mentioned homosexuals (blue), Canadian respondents (red), and their overlap (purple)]

But what if you then created a proportional table out of this? You still wouldn’t get the correct answer, which should be the proportion of the total red area that is taken up by the purple area on the venn diagram.


prop.table(table(four.df$v38=="mentioned" & four.df$v2=="canada"))

FALSE       TRUE
0.96082979 0.03917021

Just eyeballing the venn diagram we can be sure that the proportion of homophobic Canadians is larger than 3.9%. What we need is the proportion of Canadian respondents only(!) who mentioned homosexuals in v38. The code for that is:


prop.table(table(four.df$v38[four.df$v2=="canada"]))

mentioned not mentioned
0.1404806     0.8595194
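To tie this back to the venn diagram, the same number can be computed directly as the size of the purple intersection divided by the size of the whole red (Canada) circle. This is just a cross-check, assuming the same four.df as above; the two figures will match exactly only if no Canadian respondents have a missing value on v38.

# Intersection (mentioned AND Canada) divided by all Canadian respondents.
sum(four.df$v38 == "mentioned" & four.df$v2 == "canada", na.rm = TRUE) /
  sum(four.df$v2 == "canada", na.rm = TRUE)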

So, only about 14% of Canadians can be considered to have given a homophobic response, not the 63% our students had calculated. What are the comparative results for Italy and Thailand, respectively?


prop.table(table(four.df$v38[four.df$v2=="italy"]))

mentioned not mentioned
0.235546      0.764454

prop.table(table(four.df$v38[four.df$v2=="thailand"]))

mentioned not mentioned
0.3372781     0.6627219

The moral of the story: if you mistakenly find something in your data that runs against conventional wisdom and it gets published, but someone comes along after publication and demonstrates that you’ve made a mistake, just blame it on a poorly-paid research assistant’s coding mistake.

Here’s a way to do the above using what is called a for loop:


four <- c("canada", "egypt", "italy", "thailand")
for (i in 1:length(four)) {
  print(prop.table(table(four.df$v38[four.df$v2 == four[i]])))
  print(four[i])
}

mentioned not mentioned
0.1404806     0.8595194
[1] "canada"

mentioned not mentioned

[1] "egypt"

mentioned not mentioned
0.235546      0.764454
[1] "italy"

mentioned not mentioned
0.3372781     0.6627219
[1] "thailand"

Here’s the R code to draw the venn diagram above:

install.packages("venneuler")

library(venneuler)

# venneuler() takes a named numeric vector; a name like "Mentioned&Canada" gives the size of the overlap.
v1 <- venneuler(c("Mentioned" = sum(four.df$v38 == "mentioned", na.rm = TRUE),
                  "Canada" = sum(four.df$v2 == "canada", na.rm = TRUE),
                  "Mentioned&Canada" = sum(four.df$v2 == "canada" & four.df$v38 == "mentioned", na.rm = TRUE)))

plot(v1, main = "Venn Diagram of Canada and v38 from WVS",
     sub = "v38='I wouldn't want to live next to a homosexual'",
     col = c("blue", "red"))