Stats Project: Class Voting Reborn

Introduction

When I did a post-mortem on my class voting project I outlined three problems that I wanted to return to: sample size, sample weighting, and graphs. There was a fourth problem which I had at the back of my mind: missing values. Furthermore, I was determined to return to the project and perform my analysis using R instead of Excel. After a long break over the Christmas period I am back to delve into this statistical chaos!

The “Evil” of Missing Values

Back when I was tidying my data ready for use in Excel I faced a small problem. Some of the respondents to the European Social Survey hadn’t answered certain questions. For instance, many people refused to say which political party they voted for. I didn’t know this at the time, but this is called ‘item nonresponse’. I “dealt” with this problem by excluding all incomplete responses from my data. In practice this meant removing 160 entries from my dataset. Since my dataset was already quite small, this meant losing almost 20% of my data. I didn’t think much of this at the time and resolved to come back to the matter later.

Well here I am. I learned that the method I used to brush my incomplete data under the carpet is called ‘listwise deletion’. King, Honaker, Joseph, and Scheve (1998) called this process “evil”. I had apparently committed a cardinal sin. I wasn’t alone: a lot has been written about social scientists mishandling missing values in their data. So what’s the big deal?

You lose some of your data, which means your standard errors are going to be larger. That is a big problem in itself, and one I’ve already written quite a lot about. But more importantly, you can end up with a biased analysis if your data isn’t missing completely at random (MCAR). Data is MCAR when the probability of a variable Y being missing is related neither to Y itself nor to the value of X (any other variable in the dataset). If the data isn’t MCAR then the remaining sample is no longer random, which violates the assumptions underpinning the regression analysis.

I read quite a lot about missing values and there’s still a lot I have to learn. But I concluded that my class voting data was probably MNAR (missing not at random). Data is MNAR when the “missingness” of a variable Y is related to the value of Y itself. In my case, I considered it quite likely that the probability of refusing to answer questions about voting was related to that person’s voting record. The Shy Tory factor (link) is a well-known problem in UK opinion polling and it’s possible that a similar effect was in play here.
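To keep these categories straight in my head, here’s the standard notation I’ve seen (writing M = 1 when a value of Y is missing, and X for the other variables in the dataset):

\[
\begin{aligned}
\text{MCAR:}\quad & \Pr(M = 1 \mid Y, X) = \Pr(M = 1) \\
\text{MAR:}\quad & \Pr(M = 1 \mid Y, X) = \Pr(M = 1 \mid X) \\
\text{MNAR:}\quad & \Pr(M = 1 \mid Y, X) \text{ still depends on } Y \text{ even after conditioning on } X
\end{aligned}
\]

The middle case, MAR (missing at random), is the in-between one: missingness can depend on the observed variables but not on the missing values themselves. Multiple imputation, which I turn to next, is designed for exactly that case.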

Handling data that is MNAR is very tricky and requires some pretty advanced techniques which I don’t have a great understanding of at the moment. However, I found some evidence that a technique called multiple imputation produces less biased regression coefficients than listwise deletion even when the data is MNAR. Multiple imputation essentially generates, or imputes, several plausible values for each missing entry based on all the variables in the dataset, creating several completed datasets. The analysis (in my case, a logistic regression) is then performed on every imputed dataset before the results are “pooled” into a single set of estimates.
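In R, the whole impute–analyse–pool cycle only takes a few lines with the mice package. A minimal sketch (the dataset and variable names here are placeholders, not my actual code):

```r
library(mice)

# Impute: create 5 completed copies of the data, filling in each missing
# value using the other variables in the dataset
imp <- mice(ess_data, m = 5, seed = 42)

# Analyse: fit the same logistic regression on every imputed dataset
fits <- with(imp, glm(voted_con ~ gender + age_band + education + sector + class,
                      family = binomial))

# Pool: combine the 5 sets of estimates into one using Rubin's rules
pooled <- pool(fits)
summary(pooled)
```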

Using R for Data Analysis

I’ve spent the last few weeks getting to grips with R and decided to recreate my class voting analysis in it, this time using multiple imputation rather than listwise deletion. I’ve included my spaghetti code as a pdf here and in the appendix (I can’t use code snippets in WordPress without plugins, which require a hefty subscription). This is my first piece of R code and it’s a total mess, but it got the job done.

I imported the data from CSV files into R, used dplyr to manipulate and “tidy” it into something more usable, then used the mice package for the multiple imputation and the glm() function for the logistic regressions. One of the advantages of using R instead of Excel is that you (and I) can see exactly what I’ve done to my data, which makes it easier to recreate the analysis or spot mistakes. I’ve really enjoyed using R and can’t wait to use it for other regressions.
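The tidying stage looked something like the sketch below (simplified, and with made-up column names; the real ESS variables are coded numerically):

```r
library(dplyr)

ess_raw <- read.csv("ESS7GB.csv")  # hypothetical file name

ess_data <- ess_raw %>%
  filter(voted == 1) %>%  # keep only respondents who actually voted
  select(gender, age_band, education, sector, class, voted_con) %>%
  # set the reference categories used in the regression
  mutate(education = relevel(factor(education), ref = "Compulsory or incomplete"),
         class     = relevel(factor(class), ref = "Clerks"),
         age_band  = relevel(factor(age_band), ref = "36-50"))
```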

The Wonders of ggplot2

The graphs I used in my original class voting analysis were atrocious. But after some helpful feedback I resolved to do better. I like to think the visual presentation of data in my more recent blogs is much better. Even so, I’ve become very aware of some of the shortcomings of Excel. For this analysis I decided to see what I could do with ggplot2 in R. You can see the code I’ve used in the attached pdf in the appendix.

I have since fallen in love with ggplot2. It makes creating great-looking graphs remarkably easy. I’m sure there’s much more I could do with it, but I think I’ve created a pretty nice graph to display my results.
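For the curious, the graph came from something along these lines (a sketch carrying on from the mice snippet above; it assumes a recent version of mice where summary() can return confidence intervals):

```r
library(ggplot2)
library(dplyr)

# Pooled estimates are on the log-odds scale, so exponentiate to get
# odds ratios and their 95% confidence intervals
or_df <- summary(pooled, conf.int = TRUE) %>%
  filter(term != "(Intercept)") %>%
  mutate(or = exp(estimate),
         lo = exp(`2.5 %`),
         hi = exp(`97.5 %`))

ggplot(or_df, aes(x = term, y = or, ymin = lo, ymax = hi)) +
  geom_pointrange() +
  geom_hline(yintercept = 1, linetype = "dashed") +  # OR = 1 means no effect
  coord_flip() +
  labs(x = NULL, y = "Odds ratio (95% CI)")
```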

Results

The results of my logistic regression are displayed below in a table and a graph:

                                         Odds Ratio   p-value
Constant                                 0.413        0.00234
Gender
  Female (reference)
  Male                                   0.934        0.674
Age
  20-35                                  0.728        0.166
  36-50 (reference)
  51-65                                  1.320        0.119
Education
  Compulsory or incomplete (reference)
  Vocational Secondary                   1.768        0.0180
  Post Secondary                         0.974        0.955
  Tertiary                               1.357        0.150
Sector
  Private (reference)
  Public                                 0.489        0.000476
Class
  Clerks (reference)
  Socio-cultural Specialists             1.219        0.547
  Service Workers                        0.802        0.465
  Technical Specialists                  0.811        0.591
  Production Workers                     0.905        0.763
  Managers                               1.395        0.214
  Traditional Bourgeoisie                1.229        0.700
  Small Business Owners                  1.400        0.290

[Graph: odds ratios from the logistic regression]

I’ve used odds ratios since they’re easier to interpret than raw coefficients. From the table we can see that vocational secondary education is significant at the 0.05 level, whereas public sector employment is significant at the 0.001 level. How does this compare to the Excel analysis?

In my Excel analysis I used listwise deletion, which reduced my sample size and could have biased my results. The odds ratios in my new analysis are not too dissimilar to the ones calculated in my older analysis, but the results are (slightly) more significant.

From these results, we can see that people with a vocational secondary education had roughly 1.8 times the odds of having voted for the Conservatives compared with the reference group (whose baseline odds of 0.413 correspond to about 29% of them voting for the party). This is quite an unusual result and it’s difficult to think why this might be the case.

Public sector workers had roughly half the odds of voting for the Conservatives. This might not be surprising, since the main issue in the 2010 election was the fallout of the financial crisis and the parties’ deficit reduction plans.

Although the other results are not significant, I think some are quite interesting. We can see a clear generational voting pattern: younger voters were less likely to have voted for the Conservatives than older voters. This is a pattern which seems to persist eight years later.

Small business owners, managers, and the traditional bourgeoisie were more likely to have voted Conservative, whereas production workers and service workers were less likely to have done so. This matches previous work done by Oesch (2008).

Socio-cultural specialists were more likely to have voted for the Conservatives. This is an unexpected result which could indicate a realignment of class voting. But I suspect that this is a consequence of the particularly poor performance of the Labour party rather than a permanent upheaval.

It’s unfortunate that most of these results have to be taken with a pinch of salt – the standard errors are simply too large to be able to draw any reliable conclusions.

Further Problems

As I was going over this analysis again, some more problems surfaced which I’ll discuss a little here. I hope to be able to confront these problems in the coming weeks.

Firstly, I have only performed an unweighted analysis of my data. This is because it is slightly more difficult to do a logistic regression using survey weights in R. I wanted to have a better understanding of the problem before I tried to use the “survey” package (link).
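When I do get around to it, I expect the weighted regression to look roughly like the sketch below (using the ess_data from earlier; pspwght is the ESS post-stratification weight, and properly combining weights with multiple imputation needs more care than this shows):

```r
library(survey)

# Declare the survey design: no clustering, ESS post-stratification weights
des <- svydesign(ids = ~1, weights = ~pspwght, data = ess_data)

# Weighted logistic regression; quasibinomial() avoids the warnings glm()
# gives about non-integer successes when weights are involved
wfit <- svyglm(voted_con ~ gender + age_band + education + sector + class,
               design = des, family = quasibinomial())
summary(wfit)
```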

Secondly, I haven’t included an R² value in my results. An R² value can be calculated for each individual imputed dataset, but the mice package cannot pool R² values from models fitted with glm(). This is a problem I can’t find a solution for at present, but I’ll definitely keep looking.
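The nearest thing I’ve found to a workaround is to compute a pseudo-R² (McFadden’s) on each imputed dataset separately and report the average. This isn’t principled pooling, just a rough indication of fit, reusing the fits object from the mice sketch earlier:

```r
# McFadden's pseudo-R^2 = 1 - (residual deviance / null deviance),
# computed per imputed dataset, then averaged
mcfadden <- sapply(fits$analyses, function(m) 1 - m$deviance / m$null.deviance)
mean(mcfadden)
```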

Thirdly, whilst I think my graphs have dramatically improved with ggplot2, tables in WordPress don’t look very good. There are plugins that produce much nicer tables, but they require much more expensive subscriptions to WordPress. Aside from creating tables as images, I’m not too sure how I can present data more effectively.

Finally, even though I am using multiple imputation for missing values, I am still facing the problem of a small sample size. The European Social Survey might be too small for this ongoing project. The way forward here is quite clear. I’m currently applying for access to the British Household Panel Survey (BHPS) through the UK Data Service. The BHPS is a much larger survey than the ESS so I might be able to produce more precise results with that.

Conclusion

There have been a lot of headaches along the way, but my first regression in R is finally here. Logistic regressions were a pain in Excel and they’re still a pain in R, but I’m slowly eliminating problems and I think further analyses will be much easier to do in R. I’ve also learned a lot about missing values and the practicalities of using multiple imputation in R.

For now, I’m going to go back to doing some linear regressions in R and discuss the results in much greater detail than I have for my class voting project. I’m sure there’ll be more pitfalls and problems along the way!

Appendix: Data Reference and R Code

The data I used for this analysis can be found at the European Social Survey:

ESS Round 7: European Social Survey Round 7 Data (2014). Data file edition 2.2. NSD – Norwegian Centre for Research Data, Norway – Data Archive and distributor of ESS data for ESS ERIC.

I’ve also used Daniel Oesch’s social class scripts to code this data.

I’ve uploaded my (terrible) R code here. Hopefully that link works. I’ve edited out the enormous console output that the code generates (thanks to multiple imputation) for readability.

Stats Project: Class Voting Post-Mortem

Introduction

In my last post I wrote up my first ever logistic regression and noted that it didn’t go as well as I would have liked. I’ve learned a lot about the practice of performing regressions in Excel so the process was still useful. I had some specific issues with my sample size and sampling weights, which I want to discuss a little bit more now.

Sample Size

I performed some Wald tests on the estimated coefficients of the regression (see the last blog for the results). One of the determinants of the Wald statistic is the sample size: put simply, as the sample size increases, so too does the Wald statistic. A larger Wald statistic has a smaller associated p-value, meaning that the null hypothesis (in this case, that the coefficient is equal to 0) is rejected more often. If I had a larger sample, more of my results would be significant.
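To spell out the mechanics (standard notation, not something from my original post): the Wald statistic for a single coefficient is

\[
W = \left( \frac{\hat{\beta}}{\widehat{\mathrm{SE}}(\hat{\beta})} \right)^{2},
\qquad \widehat{\mathrm{SE}}(\hat{\beta}) \propto \frac{1}{\sqrt{n}},
\]

so as n grows the standard error shrinks, W grows, and the p-value falls. Note the square root: quadrupling the sample only halves the standard error, which is where the diminishing returns below come from.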

Was my sample size too small? How much larger would my sample need to be to produce the statistical significance I wanted? The first question is trickier than the second. I began by reading some academic articles about sample sizes and found that there was a bit of a debate around the topic. For logistic regressions there is a rule of thumb that there should be 10 events per predictor variable. Vittinghoff and McCulloch (2007) have argued that such a rule is too conservative, but others have argued the opposite.

In my case that meant I needed 10 Conservative voters for every dummy variable I had thrown into my regression; with 15 dummies, that’s 150 Conservative voters in total. My sample had 238 such voters, so it would seem that, despite my previous conclusion, my sample was sufficiently large to generate solid results. In fact, my sample would be large enough to look at Labour voters as well (but I wouldn’t be able to do an analysis of Lib Dem voters).

But surely I could get better results if my sample size was much bigger? After all, there are some much larger surveys out there (which are admittedly much more difficult to get hold of). It turns out that increasing the sample size has diminishing returns (remember the square root in the standard error), and a sample as large as the one I had was about as good as it gets. My results might have been a little bland, but that wasn’t necessarily a result of the size of the ESS.

Sample Weighting

Turning to sample weighting, the academic literature here was a bit more intimidating. Solon, Haider and Wooldridge (2015) show just how tricky the decision to weight samples actually is. Sometimes it is best to use weights in regressions and other times it makes everything worse. It seems academics have as much trouble and confusion with the topic as I have myself. This was reassuring if also deeply troubling.

My decision to include both weighted and unweighted regressions seems to have been the correct one. If in doubt, shrug and throw both of them into the mix. There are also tests that help decide whether it’s appropriate to weight or not, but these seem a little advanced for me at the moment. This is definitely something I want to explore in more detail when I come to doing some linear regressions and have a clearer understanding of what I’m actually doing.

Graphs, Charts, and Excel

The last thing I want to discuss in this post is how to present regression results visually. I was discussing this project with a friend and they were very keen to get me to present my findings in something other than a table. I spent a lot of time in my second and third years at university staring at tables of regression coefficients and I was reluctant to try and present my findings in any other format.

I gave in and threw in some box plots that Excel had vomited up for me (after making them a little nicer to look at). Presenting the data in the form of a graph didn’t make much sense to me since all my variables were binary. If anyone knows of any other way to present my results then let me know!

I think that linear regressions are much easier to present visually than what I’ve just worked on. During my research I came across a lot of cool ways to present data and I’d love to get around to doing some of that soon. It looks like learning R might be a good idea, particularly for partial dependence plots, as purely relying on Excel might become a little limiting.

Final Thoughts

Performing a grisly post-mortem on my logistic regression was very useful and I think I might come back to this at some point in the future when I’ve got some more statistical experience under my belt. In hindsight I bit off a lot more than I could chew. With that in mind, I’m going to do some very basic stuff next: hypothesis tests and linear regressions. I’m not sure what the topic will be yet, but I’ve got a few ideas!

Stats Project: Class Voting Analysis

Return of the Stats Project

It’s been almost exactly a year since I started this blog and my statistics project. Life got in the way and I had to put this on the backburner (whoops). But I’m back and ready to kick some multivariate ass!

Last time I had identified my data source, the ESS, and had some ideas about how to proceed, if only I could get the data into Excel. A very helpful person commented on my post (blog?) saying that I could directly download the ESS datasets as an Excel spreadsheet. At the time I don’t think I was able to download the ESS Round 8 as a CSV file (I think you can now) so I settled for the ESS Round 7.

Class Voting

For my analysis I decided to use the work of Daniel Oesch, a political sociologist, to try and investigate the effects of class on voting behaviour in the UK (the 2010 general election to be precise). I was already familiar with Oesch’s unique class schema, but coding the ESS data into something workable seemed like quite the daunting task. Fortunately Oesch had some material on his website which allowed me to easily convert the ESS data (in ISCO format) into 8 different classes. Thanks!

With my goal in mind, I set about cleaning the dataset into something usable. I was only interested in a handful of variables: gender, age, education, public sector employment, class (defined by Oesch’s schema), and voting record. Using these variables I could keep my work as closely comparable to Oesch’s as possible and avoid making too many mistakes. My hope was that I could easily compare my results to his.

Many of the ESS responses had no data available for the variables I was going to investigate. Eventually my sample size was reduced from 2265 to 735. The biggest culprit here was probably voter turnout: one third of my data was unusable as I was only interested in people who had actually voted. I knew this was a problem at the time, but only realised how big it was when I reached the end of my regression. I’ve got more to say on this but I will save it for another, more exciting, blog post! Wowee.

I began by looking at the probability of voting Conservative. I needed to use a logistic regression since my dependent variable (the regressand) was binary; using a linear regression would have produced some pretty strange results. My regressors were also all binary variables, so I was mindful not to fall into the dummy variable trap. Finding a way to do a logistic regression in Excel without paying a fortune was a little difficult, but I overcame this minor hurdle and was finally ready to press the big red button.
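For the record, the model being fitted here is the standard logistic regression:

\[
\Pr(Y_i = 1 \mid X_i) = \frac{1}{1 + e^{-X_i \beta}},
\]

which keeps the predicted probability between 0 and 1. A linear regression on a binary outcome (the linear probability model) will happily predict probabilities below 0 or above 1, which is exactly the kind of strange result I was worried about.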

When I initially chose to use ESS data I noted that they had sampling weights. At the time I thought this was great. I have since learned that there is a lot of disagreement and confusion about when it is appropriate to use weights, and that they can make results less reliable. On top of this, I was unsure if I was using the weights in my regression correctly. So I performed an unweighted regression and a weighted regression to see if the results were much different. I’ll come back to this next time.

With the results in front of me, I needed to know if the coefficients were significant to any degree. A handful of Wald tests later and … oh dear. In the unweighted analysis, only the coefficient on public sector employment was significant. In the weighted analysis, this remained significant, and the coefficients on vocational secondary education and managers were weakly significant. These results were disappointing and I decided not to continue with any more analysis, as my sample sadly seemed too small to obtain significant results.

Full results below:

GB 2010: Conservative Party

                                                  Unweighted   Weighted
Class
  Socio-cultural specialists                      1.32         1.60
  Service workers                                 0.91         1.03
  Technical specialists                           0.82         1.14
  Production workers                              0.98         1.36
  Managers                                        1.73         2.05*
  Clerks (reference)
  Traditional bourgeoisie                         1.71         2.04
  Small business owners                           1.48         1.92
Education
  Compulsory or incomplete schooling (reference)
  Vocational secondary                            1.49         1.73*
  General secondary                               0.91         0.98
  Post-secondary, but no tertiary degree          0.68         0.73
  Tertiary degree                                 1.17         1.13
Gender
  Male                                            0.91         1.00
Age
  20-35                                           0.66         0.72
  35-50 (reference)
  51-65                                           1.36         1.14
Sector
  Private (reference)
  Public                                          0.48**       0.54**
N                                                 735          735

These figures are the odds ratios of the chance of voting for the Conservative party in the UK 2010 general election against not voting for them, with respect to the reference categories. *** Significant at the 0.001 level; ** at the 0.01 level; * at the 0.05 level. Data source: European Social Survey Round 7.

That table looked a lot nicer in Word!

[Chart: Conservative – Weighted]

[Chart: Conservative – Unweighted]

These two charts show the standardised coefficients with their 95% confidence intervals. I’ve shaded the significant results in green. I’m aware some of the labels are hard to read and the images are blurry.

I also had a little look at trying to represent the results graphically on the advice of a friend. That seemed to be more trouble than it was worth in this case and at this time. I might come back to that at a later date.

A Sad Conclusion

So my first regression didn’t go quite as planned. I had some problems with my sample size, I’m still not sure if I was using the sampling weights correctly, and I’m not as familiar with logistic regressions as I would like. But I’m not going to give up. I plan on writing another post about some of the specific issues I had, and possible solutions to them. Then I’m going to try and do some nice easy linear regressions. We can all breathe a sigh of relief.

Stats Project: Introduction

A Short Introduction

Welcome to my stats project! I’ve wanted to do some hands-on statistical analysis for years but I’ve never gotten round to it – until now. In this post (blog? article?), I want to outline what I’m planning to do, and in future posts I’ll discuss every painful stage of my little stats project. Hopefully this will be interesting.

So what is my project? Well, I’m going to play around with some data from the European Social Survey. I’m hoping to import the data from the ESS into Excel and R, and then perform some multivariate regressions. I’m not sure exactly what data I’m going to be looking at, but I suspect I’ll end up looking at the determinants of voting behaviour. I know the academic literature on political sociology quite well so I think I know what I’m doing.

The European Social Survey

I think a good place to start with this project is to have a little look at what the ESS actually is. I know that it’s used quite a lot in academic articles and that it’s a survey that covers lots of European countries. Other than that I’m really not sure. So let’s have a look!

The ESS has been around since 2001. It collects data every two years through face-to-face interviews. It’s been conducted in 35 European countries (unless I can’t count), but only 15 countries have participated in each round of the ESS. Not to worry. I only plan on having a look at data from the UK which has participated in all 8 rounds of the ESS. The questionnaire of the ESS is made up of two parts – a core module and a rotating module. It’s an absolute beast of a survey.

But how reliable is this survey? Can I trust any conclusions I draw from its data? Fortunately, the ESS seems to have thought quite a lot about this. The main issues I can understand (I’m not used to this) are sampling, measurement error, and non-response bias. I’m going to ignore methodological concerns for the time being and assume that the ESS has designed the perfect survey. I’ll probably come back to it in a future post. Bet you can’t wait for that one!

The Way Forward

Data from the ESS is publicly available but can only be downloaded in SAS, SPSS or Stata formats. This is my first obstacle. I know that you can take this data and squish it into Excel and R, but I don’t have any idea how. I’ll have to use some google-fu to work that one out. I’ll return to this project in another blog thing when I’ve done that.
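One route, for anyone else stuck at this step: the haven package reads SPSS and Stata files straight into R, and from there you can write out a CSV that Excel can open. A minimal sketch (the file name is made up):

```r
library(haven)

# Read the SPSS version of the ESS data (hypothetical file name)
ess <- read_sav("ESS7GB.sav")
# Or, for the Stata version:
# ess <- read_dta("ESS7GB.dta")

# Write out a CSV that Excel can open
write.csv(ess, "ESS7GB.csv", row.names = FALSE)
```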