Linear Regression: Red Wine Quality

Introduction

I came across some data on the quality of red wine along with some of its chemical properties and thought that it would be quite fun to look at. I decided to answer the burning question of our time: is more alcoholic red wine better? So I’m going to do a handful of linear regressions on the data to try and find an answer whilst discussing some statistical issues along the way.

I would also like to apologise for the state of some of the graphs and tables that I’ve included. They always seem to look much better before I stick them into the main body of the blog.

Data

The data I’m using can be found here. It was originally discussed by Cortez et al. The data has probably been put to better and more interesting use elsewhere, but I wanted to see what I could produce myself.

The data contains 1,599 entries for different kinds of Portuguese vinho verde wine. 12 variables were recorded including alcohol content and the perceived quality of the wine by three sensory assessors in a blind taste test. The other variables include chemical properties of the wine such as pH level, acidity levels, residual sugar content, and sulphate content. Sadly, the data does not include factors such as grape type, wine brand, or the price of a bottle. Knowing this would be very useful when buying Christmas presents!

Hypothesis and Model

The hypothesis I'm aiming to test here is that wines with a higher alcohol content are more likely to be of a higher quality. I'm not a wine expert at all, so I'm coming at this regression with no strong sense of what I might find.

I'll begin with a simple univariate regression of quality against alcohol content. Then I shall include some variables that I think might reduce omitted variable bias. Finally, I'm going to throw all the variables I have available into the regression to see what happens. I'm aware that this approach is not necessarily the most appropriate, but I might as well put these variables to some good use.

Regression 1: Alcohol

The first regression has alcohol content as the only independent variable. As we can see from the graph below, it would seem that the quality of the red wine does indeed increase with alcohol content.

[Figure: wine quality plotted against alcohol content, with fitted regression line]

Below is a table showing the output of the regression. From this we can see that an increase in alcohol content of one percentage point raises the predicted quality score by about 0.36 points. The null hypothesis (that quality is not related to alcohol content) can be easily rejected.

                Coefficient   Standard Error   t-Stat   p-value   Lower 95%   Upper 95%
Intercept       1.875         0.175            10.732   0.000     1.532       2.218
Alcohol         0.361         0.017            21.639   0.000     0.328       0.394

Regression Statistics
Multiple R      0.476
R²              0.227
Adjusted R²     0.226
Standard Error  0.710
Observations    1599

A final thing to note here is the R² value. This is the fraction of the variance of the wine quality explained, or predicted, by alcohol content. It’s a measure of how well the model fits the data. In this case alcohol explains, or predicts, about 23% of the variance in quality. You can’t predict the quality of a wine using alcohol content alone, but it does seem to be an important factor.
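
Since I keep threatening to learn R, here's a rough sketch of what I believe this first regression would look like there. I haven't run this myself; it assumes the semicolon-separated CSV from the UCI repository, whose column names R converts to things like volatile.acidity:

wine <- read.csv("winequality-red.csv", sep = ";")  # UCI file; columns become e.g. fixed.acidity
fit1 <- lm(quality ~ alcohol, data = wine)          # univariate OLS regression
summary(fit1)                                       # should show a slope of ~0.36 and R² of ~0.23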

Omitted Variable Bias

I'm only interested in how alcohol affects the quality of red wine, but that doesn't mean that I can ignore the other variables. My results could be vulnerable to the dreaded omitted variable bias. This bias arises when a variable left out of the regression both correlates with an included regressor and helps determine the dependent variable. In other words, alcohol might correlate with another variable I've excluded which in turn partly explains the quality of the wine.

Why is this a problem? Well, one of the assumptions behind OLS (ordinary least squares) regression is that the conditional mean of the error term, given the independent variable(s), is zero: E(u | X) = 0. If the omitted variable is a determinant of the quality of red wine, then it is part of the error term. If the omitted variable is also correlated with alcohol content, then the error term is correlated with the regressor and that conditional mean is no longer zero.

Below is a correlation matrix between the 12 variables. I’ve colour-coded it to highlight the strongest correlations. We can see that there are several that seem to correlate quite strongly with both alcohol and quality. It seems reasonable to think that volatile acidity, measuring the concentration of acetic acid, can affect the quality since high levels of acetic acid can make the wine vinegary. Sulphur dioxide, density, and the concentration of chlorides might also affect the quality of the wine.
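
For what it's worth, in R (assuming the same wine data frame as in the sketch above) the whole matrix is a one-liner, since every column is numeric:

round(cor(wine), 2)                              # full 12 x 12 correlation matrix
round(cor(wine)[, c("quality", "alcohol")], 2)   # just the two columns I care about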

[Figure: colour-coded correlation matrix of the 12 variables]

Regression 2: Alcohol, Volatile Acidity, Sulphur Dioxide, Density, and Chlorides

To take the possibility of omitted variable bias into account, I've included these variables in a second regression. By doing so, I hope to remove much of the bias that might exist in the first regression.
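
As an R sketch (again untested by me, using the column names from the UCI CSV), the second regression would be something like:

fit2 <- lm(quality ~ alcohol + volatile.acidity + total.sulfur.dioxide +
             density + chlorides, data = wine)
summary(fit2)   # the alcohol coefficient should drop to ~0.32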

The results are a little difficult to display graphically, especially when limited to Excel. I get the impression that this is much easier to do in R or a commercial statistical package. Nonetheless, I’ve included a graph below showing the predicted quality of wine against its alcohol content.

[Figure: predicted wine quality plotted against alcohol content, second regression]

The relationship here is much as it was in the first regression. But if we look more closely at the results we can see that the coefficient on alcohol has fallen from 0.36 to 0.32. It would seem that the variables omitted from the first regression created a result that was biased upwards. Again, we can reject the null hypothesis that alcohol content and quality are unrelated.

                        Coefficient   Standard Error   t-Stat    p-value   Lower 95%   Upper 95%
Intercept               -18.149       10.318           -1.759    0.079     -38.387     2.089
Alcohol                 0.317         0.019            16.795    0.000     0.280       0.355
Volatile Acidity        -1.350        0.095            -14.175   0.000     -1.537      -1.164
Total Sulphur Dioxide   -0.002        0.001            -3.727    0.000     -0.003      -0.001
Density                 21.385        10.253           2.086     0.037     1.274       41.496
Chlorides               -0.416        0.364            -1.141    0.254     -1.130      0.299

Regression Statistics
Multiple R      0.570
R²              0.325
Adjusted R²     0.323
Standard Error  0.664
Observations    1599

The R² has also increased, from 0.23 to 0.33, which would suggest that the new model "explains" more of the variance in wine quality than the first. But is it so wise to rely on R²? Sadly not. It is an interesting quirk of R² that it never decreases when a new variable is added, even if the included variable has no relation to the dependent variable. The adjusted R², which penalises extra regressors, compensates for this effect to some extent. In this case the two aren't really all that different.
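
Here's a quick way to see that quirk for yourself in R (a sketch under the same assumptions as before): bolt a pure-noise regressor onto the second model and compare the two measures.

set.seed(42)
wine_noise <- transform(wine, noise = rnorm(nrow(wine)))    # a column of pure random noise
fit_noise  <- update(fit2, . ~ . + noise, data = wine_noise)
c(summary(fit2)$r.squared, summary(fit_noise)$r.squared)           # R² creeps up regardless
c(summary(fit2)$adj.r.squared, summary(fit_noise)$adj.r.squared)   # adjusted R² barely moves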

Reading the Tea Leaves: Residual Plots

One way to tell if a regression model is not quite right is to have a look for any strange patterns in the residual plots of the regression. A residual plot shows the relationship between an independent variable and the residuals: the differences between the actual values of the dependent variable and the values predicted by the model.

By doing this I can see if there is any nonlinearity which would give rise to model specification bias. I can also see if the residuals are homoskedastic or heteroskedastic.
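
In R this would be a short loop over the regressors (a sketch, untested, reusing the fit2 model and column names assumed above):

res <- resid(fit2)        # residuals from the second regression
par(mfrow = c(2, 3))      # lay the plots out in a grid
for (v in c("alcohol", "volatile.acidity", "total.sulfur.dioxide",
            "density", "chlorides")) {
  plot(wine[[v]], res, xlab = v, ylab = "residual")
  abline(h = 0, lty = 2)  # a funnel shape around this line suggests heteroskedasticity
}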

I sorted the residual plots into three groups. Density, volatile acidity, and alcohol content seemed mostly fine to me. Total sulphur dioxide exhibited clear heteroskedasticity, as shown below: the variance of the residuals depended on the value of sulphur dioxide, falling as sulphur dioxide increased. Chlorides had some large outliers, which might violate one of the key assumptions underpinning OLS regressions.

[Figure: residual plot for total sulphur dioxide, showing heteroskedasticity]

The heteroskedasticity in the sulphur dioxide residuals is not fatal for this regression. Some textbooks include homoskedasticity as an assumption behind the OLS model: the Gauss-Markov theorem shows that under homoskedasticity the OLS estimator is the most efficient of the linear unbiased estimators, meaning its variance is the lowest among them. But provided that heteroskedasticity-robust standard errors are used, inference remains valid even when that assumption fails. Unfortunately I can't seem to confirm that the standard errors provided by Excel's Analysis ToolPak are robust. This might be a problem for my analysis, but there's not a lot I can do about it at the moment.
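
For the record, my understanding is that in R the robust standard errors would come from the sandwich and lmtest packages, along these lines (untested by me):

library(sandwich)   # vcovHC(): heteroskedasticity-consistent covariance matrix
library(lmtest)     # coeftest(): re-test the coefficients with a supplied covariance
coeftest(fit2, vcov = vcovHC(fit2, type = "HC1"))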

The outliers for chlorides do not seem to be large enough to create any problems. The relevant OLS assumption is that the variables have finite fourth moments (finite kurtosis), which rules out arbitrarily large outliers. In this case the sample kurtosis is roughly 42: high, but finite.

Regression 3: All Variables

After wading through some of those issues and concluding that there weren't any glaring problems, I decided to throw all the variables into the regression to see what would happen. By doing this, I hoped to eliminate as much omitted variable bias as possible and get the best estimate I could of the effect alcohol has on quality.
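
The lazy R version of "throw everything in" would be the dot formula, though note that quality ~ . uses every remaining column, including fixed acidity, which doesn't appear in my table below:

fit3 <- lm(quality ~ ., data = wine)   # every other column as a regressor
summary(fit3)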

The graph below shows the relationship between alcohol and quality. The coefficient has fallen further to ~0.29. The null hypothesis that alcohol has no effect on quality can be rejected. Furthermore, the adjusted R² value has increased to about 0.36.

[Figure: predicted wine quality plotted against alcohol content, all-variables regression]

                        Coefficient   Standard Error   t-Stat   p-value   Lower 95%   Upper 95%
Intercept               6.180         13.437           0.460    0.646     -20.176     32.535
Volatile Acidity        -1.078        0.121            -8.911   0.000     -1.315      -0.841
Citric Acid             -0.135        0.139            -0.975   0.330     -0.407      0.137
Residual Sugar          0.010         0.014            0.746    0.456     -0.016      0.037
Chlorides               -1.968        0.408            -4.828   0.000     -2.768      -1.169
Free Sulphur Dioxide    0.005         0.002            2.128    0.034     0.000       0.009
Total Sulphur Dioxide   -0.003        0.001            -4.835   0.000     -0.005      -0.002
Density                 -1.517        13.389           -0.113   0.910     -27.779     24.745
pH                      -0.546        0.133            -4.099   0.000     -0.808      -0.285
Sulphates               0.900         0.113            7.961    0.000     0.678       1.121
Alcohol                 0.290         0.022            13.047   0.000     0.246       0.334

Regression Statistics
Multiple R      0.600
R²              0.360
Adjusted R²     0.356
Standard Error  0.648
Observations    1599

I won’t go into detail about the residual plots here as none seem to display clear nonlinearity. I have already discussed issues of heteroskedasticity and outliers so I don’t want to repeat myself too much.

Conclusion

In conclusion, it would seem that more alcoholic red wines are more likely to be rated as higher quality. I reduced omitted variable bias by controlling for more variables, and I discussed heteroskedasticity, some of the assumptions behind OLS regressions, and why my regressions remained valid despite these issues.

It’s been fun to do a linear regression and get to grips with some of the problems and difficulties associated with them. I’m going to be moving on to learning some R now and trying to perform some linear and logistic regressions using that. It might be a little while before I post anything new.

Stats Project: Class Voting Post-Mortem

Introduction

In my last post I wrote up my first ever logistic regression and noted that it didn’t go as well as I would have liked. I’ve learned a lot about the practice of performing regressions in Excel so the process was still useful. I had some specific issues with my sample size and sampling weights, which I want to discuss a little bit more now.

Sample Size

I performed some Wald tests on the estimated coefficients of the regression (see the last blog for the results). One of the determinants of the Wald statistic is the sample size: for a given estimated effect, a larger sample shrinks the standard error, which inflates the Wald statistic. A larger Wald statistic has a smaller associated p-value, meaning that the null hypothesis (in this case, that the coefficient is equal to 0) will be rejected more often. If I had a larger sample, more of my results would be significant.
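
To make that concrete, here's the arithmetic for a single coefficient (the numbers are purely illustrative, not from my regression):

beta_hat <- 0.48           # hypothetical coefficient estimate
se       <- 0.18           # its standard error; shrinks roughly as 1/sqrt(n)
z        <- beta_hat / se  # Wald statistic, so it grows roughly with sqrt(n)
2 * pnorm(-abs(z))         # two-sided p-value, here about 0.008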

Was my sample size too small? How much larger would my sample need to be to produce the statistical significance I wanted? The first question is trickier than the second. I began by reading some academic articles about sample sizes and found that there was a bit of a debate around the topic. For logistic regressions there is a rule of thumb that there need to be 10 events per predictor variable. Vittinghoff and McCulloch have argued that such a rule is too conservative, but others have argued the opposite.

In my case that meant that I needed 10 Conservative voters for every variable I had thrown into my regression (150 Conservative voters in total). My sample had a total of 238 such voters so it would seem that, despite my previous conclusion, my sample was sufficiently large to generate solid results. In fact, my sample size would be large enough to look at Labour voters as well (but I wouldn’t be able to do an analysis of Lib Dem voters).

But surely I could get better results if my sample size was much bigger? After all, there are some much larger surveys out there (which are admittedly much more difficult to get hold of). It turns out that increasing sample size has diminishing returns and that a sample as large as the one I had was about as good as it got. My results might have been a little bland, but that wasn’t necessarily a result of the size of the ESS.

Sample Weighting

Turning to sample weighting, the academic literature here was a bit more intimidating. Solon, Haider and Wooldridge show just how tricky the decision to weight samples actually is. Sometimes it is best to use weights in regressions and other times it makes everything worse. It seems academics have as much trouble and confusion around the topic as I have myself. This was reassuring if also deeply troubling.

My decision to include both weighted and unweighted regressions seems to have been the correct one: if in doubt, shrug and throw both of them into the mix. There are also tests that help decide whether it's appropriate to weight or not, but these seem a little advanced for me at the moment. This is definitely something I want to explore in more detail when I come to doing some linear regressions and have a clearer understanding of what I'm actually doing.

Graphs, Charts, and Excel

The last thing I want to discuss in this post is how to present regression results visually. I was discussing this project with a friend and they were very keen to get me to present my findings in something other than a table. I spent a lot of time in my second and third years at university staring at tables of regression coefficients and I was reluctant to try and present my findings in any other format.

I gave in and threw in some box plots that Excel had vomited up for me (after making them a little nicer to look at). Presenting the data in any other kind of graph didn't make much sense to me since all my variables were binary. If anyone knows of a better way to present my results then let me know!

I think that linear regressions are much easier to present visually than what I’ve just worked on. During my research I came across a lot of cool ways to present data and I’d love to get around to doing some of that soon. It looks like learning R might be a good idea, particularly for partial dependence plots, as purely relying on Excel might become a little limiting.

Final Thoughts

Performing a grisly post-mortem on my logistic regression was very useful and I think I might come back to this at some point in the future when I’ve got some more statistical experience under my belt. In hindsight I bit off a lot more than I could chew. With that in mind, I’m going to do some very basic stuff next: hypothesis tests and linear regressions. I’m not sure what the topic will be yet, but I’ve got a few ideas!

Stats Project: Class Voting Analysis

Return of the Stats Project

It’s been almost exactly a year since I started this blog and my statistics project. Life got in the way and I had to put this on the backburner (whoops). But I’m back and ready to kick some multivariate ass!

Last time I had identified my data source, the ESS, and had some ideas about how to proceed, if only I could get the data into Excel. A very helpful person commented on my post (blog?) saying that I could directly download the ESS datasets as an Excel spreadsheet. At the time I don’t think I was able to download the ESS Round 8 as a CSV file (I think you can now) so I settled for the ESS Round 7.

Class Voting

For my analysis I decided to use the work of Daniel Oesch, a political sociologist, to try and investigate the effects of class on voting behaviour in the UK (the 2010 general election to be precise). I was already familiar with Oesch’s unique class schema, but coding the ESS data into something workable seemed like quite the daunting task. Fortunately Oesch had some material on his website which allowed me to easily convert the ESS data (in ISCO format) into 8 different classes. Thanks!

With my goal in mind, I set about cleaning the dataset into something usable. I was only interested in a handful of variables: gender, age, education, public sector employment, class (defined by Oesch’s schema), and voting record. Using these variables I could keep my work as closely comparable to Oesch’s as possible and avoid making too many mistakes. My hope was that I could easily compare my results to his.

Many of the ESS responses had no data available for the variables I was going to investigate. Eventually my sample size had been reduced from 2265 to 735. The biggest culprit here was probably voter turnout. One third of my data was unusable as I was only interested in people who had actually voted. I knew this was a problem at the time, but only realised how big it was when I reached the end of my regression. I’ve got more to say on this but I will save it for another, more exciting, blog post! Wowee.

I began by looking at the probability of voting Conservative. I needed to use a logistic regression since my dependent variable (regressand) was a binary variable. Using a linear regression would have produced some pretty strange results. My regressors were also all binary variables, so I was mindful not to fall into the dummy variable trap. Finding a way to do a logistic regression in Excel without paying a fortune was a little difficult, but I overcame this minor hurdle and was finally ready to press the big red button.

When I initially chose to use ESS data I noted that they had sampling weights. At the time I thought this was great. I have since learned that there is a lot of disagreement and confusion about when it is appropriate to use weights, and that they can make results less reliable. On top of this, I was unsure if I was using the weights in my regression correctly. So I performed an unweighted regression and a weighted regression to see if the results were much different. I’ll come back to this next time.
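
For future reference, here's how I think both regressions would look in R (file and variable names are hypothetical, and pspwght is my assumption for the name of the ESS post-stratification weight):

ess <- read.csv("ess7_gb_cleaned.csv")   # hypothetical cleaned export of my working data
fit_u <- glm(voted_con ~ class + education + male + age_band + public_sector,
             family = binomial, data = ess)
# weighted version: R warns about non-integer weights with family = binomial,
# but fits the model anyway
fit_w <- update(fit_u, weights = pspwght)
exp(cbind(unweighted = coef(fit_u), weighted = coef(fit_w)))   # odds ratios, as tabled below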

With the results in front of me, I needed to know if the coefficients were significant to any degree. A handful of Wald tests later and … oh dear. In the unweighted analysis, only the coefficient on public sector employment was significant. In the weighted analysis, this remained significant, and the coefficients on vocational secondary education and managers were weakly significant. These results were disappointing and I decided not to pursue any further analysis, as my sample sadly seemed too small to obtain significant results.

Full results below:

GB 2010: Conservative Party

                                           Unweighted   Weighted
Class
  Socio-cultural specialists               1.32         1.60
  Service workers                          0.91         1.03
  Technical specialists                    0.82         1.14
  Production workers                       0.98         1.36
  Managers                                 1.73         2.05*
  Clerks (reference)
  Traditional bourgeoisie                  1.71         2.04
  Small business owners                    1.48         1.92
Education
  Compulsory or incomplete schooling (reference)
  Vocational secondary                     1.49         1.73*
  General secondary                        0.91         0.98
  Post-secondary, but no tertiary degree   0.68         0.73
  Tertiary degree                          1.17         1.13
Gender
  Male                                     0.91         1.00
Age
  20-35                                    0.66         0.72
  35-50 (reference)
  51-65                                    1.36         1.14
Sector
  Private (reference)
  Public                                   0.48**       0.54**
N                                          735          735

These figures are the odds ratios of the chance of voting for the Conservative party in the UK 2010 general election against not voting for them, with respect to the reference categories. *** Significant at the 0.001 level; ** at the 0.01 level; * at the 0.05 level. Data source: European Social Survey Round 7.

That table looked a lot nicer in Word!

[Chart: Conservative (weighted)]

[Chart: Conservative (unweighted)]

These two charts show the standardised coefficients with their 95% confidence intervals. I've shaded the significant results in green. I'm aware some of the labels are a little hard to read and the images are blurry.

I also had a little look at trying to represent the results graphically on the advice of a friend. That seemed to be more trouble than it was worth in this case and at this time. I might come back to that at a later date.

A Sad Conclusion

So my first regression didn’t go quite as planned. I had some problems with my sample size, I’m still not sure if I was using the sampling weights correctly, and I’m not as familiar with logistic regressions as I would like. But I’m not going to give up. I plan on writing another post about some of the specific issues I had, and possible solutions to them. Then I’m going to try and do some nice easy linear regressions. We can all breathe a sigh of relief.

Stats Project: Introduction

A Short Introduction

Welcome to my stats project! I’ve wanted to do some hands-on statistical analysis for years but I’ve never gotten round to it – until now. In this post (blog? article?), I want to outline what I’m planning to do, and in future posts I’ll discuss every painful stage of my little stats project. Hopefully this will be interesting.

So what is my project? Well, I’m going to play around with some data from the European Social Survey. I’m hoping to import the data from the ESS into Excel and R, and then perform some multivariate regressions. I’m not sure exactly what data I’m going to be looking at, but I suspect I’ll end up looking at the determinants of voting behaviour. I know the academic literature on political sociology quite well so I think I know what I’m doing.

The European Social Survey

I think a good place to start with this project is to have a little look at what the ESS actually is. I know that it’s used quite a lot in academic articles and that it’s a survey that covers lots of European countries. Other than that I’m really not sure. So let’s have a look!

The ESS has been around since 2001. It collects data every two years through face-to-face interviews. It’s been conducted in 35 European countries (unless I can’t count), but only 15 countries have participated in each round of the ESS. Not to worry. I only plan on having a look at data from the UK which has participated in all 8 rounds of the ESS. The questionnaire of the ESS is made up of two parts – a core module and a rotating module. It’s an absolute beast of a survey.

But how reliable is this survey? Can I trust any conclusions I draw from its data? Fortunately, the ESS seems to have thought quite a lot about this. The main issues I can understand (I’m not used to this) are sampling, measurement error, and non-response bias. I’m going to ignore methodological concerns for the time being and assume that the ESS has designed the perfect survey. I’ll probably come back to it in a future post. Bet you can’t wait for that one!

The Way Forward

Data from the ESS is publicly available but can only be downloaded in SAS, SPSS or STATA formats. This is my first obstacle. I know that you can take this data and squish it into Excel and R but I don’t have any idea how. I’ll have to use some googlefu to work that one out. I’ll return to this project in another blog thing when I’ve done that.
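
One possible route I've come across (completely untested by me) is R's haven package, which is supposed to read SPSS files directly:

library(haven)
ess_raw <- read_sav("ESS7GB.sav")    # hypothetical file name for the SPSS download
ess     <- as.data.frame(ess_raw)    # drop the labelled-vector wrappers for plain analysis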