1. I have been toying around with Kaggle's Million Song Dataset Challenge recently because I have some interest in collaborative filtering (using matrix factorization). I haven't made much progress with the competition (all 3 of my submissions are below the baseline), but I have learned a few things about dealing with large amounts of data.

    The goal of the competition is to predict the 500 most likely songs each of 110,000 users will listen to next. As the name implies, there are 1,000,000 songs in the full dataset. To simplify things, I decided to concentrate on the most popular songs. I created a 110,000 x 2,000 matrix of 0's and 1's. Row i, column j is 1 if user i had listened to song j (the jth most popular song) and 0 if user i had not. As you can imagine, there are a lot more 0's than 1's in this matrix. The first few rows and columns look like this:

    0 0 1 0 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 ...
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...
    1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ...
    ...

    This matrix was about 430 MB and took a while to load into MATLAB. So I wised up and created a sparse matrix. A sparse representation takes advantage of the fact that most of the values are 0 and does not store them; instead, it lists the locations of the non-zero elements along with their values. For example, this is what the first few rows of the sparse matrix look like:

    1 3 1
    1 7 1
    1 10 1
    1 13 1
    1 82 1
    1 717 1
    2 1111 1
    2 2972 1
    2 3516 1
    ...

    The first column is the row number, the second is the column number, and the third is the value at that location. In this application, all the values are 1. For this matrix, I used the 50,000 most popular songs (instead of just 2,000), and the file was much smaller -- just 17 MB.

    It is easy to load the sparse matrix into MATLAB with the spconvert command, and many of MATLAB's functions (like svds for the singular value decomposition) are designed to work efficiently with sparse matrices.
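
    For example, assuming the triplets are saved in a plain-text file (the file name here is hypothetical), loading and converting them looks roughly like this:

    % Minimal sketch -- the file name is hypothetical.
    % Each line of the file is one non-zero entry: row, column, value.
    triplets = load('user_song_triplets.txt');  % n-by-3 matrix of [row col value]
    X = spconvert(triplets);                    % build the sparse matrix from the triplet listing
    whos X                                      % confirms the matrix is stored as sparse
    nnz(X)                                      % number of non-zero entries (user-song listens)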

  2. Update: I have moved my blog to andland.github.io. Check it out for more recent posts. Thanks!

    Random forests™ are great. They are one of the best "black-box" supervised learning methods. If you have lots of data and lots of predictor variables, you could do a lot worse than random forests. They can deal with messy, real data. If there are lots of extraneous predictors, they have no problem. They also automatically do a good job of finding interactions. There are no assumptions that the response has a linear (or even smooth) relationship with the predictors.

    As cited in the Wikipedia article, they do lack some interpretability. But what they lack in interpretability, they more than make up for in predictive power, which I believe is much more important in most cases. Even though you cannot easily tell how one variable affects the prediction, you can easily create a partial dependence plot, which shows how the response changes as you change that predictor. You can also do this for two variables at once to see how the two interact.
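
    As a quick illustration (using R's built-in airquality data rather than anything from this post), the randomForest package can draw a one-variable partial dependence plot with partialPlot:

    # a minimal sketch on the built-in airquality data set
    library(randomForest)
    aq <- na.omit(airquality)
    rf <- randomForest(Ozone ~ ., data = aq, ntree = 500, importance = TRUE)
    # partial dependence of predicted Ozone on Wind
    partialPlot(rf, aq, Wind)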

    Also helping with interpretation is that random forests can output a list of the predictor variables they believe to be important in predicting the outcome. If nothing else, you can subset the data to only the most "important" variables and use them with another model. The randomForest package in R has two measures of importance. One is the "total decrease in node impurities from splitting on the variable, averaged over all trees." I do not know much about this one and will not discuss it further. The other is based on a permutation test. The idea is that if a variable is not important (the null hypothesis), then rearranging its values will not degrade prediction accuracy. Random forests use the out-of-bag (OOB) samples to measure that prediction accuracy.
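
    Continuing the sketch above, both measures come out of the same fit:

    importance(rf, type = 1)  # permutation-based: mean decrease in accuracy (%IncMSE)
    importance(rf, type = 2)  # impurity-based: total decrease in node impurity (IncNodePurity)
    varImpPlot(rf)            # plots both measures side by side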

    In my experience, the importance measure does a pretty good job of finding the most important predictors, but it has issues with correlated predictors. For example, I was working on a problem where I was predicting the price at which electricity trades. One feature that I knew would be very important was the amount of electricity being used at that same time. But I thought there might also be a relationship between the price and the electricity used a few hours before and after. When I ran the random forest with these variables, the electricity used one hour after was found to be more important than the electricity used at the same time. Yet when I included the one-hour-after electricity use instead of the current-hour use, the cross-validation (CV) error increased, and using both did not significantly change the CV error compared to using just the current hour. Because the electricity used at the current hour and at the hour after are highly correlated, the random forest had trouble telling which one was more important. In truth, given the electricity use at the current hour, the use one hour after did not improve the predictive accuracy.

    Why does the importance measure give more weight to correlated predictors? Strobl et al. give some intuition about what is happening and propose a solution. Basically, the permutation test is ill-posed: it tests whether the variable is independent of the response as well as of all the other predictors. Since correlated predictors are obviously not independent of each other, they get high importance scores even when they add nothing to the prediction. Strobl et al. propose a permutation test that conditions on the correlated predictors. This is a little tricky when the correlated predictors are continuous, but you can read the paper for the details.

    Another way to think of it is that, since each split only considers a random subset of the variables, a variable that is correlated with an "important" variable will sometimes be considered without the "important" one and will then be selected for the split. The correlated variable does hold some predictive value, but only because of the truly important variable, so it is understandable why this procedure would consider it important.

    I ran a simulation experiment (similar to the one in the paper) to demonstrate the issue. I simulated 5 predictor variables; only the first is related to the response, but the second has a correlation of about 0.7 with the first. Luckily, Strobl et al. implemented their version of importance in the party package in R. Below I compare the variable importance from the randomForest package with the party package's importance, computed both with and without taking correlated predictors into account.

    # simulate the data: only x1 is related to y,
    # x2 is correlated with x1 (about 0.7), and x3-x5 are pure noise
    x1 <- rnorm(1000)
    x2 <- rnorm(1000, x1, 1)
    y  <- 2 * x1 + rnorm(1000, 0, 0.5)
    df <- data.frame(y, x1, x2, x3 = rnorm(1000), x4 = rnorm(1000), x5 = rnorm(1000))
    
    # run the randomForest implementation
    library(randomForest)
    rf1 <- randomForest(y ~ ., data = df, mtry = 2, ntree = 50, importance = TRUE)
    importance(rf1, type = 1)  # permutation-based importance
    
    # run the party implementation
    library(party)
    cf1 <- cforest(y ~ ., data = df, control = cforest_unbiased(mtry = 2, ntree = 50))
    varimp(cf1)                      # unconditional permutation importance
    varimp(cf1, conditional = TRUE)  # conditional (correlation-adjusted) importance
    

    For randomForest, the ratio of the importance of the first variable to that of the second is 4.53. For party without accounting for correlation it is 7.35, and accounting for correlation it is 369.5. A higher ratio is better here because it means the importance of the first variable stands out more. party's implementation is clearly doing the job.
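
    For reference, the ratios above can be computed directly from the importance output of the code block (the exact values will vary from run to run since no random seed is set):

    # ratio of x1's importance to x2's importance for each method
    imp <- importance(rf1, type = 1)
    imp["x1", ] / imp["x2", ]
    vi <- varimp(cf1)
    vi["x1"] / vi["x2"]
    vic <- varimp(cf1, conditional = TRUE)
    vic["x1"] / vic["x2"]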

    There is a downside: it takes much longer to calculate the conditional importance than the unconditional one. For the party package in this example, it took 0.39 seconds to run without conditioning and 204.34 seconds with it. I could not even run the conditional importance on the electricity price example. There might be a research opportunity in finding a quicker approximation.
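
    The timing difference is easy to check with system.time (the exact numbers will of course depend on the machine):

    # compare how long the two importance calculations take
    system.time(varimp(cf1))
    system.time(varimp(cf1, conditional = TRUE))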

    Possibly up next: confidence limits for random forest predictions.
