Regularization Paths for Generalized Linear Models via Coordinate Descent

We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multinomial regression problems while the penalties include $\ell_1$ (the lasso), $\ell_2$ (ridge regression) and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.

Keywords: lasso, elastic net, logistic regression, $\ell_1$ penalty, regularization path, coordinate descent.

1. Introduction

The lasso (Tibshirani 1996) is a popular method for regression that uses an $\ell_1$ penalty to achieve a sparse solution. In the signal processing literature, the lasso is also known as basis pursuit (Chen et al. 1998). This idea has been broadly applied, for example to generalized linear models (Tibshirani 1996) and Cox's proportional hazard models for survival data (Tibshirani 1997). In recent years, there has been an enormous amount of research activity devoted to related regularization methods:

1. The grouped lasso (Yuan and Lin 2007; Meier et al. 2008), where variables are included or excluded in groups.

2. The Dantzig selector (Candes and Tao 2007, and discussion), a slightly modified version of the lasso.

3. The elastic net (Zou and Hastie 2005) for correlated variables, which uses a penalty that is part $\ell_1$, part $\ell_2$.

4. $\ell_1$ regularization paths for generalized linear models (Park and Hastie 2007a).

5. Methods using non-concave penalties, such as SCAD (Fan and Li 2005) and Friedman's generalized elastic net (Friedman 2008), enforce more severe variable selection than the lasso.

6. Regularization paths for the support-vector machine (Hastie et al. 2004).

7. The graphical lasso (Friedman et al. 2008) for sparse covariance estimation and undirected graphs.

Efron et al. (2004) developed an efficient algorithm for computing the entire regularization path for the lasso for linear regression models. Their algorithm exploits the fact that the coefficient profiles are piecewise linear, which leads to an algorithm with the same computational cost as the full least-squares fit on the data (see also Osborne et al. 2000).

In some of the extensions above (items 2, 3, and 6), piecewise-linearity can be exploited as in Efron et al. (2004) to yield efficient algorithms. Rosset and Zhu (2007) characterize the class of problems where piecewise-linearity exists: both the loss function and the penalty have to be quadratic or piecewise linear.

Here we instead focus on cyclical coordinate descent methods. These methods have been proposed for the lasso a number of times, but only recently was their power fully appreciated. Early references include Fu (1998), Shevade and Keerthi (2003) and Daubechies et al. (2004). Van der Kooij (2007) independently used coordinate descent for solving elastic-net penalized regression models. Recent rediscoveries include Friedman et al. (2007) and Wu and Lange (2008). The first paper recognized the value of solving the problem along an entire path of values for the regularization parameters, using the current estimates as warm starts. This strategy turns out to be remarkably efficient for this problem.
Several other researchers have also re-discovered coordinate descent, many for solving the same problems we address in this paper, notably Shevade and Keerthi (2003), Krishnapuram and Hartemink (2005), Genkin et al. (2007) and Wu et al. (2009).

In this paper we extend the work of Friedman et al. (2007) and develop fast algorithms for fitting generalized linear models with elastic-net penalties. In particular, our models include regression, two-class logistic regression, and multinomial regression problems. Our algorithms can work on very large datasets, and can take advantage of sparsity in the feature set. We provide a publicly available package glmnet (Friedman et al. 2009) implemented in the R programming system (R Development Core Team 2009). We do not revisit the well-established convergence properties of coordinate descent in convex problems (Tseng 2001) in this article.

Lasso procedures are frequently used in domains with very large datasets, such as genomics and web analysis. Consequently a focus of our research has been algorithmic efficiency and speed. We demonstrate through simulations that our procedures outperform all competitors, even those based on coordinate descent.

In Section 2 we present the algorithm for the elastic net, which includes the lasso and ridge regression as special cases. Sections 3 and 4 discuss (two-class) logistic regression and multinomial logistic regression. Comparative timings are presented in Section 5.

Although the title of this paper advertises regularization paths for GLMs, we only cover three important members of this family. However, exactly the same technology extends trivially to other members of the exponential family, such as the Poisson model. We plan to extend our software to cover these important other cases, as well as the Cox model for survival data. Note that this article is about algorithms for fitting particular families of models, and not about the statistical properties of these models themselves. Such discussions have taken place elsewhere.

2. Algorithms for the lasso, ridge regression and elastic net

We consider the usual setup for linear regression. We have a response variable $Y \in \mathbb{R}$ and a predictor vector $X \in \mathbb{R}^p$, and we approximate the regression function by a linear model $E(Y|X = x) = \beta_0 + x^\top\beta$. We have $N$ observation pairs $(x_i, y_i)$. For simplicity we assume the $x_{ij}$ are standardized: $\sum_{i=1}^{N} x_{ij} = 0$, $\frac{1}{N}\sum_{i=1}^{N} x_{ij}^2 = 1$, for $j = 1, \ldots, p$. Our algorithms generalize naturally to the unstandardized case. The elastic net solves the following problem

$$\min_{(\beta_0,\beta)\in\mathbb{R}^{p+1}} R_\lambda(\beta_0,\beta) = \min_{(\beta_0,\beta)\in\mathbb{R}^{p+1}} \left[ \frac{1}{2N}\sum_{i=1}^{N}(y_i - \beta_0 - x_i^\top\beta)^2 + \lambda P_\alpha(\beta) \right], \qquad (1)$$

where

$$P_\alpha(\beta) = (1-\alpha)\,\tfrac{1}{2}\|\beta\|_{\ell_2}^2 + \alpha\,\|\beta\|_{\ell_1} \qquad (2)$$

$$= \sum_{j=1}^{p}\left[\tfrac{1}{2}(1-\alpha)\beta_j^2 + \alpha|\beta_j|\right] \qquad (3)$$

is the elastic-net penalty (Zou and Hastie 2005), and is a compromise between the ridge-regression penalty ($\alpha = 0$) and the lasso penalty ($\alpha = 1$). (Zou and Hastie (2005) called this penalty the naive elastic net, and preferred a rescaled version which they called elastic net. We drop this distinction here.) This penalty is particularly useful in the $p \gg N$ situation, or any situation where there are many correlated predictor variables.
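To make the notation in (1)-(3) concrete, the following R sketch evaluates the elastic-net penalty and the objective for a given coefficient vector. It is illustrative only (the function names are ours, not part of glmnet), and it assumes, as in the text, that the columns of x are standardized.

```r
# Illustrative sketch of the elastic-net penalty (2)-(3) and objective (1).
# Assumes the columns of x are standardized; not part of the glmnet package.
elnet_penalty <- function(beta, alpha) {
  sum((1 - alpha) * beta^2 / 2 + alpha * abs(beta))
}

elnet_objective <- function(beta0, beta, x, y, lambda, alpha) {
  resid <- y - beta0 - drop(x %*% beta)
  sum(resid^2) / (2 * nrow(x)) + lambda * elnet_penalty(beta, alpha)
}
```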
Ridge regression is known to shrink the coefficients of correlated predictors towards each other, allowing them to borrow strength from each other. In the extreme case of $k$ identical predictors, they each get identical coefficients with $1/k$th the size that any single one would get if fit alone. From a Bayesian point of view, the ridge penalty is ideal if there are many predictors, and all have non-zero coefficients (drawn from a Gaussian distribution). Lasso, on the other hand, is somewhat indifferent to very correlated predictors, and will tend to pick one and ignore the rest. In the extreme case above, the lasso problem breaks down. The lasso penalty corresponds to a Laplace prior, which expects many coefficients to be close to zero, and a small subset to be larger and nonzero.

The elastic net with $\alpha = 1 - \varepsilon$ for some small $\varepsilon > 0$ performs much like the lasso, but removes any degeneracies and wild behavior caused by extreme correlations. More generally, the entire family $P_\alpha$ creates a useful compromise between ridge and lasso. As $\alpha$ increases from 0 to 1, for a given $\lambda$ the sparsity of the solution to (1) (i.e., the number of coefficients equal to zero) increases monotonically from 0 to the sparsity of the lasso solution.

Figure 1 shows an example that demonstrates the effect of varying $\alpha$. The dataset is from Golub et al. (1999), consisting of 72 observations on 3571 genes measured with DNA microarrays. The observations fall in two classes, so we use the penalties in conjunction with the logistic regression models of Section 3. The coefficient profiles from the first 10 steps (grid values for $\lambda$) for each of the three regularization methods are shown. The lasso penalty admits at most $N = 72$ genes into the model, while ridge regression gives all 3571 genes non-zero coefficients. The elastic-net penalty provides a compromise between these two, and has the effect of averaging genes that are highly correlated and then entering the averaged gene into the model. Using the algorithm described below, computation of the entire path of solutions for each method, at 100 values of the regularization parameter evenly spaced on the log-scale, took under a second in total. Because of the large number of non-zero coefficients for the ridge penalty, they are individually much smaller than the coefficients for the other methods.

Figure 1: Leukemia data: profiles of estimated coefficients for three methods, showing only the first 10 steps (values for $\lambda$) in each case. For the elastic net, $\alpha = 0.2$.

Consider a coordinate descent step for solving (1). That is, suppose we have estimates $\tilde\beta_0$ and $\tilde\beta_\ell$ for $\ell \ne j$, and we wish to partially optimize with respect to $\beta_j$. We would like to compute the gradient at $\beta_j = \tilde\beta_j$, which only exists if $\tilde\beta_j \ne 0$. If $\tilde\beta_j > 0$, then

$$\frac{\partial R_\lambda}{\partial \beta_j}\Big|_{\beta = \tilde\beta} = -\frac{1}{N}\sum_{i=1}^{N} x_{ij}(y_i - \tilde\beta_0 - x_i^\top\tilde\beta) + \lambda(1-\alpha)\tilde\beta_j + \lambda\alpha. \qquad (4)$$

A similar expression exists if $\tilde\beta_j < 0$, and $\tilde\beta_j = 0$ is treated separately. Simple calculus shows (Donoho and Johnstone 1994) that the coordinate-wise update has the form

$$\tilde\beta_j \leftarrow \frac{S\!\left(\frac{1}{N}\sum_{i=1}^{N} x_{ij}(y_i - \tilde y_i^{(j)}),\; \lambda\alpha\right)}{1 + \lambda(1-\alpha)}, \qquad (5)$$

where $\tilde y_i^{(j)} = \tilde\beta_0 + \sum_{\ell \ne j} x_{i\ell}\tilde\beta_\ell$ is the fitted value excluding the contribution from $x_{ij}$, and hence $y_i - \tilde y_i^{(j)}$ the partial residual for fitting $\beta_j$. Because of the standardization, $\frac{1}{N}\sum_{i=1}^{N} x_{ij}(y_i - \tilde y_i^{(j)})$ is the simple least-squares coefficient when fitting this partial residual to $x_{ij}$. Here $S(z, \gamma)$ is the soft-thresholding operator with value

$$\operatorname{sign}(z)(|z| - \gamma)_+ = \begin{cases} z - \gamma & \text{if } z > 0 \text{ and } \gamma < |z| \\ z + \gamma & \text{if } z < 0 \text{ and } \gamma < |z| \\ 0 & \text{if } \gamma \ge |z|. \end{cases} \qquad (6)$$

The details of this derivation are spelled out in Friedman et al. (2007).

Thus we compute the simple least-squares coefficient on the partial residual, apply soft-thresholding to take care of the lasso contribution to the penalty, and then apply a proportional shrinkage for the ridge penalty. This algorithm was suggested by Van der Kooij (2007).
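The update (5)-(6) can be written in a few lines of R. The sketch below is purely illustrative (the function names are ours; the glmnet implementation is in Fortran) and assumes standardized predictors, as in the derivation above.

```r
# Soft-thresholding operator S(z, gamma) of (6).
soft_threshold <- function(z, gamma) {
  sign(z) * pmax(abs(z) - gamma, 0)
}

# One coordinate-wise update (5) for a standardized predictor column xj,
# given the partial residual (y minus the fit excluding variable j).
update_beta_j <- function(xj, partial_resid, lambda, alpha) {
  z <- mean(xj * partial_resid)        # simple least-squares coefficient
  soft_threshold(z, lambda * alpha) / (1 + lambda * (1 - alpha))
}
```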
2.1. Naive updates

Looking more closely at (5), we see that

$$y_i - \tilde y_i^{(j)} = y_i - \hat y_i + x_{ij}\tilde\beta_j = r_i + x_{ij}\tilde\beta_j, \qquad (7)$$

where $\hat y_i$ is the current fit of the model for observation $i$, and hence $r_i$ the current residual. Thus

$$\frac{1}{N}\sum_{i=1}^{N} x_{ij}(y_i - \tilde y_i^{(j)}) = \frac{1}{N}\sum_{i=1}^{N} x_{ij} r_i + \tilde\beta_j, \qquad (8)$$

because the $x_j$ are standardized. The first term on the right-hand side is the gradient of the loss with respect to $\beta_j$. It is clear from (8) why coordinate descent is computationally efficient. Many coefficients are zero, remain zero after the thresholding, and so nothing needs to be changed. Such a step costs $O(N)$ operations: the sum to compute the gradient. On the other hand, if a coefficient does change after the thresholding, $r_i$ is changed in $O(N)$ and the step costs $O(2N)$. Thus a complete cycle through all $p$ variables costs $O(pN)$ operations. We refer to this as the naive algorithm, since it is generally less efficient than the covariance updating algorithm to follow. Later we use these algorithms in the context of iteratively reweighted least squares (IRLS), where the observation weights change frequently; there the naive algorithm dominates.
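A minimal R sketch of the naive updating scheme is given below. It keeps the residual vector current and only touches it when a coefficient actually changes, as described above; the active-set iteration, covariance updating and other refinements used by glmnet (whose numerical work is in Fortran) are deliberately omitted, and it assumes standardized predictors and a centered response with no intercept.

```r
# Naive-updating cyclical coordinate descent for the elastic net (Section 2.1).
# Illustrative only: assumes standardized columns of x and centered y, and
# omits glmnet's active-set, warm-start and covariance-updating refinements.
elnet_naive <- function(x, y, lambda, alpha, beta = rep(0, ncol(x)),
                        maxit = 100, tol = 1e-7) {
  n <- nrow(x)
  r <- y - drop(x %*% beta)                 # current residuals
  for (it in seq_len(maxit)) {
    max_delta <- 0
    for (j in seq_len(ncol(x))) {
      z  <- sum(x[, j] * r) / n + beta[j]   # (8): gradient term plus old beta_j
      bj <- sign(z) * max(abs(z) - lambda * alpha, 0) / (1 + lambda * (1 - alpha))
      if (bj != beta[j]) {
        r <- r - x[, j] * (bj - beta[j])    # O(N) residual update, cf. (7)
        max_delta <- max(max_delta, abs(bj - beta[j]))
        beta[j] <- bj
      }
    }
    if (max_delta < tol) break
  }
  beta
}
```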
2.2. Covariance updates

Further efficiencies can be achieved in computing the updates in (8). We can write the first term on the right (up to a factor $1/N$) as

$$\sum_{i=1}^{N} x_{ij} r_i = \langle x_j, y\rangle - \sum_{k:\,|\tilde\beta_k| > 0} \langle x_j, x_k\rangle\,\tilde\beta_k, \qquad (9)$$

where $\langle x_j, y\rangle = \sum_{i=1}^{N} x_{ij} y_i$. Hence we need to compute inner products of each feature with $y$ initially, and then each time a new feature $x_k$ enters the model (for the first time), we need to compute and store its inner product with all the rest of the features ($O(Np)$ operations). We also store the $p$ gradient components (9). If one of the coefficients currently in the model changes, we can update each gradient in $O(p)$ operations. Hence with $m$ non-zero terms in the model, a complete cycle costs $O(pm)$ operations if no new variables become non-zero, and costs $O(Np)$ for each new variable entered. Importantly, $O(N)$ calculations do not have to be made at every step. This is the case for all penalized procedures with squared error loss.

2.3. Sparse updates

We are sometimes faced with problems where the $N \times p$ feature matrix $X$ is extremely sparse. A leading example is from document classification, where the feature vector uses the so-called "bag-of-words" model. Each document is scored for the presence/absence of each of the words in the entire dictionary under consideration (sometimes counts are used, or some transformation of counts). Since most words are absent, the feature vector for each document is mostly zero, and so the entire matrix is mostly zero. We store such matrices efficiently in sparse column format, where we store only the non-zero entries and the coordinates where they occur.

Coordinate descent is ideally set up to exploit such sparsity, in an obvious way. The $O(N)$ inner-product operations in either the naive or covariance updates can exploit the sparsity, by summing over only the non-zero entries. Note that in this case scaling of the variables will not alter the sparsity, but centering will. So scaling is performed up front, but the centering is incorporated in the algorithm in an efficient and obvious manner.

2.4. Weighted updates

Often a weight $w_i$ (other than $1/N$) is associated with each observation. This will arise naturally in later sections where observations receive weights in the IRLS algorithm. In this case the update step (5) becomes only slightly more complicated:

$$\tilde\beta_j \leftarrow \frac{S\!\left(\sum_{i=1}^{N} w_i x_{ij}(y_i - \tilde y_i^{(j)}),\; \lambda\alpha\right)}{\sum_{i=1}^{N} w_i x_{ij}^2 + \lambda(1-\alpha)}. \qquad (10)$$

If the $x_j$ are not standardized, there is a similar sum-of-squares term in the denominator (even without weights). The presence of weights does not change the computational costs of either algorithm much, as long as the weights remain fixed.

2.5. Pathwise coordinate descent

We compute the solutions for a decreasing sequence of values for $\lambda$, starting at the smallest value $\lambda_{\max}$ for which the entire vector $\tilde\beta = 0$. Apart from giving us a path of solutions, this scheme exploits warm starts, and leads to a more stable algorithm. We have examples where it is faster to compute the path down to $\lambda$ (for small $\lambda$) than the solution only at that value for $\lambda$.

When $\tilde\beta = 0$, we see from (5) that $\tilde\beta_j$ will stay zero if $\frac{1}{N}|\langle x_j, y\rangle| < \lambda\alpha$. Hence $N\alpha\lambda_{\max} = \max_\ell |\langle x_\ell, y\rangle|$. Our strategy is to select a minimum value $\lambda_{\min} = \epsilon\lambda_{\max}$, and construct a sequence of $K$ values of $\lambda$ decreasing from $\lambda_{\max}$ to $\lambda_{\min}$ on the log scale. Typical values are $\epsilon = 0.001$ and $K = 100$.
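The default $\lambda$ sequence of Section 2.5 is easy to reproduce. The sketch below is ours, not taken from the glmnet code; it assumes a centered response and standardized predictors, and requires $\alpha > 0$ (for $\alpha = 0$ the formula gives no finite $\lambda_{\max}$, and some cap must be chosen, a case not covered here).

```r
# Sketch of the default lambda sequence (Section 2.5): lambda_max is the
# smallest value for which all coefficients are zero, and K values are taken
# on a log scale down to epsilon * lambda_max. Assumes centered y, standardized
# columns of x, and alpha > 0.
lambda_sequence <- function(x, y, alpha, K = 100, epsilon = 0.001) {
  n <- nrow(x)
  lambda_max <- max(abs(crossprod(x, y))) / (n * alpha)
  exp(seq(log(lambda_max), log(epsilon * lambda_max), length.out = K))
}
```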
2.6. Other details

Irrespective of whether the variables are standardized to have variance 1, we always center each predictor variable. Since the intercept is not regularized, this means that $\hat\beta_0 = \bar y$, the mean of the $y_i$, for all values of $\alpha$ and $\lambda$.

It is easy to allow different penalties $\lambda_j$ for each of the variables. We implement this via a penalty scaling parameter $\gamma_j \ge 0$. If $\gamma_j > 0$, then the penalty applied to $\beta_j$ is $\lambda_j = \lambda\gamma_j$. If $\gamma_j = 0$, that variable does not get penalized, and always enters the model unrestricted at the first step and remains in the model. Penalty rescaling would also allow, for example, our software to be used to implement the adaptive lasso (Zou 2006).

Considerable speedup is obtained by organizing the iterations around the active set of features, those with nonzero coefficients. After a complete cycle through all the variables, we iterate on only the active set till convergence. If another complete cycle does not change the active set, we are done, otherwise the process is repeated. Active-set convergence is also mentioned in Meier et al. (2008) and Krishnapuram and Hartemink (2005).

3. Regularized logistic regression

When the response variable is binary, the linear logistic regression model is often used. Denote by $G$ the response variable, taking values in $\mathcal{G} = \{1, 2\}$ (the labeling of the elements is arbitrary). The logistic regression model represents the class-conditional probabilities through a linear function of the predictors

$$\Pr(G = 1|x) = \frac{1}{1 + e^{-(\beta_0 + x^\top\beta)}}, \qquad (11)$$

$$\Pr(G = 2|x) = \frac{1}{1 + e^{+(\beta_0 + x^\top\beta)}} = 1 - \Pr(G = 1|x).$$

Alternatively, this implies that

$$\log\frac{\Pr(G = 1|x)}{\Pr(G = 2|x)} = \beta_0 + x^\top\beta. \qquad (12)$$

Here we fit this model by regularized maximum (binomial) likelihood. Let $p(x_i) = \Pr(G = 1|x_i)$ be the probability (11) for observation $i$ at a particular value for the parameters $(\beta_0, \beta)$, then we maximize the penalized log likelihood

$$\max_{(\beta_0,\beta)\in\mathbb{R}^{p+1}} \left[ \frac{1}{N}\sum_{i=1}^{N}\big\{ I(g_i = 1)\log p(x_i) + I(g_i = 2)\log(1 - p(x_i))\big\} - \lambda P_\alpha(\beta) \right]. \qquad (13)$$

Denoting $y_i = I(g_i = 1)$, the log-likelihood part of (13) can be written in the more explicit form

$$\ell(\beta_0,\beta) = \frac{1}{N}\sum_{i=1}^{N}\left[ y_i\,(\beta_0 + x_i^\top\beta) - \log\!\left(1 + e^{\beta_0 + x_i^\top\beta}\right)\right], \qquad (14)$$

a concave function of the parameters. The Newton algorithm for maximizing the (unpenalized) log-likelihood (14) amounts to iteratively reweighted least squares. Hence if the current estimates of the parameters are $(\tilde\beta_0, \tilde\beta)$, we form a quadratic approximation to the log-likelihood (Taylor expansion about current estimates), which is

$$\ell_Q(\beta_0,\beta) = -\frac{1}{2N}\sum_{i=1}^{N} w_i (z_i - \beta_0 - x_i^\top\beta)^2 + C(\tilde\beta_0,\tilde\beta), \qquad (15)$$

where

$$z_i = \tilde\beta_0 + x_i^\top\tilde\beta + \frac{y_i - \tilde p(x_i)}{\tilde p(x_i)(1 - \tilde p(x_i))} \quad \text{(working response)}, \qquad (16)$$

$$w_i = \tilde p(x_i)(1 - \tilde p(x_i)) \quad \text{(weights)}, \qquad (17)$$

and $\tilde p(x_i)$ is evaluated at the current parameters. The last term is constant. The Newton update is obtained by minimizing $\ell_Q$.

Our approach is similar. For each value of $\lambda$, we create an outer loop which computes the quadratic approximation $\ell_Q$ about the current parameters $(\tilde\beta_0, \tilde\beta)$. Then we use coordinate descent to solve the penalized weighted least-squares problem

$$\min_{(\beta_0,\beta)\in\mathbb{R}^{p+1}} \{-\ell_Q(\beta_0,\beta) + \lambda P_\alpha(\beta)\}. \qquad (18)$$

This amounts to a sequence of nested loops:

outer loop: Decrement $\lambda$.

middle loop: Update the quadratic approximation $\ell_Q$ using the current parameters $(\tilde\beta_0, \tilde\beta)$.

inner loop: Run the coordinate descent algorithm on the penalized weighted-least-squares problem (18).

There are several important details in the implementation of this algorithm.

When $p \gg N$, one cannot run $\lambda$ all the way to zero, because the saturated logistic regression fit is undefined (parameters wander off to $\pm\infty$ in order to achieve probabilities of 0 or 1). Hence the default $\lambda$ sequence runs down to $\lambda_{\min} = \epsilon\lambda_{\max} > 0$.

Care is taken to avoid coefficients diverging in order to achieve fitted probabilities of 0 or 1. When a probability is within $\varepsilon = 10^{-5}$ of 1, we set it to 1, and set the weights to $\varepsilon$. 0 is treated similarly.

Our code has an option to approximate the Hessian terms by an exact upper-bound. This is obtained by setting the $w_i$ in (17) all equal to 0.25 (Krishnapuram and Hartemink 2005).

We allow the response data to be supplied in the form of a two-column matrix of counts, sometimes referred to as grouped data. We discuss this in more detail in Section 4.2.

The Newton algorithm is not guaranteed to converge without step-size optimization (Lee et al. 2006). Our code does not implement any checks for divergence; this would slow it down, and when used as recommended we do not feel it is necessary. We have a closed form expression for the starting solutions, and each subsequent solution is warm-started from the previous close-by solution, which generally makes the quadratic approximations very accurate. We have not encountered any divergence problems so far.
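The nested loops above can be sketched in R as follows, for a single value of $\lambda$. This is a schematic of the middle and inner loops only, using the weighted update (10) with the working response (16) and weights (17); the probability clamping, active sets, convergence criteria and warm starts across $\lambda$ used by the actual glmnet code are omitted, and the function name is ours.

```r
# Schematic IRLS + coordinate descent for penalized logistic regression at one
# lambda (Sections 3 and 2.4). Assumes standardized x and y coded 0/1; omits
# glmnet's probability clamping, active sets and warm starts across lambda.
logistic_elnet_one_lambda <- function(x, y, lambda, alpha,
                                      beta0 = 0, beta = rep(0, ncol(x)),
                                      outer_iter = 25, inner_iter = 100) {
  n <- nrow(x)
  for (m in seq_len(outer_iter)) {            # middle loop: refresh l_Q
    eta <- beta0 + drop(x %*% beta)
    p   <- 1 / (1 + exp(-eta))
    w   <- p * (1 - p)                        # weights (17)
    z   <- eta + (y - p) / w                  # working response (16)
    r   <- z - eta                            # working residuals
    for (it in seq_len(inner_iter)) {         # inner loop: coordinate descent on (18)
      delta <- 0
      for (j in seq_len(ncol(x))) {
        wxj <- sum(w * x[, j]^2) / n
        zj  <- sum(w * x[, j] * r) / n + wxj * beta[j]
        bj  <- sign(zj) * max(abs(zj) - lambda * alpha, 0) /
               (wxj + lambda * (1 - alpha))   # weighted update (10)
        if (bj != beta[j]) {
          r     <- r - x[, j] * (bj - beta[j])
          delta <- max(delta, abs(bj - beta[j]))
          beta[j] <- bj
        }
      }
      b0_new <- beta0 + sum(w * r) / sum(w)   # unpenalized intercept update
      delta  <- max(delta, abs(b0_new - beta0))
      r      <- r - (b0_new - beta0)
      beta0  <- b0_new
      if (delta < 1e-7) break
    }
  }
  list(beta0 = beta0, beta = beta)
}
```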
4. Regularized multinomial regression

When the categorical response variable $G$ has $K > 2$ levels, the linear logistic regression model can be generalized to a multi-logit model. The traditional approach is to extend (12) to $K - 1$ logits

$$\log\frac{\Pr(G = \ell|x)}{\Pr(G = K|x)} = \beta_{0\ell} + x^\top\beta_\ell, \quad \ell = 1, \ldots, K - 1. \qquad (19)$$

Here $\beta_\ell$ is a $p$-vector of coefficients. As in Zhu and Hastie (2004), here we choose a more symmetric approach. We model

$$\Pr(G = \ell|x) = \frac{e^{\beta_{0\ell} + x^\top\beta_\ell}}{\sum_{k=1}^{K} e^{\beta_{0k} + x^\top\beta_k}}. \qquad (20)$$

This parametrization is not estimable without constraints, because for any values for the parameters $\{\beta_{0\ell}, \beta_\ell\}_1^K$, $\{\beta_{0\ell} - c_0, \beta_\ell - c\}_1^K$ give identical probabilities (20). Regularization deals with this ambiguity in a natural way; see Section 4.1 below.

We fit the model (20) by regularized maximum (multinomial) likelihood. Using a similar notation as before, let $p_\ell(x_i) = \Pr(G = \ell|x_i)$, and let $g_i \in \{1, 2, \ldots, K\}$ be the $i$th response. We maximize the penalized log-likelihood

$$\max_{\{\beta_{0\ell},\beta_\ell\}_1^K \in \mathbb{R}^{K(p+1)}} \left[ \frac{1}{N}\sum_{i=1}^{N}\log p_{g_i}(x_i) - \lambda\sum_{\ell=1}^{K} P_\alpha(\beta_\ell) \right]. \qquad (21)$$

Denote by $Y$ the $N \times K$ indicator response matrix, with elements $y_{i\ell} = I(g_i = \ell)$. Then we can write the log-likelihood part of (21) in the more explicit form

$$\ell(\{\beta_{0\ell},\beta_\ell\}_1^K) = \frac{1}{N}\sum_{i=1}^{N}\left[\sum_{\ell=1}^{K} y_{i\ell}(\beta_{0\ell} + x_i^\top\beta_\ell) - \log\!\left(\sum_{\ell=1}^{K} e^{\beta_{0\ell} + x_i^\top\beta_\ell}\right)\right]. \qquad (22)$$

The Newton algorithm for multinomial regression can be tedious, because of the vector nature of the response observations. Instead of weights $w_i$ as in (17), we get weight matrices, for example. However, in the spirit of coordinate descent, we can avoid these complexities. We perform partial Newton steps by forming a partial quadratic approximation to the log-likelihood (22), allowing only $(\beta_{0\ell}, \beta_\ell)$ to vary for a single class at a time. It is not hard to show that this is

$$\ell_{Q\ell}(\beta_{0\ell},\beta_\ell) = -\frac{1}{2N}\sum_{i=1}^{N} w_{i\ell}(z_{i\ell} - \beta_{0\ell} - x_i^\top\beta_\ell)^2 + C(\{\tilde\beta_{0k},\tilde\beta_k\}_1^K), \qquad (23)$$

where as before

$$z_{i\ell} = \tilde\beta_{0\ell} + x_i^\top\tilde\beta_\ell + \frac{y_{i\ell} - \tilde p_\ell(x_i)}{\tilde p_\ell(x_i)(1 - \tilde p_\ell(x_i))}, \qquad (24)$$

$$w_{i\ell} = \tilde p_\ell(x_i)(1 - \tilde p_\ell(x_i)). \qquad (25)$$

Our approach is similar to the two-class case, except now we have to cycle over the classes as well in the outer loop. For each value of $\lambda$, we create an outer loop which cycles over $\ell$ and computes the partial quadratic approximation $\ell_{Q\ell}$ about the current parameters $(\tilde\beta_0, \tilde\beta)$. Then we use coordinate descent to solve the penalized weighted least-squares problem

$$\min_{(\beta_{0\ell},\beta_\ell)\in\mathbb{R}^{p+1}} \{-\ell_{Q\ell}(\beta_{0\ell},\beta_\ell) + \lambda P_\alpha(\beta_\ell)\}. \qquad (26)$$

This amounts to the sequence of nested loops:

outer loop: Decrement $\lambda$.

middle loop (outer): Cycle over $\ell \in \{1, 2, \ldots, K, 1, 2, \ldots\}$.

middle loop (inner): Update the quadratic approximation $\ell_{Q\ell}$ using the current parameters $\{\tilde\beta_{0k}, \tilde\beta_k\}_1^K$.

inner loop: Run the coordinate descent algorithm on the penalized weighted-least-squares problem (26).

4.1. Regularization and parameter ambiguity

As was pointed out earlier, if $\{\beta_{0\ell}, \beta_\ell\}_1^K$ characterizes a fitted model for (20), then $\{\beta_{0\ell} - c_0, \beta_\ell - c\}_1^K$ gives an identical fit ($c$ is a $p$-vector). Although this means that the log-likelihood part of (21) is insensitive to $(c_0, c)$, the penalty is not. In particular, we can always improve an estimate $\{\beta_{0\ell}, \beta_\ell\}_1^K$ (w.r.t. (21)) by solving

$$\min_{c\in\mathbb{R}^p} \sum_{\ell=1}^{K} P_\alpha(\beta_\ell - c). \qquad (27)$$

This can be done separately for each coordinate, hence

$$c_j = \arg\min_t \sum_{\ell=1}^{K}\left[\tfrac{1}{2}(1-\alpha)(\beta_{j\ell} - t)^2 + \alpha|\beta_{j\ell} - t|\right]. \qquad (28)$$

Theorem 1. Consider problem (28) for values $\alpha \in [0, 1]$. Let $\bar\beta_j$ be the mean of the $\beta_{j\ell}$, and $\beta_j^M$ a median of the $\beta_{j\ell}$ (and for simplicity assume $\bar\beta_j \le \beta_j^M$). Then we have

$$c_j \in [\bar\beta_j,\, \beta_j^M], \qquad (29)$$

with the left endpoint achieved if $\alpha = 0$, and the right if $\alpha = 1$.

The two endpoints are obvious. The proof of Theorem 1 is given in Appendix A. A consequence of the theorem is that a very simple search algorithm can be used to solve (28). The objective is piecewise quadratic, with knots defined by the $\beta_{j\ell}$. We need only evaluate solutions in the intervals including the mean and median, and those in between. We recenter the parameters in each index set $j$ after each inner middle loop step, using the solution $c_j$ for each $j$.

Not all the parameters in our model are regularized. The intercepts $\beta_{0\ell}$ are not, and with our penalty modifiers (Section 2.6) others need not be as well. For these parameters we use mean centering.
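The search algorithm implied by Theorem 1 is short enough to write out. The R sketch below is ours: it evaluates the piecewise-quadratic objective (28) at the knots and at the stationary point of each quadratic segment (cf. equation (34) in Appendix A), and for simplicity checks every segment rather than only those between the mean and a median, which the theorem shows would suffice.

```r
# Sketch of solving the recentering problem (28) for one coordinate j.
# beta_j is the length-K vector (beta_{j1}, ..., beta_{jK}).
recenter_constant <- function(beta_j, alpha) {
  if (alpha >= 1) return(median(beta_j))          # lasso endpoint of Theorem 1
  if (alpha <= 0) return(mean(beta_j))            # ridge endpoint of Theorem 1
  obj <- function(t) sum(0.5 * (1 - alpha) * (beta_j - t)^2 + alpha * abs(beta_j - t))
  K <- length(beta_j)
  knots <- sort(beta_j)
  candidates <- knots
  for (i in seq_len(K - 1)) {
    mid <- (knots[i] + knots[i + 1]) / 2          # signs are constant on this segment
    s <- sign(beta_j - mid)
    t_star <- mean(beta_j) + alpha * sum(s) / ((1 - alpha) * K)  # stationary point, cf. (34)
    if (t_star > knots[i] && t_star < knots[i + 1]) candidates <- c(candidates, t_star)
  }
  candidates[which.min(vapply(candidates, obj, numeric(1)))]
}
```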
4.2. Grouped and matrix responses

As in the two-class case, the data can be presented in the form of a $N \times K$ matrix $m_{i\ell}$ of non-negative numbers. For example, if the data are grouped: at each $x_i$ we have a number of multinomial samples, with $m_{i\ell}$ falling into category $\ell$. In this case we divide each row by the row-sum $m_i = \sum_\ell m_{i\ell}$, and produce our response matrix $y_{i\ell} = m_{i\ell}/m_i$. $m_i$ becomes an observation weight. Our penalized maximum likelihood algorithm changes in a trivial way. The working response (24) is defined exactly the same way (using $y_{i\ell}$ just defined). The weights in (25) get augmented with the observation weight $m_i$:

$$w_{i\ell} = m_i\,\tilde p_\ell(x_i)(1 - \tilde p_\ell(x_i)). \qquad (30)$$

Equivalently, the data can be presented directly as a matrix of class proportions, along with a weight vector. From the point of view of the algorithm, any matrix of positive numbers and any non-negative weight vector will be treated in the same way.

5. Timings

In this section we compare the run times of the coordinate-wise algorithm to some competing algorithms. These use the lasso penalty ($\alpha = 1$) in both the regression and logistic regression settings. All timings were carried out on an Intel Xeon 2.80 GHz processor.

We do not perform comparisons on the elastic net versions of the penalties, since there is not much software available for elastic net. Comparisons of our glmnet code with the R package elasticnet will mimic the comparisons with lars (Hastie and Efron 2007) for the lasso, since elasticnet (Zou and Hastie 2004) is built on the lars package.

5.1. Regression with the lasso

We generated Gaussian data with $N$ observations and $p$ predictors, with each pair of predictors $X_j, X_{j'}$ having the same population correlation $\rho$. We tried a number of combinations of $N$ and $p$, with $\rho$ varying from zero to 0.95. The outcome values were generated by

$$Y = \sum_{j=1}^{p} X_j\beta_j + k\cdot Z, \qquad (31)$$

where $\beta_j = (-1)^j\exp(-2(j-1)/20)$, $Z \sim N(0, 1)$ and $k$ is chosen so that the signal-to-noise ratio is 3.0. The coefficients are constructed to have alternating signs and to be exponentially decreasing.
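For reference, an R sketch of this data-generating model is shown below. It is our own reconstruction, not the authors' benchmark script: the shared-latent-factor device used to induce the common pairwise correlation, and the use of the variance ratio as the definition of signal-to-noise, are assumptions not spelled out in the paper.

```r
# Sketch of the simulation model (31): equicorrelated Gaussian predictors,
# alternating exponentially decaying coefficients, noise scaled to SNR = 3.
# The equicorrelation construction and SNR convention are our assumptions.
simulate_gaussian <- function(N, p, rho, snr = 3) {
  common <- rnorm(N)                                  # shared factor => pairwise cor = rho
  x <- sqrt(rho) * matrix(common, N, p) +
       sqrt(1 - rho) * matrix(rnorm(N * p), N, p)
  beta <- (-1)^(1:p) * exp(-2 * ((1:p) - 1) / 20)
  signal <- drop(x %*% beta)
  k <- sqrt(var(signal) / snr)                        # scale noise to the target SNR
  list(x = x, y = signal + k * rnorm(N), beta = beta)
}
```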
Table 1 shows the average CPU timings for the coordinate-wise algorithm, and the lars procedure (Efron et al. 2004). All algorithms are implemented as R functions. The coordinate-wise algorithm does all of its numerical work in Fortran, while lars (Hastie and Efron 2007) does much of its work in R, calling Fortran routines for some matrix operations. However comparisons in Friedman et al. (2007) showed that lars was actually faster than a version coded entirely in Fortran. Comparisons between different programs are always tricky: in particular the lars procedure computes the entire path of solutions, while the coordinate-wise procedure solves the problem for a set of pre-defined points along the solution path. In the orthogonal case, lars takes min(N, p) steps: hence to make things roughly comparable, we called the latter two algorithms to solve a total of min(N, p) problems along the path.

Table 1 shows timings in seconds averaged over three runs. We see that glmnet is considerably faster than lars; the covariance-updating version of the algorithm is a little faster than the naive version when N > p and a little slower when p > N. We had expected that high correlation between the features would increase the run time of glmnet, but this does not seem to be the case.

Table 1: Timings (in seconds) for glmnet and lars algorithms for linear regression with lasso penalty. The first line is glmnet using naive updating while the second uses covariance updating. Total time for 100 $\lambda$ values, averaged over 3 runs.

Linear regression: dense features

Correlation                     0      0.1    0.2    0.5    0.9    0.95
N = 1000, p = 100
  glmnet (type = "naive")       0.05   0.06   0.06   0.09   0.08   0.07
  glmnet (type = "cov")         0.02   0.02   0.02   0.02   0.02   0.02
  lars                          0.11   0.11   0.11   0.11   0.11   0.11
N = 5000, p = 100
  glmnet (type = "naive")       0.24   0.25   0.26   0.34   0.32   0.31
  glmnet (type = "cov")         0.05   0.05   0.05   0.05   0.05   0.05
  lars                          0.29   0.29   0.29   0.30   0.29   0.29
N = 100, p = 1000
  glmnet (type = "naive")       0.04   0.05   0.04   0.05   0.04   0.03
  glmnet (type = "cov")         0.07   0.08   0.07   0.08   0.04   0.03
  lars                          0.73   0.72   0.68   0.71   0.71   0.67
N = 100, p = 5000
  glmnet (type = "naive")       0.20   0.18   0.21   0.23   0.21   0.14
  glmnet (type = "cov")         0.46   0.42   0.51   0.48   0.25   0.10
  lars                          3.73   3.53   3.59   3.47   3.90   3.52
N = 100, p = 20000
  glmnet (type = "naive")       1.00   0.99   1.06   1.29   1.17   0.97
  glmnet (type = "cov")         1.86   2.26   2.34   2.59   1.24   0.79
  lars                          18.30  17.90  16.90  18.03  17.91  16.39
N = 100, p = 50000
  glmnet (type = "naive")       2.66   2.46   2.84   3.53   3.39   2.43
  glmnet (type = "cov")         5.50   4.92   6.13   7.35   4.52   2.53
  lars                          58.68  64.00  64.79  58.20  66.39  79.79

5.2. Lasso-logistic regression

We used the same simulation setup as above, except that we took the continuous outcome $y$, defined $p = 1/(1 + \exp(-y))$ and used this to generate a two-class outcome $z$ with $\Pr(z = 1) = p$, $\Pr(z = 0) = 1 - p$. We compared the speed of glmnet to the interior point method l1logreg (Koh et al. 2007b,a), Bayesian binary regression (BBR, Madigan and Lewis 2007; Genkin et al. 2007), and the lasso penalized logistic program LPL supplied by Ken Lange (see Wu and Lange 2008). The latter two methods also use a coordinate descent approach.

The BBR software automatically performs ten-fold cross-validation when given a set of $\lambda$ values. Hence we report the total time for ten-fold cross-validation for all methods using the same 100 $\lambda$ values for all. Table 2 shows the results; in some cases, we omitted a method when it was seen to be very slow at smaller values for N or p. Again we see that glmnet is the clear winner: it slows down a little under high correlation. The computation seems to be roughly linear in N, but grows faster than linear in p.

Table 2: Timings (seconds) for logistic models with lasso penalty. Total time for tenfold cross-validation over a grid of 100 $\lambda$ values.

Logistic regression: dense features

Correlation          0        0.1      0.2      0.5      0.9      0.95
N = 1000, p = 100
  glmnet             1.65     1.81     2.31     3.87     5.99     8.48
  l1logreg           31.475   31.86    34.35    32.21    31.85    31.81
  BBR                40.70    47.57    54.18    70.06    106.72   121.41
  LPL                24.68    31.64    47.99    170.77   741.00   1448.25
N = 5000, p = 100
  glmnet             7.89     8.48     9.01     13.39    26.68    26.36
  l1logreg           239.88   232.00   229.62   229.49   22.19    223.09
N = 100,000, p = 100
  glmnet             78.56    178.45   205.94   274.33   552.48   638.50
N = 100, p = 1000
  glmnet             1.06     1.07     1.09     1.45     1.72     1.37
  l1logreg           25.99    26.40    25.67    26.49    24.34    20.16
  BBR                70.19    71.19    78.40    103.77   149.05   113.87
  LPL                11.02    10.87    10.76    16.34    41.84    70.50
N = 100, p = 5000
  glmnet             5.24     4.43     5.12     7.05     7.87     6.05
  l1logreg           165.02   161.90   163.25   166.50   151.91   135.28
N = 100, p = 100,000
  glmnet             137.27   139.40   146.55   197.98   219.65   201.93

Table 3 shows some results when the feature matrix is sparse: we randomly set 95% of the feature values to zero. Again, the glmnet procedure is significantly faster than l1logreg.

Table 3: Timings (seconds) for logistic model with lasso penalty and sparse features (95% zero). Total time for ten-fold cross-validation over a grid of 100 $\lambda$ values.

Logistic regression: sparse features

Correlation          0        0.1      0.2      0.5      0.9      0.95
N = 1000, p = 100
  glmnet             0.77     0.74     0.72     0.73     0.84     0.88
  l1logreg           5.19     5.21     5.14     5.40     6.14     6.26
  BBR                2.01     1.95     1.98     2.06     2.73     2.88
N = 100, p = 1000
  glmnet             1.81     1.73     1.55     1.70     1.63     1.55
  l1logreg           7.67     7.72     7.64     9.04     9.81     9.40
  BBR                4.66     4.58     4.68     5.15     5.78     5.53
N = 10,000, p = 100
  glmnet             3.21     3.02     2.95     3.25     4.58     5.08
  l1logreg           45.87    46.63    44.33    43.99    45.60    43.16
  BBR                11.80    11.64    11.58    13.30    12.46    11.83
N = 100, p = 10,000
  glmnet             10.18    10.35    9.93     10.04    9.02     8.91
  l1logreg           130.27   124.88   124.18   129.84   137.21   159.54
  BBR                45.72    47.50    47.46    48.49    56.29    60.21
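A hedged usage sketch of the kind of run being timed here is shown below: fitting a lasso-logistic path with glmnet over 100 $\lambda$ values. The objects x and z stand for a predictor matrix and binary outcome such as those simulated in Section 5.2; this is not a reproduction of the authors' benchmark scripts.

```r
# Fit and time a lasso-logistic path over 100 lambda values, broadly mirroring
# the comparisons above. 'x' and 'z' are placeholders for the simulated data.
library(glmnet)

system.time(
  fit <- glmnet(x, z, family = "binomial", alpha = 1, nlambda = 100)
)
```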
5.3. Real data

Table 4 shows some timing results for four different datasets.

Cancer (Ramaswamy et al. 2002): gene-expression data with 14 cancer classes. Here we compare glmnet with BMR (Genkin et al. 2007), a multinomial version of BBR.

Leukemia (Golub et al. 1999): gene-expression data with a binary response indicating type of leukemia, AML vs ALL. We used the preprocessed data of Dettling (2004).

InternetAd (Kushmerick 1999): document classification problem with mostly binary features. The response is binary, and indicates whether the document is an advertisement. Only 1.2% nonzero values in the predictor matrix.

NewsGroup (Lang 1995): document classification problem. We used the training set cultured from these data by Koh et al. (2007a). The response is binary, and indicates a subclass of topics; the predictors are binary, and indicate the presence of particular tri-gram sequences. The predictor matrix has 0.05% nonzero values.

All four datasets are available online with this publication as saved R data objects (the latter two in sparse format using the Matrix package, Bates and Maechler 2009). For the Leukemia and InternetAd datasets, the BBR program used fewer than 100 $\lambda$ values so we estimated the total time by scaling up the time for smaller number of values. The InternetAd and NewsGroup datasets are both sparse: 1% nonzero values for the former, 0.05% for the latter. Again glmnet is considerably faster than the competing methods.

Table 4: Timings (seconds, unless stated otherwise) for some real datasets. For the Cancer, Leukemia and InternetAd datasets, times are for ten-fold cross-validation using 100 values of $\lambda$; for NewsGroup we performed a single run with 100 values of $\lambda$, with $\lambda_{\min} = 0.05\lambda_{\max}$.

  Name         Type       N       p        glmnet    l1logreg   BBR/BMR
Dense
  Cancer       14 class   144     16,063   2.5 mins              2.1 hrs
  Leukemia     2 class    72      3571     2.50      55.0        450
Sparse
  InternetAd   2 class    2359    1430     5.0       20.9        34.7
  NewsGroup    2 class    11,314  777,811  2 mins    3.5 hrs
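As noted above, the sparse datasets are supplied as Matrix-package objects, which glmnet accepts directly, so the 0.05%-1.2% nonzero structure is never expanded to dense form. The toy sketch below illustrates the interface; the random matrix and response are placeholders, not the InternetAd or NewsGroup data.

```r
# Illustrative sketch of passing a sparse predictor matrix to glmnet.
# The random matrix and response are toy placeholders for the real datasets.
library(Matrix)
library(glmnet)

set.seed(1)
x_sparse <- rsparsematrix(200, 1000, density = 0.01)   # ~1% nonzero entries
y <- rbinom(200, 1, 0.5)
fit <- glmnet(x_sparse, y, family = "binomial", nlambda = 100)
```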
5.4. Other comparisons

When making comparisons, one invariably leaves out someone's favorite method. We left out our own glmpath (Park and Hastie 2007b) extension of lars for GLMs (Park and Hastie 2007a), since it does not scale well to the size problems we consider here. Two referees of an earlier draft of this paper suggested two methods of which we were not aware. We ran a single benchmark against each of these using the Leukemia data, fitting models at 100 values of $\lambda$ in each case.

OWL-QN: Orthant-Wise Limited-memory Quasi-Newton Optimizer for $\ell_1$-regularized Objectives (Andrew and Gao 2007a,b). The software is written in C++, and available from the authors upon request.

The R package penalized (Goeman 2009b,a), which fits GLMs using a fast implementation of gradient ascent.

Table 5 shows these comparisons (on two different machines); glmnet is considerably faster in both cases.

Table 5: Timings (seconds) for the Leukemia dataset, using 100 $\lambda$ values. These timings were performed on two different platforms, which were different again from those used in the earlier timings in this paper.

               MacBook Pro   HP Linux server
  glmnet       0.34          0.13
  penalized    10.31
  OWL-QN                     314.35

6. Selecting the tuning parameters

The algorithms discussed in this paper compute an entire path of solutions (in $\lambda$) for any particular model, leaving the user to select a particular solution from the ensemble. One general approach is to use prediction error to guide this choice. If a user is data rich, they can set aside some fraction (say a third) of their data for this purpose. They would then evaluate the prediction performance at each value of $\lambda$, and pick the model with the best performance. Alternatively, they can use $K$-fold cross-validation (Hastie et al. 2009, for example), where the training data is used both for training and testing in an unbiased way.

Figure 2 illustrates cross-validation on a simulated dataset. For logistic regression, we sometimes use the binomial deviance rather than misclassification error, since the former is smoother. We often use the "one-standard-error" rule when selecting the best model; this acknowledges the fact that the risk curves are estimated with error, so errs on the side of parsimony (Hastie et al. 2009). Cross-validation can be used to select $\alpha$ as well, although it is often viewed as a higher-level parameter and chosen on more subjective grounds.

Figure 2: Ten-fold cross-validation on simulated data (panels: Gaussian family, mean squared error; binomial family, deviance and misclassification error; all plotted against $\log(\lambda)$). The first row is for regression with a Gaussian response, the second row logistic regression with a binomial response. In both cases we have 1000 observations and 100 predictors, but the response depends on only 10 predictors. For regression we use mean-squared prediction error as the measure of risk. For logistic regression, the left panel shows the mean deviance (minus twice the log-likelihood on the left-out data), while the right panel shows misclassification error, which is a rougher measure. In all cases we show the mean cross-validated error curve, as well as a one-standard-deviation band. In each figure the left vertical line corresponds to the minimum error, while the right vertical line the largest value of lambda such that the error is within one standard-error of the minimum, the so called "one-standard-error" rule. The top of each plot is annotated with the size of the models.
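In practice this selection is conveniently done with the package's cross-validation function, as sketched below. The objects x and z are again placeholders for a predictor matrix and binary response; the lambda.min and lambda.1se components correspond to the minimum-error and one-standard-error choices marked by the vertical lines in Figure 2.

```r
# Sketch of selecting lambda by ten-fold cross-validation, as in Section 6.
# 'x' and 'z' are placeholders for a predictor matrix and binary response.
library(glmnet)

cvfit <- cv.glmnet(x, z, family = "binomial", nfolds = 10,
                   type.measure = "deviance")
cvfit$lambda.min       # lambda giving minimum cross-validated deviance
cvfit$lambda.1se       # "one-standard-error" rule choice
plot(cvfit)            # error curve with one-standard-deviation band, as in Figure 2
```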
7. Discussion

Cyclical coordinate descent methods are a natural approach for solving convex problems with $\ell_1$ or $\ell_2$ constraints, or mixtures of the two (elastic net). Each coordinate-descent step is fast, with an explicit formula for each coordinate-wise minimization. The method also exploits the sparsity of the model, spending much of its time evaluating only inner products for variables with non-zero coefficients. Its computational speed for both large $N$ and $p$ is quite remarkable.

An R-language package glmnet is available under general public licence (GPL-2) from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=glmnet. Sparse data inputs are handled by the Matrix package. MATLAB functions (Jiang 2009) are available from http://www-stat.stanford.edu/~tibs/glmnet-matlab/.

Acknowledgments

We would like to thank Holger Hoefling for helpful discussions, and Hui Jiang for writing the MATLAB interface to our Fortran routines. We thank the associate editor, production editor and two referees who gave useful comments on an earlier draft of this article. Friedman was partially supported by grant DMS-97-64431 from the National Science Foundation. Hastie was partially supported by grant DMS-0505676 from the National Science Foundation, and grant 2R01 CA 72028-07 from the National Institutes of Health. Tibshirani was partially supported by National Science Foundation Grant DMS-9971405 and National Institutes of Health Contract N01-HV-28183.

References

Andrew G, Gao J (2007a). OWL-QN: Orthant-Wise Limited-Memory Quasi-Newton Optimizer for L1-Regularized Objectives. URL http://research.microsoft.com/en-us/downloads/b1eb1016-1738-4bd5-83a9-370c9d498a03.

Andrew G, Gao J (2007b). "Scalable Training of L1-Regularized Log-Linear Models." In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pp. 33-40. ACM, New York, NY, USA. doi:10.1145/1273496.1273501.

Bates D, Maechler M (2009). Matrix: Sparse and Dense Matrix Classes and Methods. R package version 0.999375-30, URL http://CRAN.R-project.org/package=Matrix.

Candes E, Tao T (2007). "The Dantzig Selector: Statistical Estimation When p is much Larger than n." The Annals of Statistics, 35(6), 2313-2351.

Chen SS, Donoho D, Saunders M (1998). "Atomic Decomposition by Basis Pursuit." SIAM Journal on Scientific Computing, 20(1), 33-61.

Daubechies I, Defrise M, De Mol C (2004). "An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint." Communications on Pure and Applied Mathematics, 57, 1413-1457.

Dettling M (2004). "BagBoosting for Tumor Classification with Gene Expression Data." Bioinformatics, 20, 3583-3593.

Donoho DL, Johnstone IM (1994). "Ideal Spatial Adaptation by Wavelet Shrinkage." Biometrika, 81, 425-455.

Efron B, Hastie T, Johnstone I, Tibshirani R (2004). "Least Angle Regression." The Annals of Statistics, 32(2), 407-499.

Fan J, Li R (2005). "Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties." Journal of the American Statistical Association, 96, 1348-1360.

Friedman J (2008). "Fast Sparse Regression and Classification." Technical report, Department of Statistics, Stanford University. URL http://www-stat.stanford.edu/~jhf/ftp/GPSpub.pdf.

Friedman J, Hastie T, Hoefling H, Tibshirani R (2007). "Pathwise Coordinate Optimization." The Annals of Applied Statistics, 2(1), 302-332.

Friedman J, Hastie T, Tibshirani R (2008). "Sparse Inverse Covariance Estimation with the Graphical Lasso." Biostatistics, 9, 432-441.

Friedman J, Hastie T, Tibshirani R (2009). glmnet: Lasso and Elastic-Net Regularized Generalized Linear Models. R package version 1.1-4, URL http://CRAN.R-project.org/package=glmnet.

Fu W (1998). "Penalized Regressions: The Bridge vs. the Lasso." Journal of Computational and Graphical Statistics, 7(3), 397-416.
Genkin A, Lewis D, Madigan D (2007). "Large-Scale Bayesian Logistic Regression for Text Categorization." Technometrics, 49(3), 291-304.

Goeman J (2009a). "L1 Penalized Estimation in the Cox Proportional Hazards Model." Biometrical Journal. doi:10.1002/bimj.200900028. Forthcoming.

Goeman J (2009b). penalized: L1 (Lasso) and L2 (Ridge) Penalized Estimation in GLMs and in the Cox Model. R package version 0.9-27, URL http://CRAN.R-project.org/package=penalized.

Golub T, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES (1999). "Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring." Science, 286, 531-536.

Hastie T, Efron B (2007). lars: Least Angle Regression, Lasso and Forward Stagewise. R package version 0.9-7, URL http://CRAN.R-project.org/package=lars.

Hastie T, Rosset S, Tibshirani R, Zhu J (2004). "The Entire Regularization Path for the Support Vector Machine." Journal of Machine Learning Research, 5, 1391-1415.

Hastie T, Tibshirani R, Friedman J (2009). The Elements of Statistical Learning: Prediction, Inference and Data Mining. 2nd edition. Springer-Verlag, New York.

Jiang H (2009). "A MATLAB Implementation of glmnet." Stanford University, URL http://www-stat.stanford.edu/~tibs/glmnet-matlab/.

Koh K, Kim SJ, Boyd S (2007a). "An Interior-Point Method for Large-Scale L1-Regularized Logistic Regression." Journal of Machine Learning Research, 8, 1519-1555.

Koh K, Kim SJ, Boyd S (2007b). l1logreg: A Solver for L1-Regularized Logistic Regression. R package version 0.1-1. Available from Kwangmoo Koh ([email protected]).

Krishnapuram B, Hartemink AJ (2005). "Sparse Multinomial Logistic Regression: Fast Algorithms and Generalization Bounds." IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6), 957-968.

Kushmerick N (1999). "Learning to Remove Internet Advertisements." In AGENTS '99: Proceedings of the Third Annual Conference on Autonomous Agents, pp. 175-181. ACM, New York, NY, USA. ISBN 1-58113-066-X. doi:10.1145/301136.301186.

Lang K (1995). "NewsWeeder: Learning to Filter Netnews." In A Prieditis, S Russell (eds.), Proceedings of the 12th International Conference on Machine Learning, pp. 331-339. San Francisco.

Lee S, Lee H, Abbeel P, Ng A (2006). "Efficient L1 Logistic Regression." In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06). URL https://www.aaai.org/Papers/AAAI/2006/AAAI06-064.pdf.

Madigan D, Lewis D (2007). BBR, BMR: Bayesian Logistic Regression. Open-source stand-alone software, URL http://www.bayesianregression.org/.

Meier L, van de Geer S, Buhlmann P (2008). "The Group Lasso for Logistic Regression." Journal of the Royal Statistical Society B, 70(1), 53-71.

Osborne M, Presnell B, Turlach B (2000). "A New Approach to Variable Selection in Least Squares Problems." IMA Journal of Numerical Analysis, 20, 389-404.

Park MY, Hastie T (2007a). "L1-Regularization Path Algorithm for Generalized Linear Models." Journal of the Royal Statistical Society B, 69, 659-677.

Park MY, Hastie T (2007b). glmpath: L1 Regularization Path for Generalized Linear Models and Cox Proportional Hazards Model. R package version 0.94, URL http://CRAN.R-project.org/package=glmpath.
Ramaswamy S, Tamayo P, Rifkin R, Mukherjee S, Yeang C, Angelo M, Ladd C, Reich M, Latulippe E, Mesirov J, Poggio T, Gerald W, Loda M, Lander E, Golub T (2002). "Multiclass Cancer Diagnosis Using Tumor Gene Expression Signature." Proceedings of the National Academy of Sciences, 98, 15149-15154.

R Development Core Team (2009). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/.

Rosset S, Zhu J (2007). "Piecewise Linear Regularized Solution Paths." The Annals of Statistics, 35(3), 1012-1030.

Shevade K, Keerthi S (2003). "A Simple and Efficient Algorithm for Gene Selection Using Sparse Logistic Regression." Bioinformatics, 19, 2246-2253.

Tibshirani R (1996). "Regression Shrinkage and Selection via the Lasso." Journal of the Royal Statistical Society B, 58, 267-288.

Tibshirani R (1997). "The Lasso Method for Variable Selection in the Cox Model." Statistics in Medicine, 16, 385-395.

Tseng P (2001). "Convergence of a Block Coordinate Descent Method for Nondifferentiable Minimization." Journal of Optimization Theory and Applications, 109, 475-494.

Van der Kooij A (2007). Prediction Accuracy and Stability of Regression with Optimal Scaling Transformations. Ph.D. thesis, Department of Data Theory, University of Leiden. URL https://openaccess.leidenuniv.nl/dspace/handle/1887/12096.

Wu T, Chen Y, Hastie T, Sobel E, Lange K (2009). "Genome-Wide Association Analysis by Penalized Logistic Regression." Bioinformatics, 25(6), 714-721.

Wu T, Lange K (2008). "Coordinate Descent Procedures for Lasso Penalized Regression." The Annals of Applied Statistics, 2(1), 224-244.

Yuan M, Lin Y (2007). "Model Selection and Estimation in Regression with Grouped Variables." Journal of the Royal Statistical Society B, 68(1), 49-67.

Zhu J, Hastie T (2004). "Classification of Expression Arrays by Penalized Logistic Regression." Biostatistics, 5(3), 427-443.

Zou H (2006). "The Adaptive Lasso and its Oracle Properties." Journal of the American Statistical Association, 101, 1418-1429.

Zou H, Hastie T (2004). elasticnet: Elastic Net Regularization and Variable Selection. R package version 1.02, URL http://CRAN.R-project.org/package=elasticnet.

Zou H, Hastie T (2005). "Regularization and Variable Selection via the Elastic Net." Journal of the Royal Statistical Society B, 67(2), 301-320.

A. Proof of Theorem 1

We have

$$c_j = \arg\min_t \sum_{\ell=1}^{K}\left[\tfrac{1}{2}(1-\alpha)(\beta_{j\ell} - t)^2 + \alpha|\beta_{j\ell} - t|\right]. \qquad (32)$$

Suppose $\alpha \in (0, 1)$. Differentiating w.r.t. $t$ (using a sub-gradient representation), we have

$$\sum_{\ell=1}^{K}\left[-(1-\alpha)(\beta_{j\ell} - t) - \alpha s_{j\ell}\right] = 0, \qquad (33)$$

where $s_{j\ell} = \operatorname{sign}(\beta_{j\ell} - t)$ if $\beta_{j\ell} \ne t$ and $s_{j\ell} \in [-1, 1]$ otherwise. This gives

$$t = \bar\beta_j + \frac{\alpha}{K(1-\alpha)}\sum_{\ell=1}^{K} s_{j\ell}. \qquad (34)$$

It follows that $t$ cannot be larger than $\beta_j^M$, since then the second term above would be negative and this would imply that $t$ is less than $\bar\beta_j$. Similarly $t$ cannot be less than $\bar\beta_j$, since then the second term above would have to be negative, implying that $t$ is larger than $\beta_j^M$.

Affiliation:
Trevor Hastie
Department of Statistics
Stanford University
California 94305, United States of America
E-mail: [email protected]
URL: http://www-stat.stanford.edu/~hastie/

Journal of Statistical Software, http://www.jstatsoft.org/, published by the American Statistical Association, http://www.amstat.org/. Volume 33, Issue 1, January 2010. Submitted: 2009-04-22; Accepted: 2009-12-15.

Regularization Paths for Generalized Linear Models via Coordinate Descent

Journal of Statistical SoftwareJan 1, 2010

Loading next page...
 
/lp/unpaywall/regularization-paths-for-generalized-linear-models-via-coordinate-0S6006gsu0

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Unpaywall
ISSN
1548-7660
DOI
10.18637/jss.v033.i01
Publisher site
See Article on Publisher Site

Abstract

We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, two-class logistic regression, and multi- nomial regression problems while the penalties include ` (the lasso), ` (ridge regression) 1 2 and mixtures of the two (the elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal eciently with sparse features. In comparative timings we nd that the new algorithms are considerably faster than competing methods. Keywords : lasso, elastic net, logistic regression, ` penalty, regularization path, coordinate- descent. 1. Introduction The lasso (Tibshirani 1996) is a popular method for regression that uses an ` penalty to achieve a sparse solution. In the signal processing literature, the lasso is also known as basis pursuit (Chen et al. 1998). This idea has been broadly applied, for example to general- ized linear models (Tibshirani 1996) and Cox's proportional hazard models for survival data (Tibshirani 1997). In recent years, there has been an enormous amount of research activity devoted to related regularization methods: 1. The grouped lasso (Yuan and Lin 2007; Meier et al. 2008), where variables are included or excluded in groups. 2. The Dantzig selector (Candes and Tao 2007, and discussion), a slightly modi ed version of the lasso. 3. The elastic net (Zou and Hastie 2005) for correlated variables, which uses a penalty that is part ` , part ` . 1 2 2 Regularization Paths for GLMs via Coordinate Descent 4. ` regularization paths for generalized linear models (Park and Hastie 2007a). 5. Methods using non-concave penalties, such as SCAD (Fan and Li 2005) and Friedman's generalized elastic net (Friedman 2008), enforce more severe variable selection than the lasso. 6. Regularization paths for the support-vector machine (Hastie et al. 2004). 7. The graphical lasso (Friedman et al. 2008) for sparse covariance estimation and undi- rected graphs. Efron et al. (2004) developed an ecient algorithm for computing the entire regularization path for the lasso for linear regression models. Their algorithm exploits the fact that the coef- cient pro les are piecewise linear, which leads to an algorithm with the same computational cost as the full least-squares t on the data (see also Osborne et al. 2000). In some of the extensions above (items 2,3, and 6), piecewise-linearity can be exploited as in Efron et al. (2004) to yield ecient algorithms. Rosset and Zhu (2007) characterize the class of problems where piecewise-linearity exists|both the loss function and the penalty have to be quadratic or piecewise linear. Here we instead focus on cyclical coordinate descent methods. These methods have been proposed for the lasso a number of times, but only recently was their power fully appreciated. Early references include Fu (1998), Shevade and Keerthi (2003) and Daubechies et al. (2004). Van der Kooij (2007) independently used coordinate descent for solving elastic-net penalized regression models. Recent rediscoveries include Friedman et al. (2007) and Wu and Lange (2008). The rst paper recognized the value of solving the problem along an entire path of values for the regularization parameters, using the current estimates as warm starts. This strategy turns out to be remarkably ecient for this problem. 
Several other researchers have also re-discovered coordinate descent, many for solving the same problems we address in this paper|notably Shevade and Keerthi (2003), Krishnapuram and Hartemink (2005), Genkin et al. (2007) and Wu et al. (2009). In this paper we extend the work of Friedman et al. (2007) and develop fast algorithms for tting generalized linear models with elastic-net penalties. In particular, our models include regression, two-class logistic regression, and multinomial regression problems. Our algorithms can work on very large datasets, and can take advantage of sparsity in the feature set. We provide a publicly available package glmnet (Friedman et al. 2009) implemented in the R programming system (R Development Core Team 2009). We do not revisit the well- established convergence properties of coordinate descent in convex problems (Tseng 2001) in this article. Lasso procedures are frequently used in domains with very large datasets, such as genomics and web analysis. Consequently a focus of our research has been algorithmic eciency and speed. We demonstrate through simulations that our procedures outperform all competitors | even those based on coordinate descent. In Section 2 we present the algorithm for the elastic net, which includes the lasso and ridge regression as special cases. Section 3 and 4 discuss (two-class) logistic regression and multi- nomial logistic regression. Comparative timings are presented in Section 5. Although the title of this paper advertises regularization paths for GLMs, we only cover three important members of this family. However, exactly the same technology extends trivially to Journal of Statistical Software 3 other members of the exponential family, such as the Poisson model. We plan to extend our software to cover these important other cases, as well as the Cox model for survival data. Note that this article is about algorithms for tting particular families of models, and not about the statistical properties of these models themselves. Such discussions have taken place elsewhere. 2. Algorithms for the lasso, ridge regression and elastic net We consider the usual setup for linear regression. We have a response variable Y 2 R and a predictor vector X 2 R , and we approximate the regression function by a linear model E(YjX = x) = + x . We have N observation pairs (x ; y ). For simplicity we assume 0 i i P P N 1 N the x are standardized: x = 0, x = 1, for j = 1; : : : ; p. Our algorithms ij ij i=1 N i=1 ij generalize naturally to the unstandardized case. The elastic net solves the following problem " # > 2 min R ( ; ) = min (y x ) + P ( ) ; (1) 0 i 0 p+1 p+1 2N ( ; )2R ( ; )2R 0 0 i=1 where P ( ) = (1 ) jj jj + jj jj (2) ` 1 1 2 = (1 ) + j j : (3) j=1 P is the elastic-net penalty (Zou and Hastie 2005), and is a compromise between the ridge- regression penalty ( = 0) and the lasso penalty ( = 1). This penalty is particularly useful in the p  N situation, or any situation where there are many correlated predictor variables. Ridge regression is known to shrink the coecients of correlated predictors towards each other, allowing them to borrow strength from each other. In the extreme case of k identical predictors, they each get identical coecients with 1=kth the size that any single one would get if t alone. From a Bayesian point of view, the ridge penalty is ideal if there are many predictors, and all have non-zero coecients (drawn from a Gaussian distribution). 
Lasso, on the other hand, is somewhat indi erent to very correlated predictors, and will tend to pick one and ignore the rest. In the extreme case above, the lasso problem breaks down. The lasso penalty corresponds to a Laplace prior, which expects many coecients to be close to zero, and a small subset to be larger and nonzero. The elastic net with = 1" for some small " > 0 performs much like the lasso, but removes any degeneracies and wild behavior caused by extreme correlations. More generally, the entire family P creates a useful compromise between ridge and lasso. As increases from 0 to 1, for a given  the sparsity of the solution to (1) (i.e., the number of coecients equal to zero) increases monotonically from 0 to the sparsity of the lasso solution. Figure 1 shows an example that demonstrates the e ect of varying . The dataset is from (Golub et al. 1999), consisting of 72 observations on 3571 genes measured with DNA microar- rays. The observations fall in two classes, so we use the penalties in conjunction with the Zou and Hastie (2005) called this penalty the naive elastic net, and preferred a rescaled version which they called elastic net. We drop this distinction here. 4 Regularization Paths for GLMs via Coordinate Descent Figure 1: Leukemia data: pro les of estimated coecients for three methods, showing only rst 10 steps (values for ) in each case. For the elastic net, = 0:2. Journal of Statistical Software 5 logistic regression models of Section 3. The coecient pro les from the rst 10 steps (grid values for ) for each of the three regularization methods are shown. The lasso penalty admits at most N = 72 genes into the model, while ridge regression gives all 3571 genes non-zero coecients. The elastic-net penalty provides a compromise between these two, and has the e ect of averaging genes that are highly correlated and then entering the averaged gene into the model. Using the algorithm described below, computation of the entire path of solutions for each method, at 100 values of the regularization parameter evenly spaced on the log-scale, took under a second in total. Because of the large number of non-zero coecients for the ridge penalty, they are individually much smaller than the coecients for the other methods. Consider a coordinate descent step for solving (1). That is, suppose we have estimates and for ` 6= j, and we wish to partially optimize with respect to . We would like to compute ` j ~ ~ ~ the gradient at = , which only exists if 6= 0. If > 0, then j j j j @R 1 ~ ~ j = x (y x ) + (1 ) +  : (4) ~ ij i o j @ N i=1 ~ ~ A similar expression exists if < 0, and = 0 is treated separately. Simple calculus shows j j (Donoho and Johnstone 1994) that the coordinate-wise update has the form (j) S x (y y ~ ); ij i i=1 i (5) 1 + (1 ) where (j) ~ ~ y ~ = + x is the tted value excluding the contribution from x , and 0 i` ` ij i `6=j (j) hence y y ~ the partial residual for tting . Because of the standardization, i j (j) x (y y ~ ) is the simple least-squares coecient when tting this partial ij i i=1 i residual to x . ij S(z; ) is the soft-thresholding operator with value z if z > 0 and < jzj sign(z)(jzj ) = z + if z < 0 and < jzj (6) 0 if  jzj. The details of this derivation are spelled out in Friedman et al. (2007). Thus we compute the simple least-squares coecient on the partial residual, apply soft- thresholding to take care of the lasso contribution to the penalty, and then apply a pro- portional shrinkage for the ridge penalty. 
This algorithm was suggested by Van der Kooij (2007).

2.1. Naive updates

Looking more closely at (5), we see that

    y_i - \tilde y_i^{(j)} = y_i - \hat y_i + x_{ij}\tilde\beta_j = r_i + x_{ij}\tilde\beta_j,    (7)

where ŷ_i is the current fit of the model for observation i, and hence r_i the current residual. Thus

    \frac{1}{N} \sum_{i=1}^N x_{ij}\,(y_i - \tilde y_i^{(j)}) = \frac{1}{N} \sum_{i=1}^N x_{ij}\, r_i + \tilde\beta_j,    (8)

because the x_j are standardized. The first term on the right-hand side is the gradient of the loss with respect to β_j. It is clear from (8) why coordinate descent is computationally efficient. Many coefficients are zero, remain zero after the thresholding, and so nothing needs to be changed. Such a step costs O(N) operations, the sum needed to compute the gradient. On the other hand, if a coefficient does change after the thresholding, r_i is changed in O(N) and the step costs O(2N). Thus a complete cycle through all p variables costs O(pN) operations. We refer to this as the naive algorithm, since it is generally less efficient than the covariance-updating algorithm to follow. Later we use these algorithms in the context of iteratively reweighted least squares (IRLS), where the observation weights change frequently; there the naive algorithm dominates.

2.2. Covariance updates

Further efficiencies can be achieved in computing the updates in (8). We can write the first term on the right (up to a factor 1/N) as

    \sum_{i=1}^N x_{ij}\, r_i = \langle x_j, y \rangle - \sum_{k:\, |\tilde\beta_k| > 0} \langle x_j, x_k \rangle\, \tilde\beta_k,    (9)

where ⟨x_j, y⟩ = Σ_{i=1}^N x_ij y_i. Hence we need to compute inner products of each feature with y initially, and then each time a new feature x_k enters the model (for the first time), we need to compute and store its inner product with all the rest of the features (O(Np) operations). We also store the p gradient components (9). If one of the coefficients currently in the model changes, we can update each gradient in O(p) operations. Hence with m non-zero terms in the model, a complete cycle costs O(pm) operations if no new variables become non-zero, and costs O(Np) for each new variable entered. Importantly, O(N) calculations do not have to be made at every step. This is the case for all penalized procedures with squared error loss.

2.3. Sparse updates

We are sometimes faced with problems where the N × p feature matrix X is extremely sparse. A leading example is from document classification, where the feature vector uses the so-called "bag-of-words" model. Each document is scored for the presence/absence of each of the words in the entire dictionary under consideration (sometimes counts are used, or some transformation of counts). Since most words are absent, the feature vector for each document is mostly zero, and so the entire matrix is mostly zero. We store such matrices efficiently in sparse column format, where we store only the non-zero entries and the coordinates where they occur.

Coordinate descent is ideally set up to exploit such sparsity, in an obvious way. The O(N) inner-product operations in either the naive or covariance updates can exploit the sparsity, by summing over only the non-zero entries. Note that in this case scaling of the variables will not alter the sparsity, but centering will. So scaling is performed up front, but the centering is incorporated in the algorithm in an efficient and obvious manner.

2.4. Weighted updates

Often a weight w_i (other than 1/N) is associated with each observation. This will arise naturally in later sections where observations receive weights in the IRLS algorithm. In this case the update step (5) becomes only slightly more complicated:

    \tilde\beta_j \leftarrow \frac{ S\!\left( \sum_{i=1}^N w_i x_{ij}\,(y_i - \tilde y_i^{(j)}),\; \lambda\alpha \right) }{ \sum_{i=1}^N w_i x_{ij}^2 + \lambda(1-\alpha) }.    (10)

If the x_j are not standardized, there is a similar sum-of-squares term in the denominator (even without weights). The presence of weights does not change the computational costs of either algorithm much, as long as the weights remain fixed.
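A weighted version of the earlier sketch, following the form of (10) and reusing soft_threshold from above; as before, the helper name is ours rather than a glmnet internal. The unweighted update (5) is recovered with w_i = 1/N and standardized x_j.

    # Weighted coordinate-wise update (10): w are observation weights,
    # y_partial the partial residuals y_i - ytilde_i^{(j)}.
    update_beta_j_weighted <- function(xj, y_partial, w, lambda, alpha) {
      num <- soft_threshold(sum(w * xj * y_partial), lambda * alpha)
      den <- sum(w * xj^2) + lambda * (1 - alpha)
      num / den
    }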
2.5. Pathwise coordinate descent

We compute the solutions for a decreasing sequence of values for λ, starting at the smallest value λ_max for which the entire vector β̂ = 0. Apart from giving us a path of solutions, this scheme exploits warm starts, and leads to a more stable algorithm. We have examples where it is faster to compute the path down to λ (for small λ) than the solution only at that value for λ.

When β̃ = 0, we see from (5) that β̃_j will stay zero if (1/N)|⟨x_j, y⟩| < λα. Hence Nαλ_max = max_ℓ |⟨x_ℓ, y⟩|. Our strategy is to select a minimum value λ_min = ελ_max, and construct a sequence of K values of λ decreasing from λ_max to λ_min on the log scale. Typical values are ε = 0.001 and K = 100.

2.6. Other details

Irrespective of whether the variables are standardized to have variance 1, we always center each predictor variable. Since the intercept is not regularized, this means that β̂₀ = ȳ, the mean of the y_i, for all values of α and λ.

It is easy to allow different penalties λ_j for each of the variables. We implement this via a penalty scaling parameter γ_j ≥ 0. If γ_j > 0, then the penalty applied to β_j is λ_j = λγ_j. If γ_j = 0, that variable does not get penalized, and always enters the model unrestricted at the first step and remains in the model. Penalty rescaling would also allow, for example, our software to be used to implement the adaptive lasso (Zou 2006).

Considerable speedup is obtained by organizing the iterations around the active set of features, those with nonzero coefficients. After a complete cycle through all the variables, we iterate on only the active set till convergence. If another complete cycle does not change the active set, we are done, otherwise the process is repeated. Active-set convergence is also mentioned in Meier et al. (2008) and Krishnapuram and Hartemink (2005).
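The path strategy of Sections 2.5 and 2.6 can be sketched in a few lines of R. lambda_sequence follows the formula Nαλ_max = max_ℓ |⟨x_ℓ, y⟩| above, while coordinate_descent_fit is a hypothetical stand-in for the updates of this section; both names are ours.

    # Log-spaced lambda grid of Section 2.5 (assumes alpha > 0 and centered predictors)
    lambda_sequence <- function(x, y, alpha, K = 100, epsilon = 0.001) {
      N <- nrow(x)
      lambda_max <- max(abs(crossprod(x, y))) / (N * alpha)
      exp(seq(log(lambda_max), log(epsilon * lambda_max), length.out = K))
    }

    # Sweep the grid with warm starts (coordinate_descent_fit is a placeholder):
    # beta <- rep(0, ncol(x)); path <- list()
    # for (lambda in lambda_sequence(x, y, alpha)) {
    #   beta <- coordinate_descent_fit(x, y, beta, lambda, alpha)   # warm start from previous solution
    #   path[[length(path) + 1]] <- beta
    # }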
3. Regularized logistic regression

When the response variable is binary, the linear logistic regression model is often used. Denote by G the response variable, taking values in G = {1, 2} (the labeling of the elements is arbitrary). The logistic regression model represents the class-conditional probabilities through a linear function of the predictors

    \Pr(G = 1 \mid x) = \frac{1}{1 + e^{-(\beta_0 + x^T\beta)}},    (11)
    \Pr(G = 2 \mid x) = \frac{1}{1 + e^{+(\beta_0 + x^T\beta)}} = 1 - \Pr(G = 1 \mid x).

Alternatively, this implies that

    \log \frac{\Pr(G = 1 \mid x)}{\Pr(G = 2 \mid x)} = \beta_0 + x^T\beta.    (12)

Here we fit this model by regularized maximum (binomial) likelihood. Let p(x_i) = Pr(G = 1 | x_i) be the probability (11) for observation i at a particular value for the parameters (β₀, β); then we maximize the penalized log-likelihood

    \max_{(\beta_0,\beta) \in \mathbb{R}^{p+1}} \left[ \frac{1}{N} \sum_{i=1}^N \Big\{ I(g_i = 1)\log p(x_i) + I(g_i = 2)\log(1 - p(x_i)) \Big\} - \lambda P_\alpha(\beta) \right].    (13)

Denoting y_i = I(g_i = 1), the log-likelihood part of (13) can be written in the more explicit form

    \ell(\beta_0, \beta) = \frac{1}{N} \sum_{i=1}^N \left[ y_i\,(\beta_0 + x_i^T\beta) - \log\!\big(1 + e^{\beta_0 + x_i^T\beta}\big) \right],    (14)

a concave function of the parameters. The Newton algorithm for maximizing the (unpenalized) log-likelihood (14) amounts to iteratively reweighted least squares. Hence if the current estimates of the parameters are (β̃₀, β̃), we form a quadratic approximation to the log-likelihood (Taylor expansion about the current estimates), which is

    \ell_Q(\beta_0, \beta) = -\frac{1}{2N} \sum_{i=1}^N w_i\,(z_i - \beta_0 - x_i^T\beta)^2 + C(\tilde\beta_0, \tilde\beta)^2,    (15)

where

    z_i = \tilde\beta_0 + x_i^T\tilde\beta + \frac{y_i - \tilde p(x_i)}{\tilde p(x_i)(1 - \tilde p(x_i))},    (working response)    (16)
    w_i = \tilde p(x_i)(1 - \tilde p(x_i)),    (weights)    (17)

and p̃(x_i) is evaluated at the current parameters. The last term is constant. The Newton update is obtained by minimizing −ℓ_Q.

Our approach is similar. For each value of λ, we create an outer loop which computes the quadratic approximation ℓ_Q about the current parameters (β̃₀, β̃). Then we use coordinate descent to solve the penalized weighted least-squares problem

    \min_{(\beta_0,\beta) \in \mathbb{R}^{p+1}} \{ -\ell_Q(\beta_0, \beta) + \lambda P_\alpha(\beta) \}.    (18)

This amounts to a sequence of nested loops:

outer loop: Decrement λ.
middle loop: Update the quadratic approximation ℓ_Q using the current parameters (β̃₀, β̃).
inner loop: Run the coordinate descent algorithm on the penalized weighted-least-squares problem (18).

There are several important details in the implementation of this algorithm.

When p ≫ N, one cannot run λ all the way to zero, because the saturated logistic regression fit is undefined (parameters wander off to ±∞ in order to achieve probabilities of 0 or 1). Hence the default λ sequence runs down to λ_min = ελ_max > 0.

Care is taken to avoid coefficients diverging in order to achieve fitted probabilities of 0 or 1. When a probability is within ε = 10⁻⁵ of 1, we set it to 1, and set the weights to ε. 0 is treated similarly.

Our code has an option to approximate the Hessian terms by an exact upper-bound. This is obtained by setting the w_i in (17) all equal to 0.25 (Krishnapuram and Hartemink 2005).

We allow the response data to be supplied in the form of a two-column matrix of counts, sometimes referred to as grouped data. We discuss this in more detail in Section 4.2.

The Newton algorithm is not guaranteed to converge without step-size optimization (Lee et al. 2006). Our code does not implement any checks for divergence; this would slow it down, and when used as recommended we do not feel it is necessary. We have a closed form expression for the starting solutions, and each subsequent solution is warm-started from the previous close-by solution, which generally makes the quadratic approximations very accurate. We have not encountered any divergence problems so far.
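The three nested loops can be written out directly. The following R sketch is our own illustration of Section 3; it is not the glmnet implementation, and it omits the active-set, covariance-updating and standardization details of Section 2. All function and variable names are ours.

    # Penalized logistic regression path by IRLS + coordinate descent (Section 3).
    # x: N x p matrix, y: 0/1 response, lambda_seq: decreasing lambda values.
    logistic_enet_path <- function(x, y, alpha, lambda_seq,
                                   n_irls = 25, n_cd = 250, tol = 1e-7) {
      N <- nrow(x); p <- ncol(x)
      beta0 <- 0; beta <- rep(0, p)
      soft <- function(z, g) sign(z) * pmax(abs(z) - g, 0)
      path <- vector("list", length(lambda_seq))
      for (k in seq_along(lambda_seq)) {            # outer loop: decrement lambda (warm starts)
        lambda <- lambda_seq[k]
        for (it in seq_len(n_irls)) {               # middle loop: refresh quadratic approximation
          eta  <- beta0 + drop(x %*% beta)
          prob <- 1 / (1 + exp(-eta))
          w    <- pmax(prob * (1 - prob), 1e-5)     # guard against probabilities of 0 or 1
          z    <- eta + (y - prob) / w              # working response (16)
          beta_old <- beta
          for (it_cd in seq_len(n_cd)) {            # inner loop: coordinate descent on (18)
            r <- z - beta0 - drop(x %*% beta)
            delta <- 0
            for (j in seq_len(p)) {
              num <- soft(sum(w * x[, j] * (r + x[, j] * beta[j])) / N, lambda * alpha)
              den <- sum(w * x[, j]^2) / N + lambda * (1 - alpha)
              bj  <- num / den
              if (bj != beta[j]) {
                r <- r - x[, j] * (bj - beta[j])    # refresh residuals in O(N)
                delta <- max(delta, abs(bj - beta[j]))
                beta[j] <- bj
              }
            }
            b0 <- sum(w * (r + beta0)) / sum(w)     # unpenalized intercept
            r  <- r - (b0 - beta0); beta0 <- b0
            if (delta < tol) break
          }
          if (max(abs(beta - beta_old)) < tol) break
        }
        path[[k]] <- c(intercept = beta0, beta)
      }
      path
    }

The warm start is implicit: beta0 and beta carry over from one value of lambda to the next.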
4. Regularized multinomial regression

When the categorical response variable G has K > 2 levels, the linear logistic regression model can be generalized to a multi-logit model. The traditional approach is to extend (12) to K − 1 logits

    \log \frac{\Pr(G = \ell \mid x)}{\Pr(G = K \mid x)} = \beta_{0\ell} + x^T\beta_\ell, \quad \ell = 1, \ldots, K-1.    (19)

Here β_ℓ is a p-vector of coefficients. As in Zhu and Hastie (2004), here we choose a more symmetric approach. We model

    \Pr(G = \ell \mid x) = \frac{ e^{\beta_{0\ell} + x^T\beta_\ell} }{ \sum_{k=1}^K e^{\beta_{0k} + x^T\beta_k} }.    (20)

This parametrization is not estimable without constraints, because for any values of the parameters {β₀ℓ, β_ℓ}₁^K, the values {β₀ℓ − c₀, β_ℓ − c}₁^K give identical probabilities (20). Regularization deals with this ambiguity in a natural way; see Section 4.1 below.

We fit the model (20) by regularized maximum (multinomial) likelihood. Using a similar notation as before, let p_ℓ(x_i) = Pr(G = ℓ | x_i), and let g_i ∈ {1, 2, ..., K} be the i-th response. We maximize the penalized log-likelihood

    \max_{\{\beta_{0\ell}, \beta_\ell\}_1^K \in \mathbb{R}^{K(p+1)}} \left[ \frac{1}{N} \sum_{i=1}^N \log p_{g_i}(x_i) - \lambda \sum_{\ell=1}^K P_\alpha(\beta_\ell) \right].    (21)

Denote by Y the N × K indicator response matrix, with elements y_iℓ = I(g_i = ℓ). Then we can write the log-likelihood part of (21) in the more explicit form

    \ell(\{\beta_{0\ell}, \beta_\ell\}_1^K) = \frac{1}{N} \sum_{i=1}^N \left[ \sum_{\ell=1}^K y_{i\ell}\,(\beta_{0\ell} + x_i^T\beta_\ell) - \log\!\Big( \sum_{\ell=1}^K e^{\beta_{0\ell} + x_i^T\beta_\ell} \Big) \right].    (22)

The Newton algorithm for multinomial regression can be tedious, because of the vector nature of the response observations. Instead of weights w_i as in (17), we get weight matrices, for example. However, in the spirit of coordinate descent, we can avoid these complexities. We perform partial Newton steps by forming a partial quadratic approximation to the log-likelihood (22), allowing only (β₀ℓ, β_ℓ) to vary for a single class at a time. It is not hard to show that this is

    \ell_{Q\ell}(\beta_{0\ell}, \beta_\ell) = -\frac{1}{2N} \sum_{i=1}^N w_{i\ell}\,(z_{i\ell} - \beta_{0\ell} - x_i^T\beta_\ell)^2 + C(\{\tilde\beta_{0k}, \tilde\beta_k\}_1^K),    (23)

where as before

    z_{i\ell} = \tilde\beta_{0\ell} + x_i^T\tilde\beta_\ell + \frac{y_{i\ell} - \tilde p_\ell(x_i)}{\tilde p_\ell(x_i)(1 - \tilde p_\ell(x_i))},    (24)
    w_{i\ell} = \tilde p_\ell(x_i)(1 - \tilde p_\ell(x_i)).    (25)

Our approach is similar to the two-class case, except now we have to cycle over the classes as well in the outer loop. For each value of λ, we create an outer loop which cycles over ℓ and computes the partial quadratic approximation ℓ_{Qℓ} about the current parameters (β̃₀, β̃). Then we use coordinate descent to solve the penalized weighted least-squares problem

    \min_{(\beta_{0\ell}, \beta_\ell) \in \mathbb{R}^{p+1}} \{ -\ell_{Q\ell}(\beta_{0\ell}, \beta_\ell) + \lambda P_\alpha(\beta_\ell) \}.    (26)

This amounts to the sequence of nested loops:

outer loop: Decrement λ.
middle loop (outer): Cycle over ℓ ∈ {1, 2, ..., K, 1, 2, ...}.
middle loop (inner): Update the quadratic approximation ℓ_{Qℓ} using the current parameters {β̃₀k, β̃_k}₁^K.
inner loop: Run the coordinate descent algorithm on the penalized weighted-least-squares problem (26).

4.1. Regularization and parameter ambiguity

As was pointed out earlier, if {β₀ℓ, β_ℓ}₁^K characterizes a fitted model for (20), then {β₀ℓ − c₀, β_ℓ − c}₁^K gives an identical fit (c is a p-vector). Although this means that the log-likelihood part of (21) is insensitive to (c₀, c), the penalty is not. In particular, we can always improve an estimate {β₀ℓ, β_ℓ}₁^K (with respect to (21)) by solving

    \min_{c \in \mathbb{R}^p} \sum_{\ell=1}^K P_\alpha(\beta_\ell - c).    (27)

This can be done separately for each coordinate, hence

    c_j = \arg\min_t \sum_{\ell=1}^K \left[ \tfrac{1}{2}(1-\alpha)(\beta_{j\ell} - t)^2 + \alpha|\beta_{j\ell} - t| \right].    (28)

Theorem 1. Consider problem (28) for values α ∈ [0, 1]. Let β̄_j be the mean of the β_jℓ, and β_j^M a median of the β_jℓ (and for simplicity assume β̄_j ≤ β_j^M). Then we have

    c_j \in [\bar\beta_j, \beta_j^M],    (29)

with the left endpoint achieved if α = 0, and the right if α = 1.

The two endpoints are obvious. The proof of Theorem 1 is given in Appendix A. A consequence of the theorem is that a very simple search algorithm can be used to solve (28). The objective is piecewise quadratic, with knots defined by the β_jℓ. We need only evaluate solutions in the intervals including the mean and median, and those in between. We recenter the parameters in each index set j after each inner middle loop step, using the solution c_j for each j.

Not all the parameters in our model are regularized. The intercepts β₀ℓ are not, and with our penalty modifiers (Section 2.6) others need not be as well. For these parameters we use mean centering.
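The recentering step can be illustrated with a short R sketch that exploits Theorem 1: the objective in (28) is convex and its minimizer lies between the mean and a median of the β_jℓ, so a one-dimensional search over that interval suffices. The helper name and the use of optimize() (which returns the minimizer only to a numerical tolerance) are our own choices.

    # Recentering constant c_j of (28) for one coordinate j:
    # b holds beta_{j1}, ..., beta_{jK} across the K classes.
    recenter_constant <- function(b, alpha) {
      obj <- function(t) sum(0.5 * (1 - alpha) * (b - t)^2 + alpha * abs(b - t))
      lo <- min(mean(b), median(b))
      hi <- max(mean(b), median(b))
      if (isTRUE(all.equal(lo, hi))) return(lo)     # mean and median coincide
      optimize(obj, interval = c(lo, hi))$minimum   # Theorem 1: c_j lies in [lo, hi]
    }

For α = 0 this returns (numerically) the mean, and for α = 1 a median, the two endpoints of Theorem 1.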
4.2. Grouped and matrix responses

As in the two-class case, the data can be presented in the form of an N × K matrix m_iℓ of non-negative numbers. For example, if the data are grouped, at each x_i we have a number of multinomial samples, with m_iℓ falling into category ℓ. In this case we divide each row by the row-sum m_i = Σ_ℓ m_iℓ, and produce our response matrix y_iℓ = m_iℓ/m_i; m_i becomes an observation weight. Our penalized maximum likelihood algorithm changes in a trivial way. The working response (24) is defined exactly the same way (using the y_iℓ just defined). The weights in (25) get augmented with the observation weight m_i:

    w_{i\ell} = m_i\, \tilde p_\ell(x_i)(1 - \tilde p_\ell(x_i)).    (30)

Equivalently, the data can be presented directly as a matrix of class proportions, along with a weight vector. From the point of view of the algorithm, any matrix of positive numbers and any non-negative weight vector will be treated in the same way.

5. Timings

In this section we compare the run times of the coordinate-wise algorithm to some competing algorithms. These use the lasso penalty (α = 1) in both the regression and logistic regression settings. All timings were carried out on an Intel Xeon 2.80 GHz processor. We do not perform comparisons on the elastic-net versions of the penalties, since there is not much software available for the elastic net. Comparisons of our glmnet code with the R package elasticnet would mimic the comparisons with lars (Hastie and Efron 2007) for the lasso, since elasticnet (Zou and Hastie 2004) is built on the lars package.

5.1. Regression with the lasso

We generated Gaussian data with N observations and p predictors, with each pair of predictors X_j, X_j' having the same population correlation ρ. We tried a number of combinations of N and p, with ρ varying from zero to 0.95. The outcome values were generated by

    Y = \sum_{j=1}^p X_j \beta_j + k \cdot Z,    (31)

where β_j = (−1)^j exp(−2(j − 1)/20), Z ∼ N(0, 1) and k is chosen so that the signal-to-noise ratio is 3.0. The coefficients are constructed to have alternating signs and to be exponentially decreasing.
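For concreteness, here is one way to generate data of the form (31) in R. The compound-symmetric construction of X and the convention that the signal-to-noise ratio is the ratio of the standard deviations of signal and noise are our own assumptions; the text does not spell out these details.

    # Simulated data as in (31): pairwise predictor correlation rho, SNR = 3.
    gen_data <- function(N, p, rho, snr = 3) {
      u <- rnorm(N)                               # shared factor giving correlation rho
      x <- sqrt(rho) * matrix(u, N, p) +
           sqrt(1 - rho) * matrix(rnorm(N * p), N, p)
      beta <- (-1)^(1:p) * exp(-2 * ((1:p) - 1) / 20)
      signal <- drop(x %*% beta)
      k <- sd(signal) / snr                       # scale noise so sd(signal)/sd(noise) = snr
      list(x = x, y = signal + k * rnorm(N), beta = beta)
    }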
Table 1 shows the average CPU timings for the coordinate-wise algorithm and the lars procedure (Efron et al. 2004). All algorithms are implemented as R functions. The coordinate-wise algorithm does all of its numerical work in Fortran, while lars (Hastie and Efron 2007) does much of its work in R, calling Fortran routines for some matrix operations. However, comparisons in Friedman et al. (2007) showed that lars was actually faster than a version coded entirely in Fortran. Comparisons between different programs are always tricky: in particular the lars procedure computes the entire path of solutions, while the coordinate-wise procedure solves the problem for a set of pre-defined points along the solution path. In the orthogonal case, lars takes min(N, p) steps: hence to make things roughly comparable, we called the latter two algorithms to solve a total of min(N, p) problems along the path. Table 1 shows timings in seconds averaged over three runs. We see that glmnet is considerably faster than lars; the covariance-updating version of the algorithm is a little faster than the naive version when N > p and a little slower when p > N. We had expected that high correlation between the features would increase the run time of glmnet, but this does not seem to be the case.

Linear regression, dense features
                                        Correlation
                              0      0.1    0.2    0.5    0.9   0.95
    N = 1000, p = 100
    glmnet (type = "naive")  0.05   0.06   0.06   0.09   0.08   0.07
    glmnet (type = "cov")    0.02   0.02   0.02   0.02   0.02   0.02
    lars                     0.11   0.11   0.11   0.11   0.11   0.11
    N = 5000, p = 100
    glmnet (type = "naive")  0.24   0.25   0.26   0.34   0.32   0.31
    glmnet (type = "cov")    0.05   0.05   0.05   0.05   0.05   0.05
    lars                     0.29   0.29   0.29   0.30   0.29   0.29
    N = 100, p = 1000
    glmnet (type = "naive")  0.04   0.05   0.04   0.05   0.04   0.03
    glmnet (type = "cov")    0.07   0.08   0.07   0.08   0.04   0.03
    lars                     0.73   0.72   0.68   0.71   0.71   0.67
    N = 100, p = 5000
    glmnet (type = "naive")  0.20   0.18   0.21   0.23   0.21   0.14
    glmnet (type = "cov")    0.46   0.42   0.51   0.48   0.25   0.10
    lars                     3.73   3.53   3.59   3.47   3.90   3.52
    N = 100, p = 20000
    glmnet (type = "naive")  1.00   0.99   1.06   1.29   1.17   0.97
    glmnet (type = "cov")    1.86   2.26   2.34   2.59   1.24   0.79
    lars                    18.30  17.90  16.90  18.03  17.91  16.39
    N = 100, p = 50000
    glmnet (type = "naive")  2.66   2.46   2.84   3.53   3.39   2.43
    glmnet (type = "cov")    5.50   4.92   6.13   7.35   4.52   2.53
    lars                    58.68  64.00  64.79  58.20  66.39  79.79

Table 1: Timings (in seconds) for glmnet and lars algorithms for linear regression with lasso penalty. The first line is glmnet using naive updating while the second uses covariance updating. Total time for 100 λ values, averaged over 3 runs.

5.2. Lasso-logistic regression

We used the same simulation setup as above, except that we took the continuous outcome y, defined p = 1/(1 + exp(−y)) and used this to generate a two-class outcome z with Pr(z = 1) = p, Pr(z = 0) = 1 − p. We compared the speed of glmnet to the interior point method l1logreg (Koh et al. 2007b,a), Bayesian binary regression (BBR; Madigan and Lewis 2007; Genkin et al. 2007), and the lasso penalized logistic program LPL supplied by Ken Lange (see Wu and Lange 2008). The latter two methods also use a coordinate descent approach.

The BBR software automatically performs ten-fold cross-validation when given a set of λ values. Hence we report the total time for ten-fold cross-validation for all methods using the same 100 λ values for all. Table 2 shows the results; in some cases, we omitted a method when it was seen to be very slow at smaller values for N or p. Again we see that glmnet is the clear winner: it slows down a little under high correlation. The computation seems to be roughly linear in N, but grows faster than linear in p. Table 3 shows some results when the feature matrix is sparse: we randomly set 95% of the feature values to zero. Again, the glmnet procedure is significantly faster than l1logreg.

Logistic regression, dense features
                           Correlation
                 0       0.1     0.2     0.5     0.9     0.95
    N = 1000, p = 100
    glmnet      1.65     1.81    2.31    3.87    5.99    8.48
    l1logreg   31.475   31.86   34.35   32.21   31.85   31.81
    BBR        40.70    47.57   54.18   70.06  106.72  121.41
    LPL        24.68    31.64   47.99  170.77  741.00 1448.25
    N = 5000, p = 100
    glmnet      7.89     8.48    9.01   13.39   26.68   26.36
    l1logreg  239.88   232.00  229.62  229.49   22.19  223.09
    N = 100,000, p = 100
    glmnet     78.56   178.45  205.94  274.33  552.48  638.50
    N = 100, p = 1000
    glmnet      1.06     1.07    1.09    1.45    1.72    1.37
    l1logreg   25.99    26.40   25.67   26.49   24.34   20.16
    BBR        70.19    71.19   78.40  103.77  149.05  113.87
    LPL        11.02    10.87   10.76   16.34   41.84   70.50
    N = 100, p = 5000
    glmnet      5.24     4.43    5.12    7.05    7.87    6.05
    l1logreg  165.02   161.90  163.25  166.50  151.91  135.28
    N = 100, p = 100,000
    glmnet    137.27   139.40  146.55  197.98  219.65  201.93

Table 2: Timings (seconds) for logistic models with lasso penalty. Total time for ten-fold cross-validation over a grid of 100 λ values.

5.3. Real data

Table 4 shows some timing results for four different datasets.

Cancer (Ramaswamy et al. 2002): gene-expression data with 14 cancer classes. Here we compare glmnet with BMR (Genkin et al. 2007), a multinomial version of BBR.

Leukemia (Golub et al. 1999): gene-expression data with a binary response indicating type of leukemia, AML vs. ALL. We used the preprocessed data of Dettling (2004).
InternetAd (Kushmerick 1999): document classification problem with mostly binary features. The response is binary, and indicates whether the document is an advertisement. Only 1.2% of the values in the predictor matrix are nonzero.

NewsGroup (Lang 1995): document classification problem. We used the training set cultured from these data by Koh et al. (2007a). The response is binary, and indicates a subclass of topics; the predictors are binary, and indicate the presence of particular tri-gram sequences. The predictor matrix has 0.05% nonzero values.

All four datasets are available online with this publication as saved R data objects (the latter two in sparse format using the Matrix package, Bates and Maechler 2009).

For the Leukemia and InternetAd datasets, the BBR program used fewer than 100 λ values, so we estimated the total time by scaling up the time for the smaller number of values. The InternetAd and NewsGroup datasets are both sparse: 1% nonzero values for the former, 0.05% for the latter. Again glmnet is considerably faster than the competing methods.

Logistic regression, sparse features
                           Correlation
                 0       0.1     0.2     0.5     0.9    0.95
    N = 1000, p = 100
    glmnet      0.77    0.74    0.72    0.73    0.84    0.88
    l1logreg    5.19    5.21    5.14    5.40    6.14    6.26
    BBR         2.01    1.95    1.98    2.06    2.73    2.88
    N = 100, p = 1000
    glmnet      1.81    1.73    1.55    1.70    1.63    1.55
    l1logreg    7.67    7.72    7.64    9.04    9.81    9.40
    BBR         4.66    4.58    4.68    5.15    5.78    5.53
    N = 10,000, p = 100
    glmnet      3.21    3.02    2.95    3.25    4.58    5.08
    l1logreg   45.87   46.63   44.33   43.99   45.60   43.16
    BBR        11.80   11.64   11.58   13.30   12.46   11.83
    N = 100, p = 10,000
    glmnet     10.18   10.35    9.93   10.04    9.02    8.91
    l1logreg  130.27  124.88  124.18  129.84  137.21  159.54
    BBR        45.72   47.50   47.46   48.49   56.29   60.21

Table 3: Timings (seconds) for logistic model with lasso penalty and sparse features (95% zero). Total time for ten-fold cross-validation over a grid of 100 λ values.

             Name         Type      N       p        glmnet    l1logreg   BBR/BMR
    Dense    Cancer       14 class  144     16,063   2.5 mins             2.1 hrs
             Leukemia     2 class   72      3571     2.50      55.0       450
    Sparse   InternetAd   2 class   2359    1430     5.0       20.9       34.7
             NewsGroup    2 class   11,314  777,811  2 mins    3.5 hrs

Table 4: Timings (seconds, unless stated otherwise) for some real datasets. For the Cancer, Leukemia and InternetAd datasets, times are for ten-fold cross-validation using 100 values of λ; for NewsGroup we performed a single run with 100 values of λ, with λ_min = 0.05 λ_max.

5.4. Other comparisons

When making comparisons, one invariably leaves out someone's favorite method. We left out our own glmpath (Park and Hastie 2007b) extension of lars for GLMs (Park and Hastie 2007a), since it does not scale well to the size of problems we consider here. Two referees of an earlier draft of this paper suggested two methods of which we were not aware. We ran a single benchmark against each of these using the Leukemia data, fitting models at 100 values of λ in each case.

OWL-QN: Orthant-Wise Limited-memory Quasi-Newton Optimizer for ℓ₁-regularized Objectives (Andrew and Gao 2007a,b). The software is written in C++, and available from the authors upon request.

The R package penalized (Goeman 2009b,a), which fits GLMs using a fast implementation of gradient ascent.

    glmnet        0.34 (MacBook Pro)   0.13 (HP Linux server)
    penalized    10.31
    OWL-QN      314.35

Table 5: Timings (seconds) for the Leukemia dataset, using 100 λ values. These timings were performed on two different platforms, which were different again from those used in the earlier timings in this paper.
Table 5 shows these comparisons (on two different machines); glmnet is considerably faster in both cases.

6. Selecting the tuning parameters

The algorithms discussed in this paper compute an entire path of solutions (in λ) for any particular model, leaving the user to select a particular solution from the ensemble. One general approach is to use prediction error to guide this choice. If a user is data rich, they can set aside some fraction (say a third) of their data for this purpose. They would then evaluate the prediction performance at each value of λ, and pick the model with the best performance.

[Figure 2 about here: cross-validation curves plotted against log(Lambda); top panel: mean squared error for the Gaussian family; bottom panels: binomial deviance and misclassification error for the binomial family.]

Figure 2: Ten-fold cross-validation on simulated data. The first row is for regression with a Gaussian response, the second row logistic regression with a binomial response. In both cases we have 1000 observations and 100 predictors, but the response depends on only 10 predictors. For regression we use mean-squared prediction error as the measure of risk. For logistic regression, the left panel shows the mean deviance (minus twice the log-likelihood on the left-out data), while the right panel shows misclassification error, which is a rougher measure. In all cases we show the mean cross-validated error curve, as well as a one-standard-deviation band. In each figure the left vertical line corresponds to the minimum error, while the right vertical line corresponds to the largest value of lambda such that the error is within one standard error of the minimum (the so-called "one-standard-error" rule). The top of each plot is annotated with the size of the models.

Alternatively, they can use K-fold cross-validation (Hastie et al. 2009, for example), where the training data is used both for training and testing in an unbiased way. Figure 2 illustrates cross-validation on a simulated dataset. For logistic regression, we sometimes use the binomial deviance rather than misclassification error, since the former is smoother. We often use the "one-standard-error" rule when selecting the best model; this acknowledges the fact that the risk curves are estimated with error, so it errs on the side of parsimony (Hastie et al. 2009). Cross-validation can be used to select α as well, although it is often viewed as a higher-level parameter and chosen on more subjective grounds.
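In the accompanying R package this selection is automated by cv.glmnet(). A brief example in the spirit of Figure 2; the data-generation details here are our own, chosen only for illustration.

    library(glmnet)
    set.seed(1)
    n <- 1000; p <- 100
    x <- matrix(rnorm(n * p), n, p)
    beta <- c(rep(1, 10), rep(0, p - 10))       # response depends on only 10 predictors
    y <- drop(x %*% beta) + rnorm(n)

    fit   <- glmnet(x, y, alpha = 1)            # lasso path over a decreasing lambda sequence
    cvfit <- cv.glmnet(x, y, nfolds = 10)       # ten-fold cross-validation, as in Figure 2
    plot(cvfit)                                 # cross-validation curve with standard-error band
    c(lambda.min = cvfit$lambda.min,            # lambda with minimum cross-validated error
      lambda.1se = cvfit$lambda.1se)            # the "one-standard-error" rule
    coef(cvfit, s = "lambda.1se")               # coefficients at the selected lambda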
7. Discussion

Cyclical coordinate descent methods are a natural approach for solving convex problems with ℓ₁ or ℓ₂ constraints, or mixtures of the two (elastic net). Each coordinate-descent step is fast, with an explicit formula for each coordinate-wise minimization. The method also exploits the sparsity of the model, spending much of its time evaluating only inner products for variables with non-zero coefficients. Its computational speed for both large N and large p is quite remarkable.

An R-language package glmnet is available under the General Public License (GPL-2) from the Comprehensive R Archive Network at http://CRAN.R-project.org/package=glmnet. Sparse data inputs are handled by the Matrix package. MATLAB functions (Jiang 2009) are available from http://www-stat.stanford.edu/~tibs/glmnet-matlab/.

Acknowledgments

We would like to thank Holger Hoefling for helpful discussions, and Hui Jiang for writing the MATLAB interface to our Fortran routines. We thank the associate editor, production editor and two referees who gave useful comments on an earlier draft of this article.

Friedman was partially supported by grant DMS-97-64431 from the National Science Foundation. Hastie was partially supported by grant DMS-0505676 from the National Science Foundation, and grant 2R01 CA 72028-07 from the National Institutes of Health. Tibshirani was partially supported by National Science Foundation Grant DMS-9971405 and National Institutes of Health Contract N01-HV-28183.

References

Andrew G, Gao J (2007a). OWL-QN: Orthant-Wise Limited-Memory Quasi-Newton Optimizer for L1-Regularized Objectives. URL http://research.microsoft.com/en-us/downloads/b1eb1016-1738-4bd5-83a9-370c9d498a03.

Andrew G, Gao J (2007b). "Scalable Training of L1-Regularized Log-Linear Models." In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pp. 33-40. ACM, New York, NY, USA. doi:10.1145/1273496.1273501.

Bates D, Maechler M (2009). Matrix: Sparse and Dense Matrix Classes and Methods. R package version 0.999375-30, URL http://CRAN.R-project.org/package=Matrix.

Candes E, Tao T (2007). "The Dantzig Selector: Statistical Estimation When p is much Larger than n." The Annals of Statistics, 35(6), 2313-2351.

Chen SS, Donoho D, Saunders M (1998). "Atomic Decomposition by Basis Pursuit." SIAM Journal on Scientific Computing, 20(1), 33-61.

Daubechies I, Defrise M, De Mol C (2004). "An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint." Communications on Pure and Applied Mathematics, 57, 1413-1457.

Dettling M (2004). "BagBoosting for Tumor Classification with Gene Expression Data." Bioinformatics, 20, 3583-3593.

Donoho DL, Johnstone IM (1994). "Ideal Spatial Adaptation by Wavelet Shrinkage." Biometrika, 81, 425-455.

Efron B, Hastie T, Johnstone I, Tibshirani R (2004). "Least Angle Regression." The Annals of Statistics, 32(2), 407-499.

Fan J, Li R (2005). "Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties." Journal of the American Statistical Association, 96, 1348-1360.

Friedman J (2008). "Fast Sparse Regression and Classification." Technical report, Department of Statistics, Stanford University. URL http://www-stat.stanford.edu/~jhf/ftp/GPSpub.pdf.

Friedman J, Hastie T, Hoefling H, Tibshirani R (2007). "Pathwise Coordinate Optimization." The Annals of Applied Statistics, 2(1), 302-332.

Friedman J, Hastie T, Tibshirani R (2008). "Sparse Inverse Covariance Estimation with the Graphical Lasso." Biostatistics, 9, 432-441.

Friedman J, Hastie T, Tibshirani R (2009). glmnet: Lasso and Elastic-Net Regularized Generalized Linear Models. R package version 1.1-4, URL http://CRAN.R-project.org/package=glmnet.

Fu W (1998). "Penalized Regressions: The Bridge vs. the Lasso." Journal of Computational and Graphical Statistics, 7(3), 397-416.

Genkin A, Lewis D, Madigan D (2007). "Large-Scale Bayesian Logistic Regression for Text Categorization." Technometrics, 49(3), 291-304.
Goeman J (2009a). "L1 Penalized Estimation in the Cox Proportional Hazards Model." Biometrical Journal. doi:10.1002/bimj.200900028. Forthcoming.

Goeman J (2009b). penalized: L1 (Lasso) and L2 (Ridge) Penalized Estimation in GLMs and in the Cox Model. R package version 0.9-27, URL http://CRAN.R-project.org/package=penalized.

Golub T, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, Coller H, Loh ML, Downing JR, Caligiuri MA, Bloomfield CD, Lander ES (1999). "Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring." Science, 286, 531-536.

Hastie T, Efron B (2007). lars: Least Angle Regression, Lasso and Forward Stagewise. R package version 0.9-7, URL http://CRAN.R-project.org/package=lars.

Hastie T, Rosset S, Tibshirani R, Zhu J (2004). "The Entire Regularization Path for the Support Vector Machine." Journal of Machine Learning Research, 5, 1391-1415.

Hastie T, Tibshirani R, Friedman J (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd edition. Springer-Verlag, New York.

Jiang H (2009). "A MATLAB Implementation of glmnet." Stanford University, URL http://www-stat.stanford.edu/~tibs/glmnet-matlab/.

Koh K, Kim SJ, Boyd S (2007a). "An Interior-Point Method for Large-Scale L1-Regularized Logistic Regression." Journal of Machine Learning Research, 8, 1519-1555.

Koh K, Kim SJ, Boyd S (2007b). l1logreg: A Solver for L1-Regularized Logistic Regression. R package version 0.1-1. Available from Kwangmoo Koh ([email protected]).

Krishnapuram B, Hartemink AJ (2005). "Sparse Multinomial Logistic Regression: Fast Algorithms and Generalization Bounds." IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(6), 957-968.

Kushmerick N (1999). "Learning to Remove Internet Advertisements." In AGENTS '99: Proceedings of the Third Annual Conference on Autonomous Agents, pp. 175-181. ACM, New York, NY, USA. ISBN 1-58113-066-X. doi:10.1145/301136.301186.

Lang K (1995). "NewsWeeder: Learning to Filter Netnews." In A Prieditis, S Russell (eds.), Proceedings of the 12th International Conference on Machine Learning, pp. 331-339. San Francisco.

Lee S, Lee H, Abbeel P, Ng A (2006). "Efficient L1 Logistic Regression." In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06). URL https://www.aaai.org/Papers/AAAI/2006/AAAI06-064.pdf.

Madigan D, Lewis D (2007). BBR, BMR: Bayesian Logistic Regression. Open-source stand-alone software, URL http://www.bayesianregression.org/.

Meier L, van de Geer S, Bühlmann P (2008). "The Group Lasso for Logistic Regression." Journal of the Royal Statistical Society B, 70(1), 53-71.

Osborne M, Presnell B, Turlach B (2000). "A New Approach to Variable Selection in Least Squares Problems." IMA Journal of Numerical Analysis, 20, 389-404.

Park MY, Hastie T (2007a). "L1-Regularization Path Algorithm for Generalized Linear Models." Journal of the Royal Statistical Society B, 69, 659-677.

Park MY, Hastie T (2007b). glmpath: L1 Regularization Path for Generalized Linear Models and Cox Proportional Hazards Model. R package version 0.94, URL http://CRAN.R-project.org/package=glmpath.
Ramaswamy S, Tamayo P, Rifkin R, Mukherjee S, Yeang C, Angelo M, Ladd C, Reich M, Latulippe E, Mesirov J, Poggio T, Gerald W, Loda M, Lander E, Golub T (2002). "Multiclass Cancer Diagnosis Using Tumor Gene Expression Signature." Proceedings of the National Academy of Sciences, 98, 15149-15154.

R Development Core Team (2009). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org/.

Rosset S, Zhu J (2007). "Piecewise Linear Regularized Solution Paths." The Annals of Statistics, 35(3), 1012-1030.

Shevade K, Keerthi S (2003). "A Simple and Efficient Algorithm for Gene Selection Using Sparse Logistic Regression." Bioinformatics, 19, 2246-2253.

Tibshirani R (1996). "Regression Shrinkage and Selection via the Lasso." Journal of the Royal Statistical Society B, 58, 267-288.

Tibshirani R (1997). "The Lasso Method for Variable Selection in the Cox Model." Statistics in Medicine, 16, 385-395.

Tseng P (2001). "Convergence of a Block Coordinate Descent Method for Nondifferentiable Minimization." Journal of Optimization Theory and Applications, 109, 475-494.

Van der Kooij A (2007). Prediction Accuracy and Stability of Regression with Optimal Scaling Transformations. Ph.D. thesis, Department of Data Theory, University of Leiden. URL https://openaccess.leidenuniv.nl/dspace/handle/1887/12096.

Wu T, Chen Y, Hastie T, Sobel E, Lange K (2009). "Genome-Wide Association Analysis by Penalized Logistic Regression." Bioinformatics, 25(6), 714-721.

Wu T, Lange K (2008). "Coordinate Descent Procedures for Lasso Penalized Regression." The Annals of Applied Statistics, 2(1), 224-244.

Yuan M, Lin Y (2007). "Model Selection and Estimation in Regression with Grouped Variables." Journal of the Royal Statistical Society B, 68(1), 49-67.

Zhu J, Hastie T (2004). "Classification of Expression Arrays by Penalized Logistic Regression." Biostatistics, 5(3), 427-443.

Zou H (2006). "The Adaptive Lasso and its Oracle Properties." Journal of the American Statistical Association, 101, 1418-1429.

Zou H, Hastie T (2004). elasticnet: Elastic Net Regularization and Variable Selection. R package version 1.02, URL http://CRAN.R-project.org/package=elasticnet.

Zou H, Hastie T (2005). "Regularization and Variable Selection via the Elastic Net." Journal of the Royal Statistical Society B, 67(2), 301-320.

A. Proof of Theorem 1

We have

    c_j = \arg\min_t \sum_{\ell=1}^K \left[ \tfrac{1}{2}(1-\alpha)(\beta_{j\ell} - t)^2 + \alpha|\beta_{j\ell} - t| \right].    (32)

Suppose α ∈ (0, 1). Differentiating with respect to t (using a sub-gradient representation), we have

    \sum_{\ell=1}^K \left[ -(1-\alpha)(\beta_{j\ell} - t) - \alpha\, s_{j\ell} \right] = 0,    (33)

where s_jℓ = sign(β_jℓ − t) if β_jℓ ≠ t and s_jℓ ∈ [−1, 1] otherwise. This gives

    t = \bar\beta_j + \frac{\alpha}{(1-\alpha)K} \sum_{\ell=1}^K s_{j\ell}.    (34)

It follows that t cannot be larger than β_j^M, since then the second term above would be negative and this would imply that t is less than β̄_j. Similarly t cannot be less than β̄_j, since then the second term above would have to be negative, implying that t is larger than β_j^M. Either conclusion contradicts the assumption β̄_j ≤ β_j^M, which proves the theorem.

Affiliation:
Trevor Hastie
Department of Statistics
Stanford University
California 94305, United States of America
E-mail: [email protected]
URL: http://www-stat.stanford.edu/~hastie/

Journal of Statistical Software, published by the American Statistical Association
Volume 33, Issue 1, January 2010
http://www.jstatsoft.org/, http://www.amstat.org/
Submitted: 2009-04-22; Accepted: 2009-12-15
