
Why is computing ridge regression with a Cholesky decomposition much quicker than using SVD?


By my understanding, for a matrix with n samples and p features:



  • Ridge regression using Cholesky decomposition takes O(p^3) time

  • Ridge regression using SVD takes O(p^3) time

  • Computing the SVD when only the singular values are needed (and not U and V) takes O(np^2) time

I tested this out in scipy on both random and real-world data with p > n (p = 43264, n = 1750) and found ridge regression with a Cholesky decomposition to be much quicker than computing it using the SVD, and even much quicker than just computing the singular values of the SVD. Why is this?



import numpy as np
from sklearn import linear_model
import scipy.linalg
import time

x = np.random.rand(1750, 43264)
y = np.random.rand(1750)

old_time = time.time()
clf = linear_model.Ridge(alpha=1.0, solver='cholesky')
clf.fit(x, y)
print("Time taken to solve Ridge with cholesky: ", time.time() - old_time)

old_time = time.time()
clf = linear_model.Ridge(alpha=1.0, solver='svd')
clf.fit(x, y)
print("Time taken to solve Ridge with SVD: ", time.time() - old_time)

old_time = time.time()
scipy.linalg.svd(x, full_matrices=False)
print("Time taken for SVD", time.time() - old_time)

old_time = time.time()
scipy.linalg.svd(x, full_matrices=False, compute_uv=False)
print("Time taken for SVD, just s", time.time() - old_time)


Output:



Time taken to solve Ridge with cholesky: 3.2336008548736572
Time taken to solve Ridge with SVD: 118.47378492355347
Time taken for SVD 92.01217150688171
Time taken for SVD, just s 44.7129647731781









regression ridge-regression svd scipy cholesky






asked 6 hours ago by pighead10









  • Cholesky decomposes a square symmetric matrix (such as a correlation matrix) and is itself much faster than SVD, which in general decomposes a rectangular matrix (such as a dataset) into 3 matrices.
    – ttnphns, 5 hours ago
















1 Answer

First, we note that ridge regression can be transformed into an OLS problem via the data augmentation trick. This adds $p$ rows to the design matrix $\mathbf{X}$, so the new matrix $\mathbf{X}_\lambda$ has dimensions $(n + p) \times p$.
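
To make the augmentation concrete, here is a minimal numpy sketch (my own illustration, not necessarily how scikit-learn implements its 'cholesky' solver): appending $\sqrt{\lambda}\,\mathbf{I}_p$ below $\mathbf{X}$ and $p$ zeros below $y$ turns the ridge problem into an ordinary least squares problem.

import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 20, 1.0          # small sizes so the p x p identity block is cheap; lam is the ridge penalty
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Augmented OLS problem: stack sqrt(lam) * I below X and p zeros below y.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])   # shape (n + p, p)
y_aug = np.concatenate([y, np.zeros(p)])

# Plain least squares on the augmented data ...
beta_ols_on_aug = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

# ... equals the ridge solution from the normal equations (X^T X + lam I) beta = X^T y.
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

assert np.allclose(beta_ols_on_aug, beta_ridge)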



We can then look up the run time for various OLS algorithms, for example in the book Least Squares Data Fitting with Applications or in Golub's own book on Matrix Computations. (Golub was one of the inventors of the currently state-of-the-art Golub-Reinsch SVD algorithm.) Since linking to Google Books is not very reliable, these costs are also summarized in other standard references on numerical linear algebra.



We see that the relevant costs for the non-ridge case, assuming $\mathbf{X}$ is $n \times p$, are:

$$ C_{LU} = 2 n p^2 - \frac{2}{3} p^3 \tag{1} $$

$$ C_{SVD} = 4 n p^2 + 8 p^3 \tag{2} $$



To obtain equations for ridge regression, we need to substitute $n \rightarrow n + p$ to correctly account for the data augmentation:

$$ C_{LU} = 2 n p^2 + \frac{4}{3} p^3 \tag{3} $$

$$ C_{SVD} = 4 n p^2 + 12 p^3 \tag{4} $$



Note that, unlike your example, $n > p$ for most typical regression problems. If we say that $n \approx p$, we can obtain simplified costs in only one variable:

$$ C_{LU} = \frac{10}{3} p^3 \tag{5} $$
$$ C_{SVD} = 16 p^3 \tag{6} $$



This is probably where those rough estimates you reference came from, but as you yourself discovered, this simplification hides important detail and makes inappropriate assumptions that don't fit your case. Note, for example, that the constant for SVD is much worse than for Cholesky, an important fact hidden by the big-$O$ notation.



What you are doing is exactly the opposite: you have $n \ll p$. That means the second term is much more important! By comparing the coefficients of the $p^3$ terms in equations (3) and (4), we can argue that the SVD approach will require roughly nine times as many floating point operations as the Cholesky approach when $n \ll p$.
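
Plugging the question's dimensions into equations (3) and (4) gives a rough sense of the gap (an operation count only, not a runtime prediction; the ratio comes out slightly below 9 because the smaller $n p^2$ terms are included):

n, p = 1750, 43264   # the question's dimensions

c_lu  = 2 * n * p**2 + (4 / 3) * p**3   # equation (3): Cholesky/LU on the augmented problem
c_svd = 4 * n * p**2 + 12 * p**3        # equation (4): SVD on the augmented problem

print(f"LU/Cholesky flops: {c_lu:.2e}")          # ~1.1e+14
print(f"SVD flops:         {c_svd:.2e}")         # ~9.8e+14
print(f"SVD / Cholesky:    {c_svd / c_lu:.1f}")  # ~8.6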



Beyond that, it's hard to know exactly what scipy is doing under the hood: is it calculating both $\mathbf{U}$ and $\mathbf{V}$ when doing the SVD? Is it using a randomized algorithm? Is it using data augmentation for ridge regression, or some other approach? And even once those details are known, it's hard to translate these theoretical operation counts into real-world run time, because libraries like LAPACK (scipy is almost surely using LAPACK or ATLAS under the hood) are so highly vectorized (taking advantage of CPU SIMD instructions) and often multithreaded that it's hard to estimate accurately from purely theoretical arguments. But it's safe to say that it's reasonable to expect Cholesky factorization to be much faster than SVD for a given problem, especially when $n \ll p$.






answered 5 hours ago by olooney
  • Explicit data augmentation would be a very slow way to compute the ridge solution when $n \ll p$... Everything that scales as $p^3$ can instead be rewritten to scale as $n^3$.
    – amoeba, 5 hours ago











  • Great answer. This problem arose because a paper I was following suggested, since n ≪ p, computing the SVD in order to grid-search for the best alpha, since the resulting computation for each alpha only takes O(pn^2). However, my results and your answer show that this would only take less time if there are more than ~10 different values of alpha to try, due to the huge initial overhead of the SVD.
    – pighead10, 4 hours ago
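
The rewriting that amoeba's comment alludes to can be sketched with the dual (kernel) form of ridge, $\hat\beta = \mathbf{X}^T (\mathbf{X}\mathbf{X}^T + \lambda \mathbf{I}_n)^{-1} y$, which only ever factorizes an $n \times n$ matrix. This is an illustration of the idea, not a claim about what scikit-learn or scipy do internally:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n, p, lam = 200, 2000, 1.0       # n << p, as in the question (scaled down to run quickly)
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Dual form: Cholesky-factorize only the n x n matrix X X^T + lam*I,
# then map the n dual coefficients back to the p primal coefficients.
K = X @ X.T + lam * np.eye(n)
dual_coef = cho_solve(cho_factor(K), y)
beta_dual = X.T @ dual_coef

# Primal form for comparison: a p x p solve, cubic in p instead of n.
beta_primal = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

assert np.allclose(beta_dual, beta_primal)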









