For example, the objective function used in LASSO (L1-penalized regression) is of the form

L(\beta) = \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1
where, for the genomics problem, y is the phenotype vector, X the matrix of genomes, beta the vector of effect sizes, and lambda the penalization. Optimizing this function seems to require access to the full matrix X and vector y -- i.e., to potentially all the genomes and phenotypes at once. Is there a modified version of the algorithm that works on summary statistics, where only subsets of X and y are available? Carson Chow has advocated this approach to me for some time. If one can separately estimate X'X (the LD matrix of genomic correlations) and gather X'y (phenotype-SNP correlations) from summary statistics, then LASSO over siloed data may become a reality. Of course, the devil is in the details. The paper below describes an approach to this problem.
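To see why summary statistics might suffice: expanding the squared error gives (1/2n)(y'y - 2 beta'X'y + beta'X'X beta) + lambda ||beta||_1, which depends on the data only through X'X and X'y (y'y is just a constant). The sketch below is not lassosum itself, only a minimal coordinate-descent illustration of this point; it assumes standardized genotypes and phenotype, so that R is an estimate of X'X/n (an LD/correlation matrix, e.g. from a reference panel) and r an estimate of X'y/n (SNP-phenotype correlations from summary statistics). The function names and arguments are made up for illustration.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator used in coordinate-descent LASSO."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_from_summary_stats(R, r, lam, n_iter=100, tol=1e-6):
    """
    Illustrative coordinate-descent LASSO using only summary-level inputs
    (not the lassosum algorithm itself):
      R   : p x p SNP-SNP correlation (LD) matrix, estimating X'X / n
      r   : length-p vector of SNP-phenotype correlations, estimating X'y / n
      lam : L1 penalty strength
    Assumes genotypes and phenotype are standardized, so diag(R) = 1.
    Returns the estimated effect-size vector beta.
    """
    p = len(r)
    beta = np.zeros(p)
    for _ in range(n_iter):
        beta_old = beta.copy()
        for j in range(p):
            # Partial residual correlation for SNP j, computed from R and r only
            rho_j = r[j] - R[j] @ beta + R[j, j] * beta[j]
            beta[j] = soft_threshold(rho_j, lam) / R[j, j]
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return beta
```

In this toy version one would call, say, beta_hat = lasso_from_summary_stats(R, r, lam=0.01), with R estimated from a reference panel and r derived from published GWAS summary statistics. The paper below works in this general framework but adds the details that matter in practice, including how to choose the tuning parameter by pseudovalidation when no validation phenotype is available.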
Polygenic scores via penalized regression on summary statistics
(See also Bayesian large-scale multiple regression with summary statistics from genome-wide association studies.)
Timothy Mak, Robert Milan Porsch, Shing Wan Choi, Xueya Zhou, Pak Chung Sham
doi: https://doi.org/10.1101/058214
Polygenic scores (PGS) summarize the genetic contribution of a person's genotype to a disease or phenotype. They can be used to group participants into different risk categories for diseases, and are also used as covariates in epidemiological analyses. A number of possible ways of calculating polygenic scores have been proposed, and recently there has been much interest in methods that incorporate information available in published summary statistics. As there is no inherent information on linkage disequilibrium (LD) in summary statistics, a pertinent question is how we can make use of LD information available elsewhere to supplement such analyses. To answer this question we propose a method for constructing PGS using summary statistics and a reference panel in a penalized regression framework, which we call lassosum. We also propose a general method for choosing the value of the tuning parameter in the absence of validation data. In our simulations, pseudovalidation often resulted in prediction accuracy comparable to using a dataset with a validation phenotype, and was clearly superior to the conservative option of setting the tuning parameter of lassosum to its lowest value. We also showed that lassosum achieved better prediction accuracy than simple clumping and p-value thresholding in almost all scenarios, and that it was substantially faster and more accurate than the recently proposed LDpred.