Deflated Sharpe Ratio
The Deflated Sharpe Ratio (DSR) is a statistical method used to determine whether the Sharpe ratio of an investment strategy is statistically significant. It was developed in 2014 by Marcos López de Prado at Guggenheim Partners and Cornell University, and David H. Bailey at Lawrence Berkeley National Laboratory. It corrects for selection bias, backtest overfitting, sample length, and non-normality in return distributions, providing a more reliable test of financial performance, especially when many trials are evaluated. Applying the DSR helps practitioners detect false investment strategies.

The DSR offers a more precise and robust adjustment for multiple testing than traditional methods such as the Šidák correction, because it explicitly models both the selection bias that arises from choosing the best among many trials and the estimation uncertainty inherent in Sharpe ratios. Unlike Šidák, which assumes independence and adjusts p-values based only on the number of tests, the DSR accounts for the variance of Sharpe estimates, the number of trials, and their effective independence, often estimated through clustering. This leads to a more realistic threshold for statistical significance, one that reflects the true probability of a false discovery in data-mined environments. As a result, the DSR is particularly well suited for finance, where researchers often conduct large-scale, correlated searches for profitable strategies without strong prior hypotheses.


Relation to the Sharpe Ratio

One of the most important statistics for assessing the performance of an investment strategy is the Sharpe Ratio (SR). Developed by William F. Sharpe, it is a widely used measure of risk-adjusted return, calculated as the annualized ratio of excess return over the risk-free rate to the standard deviation of returns. While useful, the Sharpe Ratio has important limitations, especially when applied to multiple strategy evaluations. Issues such as selection bias, where the best-performing strategy is chosen from a large set, and backtest overfitting, where a strategy is tailored to past data, can inflate the Sharpe Ratio, leading to misleading conclusions about a strategy's effectiveness. Additionally, the Sharpe Ratio assumes normally distributed returns, an assumption often violated in practice, and it does not take sample length into account.
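As a reference point for the quantity being deflated, the Sharpe ratio can be computed directly from a series of periodic returns. The following is a minimal sketch; the function name and the 252-day annualization convention are illustrative assumptions, not part of the DSR methodology itself.

```python
import numpy as np

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from periodic (e.g. daily) returns."""
    excess = np.asarray(returns, dtype=float) - risk_free
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Example: two years of simulated daily returns with a small positive drift.
rng = np.random.default_rng(0)
daily = rng.normal(loc=0.0005, scale=0.01, size=504)
sr = sharpe_ratio(daily)
```

Note that the DSR formulas below work with the non-annualized Sharpe ratio, i.e. the same computation without the `sqrt(periods_per_year)` factor.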


Applying the Deflated Sharpe Ratio in Practice


1. Get a record of all the trials.

To apply the DSR, researchers need a record of the investment performance, as returns (%), of every backtest they ran during the development of a single specific strategy. For example, suppose that when building a momentum-based strategy that trades at the end of day, 100 historical simulations were run to evaluate performance, and the best set of parameters was selected for the final strategy. All 100 simulations need to be recorded, each with the strategy's daily returns in %.


2. Estimating the Effective Number of Trials N.

In practice, many trials are not independent due to overlapping features. To estimate the effective number of independent trials N, López de Prado (2018) proposes three techniques for clustering similar strategies using unsupervised learning:
# The Optimal Number of Clusters (ONC) algorithm.
# Hierarchical clustering, which can be used to obtain a conservative lower bound for N.
# Spectral methods (e.g., the eigenvalue distribution of the correlation matrix), which can also provide estimates of N.

Tip: Multiple testing exercises should be carefully planned in advance, so as to avoid running an unnecessarily large number of trials. Investment theory, not computational power, should motivate which experiments are worth conducting.

Steps to estimate N:

2.1. Convert the correlation matrix to a distance matrix. To apply a clustering algorithm to the returns data, a statistical association measure (such as a correlation matrix) must be transformed into a distance matrix (such as angular distance), so that elements that are very similar to each other end up close together in the higher-dimensional space.

2.2. Apply a clustering algorithm to estimate the number of independent trials. The number of clusters N is an estimate of the number of independent trials.

2.3. Plot the block correlation matrix. Comparing a correlation matrix before and after clustering, blocks appear down the diagonal; each block corresponds to a cluster.

Tip: If you don't use the ONC algorithm to cluster, you can end up with blocks containing trials that don't match very closely. The ONC algorithm uses silhouette scores to make sure each trial is in the best cluster, at the expense of higher computational complexity and longer run times.
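The clustering route can be sketched with standard tools. The following minimal illustration uses SciPy's hierarchical clustering on the angular-distance matrix; the function name, the single-linkage choice, and the distance cutoff are assumptions for illustration, and this is not the ONC algorithm itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def effective_num_trials(returns_matrix, threshold=0.5):
    """Estimate the effective number of independent trials by
    hierarchically clustering trial return series.

    returns_matrix: (T, M) array, one column per backtest trial.
    threshold: cophenetic-distance cutoff (an assumed tuning choice).
    """
    corr = np.corrcoef(returns_matrix, rowvar=False)
    # Angular distance: highly correlated trials end up close together.
    dist = np.sqrt(0.5 * np.clip(1.0 - corr, 0.0, 2.0))
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    link = linkage(condensed, method="single")
    labels = fcluster(link, t=threshold, criterion="distance")
    return len(np.unique(labels)), labels
```

Trials derived from the same underlying signal have near-zero angular distance and merge into one cluster, while unrelated trials (correlation near zero, distance near 0.707) remain separate.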


3. Compute the Sharpe ratio variance, across clusters.

3.1. Calculate the Sharpe ratio for each cluster. Each cluster now forms a collection of return time series (in %). For each cluster, create a new time series that represents that cluster using the Inverse Variance Portfolio (IVP), and then compute the Sharpe Ratio of each IVP portfolio. One doesn't need to use the IVP; the goal is simply to form an aggregate cluster return time series, for which some weighting scheme must be used. An alternative is the minimum variance portfolio.

3.2. Compute the variance of these Sharpe Ratios, V[\{\widehat{SR}_n\}]. This quantity is used in the next step, where the False Strategy Theorem determines the expected maximum Sharpe ratio.
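A minimal sketch of this step follows; the function name is illustrative, and the IVP weighting is one of the aggregation schemes mentioned above, not the only valid choice.

```python
import numpy as np

def cluster_sharpe_variance(trial_returns, labels):
    """Variance of (non-annualized) Sharpe ratios across clusters.

    trial_returns: (T, M) array of per-trial returns.
    labels: cluster label for each trial, e.g. from a clustering step.
    Each cluster is aggregated with inverse-variance (IVP) weights.
    """
    sharpes = []
    for k in np.unique(labels):
        block = trial_returns[:, labels == k]
        w = 1.0 / block.var(axis=0, ddof=1)   # inverse-variance weights
        w /= w.sum()
        cluster_ret = block @ w               # aggregate cluster series
        sharpes.append(cluster_ret.mean() / cluster_ret.std(ddof=1))
    sharpes = np.array(sharpes)
    return sharpes.var(ddof=1), sharpes
```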


4. Compute the Expected Maximum Sharpe ratio using the False Strategy Theorem.

Using the equation from the False Strategy Theorem (FST) we can compute SR_0, the threshold Sharpe Ratio that reflects the highest Sharpe Ratio expected from N unskilled strategies:

: SR_0 = \sqrt{V[\{\widehat{SR}_n\}]} \left( (1 - \gamma) \Phi^{-1}\left[ 1 - \frac{1}{N} \right] + \gamma \Phi^{-1}\left[ 1 - \frac{1}{N}e^{-1} \right] \right)

Where:
* V[\{\widehat{SR}_n\}] is the cross-sectional variance of Sharpe Ratios across trials,
* \gamma is the Euler–Mascheroni constant (approximately 0.5772),
* e is Euler's number,
* \Phi^{-1} is the inverse standard normal CDF,
* N is the number of independent strategy trials.

Note: The FST highlights that the optimal outcome of an unknown number of historical simulations is right-unbounded: with enough trials, there is no Sharpe ratio large enough to reject the hypothesis that a strategy is false, i.e., that it is overfit and won't generalize to out-of-sample data.
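The SR_0 formula can be evaluated directly. A sketch, with an assumed function name:

```python
import numpy as np
from scipy.stats import norm

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def expected_max_sharpe(var_sharpe, n_trials):
    """Expected maximum Sharpe ratio among n_trials unskilled strategies
    (False Strategy Theorem approximation)."""
    return np.sqrt(var_sharpe) * (
        (1 - EULER_GAMMA) * norm.ppf(1 - 1.0 / n_trials)
        + EULER_GAMMA * norm.ppf(1 - 1.0 / (n_trials * np.e))
    )
```

SR_0 grows with both the number of trials and the cross-sectional dispersion of Sharpe ratios, and it scales linearly with their standard deviation.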


5. Compute the DSR for each cluster.

You now have all the variables needed to compute the DSR:

: \widehat{\text{DSR}} = \Phi \left( \frac{ (\widehat{SR}^* - SR_0)\sqrt{T-1} }{ \sqrt{ 1 - \hat{\gamma}_3 \widehat{SR}^* + \frac{\hat{\gamma}_4 - 1}{4} (\widehat{SR}^*)^2 } } \right)

Where:
* \widehat{SR}^* is the observed Sharpe Ratio (not annualized),
* SR_0 is the threshold Sharpe Ratio that reflects the highest Sharpe Ratio expected from N unskilled strategies,
* \hat{\gamma}_3 is the skewness of the returns,
* \hat{\gamma}_4 is the kurtosis of the returns,
* T is the returns' sample length,
* \Phi is the standard normal cumulative distribution function.

Notes:
* Readers may recognize that the DSR is the Probabilistic Sharpe Ratio (PSR), with SR_0 being the maximum expected Sharpe Ratio (estimated using the False Strategy Theorem) instead of a simple threshold SR (often 0).
* The PSR assumes that only one trial was run and is often used to determine whether the observed SR is greater than 0. To account for multiple testing, use the DSR.
* The DSR increases with:
** greater observed SRs,
** longer track records,
** positively skewed returns.
* The DSR decreases with:
** fatter tails (kurtosis).
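Putting the pieces together, a minimal sketch of the DSR computation follows. The function name is an assumption; the skewness and kurtosis come from `scipy.stats`, with raw (non-excess) kurtosis so that a normal distribution gives \hat{\gamma}_4 = 3.

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def deflated_sharpe_ratio(observed_sr, sr0, returns):
    """Probability that the observed (non-annualized) Sharpe ratio
    exceeds the noise benchmark sr0, adjusting for sample length,
    skewness, and kurtosis of the returns."""
    T = len(returns)
    g3 = skew(returns)
    g4 = kurtosis(returns, fisher=False)  # raw kurtosis: normal -> 3
    denom = np.sqrt(1 - g3 * observed_sr + (g4 - 1) / 4 * observed_sr**2)
    return norm.cdf((observed_sr - sr0) * np.sqrt(T - 1) / denom)
```

An observed Sharpe ratio above SR_0 yields a DSR above 0.5, and one below SR_0 yields a DSR below 0.5, as the formula implies.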


6. Complete the Template for Disclosing Multiple Tests.

6.1. Aggregate statistics into a table. Several peer-reviewed papers recommend aggregating the cluster statistics into a table, such as Exhibit 7 from "A Practitioner's Guide to the Optimal Number of Clusters Algorithm", with the following columns:
* Cluster: the index of the cluster; there are N clusters.
* Strat Count: the number of strategies included in that cluster.
* aSR: the annualized Sharpe Ratio of that cluster's inverse variance portfolio (IVP).
* SR: the non-annualized Sharpe Ratio of that cluster's IVP.
* Skew: the skewness of the returns of that cluster's IVP.
* Kurt: the kurtosis of the returns of that cluster's IVP.
* T: the number of observations in the cluster's IVP.
* sqrt(V[SR]): the square root of the variance of Sharpe Ratios computed in step 3.
* E[max SR]: the Expected Maximum Sharpe Ratio (SR_0) computed in step 4.
* DSR: the Deflated Sharpe Ratio for that cluster's IVP.

6.2. Plot the Sharpe Ratios for each cluster. One such figure shows the non-annualized Sharpe ratios of the 26 independent trials tested in the development of the strategy, with bars highlighted according to whether they passed the DSR at a 95% confidence level. If all clusters except one fail the DSR, this indicates that the strategy is overfit and is likely to be a false investment strategy.

6.3. Plot the cumulative returns of the strategies, with the total return in % on the y-axis and the time index on the x-axis. An unusually straight line reveals a strategy with outlier performance.


7. Derive a conclusion from these results.

As seen in the plot of cumulative returns, there is one outlier strategy, likely a false investment strategy, since its performance is very high relative to its own cluster and to the others. The bar plots show that all the cluster portfolios fail the DSR at a 95% confidence level, except for the one that includes this outlier strategy.


Mathematical Definitions


The Deflated Sharpe Ratio (DSR)

: \widehat{\text{DSR}} = \Phi \left( \frac{ (\widehat{SR}^* - SR_0)\sqrt{T-1} }{ \sqrt{ 1 - \hat{\gamma}_3 \widehat{SR}^* + \frac{\hat{\gamma}_4 - 1}{4} (\widehat{SR}^*)^2 } } \right)

Where:
* \widehat{SR}^* is the observed Sharpe Ratio (not annualized),
* SR_0 is the threshold Sharpe Ratio that reflects the highest Sharpe Ratio expected from N unskilled strategies,
* \hat{\gamma}_3 is the skewness of the returns,
* \hat{\gamma}_4 is the kurtosis of the returns,
* T is the returns' sample length,
* \Phi is the standard normal cumulative distribution function.

The threshold SR_0 is approximated by:

: SR_0 = \sqrt{V[\{\widehat{SR}_n\}]} \left( (1 - \gamma) \Phi^{-1}\left[ 1 - \frac{1}{N} \right] + \gamma \Phi^{-1}\left[ 1 - \frac{1}{N}e^{-1} \right] \right)

Where:
* V[\{\widehat{SR}_n\}] is the cross-sectional variance of Sharpe Ratios across trials,
* \gamma is the Euler–Mascheroni constant (approximately 0.5772),
* e is Euler's number,
* \Phi^{-1} is the inverse standard normal CDF,
* N is the number of independent strategy trials.


False Strategy Theorem: Statement and Proof

The False Strategy Theorem provides the theoretical foundation for the Deflated Sharpe Ratio (DSR) by quantifying how much the best Sharpe Ratio among many unskilled strategies is expected to exceed zero purely due to chance. Even if all tested strategies have true Sharpe Ratios of zero, the highest observed Sharpe Ratio will typically be positive and statistically significant—unless corrected. The DSR corrects for this inflation.


Statement

Let \{\widehat{SR}_n\}, n = 1, \dots, N, be N Sharpe Ratios independently drawn from a normal distribution with mean zero and variance \sigma^2. Then the expected maximum Sharpe Ratio among these N trials is approximately:

: SR_0 = \sigma \left( (1 - \gamma) \Phi^{-1}\left(1 - \frac{1}{N} \right) + \gamma \Phi^{-1}\left(1 - \frac{1}{Ne} \right) \right)

Where:
* \Phi^{-1} is the quantile function (inverse CDF) of the standard normal distribution,
* \gamma \approx 0.5772 is the Euler–Mascheroni constant,
* e \approx 2.718 is Euler's number,
* N is the number of independent trials.

This value SR_0 is the expected maximum Sharpe Ratio under the null hypothesis of no skill, and it serves as the baseline in H_0: SR = SR_0. It represents a benchmark that any observed Sharpe Ratio must exceed in order to be considered statistically significant.


Proof Sketch

Let X_1, X_2, \dots, X_N \sim \mathcal{N}(0, 1) be independent standard normal variables. The expected maximum of N such variables is approximated by:

: \mathbb{E}\left[ \max(X_1, \dots, X_N) \right] \approx (1 - \gamma) \Phi^{-1}\left(1 - \frac{1}{N} \right) + \gamma \Phi^{-1}\left(1 - \frac{1}{Ne} \right)

Now let \widehat{SR}_i \sim \mathcal{N}(0, \sigma^2) for each i. Then:

: \mathbb{E}\left[ \max(\widehat{SR}_1, \dots, \widehat{SR}_N) \right] = \sigma \, \mathbb{E}\left[ \max(X_1, \dots, X_N) \right]

Combining the two expressions gives:

: SR_0 = \sigma \left( (1 - \gamma) \Phi^{-1}\left(1 - \frac{1}{N} \right) + \gamma \Phi^{-1}\left(1 - \frac{1}{Ne} \right) \right)

If \sigma^2 is estimated as the cross-sectional variance of Sharpe Ratios, V[\{\widehat{SR}_n\}], then:

: SR_0 = \sqrt{V[\{\widehat{SR}_n\}]} \left( (1 - \gamma) \Phi^{-1}\left(1 - \frac{1}{N} \right) + \gamma \Phi^{-1}\left(1 - \frac{1}{Ne} \right) \right)

This completes the derivation.
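The approximation in the proof sketch can be checked numerically. The following Monte Carlo sketch simulates the maximum of N unskilled Sharpe ratios many times and compares the simulated mean to the closed-form approximation; all names and the simulation sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant
rng = np.random.default_rng(42)

N, sigma, n_sims = 100, 0.1, 20000
# Each row is one experiment: N unskilled Sharpe ratios drawn from
# N(0, sigma^2); we record the maximum of each experiment.
max_sr = rng.normal(0.0, sigma, size=(n_sims, N)).max(axis=1)

# Closed-form approximation from the False Strategy Theorem.
approx = sigma * ((1 - EULER_GAMMA) * norm.ppf(1 - 1 / N)
                  + EULER_GAMMA * norm.ppf(1 - 1 / (N * np.e)))
```

The simulated mean of the maximum lands close to `approx`, illustrating that even with zero true skill the best of 100 trials shows a clearly positive Sharpe ratio.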


Implication for the DSR

The False Strategy Theorem shows that in large-scale testing, even unskilled strategies will produce apparently "significant" Sharpe Ratios. To correct for this, the DSR adjusts the observed Sharpe Ratio \widehat{SR}^* by subtracting the expected maximum from noise, SR_0, and scaling by the standard error around the null hypothesis:

: \widehat{\text{DSR}} = \Phi \left( \frac{ (\widehat{SR}^* - SR_0)\sqrt{T-1} }{ \sqrt{ 1 - \hat{\gamma}_3 \widehat{SR}^* + \frac{\hat{\gamma}_4 - 1}{4} (\widehat{SR}^*)^2 } } \right)

This yields the probability that the observed Sharpe Ratio reflects true skill rather than selection bias or overfitting. The DSR is more accurate than methods based on the Šidák correction because it takes into account the dispersion across trials, V[\{\widehat{SR}_n\}].


Confidence and Power of the Sharpe Ratio under Multiple Testing

To assess the significance of Sharpe Ratios under multiple testing, López de Prado (2018) derives closed-form expressions for the Type I and Type II errors.


Confidence

The DSR is the probability of observing a Sharpe ratio less extreme than the estimated \widehat{SR}^*, subject to H_0: SR \leq 0 being true, where the multiple-testing-adjusted baseline is SR_0. This can also be interpreted as the maximum confidence with which the null hypothesis can be rejected after observing \widehat{SR}^*:

: \text{Confidence} = P(\widehat{SR} < \widehat{SR}^* \mid H_0) = \Phi \left( \frac{ \widehat{SR}^* - SR_0 }{ \sigma_{SR_0} } \right)

where the standard deviation around the null hypothesis is:

: \sigma_{SR_0} = \sqrt{ \frac{ 1 - \hat{\gamma}_3 SR_0 + \frac{\hat{\gamma}_4 - 1}{4} SR_0^2 }{ T - 1 } }


Power

The power of a test is the proportion of positives that are correctly identified. This is also known in machine learning as the test's true positive rate or recall, and in medicine as sensitivity. Let SR_1 be the expected value under the alternative hypothesis, H_1: SR > 0. For instance, this may be the average Sharpe ratio observed among strategies that have yielded positive excess returns. Then the false negative rate (\beta, the type II error) is the probability of not rejecting H_0 given that H_1 is true:

: \beta = P(\widehat{SR} < SR_c \mid H_1) = \Phi \left( \frac{ SR_c - SR_1 }{ \sigma_{SR_1} } \right)

where \alpha is the false positive rate (type I error), and:

: SR_c = SR_0 + \sigma_{SR_0} \Phi^{-1}(1 - \alpha)

: \sigma_{SR_1} = \sqrt{ \frac{ 1 - \hat{\gamma}_3 SR_1 + \frac{\hat{\gamma}_4 - 1}{4} SR_1^2 }{ T - 1 } }

Finally, power is the probability of rejecting the null hypothesis when it is false, namely:

: P(\widehat{SR} \geq SR_c \mid H_1) = 1 - \beta

The above equations reveal that power (1 - \beta) decreases with the number of trials N, through the effect that SR_0 has on SR_c. These equations quantify the reliability of observed Sharpe Ratios under multiple testing and return non-normality. They can be used to assess the sample size needed to reject H_0 with a given power 1 - \beta.
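A minimal sketch of the power calculation, following the equations above; the function name and the normal-moment defaults (\hat{\gamma}_3 = 0, \hat{\gamma}_4 = 3) are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def sharpe_test_power(sr0, sr1, T, g3=0.0, g4=3.0, alpha=0.05):
    """Power of the Sharpe-ratio test under multiple testing.

    sr0: expected max Sharpe under H0 (from the False Strategy Theorem).
    sr1: Sharpe ratio assumed under the alternative H1.
    g3, g4: skewness and raw kurtosis of returns (normal defaults).
    """
    def sigma(sr):
        # Standard deviation of the Sharpe estimate around a given SR.
        return np.sqrt((1 - g3 * sr + (g4 - 1) / 4 * sr**2) / (T - 1))
    sr_c = sr0 + sigma(sr0) * norm.ppf(1 - alpha)  # rejection threshold
    beta = norm.cdf((sr_c - sr1) / sigma(sr1))     # type II error
    return 1 - beta
```

As the equations predict, power rises with the sample length T and with SR_1, and falls as SR_0 (and hence the number of trials) grows.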


Minimum Track Record Length

A related concept is the Minimum Track Record Length (MinTRL), which computes the minimum sample size T needed such that the null hypothesis SR_0 is rejected with confidence DSR^*, given an observed \widehat{SR}^*. Formally, the problem can be stated as \text{MinTRL} = \min_T \{ T : \widehat{\text{DSR}} \geq DSR^* \}, with solution:

: \text{MinTRL} = 1 + \left( 1 - \hat{\gamma}_3 SR_0 + \frac{\hat{\gamma}_4 - 1}{4} SR_0^2 \right) \left( \frac{ \Phi^{-1}(DSR^*) }{ \widehat{SR}^* - SR_0 } \right)^2

For example, given an observed annualized \widehat{SR}^* = 0.95, approximately 3 years' worth of daily strategy returns are needed to reject the null hypothesis H_0: SR_0 = 0 with confidence 95%. This provides mathematical support to the common expectation among investors that a hedge fund must produce a track record with a minimum length of 3 years, which may be reduced to 2 years for Sharpe ratios above 1.15. It is important to understand MinTRL as a minimum requirement, since this assumes a single trial (more trials require longer track records).
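A sketch of the MinTRL computation consistent with the formula above; the function name is assumed, and with sr0 = 0 and normal return moments the formula reduces to the single-trial benchmark used in the 3-year example.

```python
import numpy as np
from scipy.stats import norm

def min_track_record_length(observed_sr, sr0, dsr_target=0.95,
                            g3=0.0, g4=3.0):
    """Minimum number of observations needed to reject H0 at sr0
    with confidence dsr_target, given a non-annualized observed_sr.

    g3, g4: skewness and raw kurtosis of returns (normal defaults).
    """
    z = norm.ppf(dsr_target)
    moment_adj = 1 - g3 * sr0 + (g4 - 1) / 4 * sr0**2
    return 1 + moment_adj * (z / (observed_sr - sr0)) ** 2
```

With a daily Sharpe ratio corresponding to an annualized 0.95 (i.e. 0.95 / sqrt(252)) and sr0 = 0, the result is roughly 756 daily observations, about 3 trading years, matching the example in the text.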


See also

* Sharpe Ratio
* Backtesting
* Overfitting
* Selection bias
* Multiple comparisons problem


References

* Bailey, D. H., Borwein, J., & López de Prado, M. (2014): "Pseudo-Mathematics and Financial Charlatanism: The Effects of Backtest Overfitting on Out-of-Sample Performance". ''Notices of the American Mathematical Society'', 61(5), pp. 458-471.
* Bailey, D. H., & López de Prado, M. (2014): "The Deflated Sharpe Ratio: Correcting for Selection Bias, Backtest Overfitting, and Non-Normality". ''The Journal of Portfolio Management'', 40(5), pp. 94-107.
* López de Prado, M., & Bailey, D. H. (2021): "The False Strategy Theorem: A Financial Application of Experimental Mathematics". ''American Mathematical Monthly'', 128(9), pp. 825-831.
* López de Prado, M., & Lewis, M. J. (2019): "Detection of False Investment Strategies Using Unsupervised Learning Methods". ''Quantitative Finance'', 19(9), pp. 1555-1565.
* López de Prado, M. (2019): "A Data Science Solution to the Multiple-Testing Crisis in Financial Research". ''Journal of Financial Data Science'', 1(1), pp. 99-110.
* López de Prado, M. (2022): "Type I and Type II Errors of the Sharpe Ratio under Multiple Testing". ''The Journal of Portfolio Management'', 49(1), pp. 39-46.