Forecasting the Equity Risk Premium: The Role of Technical Indicators
(with Christopher J. Neely, David E. Rapach and Jun Tu; current version: August 2013)
(An On-line Appendix)
Academic research has extensively used macroeconomic variables to forecast the U.S. equity risk premium, with little
attention paid to the technical indicators widely employed by practitioners. Our paper fills this gap by comparing the
forecasting ability of technical indicators with that of macroeconomic variables. Technical indicators display
statistically and economically significant in-sample and out-of-sample forecasting power, matching or exceeding that of
macroeconomic variables. Furthermore, technical indicators and macroeconomic variables provide complementary information
over the business cycle: technical indicators better detect the typical decline in the equity risk premium near
business-cycle peaks, while macroeconomic variables more readily pick up the typical rise in the equity risk premium near
cyclical troughs. In line with this behavior, we show that combining information from both technical indicators and
macroeconomic variables significantly improves equity risk premium forecasts versus using either type of information alone.
Overall, the substantial countercyclical fluctuations in the equity risk premium appear well captured by the combined
information in macroeconomic variables and technical indicators.
Management Science, forthcoming.
A New Anomaly: The Cross-Sectional Profitability of Technical Analysis
(with Yufeng Han and Ke Yang)
In this paper, we document that an application of a moving average
timing strategy of technical analysis to portfolios sorted by
volatility generates investment timing portfolios that outperform
the buy-and-hold strategy substantially. For high volatility
portfolios, the abnormal returns, relative to the CAPM and the
Fama-French three-factor models, are of great economic significance,
and are greater than those from the well known momentum strategy.
Moreover, they cannot be explained by market timing
ability, investor sentiment, or default and liquidity risks.
Similar results also hold if the portfolios are sorted based on other
proxies of information uncertainty.
Journal of Financial
and Quantitative Analysis, forthcoming.
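The moving-average timing rule at the heart of this paper can be sketched in a few lines. This is an illustrative reconstruction, not the authors' exact implementation: the 10-day window, daily frequency, and switching into a risk-free asset are all assumptions.

```python
import numpy as np

def ma_timing_returns(prices, returns, rf, window=10):
    """Moving-average timing: hold the risky portfolio when yesterday's price
    is above its trailing moving average, otherwise hold the risk-free asset.
    prices, returns, rf are aligned 1-D arrays; window is the MA lookback."""
    prices = np.asarray(prices, dtype=float)
    out = []
    for t in range(window, len(returns)):
        ma = prices[t - window:t].mean()   # trailing MA through day t-1
        in_market = prices[t - 1] > ma     # signal is known before day t
        out.append(returns[t] if in_market else rf[t])
    return np.array(out)

# Toy example: a trending price series keeps the rule invested.
prices = np.linspace(100, 120, 60)
rets = np.concatenate([[0.0], np.diff(prices) / prices[:-1]])
rf = np.full(60, 0.0001)
strat = ma_timing_returns(prices, rets, rf, window=10)
print(strat.mean() > rf.mean())  # -> True (uptrend keeps the rule invested)
```

In the paper the rule is applied to volatility-sorted portfolios; the toy series above only shows the mechanics of the signal.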
The Supply Factor in the Bond Market: Implications for
Bond Risk and Return
(with Longzhen Fan and Canlin Li)
Recent empirical studies suggest that demand and supply factors
have important effects on bond yields. Both the market segmentation and
preferred-habitat hypotheses have been used to explain these demand and
supply effects. In this paper, we use an affine preferred-habitat
term structure model and unique Chinese bond market data to
study these two hypotheses. The Chinese bond market is unique because
there exists an official term structure of lending rates, set
exogenously by the government, on the loans that serve as preferred-habitat
investors' alternative investments. We show that the demands of both
preferred-habitat investors and arbitrageurs affect bond yields
and returns. Moreover, we find that preferred-habitat
investors' alternative investment opportunities have the expected effect
on bond yields and returns. We further show that the
preferred-habitat and demand factors improve bond pricing and return
predictability in a no-arbitrage term structure model. Variance
decomposition analysis shows that the preferred-habitat factor
explains an important part of bond yield variations.
Journal of Fixed Income, 23, 2013, 62--81.
International Stock Return Predictability: What is the Role of the United States?
(with David E. Rapach and Jack K. Strauss; first draft,
July 2009; current version: May 22, 2012)
(An On-line Appendix)
We present significant evidence of out-of-sample equity premium predictability for a host
of industrialized countries over the postwar period. There are important differences, however,
in the nature of equity premium predictability between the United States and other developed
countries. Taken collectively, U.S. economic variables are significant out-of-sample predictors
of the U.S. equity premium, while lagged international stock returns have no predictive power.
In contrast, lagged international stock returns--especially lagged U.S. returns--substantially
outperform economic variables as out-of-sample equity premium predictors for non-U.S. countries,
pointing to a leading role for the United States with respect to international return predictability.
The leading role of the United States is consistent with information frictions in
international equity markets. In addition, the predictability patterns are enhanced during economic
downturns, linking return predictability to business-cycle fluctuations and the diffusion
of news on macroeconomic fundamentals across countries. The leading role of the United
States stands out during the recent global financial crisis: lagged U.S. stock returns deliver
especially sizable gains for forecasting the monthly equity premium in other countries, evidenced
by out-of-sample R^2 statistics of 10% or greater, more than triple the postwar average.
Journal of Finance, 68, 2013, 1633--1662.
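The out-of-sample R^2 statistic cited here is conventionally the proportional reduction in mean squared forecast error relative to the historical-average benchmark (the Campbell-Thompson measure); a minimal sketch:

```python
import numpy as np

def out_of_sample_r2(actual, forecast, hist_mean):
    """Campbell-Thompson out-of-sample R^2: proportional reduction in
    squared forecast error relative to the historical-average benchmark.
    All arguments are aligned 1-D arrays over the evaluation period."""
    actual, forecast, hist_mean = map(np.asarray, (actual, forecast, hist_mean))
    mse_model = np.sum((actual - forecast) ** 2)
    mse_bench = np.sum((actual - hist_mean) ** 2)
    return 1.0 - mse_model / mse_bench

# A forecast that halves every benchmark error gives R^2_OS = 0.75.
actual = np.array([0.02, -0.01, 0.03, 0.00])
bench = np.zeros(4)                      # toy historical-average forecast
model = actual - 0.5 * (actual - bench)  # halve each benchmark error
print(out_of_sample_r2(actual, model, bench))  # -> 0.75
```

A positive value means the predictor beats the historical average out of sample; values of 10% for monthly returns, as in the paper, are unusually large.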
Volatility Trading: What is the Role of the
Long-Run Volatility Component?
(with Yingzi Zhu; Current version: August, 2010)
In this paper, we study an investor's asset allocation problem
with a recursive utility and with tradable volatility that follows a two-factor stochastic volatility model.
Consistent with the findings of Liu and Pan (2003) and Egloff, Leippold, and Wu (2009)
under additive utility, we show that volatility trading
generates substantial hedging demand, and so the investor can benefit substantially from volatility trading.
However, unlike existing studies, we find that the impact of elasticity of intertemporal substitution on investment decisions
is of first-order importance in the two-factor stochastic volatility model
when the investor has access to the derivatives market to optimally hedge the persistent component of
the volatility shocks. Moreover, we study the economic impact of model and parameter misspecifications
and find that an investor can incur substantial economic losses if he
uses an incorrect one-factor model instead of the two-factor
model or if he incorrectly estimates one of the key parameters in
the two-factor model. In addition, we find that the elasticity of intertemporal substitution is a more sensible description
of an investor's attitude toward model and parameter misspecifications than the risk aversion parameter.
Journal of Financial and Quantitative Analysis, 47, 2012, 273--307.
Tests of Mean-Variance Spanning
(with Raymond Kan)
The paper presents a thorough study of mean-variance spanning tests: it
points out long-standing errors in the literature and provides geometric/economic
interpretations, small-sample distributions, and power analyses for the
likelihood ratio, Wald, and Lagrange
multiplier tests; compares these tests with each other and with
the stochastic discount factor approach; and proposes a new sequential test that
explicitly weighs economic significance in setting the size of the test.
Annals of Economics and Finance 13, 2012.
How Predictable Is the Chinese Stock Market? (in Chinese)
(with Jiang Fuwei,
David Rapach, Jack Strauss and Jun Tu)
We analyze return predictability for the Chinese stock market, including the aggregate market portfolio and
the components of the aggregate market, such as portfolios sorted on industry, size, book-to-market and ownership
concentration. Considering a variety of economic variables as predictors, both in-sample and out-of-sample tests
highlight significant predictability in the aggregate market portfolio of the Chinese stock market and
substantial differences in return predictability across components. Among industry portfolios, Finance
and insurance, Real estate, and Service exhibit the most predictability, while portfolios of small-cap,
low book-to-market ratio and low ownership concentration firms also display considerable predictability.
Two key findings provide economic explanations for component predictability: (i) based on a novel out-of-sample
decomposition, time-varying systematic risk premiums captured by the conditional CAPM largely account
for component predictability; (ii) industry concentration significantly explains differences in return
predictability across industries, consistent with the information-flow frictions emphasized by Hong, Torous, and Valkanov (2007).
Journal of Financial Research (金融研究) 9, 2011.
Markowitz Meets Talmud: A Combination of Sophisticated and Naive Diversification Strategies
(with Jun Tu)
(The Longer 2008 EFA version)
The modern portfolio theory pioneered by Markowitz (1952)
is widely used in practice and extensively taught to MBAs.
However, the estimated Markowitz portfolio rule and most of its extensions not only
underperform the naive 1/N rule (which invests equally across
N assets) in simulations, but also lose money on a risk-adjusted basis in many real data sets.
In this paper, we propose an optimal combination of the naive 1/N
rule with one of four sophisticated strategies---the
Markowitz rule, the Jorion (1986) rule, the MacKinlay and
Pastor (2000) rule, and the Kan and Zhou (2007) rule---as a
way to improve performance.
We find that the combined rules not only significantly improve
the sophisticated strategies, but also outperform the 1/N rule in most scenarios.
Since the combinations are theory-based,
our study may be interpreted as reaffirming the
usefulness of the Markowitz theory in practice.
Journal of Financial Economics 99, 2011.
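The combination idea is a convex mix of the naive 1/N weights with a sophisticated rule's weights. The sketch below fixes the combination coefficient delta for illustration; in the paper delta is chosen optimally, and the example weights are hypothetical:

```python
import numpy as np

def combine_with_naive(w_sophisticated, delta):
    """Combine a sophisticated portfolio rule with the naive 1/N rule:
    w_c = delta * w_sophisticated + (1 - delta) * w_1/N.
    The paper estimates the optimal delta; here it is an illustrative input."""
    w = np.asarray(w_sophisticated, dtype=float)
    n = len(w)
    w_naive = np.full(n, 1.0 / n)
    return delta * w + (1.0 - delta) * w_naive

# Example: shrink an aggressive estimated mean-variance portfolio halfway to 1/N.
w_mv = np.array([0.9, -0.3, 0.4])   # hypothetical estimated Markowitz weights
w_c = combine_with_naive(w_mv, delta=0.5)
print(w_c)  # -> [0.61666667 0.01666667 0.36666667]
```

Shrinking toward 1/N dampens extreme estimated positions, which is why the combined rules tend to be more robust to estimation error.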
Predicting Market Components Out of Sample: Asset Allocation Implications
(with Aiguo Kong, David
Rapach and Jack Strauss)
We analyze out-of-sample return predictability for components of the aggregate market, focusing on
the well-known Fama-French size/value-sorted portfolios. Employing a forecast combination approach based on a variety of
economic variables and lagged component returns as predictors, we find significant evidence of out-of-sample return
predictability for nearly all component portfolios. Moreover, return predictability is typically much stronger for
small-cap/high book-to-market value stocks. The pattern of component return predictability is enhanced during business-cycle
recessions, linking component return predictability to the real economy. Considering various component-rotation investment
strategies, we show that out-of-sample component return predictability can be exploited to substantially improve portfolio performance.
Journal of Portfolio Management 37, 2011, 29--41.
Cross-Sectional Asset Pricing Tests
(with Ravi Jagannathan and Ernst Schaumburg)
A major problem in finance is to understand why different financial assets earn vastly different returns on average. In this
paper, we survey various econometric approaches that have been developed to empirically examine asset
pricing models used to explain differences in the cross-section of security returns. The approaches range from regressions
to the generalized method of moments, and the associated asset pricing models are both conditional and unconditional. In
addition, we review some of the major empirical studies.
Annual Review of Financial
Economics 2, 2010, 49--74.
Bayesian Portfolio Analysis
(with Doron Avramov)
This paper reviews the literature on Bayesian portfolio analysis. Information about events,
macro conditions, asset pricing theories, and security-driving forces can serve as useful priors in
selecting optimal portfolios. Moreover, parameter uncertainty and model uncertainty are practical
problems encountered by all investors. The Bayesian framework neatly accounts for these
uncertainties, whereas standard statistical models often ignore them. We review Bayesian portfolio
studies both when asset returns are assumed to be independently and identically distributed and when they are
predictable through time. We cover a range of applications, from investing in single assets and
equity portfolios to mutual and hedge funds. We also outline existing challenges for future work.
Annual Review of Financial
Economics 2, 2010, 25--47.
Incorporating Economic Objectives into Bayesian Priors:
Portfolio Choice Under Parameter Uncertainty
(with Jun Tu; The First Version, April 2004)
(The Published Version)
Economic objectives are often ignored when
estimating parameters, though the loss of doing so can be substantial.
This paper proposes a way to allow Bayesian priors to reflect the economic objectives.
Using monthly returns of the Fama-French 25 size and book-to-market portfolios and
their three factors from January 1965 to December
2004, we find that investment performance under the
objective-based priors can be significantly
different from that under alternative priors, with
differences in terms of annual certainty-equivalent
returns greater than 10% in many cases.
In terms of out-of-sample performance,
the Bayesian rules under the objective-based priors can
outperform substantially some of the best rules developed in the classical framework.
Journal of Financial and Quantitative Analysis 45, 2010.
How Much Stock Return Predictability Can We Expect From an Asset Pricing Model?
First draft, September, 2008.
(The Published Version)
Stock market predictability is of considerable interest in both academic research and investment practice. Ross (2005)
provides a simple and elegant upper bound on the predictive regression R-squared: R^2 <= (1 + R_f)^2 Var(m)
for a given asset pricing model with kernel m, where R_f is the riskfree rate of return. In this paper, we tighten
this bound by a factor of the squared correlation between the default pricing kernel and the state variables of the economy.
Since the correlation can be substantially smaller than one, our bound can be much tighter than Ross's. An empirical
application illustrates that while Ross's bound is not binding, ours is.
Economics Letters 108, 2010.
Robust Portfolios: Contributions from Operations
Research and Finance
(with Frank J. Fabozzi and Dashan Huang)
In this paper we survey recent contributions from operations research
and finance to the theory of robust portfolio selection. Our survey
covers results derived not only in terms of the standard mean-variance objective, but also in
terms of two of the most popular recently developed risk measures, mean-VaR and mean-CVaR.
In addition, we review optimal estimation methods and Bayesian robust approaches.
Annals of Operations Research.
Limited Participation, Consumption,
and Saving Puzzles: A Simple Explanation and the Role of Insurance
(with Todd Gormley and Hong Liu)
In this paper, we use a simple model to illustrate that the existence of a large, negative
wealth shock and insufficient insurance against such a shock can potentially explain both the
limited stock market participation puzzle and the low-consumption-high-savings puzzle that
are widely documented in the literature. We then conduct an extensive empirical analysis on
the relation between household portfolio choices and access to private insurance and various
types of government safety nets, including social security and unemployment insurance. The
empirical results demonstrate that a lack of insurance against large, negative wealth shocks
is strongly correlated with lower participation rates and higher saving rates. Overall, the evidence
suggests an important role of insurance in household investment and savings decisions.
Journal of Financial Economics 96, 2010, 331--344.
Out-of-Sample Equity Premium Prediction:
Combination Forecasts and Links to the Real Economy
(with David Rapach and Jack Strauss)
While a host of economic variables have been identified in the literature with the apparent
in-sample ability to predict the equity premium, Welch and Goyal (2008) find that these variables
fail to deliver consistent out-of-sample forecasting gains relative to the historical average.
Arguing that substantial model uncertainty and instability seriously impair the forecasting
ability of individual predictive regression models, we recommend combining individual model
forecasts to improve out-of-sample equity premium prediction. Combining delivers statistically
and economically significant out-of-sample gains relative to the historical average on a
consistent basis over time. We provide two empirical explanations for the benefits of the forecast
combination approach: (i) combining forecasts incorporates information from numerous
economic variables while substantially reducing forecast volatility; (ii) combination forecasts
of the equity premium are linked to the real economy.
Review of Financial Studies 23, 2010, 821--862.
Is the Recent Financial Crisis Really a `Once-in-a-century' Event?
(with Yingzi Zhu)
(The Longer working paper version)
In the recent financial crisis, the Dow Jones Industrial Average
dropped about 54% from a high of 14164.53 on October 9, 2007 to a low of 6547.05 on March 9, 2009.
Alan Greenspan called this a ``once-in-a-century'' crisis. While we do not know how he drew his conclusion,
we show that the probability of a stock market drop of 50%
from its high within a century is about 90% based on the popular random walk
model of stock prices. With the broader S&P 500 index
and a more sophisticated asset pricing model that captures more risks in the economy,
the probability rises to above 99%. The message of this paper is that
a market drop of 50% or more is very likely in long-run stock market
investments, and investors should be prepared for it.
Financial Analysts Journal 66 (1), 2010, 24--27.
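The random-walk calculation can be illustrated with a small Monte Carlo. The drift and volatility below (8% and 20% per year) are assumed illustrative parameters, not the paper's calibration:

```python
import numpy as np

def prob_50pct_drawdown(mu=0.08, sigma=0.20, years=100, steps_per_year=12,
                        n_paths=2000, seed=0):
    """Simulate geometric-random-walk index paths and estimate the probability
    of at least a 50% drop from the running high within the horizon.
    mu/sigma are assumed annual log-return drift and volatility."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n = years * steps_per_year
    # Log-price increments; exponentiate the cumulative sum to get price paths.
    z = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt), (n_paths, n))
    prices = np.exp(np.cumsum(z, axis=1))
    running_max = np.maximum.accumulate(prices, axis=1)
    drawdown = 1.0 - prices / running_max
    return np.mean(drawdown.max(axis=1) >= 0.5)

print(prob_50pct_drawdown() > 0.5)  # a 50% drop is more likely than not
```

Even this coarse monthly simulation makes a 50% peak-to-trough drop within a century far more likely than not, in line with the paper's point.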
Beyond Black-Litterman: Letting
the Data Speak
The Black-Litterman model is a popular approach for asset
allocation by blending an investor's proprietary views with the
views of the market. However, the model ignores the
data-generating process, whose dynamics can have a significant impact
on future portfolio returns. This paper extends, in two ways, the Black-Litterman
model to allow Bayesian learning to exploit all available information:
the market views, the investor's proprietary views, and the
data. Our framework allows practitioners to combine insights from
the Black-Litterman model with the data to generate potentially
more reliable trading strategies and more robust portfolios.
Further, we show that many Bayesian learning tools can now be
readily applied to practical portfolio selections in conjunction
with the Black-Litterman model.
Journal of Portfolio Management 36 (1), 2009, 36--45.
What Will the Likely Range of My Wealth Be?
(with Raymond Kan)
The median is a better measure than the mean in evaluating the
long-term value of a portfolio. However, the standard plug-in
estimate of the median is too optimistic. It has a substantial
upward bias that can easily exceed a factor of two. In this paper,
we provide an unbiased forecast of the median of the long-term value
of a portfolio. In addition, we provide an unbiased forecast of an
arbitrary percentile of the long-term portfolio value distribution.
This allows us to construct the likely range of the long-term
portfolio value for any given confidence level. Finally, we provide
an unbiased forecast of the probability for the long-term portfolio
value falling into a given interval. Our unbiased estimators
provide a more accurate assessment of the long-term value of a
portfolio than the traditional estimators, and are useful for
long-term planning and investment.
Financial Analysts Journal 65 (4), 2009, 68--77.
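The upward bias of the plug-in median is easy to reproduce by simulation. The sketch below only demonstrates the bias under assumed iid normal log returns; it is not the paper's unbiased estimator:

```python
import numpy as np

def plugin_median_bias(mu=0.005, sigma=0.05, n_hist=120, horizon=480,
                       n_sims=4000, seed=0):
    """Monte Carlo the plug-in estimate of the median long-run value of $1.
    True median = exp(horizon * mu) under iid normal log returns; the plug-in
    replaces mu with the sample mean estimated from n_hist observations."""
    rng = np.random.default_rng(seed)
    true_median = np.exp(horizon * mu)
    # Sampling distribution of the sample mean of n_hist log returns.
    mu_hat = rng.normal(mu, sigma / np.sqrt(n_hist), n_sims)
    plugin = np.exp(horizon * mu_hat)
    return plugin.mean() / true_median   # average overstatement factor

# Expected overstatement ~ exp(horizon^2 * sigma^2 / (2 * n_hist)), large here.
print(plugin_median_bias() > 2.0)  # bias easily exceeds a factor of two
```

With a 10-year estimation window and a 40-year horizon, the plug-in median overstates the true median many times over on average, which is the bias the paper corrects.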
Technical Analysis: An Asset Allocation
Perspective on the Use of Moving Averages
(with Yingzi Zhu)
(The Longer 2007 EFA version)
In this paper, we analyze the usefulness of technical analysis,
specifically the widely used moving average trading rule, from an
asset allocation perspective. We show that when stock returns are
predictable, technical analysis adds value to commonly used
allocation rules that invest fixed proportions of wealth in stocks.
When there is uncertainty about predictability, which is likely in
practice, the fixed allocation rules combined with technical
analysis can outperform the prior-dependent optimal learning rule
when the prior is not too informative. Moreover, the technical
trading rules are robust to model specification, and they tend to
substantially outperform the model-based optimal trading strategies
when there is uncertainty about the model governing the stock price.
Journal of Financial Economics 92, 2009, 519--544.
On the Fundamental Law of Active Portfolio Management: How to Make Conditional Investments Unconditionally Optimal?
The fundamental law of active portfolio management
tells an active manager how to transform his alpha forecasts into the value-added
of his active portfolio by using a linear strategy with active positions proportional to the forecasts.
This linear strategy is conditionally optimal because it is
optimal each period conditional on the forecasts at that time.
However, the unconditional value-added (the value-added over the long haul or over multiple periods)
is usually what the manager ultimately strives for.
Under this unconditional objective, the linear strategy can
approach zero value-added if the forecasts or signals have a
high kurtosis. To overcome this problem, we provide an investment
strategy that maximizes the
unconditional value-added with the optimal use of conditional information.
Our strategy is nonlinear in the forecasts, but has a simple economic interpretation.
When the alpha forecasts are high, we invest less aggressively than
the linear strategy, and when the forecasts are low, we invest more
aggressively. In this way, we tend to smooth our value-added
over time, and hence, on a risk-adjusted basis, our long-term unconditional value-added will in most cases be
substantially higher than that based on the linear strategy, particularly when the alpha forecasts experience high kurtosis.
Journal of Portfolio Management 35 (1), 2008.
On the Fundamental Law of Active Portfolio Management: What Happens if Our Estimates Are Wrong?
The fundamental law of active portfolio management
pioneered by Grinold (1989) provides profound insights on the
creation process of managed funds. However, a key weakness of the
law and its various extensions is that they ignore the estimation
risk associated with the parameter inputs of the law.
We show that the estimation errors have a substantial impact on the
value-added of an actively managed portfolio, and they can easily
destroy all the value promised by the law
if they are not dealt
with carefully. To improve active managers' chances of beating benchmark indices,
we propose two methods, scaling and diversification,
that effectively minimize the impact of the estimation errors.
Journal of Portfolio Management 34 (4), 2008.
Asymmetries in Stock Returns: Statistical Tests and Economic Evaluation
(with Yongmiao Hong and Jun Tu)
In this paper, we provide a model-free test for asymmetric correlations in which stocks move
more often with the market when the market goes down than when it goes up. We also provide
such tests for asymmetric betas and covariances. In addition, we evaluate the economic significance
of incorporating asymmetries into investment decisions. When stocks are sorted by size, book-to-market and momentum, we find strong
evidence of asymmetry for both the size and momentum
portfolios, but no evidence for the book-to-market portfolios. Moreover, the asymmetries can be
of substantial economic importance for an investor with a disappointment aversion preference of
Ang, Bekaert and Liu (2005). If the investor's felicity function is of the power utility form and if
his coefficient of disappointment aversion is between 0.25 and 0.55, he can achieve over 2% annual
certainty-equivalent gains when he switches from a belief in symmetric stock returns to a belief
in asymmetric ones.
Review of Financial Studies 20, 2007.
Portfolio Choice with Parameter Uncertainty
(with Raymond Kan)
In this paper, we analytically derive the expected
loss function associated with using the sample mean and covariance matrix
of returns to estimate the optimal portfolio. Our
analytical results show that the standard plug-in approach that
replaces the population parameters by their sample estimates can
lead to very poor out-of-sample performance.
We further show that with parameter uncertainty, holding
the sample tangency portfolio and the riskless asset is never optimal.
An investor can benefit by holding some other risky portfolios that help reduce the
estimation risk. In particular, we show that a portfolio that
optimally combines the riskless asset, the sample tangency portfolio,
and the sample global minimum-variance portfolio dominates
a portfolio with just the riskless asset and the sample
tangency portfolio, suggesting that the presence of estimation risk
completely alters the theoretical recommendation of a two-fund separation theorem.
Journal of Financial and Quantitative Analysis 42, 2007.
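The two sample portfolios involved, tangency and global minimum-variance, can be computed directly. The mixing weights c below are illustrative inputs; the paper derives the optimal combination that minimizes estimation risk:

```python
import numpy as np

def three_fund_weights(mu_hat, sigma_hat, c):
    """Risky-asset weights mixing the sample tangency and sample global
    minimum-variance (GMV) portfolios: w = c[0]*w_tan + c[1]*w_gmv.
    The paper derives the optimal c; here it is an illustrative input."""
    inv = np.linalg.inv(sigma_hat)
    ones = np.ones(len(mu_hat))
    w_tan = inv @ mu_hat / (ones @ inv @ mu_hat)   # sample tangency (normalized)
    w_gmv = inv @ ones / (ones @ inv @ ones)       # sample GMV
    return c[0] * w_tan + c[1] * w_gmv

# Hypothetical sample moments for two risky assets.
mu_hat = np.array([0.10, 0.06])
sigma_hat = np.array([[0.04, 0.01], [0.01, 0.02]])
w = three_fund_weights(mu_hat, sigma_hat, c=(0.6, 0.4))
print(np.isclose(w.sum(), 1.0))  # both components sum to one, so the mix does
```

Tilting toward the GMV portfolio is attractive under estimation risk because the GMV weights do not depend on the noisily estimated means.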
Estimating and Testing Beta Pricing Models: Alternative
Methods and Their Performance in Simulations
(with Jay Shanken)
(A typo correction on the LR Estimator)
In this paper, we provide a comprehensive theoretical and small sample
study of the Fama and MacBeth (1973) two-pass
procedure that is fundamental in understanding to what extent
cross-sectional expected returns/values are explained by
certain factor attributes.
While existing studies use this procedure almost exclusively, we
show that alternative two-pass methods can have better small
sample performance. In addition, we provide tractable GMM
approaches that accommodate conditional heteroscedasticity.
Moreover, the risk premium estimates and t-ratios of the Fama and MacBeth procedure
provide no information on whether the model is misspecified or not, and
they can be misleadingly interpreted if the model is indeed
misspecified. We not only provide formal model misspecification tests,
but also show that various
estimation methods are useful in detecting model misspecification.
Journal of Financial Economics 84, 2007, 40--86.
Using Bootstrap to Test Portfolio Efficiency
To facilitate wide use of the bootstrap method in finance, this
paper shows by intuitive arguments and by simulations how it can
improve upon existing tests to allow less restrictive distributional
assumptions on the data and to yield more reliable (higher-order
accurate) asymptotic inference. In particular, we apply the method
to examine the efficiency of the CRSP value-weighted stock index, and to
test the well-known Fama and French (1993) three-factor model. We
find that existing tests tend to over-reject.
Annals of Economics and Finance 7, 2006, 217--249.
Portfolio Optimization under Asset Pricing Anomalies
(with Pin-Huang Chou and Wen-Shen Li)
Fama and French (1993) find that the
SMB and HML factors explain much of the
cross-section of stock returns that is unexplained by the CAPM,
whereas Daniel and Titman (1997) show that it is the characteristics of the stocks, rather than the factors, that are responsible.
But both arguments are based largely on expected return comparisons, and little is known about how much each of
the two explanations
matters to an investor's investment decisions in general and portfolio optimization in particular. In this paper, we show that a
mean-variance maximizing investor who exploits the
asset pricing anomaly of the CAPM
can achieve substantially greater economic gains than by simply holding the market index. Indeed, using Japanese data,
we find that the optimized portfolio constructed from the
characteristics-based model and based on the 200 largest stocks is the best performing one, with
monthly returns more than 0.81% (10.16% annualized)
above the Nikkei 225 index with no greater risk.
Japan & The World Economy 18, 2006, 121--142.
A New Variance Bound
on the Stochastic Discount Factor
In this paper, we construct a new variance bound on any stochastic
discount factor (SDF) of the form m=m(x) where x is a vector
of random state variables. In contrast to the well known
Hansen-Jagannathan bound that places a lower bound on the variance
of m(x), our bound tightens it by a ratio of (1/ρ_{x,m0})^2,
where ρ_{x,m0} is the multiple correlation coefficient
between x and the standard minimum-variance SDF, m0. In many
applications, the correlation is small, and hence our bound can
be substantially tighter than Hansen-Jagannathan's. For
example, when x is the gross growth rate of consumption, based
on Cochrane's (2001) estimates of market volatility and
ρ_{x,m0}, the new bound is 25 times greater than the Hansen-Jagannathan
bound, making it much
more difficult to explain the equity-premium puzzle based on
existing asset pricing models. In another example,
applying the new bound, with the growth rate of consumption as
a state variable, to the 25 size and book-to-market sorted portfolios
used by Fama and French (1993) yields a bound more than 100 times greater
than the Hansen-Jagannathan bound.
Journal of Business 79, 2006, 941--961.
Data-generating Process Uncertainty:
What Difference Does It Make in Portfolio Decisions?
As the usual normality assumption is firmly rejected by the data, investors face data-generating
process (DGP) uncertainty in making investment decisions. In this paper, we
propose a novel way to incorporate uncertainty about the DGP into portfolio analysis. We
find that accounting for fat tails leads to nontrivial changes in both parameter estimates and
optimal portfolio weights, but the certainty-equivalent losses associated with ignoring fat tails
are small. This suggests that the normality assumption works well in evaluating portfolio
performance for a mean-variance investor.
Journal of Financial Economics 72, 2004, 385--421.
What Determines Expected
International Asset Returns?
(with Campbell Harvey and Bruno Solnik)
This paper characterizes the forces that determine time-variation in expected
international asset returns. We offer a number of innovations. By
using the latent factor technique, we do not have to prespecify the sources of
risk. We solve for the latent premiums and characterize their time-variation.
We find evidence that the first factor premium resembles the expected return
on the world market portfolio. However, the inclusion of this premium alone
is not sufficient to explain the conditional variation in the returns. We find
evidence of a second factor premium which is related to foreign exchange risk.
Our sample includes new data on both international industry portfolios and
international fixed income portfolios. We find that the two latent factor model
performs better in explaining the conditional variation in asset returns than
a prespecified two-factor model. Finally, we show that differences in the risk
loadings are important in accounting for the cross-sectional variation in the returns.
Annals of Economics and Finance 3, 2002, 83--127.
On the Rate of Convergence of Discrete-Time Contingent Claims
This paper characterizes the rate of convergence of discrete-time multinomial option prices.
We show that it depends on the smoothness of the option payoff function, and is much lower than commonly believed because the
payoff functions are often of the all-or-nothing type and not continuously differentiable. We propose two methods, one of which is
to smooth the payoff function, that help yield the same rate of convergence as smooth payoff functions.
Mathematical Finance 10, 2000, 53--75.
Investment Horizon and the Cross Section of Expected Returns: Evidence from
the Tokyo Stock Exchange
and Yuan-Lin Hsu)
Using data from the Tokyo Stock Exchange, we study how beta, size, and
the ratio of book to market equity (BE/ME) account for the cross-section of expected stock returns over different investment
horizons. We find
that beta, whether or not adjusted for infrequent trading, fails to explain the cross-section
of monthly expected returns, but does a much better job over half-year
and one-year horizons. However, either size or BE/ME alone is still a significant
factor in explaining the cross-section of expected returns, though the size effect significantly
diminishes for longer horizons when beta is included as an additional independent variable.
Annals of Economics and Finance 1, 2000, 79--100.
Security Factors as Linear
Combinations of Economic Variables
A new framework is proposed to find the best linear combinations of economic
variables that optimally forecast security factors. In particular, we obtain such combinations
from the five economic variables of Chen et al. (Journal of Business 59, 383--403, 1986),
and obtain a new GMM test for the APT which is more robust than existing tests. In
addition, by using Fama and French's (1993) five factors, we test whether fewer factors
are sufficient to explain the average returns on 25 stock portfolios formed on size and
book-to-market. While the in-sample evidence is inconclusive, a three-factor model appears to perform
better out-of-sample than both four- and five-factor models.
Journal of Financial Markets 2, 1999, 403--432.
Testing Multi-beta Pricing Models
(with Raja Velu)
This paper presents a complete solution to the estimation and testing of multi-beta
models by providing a small-sample likelihood ratio test when the usual normality
assumption is imposed and an almost analytical GMM test when the normality assumption
is relaxed. Using 10 size portfolios from January 1926 to December 1994, we reject the
joint efficiency of the CRSP value-weighted and equal-weighted indices. We also apply the
tests to analyze a new version of Fama and French [Fama, E.F., French, K.R. 1993.
Common risk factors in the returns on stocks and bonds. Journal of Financial Economics 33, 3--56] three-factor model in addition to two
standard ones, and find that the new
version performs the best.
Journal of Empirical Finance 6, 1999, 219--241.
A Critique of the Stochastic Discount Factor Methodology
(with Raymond Kan)
In this paper, we point out that the widely used stochastic discount factor (SDF) methodology ignores a fully specified model
for asset returns. As a result, it suffers from two potential problems when asset returns follow a linear factor model. The
first problem is that the risk premium estimate from the SDF methodology is unreliable. The second problem is that the
specification test under the SDF methodology has very low power in detecting misspecified models. Traditional methodologies
typically incorporate a fully specified model for asset returns, and they can perform substantially better than the SDF methodology.
Journal of Finance 54, 1999, 1021--1048.
Going to Extremes: Correcting Simulation Bias in Exotic Option Valuation
(with Phil Dybvig and David Beaglehole)
Monte Carlo simulation is widely used in practice to value exotic options for which analytical formulas are not available. When
valuing those options that depend on extreme values of the underlying asset, convergence of the standard simulation is slow as
the time grid is refined, and even a daily simulation interval produces unacceptable errors. This article suggests
approximating the extreme value on a subinterval by a random draw from the known theoretical distribution for an extreme of a
Brownian bridge on the same interval. This approach provides reliable option values and retains the flexibility of simulations,
in that it allows great freedom in choosing a price process for the underlying asset or a joint process for the asset price,
its volatility, and other asset prices.
Financial Analysts Journal 53, 1997.
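The correction can be sketched in a few lines of Python (our minimal illustration under assumed lognormal dynamics; all parameter names are ours): between two simulated points a and b of the log-price, the subinterval maximum is drawn by inverting the Brownian-bridge extreme-value law P(M >= m) = exp(-2(m - a)(m - b) / (sigma^2 dt)).

```python
import math
import random

def bridge_max(a, b, sigma, dt, rng):
    """Draw the maximum of a Brownian bridge on [0, dt] with endpoints a, b
    and volatility sigma, by inverting P(M >= m) = exp(-2(m-a)(m-b)/(sigma^2 dt))."""
    u = rng.random()
    return 0.5 * (a + b + math.sqrt((a - b) ** 2 - 2.0 * sigma**2 * dt * math.log(u)))

def simulate_max_log_price(s0, mu, sigma, T, n_steps, rng):
    """Simulate a GBM log-price path and its bridge-corrected running maximum."""
    dt = T / n_steps
    x = math.log(s0)
    running_max = x
    for _ in range(n_steps):
        x_next = x + (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # Correct the discretization bias: the continuous path's maximum on this
        # subinterval exceeds max(x, x_next) with probability one.
        running_max = max(running_max, bridge_max(x, x_next, sigma, dt, rng))
        x = x_next
    return running_max

rng = random.Random(0)
m = simulate_max_log_price(100.0, 0.05, 0.2, 1.0, 52, rng)  # weekly grid
print("bridge-corrected max log-price:", m)
```

Because sqrt((a - b)^2 - 2 sigma^2 dt ln u) >= |a - b|, every draw is at least max(a, b), so the corrected maximum dominates the naive discrete maximum; the analogous formula with signs flipped applies to options on the minimum.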
Temporary Components of Stock Returns: What Do the Data Tell Us?
(with Chris Lamoureux)
Within the past few years, several articles have suggested that returns on large equity portfolios may contain a significant
predictable component at horizons of 3 to 6 years. Subsequently, the tests used in these analyses have been criticized
(appropriately) for having widely misunderstood size and power, rendering the conclusions inappropriate. This criticism, however,
has focused not on the data but on the properties of the tests. In this article we adopt a subjectivist analysis -
treating the data as fixed - to ascertain whether the data have anything to say about the permanent/temporary decomposition.
The data speak clearly, and they tell us that, for all intents and purposes, stock prices follow a random walk.
Review of Financial Studies 9, 1996, 1033--1059.
Measuring the Pricing Error of the Arbitrage Pricing Theory
(with John Geweke)
This article provides an exact Bayesian framework for analyzing the arbitrage pricing theory (APT). Based on the Gibbs sampler,
we show how to obtain the exact posterior distributions for functions of interest in the factor model. In particular, we
propose a measure of the APT pricing deviations and obtain its exact posterior distribution. Using monthly portfolio returns
grouped by industry and market capitalization, we find that there is little improvement in reducing the pricing errors by
including more factors beyond the first one.
Review of Financial Studies 9, 1996, 553--583.
Time-to-Build Effects and the Term Structure
This paper shows that real macroeconomic variables have power in predicting movements in the term structure of interest rates,
complementing recent studies on the links of the term structure to expected stock returns. We find that up to 86 percent of the
variation in term premia is due to changes in the macroeconomy. The predictive power can be attributed to time-to-build effects.
Journal of Financial Research 18, 1995, 115--127.
Small Sample Rank Tests with Applications to Asset Pricing
This paper proposes small-sample tests for rank restrictions that arise in many asset pricing models and other areas of
economics, complementing the usual asymptotic theory, which can be unreliable. Using monthly portfolio returns grouped by industry
and two sets of instrumental variables, we cannot reject a one-factor model for the industry returns.
Journal of Empirical Finance 2, 1995, 71--93.
Analytical GMM Tests: Asset Pricing with Time-Varying Risk Premiums
We propose alternative generalized method of moments (GMM) tests that are analytically solvable in many econometric models,
yielding in particular analytical GMM tests for asset pricing models with time-varying risk premiums. We also provide
simulation evidence showing that the proposed tests have good finite-sample properties and that their asymptotic distribution
is reliable for sample sizes commonly used. We apply our tests to study the number of latent factors in the predictable
variations of the returns on portfolios grouped by industries. Using data from October 1941 to September 1986 and two sets of
instrumental variables, we find that the tests reject a one-factor model but not a two-factor one.
Review of Financial Studies 7, 1994, 687--709.
Asset Pricing Tests Under Alternative Distributions
Given the normality assumption, we reject the mean-variance
efficiency of the Center for Research in Security Prices
value-weighted stock index for three of the six consecutive ten-year
subperiods from 1926 to 1986. However, the normality assumption is
strongly rejected by the data. Under plausible alternative
distributional assumptions of the elliptical class, the efficiency
can no longer be rejected. When the normality assumption is violated
but the ellipticity assumption is maintained, many tests tend to be
biased toward over-rejection, and both the accuracy of the estimated
betas and the R2 are usually overstated.
Journal of Finance 48, 1993, 1927--1942.
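The fat tails that drive the rejection of normality are easy to see in simulation. A minimal Python sketch (our illustration; the Student-t is one representative fat-tailed member of the elliptical class, and all parameter values are illustrative):

```python
import math
import random

def sample_student_t(df, n, rng):
    # Student-t draws via normal / sqrt(chi-square / df);
    # the t is elliptical but has fatter tails than the normal
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        out.append(z / math.sqrt(chi2 / df))
    return out

def excess_kurtosis(xs):
    # Sample excess kurtosis: zero for the normal, positive for fat tails
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2**2) - 3.0

rng = random.Random(42)
returns = sample_student_t(df=10, n=50000, rng=rng)
print("sample excess kurtosis:", round(excess_kurtosis(returns), 3))
```

For df = 10 the population excess kurtosis is 6/(df - 4) = 1, well above the normal's zero; a normality test applied to such data rejects even though the data are perfectly elliptical, which is the wedge the paper exploits.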
International Asset Pricing with Alternative Distributional Specifications
(with Campbell Harvey)
The unconditional mean-variance efficiency of the Morgan Stanley Capital International world equity index is investigated. Using
data from 16 OECD countries and Hong Kong and maintaining the assumption of multivariate normality, we cannot reject the
efficiency of the benchmark. However, residual diagnostics reveal significant departures from normality. We test the sensitivity
of the results by specifying error structures that are t-distributed and mixtures of normal distributions. Even after relaxing
the i.i.d. assumption, we cannot reject the mean-variance efficiency of the world portfolio. Our results suggest that differences
in country risk exposure, measured against the MSCI world portfolio, will lead to differences in expected returns.
Journal of Empirical Finance 1, 1993, 107--131.
Small Sample Tests of Portfolio Efficiency
This paper presents an eigenvalue test of the efficiency of a portfolio when there is no riskless asset, complementing the test
of Gibbons, Ross, and Shanken (1989). Besides optimal upper and lower bounds, an easily implemented numerical method is
provided for computing the exact P-value. Our approach makes it possible to draw statistical inferences on the efficiency of a
given portfolio both in the context of the zero-beta CAPM and with respect to other linear pricing models. As an application,
using monthly data for every consecutive five-year period from 1926 to 1986, we reject the efficiency of the CRSP
value-weighted index for most periods.
Journal of Financial Economics 30, 1991, 165--191.
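For reference, the Gibbons-Ross-Shanken statistic that this test complements (the case with a riskless asset) can be sketched in a few lines of numpy; the data here are simulated under the null, and all variable names and parameter values are illustrative:

```python
import numpy as np

def grs_test(excess_returns, factors):
    """GRS (1989) F-statistic for H0: all time-series intercepts are zero.
    excess_returns: (T, N) test-asset excess returns; factors: (T, K)."""
    T, N = excess_returns.shape
    K = factors.shape[1]
    X = np.column_stack([np.ones(T), factors])      # regressors with intercept
    B, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    alpha = B[0]                                    # (N,) estimated intercepts
    resid = excess_returns - X @ B
    Sigma = resid.T @ resid / T                     # MLE residual covariance
    mu_f = factors.mean(axis=0)
    Omega = np.atleast_2d(np.cov(factors, rowvar=False, bias=True))
    quad = alpha @ np.linalg.solve(Sigma, alpha)
    sharpe = mu_f @ np.linalg.solve(Omega, mu_f)    # squared factor Sharpe term
    return (T - N - K) / N * quad / (1.0 + sharpe)  # ~ F(N, T-N-K) under H0

rng = np.random.default_rng(0)
T, N = 240, 10
factor = 0.005 + 0.04 * rng.standard_normal((T, 1))           # one "market" factor
beta = rng.uniform(0.5, 1.5, size=(1, N))
returns = factor @ beta + 0.02 * rng.standard_normal((T, N))  # H0: zero alphas
stat = grs_test(returns, factor)
print("GRS F-statistic:", round(float(stat), 3))
```

Under the null the statistic is a draw from F(N, T - N - K) and hovers near one; shifting every asset's return by a common alpha blows it up, which is the power the test trades against the no-riskless-asset case treated in the paper.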
Algorithms for the Estimation of Possibly Nonstationary Time Series
This paper presents efficient algorithms for computing time series projections, the maximum likelihood function, and its
gradient in possibly nonstationary vector time series (VARMA) models.
Journal of Time Series Analysis 13, 1991, 171--188.
Bayesian Inference in Asset Pricing Tests
(with Campbell Harvey)
(An Unpublished Technical Appendix)
We test the mean-variance efficiency of a given portfolio using a Bayesian framework. Our test is more direct than Shanken's
(1987b), because we impose a prior on all the parameters of the multivariate regression model. The approach is also easily
adapted to other problems. We use Monte Carlo numerical integration to accurately evaluate 90-dimensional integrals.
Posterior-odds ratios are calculated for 12 industry portfolios from 1926--1987. The sensitivity of the inferences to the
prior is investigated by using three different distributions. The probability that the given portfolio is mean-variance
efficient is small for a range of plausible priors.
Journal of Financial Economics 26, 1990, 221--254.