
Comparing Analyst to Market Valuations Does Not Provide Long-term Information

Abstract: Professional equity analysts generate firm valuations. Many of these valuations, and the recommendations based on them, are available to the public. Analysis of these valuations reveals that they tend to be chronically optimistic. How should professional valuations be utilized, and do they carry any useful medium- or long-term information? We find that the average analyst valuation relative to the market price is not predictive of returns over long horizons.


Introduction

Public-facing sell-side equity analysts are known to be optimistic: their incentives and the Sisyphean task of forecasting the future allow them to make rosy predictions.[1] While there is extensive research on the biases behind their optimism, it is also widely accepted that their perspective provides investment value. If analysts are optimistic in general, then perhaps the degree of their collective optimism provides predictive information. This paper explores that idea by studying whether the average analyst valuation relative to the market price is predictive over a 1-5 year horizon. We develop a framework in which we compare the average analyst valuation with its corresponding market price. The result of this research is that relative prices – on a company or market level – are not predictive of medium- or long-term risk or returns. This paper is organized as follows: in Part 1, we discuss the prevailing optimism and accuracy among analysts and associated findings. In Part 2, we present an extended theoretical framework as a hypothesis for interpreting information. Part 3 describes the data employed. Part 4 presents the research results associated with the market-to-analyst valuation ratios and concludes.


Part 1: Analyst Optimism and Accuracy

Shiller (2015) has pointed out that professional valuations are often highballed.[2] His explanation was that analysts were in cahoots with the companies they valued. He built upon research from authors such as Hou et al. (2012) and Bradshaw (2011), who explained that conflicts of interest typically led to high valuations.[3] Bradshaw specifically identifies six separate conflicts of interest:

1. Banking fees associated with continuing business – firms are more likely to stay with a bank that provides favorable coverage.
2. Pleasing the client’s management, thereby gaining additional private information.
3. Reports that say an equity should be bought spur trading activity, which generates fees for the trading division of the bank.
4. Institutional investors have relationships with banks and hold securities in their portfolios – if an analyst negatively reviews a security, it may have adverse effects on its price and hence on the institutional investor’s portfolio.
5. Companies commonly pay analysts to conduct research directly for them.
6. The relationship between analysts and managers can drive forecasts, and managers are often optimistically biased.


If we briefly delve into research on the valuation method that requires the most subjective decisions – the discounted cash flow (DCF) analysis – we can note some of the ways analysts see the world through excessively rosy lenses.[4]


1.A. Short-Term Earnings

The rate at which future cash flows are discounted – the discount rate – chosen by analysts falsely assumes a degree of predictability. Past volatility does not always predict future volatility; past periods of volatility become subsumed into the average rate, but the variance of a security’s return is not unchanging over time. The attempt to fold in historical volatility is all the more futile if markets follow a Cauchy distribution with undefined mean and variance. The relationship between a change in a stock price and the cost for that firm to raise capital is also tenuous: just because a stock price jumped up by 5% does not mean that the firm will actually have to pay 5% to its investors.[5] Speaking about the capital asset pricing model (CAPM) used in almost all DCF valuations, Eugene Fama and Kenneth French noted, “despite its seductive simplicity, the CAPM’s empirical problems probably invalidate its use in applications.”[6] Such models fail because they assume predictability: that it is possible to forecast future dividends and future variance using past data. The same critique applies to the ever more complicated models used to deduce the “proper” discount rate, from the three-factor model to the weighted average cost of capital (WACC) to Robert Merton’s Intertemporal Capital Asset Pricing Model (ICAPM). In summary, the discount rate used in a DCF is a simplified number that incorporates some past market information.
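To make the critique concrete, here is a minimal sketch of how a DCF discount rate is often assembled from CAPM and WACC inputs. The numbers are purely hypothetical; the point is that every term (beta, the market premium, the cost of debt) is estimated from past data.

```python
# A minimal sketch of a CAPM/WACC discount rate; all inputs are hypothetical.

def capm_cost_of_equity(risk_free: float, beta: float, market_premium: float) -> float:
    """CAPM: cost of equity = rf + beta * equity market risk premium."""
    return risk_free + beta * market_premium

def wacc(cost_of_equity: float, cost_of_debt: float, tax_rate: float,
         equity_value: float, debt_value: float) -> float:
    """Weighted average cost of capital across the firm's capital structure."""
    total = equity_value + debt_value
    return ((equity_value / total) * cost_of_equity
            + (debt_value / total) * cost_of_debt * (1 - tax_rate))

# Beta is estimated from past return covariance and the premium from historical
# averages -- precisely the backward-looking inputs the critique above targets.
ke = capm_cost_of_equity(risk_free=0.03, beta=1.2, market_premium=0.05)  # 9.0%
print(f"WACC: {wacc(ke, 0.05, 0.21, equity_value=800, debt_value=200):.2%}")
```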


1.B. Terminal Value

Recent earnings and short-term forecasts affect current prices less than long-term forecasts.[7] In a DCF, this phenomenon is reflected in the weight bestowed upon the terminal value. Studies show that this second stage of valuation generates 50-80% of firm value in mature industries, and over 100% of company value in industries with high growth potential but large initial costs (for instance, venture-capital-backed technology firms).[8] Perhaps the most important reason the terminal value matters so much is how equity holders make money. The short-term forecast of cash flows is somewhat moot, as much of that cash will not be returned to shareholders; while cash flow determines how much the firm can spend on dividends or repurchases, the total value derived still pales in comparison to the proportion contributed by the terminal value and market pricing. Shiller (1981) noted that expected dividends do not explain the volatility of prices. Unlike the assumption made by Gordon (1959) and Williams (1938) – that stock returns should be based upon the discounted dividend return of companies – stock returns tend to be based mostly upon their change in price. Indeed, dividends matter less and less; Skinner (2008) pointed out that repurchases have overwhelmingly replaced dividends as the primary method for returning value to shareholders. This observation is of course part of the reason Modigliani and Miller developed the theory which led to the use of a DCF model in the first place: forecasts of the future are the primary determinant of value and fluctuations in price.[9] In practice, investors have historically earned more from price appreciation than from dividends (perhaps because of the historically greater tax burden on dividends, but also because repurchasing shares or reinvesting income tends to be more beneficial).
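For reference, the standard Gordon-growth construction of the terminal value can be written as follows (the notation is ours, not drawn from any particular cited study):

```latex
% Gordon-growth terminal value at the end of the explicit forecast horizon T:
% FCF_{T+1} is the first post-horizon free cash flow, r the discount rate,
% and g the perpetual growth rate (with r > g).
TV_T = \frac{FCF_{T+1}}{r - g}, \qquad PV(TV_T) = \frac{TV_T}{(1+r)^T}
```

Because g sits in the denominator, small changes in the assumed perpetual growth rate swing the terminal value dramatically – a point we return to in Part 1.C.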


Because the financial market is a marketplace of buyers and sellers, improvements in price imply that the buyer foresees an amelioration in circumstances that will result in either future dividends or further price appreciation – in both cases these are unrealized expected returns in the future, and therefore heavily dependent upon continued firm existence and growth, which is supposedly captured in the terminal value. Thus, the terminal value is logically very important, but much research suggests it is in fact nonsensical. Edwards and Bell (1961) and Ohlson (1995) argued that the return on equity cannot be greater than the cost of capital over the long run – meaning that the worth of a terminal value should be zero, since there would be no leftover profit for equity holders. The sentiment is echoed by Miller (2008), who believes the return on invested capital must equal the weighted average cost of capital in a competitive market. Bernard (1994) argues that the only relevant variable for long-term returns on capital is R&D, but investments in R&D only yield – on average – about 28% of firm value (in his estimate), not the 50-100+% yielded by many terminal values. The issue is further compounded by creative destruction in markets; firms must adapt and improve to thrive, but the pressures to increase scale often stifle the creative forces that allow firms to be agile in the first place.[10] Meanwhile, a terminal value, even one with a negative long-term growth rate, does a poor job of reflecting enterprise dissolutions and insolvencies. This is important because there is a strong relationship between loose bankruptcy laws and growth, allowing borrowers to take chances with new investments and make daring bets.[11] Firms in countries with high investments in R&D (which contributes to high firm values) should also have higher chances of going bankrupt.[12]

Publicly held companies are not immune from bankruptcy: in the United States alone, the number of publicly held firms that go bankrupt ranges between roughly 50 and 250 per year, while about 15% of listed companies, on average, ‘die a bad death’ every 10 years.[13] So, while a terminal value is essentially implied in the worth of a firm, the theoretical reasoning behind it is tenuous at best. But let us momentarily assume a terminal value is necessary and pivot our attention to the ‘proper’ growth rate attached to it.


1.C. Long-Term Earnings Growth Rate in the Terminal Value

The long-term free cash flow growth rate has an enormous impact on the total terminal value, making its forecast perhaps the most important element in a DCF model. Just as with short-term forecasts, analysts are known to be optimistic about long-term growth rates.[14] Cassia, Plati and Vismara (2007) showed that it is not possible to eke out growth rates on investments above the cost of invested capital, and that competitive advantages are difficult to maintain into long-term growth – which is why analyses such as Porter’s Five Forces are explanatory rather than predictive.[15] Chan, Karceski, and Lakonishok (2003) concur: “there is no evidence of persistence in terms of growth in the bottom line as reflected by operating income before depreciation and income before extraordinary items. Instead, the number of firms delivering sustained high growth in profits is not much different from what is expected by chance. The results for subsets of firms, and under a variety of definitions of what constitutes consistently superior growth, deliver the same verdict. Put more bluntly, the chances of being able to identify the next Microsoft are about the same as the odds of winning the lottery.”[16] The same authors found that ‘there is no predictability in earnings growth beyond chance,’ and therefore suggested all long-term growth rates be set to the long-term real rate of GDP growth, then adjusted lower to account for the survivorship bias inherent in the available data – a strong departure from analyst average estimates of 11%, with a range of between 5% and 20% (all higher than the 1-3% annual real GDP growth rate).[17]
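A quick sketch, using purely illustrative numbers, shows why the choice between an analyst-style growth rate and a GDP-anchored one matters so much:

```python
# Illustrative sensitivity of the Gordon-growth terminal value to the
# assumed perpetual growth rate g; all numbers are hypothetical.

def terminal_value(fcf_next: float, r: float, g: float) -> float:
    """Gordon growth terminal value: FCF_{T+1} / (r - g); requires r > g."""
    assert r > g, "discount rate must exceed the perpetual growth rate"
    return fcf_next / (r - g)

fcf, r = 100.0, 0.08  # $100 of next-year free cash flow, 8% discount rate
for g in (0.01, 0.02, 0.03, 0.05):
    print(f"g = {g:.0%}: TV = {terminal_value(fcf, r, g):,.0f}")
# Moving g from the 1-3% real-GDP range toward analyst-style estimates more
# than doubles the terminal value (1,429 at g = 1% vs. 3,333 at g = 5%).
```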


1.D. Art of Forecasting

John Burr Williams wrote, “It may be objected that no one can possibly look with certainty so far into the future as the new methods require and that the new methods of appraisal must therefore be inferior to the old. But is good forecasting, after all, so completely impossible? Does not experience show that careful forecasting—or foresight as it is called when it turns out to be correct—is very often so nearly right as to be extremely helpful to the investor?”[18] Decades of researchers have investigated his claim and answered with a soft no. It is therefore worth providing a brief overview of forecasting and how it applies to the financial field. Forecasting is not a science – there is no perfect model. Unexpected factors, or improper weighting of their probabilities, almost always affect projections. For a slew of reasons, forecasting the future with any certainty is impossible. Not only can factors unknown to anyone in the present or past arise, but philosophical and epistemological considerations also prevent forecasting from being accurate. Popper, for instance, argued that if historical events are driven or affected by technological innovation, it is impossible to predict them, because innovations are unpredictable, as are their applications. If changes in the world are driven by political events, people (including experts and major organizations such as the intelligence community) are equally inept at predicting them.[19] The markets which “incorporate expectations into present stock prices” are just as poor at predicting rapid changes: prices did not truly change until after the [First World] war had begun and did not in any way ‘incorporate expectations’ about the post-war malaise, the Depression, or the next world war. Poincaré provides a different perspective with similar conclusions: as one forecasts further into the future, the probability of error increases very rapidly. While tomorrow may not be very different from today, there is an enormous chance that many elements of 2050 will be very different from 2020. One requires a comprehensive and precise model to deal with the future, but the forecast degrades abruptly, and the dynamics of the forecast can change as well. The problem is cleverly illustrated by Michael Berry’s billiards example: to predict how a billiard game will look after roughly 50 collisions, every particle of the universe must be accounted for.[20] It would be necessary to understand the past with infinite precision; such a feat is impossible. All historiographers and historians understand that history is an argument and that the past is impossible to ascertain with absolute precision.[21] For every given factual statement there exist a number of logically consistent interpretations.[22] People can hold incompatible beliefs and interpretations of the exact same data – an idea applied to asset values by Kurz (1997); every field of study – from economics to medicine – features disagreements over topics for which all have the same available information.


The result is that it is almost impossible for professional financial forecasters to be good at forecasting. Tetlock (2005) found that expert forecasters performed worse than simple chance, financial analysts included. He was echoed by other writers, including Barber et al. (2001, 2003), Cowles (1933), and Baker & Dumont (2014). Tyszka and Zielonka (2002) found that while financial analysts are poor forecasters, they are very confident in their forecasts. Putting forecasts into practice is another, and perhaps better, way to see whether they are correct, requiring forecasters to “put their money where their mouth is.” The result – actively managed funds – has underperformed, and what outperformance exists can be attributed to chance.[23] In summary, facing an uncertain future, analysts choose to “predict” one that looks stable and positive.


1.E. Analyst Recommendations Are Accurate and Value Accretive

In complete contrast to the previous passages, research also shows that analyst recommendations are not only generally accurate but that following them leads to outsized returns. Elton, Gruber and Grossman (1986), Stickel (1995), Womack (1996), Barber, Lehavy, McNichols, and Trueman (2001), Li (2005), Cheng, Lieu and Qian (2006), and Howe, Unlu, and Yan (2009) – among others – all provide strong evidence that sell-side analyst recommendations matter and can drive portfolio performance. Analysts who are consistently more accurate affect prices more than those who are less consistently accurate; Gu and Wu (2003) show that accuracy is one of the most important aspects of forecast performance. Research also shows a directionality: more accurate forecasts lead to larger price movements, and the analysts making those forecasts have better careers.[24] Furthermore, recommendations are highly informative.[25] Finally, as a direct link to our study, Bilinski, Lyssimachou, and Walker (2013) find that analysts exhibit differential and persistent abilities to forecast target prices accurately. Reconciling this evidence with our previous discussion hinges on the strength of the recommendations: while the vast majority of recommendations are positive (i.e., buy/hold), a “strong buy” is a more robust positive signal; the magnitude of recommendation changes, the reputation of the analyst and brokerage house making the recommendation, and consensus views are among the other signals that drive the value of the information conveyed.[26] However, given the argument made in Part 1.D., I suspect that this accuracy derives from preferential and early access to information rather than a better crystal ball, as it were.


Part 2: Hypothetical Framework

As was mentioned at the beginning of this article, there are a variety of reasons why analysts inflate their valuations, and these have been extensively chronicled in the literature. A forward-looking valuation model requires the modeler to assume the future is predictable, understandable, and plannable, while an analyst tends to assume the future is looking pretty good. But investors should know that none of these assumptions is objectively tenable: no one knows what the future holds. So how should the valuations provided by analysts be interpreted?


We set out to test three different hypotheses for using analyst data in this paper: that relative average analyst valuations provide predictive information, that analyst valuations provide no information whatsoever, and that the volatility of analyst valuations provides predictive information. The logic of each hypothesis is expanded upon in turn. We use the consensus target price as the consensus analyst valuation because, if analysts and the market already had a very strong reason to believe the price would reach a certain value in the near future, the present price should already be that value – unless there is uncertainty that needs to be compensated for. That uncertainty is exactly what we hope to study.


2.A. Relative average analyst valuations provide information

Let us, for a moment, look at the valuation of a start-up firm by a Private Equity group – a world where risk is even more glaringly obvious than in the public markets and the probability of failure is extremely high.


At the end of a deal, when the Private Equity group invests a sum into the business in exchange for a stake in the firm, a firm valuation is implied. That value in turn implies a probability of success for the firm, where success is a profitable exit – defined as an initial public offering, management buyout, or secondary purchase.


Mathematically,


Implied Valuation = Investment Amount / Equity Stake

Implied Probability of Success = Implied Valuation / DCF Valuation

Probability of Failure = 1 − Implied Probability of Success
In simplified English, when a Private Equity group invests a sum into the business for a stake in the firm, it implies a certain valuation. For instance, a $100,000 investment in exchange for 10% equity implies a valuation of $100,000 / 10% = $1,000,000. That valuation can, in turn, be compared to the discounted cash flow model to see what the implied probability of success for the firm is. Continuing the example, if the PE firm valuation is $1M but the DCF model suggests a valuation of $3M, the implied probability of success is $1M / $3M = 33%, and the probability that the firm will not succeed is 1 − 33% = 67%. If we increase the number of entities performing valuations and the number of entities buying and selling, we now essentially have public equity analysts and investors in the stock market; instead of PE investments and startup exits, we have analyst forecasts and market values. Despite the volatility and various behavioral irrationalities embedded in asset prices, by its very nature a competitive financial market’s prices will rapidly incorporate far more information, and do so more efficiently, than an analyst’s model; prices can even communicate private information that is otherwise unavailable to most uninformed traders.[27] If modeling cannot replicate reality, what does this model actually provide in terms of useful information? Recall that the return on invested capital for a share will come primarily in the form of the change in that share’s price, and the market provides a dynamic view of the fortunes of a company given present circumstances; all the while, the analyst-provided valuation will remain relatively stable and high – buoyed by its theoretical terminal value. If the market knows that one cannot reliably forecast or predict growth, then what an analyst-determined model provides is essentially a best-case scenario: relatively stable short-term forecasts and a perpetually favorable position in the market. It is therefore logical that the market value of the asset not align with its ‘professional’ valuation by an analyst.
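As a minimal sketch of this arithmetic (the deal figures are the hypothetical ones from the example above):

```python
# Implied valuation and implied probability of success for the hypothetical deal.
investment = 100_000        # PE investment
equity_stake = 0.10         # equity received in exchange

implied_valuation = investment / equity_stake    # $1,000,000
dcf_valuation = 3_000_000                        # model's best-case "success" value

p_success = implied_valuation / dcf_valuation    # ~0.33
print(f"P(success) = {p_success:.0%}, P(failure) = {1 - p_success:.0%}")
```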


If we apply the logic of a PE firm valuing an entrepreneur, but replace the PE firm’s ‘approximate exit valuation’ with an ‘approximate valuation given favorable short-term forecasts and long-term survival’ provided by the analysts, and increase the number of buyers and sellers so that the market takes the place of the PE firm, then we are provided with an implied probability that the best-case scenario will be achieved. Dividing the market value by the DCF model value implies a market-based probability that the firm will actually achieve the promise of a DCF model: that the firm will continue to survive into perpetuity, maintain its niche in the economy, return value to shareholders, and grow at a high rate.


As an equation,

P/A_Val = Market Price / Mean Analyst Target Price


If the market share price is equal to the mean analyst target price, then the implicit probability that the firm will succeed at maintaining and outperforming, given all currently available information, is 100% or 1. A change in the market value can be considered an update to the firm’s posterior probability of future success; the denominator moves slowly, since analysts do not revise their estimates nearly as often as prices actually change.
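A minimal sketch of the ratio as we compute it, assuming a monthly panel with hypothetical column names (‘close’ for the market price, ‘target_mean’ for the IBES mean price target):

```python
import pandas as pd

# Hypothetical two-firm, two-month panel; column names are placeholders.
panel = pd.DataFrame({
    "ticker": ["AAA", "AAA", "BBB", "BBB"],
    "date": pd.to_datetime(["2020-01-31", "2020-02-28"] * 2),
    "close": [90.0, 95.0, 40.0, 38.0],          # monthly market price
    "target_mean": [100.0, 100.0, 50.0, 48.0],  # mean analyst target price
})

# P/A_Val: a ratio of 1 reads as a market-implied 100% probability that the
# analysts' best-case scenario is achieved; below 1, the market discounts it.
panel["P_A_Val"] = panel["close"] / panel["target_mean"]
print(panel.groupby("date")["P_A_Val"].mean())  # equal-weight monthly average
```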


On the flip side, a value greater than one implies two major possibilities (assuming the valuation was done correctly): A. the market believes the firm will expand into new opportunities, resulting in future cash flows that are as yet untapped and not priced into the DCF model; or B. the market is ‘overheated,’ paying more than it should for a best-case scenario for that firm. Given the existence of the equity premium puzzle and the natural Ponzi-like characteristics of the market, I find the second possibility to be more likely.[28]


Part 3: Data

Our initial sample was the composition of the Standard and Poor’s 500 Index from January 1963 to September 2021. Any company that was in the S&P 500 Index at any point over this period was included in our query. Focusing on the S&P 500 allows us to mute the size and value effects on risk and returns found by Fama and French (1992). For each company, we extracted the following data between March 1999 and August 2021 (the maximum range available):

· Number of Price Target Estimates

· Price Targets Mean

· Price Targets Standard Deviation


The output was 53,084 entries. We removed all valuations with two or fewer analyst price forecasts in order to limit idiosyncratic forecasts. Monthly prices were extracted from Yahoo Finance for each company at the same date for which the IBES price target was available.[29] We chose to use the closing price as the market price for the stock. For 3% of entries, data was unavailable due to factors such as bankruptcy or acquisition. Where possible, we used bid-ask average prices from the CRSP database, aligning the stock price with the same month’s IBES price target. The result of these modifications was a sample of 240 companies (with two duplicates) in 50,613 rows. The average price of our sample is 99.1% correlated with the S&P 500’s price as represented by the SPDR S&P 500 ETF Trust. For each month, we divided the market price by the average public equity analyst valuation to create a ratio we call P/A_Val. The average P/A_Val ratio was 0.88 with a standard deviation of 0.25. As expected, the market price tends to be lower than the consensus analyst target valuation.
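The sample-construction steps above reduce to a few lines; here is a sketch (the file and column names are hypothetical placeholders):

```python
import pandas as pd

# Hypothetical input: one row per company-month with the IBES fields listed above.
df = pd.read_csv("ibes_targets.csv", parse_dates=["date"])

df = df[df["n_estimates"] >= 3]                  # drop entries with <=2 forecasts
df["P_A_Val"] = df["close"] / df["target_mean"]  # market price / mean target
print(df["P_A_Val"].mean(), df["P_A_Val"].std()) # ~0.88 and ~0.25 in our sample
```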


Table 1: Summary Statistics of Sample Portfolio



As the vast majority of our data is sourced through the STOCKHISTORY function in Excel, it is not adjusted for dividends. Our data thus reflects monthly price returns, not total monthly returns.


Part 4: Results


4.i. P/A_Val Ratios Approach

Having ascribed a P/A_Val ratio to each company for every month where data was available, we sorted the dataset by date. We ran two types of analyses: a ‘zoomed out’ analysis to determine whether there was any relationship between 1-year market returns and extreme average P/A_Val ratios, and portfolio comparisons to determine whether P/A_Val ratios contained any useful information about returns and risks.


For the first set of analyses, we extracted the average P/A_Val ratio for each month of data; if a company had a P/A_Val ratio for a given month, it was included in the average, with each company weighted equally. The ten highest and ten lowest average P/A_Val ratios for the entire sample were then compared to see if they were associated with differences in return, using 1-year S&P 500 compound returns as the basis for the comparison.

For the second set of analyses, we sorted companies based on their P/A_Val ratios for a given time period. For each relevant year of our dataset, in the month of January, we assembled portfolios of the highest-decile and lowest-decile companies sorted by P/A_Val ratio, as sketched below. The intent was to compare portfolios over the same time frames so as to isolate the information (if any) of portfolios sorted by P/A_Val ratios from other factors (i.e., macroeconomic fluctuations). Daily returns, the standard deviation of daily returns, and the holding period yield were calculated for each portfolio component, and a standard deviation of holding period yields was calculated for each portfolio. Each of these calculations was performed over 1-, 3-, and 5-year time periods. As in our efforts to develop the P/A_Val ratio, we sometimes had to fill in missing data from the CRSP database. We followed the same protocol discussed above and crudely adjusted monthly data to reflect daily movements by dividing returns and standard deviations by 20 (the approximate number of trading days in a month). With our maximum 5-year time frame and given our dataset, we assembled 16 portfolios of low P/A_Val companies and 16 portfolios of high P/A_Val companies to compare against each other. We also compared the low-portfolio outputs with a set of portfolios with P/A_Vals of up to 1 to test whether the output of the high-decile portfolios was reasonable. Summary statistics can be found in Appendices A and B, while the statistical tests are in Part 4.iii.
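The decile sort can be sketched as follows (column names are again hypothetical placeholders):

```python
import pandas as pd

def january_decile_portfolios(df: pd.DataFrame, year: int):
    """Split January observations into lowest and highest P/A_Val deciles."""
    jan = df[(df["date"].dt.year == year) & (df["date"].dt.month == 1)]
    deciles = pd.qcut(jan["P_A_Val"], 10, labels=False)  # 0 = lowest decile
    low = jan.loc[deciles == 0, "ticker"].tolist()
    high = jan.loc[deciles == 9, "ticker"].tolist()
    return low, high

# Returns and standard deviations are then tracked for each list over
# 1-, 3-, and 5-year windows, as described above.
```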

The actual dates associated with each portfolio, as pertaining to this section and all others in this paper, are listed in the footnote; years are displayed there without months or days for brevity’s sake.[30]


4.ii. Absolute P/A_Val Ratios Results

We researched whether absolutely low or high average P/A_Val ratios were associated with differential returns in the market. We calculated the 1-year holding period yield of buying the S&P 500 at the dates with the lowest and highest average P/A_Val ratios, then compared the returns to see whether the difference was statistically significant, as sketched below.
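A sketch of this comparison (the return arrays are hypothetical placeholders, not our data):

```python
from scipy import stats

# Hypothetical 1-year S&P 500 compound returns following the ten months with
# the lowest and the ten with the highest average P/A_Val ratios.
low_ratio_returns = [0.12, 0.08, 0.15, 0.05, 0.10, 0.09, 0.14, 0.07, 0.11, 0.06]
high_ratio_returns = [0.10, 0.05, 0.12, 0.08, 0.07, 0.13, 0.09, 0.04, 0.11, 0.06]

# Two-sample t-test on the difference in mean returns between the two groups.
t_stat, p_value = stats.ttest_ind(low_ratio_returns, high_ratio_returns, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```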


Table 2: Absolute P/A_Val Ratios



There is no statistically significant difference between the two samples (difference of 2.72%, p-value of 0.46). High and low average P/A_Val ratios do not carry much information about 1-year returns.


4.iii. Relative P/A_Val Ratios Results

While comparatively high or low average P/A_Val ratios are not associated with differences in market returns, perhaps companies sorted on a P/A_Val basis will be associated with differential returns and risk. Comparing the 16 portfolios of low P/A_Val companies against the 16 portfolios of high P/A_Val companies also does not generate any statistically significant differences (using two-sample t-tests; the statistical output was too large for this Wix page).


We conclude that differences between low and high P/A_Val ratios convey no strong information about returns or risk.



Footnotes [1] That optimism often translates into favorable recommendations, which have become ever more favorable over time: in 1983, analyst recommendations were 26.8% “sells,” 24.5% “buys,” and 48.7% “holds,” while by 1999 only 1% were “sells,” 69.5% “buys,” and 29.9% “holds.” Shiller (2015), pp. 49, 292. [2] Shiller (2015), pp. 49ff. [3] See also O’Brien (1988); Hou et al. (2012); Francis and Philbrick (1993); McNichols and O’Brien (1997); Easton and Sommers (2007); Khan, Rozenbaum, and Sadka (2013); Abarbanell and Bernard (1992). [4] While the majority of analysts use asset or earnings multiples and discounted cash flow analyses to derive firm valuations, research suggests that when analysts publish their results, the market does not react differently based on which method was used. This result is unsurprising considering that different valuation methods lead to the same level of accuracy – following Cavezzali, Rigoni, and Siva (2013). The implication is A. that the assumptions inherent in a complicated discounted cash flow analysis are also reflected in multiple-based valuations, B. that the DCF analysis is reflective of market sentiment and the valuations yielded do provide a measure of fundamental worth, or C. that the market discounts analyst valuations, so it does not matter which valuation they provide. Over 99% of analysts use earnings multiples (P/E ratio, P/EBIT multiple, or relative P/E ratio); Asquith, Mikhail, and Au (2002), pp. 12-13, 28; Loughran and Ritter (1997); Cavezzali, Rigoni, and Siva (2013). [5] Unless the assumption is made that the change in price occurred because of changes in payout policy, but Shiller (1981) very clearly demonstrated that changes in future expected dividends do not explain the volatility of asset values. [6] Fama and French (2004), p. 44. [7] Goedhart, Koller, and Wessels (2010); on analysts responding little to short-term fluctuations, see Gigerenzer (2015), pp. 86-92; Behr, Mielcarz, and Osiichuk (2018). That analysts underreact to news suggests their focus is on long-term earnings rather than short-term volatility – or in Berger and Kaplan’s (2014) words, “analyst reaction does a good job of capturing the extent to which earnings news is transitory versus permanent.” [8] Bradbury and Ferguson (1998); Koller, Goedhart, and Wessels (2010); Reis and Augusto (2013), p. 1631. [9] Miller and Modigliani (1961); Modigliani and Miller (1958); Kurz and Motolese (2001). [10] Gelfand (2018). [11] Claessens and Klapper (2002). [12] Claessens and Klapper (2002), p. 34. [13] Fama and French (2003); Daepp et al. (2015). For data between 1980 and 2001, see Large and Small Companies Exhibit Diverging Bankruptcy Trends, FDIC Division of Insurance: Bank Trends – Analysis of Emerging Risks in Banking, Number 02-01, January 2002. 182 publicly held firms filed for bankruptcy in 2002, 62 in 2005, 66 in 2006, 78 in 2007, 138 in 2008, 210 in 2009, 106 in 2010, 86 in 2011, 87 in 2012, 71 in 2013, 52 in 2014, 79 in 2015, 99 in 2016, 71 in 2017, 58 in 2018, and 63 in 2019 (these numbers are pulled from Jones Day yearly insights, which are sourced from BankruptcyData.com and New Generation Research, Inc.). Due to M&As and bankruptcies, only a fraction of the companies originally in the S&P 500 index when it was reassembled in 1957 are still in existence. [14] See for instance Dechow, Hutton, and Sloan (1999); Chan, Karceski, and Lakonishok (2003); Sharpe (2005).
[15] Daniel Rasmussen, “The Gospel According to Michael Porter,” Institutional Investor, November 8, 2017. [16] Chan, Karceski, and Lakonishok (2003), p. 663. [17] Chan, Karceski, and Lakonishok (2003), p. 681. In 2001, average analyst long-term growth rate forecasts for U.S. companies were 18%, over 5X larger than recommended by the authors. See also Sharpe (2005), p. 11; the observed range in growth rates is particularly surprising considering that many firms have negative revenues and negative growth rates. Additionally, a long-term economic growth rate of 2% may not be predictive – Piketty (2013) argues, for instance, that the future long-term growth rate will drop to 1%. [18] Williams (1938), p. 188. [19] Tetlock (2005). [20] Note that billiards is a relatively simple game with far fewer moving pieces, feedback loops, and fixed rules. Note also that physicists are still discovering properties of the universe – even taking into account all known matter of the universe in 1978 could have yielded incorrect results in the model. See Berry (1978). [21] See for instance Bloch (1953), in particular Chapter V: Historical Causation; Gleick (1987); McCloskey (1991), pp. 21-36. [22] Quine (1951). [23] See for instance Mandelbrot (1963); Dunn and Theisen (1983); Sharpe (1991); Liang (2001); Fama and French (2010); Gigerenzer (2015); Chapter 3: Skill, Scale, and Luck in Active Fund Management in Langlois and Lussier (2017); Dimensional Fund Advisors, LP (2019) – along with all other available yearly mutual fund landscape reports. [24] Hilary and Hsu (2013), pp. 272ff. [25] Bradley, Liu, and Pantzalis (2014). [26] Stickel (1995). [27] See for instance Malkiel (2005). [28] Mehra and Prescott (1985), pp. 145-161; Shiller (2015), pp. 70ff. [29] We used the Microsoft Excel STOCKHISTORY function; data sources are described at “About the Stocks financial data sources,” Microsoft Support page: https://support.microsoft.com/en-us/office/about-the-stocks-financial-data-sources-98a03e23-37f6-4776-beea-c5a6c8e787e6 [30] 1/20/2000; 1/18/2001; 1/17/2002; 1/16/2003; 1/15/2004; 1/20/2005; 1/19/2006; 1/18/2007; 1/17/2008; 1/15/2009; 1/14/2010; 1/20/2011; 1/19/2012; 1/17/2013; 1/16/2014; 1/15/2015; 1/14/2016; 1/19/2017; 1/18/2018; 1/17/2019; 1/16/2020

 
 
 
