
CHAPTER II
REVIEW OF LITERATURE

2.1 Introduction

When managers make good strategic decisions based on analysis of intrinsic value, the financial markets reward them by setting stock prices in line with their company’s financial fundamentals. This relationship helps managers put the company’s resources to their best use and create maximum value for shareholders. Whenever deviations occur between intrinsic and market prices, the stock market corrects itself within a few years, returning to the intrinsic valuation level. Hence corporate managers and investors need to understand the true, intrinsic value of companies, so that they can exploit market deviations, if and when they occur, by properly timing the implementation of strategic decisions.

The ultimate test of corporate strategy is whether a firm creates economic value for its shareholders. A decade ago there was considerably less knowledge about enterprise value, and doubt about its relevance to corporate governance. But in today’s economy it is essential to excel at measuring, managing and maximising shareholder and company value. The complexity of measuring value lies not only in a company’s historical financial results but also in its ability to create value in the future. The current chapter compares the various valuation methods and brings out the advantages, limitations and suitability of each. The forthcoming sections discuss the plethora of work conducted on firm valuation and its related aspects. They also give a theoretical examination of the association between value drivers and enterprise value, and between abnormal stock returns and intrinsic value.

A company’s value is different for different buyers, and it may also differ between the buyer and the seller. Price is the quantity agreed between the seller and the buyer in the sale of a company, and it is not the same as value. The difference in a specific company’s value may be due to a multitude of reasons. Knowing what an asset is worth, and what determines that value, is a prerequisite for intelligent decision making: in choosing investments for a portfolio, in deciding the appropriate price to pay or receive in a takeover, and in making investment and financing choices when running a business.

Every asset has an intrinsic value that can be estimated, based upon its characteristics in terms of cash flows, growth and risk. The premise of valuation is that reasonable estimates of a firm’s value can be made by considering its real and financial assets. Creating a realistic understanding of the value of the business, and of the shares in the business, is critical to personal decision making and planning.

2.2 Importance of use of Cash flows in Valuation

Companies thrive when they create economic value for their shareholders. Value is created when capital is invested at a rate of return higher than the cost of capital. The cross-currents of corporate scandals, active shareholders and Board members put more pressure on companies to build long-term shareholder value. Economies whose companies are dedicated to value creation are healthier and offer higher living standards and more opportunities for individuals.

As the world economy globalised and capital became more mobile, valuation gained importance in emerging markets with privatisation, joint ventures, mergers and acquisitions and fund-based value management. In an emerging market like India, valuation is more difficult, as the risks and obstacles that companies face are greater than in developed countries. Moreover, India has high levels of economic uncertainty, volatility in its capital markets, controls on the flow of capital in and out of the country, and high levels of political risk and inflationary tendencies.

Over the past decades, considerable attention has been paid to the relationship between accounting numbers and firm value. The relationship between theoretical firm value and the performance stream has attracted considerable research interest and resulted in a number of proprietary models being introduced. Ball and Brown’s research in 1968 was the first to discuss the information content of accounting numbers. They measured the association between annual earnings and abnormal returns, using operating earnings as a proxy for operating cash flows, and reported that earnings showed a higher correlation with abnormal stock returns than cash flows did.

To investigate the association between these performance measures and the variation in stock prices (returns), the work of Ball and Brown was replicated by a sequence of empirical studies using various proxies for annual earnings (Beaver, 1968; Beaver and Dukes, 1972; Patell and Kaplan, 1977). Unfortunately, the results regarding the relevance of the investigated measures were contradictory. These contradictory results, together with criticisms of accruals, the main component of traditional measures, for their subjectivity and ease of manipulation, meant that increasing attention was paid to new financial performance measures as substitutes for traditional accounting-based measures.

The Cash Flow statement gained the attention of the International Accounting Standards Board (IASB), and further studies examined the incremental information content of cash flows over earnings (Finger, 1994; Clubb, 1995; Barth et al., 2001). These studies concluded that cash flows have information content.

2.3 Methods of Valuation

The academic literature suggests many valuation methods useful for valuing firms. It is important to understand these methods and previous research in the field to gain insight into their suitability and accuracy. Benninga and Sarig (1997) advised using more than one valuation method to estimate firm value: there is a great deal of uncertainty in value estimation, since it involves predicting the company’s returns, and if different methods give similar results the estimated value can be considered reliable.

There are basically five approaches to firm valuation: a) Liquidation and accounting valuation, b) Contingent Claim/Real Option valuation, c) Goodwill valuation, d) Relative/Multiple valuation and e) Discounted Cash Flow valuation. The forthcoming sections discuss these methods and previous research on them.

Liquidation and Accounting Valuation

This method values the firm’s existing assets at their accounting values (Damodaran, 2006; Fernandez, 2002). In contrast to Discounted Cash Flow (DCF), which bases its valuation on forecasted cash flows and thereby increases the chances of inaccuracy, accounting-based valuation relies on book values, which some argue makes it preferable (Damodaran, 2006). However, the method uses static asset values from the Balance Sheet, and these historical values do not reflect the future potential of the firm. In addition, earnings can be misleading and may be inflated in the short term. Valuation based on these methods leads to undervaluation of high-growth companies.

Contingent Claim/Real Option Valuation

In the last few years, option valuation has been recognised as an alternative for valuing investment opportunities in real markets. In Contingent Claim/Real Option valuation, the value of an asset depends on whether or not an event occurs; the expected payoffs of the options are discounted back to the present.

Real Option valuation is a powerful tool in investment-intensive industries where companies make investments in sequences involving a high degree of uncertainty. These industries include Energy, Oil and Gas, and Research and Development intensive industries such as Biotechnology, Pharmaceuticals and other high-technology industries with high marketing investments. Real Option valuation considers the flexibility that is inherent in many projects in a way that DCF does not; hence management’s possibilities to expand an investment or to abandon a project are given their correct value. Thomas E. Copeland (2000) has conducted several studies on valuation and argues that Real Option valuation is the better valuation method.

Goodwill Valuation

The Goodwill method tries to value the intangible assets that represent value but do not appear in the Balance Sheet. As assets are valued with figures from the Income Statement and Balance Sheet, the method suffers from the limitations of these static values. It captures the value of the company as the value of its tangible assets plus the intangible value to be created in the future, but the difficulty is that it uses accounting numbers rather than cash flows. There is no consensus on the valuation of goodwill (Fernandez, 2002).

Relative/Multiple Valuation

Relative Valuation methods use a ratio or multiple to express the value of a company in relation to a certain variable. The primary ratio is Price/Earnings; two other common ratios are Price/Book Value and Price/Sales (Damodaran, 2004). The P/E ratio, calculated by dividing the market price by Earnings per Share (EPS), is frequently used by retail investors. A lower P/E ratio suggests that the stock is undervalued and that there is scope for appreciation in the future. The valuation rests on the rationale that perfect substitutes should sell for the same price (Baker and Ruback, 1999) and is very often used in practice (Damodaran, 2006; Imam et al., 2008): if Companies A and B are comparable firms and Company A has twice as much sales as Company B, Company A should trade at twice the price of Company B. The popularity of this method lies in its simplicity (Yoo, 2006).
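
The substitute-pricing logic of the P/E multiple can be illustrated with a small sketch; the firms and figures below are hypothetical.

```python
# Hypothetical illustration of relative valuation via the P/E multiple:
# the target firm is priced at a comparable firm's multiple times its own earnings.

def pe_multiple(price_per_share: float, eps: float) -> float:
    """Price/Earnings ratio: market price divided by earnings per share."""
    return price_per_share / eps

def value_by_pe(target_eps: float, peer_pe: float) -> float:
    """Estimated price of the target = comparable firm's P/E x target's EPS."""
    return peer_pe * target_eps

peer_pe = pe_multiple(price_per_share=240.0, eps=16.0)    # peer trades at 15x earnings
estimate = value_by_pe(target_eps=10.0, peer_pe=peer_pe)  # -> 150.0
print(peer_pe, estimate)
```

The same substitution argument applies to any multiple: replace the P/E with a P/B or P/S ratio and the target’s EPS with its book value or sales per share.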

Alford (1992) found that the multiples method was more accurate than earnings-based models. This was also supported by Erik Lie and Heidi Lie (2002), who tested the accuracy of ten different multiples in an empirical study of 8,621 companies. The Price/Book Value (P/B) ratio generated more precise and less biased results than other multiples such as Price/Earnings (P/E) and Price/Sales (P/S) (Koller et al., 2003). The P/E ratio generated imprecise results because it is affected by the capital structure of the company, and earnings are affected by non-operating revenues and surpluses.

Price/Sales (P/S) showed less accurate results according to Lie and Lie (2002) and Dragos (1999); the latter stated that P/E performed best based on empirical analysis. Cheng and McNamara (2000) analysed the performance of P/E and P/B ratios based on 30,310 observations over 20 years of company data and concluded that combinations of these ratios gave the most exact results; however, P/E was found to be superior to the P/B ratio. Valuation with the P/E ratio is often used in the case of Initial Public Offerings (IPOs). Kim and Ritter (1999) analysed valuation using the multiples method based on P/E and P/S ratios and found satisfactory results only when forecasted future numbers were used. They also concluded that the method gave more accurate results for older firms than for younger ones.

Koller et al. (2005) recommend the use of Enterprise Value/Earnings before Interest, Taxes, Depreciation and Amortisation (EV/EBITDA) because it is independent of the company’s capital structure and can be applied to compare companies with different capital structures. Koller et al. (2005) also recommend sales multiples for valuing companies with small or negative profits. Among the sales multiples, the Enterprise Value/Sales (EV/Sales) ratio is better than the P/S ratio; the EV/Sales multiple is more accurate for small firms (Ruben van de Sande, 2012) and can even be used for negative earnings. In general, multiples are most informative when they are based on cash flows or earnings (Imam et al., 2008). Besides the P/E ratio, the investor also needs to check the quality of profits, as well as their sustainability, before taking a final call. Further, investors should carefully analyse the outstanding liabilities of the company, because the P/E ratio fails to cover them.

As an alternative to the P/E ratio, the EV/EBITDA valuation multiple is often used, typically when valuing cash-based businesses. An advantage of this multiple is that it is capital-structure neutral, so it can be applied directly across companies. The EV/Sales multiple, calculated by dividing Enterprise Value by the annual sales of the company, is generally used for valuing companies with low profits or losses but large turnover. Other common multiples include cash flow and EBITDA multiples, revenue multiples, asset multiples and operating multiples. When accounting information is not easily available, Relative Valuation is a handy tool for financial analysts to make a quick assessment of a company’s value, but the methodology fails to incorporate the time value of money.
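
A minimal sketch of the enterprise-value multiples discussed above; the figures are hypothetical, and net debt is taken here as interest-bearing debt minus cash.

```python
# Sketch of enterprise-value multiples (hypothetical figures).
# EV = market capitalisation + debt - cash; the multiple divides EV by
# EBITDA or by sales, both of which are capital-structure neutral.

def enterprise_value(market_cap: float, debt: float, cash: float) -> float:
    """Enterprise Value as market capitalisation plus net debt."""
    return market_cap + debt - cash

ev = enterprise_value(market_cap=800.0, debt=300.0, cash=100.0)  # -> 1000.0
ev_ebitda = ev / 125.0   # EBITDA of 125 gives a multiple of 8.0
ev_sales = ev / 500.0    # sales of 500 give a multiple of 2.0
print(ev, ev_ebitda, ev_sales)
```

Because EV includes debt, these multiples can be compared across firms with different leverage, which is exactly the property the text attributes to EV/EBITDA.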

Research by Kim and Ritter (1999) tested the use of multiple-based valuation for IPO valuations and found that P/E multiples with forecasted earnings gave more accurate value estimates than multiples using trailing earnings. In their research on the valuation methodologies used in 104 analyst reports, Demirakos et al. (2004) document that (1) valuation by comparatives is used more in the Beverages sector than in Electronics or Pharmaceuticals; (2) the dominant valuation model typically used by analysts is either a P/E model or an explicit multi-period DCF valuation model; and (3) none of the analysts use price to cash flow as their dominant valuation model. Similarly, Boatsman and Baskin (1981) applied the P/E multiple with two selections of comparable firms: chosen randomly, and chosen from firms with almost similar 10-year average earnings growth rates. They found the accuracy of the latter approach to be higher. Consistent with Boatsman and Baskin (1981), Foster (1986) documents that selecting comparables from the same industry improves comparability because such firms use the same accounting standards and methods.

Penman (2001) proposed that different accounting methods for comparables and the target firm create implementation problems for the method of comparables. Previous academic research that insisted on industry-based comparables offered no definition of industry; Alford (1992), however, provides clear guidelines for selecting appropriate comparable firms.

Erik Lie and Heidi J. Lie (2002), evaluating various multiples, found, first, that the asset multiple (market value to book value of assets) generally generated more precise and less biased estimates than the sales and earnings multiples. Adjusting companies for cash levels did not improve estimates of company value, but using forecasted rather than trailing earnings did. The Earnings Before Interest, Taxes, Depreciation, and Amortization (EBITDA) multiple yielded better estimates than the Earnings Before Interest and Tax (EBIT) multiple. Company size, company profitability, and the extent of intangible value in the company influenced the relative performance of the multiples, as well as the accuracy and bias of value estimates.

Feng Chen, Kenton K. Yee and Yong Keun Yoo (2007) addressed the question of whether the adoption of forward-looking methods improved or reduced valuation accuracy in shareholder litigation. Their results indicate that the adoption of forward-looking valuation methods enhances valuation accuracy in shareholder litigation.

The harmonic mean is always lower than the simple mean, so using a simple mean multiple would overestimate value. Liu et al. (2002) found that the harmonic mean outperformed the mean or median ratio of multiples, and performance improved when the harmonic mean was used. Similarly, Baker and Ruback (1999) suggested using the harmonic mean to estimate industry multiples. After calculating the harmonic mean of each year’s multiples, firms were sorted by industry classification (Standard Industrial Classification, SIC, code). Alford (1992) found that the best criterion for selecting comparable firms was either industry membership or a combination of risk and earnings growth rates; he documented that accuracy improved when the number of SIC digits used to match comparable firms was increased up to three. After calculating the harmonic mean, signed and absolute valuation errors for valuations from multiples were computed. Signed valuation errors evaluate whether the estimated value is biased positively, biased negatively, or unbiased, while absolute valuation errors show how accurate the estimated values are, that is, how close the valuation errors are to zero.
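
The aggregation and error measures described above can be sketched in a few lines; the peer multiples and prices below are hypothetical.

```python
# Sketch of the harmonic-mean aggregation and the two error measures.
# The harmonic mean is the inverse of the average of inverted ratios,
# so it sits below the simple mean and avoids its upward bias.

def harmonic_mean(multiples):
    return len(multiples) / sum(1.0 / m for m in multiples)

def signed_error(estimate: float, observed: float) -> float:
    """Positive when the estimate overshoots the observed price."""
    return (estimate - observed) / observed

def absolute_error(estimate: float, observed: float) -> float:
    """Distance of the valuation error from zero."""
    return abs(signed_error(estimate, observed))

peers = [10.0, 20.0, 40.0]
print(harmonic_mean(peers))        # about 17.14, below the simple mean of 23.33
print(signed_error(115.0, 100.0))  # 0.15, i.e. a 15% overestimate
```

A signed error near zero indicates an unbiased estimator; the absolute error is what the accuracy comparisons in the studies cited above rank.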

Methodology studies have demonstrated that characteristic-matched control samples provide more reliable inferences in market-based research (Barber and Lyon, 1997; Lyon et al., 1999). Sanjeev Bhojraj et al. (2003) extended this line of research and presented a more precise technique for matching sample firms based on characteristics identified by valuation theory. Their results showed that the use of “smart multiples,” which incorporate industry, country, and firm-specific factors in selecting peer firms, greatly reduces the problems associated with accounting diversity and differences in cross-border risks. Three studies that provide some insight on this topic are Kim and Ritter (KR, 1999), Liu, Nissim, and Thomas (LNT, 2002), and Baker and Ruback (BR, 1999). All three examined the relative accuracy of alternative multiples in different settings. KR used alternative multiples to value initial public offerings (IPOs), while LNT and BR investigated valuation accuracy relative to current stock prices. KR and LNT both found that forward earnings performed much better than historical earnings. LNT showed that, in terms of accuracy relative to current prices, the performance of forward earnings was followed by that of historical earnings measures, cash flow measures, book value, and finally, sales.

In addition, Baker and Ruback (1999) examined the advantages of using harmonic means, that is, the inverse of the average of inverted ratios, when aggregating common market multiples. Zarowin (1990) studied the cross-sectional determinants of earnings-price ratios and indicated that forecasted growth in long-term earnings was the dominant source of variation in these ratios. Other factors, such as risk, historical earnings growth, forecasted short-term growth, and differences in accounting methods, proved less important.

The greatest difficulty in multiples valuation is finding comparable firms. Multiples valuation requires many comparable firms in the industry, and these firms must be priced correctly (Damodaran, 1994). Comparable firms are those that are similar in terms of profitability, growth potential, business risk and financial risk. Cheng and McNamara (2002) recommended industry membership as the most important factor for selecting comparable firms when P/B and P/E are used for valuation. Benninga and Sarig (1997) felt that the multiples method should be used not as a primary method to value the company but as a secondary method to verify the results.

The enterprise value and equity value in multiples valuation were found to be lower than in DCF valuation (Doreen Nassaka and Zarema Rottenburg, 2011). Multiples valuation can be based on historical values or forecasted figures; forecasted earnings are used instead of historical data whenever possible (Koller et al., 2005). If historical values are used for calculating firm value, it is better to use several multiples to increase the accuracy of the results (Yoo, 2006); if the multiples are based on a mixture of historical data and forecasted earnings, no improvement in firm value estimates is observed. Another problem associated with Relative Valuation techniques is the focus on short-term earnings. While research has revealed that reported earnings are decreasingly important in explaining stock prices (e.g. Lev and Zarowin, 1999), the market’s focus on earnings has steadily increased (Alfred Rappaport, 2003). Related to the problem of relying too heavily on next year’s earnings is the problem of accurately forecasting them: several studies have shown that analysts make large errors in forecasting earnings (Dreman, 1998).

Discounted Cash Flow Valuation

The traditional method popularly used by academicians is the Discounted Cash Flow (DCF) method. These methods determine the company’s value by estimating the cash flows it will generate in the future and then discounting them at a discount rate matched to the flows’ risk. In Discounted Cash Flow valuation, the value of an asset is the present value of its expected cash flows, discounted back at a rate that reflects their riskiness (Damodaran, 2006). Cash flow discounting methods are based on a detailed, careful forecast, for each period, of each financial item related to the generation of the cash flows corresponding to the company’s operations. DCF is argued to be the best model by Kaplan and Ruback (1995) and Fernandez (2002); it is the model most frequently used in firm valuation and capital budgeting decisions (Gitman and Trahan, 1995; Graham and Harvey, 2001; Imam et al., 2008) and is regarded as one of the most important valuation methods by analysts. Following Modigliani and Miller (1961), DCF is used to value a whole company, whereby the company is considered a combination of several projects (Cooper and Argyris et al., 1998). To determine firm value, the present value of future cash flows from all the projects in the firm’s operations is computed (Penman, 2010). Free Cash Flow (FCF) is used in the DCF method because it represents the net cash flow of the company after other planned expenditure has been deducted (Damodaran, 2006). Free Cash Flow is independent of leverage (Koller et al., 2005), and it determines a company’s capability to pay off its debt and equity claims (Penman, 2010). The four steps of the DCF method suggested by Penman (2010) are:

1. Estimate the free cash flows for each forecast year.
2. Determine the Weighted Average Cost of Capital (WACC) and discount the FCF at this rate.
3. Determine the continuing (terminal) value.
4. Determine the enterprise value.
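
The four steps above can be sketched as a short calculation; the cash-flow forecasts, WACC and terminal growth rate below are hypothetical, and the terminal value uses a standard perpetuity-growth formula.

```python
# Minimal sketch of a DCF enterprise valuation (hypothetical inputs).

def dcf_enterprise_value(fcfs, wacc, terminal_growth):
    """Discount each forecast FCF at the WACC, then add a growing-perpetuity
    terminal value discounted back from the end of the forecast horizon."""
    pv_explicit = sum(fcf / (1 + wacc) ** t for t, fcf in enumerate(fcfs, start=1))
    terminal = fcfs[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal / (1 + wacc) ** len(fcfs)
    return pv_explicit + pv_terminal

# Three years of forecast FCF, a 10% WACC, and 3% perpetual growth.
value = dcf_enterprise_value([100.0, 110.0, 121.0], wacc=0.10, terminal_growth=0.03)
print(round(value, 2))
```

Note that the terminal value typically dominates the total, which is why the discount rate and growth assumptions matter so much in practice.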

Cash flows in the DCF are estimated using different proxies such as FCF, dividends or accounting earnings (Koller et al., 2005), and empirical evidence shows that the different proxies lead to different firm value estimates (Torrez et al., 2006). It has been documented that almost fifty percent of all financial analysts use a Discounted Cash Flow (DCF) method when valuing potential acquisition targets (Hult, 1998). In a study, Absiye and Diking (2001) found that all seven of their respondents, who were analysts, used the DCF method when conducting a firm valuation; the other valuation methods were used only as complements to the DCF valuation.

Skantz and Marcheini (1992) used a DCF model to value liquidating firms whose cash flows and growth patterns were known. They concluded that the market appeared to value stocks by discounting expected cash flows at a risk-adjusted required rate of return. The uniqueness of their sample, however, makes generalisation to going-concern companies difficult.

Economic Value Added

Residual income is also known as abnormal earnings or Economic Value Added (EVA). EVA® is an analytical tool for estimating a company’s economic profit, developed in 1982 by Joel Stern and G. Bennett Stewart III (Grant, 1996). EVA® has since become a registered trademark of Stern Stewart & Co., and even when EVA appears without the ® symbol it is to be understood as such. It is an excellent metric for monitoring a firm’s profitability and use of capital, and one of the most useful analytical tools for appraising a company’s financial performance. The opportunity cost of invested capital, also known as the capital charge, is determined by multiplying the WACC by the capital invested (Wilson, 1997). The company’s capital comprises not only liabilities but also shareholders’ equity; therefore, the formula used is the sum of adjusted shareholders’ equity, interest-bearing liabilities and the present value of the capital base change. The capital base change is the difference between capital invested in the terminal year and capital invested in the last forecasted year; to compute its present value, it is multiplied by a discount factor.
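
The capital charge and economic profit described above reduce to a one-line calculation; the figures below are hypothetical.

```python
# Sketch of the EVA computation (hypothetical figures):
# economic profit = NOPAT minus the charge for the capital employed.

def eva(nopat: float, invested_capital: float, wacc: float) -> float:
    """EVA = NOPAT - (WACC x invested capital)."""
    capital_charge = wacc * invested_capital  # opportunity cost of capital
    return nopat - capital_charge

# NOPAT of 150 on 1000 of capital at a 10% WACC leaves 50 of economic profit.
print(eva(nopat=150.0, invested_capital=1000.0, wacc=0.10))  # -> 50.0
```

A positive result means the firm earned more than its cost of capital in the period; a negative result means capital was invested below its opportunity cost.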

Stern (1990) observed that EVA as a performance measure captures the true economic profit of an organisation. EVA-based financial management and incentive compensation schemes give managers better-quality information and superior motivation to make decisions that create maximum shareholder wealth (Stewart, 1994). More and more companies throughout the world have adopted the EVA system. Grant (1996) found that the EVA concept has permanently changed the way real profitability is measured: EVA is a financial tool that focuses on the difference between a company’s after-tax operating profit and its total cost of capital. Luber (1996) confirmed that a positive EVA over a period of time is accompanied by increased Market Value Added (MVA), while a negative EVA brings MVA down, as the market loses confidence in the competence of a company to ensure a handsome return on the invested capital.

EVA valuations are mathematically identical to DCF valuations, but EVA does not involve forecasting future cash flows and measuring their present value the way DCF valuation does (Wilson, 1997). EVA is earnings in excess of the normal return on capital employed (O’Hanlon and Peasnell, 2002); the idea behind it is that shareholders must earn a return that compensates for the risk taken. Stewart (1991) defined NOPAT (Net Operating Profit after Tax) as the "profits derived from the company’s operations after taxes but before financing costs and non-cash bookkeeping entries." In eliminating the impact of non-cash bookkeeping entries, however, Stewart made one exception: depreciation is subtracted in arriving at NOPAT because, he argued, it is "a true economic expense." Stewart identified 120 adjustments to be made to accounting profit as reported in the profit and loss account (Ehrbar, 1998). These adjustments eliminate potential distortions in accounting results based on a country’s Generally Accepted Accounting Principles (GAAP).

Research also shows that analysts frequently use Discounted Cash Flow valuation and Residual Income valuation methods (Copeland et al., 2000; Penman, 2001; Sougiannis and Yaekura, 2001). In addition to its prominent role in equity valuation, Residual Income is used as a measure of performance (Ohlson, 2002). EVA can be a diagnostic tool, showing managers where the firm needs to improve to increase its value in the future (Wilson, 1997). Unlike the DCF approach, which brings both internal and external factors into the formula, EVA does not take external effects such as inflation into consideration in the accounting values of capital and profit (Wilson, 1997). In addition, positive EVA results over the years do not necessarily mean that the company is operating well; the situation may arise because the invested capital used in the accounting return is too small.

EVA requires fewer inputs than DCF valuation. The metric needs data extracted from the Income Statement and Balance Sheet to calculate the surplus of NOPAT over the capital charge. There are several pathways to the final value of the company using EVA, but all of them rest on the Stern Stewart foundation, which requires NOPAT, the company’s capital (C), the capital cost rate, the cost of debt capital, the cost of equity capital and tax. The advantage of EVA over cash flows was also echoed by Sirower and O’Byrne (1998).

Stewart (1991) emphasised that to obtain significant benefits, EVA should be fully integrated into a company by linking executive compensation to improvement in EVA. Stewart maintained that if executives’ bonuses and other incentives were linked to traditional parameters (Earnings per Share (EPS), turnover, Return on Net Worth (RONW)), EVA would fail as a performance measure. Stewart (1991) argued that the market value of a firm is largely driven by its EVA-generating capacity. A study by Ashok Banerjee (2000) tested the relevance of Stewart’s claim in the Indian context: the relationship between EVA and market value was examined on a sample of 200 companies, and the results confirmed Stewart’s claim.

Corporations in the US started disclosing EVA information from the beginning of the 1990s. Since then, the number of companies adopting EVA has increased: more than 300 companies, with combined revenue approaching a trillion dollars a year, have implemented the EVA framework for financial management and incentive compensation. Adopting the EVA philosophy forces a company to find ingenious ways to do more with less capital (Tully, 1993).

Vijayakumar A. (2012) examined whether EVA has better predictive power than traditional accounting measures such as EPS, RONW, capital productivity and labour productivity. The results showed that 53 to 76 per cent of the sample companies registered negative EVA during the terminal years of the study period. Factor analysis extracted three factors which together explain 69.90 per cent of the total variance. Further, sales and profit after tax were found to have a stronger relationship with EVA, and multiple regression indicated that four variables, namely EPS, Sales, PAT and MVA, best explained EVA.

2.4 Valuation using Multiples and Valuation Errors

Syed Umar Farooq et al., (2010) valuation models like the Residual Income Valuation
Method (RIVM) offer advantages over the use of multiples-based approaches like the
P/E, EV/EBITDA and EV/Sales ratios. They found that sophisticated model like RIVM
perform better for both high and low intangible firms compared to multiple based
valuations. They compared the overall performance of the valuation models
(EV/EBITDA, EV/Sales, P/E, RIVM); and whether the underlying differences between
these two industries significantly affected the performance of these models. They
reported signed valuation error for multiple. In terms of bias, P/E and EV/EBITDA
rendered very small median signed valuation errors. However, the mean sign valuation
error was positive for EV/EBITDA, EV/ Sales and P/E. The P/E and EV/Sales tend to
overestimate the observed share price by 12% and 15% simultaneously. The intrinsic
values calculated by using EV/EBITDA gave the lowest valuation errors followed by
P/E. Similarly the performance of newly introduced multiple (EV/Sales) was worse than
both P/E and EV/EBITDA. In terms of accuracy the RIVM generated more accurate
value estimates for high intangibles than for low intangibles. The absolute valuation

29
errors for high intangibles are around 31% around, and 35% for low intangibles. The
overall absolute valuation errors indicated that the RIVM gave the lowest errors in all
firms irrespective of whether they were high or low intangible. The results were consistent with Francis
et al. (2000) in that the RIVM gives the lowest valuation error in terms of accuracy. In terms of
bias, the RIVM generated value estimates in the same direction for each industry:
negatively biased for all firms, whether high or low intangible.
For all firms the RIVM tended to underestimate observed prices by 10% overall, with
11% for high intangibles and 8% for low intangibles.
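The signed and absolute valuation errors discussed above are, in this literature, typically the scaled difference between the multiple-implied value estimate and the observed price. A minimal sketch of the convention, with illustrative figures only (not the study's data):

```python
# Signed and absolute valuation errors for a multiple-based value estimate.
# Convention common in this literature: error = (estimate - observed price) /
# observed price, so a positive signed error means overestimation.

def signed_error(estimate: float, price: float) -> float:
    return (estimate - price) / price

def absolute_error(estimate: float, price: float) -> float:
    return abs(signed_error(estimate, price))

# Illustrative example: a peer-median P/E of 14 applied to EPS of 10
# gives an estimate of 140 against an observed price of 125.
estimate = 14.0 * 10.0   # 140.0
price = 125.0

print(signed_error(estimate, price))    # 0.12 -> overestimates by 12%
print(absolute_error(estimate, price))  # 0.12
```

A study's reported "median signed valuation error" is then simply the median of these per-firm errors across the sample.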

Copeland, Koller and Murrin (1994), Damodaran (1996), and Palepu, Healey,
and Bernard (2000) had discussed price multiples. While globally the concentration
was on all multiples, Indian work had been confined to P/E ratios, with most of the
research done on historical earnings and cash flows. Boatsman and Baskin (1981)
studied the valuation accuracy of P/E multiples based on two sets of comparable firms
from the same industry. They observed that valuation errors were minimised when
comparable firms were chosen based on similar historical earnings growth rather than
when they were chosen randomly. Alford (1992) examined the effect of choosing
comparables based on industry, size (risk), and earnings growth on the precision
of valuation using P/E multiples. He found that pricing errors declined when the
industry definition used to select comparable firms was narrowed from a broad single-digit
Standard Industrial Classification (SIC) code to classifications based on two and three digits. He
also observed that controlling for size and earnings growth over and above industrial
controls did not reduce valuation errors. Kaplan and Ruback (1995) analysed the
valuation properties of the DCF approach for highly leveraged transactions and found that
DCF valuations approximate transacted values well, and that EBITDA (earnings
before interest, tax, depreciation and amortisation) multiples result in similar
valuation accuracy.

Penman, (1997) interpreted the P/E ratio and market-to-book ratio. The study
also described the role of book rate of return on equity (the ratio of their denominators)
in the determination of ratios and the relationship between them. The study proved that
the description of the P/E ratio reconciled the standard growth interpretation of P/E with
the transitory earnings (Molodovsky Effect, 1953) interpretation. Both were correct only
in special cases. It also revealed that because a given level of P/E was associated with

30
alternative combinations of current & expected future return on equity, the current
return on equity is not (unconditionally) a good indicator of P/E. Penman (1997)
investigated approximate benchmark valuations that combined earnings and book value
together. He examined the robustness of these weights over time. He tried to combine
the two multiples into one price so that the information provided by both of them could
be used. The study showed that weights vary in a nonlinear way over the amount of
earnings relative to book value and systematically so over time. His study also
demonstrated that the estimated weights were robust over time and could be used to
predict prices.

More empirical tests were done on the absolute investment performance of
different multiples. Studies over many decades and in different countries had shown that
low multiple stocks (value stocks) performed better than the high multiple stocks
(growth stocks). Among many others, Basu (1977), Lakonishok, Shleifer and Vishny
(1994), and Dreman (1998) showed that low P/E stocks earned positive abnormal
returns relative to the market, while high P/E stocks gave negative abnormal
returns. Goodman and Peavy (1983) found the same with the use of industry-relative
P/E ratios. Peters (1991) tested the Price Earnings Growth (PEG) ratio approach and
found significantly higher returns for low PEG stocks than for high PEG stocks. Fama and
French (1992) and Dreman (1998) again among many others, found that low P/B (or
low B/M) stocks performed better than stocks with high such ratios. Capaul, Rowley
and Sharpe, (1993) extended the analysis of P/B ratios across international markets, and
concluded that low multiple stocks earned abnormal returns in every market they
analysed. The results of studies on the Price/Sales (P/S) and Price/Cash Flow (P/CF)
and even Price/Dividend Yield (P/DY) were no different (Dreman, 1998).

Tasker (1998) compared across-industry patterns in the selection of comparable
firms by investment bankers and analysts in acquisition transactions. She found that the
systematic use of industry-specific multiples was consistent with different multiples
being more appropriate in different industries. Beatty, Riffe, and Thompson (1999)
analysed different linear combinations of value drivers derived from earnings, book
value, dividends, and total assets and documented the benefits of using the harmonic
mean and introduced the price-scaled regressions. They found that the best performance
was achieved by using weights derived from harmonic mean book and earnings
multiples and coefficients from price-scaled regressions on earnings and book value.
Baker and Ruback (1999) studied econometric problems associated with different ways
to compute industry multiples and compared the relative performance of multiples based
on EBITDA, EBIT (or earnings before interest and taxes), and sales. They empirically
showed that absolute valuation errors were proportional to value. Kim and Ritter
(1999), in their investigation of how initial public offering prices were set using
multiples, added forecasted earnings to a conventional list of value drivers that
included book value, earnings, cash flows, and sales, instead of focusing only on
historical accounting numbers. They found that forward P/E multiples, based on
forecasted earnings, dominated all other multiples in valuation accuracy and that the
EPS forecast for the next year dominated the current-year EPS forecast.
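The harmonic mean favoured by Beatty, Riffe, and Thompson (1999), and used for industry multiples by Baker and Ruback (1999), aggregates peer multiples while down-weighting extreme high values, unlike the arithmetic mean. A short sketch with hypothetical peer P/E ratios (the numbers are illustrative assumptions, not from the studies):

```python
# Harmonic mean of peer multiples (e.g. P/E). It is widely preferred over the
# arithmetic mean because it corrects the upward bias from high-multiple peers.

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

peer_pe = [8.0, 12.0, 24.0]          # hypothetical comparable-firm P/E ratios
arith = sum(peer_pe) / len(peer_pe)  # ~14.67: pulled up by the 24x outlier
harm = harmonic_mean(peer_pe)        # 3 / (1/8 + 1/12 + 1/24) = ~12.0

target_eps = 5.0
print(harm * target_eps)   # value estimate of ~60 using the harmonic mean
print(arith * target_eps)  # ~73: the arithmetic mean overestimates
```

The harmonic mean of the peers' P/E is equivalent to inverting the average E/P yield, which is why it behaves better when some peers have very small earnings.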

Sanjay Sehgal and Asheesh Pandey (2009) found the following results with
regard to the relative performance of different value drivers: (i) forward earnings
performed the best, and performance improved when the forecast horizon was
lengthened and when earnings forecasted over different horizons were aggregated; (ii)
the intrinsic value measures, based on short-cut residual income models, performed
considerably worse than forward earnings; (iii) among drivers derived from historical
data, sales performed the worst and earnings performed better than book value; (iv)
earnings excluding one-time items outperformed reported earnings, while cash flow
measures, defined in various forms, performed poorly; and (v) using enterprise value,
rather than equity value, for sales and EBITDA further reduced performance. Liu,
Nissim and Thomas (2002b) extended their previous work over different countries.
They examined the ability of industry multiples to approximate observed stock prices in
ten countries. The value drivers were forecasted numbers for earnings, dividends, cash flows
and sales. They found that multiples based on earnings performed the best, those based
on sales perform the worst and dividends and cash flow multiples exhibited intermediate
performance. Second, using forecasts improved performance over multiples based on
reported numbers, with the greatest (smallest) improvement being earnings (sales).
Third, multiples based on earnings forecasts represented a reasonably accurate valuation
technique, with the implied valuations for over half the firms in different countries
being within 30% of observed valuations. They noticed a sustained decline in the
performance of all value drivers after 1997, due to increased within-industry
heterogeneity in market valuations during this period. Liu, Nissim and Thomas (2002)
tried to find whether valuations based on cash flow multiples were better than earnings
multiples. They observed that, although operating cash flows were often presumed to be better
than earnings, stock prices were better explained by reported earnings than by reported
operating cash flows.

Huang, Tsai and Chen (2007) re-examined the P/E anomaly by decomposing
P/E ratios into a fundamental component and a residual component, which enabled them
to capture factors that potentially provide better measures of investor overreaction. They
found that both firm specific and macroeconomic factors determined P/E multiples.
Analysts' long-term growth rate forecasts, the dividend pay-out ratio, and firm size were
all positively associated with P/E ratios, while financial risk and aggregate bond yields
were negatively associated with P/E ratios. They also discovered strong evidence of
performance reversals for the top P/E and bottom P/E portfolios in the years subsequent
to the portfolio formation year, with the strongest reversal occurring in the first post-
formation year. A small body of literature on price multiples was also available for
emerging markets. Irina, Alexander and Ivan (2007) determined that in cross-border
valuations the use of market multiples (valuation ratios such as P/E) should be restated,
because the direct use of comparable companies (peers) from developed markets to
value companies in emerging markets was inaccurate. They proved that using peers
from developed markets would overstate the estimation of equity value in emerging
markets, as companies from emerging markets were subject to various factors, such as
political and economic risk, a low level of corporate governance and high negative
skewness, that required an adequate discount.

Gill (2003) demonstrated empirically that stock-market valuations were no
longer driven solely by traditional investment principles and found that the low P/E ratio
as an indicator no longer held good. She observed that there is an acceptable
P/E range for different industries and that it is not only the past record of the P/E ratio
but also the average P/E ratio for the industry that should be looked into. She also
observed that the use of the P/E ratio along with the EPS growth rate could produce the
more useful price earnings to growth (PEG) ratio, which was a sound indicator of a
company's potential value. The “Price/Earnings” (P/E) and the “Price/Book Value”
(P/BV) ratios are among the most widely used for the relative valuation of private (non-public)
companies (Reilly et al., 2003). They varied across companies, sectors and
markets. Their levels were high before the financial crisis and went down significantly
after it. The popular Gordon constant-growth dividend model for valuing common
stock was an excellent starting point for the derivation of the fundamental (theoretical)
P/E and P/BV ratios. He divided both sides of the Gordon model by EPS (the current
earnings per share), and after a couple of algebraic transformations, came up with the
fundamental P/E model. Dhankar and Kumar, (2007) measured the performance of a set
of portfolios, which were based on P/E of stocks. The study found no consistency
between the portfolios' expected return and their corresponding P/E ratios. It was
observed that the stock market failed to reflect an instantaneous response pertaining to
earnings information. These findings questioned the efficient market hypothesis but held
the application of capital asset pricing model in the Indian stock market.
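The Gordon-model algebra mentioned above can be written out in full. Using standard notation rather than the author's own symbols: with $P_0$ the current price, $E_0$ and $E_1$ current and next-year EPS, $D_1$ the next dividend, $b$ the dividend payout ratio, $r$ the required return on equity and $g$ the constant growth rate, dividing both sides of the Gordon model by EPS gives the fundamental P/E:

```latex
P_0 = \frac{D_1}{r - g}
\qquad\Rightarrow\qquad
\frac{P_0}{E_0} = \frac{D_1 / E_0}{r - g} = \frac{b\,(1+g)}{r - g},
\qquad
\frac{P_0}{E_1} = \frac{b}{r - g},
```

since $D_1 = b\,E_1 = b\,E_0\,(1+g)$. Dividing instead by book value per share $B_0$, with $E_1 = ROE \cdot B_0$ and $g = ROE\,(1-b)$, yields the corresponding fundamental price-to-book ratio $P_0 / B_0 = (ROE - g)/(r - g)$.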

An extremely valuable finding related to P/E ratios was Molodovsky's counter-movement
principle. According to Block, “Molodovsky's counter movement
principle (rule) was a major breakthrough which provided analysts with their first clear
insight into the behaviour of price-earnings ratios”. Estimated future earnings (or “basic
earning power”) were essentially an average. They contained within themselves high
earnings as well as low. Therefore, when current earnings increased above the estimated
basic earning power, they should be capitalized by the application of a lower multiplier
(that is a lower P/E); when they fall below such an estimate, the multiplier should be
higher than if it were used for capitalizing earning power itself (Molodovsky,1953).

Alford (1992) used the P/E multiple to assess how the benchmark companies
should be chosen. Using criteria such as industry, assets, return on equity, and
combinations of these factors, and a sample of 4,698 companies from 1978, 1982, and
1986, he examined seven potential sets of comparable companies. He found that
choosing benchmark companies based on industry alone or in combination with Return
on Equity (ROE) or total assets led to the most accurate valuations and that the accuracy
improved as the number of SIC digits used to define an industry was increased up to
the third digit. He also found a positive relationship between company size and
valuation accuracy. The median percentage errors in valuation ranged from 23.9 percent
to 25.3 percent.

Sehgal and Pandey (2009) examined the behaviour of price multiples in India
from 1990 to 2007. They also found that there was a very weak relationship between price
multiples and their fundamental determinants. They found that in the case of historical
standalone multiples, both at the sectoral level as well as at the market level, historical
P/E is the most efficient price multiple for equity valuation. The results were consistent
with those for matured markets as shown by Liu, Nissim and Thomas (2002 and 2002b).
Both the forecast evaluation measures had the minimum pricing error for P/E. The P/BV
was the next most efficient price multiple, whereas P/CF was the worst performer and
P/S was in the third place according to both the tests. The study also proved that price
multiples were sensitive to market conditions and, therefore, were generally higher in
upturns with the exception of infrastructure related sectors.

Moonchul Kim & Jay R.Ritter (1999) examined the pricing of IPOs using
comparable firm multiples. Valuing IPOs on the basis of the price-to-earnings, price-to-
sales, enterprise value-to-sales, and enterprise value-to-operating cash flow ratios of
comparable firms was of limited use when historical numbers rather than forecasts were
used. Within an industry, the variation in these ratios was so large, both for public firms
and IPOs, that they had only modest predictive value. Many idiosyncratic factors were
not captured by industry multiples unless various adjustments for differences in growth
and profitability were made. The use of earnings forecasts improved the valuation
accuracy substantially. The valuation accuracy was higher for older firms than for
young firms.

Kaplan and Ruback (1995) estimated valuations for a sample of highly leveraged
transactions (HLTs) based on market value to EBITDA (earnings before interest, taxes,
depreciation, and amortization). The benchmark multiples were the median multiples for
companies in the same industry, companies that were involved in similar transactions, or
companies in the same industry that were involved in similar transactions. For their sample
of 51 HLTs between 1983 and 1989, they found both the DCF and multiple methods to be
useful valuation tools with similar levels of precision. Depending on the benchmark
multiple used, 37–58 percent of the valuations fell within 15 percent of the actual HLT
transaction value.

Mingcherng Deng et al. (2009) studied errors in enterprise and equity valuation
based on multiples of firm fundamentals. Based on a more representative sample (including
firms with losses and smaller start-up firms), the study complemented the existing studies.
Contrary to the results in the extant studies, they found that: (1) valuation errors for
multiples based on sales were, on average, lowest for both enterprise and market value;
and (2) when book value and earnings were compared as valuation fundamentals,
book-value-based multiples outperformed the earnings-based multiples. They focussed
on multiples of current financial variables. They reported a vast improvement in valuation
errors when an average omitted variable (intercept) was incorporated in the calculation of
harmonic means; valuation errors also improved significantly when fundamentals from
different financial statements were combined, with the largest improvement observed when
balance sheet fundamentals (net operating assets and book value of equity) were combined
with fundamentals from the income statement (such as EBITDA). Another study which
evaluated the value created in acquisitions of bankrupt companies relative to non-bankrupt
companies was done by Hotchkiss and Mooradian (1998). They first used valuation by
multiples to estimate the value of bankrupt companies and compared these values with
acquisition prices to determine the degree of discounting associated with the bankrupt
companies. The multiples they applied were the ratios of enterprise value to sales and of
enterprise value to assets, in which “enterprise value” was defined as the transaction price
minus fees and expenses plus liabilities. They reported that bankrupt companies were
acquired at discounts of 40–70 percent.

According to the literature, equity valuation using multiples was a very popular
method. Penman (2001) defined a multiple as the ratio of stock price to a particular
accounting number from the financial statements, e.g. the price-earnings ratio (P/E), the
price-to-book ratio (P/B), the price-to-sales ratio (P/S), and the price-to-cash flow ratio.
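Each of these multiples is a simple per-share ratio; a minimal sketch with purely illustrative figures (none of the numbers come from the studies cited here):

```python
# The four price multiples listed above (Penman, 2001), computed per share.
# All figures are hypothetical, for illustration only.

price = 120.0         # market price per share
eps = 8.0             # earnings per share
book_value_ps = 60.0  # book value of equity per share
sales_ps = 150.0      # revenue per share
cash_flow_ps = 10.0   # cash flow per share

multiples = {
    "P/E": price / eps,             # 15.0
    "P/B": price / book_value_ps,   # 2.0
    "P/S": price / sales_ps,        # 0.8
    "P/CF": price / cash_flow_ps,   # 12.0
}
print(multiples)
```

In relative valuation the direction is reversed: a peer multiple is multiplied by the target firm's own driver (EPS, book value, sales, or cash flow) to obtain a value estimate.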

Sanjeev Bhojraj and Charles M. C. Lee, (2003) examined the efficacy of the
selected comparable firms in predicting future (one- to three-year-ahead) market
multiples. They pointed out that comparable firms selected in this manner offer sharp
improvements over comparable firms selected on the basis of other techniques,
including industry and size matches. The results held for all four accounting-based
multiples, and the adjusted R-squared was typically more than double that achieved
using simple industry and size matches. They also found that the closest matching firm
which used the warranted multiples technique was from a different country. The results
suggested that systematically matching firms on the basis of their warranted
multiples helped to identify superior comparable firms.

Dimiter N. Nenkov (2010) used Relative Valuation for estimating the value of
companies and the risks associated with its application in Bulgaria. The fundamental
price earnings and price-to-book value ratios had also been estimated and compared
with the corresponding actual ratios in the Bulgarian Stock Exchange. The results of
the study indicated that during the period prior to the financial crisis the average levels
of the actual ratios on the Bulgarian capital market were considerably higher than the
levels suggested by fundamentals.

Multiples-based valuation methods were used to “justify” high stock prices in the
environment of growing, bull equity markets. For developing markets this was usually
done by mechanically “implanting” the inflated actual ratios from mature international
markets, without analysis and without accounting for fundamentals.

2.5 Comparison of Valuation Methods

In theory, valuation models give the same intrinsic value, yet some models outperform in
particular situations under different assumptions and growth rates. Similarly, industry
characteristics matter, and specific models perform better in particular industries.
Francis et al., (2000) examined whether the theoretically equivalent models give the
same intrinsic value in practice. They compared value estimates of three models by
reference to (a) bias with respect to observed prices, (b) accuracy with respect to
observed prices and (c) explainability of observed prices. They found that the value
estimate from abnormal earnings model was more accurate and explained significant
variation in stock price than the discounted dividend model and the discounted free cash
flow model. Lie and Lie (2002) found that the asset value multiple provided more
precise value estimates than sales and earnings multiples in both financial and
non-financial companies.

Kaplan and Ruback (1995) examined the DCF approach in the context of highly
leveraged transactions such as management buyouts and recapitalizations. They found
that transaction prices were close to the present value of projected cash flows, although
they were unable to reject the hypothesis that the projections were made to justify the
price. They also reported that a CAPM-based, DCF valuation approach had
approximately the same valuation accuracy as a comparable firm valuation approach
with earnings before interest, taxes, depreciation, and amortization as the accounting
measure being capitalized. Their sample firms were typically large, mature firms, unlike
firms going public. Gilson, Hotchkiss, and Ruback (2000) also found that, for firms
emerging from bankruptcy, DCF valuations had about the same degree of accuracy as
valuations based upon comparable firm multiples. They showed that the economic
interests of various parties in the bankruptcy proceedings affected the cash flow
forecasts that were used.

Gilson et al., (2000) compared the DCF valuation method and the use of
multiples for valuation of companies emerging from bankruptcy. When they used
EBITDA multiples based on the median of companies in the same industry, about 21
percent of the valuations were within 15 percent of market values. Although the value
estimates generated by the earnings multipliers in the Gilson et al study were generally
unbiased, they exhibited a wide degree of dispersion.

Research by Demirakos et al. (2004) showed that analysts chose either the discounted
cash flow model or the P/E model. The findings of Demirakos et al. (2004) gave an
indication of the practicalities of the residual income valuation model. Francis et al.
(2000) examined whether the theoretically equivalent
methods (Discounted Dividend Method, the Discounted Free Cash Flow method, and
the discounted abnormal earnings method) provided the same intrinsic value in practice.
They compared value estimates of three models by reference to (a) bias with respect to
observed prices, (b) accuracy with respect to observed prices and (c) explainability of
observed prices. They documented that the value estimate from the abnormal earnings
model was more accurate and explained more variation in stock price than the discounted
dividend model and the discounted free cash flow model, and was therefore superior
because a large proportion of the estimated value of equity came from book
value and/or the model had greater forecasting precision. Like Francis et al. (2000),
Penman and Sougiannis (1998) also compared whether theoretically equivalent
valuation models (dividend discount model, discounted cash flow model and accrual
earnings-based valuation model) provided the same intrinsic value. However, Penman and
Sougiannis (1998) used ex-post payoffs rather than ex-ante analysts' forecasts. Their
results, consistent with Francis et al. (2000), indicated that accrual earnings-based
valuations dominated the Free Cash Flow and Dividend Discount methods. Penman (2001)
reported that the Residual Income Valuation model provided the same value as the
Discounted Cash Flow model; if the valuation attributes were treated properly, the model
gave the same result over a finite horizon (Fernandez, 2002; Lundholm and O'Keefe, 2001b).

More academic research compared the DCF and various Residual Income
models. The evidence regarding the relative superiority of these methods was mixed.
Bernard (1995), Penman and Sougiannis (1998), Frankel and Lee (1998) and Francis et
al.,(2000b) found that the RI valuation models predicted or explained stock prices better
than the models based on discounting short-term forecasts of dividends or cash flows.
On the other hand, studies by Stober (1996), Dechow, Hutton and Sloan (1999), Myers
(2009) and Callen and Morel (2000) evidenced that the Residual Income (RI) model
was of limited empirical validity.

Ruben van de Sande (2012) did not find DCF yielding values closer to the
market value than the multiples method. Excluding Terminal Value led to more accurate
valuations. Doreen Nassaka and Zarema Rottenburg (2011) calculated firm value
using DCF and found higher values using DCF and Real Options than with the multiples
method. They concluded that a 1% change in WACC led to a large change in firm value
and share price. The cost of capital had a direct implication for the value of the firm, as a
lower cost of capital led to higher firm value. DCF was the best method, but only when the
company was profitable (Russell, 2007).

In recent years, academic research had provided empirical evidence on the
relative superiority of cash flow versus earnings based valuation techniques. Dechow
(1994) found that stock returns were more highly associated with earnings than with
cash flow. Similarly Penman and Sougiannis (1998) documented that earnings valuation
techniques consistently outperformed cash flow valuation techniques over alternative
forecasting horizons. In another study, however, Black et al., (1998) found that the
relative superiority of earnings versus cash flow existed only for companies in mature
life cycle stages. In the start-up, growth and declining stages, operating cash flow was
more value relevant.

The results of the studies on EVA were mixed. Stewart (1991) had first studied
the relationship with market data of 618 US companies. Stewart observed that the
relationship between EVA and MVA (Market value Added) was highly correlated
among US companies. Lehn and Makhija (1996) in their study of 241 US companies
over two periods (1987–1988 and 1992–1993) observed that both measures (EVA and
MVA) correlated positively with stock returns and that the correlation was slightly
better with EVA than that with traditional performance measures like Return on Assets
(ROA) and Return on Equity (ROE). On the predictive power of EVA in explaining
MVA or shareholder wealth, several researchers Uyemura, Kantor and Petit, (1996);
McCormack and Vytheeswaran, (1998); O'Byrne, (1996); Milunovich and Tsuei, (1996)
and Grant (1996) observed that EVA was better correlated with MVA or shareholder
wealth than other traditional parameters such as ROCE, RONW and EPS.

Riceman, et al., (2002) argued that EVA was a performance measure that was
being used by an increasing number of companies, but academic research on EVA was
limited. Several researchers discussed the relationship between EVA and Market Value
among various companies in India. The results of the analysis confirmed Stern's
hypothesis that the company's current operational value was more significant in
contributing to the change in market value of shares in the Indian context. Bardia
(2002) revealed that in a dynamic environment, a common investor found it increasingly
difficult to monitor his investments. EVA guides investors in evaluating the
performance of the company and monitoring their investments. Joel Stern
(2003) presented the results of Stern Stewart's research on Indian companies, which
showed the considerable need to improve the wealth creation performance and
allocation of capital in the Indian economy. They explained how the effective
implementation of the EVA framework could be a solution to address this problem. Ali
M Ghanbari and Narges Sarlak (2006) empirically reviewed the trend of EVA of Indian
Automobile Companies. The results indicated that there was a significant increasing
trend in EVA during the period of study and the firms in the automobile industry were
moving towards the improvement of their firm's value.

The valuation of a company was not finished once two or three values had been generated
through the use of a few valuation methods. These values differed, which could be
the result of the market not being efficient, differences in methodologies,
differences in assumptions made, and even errors in valuation. Imam, Barker, and Clubb
(2008) conducted semi-structured interviews with sell-side and buy-side analysts in the
United Kingdom to determine which valuation methods analysts used, why they used them,
and how they used them. Their results showed that the two most widely used valuation
methods are the P/E ratio and the FCF to the firm model. In contrast, few analysts used
economic value analysis, multiples based on book values (whether price or enterprise value
multiples), or the P/Sales ratio. Approximately 60 percent of the analysts expressed a strong
preference for cash flow–based valuation methods, particularly buy-side analysts. However,
most analysts admitted that they often complemented their cash flow–based analysis with a
multiples-based analysis. Some valuation methods were sector specific. For example, the
P/B ratio and EV/Sales ratios were rarely used, except to value financial institutions and
retailers, respectively.

2.6 Determinants of Enterprise Value Creation

To create value, the management must have a deep understanding of the
performance variables that drive the value of the business, called key value drivers.
There were two reasons why such an understanding was essential. First, the
organization cannot act directly on value; it has to act on the things that influence
value, such as customer satisfaction, cost, capital expenditures, and so on. Second, it is
through these drivers of value that the management understands the rest of the
organisation and communicates what is expected to be accomplished.

Sustainable value creation was hindered by the short-term focus prevalent
in the corporate world. This was illustrated by proxy statement disclosures: for 90% of
1,200 listed companies, the CEO's longest accountable business-performance period was
three years or less. Value was created when the Return on Invested Capital (ROIC) was
greater than the Weighted Average Cost of Capital (WACC). When ROIC was less than
WACC, value was destroyed and the business strategy was not creating value (Stephen
O'Byrne et al., 2014).
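The ROIC-versus-WACC condition above is commonly expressed in money terms as economic profit, the spread times invested capital. A minimal sketch with illustrative figures (the function name and numbers are assumptions, not from the studies cited):

```python
# Economic profit as the money value of the ROIC-WACC spread:
# value is created when ROIC > WACC and destroyed when ROIC < WACC.
# All figures are hypothetical.

def economic_profit(invested_capital: float, roic: float, wacc: float) -> float:
    return invested_capital * (roic - wacc)

capital = 1_000.0  # invested capital, in any currency unit
wacc = 0.10        # 10% weighted average cost of capital

print(economic_profit(capital, 0.14, wacc))  # 40.0: 4% spread -> value created
print(economic_profit(capital, 0.08, wacc))  # -20.0: negative spread -> value destroyed
```

This is the same logic that underlies the EVA measure discussed earlier, where the spread is computed from NOPAT relative to a capital charge.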

A value driver is any variable that significantly affects the value of the
organization. To be useful, however, value drivers need to be organized so that the
management could identify which had the greatest impact on value and assign
responsibility for their performance to individuals who can help the organization meet
its targets.

FIGURE 2.1 CORPORATE OBJECTIVES AND VALUE DRIVERS

Many scholars have addressed the relationship between knowledge assets and a
firm's market value. In the vast majority of these studies, knowledge assets have been
operationalised by research and development investments and patents. The typical and
quite consistent findings revealed a positive effect of research and development activities
on the market valuation of firms, and inventive outcomes, as well as quality indicators of
inventive outcomes, were found to generate a premium. Firms which engaged in
fundamental research were regarded as being more capable of re-combining
technologically distant knowledge and created more valuable innovations than the other
firms (Nelson, 1959; Rosenberg, 1990). Firms with (basic) research capacities typically
had superior capabilities to understand and integrate external knowledge and to identify
promising trajectories for applied research and development (Cohen and Levinthal,
1990; Fleming and Sorenson, 2004). Corresponding empirical research showed that
internal and external knowledge was complimentary particularly when firms invested
into basic research, leading to superior inventive outcomes (Cockburn and Henderson,
1998; Lim, 2004; Cassiman and Veugelers, 2006; Fabrizio, 2009). Firms engaged in
basic research developed inventions at a faster pace, which allowed them to realize
first-mover advantages (Rosenberg, 1990; Fabrizio, 2009). However, the execution of
basic research often required specific human resource compositions and incentive
systems, and such research was subject to higher uncertainty (Nelson, 1959; Rosenberg,
1990). Despite these costs, a positive overall impact of basic research on firm value
was expected. Deng et al. (1999) examined whether
science-based patents led to an effect on the market valuation beyond the standard
measures of R&D and patent stocks. They documented a positive effect, which
indirectly indicated that (basic) research led to more successful innovations.

Srishty Sarawgi Jain, JV Ramana Raju, Anirban Dutta and Mihir Dash (2010) in
their study addressed the issue of valuation of Information Technology-enabled Services
(ITeS) companies. Traditionally, enterprise valuation models were either asset-based,
earnings-based, or a mixture of these. Assets-based valuation models focussed on fixed
assets or tangible assets as a source of value creation, while earnings-based valuation
models focussed on earnings/profitability and growth. For ITeS companies, human
resources (intangible assets) were expected to play a more important role in driving
value than tangible assets. The study proposed a model for enterprise value in the ITeS
sector which involved tangible assets (fixed assets, financial/working capital) and
intangible assets (human/intellectual capital). In particular, the model explained the relative impact
of each of these factors on enterprise value. The model was found to be significant and
explained 53.8% of the variation in enterprise value. The variable with highest effect on
enterprise value was working capital, followed by employee cost and EBITDA. Fixed
assets were found not to have a significant effect on enterprise value, though they were
significant in conjunction with working capital and “human capital”.

Jeffery M. Bacidore et al. (1997) selected 600 of the 1,000 firms in the database and
proceeded to calculate each firm's EVA, REVA, total shareholder return, and risk-
adjusted abnormal return for each year. They concluded that EVA did well in terms of
its correlation with risk-adjusted abnormal return, their measure of shareholder value creation.

Zarowin (1990) examined several determinants of earnings/price (E/P) ratios
and showed that long-term growth was very important in determining E/P ratios, while
short-term growth and risk were relatively less important. Liu and Ziebart (1994) also
examined the cross-sectional variability in E/P ratios and found a significant relationship
between E/P ratios and growth, dividend payout, and size. They did not find a
significant relationship between E/P and systematic risk. Ohlson’s (1995) model showed
that the M/B ratio was a function of the firm's abnormal earnings generating power and
thus reflected the firm’s growth potential.

2.7 Abnormal returns

Measures of shareholder wealth creation focus on the firm's stock price performance and
seek to determine how much shareholders increased their wealth from one period to the
next based on the dividends they receive and the appreciation in the firm's stock price.
Essentially, such trading-based performance measures assess how well an investor would
have done if he or she had purchased a share of stock at the beginning of the period and sold
it at the end. This type of measure of shareholder wealth creation is called a trading-based
measure of performance.
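The trading-based measure described above reduces to a one-period holding return; the sketch below is purely illustrative, with hypothetical prices and dividends:

```python
def holding_period_return(p_begin, p_end, dividends):
    """Total shareholder return over one period: price appreciation
    plus dividends received, relative to the opening price."""
    return (p_end - p_begin + dividends) / p_begin

# Example: a share bought at 100, sold at 112, paying 3 in dividends
r = holding_period_return(100.0, 112.0, 3.0)
print(round(r, 4))  # 0.15, i.e. a 15% total shareholder return
```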

Accounting information was relevant if it had the capability to affect or change
decision-makers' expectations of value. One way to measure the value relevance of
financial statements was to examine the reaction of market price or volume to
accounting number announcements, and the most common measure was the total
return to be earned from pre-disclosure knowledge of financial statement information
(Høegh-Krohn et al., 2000). The use of stock market data to measure value
relevance was helpful in showing investors' actual reactions to a particular action
(published financial statements). However, there should be a trade-off between
relevance and verifiability, as concentrating solely on relevance resulted in financial
number games (Allen & Richard Herring, 2001).

The literature on long-horizon security price performance following corporate
events is summarized extensively in many studies, including Fama (1998), Kothari and
Warner (1997), and Kothari (2001). Two main methods for assessing and calibrating post-
event risk-adjusted performance are used: the characteristic-based matching approach and
Jensen's alpha approach, which is also known as the calendar-time portfolio approach.
More recently, considerable accumulated evidence suggests the importance of a liquidity
factor in determining expected returns (Brennan and Subrahmanyam, 1996; Mitchell
and Stafford, 2000).

Francis and Schipper (1999) examined whether ‘financial statements had lost their
relevance’. They documented an increase in the relevance of the balance sheet and book value
and a decrease in the relevance of the income statement (Francis and Schipper, 1999; Collins et
al., 1997). The empirical evidence also showed that the value-relevance of accounting numbers
had decreased with time, and particularly that reported earnings were less effective in
explaining share price variations than before (Rayburn, 1986; Beaver et al., 1980). In
another cross-border value relevance study, Alford et al. (1993) found that earnings
prepared under UK GAAP were timelier or more value-relevant than earnings
prepared under US GAAP.

Javad Moradi (2013) in his study investigated the convergence of prices to
the fundamental value of stocks to deepen investors' insight into market
mechanisms. The research checked the value relevance of accounting and financial
reporting information and addressed the usefulness of the Value/Price (V/P) ratio as a
good predictor of stock returns which can be exploited by analysts. The findings
supported the prediction ability of the V/P ratio with respect to long-run normal and
abnormal returns. Also, the results showed return accumulation in the price-convergence
subgroup of the sample; the primary source of this abnormal return was the convergence
of prices to the fundamental values of stocks. The correlation of accounting and
financial reporting information with market prices was evident.

The standard valuation approach indicated that stock prices should be driven by
information that signalled future fundamental values. A large body of evidence
suggested that measures of the fundamental value to market value ratio were associated
strongly with stock prices and returns (Frankel and Lee, 1998; Fama and French, 1992,
1995; Abarbanell and Bushee, 1997; Dechow, Hutton and Sloan, 1999). These studies
propagated that investors could earn abnormal returns by trading on various signals of
fundamental information, as the market failed to fully incorporate a firm's fundamental
value based on historical financial data into prices in a timely manner (Lo and Lys, 2000).
The V/P ratio was shown to be better than alternative valuation multiples in explaining
stock returns (Esterer, 2011). Some other researchers repeated this analysis, but used
analysts' forecasts to predict future earnings (Frankel and Lee, 1998). More recently, some
researchers presented a similar concept of the value-to-book ratio (V/B). They proposed a
decomposition of the V/B ratio into an industry V/B and a firm V/B ratio, and proved that
these multiples improved the predictability of the ratio compared to the V/P of Frankel and
Lee (Johnson and Xie, 1998). The Value/Price (V/P) ratio has been advocated as an
alternative to traditional valuation multiples. This quotient of fundamental firm value and
market capitalization has been applied to predict returns of individual assets (Frankel and
Lee, 1998) or entire markets (Lee et al., 1999). Previous studies compared the ability of the
V/P ratio with other multiples, such as the price-to-earnings ratio or the price-to-book ratio,
with a focus on market returns and the use of macroeconomic control variables, and
supported the V/P ratio's ability to predict aggregate Dow Jones returns (Lee et al., 1999).
Researchers in the field concluded that return value resulted from the V/P strategy
application, but they did not unanimously accept the interpretation of the findings on the
prediction ability of the V/P ratio. Generally, there are three types of interpretation:

– Explanations by unidentified risk factors (Fama and French, 1992);
– Defects in research design (Barber and Lyon, 1997);
– Temporary mispricing (Kim, Lee and Tiras, 2009).
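The quintile-sorting logic behind these V/P studies can be illustrated with a minimal sketch; the tickers and V/P values below are hypothetical, and no claim is made about any particular study's exact procedure:

```python
def vp_quintile_portfolios(stocks):
    """Rank stocks by their V/P ratio (estimated fundamental value divided
    by market price) and split them into five quintiles, lowest V/P first.
    `stocks` is a list of (ticker, v_over_p) pairs."""
    ranked = sorted(stocks, key=lambda s: s[1])
    n = len(ranked)
    return [ranked[i * n // 5:(i + 1) * n // 5] for i in range(5)]

# Hypothetical cross-section of ten stocks
sample = [("A", 0.6), ("B", 1.4), ("C", 0.9), ("D", 1.1), ("E", 0.8),
          ("F", 1.3), ("G", 0.7), ("H", 1.0), ("I", 1.2), ("J", 0.5)]
quintiles = vp_quintile_portfolios(sample)
# A Frankel-and-Lee-style strategy would buy the top quintile (highest V/P,
# most "undervalued") and short the bottom quintile (most "overvalued")
print([t for t, _ in quintiles[4]], [t for t, _ in quintiles[0]])  # ['F', 'B'] ['J', 'A']
```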

Emily Xu (2002) in her paper assessed the incremental contribution of each of
two firm-specific anomalies that prior studies have shown to be associated with
subsequent abnormal securities returns. The first anomaly is the ratio of one-year-ahead
analysts' early-in-the-year forecasts of future earnings to share prices (FY1/P), and the
second is the ratio of implicit values of firms to trading values of firms, hereafter V/P.
The empirical results in the paper indicated that for firms with low analyst coverage,
FY1/P provided value-relevant information only for short investment horizons.
Moreover, the evidence suggested that the residual income model had price-prediction
ability, yet only for a long investment period, that is, a three-year investment horizon.
Therefore, Frankel and Lee's (1998) conclusion that the relation of delayed returns to
V/P was due to the prediction ability of the residual income model was valid, but only
for the long term.

Elgers et al., (2001) had shown the relation of one-year analysts’ forecasts to
subsequent securities returns. Among all the constituent variables of V/P, there were
four other variables that had been documented as associated with subsequent securities
returns: analysts' long-term growth rate forecasts, earnings-to-price ratio, book value-
to-price ratio, and dividends-to-price ratio. In short, the prediction ability of the residual
income model was not convincingly demonstrated by the results reported in Frankel and
Lee (1998).

The study by Florian Esterer (2012) demonstrated that analysts helped to predict
the cross-section of stock returns across international capital markets. Analyst estimates
were captured by the implied cost of capital (ICC) and the value-to-price (V/P) ratio,
two risk and valuation measures derived from analyst forecasts. The study revealed that
both measures explained the cross-section of stock returns beyond the risk-return trade-
off implied by standard asset pricing models. However, the predictive power of ICC and
V/P ratio varied across countries. Analysts were most valuable in France, Japan and the
United States, whereas they were only marginally useful in Germany and Italy.

The Dividend Discount Model (DDM) was the simplest model used in equity
valuation. There are contradictory opinions about the impact of dividends on valuation.
One school of thought, represented by Modigliani and Miller, held that dividends were
irrelevant to value, while others, such as Walter's model and the Gordon model, believed
that dividends were value relevant. Despite the Modigliani-Miller (1961) irrelevance
proposition, many researchers had claimed the contribution of dividends in valuation.
Black & Scholes (1974) found that a change in dividend policy affected the stock price,
which was a function of dividend policy. Also, Fisher (1961) described that dividend and
profit had a similar effect on share prices, and that there was a link between dividend per
share, profit per share and dividend growth per share with the prevailing share price.
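The Gordon growth model referred to above can be expressed in a few lines; the dividend, cost of equity and growth figures below are illustrative only:

```python
def gordon_value(d1, r, g):
    """Gordon growth model: value of a share equals next year's expected
    dividend d1, discounted at the cost of equity r with perpetual
    dividend growth g (requires r > g)."""
    if r <= g:
        raise ValueError("cost of equity must exceed the growth rate")
    return d1 / (r - g)

# A dividend of 2.00 expected next year, 10% cost of equity, 4% growth
print(round(gordon_value(2.0, 0.10, 0.04), 2))  # 33.33
```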

The Free Cash Flow to the Firm (FCFF) valuation method was the entity
perspective approach of the DCF. Unlike the residual income valuation model, where a
major portion of the intrinsic value is embedded within the observable accounting book
value, the cash flow-based models gave inaccurate intrinsic values because they depended
upon uncertain future forecasts (Vardavaki et al., 2007). Penman (2001) indicated that
free cash flow does not consider accrual accounting and ignores value addition in the
short term; thus the value gained cannot be matched with the value lost. Similarly, due to
mandatory capital expenditures, the cash flow of a growing firm may be negative, which
may result in a negative intrinsic value of equity. In theory, the discounted cash flow
model, in parallel with other accounting flow-based valuation models, provided the same
intrinsic value. Copeland et al. (2000) stated that the option to use different models was ‘‘driven
by the instincts of the user’’. The cash flow-based valuation model was popular among
practitioners, investors and academia. Those who favoured cash flow-based valuation
commonly perceived that operating cash flows were better than earnings at explaining
equity valuations. Earnings were held to be the more representative value driver because
earnings reflect value changes regardless of when the cash flows occur. Dechow
(1994) documented a strong relationship between earnings and stock returns in the short
term, while in the long run the association with realized cash flows gradually improved.
She also considered current cash flow as a base for future earnings and cash flow. Her
findings indicated that earnings were superior to cash flow because of accruals; therefore,
future earnings and cash flow could be forecasted on the basis of current earnings rather
than cash flow.

Residual income was also known as abnormal earnings, or economic value
added (EVA). In addition to its prominent role in equity valuation, the residual income
valuation model was used as a measure of performance (Ohlson, 1995). Although the
residual income valuation model was the most preferred model in academia, it has some
implementation barriers. In research by Demirakos et al., (2004), analysts chose either
the discounted cash flow model or the P/E model. The findings of Demirakos et al.,
(2004) gave an indication of the practicalities of the residual income valuation model.
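A minimal sketch may clarify the residual income calculation. It assumes clean-surplus accounting and full retention of earnings over the forecast horizon (a deliberate simplification), and all inputs are hypothetical:

```python
def residual_income_value(book0, roes, r):
    """Residual income model: intrinsic value = current book value plus
    the present value of abnormal earnings, (ROE - r) * opening book value,
    over the forecast horizon. Assumes clean-surplus accounting and full
    earnings retention (illustrative simplification; no terminal value)."""
    value = book0
    book = book0
    for t, roe in enumerate(roes, start=1):
        abnormal = (roe - r) * book       # earnings above the required return
        value += abnormal / (1 + r) ** t  # discount at the cost of equity
        book *= 1 + roe                   # retained earnings grow book value
    return value

# Book value 100, three years of 15% ROE, 10% cost of equity
print(round(residual_income_value(100.0, [0.15, 0.15, 0.15], 0.10), 2))  # 114.27
```

Note that when ROE equals the cost of equity, no abnormal earnings arise and the intrinsic value collapses to the current book value.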

Fama and French (1992) examined the relationship between betas and returns
between 1963 and 1990 and concluded that there was no relationship. These results had
been contested on three fronts. First, Amihud, Christensen, and Mendelson (1992) used
the same data, performed different statistical tests, and showed that differences in betas
did in fact explain differences in returns during the time period. Second, Kothari and
Shanken (1995) estimated betas using annual data instead of the shorter intervals used in
many tests, and concluded that betas explained a significant proportion of the
differences in returns across investments. Third, Chan and Lakonishok (2001) looked at
a much longer time series of returns from 1926 to 1991 and found that the positive
relationship between betas and returns broke down only in the period after 1982. They
also found that betas were a useful guide to risk in extreme market conditions, with the
riskiest firms (the 10 percent with highest betas) performing far worse than the market
as a whole in the 10 worst months for the market between 1926 and 1991.

The study of Jeffery M. Bacidore et al., (1997) used comprehensive statistical
analysis of both REVA and EVA to estimate their correlation with, and their
ability to predict, shareholder value creation. The results indicated that the proportion of
positive REVA that corresponded to positive abnormal returns was significantly higher
than the same proportion for EVA. Thus, although EVA on its own predicted abnormal
returns fairly well, REVA performed significantly better. This finding was important
because senior management seek a performance measure with the greatest ability to
predict accurately directional changes in shareholder wealth. EVA was significantly
related to abnormal returns.

The study of Gary C. Biddle, Robert M. Bowen & James S. Wallace (1998) tested
assertions that Economic Value Added (EVA®) was more highly associated with stock
returns and firm values than accrual earnings, and evaluated which components of EVA,
if any, contributed to these associations. Relative information content tests revealed
earnings to be more highly associated with returns and firm values than EVA, residual
income, or cash flow from operations. Incremental tests suggested that EVA components
added only marginally to information content beyond earnings. Considered together, these
results do not support claims that EVA dominates earnings in relative information
content, and suggest rather that earnings generally outperform EVA.

In recent years, academic research provided empirical evidence on the relative
superiority of cash flow versus earnings-based valuation techniques. Dechow (1994)
found that stock returns were more highly associated with earnings than with cash flow.
Similarly, Penman and Sougiannis (1998) documented that earnings valuation techniques
consistently outperformed cash flow valuation techniques over alternative forecasting
horizons. In another study, however, Black (1998) found that the relative superiority of
earnings versus cash flow exists only for companies in mature life cycle stages; in the
start-up, growth and declining stages, operating cash flows were more value relevant.
Furthermore, Biddle, Seow and Siegel (1995) showed that the relative superiority
differed from industry to industry, without however differentiating the life cycle of the
industries.

A study by Sloan (1996) brought another interesting insight. He found that when
the market considered earnings, it made a cognitive error in relation to the two types of
information contained in earnings – accrual earnings and cash flows. He revealed that
investors systematically overreacted to accrual earnings, despite their lower persistence
than cash earnings. Sloan captured the mispricing with a trading strategy that held a long
position in low-accrual firms and a short position in high-accrual firms. This simple
strategy yielded an average annual excess return of more than 10% and generated positive
returns in 28 of the 30 years in the sample. His results were later confirmed by Houge and
Loughran (2000) and Xie (2001). These studies indicated that firms with large accrual
earnings had lower subsequent returns. Investors focussed too much on earnings and did
not adequately consider the temporary accrual components of those earnings. Block
(1999) provided evidence that earnings fixation was persistent throughout the financial
community. His survey revealed that financial analysts ranked earnings as a more important
valuation tool than cash flows. Because the market anchors on earnings, investors
consistently underestimate the transitory nature of accruals and the long-term persistence
of cash flows. This mistake could be avoided by focusing on cash flow rather than earnings.
Furthermore, Gentry et al. (2002) held that all individual components of free cash flow
were significantly related to capital gains and hence were value relevant.

Frankel and Lee (1998) tested the residual income model of Ohlson (1995)
operationalised with analysts' earnings forecasts. They found that the model predicted
abnormal returns over one-, two-, and three-year holding periods. Specifically, a
portfolio constructed by taking a long position in the most undervalued quintile of firms
and a short position in firms in the most overvalued quintile produced cumulative
returns of 3.1%, 15.2%, and 30.6% over one-, two-, and three-year holding periods. The
results of Frankel and Lee were improved further by using more refined model
estimation procedures. Bradshaw (2000) and Ali, Hwang and Trombley (2003)
confirmed these results. In a similar study, Frankel and Lee (1999) found that the
residual income model applied internationally produced abnormal returns in a
cross-country investment strategy.

Dechow, Hutton and Sloan (1999) implemented the RIM using a number of
different time series models for predicting future ROEs, as opposed to the analyst-based
forecasts in Frankel and Lee (1998). Despite this, they still found that under- (over-)
valued stocks as identified by the model earned higher (lower) future returns, particularly
over horizons of 3 to 5 years. In different variations, Lee, Myers and Swaminathan
(1999) also found profitable trading strategies based on comparing stock prices to
intrinsic values from residual income models.

Pascal S. Froidevaux (2004) had taken a behavioural view on the process of
common stock valuation. The main goal was to value common stocks using a
sophisticated discounted cash flow (DCF) valuation model. They built the model and
estimated its inputs by replicating possible investor behaviour in valuing stocks in the
stock market, and consequently used a mix of different methods to determine cash flow
growth, the growth duration and the discount rate. They tested the model's ability to
differentiate between under- and overvalued stocks in the US market over the ten-year
period from 1993-2002. The results of the approach were very promising: an investment
strategy buying undervalued stocks as identified by the model yielded an annual return
of 27.57% over the ten-year testing period, compared to a benchmark return of 19.47%
and a return of only 6.26% for a portfolio of overvalued stocks as identified by the
model. They concluded that a complex discounted cash flow valuation model identified
and exploited systematic mispricing in the stock market.

Sanjay Basu (1982) examined the empirical relationship between earnings' yield,
firm size and returns on the common stock of NYSE firms. The results showed that the
E/P effect was sufficiently weak for larger than average NYSE firms that from a stochastic
viewpoint it either was not significant or, at best, was marginally significant. In addition, the
empirical findings indicated that the E/P anomaly could not be attributed to earnings
information effects and, as such, attest to the descriptive validity of Ball's hypothesis that
the E/P anomaly probably implies a misspecification of the equilibrium pricing model rather
than capital market inefficiency. The findings suggested that the effect of earnings' yield and
size on expected returns was substantially more complicated than previously documented in
the literature. While neither E/P nor size can be considered to cause expected returns, the
evidence lends credence to the view that, most likely, both variables were just proxies for
more fundamental determinants of expected returns for common stocks. Similarly, Banz
(1981) showed that the common stock of small NYSE firms earned higher risk-adjusted
returns, on average, than the common stock of large NYSE firms.

2.8 Key components in valuation

The following section lists the key components of valuation and empirical research
conducted on them.

Free Cash Flow (FCF)

Free cash flow is the operating cash flow generated by operations, after tax, without
taking into account borrowing (financial debt). It would be the cash flow available to
shareholders if the company had no debt: there would be no financial expenses, and the
money could be distributed after covering fixed asset investment and working capital
requirements. In order to calculate future free cash flows, there should be a forecast of
the cash received and paid in each period. This is the basic approach used to draw up a
cash budget. However, in company valuations, this task requires forecasting cash flows
further ahead in time than is normally done in any cash budget. The FCF cannot be
calculated directly from a company's reported financial statements; thus it is necessary to
reformulate these statements so as to identify a company's operating items, non-operating
items and financial structure (Koller et al., 2005). The limitations of free cash flow
include its inability to identify value created that does not involve cash flows, the fact
that it treats investments as a loss of value, and the option to increase the FCF by, for
example, investing less (Penman, 2010).

The cash flows used were free cash flows after considering the reinvestment
needs of the firm and non-cash charges. Free cash flow is “the difference
between cash flow from operations and cash investment in operations” (Penman, 2010).
It is the cash flow that is available to investors after investments in fixed assets and
working capital (Brealey and Myers et al., 2007). FCF is also independent of leverage
(Koller et al., 2005) and it determines a company's capability to pay off its debt and
equity claims (Penman, 2010). Additionally, FCF is a good indicator of the company's
ability to generate cash and therefore also profit. A negative FCF does not mean that the
company's operations are unprofitable; it may be a sign that the company is growing
fast and therefore making large investments. Fast growth is good for the company as
long as it earns more than the cost of capital on its investments (Brealey and Myers et
al., 2007).

The importance of cash flow was stressed in Imam et al., (2008), where analysts
ranked FCF first or second when ranking accounting variables in order of importance.
FCF is the cash flow that is available to be distributed among the security holders of an
organisation, including equity holders, debt holders, preference stock holders and so on.
Similarly, due to mandatory capital expenditures, the cash flow of a growing firm may
be negative, which may result in a negative intrinsic value of equity.
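One common FCFF formulation consistent with the description above, after-tax operating profit plus non-cash charges less reinvestment, can be sketched as follows; all figures are hypothetical:

```python
def free_cash_flow_to_firm(ebit, tax_rate, depreciation, capex, delta_wc):
    """A common FCFF formulation: after-tax operating profit (EBIT * (1 - t))
    plus non-cash charges, less capital expenditure and the increase in
    working capital."""
    return ebit * (1 - tax_rate) + depreciation - capex - delta_wc

# EBIT 500, 30% tax, 80 depreciation, 120 capex, 40 working-capital increase
print(round(free_cash_flow_to_firm(500.0, 0.30, 80.0, 120.0, 40.0), 2))  # 270.0
```

A firm investing heavily (capex well above depreciation) can show a negative FCFF even while operating profitably, which is the point made in the text above.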

Forecasts

Dechow (1994) documented a strong relationship between earnings and stock returns in
the short term, while in the long run the association with realized cash flows gradually
improved. She considered current cash flow a base for future earnings and cash flow. Her
findings showed that earnings were superior to cash flow because of accruals. Some key
factors that can be compared are return on invested capital, sales growth, ROE and free
cash flow. Understanding how these drivers behaved in the past will help to make more
reliable estimates of future cash flows (Koller, Goedhart and Wessels, 2005).

According to Bernstein (1996), forecasts are one of the most important inputs
managers develop to aid them in the decision-making process. Virtually every important
operating decision depends to some extent on a forecast (Hanke, 2001). Two articles are
relevant in the context of forecasts for firm valuation: Elliot and Lidroff (1972) and
Elliot (1972). These studies discussed the feasibility of using econometric models to
predict a company's sales. Predicting the future always involves risk, but scenario
analysis, sensitivity analysis, decision trees and simulation can help to analyse the
uncertainty related to the valuation results as well as to ensure that the assumptions used
are realistic (Damodaran, 2007). There are different methods to forecast sales. Authors
who have researched sales forecasting include Copeland et al. (2001), Cooley (1994)
and Abram (2001).

The following table gives an overview of forecasting methods:

Table 2.1 Overview of forecasting methods

Qualitative (Subjective forecasting) | Quantitative (Objective forecasting)
                                     | Time series analysis    | Causal methods
Executive opinion                    | Moving average          | Leading indicators
Sales force composite                | Exponential smoothing   | Econometric forecasting
Buyer or consumer survey             | Mathematical models     | Regression models
Delphi method                        |                         |

Source: Kinnear and Taylor (1996) and Nahmias (2001)

Time series techniques are the most popular quantitative methods; the two major
types are moving averages and exponential smoothing. The causal method, which
considers a number of explanatory variables, is more powerful than the time series
methods. Karl O Olsson, Jones Ribbing and Madeleine Warner suggested that DCF
valuation improved with the use of the causal methods approach through multiple
regression models.
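The two time series techniques named in Table 2.1 can be sketched in a few lines; the sales series below is illustrative:

```python
def moving_average_forecast(series, window):
    """Forecast the next period as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def exponential_smoothing_forecast(series, alpha):
    """Simple exponential smoothing: each new observation updates the level
    by a fraction alpha; the final level is the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

sales = [100.0, 104.0, 103.0, 108.0, 110.0]  # hypothetical sales history
print(moving_average_forecast(sales, 3))                  # 107.0
print(exponential_smoothing_forecast(sales, 0.5))         # 107.625
```

A larger alpha weights recent observations more heavily, so the smoothed forecast reacts faster to changes in the level of the series.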

Evidence from the earnings forecasting literature provided some insight into the
forecast accuracy of each method. Many researchers (Collins and Hopwood, 1980; Fried
and Givoly, 1982) examined the analysts’ ability to forecast earnings compared to
mechanical extrapolation models. The results of their studies confirmed that analysts do
better in forecasting earnings, at least over the short term. They found that analysts
outperform time series models for one-quarter-ahead and two-quarter-ahead forecasts, do
as well for three-quarter-ahead forecasts, and do worse than the time series models for
four-quarter-ahead forecasts. Thus, the advantage gained by analysts from information
sources other than financial statements seemed to deteriorate as the time horizon for
forecasting was extended.

Capital Asset Pricing Model (CAPM)

The Capital Asset Pricing Model, which was introduced by William F. Sharpe (1964),
John Lintner (1965) and Black (1972) based on Harry Markowitz's (1952) portfolio
theory, explains the relationship between risk and expected return. Under the CAPM,
the expected return is the risk-free rate of a security plus a risk premium, where the risk
premium equals the beta of the security times the market risk premium (Kuerschner,
2008). According to the CAPM, the risk-free rate and the market risk premium are the
same for all companies; it is only beta that differs for each company (Koller et al.,
2005). The CAPM was developed as a method to evaluate market risk (Mukherji, 2011;
Hillier et al., 2008).
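The CAPM relationship described above reduces to a one-line calculation; the inputs below are illustrative:

```python
def capm_expected_return(risk_free, beta, market_premium):
    """CAPM: expected return = risk-free rate + beta * market risk premium."""
    return risk_free + beta * market_premium

# 5% risk-free rate, beta of 1.2, 6% market risk premium
print(round(capm_expected_return(0.05, 1.2, 0.06), 4))  # 0.122
```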

Cost of Capital

The Discounted Cash Flow (DCF) model depends on two inputs: the numerator, which
is an estimated future cash flow, and the denominator, i.e. the discount rate (weighted
average cost of capital). Calculation of the denominator was the major concern of some
scientific reports (Bohlin, 1995) as well as the topic of large discussions in financial
texts (Copeland, 2001; Perrakis, 1991; Ross, 1991). Cost of capital is an invisible
indicator of good or bad corporate performance (G. Bennett Stewart, 1991). Stewart
described it as the opportunity cost for investors to invest in the firm in terms of time
and money (Stewart, 1991). If the cost of capital does not compensate investors for the
money and time they commit, they will invest somewhere else. Mathematically, it is
neither a cost nor a required return but a weighted average of the cost of equity and the
cost of debt (G. Bennett Stewart, 1991). Koller et al., (2005) suggested determining a
firm's capital structure and then forming some expectations about the future.

The cash flows were discounted at a rate that reflects their risk (Damodaran,
2010). The weighted average cost of capital can be calculated with book value or market
value weights; the calculation using market value weights is a better representation of
reality. The Capital Asset Pricing Model (CAPM) was developed as a method to
evaluate market risk (Mukherji, 2011; Hillier et al., 2008).

The WACC is calculated by weighting the cost of debt (Kd) and the cost of
equity (Ke) with respect to the company's financial structure. The cost of debt is the
rate that a company pays to borrow money (Damodaran, 2009). There are three
factors needed to calculate the cost of debt: the risk-free rate, the default spread and the
tax rate (Damodaran, 2010). To compensate for default risk, lenders add a default or
credit spread to the risk-free rate (Damodaran, 2010). A credit spread is the difference
between the risk-free rate and the interest rate that a company pays to borrow money
(Steiger, 2010).

The second factor, the default spread, can be determined in three ways, chosen
depending on the company to be evaluated: a) if a company has outstanding bonds, the
cost of debt is calculated by applying the current market interest rate (yield to maturity,
YTM) on the company's long-term bonds; b) if a firm has bond ratings from rating
agencies such as Moody's or Standard and Poor's (S&P), the default spread can be
determined based on the ratings (Steiger, 2010); and c) if the firm is not rated, a
synthetic rating can be made based on the firm's interest coverage ratio (EBIT/interest
expense) (Damodaran, 2009).

The last factor in determining the cost of debt is the tax rate. Interest payments on
debt are subtracted from income before tax is determined, thus taking on debt can act
as a tax shield (Brealey & Myers et al., 2007). When valuing the company as a whole
(debt plus equity), the required return to debt and the required return to equity, in the
proportions in which they finance the company, have to be considered.
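Combining the pieces above, a WACC sketch with market-value weights and an after-tax cost of debt; all inputs are hypothetical:

```python
def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt, tax_rate):
    """Weighted average cost of capital with market-value weights;
    the cost of debt is taken after tax to reflect the interest tax shield."""
    total = equity_value + debt_value
    w_e, w_d = equity_value / total, debt_value / total
    return w_e * cost_of_equity + w_d * cost_of_debt * (1 - tax_rate)

# 600 equity, 400 debt, 12% cost of equity, 7% cost of debt, 30% tax rate
print(round(wacc(600.0, 400.0, 0.12, 0.07, 0.30), 4))  # 0.0916
```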

The cost of equity is estimated using asset pricing models that determine the expected
rate of return on a company's stock. There are three main asset pricing models: the
capital asset pricing model (CAPM), the Fama and French three-factor model and the
Arbitrage Pricing Theory (APT). The main difference between these models lies in how
they identify a stock's risk. The CAPM, the most widely used model, states that a
stock's risk depends on its sensitivity to the stock market. The Fama and French model
claims that a stock's risk depends on its sensitivity to three factors: the stock
market, a portfolio based on firm size and a portfolio based on book-to-market ratios.
The APT extends the Fama and French model, arguing that a security's risk depends on
even more factors (Koller et al., 2005; Bartholdy and Paula, 2003).
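The CAPM estimate of the cost of equity reduces to a one-line formula; the inputs below are illustrative.

```python
def capm_cost_of_equity(risk_free, beta, equity_risk_premium):
    # Ke = Rf + beta * (E[Rm] - Rf): the stock earns the risk-free rate
    # plus a premium scaled by its sensitivity to the market.
    return risk_free + beta * equity_risk_premium

ke = capm_cost_of_equity(risk_free=0.04, beta=1.2, equity_risk_premium=0.06)
# 0.04 + 1.2 * 0.06 = 0.112
```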

Beta

Beta measures the risk of a share. If the company has debt, the incremental risk
arising from the leverage must be added to the intrinsic systematic risk of the
company's business, giving the levered beta. Given values for the equity beta, the
risk-free rate and the market risk premium, it is possible to calculate the required
return on equity.

The CAPM uses beta, which represents the volatility of the stock's return relative to
the market return. The advantage of beta is that it is the most widely used measure of
asset riskiness. Shalim & Yitzhaki (2002) note that it provides a quantifiable way of
evaluating the required rate of return on a risky investment and helps investors
identify attractive stocks according to their risk preferences. Beta is a useful
standard for discussing market efficiency and for evaluating stock performance against
the market (Liang, 2006). Beta is a central input to the CAPM, and the usefulness of
the CAPM depends on the accuracy of beta. Beta measures systematic risk; firm-specific
(unsystematic) risk is not captured by beta. Damodaran (2011) suggested the bottom-up
beta: the average beta of similar firms in the industry, adjusted for differences in
financial leverage.
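A minimal sketch of the bottom-up beta, using the standard unlever/relever adjustment for financial leverage (the Hamada relation); the peer betas, D/E ratios and tax rate are illustrative.

```python
def unlever_beta(levered_beta, tax_rate, debt_to_equity):
    # Strip out the leverage effect: beta_U = beta_L / (1 + (1 - t) * D/E)
    return levered_beta / (1.0 + (1.0 - tax_rate) * debt_to_equity)

def relever_beta(unlevered_beta, tax_rate, debt_to_equity):
    # Reapply leverage at the subject firm's own capital structure.
    return unlevered_beta * (1.0 + (1.0 - tax_rate) * debt_to_equity)

# Bottom-up: average the unlevered betas of comparable firms,
# then relever at the subject firm's own D/E ratio.
peers = [(1.10, 0.50), (1.30, 1.00), (0.95, 0.25)]  # (levered beta, D/E), illustrative
tax = 0.30
avg_unlevered = sum(unlever_beta(b, tax, de) for b, de in peers) / len(peers)
firm_beta = relever_beta(avg_unlevered, tax, debt_to_equity=0.60)
```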

Non cash working capital

The difference between current assets and current liabilities is net working capital.
Consequently, net working capital is considered one of the classic metrics of a firm's
operating liquidity (Bhattacharyya, 2007). The study uses non-cash working capital,
defined as inventory plus accounts receivable less accounts payable. Any investment in
working capital ties up cash, so increases (decreases) in working capital reduce
(increase) cash inflows in that period. To calculate the FCFF, after-tax EBIT plus
depreciation, less the change in net working capital and capital expenditure, is
determined (Mills, Bible and Mason, 2002).
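The FCFF construction described above can be sketched as follows; all figures are illustrative.

```python
def noncash_working_capital(inventory, receivables, payables):
    # Non-cash working capital: inventory + receivables - payables
    # (cash and short-term debt are excluded).
    return inventory + receivables - payables

def fcff(ebit, tax_rate, depreciation, capex, change_in_noncash_wc):
    # FCFF = EBIT * (1 - t) + depreciation - capex - increase in non-cash WC
    return ebit * (1.0 - tax_rate) + depreciation - capex - change_in_noncash_wc

wc_now = noncash_working_capital(inventory=120.0, receivables=80.0, payables=60.0)
wc_prev = noncash_working_capital(inventory=100.0, receivables=70.0, payables=50.0)
cash_flow = fcff(ebit=500.0, tax_rate=0.30, depreciation=50.0,
                 capex=80.0, change_in_noncash_wc=wc_now - wc_prev)
# 500 * 0.7 + 50 - 80 - 20 = 300
```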

Capital Expenditures

The equity shareholders cannot withdraw the entire cash from operations from the firm,
since some or all of it has to be reinvested to maintain the existing assets. Firms in
the high-growth phase generally have higher capital expenditures than firms in the
stable-growth phase.

Equity Risk Premium

According to Damodaran (2002), the equity risk premium is the sum of the base premium
for a mature equity market and the country premium.

To calculate the base premium for a mature market, Damodaran used the Treasury bond
rate, consistent with his choice of risk-free rate, and geometric averages to obtain a
risk premium suited to longer-term expected returns. After selecting a mature-market
base premium, the country premium is estimated. Damodaran reviewed three main
approaches for estimating the country premium: default risk spreads, default spreads
scaled by relative standard deviations, and relative volatility. The country risk
measure captured in the default spread is an intermediate step towards estimating the
risk premium to be used in a risk model.

Country Risk Premium

The term country risk is often used when cross-border investments are considered,
observed and tested from the foreign investor's perspective. The country risk for a
given country is therefore the unique risk faced by foreign investors when investing in
that specific country, as compared with the alternative of investing in other countries
(Nordal, 2001).
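Two of the country-premium approaches mentioned above can be sketched briefly: taking the sovereign default spread directly, or scaling it by the volatility of the country's equity market relative to its sovereign bond market. All inputs are illustrative.

```python
def crp_default_spread(sovereign_default_spread):
    # Simplest approach: the country risk premium equals the
    # sovereign default spread.
    return sovereign_default_spread

def crp_relative_volatility(sovereign_default_spread, sigma_equity, sigma_bond):
    # Scale the default spread by the relative volatility of the
    # country's equity market versus its sovereign bond market.
    return sovereign_default_spread * (sigma_equity / sigma_bond)

crp = crp_relative_volatility(sovereign_default_spread=0.02,
                              sigma_equity=0.30, sigma_bond=0.15)
# 0.02 * (0.30 / 0.15) = 0.04
```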

Growth Rate

Prior studies such as Penman and Sougiannis (1998) and Francis et al. (2000) used a
constant growth rate of 4% across all sample firms. They noted that it is important to
examine whether firm-specific growth rates would be more consistent with the valuation
of individual firms in practice; such rates yield less biased and more accurate
valuations than constant growth rates.

Stable Period

The firm cannot be valued explicitly indefinitely into the future. Hence the terminal
value (continuing value) is calculated as the present value, at a future point in time,
of all subsequent cash flows, under the assumption that the firm has reached its period
of stable growth.

Terminal Value

Terminal Value (or Continuing Value) was the concept applied to all kinds of valuation
methods. However it is impossible to know an exact value of the asset over an infinite
time period, under the assumption that the asset in the future will have a steady growth
or conditions. Several researchers claim that continuing value calculations account for

58
more than half of the firm’s value and that a small change in the growth rate leads to a
major change in firm value. (Brealey and Myers et al., 2007; Steiger 2010) The large
impact of the continuing value was due to the cash flows in the explicit forecast period
caused by investments that are used to generate cash flows after the explicit forecast
period. (Copeland et al.,., 2000)
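Under the stable-growth assumption, the terminal value at the end of the explicit period follows the Gordon growth (perpetuity) formula; the inputs below are illustrative.

```python
def terminal_value(fcf_final_year, wacc, stable_growth):
    # Gordon growth: TV = FCF_{n+1} / (WACC - g),
    # where FCF_{n+1} = FCF_n * (1 + g).
    if stable_growth >= wacc:
        raise ValueError("stable growth must be below the discount rate")
    return fcf_final_year * (1.0 + stable_growth) / (wacc - stable_growth)

tv = terminal_value(fcf_final_year=100.0, wacc=0.10, stable_growth=0.04)
# 100 * 1.04 / 0.06 ≈ 1733.33
```

Note how sensitive the result is: raising the growth assumption from 4% to 5% changes the denominator from 0.06 to 0.05, lifting the terminal value by over 20%, which illustrates the researchers' point above.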

Explicit Period

Cash flows can be forecast for an explicit period of five to fifteen years. Based on
the findings of Dechow (2001), the minimum growth duration was fixed at five years and
the maximum at fifteen years. It is necessary to make assumptions about the company's
future (Steiger, 2010). FCF was forecast by applying the historic growth rate to the
FCF of the base year, where the growth rate is the CAGR of FCF over the preceding
three years.
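The explicit-period forecast described above can be sketched as follows; the FCF figures are illustrative.

```python
def cagr(first, last, years):
    # Compound annual growth rate over `years` periods.
    return (last / first) ** (1.0 / years) - 1.0

def forecast_fcf(base_fcf, growth_rate, horizon):
    # Grow base-year FCF at the historic CAGR for each year
    # of the explicit period.
    return [base_fcf * (1.0 + growth_rate) ** t for t in range(1, horizon + 1)]

g = cagr(first=100.0, last=133.1, years=3)   # 1.1^3 = 1.331, so g = 10%
forecast = forecast_fcf(base_fcf=133.1, growth_rate=g, horizon=5)
```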

Capital market efficiency

A market is considered efficient when it corrects the prices of securities
automatically as soon as the latest information becomes available. In such a market,
investors cannot earn economic profits on the basis of publicly available information;
in short, financial markets are efficient in terms of information (Downing, Underwood &
Xing, 2007). Professor Eugene Fama (1965), who coined the phrase "efficient market",
defined market efficiency as all participants making excellent decisions, a situation
in which the actual price of a security equals its intrinsic value. Justifying this
intrinsic value is the purpose of a valuation model.

2.9 Chapter Summary

Analysts use a wide range of methods in practice, ranging from the simple to the
sophisticated. These methods often make different assumptions, but they do share some
common characteristics and can be classified in broader terms. The extensive survey of
literature provided an elegant framework to select and apply the methods for the
purpose of carrying out valuation. The survey of literature helped in the selection of the
three methods of firm valuation, namely Free Cash Flow to the Firm, Economic
Value Added and Relative Valuation. In addition, it facilitated the choice of
enterprise and price multiples, computations of valuation errors and the assessment of
abnormal returns.