To apply relevant knowledge, skills and exercise professional judgement in selecting and applying
strategic management accounting techniques in different business contexts and to contribute to the
evaluation of the performance of an organisation and its strategic development.
MAIN CAPABILITIES
Use strategic planning and control models to plan and monitor organisational performance.
Assess and identify relevant macroeconomic, fiscal and market factors and key external influences
on organisational performance.
Identify and evaluate the design features of effective performance management information and
monitoring systems.
Apply appropriate strategic performance measurement techniques in evaluating and improving
organisational performance.
Advise clients and senior management on strategic business performance evaluation and on
recognising vulnerability to corporate failure.
Identify and assess the impact of current developments in management accounting and
performance management on measuring, evaluating and improving organisational performance.
Section A
Section B
Section B will contain three optional questions worth 25 marks each. Candidates will be required to answer
two of these questions. At least one of the Section B questions is usually entirely discursive in nature.
Paper F5
Paper P5 builds on Paper F5 (Performance Management) and you are expected to have a thorough
understanding of the Paper F5 syllabus. Although some of the topics from Paper F5 are revised in these
notes, it is impossible to revise all of them. If (because of previous syllabus changes) you did not take
Paper F5, or if you have forgotten F5, then it is vital that you obtain a set of F5 notes and work through
them properly yourself.
Paper P3
In addition, there is considerable overlap between Papers P5 and P3 in the area of strategic planning and
control. Although this area is revised briefly in these notes you should make sure that you are prepared to
demonstrate your P3 knowledge in the Paper P5 exam.
Because of the overlap of P5 with both F5 and P3, it may appear that there is not a lot new to learn for P5.
In one way that is true with respect to the technical content of the syllabus, but it is certainly not true with
respect to the style of questions and the skills needed to pass this exam. Question practice is essential.
The examiner has written an article explaining his approach to the exam. You can find the article on the
ACCA website:
(http://www.accaglobal.com/uk/en/student/acca-qual-student-journey/qual-
resource/accaqualification/p5/technical-articles/examiner-approach-to-paper-p5.html)
It is strongly recommended that you read this article before (and after!) your studies.
Organisations have objectives they wish to meet. Normally these objectives are achievable only if the
organisation improves in some way, e.g. increases profit, increases market share or improves customer
satisfaction. Performance measurement looks at how we are doing now; performance management
is about how we can improve.
We could improve by:
Measuring different KPIs which are more aligned with our objectives
Improving the way targets are set and behaviour is governed through budgeting
Improving the way that rewards are granted to staff
Introducing new management accounting techniques such as ABM, EMA
Introducing new production techniques such as JIT, Kaizen
Improving the quality of products / services provided
Improving the quality of information used by the organisation
Improving the information systems within the organisation
Improving the way information is presented to those who need it
One of the biggest issues is that all of the above are interlinked.
WHAT IS STRATEGY?
'The direction and scope of an organization over the long term, which achieves advantage in a changing
environment through its configuration of resources and competences with the aim of fulfilling stakeholder
expectations.'
The above simply means that strategy is how an organization attempts to meet its objectives.
[Diagram: the strategic planning model, showing mission statement, objectives, identification of CSFs, corporate appraisal (SWOT), strategic choice, strategic implementation and formulation of plans/budgets.]
When formulating a strategic plan, the organization should use a structured model which breaks the
planning process up into a number of stages:
The company should produce a mission statement and define clear objectives that it wants to achieve. It will then
have a clear purpose in society.
External analysis
Many organizations operate in a dynamic environment and need to know the current and future external
forces they will face. A continuous study should be undertaken of the external environment using PEST
analysis and competitor analysis.
Internal analysis
Analysis needs to be undertaken to establish internal resources at present and in the future. Internal
strengths and weaknesses will be established during this stage.
Corporate appraisal
A full corporate appraisal will be undertaken using SWOT analysis. This will enable the organization to
analyse its current and future position.
Strategic choice
The options available will be identified and evaluated at this stage. Each option indicates a strategic
pathway that the organization can follow.
Review
The strategy implemented must be continuously monitored and updated whenever the need arises, e.g.
when the external environment changes.
A mission is a broad statement of the overall purpose of the business and should reflect the core values
of the business. It will set out the overriding purpose of the business in line with the values and
expectations of stakeholders.
There is no one best mission statement for an organization as the contents of mission statements will vary
in terms of length, format and level of detail from one organization to another.
Mission statements are normally brief and address three main questions:
Why do we exist?
What are we providing?
For whom do we exist?
Strategic objectives
Tactical objectives
Operational objectives
Critical success factors
These can be defined as 'those things that must go right if the objectives and goals are to be achieved'.
Critical success factors may be financial or non-financial, but they must be high-level.
Each CSF must have a Key Performance Indicator (KPI) attached to it so as to allow measurement of
progress towards the CSF. Performance indicators are low level and detailed. They are measures of
performance which indicate whether the CSFs have been achieved or not.
The process is a participative one at management level and the management accountant would be
involved in mapping the above process, developing the KPIs and monitoring them.
There are a number of stages. Each of these will be discussed in more detail later in the notes.
POSITION ANALYSIS
There are various ways in which a company can assess its position and the environment in which it
operates:
SWOT analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities,
and Threats involved in a project or in a business venture. This may be incorporated into the strategic
planning model.
The purpose of SWOT is to provide a summarized analysis of the company's current position in the
marketplace; from this it can address the gap between the current position and where it wants to be.
It can be used to help identify CSFs and performance indicators.
Benchmarking
Internal/External
Strategic
Functional
Operational
Again, the above can only be done if the company has adopted appropriate performance measures.
Criticisms of benchmarking include:
It implies that there is a single best way to do things which must be copied by all.
It is not appropriate if the industry is changing radically.
It can mean the company is always behind its rivals.
The wrong activities might be examined.
The measurements may not be accurate.
Strategic
● Market share
● Return on assets
● Gross profit margin
Functional
● % of deliveries on time
● Order turnaround time
● Order costs per order
Operational
Operational measures show the reasons for a functional performance gap and enable identification of
the corrective actions required.
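As a rough illustration, the strategic measures listed above can be computed from basic financial data. A minimal Python sketch (all figures are hypothetical, chosen only for the example):

```python
# Hypothetical figures for a single period, for illustration only.
revenue = 500_000            # company sales
industry_sales = 2_500_000   # total sales across the industry
cost_of_sales = 300_000
total_assets = 800_000
operating_profit = 96_000

market_share = revenue / industry_sales                    # strategic measure
return_on_assets = operating_profit / total_assets         # strategic measure
gross_profit_margin = (revenue - cost_of_sales) / revenue  # strategic measure

print(f"Market share:        {market_share:.1%}")          # 20.0%
print(f"Return on assets:    {return_on_assets:.1%}")      # 12.0%
print(f"Gross profit margin: {gross_profit_margin:.1%}")   # 40.0%
```

Comparable figures from a benchmark partner would then be needed before any performance gap could be identified.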
Stakeholders
Stakeholders' aspirations and requirements often conflict. Managers can use Mendelow's matrix to help
manage stakeholders.
                                     Stakeholder power
                                     High                               Low
Probability of        High           Will have to be given most of      Should be kept informed
exercising power                     what they want, in the short       (though they have little power)
                                     term at least
                      Low            Will have to be kept satisfied     Can be ignored
                                     (may become militant)
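The four quadrants of the matrix can be expressed as a simple lookup. A sketch (the recommended actions are taken from the matrix above; the function name is our own):

```python
def mendelow_strategy(high_power: bool, high_likelihood: bool) -> str:
    """Suggested handling of a stakeholder based on power and the
    probability of exercising that power (Mendelow's matrix)."""
    if high_power and high_likelihood:
        return "Give most of what they want, in the short term at least"
    if high_power:
        return "Keep satisfied (may become militant)"
    if high_likelihood:
        return "Keep informed (though they have little power)"
    return "Can be ignored"

# A powerful stakeholder who is unlikely to exercise that power:
print(mendelow_strategy(high_power=True, high_likelihood=False))
# Keep satisfied (may become militant)
```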
Organizational culture
Values
Attitudes
Norms
Expectations
Levels of culture
Ethical codes
Employee-employer relationship
Customer care
Supplier relations
PEST
PEST analysis stands for “Political, Economic, Social, and Technological analysis” and describes a
framework of macro-environmental factors used in the environmental scanning component of strategic
management.
With PESTEL the key question is whether the current environment is making things easier or harder for the
organization. In the exam, conditions are usually getting harder, so look for the financial indicators to be
worsening as a result.
If the environment is making conditions harder, what can the organization do about it? Remember that
the macro-environment will affect an entire industry in the same way. This means all the organization‟s
rivals will also be affected.
If the company is going to move into a new industry what will the conditions be like (different industries
will be affected in different ways)?
Economic – consider local economic trends, interest and exchange rates, and inflation.
Inflation – is inflation driving up material and labour costs?
Legal – impact of local employment law.
Political – is government policy affecting competition?
EU – consider product standards and minimum labour costs.
Cultural – these issues can affect motivation, and the adaptability of the organization.
Business Cycle – is there an economic boom or a recession?
Political climate
ILLUSTRATION
Speedy Eat is the world's largest and best-known food service retailing group with more than 30,000
'fast-food' outlets in over 120 countries. Currently half of its restaurants are in the USA, where it first
began 50 years ago, but up to 1,000 new restaurants are opened every year worldwide. Restaurants are
wholly owned by the group (it has previously considered, but rejected, the idea of franchising
operations and collaborative partnerships).
As market leader in a fiercely competitive industry, Speedy Eat has strategic strengths of instant global
brand recognition, experienced management, site development expertise and advanced technological
systems. Speedy Eat's basic approach works as well in Kandy or Kuala Lumpur as it does in Kansas:
although the products are broadly similar, menus are modified to reflect local tastes. Analysts agree that
it continues to be profitable because it is both efficient and innovative.
You are part of a strategy steering team responsible for investigating the key factors concerning Speedy
Eat's entry for the first time into the restaurant industry in the Republic of Borderland.
Required:
(a) Justify the use of a PEST framework to assist your team’s environmental analysis for
the Republic of Borderland. (8 marks)
(b) Discuss the main issues arising from applying this framework. (12 marks)
(20 Marks)
Economic factors
Things to consider include:
● Economic prosperity
The more prosperous the nation, the more money people will have to spend on 'fast-food'.
Examining the current and likely future prosperity enables the organization to understand
the potential of this market and the likely future investment required.
● Interest rates
This affects the cost of borrowing within Borderland. If high it may mean overseas funding is
necessary. A big differential between interest rates in Borderland and the US is also likely to
cause instability in the exchange rate (see below).
Interest rates also affect the availability of money for the people of the country. Low interest
rates mean more disposable income to spend increasing the potential for Speedy Eat.
● Exchange rates
Speedy Eat will be affected by exchange rates for items they export to Borderland (clothing,
fittings). An unfavorable movement in exchange rates could make exporting to Borderland
expensive and reduce profitability. It can also affect the value of profits when converted back
to US dollars.
● Position in economic cycle
Different countries are often at different positions in the economic cycle of growth and
recession. The current position of Borderland will affect the current prosperity of the nation
and the potential for business development for Speedy Eat.
● Inflation rates
High inflation rates create instability in the economy which can affect future growth
prospects. They also mean that prices for supplies and prices charged will regularly change
and this difficulty would need to be considered and processes implemented to account for
this.
Social factors
Things to consider include:
● Brand reputation/anti-Americanism
As a global brand, the reputation of Speedy Eat might be expected to have reached
Borderland. If not, more marketing will be required. If it has, the reputation will need to be
understood and the marketing campaign set up accordingly.
This is particularly relevant given the anti-Americanism which is currently prevalent in some
countries. Speedy Eat may have a significant hurdle to overcome in convincing people to eat there if
this is the case in Borderland.
● Cultural differences
Each country has its own values, beliefs, attitudes and norms of behavior, which means that the
people of that country may like different foods, architecture, music and so on, in comparison with the USA.
Porter’s 5 Forces
Porter's 5 forces model looks at why some industries might be more profitable than others. In general, the
more of the forces that are favorable within an industry, the more profit will be earned. Unfortunately, if
an industry becomes more attractive then more rivals will want to enter it.
[Diagram: Porter's five forces, showing the threat of new entrants, competitive rivalry among existing competitors, buyers' bargaining power, suppliers' bargaining power and the threat of substitutes.]
Competitors (new)
New entrants always drive down profit margins (as companies have to spend more on marketing or lower
prices to keep customers). New competitors are only kept out by barriers to entry, such as economies of
scale, high capital requirements, brand loyalty and legal restrictions.
Competitors (existing)
If there is a lot of rivalry in an industry then profit margins will be lower as companies constantly fight to
retain their customers.
Customers
Powerful customers prevent companies from putting prices up or implementing other changes.
Suppliers
Powerful suppliers might put their prices up or impose other changes on the company.
Substitutes
If there are many substitutes for a product then it becomes harder to raise prices.
Direct – where the customer buys the same product from a different manufacturer.
Indirect – where the customer buys a product from a different industry to meet the same need.
Monetary – where different industries are competing for the same part of a customer's income.
A 'planning gap' is the gap between the forecast position based upon an extrapolation of projected current
activities and the forecast of the desired position. The planning gap is most often measured in terms of
demand but may also be reported in terms of net profit, return on capital employed etc.
[Figure: planning gap chart plotting revenue ($) against years, showing the gap between the ultimate objective and the forecast from current operations plus future projects.]
An organization will forecast the likely performance of its existing projects and also the expected
contribution of future projects.
This is far more difficult since future projects are subject to much greater uncertainty than current
operations and therefore forecasts of future projects have a much wider margin of error.
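Numerically, the planning gap in each year is simply the desired position minus the sum of the two forecasts. A minimal sketch with hypothetical figures (in $000s):

```python
# Hypothetical revenue forecasts in $000s, for illustration only.
current_operations = [400, 390, 380]  # extrapolation of existing activities
future_projects    = [0,   30,  60]   # expected contribution of new projects
objective          = [420, 450, 480]  # the desired (ultimate) position

gaps = []
for year, (cur, fut, obj) in enumerate(
        zip(current_operations, future_projects, objective), start=1):
    gap = obj - (cur + fut)  # shortfall to be closed by additional strategies
    gaps.append(gap)
    print(f"Year {year}: forecast {cur + fut}, objective {obj}, gap {gap}")

print(gaps)  # [20, 30, 40]
```

Note the widening gap over time: the same pattern as in the chart, since current operations decline while the objective grows.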
Where a gap exists, additional strategies are required. Ansoff's matrix identifies various options that
might be considered in order to close the planning gap.
There are a number of ways in which profits could be increased. These include:
                        Products
                        Existing                           New
          Existing      Market penetration                 Product development
                        (protect/build, consolidation,     (new products in the existing
Markets                 withdrawal, efficiency gains)      market, using existing capabilities
                                                           or developing new ones)
          New           Market development                 Diversification
                        (new markets for existing          (related or unrelated)
                        products; new segments of
                        the existing market)
The above strategies are not mutually exclusive. An organization might well pursue a penetration strategy
whilst seeking to enter new markets.
Other strategies that can be used include efficiency strategies, which are designed to increase profits (or
throughput) by making better use of resources in order to reduce costs. It is also possible to reduce a
planning gap that is measured in terms of profit by divesting loss-making business units. This would
obviously not be the case where the planning gap is measured in terms of sales revenue.
A multinational company is one which undertakes a substantial proportion of its business in countries
other than the one in which it is based.
The strategic planning process in these companies and the strategic choices made must take account of
certain special features, and you must be able to briefly describe these for the examination.
Process specialisation e.g. place labour-intensive operations in countries with low wage rates.
Product specialisation e.g. consumers in different countries have different requirements and
'tastes'.
International trade issues e.g. the economics of a business may be particularly sensitive to
exchange rate fluctuations. There could be import restrictions. There might be transportation
problems.
Political sensitivities e.g. particular countries may have particular political risks.
Administrative issues e.g. the transfer of profits may result in tax being payable twice.
Ownership of foreign companies might be subject to special rules.
In earlier management accounting papers you will have looked at various topics such as costing
techniques, budgeting and the calculation of variances.
These techniques tend to be:
Short-term
Backwards looking
Because of this, all of these measures are useful for CONTROL purposes.
We saw earlier that strategy is concerned with big decisions, such as whether to enter a new market or
launch a new product. Any such strategic decision will need to be justified. This means the accountant will
be involved with preparing information to support the decision.
Information to support strategic decisions tends to be:
Externally focused.
Forward looking.
Aimed at achieving the goals of the entire organization.
Produced when needed.
Not in a standard form.
Product profitability – why is one product making more profit than another?
Customer profitability – why are some customers worth more than others?
Pricing decisions – including looking at customers and competitors
Brand values – how much should be invested in a brand?
Shareholder wealth – what choices will increase it?
Possible acquisition targets
Expected synergistic gains
Decisions on entering new industries or markets
Decisions on launching new products
Decisions on whether to expand certain parts of the business
Decisions on whether to close or sell-off various parts of the business
The last two points are particularly important. Senior management will need to identify which parts of
the business are performing well and which are underperforming.
To do this senior management will need to introduce a set of performance measures which can be used to
summarize the performance of the business.
Important areas
Management accountants should become internal consultants for managers. The main reasons for this
change are:
REVOLUTION OF IT SYSTEMS
Managers have access to more information within the IT system.
Changed quality and quantity of information used to make decisions.
Wide variety of reports that can be generated from IT systems.
More analytical and interpretative skills required (“Big data”)
COMPETITION
A performance report is what the manager or board of directors sees: it is used to identify how well the
organisation is performing. It is important that the correct information is provided in the report.
Information on its own may not be helpful unless it is benchmarked against something, such as budgets,
prior periods or competitors.
This chapter looks at budgeting used as a method of control within an organisation. You will already have
been examined on budgeting in previous examinations, and much of this chapter is therefore revision.
In this examination, questions are more likely to focus on written aspects, and the syllabus includes
budgeting in not-for-profit organisations; modern developments; and behavioural aspects.
Purpose of budgeting
Forecasting
Planning
Communication
Co-ordination
Control
Authorising and delegating
Motivation
Evaluation of performance
The principal budget factor is the factor that limits the activity for the budget period. Normally this is the
level of sales and therefore the sales budget is usually the first budget to be prepared – this then leads to
the others.
However, it could be (for example) a limit on the availability of raw materials that limits activity. In this
case raw materials would be the principal budget factor, and this would be the first budget to be prepared.
Fixed budget
A fixed budget is designed to remain unchanged irrespective of the volume of output or turnover
attained; it remains fixed over the budget period and does not change with the level of activity
actually achieved.
Zero-based budgeting
A problem with the annual, incremental budget is that departments are given money each year simply
because they have been given it in the past. Zero-based budgeting (ZBB) is designed to eliminate this wastage.
ZBB starts with the idea that each manager begins with a zero base of resources. The manager only
receives resources if they can be justified.
It would be particularly suitable for a department which pursues different projects each year (marketing,
IT).
Rolling budgets
A problem with the annual, incremental budget is that it can quickly become unrealistic. This leads to the
targets becoming unreachable and managers becoming demotivated.
The rolling budget is regularly updated based on actual performance. This should lead to more realistic
targets.
ILLUSTRATION 1
A company uses rolling budgeting and has a sales budget as follows:
Q1 ($) Q2 ($) Q3 ($) Q4 ($) Total ($)
Sales 125,750 132,038 138,640 145,572 542,000
Actual sales for Quarter 1 were $123,450. The adverse variance is fully explained by competition being
more intense than expected and growth being lower than anticipated. The budget committee has proposed
that the revised assumption for sales growth should be 3% per quarter for Quarters 2, 3 and 4.
Required:
Update the budget figures for Quarters 2–4 as appropriate.
ANSWER
The revised budget should incorporate 3% growth starting from the actual sales figure of Q1.
Q2 ($) Q3 ($) Q4 ($)
Sales 127,154 130,969 134,898
Workings
Q2: Budget = $123,450 × 103% = $127,154
Q3: Budget = $127,154 × 103% = $130,969
Q4: Budget = $130,969 × 103% = $134,898
(all figures rounded to the nearest $)
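The workings above are easy to verify in code: each revised quarter applies 3% growth to the previous figure, starting from the Q1 actual, rounded to the nearest dollar.

```python
actual_q1 = 123_450
growth = 1.03  # revised assumption: 3% growth per quarter

revised = []
figure = actual_q1
for quarter in ("Q2", "Q3", "Q4"):
    figure = round(figure * growth)  # compound 3% on the previous quarter
    revised.append((quarter, figure))

print(revised)  # [('Q2', 127154), ('Q3', 130969), ('Q4', 134898)]
```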
Flexed budgets
A problem with the annual, incremental budget is that cost centers are given the same amount of money
to spend regardless of activity.
Flexed budgets adjust the target to reflect the amount of work to be carried out.
The major objection to flexed budgets is that it is difficult to motivate managers to achieve a target if they
do not know what the target is until the end of the period.
                                     Budget    Actual
Level of activity (units of output)   1,000     1,200
Cost ($)                             20,000    23,000
Required:
(a) Assuming all costs are variable, has the company done better or worse than expected?
(b) If $10,000 of the budgeted costs are fixed costs, the remainder being variable, has the
company performed better or worse than expected?
ANSWER
(a) At first sight, the costs are higher meaning the company has done worse, from a cost control
angle, but then the activity level is 20% higher than planned. If all costs are variable, we would
expect costs to rise in line with activity, making expected costs 20,000 × 1.2 = $24,000. In this
case the company has done better than expected.
(b) The fixed costs of $10,000 will NOT rise in line with activity levels, whereas the variable costs
of $10,000 will increase in line with activity levels. Therefore, the expected cost of the actual
level of activity will be ($10,000 × 1.2) + $10,000 = $22,000. The actual cost is $23,000, so the
company has spent more than expected.
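Both answers come from flexing the budget to the actual activity level before comparing it with the actual cost. A short sketch of the calculation:

```python
budget_units, actual_units = 1_000, 1_200
budget_cost, actual_cost = 20_000, 23_000
activity_ratio = actual_units / budget_units  # 1.2, i.e. 20% more output

# (a) All costs variable: the whole budget flexes with activity.
flexed_all_variable = budget_cost * activity_ratio
print(flexed_all_variable)  # 24000.0 -> actual 23,000 is better than expected

# (b) $10,000 fixed + $10,000 variable: only the variable part flexes.
fixed, variable = 10_000, 10_000
flexed_mixed = fixed + variable * activity_ratio
print(flexed_mixed)         # 22000.0 -> actual 23,000 is worse than expected
```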
Incremental
Incremental budgeting is budgeting based on slight changes from the preceding period's budgeted results
or actual results.
Incremental amounts are added to the previous period's budget for the new budget period. Since this is
based on allocations from the previous period and is progressive, it could lead to a "spend it or lose it"
attitude.
This is a common approach in businesses where management does not intend to spend a great deal of
time formulating budgets, or where it does not perceive any great need to conduct a thorough evaluation
of the business.
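Arithmetically, incremental budgeting is trivial: next period's budget is last period's figure plus a small uplift. A one-line sketch (figures hypothetical):

```python
last_year_budget = 50_000  # hypothetical departmental budget
increment = 0.04           # e.g. a 4% uplift for expected inflation

new_budget = round(last_year_budget * (1 + increment))
print(new_budget)  # 52000
```

Its simplicity is also its weakness: because the base is never questioned, past inefficiencies are carried forward into each new budget.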
Activity-based budgeting
This is the application of the idea of Activity Based Costing to the process of budgeting, and as such has
particular relevance to budgeting for fixed overheads.
At the planning stage, attempts are made to identify which activities drive (cause) the various overheads.
Costs are spread over these cost drivers using whatever basis appears to be appropriate in the
circumstances. A better understanding of costs and their causes should result in better budgets,
better decision making and better performance.
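Under this approach, the fixed overhead budget is built up as budgeted driver volume × cost per driver unit for each activity. A sketch with hypothetical activities and rates:

```python
# Hypothetical activities: (budgeted driver volume, cost per driver unit in $).
activities = {
    "machine set-ups":     (200, 150),   # 200 set-ups at $150 each
    "purchase orders":     (1_000, 25),  # 1,000 orders at $25 each
    "quality inspections": (500, 40),    # 500 inspections at $40 each
}

# Budgeted overhead per activity = driver volume x rate per driver unit.
overhead_budget = {name: volume * rate
                   for name, (volume, rate) in activities.items()}
total_overhead = sum(overhead_budget.values())

for name, amount in overhead_budget.items():
    print(f"{name}: ${amount:,}")
print(f"Total budgeted overhead: ${total_overhead:,}")  # $75,000
```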
Participation
If the budget process is not handled properly, it can easily cause dysfunctional activity. It is therefore
necessary to give thought to the behavioural aspects.
Top-down budgeting: This is where budgets are imposed by top management without the
participation of the people who will actually be involved in implementing it.
Bottom-up budgeting: Here the budget-holders do participate in the setting of their own
budgets.
Targets can assist motivation and appraisal if they are set at the right level.
Budgets are often used for the evaluation of performance: hit a budget and you've done well; miss it and
you could be in trouble. However, proper evaluation requires care, as performance might not be
controllable and, indeed, the budget could have been or could become incorrect.
Hopwood identified three approaches to the use of budget information by managers in performance
evaluation:
Budget-constrained style: A cost overrun or a revenue shortfall is always bad and is always the
subordinate's fault. Even if the subordinate had spent more for a good reason (for example, to appease a
very important customer who had had poor service), that expenditure would be criticized, even though it
might have led to the customer being retained. This approach leads to very bad relations between
superior and subordinate; it can also lead to misreporting.
Profit-conscious style: Long-term profitability and long-term performance are the important measures.
Cost overruns will be looked at, but will usually be tolerated for the sake of long-term success. This is
probably how most of us would like to be managed.
Non-accounting style: Here, the manager is not particularly interested in accounting and budgets. At one
stage this approach would have been found in many hospitals in the UK: treatments were relatively basic
and cheap, and expenditure didn't have to be watched. Now, with more expensive treatments and an
ageing population, financial budgets have become much more important.
Responsibility accounting
Management by objectives:
A system of management incorporating clearly established objectives at every level of the organisation.
Here there is less emphasis on monetary budgets and more emphasis on taking action which helps the
business to achieve its objectives.
Employees are given objectives then it is substantially left up to them to decide how to achieve those
objectives. It can be very motivating because employees are given the responsibility to choose how best to
meet their objectives.
Issues that tend to arise in budgeting that are specific to not-for-profit organizations include the
following:
There might be little control over revenue. For example, it might arise from an allocation of
government money.
There might be no revenue because goods and services are provided free. Therefore, how is
success to be identified?
The organisation may be prevented from borrowing funds or from budgeting for a deficit.
The organisation may not be allowed to transfer funds from one budget head to another.
The budgeting tends to be just for one financial year (i.e. short-term rather than long-term).
Incremental budgeting is the method most widely used.
BEYOND BUDGETING
Criticisms of traditional annual budgeting include the following:
Annual budgeting adds little value and takes up too much valuable management time.
Too heavy reliance on budgetary control in managing performance has an adverse impact on
management behavior.
The use of budgeting as a base for communicating corporate goals, setting objectives, assisting
continuous improvement, etc. is seen as contrary to its original purpose as a financial control
mechanism.
Most budgets are not based on a rational causal model of resource consumption and are,
therefore, of little use in determining strategy.
The process has insufficient external focus from which to derive targets or benchmarks.
The argument may be put that increased focus on knowledge or intellectual capital through
competent managers, skilled workforce, effective systems, loyal customers and strong brands is
more likely to yield improved business effectiveness.
Claimed benefits of the beyond budgeting approach include the following:
Create a culture based on beating the competition (since goals are related to external
benchmarks) rather than simply gaining more internal resources.
Rewards can be team-based increasing the amount of motivation.
It is easier to judge the performance of people lower down the organization (who are closer to the
customers).
It empowers more junior managers, meaning they can respond more quickly to changes in the
external environment.
Rewards
Traditional budgeting: individual departments have their own targets and are therefore unwilling to
share expertise, skills and information.
Beyond budgeting: the organisation is viewed as one team; barriers are broken down, with an
emphasis on learning.
Beyond Budgeting organizations operate with speed and simplicity, especially in responding to customer
requests.
An open, continuous and adaptive strategy rather than being constrained by a fixed, outdated and
bureaucratic plan.
Trust to share knowledge and best practices.
Rewards based on performance relative to peers.
Only by removing traditional budgets will people be motivated to question fixed costs and seek
sustainable long term cost reductions.
Asking "does it add value to the customer?" often ensures that unnecessary work is eliminated.
Place customer value needs at the center of their strategy. Respond to demands for improvement
in quality and cost.
Fast response to customer requests is vital. Thus front line people must have the authority to
make quick decisions and manage their own bit of the business.
This chapter looks at the different types of business structure, and the effect the structure has on the
information needed. It also looks at the types of changes that business might implement to improve their
performance.
Functional structure
One of the common structures found in medium-sized organisations is the functional structure. This
means that people within an organisation are organised by function. So, for example, there is a finance
department, a manufacturing department, a sales department, and so on.
[Diagram: functional structure, with a main board over the finance, manufacturing and sales departments.]
Because top management in functional organisations is centralised, data from each department needs to
be aggregated before top management can review and give feedback on it.
The aggregation can introduce delays in responding to the information. In addition, top management
needs the skills to deal with many departments, markets and issues.
As organisations grow they will often develop a divisional structure, where each division has its own
functional departments and where the divisional manager has a degree of autonomy.
Products.
Geography.
Type of customer.
Divisional managers are more motivated as they are provided with performance targets that are easier to
define, measure and evaluate.
Decisions are made ‘closer to the action’ so that faster decisions can be made.
Divisions can specialize. For example, a North American division can concentrate on making goods to suit
that market, pricing them competitively and countering the competition there.
Junior managers have more responsibility and get training for more senior positions in the future.
The disadvantages of such a structure are:
Head office management may need to restrict the autonomy of divisional managers, which can reduce
motivation and cause dissatisfaction.
Divisional managers are concerned about their own division’s performance rather than that of the
organisation as a whole, which can lead to a loss of goal congruence.
Poorer coordination.
There can be transfer pricing issues.
There can be some duplication of service departments, eg two finance departments.
Each divisional manager needs information about the performance of his division – aggregating the data
from each department within the division. This aggregated information is then passed upwards to head
office.
Head office does however need to aggregate the information received from each division in order to assess
the overall performance of the organisation.
An example of this may be found in firms of accountants, where there may be managers responsible for
each individual office within a country, but at the same time there may be managers responsible for
different activities in all offices throughout the country.
As a result, an employee working in the tax department of an office in one town will be reporting both to
the manager of that office, and to the nationwide tax manager.
These employees are responsible to both the project leader of project B and to the quality control
manager.
There can be conflicting pressures brought to bear on employees by the different managers to whom they
report (though that might happen even in a conventional structure).
There can be confusion over which boss has the ultimate say.
Data needs to be aggregated in two ways – both for the manager of the division and for the manager of the
activity.
As with a divisional structure, the aggregated information is passed upwards to head office, and head
office needs to be able to aggregate it in order to assess the performance of the organisation as a whole.
BUSINESS INTEGRATION
Business integration refers to the linking of various activities and processes to add value in an organization.
This model (Porter's value chain) represents organizations by setting out the activities they carry out. Firm
infrastructure, technology development, human resources and procurement are known as support activities
(mostly indirect costs). The other activities are primary activities. By carrying out these activities
organizations can make profits. However, it is essential for the organization to know what gives it the right
(or ability) to make profits.
Whatever it is that customers value is the key to an organisation's success, and its performance there needs
to be carefully managed. The organisation also has to be careful about changing or removing activities or
performance that customers value. If an organisation is left carrying out tasks that are not valued by
customers, how will it survive? Short-term performance improvements in one area might
lead to long-term performance decreases in another.
Business process reengineering involves re-thinking and radically re-designing the way an
organisation's processes operate.
It is not simply attempting to improve the existing way of doing things, but starting almost with a blank
piece of paper and designing how best to operate the business. The starting point is to determine the
desired outcome of the organisation and then to design how best to achieve it.
A leading advocate of business process reengineering – Michael Hammer – claimed that most of the work
being done does not add any value for customers, and that this work should be removed, rather than
simply speeded up, using technology. Information technology in particular has been used primarily for
automating existing processes, whereas it should be used as a way of making non-value-added work
obsolete.
Zero-based: if you were starting the business now, how would you choose to organize it?
Simplification: eliminate duplication and redundant steps.
Value-added analysis: remove non-value-adding activities.
Gaps and disconnects: check flows between departments.
McKinsey’s 7S model:
This model represents organizations using the following inter-related elements. To carry out a strategy
successfully, consideration has to be given to getting each element correct:
Structure
Strategy
Systems
Shared values
Style
Skills
Staff
Strategy:
Plans on how to reach identified goals and for dealing with the environment, competition, customers, new
technology and so on.
Structure:
The way the organization's units relate to each other: centralized, functional divisions, divisionalisation,
tall/narrow or wide/flat, decentralized (the trend in larger organizations); matrix etc.
Systems:
The procedures, processes and routines define how work is to be done: financial systems, quality control
systems, recruitment, promotion and performance appraisal systems, information systems, safety
procedures.
Skills:
Staff:
Style:
Cultural style of the organization and how key managers behave in achieving the organization's goals. For
example, an organisation could adopt a role culture or a task culture.
Shared Values:
What the organization stands for and what it believes in. Central beliefs and attitudes.
The upper three elements on the dark background are the 'hard Ss', meaning that they are relatively easy
to describe and define. Many organizations focus too much on these because they are easy to define and
describe.
The lower three on the white background and the central element are the 'soft Ss' and are less easy to
describe and define. Therefore, these tend to be ignored.
Additionally, the elements are all inter-dependent, so that changing one will affect others. For example,
the introduction of a new production system will probably affect skills, structure, style and staff. It could
even have an impact on strategy if it allowed, for example, more flexible production.
This chapter considers the impact of IT on management accounting. There is a lot of terminology, which
may or may not be already familiar to you. You are unlikely to be tested on specific terminology, but you
should be aware of the various items listed in this chapter.
The nature of what is provided by service orientated businesses is often different to what manufacturing
businesses provide in the following respects:
Heterogeneity:
Manufacturing often produces many identical units; service industries often produce tailored products, eg
an audit. Costing information and efficiency measurement will be quite different. Pricing will also be very
different, as customers (or clients) will find it more difficult to judge prices.
Perishability:
Many services are perishable ie they lose their value after a certain time. An example is airline seats: once
the aircraft departs the seats have no value. Again, this presents interesting pricing challenges.
Performance will be improved by attracting each extra passenger at the maximum marginal price, but if
everyone knows that prices will fall near the departure date, passengers will be encouraged to postpone
booking until prices reduce.
Intangibility:
It is difficult to show potential customers what they will get for their money. Auditing firms cannot show
clients an audit or audit file, so how can potential clients judge value for money?
Simultaneity:
In manufacturing, production and sale can be separated. This allows products to be quality-checked
before dispatch and allows flexibility in timing. For example, production can be carried out steadily
throughout the year and inventory can be stored until busy sales periods. Services cannot be stored and
are often instantly delivered. This places additional demands on scheduling, pricing and quality control
information.
No transfer of ownership:
Often services or the use of a service provider is for a limited period of time. Pricing and demand
information has to reflect this. For example, the pricing of hotel rooms will vary from week-days to
weekends. In addition because a service is being provided for a limited period only, consumers are likely
to be very demanding during that period.
The information needed to perform well when providing a service will often be more related to qualitative
than quantitative aspects: for example, reputation, customer satisfaction, and availability of the service
when required.
IT has made it possible to access data and information instantly. This should mean that delays between
events, processing the results of those events and feedback to alter future events should be much shorter.
With manual accounting systems it took significant time to collect and process results, prepare reports
and distribute those reports to managers. Now it is common for managers to have daily updates on
events (for example, sales of many different products in supermarkets) and to take action to improve
performance much more quickly. Indeed, this can often be in real time. For example, as a particular airline
flight receives bookings, air fares can be changed many times per day to try to maximize the marginal
revenue that can be earned.
Databases:
Large amounts of data are held in a way that allows many diverse users to access the data and to update it.
Everyone will see the data in the same state, ie it is consistent. Controls are needed to ensure that the data
is held securely and confidentially.
Data warehouse:
A vast amount of data, for example, supermarkets recording every loyalty card owner's purchases.
Data mining:
Searching through a data warehouse in the hope of finding information of use – particularly unexpected
useful information.
Groupware:
Internet:
Gives access to websites. Searches can be made on keywords (eg using Google) to find sites that might be
of use.
Intranets:
Extranets:
An organisation's intranet gives access to another's intranet.
Enterprise resource planning (ERP) systems:
A system that integrates internal and external management information across an entire organization,
including: finance/accounting, manufacturing, sales and service, customer relationship management, etc.
ERP systems automate these activities with an integrated software application and they facilitate the flow
of information between all business functions of the organization.
Decision support systems (DSS):
Help managers to cope with unstructured decisions, such as what next year's budget should show.
Spreadsheets are a good example.
Executive information systems (EIS):
Used by top management. Flexible, with the ability to 'drill down' to more and more detailed information.
Access to external information is essential at this level.
Expert systems:
These can make decisions that replicate the decisions an expert would make. They rely on extracting
knowledge from the expert and storing it in a knowledge base. Situations can then be presented to the
system, which uses the knowledge base to come to a conclusion or recommendation.
The type of data needed depends on the management level:
Traditionally, data was input into the computer systems using a keyboard. This takes time, and inevitably
results in input errors.
IT has enabled more and more data to be input remotely and/or automatically. You should be aware of
the uses of the following:
Laptop/notebook computers, often with WiFi or 3G (or 4G) connectivity, allow sales personnel
to contact head office to check on inventory and to enter new orders.
Handheld devices (including smartphones and iPads) can be used to input inventory counts and
update production statistics.
Barcodes (standard super-market technology)
RFID tags (radio frequency identification tags). RFID tags are tracking consumer products
worldwide. Many manufacturers use the tags to track the location of each product they make from
the time it's made until it's pulled off the shelf and tossed in a shopping cart.
However well a management accounting system has been designed, it is vitally important that it is
continually re-appraised, refined and developed if a business is to maintain or improve its performance.
The marketplace is increasingly competitive and increasingly global, creating different information needs
for management.
Big data
There are many definitions of the term ‘big data’ but most suggest something like the following:
'Extremely large collections of data (data sets) that may be analysed to reveal patterns, trends, and
associations, especially relating to human behaviour and interactions.'
In addition, many definitions also state that the data sets are so large that conventional methods of
storing and processing the data will not work.
In 2001 Doug Laney, an analyst with Gartner (a large US IT consultancy company) stated that big data
has the following characteristics, known as the 3Vs:
Volume
Variety
Velocity
The most common fourth 'V' sometimes added is Veracity: is the data true and can its accuracy be
relied upon?
VOLUME
The volume of big data held by large companies such as Walmart (supermarkets), Apple and eBay is
measured in multiple petabytes. What is a petabyte? It is 10^15 bytes (characters) of information. A typical
disc on a personal computer (PC) holds 10^9 bytes (a gigabyte), so the big data repositories of these
companies hold at least the data that could typically be held on 1 million PCs, perhaps even 10 to 20
million PCs.
These numbers probably mean little even when converted into equivalent PCs. It is more instructive to list
some of the types of data that large companies will typically store.
Retailers
Via loyalty cards being swiped at checkouts: details of all purchases you make, when, where, how you pay,
use of coupons.
Via websites: every product you have ever looked at, every page you have visited, every product you have
ever bought.
Social media:
Friends and contacts, postings made, your location when postings are made, photographs (which can be
scanned for identification), any other data you might choose to reveal to the universe.
Mobile phone companies:
Numbers you ring, texts you send (which can be automatically scanned for key words), every location your
phone has ever been whilst switched on (to an accuracy of a few metres), your browsing habits. Voice
mails.
Internet service providers and search engines:
Every site and every page you visit. Information about all downloads and all emails (again, these are
routinely scanned to provide insights into your interests). Search terms which you enter.
Banking systems
Every receipt, payment, credit card information (amount, date, retailer, location), location of ATM
machines used.
VARIETY
Some of the variety of information can be seen from the examples listed above. In particular, the
following types of information are held:
Structured data: This data is stored within defined fields (numerical, text, date etc) often with defined
lengths, within a defined record, in a file of similar records. Structured data requires a model of the
types and format of business data that will be recorded and how the data will be stored, processed and
accessed. This is called a data model. Designing the model defines and limits the data which can be
collected and stored, and the processing that can be performed on it.
Unstructured data: Refers to information that does not have a pre-defined data-model. It comes in
all shapes and sizes and it is this variety and irregularity which makes it difficult to store in a way that
will allow it to be analysed, searched or otherwise used. An often-quoted statistic is that 80% of business
data is unstructured, residing in word processor documents, spreadsheets, PowerPoint files, audio,
video, social media interactions and map data.
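The distinction can be illustrated with a minimal sketch (the sales record and the customer note below are invented for illustration):

```python
# Structured vs unstructured data: a minimal illustrative sketch.
# Structured: defined fields with defined types, matching a data model.
structured_sale = {"date": "2024-03-01", "store": 12, "product": "A-100", "amount": 9.99}

# Unstructured: free text with no pre-defined model; harder to query directly.
unstructured_note = "Customer rang to complain the A-100 arrived late but praised the packaging."

# Querying structured data is trivial...
print(structured_sale["amount"])     # 9.99
# ...whereas the unstructured note needs text analysis even for a simple question.
print("A-100" in unstructured_note)  # True
```

The point of the sketch is that the structured record can be stored, indexed and totalled directly, while the note must first be parsed or scanned before any analysis is possible.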
Here is an example of unstructured data and an example of its use in a retail environment:
You enter a large store and have your mobile phone with you. That allows your movement round the
store to be tracked. The store might or might not know who you are (depending on whether it knows
your mobile phone number). The store can record what departments you visit, and how long you spend
in each. Security cameras in the ceiling match up your image with the phone, so now they know what
you look like and would be able to recognise you on future visits. You pass near a particular product and
previous records show that you had looked at that product before, so a text message can be sent perhaps
reminding you about it, or advertising a 10% price reduction. Perhaps the store has a marketing
campaign that states that it will never be undersold, so when you pass near products you might be
making a price comparison, and the store has to check prices on other stores' websites and message you
with a new price. If you buy the product then the store might have further marketing opportunities for
related products and consumables and this data has to be recorded also. You pay with an affinity credit
card (a card with associations with another organisation such as a charity or an airline), so now the
store has some insight into your interests. Perhaps you buy several products and the store will want to
discover if these items are generally bought together.
So just walking round a store can generate a vast quantity of data which will be very different in size
and nature for every individual.
VELOCITY
You will understand that the volume and variety conspire against velocity and, so, methods have to be
found to process huge quantities of non-uniform, awkward data in real-time.
Without getting too technical on this issue, a library of software known as Apache Hadoop is specifically
designed to allow for the distributed processing of large data sets (ie big data) across clusters of
computers using simple programming models. (Clusters of computers are needed to hold the vast
volume of information.) Hadoop is designed to scale up from single servers to thousands of machines,
each offering local computation and storage.
Data mining: analysing data to identify patterns and establish relationships such as associations
(where several events are connected), sequences (where one event leads to another) and
correlations.
Predictive analytics: a type of data mining which aims to predict future events. For example, the
chance of someone being persuaded to upgrade a flight.
Text analytics: scanning text such as emails and word processing documents to extract useful
information. It could simply be looking for key-words that indicate an interest in a product or
place.
Voice analytics: as above but with audio.
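The text-analytics idea can be sketched very simply (the messages and the keyword list below are invented; real systems use far more sophisticated matching):

```python
# Minimal text-analytics sketch: scan messages for keywords that suggest
# an interest in a product or place.
messages = [
    "Thinking about a holiday in Portugal next spring",
    "Please find attached the minutes of the meeting",
    "Any recommendations for hotels in Lisbon?",
]
keywords = {"holiday", "hotels", "lisbon", "portugal"}

def flag_interest(text):
    # Normalise words (strip punctuation, lowercase) and intersect with keywords.
    words = {w.strip("?.,!").lower() for w in text.split()}
    return sorted(words & keywords)

for m in messages:
    print(flag_interest(m))
# ['holiday', 'portugal']
# []
# ['hotels', 'lisbon']
```

A message that matches no keywords is simply ignored; one that matches can be routed to a marketing system.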
Google provides website owners with Google Analytics, which will track many features of website traffic.
For example, the website OpenTuition.com provides free ACCA study resources. Google Analytics
reports statistics such as the age of users.
The benefits claimed for the use of big data include:
Better marketing
Better customer service and relationship management
Increased customer loyalty
Increased competitive strength
Increased operational efficiency
The discovery of new sources of revenue.
Netflix: this company began as a DVD mailing service and developed algorithms to help it to predict
viewers’ preferences and habits. Now it delivers films over the internet and can easily collect
information about when movies are watched, how often films might be stopped and restarted, where
they might be abandoned, and how users rate films. This allows Netflix to predict which films will be
popular with which customers. It is also being used by Netflix to produce its own TV series, with much
greater assurance that these will be hits.
Amazon: the world’s leading e-retailer collects huge amounts of information about customers’
preferences and habits which allow it to market very accurately to each customer. For example, it
routinely makes recommendations to customers based on books or DVDs previously purchased.
Airlines: they know where you’ve flown, preferred seats, cabin class, when you fly, how often you search
for a flight before booking, how susceptible you are to price reductions, probably which airline you
might book with instead, whether you are returning with them but didn’t fly out with them, whether car
hire was purchased last time, what class of hotel you might book through their site, which routes are
growing in popularity, seasonality of routes. They also know the profitability of each customer so that,
for example, if a flight is cancelled they can help the most valuable customers first.
This information allows airlines to design new routes and timings, match routes to planes and also to
make individualised offers to each potential passenger.
Disease epidemic identification: In 2009, Google was able to track the spread of influenza across the
USA faster than the government's Centers for Disease Control and Prevention. How? They monitored
users entering terms like 'Flu symptoms', 'Flu remedies' and 'High temperature'. This connection was
uncovered by web analytics looking at popular search terms then finding a correlation with other
information confirming influenza infections. Of course, you have to be careful drawing conclusions
about correlations: the association between the use of search terms and the outbreak of flu might be
driven by news articles on the spread of the epidemic rather than the epidemic itself.
Target: Target is the second largest discount retailer in the USA. There is an often quoted story about
their ability to predict when a customer is pregnant – frequently before the customer has informed her
family. By looking at about 25 products it is claimed that they can create a pregnancy predictor. For
example, early pregnancy often causes morning sickness so consumers would perhaps change to
blander food and less perfumed shower gel. Why would Target be interested in knowing whether a
consumer is pregnant? Well, that person will require different products during the pregnancy than in a
Despite the examples of the use of big data in commerce, particularly for marketing and customer
relationship management, there are some potential dangers and drawbacks.
Cost: It is expensive to establish the hardware and analytical software needed, though these costs are
continually falling.
Regulation: Some countries and cultures worry about the amount of information that is being collected
and have passed laws governing its collection, storage and use. Breaking a law can have serious
reputational and punitive consequences.
Loss and theft of data: Apart from the consequences arising from regulatory breaches as mentioned
above, companies might find themselves open to civil legal action if data were stolen and individuals
suffered as a consequence.
Incorrect data (veracity): If the data held is incorrect or out of date incorrect conclusions are likely.
Even if the data is correct, some correlations might be spurious leading to false positive results.
The business environment has been changing rapidly in recent years due to factors such as:
Increased competition
Globalisation
Privatisation
Technology in general; information technology; the Internet
Rapid changes in customer requirements
New approaches to manufacturing e.g. just-in-time; dedicated cells.
PEST
PEST analysis stands for “Political, Economic, Social, and Technological analysis” and describes a
framework of macro-environmental factors used in the environmental scanning component of strategic
management.
Is the current environment making it easier or harder for the organisation? In the exam, things are usually
getting harder; look for the financial indicators to be getting worse because of this. If the environment is
making conditions harder, what can the organisation do about it? Remember that the macro-
environment will affect an entire industry in the same way. This means all the organisation's rivals will
also be affected.
If the company is going to move into a new industry what will the conditions be like (different industries
will be affected in different ways)?
Economic – consider local economic trends, interest and exchange rates, and inflation.
Inflation – is inflation driving up material and labour costs?
Legal – impact of local employment law.
Political – is government policy affecting competition?
EU – consider product standards and minimum labour costs.
Cultural – these issues can affect motivation, and the adaptability of the organisation.
Business Cycle – is there an economic boom or a recession? Any others?
A government can increase aggregate demand for goods and services by increased government
spending and/or by reducing taxation so that firms (and individuals) have more after tax income
available to spend.
Government policy may encourage firms to locate to particular areas. This is particularly the case
where there is high unemployment in such areas.
Government policy via the use of quotas and import tariffs might make it more difficult for
overseas firms to compete in domestic markets.
A government can regulate monopolies in particular with regard to the prices they charge and the
quality of their goods and services.
Government policy can regulate the activities of those firms which do not act in the best interests
of the environment.
You have studied traditional management accounting techniques, such as variance analysis, for earlier
examinations.
It has however been argued that in today‟s environment they are less than adequate. Listed below are
some examples of areas where traditional management accounting is criticised.
Cooper and Kaplan stated that it was the support activities that were the cause of many overheads, for
example, material handling, quality inspection, setting up machinery, material acquisition, etc.
Method
1. Identify the organisation's major activities. Ideally about 30 to 50 activities should be identified.
However, over time, some large firms have been known to develop hundreds of activities. A
suitable rule of thumb is to apply the 80/20 rule: identify the 20% of activities that generate 80%
of the overheads, and analyse these in detail.
2. Estimate the costs associated with performing each activity – these costs are collected into cost
pools.
3. Identify the factors that influence the cost pools. These are known as the cost drivers. For
example, the number of set-ups will influence the cost of setting up machinery.
4. Calculate a cost driver rate, for example a rate per set-up, or a rate per material requisition, or a
rate per inspection. Cost driver rate = Cost pool / Level of cost drivers
5. Charge the overheads to the products by applying the cost driver rates to the activity usage of the
products.
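The five steps can be illustrated with a small numerical sketch (all cost pools, driver volumes and product usage figures below are invented):

```python
# Illustrative activity-based costing calculation with invented figures.
cost_pools = {"set-ups": 90_000, "inspections": 30_000}   # step 2: costs per activity
driver_volumes = {"set-ups": 150, "inspections": 600}     # step 3: cost driver levels

# Step 4: cost driver rate = cost pool / level of cost drivers
rates = {a: cost_pools[a] / driver_volumes[a] for a in cost_pools}

# Step 5: charge overheads to a (hypothetical) product by its activity usage.
product_usage = {"set-ups": 10, "inspections": 40}
overhead = sum(rates[a] * product_usage[a] for a in product_usage)

print(rates)     # {'set-ups': 600.0, 'inspections': 50.0}
print(overhead)  # 600*10 + 50*40 = 8000.0
```

The product is thus charged overhead in proportion to the activities it actually consumes, rather than on a blanket basis such as labour hours.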
Activity based budgeting comes from the principles of activity based costing. The basic ideas are:
Activity-based management (ABM) is a method of identifying and evaluating the activities that a business
performs, using activity-based costing to carry out a value chain analysis or a re-engineering initiative to
improve strategic and operational decisions in an organisation.
Activity-based costing establishes relationships between overhead costs and activities so that overhead
costs can be more precisely allocated to products, services, or customer segments.
Activity-based management focuses on managing activities to reduce costs and improve customer value.
Kaplan and Cooper (1998) divide ABM into operational ABM and strategic ABM:
A risk with ABM is that some activities have an implicit value, not necessarily reflected in a financial value
added to any product. For instance a particularly pleasant workplace can help attract and retain the best
staff, but may not be identified as adding value in operational ABM.
A customer that represents a loss based on committed activities, but that opens up leads in a new market,
may be identified as a low value customer by a strategic ABM process.
ABM can give middle managers an understanding of costs to other teams to help them make decisions
that benefit the whole organisation, not just their activities' bottom line.
Value analysis
Value analysis is the examination and assessment by an organization of a product's features to ensure that
its cost is no greater than is necessary to carry out its functions.
The product's functions are again determined by customers and the company must examine the factors
affecting the cost of a product or service in order to attempt to reduce costs whilst still delivering the
required standard of quality and reliability.
Note that some costs are associated with a product's function and some with its esteem value. Luxury
and cheap products might carry out the same function, but the styling or quality of the luxury product
might be essential in the eyes of consumers. It is important that the manufacturer damages neither
function nor esteem value when trying to reduce costs. A value-added activity is one which adds value to
the customer's perception of a product or service, whereas a non-value-added activity is one that does not
add value in the eyes of the customer.
Costs that do not add value to the product should be targeted for elimination. However, this is not always
the case – the removal of some non-value added activities (such as quality control) could add further
costs. A further classification is the breakdown of activities between core (such as time spent with
potential customers), support (such as travelling time to customers), and discretionary (such as correcting
accounting errors).
Effective cost management is about reducing or eliminating costs spent on non-core activities.
You might have heard a phrase such as “He/she has become institutionalised”. It is often used in the
context of someone who has spent a long time in hospital, care or prison. When we say that they have
become institutionalised we mean that they have been conditioned to act in a particular way and they
would find it hard to change if given the opportunity.
Procedures and people in commercial organisations become institutionalised. For example, management
accounting systems are influenced by legal requirements, culture and the copying of successful firms. This
can mean that it is difficult for accounting systems to 'break free' from what is accepted as normal and this
could hinder organisations' progress.
Burns and Scapens (2000) sought to provide a framework describing the process of institutionalisation.
The main concern of their framework was to understand the processes through which management
accounting rules and routines become taken-for-granted assumptions and so become institutionalised
within the organisation.
As such, management accounting systems, for example the budgeting system, carry the values of
rationality and financial orientation, which if taken-for-granted can become institutionalised.
However, Burns and Scapens also note that not all newly introduced accounting rules and routines will
necessarily become institutionalised. In particular, if new management accounting systems and practices
challenge the prevailing institutions (ie the currently accepted way of doing things) in the organisation,
they may not be widely adopted and may fail to become an institutionalised basis for behaviour. This
framework has been used by various researchers to study management accounting change – or lack of it.
Risk and uncertainty is a topic on which you have been examined previously, but it is assumed knowledge
and is therefore repeated here as revision.
Decision making involves making decisions now which will affect future outcomes which are unlikely to
be known with certainty.
Risk exists where a decision maker knows that several outcomes are possible – usually
due to past experience. This past experience enables the decision maker to estimate the probability or the
likely occurrence of each potential future outcome.
Uncertainty exists where the future is unknown and where the decision maker has no past experience on
which to base predictions.
Whatever the reasons for the uncertainty, the fact that it exists means that there is no 'rule' as to how to
make decisions. For the examination you are expected to be aware of, and to apply, several different
approaches that might be useful.
Risk preference:
A RISK SEEKER will be interested in the best possible outcome, no matter how small the chance that it
may occur. Someone who is RISK NEUTRAL will be concerned with the most likely or 'average' outcome.
A RISK AVOIDER makes decisions on the basis of the worst possible outcome that may occur.
EXPECTED VALUES
Decision-making involves dealing with future events that cannot be predicted with any certainty.
It may, however, be possible to predict a range of possible costs and revenues and the likelihood of them
arising.
Expected Values (EVs) are weighted average values based on probabilities. EVs are a useful tool in
business.
They can, for example, be used to calculate the likely number of faulty components in a
production batch, or the likely sales of a product over a range of time periods.
They can also be used to calculate the likely profits of a project, together with the most profitable
course of action. Expected values are of most use in longer-term planning, though they still have a
role in one-off decisions.
The results from a particular decision or action are often uncertain and depend on the circumstances
prevailing at the time.
Which CONSEQUENCE (or payoff) that arises from each action depends on the CIRCUMSTANCES
operating at the time a decision is made. These circumstances are independent of the actions themselves,
and it is often possible to assign a probability value to each of them.
It is possible to construct a payoff table which shows all of the possible consequences of a particular
decision. It is customary to display circumstances as rows and actions as columns. Consequences or
payoffs are cells in the table.
DECISION RULES
Depending on a decision-maker's attitude to risk a company may adopt different approaches to deciding
which project or course of action to take.
MaxiMin
In this strategy the decision-maker takes the project that has the least bad outcome – in effect playing it
safe.
Remember this is a very conservative strategy that can lead to low returns for a company. It is also one
that completely ignores the likelihood of something happening.
MaxiMax
In this approach the company seeks to maximize the best possible outcome.
Remember this can be a high-risk strategy as no account is taken of possible losses or how likely each
outcome is.
Minimax regret
ILLUSTRATION 1
Exton Health Centre specializes in the provision of exercise and dietary advice to clients. The service is
provided on a residential basis and clients stay for whatever number of days suits their needs.
Budgeted estimates for the year ending 30 June 2012 are as follows:
(i) The maximum capacity of the center is 50 clients per day for 350 days in the year.
(ii) Clients will be invoiced at a fee per day. The budgeted occupancy level will vary with the
client fee level per day and is estimated at different percentages of maximum capacity as
follows:
Client fee per day    Occupancy level    Occupancy as percentage of maximum capacity
(iii) Variable costs are also estimated at one of three levels per client day. The high, medium and
low levels per client day are $95, $85 and $70 respectively.
The range of cost levels reflects only the possible effect of the purchase prices of goods and
services.
Required:
(a) Prepare a summary which shows the budgeted contribution earned by Exton Health Centre for
the year ended 30 June 2012 for each of nine possible outcomes. (6 marks)
(b) State the client fee strategy for the year to 30 June 2012 which will result from the use of each of
the following decision rules: (i) maximax; (ii) maximin; (iii) minimax regret.
Your answer should explain the basis of operation of each rule. Use the information from your
answer to (a) as relevant and show any additional working calculations as necessary. (9 marks)
(c) The probabilities of variable cost levels occurring at the high, medium and low levels provided in
the question are estimated as 0.1, 0.6 and 0.3 respectively. Using the information available,
determine the client fee strategy which will be chosen where maximization of expected value of
contribution is used as the decision basis. (5 marks)
(d) The residents are provided with breakfast and evening meals at no extra cost. However, they also
have the option to buy a lunchtime meal. Each meal costs $7 to prepare and would be priced at $15
to customers. All lunches must be prepared in advance. Based on expected occupancy levels, the
restaurant manager has predicted that daily demand will be either 10 meals (probability 0.2), 20
meals (probability 0.5) or 30 meals (probability 0.3).
Prepare a pay-off matrix showing the outcomes if the restaurant manager decides to
make 10, 20 or 30 lunches in advance. How many lunches should the restaurant
manager make? (5 marks)
(25 marks)
(a)
Occupancy    Client days (= 50 clients    Fee per client    Var. cost per    Contribution per    Total contrib.
level        x 350 days x occupancy %)    day ($)           client day ($)   client day ($)      per year ($)
(b) Maximax
The maximax rule looks for the largest contribution from all outcomes.
Fee per client day ($)    Best possible contribution ($)
180 1,732,500
200 1,706,250
220 1,575,000
In this case the decision maker will choose a client fee of $180 per day where there is a
possibility of a contribution of $1,732,500.
Maximin
The maximin rule looks for the strategy which will maximize the minimum possible
contribution.
Client fee per day ($)    Minimum possible contribution ($)
180 1,338,750
200 1,378,125
220 1,312,500
In this case the decision maker will choose a client fee of $200 per day where the lowest
contribution is $1,378,125.
Minimax regret
The minimax regret rule requires the choice of the strategy which will minimise the
maximum regret from making the wrong decision. Regret in this context is the
opportunity lost through making the wrong decision.
Using the calculations from part (a) we may create an opportunity loss table as follows:
State of variable cost    Client fee per day strategy
                          $180    $200    $220
(c) Hence choose a client fee of $200 per day to give the maximum expected value contribution
of $1,555,313.
Note that there is virtually no difference between this and the contribution where a fee of
$180 per day is used.
(d)
                                    Decision = lunches made
Probability    Outcome =            10             20              30
               lunches demanded
0.2            10                   $80 profit     $10 profit      ($60) loss
0.5            20                   $80 profit     $160 profit     $90 profit
0.3            30                   $80 profit     $160 profit     $240 profit
1.0            Expected value       $80 profit     $130 profit     $105 profit
Example of working (decision to prepare 20 lunches, demand of 10): contribution = (10 x $15) - (20 x $7)
= $150 - $140 = $10 profit. Expected value = (0.2 x $10) + (0.5 x $160) + (0.3 x $160) = $130.
The manager should therefore prepare 20 lunches, as this gives the highest expected profit.
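The whole of part (d) can be reproduced in a few lines of Python. This is a sketch for checking the arithmetic only; the function and variable names are our own, not part of the question:

```python
# Illustration 1(d): lunch payoff matrix and expected values.
# Each meal costs $7 to prepare and sells for $15, so profit per meal
# sold is $8; unsold meals lose their $7 preparation cost.
demand_probs = {10: 0.2, 20: 0.5, 30: 0.3}   # daily demand and probabilities
decisions = [10, 20, 30]                     # lunches made in advance

def payoff(made, demanded):
    sold = min(made, demanded)               # cannot sell more than are made
    return sold * 15 - made * 7              # revenue less cost of all meals made

expected = {made: sum(p * payoff(made, d) for d, p in demand_probs.items())
            for made in decisions}
best = max(expected, key=expected.get)       # decision with the highest EV
```

Running this reproduces the matrix above ($10 profit for 20 made/10 demanded, a $60 loss for 30 made/10 demanded, and so on) and confirms that preparing 20 lunches gives the highest expected profit of $130.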
Clearly, risk permeates most aspects of corporate decision making (and life in general), and few can
predict with any precision what the future holds in store.
Risk can take myriad forms – ranging from the specific risks faced by individual companies (such as
financial risk, or the risk of a strike among the workforce), through the current risks faced by particular
industry sectors (such as banking, car manufacturing, or construction), to more general economic risks
resulting from interest rate or currency fluctuations, and, ultimately, the looming risk of recession. Risk
often has negative connotations, in terms of potential loss, but the potential for greater than expected
returns also often exists.
Clearly, risk is almost always a major variable in real-world corporate decision-making, and managers
ignore its vagaries at their peril. Similarly, trainee accountants require an ability to identify the
presence of risk and incorporate appropriate adjustments into the problem-solving and
decision-making scenarios encountered in the exam hall. While it is unlikely that the precise
probabilities and perfect information which feature in exam questions can be transferred to real-world
scenarios, a knowledge of the relevance and applicability of such concepts is necessary.
In this first article, the concepts of risk and uncertainty will be introduced together with the use of
probabilities in calculating both expected values and measures of dispersion. In addition, the attitude to
risk of the decision maker will be examined by considering various decision-making criteria, and the
usefulness of decision trees will also be discussed. In the second article, more advanced aspects of risk
assessment will be addressed, namely the value of additional information when making decisions,
further probability concepts, the use of data tables, and the concept of value-at-risk.
The basic definition of risk is that the final outcome of a decision, such as an investment, may differ from
that which was expected when the decision was taken. We tend to distinguish between risk and
uncertainty in terms of the availability of probabilities. Risk is when the probabilities of the possible
outcomes are known (such as when tossing a coin or throwing a dice); uncertainty is where the
randomness of outcomes cannot be expressed in terms of specific probabilities. However, it has been
suggested that in the real world, it is generally not possible to allocate probabilities to potential
outcomes, and therefore the concept of risk is largely redundant. In the artificial scenarios of exam
questions, potential outcomes and probabilities will generally be provided, therefore a knowledge of the
basic concepts of probability and their use will be expected.
The term ‘probability’ refers to the likelihood or chance that a certain event will occur, with potential
values ranging from 0 (the event will not occur) to 1 (the event will definitely occur). For example, the
probability of a tail occurring when tossing a coin is 0.5, and the probability when rolling a dice that it
will show a four is 1/6 (0.166). The total of all the probabilities from all the possible outcomes must
equal 1, ie some outcome must occur.
A real world example could be that of a company forecasting potential future sales from the
introduction of a new product in year one (Table 1).
From Table 1, it is clear that the most likely outcome is that the new product generates sales of
£1,000,000, as that value has the highest probability.
An independent event occurs when the outcome does not depend on the outcome of a previous event. For
example, assuming that a dice is unbiased, then the probability of throwing a five on the second throw
does not depend on the outcome of the first throw.
In contrast, with a conditional event, the outcomes of two or more events are related, ie the outcome of
the second event depends on the outcome of the first event. For example, in Table 1, the company is
forecasting sales for the first year of the new product. If, subsequently, the company attempted to
predict the sales revenue for the second year, then it is likely that the predictions made will depend on
the outcome for year one. If the outcome for year one was sales of $1,500,000, then the predictions for
year two are likely to be more optimistic than if the sales in year one were $500,000.
The availability of information regarding the probabilities of potential outcomes allows the calculation
of both an expected value for the outcome, and a measure of the variability (or dispersion) of the
potential outcomes around the expected value (most typically standard deviation). This provides us with
a measure of risk which can be used to assess the likely outcome.
Using the information regarding the potential outcomes and their associated probabilities, the expected
value of the outcome can be calculated simply by multiplying the value associated with each potential
outcome by its probability. Referring back to Table 1, regarding the sales forecast, then the expected
value of the sales for year one is given by:
Expected value
= ($500,000)(0.1) + ($700,000)(0.2) + ($1,000,000)(0.4) + ($1,250,000)(0.2) + ($1,500,000)(0.1)
= $50,000 + $140,000 + $400,000 + $250,000 + $150,000
= $990,000
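The same calculation can be sketched in a couple of lines of Python as a quick check (figures from Table 1):

```python
# Expected value of year-one sales (Table 1 values and probabilities).
outcomes = [500_000, 700_000, 1_000_000, 1_250_000, 1_500_000]  # sales ($)
probs = [0.1, 0.2, 0.4, 0.2, 0.1]                               # probabilities

expected_sales = sum(value * p for value, p in zip(outcomes, probs))  # $990,000
```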
In addition to the expected value, it is also informative to have an idea of the risk or dispersion of the
potential actual outcomes around the expected value. The most common measure of dispersion is
standard deviation (the square root of the variance), which can be illustrated by the example given
in Table 2 above, concerning the potential returns from two investments.
To estimate the standard deviation, we must first calculate the expected values of each investment:
Investment A
Investment B
The calculation of standard deviation proceeds by subtracting the expected value from each of the
potential outcomes, then squaring the result and multiplying by the probability. The results are then
totalled to yield the variance and, finally, the square root is taken to give the standard deviation, as
shown in Table 3.
The coefficient of variation is calculated simply by dividing the standard deviation by the expected
return (or mean):
For example, assume that investment X has an expected return of 20% and a standard deviation of 15%,
whereas investment Y has an expected return of 25% and a standard deviation of 20%. The coefficients
of variation for the two investments will be:
Coefficient of variation (X) = 15/20 = 0.75
Coefficient of variation (Y) = 20/25 = 0.80
The interpretation of these results would be that investment X is less risky, on the basis of its lower
coefficient of variation. A final statistic relating to dispersion is the standard error, which is a measure
often confused with standard deviation. Standard deviation is a measure of variability of a sample,
used as an estimate of the variability of the population from which the sample was drawn. When we
calculate the sample mean, we are usually interested not in the mean of this particular sample, but in
the mean of the population from which the sample comes. The sample mean will vary from sample to
sample and the way this variation occurs is described by the ‘sampling distribution’ of the mean. We can
estimate how much a sample mean will vary from the standard deviation of the sampling distribution.
This is called the standard error (SE) of the estimate of the mean.
The standard error of the sample mean depends on both the standard deviation and the sample size:
SE = SD/√(sample size)
The standard error decreases as the sample size increases, because the extent of chance variation is
reduced. However, a fourfold increase in sample size is necessary to reduce the standard error by 50%,
because the standard error varies inversely with the square root of the sample size.
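The relationship can be illustrated with a short sketch; the sample figures here (a standard deviation of 15 and sample sizes of 100 and 400) are hypothetical:

```python
import math

def standard_error(sd, n):
    # SE = SD / sqrt(sample size): the spread of the sampling
    # distribution of the mean.
    return sd / math.sqrt(n)

se_100 = standard_error(15.0, 100)   # 1.5
se_400 = standard_error(15.0, 400)   # 0.75: quadrupling n halves the SE
```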
DECISION-MAKING CRITERIA
The decision outcome resulting from the same information may vary from manager to manager as a
result of their individual attitude to risk. We generally distinguish between individuals who are risk
averse (dislike risk) and individuals who are risk seeking (content with risk). Similarly, the appropriate
decision-making criteria used to make decisions are often determined by the individual’s attitude to risk.
1. Maximin
2. Maximax
3. Minimax regret
An ice cream seller, when deciding how much ice cream to order (a small, medium, or large order),
takes into consideration the weather forecast (cold, warm, or hot). There are nine possible combinations
of order size and weather, and the payoffs for each are shown in Table 4.
The highest payoffs for each order size occur when the order size is most appropriate for the weather, ie
small order/cold weather, medium order/warm weather, large order/hot weather. Otherwise, profits
are lost from either unsold ice cream or lost potential sales. We shall consider the decisions the ice cream
seller has to make using each of the decision criteria previously noted (note the absence of probabilities
regarding the weather outcomes).
1. Maximin
This criterion is based upon a risk-averse (cautious) approach and bases the order decision upon
maximising the minimum payoff. The ice cream seller will therefore decide upon a medium order,
as its lowest payoff is $200, whereas the lowest payoffs for the small and large orders are $150
and $100 respectively.
2. Maximax
This criterion is based upon a risk-seeking (optimistic) approach and bases the order decision upon
maximising the maximum payoff. The ice cream seller will therefore decide upon a large order, as
the highest payoff is $750, whereas the highest payoffs for the small and medium orders are $250
and $500 respectively.
3. Minimax regret
The decision is then made on the basis of the lowest regret, which in this case is the large order with the
maximum regret of $200, as opposed to $600 and $450 for the small and medium orders.
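All three decision criteria can be applied to the Table 4 payoffs (as quoted in the text above) in one short sketch; the dictionary layout is our own:

```python
# Ice cream seller's payoffs: rows are order sizes, columns are the
# weather outcomes (cold, warm, hot), as in Table 4.
payoffs = {
    'small':  [250, 200, 150],
    'medium': [200, 500, 300],
    'large':  [100, 300, 750],
}

maximin = max(payoffs, key=lambda k: min(payoffs[k]))   # best worst case
maximax = max(payoffs, key=lambda k: max(payoffs[k]))   # best best case

# Regret = best payoff achievable for that weather less the payoff obtained.
best_per_weather = [max(col) for col in zip(*payoffs.values())]
max_regret = {k: max(best - got for best, got in zip(best_per_weather, row))
              for k, row in payoffs.items()}
minimax_regret = min(max_regret, key=max_regret.get)    # smallest maximum regret
```

This confirms the choices discussed in the text: medium under maximin, large under maximax, and large under minimax regret (maximum regrets of $600, $450 and $200 for small, medium and large).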
DECISION TREES
The final topic to be discussed in this first article is the use of decision trees to represent a decision
problem. Decision trees provide an effective method of decision-making because they:
Clearly lay out the problem so that all options can be challenged
Allow us to fully analyse the possible consequences of a decision
Provide a framework in which to quantify the values of outcomes and the probabilities of
achieving them
Help us to make the best decisions on the basis of existing information and best guesses.
The first step is to simply represent the decision to be made and the potential outcomes, without any
indication of probabilities or potential payoffs, as shown in Figure 1 below.
The next stage is to estimate the payoffs associated with each market response and then to allocate
probabilities. The payoffs and probabilities can then be added to the decision tree, as shown in Figure
2 below.
The expected values along each branch of the decision tree are calculated by starting at the right hand
side and working back towards the left recording the relevant value at each node of the tree. These
expected values are calculated using the probabilities and payoffs. For example, at the first node, when
a new product is thoroughly developed, the expected payoff is:
The calculations are then completed at the other nodes, as shown in Figure 3 below.
We have now completed the relevant calculations at the uncertain outcome nodes. We now need to
include the relevant costs at each of the decision nodes for the two new product development decisions
and the two consolidation decisions, as shown in Figure 4 below.
The payoff we previously calculated for ‘new product, thorough development’ was $420,400, and we
have now estimated the future cost of this approach to be $150,000. This gives a net payoff of $270,400.
The net benefit of ‘new product, rapid development’ is $31,400. On this branch, we therefore choose the
most valuable option, ‘new product, thorough development’, and allocate this value to the decision node.
The outcomes from the consolidation decision are $99,800 from strengthening the products, at a cost of
$30,000, and $12,800 from reaping the products without any additional expenditure.
By applying this technique, we can see that the best option is to develop a new product. It is worth much
more to us to take our time and get the product right, than to rush the product to market. And it’s better
just to improve our existing products than to botch a new product, even though it costs us less.
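The rollback at the decision nodes can be summarised with the net figures quoted above; a minimal sketch:

```python
# Net payoff of each route through the tree: expected payoff at the
# chance node less the cost incurred at the decision node.
options = {
    'new product, thorough development': 420_400 - 150_000,   # $270,400
    'new product, rapid development': 31_400,                 # net figure given
    'consolidate, strengthen products': 99_800 - 30_000,      # $69,800
    'consolidate, reap products': 12_800,
}
best_option = max(options, key=options.get)   # the branch worth taking
```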
In the next article, we will examine the value of information in making decisions, the use of data tables,
and the concept of value-at-risk.
In this second article on the risks of uncertainty, we build upon the basics of risk and
uncertainty addressed in the first article published in April 2009 to examine more
advanced aspects of incorporating risk into decision making
In particular, we return to the use of expected values and examine the potential impact of the
availability of additional information regarding the decision under consideration. Initially, we examine
a somewhat artificial scenario, where it is possible to obtain perfect information regarding the future
outcome of an uncertain variable (such as the state of the economy or the weather), and calculate the
potential value of such information. Subsequently, the analysis is revisited and the more realistic case of
imperfect information is assumed, and the initial probabilities are adjusted using Bayesian analysis.
Some decision scenarios may involve two uncertain variables, each with their own associated
probabilities. In such cases, the use of data/decision tables may prove helpful where joint probabilities
are calculated involving possible combinations of the two uncertain variables. These joint probabilities,
along with the payoffs, can then be used to answer pertinent questions such as what is the probability of
a profit/(loss) occurring?
The other main topic covered in the article is that of Value-at-Risk (VaR), which has been referred to as
'the new science of risk management'. The principles underlying VaR will be discussed along with an
illustration of its potential uses.
To illustrate the potential value of additional information regarding the likely outcomes resulting from
a decision, we return to the example given in the first article, of the ice cream seller who is deciding how
much ice cream to order but is unsure about the weather. We now add probabilities to the original
information regarding whether the weather will be cold, warm or hot, as shown in Table 1.
We are now in a position to be able to calculate the expected values associated with the three sizes of
order, as follows:
Expected value (small) = 0.2 ($250) + 0.5 ($200) + 0.3 ($150) = $195
Expected value (medium) = 0.2 ($200) + 0.5 ($500) + 0.3 ($300) = $380
Expected value (large) = 0.2 ($100) + 0.5 ($300) + 0.3 ($750) = $395
In the case of the ice cream seller, perfect information would be certainty regarding the outcome of the
weather.
If this was the case, then the ice cream seller would purchase the size of order which gave the highest
payoff for each weather outcome - in other words, purchasing a small order if the weather was forecast
to be cold, a medium order if it was forecast to be warm, and a large order if the forecast was for hot
weather. The resulting expected value would then be:
The value of the perfect information is the difference between the expected values with and without the
information, ie
Exam questions are often phrased in terms of the maximum amount that the decision maker would be
prepared to pay for the information, which again is the difference between the expected values with and
without the information.
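Using the probabilities introduced in this article (0.2, 0.5 and 0.3) and the ice cream payoffs, the value of perfect information can be checked as follows; this is our own sketch, not part of the original example:

```python
# Payoffs by order size for cold, warm and hot weather respectively.
payoffs = {
    'small':  [250, 200, 150],
    'medium': [200, 500, 300],
    'large':  [100, 300, 750],
}
probs = [0.2, 0.5, 0.3]   # P(cold), P(warm), P(hot)

ev = {k: sum(p * v for p, v in zip(probs, row)) for k, row in payoffs.items()}
best_ev_without = max(ev.values())   # $395, from the large order

# With perfect information, the best order is chosen for each weather outcome.
ev_with_info = sum(p * max(col)
                   for p, col in zip(probs, zip(*payoffs.values())))
value_of_perfect_info = ev_with_info - best_ev_without   # $525 - $395 = $130
```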
However, the concept of perfect information is somewhat artificial since, in the real world, such perfect
certainty rarely, if ever, exists. Future outcomes, irrespective of the variable in question, are not
perfectly predictable. Weather forecasts or economic predictions may exhibit varying degrees of
accuracy, which leads us to the concept of imperfect information.
With imperfect information we do not enjoy the benefit of perfect foresight. Nevertheless, such
information can be used to enhance the accuracy of the probabilities of the possible outcomes and
therefore has value. The ice cream seller may examine previous weather forecasts and, on that basis,
estimate probabilities of future forecasts being accurate. For example, it could be that when hot weather
is forecast past experience has suggested the following probabilities:
The probabilities given do not add up to 1 and so, for example, P (forecast hot but weather cold) cannot
mean P (weather cold given that forecast was hot), but must mean P (forecast was hot given that
weather turned out to be cold).
We can use a table to determine the required probabilities. Suppose that the weather was recorded on
100 days. Using our original probabilities, we would expect 20 days to be cold, 50 days to be warm, and
30 days to be hot. The forecasts made on those days can be tabulated as follows:

Forecast     Cold    Warm    Hot    Total
Hot           6**     20      21     47
Other        14       30       9     53
Total        20*      50      30    100
From past data, cold weather occurs with probability of 0.2 ie on 0.2 of the 100 days in the sample = 20
days. Other percentages are also derived from past data.
** If the actual weather is cold, there is a 0.3 probability that hot weather had been forecast. This will
occur on 0.3 of the 20 days on which the weather was cold = 6 days (0.3 x 20). Similarly, 20 = 0.4 x 50
and 21 = 0.7 x 30.
P (Cold | forecast hot) = 6/47 = 0.128
P (Warm | forecast hot) = 20/47 = 0.425
P (Hot | forecast hot) = 21/47 = 0.447
Expected value (small) = 0.128 ($250) + 0.425 ($200) + 0.447 ($150) = $184
Expected value (medium) = 0.128 ($200) + 0.425 ($500) + 0.447 ($300) = $372
Expected value (large) = 0.128 ($100) + 0.425 ($300) + 0.447 ($750) = $476
Value of imperfect information = $476 - $395 = $81
The estimated value for imperfect information appears reasonable, given that the value we had
previously calculated for perfect information was $130.
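The revision of the probabilities and the resulting expected values can be reproduced directly from the 100-day table; the exact fractions give $184, $372 and $476 after rounding, matching the figures above:

```python
# Days on which hot weather was forecast, split by the actual weather.
forecast_hot = {'cold': 6, 'warm': 20, 'hot': 21}
total = sum(forecast_hot.values())   # 47 forecast-hot days

# Posterior probabilities of each weather outcome given a hot forecast.
posterior = {w: n / total for w, n in forecast_hot.items()}

payoffs = {
    'small':  {'cold': 250, 'warm': 200, 'hot': 150},
    'medium': {'cold': 200, 'warm': 500, 'hot': 300},
    'large':  {'cold': 100, 'warm': 300, 'hot': 750},
}
ev = {k: sum(posterior[w] * v for w, v in row.items())
      for k, row in payoffs.items()}
```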
BAYES' RULE
Bayes' rule is perhaps the preferred method for estimating revised (posterior) probabilities when
imperfect information is available. An intuitive introduction to Bayes' rule was provided in The
Economist, 30 September 2000:
'The essence of the Bayesian approach is to provide a mathematical rule explaining how you should
change your existing beliefs in the light of new evidence. In other words, it allows scientists to combine
new data with their existing knowledge or expertise. The canonical example is to imagine that a
precocious newborn observes his first sunset, and wonders whether the sun will rise again or not. He
assigns equal prior probabilities to both possible outcomes, and represents this by placing one white and
one black marble into a bag. The following day, when the sun rises, the child places another white...'
For example, consider a medical test for a particular disease which is 90% accurate, ie if you have the
disease there is a 90% probability that the test will be positive, and if you do not have the disease there
is a 10% probability of a false positive. If we further assume that 3% of the population actually have this
disease, then the probability of having the disease, given that you have tested positive, is shown by:
P(Disease | Test = +)
= P(Test = + | Disease) x P(Disease) / [P(Test = + | Disease) x P(Disease) + P(Test = + | No disease) x P(No disease)]
= (0.9 x 0.03) / ((0.9 x 0.03) + (0.1 x 0.97))
= 0.027 / 0.124
= 0.218
This result suggests that you have a 22% probability of having the disease, given that you tested
positive. This may seem a low probability but only 3% of the population have the disease and we would
expect them to test positive. However, 10% of tests will prove positive for people who do not have the
disease. Therefore, if 100 people are tested, only around three of the approximately 13 positive tests
will come from people who actually have the disease.
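The calculation can be verified directly; the variable names below are our own:

```python
# Bayes' rule applied to the 90%-accurate test with 3% prevalence.
p_disease = 0.03             # prior: proportion of population with the disease
p_pos_given_disease = 0.90   # probability of a positive test if diseased
p_pos_given_healthy = 0.10   # false-positive rate

p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))      # 0.124
p_disease_given_positive = (p_pos_given_disease * p_disease
                            / p_positive)                   # 0.218
```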
Bayes' rule has been used in a practical context for classifying email as spam on the basis of certain key
words appearing in the text.
DATA TABLES
Data tables show the expected values resulting from combinations of uncertain variables, along with
their associated joint probabilities. These expected values and probabilities can then be used to estimate,
for example, the probability of a profit or a loss.
To illustrate, assume that a concert promoter is trying to predict the outcome of two uncertain
variables, namely:
1. The number of people attending the concert, which could be 300, 400, or 600 with estimated
probabilities of 0.4, 0.4, and 0.2 respectively.
2. From each person attending, the profit on drinks and confectionary, which could be $2, $4, or $6
with estimated probabilities of 0.3, 0.4 and 0.3 respectively.
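The joint probabilities for the nine combinations can be sketched as follows (the payoffs themselves depend on tables omitted here, so only the probabilities are shown; the two variables are assumed independent):

```python
attendance_probs = {300: 0.4, 400: 0.4, 600: 0.2}   # people attending
per_head_probs = {2: 0.3, 4: 0.4, 6: 0.3}           # profit per person ($)

# Joint probability of each attendance/profit-per-head combination.
joint = {(n, m): p_n * p_m
         for n, p_n in attendance_probs.items()
         for m, p_m in per_head_probs.items()}
total_prob = sum(joint.values())   # the nine joint probabilities sum to 1
```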
The two tables could then be used to answer questions such as: what is the probability of making a
profit of more than $3,500? This is found by summing the relevant joint probabilities: 0.08 + 0.12 +
0.06 = 0.26.
VALUE-AT-RISK (VAR)
Although financial risk management has been a concern of regulators and financial executives for a
long time, Value-at-Risk (VaR) did not emerge as a distinct concept until the late 1980s. The triggering
event was the stock market crash of 1987 which was so unlikely, given standard statistical models, that
it called the entire basis of quantitative finance into question.
VaR is a widely used measure of the risk of loss on a specific portfolio of financial assets. For a given
portfolio, probability, and time horizon, VaR is defined as a threshold value such that the probability
that the mark-to-market loss on the portfolio over the given time horizon exceeds this value (assuming
normal markets and no trading) is the given probability level. Such information can be used to answer
questions such as 'What is the maximum amount that I can expect to lose over the next month with
95%/99% probability?'
For example, large investors, interested in the risk associated with the FT100 index, may have gathered
information regarding actual returns for the past 100 trading days. VaR can then be calculated in three
different ways:
1. The historical method
This method simply ranks the actual historical returns in order from worst to best, and relies on the
assumption that history will repeat itself. With 100 observations, the fifth (first) largest loss can then be
identified as the threshold value when identifying the maximum loss with 5% (1%) probability.
2. The variance-covariance method
This relies upon the assumption that the index returns are normally distributed, and uses historical data
to estimate an expected value and a standard deviation. It is then a straightforward task to identify the
worst 5% or 1% as required, using the standard deviation and known confidence intervals of the normal
distribution - ie -1.65 and -2.33 standard deviations respectively.
3. Monte Carlo simulation
While the historical and variance-covariance methods rely primarily upon historical data, the
simulation method develops a model for future returns based on randomly generated trials.
Admittedly, historical data is utilised in identifying possible returns, but hypothetical, rather than
actual, returns provide the data for the confidence levels.
Of these three methods, the variance-covariance is probably the easiest as the historical method involves
crunching historical data and the Monte Carlo simulation is more complex to use.
VaR can also be adjusted for different time periods, since some users may be concerned about daily risk
whereas others may be more interested in weekly, monthly, or even annual risk. We can rely on the idea
that the standard deviation of returns tends to increase with the square root of time to convert from one
time period to another. For example, if we wished to convert a daily standard deviation to a monthly
equivalent then the adjustment would be: monthly standard deviation = daily standard deviation x √20
(assuming 20 trading days in a month).
For example, assume that after applying the variance-covariance method we estimate that the daily
standard deviation of the FT100 index is 2.5%, and we wish to estimate the maximum loss for 95 and
99% confidence intervals for daily, weekly, and monthly periods assuming five trading days each week
and four trading weeks each month:
95% confidence
Daily = -1.65 x 2.5% = -4.125%
Weekly = -1.65 x 2.5% x √5 = -9.22%
Monthly = -1.65 x 2.5% x √20 = -18.45%
99% confidence
Daily = -2.33 x 2.5% = -5.825%
Weekly = -2.33 x 2.5% x √5 = -13.03%
Monthly = -2.33 x 2.5% x √20 = -26.05%
Therefore we could say with 95% confidence that we would not lose more than 9.22% per week, or with
99% confidence that we would not lose more than 26.05% per month.
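These square-root-of-time adjustments are easily reproduced; the cut-offs of 1.65 and 2.33 are those used in the text:

```python
import math

daily_sd = 0.025                 # 2.5% daily standard deviation
z = {0.95: 1.65, 0.99: 2.33}     # normal-distribution cut-offs from the text

def var_pct(confidence, days):
    # Scale daily volatility by the square root of the holding period.
    return -z[confidence] * daily_sd * math.sqrt(days)

weekly_95 = var_pct(0.95, 5)     # five trading days per week
monthly_99 = var_pct(0.99, 20)   # twenty trading days per month
```

weekly_95 and monthly_99 come out at roughly -9.22% and -26.05%, agreeing with the figures above.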
On a cautionary note, New York Times reporter Joe Nocera published an extensive piece entitled Risk
Mismanagement on 4 January 2009, discussing the role VaR played in the ongoing financial crisis.
CONCLUSION
These two articles have provided an introduction to the topic of risk present in decision making, and the
available techniques used to attempt to make appropriate adjustments to the information provided.
Adjustments and allowances for risk also appear elsewhere in the ACCA syllabus, such as sensitivity
analysis, and risk-adjusted discount rates in investment appraisal decisions where risk is probably at its
most obvious. Moreover in the current economic climate, discussion of risk management, stress testing
and so on is an everyday occurrence.
References
Jorion, P (2006), Value at Risk: The New Benchmark for Managing Financial Risk, 3rd edition,
McGraw Hill
When new working practices or products are introduced, the theory is that as a workforce gains
experience in a task, it will come to perform that task more quickly.
This means that labour costs and variable overheads (if labour-hour driven) will be lower in later periods
of production than when the new product or production technique was introduced.
Assumptions
The theory of learning curves will only hold if the following conditions apply:
As cumulative output doubles, the cumulative average time per unit falls to a given percentage of the
previous cumulative average time per unit.
Cumulative average time is the average time for all units produced so far.
EXAMPLE
Required
If the budgeted time for the first batch is 100 hours, calculate the time to produce eight
batches in total.
Y = ax^b
Where:
Y = cumulative average time per unit (or batch)
a = time taken for the first unit (or batch)
x = cumulative number of units (or batches)
b = log (learning rate) ÷ log 2
Eventually, the time per unit will reach a steady state where no further improvement can be made.
Required
Using the formula Y = aX^b, calculate the time to produce:
(a) the first 10 batches in total
(b) the 10th batch only
ANSWER
a)
Steps
- First calculate 'Cumulative Average time per Unit' (Y)
- Calculate Total time (hours) by multiplying the total number of units by Y
Y = 100 x 10^(-0.415)
= 38.459 hrs
Total time to produce 10 batches = 10 x 38.459 = 384.59 hrs
b)
Steps
- Calculate the total time (hours) for the number of batches required (i.e. 10 batches), as calculated above
- Calculate the total time (hours) for one batch fewer (i.e. 10 – 1 = 9 batches)
- The difference between the two is the time taken to produce the specific unit (i.e. the 10th batch)
Working
Y = 100 x 9^(-0.415)
= 40.178 hrs
Total time to produce 9 batches = 9 x 40.178 = 361.60 hrs
Time for the 10th batch alone = 384.59 – 361.60 = 22.99 hrs
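The working above can be checked with a short Python sketch. The learning rate is assumed to be 75%, since log(0.75) ÷ log 2 ≈ –0.415; all names are illustrative:

```python
from math import log

def cumulative_avg_time(a, x, rate):
    # Cumulative average time per batch: Y = a * x^b, b = log(rate) / log(2)
    b = log(rate) / log(2)
    return a * x ** b

y10 = cumulative_avg_time(100, 10, 0.75)   # ~38.46 hrs
y9 = cumulative_avg_time(100, 9, 0.75)     # ~40.18 hrs
tenth_batch = 10 * y10 - 9 * y9            # ~22.96 hrs for the 10th batch alone
```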
ILLUSTRATION 3
XYZ Ltd has just produced the first full batch of a new product taking 200 hours.
XYZ has a learning curve effect of 85%. XYZ expects that after the 30th batch has been produced, the
learning effect will cease. From the 31st batch onwards, each batch will take the same time as the 30th
batch.
Required
(a) How long will it take to produce the next 15 batches?
(b) What time per batch should be budgeted from 31st batch?
(c) How long will it take to produce the next 45 batches?
a)
Y = 200 x 15^(-0.2345)
= 105.99 hrs
b)
Steps (the 31st batch will take the same time as the 30th batch, hence we calculate the time of the 30th
batch)
- Calculate the total time (hours) for 30 batches, as calculated above
- Calculate the total time (hours) for one batch fewer (i.e. 30 – 1 = 29 batches)
- The difference between the two is the time taken to produce the specific batch (i.e. the 30th batch)
Note: Total time can be calculated in the same way as calculated earlier
c)
Steps
To calculate the total time of 45 batches we need to distribute these into two
- S1 Calculate the total time (hours) for 30 batches, as calculated above.
- S2 Calculate the time (hours) to produce the 30th batch
- S3 Calculate the time (hours) to produce the next 15 batches using the time of the 30th batch
- Sum S1 and S3 to arrive at the total time for 45 batches
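Assuming the same 85% curve as the illustration (b ≈ –0.2345), the steps might be sketched as follows; names are illustrative:

```python
from math import log

def cum_avg(a, x, rate):
    # Cumulative average time per batch: Y = a * x^b
    return a * x ** (log(rate) / log(2))

a, rate = 200.0, 0.85                        # first batch 200 hrs, 85% curve
t30 = 30 * cum_avg(a, 30, rate)              # S1: total time for 30 batches
batch30 = t30 - 29 * cum_avg(a, 29, rate)    # S2: time for the 30th batch alone
next15 = 15 * batch30                        # S3: batches 31-45 at steady state
total45 = t30 + next15                       # total time for 45 batches
```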
This chapter considers the information needs of an organisation, particularly in respect of control
systems to ensure that the organisation maintains performance.
Cannot be calculated
Cannot be relied on when making decisions
In order to be useful to management, information should possess the following attributes: [ACCURATE]
Accurate: Sufficiently accurate for its purpose. Note that at higher managerial levels information
does not normally need to be as accurate as at lower levels
Complete: Obviously, incomplete information is likely to mislead
Cost-beneficial: Benefits should exceed costs
User-targeted: It should provide the information needed by the user to make the
decision/perform the job
Relevant: Irrelevant information distracts and wastes people's time.
Authoritative: Well, you know how unreliable some web-site data is: sometimes deliberately
misleading, sometimes sloppy, sometimes out-of-date.
Timely: Information should be received quickly enough to enable better decisions.
There is no need for all information to be 'instantly' available and speed often
has a cost.
Easy to use: Well-set out and annotated.
Note that introducing new management techniques eg Kaizen costing or new management accounting
techniques eg ABC will require a change in the information needed.
In many situations these techniques involve obtaining more detailed information. For example;
Identify activities that do not add value (eg dealing with customer returns)
Identify causes
Improve / redesign processes to reduce cost of the activity
SOURCES OF INFORMATION
Internal sources
Formal
Informal
Cross-departmental networking
The “Grapevine”
Social gatherings
External sources
Formal
Informal
Key points
These will have knock-on effects such as reduced customer satisfaction (delays in production, delivery etc)
This can be solved using a unified IT system and database (a data warehouse). "Big data" refers to the
vast quantity of information available within an organisation that can be mined to help with
decision-making
Techniques such as JIT go further than this in that IT systems (or at least information sharing) need to
be co-ordinated with customers and suppliers as well.
An Enterprise Resource Planning System allows this cross department / company communication
Vast amounts of quantitative data are collected by EPOS systems through bar codes and loyalty cards.
Data such as customer complaints and sales visit reports are also vital evidence of not only how much is
spent but also why. This can give accurate profiles of consumers.
This data needs to be evaluated for accuracy and relevance so should pass through an analytical system.
Qualitative information is information that cannot normally be expressed in numerical terms. It is often
in the form of opinions, which show the effects of decisions on people and the community within which
the entity operates.
Use of standard templates and definitions for all information that has to be collected.
Reports should be examined periodically to ensure that they are actually being used.
The cost of producing the reports should be compared to the benefit that they give. Regular backups to
safeguard data.
Ensuring PCs are not in publicly accessible areas and that they have password controls.
It is very common in the examination to be given information about a company and to be asked to
comment on the performance. It is clearly important in practice to have measures in order to determine
whether or not the company is performing well.
It is important to measure both financial and non-financial performance, but in this chapter we will
consider only financial performance. You will be given extracts from the company's accounts for several
years and be expected to analyse and interpret this information.
Approach
Although you must be aware of several key measures of financial performance, it is important that you do
not fall into the trap of simply calculating every ratio imaginable for every year available. What the
examiner is after is much more of an over-view and being able to determine the key measures and to
comment adequately.
For example, if you are looking at the information from the shareholders' perspective, then growth (or
otherwise) in the share price will be of great interest.
However, if you are looking at how well the managers are performing, the growth (or otherwise) in the
profit (to the extent to which they control it) is perhaps of more importance.
Growth:
Always make some comment as to the level of growth. The amount of detail required depends on the
information available and the number of marks allocated, but growth in turnover, in profit, and in share
price are all potentially relevant.
Look at the overall level of growth and look for any trends; do not waste time doing detailed year-by-year
analysis.
Subject again to exactly what you are being asked to comment on, the following areas are likely to be
worthy of consideration:
Most measures mean little on their own, and are only really useful when compared with something.
Depending on the information given in the question, any comparison is likely to be with one of the
following:
Profitability
Liquidity
Current ratio
Acid test ratio
Inventory days
Receivables days
Payables days
Gearing
Dividend cover
Interest cover
Earnings per share
Price earnings ratio
RATIOS
It gives a measure of the underlying performance of the business before finance. It gives an indication of
the health of the business in generating a return on its investments.
Gearing has no impact on the return and hence this is the most important measure of profitability to
calculate. The ratio is calculated before tax allowing return to be compared between companies under
differing tax regimes.
Note: Capital employed represents the total funds invested in the business; it includes equity and long-
term debt.
The portion of a company‟s profit allocated to each outstanding share of common stock. Earnings per
share serves as an indicator of a company‟s profitability.
The P/E ratio is a measure of future earnings growth; it compares the market value to the current
earnings.
The higher the P/E ratio, the greater the market expectation of future earnings growth. This may also be described
as market potential.
The amount of profits attributable to shareholders that are actually paid out in the form of dividend.
RISK
Operational gearing
Operational gearing looks at the risk associated with the level of fixed costs within a business. The higher
the fixed cost the more volatile the profit. The level of fixed cost is normally determined by the type of
industry and cannot be changed. It is mainly calculated as
It is important to note that the level of operating risk will impact on the level of financial risk that a
company is willing to take on.
Capital gearing
Capital gearing may be calculated in a number of different ways and it is likely that the examiner will
specify the method required.
Interest cover
A profit and loss account measure that considers the ability of the business to cover the cost of debt as it
falls due.
All permanent capital charging a fixed interest may be considered debt. This includes:
Equity
All ordinary share capital and share premium together with reserves
WORKING CAPITAL
Calculation of days
LIQUIDITY RATIOS
A simple measure of how far the total current assets are financed by current liabilities. If, for example, the
measure is 2:1, current assets are twice current liabilities, so only half of the current assets are funded by the current liabilities.
A measure of how well current liabilities are covered by liquid assets. A measure of 1:1 means that we are
able to meet our existing liabilities if they all fall due at once.
These liquidity ratios are a guide to the risk of cash flow problems and insolvency. If a company suddenly
finds that it is unable to renew its short term liabilities (for instance if the bank suspends its overdraft
facilities) there will be a danger of insolvency unless the company is able to turn enough of its current
assets into cash quickly.
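As a simple sketch of the two liquidity ratios (the figures are hypothetical, chosen only to illustrate the calculation):

```python
def current_ratio(current_assets, current_liabilities):
    # Current assets relative to current liabilities
    return current_assets / current_liabilities

def acid_test(current_assets, inventory, current_liabilities):
    # The acid test (quick) ratio strips out inventory,
    # the least liquid of the current assets.
    return (current_assets - inventory) / current_liabilities

ca, inv, cl = 500.0, 200.0, 250.0   # hypothetical $000 figures
cr = current_ratio(ca, cl)          # 2.0, i.e. a 2:1 current ratio
qr = acid_test(ca, inv, cl)         # 1.2, i.e. a 1.2:1 acid test ratio
```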
Overtrading is trading by an organisation beyond the resources provided by its existing capital.
Overtrading tends to lead to liquidity problems as too much stock is bought on credit and too much credit
is extended to its customers, so that ultimately there is not sufficient cash available to pay the debts as
they arise.
Indicators:
Remedies:
EBITDA is a financial performance measure that has appeared relatively recently. It stands for 'earnings
before interest, taxes, depreciation and amortisation' and is particularly popular with high-tech startup
businesses.
Consideration of earnings before interest and tax has long been common – before interest in order to
measure the overall profitability before any distributions to providers of capital, and before tax on the
basis that this is not under the direct control of management.
The reason that EBITDA additionally considers the profit before depreciation and amortisation is in order
to approximate to cash flow, on the basis that depreciation and amortisation are non-cash expenses.
A major criticism, however, of EBITDA is that it fails to consider the amounts required for fixed asset
replacement.
Advantages
1. It is a measure of underlying performance.
2. Tax and interest are externally generated and therefore not relevant to underlying performance.
3. It is easy to calculate.
4. It is easy to understand.
Disadvantages
1. It ignores changes in working capital and the impact on cash flows.
2. It fails to consider the amount of fixed asset replacement needed.
3. It can easily be manipulated by aggressive accounting policies.
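The calculation itself is straightforward; a minimal sketch with hypothetical figures:

```python
def ebitda(profit_before_interest_and_tax, depreciation, amortisation):
    # Add the main non-cash expenses back to PBIT to approximate cash flow.
    return profit_before_interest_and_tax + depreciation + amortisation

# Hypothetical figures in $000
result = ebitda(840, 120, 40)   # 1,000
```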
Solution
Begin with a review of the summary information - notable points
Growth in turnover
Growth in PBIT
Growth in PAT
Growth in total assets, debtors approx. in line with turnover, creditors at a higher rate.
Reduction of gearing (result of rights issue?) and reduced interest charge
Dividend growth
P/E ratio has overtaken industry average.
Profitability Year 1 Year 2 Year 3 Year 4
ROCE 26% 27% 20% 22%
Profit Margin 19.9% 19.8% 17.2% 19.2%
Asset Turnover 1.3 1.4 1.2 1.2
Gearing
Gearing (book values) 50% 34.6% 6% 3.9%
Interest cover (times) 7.25 9.5 48.5 75.3
Working capital
Debtor days 73 76 71 70
Creditor days 68 76 81 83
Investor ratios
Share Price* $ 9.63 11.40 9.66 11.95
Market Capitalisation $m 86.67 102.60 115.92 143.40
In this chapter we will consider the situation where an organisation is divisionalised (or decentralised) and
the importance of proper performance measurement in this situation.
We will also consider the possible problems that can result from the use of certain standard performance
measures.
Divisionalisation is a term for the division of an organisation into divisions. Each divisional manager is
responsible for the performance of the division. A division may be a cost centre (responsible for its costs
only), a profit centre (responsible for revenues and profits) or an investment centre or Strategic Business
Unit (responsible for costs, revenues and assets).
Advantages:
a) Divisionalisation can improve the quality of decisions made because divisional managers (those
taking the decisions) know local conditions and are able to make more informed judgements.
Moreover, with the personal incentive to improve the division's performance, they ought to take
decisions in the division's best interests.
b) Decisions should be taken more quickly because information does not have to pass along the
chain of command to and from top management. Decisions can be made on the spot by those who
are familiar with the product lines and production processes and who are able to react to changes
in local conditions quickly and efficiently.
c) The authority to act to improve performance should motivate divisional managers.
d) Divisional organisation frees top management from detailed involvement in day-to-day
operations and allows them to devote more time to strategic planning.
e) Divisions provide valuable training grounds for future members of top management by giving
them experience of managerial skills in a less complex environment than that faced by top
management.
f) In a large business organisation, the central head office will not have the management resources
or skills to direct operations closely enough itself. Some authority must be delegated to local
operational managers.
Disadvantages
a) Divisional managers may make dysfunctional decisions (decisions that are not in the best
interests of the organisation).
b) There is a need for a performance appraisal system to assess the performance of individual
managers.
c) Top management may lose control by delegating decision making to divisional managers, since
they are not aware of what is going on in the whole organisation.
d) Lack of economies of scale. For example, efficient cash management can be achieved much more
effectively if all cash balances are centrally controlled.
If managers are to be given autonomy in their decision making, it becomes impossible for senior
management to 'watch over' them on a day-to-day basis – this would remove the whole benefit of
divisionalisation!
The way to control their performance is to establish in advance a set of measures that will be used to
evaluate their performance at (normally) the end of each year. These measures provide a way of
determining whether or not they are managing their division well, and also communicate to the managers
how they are expected to perform.
For example, suppose a manager was simply given one performance measure – to increase profits. This
may seem sensible, in that in any normal situation the company will want the division to become more
profitable. However, if the manager expects to be rewarded on the basis of how well he achieves the
measure, all his actions will be focussed on increasing profit to the exclusion of everything else. This
would not however be beneficial to the company if the manager were to achieve it by taking actions that
reduced the quality of the output from the division. (In the long-term it may not be beneficial for the
manager either, but managers tend to focus more on the short-term achievement of their performance
measures.)
It is therefore necessary to have a series of performance measures for each division manager.
Maybe one measure will relate to profitability, but at the same time have another measure relating to
quality. The manager will be assessed on the basis of how well he has achieved all of his measures.
We wish the performance measures to be goal congruent, that is to encourage the manager to make
decisions that are not only good for him but end up being good for the company as a whole also.
In this chapter we will consider only financial performance. However, non-financial performance is just as
important and we will consider that in the next chapter.
Controllable profits
However, if the measure is to be used to assess the performance of the divisional manager it is important
that any costs outside his control should be excluded.
For example, it might be decided that pay increases in all divisions should be fixed centrally by human
resources staff at Head Office. In this case it would be unfair to penalise (or reward) the manager for any
effect on the division's profits in respect of this cost. For these purposes therefore a profit and loss account
would be prepared ignoring wages and it would be on the resulting controllable profit that the manager
would be assessed.
As stated earlier, divisionalisation implies that the divisional manager has some degree of autonomy.
In the case of an investment centre, the manager is given decision-making authority not only over costs
and revenues, but additionally over capital investment decisions.
In this situation it is important that any measure of profitability is related to the level of capital
expenditure. Simply to assess on the absolute level of profits would be dangerous – the manager might
increase profits by $10,000 and be rewarded for it, but this would hardly be beneficial to the company if it
had required capital investment of $1,000,000 to achieve!!
The most common way of relating profitability to capital investment is to use Return on Investment as a
measure. However, as we will see, this can lead to a loss of goal congruence and a measure known as
Residual Income is theoretically better.
It is equivalent to Return on Capital Employed and this is one of the reasons that it is very popular in
practice as a divisional performance measure.
Instead of using a percentage measure, as with ROI, the Residual Income approach assesses the manager
on absolute profit. However, in order to take account of the capital investment, notional (or imputed, or
'pretend') interest is deducted from the Income Statement profit figure. The balance remaining is known
as the Residual Income.
(Note that the interest charge is only notional, and is only made for performance measurement purposes.)
Question 1
There are two divisions with the following performance for the current year
Division X Y
Investment ($m) 10 30
Controllable Profit 2 3
Required rate of return 15%
Required:
Calculate the performance of each division using:
a) ROI
b) RI
Which division has superior performance?
Solution
(a)
ROI
X: $2m / $10m = 20%
Y: $3m / $30m = 10%
(b)
RI = controllable profit – notional interest on the investment
X: $2m – $1.5m = $0.5m
Y: $3m – $4.5m = –$1.5m
Hence division X has performed better currently on both yardsticks.
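Question 1 can be reproduced with a short Python sketch (names are illustrative):

```python
def roi(controllable_profit, investment):
    # Return on Investment as a decimal fraction
    return controllable_profit / investment

def residual_income(controllable_profit, investment, required_rate):
    # RI = controllable profit less notional interest on the investment
    return controllable_profit - required_rate * investment

# Question 1 figures ($m), required rate of return 15%
roi_x = roi(2, 10)                      # 0.20 -> 20%
roi_y = roi(3, 30)                      # 0.10 -> 10%
ri_x = residual_income(2, 10, 0.15)     # 0.5  -> $0.5m
ri_y = residual_income(3, 30, 0.15)     # -1.5 -> -$1.5m
```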
Solution
X ROI = $80,000/$500,000 = 16% RI = $80,000 - $75,000 = $5,000
Y ROI = $120,000/$1,000,000 = 12% RI = $120,000 - $150,000 = -$30,000
a) X would accept the project on the basis of both measures. Y would accept the project on the basis
of ROI, as it is higher than the current return, but overall would resist as it still does not meet the
requirements of head office.
b) Head office should go for both projects. Clearly X is acceptable, but Y can also be accepted on
the basis of ROI as it will improve the results, and may motivate the managers of Y in the long
run.
ROI vs RI
Note that both RI and ROI will favour divisions with older assets because those divisions will:
(1) Probably have bought the assets more cheaply than new divisions which buy at inflated prices.
(2) Have assets that are more heavily depreciated, so that the capital employed figure is lower in the
division with older assets – and this affects both the denominator in ROI and the notional interest
charge in RI.
(3) Both methods can also suffer distortions because of assets leased on operating leases and also if
head office accounts for some 'divisional' assets (for example, HO holding all receivables).
In practice, ROI is more popular than RI, despite the fact that RI is technically superior in terms of
encouraging managers to make the correct investment decisions.
Advantages
1. It is easy to understand and easy to calculate.
2. ROCE is still the most common way in which business unit performance is measured and evaluated, and is certainly the most visible to shareholders.
3. Managers may be happy expressing project attractiveness in the same terms in which their performance will be reported to shareholders, and according to which they will be evaluated and rewarded.
4. The continuing use of the ROCE method can be explained largely by its utilisation of balance sheet and income statement magnitudes familiar to managers, namely profit and capital employed.
5. It is a relative measure.
Disadvantages
1. It fails to take account of the project life or the timing of cash flows and time value of money within that life.
2. When assets are valued at net book value, reported performance improves with time as the assets get old. In this case there is a disincentive to invest in new assets.
3. It uses accounting profit and capital employed, hence is subject to manipulation due to various accounting conventions.
4. Performance measurement based on ROCE encourages short-termism in decision making. Failure to invest in new assets could be harmful to the long term interest of the division and the organisation as a whole.
5. It is difficult to assess the significance of ROI. There is no definite investment signal. The decision to invest or not remains subjective in view of the lack of an objectively set target ROI.
6. ROI is sometimes confused with internal rate of return (IRR).
Residual income
Advantages
Residual income overcomes many of the problems of ROI:
- It encourages investment centre managers to undertake new investments if they add to residual income.
- As a consequence, it is more consistent with the objective of maximising the total profitability of the company.
- It is possible to use different rates of interest for different types of asset.
- It makes managers aware of the cost of financing their division.
Disadvantages
- Like ROI, residual income is also based on accounting profit and capital employed, which can be manipulated.
- It encourages investment centre managers to think in the short term about how to increase next year's residual income for the centre, hence does not encourage long-term decision making.
- Residual income is not as widely used as ROI, despite overcoming some of the problems of ROI.
Economic value added (EVA) is a performance metric that is very similar in approach to Residual Income,
and is defined as being: EVA = Net operating profit after tax – WACC x book value of capital employed
EVA is a trade-marked technique, developed by consultants called Stern Stewart and Co.
The principle behind it is that a business is only really creating value if its profit is in excess of the
required minimum rate of return that shareholders and debt holders could get by investing in other
securities of comparable risk. The capital employed is the opening capital employed, adjusted for the
items set out below.
EVA allows all management decisions to be modelled, monitored, communicated, and compensated in a
single and consistent way – always in terms of the value added to shareholder investment.
However, EVA makes certain adjustments, because certain types of expenditure which appear in the
statement of profit or loss under IASs and IFRSs are NOT regarded as expenses when using EVA (and
cash accounting is regarded as more reliable than accruals accounting).
Expenditure on building for the future (e.g. research expenditure, marketing expenditure and
staff training):
Non-cash expenses
Provisions
Goodwill written off
Depreciation: add back book depreciation and deduct economic depreciation. If economic
depreciation is not given, assume it is the same as book depreciation and that there is no net
adjustment.
Interest on debt capital Add back to net profit after adjusting for any tax relief. Treat the debt as
part of capital employed
EVA is based on economic profit which is not the same as accounting profit:
o Value-building expenditure (e.g. R&D, advertising) is added back to profit
o Non-cash items are eliminated
o One-off, unusual items are excluded
Charge for accounting depreciation is added back to profit (under EVA) and a charge for
economic depreciation made instead
Capital charge uses different bases for net assets. EVA usually uses replacement cost of assets
Question 3
Division B has a reported operating profit of $8.4 million, which includes a charge of $2 million for the
full cost of developing and launching a new product that is expected to generate profits for 4 years. The
company has an after-tax weighted average cost of capital of 10%. The operating book value of the
division's assets at the beginning of the year is $60 million, and the replacement cost has been estimated
at $75 million. Assume the tax charge is $0.
Required:
Calculate division B’s EVA
ANSWER
NOPAT
$m
Controllable operating profit 8.4
Add back items that add value: development costs 2
Deduct amortization of development costs ($2m ÷ 4 years) (0.5)
____
NOPAT 9.9
____
EVA
$m
NOPAT 9.9
Less: adjusted value of capital employed × WACC ($75m × 10%) (7.5)
____
EVA 2.4
____
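The EVA working above can be sketched in Python (names are illustrative; tax is assumed nil, as in the question):

```python
def eva(operating_profit, value_building_spend, amortisation_of_spend,
        capital_employed, wacc):
    # NOPAT: add back value-building expenditure, charge its amortisation
    nopat = operating_profit + value_building_spend - amortisation_of_spend
    # EVA = NOPAT less a capital charge at the WACC
    return nopat - wacc * capital_employed

# Question 3 figures ($m): the $2m development cost is added back, one year's
# amortisation over its 4-year life is charged, and replacement cost ($75m)
# is used as capital employed.
result = eva(8.4, 2.0, 2.0 / 4, 75.0, 0.10)   # 2.4
```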
In the previous two chapters we were looking at measures of financial performance. However, as we
stated, it is important to have a range of performance measures considering non-financial as well as
financial matters.
In general, financial performance is easy to measure (earnings per share, profit, dividends, EVA etc) but
these measurements do not tell managers why financial performance has improved. For example, sales
might have increased either because prices have been lowered or the company has spent money
developing a new, innovative product. In this chapter we will consider the various areas where
performance measures are likely to be needed.
Note that although we might all like to think that, for example, customer service is a foundation for
company success, it is not necessarily so. Some low-cost airlines have been very successful despite giving
poor customer service. Good customer service, and the other non-financial qualities which are mentioned
below, are not ends in themselves. They become important in profit-seeking organisations only if they
enable financial success.
Kaplan and Norton devised a range of measures that assist managers to focus on what affects the
performance of a division. The four areas to look at are:
Financial perspective – how does the division create value for the organisation?
Customer perspective – what do new and existing customers value the division for?
Innovation and learning – how can the division continue to deliver value?
Internal business – what processes must the division excel at to meet the objectives of the
organisation and the customers?
The idea behind this model is that the entire organisation needs to work together in order to achieve
success. The approach is to try to link the overall strategy of the company with what is happening on a
day-to-day level.
At the top is the vision – how it will achieve long-term success and competitive advantage. This will also
usually include generating returns for shareholders.
The second level is the business unit, which includes setting CSFs. If the market is satisfied and the
company performs well then it will be able to generate returns for shareholders.
At the bottom level of the pyramid is what Lynch and Cross label as measuring in the trenches. Here the
objective is to increase quality and delivery and decrease cycle time and waste. At this level a number of
non-financial indicators will be used in order to measure the operations.
The four levels of the pyramid are seen to fit into each other in the achievement of objectives. For
example, improved quality will assist in the achievement of customer satisfaction and hence growth and
market position.
The left hand side contains the measures which are predominantly externally focused and non-financial.
The right hand side contains measures that are internally focused, aimed at improving efficiency, and mostly financial in nature.
Note that the dimensions of the performance pyramid (Lynch and Cross) are designed for manufacturing
companies.
BCG MATRIX:
The BCG matrix helps a company think about the portfolio of products and services which it offers and
make decisions about which it should keep, which it should let go and which it should invest further in.
Dogs:
These are products which have low market shares and low market growth rates. The option for many
companies is to phase these products out; however, some organisations do go for the strategy of re-
inventing and injecting new life into the product.
Cash Cow:
Cash Cows are products at the mature stage of the lifecycle; they generate high amounts of cash for the
company, but growth rate is slowing. There are chances that the product may slip into decline;
appropriate marketing mix strategies should be employed to try to prevent this from happening.
Stars:
Stars are products with high market shares that operate in growing markets. The product at this stage
should be generating positive returns for the company.
Fitzgerald and Moon focussed on performance measurement in service businesses. They said that
organisations need:
Dimensions: the aspects of performance to be measured, such as competitiveness, financial performance,
quality, flexibility, resource utilisation and innovation.
Standards: KPIs need to be capable of ownership (ie the person responsible feels able to influence the
measure), should be achievable and should be fair.
Rewards: should be clear, provide motivation and be controllable, ie managers can influence their
rewards by their behaviour.
PERFORMANCE PRISM
The prism takes an alternative look at performance management, and sets out explicitly to identify how
managers can use measurement data to improve business performance.
It was developed by the Centre for Business Performance at the Cranfield School of Management. It aims
to take into account all of the stakeholders' interests.
● Stakeholder Satisfaction
● Strategies
● Processes
● Capabilities
● Stakeholder Contribution
Non-profit seeking organisations are those whose prime goal cannot be assessed by economic means.
Examples would include charities and state bodies such as the police and the health service.
For this sort of organisation, it is not possible or desirable to use standard profit measures. Instead (in for
example the case of the health service) the objective is to ensure that the best service is provided at the
best cost.
In this chapter we will consider the problems of performance measures and suggestions as to how to
approach it.
Multiple objectives: Even if all objectives can be clearly identified, it may be impossible to
identify an overriding objective or to choose between competing objectives
The difficulty of measuring outputs: An objective of the health service is obviously to make
ill people better. However, how can we in practice measure how much better they are?
Financial constraints: Public sector organisations have limited control over the level of
funding that they receive and the objectives that they can achieve.
Political, social and legal considerations: The public have higher expectations from public
sector organisations than from commercial ones, and such organisations are subject to greater
scrutiny and more onerous legal requirements.
Little market competition and no profit motive.
As the services of such organisations are very often not expressed in monetary terms, it is important to ensure that value for money is achieved. In order to do this, the principle of the 3 Es can be applied.
3 Es
Economy – The optimisation of the resources which the organisation has; ensuring the
appropriate quality of input resources are obtained at the lowest cost
Efficiency – The optimisation of the process by which inputs are turned into outputs
Effectiveness – How the outputs of the organisation meet its goals
Note that many NFPs are providing a service, so elements of Fitzgerald and Moon's framework may be appropriate.
A single table can be produced showing overall performance, or one which shows particular KPIs.
The manager of each division knows that if the division's performance is good they will receive a bonus and their division might be expanded; alternatively, if the division's performance is poor they may be dismissed and the division might be closed down.
Two common methods of evaluating performance seen earlier are ROI and RI.
Both of these measures include a profit figure, so the manager of each division will try to maximise revenue and/or minimise costs.
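As a reminder of the two measures, a minimal sketch (the division's figures are illustrative, not from these notes):

```python
def roi(controllable_profit, capital_employed):
    """Return on investment: profit as a percentage of divisional capital employed."""
    return controllable_profit / capital_employed * 100

def residual_income(controllable_profit, capital_employed, notional_interest_rate):
    """Profit remaining after a notional interest charge on capital employed."""
    return controllable_profit - capital_employed * notional_interest_rate

# Illustrative division: profit $80,000 on capital of $500,000, 10% cost of capital
print(roi(80_000, 500_000))                    # 16.0 (%)
print(residual_income(80_000, 500_000, 0.10))  # 30000.0
```

Both measures move with the profit figure, which is why the transfer price matters so much to divisional managers.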
The idea behind transfer pricing is to promote goal congruence. In other words, managers do what is
good for the company because it is also good for their own division.
In order to promote goal congruence, we must ensure that the transfer price encourages the divisions to
trade with each other only when it is appropriate for the organization as a whole.
The aim is to set a price that will give a fair measure of performance, and to do this we follow a simple rule:
Maximum price (from the buying division's point of view): the market price
Minimum price (from the selling division's point of view): the relevant cost
ILLUSTRATION 1
Maple Ltd has been offered supplies of special ingredient Z at a transfer price of $15 per kg by Hexton
Ltd, which is part of the same group of companies.
Hexton Ltd processes and sells special ingredient Z to customers external to the group at $15 per kg.
Hexton Ltd bases its transfer price on total cost plus a 25% profit mark-up. Total cost has been
estimated as 75% variable and 25% fixed.
Required:
Discuss the transfer prices at which Hexton Ltd should offer to transfer special
ingredient Z to Maple Ltd in order that group profit maximizing decisions may be taken
on financial grounds in each of the following situations:
(i) Hexton Ltd has an external market for all its production of special ingredient Z at a
selling price of $15 per kg. Internal transfers to Maple Ltd would enable $1.50 per kg of
variable packing cost to be avoided.
(ii) Conditions are as per (i) but Hexton Ltd has production capacity for 3,000kg of
special ingredient Z for which no external market is available.
(iii) Conditions are as per (ii) but Hexton Ltd has an alternative use for some of its spare
production capacity. This alternative use is equivalent to 2,000kg of special ingredient Z
and would earn a contribution of $6,000.
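One way to work through the illustration's numbers (a sketch of the standard relevant-cost reasoning only; the total cost is backed out from the $15 cost-plus price, and the workings should be checked against the model answer):

```python
market_price = 15.00
total_cost = market_price / 1.25      # price is total cost plus 25%, so total cost = $12
variable_cost = 0.75 * total_cost     # 75% of total cost is variable = $9
packing_saved = 1.50                  # variable packing avoided on internal transfers

# (i) Full external market: the opportunity cost is the lost external sale,
#     so transfer at market price less the avoidable packing cost
tp_i = market_price - packing_saved             # 13.50

# (ii) 3,000 kg of spare capacity with no external market:
#      the relevant cost is just the avoidable variable cost
tp_ii = variable_cost - packing_saved           # 7.50

# (iii) 2,000 kg of the spare capacity has an alternative use earning
#       $6,000 contribution: add that opportunity cost per kg
opportunity_cost_per_kg = 6_000 / 2_000         # 3.00
tp_iii_with_alt_use = tp_ii + opportunity_cost_per_kg   # 10.50 for 2,000 kg
tp_iii_remainder = tp_ii                                # 7.50 for the other 1,000 kg
```

In each case the minimum transfer price is the relevant cost of the transfer: avoidable variable cost plus any contribution forgone.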
Market price
If there is one single price that the external customer is willing to pay and that the external supplier is
willing to charge then it makes sense to use this as the transfer price.
In some cases, there will be costs that can be avoided if the goods are transferred internally (e.g. packing costs). It makes sense when calculating the transfer price to deduct these from the market price used above.
Division M will have fixed costs in addition to variable ones. If the marginal cost only is used there is a
danger that M will make a loss. The transfer price might be set as the marginal cost plus an amount
towards these (the same as using a fixed overhead absorption rate). Alternatively a lump sum could be
paid annually to help pay the fixed costs.
This is where different prices are recorded in the books of M and A. For example A would record a cost of
$3 (the marginal cost) in its books but M would record a sale of $6 in its books. This would obviously
mean an adjustment would need to be carried out at the end of the year when the consolidated accounts
were being prepared.
There are a number of additional issues when the transfer is between divisions in
different countries:
Technical article
Transfer prices are almost inevitably needed whenever a business is divided into more
than one department or division
In accounting, many amounts can be legitimately calculated in a number of different ways and can be
correctly represented by a number of different values. For example, both marginal and total absorption
cost can simultaneously give the correct cost of production, but which version of cost you should use
depends on what you are trying to do.
Similarly, the basis on which fixed overheads are apportioned and absorbed into production can
radically change perceived profitability. The danger is that decisions are often based on accounting
figures, and if the figures themselves are somewhat arbitrary, so too will be the decisions based on
them. You should, therefore, always be careful when using accounting information, not just because
information could have been deliberately manipulated and presented in a way which misleads, but also
because the information depends on the assumptions and the methodology used to create it. Transfer
pricing provides excellent examples of the coexistence of alternative legitimate views, and illustrates
how the use of inappropriate figures can create misconceptions and can lead to wrong decisions.
Transfer prices are almost inevitably needed whenever a business is divided into more than one
department or division. Usually, goods or services will flow between the divisions and each will report
its performance separately. The accounting system will usually record goods or services leaving one
department and entering the next, and some monetary value must be used to record this. That monetary
value is the transfer price. The transfer price negotiated between the divisions, or imposed by head
office, can have a profound, but perhaps arbitrary, effect on the reported performance and subsequent
decisions made.
Take the following scenario shown in Table 1, in which Division A makes components for a cost of $30,
and these are transferred to Division B for $50. Division B buys the components in at $50, incurs own
costs of $20, and then sells to outside customers for $90.
As things stand, each division makes a profit of $20/unit, and it should be easy to see that the group will
make a profit of $40/unit. You can calculate this either by simply adding the two divisional profits
together ($20 + $20 = $40) or subtracting both own costs from final revenue ($90 – $30 – $20 = $40).
You will appreciate that for every $1 increase in the transfer price, Division A will make $1 more profit,
and Division B will make $1 less. Mathematically, the group will make the same profit, but these
changing profits can result in each division making different decisions, and as a result of those
decisions, group profits might be affected.
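The effect described above can be sketched as follows, using the Table 1 figures (own costs of $30 and $20, final selling price $90):

```python
def divisional_profits(transfer_price, cost_a=30, cost_b=20, final_price=90):
    """Per-unit profit of each division, and of the group, for a given transfer price."""
    profit_a = transfer_price - cost_a                  # Division A sells at the transfer price
    profit_b = final_price - transfer_price - cost_b    # Division B buys in at the transfer price
    return profit_a, profit_b, profit_a + profit_b

print(divisional_profits(50))  # (20, 20, 40)
print(divisional_profits(51))  # (21, 19, 40)
```

Whatever transfer price is chosen, the group figure is unchanged at $40/unit; only the split between the divisions moves.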
Consider the knock-on effects that different transfer prices and different profits might have on the
divisions:
Performance evaluation. The success of each division, whether measured by return on investment
(ROI) or residual income (RI) will be changed. These measures might be interpreted as indicating that a
division’s performance was unsatisfactory and could tempt management at head office to close it down.
Make/abandon/buy-in decisions. If the transfer price is very high, the receiving division might
decide not to buy any components from the transferring division because it becomes impossible for it to
make a positive contribution. That division might decide to abandon the product line or buy-in cheaper
components from outside suppliers.
Motivation. Everyone likes to make a profit and this ambition certainly applies to the divisional
managers. If a transfer price was such that one division found it impossible to make a profit, then the
employees in that division would probably be demotivated. In contrast, the other division would have
an easy ride as it would make profits easily, and it would not be motivated to work more efficiently.
Investment appraisal. New investment should typically be evaluated using a method such as net
present value. However, the cash inflows arising from an investment are almost certainly going to be
affected by the transfer price, so capital investment decisions can depend on the transfer price.
As you can see, therefore, transfer prices can have a profound effect on group performance because they
affect divisional performance, motivation and decision making.
Be perceived as being fair for the purposes of performance evaluation and investment decisions.
Permit each division to make a profit: profits are motivating and allow divisional performance to be
measured using positive ROI or positive RI.
Encourage divisions to make decisions which maximise group profits: the transfer price will achieve
this if the decisions which maximise divisional profit also happen to maximise group profit – this is
known as goal congruence. Furthermore, all divisions must want to do the same thing. There's no point in the transferring division being very keen on transferring out if the next division doesn't want to transfer in.
In the following examples, assume that Division A can sell only to Division B, and that Division B’s only
source of components is Division A. Example 1 has been reproduced but with costs split between variable
and fixed. A somewhat arbitrary transfer price of $50 has been used initially and this allows each
division to make a profit of $20.
Example 2
See Table 2. The following rules on transfer prices are necessary to get both parties to trade with one
another:
For the transfer-out division, the transfer price must be greater than (or equal to) the marginal cost of production. This allows the transfer-out division to make a contribution (or at least not make a negative one).
For the transfer-in division, the transfer-in price plus its own marginal costs must be no greater than the marginal revenue earned from outside sales. This allows that division to make a contribution (or at least not make a negative one). In Example 2, the transfer price must be no higher than $80 as:
$80 (transfer-in price) + $10 (own variable cost) = $90 (marginal revenue)
Usually, this rule is restated to say that the transfer price should be no greater than the net marginal revenue of the receiving division, where the net marginal revenue is marginal revenue less own marginal costs. Here, net marginal revenue = $90 – $10 = $80.
So, a transfer price of $50 (transfer price ≥ $18, ≤ $80), as set above, will work insofar as both parties
will find it worth trading at that price.
And
As well as permitting interdivisional trade to happen at all, this rule will also give the correct economic
decision because if the final selling price is too low for the group to make a positive contribution, no
operative transfer price is available.
So, in Example 2, if the final selling price were to fall to $25, the group could not make a contribution
because $25 is less than the group’s total variable costs of $18 + $10. The transfer price that would
make both divisions trade must be no less than $18 (for Division A) but no greater than $15 (net
marginal revenue for Division B = $25 – $10), so clearly no workable transfer price is available.
If, however, the final selling price were to fall to $29, the group could make a $1 contribution per unit. A
viable transfer price has to be at least $18 (for Division A) and no greater than $19 (net marginal
revenue for Division B = $29 – $10). A transfer price of $18.50, say, would work fine.
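The two rules can be combined into a feasibility check (Example 2 figures: Division A's marginal cost $18, Division B's own marginal cost $10):

```python
def transfer_price_range(mc_a, mc_b, final_price):
    """Feasible transfer price range: at least the transfer-out division's marginal
    cost, at most the receiving division's net marginal revenue.
    Returns None if no workable transfer price exists."""
    lower = mc_a
    upper = final_price - mc_b   # net marginal revenue of the receiving division
    return (lower, upper) if lower <= upper else None

print(transfer_price_range(18, 10, 90))  # (18, 80)
print(transfer_price_range(18, 10, 25))  # None -- no workable price
print(transfer_price_range(18, 10, 29))  # (18, 19)
```

Any price inside the returned range lets both divisions make a non-negative contribution, so head office can pick any point within it.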
Therefore, all that head office needs to do is to impose a transfer price within the appropriate range,
confident that both divisions will choose to act in a way that maximises group profit. Head office
therefore gives each division the impression of making autonomous decisions, but in reality each
division has been manipulated into making the choices head office wants.
In addition, a transfer price range as derived in Examples 1 and 2 will often be dynamic. It will keep
changing as both variable production costs and final selling prices change, and this can be difficult to
manage. In practice, management would often prefer to have a simpler transfer price rule and a more
stable transfer price – but this simplicity runs the risk of poorer decisions being made.
In order to address these concerns, some common practical approaches to transfer price fixing exist:
1. Variable cost:
A transfer price set equal to the variable cost of the transferring division produces very good economic
decisions. If the transfer price is $18, Division B’s marginal costs would be $28 (each unit costs $18 to
buy in then incurs another $10 of variable cost). The group’s marginal costs are also $28, so there will
be goal congruence between Division B’s wish to maximise its profits and the group maximising its
profits. If marginal revenue exceeds marginal costs for Division B, it will also do so for the group.
Although good economic decisions are likely to result, a transfer price equal to marginal cost has certain
drawbacks:
Division A will make a loss as its fixed costs cannot be covered. This is demotivating.
Performance measurement is distorted. Division A is condemned to making losses while Division B gets
an easy ride as it is not charged enough to cover all costs of manufacture. This effect can also distort
investment decisions made in each division. For example, Division B will enjoy inflated cash inflows.
There is little incentive for Division A to be efficient if all marginal costs are covered by the transfer
price. Inefficiencies in Division A will be passed up to Division B. Therefore, if marginal cost is going to
be used as a transfer price, at least make it standard marginal cost, so that efficiencies and inefficiencies
stay within the divisions responsible for them.
Example 3
See Table 3.
A transfer price set at full cost as shown in Table 3 (or better, full standard cost) is slightly more
satisfactory for Division A as it means that it can aim to break even. Its big drawback, however, is that
it can lead to dysfunctional decisions because Division B can make decisions that maximise its profits
but which will not maximise group profits. For example, if the final market price fell to $35, Division B
would not trade because its marginal cost would be $40 (transfer-in price of $30 and own marginal
costs of $10). However, from a group perspective, the marginal cost is only $28 ($18 + $10) and a
positive contribution would be made even at a selling price of only $35. Head office could, of course,
instruct Division B to trade but then divisional autonomy is compromised and Division B managers will
resent being instructed to make negative contributions which will impact on their reported
performance. Imagine you are Division B’s manager, trying your best to hit profit targets, make wise
decisions, and move your division forward by carefully evaluated capital investment.
The full cost plus approach would increase the transfer price by adding a mark up. This would now
motivate Division A, as profits can be made there and may also allow profits to be made by Division B.
However, again this can lead to dysfunctional decisions as the final selling price falls.
A transfer price set to the market price of the transferred goods (assuming that there is a market for the
intermediate product) should give both divisions the opportunity to make profits (if they operate at
normal industry efficiencies), but again such a transfer price runs the risk of encouraging dysfunctional
decision making as the final selling price falls towards the group marginal cost. However, market price
has the important advantage of providing an objective transfer price not based on arbitrary mark ups.
Market prices will therefore be perceived as being fair to each division, and will also allow important
performance evaluation to be carried out by comparing the performance of each division to outside,
stand-alone businesses. More accurate investment decisions will also be made.
The difficulty with full cost, full cost plus, variable cost plus, and market price is that they all result in
fixed costs and profits being perceived as marginal costs as goods are transferred to Division B.
Division B therefore has the wrong data to enable it to make good economic decisions for the group –
even if it wanted to. In fact, once you get away from a transfer price equal to the variable cost in the
transferring division, there is always the risk of dysfunctional decisions being made unless an upper
limit – equal to the net marginal revenue in the receiving division – is also imposed.
There are two approaches to transfer pricing which try to preserve the economic information inherent
in variable costs while permitting the transferring division to make profits, and allowing better
performance valuation. However, both methods are somewhat complicated.
Dual pricing: In this approach, Division A transfers out at cost plus a mark up (perhaps market
price), and Division B transfers in at variable cost. Therefore, Division A can make a motivating profit,
while Division B has good economic data about cumulative group variable costs. Obviously, the
divisional current accounts won’t agree, and some period-end adjustments will be needed to reconcile
those and to eliminate fictitious interdivisional profits.
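The divergence between the two sets of entries can be sketched as follows (the $18 variable cost and $50 transfer-out price come from the examples; the 1,000-unit volume is an illustrative assumption):

```python
units = 1_000                 # illustrative volume (assumed)
transfer_out_price = 50       # Division A records revenue at cost plus mark-up
transfer_in_price = 18        # Division B records cost at A's variable cost

revenue_recorded_by_a = units * transfer_out_price   # 50,000
cost_recorded_by_b = units * transfer_in_price       # 18,000

# Period-end consolidation adjustment: eliminate the fictitious
# interdivisional profit so that the group accounts balance
adjustment = revenue_recorded_by_a - cost_recorded_by_b  # 32,000
```

The divisional current accounts disagree by exactly this adjustment, which is why period-end reconciliation is unavoidable under dual pricing.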
Consider Example 1 again, but this time assume that the intermediate product can be sold to, or bought
from, a market at a price of either $40 or $60. See Table 4.
Division A would rather transfer to Division B, because receiving $50 is better than receiving $40.
Division B would rather buy in at the cheaper $40, but that would be bad for the group because there is
now a marginal cost to the group of $40 instead of only $18, the variable cost of production in Division
A. The transfer price must, therefore, compete with the external supply price and must be no higher than
that. It must also still be no higher than the net marginal revenue of Division B ($90 – $10 = $80) if
Division B is to avoid making negative contributions.
Division B would rather buy from Division A ($50 beats $60), but Division A would sell as much as
possible outside at $60 in preference to transferring to Division B at $50. Assuming Division A had
limited capacity and all output was sold to the outside market, that would force Division B to buy
outside and this is not good for the group as there is then a marginal cost of $60 when obtaining the
intermediate product, as opposed to it being made in Division A for $18 only. Therefore, we must
encourage Division A to supply to Division B and we can do this by setting a transfer price that is high
enough to compensate for the lost contribution that Division A could have made by selling outside.
Therefore, Division A has to receive enough to cover the variable cost of production plus the lost
contribution caused by not selling outside:
Basically, the transfer price must be as good as the outside selling price to get Division A to transfer inside the group.
Minimum (fixed by the transferring division): Transfer price ≥ marginal cost of the transfer-out division + any lost contribution
And
Maximum (fixed by the receiving division): Transfer price ≤ the lower of the net marginal revenue of the transfer-in division and the external purchase price
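Putting the general rule into a sketch, with the figures from the second Table 4 case (variable cost $18, outside selling price $60, Division B's net marginal revenue $90 – $10 = $80, external purchase price $60):

```python
def min_transfer_price(marginal_cost, lost_contribution_per_unit=0):
    """Floor set by the transferring division: marginal cost plus any lost contribution."""
    return marginal_cost + lost_contribution_per_unit

def max_transfer_price(net_marginal_revenue, external_purchase_price=float("inf")):
    """Ceiling set by the receiving division: the lower of net marginal revenue
    and the external purchase price."""
    return min(net_marginal_revenue, external_purchase_price)

# Division A can sell outside at $60, so its lost contribution is 60 - 18 = $42/unit
floor = min_transfer_price(18, 60 - 18)    # 60
ceiling = max_transfer_price(90 - 10, 60)  # 60
```

Here the floor and ceiling coincide at $60: the only transfer price that satisfies both divisions is the outside market price.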
CONCLUSION
You might have thought that transfer prices were matters of little importance: debits in one division,
matching credits in another, but with no overall effect on group profitability. Mathematically this might
be the case, but only at the most elementary level. Transfer prices are vitally important when
motivation, decision making, performance measurement, and investment decisions are taken into
account – and these are the factors which so often separate successful from unsuccessful businesses.
Poor cash flow - Poor cash flow might render an organization unable to pay its debts as and when they
fall due for payment. This might mean, for example, that providers of finance might be able to invoke the
terms of a loan covenant and commence legal action against an organization which might eventually lead
to its winding-up.
Lack of new product/service introduction - Innovation can often be seen to be the difference between 'life and death', as new products and services provide continuity of income streams in an ever-changing business environment. A lack of new product/service introduction may arise from a shortage of funds available for re-investment. This can lead to organizations attempting to compete with an out-of-date range of products and services, the consequences of which will invariably turn out to be disastrous.
General economic conditions - Falling demand and increasing interest rates can precipitate the
demise of organizations. Highly geared organizations will suffer as demand falls and the weight of the
interest burden increases. Organizations can find themselves in a vicious circle as increasing amounts of
interest payable are paid from diminishing gross margins leading to falling profits/increasing losses and
negative cash flows. This leads to the need for further loan finance and even higher interest burden,
further diminution in margins and so on.
Lack of financial controls - The absence of sound financial controls has proven costly to many
organizations. In extreme circumstances it can lead to outright fraud (e.g. Enron and WorldCom).
Internal rivalry - The extent of internal rivalry that exists within an organization can prove to be of critical significance, as managerial effort is channelled into internal conflict to the detriment of the organization as a whole. Unfortunately, the adverse consequences of internal rivalry remain latent until it is too late to redress them.
Loss of key personnel - In certain types of organization the loss of key personnel can 'spell the
beginning of the end' for an organization. This is particularly the case where individuals possess
knowledge which can be exploited by direct competitors, e.g. sales contacts, product specifications,
product recipes, etc.
Z Score
The Z score attempts to anticipate strategic and financial failure by examining company financial statements. The score is generated by calculating five ratios which, once weighted and combined, were considered to be the best predictor of failure.
ILLUSTRATION 1
Company B    Company C    Company D    Company E
Required
Using the data below calculate the Z score for each of the four companies and comment on
your findings
ANSWER
          Company B   Company C   Company D   Company E
1.2X1       0.86        0.07        1.56        0.3
1.4X2       1.19        0.04        1.12        0.29
3.3X3      10.25        0.3         3.63        1.65
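The weighted ratios in the answer table can be reproduced with Altman's original coefficients (the five-ratio model; the illustration's underlying data is not reproduced here, so the ratios below are illustrative, and the quoted cut-offs are the commonly cited ones rather than figures from these notes):

```python
def altman_z(x1, x2, x3, x4, x5):
    """Altman's original Z score: a weighted sum of five financial ratios.
    x1 = working capital / total assets, x2 = retained earnings / total assets,
    x3 = EBIT / total assets, x4 = market value of equity / book value of debt,
    x5 = sales / total assets."""
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Illustrative ratios only
z = altman_z(0.4, 0.3, 0.2, 1.5, 1.1)
print(round(z, 2))  # 3.56 -- conventionally, a score above about 3 suggests low failure risk
```

A score below roughly 1.8 is conventionally read as signalling a high risk of failure, with the zone between the two cut-offs inconclusive.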
Argenti’s model
Qualitative models such as Argenti's use a variety of qualitative and non-accounting factors, such as management experience, dependence on one or a few customers or suppliers, a history of qualified audit opinions, and the business environment, including the industry and economic situation.
Argenti developed a model which is intended to predict the likelihood of company failure based on three connected areas that indicate likely failure: defects, mistakes made, and symptoms of failure, each of which is awarded a specific score.
Management defects are to do with the characteristics of senior management for example an
autocratic chief executive (8) and a passive board (2).
Accounting defects could be a lack of budgetary control, a lack of cash flow planning, or a lack of a costing system (scoring 3 each).
The scores for all three connected areas are then added together to give the overall score. If the overall score is 25 or more, the company is at risk of failure.
Although this model attempts to quantify the causes and symptoms associated with failure, its predictive value has not been adequately tested, though a misclassification rate of 5% has been suggested.
It is also worth noting that there are other reasons why companies fail such as:
Avoiding failure
Ross and Kami listed 'Ten Commandments' that should be followed by a company to avoid failure:
● You must have a strategy
You have studied investment appraisal previously so most of this chapter will be revision for you. Of the
few new items in this chapter, the most important is Modified Internal Rate of Return and you should
make sure that you learn the technique involved.
Here is a list of the main points to remember when performing a net present value calculation. After this we will look at a full example containing all the points.
● Remember it is cash flows that you are considering, and only cash flows. Non-cash items (such as depreciation) are irrelevant.
● It is only future cash flows that you are interested in. Any amounts already spent (such as market research already done) are sunk costs and are irrelevant.
● There is very likely to be inflation in the question, in which case the cash flows should be adjusted in your schedule in order to calculate the actual expected cash flows. The actual cash flows should be discounted at the actual cost of capital (the money, or nominal, rate). (Note: alternatively, it is possible to discount the cash flows ignoring inflation at the cost of capital ignoring inflation (the real rate). We will remind you of this later in this chapter, but it is much less likely to be relevant in the examination.)
● There is also very likely to be taxation in the question. Tax is a cash flow and needs bringing into your schedule. It is usually easier to deal with tax in two stages – to calculate the tax payable on the operating cash flows (ignoring capital allowances) and then to calculate separately the tax saving on the capital allowances.
● You are often told that cash is needed to finance additional working capital necessary for the project. These are cash flows in your schedule, but they have no tax effects and, unless told otherwise, you assume that the total cash paid out is received back at the end of the project.
(Pro-forma layout: net cash flows × discount factors = present values, which sum to the net present value.)
Rome plc is considering buying a new machine in order to produce a new product.
The machine will cost $1,800,000 and is expected to last for 5 years at which time it will have an
estimated scrap value of $1,000,000.
They expect to produce 100,000 units p.a. of the new product, which will be sold for $20 per unit in the
first year.
Variable costs per unit are: Materials $8; Labour $7.
Materials are expected to inflate at 8% p.a. and labour is expected to inflate at 5% p.a.
Fixed overheads of the company currently amount to $1,000,000.
The management accountant has decided that 20% of these should be absorbed into the new product.
The company expects to be able to increase the selling price of the product by 7% p.a.
An additional $200,000 of working capital will be required at the start of the project.
Capital allowances: 25% reducing balance Tax: 25%, payable immediately Cost of capital: 10%
Calculate the NPV of the project and advise whether or not it should be accepted.
Solution
($000)                  0      1      2      3      4      5
Sales 2,000 2,140 2,290 2,450 2,622
Materials (864) (933) (1,008) (1,088) (1,175)
Labour (735) (772) (810) (851) (893)
Net operating flow 401 435 472 511 554
Tax on operating flow (100) (109) (118) (128) (139)
Cost (1,800)
Scrap 1,000
Tax saved on capital allowances   113     84     63     47   (107)
Working Capital (200) 200
Net cash flow (2,000) 414 410 417 430 1,508
d.f. @ 10% 1 0.909 0.826 0.751 0.683 0.621
P.V. (2,000) 376 339 313 294 936
NPV = $258
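The discounting in the schedule can be checked programmatically; the small difference from $258 is just the three-decimal rounding of the discount factors in the schedule:

```python
def npv(rate, cash_flows):
    """Net present value of a list of cash flows, where cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Net cash flows ($000) taken from the schedule above
flows = [-2_000, 414, 410, 417, 430, 1_508]
print(round(npv(0.10, flows)))  # 259 -- vs $258 using three-decimal discount factors
```

Since the NPV is positive, the project should be accepted.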
This technique considers a single variable at a time and identifies by how much that variable has to change for the decision to change (from accept to reject).
It indicates which variables may impact most upon the net present value (the critical variables) and the extent to which those variables may change before the investment results in a negative NPV.
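A minimal sketch of the usual sensitivity calculation (sensitivity of a variable = project NPV ÷ PV of the cash flows relating to that variable; the PV of revenue below is an assumed figure for illustration):

```python
def sensitivity(npv_of_project, pv_of_variable_flows):
    """Percentage by which a variable can worsen before the NPV falls to zero."""
    return npv_of_project / pv_of_variable_flows * 100

# Illustrative: project NPV $258k, PV of sales revenue assumed to be $8,500k
print(round(sensitivity(258, 8_500), 1))  # 3.0 -- price can fall ~3% before rejection
```

The smaller the percentage, the more critical the variable, so management attention should focus on the variables with the lowest sensitivity margins.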
The IRR is the total rate of return offered by an investment over its life; put another way, it is the rate of return at which the NPV equals zero.
Formula to calculate (by linear interpolation between two discount rates a and b): IRR ≈ a + [NPVa ÷ (NPVa – NPVb)] × (b – a)
A criticism of the IRR method is that in calculating the IRR, an assumption is that all cash flows earned by
the project can be reinvested to earn a return equal to the IRR.
Modified internal rate of return is a calculation of the return from a project, as a percentage yield, where it is assumed that cash flows earned from a project will be reinvested to earn a return equal to the company's cost of capital.
It might be argued that if a company wishes to use the discounted return on investment as a method of capital investment appraisal, it should use MIRR rather than IRR, because MIRR is more realistic: it is based on the cost of capital as the reinvestment rate.
Take the negative net cash flows in the early years of the project, and discount these to a present
value. The total PV of these cash flows is the PV of the investment phase of the project.
Take the cash flows from the year that the project cash flows start to turn positive and compound
these to an end-of-project terminal value, assuming that cash flows are reinvested at the cost of
capital.
The MIRR is then calculated as follows:
MIRR = (TV ÷ PVI)^(1/n) – 1
Where
n = the project life in years
TV = the end-of-project terminal value of the returns during the recovery phase (as calculated in Step 2)
PVI = the present value of the capital investment in the investment phase (as calculated in Step 1).
Equivalently, using the present value PVR of the recovery-phase returns: MIRR = (PVR ÷ PVI)^(1/n) × (1 + re) – 1, where re is the cost of capital.
The MIRR is usually lower than the IRR, because it assumes that the proceeds are re-invested at the cost of capital; in practice, however, the proceeds are often re-invested elsewhere within the firm. MIRR does have the advantage of being much quicker to calculate than the IRR.
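The steps above can be sketched as follows (the cash flows are illustrative; re is the cost of capital used as the reinvestment rate):

```python
def mirr(cash_flows, re):
    """Modified IRR: discount the negative flows to time 0, compound the positive
    flows to the end of the project at re, then solve for the single rate linking them."""
    n = len(cash_flows) - 1
    pv_invest = -sum(cf / (1 + re) ** t
                     for t, cf in enumerate(cash_flows) if cf < 0)
    terminal = sum(cf * (1 + re) ** (n - t)
                   for t, cf in enumerate(cash_flows) if cf > 0)
    return (terminal / pv_invest) ** (1 / n) - 1

# Illustrative project: invest 1,000 now, receive 500 for three years, re = 10%
m = mirr([-1_000, 500, 500, 500], 0.10)
print(round(m, 3))  # 0.183 -- below the project's IRR of roughly 23%
```

The terminal-value and PVR formulations give the same answer, since compounding the present value of the returns at re for n years reproduces the terminal value.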
This relates to the 'management' part of performance management. If one knows that one's performance is being measured (and very often one's rewards are tied to the performance measure) then it is human nature to concentrate on those aspects of the work that are being measured. Indeed, many would claim that 'what you measure you change', with the implication that what you do not measure will not change.
It is important therefore that the performance measures encourage goal congruence (i.e. encourage
working for the overall good of the company) and that they encourage long-term as opposed to short-term
thinking.
Management encourage employees to achieve goals by linking rewards to their success or failure in achieving desired levels of performance. Potential benefits of implementing a reward scheme include:
Rewards and incentives shape the behaviour of employees – a well-designed scheme will be consistent with the organisational objectives.
A reward scheme provides an incentive to achieve good performance.
Key incentives can be emphasised in the reward scheme – it is a way of communicating the goals
of the company to the employee.
An effective scheme will create an environment in which all employees are focussed on
continuous improvement.
Schemes that incorporate share ownership can encourage behaviour that in the longer-term
increases the market value of the business.
In one of his articles for Student Accountant, the previous examiner highlighted the following specific
problems that can occur with performance measurement schemes:
Tunnel vision: undue focus on performance measures to the detriment of other areas ('what you measure you change').
Sub-optimisation: ceasing effort once acceptable performance is achieved (e.g. once budgeted sales have been achieved), even though better performance might be achievable.
Myopia: focusing on the short term, with the result that the long term is ignored.
Measure fixation: behaviour and activities aimed at achieving a specific performance measure even where this is not effective; for example, measuring behaviour or results that are not important.
Misrepresentation: using creative reporting to suggest that performance measures have been achieved.
Gaming: behaviour designed to achieve some strategic advantage; for example, not passing on sales leads to a colleague so that your own sales are comparatively higher.
Ossification: unwillingness to change a performance measurement scheme once it has been set up.
These problems can be managed in several ways:
Involve staff at all levels in the development and implementation of the scheme
Be flexible in the use of performance measures
Keep the performance measurement system under constant review
REVOLUTION OF IT SYSTEMS
COMPETITION
There is indeed a broadening of the MA's responsibility and a transformation into more of a business partner role. However, this is not easy, and they may meet problems such as:
Quality management
Quality control refers to the processes (such as sampling and testing) that an organisation employs to
check on quality.
Quality assurance is the sum of the management processes that allow an organisation to dependably achieve a stated level of quality.
Quality management is the overseeing of all the activities needed to achieve and maintain the required quality. It includes establishing the required quality level, setting quality control procedures and also considering quality improvement.
TQM is defined as "the continuous improvement in quality, productivity and effectiveness obtained by establishing management responsibility for processes as well as outputs. In this, every process has an identified process owner and every person in an entity operates within a process and contributes to its improvement".
Any manufacturing company will want to deliver goods to the customer that are of sufficiently high
quality to avoid goods being returned. In order to check this, the company will have some form of quality
control checks on goods leaving the factory. However, even though good quality control will result in poor
quality goods being rejected, and therefore not reaching the customer, there remain the costs associated
with waste and poor quality work.
It is therefore important that all possible steps are taken not only to check quality at each stage, but to
design processes and educate the workforce to facilitate good quality production. If everything is done
right first time, there will be no quality control problems and no waste of materials or time.
TQM does not apply only to the manufacturing system. It also applies to phone answering, provision of information, the organisation's website, order processing, invoicing, recruitment and training.
LEAN SYSTEMS
Structurise – try to create a more logical pattern for where inventory (particularly WIP) is stored.
Systemise – have a logical system for identifying inventory.
Sanitise – make sure that inventory is looked after so that it is in a usable condition (knowing the location of an unusable item is unhelpful).
Standardise – take a consistent approach.
Self-discipline – exception reports should identify any items stored incorrectly. Note the connection with the IT systems required – it is no longer sufficient just to know the value of inventory from the finance system.
Six Sigma
The main theory of quality improvement in Paper P5 is the Six Sigma concept. The idea is to reduce the chance of an item failing to be of a good enough quality. This does not mean having a single standard: there may be a range of values which are acceptable.
This range is known as the tolerance. For example, a hamburger chain may say that as long as a burger is not too hot or too cold it is acceptable. This would give a range of acceptable temperatures (the tolerance).
The six sigma approach is about many gradual improvements rather than occasional large ones.
Define
Measure
Analyze
Improve
   Develop solutions.
   Implement them.
Control
   Monitor changes.
   Deal with problems arising.
The control process will focus on key performance measures.
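Assuming the process output follows a normal distribution, the defect rate implied by a tolerance can be sketched numerically (the burger temperatures, target and limits below are invented for illustration):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def defect_rate(mean, sd, lsl, usl):
    """Probability that a unit falls outside the tolerance [lsl, usl]."""
    return 1 - (phi((usl - mean) / sd) - phi((lsl - mean) / sd))

# Hypothetical burger-serving temperatures: target 70C, tolerance 64-76C
three_sigma = defect_rate(mean=70, sd=2.0, lsl=64, usl=76)  # limits at +/-3 sd
six_sigma = defect_rate(mean=70, sd=1.0, lsl=64, usl=76)    # limits at +/-6 sd
```

Tightening process variation so the same tolerance sits six standard deviations from the mean takes the defect rate from roughly 0.27% to around two defects per billion (ignoring the conventional 1.5-sigma shift), which is the essence of the Six Sigma target.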
Life-cycle costing
When seeking to make a profit on a product it is essential that the total revenue arising from the product exceeds total costs, whether those costs are incurred during the design, manufacture, operation or end-of-life phases:
1) All costs should be taken into account when working out the cost of a unit and its profitability.
2) Attention to all costs will help to reduce the cost per unit and will help an organization achieve its target
cost.
3) Many costs will be linked. For example, more attention to design can reduce manufacturing and warranty
costs. More attention to training can reduce machine maintenance costs. More attention to waste
disposal during manufacturing can reduce end-of-life costs.
4) Costs are committed and incurred at very different times. A committed cost is a cost that will be incurred
in the future because of decisions that have already been made. Costs are incurred only when a resource
is used.
The diagram shows that by the end of the design phase approximately 80% of costs are committed.
For example, the design will largely dictate material, labour and machine and environmental costs. The
company can try to haggle with suppliers over the cost of components but if, for example, the design
specifies ten units of a certain component, negotiating with suppliers is likely to have only a small overall
effect on costs. A bigger cost decrease would be obtained if the design had specified only eight units of the
component. The design phase locks the company in to most future costs, and it is this phase which gives the company its greatest opportunities to reduce those costs.
Conventional costing records costs only as they are incurred, but recording costs is different from controlling them, and performance management depends on cost control, not cost measurement. Many costs in the manufacturing phase can only be controlled by what happened in the design phase.
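A minimal numeric sketch of the life-cycle view, using invented phase costs and volumes, shows how profitability should be judged against costs from all phases rather than the manufacturing cost alone:

```python
# Hypothetical life-cycle costs for a product (all figures invented)
phase_costs = {
    "design": 200_000,
    "manufacture": 500_000,  # total over the product's life
    "operation": 150_000,    # marketing, distribution, support
    "end_of_life": 50_000,   # decommissioning, disposal
}
units_sold = 100_000
selling_price = 12.00

lifecycle_cost = sum(phase_costs.values())
cost_per_unit = lifecycle_cost / units_sold
lifecycle_profit = units_sold * selling_price - lifecycle_cost

print(cost_per_unit)     # 9.0 per unit over the whole life cycle
print(lifecycle_profit)  # 300000.0
```

Looking only at the manufacturing phase (5.00 per unit here) would overstate profitability; the full life-cycle cost per unit is 9.00.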
Just-in-time ( JIT)
Traditionally, most manufacturing companies have considered it necessary to have a certain level of stock
of raw materials, work-in-progress, and finished goods.
However, not only may this be costly in terms of physically holding the stock and in terms of the
possibility of damage and obsolescence, but also the requirement to hold stock may be symptomatic of
inefficiencies within the company.
For example, the level of work-in-progress is determined by the length of time of the manufacturing process. If the process can be streamlined and production time reduced, then the level of work-in-progress will be reduced and the company will make additional gains as a result of greater efficiency.
Raw materials
Work-in-progress
   To have some partially made inventory that will allow fast completion
   Technical reasons (e.g. production maturing, chemical processes that take time to complete)
Finished goods
Note that any disruption of the supply of raw materials and components quickly causes serious problems: no raw materials means no production, which in turn means an idle workforce and unhappy customers.
TARGET COSTING
[Diagram: the current cost sits above the target cost – the aim is to reduce the cost gap between them.]
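The target costing relationship can be sketched in a few lines (the selling price, margin and current cost estimate below are invented for illustration):

```python
# Target cost = required selling price - required profit margin.
# The cost gap is what the design team must close. All figures invented.
selling_price = 25.00    # set by the market / competitors
required_margin = 0.20   # required profit as a fraction of selling price

target_cost = selling_price * (1 - required_margin)  # 20.00
current_estimated_cost = 23.00                       # first design estimate

cost_gap = current_estimated_cost - target_cost      # 3.00 to be designed out
```

The key point is the direction of the calculation: the market price comes first and the allowable cost is derived from it, rather than cost plus margin producing the price.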
Early external focus -The organization will have an early external focus to its product development.
Businesses have to compete with others (competitors) and an early consideration of this will tend to make
them more successful. Traditional approaches (by calculating the cost and then adding a margin to get a
selling price) are often far too internally driven.
Value adding features only - Only those features that are of value to customers will be included in the
product design. Target costing at an early stage considers carefully the product that is intended. Features
that are unlikely to be valued by the customer will be excluded.
Early cost control - Cost control will begin much earlier in the process. If it is clear at the design stage that a cost gap exists, then more can be done to close it by the design team. Traditionally, cost control takes place at the 'cost incurring' stage, which is often far too late to make a significant impact on a product that is too expensive to make.
Lower costs per unit - Costs per unit are often lower under a target costing environment. This enhances profitability. Target costing has been shown to reduce product cost by between 20% and 40% depending on product and market conditions. In traditional cost-plus systems an organization may not be fully aware of the constraints in the external environment until after production has started. Cost reduction at this point is much more difficult as many of the costs are 'designed in' to the product.
Reduced time to market - It is often argued that target costing reduces the time taken to get a product to market. Under traditional methodologies there are often lengthy delays whilst a team goes 'back to the drawing board'. Target costing, because it has an early external focus, tends to help get things right first time and this reduces the time to market.
Kaizen budgeting incorporates expectations for continuous improvement into budgetary estimates.
Kaizen costing determines target cost reductions for a period, such as a month. Thus, variances are the differences
between actual and targeted cost reduction.
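A minimal sketch of a kaizen costing variance, using invented monthly figures (the variance is measured on cost reductions, not on a static standard):

```python
# Kaizen costing: the target is a cost *reduction* for the period, and the
# variance compares actual with targeted reduction. All figures invented.
cost_last_period = 100.00        # actual unit cost last month
target_reduction = 2.00          # kaizen target: cut unit cost by 2.00
actual_cost_this_period = 98.50  # unit cost actually achieved

actual_reduction = cost_last_period - actual_cost_this_period  # 1.50
variance = target_reduction - actual_reduction                 # 0.50 adverse
```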
The objective is to reduce actual costs below standard costs. The cost-reduction activities associated with the Kaizen approach minimize costs throughout the entire product life cycle. Therefore, it has the advantage of being closely related to the entity's profit-planning procedures.
Kaizen is a daily activity whose purpose goes beyond improvement. It is also a process that, when done
correctly, humanizes the workplace, eliminates overly hard work (both mental and physical), and teaches
people how to perform experiments using the scientific method and how to learn to spot and eliminate
waste in business processes.
People at all levels of an organization participate in kaizen, from the CEO down, as well as external
stakeholders when applicable. The format for kaizen can be individual, small groups or large groups.
Within Toyota, kaizen was typically a local improvement within a local area, involving a small group in improving their own work environment and productivity.
Whilst kaizen (in Toyota) usually delivers small improvements, the culture of continual small improvements and standardisation yields large results in the form of compound productivity improvement.
ENVIRONMENTAL ACCOUNTING
In an ideal world, organizations would reflect environmental factors in their accounting processes via the
identification of the environmental costs attached to products, processes, and services.
Many existing conventional accounting systems are unable to deal adequately with environmental costs
and as a result simply attribute them to general overhead accounts.
Consequently, managers are unaware of these costs, have no information with which to manage them and
have no incentive to reduce them.
Managers often overestimate the costs and underestimate the benefits of improving environmental practices.
Management accounting techniques can distort and misrepresent environmental issues, leading to
managers making decisions that are bad for businesses and bad for the environment. The most obvious
example relates to energy usage.
EMA is concerned with providing accounting information to managers in relation to corporate objectives. It involves:
Classification of costs:
Environmental prevention costs: the costs of activities undertaken to prevent the production of waste.
Environmental detection costs: costs incurred to ensure that the organization complies with regulations
and voluntary standards.
Environmental internal failure costs: costs incurred from performing activities that have produced
contaminants and waste that have not been discharged into the environment.
Environmental external failure costs: costs incurred on activities performed after discharging waste into
the environment.
Identification of costs:
Conventional costs: raw material and energy costs which have environmental relevance.
Potentially hidden costs: costs captured by accounting systems but then losing their identity in ‘general
overheads’.
Contingent costs: costs to be incurred at a future date, e.g. clean up costs.
Image and relationship costs: costs that, by their nature, are intangible, for example, the costs of
preparing environmental reports.
Input/outflow analysis
This technique records material inflows and balances them against outflows, on the basis that what comes in must go out.
So, if 100kg of materials have been bought and only 80kg of materials have been produced, for example,
then the 20kg difference must be accounted for in some way. It may be, for example, that 10% of it has
been sold as scrap and 90% of it is waste. By accounting for outputs in this way, both in terms of physical
quantities and, at the end of the process, in monetary terms too, businesses are forced to focus on
environmental costs.
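The 100kg example can be set out as a simple mass balance (the scrap price and disposal cost below are invented):

```python
# Input/outflow analysis: what comes in must go out. Quantities follow the
# example in the text; prices and disposal costs are invented.
material_in_kg = 100
product_out_kg = 80

unaccounted_kg = material_in_kg - product_out_kg  # 20kg to explain
scrap_kg = unaccounted_kg * 0.10                  # 2kg sold as scrap
waste_kg = unaccounted_kg * 0.90                  # 18kg waste

# Monetary view: scrap earns a little, waste costs money to dispose of
scrap_price_per_kg = 0.50
disposal_cost_per_kg = 1.20
net_cost_of_losses = waste_kg * disposal_cost_per_kg - scrap_kg * scrap_price_per_kg
print(net_cost_of_losses)  # about 20.6
```

Putting a monetary value on the unaccounted-for 20kg is what forces attention onto the environmental cost, rather than letting it disappear into general overheads.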
Flow cost accounting
This technique uses not only material flows but also the organizational structure. It makes material flows transparent by looking at the physical quantities involved, their costs and their value. It divides the material flows into three categories: material, system, and delivery and disposal. The values and costs of each of these three flows are then calculated. The aim of flow cost accounting is to reduce the quantity of materials which, as well as having a positive effect on the environment, should have a positive effect on a business's total costs in the long run.
Activity-based costing
ABC allocates internal costs to cost centers and cost drivers on the basis of the activities that give rise to the costs. In an environmental accounting context, it distinguishes between environment-related costs, which can be attributed to joint cost centers, and environment-driven costs, which tend to be hidden in general overheads.
Lifecycle costing
Within the context of environmental accounting, lifecycle costing is a technique which requires the full environmental consequences, and therefore costs, arising from the production of a product to be taken into account across its whole lifecycle, literally 'from cradle to grave'.
The most significant problem of EMA lies in the absence of a clear definition of environmental costs. This means it is likely that organizations are not monitoring and reporting such costs.
Environmental costs are likely to continue to increase, which will increase managers' information needs and provide the stimulus for the agreement of a clear definition.
However, whatever the difficulties, the use of EMA will probably increase with positive
effects for both organizations and the environment in which they operate.