Advanced Forecasting
Operations Analysis Using MS Excel
1
Forecasting
Forecasting is the process of extrapolating the past
into the future.
Forecasting is something organizations have to do if
they are to plan for the future. Many forecasts
attempt to use past data to identify short-, medium-,
or long-term trends, and to use these patterns to
project the current position into the future.
Backcasting: a method of evaluating forecasting
techniques by applying them to historical data and
comparing the forecasts to the actual values.
2
Month  Demand  Forecast  Deviation
Jan     11.8     7.1       4.7
Feb      6.3    11.8       5.5
Mar      9.5     6.3       3.2
Apr      5.3     9.5       4.2
May     10.1     5.3       4.8
Jun      7.0    10.1       3.1
Jul     11.3     7.0       4.3
Aug      7.3    11.3       4.0
Sep      9.5     7.3       2.2
Oct      5.0     9.5       4.5
Nov     10.7     5.0       5.7
Dec      6.0    10.7       4.7
Jan              6.0

[Chart: deviations plotted by month, Jan through the following Jan]
3
Forecasting
Why Forecasting?
4
Steps to Forecasting
• Start by gathering and recording information about the
situation.
• Enter the data into a worksheet or another business
analysis tool.
• Create graphs.
• Examine the data and the graphs visually to get some
understanding of the situation (judgmental phase).
• Develop hypotheses and models.
• Try alternative forecasting approaches and do 'what-
if' analysis to check whether the resulting forecast fits the data.
5
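The steps above can be sketched in a few lines of Python. The monthly figures and the two candidate forecasting rules here are illustrative, not from the text; the point is the shape of the workflow: record the data, inspect it, then compare alternative forecasts by their squared error.

```python
# Illustrative (made-up) monthly demand figures standing in for a worksheet.
monthly_demand = [6.0, 5.0, 5.0, 7.5, 2.49, 6.18, 9.18, 5.24, 8.3]

# Steps 1-2: gather the data and record it in a simple table structure.
rows = list(enumerate(monthly_demand, start=1))

# Step 4: a quick numeric inspection (judgmental phase).
mean_demand = sum(monthly_demand) / len(monthly_demand)

# Steps 5-6: develop candidate models and do 'what-if' comparisons.
def naive(series):            # forecast = last observation
    return series[:-1]

def running_mean(series):     # forecast = mean of all prior observations
    return [sum(series[:i]) / i for i in range(1, len(series))]

def sse(actual, forecast):    # sum of squared errors as the fit criterion
    return sum((a - f) ** 2 for a, f in zip(actual, forecast))

fits = {"naive": sse(monthly_demand[1:], naive(monthly_demand)),
        "running mean": sse(monthly_demand[1:], running_mean(monthly_demand))}
best = min(fits, key=fits.get)
```

For this particular series the running mean happens to fit better than the naive forecast; with different data the ranking could reverse, which is exactly why the 'what-if' step matters.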
Evaluation of Forecasting Model
To judge how well a forecasting model fits the past
observations, both precision and bias must be
considered.
Measuring the precision of a forecasting model:
There are three common measures used to evaluate
the precision of forecasting systems, each based on
the error, or deviation, between the forecasted and
actual values: MAD, MSE, MAPE
6
Evaluation of Forecasting Model
Mean Absolute Deviation - MAD
MAD = Σ|Actual − Forecast| / n
7
Evaluation of Forecasting Model
Mean Square Error - MSE
MSE = Σ(Actual − Forecast)² / n
9
Which measure of forecast
accuracy should be used?
The most popular measures are MAD and MSE.
The problem with MAD is that it varies according
to how big the numbers are.
MSE is preferred because it is supported by theory,
and because of its computational efficiency.
MAPE is not often used.
In general, the lower the error measure (BIAS, MAD,
MSE) or the higher the R², the better the forecasting
model
10
Good Fit – Bad Forecast
11
a- Outliers
Outliers may result from simple data-entry errors, or
the data may be correct but represent atypical
observed values.
Outliers may occur, for example, in time periods when the
product was just introduced or about to be phased out.
Experienced analysts are therefore well aware that a raw data
sample may not be clean.
Demand data with an outlier
[Chart: demand by month, Jan of year 1 through Feb of year 2, showing one outlying value]
12
b- Causal data adjustment
Cause-and-effect relationships should be examined before applying
any quantitative analysis to the historical data sample.
Examples of causes that may affect the patterns in a data sample:
1- The data sample before a particular year may not be applicable
because:
- Economic conditions have changed
- The product line was changed
2- Data for a particular year may not be applicable because:
- There was an extraordinary marketing effort
- A natural disaster prevented demand from occurring
13
c- Illusory (misleading) patterns
14
To prepare a valid forecast, the following factors
that influence the forecasting model should be
examined:
- Company actions
- Competitors' actions
- Industry demand
- Market share
- Company sales
- Company costs
- Environmental factors
15
Forecasting Approaches
1- Qualitative Forecasting
Forecasting based on experience, judgment, and
knowledge. Used when the situation is vague and little
data exists. Examples: new products and new
technology.
2- Quantitative Forecasting
Forecasting based on data and models. Used when the
situation is 'stable' and historical data exists. Examples:
existing products, current technology.
16
Forecasting Approaches
Judgmental/Qualitative:
- Market survey
- Expert opinion
- Decision conferencing
- Data cleaning
- Data adjustment
- Environmental factors

Quantitative models:
  Time Series:
  - Moving average
  - Exponential smoothing
  - Trend projection
  - Seasonal indexes
  Causal:
  - Regression
  - Curve fitting
  - Econometric
17
Quantitative Forecasting
Causal Models:
[Diagram: inputs such as Price, Population, Advertising, … feed a Causal Model, which outputs Year 2000 Sales]
18
Time Series Forecasting
Time series forecasting is based on the hypothesis that the
future can be predicted by analyzing historical data samples.
19
Time series model
Time series models can be further classified as
Forecasting directly from the data values
• Moving average
• Weighted moving average
• Exponential smoothing
Forecasting by identifying patterns in the past
• Trend projection
• Seasonal influences
• Cyclical influences
20
Forecasting directly from the data values
1- Moving Average Method
- The forecast is the mean of the last n observations. The
choice of n is up to the manager making the forecast.
- If n is too large, the forecast is slow to respond to
change.
- If n is too small, the forecast will be over-influenced by
chance variations.
- This approach can be used where a large number of
forecasts need to be made quickly, for example in a
stock-control system where next week's demand for every
item needs to be forecast.
21
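The n = 3 moving average described above can be sketched as a short Python function; the demand figures below are taken from the worksheet that follows, and the function name is illustrative.

```python
def moving_average_forecast(demand, n=3):
    """Forecast each period as the mean of the previous n observations."""
    return [sum(demand[i - n:i]) / n for i in range(n, len(demand) + 1)]

# Demand figures from the worksheet (Oct of year 1 through Dec of year 2).
demand = [6, 5, 5, 1.63, 1.95, 7.5, 2.49, 6.18,
          9.18, 5.24, 8.3, 2.72, 7.43, 7.49, 9.58]
forecasts = moving_average_forecast(demand, n=3)
# forecasts[0] is the Jan forecast: (6 + 5 + 5) / 3 = 5.33
```

This mirrors the worksheet's =AVERAGE(...) column: each forecast is the mean of the three most recent demands, and the final element is the forecast for the period after the last observation.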
Month  Demand  Moving Average Forecast (n = 3)
Oct      6
Nov      5
Dec      5
Jan      1.63   5.33   =AVERAGE(B3:B5)
Feb      1.95   3.88   =AVERAGE(B4:B6)
Mar      7.5    2.86   =AVERAGE(B5:B7)
Apr      2.49   3.69   =AVERAGE(B6:B8)
May      6.18   3.98   =AVERAGE(B7:B9)
Jun      9.18   5.39
Jul      5.24   5.95
Aug      8.3    6.87
Sep      2.72   7.57
Oct      7.43   5.42
Nov      7.49   6.15
Dec      9.58   5.88

[Chart: demand and three-month moving-average forecast plotted over the 16 periods]
2- Weighted Moving Average Method
F4 = (w1·d1 + w2·d2 + w3·d3) / (w1 + w2 + w3)
where w1, w2, w3 are weights and d1, d2, d3 are
demands.
Many books on forecasting state that the sum of the weights
(w1 + w2 + w3) must equal 1; dividing by the sum of the
weights, as above, makes this unnecessary.
25
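The weighted moving average formula translates directly into code. The weights and demand figures below are hypothetical, chosen so the heaviest weight falls on the most recent demand d3.

```python
def weighted_moving_average(demands, weights):
    """Weighted moving-average forecast. Dividing by the weight total
    means the weights need not sum to 1."""
    weighted_sum = sum(w * d for w, d in zip(weights, demands))
    return weighted_sum / sum(weights)

# Hypothetical figures: three past demands, heaviest weight on the latest.
f4 = weighted_moving_average([6.0, 5.0, 7.5], [1, 2, 3])
# f4 = (1*6.0 + 2*5.0 + 3*7.5) / (1 + 2 + 3)
```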
3- Exponential Smoothing
• The exponential smoothing technique gives weight to all past
observations, in such a way that the most recent observation
has the most influence on the forecast, and older
observations have progressively less influence.
• It is only necessary to store two values: the last actual
observation and the last forecast.
• The smoothing constant (α) is the proportion of the difference
between the actual value and the forecast that is carried into
the new forecast.
• A value for the smoothing constant (α) must be chosen
before the next period's forecast can be made.
27
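The standard single-exponential-smoothing recursion is Ft+1 = Ft + α(At − Ft): the new forecast moves the old one a fraction α of the way toward the latest actual value. A minimal sketch (the function name and the initialization of the first forecast to the first observation are conventions, not from the text):

```python
def exponential_smoothing(demand, alpha=0.2, initial_forecast=None):
    """Single exponential smoothing. Only two values ever need storing:
    the last actual observation and the last forecast."""
    forecast = demand[0] if initial_forecast is None else initial_forecast
    forecasts = [forecast]              # forecasts[t] predicts demand[t]
    for actual in demand[:-1]:
        forecast = forecast + alpha * (actual - forecast)
        forecasts.append(forecast)
    return forecasts
```

With a larger α the forecast reacts quickly to change but is noisy; with a smaller α it is smooth but slow, the same trade-off as choosing n in the moving average.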
3- Exponential Smoothing
Ft+1 = Ft + α(At − Ft)
28
4- Trend-Adjusted Exponential Smoothing
5. Cyclical component: much like the seasonal component, only its period
is much longer; it is affected by business cycles and political and economic factors.
31
Components of A Time Series Model
[Figure: panels plotting demand against time, showing the trend, cycle, and random movement components]
32
Forecasting by identifying patterns in the past
Cyclical and Seasonal Issues
Seasonal Decomposition of Time Series Data
[Figure: seasonal demand pattern over four periods, with demand points D1, D2, D3 and peak P2 = 68]
38
Linear Trend Analysis
The linear trend model fits a sloping line rather than a horizontal line.
The forecasting equation for the linear trend model is
Y = α + βX, or Y = a + bX
39
Linear Trend Analysis
Forecasting using three data items
Current Intercept: 42
Current Slope: 8
Sums of Squares: 8
MSE: 1.63

Table of MSE (rows: intercept; columns: slope)

                              Slope
               4      5      6      7      8      9     10
          38  12.33  10.23   8.16   6.16   4.32   2.94   2.83
          40  10.39   8.29   6.22   4.24   2.58   2.16   3.46
          42   8.49   6.38   4.32   2.45   1.63   2.94   4.90
Intercept 44   6.63   4.55   2.58   1.41   2.58   4.55   6.63
          46   4.90   2.94   1.63   2.45   4.32   6.38   8.49
          48   3.46   2.16   2.58   4.24   6.22   8.29  10.39
          50   2.83   2.94   4.32   6.16   8.16  10.23  12.33
          52   3.46   4.55   6.22   8.12  10.13  12.19  14.28
40
Linear Trend Analysis
Simple Linear Regression Analysis
Regression analysis is a statistical method that takes one or more
variables (called independent or predictor variables) and develops
a mathematical equation that shows how they relate to the value
of a single variable (called the dependent variable).
Regression analysis applies least-squares analysis to find the best-
fitting line, where best is defined as minimizing the mean square
error (MSE) between the historical sample and the calculated
forecast.
Regression analysis is one of the tools provided by Excel.
41
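The worksheet Regression output on the next slide can be cross-checked with a least-squares fit in Python; here np.polyfit stands in for Excel's Regression tool, using the 24 quarterly demand figures from that worksheet.

```python
import numpy as np

# The 24 quarterly demand figures from the regression worksheet.
quarters = np.arange(1, 25)
demand = np.array([3.47, 3.12, 3.97, 4.50, 4.06, 6.90, 3.60, 6.47,
                   4.27, 5.24, 6.39, 5.45, 5.88, 8.99, 4.12, 6.68,
                   9.44, 7.75, 9.91, 9.14, 14.25, 14.89, 14.22, 15.56])

# Degree-1 least-squares fit: returns (slope, intercept).
slope, intercept = np.polyfit(quarters, demand, deg=1)

# The fitted trend line, for comparison against the actual demand.
fitted = intercept + slope * quarters
```

The fit reproduces the worksheet's coefficients: an intercept near 1.495 and a slope near 0.475 per quarter.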
Simple Linear Regression Analysis
Quarters Demand SUMMARY OUTPUT
1 1 3.47
2 4 3.12 Regression Statistics
3 9 3.97 Multiple R 0.866
4 16 4.50 R Square 0.749
5 25 4.06 Adjusted R Square 0.738
6 36 6.90 Standard Error 1.986
7 49 3.60 Observations 24
8 64 6.47
9 81 4.27 ANOVA
10 100 5.24 df SS MS F Significance F
11 121 6.39 Regression 1 259.031 259.031 65.691 0.000
12 144 5.45 Residual 22 86.750 3.943
13 169 5.88 Total 23 345.782
14 196 8.99
15 225 4.12 Coefficients Standard Error t Stat P-value Lower 95% Upper 95%
16 256 6.68 Intercept 1.495 0.837 1.787 0.088 -0.240 3.231
17 289 9.44 Quarters 0.475 0.059 8.105 0.000 0.353 0.596
18 324 7.75
19 361 9.91
Intercept
20 400 9.14 Slope
21 441 14.25
22 484 14.89
23 529 14.22
42
24 576 15.56
Intercept  1.495
Slope      0.475
MSE        1.901

Quarters  Demand  Fitted Demand  Squared Difference
    1      3.47       1.97            2.24
    2      3.12       2.45            0.45
    3      3.97       2.92            1.11
    4      4.50       3.40            1.23
    5      4.06       3.87            0.04
    6      6.90       4.35            6.54
    7      3.60       4.82            1.48
    8      6.47       5.30            1.39
    9      4.27       5.77            2.26
   10      5.24       6.25            1.01
   11      6.39       6.72            0.11
   12      5.45       7.20            3.03
   13      5.88       7.67            3.22
   14      8.99       8.15            0.71
   15      4.12       8.62           20.25
   16      6.68       9.10            5.83
   17      9.44       9.57            0.02
   18      7.75      10.05            5.26
   19      9.91      10.52            0.38
   20      9.14      11.00            3.45
   21     14.25      11.47            7.74
   22     14.89      11.95            8.70
   23     14.22      12.42            3.24
   24     15.56      12.90            7.09
   25                13.37
   26                13.85
   27                14.32
   28                14.80

[Chart: demand and fitted trend line over quarters 1-28]
43
Linear Trend Analysis
Multiple Linear Regression Analysis
44
Multiple Linear Regression Analysis

Hours Before  Age  Number of Computer
Breakdown          Controls
   205        59          1
   236        48          1
   260        25          0
   176        39          0
   245        20          1
   123        66          2
   176        40          0
   150        62          0
   148        70          0
   265        20          0
   200        52          1
    45        75          0
   110        75          0
   216        25          0
   176        63          1
    90        75          0
   176        69          2
   112        65          0
   230        30          0
   280        23          1

Age and the number of computer controls are the two factors
that control the frequency of breakdowns, so they are the
independent variables.

Y = a + bX1 + cX2   (a: intercept; b: slope 1; c: slope 2)
45
Multiple Linear Regression Analysis
SUMMARY OUTPUT
Regression Statistics
Multiple R 0.905
R Square 0.818
Adjusted R Square 0.797
Standard Error 28.651
Observations 20
ANOVA
df SS MS F Significance F
Regression 2 62,920.044 31,460.022 38.325 0.000
Residual 17 13,954.906 820.877
Total 19 76,874.950
46
Hours Before  Age  Computer  Fitted Hours    Squared
Breakdown          Controls  to Breakdown    Difference
   205        59      1          169           1332
   236        48      1          199           1347
   260        25      0          238            464
   176        39      0          199            541
   245        20      1          278           1069
   123        66      2          174           2616
   176        40      0          196            419
    45        75      0           98           2861
   110        75      0           98            133
   216        25      0          238            505
   176        63      1          157            349
    90        75      0           98             72
   176        69      2          166            105
   112        65      0          126            210
   230        30      0          224             31
   280        23      1          269            115

Intercept                 308.451
Age                        -2.800
No of Computer Controls    25.232
MSE                        26.415

[Chart: actual and fitted hours to breakdown for the 20 machines]
47
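The two-predictor fit above can be reproduced with NumPy by solving the least-squares system directly; np.linalg.lstsq stands in for Excel's Regression tool here, using the 20 observations from the breakdown data.

```python
import numpy as np

# The 20 observations from the breakdown worksheet.
hours = np.array([205, 236, 260, 176, 245, 123, 176, 150, 148, 265,
                  200,  45, 110, 216, 176,  90, 176, 112, 230, 280],
                 dtype=float)
age = np.array([59, 48, 25, 39, 20, 66, 40, 62, 70, 20,
                52, 75, 75, 25, 63, 75, 69, 65, 30, 23], dtype=float)
controls = np.array([1, 1, 0, 0, 1, 2, 0, 0, 0, 0,
                     1, 0, 0, 0, 1, 0, 2, 0, 0, 1], dtype=float)

# Design matrix with a column of ones for the intercept: Y = a + b*X1 + c*X2.
X = np.column_stack([np.ones_like(age), age, controls])
coef, *_ = np.linalg.lstsq(X, hours, rcond=None)
a, b, c = coef   # intercept, age slope, controls slope
```

The age coefficient comes out negative, matching the worksheet: older machines run fewer hours before breaking down.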
Linear Trend Analysis
Quadratic Regression Analysis
48
Quadratic Regression Analysis
SUMMARY OUTPUT
Regression Statistics
Multiple R 0.927
R Square 0.859
Adjusted R Square 0.846
Standard Error 1.524
Observations 24
ANOVA
df SS MS F Significance F
Regression 2 297.037 148.518 63.984 0.000
Residual 21 48.745 2.321
Total 23 345.782
                  Coefficients  Standard Error  t Stat  P-value  Lower 95%  Upper 95%
Intercept            4.685         1.017         4.609   0.000     2.571      6.799
Quarters            -0.261         0.187        -1.395   0.178    -0.651      0.128
Quarters Squared     0.029         0.007         4.046   0.001     0.014      0.045
49
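Adding a squared term lets the fitted line curve. The sketch below fits both the straight line and the quadratic to the same 24 quarterly demand figures used in the simple regression, and compares their residual sums of squares; np.polyfit again stands in for Excel's Regression tool.

```python
import numpy as np

# The same 24 quarterly demand figures as in the simple regression.
quarters = np.arange(1, 25)
demand = np.array([3.47, 3.12, 3.97, 4.50, 4.06, 6.90, 3.60, 6.47,
                   4.27, 5.24, 6.39, 5.45, 5.88, 8.99, 4.12, 6.68,
                   9.44, 7.75, 9.91, 9.14, 14.25, 14.89, 14.22, 15.56])

# full=True also returns the sum of squared residuals for each fit.
lin_coefs, lin_res, *_ = np.polyfit(quarters, demand, deg=1, full=True)
quad_coefs, quad_res, *_ = np.polyfit(quarters, demand, deg=2, full=True)
```

As the worksheet's ANOVA tables show (residual SS of 48.745 versus 86.750), the quadratic leaves a smaller residual sum of squares than the line, and its squared-term coefficient is positive: demand is accelerating, not just trending upward.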
Quadratic Regression Analysis

Intercept  3.500
Slope 1    0.000
Slope 2    0.019
MSE        1.494

Quarters  Quarters²  Demand  Fitted Demand  Squared Difference
    1         1       3.47       3.52            0.00
    2         4       3.12       3.58            0.21
    3         9       3.97       3.67            0.09
    4        16       4.50       3.80            0.49
    5        25       4.06       3.98            0.01
    6        36       6.90       4.18            7.39
    7        49       3.60       4.43            0.69
    8        64       6.47       4.72            3.09
    9        81       4.27       5.04            0.60
   10       100       5.24       5.40            0.03
   11       121       6.39       5.80            0.35
   12       144       5.45       6.24            0.61
   13       169       5.88       6.71            0.70
   14       196       8.99       7.22            3.10
   15       225       4.12       7.78           13.36
   16       256       6.68       8.36            2.83
   17       289       9.44       8.99            0.20
   18       324       7.75       9.66            3.63
   19       361       9.91      10.36            0.20
   20       400       9.14      11.10            3.85
   21       441      14.25      11.88            5.63
   22       484      14.89      12.70            4.83
   23       529      14.22      13.55            0.45
   24       576      15.56      14.44            1.24
   25       625                 15.38
   26       676                 16.34
   27       729                 17.35
   28       784                 18.40

[Chart: demand and quadratic forecast over quarters 1-28]
50