
Proceedings of the 2013 Industrial and Systems Engineering Research Conference

A. Krishnamurthy and W.K.V. Chan, eds.

Analysis and Forecasting of International Flights Arriving in the United States

Isaac Atuahene1, Rolando José Acosta-Amado2, Rapinder Sawhney3, Philip Appiah-Kubi4, Girish Upreti5

1,3,5 Dept. of Industrial and Information Engineering, University of Tennessee, Knoxville, TN 37996, USA
2 Faculty of Industrial Engineering, Universidad Pontificia Bolivariana, Bucaramanga, Colombia
4 Dept. of Industrial Engineering, Ohio University, OH 43230, USA

Abstract
International flights arrive in and depart from the United States daily at an increasing rate. These flights are monitored and booked by airlines and the Transportation Security Administration (TSA). A good forecast of international flights and travelers arriving in and departing from the US can help plan immigration arrival proceedings, security screening and monitoring, inform government budget spending on TSA processes, and help estimate income for the tourism industry, among other uses. This study presents an analysis and forecast assessment of international flights and determines the best model for forecasting them. The study examines the seasonality of the data, finds the best model and finally makes a forecast for the year 2012. We then illustrate the model chosen for this forecast. In executing this study, we consider numerous modeling alternatives: time series regression, Winters exponential smoothing, the decomposition method (automated and spreadsheet), ARIMA, a combination model, and dynamic regression. The study concludes that dynamic regression is the best forecasting model, with its unique ability to handle interventions and seasonality, and presents recommendations.

Keywords
Forecast, seasonality, prediction, interventions

1. Introduction
International flights arrive in and depart from the United States daily, and are monitored and logged at all times. Arrivals to the United States by port of entry are tracked on a monthly basis [1]. We highlight summary statistics from the international travel reports issued by the departments in charge. The U.S. Department of Commerce has arrivals data
on more than 40 U.S. ports-of-entry from all world regions [2]. In 2011, 62.7 million international visitors traveled
to the U.S., generating $153 billion in receipts and a $43 billion trade surplus [3]. According to the Office of Travel
and Tourism Industries, 2010 was a very good year for the travel and tourism industry; The United States welcomed
60 million international visitors, a record level of visitors to the United States, 5 million more than the year before
[4]. In 2010, the top inbound markets continued to be Canada and Mexico, both of which were up in arrivals along
with eight of the nine overseas regional markets. Asia, South America, Oceania and the Middle East experienced the
strongest growth in 2010, due in part to record level non-resident visits to the United States from Brazil, South
Korea, Australia, China, India and Colombia [4]. The US International Trade Administration reported in their
analysis that the top three ports of entry (New York JFK, Miami and Los Angeles) accounted for 39 percent of all
overseas arrivals, one and one-half percentage points less than the prior year [2]. An independent monthly forecast of arrivals for these airports would be of great benefit to airport management. Developing forecasts of aviation
demand and activity levels continues to be challenging as the aviation industry evolves and prior relationships
change [5]. A good forecast for international flights arrival and departure from the USA will help to estimate income
for the tourism industry, plan for immigration arrival proceedings, security screenings and monitoring among other
things. This study presents analysis and determines the best forecasting model for predicting international flights in
the USA.
We now briefly review the literature on aviation forecasting. Accurate prediction of international flight arrivals and departures is critical to airport security, plays an integral role in the assessment of future facility needs at airports [6], and supports tourism planning, among other uses. Using data from a major North American cruise company, Xiaodong et al. (2011) applied 24 forecasting methods, divided into three
categories (non-pickup methods, classical pickup (CP) methods and advanced pickup (AP) methods), to generate
forecasts of final bookings for the cruises that have not yet departed at a particular reading point. The study then
used a two-stage framework to test alternative forecasting methods and compared their performance [7].
Weatherford et al. (2003) presented the first published research paper on the technique of neural network forecasting
as applied to the airline industry, which was compared with the traditional forecasting techniques (moving averages,
exponential smoothing, regression, etc.) [8]. A paper presentation by Littlewood (2005) described the early work on
applying mathematical models to the development of revenue management in the airline industry. The study focused
on describing passenger forecasting and revenue control methods, and introduced the idea of maximizing the
revenue received on a particular flight, rather than maximizing the number of passengers carried [9]. Keith and
Leyton (2007) applied statistical, probabilistic forecasts and categorical forecasts methods in the forecasts of low
ceiling and/or reduced visibility and their corresponding impact on forecast value for flights arriving at three major
airports in the United States [10]. In a similar study, Rudack and Ghirardelli (2010) applied categorical and
probabilistic forecasts to conduct a comparative verification of localized aviation model output statistics program
(LAMP) and numerical weather prediction (NWP) model forecasts of ceiling height and visibility [11]. Chmielecki
and Raftery (2011) applied the Bayesian model averaging to probabilistic visibility forecasting using a predictive
PDF that is a mixture of discrete point mass and beta distribution components to develop three approaches to
developing predictive PDFs for visibility [12].
Researchers have noted that the main factors that significantly influence flight arrivals at both international and local
airports are weather conditions, terrorist attacks, spikes in oil prices and recessions, to mention a few [7, 8]. Other natural conditions such as tornadoes and earthquakes, and diseases such as the H1N1 flu in the winter season, may
also influence arrivals and departures. Several approaches and techniques have been used to forecast US travel
demand, commercial aviation forecasts and assumptions, aerospace forecast, etc. Conventionally, they were based
on time series, regression, and econometric approaches with iterative processes [13]. Among these are the FAA forecasts, which the agency is confident accurately predict future aviation demand; however, the FAA reports that, owing to the large uncertainty of the operating environment, the variance around its forecasts is wider than in prior years [5]. Different forecasting approaches for the aviation industry (income, load, passenger enplanements, etc.) can be found in the literature. Commercial aviation forecasts reported thus far are considered
unconstrained in that they assume there will be sufficient infrastructure to handle the projected levels of activity [14,
15]. The FAA’s economic forecasts developed by Global Insight, Inc. do not assume further contractions of the
industry through bankruptcy, consolidation, or liquidation [5, 16]. In contrast, we present a variety of forecasting models that accommodate interventions and uncertainties, and compare them to select the best model.

2. Objective
The objective of this study is to analyze international flights arriving in and departing from the US from 2000 to 2011. The data were collected monthly for 12 years, from January 2000 through December 2011, with the 12 months of 2011 used as a holdout sample. The study examines the seasonality of the data and finds the best forecasting model for international flights. Finally, we make forecasts for 2012, compare them with the reported 2012 data, and illustrate the model chosen for this forecast. In executing this study, the project considers numerous modeling alternatives: time series regression, Winters exponential smoothing, decomposition (automated and spreadsheet), ARIMA, a combination model, and dynamic regression. Finally, we conduct a forecast assessment, select the best forecasting model and present recommendations. The forecasts and predictions we present are based on historical passenger statistics from the United States Immigration and Naturalization Services and from the Bureau of Transportation Statistics; the data were obtained from the Bureau of Transportation Statistics T-100 Segment data section [17].

3. Methodology

3.1 Preview of Data and Data Analysis


A plot of the time series against the trend variable is shown in Figure 1. The scatter plot in Figure 2 shows an increasing trend in international flights over the years, and some increasing variability is also evident over time. To begin the analysis, the data were previewed to check for possible outliers and
for seasonality (see the box plot in Figure 3). The box plot shows some seasonality: the highest flight volumes occur around the summer season (July) and the lowest in the winter season (February). We then performed a Kruskal-Wallis test on the data, which rejected the null hypothesis (H0); hence we conclude that seasonality is present. The Kruskal-Wallis one-way ANOVA on ranks showed a significant difference by month at the 0.05 alpha level, and the month means also differed. The Kruskal-Wallis multiple-comparison Z-value test showed that some of the group means are significantly different from others.
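As an illustration of this seasonality check, the sketch below runs a Kruskal-Wallis test on monthly groups of a synthetic series (Python with NumPy and SciPy assumed; the counts, seasonal shape and noise level are invented for illustration, not the study's actual data):

```python
import numpy as np
from scipy.stats import kruskal

# Synthetic monthly flight counts (12 years x 12 months) with a seasonal bump
# peaking in summer, standing in for the 2000-2011 series (illustrative data)
rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 12)
seasonal = 10000 * np.sin((months - 4) * np.pi / 6)   # peak near July
flights = 90000 + seasonal + rng.normal(0, 2000, size=144)

# Group observations by calendar month and test whether the monthly
# distributions differ (H0: no difference by month, i.e. no seasonality)
groups = [flights[months == m] for m in range(1, 13)]
stat, p = kruskal(*groups)
print(p < 0.05)  # True -> reject H0: seasonality present
```

Rejecting H0 here mirrors the paper's conclusion that the monthly groups differ, which is what licenses seasonal model components later on.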
We then performed an autocorrelation analysis. The autocorrelation plot showed a slight exponential decay, and the partial autocorrelation function had one prevalent spike, which indicates that the data may be autocorrelated. Eight autocorrelations were found not to be significant, and only seven partial autocorrelations were identified as significant. The t-test for the autocorrelation coefficients showed that the rule-of-thumb cutoff of 0.1667 (2/√144 for the 144 monthly observations) flags 32 autocorrelation coefficients as significant, since it does not take into account the increasing standard error and the decreasing degrees of freedom. At alpha = 0.05, the first 14 autocorrelations are significant, but at alpha = 0.01 only the first 8 are. The rule of thumb thus over-identifies significant autocorrelations.

[Plots omitted in this version.] Figure 1: Data Plot (International vs. Time). Figure 2: Scatter Plot (International vs. Trend). Figure 3: Box Plot (International by Month).

In treating outliers, we used statistical process control (SPC) methodology. The ARIMA model with regular (1,0,0) and seasonal (1,0,1) terms gives white noise, significant coefficients and no high parameter estimates, so it was chosen for the outlier analysis on the original data. This delineated two observations (September 2001 and May 2009) as possible outliers. Normality tests on the residuals of the untouched data (Shapiro-Wilk W, Kolmogorov-Smirnov, D'Agostino skewness, D'Agostino kurtosis and D'Agostino omnibus) did not reject normality. We also performed a white-noise test, which yielded p-values of 0, indicating that the series is not white noise at the 0.05 alpha level.

Before applying the various forecasting methodologies, an intervention analysis and identification was performed using the SAS software to identify possible interventions in the historical data. Based on this analysis, the data was suspected of having interventions; we deal with them in the dynamic regression section of this study. The first intervention was identified as a major change point early in the series, with a likely second intervention later and possibly a third. Dynamic linear regression can help in such situations. The first intervention occurred in October 2001 and extends through November 2001 and January 2002. On investigation, we attribute it to the 9/11 attacks on the World Trade Center, after which international flights were halted and flights from some countries restricted for a period of time. The possible second intervention, through September and October 2008, may result from several events across the United States in that period, among them Hurricane Gustav on September 1 and Hurricane Ike around September 13, whose effects spread through October. Another possible intervention around September 2009 may reflect September simply being a weak month for travel, since it is the end of summer and the tail end of the hurricane season. A further, less certain explanation is that international flights may drop every September as a lingering effect of 9/11.

3.2 Forecast Modeling


In executing this study, the project applies a variety of forecasting models and methodologies using the NCSS software. The models explored are explained in the next sections.

3.2.1 Time Series Regression Forecast


DeLurgio (1998) and Montgomery et al. (2008) note that the general form of a multiple regression model may be written as

Yt = b0 + b1x1 + b2x2 + ... + bkxk + et    (1)

where Yt is the variable to be forecast, x1, ..., xk are the k predictor variables and et is the error term. The coefficients b1, ..., bk measure the effect of each predictor after taking account of the effect of all other predictors in the model [18, 19]. With this approach, we considered numerous modeling alternatives using the NCSS
software. The models explored are: additive with trend only (T), additive with trend and seasonality (T+S), additive with trend and seasonality plus their interaction (T+S+T*S), and robust additive with trend and seasonality; likewise multiplicative with trend only (T), multiplicative with trend and seasonality (T+S), multiplicative with trend, seasonality and interaction (T+S+T*S), and robust multiplicative with trend and seasonality. Upon comparing all these models, the
additive with trend and seasonality (T+S) was found to be the best time series regression model. The ANOVA test
on the overall model as well as overall months yielded a p-value=0 which indicates that there is a difference in
month adjusting for trend. Also trend adjusting for seasonality was significant (with p-value of 0.00). The regression
equation identified that the intercept is significant, but months 1, 3, 4, 5 and 6 are not. The regression coefficient analysis gave an intercept of 84705.6818, with flights increasing by 228.7885 for each additional month after adjusting for seasonality. The Durbin-Watson test for serial correlation rejects H0. In the residual diagnostics, observations 25 and 26 appeared higher than the others and were suspected as possible outliers; these are the months right after the September 2001 (9/11) intervention. The R-squared value for the model is 0.7637, which shows that 76% of the variation in international flights is explained by trend and seasonality; the PRESS R-squared value is 0.7089. The residuals were not normal, so the assumptions are not satisfied, and some problems with serial correlation were identified. The variance inflation factors were acceptable (less than 5), so no collinearity problems were present.
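A minimal version of this additive T+S regression can be sketched with a trend column plus eleven month dummies (Python/NumPy assumed; the intercept and per-month trend echo the fitted coefficients above, but the seasonal amplitude, noise and data are synthetic):

```python
import numpy as np

# Synthetic monthly series loosely shaped like the study's data: intercept and
# per-month trend echo the fitted coefficients (84705.68 and 228.79); the
# seasonal amplitude and noise level are invented for illustration
rng = np.random.default_rng(2)
n = 132
t = np.arange(1, n + 1)
month = ((t - 1) % 12) + 1
y = (84705.68 + 228.79 * t + 4000 * np.sin((month - 4) * np.pi / 6)
     + rng.normal(0, 1500, n))

# Additive T+S design: intercept, trend, and 11 month dummies (month 12 baseline)
X = np.column_stack([np.ones(n), t]
                    + [(month == m).astype(float) for m in range(1, 12)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

Leaving one month out of the dummies avoids perfect collinearity with the intercept, which is why the variance inflation factors stay low in such a design.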

3.2.2 Exponential Smoothing


Exponential smoothing uses the time series values to generate a forecast, with smaller weights given to observations further back in time. At each period t, an exponentially smoothed level, Lt, is calculated, updating the previous level, Lt-1, as the best current estimate of the underlying level of the series [18-20]:

Lt = αYt + (1 − α)Lt-1    (2)

where α (0 < α < 1) is the smoothing constant and (1 − α) is the damping factor. The exponential smoothing methods were fitted optimizing the mean absolute percentage error (MAPE). Six different exponential smoothing
methods were explored and the best model with favorable characteristics selected. Using the NCSS software, we modeled simple exponential smoothing (SES), Holt's linear trend method and the Winters exponential smoothing method with four seasonal adjustments (additive-additive, additive-multiplicative, multiplicative-multiplicative and multiplicative-additive). The additive Winters model optimized on MAPE was found to be the best of the exponential smoothing models: it had white-noise residuals, performed well on the holdout R-squared value and yielded a good prediction interval. Like the other exponential smoothing models, its residuals were not normal (so the assumptions are not satisfied), but it came out best upon comparison.
Both the Winters additive and Winters multiplicative models performed very well on the holdout R-squared values (0.887 and 0.880, respectively). Neither had normal residuals, but both had white-noise characteristics; the additive Winters model was chosen as the best since it is easy to interpret and its holdout R-squared value is slightly better than the multiplicative model's. We therefore concluded that the Winters additive model is the best of the exponential smoothing methods for its simplicity and easy interpretation. The model converged after 196 search iterations and gave a pseudo R-squared value of 0.9573, indicating that approximately 96% of the variation in international flights is explained by the trend and the additive seasonal pattern. The prediction equation for this model is

Yt+m = Lt + mTt + St-s+m    (3)

where Lt is the level, Tt the trend, m the forecast horizon, s the season length, and St-s+m the seasonal factor for the respective season under consideration.
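A bare-bones additive Winters recursion, with level, trend and seasonal updates feeding a prediction of this form, might look like the following (Python/NumPy assumed; the initialization choices, smoothing constants and data are illustrative, not NCSS's optimized values):

```python
import numpy as np

def winters_additive(y, alpha, beta, gamma, s=12, h=12):
    """Additive Holt-Winters: level/trend/seasonal updates, then h-step forecasts."""
    y = np.asarray(y, dtype=float)
    # Crude initialization from the first two seasons (illustrative choice)
    level = y[:s].mean()
    trend = (y[s:2 * s].mean() - y[:s].mean()) / s
    season = y[:s] - level
    for t in range(len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t % s]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % s] = gamma * (y[t] - level) + (1 - gamma) * season[t % s]
    # Forecast = level + m * trend + matching seasonal factor
    return np.array([level + (m + 1) * trend + season[(len(y) + m) % s]
                     for m in range(h)])

# Synthetic monthly series with trend and additive seasonality (illustrative)
rng = np.random.default_rng(6)
t = np.arange(132)
y = 90000 + 220 * t + 4000 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1000, 132)
fc = winters_additive(y, alpha=0.3, beta=0.05, gamma=0.2)
print(fc.round(0))
```

In practice, software such as NCSS searches over the smoothing constants (here fixed by hand) to minimize the chosen error criterion, e.g. MAPE.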

3.2.3 Decomposition Forecasting Method


Decomposition forecasting methods attempt to identify the separate components of the basic underlying pattern that tend to characterize economic and business series. The approach consists of estimating the five components of the model [20]:

Yt = U + Tt + Ct + St + et    (4)

where Yt is the series (or the log of the series), U is the mean of the series, Tt is the linear trend, Ct is the cycle, St is the season, et is the random error and t is the time period. We performed decomposition using the automated decomposition function in the NCSS software, as well as manual (spreadsheet) decompositions with a stable seasonal index and with a changing seasonal index, and finally a model using regression to handle trend and seasonality simultaneously. In general, the decomposition models did not perform well for this
data. We noted that this could be due to seasonal difficulties, which are a known disadvantage of decomposition models. All the models performed poorly on the holdout R-squared values. Nevertheless, for the purposes of this study, we selected as the best decomposition model the one with changing seasonality, since there were seasonality changes within the months. This model had a good fit R-squared value of 0.983, which indicates that approximately 98% of the variation in international flights is explained by trend and month. The model had a constant prediction interval but did not have normal residuals (so the assumptions are not satisfied), and produced no white-noise characteristics. As with all the other decomposition models tried, the holdout R-squared value was not good, but compared among them, the model with changing seasonality is the best. The decomposition model using the NCSS macro also did poorly on the holdout, which may help explain why the manual decomposition holdouts did not perform well either. We are inclined to conclude that decomposition models do not work well for this data set.
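For reference, one classical decomposition pass (a centered 2x12 moving-average trend estimate, then stable monthly seasonal indices) can be sketched as follows (Python/NumPy assumed; the series is synthetic and illustrative):

```python
import numpy as np

# Synthetic monthly series with trend and a 12-month seasonal cycle (illustrative)
rng = np.random.default_rng(3)
n, s = 144, 12
t = np.arange(n)
y = 90000 + 200 * t + 5000 * np.sin(2 * np.pi * t / s) + rng.normal(0, 1000, n)

# Trend-cycle estimate via a centered 2x12 moving average (half weights at ends)
kernel = np.r_[0.5, np.ones(s - 1), 0.5] / s
trend = np.convolve(y, kernel, mode="valid")             # length n - s
detrended = y[s // 2 : s // 2 + len(trend)] - trend

# Stable seasonal index: average the detrended values per calendar month,
# then center the indices so they sum to zero (additive model)
month = t[s // 2 : s // 2 + len(trend)] % s
seasonal = np.array([detrended[month == m].mean() for m in range(s)])
seasonal -= seasonal.mean()
print(seasonal.round(0))
```

The stable-index variant above assumes the seasonal pattern is constant over the years; the changing-seasonality variant the study preferred would instead let the monthly indices evolve over time.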

3.2.4 Auto Regressive Integrated Moving Average (ARIMA) Method


The autoregressive integrated moving average (ARIMA) approach uses nonseasonal parameters (p, d, q), seasonal parameters, or a combination of both to generate forecasts [18, 19], where p is the number of autoregressive terms, d is the order of nonseasonal differencing, and q is the number of lagged forecast errors in the prediction equation. All possible ARIMA models (36 in total: 18 nonseasonal and 18 seasonal) were tried to find the best model. First, we ran different autocorrelations to find the pattern of the nonseasonal portion, then ran a time series decomposition and saved the seasonal ratios for deseasonalizing the data. We then ran an automatic ARMA (autoregressive moving average) search to get an idea of where the model converges; this hinted that an autoregressive term of order one, together with a seasonal autoregressive term of order one (AR1), converges best. The best ARIMA model was found to be ARIMA(1,0,0)(1,0,1). It converged normally after 20 iterations and gave significant p-values (at alpha = 0.05) for all the model estimates. The pseudo R-squared value obtained was 0.92378, indicating that about 92% of the variation is explained by this model. The asymptotic correlation matrix of the parameters showed no high correlations. The model had a good holdout R-squared value, good prediction intervals and no issues with multicollinearity. The residuals were not normal, but were white noise; hence this model was chosen as the best among all the ARIMA models compared.
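The nonseasonal AR(1) part of such a model can be estimated by conditional least squares, i.e., regressing yt on yt-1; the sketch below recovers a known coefficient from simulated data (Python/NumPy assumed; the seasonal (1,0,1) terms are omitted for brevity, and the series is illustrative):

```python
import numpy as np

# Simulate an AR(1) series with a known coefficient, then recover it by
# conditional least squares: regress y_t on y_{t-1} (illustrative data)
rng = np.random.default_rng(4)
n, phi_true = 144, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
print(round(phi_hat, 2))  # close to 0.8
```

A full ARIMA(1,0,0)(1,0,1) fit would additionally estimate the seasonal AR term at lag 12 and a seasonal MA term, typically by maximum likelihood in a statistical package.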

3.2.5 Combination Forecast


The combination forecast model (CFM) determines the forecast by averaging two or more independent, equally weighted forecasts; usually the best models are used in the average [18]. We combined the best additive Winters exponential smoothing model (Y1) and the best ARIMA(1,0,0)(1,0,1) model (Y2), as these were the best-performing models on the holdout R-squared values. The combination performs well on the training data, with a good R-squared value (0.9519), but unfortunately performs poorly on the holdout R-squared value (−1.0256). Surprisingly, the residuals for this model have neither white noise nor normality; the prediction interval was, however, acceptable.
Results of the combination model are summarized in Table 1. On the training data, the prediction interval was ±5437.05 using the root mean squared error and ±4089.33 using the mean absolute deviation; on the holdout data, it was ±18652.52 using the root mean squared error and ±49615.4 using the mean absolute deviation.
Table 1: Results of Combination Model

Model                                              Pseudo R2   Normality   White noise   Prediction Interval   Holdout R2
Combination (Best Additive Winters + Best ARIMA)   0.9519      No          No            Good                  -1.0256
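The combination step itself is a simple equally weighted average, and the holdout R-squared that penalized it goes negative whenever the combined forecast errors exceed the holdout variance. A sketch (Python/NumPy assumed; the actuals are the first six BTS values from Table 3, while the two forecast vectors are hypothetical stand-ins for the Winters (Y1) and ARIMA (Y2) forecasts):

```python
import numpy as np

# Actuals: the first six BTS values from Table 3; the two forecast vectors are
# hypothetical stand-ins, not the study's fitted forecasts
actual = np.array([111701, 104704, 117612, 112718, 114348, 117270], float)
f_winters = np.array([113000, 102500, 115800, 112100, 111200, 113000], float)
f_arima = np.array([112000, 103500, 116500, 112900, 112500, 114200], float)

combined = (f_winters + f_arima) / 2.0          # equally weighted average

# Holdout R-squared = 1 - SSE/SST; it turns negative when the forecast errors
# exceed the holdout variance, as happened with the study's combination model
sse = np.sum((actual - combined) ** 2)
sst = np.sum((actual - actual.mean()) ** 2)
holdout_r2 = 1 - sse / sst
print(round(holdout_r2, 3))
```

With these hypothetical inputs the holdout R-squared is positive; the study's value of −1.0256 indicates the actual combined forecast did worse on the holdout than simply predicting the holdout mean.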

3.2.6 Dynamic Regression Forecast


The dynamic regression forecast combines the general regression model with lagged terms of the dependent variable to capture trend, seasonality and time-phased relationships between variables. The dynamic regression model takes the general form [18, 19]

yt = a + b1yt-1 + ... + bpyt-p + c0xt + c1xt-1 + ... + ckxt-k + et    (5)

In this autoregressive distributed-lag model (equation 5), the values of p and k (i.e., how many lags of y and x are used) are chosen on the basis of the statistical significance of the lagged variables, and so that the resulting model is well specified. We first used the SAS software to perform a dynamic forecast on the data set without interventions.
We then fitted the model automatically, allowing the software to select the best model, which was the log Winters additive model. This model had a good fit R-squared value on the training data (0.959), white-noise residuals and a good prediction interval, but no normality (so the assumptions are not satisfied). Finally, we fitted dynamic regression models with the interventions identified during the earlier preview of the data. We first took the best automatically fitted model and incorporated the interventions; unfortunately, this model could not handle them well, performing poorly on both the fit R-squared value (0.322) and the holdout R-squared value (0.352). The best model was found to be ARIMA(1,0,0)(1,0,0) with interventions in October 2001, November 2001, January 2002, September 2008, October 2008 and September 2009. This model had white noise, a good holdout R-squared value (0.912) and a good fit R-squared value (0.910); however, like the other best models, its residuals were not normal.
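Intervention handling can be illustrated by adding a 0/1 pulse dummy for a suspected intervention month to the regressors, so the shock is absorbed by its own coefficient rather than distorting the trend and seasonal fit (Python/NumPy assumed; one synthetic pulse and a sinusoidal seasonal term stand in for the full model):

```python
import numpy as np

# Synthetic series: trend + seasonality, with a sharp one-month drop at
# observation 21 (~October 2001 if the series starts January 2000)
rng = np.random.default_rng(5)
n = 144
t = np.arange(n)
y = 90000 + 200 * t + 5000 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1500, n)
y[21] -= 25000

# A 0/1 pulse dummy per suspected intervention month lets the model absorb the
# shock in its own coefficient instead of distorting the trend/seasonal fit
pulse = (t == 21).astype(float)
X = np.column_stack([np.ones(n), t,
                     np.sin(2 * np.pi * t / 12), np.cos(2 * np.pi * t / 12),
                     pulse])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(round(beta[-1]))  # close to the injected -25000 shock
```

In the study's best model, one such indicator per intervention month (October 2001 through September 2009) plays this role alongside the ARIMA error structure.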

3.2.7 Model Comparison and Forecast Summary


Before conducting the dynamic regression that incorporates interventions, we compared all the forecast models computed so far (Table 2). Two models, the Winters additive exponential smoothing and ARIMA(1,0,0)(1,0,1), emerge as the best, with similar characteristics but a slightly higher R-squared value for the Winters method. Both perform well on the holdout R-squared values and have white-noise residuals, but normality was not obtained for either (so the assumptions are not satisfied). Since only one model is needed for the forecast, we chose the Winters additive model, as it is simple to convey, interpret and explain to clients with no statistical background. This model has a good fit R-squared value (0.9573); its residuals are not normal, like the other models, but it has significant coefficients, good prediction intervals, no high parameter estimates, white-noise residuals, and excellent performance on the holdout R-squared value. For an engineering client well versed in statistics, we would instead choose the ARIMA model, since it is more robust. Finally, the overall best model is the dynamic regression model with ARIMA(1,0,0)(1,0,0) and interventions in October 2001, November 2001, January 2002, September 2008, October 2008 and September 2009. This model has white noise, a good holdout R-squared value (0.912) and a good fit R-squared value (0.910); it does not have normal residuals, like the other best models, but it has the smallest MAPE (mean absolute percentage error) and is the best among all the models. The final prediction plot is shown in Figure 4.
To validate our analysis, the final monthly forecasts from the best model (the dynamic regression model with interventions) were compared with the nine months of data recorded by the Bureau of Transportation Statistics, BTS (Research and Innovative Technology Administration). At the time of this study, in January 2013, data for the last three months of 2012 were not available: the most recent three months of international data by airport and by carrier are withheld because of confidentiality agreements for individual routes [17]. The results of the comparison are shown in Table 3. The total forecast flight arrivals for the first nine months of 2012 were 1.519% less than the total reported by the BTS. This discrepancy could be due to minor interventions that were not considered, as well as outliers that were not considered severe; the non-normality of the data and residuals that lack complete white-noise characteristics may also contribute.
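The nine-month comparison can be reproduced directly from Table 3 (Python/NumPy assumed; the arrays are the forecast and BTS columns of the table):

```python
import numpy as np

# Forecast and BTS columns for the nine available months of 2012 (from Table 3)
forecast = np.array([113058, 102401, 115826, 112162, 111159, 113041,
                     123631, 120502, 105595], float)
bts = np.array([111701, 104704, 117612, 112718, 114348, 117270,
                124704, 123006, 107001], float)

# Mean absolute percentage error of the monthly forecasts against BTS
mape = np.mean(np.abs((bts - forecast) / bts)) * 100
# Percentage by which the forecast total falls short of the BTS total
shortfall_pct = (1 - forecast.sum() / bts.sum()) * 100   # ~1.519, as reported
print(round(mape, 2), round(shortfall_pct, 3))
```

The monthly MAPE comes out under 2%, consistent with the small per-month differences in Table 3.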

Table 2: Model Comparison



Table 3: Comparison of 2012 Forecast with BTS Data
(U95/L95 are the upper and lower 95% prediction limits of the best model; Difference = BTS − Forecast.)

Year   Month   Best Model Forecast   BTS Data        Difference   U95      L95
2012   Jan     113058                111,701         -1,357       119028   107089
2012   Feb     102401                104,704          2,303       110314    94487
2012   Mar     115826                117,612          1,786       124939   106712
2012   Apr     112162                112,718            556       122088   102236
2012   May     111159                114,348          3,189       121658   100659
2012   Jun     113041                117,270          4,229       123954   102127
2012   Jul     123631                124,704          1,073       132848   110414
2012   Aug     120502                123,006          2,504       127943   105061
2012   Sep     105595                107,001          1,406       117203    93986
2012   Oct     106594                Not available    N/A         118327    94861
2012   Nov     102229                Not available    N/A         114056    90402
2012   Dec     110858                Not available    N/A         122756    98961
Total          1337056

Figure 4: Final Forecast For 2012

4. Conclusion and Recommendations


This study analyzes and conducts a forecast assessment of international flights to determine the best model for forecasting them. We examined the seasonality of the historical data, determined the best model and finally made a forecast for 2012, which we validated against the actual data reported by the Bureau of Transportation Statistics. The international flight data set exhibits strong seasonality, so a good forecast required a model that could accommodate both the seasonal components and the interventions/change points identified in the data. The dynamic regression model, which accounts for seasonality very well and handles all the interventions, is therefore adopted as the best forecast model. Most importantly, the dynamic regression with interventions has the lowest MAPE (mean absolute percentage error) and is the best among all the models. We did not have additional data on external variables, so that option in the dynamic regression did not help much. We selected the best model for this forecasting objective based on the holdout R-squared, white noise, prediction interval, fit R-squared value, ability to handle interventions, MAPE and normality (which almost none of the models achieved). Factors such as multicollinearity, serial correlation, and the significance of coefficients and parameter estimates were also considered in some cases.

References
1. Office of Travel and Tourism Industries, 2011. Available at http://www.tinet.ita.doc.gov/research/programs/i94/index.html
2. Office of Travel and Tourism Industries, Monthly Tourism Statistics, 2012. Available at http://tinet.ita.doc.gov/view/m-2012-I-001/index.html
3. International Trade Administration, New Report Forecasts Continued Growth in International Travel to the United States During Next Five Years, 2000. Available at http://tinet.ita.doc.gov/view/f-2000-99-001/forecast/Forecast_Summary.pdf
4. Office of Travel and Tourism Industries, 2010, “International Visitation to the United States: A statistical
Summary of US visitation (2010)”. Available at
http://tinet.ita.doc.gov/outreachpages/download_data_table/2010_Visitation_Report.pdf
5. FAA Aerospace Forecast, Fiscal Years 2011-2031, 2011, Available at
http://www.faa.gov/about/office_org/headquarters_offices/apl/aviation_forecasts/aerospace_forecasts/2011
-2031/media/2011%20Forecast%20Doc.pdf
6. Quad City International Airport, Master plan update, 2012. Available at http://qciaairportmasterplan.com/
7. Xiaodong, S., Gauri D. K., and Webster S., 2011, Forecasting for cruise line revenue management, Journal
of Revenue & Pricing Management 10, 306-324.
8. Weatherford, L. R., Gentry T. W. and Wilamowski B., 2003, Neural network forecasting for airlines: A
comparative analysis, Journal of Revenue & Pricing Management, 1, 319–331.
9. Littlewood, K., 2005, Special Issue Papers: Forecasting and control of passenger bookings, Journal of
Revenue & Pricing Management, 4, 111–123.
10. Keith, R., and Leyton, S.M., 2007, An Experiment to Measure the Value of Statistical Probability Forecasts for Airports, Wea. Forecasting, 22, 928–935.
11. Rudack, D. E., and Ghirardelli, J. E., 2010, A Comparative Verification of Localized Aviation Model
Output Statistics Program (LAMP) and Numerical Weather Prediction (NWP) Model Forecasts of Ceiling
Height and Visibility, Wea. Forecasting, 25, 1161–1178.
12. Chmielecki, R. M., and Raftery A. E., 2011, Probabilistic Visibility Forecasting Using Bayesian Model
Averaging, Mon. Wea. Rev., 139, 1626–1636.
13. National Bureau of Economic Research. Available at http://www.nber.org/
14. FAA Aerospace Forecast Fiscal Years 2012-2032, 2012. Available at:
http://www.faa.gov/about/office_org/headquarters_offices/apl/aviation_forecasts/aerospace_forecasts/2012
-2032/media/FAA%20Aerospace%20Forecasts%20FY%202012-2032.pdf
15. Field, C., 2005, Aviation Activity Forecasts, Master plan Update, Jacksonville Aviation Authority (Chapter
3). Available at:
http://www.flyjacksonville.com/pdfs/VQQMP/Chapter%203,%20Aviation%20Activity%20Forecasts.pdf
16. FAA Aerospace Forecast, Fiscal years 2009-2025, 2009, U.S. Department of Transportation Federal
Aviation Administration Aviation Policy and Plans. Available at
http://www.faa.gov/data_research/aviation/aerospace_forecasts/2009-
2025/media/2009%20Forecast%20Doc.pdf
17. Research and Innovative Technology Administration, Bureau of Transportation Statistics, 2012. Available
at http://www.transtats.bts.gov/Data_Elements.aspx?Data=1
18. DeLurgio, S.A., 1998, “Forecasting Principles and Applications,” 1st edition, Richard D. Irwin.
19. Montgomery, D.C., Jennings, C.L., and Kulahci, M., 2008, “Introduction to Time Series Analysis and
Forecasting,” John Wiley & Sons, Inc.
20. Makridakis, S.G., Wheelwright, S.C., and Hyndman, R.J., 1997, “Forecasting: Methods and Applications,” 3rd edition, Wiley.
