How Reliable Is Duality Theory in Empirical Work?
The Neoclassical theory of production establishes a dual relationship between the profit value function of a competitive firm and its underlying production technology. This relationship, commonly referred to as duality theory, has been widely used in empirical work to estimate production parameters such as elasticities and returns to scale. We generate a pseudo-dataset by Monte Carlo simulations, which, starting from known production parameters, yield a dataset with the main characteristics of U.S. agriculture in terms of unobserved firm heterogeneity, decisions under uncertainty, unexpected production and price shocks, endogenous prices, output and input aggregation, measurement error in variables, and omitted variables. Production parameters are not precisely recovered when performing econometric estimation based on the duality approach, and the elasticity estimates are inaccurate. Deviations of own- and cross-price elasticities from initial median values, given our parameter calibration, range between 6% and 690%, with an average of 90%. Also, own-price elasticities are as imprecisely recovered as cross-price elasticities. Sensitivity analysis shows that results still hold for different sources and levels of noise, and sample size used in estimation.
Key words: Data aggregation, duality theory, endogeneity, firm heterogeneity, measurement error,
Monte Carlo simulations, omitted variables, uncertainty.
[...] prices and allocatable inputs induces estimation inefficiency.

The reliance of the approach on some restrictive assumptions prompted a body of lit- [...]

[...] authors assume perfect competition, profit maximization, certainty, and lack of measurement errors, deviations from duality theory only come from the FFF choice. As a result, [...]
[...] characterize the real-world data used by practitioners. We aim at generating noise comparable to that encountered in widely used datasets, such as the one constructed and [...]

The Model of Individual Firms

Each firm underlying the simulated dataset is assumed to consist of a producer who chooses netputs so as to maximize the expected utility of uncertain terminal wealth (W̃1):4

4. The model setup closely follows the one used in Rosas and Lence (2017), which in turn is based on Lau (1976). The proposed method could also be employed to conduct the estimation using the state-contingent approach to production uncertainty proposed by Quiggin and Chambers (2006). We do not pursue the state-contingent approach here because the limitations of the available data for estimation have severely hindered its use in empirical work. However, our method could be used to explore the empirical performance of the approach by simulating data rich enough to allow for its estimation.
2. In this study, we focus on the properties of duality theory applications using time series data by generating a panel of observations across firms and over time. The analysis of applications with cross-sectional data is as relevant as the one pursued here, but is left for future research. The properties of duality theory using panel data can be studied as well, but they are less frequent in the literature because these datasets are not as readily available.

3. This issue can also be interpreted in the framework of the existence of a representative technology arising from the aggregation of heterogeneous firms. It can be argued that this is not a problem exclusive of duality theory, and that the primal problem also bears it. We decided to work on the dual problem because it is an application extensively used to recover production parameters and estimate elasticities, but the other option is appropriate as well.

5. According to netput notation, a positive value is a net output and a negative value is a net input.

6. The properties of the set S include: (a) the origin belongs to S; (b) S is closed; (c) S is convex; (d) S is monotonic with respect to y0; and (e) non-producibility with respect to at least one variable input, which implies at least one commodity is freely disposable and can only be a net input in the production process (a primary factor of production).

7. The properties of the production function G(·) are: (a) the domain is a convex set of R^(n+m) that contains the origin; (b) the value of G at the origin, say G(0), is non-positive; (c) G is bounded; (d) G is closed; and (e) G is convex in {y, K}. Convexity is required because of the convention used in Lau (1976) that y0 = G(y, K). We follow the convention that the value of the production function is positive infinity if a production plan is not feasible, that is, max{Ø} = +∞, where {Ø} is the empty set.
828 April 2019 Amer. J. Agr. Econ.
(3)  max_y {E[U(W̃1)]} = max_y {E[U(W0 + p̃ᵀỹ − G(ỹ, K; a))]}

[...] vectors of ai,f coefficients, A11f and A22f are, respectively, (N × N) and (M × M) symmetric and nonsingular matrices, A12f is an (N × M) matrix, and wft is a mean-zero heteroskedas- [...]
[...] parameters when data are problem-free. In this study, we focus on the noisy dataset because it allows us to document the effects on the estimated production parameters when the data exhibit more realistic features.

Figure 2 sketches the steps involved in the simulations, and the sections where they are explained. The first step is the same for the noiseless and noisy datasets, and consists of generating the starting production parameters af and quasi-fixed netputs K̄ft by Monte Carlo simulations. To this end, we used the procedures described in Rosas and Lence (2017), which are summarized in appendix A. To favor parameter identification, production parameters are allowed to vary across firms but not over time.17

17. The ability of the dual approach to recover parameters associated with technological change is a very important topic, but it is not pursued here for reasons of space.

In the case of the noiseless simulations, the second step consists of drawing exogenous expected variable netput prices p̄ft as described in appendix B. This is followed by the numerical solution to the firm's optimization problem (3) which, due to the absence of noise and the normalized quadratic production FFF (4), collapses to a standard profit maximization with first-order conditions given by p̄ft − A1f − A11f ȳft + A12f K̄ft = 0. Hence, the optimal variable netput quantities for each firm and time period are computed as

(6)  ȳft = (A11f)⁻¹ (p̄ft − A1f + A12f K̄ft)

The final step to create the noiseless dataset involves aggregating across heterogeneous firms the simulated individual observations:

(7)  ȳt = Σf ȳft

(8)  K̄t = Σf K̄ft

and
Rosas and Lence How Reliable is Duality Theory in Empirical Work? 831
(9)  p̄nt = Σf wnft p̄nft,  n = 1, . . ., N

where wnft = ȳnft/ȳnt is firm f's share of the aggregate nth netput quantity at time t. That is, netput quantities are aggregated by adding across firms because they are homogeneous commodities, whereas aggregate netput prices are weighted averages of firm-level prices. The resulting time series dataset [ȳt, p̄t, K̄t] is the one used to estimate the production function parameters (âf).

The procedure for constructing the noisy dataset is more involved because it requires the numerical maximization of expected utility (rather than just profits), and the incorporation of various sources of noise. In the second step, explained in appendix B, endogenous expected variable netput prices p̄ft are drawn conditioning on the values of af and K̄ft. An additional step, described in appendix D, is necessary to obtain calibrated values of initial wealth Wft,0 and the risk-preference parameter kft, which are used at the next step to compute the expected netput quantities ȳft that maximize the expected utility of end-of-period terminal wealth (see appendix D for details). Before aggregating across firms, the following sources of noise are incorporated into the simulated data: shocks to expected price and quantity variables (outlined in appendix C.2); omission of variables (by eliminating some of the netput series); aggregation across netputs (discussed in appendix C.3); and measurement errors in price and quantity variables (addressed in appendix C.4). Finally, the noisy dataset [yt, pt, Kt] used to conduct time-series estimation is obtained by aggregating across firms, in a manner analogous to expressions (7) to (9). The set of production function parameter estimates corresponding to the noisy data is denoted by âf.

The present simulations are parameterized so as to obtain panel data for N = 8 variable netputs and M = 1 quasi-fixed netput over a period of T = 50 years from R = 3 regions, each composed of F = 10,000 heterogeneous firms, such that firm heterogeneity is higher across regions than within them.18 Therefore, conditional on the set of parameters af, there are R × F × T = 1.5 million observations for each variable in the noisy data panel [yft, pft, Kft; af]. Upon aggregation over the 10,000 heterogeneous firms at each time t, we obtain a dataset of 50 observations for each variable per region that we use to estimate netput demands and supplies as shown in system (11) below.

Data Used for Estimation

The noiseless data [ȳt, p̄t, K̄t] include all N = 8 netput quantities and prices, and M = 1 quasi-fixed netput. Variable netput prices are exogenous from quantities but have serial correlation (see appendix B). To avoid the

18. This figure roughly represents about one-fifth of the number of farms in a given state of the Corn Belt (Iowa, Illinois, Indiana, Missouri, and Ohio), Lake States (Michigan, Minnesota, and Wisconsin), and Northern Plains (Kansas, North Dakota, Nebraska, and South Dakota) regions. Available U.S. state-level time-series datasets with information on prices and quantities of agricultural outputs and inputs comprise no more than 50 years of observations.
addition of another source of noise coming from heterogeneous technology across regions, we select region 1 to conduct the baseline estimation, and compare results with [...]

[...] problem (3) is approximated by the following normalized quadratic FFF:

πR(p, K; b) = pᵀB1 + KᵀB2 + [...]

[...] techniques.21 First, inspection of the autocorrelation and partial autocorrelation functions of the noiseless and noisy time series suggests that the time series present serial correlation. Therefore, we estimate system (11) with variables in pseudo first differences (Greene 2003).22

Second, omitted variables in the noisy dataset would generate biased and inconsistent estimates. Hence, we use instrumental variables (IV) for each omitted netput price in system (11). We do so even though it is not a common practice, so as to favor the recovery of the parameters of interest. In particular, we use the omitted prices themselves as instruments because they are the best possible instruments.

Third, an IV approach is used to control for the endogeneity of the explanatory price variables, which arises because they are correlated with the error terms due to the supply and demand shocks. To instrument for the endogenous prices, we make use of the fact that we know the underlying source of endogeneity, that is, prices in system (11) are correlated with the error term κ as a consequence of systematic market shocks φnt (see appendix B.2). Since instruments have to be correlated with prices but uncorrelated with the error term, and we know the shocks φnt used to construct the price series, we construct instruments by regressing each netput price on its own systematic shock: pnt = τ0 + τ1φnt + ivnt. The residual (ivnt) is an ideal instrument because it is correlated with pnt and orthogonal to the systematic shock (φnt) by construction; hence, it represents the variation in prices not explained by the systematic shocks. There is one instrument for each netput price. We use three-stage least squares to account for the instruments in estimation. Furthermore, we perform a Hausman test (Hausman 1978) for assessing the effect of the IV on the estimated parameters relative to not instrumenting for these mean-independence violations.

The parameters comprising matrix B11 and vector B12 are the focus of our attention; they are, respectively, the marginal effects of prices and quasi-fixed netputs on netput quantities. Hence, they are the foundation for the estimated profit function Hessian matrix and the elasticities of netput quantities with respect to own price, cross prices, and quasi-fixed netputs. As depicted in figure 4, we estimate the Hessian matrices [B̂̄] and [B̂] from the noiseless and noisy datasets, respectively. The Hessians are then transformed into the corresponding elasticity matrices [Ê̄] and [Ê] in a straightforward manner.

To compare estimated elasticities with initial values, we begin from the known firm-specific production function Hessian matrix [A]f and convert it into the corresponding

21. The DGP embeds a measurement error at the farm level, as if each farm misreports the value actually observed. This error is simulated for each farm as an independently distributed mean-preserving spread from the regional prices and optimal quantities (see appendix C.4). Hence, being independent, measurement errors should have little impact in the present estimation because they should vanish when aggregating across a large number of farms in each period of time. However, measurement errors should have an effect when conducting estimation with cross-sectional data or panel data.

22. Briefly, pseudo differences start by estimating system (11) by SUR using the data in levels. Second, the estimated residuals κ̂ of each equation are stacked in a sole vector of dimension 4T, and used to estimate the autocorrelation coefficients ρ̂. Third, each explained and explanatory variable of the system is transformed into the pseudo-differenced variable. For example, in the case of the price of netput 1, which is an explanatory variable, the transformation implies p*t,1 = pt,1 − ρ̂ pt−1,1. Finally, we proceed to estimation of system (11) using the pseudo-differenced variables.
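Two of the steps above lend themselves to compact sketches: the pseudo first-differencing of footnote 22 (with the autocorrelation coefficient ρ̂ taken as given, so the preliminary SUR residual step is omitted) and the construction of the shock-residual instruments ivnt from the regression pnt = τ0 + τ1φnt + ivnt. The data below are simulated purely for illustration:

```python
import numpy as np

def pseudo_difference(x, rho):
    """Pseudo first difference of footnote 22: x*_t = x_t - rho * x_{t-1}.
    The first observation is dropped, as in a Cochrane-Orcutt transformation;
    rho is the autocorrelation coefficient estimated from the SUR residuals
    (that estimation step is omitted here and rho is passed in directly).
    """
    x = np.asarray(x, dtype=float)
    return x[1:] - rho * x[:-1]

def shock_residual_instrument(p, phi):
    """Instrument for an endogenous netput price: the residual iv_nt of the
    OLS regression p_nt = tau0 + tau1 * phi_nt + iv_nt, where phi_nt is the
    known systematic market shock. By construction the residual is
    orthogonal to phi_nt, so it captures the price variation not explained
    by the systematic shocks.
    """
    X = np.column_stack([np.ones_like(phi), phi])
    tau, *_ = np.linalg.lstsq(X, p, rcond=None)
    return p - X @ tau

rng = np.random.default_rng(1)
T = 50
phi = rng.normal(size=T)                    # known systematic shocks
p = 1.0 + 0.8 * phi + rng.normal(size=T)    # price partly driven by the shock

iv = shock_residual_instrument(p, phi)
assert abs(iv @ phi) < 1e-8   # orthogonal to the systematic shock
assert abs(iv.sum()) < 1e-8   # OLS residuals sum to zero (intercept included)

p_star = pseudo_difference(p, rho=0.5)
assert p_star.shape == (T - 1,)
```

Note that ivnt is correlated with pnt by construction (it is a component of pnt), which is the relevance condition invoked in the text.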
profit function Hessian [B]f by resorting to Lau's Hessian identities. We then transform the Hessian [B]f into the matrix of own- and cross-price initial elasticities and quasi-fixed initial elasticities of netput quantities [E]f. Finally, as indicated in figure 4, we compare the initial [E]f versus the estimated values ([Ê̄] and [Ê]) to evaluate how precisely we recover the starting elasticities under duality theory, for both the noiseless and the noisy data. Note that this comparison implies that each initial parameter is represented by a distribution (across firms), whereas the estimation yields a corresponding point estimate and its confidence interval.

Results

Estimation results for the noiseless and noisy data are discussed separately.

Estimation with Noiseless Data

Econometric estimates from the data obtained by aggregating across heterogeneous firms but without any other source of noise (i.e., [ȳt, p̄t, K̄t]) are omitted to save space, as they can be found in Rosas and Lence (2017). The estimates from the dual approach are able to recover the medians of the distribution of the corresponding initial firm-specific production parameters fairly accurately. More specifically, estimated elasticities with respect to prices (quasi-fixed netputs) deviate on average by 12.4% (7.5%) from the median of the starting elasticities according to the computed root mean squared error (RMSE).23 The RMSE summarizes the average difference between each entry of the estimated elasticity matrix and the median of the corresponding initial elasticity distribution. The RMSE accounts for two sources of error, namely, one due to the SUR estimation error for each of the 64 parameters, and the other one associated with the difference between the estimated and the initial value of the elasticity across the 64 parameters. Given that the SUR estimation provides only a minor source of error because point estimates are all highly significant, we argue that most of the RMSE can be attributed to the deviations between the estimated and the initial values across elasticities.

Estimation with Noisy Data

Estimation with noisy data (i.e., [yt, pt, Kt]) yields 16 own- and cross-price elasticities of variable netput quantities, and 4 elasticities with respect to quasi-fixed netputs. A Hausman test rejects the null that the three-stage least squares IV estimates are equal to the ordinary least squares estimates, indicating that the IV approach is appropriate.24 Figure 5 shows the distribution of the firm-specific (initial) price elasticities, and their corresponding SUR point estimates indicated with a circle (and the bounds of its 95% confidence interval with a "+" sign). After estimation, we take 10,000 draws from the

23. When compared to the median of the distribution, RMSE is computed as [(64 × 10,000)⁻¹ Σi Σj Σs (Êij,s − Ēij)²]^(1/2), where 64 is the number of parameters, 10,000 is the number of draws from the limiting distribution of the SUR parameter estimates, and subscript s indicates the sth draw of the ijth parameter. Comparison with the mean can be performed by substituting the median Ēij by the mean of the initial distribution. The RMSE averages over all the 64 × 10,000 squared differences. A measure of its dispersion is achieved by computing the standard deviation of these 64 × 10,000 values before averaging over them.

24. Variance inflation factors computed on the estimated model over the 100 samples have an average (standard deviation) of 1.35 (0.0014), indicating that multicollinearity is not an issue despite the high correlation among explanatory variables.
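The RMSE of footnote 23 is straightforward to compute once the draws from the estimates' limiting distribution are in hand. A minimal numpy sketch, with simulated draws standing in for the SUR output:

```python
import numpy as np

def elasticity_rmse(draws, medians):
    """RMSE of footnote 23: [(P*S)^(-1) * sum_{i,j,s} (E_hat[ij,s] - E_med[ij])^2]^(1/2).

    draws:   (S, P) array, S draws from the limiting distribution of the
             estimates for each of P = 64 elasticity parameters.
    medians: (P,) array of medians of the initial elasticity distributions.
    """
    dev = draws - medians[None, :]
    return float(np.sqrt(np.mean(dev ** 2)))

rng = np.random.default_rng(2)
S, P = 10_000, 64
medians = rng.normal(size=P)
# Draws centered on the medians with unit-variance noise, so the RMSE
# should be close to 1 by construction.
draws = medians[None, :] + rng.normal(size=(S, P))
rmse = elasticity_rmse(draws, medians)
assert 0.98 < rmse < 1.02
```

Comparison against the mean instead of the median only changes the `medians` argument.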
parameters' asymptotic distribution of each of the 100 samples, transform them into elasticities, and calculate their mean, standard deviation, and confidence interval over the 1,000,000 values. Except for entries (2, 2), (2, 3), (3, 2), and (3, 3) of the own- and cross-price elasticity matrix, the distributions involve more than one initial elasticity due to the aggregation of netputs. In these cases, to be able to compare with the elasticities estimated by means of SUR, we construct "new initial" elasticity distributions as the revenue-weighted averages of the distributions of the corresponding original initial elasticities.

In light of the conclusions from the previous sub-section, we measure the accuracy in recovering initial elasticities by comparing the estimated values to the medians of the respective distributions.25 Visual inspection of figure 5 suggests that, when comparing where the estimated values fall relative to where the distributions accumulate more mass, the dual approach provides a good approximation of the initial distribution in some instances, but a poor one in others. However, table 1 shows that the percentage difference between the median of the initial distribution (Ēij) and the estimated value (Êij) is high for the majority of the entries in the elasticity matrix. The difference ranges between 6% and 690%, and is less than 18% in only one entry. The own-price elasticities, reported along the main diagonal, are not recovered with much precision, given that the differences range between 6% and 150%. Importantly, the own-price elasticities for the netputs that are not aggregated with other netputs (e.g., netput 5, corresponding to entry (3, 3)) are not necessarily more precisely estimated than the main diagonal elements that do arise as aggregated netputs (entries (1, 1) and (4, 4)). As expected, the off-diagonal elements (i.e., the cross-price elasticities) are less accurately estimated than the main diagonal entries, as they require more information to be recovered.

As a summary measure of the dispersion in recovering the initial elasticities, we calculate the RMSE of the difference between the median of the initial distribution and the SUR

25. Comparisons using the means of the distributions provided less accurate results.
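The "new initial" distributions for aggregated netputs are revenue-weighted mixtures of the component distributions. A sketch, where the weights are assumed to be revenue shares of the component netputs (the text does not spell out the normalization, so that is an assumption here):

```python
import numpy as np

def revenue_weighted_elasticity(elasticities, revenues):
    """'New initial' elasticity distribution for an aggregated netput:
    the revenue-weighted average, firm by firm, of the original initial
    elasticity distributions of the component netputs.

    elasticities: (C, F) array, C component netputs by F firms.
    revenues:     (C,) array of component revenues, normalized to weights.
    """
    w = revenues / revenues.sum()
    return w @ elasticities    # (F,) combined distribution across firms

rng = np.random.default_rng(3)
C, F = 2, 10_000
E = rng.normal(-0.3, 0.1, size=(C, F))   # illustrative initial elasticities
rev = np.array([3.0, 1.0])               # component revenues -> weights 0.75, 0.25
E_new = revenue_weighted_elasticity(E, rev)
assert E_new.shape == (F,)
assert np.allclose(E_new, 0.75 * E[0] + 0.25 * E[1])
```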
Table 1. Comparison of Price Elasticities Estimated from Noisy Data (Êij) with Medians of Distributions of Initial Price Elasticities (Ēij)
estimated values for all 16 estimated price elasticities. Table 2 shows that the RMSE for the baseline scenario equals 0.18 in elasticity units. The average value of all initial elasticities (calculated as the mean absolute value of all the medians of the initial distributions) is 0.20. Therefore, by comparing both values we conclude that duality theory recovers elasticities which are, on average, off by 90% of the initial elasticities. These results provide evidence that the dual approach is unable to deliver precise estimates of underlying production parameters when employing data featuring real-world characteristics.

The estimation of variable netput elasticities with respect to quasi-fixed netputs is not accurate either. Results are represented graphically in figure 6. Each panel titled "Eik" is the elasticity of netput i with respect to the quasi-fixed netput. The SUR point estimates of the elasticities are within the support of the respective initial distributions except for E1k and E3k; however, in those cases the 95% confidence interval does contain the support of the initial elasticities. The baseline scenario in table 2 shows that the RMSE relative to the median of the initial distribution is 0.30 expressed in elasticity units, and the average value of the elasticities is calculated at 0.43. These results imply that the inaccuracy in recovering the starting elasticities, averaged over the 4 netputs, amounts to 70% of the original elasticities.

While in real-world datasets it is expected to observe all the simulated sources of noise simultaneously affecting the data, and this is what the previous analysis intends to show, it is informative to document how much each source contributes to the bias. To this end, three additional scenarios are investigated. In the first scenario, the only source of noise is the prediction errors in prices and quantities of netputs. The second scenario analyzes the effect of price endogeneity and maximization under uncertainty. Therefore, both scenarios consider the 8 netputs of the DGP. Finally, the third scenario incorporates both omission and aggregation of netputs. In each of these scenarios, we follow the same parameterization to represent the source of noise, and the same approach to deal with that problem in estimation, as in the baseline scenario. Note, however, that all of the scenarios assume aggregation across technologically heterogeneous firms, which is required to obtain the variables in time series.

Table 2 shows the results from the additional scenarios. It can be observed from the elasticities with respect to variable netput prices that, in each scenario, the bias in the
Figure 6. Elasticities of variable netput quantities with respect to quasi-fixed netputs: Initial firm-level distributions versus estimated values obtained from noisy data

Note: Each Eik panel is the elasticity of netput i with respect to the quasi-fixed input in the case of noisy data. The elasticity value is on the horizontal axis and histogram frequency on the vertical. Each histogram depicts the distribution across firms of the initial elasticity (Eik). The circle is the SUR estimated elasticity (Êik) and the "+" signs denote the bounds of the 95% confidence interval.
recovery of the initial elasticities is smaller than in the baseline, but it is still sizable. In the case of elasticities with respect to quantity of quasi-fixed netputs, prediction errors also generate a high percentage deviation from the initial parameters, while the endogeneity and expected utility on the one hand, and omission and aggregation of netputs on the other, generate high but smaller deviations. Table 2 also shows that the sum of the percentage deviations across the three additional scenarios is substantially higher than the percentage deviation in the baseline scenario. This result might be due to the fact that individual sources of noise contribute to the overall deviation in different directions. More research is needed to further assess their contribution under different configurations of parameters calibrated to the reality of other regions and countries, implying sources with different levels of noise or inducing changes in the level of one source as the others remain fixed.

Sensitivity Analysis

We explore the robustness of noisy data estimation results to changes in the sources and levels of noise. According to this analysis, the specific combinations of the two or four netputs being omitted or aggregated, respectively, do not seem to affect the extent of the percentage deviation found. For example, table 3 shows that estimation with the noisy data structures shown in panel B of figure 3 yields price elasticity estimates with respect to variable netputs that are off by 88% (case 1) and 52% (case 2) relative to the starting price elasticities. Similarly, the corresponding elasticities with respect to quasi-fixed netputs differ by 25% and 120% from their starting values. Therefore, these values are quantitatively similar to the ones for the baseline scenario.26

We regularly encounter empirical applications of duality theory with time-series data where observations from different regions or states are pooled together for estimation (e.g., Schuring, Huffman, and Fan (2011), and O'Donnell, Shumway, and Ball (1999)). By expanding the sample size, pooling has the advantage of increasing the degrees of freedom, which is especially helpful in the presence of several explanatory variables. However, pooling also has the downside of adding observations from states that are likely to have different technology. We explore the consequences of such practice by conducting a sensitivity analysis.

Pooling implies seeking to recover production parameters from firms that are more heterogeneous than in the case of a single state, usually by adding regional- or state-level dummy variables. Hence, for this sensitivity analysis, we exploit the noisy simulated data for all regions (1, 2, and 3). For each region and in each of the 50 time-periods, we take five samples of 2,000 observations representing samples of firms from five states within the region, and aggregate across its heterogeneous firms to obtain the corresponding state-level time series. The resulting dataset, [...]

26. We present only a few alternative scenarios due to the computational burden of such analysis.
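The state-level pooling exercise can be sketched as follows. The paper takes five samples of 2,000 observations per region and period; whether the samples overlap is not stated, so a disjoint partition of the region's 10,000 firms is assumed here:

```python
import numpy as np

rng = np.random.default_rng(4)
F, N, T = 10_000, 8, 50    # firms per region, variable netputs, periods
S, F_S = 5, 2_000          # "states" per region, firms per state

y = rng.uniform(1.0, 2.0, size=(F, N, T))   # illustrative firm-level quantities

# Partition the region's firms into five disjoint "state" samples.
perm = rng.permutation(F)
states = perm.reshape(S, F_S)

# Aggregate within each state, as in equation (7), to obtain the five
# state-level time series that are then pooled for estimation.
y_state = np.stack([y[idx].sum(axis=0) for idx in states])   # (S, N, T)

assert y_state.shape == (S, N, T)
# With a disjoint partition, the state series sum back to the regional one.
assert np.allclose(y_state.sum(axis=0), y.sum(axis=0))
```

Prices would be aggregated within each state using the quantity-share weights of equation (9) in the same way.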
Also, because we know the existing sources of noise in the data, we explicitly address them in the estimation. We deal with serial correlation by estimating the model with data [...]

[...] errors in variables (such as omitted variables or aggregation of netputs). Also, the heterogeneity of the underlying firms is relevant when recovering the underlying production [...]
Appendix: How Reliable Is Duality Theory in Empirical Work?

Appendix A. Simulation of Initial Production [...]

[...] dispersion comes from heterogeneity across firms. Since it is most likely that reality falls somewhere between the two extremes, we estimate the portion of yield variation attribut- [...]

[...] from dr representing random deviations from national prices.27 Then, firm-specific random prices are generated as deviations from the respective regional average, pnft = pnrt enft, [...]

[...] prices.29 This endogeneity induces correlation between prices and the error term in system (11), which violates the orthogonality assumption required by ordinary least squares.
Table B.1. Estimation Results of the OLS Regression Model Used to Generate Random Exogenous "National" Prices from Equation (B.1)

          n = 1    n = 2    n = 3    n = 4    n = 5    n = 6    n = 7    n = 8

Table B.2. Calibrated Parameter Values for Market Shocks (φnt) Used in Equation (B.6)

          n = 1    n = 2    n = 3    n = 4    n = 5    n = 6    n = 7    n = 8
q0n       5.1096   5.2822   5.1794   4.7640   4.2696   4.4937   4.4506   4.6259
q1n       0.5000   0.5000   0.5000   0.5000   0.5000   0.5000   0.5000   0.5000
σ²nn      0.1779   0.0780   0.1406   0.3053   0.3605   0.4664   0.7051   0.2690
[...] matrices A1, A12, and A11, and such deviations are independent from the price shocks (enrt and enft) and the production shocks (wft), for a sufficiently large number of firms (F) and by the law of large numbers, yt converges in distribution to a Normal random variable whose mean is

(B.4)  ȳt = F(A11)⁻¹ p̄US,t − F(A11)⁻¹ A1 + F(A11)⁻¹ A12 K̄t + F w̄t

This expression depends only on the known "average" production parameters and "national" time-t prices p̄US,t, which are the same as those in the isoelastic demand or supply function (B.2) faced by firms. Thus, the time-t endogenous "national" netput prices that clear the markets (Qt = ȳt) are obtained by numerically solving for p̄US,t the system

(B.5)  Φt p̄US,t^g = F(A11)⁻¹ p̄US,t − F(A11)⁻¹ A1 + F(A11)⁻¹ A12 K̄t

(B.6)  ln(φnt) = q0n + q1n ln(φn,t−1) + ξnt

where ξnt ~ Normal(0, σ²nn). Parameters q0n, q1n, and σ²nn are calibrated so that they yield "national" prices p̄US,t with descriptive statistics comparable to the observed output and input prices (see table B.2 for the specific parameter values). To generate the shock series, we set ln(φn,t=0) = q0n/(1 − q1n) (i.e., the unconditional mean of ln(φnt)), take 10,000 draws from a Normal(0, σ²nn) distribution, plug them into expression (B.6) to generate iteratively the systematic shocks (φnt), and keep the last 50 of them as the series of shocks used to solve for the endogenous "national" netput prices p̄US,t from equality (B.5).31

It is worth noting that price variability is critical to recover production parameters because it contributes to the identification of a bigger portion of the production function. The simulated systematic shocks are independent from each other because random draws from Normal(0, σ²nn) are independent; however, when plugged into system (B.5), correlation between national prices is induced through [...]

[...] order conditions for optimization to the numerical routine as equality and inequality constraints, respectively. The solution is the vector of expected netput quantities for each [...]
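The shock-generation recipe of equation (B.6) — start at the unconditional mean, iterate through 10,000 draws, keep the last 50 — translates directly into code. The parameter values for n = 1 are taken from table B.2:

```python
import numpy as np

def simulate_systematic_shocks(q0, q1, sigma2, burn=10_000, keep=50, seed=0):
    """Systematic market shocks phi_nt of equation (B.6):
        ln(phi_nt) = q0 + q1 * ln(phi_{n,t-1}) + xi_nt,  xi_nt ~ N(0, sigma2),
    started at the unconditional mean ln(phi_0) = q0 / (1 - q1), iterated
    `burn` times, keeping the last `keep` values of phi.
    """
    rng = np.random.default_rng(seed)
    ln_phi = q0 / (1.0 - q1)
    path = np.empty(burn)
    for t in range(burn):
        ln_phi = q0 + q1 * ln_phi + rng.normal(scale=np.sqrt(sigma2))
        path[t] = ln_phi
    return np.exp(path[-keep:])

phi = simulate_systematic_shocks(q0=5.1096, q1=0.5, sigma2=0.1779)  # n = 1
assert phi.shape == (50,)
assert np.all(phi > 0)   # log-normal by construction
```

Each netput's series uses its own (q0n, q1n, σ²nn) column of table B.2; the draws are independent across n, so any correlation among the resulting national prices is induced downstream, not by the shocks themselves.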
[...] can be obtained for several inputs and outputs, they are aggregated to preserve degrees of freedom, or because they are not the objective of the study. To incorporate this feature [...]

[...] initial wealth measured as total net assets (TNAft) is strongly associated to the value of production (VPft) in the USDA-ARMS database. More specifically, panel A of table D.1 [...]

33. TNA is computed as "value of total farm financial assets" minus "total farm financial debt," and VP is calculated as "all crops – value of production" plus "all livestock – value of production."
Table D.1. Parameter Estimates of Regression (D.1), and the Form of Its Heteroskedasticity

B. Dependent variable: ln(σ̂²sft), where σ̂²sft is the sample estimate of σ²sft

                           Region 1    Region 2    Region 3
  Constant (d0)            2.062       1.925       1.5069
                           (0.045)     (0.058)     (0.048)
  VP (d1)                  1.544       0.964       0.416
                           (0.071)     (0.063)     (0.035)
  VP² (d2)                 0.105       0.026       0.006
                           (0.008)     (0.002)     (0.001)
  σ̂²e                      4.343       4.710       4.040
  R²                       0.150       0.113       0.080

Note: VP: Value of Production. Standard errors shown within parentheses.
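The heteroskedasticity form in panel B of table D.1 is a regression of the log sample variance on VP and VP². A sketch of that fit on simulated data (the coefficient values below are illustrative, not those of the table):

```python
import numpy as np

def fit_log_variance_regression(vp, sigma2_hat):
    """OLS fit of the heteroskedasticity form in panel B of table D.1:
        ln(sigma2_hat_sft) = d0 + d1 * VP + d2 * VP^2 + e.
    Returns the coefficient vector (d0, d1, d2).
    """
    X = np.column_stack([np.ones_like(vp), vp, vp ** 2])
    coef, *_ = np.linalg.lstsq(X, np.log(sigma2_hat), rcond=None)
    return coef

rng = np.random.default_rng(5)
vp = rng.uniform(1.0, 5.0, size=500)     # value of production (illustrative)
d_true = np.array([2.0, -0.9, 0.1])      # illustrative d0, d1, d2
sigma2 = np.exp(d_true[0] + d_true[1] * vp + d_true[2] * vp ** 2
                + rng.normal(scale=0.05, size=vp.size))

d_hat = fit_log_variance_regression(vp, sigma2)
# With little noise, the fitted coefficients recover the generating ones.
assert np.allclose(d_hat, d_true, atol=0.1)
```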