Dependence Modeling, Volume 10 (1): 14 – Jan 1, 2022

- Publisher
- de Gruyter
- Copyright
- © 2022 Jian Zhai et al., published by De Gruyter
- ISSN
- 2300-2298
- DOI
- 10.1515/demo-2022-0108

1 Introduction

In this article, we consider the following production system:

(1) $\ln y = \alpha_0 + \sum_{j=1}^{J} \alpha_j \ln x_j + \upsilon - u$,

(2) $\ln x_1 - \ln x_j = \ln(\alpha_1 p_j / \alpha_j p_1) + \omega_j, \quad j = 2, \ldots, J$,

where the first equation represents a Cobb-Douglas type production function of a generic production unit, with output $y$ and inputs $x_j$, $j = 1, \ldots, J$. The set of $J-1$ equations that follow the production function derive from the first-order conditions of cost minimization and define the $J-1$ optimal input ratios in terms of the input prices $p_j$. This is the production system introduced in the classic productivity papers by Schmidt and Lovell [21,22] and most recently considered by Amsler et al. [5]. For a textbook exposition of how to derive these equations from the first principles of economic theory, we refer interested readers to Kumbhakar and Lovell ([16], Section 4.2.2.1).

The error terms in each of the equations have an economic interpretation. The two errors in the first equation are a symmetric component $\upsilon$, which accounts for random factors affecting the production potential (e.g., weather), and an asymmetric component $u$, which represents the technical inefficiency of production, that is, the percentage by which the production unit falls short of the stochastic production frontier. The $\omega$'s denote the allocative inefficiencies of the production unit; that is, they are the percentages by which the ratios $x_1/x_j$ deviate from their cost-minimizing values. Clearly, production units may be technically inefficient due to allocative inefficiencies and vice versa.
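To make the setup concrete, the data-generating process in Eqs. (1) and (2) can be simulated directly. The sketch below is ours, with hypothetical parameter values and, for now, independent error terms; it draws the errors, solves Eq. (2) for the optimal input ratios, and then evaluates Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5          # production units
J = 3          # inputs
alpha0, alpha = 1.0, np.array([0.3, 0.4, 0.2])   # hypothetical technology parameters
p = rng.uniform(1.0, 2.0, size=(n, J))           # input prices

v = rng.normal(0.0, 0.1, size=n)                 # symmetric noise
u = np.abs(rng.normal(0.0, 0.2, size=n))         # technical inefficiency (half-normal)
omega = rng.normal(0.0, 0.15, size=(n, J - 1))   # allocative inefficiencies, j = 2,...,J

# Solve Eq. (2) for ln x_j given ln x_1 (ln x_1 normalized here purely for illustration)
lnx = np.zeros((n, J))
lnx[:, 0] = 1.0
for j in range(1, J):
    lnx[:, j] = lnx[:, 0] - np.log(alpha[0] * p[:, j] / (alpha[j] * p[:, 0])) - omega[:, j - 1]

# Eq. (1): log-output, with u pulling production below the stochastic frontier
lny = alpha0 + lnx @ alpha + v - u
```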
Therefore, dependence between $u$ and the $\omega$'s should be permitted within the production system.

The choice of a parameterization for the dependence between $u$ and the $\omega$'s is not trivial. A naive approach would simply allow for a nonzero correlation between $u$ and the $\omega$'s. Such a dependence structure would imply that either too large or too small values of the input ratio $x_1/x_j$, but not both at the same time, are associated with greater inefficiency. However, from an economic standpoint, it is equally plausible that technically inefficient firms exhibit both positive and negative deviations from the cost-minimizing input ratios. As such, we require a dependence structure that permits correlation between $u$ and $|\omega_j|$, not between $u$ and $\omega_j$. Dependence structures specific to productivity analysis are a new field in dependence modeling that is only starting to attract interest. A recent survey of copula-based models of productivity can be found in Prokhorov [19].

Dependence per se is not new to productivity analysis. Models that permit dependence between $u$ and $|\omega_j|$ without copulas have been considered before in the literature [22]. Let $u = |u^{\ast}|$, where $u^{\ast} \sim N(0, \sigma_u^2)$, and assume that the errors $(u^{\ast}, \omega_2, \ldots, \omega_J)$ are jointly normal. Schmidt and Lovell [22] show that in this case $u$ is uncorrelated with any $\omega_j$ but, by the result of Nabeya [17], $u$ and $|\omega_j|$ have a nonzero covariance of the form

$$\frac{2 \sigma_u \sigma_j}{\pi} \left[ \sqrt{1 - \rho^2} + \rho \arcsin(\rho) - 1 \right],$$

where $\sigma_j$ is the standard deviation of $\omega_j$ and $\rho$ is the correlation between $u^{\ast}$ and $\omega_j$. Much more recently, Amsler et al.
[5] considered a more general dependence framework. They developed a new family of copulas, referred to as APS-T copulas, for which, given any set of marginals $F_u, F_{\omega_2}, \ldots, F_{\omega_J}$, the variables $u$ and $\omega_j$, $j = 2, \ldots, J$, are uncorrelated but $u$ and $|\omega_j|$ are correlated. This new copula family is based upon the class of Sarmanov copulas. The authors also derive a remarkably simple approach to constructing copulas for dimensions greater than two. However, this approach to constructing high-dimensional APS-T copulas restricts the range of dependence that can be covered (see Result 10 in the study by Amsler et al. [5]).

In this article, we propose a canonical vine decomposition (see Bedford and Cooke [6]) of the joint distribution of $(u, \omega_2, \ldots, \omega_J)$ that reflects the desired dependence structure discussed earlier. Specifically, we use the bivariate APS copulas developed by Amsler et al. [5] and the Gaussian copula as the building blocks of the vine decomposition. Vine copulas simplify the task of constructing multivariate joint densities with a desired dependence structure by representing high-dimensional densities as products of bivariate conditional copulas. The canonical vine is a natural representation in our setting, since we are concerned with modeling bivariate dependence between each pair $(u, \omega_j)$. Importantly, our vine copula construction allows for a wider range of dependence than would be possible using the APS-T copula approach of Amsler et al. [5]. We discuss practicalities associated with a maximum simulated likelihood estimation (MSLE) procedure for the production system that uses our vine copula decomposition.
Finally, we propose a vine copula-based estimator of the technical inefficiency score that uses information from the estimated allocative inefficiency terms.

The proposed vine decomposition is relevant more generally in stochastic frontier models with endogeneity, where we wish to keep the marginal distribution of the inefficiency term prespecified and allow dependence between it and the reduced-form errors or endogenous regressors (see, e.g., Amsler et al. [4], Tran and Tsionas [25]). Our vine copula decomposition is also of general statistical merit as a way of generating dependent random variables with a specific dependence structure, that is, the dependence structure implied by the bivariate APS-2 copulas, from uncorrelated random variables.

The remainder of this article is organized as follows. Section 2 briefly introduces vine copulas. Section 3 reviews the APS copula family proposed by Amsler et al. [5]. Section 4 discusses our proposed vine copula construction. Section 5 discusses the details of MSLE and proposes a nonparametric estimator of the technical inefficiency scores. Sections 6 and 7 present selected simulation results and an empirical application, respectively. Section 8 concludes.

2 Vine copulas

Copulas are multivariate distributions with uniform marginals (see, e.g., Nelsen [18] for an early introduction).
The well-cited theorem of Sklar [23] states that any joint distribution of continuous random variables can be written uniquely as a copula function taking the univariate marginal distributions as arguments:

$$H(z_1, \ldots, z_T) = C(F_1(z_1), \ldots, F_T(z_T)).$$

In terms of densities, we have

$$h(z_1, \ldots, z_T) = c(F_1(z_1), \ldots, F_T(z_T)) \prod_{t=1}^{T} f_t(z_t),$$

where $f_t$ is the marginal density corresponding to the marginal cdf $F_t$, $c$ is the $T$-copula density corresponding to the $T$-copula function $C$, and $h$ is the joint density function corresponding to the joint cdf $H$.

A vine copula decomposition, proposed by Joe [14], makes use of an equivalent representation of $h(z_1, \ldots, z_T)$ in terms of conditional densities. For example, if $T = 4$, a canonical vine decomposition is

(3) $h(z_1, \ldots, z_4) = c_{12}(F_1(z_1), F_2(z_2)) \cdot c_{13}(F_1(z_1), F_3(z_3)) \cdot c_{14}(F_1(z_1), F_4(z_4)) \times c_{23|1}(F(z_2|z_1), F(z_3|z_1)) \cdot c_{24|1}(F(z_2|z_1), F(z_4|z_1)) \times c_{34|12}(F(z_3|z_1,z_2), F(z_4|z_1,z_2)) \times \prod_{t=1}^{4} f_t(z_t)$.

Here, the joint density is decomposed into a product of six 2-copula densities: three act on unconditional marginal cdf's, two act on conditional cdf's with one variable in the conditioning set, and one acts on a conditional cdf with two variables in the conditioning set (we provide details in Appendix A). The decomposition is particularly useful in settings with large $T$. When discussing the bivariate copulas within a vine copula decomposition, we will selectively use the abbreviations $c_{ij} := c_{ij}(F_i(z_i), F_j(z_j))$ and $c_{ij|D} := c_{ij|D}(F_{i|D}(z_i|\mathbf{z}_D), F_{j|D}(z_j|\mathbf{z}_D))$ for distinct indices $i, j$, where $D$ is the set of indices $\{1, \ldots, d\}$ not including $i, j$.

An important element of a vine copula construction is the conditional univariate cdf's used as arguments of the 2-copula densities. For a single variable in the conditioning set, it is easy to see that the conditional cdf can be written in terms of the corresponding copula as follows:

$$F(z_i|z_j) = \frac{\partial C_{ij}(F_i(z_i), F_j(z_j))}{\partial F_j(z_j)}.$$

In the case of more than one variable in the conditioning set, Joe [14] shows that the following formula applies:

$$F(z_i|z_j, z_k, z_l) = \frac{\partial C_{ij|kl}(F(z_i|z_k, z_l), F(z_j|z_k, z_l))}{\partial F(z_j|z_k, z_l)}.$$

Clearly, it is important that the 2-copula densities can be integrated efficiently in each dimension.

Vine copula constructions, including ours, typically invoke the so-called simplifying assumption, i.e., that the bivariate copulas acting on conditional cdf's in the vine copula decomposition do not themselves depend on the values of the conditioning variable(s) (see Haff et al. [13]).
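The partial-derivative formula for conditional cdf's is easy to verify numerically. The following minimal check is ours and uses the Eyraud-Farlie-Gumbel-Morgenstern (EFGM) copula, a standard textbook example (and a member of the Sarmanov class discussed later) whose cdf and conditional are both available in closed form; a finite-difference derivative of $C$ should agree with the closed-form conditional:

```python
# Check F(z_i|z_j) = dC_ij/dF_j(z_j) for the EFGM copula.
theta = 0.5

def C(u, v):
    # EFGM copula cdf
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def cond(u, v):
    # Closed-form conditional cdf F(z_i|z_j) = dC/dv
    return u + theta * u * (1.0 - u) * (1.0 - 2.0 * v)

u, v, eps = 0.7, 0.4, 1e-6
numeric = (C(u, v + eps) - C(u, v - eps)) / (2.0 * eps)  # finite-difference dC/dv
print(abs(cond(u, v) - numeric))  # prints a value near zero (finite-difference error)
```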
However, even under the simplifying assumption, vine copula constructions are known to be sufficiently general to capture a wide range of dependence even in extremely high dimensions [24].

3 The APS copula family

Amsler et al. [5] consider copulas of the form

$$c(\xi_1, \xi_2) = 1 + \theta g(\xi_1) a(\xi_2),$$

where $\int_0^1 g(s) \, {\rm d}s = \int_0^1 a(s) \, {\rm d}s = 0$ and $\theta$ satisfies the restrictions that are necessary for $c$ to be a density (see, e.g., Sarmanov [20]). The authors characterize the functions $g(\xi_1)$ and $a(\xi_2)$ such that ${\rm cov}(\xi_1, \xi_2) = 0$, while ${\rm cov}(\xi_1, q(\xi_2)) \neq 0$ for some function $q(\cdot)$, and call a 2-copula with this property an APS-2 copula.

It turns out that an APS-2 copula is obtained when $g(\xi_1) = 1 - 2\xi_1$ and $a(\xi_2) = 1 - k_q^{-1} q(\xi_2)$, where $q(\cdot)$ is integrable on $[0,1]$, symmetric around $\xi = 1/2$, monotonically decreasing on $[0, \frac{1}{2}]$, and monotonically increasing on $[\frac{1}{2}, 1]$, and where $k_q = \int_0^1 q(s) \, {\rm d}s$. For an APS-2 copula,

$${\rm cov}(\xi_1, q(\xi_2)) = \frac{1}{6} \theta k_q^{-1} {\rm Var}(q(\xi_2)),$$

so that $\theta$ is proportional to the correlation between $\xi_1$ and $q(\xi_2)$. Amsler et al. [5] show that if the original random variables $z_j = F^{-1}(\xi_j)$ have symmetric marginals with finite variance and are linked by an APS-2 copula, then ${\rm cov}(z_1, z_2) = 0$ while generally ${\rm cov}(z_1, |z_2|) \neq 0$.
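The uncorrelated-but-dependent behavior can be checked by simulation. The sketch below is ours; it samples from the Sarmanov copula with $g(\xi_1) = 1 - 2\xi_1$ and the quadratic choice $q(\xi_2) = (\xi_2 - 1/2)^2$ by conditional inversion: $F(\xi_1|\xi_2) = \xi_1 + \theta a(\xi_2)(\xi_1 - \xi_1^2)$ is quadratic in $\xi_1$ and inverts in closed form. The linear correlation vanishes while the correlation with $q(\xi_2)$ does not:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n = 0.4, 500_000   # theta chosen so the copula density stays nonnegative

# Sample from c(xi1, xi2) = 1 + theta*(1 - 2*xi1)*(1 - 12*(xi2 - 1/2)**2)
# via conditional inversion of F(xi1|xi2) = xi1 + t*(xi1 - xi1**2), t = theta*a(xi2).
xi2 = rng.uniform(size=n)
v = rng.uniform(size=n)
t = theta * (1.0 - 12.0 * (xi2 - 0.5) ** 2)
t_safe = np.where(np.abs(t) < 1e-12, 1.0, t)       # guard against division by ~0
xi1 = np.where(
    np.abs(t) < 1e-12,
    v,                                              # limit as t -> 0 is the uniform draw
    ((1.0 + t_safe) - np.sqrt((1.0 + t_safe) ** 2 - 4.0 * t_safe * v)) / (2.0 * t_safe),
)

# Linear correlation is zero up to Monte Carlo error ...
print(np.corrcoef(xi1, xi2)[0, 1])
# ... yet xi1 is clearly correlated with q(xi2) = (xi2 - 1/2)**2
print(np.corrcoef(xi1, (xi2 - 0.5) ** 2)[0, 1])
```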
This property is of particular importance to the production systems we consider in this study.

The two members of the APS-2 copula family studied by Amsler et al. [5] are expressed as follows:

APS-2A: $c(\xi_1, \xi_2) = 1 + \theta (1 - 2\xi_1) \left[ 1 - 12 \left( \xi_2 - \frac{1}{2} \right)^2 \right]$,

APS-2B: $c(\xi_1, \xi_2) = 1 + \theta (1 - 2\xi_1) \left[ 1 - 4 \left| \xi_2 - \frac{1}{2} \right| \right]$,

for which they show that

APS-2A: ${\rm corr}\left( \xi_1, \left( \xi_2 - \frac{1}{2} \right)^2 \right) = \frac{2}{\sqrt{15}} \theta$,

APS-2B: ${\rm corr}\left( \xi_1, \left| \xi_2 - \frac{1}{2} \right| \right) = \frac{1}{3} \theta$.

Extensions to $T > 2$ using 2-copulas are usually hard to achieve. Numerous noncompatibility results show that 2-copulas that act on 2-copulas generally do not produce 3- or 4-copulas (see, e.g., Nelsen [18], pp. 105-107). However, Amsler et al. [5] show that a valid $T$-copula can be constructed using $\binom{T}{2}$ 2-copulas as follows:

(4) $c(\xi_1, \ldots, \xi_T) = 1 + \sum_{1 \le i < j \le T} (c_{ij} - 1)$,

where $c_{ij}$ is the 2-copula of $\xi_i$ and $\xi_j$. For example, a valid 4-copula has the form

(5) $c(\xi_1, \ldots, \xi_4) = 1 + (c_{12} - 1) + (c_{13} - 1) + (c_{14} - 1) + (c_{23} - 1) + (c_{24} - 1) + (c_{34} - 1)$.

An interesting property of such copulas is that the 2-copulas implied by them are the 2-copulas used to construct them.
For example, for the 4-copula, the implied 2-copulas are $c_{12}, c_{13}, c_{14}, c_{23}, c_{24}$, and $c_{34}$.

On the basis of this result, Amsler et al. [5] define the APS-T copula as follows:

APS-T: $c(\xi_1, \ldots, \xi_T) = 1 + \sum_{1 \le i < j \le T} (c_{ij} - 1)$,

where $c_{1j}(\xi_1, \xi_j)$ is an APS-2 copula density, $j = 2, \ldots, T$, and $c_{kl}(\xi_k, \xi_l)$ is a bivariate Gaussian copula density, $k, l \neq 1$.

Two members of this family would arise if we use the bivariate APS-2A and APS-2B copulas for any $c_{1j}(\xi_1, \xi_j)$ with associated dependence parameter $\theta_{1j}$. Again, the key feature of this family is that ${\rm cov}(\xi_1, \xi_j) = 0$, while in general, ${\rm cov}(\xi_1, q(\xi_j)) \neq 0$, $j = 2, \ldots, T$. As such, Amsler et al.
[5] apply the APS-T copula to model the dependence between technical and allocative inefficiency in production systems.

The proposed four-dimensional copula is expressed as follows:

(6) $c(F(u), F(\omega_2), F(\omega_3), F(\omega_4)) = c_{u\omega_2}\{F_u(u), F_{\omega_2}(\omega_2)\} \cdot c_{u\omega_3}\{F_u(u), F_{\omega_3}(\omega_3)\} \cdot c_{u\omega_4}\{F_u(u), F_{\omega_4}(\omega_4)\} \cdot c_{\omega_2\omega_3|u}\{F(\omega_2|u), F(\omega_3|u)\} \cdot c_{\omega_2\omega_4|u}\{F(\omega_2|u), F(\omega_4|u)\} \cdot c_{\omega_3\omega_4|u\omega_2}\{F(\omega_3|u,\omega_2), F(\omega_4|u,\omega_2)\}$,

where $u$ is the technical inefficiency and the $\omega$'s are the allocative inefficiencies.

Two important limitations of the APS-T copula restrict the range of dependence between $T$ random variables it can accommodate. First, the Sarmanov specification is a perturbation of independence, which is generally not a comprehensive copula. For example, the Eyraud-Farlie-Gumbel-Morgenstern copula, which is a member of the Sarmanov class, cannot accommodate dependence outside the Kendall $\tau$ range of $[-2/9, 2/9]$. Second, by construction, the APS-T family ($T > 2$) can accommodate no dependence beyond that implied by pairwise copulas. This follows from Eq. (4), since all low-dimensional marginals of APS-T are also expressed in terms of the $c_{ij}$.
For example, $c_{123}$ contains terms of the form $g(\xi_i) a(\xi_j)$, $i, j = 1, 2, 3$, but no terms of the form $g(\xi_1) a(\xi_2) m(\xi_3)$ for some function $m$. This places additional restrictions on the type and strength of dependence that can be estimated using APS-T.

4 Vine copulas for a production system

We propose to use vine copulas to construct the joint density of the error terms in the system (1) and (2). To begin, we follow standard conventions from the prior literature and assume that the marginal distributions of $\upsilon$ and $\omega_j$ are normal and that $u$ follows a half-normal distribution (any other asymmetric distribution would also be applicable). We use $F$ and $f$ to denote a cdf and pdf, respectively, and the subscripts will denote which error's distribution is in question; e.g., $F_u$ and $F_{\upsilon}$ denote the cdf of $u$ and $\upsilon$, respectively, while $F_j$ is the cdf of $\omega_j$, $j = 2, \ldots, J$.

For example, if $J = 4$, the vine decomposition (Eq. (3)) of the joint density of $(u, \omega_2, \omega_3, \omega_4)$ can be written as follows:

(7) $h(u, \omega_2, \omega_3, \omega_4) = c_{12}(F_u(u), F_2(\omega_2)) \cdot c_{13}(F_u(u), F_3(\omega_3)) \cdot c_{14}(F_u(u), F_4(\omega_4)) \times c_{23|1}(F(\omega_2|u), F(\omega_3|u)) \cdot c_{24|1}(F(\omega_2|u), F(\omega_4|u)) \times c_{34|12}(F(\omega_3|u,\omega_2), F(\omega_4|u,\omega_2)) \times \prod_{j=2}^{4} f_j(\omega_j) \cdot f_u(u)$.

We assume that all the random variables are continuous, and hence the 2-copulas in this vine decomposition are unique. Since we wish to preserve the key property of the APS copula family for the pairs $(u, \omega_j)$, $j = 2, 3, 4$, a natural choice for $c_{1j}$, $j = 2, 3, 4$, is an APS-2 copula. For the symmetric errors (the $\omega$'s), we assume joint normality, and hence a conventional choice for $c_{kl|1}$ and $c_{34|12}$ is the Gaussian copula. Since the bivariate Gaussian copula is exchangeable, the various potential orderings of $(\omega_2, \ldots, \omega_J)$ in Eq. (6) will all result in equivalent specifications.

A conceptual difference from the APS-4 copula is that the vine copula approach places virtually no restriction on the type of dependence (aside from that between $u$ and $\omega_j$) that can be accommodated. For example, an APS-T copula must be of the structure given in Eq.
(5), while our vine decomposition is subject to no such restriction. Moreover, since the vine decomposition of the joint density uses bivariate APS-2 copulas, it can cover a theoretically wider range of dependence than the equivalent APS-T copula construction, because the equivalent APS-T copula is subject to restrictions on $|\theta|$ (see Result 10 in the study by Amsler et al. [5]). Our vine approach is also more general than the approach based on joint normality used by Schmidt and Lovell [22].

The use of APS-2 copulas in the vine decomposition provides a number of important computational advantages, because the conditional distributions have simple closed-form expressions. To see this, differentiate the APS-2 copula function (or integrate the APS-2 copula density) with respect to the second argument. For example, for $u$ and $\omega_2$, the two members of the APS-2 copula family imply the following distributions:

APS-2A: $C_{12}(F_u(u), F_2(\omega_2)) = F_u(u) \cdot F_2(\omega_2) + \theta_{12} \cdot F_u(u) \cdot (1 - F_u(u)) \cdot F_2(\omega_2) \left[ 1 - (4 F_2(\omega_2)^2 - 6 F_2(\omega_2) + 3) \right]$,

APS-2B: $C_{12}(F_u(u), F_2(\omega_2)) = \begin{cases} F_u(u) \cdot F_2(\omega_2) + \theta_{12} \cdot F_u(u) \cdot (1 - F_u(u)) \cdot F_2(\omega_2) \cdot (2 F_2(\omega_2) - 1), & F_2(\omega_2) \le \frac{1}{2}, \\ F_u(u) \cdot F_2(\omega_2) + \theta_{12} \cdot F_u(u) \cdot (1 - F_u(u)) \cdot (F_2(\omega_2) - 1) \cdot (1 - 2 F_2(\omega_2)), & F_2(\omega_2) > \frac{1}{2}. \end{cases}$

Then, the required conditional distributions can be written as follows:

APS-2A: $F(\omega_2|u) = F_2(\omega_2) + \theta_{12} (1 - 2 F_u(u)) (-4 F_2(\omega_2)^3 + 6 F_2(\omega_2)^2 - 2 F_2(\omega_2))$

and

APS-2B: $F(\omega_2|u) = \begin{cases} F_2(\omega_2) + \theta_{12} F_2(\omega_2) (2 F_2(\omega_2) - 1) (1 - 2 F_u(u)), & F_2(\omega_2) \le \frac{1}{2}, \\ F_2(\omega_2) + \theta_{12} (F_2(\omega_2) - 1) (1 - 2 F_2(\omega_2)) (1 - 2 F_u(u)), & F_2(\omega_2) > \frac{1}{2}. \end{cases}$

The joint (conditional) normality assumption on the $\omega$'s also leads to a simple formula for conditional cdf's.
For example, the conditional copula of $\omega_2$ and $\omega_3$ given $u$ can be written as follows:

$$C_{23|1}(F(\omega_2|u), F(\omega_3|u)) = \Phi_2(\Phi^{-1}(F(\omega_2|u)), \Phi^{-1}(F(\omega_3|u)); \rho_{23}),$$

where $\Phi_2$ denotes a bivariate normal cdf, $\Phi^{-1}$ denotes the inverse of the standard normal cdf, and $\rho_{23}$ is the correlation between $\Phi^{-1}(F(\omega_2|u))$ and $\Phi^{-1}(F(\omega_3|u))$. Thus, the conditional distribution function for $\omega_3$ given $u$ and $\omega_2$ can be written as follows:

$$F(\omega_3|u, \omega_2) = \frac{\partial C_{23|1}(F(\omega_2|u), F(\omega_3|u))}{\partial F(\omega_2|u)} = \Phi\left( \frac{\Phi^{-1}(F(\omega_3|u)) - \rho_{23} \Phi^{-1}(F(\omega_2|u))}{\sqrt{1 - \rho_{23}^2}} \right).$$

Other conditional distributions that serve as arguments of the bivariate copula densities in vine decompositions of the form of Eq. (7) can be derived similarly.

5 Estimation of parameters and technical inefficiencies

We now seek to construct a likelihood for the production system based upon the joint density of $(u, \upsilon, \omega_2, \ldots, \omega_J)$ and maximize this likelihood to obtain the parameter vector. It is customary to assume that $\upsilon$, being random noise, is independent of $u$ and the $\omega$'s.
Then, the joint density of $\upsilon$, $u$, and the $\omega$'s can be written as follows:

$$h(\upsilon, u, \omega_2, \ldots, \omega_J) = f_{\upsilon}(\upsilon) \cdot c(F_u(u), F_2(\omega_2), \ldots, F_J(\omega_J)) \cdot f_u(u) \cdot \prod_{j=2}^{J} f_j(\omega_j),$$

where the copula term assumes a vine decomposition similar to Eq. (7).

As noted by Amsler et al. [5], $u$ and $\upsilon$ are not observed separately. To apply MLE to estimate the parameters of our production system, we need the joint density of $\upsilon - u$ and the $\omega$'s. Let $\varepsilon = \upsilon - u$. Then, as long as $h(\upsilon, u, \omega_2, \ldots, \omega_J)$ is available, Amsler et al. [5] obtain the required joint density using an expectation over the distribution of $u$ as follows:

(8) $h(\varepsilon, \omega_2, \ldots, \omega_J) = \int h(u + \varepsilon, u, \omega_2, \ldots, \omega_J) \, {\rm d}u = \prod_{j=2}^{J} f_j(\omega_j) \cdot E_u[c(F_u(u), F_2(\omega_2), \ldots, F_J(\omega_J)) \cdot f_{\upsilon}(u + \varepsilon)]$,

where $h(\varepsilon, \omega_2, \ldots, \omega_J)$ is the joint density of $\upsilon - u$ and the $\omega$'s and $E_u$ is the expectation with respect to $f_u(u)$. The joint density can be approximated to a desired precision by simulation after taking the average over a large number of random draws of $u$.

The vine decomposition permits an additional representation of $h(\varepsilon, \omega_2, \ldots, \omega_J)$ given by Eq.
(7) and the assumption that the copulas $c_{1j}$, $j = 2, 3$, are APS-2 copulas and the conditional copulas linking the $\omega$'s are Gaussian. For example, for $J = 3$,

$h(\varepsilon, \omega_2, \omega_3) = f_2(\omega_2) \cdot f_3(\omega_3) \cdot \{ E_u[c_{23|1} \cdot f_{\upsilon}(u + \varepsilon)] + [\theta_{12} \cdot a(F_2(\omega_2)) + \theta_{13} \cdot a(F_3(\omega_3))] \times E_u[g(F_u(u)) \cdot c_{23|1} \cdot f_{\upsilon}(u + \varepsilon)] + \theta_{12} \cdot \theta_{13} \cdot a(F_2(\omega_2)) \cdot a(F_3(\omega_3)) \times E_u[g^2(F_u(u)) \cdot c_{23|1} \cdot f_{\upsilon}(u + \varepsilon)] \}$,

where $c_{23|1}$ denotes the Gaussian copula density evaluated at $\xi_j = F(\omega_j|u)$, $j = 2, 3$. The Gaussian copula density can be written as follows:

$$c_{23|1}(\xi_2, \xi_3) = \frac{1}{\sqrt{1 - \rho_{23}^2}} \exp\left( -\frac{\rho_{23}^2 (c_2^2 + c_3^2) - 2 \rho_{23} \cdot c_2 \cdot c_3}{2 (1 - \rho_{23}^2)} \right),$$

where $c_j = \Phi^{-1}(\xi_j)$.
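The simulated evaluation of the density in Eq. (8) can be sketched in a few lines. The code below is ours, with hypothetical parameter values and zero-mean $\omega$ margins; it uses APS-2A copulas for $c_{12}$ and $c_{13}$, the Gaussian copula density for $c_{23|1}$, and a half-normal $u$, and validates the Monte Carlo average against the classic normal/half-normal convolution density, to which the $\varepsilon$-margin collapses when all dependence parameters are set to zero:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
S = 200_000
sig_u, sig_v, sig_w = 0.3, 0.2, 0.15          # hypothetical scale parameters
th12, th13, rho23 = 0.4, -0.3, 0.5            # hypothetical dependence parameters

g = lambda xi: 1.0 - 2.0 * xi                 # APS-2 "g" function
a = lambda xi: 1.0 - 12.0 * (xi - 0.5) ** 2   # APS-2A "a" function

def h_sim(eps, w2, w3, th12=th12, th13=th13, rho=rho23):
    """Simulated joint density h(eps, w2, w3) from Eq. (8), APS-2A / Gaussian vine."""
    u = sig_u * np.abs(rng.standard_normal(S))            # draws from half-normal
    Fu = 2.0 * norm.cdf(u / sig_u) - 1.0                  # half-normal cdf
    F2, F3 = norm.cdf(w2, scale=sig_w), norm.cdf(w3, scale=sig_w)
    c12 = 1.0 + th12 * g(Fu) * a(F2)                      # APS-2A copula densities
    c13 = 1.0 + th13 * g(Fu) * a(F3)
    # conditional cdfs F(w_j|u) implied by the APS-2A copula, mapped to normal scores
    G = lambda F: -4.0 * F**3 + 6.0 * F**2 - 2.0 * F
    x2 = norm.ppf(np.clip(F2 + th12 * (1.0 - 2.0 * Fu) * G(F2), 1e-12, 1 - 1e-12))
    x3 = norm.ppf(np.clip(F3 + th13 * (1.0 - 2.0 * Fu) * G(F3), 1e-12, 1 - 1e-12))
    c23 = np.exp(-(rho**2 * (x2**2 + x3**2) - 2 * rho * x2 * x3)
                 / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)
    f_v = norm.pdf(u + eps, scale=sig_v)
    return norm.pdf(w2, scale=sig_w) * norm.pdf(w3, scale=sig_w) * np.mean(c12 * c13 * c23 * f_v)

# Sanity check under independence: the eps-margin is the normal/half-normal density
sig = np.hypot(sig_u, sig_v)
lam = sig_u / sig_v
eps0 = -0.1
closed_form = (2.0 / sig) * norm.pdf(eps0 / sig) * norm.cdf(-eps0 * lam / sig)
mc = h_sim(eps0, 0.05, -0.02, th12=0.0, th13=0.0, rho=0.0) / (
    norm.pdf(0.05, scale=sig_w) * norm.pdf(-0.02, scale=sig_w))
print(mc, closed_form)  # should agree up to Monte Carlo error
```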
As mentioned earlier, we can use $g(\xi_1) = 1 - 2\xi_1$ and $a(\xi_j) = 1 - 12 (\xi_j - \frac{1}{2})^2$ for APS-2A, or $a(\xi_j) = 1 - 4 |\xi_j - \frac{1}{2}|$ for APS-2B, $j = 2, 3$.

Using the Jacobian of the transformation from $(\varepsilon, \omega_2, \ldots, \omega_J)$ to $(\ln(x_1), \ldots, \ln(x_J))$ given by Schmidt and Lovell ([22], p. 88), we can define the MSLE based on maximizing the following log-likelihood function:

$$\ln L(\beta) = n \ln r + \sum_{i=1}^{n} \ln h(\varepsilon_i, \omega_{i2}, \ldots, \omega_{iJ}),$$

where $\beta$ contains all the model parameters to be estimated, $r = \sum_{j=1}^{J} \alpha_j$, $\varepsilon_i = \ln y_i - \alpha_0 - \sum_{j=1}^{J} \alpha_j \ln x_{ij}$, and $\omega_{ij} = \ln(x_{i1}) - \ln(x_{ij}) - \ln(\alpha_1 p_{ij} / \alpha_j p_{i1})$, $j = 2, \ldots, J$. The parameter vector $\beta$ contains the parameters of the production function as well as the distributional parameters of the error terms, including the copula parameters.
For example, if $J=3$, this includes four parameters in the production function ($\alpha_0, \alpha_1, \alpha_2, \alpha_3$), six parameters from the marginal distributions of the error terms (the means $\mu_j$ and variances $\sigma_j^2$ of each $\omega_j$, $j=2,3$, the variance $\sigma_\upsilon^2$ of $\upsilon$, and the variance $\sigma_u^2$ of $u$), plus three dependence parameters ($\theta_{12}$ and $\theta_{13}$ from the APS-2 copulas $c_{12}$ and $c_{13}$, respectively, and $\rho_{23}$ from the Gaussian copula $c_{23}$). Note that if $\theta_{1j} \neq 0$, then $\mathrm{corr}[F_u(u), q(F_j(\omega_j))] \neq 0$ even if $\mathrm{corr}(u, \omega_j) = 0$.

Once $\beta$ is estimated, we can obtain the technical inefficiency scores $\hat{u}_i$ using the well-known formula of Jondrow et al. [15],
$$
\hat{u}_i = E(u \mid \varepsilon_i) = \sigma_{\ast} \left[ \frac{\phi(b_i)}{1 - \Phi(b_i)} - b_i \right],
$$
where $\sigma_{\ast} = \sigma \frac{\lambda}{1+\lambda^2}$, $b_i = \varepsilon_i \lambda / \sigma$, $\sigma^2 = \sigma_u^2 + \sigma_\upsilon^2$, $\lambda = \sigma_u / \sigma_\upsilon$, and $\phi$ is the standard normal density function. We use the residuals $\hat{\varepsilon}_i = \ln y_i - \hat{\alpha}_0 - \sum_{j=1}^{J} \hat{\alpha}_j \ln x_{ij}$ in place of $\varepsilon_i$ to evaluate the conditional expectation. Amsler et al. [4] note that the availability of the allocative inefficiency terms $\hat{\omega}_j$ allows for an improvement in the precision of $\hat{u}$.
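The Jondrow et al. formula above is straightforward to evaluate. A minimal sketch (the function name is ours); note that $\sigma_\ast = \sigma\lambda/(1+\lambda^2)$ simplifies to $\sigma_u \sigma_\upsilon / \sigma$:

```python
import numpy as np
from scipy.stats import norm

def jlms_scores(eps, sigma_u, sigma_v):
    """Jondrow et al. conditional mean E[u | eps] for the normal/half-normal
    composed error eps = v - u."""
    sigma = np.sqrt(sigma_u**2 + sigma_v**2)
    lam = sigma_u / sigma_v
    sigma_star = sigma * lam / (1.0 + lam**2)   # equals sigma_u * sigma_v / sigma
    b = eps * lam / sigma
    # Inverse Mills ratio term minus b, scaled by sigma_star.
    return sigma_star * (norm.pdf(b) / (1.0 - norm.cdf(b)) - b)
```

The score is decreasing in the residual: a large positive $\varepsilon_i$ (output above the frontier net of noise) maps to a small predicted inefficiency.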
Unfortunately, their approach is restricted to models that allow for dependence between $\upsilon$ and the $\omega$'s, not between $u$ and the $\omega$'s. However, in our setting, it is possible to use the approach proposed by Amsler et al. ([3], Section 5.2) for panel stochastic frontier models. This approach makes use of the fact that, once we estimate the model, we know the joint distribution of $(u, \omega_2, \ldots, \omega_J)$, as given by the copula, and can simulate from it. Then, we can use any nonparametric smoother to estimate the conditional expectation.

Let $J=3$ and let $s = 1, \ldots, S$ index the draws from the joint distribution (copula). The Nadaraya-Watson estimator of the technical inefficiency scores can be written as follows:
$$
(9)\quad \tilde{u}_i = E(u \mid \varepsilon_i, \omega_{i2}, \omega_{i3}) = \frac{\sum_{s=1}^{S} u_s\, K\!\left(\frac{\varepsilon_s - \varepsilon_i}{h_\varepsilon}\right) K\!\left(\frac{\omega_{s2} - \omega_{i2}}{h_{\omega_2}}\right) K\!\left(\frac{\omega_{s3} - \omega_{i3}}{h_{\omega_3}}\right)}{\sum_{s=1}^{S} K\!\left(\frac{\varepsilon_s - \varepsilon_i}{h_\varepsilon}\right) K\!\left(\frac{\omega_{s2} - \omega_{i2}}{h_{\omega_2}}\right) K\!\left(\frac{\omega_{s3} - \omega_{i3}}{h_{\omega_3}}\right)},
$$
where $K(\cdot)$ is a univariate kernel function and $h_{\cdot}$ denotes the error-specific bandwidth parameter, different for each element of $(\varepsilon, \omega_2, \omega_3)$.
In practice, one would often use the Gaussian kernel with the rule-of-thumb bandwidth $h = 1.06 \hat{\sigma} S^{-1/5}$, where $\hat{\sigma}$ is the standard deviation of the simulated draws of the relevant variable.

Multivariate (nonproduct) kernels using certain features of the joint distribution of $(\varepsilon, \omega_2, \omega_3)$ can in principle be more effective in the nonparametric estimation of the conditional expectation, as they may mitigate the curse of dimensionality inherent in high-dimensional smoothing problems. However, because the number of factors of production does not typically exceed three, often due to aggregation into land, labor, and capital, and because we can obtain any number of draws from the joint distribution of $(v, u, \omega_2, \omega_3)$, we are not concerned with the curse of dimensionality of the estimator in (9). In cases that require a larger $J$, practitioners may wish to use nonparametric regression techniques that are more suitable for higher dimensions, such as the local linear forest of Friedberg et al. [11] (see, e.g., Amsler et al. [2]).

As mentioned earlier, we evaluate this estimator at the values of the residuals, but now we use both $\hat{\varepsilon}_i$ and $\hat{\omega}_{ij}$, as defined earlier. It is a standard result in nonparametrics that, as $S \to \infty$, $h_\varepsilon, h_{\omega_2}, h_{\omega_3} \to 0$, and $S h_\varepsilon h_{\omega_2} h_{\omega_3} \to \infty$, the new estimator $\tilde{u}_i$ converges to $E(u \mid \varepsilon_i, \omega_{i2}, \omega_{i3})$. Theoretically optimal bandwidth choices for a twice differentiable conditional expectation function are known to be of order $O(S^{-1/(J+5)})$; in practice, univariate rules of thumb such as the one mentioned earlier would be used.
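The Nadaraya-Watson estimator above, with a Gaussian product kernel and the univariate rule-of-thumb bandwidths, can be sketched as follows (function and variable names are ours):

```python
import numpy as np

def nw_inefficiency(eps_i, w2_i, w3_i, u_s, eps_s, w2_s, w3_s):
    """Nadaraya-Watson estimate of E[u | eps, w2, w3] from S simulated draws
    (u_s, eps_s, w2_s, w3_s) of the fitted joint error distribution."""
    S = len(u_s)
    K = lambda t: np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel
    h = lambda x: 1.06 * np.std(x) * S**(-0.2)                # rule of thumb, per variable
    # Product kernel weights over the three conditioning variables.
    w = (K((eps_s - eps_i) / h(eps_s))
         * K((w2_s - w2_i) / h(w2_s))
         * K((w3_s - w3_i) / h(w3_s)))
    return np.sum(u_s * w) / np.sum(w)
```

In application, `eps_i`, `w2_i`, `w3_i` would be the residuals $\hat\varepsilon_i, \hat\omega_{i2}, \hat\omega_{i3}$, and the draws would come from the estimated copula.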
A classic survey discussing the consistency of simulated likelihood-based estimators is Gourieroux and Monfort [12].

6 Monte Carlo simulations

We evaluate the performance of the proposed vine construction in terms of the parameter-specific mean squared error (MSE) and the aggregate MSE over all parameters. The production function and endogenous regressor equations used to generate the data in our simulations are as follows:
$$
(10)\quad y = \alpha_0 + \alpha_1 x_1 + \alpha_2 x_2 + \upsilon - u,
$$
$$
(11)\quad x_j = z_j \gamma + \omega_j, \qquad j = 1,2.
$$
The bivariate copula used to generate dependence between $u$ and $|\omega_j|$ is APS-2B, with parameter $\theta_1$ for $(u, \omega_1)$ and $\theta_2$ for $(u, \omega_2)$. The marginal distributions of the error terms are $u \sim |N(0, \sigma_u^2)|$ and $\upsilon \sim N(0, \sigma_\upsilon^2)$, and $\omega_j \sim N(0, \sigma_j^2)$, $j=1,2$, are bivariate normal with correlation parameter $\rho$. First, we generate three dependent uniforms from the canonical vine APS-2B copula using the algorithm described in, e.g., Aas et al. [1] and Czado [10], and invert the respective marginal distribution functions to generate the vector $(u, \omega_1, \omega_2)$. Second, we generate $z_1$ and $z_2$ independently as $\chi_2^2$ and use these, together with the simulated allocative inefficiency terms, to generate $x_1$ and $x_2$. Finally, $y$ is generated using $x_1, x_2$, the simulated technical inefficiency $u$, and the simulated random noise $v$, according to Eq. (10). As an alternative design, we have used the setup of Tran and Tsionas [25] and obtained qualitatively similar results; these results are available upon request.

Eqs. (10) and (11) can be viewed as a simplified version of Eqs. (1) and (2), where the $x_j$'s represent log input ratios, the $z_j$'s represent log price ratios, and $y$ denotes the log ratio of output to the numeraire input. The assumptions of the Cobb-Douglas production function and cost minimization implied by Eqs. (1) and (2) translate into restrictions on the $\alpha$'s and $\gamma$, but we do not impose those restrictions in the simulations except for setting the true values of the parameters close to realistic values.

The true parameter values are $\alpha_0 = \alpha_1 = \alpha_2 = 0.5$, $\sigma_\upsilon^2 = \sigma_u^2 = \sigma_1^2 = \sigma_2^2 = 1$, $\gamma = 1$, and $\rho = 0.4$. We consider three combinations of the copula parameter values, $(\theta_1, \theta_2) \in \{(0.3, 0.1), (0.45, 0.45), (0.8, 0.7)\}$, corresponding to a low, medium, and high degree of dependence between $u$ and $|\omega_j|$. We study two sample sizes, $n \in \{500, 1{,}000\}$, and conduct 1,000 replications. A sample size of 500 is used to evaluate the expectation in Eq. (8).

We compare our vine copula-based model with three other alternative models. Table 1 reports the simulation results. Vine2A and Vine2B are the models that use the proposed vine copula constructions with APS-2A and APS-2B copulas, respectively. The APS3A and APS3B estimators use high-dimensional APS-T copulas rather than vines.
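The data-generating process described above can be sketched in code. This is a simplified illustration, not the paper's exact algorithm: it samples each $(u, \omega_j)$ pair by inverting the APS-2B conditional CDF, but for brevity it sets the conditional correlation of $(\omega_1, \omega_2)$ given $u$ to zero instead of $\rho = 0.4$ (the full design uses the vine sampling algorithm of Aas et al.). All function names are ours, and all variances are set to their true value of 1:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def A_2B(x):
    # Antiderivative of the APS-2B generator a(t) = 1 - 4|t - 1/2|; A(0) = A(1) = 0.
    return np.where(x <= 0.5, 2.0 * x**2 - x, (x - 0.5) - 2.0 * (x - 0.5)**2)

def cond_inverse_aps2b(xi1, q, theta):
    # Solve C(x | xi1) = x + theta * (1 - 2*xi1) * A_2B(x) = q for x in [0, 1].
    g1 = 1.0 - 2.0 * xi1
    return brentq(lambda x: x + theta * g1 * float(A_2B(x)) - q, 0.0, 1.0)

def simulate_dgp(n, theta1, theta2, alphas=(0.5, 0.5, 0.5), gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    xi1 = rng.uniform(size=n)                      # uniform driving u
    q1, q2 = rng.uniform(size=n), rng.uniform(size=n)
    xi_w1 = np.array([cond_inverse_aps2b(a, b, theta1) for a, b in zip(xi1, q1)])
    xi_w2 = np.array([cond_inverse_aps2b(a, b, theta2) for a, b in zip(xi1, q2)])
    u = norm.ppf(0.5 + 0.5 * xi1)                  # half-normal |N(0,1)| inverse CDF
    w1, w2 = norm.ppf(xi_w1), norm.ppf(xi_w2)      # N(0,1) allocative errors
    v = rng.normal(size=n)                         # symmetric noise
    z1, z2 = rng.chisquare(2, size=n), rng.chisquare(2, size=n)
    x1, x2 = z1 * gamma + w1, z2 * gamma + w2      # Eq. (11)
    y = alphas[0] + alphas[1] * x1 + alphas[2] * x2 + v - u   # Eq. (10)
    return y, x1, x2, z1, z2, u
```

The inversion is well defined because the APS-2B conditional CDF is strictly increasing for $|\theta| \le 1$, its derivative being the copula density $1 + \theta\, g(\xi_1)\, a(\xi_2) \ge 0$.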
The QMLE estimator is based on the assumption of independence between $u$ and the $\omega$'s (see Schmidt and Lovell [21]).

Table 1: MSE comparisons. In each panel, the first six columns refer to (i) $n = 500$ and the last six to (ii) $n = 1{,}000$.

Panel $\theta_1 = 0.3$, $\theta_2 = 0.1$:

| | QMLE | Gaussian | APS3A | APS3B | Vine2A | Vine2B | QMLE | Gaussian | APS3A | APS3B | Vine2A | Vine2B |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\alpha_0$ | 0.339 | 0.489 | 0.317 | 0.356 | 0.367 | 0.397 | 0.234 | 0.489 | 0.351 | 0.377 | 0.373 | 0.391 |
| $\alpha_1$ | 0.024 | 0.025 | 0.024 | 0.024 | 0.024 | 0.024 | 0.017 | 0.018 | 0.017 | 0.017 | 0.017 | 0.017 |
| $\alpha_2$ | 0.024 | 0.025 | 0.024 | 0.024 | 0.024 | 0.024 | 0.016 | 0.018 | 0.016 | 0.016 | 0.016 | 0.016 |
| $\sigma_u^2$ | 0.204 | 0.253 | 0.200 | 0.218 | 0.221 | 0.227 | 0.152 | 0.251 | 0.211 | 0.223 | 0.219 | 0.222 |
| $\sigma_\upsilon^2$ | 0.565 | 0.703 | 0.527 | 0.586 | 0.596 | 0.617 | 0.420 | 0.688 | 0.571 | 0.607 | 0.598 | 0.606 |
| $\sigma_1^2$ | 0.123 | 0.065 | 0.065 | 0.065 | 0.065 | 0.065 | 0.085 | 0.045 | 0.045 | 0.045 | 0.045 | 0.045 |
| $\sigma_2^2$ | 0.114 | 0.072 | 0.061 | 0.060 | 0.061 | 0.061 | 0.083 | 0.051 | 0.044 | 0.044 | 0.044 | 0.044 |
| $\gamma$ | 0.011 | 0.011 | 0.012 | 0.012 | 0.012 | 0.012 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 |
| Total | 1.405 | 1.644 | 1.231 | 1.345 | 1.369 | 1.425 | 1.015 | 1.568 | 1.264 | 1.337 | 1.320 | 1.349 |
| $E[u\mid\varepsilon]$ | 0.630 | 0.735 | 0.615 | 0.642 | 0.648 | 0.668 | 0.573 | 0.737 | 0.641 | 0.659 | 0.656 | 0.668 |
| $E[u\mid\varepsilon,\omega_1,\omega_2]$ | | 0.736 | 0.618 | 0.644 | 0.651 | 0.670 | | 0.740 | 0.645 | 0.663 | 0.661 | 0.672 |

Panel $\theta_1 = 0.45$, $\theta_2 = 0.45$:

| | QMLE | Gaussian | APS3A | APS3B | Vine2A | Vine2B | QMLE | Gaussian | APS3A | APS3B | Vine2A | Vine2B |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\alpha_0$ | 0.341 | 0.494 | 0.231 | 0.276 | 0.272 | 0.280 | 0.236 | 0.474 | 0.235 | 0.267 | 0.222 | 0.218 |
| $\alpha_1$ | 0.024 | 0.025 | 0.025 | 0.024 | 0.024 | 0.024 | 0.017 | 0.018 | 0.017 | 0.017 | 0.017 | 0.017 |
| $\alpha_2$ | 0.024 | 0.026 | 0.025 | 0.024 | 0.024 | 0.024 | 0.016 | 0.018 | 0.017 | 0.016 | 0.016 | 0.016 |
| $\sigma_u^2$ | 0.205 | 0.258 | 0.159 | 0.182 | 0.176 | 0.183 | 0.152 | 0.245 | 0.158 | 0.175 | 0.151 | 0.149 |
| $\sigma_\upsilon^2$ | 0.568 | 0.719 | 0.392 | 0.483 | 0.469 | 0.482 | 0.422 | 0.674 | 0.409 | 0.470 | 0.407 | 0.399 |
| $\sigma_1^2$ | 0.123 | 0.065 | 0.070 | 0.065 | 0.065 | 0.065 | 0.085 | 0.045 | 0.048 | 0.045 | 0.045 | 0.045 |
| $\sigma_2^2$ | 0.114 | 0.071 | 0.065 | 0.061 | 0.061 | 0.061 | 0.083 | 0.051 | 0.048 | 0.044 | 0.044 | 0.044 |
| $\gamma$ | 0.011 | 0.011 | 0.012 | 0.012 | 0.011 | 0.011 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 | 0.008 |
| Total | 1.410 | 1.669 | 0.980 | 1.127 | 1.103 | 1.129 | 1.018 | 1.533 | 0.941 | 1.042 | 0.910 | 0.896 |
| $E[u\mid\varepsilon]$ | 0.632 | 0.738 | 0.565 | 0.591 | 0.587 | 0.593 | 0.574 | 0.728 | 0.573 | 0.592 | 0.568 | 0.566 |
| $E[u\mid\varepsilon,\omega_1,\omega_2]$ | | 0.739 | 0.565 | 0.591 | 0.586 | 0.592 | | 0.731 | 0.574 | 0.593 | 0.569 | 0.567 |

Panel $\theta_1 = 0.8$, $\theta_2 = 0.7$:

| | QMLE | Gaussian | APS3A | APS3B | Vine2A | Vine2B | QMLE | Gaussian | APS3A | APS3B | Vine2A | Vine2B |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| $\alpha_0$ | 0.342 | 0.487 | 0.159 | 0.179 | 0.146 | 0.169 | 0.239 | 0.479 | 0.137 | 0.157 | 0.106 | 0.112 |
| $\alpha_1$ | 0.024 | 0.026 | 0.027 | 0.025 | 0.024 | 0.024 | 0.017 | 0.018 | 0.019 | 0.018 | 0.017 | 0.017 |
| $\alpha_2$ | 0.024 | 0.026 | 0.026 | 0.026 | 0.024 | 0.024 | 0.016 | 0.018 | 0.019 | 0.018 | 0.017 | 0.016 |
| $\sigma_u^2$ | 0.206 | 0.258 | 0.130 | 0.134 | 0.119 | 0.135 | 0.153 | 0.252 | 0.115 | 0.120 | 0.089 | 0.097 |
| $\sigma_\upsilon^2$ | 0.570 | 0.717 | 0.249 | 0.310 | 0.276 | 0.340 | 0.423 | 0.692 | 0.234 | 0.291 | 0.217 | 0.238 |
| $\sigma_1^2$ | 0.123 | 0.065 | 0.084 | 0.067 | 0.067 | 0.065 | 0.085 | 0.045 | 0.059 | 0.046 | 0.046 | 0.045 |
| $\sigma_2^2$ | 0.114 | 0.072 | 0.078 | 0.062 | 0.063 | 0.061 | 0.083 | 0.051 | 0.060 | 0.046 | 0.045 | 0.044 |
| $\gamma$ | 0.011 | 0.011 | 0.012 | 0.012 | 0.012 | 0.011 | 0.008 | 0.008 | 0.009 | 0.008 | 0.008 | 0.008 |
| Total | 1.414 | 1.661 | 0.765 | 0.815 | 0.730 | 0.829 | 1.023 | 1.562 | 0.650 | 0.703 | 0.544 | 0.576 |
| $E[u\mid\varepsilon]$ | 0.633 | 0.735 | 0.532 | 0.542 | 0.531 | 0.541 | 0.576 | 0.732 | 0.529 | 0.538 | 0.522 | 0.524 |
| $E[u\mid\varepsilon,\omega_1,\omega_2]$ | | 0.737 | 0.526 | 0.534 | 0.519 | 0.527 | | 0.735 | 0.524 | 0.532 | 0.509 | 0.511 |

Gaussian is the estimator that assumes a Gaussian copula for $(u, \omega_1, \omega_2)$, rather than APS-3A, APS-3B, or the vine constructions with APS-2A and APS-2B. This is different from Schmidt and Lovell [22], who assume joint normality of $(u^{\ast}, \omega_1, \omega_2)$, as discussed in Section 1. However, the estimates based on Schmidt and Lovell [22] are similar and are not reported. For more details on the copula implied by the Schmidt and Lovell [22] approach, see the study by Amsler et al.
[5].

We find that in the case of near independence (upper panel), all six estimators perform similarly for both sample sizes, with QMLE being not much worse, and sometimes better, than the other estimators in terms of aggregate MSE. As the strength of the dependence increases (middle panel), the two vine copula-based estimators, Vine2A and Vine2B, behave similarly to the APS3A and APS3B models for $n=500$ and marginally better for $n=1{,}000$. All APS copula-based models perform better than QMLE. In the case of the strongest dependence (lower panel), Vine2A and Vine2B show superior performance compared with APS3A and APS3B, respectively (particularly for the larger sample size). The parameter estimates from the vine copula-based models are considerably more accurate than those from the QMLE and Gaussian copula models. Parameter-specific MSEs show that this behavior is not limited to just a few parameters but is prevalent uniformly across all parameters. It is perhaps remarkable that estimates of $\sigma_v^2$ are always associated with the largest MSE, regardless of the estimator, sample size, or dependence strength. The Gaussian estimator is dominated by all the other estimators, since its dependence structure is always misspecified.

The last two lines of each panel contain the MSE for the two variants of the technical inefficiency predictions. We have the set of $u_i$'s used in the data-generating process and a corresponding set of predictions $\hat{u}_i$ and $\tilde{u}_i$ computed using the two different conditioning sets and associated estimators in each iteration.
For each iteration, we calculate the MSEs
$$
\frac{1}{n} \sum_{i=1}^{n} (u_i - \hat{u}_i)^2, \qquad \frac{1}{n} \sum_{i=1}^{n} (u_i - \tilde{u}_i)^2.
$$
The line showing $E(u \mid \varepsilon, \omega_2, \omega_3)$ is obtained using the average of the latter values over 1,000 replications; the row showing $E(u \mid \varepsilon)$ is obtained using the former. Perhaps surprisingly, for all dependence strengths, we obtain very similar values of the MSE, even though one estimator is nonparametric and the other uses an analytic expression. There is some evidence that the estimator that uses the $\omega$'s in the conditioning set provides a better estimate of the technical inefficiency scores as the strength of dependence increases, although the differences are generally not large.

7 Empirical illustration

We illustrate the use of our vine copula-based estimator using classic electricity generation data from 111 privately owned steam-electric power plants constructed in the United States between 1947 and 1965 (see Cowing [8,9] for details). The output is measured in $10^6$ kWh of electricity generated in the first year of operation. The inputs are capital, measured by the actual cost of construction; fuel, measured in British Thermal Units (BTU) of actual consumption of coal, oil, and gas in the first year; and labor, measured by the total number of employees times 2,000 hours. We also have input prices: the firm's bond rate prior to plant construction, the actual price of a BTU of fuel, and the regional industry salary rate averaged over the two years prior to plant opening.
Summary statistics for the data are given in Table 2, where output and inputs have been logged.

Table 2: Descriptive statistics for the electricity generation data

| | Mean | Median | St.D. | Min | Max |
|---|---|---|---|---|---|
| Output | 6.834 | 6.915 | 0.991 | 3.638 | 8.703 |
| Capital | 16.859 | 16.919 | 0.775 | 14.542 | 18.374 |
| Fuel | 16.094 | 16.138 | 0.888 | 13.346 | 17.772 |
| Labor | 11.655 | 11.678 | 0.503 | 10.086 | 12.725 |
| Price of capital | −3.329 | −3.387 | 0.192 | −3.594 | −2.947 |
| Price of fuel | −1.337 | −1.241 | 0.313 | −2.797 | −0.877 |
| Price of labor | 0.800 | 0.829 | 0.247 | 0.300 | 1.278 |

We use a Cobb-Douglas type production function to mimic Eqs. (1) and (2), with $y$ = Output, $x_1$ = Capital, $x_2$ = Fuel, and $x_3$ = Labor. We report parameter estimates together with standard errors. As benchmarks, we reproduce the estimates of Schmidt and Lovell [22] (SL80), which assume joint normality, and of Amsler et al. [4] (APS16), which assume that $v$ is correlated with the $\omega$'s but $u$ is independent of $v$ and the $\omega$'s. We also report the estimates based on the study by Amsler et al. [5], which use the APS-3 copulas (APS3A and APS3B). Table 3 presents the results. All of the parameters are as defined earlier; in addition, the $\mu_j$'s are the means of the allocative inefficiencies $\omega_j$, which are permitted to be nonzero.
The parameter estimates and standard errors associated with our vine copula-based estimators (Vine2A and Vine2B) are reported in the last two columns.

Table 3: MLE of production function parameters (standard errors in parentheses)

| | ALS77 | SL80 | APS16 | APS3A | APS3B | Vine2A | Vine2B |
|---|---|---|---|---|---|---|---|
| $\alpha_0$ | −11.0177 (0.2001) | −11.2700 (0.2510) | −11.6839 (0.3865) | −11.4126 (0.2315) | −11.4525 (0.2272) | −11.4482 (0.2477) | −11.4675 (0.2542) |
| $\alpha_1$ | 0.0402 (0.0192) | 0.0428 (0.0246) | 0.2248 (0.0142) | 0.0498 (0.0227) | 0.0495 (0.0234) | 0.0430 (0.0250) | 0.0471 (0.0004) |
| $\alpha_2$ | 1.0860 (0.0203) | 1.0754 (0.0272) | 0.8625 (0.0251) | 1.0751 (0.0232) | 1.0712 (0.0230) | 1.0793 (0.0247) | 1.0740 (0.0197) |
| $\alpha_3$ | −0.0191 (0.0258) | 0.0137 (0.0319) | 0.0805 (0.0060) | 0.0145 (0.0271) | 0.0239 (0.0174) | 0.0220 (0.0286) | 0.0253 (0.0301) |
| $\mu_1$ | | 1.9861 (0.6052) | | 1.8481 (0.4948) | 1.8271 (0.5093) | 1.9926 (0.6236) | 1.8920 (0.0740) |
| $\mu_2$ | | −0.0526 (2.4744) | | −0.1740 (1.9530) | 0.3097 (0.8434) | 0.3675 (1.4985) | 0.4185 (1.1890) |
| $\sigma_u^2$ | 0.0107 (0.0033) | 0.0119 (0.0036) | 0.0089 (0.0058) | 0.0083 (0.0030) | 0.0080 (0.0029) | 0.0086 (0.0029) | 0.0086 (0.0028) |
| $\sigma_\upsilon^2$ | 0.0027 (0.0009) | 0.0020 (0.0009) | 0.0157 (0.0048) | 0.0037 (0.0010) | 0.0038 (0.0011) | 0.0036 (0.0011) | 0.0035 (0.0010) |
| $\sigma_1^2$ | | 0.3366 (0.0435) | 0.3503 (0.0414) | 0.3437 (0.0351) | 0.3416 (0.0354) | 0.3456 (0.0360) | 0.3391 (0.0354) |
| $\sigma_2^2$ | | 0.5901 (0.1010) | 0.5923 (0.1007) | 0.5956 (0.0980) | 0.5987 (0.0992) | 0.5684 (0.0958) | 0.5763 (0.0937) |
| $\sigma_{12}$ | | 0.2100 (0.0577) | 0.2159 (0.0549) | | | | |
| $\rho_{12}$ | | | | 0.5157 (0.1024) | 0.5147 (0.1009) | 0.4790 (0.0998) | 0.4627 (0.1043) |
| $\sigma_{u,1}$ | | | 0.0119 (0.0036) | | | | |
| $\sigma_{u,2}$ | | | 0.0148 (0.0242) | | | | |
| $\sigma_{\upsilon,1}$ | | | 0.0157 (0.0048) | | | | |
| $\sigma_{\upsilon,2}$ | | | −0.0492 (0.0125) | | | | |
| $\theta_{12}$ | | | | 0.5907 (0.3489) | 0.7456 (0.4774) | 0.3597 (0.4176) | 0.0198 (0.5956) |
| $\theta_{13}$ | | | | −0.4088 (0.3688) | −0.5465 (0.5512) | −0.3845 (0.3751) | −0.4526 (0.5429) |
| $E[u\mid\varepsilon]$ | 0.0447 | 0.0852 | 0.0820 | 0.0690 | 0.0764 | 0.0714 | 0.0724 |
| $V[u\mid\varepsilon]$ | 0.0010 | 0.0013 | 0.0029 | 0.0015 | 0.0016 | 0.0015 | 0.0015 |
| $\tilde{E}[u\mid\varepsilon]$ | 0.0443 | 0.0817 | 0.0738 | 0.0682 | 0.0754 | 0.0702 | 0.0709 |
| $\tilde{V}[u\mid\varepsilon]$ | 0.0010 | 0.0011 | 0.0027 | 0.0017 | 0.0019 | 0.0017 | 0.0018 |
| $\tilde{E}[u\mid\varepsilon,\omega_1,\omega_2]$ | | 0.0813 | 0.0207 | 0.0662 | 0.0739 | 0.0686 | 0.0705 |
| $\tilde{V}[u\mid\varepsilon,\omega_1,\omega_2]$ | | 0.0010 | 0.0013 | 0.0016 | 0.0018 | 0.0017 | 0.0017 |
| Log-likelihood | 123.0607 | −73.6973 | −101.4120 | −72.2412 | −73.0625 | −74.5614 | −74.8482 |
| AIC | −234.1213 | 173.3946 | 224.8239 | 170.4823 | 172.1249 | 175.1229 | 175.6964 |
| BIC | −217.8641 | 208.6185 | 254.6287 | 205.7062 | 207.3488 | 210.3468 | 210.9203 |

We start by noticing that the estimated input-output elasticities are similar for all the estimators that allow for dependence between $u$ and the $\omega$'s (SL80, APS3A&B, Vine2A&B) and different from those of APS16. Moreover, there are substantial differences in the dependence parameter estimates. APS3A&B and Vine2A&B show positive dependence between $u$ and $\omega_2$ and negative dependence between $u$ and $\omega_3$, while SL80 shows positive dependence for both. The correlations for all copula-based estimators are, for the most part, large in magnitude but statistically insignificant, while SL80 shows a very weak but statistically significant correlation.

Finally, we note the similarity of the three versions of the technical inefficiency score estimates. The statistics $E[u \mid \varepsilon]$ and $V[u \mid \varepsilon]$ are the conditional mean and variance, computed as averages (over observations) of the closed-form expressions in the studies by Jondrow et al. [15] and Bera and Sharma [7], respectively. The estimates $\tilde{E}[u \mid \varepsilon]$ and $\tilde{V}[u \mid \varepsilon]$ are the nonparametric (Nadaraya-Watson) versions of the same quantities. The estimates $\tilde{E}[u \mid \varepsilon, \omega_1, \omega_2]$ and $\tilde{V}[u \mid \varepsilon, \omega_1, \omega_2]$ use Nadaraya-Watson smoothing with the enlarged conditioning set.
The values associated with the APS16 model are visibly different from those based on the APS copula models and SL80. Again, perhaps surprisingly, it makes little difference whether the estimator is parametric or nonparametric and whether we condition on $\varepsilon$ alone or on $(\varepsilon, \omega_2, \omega_3)$.

8 Conclusion

Technically inefficient firms are likely to exhibit both positive and negative deviations from their cost-minimizing input ratios. As such, stochastic frontier models of production systems should incorporate a dependence structure that permits correlation between technical inefficiency and the absolute value of allocative inefficiency. We propose a vine copula construction that allows for this unique dependence structure and argue that our approach admits a more comprehensive coverage of dependence than the multivariate APS-T copula proposed by Amsler et al. [5]. Moreover, we discuss the MSLE parameter estimation procedure and implement an improved estimator of the technical inefficiency score, made possible by the fact that we can condition on a larger set of error terms.

Dependence Modeling – de Gruyter

**Published: ** Jan 1, 2022

**Keywords: **vine copulas; production frontier; allocative inefficiency; technical inefficiency; 62H05; 62P20; 91B38
