# The Tacnode Kernel: Equality of Riemann–Hilbert and Airy Resolvent Formulas

## Abstract

We study nonintersecting Brownian motions with two prescribed starting and ending positions, in the neighborhood of a tacnode in the time–space plane. Several expressions have been obtained for the critical correlation kernel that describes the microscopic behavior of the Brownian motions near the tacnode. One approach, due to Kuijlaars, Zhang, and the author, expresses the kernel via a $$4\times 4$$ matrix-valued Riemann–Hilbert problem (RH problem). Another approach, due to Adler, Ferrari, Johansson, van Moerbeke, and Vető, expresses the kernel via resolvents and Fredholm determinants of the Airy integral operator. In this paper, we prove the equivalence of both approaches. We also obtain a rank-2 property for the derivative of the tacnode kernel. Finally, we find an RH expression for the multitime extended tacnode kernel.

## 1 Introduction

Recently, several papers have appeared that study $$n$$ nonintersecting Brownian motion paths with prescribed starting positions at time $$t=0$$ and ending positions at time $$t=1$$; see [1, 2, 4, 13, 14, 16, 17, 22–24] among many others. In the limit as $$n\to \infty$$ the paths fill up a well-defined region in the time–space plane. By fine-tuning the parameters, we may create a situation with two groups of Brownian motions, located inside two touching ellipses in the time–space plane: see Figure 1(c). We are interested in the microscopic behavior of the paths near the touching point of the two ellipses, that is, near the tacnode.

Fig. 1. Twenty nonintersecting Brownian motions at temperature $$T=1$$ with two prescribed starting and two ending positions in the case of (a) large, (b) small, and (c) critical separation between the endpoints. The horizontal axis stands for the time, $$t\in [0,1]$$, and the vertical axis shows the positions of the Brownian motions at time $$t$$. For $$n\to \infty$$ the Brownian motions fill a prescribed region in the time–space plane which is bounded by the boldface lines in the figure. In case (c), the limiting support consists of two ellipses which touch each other at a critical point, which is a tacnode.

It is well known that the positions of the Brownian motions at any fixed time $$t\in (0,1)$$ form a determinantal point process. The process has a well-defined limit for $$n\to \infty$$ in a microscopic neighborhood of the tacnode. The limiting process is encoded by a two-variable correlation kernel $$K_{{\rm tac}}(x,y)$$, which we call the tacnode kernel. It depends parametrically on the scaling that we use near the tacnode. There also exists a multitime extended version of the tacnode kernel [2, 3, 17, 22], to be discussed in Section 2.4.

The tacnode kernel $$K_{{\rm tac}}(x,y)$$ can be expressed using resolvents and Fredholm determinants of the Airy integral operator acting on a semi-infinite interval $$[\sigma ,\infty )$$. This approach was followed in the symmetric case by Adler et al. [2] and Johansson [22], and in the non-symmetric case by Ferrari and Vető [17].
Here the “symmetric case” means that the two touching groups of Brownian motions have the same size, as in Figure 1, and the non-symmetric case means that one group is bigger than the other. Similar methods were used to study a double Aztec diamond [3]. The latter paper also shows that the expressions for the symmetric tacnode kernel derived in [2, 22], respectively, are equivalent to each other (as one would expect), but the proof of this is rather indirect, since it involves calculating the limit of a certain kernel in two different ways.

An alternative expression for the tacnode kernel can be obtained from the Riemann–Hilbert (RH) method. This approach was followed by Kuijlaars et al. in [14]. In that paper we express the tacnode kernel in terms of a $$4\times 4$$ matrix-valued RH problem $$M(z)$$, which yields a new Lax pair representation for the Hastings–McLeod solution $$q(x)$$ to the Painlevé II equation. Recall that the Painlevé II equation is the second-order ordinary differential equation   $$q''\left(x\right) = xq\left(x\right)+2q^3\left(x\right),$$ (1) where the prime denotes the derivative with respect to $$x$$. The Hastings–McLeod solution [19, 21] is the special solution $$q(x)$$ of (1) that is real for real $$x$$ and satisfies   $$q\left(x\right)\sim {\rm Ai}\left(x\right), \quad x\to +\infty,$$ (2) with $${\rm Ai}$$ the Airy function. We note that the usual RH matrix $$\Psi (z)$$ associated with the Painlevé II equation, due to Flaschka and Newell [18], has size $$2\times 2$$ rather than $$4\times 4$$.

The RH matrix $$M(z)$$ from the previous paragraph has been the topic of some recent developments. It was used to study a new critical phenomenon in the two-matrix model [15], and to establish a reduction from the tacnode kernel to the Pearcey kernel [20]. It was also extended to a hard-edge version of the tacnode [12].

Summarizing, there exist several, apparently different, formulas for the tacnode kernel $$K_{{\rm tac}}(x,y)$$.
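The boundary condition (2) can be made concrete numerically: since $${\rm Ai}''(x)=x\,{\rm Ai}(x)$$, substituting the Airy function into (1) leaves only the cubic term $$-2\,{\rm Ai}^3(x)$$, which is exponentially small as $$x\to +\infty$$. The following quick check is our own illustration (not from the paper) and assumes SciPy is available:

```python
# Illustration (ours, not from the paper): the Airy function nearly solves
# the Painleve II equation (1) for large x, consistent with the
# Hastings-McLeod boundary condition (2).  Since Ai'' = x*Ai exactly, the
# residual of (1) at q = Ai is exactly -2*Ai(x)^3, which decays fast.
from scipy.special import airy

def painleve_ii_residual(q, qpp, x):
    """Residual of q'' = x*q + 2*q^3 at the point x."""
    return qpp - x * q - 2.0 * q**3

x = 4.0
ai, aip, bi, bip = airy(x)                 # Ai(4) is already below 1e-3
res = painleve_ii_residual(ai, x * ai, x)  # use Ai'' = x*Ai
print(abs(res))                            # of order 1e-9
```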
It is natural to ask about the equivalence of these formulas.

There is an interesting analogy with a model of nonintersecting Brownian excursions, also known as watermelons (with a wall). The model consists of $$n$$ Brownian motion paths on the positive half-line with a reflecting or absorbing wall at the origin. The paths are forced to start and end at the origin, and the interest lies in the maximum position $$x_{\max }$$ reached by the topmost path during the time interval $$t\in [0,1]$$. Recently, several results were obtained about the joint distribution of the maximum position $$x_{\max }$$ and the maximizing time $$t_{\max }$$ for such watermelons. One approach, due to Moreno–Quastel–Remenik [26], involves resolvents and Fredholm determinants of the Airy operator acting on an interval $$[\sigma ,\infty )$$. Another approach, due to Schehr [27], involves the $$2\times 2$$ RH matrix $$\Psi (z)$$ associated with the Hastings–McLeod solution to the Painlevé II equation [18, 21]. The equivalence of both approaches has been established in a recent work of Baik–Liechty–Schehr [7]. Along the way they obtain Airy resolvent formulas for the entries of the RH matrix $$\Psi (z)$$; see also [6, Section 1.1.3] for a similar result.

Inspired by the work of Baik–Liechty–Schehr [7], in this paper we obtain Airy resolvent formulas for the $$4\times 4$$ RH matrix $$M(z)$$. This is the content of Theorem 2.9. Our formulas apply to the entries in the first and second columns of the RH matrix $$M(z)$$, where $$z$$ lies in a sector around the positive imaginary axis. In Theorem 2.4 we use these formulas to prove the equivalence of the tacnode kernels in [2, 3, 17, 22] and [14], respectively. We also obtain a remarkable rank-2 property for the derivative of the tacnode kernel; see Theorems 2.5 and 2.6. As a byproduct, the rank-2 formula yields an RH formula for the multitime extended tacnode kernel; see Section 2.4.
To the best of our knowledge, this is the first time that an RH formula is obtained for a multitime extended correlation kernel. See [8] for a completely different connection between RH problems and multitime extended point processes, at the level of gap probabilities.

Remark 1.1. Based on an earlier version of this paper, Kuijlaars [25] has extended our approach and obtained Airy resolvent formulas for the third and fourth columns of $$M(z)$$, with $$z$$ lying again in a sector around the positive imaginary axis. The latter columns are relevant because they appear in a critical correlation kernel for the two-matrix model [15].

Remark 1.2. The paper [12] discusses a hard-edge variant of the tacnode kernel. The interaction with the hard edge is quantified by a certain parameter $$\alpha >-1$$. In the special case $$\alpha =0$$, the hard-edge tacnode kernel involves the same RH matrix $$M(z)$$ as the one above, and our results give an alternative way of writing the kernel. It is an open problem to extend our results to a general value of $$\alpha$$.

## 2 Statement of Results

### 2.1 Definition of the tacnode kernel

In this section we recall the two different definitions of the tacnode kernel in the literature. We will denote them by $$K_{{\rm tac}}(u,v)$$ and $$\mathcal L_{{\rm tac}}(u,v)$$, respectively.

#### 2.1.1 Definition of the kernel $$K_{{\rm tac}}$$

A first approach to define the tacnode kernel $$K_{{\rm tac}}(u,v)$$ (in the single-time case) is via an RH problem for a matrix $$M(z)$$ of size $$4\times 4$$. We recall the RH problem from [14, 15].
Fix two numbers $$\varphi _1,\varphi _2$$ such that   $$0<\varphi_1<\varphi_2<\pi/3.$$ (3) Define the half-lines $$\Gamma _k$$, $$k=0,\ldots ,9$$, by   $$\Gamma_0=\mathbb{R}_+,\quad \Gamma_1={\rm e}^{{\rm i}\varphi_1}\mathbb{R}_+,\quad \Gamma_2={\rm e}^{{\rm i}\varphi_2}\mathbb{R}_+,\quad \Gamma_3={\rm e}^{{\rm i}\left(\pi-\varphi_2\right)}\mathbb{R}_+,\quad \Gamma_4={\rm e}^{{\rm i}\left(\pi-\varphi_1\right)}\mathbb{R}_+,$$ (4) and   $$\Gamma_{5+k}=-\Gamma_k,\quad k=0,\ldots,4.$$ (5) All rays $$\Gamma _k$$, $$k=0,\ldots ,9$$, are oriented toward infinity, as shown in Figure 2. We denote by $$\Omega _k$$ the region in $$\mathbb {C}$$ that lies between the rays $$\Gamma _k$$ and $$\Gamma _{k+1}$$, for $$k=0,\ldots ,9$$, where we identify $$\Gamma _{10}:=\Gamma _0$$. We consider the following RH problem.

RH problem 2.1. We look for a matrix-valued function $$M:\mathbb {C}\setminus (\bigcup _{k=0}^{9}\Gamma _k)\to \mathbb {C}^{4\times 4}$$ (which also depends on the parameters $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {C}$$) satisfying the following four conditions.

1. $$M(z)$$ is analytic (entrywise) for $$z\in \mathbb {C}\setminus (\bigcup _{k=0}^{9} \Gamma _k)$$.
2. For $$z\in \Gamma _k$$, the limiting values   $$M_+\left(z\right) = \lim_{x \to z,\ x\ \text{on the $+$-side of}\ \Gamma_k} M\left(x\right), \quad M_-\left(z\right) = \lim_{x \to z,\ x\ \text{on the $-$-side of}\ \Gamma_k} M\left(x\right)$$ exist, where the $$+$$-side and $$-$$-side of $$\Gamma _k$$ are the sides which lie on the left and right of $$\Gamma _k$$, respectively, when traversing $$\Gamma _k$$ according to its orientation. These limiting values satisfy the jump relation   $$M_{+}\left(z\right) = M_{-}\left(z\right)J_k\left(z\right),\quad k=0,\ldots,9,$$ (6) where the jump matrix $$J_k(z)$$ for each ray $$\Gamma _k$$ is shown in Figure 2.
3. As $$z\to \infty$$ we have   \begin{align} M(z) &=\left(I+\frac{M_1}{z}+\frac{M_2}{z^2}+O\left(\frac{1}{z^3}\right)\right) {\rm diag}((-z)^{-1/4},z^{-1/4},(-z)^{1/4},z^{1/4})\nonumber\\ &\quad \times\mathcal{A}\,{\rm diag}\left({\rm e}^{\theta_1(z)},{\rm e}^{\theta_2(z)}, {\rm e}^{\theta_3(z)},{\rm e}^{\theta_4(z)}\right), \end{align} (7) where the coefficient matrices $$M_1,M_2,\ldots$$ are independent of $$z$$, and with   $$\mathcal{A}:=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & -i & 0 \\ 0 & 1 & 0 & i \\ -i & 0 & 1 & 0 \\ 0 & i & 0 & 1 \end{pmatrix},$$ (8) and   $$\begin{cases} \theta_1\left(z\right) = -\frac{2}{3}r_1\left(-z\right)^{3/2}-2s_1\left(-z\right)^{1/2}+r_1^2\tau z, \\ \theta_2\left(z\right) = -\frac{2}{3}r_2z^{3/2}-2s_2z^{1/2}-r_2^2\tau z,\\ \theta_3\left(z\right) =\frac{2}{3}r_1\left(-z\right)^{3/2} +2s_1\left(-z\right)^{1/2}+r_1^2\tau z, \\ \theta_4\left(z\right) =\frac{2}{3}r_2z^{3/2}+2s_2z^{1/2}-r_2^2\tau z. \end{cases}$$ (9) Here we use the principal branches of the fractional powers.
4. $$M(z)$$ is bounded as $$z\to 0$$.

Fig. 2. The jump contours $$\Gamma _k$$ in the complex $$z$$-plane and the corresponding jump matrix $$J_k$$ on $$\Gamma _k$$, $$k=0,\ldots ,9$$, in the RH problem for $$M = M(z)$$. We denote by $$\Omega _k$$ the region between the rays $$\Gamma _k$$ and $$\Gamma _{k+1}$$.

We will sometimes write $$M(z)=M(z;r_1,r_2,s_1,s_2,\tau )$$ to indicate the dependence on the parameters. The factors $$r_1^2$$ and $$r_2^2$$ in front of $$\tau z$$ in (9) will be useful in the statement of our main theorems; these factors could be removed by a scaling and translation of the RH matrix.
We could also assume $$r_2=1$$ without loss of generality, by a simple rescaling of $$z$$. The RH problem 2.1 was introduced in [14] with $$\tau =0$$ in (9). The parameter $$\tau$$ was introduced in [15] in the symmetric setting where $$r_1=r_2=1$$ and $$s_1=s_2$$. The general nonsymmetric case with the extra parameter $$\tau$$ in (9) has not been considered before in the literature. We note that the parameter $$\tau$$ in [15] can be identified with the (rescaled) coupling constant in a critical model of coupled random matrices, while in the context of the tacnode process the parameter $$\tau$$ will be related to the $$O(n^{-1/3})$$ coefficient in the critical scaling of the time variable.

The following result can be proved as in [15]; see also Lemma 6.7 and the paragraph following it.

Proposition 2.2 (Solvability). For any $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ there exists a unique solution $$M(z)=M(z;r_1,r_2,s_1,s_2,\tau )$$ to the RH problem 2.1.

Now we define the tacnode kernel. Let $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed parameters. Let $$\tilde {M}(z)$$ be the restriction of $$M(z)$$ to the sector $$z\in \Omega _2$$ around the positive imaginary axis. We extend $$\tilde {M}(z)$$ to the whole complex $$z$$-plane by analytic continuation. This analytic continuation is well defined, since the product $$J_3J_4\cdots J_9J_0J_1J_2$$ of the jump matrices in the RH problem 2.1 is the identity matrix.
The tacnode kernel $$K_{{\rm tac}}(u,v)$$ is defined in terms of the RH matrix $$\tilde {M}(z)$$ by [14, Definition 2.6] (we note that [14, Definition 2.6] contains a typo: it has “$$\tilde {M}^{-1}(u) \tilde {M}(v)$$” instead of “$$\tilde {M}^{-1}(v) \tilde {M}(u)$$”):   $$K_{{\rm tac}}\left(u,v\right) = \frac{1}{2\pi i\left(u-v\right)} \left(0\quad 0\quad 1\quad 1\right) \tilde{M}^{-1}\left(v\right) \tilde{M}\left(u\right) \begin{pmatrix} 1\\ 1\\ 0\\ 0\end{pmatrix}.$$ (10) For later use, it is convenient to denote by   $$\textbf{p}\left(z\right) = \tilde{M}\left(z\right)\begin{pmatrix}1\\ 1\\ 0\\ 0\end{pmatrix} \in\mathbb{C}^{4\times 1}$$ (11) the sum of the first and second columns of $$\tilde {M}(z)$$. Observe that (10) and (11) both depend on the parameters $$r_1,r_2,s_1,s_2,\tau$$.

#### 2.1.2 Definition of the kernel $$\mathcal L_{{\rm tac}}$$

In this section we recall the second way to define the tacnode kernel, via Airy resolvents [2, 3, 17, 22]. We will denote this kernel by $$\mathcal L_{{\rm tac}}(u,v)$$. Denote by $${\rm Ai}(x)$$ the standard Airy function and by   $$K_{{\rm Ai}}\left(x,y\right)= \frac{{\rm Ai}\left(x\right){\rm Ai}'\left(y\right)-{\rm Ai}'\left(x\right){\rm Ai}\left(y\right)}{x-y}=\int_{0}^{\infty} {\rm Ai}\left(x+z\right){\rm Ai}\left(y+z\right){\rm d} z$$ (12) the Airy kernel. For $$\sigma \in \mathbb {R}$$ let   $$K_{{\rm Ai},\sigma}\left(x,y\right) = \int_{0}^{\infty} {\rm Ai}\left(x+z+\sigma\right){\rm Ai}\left(y+z+\sigma\right){\rm d} z$$ (13) be the Airy kernel shifted by $$\sigma$$. Let $$\textbf {K}_{{\rm Ai},\sigma }$$ be the integral operator with kernel $$K_{{\rm Ai},\sigma }$$ acting on the function space $$L^2([0,\infty ))$$.
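The two representations of the Airy kernel in (12) are easy to compare numerically. The sketch below is our own illustration (assuming SciPy); it evaluates the integral form with a finite cutoff, which is harmless because $${\rm Ai}$$ decays superexponentially on the positive axis.

```python
# Numerical comparison (ours, not from the paper) of the two
# representations of the Airy kernel in eq. (12).
from scipy.special import airy
from scipy.integrate import quad

def airy_kernel_integrable(x, y):
    """Closed ("integrable") form in (12), for x != y."""
    ai_x, aip_x, _, _ = airy(x)
    ai_y, aip_y, _, _ = airy(y)
    return (ai_x * aip_y - aip_x * ai_y) / (x - y)

def airy_kernel_integral(x, y, cutoff=40.0):
    """Integral form in (12); Ai decays superexponentially, so we truncate."""
    val, _ = quad(lambda z: airy(x + z)[0] * airy(y + z)[0], 0.0, cutoff)
    return val

x, y = 0.3, 1.1
diff = abs(airy_kernel_integrable(x, y) - airy_kernel_integral(x, y))
assert diff < 1e-7
```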
The action of the operator $$\textbf {K}_{{\rm Ai},\sigma }$$ on a function $$f$$ is defined by   $$[\textbf{K}_{{\rm Ai},\sigma} f]\left(x\right) = \int_{0}^{\infty} K_{{\rm Ai},\sigma}\left(x,y\right)f\left(y\right){\rm d} y.$$ Define the resolvent operator $$\textbf {R}_{{\rm Ai},\sigma }$$ on $$L^2([0,\infty ))$$ by   $$\textbf{R}_{{\rm Ai},\sigma} := \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} - \textbf{1} = \textbf{K}_{{\rm Ai},\sigma}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{K}_{{\rm Ai},\sigma},$$ (14) where $$\textbf {1}$$ stands for the identity operator on $$L^2([0,\infty ))$$. It is known that $$\textbf {R}_{{\rm Ai},\sigma }$$ is again an integral operator on $$L^2([0,\infty ))$$, and we denote its kernel by $$R_{\sigma }(x,y)$$:   $$[\textbf{R}_{{\rm Ai},\sigma} f] \left(x\right) = \int_{0}^{\infty} R_{\sigma}\left(x,y\right)f\left(y\right){\rm d} y.$$ (15) We will sometimes use the notation   $$\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \equiv \left(\textbf{1}+\textbf{R}_{{\rm Ai},\sigma}\right)\left(x,y\right) := \delta\left(x-y\right)+R_{\sigma}\left(x,y\right),$$ (16) with $$\delta (x-y)$$ the Dirac delta function at $$x=y$$ and $$R_{\sigma }$$ the Airy resolvent kernel (15). We will often use the symmetry of the kernel, $$R_{\sigma }(x,y)=R_{\sigma }(y,x)$$. Finally, we will abbreviate $$R_{\sigma }$$ by $$R$$ if the value of $$\sigma$$ is clear from the context.

In a series of papers [2, 3, 17, 22], Adler and coworkers study the tacnode problem using Airy resolvent expressions. We focus, in particular, on the paper by Ferrari–Vető [17] on the nonsymmetric tacnode. Hence the two touching groups of Brownian motions at the tacnode are allowed to have different sizes.
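The resolvent construction (14)–(16) can be made concrete with a simple Nyström-type discretization. The sketch below is entirely our own (not from the paper; it assumes SciPy and truncates $$[0,\infty)$$ to $$[0,L]$$): it checks that the expressions in (14) agree and that the resolvent kernel is symmetric.

```python
# Nystrom-type sketch (ours): discretize K_{Ai,sigma} on a Gauss-Legendre
# grid on [0, L] and verify the resolvent identity (14) and the symmetry
# R_sigma(x,y) = R_sigma(y,x) used in the text.
import numpy as np
from scipy.special import airy

def K_airy_shifted(x, y, sigma=0.0):
    """Shifted Airy kernel (13), via the closed form in (12)."""
    a, b = x + sigma, y + sigma
    Aa, Apa, _, _ = airy(a)
    Ab, Apb, _, _ = airy(b)
    if a == b:                            # confluent (diagonal) value
        return Apa**2 - a * Aa**2
    return (Aa * Apb - Apa * Ab) / (a - b)

sigma, L, n = 0.0, 12.0, 40               # truncation and grid size
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * L * (nodes + 1.0)               # map [-1, 1] -> [0, L]
w = 0.5 * L * weights

Kmat = np.array([[K_airy_shifted(xi, xj, sigma) for xj in x] for xi in x])
A = np.eye(n) - Kmat * w                  # discretization of 1 - K_{Ai,sigma}
Rker = np.linalg.solve(A, Kmat)           # resolvent kernel R_sigma(x_i, x_j)

# the expressions in (14) coincide as operators on the grid:
assert np.allclose(Rker * w, np.linalg.inv(A) - np.eye(n), atol=1e-10)
# symmetry of the resolvent kernel, as used in the text:
assert np.allclose(Rker, Rker.T, atol=1e-10)
```

The symmetry check works because $$K$$ is symmetric, so $$(1-KW)^{-1}K$$ equals its own transpose exactly at the matrix level; only rounding error remains.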
The paper [17] uses a parameter $$\lambda >0$$ that quantifies the amount of asymmetry, with $$\lambda =1$$ corresponding to the symmetric case treated in [2, 3, 22]. These papers also use a parameter $$\sigma >0$$ that controls the strength of interaction between the two groups of Brownian motions near the tacnode. In the present paper, we will denote the latter parameter by a capital $$\Sigma$$. The parameter $$\Sigma$$ has a similar effect on the tacnode kernel as the temperature parameter used in [12, 13] (suitably rescaled). In order to be consistent with [14], we will use the notation $$\sigma$$ to denote   $$\sigma = \lambda^{1/2}\left(1+\lambda^{-1/2}\right)^{2/3}\Sigma.$$ (17) (What we call $$\sigma$$ was called $$\widetilde \sigma$$ in [2, 3, 17, 22].)

The papers [2, 3, 17, 22] consider a multitime extended tacnode kernel with time variables $$\tau _1,\tau _2$$. We restrict ourselves here to the single-time case $$\tau _1=\tau _2=:\tau$$. The discussion of the multitime case is postponed to Section 2.4. With the above notations, define the functions [17]   \begin{align} \begin{split} b_{\tau,z}\left(x\right) &= \exp\left(-\tau y+\tau^3/3\right){\rm Ai}\left(y\right),\quad {\rm with}\ y:=z+Cx+\Sigma+\tau^2,\\ \tilde{b}_{\tau,z}\left(x\right) &= \exp\left(-\sqrt{\lambda}\tau \tilde{y}+\lambda\tau^3/3\right){\rm Ai}\left(\lambda^{1/6}\tilde{y}\right),\quad {\rm with}\ \tilde{y}:=-z+Cx+\sqrt{\lambda}\left(\Sigma+\tau^2\right), \end{split} \end{align} (18) where   $$C = \left(1+\lambda^{-1/2}\right)^{1/3}.$$ (19) The notations $$b_{\tau ,\sigma +\xi }^{\lambda }(x+\widetilde \sigma )$$ and $$b_{\lambda ^{1/3}\tau ,\lambda ^{2/3}\sigma -\lambda ^{1/6}\xi }^{\lambda ^{-1}}(x+\widetilde{\sigma })$$ in [17] correspond to our notations $$\lambda ^{1/6}\tilde {b}_{-\tau ,\xi }(x)$$ and $$\lambda ^{-1/6}b_{-\tau ,\xi }(x)$$, respectively. Note that, in the symmetric case $$\lambda =1$$, we have $$\tilde {b}_{\tau ,z}(x)=b_{\tau ,-z}(x)$$.
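The symmetry $$\tilde {b}_{\tau ,z}(x)=b_{\tau ,-z}(x)$$ at $$\lambda =1$$ follows by inspecting (18), since $$\tilde{y}$$ then equals $$y$$ with $$z$$ replaced by $$-z$$. A quick numerical spot check (ours, with arbitrary parameter values, assuming SciPy):

```python
# Spot check (ours) of the symmetry b~_{tau,z}(x) = b_{tau,-z}(x) at
# lambda = 1 for the functions defined in (18).  Sigma, tau are arbitrary.
import numpy as np
from scipy.special import airy

lam, Sigma, tau = 1.0, 0.7, 0.3
C = (1.0 + lam**-0.5)**(1.0 / 3.0)        # eq. (19)

def b(tau, z, x):
    y = z + C * x + Sigma + tau**2
    return np.exp(-tau * y + tau**3 / 3.0) * airy(y)[0]

def b_tilde(tau, z, x):
    yt = -z + C * x + np.sqrt(lam) * (Sigma + tau**2)
    return (np.exp(-np.sqrt(lam) * tau * yt + lam * tau**3 / 3.0)
            * airy(lam**(1.0 / 6.0) * yt)[0])

for z in (-0.4, 0.0, 0.9):
    for xx in (0.0, 0.5, 2.0):
        assert abs(b_tilde(tau, z, xx) - b(tau, -z, xx)) < 1e-12
```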
The functions (18) also depend on $$\lambda ,\sigma$$ (recall (17)), but we do not show this in the notation. Next, we define the functions   \begin{align} \begin{split} \mathcal{A}_{\tau,z}\left(x\right) &= b_{\tau,z}\left(x\right) -\lambda^{1/6}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) {\rm d} y,\\ \tilde{\mathcal{A}}_{\tau,z}\left(x\right) &= \tilde{b}_{\tau,z}\left(x\right)- \lambda^{-1/6}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)b_{\tau,z}\left(y\right){\rm d} y. \end{split} \end{align} (20) Again we have $$\tilde {\mathcal {A}}_{\tau ,z}(x) = \mathcal {A}_{\tau ,-z}(x)$$ in the symmetric case $$\lambda =1$$, and we suppress the dependence on $$\lambda ,\sigma$$ from the notation.

We are now ready to introduce the tacnode kernel $$\mathcal L_{{\rm tac}}(u,v)=\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ of Ferrari–Vető [17], restricted to the single-time case $$\tau _1=\tau _2=\tau$$. (This kernel is called $$\mathcal L_{{\rm tac}}^{\lambda ,\sigma }(\tau _1,\xi _1,\tau _2,\xi _2)$$ in [17], with $$\xi _1=u$$, $$\xi _2=v$$, and $$\tau _1=\tau _2=\tau$$. Recall that we use $$\sigma$$ with a different meaning.) The kernel can be represented in several equivalent ways. We find it convenient to use the following representation.

Proposition 2.3 (The tacnode kernel). Fix $$\lambda >0$$. The tacnode kernel $$\mathcal L_{{\rm tac}}(u,v)$$ of Ferrari–Vető [17] in the single-time case $$\tau _1=\tau _2=\tau$$ can be written in the form   \begin{align} \begin{split} \mathcal L_{{\rm tac}}(u,v;\sigma,\tau) &= C\lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau,u}(x)\tilde{b}_{-\tau,v}(x)\,{\rm d} x\\ &\quad +C\int_0^{\infty}\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y, \end{split} \end{align} (21) with the notations (13) and (16)–(20). Proposition 2.3 is proved in Section 4.1, and its multitime extended version is stated in Section 2.4.
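To get a feel for formula (21), here is a crude numerical sketch (entirely our own discretization, not the paper's; it assumes SciPy and truncates the half-line integrals to $$[0,L]$$ with Gauss–Legendre quadrature). At $$\lambda =1$$ and $$\tau =0$$ every ingredient of (21) is symmetric under $$u\leftrightarrow v$$, which gives a simple consistency check.

```python
# Crude sketch (ours) of the tacnode kernel (21) at lambda = 1, tau = 0,
# where the formula is symmetric in (u, v).  Half-line integrals are
# truncated to [0, L_cut] with Gauss-Legendre quadrature.
import numpy as np
from scipy.special import airy

lam, Sig = 1.0, 0.5                                       # tau = 0 throughout
C = (1.0 + lam**-0.5)**(1.0 / 3.0)                        # eq. (19)
sigma = lam**0.5 * (1.0 + lam**-0.5)**(2.0 / 3.0) * Sig   # eq. (17)

L_cut, n = 12.0, 60
nodes, weights = np.polynomial.legendre.leggauss(n)
x = 0.5 * L_cut * (nodes + 1.0)
w = 0.5 * L_cut * weights
Ai = lambda t: airy(t)[0]

b = lambda z, xx: Ai(z + C * xx + Sig)          # b_{0,z}(x) in (18)
b_tilde = lambda z, xx: Ai(-z + C * xx + Sig)   # b~_{0,z}(x) at lambda = 1

def A_fun(z, xx):                               # A_{0,z}(x) in (20)
    return b(z, xx) - np.sum(w * Ai(xx + x + sigma) * b_tilde(z, x))

def K_shift(a, bb):                             # kernel (13) in closed form
    Aa, Apa, _, _ = airy(a + sigma)
    Ab, Apb, _, _ = airy(bb + sigma)
    if a == bb:
        return Apa**2 - (a + sigma) * Aa**2
    return (Aa * Apb - Apa * Ab) / (a - bb)

Kmat = np.array([[K_shift(xi, xj) for xj in x] for xi in x])
Rker = np.linalg.solve(np.eye(n) - Kmat * w, Kmat)   # resolvent kernel

def L_tac(u, v):
    Au = np.array([A_fun(u, xi) for xi in x])
    Av = np.array([A_fun(v, xi) for xi in x])
    term1 = C * lam**(1.0 / 3.0) * np.sum(w * b_tilde(u, x) * b_tilde(v, x))
    # (1 - K)^{-1} = delta + R_sigma, cf. (16):
    term2 = C * (np.sum(w * Au * Av) + (w * Au) @ Rker @ (w * Av))
    return term1 + term2

assert abs(L_tac(0.3, -0.4) - L_tac(-0.4, 0.3)) < 1e-8
```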
The proposition was obtained in the symmetric case $$\lambda =1$$ by Adler et al. [3, Theorem 1.2(i)]. Incidentally, we note that a kernel of the form (21) allows for an efficient numerical evaluation of its gap probabilities, as was recently shown in [9]. In the above proposition we use the notation (16). Hence the double integral can be rewritten as   \begin{align*} &\int_0^{\infty}\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y \\ &\quad = \int_0^{\infty}\int_0^{\infty}R_{\sigma}(x,y) \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y+\int_0^{\infty} \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(x)\,{\rm d} x. \end{align*}

### 2.2 Connection between the tacnode kernels $$K_{{\rm tac}}$$ and $$\mathcal L_{{\rm tac}}$$

Now we state the first main theorem of this paper.

Theorem 2.4 (Connection between kernels). Fix $$\lambda >0$$. The kernel $$\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ in (21) equals the RH kernel $$K_{{\rm tac}}(u,v;r_1,r_2,s_1,s_2,\tau )$$ in (10) with the parameters   $$r_1=\lambda^{1/4},\quad r_2=1,\quad s_1=\tfrac{1}{2}\lambda^{3/4}\left(\Sigma+\tau^2\right), \quad s_2=\tfrac{1}{2}\left(\Sigma+\tau^2\right),$$ (22) where we recall (17). Theorem 2.4 is proved in Section 4.4.

### 2.3 Derivative of the tacnode kernel: a rank-2 property

To prove Theorem 2.4, we will take the derivatives of the kernels $$K_{{\rm tac}}$$ and $$\mathcal L_{{\rm tac}}$$ with respect to a certain parameter, and prove that they are equal. The derivative will be a rank-2 kernel. First we discuss this for the kernel $$K_{{\rm tac}}$$.

#### 2.3.1 Derivative of the kernel $$K_{{\rm tac}}$$

Recall the formula (10) for the tacnode kernel $$K_{{\rm tac}}$$. This kernel has an “integrable” form, due to the factor $$u-v$$ in the denominator. Interestingly, this factor cancels when taking the derivative with respect to $$s_1$$ or $$s_2$$. This is the content of the next theorem.
To state the theorem, we parameterize   $$s_1 =: \sigma_1 s,\quad s_2 =: \sigma_2 s,$$ (23) where $$\sigma _1,\sigma _2$$ are fixed and $$s$$ is variable. In the symmetric case where $$s_1=s_2=:s$$, we could simply take $$\sigma _1=\sigma _2=1$$. We also consider $$r_1,r_2>0$$ to be fixed. Then we write $$K_{{\rm tac}}(u,v;s,\tau )$$, $$\textbf {p}(z;s,\tau )$$, etc., to denote the dependence on the two parameters $$s$$ and $$\tau$$.

Theorem 2.5 (Derivative of tacnode kernel $$K_{{\rm tac}}$$). With the parametrization (23), the kernel (10) satisfies   $$\frac{\partial}{\partial s} K_{{\rm tac}}\left(u,v;s,\tau\right) = -\frac{1}{\pi}\left(\sigma_1 p_1\left(u;s,\tau\right)p_1\left(v;s,-\tau\right)+\sigma_2 p_2\left(u;s,\tau\right)p_2\left(v;s,-\tau\right)\right),$$ (24) where $$p_j$$, $$j=1,\ldots ,4$$, denotes the $$j$$th entry of the vector $$\textbf {p}$$ in (11). Consequently, if $$\sigma _1,\sigma _2>0$$ then   $$K_{{\rm tac}}\left(u,v;s,\tau\right) = \frac{1}{\pi}\int_s^{\infty}\left(\sigma_1 p_1\left(u;\widetilde s,\tau\right)p_1\left(v;\widetilde s,-\tau\right)+\sigma_2 p_2\left(u;\widetilde s,\tau\right)p_2\left(v;\widetilde s,-\tau\right)\right){\rm d} \widetilde s.$$ (25)

Theorem 2.5 is proved in Section 3. Note that the right-hand side of (24) is a rank-2 kernel. In the proof of Theorem 2.4 we will use Theorem 2.5 with specific parameters $$\sigma _1,\sigma _2$$ (see (84)).

Remark 2.1. Formulas (24)–(25) have an analogue for the kernel $$K_{\Psi }(u,v)$$ which is associated with the $$2\times 2$$ Flaschka–Newell RH matrix $$\Psi (z)$$; see [11, Eq. (1.22)]. The kernel $$K_{\Psi }(u,v)$$ occurs in Hermitian random matrix theory when the limiting eigenvalue density vanishes quadratically at an interior point of its support [10, 11].

#### 2.3.2 Derivative of the kernel $$\mathcal L_{{\rm tac}}$$

Next we consider the derivative of the tacnode kernel $$\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ with respect to the parameter $$\sigma$$.
Theorem 2.6 (Derivative of tacnode kernel $$\mathcal L_{{\rm tac}}$$). Fix $$\lambda >0$$. The kernel (21) satisfies   $$\frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) = -C^{-2}\left(\lambda^{1/3}\hat{p}_1\left(u;\sigma,\tau\right)\hat{p}_1\left(v;\sigma,-\tau\right)+\lambda^{-1/2}\hat{p}_2\left(u;\sigma,\tau\right)\hat{p}_2\left(v;\sigma,-\tau\right)\right)$$ (26) and consequently   $$\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) = C^{-2}\int_{\sigma}^{\infty}\left(\lambda^{1/3}\hat{p}_1\left(u;s,\tau\right)\hat{p}_1\left(v;s,-\tau\right)+\lambda^{-1/2}\hat{p}_2\left(u;s,\tau\right)\hat{p}_2\left(v;s,-\tau\right)\right){\rm d} s.$$ (27) Here we denote $$C=(1+\lambda ^{-1/2})^{1/3}$$ and   \begin{align} \begin{split} \hat{p}_1\left(z;\sigma,\tau\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) \tilde{\mathcal{A}}_{\tau,z}\left(x\right){\rm d} x,\\ \hat{p}_2\left(z;\sigma,\tau\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right)\mathcal{A}_{\tau,z}\left(x\right){\rm d}x, \end{split} \end{align} (28) with the notation (16).

Theorem 2.6 is proved in Section 4.3, and its multitime extended version is stated in Section 2.4. In the proof we will use certain functions (53) that were introduced by Tracy–Widom [28]. We will also obtain some alternative representations for (28); see Lemma 4.3. Note that the right-hand side of (26) is again a rank-2 kernel. This is a result of independent interest.

### 2.4 The multitime case

Our results for the Ferrari–Vető tacnode kernel can be readily generalized to the multitime case, corresponding to two different times $$\tau _1,\tau _2$$. It suffices to replace the subscripts $$\tau$$ and $$-\tau$$ by $$\tau _1$$ and $$-\tau _2$$, respectively. This yields the following generalization of Proposition 2.3.

Proposition 2.7 (Extended tacnode kernel). Fix $$\lambda >0$$.
The multitime extended tacnode kernel $$\mathcal L_{{\rm tac}}$$ of Ferrari–Vető [17] can be written in the form   \begin{align} \mathcal L_{{\rm tac}}(u,v;\sigma,\tau_1,\tau_2) &= -\textbf{1}_{\tau_1<\tau_2}\frac{1}{\sqrt{4\pi(\tau_2-\tau_1)}}\exp \left(-\frac{(v-u)^2}{4(\tau_2-\tau_1)}\right)\nonumber\\ &\quad +C\lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau_1,u}(x)\tilde{b}_{-\tau_2,v}(x)\,{\rm d} x\nonumber\\ &\quad +C\int_0^{\infty}\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \mathcal{A}_{\tau_1,u}(x)\mathcal{A}_{-\tau_2,v}(y)\,{\rm d} x\,{\rm d} y, \end{align} (29) with the notations (13) and (16)–(20). This kernel is called $$\mathcal L_{{\rm tac}}^{\lambda ,\sigma }(\tau _1,\xi _1,\tau _2,\xi _2)$$ in [17], with $$\xi _1=u$$ and $$\xi _2=v$$. Recall that we use $$\sigma$$ with a different meaning.

The rank-2 formula in Theorem 2.6 generalizes as follows.

Theorem 2.8 (Derivative of extended tacnode kernel). Fix $$\lambda >0$$. The kernel (29) satisfies   $$\frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau_1,\tau_2\right) = -C^{-2}\left(\lambda^{1/3}\hat{p}_1\left(u;\sigma,\tau_1\right) \hat{p}_1\left(v;\sigma,-\tau_2\right)+\lambda^{-1/2}\hat{p}_2\left(u;\sigma,\tau_1\right) \hat{p}_2\left(v;\sigma,-\tau_2\right)\right)$$ (30) and consequently   \begin{align} &\mathcal L_{{\rm tac}}(u,v;\sigma,\tau_1,\tau_2)\nonumber\\ &\quad = -\textbf{1}_{\tau_1<\tau_2}\frac{1}{\sqrt{4\pi(\tau_2-\tau_1)}} \exp\left(-\frac{(v-u)^2}{4(\tau_2-\tau_1)}\right)\nonumber\\ &\qquad +C^{-2}\int_{\sigma}^{\infty}\left(\lambda^{1/3}\hat{p}_1(u;s,\tau_1) \hat{p}_1(v;s,-\tau_2)+\lambda^{-1/2}\hat{p}_2(u;s,\tau_1)\hat{p}_2(v;s,-\tau_2)\right){\rm d} s. \end{align} (31) Here we use again the notations $$C=(1+\lambda ^{-1/2})^{1/3}$$ and (28).

As an offshoot of this theorem, we obtain an RH expression for the multitime extended tacnode kernel.
Indeed, the functions $$\hat {p}_1$$ and $$\hat {p}_2$$ can be expressed in terms of the top left $$2\times 2$$ block of the RH matrix $$M(z)$$ on account of (11), (22), and (83). To the best of our knowledge, this is the first time that an RH expression is given for a multitime extended kernel. The above results on the multitime extended tacnode kernel can be proved exactly as in the single-time case. We omit the details. In the remainder of this paper we will not return to the multitime case.

### 2.5 Airy resolvent formulas for the $$4\times 4$$ RH matrix

In the proof of Theorem 2.4 we will need Airy resolvent formulas for the entries in the first two columns of the RH matrix $$\tilde {M}(z) = \tilde {M}(z;r_1,r_2,s_1,s_2,\tau )$$. The existence of such formulas could be anticipated by comparing Theorems 2.5 and 2.6. The formulas below will be stated for general values of $$r_1,r_2,s_1,s_2,\tau$$. The reader who is interested only in the symmetric case $$r_1=r_2$$ and $$s_1=s_2$$ can skip the next paragraph and move directly to (34).

For general $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$, we define the constants   \begin{align} \begin{split} C &=\left(r_1^{-2}+r_2^{-2}\right)^{1/3},\\ D &=\sqrt{\frac{r_1}{r_2}}\exp\left(\frac{r_1^4-r_2^4}{3}\tau^3 +2\left(r_2s_2-r_1s_1\right)\tau\right),\\ \sigma &=C^{-1}\left(2\left(\frac{s_1}{r_1}+\frac{s_2}{r_2}\right)-\left(r_1^2+r_2^2\right)\tau^2\right), \end{split} \end{align} (32) and the functions   \begin{align} \begin{split} b_z\left(x\right) &=\sqrt{2\pi}r_2^{1/6}\exp\left(-r_2^2 \tau \left(z+C x\right)\right){\rm Ai}\left(r_2^{2/3}\left(z+Cx+2\frac{s_2}{r_2}\right)\right),\\ \tilde{b}_z\left(x\right) &= \sqrt{2\pi}r_1^{1/6}\exp\left(r_1^2 \tau \left(z-C x\right)\right){\rm Ai}\left(r_1^{2/3}\left(-z+Cx+2\frac{s_1}{r_1}\right)\right). \end{split} \end{align} (33) The above definitions of $$C,\sigma$$ are consistent with our earlier formulas (19) and (17) under the identification (22).
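The consistency claims for (32) amount to elementary algebra; a numerical spot check (our own illustration, with arbitrary parameter values) confirms them, and also confirms the symmetric-case values $$C=2^{1/3}$$, $$D=1$$, $$\sigma =2^{5/3}s-2^{2/3}\tau ^2$$ stated in (34) below:

```python
# Spot check (ours): the constants (32), evaluated at the parameter
# identification (22), reproduce (19) and (17); at r1 = r2 = 1, s1 = s2 = s
# they collapse to the symmetric-case values stated in (34).
import numpy as np

def constants_32(r1, r2, s1, s2, tau):
    C = (r1**-2 + r2**-2)**(1.0 / 3.0)
    D = np.sqrt(r1 / r2) * np.exp((r1**4 - r2**4) / 3.0 * tau**3
                                  + 2.0 * (r2 * s2 - r1 * s1) * tau)
    sigma = C**-1 * (2.0 * (s1 / r1 + s2 / r2) - (r1**2 + r2**2) * tau**2)
    return C, D, sigma

# identification (22), with arbitrary lambda, Sigma, tau:
lam, Sigma, tau = 2.5, 0.4, 0.6
C, D, sigma = constants_32(lam**0.25, 1.0,
                           0.5 * lam**0.75 * (Sigma + tau**2),
                           0.5 * (Sigma + tau**2), tau)
assert abs(C - (1.0 + lam**-0.5)**(1.0 / 3.0)) < 1e-12                         # (19)
assert abs(sigma - lam**0.5 * (1.0 + lam**-0.5)**(2.0 / 3.0) * Sigma) < 1e-12  # (17)

# symmetric case r1 = r2 = 1, s1 = s2 = s:
s = 0.35
C, D, sigma = constants_32(1.0, 1.0, s, s, tau)
assert abs(C - 2.0**(1.0 / 3.0)) < 1e-14
assert abs(D - 1.0) < 1e-14
assert abs(sigma - (2.0**(5.0 / 3.0) * s - 2.0**(2.0 / 3.0) * tau**2)) < 1e-12
```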
Similarly, the formulas for $$b_z(x),\tilde {b}_z(x)$$ reduce to the ones in (18), up to certain multiplicative constants (independent of $$z,x$$). In the symmetric case where $$r_1=r_2=1$$, $$s_1=s_2=:s$$, the above definitions simplify to   \begin{align} \begin{split} C&=2^{1/3},\\ D &=1,\\ \sigma &= 2^{5/3}s -2^{2/3}\tau^2, \\ b_z\left(x\right) &= \sqrt{2\pi}\exp\left(-\tau \left(z+2^{1/3} x\right)\right){\rm Ai}\left(z+2^{1/3}x+2s\right),\\ \tilde{b}_z\left(x\right)&=b_{-z}\left(x\right). \end{split} \end{align} (34)

We can now state the Airy resolvent formulas.

Theorem 2.9 (Airy resolvent formulas for the $$4\times 4$$ RH matrix). Denote by $$\tilde {M}_{j,k}(z)$$ the $$(j,k)$$ entry of the RH matrix $$\tilde {M}(z)=\tilde {M}(z;r_1,r_2,s_1,s_2,\tau )$$. Then the entries in the top left $$2\times 2$$ block of $$\tilde {M}(z)$$ can be expressed by the formulas   \begin{align} \begin{split} \tilde{M}_{1,1}\left(z\right) &= \int_{0}^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) \tilde{b}_z\left(x\right){\rm d} x,\\ \tilde{M}_{2,1}\left(z\right)&= -D\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_z\left(y\right) {\rm d} x\,{\rm d} y,\\ \tilde{M}_{1,2}\left(z\right) &=-D^{-1}\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right){\rm Ai}\left(x+y+\sigma\right)b_z\left(y\right) {\rm d} x\,{\rm d} y,\\ \tilde{M}_{2,2}\left(z\right) &=\int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} \left(x,0\right)b_z\left(x\right) {\rm d} x, \end{split} \end{align} (35) where we use the notations (16) and (32)–(33) (or (34) in the symmetric case). The entries in the bottom left $$2\times 2$$ block of $$\tilde {M}(z)$$ can be obtained by combining the above expressions with equations (41)–(42).

Theorem 2.9 is proved in Section 5.
The proof makes heavy use of the Tracy–Widom functions defined in Section 4.2. Corollary 2.10. The first two entries of the vector $$\textbf {p}(z)$$ in (11) are given by   \begin{align} \begin{split} p_1\left(z\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) \tilde{\mathcal{A}}_{z}\left(x\right){\rm d} x, \\ p_2\left(z\right)& = \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right)\mathcal{A}_{z}\left(x\right){\rm d} x, \end{split} \end{align} (36) where   \begin{align} \begin{split} \mathcal{A}_{z}\left(x\right) &= b_z\left(x\right) -D\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{z}\left(y\right) {\rm d} y,\\ \tilde{\mathcal{A}}_{z}\left(x\right) &= \tilde{b}_{z}\left(x\right)- D^{-1}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)b_{z}\left(y\right){\rm d} y. \end{split} \end{align} (37) 2.6 Differential equations for the columns of $$M(z)$$ In the proof of Theorem 2.9, we will use the differential equations for the columns of the RH matrix $$\tilde {M}(z)$$ (or $$M(z)$$). Interestingly, the coefficients in these differential equations contain the Hastings–McLeod solution $$q(x)$$ to Painlevé II. We also need the associated Hamiltonian  $$u\left(x\right) := \left(q'\left(x\right)\right)^2-xq^2\left(x\right)-q^4\left(x\right).$$ (38) Proposition 2.11 (System of differential equations). (a) Let the vector $$\textbf {m}(z)=\textbf {m}(z;r_1,r_2,s_1,s_2,\tau )$$ be one of the columns of $$\tilde {M}(z)$$, or a fixed linear combination of them, and denote its entries by $$m_j(z)$$, $$j=1,\ldots ,4$$. 
Then with the prime denoting the derivative with respect to $$z$$, we have   \begin{align} r_1^{-2} m_1'' &= 2\tau m_1'+C^2D^{-1} q\left(\sigma\right)m_2'\nonumber\\ &\quad + \left[C q^2\left(\sigma\right)- z+2s_1/r_1 -r_1^2\tau^2\right] m_1 -\left[CD^{-1} q'\left(\sigma\right)\right]m_2, \end{align} (39)  \begin{align} \nonumber r_2^{-2} m_2'' &= -C^2D q\left(\sigma\right) m_1'- 2\tau m_2'\\ &\quad +\left[C q^2\left(\sigma\right)+z+2s_2/r_2-r_2^2\tau^2\right] m_2 -\left[CD q'\left(\sigma\right)\right]m_1, \end{align} (40)  \begin{align} r_1im_3 &= m_1'-\left(C^{-1} u\left(\sigma\right)-s_1^2+r_1^2\tau\right) m_1-C^{-1}D^{-1}q\left(\sigma\right) m_2, \end{align} (41)  \begin{align} r_2im_4 &= m_2'+C^{-1}D q\left(\sigma\right) m_1+\left(C^{-1}u\left(\sigma\right)-s_2^2+r_2^2\tau\right) m_2, \end{align} (42) where the constants $$C,D,\sigma$$ are defined in (32) and we denote by $$q,u$$ the Hastings–McLeod function and the associated Hamiltonian. (b) Conversely, any vector $$\textbf {m}(z)$$ that solves (39)–(42) is a fixed (independent of $$z$$) linear combination of the columns of $$\tilde {M}(z)$$. Proposition 2.11 is proved in Section 6.4 with the help of Lax pair calculations. There are similar differential equations with respect to the parameters $$s_1$$, $$s_2$$, or $$\tau$$ but they will not be needed. 2.7 Outline of the paper The remainder of this paper is organized as follows. In Section 3 we establish Theorem 2.5 about the derivative of the RH tacnode kernel. In Section 4 we prove Proposition 2.3 and Theorems 2.4 and 2.6 about the Ferrari–Vető tacnode kernel. In Section 5 we prove Theorem 2.9 about the Airy resolvent formulas for the entries of the RH matrix $$\tilde {M}(z)$$. Finally, in Section 6 we use Lax pair calculations to prove Proposition 2.11. 3 Proof of Theorem 2.5 Throughout the proof we use the parametrization $$s_j=\sigma _j s$$ with $$\sigma _j$$ fixed, $$j=1,2$$. We also assume $$r_1,r_2>0$$ to be fixed.
The RH matrix $$\tilde {M}(z)=\tilde {M}(z;s,\tau )$$ satisfies the differential equation   $$\frac{\partial}{\partial s}\tilde{M}\left(z\right) =V\left(z\right)\tilde{M}\left(z\right),$$ (43) for a certain coefficient matrix $$V(z)=V(z;s,\tau )$$. This is described in more detail in Section 6.3. At this moment, we only need to know that   $$V\left(z\right)= -2iz\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_2 & 0 & 0 \end{pmatrix}+V\left(0\right),$$ (44) where the matrix $$V(0)$$ is independent of $$z$$. We will also need the symmetry relation   $$\tilde{M}^{-1}\left(z;s,\tau\right) = K^{-1} \tilde{M}^T\left(z;s,-\tau\right) K,$$ (45) where the superscript T denotes the transpose and   $$K = \begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}$$ (46) with $$I_2$$ denoting the identity matrix of size $$2\times 2$$. This symmetry relation is a consequence of Lemma 6.3. We are now ready to prove Theorem 2.5. Abbreviating $$\tilde {M}(z):=\tilde {M}(z;s,\tau )$$ for the moment, we start by calculating   \begin{align*} \frac{\partial}{\partial s}\left[\tilde{M}^{-1}\left(v\right) \tilde{M}\left(u\right)\right] &=\tilde{M}^{-1}\left(v\right)\left(\frac{\partial}{\partial s}\left[\tilde{M}\left(u\right)\right] \tilde{M}^{-1}\left(u\right)-\frac{\partial}{\partial s}\left[\tilde{M}\left(v\right)\right] \tilde{M}^{-1}\left(v\right)\right)\tilde{M}\left(u\right) \\ &=\tilde{M}^{-1}\left(v\right)\left(V\left(u\right)-V\left(v\right) \right) \tilde{M}\left(u\right) \\ &=-2i\left(u-v\right)\tilde{M}^{-1}\left(v\right)\begin{pmatrix} 0 & 0\\ \Sigma & 0 \end{pmatrix}\tilde{M}\left(u\right), \end{align*} with $$\Sigma :={\rm diag}(\sigma _1,\sigma _2)$$, where the last equality follows by (44). 
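The first equality in the computation above uses only that $$\tilde {M}$$ solves the linear ODE (43), which gives $$\frac{\partial}{\partial s}\tilde{M}^{-1}=-\tilde{M}^{-1}V$$. This generic step can be illustrated numerically; the $$2\times 2$$ family below is a toy example (the coefficient matrix is illustrative, not the actual tacnode data).

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])

def V(z, s):
    # toy coefficient matrix depending on z and s, playing the role of V(z; s)
    return z*A + np.array([[0.0, 0.0], [s, 0.0]])

def M(z, s):
    # solve dM/ds = V(z; s) M with M(z; 0) = I, as in (43)
    def rhs(t, y):
        return (V(z, t) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, s), np.eye(2).ravel(), rtol=1e-12, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

u, v, s0, h = 0.7, -0.3, 1.0, 1e-4

def F(t):
    return np.linalg.inv(M(v, t)) @ M(u, t)

# central finite difference for d/ds [M^{-1}(v) M(u)] versus the closed form
lhs = (F(s0 + h) - F(s0 - h))/(2*h)
rhs = np.linalg.inv(M(v, s0)) @ (V(u, s0) - V(v, s0)) @ M(u, s0)
print(np.allclose(lhs, rhs, atol=1e-6))
```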
With the help of the above calculation we now obtain   \begin{align*} \frac{\partial}{\partial s} K_{{\rm tac}}\left(u,v;s,\tau\right) &= \frac{\partial}{\partial s} \frac{1}{2\pi i\left(u-v\right)} \left(0\quad 0\quad 1\quad 1\right)\tilde{M}^{-1}\left(v;s,\tau\right) \tilde{M}\left(u;s,\tau\right) \left(1\quad 1\quad 0\quad 0\right)^T\\ &=-\frac{1}{\pi}\left(0\quad 0\quad 1\quad 1\right) \tilde{M}^{-1}\left(v;s,\tau\right)\begin{pmatrix} 0 & 0\\ \Sigma & 0 \end{pmatrix}\tilde{M}\left(u;s,\tau\right)\left(1\quad 1\quad 0\quad 0\right)^T\\ &=-\frac{1}{\pi}\left(1\quad 1\quad 0\quad 0\right) \tilde{M}^{T}\left(v;s,-\tau\right)\begin{pmatrix} \Sigma & 0\\ 0 & 0 \end{pmatrix}\tilde{M}\left(u;s,\tau\right)\left(1\quad 1\quad 0\quad 0\right)^T\\ &=-\frac{1}{\pi}\textbf{p}^T\left(v;s,-\tau\right) \begin{pmatrix} \Sigma & 0 \\ 0&0 \end{pmatrix}\textbf{p}\left(u;s,\tau\right), \end{align*} where the third equality follows from (45) and the fourth one from (11). This proves (24). By integrating this equality, we obtain (25), due to the fact that the entries of $$\textbf {p}(z;s,\tau )$$ go to zero for $$s\to +\infty$$ if $$\sigma _1,\sigma _2>0$$. The latter fact follows from [14, Section 3] if $$\tau =0$$ and is established similarly for general $$\tau \in \mathbb {R}$$. This ends the proof of Theorem 2.5. 4 Proofs of Proposition 2.3 and Theorems 2.6 and 2.4 4.1 Proof of Proposition 2.3 Denote by $$\textbf {A}_{\sigma }$$ the operator on $$L^2([0,\infty ))$$ that acts on the function $$f\in L^2([0,\infty ))$$ by the rule   $$\left[\textbf{A}_{\sigma} f\right]\left(x\right) = \int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right) f\left(y\right){\rm d} y.$$ (47) Observe that   $$\textbf{A}_{\sigma}^2 = \textbf{K}_{{\rm Ai},\sigma},$$ (48) on account of (13). Now the kernel $$\mathcal L_{{\rm tac}}(u,v)$$ in [17, Eq.
(1.5)] in the single time case $$\tau _1=\tau _2=\tau$$ has the form   \begin{align} C^{-1}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right)&= \int_0^{\infty}b_{\tau,u}\left(x\right)b_{-\tau,v}\left(x\right){\rm d} x\nonumber \\ &\quad +\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\left[\textbf{A}_{\sigma} b_{\tau,u}\right]\left(x\right)\left[\textbf{A}_{\sigma} b_{-\tau,v}\right]\left(y\right){\rm d} x{\rm d} y\nonumber \\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} b_{\tau,u}\right]\left(x\right)\tilde{b}_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y\nonumber\\ &\quad +\lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right) \tilde{b}_{-\tau,v}\left(x\right){\rm d} x\nonumber\\ &\quad +\lambda^{1/3}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right]\left(x\right) \left[\textbf{A}_{\sigma} \tilde{b}_{-\tau,v}\right]\left(y\right){\rm d} x\,{\rm d} y\nonumber\\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right]\left(x\right)b_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y, \end{align} (49) where again $$C=(1+\lambda ^{-1/2})^{1/3}$$ and the resolvent kernel notation (16) is used. 
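The operator identity (48) reflects the classical factorization of the Airy kernel. Equation (13) is not reproduced in this excerpt; assuming, as is standard, that $$\textbf{K}_{{\rm Ai},\sigma}$$ has kernel $$\int_0^\infty {\rm Ai}(x+t+\sigma){\rm Ai}(y+t+\sigma)\,{\rm d} t$$, the factorization can be checked numerically against the Christoffel–Darboux form of the same kernel:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def Ai(t):  return airy(t)[0]
def dAi(t): return airy(t)[1]

def K_shifted(x, y, sigma):
    # kernel of A_sigma^2: integral form of the shifted Airy kernel
    # (Ai decays super-exponentially, so truncating at 40 is ample)
    val, _ = quad(lambda t: Ai(x + t + sigma)*Ai(y + t + sigma), 0.0, 40.0,
                  epsabs=1e-12, epsrel=1e-12)
    return val

def K_cd(x, y, sigma):
    # same kernel in Christoffel-Darboux form
    a, b = x + sigma, y + sigma
    return (Ai(a)*dAi(b) - dAi(a)*Ai(b))/(a - b)

x, y, sigma = 0.3, 1.1, 0.5
print(K_shifted(x, y, sigma), K_cd(x, y, sigma))
```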
Note that the notations $$\sigma$$, $$\widetilde \sigma$$, $$b_{\tau ,\sigma +\xi }^{\lambda }(x+\widetilde \sigma )$$, $$B_{\tau ,\sigma +\xi }^{\lambda }(x+\widetilde \sigma )$$, $$b_{\lambda ^{1/3}\tau ,\lambda ^{2/3}\sigma -\lambda ^{1/6}\xi }^{\lambda ^{-1}}(x+\widetilde \sigma )$$, $$B_{\lambda ^{1/3}\tau ,\lambda ^{2/3}\sigma -\lambda ^{1/6}\xi }^{\lambda ^{-1}}(x+\widetilde \sigma )$$ in [17] correspond to our notations $$\Sigma$$, $$\sigma$$, $$\lambda ^{1/6}\tilde {b}_{-\tau ,\xi }(x)$$, $$[\textbf {A}_{\sigma } b_{-\tau ,\xi }](x)$$, $$\lambda ^{-1/6}b_{-\tau ,\xi }(x)$$, $$[\textbf {A}_{\sigma } \tilde {b}_{-\tau ,\xi }](x)$$, respectively. The notation $$K_{{\rm Ai}}^{(-\tau ,\tau )}(\sigma +\xi _1,\sigma +\xi _2)$$ in [17, Eq. (1.11)] corresponds to the first term in the right-hand side of (49). The second term in the right-hand side of (49) equals   \begin{align} &\int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \left[\textbf{A}_{\sigma} b_{\tau,u}\right](x)\left[\textbf{A}_{\sigma}b_{-\tau,v}\right](y)\,{\rm d} x\,{\rm d} y \nonumber\\ &\quad = \int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) b_{\tau,u}(x)b_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y-\int_0^{\infty} b_{\tau,u}(x)b_{-\tau,v}(x)\,{\rm d} x, \end{align} (50) where we used that   $\textbf{A}_{\sigma}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{A}_{\sigma} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{K}_{{\rm Ai},\sigma} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}-\textbf{1},$ on account of (48). 
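The operator manipulation behind (50) is, at bottom, the algebraic identity $$\textbf{A}(\textbf{1}-\textbf{A}^2)^{-1}\textbf{A}=(\textbf{1}-\textbf{A}^2)^{-1}-\textbf{1}$$, valid whenever $$\textbf{1}-\textbf{A}^2$$ is invertible. A finite-dimensional sketch, with a random symmetric matrix standing in for $$\textbf{A}_{\sigma}$$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = 0.1*(A + A.T)                 # symmetric with small norm, so I - A^2 is invertible
I = np.eye(n)
R = np.linalg.inv(I - A @ A)      # finite-dimensional analogue of (1 - K_{Ai,sigma})^{-1}

# A commutes with R, hence A R A = A^2 R = R - I
print(np.allclose(A @ R @ A, R - I))  # True
```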
Next, the third term in the right-hand side of (49) can be written as   \begin{align} &-\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \left[\textbf{A}_{\sigma} b_{\tau,u}\right](x)\tilde{b}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y\nonumber\\ &\quad= -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) b_{\tau,u}(x)\left[\textbf{A}_{\sigma}\tilde{b}_{-\tau,v}\right](y)\,{\rm d} x\,{\rm d} y, \end{align} (51) since the operators $$(\textbf {1}-\textbf {K}_{{\rm Ai},\sigma })^{-1}$$ and $$\textbf {A}_{\sigma }$$ commute. Inserting (50)–(51) for the second and third term in the right-hand side of (49), we obtain   \begin{align} C^{-1}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right)&= \lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right)\tilde{b}_{-\tau,v}\left(x\right){\rm d} x\nonumber \\ &\quad +\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) b_{\tau,u}\left(x\right)b_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y\nonumber \\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) b_{\tau,u}\left(x\right)\left[\textbf{A}_{\sigma}\tilde{b}_{-\tau,v}\right]\left(y\right){\rm d} x\,{\rm d} y\nonumber \\ &\quad +\lambda^{1/3}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right] \left(x\right)\left[\textbf{A}_{\sigma} \tilde{b}_{-\tau,v}\right]\left(y\right){\rm d} x\,{\rm d} y\nonumber\\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right]\left(x\right)b_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y, \end{align} (52) which is equivalent to the desired result (21). 
4.2 Tracy–Widom functions and their properties In some of the remaining proofs, we need the functions   \begin{align} \begin{split} Q\left(x\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right){\rm Ai}\left(y+\sigma\right){\rm d} y\\ P\left(x\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right){\rm Ai}'\left(y+\sigma\right){\rm d} y, \end{split} \end{align} (53) with the usual notation (16). Note that $$Q,P$$ depend on $$\sigma$$ but we omit this dependence from the notation. The functions $$Q,P$$ originate from the seminal paper of Tracy–Widom [28]. Below we list some of their properties. These properties can all be found in [28], see also [5, Section 3.8] for a textbook treatment. One should take into account that our function $$R(x,y)=R_{\sigma }(x,y)$$ equals the one in [28], [5, Section 3.8] at the shifted arguments $$x+\sigma$$, $$y+\sigma$$, and similarly for $$Q(x)$$ and $$P(x)$$. Lemma 4.1.
The functions $$Q,P$$ and $$R=R_{\sigma }$$ satisfy the differential equations   \begin{align} \left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)R\left(x,y\right) &= R\left(x,0\right)R\left(0,y\right)-Q\left(x\right)Q\left(y\right), \end{align} (54)  \begin{align} \frac{\partial}{\partial \sigma} R\left(x,y\right) &= -Q\left(x\right)Q\left(y\right), \end{align} (55)  \begin{align} Q'\left(x\right) &= P\left(x\right)+q R\left(x,0\right)- u Q\left(x\right), \end{align} (56)  \begin{align} P'\left(x\right) &= \left(x+\sigma-2v\right)Q\left(x\right)+p R\left(x,0\right)+uP\left(x\right), \end{align} (57) where   \begin{align} q&=Q\left(0\right) \end{align} (58)  \begin{align} p&=P\left(0\right) \end{align} (59)  \begin{align} u&=\int_{0}^{\infty} Q\left(x\right){\rm Ai}\left(x+\sigma\right){\rm d} x \end{align} (60)  \begin{align} v&=\int_{0}^{\infty} Q\left(x\right){\rm Ai}'\left(x+\sigma\right){\rm d} x =\int_{0}^{\infty} P\left(x\right){\rm Ai}\left(x+\sigma\right){\rm d} x. \end{align} (61) Note that $$q,p,u,v$$ are all functions of $$\sigma$$, although we do not show this in the notation. They satisfy the following differential equations with respect to $$\sigma$$ [28], [5, Section 3.8]   \begin{align} q' &= p-qu \end{align} (62)  \begin{align} p' &= \sigma q+pu-2qv \end{align} (63)  \begin{align} u' &= -q^2 \end{align} (64)  \begin{align} v' &= -pq. \end{align} (65) It is known that $$q=q(\sigma )$$ is the Hastings–McLeod solution to the Painlevé II equation (1)–(2). Moreover, $$u=u(\sigma )$$ is the Hamiltonian (38), and   $$2v = u^2-q^2.$$ (66) Finally, we establish the following lemma. Lemma 4.2. 
For any function $$b$$ on $$[0,\infty )$$ we have   $$\int_{0}^{\infty}Q\left(x\right)b\left(x\right){\rm d} x = \int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right){\rm Ai}\left(x+y+\sigma\right) b\left(y\right){\rm d} x\,{\rm d} y$$ (67) and   $$\int_0^{\infty}\int_{0}^{\infty}Q\left(x\right){\rm Ai}\left(x+y+\sigma\right)b\left(y\right){\rm d} x\,{\rm d} y =\int_0^{\infty}R\left(x,0\right) b\left(x\right){\rm d} x,$$ (68) with the usual notations $$R=R_{\sigma }$$ and (16). Proof. The lemma is rather obvious from the point of view of integral operators, but we include a proof for convenience. Recall the operator $$\textbf {A}_{\sigma }$$ defined in (47)–(48) and let $$\delta _0$$ denote the Dirac delta function at 0, in the sense that   $\int_0^\infty \delta_0\left(x\right) f\left(x\right){\rm d} x = f\left(0\right)$ for any function $$f$$ right continuous at 0. We have   \begin{align*} \int_{0}^{\infty}Q\left(x\right)b\left(x\right){\rm d} x &=\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma}\delta_0\right]\left(y\right) b\left(x\right){\rm d} x\,{\rm d} y \\ &=\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\delta_0\left(y\right) \left[\textbf{A}_{\sigma} b\right]\left(x\right){\rm d} x\,{\rm d} y, \end{align*} where we used that the operators $$\textbf {A}_{\sigma }$$ and $$\textbf {K}_{{\rm Ai},\sigma }$$ commute. This proves (67). 
Next,   \begin{align*} &\int_0^{\infty} \int_{0}^{\infty}Q\left(x\right){\rm Ai}\left(x+y+\sigma\right)b\left(y\right){\rm d} x\,{\rm d} y \\ &\quad =\int_0^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\left[\textbf{A}_{\sigma}\delta_0\right]\left(y\right) \left[\textbf{A}_{\sigma}b\right]\left(x\right){\rm d} x\,{\rm d} y \\ &\quad =\int_{0}^{\infty}\int_0^{\infty}R\left(x,y\right)\delta_0\left(y\right) b\left(x\right){\rm d} x\,{\rm d} y, \end{align*} where we used that   $\textbf{A}_{\sigma}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{A}_{\sigma} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{K}_{{\rm Ai},\sigma}=\textbf{R}_{{\rm Ai},\sigma}.$ This proves (68). □ 4.3 Proof of Theorem 2.6 We will prove Theorem 2.6 using the following lemmas. Lemma 4.3. The formulas (28) can be written in the equivalent ways   \begin{align} \hat{p}_1\left(z;\sigma,\tau\right) &=\tilde{\mathcal{A}}_{\tau,z}\left(0\right)+ \int_0^{\infty} R\left(x,0\right)\tilde{\mathcal{A}}_{\tau,z}\left(x\right){\rm d} x \end{align} (69)  \begin{align} &=\tilde{b}_{\tau,z}\left(0\right)-\lambda^{-1/6}\int_0^{\infty} Q\left(x\right)\mathcal{A}_{\tau,z}\left(x\right){\rm d} x \end{align} (70)  \begin{align} \hat{p}_2\left(z;\sigma,\tau\right) &= \mathcal{A}_{\tau,z}\left(0\right)+\int_0^{\infty} R\left(x,0\right) \mathcal{A}_{\tau,z}\left(x\right){\rm d} x \end{align} (71)  \begin{align} &= b_{\tau,z}\left(0\right)-\lambda^{1/6}\int_0^{\infty} Q\left(x\right)\tilde{\mathcal{A}}_{\tau,z}\left(x\right){\rm d} x, \end{align} (72) with the notations $$R=R_{\sigma }$$ and (53). Proof. Immediate from Lemma 4.2 and the definitions. □ Lemma 4.4. 
We have   \begin{align} \left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}b_{\tau,z}\left(x\right) &=\lambda^{-1/2}\frac{\partial}{\partial x} b_{\tau,z}\left(x\right), \end{align} (73)  \begin{align} \left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}\tilde{b}_{\tau,z}\left(x\right) &=\frac{\partial}{\partial x} \tilde{b}_{\tau,z}\left(x\right), \end{align} (74)  \begin{align} \left(1+\lambda^{-1/2}\right) \frac{\partial}{\partial\sigma}\mathcal{A}_{\tau,z}\left(x\right) &=\lambda^{-1/2}\frac{\partial}{\partial x}\mathcal{A}_{\tau,z}\left(x\right) +\lambda^{1/6} {\rm Ai}\left(x+\sigma\right)\tilde{b}_{\tau,z}\left(0\right). \end{align} (75) Proof. The first two formulas are obvious from the definitions (17)–(19). For the last formula, we calculate   \begin{align*} &\left[\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}-\lambda^{-1/2}\frac{\partial}{\partial x}\right]\mathcal{A}_{\tau,z}\left(x\right) \\ &\quad =\left[\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}-\lambda^{-1/2}\frac{\partial}{\partial x}\right] \left(b_{\tau,z}\left(x\right) -\lambda^{1/6}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) {\rm d} y\right) \\ &\quad = -\lambda^{1/6}\left[\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}-\lambda^{-1/2}\frac{\partial}{\partial x}\right] \left(\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) {\rm d} y\right) \\ &\quad = -\lambda^{1/6}\int_{0}^{\infty} \left({\rm Ai}'\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) + {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}'\left(y\right) \right) {\rm d} y\\ &\quad =\lambda^{1/6}{\rm Ai}\left(x+\sigma\right)\tilde{b}_{\tau,z}\left(0\right), \end{align*} where the second and third equalities use (73) and (74), respectively, and the last equality follows from integration by parts. This proves the lemma. □ Now we prove Theorem 2.6. 
With the help of (75), the derivative of (21) with respect to $$\sigma$$ becomes   \begin{align} &\left(1+\lambda^{-1/2}\right)^{2/3} \frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right)\nonumber\\ &\quad=\left(1+\lambda^{-1/2}\right)^{2/3}\lambda^{1/3} C \frac{\partial}{\partial \sigma}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right)\tilde{b}_{-\tau,v}\left(x\right){\rm d} x\nonumber \\ &\qquad +\lambda^{-1/2}\int_0^{\infty}\int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} \left(x,y\right)\left(\mathcal{A}_{\tau,u}'\left(x\right)\mathcal{A}_{-\tau,v}\left(y\right) +\mathcal{A}_{\tau,u}\left(x\right)\mathcal{A}_{-\tau,v}'\left(y\right)\right){\rm d} x\,{\rm d} y\nonumber \\ &\qquad + \lambda^{1/6}\tilde{b}_{\tau,u}\left(0\right)\int_0^{\infty} Q\left(y\right)\mathcal{A}_{-\tau,v}\left(y\right){\rm d} y + \lambda^{1/6}\tilde{b}_{-\tau,v}\left(0\right)\int_0^{\infty} Q\left(x\right) \mathcal{A}_{\tau,u}\left(x\right){\rm d} x\nonumber\\ &\qquad +\left(1+\lambda^{-1/2}\right)\int_0^{\infty}\int_0^{\infty} \left(\frac{\partial}{\partial\sigma} R\left(x,y\right)\right) \mathcal{A}_{\tau,u}\left(x\right) \mathcal{A}_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y, \end{align} (76) where we used the definition of $$Q$$ in (53). The first term in the right-hand side of (76) can be written as   $$\left(1+\lambda^{-1/2}\right)^{2/3}\lambda^{1/3} C \frac{\partial}{\partial \sigma}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right)\tilde{b}_{-\tau,v}\left(x\right){\rm d} x = -\lambda^{1/3}\tilde{b}_{\tau,u}\left(0\right)\tilde{b}_{-\tau,v}\left(0\right),$$ (77) on account of (74). 
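In more detail, since both factors in the integrand satisfy (74), the integrand in (77) is a total $$x$$-derivative:

```latex
(1+\lambda^{-1/2})\,\frac{\partial}{\partial \sigma}
  \left[\tilde{b}_{\tau,u}(x)\,\tilde{b}_{-\tau,v}(x)\right]
  =\frac{\partial}{\partial x}\left[\tilde{b}_{\tau,u}(x)\,\tilde{b}_{-\tau,v}(x)\right],
% integrate over x in [0,\infty); the boundary term at +\infty vanishes by Airy decay:
\frac{\partial}{\partial \sigma}\int_0^{\infty}\tilde{b}_{\tau,u}(x)\,\tilde{b}_{-\tau,v}(x)\,{\rm d} x
  =-(1+\lambda^{-1/2})^{-1}\,\tilde{b}_{\tau,u}(0)\,\tilde{b}_{-\tau,v}(0).
```

Multiplying by $$(1+\lambda^{-1/2})^{2/3}\lambda^{1/3}C$$ and using $$C=(1+\lambda^{-1/2})^{1/3}$$ yields exactly the right-hand side of (77).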
The second term in the right-hand side of (76) can be written as   \begin{align} &\lambda^{-1/2}\int_0^{\infty}\!\!\!\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \left(\mathcal{A}_{\tau,u}'(x)\mathcal{A}_{-\tau,v}(y)+\mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}'(y)\right){\rm d} x\,{\rm d} y \nonumber\\ &\quad =-\lambda^{-1/2}\left(\int_0^{\infty}\int_0^{\infty}\left[\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)R(x,y)\right] \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y){\rm d} x\,{\rm d} y+\mathcal{A}_{\tau,u}(0)\mathcal{A}_{-\tau,v}(0)\right.\nonumber\\ &\quad \left. +\mathcal{A}_{\tau,u}(0)\int_0^{\infty}R(0,y)\mathcal{A}_{-\tau,v}(y)\,{\rm d} y +\mathcal{A}_{-\tau,v}(0)\int_0^{\infty}R(x,0)\mathcal{A}_{\tau,u}(x)\,{\rm d} x\right), \end{align} (78) where we used (16) and integration by parts. Finally, we observe that   $$\left[ (1+\lambda^{-1/2})\frac{\partial}{\partial\sigma}-\lambda^{-1/2}\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)\right]R(x,y) = -\lambda^{-1/2} R(x,0)R(0,y)-Q(x)Q(y),$$ (79) on account of (54)–(55). 
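For completeness, (79) is the following linear combination of (54) and (55):

```latex
\left[(1+\lambda^{-1/2})\frac{\partial}{\partial\sigma}
      -\lambda^{-1/2}\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)\right]R(x,y)
% insert (55) into the first term and (54) into the second:
  = -(1+\lambda^{-1/2})\,Q(x)Q(y)
    -\lambda^{-1/2}\bigl(R(x,0)R(0,y)-Q(x)Q(y)\bigr)
  = -\lambda^{-1/2}R(x,0)R(0,y)-Q(x)Q(y).
```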
Inserting (77)–(79) in the right-hand side of (76), we obtain   \begin{align} &\left(1+\lambda^{-1/2}\right)^{2/3} \frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) \nonumber \\ &\quad =-\lambda^{1/3}\left(\tilde{b}_{\tau,u}\left(0\right)-\lambda^{-1/6} \int_0^{\infty}Q\left(x\right)\mathcal{A}_{\tau,u}\left(x\right){\rm d} x\right)\left(\tilde{b}_{-\tau,v}\left(0\right)-\lambda^{-1/6} \int_0^{\infty}Q\left(x\right)\mathcal{A}_{-\tau,v}\left(x\right){\rm d} x\right)\nonumber \\ &\qquad -\lambda^{-1/2}\left(\mathcal{A}_{\tau,u}\left(0\right)+ \int_0^{\infty}R\left(x,0\right)\mathcal{A}_{\tau,u}\left(x\right){\rm d} x\right) \left(\mathcal{A}_{-\tau,v}\left(0\right)+\int_0^{\infty}R\left(0,x\right)\mathcal{A}_{-\tau,v}\left(x\right){\rm d} x\right)\nonumber \\ &\quad =-\lambda^{1/3}\hat{p}_1\left(u;\sigma,\tau\right)\hat{p}_1\left(v;\sigma,-\tau\right) -\lambda^{-1/2}\hat{p}_2\left(u;\sigma,\tau\right)\hat{p}_2\left(v;\sigma,-\tau\right), \end{align} (80) on account of (70)–(71). This proves (26). Finally we prove (27). From [5, Section 3.8] we have the estimate   $$|R_{\sigma}\left(x,y\right)|< C_0 \,{\rm e}^{-x-y-2\sigma}$$ (81) for all $$x,y,\sigma >0$$, with $$C_0>0$$ a certain constant. We also have the asymptotics for the Airy function   $${\rm Ai}\left(x\right)\sim \exp\left(-\tfrac 23 x^{3/2}\right)/\left(2\sqrt{\pi}x^{1/4}\right),\quad x\to +\infty.$$ (82) Consequently the kernel $$\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ in (21), (16) goes to zero (at a very fast rate) for $$\sigma \to +\infty$$. Integration of (26) then yields (27). This proves Theorem 2.6. 4.4 Proof of Theorem 2.4 Let the parameters $$r_1,r_2,s_1,s_2$$ be given by (22). As already observed, the expressions for $$C,\sigma$$ in (32) and (19), (17) are equal under this identification. 
Similarly, the expressions for $$p_1,p_2$$ in (36) and $$\hat {p}_1,\hat {p}_2$$ in (28) are related by   $$p_j\left(z\right) = \sqrt{2\pi} r_j^{1/6} \exp\left(r_j^4 \tau\left(\Sigma+\frac 23 \tau^2\right)\right)\hat{p}_j\left(z\right),\quad j=1,2.$$ (83) Note that the exponential factor in (83) cancels out in (24). Now observe that the formulas for $$s_1,s_2$$ in (22) are of the form $$s_j=\sigma _j s$$ with   $$s:=\left(\Sigma+\tau^2\right)/2,\quad \sigma_1:=\lambda^{3/4},\quad \sigma_2:=1.$$ (84) By (17) we then have   $$\frac{\partial}{\partial s}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) = 2\sqrt{\lambda}\left(1+\lambda^{-1/2}\right)^{2/3}\frac{\partial}{\partial \sigma}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right),$$ (85) where we consider $$\tau$$ to be fixed. Theorem 2.4 follows from Theorems 2.5 and 2.6 and the above observations. 5 Proof of Theorem 2.9 In this section we prove Theorem 2.9 on the Airy resolvent formulas for the RH matrix. The outline of the proof is as follows: In Sections 5.2 and 5.3 we will show that the Airy resolvent formulas in (35) satisfy the required differential equations for the columns of the RH matrix, as given in Proposition 2.11. The latter proposition (which will be established in Section 6) then shows that the Airy resolvent formulas correspond to certain linear combinations of the columns of the RH matrix. In Section 5.4 we will conclude the proof by deriving the appropriate asymptotics. 5.1 Preparations We start with a few lemmas that will be useful in the calculations in Section 5.2. Lemma 5.1.
The functions $$b_z(x)$$ and $$\tilde {b}_z(x)$$ in (33) satisfy the differential equations   $$b_z'\left(x\right) = C\frac{\partial}{\partial z} b_z\left(x\right),\quad \tilde{b}_z'\left(x\right) = -C\frac{\partial}{\partial z} \tilde{b}_z\left(x\right),$$ (86) and   \begin{align} r_2^{-2}\frac{\partial^2}{\partial z^2} b_{z}\left(x\right)+2\tau \frac{\partial}{\partial z} b_z\left(x\right) &= \left(z+Cx+2\frac{s_2}{r_2}-r_2^{2}\tau^2 \right)b_z\left(x\right), \end{align} (87)  \begin{align} r_1^{-2}\frac{\partial^2}{\partial z^2} \tilde{b}_{z}\left(x\right)-2\tau \frac{\partial}{\partial z} \tilde{b}_z\left(x\right) &= \left(-z+Cx+2\frac{s_1}{r_1}-r_1^{2}\tau^2 \right)\tilde{b}_z\left(x\right). \end{align} (88) Proof. Equation (86) is obvious. The other equations follow from a straightforward calculation using the Airy differential equation $${\rm Ai}''(x)=x{\rm Ai}(x)$$. □ Lemma 5.2. The function $$\mathcal {A}_z(x)$$ in (37) satisfies the differential equations   $$\frac{\partial}{\partial z} \mathcal{A}_{z}\left(x\right) = C^{-1}\left(\mathcal{A}_{z}'\left(x\right) - D{\rm Ai}\left(x+\sigma\right)\tilde{b}_{z}\left(0\right)\right)$$ (89) and   \begin{align} r_2^{-2}\frac{\partial^2}{\partial z^2} \mathcal{A}_{z}(x)+2\tau \frac{\partial}{\partial z} \mathcal{A}_z(x) &= \left(z+Cx+2\frac{s_2}{r_2}-r_2^{2}\tau^2 \right)\mathcal{A}_{z}(x) \nonumber\\ &\quad + C D\left({\rm Ai}(x+\sigma)\tilde{b}_{z}'(0) - {\rm Ai}'(x+\sigma)\tilde{b}_{z}(0)\right). \end{align} (90) Proof. Equation (89) follows from the definition of $$\mathcal {A}_z$$, (86) and integration by parts. Now we check the formula (90). 
From (37) we have   \begin{align*} r_2^{-2}\frac{\partial^2}{\partial z^2}\mathcal{A}_{z}\left(x\right) &=r_2^{-2}\frac{\partial^2}{\partial z^2}b_{z}\left(x\right) -r_2^{-2}D\int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right) \frac{\partial^2}{\partial z^2} \tilde{b}_{z}\left(y\right){\rm d} y \\ &= r_2^{-2}\frac{\partial^2}{\partial z^2} b_{z}\left(x\right) + r_1^{-2}D\int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right)\frac{\partial^2}{\partial z^2} \tilde{b}_{z}\left(y\right){\rm d} y\\ &\quad - CD\int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{z}''\left(y\right){\rm d} y \end{align*} where in the second equality we used $$r_1^{-2}+r_2^{-2}=C^3$$ and (86). Hence   \begin{align} r_2^{-2}\frac{\partial^2}{\partial z^2}\mathcal{A}_{z}(x)+2\tau \frac{\partial}{\partial z}\mathcal{A}_{z}(x) & =\left(r_2^{-2}\frac{\partial^2}{\partial z^2} b_{z}(x)+2\tau \frac{\partial}{\partial z}b_{z}(x)\right)\nonumber\\ &\quad+D\int_0^{\infty} {\rm Ai}(x+y+\sigma)\left(r_1^{-2}\frac{\partial^2}{\partial z^2} \tilde{b}_{z}(y)-2\tau \frac{\partial}{\partial z} \tilde{b}_{z}(y)\right){\rm d} y\nonumber\\ &\quad -CD\int_0^{\infty} {\rm Ai}(x+y+\sigma)\tilde{b}_{z}''(y)\,{\rm d} y. \end{align} (91) In the first two terms in the right-hand side of (91) we use the differential equations (87)–(88), and in the third term we integrate by parts twice and subsequently use the Airy differential equation $${\rm Ai}''(x)=x{\rm Ai}(x)$$. The lemma then follows from a straightforward calculation, taking into account that   $$\left(-z+Cy+2\frac{s_1}{r_1}-r_1^2\tau^2 \right)-C\left(x+y+\sigma\right) = -z-Cx-2\frac{s_2}{r_2}+r_2^2\tau^2$$ (92) thanks to (32). □ Lemma 5.3. 
The formulas (36) can be written in the equivalent ways   \begin{align} p_1\left(z\right) & =\tilde{\mathcal{A}}_{z}\left(0\right)+ \int_{0}^{\infty}R\left(x,0\right)\tilde{\mathcal{A}}_{z}\left(x\right){\rm d} x \end{align} (93)  \begin{align} &=\tilde{b}_{z}\left(0\right)-D^{-1}\int_{0}^{\infty}Q\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x, \end{align} (94)  \begin{align} p_2\left(z\right) &=\mathcal{A}_{z}\left(0\right)+ \int_{0}^{\infty}R\left(x,0\right)\mathcal{A}_{z}\left(x\right){\rm d} x \end{align} (95)  \begin{align} &= b_z\left(0\right)-D\int_{0}^{\infty} Q\left(x\right)\tilde{\mathcal{A}}_{z}\left(x\right){\rm d} x. \end{align} (96) Proof. Immediate from Lemma 4.2 and the definitions. □ 5.2 Differential equation for $$p_j$$ In this section we check that the expressions for $$p_1(z)$$, $$p_2(z)$$ in (94)–(95) satisfy the differential equation (39) (with $$m_j:=p_j$$). From (94) we have   $$p_1'\left(z\right) = \frac{\partial}{\partial z}\left[ \tilde{b}_{z}\left(0\right)\right]-D^{-1}\int_{0}^{\infty}Q\left(x\right)\frac{\partial}{\partial z}\left[\mathcal{A}_z\left(x\right)\right]{\rm d} x.$$ (97) We calculate the second term   \begin{align} & \int_{0}^{\infty}Q\left(x\right)\frac{\partial}{\partial z}\left[\mathcal{A}_z\left(x\right)\right]{\rm d} x\nonumber\\ &\quad =C^{-1}\left(\int_{0}^{\infty}Q\left(x\right)\mathcal{A}_z'\left(x\right){\rm d} x -D u \tilde{b}_{z}\left(0\right) \right) \nonumber\\ &\quad =-C^{-1}\left(\int_{0}^{\infty} Q'\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x+q\mathcal{A}_z\left(0\right)+D u\tilde{b}_{z}\left(0\right)\right) \nonumber\\ &\quad = -C^{-1}\left(\int_{0}^{\infty} \left[P\left(x\right)+qR\left(x,0\right)-uQ\left(x\right)\right]\mathcal{A}_z\left(x\right){\rm d} x+q\mathcal{A}_z\left(0\right)+D u\tilde{b}_{z}\left(0\right)\right)\nonumber \\ &\quad = -C^{-1}\left(D u p_1\left(z\right)+q p_2\left(z\right)+\int_{0}^{\infty} P\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x\right) \end{align} (98) where 
the first equality uses (89) and (60), the second one uses integration by parts and (58), the third one uses (56), and the fourth equality uses (94)–(95). From (97)–(98) and $$r_1^{-2}+r_2^{-2}=C^3$$, we get   \begin{align} r_1^{-2} p_1'(z) &=r_1^{-2}\frac{\partial}{\partial z}[\tilde{b}_{z}(0)] + r_2^{-2}D^{-1}\int_{0}^{\infty}Q(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x\nonumber\\ &\quad +C^2D^{-1}\left(D u p_1(z)+q p_2(z)+\int_{0}^{\infty} P(x)\mathcal{A}_z(x){\rm d} x\right). \end{align} (99) By differentiating (99) with respect to $$z$$, we get   \begin{align*} &r_1^{-2} p_1''(z)-C^2D^{-1}q p_2'(z)-2\tau p_1'(z)\\ &\quad = r_1^{-2}\frac{\partial^2}{\partial z^2}\left[ \tilde{b}_{z}(0)\right]-2\tau \frac{\partial}{\partial z}\left[ \tilde{b}_{z}(0)\right] + D^{-1}\int_{0}^{\infty} Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[ \mathcal{A}_z(x)\right]\right){\rm d} x \\ &\qquad +C^2D^{-1} \left(Du p_1'(z)+\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x\right), \end{align*} where the last term in the left-hand side was expanded using (97). Equivalently,   \begin{align} &r_1^{-2} p_1''(z)-C^2D^{-1}q p_2'(z)-2\tau p_1'(z)\nonumber\\ &\quad = r_1^{-2}\frac{\partial^2}{\partial z^2}\left[ \tilde{b}_{z}(0)\right]-2\tau \frac{\partial}{\partial z}\left[ \tilde{b}_{z}(0)\right]\nonumber\\ &\qquad + CD^{-1}\left(C^{-1}\int_{0}^{\infty} Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[ \mathcal{A}_z(x)\right]\right){\rm d} x\right. \nonumber\\ &\qquad \left. +CD u p_1'(z)+C\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x\right). \end{align} (100) We will calculate each of the terms in the right-hand side of (100). 
We start with   \begin{align*} C\int_{0}^{\infty} P\left(x\right)\frac{\partial}{\partial z}\left[\mathcal{A}_z\left(x\right)\right]{\rm d} x &=\int_{0}^{\infty} P\left(x\right) \mathcal{A}_{z}'\left(x\right){\rm d} x - D v \tilde{b}_{z}\left(0\right) \\ &=-\left(\int_{0}^{\infty} P'\left(x\right) \mathcal{A}_{z}\left(x\right){\rm d} x +p \mathcal{A}_{z}\left(0\right) +D v\tilde{b}_{z}\left(0\right) \right) \end{align*} where the first equality follows from (89) and (61), and the second equality uses integration by parts and (59). Consequently,   \begin{align} C\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x &= -\int_{0}^{\infty} (x+\sigma)Q(x) \mathcal{A}_{z}(x){\rm d} x \nonumber\\ &\quad -u\int_{0}^{\infty} P(x) \mathcal{A}_{z}(x){\rm d} x-p p_2(z) -2D v p_1(z)+D v \tilde{b}_{z}(0) \end{align} (101) by virtue of (57) and (94)–(95). Next we calculate the third term in the right-hand side of (100),   \begin{align} &C^{-1}\int_{0}^{\infty}Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[ \mathcal{A}_z(x)\right]\right){\rm d} x \nonumber\\ &\quad =C^{-1}\int_{0}^{\infty}\left(z+Cx+2\frac{s_2}{r_2}-r_2^{2}\tau^2 \right) Q(x) \mathcal{A}_{z}(x)\,{\rm d} x + D(u \tilde{b}_{z}'(0) - v \tilde{b}_{z}(0)), \end{align} (102) on account of (90) and (60)–(61). 
By adding (101)–(102) and canceling terms we get   \begin{align} &C\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x +C^{-1}\int_{0}^{\infty}Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]\right){\rm d} x \nonumber\\ &\quad =-u\int_{0}^{\infty} P(x) \mathcal{A}_{z}(x){\rm d} x + C^{-1}\left(z- 2\frac{s_1}{r_1}+r_1^2\tau^2 \right)\int_{0}^{\infty}Q(x) \mathcal{A}_{z}(x)\,{\rm d} x\nonumber\\ &\qquad + D u \tilde{b}_{z}'(0)-2D v p_1(z)-p p_2(z), \end{align} (103) where the factor between brackets in front of the second integral in the right-hand side was obtained via (92). Finally, we calculate the fourth term in the right-hand side of (100),   \begin{align} CD u p_1'\left(z\right) &= CD u \frac{\partial}{\partial z}[\tilde{b}_{z}\left(0\right)]+D u^2 p_1\left(z\right)+uq p_2\left(z\right)+u\int_{0}^{\infty} P\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x \nonumber\\ &=-D u \tilde{b}_{z}'\left(0\right)+D u^2 p_1\left(z\right)+uq p_2\left(z\right)+u\int_{0}^{\infty} P\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x \end{align} (104) where the first equality follows from (97)–(98) and the second one from (86). 
Inserting (103)–(104) in the right-hand side of (100) and canceling terms we get   \begin{align*} & r_1^{-2}p_1''\left(z\right)-C^2D^{-1}q p_2'\left(z\right)-2\tau p_1'\left(z\right) \\ &\quad =\left(r_1^{-2}\frac{\partial^2}{\partial z^2}\left[\tilde{b}_{z}\left(0\right)\right]-2\tau \frac{\partial}{\partial z}\left[\tilde{b}_{z}\left(0\right)\right]\right) +D^{-1}\left(z- 2\frac{s_1}{r_1}+ r_1^2\tau^2\right)\int_{0}^{\infty} Q\left(x\right)\mathcal{A}_{z}\left(x\right){\rm d} x\\ &\qquad +C [u^2-2v] p_1\left(z\right)+CD^{-1}[uq-p] p_2\left(z\right) \\ &\quad =\left[-z+ 2\frac{s_1}{r_1}-r_1^2\tau^2 +C \left(u^2- 2v\right)\right] p_1\left(z\right)+CD^{-1}[uq-p] p_2\left(z\right)\\ &\quad = \left[-z+ 2\frac{s_1}{r_1}-r_1^2\tau^2 +C q^2\right] p_1\left(z\right)-\left[CD^{-1} q'\right] p_2\left(z\right) \end{align*} where the second equality follows from (94) and (88), and the last equality uses (62) and (66). We have established the desired differential equation (39). 5.3 Other differential equations Denote by $$\widehat N_{j,k}(z)$$ the right-hand side of the formula for $$\tilde {M}_{j,k}(z)$$ in (35). Let the vectors $$\textbf {p}(z),\textbf {m}(z)$$ have entries   $p_j\left(z\right) =\widehat N_{j,1}\left(z\right)+\widehat N_{j,2}\left(z\right),\quad m_j\left(z\right)=\widehat N_{j,1}\left(z\right)-\widehat N_{j,2}\left(z\right),$ for $$j=1,\ldots ,4$$. For $$\textbf {p}(z)$$ this is compatible with (36). We have already shown in Section 5.2 that $$\textbf {p}(z)$$ satisfies the differential equation (39) (with $$m_j:=p_j$$). We claim that the same statement holds for $$\textbf {m}(z)$$. This can be shown by going through the proofs in Sections 5.1–5.2 again and replacing the appropriate plus signs by minus signs and vice versa. We leave this to the reader. Summarizing, both $$\textbf {p}(z)$$ and $$\textbf {m}(z)$$ satisfy the differential equation (39). By symmetry they also satisfy the differential equation (40). Finally, (41)–(42) is valid by construction. 
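The differential equations (39)–(40) satisfied by $$\textbf {p}(z)$$ and $$\textbf {m}(z)$$ involve the Hastings–McLeod solution $$q$$ of Painlevé II through its values $$q(\sigma)$$, $$q'(\sigma)$$. As a purely numerical aside (a sketch, not part of the proof; the matching point $$x_0=8$$ and the tolerances are our choices), $$q$$ can be computed by integrating $$q''=xq+2q^3$$ backward from its defining Airy asymptotics $$q(x)\sim \mathrm{Ai}(x)$$, $$x\to+\infty$$, which is a numerically stable direction of integration on $$[0,\infty)$$:

```python
from scipy.integrate import solve_ivp
from scipy.special import airy

def painleve_ii(x, y):
    """Painlevé II, q'' = x q + 2 q^3, as a first-order system y = (q, q')."""
    q, qp = y
    return [qp, x * q + 2.0 * q**3]

# Hastings-McLeod boundary data: q(x) ~ Ai(x) as x -> +infinity,
# imposed at a moderately large point x0 and integrated backward to 0.
x0 = 8.0
ai0, aip0, _, _ = airy(x0)
sol = solve_ivp(painleve_ii, [x0, 0.0], [ai0, aip0], rtol=1e-10, atol=1e-12)
q0 = sol.y[0, -1]

# The nonlinearity 2 q^3 > 0 pushes q above Ai, so q(0) comes out
# slightly larger than Ai(0) = 0.3550...
print(q0, airy(0.0)[0])
```

Continuing into $$x<0$$ is much more delicate, since the Hastings–McLeod solution is a separatrix: nearby solutions of Painlevé II either blow up or become oscillatory.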
Proposition 2.11(b) implies that $$\textbf {p}(z)$$ and $$\textbf {m}(z)$$ are fixed linear combinations of the columns of $$\tilde {M}(z)$$. By linearity, the same holds for the $$\widehat N_{j,k}(z)$$. 5.4 Asymptotics In view of Section 5.3, Theorem 2.9 will be proved if we can show that the expressions for $$\tilde {M}_{j,1}(z)$$ and $$\tilde {M}_{j,2}(z)$$ in (35) and (41)–(42) satisfy the required asymptotics for $$z\to \infty$$ in the RH problem 2.1. It will be enough to prove the asymptotics for the second column $$\tilde {M}_{j,2}(z)$$. Moreover, it is sufficient to let $$z$$ go to infinity along the positive real line. Indeed, first observe that the second columns of $$M(z)$$ and $$\tilde {M}(z)$$ are equal if $$\Re z>0$$. Furthermore, the second column of $$M(z)$$ is recessive with respect to the other columns as $$z\to +\infty$$, due to (7). So, if we can prove that the expressions for $$\tilde {M}_{j,2}(z)$$ in (35) share this same recessive asymptotic behavior for $$z\to +\infty$$, then we are done. The proof of the above asymptotics is now an easy consequence of (81)–(82). This ends the proof of Theorem 2.9. 6 Proof of Proposition 2.11 6.1 Painlevé formulas for the residue matrix $$M_1$$ Let the parameters $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed. In the proof of Proposition 2.11 we will need Painlevé formulas for the entries of the “residue” matrix $$M_1=M_1(r_1,r_2,s_1,s_2,\tau )$$ in (7). Write this matrix in entrywise form as   $$M_1 =: \begin{pmatrix} a & b & ic & id \\ - \tilde{b} & - \tilde{a} & i\tilde{d} & i \tilde{c} \\ i e & i \tilde{f} & -\alpha & \tilde{\beta} \\ i f & i\tilde{e} & -\beta & \tilde{\alpha} \end{pmatrix}$$ (105) for certain numbers $$a, \tilde {a}, b, \tilde {b},c,\tilde {c},\ldots$$ that depend on $$r_1, r_2, s_1, s_2,\tau$$. We will sometimes write $$a(r_1,r_2,s_1,s_2,\tau )$$, $$b(r_1,r_2,s_1,s_2,\tau )$$, etc., to denote the dependence on the parameters. 
In the symmetric case $$r_1=r_2$$, $$s_1=s_2$$ we are allowed to drop all the tildes from (105), while in the case $$\tau =0$$ we can replace all the Greek letters in (105) by their Roman counterparts and put $$\tilde {d}=d$$ and $$\tilde {f}=f$$. These are special instances of the next lemma. Lemma 6.1 (Symmetry relations). Let $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed. The 16 entries $$x=x(r_1,r_2,s_1,s_2,\tau )$$ of the matrix (105) are all real-valued and they satisfy the symmetry relations   $$x\left(r_1,r_2,s_1,s_2,\tau\right) = \tilde{x}\left(r_2,r_1,s_2,s_1,\tau\right),$$ (106) for any $$x=a,b,c,d,e,f,\alpha ,\beta$$, and   $$x\left(r_1,r_2,s_1,s_2,\tau\right) = \chi\left(r_1,r_2,s_1,s_2,-\tau\right),$$ (107) for any $$x=a,b,\tilde {a},\tilde {b},c,\tilde {c},d,e,\tilde {e},f$$, where we write $$\chi =\alpha ,\beta ,\tilde {\alpha },\tilde {\beta },c,\tilde {c},\tilde {d},e,\tilde {e},\tilde {f}$$, respectively. The proof of Lemma 6.1 follows from Section 6.2. Now we relate the entries of the matrix $$M_1$$ to the Hastings–McLeod solution $$q(x)$$ of Painlevé II and the Hamiltonian $$u(x)$$ in (38). The next theorem was proved for the special case $$\tau =0$$ in [14] and in the symmetric setting $$r_1=r_2$$, $$s_1=s_2$$ in [12, 15]. In the general case, we have the extra exponential factor $$D$$ in (32). Theorem 6.2 (Painlevé formulas). Let the parameters $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed. 
The entries in the top right $$2\times 2$$ block of (105) are given by   \begin{align} d &= \left(r_2CD\right)^{-1} q\left(\sigma\right) \end{align} (108)  \begin{align} \tilde{d} &= \left(r_1C\right)^{-1}D q\left(\sigma\right) \end{align} (109)  \begin{align} c &= r_1^{-1}\left(s_1^2-C^{-1} u\left(\sigma\right)\right) \end{align} (110)  \begin{align} \tilde{c} &=r_2^{-1}\left(s_2^2-C^{-1}u\left(\sigma\right)\right) \end{align} (111) where $$q$$ is the Hastings–McLeod solution to Painlevé II (1)–(2), $$u$$ is the Hamiltonian (38), and with the constants $$C,D,\sigma$$ given by (32). Moreover, some of the other entries in (105) are given by   \begin{align} b &=\left(\tilde{c}+\tau r_2\right) d -\left(r_2^{2}C^{2}D\right)^{-1} q'\left(\sigma\right) \end{align} (112)  \begin{align} \tilde{b} &=\left(c+\tau r_1\right) \tilde{d} -\left(r_1^{2}C^{2}\right)^{-1}D q'\left(\sigma\right) \end{align} (113)  \begin{align} \beta &= \left(\tilde{c}-\tau r_2\right) \tilde{d}-\left(r_1r_2C^{2}\right)^{-1}D q'\left(\sigma\right) \end{align} (114)  \begin{align} \tilde{\beta} &= \left(c-\tau r_1\right) d-\left(r_1r_2C^{2}D\right)^{-1}q'\left(\sigma\right) \end{align} (115) and   \begin{align} r_1f &= -\frac{r_2}{r_1^2+r_2^2} \frac{\partial d}{\partial\tau}+\left(-r_1 c-r_2 \tilde{c}+r_1^2\tau +r_2^2\tau\right) b -r_1 d^2\tilde{d}+r_2 \tilde{c}^2 d -2s_2 d \end{align} (116)  \begin{align} r_2\tilde{f} &= -\frac{r_1}{r_1^2+r_2^2} \frac{\partial \tilde{d}}{\partial\tau} +\left(-r_1 c-r_2 \tilde{c}+r_1^2\tau+r_2^2\tau\right)\tilde{b}-r_2\tilde{d}^2 d+r_1 c^2 \tilde{d}-2s_1\tilde{d}. \end{align} (117) Theorem 6.2 is proved in Section 6.5. 6.2 Symmetry relations For further use, we collect some elementary results concerning symmetry. Lemma 6.3 (Symmetry relations). 
For any fixed $$r_1, r_2,s_1,s_2,\tau$$, the RH matrix $$M$$ satisfies the symmetry relations   $$\overline{M\left(\overline{z};r_1,r_2,s_1,s_2,\tau\right)} = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix} M\left(z;\overline{r_1},\overline{r_2},\overline{s_1},\overline{s_2},\overline{\tau}\right) \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix},$$ (118) where the bar denotes complex conjugation,   $$M^{-{\rm T}}\left(z;r_1,r_2,s_1,s_2,\tau\right) = K^{-1} M\left(z;r_1,r_2,s_1,s_2,-\tau\right) K,$$ (119) where the superscript $${-\hbox {T}}$$ denotes the inverse transpose, and finally   $$M\left(-z; r_1, r_2, s_1, s_2,\tau\right) =\begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix}M\left(z; r_2, r_1, s_2, s_1,\tau\right) \begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix},$$ (120) where we denote   $$J=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad K = \begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}.$$ (121) Proof. This follows as in [14, Section 5.1]. More precisely, one easily checks that the left- and right-hand sides of (118) satisfy the same RH problem. Then (118) follows from the uniqueness of the solution to this RH problem. The same argument applies to (119) and (120). □ Corollary 6.4. For any fixed $$r_1, r_2,s_1,s_2,\tau$$, the residue matrix $$M_1$$ in (7), (105) satisfies the symmetry relations   \begin{align} \overline{M_1\left(r_1,r_2,s_1,s_2,\tau\right)} &= \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix} M_1\left(\overline{r_1},\overline{r_2},\overline{s_1},\overline{s_2},\overline{\tau}\right) \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \end{align} (122)  \begin{align} M_1^{T}\left(r_1,r_2,s_1,s_2,\tau\right) &= -K^{-1} M_1\left(r_1,r_2,s_1,s_2,-\tau\right) K, \end{align} (123)  \begin{align} M_1\left(r_1, r_2, s_1, s_2,\tau\right) &=-\begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix} M_1\left(r_2, r_1, s_2, s_1,\tau\right) \begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix}, \end{align} (124) with the notations $$J,K$$ in (121). 
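The interplay between (124) and the entrywise notation (105) is exactly what produces the swap rule (106). This bookkeeping can be verified symbolically; in the sketch below the symbol names are ours (`at` stands for $$\tilde a$$, `al` for $$\alpha$$, `bet` for $$\tilde\beta$$, and a trailing `p` marks an entry evaluated at the swapped parameters $$(r_2,r_1,s_2,s_1,\tau)$$):

```python
import sympy as sp

i = sp.I
names = ['a', 'at', 'b', 'bt', 'c', 'ct', 'd', 'dt',
         'e', 'et', 'f', 'ft', 'al', 'alt', 'be', 'bet']

def pattern(s):
    # The entrywise form (105), built from a dict of entry symbols.
    return sp.Matrix([
        [ s['a'],    s['b'],    i*s['c'],  i*s['d']],
        [-s['bt'],  -s['at'],   i*s['dt'], i*s['ct']],
        [ i*s['e'],  i*s['ft'], -s['al'],  s['bet']],
        [ i*s['f'],  i*s['et'], -s['be'],  s['alt']]])

primed = {n: sp.Symbol(n + 'p') for n in names}  # entries at (r2, r1, s2, s1, tau)

J = sp.Matrix([[0, 1], [1, 0]])
S = sp.diag(J, -J)                               # the conjugating matrix in (124)

# Right-hand side of (124): -diag(J,-J) * M1(r2,r1,s2,s1,tau) * diag(J,-J).
rhs = -S * pattern(primed) * S

# (106) predicts the pattern (105) with each entry x replaced by the tilde
# partner of the corresponding primed entry (a <-> at, al <-> alt, ...).
tilde = {'a': 'at', 'b': 'bt', 'c': 'ct', 'd': 'dt', 'e': 'et', 'f': 'ft',
         'al': 'alt', 'be': 'bet'}
tilde.update({v: k for k, v in list(tilde.items())})
expected = pattern({n: primed[tilde[n]] for n in names})
assert sp.simplify(rhs - expected) == sp.zeros(4, 4)
```

The same mechanical check applies to (122) and (123), which yield the realness of the entries and the relation (107), respectively.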
Lemma 6.1 is an immediate consequence of Corollary 6.4. 6.3 Lax system To the RH matrix $$M(z)$$ there is associated a Lax system of differential equations   $$\frac{\partial}{\partial z}M =UM,\quad \frac{\partial}{\partial s}M =VM,\quad \frac{\partial}{\partial \tau}M =WM,$$ (125) for certain coefficient matrices $$U,V,W$$. These matrices were obtained in the symmetric case $$r_1=r_2$$, $$s_1=s_2$$ in [12, Section 5.3, 15] and for $$\tau =0$$ in [14, Section 5.2]. We will consider the general nonsymmetric case. To take derivatives with respect to $$s_1$$ or $$s_2$$, we again parameterize $$s_j=\sigma _j s$$ with $$\sigma _1,\sigma _2$$ fixed and $$s$$ variable, as in (23). Lemma 6.5. In the general nonsymmetric setting, with the parametrization $$s_1=\sigma _1 s$$, $$s_2=\sigma _2 s$$, the coefficient matrices $$U,V,W$$ in (125) take the form   \begin{align} U&= \begin{pmatrix} -r_1 c+r_1^2\tau & r_2 d & r_1 i & 0 \\ -r_1 \tilde{d} & r_2 \tilde{c}-r_2^2\tau & 0 & r_2 i \\ \left(r_1 c^2-r_2 d\tilde{d}-2s_1+r_1z\right)i & -\left(r_1 b+r_2\tilde{\beta}\right) i & r_1 c+r_1^2\tau & r_1 d \\ -\left(r_1\beta+r_2\tilde{b}\right) i & \left(r_2\tilde{c}^2-r_1 d\tilde{d}-2s_2-r_2z\right)i & -r_2\tilde{d} & -r_2\tilde{c}-r_2^2\tau \end{pmatrix}, \end{align} (126)  \begin{align} V&= 2\begin{pmatrix} \sigma_1 c & \sigma_2 d & -\sigma_1 i & 0 \\ \sigma_1\tilde{d} & \sigma_2\tilde{c} & 0 & \sigma_2 i \\ \sigma_1\left(-c^2+\dfrac{r_2}{r_1} d\tilde{d}+\dfrac{\sigma_1}{r_1}s-z\right)i & \left(\sigma_1 b-\sigma_2\tilde{\beta}\right)i & -\sigma_1 c & -\sigma_1 d \\ \left(\sigma_1\beta-\sigma_2\tilde{b}\right)i & \sigma_2\left(\tilde{c}^2-\dfrac{r_1}{r_2}d\tilde{d}-\dfrac{\sigma_2}{r_2} s-z\right)i & -\sigma_2 \tilde{d} & -\sigma_2 \tilde{c} \end{pmatrix}, \end{align} (127)  \begin{align} W& = \left(r_1^2+r_2^2\right)\begin{pmatrix} \dfrac{r_1^2}{r_1^2+r_2^2}z & -b & 0 & -di \\ -\tilde{b} & -\dfrac{r_2^2}{r_1^2+r_2^2}z & \tilde{d} i & 0 \\ 0 & -fi & \dfrac{r_1^2}{r_1^2+r_2^2}z & 
-\tilde{\beta} \\ \tilde{f} i & 0 & -\beta & -\frac{r_2^2}{r_1^2+r_2^2}z \end{pmatrix}, \end{align} (128) with the notations in (105). Proof. This is a routine calculation, proceeding similarly to the above-cited references [12, 14, 15]. From the asymptotics (7) we have for $$z\to \infty$$ that   \begin{align} \frac{\partial M}{\partial z}M^{-1} &= \left(I+\frac{M_1}{z}+\cdots\right) \begin{pmatrix} r_1^2\tau & 0 & i(r_1-s_1z^{-1}) & 0 \\ 0 & -r_2^2\tau & 0 & i(r_2+ s_2z^{-1}) \\ i(r_1z-s_1) & 0 & r_1^2\tau & 0 \\ 0 & -i(r_2 z+ s_2) & 0 & -r_2^2\tau \end{pmatrix}\nonumber\\ &\quad \left(I-\frac{M_1}{z}+\cdots\right)+O(z^{-1}). \end{align} (129) Since the RH matrix $$M(z)$$ has constant jumps, the left-hand side of (129) is an entire function of $$z$$. Liouville's theorem implies that it is a polynomial in $$z$$. Collecting the polynomial terms in the right-hand side of (129) we obtain   $U = \begin{pmatrix} r_1^2\tau & 0 & i r_1 & 0 \\ 0 & -r_2^2\tau & 0 & i r_2 \\ i\left(r_1z-s_1\right) & 0 & r_1^2\tau & 0 \\ 0 & -i\left( r_2z+ s_2\right) & 0 & -r_2^2\tau \end{pmatrix}+i M_1A- iA M_1,\quad A:=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ r_1&0&0&0\\ 0&-r_2&0&0 \end{pmatrix}.$ With the help of (105) and a small calculation, we then get (126). To obtain the $$(3,1)$$ and $$(4,2)$$ entries of (126), we also need the relations   \begin{align} a+\alpha &= -c^2 + \frac{r_2}{r_1} d\tilde{d} +\frac{s_1}{r_1}, \end{align} (130)  \begin{align} \tilde{a}+\tilde{\alpha} &= -\tilde{c}^2 + \frac{r_1}{r_2} d\tilde{d} +\frac{s_2}{r_2}, \end{align} (131) which follow from the fact that the $$(1,3)$$ and $$(2,4)$$ entries in the $$z^{-1}$$ coefficient in (129) are equal to zero. A similar argument yields (127)–(128). □ 6.4 Proof of Proposition 2.11 Let the vector $$\textbf {m}(z)$$ be a solution of $$({\partial }/{\partial z})\textbf {m}=U\textbf {m}$$ with $$U$$ in (126). 
By splitting this equation in $$2\times 2$$ blocks we get   \begin{align} \begin{pmatrix} r_1^{-1}m_1'\left(z\right) \\ r_2^{-1}m_2'\left(z\right) \end{pmatrix}&=\begin{pmatrix} -c+r_1\tau & r_1^{-1}r_2 d \\ -r_1r_2^{-1} \tilde{d} & \tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} m_1\left(z\right) \\ m_2\left(z\right) \end{pmatrix}+i\begin{pmatrix} m_3\left(z\right) \\ m_4\left(z\right) \end{pmatrix}, \end{align} (132)  \begin{align} \begin{pmatrix} r_1^{-1}m_3'(z) \\ r_2^{-1}m_4'(z)\end{pmatrix}&=i\begin{pmatrix} c^2-r_1^{-1}r_2 d\tilde{d}-2r_1^{-1}s_1+z & -b-r_1^{-1}r_2\tilde{\beta}\\ -r_1r_2^{-1}\beta-\tilde{b} & \tilde{c}^2-r_1r_2^{-1} d\tilde{d}-2r_2^{-1}s_2-z \end{pmatrix}\begin{pmatrix} m_1(z) \\ m_2(z) \end{pmatrix}\nonumber\\ &\quad +\begin{pmatrix} c+r_1\tau & d \\ -\tilde{d} & -\tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} m_3(z) \\ m_4(z) \end{pmatrix}. \end{align} (133) From (132) and (108)–(111), we easily get (41)–(42). To prove the two remaining differential equations, we take the derivative of (132) and use (133) to get   \begin{align*} \begin{pmatrix} r_1^{-2}m_1''(z) \\ r_2^{-2}m_2''(z) \end{pmatrix} &=\begin{pmatrix} -c+r_1\tau & r_1^{-2}r_2^2 d \\ -r_1^2r_2^{-2} \tilde{d} & \tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} r_1^{-1}m_1'(z) \\ r_2^{-1}m_2'(z) \end{pmatrix}\\ &\quad - \begin{pmatrix} c^2-r_1^{-1}r_2 d\tilde{d}-2r_1^{-1}s_1+z & -b-r_1^{-1}r_2\tilde{\beta}\\ -r_1r_2^{-1}\beta-\tilde{b} & \tilde{c}^2-r_1r_2^{-1} d\tilde{d}-2r_2^{-1}s_2-z \end{pmatrix}\begin{pmatrix} m_1(z) \\ m_2(z) \end{pmatrix}\\ &\quad +\begin{pmatrix} c+r_1\tau & d \\ -\tilde{d} & -\tilde{c}-r_2\tau \end{pmatrix}\left[\begin{pmatrix} r_1^{-1}m_1'(z) \\ r_2^{-1}m_2'(z)\end{pmatrix} -\begin{pmatrix} -c+r_1\tau & r_1^{-1}r_2 d \\ -r_1r_2^{-1} \tilde{d} & \tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} m_1(z) \\ m_2(z) \end{pmatrix}\right]. \end{align*} From this equation and (108)–(115) we obtain the desired differential equations (39)–(40). 
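The elimination of $$(m_3,m_4)$$ just performed is purely algebraic: the coefficient blocks of (132) do not depend on $$z$$, so differentiating (132) and substituting (133) must agree with the bracketed form above. A small symbolic check with generic $$2\times2$$ blocks (the block names $$P,Q,S$$ are ours) confirms this:

```python
import sympy as sp

I = sp.I
# Generic 2x2 blocks standing in for those of (132)-(133):
#   m' = P m + i n,   n' = i Q m + S n,   with m = (m1, m2), n = (m3, m4).
P = sp.Matrix(2, 2, sp.symbols('P1:5'))
Q = sp.Matrix(2, 2, sp.symbols('Q1:5'))
S = sp.Matrix(2, 2, sp.symbols('S1:5'))
m = sp.Matrix(2, 1, sp.symbols('m1 m2'))
mp = sp.Matrix(2, 1, sp.symbols('mp1 mp2'))  # stands for m'(z)

# From the first block equation, i n = m' - P m.
i_n = mp - P * m
# Differentiate the first block equation (its blocks are constant in z) and
# substitute the second one:  m'' = P m' + i n' = P m' + i (i Q m + S n).
mpp_substituted = P * mp + I * I * Q * m + S * i_n
# The eliminated form displayed in the text:
mpp_eliminated = P * mp - Q * m + S * (mp - P * m)
assert sp.expand(mpp_substituted - mpp_eliminated) == sp.zeros(2, 1)
```

The remaining work in the text is then entrywise substitution of the Painlevé formulas (108)–(115) into the resulting second-order system.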
This proves Proposition 2.11(a). To prove Part (b), let $$\textbf {m}(z)$$ satisfy the differential equations (39)–(42). From the proof of Part (a) above, we see that $$\frac {\partial }{\partial z}\textbf {m}=U\textbf {m}$$ with $$U$$ in (126). But then   $\frac{\partial}{\partial z}[\tilde{M}^{-1} \textbf{m}]=\tilde{M}^{-1}\left(U-U\right)\textbf{m}=0,$ which implies Proposition 2.11(b). 6.5 Proof of Theorem 6.2 The matrices $$U,V,W$$ in (125) satisfy the compatibility conditions   \begin{align} \frac{\partial U}{\partial s} &=\frac{\partial V}{\partial z} - UV + VU \end{align} (134)  \begin{align} \frac{\partial U}{\partial \tau}&=\frac{\partial W}{\partial z} - UW + WU. \end{align} (135) These relations are obtained by calculating the mixed derivatives $$({\partial ^2}/{\partial z\partial s}) M\,{=}\,({\partial ^2}/{\partial s\partial z})M$$ and $$({\partial ^2}/{\partial z\partial \tau })M=({\partial ^2}/{\partial \tau \partial z})M$$, respectively, in two different ways. Lemma 6.6. Consider the matrices $$U,V,W$$ in (126)–(128). Then with the expressions (108)–(117) in Theorem 6.2, the compatibility conditions (134)–(135) are satisfied. Proof. This is a lengthy but direct calculation. It is best performed with the help of a symbolic computer program such as Maple. First we consider the compatibility condition with respect to $$s$$. 
Writing the matrix equation (134) in entrywise form, with the help of Maple, we obtain the system of equations, with the prime denoting the derivative with respect to $$s$$,   \begin{align} r_1 d' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(\tilde{c} d-b\right)+2\left(r_1^2+r_2^2\right)\sigma_1\tau d \end{align} (136)  \begin{align} r_2 d' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(c d-\tilde{\beta}\right)-2\left(r_1^2+r_2^2\right)\sigma_2\tau d \end{align} (137)  \begin{align} r_1\tilde{d}' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(\tilde{c} \tilde{d}-\beta\right)-2\left(r_1^2+r_2^2\right)\sigma_1\tau \tilde{d} \end{align} (138)  \begin{align} r_2\tilde{d}' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(c \tilde{d}-\tilde{b}\right)+2\left(r_1^2+r_2^2\right)\sigma_2\tau \tilde{d} \end{align} (139)  \begin{align} r_1 c' &=2\left(\sigma_1 r_2+\sigma_2 r_1\right)d\tilde{d}+2\sigma_1^2 s \end{align} (140)  \begin{align} r_2\tilde{c}' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)d\tilde{d}+2\sigma_2^2 s \end{align} (141) and   \begin{align} (r_1 b+r_2\tilde{\beta})' &= (r_1\tilde{c}+r_2 c)d'-2(r_1^2+r_2^2)\tau\sigma_1 (\tilde{c}d-b)+2(r_1^2+r_2^2)\tau\sigma_2(cd-\tilde{\beta})\nonumber\\ &\quad-2\frac{r_1\sigma_2+r_2\sigma_1}{r_1r_2}\left((r_1^2+r_2^2)d^2\tilde{d}+(r_1\sigma_2+r_2\sigma_1)sd\right)-4\sigma_1 \sigma_2 sd \end{align} (142)  \begin{align} (r_1\beta+r_2\tilde{b})' &= (r_1\tilde{c}+r_2 c)\tilde{d}'+2(r_1^2+r_2^2)\sigma_1\tau (\tilde{c} \tilde{d}-\beta)-2(r_1^2+r_2^2)\sigma_2\tau(c\tilde{d}-\tilde{b})\nonumber\\ &\quad -2\frac{r_1\sigma_2+r_2\sigma_1}{r_1r_2}\left((r_1^2+r_2^2)\tilde{d}^2 d+(r_1\sigma_2+r_2\sigma_1)s\tilde{d}\right)-4\sigma_1 \sigma_2 s\tilde{d}. \end{align} (143) Next we consider the compatibility condition with respect to $$\tau$$. 
By writing the matrix equation (135) in entrywise form, with the help of Maple, we obtain, with the prime denoting the derivative with respect to $$\tau$$,   \begin{align} r_1 d' &= \left(r_1^2+r_2^2\right)\left(r_1^2\tau\tilde{\beta}+r_2^2\tau\tilde{\beta}+r_1 c\tilde{\beta} +r_2\tilde{c}\tilde{\beta} +r_2 d^2 \tilde{d}-r_1 c^2 d+2 s_1 d+r_2 f\right) \end{align} (144)  \begin{align} r_2 d' &=\left(r_1^2+r_2^2\right)\left(r_1^2\tau b+r_2^2\tau b -r_1 c b-r_2 \tilde{c} b -r_1 d^2\tilde{d}+r_2 \tilde{c}^2 d -2s_2 d-r_1 f\right) \end{align} (145)  \begin{align} r_1 \tilde{d}' &=\left(r_1^2+r_2^2\right)\left(r_1^2\tau\tilde{b} +r_2^2\tau\tilde{b}-r_2 \tilde{c}\tilde{b}-r_1 c\tilde{b}-r_2\tilde{d}^2 d+r_1 c^2\tilde{d}-2s_1\tilde{d} -r_2\tilde{f}\right) \end{align} (146)  \begin{align} r_2 \tilde{d}' &=\left(r_1^2+r_2^2\right) \left(r_1^2\tau\beta +r_2^2\tau\beta+r_2\tilde{c}\beta+r_1 c\beta +r_1\tilde{d}^2 d-r_2\tilde{c}^2\tilde{d}+2 s_2\tilde{d}+r_1\tilde{f}\right) \end{align} (147)  \begin{align} c' &=\left(r_1^2+r_2^2\right)\left(d\beta-\tilde{d} b\right) \end{align} (148)  \begin{align} \tilde{c}' &=\left(r_1^2+r_2^2\right)\left(\tilde{d}\tilde{\beta}- d\tilde{b}\right) \end{align} (149) and   \begin{align*} r_1b'+r_2\tilde{\beta}' &= (r_1^2+r_2^2)(-r_1 c^2 b+r_2\tilde{c}^2\tilde{\beta}-r_1 d\tilde{d}\tilde{\beta}+r_2 d \tilde{d} b+2 s_1 b-2s_2\tilde{b}\\ &\quad -r_1^2\tau f-r_2^2 \tau f-r_1 c f+r_2 \tilde{c} f)\\ r_2\tilde{b}'+r_1\beta' &= (r_1^2+r_2^2)(-r_2 \tilde{c}^2 \tilde{b}+r_1 c^2 \beta-r_2 d\tilde{d}\beta+r_1 d \tilde{d} \tilde{b}+2 s_2 \tilde{b}-2s_1 b \\ &\quad -r_2^2\tau \tilde{f}-r_1^2 \tau \tilde{f}-r_2 \tilde{c} \tilde{f}+r_1 c \tilde{f}). \end{align*} Direct calculations show that all these equations are satisfied by (108)–(117).□ Lemma 6.7. Theorem 6.2 and Proposition 2.2 both hold true in the case $$\tau =0$$. Proof. For $$\tau =0$$, Proposition 2.2 was obtained in [14, Theorem 2.3], and Equations (108)–(111) were obtained in [14, Theorem 2.4]. 
The other equations in Theorem 6.2 then follow from (136)–(139) and (145)–(146). □ With the help of Lemmas 6.6–6.7, one can prove Theorem 6.2 and Proposition 2.2 for $$\tau \neq 0$$ in the same way as in [15, Section 5], where the symmetric case was considered. More precisely, the approach in [15, Section 5] allows one to prove the existence of a matrix $$M(z)$$ which solves the RH problem 2.1, and which additionally satisfies the Lax pair   $$\frac{\partial}{\partial z}M =UM,\quad \frac{\partial}{\partial \tau}M=WM,$$ (150) where the coefficient matrices $$U,W$$ are given by (126) and (128), respectively, with the numbers $$c,\tilde {c},d,\tilde {d},\ldots$$ involved in these matrices given by the explicit Painlevé-type formulas in (108)–(117). This is a lengthy and tedious calculation that follows exactly the same plan as in [15]; we do not go into the details. In particular, this implies the existence part in Proposition 2.2 (the uniqueness part is a standard fact). Theorem 6.2 is another consequence of the preceding construction. Indeed, from Lemma 6.5 we already know that the RH matrix $$M(z)$$ satisfies the Lax pair (150), where the coefficient matrices $$U,W$$ are given by (126) and (128), respectively, with the numbers $$c,\tilde {c},d,\tilde {d},\ldots$$ involved in these matrices obtained from the residue matrix $$M_1$$ in (105). The construction in the previous paragraph shows that the same holds with the numbers $$c,\tilde {c},d,\tilde {d},\ldots$$ given by the right-hand sides of equations (108)–(117). Since the numbers in the left-hand sides of the latter equations are uniquely retrievable from the matrices $$U,W$$ in (126) and (128), we then obtain Theorem 6.2. 6.6 Alternative approach to Theorem 6.2 The above reasoning does not give any insight into the origin of the expressions in Theorem 6.2. Therefore, in the remaining part of this section, let us deduce these formulas in a more direct way. 
The calculations below are partly heuristic in the sense that we will make an ansatz (159), (167). We start with Lemma 6.8. The numbers $$d,\tilde {d}$$ in (105) satisfy the system of coupled second-order differential equations   \begin{align} \frac{\partial^2 d}{\partial s^2} &= 4\tau(r_1\sigma_1-r_2\sigma_2)\frac{\partial d}{\partial s} -4(r_1^2+r_2^2)(\sigma_1^2+\sigma_2^2) \tau^2 d\nonumber\\ &\quad +8\frac{(r_1\sigma_2+r_2\sigma_1)^2}{r_1 r_2} d^2\tilde{d}+8\frac{(r_1\sigma_2+r_2\sigma_1)^3}{r_1r_2(r_1^2+r_2^2)}sd, \end{align} (151)  \begin{align} \frac{\partial^2 \tilde{d}}{\partial s^2} &= -4\tau(r_1\sigma_1-r_2\sigma_2)\frac{\partial\tilde{d}}{\partial s} -4(r_1^2+r_2^2)(\sigma_1^2+\sigma_2^2) \tau^2 \tilde{d}\nonumber\\ &\quad +8\frac{(r_1\sigma_2+r_2\sigma_1)^2}{r_1 r_2} \tilde{d}^2 d+8\frac{(r_1\sigma_2+r_2\sigma_1)^3}{r_1r_2(r_1^2+r_2^2)}s\tilde{d}. \end{align} (152) Moreover,   $$\frac{\partial d}{\partial\tau}= -\frac{r_1r_2\left(r_1^2+r_2^2\right)}{\sigma_1r_2+\sigma_2 r_1}\tau\frac{\partial d}{\partial s} + \left(r_1^2+r_2^2\right)^2 \frac{\sigma_1 r_2-\sigma_2 r_1}{\sigma_1r_2+\sigma_2 r_1}\tau^2 d+2\left(r_1 s_1- r_2 s_2\right) d.$$ (153) Proof. Equation (151) follows from (136)–(137) and (140)–(143) after some lengthy algebraic manipulations. Equation (152) follows by symmetry. To obtain (153), first note that (136)–(137) imply the relations   \begin{align} &r_2\left(\tilde{c} d-b\right)-r_1\left(cd-\tilde{\beta}\right)+\left(r_1^2+r_2^2\right)\tau d=0, \end{align} (154)  \begin{align} &r_1 r_2 \frac{\partial d}{\partial s} = \left(\sigma_1 r_2+\sigma_2 r_1\right)\left(r_2\tilde{c} d-r_2b+r_1c d-r_1\tilde{\beta}\right)+\left(r_1^2+r_2^2\right)\left(\sigma_1 r_2-\sigma_2 r_1\right)\tau d. 
\end{align} (155) Now by eliminating $$f$$ from (144)–(145) we get   \begin{align} \frac{\partial d}{\partial\tau} &= \left(r_1^2+r_2^2\right) \left(r_2 b + r_1\tilde{\beta}\right) \tau-\left(r_2 b-r_1\tilde{\beta}\right) \left(r_1 c+r_2 \tilde{c}\right)\nonumber\\ &\quad -d \left(r_1^2 c^2-r_2^2 \tilde{c}^2\right)+2\left(r_1 s_1- r_2 s_2\right) d. \end{align} (156) On account of (154) this becomes   $\frac{\partial d}{\partial\tau} = \left(r_1^2+r_2^2\right) \left(r_2 b + r_1\tilde{\beta}-r_1 cd-r_2 \tilde{c} d\right)\tau +2\left(r_1 s_1- r_2 s_2\right) d.$ Combining this with (155) we obtain (153). □ We seek a solution to the differential equations (151)–(152) in the form   $$d = e^{h}g,\quad \tilde{d} = e^{-h} g,$$ (157) with $$h=h(s,\tau )$$ an odd function of $$\tau$$ and $$g=g(s,\tau )$$ an even function of $$\tau$$ (recall Lemma 6.1). Plugging this into (151) we find, with again the prime denoting the derivative with respect to $$s$$,   \begin{align} g''+2h'g'+((h')^2+h'') g &= 4\tau(r_1\sigma_1-r_2\sigma_2)(g'+h' g)\nonumber\\ &\quad -4(r_1^2+r_2^2)\tau^2 (\sigma_1^2+\sigma_2^2) g+8\frac{(r_1\sigma_2+r_2\sigma_1)^2}{r_1 r_2} g^3\nonumber\\ &\quad +8\frac{(r_1\sigma_2+r_2\sigma_1)^3}{r_1r_2(r_1^2+r_2^2)}sg. \end{align} (158) To proceed further, we make the ansatz   $$\frac{\partial^2 h}{\partial s^2}=0,\quad \frac{\partial h}{\partial s} = 2\left(r_1\sigma_1-r_2\sigma_2\right)\tau.$$ (159) After a little calculation, (158) then simplifies to   $$g'' = -4\left(r_1\sigma_2+r_2\sigma_1\right)^2\tau^2 g+8\frac{\left(r_1\sigma_2+r_2\sigma_1\right)^2}{r_1r_2} g^3+8\frac{\left(r_1\sigma_2+r_2\sigma_1\right)^3}{r_1r_2\left(r_1^2+r_2^2\right)}sg.$$ (160) We can relate (160) to the Painlevé II equation. 
We have that $$q=q(s)$$ satisfies $$q''=s q+2q^3$$ if and only if   $$g\left(s\right) := c_1 q\left(c_2 s + c_3\right)$$ (161) satisfies   $$g'' = c_2^2 c_3 g+2 \frac{c_2^2}{c_1^2} g^3+ c_2^3 s g.$$ (162) Comparing coefficients with (160), we see that   \begin{align} c_1 &= \frac{\left(r_1r_2\right)^{1/6}}{\left(r_1^2+r_2^2\right)^{1/3}} \end{align} (163)  \begin{align} c_2 &= 2\frac{\left(r_1\sigma_2+r_2\sigma_1\right)}{\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{1/3}} \end{align} (164)  \begin{align} c_3 &= -\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{2/3}\tau^2. \end{align} (165) Finally, substituting the formulas (157), (161),   $$d = e^{h}g =e^h \frac{\left(r_1r_2\right)^{1/6}}{\left(r_1^2+r_2^2\right)^{1/3}} q\left(2\frac{\left(r_1\sigma_2+r_2\sigma_1\right)}{\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{1/3}} s -\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{2/3}\tau^2\right)$$ (166) in (153) and using again (159) we find after some calculations,   $$\frac{\partial h}{\partial \tau} = \left(r_2^4-r_1^4\right)\tau^2+2\left(r_1\sigma_1-r_2\sigma_2\right)s,$$ (167) where we assume that the choice of the Painlevé II solution $$q$$ in (166) is independent of $$\tau$$. From (166)–(167) and the known result for $$\tau =0$$ [14, Theorem 2.4] we get the expression for $$d$$ in Theorem 6.2 (with $$q$$ the Hastings–McLeod solution to Painlevé II). By symmetry we obtain the expression for $$\tilde {d}$$. From (148), (136), (138) and a little calculation we then find   $\frac{\partial}{\partial\tau}c = -2r_1^{-1}\left(r_1^2+r_2^2\right)\tau C^{-2} q^2\left(\sigma\right)= \frac{\partial}{\partial\tau}\left(-r_1^{-1}C^{-1} u\left(\sigma\right)\right),$ where the second equality follows from (64) and (32). Combining this with the known result for $$\tau =0$$ [14, Theorem 2.4] we get the expression for $$c$$ in Theorem 6.2. From (136) and a little calculation we find the expression for $$b$$, while (145) yields the formula for $$f$$. 
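The scaling step (161)–(165) is a chain-rule computation that is easy to verify symbolically. In the sketch below the notation is ours: `Q` stands for $$q(t)$$ with $$t=c_2s+c_3$$, and `k` for $$r_1\sigma_2+r_2\sigma_1$$; the first assertion checks (162), and the remaining ones check that the constants (163)–(165) reproduce the coefficients of (160):

```python
import sympy as sp

s = sp.symbols('s')
c1, c2, c3, Q = sp.symbols('c1 c2 c3 Q', positive=True)

# With t = c2*s + c3 and g(s) = c1*q(t), the chain rule gives
# g'' = c1*c2**2*q''(t), and Painlevé II q'' = t q + 2 q^3 turns this
# into the expression below (Q stands for q(t)).
t = c2 * s + c3
g = c1 * Q
gpp = c1 * c2**2 * (t * Q + 2 * Q**3)
# Claimed form (162):
claimed = c2**2 * c3 * g + 2 * (c2**2 / c1**2) * g**3 + c2**3 * s * g
assert sp.expand(gpp - claimed) == 0

# Matching (162) against (160) should reproduce the constants (163)-(165).
r1, r2, sg1, sg2, tau = sp.symbols('r1 r2 sigma1 sigma2 tau', positive=True)
k = r1 * sg2 + r2 * sg1                      # r1*sigma2 + r2*sigma1
c1v = (r1 * r2)**sp.Rational(1, 6) / (r1**2 + r2**2)**sp.Rational(1, 3)
c2v = 2 * k / (r1 * r2 * (r1**2 + r2**2))**sp.Rational(1, 3)
c3v = -(r1 * r2 * (r1**2 + r2**2))**sp.Rational(2, 3) * tau**2

assert sp.simplify(c2v**2 * c3v + 4 * k**2 * tau**2) == 0
assert sp.simplify(2 * c2v**2 / c1v**2 - 8 * k**2 / (r1 * r2)) == 0
assert sp.simplify(c2v**3 - 8 * k**3 / (r1 * r2 * (r1**2 + r2**2))) == 0
```

The positivity assumptions on the symbols are only there to let the fractional powers combine; they reflect $$r_1,r_2>0$$ in the setting of the paper.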
Finally, the remaining formulas in Theorem 6.2 follow from symmetry considerations; see Lemma 6.1. Funding The author is a Postdoctoral Fellow of the Fund for Scientific Research—Flanders (Belgium). References 1 Adler M., Delépine J., van Moerbeke P., and Vanhaecke P. “A PDE for non-intersecting Brownian motions and applications.” Advances in Mathematics 226 (2011): 1715–55. 2 Adler M., Ferrari P. L., and van Moerbeke P. “Non-intersecting random walks in the neighborhood of a symmetric tacnode.” Annals of Probability 41 (2013): 2599–647. 3 Adler M., Johansson K., and van Moerbeke P. “Double Aztec diamonds and the tacnode process.” Advances in Mathematics 252 (2014): 518–71. 4 Adler M., van Moerbeke P., and Vanderstichelen D. “Non-intersecting Brownian motions leaving from and going to several points.” Physica D 241 (2012): 443–60. 5 Anderson G. W., Guionnet A., and Zeitouni O. An Introduction to Random Matrices. Cambridge Studies in Advanced Mathematics 118. Cambridge: Cambridge University Press, 2010. 6 Baik J. “Painlevé formulas of the limiting distributions for nonnull complex sample covariance matrices.” Duke Mathematical Journal 133 (2006): 205–35. 7 Baik J., Liechty K., and Schehr G. “On the joint distribution of the maximum and its position of the Airy$$_2$$ process minus a parabola.” Journal of Mathematical Physics 53 (2012): 083303. 8 Bertola M. and Cafasso M. “Riemann-Hilbert approach to multi-time processes; the Airy and the Pearcey case.” Physica D 241 (2012): 2237–45. 9 Bertola M. and Cafasso M. 
“ The gap probabilities of the tacnode, Pearcey and Airy point processes, their mutual relationship and evaluation.” Random Matrices: Theory and Applications  2 ( 2013): 1350003. Google Scholar CrossRef Search ADS   10 Bleher P. and Its A.. “ Double scaling limit in the random matrix model: the Riemann-Hilbert approach.” Communications on Pure and Applied Mathematics  56 ( 2003): 433– 516. Google Scholar CrossRef Search ADS   11 Claeys T. and Kuijlaars A. B. J.. “ Universality of the double scaling limit in random matrix models.” Communications on Pure and Applied Mathematics  59 ( 2006): 1573– 603. Google Scholar CrossRef Search ADS   12 Delvaux S. “ Non-intersecting squared Bessel paths at a hard-edge tacnode.” Communications in Mathematical Physics  324 ( 2013): 715– 66. Google Scholar CrossRef Search ADS   13 Delvaux S. and Kuijlaars A. B. J.. “ A graph-based equilibrium problem for the limiting distribution of non-intersecting Brownian motions at low temperature.” Constructive Approximation  32 ( 2010): 467– 512. Google Scholar CrossRef Search ADS   14 Delvaux S., Kuijlaars A. B. J., and Zhang L.. “ Critical behavior of non-intersecting Brownian motions at a tacnode.” Communications on Pure and Applied Mathematics  64 ( 2011): 1305– 83. Google Scholar CrossRef Search ADS   15 Duits M. and Geudens D.. “ A critical phenomenon in the two-matrix model in the quartic/quadratic case.” Duke Mathematical Journal  162 ( 2013): 1383– 462. Google Scholar CrossRef Search ADS   16 Eynard B. and Orantin N.. “ Topological recursion in enumerative geometry and random matrices.” Journal of Physics A  42, no. 29 ( 2009): 293001, 117. Google Scholar CrossRef Search ADS   17 Ferrari P. L. and B. Vet̋. “ Non-colliding Brownian bridges and the asymmetric tacnode process.” Electronic Journal of Probability  17 ( 2012): 1– 17. Google Scholar CrossRef Search ADS   18 Flaschka H. and Newell A. C.. 
“ Monodromy and spectrum-preserving deformations I.” Communications in Mathematical Physics  76 ( 1980): 65– 116. Google Scholar CrossRef Search ADS   19 Fokas A. S., Its A. R., Kapaev A. A., and Novokshenov V. Yu.. Painlevé Transcendents: a Riemann-Hilbert Approach . Mathematical Surveys and Monographs 128. Providence, RI: Amer. Math. Soc., 2006. Google Scholar CrossRef Search ADS   20 Geudens D. and Zhang L.. “ Transitions between critical kernels: from the tacnode kernel and critical kernel in the two-matrix model to the Pearcey kernel.” International Mathematics Research Notices  ( 2014). 10.1093/imrn/rnu105. 21 Hastings S. P. and McLeod J. B.. “ A boundary value problem associated with the second Painlevé transcendent and the Korteweg-de Vries equation.” Archive for Rational Mechanics and Analysis  73 ( 1980): 31– 51. Google Scholar CrossRef Search ADS   22 Johansson K. “ Noncolliding Brownian motions and the extended tacnode process.” Communications in Mathematical Physics  319 ( 2013): 231– 67. Google Scholar CrossRef Search ADS   23 Katori M. and Tanemura H.. “ Noncolliding Brownian motion and determinantal processes.” Journal of Statistical Physics  129 ( 2007): 1233– 77. Google Scholar CrossRef Search ADS   24 Katori M. and Tanemura H.. “ Noncolliding processes, matrix-valued processes and determinantal processes.” Sugaku Expositions  24 ( 2011): 263– 89. 25 Kuijlaars A. B. J. “ The tacnode Riemann Hilbert problem.” Constructive Approximation  39 ( 2014): 197– 222. Google Scholar CrossRef Search ADS   26 Moreno Flores G. R., Quastel J., and Remenik D.. “ Endpoint distribution of directed polymers in 1+1 dimensions.” Communications in Mathematical Physics  317 ( 2013): 363– 80. Google Scholar CrossRef Search ADS   27 Schehr G. “ Extremes of $$N$$ vicious walkers for large $$N$$: application to the directed polymer and KPZ interfaces.” Journal of Statistical Physics  149 ( 2012): 385– 410. Google Scholar CrossRef Search ADS   28 Tracy C. and Widom H.. 
“ Level-spacing distributions and the Airy kernel.” Communications in Mathematical Physics  159 ( 1994): 151. Google Scholar CrossRef Search ADS   Communicated by Prof. Kurt Johansson © The Author(s) 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png International Mathematics Research Notices Oxford University Press

International Mathematics Research Notices, Volume 2018, Issue 1 (January 2018), 42 pages. Publisher: Oxford University Press. ISSN 1073-7928; eISSN 1687-0247. DOI: 10.1093/imrn/rnv348.
For $$n\to \infty$$ the Brownian motions fill a prescribed region in the time–space plane, bounded by the boldface lines in the figure. In case (c), the limiting support consists of two ellipses that touch at a single critical point, the tacnode.

It is well known that the positions of the Brownian motions at any fixed time $$t\in (0,1)$$ form a determinantal point process. The process has a well-defined limit for $$n\to \infty$$ in a microscopic neighborhood of the tacnode. The limiting process is encoded by a two-variable correlation kernel $$K_{{\rm tac}}(x,y)$$ which we call the tacnode kernel. It depends parametrically on the scaling that we use near the tacnode. There also exists a multitime extended version of the tacnode kernel [2, 3, 17, 22], to be discussed in Section 2.4.

The tacnode kernel $$K_{{\rm tac}}(x,y)$$ can be expressed using resolvents and Fredholm determinants of the Airy integral operator acting on a semi-infinite interval $$[\sigma ,\infty )$$. This approach was followed in the symmetric case by Adler et al. [2] and Johansson [22], and in the nonsymmetric case by Ferrari and Vető [17].
Here the “symmetric case” means that the two touching groups of Brownian motions have the same size, as in Figure 1, and the nonsymmetric case means that one group is bigger than the other. Similar methods were used to study a double Aztec diamond [3]. The latter paper also shows that the expressions for the symmetric tacnode kernel derived in [2, 22], respectively, are equivalent to each other (as one would expect), but the proof of this is rather indirect, since it involves calculating the limit of a certain kernel in two different ways.

An alternative expression for the tacnode kernel can be obtained from the Riemann–Hilbert (RH) method. This approach was followed by Kuijlaars et al. in [14]. In that paper we express the tacnode kernel in terms of a $$4\times 4$$ matrix-valued RH problem $$M(z)$$, which yields a new Lax pair representation for the Hastings–McLeod solution $$q(x)$$ to the Painlevé II equation. Recall that the Painlevé II equation is the second-order ordinary differential equation   $$q''\left(x\right) = xq\left(x\right)+2q^3\left(x\right),$$ (1) where the prime denotes the derivative with respect to $$x$$. The Hastings–McLeod solution [19, 21] is the special solution $$q(x)$$ of (1) that is real for real $$x$$ and satisfies   $$q\left(x\right)\sim {\rm Ai}\left(x\right), \quad x\to +\infty,$$ (2) with $${\rm Ai}$$ the Airy function. We note that the usual RH matrix $$\Psi (z)$$ associated with the Painlevé II equation, due to Flaschka and Newell [18], has size $$2\times 2$$ rather than $$4\times 4$$.

The RH matrix $$M(z)$$ from the previous paragraph has been the topic of some recent developments. It was used to study a new critical phenomenon in the two-matrix model [15], and to establish a reduction from the tacnode kernel to the Pearcey kernel [20]. It was also extended to a hard-edge version of the tacnode [12].

Summarizing, there exist several, apparently different, formulas for the tacnode kernel $$K_{{\rm tac}}(x,y)$$.
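The characterization (1)–(2) lends itself to a quick numerical illustration (our own sketch; the seeding point $$x_0$$ and tolerances are arbitrary choices, not from the paper): integrating Painlevé II backwards from a large $$x_0$$ with Airy-function initial data tracks the Hastings–McLeod solution on a moderate interval.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import airy

# Painlevé II (1) as a first-order system
def pii(x, y):
    q, qp = y
    return [qp, x*q + 2*q**3]

# seed with Airy data at a large x0, where q ~ Ai by (2); the Hastings-McLeod
# solution is numerically unstable, so this only tracks it on a moderate range
x0 = 8.0
ai0, aip0, _, _ = airy(x0)
sol = solve_ivp(pii, (x0, 0.0), [ai0, aip0], rtol=1e-10, atol=1e-12,
                dense_output=True)

q0 = sol.sol(0.0)[0]
print(f"q(0) approx {q0:.5f}")  # exceeds Ai(0) = 0.35503..., as expected from
                                # the extra positive forcing term 2q^3 in (1)
```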
It is natural to ask about the equivalence of these formulas.

There is an interesting analogy with a model of nonintersecting Brownian excursions, also known as watermelons (with a wall). The model consists of $$n$$ Brownian motion paths on the positive half-line with a reflecting or absorbing wall at the origin. The paths are forced to start and end at the origin, and the interest lies in the maximum position $$x_{\max }$$ reached by the topmost path during the time interval $$t\in [0,1]$$. Recently, several results were obtained about the joint distribution of the maximum position $$x_{\max }$$ and the maximizing time $$t_{\max }$$ for such watermelons. One approach, due to Moreno Flores–Quastel–Remenik [26], involves resolvents and Fredholm determinants of the Airy operator acting on an interval $$[\sigma ,\infty )$$. Another approach, due to Schehr [27], involves the $$2\times 2$$ RH matrix $$\Psi (z)$$ associated with the Hastings–McLeod solution to the Painlevé II equation [18, 21]. The equivalence of both approaches has been established in a recent work of Baik–Liechty–Schehr [7]. Along the way they obtain Airy resolvent formulas for the entries of the RH matrix $$\Psi (z)$$; see also [6, Section 1.1.3] for a similar result.

Inspired by the work of Baik–Liechty–Schehr [7], in this paper we obtain Airy resolvent formulas for the $$4\times 4$$ RH matrix $$M(z)$$. This is the content of Theorem 2.9. Our formulas will apply to the entries in the first and second column of the RH matrix $$M(z)$$, where $$z$$ lies in a sector around the positive imaginary axis. In Theorem 2.4 we will use these formulas to prove the equivalence of the tacnode kernels in [2, 3, 17, 22] and [14], respectively. We also obtain a remarkable rank-2 property for the derivative of the tacnode kernel, see Theorems 2.5 and 2.6. As a byproduct, the rank-2 formula will yield an RH formula for the multitime extended tacnode kernel, see Section 2.4.
To the best of our knowledge, this is the first time that an RH formula is obtained for a multitime extended correlation kernel. See [8] for a completely different connection between RH problems and multitime extended point processes, at the level of gap probabilities.

Remark 1.1. Based on an earlier version of this paper, Kuijlaars [25] has extended our approach and obtained Airy resolvent formulas for the third and fourth columns of $$M(z)$$, with $$z$$ lying again in a sector around the positive imaginary axis. The latter columns are relevant because they appear in a critical correlation kernel for the two-matrix model [15].

Remark 1.2. The paper [12] discusses a hard-edge variant of the tacnode kernel. The interaction with the hard edge is quantified by a certain parameter $$\alpha >-1$$. In the special case $$\alpha =0$$, the hard-edge tacnode kernel involves the same RH matrix $$M(z)$$ as the one above, and our results give an alternative way of writing the kernel. It is an open problem to extend our results to a general value of $$\alpha$$.

2 Statement of Results

2.1 Definition of the tacnode kernel

In this section we recall the two different definitions of the tacnode kernel in the literature. We will denote them by $$K_{{\rm tac}}(u,v)$$ and $$\mathcal L_{{\rm tac}}(u,v)$$, respectively.

2.1.1 Definition of the kernel $$K_{{\rm tac}}$$.

A first approach to define the tacnode kernel $$K_{{\rm tac}}(u,v)$$ (in the single-time case) is via an RH problem for a matrix $$M(z)$$ of size $$4\times 4$$. We recall the RH problem from [14, 15].
Fix two numbers $$\varphi _1,\varphi _2$$ such that   $$0<\varphi_1<\varphi_2<\pi/3.$$ (3) Define the half-lines $$\Gamma _k$$, $$k=0,\ldots ,9$$, by   $$\Gamma_0=\mathbb{R}_+,\quad \Gamma_1={\rm e}^{{\rm i}\varphi_1}\mathbb{R}_+,\quad \Gamma_2={\rm e}^{{\rm i}\varphi_2}\mathbb{R}_+,\quad \Gamma_3={\rm e}^{{\rm i}\left(\pi-\varphi_2\right)}\mathbb{R}_+,\quad \Gamma_4={\rm e}^{{\rm i}\left(\pi-\varphi_1\right)}\mathbb{R}_+,$$ (4) and   $$\Gamma_{5+k}=-\Gamma_k,\quad k=0,\ldots,4.$$ (5) All rays $$\Gamma _k$$, $$k=0,\ldots ,9$$, are oriented toward infinity, as shown in Figure 2. We denote by $$\Omega _k$$ the region in $$\mathbb {C}$$ that lies between the rays $$\Gamma _k$$ and $$\Gamma _{k+1}$$, for $$k=0,\ldots ,9$$, where we identify $$\Gamma _{10}:=\Gamma _0$$. We consider the following RH problem.

RH problem 2.1. We look for a matrix-valued function $$M:\mathbb {C}\setminus (\bigcup _{k=0}^{9}\Gamma _k)\to \mathbb {C}^{4\times 4}$$ (which also depends on the parameters $$r_1,r_2>0$$, and $$s_1,s_2,\tau \in \mathbb {C}$$) satisfying: $$M(z)$$ is analytic (entrywise) for $$z\in \mathbb {C}\setminus (\bigcup _{k=0}^{9} \Gamma _k)$$. For $$z\in \Gamma _k$$, the limiting values   $M_+\left(z\right) = \lim_{x \to z,\; x \,\text{on $+$-side of}\, \Gamma_k} M\left(x\right), \quad M_-\left(z\right) = \lim_{x \to z,\; x \,\text{on $-$-side of}\, \Gamma_k} M\left(x\right)$ exist, where the $$+$$-side and $$-$$-side of $$\Gamma _k$$ are the sides which lie on the left and right of $$\Gamma _k$$, respectively, when traversing $$\Gamma _k$$ according to its orientation. These limiting values satisfy the jump relation   $$M_{+}\left(z\right) = M_{-}\left(z\right)J_k\left(z\right),\quad k=0,\ldots,9,$$ (6) where the jump matrix $$J_k(z)$$ for each ray $$\Gamma _k$$ is shown in Figure 2.
As $$z\to \infty$$ we have   \begin{align} M(z) &=\left(I+\frac{M_1}{z}+\frac{M_2}{z^2}+O\left(\frac{1}{z^3}\right)\right) {\rm diag}((-z)^{-1/4},z^{-1/4},(-z)^{1/4},z^{1/4})\nonumber\\ &\quad \times\mathcal{A}\,{\rm diag}\left({\rm e}^{\theta_1(z)},{\rm e}^{\theta_2(z)}, {\rm e}^{\theta_3(z)},{\rm e}^{\theta_4(z)}\right), \end{align} (7) where the coefficient matrices $$M_1,M_2,\ldots$$ are independent of $$z$$, with   $$\mathcal{A}:=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & -i & 0 \\ 0 & 1 & 0 & i \\ -i & 0 & 1 & 0 \\ 0 & i & 0 & 1 \\ \end{pmatrix},$$ (8) and   $$\begin{cases} \theta_1\left(z\right) = -\frac{2}{3}r_1\left(-z\right)^{3/2}-2s_1\left(-z\right)^{1/2}+r_1^2\tau z, \\ \theta_2\left(z\right) = -\frac{2}{3}r_2z^{3/2}-2s_2z^{1/2}-r_2^2\tau z,\\ \theta_3\left(z\right) =\frac{2}{3}r_1\left(-z\right)^{3/2} +2s_1\left(-z\right)^{1/2}+r_1^2\tau z, \\ \theta_4\left(z\right) =\frac{2}{3}r_2z^{3/2}+2s_2z^{1/2}-r_2^2\tau z. \end{cases}$$ (9) Here we use the principal branches of the fractional powers. $$M(z)$$ is bounded as $$z\to 0$$.

Fig. 2. The jump contours $$\Gamma _k$$ in the complex $$z$$-plane and the corresponding jump matrix $$J_k$$ on $$\Gamma _k$$, $$k=0,\ldots ,9$$, in the RH problem for $$M = M(z)$$. We denote by $$\Omega _k$$ the region between the rays $$\Gamma _k$$ and $$\Gamma _{k+1}$$.

We will sometimes write $$M(z)=M(z;r_1,r_2,s_1,s_2,\tau )$$ to indicate the dependence on the parameters. The factors $$r_1^2$$ and $$r_2^2$$ in front of $$\tau z$$ in (9) will be useful in the statement of our main theorems; these factors could be removed by a scaling and translation of the RH matrix.
We could also assume $$r_2=1$$ without loss of generality by a simple rescaling of $$z$$.

The RH problem 2.1 was introduced in [14] with $$\tau =0$$ in (9). The parameter $$\tau$$ was introduced in [15] in the symmetric setting where $$r_1=r_2=1$$ and $$s_1=s_2$$. The general nonsymmetric case with the extra parameter $$\tau$$ in (9) has not been considered before in the literature. We note that the parameter $$\tau$$ in [15] can be identified with the (rescaled) coupling constant in a critical model of coupled random matrices, while in the context of the tacnode process the parameter $$\tau$$ will be related to the $$O(n^{-1/3})$$ coefficient in the critical scaling of the time variable.

The following result can be proved as in [15]; see also Lemma 6.7 and the paragraph following it.

Proposition 2.2 (Solvability). For any $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ there exists a unique solution $$M(z)=M(z;r_1,r_2,s_1,s_2,\tau )$$ to the RH problem 2.1.

Now we define the tacnode kernel. Let $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed parameters. Let $$\tilde {M}(z)$$ be the restriction of $$M(z)$$ to the sector $$z\in \Omega _2$$ around the positive imaginary axis. We extend $$\tilde {M}(z)$$ to the whole complex $$z$$-plane by analytic continuation. This analytic continuation is well defined, since the product $$J_3J_4\cdots J_9J_0J_1J_2$$ of the jump matrices in the RH problem 2.1 is the identity matrix.
The tacnode kernel $$K_{{\rm tac}}(u,v)$$ is defined in terms of the RH matrix $$\tilde {M}(z)$$ by [14, Definition 2.6] (we note that [14, Definition 2.6] has a typo: it has “$$\tilde {M}^{-1}(u) \tilde {M}(v)$$” instead of “$$\tilde {M}^{-1}(v) \tilde {M}(u)$$”):   $$K_{{\rm tac}}\left(u,v\right) = \frac{1}{2\pi i\left(u-v\right)} \left(0\quad 0\quad 1\quad 1\right) \tilde{M}^{-1}\left(v\right) \tilde{M}\left(u\right) \begin{pmatrix} 1\\ 1\\ 0\\ 0\end{pmatrix}.$$ (10) For later use, it is convenient to denote by   $$\textbf{p}\left(z\right) = \tilde{M}\left(z\right)\begin{pmatrix}1\\ 1\\ 0\\ 0\end{pmatrix} \in\mathbb{C}^{4\times 1}$$ (11) the sum of the first and second columns of $$\tilde {M}(z)$$. Observe that (10) and (11) both depend on the parameters $$r_1,r_2,s_1,s_2,\tau$$.

2.1.2 Definition of the kernel $$\mathcal L_{{\rm tac}}$$.

In this section we recall the second way to define the tacnode kernel, via Airy resolvents [2, 3, 17, 22]. We will denote this kernel by $$\mathcal L_{{\rm tac}}(u,v)$$. Denote by $${\rm Ai}(x)$$ the standard Airy function and by   $$K_{{\rm Ai}}\left(x,y\right)= \frac{{\rm Ai}\left(x\right){\rm Ai}'\left(y\right)-{\rm Ai}'\left(x\right){\rm Ai}\left(y\right)}{x-y}=\int_{0}^{\infty} {\rm Ai}\left(x+z\right){\rm Ai}\left(y+z\right){\rm d} z$$ (12) the Airy kernel. For $$\sigma \in \mathbb {R}$$ let   $$K_{{\rm Ai},\sigma}\left(x,y\right) = \int_{0}^{\infty} {\rm Ai}\left(x+z+\sigma\right){\rm Ai}\left(y+z+\sigma\right){\rm d} z$$ (13) be the Airy kernel shifted by $$\sigma$$. Let $$\textbf {K}_{{\rm Ai},\sigma }$$ be the integral operator with kernel $$K_{{\rm Ai},\sigma }$$ acting on the function space $$L^2([0,\infty ))$$.
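The two expressions in (12) are easy to compare numerically. The following sketch (our own check; the sample arguments are arbitrary) verifies that the integral representation reproduces the rational one:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def k_airy_rational(x, y):
    # first expression in (12)
    ax, apx, _, _ = airy(x)
    ay, apy, _, _ = airy(y)
    return (ax*apy - apx*ay) / (x - y)

def k_airy_integral(x, y):
    # second expression in (12); Ai decays superexponentially on the right,
    # so the integral over [0, infinity) converges rapidly
    val, _ = quad(lambda z: airy(x + z)[0]*airy(y + z)[0], 0.0, np.inf)
    return val

x, y = 0.3, 1.1
assert abs(k_airy_rational(x, y) - k_airy_integral(x, y)) < 1e-8
```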
The action of the operator $$\textbf {K}_{{\rm Ai},\sigma }$$ on a function $$f$$ is defined by   $[\textbf{K}_{{\rm Ai},\sigma} f]\left(x\right) = \int_{0}^{\infty} K_{{\rm Ai},\sigma}\left(x,y\right)f\left(y\right){\rm d} y.$ Define the resolvent operator $$\textbf {R}_{{\rm Ai},\sigma }$$ on $$L^2([0,\infty ))$$ by   $$\textbf{R}_{{\rm Ai},\sigma} := \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} - \textbf{1} = \textbf{K}_{{\rm Ai},\sigma}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{K}_{{\rm Ai},\sigma},$$ (14) where $$\textbf {1}$$ stands for the identity operator on $$L^2([0,\infty ))$$. It is known that $$\textbf {R}_{{\rm Ai},\sigma }$$ is again an integral operator on $$L^2([0,\infty ))$$ and we denote its kernel by $$R_{\sigma }(x,y)$$:   $$[\textbf{R}_{{\rm Ai},\sigma} f] \left(x\right) = \int_{0}^{\infty} R_{\sigma}\left(x,y\right)f\left(y\right){\rm d} y.$$ (15) We will sometimes use the notation   $$\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \equiv \left(\textbf{1}+\textbf{R}_{{\rm Ai},\sigma}\right)\left(x,y\right) := \delta\left(x-y\right)+R_{\sigma}\left(x,y\right),$$ (16) with $$\delta (x-y)$$ the Dirac delta function at $$x=y$$ and $$R_{\sigma }$$ the Airy resolvent kernel (15). We will often use the symmetry of the kernel, $$R_{\sigma }(x,y)=R_{\sigma }(y,x)$$. Finally, we will abbreviate $$R_{\sigma }$$ by $$R$$ if the value of $$\sigma$$ is clear from the context.

In a series of papers [2, 3, 17, 22], Adler and coworkers study the tacnode problem using Airy resolvent expressions. We focus, in particular, on the paper by Ferrari–Vető [17] on the nonsymmetric tacnode. Hence the two touching groups of Brownian motions at the tacnode are allowed to have different sizes.
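The operator $$\textbf K_{{\rm Ai},\sigma}$$ and its resolvent (14) are straightforward to approximate numerically. The following Nyström-type sketch (grid size, truncation $$L$$, and the symmetrized weighting are our own choices, not from the paper) discretizes (13)–(14), checks the symmetry $$R_\sigma(x,y)=R_\sigma(y,x)$$, and computes the Fredholm determinant of $$\textbf 1-\textbf K_{{\rm Ai},0}$$, which is the Tracy–Widom GUE value $$F_2(0)$$ [28]:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import airy

def shifted_airy_matrix(sigma, n=60, L=12.0):
    """Nystrom discretization of K_{Ai,sigma} on [0,L], approximating [0,inf).
    The symmetrized weighting keeps the matrix (hence its resolvent) symmetric."""
    t, w = leggauss(n)               # Gauss-Legendre nodes/weights on [-1,1]
    x = 0.5*L*(t + 1.0)              # mapped to [0,L]
    w = 0.5*L*w
    X, Y = np.meshgrid(x + sigma, x + sigma, indexing='ij')
    ax, apx, _, _ = airy(X)
    ay, apy, _, _ = airy(Y)
    with np.errstate(divide='ignore', invalid='ignore'):
        K = (ax*apy - apx*ay)/(X - Y)              # Airy kernel, cf. (12)-(13)
    d = np.arange(n)
    K[d, d] = apx[d, d]**2 - X[d, d]*ax[d, d]**2   # diagonal limit of (12)
    sw = np.sqrt(w)
    return sw[:, None]*K*sw[None, :], x

K, x = shifted_airy_matrix(sigma=0.0)
R = K @ np.linalg.inv(np.eye(len(x)) - K)          # resolvent, cf. (14)

assert np.allclose(R, R.T)                         # symmetry R(x,y) = R(y,x)
det = np.linalg.det(np.eye(len(x)) - K)            # Fredholm determinant of 1-K
print(f"det(1 - K_Ai,0) approx {det:.4f}")
```

A shifted $$\sigma$$ is handled by the same routine; only the kernel arguments move.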
The paper [17] uses a parameter $$\lambda >0$$ that quantifies the amount of asymmetry, with $$\lambda =1$$ corresponding to the symmetric case treated in [2, 3, 22]. These papers also use a parameter $$\sigma >0$$ that controls the strength of interaction between the two groups of Brownian motions near the tacnode. In the present paper, we will denote the latter parameter by a capital $$\Sigma$$. The parameter $$\Sigma$$ has a similar effect on the tacnode kernel as the (suitably rescaled) temperature parameter used in [12, 13]. In order to be consistent with [14], we will use the notation $$\sigma$$ to denote   $$\sigma = \lambda^{1/2}\left(1+\lambda^{-1/2}\right)^{2/3}\Sigma.$$ (17) (What we call $$\sigma$$ was called $$\widetilde \sigma$$ in [2, 3, 17, 22].)

The papers [2, 3, 17, 22] consider a multitime extended tacnode kernel with time variables $$\tau _1,\tau _2$$. We restrict ourselves here to the single-time case $$\tau _1=\tau _2=:\tau$$. The discussion of the multitime case is postponed to Section 2.4. With the above notations, define the functions [17]   \begin{align} \begin{split} b_{\tau,z}\left(x\right) &= \exp\left(-\tau y+\tau^3/3\right){\rm Ai}\left(y\right),\quad \text{with } y:=z+Cx+\Sigma+\tau^2,\\ \tilde{b}_{\tau,z}\left(x\right) &= \exp\left(-\sqrt{\lambda}\tau \tilde{y}+\lambda\tau^3/3\right){\rm Ai}\left(\lambda^{1/6}\widetilde y\right),\quad \text{with } \tilde{y}:=-z+Cx+\sqrt{\lambda}\left(\Sigma+\tau^2\right), \end{split} \end{align} (18) where   $$C = \left(1+\lambda^{-1/2}\right)^{1/3}.$$ (19) The notations $$b_{\tau ,\sigma +\xi }^{\lambda }(x+\widetilde \sigma )$$ and $$b_{\lambda ^{1/3}\tau ,\lambda ^{2/3}\sigma -\lambda ^{1/6}\xi }^{\lambda ^{-1}}(x+\tilde {\sigma })$$ in [17] correspond to our notations $$\lambda ^{1/6}\tilde {b}_{-\tau ,\xi }(x)$$ and $$\lambda ^{-1/6}b_{-\tau ,\xi }(x)$$, respectively. Note that, in the symmetric case $$\lambda =1$$, we have $$\tilde {b}_{\tau ,z}(x)=b_{\tau ,-z}(x)$$.
The functions (18) also depend on $$\lambda ,\sigma$$ (recall (17)), but we do not show this in the notation. Next, we define the functions   \begin{align} \begin{split} \mathcal{A}_{\tau,z}\left(x\right) &= b_{\tau,z}\left(x\right) -\lambda^{1/6}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) {\rm d} y,\\ \tilde{\mathcal{A}}_{\tau,z}\left(x\right) &= \tilde{b}_{\tau,z}\left(x\right)- \lambda^{-1/6}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)b_{\tau,z}\left(y\right){\rm d} y. \end{split} \end{align} (20) Again we have $$\tilde {\mathcal {A}}_{\tau ,z}(x) = \mathcal {A}_{\tau ,-z}(x)$$ in the symmetric case $$\lambda =1$$, and we suppress the dependence on $$\lambda ,\sigma$$ from the notation.

We are now ready to introduce the tacnode kernel $$\mathcal L_{{\rm tac}}(u,v)=\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ of Ferrari–Vető [17], restricted to the single-time case $$\tau _1=\tau _2=\tau$$. (This kernel is called $$\mathcal L_{{\rm tac}}^{\lambda ,\sigma }(\tau _1,\xi _1,\tau _2,\xi _2)$$ in [17], with $$\xi _1=u$$, $$\xi _2=v$$, and $$\tau _1=\tau _2=\tau$$. Recall that we use $$\sigma$$ with a different meaning.) The kernel can be represented in several equivalent ways. We find it convenient to use the following representation.

Proposition 2.3 (The tacnode kernel). Fix $$\lambda >0$$. The tacnode kernel $$\mathcal L_{{\rm tac}}(u,v)$$ of Ferrari–Vető [17] in the single-time case $$\tau _1=\tau _2=\tau$$ can be written in the form   \begin{align} \begin{split} \mathcal L_{{\rm tac}}(u,v;\sigma,\tau) &= C\lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau,u}(x)\tilde{b}_{-\tau,v}(x)\,{\rm d} x\\ &\quad +C\int_0^{\infty}\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y, \end{split} \end{align} (21) with the notations (13) and (16)–(20).

Proposition 2.3 is proved in Section 4.1 and its multitime extended version is stated in Section 2.4.
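The symmetry $$\tilde{\mathcal A}_{\tau,z}(x)=\mathcal A_{\tau,-z}(x)$$ at $$\lambda=1$$ follows directly from the definitions, and can also be checked numerically. A small sketch of (17)–(20) (our own; the sample values of $$\Sigma,\tau,z,x$$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

Ai = lambda t: airy(t)[0]

lam = 1.0                                          # symmetric case
C = (1 + lam**-0.5)**(1/3)                         # (19)
Sigma = 0.5
sigma = lam**0.5*(1 + lam**-0.5)**(2/3)*Sigma      # (17)

def b(tau, z, x):                                  # (18), first function
    y = z + C*x + Sigma + tau**2
    return np.exp(-tau*y + tau**3/3)*Ai(y)

def b_tilde(tau, z, x):                            # (18), second function
    yt = -z + C*x + np.sqrt(lam)*(Sigma + tau**2)
    return np.exp(-np.sqrt(lam)*tau*yt + lam*tau**3/3)*Ai(lam**(1/6)*yt)

def A(tau, z, x):                                  # (20), first function
    I, _ = quad(lambda y: Ai(x + y + sigma)*b_tilde(tau, z, y), 0, np.inf)
    return b(tau, z, x) - lam**(1/6)*I

def A_tilde(tau, z, x):                            # (20), second function
    I, _ = quad(lambda y: Ai(x + y + sigma)*b(tau, z, y), 0, np.inf)
    return b_tilde(tau, z, x) - lam**(-1/6)*I

tau, z, x = 0.2, 0.7, 0.4
assert abs(b_tilde(tau, z, x) - b(tau, -z, x)) < 1e-12    # symmetry of (18)
assert abs(A_tilde(tau, z, x) - A(tau, -z, x)) < 1e-8     # symmetry of (20)
```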
The proposition was obtained in the symmetric case $$\lambda =1$$ by Adler et al. [3, Theorem 1.2(i)]. Incidentally, we note that a kernel of the form (21) allows an efficient numerical evaluation of its gap probabilities, as was recently shown in [9]. In the above proposition we use the notation (16). Hence the double integral can be rewritten as   \begin{align*} &\int_0^{\infty}\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y \\ &\quad = \int_0^{\infty}\int_0^{\infty}R_{\sigma}(x,y) \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y+\int_0^{\infty} \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(x)\,{\rm d} x. \end{align*}

2.2 Connection between the tacnode kernels $$K_{{\rm tac}}$$ and $$\mathcal L_{{\rm tac}}$$

Now we state the first main theorem of this paper.

Theorem 2.4 (Connection between kernels). Fix $$\lambda >0$$. The kernel $$\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ in (21) equals the RH kernel $$K_{{\rm tac}}(u,v;r_1,r_2,s_1,s_2,\tau )$$ in (10) with the parameters   $$r_1=\lambda^{1/4},\quad r_2=1,\quad s_1=\tfrac{1}{2}\lambda^{3/4}\left(\Sigma+\tau^2\right), \quad s_2=\tfrac{1}{2}\left(\Sigma+\tau^2\right),$$ (22) where we recall (17).

Theorem 2.4 is proved in Section 4.4.

2.3 Derivative of the tacnode kernel: a rank-2 property

To prove Theorem 2.4 we will take the derivatives of the kernels $$K_{{\rm tac}}$$ and $$\mathcal L_{{\rm tac}}$$ with respect to a certain parameter, and prove that they are equal. The derivative will be a rank-2 kernel. First we discuss this for the kernel $$K_{{\rm tac}}$$.

2.3.1 Derivative of the kernel $$K_{{\rm tac}}$$.

Recall the formula (10) for the tacnode kernel $$K_{{\rm tac}}$$. This kernel has an “integrable” form, due to the factor $$u-v$$ in the denominator. Interestingly, this factor cancels when taking the derivative with respect to $$s_1$$ or $$s_2$$. This is the content of the next theorem.
To state the theorem, we parameterize   $$s_1 =: \sigma_1 s,\quad s_2 =: \sigma_2 s,$$ (23) where $$\sigma _1,\sigma _2$$ are fixed and $$s$$ is variable. In the symmetric case where $$s_1=s_2=:s$$, we could simply take $$\sigma _1=\sigma _2=1$$. We also consider $$r_1,r_2>0$$ to be fixed. Then we write $$K_{{\rm tac}}(u,v;s,\tau )$$, $$\textbf {p}(z;s,\tau )$$, etc., to denote the dependence on the two parameters $$s$$ and $$\tau$$.

Theorem 2.5 (Derivative of tacnode kernel $$K_{{\rm tac}}$$). With the parametrization (23), the kernel (10) satisfies   $$\frac{\partial}{\partial s} K_{{\rm tac}}\left(u,v;s,\tau\right) = -\frac{1}{\pi}\left(\sigma_1 p_1\left(u;s,\tau\right)p_1\left(v;s,-\tau\right)+\sigma_2 p_2\left(u;s,\tau\right)p_2\left(v;s,-\tau\right)\right),$$ (24) where $$p_j$$, $$j=1,\ldots ,4$$, denotes the $$j$$th entry of the vector $$\textbf {p}$$ in (11). Consequently, if $$\sigma _1,\sigma _2>0$$ then   $$K_{{\rm tac}}\left(u,v;s,\tau\right) = \frac{1}{\pi}\int_s^{\infty}\left(\sigma_1 p_1\left(u;\widetilde s,\tau\right)p_1\left(v;\widetilde s,-\tau\right)+\sigma_2 p_2\left(u;\widetilde s,\tau\right)p_2\left(v;\widetilde s,-\tau\right)\right){\rm d} \widetilde s.$$ (25)

Theorem 2.5 is proved in Section 3. Note that the right-hand side of (24) is a rank-2 kernel. In the proof of Theorem 2.4 we will use Theorem 2.5 with specific parameters $$\sigma _1,\sigma _2$$ (see (84)).

Remark 2.1. Formulas (24)–(25) have an analogue for the kernel $$K_{\Psi }(u,v)$$ associated with the $$2\times 2$$ Flaschka–Newell RH matrix $$\Psi (z)$$, see [11, Eq. (1.22)]. The kernel $$K_{\Psi }(u,v)$$ occurs in Hermitian random matrix theory when the limiting eigenvalue density vanishes quadratically at an interior point of its support [10, 11].

2.3.2 Derivative of the kernel $$\mathcal L_{{\rm tac}}$$.

Next we consider the derivative of the tacnode kernel $$\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ with respect to the parameter $$\sigma$$.
Theorem 2.6 (Derivative of tacnode kernel $$\mathcal L_{{\rm tac}}$$). Fix $$\lambda >0$$. The kernel (21) satisfies   $$\frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) = -C^{-2}\left(\lambda^{1/3}\hat{p}_1\left(u;\sigma,\tau\right)\hat{p}_1\left(v;\sigma,-\tau\right)+\lambda^{-1/2}\hat{p}_2\left(u;\sigma,\tau\right)\hat{p}_2\left(v;\sigma,-\tau\right)\right)$$ (26) and consequently   $$\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) = C^{-2}\int_{\sigma}^{\infty}\left(\lambda^{1/3}\hat{p}_1\left(u;s,\tau\right)\hat{p}_1\left(v;s,-\tau\right)+\lambda^{-1/2}\hat{p}_2\left(u;s,\tau\right)\hat{p}_2\left(v;s,-\tau\right)\right){\rm d} s.$$ (27) Here we denote $$C=(1+\lambda ^{-1/2})^{1/3}$$ and   \begin{align} \begin{split} \hat{p}_1\left(z;\sigma,\tau\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) \tilde{\mathcal{A}}_{\tau,z}\left(x\right){\rm d} x,\\ \hat{p}_2\left(z;\sigma,\tau\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right)\mathcal{A}_{\tau,z}\left(x\right){\rm d}x, \end{split} \end{align} (28) with the notation (16).

Theorem 2.6 is proved in Section 4.3 and its multitime extended version is stated in Section 2.4. In the proof we will use certain functions (53) that were introduced by Tracy–Widom [28]. We will also obtain some alternative representations for (28), see Lemma 4.3. Note that the right-hand side of (26) is again a rank-2 kernel. This is a result of independent interest.

2.4 The multitime case

Our results for the Ferrari–Vető tacnode kernel can be readily generalized to the multitime case, corresponding to two different times $$\tau _1,\tau _2$$. It suffices to replace all the subscripts $$\tau$$ and $$-\tau$$ by $$\tau _1$$ and $$-\tau _2$$, respectively. This yields the following generalization of Proposition 2.3.

Proposition 2.7 (Extended tacnode kernel). Fix $$\lambda >0$$.
The multitime extended tacnode kernel $$\mathcal L_{{\rm tac}}$$ of Ferrari–Vető [17] can be written in the form   \begin{align} \mathcal L_{{\rm tac}}(u,v;\sigma,\tau_1,\tau_2) &= -\textbf{1}_{\tau_1<\tau_2}\frac{1}{\sqrt{4\pi(\tau_2-\tau_1)}}\exp \left(-\frac{(v-u)^2}{4(\tau_2-\tau_1)}\right)\nonumber\\ &\quad +C\lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau_1,u}(x)\tilde{b}_{-\tau_2,v}(x)\,{\rm d} x\nonumber\\ &\quad +C\int_0^{\infty}\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \mathcal{A}_{\tau_1,u}(x)\mathcal{A}_{-\tau_2,v}(y)\,{\rm d} x\,{\rm d} y, \end{align} (29) with the notations (13) and (16)–(20). This kernel is called $$\mathcal L_{{\rm tac}}^{\lambda ,\sigma }(\tau _1,\xi _1,\tau _2,\xi _2)$$ in [17] with $$\xi _1=u$$ and $$\xi _2=v$$. Recall that we use $$\sigma$$ with a different meaning. The rank-2 formula in Theorem 2.6 generalizes as follows. Theorem 2.8 (Derivative of extended tacnode kernel). Fix $$\lambda >0$$. The kernel (29) satisfies   $$\frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau_1,\tau_2\right) = -C^{-2}\left(\lambda^{1/3}\hat{p}_1\left(u;\sigma,\tau_1\right) \hat{p}_1\left(v;\sigma,-\tau_2\right)+\lambda^{-1/2}\hat{p}_2\left(u;\sigma,\tau_1\right) \hat{p}_2\left(v;\sigma,-\tau_2\right)\right)$$ (30) and consequently   \begin{align} &\mathcal L_{{\rm tac}}(u,v;\sigma,\tau_1,\tau_2)\nonumber\\ &\quad = -\textbf{1}_{\tau_1<\tau_2}\frac{1}{\sqrt{4\pi(\tau_2-\tau_1)}} \exp\left(-\frac{(v-u)^2}{4(\tau_2-\tau_1)}\right)\nonumber\\ &\qquad +C^{-2}\int_{\sigma}^{\infty}\left(\lambda^{1/3}\hat{p}_1(u;s,\tau_1) \hat{p}_1(v;s,-\tau_2)+\lambda^{-1/2}\hat{p}_2(u;s,\tau_1)\hat{p}_2(v;s,-\tau_2)\right){\rm d} s. \end{align} (31) Here we again use the notations $$C=(1+\lambda ^{-1/2})^{1/3}$$ and (28). As an offshoot of this theorem, we obtain an RH expression for the multitime extended tacnode kernel. 
Indeed, the functions $$\hat {p}_1$$ and $$\hat {p}_2$$ can be expressed in terms of the top left $$2\times 2$$ block of the RH matrix $$M(z)$$ on account of (11), (22), and (83). To the best of our knowledge, this is the first time that an RH expression is given for a multitime extended kernel. The above results on the multitime extended tacnode kernel can be proved exactly as in the single-time case. We omit the details. In the remainder of this paper we will not return to the multitime case. 2.5 Airy resolvent formulas for the $$4\times 4$$ RH matrix In the proof of Theorem 2.4 we will need Airy resolvent formulas for the entries in the first two columns of the RH matrix $$\tilde {M}(z) = \tilde {M}(z;r_1,r_2,s_1,s_2,\tau )$$. The existence of such formulas could be anticipated by comparing Theorems 2.5 and 2.6. The formulas below will be stated for general values of $$r_1,r_2,s_1,s_2,\tau$$. The reader who is interested only in the symmetric case $$r_1=r_2$$ and $$s_1=s_2$$ can skip the next paragraph and move directly to (34). For general $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$, we define the constants   \begin{align} \begin{split} C &=\left(r_1^{-2}+r_2^{-2}\right)^{1/3},\\ D &=\sqrt{\frac{r_1}{r_2}}\exp\left(\frac{r_1^4-r_2^4}{3}\tau^3 +2\left(r_2s_2-r_1s_1\right)\tau\right),\\ \sigma &=C^{-1}\left(2\left(\frac{s_1}{r_1}+\frac{s_2}{r_2}\right)-\left(r_1^2+r_2^2\right)\tau^2\right), \end{split} \end{align} (32) and the functions   \begin{align} \begin{split} b_z\left(x\right) &=\sqrt{2\pi}r_2^{1/6}\exp\left(-r_2^2 \tau \left(z+C x\right)\right){\rm Ai}\left(r_2^{2/3}\left(z+Cx+2\frac{s_2}{r_2}\right)\right),\\ \tilde{b}_z\left(x\right) &= \sqrt{2\pi}r_1^{1/6}\exp\left(r_1^2 \tau \left(z-C x\right)\right){\rm Ai}\left(r_1^{2/3}\left(-z+Cx+2\frac{s_1}{r_1}\right)\right). \end{split} \end{align} (33) The above definitions of $$C,\sigma$$ are consistent with our earlier formulas (19) and (17) under the identification (22). 
Similarly, the formulas for $$b_z(x),\tilde {b}_z(x)$$ reduce to the ones in (18) up to certain multiplicative constants (independent of $$z,x$$). In the symmetric case where $$r_1=r_2=1$$, $$s_1=s_2=:s$$, the above definitions simplify to   \begin{align} \begin{split} C&=2^{1/3},\\ D &=1,\\ \sigma &= 2^{5/3}s -2^{2/3}\tau^2, \\ b_z\left(x\right) &= \sqrt{2\pi}\exp\left(-\tau \left(z+2^{1/3} x\right)\right){\rm Ai}\left(z+2^{1/3}x+2s\right),\\ \tilde{b}_z\left(x\right)&=b_{-z}\left(x\right). \end{split} \end{align} (34) We are now ready to state the following result. Theorem 2.9 (Airy resolvent formulas for the $$4\times 4$$ RH matrix). Denote by $$\tilde {M}_{j,k}(z)$$ the $$(j,k)$$ entry of the RH matrix $$\tilde {M}(z)=\tilde {M}(z;r_1,r_2,s_1,s_2,\tau )$$. Then the entries in the top left $$2\times 2$$ block of $$\tilde {M}(z)$$ can be expressed by the formulas   \begin{align} \begin{split} \tilde{M}_{1,1}\left(z\right) &= \int_{0}^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) \tilde{b}_z\left(x\right){\rm d} x,\\ \tilde{M}_{2,1}\left(z\right)&= -D\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_z\left(y\right) {\rm d} x\,{\rm d} y,\\ \tilde{M}_{1,2}\left(z\right) &=-D^{-1}\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right){\rm Ai}\left(x+y+\sigma\right)b_z\left(y\right) {\rm d} x\,{\rm d} y,\\ \tilde{M}_{2,2}\left(z\right) &=\int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} \left(x,0\right)b_z\left(x\right) {\rm d} x, \end{split} \end{align} (35) where we use the notations (16) and (32)–(33) (or (34) in the symmetric case). The entries in the bottom left $$2\times 2$$ block of $$\tilde {M}(z)$$ can be obtained by combining the above expressions with equations (41)–(42). Theorem 2.9 is proved in Section 5. 
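As a quick consistency check of the notation (this computation is ours and not part of the statement), the symmetric-case values in (34) follow directly from the general definitions (32) by setting $$r_1=r_2=1$$ and $$s_1=s_2=s$$:

```latex
C = \left(1^{-2}+1^{-2}\right)^{1/3} = 2^{1/3}, \qquad
D = \sqrt{\tfrac{1}{1}}\,\exp\!\left(\tfrac{1-1}{3}\tau^{3}+2(s-s)\tau\right) = 1,
\qquad
\sigma = C^{-1}\left(2(s+s)-2\tau^{2}\right)
       = 2^{-1/3}\left(4s-2\tau^{2}\right)
       = 2^{5/3}s-2^{2/3}\tau^{2},
```

in agreement with (34).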
The proof makes heavy use of the Tracy–Widom functions defined in Section 4.2. Corollary 2.10. The first two entries of the vector $$\textbf {p}(z)$$ in (11) are given by   \begin{align} \begin{split} p_1\left(z\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right) \tilde{\mathcal{A}}_{z}\left(x\right){\rm d} x, \\ p_2\left(z\right)& = \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right)\mathcal{A}_{z}\left(x\right){\rm d} x, \end{split} \end{align} (36) where   \begin{align} \begin{split} \mathcal{A}_{z}\left(x\right) &= b_z\left(x\right) -D\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{z}\left(y\right) {\rm d} y,\\ \tilde{\mathcal{A}}_{z}\left(x\right) &= \tilde{b}_{z}\left(x\right)- D^{-1}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)b_{z}\left(y\right){\rm d} y. \end{split} \end{align} (37) 2.6 Differential equations for the columns of $$M(z)$$ In the proof of Theorem 2.9, we will use the differential equations for the columns of the RH matrix $$\tilde {M}(z)$$ (or $$M(z)$$). Interestingly, the coefficients in these differential equations contain the Hastings–McLeod solution $$q(x)$$ to Painlevé II. We also need the associated Hamiltonian  $$u\left(x\right) := \left(q'\left(x\right)\right)^2-xq^2\left(x\right)-q^4\left(x\right).$$ (38) Proposition 2.11 (System of differential equations). (a) Let the vector $$\textbf {m}(z)=\textbf {m}(z;r_1,r_2,s_1,s_2,\tau )$$ be one of the columns of $$\tilde {M}(z)$$, or a fixed linear combination of them, and denote its entries by $$m_j(z)$$, $$j=1,\ldots ,4$$. 
Then with the prime denoting the derivative with respect to $$z$$, we have   \begin{align} r_1^{-2} m_1'' &= 2\tau m_1'+C^2D^{-1} q\left(\sigma\right)m_2'\nonumber\\ &\quad + \left[C q^2\left(\sigma\right)- z+2s_1/r_1 -r_1^2\tau^2\right] m_1 -\left[CD^{-1} q'\left(\sigma\right)\right]m_2, \end{align} (39)  \begin{align} \nonumber r_2^{-2} m_2'' &= -C^2D q\left(\sigma\right) m_1'- 2\tau m_2'\\ &\quad +\left[C q^2\left(\sigma\right)+z+2s_2/r_2-r_2^2\tau^2\right] m_2 -\left[CD q'\left(\sigma\right)\right]m_1, \end{align} (40)  \begin{align} r_1im_3 &= m_1'-\left(C^{-1} u\left(\sigma\right)-s_1^2+r_1^2\tau\right) m_1-C^{-1}D^{-1}q\left(\sigma\right) m_2, \end{align} (41)  \begin{align} r_2im_4 &= m_2'+C^{-1}D q\left(\sigma\right) m_1+\left(C^{-1}u\left(\sigma\right)-s_2^2+r_2^2\tau\right) m_2, \end{align} (42) where the constants $$C,D,\sigma$$ are defined in (32) and we denote by $$q,u$$ the Hastings–McLeod function and the associated Hamiltonian. (b) Conversely, any vector $$\textbf {m}(z)$$ that solves (39)–(42) is a fixed (independent of $$z$$) linear combination of the columns of $$\tilde {M}(z)$$. Proposition 2.11 is proved in Section 6.4 with the help of Lax pair calculations. There are similar differential equations with respect to the parameters $$s_1$$, $$s_2$$, or $$\tau$$ but they will not be needed. 2.7 Outline of the paper The remainder of this paper is organized as follows. In Section 3 we establish Theorem 2.5 about the derivative of the RH tacnode kernel. In Section 4 we prove Proposition 2.3 and Theorems 2.4 and 2.6 about the Ferrari–Vető tacnode kernel. In Section 5 we prove Theorem 2.9 about the Airy resolvent formulas for the entries of the RH matrix $$\tilde {M}(z)$$. Finally, in Section 6 we use Lax pair calculations to prove Proposition 2.11. 3 Proof of Theorem 2.5 Throughout the proof we use the parametrization $$s_j=\sigma _j s$$ with $$\sigma _j$$ fixed, $$j=1,2$$. We also assume $$r_1,r_2>0$$ to be fixed. 
The RH matrix $$\tilde {M}(z)=\tilde {M}(z;s,\tau )$$ satisfies the differential equation   $$\frac{\partial}{\partial s}\tilde{M}\left(z\right) =V\left(z\right)\tilde{M}\left(z\right),$$ (43) for a certain coefficient matrix $$V(z)=V(z;s,\tau )$$. This is described in more detail in Section 6.3. At this moment, we only need to know that   $$V\left(z\right)= -2iz\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_2 & 0 & 0 \end{pmatrix}+V\left(0\right),$$ (44) where the matrix $$V(0)$$ is independent of $$z$$. We will also need the symmetry relation   $$\tilde{M}^{-1}\left(z;s,\tau\right) = K^{-1} \tilde{M}^T\left(z;s,-\tau\right) K,$$ (45) where the superscript T denotes the transpose and   $$K = \begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}$$ (46) with $$I_2$$ denoting the identity matrix of size $$2\times 2$$. This symmetry relation is a consequence of Lemma 6.3. We are now ready to prove Theorem 2.5. Abbreviating $$\tilde {M}(z):=\tilde {M}(z;s,\tau )$$ for the moment, we start by calculating   \begin{align*} \frac{\partial}{\partial s}\left[\tilde{M}^{-1}\left(v\right) \tilde{M}\left(u\right)\right] &=\tilde{M}^{-1}\left(v\right)\left(\frac{\partial}{\partial s}\left[\tilde{M}\left(u\right)\right] \tilde{M}^{-1}\left(u\right)-\frac{\partial}{\partial s}\left[\tilde{M}\left(v\right)\right] \tilde{M}^{-1}\left(v\right)\right)\tilde{M}\left(u\right) \\ &=\tilde{M}^{-1}\left(v\right)\left(V\left(u\right)-V\left(v\right) \right) \tilde{M}\left(u\right) \\ &=-2i\left(u-v\right)\tilde{M}^{-1}\left(v\right)\begin{pmatrix} 0 & 0\\ \Sigma & 0 \end{pmatrix}\tilde{M}\left(u\right), \end{align*} with $$\Sigma :={\rm diag}(\sigma _1,\sigma _2)$$, where the last equality follows by (44). 
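The cancellation used in the last equality above can be made concrete with a small numerical sketch (an illustration of ours, not from the paper): for any matrix of the form (44), the $$z$$-independent part $$V(0)$$ drops out of the difference $$V(u)-V(v)$$, leaving only the lower-left rank-2 block. The values of $$\sigma_1,\sigma_2$$ and the entries of $$V(0)$$ below are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma1, sigma2 = 0.7, 1.3          # placeholder values for the fixed parameters
u, v = 0.4 + 0.2j, -1.1 + 0.5j     # arbitrary complex arguments

# The constant block in (44): sigma_1, sigma_2 in the lower-left corner.
E = np.zeros((4, 4), dtype=complex)
E[2, 0], E[3, 1] = sigma1, sigma2

# Stand-in for the z-independent matrix V(0).
V0 = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

def V(z):
    # V(z) = -2iz E + V(0), cf. (44)
    return -2j * z * E + V0

# V(u) - V(v) = -2i(u - v) E: the part V(0) cancels, leaving the
# lower-left block that produces the rank-2 structure in (24).
assert np.allclose(V(u) - V(v), -2j * (u - v) * E)
print("V(u) - V(v) equals -2i(u - v) E")
```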
With the help of the above calculation we now obtain   \begin{align*} \frac{\partial}{\partial s} K_{{\rm tac}}\left(u,v;s,\tau\right) &= \frac{\partial}{\partial s} \frac{1}{2\pi i\left(u-v\right)} \left(0\quad 0\quad 1\quad 1\right)\tilde{M}^{-1}\left(v;s,\tau\right) \tilde{M}\left(u;s,\tau\right) \left(1\quad 1\quad 0\quad 0\right)^T\\ &=-\frac{1}{\pi}\left(0\quad 0\quad 1\quad 1\right) \tilde{M}^{-1}\left(v;s,\tau\right)\begin{pmatrix} 0 & 0\\ \Sigma & 0 \end{pmatrix}\tilde{M}\left(u;s,\tau\right)\left(1\quad 1\quad 0\quad 0\right)^T\\ &=-\frac{1}{\pi}\left(1\quad 1\quad 0\quad 0\right) \tilde{M}^{T}\left(v;s,-\tau\right)\begin{pmatrix} \Sigma & 0\\ 0 & 0 \end{pmatrix}\tilde{M}\left(u;s,\tau\right)\left(1\quad 1\quad 0\quad 0\right)^T\\ &=-\frac{1}{\pi}\textbf{p}^T\left(v;s,-\tau\right) \begin{pmatrix} \Sigma & 0 \\ 0&0 \end{pmatrix}\textbf{p}\left(u;s,\tau\right), \end{align*} where the third equality follows from (45) and the fourth one from (11). This proves (24). By integrating this equality, we obtain (25), due to the fact that the entries of $$\textbf {p}(z;s,\tau )$$ go to zero for $$s\to +\infty$$ if $$\sigma _1,\sigma _2>0$$. The latter fact follows from [14, Section 3] if $$\tau =0$$ and is established similarly for general $$\tau \in \mathbb {R}$$. This ends the proof of Theorem 2.5. 4 Proofs of Proposition 2.3 and Theorems 2.4 and 2.6 4.1 Proof of Proposition 2.3 Denote by $$\textbf {A}_{\sigma }$$ the operator on $$L^2([0,\infty ))$$ that acts on the function $$f\in L^2([0,\infty ))$$ by the rule   $$\left[\textbf{A}_{\sigma} f\right]\left(x\right) = \int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right) f\left(y\right){\rm d} y.$$ (47) Observe that   $$\textbf{A}_{\sigma}^2 = \textbf{K}_{{\rm Ai},\sigma},$$ (48) on account of (13). Now the kernel $$\mathcal L_{{\rm tac}}(u,v)$$ in [17, Eq. 
(1.5)] in the single time case $$\tau _1=\tau _2=\tau$$ has the form   \begin{align} C^{-1}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right)&= \int_0^{\infty}b_{\tau,u}\left(x\right)b_{-\tau,v}\left(x\right){\rm d} x\nonumber \\ &\quad +\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\left[\textbf{A}_{\sigma} b_{\tau,u}\right]\left(x\right)\left[\textbf{A}_{\sigma} b_{-\tau,v}\right]\left(y\right){\rm d} x{\rm d} y\nonumber \\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} b_{\tau,u}\right]\left(x\right)\tilde{b}_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y\nonumber\\ &\quad +\lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right) \tilde{b}_{-\tau,v}\left(x\right){\rm d} x\nonumber\\ &\quad +\lambda^{1/3}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right]\left(x\right) \left[\textbf{A}_{\sigma} \tilde{b}_{-\tau,v}\right]\left(y\right){\rm d} x\,{\rm d} y\nonumber\\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right]\left(x\right)b_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y, \end{align} (49) where again $$C=(1+\lambda ^{-1/2})^{1/3}$$ and the resolvent kernel notation (16) is used. 
Note that the notations $$\sigma$$, $$\widetilde \sigma$$, $$b_{\tau ,\sigma +\xi }^{\lambda }(x+\widetilde \sigma )$$, $$B_{\tau ,\sigma +\xi }^{\lambda }(x+\widetilde \sigma )$$, $$b_{\lambda ^{1/3}\tau ,\lambda ^{2/3}\sigma -\lambda ^{1/6}\xi }^{\lambda ^{-1}}(x+\widetilde \sigma )$$, $$B_{\lambda ^{1/3}\tau ,\lambda ^{2/3}\sigma -\lambda ^{1/6}\xi }^{\lambda ^{-1}}(x+\widetilde \sigma )$$ in [17] correspond to our notations $$\Sigma$$, $$\sigma$$, $$\lambda ^{1/6}\tilde {b}_{-\tau ,\xi }(x)$$, $$[\textbf {A}_{\sigma } b_{-\tau ,\xi }](x)$$, $$\lambda ^{-1/6}b_{-\tau ,\xi }(x)$$, $$[\textbf {A}_{\sigma } \tilde {b}_{-\tau ,\xi }](x)$$, respectively. The notation $$K_{{\rm Ai}}^{(-\tau ,\tau )}(\sigma +\xi _1,\sigma +\xi _2)$$ in [17, Eq. (1.11)] corresponds to the first term in the right-hand side of (49). The second term in the right-hand side of (49) equals   \begin{align} &\int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \left[\textbf{A}_{\sigma} b_{\tau,u}\right](x)\left[\textbf{A}_{\sigma}b_{-\tau,v}\right](y)\,{\rm d} x\,{\rm d} y \nonumber\\ &\quad = \int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) b_{\tau,u}(x)b_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y-\int_0^{\infty} b_{\tau,u}(x)b_{-\tau,v}(x)\,{\rm d} x, \end{align} (50) where we used that   $\textbf{A}_{\sigma}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{A}_{\sigma} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{K}_{{\rm Ai},\sigma} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}-\textbf{1},$ on account of (48). 
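The operator identity just used can be checked numerically. The following sketch (an illustration of ours, not part of the proof) discretizes $$\textbf{A}_{\sigma}$$ on a truncated grid with step-function quadrature; at the matrix level $$\textbf{K}_{{\rm Ai},\sigma}=\textbf{A}_{\sigma}^2$$ then holds exactly, and the resolvent identity follows since $$\textbf{A}_{\sigma}$$ commutes with its own square. The values of $$\sigma$$, the grid step, and the truncation point are arbitrary choices.

```python
import numpy as np
from scipy.special import airy

sigma = 0.5                     # illustrative value of the parameter
h, L = 0.05, 15.0               # grid step and truncation of [0, infinity)
x = np.arange(0.0, L, h)

# Discretized operator A_sigma with kernel Ai(x + y + sigma), cf. (47);
# the factor h is the quadrature weight. airy(.)[0] is the Ai function.
A = airy(x[:, None] + x[None, :] + sigma)[0] * h

K = A @ A                       # K_{Ai,sigma} = A_sigma^2, cf. (48) and (13)
I = np.eye(len(x))
R = np.linalg.inv(I - K)        # discretized resolvent (1 - K_{Ai,sigma})^{-1}

# A_sigma (1 - K)^{-1} A_sigma = (1 - K)^{-1} K = (1 - K)^{-1} - 1,
# because A_sigma commutes with K_{Ai,sigma} = A_sigma^2.
assert np.max(np.abs(A @ R @ A - (R - I))) < 1e-10
print("resolvent identity verified on the discretized operator")
```

The same matrix identity, sandwiched between discretized vectors, reproduces the rearrangement (50).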
Next, the third term in the right-hand side of (49) can be written as   \begin{align} &-\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \left[\textbf{A}_{\sigma} b_{\tau,u}\right](x)\tilde{b}_{-\tau,v}(y)\,{\rm d} x\,{\rm d} y\nonumber\\ &\quad= -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} (\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) b_{\tau,u}(x)\left[\textbf{A}_{\sigma}\tilde{b}_{-\tau,v}\right](y)\,{\rm d} x\,{\rm d} y, \end{align} (51) since the operators $$(\textbf {1}-\textbf {K}_{{\rm Ai},\sigma })^{-1}$$ and $$\textbf {A}_{\sigma }$$ commute. Inserting (50)–(51) for the second and third term in the right-hand side of (49), we obtain   \begin{align} C^{-1}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right)&= \lambda^{1/3}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right)\tilde{b}_{-\tau,v}\left(x\right){\rm d} x\nonumber \\ &\quad +\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) b_{\tau,u}\left(x\right)b_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y\nonumber \\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) b_{\tau,u}\left(x\right)\left[\textbf{A}_{\sigma}\tilde{b}_{-\tau,v}\right]\left(y\right){\rm d} x\,{\rm d} y\nonumber \\ &\quad +\lambda^{1/3}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right] \left(x\right)\left[\textbf{A}_{\sigma} \tilde{b}_{-\tau,v}\right]\left(y\right){\rm d} x\,{\rm d} y\nonumber\\ &\quad -\lambda^{1/6}\int_0^{\infty}\int_0^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma} \tilde{b}_{\tau,u}\right]\left(x\right)b_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y, \end{align} (52) which is equivalent to the desired result (21). 
4.2 Tracy–Widom functions and their properties In some of the remaining proofs, we need the functions   \begin{align} \begin{split} Q\left(x\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right){\rm Ai}\left(y+\sigma\right){\rm d} y\\ P\left(x\right) &= \int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right){\rm Ai}'\left(y+\sigma\right){\rm d} y, \end{split} \end{align} (53) with the usual notation (16). Note that $$Q,P$$ depend on $$\sigma$$ but we omit this dependence from the notation. The functions $$Q,P$$ originate from the seminal paper of Tracy–Widom [28]. Below we list some of their properties. These properties can all be found in [28]; see also [5, Section 3.8] for a textbook treatment. One should take into account that our function $$R(x,y)=R_{\sigma }(x,y)$$ equals the one in [28], [5, Section 3.8] at the shifted arguments $$x+\sigma$$, $$y+\sigma$$, and similarly for $$Q(x)$$ and $$P(x)$$. Lemma 4.1. 
The functions $$Q,P$$ and $$R=R_{\sigma }$$ satisfy the differential equations   \begin{align} \left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)R\left(x,y\right) &= R\left(x,0\right)R\left(0,y\right)-Q\left(x\right)Q\left(y\right), \end{align} (54)  \begin{align} \frac{\partial}{\partial \sigma} R\left(x,y\right) &= -Q\left(x\right)Q\left(y\right), \end{align} (55)  \begin{align} Q'\left(x\right) &= P\left(x\right)+q R\left(x,0\right)- u Q\left(x\right), \end{align} (56)  \begin{align} P'\left(x\right) &= \left(x+\sigma-2v\right)Q\left(x\right)+p R\left(x,0\right)+uP\left(x\right), \end{align} (57) where   \begin{align} q&=Q\left(0\right) \end{align} (58)  \begin{align} p&=P\left(0\right) \end{align} (59)  \begin{align} u&=\int_{0}^{\infty} Q\left(x\right){\rm Ai}\left(x+\sigma\right){\rm d} x \end{align} (60)  \begin{align} v&=\int_{0}^{\infty} Q\left(x\right){\rm Ai}'\left(x+\sigma\right){\rm d} x =\int_{0}^{\infty} P\left(x\right){\rm Ai}\left(x+\sigma\right){\rm d} x. \end{align} (61) Note that $$q,p,u,v$$ are all functions of $$\sigma$$, although we do not show this in the notation. They satisfy the following differential equations with respect to $$\sigma$$ [28], [5, Section 3.8]   \begin{align} q' &= p-qu \end{align} (62)  \begin{align} p' &= \sigma q+pu-2qv \end{align} (63)  \begin{align} u' &= -q^2 \end{align} (64)  \begin{align} v' &= -pq. \end{align} (65) It is known that $$q=q(\sigma )$$ is the Hastings–McLeod solution to the Painlevé II equation (1)–(2). Moreover, $$u=u(\sigma )$$ is the Hamiltonian (38), and   $$2v = u^2-q^2.$$ (66) Finally, we establish the following lemma. Lemma 4.2. 
For any function $$b$$ on $$[0,\infty )$$ we have   $$\int_{0}^{\infty}Q\left(x\right)b\left(x\right){\rm d} x = \int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,0\right){\rm Ai}\left(x+y+\sigma\right) b\left(y\right){\rm d} x\,{\rm d} y$$ (67) and   $$\int_0^{\infty}\int_{0}^{\infty}Q\left(x\right){\rm Ai}\left(x+y+\sigma\right)b\left(y\right){\rm d} x\,{\rm d} y =\int_0^{\infty}R\left(x,0\right) b\left(x\right){\rm d} x,$$ (68) with the usual notations $$R=R_{\sigma }$$ and (16). Proof. The lemma is rather obvious from the point of view of integral operators, but we include a proof for convenience. Recall the operator $$\textbf {A}_{\sigma }$$ defined in (47)–(48) and let $$\delta _0$$ denote the Dirac delta function at 0, in the sense that   $\int_0^\infty \delta_0\left(x\right) f\left(x\right){\rm d} x = f\left(0\right)$ for any function $$f$$ right continuous at 0. We have   \begin{align*} \int_{0}^{\infty}Q\left(x\right)b\left(x\right){\rm d} x &=\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right) \left[\textbf{A}_{\sigma}\delta_0\right]\left(y\right) b\left(x\right){\rm d} x\,{\rm d} y \\ &=\int_{0}^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\delta_0\left(y\right) \left[\textbf{A}_{\sigma} b\right]\left(x\right){\rm d} x\,{\rm d} y, \end{align*} where we used that the operators $$\textbf {A}_{\sigma }$$ and $$\textbf {K}_{{\rm Ai},\sigma }$$ commute. This proves (67). 
Next,   \begin{align*} &\int_0^{\infty} \int_{0}^{\infty}Q\left(x\right){\rm Ai}\left(x+y+\sigma\right)b\left(y\right){\rm d} x\,{\rm d} y \\ &\quad =\int_0^{\infty}\int_{0}^{\infty} \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\left(x,y\right)\left[\textbf{A}_{\sigma}\delta_0\right]\left(y\right) \left[\textbf{A}_{\sigma}b\right]\left(x\right){\rm d} x\,{\rm d} y \\ &\quad =\int_{0}^{\infty}\int_0^{\infty}R\left(x,y\right)\delta_0\left(y\right) b\left(x\right){\rm d} x\,{\rm d} y, \end{align*} where we used that   $\textbf{A}_{\sigma}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{A}_{\sigma} = \left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1}\textbf{K}_{{\rm Ai},\sigma}=\textbf{R}_{{\rm Ai},\sigma}.$ This proves (68). □ 4.3 Proof of Theorem 2.6 We will prove Theorem 2.6 using the following lemmas. Lemma 4.3. The formulas (28) can be written in the equivalent ways   \begin{align} \hat{p}_1\left(z;\sigma,\tau\right) &=\tilde{\mathcal{A}}_{\tau,z}\left(0\right)+ \int_0^{\infty} R\left(x,0\right)\tilde{\mathcal{A}}_{\tau,z}\left(x\right){\rm d} x \end{align} (69)  \begin{align} &=\tilde{b}_{\tau,z}\left(0\right)-\lambda^{-1/6}\int_0^{\infty} Q\left(x\right)\mathcal{A}_{\tau,z}\left(x\right){\rm d} x \end{align} (70)  \begin{align} \hat{p}_2\left(z;\sigma,\tau\right) &= \mathcal{A}_{\tau,z}\left(0\right)+\int_0^{\infty} R\left(x,0\right) \mathcal{A}_{\tau,z}\left(x\right){\rm d} x \end{align} (71)  \begin{align} &= b_{\tau,z}\left(0\right)-\lambda^{1/6}\int_0^{\infty} Q\left(x\right)\tilde{\mathcal{A}}_{\tau,z}\left(x\right){\rm d} x, \end{align} (72) with the notations $$R=R_{\sigma }$$ and (53). Proof. Immediate from Lemma 4.2 and the definitions. □ Lemma 4.4. 
We have   \begin{align} \left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}b_{\tau,z}\left(x\right) &=\lambda^{-1/2}\frac{\partial}{\partial x} b_{\tau,z}\left(x\right), \end{align} (73)  \begin{align} \left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}\tilde{b}_{\tau,z}\left(x\right) &=\frac{\partial}{\partial x} \tilde{b}_{\tau,z}\left(x\right), \end{align} (74)  \begin{align} \left(1+\lambda^{-1/2}\right) \frac{\partial}{\partial\sigma}\mathcal{A}_{\tau,z}\left(x\right) &=\lambda^{-1/2}\frac{\partial}{\partial x}\mathcal{A}_{\tau,z}\left(x\right) +\lambda^{1/6} {\rm Ai}\left(x+\sigma\right)\tilde{b}_{\tau,z}\left(0\right). \end{align} (75) Proof. The first two formulas are obvious from the definitions (17)–(19). For the last formula, we calculate   \begin{align*} &\left[\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}-\lambda^{-1/2}\frac{\partial}{\partial x}\right]\mathcal{A}_{\tau,z}\left(x\right) \\ &\quad =\left[\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}-\lambda^{-1/2}\frac{\partial}{\partial x}\right] \left(b_{\tau,z}\left(x\right) -\lambda^{1/6}\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) {\rm d} y\right) \\ &\quad = -\lambda^{1/6}\left[\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial \sigma}-\lambda^{-1/2}\frac{\partial}{\partial x}\right] \left(\int_{0}^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) {\rm d} y\right) \\ &\quad = -\lambda^{1/6}\int_{0}^{\infty} \left({\rm Ai}'\left(x+y+\sigma\right)\tilde{b}_{\tau,z}\left(y\right) + {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{\tau,z}'\left(y\right) \right) {\rm d} y\\ &\quad =\lambda^{1/6}{\rm Ai}\left(x+\sigma\right)\tilde{b}_{\tau,z}\left(0\right), \end{align*} where the second and third equalities use (73) and (74), respectively, and the last equality follows from integration by parts. This proves the lemma. □ Now we prove Theorem 2.6. 
With the help of (75), the derivative of (21) with respect to $$\sigma$$ becomes   \begin{align} &\left(1+\lambda^{-1/2}\right)^{2/3} \frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right)\nonumber\\ &\quad=\left(1+\lambda^{-1/2}\right)^{2/3}\lambda^{1/3} C \frac{\partial}{\partial \sigma}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right)\tilde{b}_{-\tau,v}\left(x\right){\rm d} x\nonumber \\ &\qquad +\lambda^{-1/2}\int_0^{\infty}\int_0^{\infty}\left(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma}\right)^{-1} \left(x,y\right)\left(\mathcal{A}_{\tau,u}'\left(x\right)\mathcal{A}_{-\tau,v}\left(y\right) +\mathcal{A}_{\tau,u}\left(x\right)\mathcal{A}_{-\tau,v}'\left(y\right)\right){\rm d} x\,{\rm d} y\nonumber \\ &\qquad + \lambda^{1/6}\tilde{b}_{\tau,u}\left(0\right)\int_0^{\infty} Q\left(y\right)\mathcal{A}_{-\tau,v}\left(y\right){\rm d} y + \lambda^{1/6}\tilde{b}_{-\tau,v}\left(0\right)\int_0^{\infty} Q\left(x\right) \mathcal{A}_{\tau,u}\left(x\right){\rm d} x\nonumber\\ &\qquad +\left(1+\lambda^{-1/2}\right)\int_0^{\infty}\int_0^{\infty} \left(\frac{\partial}{\partial\sigma} R\left(x,y\right)\right) \mathcal{A}_{\tau,u}\left(x\right) \mathcal{A}_{-\tau,v}\left(y\right){\rm d} x\,{\rm d} y, \end{align} (76) where we used the definition of $$Q$$ in (53). The first term in the right-hand side of (76) can be written as   $$\left(1+\lambda^{-1/2}\right)^{2/3}\lambda^{1/3} C \frac{\partial}{\partial \sigma}\int_0^{\infty}\tilde{b}_{\tau,u}\left(x\right)\tilde{b}_{-\tau,v}\left(x\right){\rm d} x = -\lambda^{1/3}\tilde{b}_{\tau,u}\left(0\right)\tilde{b}_{-\tau,v}\left(0\right),$$ (77) on account of (74). 
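For the reader's convenience, here is the one-line verification of (77): by (74), the $$\sigma$$-derivative of the product $$\tilde{b}_{\tau,u}(x)\tilde{b}_{-\tau,v}(x)$$ is a total $$x$$-derivative, and the boundary term at $$+\infty$$ vanishes by the decay of the Airy function, so

```latex
\left(1+\lambda^{-1/2}\right)\frac{\partial}{\partial\sigma}
\int_0^{\infty}\tilde{b}_{\tau,u}(x)\,\tilde{b}_{-\tau,v}(x)\,{\rm d}x
= \int_0^{\infty}\frac{\partial}{\partial x}
\left(\tilde{b}_{\tau,u}(x)\,\tilde{b}_{-\tau,v}(x)\right){\rm d}x
= -\,\tilde{b}_{\tau,u}(0)\,\tilde{b}_{-\tau,v}(0),
```

while the prefactor simplifies as $$(1+\lambda^{-1/2})^{2/3}\lambda^{1/3}C=(1+\lambda^{-1/2})\lambda^{1/3}$$, since $$C=(1+\lambda^{-1/2})^{1/3}$$.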
The second term in the right-hand side of (76) can be written as   \begin{align} &\lambda^{-1/2}\int_0^{\infty}\!\!\!\int_0^{\infty}(\textbf{1}-\textbf{K}_{{\rm Ai},\sigma})^{-1}(x,y) \left(\mathcal{A}_{\tau,u}'(x)\mathcal{A}_{-\tau,v}(y)+\mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}'(y)\right){\rm d} x\,{\rm d} y \nonumber\\ &\quad =-\lambda^{-1/2}\left(\int_0^{\infty}\int_0^{\infty}\left[\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)R(x,y)\right] \mathcal{A}_{\tau,u}(x)\mathcal{A}_{-\tau,v}(y){\rm d} x\,{\rm d} y+\mathcal{A}_{\tau,u}(0)\mathcal{A}_{-\tau,v}(0)\right.\nonumber\\ &\quad \left. +\mathcal{A}_{\tau,u}(0)\int_0^{\infty}R(0,y)\mathcal{A}_{-\tau,v}(y)\,{\rm d} y +\mathcal{A}_{-\tau,v}(0)\int_0^{\infty}R(x,0)\mathcal{A}_{\tau,u}(x)\,{\rm d} x\right), \end{align} (78) where we used (16) and integration by parts. Finally, we observe that   $$\left[ (1+\lambda^{-1/2})\frac{\partial}{\partial\sigma}-\lambda^{-1/2}\left(\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right)\right]R(x,y) = -\lambda^{-1/2} R(x,0)R(0,y)-Q(x)Q(y),$$ (79) on account of (54)–(55). 
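Explicitly, (79) is obtained by substituting (55) for the $$\sigma$$-derivative and (54) for the $$x,y$$-derivatives:

```latex
\left[(1+\lambda^{-1/2})\frac{\partial}{\partial\sigma}
-\lambda^{-1/2}\left(\frac{\partial}{\partial x}
+\frac{\partial}{\partial y}\right)\right]R(x,y)
= -(1+\lambda^{-1/2})\,Q(x)Q(y)
  -\lambda^{-1/2}\bigl(R(x,0)R(0,y)-Q(x)Q(y)\bigr)
= -\lambda^{-1/2}R(x,0)R(0,y)-Q(x)Q(y).
```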
Inserting (77)–(79) in the right-hand side of (76), we obtain   \begin{align} &\left(1+\lambda^{-1/2}\right)^{2/3} \frac{\partial}{\partial \sigma} \mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) \nonumber \\ &\quad =-\lambda^{1/3}\left(\tilde{b}_{\tau,u}\left(0\right)-\lambda^{-1/6} \int_0^{\infty}Q\left(x\right)\mathcal{A}_{\tau,u}\left(x\right){\rm d} x\right)\left(\tilde{b}_{-\tau,v}\left(0\right)-\lambda^{-1/6} \int_0^{\infty}Q\left(x\right)\mathcal{A}_{-\tau,v}\left(x\right){\rm d} x\right)\nonumber \\ &\qquad -\lambda^{-1/2}\left(\mathcal{A}_{\tau,u}\left(0\right)+ \int_0^{\infty}R\left(x,0\right)\mathcal{A}_{\tau,u}\left(x\right){\rm d} x\right) \left(\mathcal{A}_{-\tau,v}\left(0\right)+\int_0^{\infty}R\left(0,x\right)\mathcal{A}_{-\tau,v}\left(x\right){\rm d} x\right)\nonumber \\ &\quad =-\lambda^{1/3}\hat{p}_1\left(u;\sigma,\tau\right)\hat{p}_1\left(v;\sigma,-\tau\right) -\lambda^{-1/2}\hat{p}_2\left(u;\sigma,\tau\right)\hat{p}_2\left(v;\sigma,-\tau\right), \end{align} (80) on account of (70)–(71). This proves (26). Finally we prove (27). From [5, Section 3.8] we have the estimate   $$|R_{\sigma}\left(x,y\right)|< C_0 \,{\rm e}^{-x-y-2\sigma}$$ (81) for all $$x,y,\sigma >0$$, with $$C_0>0$$ a certain constant. We also have the asymptotics for the Airy function   $${\rm Ai}\left(x\right)\sim \exp\left(-\tfrac 23 x^{3/2}\right)/\left(2\sqrt{\pi}x^{1/4}\right),\quad x\to +\infty.$$ (82) Consequently the kernel $$\mathcal L_{{\rm tac}}(u,v;\sigma ,\tau )$$ in (21), (16) goes to zero (at a very fast rate) for $$\sigma \to +\infty$$. Integration of (26) then yields (27). This proves Theorem 2.6. 4.4 Proof of Theorem 2.4 Let the parameters $$r_1,r_2,s_1,s_2$$ be given by (22). As already observed, the expressions for $$C,\sigma$$ in (32) and (19), (17) are equal under this identification. 
Similarly, the expressions for $$p_1,p_2$$ in (36) and $$\hat {p}_1,\hat {p}_2$$ in (28) are related by   $$p_j\left(z\right) = \sqrt{2\pi} r_j^{1/6} \exp\left(r_j^4 \tau\left(\Sigma+\frac 23 \tau^2\right)\right)\hat{p}_j\left(z\right),\quad j=1,2.$$ (83) Note that the exponential factor in (83) cancels out in (24). Now observe that the formulas for $$s_1,s_2$$ in (22) are of the form $$s_j=\sigma _j s$$ with   $$s:=\left(\Sigma+\tau^2\right)/2,\quad \sigma_1:=\lambda^{3/4},\quad \sigma_2:=1.$$ (84) By (17) we then have   $$\frac{\partial}{\partial s}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right) = 2\sqrt{\lambda}\left(1+\lambda^{-1/2}\right)^{2/3}\frac{\partial}{\partial \sigma}\mathcal L_{{\rm tac}}\left(u,v;\sigma,\tau\right),$$ (85) where we consider $$\tau$$ to be fixed. Theorem 2.4 follows from Theorems 2.5 and 2.6 and the above observations. 5 Proof of Theorem 2.9 In this section we prove Theorem 2.9 on the Airy resolvent formulas for the RH matrix. The outline of the proof is as follows: In Sections 5.2 and 5.3 we will show that the Airy resolvent formulas in (35) satisfy the required differential equations for the columns of the RH matrix, as given in Proposition 2.11. The latter proposition (whose proof will be established in Section 6) then shows that the Airy resolvent formulas correspond to certain linear combinations of the columns of the RH matrix. In Section 5.4 we will conclude the proof by deriving the appropriate asymptotics. 5.1 Preparations We start with a few lemmas that will be useful in the calculations in Section 5.2. Lemma 5.1. 
The functions $$b_z(x)$$ and $$\tilde {b}_z(x)$$ in (33) satisfy the differential equations   $$b_z'\left(x\right) = C\frac{\partial}{\partial z} b_z\left(x\right),\quad \tilde{b}_z'\left(x\right) = -C\frac{\partial}{\partial z} \tilde{b}_z\left(x\right),$$ (86) and   \begin{align} r_2^{-2}\frac{\partial^2}{\partial z^2} b_{z}\left(x\right)+2\tau \frac{\partial}{\partial z} b_z\left(x\right) &= \left(z+Cx+2\frac{s_2}{r_2}-r_2^{2}\tau^2 \right)b_z\left(x\right), \end{align} (87)  \begin{align} r_1^{-2}\frac{\partial^2}{\partial z^2} \tilde{b}_{z}\left(x\right)-2\tau \frac{\partial}{\partial z} \tilde{b}_z\left(x\right) &= \left(-z+Cx+2\frac{s_1}{r_1}-r_1^{2}\tau^2 \right)\tilde{b}_z\left(x\right). \end{align} (88) Proof. Equation (86) is obvious. The other equations follow from a straightforward calculation using the Airy differential equation $${\rm Ai}''(x)=x{\rm Ai}(x)$$. □ Lemma 5.2. The function $$\mathcal {A}_z(x)$$ in (37) satisfies the differential equations   $$\frac{\partial}{\partial z} \mathcal{A}_{z}\left(x\right) = C^{-1}\left(\mathcal{A}_{z}'\left(x\right) - D{\rm Ai}\left(x+\sigma\right)\tilde{b}_{z}\left(0\right)\right)$$ (89) and   \begin{align} r_2^{-2}\frac{\partial^2}{\partial z^2} \mathcal{A}_{z}(x)+2\tau \frac{\partial}{\partial z} \mathcal{A}_z(x) &= \left(z+Cx+2\frac{s_2}{r_2}-r_2^{2}\tau^2 \right)\mathcal{A}_{z}(x) \nonumber\\ &\quad + C D\left({\rm Ai}(x+\sigma)\tilde{b}_{z}'(0) - {\rm Ai}'(x+\sigma)\tilde{b}_{z}(0)\right). \end{align} (90) Proof. Equation (89) follows from the definition of $$\mathcal {A}_z$$, (86) and integration by parts. Now we check the formula (90). 
From (37) we have   \begin{align*} r_2^{-2}\frac{\partial^2}{\partial z^2}\mathcal{A}_{z}\left(x\right) &=r_2^{-2}\frac{\partial^2}{\partial z^2}b_{z}\left(x\right) -r_2^{-2}D\int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right) \frac{\partial^2}{\partial z^2} \tilde{b}_{z}\left(y\right){\rm d} y \\ &= r_2^{-2}\frac{\partial^2}{\partial z^2} b_{z}\left(x\right) + r_1^{-2}D\int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right)\frac{\partial^2}{\partial z^2} \tilde{b}_{z}\left(y\right){\rm d} y\\ &\quad - CD\int_0^{\infty} {\rm Ai}\left(x+y+\sigma\right)\tilde{b}_{z}''\left(y\right){\rm d} y \end{align*} where in the second equality we used $$r_1^{-2}+r_2^{-2}=C^3$$ and (86). Hence   \begin{align} r_2^{-2}\frac{\partial^2}{\partial z^2}\mathcal{A}_{z}(x)+2\tau \frac{\partial}{\partial z}\mathcal{A}_{z}(x) & =\left(r_2^{-2}\frac{\partial^2}{\partial z^2} b_{z}(x)+2\tau \frac{\partial}{\partial z}b_{z}(x)\right)\nonumber\\ &\quad+D\int_0^{\infty} {\rm Ai}(x+y+\sigma)\left(r_1^{-2}\frac{\partial^2}{\partial z^2} \tilde{b}_{z}(y)-2\tau \frac{\partial}{\partial z} \tilde{b}_{z}(y)\right){\rm d} y\nonumber\\ &\quad -CD\int_0^{\infty} {\rm Ai}(x+y+\sigma)\tilde{b}_{z}''(y)\,{\rm d} y. \end{align} (91) In the first two terms in the right-hand side of (91) we use the differential equations (87)–(88), and in the third term we integrate by parts twice and subsequently use the Airy differential equation $${\rm Ai}''(x)=x{\rm Ai}(x)$$. The lemma then follows from a straightforward calculation, taking into account that   $$\left(-z+Cy+2\frac{s_1}{r_1}-r_1^2\tau^2 \right)-C\left(x+y+\sigma\right) = -z-Cx-2\frac{s_2}{r_2}+r_2^2\tau^2$$ (92) thanks to (32). □ Lemma 5.3. 
The formulas (36) can be written in the equivalent ways   \begin{align} p_1\left(z\right) & =\tilde{\mathcal{A}}_{z}\left(0\right)+ \int_{0}^{\infty}R\left(x,0\right)\tilde{\mathcal{A}}_{z}\left(x\right){\rm d} x \end{align} (93)  \begin{align} &=\tilde{b}_{z}\left(0\right)-D^{-1}\int_{0}^{\infty}Q\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x, \end{align} (94)  \begin{align} p_2\left(z\right) &=\mathcal{A}_{z}\left(0\right)+ \int_{0}^{\infty}R\left(x,0\right)\mathcal{A}_{z}\left(x\right){\rm d} x \end{align} (95)  \begin{align} &= b_z\left(0\right)-D\int_{0}^{\infty} Q\left(x\right)\tilde{\mathcal{A}}_{z}\left(x\right){\rm d} x. \end{align} (96) Proof. Immediate from Lemma 4.2 and the definitions. □ 5.2 Differential equation for $$p_j$$ In this section we check that the expressions for $$p_1(z)$$, $$p_2(z)$$ in (94)–(95) satisfy the differential equation (39) (with $$m_j:=p_j$$). From (94) we have   $$p_1'\left(z\right) = \frac{\partial}{\partial z}\left[ \tilde{b}_{z}\left(0\right)\right]-D^{-1}\int_{0}^{\infty}Q\left(x\right)\frac{\partial}{\partial z}\left[\mathcal{A}_z\left(x\right)\right]{\rm d} x.$$ (97) We calculate the second term   \begin{align} & \int_{0}^{\infty}Q\left(x\right)\frac{\partial}{\partial z}\left[\mathcal{A}_z\left(x\right)\right]{\rm d} x\nonumber\\ &\quad =C^{-1}\left(\int_{0}^{\infty}Q\left(x\right)\mathcal{A}_z'\left(x\right){\rm d} x -D u \tilde{b}_{z}\left(0\right) \right) \nonumber\\ &\quad =-C^{-1}\left(\int_{0}^{\infty} Q'\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x+q\mathcal{A}_z\left(0\right)+D u\tilde{b}_{z}\left(0\right)\right) \nonumber\\ &\quad = -C^{-1}\left(\int_{0}^{\infty} \left[P\left(x\right)+qR\left(x,0\right)-uQ\left(x\right)\right]\mathcal{A}_z\left(x\right){\rm d} x+q\mathcal{A}_z\left(0\right)+D u\tilde{b}_{z}\left(0\right)\right)\nonumber \\ &\quad = -C^{-1}\left(D u p_1\left(z\right)+q p_2\left(z\right)+\int_{0}^{\infty} P\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x\right) \end{align} (98) where 
the first equality uses (89) and (60), the second one uses integration by parts and (58), the third one uses (56), and the fourth equality uses (94)–(95). From (97)–(98) and $$r_1^{-2}+r_2^{-2}=C^3$$, we get   \begin{align} r_1^{-2} p_1'(z) &=r_1^{-2}\frac{\partial}{\partial z}[\tilde{b}_{z}(0)] + r_2^{-2}D^{-1}\int_{0}^{\infty}Q(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x\nonumber\\ &\quad +C^2D^{-1}\left(D u p_1(z)+q p_2(z)+\int_{0}^{\infty} P(x)\mathcal{A}_z(x){\rm d} x\right). \end{align} (99) By differentiating (99) with respect to $$z$$, we get   \begin{align*} &r_1^{-2} p_1''(z)-C^2D^{-1}q p_2'(z)-2\tau p_1'(z)\\ &\quad = r_1^{-2}\frac{\partial^2}{\partial z^2}\left[ \tilde{b}_{z}(0)\right]-2\tau \frac{\partial}{\partial z}\left[ \tilde{b}_{z}(0)\right] + D^{-1}\int_{0}^{\infty} Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[ \mathcal{A}_z(x)\right]\right){\rm d} x \\ &\qquad +C^2D^{-1} \left(Du p_1'(z)+\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x\right), \end{align*} where the last term in the left-hand side was expanded using (97). Equivalently,   \begin{align} &r_1^{-2} p_1''(z)-C^2D^{-1}q p_2'(z)-2\tau p_1'(z)\nonumber\\ &\quad = r_1^{-2}\frac{\partial^2}{\partial z^2}\left[ \tilde{b}_{z}(0)\right]-2\tau \frac{\partial}{\partial z}\left[ \tilde{b}_{z}(0)\right]\nonumber\\ &\qquad + CD^{-1}\left(C^{-1}\int_{0}^{\infty} Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[ \mathcal{A}_z(x)\right]\right){\rm d} x\right. \nonumber\\ &\qquad \left. +CD u p_1'(z)+C\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x\right). \end{align} (100) We will calculate each of the terms in the right-hand side of (100). 
We start with   \begin{align*} C\int_{0}^{\infty} P\left(x\right)\frac{\partial}{\partial z}\left[\mathcal{A}_z\left(x\right)\right]{\rm d} x &=\int_{0}^{\infty} P\left(x\right) \mathcal{A}_{z}'\left(x\right){\rm d} x - D v \tilde{b}_{z}\left(0\right) \\ &=-\left(\int_{0}^{\infty} P'\left(x\right) \mathcal{A}_{z}\left(x\right){\rm d} x +p \mathcal{A}_{z}\left(0\right) +D v\tilde{b}_{z}\left(0\right) \right) \end{align*} where the first equality follows from (89) and (61), and the second equality uses integration by parts and (59). Consequently,   \begin{align} C\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x &= -\int_{0}^{\infty} (x+\sigma)Q(x) \mathcal{A}_{z}(x){\rm d} x \nonumber\\ &\quad -u\int_{0}^{\infty} P(x) \mathcal{A}_{z}(x){\rm d} x-p p_2(z) -2D v p_1(z)+D v \tilde{b}_{z}(0) \end{align} (101) by virtue of (57) and (94)–(95). Next we calculate the third term in the right-hand side of (100),   \begin{align} &C^{-1}\int_{0}^{\infty}Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[ \mathcal{A}_z(x)\right]\right){\rm d} x \nonumber\\ &\quad =C^{-1}\int_{0}^{\infty}\left(z+Cx+2\frac{s_2}{r_2}-r_2^{2}\tau^2 \right) Q(x) \mathcal{A}_{z}(x)\,{\rm d} x + D(u \tilde{b}_{z}'(0) - v \tilde{b}_{z}(0)), \end{align} (102) on account of (90) and (60)–(61). 
By adding (101)–(102) and canceling terms we get   \begin{align} &C\int_{0}^{\infty} P(x)\frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]{\rm d} x +C^{-1}\int_{0}^{\infty}Q(x)\left(r_2^{-2}\frac{\partial^2}{\partial z^2}\left[\mathcal{A}_z(x)\right]+2\tau \frac{\partial}{\partial z}\left[\mathcal{A}_z(x)\right]\right){\rm d} x \nonumber\\ &\quad =-u\int_{0}^{\infty} P(x) \mathcal{A}_{z}(x){\rm d} x + C^{-1}\left(z- 2\frac{s_1}{r_1}+r_1^2\tau^2 \right)\int_{0}^{\infty}Q(x) \mathcal{A}_{z}(x)\,{\rm d} x\nonumber\\ &\qquad + D u \tilde{b}_{z}'(0)-2D v p_1(z)-p p_2(z), \end{align} (103) where the factor between brackets in front of the second integral in the right-hand side was obtained via (92). Finally, we calculate the fourth term in the right-hand side of (100),   \begin{align} CD u p_1'\left(z\right) &= CD u \frac{\partial}{\partial z}[\tilde{b}_{z}\left(0\right)]+D u^2 p_1\left(z\right)+uq p_2\left(z\right)+u\int_{0}^{\infty} P\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x \nonumber\\ &=-D u \tilde{b}_{z}'\left(0\right)+D u^2 p_1\left(z\right)+uq p_2\left(z\right)+u\int_{0}^{\infty} P\left(x\right)\mathcal{A}_z\left(x\right){\rm d} x \end{align} (104) where the first equality follows from (97)–(98) and the second one from (86). 
Inserting (103)–(104) in the right-hand side of (100) and canceling terms we get   \begin{align*} & r_1^{-2}p_1''\left(z\right)-C^2D^{-1}q p_2'\left(z\right)-2\tau p_1'\left(z\right) \\ &\quad =\left(r_1^{-2}\frac{\partial^2}{\partial z^2}\left[\tilde{b}_{z}\left(0\right)\right]-2\tau \frac{\partial}{\partial z}\left[\tilde{b}_{z}\left(0\right)\right]\right) +D^{-1}\left(z- 2\frac{s_1}{r_1}+ r_1^2\tau^2\right)\int_{0}^{\infty} Q\left(x\right)\mathcal{A}_{z}\left(x\right){\rm d} x\\ &\qquad +C [u^2-2v] p_1\left(z\right)+CD^{-1}[uq-p] p_2\left(z\right) \\ &\quad =\left[-z+ 2\frac{s_1}{r_1}-r_1^2\tau^2 +C \left(u^2- 2v\right)\right] p_1\left(z\right)+CD^{-1}[uq-p] p_2\left(z\right)\\ &\quad = \left[-z+ 2\frac{s_1}{r_1}-r_1^2\tau^2 +C q^2\right] p_1\left(z\right)-\left[CD^{-1} q'\right] p_2\left(z\right) \end{align*} where the second equality follows from (94) and (88), and the last equality uses (62) and (66). We have established the desired differential equation (39). 5.3 Other differential equations Denote by $$\widehat N_{j,k}(z)$$ the right-hand side of the formula for $$\tilde {M}_{j,k}(z)$$ in (35). Let the vectors $$\textbf {p}(z),\textbf {m}(z)$$ have entries   $p_j\left(z\right) =\widehat N_{j,1}\left(z\right)+\widehat N_{j,2}\left(z\right),\quad m_j\left(z\right)=\widehat N_{j,1}\left(z\right)-\widehat N_{j,2}\left(z\right),$ for $$j=1,\ldots ,4$$. For $$\textbf {p}(z)$$ this is compatible with (36). We have already shown in Section 5.2 that $$\textbf {p}(z)$$ satisfies the differential equation (39) (with $$m_j:=p_j$$). We claim that the same statement holds for $$\textbf {m}(z)$$. This can be shown by going through the proofs in Sections 5.1–5.2 again and replacing the appropriate plus signs by minus signs and vice versa. We leave this to the reader. Summarizing, both $$\textbf {p}(z)$$ and $$\textbf {m}(z)$$ satisfy the differential equation (39). By symmetry they also satisfy the differential equation (40). Finally, (41)–(42) is valid by construction. 
Proposition 2.11(b) implies that $$\textbf {p}(z)$$ and $$\textbf {m}(z)$$ are fixed linear combinations of the columns of $$\tilde {M}(z)$$. By linearity, the same holds for the $$\widehat N_{j,k}(z)$$. 5.4 Asymptotics In view of Section 5.3, Theorem 2.9 will be proved if we can show that the expressions for $$\tilde {M}_{j,1}(z)$$ and $$\tilde {M}_{j,2}(z)$$ in (35) and (41)–(42) satisfy the required asymptotics for $$z\to \infty$$ in the RH problem 2.1. It will be enough to prove the asymptotics for the second column $$\tilde {M}_{j,2}(z)$$. Moreover, it is sufficient to let $$z$$ go to infinity along the positive real line. Indeed, first observe that the second columns of $$M(z)$$ and $$\tilde {M}(z)$$ are equal if $$\Re z>0$$. Furthermore, the second column of $$M(z)$$ is recessive with respect to the other columns as $$z\to +\infty$$, due to (7). So, if we can prove that the expressions for $$\tilde {M}_{j,2}(z)$$ in (35) share this same recessive asymptotic behavior for $$z\to +\infty$$, then we are done. The proof of the above asymptotics is now an easy consequence of (81)–(82). This ends the proof of Theorem 2.9. 6 Proof of Proposition 2.11 6.1 Painlevé formulas for the residue matrix $$M_1$$ Let the parameters $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed. In the proof of Proposition 2.11 we will need Painlevé formulas for the entries of the “residue” matrix $$M_1=M_1(r_1,r_2,s_1,s_2,\tau )$$ in (7). Write this matrix in entrywise form as   $$M_1 =: \begin{pmatrix} a & b & ic & id \\ - \tilde{b} & - \tilde{a} & i\tilde{d} & i \tilde{c} \\ i e & i \tilde{f} & -\alpha & \tilde{\beta} \\ i f & i\tilde{e} & -\beta & \tilde{\alpha} \end{pmatrix}$$ (105) for certain numbers $$a, \tilde {a}, b, \tilde {b},c,\tilde {c},\ldots$$ that depend on $$r_1, r_2, s_1, s_2,\tau$$. We will sometimes write $$a(r_1,r_2,s_1,s_2,\tau )$$, $$b(r_1,r_2,s_1,s_2,\tau )$$, etc., to denote the dependence on the parameters. 
In the symmetric case $$r_1=r_2$$, $$s_1=s_2$$ we are allowed to drop all the tildes from (105), while in the case $$\tau =0$$ we can replace all the Greek letters in (105) by their Roman counterparts and put $$\tilde {d}=d$$ and $$\tilde {f}=f$$. These are special instances of the next lemma. Lemma 6.1 (Symmetry relations). Let $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed. The 16 entries $$x=x(r_1,r_2,s_1,s_2,\tau )$$ of the matrix (105) are all real-valued and they satisfy the symmetry relations   $$x\left(r_1,r_2,s_1,s_2,\tau\right) = \tilde{x}\left(r_2,r_1,s_2,s_1,\tau\right),$$ (106) for any $$x=a,b,c,d,e,f,\alpha ,\beta$$, and   $$x\left(r_1,r_2,s_1,s_2,\tau\right) = \chi\left(r_1,r_2,s_1,s_2,-\tau\right),$$ (107) for any $$x=a,b,\tilde {a},\tilde {b},c,\tilde {c},d,e,\tilde {e},f$$, where we write $$\chi =\alpha ,\beta ,\tilde {\alpha },\tilde {\beta },c,\tilde {c},\tilde {d},e,\tilde {e},\tilde {f}$$, respectively. The proof of Lemma 6.1 follows from Section 6.2. Now we relate the entries of the matrix $$M_1$$ to the Hastings–McLeod solution $$q(x)$$ of Painlevé II and the Hamiltonian $$u(x)$$ in (38). The next theorem was proved for the special case $$\tau =0$$ in [14] and in the symmetric setting $$r_1=r_2$$, $$s_1=s_2$$ in [12, 15]. In the general case, we have the extra exponential factor $$D$$ in (32). Theorem 6.2 (Painlevé formulas). Let the parameters $$r_1,r_2>0$$ and $$s_1,s_2,\tau \in \mathbb {R}$$ be fixed. 
The entries in the top right $$2\times 2$$ block of (105) are given by   \begin{align} d &= \left(r_2CD\right)^{-1} q\left(\sigma\right) \end{align} (108)  \begin{align} \tilde{d} &= \left(r_1C\right)^{-1}D q\left(\sigma\right) \end{align} (109)  \begin{align} c &= r_1^{-1}\left(s_1^2-C^{-1} u\left(\sigma\right)\right) \end{align} (110)  \begin{align} \tilde{c} &=r_2^{-1}\left(s_2^2-C^{-1}u\left(\sigma\right)\right) \end{align} (111) where $$q$$ is the Hastings–McLeod solution to Painlevé II (1)–(2), $$u$$ is the Hamiltonian (38), and with the constants $$C,D,\sigma$$ given by (32). Moreover, some of the other entries in (105) are given by   \begin{align} b &=\left(\tilde{c}+\tau r_2\right) d -\left(r_2^{2}C^{2}D\right)^{-1} q'\left(\sigma\right) \end{align} (112)  \begin{align} \tilde{b} &=\left(c+\tau r_1\right) \tilde{d} -\left(r_1^{2}C^{2}\right)^{-1}D q'\left(\sigma\right) \end{align} (113)  \begin{align} \beta &= \left(\tilde{c}-\tau r_2\right) \tilde{d}-\left(r_1r_2C^{2}\right)^{-1}D q'\left(\sigma\right) \end{align} (114)  \begin{align} \tilde{\beta} &= \left(c-\tau r_1\right) d-\left(r_1r_2C^{2}D\right)^{-1}q'\left(\sigma\right) \end{align} (115) and   \begin{align} r_1f &= -\frac{r_2}{r_1^2+r_2^2} \frac{\partial d}{\partial\tau}+\left(-r_1 c-r_2 \tilde{c}+r_1^2\tau +r_2^2\tau\right) b -r_1 d^2\tilde{d}+r_2 \tilde{c}^2 d -2s_2 d \end{align} (116)  \begin{align} r_2\tilde{f} &= -\frac{r_1}{r_1^2+r_2^2} \frac{\partial \tilde{d}}{\partial\tau} +\left(-r_1 c-r_2 \tilde{c}+r_1^2\tau+r_2^2\tau\right)\tilde{b}-r_2\tilde{d}^2 d+r_1 c^2 \tilde{d}-2s_1\tilde{d}. \end{align} (117) Theorem 6.2 is proved in Section 6.5. 6.2 Symmetry relations For further use, we collect some elementary results concerning symmetry. Lemma 6.3 (Symmetry relations). 
For any fixed $$r_1, r_2,s_1,s_2,\tau$$, the RH matrix $$M$$ satisfies the symmetry relations   $$\overline{M\left(\overline{z};r_1,r_2,s_1,s_2,\tau\right)} = \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix} M\left(z;\overline{r_1},\overline{r_2},\overline{s_1},\overline{s_2},\overline{\tau}\right) \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix},$$ (118) where the bar denotes complex conjugation,   $$M^{-{\rm T}}\left(z;r_1,r_2,s_1,s_2,\tau\right) = K^{-1} M\left(z;r_1,r_2,s_1,s_2,-\tau\right) K,$$ (119) where the superscript $${-\hbox {T}}$$ denotes the inverse transpose, and finally   $$M\left(-z; r_1, r_2, s_1, s_2,\tau\right) =\begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix}M\left(z; r_2, r_1, s_2, s_1,\tau\right) \begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix},$$ (120) where we denote   $$J=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad K = \begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}.$$ (121) Proof. This follows as in [14, Section 5.1]. More precisely, one easily checks that the left- and right-hand sides of (118) satisfy the same RH problem. Then (118) follows from the uniqueness of the solution to this RH problem. The same argument applies to (119) and (120). □ Corollary 6.4. For any fixed $$r_1, r_2,s_1,s_2,\tau$$, the residue matrix $$M_1$$ in (7), (105) satisfies the symmetry relations   \begin{align} \overline{M_1\left(r_1,r_2,s_1,s_2,\tau\right)} &= \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix} M_1\left(\overline{r_1},\overline{r_2},\overline{s_1},\overline{s_2},\overline{\tau}\right) \begin{pmatrix} I_2 & 0 \\ 0 & -I_2 \end{pmatrix}, \end{align} (122)  \begin{align} M_1^{T}\left(r_1,r_2,s_1,s_2,\tau\right) &= -K^{-1} M_1\left(r_1,r_2,s_1,s_2,-\tau\right) K, \end{align} (123)  \begin{align} M_1\left(r_1, r_2, s_1, s_2,\tau\right) &=-\begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix} M_1\left(r_2, r_1, s_2, s_1,\tau\right) \begin{pmatrix} J & 0 \\ 0 & -J \end{pmatrix}, \end{align} (124) with the notations $$J,K$$ in (121). 
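As a concrete illustration, relation (124), combined with the entrywise form (105), encodes exactly the tilde symmetry (106) of Lemma 6.1. The following symbolic check is an editorial sketch in sympy (not part of the paper): the entries of $$M_1(r_2,r_1,s_2,s_1,\tau)$$ are modeled by swapping every symbol with its tilded partner, and the conjugation identity is then verified entry by entry.

```python
import sympy as sp

# Free symbols for the 16 entries of M_1 in (105); "tilde"-named symbols are the tilded entries.
a, at, b, bt, c, ct, d, dt = sp.symbols('a atilde b btilde c ctilde d dtilde')
e, et, f, ft, al, alt, be, bet = sp.symbols('e etilde f ftilde alpha alphatilde beta betatilde')
I = sp.I

def M1(a, at, b, bt, c, ct, d, dt, e, et, f, ft, al, alt, be, bet):
    """Residue matrix M_1 in the entrywise form (105)."""
    return sp.Matrix([[a,    b,    I*c,  I*d ],
                      [-bt,  -at,  I*dt, I*ct],
                      [I*e,  I*ft, -al,  bet ],
                      [I*f,  I*et, -be,  alt ]])

J = sp.Matrix([[0, 1], [1, 0]])
JJ = sp.diag(J, -J)

lhs = M1(a, at, b, bt, c, ct, d, dt, e, et, f, ft, al, alt, be, bet)
# Model (106): each entry of M_1(r_2, r_1, s_2, s_1, tau) is the tilde-swapped symbol.
swapped = M1(at, a, bt, b, ct, c, dt, d, et, e, ft, f, alt, al, bet, be)

# Relation (124): M_1(r_1,r_2,s_1,s_2,tau) = -diag(J,-J) M_1(r_2,r_1,s_2,s_1,tau) diag(J,-J).
assert sp.expand(lhs - (-JJ * swapped * JJ)) == sp.zeros(4, 4)
```

All 16 entry identities hold simultaneously, which is the content of (106) for $$x=a,b,c,d,e,f,\alpha,\beta$$ and their tilded partners.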
Lemma 6.1 is an immediate consequence of Corollary 6.4. 6.3 Lax system To the RH matrix $$M(z)$$ there is associated a Lax system of differential equations   $$\frac{\partial}{\partial z}M =UM,\quad \frac{\partial}{\partial s}M =VM,\quad \frac{\partial}{\partial \tau}M =WM,$$ (125) for certain coefficient matrices $$U,V,W$$. These matrices were obtained in the symmetric case $$r_1=r_2$$, $$s_1=s_2$$ in [12, Section 5.3, 15] and for $$\tau =0$$ in [14, Section 5.2]. We will consider the general nonsymmetric case. To take derivatives with respect to $$s_1$$ or $$s_2$$, we again parameterize $$s_j=\sigma _j s$$ with $$\sigma _1,\sigma _2$$ fixed and $$s$$ variable, as in (23). Lemma 6.5. In the general nonsymmetric setting, with the parametrization $$s_1=\sigma _1 s$$, $$s_2=\sigma _2 s$$, the coefficient matrices $$U,V,W$$ in (125) take the form   \begin{align} U&= \begin{pmatrix} -r_1 c+r_1^2\tau & r_2 d & r_1 i & 0 \\ -r_1 \tilde{d} & r_2 \tilde{c}-r_2^2\tau & 0 & r_2 i \\ \left(r_1 c^2-r_2 d\tilde{d}-2s_1+r_1z\right)i & -\left(r_1 b+r_2\tilde{\beta}\right) i & r_1 c+r_1^2\tau & r_1 d \\ -\left(r_1\beta+r_2\tilde{b}\right) i & \left(r_2\tilde{c}^2-r_1 d\tilde{d}-2s_2-r_2z\right)i & -r_2\tilde{d} & -r_2\tilde{c}-r_2^2\tau \end{pmatrix}, \end{align} (126)  \begin{align} V&= 2\begin{pmatrix} \sigma_1 c & \sigma_2 d & -\sigma_1 i & 0 \\ \sigma_1\tilde{d} & \sigma_2\tilde{c} & 0 & \sigma_2 i \\ \sigma_1\left(-c^2+\dfrac{r_2}{r_1} d\tilde{d}+\dfrac{\sigma_1}{r_1}s-z\right)i & \left(\sigma_1 b-\sigma_2\tilde{\beta}\right)i & -\sigma_1 c & -\sigma_1 d \\ \left(\sigma_1\beta-\sigma_2\tilde{b}\right)i & \sigma_2\left(\tilde{c}^2-\dfrac{r_1}{r_2}d\tilde{d}-\dfrac{\sigma_2}{r_2} s-z\right)i & -\sigma_2 \tilde{d} & -\sigma_2 \tilde{c} \end{pmatrix}, \end{align} (127)  \begin{align} W& = \left(r_1^2+r_2^2\right)\begin{pmatrix} \dfrac{r_1^2}{r_1^2+r_2^2}z & -b & 0 & -di \\ -\tilde{b} & -\dfrac{r_2^2}{r_1^2+r_2^2}z & \tilde{d} i & 0 \\ 0 & -fi & \dfrac{r_1^2}{r_1^2+r_2^2}z & 
-\tilde{\beta} \\ \tilde{f} i & 0 & -\beta & -\frac{r_2^2}{r_1^2+r_2^2}z \end{pmatrix}, \end{align} (128) with the notations in (105). Proof. This is a routine calculation, similar to those in the above-cited references [12, 14, 15]. From the asymptotics (7) we have for $$z\to \infty$$ that   \begin{align} \frac{\partial M}{\partial z}M^{-1} &= \left(I+\frac{M_1}{z}+\cdots\right) \begin{pmatrix} r_1^2\tau & 0 & i(r_1-s_1z^{-1}) & 0 \\ 0 & -r_2^2\tau & 0 & i(r_2+ s_2z^{-1}) \\ i(r_1z-s_1) & 0 & r_1^2\tau & 0 \\ 0 & -i(r_2 z+ s_2) & 0 & -r_2^2\tau \end{pmatrix}\nonumber\\ &\quad \left(I-\frac{M_1}{z}+\cdots\right)+O(z^{-1}). \end{align} (129) Since the RH matrix $$M(z)$$ has constant jumps, the left-hand side of (129) is an entire function of $$z$$. Liouville's theorem implies that it is a polynomial in $$z$$. Collecting the polynomial terms in the right-hand side of (129) we obtain   $U = \begin{pmatrix} r_1^2\tau & 0 & i r_1 & 0 \\ 0 & -r_2^2\tau & 0 & i r_2 \\ i\left(r_1z-s_1\right) & 0 & r_1^2\tau & 0 \\ 0 & -i\left( r_2z+ s_2\right) & 0 & -r_2^2\tau \end{pmatrix}+i M_1A- iA M_1,\quad A:=\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ r_1&0&0&0\\ 0&-r_2&0&0 \end{pmatrix}.$ With the help of (105) and a small calculation, we then get (126). To obtain the $$(3,1)$$ and $$(4,2)$$ entries of (126), we also need the relations   \begin{align} a+\alpha &= -c^2 + \frac{r_2}{r_1} d\tilde{d} +\frac{s_1}{r_1}, \end{align} (130)  \begin{align} \tilde{a}+\tilde{\alpha} &= -\tilde{c}^2 + \frac{r_1}{r_2} d\tilde{d} +\frac{s_2}{r_2}, \end{align} (131) which follow from the fact that the $$(1,3)$$ and $$(2,4)$$ entries in the $$z^{-1}$$ coefficient in (129) are equal to zero. A similar argument yields (127)–(128). □ 6.4 Proof of Proposition 2.11 Let the vector $$\textbf {m}(z)$$ be a solution of $$({\partial }/{\partial z})\textbf {m}=U\textbf {m}$$ with $$U$$ in (126). 
By splitting this equation in $$2\times 2$$ blocks we get   \begin{align} \begin{pmatrix} r_1^{-1}m_1'\left(z\right) \\ r_2^{-1}m_2'\left(z\right) \end{pmatrix}&=\begin{pmatrix} -c+r_1\tau & r_1^{-1}r_2 d \\ -r_1r_2^{-1} \tilde{d} & \tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} m_1\left(z\right) \\ m_2\left(z\right) \end{pmatrix}+i\begin{pmatrix} m_3\left(z\right) \\ m_4\left(z\right) \end{pmatrix}, \end{align} (132)  \begin{align} \begin{pmatrix} r_1^{-1}m_3'(z) \\ r_2^{-1}m_4'(z)\end{pmatrix}&=i\begin{pmatrix} c^2-r_1^{-1}r_2 d\tilde{d}-2r_1^{-1}s_1+z & -b-r_1^{-1}r_2\tilde{\beta}\\ -r_1r_2^{-1}\beta-\tilde{b} & \tilde{c}^2-r_1r_2^{-1} d\tilde{d}-2r_2^{-1}s_2-z \end{pmatrix}\begin{pmatrix} m_1(z) \\ m_2(z) \end{pmatrix}\nonumber\\ &\quad +\begin{pmatrix} c+r_1\tau & d \\ -\tilde{d} & -\tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} m_3(z) \\ m_4(z) \end{pmatrix}. \end{align} (133) From (132) and (108)–(111), we easily get (41)–(42). To prove the two remaining differential equations, we take the derivative of (132) and use (133) to get   \begin{align*} \begin{pmatrix} r_1^{-2}m_1''(z) \\ r_2^{-2}m_2''(z) \end{pmatrix} &=\begin{pmatrix} -c+r_1\tau & r_1^{-2}r_2^2 d \\ -r_1^2r_2^{-2} \tilde{d} & \tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} r_1^{-1}m_1'(z) \\ r_2^{-1}m_2'(z) \end{pmatrix}\\ &\quad - \begin{pmatrix} c^2-r_1^{-1}r_2 d\tilde{d}-2r_1^{-1}s_1+z & -b-r_1^{-1}r_2\tilde{\beta}\\ -r_1r_2^{-1}\beta-\tilde{b} & \tilde{c}^2-r_1r_2^{-1} d\tilde{d}-2r_2^{-1}s_2-z \end{pmatrix}\begin{pmatrix} m_1(z) \\ m_2(z) \end{pmatrix}\\ &\quad +\begin{pmatrix} c+r_1\tau & d \\ -\tilde{d} & -\tilde{c}-r_2\tau \end{pmatrix}\left[\begin{pmatrix} r_1^{-1}m_1'(z) \\ r_2^{-1}m_2'(z)\end{pmatrix} -\begin{pmatrix} -c+r_1\tau & r_1^{-1}r_2 d \\ -r_1r_2^{-1} \tilde{d} & \tilde{c}-r_2\tau \end{pmatrix}\begin{pmatrix} m_1(z) \\ m_2(z) \end{pmatrix}\right]. \end{align*} From this equation and (108)–(115) we obtain the desired differential equations (39)–(40). 
This proves Proposition 2.11(a). To prove Part (b), let $$\textbf {m}(z)$$ satisfy the differential equations (39)–(42). From the proof of Part (a) above, we see that $$\frac {\partial }{\partial z}\textbf {m}=U\textbf {m}$$ with $$U$$ in (126). But then, since $$\frac{\partial}{\partial z}\tilde{M}^{-1}=-\tilde{M}^{-1}U$$,   $\frac{\partial}{\partial z}[\tilde{M}^{-1} \textbf{m}]=\tilde{M}^{-1}\left(U-U\right)\textbf{m}=0,$ which implies Proposition 2.11(b). 6.5 Proof of Theorem 6.2 The matrices $$U,V,W$$ in (125) satisfy the compatibility conditions   \begin{align} \frac{\partial U}{\partial s} &=\frac{\partial V}{\partial z} - UV + VU \end{align} (134)  \begin{align} \frac{\partial U}{\partial \tau}&=\frac{\partial W}{\partial z} - UW + WU. \end{align} (135) These relations are obtained by calculating the mixed derivatives $$({\partial ^2}/{\partial z\partial s}) M\,{=}\,({\partial ^2}/{\partial s\partial z})M$$ and $$({\partial ^2}/{\partial z\partial \tau })M=({\partial ^2}/{\partial \tau \partial z})M$$, respectively, in two different ways. Lemma 6.6. Consider the matrices $$U,V,W$$ in (126)–(128). Then with the expressions (108)–(117) in Theorem 6.2, the compatibility conditions (134)–(135) are satisfied. Proof. This is a lengthy but direct calculation. It is best performed with the help of a symbolic computer program such as Maple. First we consider the compatibility condition with respect to $$s$$. 
Writing the matrix equation (134) in entrywise form, with the help of Maple, we obtain the system of equations, with the prime denoting the derivative with respect to $$s$$,   \begin{align} r_1 d' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(\tilde{c} d-b\right)+2\left(r_1^2+r_2^2\right)\sigma_1\tau d \end{align} (136)  \begin{align} r_2 d' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(c d-\tilde{\beta}\right)-2\left(r_1^2+r_2^2\right)\sigma_2\tau d \end{align} (137)  \begin{align} r_1\tilde{d}' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(\tilde{c} \tilde{d}-\beta\right)-2\left(r_1^2+r_2^2\right)\sigma_1\tau \tilde{d} \end{align} (138)  \begin{align} r_2\tilde{d}' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)\left(c \tilde{d}-\tilde{b}\right)+2\left(r_1^2+r_2^2\right)\sigma_2\tau \tilde{d} \end{align} (139)  \begin{align} r_1 c' &=2\left(\sigma_1 r_2+\sigma_2 r_1\right)d\tilde{d}+2\sigma_1^2 s \end{align} (140)  \begin{align} r_2\tilde{c}' &= 2\left(\sigma_1 r_2+\sigma_2 r_1\right)d\tilde{d}+2\sigma_2^2 s \end{align} (141) and   \begin{align} (r_1 b+r_2\tilde{\beta})' &= (r_1\tilde{c}+r_2 c)d'-2(r_1^2+r_2^2)\tau\sigma_1 (\tilde{c}d-b)+2(r_1^2+r_2^2)\tau\sigma_2(cd-\tilde{\beta})\nonumber\\ &\quad-2\frac{r_1\sigma_2+r_2\sigma_1}{r_1r_2}\left((r_1^2+r_2^2)d^2\tilde{d}+(r_1\sigma_2+r_2\sigma_1)sd\right)-4\sigma_1 \sigma_2 sd \end{align} (142)  \begin{align} (r_1\beta+r_2\tilde{b})' &= (r_1\tilde{c}+r_2 c)\tilde{d}'+2(r_1^2+r_2^2)\sigma_1\tau (\tilde{c} \tilde{d}-\beta)-2(r_1^2+r_2^2)\sigma_2\tau(c\tilde{d}-\tilde{b})\nonumber\\ &\quad -2\frac{r_1\sigma_2+r_2\sigma_1}{r_1r_2}\left((r_1^2+r_2^2)\tilde{d}^2 d+(r_1\sigma_2+r_2\sigma_1)s\tilde{d}\right)-4\sigma_1 \sigma_2 s\tilde{d}. \end{align} (143) Next we consider the compatibility condition with respect to $$\tau$$. 
By writing the matrix equation (135) in entrywise form, with the help of Maple, we obtain, with the prime denoting the derivative with respect to $$\tau$$,   \begin{align} r_1 d' &= \left(r_1^2+r_2^2\right)\left(r_1^2\tau\tilde{\beta}+r_2^2\tau\tilde{\beta}+r_1 c\tilde{\beta} +r_2\tilde{c}\tilde{\beta} +r_2 d^2 \tilde{d}-r_1 c^2 d+2 s_1 d+r_2 f\right) \end{align} (144)  \begin{align} r_2 d' &=\left(r_1^2+r_2^2\right)\left(r_1^2\tau b+r_2^2\tau b -r_1 c b-r_2 \tilde{c} b -r_1 d^2\tilde{d}+r_2 \tilde{c}^2 d -2s_2 d-r_1 f\right) \end{align} (145)  \begin{align} r_1 \tilde{d}' &=\left(r_1^2+r_2^2\right)\left(r_1^2\tau\tilde{b} +r_2^2\tau\tilde{b}-r_2 \tilde{c}\tilde{b}-r_1 c\tilde{b}-r_2\tilde{d}^2 d+r_1 c^2\tilde{d}-2s_1\tilde{d} -r_2\tilde{f}\right) \end{align} (146)  \begin{align} r_2 \tilde{d}' &=\left(r_1^2+r_2^2\right) \left(r_1^2\tau\beta +r_2^2\tau\beta+r_2\tilde{c}\beta+r_1 c\beta +r_1\tilde{d}^2 d-r_2\tilde{c}^2\tilde{d}+2 s_2\tilde{d}+r_1\tilde{f}\right) \end{align} (147)  \begin{align} c' &=\left(r_1^2+r_2^2\right)\left(d\beta-\tilde{d} b\right) \end{align} (148)  \begin{align} \tilde{c}' &=\left(r_1^2+r_2^2\right)\left(\tilde{d}\tilde{\beta}- d\tilde{b}\right) \end{align} (149) and   \begin{align*} r_1b'+r_2\tilde{\beta}' &= (r_1^2+r_2^2)(-r_1 c^2 b+r_2\tilde{c}^2\tilde{\beta}-r_1 d\tilde{d}\tilde{\beta}+r_2 d \tilde{d} b+2 s_1 b-2s_2\tilde{b}\\ &\quad -r_1^2\tau f-r_2^2 \tau f-r_1 c f+r_2 \tilde{c} f)\\ r_2\tilde{b}'+r_1\beta' &= (r_1^2+r_2^2)(-r_2 \tilde{c}^2 \tilde{b}+r_1 c^2 \beta-r_2 d\tilde{d}\beta+r_1 d \tilde{d} \tilde{b}+2 s_2 \tilde{b}-2s_1 b \\ &\quad -r_2^2\tau \tilde{f}-r_1^2 \tau \tilde{f}-r_2 \tilde{c} \tilde{f}+r_1 c \tilde{f}). \end{align*} Direct calculations show that all these equations are satisfied by (108)–(117).□ Lemma 6.7. Theorem 6.2 and Proposition 2.2 both hold true in the case $$\tau =0$$. Proof. For $$\tau =0$$, Proposition 2.2 was obtained in [14, Theorem 2.3], and Equations (108)–(111) were obtained in [14, Theorem 2.4]. 
The other equations in Theorem 6.2 then follow from (136)–(139) and (145)–(146). □ With the help of Lemmas 6.6–6.7, one can prove Theorem 6.2 and Proposition 2.2 for $$\tau \neq 0$$ in the same way as in [15, Section 5], where the symmetric case was considered. More precisely, the approach in [15, Section 5] makes it possible to prove the existence of a matrix $$M(z)$$ which solves the RH problem 2.1, and which additionally satisfies the Lax pair   $$\frac{\partial}{\partial z}M =UM,\quad \frac{\partial}{\partial \tau}M=WM,$$ (150) where the coefficient matrices $$U,W$$ are given by (126) and (128), respectively, with the numbers $$c,\tilde {c},d,\tilde {d},\ldots$$ involved in these matrices given by the explicit Painlevé-type formulas in (108)–(117). This requires a lengthy and tedious calculation following exactly the same plan as in [15]; we do not go into the details. In particular, this implies the existence part in Proposition 2.2 (the uniqueness part is a standard fact). Theorem 6.2 is another consequence of the preceding construction. Indeed, from Lemma 6.5 we already know that the RH matrix $$M(z)$$ satisfies the Lax pair (150), where the coefficient matrices $$U,W$$ are given by (126) and (128), respectively, with the numbers $$c,\tilde {c},d,\tilde {d},\ldots$$ involved in these matrices obtained from the residue matrix $$M_1$$ in (105). The construction in the previous paragraph shows that the same holds with the numbers $$c,\tilde {c},d,\tilde {d},\ldots$$ given by the right-hand sides of equations (108)–(117). Since the numbers in the left-hand sides of the latter equations are uniquely retrievable from the matrices $$U,W$$ in (126) and (128), we then obtain Theorem 6.2. 6.6 Alternative approach to Theorem 6.2 The above reasoning does not give any insight into the origin of the expressions in Theorem 6.2. Therefore, in the remaining part of this section, let us deduce these formulas in a more direct way. 
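For context before the alternative derivation: the compatibility conditions (134)–(135) used in Section 6.5 are the standard zero-curvature identities, which hold automatically whenever the coefficient matrices come from a smooth invertible matrix via a Lax system. A minimal sympy sketch with a toy $$2\times 2$$ matrix $$M$$ (an editorial choice, unrelated to the actual RH matrix) illustrates the mechanism behind (134):

```python
import sympy as sp

z, s = sp.symbols('z s')

# Toy smooth invertible matrix M(z, s); any such choice works for this identity.
M = sp.Matrix([[sp.exp(z*s), 0],
               [z + s,       1]])

U = sp.simplify(M.diff(z) * M.inv())   # so that dM/dz = U M
V = sp.simplify(M.diff(s) * M.inv())   # so that dM/ds = V M

# Equality of mixed derivatives d^2 M/(dz ds) = d^2 M/(ds dz) forces (134):
# dU/ds = dV/dz - U V + V U.
residual = sp.simplify(U.diff(s) - V.diff(z) + U*V - V*U)
assert residual == sp.zeros(2, 2)
```

The same computation with a third flow variable gives (135); in the paper the nontrivial content is of course the converse direction, namely that the explicit formulas (108)–(117) make the specific matrices (126)–(128) compatible.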
The calculations below are partly heuristic in the sense that we will make an ansatz (159), (167). We start with Lemma 6.8. The numbers $$d,\tilde {d}$$ in (105) satisfy the system of coupled second-order differential equations   \begin{align} \frac{\partial^2 d}{\partial s^2} &= 4\tau(r_1\sigma_1-r_2\sigma_2)\frac{\partial d}{\partial s} -4(r_1^2+r_2^2)(\sigma_1^2+\sigma_2^2) \tau^2 d\nonumber\\ &\quad +8\frac{(r_1\sigma_2+r_2\sigma_1)^2}{r_1 r_2} d^2\tilde{d}+8\frac{(r_1\sigma_2+r_2\sigma_1)^3}{r_1r_2(r_1^2+r_2^2)}sd, \end{align} (151)  \begin{align} \frac{\partial^2 \tilde{d}}{\partial s^2} &= -4\tau(r_1\sigma_1-r_2\sigma_2)\frac{\partial\tilde{d}}{\partial s} -4(r_1^2+r_2^2)(\sigma_1^2+\sigma_2^2) \tau^2 \tilde{d}\nonumber\\ &\quad +8\frac{(r_1\sigma_2+r_2\sigma_1)^2}{r_1 r_2} \tilde{d}^2 d+8\frac{(r_1\sigma_2+r_2\sigma_1)^3}{r_1r_2(r_1^2+r_2^2)}s\tilde{d}. \end{align} (152) Moreover,   $$\frac{\partial d}{\partial\tau}= -\frac{r_1r_2\left(r_1^2+r_2^2\right)}{\sigma_1r_2+\sigma_2 r_1}\tau\frac{\partial d}{\partial s} + \left(r_1^2+r_2^2\right)^2 \frac{\sigma_1 r_2-\sigma_2 r_1}{\sigma_1r_2+\sigma_2 r_1}\tau^2 d+2\left(r_1 s_1- r_2 s_2\right) d.$$ (153) Proof. Equation (151) follows from (136)–(137) and (140)–(143) after some lengthy algebraic manipulations. Equation (152) follows by symmetry. To obtain (153), first note that (136)–(137) imply the relations   \begin{align} &r_2\left(\tilde{c} d-b\right)-r_1\left(cd-\tilde{\beta}\right)+\left(r_1^2+r_2^2\right)\tau d=0, \end{align} (154)  \begin{align} &r_1 r_2 \frac{\partial d}{\partial s} = \left(\sigma_1 r_2+\sigma_2 r_1\right)\left(r_2\tilde{c} d-r_2b+r_1c d-r_1\tilde{\beta}\right)+\left(r_1^2+r_2^2\right)\left(\sigma_1 r_2-\sigma_2 r_1\right)\tau d. 
\end{align} (155) Now, by eliminating $$f$$ from (144)–(145), we get   \begin{align} \frac{\partial d}{\partial\tau} &= \left(r_1^2+r_2^2\right) \left(r_2 b + r_1\tilde{\beta}\right) \tau-\left(r_2 b-r_1\tilde{\beta}\right) \left(r_1 c+r_2 \tilde{c}\right)\nonumber\\ &\quad -d \left(r_1^2 c^2-r_2^2 \tilde{c}^2\right)+2\left(r_1 s_1- r_2 s_2\right) d. \end{align} (156) On account of (154) this becomes   $\frac{\partial d}{\partial\tau} = \left(r_1^2+r_2^2\right) \left(r_2 b + r_1\tilde{\beta}-r_1 cd-r_2 \tilde{c} d\right)\tau +2\left(r_1 s_1- r_2 s_2\right) d.$ Combining this with (155), we obtain (153). □

We seek a solution of the differential equations (151)–(152) in the form   $$d = e^{h}g,\quad \tilde{d} = e^{-h} g,$$ (157) with $$h=h(s,\tau )$$ an odd function of $$\tau$$ and $$g=g(s,\tau )$$ an even function of $$\tau$$ (recall Lemma 6.1). Plugging this into (151), we find (with the prime again denoting the derivative with respect to $$s$$)   \begin{align} g''+2h'g'+((h')^2+h'') g &= 4\tau(r_1\sigma_1-r_2\sigma_2)(g'+h' g)\nonumber\\ &\quad -4(r_1^2+r_2^2)\tau^2 (\sigma_1^2+\sigma_2^2) g+8\frac{(r_1\sigma_2+r_2\sigma_1)^2}{r_1 r_2} g^3\nonumber\\ &\quad +8\frac{(r_1\sigma_2+r_2\sigma_1)^3}{r_1r_2(r_1^2+r_2^2)}sg. \end{align} (158)

To make further progress, we use the ansatz   $$\frac{\partial^2 h}{\partial s^2}=0,\quad \frac{\partial h}{\partial s} = 2\left(r_1\sigma_1-r_2\sigma_2\right)\tau.$$ (159) After a little calculation, (158) then simplifies to   $$g'' = -4\left(r_1\sigma_2+r_2\sigma_1\right)^2\tau^2 g+8\frac{\left(r_1\sigma_2+r_2\sigma_1\right)^2}{r_1r_2} g^3+8\frac{\left(r_1\sigma_2+r_2\sigma_1\right)^3}{r_1r_2\left(r_1^2+r_2^2\right)}sg.$$ (160) We can relate (160) to the Painlevé II equation. 
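The two algebraic facts used here — the second-derivative formula for $$d=e^h g$$ behind the passage from (151) to (158), and the coefficient identity by which the ansatz (159) collapses (158) to (160) — can be checked symbolically. The following sympy sketch (not part of the original derivation; variable names are ours) verifies both.

```python
import sympy as sp

s = sp.symbols('s')
h = sp.Function('h')(s)
g = sp.Function('g')(s)

# (a) From (151) to (158): with d = e^h g,
#     d'' = e^h * (g'' + 2 h' g' + ((h')^2 + h'') g).
d = sp.exp(h)*g
lhs = sp.diff(d, s, 2)
rhs = sp.exp(h)*(g.diff(s, 2) + 2*h.diff(s)*g.diff(s)
                 + (h.diff(s)**2 + h.diff(s, 2))*g)
assert sp.simplify(lhs - rhs) == 0

# (b) Collapse of (158) to (160): under the ansatz (159),
#     h' = 2*(r1*sigma1 - r2*sigma2)*tau and h'' = 0, the net coefficient
#     of g combines via the identity
#     (r1*sigma1 - r2*sigma2)**2 - (r1**2 + r2**2)*(sigma1**2 + sigma2**2)
#         = -(r1*sigma2 + r2*sigma1)**2.
r1, r2, sigma1, sigma2 = sp.symbols('r1 r2 sigma1 sigma2')
identity = ((r1*sigma1 - r2*sigma2)**2
            - (r1**2 + r2**2)*(sigma1**2 + sigma2**2)
            + (r1*sigma2 + r2*sigma1)**2)
assert sp.expand(identity) == 0
```

Check (a) says the $$g'$$-term on the left of (158) cancels against $$4\tau(r_1\sigma_1-r_2\sigma_2)g'$$ on the right under (159), and check (b) is exactly the combination of the remaining $$g$$-coefficients that produces the first term of (160).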
We have that $$q=q(s)$$ satisfies the Painlevé II equation $$q''=s q+2q^3$$ if and only if   $$g\left(s\right) := c_1 q\left(c_2 s + c_3\right)$$ (161) satisfies   $$g'' = c_2^2 c_3 g+2 \frac{c_2^2}{c_1^2} g^3+ c_2^3 s g.$$ (162) Comparing coefficients with (160), we see that   \begin{align} c_1 &= \frac{\left(r_1r_2\right)^{1/6}}{\left(r_1^2+r_2^2\right)^{1/3}} \end{align} (163)  \begin{align} c_2 &= 2\frac{\left(r_1\sigma_2+r_2\sigma_1\right)}{\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{1/3}} \end{align} (164)  \begin{align} c_3 &= -\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{2/3}\tau^2. \end{align} (165)

Finally, substituting the formulas (157) and (161), that is,   $$d = e^{h}g =e^h \frac{\left(r_1r_2\right)^{1/6}}{\left(r_1^2+r_2^2\right)^{1/3}} q\left(2\frac{\left(r_1\sigma_2+r_2\sigma_1\right)}{\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{1/3}} s -\left(r_1r_2\left(r_1^2+r_2^2\right)\right)^{2/3}\tau^2\right)$$ (166) into (153) and using (159) again, we find after some calculations that   $$\frac{\partial h}{\partial \tau} = \left(r_2^4-r_1^4\right)\tau^2+2\left(r_1\sigma_1-r_2\sigma_2\right)s,$$ (167) where we assume that the choice of the Painlevé II solution $$q$$ in (166) is independent of $$\tau$$.

From (166)–(167) and the known result for $$\tau =0$$ [14, Theorem 2.4] we get the expression for $$d$$ in Theorem 2.9 (with $$q$$ the Hastings–McLeod solution to Painlevé II). By symmetry we obtain the expression for $$\tilde {d}$$. From (148), (136), (138) and a little calculation we then find   $\frac{\partial}{\partial\tau}c = -2r_1^{-1}\left(r_1^2+r_2^2\right)\tau C^{-2} q^2\left(\sigma\right)= \frac{\partial}{\partial\tau}\left(-r_1^{-1}C^{-1} u\left(\sigma\right)\right),$ where the second equality follows from (64) and (32). Combining this with the known result for $$\tau =0$$ [14, Theorem 2.4] we get the expression for $$c$$ in Theorem 2.9. From (136) and a little calculation we find the expression for $$b$$, while (145) yields the formula for $$f$$. 
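The coordinate change (161)–(165) can also be checked symbolically. The following sympy sketch (ours, not from the paper; all parameters are taken positive purely so that fractional powers split cleanly) verifies that (161) turns the Painlevé II equation into (162), and that the constants (163)–(165) reproduce the three coefficients of (160).

```python
import sympy as sp

s, Q, c1, c2, c3 = sp.symbols('s Q c1 c2 c3')
# positivity is an assumption made for this check only
r1, r2, sigma1, sigma2, tau = sp.symbols('r1 r2 sigma1 sigma2 tau',
                                         positive=True)

# (161) => (162): g(s) = c1*q(c2*s + c3) gives g'' = c1*c2**2*q''(u) at
# u = c2*s + c3 by the chain rule.  Imposing Painlevé II, q'' = u*q + 2*q**3,
# and writing q = g/c1 (Q stands for the value q(c2*s + c3)):
g = c1*Q
gpp = c1*c2**2*((c2*s + c3)*Q + 2*Q**3)
eq162 = c2**2*c3*g + 2*(c2**2/c1**2)*g**3 + c2**3*s*g
assert sp.expand(gpp - eq162) == 0

# The constants (163)-(165) must reproduce the three coefficients of (160).
a = r1*sigma2 + r2*sigma1
C1 = (r1*r2)**sp.Rational(1, 6)/(r1**2 + r2**2)**sp.Rational(1, 3)
C2 = 2*a/(r1*r2*(r1**2 + r2**2))**sp.Rational(1, 3)
C3 = -(r1*r2*(r1**2 + r2**2))**sp.Rational(2, 3)*tau**2

def is_zero(expr):
    # split powers of positive products before simplifying
    return sp.simplify(sp.expand_power_base(sp.expand(expr))) == 0

assert is_zero(C2**2*C3 + 4*a**2*tau**2)                    # coefficient of g
assert is_zero(2*C2**2/C1**2 - 8*a**2/(r1*r2))              # coefficient of g^3
assert is_zero(C2**3 - 8*a**3/(r1*r2*(r1**2 + r2**2)))      # coefficient of s*g
```

Note also that (159) and (167) have equal mixed partial derivatives, $$\partial^2 h/\partial s\,\partial\tau = 2(r_1\sigma_1-r_2\sigma_2)$$ from either equation, so they can be consistently integrated to a function $$h(s,\tau)$$ that is odd in $$\tau$$, as Lemma 6.1 requires.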
Finally, the remaining formulas in Theorem 2.9 follow from symmetry considerations; see Lemma 6.1.

Funding

The author is a Postdoctoral Fellow of the Fund for Scientific Research—Flanders (Belgium).

References

1. Adler M., Delépine J., van Moerbeke P., and Vanhaecke P. "A PDE for non-intersecting Brownian motions and applications." Advances in Mathematics 226 (2011): 1715–55.
2. Adler M., Ferrari P. L., and van Moerbeke P. "Non-intersecting random walks in the neighborhood of a symmetric tacnode." Annals of Probability 41 (2013): 2599–647.
3. Adler M., Johansson K., and van Moerbeke P. "Double Aztec diamonds and the tacnode process." Advances in Mathematics 252 (2014): 518–71.
4. Adler M., van Moerbeke P., and Vanderstichelen D. "Non-intersecting Brownian motions leaving from and going to several points." Physica D 241 (2012): 443–60.
5. Anderson G. W., Guionnet A., and Zeitouni O. An Introduction to Random Matrices. Cambridge Studies in Advanced Mathematics 118. Cambridge: Cambridge University Press, 2010.
6. Baik J. "Painlevé formulas of the limiting distributions for nonnull complex sample covariance matrices." Duke Mathematical Journal 133 (2006): 205–35.
7. Baik J., Liechty K., and Schehr G. "On the joint distribution of the maximum and its position of the Airy$$_2$$ process minus a parabola." Journal of Mathematical Physics 53 (2012): 083303.
8. Bertola M. and Cafasso M. "Riemann-Hilbert approach to multi-time processes; the Airy and the Pearcey case." Physica D 241 (2012): 2237–45.
9. Bertola M. and Cafasso M. 
"The gap probabilities of the tacnode, Pearcey and Airy point processes, their mutual relationship and evaluation." Random Matrices: Theory and Applications 2 (2013): 1350003.
10. Bleher P. and Its A. "Double scaling limit in the random matrix model: the Riemann-Hilbert approach." Communications on Pure and Applied Mathematics 56 (2003): 433–516.
11. Claeys T. and Kuijlaars A. B. J. "Universality of the double scaling limit in random matrix models." Communications on Pure and Applied Mathematics 59 (2006): 1573–603.
12. Delvaux S. "Non-intersecting squared Bessel paths at a hard-edge tacnode." Communications in Mathematical Physics 324 (2013): 715–66.
13. Delvaux S. and Kuijlaars A. B. J. "A graph-based equilibrium problem for the limiting distribution of non-intersecting Brownian motions at low temperature." Constructive Approximation 32 (2010): 467–512.
14. Delvaux S., Kuijlaars A. B. J., and Zhang L. "Critical behavior of non-intersecting Brownian motions at a tacnode." Communications on Pure and Applied Mathematics 64 (2011): 1305–83.
15. Duits M. and Geudens D. "A critical phenomenon in the two-matrix model in the quartic/quadratic case." Duke Mathematical Journal 162 (2013): 1383–462.
16. Eynard B. and Orantin N. "Topological recursion in enumerative geometry and random matrices." Journal of Physics A 42, no. 29 (2009): 293001 (117 pp).
17. Ferrari P. L. and Vető B. "Non-colliding Brownian bridges and the asymmetric tacnode process." Electronic Journal of Probability 17 (2012): 1–17.
18. Flaschka H. and Newell A. C. 
"Monodromy and spectrum-preserving deformations I." Communications in Mathematical Physics 76 (1980): 65–116.
19. Fokas A. S., Its A. R., Kapaev A. A., and Novokshenov V. Yu. Painlevé Transcendents: A Riemann-Hilbert Approach. Mathematical Surveys and Monographs 128. Providence, RI: American Mathematical Society, 2006.
20. Geudens D. and Zhang L. "Transitions between critical kernels: from the tacnode kernel and critical kernel in the two-matrix model to the Pearcey kernel." International Mathematics Research Notices (2014). doi:10.1093/imrn/rnu105.
21. Hastings S. P. and McLeod J. B. "A boundary value problem associated with the second Painlevé transcendent and the Korteweg-de Vries equation." Archive for Rational Mechanics and Analysis 73 (1980): 31–51.
22. Johansson K. "Noncolliding Brownian motions and the extended tacnode process." Communications in Mathematical Physics 319 (2013): 231–67.
23. Katori M. and Tanemura H. "Noncolliding Brownian motion and determinantal processes." Journal of Statistical Physics 129 (2007): 1233–77.
24. Katori M. and Tanemura H. "Noncolliding processes, matrix-valued processes and determinantal processes." Sugaku Expositions 24 (2011): 263–89.
25. Kuijlaars A. B. J. "The tacnode Riemann-Hilbert problem." Constructive Approximation 39 (2014): 197–222.
26. Moreno Flores G. R., Quastel J., and Remenik D. "Endpoint distribution of directed polymers in 1+1 dimensions." Communications in Mathematical Physics 317 (2013): 363–80.
27. Schehr G. "Extremes of $$N$$ vicious walkers for large $$N$$: application to the directed polymer and KPZ interfaces." Journal of Statistical Physics 149 (2012): 385–410.
28. Tracy C. and Widom H. 
"Level-spacing distributions and the Airy kernel." Communications in Mathematical Physics 159 (1994): 151–74.

Communicated by Prof. Kurt Johansson

© The Author(s) 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

Journal: International Mathematics Research Notices, Oxford University Press. Published: Jan 1, 2018.
