Information and Inference: A Journal of the IMA, Advance Article, May 25, 2018

20 pages

Quantization for low-rank matrix recovery

- Publisher: Oxford University Press
- Copyright: © The Author(s) 2018. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
- ISSN: 2049-8764
- eISSN: 2049-8772
- DOI: 10.1093/imaiai/iay007

Abstract

We study Sigma–Delta $$(\varSigma\!\varDelta) $$ quantization methods coupled with appropriate reconstruction algorithms for digitizing randomly sampled low-rank matrices. We show that the reconstruction error associated with our methods decays polynomially with the oversampling factor, and we leverage our results to obtain root-exponential accuracy by optimizing over the choice of quantization scheme. Additionally, we show that a random encoding scheme, applied to the quantized measurements, yields a near-optimal exponential bit rate. As an added benefit, our schemes are robust both to noise and to deviations from the low-rank assumption. In short, we provide a full generalization of analogous results, obtained in the classical setup of band-limited function acquisition, and more recently, in the finite frame and compressed sensing setups, to the case of low-rank matrices sampled with sub-Gaussian linear operators. Finally, we believe our techniques for generalizing results from the compressed sensing setup to the analogous low-rank matrix setup are applicable to other quantization schemes.

1. Introduction

Let $$\mathcal{M}: \mathbb{R}^{n_1\times n_2} \to \mathbb{R}^m $$ be a linear map that acts on matrices X to produce measurements \begin{equation} y= \mathcal{M}(X) = \sum\limits_{i=1}^m \langle X, A_i \rangle e_i, \end{equation} (1) where the vectors $$e_i$$ are the standard basis vectors for $$\mathbb{R}^m$$, and each $$A_i$$ is a matrix in $$\mathbb{R}^{n_1\times n_2}$$, i ∈ {1, … , m}. Here, the inner product is the standard Hilbert–Schmidt inner product given by $$\langle Y,Z \rangle = \sum _{i,\,j} Y_{ij}Z_{ij}$$. Note that for every linear operator $$\mathcal{M}$$ as above, there exists an $$m \times (n_1 n_2)$$ matrix $$A_{\mathcal{M}}$$, such that for all $$X \in \mathbb{R}^{n_1\times n_2}$$ \begin{equation*} \mathcal{M}(X) = A_{\mathcal{M}} \vec{X}. 
\end{equation*}

Here $$\vec{X} \in \mathbb{R}^{n_1n_2}$$, the vectorized version of the matrix X, is obtained by stacking the columns of X. Low-rank matrix recovery is concerned with approximating a rank k matrix X from y, knowing the operator $$\mathcal{M}$$. It is primarily of interest in the regime where $$m \ll n_1 n_2$$, and many recent results propose recovery algorithms and prove recovery guarantees when $$m \geq C k\max \{n_1,n_2\}$$, where C > 0 is some absolute constant [7,8,23,33]. For example, when the entries of the matrix representation $$A_{\mathcal{M}}$$ are independent Gaussian or sub-Gaussian random variables, and $$y = \mathcal{M}(X) + e$$ with $$\|e\|_2 \leq \varepsilon $$, one can solve the convex optimization problem \begin{equation} {X^\sharp}:=\arg\min_Z \| Z \|_* \quad \textrm{subject to} \quad \left\| \mathcal{M}(Z) - y \right\|_2 \leq \varepsilon. \end{equation} (2) Then, with high probability on the draw of $$\mathcal{M}$$, and uniformly for all $$n_1\times n_2$$ matrices X, we have \begin{equation} \| X^{\sharp} - X\|_F \leq C\left (\frac{\sigma_k(X)_*}{\sqrt{k}} + \varepsilon \right), \end{equation} (3) as shown in, e.g., [7]. Above, $$\|\cdot\|_F$$ denotes the Frobenius norm $$\|X\|_F = \sqrt{\sum\nolimits_{i,j}X^{2}_{i,j}} $$ on matrices induced by the Hilbert–Schmidt inner product, $$\| Z\|_*$$ denotes the nuclear norm of Z, i.e., the sum of its singular values, and $$ \sigma_k(Z)_*:= \min\limits_{\operatorname{rank}(V)=k} \| Z - V\|_* $$ denotes the error, measured in the nuclear norm, associated with the best rank k approximation of a matrix.

1.1 Background and prior work

Low-rank matrix recovery has seen a wide range of applications, ranging from quantum state tomography [17] and collaborative filtering [31] to sensor localization [32] and face recognition [6], to name a few. Nuclear norm minimization was proposed by Fazel in [14] as a means of finding a matrix of minimal rank in a given convex set. 
Fazel motivates this through the observation that nuclear norm minimization is the convex relaxation of the rank–minimization problem, which has been shown to be NP-hard. Since then, and with the advent of compressed sensing, there has been much work on recovering low-rank matrices from linear measurements. For example, [33] considers recovering low-rank matrices given random linear measurements and establishes recovery guarantees given that the sampling scheme satisfies the matrix restricted isometry property (RIP), which we define in the subsequent section. Perhaps not surprisingly, this analysis closely follows that of sparse vector recovery under ℓ1 minimization, as it is known that random ensembles of linear maps satisfy the matrix RIP with high probability [16,33]. This led to a flurry of papers on nuclear norm minimization for matrix recovery in various contexts, see [6], [9] and [29] for example. While the theoretical results on nuclear norm minimization have been promising, convex optimization practically necessitates the use of digital computers for recovering the underlying matrix. It behoves the theory, therefore, to take into account that the measurements must be converted to bits so that numerical solvers can handle them. Indeed, quantization is the necessary step in data acquisition by which measurements taking values in the continuum are mapped to discrete sets. Without any claim to comprehensiveness, we are aware of the following developments on quantization in the low-rank matrix completion setting, i.e., the setting where one quantizes a random subset of the entries of the matrix directly. Davenport and coauthors in [11] consider recovering a rank k matrix $$X\in \mathbb{R}^{n_1\times n_2}$$ given 1-bit measurements of a subset of the entries, sampled according to a distribution that may depend on the entries. 
They recover an estimate $$\hat{X}$$ of the sampled matrix through maximum likelihood estimation with a nuclear norm constraint and derive error bounds which decay, as a function of the number of measurements m, like $$O(m^{-1/2})$$. Shortly thereafter, Cai and Zhou in [4] consider reconstruction given 1-bit measurements of the entries under more general sampling schemes of the indices. Unlike the argument in [11], Cai and Zhou impose a max-norm constraint on the maximum likelihood estimation to enforce the low-rank condition. Under this regime the scaled Frobenius norm error decay is also $$O(m^{-1/2})$$. Bhaskar and Javanmard in [2] modify the optimization problem of [11] so that it imposes an exact rank constraint in place of the nuclear norm. This yields a non-convex problem with associated computational challenges. Nevertheless, assuming one can solve this hard optimization problem, they obtain an error estimate that decays like $$O(m^{-4})$$, at the added cost of a much increased constant that scales like $$n^7k^3$$. Proceeding towards more general quantization alphabets, the authors of [28] consider low-rank matrix completion via nuclear norm penalized maximum likelihood estimation given quantized measurements of the entries with unknown quantization bin boundaries. They propose an optimization procedure which learns the quantization bin boundaries and recovers the matrix in an alternating fashion. No theoretical guarantees are given to delineate the relationship between the number of measurements and the reconstruction error. The authors of [27] propose a low-rank matrix recovery algorithm given quantized measurements of the entries from a finite alphabet under some sampling distribution of the indices. Like the aforementioned schemes, they propose maximum likelihood estimation, but with a nuclear norm constraint to enforce the low-rank condition. 
Specifically, given $$m \geq C \max\{n_1,n_2\}\log(\max\{n_1,n_2\})$$ measurements, where C > 0 is a universal constant, they show that the scaled Frobenius error decays like $$m^{-1}$$. In contrast to the above works, we study the quantization problem in the low-rank matrix recovery setting given linear measurements of the form (1), where the matrices $$A_i$$ are sub-Gaussian.

1.2 Contributions

To the best of our knowledge, we provide the first theoretical guarantees of low-rank matrix recovery from ΣΔ-quantized sub-Gaussian linear measurements. Our result holds for stable ΣΔ quantizers (defined in Section 2.2) of arbitrary order, and our bounds apply to the particular case of 1-bit quantization; that is, we can recover scaling information in this setting. Thus, we generalize a result from [34] that recovers sparse vectors from quantized noisy measurements so that it now applies to the low-rank matrix setting, as shown in Theorem 12. Our main tool for achieving this extension is a modification of the technique of Oymak et al. [30] for converting compressed sensing results to the low-rank matrix setting. We show that the reconstruction error under constrained nuclear norm minimization is bounded by \begin{equation*} \|{X^\sharp}-X\|_F \leq C\left(\left(\frac{m}{\ell}\right)^{-r+1/2}\beta+\frac{\sigma_k(X)_*}{\sqrt k}+\sqrt{\frac{m}{\ell}}\epsilon \right), \end{equation*} thus showing that our reconstruction scheme is robust both to noise and to deviations from the low-rank assumption. Above, r denotes the order of the ΣΔ scheme and β the step size of the associated alphabet (see Section 2.2), ℓ is of order $$k \max\{n_1, n_2\}$$ and $$\frac{m}{\ell}$$ denotes the oversampling factor. Note that in the case of rank k matrices, with no measurement noise, our reconstruction error decays polynomially fast, namely as $$m^{-r+1/2}$$, thereby greatly improving on the rates obtained in the works cited above. 
Furthermore, by optimizing over the order of the ΣΔ reconstruction scheme, we show in Corollary 13 that our procedure attains root-exponential accuracy with respect to the oversampling factor. This generalizes the error decay seen in [34] for vectors. The robustness of the main result extends beyond quantization. We show in Corollary 14 that we can further reduce the total number of bits by encoding the quantized measurements using a discrete Johnson–Lindenstrauss [22] embedding into a lower-dimensional space. The resulting dramatic reduction in bit rate is coupled with only a small increase in reconstruction error. This, in turn, yields an exponentially decaying, i.e., optimal, relationship between the number of bits and the reconstruction error. Finally, we remark that the techniques used herein can be used to derive analogous results for other quantization schemes that share certain properties of ΣΔ quantization. Namely, suppose one is given a quantization map $$\mathcal{Q}$$ and a bijective linear map $$T: \mathbb{R}^n \to \mathbb{R}^n$$ which satisfy $$\|T(y - \mathcal{Q}(y))\| < C$$ for some norm ∥⋅∥ and some constant C that may depend on the quantization technique, but not on the dimensions. Then, the proof of Theorem 12, with a suitably altered decoder, can likely be modified to produce an analogous result for the new quantization scheme.

2. Preliminaries

2.1 Notation

For $$x \in \mathbb{R}^n$$, let supp(x) denote the set of indices i for which $$x_i$$ is non-zero, and let $$\varSigma _k^n:=\{x \in \mathbb{R}^n : |\operatorname{supp}(x)| \leq k\}$$ be the set of all k-sparse vectors in $$\mathbb{R}^n$$. For a matrix $$A\in \mathbb{R}^{n_1\times n_2}$$, we will denote its singular values by $$\sigma _i(A)$$ for i = 1, …, n, where $$n:=\min \{n_1,n_2\}$$ and $$\sigma _1(A)\geq \sigma _2(A)\geq \cdots \geq \sigma _n(A)$$. We will require the definitions of the well-known RIP, both for linear operators acting on sparse vectors and for linear operators acting on low-rank matrices. 
Definition 1 (Vector RIP (e.g., [5])) We say a linear operator $$\varPhi : \mathbb{R}^{n} \to \mathbb{R}^{m}$$ satisfies the vector RIP of order k and constant $$\delta _k$$ if for all $$x \in \varSigma _k^n$$, \begin{equation*} (1-\delta_k)\|x\|_2^2 \leq \|\varPhi x\|_2^2 \leq (1+\delta_k)\|x\|_2^2. \end{equation*} Definition 2 (Matrix RIP) We say a linear operator $$\mathcal{M}: \mathbb{R}^{n_1\times n_2} \to \mathbb{R}^{m}$$ satisfies the matrix RIP of order k and constant $$\delta _k$$ if for all matrices X of rank k or less we have \begin{equation*} (1-\delta_k)\|X\|_F^2 \leq \|\mathcal{M}( X) \|_2^2 \leq (1+\delta_k)\|X\|_F^2. \end{equation*} Definition 3 (Restriction [30]) Let $$\mathcal{M}: \mathbb{R}^{n_1\times n_2} \to \mathbb{R}^m $$ be a linear operator and assume without loss of generality that $$n_1 \leq n_2$$. Given a pair of matrices U and V with orthonormal columns, define $$\mathcal{M}_{U,V}$$, the (U, V) restriction of $$\mathcal{M}$$, by \begin{align*} \mathcal{M}_{U,V}: \mathbb{R}^{n_1} &\to \mathbb{R}^m \\ x &\mapsto \mathcal{M}\left(U\ \textrm{diag}(x) V^{\ast}\right)\!. \end{align*}

2.2 Preliminaries on $$\varSigma \varDelta $$ quantization

$$\varSigma \varDelta $$ quantizers were first proposed in the context of digitizing oversampled band-limited functions by [20], and their mathematical properties have been studied since. In this band-limited context, the $$\varSigma \varDelta $$ quantizer takes in a sequence of point evaluations of the function sampled at a rate exceeding the critical Nyquist rate and produces a sequence of quantized elements, i.e., elements from a finite set. Thus, the $$\varSigma \varDelta $$ quantizer is associated with this finite set, say $$\mathcal{A}\subset \mathbb{R}$$ (called the quantization alphabet), and also with a scalar quantizer \begin{align} Q_{\mathcal{A}}: \mathbb{R} &\to \mathcal{A} \nonumber \\ z &\mapsto \arg\min_{v\in\mathcal{A}} | v - z |. 
\end{align} (4)

$$\varSigma \varDelta $$ schemes build on scalar quantization by incorporating a state variable sequence u, which is recursively updated. In an rth-order $$\varSigma \varDelta $$ scheme, a function, say $$\rho _r$$, of r previous values of u and the current measurement is fed into the scalar quantizer to produce an element from $$\mathcal{A}$$. For example, in the band-limited context the measurements are simply the pointwise evaluations of the function. Defining $$u_i = 0$$ for i ≤ 0 and denoting the measurements by $$y_i$$, we have the recursion \begin{equation} q_i = Q_{\mathcal{A}} \left( \rho_r (u_{i-1},\ldots,u_{i-r+1},y_i) \right) \end{equation} (5) \begin{equation} (\varDelta^r u)_i = y_i - q_i. \end{equation} (6) Here $$(\varDelta u)_i := u_i - u_{i-1}$$ and $$\varDelta ^r u := \varDelta (\varDelta ^{r-1}u)$$. Thus, the rth-order $$\varSigma \varDelta $$ quantizer updates the state variables as a solution to an rth-order difference equation. To give a concrete example, the simplest first-order $$\varSigma \varDelta $$ scheme operates by running the recursion \begin{equation} q_i = Q_{\mathcal{A}} ( y_i + u_{i-1} ) \end{equation} (7) \begin{equation} u_i = u_{i-1} + y_i - q_i. \end{equation} (8) Usually, the alphabet $$\mathcal{A}$$ associated with $$\varSigma \varDelta $$ quantizers is of the form \begin{equation*} \mathcal{A}:=\left\{ \pm (j-1/2)\beta,\ j = 1, \ldots,L\right\}\!. \end{equation*} We refer to such an $$\mathcal{A}$$ as a 2L-level alphabet with step size $$\beta $$. In particular, when L = 1, we have a 1-bit alphabet. For reasons related to building a circuit that implements the $$\varSigma \varDelta $$ quantization scheme and bounding the reconstruction error, an important consideration is the so-called stability of the $$\varSigma \varDelta $$ scheme. 
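As a quick illustration, the first-order recursion (7)–(8) with a 1-bit alphabet can be simulated directly. The following Python sketch is illustrative only (the input range, step size and sequence length are assumptions, not choices from the paper); it also checks that the state variable remains bounded, previewing the stability discussion:

```python
import numpy as np

def scalar_quantizer(z, alphabet):
    # Q_A(z): the nearest element of the alphabet, as in (4).
    return alphabet[np.argmin(np.abs(alphabet - z))]

def first_order_sigma_delta(y, alphabet):
    # Runs the recursion q_i = Q_A(y_i + u_{i-1}), u_i = u_{i-1} + y_i - q_i.
    u_prev, q, u = 0.0, [], []
    for yi in y:
        qi = scalar_quantizer(yi + u_prev, alphabet)
        u_prev = u_prev + yi - qi
        q.append(qi)
        u.append(u_prev)
    return np.array(q), np.array(u)

beta = 1.0
# 1-bit (L = 1) alphabet {-beta/2, +beta/2}.
alphabet = np.array([-beta / 2, beta / 2])

rng = np.random.default_rng(3)
# Inputs bounded well inside the alphabet range keep the scheme stable.
y = rng.uniform(-0.4, 0.4, size=100)
q, u = first_order_sigma_delta(y, alphabet)

# State variables stay bounded by beta/2, and (Delta u)_i = y_i - q_i holds.
assert np.max(np.abs(u)) <= beta / 2 + 1e-12
assert np.allclose(np.diff(np.concatenate(([0.0], u))), y - q)
```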
A stable rth-order $$\varSigma \varDelta $$ scheme produces bounded state variables with \begin{equation} \|u\|_\infty < \gamma(r), \end{equation} (9) whenever $$\|y\|_{\infty }$$ is bounded above. Here, γ(r) is some constant which may depend on r. For example, for the 2L-level alphabet described above, coupled with a particular choice of $$\rho _r$$ and ΣΔ order r, it is sufficient to choose $$L \geq 2\left\lceil \frac{\|y\|_{\infty }}{\beta}\right\rceil + 2^r + 1$$ to guarantee that (9) holds with γ(r) = β/2 [1,10]. Note that with such a choice the size of the alphabet grows exponentially as a function of the $$\varSigma \varDelta $$ order. On the other hand, given a fixed alphabet, [10] constructed the first family of functions $$\rho _r$$ with associated stability constants $$\gamma (r)$$. Subsequently, the dependence on r was improved upon by [12] and [19] via different constructions of $$\rho _r$$. In these papers it was shown that $$\varSigma\!\varDelta $$-quantized measurements of a band-limited function f, sampled at a rate $$\lambda $$ times the critical Nyquist rate, can be used to obtain an approximation $$\hat{f}$$ of f satisfying \begin{equation*} \|\,\hat{f}-f \|_\infty \leq C \gamma(r)\lambda^{-r}. \end{equation*} By optimizing the right-hand side above, i.e., $$\gamma (r)\lambda ^{-r}$$, as a function of r, [12] and [19] obtain the error rates \begin{equation*} \|\,\hat{f}-f\|_\infty \leq C\,\textrm{e}^{-c\lambda}, \end{equation*} where c < 1 is a known constant depending on the family of schemes. Outside of the band-limited context, $$\varSigma \varDelta $$ schemes have been proposed and studied for quantizing finite-frame coefficients [1,21,25,26] as well as compressed sensing measurements [3,15,18,34]. 
In both these contexts, given a linear map $$\varPhi : \mathbb{R}^{n} \to \mathbb{R}^m$$, absent noise, one obtains measurements \begin{equation*} y = \varPhi x \end{equation*} of a vector $$x \in \mathcal{X} \subset \mathbb{R}^n$$ and quantizes using an rth-order stable $$\varSigma \varDelta $$ scheme. To ensure boundedness of the resulting state variable, typically one has $$\mathcal{X} \subset \left \{x\in \mathbb{R}^n: \| \varPhi x\|_\infty < 1 \right \}$$. One may also enforce additional restrictions on elements of $$\mathcal{X}$$, such as k-sparsity. Here, as before, one runs a stable rth-order $$\varSigma \varDelta $$ quantization scheme \begin{equation} Q_{\varSigma\varDelta}^{(r)}: \mathbb{R}^m \to \mathcal{A}^m. \end{equation} (10) Writing the $$\varSigma \varDelta $$ state equations (5) in matrix–vector form yields \begin{equation} y - q = D^r u, \end{equation} (11) where $$D\in \mathbb{R}^{m\times m}$$ is the lower bi-diagonal difference matrix with 1 on the main diagonal and −1 on the sub-diagonal. In analogy with the band-limited case, here one defines the oversampling factor as the ratio of the number of measurements m to the minimal number $$m_0$$ needed to ensure that $$\varPhi $$ is injective (or stably invertible) on $$\mathcal{X}$$. For example, $$\lambda := \frac{m}{n}$$ in the finite-frames setting when $$\mathcal{X}$$ is the Euclidean ball, and $$\lambda := \frac{m}{k \log (n/k)} $$ in the compressed sensing context when $$\mathcal{X}$$ is the intersection of the Euclidean ball with the set of k-sparse vectors in $$\mathbb{R}^n$$. As in the band-limited context, one wishes to bound the reconstruction error as a function of $$\lambda $$. 
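The matrix–vector form (11) of the state equations can be checked numerically. The sketch below runs a greedy second-order scheme (one simple choice of $$\rho _r$$, taken as an assumption here, with an alphabet assumed large enough to avoid saturation) and verifies $$y - q = D^2 u$$ against the bi-diagonal matrix D:

```python
import numpy as np

m = 50
# Lower bi-diagonal difference matrix: 1 on the diagonal, -1 below it.
D = np.eye(m) - np.eye(m, k=-1)

beta = 0.25
# Multi-level alphabet {±(j - 1/2)β}; 16 levels is an assumption made
# so that the greedy second-order scheme never saturates here.
levels = np.arange(1, 9)
alphabet = np.concatenate((-(levels - 0.5) * beta, (levels - 0.5) * beta))

rng = np.random.default_rng(4)
y = rng.uniform(-1, 1, size=m)

# Greedy second-order scheme: u_i = 2u_{i-1} - u_{i-2} + y_i - q_i,
# with q_i chosen to make |u_i| as small as possible.
u = np.zeros(m)
q = np.zeros(m)
u1 = u2 = 0.0   # u_{i-1} and u_{i-2}, zero-initialized as in (5)-(6)
for i in range(m):
    z = 2 * u1 - u2 + y[i]
    q[i] = alphabet[np.argmin(np.abs(alphabet - z))]
    u[i] = z - q[i]
    u2, u1 = u1, u[i]

# Matrix-vector form of the state equations: y - q = D^2 u.
assert np.allclose(D @ D @ u, y - q)
```

The identity holds by construction for any order r with $$D^r$$; the second-order case simply makes the difference-equation structure visible.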
A typical result states that, provided $$\varPhi $$ satisfies certain assumptions, there exists a reconstruction map \begin{equation} \mathcal{D}: \mathcal{A}^m \to \mathbb{R}^n \end{equation} (12) such that for all $$x\in \mathcal{X}$$ and $$\hat{x}:= \mathcal{D} \big (Q_{\varSigma \varDelta }^{(r)} (\varPhi x)\big ) $$, \begin{equation*} \|x - \hat{x} \|_2 \leq C \lambda^{-\alpha (r-1/2)}, \end{equation*} where $$\alpha \leq 1$$ is a parameter that, in the case of random measurements, controls the probability with which the result holds. Most relevant to this work, [34] proposes recovering arbitrary, that is, not necessarily strictly sparse, vectors in $$\mathbb{R}^n$$ from their noisy $$\varSigma \varDelta $$-quantized compressed sensing measurements by solving a convex optimization problem. In particular, one obtains the approximation $$\hat{x}$$ from $$q:=Q_{\varSigma \varDelta }^{(r)}(\varPhi x + e)$$, where $$\|e\|_\infty \leq \varepsilon $$, via \begin{align} (\hat{x},\hat{\nu}) := \arg\min\limits_{(z,\nu)}\|z\|_1 \ \textrm{subject to}\ & \left\|D^{-r}(\varPhi z+\nu-q )\right\|_2 \leq \gamma(r)\sqrt{m}\nonumber \\ \textrm{and}\ \ & \|\nu\|_2\leq \epsilon \sqrt m. \end{align} (13) Then, [34] shows that the reconstruction error due to quantization decays polynomially in the number of measurements, while maintaining stability and robustness against noise in the measurements and deviations from sparsity. Specifically, defining \begin{equation*} \sigma_k(x):=\min\limits_{v\in\varSigma_k^n} \|x-v\|_1, \end{equation*} the following theorem holds. Theorem 4 ([34]) Let k, ℓ, m, n be integers, and let $$P_\ell :\mathbb{R}^m \to \mathbb{R}^\ell $$ be the projection onto the first ℓ coordinates. Let $$D^{-r}=U\varSigma V^{\ast }$$ be the singular value decomposition of $$D^{-r}$$ and let $$\varPhi $$ be an m × n matrix such that $$\frac{1}{\sqrt{\ell }}P_\ell V^{\ast } \varPhi $$ has the vector RIP of order k and constant $$\delta _k < 1/9$$. 
Then, for all $$x\in \mathbb{R}^n$$ satisfying $$\|\varPhi x\|_\infty \leq \mu <1$$ and all e with $$\|e\|_\infty \leq \epsilon <1-\mu $$, the solution $$\hat{x}$$ of (13) with $$q= Q_{\varSigma \varDelta }^{(r)} (\varPhi x + e)$$ satisfies \begin{equation} \|\hat{x}-x\|_2 \leq C\left(\left(\frac{m}{\ell}\right)^{-r+1/2}\beta+\frac{\sigma_k(x)}{\sqrt k}+\sqrt{\frac{m}{\ell}}\epsilon \right). \end{equation} (14) Above, C does not depend on m, ℓ, n. The proof of Theorem 4 reveals that a more general statement is true. Indeed, it turns out that the only assumptions on $$\hat{x}$$ needed are that it satisfies the constraints in (13) and that $$\|\hat{x}\|_1 \leq \| x\|_1$$. Moreover, the only assumption needed on q is that it satisfies the state variable equations (11); it need not belong to $$\mathcal{A}^m$$. We will use this generalization in proving our main result, and we state it below for convenience. Theorem 5 Let $$k,\ell ,m,n, P_\ell , V^{\ast }, \varPhi $$ be as above. The following is true for all $$x\in \mathbb{R}^n$$ and $$e \in \mathbb{R}^m$$ with $$\|\varPhi x\|_\infty \leq \mu <1$$ and $$\|e\|_\infty \leq \epsilon <1-\mu$$: Suppose q is any vector which satisfies the relation $$\varPhi x + e - D^r u = q$$ with $$\|u\|_\infty \leq \gamma(r) < \infty$$. Suppose further that $$\hat{x}\in \mathbb{R}^n$$ is feasible for (13) and satisfies $$\|\hat{x}\|_1 \leq \|x\|_1$$. Then \begin{equation} \|\hat{x}-x\|_2 \leq C\left(\left(\frac{m}{\ell}\right)^{-r+1/2}\beta+\frac{\sigma_k(x)}{\sqrt k}+\sqrt{\frac{m}{\ell}}\epsilon \right), \end{equation} (15) where C does not depend on m, ℓ, n.

2.3 Preliminaries on low-rank recovery

A key idea in our proof is relating low-rank matrix recovery to sparse vector recovery, as was first done in [30], where the following useful lemmas were presented. Lemma 6 ([30]) If $$\mathcal{M}$$ satisfies the matrix RIP of order k and constant $$\delta _k$$, then for all unitary U, V, $$\mathcal{M}_{U,V}$$ satisfies the vector RIP of order k and constant $$\delta _k$$. 
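The restriction of Definition 3, which Lemma 6 relates back to the vector RIP, is easy to realize numerically. The sketch below (a Gaussian operator and small dimensions are illustrative assumptions) materializes $$\mathcal{M}_{U,V}$$ as an m × n₁ matrix, the form in which vector results such as Theorem 5 can be applied:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, m = 4, 6, 20   # n1 <= n2, as in Definition 3

# A Gaussian linear operator M, stored as one matrix A_i per measurement.
A = rng.standard_normal((m, n1, n2))
M = lambda X: np.array([np.sum(Ai * X) for Ai in A])   # y_i = <X, A_i>

# U (n1 x n1) and V (n2 x n1) with orthonormal columns, here taken from
# the SVD of a random matrix.
U, _, Vt = np.linalg.svd(rng.standard_normal((n1, n2)), full_matrices=False)
V = Vt.T   # n2 x n1, orthonormal columns

def M_UV(x):
    # The (U, V) restriction: x in R^{n1} -> M(U diag(x) V^*).
    return M(U @ np.diag(x) @ V.T)

# M_UV is linear in x, so it has an m x n1 matrix representation Phi.
Phi = np.column_stack([M_UV(e) for e in np.eye(n1)])
x = rng.standard_normal(n1)
assert np.allclose(M_UV(x), Phi @ x)
```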
Lemma 7 ([30]) Suppose $$W\in \mathbb{R}^{n_1\times n_2}$$ admits the singular value decomposition $$U_W\varSigma _W V_W^{\ast }$$, and suppose $$X_0 \in \mathbb{R}^{n_1 \times n_2}$$ admits the singular value decomposition $$U_{X_0}\varSigma _{X_0}{V_{X_0}}^{\ast }$$. Suppose that $$\|X_0 + W\|_* \leq \|X_0\|_*$$ and assume without loss of generality that $$n_1 \leq n_2$$. Then, there exists $$X_1 = U_W\, \textrm{diag}(z)V_W^{\ast }$$ for some $$z\in \mathbb{R}^{n_1}$$ such that $$\|X_1 + W\|_* \leq \|X_1\|_*.$$ In particular, the choice $$X_1 := -U_W \varSigma _{X_0}V_W^{\ast }$$ yields the inequality.

2.4 Preliminaries on probabilistic tools

Many of the classical compressed sensing results involve sampling a sparse signal with a Gaussian linear operator. It has been noticed, however, that only a handful of the special features of the Gaussian distribution are needed for these results to hold. Examples of such features include super-exponential tail decay, the existence of a moment generating function and moments which grow ‘slowly’; see [16] and [37] for example. A class of distributions which enjoys these features is the sub-Gaussian class, which we define below. Definition 8 Let X be a real-valued random variable. We say X is a sub-Gaussian random variable with parameter K if for all t ≥ 0 \begin{equation*} \mathbb{P}[|X|> t] \le \exp(1-t^2/K). \end{equation*} We say that a linear operator $$\mathcal{M}$$ is sub-Gaussian if its associated matrix $$A_{\mathcal{M}}$$ has entries drawn independently and identically from a sub-Gaussian distribution. The tail decay property in Definition 8 is equivalent to the jth root of the jth moment of a sub-Gaussian random variable X growing like $$\sqrt{j}$$, or, when $$\mathbb{E}X = 0$$, equivalent to the moment generating function existing over all of $$\mathbb{R}$$. See [16] and [37] for the details. In the course of proving our main result, we will need to show that a certain sub-Gaussian linear operator satisfies the matrix RIP. 
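The tail condition of Definition 8 can be sanity-checked empirically. The sketch below compares the empirical tails of a standard Gaussian sample against the bound with K = 2, a parameter value that is an assumption of this sketch (it suffices for a standard normal, since $$2(1-\varPhi(t)) \le \mathrm{e}^{-t^2/2}$$ for t ≥ 0):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal(200_000)   # a standard Gaussian sample
K = 2.0                            # assumed sub-Gaussian parameter

for t in (0.5, 1.0, 2.0, 3.0):
    empirical_tail = np.mean(np.abs(X) > t)
    bound = np.exp(1 - t**2 / K)
    # Definition 8: P[|X| > t] <= exp(1 - t^2 / K).
    assert empirical_tail <= bound
```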
Our proof will require a technique known as chaining. Talagrand makes the following definition in [36]. Definition 9 Given a metric space (T, d), an admissible sequence of T is a collection of subsets of T, $$\{T_s : s \geq 0\}$$, such that for all s ≥ 0, $$|T_s| \leq 2^{2^s}$$, and $$|T_0| = 1$$. The $$\gamma_2$$ functional is defined by \begin{equation*} \gamma_2(T,d) = \inf \sup_{t \in T} \sum_{s=0}^{\infty} 2^{s/2} \mathrm{d}(t,T_s), \end{equation*} where the infimum is taken with respect to all admissible sequences of T. It is common, given the unwieldy definition above, to control the $$\gamma_2$$ functional with the well-known Dudley integral [13]. In our case, we will consider a set of matrices $$\mathcal{S}\subset \mathbb{R}^{m\times n_{1}n_{2}}$$ equipped with the operator norm $$\|A\|_{2\to 2} = \sup_{\|x\|_2=1}\|Ax\|_2$$. With this, we have for some universal constant c > 0 \begin{equation*} \gamma_2(\mathcal{S}, \| \cdot \|_{2 \to 2}) \le c \int_{0}^{d_{2 \to 2}(\mathcal{S})} \sqrt{\log\left(N(\mathcal{S}, \| \cdot \|_{2 \to 2}; u)\right)} \, \textrm{d}u, \end{equation*} where $$d_{2\to2}(\mathcal{S}) = \sup_{A\in\mathcal{S}}\Vert A\Vert_{2\to 2}$$ is the operator norm radius of the set $$\mathcal{S}$$ and $$N(\mathcal{S}, \Vert\cdot\Vert_{2\to 2};u) $$ is the covering number of $$\mathcal{S}$$ with radius u. The following useful lemma from [24] will allow us to easily control the matrix RIP of the linear operators $$P_\ell V^{\ast }\mathcal{M}$$, where $$\mathcal{M}$$ is sub-Gaussian. Lemma 10 ([24]) Let $$\mathcal{S}$$ be a set of matrices, and let ξ be a sub-Gaussian random vector with independent and identically distributed (i.i.d.) mean zero, unit variance entries with parameter K. 
Set \begin{align*} \mu &= \gamma_2(\mathcal{S}, \| \cdot \|_{2 \to 2}) \big( \gamma_2(\mathcal{S}, \| \cdot \|_{2 \to 2}) + d_F(\mathcal{S}) \big) \\ \nu_1 &= d_{2 \to 2}(\mathcal{S}) \big( \gamma_2(\mathcal{S}, \| \cdot \|_{2 \to 2}) + d_F(\mathcal{S}) \big) \\ \nu_2 &= d_{2 \to 2}^2(\mathcal{S}), \end{align*} where $$d_{F}(\mathcal{S})=\sup_{A\in\mathcal{S}}\Vert A\Vert_{F}$$ is the radius of the set $$\mathcal{S}$$ with respect to the Frobenius norm. Then for all t > 0, \begin{equation*} \mathbb{P}\left[ \ \sup_{A \in \mathcal{S}} \Big| \|A \xi\|_2^2 - \mathbb{E}\|A \xi\|_2^2\Big| \ge c_1 \mu + t \right] \le 2 \exp \left( - c_2 \min \left\{ \frac{t^2}{\nu_1^2}, \frac{t}{\nu_2} \right\} \right), \end{equation*} where the constants $$c_1, c_2 > 0$$ depend only on K. Lemma 11 Let $$\mathcal{M}: \mathbb{R}^{n_1\times n_2} \to \mathbb{R}^m$$ be a mean zero, unit variance sub-Gaussian linear map with parameter K, let $$P_\ell: \mathbb{R}^m \to \mathbb{R}^\ell$$ be the projection map onto the first ℓ coordinates, and let $$V^{\ast} \in \mathbb{R}^{m\times m}$$ be a unitary matrix. Then there exist constants $$C_1, C_2$$, which may depend on K, such that for $$ \ell \geq C_{1}\frac{k (n_{1}+ n_{2}+1 )}{\delta^{2}_{k}} $$, the operator $$ \frac{1}{\sqrt{\ell}} P_{\ell}V^{\ast}\mathcal{M} $$ has the matrix RIP with constant $$\delta_k$$ with probability exceeding $$1-2\,\textrm{e}^{-C_2 \ell}$$. Proof. The proof will be an application of Lemma 10. To that end, observe that \begin{equation*} \frac{1}{\sqrt{\ell}}P_{\ell}V^{\ast} \mathcal{M} (X) = \frac{1}{\sqrt{\ell}}P_{\ell}V^{\ast}\textrm{diag}(\vec{X}^T) \xi, \end{equation*} where $$\textrm{diag}(\vec{X}^T) = I_{m\times m} \otimes \vec{X}^T$$ and $$\xi \in \mathbb{R}^{mn_1n_2}$$ is a sub-Gaussian random vector. Without loss of generality, we may assume that $$\|X\|_F = 1$$ by rescaling, if necessary. It behoves us then to consider \begin{equation*} \mathcal{S} = \left\lbrace \frac{1}{\sqrt{\ell}} P_{\ell} V^{\ast} \textrm{diag}(\vec{X}^T) : X \in \mathbb{R}^{n_1 \times n_2},\, \|X\|_F = 1,\, \textrm{rank}(X) \le k \right\rbrace. \end{equation*} Let $$V_{i,\cdot}$$ denote the ith row of V. 
By direct calculation, we see that \begin{equation*} d_F^2(\mathcal{S}) = \sup_{X} \frac{1}{\ell} \|P_{\ell} V^{\ast} \textrm{diag}(\vec{X}^T) \|_F^2 = \sup_{X} \frac{1}{\ell} \|X\|_F^2 \sum_{i=1}^{m} \sum_{j=1}^{\ell} |V_{i,j}|^2 \le 1. \end{equation*} Likewise, by direct calculation, \begin{align*} d_{2\to 2}^2(\mathcal{S}) &= \sup_{X} \frac{1}{\ell} \left\|P_{\ell} V^{\ast} \textrm{diag}(\vec{X}^T) \right\|_{2\to 2}^2 = \sup_{X} \sup_{\|w\|_2 = 1} \frac{1}{\ell} \left\|P_{\ell}V^{\ast} \textrm{diag}(\vec{X}^T)w \right\|_2^2 \\ &\le \frac{1}{\ell} \sup_{X} \|X\|_F^2 = \frac{1}{\ell}. \end{align*} Above, the last inequality follows from the Cauchy–Schwarz inequality, or alternatively from the fact that the largest singular value of $$ I \otimes \vec{X}^T $$ is just the singular value of the vector $$\vec{X}^T$$, namely its Frobenius norm. Lemma 3.1 in [7] tells us that the covering number satisfies $$ N(\mathcal{S}, \|\cdot\|_{F}; \epsilon) \le \big(\frac{9}{\epsilon}\big)^{(n_{1}+n_{2}+1)k} $$. Invoking Dudley's inequality and Hölder's inequality, we get \begin{equation*} \gamma_2(\mathcal{S}, \| \cdot \|_{2 \to 2}) \le c_2 \sqrt{\frac{k(n_1 + n_2 + 1)}{\ell}}. \end{equation*} Putting it all together, we have \begin{align*} \mu &\leq c_2^2 \frac{k(n_1 + n_2 + 1)}{\ell} + c_2 \sqrt{\frac{k(n_1 + n_2 + 1)}{\ell}} \le c_3\sqrt{\frac{k(n_1 + n_2 + 1)}{\ell}} \\ \nu_1 &\leq \frac{1}{\sqrt{\ell}} + c_2\frac{\sqrt{k(n_1+n_2+1)}}{\ell} \le c_4 \frac{1}{\sqrt{\ell}} \\ \nu_2 &= \frac{1}{\ell}. \end{align*} So invoking Lemma 10 yields, for all t > 0, \begin{equation} \mathbb{P}\left[\sup_{X} \Big| \frac{1}{\ell} \|P_{\ell}V^{\ast} \mathcal{M}(X)\|_F^2 - \mathbb{E}\frac{1}{\ell} \|P_{\ell}V^{\ast} \mathcal{M}(X)\|_F^2 \Big| \ge c_5 \mu + t\right] \le 2\exp\left(-c_6 \min\left\lbrace \frac{t^2}{\nu_1^2}, \frac{t}{\nu_2} \right\rbrace \right), \end{equation} (16) where the supremum is taken over all $$X \in \mathbb{R}^{n_1 \times n_2}$$ with $$\|X\|_F = 1$$ and $$\textrm{rank}(X) \le k$$. 
Note that by independence of the $$A_j$$, we have \begin{align*} \mathbb{E}\frac{1}{\ell} \left\|P_{\ell}V^{\ast} \mathcal{M}(X)\right\|_F^2 &= \frac{1}{\ell} \mathbb{E}\sum_{i=1}^{\ell} \left( \sum_{j=1}^{m} V_{i,j} \langle A_j, X \rangle \right)^2 \\ &= \frac{1}{\ell} \sum_{i=1}^{\ell} \sum_{j=1}^{m} V_{i,j}^2 \mathbb{E}\langle A_j, X \rangle^2 \\ & = \frac{1}{\ell} \|X\|_F^2 \sum_{i=1}^{\ell} \sum_{j=1}^{m} V_{i,j}^2 = \|X\|_F^2. \end{align*} Equation (16) now becomes \begin{equation} \mathbb{P}\left[\sup_{X} \Big| \frac{1}{\ell} \left\|P_{\ell}V^{\ast} \mathcal{M}(X)\right\|_F^2 - \|X\|_F^2 \Big| \ge c_5 \sqrt{\frac{k(n_1 + n_2 + 1)}{\ell}} + t\right] \le 2\exp\left(-c_6 \min\left\lbrace c_4^{-2} t^2 \ell, t \ell \right\rbrace \right). \end{equation} (17) Choosing $$t = \delta_k/2$$ and recalling that $$\ell \geq C_1 \frac{k(n_1 + n_2 + 1)}{\delta_{k}^{2}}$$ with $$C_1:= 4c_5^2$$, equation (17) reduces to \begin{equation*} \mathbb{P}\left[\sup_{X} \Big| \frac{1}{\ell} \left\|P_{\ell}V^{\ast} \mathcal{M}(X)\right\|_F^2 - \|X\|_F^2 \Big| \ge \delta_k\right] \le 2\exp\left(-C_2 \ell \right)\!, \end{equation*} where $$C_2:= c_6 \min \big\{ c_4^{-2}\delta _k^2/4,\delta _k/2\big\}$$. Therefore, with probability $$1 - 2\exp(-C_2\ell)$$, we have that \begin{equation*} \left| \frac{1}{\ell} \left\|P_{\ell}V^{\ast} \mathcal{M}(X)\right\|_F^2 - \|X\|_F^2 \right| \le \delta_k = \delta_k \|X\|_F^2 \end{equation*} for all $$X \in \mathbb{R}^{n_1 \times n_2}$$ with $$\|X\|_F = 1$$ and $$\textrm{rank}(X) \le k$$. In other words, $$ \frac{1}{\sqrt{\ell}} P_{\ell}V^{\ast}\mathcal{M} $$ has the matrix RIP with constant $$\delta_k$$ with high probability.

3. Recovery error guarantees

Herein, we present our main result on the recovery error guarantees for $$\varSigma \varDelta $$-quantized sub-Gaussian measurements of approximately low-rank matrices. 
Specifically, our results pertain to reconstruction via the constrained nuclear norm minimization \begin{align} (\hat{X},\hat{\nu}) := \arg\min\limits_{(Z,\nu)}\|Z\|_* \ \textrm{subject to}\ & \left\|D^{-r}\left(\mathcal{M}(Z) +\nu-q \right)\right\|_2 \leq \gamma(r)\sqrt{m} \nonumber \\ \textrm{and}\ \ & \|\nu\|_2\leq \epsilon \sqrt m, \end{align} (18)where $$\gamma (r)$$ is the stability constant associated with the quantizer. As such, Theorem 12 is a generalization of Theorem 4 to the low-rank matrix case. Theorem 12 (Error guarantees for stable $$\varSigma \varDelta $$ quantizers) Let k, ℓ and r be integers and let $$\mathcal{M}:\mathbb{R}^{n_1\times n_2} \to \mathbb{R}^m$$ be a mean zero, unit variance sub-Gaussian linear operator with parameter K. Suppose that $$m\geq \ell \geq c_1 k \max \{n_1,n_2\}$$. Then, with probability exceeding $$1-c_2\,\textrm{e}^{-c_3 \ell }$$ on the draw of $$\mathcal{M}$$, the following holds for a stable $$\varSigma \varDelta $$ quantizer with stability constant $$\gamma (r)$$: for all $$X\in \mathbb{R}^{n_1\times n_2}$$, the solution $${X^\sharp }$$ of (18), where q is the $$\varSigma \varDelta $$ quantization of $$\mathcal{M}(X) + e$$ with $$\|e\|_\infty \leq \epsilon $$, satisfies \begin{equation} \|{X^\sharp}-X\|_F \leq C(r)\left(\left(\frac{m}{\ell}\right)^{-r+1/2}\beta+\frac{\sigma_k(X)_*}{\sqrt k}+\sqrt{\frac{m}{\ell}}\epsilon \right)\!. \end{equation} (19)The constants $$c_1, c_2, c_3, C$$ do not depend on the dimensions, but may depend on K and r. Proof. Recall that by the $$\varSigma \varDelta $$ state equations, we have \begin{equation*} \|u\|_\infty = \left\| D^{-r} \left(\mathcal{M}(X) + e -q\right)\right \|_\infty \leq \gamma(r) \beta. \end{equation*}Consequently, by feasibility and optimality of $$(X^\sharp , \nu ^\sharp )$$, we have $$ \| D^{-r} (\mathcal{M}(X^\sharp )+\nu^\sharp -q) \|_2 \leq \gamma (r) \beta \sqrt{m}$$ and $$\|X^\sharp \|_* \leq \|X\|_*$$, respectively.
Define $$W:=X^\sharp -X$$ and let $$U_W \varSigma _W V^{\ast }_W$$ be the singular value decomposition of W. Then, denoting by $$U_X \varSigma _X V_X^{\ast }$$ the singular value decomposition of X, we have by Lemma 7, with $$X_1 = -U_W \varSigma _X V_W^{\ast }$$, that \begin{equation*} \|X_1 + W\|_* \leq \|X_1\|_*. \end{equation*}Moreover, defining \begin{equation*} y_1 := D^{-r}\left( \mathcal{M}(X_1) + e\right) + u, \end{equation*}we have by the linearity of $$\mathcal{M}$$ \begin{align} \left\| D^{-r} \left(\mathcal{M} (X_1 +W ) +\nu^\sharp\right) - y_1\right\|_2 &= \left\| D^{-r} \left(\mathcal{M} (X_1 +W )+\nu^\sharp\right) - \left(D^{-r}\left( \mathcal{M}(X_1) + e\right) + u\right) \right\|_2 \nonumber \\ &= \left\| D^{-r} \left(\mathcal{M} (W ) + \nu^\sharp-e\right) - u \right\|_2 \nonumber \\ &= \left\| D^{-r} \left(\mathcal{M} (X +W ) +\nu^\sharp\right)- \left(D^{-r} \left(\mathcal{M}(X) + e\right) + u\right) \right\|_2 \nonumber\\ &= \left\| D^{-r} \left(\mathcal{M} (X^\sharp ) + \nu^\sharp - q\right) \right\|_2 \nonumber \\ &\leq \gamma(r)\beta \sqrt{m}. \end{align} (20)Now, note that with $$x_1$$ denoting the vector composed of the diagonal entries of $$-\varSigma _X$$, we have \begin{equation} y_1 =D^{-r}\left(\mathcal{M} \left(U_W \,\textrm{diag}(x_1) V_W^{\ast}\right) + e\right) + u \end{equation} (21) \begin{equation} =D^{-r}\left(\mathcal{M}_{U_W,V_W} x_1 + e\right) + u \end{equation} (22) \begin{equation} =(D^{-r}\mathcal{M})_{U_W,V_W} x_1 + D^{-r}e + u. \end{equation} (23)Above, we defined $$(D^{-r}\mathcal{M})(X):= \sum\nolimits _{i=1}^m \langle X, A_i \rangle D^{-r}e_i$$.
Denoting by w the vector composed of the diagonal entries of $$\varSigma _W$$, (23) and (20), respectively, yield the inequalities \begin{equation} \left\|(D^{-r}\mathcal{M})_{U_W,V_W} x_1 +D^{-r}e- y_1 \right\|_2 \leq \gamma(r)\beta \sqrt{m} \end{equation} (24)and \begin{equation} \left\|(D^{-r}\mathcal{M})_{U_W,V_W} (x_1+w) +D^{-r}\nu^\sharp - y_1 \right\|_2 \leq \gamma(r)\beta \sqrt{m}. \end{equation} (25)Additionally, we have that \begin{equation*} \|x_1+w\|_1= \|X_1+W\|_* \le \|X^\sharp\|_* \leq \|X\|_* = \|x_1\|_1. \end{equation*} Thus, we have shown that the vector $$x^\sharp :=x_1+w$$ has an $$\ell _1$$ norm no larger than that of $$x_1$$, and that it is feasible for (13) with $$\mathcal{M}_{U_W,V_W}$$ in place of $$\varPhi $$ and $$y_1$$ in place of $$D^{-r}q$$. So, we are almost ready to apply Theorem 5 to $$\mathcal{M}_{U_W,V_W}$$ and conclude that \begin{equation} \|X^{\sharp}-X\|_F= \|W\|_F = \|w\|_2 \le C(r) \left( \left(\frac{m}{\ell}\right)^{-r+1/2} \beta + \frac{\sigma_k(X)_*}{\sqrt k} +\sqrt{\frac{m}{\ell}}\epsilon \right). \end{equation} (26)However, to do that, we must first show that $$\frac{1}{\sqrt{\ell }}(P_\ell V^{\ast } \mathcal{M})_{(U_W,V_W)}$$ has the vector RIP of order k and constant $$\beta < 1/9$$. This, however, follows from Lemma 11, where it is established that $$ \big(\frac{1}{\sqrt{\ell }}P_\ell V^{\ast } \mathcal{M} \big)$$ has the required matrix RIP with high probability. By Lemma 6, this implies that $$ \big(\frac{1}{\sqrt{\ell }}P_\ell V^{\ast } \mathcal{M} \big)_{U_W,V_W} $$ has the vector RIP of order k for all unitary pairs $$(U_W,V_W)$$, so we may apply Theorem 5 to obtain (26) and conclude the proof. By finding the optimal quantization order r as a function of the oversampling factor, as is standard in the $$\varSigma \varDelta $$ literature (e.g. [19,25]), root-exponential error decay can be attained. Corollary 13 is a precise statement to that effect.
The proof of Corollary 13 follows the same argument as that of Corollary 11 in [34], with only the oversampling factor λ changed to reflect the fact that we are dealing with matrices instead of vectors. Next, we show that the component of the reconstruction error that is due to quantization can be made to decay root-exponentially as a function of the oversampling factor. Corollary 13 (Root-exponential quantization error decay) Let $$\mathcal{M}: \mathbb{R}^{n_1\times n_2} \to \mathbb{R}^m$$ be a mean zero, unit variance sub-Gaussian linear operator with parameter K and $$X \in \mathbb{R}^{n_1\times n_2}$$ a rank-k matrix with $$\Vert\mathcal{M}(X)\Vert_{F}\leq 1 $$. Denote by $$Q_{\varSigma\varDelta}^{r} $$ the rth-order $$\varSigma\varDelta$$ quantizer with alphabet $$\mathcal{A} $$ of step size β and stability constant $$\gamma(r) \le C^r r^r \beta$$. Then there exist constants c, c1, C1, C2 > 0 that may depend on K, so that when \begin{align*} \lambda &:= \frac{m}{\lceil ck \max(n_1, n_2) \rceil} \\ r &:= \left\lfloor\frac{\lambda}{2 C_1 e} \right\rfloor^{1/2} \\ q &:= Q_{\varSigma\varDelta}^{r}\left(\mathcal{M}(X)\right)\!, \end{align*}the solution $$\hat{X} $$ to (18) satisfies $$\|\hat{X} - X\|_F \leq C_{2}\beta \,\textrm{e}^{-c_{1}\sqrt{\lambda}} $$. Next, Corollary 14 shows that by projecting the quantized measurements onto a subspace of dimension $$L = C k\max (n_1,n_2) \leq C^{\prime }m$$, where C, C′ > 0 are absolute constants, we can obtain reconstruction error guarantees comparable to those of Theorem 12. In turn, this allows us to obtain a reconstruction with exponentially decaying quantization error, or distortion, as a function of the number of bits, or rate, used. We make this observation precise in Remark 1, thereby extending the analogous result for the vector case [35] to our matrix setting. We remark that, just as for sparse vectors, this exponentially decaying rate–distortion relationship is optimal for low-rank matrices over all possible encoding and decoding schemes.
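To illustrate the parameter choices in Corollary 13: the constants c, C1, c1 and C2 are unspecified in the statement, so the sketch below sets them all to 1 purely for illustration, and the problem sizes are arbitrary.

```python
import math

# Hypothetical constants: c, C1, c1, C2 are not specified in Corollary 13.
c, C1, c1, C2 = 1.0, 1.0, 1.0, 1.0
n1 = n2 = 20
k, beta = 5, 0.5
m = 40 * k * max(n1, n2)                 # number of measurements

lam = m / math.ceil(c * k * max(n1, n2))             # oversampling factor lambda
r = int(math.floor(lam / (2 * C1 * math.e)) ** 0.5)  # quantizer order, rounded to an integer
bound = C2 * beta * math.exp(-c1 * math.sqrt(lam))   # shape of the guarantee

print(lam, r, bound)
```

The point of the sketch is the scaling: doubling the oversampling factor λ increases the usable order r like √λ, which is what produces the e^(−c₁√λ) error decay.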
Corollary 14 (Error guarantees with encoding) Let $$\mathcal{M}: \mathbb{R}^{n_1\times n_2} \to \mathbb{R}^m $$ be a mean zero, unit variance sub-Gaussian linear operator with parameter K. Let $$B: \mathbb{R}^m \to \mathbb{R}^L$$ be a Bernoulli random matrix whose entries are ±1. Then there exist constants c1, c2, c3, C1, C2 > 0 that may depend on K and r, so that whenever $$m \geq c_1 L \geq c_2 k\max(n_1, n_2)$$, the following is true with probability greater than $$1- C_{1}\, \textrm{exp}(-c_{3}\sqrt{mL}) $$ on the draw of $$\mathcal{M} $$ and B: Suppose $$X \in \mathbb{R}^{n_1\times n_2}$$ has rank k, $$\Vert\mathcal{M}(X)\Vert_{\infty}\leq\mu< 1$$, and $$q=Q_{\varSigma\varDelta}^{r}(\mathcal{M}(X)+ e) $$ with $$\|e\|_\infty \leq \epsilon$$ for ϵ ∈ [0, 1 − μ). Then the solution of \begin{align} (\hat{X},\hat{\nu}) := \arg\min\limits_{(Z,\nu)}\|Z\|_* \ \textrm{subject to} & \left\|BD^{-r}\left(\mathcal{M}(Z) +\nu-q \right)\right\|_2 \leq 3 m \gamma(r) \nonumber \\ \textrm{and} &\ \|\nu\|_2\leq \epsilon \sqrt m \end{align} (27)satisfies \begin{equation*} \| \hat{X} - X \|_F \leq C_2 \left( \left( \frac{m}{L} \right)^{-r/2 + 3/4}\beta + \sqrt{\frac{m}{L}} \epsilon + \frac{\sigma_k(X)_*}{\sqrt{k}} \right). \end{equation*} Remark 1 Let $$\alpha :=\max _{a\in \mathcal{A}} \|a\|_\infty $$. A simple calculation shows that one needs a rate of at most $$\mathcal{R} = Lr \log_2(\alpha m)$$ bits to store the encoded measurements. This demonstrates that in the noise-free setting, and with rank-k matrices, the distortion $$\mathcal{D} := \|\hat{X}-X\|_F$$ satisfies \begin{equation} \mathcal{D} \leq \left(\frac{1}{\alpha L} 2^{\frac{\mathcal{R}}{Lr}}\right)^{-r/2 + 3/4}. \end{equation} (28)That is, the distortion decays exponentially with respect to the rate, provided r ≥ 2. The proof of the above corollary follows from a combination of Theorem 12 in [35], which we state below, and an argument similar to the proof of Theorem 12. Theorem 15 ([35]) Let Φ be an m × n sub-Gaussian matrix with mean zero and unit variance entries with parameter K, and let B be an L × m Bernoulli matrix with ±1 entries.
Moreover, let k ∈ {1, … , min{m, n}}. Denote by $$Q_{\varSigma\varDelta}^{r}$$ a stable rth-order scheme with r > 1, alphabet $$\mathcal{A}$$ and stability constant γ(r). There exist positive constants C1, C2, C3, C4 and c1 such that whenever $$\frac{m}{C_{2}} \geq L \geq C_1 k \log(n/k)$$, the following holds with probability greater than $$1 - C_{3}\,\textrm{e}^{-c_{1}\sqrt{mL}} $$ on the draws of Φ and B: Suppose that $$x \in \mathbb{R}^n$$ and $$e \in \mathbb{R}^m$$ with $$\|\varPhi x\|_\infty \leq \mu < 1$$, and that $$q:=Q_{\varSigma\varDelta}^{r}(\varPhi x+e)$$, where $$\|e\|_\infty \leq \epsilon$$ for some 0 ≤ ϵ < 1 − μ. Then the solution $$\hat{x} $$ to \begin{align} (\hat{x},\hat{\nu}) := \arg\min\limits_{(z,\nu)}\|z\|_1 \ \textrm{subject to} &\ \left\|BD^{-r}\left(\varPhi z +\nu-q \right)\right\|_2 \leq 3m\gamma(r) \nonumber \\ \textrm{and} &\ \|\nu\|_2\leq \epsilon \sqrt m \end{align} (29)satisfies \begin{equation*} \| \hat{x} - x \|_2 \leq C_4 \left( \left( \frac{m}{L} \right)^{-r/2 + 3/4}\beta + \sqrt{\frac{m}{L}} \epsilon + \frac{\sigma_k(x)}{\sqrt{k}} \right). \end{equation*} Remark 2 As before, the requirements on q can be relaxed so that it is any vector satisfying the relation $$\varPhi x + \omega + D^r u = q$$ with $$\|\varPhi x\|_\infty \leq \mu <1$$, $$\|\omega \|_\infty \leq \epsilon <1-\mu $$ and $$\|u\|_\infty \leq \gamma(r) < \infty$$. Remark 3 If one restricts the scope of Theorem 15 to strictly sparse vectors, the constraint in (29) can be relaxed to $$\|BD^{-r}(\varPhi z + \nu - q)\|_2 \leq 3\sqrt{mL}\,\gamma(r) $$ through a Johnson–Lindenstrauss embedding argument. See the proof of Theorem 16 in [35] for details. Proof of Corollary 14. We know (see, for example, [35]) that with probability exceeding $$1 - c_2 \exp(-c_1\sqrt{mL})$$ we have $$\|B\|_{2\to 2} \leq \sqrt{L}+2\sqrt{m}$$. For such B, since L ≤ m and $$\|u\|_2 \le \sqrt{m}\,\|u\|_\infty$$, we have \begin{equation*} \| Bu \|_2 \le \|B\|_{2 \to 2}\|u\|_{2} \le 3\sqrt{m}\cdot\sqrt{m}\,\|u\|_{\infty} \le 3m \gamma(r). \end{equation*}Define, as before, the following: \begin{align*} W &= X^{\sharp} - X = U_{W} \varSigma_{W} V_{W}^{\ast}\\ X &= U_{X} \varSigma_{X} V_{X}^{\ast}\\ X_1 &= -U_{W} \varSigma_{X} V_{W}^{\ast}.
\end{align*}By Lemma 7, $$\|X_1 + W\|_* \leq \|X_1\|_*$$. Now, define \begin{equation*} y_1 = B D^{-r}\Big( \mathcal{M}(X_1) + e \Big) + Bu. \end{equation*}By linearity of $$\mathcal{M} $$, \begin{align*} \left\| BD^{-r}\Big( \mathcal{M}(X_1 + W) + \nu^{\sharp} \Big) - y_1 \right\|_2 &= \left\|B \Big(D^{-r}\big(\mathcal{M}(X_1 + W) + \nu^{\sharp}\big) - D^{-r}\big(\mathcal{M}(X_1) + e \big) - u \Big)\right\| _2 \\ &= \left\| BD^{-r}\Big( \mathcal{M}(X^{\sharp}) + \nu^{\sharp} - q \Big) \right\|_2 \le 3m\gamma(r). \end{align*}Letting $$x_1$$ denote the vector of diagonal elements of $$-\varSigma_X$$ and w that of $$\varSigma_W$$, we have \begin{equation*} y_1 = (BD^{-r}\mathcal{M})_{U_W, V_W}(x_1) + BD^{-r}e + Bu. \end{equation*}Just as in the proof of the main theorem, we remark that \begin{equation*} \left \| BD^{-r}\Big( \mathcal{M}_{U_W, V_W}(x_1) + e \Big) - y_1 \right\|_2 = \left\| Bu \right\|_2 \le 3 m \gamma(r). \end{equation*}In other words, both $$x^\sharp := x_1 + w$$ and $$x_1$$ are feasible for (29) with Φ set to $$\mathcal{M}_{U_W, V_W}$$ and q set to $$y_1$$. Moreover, we also have $$\|x_1 + w\|_1 = \|X_1 + W\|_* \leq \|X_1\|_* = \|x_1\|_1$$. The result now follows by Theorem 15. 4. Numerical experiments Herein, we present the results of a series of numerical experiments. The goal is to illustrate the performance of the algorithms studied in this paper and to compare their empirical performance to the error bounds (up to constants) predicted by the theory. All tests were performed in MATLAB using the CVX package. One thing worth noting is that, in the interest of numerical stability and computational efficiency, we modified the constraint in (18) to be \begin{equation*} \sigma_{\ell} \left\|P_{\ell} V^{\ast}\left(\mathcal{M}(Z) + \nu - q\right)\right\|_2 \le \gamma(r) \sqrt{m}, \end{equation*}where σℓ is the ℓth singular value of $$D^{-r}$$. The motivation for this is that, as r increases, $$D^{-r}$$ quickly becomes ill-conditioned. The analysis and conclusions of Theorem 12 remain unchanged with the above modification.
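The ill-conditioning motivating this modification is easy to observe numerically. In the sketch below, D is the first-order difference matrix, and the condition numbers and singular values are computed with NumPy; the dimension m = 100 is an arbitrary choice.

```python
import numpy as np

m = 100
# D is the first-order difference matrix: 1 on the diagonal, -1 on the subdiagonal
D = np.eye(m) - np.eye(m, k=-1)

# Condition number of D^{-r} for r = 1, 2, 3; it grows rapidly with r
conds = [np.linalg.cond(np.linalg.matrix_power(D, -r)) for r in (1, 2, 3)]
print([f"{cond:.2e}" for cond in conds])

# The modified constraint rescales by a singular value of D^{-r};
# the full spectrum is available from one SVD
sigmas = np.linalg.svd(np.linalg.matrix_power(D, -2), compute_uv=False)
print(sigmas[0] / sigmas[-1])  # ratio of extreme singular values, i.e. the condition number
```

Note that cond(D^{-r}) = cond(D^r), so the same growth affects the unmodified constraint.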
The only additional cost is computing the singular value decomposition of $$D^{-r}$$ before beginning the optimization. For a fixed value of m, this needs to be done only once, as the result can be stored and reused. To construct rank-k matrices, we sampled $$\alpha_1, \ldots, \alpha_k \sim \mathcal{N}(0, 1)$$, $$u_1, \ldots, u_k \sim \mathcal{N}(0, I_{n_1\times n_1})$$, $$v_1, \ldots, v_k \sim \mathcal{N}(0, I_{n_2\times n_2})$$, and set $$X:=\sum_{i=1}^{k}\alpha_{i}u_{i}v_{i}^{\ast} $$. We note that under these conditions $$\mathbb{E}\|X\|_{F}^{2} = k \cdot n_1 \cdot n_2$$. The measurements we collect are via a Gaussian linear operator $$\mathcal{M}$$ whose matrix representation consists of i.i.d. standard normal entries. For each experiment, we use a fixed draw of $$\mathcal{M}$$. First, we illustrate the decay of the reconstruction error, measured in the Frobenius norm, as a function of the order of the $$\varSigma\varDelta$$ quantization scheme for r = 1, 2 and 3 in the noiseless setting. Experiments were run with the following parameters: n1 = n2 = 20, ϵ = 0, alphabet step size β = 1/2, rank k = 5 and ℓ = 4 ⋅ k ⋅ n1. We let the oversampling factor $$\frac{m}{\ell} $$ range from 5 to 60 in steps of 5. The reconstruction error for a fixed oversampling factor was averaged over 20 draws of X. The results are reported in Fig. 1 for the three choices of r. As Theorem 12 predicts, the reconstruction error decays polynomially in the oversampling factor, with the polynomial degree increasing with r. To test the dependence on measurement noise, we considered reconstructing 20 × 20 matrices from measurements generated by a fixed draw of $$\mathcal{M}$$. For ϵ ∈ {0, 1/10, … , 2}, we averaged our reconstruction error over 20 trials with noise vectors ν drawn from the uniform distribution on $$(0, 1)^m$$ and normalized to have ∥ν∥∞ = ϵ. The remaining parameters were set to the following values: r = 1, alphabet step size β = 1/2, rank k = 2, ℓ = 4 ⋅ k ⋅ n1 and m = 2ℓ. Figure 2 illustrates the outcome of this experiment, which agrees with the bound in Theorem 12.
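As a sanity check, the test-matrix construction described above and the identity $$\mathbb{E}\|X\|_F^2 = k \cdot n_1 \cdot n_2$$ can be verified with a short sketch; the random seed and the number of trials are arbitrary choices made only to stabilize the average.

```python
import numpy as np

rng = np.random.default_rng(1)
n1 = n2 = 20
k, trials = 5, 400   # trial count chosen only to make the average stable

norms = []
for _ in range(trials):
    alpha = rng.standard_normal(k)
    U = rng.standard_normal((n1, k))   # columns u_1, ..., u_k ~ N(0, I)
    V = rng.standard_normal((n2, k))   # columns v_1, ..., v_k ~ N(0, I)
    X = (U * alpha) @ V.T              # X = sum_i alpha_i u_i v_i^T, rank k
    norms.append(np.linalg.norm(X, "fro") ** 2)

mean = float(np.mean(norms))
print(mean)                      # close to k * n1 * n2 = 2000
print(np.linalg.matrix_rank(X))  # 5
```

The expected value follows because the α_i, u_i and v_i are independent, so the cross terms vanish and each summand contributes E[α_i²]·E∥u_i∥²·E∥v_i∥² = n1·n2.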
The goal of the next experiment is to illustrate, in the context of encoding (Corollary 14), the exponential decay of distortion as a function of the rate, or equivalently of the reconstruction error as a function of the number of bits used. We performed numerical simulations for $$\varSigma\varDelta$$ schemes of order 2 and 3. As before, our parameters were set as follows: n1 = n2 = 20, β = 1/2, rank of the true matrix k = 5, L = 4 ⋅ k ⋅ n1, ϵ = 0, and we let $$ \frac{m}{L}$$ range from 5 to 60 in steps of 5. The rate is calculated as $$\mathcal{R}$$ = L ⋅ r ⋅ log(m). Again, the reconstruction error for a fixed oversampling factor was averaged over 20 draws of X. The results are shown in Figs 3 and 4, respectively. The slopes of the lines (corresponding to the constant in the exponent of the rate–distortion relationship) passing through the first and last points of each plot are −1.8 × 10−3 and −2.0 × 10−3 for r = 2 and 3, respectively. It should further be noted that the numerical distortions decay much faster than the upper bound of (28). We suspect this is due to the suboptimal r-dependent constants in the exponent of (28), which are likely an artefact of the proof technique in [35]. Indeed, there is evidence in [35] that the correct exponent is −r/2 + 1/4 rather than −r/2 + 3/4. This is more in line with our numerical experiments, but a proof of such a bound is beyond the scope of this paper. Fig. 1. Log–log plot of the reconstruction error as a function of the oversampling factor $$\frac{m}{\ell} $$ for r = 1, 2, 3. Polynomial relationships appear as linear relationships. Fig. 2.
Reconstruction error as a function of the input noise. The rank of the true matrix is 2. Fig. 3. Log plot of the reconstruction error as a function of the bit rate R = Lr log(m) for r = 2. Exponential relationships appear as linear relationships. Fig. 4. Log plot of the reconstruction error as a function of the bit rate R = Lr log(m) for r = 3. Exponential relationships appear as linear relationships.
Funding National Science Foundation (DMS-1517204 to R.S.); University of California San Diego senate research grant award.
Footnotes 1 Here, given a vector $$x\in \mathbb{R}^n$$, diag(x) = X is a diagonal matrix in $$ \mathbb{R}^{n\times n}$$ with $$X_{ii} = x_i$$ for i ∈ {1, … , n}. 2 Here, ⊗ refers to the Kronecker product of matrices.
References
Benedetto, J. J., Yilmaz, O. & Powell, A. M. (2004) Sigma–delta quantization and finite frames. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), vol. 3. Piscataway, NJ: IEEE, pp. iii-937.
Bhaskar, S. A. & Javanmard, A. (2015) 1-bit matrix completion under exact low-rank constraint. 49th Annual Conference on Information Sciences and Systems (CISS), 2015. Piscataway, NJ: IEEE, pp. 1–6.
Boufounos, P. T., Jacques, L., Krahmer, F. & Saab, R. (2015) Quantization and compressive sensing. Compressed Sensing and its Applications. Basel, Switzerland: Springer, pp. 193–237.
Cai, T. & Zhou, W.-X. (2013) A max-norm constrained minimization approach to 1-bit matrix completion. J. Mach. Learn. Res., 14, 3619–3647.
Candès, E. J., Li, X., Ma, Y. & Wright, J. (2011) Robust principal component analysis? J. ACM, 58, 11.
Candes, E. J. & Plan, Y. (2011) Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Trans. Inf. Theory, 57, 2342–2359.
Candès, E. J. & Recht, B. (2009) Exact matrix completion via convex optimization. Found. Comut. Math., 9, 717.
Candès, E. J., Romberg, J. & Tao, T. (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math., 59, 1207–1223.
Candès, E. J. & Tao, T. (2010) The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory, 56, 2053–2080.
Daubechies, I. & DeVore, R. (2003) Approximating a bandlimited function using very coarsely quantized data: a family of stable sigma–delta modulators of arbitrary order. Ann. Math., 158, 679–710.
Davenport, M. A., Plan, Y., Van Den Berg, E. & Wootters, M. (2014) 1-bit matrix completion. Inf. Inference, 3, 189–223.
Deift, P., Güntürk, C. S. & Krahmer, F. (2011) An optimal family of exponentially accurate one-bit sigma–delta quantization schemes. Commun. Pure Appl. Math., 64, 883–919.
Dudley, R. M. (1967) The sizes of compact subsets of Hilbert space and continuity of Gaussian processes. J. Funct. Anal., 1, 290–330.
Fazel, M. (2002) Matrix rank minimization with applications. Ph.D. Thesis, Stanford University, USA.
Feng, J.-M., Krahmer, F. & Saab, R. (2017) Quantized compressed sensing for partial random circulant matrices. Preprint, arXiv:1702.04711.
Foucart, S. & Rauhut, H. (2013) A Mathematical Introduction to Compressive Sensing.
Basel, Switzerland: Birkhäuser Basel.
Gross, D., Liu, Y.-K., Flammia, S. T., Becker, S. & Eisert, J. (2010) Quantum state tomography via compressed sensing. Phys. Rev. Lett., 105, 150401.
Güntürk, C. S. (2003) One-bit sigma–delta quantization with exponential accuracy. Commun. Pure Appl. Math., 56, 1608–1630.
Güntürk, C. S., Lammers, M., Powell, A., Saab, R. & Yilmaz, Ö. (2010) Sigma delta quantization for compressed sensing. 44th Annual Conference on Information Sciences and Systems (CISS), 2010. IEEE, pp. 1–6.
Inose, H. & Yasuda, Y. (1963) A unity bit coding method by negative feedback. Proceedings of the IEEE, 51, 1524–1535.
Iwen, M. & Saab, R. (2013) Near-optimal encoding for sigma–delta quantization of finite frame expansions. J. Fourier Anal. Appl., 19, 1255–1273.
Johnson, W. B. & Lindenstrauss, J. (1984) Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26, 1.
Keshavan, R. H., Montanari, A. & Oh, S. (2010) Matrix completion from noisy entries. J. Mach. Learn. Res., 11, 2057–2078.
Krahmer, F., Mendelson, S. & Rauhut, H. (2014) Suprema of chaos processes and the restricted isometry property. Commun. Pure Appl. Math., 67, 1877–1904.
Krahmer, F., Saab, R. & Ward, R. (2012) Root-exponential accuracy for coarse quantization of finite frame expansions. IEEE Trans. Inf. Theory, 58, 1069–1079.
Krahmer, F., Saab, R. & Yilmaz, Ö. (2014) Sigma–delta quantization of sub-Gaussian frame expansions and its application to compressed sensing. Inf. Inference, 3, 40–58.
Lafond, J., Klopp, O., Moulines, E. & Salmon, J. (2014) Probabilistic low-rank matrix completion on finite alphabets.
Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, pp. 1727–1735.
Lan, A. S., Studer, C. & Baraniuk, R. G. (2014) Matrix recovery from quantized and corrupted measurements. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014. IEEE, pp. 4973–4977.
Lin, Z., Chen, M. & Ma, Y. (2010) The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Preprint, arXiv:1009.5055.
Oymak, S., Mohan, K., Fazel, M. & Hassibi, B. (2011) A simplified approach to recovery conditions for low rank matrices. Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2011. IEEE, pp. 2318–2322.
Pan, R., Zhou, Y., Cao, B., Liu, N. N., Lukose, R., Scholz, M. & Yang, Q. (2008) One-class collaborative filtering. 8th IEEE International Conference on Data Mining, 2008 (ICDM '08). IEEE, pp. 502–511.
Rallapalli, S., Qiu, L., Zhang, Y. & Chen, Y.-C. (2010) Exploiting temporal stability and low-rank structure for localization in mobile networks. Proceedings of the 16th Annual International Conference on Mobile Computing and Networking. New York, NY: Association for Computing Machinery, pp. 161–172.
Recht, B., Fazel, M. & Parrilo, P. A. (2010) Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52, 471–501.
Saab, R., Wang, R. & Yılmaz, Ö. (2016) Quantization of compressive samples with stable and robust recovery. Appl. Comput. Harmon. Anal., 44, 123–143.
Saab, R., Wang, R. & Yılmaz, Ö. (2017) From compressed sensing to compressed bit-streams: practical encoders, tractable decoders. IEEE Trans. Inf. Theory.
Talagrand, M. (2005) The Generic Chaining. Berlin, Heidelberg: Springer.
Vershynin, R. (2012) Introduction to the non-asymptotic analysis of random matrices. Cambridge: Cambridge University Press, pp. 210–268.

Information and Inference: A Journal of the IMA – Oxford University Press

**Published:** May 25, 2018
