
New Foundations for Information Theory: Quantum Logical Information Theory



Publisher: Springer International Publishing
Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
ISBN: 978-3-030-86551-1
Pages: 73–93
DOI: 10.1007/978-3-030-86552-8_5

Abstract

The transition to density matrices in QM is facilitated by reformulating the ‘classical’ (i.e., non-quantum) results about logical entropy using density matrices. The transition to the quantum version of logical entropy is then made using the semi-algorithmic procedure of “linearization”: given a concept applied to sets, apply that concept to the basis set of a vector space, and whatever it linearly generates gives the corresponding vector space concept. Classically, the logical entropy $h\left( f^{-1}\right)$ of the inverse-image partition of $f:U\rightarrow \mathbb{R}$ with point probabilities on $U$ is the product probability measure $p\times p$ on $U\times U$ applied to the ditset $\operatorname{dit}\left( f^{-1}\right)$. That is linearized in quantum logical information theory, with qudit spaces replacing the ditsets. Another set of classical definitions starts with two probability distributions $p=\left( p_{1},\ldots ,p_{n}\right)$ and $q=\left( q_{1},\ldots ,q_{n}\right)$ over the same index set. Since probability distributions are supplied by density matrices, the corresponding quantum logical notions are developed starting from two density matrices ρ and τ. Finally, we show how to generalize the Hamming distance, and that the result is in fact the existing quantum notion of the Hilbert-Schmidt distance between two density matrices, which is the quantum version of the squared Euclidean distance $d(p\Vert q)$ between two probability distributions in ‘classical’ logical information theory.
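For orientation, here is a brief sketch of the quantities the abstract refers to, written in the conventions common in the logical-entropy literature; the chapter’s own notation and normalizations may differ. Classically, with $p_{B}$ the probability of a block $B$ of the inverse-image partition, and quantum mechanically for a density matrix ρ:

$$h\left( f^{-1}\right) =\left( p\times p\right) \left( \operatorname{dit}\left( f^{-1}\right) \right) =1-\sum_{B}p_{B}^{2},\qquad h\left( \rho \right) =1-\operatorname{tr}\left[ \rho ^{2}\right].$$

For two distributions $p,q$ and two density matrices ρ and τ, the distances mentioned at the end of the abstract are

$$d\left( p\Vert q\right) =\sum_{i=1}^{n}\left( p_{i}-q_{i}\right) ^{2},\qquad d\left( \rho \Vert \tau \right) =\operatorname{tr}\left[ \left( \rho -\tau \right) ^{2}\right].$$

Taking ρ and τ to be diagonal matrices with $p$ and $q$ on the diagonal reduces the quantum expressions to the classical ones, which is the sense in which the Hilbert-Schmidt distance is the quantum version of the squared Euclidean distance.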

Published: Sep 2, 2021

Keywords: Density matrices; Linearization; Quantum logical entropy; Projective measurement; Quantum Hamming distance; Hilbert-Schmidt norm
