Mission-Critical Systems, Paradox of Hamming Code, Row Hammer Effect, ‘Trojan Horse’ of the Binary System and Numeral Systems with Irrational Bases

Abstract

This article deals with a wide range of issues related to the design of specialized computing and measuring systems for mission-critical applications, in which the requirements of reliability and noise immunity come to the fore. Among these issues we consider the paradox of the Hamming code, the ‘row hammer’ effect and the ‘Trojan horse’ of the binary system. We also discuss the use of numeral systems with irrational bases (Bergman’s system, ternary mirror-symmetrical arithmetic, Fibonacci p-codes and codes of the golden p-proportions) for the design of specialized computing and measuring systems for mission-critical applications.

1. INTRODUCTION

As is known, digital computer technology owes much to the union of two outstanding inventions of the human intellect: Boolean (two-alternative) logic and the binary system. Boolean logic and the theory of digital automata study, first of all, perfect or deterministic operations realized by idealized logical circuits. The classical binary system, in turn, describes the processes in the idealized arithmetical devices of digital computers. Under real conditions, however, all digital structures are exposed to various internal and external influences, or ‘noises’, which lead to errors in digital structures and to distortion of the data at their outputs. ‘Fighting against noises’ [1] has become one of the most important problems of computer science. This problem first became especially acute in the design of serial data transmission systems, and it is within the framework of such systems that the well-known theory of error-correcting codes emerged [2].

At present, computer science is passing to a new stage of its development: the stage of designing computing and informational systems for mission-critical applications. In the Wikipedia article ‘Mission critical’ [3], we read: ‘Mission critical refers to any factor of a system (components, equipment, personnel, process, procedure, software, etc.) that is essential to business operation or to an organization. Failure or disruption of mission critical factors will result in serious impact on business operations or upon an organization, and even can cause social turmoil and catastrophes. Therefore, it is extremely critical to the organization’s ‘mission’ (to avoid Mission Critical Failures). Mission critical system is a system whose failure may result in the failure of some goal-directed activity. Mission essential equipment and mission critical application are also known as mission-critical system. Examples of mission critical systems are: an online banking system, railway/aircraft operating and control systems, electric power systems, and many other computer systems that will adversely affect business and society seriously if downed. A good example of a mission critical system is a navigational system for a spacecraft.’

The design of mission-critical systems puts forward new requirements for ensuring the noise immunity and informational reliability of such systems. The most important requirement is to prevent the occurrence of ‘false signals’ at the output of a mission-critical system, which can lead to technological disasters.
Modern methods of providing noise immunity and informational reliability of mission-critical systems (in particular, the use of error-correcting codes [1, 2]) do not always provide the required informational reliability. The main purpose of this article is to give a critical analysis of the present methods of ensuring the informational reliability of mission-critical systems, based on the use of error-correcting codes, and to set forth new challenges in this direction.

2. TYPICAL MODEL OF ERRORS AND PARADOX OF THE HAMMING AND HSIAO CODES

2.1. Two types of informational systems

There are two types of informational systems that require the use of redundant codes for error detection and correction, in particular the error-correcting codes (ECC):

(1) Serial informational systems. In these systems, data are represented in serial form (bit by bit). The traditional data transmission systems are the clearest example.

(2) Parallel informational systems. In these systems, data are represented in parallel form. The clearest examples are computers, registers, RAM, microprocessors, microcontrollers and so on.

It should be noted that the vast majority of ECC are intended for the detection and correction of errors in data represented in serial form. Only some of them, in particular the parity code, the Hamming and Hsiao codes [4–6] and their modifications, can be used for the detection and correction of errors in data represented in parallel form. The reason is the sharp complication of the encoders and decoders of most ECC in the case of parallel informational systems.

2.2. Typical model of errors for serial communication systems

The most characteristic feature of discrete data transmission systems is the serial transmission of the bits 0 and 1 through the communication channel. In such transmission, the action of the ‘noises’ on the n-bit serial code combination has a sequential character (the ‘noises’ act on the first bit, then with the same intensity on the next bit, etc.). The model of the so-called ‘symmetrical channel’ is widely used in the traditional theory of serial data transmission as the simplest model of errors. A detailed analysis of this model is given in the book ‘Fighting against noises’ [1], written by the outstanding Soviet scientist, academician Kharkevich, who for many years headed the Institute for Information Transmission Problems of the USSR Academy of Sciences.

Let us consider the following of Kharkevich’s arguments [1] related to this error model. Kharkevich writes: ‘The error-correcting codes were created originally to detect and correct the independent errors. The notion of minimum code distance, in this case plays an important role … In the case of independent errors the error probability decreases with increasing multiplicity.’ This quote contains three important ideas:

(1) ECC were originally created to detect and correct independent errors arising in serial data transmission systems.

(2) The idea of independent errors plays an important role for the notion of ‘minimum code distance’ (or Hamming distance), which is one of the most important notions of ECC theory.

(3) In the case of independent errors, the probability of an error decreases with increasing multiplicity (the sketch below illustrates this for the symmetric-channel model).
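To make point (3) concrete, here is a minimal Python sketch (an illustration added for this discussion, with an assumed word length and bit-error probability) that computes the probability of a k-fold error in an n-bit word under the symmetric-channel model, in which every bit is distorted independently with probability p:

```python
from math import comb

def multiplicity_probabilities(n: int, p: float):
    """P(exactly k of the n bits are distorted) for independent bit errors."""
    return [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]

# Illustrative (assumed) parameters: a 64-bit word, bit-error probability 1e-4.
probs = multiplicity_probabilities(64, 1e-4)
for k in range(5):
    print(f"P({k}-fold error) = {probs[k]:.3e}")
# The probabilities fall off rapidly with the multiplicity k, which is exactly why
# classical ECC theory concentrates on errors of low multiplicity.
```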
Continuing his arguments, Kharkevich comes to the following conclusion [1], which can be considered the main hypothesis and goal of the theory of ECC: ‘Thus, in the case of independent errors, we should first of all detect and correct errors of low multiplicity as the most probable.’ Thus, according to Kharkevich, the theory of ECC is mainly focused on the detection and correction of errors of low multiplicity as the most probable ones. Errors of high multiplicity are simply ignored by the theory of ECC because of their low probability; this follows from the model of the ‘symmetrical channel’. But the concept of ‘low-probability errors’ does not exclude the possibility of their unexpected appearance. For mission-critical systems, the appearance of errors of large multiplicity may be the cause of immense technological disasters. This is the main problem that arises in designing highly reliable computer systems for mission-critical applications.

The following important conclusion follows from the above arguments. The approach to detecting and correcting errors that follows from the model of the ‘symmetrical channel’, in which errors of large multiplicity are ignored as unlikely, is unsuitable for mission-critical systems, because it does not prevent erroneous output signals that can arise due to errors of large multiplicity. The damage caused by such an approach can be shown on the example of the Hamming and Hsiao codes [4–6], which are widely used for detecting and correcting errors in informational systems with parallel representation of data.

2.3. Paradox of the Hamming and Hsiao codes

As is well known, the Hamming code [4] and the Hsiao code [5, 6] are widely used to correct single-bit errors that occur in informational systems with parallel data representation (for example, in electronic memory). The so-called unmodified Hamming code [4] allows correcting a one-bit error in the code word. In the case of a 2-bit (double) error, the decoder cannot detect it; instead it ‘corrects’ the information word erroneously and reports the successful correction of a single error in the code word. This case is called ‘false correction’. To detect a double error, the modified Hamming code of the SEC-DED type (single-error-correcting, double-error-detecting) is used. It differs from the unmodified Hamming code by one additional verification bit, the common parity bit of the entire code word. However, in the case of errors of multiplicity greater than 2, there is again a probability of ‘false correction’. The Hsiao code [5, 6] is similar to the modified Hamming codes, but uses a slightly different mathematical basis.

The question arises: how do the Hamming and Hsiao codes operate when errors of large odd multiplicity 3, 5, 7, 9, … arise in the code word? Such many-bit errors of odd multiplicity are perceived by the Hamming and Hsiao codes as single-bit errors, and the decoders begin to ‘correct’ them by adding new errors to the already erroneous code word. That is, in this case the Hamming and Hsiao codes turn into anti-ECC, because they ruin the code words (the effect of ‘false correction’); a simple demonstration with the classical (7, 4) Hamming code is given below. This ‘paradoxical’ property of the Hamming and Hsiao codes is well known to experts in the field of ECC [5, 6], but consumers do not always know about it.
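The effect of ‘false correction’ can be reproduced with the textbook (7, 4) Hamming code; the sketch below (a simplified illustration, not the (72, 64) codes of Table 1) shows a 3-bit error being treated as a single-bit error and ‘corrected’ into yet another erroneous word:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7).
    Parity bits sit at positions 1, 2, 4; data bits at positions 3, 5, 6, 7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Return (corrected codeword, syndrome); a non-zero syndrome is interpreted
    as the position of a single-bit error and that bit is flipped."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    corrected = c[:]
    if syndrome:
        corrected[syndrome - 1] ^= 1   # "correct" the indicated position
    return corrected, syndrome

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = codeword[:]
for pos in (0, 1, 3):                  # a 3-bit error in positions 1, 2, 4
    corrupted[pos] ^= 1
decoded, syndrome = hamming74_decode(corrupted)
print("sent     :", codeword)
print("received :", corrupted)
print("decoded  :", decoded, "(syndrome =", syndrome, ")")
# The decoder reports a 'successful' single-bit correction, yet the decoded word
# differs from the transmitted one: a false correction caused by a 3-bit error.
```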
In such cases, the main argument offered to customers is that errors of large multiplicity are unlikely, but such arguments are unacceptable for mission-critical applications. The modified Hamming and Hsiao codes differ in their ability to detect 3-bit (triple) and 4-bit (quadruple) errors (as well as errors of larger multiplicity). A comparison of the codes by this parameter is presented in Table 1 (the table is taken from [6]).

Table 1. The probabilities of erroneous correction of a 3-bit error and of detection of a 4-bit error for an information word consisting of 64 bits (taken from [6]).

  Type of code                      Probability of erroneous correction of a 3-bit error, %    Probability of detection of a 4-bit error, %
  Modified Hamming code (72, 64)    75.9                                                        98.9
  Hsiao code (72, 64)               56.3                                                        99.2

This table confirms that the modified Hamming code and the Hsiao code have a very high percentage of ‘false corrections’ of 3-bit errors, which is NOT ADMISSIBLE for mission-critical applications. Unfortunately, Table 1 does not contain data on the probability of ‘false correction’ of odd errors of higher multiplicity (5, 7, 9, …); we cannot neglect these errors in mission-critical systems. This means that the modified Hamming code and the Hsiao code do not protect computer systems and their main structures (in particular, electronic memory) from the appearance of ‘false data’ at the output, which may lead to technological disasters in mission-critical applications. The high percentage of ‘false corrections’ of odd errors of large multiplicity (3, 5, 7, …) in the Hamming and Hsiao codes calls into question the usefulness of these codes for mission-critical applications.

2.4. ‘Row hammer’ effect

The ‘row hammer’ effect is a new phenomenon in the field of electronic memory. In the Wikipedia article [7], the essence of this effect is explained as follows: ‘Row hammer … is an unintended side effect in dynamic random-access memory (DRAM) that causes memory cells to leak their charges and interact electrically between themselves, possibly altering the contents of nearby memory rows that were not addressed in the original memory access. This circumvention of the isolation between DRAM memory cells results from the high cell density in modern DRAM …’

As follows from this quote, the main reason for the ‘row hammer’ effect is the microminiaturization of electronic memory, which leads to mutual electrical interaction between nearby memory rows. This interaction is ‘altering the contents of nearby memory rows that were not addressed in the original memory access’. No effective methods of fighting the ‘row hammer’ effect have been proposed until now. Possibly the only reasonable proposal is to introduce restrictions on the microminiaturization of electronic memory. But then the question arises: how are we to design nano-electronic memory?
3. COMPUTER REVOLUTION, BASED ON THE BINARY SYSTEM, AND THE ‘TROJAN HORSE’ OF THE BINARY SYSTEM

3.1. Leibniz’s binary arithmetic

The prominent German scientist Gottfried Wilhelm Leibniz (1646–1716) was the creator of binary arithmetic. From his student years until his death, Leibniz studied the properties of the binary system, which was to become the basis of modern computers. The binary system was fully described by Leibniz in the work ‘Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi’ (1703). Leibniz attributed a mystical meaning to the binary system and believed that by using it one could create a universal language for explaining all phenomena of the world. In 1697, Leibniz designed a medal demonstrating the relationship between binary and decimal numbers. As an admirer of Chinese culture, Leibniz was aware of the Chinese ‘Book of Changes’ and was one of the first to notice that its hexagrams correspond to the binary numbers from 0 to 111111. Leibniz believed that the ‘Book of Changes’ was evidence of a major Chinese contribution to the mathematical philosophy of that time. Leibniz did not recommend the binary system instead of the decimal one for practical calculations, but he stressed that ‘the calculation by using binary numerals 0 and 1, in spite of its length, is major in science, and even in the computing practice, especially in geometry: the reason consists of the fact that by reducing numbers to the simplest principles, that is, 0 and 1, we establish everywhere the wonderful order’ (the quote is taken from [8]). In this quote, Leibniz anticipated the modern ‘computer revolution’ based on the binary system!

3.2. John von Neumann’s principles and the computer revolution based on the binary system

A direct outcome of the first electronic computer ENIAC (University of Pennsylvania, 1946) was a confirmation in practice of the high efficiency of electronic technology in computers. The problem of realizing the huge advantages of electronic technology to the fullest stood before computer designers. It was necessary to analyze the strong and weak aspects of the ENIAC project and to give appropriate recommendations. A brilliant solution of this task was given in the famous report ‘Preliminary discussion of the logical design of an electronic computing instrument’ (1946) [9]. This report, written by the brilliant mathematician John von Neumann and his Princeton colleagues Goldstine and Burks, presented the project of a new electronic computer. The report [9] became the beginning of the computer revolution based on the binary system! The essence of the main recommendations of this report, known as John von Neumann’s principles, is the following:

(1) Machines on electronic elements should work not in the decimal system but in the binary system.

(2) The program should be placed in a machine block called the storage device, which should have sufficient capacity and appropriate speeds for access and entry of program commands.

(3) Programs, as well as the numbers with which the machine operates, should be represented in binary code. Thus, the commands and the numbers have one and the same form of representation. This means that the programs and all intermediate outcomes of calculations, constants and other numbers are placed in the same storage device.
(4) The difficulties of physical realization of a storage device whose speed corresponds to the speed of the logical elements demand a hierarchical organization of memory.

(5) The arithmetical device of the machine should be constructed on the basis of the logical summation element; it is inadvisable to create special devices for the fulfillment of other arithmetical operations.

(6) The machine should use the parallel principle of organization of computing processes, that is, operations over binary words should be fulfilled over all digits simultaneously.

Thus, the historical significance of John von Neumann’s principles consists in the fact that they are a brilliant confirmation of Leibniz’s predictions about the role of the binary system in the future development of computer science and technology. The prominent American scientist, physicist and mathematician John von Neumann (1903–1957), together with his Princeton colleagues Goldstine and Burks, after careful analysis of the strengths and weaknesses of the first electronic computer ENIAC, gave strong preference to the binary system as a universal way of coding data in electronic computers.

3.3. ‘Trojan horse’ of the binary system

The famous Russian expert in computer science, academician Jaroslav Khetagurov, discusses in one of his articles [10] the problem of the use of modern microprocessors, based on the binary system, from the point of view of national security: ‘The use of microprocessors, controllers, and software computing resources of foreign origin to solve problems in real-time systems of military, administrative and financial destination is fraught with big problems. This is a sort of “Trojan horse”, whose role is only now beginning to manifest itself. Losses and damage from their use can significantly affect the national security of Russia…’

Academician Khetagurov does not use the concept of ‘mission-critical applications’ in this quote, but he clearly implies them (real-time systems of military, administrative and financial destination). Thus, academician Khetagurov raises the challenge of designing modern computational tools with a built-in system of error detection, ensuring high informational reliability and noise immunity of mission-critical systems. This problem is not new, but its solution is far from completion because of the lack of sufficiently effective scientific solutions in this area. All the main devices of computers and microprocessors (registers, counters, summators and so on) can be classified as PARALLEL SYSTEMS, for which the number of usable redundant codes is very limited (the Hamming and Hsiao codes [4–6]). It has been shown above that these codes have a significant drawback (the effect of ‘false correction’), which is unacceptable for mission-critical applications.

3.4. The opinion of the US engineer and expert in coding theory W. Kauth

Already in the middle of the sixth decade of the 20th century, the US engineer and expert in coding theory W. Kauth drew attention to the fact that attempts to use the existing ECC for computer systems may not be effective because of the following properties of ‘computing channels’ (see Kauth’s quote in the article [11]):

(1) The criterion of effectiveness may differ from the corresponding criterion for traditional communication channels.

(2) The most likely errors may not correspond to the most likely errors in traditional communication channels.
(3) It is necessary to take into consideration the possibility of errors in the logical devices used for encoding and decoding.

(4) If possible, the codes must allow arithmetic and other operations to be fulfilled.

Note that point 1 relates to the important problem of the effectiveness of applying ECC to particular subject areas. Point 2 puts forward the question of realistic models of errors in the ‘computing channels’, in particular, of what kinds of errors are the most probable for the ‘computing channels’. Point 3 puts forward the question of the complexity of the encoding-decoding devices of the best ECC. This problem is particularly acute for computing systems and other informational systems with representation of data in PARALLEL FORM. The complexity of the technical implementation of the encoders and decoders of many effective ECC is the primary reason why these codes are not used in informational systems with parallel representation of data. Point 4 indicates one essential shortcoming of the existing ECC: these codes are non-arithmetical and do not allow arithmetical operations to be fulfilled. Therefore, they cannot be used for detecting and correcting errors in the arithmetical units of computers. Although the article [11] was written in the 1960s, its ideas are very relevant now, when computing and measuring systems for mission-critical applications are being designed.

The book [12] contains interesting experimental data about the nature and statistics of the errors that can occur in typical computer structures (registers, counters, summators and so on) under the influence of noises in the electrical energy sources:

(1) With an increasing noise level in the electrical energy source, the number of errors of large multiplicity increases, and the distribution of errors by their multiplicity approaches the uniform distribution.

(2) Counters and summators are devices with a markedly asymmetric nature of errors. For a counter, the probabilities of a false increase or decrease of the stored number are equal to 0.96 and 0.04, respectively; for summators, these probabilities are equal to 0.8 and 0.2, respectively.

3.5. On the redundant numeral systems

The traditional approach to introducing code redundancy into computational structures assumes that the redundancy is introduced into digital structures after the numeral system used to perform the arithmetic operations is considered to be chosen (according to von Neumann’s principles, the classical binary system is preferable for computational structures). It is often forgotten that the code redundancy needed to detect errors can be introduced into computational structures at the earliest stage of their design, at the stage of choosing the numeral system for arithmetical calculations. Examples of such an approach are described in the book [21]. The system of residual classes is the most well known among the redundant numeral systems [21]. It has two advantages in comparison with the classical binary system, namely, increased speed of execution of arithmetic operations and the possibility of error detection in such operations; a minimal sketch of the residue representation is given below. In the USSR, a specialized processor for military applications was developed on the basis of the system of residual classes. Unfortunately, the system of residual classes did not justify all the benefits that were expected from it.
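For readers unfamiliar with the system of residual classes, the following minimal Python sketch (the moduli 3, 5 and 7 are illustrative assumptions, not those of the Soviet processor mentioned above) shows how a number is represented by its residues and how addition is performed independently in each residue channel:

```python
from math import prod

MODULI = (3, 5, 7)                 # pairwise coprime moduli; dynamic range is 3*5*7 = 105

def to_rns(x: int):
    """Represent x by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Addition is carried out independently (and in parallel) in every residue channel."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

a, b = to_rns(23), to_rns(45)
print(a, b, rns_add(a, b), to_rns((23 + 45) % prod(MODULI)))
# (2, 3, 2) (0, 0, 3) (2, 3, 5) (2, 3, 5)
# Note that nothing in (2, 3, 2) versus (0, 0, 3) reveals which number is larger:
# the non-positional character makes magnitude comparison difficult.
```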
Its main disadvantage is its non-positional character, which leads to many shortcomings in practical use (difficulties in representing negative numbers, in comparing numbers by their value, etc.).

The main objection of computer experts against the use of new redundant numeral systems is the fact that existing software is closely tied to the binary system and binary coding. One can, of course, agree with this argument, if not for one circumstance. In this article, we are talking about designing computing and measuring systems for mission-critical applications. In most cases, these are not universal but specialized computing and measuring systems, which perform a narrowly specialized task. In such systems, the number of programs used is limited, and for each specific application these specialized programs can be developed. The main task of such informational systems is to ensure highly reliable execution of the computer program and to prevent ‘false signals’ at the output, which can lead to a technological catastrophe. The ‘Trojan horse’ of the binary system excludes the possibility of designing highly reliable specialized informational systems. Therefore, the new positional numeral systems described below are eligible for use in mission-critical systems.

4. BERGMAN’S SYSTEM AS THE FIRST IN HISTORY NUMERAL SYSTEM WITH AN IRRATIONAL BASE

4.1. Definition

In 1957, the young American mathematician George Bergman published the article ‘A number system with an irrational base’ in the authoritative journal Mathematics Magazine [13]. The following sum is called Bergman’s system:

A = Σ_i a_i Φ^i,   (1)

where A is any real number, a_i is a binary numeral {0, 1} of the ith digit, i = 0, ±1, ±2, ±3, …, Φ^i is the weight of the ith digit and Φ = (1 + √5)/2 is the base of the numeral system (1).

4.2. The main distinction between Bergman’s system and the binary system

At first glance, there is no essential distinction between the formula (1) for Bergman’s system and the formula for the binary system:

A = Σ_i a_i 2^i   (i = 0, ±1, ±2, ±3, …; a_i ∈ {0, 1}),   (2)

whose digit weights are connected by the following ‘arithmetical’ relations:

2^i = 2^(i−1) + 2^(i−1) = 2 × 2^(i−1),   (3)

which underlie binary arithmetic. The principal distinction of Bergman’s system (1) from the binary system (2) is the fact that the famous irrational number

Φ = (1 + √5)/2 (the golden ratio)   (4)

is used as the base of the numeral system (1), and its digit weights are connected by the following well-known relations for the powers of the golden ratio:

Φ^i = Φ^(i−1) + Φ^(i−2) = Φ × Φ^(i−1),   (5)

which underlie the ‘golden’ arithmetic. That is why Bergman called his numeral system a numeral system with an irrational base.

Although Bergman’s article [13] is a fundamental result for number theory and computer science, the mathematicians and computer scientists of that period were not able to appreciate the mathematical discovery of the American wunderkind. It is interesting to note the following. By now the concept of Bergman’s system has entered widely into the Internet and into the modern scientific literature. A special Wikipedia article [14] is dedicated to Bergman’s system, and it is described briefly in Wolfram MathWorld [15]. Professor Donald Knuth refers to Bergman’s article [13] in his outstanding book [16]. A special paragraph of the author’s book [17] is dedicated to Bergman’s system, and in 2002 The Computer Journal (British Computer Society) published the author’s article [18] devoted to Bergman’s system and its applications.
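The following small Python sketch (a floating-point illustration added here, not part of Bergman’s article [13]) builds a ‘golden’ representation of an integer by greedily using the powers Φ^i and checks that the weighted sum restores the number:

```python
PHI = (1 + 5 ** 0.5) / 2          # the golden ratio, the base of Bergman's system

def bergman_digits(n: int, lo: int = -12, hi: int = 12, eps: float = 1e-9):
    """Greedy representation of n as a sum of powers PHI**i; returns the exponents
    i with digit a_i = 1 and the (ideally negligible) remainder."""
    exponents, remainder = [], float(n)
    for i in range(hi, lo - 1, -1):
        if PHI ** i <= remainder + eps:
            exponents.append(i)
            remainder -= PHI ** i
    return exponents, remainder

exps, rest = bergman_digits(10)
print(exps)                            # [4, 2, -2, -4]
print(sum(PHI ** i for i in exps))     # ~10.0
# 10 = PHI**4 + PHI**2 + PHI**-2 + PHI**-4 exactly: the irrational parts of the
# golden ratio powers cancel, which is why a natural number needs only finitely
# many digits here (cf. Theorem 1 in Section 6).
```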
Interest in Bergman’s system has thus clearly increased in modern mathematics and computer science.

4.3. Evaluation of Bergman’s system and its applications

As is known, new scientific ideas do not always arise where they are expected. Apparently, Bergman’s system [13] is one of the most unprecedented scientific discoveries in the contemporary history of science and mathematics. First of all, Bergman’s mathematical discovery [13] returns mathematics to the initial period of its development, when numeral systems and the rules of arithmetic operations were among the most important goals of mathematics (Babylon and ancient Egypt). However, the greatest impression is made by the fact that this new scientific discovery in the theory of numeral systems was made by the 12-year-old American wunderkind George Bergman. This is really an unprecedented case in the history of science and mathematics. The mathematical formula (1) for Bergman’s system looks so simple that it is difficult to believe that Bergman’s system is one of the largest modern mathematical discoveries, of fundamental interest for the history of mathematics, number theory and computer science.

In this regard, one can compare Bergman’s system with the discovery of incommensurable segments, made in Pythagoras’ scientific school. The proof of the incommensurability of the diagonal and the side of the square is so simple that any amateur of mathematics can obtain it without difficulty. However, this mathematical discovery causes delight to this day, since it was the turning event in the development of mathematics and led to the introduction of irrational numbers, without which it is difficult to imagine the existence of mathematics. Time will show how fair the above comparison of Bergman’s system with the discovery of the incommensurable segments is.

Two recent scientific results in computer science and number theory follow from Bergman’s system (1): the ‘golden’ ternary mirror-symmetrical arithmetic and the ‘golden’ number theory. Let us briefly consider the essence of these mathematical results. Readers can familiarize themselves with them in more detail in the articles [18–20].

5. THE ‘GOLDEN’ TERNARY MIRROR-SYMMETRICAL NUMERAL SYSTEM AND TERNARY MIRROR-SYMMETRICAL ARITHMETIC

5.1. Definition and the property of mirror symmetry

Let us consider the most interesting scientific results of the articles [18, 20]. It is proved in [18, 20] that any integer N (positive or negative) can be represented as the sum

N = Σ_{i=−m}^{m} c_i (Φ²)^i,   (6)

where c_i ∈ {1¯ = −1, 0, 1} is the ternary numeral of the ith digit; (Φ²)^i is the weight of the ith digit; Φ² = (3 + √5)/2 is the base of the numeral system (6); and −m, −m + 1, …, −2, −1, 0, 1, 2, …, m are integers. We name the sum (6) the ternary Φ-code of the integer N. The abridged notation of the sum (6) can be represented in the form of the following ternary (2m + 1)-digit code combination:

N = c_m c_(m−1) … c_2 c_1 c_0 . c_(−1) c_(−2) … c_(−(m−1)) c_(−m).   (7)

We can see that the ternary (2m + 1)-digit code combination consists of two parts relative to the 0th ternary numeral c_0: the left-hand part c_m c_(m−1) … c_2 c_1, which consists of the ternary numerals with the positive indices 1, 2, 3, …, m, and the right-hand part c_(−1) c_(−2) … c_(−(m−1)) c_(−m), which consists of the ternary numerals with the negative indices −1, −2, −3, …, −m.
It is proved that the ternary (2m + 1)-digit code combination (7) of every integer N has the property of mirror symmetry relative to the 0th ternary numeral c_0, namely

c_1 = c_(−1), c_2 = c_(−2), …, c_m = c_(−m).   (8)

Taking the property (8) into consideration, the ternary numeral system (6) is called the ternary mirror-symmetrical numeral system, and the ternary code combination (7) is called the ternary mirror-symmetrical representation.

5.2. Examples of ternary mirror-symmetrical representations

Table 2 demonstrates the property of ‘mirror symmetry’ for some initial natural numbers (the negative numeral −1 is denoted by 1¯).

Table 2. The property of ‘mirror symmetry’.

  i      :   3     2     1     0      −1    −2    −3
  (Φ²)^i :  Φ^6   Φ^4   Φ^2   Φ^0    Φ^−2  Φ^−4  Φ^−6
  N = 0  :   0     0     0     0  .    0     0     0
  N = 1  :   0     0     0     1  .    0     0     0
  N = 2  :   0     0     1     1¯ .    1     0     0
  N = 3  :   0     0     1     0  .    1     0     0
  N = 4  :   0     0     1     1  .    1     0     0
  N = 5  :   0     1     1¯    1  .    1¯    1     0
  N = 6  :   0     1     0     1¯ .    0     1     0
  N = 7  :   0     1     0     0  .    0     1     0
  N = 8  :   0     1     0     1  .    0     1     0
  N = 9  :   0     1     1     1¯ .    1     1     0
  N = 10 :   0     1     1     0  .    1     1     0

Let us give explanations of Table 2. The first row, i, contains the digit indices of the 7-digit ternary mirror-symmetrical code (6); the second row, (Φ²)^i, contains the digit weights of the 7-digit ternary mirror-symmetrical Φ-code (6); the rows N = 0, …, 10 contain the ternary ‘golden’ mirror-symmetrical representations of the positive integers from 0 to 10. The 0th digit, which separates the left-hand and right-hand parts of the ternary ‘golden’ mirror-symmetrical representations (the column i = 0), is marked by the point that follows it.

Thus, thanks to this simple observation, we have found a most important fundamental property of the integers, called the mirror-symmetrical property of integers. Based on this fundamental property, the ternary numeral system given by (6) was named the ternary mirror-symmetrical numeral system [18]. Another interesting feature of the ternary mirror-symmetrical system (6) follows from Table 2. For all well-known positional numeral systems, the ‘extension’ of the positional representation of a number is carried out only towards the higher digits. For the ternary mirror-symmetrical system (6), the ‘extension’ of the ternary mirror-symmetrical representation (7) occurs towards both sides, that is, towards the higher and the lower digits simultaneously. This feature, like the property of ‘mirror symmetry’ and other features, singles out the ternary mirror-symmetrical positional numeral system (6) among all other positional numeral systems.

5.3. Ternary mirror-symmetrical arithmetic

The rules of mirror-symmetrical summation and subtraction are based on the following identities for the golden proportion:

2Φ^(2k) = Φ^(2(k+1)) − Φ^(2k) + Φ^(2(k−1)),   (9)

3Φ^(2k) = Φ^(2(k+1)) + 0 + Φ^(2(k−1)),   (10)

4Φ^(2k) = Φ^(2(k+1)) + Φ^(2k) + Φ^(2(k−1)),   (11)

where k = 0, ±1, ±2, ±3, ….
The table of mirror-symmetrical summation (subtraction) of two ternary numerals a_k + b_k has the following form:

  a_k + b_k |  1¯     0    1
  1¯        |  1¯11¯  1¯   0
  0         |  1¯     0    1
  1         |  0      1    11¯1

The peculiarity of the summation (subtraction) of the ternary digits a_k + b_k consists in the fact that in the case of the summation (subtraction) of ternary numerals of the same sign, an intermediate sum of the opposite sign and a carry-over of the same sign arise, and the carry-over spreads symmetrically towards the two adjacent digits. When multi-digit ternary numbers are summed, the sum always appears in the mirror-symmetrical form.

The following trivial identity for the powers of the golden ratio underlies mirror-symmetrical multiplication:

Φ^(2n) × Φ^(2m) = Φ^(2(n+m)).   (12)

The table of mirror-symmetrical multiplication of two single-digit ternary mirror-symmetrical numbers a_k × b_k is given below:

  a_k × b_k |  1¯   0   1
  1¯        |  1    0   1¯
  0         |  0    0   0
  1         |  1¯   0   1

The final part of the article [18] describes the unique multi-digit ternary mirror-symmetrical summator (subtractor) and the matrix mirror-symmetrical summator (subtractor), on the basis of which a ternary mirror-symmetrical pipelined summator (subtractor) and a pipelined device for multiplication have been designed. The article [18], published in The Computer Journal, aroused great interest in the Western computer community; the outstanding American computer expert Professor Donald Knuth was the first to congratulate the author on this publication.

5.4. The main arithmetical advantages of the ternary mirror-symmetrical arithmetic

From the ‘technical’ point of view, the ternary mirror-symmetrical arithmetic has a number of important advantages:

(1) Mirror-symmetrical subtraction is the same arithmetic operation as mirror-symmetrical summation.

(2) Mirror-symmetrical summation (subtraction) is fulfilled by one and the same mirror-symmetrical summator (subtractor) in the ‘direct’ code, that is, without the use of the notions of inverse and additional codes.

(3) The sign of the summed or subtracted numbers is determined automatically, because it coincides with the sign of the highest significant ternary numeral of the ternary mirror-symmetrical representation of the summation (subtraction) result.

(4) The summation (subtraction) results are always represented in the mirror-symmetrical form, which allows checking the process of ternary mirror-symmetrical summation (subtraction) against the property of ‘mirror symmetry’.

(5) Mirror-symmetrical multiplication is reduced to mirror-symmetrical summation (subtraction). Ternary mirror-symmetrical multiplication can be fulfilled over ternary mirror-symmetrical numbers of equal or different signs in the ‘direct’ code, that is, without the use of the notions of inverse and additional codes.

(6) The sign of the result of mirror-symmetrical multiplication is determined automatically, because it coincides with the sign of the highest significant ternary numeral (1 or 1¯) of the ternary mirror-symmetrical representation of the result.

(7) The results of mirror-symmetrical multiplication are always represented in the mirror-symmetrical form, which allows checking the process of ternary mirror-symmetrical multiplication (a small numerical sketch of this checking idea, applied to the representations of Table 2, is given below).
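The following Python sketch (an illustrative reconstruction of the carry rule, not the author’s hardware summator) applies identities (9)–(11) as a symmetric carry to both neighbouring digits, rebuilds the representations of Table 2 by repeatedly adding 1, and checks the mirror-symmetry property (8) at every step; the digit −1 stands for the barred numeral 1¯:

```python
PHI = (1 + 5 ** 0.5) / 2          # digit weights are PHI**(2*i)

def normalize(d):
    """Reduce any digit outside {-1, 0, 1} by identities (9)-(11): a carry of the
    same sign is sent symmetrically to BOTH neighbouring digits."""
    changed = True
    while changed:
        changed = False
        for i in list(d):
            if d.get(i, 0) >= 2:
                d[i] -= 3                              # 2 -> -1, 3 -> 0, 4 -> 1
                d[i + 1] = d.get(i + 1, 0) + 1
                d[i - 1] = d.get(i - 1, 0) + 1
                changed = True
            elif d.get(i, 0) <= -2:
                d[i] += 3
                d[i + 1] = d.get(i + 1, 0) - 1
                d[i - 1] = d.get(i - 1, 0) - 1
                changed = True
    return d

def as_row(d, m=3):
    sym = {1: " 1", 0: " 0", -1: "-1"}
    left = " ".join(sym[d.get(i, 0)] for i in range(m, -1, -1))
    right = " ".join(sym[d.get(i, 0)] for i in range(-1, -m - 1, -1))
    return left + " . " + right

digits = {}                                            # the representation of 0
for n in range(1, 11):
    digits[0] = digits.get(0, 0) + 1                   # add 1 = PHI**0 to the 0th digit
    normalize(digits)
    value = sum(c * PHI ** (2 * i) for i, c in digits.items())
    mirror = all(digits.get(i, 0) == digits.get(-i, 0) for i in range(1, 4))
    print(n, as_row(digits), round(value, 6), "mirror-symmetric:", mirror)
```

For n = 1, …, 10 the printed rows reproduce Table 2 and the mirror-symmetry flag stays True, illustrating the built-in checking possibility mentioned in the list of advantages above.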
The operation of ternary mirror-symmetrical division is a more complicated arithmetical operation than ternary mirror-symmetrical summation, subtraction and multiplication. By its complexity, the operation of ternary mirror-symmetrical division is comparable to the same operation in the ternary symmetrical numeral system [21], used in the ternary computer ‘Setun’, designed at Moscow University.

6. THE ‘GOLDEN’ NUMBER THEORY AND NEW PROPERTIES OF NATURAL NUMBERS

6.1. The ‘extended’ Fibonacci and Lucas numbers

Bergman’s system (1) is closely connected with the so-called ‘extended’ Fibonacci and Lucas numbers F_i and L_i (i = 0, ±1, ±2, ±3, …) (see Table 3).

Table 3. The ‘extended’ Fibonacci and Lucas numbers.

  n     0    1    2    3    4    5     6    7     8    9     10
  F_n   0    1    1    2    3    5     8    13    21   34    55
  F_−n  0    1   −1    2   −3    5    −8    13   −21   34   −55
  L_n   2    1    3    4    7    11    18   29    47   76    123
  L_−n  2   −1    3   −4    7   −11    18  −29    47  −76    123

As follows from Table 3, the ‘extended’ Fibonacci and Lucas numbers are connected by the following simple relations:

F_−n = (−1)^(n+1) F_n;   L_−n = (−1)^n L_n.   (13)

6.2. The ‘golden’ representations of natural numbers

Let us consider the ‘golden’ representation of natural numbers in Bergman’s system (1):

N = Σ_i a_i Φ^i,   (14)

where a_i ∈ {0, 1} is the bit of the ith digit, Φ^i is the weight of the ith digit and Φ = (1 + √5)/2 is the base of the numeral system (14). We will name the sum (14) the Φ-code of the natural number N. The abridged notation of the Φ-code of the natural number N has the following form:

N = a_n a_(n−1) … a_1 a_0 . a_(−1) a_(−2) … a_(−k)   (15)

and is named the ‘golden’ representation of the natural number N. Note that the point in the ‘golden’ representation (15) separates it into two parts: the left-hand part, where the bits a_n a_(n−1) … a_1 a_0 have non-negative indices, and the right-hand part, where the bits a_(−1) a_(−2) … a_(−k) have negative indices. Note that the weights Φ^i of the Φ-code (14) are connected by the relation

Φ^i = Φ^(i−1) + Φ^(i−2).   (16)

Besides, the power of the golden ratio Φ^i is expressed through the ‘extended’ Fibonacci and Lucas numbers F_i and L_i (see Table 3) as follows:

Φ^i = (L_i + F_i √5)/2   (i = 0, ±1, ±2, ±3, …).   (17)

By using the relations (13), (16) and (17), the following theorem has been proved in [19].

Theorem 1. Every natural number can be represented in the Φ-code (14) of Bergman’s system (1) by a finite number of bits.

Note that Theorem 1 is far from trivial if we take into consideration that all powers of the golden proportion Φ^i (i = ±1, ±2, ±3, …) in the sum (14), with the exception of Φ^0 = 1, are irrational numbers. Note also that Theorem 1 is true only for natural numbers. Therefore, Theorem 1 can be referred to the category of new properties of natural numbers.

6.3. Multiplicity and the MINIMAL FORM of the ‘golden’ representations

The main feature of the ‘golden’ representations (15) of real numbers in Bergman’s system, compared with the binary system (2), is the multiplicity of ‘golden’ representations of one and the same real number.
The various ‘golden’ representations of one and the same real number can be obtained by using the operations of convolution (18) and devolution (19) on the ‘golden’ representation (15):

Convolution: 011 → 100,   (18)

Devolution: 100 → 011.   (19)

Note that the micro-operations (18) and (19) are based on the main mathematical identity (16), which relates the weights of the digits in Bergman’s system (1). The performance of these micro-operations on the ‘golden’ representation (15) of a certain number does not change the value of this number. The so-called MINIMAL FORM plays a special role among the various ‘golden’ representations (15) of one and the same number. The MINIMAL FORM can be obtained from the initial ‘golden’ representation by fulfilling in it all possible convolutions (18). The MINIMAL FORM has the following important features:

(1) Since the operation of convolution (011 → 100) transforms the triple of neighbouring bits 011 into the triple 100, in the MINIMAL FORM two bits 1 never stand side by side.

(2) The MINIMAL FORM has the minimal number of 1s among all possible ‘golden’ representations of the same number.

6.4. Z- and D-properties of natural numbers

Bergman’s system (1) is a source of new number-theoretical results. We give without proof the following properties of the Φ-code (14), stated as Theorems 2 and 3.

Theorem 2 (Z-property of natural numbers). If we represent an arbitrary natural number N in the Φ-code (14) and then substitute the ‘extended’ Fibonacci numbers F_i (i = 0, ±1, ±2, ±3, …) for the golden ratio powers Φ^i in the sum (14), then the sum that appears as a result of this substitution is identically equal to 0, independently of the initial natural number N, that is,

for any N = Σ_i a_i Φ^i, after the substitution Φ^i → F_i we have Σ_i a_i F_i ≡ 0   (i = 0, ±1, ±2, ±3, …).   (20)

Theorem 3 (D-property of natural numbers). If we represent an arbitrary natural number N in the Φ-code (14) and then substitute the ‘extended’ Lucas numbers L_i (i = 0, ±1, ±2, ±3, …) for the golden ratio powers Φ^i in the sum (14), then the sum that appears as a result of this substitution is identically equal to 2N, independently of the initial natural number N, that is,

for any N = Σ_i a_i Φ^i, after the substitution Φ^i → L_i we have Σ_i a_i L_i ≡ 2N   (i = 0, ±1, ±2, ±3, …).   (21)

We note that Theorems 2 and 3, like Theorem 1, are valid only for natural numbers; consequently, they describe new properties of natural numbers. The article [19] describes other new properties of natural numbers. For example, if we substitute the ‘extended’ Fibonacci numbers F_(i+1) (i = 0, ±1, ±2, ±3, …) for the golden ratio powers Φ^i in the Φ-code (14), then we get another representation of the same natural number, called the F-code of the natural number N:

N = Σ_i a_i F_(i+1).   (22)

If we substitute the ‘extended’ Lucas numbers L_(i+1) (i = 0, ±1, ±2, ±3, …) for the golden ratio powers Φ^i in the Φ-code (14), then we get yet another representation of the same natural number, called the L-code of the natural number N:

N = Σ_i a_i L_(i+1).   (23)

This means that in Bergman’s system there are three different ways of representing the same natural number N, the Φ-code (14), the F-code (22) and the L-code (23), that is,

N = Σ_i a_i Φ^i = Σ_i a_i F_(i+1) = Σ_i a_i L_(i+1).   (24)

These properties are verified numerically in the short sketch below.
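As a numerical cross-check of Theorems 2 and 3 (a small illustration using a greedy floating-point conversion; not taken from [19]), the following Python fragment computes the Φ-code digits of the first few natural numbers and verifies the Z-property (20), the D-property (21) and the F-code identity (22):

```python
PHI = (1 + 5 ** 0.5) / 2

def fib(i: int) -> int:
    """'Extended' Fibonacci numbers F_i for any integer i (cf. Table 3)."""
    if i < 0:
        return (-1) ** (-i + 1) * fib(-i)      # relation (13): F_-n = (-1)**(n+1) * F_n
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

def lucas(i: int) -> int:
    """'Extended' Lucas numbers L_i = F_(i-1) + F_(i+1)."""
    return fib(i - 1) + fib(i + 1)

def phi_code(n: int, lo: int = -20, hi: int = 20, eps: float = 1e-9):
    """Greedy Phi-code of a natural number: the exponents i with digit a_i = 1."""
    exps, rem = [], float(n)
    for i in range(hi, lo - 1, -1):
        if PHI ** i <= rem + eps:
            exps.append(i)
            rem -= PHI ** i
    return exps

for n in range(1, 11):
    exps = phi_code(n)
    z = sum(fib(i) for i in exps)              # Z-property (20): must be 0
    d = sum(lucas(i) for i in exps)            # D-property (21): must be 2N
    f = sum(fib(i + 1) for i in exps)          # F-code (22): must be N itself
    print(n, exps, "Z =", z, "D =", d, "F-code =", f)
```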
For many mathematicians working in number theory it is a great surprise that new properties of natural numbers were discovered in the 21st century, that is, 2.5 millennia after the writing of Euclid’s Elements, in which the systematic study of the properties of natural numbers started. Bergman’s system is the source of the ‘golden’ number theory [19], which once again emphasizes the fundamental nature of the mathematical discovery of George Bergman [13].

7. PASCAL’S TRIANGLE, FIBONACCI p-NUMBERS AND THE GOLDEN p-PROPORTIONS

7.1. Mathematical discovery by George Polya

As is known, Pascal’s triangle plays an important role in combinatorial analysis and has many interesting applications in mathematics and computer science, in particular in coding theory. By studying the so-called diagonal sums of Pascal’s triangle, the American mathematician George Polya came to a very simple and unexpected discovery, described in the book [22] (see Fig. 1). It should be noted that this very simple mathematical result remained unknown for many centuries to Blaise Pascal and to the other mathematicians who studied Fibonacci numbers and combinatorial analysis.

Figure 1. Fibonacci numbers in Pascal’s triangle. Source: Pascal’s Triangle, http://www.goldennumber.net/pascals-triangle/

7.2. Fibonacci p-numbers

By studying optimal measurement algorithms in his doctoral dissertation (1972) [25] and the diagonal sums of Pascal’s triangle (Fig. 1) in the book [23], the author found an infinite number of recurrent sequences, which for a given p = 0, 1, 2, 3, … are described by the following recurrent relation:

F_p(n) = F_p(n−1) + F_p(n−p−1)   for n > p + 1   (25)

with the seeds

F_p(1) = F_p(2) = ⋯ = F_p(p+1) = 1.   (26)

The numerical sequences generated by the recurrent relation (25) with the seeds (26) are named in [23] the Fibonacci p-numbers. It is clear that for the case p = 0 the Fibonacci p-numbers reduce to the classical binary sequence 1, 2, 4, 8, 16, 32, 64, …, 2^(n−1), and for the case p = 1 to the classical Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, …, F_n. For the case p = ∞, the Fibonacci p-numbers reduce to the trivial sequence

{1, 1, 1, …, 1, …}.   (27)

7.3. A representation of the Fibonacci p-numbers through binomial coefficients

The following formula, representing the binary numbers 2^n through binomial coefficients, is well known in combinatorial analysis:

2^n = C_n^0 + C_n^1 + ⋯ + C_n^n.   (28)

By studying Pascal’s triangle (Fig. 1) [23], we can represent the Fibonacci p-number F_p(n+1), given by the recurrent relation (25) with the seeds (26), through binomial coefficients as follows:

F_p(n+1) = C_n^0 + C_(n−p)^1 + C_(n−2p)^2 + C_(n−3p)^3 + C_(n−4p)^4 + ⋯.   (29)

Note that the known formula (28) is a partial case of (29) for p = 0. For the case p = 1, the formula (29) reduces to the following formula, which connects the classical Fibonacci numbers F_(n+1) = F_1(n+1) with binomial coefficients:

F_(n+1) = F_1(n+1) = C_n^0 + C_(n−1)^1 + C_(n−2)^2 + C_(n−3)^3 + C_(n−4)^4 + ⋯.   (30)

It is clear that the formulas (29) and (30) are another confirmation of the deep connection between the theory of Fibonacci p-numbers and combinatorial analysis.

7.4. The golden p-proportions

7.4.1. A ratio of adjacent Fibonacci p-numbers

The so-called Kepler formula, which relates the golden ratio to the Fibonacci numbers F_n, is well known:

Φ = lim_(n→∞) F_n / F_(n−1) = (1 + √5)/2.   (31)
It is proved in [24] that for a given p = 0, 1, 2, 3, … the limit of the ratio of two adjacent Fibonacci p-numbers is equal to

lim_(n→∞) F_p(n) / F_p(n−1) = Φ_p,   (32)

where Φ_p is a mathematical constant, the positive root of the algebraic equation

x^(p+1) = x^p + 1.   (33)

Note that for the case p = 0 Eq. (33) reduces to the trivial equation x = 2. For the case p = 1, Eq. (33) reduces to the algebraic equation of the golden ratio

x² = x + 1   (34)

with the positive root Φ = (1 + √5)/2 (the golden ratio). As follows from the above arguments, the constants Φ_p are new fundamental mathematical constants that are directly related to the binomial coefficients, Pascal’s triangle and combinatorial analysis in general.

7.4.2. The simplest algebraic properties of the golden p-proportions

If we substitute the golden p-proportion Φ_p for x in Eq. (33), we get the following identity for the golden p-proportion:

Φ_p^(p+1) = Φ_p^p + 1.   (35)

If we divide all terms of the identity (35) by Φ_p^p, we get the following identities for the golden p-proportion:

Φ_p = 1 + 1/Φ_p^p   (36)

or

Φ_p − 1 = 1/Φ_p^p.   (37)

Note that for the case p = 0 (Φ_0 = 2) the identities (36) and (37) reduce to the trivial expressions 2 = 1 + 1/1 and 2 − 1 = 1/1. For the case p = 1, we have Φ_1 = Φ = (1 + √5)/2, and the identities (36) and (37) reduce to the well-known identities for the golden proportion Φ:

Φ² = Φ + 1,   (38)

Φ = 1 + 1/Φ.   (39)

If we multiply and divide all terms of the identity (35) repeatedly by Φ_p, we get the following remarkable identities connecting the powers of the golden p-proportion:

Φ_p^n = Φ_p^(n−1) + Φ_p^(n−p−1) = Φ_p × Φ_p^(n−1)   (n = 0, ±1, ±2, ±3, …).   (40)

Note that for the case p = 0 we have Φ_p = Φ_0 = 2, and the identities (40) reduce to the trivial identities for the ‘binary’ numbers: 2^n = 2^(n−1) + 2^(n−1) = 2 × 2^(n−1). For the case p = 1, we have Φ_1 = Φ = (1 + √5)/2, and the identities (40) reduce to the well-known identities for the classical golden ratio:

Φ^n = Φ^(n−1) + Φ^(n−2) = Φ × Φ^(n−1).   (41)

8. FIBONACCI p-CODES, CODES OF THE ‘GOLDEN’ p-PROPORTIONS AND THEIR APPLICATIONS IN COMPUTER SCIENCE AND DIGITAL METROLOGY

8.1. Fibonacci p-codes

8.1.1. Definition

In 1972, the author defended his Grand Doctoral dissertation ‘Synthesis of Optimal Algorithms for Analog-Digital Conversion’ [25]. On the basis of this dissertation the author wrote the book ‘Introduction into Algorithmic Measurement Theory’ [23], devoted to the substantiation of the theory of the so-called Fibonacci p-codes:

N = a_n F_p(n) + a_(n−1) F_p(n−1) + ⋯ + a_i F_p(i) + ⋯ + a_1 F_p(1),   (42)

where N is a natural number, a_i ∈ {0, 1} is the binary numeral of the ith digit of the code (42), and n is the number of digits of the code (42). Here

{F_p(1), F_p(2), …, F_p(i), …, F_p(n)}   (43)

are the weights of the code (42), and F_p(i) (i = 1, 2, 3, …, n) are the Fibonacci p-numbers, which follow from Pascal’s triangle (diagonal sums) and are expressed through binomial coefficients by (29). The formula (42) was obtained by the author during the synthesis of the so-called optimal measurement algorithms, the theory of which is described in the book [23]. The formula (42) defines a class of positional numeral systems corresponding to the so-called Fibonacci measurement algorithms, since the Fibonacci p-numbers are the digit weights of the numeral system (42).

8.1.2. Partial cases of the Fibonacci p-codes

Note that the Fibonacci p-codes (42) include an infinite number of different positional ‘binary’ representations of positive integers, because every p originates its own Fibonacci p-code (42) (p = 0, 1, 2, 3, …); the sketch below computes several of these sequences and their limiting ratios.
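For illustration (a hedged sketch, not from [23] or [24]), the following Python fragment generates the Fibonacci p-numbers by the recurrence (25) with the seeds (26) and compares the ratio of adjacent terms with the golden p-proportion Φ_p, computed as the positive root of equation (33):

```python
def fibonacci_p(p: int, count: int):
    """Fibonacci p-numbers F_p(1), ..., F_p(count): recurrence (25) with seeds (26)."""
    seq = [1] * min(p + 1, count)              # F_p(1) = ... = F_p(p + 1) = 1
    while len(seq) < count:
        seq.append(seq[-1] + seq[-p - 1])      # F_p(n) = F_p(n - 1) + F_p(n - p - 1)
    return seq

def golden_p(p: int, iterations: int = 200) -> float:
    """Positive root of x**(p + 1) = x**p + 1 by fixed-point iteration."""
    x = 2.0
    for _ in range(iterations):
        x = (1 + x ** p) ** (1 / (p + 1))
    return x

for p in (0, 1, 2):
    seq = fibonacci_p(p, 30)
    print(f"p = {p}: {seq[:9]} ...  ratio = {seq[-1] / seq[-2]:.6f}  "
          f"golden p-proportion = {golden_p(p):.6f}")
# p = 0 gives the powers of two (ratio 2); p = 1 gives the classical Fibonacci
# numbers (ratio -> 1.618034, the golden ratio); p = 2 gives the ratio -> 1.465571,
# the positive root of x**3 = x**2 + 1.
```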
In particular, for the case p = 0 the Fibonacci p-code (42) reduces to the classical binary code

N = a_n 2^(n−1) + a_(n−1) 2^(n−2) + ⋯ + a_i 2^(i−1) + ⋯ + a_1 2^0,   (44)

which underlies classical binary arithmetic, the basis of modern ‘binary’ computers. For the case p = 1, the Fibonacci p-code (42) reduces to the classical Fibonacci code, named the Fibonacci 1-code:

N = a_n F_n + a_(n−1) F_(n−1) + ⋯ + a_i F_i + ⋯ + a_1 F_1,   (45)

where F_i = F_(i−1) + F_(i−2), F_1 = F_2 = 1 (i = 1, 2, 3, …, n) are the classical Fibonacci numbers. The abridged representations of the Fibonacci p-code (42), as well as of the classical binary code (44) and the Fibonacci 1-code (45), have one and the same form

N = a_n a_(n−1) … a_i … a_1   (46)

and are named Fibonacci p-representations, or simply Fibonacci representations, of natural numbers.

Consider now the partial case p = ∞. For this case, every Fibonacci p-number is identically equal to 1, that is, for any integer i = 1, 2, 3, …, n we have F_p(i) = 1. Then the sum (42) takes the form of the so-called unitary code:

N = 1 + 1 + ⋯ + 1   (N units).   (47)

However, the expression (47) coincides with the Euclidean definition of natural numbers, used by Euclid in his elementary number theory. Hence, the Fibonacci p-codes given by (42) are a very wide generalization of the binary code (44) and the Fibonacci 1-code (45), which are the partial cases of the Fibonacci p-codes (42) for p = 0 and p = 1, respectively. On the other hand, the Fibonacci p-code (42) for the case p = ∞ reduces to the Euclidean definition of natural numbers (47). Thus, the fundamental significance of the formula (42) consists in the fact that it connects various mathematical theories and concepts, including Fibonacci numbers theory [26–28], Pascal’s triangle and combinatorial analysis, the theory of numeral systems and number theory, and finally binary arithmetic, the basis of modern computers.

The formula (42) can be viewed from different points of view. First of all, it is a generalization of Fibonacci numbers theory [26–28], because the Fibonacci p-numbers are a wide generalization of the classical Fibonacci numbers. Secondly, because the Fibonacci p-codes (42) are a generalization of the Euclidean definition of natural numbers (47), this approach can lead us to an extension of number theory; in essence, the modern Fibonacci numbers theory [26–28] is such an ‘extension’ of number theory.

This article is devoted to the applied aspects of the Fibonacci p-codes (42), in particular their applications in computer science. Since the number of the Fibonacci p-codes (42) is theoretically infinite, we must choose the redundant Fibonacci p-code that is most suitable for designing highly reliable Fibonacci computers as a new direction in computer technology. Here it is appropriate to draw an analogy between the Fibonacci p-codes (42) and the so-called canonical numeral systems described in [21]. As is known, the number of canonical numeral systems, that is, numeral systems with the bases 2, 3, …, 10, …, 12, …, 60, etc., is theoretically infinite. But the main criterion for choosing the binary system (44) as the main positional numeral system for electronic computers was the principle of simplicity of technical implementation. Von Neumann’s idea of using the binary system (44) in electronic computers is based on the arithmetic advantages of the binary system and the specifics of electronic components and Boolean logic.
Von Neumann wrote: ‘Our main memory unit by nature is adapted to the binary system … A flip-flop in fact is again a binary device … The main advantage of the binary system in comparison with the decimal one consists in the greater simplicity of technical realization and the greater speed with which the basic operations can be performed. An additional remark consists of the following. The main part of the computer by its nature is not arithmetical, but mainly logical. The new logic, being a “yes-no” system, is mainly binary. Therefore, the construction of binary arithmetical devices greatly facilitates the construction of a more homogeneous machine, which can be designed better and more effectively.’

We have a similar situation for the redundant Fibonacci p-codes (p = 1, 2, 3, …). It follows from the above reasoning that the number of the redundant Fibonacci p-codes given by (42) is theoretically infinite. However, in general they have different applied significance, and for the Fibonacci p-codes the principle of simplicity of technical realization is very important. The Fibonacci p-code corresponding to the case p = 1 has the least redundancy sufficient for designing built-in error-detecting devices, and it is the simplest Fibonacci p-code from the point of view of technical implementation.

8.1.3. A little history: the author’s first publications in the field and the patenting of Fibonacci inventions

Since the Fibonacci p-codes (42) are a generalization of the binary system, which underlies modern computers, immediately after defending his Grand Doctoral dissertation (1972) [25] the author posed as his main challenge the creation of new arithmetical and informational foundations for new computers, Fibonacci computers, based on the Fibonacci p-codes (42). The author’s first articles on this theme (in Russian) [29, 30] were published in 1974–75. The scientific trip to Austria (January–March 1976), the work as Visiting Professor at the Vienna Technical University and the author’s speech at the joint session of the Austrian Cybernetics and Computer societies on the theme ‘Algorithmic Measurement Theory and Foundations of Computer Arithmetic’ (3 March 1976) became the beginning of the international recognition of the author’s scientific direction. The high evaluation of the author’s speech by Austrian scientists led to the wide patenting of the author’s inventions in the field of ‘Fibonacci computers’ abroad. Sixty international patents, granted for the Soviet inventions in the field of computer science and digital metrology in the USA, Japan, England, France, the FRG, Canada, Poland and the DDR [31–43], are official legal documents confirming the priority of Soviet science (and of the author of this article) in a new direction in the field of computer science and digital metrology. After moving to Canada in 2004, the author’s main goal became to acquaint the Western computer community with the main ideas of this scientific direction. To this end, the author wrote the book [17], published by World Scientific in 2009, and the articles [19, 20, 44], published in the British Journal of Mathematics and Computer Science during 2015–16. Taking into consideration the availability of the English publications [17, 19, 20, 44], we will outline only the most interesting results in the field of the theory of the Fibonacci p-codes and the Fibonacci arithmetic following from them, referring readers to these publications for a more detailed acquaintance.
8.1.4. 'Convolution' and 'devolution' for the Fibonacci 1-code

Note that the Fibonacci 1-code (45) is a discrete analog of the Φ-code (14) $N = \sum_i a_i \Phi^i$, and the conceptions of convolution (011 → 100) and devolution (100 → 011) can be applied to Fibonacci representations:

(a) Convolutions: 7 = 01111 = 10011 = 10100, (48)
(b) Devolutions: 5 = 10000 = 01100 = 01011. (49)

The convolution result 10100 in (48) is called the 'convolute' Fibonacci representation and the devolution result 01011 in (49) is called the 'devolute' Fibonacci representation. For the case p = 1, the 'convolute' and 'devolute' Fibonacci representations of a positive integer N have characteristic features. In particular, in the 'convolute' Fibonacci representation no two bits of 1 stand together, and in the 'devolute' Fibonacci representation no two bits of 0 stand together, starting from the highest bit of 1 of the Fibonacci representation (46).

Consider now the peculiarities of the convolution and devolution for the lowest digits of the Fibonacci representation (46). As is well known, for the case p = 1 the weights of the two lowest digits of the Fibonacci 1-code (45) are identically equal to 1, that is, $F_1 = F_2 = 1$. The operations of devolution and convolution for these digits are therefore performed as follows: 10 → 01 (devolution) and 01 → 10 (convolution).

8.1.5. The base of the Fibonacci p-code

For the case p = 0, the base of the binary system (44) is calculated as the ratio of adjacent digit weights, that is, $\frac{2^k}{2^{k-1}} = 2$. Let us apply this principle to the Fibonacci p-code (42) and consider the ratio

$\frac{F_p(k)}{F_p(k-1)}$. (50)

The limit of the ratio (50) for $k \to \infty$ is the base of the Fibonacci p-code (42). As follows from the above, the limit of (50) is equal to

$\lim_{k \to \infty} \frac{F_p(k)}{F_p(k-1)} = \Phi_p$, (51)

where $\Phi_p$ is the golden p-proportion. This means that the base of the Fibonacci p-code (42) for the case p > 0 is the irrational number $\Phi_p$, and hence the Fibonacci p-codes (42) are a new class of positional numeral systems with irrational bases. For the case p = 1, we have

$\lim_{k \to \infty} \frac{F(k)}{F(k-1)} = \Phi = \frac{1 + \sqrt{5}}{2}$ (the golden ratio), (52)

that is, the base of the Fibonacci 1-code (45) coincides with the base of Bergman's system (1).

8.2. Fibonacci arithmetic

8.2.1. Comparison of numbers in the Fibonacci 1-code

It is proved in [17, 44] that the comparison of numbers in the Fibonacci 1-code (45) is performed similarly to the classic binary code (44), if the compared numbers are represented in the MINIMAL FORM. This property (simplicity of number comparison) is one of the important arithmetical advantages of the Fibonacci 1-code (45).

8.2.2. The basic micro-operations

The main distinction of the Fibonacci 1-code (45) from the binary code (44) is the multiplicity of Fibonacci representations of one and the same positive integer. By using the above micro-operations of convolution (011 → 100) and devolution (100 → 011), we can change the form of the Fibonacci representation of one and the same positive integer. This means that the binary 1's in the Fibonacci representation (46) of a number can move to the left or to the right along the Fibonacci representation (46) of the same number by means of the micro-operations of convolution (011 → 100) and devolution (100 → 011). Recall once more that the fulfillment of these micro-operations does not change the number itself, that is, we only obtain different Fibonacci representations of one and the same number. This fact allows developing an original approach to Fibonacci arithmetic, based on the so-called basic micro-operations.
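The value-preserving character of these transformations is easy to check directly. The short sketch below (an illustration only; the helper names are not from the article) applies convolution (011 → 100) and devolution (100 → 011) to the Fibonacci 1-representations of the examples (48) and (49) and verifies that the represented numbers do not change.

```python
# An illustrative check (not from the article) that convolution 011 -> 100 and
# devolution 100 -> 011 do not change the number represented in the Fibonacci 1-code,
# whose digit weights are F_1 = F_2 = 1, F_i = F_{i-1} + F_{i-2}.

def fib_weights(n):
    w = [1, 1]
    while len(w) < n:
        w.append(w[-1] + w[-2])
    return w[:n]                              # F_1 ... F_n, lowest weight first

def value(bits):
    """bits[0] is the highest digit a_n, bits[-1] is the lowest digit a_1."""
    return sum(b * w for b, w in zip(bits, reversed(fib_weights(len(bits)))))

def convolve(bits, k):                        # 011 -> 100 at positions k, k+1, k+2
    assert bits[k:k + 3] == [0, 1, 1]
    return bits[:k] + [1, 0, 0] + bits[k + 3:]

def devolve(bits, k):                         # 100 -> 011 at positions k, k+1, k+2
    assert bits[k:k + 3] == [1, 0, 0]
    return bits[:k] + [0, 1, 1] + bits[k + 3:]

seven = [0, 1, 1, 1, 1]                       # 7 = 3 + 2 + 1 + 1, as in (48)
five = [1, 0, 0, 0, 0]                        # 5, as in (49)
assert value(convolve(convolve(seven, 0), 2)) == value(seven) == 7   # 01111 -> 10011 -> 10100
assert value(devolve(five, 0)) == value(five) == 5                   # 10000 -> 01100
print("convolution and devolution preserve the value")
```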
Let us introduce the following four basic micro-operations, used to perform logical and arithmetical operations over binary words: the convolution 011 → 100, the devolution 100 → 011, the two-placed replacement and the two-placed absorption, defined below. (53)

Note that the noise-immune Fibonacci arithmetic based on the above micro-operations (53) was described for the first time in the article [45] and later in the book [17] and the article [44]. Note also that the convolutions and devolutions listed in (53) are simple code transformations, which are performed over three adjacent bits of the Fibonacci representation of one and the same number N in the Fibonacci 1-code (45).

The micro-operation of replacement is a two-placed micro-operation, which is performed over the same digits of two registers, the top register A and the lower register B. Consider the case when the register A has the bit 1 in the kth digit and the register B has the bit 0 in the same kth digit (the condition for the replacement). The micro-operation of replacement consists in moving the bit 1 from the kth digit of the top register A to the kth digit of the lower register B. Note that this operation can be performed only under the condition that the bits of the kth digits of the registers A and B are equal to 1 and 0, respectively.

The micro-operation of absorption is a two-placed micro-operation for the condition when the bits 1 stand in the kth digits of both the top register A and the lower register B. This micro-operation consists in the mutual annihilation of the bits 1 in the top and lower registers A and B: after the absorption, these bits 1 are replaced by bits 0.

It is necessary to pay attention to the following 'technical' peculiarity of the above basic micro-operations (53). In the register interpretation of these micro-operations, each micro-operation may be performed by means of the inversion of the flip-flops involved in the micro-operation. This means that each micro-operation is reduced to flip-flop switching.

8.2.3. Logic operations

We can demonstrate the possibility of performing the simplest logic operations by means of the above basic micro-operations (53). Let us perform all possible replacements from the top register A to the lower register B. As a result of the replacements, we get two new binary words A′ and B′. The binary word A′ is the logical conjunction (∧) of the initial binary words A and B, that is, A′ = A ∧ B, and the binary word B′ is the logical disjunction (∨) of the initial binary words A and B, that is, B′ = A ∨ B.

The logic operation of modulo 2 addition is performed by means of the simultaneous fulfillment of all possible replacements and absorptions. The results of this code transformation are two new binary words A′ = const 0 and B′ = A ⊕ B. It is clear that the binary word A′ = const 0 plays the role of a checking binary word for the modulo 2 addition, which is important for computer applications.

The logic operation of inversion of the code A is reduced to performing the absorptions over the initial binary word A and the special binary word B = const 1; as a result, the binary word B′ = ¬A is the required inversion, while the binary word A′ = const 0 plays the role of a checking binary word for the inversion, which is important for computer applications.
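A direct way to see these identities is to model the two registers as bit lists. The following sketch (illustrative; it is not the author's implementation) performs all possible replacements and absorptions and checks the resulting conjunction, disjunction, modulo 2 sum and inversion.

```python
# An illustrative model (not the author's hardware) of the two-placed micro-operations
# of replacement and absorption on parallel registers A and B, and of the logic
# operations of Section 8.2.3 that they produce.

def replace_all(a, b):
    """Move a 1 from A to B in every digit where A holds 1 and B holds 0."""
    a2, b2 = a[:], b[:]
    for k in range(len(a)):
        if a[k] == 1 and b[k] == 0:
            a2[k], b2[k] = 0, 1
    return a2, b2

def absorb_all(a, b):
    """Mutually annihilate the 1s standing in the same digit of A and B."""
    a2, b2 = a[:], b[:]
    for k in range(len(a)):
        if a[k] == 1 and b[k] == 1:
            a2[k], b2[k] = 0, 0
    return a2, b2

A = [1, 0, 1, 1, 0]
B = [1, 1, 0, 1, 0]

# Conjunction and disjunction via all possible replacements:
A1, B1 = replace_all(A, B)
assert A1 == [x & y for x, y in zip(A, B)]        # A' = A AND B
assert B1 == [x | y for x, y in zip(A, B)]        # B' = A OR B

# Modulo 2 addition via simultaneous absorptions and replacements:
A2, B2 = replace_all(*absorb_all(A, B))
assert A2 == [0] * len(A)                         # A' = const 0, the checking word
assert B2 == [x ^ y for x, y in zip(A, B)]        # B' = A XOR B

# Inversion of A via absorptions against B = const 1:
A3, B3 = absorb_all(A, [1] * len(A))
assert A3 == [0] * len(A) and B3 == [1 - x for x in A]   # B' = NOT A
print("conjunction, disjunction, XOR and inversion reproduced")
```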
8.2.4. Fibonacci summation

The idea of the summation of two numbers A and B by using the basic micro-operations consists in the following. We have to move all the binary 1's from the top register A to the lower register B. For this purpose, we use the micro-operations of replacement, devolution and convolution. The result is formed in the register B.

For example, let us sum the numbers A0 = 010100100 and B0 = 001010100. The first step of the summation consists in the replacement of all possible bits of 1 from the register A to the register B; we apply the micro-operation of replacement to all digits of the initial numbers A and B, but it can be performed only for those digits where the condition of replacement is satisfied:

A0 = 010100100, B0 = 001010100 ⇒ A1 = 000000100, B1 = 011110100.

The second step is the fulfillment of all possible devolutions in the binary word A1 and all possible convolutions in the binary word B1:

A1 = 000000100, B1 = 011110100 ⇒ A2 = 000000011, B2 = 100110100.

The third step is the replacement of all possible bits of 1 from the register A to the register B:

A2 = 000000011, B2 = 100110100 ⇒ A3 = 000000000, B3 = 100110111.

The summation is over, because all bits of 1 have moved from the register A to the register B. After reducing the binary word B3 to the MINIMAL FORM, we get the sum B3 = A0 + B0, represented in the MINIMAL FORM:

B3 = 100110111 = 101001001 = 101001010 = A0 + B0.

Thus, the summation is reduced to the sequential fulfillment of the micro-operations of replacement for the two binary words A and B, the micro-operations of convolution for the binary word B and the micro-operations of devolution for the binary word A.

8.2.5. Fibonacci subtraction

The idea of the Fibonacci subtraction of the number B from the number A by using the basic micro-operations consists in the mutual absorption of the binary 1's in the Fibonacci representations of the numbers A and B, until one of them becomes equal to 0. To realize this idea, we have to perform sequentially the micro-operations of absorption for the Fibonacci representations A and B and then the micro-operations of devolution for the Fibonacci representations A and B. The subtraction result is always formed in the register of the bigger number. If the result is formed in the top register A, the sign of the subtraction result is '+'; in the opposite case, the subtraction result has the sign '−'.

Let us demonstrate this idea on the following example. Let us subtract the number B0 = 101010010 from the number A0 = 101001000, both represented in the MINIMAL FORM of the Fibonacci 1-code. The first step is the absorption of all possible binary 1's in the initial Fibonacci representations A0 and B0:

A0 = 101001000, B0 = 101010010 ⇒ A1 = 000001000, B1 = 000010010.

The second step is the devolutions for the Fibonacci representations A1 and B1:

A1 = 000001000, B1 = 000010010 ⇒ A2 = 000000110, B2 = 000001101.

The third step is the absorptions for the Fibonacci representations A2 and B2:

A2 = 000000110, B2 = 000001101 ⇒ A3 = 000000010, B3 = 000001001.

The fourth step is the devolutions for the Fibonacci representations A3 and B3:

A3 = 000000010, B3 = 000001001 ⇒ A4 = 000000001, B4 = 000000111.

The fifth step is the absorptions for the Fibonacci representations A4 and B4:

A4 = 000000001, B4 = 000000111 ⇒ A5 = 000000000, B5 = 000000110.

The subtraction is over because A5 = 000000000. After reducing the Fibonacci representation B5 to the MINIMAL FORM, we get the subtraction result B5 = 000001000. The subtraction result is in the register B. This means that the sign of the subtraction result is '−', that is, the difference of the numbers A − B is equal to D = A − B = −000001000. If we code the sign '−' by the bit 1, then we can represent the difference D as follows: D = A − B = 1.000001000.
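The worked examples above are easy to verify numerically. The fragment below (added only as a check; it is not part of the original exposition) evaluates the 9-digit Fibonacci representations with the weights 34, 21, 13, 8, 5, 3, 2, 1, 1 and confirms the stated sum and difference.

```python
# A quick numerical check of the summation and subtraction examples above.
FIB_WEIGHTS = [34, 21, 13, 8, 5, 3, 2, 1, 1]       # F_9 ... F_1, highest digit first

def value(rep):
    return sum(int(b) * w for b, w in zip(rep, FIB_WEIGHTS))

# Summation: A0 + B0 = 31 + 20 = 51, which is the value of B3 and of its minimal form.
assert value("010100100") + value("001010100") == value("100110111") == value("101001010") == 51

# Subtraction: A0 - B0 = 50 - 53 = -3, and the stated result 000001000 has the value 3.
assert value("101001000") - value("101010010") == -value("000001000") == -3
print("the worked examples check out")
```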
8.2.6. The 'binary' multiplication

To find the algorithms of the Fibonacci multiplication and division, we will use an analogy with the classic binary multiplication and division. We start with multiplication. To multiply two numbers A and B in the classic binary code (44), that is, to get the product P = A × B, we have to represent the multiplier B in the form of the n-digit binary code (44). Then the product P = A × B can be written in the following form:

$P = A \times B = A \times b_n 2^{n-1} + A \times b_{n-1} 2^{n-2} + \cdots + A \times b_i 2^{i-1} + \cdots + A \times b_1 2^0$, (54)

where $b_i \in \{0, 1\}$ are the binary numerals of the multiplier B. It follows from (54) that the binary multiplication is reduced to forming the partial products of the kind $A \times b_i 2^{i-1}$ and summing them. The partial product $A \times b_i 2^{i-1}$ is formed by shifting the code representation of the number A to the left by (i − 1) digits. The binary multiplication algorithm based on (54) has a long history and goes back to the doubling method of ancient Egyptian mathematics [46].

8.2.7. Fibonacci multiplication

The analysis of the Egyptian doubling method [46] suggests the following method of Fibonacci multiplication for the general case of p. Let us consider the product P = A × B, where the numbers A and B are represented in the Fibonacci p-code (42). By using the representation of the multiplier B in the Fibonacci p-code (42), we can represent the product P = A × B as follows:

$P = A \times B = A \times b_n F_p(n) + A \times b_{n-1} F_p(n-1) + \cdots + A \times b_i F_p(i) + \cdots + A \times b_1 F_p(1)$, (55)

where $F_p(i)$ ($i = 1, 2, 3, \ldots, n$) are the Fibonacci p-numbers. Note that the sum (55) is a generalization of the sum (54), which underlies the algorithm of the 'binary' multiplication. The algorithm of the Fibonacci multiplication follows from the sum (55): the multiplication is reduced to the summation of the partial products of the kind $A \times b_i F_p(i)$. They are formed from the multiplicand A according to a special procedure, which is an analog of the Egyptian multiplication.

Let us demonstrate the Fibonacci multiplication for the case of the simplest Fibonacci 1-code (45).

Example 4.1. Find the product 41 × 305. The solution is given in Table 4, which is explained as follows. Construct Table 4, consisting of the three columns F, G and P. Insert the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, 34 into the F-column. Insert into the G-column the generalized Fibonacci 1-sequence 305, 305, 610, …, 10370, which is formed from the first multiplier 305 according to the 'Fibonacci recurrence relation' $G_i = G_{i-1} + G_{i-2}$. Mark with an inclined line (/) and bold font all the F-numbers that give the second multiplier in the sum (41 = 34 + 5 + 2). Mark all the G-numbers 610, 1525, 10370 corresponding to the marked F-numbers and rewrite them into the P-column. By summing all the P-numbers, 610 + 1525 + 10370, we get the product 41 × 305 = 12505. This multiplication algorithm is easily generalized to the case of the Fibonacci p-codes (42).

Table 4. Example of Fibonacci multiplication.

F                  G        P
1                  305
1                  305
/2                 610      → 610
3                  915
/5                 1525     → 1525
8                  2440
13                 3965
21                 6505
/34                10370    → 10370
41 = 34 + 5 + 2             41 × 305 = 12505
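The procedure of Table 4 can be stated compactly in code. The sketch below (an illustration under the assumption that the marked F-numbers are chosen greedily, which reproduces the decomposition 41 = 34 + 5 + 2; the function name is ours) computes the product exactly as in the table.

```python
# An illustrative sketch of the Fibonacci multiplication of Table 4: build the F- and
# G-columns with the same recurrence, mark the F-numbers whose sum gives one factor,
# and add up the G-numbers standing opposite the marked F-numbers.

def fibonacci_multiply(a, b, length=20):
    F = [1, 1]                                  # F-column: 1, 1, 2, 3, 5, 8, 13, 21, 34, ...
    G = [b, b]                                  # G-column: b, b, 2b, 3b, ... (G_i = G_{i-1} + G_{i-2})
    while len(F) < length:
        F.append(F[-1] + F[-2])
        G.append(G[-1] + G[-2])

    remainder, product = a, 0
    for i in range(length - 1, -1, -1):         # greedy marking of the F-numbers
        if F[i] <= remainder:
            remainder -= F[i]                   # mark F[i] ...
            product += G[i]                     # ... and rewrite G[i] into the P-column
    assert remainder == 0
    return product

print(fibonacci_multiply(41, 305))              # 12505 = 610 + 1525 + 10370, as in Table 4
```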
8.2.8. Fibonacci division

We can apply the Egyptian method of division [46] to construct the algorithm of the Fibonacci division. An example of the Fibonacci division in the Fibonacci 1-code (45) is described in the book [17] and the article [44].

8.3. A conception of the Fibonacci high-reliability arithmetical processor based on the basic micro-operations

8.3.1. Checking of the basic micro-operations

The basic idea of designing a self-checking Fibonacci processor consists in the following: it is necessary to develop an effective system of checking the basic micro-operations in the process of their fulfillment. Let us demonstrate the possibility of realizing this idea by using the above basic micro-operations (convolution, devolution, replacement and absorption) used in the noise-immune Fibonacci arithmetic.

We pay attention to the following 'technical' peculiarity of the above basic micro-operations. In the 'register interpretation' of these micro-operations, each micro-operation may be realized by means of the inversion of the flip-flops involved in the micro-operation. This means that each micro-operation is realized technically by means of flip-flop switching.

Let us evaluate now the potential ability of the basic micro-operations to detect errors, which may appear during their realization. As is well known, the potential error-detection ability is determined by the ratio of the number of detectable errors to the total number of all possible errors. Let us explain the essence of our approach to the detection of errors in the above micro-operations on the example of the micro-operation of convolution:

011 ⇒ 100. (56)

The convolution is performed over the 3-digit binary code combination (56). It is clear that there are $2^3 = 8$ possible transitions, which can arise at the fulfillment of the micro-operation (56). Only one of them, given by (56), is the correct, that is, error-free transition. The code combinations

{011, 100}, (57)

which are involved in the error-free transition (56), are called allowed code combinations for the convolution. All the remaining code combinations, which can appear during the convolution (56),

{000, 001, 010, 101, 110, 111}, (58)

are prohibited code combinations. The idea of the error detection consists in the following: if, during the fulfillment of the micro-operation (56), one of the prohibited code combinations (58) appears, this fact is an indication of an error. Note that in the erroneous transition

011 ⇒ 011, (59)

when the allowed code combination 011 passes into the same allowed code combination 011, we are dealing with an undetectable error. Let us consider now the different erroneous situations, which can appear at the fulfillment of the micro-operation (56):

011 ⇒ {011, 000, 001, 010, 101, 110, 111}. (60)

Among them, only the erroneous transition (59) is undetectable, because the code combination 011 is an allowed code combination. All the remaining erroneous transitions in (60) are detectable. Let us analyze the transition (59) from the arithmetical point of view. It is clear that the essence of the erroneous transition (59) consists in the repetition of the same code combination 011.
If we analyze this transition from the arithmetical point of view, we can see that it does not destroy the numerical information and does not influence the outcome of the arithmetical operations. Hence, the erroneous transition (59) does not belong to the errors of catastrophic character; it can, at most, only delay the data processing. All the remaining erroneous transitions in (60) destroy the numerical information and hence can lead to errors of catastrophic character.

The main conclusion following from this consideration is that the set of the 'catastrophic' code combinations in (58) coincides with the set of the detectable code combinations in (60). This means that all the 'catastrophic' transitions for the convolution are detectable. We emphasize once again that the undetectable transition (59) does not destroy numerical information and, therefore, from the arithmetical point of view cannot belong to the erroneous transitions of catastrophic character; this undetectable transition only delays the data processing. Thus, it follows from this consideration that, by using this idea, we can design a computer device for performing the convolution with the absolute (i.e. 100%) potential ability to detect all catastrophic transitions, which may appear at the realization of the convolution. A similar conclusion can be drawn for the other basic micro-operations.

The execution of any data processing algorithm in the Fibonacci processor, based on the basic micro-operations, is reduced to the sequential execution of certain basic micro-operations at each computation step. Because the checking circuits for the basic micro-operations have this 'absolute' error-detecting ability, it follows that it is possible to design an arithmetical self-checking Fibonacci processor, which has the 'absolute' (100%) error-detection ability for the 'catastrophic' errors arising in the high-reliability Fibonacci processor at flip-flop switching.

8.3.2. The hardware realization of the Fibonacci high-reliability processor

The Fibonacci high-reliability processor is based on the principle of 'cause–effect', described in the article [45]. The essence of the principle consists in the following. The initial information (the 'cause'), which is subjected to the data processing, is transformed into the 'result' by using some micro-operation. After that, we compare the 'result' (the 'effect') with the initial information (the 'cause') and check that the 'effect' fits its 'cause'. For example, at the fulfillment of the convolution for the binary combination 011 (the 'cause'), we get the new binary combination 100 (the 'effect'), which is the necessary condition for the fulfillment of the devolution. This means that the correct fulfillment of the convolution leads to the condition for the devolution; analogously, the correct fulfillment of the devolution leads to the condition for the convolution. It follows from this consideration that the micro-operations of convolution and devolution are mutually checking. These conclusions are true for all the above basic micro-operations listed in (53). In the 'register interpretation', the correspondence between the 'cause' and the 'effect' is established by using a 'checking flip-flop'.
The 'cause' sets the 'checking flip-flop' into the state 1, and the correct fulfillment of the micro-operation (the 'effect' fits its 'cause') resets the 'checking flip-flop' into the state 0. If the 'effect' does not fit its 'cause' (the micro-operation was performed incorrectly), then the 'checking flip-flop' remains in the state 1, which indicates an error. If we analyze the 'causes' and the 'effects' for every basic micro-operation, we can see that every 'effect' is the inversion of its 'cause'; that is, all micro-operations can be realized by means of the inversion of the flip-flops involved in the micro-operation.

The block diagram of the Fibonacci device realizing the principle of 'cause–effect' is shown in Fig. 2. The device in Fig. 2 consists of the information and check registers, which are connected by means of the logic 'cause' and 'effect' circuits. The code information entering the information register through the 'Input' is analyzed by the logic 'cause' circuit.

Figure 2. The block diagram of the Fibonacci device for the realization of the principle of 'cause–effect'.

Suppose that we need to perform the convolution for the binary combination in the information register. Let some flip-flops T_{k−1}, T_k, T_{k+1} of the information register be in the state 011, i.e. the condition for the convolution is satisfied for this group of flip-flops. Then the logic 'cause' circuit (in this example, the logic circuit for the convolution) writes the logic 1 into the corresponding flip-flop T_k of the check register. The written logic 1 results, through the feedback connection, in the inversion of the flip-flops T_{k−1}, T_k, T_{k+1} of the information register, so that their new states are 100. This means that the condition for the devolution is satisfied for this group of flip-flops. Then the logic 'effect' circuit (in this example, the logic circuit for the devolution) analyzes the states of the flip-flops T_{k−1}, T_k, T_{k+1} of the information register and resets the same flip-flop T_k of the check register to the initial state 0. Resetting the flip-flop T_k of the check register into the initial state 0 confirms that the 'cause' (011) fits its 'effect' (100), that is, the micro-operation of convolution has been performed correctly.

Hence, if we get the code word 00…0 in the check register after the end of all micro-operations, this means that all 'causes' fit their 'effects', that is, all the micro-operations are correct. If the check register contains at least one logic 1 in some flip-flop, this means that at least one basic micro-operation is not correct. The logic 1's in the flip-flops of the check register produce the error signal 1 at the output 'Error' of the device in Fig. 2. The signal 1 at the output 'Error' prohibits the use of the data at the 'Output' of the Fibonacci device in Fig. 2. The most important advantage of the 'cause–effect' check principle realized in the Fibonacci device in Fig. 2 is the detection of an error at the moment of its appearance. The correction of an error in a micro-operation is realized by the repetition of this micro-operation.
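A behavioral model helps to see how the check register works. The sketch below is only a simplified software analogy of the device in Fig. 2 (the function and parameter names are ours, and a 'stuck' flip-flop is modeled crudely): it arms a check bit on the 'cause' 011, inverts the three flip-flops, and disarms the check bit only when the 'effect' 100 actually appears.

```python
# A simplified behavioral analogy (not the hardware of Fig. 2) of the 'cause-effect'
# check for the convolution: the check bit is set by the 'cause' circuit and is reset
# by the 'effect' circuit only if the expected pattern is really present.

def checked_convolution(info, k, stuck=None):
    """Convolution over info[k-1 .. k+1]; 'stuck' optionally marks a flip-flop that fails to switch."""
    check = 0
    if info[k - 1:k + 2] == [0, 1, 1]:          # 'cause': the condition of the convolution
        check = 1                               # arm the checking flip-flop
        for j in (k - 1, k, k + 1):
            if j != stuck:
                info[j] ^= 1                    # invert the flip-flops involved
        if info[k - 1:k + 2] == [1, 0, 0]:      # 'effect': the expected result is present
            check = 0                           # the 'cause' fits its 'effect'
    return info, check                          # check == 1 raises the 'Error' signal

print(checked_convolution([0, 0, 1, 1, 0], 2))            # ([0, 1, 0, 0, 0], 0): correct convolution
print(checked_convolution([0, 0, 1, 1, 0], 2, stuck=2))   # a flip-flop did not switch: error flag 1
```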
Hence, the above approach, based on the principle of 'cause–effect', permits detecting, with a 100% guarantee, all 'catastrophic' errors arising at the moment of flip-flop switching and then correcting the data by means of repetition. A more detailed description of all the benefits of this principle of implementation of the high-reliability Fibonacci processor is given in the article [45]. The article stresses that 'this approach can lead to designing a new class of high-reliable computers and processors, which provide a significant increase of the reliability of information processing in computer systems and the creation of new methods of information processing.'

8.3.3. US research in the field of Fibonacci computers

It is necessary to note that, along with the Soviet studies on 'Fibonacci arithmetic' and 'Fibonacci computers', similar studies were carried out in the same period in the United States (University of Maryland) under the scientific supervision of Prof. Robert Newcomb [53–57]. The studies of the American, Soviet and Ukrainian scientists in this field confirm that, since the 1970s, the notions of 'Fibonacci code', 'Fibonacci arithmetic' and 'Fibonacci computer' have become widely known in the world scientific and technical literature.

8.4. Codes of the golden p-proportions

8.4.1. Definition

The binary code of a real number A, which is determined by the formulas (2) $A = \sum_i a_i 2^i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$; $a_i \in \{0, 1\}$) and (3) $2^i = 2^{i-1} + 2^{i-1} = 2 \times 2^{i-1}$, admits the following generalization. Let us consider the set of the following standard line segments:

$\{\Phi_p^n, \Phi_p^{n-1}, \ldots, \Phi_p^0 = 1, \Phi_p^{-1}, \ldots, \Phi_p^{-k}, \ldots\}$, (61)

where $\Phi_p$ is the golden p-ratio, the positive root of the golden p-ratio equation

$x^{p+1} = x^p + 1$. (62)

By using (61), we obtain the following positional method of real number representation, introduced in [47–49]:

$A = \sum_i a_i \Phi_p^i$, (63)

where A is a positive real number, $a_i \in \{0, 1\}$ is the bit of the ith digit, $\Phi_p^i$ is the weight of the ith digit, $\Phi_p$ is the base of the numeral system (63), $i = 0, \pm 1, \pm 2, \pm 3, \ldots$, and $p = 0, 1, 2, 3, \ldots$ is a given integer. The general theory of the codes of the golden p-proportions (63) is described in the book [24].

8.4.2. Partial cases of the codes of the golden p-proportions

First of all, we note that the formula (63) sets forth a theoretically infinite number of binary positional representations of real numbers, because every $p = 0, 1, 2, 3, \ldots$ 'generates' its own method of binary positional number representation of the form (63). The base of a numeral system is one of the fundamental notions of a positional numeral system. The analysis of the sum (63) shows that the golden p-ratio $\Phi_p$, the positive root of the golden p-ratio equation (62), is the base of the numeral system (63). Note that, except for the case p = 0 ($\Phi_{p=0} = 2$), all the remaining golden p-proportions $\Phi_p$ are irrational numbers. It follows from this fact that the codes of the golden p-proportions (63) are binary numeral systems with the irrational bases $\Phi_p$ for the cases p > 0. Note that for the case p = 0 the codes of the golden p-proportions (63) reduce to the classic binary code (2), and for the case p = 1 to Bergman's system (1). It is clear that Bergman's system (1) has the greatest practical significance, because this numeral system with the irrational base $\Phi = \frac{1 + \sqrt{5}}{2}$ is the simplest for technical realization.
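As a small numerical illustration of the sum (63), the following sketch (ours; it uses a greedy choice of digits, which is one standard way to obtain such an expansion and is not necessarily the procedure of [24]) computes the golden p-ratio as the positive root of (62) and expands a real number over the powers of $\Phi_p$.

```python
# An illustrative sketch of the code of the golden p-proportion (63): find the positive
# root of x**(p+1) = x**p + 1 by bisection and expand a number greedily over its powers.

def golden_p_ratio(p, tol=1e-12):
    lo, hi = 1.0, 2.0                            # the root lies in [1, 2] for p = 0, 1, 2, ...
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid ** (p + 1) - mid ** p - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def golden_p_code(A, p, high=8, low=-12):
    """Greedy expansion A ~ sum of a_i * Phi_p**i over i = high, ..., low."""
    phi = golden_p_ratio(p)
    digits, remainder = {}, A
    for i in range(high, low - 1, -1):
        if phi ** i <= remainder:
            digits[i] = 1
            remainder -= phi ** i
        else:
            digits[i] = 0
    return digits, remainder                     # the remainder is the truncation error

digits, err = golden_p_code(10.0, p=1)           # p = 1: Bergman's system, base (1 + sqrt(5)) / 2
print([i for i, a in digits.items() if a], round(err, 9))   # [4, 2, -2, -4], ~0.0
```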
8.5. Application of the codes of the golden p-proportions in self-correcting analog-to-digital and digital-to-analog converters

8.5.1. The 'binary' resistive divisor

In measurement practice, the so-called resistive divisors, intended for dividing electric currents and voltages in a given ratio, are widely used. One variant of such a divisor is shown in Fig. 3.

Figure 3. The resistive divisor.

The resistive divisor in Fig. 3 consists of the 'horizontal' resistors of the kinds R1 and R3 and the 'vertical' resistors R2. The resistors of the divisor are connected between themselves by the 'connecting points' 0, 1, 2, 3, 4. Each point connects three resistors, which together form a resistor section. Note that Fig. 3 shows a resistive divisor consisting of five resistor sections; in general, the number of resistor sections can be extended ad infinitum.

First of all, we note that the parallel connection of the resistors R2 and R3 to the right of the 'connecting point' 0 and to the left of the 'connecting point' 4 can be replaced by an equivalent resistor, whose resistance can be calculated according to the well-known law for the parallel connection of two resistors R2 and R3 (see Fig. 3):

$R_{e1} = \frac{R_2 \times R_3}{R_2 + R_3}$. (64)

Then it is easy to calculate the equivalent resistance of the resistor section to the right of the 'connecting point' 1 and to the left of the 'connecting point' 3:

$R_{e2} = R_1 + R_{e1}$. (65)

Depending on the choice of the resistance values of the resistors R1, R2, R3, we can obtain different coefficients of current or voltage division. Let us consider now the 'binary' resistive divisor, corresponding to p = 0. In this case, the resistive divisor consists of the following resistors: $R_1 = R$; $R_2 = R_3 = 2R$, where R is some standard resistance value. For this case, the expressions (64) and (65) take the following values:

$R_{e1} = R$; $R_{e2} = 2R$. (66)

Then, taking (66) into consideration, we can prove that the equivalent resistance of the 'binary' resistive divisor to the left or to the right of any 'connecting point' 0, 1, 2, 3, 4 is equal to 2R. This means that the equivalent resistance of the resistive divisor at the 'connecting points' 0, 1, 2, 3, 4 can be calculated as the resistance of the parallel connection of three resistors of the value 2R. By using the laws of electric circuits, we can calculate the equivalent resistance of the 'binary' resistive divisor at each 'connecting point' 0, 1, 2, 3, 4 as follows:

$R_{e3} = \frac{2}{3} R$. (67)

Let us connect now the generator of a standard electric current I to one of the 'connecting points', for example, to the point 2. Then, according to Ohm's law, the following electric voltage appears at this point:

$U = \frac{2}{3} R I$. (68)

Let us calculate now the electric voltages at the 'connecting points' 3 and 1, which are adjacent to the point 2. It is easy to show that the voltage transmission coefficient between adjacent 'connecting points' is equal to $\frac{1}{2}$. This means that the 'binary' resistive divisor fits very well to the binary system, and this fact is the reason for the wide use of the 'binary' resistive divisor of Fig. 3 in modern 'binary' digital-to-analog and analog-to-digital converters (DAC and ADC).
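For the 'binary' case, the ladder relations (64)–(68) and the 1/2 transmission coefficient can be checked with a few lines of arithmetic. The sketch below is a plain numerical check (ours, not a circuit simulation of the actual figure).

```python
# A numerical check of the 'binary' R-2R divisor relations (64)-(68).

def parallel(a, b):
    return a * b / (a + b)

R = 1.0
R1, R2, R3 = R, 2 * R, 2 * R              # the 'binary' choice of the divisor resistors

Re1 = parallel(R2, R3)                     # (64): the end of the ladder collapses to R
Re2 = R1 + Re1                             # (65): one more section gives 2R again
Re3 = parallel(parallel(R2, Re2), Re2)     # (67): three 2R branches in parallel = (2/3) R

I = 1.0
U = Re3 * I                                # (68): the voltage at the driven connecting point
I_side = U / Re2                           # the current flowing into one side of the ladder
U_adjacent = I_side * parallel(R2, Re2)    # beyond R1 the ladder again looks like R
print(Re1, Re2, Re3, U_adjacent / U)       # 1.0, 2.0, 0.666..., 0.5 -> transmission coefficient 1/2
```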
8.5.2. The 'golden' resistive divisors and their electric properties

Let us take the values of the resistors of the 'golden' resistive divisor in Fig. 3 as follows:

$R_1 = \Phi_p^{-p} R$; $R_2 = \Phi_p^{p+1} R$; $R_3 = \Phi_p R$, (69)

where $\Phi_p$ is the golden p-ratio, $p \in \{0, 1, 2, 3, \ldots\}$. It is clear that the 'golden' resistive divisor in Fig. 3 defines an infinite number of different 'golden' resistive divisors, because every p 'generates' a new 'golden' resistive divisor. In particular, for the case p = 0 the value of the golden 0-ratio is $\Phi_0 = 2$ and the 'golden' resistive divisor reduces to the classic 'binary' resistive divisor, based on the resistors R–2R. For the case p = 1, the resistors R1, R2, R3 in Fig. 3 take the following values:

$R_1 = \Phi^{-1} R = 0.618R$; $R_2 = \Phi^2 R = 2.618R$; $R_3 = \Phi R = 1.618R$. (70)

Let us show that the 'golden' resistive divisors in Fig. 3 with the resistors R1, R2, R3 given by (69) and (70) have the following unique electric properties. To find these properties, we will use the following fundamental mathematical relations for the golden p-proportions $\Phi_p$:

$\Phi_p = 1 + \Phi_p^{-p}$, (71)

$\Phi_p^{p+2} = \Phi_p^{p+1} + \Phi_p$, (72)

which take the following forms for the cases p = 0 ($\Phi_{p=0} = 2$) and p = 1 ($\Phi_{p=1} = \Phi = \frac{1+\sqrt{5}}{2} = 1.618$), respectively:

p = 0: $2 = 1 + 1$; $2^2 = 2 + 2$, (73)

p = 1: $\Phi = 1 + \Phi^{-1}$; $\Phi^3 = \Phi^2 + \Phi$. (74)

By using the identity (72), we can deduce the value of the equivalent resistance of the resistor circuit of the 'golden' resistive divisor in Fig. 3 to the left and to the right of the 'connecting points' 0 and 4. In the general case of p (p ≥ 1), the formula (64) looks as follows:

$R_{e1} = \frac{R_2 \times R_3}{R_2 + R_3} = \frac{\Phi_p^{p+1} R \times \Phi_p R}{\Phi_p^{p+1} R + \Phi_p R} = \frac{\Phi_p^{p+2} R^2}{(\Phi_p^{p+1} + \Phi_p) R} = R$. (75)

Note that we have simplified the formula (75) by using the mathematical identity (72). By using (65) and (71), we can calculate the value of the equivalent resistance $R_{e2}$ as follows:

$R_{e2} = \Phi_p^{-p} R + R = (\Phi_p^{-p} + 1) R = \Phi_p R$. (76)

Thus, according to (76), the equivalent resistance of the resistive circuit of the 'golden' resistive divisor in Fig. 3 to the left or to the right of the 'connecting points' 0, 1, 2, 3, 4 is equal to $\Phi_p R$, where $\Phi_p$ is the golden p-proportion. This fact can be used for the calculation of the equivalent resistance $R_{e3}$ of the 'golden' resistive divisor at the 'connecting points' 0, 1, 2, 3, 4. In fact, the equivalent resistance $R_{e3}$ can be calculated as the resistance of the electrical circuit consisting of the parallel connection of the 'vertical' resistor $R_2 = \Phi_p^{p+1} R$ and the two 'lateral' resistances of the value $\Phi_p R$. But because, according to (75), the equivalent resistance of the parallel connection of the resistors $R_2 = \Phi_p^{p+1} R$ and $R_3 = \Phi_p R$ is equal to R, the equivalent resistance $R_{e3}$ of the divisor at each 'connecting point' can be calculated by the formula

$R_{e3} = \frac{\Phi_p R \times R}{\Phi_p R + R} = \frac{\Phi_p R^2}{(\Phi_p + 1) R} = \frac{1}{1 + \Phi_p^{-1}} R$. (77)

Note that for the case p = 0 (the 'binary' resistive divisor) we have $\Phi_{p=0} = 2$, and the expression (77) reduces to (67). For the case p = 1, the formula (77) reduces to the following formula:

$R_{e3} = \frac{1}{1 + \Phi^{-1}} R = \frac{1}{\Phi} R = \Phi^{-1} R$. (78)

Let us calculate now the voltage transmission coefficient between adjacent 'connecting points' of the 'golden' resistive divisor. For this purpose, we connect the generator of a standard electric current I to one of the 'connecting points', for example, to the point 2. Then, according to Ohm's law, the following electrical voltage appears at this point:

$U = \frac{1}{1 + \Phi_p^{-1}} R I$. (79)

Note that for the case p = 0 we have $\Phi_{p=0} = 2$, and the formula (79) reduces to the formula

$U = \frac{1}{1 + 2^{-1}} R I = \frac{1}{1 + \frac{1}{2}} R I = \frac{2}{3} R I$, (80)

which coincides with the formula (68) for the 'binary' resistive divisor.
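The identities (75)–(77) can be verified numerically for several values of p. The sketch below (ours) computes $\Phi_p$ by bisection and checks that $R_2 \parallel R_3 = R$, that one section gives $\Phi_p R$, and that the node resistance equals $R/(1 + \Phi_p^{-1})$; for p = 0 it reproduces the 'binary' values.

```python
# A numerical check of the 'golden' divisor properties (75)-(77) for p = 0, 1, 2.

def golden_p_ratio(p, tol=1e-12):
    lo, hi = 1.0, 2.0                          # positive root of x**(p+1) = x**p + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if mid ** (p + 1) - mid ** p - 1 < 0 else (lo, mid)
    return (lo + hi) / 2

def parallel(a, b):
    return a * b / (a + b)

R = 1.0
for p in (0, 1, 2):
    phi = golden_p_ratio(p)
    R1, R2, R3 = phi ** (-p) * R, phi ** (p + 1) * R, phi * R
    Re1 = parallel(R2, R3)                     # (75): should equal R
    Re2 = R1 + Re1                             # (76): should equal phi * R
    Re3 = parallel(parallel(R2, Re2), Re2)     # (77): should equal R / (1 + 1 / phi)
    print(p, round(phi, 6),
          round(Re1 / R, 6), round(Re2 / (phi * R), 6),
          round(Re3 / (R / (1 + 1 / phi)), 6))  # the last three ratios are all 1.0
```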
Let us calculate now the electrical voltage at the adjacent 'connecting points' 3 and 1. The voltages at the points 3 and 1 can be calculated as the result of applying the voltage U, given by (79), to the resistive circuit consisting of the series connection of the 'horizontal' resistor $R_1 = \Phi_p^{-p} R$ and the resistive circuit with the equivalent resistance R. Then, for this case, the electrical current I that appears in the resistive circuit to the left and to the right of the 'connecting point' 2 is equal to

$I = \frac{U}{R_1 + R} = \frac{U}{(\Phi_p^{-p} + 1) R} = \frac{U}{\Phi_p R}$. (81)

If we multiply the electrical current (81) by the equivalent resistance R, we get the following value of the electrical voltage at the adjacent 'connecting points' 3 and 1:

$\frac{U}{\Phi_p}$. (82)

This means that the voltage transmission coefficient between adjacent 'connecting points' of the 'golden' resistive divisor in Fig. 3 is equal to the reciprocal of the golden p-proportion $\Phi_p$! Thus, the 'golden' resistive divisors in Fig. 3, based on the golden p-proportions $\Phi_p$, are quite real electrical circuits. It is clear that the above theory of the 'golden' resistive divisors [16] can become a new source for the development of 'digital metrology' and of analog-to-digital and digital-to-analog converters. Note that the above theory of the 'golden' resistive divisor was described for the first time in the author's 1978 article [50].

8.5.3. Self-correcting 'golden' ADC

There is the problem of guaranteeing temperature and long-term stability for high-reliability control systems. Because ADCs and DACs are very important devices of high-reliability control systems for many complicated technological objects, designing self-correcting ADCs and DACs is one of the most important areas of application of the Fibonacci and golden ratio codes. While the faults and failures of the digital components of computers and microprocessors (e.g. flip-flops and logic gates) are the main cause of the non-reliability of digital systems, the deviations of the parameters of the analog elements of ADCs and DACs from their nominal values are the main cause of the informational instability of measurement systems. These deviations depend on different internal and external factors ('aging' of elements, temperature influences, technological errors, etc.) and are usually 'slow' functions of time. When designing precise measurement systems, there arises the problem of decreasing the requirements on the technological accuracy of the analog elements and of eliminating such difficult technological procedures as laser 'tuning' of the analog elements. The solution of this problem is realized by applying the principle of self-correction. The Fibonacci and golden proportion codes allow applying the principle of self-correction to improve the accuracy and metrological stability of ADCs and DACs. In the realization of the 'golden' and Fibonacci self-correcting ADCs and DACs, the most important advantage is the correction of the non-linearity of the transfer function of the 'golden' resistive divisor.

In the Special Design Bureau 'Module' of the Vinnytsia Technical University (Ukraine), under the author's scientific leadership, several modifications of self-correcting ADCs and DACs were developed, in which a special procedure for correcting the deviations of the digit weights from their ideal values (Fibonacci numbers or the golden ratio powers) was realized. The self-correcting 17-digit ADC, based on the Fibonacci code, was one of the best engineering developments designed and produced in the Special Design Bureau 'Module' [51, 52] (Fig. 4).
Figure 4. The 17-digit self-correcting ADC.

The ADC in Fig. 4 had the following technical parameters: number of digits, 18 (17 digital and one sign digit); conversion time, 15 ms; total error, 0.006%; linearity error, 0.003%; frequency range, 25 kHz; operating temperature range, 20 ± 30°C. The correction system built into the ADC allows correcting the zero drift and the linearity of the AD conversion, which is done by traditional methods, and, most importantly, correcting the deviations of the digit weights from their nominal values (Fibonacci numbers or powers of the golden proportion). According to the opinion of well-known Soviet metrological firms, the Soviet electronic industry did not produce ADCs with such high technical parameters at that time.

9. CONCLUSIONS, BASIC CONCEPTS AND THE MAIN SCIENTIFIC RESULTS

Mission-critical applications. At the present time, computer science and digital metrology are passing to a new stage of their development, the stage of designing computing and measuring systems for mission-critical applications. This puts forward new requirements for ensuring the informational reliability of such systems. The most important requirement is to prevent the occurrence of 'false signals' at the output of the mission-critical systems, which can lead to technological disasters.

'Philosophy' of error detection for the error-correcting codes. Modern methods of providing informational reliability of mission-critical systems (in particular, the use of error-correcting codes) do not always provide the required informational reliability of the mission-critical systems. In particular, the theory of ECC is mainly focused on the detection and correction of the errors of low multiplicity (e.g. single-bit and double-bit errors) as the most probable. With regard to the errors of high multiplicity, the theory of ECC simply ignores them because of their low probability; this follows from the model of the 'symmetrical channel'. Such a 'philosophy' of error detection is absolutely unacceptable for mission-critical systems, because these undetectable errors can be the source of 'false signals' at the output of mission-critical systems, which can lead to enormous social and technological disasters.

Paradox of Hamming code. The main paradox of the Hamming code and its analogs (for example, the Hsiao code) consists in the fact that the Hamming and Hsiao codes perceive many-bit errors of odd multiplicity (3, 5, 7, 9, …) as single-bit errors and, in these cases, begin a 'false correction' by adding new errors to the erroneous code word. That is, in this case, the Hamming and Hsiao codes are turned into anti-ECC, because they 'ruin' the Hamming and Hsiao code words. This 'paradoxical' property of the Hamming and Hsiao codes is well known to experts in the field of ECC, but many consumers do not always know about it. In such cases, the main argument for customers is that the errors of large multiplicity are unlikely, but such arguments are unacceptable for mission-critical applications.

Row hammer effect is a new phenomenon in the field of electronic memory. The main reason for this phenomenon is the microminiaturization of electronic memory, which leads to mutual electrical interaction between nearby memory rows. This interaction alters the contents of nearby memory rows that were not addressed in the original memory access.
No effective methods of fighting against the row hammer effect have been proposed so far. Possibly, the only reasonable proposal is to introduce restrictions on the microminiaturization of electronic memory. But then the question arises of how we should design nano-electronic memory.

'Trojan horse' of the binary system. The prominent American scientist, physicist and mathematician John von Neumann (1903–1957), together with his colleagues Goldstine and Burks from the Princeton Institute for Advanced Study, after a careful analysis of the strengths and weaknesses of the first electronic computer ENIAC, gave strong preference to the binary system as a universal way of coding data in electronic computers. However, this proposal conceals a great danger for mission-critical systems. The classical binary code has zero code redundancy, which excludes any possibility of detecting errors in computer structures. This danger was called the 'Trojan horse' of the binary system by the Russian academician Yaroslav Khetagurov. Because of the 'Trojan horse' phenomenon, humanity becomes a hostage of the binary system in mission-critical applications. From this follows the conclusion that the binary system is unacceptable for designing computing and measuring systems for mission-critical applications.

Bergman's system, introduced in 1957 by the American 12-year-old wunderkind George Bergman, is an unprecedented case in the history of mathematics. The mathematical discovery of the young American mathematician returns mathematics to the Babylonian positional numeral system, that is, to the initial period in the development of mathematics, when numeral systems and the rules of performing basic arithmetic operations stood at the center of mathematics. But the most important fact is that the famous irrational number $\Phi = \frac{1 + \sqrt{5}}{2}$ (the golden ratio) is the base of Bergman's system, which puts irrational numbers in the first position among numbers. It can be argued that Bergman's system is the greatest modern mathematical discovery in the field of numeral systems, which changes our ideas about numeral systems and alters both number theory and computer science.

The 'golden' number theory and new properties of natural numbers are the first important consequence following from Bergman's system. For many mathematicians in the field of number theory, it is a great surprise that new properties of natural numbers (Z-property, D-property, F-code, L-code) were discovered in the 21st century, that is, 2.5 millennia after the writing of Euclid's Elements, in which the systematic study of the properties of natural numbers started. Bergman's system is the source of the 'golden' number theory, which once again emphasizes the fundamental nature of the mathematical discovery of George Bergman.

Ternary mirror-symmetrical numeral system and the new ternary mirror-symmetrical arithmetic are the main applied scientific results following from Bergman's system. These results alter our ideas about the ternary numeral system. The property of mirror symmetry is the main checking property, which allows detecting errors in all arithmetical operations.

Fibonacci p-codes and Fibonacci arithmetic based on the basic micro-operations.
The new computer arithmetic consists in the sequential execution of the so-called 'basic micro-operations'. The errors are detected by a built-in error-detection device simultaneously with the execution of the micro-operations, at the moment the errors occur, which ensures the high informational reliability of the arithmetic device for mission-critical applications.

Codes of the golden p-proportions, 'golden' resistive divisors and self-correcting ADC and DAC. The codes of the golden p-proportions with the base $\Phi_p$ (the positive root of the algebraic equation $x^{p+1} - x^p - 1 = 0$, $p = 0, 1, 2, 3, \ldots$) are a wide generalization of the binary system (p = 0) and of Bergman's system (p = 1). The 'golden' resistive divisors, based on the golden p-proportions $\Phi_p$, have unique electrical properties, which allow us to design self-correcting analog-to-digital and digital-to-analog converters. The metrological parameters of such ADCs and DACs remain unchanged under temperature changes and the aging of elements, which is important for mission-critical applications.

The final conclusion. The above theory of numeral systems with irrational bases is a new direction in the field of coding theory, intended for increasing the informational reliability and noise immunity of specialized computing and measuring systems. This direction does not set itself the task of replacing the classical binary system in those cases where the use of the binary system does not threaten the appearance of technological disasters and where informational reliability and noise immunity can be ensured by traditional methods. The main task of this direction is preventing, or significantly reducing the probability of, 'false signals' at the output of information systems that can lead to social or technological disasters. This scientific direction is at its initial stage, and its development can lead to new technical solutions in the field of computer science and digital metrology.

REFERENCES

1. Kharkevich, A.A. (1963) Fighting against Noises. State Publishing House of Physical and Mathematical Literature, Moscow (Russian).
2. MacWilliams, F.J. and Sloane, N.J.A. (1978) The Theory of Error-Correcting Codes. North-Holland Publishing Company.
3. Mission critical. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Mission_critical.
4. Hamming code. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Hamming_code.
5. Hsiao, M.Y. (1970) A class of optimal minimum odd-weight-column SEC-DED codes. IBM J. Res. Develop., 14, 395–401.
6. Petrov, K.A. Investigation of the Characteristics of Noise-Immune Codes Used in Submicron Static RAMs (Russian). http://gigabaza.ru/doc/194118.html.
7. Row hammer. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Row_hammer.
8. Bashmakova, J.G. and Youshkevich, A.P. (1951) An Origin of the Numeral Systems. Encyclopedia of Elementary Arithmetics. Book 1. Arithmetic. Gostekhizdat, Moscow, Leningrad (Russian).
9. Von Neumann architecture. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Von_Neumann_architecture.
10. Khetagurov, J.A. (2009) Ensuring the national security of real-time systems. BC/NW, Vol. 2, 11.1 (Russian). http://network-journal.mpei.ac.ru/cgi-bin/main.pl?l=ru&n=15&pa=11&ar=1.
11. Kautz, W. (1966) Error-Correcting Codes and Their Implementation in Digital Systems. In Methods of Introducing Redundancy for Computing Systems (transl. from English). Soviet Radio, Moscow (Russian).
12. Tolstyakov, V.S., Nomokonov, V.N., Kartsovsky, M.G. et al. (1972) Detection and Correction of Errors in Discrete Devices (ed. V.S. Tolstyakov). Soviet Radio, Moscow (Russian).
13. Bergman, G. (1957) A number system with an irrational base. Math. Mag., 31. doi:10.2307/3029218. JSTOR 3029218.
14. Golden ratio base. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Golden_ratio_base.
15. Phi number system. From Wolfram MathWorld. http://mathworld.wolfram.com/PhiNumberSystem.html.
16. Knuth, D.E. (1997) The Art of Computer Programming. Volume 1. Fundamental Algorithms (3rd edn). Addison-Wesley, Massachusetts.
17. Stakhov, A.P. (2009) The Mathematics of Harmony. From Euclid to Contemporary Mathematics and Computer Science (assisted by Scott Olsen). World Scientific, New Jersey, London, Singapore, Beijing, Shanghai, Hong Kong, Taipei, Chennai.
18. Stakhov, A.P. (2002) Brousentsov's ternary principle, Bergman's number system and ternary mirror-symmetrical arithmetic. Comput. J., 45, 221–236.
19. Stakhov, A.P. (2015) The 'golden' number theory and new properties of natural numbers. Br. J. Math. Comput. Sci., 11, 1–15.
20. Stakhov, A.P. (2016) The importance of the golden number for mathematics and computer science: exploration of the Bergman's system and the Stakhov's ternary mirror-symmetrical system (numeral systems with irrational bases). Br. J. Math. Comput. Sci., 18, 1–34.
21. Pospelov, D.A. (1970) Arithmetic Foundations of Computers. High School, Moscow (Russian).
22. Polya, G. (1962, 1965) Mathematical Discovery. On Understanding, Learning and Teaching Problem Solving, Vols. I and II. Ishi Press, New York, London.
23. Stakhov, A.P. (1977) Introduction into Algorithmic Measurement Theory. Soviet Radio, Moscow (Russian).
24. Stakhov, A.P. (1984) Codes of the Golden Proportion. Radio and Communication, Moscow (Russian).
25. Stakhov, A. (1972) Synthesis of Optimal Algorithms for Analog-to-Digital Conversion. Doctoral Thesis, Kiev Institute of Civil Aviation Engineers (Russian).
26. Vorobyov, N.N. (1961) Fibonacci Numbers. Nauka, Moscow (Russian).
27. Hoggatt, V.E. (1969) Fibonacci and Lucas Numbers. Houghton-Mifflin, Palo Alto, CA.
28. Koshy, T. (2017) Fibonacci and Lucas Numbers with Applications (2nd edn). John Wiley & Sons.
29. Stakhov, A.P. (1974) Redundant binary positional numeral systems. In Homogeneous Digital Computer and Integrated Structures, No. 2. Taganrog Radio University (Russian).
30. Stakhov, A.P. (1975) A use of natural redundancy of the Fibonacci number systems for computer systems control. Automation and Computer Systems, No. 6 (Russian).
31. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of the USA No. 4187500.
32. Device for Reduction of p-Fibonacci Codes to the Minimal Form. Patent Certificate of the USA No. 4290051.
33. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of England No. 1543302.
34. Device for Reduction of p-Fibonacci Codes to the Minimal Form. Patent Certificate of England No. 2050011.
35. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of Germany No. 2732008.
36. Device for Reduction of p-Fibonacci Codes to the Minimal Form. Patent Certificate of Germany No. 2921053.
37. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of Japan No. 1118407.
38. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificates of France Nos. 7722036 and 2359460.
39. Device for Reduction of p-Fibonacci Codes to the Minimal Form. Patent Certificates of France Nos. 7917216 and 2460367.
40. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of Canada No. 1134510.
41. Device for Reduction of p-Fibonacci Codes to the Minimal Form. Patent Certificate of Canada No. 1132263.
42. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of Poland No. 108086.
43. Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Patent Certificate of DDR No. 150514.
44. Stakhov, A.P. (2016) Fibonacci p-codes and codes of the golden p-proportions: new informational and arithmetical foundations of computer science and digital metrology for mission-critical applications. Br. J. Math. Comput. Sci., 17, 1–49.
45. Luzhetsky, V.A., Stakhov, A.P. and Wachowski, V.G. (1989) Noise-Immune Fibonacci Computers. In the brochure 'Noise-Immune Codes. Fibonacci Computer'. Knowledge, Moscow. Series 'New Life, Science and Technology' (Russian).
46. Ancient Egyptian mathematics. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Ancient_Egyptian_mathematics#Multiplication_and_division.
47. Stakhov, A.P. (1978) Fibonacci and 'golden' ratio codes. In Fault-Tolerant Systems and Diagnostics FTSD-78, Gdansk.
48. Stakhov, A.P. (1980) The golden mean in the digital technology. Autom. Comput. Syst., No. 1, pp. 27–33 (Russian).
49. Stakhov, A.P. (1981) Perspectives of the use of numeral systems with irrational bases in the technique of analog-to-digital and digital-to-analog conversion. Measurements, Control, Automation, Moscow, No. 6 (Russian).
50. Stakhov, A.P. (1978) Digital metrology on the basis of the Fibonacci codes and golden proportion codes. In Contemporary Problems of Metrology. Machine-Building Institute, Moscow (Russian).
51. Stakhov, A.P., Azarov, A.D., Moiseev, V.I., Martsenyuk, V.P. and Stejskal, V.Y. (1986) The 18-bit self-correcting ADC. Devices and Control Systems, No. 1.
52. Stakhov, A.P., Azarov, A.D., Moiseev, V.I. and Stejskal, V.Y. (1989) Analog-to-digital converters on the basis of redundant numeral systems. In the brochure 'Noise-Immune Codes. Fibonacci Computer'. Knowledge, Moscow. Series 'New Life, Science and Technology' (Russian), pp. 40–48.
53. Ligomenides, P. and Newcomb, R. (1984) Multilevel Fibonacci conversion and addition. Fibonacci Q., 22.
54. Ligomenides, P. and Newcomb, R. (1981) Equivalence of some binary, ternary, and quaternary Fibonacci computers. In Proc. Eleventh Int. Symp. on Multiple-Valued Logic, Norman, Oklahoma.
55. Ligomenides, P. and Newcomb, R. (1981) Complement representations in the Fibonacci computer. In Proc. Fifth Symp. on Computer Arithmetic, Ann Arbor, MI.
56. Newcomb, R. (1974) Fibonacci numbers as a computer base. In Proc. Second Inter-American Conf. on Systems and Informatics, Mexico City.
57. Hoang, V.D. (1979) A Class of Arithmetic Burst-Error-Correcting Codes for the Fibonacci Computer. PhD Thesis, University of Maryland.

Abstract This article deals with a wide range of issues related to the design of specialized computing and measuring systems for mission-critical applications, in which the requirements of reliability and noise immunity are put to the fore. Among these issues, we consider: paradox of the Hamming code, the ‘row hammer effect’, the ‘Trojan horse’ of the binary system. It is discussed the issue of the use of numeral systems with irrational bases (Bergman’s system, ternary mirror-symmetric arithmetic, Fibonacci p-codes and codes of the golden p-proportions) for the design of specialized computing and measuring systems for mission-critical applications. 1. INTRODUCTION As it is known, digital computer technology is largely obliged to the union of two outstanding inventions of the human intellect, the Boolean (or two-alternative) logic and the binary system. The Boolean logic and theory of digital automata, first of all, are studying perfect or deterministic operations, realized by idealized logical schemes. The classical binary system, in turn, describes the processes in the idealized arithmetical devices of digital computers. However, for real conditions, all digital structures are exposed to various internal and external influences or ‘noises,’ which lead to errors in digital structures and distortion of data at their outputs. The so-called ‘Fighting against noises’ [1] is becoming one of the most important problems of computer science. For the first time, this problem became especially acute at the designing of serial data transmission systems. Within the framework of the serial data transmission systems, a well-known theory of error-correcting codes has emerged [2]. At the present time, the computer science is passing to new stage of their development, to the stage of designing computing and informational systems for mission-critical applications. In the Wikipedia article ‘Mission critical’ [3], we read: ‘Mission critical refers to any factor of a system (components, equipment, Personnel, process, procedure, software, etc.) that is essential to business operation or to an organization. Failure or disruption of mission critical factors will result in serious impact on business operations or upon an organization, and even can cause social turmoil and catastrophes. Therefore, it is extremely critical to the organization’s ‘mission’ (to avoid Mission Critical Failures). Mission critical system is a system whose failure may result in the failure of some goal-directed activity. Mission essential equipment and mission critical application are also known as mission-critical system. Examples of mission critical systems are: an online banking system, railway/aircraft operating and control systems, electric power systems, and many other computer systems that will adversely affect business and society seriously if downed. A good example of a mission critical system is a navigational system for a spacecraft.’ Designing of the mission-critical systems puts forward new requirements for ensuring noise immunity and informational reliability of such systems. The most important requirement is to prevent the occurrence of ‘false signals’ at the output of the mission-critical systems what can lead to technological disasters. Modern methods of providing noise immunity and informational reliability of mission-critical systems (in particular, the use of error-correcting codes [1, 2]) do not always provide the required informational reliability of the mission-critical systems. 
By continuing his arguments, Kharkevich comes to the following conclusion [1], which can be considered as the main hypothesis and goal of the theory of ECC: 'Thus, in the case of independent errors, we should first of all detect and correct errors of low multiplicity as the most probable.' Thus, according to Kharkevich, the theory of ECC is focused mainly on the detection and correction of the errors of low multiplicity as the most probable.
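As a small numerical aside (not part of the original argument; the code and all names in it are the present editor's illustration), the following Python sketch computes the probability of exactly k independent bit errors in an n-bit word under the 'symmetrical channel' model; the word length n = 72 and the bit-error probability p0 = 1e-4 are purely illustrative assumptions. It shows how quickly this probability falls as the multiplicity k grows, which is exactly why classical ECC theory concentrates on low-multiplicity errors.

from math import comb

# Probability that exactly k of n independently transmitted bits are flipped
# in a memoryless binary symmetric channel with bit-error probability p0.
def p_multiplicity(n, k, p0):
    return comb(n, k) * p0 ** k * (1 - p0) ** (n - k)

n, p0 = 72, 1e-4     # illustrative values only, not taken from the article
for k in range(5):
    print(f"P(exactly {k} errors in {n} bits) = {p_multiplicity(n, k, p0):.3e}")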
With regard to the errors of high multiplicity, the theory of ECC simply ignores them because of their low probability; this follows from the model of the 'symmetrical channel'. But the concept of 'low-probability errors' does not exclude the possibility of their unexpected appearance. For mission-critical systems, the appearance of errors of large multiplicity may be the cause of immense technological disasters. This is the main problem that occurs when designing highly reliable computer systems for mission-critical applications. The following important conclusion follows from the above arguments. The approach to detecting and correcting errors, which follows from the model of the 'symmetrical channel,' where the errors of large multiplicity are ignored as unlikely, is unsuitable for mission-critical systems, because this approach does not prevent erroneous output signals, which can arise due to errors of large multiplicity. The harm of such an approach can be shown using the example of the Hamming and Hsiao codes [4-6], widely used for detecting and correcting errors in informational systems with representation of data in parallel form. 2.3. Paradox of the Hamming and Hsiao codes As is well known, the Hamming code [4] and the Hsiao code [5, 6] are widely used to correct single-bit errors that occur in informational systems with parallel data representation (for example, in electronic memory). The so-called unmodified Hamming code [4] allows correcting a one-bit error in the code word. In the case of a 2-bit (or double) error, the decoder cannot detect it; instead, it 'corrects' the information word erroneously and reports the successful correction of a single error in the code word. This case is called 'false correction.' To detect a double error, the modified Hamming code of the type SEC-DED (single-error-correcting, double-error-detecting) is used. It differs from the unmodified Hamming code by the addition of one more verification bit, the common parity bit of the entire code word. However, in the case of errors of multiplicity greater than 2, there is also a probability of 'false correction.' The Hsiao code [5, 6] is similar to the modified Hamming codes, but uses a slightly different mathematical basis. The question arises: how do the Hamming and Hsiao codes operate when errors of large odd multiplicity (3, 5, 7, 9, ...) arise in the code word? Such many-bit errors of odd multiplicity (3, 5, 7, 9, ...) are perceived by the Hamming and Hsiao codes as single-bit errors, and the Hamming or Hsiao codes begin to 'correct' them by adding new errors to the erroneous code word. That is, in this case, the Hamming and Hsiao codes turn into anti-ECC, because they ruin the Hamming and Hsiao code words (the effect of 'false correction'). This 'paradoxical' property of the Hamming and Hsiao codes is well known to experts in the field of ECC [5, 6], but many consumers do not always know about it. For such cases, the main argument for customers is the indication of the fact that the errors of large multiplicity are unlikely, but such arguments are unacceptable for mission-critical applications. The modified Hamming and Hsiao codes differ in the possibility of detecting 3-bit (triple) and 4-bit (quadruple) errors (as well as errors of larger multiplicity). A comparison of the codes for this parameter is presented in Table 1 (the table is taken from [6]).
Table 1. The probability of error correction and error detection by codes for an information word consisting of 64 bits (taken from [6]).

Type of code                   | Probability of erroneous correction of a 3-bit error, % | Probability of detection of a 4-bit error, %
Modified Hamming code (72, 64) | 75.9                                                    | 98.9
Hsiao code (72, 64)            | 56.3                                                    | 99.2

This table confirms that the modified Hamming code and the Hsiao code have a very high percentage of 'false corrections' of 3-bit errors, which is NOT ADMISSIBLE for mission-critical applications. Unfortunately, Table 1 does not contain data on the probability of 'false correction' of the odd errors of higher multiplicity (5, 7, 9, ...); we cannot neglect these errors in mission-critical systems. This means that the modified Hamming code and the Hsiao code do not protect the computer systems and their main structures (in particular, electronic memory) from the appearance of output 'false data,' which may lead to technological disasters in mission-critical applications. The high percentage of 'false corrections' of the odd errors of large multiplicity (3, 5, 7, ...) in the Hamming and Hsiao codes questions the usefulness of using these codes for mission-critical applications. 2.4. 'Row hammer' effect The 'row hammer' effect is a new phenomenon in the field of electronic memory. In the Wikipedia article [7], the essence of this effect is explained as follows: 'Row hammer ... is an unintended side effect in dynamic random-access memory (DRAM) that causes memory cells to leak their charges and interact electrically between themselves, possibly altering the contents of nearby memory rows that were not addressed in the original memory access. This circumvention of the isolation between DRAM memory cells results from the high cell density in modern DRAM ...' As follows from this quote, the main reason for the 'row hammer' effect is the microminiaturization of electronic memory, which leads to mutual electrical interaction between nearby memory rows. This interaction is 'altering the contents of nearby memory rows that were not addressed in the original memory access.' No effective methods of fighting against the 'row hammer' effect have been proposed until now. Possibly, the only reasonable proposal is to introduce restrictions on the microminiaturization of electronic memory. But then the question arises: how should we design nano-electronic memory? 3. COMPUTER REVOLUTION, BASED ON THE BINARY SYSTEM, AND THE 'TROJAN HORSE' OF THE BINARY SYSTEM 3.1. Leibniz's binary arithmetic The prominent German scientist Gottfried Wilhelm Leibniz (1646-1716) was the creator of binary arithmetic. From his student years until his death, Leibniz studied the properties of the binary system, which has become the basis of modern computers.
The binary system was fully described by Leibniz in the work 'Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi' (1703). Leibniz attributed to the binary system a mystical meaning and believed that, by using it, we can create a universal language for explaining all phenomena of the world. In 1697, Leibniz designed a medal, which demonstrates the relationship between the binary and decimal numbers. As an admirer of Chinese culture, Leibniz was aware of the Chinese 'Book of Changes' and was one of the first to notice that the hexagrams correspond to the binary numbers from 0 to 111111. Leibniz believed that the 'Book of Changes' is evidence of a major Chinese contribution to the mathematical philosophy of that time. Leibniz did not recommend the binary system instead of the decimal one for practical calculations, but he stressed that 'the calculation by using binary numerals 0 and 1, in spite of its length, is major in science, and even in the computing practice, especially in geometry: the reason consists of the fact that by reducing numbers to the simplest principles, that is, 0 and 1, we establish everywhere the wonderful order' (the quote is taken from [8]). In this quote, Leibniz anticipated the modern 'computer revolution,' based on the binary system! 3.2. John von Neumann's Principles and the computer revolution, based on the binary system A direct outcome of the first electronic computer ENIAC (University of Pennsylvania, 1946) was a confirmation in practice of the high efficiency of electronic technology in computers. The problem of the maximal realization of the huge advantages of electronic technology faced computer designers. It was necessary to analyze the strong and weak aspects of the ENIAC project and to give appropriate recommendations. A brilliant solution of this task was given in the famous Report 'Preliminary discussion of the logical design of an electronic computing instrument' (1946) [9]. This Report, written by the brilliant mathematician John von Neumann and his colleagues from the Princeton Institute, Goldstine and Burks, presented the project of a new electronic computer. The Report [9] became the beginning of the computer revolution, based on the binary system! The essence of the main recommendations of this Report, named John von Neumann's Principles, is the following: The machines on electronic elements should work not in the decimal system but in the binary system. The program should be placed in a machine block, called the storage device, which should have a sufficient capacity and appropriate speeds for access and entry of program commands. Programs, as well as the numbers with which the machine operates, should be represented in the binary code. Thus, the commands and the numbers should have one and the same form of representation. This means that the programs and all intermediate outcomes of calculations, constants and other numbers should be placed in the same storage device. The difficulties of the physical realization of the storage device, whose speed should correspond to the speed of the logical elements, demand a hierarchical organization of memory. The arithmetical device of the machine should be constructed on the basis of the logical summation element; it is inadvisable to create special devices for the fulfillment of other arithmetical operations.
The machine should use the parallel principle of the organization of computing processes, that is, the operations over the binary words should be fulfilled over all digits simultaneously. Thus, the historical significance of John von Neumann's Principles consists in the fact that they are a brilliant confirmation of Leibniz's predictions about the role of the binary system in the future development of computer science and technology. The prominent American scientist, physicist and mathematician John von Neumann (1903-1957), together with his colleagues from the Princeton Institute, Goldstine and Burks, after a careful analysis of the strengths and weaknesses of the first electronic computer ENIAC, gave strong preference to the binary system as a universal way of coding data in electronic computers. 3.3. 'Trojan horse' of the binary system The famous Russian expert in computer science, academician Jaroslav Khetagurov, in one of his articles [10] discusses the problem of the use of modern microprocessors, based on the binary system, in terms of national security: 'The use of microprocessors, controllers, and software computing resources of foreign origin to solve problems in real-time systems of military, administrative and financial purpose is fraught with big problems. This is a sort of 'Trojan horse', the role of which is only now beginning to manifest itself. Losses and damage from their use can significantly affect the national security of Russia...' Academician Khetagurov does not use the concept of 'mission-critical applications' in this quote, but he clearly implies it if you read this quote carefully (real-time systems of military, administrative and financial purpose are fraught with big problems). Thus, academician Khetagurov raises the challenge of designing modern computational tools, having a built-in system of error detection, for ensuring high informational reliability and noise immunity of mission-critical systems. This problem is not new, but its solution is far from completion due to the lack of sufficiently effective scientific solutions in this area. All the main devices of computers and microprocessors (registers, counters, summators and so on) can be classified as PARALLEL SYSTEMS, for which the number of usable redundant codes is very limited (Hamming and Hsiao codes [4-6]). It has been shown above that these codes have a significant drawback (the effect of 'false correction'), which is unacceptable for mission-critical applications. 3.4. The opinion of the US engineer and expert in coding theory W. Kauth Already in the middle of the sixth decade of the 20th century, the US engineer and expert in coding theory W. Kauth drew attention to the fact that attempts to use the existing ECC for computer systems may not be effective because of the following properties of 'computing channels' (see Kauth's quote from the article [11]): The criterion of effectiveness may differ from the corresponding criterion for traditional communication channels. The most likely errors may not correspond to the most likely errors in the traditional communication channels. It is necessary to take into consideration the possibility of errors in the logical devices for encoding-decoding. Where possible, the codes must allow fulfilling arithmetic and other operations. Note that point 1 relates to the important problem of the effectiveness of the application of the ECC to particular subject areas.
Point 2 puts forward the question about real models of errors in the 'computing channels,' in particular, what kind of errors are the most probable for the 'computing channels'. Point 3 puts forward the question about the complexity of encoding-decoding devices for the best ECC. This problem is particularly acute for computing systems and other informational systems with representation of data in PARALLEL FORM. The complexity of the technical implementation of encoders-decoders for many effective ECC is the primary reason why these codes are not used in informational systems with the representation of data in PARALLEL FORM. Point 4 indicates one essential shortcoming of the existing ECC. These codes are non-arithmetical and do not allow fulfilling arithmetical operations. Therefore, they cannot be used for detecting and correcting errors in arithmetical units of computers. Although the article [11] was written in the 1960s, its ideas are very relevant now, when designing computing and measuring systems for mission-critical applications. The book [12] contains interesting experimental data about the nature and statistics of errors, which can occur in typical computer structures (registers, counters, summators and so on) under the influence of noises in the electrical energy sources: With an increasing noise level in the electrical energy source, the number of errors of large multiplicity increases, and the distribution of errors by their multiplicity approaches the uniform distribution. Counters and summators are devices with a strongly expressed asymmetric nature of errors. For example, the probabilities of a false increase or decrease of the number in a counter are equal to 0.96 and 0.04, respectively; for summators, these probabilities are equal to 0.8 and 0.2, respectively. 3.5. On the redundant numeral systems The traditional approach to introducing code redundancy into computational structures suggests that code redundancy is introduced into digital structures after the numeral system used to perform arithmetic operations has already been chosen (according to von Neumann's principles, the classical binary system is preferable for computational structures). It is often forgotten that the code redundancy needed to detect errors can be introduced into computational structures at the earliest stage of their design, at the stage of choosing the numeral system for arithmetical calculations. Examples of such an approach are described in the book [21]. The system of residual classes is the most well known among the redundant numeral systems [21]. It has two advantages in comparison with the classical binary system, namely, an increased speed of execution of arithmetic operations and a possibility of error detection in such operations. In the USSR, based on the system of residual classes, a specialized processor for military applications was developed. Unfortunately, the system of residual classes did not justify all the benefits that were expected from it. Its main disadvantage is its non-positional character, which leads to many shortcomings in its practical use (difficulties in representing negative numbers, comparing numbers by their value, etc.). The main objection of computer experts against the use of new redundant numeral systems is the fact that existing software is closely related to the binary system and binary coding. With this argument, of course, we can agree, if not for one circumstance.
In this article, we are talking about designing computing and measuring systems for mission-critical systems. In most cases, we are talking not about universal, but about specialized computing and measuring systems, which perform a narrowly specialized task. In such systems, the number of used programs is limited. For each specific application, these specialized programs can be developed. The main task of such informational systems is ensuring a highly reliable performance of the computer program and preventing 'false signals' at the output, which can lead to a technological catastrophe. The 'Trojan horse' of the binary system excludes the possibility of designing highly reliable specialized informational systems. Therefore, the new positional numeral systems, described below, are eligible for use in mission-critical systems. 4. BERGMAN'S SYSTEM AS THE FIRST IN HISTORY NUMERAL SYSTEM WITH IRRATIONAL BASE 4.1. Definition In 1957, the young American mathematician George Bergman published the article 'A number system with an irrational base' in the authoritative journal Mathematics Magazine [13]. The following sum is called Bergman's system: $A = \sum_i a_i \Phi^i$, (1) where $A$ is any real number, $a_i$ is a binary numeral {0, 1} of the $i$th digit, $i = 0, \pm 1, \pm 2, \pm 3, \ldots$, $\Phi^i$ is the weight of the $i$th digit and $\Phi = (1+\sqrt{5})/2$ is the base of the numeral system (1). 4.2. The main distinction between Bergman's system and the binary system On the face of it, there is no essential distinction between the formula (1) for Bergman's system and the formula for the binary system: $A = \sum_i a_i 2^i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$; $a_i \in \{0,1\}$), (2) where the digit weights are connected by the following 'arithmetical' relations: $2^i = 2^{i-1} + 2^{i-1} = 2 \times 2^{i-1}$, (3) which underlie binary arithmetic. The principal distinction of Bergman's system (1) from the binary system (2) is the fact that the famous irrational number $\Phi = (1+\sqrt{5})/2$ (the golden ratio) (4) is used as the base of the numeral system (1) and its digit weights are connected by the following well-known relations for the powers of the golden ratio: $\Phi^i = \Phi^{i-1} + \Phi^{i-2} = \Phi \times \Phi^{i-1}$, (5) which underlie the 'golden' arithmetic. That is why Bergman called his numeral system the numeral system with an irrational base. Although Bergman's article [13] is a fundamental result for number theory and computer science, mathematicians and experts in computer science of that period were not able to appreciate the mathematical discovery of the American wunderkind. It is interesting to note the following. Now the concept of Bergman's system has entered widely into the Internet and modern scientific literature. A special article in Wikipedia [14] is dedicated to Bergman's system. It is described briefly in Wolfram MathWorld [15]. Professor Donald Knuth refers to Bergman's article [13] in his outstanding book [16]. A special paragraph in the author's book [17] is dedicated to Bergman's system. 'The Computer Journal' (British Computer Society) published in 2002 the author's article [18], devoted to Bergman's system and its applications. This means that the interest in Bergman's system has increased in modern mathematics and computer science. 4.3. Evaluation of Bergman's system and its applications As it is known, new scientific ideas do not always arise where they are expected. Apparently, Bergman's system [13] is one of the most unprecedented scientific discoveries in the contemporary history of science and mathematics.
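Before continuing, a minimal floating-point sketch (this editor's illustration, not Bergman's or the author's algorithm) may help make definition (1) concrete. Assuming a simple greedy rule (repeatedly take the largest power of the golden ratio not exceeding the remainder), it expands small natural numbers into sums of powers of $\Phi$; the function name bergman_digits and the tolerance are assumptions of the sketch.

PHI = (1 + 5 ** 0.5) / 2            # the golden ratio, base of Bergman's system (1)

def bergman_digits(x, eps=1e-9, min_power=-40):
    """Greedy expansion x = sum of PHI**i over the returned exponents i.
    A floating-point sketch only; exact arithmetic would work in Z[PHI]."""
    powers, rest = [], float(x)
    while rest > eps:
        i = 0
        while PHI ** (i + 1) <= rest + eps:   # climb to the largest usable power
            i += 1
        while PHI ** i > rest + eps:          # or descend below 1 if necessary
            i -= 1
        if i < min_power:                      # safety stop for the sketch
            break
        powers.append(i)
        rest -= PHI ** i
    return powers

for n in range(1, 6):
    print(n, "=", " + ".join(f"PHI^{i}" for i in bergman_digits(n)))
    # e.g. 2 = PHI^1 + PHI^-2 and 3 = PHI^2 + PHI^-2: finite expansions of integers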
First of all, Bergman's mathematical discovery [13] returns mathematics to the initial period of its development, when the numeral systems and the rules of arithmetic operations were among the most important goals of mathematics (Babylon and ancient Egypt). However, the greatest impression is made by the fact that a new scientific discovery in the theory of numeral systems was made by the 12-year-old American wunderkind George Bergman. This is really an unprecedented case in the history of science and mathematics. The mathematical formula (1) for Bergman's system looks so simple that it is difficult to believe that Bergman's system is one of the largest modern mathematical discoveries, which is of fundamental interest for the history of mathematics, number theory and computer science. In this regard, one can compare Bergman's system with the discovery of incommensurable segments, made in Pythagoras' scientific school. The proof of the incommensurability of the diagonal and the side of the square is so simple that any amateur of mathematics can obtain this proof without any difficulties. However, this mathematical discovery still causes delight, since it was this discovery that became the turning point in the development of mathematics and led to the introduction into mathematics of the irrational numbers, without which it is difficult to imagine the existence of mathematics. Time will show how fair the above comparison of Bergman's system with the discovery of the incommensurable segments is. Two new scientific results in computer science and number theory follow from Bergman's system (1): the 'golden' ternary mirror-symmetrical arithmetic and the 'golden' number theory. Let us briefly consider the essence of these mathematical results. The readers can familiarize themselves with these results in more detail in the articles [18-20]. 5. THE 'GOLDEN' TERNARY MIRROR-SYMMETRICAL NUMERAL SYSTEM AND TERNARY MIRROR-SYMMETRICAL ARITHMETIC 5.1. Definition and property of mirror symmetry Let us consider the most interesting scientific results of the articles [18, 20]. It is proved in [18, 20] that any integer N (positive or negative) can be represented as the sum: $N = \sum_{i=-m}^{m} c_i (\Phi^2)^i$, (6) where $c_i \in \{\bar{1} = -1, 0, 1\}$ is the ternary numeral of the $i$th digit; $(\Phi^2)^i$ is the weight of the $i$th digit; $\Phi^2 = \frac{3+\sqrt{5}}{2}$ is the base of the numeral system (6), and $(-m), (-m+1), \ldots, (-2), (-1), 0, 1, 2, \ldots, m$ are integers. We name the sum (6) the ternary Φ-code of the integer N. The abridged notation of the sum (6) can be represented in the form of the following ternary (2m + 1)-digit code combination: $N = \underbrace{c_m c_{m-1} \cdots c_2 c_1}_{m}\, c_0 . \underbrace{c_{-1} c_{-2} \cdots c_{-(m-1)} c_{-m}}_{m}$. (7) We can see that the ternary (2m + 1)-digit code combination consists of two parts relative to the 0th ternary numeral $c_0$: the left-hand part $\underbrace{c_m c_{m-1} \cdots c_2 c_1}_{m}$, which consists of the ternary numerals with the positive indices $1, 2, 3, \ldots, m$, and the right-hand part $\underbrace{c_{-1} c_{-2} \cdots c_{-(m-1)} c_{-m}}_{m}$, which consists of the ternary numerals with the negative indices $(-1), (-2), (-3), \ldots, (-m)$. It is proved that the ternary (2m + 1)-digit code combination (7) for every integer N has the property of mirror symmetry relative to the 0th ternary numeral $c_0$, namely: $c_1 = c_{-1}, c_2 = c_{-2}, \ldots, c_m = c_{-m}$. (8) Taking into consideration the property (8), the ternary numeral system (6) is called the ternary mirror-symmetrical numeral system, and the ternary code combination (7) is called the ternary mirror-symmetrical representation. 5.2. Examples of the ternary mirror-symmetrical representations Table 2 demonstrates the property of 'mirror symmetry' for some initial natural numbers.
Table 2. Property of 'mirror symmetry' (the numeral 1¯ denotes -1).

i      :   3    2    1    0   -1   -2   -3
Φ^{2i} :  Φ^6  Φ^4  Φ^2  Φ^0  Φ^-2 Φ^-4 Φ^-6

N = 0  :   0    0    0    0.   0    0    0
N = 1  :   0    0    0    1.   0    0    0
N = 2  :   0    0    1    1¯.  1    0    0
N = 3  :   0    0    1    0.   1    0    0
N = 4  :   0    0    1    1.   1    0    0
N = 5  :   0    1    1¯   1.   1¯   1    0
N = 6  :   0    1    0    1¯.  0    1    0
N = 7  :   0    1    0    0.   0    1    0
N = 8  :   0    1    0    1.   0    1    0
N = 9  :   0    1    1    1¯.  1    1    0
N = 10 :   0    1    1    0.   1    1    0

Let us give some explanations of Table 2. The first row, i, gives the digit indices of the 7-digit ternary mirror-symmetrical code (6); the second row, Φ^{2i}, gives the digit weights of the 7-digit ternary mirror-symmetrical Φ-code (6); the remaining rows give the positive integers N from 0 to 10 together with their ternary 'golden' mirror-symmetrical representations. The data relating to the 0th digit, which separates the left-hand and right-hand parts of the ternary 'golden' mirror-symmetrical representations of positive integers, are shown in the column i = 0 (the digit followed by the point). Thus, thanks to this simple observation, we have found the most important fundamental property of integers, called the mirror-symmetrical property of integers. Based on this fundamental property, the ternary numeral system, given by (6), was named the ternary mirror-symmetrical numeral system [18]. Another interesting feature of the ternary mirror-symmetrical system (6) follows from Table 2. For all well-known positional numeral systems, the 'extension' of the positional representation of a number is carried out only towards the higher digits. For the ternary mirror-symmetrical system (6), the 'extension' of the ternary mirror-symmetrical representation (7) occurs in both directions, i.e. towards the higher and the lower digits simultaneously. This feature, together with the property of 'mirror symmetry' and other features, singles out the ternary mirror-symmetrical positional numeral system (6) among all other positional numeral systems. 5.3. Ternary mirror-symmetrical arithmetic The rules of mirror-symmetrical summation and subtraction are based on the following identities for the golden proportion: $2\Phi^{2k} = \Phi^{2(k+1)} - \Phi^{2k} + \Phi^{2(k-1)}$ (9) $3\Phi^{2k} = \Phi^{2(k+1)} + 0 + \Phi^{2(k-1)}$ (10) $4\Phi^{2k} = \Phi^{2(k+1)} + \Phi^{2k} + \Phi^{2(k-1)}$, (11) where $k = 0, \pm 1, \pm 2, \pm 3, \ldots$. The table of mirror-symmetrical summation (subtraction) has the following form:

a_k + b_k |  1¯    0    1
    1¯    | 1¯11¯  1¯   0
    0     |  1¯    0    1
    1     |  0     1   11¯1

The peculiarity of the summation (subtraction) of the ternary digits $a_k + b_k$ consists in the fact that, in the case of the summation (subtraction) of ternary numerals of the same sign, an intermediate sum of the opposite sign and a carry-over of the same sign arise, and the carry-over spreads symmetrically to the two adjacent digits.
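To see the mirror symmetry of Table 2 'from the outside', here is a small brute-force check (an editorial sketch in Python, not the author's method): it enumerates all 3^7 digit vectors of the 7-digit code (6), keeps those whose value equals a given small N, and tests property (8). The tolerance and the helper names are assumptions of the sketch.

from itertools import product

PHI = (1 + 5 ** 0.5) / 2
IDX = range(-3, 4)                       # digit indices i = -3 .. 3, as in Table 2
W = {i: PHI ** (2 * i) for i in IDX}     # digit weights PHI^(2i)

def mirror_symmetric_reps(n, tol=1e-6):
    """All 7-digit vectors c_i in {-1, 0, 1} whose value equals n."""
    reps = []
    for digits in product((-1, 0, 1), repeat=len(IDX)):
        c = dict(zip(IDX, digits))
        if abs(sum(c[i] * W[i] for i in IDX) - n) < tol:
            reps.append(c)
    return reps

for n in range(6):
    for c in mirror_symmetric_reps(n):
        mirror = all(c[i] == c[-i] for i in IDX)
        print(n, [c[i] for i in sorted(IDX, reverse=True)], "mirror symmetry:", mirror)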
When we sum up multi-digit ternary numbers, the sum always appears in the mirror-symmetrical form. The following trivial identity for the golden ratio powers underlies the mirror-symmetrical multiplication: $\Phi^{2n} \times \Phi^{2m} = \Phi^{2(n+m)}$. (12) The table of the mirror-symmetrical multiplication of two single-digit ternary mirror-symmetrical numbers $a_k \times b_k$ is given below:

a_k x b_k |  1¯   0    1
    1¯    |  1    0    1¯
    0     |  0    0    0
    1     |  1¯   0    1

The final part of the article [18] describes the unique multi-digit ternary mirror-symmetrical summator (subtractor) and the matrix mirror-symmetrical summator (subtractor), on the basis of which a ternary mirror-symmetrical pipelined summator (subtractor) and a pipelined device for multiplication have been designed. The article [18], published in The Computer Journal, aroused great interest in the Western computer community. The outstanding American computer expert Professor Donald Knuth was the first to congratulate the author on this publication. 5.4. The main arithmetical advantages of the ternary mirror-symmetrical arithmetic We can point out a number of important advantages of the ternary mirror-symmetrical arithmetic from the 'technical' point of view: The mirror-symmetrical subtraction is the same arithmetic operation as the mirror-symmetrical summation. The mirror-symmetrical summation (subtraction) is fulfilled by means of one and the same mirror-symmetrical summator (subtractor) in the 'direct' code, that is, without the use of the notions of the inverse and additional codes. The sign of the summed or subtracted numbers is determined automatically, because it coincides with the sign of the highest significant ternary numeral of the ternary mirror-symmetrical representation of the summation (subtraction) result. The summation (subtraction) results are always represented in the mirror-symmetrical form, which allows checking the process of the ternary mirror-symmetrical summation (subtraction) according to the property of 'mirror symmetry.' The mirror-symmetrical multiplication is reduced to the mirror-symmetrical summation (subtraction). The ternary mirror-symmetrical multiplication can be fulfilled over ternary mirror-symmetrical numbers of equal or different signs in the 'direct' code, that is, without the use of the notions of the inverse and additional codes. The sign of the result of the mirror-symmetrical multiplication is determined automatically, because it coincides with the sign of the highest significant ternary numeral ($1$ or $\bar{1}$) of the ternary mirror-symmetrical representation of the result of the mirror-symmetrical multiplication. The results of the mirror-symmetrical multiplication are always represented in the mirror-symmetrical form, which allows checking the process of the ternary mirror-symmetrical multiplication. The operation of the ternary mirror-symmetrical division is a more complicated arithmetical operation than the arithmetical operations of the ternary mirror-symmetrical summation, subtraction and multiplication. By its complexity, the operation of the ternary mirror-symmetrical division is comparable to the same operation in the ternary symmetrical numeral system [21], used in the ternary computer 'Setun,' designed at Moscow University.
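Before moving on to the 'golden' number theory, the identities (9)-(12) of Section 5.3, on which the mirror-symmetrical summation and multiplication rest, can be checked numerically with a few lines of Python (an editorial sketch; the tolerance is an assumption):

PHI = (1 + 5 ** 0.5) / 2

for k in range(-3, 4):
    w = PHI ** (2 * k)
    assert abs(2 * w - (PHI ** (2 * (k + 1)) - w + PHI ** (2 * (k - 1)))) < 1e-9   # identity (9)
    assert abs(3 * w - (PHI ** (2 * (k + 1)) + PHI ** (2 * (k - 1)))) < 1e-9        # identity (10)
    assert abs(4 * w - (PHI ** (2 * (k + 1)) + w + PHI ** (2 * (k - 1)))) < 1e-9    # identity (11)

assert abs(PHI ** (2 * 2) * PHI ** (2 * 3) - PHI ** (2 * (2 + 3))) < 1e-9           # identity (12)
print("identities (9)-(12) hold numerically")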
6. THE 'GOLDEN' NUMBER THEORY AND NEW PROPERTIES OF NATURAL NUMBERS 6.1. The 'extended' Fibonacci and Lucas numbers Bergman's system (1) is closely connected with the so-called 'extended' Fibonacci and Lucas numbers $F_i$ and $L_i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) (see Table 3). Table 3. The 'extended' Fibonacci and Lucas numbers.

n    :  0   1   2   3   4    5    6    7    8    9    10
Fn   :  0   1   1   2   3    5    8   13   21   34    55
F-n  :  0   1  -1   2  -3    5   -8   13  -21   34   -55
Ln   :  2   1   3   4   7   11   18   29   47   76   123
L-n  :  2  -1   3  -4   7  -11   18  -29   47  -76   123

As follows from Table 3, the 'extended' Fibonacci and Lucas numbers are connected by the following simple relations: $F_{-n} = (-1)^{n+1} F_n$; $L_{-n} = (-1)^n L_n$. (13) 6.2. The 'golden' representations of natural numbers Let us consider the 'golden' representation of natural numbers in Bergman's system (1): $N = \sum_i a_i \Phi^i$, (14) where $a_i \in \{0, 1\}$ is the bit of the $i$th digit, $\Phi^i$ is the weight of the $i$th digit and $\Phi = \frac{1+\sqrt{5}}{2}$ is the base of the numeral system (14). We will name the sum (14) the Φ-code of the natural number N. The abridged notation of the Φ-code of the natural number N has the following form: $N = a_n a_{n-1} \ldots a_1 a_0 . a_{-1} a_{-2} \ldots a_{-k}$ (15) and is named the 'golden' representation of the natural number N. Note that the point in the 'golden' representation (15) separates the 'golden' representation (15) into two parts: the left-hand part, where the bits $a_n a_{n-1} \ldots a_1 a_0$ have non-negative indices, and the right-hand part, where the bits $a_{-1} a_{-2} \ldots a_{-k}$ have negative indices. Note that the weights $\Phi^i$ of the Φ-code (14) are connected by the following relation: $\Phi^i = \Phi^{i-1} + \Phi^{i-2}$. (16) Besides, the power of the golden ratio $\Phi^i$ is expressed through the 'extended' Fibonacci and Lucas numbers $F_i$ and $L_i$ (see Table 3) as follows: $\Phi^i = \frac{L_i + F_i \sqrt{5}}{2}$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$). (17) By using the relations (13), (16) and (17), the following theorem has been proved in [19]. Theorem 1. All natural numbers can be represented in the Φ-code (14) of Bergman's system (1) by a finite number of bits. Note that Theorem 1 is far from trivial if we take into consideration that all powers of the golden proportion $\Phi^i$ ($i = \pm 1, \pm 2, \pm 3, \ldots$) in the sum (14) (with the exception of $\Phi^0 = 1$) are irrational numbers. Note that Theorem 1 is true only for natural numbers. Therefore, Theorem 1 can be referred to the category of new properties of natural numbers. 6.3. Multiplicity and MINIMAL FORM of the 'golden' representations The main feature of the 'golden' representations (15) of real numbers in Bergman's system, compared with the binary system (2), is the multiplicity of the 'golden' representations of one and the same real number. The various 'golden' representations of one and the same real number can be obtained by using the operations of convolution (18) and devolution (19) in the 'golden' representations (15): Convolution: $011 \to 100$ (18) Devolution: $100 \to 011$ (19) Note that the micro-operations (18) and (19) are based on the main mathematical identity (16), which relates the weights of the digits in Bergman's system (1). The performance of these micro-operations in the 'golden' representation (15) of a certain number does not change the value of this number.
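The value-preserving character of the convolution (18) and devolution (19) can be illustrated with a tiny Python sketch (an editorial illustration; a 'golden' representation is modeled simply as the set of exponents i with a_i = 1, which is an assumption of the sketch, not the author's notation):

PHI = (1 + 5 ** 0.5) / 2

def value(ones):
    """Numeric value of a 'golden' representation given as the set of exponents with a_i = 1."""
    return sum(PHI ** i for i in ones)

def convolution(ones, i):
    """011 -> 100: replace PHI^i + PHI^(i-1) by PHI^(i+1); requires a_{i+1} = 0."""
    assert i in ones and (i - 1) in ones and (i + 1) not in ones
    return (ones - {i, i - 1}) | {i + 1}

def devolution(ones, i):
    """100 -> 011: replace PHI^i by PHI^(i-1) + PHI^(i-2); requires both positions free."""
    assert i in ones and (i - 1) not in ones and (i - 2) not in ones
    return (ones - {i}) | {i - 1, i - 2}

rep = {1, -2}                      # 2 = PHI^1 + PHI^-2 (its minimal form)
rep2 = devolution(rep, 1)          # {0, -1, -2}: 2 = PHI^0 + PHI^-1 + PHI^-2
rep3 = convolution(rep2, 0)        # back to {1, -2}
print(value(rep), value(rep2), value(rep3))   # all three values are ~2.0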
The so-called MINIMAL FORM plays a special role among the various 'golden' representations (15) of one and the same number. The MINIMAL FORM can be obtained from the initial 'golden' representation by means of fulfilling in it all the possible convolutions (18). The MINIMAL FORM has the following important features: Since the operation of the convolution ($011 \to 100$) is reduced to the transformation of the triple of neighboring bits 011 into the triple of neighboring bits 100, this means that in the MINIMAL FORM two 1-bits never stand side by side. The MINIMAL FORM has the minimal number of 1's among all the possible 'golden' representations of the same number. 6.4. Z- and D-properties of natural numbers Bergman's system (1) is a source of new number-theoretical results. We give without proof the following properties of the Φ-code (14), which are formulated as Theorems 2 and 3. Theorem 2 (Z-property of natural numbers). If we represent an arbitrary natural number N in the Φ-code (14) and then substitute the 'extended' Fibonacci numbers $F_i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) instead of the golden ratio powers $\Phi^i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) into the sum (14), then the sum that appears as a result of such a substitution will be identically equal to 0, independently of the initial natural number N, that is: for any $N = \sum_i a_i \Phi^i$, after the substitution $\Phi^i \to F_i$ we have $\sum_i a_i F_i \equiv 0$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$). (20) Theorem 3 (D-property of natural numbers). If we represent an arbitrary natural number N in the Φ-code (14) and then substitute the 'extended' Lucas numbers $L_i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) instead of the golden ratio powers $\Phi^i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) into the sum (14), then the sum that appears as a result of such a substitution will be identically equal to 2N, independently of the initial natural number N, that is: for any $N = \sum_i a_i \Phi^i$, after the substitution $\Phi^i \to L_i$ we have $\sum_i a_i L_i \equiv 2N$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$). (21) We note that Theorems 2 and 3, like Theorem 1, are valid only for natural numbers; consequently, they describe new properties of natural numbers. The article [19] describes other new properties of natural numbers. For example, if we substitute the 'extended' Fibonacci numbers $F_{i+1}$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) instead of the golden ratio powers $\Phi^i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) in the Φ-code (14), then we get another representation of the same natural number, called the F-code of the natural number N: $N = \sum_i a_i F_{i+1}$. (22) If we substitute the 'extended' Lucas numbers $L_{i+1}$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) instead of the golden ratio powers $\Phi^i$ ($i = 0, \pm 1, \pm 2, \pm 3, \ldots$) in the Φ-code (14), then we get another representation of the same natural number, called the L-code of the natural number N: $N = \sum_i a_i L_{i+1}$. (23) This means that in Bergman's system there are three different ways of representing the same natural number N: the Φ-code (14), the F-code (22) and the L-code (23), that is, $N = \sum_i a_i \Phi^i = \sum_i a_i F_{i+1} = \sum_i a_i L_{i+1}$. (24) For many mathematicians in the field of number theory, it is a great surprise that new properties of natural numbers were discovered in the 21st century, that is, 2.5 millennia after the writing of Euclid's Elements, in which the systematic study of the properties of natural numbers started. Bergman's system is the source of the 'golden' number theory [19], which once again emphasizes the fundamental nature of the mathematical discovery of George Bergman [13]. 7. PASCAL'S TRIANGLE, FIBONACCI p-NUMBERS AND GOLDEN p-PROPORTIONS 7.1. Mathematical discovery by George Polya As is known, Pascal's triangle plays an important role in combinatorial analysis and has many interesting applications in mathematics and computer science, in particular, in coding theory.
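Before turning to Pascal's triangle, the Z- and D-properties (Theorems 2 and 3) can be illustrated numerically. The Python sketch below (an editorial illustration, not the proof from [19]) reuses the greedy Φ-code expansion sketched earlier together with the 'extended' Fibonacci and Lucas numbers of Table 3; all function names are assumptions of the sketch.

from functools import lru_cache

PHI = (1 + 5 ** 0.5) / 2

@lru_cache(maxsize=None)
def fib(n):
    """'Extended' Fibonacci numbers: F(0) = 0, F(1) = 1, F(-n) = (-1)^(n+1) F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def lucas(n):
    """'Extended' Lucas numbers: L(0) = 2, L(1) = 1, L(-n) = (-1)^n L(n)."""
    if n < 0:
        return (-1) ** (-n) * lucas(-n)
    if n == 0:
        return 2
    return 1 if n == 1 else lucas(n - 1) + lucas(n - 2)

def phi_code_powers(n, eps=1e-9):
    """Greedy Phi-code of a natural number n: the exponents i with a_i = 1."""
    powers, rest = [], float(n)
    while rest > eps:
        i = 0
        while PHI ** (i + 1) <= rest + eps:
            i += 1
        while PHI ** i > rest + eps:
            i -= 1
        powers.append(i)
        rest -= PHI ** i
    return powers

for n in range(1, 8):
    p = phi_code_powers(n)
    z = sum(fib(i) for i in p)       # Z-property: identically 0
    d = sum(lucas(i) for i in p)     # D-property: identically 2n
    print(n, p, "sum a_i*F_i =", z, "  sum a_i*L_i =", d)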
By studying the so-called diagonal sums of Pascal's triangle, the American mathematician George Polya came to a very simple and unexpected discovery, described in the book [22] (see Fig. 1). It should be noted that this very simple mathematical result remained unknown for many centuries to Blaise Pascal and the other mathematicians who studied Fibonacci numbers and combinatorial analysis. Figure 1. Fibonacci numbers in Pascal's triangle. Source: Pascal's Triangle, http://www.goldennumber.net/pascals-triangle/ 7.2. Fibonacci p-numbers By studying the optimal measurement algorithms in his Doctoral dissertation (1972) [25] and the diagonal sums of Pascal's triangle (Fig. 1) in the book [23], the author found an infinite number of recurrent sequences, which, for a given $p = 0, 1, 2, 3, \ldots$, are described by the following recurrent relation: $F_p(n) = F_p(n-1) + F_p(n-p-1)$ for $n > p+1$ (25) with the seeds $F_p(1) = F_p(2) = \cdots = F_p(p+1) = 1$. (26) The numerical sequences, generated by the recurrent relation (25) with the seeds (26), are named in [23] the Fibonacci p-numbers. It is clear that for the case p = 0 the Fibonacci p-numbers are reduced to the classical binary sequence 1, 2, 4, 8, 16, 32, 64, ..., $2^{n-1}$, and for the case p = 1 to the classical Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., $F_n$. For the case p = ∞, the Fibonacci p-numbers are reduced to the following trivial sequence: {1, 1, 1, ..., 1, ...}. (27) 7.3. A representation of the Fibonacci p-numbers through binomial coefficients The following formula, representing the binary numbers $2^n$ through binomial coefficients, is well known in combinatorial analysis: $2^n = C_n^0 + C_n^1 + \cdots + C_n^n$. (28) By studying the Pascal triangle (Fig. 1) [23], we can represent the generalized Fibonacci p-number $F_p(n+1)$, given by the recurrent relation (25) with the seeds (26), through the binomial coefficients as follows: $F_p(n+1) = C_n^0 + C_{n-p}^1 + C_{n-2p}^2 + C_{n-3p}^3 + C_{n-4p}^4 + \cdots$. (29) Note that the known formula (28) is a partial case of (29) for the case p = 0. For the case p = 1, the formula (29) is reduced to the following formula, which connects the classical Fibonacci numbers $F_{n+1} = F_1(n+1)$ to binomial coefficients: $F_{n+1} = F_1(n+1) = C_n^0 + C_{n-1}^1 + C_{n-2}^2 + C_{n-3}^3 + C_{n-4}^4 + \cdots$. (30) It is clear that the formulas (29) and (30) are another confirmation of the deep connection between the theory of Fibonacci p-numbers and combinatorial analysis. 7.4. The golden p-proportions 7.4.1. A ratio of the adjacent Fibonacci p-numbers The so-called Kepler formula, which gives the relationship between the golden ratio and the Fibonacci numbers $F_n$, is well known: $\Phi = \lim_{n \to \infty} \frac{F_n}{F_{n-1}} = \frac{1+\sqrt{5}}{2}$. (31) It is proved in [24] that, for a given $p = 0, 1, 2, 3, \ldots$, the limit of the ratio of two adjacent Fibonacci p-numbers is equal to the following: $\lim_{n \to \infty} \frac{F_p(n)}{F_p(n-1)} = \Phi_p$, (32) where $\Phi_p$ is a mathematical constant, which is the positive root of the following algebraic equation: $x^{p+1} = x^p + 1$. (33) Note that for the case p = 0, Eq. (33) is reduced to the trivial equation x = 2. For the case p = 1, Eq. (33) is reduced to the algebraic equation of the golden ratio: $x^2 = x + 1$ (34) with the positive root $\Phi = \frac{1+\sqrt{5}}{2}$ (the golden ratio). As follows from the above arguments, the constants $\Phi_p$ are new fundamental mathematical constants that are directly related to the binomial coefficients, Pascal's triangle, and combinatorial analysis in general.
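As a quick check on (32) and (33), the following Python sketch (an editorial illustration; the function names and the bisection approach are assumptions, not taken from [23, 24]) builds the Fibonacci p-numbers from the recurrence (25)-(26), computes the ratio of large adjacent terms, and compares it with the positive root of x^(p+1) = x^p + 1 found by bisection:

def fib_p(p, n):
    """Fibonacci p-number F_p(n) with the seeds F_p(1) = ... = F_p(p+1) = 1."""
    if n <= p + 1:
        return 1
    seq = [1] * (p + 1)
    for _ in range(n - (p + 1)):
        seq.append(seq[-1] + seq[-(p + 1)])   # recurrence (25)
    return seq[-1]

def golden_p(p, lo=1.0, hi=2.0, iters=100):
    """Positive root of x^(p+1) - x^p - 1 = 0 (the golden p-proportion) by bisection."""
    f = lambda x: x ** (p + 1) - x ** p - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

for p in (0, 1, 2, 3):
    ratio = fib_p(p, 40) / fib_p(p, 39)       # ratio of adjacent Fibonacci p-numbers
    print("p =", p, " root of (33):", round(golden_p(p), 6), " F_p(40)/F_p(39):", round(ratio, 6))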
7.4.2. The simplest algebraic properties of the golden p-proportions If we substitute the golden p-proportion $\Phi_p$ instead of x in Eq. (33), we get the following identity for the golden p-proportion: $\Phi_p^{p+1} = \Phi_p^p + 1$. (35) If we divide all terms of the identity (35) by $\Phi_p^p$, we get the following identities for the golden p-proportion: $\Phi_p = 1 + \frac{1}{\Phi_p^p}$ (36) or $\Phi_p - 1 = \frac{1}{\Phi_p^p}$. (37) Note that for the case p = 0 ($\Phi_0 = 2$) the identities (36) and (37) are reduced to the following trivial expressions: $2 = 1 + \frac{1}{1}$ and $2 - 1 = \frac{1}{1}$. For the case p = 1, we have $\Phi_1 = \Phi = \frac{1+\sqrt{5}}{2}$, and the identities (36) and (37) are reduced to the well-known identities for the golden proportion Φ: $\Phi^2 = \Phi + 1$ (38) $\Phi = 1 + \frac{1}{\Phi}$. (39) If we multiply and divide repeatedly all terms of the identity (35) by $\Phi_p$, we get the following remarkable identities, connecting the powers of the golden p-proportion: $\Phi_p^n = \Phi_p^{n-1} + \Phi_p^{n-p-1} = \Phi_p \times \Phi_p^{n-1}$ ($n = 0, \pm 1, \pm 2, \pm 3, \ldots$). (40) Note that for the case p = 0 we have $\Phi_p = \Phi_0 = 2$, and then the identities (40) are reduced to the following trivial identities for the 'binary' numbers: $2^n = 2^{n-1} + 2^{n-1} = 2 \times 2^{n-1}$. For the case p = 1, we have $\Phi_1 = \Phi = \frac{1+\sqrt{5}}{2}$, and then the identities (40) are reduced to the following well-known identities for the classic golden ratio: $\Phi^n = \Phi^{n-1} + \Phi^{n-2} = \Phi \times \Phi^{n-1}$. (41) 8. FIBONACCI p-CODES, CODES OF THE 'GOLDEN' p-PROPORTIONS AND THEIR APPLICATIONS IN COMPUTER SCIENCE AND DIGITAL METROLOGY 8.1. Fibonacci p-codes 8.1.1. Definition In 1972, the author of this article defended his Grand Doctoral dissertation 'Synthesis of Optimal Algorithms for Analog-Digital Conversion' [25]. On the basis of this dissertation, the author wrote the book 'Introduction into Algorithmic Measurement Theory' [23], devoted to the substantiation of the theory of the so-called Fibonacci p-codes: $N = a_n F_p(n) + a_{n-1} F_p(n-1) + \cdots + a_i F_p(i) + \cdots + a_1 F_p(1)$, (42) where N is a natural number, $a_i \in \{0, 1\}$ is the binary numeral of the $i$th digit of the code (42), and n is the number of digits of the code (42). Here $\{F_p(1), F_p(2), \ldots, F_p(i), \ldots, F_p(n)\}$ (43) are the weights of the code (42), and $F_p(i)$ ($i = 1, 2, 3, \ldots, n$) are the Fibonacci p-numbers, which follow from Pascal's triangle (the diagonal sums) and are expressed through binomial coefficients by (29). The formula (42) was obtained by the author during the synthesis of the so-called optimal measurement algorithms, the theory of which is described in the book [23]. The formula (42) defines a class of positional numeral systems, corresponding to the so-called Fibonacci measurement algorithms, since the Fibonacci p-numbers are the digit weights of the numeral system (42). 8.1.2. Partial cases of the Fibonacci p-codes Note that the Fibonacci p-codes (42) include an infinite number of different positional 'binary' representations of positive integers, because every p generates its own Fibonacci p-code (42) ($p = 0, 1, 2, 3, \ldots$). In particular, for the case p = 0, the Fibonacci p-code (42) is reduced to the classic binary code: $N = a_n 2^{n-1} + a_{n-1} 2^{n-2} + \cdots + a_i 2^{i-1} + \cdots + a_1 2^0$, (44) which underlies classical binary arithmetic, the basis of modern 'binary' computers. For the case p = 1, the Fibonacci p-code (42) is reduced to the classic Fibonacci code, named the Fibonacci 1-code: $N = a_n F_n + a_{n-1} F_{n-1} + \cdots + a_i F_i + \cdots + a_1 F_1$, (45) where $F_i = F_{i-1} + F_{i-2}$; $F_1 = F_2 = 1$ ($i = 1, 2, 3, \ldots, n$) are the classical Fibonacci numbers. The abridged representations of the Fibonacci p-code (42), as well as of the classical binary code (44) and the Fibonacci 1-code (45), have one and the same form: $N = a_n a_{n-1} \ldots a_i \ldots a_1$ (46) and are named Fibonacci p-representations or simply Fibonacci representations of natural numbers. Consider now the partial case p = ∞.
For this case, every Fibonacci p-number is identically equal to 1, that is, for any integer $i = 1, 2, 3, \ldots, n$ we have $F_p(i) = 1$. Then, for this case, the sum (42) takes the form of the so-called unitary code: $N = \underbrace{1 + 1 + \cdots + 1}_{N}$. (47) However, the expression (47) coincides with the Euclidean definition of natural numbers, used by Euclid in his elementary number theory. Hence, the Fibonacci p-codes, given by (42), are a very wide generalization of the binary code (44) and the Fibonacci 1-code (45), which are the partial cases of the Fibonacci p-codes (42) for the cases p = 0 and p = 1, respectively. On the other hand, the Fibonacci p-code (42) for the case p = ∞ is reduced to the Euclidean definition of natural numbers (47). Thus, the fundamental significance of the formula (42) consists in the fact that it connects various mathematical theories and concepts, including the Fibonacci numbers theory [26-28], Pascal's triangle and combinatorial analysis, the theory of numeral systems and number theory, and finally binary arithmetic, the basis of modern computers. The formula (42) can be viewed from different points of view. First of all, as a generalization of the Fibonacci numbers theory [26-28], because the Fibonacci p-numbers are a wide generalization of the classical Fibonacci numbers. Secondly, because the Fibonacci p-codes (42) are a generalization of the Euclidean definition of natural numbers (47), this approach can lead us to an extension of number theory. In essence, the modern Fibonacci numbers theory [26-28] is such an 'extension' of number theory. This article is devoted to the applied aspects of the Fibonacci p-codes (42), in particular, their applications in computer science. Since the number of the Fibonacci p-codes (42) is theoretically infinite, we must choose the redundant Fibonacci p-code that is most suitable for designing highly reliable Fibonacci computers as a new direction in computer technology. Here it is appropriate to draw an analogy between the Fibonacci p-codes (42) and the so-called canonical numeral systems, described in [21]. As it is known, the number of canonical numeral systems, that is, numeral systems with the bases 2, 3, ..., 10, ..., 12, ..., 60, etc., is theoretically infinite. But the main criterion for choosing the binary system (44) as the main positional numeral system for electronic computers was the principle of simplicity of technical implementation. Von Neumann's idea to use the binary system (44) in electronic computers is based on the arithmetic advantages of the binary system and the specifics of electronic components and Boolean logic. Von Neumann wrote: 'Our main memory unit by nature is adapted to the binary system ... A flip-flop in fact is again a binary device ... The main advantage of the binary system in comparison with a decimal consists in greater simplicity of technical realization and the big speed, with which the basic operations can be performed. An additional remark consists of the following. The main part of the computer by its nature is not arithmetical, but mainly logical. The new logic, being the system of 'yes-no', is mainly binary. Therefore, the construction of the binary arithmetical devices greatly facilitates the construction of a more homogeneous machine, which can be designed better and more effectively.' We have a similar situation for the case of the redundant Fibonacci p-codes (p = 1, 2, 3, ...). It follows from the above reasoning that the number of the redundant Fibonacci p-codes, given by (42), is theoretically infinite. However, in general they have different applied significance.
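As a concrete illustration of definition (42) (an editorial sketch, not the author's Fibonacci measurement algorithms), the short Python fragment below encodes a natural number in the Fibonacci p-code by the simple greedy rule 'take the largest weight F_p(k) not exceeding the remainder'; the function names and the fixed number of digits are assumptions of the sketch. For p = 0 it reproduces the ordinary binary code (44), and for p = 1 the Fibonacci 1-code (45).

def fib_p_weights(p, n_digits):
    """Weights F_p(1), ..., F_p(n_digits) of the Fibonacci p-code (42)."""
    w = [1] * min(n_digits, p + 1)
    while len(w) < n_digits:
        w.append(w[-1] + w[-(p + 1)])
    return w

def fibonacci_p_encode(n, p, n_digits=12):
    """Greedy encoding of n in the Fibonacci p-code; the highest digit comes first."""
    w = fib_p_weights(p, n_digits)
    bits, rest = [], n
    for k in range(n_digits - 1, -1, -1):
        if w[k] <= rest:
            bits.append('1')
            rest -= w[k]
        else:
            bits.append('0')
    assert rest == 0, "increase n_digits"
    return ''.join(bits)

for p in (0, 1, 2):
    print("p =", p, " 20 =", fibonacci_p_encode(20, p))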
For the Fibonacci p-codes, the principle of simplicity of technical realization is very important. The Fibonacci p-code corresponding to the case p = 1 has the least redundancy, sufficient for designing built-in error-detecting devices, and it is the simplest Fibonacci p-code from the point of view of technical implementation. 8.1.3. A little history, the author's first publications in the field and Fibonacci patenting Since the Fibonacci p-codes (42) are a generalization of the binary system, which underlies modern computers, immediately after defending his Grand Doctoral dissertation (1972) [25] the author set as his main challenge the creation of new arithmetical and informational foundations for new computers, Fibonacci computers, based on the Fibonacci p-codes (42). The author's first articles [29, 30] on this theme (in Russian) were published in 1974-75. The scientific trip to Austria (January-March 1976), the work as Visiting Professor at the Vienna Technical University and the author's speech at the joint session of the Austrian Cybernetics and Computer societies on the theme Algorithmic Measurement Theory and Foundations of Computer Arithmetic (3 March 1976) became the beginning of the international recognition of the author's scientific direction. The high evaluation of the author's speech by Austrian scientists led to the wide patenting of the author's inventions in the field of 'Fibonacci computers' abroad. Sixty international patents, granted for the Soviet inventions in the field of computer science and digital metrology in the USA, Japan, England, France, the FRG, Canada, Poland and the DDR [31-43], are official legal documents confirming the priority of Soviet science (and of the author of this article) in a new direction in the field of computer science and digital metrology. After moving to Canada in 2004, the author's main goal became to acquaint the Western computer community with the main ideas of this scientific direction. To this end, the author wrote the book [17], which was published by the Publishing House 'World Scientific' in 2009, and the newest articles [19, 20, 44], which were published in the 'British Journal of Mathematics and Computer Science' during 2015-16. Taking into consideration the availability of the English publications [17, 19, 20, 44], we will outline only the most interesting results in the field of the theory of Fibonacci p-codes and the Fibonacci arithmetic following from it, referring readers to these publications for a more detailed acquaintance. 8.1.4. 'Convolution' and 'devolution' for the Fibonacci 1-code Note that the Fibonacci 1-code (45) is a discrete analog of the Φ-code (14) $N = \sum_i a_i \Phi^i$, and the concepts of convolution (011 → 100) and devolution (100 → 011) can be applied to Fibonacci representations: (a) Convolutions: $7 = 01111 \to 10011 \to 10100$ (48) (b) Devolutions: $5 = 10000 \to 01100 \to 01011$ (49) The convolutions result 10100 in (48) is named the 'convolute' Fibonacci representation, and the devolutions result 01011 in (49) is named the 'devolute' Fibonacci representation. For the case p = 1, the 'convolute' and 'devolute' Fibonacci representations of a positive integer N have characteristic indications. In particular, in the 'convolute' Fibonacci representations two bits of 1 never stand side by side, and in the 'devolute' Fibonacci representations two bits of 0 never stand side by side, starting from the highest 1-bit of the Fibonacci representation (46). Consider now the peculiarities of the convolution and devolution for the lowest digits of the Fibonacci representation (46).
As it is well known, for the case p = 1, the weights of the two lowest digits of the Fibonacci 1-code (45) are identically equal to 1, that is, $F_1 = F_2 = 1$. Then the operations of devolution and convolution for these digits are performed as follows: $10 \to 01$ (devolution) and $01 \to 10$ (convolution). 8.1.5. The base of the Fibonacci p-code For the case p = 0, the base of the binary system (44) is calculated as the ratio of the adjacent digit weights, that is, $\frac{2^k}{2^{k-1}} = 2$. Apply this principle to the Fibonacci p-code (42) and consider the ratio $\frac{F_p(k)}{F_p(k-1)}$. (50) The limit of the ratio (50) for $k \to \infty$ is the base of the Fibonacci p-code (42). As follows from the above, the limit of (50) is equal to $\lim_{k \to \infty} \frac{F_p(k)}{F_p(k-1)} = \Phi_p$, (51) where $\Phi_p$ is the golden p-proportion. This means that the base of the Fibonacci p-code (42) for the case p > 0 is the irrational number $\Phi_p$, and hence the Fibonacci p-codes (42) are a new class of positional numeral systems with irrational bases. For the case p = 1, we have $\lim_{k \to \infty} \frac{F(k)}{F(k-1)} = \Phi = \frac{1+\sqrt{5}}{2}$ (the golden ratio), (52) that is, the base of the Fibonacci 1-code (45) coincides with the base of Bergman's system (1). 8.2. Fibonacci arithmetic 8.2.1. Comparison of numbers in the Fibonacci 1-code It is proved in [17, 44] that the comparison of numbers in the Fibonacci 1-code (45) is fulfilled similarly to the classic binary code (44), if the compared numbers are represented in the MINIMAL FORM. This property (simplicity of number comparison) is one of the important arithmetical advantages of the Fibonacci 1-code (45). 8.2.2. The basic micro-operations The main distinction of the Fibonacci 1-code (45) from the binary code (44) is the multiplicity of Fibonacci representations of one and the same positive integer. By using the above micro-operations of convolution ($011 \to 100$) and devolution ($100 \to 011$), we can change the forms of the Fibonacci representations of one and the same positive integer. This means that the binary 1's in the Fibonacci representation (46) of one and the same number can move to the left or to the right along the Fibonacci representation (46) of the same number by using the micro-operations of convolution ($011 \to 100$) and devolution ($100 \to 011$). Recall once more that the fulfillment of these micro-operations does not change the number itself, that is, we will get different Fibonacci representations of one and the same number. This fact allows developing an original approach to the Fibonacci arithmetic, based on the so-called basic micro-operations. Let us introduce the following four basic micro-operations, used to fulfill logical and arithmetical operations over binary words: (53) Note that the noise-immune Fibonacci arithmetic, based on the above micro-operations (53), is described for the first time in the article [45] and later in the book [17] and the article [44]. Note that the convolutions and devolutions, shown in the table (53), are simple code transformations, which are performed over the adjacent three bits of the Fibonacci representation of one and the same number N in the Fibonacci 1-code (45). The micro-operation of replacement [10↓=01] is a two-place micro-operation, which is fulfilled over the same digits of two registers, the top register A and the lower register B. Consider now the case when the register A has the bit of 1 in the kth digit and the register B has the bit of 0 in the same kth digit (the condition for the replacement).
The micro-operation of the replacement consists in moving the bit 1 from the kth digit of the top register A to the kth digit of the lower register B. Note that this operation can be fulfilled only under the condition that the bits of the kth digits of the registers A and B are equal to 1 and 0, respectively. The micro-operation of absorption is a two-place micro-operation for the condition when the bits 1 stand in the kth digits of both the top register A and the lower register B. This micro-operation consists in the mutual annihilation of the bits 1 in the top and lower registers A and B: after the fulfilment of the absorption, the bits 1 are replaced by the bits 0.

It is necessary to pay attention to the following 'technical' peculiarity of the basic micro-operations (53). In the register interpretation of these micro-operations, each micro-operation may be fulfilled by means of the inversion of the flip-flops involved in the micro-operation. This means that each micro-operation is reduced to flip-flop switching.

8.2.3. Logic operations

We can demonstrate the possibility of fulfilling the simplest logic operations by means of the basic micro-operations (53). Let us fulfil all possible replacements from the top register A to the lower register B. As the result of the replacements, we get two new binary words A′ and B′. The binary word A′ is the logic conjunction (∧) of the initial binary words A and B, that is, A′ = A ∧ B, and the binary word B′ is the logic disjunction (∨) of the initial binary words A and B, that is, B′ = A ∨ B.

The logic operation of modulo 2 addition is fulfilled by means of the simultaneous fulfilment of all possible replacements and absorptions. The results of this code transformation are two new binary words A′ = const 0 and B′ = A ⊕ B. The binary word A′ = const 0 plays the role of a checking binary word for the modulo 2 addition, which is important for computer applications.

The logic operation of inversion of the code A is reduced to the fulfilment of the absorptions over the initial binary word A and the special binary word B = const 1. The binary word A′ = const 0 again plays the role of a checking binary word for the inversion, which is important for computer applications.
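A short sketch makes the register view concrete. The following Python fragment is an illustration with assumed function names (it is not taken from [17, 44, 45]); it applies all possible replacements and absorptions digit by digit and confirms the identities A′ = A ∧ B, B′ = A ∨ B and B′ = A ⊕ B stated above.

```python
# A toy register model: each word is a list of bits of equal length.

def replace_all(A, B):
    """Move every bit 1 of A into the same digit of B wherever B holds 0 (replacement)."""
    A2, B2 = list(A), list(B)
    for k in range(len(A2)):
        if A2[k] == 1 and B2[k] == 0:
            A2[k], B2[k] = 0, 1
    return A2, B2

def absorb_all(A, B):
    """Mutually annihilate the bits 1 standing in the same digit of A and B (absorption)."""
    A2, B2 = list(A), list(B)
    for k in range(len(A2)):
        if A2[k] == 1 and B2[k] == 1:
            A2[k], B2[k] = 0, 0
    return A2, B2

A = [0, 1, 1, 0, 1]
B = [0, 0, 1, 1, 1]

A1, B1 = replace_all(A, B)
print(A1, B1)    # A1 = A AND B, B1 = A OR B (digit by digit)

A2, B2 = absorb_all(*replace_all(A, B))
print(A2, B2)    # A2 = const 0 (the check word), B2 = A XOR B

A3, B3 = absorb_all(A, [1] * len(A))
print(A3, B3)    # A3 = const 0, B3 = NOT A (inversion of the code A)
```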
8.2.4. Fibonacci summation

The idea of the summation of two numbers A and B by using the basic micro-operations consists in the following: we have to move all the binary 1's from the top register A to the lower register B. For this purpose, we use the micro-operations of replacement, devolution and convolution. The result is formed in the register B. For example, let us sum the numbers A0 = 010100100 and B0 = 001010100.

The first step of the summation is the replacement of all possible bits 1 from the register A to the register B. We apply the micro-operation of replacement to all digits of the initial numbers A0 and B0; it can be fulfilled only for those digits where the condition of replacement is satisfied. The result is A1 = 000000100, B1 = 011110100.

The second step is the fulfilment of all possible devolutions in the binary word A1 and all possible convolutions in the binary word B1:

A1 = 000000100, B1 = 011110100 ⇒ A2 = 000000011, B2 = 100110100.

The third step is the replacement of all possible bits 1 from the register A to the register B:

A2 = 000000011, B2 = 100110100 ⇒ A3 = 000000000, B3 = 100110111.

The summation is over, because all bits 1 have moved from the register A to the register B. After reducing the binary word B3 to the MINIMAL FORM, we get the sum B3 = A0 + B0, represented in the MINIMAL FORM:

B3 = 100110111 = 101001001 = 101001010 = A0 + B0.

Thus, the summation is reduced to a sequential fulfilment of the micro-operations of replacement for the two binary words A and B, the micro-operations of convolution for the binary word B and the micro-operations of devolution for the binary word A.

8.2.5. Fibonacci subtraction

The idea of the Fibonacci subtraction of the number B from the number A by using the basic micro-operations consists in the mutual absorption of the binary 1's in the Fibonacci representations of the numbers A and B until one of them becomes equal to 0. To realize this idea, we have to fulfil sequentially the micro-operations of absorption for the Fibonacci representations A and B and then the micro-operations of devolution for the Fibonacci representations A and B. The subtraction result is always formed in the register of the bigger number. If the result is formed in the top register A, the sign of the subtraction result is '+'; in the opposite case the subtraction result has the sign '−'.

Let us demonstrate this idea on the following example. Let us subtract the number B0 = 101010010 from the number A0 = 101001000, both represented in the MINIMAL FORM of the Fibonacci 1-code.

The first step is the absorption of all possible binary 1's in the initial Fibonacci representations A0 and B0, which gives A1 = 000001000, B1 = 000010010.

The second step is the devolutions for the Fibonacci representations A1 and B1:

A1 = 000001000, B1 = 000010010 ⇒ A2 = 000000110, B2 = 000001101.

The third step is the absorptions for the Fibonacci representations A2 and B2, which give A3 = 000000010, B3 = 000001001.

The fourth step is the devolutions for the Fibonacci representations A3 and B3:

A3 = 000000010, B3 = 000001001 ⇒ A4 = 000000001, B4 = 000000111.

The fifth step is the absorptions for the Fibonacci representations A4 and B4, which give A5 = 000000000, B5 = 000000110.

The subtraction is over because A5 = 000000000. After reducing the Fibonacci representation B5 to the MINIMAL FORM, we get the subtraction result B5 = 000001000. The subtraction result is in the register B. This means that the sign of the subtraction result is '−', that is, the difference of the numbers A − B is equal to D = A − B = −000001000. If we code the sign '−' by the bit 1, we can represent the difference D as follows: D = A − B = 1.000001000.
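The invariant behind this procedure is easy to check mechanically: every micro-operation preserves the total value held in the two registers, so the sum survives every step. The following Python fragment is a verification sketch (not an implementation of the full Fibonacci adder); it decodes the register states of the worked summation example above and confirms that value(A) + value(B) stays constant and that the final word encodes A0 + B0.

```python
# Digit weights of a 9-digit Fibonacci 1-code word (most significant first):
# F(9), ..., F(2), F(1) = 34, 21, 13, 8, 5, 3, 2, 1, 1.
WEIGHTS = [34, 21, 13, 8, 5, 3, 2, 1, 1]

def value(word):
    """Integer encoded by a 9-digit Fibonacci 1-code word given as a bit string."""
    return sum(int(b) * w for b, w in zip(word, WEIGHTS))

# Register states of the worked summation example (steps 0..3 of the text).
steps = [
    ('010100100', '001010100'),   # A0, B0
    ('000000100', '011110100'),   # after the replacements
    ('000000011', '100110100'),   # after devolutions in A and convolutions in B
    ('000000000', '100110111'),   # after the final replacements
]

total = value(steps[0][0]) + value(steps[0][1])        # 31 + 20 = 51
for A, B in steps:
    assert value(A) + value(B) == total                # every step preserves the sum

print(total, value(steps[-1][1]))                      # 51 51: the sum sits in register B
print(value('101001010'))                              # 51: the MINIMAL FORM of the result
```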
8.2.6. The 'binary' multiplication

To find the algorithms of the Fibonacci multiplication and division, we use an analogy with the classic binary multiplication and division. We start with the multiplication. To multiply two numbers A and B in the classic binary code (44), that is, to get the product P = A × B, we represent the multiplier B in the form of the n-digit binary code (44). Then the product P = A × B can be written in the following form:

P = A × B = A·b_n·2^(n-1) + A·b_(n-1)·2^(n-2) + ... + A·b_i·2^(i-1) + ... + A·b_1·2^0, (54)

where b_i ∈ {0, 1} are the binary numerals of the multiplier B. It follows from (54) that the binary multiplication is reduced to forming the partial products of the kind A·b_i·2^(i-1) and summing them. The partial product A·b_i·2^(i-1) is formed by shifting the representation of the number A to the left by (i - 1) digits. The binary multiplication algorithm, based on (54), has a long history and goes back to the doubling method of ancient Egyptian mathematics [46].

8.2.7. Fibonacci multiplication

The analysis of the Egyptian doubling method [46] suggests the following method of the Fibonacci multiplication for the general case of p. Consider the product P = A × B, where the numbers A and B are represented in the Fibonacci p-code (42). By using the representation of the multiplier B in the Fibonacci p-code (42), we can represent the product P = A × B as follows:

P = A × B = A·b_n·F_p(n) + A·b_(n-1)·F_p(n-1) + ... + A·b_i·F_p(i) + ... + A·b_1·F_p(1), (55)

where F_p(i) (i = 1, 2, 3, ..., n) are the Fibonacci p-numbers. Note that the sum (55) is a generalization of the sum (54), which underlies the algorithm of the 'binary' multiplication. The algorithm of the Fibonacci multiplication follows from the sum (55): the multiplication is reduced to the summation of the partial products of the kind A·b_i·F_p(i), which are formed from the number A according to a special procedure that is an analog of the Egyptian multiplication.

Let us demonstrate the Fibonacci multiplication for the case of the simplest Fibonacci 1-code (45).

Example 4.1. Find the product 41 × 305. The solution is given in Table 4, which is constructed as follows:

Construct Table 4, consisting of three columns: F, G and P.
Insert the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, 34 into the F-column.
Insert into the G-column the generalized Fibonacci 1-sequence 305, 305, 610, ..., 10370, which is formed from the first multiplier 305 according to the Fibonacci recurrence G_i = G_(i-1) + G_(i-2).
Mark (here with a slash) all the F-numbers whose sum gives the second multiplier (41 = 34 + 5 + 2).
Mark the G-numbers 610, 1525, 10370 corresponding to the marked F-numbers and rewrite them into the P-column.
Summing all the P-numbers, 610 + 1525 + 10370, we get the product 41 × 305 = 12505.

This multiplication algorithm is easily generalized to the case of the Fibonacci p-codes (42).

Table 4. Example of the Fibonacci multiplication (41 × 305).

  F        G        P
  1        305
  1        305
 /2        610      610
  3        915
 /5        1525     1525
  8        2440
  13       3965
  21       6505
 /34       10370    10370

  41 = 34 + 5 + 2            41 × 305 = 12505
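The Table 4 scheme is easy to reproduce in code. The sketch below uses assumed details (a greedy, Zeckendorf-style selection of the F-numbers) and is not claimed to be the author's exact procedure; it builds the F and G columns and sums the G entries whose F entries decompose the second multiplier.

```python
def fibonacci_multiply(a, b):
    """Return a * b by the F/G/P table method: decompose b over Fibonacci numbers."""
    F = [1, 1]              # F column: 1, 1, 2, 3, 5, ...
    G = [a, a]              # G column: a, a, 2a, 3a, 5a, ...
    while F[-1] <= b:
        F.append(F[-1] + F[-2])
        G.append(G[-1] + G[-2])
    product, rest = 0, b
    # Greedy (Zeckendorf-style) selection of F-numbers summing to b;
    # the corresponding G-numbers are copied to the P column and summed.
    for f, g in zip(reversed(F), reversed(G)):
        if f <= rest:
            product += g
            rest -= f
    assert rest == 0
    return product

print(fibonacci_multiply(305, 41))   # 12505, as in Table 4 (41 = 34 + 5 + 2)
```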
8.2.8. Fibonacci division

We can apply the above Egyptian method of division to construct the algorithm of the Fibonacci division. An example of the Fibonacci division in the Fibonacci 1-code (45) is described in the book [17] and the article [44].

8.3. A conception of a Fibonacci high-reliability arithmetical processor based on the basic micro-operations

8.3.1. Checking the basic micro-operations

The basic idea of designing a self-checking Fibonacci processor consists in the following: it is necessary to develop an effective system of checking the basic micro-operations in the process of their fulfilment. Let us demonstrate a possibility of realizing this idea by using the above basic micro-operations (convolution, devolution, replacement and absorption), used in the noise-immune Fibonacci arithmetic.

We again pay attention to the 'technical' peculiarity of the basic micro-operations: in the register interpretation, each micro-operation may be realized by means of the inversion of the flip-flops involved in the micro-operation, that is, each micro-operation is realized technically by flip-flop switching.

Let us evaluate the potential ability of the basic micro-operations to detect errors which may appear during their realization. As is well known, the potential error-detection ability is determined by the relationship between the number of detectable errors and the general number of all possible errors. Let us explain the essence of our approach to the detection of errors in the micro-operations on the example of the micro-operation of convolution:

011 ⇒ 100. (56)

The convolution is fulfilled over the 3-digit binary code combination (56). It is clear that there are 2³ = 8 possible transitions which can arise at the fulfilment of the micro-operation (56), and only one of them, given by (56), is the correct, that is, unmistakable, transition. The code combinations

{011, 100}, (57)

which are involved in the unmistakable transition (56), are called the allowed code combinations for the convolution. All the remaining code combinations, which can appear during the convolution (56),

{000, 001, 010, 101, 110, 111}, (58)

are the prohibited code combinations. The idea of the error detection consists in the following: if during the fulfilment of the micro-operation (56) one of the prohibited code combinations (58) appears, this fact is the indication of an error. Note that the erroneous transition

011 ⇒ 011, (59)

in which the allowed code combination 011 passes into the same allowed code combination 011, must be interpreted as the case of an undetectable error. Let us consider the different erroneous situations which can appear at the fulfilment of the micro-operation (56):

011 ⇒ {011, 000, 001, 010, 101, 110, 111}. (60)

Among them, only the erroneous transition (59) is undetectable, because the code combination 011 is an allowed code combination. All the remaining erroneous transitions in (60) are detectable.

Let us analyze the transition (59) from the arithmetical point of view. The essence of the erroneous transition (59) consists in the repetition of the same code combination 011. From the arithmetical point of view, this transition does not destroy the numerical information and does not influence the outcome of the arithmetical operations. Hence, the erroneous transition (59) does not belong to the errors of catastrophic character; it can at most delay the data processing. All the remaining erroneous transitions in (60) destroy the numerical information and hence can lead to errors of catastrophic character. The main conclusion following from this consideration is that the set of the 'catastrophic' code combinations in (58) coincides with the set of the detectable code combinations in (60). This means that all the 'catastrophic' transitions for the convolution are detectable.
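This enumeration can be checked directly. The fragment below is an illustrative sketch (the weight triple 5, 3, 2 is just one possible choice of three adjacent Fibonacci weights); it lists all eight possible outcomes of the convolution 011 ⇒ 100, marks which outcomes preserve the encoded value and which fall into the prohibited set (58), and confirms that every value-destroying outcome is detected.

```python
from itertools import product

# Three adjacent digit weights of the Fibonacci 1-code: the highest weight is
# the sum of the two lower ones (here 5 = 3 + 2), so 011 and 100 encode the same value.
WEIGHTS = (5, 3, 2)

def value(bits):
    return sum(b * w for b, w in zip(bits, WEIGHTS))

cause   = (0, 1, 1)        # the combination before the convolution
correct = (1, 0, 0)        # the only unmistakable outcome

for outcome in product((0, 1), repeat=3):
    preserves_value = value(outcome) == value(cause)       # non-catastrophic outcome
    detected = outcome not in (cause, correct)              # falls into the prohibited set (58)
    # Every outcome that destroys the numerical value is detected:
    assert preserves_value or detected
    print(outcome, 'value preserved' if preserves_value else 'catastrophic',
          '| detected' if detected else '| not detected')
```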
We emphasize once again that the undetectable transition (59) does not destroy numerical information and therefore, from the arithmetical point of view, cannot belong to the erroneous transitions of catastrophic character; this undetectable transition only delays the data processing. Thus, it follows from this consideration that, by using this idea, we can design a computer device for the fulfilment of the convolution with the absolute (i.e. 100%) potential ability to detect all catastrophic transitions which may appear at the realization of the convolution. A similar conclusion holds for the other basic micro-operations. The fulfilment of any data processing algorithm in the Fibonacci processor, based on the basic micro-operations, is reduced to the sequential fulfilment of certain basic micro-operations at each computation step. Because the checking circuits for the realization of the basic micro-operations have this 'absolute' error-detecting ability, it follows that it is possible to design an arithmetical self-checking Fibonacci processor, which has the 'absolute' (100%) error-detection ability for the 'catastrophic' errors arising in the high-reliability Fibonacci processor at the flip-flops' switching.

8.3.2. The hardware realization of the Fibonacci high-reliability processor

The Fibonacci high-reliability processor is based on the principle of 'cause-effect' described in the article [45]. The essence of the principle consists in the following. The initial information (the 'cause'), which is subjected to the data processing, is transformed into the 'result' by some micro-operation. After that we transform the 'result' (the 'effect') back to the initial information (the 'cause') and check that the 'effect' fits its 'cause'. For example, at the fulfilment of the convolution for the binary combination 011 (the 'cause'), we get the new binary combination 100 (the 'effect'), which is the necessary condition for the fulfilment of the devolution. This means that the correct fulfilment of the convolution creates the condition for the devolution; analogously, the correct fulfilment of the devolution creates the condition for the convolution. It follows that the micro-operations of convolution and devolution check each other. These conclusions are true for all the basic micro-operations represented in the table (53).

In the register interpretation, the correspondence between the 'cause' and the 'effect' is established by using a 'checking flip-flop'. The 'cause' sets the checking flip-flop into the state 1, and the correct fulfilment of the micro-operation (the 'effect' fits its 'cause') resets the checking flip-flop into the state 0. If the 'effect' does not fit its 'cause' (the micro-operation is fulfilled incorrectly), the checking flip-flop remains in the state 1, which indicates an error. If we analyze the 'causes' and the 'effects' for every basic micro-operation, we can see that every 'effect' is the inversion of its 'cause', that is, all micro-operations can be realized by means of the inversion of the flip-flops involved in the micro-operation. The block diagram of the Fibonacci device for the realization of the 'cause-effect' principle is shown in Fig. 2. The device in Fig. 2 consists of the information and check registers, which are connected by means of the logic 'cause' and 'effect' circuits.
The code information, entering the information register through the 'Input', is analyzed by the logic 'cause' circuit.

Figure 2. The block diagram of the Fibonacci device for the realization of the principle of the 'cause-effect'.

Suppose that we need to fulfil the convolution for the binary combination in the information register. Let some flip-flops T(k-1), T(k), T(k+1) of the information register be in the state 011, that is, the condition for the convolution is satisfied for this group of flip-flops. Then the logic 'cause' circuit (in this example, the logic circuit for the convolution) writes the logic 1 into the corresponding flip-flop T(k) of the check register. The written logic 1 results in the inversion of the flip-flops T(k-1), T(k), T(k+1) of the information register via the feedback connection, that is, their new states are 100. This means that the condition for the devolution is satisfied for this group of flip-flops. Then the logic 'effect' circuit (in this example, the logic circuit for the devolution) analyzes the states of the flip-flops T(k-1), T(k), T(k+1) of the information register and resets the same flip-flop T(k) of the check register to the initial state 0. Resetting the flip-flop T(k) of the check register to the initial state 0 confirms that the 'cause' (011) fits its 'effect' (100), that is, the micro-operation of the convolution is correct.

Hence, if we get the code word 00...0 in the check register after the end of all micro-operations, this means that all 'causes' fit their 'effects', that is, all the micro-operations are correct. If the check register contains at least one logic 1 in some flip-flop, this means that at least one basic micro-operation is not correct. The logic 1's in the flip-flops of the check register produce the error signal 1 at the output 'Error' of the device in Fig. 2. The signal 1 at the output 'Error' prohibits the use of the data at the 'Output' of the Fibonacci device in Fig. 2.

The most important advantage of the 'cause-effect' check principle, realized in the Fibonacci device in Fig. 2, is the detection of an error at the moment of its appearance. The correction of an error in a micro-operation is realized by the repetition of this micro-operation. Hence, the above approach, based on the 'cause-effect' principle, permits the detection, and then the correction by repetition, of all 'catastrophic' errors arising at the moment of flip-flop switching, with a 100% guarantee. A more detailed description of all the benefits of this principle of implementation of the high-reliability Fibonacci processor is given in the article [45]. The article stresses that 'this approach can lead to designing a new class of high-reliable computers and processors, which provide a significant increase of the reliability of information processing in computer systems and the creation of new methods of information processing.'
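A toy simulation helps to see how the checking flip-flop behaves. The following Python fragment is an assumed, much simplified model (not a description of the actual hardware of Fig. 2): it sets a check bit when the 'cause' 011 is recognized, inverts the three flip-flops, and clears the check bit only if the 'effect' 100 is observed; an injected stuck flip-flop leaves the check bit at 1, signalling the error at the moment it appears.

```python
# A toy simulation of the 'cause-effect' check for one convolution: the cause
# 011 sets the checking flip-flop, the correct effect 100 resets it; a stuck
# flip-flop leaves it set, which indicates the error.

def convolution_with_check(bits, k, stuck_at=None):
    """Convolve positions k..k+2 of `bits`; return (new_bits, error_flag)."""
    check = 0
    if bits[k:k + 3] == [0, 1, 1]:        # the 'cause' circuit fires
        check = 1
        for j in range(k, k + 3):         # invert the three flip-flops
            if j != stuck_at:             # an optional injected fault
                bits[j] ^= 1
        if bits[k:k + 3] == [1, 0, 0]:    # the 'effect' circuit confirms
            check = 0
    return bits, check                    # check == 1 signals an error

print(convolution_with_check([0, 1, 1, 0, 1], 0))               # correct: error flag 0
print(convolution_with_check([0, 1, 1, 0, 1], 0, stuck_at=1))   # faulty flip-flop: flag 1
```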
8.3.3. US research in the field of Fibonacci computers

It is necessary to note that, along with the Soviet studies on 'Fibonacci arithmetic' and 'Fibonacci computers', similar studies were carried out in the same period in the United States (University of Maryland) under the scientific supervision of Prof. Robert Newcomb [53-57]. The studies of the American, Soviet and Ukrainian scientists in this field confirm that, since the 1970s, the notions of 'Fibonacci code', 'Fibonacci arithmetic' and 'Fibonacci computer' have become widely known in the world scientific and technical literature.

8.4. Codes of the golden p-proportions

8.4.1. Definition

The binary code of a real number A, which is determined by the formulas (2) A = Σ_i a_i·2^i (i = 0, ±1, ±2, ±3, ...; a_i ∈ {0, 1}) and (3) 2^i = 2^(i-1) + 2^(i-1) = 2·2^(i-1), admits the following generalization. Let us consider the set of the following standard line segments:

{Φ_p^n, Φ_p^(n-1), ..., Φ_p^0 = 1, Φ_p^(-1), ..., Φ_p^(-k), ...}, (61)

where Φ_p is the golden p-ratio, the positive root of the golden p-ratio equation

x^(p+1) = x^p + 1. (62)

By using (61), we get the following positional method of real number representation, introduced in [47-49]:

A = Σ_i a_i·Φ_p^i, (63)

where A is a positive real number, a_i ∈ {0, 1} is the bit of the ith digit, Φ_p^i is the weight of the ith digit, Φ_p is the base of the numeral system (63), i = 0, ±1, ±2, ±3, ..., and p = 0, 1, 2, 3, ... is a given integer. The general theory of the codes of the golden p-proportions (63) is described in the book [24].

8.4.2. Partial cases of the codes of the golden p-proportions

First of all, we note that the formula (63) sets forth a theoretically infinite number of binary positional representations of real numbers, because every p = 0, 1, 2, 3, ... 'generates' its own method of binary positional number representation of the form (63). The base of a numeral system is one of the fundamental notions of positional numeral systems. The analysis of the sum (63) shows that the golden p-ratio Φ_p, the positive root of the golden p-ratio equation (62), is the base of the numeral system (63). Note that, except for the case p = 0 (Φ_0 = 2), all the remaining golden p-proportions Φ_p are irrational numbers. It follows that the codes of the golden p-proportions (63) are binary numeral systems with the irrational bases Φ_p for the cases p > 0. Note that for the case p = 0 the codes of the golden p-proportions (63) are reduced to the classic binary code (2), and for the case p = 1 to Bergman's system (1). It is clear that Bergman's system (1) has the most practical significance, because this numeral system with the irrational base Φ = (1 + √5)/2 is the simplest for technical realization.
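A short numerical sketch makes the base Φ_p tangible. The fragment below is illustrative only; the initial values of the Fibonacci p-numbers are an assumed seed, chosen so that the recurrence F_p(k) = F_p(k-1) + F_p(k-p-1) applies. It computes Φ_p as the positive root of x^(p+1) = x^p + 1 and checks that the ratio of adjacent Fibonacci p-numbers, as in (50)-(51), approaches it.

```python
def golden_p_ratio(p, iterations=200):
    """Positive root of x**(p+1) = x**p + 1, via the fixed-point iteration x = (x**p + 1)**(1/(p+1))."""
    x = 2.0
    for _ in range(iterations):
        x = (x**p + 1) ** (1.0 / (p + 1))
    return x

def fibonacci_p_numbers(p, n):
    """Fibonacci p-numbers with F_p(k) = F_p(k-1) + F_p(k-p-1) and an assumed all-ones seed."""
    f = [1] * (p + 1)
    while len(f) < n:
        f.append(f[-1] + f[-(p + 1)])
    return f

for p in range(4):
    phi_p = golden_p_ratio(p)
    f = fibonacci_p_numbers(p, 60)
    print(p, round(phi_p, 6), round(f[-1] / f[-2], 6))
# p = 0 gives 2.0 (the binary base), p = 1 gives 1.618034 (the golden ratio),
# and in every row the ratio of adjacent Fibonacci p-numbers matches Phi_p.
```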
8.5. Application of the codes of the golden p-proportions in self-correcting analog-to-digital and digital-to-analog converters

8.5.1. The 'binary' resistive divisor

In measurement practice, the so-called resistive divisors, intended for the division of electric currents and voltages in a given ratio, are widely used. One variant of such a divisor is shown in Fig. 3.

Figure 3. The resistive divisor.

The resistive divisor in Fig. 3 consists of the 'horizontal' resistors of the kinds R1 and R3 and the 'vertical' resistors R2. The resistors of the divisor are connected between themselves by the 'connecting points' 0, 1, 2, 3, 4. Each point connects three resistors, which together form a resistor section. Note that Fig. 3 shows a resistive divisor consisting of five resistor sections; in general, the number of resistor sections can be extended ad infinitum.

First of all, we note that the parallel connection of the resistors R2 and R3 to the right of the 'connecting point' 0 and to the left of the 'connecting point' 4 can be replaced by an equivalent resistor whose resistance is calculated according to the well-known law for the parallel connection of two resistors R2 and R3 (see Fig. 3):

R_e1 = (R2 × R3)/(R2 + R3). (64)

Then it is easy to calculate the equivalent resistance of the resistor section to the right of the 'connecting point' 1 and to the left of the 'connecting point' 3:

R_e2 = R1 + R_e1. (65)

Depending on the choice of the resistance values of the resistors R1, R2, R3, we can get different coefficients of current or voltage division. Let us consider the 'binary' resistive divisor, corresponding to p = 0. For this case, the resistive divisor consists of the following resistors: R1 = R; R2 = R3 = 2R, where R is some standard resistance value. For this case, the expressions (64) and (65) take the following values:

R_e1 = R; R_e2 = 2R. (66)

Then, taking into consideration (66), we can prove that the equivalent resistance of the 'binary' resistive divisor to the left or to the right of any 'connecting point' 0, 1, 2, 3, 4 is equal to 2R. This means that the equivalent resistance of the resistive divisor at the 'connecting points' 0, 1, 2, 3, 4 can be calculated as the resistance of the parallel connection of three resistors of value 2R. By using the electric circuit laws, we can calculate the equivalent resistance of the 'binary' resistive divisor at each 'connecting point' 0, 1, 2, 3, 4 as follows:

R_e3 = (2/3)R. (67)

Let us now connect the generator of a standard electric current I to one of the 'connecting points', for example, to the point 2. Then, according to Ohm's law, the following electric voltage appears at this point:

U = (2/3)R·I. (68)

Let us calculate the electric voltages at the 'connecting points' 3 and 1, which are adjacent to the point 2. It is easy to show that the voltage transmission coefficient between the adjacent 'connecting points' is equal to 1/2. This means that the 'binary' resistive divisor fits the binary system very well, and this fact is the cause of the wide use of the 'binary' resistive divisor of Fig. 3 in modern 'binary' digital-to-analog and analog-to-digital converters (DAC and ADC).
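These relations for the p = 0 (R-2R) case can be verified with a few lines of arithmetic. The fragment below is a plain numerical check, with an assumed helper parallel(); it reproduces the equivalent resistances (66)-(67) and the transmission coefficient 1/2 between adjacent connecting points.

```python
R = 1.0                         # the standard resistance value

def parallel(a, b):
    """Equivalent resistance of two resistors connected in parallel."""
    return a * b / (a + b)

# The 'binary' (R-2R) divisor: R1 = R, R2 = R3 = 2R.
R1, R2, R3 = R, 2 * R, 2 * R

Re1 = parallel(R2, R3)                   # (64)/(66): terminating section, equals R
Re2 = R1 + Re1                           # (65)/(66): one full section, equals 2R
Re3 = parallel(R2, parallel(Re2, Re2))   # (67): seen from an inner point, equals 2R/3

I = 1.0
U = Re3 * I                              # (68): voltage at the driven connecting point
U_adjacent = U * Re1 / (R1 + Re1)        # voltage divider towards a neighbouring point

print(Re1, Re2, Re3)                     # 1.0  2.0  0.666...
print(U_adjacent / U)                    # 0.5: the transmission coefficient is 1/2
```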
8.5.2. The 'golden' resistive divisors and their electric properties

Let us take the values of the resistors of the 'golden' resistive divisor in Fig. 3 as follows:

R1 = Φ_p^(-p)·R; R2 = Φ_p^(p+1)·R; R3 = Φ_p·R, (69)

where Φ_p is the golden p-ratio, p ∈ {0, 1, 2, 3, ...}. It is clear that the 'golden' resistive divisor in Fig. 3 defines an infinite number of different 'golden' resistive divisors, because every p 'generates' a new 'golden' resistive divisor. In particular, for the case p = 0 the value of the golden 0-ratio is Φ_0 = 2, and the 'golden' resistive divisor is reduced to the classic 'binary' resistive divisor based on the resistors R-2R. For the case p = 1, the resistors R1, R2, R3 in Fig. 3 take the following values:

R1 = Φ^(-1)·R = 0.618R; R2 = Φ²·R = 2.618R; R3 = Φ·R = 1.618R. (70)

Let us show that the 'golden' resistive divisors of Fig. 3 with the resistors R1, R2, R3, given by (69) and (70), have the following unique electric properties. To find these properties, we use the following fundamental mathematical relations for the golden p-proportions Φ_p:

Φ_p = 1 + Φ_p^(-p), (71)
Φ_p^(p+2) = Φ_p^(p+1) + Φ_p, (72)

which take the following forms for the cases p = 0 (Φ_0 = 2) and p = 1 (Φ_1 = Φ = (1 + √5)/2 ≈ 1.618), respectively:

p = 0: 2 = 1 + 1; 2² = 2 + 2; (73)
p = 1: Φ = 1 + Φ^(-1); Φ³ = Φ² + Φ. (74)

By using the identity (72), we can deduce the value of the equivalent resistance of the resistor circuit of the 'golden' resistive divisor in Fig. 3 to the left and to the right of the 'connecting points' 0 and 4. In the general case of p (p ≥ 1), the formula (64) looks as follows:

R_e1 = (R2 × R3)/(R2 + R3) = (Φ_p^(p+1)R × Φ_p·R)/(Φ_p^(p+1)R + Φ_p·R) = Φ_p^(p+2)R²/((Φ_p^(p+1) + Φ_p)R) = R. (75)

Note that we have simplified the formula (75) by using the mathematical identity (72). By using (65) and (71), we can calculate the value of the equivalent resistance R_e2 as follows:

R_e2 = Φ_p^(-p)R + R = (Φ_p^(-p) + 1)R = Φ_p·R. (76)

Thus, according to (76), the equivalent resistance of the resistive circuit of the 'golden' resistive divisor in Fig. 3 to the left or to the right of the 'connecting points' 0, 1, 2, 3, 4 is equal to Φ_p·R, where Φ_p is the golden p-proportion. This fact can be used for the calculation of the equivalent resistance R_e3 of the 'golden' resistive divisor at the 'connecting points' 0, 1, 2, 3, 4. In fact, the equivalent resistance R_e3 can be calculated as the resistance of the electrical circuit which consists of the parallel connection of the 'vertical' resistor R2 = Φ_p^(p+1)R and the two 'lateral' resistances Φ_p·R. But because, according to (75), the equivalent resistance of the parallel connection of the resistors R2 = Φ_p^(p+1)R and R3 = Φ_p·R is equal to R, the equivalent resistance R_e3 of the divisor at each 'connecting point' can be calculated by the formula:

R_e3 = (Φ_p·R × R)/(Φ_p·R + R) = Φ_p·R²/((Φ_p + 1)R) = R/(1 + Φ_p^(-1)). (77)

Note that for the case p = 0 (the 'binary' resistive divisor) we have Φ_0 = 2, and the expression (77) is reduced to (67). For the case p = 1, the formula (77) is reduced to the following formula:

R_e3 = R/(1 + Φ^(-1)) = R/Φ = Φ^(-1)·R. (78)

Let us calculate now the voltage transmission coefficient between the adjacent 'connecting points' of the 'golden' resistive divisor. For this purpose, we connect the generator of the standard electric current I to one of the 'connecting points', for example, to the point 2. Then, according to Ohm's law, the following electrical voltage appears at this point:

U = R·I/(1 + Φ_p^(-1)). (79)

Note that for the case p = 0 we have Φ_0 = 2, and the formula (79) is reduced to the following formula:

U = R·I/(1 + 2^(-1)) = R·I/(1 + 1/2) = (2/3)R·I, (80)

which coincides with the formula (68) for the 'binary' resistive divisor. Let us calculate now the electrical voltages at the adjacent 'connecting points' 3 and 1. The voltages at the points 3 and 1 can be calculated as the result of applying the voltage U, given by (79), to the resistive circuit which consists of the series connection of the 'horizontal' resistor R1 = Φ_p^(-p)·R and the resistive circuit with the equivalent resistance R. Then the electrical current I′ which appears in the resistive circuit to the left and to the right of the 'connecting point' 2 is equal to

I′ = U/(R1 + R) = U/((Φ_p^(-p) + 1)R) = U/(Φ_p·R). (81)

If we multiply the electrical current (81) by the equivalent resistance R, we get the following value of the electrical voltage at the adjacent 'connecting points' 3 and 1:

U/Φ_p. (82)

This means that the voltage transmission coefficient between the adjacent 'connecting points' of the 'golden' resistive divisor in Fig. 3 is equal to the reciprocal of the golden p-proportion Φ_p! Thus, the 'golden' resistive divisors of Fig. 3, based on the golden p-proportions Φ_p, are quite real electrical circuits. It is clear that the above theory of the 'golden' resistive divisors [16] can become a new source for the development of 'digital metrology' and of analog-to-digital and digital-to-analog converters.
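The chain (75)-(82) can be confirmed numerically for several values of p. The sketch below is illustrative; it reuses a small fixed-point solver for Φ_p, which is an assumption rather than anything prescribed by the article. It checks that R_e1 = R, R_e2 = Φ_p·R and that the node-to-node voltage transmission coefficient equals 1/Φ_p.

```python
def golden_p_ratio(p, iterations=200):
    """Positive root of x**(p+1) = x**p + 1."""
    x = 2.0
    for _ in range(iterations):
        x = (x**p + 1) ** (1.0 / (p + 1))
    return x

def parallel(a, b):
    return a * b / (a + b)

R = 1.0
for p in range(4):
    phi = golden_p_ratio(p)
    R1, R2, R3 = phi**(-p) * R, phi**(p + 1) * R, phi * R    # resistor values (69)
    Re1 = parallel(R2, R3)                                   # (75): should equal R
    Re2 = R1 + Re1                                           # (76): should equal Phi_p * R
    k = Re1 / (R1 + Re1)                                     # transmission coefficient, cf. (81)-(82)
    print(p, round(Re1 / R, 6), round(Re2 / (phi * R), 6), round(k * phi, 6))
# Every printed ratio is 1.0: Re1 = R, Re2 = Phi_p R and k = 1/Phi_p for each p.
```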
Note that the above theory of the 'golden' resistive divisor was described for the first time in the author's 1978 article [50].

8.5.3. Self-correcting 'golden' ADC

There is a problem of guaranteeing the temperature stability and the long-term stability of high-reliability control systems. Because ADC and DAC are very important devices of high-reliability control systems for many complicated technological objects, designing self-correcting ADC and DAC is one of the most important areas of application of the Fibonacci and golden ratio codes. While faults and failures of the digital components of computers and microprocessors (e.g. flip-flops and logic gates) are the main cause of unreliability of digital systems, the deviations of the parameters of the analog elements of ADC and DAC from their nominal values are the main cause of the informational instability of measurement systems. These deviations depend on various internal and external factors (ageing of elements, temperature influences, technological errors, etc.) and are usually slow functions of time. In the design of precise measurement systems, there is a need to relax the requirements on the technological accuracy of the analog elements and to eliminate such difficult technological procedures as laser trimming of the analog elements. This problem is solved by the application of the principle of self-correction. The Fibonacci and golden proportion codes allow applying the principle of self-correction to improve the accuracy and metrological stability of ADC and DAC. In the realization of the 'golden' and Fibonacci self-correcting ADC and DAC, the most important advantage is the correction of the non-linearity of the transfer function of the 'golden' resistive divisor.

In the Special Design Bureau 'Module' of the Vinnytsia Technical University (Ukraine), under the author's scientific leadership, several modifications of self-correcting ADCs and DACs were developed, in which a special procedure for correcting the deviations of the digit weights from their ideal values (Fibonacci numbers or the golden ratio powers) was realized. The self-correcting 17-digit ADC, based on the Fibonacci code, was one of the best engineering developments designed and produced in the Special Design Bureau 'Module' [51, 52] (Fig. 4).

Figure 4. 17-digit self-correcting ADC.

The ADC in Fig. 4 had the following technical parameters: number of digits: 18 (17 numerical digits and one sign digit); conversion time: 15 ms; total error: 0.006%; linearity error: 0.003%; frequency range: 25 kHz; operating temperature range: 20 ± 30°C. The correction system built into the ADC allows correcting the zero drift and the linearity of the AD-conversion, which is fulfilled by traditional methods, and, most importantly, correcting the deviations of the digit weights from their nominal values (Fibonacci numbers or powers of the golden proportion). According to the opinion of well-known Soviet metrological firms, the Soviet electronic industry did not produce ADCs with such high technical parameters at that time.
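The essence of digit-weight self-correction can be illustrated with a deliberately simplified sketch (this is an assumption-laden toy model, not the procedure used in the SDB 'Module' converters): if the actual analog weights of the digits deviate slightly from the nominal Fibonacci numbers, decoding the produced code word with the nominal weights gives a noticeable error, while decoding it with the measured (calibrated) weights recovers the converted value; the self-correction amounts to using the measured weights in place of the nominal ones.

```python
import random

# Nominal digit weights of a 10-digit Fibonacci 1-code word (MSB first).
def fib_weights(n):
    f = [1, 1]
    while len(f) < n:
        f.append(f[-1] + f[-2])
    return list(reversed(f[:n]))

random.seed(1)
nominal = fib_weights(10)
# The 'real' analog weights drift by up to +/-1% (ageing, temperature, tolerances).
actual = [w * (1 + random.uniform(-0.01, 0.01)) for w in nominal]

code = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]          # some code word held in the converter

produced = sum(b * w for b, w in zip(code, actual))             # the analog value it really represents
decoded_nominal    = sum(b * w for b, w in zip(code, nominal))
decoded_calibrated = sum(b * w for b, w in zip(code, actual))   # weights known from calibration

print(round(produced, 3), round(decoded_nominal, 3), round(decoded_calibrated, 3))
# The calibrated decoding matches `produced` exactly; the nominal decoding does not.
```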
9. CONCLUSIONS, BASIC CONCEPTS AND THE MAIN SCIENTIFIC RESULTS

Mission-critical applications. At the present time, computer science and digital metrology are passing to a new stage of their development, the stage of designing computing and measuring systems for mission-critical applications. This puts forward new requirements for ensuring the informational reliability of such systems. The most important requirement is to prevent the occurrence of 'false signals' at the output of mission-critical systems, which can lead to technological disasters.

'Philosophy' of error detection for the error-correcting codes. Modern methods of providing informational reliability of mission-critical systems (in particular, the use of error-correcting codes) do not always provide the required informational reliability of these systems. In particular, the theory of ECC is mainly focused on the detection and correction of errors of low multiplicity (e.g. single-bit and double-bit errors) as the most probable ones. With regard to errors of high multiplicity, the theory of ECC simply ignores them because of their low probability; this follows from the model of the 'symmetrical channel'. Such a 'philosophy' of error detection is absolutely unacceptable for mission-critical systems, because these undetectable errors can be the source of 'false signals' at the output of mission-critical systems, which can lead to enormous social and technological disasters.

Paradox of the Hamming code. The main paradox of the Hamming code and its analogs (for example, the Hsiao code) consists in the fact that the Hamming and Hsiao codes perceive many-bit errors of odd multiplicity (3, 5, 7, 9, ...) as single-bit errors and, in these cases, begin a 'false correction', adding new errors to the erroneous code word. That is, in this case the Hamming and Hsiao codes turn into anti-ECC, because they 'ruin' the Hamming and Hsiao code words. This 'paradoxical' property of the Hamming and Hsiao codes is well known to experts in the field of ECC, but many consumers do not always know about it. In such cases, the main argument offered to customers is that errors of large multiplicity are unlikely, but such arguments are unacceptable for mission-critical applications.

Row hammer effect. The row hammer effect is a new phenomenon in the field of electronic memory. The main reason for this phenomenon is the microminiaturization of electronic memory, which leads to mutual electrical interaction between nearby memory rows. This interaction alters the contents of nearby memory rows that were not addressed in the original memory access. No effective methods of fighting the row hammer effect have been proposed until now. Possibly, the only reasonable proposal is to introduce restrictions on the microminiaturization of electronic memory. But then the question arises: how should nano-electronic memory be designed?

'Trojan horse' of the binary system. The prominent American scientist, physicist and mathematician John von Neumann (1903-1957), together with his Princeton colleagues Goldstine and Burks, after a careful analysis of the strengths and weaknesses of the first electronic computer ENIAC, gave strong preference to the binary system as a universal way of coding data in electronic computers. However, this proposal contains in itself a great danger for the case of mission-critical systems. The classical binary code has zero code redundancy, which excludes any possibility of detecting errors in computer structures. This danger was called the 'Trojan horse' of the binary system by the Russian academician Yaroslav Khetagurov. Because of the 'Trojan horse' phenomenon, humanity becomes a hostage to the binary system for the case of mission-critical applications.
From here follows the conclusion that the binary system is unacceptable for designing computing and measuring systems for mission-critical applications.

Bergman's system, introduced in 1957 by the 12-year-old American wunderkind George Bergman, is an unprecedented case in the history of mathematics. The mathematical discovery of the young American mathematician returns mathematics to the Babylonian positional numeral systems, that is, to the initial period in the development of mathematics, when numeral systems and the rules of performing basic arithmetic operations stood at the center of mathematics. But the most important fact is that the famous irrational number Φ = (1 + √5)/2 (the golden ratio) is the base of Bergman's system, which puts the irrational numbers in the first position among the numbers. It can be argued that Bergman's system is the greatest modern mathematical discovery in the field of numeral systems, which changes our ideas about numeral systems and alters both number theory and computer science.

The 'golden' number theory and new properties of natural numbers are the first important consequence following from Bergman's system. For many mathematicians in the field of number theory, it is a great surprise that new properties of natural numbers (the Z-property, the D-property, the F-code, the L-code) were discovered in the 21st century, that is, 2.5 millennia after the writing of Euclid's Elements, in which the systematic study of the properties of natural numbers started. Bergman's system is the source of the 'golden' number theory, which once again emphasizes the fundamental nature of the mathematical discovery of George Bergman.

The ternary mirror-symmetrical numeral system and the new ternary mirror-symmetrical arithmetic are the main applied scientific results following from Bergman's system. These results alter our ideas about ternary numeral systems. The property of mirror symmetry is the main checking property, which allows detecting errors in all arithmetical operations.

Fibonacci p-codes and the Fibonacci arithmetic based on the basic micro-operations. The new computer arithmetic consists in the sequential execution of the so-called 'basic micro-operations'. The errors are detected by a built-in error-detection device simultaneously with the execution of the micro-operations, at the moment of their occurrence, which ensures the high informational reliability of the arithmetical device for mission-critical applications.

Codes of the golden p-proportions, 'golden' resistive divisors and self-correcting ADC and DAC. The codes of the golden p-proportions with the base Φ_p (the positive root of the algebraic equation x^(p+1) − x^p − 1 = 0, p = 0, 1, 2, 3, ...) are a wide generalization of the binary system (p = 0) and of Bergman's system (p = 1). The 'golden' resistive divisors, based on the golden p-proportions Φ_p, have unique electrical properties, which allow us to design self-correcting analog-to-digital and digital-to-analog converters. The metrological parameters of such ADCs and DACs remain unchanged in the process of temperature changes and element ageing, which is important for mission-critical applications.

The final conclusion. The above theory of numeral systems with irrational bases is a new direction in the field of coding theory, intended for increasing the informational reliability and noise immunity of specialized computing and measuring systems.
This direction does not set itself the task of replacing the classical binary system in those cases where the use of the binary system does not threaten the appearance of technological disasters and where informational reliability and noise immunity can be ensured by traditional methods. The main task of this direction is to prevent, or to significantly reduce the probability of, 'false signals' at the output of informational systems, which can lead to social or technological disasters. This scientific direction is at an initial stage, and its development can lead to new technical solutions in the field of computer science and digital metrology.

REFERENCES

1 Kharkevich, A.A. (1963) Fighting against Noises. State Publishing House of Physical and Mathematical Literature, Moscow (in Russian).
2 MacWilliams, F.J. and Sloane, N.J.A. (1978) The Theory of Error-Correcting Codes. North-Holland Publishing Company.
3 Mission critical. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Mission_critical.
4 Hamming code. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Hamming_code.
5 Hsiao, M.Y. (1970) A class of optimal minimum odd-weight-column SEC-DED codes. IBM J. Res. Develop., 14, 395-401.
6 Petrov, K.A. Investigation of the characteristics of noise-immune codes used in submicron static RAMs (in Russian). http://gigabaza.ru/doc/194118.html.
7 Row hammer. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Row_hammer.
8 Bashmakova, J.G. and Youshkevich, A.P. (1951) An Origin of the Numeral Systems. Encyclopedia of Elementary Arithmetics. Book 1. Arithmetic. Gostekhizdat, Moscow, Leningrad (in Russian).
9 Von Neumann architecture. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Von_Neumann_architecture.
10 Khetagurov, J.A. (2009) Ensuring the national security of real-time systems. BC/NW, Vol. 2, 11.1. http://network-journal.mpei.ac.ru/cgi-bin/main.pl?l=ru&n=15&pa=11&ar=1 (in Russian).
11 Kautz, W. (1966) Error-Correcting Codes and Their Implementation in Digital Systems. In Methods of Introducing Redundancy for Computing Systems (transl. from English). Soviet Radio, Moscow (in Russian).
12 Tolstyakov, V.S., Nomokonov, V.N., Kartsovsky, M.G. et al. (1972) Detection and Correction of Errors in Discrete Devices (ed. V.S. Tolstyakov). Soviet Radio, Moscow (in Russian).
13 Bergman, G. (1957) A number system with an irrational base. Math. Mag., 31. doi:10.2307/3029218. JSTOR 3029218.
14 Golden ratio base. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Golden_ratio_base.
15 Phi number system. From Wolfram MathWorld. http://mathworld.wolfram.com/PhiNumberSystem.html.
16 Knuth, D.E. (1997) The Art of Computer Programming. Volume 1. Fundamental Algorithms (3rd edn). Addison-Wesley, Massachusetts.
17 Stakhov, A.P. (2009) The Mathematics of Harmony. From Euclid to Contemporary Mathematics and Computer Science (assisted by Scott Olsen). World Scientific, New Jersey, London, Singapore.
18 Stakhov, A.P. (2002) Brousentsov's ternary principle, Bergman's number system and ternary mirror-symmetrical arithmetic. Comput. J., 45, 221-236.
19 Stakhov, A.P. (2015) The 'golden' number theory and new properties of natural numbers. Br. J. Math. Comput. Sci., 11, 1-15.
20 Stakhov, A.P. (2016) The importance of the golden number for mathematics and computer science: exploration of the Bergman's system and the Stakhov's ternary mirror-symmetrical system (numeral systems with irrational bases). Br. J. Math. Comput. Sci., 18, 1-34.
21 Pospelov, D.A. (1970) Arithmetic Foundations of Computers. High School, Moscow (in Russian).
22 Polya, G. (1962, 1965) Mathematical Discovery. On Understanding, Learning and Teaching Problem Solving, Vols. I and II. Ishi Press, New York, London.
23 Stakhov, A.P. (1977) Introduction into Algorithmic Measurement Theory. Soviet Radio, Moscow (in Russian).
24 Stakhov, A.P. (1984) Codes of the Golden Proportion. Radio and Communication, Moscow (in Russian).
25 Stakhov, A. (1972) Synthesis of Optimal Algorithms for Analog-to-Digital Conversion. Doctoral Thesis, Kiev Institute of Civil Aviation Engineers (in Russian).
26 Vorobyov, N.N. (1961) Fibonacci Numbers. Nauka, Moscow (in Russian).
27 Hoggatt, V.E. (1969) Fibonacci and Lucas Numbers. Houghton-Mifflin, Palo Alto, CA.
28 Koshy, T. (2017) Fibonacci and Lucas Numbers with Applications (2nd edn). John Wiley & Sons, Inc.
29 Stakhov, A.P. (1974) Redundant binary positional numeral systems. In Homogenous Digital Computer and Integrated Structures, No. 2. Taganrog Radio University (in Russian).
30 Stakhov, A.P. (1975) A use of natural redundancy of the Fibonacci number systems for computer systems control. Automation and Computer Systems, No. 6 (in Russian).
31 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. US Patent No. 4187500.
32 Device for Reduction of p-Fibonacci Codes to the Minimal Form. US Patent No. 4290051.
33 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. England Patent No. 1543302.
34 Device for Reduction of p-Fibonacci Codes to the Minimal Form. England Patent No. 2050011.
35 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Germany Patent No. 2732008.
36 Device for Reduction of p-Fibonacci Codes to the Minimal Form. Germany Patent No. 2921053.
37 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Japan Patent No. 1118407.
38 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. France Patent Nos. 7722036 and 2359460.
39 Device for Reduction of p-Fibonacci Codes to the Minimal Form. France Patent Nos. 7917216 and 2460367.
40 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Canada Patent No. 1134510.
41 Device for Reduction of p-Fibonacci Codes to the Minimal Form. Canada Patent No. 1132263.
42 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. Poland Patent No. 108086.
43 Reduction Method of p-Fibonacci Code to the Minimal Form and Device for its Realization. DDR Patent No. 150514.
44 Stakhov, A.P. (2016) Fibonacci p-codes and codes of the golden p-proportions: new informational and arithmetical foundations of computer science and digital metrology for mission-critical applications. Br. J. Math. Comput. Sci., 17, 1-49.
45 Luzhetsky, V.A., Stakhov, A.P. and Wachowski, V.G. (1989) Noise-Immune Fibonacci Computers. In the brochure 'Noise-Immune Codes. Fibonacci Computer'. Knowledge, Moscow. Series 'New Life, Science and Technology' (in Russian).
46 Ancient Egyptian mathematics. From Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Ancient_Egyptian_mathematics#Multiplication_and_division.
47 Stakhov, A.P. (1978) Fibonacci and 'golden' ratio codes. In Fault-Tolerant Systems and Diagnostics FTSD-78, Gdansk.
48 Stakhov, A.P. (1980) The golden mean in the digital technology. Autom. Comput. Syst., No. 1, 27-33 (in Russian).
49 Stakhov, A.P. (1981) Perspectives of the use of numeral systems with irrational bases in the technique of analog-to-digital and digital-to-analog conversion. Measurements, Control, Automation, Moscow, No. 6 (in Russian).
50 Stakhov, A.P. (1978) Digital metrology on the basis of the Fibonacci codes and golden proportion codes. In Contemporary Problems of Metrology. Machine-Building Institute, Moscow (in Russian).
51 Stakhov, A.P., Azarov, A.D., Moiseev, V.I., Martsenyuk, V.P. and Stejskal, V.Y. (1986) The 18-bit self-correcting ADC. Devices Control Syst., 1 (in Russian).
52 Stakhov, A.P., Azarov, A.D., Moiseev, V.I. and Stejskal, V.Y. (1989) Analog-to-Digital Converters on the Basis of Redundant Numeral Systems. In the brochure 'Noise-Immune Codes. Fibonacci Computer'. Knowledge, Moscow. Series 'New Life, Science and Technology' (in Russian), pp. 40-48.
53 Ligomenides, P. and Newcomb, R. (1984) Multilevel Fibonacci conversion and addition. Fibonacci Q., 22.
54 Ligomenides, P. and Newcomb, R. (1981) Equivalence of some binary, ternary, and quaternary Fibonacci computers. In Proc. Eleventh Int. Symp. on Multiple-Valued Logic, Norman, Oklahoma.
55 Ligomenides, P. and Newcomb, R. (1981) Complement representations in the Fibonacci computer. In Proc. Fifth Symp. on Computer Arithmetic, Ann Arbor, MI.
56 Newcomb, R. (1974) Fibonacci numbers as a computer base. In Proc. Second Inter-American Conf. on Systems and Informatics, Mexico City.
57 Hoang, V.D. (1979) A Class of Arithmetic Burst-Error-Correcting Codes for the Fibonacci Computer. PhD Thesis, University of Maryland.

Handling editor: Fionn Murtagh

© The British Computer Society 2017. All rights reserved.
