The accuracy of a floating point number can be expanded up to 39 times without widening the number itself, using Shewchuk's algorithm, the improvements by S. Boldo and Malcolm, the Sterbenz lemma, etc. Only the accuracy of the fraction/mantissa is improved this way, not the exponent range. This extra accuracy could be used, for example, to store other FP numbers inside the resulting 2028-bit-wide mantissa. A 64-bit FP number has a 52 (+1) bit mantissa; 39 × 52 bits is 2028 bits, and 31 × 64 bits is 1984 bits, so 31 other FP numbers fit inside just one "original mother number" (inside its mantissa). But the first 64-bit FP number inside the "original mother" can itself be used as a "second mother number", with 1984 − 64 = 1920 bits of available information space. This "second mother" can contain a "third mother number" with 1856 bits of information space (1920 − 64 bits), and so on, so hundreds of 64-bit FP numbers would fit inside just one "original mother floating point number", far more than the 31 that fit directly inside one 1984-bit mantissa. Blockchains, as used in Ethereum etc., require gigabytes of storage at their largest. Perhaps floating point numbers could be used as blockchains to replace ordinary blockchains. These mantissa expansion computations are done in software, not hardware, so they are slow; if dozens of expansions up to about 2000 bits of mantissa accuracy are needed, the computation is complicated and time-consuming, like in an ordinary blockchain, but here just one or a couple of 64-bit FP numbers are needed instead of gigabytes of information space. Just a couple of 64-bit FP numbers would make up a "floating point blockchain", saving the gigabytes of storage an ordinary blockchain needs. How close this FP blockchain is to an ordinary blockchain, or whether it is even near ordinary blockchain technique, I don't know.
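The error-free transformations behind Shewchuk-style expansions can be illustrated in a few lines. The sketch below is plain Python showing Knuth's TwoSum, the basic building block; it is not the Boldo/Malcolm refinements themselves. The pair (s, e) represents a + b exactly, so chaining such pairs grows the effective mantissa without widening any single 64-bit float:

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and
    s + e == a + b exactly, using only 64-bit float operations."""
    s = a + b
    bv = s - a                       # the part of b that made it into s
    e = (a - (s - bv)) + (b - bv)    # rounding error, recovered exactly
    return s, e

# The pair (s, e) is a two-term "expansion": together the two 64-bit
# floats carry about 2 x 53 bits of significand accuracy.
s, e = two_sum(1.0, 2.0 ** -60)
print(s)   # 1.0 -- the small term is lost in a single float
print(e)   # 2**-60 -- but recovered exactly in the error term
```

A 39-term expansion of this kind is what gives the 39 × 52-bit figure used throughout this text.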
This technique, one FP "original mother" number storing other FP numbers inside itself, those FP numbers storing further FP numbers inside them, and so on, could also store information other than blockchains; the numbers-inside-each-other principle could be used in a wide range of information storage applications, although computing several FP numbers with roughly 2000-bit mantissas is slow and requires a lot of computing power. But blockchain computing is slow and complicated too. If an ordinary gigabytes-long blockchain is replaced with just a few 64-bit or 80-bit FP numbers, the storage savings are huge. Computing these up-to-2000-bit-accuracy FP numbers is done in software, and software computing is up to 1000 times slower than hardware, so the complicated and slow computation of a "floating point blockchain" would be comparable to the usual blockchain techniques, while the information space it needs is extremely small compared to ordinary blockchains. The smaller the exponent of an FP number, the simpler and faster it is to compute; whether the exponent of a blockchain FP number should be small or large I don't know. If FP numbers are used as data storage only, the smaller the exponent the better and faster the computing; 52 bits of accuracy is enough for almost every application, so the exponent is fairly irrelevant. "Floating point adder design flow" 2011 by Michael Parker (Altera), the Chipdesignmag article "Between fixed point and floating point" by Dr. Gary Ray, unum/ubox computing by John Gustafson, unconventional number systems by Paul Tarau etc. could also be used. Unconventional number systems can store a lot of information in few bits, so this trick is not limited to floating point.
In a few bits, some number systems can store a lot of information (and that information can include a few bits that represent another similar high-efficiency number, etc., so the numbers-inside-each-other principle applies in other number systems too, not just floating point). "Gal's accuracy tables method revisited" can also be used instead of the Boldo/Malcolm method to improve floating point accuracy; Gal's (improved) method also suits logarithmic numbers. Analog computing can improve computing speeds in, for example, signal processing (Glenn Cowan 2005 "A VLSI analog math-coprocessor", "Floating-point analog-to-digital converter" 1999). Analog compression, like PAL TV's analog quadrature compression or MUSE HDTV compression, could perhaps be combined with digital compression. Albert Wegener has patented several ways to use hardware (floating point) data compression for his company Samplify Systems Inc. Because a 64-bit FP number has a maximum bit length of 2040 bits (2028-bit mantissa accuracy, 1 sign bit, 11-bit exponent), does computing such a number in hardware need a 2040-bit-wide floating point unit, or not? The largest FPUs are 512 bits wide nowadays. It is also possible to use an 80-bit FP number, whose enormous exponent range can also be exploited, or small FP numbers, 16- or 8-bit mini- and microfloats. An 8-bit FP number was used in the PlayStation / IBM processor; its small mantissa makes it fast to compute in software or hardware. It is also possible to go to smaller microfloats, 5- or 4-bit floating point numbers, as a "differential floating point" ("DFP"), related to floating point the way differential PCM (DPCM) relates to linear PCM. But mantissa expansion perhaps cannot be used on such extra-small microfloats. A 64-bit FP number has a 52-bit mantissa, so 39 × 52 is 2028 bits.
32 × 64 is 2048, and 32 suits a binary digital processor, so perhaps one FP number inside the FP "mother number" is missing its 20 least significant bits, or unnecessary exponents are dropped from previous FP numbers so that 20 bits can be saved, etc. If a 16-bit FP number has an 11-bit mantissa, 39 × 11 is 429 bits, and 27 × 16 is 432, so only 3 bits must be dropped. I am not a mathematician, so I don't know the best way to divide the information space (accuracy bits) between "mother numbers" that contain other FP numbers and the actual information FP numbers. Is it best (?) if about half of the roughly 2000-bit information space goes to "mother numbers" and half to "information numbers" (numbers that store other information rather than multiplying the information space)? I don't know. That applies if the FP numbers are used as information storage; if this floating-point-numbers-inside-each-other method is used only for a blockchain, perhaps all or almost all (?) FP numbers can be mother numbers. See also "Simplified floating point division and square root" by Viitanen. Also, if the mother numbers are 64-bit FP numbers, the information numbers could be 16-bit FP numbers, or the other way around; the smaller the mantissa, the faster its up-to-39-times expansion is to compute, I think. The information numbers can also be something other than FP numbers: integers can be used as well, for example 16-bit words chained together into 16 + 16 + 16 bit chains until about 1000 bits of available information space is used. The first 16 bits of the FP number's mantissa accuracy store the first 16-bit integer, the next 16 bits (16 + 16 = 32 bits) the second integer, and so on, until about 1000 bits of the FP number's precision are used. The other roughly 1000 bits of FP accuracy are used for mother numbers, the FP numbers that multiply the information space.
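The 16 + 16 + 16 bit chaining described above can be sketched with Python's arbitrary-precision integers standing in for the expanded mantissa. This is a sketch of the packing arithmetic only; it performs no mantissa expansion, and the 1000-bit budget is the figure assumed in the text:

```python
def pack_words(words, width=16):
    """Chain fixed-width integers into one big bit field,
    first word in the most significant position."""
    field = 0
    for w in words:
        assert 0 <= w < (1 << width)
        field = (field << width) | w
    return field

def unpack_words(field, count, width=16):
    """Recover the chained words from the bit field."""
    return [(field >> (i * width)) & ((1 << width) - 1)
            for i in reversed(range(count))]

budget = 1000            # information-space bits assumed in the text
capacity = budget // 16  # full 16-bit words that fit
data = [7, 65535, 1234]
assert unpack_words(pack_words(data), len(data)) == data
print(capacity)          # 62
```

So a 1000-bit information space holds 62 whole 16-bit words, with 8 bits left over.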
Every mother number can contain other mother numbers, but it also contains information numbers within its accuracy potential (the bits of accuracy available). All these mother numbers and information numbers sit inside one "original mother number". The last page of "Algorithms for quad-double precision floating-point arithmetic" by Y. Hida mentions that an FP number's mantissa can be expanded up to 39 times (mantissa accuracy improved 39×). Gal's accuracy tables "revisited" method can be used as well, or number systems other than floating point: the Neuraloutlet wordpress netpage on the "U-value number system", Rob Craigen at ResearchGate 2012 "Alternative number system bases?", Eric Hehner's quote notation, and at the MROB netpage (Robert) Munafo's PT number system and "Multiple base composite integer". Also "A new uncertainty-bearing floating point arithmetic" 2012 by Chengpu Wang. Unum/ubox computing can be combined with ordinary (standard) FP numbers; for example a 16-bit unum section plus a 64-bit FP number would make an 80-bit number. Hatami's pseudo-parallel delta sigma coding (improved by Johansson et al.) and multirate/multibit delta sigma (Hannu Tenhunen) can reach great accuracy, but require analog electronics. Multirate DSM (Tenhunen) reaches 1400-bit multirate; one-bit encoding with 100× (?) oversampling gives 14× "compression", where a floating point number gives 39× "compression". I don't know how much oversampling this 1400-bit multirate one-bit encoding needs. So delta sigma modulation, or Takis Zourntos's "oversampling without delta sigma", could also be used in this numbers-inside-each-other "compression" principle, not just floating point or exotic number systems. Analog delta sigma and floating point numbers can also be combined in a DAC or ADC. See "Direct digital synthesis using delta-sigma modulation" by Orino, "Numerical linear algebra using the RcppEigen package", "Numbers as streams of digits" by C. Frougny 2012, and Bohr compactification.
Dithering can be used to improve accuracy ("ADC look-up based post correction combined with dithering"). Delta sigma modulation uses some very efficient noise shaping techniques. Feedback amplitude modulation (FBAM) and Vector Phaseshaping Synthesis are used in sound synthesis but can perhaps be used in signal processing too? Multiple description coding with delta sigma modulation is sometimes used; perhaps floating point and other number systems could use it as well? The numbers-inside-each-other trick has been used in computer arithmetic long ago (for example in blockchains?); I just propose that now that up to 39× "compression" is possible in FP numbers, it could be used for data storage or blockchain. Data "compression" in satellite communication, space probe communication, internet use, magnetic memory etc. could benefit from this trick, not just blockchains. Even if only one mother number can be used, with none nested inside it, there is still about 32× compression available. The book "Analog signal processing" by van de Plassche (2013) has "analog floating point" in section 5, so floating point numbers and delta sigma modulation can be combined. Using either 24-bit multibit DSM, improved Hatami pseudo-parallel DSM (perhaps 16 bits), or multirate DSM up to 1400 bits (Hannu Tenhunen) brings information "compression": 24× in multibit, 16× in pseudo-parallel, and an unknown ratio in multirate DSM. Using a floating point number in DSM, 1 bit of information can have 16, 32 or 64 bits of floating point accuracy. Takis Zourntos's oversampling without delta sigma can be used too. Delta sigma has some very efficient noise shaping methods which can boost performance further still, up to 60 to almost 80 decibels, or 10 to 14 bits; because delta sigma is one bit, that means 10× to 14× extra bitrate compression. Odelta compression (Ossi Mikael Kalevo) is one method also.
But if the purpose is to make a blockchain, floating point expansion using the Boldo/Malcolm methods or Gal's accuracy tables "revisited" method is perhaps best. The information space savings can be almost 32×, so the blockchain needs almost 32 times less data storage than an ordinary blockchain. Delta (sigma) values could also be used for information expansion, though perhaps not in blockchains. Exotic number systems are another way, but how they can be used in blockchains I don't know. Examples are the "Magical skew number system" (Elmasry, Jensen, Katajainen), Paul Tarau's number systems, ternary number systems etc. There is the posit number system (John L. Gustafson); in Google Groups 11.8.2017 "Beating posits at their own game" is John G. Savard's (Quadibloc) EGU system, and in Google Groups 29.9.2017 "Issues with posits & how to fix them" is the "valid" number system. How those can be used in blockchains I don't know. The EGU number system is A-law compressed, so it is logarithmic; can it be used in analog hardware? Intel ADX is improved floating point. There are also Additive Quantization (Martinez), "Quantization and greed are good" Mroueh 2013, and the pulse group modulation patents by Clinton S. Hartmann. A posit / EGU / valid number system could be differential, the way DPCM is a version of PCM, in 4-8 bit "microposits", and/or use Gary Ray's reversed Elias gamma coding. Dithering and noise shaping can also dramatically increase small microfloat or microposit accuracy. Multiple description coding, multiple base composite integers, Bohr compactification etc. can be used. The same information saving that is possible in floating point numbers is available in exotic number systems also, perhaps much more. On the stackoverflow netpage "Algorithms based on number base systems?
(closed)" (2011) are many different number systems, and on stackexchange "Data structures: what is good binary encoding for phi-based balanced ternary?" (2012) is ternary. Others: "A new number system for faster multiplication" 1996 Hashemian, "ZOT-binary: new number system", "Number representations as purely data structures" 2002 Ivanovic, "Hierarchical residue number systems" Tomczak, index-calculus number systems, residue and redundant number systems etc. "New approach based on compressive sampling" Bonavolonta 2014 says that only 1/50th of a signal is needed to reconstruct it, a 50× compression ratio. The book "Dynamics of number systems - computation with arbitrary precision" by Petr Kurka covers how to use different number systems. Also "Dynamical directions in numeration" Barat 2015, "A redundant digit floating point system" Fahmy 2003, "Abelian complexity in minimal subshifts" 2011, "Optimal left-to-right binary signed digit recoding" Joye 2000. Finite State Entropy uses asymmetric number systems; see also "Asymmetric high-radix signed-digit number systems for carry-free addition". "New formats for computing with real numbers" Hormigo 2015, the adapted modular number system (AMNS). "Dealing with large datasets (by throwing away most of data)" by A. Heavens is "massive data compression". "Ordinals in HOL: transfinite arithmetic up to (and beyond)" by Norrish is perhaps nearer an "infinity computer" than ordinary computer arithmetic. "A floating-point ADC with variable gain pipeline stages" Kosonen, "Design and implementation of a self-calibrating floating-point analog-to-digital converter" 2004, "Arithmetic units for high performance digital signal processor" Lai 2004, "Multiple base number system" Dimitrov, the "algebraic integer quantization" concept, "Pairwise quantization", "Sparse composite quantization", "Universal rate-efficient scalar quantization", "Tournament coding of integers" Teuhola, "Liao style numbers of differential systems".
And "1-bit digital neuromorphic signal path with dither". Preferred numbers like the Renard series or E-series numbers could perhaps offer better integer accuracy than plain binary 1s and 0s. Perhaps integer or floating point accuracy could be improved with a number system different from binary, for example preferred numbers, which are integers, not logarithmic numbers, that follow a geometric series (Renard series and E-series numbers are integers). Perhaps the multiple base composite integer at the MROB netpage, or Dimitrov's multiple base number system, could be combined with preferred numbers etc. But the mantissa of a standard FP number can be expanded up to 39 times, and that is "information compression" enough, for example for use in a blockchain.

# Using floating point numbers as blockchain

On the Yehar netpage (Olli Niemitalo) is "Enumeration of functions", and on stackoverflow (2011) is "Algorithms based on number base systems? (closed)", which mentions a "meta-base enumeration n-tuple framework". Whether those two enumeration subjects have anything in common, I don't know. There is also the thing called a "promise library", or "futures and promises", which is about functions too (in computer arithmetic); whether those three things (promise library, enumeration of functions, and the meta-base enumeration n-tuple framework) have something in common I don't know. Addition to the previous text: if floating point numbers are used as general data storage, not a blockchain, and the mantissa of an FP number can be expanded up to 39 times (a 64-bit FP number has a 52-bit mantissa, so 39 times that is about 2000 bits), this roughly 2000-bit mantissa could be used so that 1000 mantissa bits go to 64-bit floating point "mother numbers" to increase the information space (up to 16 times, 16 × 64 = 1024 bits), and the other roughly 1000 bits form the "information space" where the actual information is. Those "second generation mother numbers" include "third generation mother numbers" inside themselves: for example, if the first mother number has 2024-bit mantissa accuracy, the second mother number has 2024 − 64 = 1960-bit accuracy. This second mother includes a "third mother" with 1960 − 64 = 1896-bit accuracy, and so on, until about 1000 bits of accuracy remain inside the mother numbers. I tried to calculate the available information space expansion possible this way, and about 15,000 times expansion is perhaps possible (whether my calculation was right or wrong, I don't know). A floating point number with 2000-bit accuracy is enormous accuracy already, and perhaps 1000 bits × 15,000 is an accuracy that no scientific computing needs.
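The generation-by-generation budget described above can be tabulated directly. This uses the text's example figures (2024 starting bits, 64 bits per nested number, a roughly 1000-bit floor); whether each nested number really gets its own expandable mantissa is the assumption being made here:

```python
# Each nested "mother number" costs 64 bits of the remaining budget;
# stop when about 1000 bits of accuracy are left, per the text.
bits, generations = 2024, 0
while bits - 64 >= 1000:
    bits -= 64
    generations += 1
print(generations, bits)   # 16 1000
```

So 16 generations of 64-bit mother numbers fit before the budget drops to 1000 bits, matching the 16 × 64 = 1024 figure in the text.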
Instead, this information space inside one single "first mother number" can be used as information storage, to store textual or visual information as bits (or a long line of bits, where for example 8- or 16-bit items are chained into 8 + 8 + 8 bits etc. or 16 + 16 + 16 bits etc. until the 1000 bits are full), which is the available information space in each expanded-accuracy floating point number. If one 64-bit floating point "first mother" number can store 15 million bits of information inside itself, that is enormous data compression, and the data is not even compressed, it is just represented as a floating point number. So this one 64-bit FP number could be stored in memory or sent through the internet etc., and when received it could be "read" and expanded back to its 15-million-bit information space form. Space probe communication, satellite communication, or even interstellar communication could use this method (if the aliens understand the method too and can read 64-bit floating point numbers). Of course lower accuracy floating point is also possible, for example about 1000-bit or 500-bit mantissa expansion, but the information space is then much lower: the maximum 39× mantissa expansion gives 39 × 52 bits = 2028 bits, plus an 11-bit exponent and 1 sign bit = 2040 bits, while a 10× expansion gives only 520 bits, plus 11 exponent bits and 1 sign bit = 532 bits. If 5 × 64 bits = 320 bits of the mantissa are used for mother numbers, 200 mantissa bits of information space remain. The available multiplication space is 5 mother numbers, but how is it counted? Is it 5 + 4 + 3 + 2 + 1 = 15 mother numbers inside one, or 5 × 1 + 4 × 2 + 3 × 3 + 2 × 4 + 1 × 5 = 35, or 5 × 4 + 4 × 3 + 3 × 2 + 2 × 1 = 40? Or some other combination? I don't know. This is supposed to be a simple calculation, but I just don't understand how the multiplication should be done.
If the multiplication is done for a 2040-bit number (2028 mantissa bits + 11 exponent bits + sign bit, i.e. the 39 × 52-bit mantissa expansion), with 1000 bits as information space and 16 × 64 bits (1024 bits) for mother numbers, and the multiplication of the information space is 16 + 15 + 14 + 13 + 12 etc., that is only about 136 times expansion, not more. The end result is 136 × 1000 bits = 136,000 bits in one 64-bit FP number, or about 2125 × 64 bits, so it is only about a 2125-times information space expansion overall, not the 15,000 times I first thought. Anyway, back to the 532-bit number: 200 bits are available as information space inside each mother number and 320 bits are used for mother numbers in the first 532-bit mother FP number, so the result is either 15, 35 or 40 × 200 bits (or something else; I just can't work it out although this is a simple expansion computation, I am very bad at math), so from 3000 to 8000 bits, all inside one 64-bit FP number whose mantissa is expanded ten times. A floating point library, which does FP computing on the processor's integer units, could be used to speed up the computations, either together with hardware floating point computing or alone, so that the mantissa expansions, which must be done in software anyway, run faster. The extreme case would be storing every value of a floating point number in a very large memory library: no computing would be needed, but the library would be terabytes or petabytes large, or even more. That would make it possible to use FP numbers as information storage and compression, with information stored in extremely "compressed" form inside a floating point number and no computing needed to get it out, because every FP value is stored in the gigantic memory library; the bad thing is that a simple library storing every possible FP value as memory bits is enormously large.
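One reading of the counting puzzle above, sketched in Python: if each successive generation contributes one fewer slot than the previous (16, then 15, then 14, ...), the total number of information-space slots is the triangular sum 16 + 15 + ... + 1. This only reproduces the arithmetic the text settles on; whether nested numbers really inherit expandable mantissas is the open assumption:

```python
# One interpretation of the "mother number" counting: generation k
# (k = 16 down to 1) contributes k slots, so the total is the
# triangular number 16 + 15 + ... + 1.
slots = sum(range(16, 0, -1))
print(slots)              # 136
info_bits = slots * 1000  # 1000 information bits per slot (text's figure)
print(info_bits)          # 136000
print(info_bits // 64)    # 2125 -> expansion relative to one 64-bit number
```

The analogous sum for the 532-bit example would be 5 + 4 + 3 + 2 + 1 = 15 slots, i.e. 15 × 200 = 3000 bits, the low end of the text's 3000 to 8000 bit range.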
Perhaps best would be combining all three methods into one: a floating point unit, together with integer computing of floating point numbers (algorithms computing FP numbers on the integer units), together with a memory library helping both. Then 2040-bit FP numbers could be computed suitably fast. Hardware with a 2040-bit-wide FPU would remove the need to compute the expansion in software (software computing is up to 1000 times slower than hardware computing); the largest FPUs are now 512 bits wide. Another way is quantum computing, an analog computer (the smallest analog circuits are only 16 nm now, so an analog FPU computing 2000-bit accuracy is perhaps possible), an analog or digital optical computer, or a fast optical FPU in a normal silicon VLSI CPU. Anyway, if information is stored using this floating-point-numbers-inside-each-other principle, high information density and low storage requirements would be possible; one cell phone could include in its memory a large amount of the content of the whole internet, so internet searches would not even be needed when everything is inside the phone's memory. (If the maximum information space expansion is only 2125 times, that is far too little to fit the whole internet in a cell phone's memory; I made that statement when I thought a million-fold or greater expansion was possible in a 2040-bit floating point number.) When those 2000-bit FP numbers are computed fast, they cannot be used as a blockchain, because the whole idea of a blockchain is complex computations that require much time and computing power. Processing could also be made faster by using not only fewer than 2000 bits in a 64-bit FP number, but a 32- or 16-bit FP number as the base; however the information space is then also very small, since a 32-bit FP has a 23-bit mantissa and a 16-bit FP a 10-bit mantissa before the 39× expansion. Actually 992 bits, not 1000, is best for the 64-bit FP information space, with 1024 bits for the mother numbers.
Another way to use floating point, not necessarily with mantissa expansion, is FP with a shared exponent. This 2000-bit FP uses a shared exponent by default, because those 992 or 1000 information bits, if divided into 16+16 or 32+32 bit sections, all share one 11-bit exponent and 1 sign bit. A 32-bit word could also be divided into three 9-bit sections plus a 5-bit exponent (or a 4-bit exponent and 1 sign bit), or 4 × 7 bits plus a 4-bit exponent. Those 9- or 7-bit mantissas could perhaps include a "hidden bit", making them effectively 10- and 8-bit mantissa values. Is it possible that the chained 16+16 or 32+32 bit values inside one 992-bit section all have a hidden bit, if they are not integer values but floating point numbers (inside one large floating point number)? Then they would effectively be 16-bit + 1 hidden bit = 17-bit and 32-bit + 1 hidden bit = 33-bit values, like IEEE standard FP numbers, which would increase the information space. Is that hidden bit the same (shared) between every small bit section in the 992-bit chain, or can each 16- or 32-bit part of the chain have its own hidden bit? I think a separate hidden bit for each 16- or 32-bit FP number inside the long bit chain is possible (the long bit chain being one 64-bit FP number's expanded mantissa, with the other 32- or 16-bit FP numbers inside it, chained 16 + 16 bits etc.). Perhaps a "promise library" could be used as a floating point library to increase FP computation speed and accuracy. GPUs are 64-bit now, so GPGPU computing of 64-bit FP is possible, but what about accuracy, if the GPU's FPUs are lower accuracy but still IEEE standard? Some laptop PCs have low-end 64-bit GPUs, but computing the extension to a 2000-bit mantissa, and then all the other FP numbers inside it, would take days or weeks on a tablet PC or smartphone. Home PCs can use GPGPU computing and efficient 64-bit-FPU GPUs.
There are "desktop supercomputers", Beowulf clusters, Aiyara clusters etc. A cheap desktop supercomputer with lots of GPGPU processing could perhaps handle these floating point computations ("Home PC outperforms a supercomputer in complex calculations"; googling "optical desktop supercomputer by 2020" brings many results). The FiRe Pattern computer is capable of seeing patterns, which is useful when searching for "true compact numbers", or for patterns in the Champernowne constant, if data compression is the goal. Desktop supercomputers cost several tens of thousands of dollars; the next step upwards is a real supercomputer like a Cray, priced from 0.5 million dollars for the cheapest model. Those perhaps could solve the 2000-bit FP number computation fast. Using unum/posit or some other method, not just plain ordinary IEEE standard floating point, is perhaps possible for the increased accuracy of a number; exponent (range) accuracy is not needed as it is in floating point numbers. The main point is that a 64-bit (or other bitwidth) number should have much more than 64 bits of accuracy, so that other numbers can be put inside it and increase the information space that way. Perhaps exotic number systems that have high accuracy measured in bits, but need only a small number of bits to represent it, can be used. Simultaneous fixed point and floating point computing uses the computer's fixed point integer capacity (with a floating point library) to do FP computing with integers alongside the real FPU, so computing those 2000-bit FP numbers is faster when both the FPU and integer-ALU floating point computing are used. Unum/posit is a version of FP, and different versions of FP computing exist that use similar principles to unum/posit (they seem to be simply different versions of the unum/posit principle).
The small chained values inside one 2000-bit FP number (the information space, which is either 1000 or 992 bits long, or some other bitwidth) can be "differential floating point": 4-, 6- or 8-bit FP or posit etc. microfloat values. Differential FP would be similar to DPCM/ADPCM, or to Takis Zourntos's "one bit without delta sigma", using FP instead of 4-bit etc. integer values. Differential logarithmic or other number systems beyond integer/FP can also be used, as can vector compression with FP/unum or other methods (like "Between fixed point and floating point" by Dr. Gary Ray, which uses data compression with floating point numbers and reversed Elias gamma coding), additive quantization (AQ), etc., to make the bitwidth smaller. Those data compression methods don't work in the numbers-inside-each-other principle, though: a data-compressed number cannot be compressed twice, so only one layer of numbers can benefit from data compression techniques. One can still use 4-8 bit differential floating point / posit computing etc. with data/vector compression techniques for the microfloats/microposits. The numbers-inside-each-other principle itself is not data compression; the values are just represented as floating point numbers, 64-bit, 32-bit, 16-bit or microfloats. Perhaps unum/posit or similar formats can also be used in the numbers-inside-each-other principle. Whenever a 64-bit or 32-bit etc. value is much more accurate than just 64 or 32 bits, numbers can be put inside each other and the information space increased substantially. I used 64-bit IEEE standard FP numbers, but if mantissa expansion can be done with unum/posit etc. numbers, they can be used too, as can exotic number systems like fractional, index-calculus (something like the APL programming language, but as a number system rather than a programming language; the Munafo PT number system at the MROB netpage comes close) or complex numbers: anything that has high accuracy with small bitwidth.
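The "differential" idea, by analogy with DPCM, amounts to storing only the difference from the previous sample, quantized to a few bits. The sketch below is plain integer DPCM with a hypothetical 4-bit signed delta, not any published differential-floating-point format; a "differential FP" variant would quantize the deltas to microfloats instead:

```python
def dpcm_encode(samples, delta_bits=4):
    """Store each sample as a clamped signed delta from the previous one."""
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    prev, deltas = 0, []
    for s in samples:
        d = max(lo, min(hi, s - prev))
        deltas.append(d)
        prev += d            # track the decoder's reconstruction
    return deltas

def dpcm_decode(deltas):
    out, prev = [], 0
    for d in deltas:
        prev += d
        out.append(prev)
    return out

print(dpcm_decode(dpcm_encode([3, 5, 4, 6])))   # [3, 5, 4, 6]
```

With smooth data the 4-bit deltas reconstruct the samples exactly; a jump larger than the delta range is clamped and caught up over the following samples, which is the usual DPCM slope-overload behavior.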
"Recycled error bits: energy efficient architectural support for higher precision floating point" 2013 and Michael Parker (Altera): "Floating point adder design flow" 2011 are other methods. Googling "Y. Hida double precision floating point" brings many results, like "Stochastic arithmetic in multiprecision" 2011, ReproBLAS (reproducible basic linear algebra subprograms), "Parallel algorithms for summing floating-point numbers" 2016, and the IEEE 754 floating point standard. A new number system is "prof. Hachioji's new number system of integral numbers with four lowercase letters". Hachioji's system is said to be arithmetic compression, and arithmetic compression is data compression. But if it is possible to put numbers inside themselves (as in floating point) using Hachioji's numbers, does that mean the mathematical law, that a data-compressed number cannot be compressed a second time using the same data compression, is no longer true? Are Hachioji's numbers data compression (arithmetic compression) at all? A data-compressed number being compressible again would seem to violate laws of mathematics. Also "New number systems seek their lost primes" 2017 (that triangle-looking number representation could perhaps use, for example, GPU computing of graphical triangles where one side of the triangle is a fractional value, or Eric James Parfitt's infinity number system with triangles), "Counting systems and the first Hilbert problem" (the Piraha system), "A new computational approach to infinity for modelling physical phenomena", Google Groups 2011: "A new numeral system" by Quoss Wimblik, reddit math: "A triquaternary number system for my world". If logarithmic number systems are in fact versions of floating point, and vice versa, is it possible to use fractional/logarithmic values in FP format, like the IEEE FP standard but with logarithmic/fractional values instead of integers as the bit values of the mantissa or exponent?
Or logarithmic/fractional bit values in unum, posit, valid or EGU/HGU format, if that increases accuracy? On the XLNS research - overview netpage, in the articles section, are logarithmic number systems. Using the Microsoft browser instead of Google makes the Google Groups comp.arch posting "Re: beating posits at their own game" 11.8.2017 by Quadibloc (John G. Savard) visible. EGU is extremely gradual underflow, HGU hyper gradual underflow. NICAM was compression halfway between ADPCM and linear coding; a NICAM that uses dithering instead of white noise, together with floating point, unum, posit, valid, HGU/EGU, logarithmic or another number system, could be used for compression.

The text "Gradual and tapered overflow and underflow: a functional differential equation and its approximation" 2006 says that the floating point format presented (tapered floating point, like unum/ubox computing?) has an overflow threshold of 10 to the 600,000,000th power, meaning a number that has 600,000,000 decimal digits. A number with 6 digits is a million and a number with 12 digits is a trillion, but a number with 600,000,000 digits probably doesn't even have a name. That is the overflow threshold of this floating point format, so this would be a kind of (practically) endless data compression. Although the mantissa of an FP number is what carries its accuracy, doing floating point computing with this kind of non-overflowing FP format would give extreme data storage capacity in the floating point number: because it does not overflow, computations that would overflow in other FP formats do not overflow in this one. So, for example, all the content of the internet, or all information in the world, or all information in the universe, could be coded into one very long number, the product of one floating point computation that yields a non-overflowing list of bits representing all the information of the world. So all the information in the world and more could be in one floating point number. Overflow is one special aspect of floating point accuracy; others are underflow and mantissa accuracy, and mantissa accuracy is the floating point accuracy. But if this overflow capacity can represent an almost endless amount of information accurately, this non-overflowing floating point format could be used to store an (almost) endless amount of information in its overflow capacity. Data storage would then need just this one floating point number, or a small set of them if computing is easier with shorter bit strings; no need for gigabytes of memory or terabytes of data storage if a few floating point numbers can include all the information.
This was invented already in 2006. It concerns differential equations and tapered floating point, and tapered floating point is unum/ubox computing. Do ultrafilters and zero sets with this kind of differential equation bring similar almost endless data compression? If this kind of accuracy is achieved (a number with 600,000,000 digits), that would mean endless data compression is already achieved. See also the "Magical skew number system" by Elmasry, Jensen, Katajainen and other similar texts like "Strictly regular number system and data structures", "The magic of number system", "Optimizing binary heaps", and the "bitprobe model". Do the Elmasry, Jensen, Katajainen number systems have a tree-based structure, and are they an alternative to the Elias omega model? Then they could be used as the exponent of an FP number, as in Gary Ray's Chipdesignmag article "Between fixed and floating point", where the FP number has a reversed Elias gamma exponent. Paul Tarau has also proposed accurate number systems, some of them tree-based, so some of Paul Tarau's numbers could perhaps also replace Gary Ray's reversed Elias gamma exponent in a floating point number. But the overflow capacity of the FP format in that 2006 text could be used to store an almost endless amount of data, if I am right; I can be wrong, as always. John G. Savard's model is logarithmic, but is it possible to make a floating point system that uses the Fibonacci series (golden ratio), if using Fibonacci numbers brings any benefit to floating point, like Savard's logarithmic A-law model?