The accuracy of a floating point number can be expanded up to 39 times without widening the floating point number itself, using Shewchuk's algorithm, the improvements by S. Boldo and Malcolm, the Sterbenz lemma, etc. Only the accuracy of the fraction / mantissa is improved up to 39 times, not the exponent. This accuracy can be used, for example, to store other FP numbers inside this 2028-bit-wide mantissa. A 64-bit FP number has a 52 (+1) bit mantissa; 39 × 52 bits is 2028 bits and 31 × 64 bits is 1984 bits, so 31 other FP numbers fit inside just one “original mother number” (inside its mantissa). But the first 64-bit FP number inside the “original mother” can be used as a “second mother number”; it has 1984 − 64 = 1920 bits of available information space. This “second mother” can contain a “third mother number” that has 1856 bits of information space (1920 − 64 bits), etc., so hundreds of 64-bit FP numbers fit inside just one “original mother floating point number”, much more than just the 31 other FP numbers that fit directly inside one 1984-bit mantissa.

Blockchains are used in Ethereum etc., and at their largest they require gigabytes of information space. Perhaps floating point numbers can be used as blockchains to replace ordinary blockchains. These mantissa expansion computations are done in software, not in hardware, so computing is slow. If dozens of mantissa expansion computations up to about 2000-bit FP mantissa accuracy are needed, the computing is complicated and takes a lot of time, like in an ordinary blockchain, but in this case just one or a couple of 64-bit FP numbers is needed instead of gigabytes of information space, so gigabytes of storage space are saved. How close this FP blockchain is to an ordinary blockchain, or whether it is even near ordinary blockchain technique, I don't know.

The technique of one FP “original mother” number storing other FP numbers inside itself, those FP numbers storing further FP numbers inside them, etc. can also be used to store information other than blockchains, and the FP-numbers-inside-each-other principle can be used in a wide range of information storage applications, although computing several FP numbers with about 2000-bit mantissas is slow and requires lots of computing power. But blockchain computing is slow and complicated also. If an ordinary gigabytes-long blockchain is replaced with just a few 64-bit or 80-bit FP numbers, the savings in information storage space are huge. Computing those FP numbers with over 2000 bits of accuracy is done in software, and software computing is up to 1000 times slower than hardware, so the complicated and slow computing of a “floating point blockchain” is comparable to usual blockchain techniques, while the information space a floating point blockchain needs is extremely small compared to ordinary blockchains. The smaller the exponent of an FP number is, the simpler and faster it is to compute. I don't know whether the exponent of a blockchain FP number should be small or large; if FP numbers are used as data storage only, the smaller the exponent, the better and faster the computing. 52 bits is accuracy enough for almost every application, so the exponent is quite irrelevant. “Floating point adder design flow” 2011 by Michael Parker (Altera), Chipdesignmag's “Between fixed point and floating point” by Dr. Gary Ray, unum / ubox computing by John Gustafson, the unconventional number systems of Paul Tarau, etc. can also be used.
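As a minimal sketch of how this software-only mantissa expansion works (the function names below are my own illustration, not from any particular library): a Shewchuk-style expansion carries one value as an unevaluated sum of several ordinary doubles, built out of error-free transformations such as Knuth's TwoSum, so each extra component adds another mantissa's worth of accuracy.

```python
# Toy sketch of the error-free arithmetic behind Shewchuk-style
# mantissa expansion, done purely in software with ordinary doubles.

def two_sum(a, b):
    # Knuth's TwoSum: s = fl(a + b) and a + b == s + e exactly,
    # so the rounding error e is recovered instead of being lost.
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def grow_expansion(components, b):
    # Add one double to an expansion: a list of doubles whose
    # exact (unrounded) sum is the value being represented.
    result = []
    q = b
    for c in components:
        q, e = two_sum(q, c)
        if e != 0.0:
            result.append(e)
    result.append(q)
    return result

# One value carried as three doubles, spanning far more than 53
# significand bits: the basis of the "up to 39 x" accuracy claim.
x = grow_expansion(grow_expansion([1.0], 1e-30), 1e-60)
print(x)   # [1e-60, 1e-30, 1.0]
```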
Unconventional number systems can store lots of information in themselves, so this trick is not possible only with floating point. In a few bits some number system can store lots of information (and this information can include a few bits that represent another similar high-efficiency number, etc., so the numbers-inside-each-other principle can be used in other number systems also, not just in floating point). Also the “Gal's accuracy tables method revisited” method can be used instead of the method of S. Boldo / Malcolm to improve floating point accuracy; Gal's (improved) method also suits logarithmic numbers. Computing speeds in, for example, signal processing can be improved using analog computing (Glenn Cowan 2005 “A VLSI analog math-coprocessor”; “Floating-point analog-to-digital converter” 1999). Analog compression, like PAL TV analog quadrature compression or MUSE HDTV compression, can perhaps be used together with digital compression. Albert Wegener has patented several ways to use hardware (floating point) data compression for his Samplify Systems Inc.

Because a 64-bit FP number has a maximum bit length of 2040 bits (2028-bit mantissa accuracy, 1 sign bit, 11-bit exponent), does computing such a number in hardware need (?) a 2040-bit (?) wide floating point unit, or not? The widest FPUs nowadays are 512 bits. It is also possible to use an 80-bit FP number, which has an enormous exponent range that can also be exploited. But it is also possible to use small FP numbers, 16- or 8-bit mini- and microfloats; an 8-bit FP number was used in the PlayStation / IBM processor. An 8-bit FP number has a small mantissa, so it is computed fast in software or hardware. It is also possible to go to smaller microfloats, 5- or 4-bit floating point numbers, that are “differential floating point”: as differential PCM (DPCM) is to linear PCM, “DFP” would be a similar version of floating point. But mantissa expansion perhaps cannot be used on those extra-small microfloats.

A 64-bit FP number has a 52-bit mantissa, so 39 × 52 is 2028 bits. 32 × 64 is 2048, and 32 suits a digital binary processor, so perhaps one FP number inside the FP “mother number” is missing its 20 least significant bits, or unnecessary exponent bits are dropped from previous FP numbers so that 20 bits can be saved, etc. If a 16-bit FP number has an 11-bit mantissa, 39 × 11 is 429 bits, and 27 × 16 is 432, so only 3 bits must be dropped. I am not a mathematician, so I don't know the best way to divide the information space (accuracy bits) between “mother numbers” that contain other FP numbers and the actual information FP numbers. If about half (?) of the roughly 2000-bit information space goes to mother numbers and half to “information numbers” (which are not used for information space multiplication like the mother numbers, but to store other information), is that the best (?) division? I don't know. If this floating-point-numbers-inside-each-other method is used only for a blockchain, perhaps all or almost all (?) FP numbers can be mother numbers only. “Simplified floating point division and square root”, Viitanen. Also, if the mother numbers are 64-bit FP numbers, the information numbers can be 16-bit FP numbers, or the other way around; the smaller the mantissa, the faster the expansion up to 39 times is to compute, I think. Information numbers can also be something other than FP numbers; integers can be used too, for example 16-bit words chained together into 16 + 16 + 16 bit chains until the roughly 1000 bits of information space available for them is used.
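To make the bit arithmetic above easy to check, here is a small helper (my own illustration, nothing standard) that counts how many fixed-width payload numbers fit into an expansion's total accuracy bits, and how many bits the next payload would be short:

```python
# Bit-budget arithmetic for the packing described above.
# significand_bits: accuracy per component (52 for binary64,
# 11 for binary16 counting the hidden bit); terms: expansion length.

def packing_budget(significand_bits, terms, payload_bits):
    budget = significand_bits * terms           # total accuracy bits
    fit = budget // payload_bits                # payloads that fit whole
    short = (fit + 1) * payload_bits - budget   # deficit for one more
    return budget, fit, short

print(packing_budget(52, 39, 64))  # (2028, 31, 20): the 20 missing bits
print(packing_budget(11, 39, 16))  # (429, 26, 3): the 3 dropped bits
```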
The first 16 bits of the FP number's mantissa accuracy are used to store the first 16-bit integer, the next 16 bits (16 + 16 = 32 bits) the second integer, etc., until about 1000 bits of the FP number's precision is reached. This roughly 1000 bits of FP accuracy is divided into 16-bit integer slots (16 bits + 16 bits etc.) until the 1000 bits are used. Another 1000 bits of FP accuracy is used for mother numbers, other FP numbers that are used for information space multiplication. Every mother number can contain other mother numbers, but it also contains information numbers inside its accuracy potential (bits of accuracy available). All these mother numbers and information numbers are inside one “original mother number”. The last page of “Algorithms for quad-double precision floating-point arithmetic” by Y. Hida mentions that an FP number mantissa can be expanded up to 39 times (mantissa accuracy improved 39×). Gal's accuracy tables “revisited” method can be used also.

Number systems other than floating point can be used too: the Neuraloutlet wordpress netpage has the “U-value number system”; Rob Craigen at ResearchGate 2012, “Alternative number system bases?”; quote notation by Eric Hehner; at the MROB netpage are (Robert) Munafo's “PT number system” and “multiple base composite integer”; “A new uncertainty-bearing floating point arithmetic” 2012, Chengpu Wang. Unum / ubox computing can be used with ordinary (standard) FP numbers; for example a 16-bit unum section + a 64-bit FP number would make an 80-bit number. Hatami's pseudo-parallel delta sigma coding (with an improved version by Johansson et al.) and multirate / multibit delta sigma (Hannu Tenhunen) can reach great accuracy, but they require analog electronics. Multirate DSM (Tenhunen) reaches a 1400-bit multirate; one-bit encoding with 100 times (?) oversampling is a 14× “compression”, whereas a floating point number has a 39× “compression”. I don't know how much oversampling this 1400-bit multirate one-bit encoding needs. So delta sigma modulation, or Takis Zourntos's “oversampling without delta sigma”, can also be used in this numbers-inside-each-other “compression” principle, not just floating point or exotic number systems. Analog delta sigma and floating point numbers can also be combined in a DAC or ADC: “Direct digital synthesis using delta-sigma modulation”, Orino. “Numerical linear algebra using the RcppEigen package”; “Numbers as streams of digits”, C. Frougny 2012; Bohr compactification. Dithering can be used to improve accuracy (“ADC look-up based post correction combined with dithering”). Delta sigma modulation uses some very efficient noise shaping techniques. Feedback amplitude modulation (FBAM) and vector phaseshaping synthesis are used in sound synthesis, but perhaps they can be used in signal processing too? Multiple description coding with delta sigma modulation is used sometimes; perhaps floating point and other number systems can use it also?

The numbers-inside-each-other trick has been used in computer arithmetic long ago (for example in blockchains?); I just propose that, now that up to 39× “compression” is possible in FP numbers, it could be used as data storage or blockchain. Data “compression” in satellite communication, space probe communication, internet use, magnetic memory, etc. can benefit from this numbers-inside-each-other trick, not just blockchains. If only one mother number is used, without many nested inside one, there is still about 32× compression available. The book “Analog signal processing” (van de Plassche 2013) has “analog floating point” in section 5.
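A toy illustration of the 16-bit chaining just described, using a plain Python integer to stand in for the accuracy bits of a mother number (in the real scheme these bits would live inside an FP expansion; the names and the 1000-bit budget are assumptions taken from the text above):

```python
# Chain 16-bit words into one ~1000-bit accuracy budget and back.

WIDTH, BUDGET = 16, 1000

def pack_words(words):
    assert len(words) * WIDTH <= BUDGET, "out of accuracy bits"
    packed = 0
    for w in words:                        # most significant word first
        assert 0 <= w < (1 << WIDTH)
        packed = (packed << WIDTH) | w
    return packed

def unpack_words(packed, count):
    words = []
    for _ in range(count):
        words.append(packed & ((1 << WIDTH) - 1))
        packed >>= WIDTH
    return words[::-1]

data = [513, 42, 65535]
assert unpack_words(pack_words(data), len(data)) == data
```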
So floating point numbers and delta sigma modulation can be combined. Using either 24-bit multibit DSM, improved Hatami pseudo-parallel DSM (perhaps 16 bits), or multirate DSM up to 1400 bits (Hannu Tenhunen) brings information “compression”: 24× in multibit, 16× in pseudo-parallel, and an unknown compression ratio in multirate DSM. Using a floating point number in DSM, it is possible that 1 bit of information carries 16, 32 or 64 bits of floating point accuracy. Takis Zourntos's oversampling without delta sigma can be used also. Delta sigma has some very efficient noise shaping methods which can boost performance further still, up to 60 to almost 80 decibels, or 10 to 14 bits (see the small check after this paragraph); because delta sigma is one bit, that means 10× to 14× extra bitrate compression. Odelta compression (Ossi Mikael Kalevo) is one method also. But if the purpose is to make a blockchain, floating point expansion using the Boldo / Malcolm methods or Gal's accuracy tables “revisited” method is perhaps best. Information space savings can be up to almost 32 times, so the blockchain needs almost 32 times less data storage than an ordinary blockchain. Delta (sigma) values can be used for information expansion also, but perhaps not in blockchains.

Exotic number systems are another way, but how they can be used in blockchains I don't know. Examples of them are the “magical skew number system” (Elmasry, Jensen, Katajainen), Paul Tarau's number systems, ternary number systems, etc. There is the posit number system (John L. Gustafson); in Google Groups 11.8.2017, “Beating posits at their own game”, is John G. Savard's (Quadibloc) EGU system, and in Google Groups 29.9.2017, “Issues with posits & how to fix them”, is the “valid” number system. How those can be used in blockchains I don't know. The EGU number system is A-law compressed, so it is logarithmic and can be used in analog hardware? Intel ADX is an instruction set extension for fast multi-precision (add-with-carry integer) arithmetic. Additive quantization (Martinez), “Quantization and greed are good” Mroueh 2013, and the pulse group modulation patents by Clinton S. Hartmann. A posit / EGU / valid number system can be differential, like DPCM is a version of PCM, in 4-8 bit “microposits”, and/or use Gary Ray's reversed Elias gamma coding? Dithering and noise shaping can also dramatically increase small microfloat or microposit accuracy. Multiple description coding, multiple base composite integer, Bohr compactification, etc. can be used. The same information saving that is possible in floating point numbers is available in exotic number systems also, perhaps much more. On the stackoverflow netpage “Algorithms based on number base systems? (closed)” 2011 are many different number systems, and on stackexchange “Data structures: what is a good binary encoding for phi-based balanced ternary?” 2012 is ternary; “A new number system for faster multiplication” 1996 Hashemian; “ZOT-binary: new number system”; “Number representations as purely data structures” 2002 Ivanovic; “Hierarchical residue number systems” Tomczak; index-calculus number systems; residue and redundant number systems, etc. In “New approach based on compressive sampling” Bonavolonta 2014, only 1/50th of a signal is needed to reconstruct it, so it has a 50 times compression ratio. There is the book “Dynamics of number systems - computation with arbitrary precision” by Petr Kurka on how to use different number systems. “Dynamical directions in numeration” Barat 2015, “A redundant digit floating point system” Fahmy 2003, “Abelian complexity in minimal subshifts” 2011, “Optimal left-to-right binary signed digit recoding” Joye 2000.
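The “60 to almost 80 decibels, or 10 to 14 bits” figure above follows the usual rule of thumb of roughly 6 dB of signal-to-noise ratio per effective bit; a one-line check (ignoring the 1.76 dB constant of the full ENOB formula):

```python
# Roughly 6.02 dB of SNR per effective bit of resolution.
def db_to_bits(snr_db):
    return snr_db / 6.02

for db in (60, 80):
    print(db, "dB ~", round(db_to_bits(db), 1), "bits")
# 60 dB ~ 10.0 bits, 80 dB ~ 13.3 bits
```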
Finite State Entropy coding uses asymmetric numeral systems; see also “Asymmetric high-radix signed-digit number systems for carry-free addition”. “New formats for computing with real numbers” Hormigo 2015; the “adapted modular number system” (AMNS). “Dealing with large datasets (by throwing away most of the data)” by A. Heavens is “massive data compression”. “Ordinals in HOL: transfinite arithmetic up to (and beyond)” by Norrish is perhaps nearer an “infinity computer” than ordinary computer arithmetic. “A floating-point ADC with variable gain pipeline stages” Kosonen; “Design and implementation of a self-calibrating floating-point analog-to-digital converter” 2004; “Arithmetic units for high performance digital signal processor” Lai 2004; “Multiple base number system” Dimitrov; the “algebraic integer quantization” concept; “Pairwise quantization”; “Sparse composite quantization”; “Universal rate-efficient scalar quantization”; “Tournament coding of integers” Teuhola; “Liao style numbers of differential systems”. And “1-bit digital neuromorphic signal path with dither”. Preferred numbers, like Renard series or E-series numbers, can perhaps offer better integer accuracy than plain binary 1 and 0. Perhaps integer or floating point accuracy could be improved if the number system is different from binary 1 and 0, for example if preferred numbers are used: integers, not logarithmic numbers, that follow a geometric series (Renard series and E-series values, suitably scaled, are integers). Perhaps the multiple base composite integer at the MROB netpage, or Dimitrov's multiple base number system, can be combined with preferred numbers etc. (a small sketch of generating such series follows below). But a floating point mantissa can be expanded up to 39 times in a standard FP number, and that is “information compression” enough, for example for use in a blockchain.
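As a footnote on the preferred numbers mentioned above: Renard and E-series values are rounded points of a geometric progression within each decade, so they are easy to generate. A minimal sketch (my own; the standardized series round the values further, e.g. R5 is 1, 1.6, 2.5, 4, 6.3, or 10, 16, 25, 40, 63 as integers):

```python
# Preferred numbers as a rounded geometric series within one decade.
def preferred(steps):
    ratio = 10.0 ** (1.0 / steps)
    return [round(ratio ** k, 2) for k in range(steps)]

print(preferred(5))    # R5-like:  [1.0, 1.58, 2.51, 3.98, 6.31]
print(preferred(12))   # E12-like: 12 values per decade
```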