On the Yehar netpage (Olli Niemitalo) there is "Enumeration of functions", and on Stack Overflow (2011) there is "Algorithms based on number base systems? (closed)", which mentions a "meta-base enumeration n-tuple framework". Whether those two enumeration subjects have anything in common, I don't know. There is also a thing called a "promise library", or "futures and promises", and that is also about functions (in computer arithmetic). Whether those three things (promise library, enumeration of functions, and meta-base enumeration n-tuple framework) have something in common, I don't know.

Addition to the previous text: suppose floating point numbers are used as general data storage, not blockchain, and the mantissa of an FP number is expanded up to 39 times (a 64 bit FP number has a 52 bit mantissa, so 39 times is about 2000 bits). This 2000 bit mantissa can then be used so that about 1000 mantissa bits hold 64 bit floating point "mother numbers" that increase the information space (up to 16 of them, 16 x 64 = 1024 bits), and the other roughly 1000 bits are the "information space" where the actual information is. But those "second generation mother numbers" contain "third generation mother numbers" inside themselves: if the first mother number has, for example, 2024 bits of mantissa accuracy, a second generation mother number has 2024 - 64 = 1960 bits of accuracy, that one contains a "third mother" with 1960 - 64 = 1896 bits of accuracy, and so on, until about 1000 bits of accuracy are left inside the mother numbers. I tried to calculate the information space expansion that is possible this way, and about 15 000 times expansion is perhaps possible (whether my calculation was right or wrong, I don't know).

A floating point number with 2000 bit accuracy is enormous accuracy already, and perhaps 1000 bits x 15 000 is accuracy that is not needed in any scientific computing. Instead, the information space inside one single "first mother number" can be used as information storage, to store textual or visual information as bits (a long line of bits where, for example, 8 or 16 bit values are chained as 8 + 8 + 8 bits or 16 + 16 + 16 bits etc. until the 1000 bits are full), which is the available information space in each expanded-accuracy floating point number. If one 64 bit floating point "first mother" number can store 15 million bits of information inside itself, that is an enormous compression ratio in effect, although the data is not really compressed, it is just represented as a floating point number. So this one 64 bit FP number can be stored in memory or sent through the internet etc., and when it is received it can be "read" and expanded back to its 15 million bit information space form. Space probe communication, satellite communication, or even interstellar communication could use this method (if aliens understand the method too and can read 64 bit floating point numbers).

Of course it is possible to use a lower accuracy floating point expansion, for example about 1000 or about 500 mantissa bits, but the information space is then much smaller. The maximum 39x mantissa expansion gives 39 x 52 = 2028 mantissa bits, plus an 11 bit exponent and 1 sign bit = 2040 bits; a 10x expansion gives 520 mantissa bits, plus the 11 bit exponent and 1 sign bit = 532 bits. If 5 x 64 = 320 of those mantissa bits are used for mother numbers, 200 mantissa bits of information space are still available.
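As a small sketch of this bit budget (my own illustration, assuming the numbers above: a 52 bit mantissa expanded 39 times, 64 bits of accuracy lost per mother number generation, nesting stopped at about 1000 bits):

```python
# Bit budget of the "mother numbers" scheme described above.
MANTISSA_BITS = 52
EXPANSION = 39
expanded = MANTISSA_BITS * EXPANSION      # 2028 bits of mantissa accuracy

mother_budget = 16 * 64                   # 1024 bits for 16 mother numbers
info_space = expanded - mother_budget     # ~1004 bits of information space

# Each generation of mother numbers has 64 bits less accuracy than its
# parent; nesting stops when about 1000 bits of accuracy remain.
accuracy, generations = 2024, 0
while accuracy - 64 >= 1000:
    accuracy -= 64
    generations += 1

print(expanded, info_space, generations)  # 2028 1004 16
```

With these assumptions the recursion bottoms out after 16 generations, which matches the 16 x 64 = 1024 bit mother number budget.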
The available multiplication space is 5 mother numbers, but how is it counted? Is it 5 + 4 + 3 + 2 + 1 = 15 mother numbers inside one, or is it 5 x 1 + 4 x 2 + 3 x 3 + 2 x 4 + 1 x 5 = 35 mother numbers, or is it 5 x 4 + 4 x 3 + 3 x 2 + 2 x 1 = 40 mother numbers? Or is it some other combination? I don't know. This is supposed to be a simple calculation, but I just don't understand how this multiplication should be done. If the calculation is done for a 2036 bit number (2024 mantissa bits + 11 bit exponent + sign bit, from the 39 x 52 bit mantissa expansion), with 1000 bits as information space and 16 x 64 bits (1024 bits) for mother numbers, and the multiplication of information space is 16 + 15 + 14 + 13 + 12 etc., that is only about 136 times expansion, not more. The end result is 136 x 1000 bits = 136 000 bits in one 64 bit FP number, or 2125 x 64 bits, so it is only about 2125 times information space expansion overall, not the 15 000 times that I thought. Anyway, back to the 532 bit number: 200 bits are available as information space inside each mother number and 320 bits are used for mother numbers in the first 532 bit mother FP number, so it is either 15, 35 or 40 x 200 bits (or something else; I just can't count this, although it is a simple expansion computation, I am very bad at math), so from 3000 bits to 8000 bits. All this inside one 64 bit FP number, if the mantissa of that number is expanded ten times.

A floating point library, which does its computing in the integer units of the processor, can be used to speed up the computations, either together with floating point hardware or alone, so that those mantissa expansions, which must be done in software anyway, are faster. The extreme case would be storing every value of the floating point number in a very large memory library: no computing would be needed at all, so information could be stored in an extremely "compressed" form inside a floating point number and read out without any computation, but a library that stores every possible FP value as memory bits would be terabytes or petabytes large, perhaps even more. Perhaps best would be combining all three methods into one: the floating point unit, integer computing of floating point numbers (algorithms that compute floating point numbers using the integer processor), and a memory library that helps both the FPU and the integer FP computing. Then 2040 bit FP numbers could be computed suitably fast.

Another option is hardware with a 2040 bit wide FPU, so that this expansion does not need to be computed in software at all (software computing is about 1000 times slower than hardware computing); the largest FPUs are now 512 bits wide. Another way is to use quantum computing, an analog computer (the smallest analog circuits are only 16 nm now, so an analog FPU that computes with 2000 bit accuracy is perhaps possible), an analog or digital optical computer, or a fast optical FPU inside a normal silicon VLSI CPU. Anyway, if information is stored using this "floating point numbers inside each other" principle, high information density and low storage requirements are possible; one cell phone could include in its memory a large amount of the content of the whole internet, so internet searches would not even be needed when everything is inside the phone's memory.
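The counting the text above settles on (16 + 15 + 14 + ... + 1 = 136) is the triangular number sum; here is a minimal sketch of that alternative (my own illustration; the 35 and 40 alternatives would be computed differently):

```python
# Triangular-number counting: n + (n-1) + ... + 1 nested mother numbers.
def nested_count(n):
    return n * (n + 1) // 2

# 2036 bit case: 16 first-level mothers, ~1000 information bits each.
print(nested_count(16))               # 136 nested numbers
print(nested_count(16) * 1000)        # 136000 bits of information space
print(nested_count(16) * 1000 // 64)  # 2125 x expansion over 64 bits

# 532 bit case: 5 mothers, 200 information bits each.
print(nested_count(5))                # 15 nested numbers
print(nested_count(5) * 200)          # 3000 bits
```

With this counting the 5-mother case gives 15 nested numbers, i.e. the low end (15 x 200 = 3000 bits) of the 3000 to 8000 bit range above.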
When those 2000 bit FP numbers are computed fast they cannot be used as blockchain, because the whole idea of blockchain is complex computations that require much time and computing power. And if the maximum information space expansion is only about 2125 times, that is far too little to include the whole internet in a cell phone's memory; I made that statement when I thought it was possible to expand the information space a million times or more in a 2040 bit floating point number. Processing can be made faster not only by using fewer than 2000 bits in a 64 bit FP, but by using 32 or 16 bit FP as the base; however, the information space is then also very small, since a 32 bit FP has a 23 bit mantissa and a 16 bit FP a 10 bit mantissa, and that is what gets expanded 39 times. Actually 992 bits, not 1000, is the best size for the 64 bit FP information space (992 divides evenly into 16 bit or 32 bit sections), with 1024 bits for mother numbers.

Another way to use floating point, not necessarily with mantissa expansion, is FP with a shared exponent. This 2000 bit FP uses a shared exponent by default, because if those 992 or 1000 information bits are divided into 16 + 16 or 32 + 32 etc. sections, they all share the one 11 bit exponent and 1 sign bit. A 32 bit word could also be divided into three 9 bit sections plus a 5 bit exponent (or a 4 bit exponent and 1 sign bit), or into 4 x 7 bits plus a 4 bit exponent. Those 9 or 7 mantissa bit fields can perhaps include a "hidden bit", so that they are really 10 and 8 bit mantissa values. Is it possible that those chained 16 + 16 bit or 32 + 32 bit values inside one 992 bit section all have a hidden bit, if they are not integer values but floating point numbers (inside one large floating point number)? Then they would actually be 16 + 1 hidden bit = 17 bit and 32 + 1 hidden bit = 33 bit values, like IEEE standard FP numbers, and that would increase the information space. Is that hidden bit the same (shared) between every small bit section in the 992 bit long bit chain, or can each 16 bit or 32 bit part of the bit chain have a different hidden bit? I think a separate hidden bit for each 16 or 32 bit FP number inside the long bit chain is possible (this long bit chain is the expanded mantissa of one 64 bit FP number, with the other 32 or 16 bit FP numbers inside it, chained 16 + 16 bits etc.).

Perhaps a "promise library" can be used as a floating point library to increase FP computation speed and accuracy. GPUs are 64 bit now, so GPGPU computing of 64 bit FP is possible, but how about accuracy, if GPU FPUs are lower accuracy but still IEEE standard? Some laptop PCs perhaps have low-end 64 bit GPUs, but computing one 64 bit FP number with its mantissa extended to 2000 bits, and then all the other FP numbers inside it, would take days or weeks on a tablet PC or smartphone. Home PCs can use GPGPU computing and efficient 64 bit FPU GPUs. There are also "desktop supercomputers", Beowulf clusters, Aiyara clusters etc.; a cheap desktop supercomputer with lots of GPGPU processing can perhaps solve those floating point computations ("Home PC outperforms a supercomputer in complex calculations"). Googling "optical desktop supercomputer by 2020" brings many results. The FiRe Pattern computer is capable of seeing patterns, and seeing patterns when searching for "true compact numbers", or patterns in the Champernowne constant, is useful if data compression is the goal. Desktop supercomputers cost several dozen thousand dollars; the next step upwards is a real supercomputer like a Cray, whose cheapest model costs about 0.5 million dollars. Those perhaps can solve that 2000 bit FP number computing fast.
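A minimal sketch of the shared exponent idea in the 32 bit case (three 9 bit mantissa fields plus one 5 bit shared exponent, 3 x 9 + 5 = 32; the field layout, the exponent bias and the restriction to non-negative values are my own assumptions for illustration):

```python
import math

BIAS = 15  # bias for the 5 bit shared exponent field

def pack_block(values):
    # One exponent for the whole block, taken from the largest magnitude.
    e = max(math.frexp(v)[1] for v in values)
    word = (e + BIAS) & 0x1F
    for i, v in enumerate(values):
        # 9 stored mantissa bits per value, all scaled by the shared exponent.
        m = min(round(v / 2.0 ** e * (1 << 9)), (1 << 9) - 1)
        word |= m << (5 + 9 * i)
    return word

def unpack_block(word):
    e = (word & 0x1F) - BIAS
    return [((word >> (5 + 9 * i)) & 0x1FF) / (1 << 9) * 2.0 ** e
            for i in range(3)]

print(unpack_block(pack_block([0.75, 0.5, 0.25])))  # [0.75, 0.5, 0.25]
```

One thing this sketch suggests about the hidden bit question: with a single shared exponent only the largest value in the block is normalized, so a separate hidden bit for each small value would need some per-value normalization information on top of the shared exponent.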
Using unum / posit or some other method, not just plain ordinary IEEE standard floating point, is perhaps possible for increased accuracy. Exponent (range) accuracy is not needed the way it is in floating point numbers; the main point is that a 64 bit (or other bitwidth) number should have much more than 64 bits of accuracy, so that other numbers can be put inside it and increase the information space that way. Perhaps exotic number systems that have high accuracy measured in bits, but need only a small number of bits to represent that accuracy, can be used. Simultaneous fixed point and floating point computing uses the capacity of the integer units of the computer (with a floating point library) to do floating point computing with integers, together with the real FPU, so computing those 2000 bit FP numbers is faster when both the FPU and floating point computing in the integer ALU are used. Unum / posit is a version of FP, and different versions of FP computing exist that use similar principles to unum / posit (they seem to be simply different versions of the unum / posit principle).

The small chained values inside one 2000 bit FP number (the information space, which is either 1000 or 992 bits long, or some other bitwidth) can be "differential floating point" 4, 6 or 8 bit FP or posit etc. microfloat values. Differential FP is similar to DPCM / ADPCM, or to Takis Zourntos's "one bit without delta sigma", using FP instead of integer 4 bit etc. values. Differential logarithmic or other number systems than integer / FP can also be used. Vector compression with FP / unum or others (like "Between fixed point and floating point" by Dr. Gary Ray, which uses data compression with floating point numbers and reversed Elias gamma coding), additive quantization (AQ), or some other method can make the bitwidth smaller. Those data compression methods don't work in the numbers-inside-each-other principle, because a number that has been data compressed once cannot be compressed a second time, so only one layer of numbers can benefit from data compression techniques. So: 4-8 bit differential floating point / posit computing etc., with data / vector compression techniques for the microfloats / microposits.

The numbers-inside-each-other principle is not data compression; the number values are just represented as floating point numbers, 64 bit, 32 bit, 16 bit or microfloats. Perhaps unum / posit or other similar formats can also be used in the numbers-inside-each-other principle. Any time a 64 bit or 32 bit etc. value is much more accurate than just 64 or 32 bits, numbers can be put inside each other and so increase the information space substantially. I used 64 bit IEEE standard FP numbers, but if mantissa expansion can be done with unum / posit etc. numbers, they can be used too, or exotic number systems like fractional, index-calculus (something like the APL programming language, but as a number system, not a programming language; the Munafo PT number system at the MROB netpage comes close) or complex numbers etc. Anything that has high accuracy with small bitwidth. "Recycled error bits: energy efficient architectural support for higher precision floating point" (2013) and Michael Parker (Altera): "Floating point adder design flow" (2011) are other methods. Googling "Y. Hida double precision floating point" brings many results, like "Stochastic arithmetic in multiprecision" (2011), ReproBLAS (reproducible basic linear algebra subprograms), "Parallel algorithms for summing floating-point numbers" (2016), and the IEEE floating point standard 754-2018.
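A minimal sketch of what such differential microfloat coding could look like (my own illustration, not Zourntos's or Ray's method: each code is a 4 bit sign + exponent field, and the delta from the running prediction is rounded to the nearest power of two):

```python
import math

def encode(samples):
    # DPCM-style: code the difference from the decoder's own prediction.
    pred, codes = 0.0, []
    for s in samples:
        d = s - pred
        sign = 1 if d < 0 else 0
        # 3 bit exponent field; code 0 means "no change".
        e = max(1, min(7, round(math.log2(abs(d))) + 5)) if d else 0
        codes.append((sign << 3) | e)
        step = 2.0 ** (e - 5) if e else 0.0
        pred += -step if sign else step   # track the decoder exactly
    return codes

def decode(codes):
    pred, out = 0.0, []
    for c in codes:
        sign, e = c >> 3, c & 7
        step = 2.0 ** (e - 5) if e else 0.0
        pred += -step if sign else step
        out.append(pred)
    return out

print(decode(encode([0.0, 1.0, 1.5, 1.25])))  # [0.0, 1.0, 1.5, 1.25]
```

The point of the floating point style exponent in each 4 bit code is the same as in DPCM / ADPCM: large deltas are covered with few bits at the price of coarse steps.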
New number systems include "prof. Hachioji's new number system of integral numbers with four lowercase letters". Hachioji's system is said to be arithmetic compression, and arithmetic compression is data compression. But if it is possible to put numbers inside themselves (like in floating point) using Hachioji's numbers, does that mean that the mathematical law that a data compressed number cannot be compressed a second time with the same data compression is not true anymore? Are Hachioji's numbers data compression (arithmetic compression), or are they not? Could a data compressed number then be data compressed again? That seems to violate the laws of mathematics. Others are "New number systems seek their lost primes" (2017; that triangle-looking number representation can perhaps use, for example, GPU computing of graphical triangles where one side of the triangle is a fractional value, or Eric James Parfitt's infinity number system with triangles), "Counting systems and the first Hilbert problem" (the Pirahã system), "A new computational approach to infinity for modelling physical phenomena", Google groups 2011: "A new numeral system" by Quoss Wimblik, and reddit math: "A triquaternary number system for my world".

If logarithmic number systems are in fact versions of floating point, and vice versa, is it possible to use fractional / logarithmic values in FP format, like the IEEE FP standard but with logarithmic / fractional values for the mantissa or exponent instead of integer bit values? Or logarithmic / fractional bit values in unum, posit, valid or EGU / HGU format, if that increases accuracy? On the XLNS research - overview netpage, in the articles section, there is material on logarithmic number systems. Using the Microsoft browser instead of Google makes the Google groups comp.arch posting "Re: beating posits at their own game" (11.8.2017, by Quadibloc, John G. Savard) visible. EGU is extremely gradual underflow, HGU hyper gradual underflow. NICAM was compression halfway between ADPCM and linear coding; a NICAM-like scheme that uses dithering (not white noise) together with floating point, unum, posit, valid, HGU / EGU, logarithmic or some other number system could be used for compression.
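On the point that logarithmic number systems are close relatives of floating point, a minimal sketch (my own illustration): an LNS stores the base-2 logarithm of the value, so multiplication and division become addition and subtraction of the stored logs, while addition needs a Gaussian logarithm function or table.

```python
import math

def to_lns(x):
    # A real LNS stores the sign separately and the log as a fixed point
    # number; here a plain Python float stands in for the stored log.
    return math.log2(x)

def from_lns(l):
    return 2.0 ** l

a, b = to_lns(6.0), to_lns(7.0)
print(from_lns(a + b))   # ~42.0: multiply by adding logs
print(from_lns(a - b))   # ~0.857: divide by subtracting logs

def lns_add(a, b):
    # Addition via the Gaussian logarithm: log2(2^a + 2^b).
    lo, hi = min(a, b), max(a, b)
    return hi + math.log2(1.0 + 2.0 ** (lo - hi))

print(from_lns(lns_add(a, b)))  # ~13.0 = 6 + 7
```

In this sense an LNS value is "all exponent": the stored log plays the role that the exponent plus mantissa play in ordinary floating point, which is why the two systems can be seen as versions of each other.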