Floating point numbers can also be used as a blockchain. The blockchains that Ethereum and other blockchain-based technologies use are large, many gigabytes at the largest. Instead of a normal blockchain, floating point numbers could be used to build a complicated and large information field. Floating point computation done in software rather than hardware is perhaps 1000 times slower than hardware floating point. If Shewchuk's algorithm, the improvements by S. Boldo and Malcolm, the Sterbenz theorem etc. are used, the mantissa (significand) accuracy of a floating point number can be expanded up to 39 times. If a 64 bit FP number has a 52 (+1) bit mantissa, the accuracy becomes 2028 bits, not 52 bits. Those 2028 bits can be used to store several floating point numbers chained 64 bits + 64 bits etc., so 31 FP numbers of 64 bits each can be stored inside the mantissa of one 64 bit FP number. There is the slowness of computation when the computer computes the 1984 bit value needed to store all 31 FP numbers, and there is the possibility that the FP numbers inside one FP number can themselves contain other FP numbers (1984 bits - 64 bits leaves 1920 bits of information space for the first FP number inside the 1984 bit "original mother number"; that 1920 bit number becomes the "second mother number"; 1920 - 64 bits leaves 1856 bits for the first FP number inside the "second mother number", etc.), so very long chains of floating point numbers can be made. One floating point number, the "original mother number", can contain in its 1984 bits hundreds of other FP numbers if those are also used as "second mother number", "third mother number" etc. that themselves contain more FP numbers inside them. So instead of blockchains gigabytes long there is just one, or only a few, 64 bit "mother" floating point numbers, each storing the information of dozens of other FP numbers. This 64 bit FP has its mantissa accuracy expanded up to the 2028 bit maximum (the first FP number, the "original mother number") using computation in software, which is slow. I don't know how close this "floating point blockchain" is to ordinary blockchain techniques. But perhaps just one or a couple of 64 bit FP numbers are needed to use this "FP number as blockchain" technique and to replace ordinary blockchains that require gigabytes of storage space, if just 64 bits of FP information that contains hundreds of other FP numbers inside itself is enough. Can ordinary blockchains be replaced with a "floating point blockchain"? I don't know. FP numbers can store other information than just blockchains; lots of information can sit inside one 64 bit "original mother" floating point number. This is not data compression, because the data is not compressed, only represented as floating point numbers (inside each other). But these texts are about data compression: "ACE: Adaptive cluster expansion for maximum entropy", "Phase shift migration using orthogonal beamlet transforms", "Cyclic spectrum reconstruction from sub-Nyquist sampling dual sparse...", "Xampling: analog to digital at sub-Nyquist rates", "Reliable and efficient sub-Nyquist sampling", "Point cloud data compression using a space-filling curve", and patents US 20040200734 "Apparatus and method for generating..." Sullivan, US 8274921 "System and method for communicating...", US 9595976 "Folded integer encoding", US 5182642 "Apparatus and method for the compression"... etc. The exponent of the FP number is quite irrelevant in this method of using FP numbers as a blockchain or data storage.
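To make the mantissa expansion concrete, here is a minimal Python sketch of the error-free transformation that Shewchuk-style expansions (and the Boldo et al. refinements) are built from. Python floats are IEEE binary64; the names two_sum and grow_expansion follow Shewchuk's paper, but this toy version skips all the special cases a real library handles:

    def two_sum(a, b):
        # Knuth's TwoSum: s is the correctly rounded sum, e the exact
        # rounding error, so s + e == a + b with nothing lost.
        s = a + b
        bv = s - a
        av = s - bv
        return s, (a - av) + (b - bv)

    def grow_expansion(expansion, b):
        # Shewchuk's GROW-EXPANSION: add one double to a list of doubles
        # (an "expansion") while keeping their exact sum unchanged.
        result = []
        q = b
        for x in expansion:
            q, e = two_sum(q, x)
            result.append(e)
        result.append(q)
        return result

    e = grow_expansion([1.0], 2.0 ** -60)
    print(1.0 + 2.0 ** -60 == 1.0)   # True: one double drops the low bits
    print(e)                         # [8.673617379884035e-19, 1.0]: kept here

The pair of doubles carries significand bits that no single double can hold; chaining up to 39 non-overlapping components across the binary64 exponent range is where the 2028 bit figure comes from.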
The faster the FP number is computed the better in the data storage application, so the exponent stays small, because 52 bits of maximum accuracy is enough for almost every application. I don't know whether the exponent should be large or small if FP numbers are used as a blockchain. But using FP numbers inside each other saves storage space enormously compared to ordinary blockchain techniques. Even if it is not possible to multiply the information space using second or third mother numbers etc., there is still the first, original mother number. 32 x 64 is 2048 but 39 x 52 mantissa bits is only 2028, so by simply dropping 20 least significant bits, for example one bit from each of 20 numbers, 32 numbers of 64 bits (some of them 63 bits) are in use, and 32 suits binary systems. Almost 32x "compression" is still better than no compression at all, and since this is not data compression, "real" data compression can be added on top to improve efficiency further. Using a 16 bit FP number with an 11 bit mantissa, the 39x mantissa expansion gives 429 bits, almost 27 x 16 bits (432 bits), so 27x "compression" is possible by dropping only 3 bits. Integer numbers can also be placed inside one floating point mantissa, simply chained 16 + 16 bits etc. until about 2000 bits of mantissa space is used. That is still about 32x or 27x "compression" with only one mother number, if it turns out to be impossible to use further mother numbers inside the first. This numbers-inside-one-number trick has been used in computer arithmetic for a long time. Using floating point numbers in blockchains etc. still takes about 32 times less information space than other techniques. Exotic number systems can also be used, like the U-value number system on the Neuraloutlet wordpress netpage, Zero Displacement Ternary (ZDTNS), balanced ternary tau, Quote notation (Eric Hehner), the Munafo PT number system and the multiple-base composite integer (both on the MROB netpage), Paul Tarau's systems, the ResearchGate thread by Rob Craigen "Alternative number system bases?" 2012 etc. Floating point numbers can be combined with analog delta sigma modulation. Using the improved Hatami pseudo-parallel delta sigma (Johansson et al 2014), multibit DS, or multirate DS (Hannu Tenhunen), even more "compression" is possible. Some of Tenhunen's papers say that a multirate DS modulator can reach up to 1400 bits (1417 bits? or so). Because it is a 1 bit system, using perhaps 100x oversampling, it has a 14x "compression ratio". That multirate DS paper I have not found again after reading it the first time; it seems to have disappeared (?) from the internet. But the writer was Tenhunen and someone else. I don't know whether the oversampling ratio was 100 or something else. Multibit DS can also be used with FP numbers: if a multibit DS uses 24 bits where ordinary DS is a one bit system, the compression ratio is 24x. Using improved Hatami DS with 16x pseudo-parallelism gives 16x compression. The pulse group keying patents (Clinton Hartmann, 2002, 2003), Additive Quantization (Martinez), also known as extreme vector compression (which perhaps suits delta sigma modulation too), the Q-digest algorithm, Finite State Entropy, the Sparse Fourier Transform, Octasys Comp (van den Boom) etc. "real" data compression can be used to improve performance further still. Multiple description coding is also used with delta sigma; floating point can use it too. Exotic number systems can perhaps still improve performance, but there is no hardware where they could run directly. "Direct digital synthesis using delta-sigma modulated signals" (Orino) is digital delta sigma.
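The bookkeeping of numbers inside one mother number is plain base-2 concatenation, which a short sketch can show; here a Python big integer stands in for the roughly 2028 bit expanded mantissa (pack and unpack are my own illustrative helpers, not from any cited paper):

    def pack(values, width=64):
        # Concatenate fixed-width values into one big "mother" integer.
        mother = 0
        for v in values:
            mother = (mother << width) | (v & ((1 << width) - 1))
        return mother

    def unpack(mother, count, width=64):
        # Peel the values back off, least significant slot first.
        mask = (1 << width) - 1
        out = []
        for _ in range(count):
            out.append(mother & mask)
            mother >>= width
        return list(reversed(out))

    vals = list(range(1, 32))           # 31 values x 64 bits = 1984 bits
    mother = pack(vals)
    assert mother.bit_length() <= 2028  # fits the expanded mantissa budget
    assert unpack(mother, 31) == vals   # and unpacks losslessly

As the text says, nothing is compressed here: 1984 payload bits still occupy 1984 bits of the expanded mantissa.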
Bohr compactification, dithering in the signal, or Vector Phaseshaping Synthesis and Feedback Amplitude Modulation (FBAM) can be used; those last two are sound synthesis methods but can perhaps be used in other signal processing as well. Takis Zourntos has introduced "one bit oversampling without delta sigma", which can also be used. That kind of very high information density but very small bitrate signal, whatever method it uses (exotic number system, floating point, floating point and delta sigma together etc.), can be used in satellite communication, space probe communication, terrestrial internet communication, magnetic memory etc., anything that needs high information density, like blockchains. The main principle is that one small-bitwidth data sequence can have large information density when measured in binary bits: several binary bits are put inside the small-bitwidth data, which is then expanded back to the binary information it contains, either integer or floating point etc. Several information blocks can be chained together by simply putting 16 + 16 + 16 bits into a long bitchain if 16 bits is the information block, and this long bitchain, for example about 2000 bits long, goes inside one 64 bit floating point number. Now the 64 bit number holds 2000 bits of information, not just 64 bits. The 64 bits can be sent to a receiving device which expands the FP mantissa accuracy to 2000 bits and extracts the information from it. Delta sigma modulation can use very efficient noise shaping, up to 60 or even over 80 decibels, i.e. 10 to 14 bits. Because DSM is one bit information, the "compression ratio" of noise shaping is 10x or 14x. Compression ratio here means bitrate compression. Data is not actually compressed in floating point numbers and delta sigma values, it is only represented as floating point numbers and delta sigma values, so actual data compression can be used to improve performance further still. So other mother numbers inside the first one are not needed; even one "mother number" is enough, if it is not theoretically possible to use several mother numbers inside the first. Even one 64 bit FP or FP + DSM number etc. is enough. Or exotic number systems could be used, which would perhaps be even better. Unum / ubox computing can improve accuracy, for example a 64 bit FP with a 16 bit unum section, 80 bits together. Or use the model of "reversed Elias gamma coding" from "Between fixed point and floating point by Dr. Gary Ray" (Chipdesignmag) to improve FP performance, or use unum and Gary Ray's model together in an FP number, in its extra 16 bit section, when 64 bits is the standard FP number (total FP number length is then 80 bits). The Gal's accuracy tables "revisited" method (a French paper) can also be used to improve FP performance if the 39x mantissa expansion is not used. Small microfloats can perhaps use Gal's revisited method better: a 16 bit minifloat or an 8 bit microfloat (which has a very small mantissa) etc. So mini- and microfloats can also use accuracy expansion, not just big 64 or 80 bit FP numbers, and computing is faster. Gal's revisited method can also be used in big FP numbers, and the 39x mantissa expansion in small FP numbers (?). ODelta compression by Ossi Mikael Kalevo is another delta compression.
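As a sketch of that send-and-expand round trip, the following Python splits an about 2000 bit payload into 52 bit chunks and stores each chunk exactly in one binary64 component at its own power-of-two scale; this works because the binary64 exponent range spans far more than 53 bits, which is the same fact behind the 39-component expansion. The 52 bit chunk size and the BIAS constant are my own choices for illustration:

    import math

    CHUNK = 52
    BIAS = -1022   # keep every component inside the normal binary64 range

    def int_to_components(n, count=39):
        comps = []
        for i in range(count):
            chunk = (n >> (CHUNK * i)) & ((1 << CHUNK) - 1)
            comps.append(math.ldexp(float(chunk), CHUNK * i + BIAS))  # exact
        return comps

    def components_to_int(comps):
        n = 0
        for i, c in enumerate(comps):
            n |= int(math.ldexp(c, -(CHUNK * i + BIAS))) << (CHUNK * i)
        return n

    payload = (1 << 2000) | 0xDEADBEEF          # a 2001 bit "bitchain"
    comps = int_to_components(payload)          # 39 ordinary doubles
    assert components_to_int(comps) == payload  # receiver recovers all bits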
Very small microfloats can use "differential floating point", which is to linear floating point what DPCM (differential PCM) is to linear PCM: an 8 bit FP has very small accuracy if it is linear like PCM, but a differential FP, like DPCM, can have better accuracy. "Information compression without data compression" is possible in several ways:

1: Delta values:
1.1: Multibit delta sigma, up to 24 bits, with 32 bit DSM perhaps coming soon; if DSM is normally 1 bit, compression is 24x.
1.2: Multirate DSM; 1400 bit multirate requires huge oversampling? Compression ratio unknown.
1.3: Improved Hatami pseudo-parallel DSM; 16x pseudo-parallel is 16x compression.
1.4: Delta sigma modulation and floating point numbers used together; a 64 bit FP DSM is 1 bit with 52 bit accuracy and a 1000 bit range, or is it? I don't know how FP DSM works. So 1 bit of information would have 52 bit accuracy and a 1000 bit range (if that is possible).
1.5: Noise shaping DSM; up to 14 bits or perhaps more is possible using noise shaping with DSM, so 14x compression. Noise shaping can be combined with the previous methods. Takis Zourntos's and other delta models can be used (a delta sigma sketch follows this passage).
2: Floating point numbers:
2.1: S. Boldo / Malcolm etc. mantissa expansion up to 39 times.
2.2: Unum / ubox computing.
2.3: Gary Ray's list of improved floating point techniques (reversed Elias gamma coding etc.).
2.4: Gal's accuracy tables revisited method (French netpage). Both unum and Gary Ray's methods can be combined: if for example a standard 64 bit FP has an additional 16 bit section, the unum bits can be coded using Gary Ray's method etc., a 64 bit standard FP + 16 bits of unum, reversed-Elias-gamma coded, in an 80 bit FP number.
3: Exotic number systems. There are hundreds of them. "Exotic number system" here means anything other than the usual integer or floating point representation.
4: The "Infinity computer". Almost infinite accuracy.
5: Nonstandard floating point representations. But it is better to shift directly to exotic number systems in hardware than to use a totally nonstandard FP.

"Floating point adder design flow" (Michael Parker 2011) is another improved floating point design (claiming 1000 times improved accuracy?). But it uses nonstandard (non-IEEE) FP, as used in FPGAs. However, it can be used for information space expansion too, in FPGAs and other things that do not use standard FP numbers. Some internet message chain said that at Berkeley university an experimental 13 bit FP was made that is like a 32 bit standard FP number but uses only 13 bits, using "bit width reduction" or some other technique; it is a nonstandard FP number also. "A new uncertainty-bearing floating point arithmetic" (2012, Chengpu Wang) is nonstandard FP as well. But standard FP can be improved using an extra unum etc. section in the standard FP number. Combining unum, Gary Ray's models, and mantissa expansion by Boldo / Malcolm or Gal's accuracy tables is perhaps possible? Differential floating point can be used in small 8 bit microfloats. Albert Wegener of Samplify Systems Inc. has patented several hardware (floating point) data compression methods. "Simplified floating point division and square root" (Viitanen). Analog processing can be used, for example in computing floating point numbers: Glenn Cowan, "Analog VLSI math co-processor" 2005. Analog components can now be 16nm small, so analog computing can be much more powerful than digital today. A 16nm delta sigma modulator is pretty fast.
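Here is the delta sigma sketch referred to in the list above: a minimal first-order modulator in Python, showing how a multibit value becomes a 1 bit oversampled stream whose average recovers the input. Real modulators use higher-order loop filters and proper decimation, so this only illustrates the 1 bit principle:

    def dsm_first_order(x, n_samples):
        # Encode the constant input x (in [-1, 1]) as 1 bit samples.
        bits = []
        integrator = 0.0
        feedback = -1.0
        for _ in range(n_samples):
            integrator += x - feedback       # loop filter: one integrator
            bit = 1 if integrator >= 0.0 else 0
            feedback = 1.0 if bit else -1.0  # 1 bit DAC in the feedback path
            bits.append(bit)
        return bits

    stream = dsm_first_order(0.25, 4096)
    decoded = sum(2 * b - 1 for b in stream) / len(stream)  # crude decimation
    print(round(decoded, 3))   # ~0.25: the 1 bit stream carries the value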
It is strange that huge sums are spent on developing data compression methods, getting nearer and nearer the Shannon limit, when a much better "compression ratio" is available if the information space is expanded instead. And it is lossless "compression". So lossless information compression, up to dozens or hundreds of times, is available. This information compression is not traditional compression; the information is just represented in another form than the usual integer or simple floating point. This information space expansion is a much more fruitful method to "compress" information than usual data compression. Others, such as the "multiple base composite integer" etc., may also turn out suitable for this "compression". "Precision arithmetic: a new floating-point arithmetic" (Wang), "Rigorous estimation of floating point round off errors with symbolic expansions" 2015, Intel ADX. There is "posit" arithmetic by John L. Gustafson, which is not improved FP with extra bits like unum but a nonstandard replacement of FP, and "less radical" than unums. Google Groups 11.8.2017 "Beating posits at their own game" presents John G. Savard's (Quadibloc) HGU / HGO and EGU models, versions of posits based on A-law compression. At least mu-law is logarithmic / analog compression, and A-law perhaps also. So the posit / EGU number system could be used in analog hardware like logarithmic computing? A 16nm analog computer computing with the posit / EGU number system would be fast. Google Groups 29.9.2017 "Issues with posits & how to fix them" mentions the "valid", another number system. Posit, valid, EGU or another system that has high accuracy can be used in information space expansion. If multiplying the information space with several mother numbers inside one is impossible, even one number with high accuracy is enough to store information with a high "compression ratio", measured in binary bits. I prefer some high information density / very accurate number system like the "magical skew number system" (Elmasry, Jensen, Katajainen) over traditional integer based systems, so an exotic number system used in hardware is better. "Quantization and greed are good" (Mroueh 2013), Additive Quantization AQ (Martinez). I don't know how "exotic" posit / valid / EGU is, or what its maximum accuracy is in 64 or 80 bits. A differential posit number system, related to posits like DPCM is to PCM, can also be used in "microposit" numbers, i.e. 4, 6 or 8 bit posit / valid / EGU numbers. Can Gary Ray's reversed Elias gamma coding etc. be used in posit / valid / EGU numbers? Dithering and noise shaping like those used in delta sigma modulation, up to 14 bits of accuracy increase, can be used in small microfloats / microposits. Multiple description coding, multiple-base composite integers, Bohr compactification etc. can be used. The Google Groups thread by Rick C. Hodgin, 27.3.2018, "A right alternative to IEEE 754's format", discusses many aspects of computer arithmetic. "The slide number format" (Ignaz Kohlberg) is a logarithmic scale, so is it suitable for analog electronics? "Universal coding of the reals: alternative to IEEE floating points" (Lindstrom) and "Provably correct posit arithmetic with fixed big integer" are other new publications. Bounded Integer Sequence Encoding (BISE) is used in hardware already, but only for integers? "GATbx_V12 genetic algorithm genetic recombination", "IIR filters using stochastic arithmetic".
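Because the posit / EGU connection above runs through A-law and mu-law companding, here is a small Python sketch of the continuous mu-law curve; it shows the logarithmic tapered-accuracy idea, more resolution near zero, that posits apply to their exponent field. MU = 255 is the telephony value, used here only for familiarity:

    import math

    MU = 255.0

    def mu_law_encode(x):
        # x in [-1, 1] -> [-1, 1]; small magnitudes get stretched out.
        return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

    def mu_law_decode(y):
        # Exact inverse of the continuous curve above.
        return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

    for x in (0.001, 0.01, 0.5):
        y = mu_law_encode(x)
        print(x, round(y, 4), abs(mu_law_decode(y) - x) < 1e-12)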
The stackoverflow netpage "Algorithms based on number base systems? (closed)" 2011 lists many different number systems, and the stackexchange question "Data structures: what is good encoding for phi-based balanced ternary?" 2012 covers ternary numbers. There are residual-, redundant-, index calculus- etc. number systems, "A new number system for faster multiplication" (Hashemian 1996), "ZOT-binary: new number system", "Hierarchical residue number systems" (HRNS, Tomczak), "Number representations as purely data structures" (Ivanovic 2002); on the XLNSresearch netpage are logarithmic number systems, which can be used in analog electronics; "Abelian complexity in minimal subshifts" (Saari et al). Analog electronics is manufactured at 16nm now, so an analog circuit can beat digital at its own game. Analog compression can be used, like quadrature PAL / NTSC TV compression or MUSE HDTV compression, and magnetic memory can be analog too using OUM magnetic phase memory, which can store 1000 bits of accuracy in one analog memory "spot" where digital magnetic memory stores 3 bits per "spot". Not to mention analog optical computers. An analog signal is noisy, but noise shaping was invented to improve signal quality. "New approach based on compressive sampling" (2014, Bonavolonta) is sparse sampling that needs only 1/50th of a signal to reconstruct it, so it has a compression ratio of 50 times. The book "Dynamics of number systems - computation with arbitrary precision" by Petr Kurka is a good presentation of different number systems and how to use them. "New formats for computing with real numbers" (Hormigo 2015). Finite State Entropy uses an asymmetric number system. "Asymmetric high-radix signed-digit number systems for carry-free addition", "Dynamical directions in numeration" (Barat), "Dealing with large datasets (by throwing away most of the data)" by A. Heavens is "massive data compression", "Optimal left to right binary signed digit recoding" (Joye 2000), the Adapted Modular Number System (AMNS), "A redundant digit floating point system" (Fahmy 2003), "Arithmetic units for high performance digital signal processor" (Lai 2004), "A floating-point ADC with variable gain pipeline stages" (Kosonen), "Design and implementation of a self-calibrating floating-point analog-to-digital converter" (differential predictive ADC, 2004), the "jittered random sampling" concept (Luo 2014, and Parsons 2014), "Sparse composite quantization", "Pairwise quantization", "Universal rate-efficient scalar quantization", "A compression sampling sparse AR model" (Chang, Ye et al 2014), the "algebraic integer quantization" concept, "Multiple base number systems" (Dimitrov), the beta encoder, "Liao style numbers of differential systems", "Tournament coding of integers" (Teuhola), "A golden ratio notation for the real numbers" (Pietro Di Gianantonio), and "1-bit digital neuromorphic signal path with dither". "Preferred numbers" are numbers which follow a geometric series, like the Renard series and E-series numbers. If integer processing used E-series or Renard series numbers, or other preferred numbers, instead of binary integers, perhaps better accuracy could be achieved in integer or floating point computing. Perhaps the multiple base composite integer, or Dimitrov's multiple base number system, can be combined with preferred numbers etc. "Novaloka maths on the number horizon - beyond the abacus" is another page which deals with number systems.
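As one concrete example of such systems, here is plain balanced ternary (digits -1, 0, +1) in Python; the phi-based variant in the stackexchange thread is more involved, so this sketch only shows the basic signed-digit idea, where negation is just flipping digit signs:

    def to_balanced_ternary(n):
        digits = []                  # least significant digit first
        while n:
            r = n % 3
            if r == 2:               # digit 2 becomes -1 plus a carry
                digits.append(-1)
                n = n // 3 + 1
            else:
                digits.append(r)
                n //= 3
        return digits or [0]

    def from_balanced_ternary(digits):
        return sum(d * 3 ** i for i, d in enumerate(digits))

    print(to_balanced_ternary(100))  # [1, 0, -1, 1, 1] = 1 - 9 + 27 + 81
    assert from_balanced_ternary(to_balanced_ternary(100)) == 100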
Perhaps my cubic bitplane approach (earlier in this text chain) can use cubic vector quantization or Additive Quantization (Martinez) to compress the cubic bitplane or the vectors inside it. The cubic bitplane can also use "iterated function system" vectors, like "L-system" vectors, inside the cubic bitplane to find information among the bits inside the 3D cubic bitplane. If a delta-sigma modulator uses a 1 bit external bitrate but has 16 bit internal multibit or multirate processing, does that mean a 16 to 1 data "compression" ratio? Then (partly analog or direct digital) delta sigma or Takis Zourntos etc. modulators could be used for lossless data compression at a 16:1 ratio, not just in audio etc. applications. Or another ADC structure could be used if it offers similar possibilities. Multiterminal source coding, multiple description coding, "Generalized massive optimal data compression" (A. Heavens), Pixinsight dithering, Gary Ray's model of floating point with the Elias gamma exponent reversed. Microsoft's 8 bit FP format for deep learning has only a 2 bit significand, but if it is in any way comparable to the IEEE standard, those 2 bits can perhaps be expanded up to 39 times, making it a really accurate 8 bit FP format using software mantissa expansion (a decoding sketch for such a microfloat follows this paragraph). Intel has its own deep learning 16 bit FP with a 7 bit mantissa; it is IEEE comparable (similar to the 32 bit FP standard but only 16 bits with a 7 bit significand), so mantissa expansion should work with it. For AI there are shared-exponent fixed and floating point number systems (dynamic fixed point, Flexpoint, Courbariaux 2015, Koster 2017, and the stochastic rounding number system, Gupta 2015); Flexpoint is a 16 bit integer with a 5 bit shared exponent, and there is an 8 bit FP that has one of the exponents shared, etc. Then "Universal lossless coding of sources with large and unbounded alphabets", "Accuracy-guaranteed bit-width optimization" (Minibit and Minibit+). Posit floating point is related to A-law logarithmic encoding according to John G. Savard's EGU floating point, so the Gal's accuracy tables (revisited) method should expand the posit significand also, if S. Boldo's and Malcolm's mantissa expansion methods do not work on posits; Gal's tables work on logarithmic and floating point. The Texas Instruments page "Where will floating point take us?" shows a format with an 8 bit exponent but only 1 implied mantissa bit. "Representing numeric data in 32 bits while preserving 64 bit precision" (Neal 2015) packs 64 bit FP into 32 bits; similarly 128 bit FP could be truncated to 64. IBM mainframes had 64 bit FP truncated to 32. The quire is a version of unum; it accumulates dot products, dot products make vectors, so the cubic bitplane could use quires like other vectors. Does Additive Quantization work with quires? If the quire is a vector-type format it can be used in video and audio (audio like TwinVQ). "Profiling floating point value ranges for reconfigurable implementation" (Brown 2012) has double fixed point, a format between FP and fixed point. Integer accuracy can also be increased: dynamic fixed point with stochastic rounding (Essam 2017) has a 3x8 bit integer with almost 64 bit FP accuracy. One could make a 64, 32, 16 or 8 bit integer with Bounded Integer Sequence Encoding that uses ZDTNS or order-3 Fibonacci (tribonacci; Sayood, Lossless Compression Handbook) or the "constrained triple-base number system", if not using magical skew or Paul Tarau's etc. exotic numbers or Quote notation, plus stochastic rounding, then a Flexpoint-style (shared) exponent of 5-16 bits, and lastly dithering to improve accuracy. It would be a pretty accurate integer.
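Here is the microfloat decoding sketch mentioned above. This text does not give the exact bit layout of Microsoft's deep-learning fp8, so the 1 sign / 5 exponent / 2 mantissa split and the bias of 15 below are assumptions chosen IEEE-style; infinity and NaN handling is left out:

    def decode_fp8(byte):
        # Assumed IEEE-like layout: 1 sign, 5 exponent, 2 mantissa bits.
        sign = -1.0 if byte & 0x80 else 1.0
        exp = (byte >> 2) & 0x1F
        frac = byte & 0x03
        bias = 15
        if exp == 0:                 # subnormal: no implicit leading 1
            return sign * (frac / 4.0) * 2.0 ** (1 - bias)
        return sign * (1.0 + frac / 4.0) * 2.0 ** (exp - bias)

    print(decode_fp8(0x3C))   # 1.0  (exponent 15, fraction 0)
    print(decode_fp8(0x3D))   # 1.25: only 4 significand steps per octave

The two prints show how coarse a 2 bit significand is, and why expanding it in software would matter.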
Quaternary BISE can be used: "A survey of quaternary codes and their binary image" (Özkaya 2009). "Compactification of integers" (Royden Wysoczanski 1996), Bohr compactification. "Twofold fast summation" (Latkin 2014), "Improving floating point compression through binary masks", the Aggregate netpage's "Magic Algorithms", "Unifying bit-width optimization for fixed-point and floating-point design", "Improving floating-point accuracy: lazy approach", bounded floating point, SZ floating point compression, and the 10, 11 and 14 bit FP graphics processing formats (the 14 bit FP has a shared exponent). A differential 4-8 bit floating point microfloat can be useful in video or audio coding, or a 4-8 bit differential posit / quire, or differential bounded FP, EGU, Gary Ray's Elias gamma exponent etc., any differential microfloat-like small bit number. Modern FPUs have 512 bit wide FP processing, so when a small mantissa is expanded to hundreds of bits of accuracy the FPU can process it right away, in a single cycle if the expanded number is 512 bits or smaller. A 16 or 32 bit FP number, first packed with mantissa accuracy, then sent through the internet or memory in 16-32 bit form, can be expanded in the processor from 16 or 32 bits to hundreds of bits, then processed. If data compression is needed, the asymmetric numeral systems that Finite State Entropy uses are the most likely choice, but I don't know how well or badly some of the other methods mentioned in this text work with FSE. FSE asymmetric encoding can be reserved for the final level of "numbers inside each other" encoding; the rest of the number layers, mother numbers etc. can work without it. Delta sigma modulators are 32 bit now, 64 bit tomorrow; only a 4-8 bit DSM is needed per 16 bit information block because noise shaping has at most 10-12 bits of efficiency (60-70 dB), so a DSM signal can use the "numbers inside each other" principle like floating point: a 4-8 bit DSM signal times 16-8 gives a 64 bit DSM signal, so a maximum of about 15 layers of numbers inside mother numbers is possible (a noise shaping sketch follows this paragraph). "Differential ternary" (Nadezda Bazunova), as in "differential calculus D^3 = 0" (Bazunova), "Ternary differential models" (Pilitowska), Liao-style numbers, "A modified adaptive nonuniform sampling delta modulation" (ANSDM), "Fascinating triangular numbers" (Shyamsundergupta). "Cascaded Hadamard based parallel sigma delta modulator" (Alonso), "A novel multi-bit parallel delta/sigma FM-to-digital converter" (Wisland), "A new number system using alternate Fibonacci numbers" (Sinha 2014). Perhaps Quote notation can be used with BISE (bounded integer) encoding, doubling its efficiency. Perhaps Quote notation can be coupled with ternary, asymmetric, skew number system etc. numerals to improve efficiency. If direct digital DSM (Orino) is not used, there are analog VCO-ADC-like structures that have two VCOs per DSM ADC. "Ternary R2R DAC design for improved efficiency", "Edwards and Penney differential equations computing modeling solution". Erkki Hartikainen: "Mathematics without quantors is possible" 2015; never mind his ideological writings, focus only on the scientific text, although it is a bit difficult to find. "Tridimensional trivalent graph", trivalent sets. The "amplituhedron" is used in quantum physics, where it shortens difficult equations; something similar could be used in true compact numbers as a compact number or other, or in the cubic bitplane as a vector or other, or to make an amplituhedron bitplane, or in a quasicrystal bitplane if possible.
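Here is the noise shaping sketch promised above: first-order error feedback in Python, where each sample's quantization error is subtracted from the next sample, which pushes the quantization noise toward high frequencies (a 1 - z^-1 noise transfer function) and is the mechanism behind the in-band decibel figures mentioned. The step size is arbitrary:

    def noise_shape(samples, step=0.25):
        # Quantize coarsely, remember the error, feed it back.
        out, err = [], 0.0
        for x in samples:
            v = x - err                    # subtract previous error
            q = round(v / step) * step     # coarse midtread quantizer
            err = q - v
            out.append(q)
        return out

    print(noise_shape([0.3] * 8))
    # [0.25, 0.25, 0.5, 0.25, 0.25, ...]: each output is coarse, but the
    # running average stays near 0.3 because the error never accumulates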
A 64 bit integer has an accuracy of 64 bits and a 64 bit FP an accuracy of 53 bits (but a large range). But when a 64 bit number has an accuracy of 2000 bits, that is an enormous "information compression" ratio. This accuracy is available in the standard FP number. Why is it not used? I have written about this information space expansion thing for two years now. I have made several simple calculation mistakes in those previous texts, because I write them in a hurry and then don't check what I have written, so sometimes there are very clear counting mistakes in the text, wrong numbers in wrong places etc. I am not a mathematician, so I often don't actually understand the papers I read on the internet, and there may be some fundamental mistakes in the texts I have written. I wrote them so that someone who understands math would perhaps get some ideas from my incoherent writings; I myself don't know whether my ideas are right or wrong, or whether I am totally wrong or there are perhaps some useful ideas in my texts. In previous texts I used a star mark for two's complement, so 2 plus a star mark and then 24 meant two's complement with 24 exponents, but now those star marks seem to have disappeared from my text somehow, so it now reads 224; with the two's complement marks gone, the text looks quite strange wherever two's complement is in question. So when obscure three-digit sequences that do not seem to make sense appear in my text, it is probably a two's complement with the star mark missing between the one-digit and two-digit numbers.