In the 2006 text “Gradual and tapered overflow and underflow: a functional differential equation and its approximation” it is said that one floating point number can have an overflow threshold of about 10 to the power of 600 000 000, meaning a number with some 600 000 000 decimal digits. I think this method is tapered floating point, which is a version of unum / ubox computing. So all the information of the world could in principle be encoded into just one floating point number, within that overflow accuracy. This would be one version of almost endless data compression. That text uses differential equations; could differential equations, linear equations or polynomial equations be used together with ultrafilters and zero sets to make other forms of almost endless data compression as well? Texts related to this: “Solver for systems of linear equations with infinite precision on a GPU cluster” by Jiri Khun, “Ultrafilters, compactness, and the Stone-Čech compactification” Bar-Natan 1993, “Equivalence and zero sets of certain maps in finite dimensions” Michal Feckan, “Zero sets and factorization of polynomials of two variables” 2012, “P-adic numbers” Jan-Hendrik Evertse 2011, “Infinite dimensional analysis” Aliprantis & Border, and the question “Are zero sets of polynomial equations closed because of the fundamental theorem of algebra?”

Instead of truly endless data compression, the compression can be “almost endless”: it could for example contain all the content of the internet, and it can be inaccurate. Error correction codes are efficient nowadays, and even if the information is only partially usable after error correction, it is still worth having, because the compression ratio is so huge. Instead of one floating point number, 1000 million or 1 000 000 million floating point numbers could be used to store all the information of the world (if floating point numbers are the storage / compression format). That is about 64 gigabits or 64 terabits (8 gigabytes or 8 terabytes), which would be enough to store all the content of the internet in one quite small data store. So instead of one gigantic, long computation there would be a large number of shorter and easier (faster) computations.

There is also the infinity computer principle in four different versions: the Infinity Computer by Ya. D. Sergeyev, the ReAl computer architecture by W. Matthes, the Perspex machine by J. A. D. W. Anderson, and Oswaldo Cadenas's patent WO 2008078098A1. But I still think that zero sets are the key to endless or almost endless data compression, together with polynomial / differential / linear representations and equations, or something else; ultrafilters with zero sets, or something else.

Efficient number systems such as Paul Tarau's number systems or the “Magical skew number system” by Elmasry, Jensen and Katajainen could be used in information compression, or some other efficient number system. For example “Between fixed and floating point” by Dr. Gary Ray on the chipdesignmag netpage describes a floating point number with a “reversed Elias gamma exponent” (a small sketch of that kind of self-delimiting, tapered exponent coding is below). Using Paul Tarau's numbers, the magical skew number system, or some other number system in place of the reversed Elias gamma code, in the exponent or in other roles, could give an efficient floating point / tapered floating point / unum / posit number. Also on the Google Groups forums there is the message chain “Beating posits at their own game”, 11.8.2017, by John G. Savard, which describes Extreme Gradual Underflow (EGU); if that page does not show in the Google browser, it can be seen in the Microsoft browser. Other texts: “Simplified floating point division and square root”, “Bounded floating point”, and “Twofold fast summation” by Latkin.
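To make the tapered-exponent idea concrete, here is a minimal Python sketch of Elias gamma coding applied to an exponent field. This is only the general mechanism behind a “reversed Elias gamma exponent”, not the exact format from Gary Ray's article or Savard's EGU, and the function names are my own. Small exponents get short codes (leaving more bits for the mantissa), while an exponent in the hundreds of millions still fits in a few dozen bits:

```python
def elias_gamma_encode(n: int) -> str:
    # Elias gamma code: floor(log2 n) zero bits, then n written in binary.
    assert n >= 1
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits: str) -> int:
    # Count leading zeros, then read that many + 1 bits as the value.
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2)

# Small exponents cost few bits, astronomically large ones are still encodable:
for e in (1, 3, 20, 600_000_000):
    code = elias_gamma_encode(e)
    assert elias_gamma_decode(code) == e
    print(e, "->", len(code), "code bits")
```

An exponent of 600 000 000 needs only 59 code bits here, and even the binary exponent of a number near 10^600 000 000 (about 2·10^9) would need only about 61 code bits, which illustrates how a tapered format can reach such overflow thresholds while fixed-width IEEE exponents cannot.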
Other texts: “New number systems seek their lost primes”, “Base infinity number system” by Eric James Parfitt, “Peculiar pattern found in ‘random’ prime numbers”, and “Hyperreal structures arising from logarithm”. Benford's law is explained in the text “DSP guide chapter 34: explaining Benford's law” by Steven W. Smith (the leading digit d occurs with probability log10(1 + 1/d), so about 30 % of numbers begin with 1). Perhaps the same style of explanation (Benford's law is an antilogarithm thing according to Smith, and it appears everywhere in nature) could also explain the “hyperreal structures from logarithm” and the “peculiar pattern in prime numbers”. Perhaps that information helps to make almost endless data compression. Wikipedia: “Ideal (ring theory)”, “Ideal number”.

Because analog circuits are made at 16 nm nowadays, and analog circuits at 5 nm are planned, data compression could use analog processing instead of digital, for example the KLT transform or the anamorphic stretch transform, which work well in the analog domain but not in the digital one. A video or audio codec, or any signal processing, could use an analog signal processor made at 16-5 nm. Every PC could have analog circuits and a signal processor doing KLT or AST, just like an MP3 player has an analog delta-sigma (DSM) modulator. Analog processing could be used in almost endless data compression as well.

John G. Savard's floating point model is based on a logarithmic system (A-law), and logarithms are fractional values, not integers. So is Savard's floating point a “logarithmic floating point” that uses logarithmic fractions instead of integers, in other words also a “fractional floating point” system? Is Savard's model the key to floating point computing with fractional / logarithmic values instead of integer values? Is it similar to bounded floating point or unum computing? Modern GPUs have support for 8-bit floating point. The G.711 telephone standard maps 16-bit samples to 13 bits, and those to 8-bit logarithmic codes (a small sketch of this kind of logarithmic companding is below, after this passage). Perhaps a similar 8-bit minifloat format could be made using Savard's model. “Sparse sampling” in compressive sensing is also a way to make information compact: in some cases as little as one 250th of the samples is enough to reconstruct the signal successfully.

Could Fibonacci numbers also be used with floating point numbers, the way John G. Savard's FP / tapered floating point uses logarithms? The Fibonacci series is the integer counterpart of the fractional golden ratio / tau (about 1.6): consecutive Fibonacci numbers grow by a nearly constant ratio, so the series forms a geometric, effectively logarithmic, scale. In the text “DC accurate 32 bit DAC achieves 32 bit resolution” 2008 it is said that this DAC model has theoretically infinite DNL and monotonicity; if something has infinite accuracy, it could be used in data compression. The mathematician Srinivasa Ramanujan studied mathematical models that have something to do with infinity; his biopic is named “The man who knew infinity”. If John G. Savard's floating point HGU / EGU system is similar to logarithmic A-law and mu-law companding, could things like “Asymptotically optimal scalable coding for minimum weighted mean square error” 2001, “Geometric piecewise uniform lattice vector quantization of the memoryless gaussian source” 2011, “Spherical logarithmic quantization” by Matschkal, “On logarithmic spherical vector quantization” (LSVQ), “Gosset low complexity vector quantization” (GLCVQ) and “Lattice spherical vector quantization” 2011 by Krueger be used with a floating point number system, the way Savard uses logarithmic companding in floating point numbers? Most of those have something to do with A-law or mu-law logarithmic companding, like Savard's floating point.
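Here is a minimal sketch of the logarithmic companding principle behind G.711-style coding. It is not the actual G.711 bit layout (G.711 uses a segmented, piecewise-linear approximation of the curve); it just applies the continuous μ-law formula with μ = 255 and my own function names, to show how an 8-bit logarithmic code keeps fine resolution near zero and coarse steps for large values, much like the mantissa/exponent split in a float:

```python
import math

MU = 255.0  # mu-law companding parameter (as in the mu-law variant of G.711)

def mu_law_compress(x: float) -> int:
    # Map a linear sample x in [-1, 1] to an 8-bit code 0..255, logarithmically.
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return int(round((y + 1.0) / 2.0 * 255.0))

def mu_law_expand(code: int) -> float:
    # Approximate inverse: 8-bit code back to a linear value in [-1, 1].
    y = code / 255.0 * 2.0 - 1.0
    return math.copysign((math.pow(1.0 + MU, abs(y)) - 1.0) / MU, y)

# Small samples keep fine resolution, large samples get coarse steps:
for x in (0.001, 0.01, 0.1, 0.9):
    c = mu_law_compress(x)
    print(x, c, round(mu_law_expand(c), 5))
```

Replacing this μ-law curve with some other logarithmic quantizer (spherical, lattice, etc.) is essentially the substitution the questions above are asking about.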
So would a floating point system that uses spherical logarithmic quantization instead of Savard's version of logarithmic companding, or any other logarithmic quantization / companding, be possible with floating point numbers? Would it increase accuracy dramatically? I don't know. There is also “Low delay audio compression using predictive coding” 2002, which has the “weighted cascaded least mean squares” (WCLMS) principle; whether this has anything to do with floating point number systems, I don't know. On the “neuraloutlet wordpress” (com) netpage there are “metallic numbers” and the “U-value number system”. Whether those are logarithmic systems, and whether they could be used with floating point, I don't know.

On the “shyamsundergupta number recreations” netpage there is a section “fascinating triangular numbers”, where the number sequence 1, 11, 111 etc. consists of triangular numbers in base 9, so binary compression of them is easy (a short check of this base-9 claim is at the end of this passage). In the section “unique numbers” on the same netpage it is noted that the digital root of unique numbers is 9, and that base / number 9 is widely connected to other properties of unique numbers as well. So is base 9 then the best integer base for computing, rather than base 10 or 2, or 6 or 12 etc.? In the section “curious properties of 153” there is “the curious properties of binary 153”, where the number 153 forms an “octagonal binary ring”; that reminds me of the “Z4 cycle code” that is used for example for turning quaternary values into binary. So is it possible to use the number / base 9 in data compression, or the value 153 with this “binary ring” property of 8-bit / 255 values circling around, if data compression techniques can use them?

Other: “A bridge between numeration systems and graph directed iterated function systems”, “preferred numbers”, “State estimation of chaotic Lurie system with logarithmic quantization”, logarithmic cubic vector quantization (LCVQ), “Logarithmic quantization in the least mean squares algorithm” Aldajani, “A logarithmic quantization index modulation”, “Semi-logarithmic and hybrid quantization of laplacian source”, “Finite gain stabilisation with logarithmic quantization”, and “Wavelet audio coding by fixed point short block processing”. Can any of those be used with a floating point number system, the way Savard uses a logarithm in his HGU / EGU? Preferred numbers are “logarithmic” and used in parcel sizes; can preferred numbers be used in a floating point system like Savard's logarithm?

In the text “Making floating point math highly efficient for AI hardware” 2018 by Jeff Johnson the following FP techniques are listed: nonlinear significand maps / LNS (Kingsbury, Rayner 1971), binary stochastic numbers (Gaines 1969), entropy coding / tapered floating point (Morris 1971) (if tapered floating point is data compression, how about using Finite State Entropy in FP systems?), reciprocal closure (2015), posit (2018), fraction map significand (“Universal coding of the reals” 2018, Lindstrom), Kulisch accumulation, and exact log-linear multiply-add (ELMA). The ELMA example there has 8 bits, 4 bits of accuracy and 24 bits of range, so it is suitable for ADPCM-style systems. That text was written only a few months ago, so it is the latest thing in FP research. On the XLNSresearch (com) netpage there is a list of logarithmic number system studies, such as multidimensional LNS, CORDIC-based logarithmic arithmetic, index calculus DBNS, hybrid FP and LNS, multi-operand LNS, and “Architectures for logarithmic addition in integer rings and Galois fields”. If Savard's model is FP with a logarithm, how about using cubic logarithmic, spherical logarithmic, or pyramid logarithmic quantization with floating point (“The pyramid quantized Weisfeiler-Lehman graph representation” 2014)?
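As a quick check of the base-9 repunit claim from the Gupta netpage above (my own throwaway script, not from that page): the base-9 repunit with n ones equals (9^n − 1)/8, and this is the triangular number T(k) with k = (3^n − 1)/2, since k(k+1)/2 = ((3^n − 1)/2)((3^n + 1)/2)/2 = (9^n − 1)/8. So every 1, 11, 111, ... written in base 9 really is triangular:

```python
def triangular_index(t: int):
    # Return k if t == k*(k+1)/2 for some integer k, else None.
    k = int((2 * t) ** 0.5)
    for cand in (k - 1, k, k + 1):
        if cand >= 0 and cand * (cand + 1) // 2 == t:
            return cand
    return None

# Base-9 repunits 1, 11, 111, ... have the value (9**n - 1) // 8:
for n in range(1, 7):
    repunit = (9 ** n - 1) // 8   # value of "1" repeated n times in base 9
    print("1" * n, "=", repunit, "= T(", triangular_index(repunit), ")")
```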
Other: Stackoverflow 2018 “Algorithm-compression by quazi-logarithmic scale”, “High resolution FP ADC” Nandrakumar, “Parametrizable CORDIC based FP library”, the FloPoCo library, “A new approach to data conversion: Direct analog-to-residual conversion”, and “Complex LNS arithmetic using high-radix redundant CORDIC algorithms”. Lucian Jurca has written texts about hybrid LNS / FP. Bounded FP and block FP are other new FP systems. Those hybrid FP/LNS systems combine logarithmic and integer (FP) numbers, but Savard's model is FP numbers (FP “integers”) that just take logarithmic “steps”. So could cubic, spherical, pyramid, residual, multi-operand, index calculus or multidimensional logarithmic quantization, or CORDIC (actually the BKM algorithm would be better?), or the Fibonacci sequence, be used in a similar way to A-law with floating point? There are logarithmic systems like complex / LNS hybrids, Monte Carlo LNS, denormal LNS, interval (arithmetic) LNS, hybrid real / LNS numbers, serial LNS, two-dimensional LNS (2DLNS), signed digit LNS, semilogarithmic LNS, and multi-operand LNS. Combining those with floating point, like Savard combines A-law with floating point, is perhaps possible.

FP numbers also have a huge disparity between exponent range and mantissa accuracy. The mantissa could either be expanded using software tricks (up to 39 times) or data compression could be applied to the mantissa only, not to the exponent (a small sketch of one such software trick is at the end of this passage). Mantissa accuracy then comes closer to the exponent range, and because data compression is used only in the mantissa, processing stays fast. ADPCM / delta-compression style lossy compression can be used, like in ultra low delay audio compression methods that have 1 millisecond or less processing time, and other similar methods; bit truncation, Finite State Entropy and other data compression as well. “Quantization and greed are good” Mroueh 2013. AI research is using 8- and 4-bit minifloats (IBM, the Clover library etc.), with stochastic rounding, “dynamic point integer” and similar methods, so a data-compressed mantissa would suit them. There is also multiple description coding (delta sigma multiple description coding), and the multiple base composite integer on the MROB (com) netpage, where “lexicographic strings” and other usable methods are described too. The Munafo PT number system uses 17 symbols, and 15 × 17 = 255, which is near the 256 values of 8 bits, so perhaps 8-bit chunks of binary data are suitable for the Munafo PT number system.

Googling “one bit covariance estimation” brings many results. Other titles: “Sparse composite quantization”, “Pairwise quantization”, “Robust 1-bit compressive sampling via sparse vectors”, space time DSM vector quantization (2002), “Time-quantized frequency modulation with time dispersive codes” Hawksford, “Mean interleaved round robin algorithm” 2015, “OMDRRS algorithm for CPU” 2016, alias free short time Fourier transform, sparse fractional Fourier transform, implicit sparse code hashing, decomposed algebraic integer quantization, multiple description coding DSM, “Space vector based dithered DSM”, nonuniform sampling DSM, “Design of multi-bit multiple-phase VCO based ADC”, and “Multi-amplitude minimum shift keying format” 2008.
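As an example of the kind of software trick that widens mantissa accuracy without touching the exponent (in the spirit of the “Twofold fast summation” text by Latkin mentioned earlier), here is the standard Knuth two-sum error-free transformation as a minimal sketch of my own, not Latkin's code:

```python
def two_sum(a: float, b: float):
    # Knuth's error-free transformation: s = fl(a + b) and e is the exact
    # rounding error, so a + b == s + e holds exactly in floating point.
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

# The pair (s, e) acts like one value with a roughly doubled mantissa:
s, e = two_sum(1.0, 1e-20)
print(s, e)   # 1.0 1e-20 -- the tiny addend is not lost, it survives in e
```

Chaining such pairs (double-double / twofold arithmetic) is one way the mantissa can be "expanded with software tricks" while the exponent field stays as it is.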
I also tried to google zero sets together with words such as Pisot number and Parry number, to see whether those are any help for endless data compression, and I found the texts “Ito-Sadaharo numbers vs. Parry numbers”, “Palindromic complexity of infinite words associated with simple Parry numbers”, “A family of non-sofic beta-expansions”, “Combinatorics, automata and number theory”, “Beta-shifts, their languages, and computability” and “Abelian complexity of infinite words associated with quadratic Parry numbers”, although I do not understand those mathematical formulas.