 # Endless data compression is possible

Endless data compression has been possible since 2006: “Gradual and tapered overflow and underflow: a functional differential equation and its approximation”. In it is “tapered overflow and underflow”, which has accuracy of 10 to the power of 600 000 000. That is much higher than 2 to the power of 2000 (a floating point number whose accuracy is extended to 2000 bits from 64-bit FP, from 39X increased mantissa accuracy, using software algorithms). That FP number accuracy in this text from 2006 means that all information in the universe can be encoded in one floating point number, I think. This data compression is not endless, but practically it is, when all possible information in the world fits in. There should be a floating point unit that uses this endless compression (accuracy increase) capability: either a digital FPU, analog FPU, optical FPU, or quantum FPU.

If a floating point number has endless accuracy, it means that an endless amount of information can be encoded in the accuracy of that FP number. Information in floating point form can be text, sound, video etc. Information can be in blocks of bits, 16 bits + 16 bits + 16 bits etc. in a long (endless) chain, where one 16-bit block is a pixel in a picture or a hertz in sound. Or the whole time scale / frequency spectrum is in one huge floating point number.

Another way to get endless accuracy: “Solver for systems of linear equations with infinite precision on GPU cluster” by Jiri Khun. If accuracy is infinite, an infinite amount of information can be encoded in linear equations, or in any mathematical entity that has endless accuracy. Or in the floating point numbers / differential equations of that 2006 text.
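The 16-bit block chain above can be sketched with ordinary arbitrary-precision integers; here a Python int stands in for the “one huge number” (the extended-precision floating point arithmetic of the cited 2006 paper is not modeled):

```python
def pack_blocks(blocks):
    """Concatenate 16-bit blocks into one arbitrary-precision integer."""
    x = 0
    for b in blocks:
        assert 0 <= b < 1 << 16          # each block must fit in 16 bits
        x = (x << 16) | b
    return x, len(blocks)

def unpack_blocks(x, n):
    """Recover the n 16-bit blocks from the packed integer."""
    out = []
    for _ in range(n):
        out.append(x & 0xFFFF)
        x >>= 16
    return out[::-1]

samples = [40000, 123, 65535, 0]          # e.g. pixel or audio sample values
packed, count = pack_blocks(samples)
assert unpack_blocks(packed, count) == samples
```

Note that packing alone does not shrink the data; the interesting part is the claimed accuracy extension, which this sketch does not attempt.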
So information can be encoded in floating point numbers, in differential equations, or in linear equations.
“Ultrafilters, compactness and Stone-Čech compactification” 1993, “Equivalence of zero sets of certain infinite dimensions”, “Zero sets and factorization of polynomials of two variables”, “P-adic numbers” Jan-Hendrik Evertse, “Infinite dimensional analysis”.
“Bendable polynomials” may help, or not, in endless data compression.
Infinity computer concept: Ya. D. Sergeyev; ReAl computer architecture: W. Matthes; Perspex machine: J. A. D. W. Anderson; patent by Oswaldo Cadenas, WO 2008078098A1.
A special IC chip that does this endless data compression / floating point processing is needed. It may use as much electricity as a CPU or GPU, be made at 5-nanometer tech, and need (liquid?) cooling because it runs red hot when it works as fast as possible. Optical circuits do not make heat. An analog floating point / analog computing IC can be used also; 16 nanometers is the smallest analog circuit size made in Japan.
Formats other than standard (IEEE) floating point can be used: there is posit, quire, fractional floating point (Richey), Gary Ray’s “floating point with bit reversed Elias gamma coded exponent”, “Dual fixed point, alternative to floating point”, and perhaps others also. “Multiple base composite integer” is on the MROB com netpage. “Canonical signed digit” is used in optical computing etc. The 1997 text “Floating-point calculations in bit serial SIMD” has a “shared optical link”.
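For reference, plain Elias gamma coding (the base of the bit-reversed exponent format mentioned above; the bit-reversal variant itself is not shown) can be sketched as:

```python
def gamma_encode(n):
    """Elias gamma code of a positive integer, as a bit string."""
    assert n >= 1
    binary = bin(n)[2:]                       # e.g. 9 -> "1001"
    return "0" * (len(binary) - 1) + binary   # zeros announce the length

def gamma_decode(bits):
    """Decode one gamma-coded integer from the front of a bit string."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2)
    return value, bits[2 * zeros + 1:]        # value plus the leftover bits

code = "".join(gamma_encode(n) for n in [1, 9, 4])
decoded = []
while code:
    n, code = gamma_decode(code)
    decoded.append(n)
assert decoded == [1, 9, 4]
```

Small numbers get short codes, which is why exponents (usually small) are a natural target for it.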
Is it better to use a 16-bit floating point number and extend it to infinity, or a 512-bit FP number (some CPUs have 512-bit floating point units)? Which is faster when the amount of compressed information, the accuracy, and the compression time are compared? Also, not just increasing accuracy: traditional data compression methods can be used together with accuracy increase. If there is data compression that shrinks a 64-bit FP number to 32 bits, and accuracy extension that gives a 64-bit FP number 128-bit accuracy, those can be used together. If 64 bits has the accuracy of 128 bits, and it is then compressed to 32 bits, the result is a 32-bit number that has 128-bit accuracy, achieved using both lossless data compression and accuracy extension.
Error correcting codes can also be used, turning error correction into data compression: only just enough information is stored that the error correcting code can reconstruct the rest.
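The “store just enough for reconstruction” idea can be sketched in its minimal form with a single XOR parity share, as used in erasure coding; real systems would use Reed-Solomon or similar:

```python
def make_parity(shares):
    """XOR parity over equal-length byte shares."""
    parity = bytes(len(shares[0]))
    for s in shares:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

data = [b"spam", b"eggs", b"hams"]
parity = make_parity(data)

# Lose share 1, then rebuild it from the survivors plus the parity share.
rebuilt = make_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Here any one missing share is recoverable, so not every share needs to be stored with full reliability.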
The “Anamorphic Stretch Transform” is a data compression method that can be used in the analog domain (or digital). Xampling is analog sampling. Analog information can be stored in an analog memory chip (IBM). Analog-to-information compression (AIC) is sampling below the Nyquist rate. The short-time Fourier transform, sparse Fourier transform, and other sparse data methods can also be used.
There is a post “Quasicrystals and data compression” in the Robin Hood Coop forums.
There is the book “Dynamics of number systems” by Petr Kurka, and “Dynamical directions in numeration” (Barat), so there are more efficient number systems than just binary integer / floating point. More is on the internet: the “magical skew number system”, hereditary number systems and giant numbers (Paul Tarau), the U-value number system (Neuraloutlet wordpress com), the PT number system (MROB com netpage), and Fibonacci, Tribonacci, ternary tau, and zero displacement ternary number systems etc.
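As an example of one of the listed systems, the Fibonacci (Zeckendorf) representation can be sketched as follows; every positive integer is a sum of non-consecutive Fibonacci numbers:

```python
def to_zeckendorf(n):
    """Greedy Zeckendorf digits of a positive integer, most significant first."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs[:-1]):   # take each Fibonacci number if it fits
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return digits

def from_zeckendorf(digits):
    """Sum the Fibonacci numbers marked by the digits."""
    fibs = [1, 2]
    while len(fibs) < len(digits):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for d, f in zip(reversed(digits), fibs) if d)

assert to_zeckendorf(100) == [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]   # 89 + 8 + 3
assert from_zeckendorf(to_zeckendorf(100)) == 100
```

The no-two-adjacent-ones property makes the code self-delimiting, which is what Fibonacci coding exploits.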
" One hot residue number system” and “skew number system” are sometimes used with delta sigma modulation (and is “one bit modulation without delta sigma” by Takis Zourntos), and DSM is logarithmic system, and logarithmic number systems are versions of floating point and vice versa, so perhaps one hot residue floating point and skew number system floating point are possible also. Pseudo parallel DSM also exist so perhaps there can be pseudo parallel floating point also. Vector quantization / compression is sometimes used with DSM, so floating point can use it also? Additive Quantization (AQ) is extreme vector compression. Logarithmic number systems are many, in “XLNSresearch com” netpage. In book “Analog circuit design: low poser low voltage…” 2013 is “Analog floating point converters”. VCO-ADC is analog to digital converter using delta sigma modulation, so logarithmic, so then floating point. Can there be VCO-floating point number system and this VCO (or digital oscillator, ring oscillator etc. like delta sigma modulation uses) somehow increases floating point efficiency? Floating point number systems with delta sigma modulation has been studied since 1992.
“Spacetime coded vector quantization” has been proposed for telecommunications, so spacetime coded floating point is perhaps also possible. Could a large amount of information, for example a two-hour movie coded to 8K video, fit in one large spacetime encoded floating point number? Or be spacetime encoded into some other number system. See “Unexplored areas in data compression” by Colt McAnlis on the Medium com netpage. Also: the Horn 8 holographic processor, mathematical holograms (for data compression), and the holographic universe.
A programming language like APL uses one symbol to describe a vector, matrix, or function. That makes information very dense, when a complex mathematical function needs only one symbol to describe itself. So languages like APL are “data compression” in themselves, and if floating point accuracy expansion needs complex functions, only one symbol is needed per function. See “ZKPDL: a language-based system for efficient zero-knowledge proofs and electronic cash”. Can quasicrystals perhaps be used in optical memory? In holographic memory? There is a post in the Robin Hood Coop forums, “Hardware data compression chip”.
Can multiple description coding be used with floating point? It has been used together with delta sigma modulation. See also “Multiple-base number system: theory and applications”, the “Bayesian inference engine”, hybrid number systems, the double base number system, and the two-dimensional number system.
There is also binary scaling, block floating point scaling, the floating point modulo operation, and Quote notation (Hehner). “Hardware-based floating-point design flow” 2011, Altera (Parker), is for designing FP adders. Also “A fused hybrid floating-point and fixed-point dot product”.
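Block floating point, named above, can be sketched like this: one exponent is shared by the whole block and each element keeps only a small integer mantissa (8 mantissa bits here is an arbitrary choice):

```python
import math

def bfp_encode(values, mantissa_bits=8):
    """One shared exponent for the block; elements become integer mantissas."""
    peak = max(abs(v) for v in values)
    _, e = math.frexp(peak)               # peak fits just below 2**e
    shift = e - (mantissa_bits - 1)       # scale the block into mantissa range
    mantissas = [round(v / 2**shift) for v in values]
    return mantissas, shift

def bfp_decode(mantissas, shift):
    return [m * 2**shift for m in mantissas]

vals = [1000.0, -37.5, 2.0]
m, s = bfp_encode(vals)                   # here s == 3, quantization step 8
approx = bfp_decode(m, s)
assert approx[0] == 1000.0                # the large value survives exactly
assert all(abs(a - v) <= 2**s / 2 for a, v in zip(approx, vals))
```

The saving is that the exponent is stored once per block instead of once per value; the cost is that small values next to large ones lose precision.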
If a number can be losslessly compressed to (on average) 20X smaller (the compression ratio depends on how much repeatable information the data has; random information cannot be compressed), then a 100-bit number can hold 2000 bits of information. But suppose this 100-bit number has just 500 bits inside it, not 2000. Now those 500 bits are divided into five 100-bit numbers. Because those numbers have used only 500 bits at most of the 2000-bit information capacity, and there are still 1500 bits left, it is possible that those five 100-bit numbers each have 1500 bits or more of information inside them. Now the information capacity is not 2000 bits in a 100-bit number, but more than 5 X 1500 bits. Information capacity expands enormously. This is the “numbers inside itself” principle. The first 100-bit number that starts it all is the “mother number” that has “daughter numbers” inside itself; they may become “second mothers” if they too use the numbers-inside-itself principle, until the accuracy potential wears out (in this case it is 2000 bits), or there is no sense in using this principle because there is no significant increase in data capacity, or computing the numbers inside itself takes too long compared to the additional data space attained.
This numbers-inside-itself principle can be used either with data compressed numbers or other compressed information, and also with accuracy-increased numbers, for example floating point numbers whose accuracy has been increased.
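On the premise above that the compression ratio depends on redundancy, and that random information cannot be compressed, a quick check with ordinary zlib:

```python
import os
import zlib

repetitive = b"ABCD" * 1000       # 4000 bytes of highly repeatable data
random_data = os.urandom(4000)    # 4000 bytes of noise

# Repetitive data shrinks enormously; random data gains nothing
# (zlib even adds a few bytes of framing overhead to it).
assert len(zlib.compress(repetitive)) < 100
assert len(zlib.compress(random_data)) >= 4000
```

This is standard lossless compression only; the chained capacity estimates above are the text’s own reasoning and are not checked here.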

There are texts in the Robin Hood Coop forums about (almost infinite) data compression and related topics: “Endless data compression is possible”, “Hardware data compression chip”, “Quasicrystals and data compression”, “Using floating point numbers as information storage and data compression”, “Data compression using delta (differential) values, signal processor and delta - sigma modulation”, “Petabyte SSD / memory card and DVD & CD audio disc”, and “Cheap and effective 60 - 72 fps video coding”.
There are residue number systems, but also redundant number systems etc. that can be used in data compression. Mathematical entities like quasicrystals, the amplituhedron, associahedron, cyclohedron, permutahedron, and things related to them can perhaps be used in data compression. There is the Octasys Comp data compression method (van den Boom), and “Extreme-weighted feature extraction for functional data”. Odelta compression (Gurulogic OY) is delta compression.
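A residue number system, mentioned above, can be sketched with small coprime moduli and Chinese remainder theorem reconstruction (the moduli 13, 17, 19 are an arbitrary choice; `pow(x, -1, m)` needs Python 3.8+):

```python
MODULI = (13, 17, 19)        # pairwise coprime; range = 13 * 17 * 19 = 4199

def to_rns(x):
    """Represent x as its residues modulo each base."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Chinese remainder theorem reconstruction."""
    M = 1
    for m in MODULI:
        M *= m
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = 123, 29
# Multiply channel-wise: no carries propagate between residue channels.
product = tuple((x * y) % m for x, y, m in zip(to_rns(a), to_rns(b), MODULI))
assert from_rns(product) == a * b      # 3567 < 4199, so the result fits
```

The carry-free add/multiply is what makes RNS attractive in hardware; comparison and division are the awkward operations.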

If, as in the 2006 text “Gradual and tapered overflow and underflow”, it is possible to have almost endless accuracy in floating point numbers, that means almost endless data storing capacity in one floating point number. “Solver for systems of linear equations with infinite precision” is another method. Linear and differential equations are needed in this data “compression” method (it is accuracy increasing, not compression). Analog circuits, and optical analog circuits, are a fast and efficient way to increase accuracy / solve linear and differential equations. So an endless amount of information can be encoded with those equations, using optical analog circuits that are fast and produce less heat than electric circuits.
In the book “Analog circuit design: low power low voltage” 2013 is an “Analog floating point converter”, and floating point number systems have been studied with delta sigma modulators for about 25 years already, so analog circuits and (digital) number systems can work with each other.
Is it possible to use a method like the tuning of an analog TV? An analog TV set has automatic tuning that finds the right channel (frequency) and then fine-tunes it so that the picture is sharp. Can such an analog tuning method be used in an optical analog chip that makes analog “fine tuning” to high accuracy, of linear and differential equations and floating point numbers? Gal’s accuracy tables etc. Bendable polynomials? Ultrafilters? Zero sets? “Bayesian compressed regression”. Analog-to-digital conversion uses the “time interleaved ADC” to increase accuracy, so perhaps a time interleaved method can be used here also. Or a pseudo-parallel one (“pseudo-parallel delta sigma modulation” for example). Or whatever system optical communication links use to increase efficiency.

Some number systems use fractional values, like logarithmic number systems. Logarithmic and floating point are related to each other. Can the accuracy of logarithmic number systems be increased like floating point? Gal’s accuracy tables method (revisited) works on logarithms. Do other FP accuracy increasing methods work with logarithms also, like expanding floating point mantissa accuracy up to 39 times (Y. Hida etc.)? There are fractional number systems, for example the PT number system on the MROB com netpage. Analog computing can use fractional values, so logarithmic and fractional number systems work with it. The XLNSresearch com netpage has a large collection of different logarithmic number systems.

A digital-analog hybrid computer / chip can be made also. The smallest electric analog circuits made are 5 nm, although those circuits are just one specific analog type; still, at least some analog circuits can be made using 5 nm manufacturing, so digital-analog hybrid computing can be used. Although optical computing may be faster than electric. Or some really efficient non-binary number system can be used to store information.
Computing with those can be done on an analog computer, and the result of the computation can be stored as a binary digital value. Those number systems sometimes have much higher accuracy, even in binary form, than a normal binary integer. Or analog memory can be used. The analog and digital chips can be in the same plastic IC package, or one IC can have both analog and digital circuits (a hybrid chip). Vectors are used in floating point computing, and vectors can be analog too. Different filters are also used in digital information / signal processing; filters can be analog too (including analog optical).
Are skew number systems related to floating point and logarithmic number systems? Or not? Could the “magical skew number system” etc. be used instead of floating point?
There is also a text “Using integer ALU for data compression and storage” on the Robin Hood Coop forums netpage.
There are Wikipedia pages “Category: infinite group theory”, “Gromov’s theorem on groups of polynomial growth”, “Cyclic groups”, “Invariant theory”. Other netpages/writings: “Polynomial representations of the general linear group”, “Representations of the infinite symmetric group”, “Infinite-dimensional irreducible representations of Lie algebra”, “Deligne categories and representations of infinite polynomial rings”, “Polynomial representations of symbolic groups”, “Non-zero real numbers under multiplication form abelian group”. And bendable polynomials. Whether any of those texts has something to do with (almost) endless data compression, or with almost endless accuracy increasing of numbers or other information, I don’t know.
Asymmetric numeral systems are used in finite state entropy (FSE) data compression, so using them instead of floating point numbers and integers gives instant data compression with FSE. But some number systems that are not binary integer can have very high accuracy, so using them is “data compression” also, when their small number of bits gives high accuracy (a large number of bits) when represented as a binary integer. Those other number systems can perhaps also be encoded using FSE / asymmetric encoding, so that they have both increased accuracy and “real” data compression.
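The asymmetric numeral system idea can be sketched with a minimal rANS coder; this toy version keeps the state as one unbounded Python integer instead of the streamed renormalization real FSE coders use, and the two-symbol frequency table is an arbitrary example:

```python
FREQ = {"a": 3, "b": 1}          # symbol frequencies (a is 3x as likely)
CUM = {"a": 0, "b": 3}           # cumulative frequency starts
TOTAL = 4                        # sum of frequencies

def encode(message):
    """Fold the whole message into one integer state."""
    x = 1
    for s in message:
        f, c = FREQ[s], CUM[s]
        x = (x // f) * TOTAL + (x % f) + c
    return x

def decode(x, length):
    """Pop symbols back out; they emerge in reverse encoding order."""
    out = []
    for _ in range(length):
        slot = x % TOTAL
        s = next(k for k in FREQ if CUM[k] <= slot < CUM[k] + FREQ[k])
        out.append(s)
        x = FREQ[s] * (x // TOTAL) + slot - CUM[s]
    return "".join(reversed(out))

msg = "aabaaab"
state = encode(msg)
assert decode(state, len(msg)) == msg
```

Frequent symbols grow the state by less than one bit each, which is how ANS approaches the entropy limit.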
The Anamorphic Stretch Transform and its relatives, the discrete wavelet transform and fast wavelet transform, can perhaps be used in the analog domain, in an analog circuit. If AST stretches numeral values (the number range) like floating point numbers stretch the number range, can AST then be used as a sort of analog floating point or posit/quire number system? In an analog circuit, without quantization steps like floating point numbers have? Or some other analog stretch system, like an analog video synthesizer uses, could make an analog version of floating point / posit / quire numbers, perhaps stored in analog memory?
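One level of the discrete wavelet transform mentioned above, in its simplest (Haar) form: pairwise averages and differences, where smooth signals leave only small difference terms to store:

```python
def haar_step(signal):
    """One DWT level: (averages, differences) of adjacent sample pairs."""
    avgs = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diffs = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avgs, diffs

def haar_inverse(avgs, diffs):
    """Exact reconstruction from averages and differences."""
    out = []
    for a, d in zip(avgs, diffs):
        out += [a + d, a - d]
    return out

x = [4.0, 4.0, 5.0, 7.0, 8.0, 8.0, 7.0, 5.0]
avgs, diffs = haar_step(x)
assert haar_inverse(avgs, diffs) == x
assert max(abs(d) for d in diffs) <= 1.0   # detail terms stay small
```

Compression schemes recurse on the averages and quantize the small detail terms aggressively; whether this can be done in an analog circuit, as speculated above, is a separate question.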
Q-Digest and its related encoding methods make, for example, efficient video coding. What if Q-Digest etc. principles were used in a number system, for example a floating point / posit number system with the Q-Digest or T-Digest principle? That would make an efficient number system with built-in data compression. But when such a number system is data compressed, it does not compress as well as other number systems, because it has already been compressed.
The KLT (Karhunen-Loève transform) also works in the analog domain.

About making data compression or increasing the accuracy of number systems, some of these topics may have something in common with them: “Compound Bessel ultra-hyperbolic equation”, “Heaviside step function”, “Chebyshev-Gruss inequality on time series”. Googling “analog floating point converter” brings many results. Also: “Massively parallel array delta sigma modulator optically sampled”, floating point delta sigma modulation, and the VCO-ADC. Is the VCO-ADC logarithmic? If the computer, or its ALU, is analog, then logarithmic number systems will work fine with it. Or other fractional number systems. “FPnew” is a multi-precision FPU. “Self-calibrating floating-point analog-to-digital converter”, “Delta-sigma modulation based analog multiplier with digital output”, “One-bit adaptive sigma-delta…”, “Principle of MSD floating-point division based on Newton-Raphson method on ternary optical computer”. “Integrated quantum photonics”, “split-gate transistor qubit”.

If a quantum computer is possible, and almost endless accuracy is possible in floating point or another number system, then almost endless data compression can be made: one accurate floating point etc. number can contain all information of the world, and a quantum computer computes that number to its needed accuracy in no time, and in this number (accuracy) is the required information. So no big memory is then needed in computers, if all information of the world is inside one floating point number 64 to 512 bits wide. Or inside some other number system that has high accuracy but a small number of bits when stored in binary form. Logarithmic / fractional numbers can be stored in analog form, in analog memory. Can there be a hybrid binary-fractional number system? Like a floating point and fractional/logarithmic number system hybrid? Or something like that.
There were hybrid digital-analog computers, so perhaps there can be hybrid digital-analog ALUs / FPUs, and perhaps even number systems that are hybrids of integer / floating point and fractional / logarithmic values, if that makes data smaller and faster to process.
Also, googling “delta sigma modulation Hadamard” brings many results. Hadamard modulation is sometimes used in DSM. There is the Hadamard transform, and the Hadamard product. “High performance SIMD modular arithmetic for polynomial evaluation”.
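The Hadamard transform itself can be sketched with the fast Walsh-Hadamard transform; applied twice it returns the input scaled by the length, so it is its own inverse up to normalization:

```python
def fwht(values):
    """Fast Walsh-Hadamard transform; len(values) must be a power of 2."""
    a = list(values)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                # Butterfly: sum and difference, like an FFT without twiddles
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

x = [1, 0, 1, 0, 0, 1, 1, 0]
y = fwht(fwht(x))
assert y == [v * len(x) for v in x]   # self-inverse up to the factor N
```

It needs only additions and subtractions, no multiplications, which is why it appears in low-cost modulation and transform-coding schemes.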