In the text “Gradual and tapered overflow and underflow: a functional differential equation and its approximation” (2006) it is said that one floating point number can have an overflow threshold of 10 to the power of 600 000 000, meaning a range of about 600 000 000 decimal digits. This method seems to be tapered floating point, which is a relative of unum/ubox computing. So all the information of the world could, in principle, be encoded into just one floating point number, within its overflow range. This could be one version of almost endless data compression. That paper uses differential equations; can differential equations, linear equations or polynomial equations be used with ultrafilters and zero sets to make other forms of almost endless data compression too? Texts related to this: “Solver for systems of linear equations with infinite precision on a GPU cluster” Jiri Khun, “Ultrafilters, compactness, and the Stone-Čech compactification” Bar-Natan 1993, “Equivalence and zero sets of certain maps in finite dimensions” Michal Feckan, “Zero sets and factorization of polynomials of two variables” 2012, “P-adic numbers” Jan-Hendrik Evertse 2011, “Infinite dimensional analysis” Aliprantis & Border, and the question “Are zero sets of polynomial equations closed because of the fundamental theorem of algebra?”. Instead of endless data compression, the compression can be “almost endless”, enough for example to contain all the content of the internet, and it can be inaccurate: error correction codes are efficient nowadays, and even if after error correction the information is only partially usable, partially usable information can still be used, because the compression ratio is huge. 
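A rough sketch of why a tapered format can reach such thresholds (my own toy model, not the actual scheme in the 2006 paper): if the exponent is stored with a self-delimiting code such as Elias gamma, the exponent field can grow to fill almost the whole word, so even a 64-bit number can address an overflow threshold of 10^600 000 000 — at the cost of almost all mantissa precision at that extreme.

```python
import math

def elias_gamma_len(n: int) -> int:
    # Elias gamma code of n >= 1: floor(log2 n) zeros, then n in binary
    return 2 * (n.bit_length() - 1) + 1

# binary exponent needed for an overflow threshold of 10**600_000_000
binary_exp = math.ceil(600_000_000 * math.log2(10))

exp_bits = elias_gamma_len(binary_exp)   # self-delimiting exponent field
mantissa_bits = 64 - 1 - exp_bits        # sign bit + exponent + what is left

# exp_bits == 61, so a 64-bit word still has 2 mantissa bits left:
# huge range, almost no precision - the tapered trade-off.
assert mantissa_bits >= 2
```

The point of the sketch is only the trade-off: the taper buys astronomical range by spending mantissa bits, it does not create information from nothing.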
Instead of one floating point number it could use 1000 million or 1 000 000 million floating point numbers to store all the information of the world (if floating point numbers are the information storage / compression format), so about 64 gigabits or 64 terabits (8 gigabytes or 8 terabytes) would be enough to store all the content of the internet in one quite small data store. So instead of one gigantic long computation there is a large number of shorter and easier (faster) computations. There is an infinity computer principle in four different versions: the Infinity Computer by Ya. D. Sergeyev, the ReAl computer architecture by W. Matthes, the Perspex machine by J. A. D. W. Anderson, and Oswaldo Cadenas's patent WO 2008078098A1. But I still think that zero sets are the key to endless or almost endless data compression, with polynomial / differential / linear representations / equations or otherwise, or ultrafilters with zero sets. Efficient number systems, like Paul Tarau's number systems or the “magical skew number system” by Elmasry, Jensen and Katajainen, or their other number systems, can be used in information compression, as can other efficient number systems: for example “Between fixed and floating point” by Dr. Gary Ray (chipdesignmag web page) describes a floating point number with a “reversed Elias gamma exponent”. Using Paul Tarau's numbers or the magical skew number system, or some other number system, in place of reversed Elias gamma in a floating point number, in the exponent or in other duties, could make an efficient floating point / tapered floating point / unum / posit number. Also on a Google Groups forum page there is the message chain “Beating posits at their own game” (11 Aug 2017) by John G. Savard, describing Extreme Gradual Underflow (EGU). (If that page does not show in the Google browser it can be seen in the Microsoft browser.) Other texts: “Simplified floating point division and square root”, “Bounded floating point”, “Twofold fast summation” Latkin. 
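For reference, plain canonical skew binary (the base that the “magical skew” system of Elmasry, Jensen and Katajainen refines) represents n with digits 0/1/2 over weights 2^(i+1) − 1 and supports constant-time increment, which is why it interests data-structure people; a small sketch of my own:

```python
def skew_increment(d: list[int]) -> list[int]:
    """Increment a canonical skew binary number, least significant digit first.
    Canonical form: only the lowest nonzero digit may be 2."""
    d = d[:]
    i = 0
    while i < len(d) and d[i] == 0:   # find the lowest nonzero digit
        i += 1
    if i < len(d) and d[i] == 2:
        # carry: 2*(2^(i+1)-1) + 1 == 2^(i+2)-1, one unit of the next weight
        d[i] = 0
        if i + 1 < len(d):
            d[i + 1] += 1
        else:
            d.append(1)
    elif d:
        d[0] += 1
    else:
        d = [1]
    return d

def skew_value(d: list[int]) -> int:
    # digit weights are 1, 3, 7, 15, ... = 2^(i+1) - 1
    return sum(x * (2 ** (i + 1) - 1) for i, x in enumerate(d))

d = []
for n in range(1, 11):
    d = skew_increment(d)
assert skew_value(d) == 10   # canonical skew binary of 10 is [0, 1, 1]
```

Note the carry touches at most one digit per increment, which is the property the “magical” variant exploits further.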
“New number systems seek their lost primes”, “Base infinity number system” Eric James Parfitt, “Peculiar pattern found in ‘random’ prime numbers”, “Hyperreal structures arising from logarithm”. Benford's law is explained in the text “DSP guide chapter 34: explaining Benford's law” by Steven W. Smith. Perhaps the same style of explanation (Benford's law, which appears everywhere in nature, is an antilogarithm thing according to Smith) can also explain the “hyperreal structures from logarithm” and the “peculiar pattern in prime numbers”. Perhaps that information helps to make almost endless data compression. Wikipedia: “Ideal (ring theory)”, “Ideal number”. Because analog circuits are made at 16 nm nowadays, and analog circuits at 5 nm are planned, data compression could use analog processing instead of digital, for example the KLT transform or the anamorphic stretch transform, which work well in the analog domain but not in the digital one. A video or audio codec, or any signal processing, could use an analog signal processor made at 16-5 nm. Every PC could have analog circuits and a signal processor doing KLT transforms or AST, like an MP3 player has a DSM modulator that is analog. Analog processing could be used in almost endless data compression also. John G. Savard's floating point model is based on a logarithmic system (A-law), and logarithms are fractional values, not integers, so this “logarithmic floating point” of Savard's, which uses logarithmic fractions instead of integers, is also a “fractional floating point” system. So is Savard's model the key to floating point computing with fractional/logarithmic values instead of integer values? Is this similar to bounded floating point or unum computing? Modern GPUs have support for 8-bit floating point. The G.711 telephone standard maps 16 bits to 13 bits, and those to 8 bits logarithmically. Perhaps a similar 8-bit minifloat format could be made using Savard's model. 
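As a reminder of how G.711-style logarithmic companding trades range for resolution, here is a continuous μ-law sketch (the real G.711 uses piecewise-linear segment tables, so this is only the idealized curve): 8 companded bits keep the relative error roughly constant over a wide amplitude range.

```python
import math

MU = 255.0  # mu-law constant used in G.711 (North American variant)

def mu_compress(x: float) -> float:
    # map |x| <= 1 logarithmically to |y| <= 1
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y: float) -> float:
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1.0) / MU, y)

def roundtrip_8bit(x: float) -> float:
    """Compand, quantize to 8 bits (sign + 7-bit magnitude), expand."""
    y = mu_compress(x)
    yq = math.copysign(round(abs(y) * 127) / 127, y)
    return mu_expand(yq)

# relative error stays small for signals spanning two orders of magnitude
for x in (0.01, 0.05, 0.1, 0.25, 0.5, 1.0):
    assert abs(roundtrip_8bit(x) - x) / x < 0.05
```

A uniform 8-bit quantizer would have the same absolute step everywhere, so its relative error at x = 0.01 would be enormous; that is the sense in which the logarithmic mapping “compresses”.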
“Sparse sampling” in “compressive sensing” is also a way to make information compact; in some cases only one 250th of the samples is needed to make reconstruction of the signal successful. Also, can Fibonacci numbers be used with floating point numbers, the way John G. Savard's FP / tapered floating point uses logarithms? The Fibonacci series is the integer form of the fractional golden ratio (about 1.6), so the Fibonacci series is logarithmic too. In the text “DC accurate 32 bit DAC achieves 32 bit resolution” (2008) it is said that this DAC model has theoretically infinite DNL and monotonicity; if something has infinite accuracy it could be used in data compression. The mathematician Srinivasa Ramanujan made studies of mathematical models that have something to do with infinity; his biopic is named “The man who knew infinity”. If John G. Savard's HGU / EGU floating point system is similar to logarithmic A-law and mu-law companding, could things like “Asymptotically optimal scalable coding for minimum weighted mean square error” 2001, “Geometric piecewise uniform lattice vector quantization of the memoryless Gaussian source” 2011, “Spherical logarithmic quantization” Matschkal, “On logarithmic spherical vector quantization” (LSVQ), “Gosset low complexity vector quantization” (GLCVQ) and “Lattice spherical vector quantization” 2011 Krueger be used with a floating point number system, the way Savard uses logarithmic companding in floating point numbers? Most of those have something to do with A-law or mu-law logarithmic companding, like Savard's floating point. So is a floating point system possible that uses spherical logarithmic quantization instead of Savard's version of logarithmic companding, or any other logarithmic quantization / companding, increasing accuracy dramatically? I don't know. There is also “Low delay audio compression using predictive coding” 2002, which has the “weighted cascaded least mean squares” (WCLMS) principle. 
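On the Fibonacci idea: Zeckendorf's theorem says every positive integer is a unique sum of non-consecutive Fibonacci numbers, which is the basis of Fibonacci coding, a self-delimiting integer code comparable in spirit to Elias gamma. A greedy sketch:

```python
def zeckendorf(n: int) -> list[int]:
    """Greedy decomposition of n >= 1 into non-consecutive Fibonacci numbers."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:          # greedy choice guarantees non-consecutive terms
            parts.append(f)
            n -= f
    return parts

assert zeckendorf(100) == [89, 8, 3]   # 100 = F(11) + F(6) + F(4)
```

Because Fibonacci numbers grow like powers of the golden ratio, the code length of n grows like log_phi(n), which is the “logarithmic” behaviour the text gestures at.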
Whether this has anything to do with floating point number systems or not, I don't know. On the web page “neuraloutlet wordpress” (com) there are “metallic numbers” and the “U-value number system”. Whether those are logarithmic systems and can be used with floating point, I don't know. On the web page “shyamsundergupta number recreations” there is a section “fascinating triangular numbers”, where the number sequence 1, 11, 111 etc. consists of triangular numbers in base 9. So binary compression of them is easy. In the section “unique numbers” on the same page it is said that the digital root of unique numbers is 9, and the base / number 9 is widely connected to other properties of unique numbers also. So is base 9 then the best integer base for computing, not base 10 or 2, or 6 or 12 etc.? In the section “curious properties of 153” there is “the curious properties of binary 153”, where the number 153 forms an “octagonal binary ring”, which reminds me of the “Z4 cycle code” that is used, for example, for turning quaternary values to binary. So is it possible to use number / base 9 in data compression, or the value 153 with this “binary ring” property of 8 bit / 255 values circling around, if data compression techniques can use them? Other: “A bridge between numeration systems and graph directed iterated function systems”, “preferred numbers”, “State estimation of chaotic Lurie system with logarithmic quantization”, Logarithmic Cubic Vector Quantization (LCVQ), “Logarithmic quantization in the least mean squares algorithm” Aldajani, “A logarithmic quantization index modulation”, “Semi-logarithmic and hybrid quantization of Laplacian source”, “Finite gain stabilisation with logarithmic quantization”, “Wavelet audio coding by fixed point short block processing”. Can any of those be used with a floating point number system, the way Savard uses a logarithm in his HGU / EGU? Preferred numbers are “logarithmic” and used in parcel sizes; can preferred numbers be used in a floating point system like Savard's logarithm? 
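The base-9 repunit claim is easy to verify: the repunit with n ones in base 9 equals (9^n − 1)/8, which is exactly the triangular number T_k with k = (3^n − 1)/2, since T_k = k(k+1)/2 = ((3^n − 1)/2)((3^n + 1)/2)/2 = (9^n − 1)/8.

```python
def triangular(k: int) -> int:
    return k * (k + 1) // 2

for n in range(1, 21):
    repunit_base9 = (9 ** n - 1) // 8     # 1, 11, 111, ... read in base 9
    k = (3 ** n - 1) // 2
    assert repunit_base9 == triangular(k)

# e.g. base-9 "111" = 91 = T(13)
assert (9 ** 3 - 1) // 8 == triangular(13) == 91
```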
In the text “Making floating point math highly efficient for AI hardware” (2018) by Jeff Johnson these FP techniques are listed: nonlinear significand maps / LNS (Kingsbury & Rayner 1971), binary stochastic numbers (Gaines 1969), entropy coding / tapered floating point (Morris 1971) (if tapered floating point is data compression, how about using Finite State Entropy in FP systems?), reciprocal closure (2015), posit (2018), fraction map significand (“Universal coding of the reals” 2018, Lindstrom), Kulisch accumulation, and exact log-linear multiply-add (ELMA). The ELMA example has 8 bits, 4 bits of accuracy and 24 bits of range, so it is suitable for ADPCM-style systems. This text was written only a few months ago, so it is the latest thing in FP research. On the web page XLNSresearch (com) there is a list of logarithmic number system studies, like multidimensional LNS, CORDIC-based logarithmic, index calculus DBNS, hybrid FP and LNS, multi-operand LNS, and “Architectures for logarithmic addition in integer rings and Galois fields”. If Savard's model is FP with a logarithm, how about using cubic logarithmic, spherical logarithmic, or pyramid logarithmic quantization with floating point (“The pyramid quantized Weisfeiler-Lehman graph representation” 2014)? Other: Stackoverflow 2018 “Algorithm - compression by quasi-logarithmic scale”, “High resolution FP ADC” Nandrakumar, “Parametrizable CORDIC based FP library”, the FloPoCo library, “A new approach to data conversion: Direct analog-to-residual conversion”, “Complex LNS arithmetic using high-radix redundant CORDIC algorithms”. Lucian Jurca has written texts about hybrid LNS / FP. Bounded FP and block FP are other new FP systems. Those hybrid FP/LNS systems combine logarithmic and integer (FP) numbers, but Savard's model is FP numbers (FP “integers”) that just take logarithmic “steps”. So can cubic, spherical, pyramid, residual, multi-operand, index calculus or multidimensional logarithmic quantization, or CORDIC (actually, would the BKM algorithm be better?), 
or the Fibonacci sequence, be used in a similar way as A-law with floating point? There are logarithmic systems like complex / LNS hybrid, Monte Carlo LNS, denormal LNS, interval (arithmetic) LNS, hybrid real / LNS numbers, serial LNS, two-dimensional LNS (2DLNS), signed digit LNS, semi-logarithmic LNS, and multi-operand LNS. Combining those with floating point, like Savard's A-law, is perhaps possible. FP numbers also have a huge disparity between exponent range and mantissa accuracy. The mantissa could be either expanded using software tricks (up to 39 times), or data compression could be used in the mantissa only, not in the exponent. Mantissa accuracy would then be closer to exponent range, and because data compression is used only in the mantissa, processing is faster. ADPCM / delta compression style lossy compression can be used, like in ultra low delay audio compression methods that have 1 millisecond or less processing time, and other similar methods. Bit truncation, Finite State Entropy and other data compression also. “Quantization and greed are good” Mroueh 2013. AI research is using 8- and 4-bit minifloats (IBM, the Clover library etc.). They use stochastic rounding, “dynamic point integer” and similar methods; a data-compressed mantissa is suitable for them. Multiple description coding (delta sigma multiple description coding), and the multiple base composite integer from the MROB (com) web page, where there are “lexicographic strings” and other usable methods also. The Munafo PT number system uses 17 symbols, so 15 x 17 = 255, which is near the 256 values of 8 bits, so perhaps 8-bit chunks of binary data are suitable for the Munafo PT number system. Googling “one bit covariance estimation” brings many results. 
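On “expanding the mantissa with software tricks”: the standard trick is the error-free transformation used in double-double arithmetic and twofold (compensated) summation, e.g. Knuth's TwoSum, which splits a + b into a rounded sum plus an exact residual, effectively doubling the working mantissa:

```python
from fractions import Fraction

def two_sum(a: float, b: float) -> tuple[float, float]:
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

s, e = two_sum(0.1, 0.3)
# the residual e captures exactly the rounding error of the float addition
assert Fraction(s) + Fraction(e) == Fraction(0.1) + Fraction(0.3)
```

Chaining such pairs gives double-double (roughly 106-bit) or k-fold expansions, which is presumably the kind of software expansion the text refers to; the exponent range stays that of the underlying format.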
“Sparse composite quantization”, “Pairwise quantization”, “Robust 1-bit compressive sampling via sparse vectors”, space time DSM vector quantization (2002), “Time-quantized frequency modulation with time dispersive codes” Hawksford, “Mean interleaved round robin algorithm” 2015, “OMDRRS algorithm for CPU” 2016, alias-free short time Fourier transform, sparse fractional Fourier transform, implicit sparse code hashing, decomposed algebraic integer quantization, multiple description coding DSM, “Space vector based dithered DSM”, nonuniform sampling DSM, “Design of multi-bit multiple-phase VCO based ADC”, “Multi-amplitude minimum shift keying format” 2008. I tried to google zero sets together with words such as Pisot number and Parry number, in case those are any help for endless data compression, and I found the texts “Ito-Sadahiro numbers vs. Parry numbers”, “Palindromic complexity of infinite words associated with simple Parry numbers”, “A family of non-sofic beta-expansions”, “Combinatorics, automata and number theory”, “Beta-shifts, their languages, and computability” and “Abelian complexity of infinite words associated with quadratic Parry numbers”, although I don't understand the mathematical formulas in them.
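For what it's worth, the beta-expansions those Parry-number papers study are easy to compute: the greedy expansion of x in a non-integer base β (here the golden ratio, a Pisot number — my own example, not taken from those papers) works like ordinary base conversion:

```python
def beta_expansion(x: float, beta: float, ndigits: int) -> list[int]:
    """Greedy digits of x in [0, 1) in non-integer base beta."""
    digits = []
    for _ in range(ndigits):
        x *= beta
        d = int(x)          # digit in {0, ..., ceil(beta) - 1}
        digits.append(d)
        x -= d
    return digits

beta = (1 + 5 ** 0.5) / 2   # golden ratio, about 1.618
digits = beta_expansion(0.625, beta, 20)

# reconstruct: sum of d_i * beta**-(i+1) approximates the original x
approx = sum(d * beta ** -(i + 1) for i, d in enumerate(digits))
assert abs(approx - 0.625) < 1e-3
```

Parry's theorem characterizes which digit sequences are admissible for a given β; the papers above study the combinatorics of those sequences, not data compression as such.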
In the text “24/192 music downloads are very silly indeed”, not in the text itself but in reference number 13 of its footnote list at the bottom, it is said that 16-bit audio can have an effectively infinite dynamic range if an infinite Fourier transform is used to extract a tone from the 16 bits; that principle is used in radio astronomy. So if 16 bits, or any bit length, can use a Fourier transform (or something similar) which has an infinite amount of range, that means an infinite (or almost infinite) amount of information could be encoded in those few bits. So a Fourier transform, or something similar with infinite range (or near infinite range, for encoding a near infinite amount of information), is needed. If a version of the Fourier transform with almost infinite or infinite performance / range is found, data compression that packs an infinite or almost infinite amount of data into only a few bits is possible. Sliding versions of the Fourier transform exist, like the sliding windowed infinite Fourier transform (SWIFT), and there are methods that work like Fourier and offer very high resolution, like the ARMA (ARMO?) and MUSIC algorithms. There are other transforms too, Walsh-Hadamard and others. In audio, infinite dynamic range means that from 16-bit (for example) audio information (or just signal information, not necessarily audio), when encoded and then played back, the dynamic range can be extended towards infinity using an (infinite?) Fourier transform. Infinite range would mean an infinite amount of information in only 16 bits. But that holds for “pure tones”. So information must somehow be put inside pure tones or something similar, or the principle of infinite resolution must be expanded beyond just basic pure tones. If those goals were met, an infinite (or almost infinite) amount of information could be put inside 16, 24, 32 or 64 bits etc. That would be endless data compression, using the Fourier transform or something similar, pushing frequency resolution to infinity, or some other principle. 
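The footnote's point can be demonstrated with a plain single-bin DFT (my own toy demonstration): correlating many samples of a coarsely quantized sine against the known tone averages the quantization noise down, so the amplitude is resolved more finely than one quantization step — resolution beyond the bit depth, for a pure tone.

```python
import math

N = 4096                 # number of samples in the transform
k = 441                  # tone frequency, in whole DFT bins
A = 0.8                  # true amplitude
STEP = 1.0 / 128         # 8-bit quantization step over [-1, 1]

def q8(x: float) -> float:
    # round to the nearest 8-bit level
    return round(x / STEP) * STEP

# quantize the tone to 8 bits, then estimate its amplitude from one DFT bin
est = (2.0 / N) * sum(
    q8(A * math.sin(2 * math.pi * k * n / N)) * math.sin(2 * math.pi * k * n / N)
    for n in range(N)
)

# the estimate resolves A to better than one 8-bit step (~0.0078);
# with longer transforms (and dither) the residual error keeps shrinking
assert abs(est - A) < STEP
```

This only works because the tone's shape is known in advance; the processing gain extracts one parameter very precisely, it does not create channel capacity for arbitrary data, which is exactly the “only pure tones” caveat in the text.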
I don't know about radio astronomy and how this Fourier transform (to infinity) principle is used there. Endless data compression is not even needed; almost endless is enough, or simply data compression that far exceeds the data compression methods used today. There are texts that may or may not have something to do with endless data compression: “Computable function representation using effective Chebyshev polynomials”, “Polynomial dictionary learning algorithms in sparse representations”, “Exact semidefinite representations of genus”, and an old text (1980?): “The evaluation of irreducible polynomial representations of central linear groups”. I was googling about polynomial representations of linear or differential equations, and zero sets, together. Are zero sets the key to endless data compression, somehow, with linear / differential equations or polynomial representations? That should lead to ultrafilters somehow, and from there to endless or almost endless data compression?
Is fuzzy logic used in error correction systems? Like when an error correction code is used with erroneous data, and when this data is decoded, the information is analysed using fuzzy logic, which tries to make the erroneous data at least partially correct again. Although fuzzy logic perhaps cannot make the data 100% correct, perhaps it can at least partially recover data that is damaged or inaccurate. The SAR ADC or SAR DAC is one method of analog-to-digital conversion. Then there are R2R DACs, like the Etalon DAC that uses a “Super R2R” topology, which is better than other DAC / ADC types. If delta-sigma DACs and modulators can use floating point systems - at least theoretically DSM and floating point numbers together have been proposed (delta sigma floating point) - why not use a SAR DAC or R2R DAC with floating point, exact log-linear (ELMA) floating point, flexpoint, dual fixed point, dynamical fixed point or other floating point / fixed point formats, like posit, or the reversed Elias gamma exponent of the Chipdesignmag article “Between fixed and floating point” by Dr. Gary Ray? If DSM can use floating point, then other one-bit systems like ADPCM and the Takis Zourntos one-bit model could use it also. On the Hydrogenaudio forum page “Good 1-bit codecs?” there is a 1-bit ADPCM that has transparent quality compared to CD audio, using 7-9x sampling rate and 1 bit only. DSM can be pseudo-parallel, which leads to data savings. Can SAR DACs and R2R DACs also be pseudo-parallel? A SAR DAC can be pseudo and it can be parallel, but can it be pseudo-parallel, those two things together? And the R2R DAC also. Fractal compression is used in the coding of pictures, experimentally. If instead of one picture the picture is a matrix of 16 x 16 pictures, so 256 pictures in one frame, and then this one picture that has 256 smaller pictures inside it is compressed using fractal compression, is the compression ratio now smaller than when compressing these 256 pictures inside one picture using the MDCT method or other compression methods? 
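The closest established realization of the fuzzy-logic idea is soft-decision decoding (my example below uses a trivial 3x repetition code): instead of hard 0/1 decisions, the decoder weighs graded confidence values, and can recover bits that hard majority voting gets wrong.

```python
def hard_decode(received: list[float]) -> int:
    # majority vote on the signs alone (hard decisions)
    votes = sum(1 if r > 0 else -1 for r in received)
    return 1 if votes > 0 else 0

def soft_decode(received: list[float]) -> int:
    # sum the graded (fuzzy) confidence values before deciding
    return 1 if sum(received) > 0 else 0

# bit 1 sent as (+1, +1, +1); noise pushes two samples barely negative
received = [0.9, -0.1, -0.2]
assert hard_decode(received) == 0   # majority of signs decodes wrongly
assert soft_decode(received) == 1   # soft sum 0.6 recovers the bit
```

Modern codes (LDPC, turbo) are built entirely around such graded beliefs, so in that sense "fuzzy" reasoning inside error correction is not only possible but standard practice.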
Does it make fractal compression easier or more difficult to compress 256 small pictures in the same picture? That is, comparing between compressing just one picture, and one picture whose area is divided between 256 smaller pictures - those 256 are not sub-blocks of one picture like in video coding, but all 256 are separate pictures themselves. Chord Electronics makes upsampling systems for audio that take normal 16-bit audio and upsample it 16 times (16 bit x 16 times). Upsampling improves accuracy at the receiving end without using data compression, so could similar upsampling accuracy-improving systems be used, for example, in video coding? 1 bit (ADPCM or other) needs only 2 values, not 65,536 values like 16 bit, and the Chord Electronics upsampling uses “taps” (values) for upsampling, so 16x 1-bit needs about 65,000 taps, and 16x 16-bit over a million. A 1-bit system that uses upsampling at the receiving end could be economical video compression, picture compression and audio compression. Or 3-5 bit ADPCM, multibit DSM etc. AptX is ADPCM audio compression that uses 8 bits at lower frequencies. If those 8 bits were posit or ELMA (exact log-linear) or something else that is not pure integer, like the multiple base composite integer (on the MROB com web page), range / accuracy would be much improved. Multiple description coding can also be used, in delta sigma and other 1-bit systems too. Oversampling is used in 1-bit and other systems. Audio can be 192 kHz 32 bit, i.e. 4x 48 kHz sampled. What if the sound is not only in the first 48 kHz section, but also in section 3 (128 kHz) and section 4 (192 kHz)? Those three sections are separate audio tracks; the audio tracks in sections 3 and 4 are put in the ultrasound frequency range above the bottom section 1. 
Sections 3 and 4 contain spectral ultrasound frequency components of section 1 that are now audible and not in the ultrasound range, but there are efficient noise removal programs that can remove noise or unwanted portions from audio material in real time, and “cue” information on how to remove the noise can be included in the audio stream. Section 2 (96 kHz) can be empty so that the distance from the other sections is greater, and then there is less energy in the high frequencies. The last section, section 4 (192 kHz), can use only 32 or 36 kHz sampling, not 48 kHz, so that the distance from the previous section is larger; it is not oversampled because it is in the highest frequency section, 144-192 kHz. 8x 44.1 kHz 16 bit can use a similar method. The 16 bits should be floating point so that quantization noise is at a minimum. A similar system can be used in video coding: 1-bit ADPCM pixel colour values oversampled, with one colour value in the bottom section, and two other colour values in the higher frequency oversampled sections above this bottom value. Those carry high frequency spectral components of the bottom colour value, but they can be filtered out, like sound can be cleared of noise. If not all the noise can be filtered out, at least a great portion of it. So in a 1-bit ADPCM video stream there is not just one colour per pixel but three; the two additional colour values have decreasing quality the further away they are from the bottom oversampled colour value, plus noise (high frequency spectral components) of the bottom oversampled 1-bit value; and the third colour value has noise from the previous two colours and the lowest quality of them all, because it is set highest in the frequency range and has the lowest oversampling ratio. The two additional 1-bit colour values that are put “inside” one oversampled colour value must have oversampling too, so that they are of acceptable quality. 
My description perhaps turns the frequency range upside down, because people understand, for example in sound, that 192 kHz means 4x oversampled 48 kHz, so that 4x 48 kHz is the bottom value, section 1, which actually uses 192 kHz sampling. And the “last, highest section, 192 kHz” means only the 144-192 kHz range, so only 48 kHz is available, of which only 32-36 kHz is actually used, because the distance from the noise (the previous sections) must be as great as possible. This principle of putting information “inside” the high frequency components of oversampled bits, components that are usually filtered out, could be used for example in data storage. This is perhaps one version of putting information inside other information; another version of it is putting floating point numbers inside other floating point numbers, inside their mantissa accuracy, which is increased using software tricks. Or putting information / other floating point numbers inside the extended overflow threshold of floating point numbers (“Gradual and tapered overflow and underflow: a functional differential equation and its approximation” 2006). Or using posit or another method to extend floating point accuracy and then putting other posit numbers inside one posit number etc., so that the information space (accuracy) multiplies exponentially. In audio or video encoding, or other signal transmission, if an ADPCM signal is used, the multiple base composite integer (from the MROB com web page) or multiple description coding can be used. Delta-sigma can use multiple description coding. Other one-bit systems can perhaps also use multiple description coding and the multiple base composite integer: the Takis Zourntos model, monobit etc. ADPCM is usually 4-5 bit, and DSM also uses 5-6 bit internal processing although DSM is 1 bit. 32-bit DSM exists, but those are often 6 bit with 32-bit filters etc. Such multibit systems can be used instead of 1 bit, so the quantization noise is smaller. 
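The nesting idea at least works at a small scale with ordinary IEEE doubles (a sketch of my own): a 53-bit significand can carry two 26-bit payloads that are recovered exactly, because the packed value never exceeds 2^52.

```python
# pack two 26-bit integers into the significand of one IEEE 754 double
BITS = 26
assert 2 * BITS <= 52   # stay inside the 53-bit significand

def pack(a: int, b: int) -> float:
    return float(a * (1 << BITS) + b)

def unpack(z: float) -> tuple[int, int]:
    n = int(z)
    return n >> BITS, n & ((1 << BITS) - 1)

a, b = 33_550_336, 12_345_678          # any values below 2**26
assert unpack(pack(a, b)) == (a, b)    # exact round trip
```

Note that recursing the trick halves the payload width at every level, so the information space does not actually multiply; the fixed precision budget is only being subdivided, which is the catch in the exponential-nesting idea.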
The Fourier transform can have infinite accuracy if pure tones (sine waves) are used (this principle is used in radio astronomy etc.). In sound, information can be represented as a derived / differentiated square wave: sound content can be represented as “peaks” made by differentiating a square wave. Sound is information. Square waves are built from sums of sine waves. So is it possible to transmit an endless amount of information using a Fourier transform and either sine waves differentiated directly, or sine waves transformed to square waves which are then differentiated? If only pure tones have endless accuracy, the pure tones could be differentiated so that they contain information. If information is encoded in square waves or sine waves using differentiation, and this then uses an infinite Fourier transform, is the result endless accuracy in the few bits of signal transmitted? Is this possible? “Wave equation” is in Wikipedia. “Geometric waves”. “System of linear equations”. On Stackexchange com questions: “What is the formula for square wave?” 2016, “A set of linear equations to zero” 2015. Using ultrafilters, zero sets, and linear or differential equations, is endless data compression possible? “Finding Fourier coefficients for square wave”. “Resonances, waves and fields: their applications, physics, and math…” Peter Ceperley. Brilliant org: “Amplitude, frequency, wave number, phase shift”. Yutsumura com: “Summary: possibilities for the solution set of a system of linear equations”. Root mean square of sine waves. If an infinite Fourier transform can use sine waves, or sine waves transformed to square waves, and when differentiated these waves can contain information, does that mean an endless amount of information can be transmitted using only a few bits? Or is there some other way to use an infinite Fourier transform to send information? Jiri Khun: “Solver for systems of linear equations with infinite precision on a GPU cluster”. 
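The square wave's relation to pure tones is its Fourier series, square(x) = (4/π) Σ sin((2k+1)x)/(2k+1) over k ≥ 0 — only odd harmonics, with slowly decaying 1/n amplitudes. A partial-sum check:

```python
import math

def square_partial_sum(x: float, n_terms: int) -> float:
    """Fourier series of an ideal +-1 square wave, truncated to n_terms odd harmonics."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# away from the jumps the sum converges to +-1 (Gibbs ringing stays near the jumps)
assert abs(square_partial_sum(math.pi / 2, 1000) - 1.0) < 0.01
```

So a square wave is not one pure tone but infinitely many, which matters for the argument above: the radio-astronomy resolution trick applies per tone, and a square wave spreads its energy over an infinite harmonic series.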
If tapered floating point or posit / valid / unum / ubox computing is a version of floating point, can the same software tricks that are used in floating point to expand mantissa accuracy over 30 times be used in posit / tapered floating point computing? Standard IEEE floating point with those software tricks already offers massive information packing capability if FP numbers are put inside each other (in their expanded mantissa accuracy), so that the overall information storing capacity (accuracy) expands exponentially. One FP number can store inside itself (in its mantissa) several other FP numbers, which have inside themselves other FP numbers etc., until the mantissa accuracy is used up; or about half of the mantissa accuracy is used to store other FP numbers for information space expansion and half for the actual information. Can posit / tapered FP do the same thing (mantissa accuracy expansion up to about 10-30 times over a normal FP number, and then storing other similar posit / tapered FP numbers inside it)? Or Gary Ray's “reversed Elias gamma” coded exponent used in FP numbers? In “Justia quantizer patents class 341/200” there are quantizing methods. “Derivative level-crossing sampling”, “nonuniform derivative sampling”, dsp stackexchange “What are the advantages, if any, of derivative sampling?” 2011, “sampling method of band-limited signals” 2015, “On the reconstruction of derivative sampling method of band-limited signals” 2016, Justia patents 8300711 and 9825645. The quire is the unum companion format for exact dot products. Can it be used in the floating point / unum numbers-inside-each-other principle? Can posit be used in a similar way? Can dot products be used in differentiating waves like sine waves, or dot products in other ways with sine waves? Type III unum is called a sigmoid number or posit. Sine waves and sigmoids have something in common although they are different things, and sine waves with a Fourier transform can lead to infinite accuracy. Can the sigmoid or the quire do the same? 
With sine or square waves? “New activation functions in DeepTrainer: Sigmoid, TanH, Arctan, ReLU, PReLU, ELU, Softplus” 2018. “Efficient digital implementation of the sigmoid function”. BiSeNet. “How to prevent overflow and underflow in logistic regression”, lingpipe-blog 2012.