There are signal processing methods that make efficient data representation possible. Examples: Clinton Hartmann's patents (multiple pulses per group keying), about 15x "compression"; delta-sigma modulation (DSM), which outputs 1 bit but uses multibit internal processing, up to 32 bits, so in a sense 32x "data compression"; and Takis Zourntos's 1-bit modulation without DSM, which is a sort of 1-bit ADPCM and more stable than DSM. A 1-bit ADPCM (2x oversampled) core is also on the OpenCores pages. There is also pseudo-parallel DSM (Hatami), and an improved pseudo-parallel DSM model by Hatami from 2014.

Those methods are analog/digital signal processing, so the data (digital bits or analog information) must first be transformed into an electric signal in a signal processor. If those models offer efficient representation of data without using the usual mathematical data compression methods (Huffman coding, Finite State Entropy, etc.), then the "compression" those signal processing models offer can be used first, and the result (it is not data compression but efficient signal representation) can then be compressed with actual data compression on top.

That means every PC could have a built-in signal processor that transforms incoming analog or digital data into an electric signal, processes it, and then turns it back into digital bits. A/D conversion is already used widely in almost all electronics. First the analog signal is turned digital by a DSM ADC, which can reach a 32x ratio, then further processed by Hartmann's multiple pulse group keying. Used in a chain, that would be 32 x 15 = 480x before any data compression, which can still be added on top to increase the ratio further.

Turning information into a signal and then sampling that signal means that half of the rate is lost, a 0.5x data rate reduction: the idea of sampling the signal seems odd because it just halves the bitrate, since the sample rate must be two times the frequency of the signal being sampled.
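As a minimal sketch of what the delta-sigma modulation mentioned above actually does (a generic first-order digital model, not any specific patented design): it turns a multibit input into a 1-bit stream whose local average tracks the input.

```python
def dsm_first_order(samples):
    """First-order delta-sigma modulator sketch: multibit input in [-1, 1]
    becomes a 1-bit stream (+1/-1) whose running average tracks the input."""
    out = []
    acc = 0.0   # integrator state
    y = 0.0     # previous 1-bit output, fed back as +1/-1
    for x in samples:
        acc += x - y                    # integrate the input-minus-feedback error
        y = 1.0 if acc >= 0 else -1.0   # 1-bit quantizer
        out.append(y)
    return out

# A constant input of 0.5 gives a bitstream whose mean is close to 0.5.
bits = dsm_first_order([0.5] * 1000)
mean = sum(bits) / len(bits)
```

This shows in what sense the 1-bit stream "contains" a multibit value: the precision lives in the time-average of many 1-bit samples, which is also why oversampling is needed rather than a free 32x gain.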
But the 32x and 15x expansion of data density achieved with these signal processing methods is much more efficient than the 2x loss due to sampling. If analog information is used for the signal, it must be stored in analog form: magnetic tape (videotape), an analog optical disc, or analog semiconductor memory (analog RAM, and perhaps Richard Lienau's SHRAM or magnetic bubble memory?). Or digital information is turned to analog in a DAC, goes through the signal processor, and is then turned back into digital bits. Analog electronics are manufactured at 16 nm today, and plans are being made for analog circuits at 5 nm. Purely analog processing without conversion to digital is possible, but that needs an analog electric or optical computer to use the information.

A DSM signal has the property that its dynamic range (information density) expands as frequency drops: for example, a 10 kHz signal sampled at 40 kHz, with a 20 kHz range, has double the dynamic range of a 20 kHz DSM signal, so double the information. An analog audio-range signal has a maximum range of about 20-22 bits (slightly over 120 decibels). Noise shaping can be used in a DSM signal, and in something similar to the Takis Zourntos model too. Experimental noise shaping in DSM reaches 60-70 decibels of noise reduction, so about 10-12 bits of extra information range.

Those DSM methods are used for audio only today, but the electric signal being processed could carry any information, whether sound, picture, text, or a binary computer program, as a DSM or other signal in the signal processor. If a 48 kHz signal uses 32-bit DSM, then 1 bit is needed to store 32 bits of information. Multirate DSM is a version of DSM where internal multirate processing increases information density the way multibit processing does. The Takis Zourntos model can perhaps use those methods too, or ADPCM methods. Floating-point DSM exists, and vector quantization can be used with DSM (like Additive Quantization, AQ?) as well. If 16 x 44.1 kHz sampling is used, the frequency is 705.6 kHz.
If the DSM is 20-bit multibit and noise shaping adds 12 bits of accuracy, the signal now has 32-bit accuracy, not 1-bit. Dynamic range doubles with every 44.1 kHz segment of the signal, so information range increases too: only the highest frequency band (705.6 - 44.1 kHz) has 32 bits of range; lower frequencies always have more bits of information. A 64-bit DSM signal could run at a slow 1 kHz, but there would be 64 bits "inside" 1 bit of information. Bit rates must be halved because sampling is used, so 16x and 32x actual efficiency.

Pseudo-parallel DSM can use fractions of bits, so below-1-bit DSM. Fractions of bits are already used in Bounded Integer Sequence Encoding (BISE), so it is nothing new; fractional bits are used in data representation. A 32x pseudo-parallel DSM using 0.25-bit values puts information into 4 x 8 chunks: four data paths fit into one 32x pseudo-parallel DSM, each with 8-bit accuracy, with noise shaping perhaps 16-bit (I don't know what noise shaping is suitable for pseudo-parallel DSM). So one 48 kHz pseudo-parallel DSM signal looks like 1 bit from the outside, but inside this 1 bit are four 8-bit signals with 16-bit accuracy, if noise shaping gives every chunk 8 bits of extra accuracy. Now not just one signal but four signals can be packed into 1 bit.
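On the fractional-bits point: the sense in which BISE-style coding spends a non-integer number of bits per symbol can be shown with a toy packer (this is a generic bounded-integer packing sketch, not the actual BISE algorithm). Symbols with 5 possible values cost log2(5) ≈ 2.32 bits each when packed together, instead of 3 bits each stored separately.

```python
def pack(symbols, base):
    """Pack bounded integers (0 <= s < base) into one big integer,
    spending about log2(base) bits per symbol instead of ceil(log2(base))."""
    n = 0
    for s in reversed(symbols):
        n = n * base + s
    return n

def unpack(n, base, count):
    """Inverse of pack: recover `count` base-`base` symbols."""
    out = []
    for _ in range(count):
        n, s = divmod(n, base)
        out.append(s)
    return out

syms = [4, 0, 3, 2, 1, 4, 4, 0]
n = pack(syms, 5)
# 8 base-5 symbols fit in ceil(8 * log2(5)) = 19 bits, not 8 * 3 = 24.
```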
Please please please start editing your texts!!! This mass blob of text you always post is impossible to read, and it turns what very possibly is well thought out and intelligent content into plain gibberish.
Please use separate paragraphs and put a bit of "air" into your texts; right now the reader is suffocated by the sheer mass of words compressed into as small a space as possible. If you don't have the time or just can't be bothered, please ask someone to help edit your posts.
I only say this because I suspect you have something interesting to say and I’d be keen on continuing the conversation, but as it stands I just lose the will to live after a couple of sentences and give up.
Thank you very much in advance!
I almost always write in a hurry, and about things that I don't understand, because I am very bad at math. There are lots of simple counting and other mistakes in my texts, as always. If a text I read on the internet has mathematical formulas, I understand it only if the text explains what they mean; if the formulas are not explained, I cannot understand the scientific paper. I have difficulty understanding what "pseudo-parallel DSM" actually is.

But the basic idea is this: according to Clinton Hartmann's multiple pulse per group keying patents (c. 2002), information measured in bits can be compressed about 15 times. And if DSM uses multibit or multirate internal processing, for example 16-bit internal processing, although DSM is a one-bit signal, that means the information ratio is 16 times. That is if I am right about what multibit or multirate DSM is; I don't know the technical details of multibit/multirate DSM either. I can also be wrong (I can always be wrong in my texts, because I don't understand the technical details).

But if (if) I am right, a signal using multiple pulse group keying can be shrunk 15x. And if that signal then uses DSM multibit or multirate processing with 16 bits internal, the signal can be 15 x 16 = 240 times smaller. But the signal that goes through those processing steps must be analog? So digital information must be turned to analog in a DAC, making the bits an analog electric frequency signal, and then sampled back to digital (there is also Xampling, analog sampling). Sampling must use 2x the frequency of the signal, so 50% of the efficiency is lost. But 240 / 2 = 120, so it still makes information 120 times smaller, without data compression. Noise shaping can be used in a DSM signal, adding 10-12 extra bits of accuracy. But a DSM signal cannot be smaller than 1 bit. Pseudo-parallel DSM is possible, and BISE encoding uses fractional bits.
Are fractional bits possible in pseudo-parallel DSM? For example, if an 8-bit accuracy signal is turned into a pseudo-parallel signal using 8 x 0.125 bits, the result is 8-bit accuracy in 1 bit. If the same process is applied to only a 1-bit signal, it requires just 0.125 bits, not 1 bit, in DSM. But I don't know whether fractional-bit pseudo-parallel DSM is possible, and I don't understand what pseudo-parallel DSM really is, so I can (again) be wrong. The Takis Zourntos 1-bit model is 1 bit without DSM, like a 1-bit ADPCM, and could be used instead of DSM.

Also, according to Y. Hida's floating-point accuracy texts, a floating-point number's mantissa accuracy can be expanded up to 39 times. So if a 32-bit FP number has a 23-bit mantissa, 39 times that is almost 900 bits of accuracy (in a standard FP number). If from this 900-bit (actually 897-bit) mantissa accuracy 32 bits are used to represent another FP number, there is now one number with 897-bit accuracy, and inside it another FP number with 897 - 32 = 865-bit accuracy. Inside that 865-bit number can be a third FP number with 865 - 32 = 833 bits, etc., so the information space inside just one IEEE standard FP number is enormous.

If a signal uses Hartmann's packing method and then multibit/multirate DSM, that is 240x with 16-bit DSM; with 32-bit DSM it is 32 x 15 = 480x. Sampling requires a 2x reduction, so a 240x ratio. If this 32-bit signal is a floating-point number, its accuracy can be expanded to 897 (39 x 23) bits. Now accuracy is about 897 x 240 = 215,000 bits. If the floating-point-numbers-inside-each-other principle is used, information density is enormous, without using actual data compression like Finite State Entropy or whatever those mathematical compression methods are. I don't know if I am wrong or right, because I have no mathematical knowledge.
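The "numbers inside each other" idea at least works in the trivial direction: a wider significand can carry the exact bit pattern of a narrower float. Here is a small sketch using Python's `struct` module; note it only shows that 32 bits round-trip losslessly through a float64's 53-bit integer range, not the 39x accuracy expansion claimed above, which I cannot verify.

```python
import struct

def float32_bits(x):
    """IEEE 754 single-precision bit pattern of x, as a 32-bit unsigned int."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_to_float32(n):
    """Reinterpret a 32-bit unsigned int as an IEEE 754 single."""
    return struct.unpack("<f", struct.pack("<I", n))[0]

pattern = float32_bits(3.14159)   # 32 bits of payload
# Every integer below 2**53 is exactly representable as a float64,
# so the whole 32-bit pattern fits losslessly inside one double.
carried = float(pattern)
recovered = bits_to_float32(int(carried))
```

So one float64 can "contain" one float32 exactly; whether the nesting can be iterated the way the post describes depends on the 39x mantissa-expansion claim holding up.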
That's why I am writing on these forums: someone who understands math can perhaps get some ideas from my writings. I don't know if it is possible to use the Clinton Hartmann method and DSM together or not, and in what order (Hartmann first and then DSM, or DSM first and then Hartmann). But 32-bit DSM (a 1-bit DSM signal with 32-bit internal processing, so 32-bit accuracy) could be used to represent a 32-bit floating-point number. Or am I wrong about that too? I just don't know. I don't know how multibit or multirate DSM works, or how pseudo-parallel DSM works. So I write these posts so that someone can figure out whether I am right about something, or totally wrong about everything.

Inside every PC could be a signal processor that turns information into a (frequency) signal and processes it with those methods. The information being compressed can be sound, video, picture, text, or whatever, either digital bits or an analog signal. Enormous data compression would be possible. Although just floating-point processing with the numbers-inside-each-other principle is already enormous data compression, so signal-processor compression is not needed in every case.

I read somewhere on the net that delta-sigma modulation gives better accuracy than linear PCM for sound because DSM dynamic range (accuracy in bits) increases as the frequency of the sampled signal gets lower: if audio is sampled at 16-bit / 44.1 kHz PCM it has 16-bit dynamic range everywhere, but 16x-oversampled 1-bit DSM with the same 20 kHz audio range (sampled at 16 x 44.1 kHz) has 16-bit accuracy only at the highest (20 kHz) frequency; at 10 kHz the DSM has 2x the dynamic range (32 bits), and at 5 kHz 4x (64 bits), in theory at least, perhaps not in practice. I read this claim on one internet forum; I have not seen it in the scientific DSM papers I have read. But if it is true, it makes a combination of oversampling and multibit or multirate DSM processing possible.
A combination of optimal oversampling and multibit/multirate DSM could be found that gives maximal information density (bits) in DSM processing. The information can be audio, video, text, or picture. For example, a 1 kHz signal that is 100x or 1000x oversampled: if a 1 kHz signal is sent as a 1000 kHz signal, it must be sampled, so 500 kHz is left. A delta-sigma modulated signal has high noise at a normal sampling ratio, making the high frequencies unusable. But because only 1 kHz is needed, that 1 kHz signal can sit at the bottom of the frequency range, and because it is at the bottom of a 500 kHz DSM signal, this 1-bit signal has a 500-bit information/dynamic range. Shifted 1 kHz upward to 2 kHz, there is room for another 1 kHz signal with a theoretical 499-bit information space, and so on. Only the highest portion of the signal has 1 bit of information space, and it is unusable because of the high noise anyway.

When noise becomes too high, a high-frequency DSM signal without oversampling becomes unusable, but the low parts of the frequency range remain usable, and by dividing one high-frequency range into very small 1 kHz "channels" out of a 500 kHz signal, the capacity is actually quite enormous compared to the traditional method where a 1 kHz, 1-bit signal is sent over a 1 kHz channel. Low-oversampling-ratio DSM converters use 4x oversampling or even less, with noise shaping, and are multibit or multirate designs. For a 1 MHz low-oversampling DSM with a 4x oversampling ratio, the frequency is divided by 2 for sampling (500 kHz left) and then by 4 (125 kHz left), but now signal quality is good at high frequencies. If the 125 kHz is divided into 1 kHz channels, the bottom of the band (the first 1 kHz) has 125x the information density of the top of the channel. Again the information "compression" is huge compared to just one 1 kHz channel at 1 kHz.
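For reference, the textbook relation between oversampling and DSM accuracy can be computed directly. The standard peak-SQNR formula for an ideal order-L modulator gives (6L + 3) dB, i.e. about L + 0.5 extra bits, per doubling of the oversampling ratio, so roughly 1.5 bits per octave for first order rather than a doubling of the bit count; this is worth checking against the forum claim above.

```python
import math

def dsm_sqnr_db(osr, order, quantizer_bits=1):
    """Textbook peak SQNR (dB) of an ideal order-L delta-sigma modulator
    with an N-bit quantizer at oversampling ratio `osr`."""
    n = quantizer_bits
    return (6.02 * n + 1.76
            - 10 * math.log10(math.pi ** (2 * order) / (2 * order + 1))
            + (2 * order + 1) * 10 * math.log10(osr))

# Doubling the OSR buys ~9 dB (~1.5 bits) at first order,
# ~15 dB (~2.5 bits) at second order:
gain1 = dsm_sqnr_db(128, 1) - dsm_sqnr_db(64, 1)
gain2 = dsm_sqnr_db(128, 2) - dsm_sqnr_db(64, 2)
```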
Hannu Tenhunen has made high-oversampling multirate DSM studies; these have about a 1400-bit multirate ratio (although the DSM is a 1-bit signal), with oversampling well over 100x. If, for example, 16x oversampling with 1400-bit multirate were possible, that would be about 90-bit accuracy in a 1-bit DSM signal, so about a 90x ratio, without using data compression. The Takis Zourntos model instead of DSM is possible. A VCO-ADC or SAR-ADC can be used instead of plain DSM. Other references: "A novel parallel delta sigma modulation FM-to-digital converter"; "Complex frequency modulation - Ian Scott's technology pages" (a patented complex FM method); "Cascaded Hadamard-based delta sigma modulator" (Alonso); "A nonuniform sampling adaptive delta modulator" (ANSDM); and PWM DSM is perhaps another alternative: "A mostly digital PWM-based delta sigma modulation ADC with matched quantizer"; "Range scaling delta sigma modulator based on a multi-criteria optimization"; "A 2-bit adaptive delta modulation system with improved performance". Direct digital DSM (a DDS delta-sigma modulator) is also possible, without analog components.

If multibit or multirate DSM does not offer any improvement in information density, perhaps pseudo-parallel DSM can be used, if it offers such capacity. Whether pseudo-parallel DSM can be used together with multibit or multirate DSM, I don't know. Vector quantization can be used with DSM, perhaps even "hyperdimensional vectors" or "multidimensional vectors" in the vector quantization, or Additive Quantization (AQ). The APL programming language uses one symbol for one (multidimensional) vector operation, and there are many symbols in APL. Perhaps a symbol system like APL's could be used in delta-sigma modulation with vector quantization, or other signal processing: the signal is multidimensional-vector-quantized and made into a signal carrying a "tag" of an APL-like symbol that describes the multidimensional vector quantization method used.
The receiver receives the signal, and when it notices the APL symbol tag, that symbol is the key to how to decode the multidimensional vector, because there are many different ways to use multidimensional vectors (in the APL programming language at least). See the "hyperdimensional vector data multi-space" text or texts on the internet. Do Gal's accuracy tables have something to do with vector quantization also? Or not. A. Lavzin: "A higher-order mismatch-shaping method for multi-bit sigma-delta modulators".

Text compression can use 24-bit values per mark, which means 16 million values per text character. Inside PC memory can be a library of 16 million different letters, marks, IDBE dictionary-compression two- or three-character blocks, and finally complete words and then complete sentences (16 million different possibilities). One "text character" can be any of 16 million different things, even a complete sentence, if the most used sentences are in the library. Every IDBE character block must be in memory twice, because every block is either connected to another block or ends a word, so two identical IDBE blocks are needed, one marked "connected to another" and one "end of word". If that makes efficient text compression.

There is the 32-bit (actually only 21-bit) UTF-32 text standard. Instead of 2 million text characters, a 21-bit UTF could use just one language, every language having its own 21-bit system. In English, a few hundred marks can go to numbers, special marks, letters, and IDBE character blocks; the rest, about 2 million values available in 21 bits, is divided between the 0.5 million most used English words and the 1.5 million most used complete English sentences. Those words and sentences, in 2 million fixed ROM memory places, can use 5 bits per letter (32 values, enough for the English letters plus the other most used marks). This kind of text library is so small that it fits inside every cell phone.
So one 21-bit "text character" can be a complete sentence, not just one letter as in UTF-32 (21 bits) today. 5-bit and 21-bit character encoding can even be used together: 5 bits (32 values) add information such as whether the 21-bit sentence has quotation marks or a question mark, since the 21-bit code includes the complete sentence but not whether there is a question mark or full stop at its end; that gives 32 different possibilities for adding marks to a sentence, either in front or at the back. This 5-bit "header" is added to the 21-bit text encoding only when needed. Or use 24-bit encoding, which offers 16 million values; then the text library is very large.

Paul Tarau has made several efficient number systems. Elmasry, Jensen, Katajainen: "The magical skew number system". People have dreamed about endless data compression. Jiri Khun: "Solver for systems of linear equations with infinite precision", 2015. If any mathematical model can give infinite accuracy, an infinite amount of information could be packed into that model. Is this Jiri Khun model a path to endless data compression? Some mathematical equation is computed, and the result of that computation (of a linear system in this case) is an endless list of numbers; those numbers can represent text, pictures, or other information, and the information can be there because the mathematical model has endless accuracy, so the computer can just crunch numbers eternally without ever reaching the limits of accuracy. So all the information of the universe could be in that simple (or not so simple) mathematical model, represented in a few bits (or not so few bits). Every computation that gives endless accuracy is a candidate for endless data compression. So is endless compression capacity packed inside linear equations? Accuracy does not need to be 100%; error correction codes are very efficient nowadays, and even after error correction, information that uses endless data compression may be only partly usable.
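The word/sentence codebook idea above can be sketched as a toy fixed-width dictionary coder: each code can stand for a letter, a word, or a whole sentence, as long as sender and receiver share the same codebook. The codebook here is a made-up example, not a real 21-bit standard.

```python
# Toy fixed-width dictionary coder. Each code word (index) could be stored
# in 21 bits; what it expands to depends entirely on the shared codebook.
codebook = ["<unk>", "hello", "world", "thank you very much in advance!"]
code_of = {entry: i for i, entry in enumerate(codebook)}

def encode(tokens):
    """Map each token (letter, word, or sentence) to its codebook index."""
    return [code_of.get(t, 0) for t in tokens]   # 0 = unknown token

def decode(codes):
    """Expand codebook indices back into tokens."""
    return [codebook[c] for c in codes]

msg = ["hello", "world", "thank you very much in advance!"]
codes = encode(msg)
# 3 codes x 21 bits = 63 bits, versus 8 bits per character for the raw text.
```

The catch, as with all dictionary compression, is that the savings only exist because the (large) codebook is stored on both ends and not counted in the message size.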
But endless data compression that makes compression limitless would be useful even in a very inaccurate form. I can be wrong, like always, because I have no knowledge of what linear equations are. Q-digest is one of the quantile/averaging algorithms. An ultrafilter is one way to find information, from "polynomial representations" (of linear equations) or otherwise; I just don't know, I don't know the math. But can ultrafilters be used in endless data compression, or in other data compression? Zero sets have something to do with endless data compression also. Texts: "Ultrafilters, compactness and the Stone-Čech compactification" by Bar-Natan; "Equivalence and zero sets of certain maps in finite dimensions" by Michal Feckan; "Zero sets and factorization of polynomials of two variables", 2012; "p-adic numbers" by Jan Hendrik Evertse; "Infinite dimensional analysis" by Aliprantis and Border. Do differential equations, or differential evolution, polynomial equations, and zero sets all have something to do with ultrafilters and endless data compression?

Also, the "infinity computer" principle exists in four different forms: the infinity computer by Ya. D. Sergeyev, the REAL computer architecture by W. Matthes, the Perspex machine by J. A. D. W. Anderson, and the patent by Oswaldo Cadenas, WO2008078098A1. The infinity computer is also patented. Reading those patents and texts may help, but although they are about computing with infinite values, they are not exactly endless data compression (I think; I could be wrong). I think ultrafilters and zero sets are the key to endless data compression. "Are zero sets of polynomial equations closed because of the fundamental theorem of algebra", 2018. If some number space is closed, information can be picked out of it using Q-digest or other quantile or "average consensus" algorithms (I think), ultrafilters, etc. A closed number space is not eternal, so the compression is not endless, but almost endless, which is enough. I don't know; I am not a mathematician; I can be wrong about everything.
"Ordinals in HOL" by Norrish; "Hyperreal structures arising from logarithm". I still think ultrafilters and zero sets are the best candidates for endless data compression. There is the text "Gradual and tapered overflow and underflow: a functional differential equation and its approximation", 2006. It has a floating-point format (tapered FP, like the unum/ubox format?) with an overflow threshold of 10 to the power 600,000,000, meaning a number with 600,000,000 decimal digits. A number with 6 digits is a million and one with 12 digits a trillion, so 600,000,000 decimal digits is an enormous amount. Overflow is one aspect of FP accuracy; underflow and mantissa accuracy are others. But if that FP number has almost endless (overflow) range, then that FP format has almost endless data-storing capacity. All the information of the world, all internet content, could be put into one floating-point computation: when this FP number is computed, the result is an almost endless list of bits from the non-overflowing computation, and that long line of bits could contain all the information of the world. So a version of practically endless data compression was invented back in 2006 already. It uses differential equations; can ultrafilters and zero sets be used with differential equations to make other forms of almost endless data compression too?

"The magical skew number system" by Elmasry, Jensen, and Katajainen is another accurate number system, and some of their number systems can be used as additions to number systems in wide use. Paul Tarau has made accurate number systems also. I don't know how overflow range relates to the mantissa accuracy of a floating-point number, and mantissa accuracy is the real accuracy of an FP number, so I can be wrong about the endless data-storing capacity of that 2006 text.
Texts not so closely related to endless data compression: "Phasor measurement unit (PMU) data compression"; "Distributed average consensus with dithered quantization"; "iAVQ"; "Min-sum allocation with maximum average information quantization"; "Vectrex data compression through vector quantization"; "ADC look-up table based post correction with dithering"; "Gal's accuracy tables revisited"; "Locally adaptive vector quantization"; "One-bit delta modulation receiver via deep learning"; ODelta compression by Gurulogic Oy; Octasys comp compression. Fenwick's additive code with additive quantization or vector quantization, possible or not? "Twofold fast summation"; "ELDON: floating point format for signal processing"; bounded floating point; the Quire unum format for vectors; "Cliffosor parallel geometric algebra"; "bc interactive algebraic language"; "A new number system using alternate Fibonacci numbers"; "Unifying bit-width optimization for fixed-point and floating-point design"; "SEERAD: A high speed yet energy efficient rounding-based approximate divider"; "Simplified floating point division and square root".

There is also multidimensional modulation, hyperdimensional modulation, multidimensional DSM, space-time vector DSM, and multiple description coding with DSM: "Statistical data compression and differential coding for digital mobile fronthaul", 2019; "Learning vector symbolic architectures"; "Dense binary hyperdimensional computing"; "Optimal delta-sigma modulation based noise shaping for truly aliasing-free digital PWM"; "Bit-interleaved turbo-coded modulation (BICTM)"; "Differential coding modulation with noise shaping (NS-DPCM)", 2018; "Superposition coded modulation WDM"; "Superposition for lambda-free higher order logic". Higher-order logic can also be in a VLSI circuit. "An energy efficient QAM modulation with multidimensional signal constellation" (2016) is based on triangles, so GPU processing in a graphics processor should be fast.
And again I made everything one very long slump of a list, because I am in a hurry (again). Digital DSM is also possible: direct digital synthesis (DDS) using delta-sigma (or sigma-delta) modulation, so an analog path in the DSM signal processing would then not be needed? Is a DDS version of the Takis Zourntos model etc. possible? And/or of the Clinton Hartmann pulse keying model?

Hartmann's multiple pulse group keying, US patent application 20030142742 from 2003 along with his other associated patents on multiple pulses per group keying, is the first method. The second method is delta-sigma modulation and similar methods. And the third method is floating-point numbers inside each other. Those three methods would be much better than typical arithmetic data compression methods like Huffman coding or FSE encoding, and they are lossless. They are not data compression but making a signal into a pulse keying signal, or into a DSM signal, or making a number into a floating-point number; so although they have extremely high information density, they are not data compression. Using those three methods separately (pulse group keying; DSM and similar methods, multibit or multirate, and/or dividing one frequency channel into a large group of smaller channels so that channels low in the frequency range have high information density; and floating-point numbers inside each other) or two or three of them together (if they work together) could give enormous data savings, much more than any arithmetic-based bit reduction. Because those methods are not data compression (I am not sure whether the Hartmann model is data compression or not), actual data compression can be applied to the signal afterwards, making it even more compact. I have written about these things for three years. Any corrections or suggestions are welcome, about where I went wrong and what I could do better.
I would really like someone who knows math to read these posts, and if he or she notices usable ideas in what I have written, he or she could write a real scientific publication or something, if any of my ideas are worthwhile (or not). Any feedback is welcome. At least Jocky has read my posts, and I am grateful for any suggestions or corrections. These ideas are all freeware and public domain material that I have written all around the Robin Hood Coop forums, and anyone can use public domain material; that is why I am writing in the RH forums. If I am wrong about everything I have written, a notification of where I went wrong would be good too. If I should clarify something or put it in a more understandable form, any notice is welcome as well. The problem is that I don't know math, so it is difficult: I don't know the mathematical terms, and I sometimes have only a very hazy idea of what I mean.

But the text "Gradual and tapered overflow and underflow: a functional differential equation and its approximation", 2006, is worthwhile. "Bounded floating point". Google Groups, 2017: "Beating posits at their own game" by John G. Savard, EGU (extreme gradual underflow); if it is not shown in the Google browser it is shown in the Microsoft browser. Analog circuits are made at 16 nm nowadays, and 5 nm analog circuits are planned. The KLT transform and the anamorphic stretch transform, which work well in the analog domain but not in digital, could use an analog signal processor, like video and audio codecs, and analog processing could perhaps be used in almost endless data compression also. "New number systems seek their lost primes"; "Base infinity number system" by Eric James Parfitt; "Peculiar pattern found in 'random' prime numbers"; "Hyperreal structures arising from logarithm". Benford's law is explained in "DSP guide, chapter 34: Explaining Benford's law" by Steven W. Smith, and according to it Benford's law is an antilogarithm thing.
A similar style of explanation may hold for the "peculiar pattern in random prime numbers" and "hyperreal structures from logarithm" results. Perhaps that information can be used to make almost endless data compression. Wikipedia: "Ideal (ring theory)", "Ideal number". John G. Savard's model is logarithmic, like the a-law used in the G.711 telephone standard (16-bit to 13-bit, and that to 8-bit logarithmic). Can Savard's model be used to make 8-bit minifloats also? And higher-than-8-bit FP formats too. Is this "logarithmic floating point" or "fractional floating point", since logarithms are fractional, not integer, values?

Instead of almost endless data compression that packs everything into one number, there could be 1000 million or 1,000,000 million 64-bit numbers that contain all the information of the internet, fitting into 64 gigabits or 64 terabits (8 gigabytes or 8 terabytes); the computations per number are then shorter and faster. Sparse sampling can reconstruct a sample from only one 250th of the information in some cases, so sparse sampling can be used in information compression also, along with analog sampling (Xampling).

Integer computing: the balanced ternary number system uses base 3 with digits (+1, 0, -1) and is very efficient; base 3 is near the natural logarithm base e (about 2.7), which is why its radix economy is good. How about a balanced base-30 number system? Number 30 has some good properties as a number base (as do 6 and 60), and 30 is about 10x the natural logarithm base. There is also balanced ternary tau, using the logarithmic-style value tau (the golden ratio, about 1.6) as base, and zero displacement ternary and the Fibonacci base. Can Fibonacci numbers be used with floating point the way John G. Savard uses the logarithm? The Fibonacci series grows by about the 1.6 ratio (golden ratio/tau) in integer form; it is a fractional value like logarithms are.
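The balanced ternary system mentioned above is easy to demonstrate: base 3, digits -1/0/+1, and every integer (negative ones included, with no separate sign bit) has a unique representation.

```python
def to_balanced_ternary(n):
    """Integer -> balanced-ternary digits (-1, 0, +1), least significant first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # a digit 2 becomes -1 plus a carry into the next place
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    """Balanced-ternary digits (least significant first) -> integer."""
    return sum(d * 3 ** i for i, d in enumerate(digits))

# Round-trip check over positive and negative integers:
for n in range(-40, 41):
    assert from_balanced_ternary(to_balanced_ternary(n)) == n
```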
The text "DC-accurate, 32-bit DAC achieves 32-bit resolution", 2008, says that DNL and monotonicity are theoretically infinite in that DAC design, so perhaps data compression could use such structures. Srinivasa Ramanujan was a mathematician who made studies of infinity, or at least the biopic about him is named "The Man Who Knew Infinity". If analog circuits are used, an "analog error cancellation logic circuit" can be used, mostly with a SAR DAC. "A constant loop bandwidth in delta sigma fractional-N PLL frequency synthesizer", 2018.

If John G. Savard's floating-point model is logarithmic a-law, can other logarithmic companding/quantization systems be used with floating point also? "Asymptotically optimal scalable coding for minimum weighted mean square error", 2001; "Geometric piecewise uniform lattice vector quantization of the memoryless Gaussian source", 2011; "Spherical logarithmic quantization" by Matschkal; logarithmic spherical vector quantization (LSVQ); lattice spherical vector quantization, Krueger 2011; Gosset low complexity vector quantization (GLCVQ). Most of those have something to do with logarithms, so can they be used with floating point like Savard's a-law, increasing accuracy? "Low delay audio compression using predictive coding", 2002, has the "weighted cascaded least mean squares" (WCLMS) principle; whether it has something to do with logarithms I don't know. On the "neuraloutlet" wordpress (com) page there are other logarithmic systems; whether those can be used with floating point like a-law I don't know.

On the "shyamsundergupta number recreations" page, in "fascinating triangular numbers", numbers 1, 11, 111, etc. have triangular base 9, and in the "unique numbers" section base/number 9 is widely connected to unique numbers also, e.g. the digital root of unique numbers is 9. In "curious properties of 153", under "curious properties of binary 153", number 153 forms an "octagonal binary ring" where the binary values of 8 bits / 255 values circle around.
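The a-law companding referred to above can be sketched with its continuous formula (A = 87.6). Note this is the ideal curve; the actual G.711 codec implements a segmented, piecewise-linear approximation of it, not this exact function.

```python
import math

A = 87.6                       # standard a-law parameter
_DEN = 1.0 + math.log(A)       # normalizing denominator, 1 + ln(A)

def alaw_compress(x):
    """Continuous a-law curve: linear |x| <= 1 in, companded |y| <= 1 out.
    Small amplitudes get a near-linear boost, large ones a log squeeze."""
    s = 1.0 if x >= 0 else -1.0
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / _DEN
    else:
        y = (1.0 + math.log(A * ax)) / _DEN
    return s * y

def alaw_expand(y):
    """Inverse of alaw_compress."""
    s = 1.0 if y >= 0 else -1.0
    ay = abs(y)
    if ay < 1.0 / _DEN:
        x = ay * _DEN / A
    else:
        x = math.exp(ay * _DEN - 1.0) / A
    return s * x
```

The point of the curve for the floating-point analogy is that quantizing the companded value spends its levels logarithmically, roughly like an exponent-plus-mantissa split does.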
Is number / base 9 then the best integer base for computing? And can number 153 and its binary ring property be used in data compression somehow? This circling binary ring is reminiscent of the “Z4 cycle code” used in turning quaternary values to binary etc. Other: “A bridge between numeration systems and graph iterated function systems”. “Preferred numbers” are integers that are used in package parcel sizes. They are logarithmic in a way also, so can preferred numbers be used in floating point like Savard uses logarithms? “Logarithmic quantization in the least mean squares algorithm” by Aldajani, Logarithmic Cubic Vector Quantization (LCVQ), “Semi-logarithmic and hybrid quantization of laplacian source”, “Finite gain stabilisation with logarithmic quantization”, “A logarithmic quantization index modulation”: can any of these be used with floating point like Savard's model? On the web page XLNSresearch (com) is a long list of LNS related studies, about multidimensional LNS, hybrid FP / LNS, multi-operand LNS, index calculus DBNS, CORDIC based LNS. Lucian Jurca has written about hybrid FP / LNS subjects. Other: the FloPoCo library; Stack Overflow 2018: “Algorithm - compression by quasi-logarithmic scale”; “A new approach to data conversion: Direct analog-to-residue conversion”; high resolution FP ADC (Nandrakumar). In the text “Making floating point math highly efficient for AI hardware” 2018 are listed FP formats: “nonlinear significand maps / LNS”, “entropy coding / tapered floating point” (how about using Finite state entropy with tapered FP / posit?), reciprocal closure (unum), binary stochastic numbers, posit, fraction map significand (“Universal coding of the reals” 2018, Lindstrom), and then Kulisch accumulation / ELMA (exact log-linear multiply-add). ELMA is 8 bit with 4 bit accuracy and 24 bit range, so it is suitable for ADPCM systems. Bounded FP and block FP are other new FP formats.
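Since logarithmic companding keeps coming up above, here is a minimal sketch of the idea using the continuous μ-law formula, which belongs to the same family of curves as the G.711 a-law (this is the textbook formula, not the exact G.711 piecewise-segment tables, and the function names are my own):

```python
import math

MU = 255.0  # standard μ-law companding constant

def compress(x):           # x in [-1, 1] -> companded y in [-1, 1]
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):             # inverse mapping
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

def quantize(x, bits=8):   # compand, then quantize uniformly in the companded domain
    levels = 2 ** (bits - 1) - 1
    return expand(round(compress(x) * levels) / levels)

# Small inputs get fine steps, large inputs coarse steps, so the
# RELATIVE error stays roughly constant across four decades of amplitude:
for x in (0.001, 0.01, 0.1, 0.9):
    assert abs(quantize(x) - x) / x < 0.05
```

This is the sense in which a-law / μ-law is "logarithmic": uniform steps in the companded domain become exponentially spaced steps in the linear domain, much like floating point exponent steps.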
If Savard uses a-law compression in FP, can cubic, spherical and pyramid logarithmic quantization, and others like residual, multi-operand and multidimensional logarithmic quantization, also be used in floating point, and CORDIC (though the BKM algorithm is perhaps better) and the Fibonacci sequence also with floating point? Savard uses FP numbers that take logarithmic “steps”, so those methods mentioned could also be used for FP “steps”, in Fibonacci scale or logarithmic scale, BKM etc. Other logarithmic systems are complex / LNS hybrid, Monte Carlo LNS, denormal LNS, interval LNS, real / LNS, serial LNS, two dimensional LNS (2DLNS), signed digit LNS, one hot residue LNS. “Quantization and greed are good” 2013 by Mroueh is extreme 1 bit compression. If only the mantissa of an FP number is data compressed, not the exponent, processing is faster, because FP numbers are difficult to compress, and there is a huge disparity between exponent range and mantissa accuracy in FP numbers. Increasing only mantissa accuracy lessens this disparity. ADPCM / delta compression (lossy) can be used as in ultra low delay audio codecs that have about 1 millisecond or less processing time. Or use Finite state entropy / bit truncation. AI research is using stochastic rounding, 8 and 4 bit minifloats, “dynamic point integer” and other rounding and compression methods (IBM, and the Clover FP library).
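Stochastic rounding, mentioned above from the AI-training literature, is simple enough to sketch: round up with probability equal to the fractional distance to the next grid point, so rounding errors average out to zero instead of accumulating (a minimal sketch, function name my own):

```python
import random

def stochastic_round(x, step, rng=random):
    """Round x to a multiple of `step`, up with probability = fractional part."""
    q, frac = divmod(x / step, 1.0)
    if rng.random() < frac:
        q += 1
    return q * step

random.seed(0)
vals = [stochastic_round(0.3, 1.0) for _ in range(10000)]
mean = sum(vals) / len(vals)
# Each individual result is 0.0 or 1.0, but the average is unbiased:
assert abs(mean - 0.3) < 0.02
```

This unbiasedness is why it helps low-precision (8 and 4 bit) training: with round-to-nearest, many small gradient updates would all round to zero.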
About analog to digital converters: “Analytical evaluation of VCO-ADC quantization noise spectrum using pulse frequency modulation” Hernandez, “Time-quantized frequency modulation using time dispersive codes” Hawksford, “Enhancing analog signal conversion to digital channel using PWM” Zucchetti Advanest, “Mean interleaved round robin algorithm”, the OMDRRS scheduling algorithm for CPUs (2016), PWM-ADC, PFM-ADC, “A new method for analog to digital conversion based on error redundancy” 2006, sparse composite quantization, pairwise quantization, “Robust 1-bit compression using sparse vectors”, space time vector DSM (2002), “A time-interleaved zero-crossing-based analog-to-digital converter” 2011, “A level crossing flash synchronous analog-to-digital converter”, “Space vector based dithered DSM”, “Nonuniform sampling delta modulation - practical design studies”, “Multi-amplitude minimum shift keying format” 2008, implicit sparse code hashing, “Design of multi-bit multiple-phase VCO-based ADC” 2016, fractional Fourier shift, fractional Fourier transform, sparse fractional Fourier transform, “Decomposed algebraic integer quantization”, “Waveform similarity based overlap-add shift (WSOLA)”.
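The simplest relative of the delta modulation and level-crossing converters listed above is plain 1-bit delta modulation, where each output bit says only "step up" or "step down" by a fixed amount. A minimal sketch (not any of the cited designs, just the textbook scheme):

```python
import math

def dm_encode(samples, step=0.1):
    """1-bit delta modulation: emit 1 to step up, 0 to step down."""
    bits, est = [], 0.0
    for s in samples:
        bit = 1 if s >= est else 0
        est += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step=0.1):
    """Rebuild the staircase approximation from the bit stream."""
    out, est = [], 0.0
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return out

# As long as the signal slope stays below one step per sample
# (no "slope overload"), the staircase tracks within about one step:
signal = [math.sin(2 * math.pi * t / 64) for t in range(256)]
decoded = dm_decode(dm_encode(signal))
assert max(abs(a - b) for a, b in zip(signal, decoded)) < 0.25
```

Delta-sigma modulation improves on this by putting the quantizer inside a feedback loop that shapes the error spectrum, but the 1-bit-per-sample channel format is the same.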
There are logarithmic quantization methods like spherical quantization, cubic (vector) quantization and pyramid (vector) quantization. Gal's accuracy tables and their improved version, “Gal's accuracy tables method revisited”, can improve accuracy. Can the Gal's accuracy tables revisited method and spherical, cubic or pyramid (vector) quantization be combined, using those accuracy tables to improve spherical, cubic or pyramid logarithmic quantization? And is it possible to build this into hardware in some video or audio codec or other chip, so there is no software processing, but it runs straight in hardware? “Extreme vector compression” called AQ (additive quantization) exists. Can similar accuracy tables be used with it? In the post “Using floating point numbers as information storage” it is said that software can expand a 64 bit floating point number up to over 2000 bits of mantissa accuracy (39 X the bitwidth of the mantissa). If floating point, tapered floating point / posit or another number system is used, the hardware could include an accuracy improving system similar to the software one, in every microprocessor like an IEEE standard (nowadays microprocessors use the IEEE standard, but they don't have ways to improve mantissa accuracy, so that must be done in software). Posit / tapered floating point is more efficient than normal FP, but accuracy can be further improved using those accuracy improving tricks in hardware form, not only in software. Or ordinary IEEE standard floating point is used and the accuracy increasing is then done in hardware. So an 8 bit or 4 bit float or posit can be enough for a pixel of a picture or a hertz of sound if its accuracy can be increased without increasing bitwidth (mantissa accuracy increased max. 39 times). Is tapered floating point / posit a sort of logarithmic way of doing floating point numbers? If so, can Gal's accuracy tables revisited be used with posits etc. to increase the accuracy of such number systems?
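Software schemes that extend a 64 bit float's effective mantissa (floating-point expansions in the style of Dekker, Priest and Shewchuk, which is plausibly what the "over 2000 bit mantissa" post refers to) are built from one error-free primitive, TwoSum: the rounding error of a single addition is captured exactly in a second double, and chaining such pairs multiplies the effective mantissa width. A minimal sketch:

```python
def two_sum(a, b):
    """Knuth's TwoSum: return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bv = s - a          # the part of b that made it into s
    av = s - bv         # the part of a that made it into s
    return s, (a - av) + (b - bv)

# A term far below the 53-bit mantissa vanishes in a plain double add...
s, e = two_sum(1.0, 2.0**-60)
assert s == 1.0
# ...but is recovered exactly in the error term, nothing is lost:
assert e == 2.0**-60
```

Representing a value as an unevaluated sum (s, e) of two doubles ("double-double") already gives about 106 bits of mantissa; longer expansions continue the same trick, which is how pure software reaches the multi-hundred-bit accuracies mentioned above.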
And can this accuracy increasing be put into hardware, in some microprocessor? Other accuracy increasing methods (including the one that increases mantissa accuracy up to 39 times) can offer more accuracy, but are they more complex and do they require more time? Is Gal's method revisited the simplest method? And can those accuracy increasing methods be put into hardware in some processor? Is it possible to build a dual IEEE floating point / modern posit FPU that can use standard IEEE floating point numbers but, if needed, can also use a modern posit etc. efficient number system? And can this processor use accuracy increasing in hardware? Can mantissa accuracy increasing in hardware be included in a future IEEE standard or in a standard for using posit numbers?
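To make the "tapered" property of posits concrete, here is a sketch of decoding an 8-bit posit with es = 2 (the field layout of the 2022 posit standard; the decoder itself is my own illustrative code, not from any posit library). The run-length-coded regime steals bits from the fraction, so precision is highest near 1.0 and tapers off toward the extremes:

```python
ES = 2  # exponent field width (posit standard 2022)

def decode_posit8(bits):
    """Decode an 8-bit posit (es = 2) bit pattern to a Python float."""
    if bits == 0x00:
        return 0.0
    if bits == 0x80:
        return float("nan")              # NaR, "not a real"
    sign = -1.0 if bits & 0x80 else 1.0
    if bits & 0x80:
        bits = (-bits) & 0xFF            # two's-complement negate
    body = bits & 0x7F                   # 7 bits: regime + exponent + fraction
    first = (body >> 6) & 1
    run = 0                              # length of the leading regime run
    for i in range(6, -1, -1):
        if (body >> i) & 1 == first:
            run += 1
        else:
            break
    k = run - 1 if first else -run       # regime value
    rest = max(7 - run - 1, 0)           # bits left after regime + terminator
    tail = body & ((1 << rest) - 1)
    exp_bits = min(ES, rest)             # exponent may be partially cut off
    exponent = (tail >> (rest - exp_bits)) << (ES - exp_bits)
    frac_bits = rest - exp_bits
    frac = tail & ((1 << frac_bits) - 1)
    mant = 1.0 + (frac / (1 << frac_bits) if frac_bits else 0.0)
    return sign * mant * 2.0 ** (k * (1 << ES) + exponent)

assert decode_posit8(0x40) == 1.0        # near 1.0: 3 fraction bits, step 1/8
assert decode_posit8(0x41) == 1.125
assert decode_posit8(0x7F) == 2.0**24    # maxpos: all regime, no fraction bits
```

Note the taper: between 1.0 and 2.0 the step is 1/8, while the largest representable value jumps straight from 2^20 to 2^24. A dual IEEE/posit FPU would mainly need this field extraction alongside the fixed-field IEEE one.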