The floating point standard uses 16, 32, 64 or 80 bit FP formats, while OpenGL graphics adds non-standard 10 and 11 bit packed formats that work in GPUs for computer graphics. Information in FP format is always linear, like ordinary PCM. Integer PCM, however, can use a differential method (DPCM or ADPCM): instead of coding the whole signal the way PCM does, only the difference between the previous and the next value is coded. That is economical; 4-bit ADPCM can approach the perceived quality of 16-bit PCM.

Why not use the floating point format differentially as well, the way DPCM treats integer PCM? This "DFP" (Differential Floating Point) would use the smallest practical FP format (the 10 or 11 bit OpenGL types) as a base and be differential like differential PCM. The OpenGL 10-bit format has a 5-bit exponent and 5-bit mantissa, and the 11-bit format a 5-bit exponent and 6-bit mantissa; neither has a sign bit, but adding one to the 10-bit format gives a signed 11-bit value. This is somewhat similar to 4-bit integer DPCM, but with a 5-bit mantissa plus a 5-bit exponent. Differential floating point encoding has a much higher range than 4-bit integer DPCM, though not much higher accuracy (4-bit DPCM carries 4 bits of accuracy, and 10/11-bit FP has 5-6 bits of mantissa accuracy). However the combined range-plus-accuracy number field is much larger, just as FP numbers in general cover a much wider range than integers.

This 10/11-bit differential floating point is fast to compute because it uses only 10 or 11 bits, and differential encoding improves effective accuracy further. Professional sound recording can use a 64-bit FP format; 10/11-bit differential floating point PCM could perhaps come close in accuracy, at least at quiet volumes. Computer graphics pixels use 32 or 16 bit floating point; a 10/11-bit differential format could be more accurate. Even smaller floating point formats have been proposed, such as the 8-bit FP format by George Spelvin on the MROB.com page, and the tapered, fractional and reversed Elias gamma exponent formats in the article "Between fixed and floating point" by Dr. Gary Ray; new unum/ubox computing is coming, which increases accuracy even more. So a floating point format below 10 bits is perhaps possible, although it would not be standard the way the OpenGL formats are nowadays.

If the signal changes rapidly, quickly switching to standard 32 or 16 bit floating point PCM for a short time and then back to the differential 10/11-bit mode may be best; processors are fast enough nowadays that such a dual FP/DFP standard is possible, both for visual information (video transmission pixels or GPU graphics) and for sound. If sound recording moves from 32-bit to 10/11-bit FP, perhaps 400-512 simultaneous tracks become possible where about 128 tracks is the practical limit now. Unum bits can be added to FP values: 32 bits + 8 bits would be a 40-bit unum format, 10/11-bit FP + 5/6 unum bits another unum format (16 bits together), 16-bit floating point + 8 unum bits a 24-bit format, and so on. Because 10/11-bit FP is much faster than 64 or 32 bit FP, perhaps 6000 simultaneous sound tracks could be recorded to hard disk using GPU floating point computing, if latency is no longer an issue. Integer recording of thousands of simultaneous tracks on a GPU is also conceivable if the GPU has an integer ALU; integer sound should then use some efficient format such as ZDTNS (zero displacement ternary number system) ADPCM, with 3 bits mapping to 2 ZDTNS trits, or something similar. Dither, quadrature mirror filters or other techniques can also be applied to fit the information into a smaller space. 6000 tracks is more than any real-world sound desk can handle, so a virtual sound desk is the only way to edit thousands of channels of sound: sound editing done almost completely in software rather than with hands-on knobs. Sigma-delta modulation, meanwhile, uses very efficient noise shaping methods that reach about 90 decibels, or 15 bits, of improvement over non-noise-shaped signal quality.
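As a concrete sketch of the DFP idea, the loop below quantizes each sample's difference from the running reconstruction to a value with a 5-bit exponent and 5-bit mantissa (the 10-bit OpenGL layout, with the sign handled separately). The function names and the flush-to-zero underflow handling are illustrative choices, not part of any standard.

```python
import math

def quantize_minifloat(x, exp_bits=5, man_bits=5):
    """Round x to the nearest value representable with exp_bits of
    exponent and man_bits of mantissa (sign handled separately)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0.0 else 1.0
    m, e = math.frexp(abs(x))            # abs(x) = m * 2**e, 0.5 <= m < 1
    bias = (1 << (exp_bits - 1)) - 1
    if e < 1 - bias:                     # underflow: flush to zero
        return 0.0
    e = min(bias, e)                     # saturate overflowing exponents
    m = round(m * (1 << man_bits)) / (1 << man_bits)
    return sign * m * 2.0 ** e

def dfp_encode(samples, exp_bits=5, man_bits=5):
    """Code each sample as a quantized difference from the running
    reconstruction, so quantization errors cannot accumulate."""
    deltas, recon = [], 0.0
    for s in samples:
        d = quantize_minifloat(s - recon, exp_bits, man_bits)
        deltas.append(d)
        recon += d
    return deltas

def dfp_decode(deltas):
    """Rebuild the signal by summing the quantized deltas."""
    out, recon = [], 0.0
    for d in deltas:
        recon += d
        out.append(recon)
    return out
```

Note the closed loop in `dfp_encode`: the difference is taken against the decoder's reconstruction, not the previous raw sample, which is what keeps the 5-bit mantissa error from accumulating over time.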
Differential floating point is not 1-bit delta-sigma, but it is differential coding all the same, so perhaps DSM noise shaping techniques can be used with DFP too, boosting the 5-bit mantissa accuracy to an effective 20 bits or so, with the 5-bit exponent then added on top. Floating point delta-sigma modulation has also been proposed since 1992: "Floating-point sigma-delta D/A converter", Kalliojärvi 1992. Accuracy of delta-sigma with 10/11-bit FP may be such that no dither or noise shaping is needed, so delta-sigma with floating point could be simpler(?) than with noise shaping. The model by Takis Zourntos, which is similar to delta-sigma but uses nonlinear control and is more stable, could also be coupled with a floating point format. There is also the Block Floating Point format, a cross between fixed and floating point representation, which has already been used for ages in NICAM and other television sound formats. Differential block floating point would perhaps be simpler(?) than ordinary FP with delta modulation / differential format; however, the differential/delta representation and the (block) floating point must be coupled somehow for accuracy reasons if only about 10 bits are in use and there is no oversampling ratio. Quantization noise is shifted to high frequencies and a cutoff filter is added so the noise does not leak into the audio/video content. But that noisy frequency band can still hold some information, such as compression and data processing information. Quantization noise in PCM, or noise shaping in delta-sigma, pushes noise to high frequencies; if inside that noise there is information about how to decode the content rather than audio/video content itself, it can be carried reliably, because digital signals have a very high signal-to-noise ratio when the payload is simple numerical data rather than audio or video. The potential "information storage" within the quantization noise of PCM or the noise-shaped band of delta-sigma signals, which is usually filtered out, is quite large.
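The noise shaping idea above can be sketched with a first-order error-feedback quantizer: the previous quantization error is subtracted from the next input, which pushes the coarse quantizer's noise toward high frequencies where it can be filtered out. The step size and function name here are illustrative.

```python
def noise_shaped_quantize(samples, step=1 / 16):
    """First-order error-feedback (noise shaping) quantizer."""
    out, err = [], 0.0
    for s in samples:
        v = s - err                   # feed back the previous error
        q = round(v / step) * step    # coarse uniform quantizer
        err = q - v                   # error to shape into the next sample
        out.append(q)
    return out
```

Because the errors telescope, the running average of the output tracks the input far more closely than the coarse step size alone would suggest; the error energy sits at high frequencies instead.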
But that information concerns the next sample, not the sample being played, because the decoder must decode the signal first. The slight delay makes this perhaps unsuitable for live transmission, but fine for prerecorded material. So the frequencies that are usually filtered out in fact hold quite a large information storage space: the information that normally sits in the additional "header" of a sample block, describing how the encoded information is decoded, can be put inside the signal itself, saving bitrate. There is also "Quantization and greed are good: one bit phase retrieval, robustness and greedy refinements", Mroueh 2013 (extremely quantized phaseless measurements). That and additive quantization (extreme vector compression) are two methods for low bitrates, suggesting a "TwinAQ" format analogous to TwinVQ compression. The unum concept also exists. If a unum is coupled with a 10/11-bit floating point number into a 16-bit number (with 5/6 bits of unum part), perhaps dither and noise shaping are not needed for accurate sound/video reproduction in differential floating point. But using the unum only makes sense if it is better than a 16-bit floating point number in differential form, either faster to compute or more accurate; if it offers no improvement over 16-bit FP in differential form it should not be used. Block floating point (BFP) also sits between fixed and floating point and uses interval-style arithmetic, like the unum does, so perhaps block floating point and the unum format work well together, although BFP is not a standard FP format. A 12-bit differential unum is possible too: a 10-bit FP number (without sign bit) with only a 2-bit unum section makes a compact 12-bit differential unum value when the bitrate must be low and 12 bits is preferred over 16.
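Block floating point, mentioned above, can be sketched as one shared exponent per block of signed integer mantissas. The block length and widths below are illustrative choices, not the NICAM layout.

```python
import math

def bfp_encode(block, man_bits=10):
    """Block floating point: one shared exponent for the whole block,
    one signed integer mantissa per sample."""
    peak = max(abs(x) for x in block)
    exp = (math.floor(math.log2(peak)) + 1) if peak > 0 else 0
    scale = 2.0 ** (man_bits - 1 - exp)
    maxq = (1 << (man_bits - 1)) - 1
    mants = [max(-maxq, min(maxq, round(x * scale))) for x in block]
    return exp, mants

def bfp_decode(exp, mants, man_bits=10):
    """Rebuild the samples from the shared exponent and mantissas."""
    scale = 2.0 ** (man_bits - 1 - exp)
    return [m / scale for m in mants]
```

The economy is visible in the bit accounting: the exponent cost is amortized over the block, so each sample pays only for its mantissa, while quiet blocks still get a small exponent and full mantissa resolution.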

# Floating point differential modulation / encoding

For quantization there is the vector quantization method "Additive quantization" (Martinez). Differential floating point can use small 10/11-bit FP numbers; the OpenGL standard does not give them a sign bit, but adding a sign bit to the 10-bit OpenGL number makes an 11-bit FP number suitable for differential floating point. Even smaller FP numbers are possible: the Cell processor and others use an 8-bit "quarter precision" FP8 format with 1 sign bit, a 4-bit exponent and 3 mantissa bits, so its accuracy is very poor. But accuracy can be improved using dither, as FP sound recording does (with noise shaping in the signal), by using logarithmic 8-bit mu-law as a floating point stand-in, or by improving FP accuracy with software methods or a non-standard FP format; mini- and microfloats are non-standard anyway. Other options include the ti.com page "Where will floating point take us?", a microfloat with an 8-bit exponent and only 1 "implied mantissa" bit; the floating point section of bartwronski.com, which suggests another new microfloat type; or simply running a 7-bit integer (with or without hidden bit) plus 1 sign bit through a floating point processor as an exponent-less "floating point" number. All of these mini- and microfloat methods can be used in a differential floating point system. Even smaller FP formats of 4 or 5 bits can be used if their accuracy is improved with the aforementioned methods and they are used as differential FP. The "Multiple base composite integer" on the MROB.com page, in 8-bit form and used differentially, is another possibility. There is even a 1-bit floating point format in the book "Analog circuit design: low power low voltage integrated filter" 2013, in the section "Analog floating point converters": delta-sigma modulation with floating point numbers. There is also the improved distributed DSM model by Hatami: "A novel speculative pseudo-parallel delta sigma modulation", Johansson 2014, which is very efficient.
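A 1-4-3 quarter-precision layout like the one described above (sign bit, 4-bit exponent, 3-bit mantissa, bias 7) can be packed and unpacked in a few lines. Subnormals, infinities and NaN are omitted for brevity, so this is an illustrative sketch rather than any processor's exact format.

```python
import math

def fp8_pack(x, bias=7):
    """Pack x into a sign / 4-bit-exponent / 3-bit-mantissa byte."""
    sign = 0x80 if x < 0 else 0x00
    x = abs(x)
    if x == 0.0:
        return sign
    m, e = math.frexp(x)                  # x = m * 2**e, 0.5 <= m < 1
    e -= 1                                # rewrite as 1.f * 2**e
    frac = round((m * 2 - 1) * 8)         # keep 3 fraction bits
    if frac == 8:                         # mantissa rounded up to 2.0
        frac, e = 0, e + 1
    e = max(1 - bias, min(15 - bias, e))  # clamp to the 4-bit field
    return sign | ((e + bias) << 3) | frac

def fp8_unpack(b, bias=7):
    """Decode a byte produced by fp8_pack back to a float."""
    if (b & 0x7F) == 0:
        return 0.0
    sign = -1.0 if b & 0x80 else 1.0
    e = ((b >> 3) & 0x0F) - bias
    return sign * (1 + (b & 0x07) / 8) * 2.0 ** e
```

With only 3 fraction bits the worst-case relative error is about 2^-4, which is exactly the "very poor accuracy" the text refers to, and why dither or differential use is needed on top.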
"A higher order mismatch-shaping method for multi-bit sigma-delta modulators", A. Lavzin 2002, is another; so is "Direct digital synthesis using delta-sigma modulated signals", Orino. In "Design and implementation of complex floating point processor using FPGA", Pavuluri 2013, there is a three-ALU FPGA: one FPU, one integer ALU and a third for complex numbers. If CPUs or GPUs had three ALUs, one of them a 1-bit delta-sigma unit (or Takis Zourntos's "oversampling without delta sigma") in the distributed Hatami/Johansson model, that 1-bit ALU could handle some of the FPU or integer ALU workload very effectively, leading to a very low bitrate. Roll-printed electronics need simple circuits, so 1-bit logic suits them. Electronic devices already have delta-sigma modulators in audio circuits and other A/D conversion, so building on these circuits, with their analog electronic components, as the base on which a 1-bit ALU operates is cheap and simple. So a 1-bit floating point unit is possible, and it could be used the way normal FPUs are used today, although its accuracy is perhaps only about 10-15 bits. Dithering, noise shaping and similar methods improve accuracy (from 1 bit to 10-15 bits). The oversampling ratio could be low, down to 2-3, or perhaps there could be no oversampling at all if accuracy is good enough.
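A first-order 1-bit delta-sigma modulator of the kind such a 1-bit unit would run can be sketched in a few lines; the decoder here is just a crude boxcar average over the oversampling ratio, standing in for a proper decimation filter.

```python
def delta_sigma_1bit(samples):
    """First-order 1-bit delta-sigma: integrate the input minus the
    fed-back +/-1 output, and quantize the integrator's sign."""
    bits, integ = [], 0.0
    for s in samples:
        out = 1.0 if integ >= 0.0 else -1.0
        bits.append(out)
        integ += s - out
    return bits

def boxcar_decode(bits, osr=16):
    """Crude demodulator: average each window of osr one-bit samples."""
    return [sum(bits[i:i + osr]) / osr
            for i in range(0, len(bits) - osr + 1, osr)]
```

The density of +1s in the bitstream tracks the input level; all the precision lives in the time domain, which is why the accuracy claims above depend so heavily on the oversampling ratio and the noise shaping order.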

Using very small floating point values together with differential (delta) encoding is perhaps suitable, although differential encoding loses efficiency as accuracy grows. But small 8-bit "quarter precision" FP numbers and other very small ones (like the 6-bit IEEE-compatible example on the Wikipedia microfloat page) can offer an accuracy improvement together with differential representation. Another way to improve a very small FP number's accuracy is software mantissa expansion (reportedly up to 38-39 times the mantissa accuracy of a standard IEEE number is possible with modern math), but microfloats have very small mantissas, so software mantissa expansion of 6-8 bit microfloats or 10/11-bit FP numbers is less effective than for larger FP numbers (16, 24 or 32 bits). Delta-sigma modulation is one form of differential encoding, and delta-sigma together with floating point numbers has been proposed since the early 1990s ("Floating point sigma delta converter", Kalliojärvi 1992/1994). A small microfloat together with delta-sigma encoding can improve the accuracy of both delta-sigma systems (audio coding etc.) and floating point microfloats. Patents on and around the subject include "Multiplierless interpolator for delta-sigma analog to digital converter", US pat. 6392576B1, Wilson 2001; US pat. 6252531, Gordon 2001 (cited in many other patents); and "A Nyquist response restoring delta-sigma modulator based analog to digital and digital to analog conversion", US pat. 6373418B1, Abbey 2002. A "Floating point analog to digital converter (FADC)" appears in texts by Shu 2009, along with "11-bit floating-point pipelined analog to digital converter in 18…", Sadaghar, and "Hybrid sign/logarithmic delta-sigma analog-to-digital converter for audio applications", Seh Wah Kwa 1996. Block floating point is another FP system, using integers rather than straight FP values, that offers some improvement over fixed point or FP numbers.
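One well-established software mantissa expansion technique is double-double arithmetic built on the error-free TwoSum transformation: two ordinary doubles together carry roughly 106 mantissa bits. This is a standard trick (Dekker/Knuth), sketched here for addition only.

```python
def two_sum(a, b):
    """Error-free transformation: returns (s, e) with s = fl(a + b)
    and s + e exactly equal to a + b."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def dd_add(x, y):
    """Add two double-double values (hi, lo), keeping ~106 mantissa bits."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]            # fold in the low-order parts
    return two_sum(s, e)        # renormalize into (hi, lo)
```

The point relevant to the text: this kind of expansion multiplies the mantissa width you already have, so it pays off for 53-bit doubles but gains little when the starting mantissa is the 3-6 bits of a microfloat.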
"Hybrid hardware/software floating-point implementations for optimized area and throughput tradeoffs" 2017: this hybrid hardware/software floating-point approach can perhaps be used for mantissa expansion (up to 38-39 times accuracy) of large FP numbers, with these very mantissa-accurate FP numbers then used as a data compression method by piling FP numbers inside each other. Unum computing increases accuracy in hardware, and similarly several floating point numbers can sit inside just one FP number: if mantissa accuracy is higher than the normal IEEE standard, other FP numbers fit inside the mantissa of one number, those can contain further FP numbers, and so on, until the extended mantissa accuracy has been used up. For example, if the mantissa accuracy of one number is 2000 bits (in unum or software extended precision), its first 64 bits can represent another FP number; now there are two FP numbers, with 2000 and 1936 bits of mantissa accuracy available. If each again spends its first 64 bits on another 64-bit FP number, their "own" accuracy drops to 1936 and 1872 bits, the third FP number has 1872 bits (because it was cut from the 1936-bit number, which used the first 64 bits of its accuracy), and so on. Several FP numbers can also be placed in a row inside one FP number's accuracy, so that the first 640 bits represent ten other FP numbers, 64 bits + 64 bits in a row. Several thousand IEEE standard FP numbers then fit inside just one IEEE standard number whose mantissa accuracy has been extended with software or unum computing (the unum uses standard IEEE FP numbers with a non-IEEE extension). This numbers-inside-each-other procedure can be repeated until the accuracy potential (2000 bits) has been used; several thousand other FP numbers now fit inside just one.
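The bookkeeping behind the numbers-inside-a-mantissa idea above can be illustrated with a plain Python integer standing in for the 2000-bit extended mantissa. The 64-bit field width comes from the text; everything else here is a hypothetical sketch, not an established format.

```python
def pack_payloads(payloads, total_bits=2000, field_bits=64):
    """Pack 64-bit payloads into the leading bits of a 2000-bit
    'extended mantissa' modelled as a Python integer."""
    assert len(payloads) * field_bits <= total_bits
    mant = 0
    for p in payloads:
        assert 0 <= p < (1 << field_bits)
        mant = (mant << field_bits) | p
    # left-align the fields within the full mantissa width
    return mant << (total_bits - len(payloads) * field_bits)

def unpack_payloads(mant, count, total_bits=2000, field_bits=64):
    """Recover the leading 64-bit fields in their original order."""
    mant >>= total_bits - count * field_bits
    out = []
    for _ in range(count):
        out.append(mant & ((1 << field_bits) - 1))
        mant >>= field_bits
    return out[::-1]
```

The arithmetic in the paragraph checks out as pure bit accounting (2000 bits hold 31 whole 64-bit fields), but note that this is container packing, not compression: the bits still have to exist somewhere.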
The text "Between fixed point and floating point" by Dr. Gary Ray (chipdesignmag.com) presents several other non-standard floating point number systems that are more accurate than standard IEEE numbers, and the MROB.com page has the "Multiple base composite integer". Nonuniform quantization is another method used with delta-sigma: "Wide dynamic range delta sigma A/D converter", US pat. 5896101A, Melanson 1999. Others: "Elimination of limit cycles in floating-point implementation of direct-form recursive filters", Laakso 1994; "Simplified floating point division and square root", Viitanen; "Tournament coding of integers", Teuhola. Vector quantization (vector compression) can also be used with delta-sigma or other delta structures. Vector compression techniques include spherical, cubic or pyramid vector quantization, and perhaps Vector Phaseshaping Synthesis can be used with them, with other vector compression, with standard delta / delta-sigma (there are other delta methods than just delta-sigma), or with Quadrature Amplitude Modulation etc. Other vector quantization examples are Additive Quantization (Martinez) and Sample-by-sample adaptive differential vector quantization (SADVQ) by Chan, which is different from serial adaptive VQ, also called SADVQ. Takis Zourntos has made an "oversampling without delta sigma" model that is based on nonlinear control, is more stable than delta-sigma, and is a one-bit structure. Multibit ADPCM can also be used with vector quantization (spherical, cubic or pyramid), and with floating point number systems as well. A VCO ADC (voltage controlled oscillator analog-to-digital conversion) can be used with a delta-sigma structure (like the VCO structure designed by Petri Huhtala: PortOSC), and the book "Analog circuit design: low power low voltage integrated filters and smart power" 2013 couples delta-sigma with an analog floating point system in its section "Analog floating point converters".
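The core operation shared by the vector quantization variants listed above is nearest-codeword search: map each input vector to the index of the closest codebook entry and transmit only the index. A minimal illustrative version (the codebook here is arbitrary, and real spherical/pyramid VQ constrains the codebook's geometry):

```python
def vq_encode(vectors, codebook):
    """Map each vector to the index of its nearest codeword
    by squared Euclidean distance."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: d2(v, codebook[i]))
            for v in vectors]

def vq_decode(indices, codebook):
    """Replace each index with its codeword."""
    return [codebook[i] for i in indices]
```

The bitrate saving comes from the index being log2(codebook size) bits regardless of the vector's dimension, which is why VQ combines naturally with the delta structures discussed here: quantizing blocks of deltas instead of single deltas.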
Perhaps a small 6, 8 or 10/11-bit floating point number can be used with vector compression and a delta or delta-sigma structure to improve accuracy. Unum/ubox computing is one way to improve floating point numbers, but it needs extra bits, so it increases bitwidth, which matters in very small microfloats; block floating point is another method. Vector phaseshaping synthesis can represent several waves at once (for making synth sounds), the patents by Clinton Hartmann ("multiple pulse per group keying") also make several waves at once, and Feedback Amplitude Modulation (FBAM) can perhaps be used with QAM, delta / delta-sigma or others. Combining several waves into one form saves bandwidth, and that can be used with Quadrature Amplitude Modulation etc. CPFSK modulation combines frequency modulation and PCM ("On differentially demodulated multi-level digital modulation", Griffin), and CPFSK is a multisymbol coding method as well. AOMedia Video 1 (AV1) is a new video coding format; what if its video coding (movement prediction) algorithms were used with audio ADPCM or other delta coding, together with spherical, cubic or pyramid vector quantization or AQ (additive quantization), using video movement prediction for audio prediction to make a unified audio/video codec? There is also ODelta compression (Gurulogic Oy). One-bit differential audio with delta-sigma or the Takis Zourntos model, with modern delta-sigma noise shaping that achieves about 12 or 13 bits (70 to 80 decibels) or even more (14 bits, close to 90 decibels) of noise reduction with dither, together with the Hatami/Johansson distributed delta-sigma system, would probably lead to a very low bitrate. Even a 48 kHz sample rate can be used if the oversampling ratio is just 1.75 or 2, leading to a 13.7 kHz or 12 kHz highest sound frequency, if efficient noise shaping is used. Higher frequencies can be synthesized using High Frequency Replication. The 12 kHz model can be used for people over 40 years old, who don't hear frequencies above 12 kHz anyway.
Even analog dither can be used instead of digital in a 1-bit system, if an analog circuit is used and analog dither is more efficient than digital, in 1-bit ADPCM or delta(-sigma): "Hybrid digital-analog noise shaping in the sigma-delta conversion", US pat. 20170222657, Ullman 2017. K2 audio and MegaBitMax (ExtraBit Mastering) are noise shaping methods; K2 has been used since the 1980s. NICAM stereo was an old sound format that used PCM but truncated 14-bit audio to 10 bits, removing least significant bits and pushing PCM slightly toward ADPCM style. If 1 bit is noise shaped to 4-bit accuracy, or 2-3 bits to 8 bits, the 4 bits can cover a logarithmic 7-bit integer dynamic range and the 8 bits a 14-bit integer (logarithmic) range. The 7-bit log range can then be used for DPCM-style audio (like DDPCM on the bitsnbites.eu page), and the 14-bit range for PCM (A-law or mu-law logarithmic) sound. A 7-bit integer carried in 4 log bits probably makes good quality DPCM sound, using only 1 bit (if noise shaping, analog or otherwise, can take 1 bit to 4-bit accuracy). K2 audio and other improvement methods can be used. The 14-bit integer (log) PCM is made from only 2-3 bits using (digital) dither and noise shaping: 2-3 bits to 8 bits with noise shaping, then those 8 bits to a 14-bit logarithmic dynamic range. A bit reduction technique like NICAM's drop from 16 bits to 14 bits (K2 audio uses something similar), followed by taking those 14 bits to logarithmic 8 bits, is possible. Audio can be improved even more with an additional header in the receiver itself: 4-bit log / 7-bit integer DPCM can be taken to 14-bit integer PCM, and 8-bit log to 14-bit integer PCM; this 14-bit integer is then expanded 4 x 14 bits to 56 bits, where the additional header (14 bits + a 42-bit header) offers extra dynamic range and sound improvement. It yields only a few bits, nowhere near the full 56-bit range, but this method, which some high-end noise shaping audio DACs use, can still improve sound.
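The A-law/mu-law logarithmic ranges referred to above work by companding before a uniform quantizer. A sketch of the standard mu-law curve (mu = 255), without the G.711 segment layout; `quantize_8bit` is an illustrative uniform quantizer, not the codec's actual one:

```python
import math

def mulaw_compress(x, mu=255.0):
    """Map x in [-1, 1] through the mu-law curve (still in [-1, 1])."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mulaw_expand(y, mu=255.0):
    """Inverse of mulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def quantize_8bit(x):
    """Uniform quantizer on [-1, 1] with 8-bit-style resolution."""
    return round(x * 127) / 127
```

The payoff is exactly the "7-bit integer in 4 log bits" effect discussed in the text: the logarithmic curve spends its codes on quiet signals, so a coarse quantizer after companding resolves small amplitudes far better than the same quantizer applied linearly.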
No floating point is used there, because small microfloats have such minimal mantissa accuracy (an 8-bit microfloat has only a 3-bit mantissa). However, the "Gal's accuracy tables revisited" method extends FP accuracy by much more than just 10 bits. The smallest theoretical floating point numbers on the MROB.com page are about 4 bits, and the Wikipedia minifloat page shows a 6-bit FP number. If a 1 or 2 bit number is noise shaped and dithered to 4 or 6 bits, a 4 or 6 bit FP format can be used for sound reproduction, with Gal's accuracy tables revisited improving quality toward 16-bit audio. Gal's accuracy tables method can be applied to logarithms too.

Differential representation could perhaps also be used with asymmetric numeral systems, the way recursively indexed ADPCM uses its values (RIQ-ADPCM and RIVQ-ADPCM). Delta-sigma, or the Takis Zourntos one-bit model without delta-sigma, can perhaps use an asymmetric numeral system too, the way DSM uses a multi-bit modulator. A 2009 stackoverflow page, "8 bit sound samples to 16 bit", shows that 8 bits can be expanded to near 16 bits, so only 8 bits of information must be compressed to 4, 2 or 1 bit ADPCM, or to 1-bit DSM / Takis Zourntos form, instead of starting from 16 bits. Xampling is a version of sampling, but that is analogue sampling(?); the sparse Fourier transform etc. can also be used. Something like NICAM compression (a version of ADPCM), but using dithering with ADPCM instead of white noise, could improve accuracy; NICAM was reasonably simple coding(?). ADPCM used integers, but a floating point version (differential floating point) with dither, NICAM-style, is possible(?), with a 4, 6 or 8 bit bitwidth. There are unum, posit, valid, and the extreme gradual underflow (EGU) and hyper gradual underflow (HGU) formats in the Google Groups comp.arch 2017 post "Re: beating posits at their own game" (by Quadibloc / John G. Savard); if Google won't show it, it is shown in the Microsoft browser. So unum, posit, valid, EGU or HGU can be used in ADPCM or A-law/mu-law type encoding. Asymmetric numeral systems, which are data compression, can be used with ADPCM or even delta-sigma (multibit DSM) or the Takis Zourntos model (but asymmetric numeral systems use several bits, not just one), so a "multibit Takis Zourntos model", like multibit DSM, is the only way to make asymmetric numeral systems work with one-bit coding. One-bit serial logic and transputers were used in supercomputers, and using a differential representation of information like ADPCM/DSM makes the bitwidth smaller if one-bit logic is used. The XLNS research - overview page has logarithmic number systems in its articles section.
NICAM was halfway between ADPCM and linear coding, so NICAM with dithering instead of white noise, and with floating point, unum, posit, valid, HGU/EGU, a logarithmic number system or some other number system, can be used in 4, 6 or 8 bits. Running lambda calculus in hardware on a 1-bit serial processor is perhaps possible, or on a transputer with lambda calculus in hardware.