Using very small floating point values together with differential (delta) encoding may be suitable, although differential encoding becomes less efficient as the required accuracy grows. Still, small 8-bit “quarter precision” FP numbers and even smaller formats (such as the 6-bit IEEE-compatible number on the Wikipedia minifloat page) combined with differential representation can offer an accuracy improvement. Another way to improve the accuracy of very small FP numbers is software mantissa expansion (up to 38 - 39 times the mantissa accuracy of a standard IEEE number is possible with modern math), but microfloats have very small mantissas, so software mantissa expansion of 6-8 bit microfloats or 10/11-bit FP numbers is not as effective as it is for larger FP numbers (16, 24 or 32 bits). Delta-sigma modulation is one form of differential encoding, and combining delta-sigma with floating point numbers has been proposed since the early 1990s (“Floating point sigma delta converter”, Kalliojärvi 1992/1994). A small microfloat together with delta-sigma encoding could improve the accuracy of both delta-sigma systems (audio coding etc.) and floating point microfloats. Patents on and around the subject include “Multiplierless interpolator for delta-sigma analog to digital converter”, US pat. 6392576B1, Wilson 2001; US pat. 6252531, Gordon 2001 (cited in many other patents); and “A Nyquist response restoring delta-sigma modulator based analog to digital and digital to analog conversion”, US pat. 6373418B1, Abbey 2002. A “floating point analog to digital converter (FADC)” appears in texts by Shu 2009; see also “11-bit floating-point pipelined analog to digital converter in 18…” by Sadaghar, and “Hybrid sign/logarithmic delta-sigma analog-to-digital converter for audio applications”, Seh Wah Kwa 1996. Block floating point is another FP system; it uses integers rather than straight FP values and offers some improvement over plain fixed point or FP numbers. 
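As a rough illustration of the microfloat-plus-delta idea, here is a minimal sketch. The 1-sign / 4-exponent / 3-mantissa bit layout, the bias of 7 and the flush-to-zero behaviour are assumptions chosen for illustration, not any standard format; the point is only that each sample is stored as a minifloat-quantized difference and the encoder tracks the decoder's reconstruction.

```python
import math

# Hypothetical 8-bit minifloat: 1 sign bit, 4 exponent bits, 3 mantissa bits,
# exponent bias 7, no subnormals (tiny values flush to zero). Illustrative only.

def encode_minifloat(x: float) -> int:
    sign = 0 if x >= 0 else 1
    x = abs(x)
    if x < 2.0 ** -6:                 # flush tiny values to zero (no subnormals here)
        return sign << 7
    e = min(8, math.floor(math.log2(x)))
    frac = x / 2.0 ** e               # in [1, 2) for in-range values
    m = round((frac - 1.0) * 8)       # 3 mantissa bits
    if m >= 8:                        # rounding overflowed the mantissa
        if e < 8:
            e += 1
            m = 0
        else:
            m = 7                     # saturate at the largest representable value
    return (sign << 7) | ((e + 7) << 3) | m

def decode_minifloat(b: int) -> float:
    sign = -1.0 if (b >> 7) & 1 else 1.0
    ef = (b >> 3) & 0xF
    m = b & 0x7
    if ef == 0 and m == 0:
        return 0.0
    return sign * (1.0 + m / 8.0) * 2.0 ** (ef - 7)

def delta_encode(samples):
    """Store each sample as a minifloat-quantized difference (delta)."""
    out, prev = [], 0.0
    for s in samples:
        code = encode_minifloat(s - prev)
        out.append(code)
        prev += decode_minifloat(code)  # track the decoder's reconstruction
    return out

def delta_decode(codes):
    out, prev = [], 0.0
    for c in codes:
        prev += decode_minifloat(c)
        out.append(prev)
    return out
```

Tracking the decoder's state in the encoder (rather than the true previous sample) keeps quantization error from accumulating, which is the same principle a delta-sigma loop relies on.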
“Hybrid hardware/software floating-point implementations for optimized area and throughput tradeoffs” 2017. This hybrid hardware/software floating-point approach can perhaps be used for mantissa expansion (up to 38 - 39 times accuracy) of large FP numbers, and these very (mantissa-)accurate FP numbers can then serve as a data compression method by nesting FP numbers inside each other. Unum computing increases accuracy in hardware, and a similar nesting is possible: if mantissa accuracy is higher than the normal IEEE standard, several other FP numbers can live inside the mantissa of one number, and those FP numbers can in turn contain further FP numbers, until the extended mantissa accuracy is used up. For example, if the mantissa accuracy of one number is 2000 bits (in unum or software extended precision), the first 64 bits can represent another FP number; there are now two FP numbers with 2000 and 1936 bits of mantissa accuracy. If each of these again gives its first 64 bits to another 64-bit FP number, they have 1936 and 1872 bits left for their “own” use, and the third FP number has 1872 bits of mantissa accuracy (because it was cut from the 1936-bit number, which spent the first 64 bits of its accuracy), and so on. Several FP numbers can also be placed in a row inside one FP number’s accuracy, so that the first 640 bits represent ten further FP numbers (64 bits + 64 bits etc. in a row). Several thousand IEEE standard FP numbers would then fit inside just one IEEE standard number whose (mantissa) accuracy has been extended by software or unum computing (unum uses standard IEEE FP numbers with an extension that is not IEEE standard). This numbers-inside-each-other procedure can be continued until the accuracy potential (2000 bits) has been used. 
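The nesting arithmetic described above can be written out directly. This sketch uses the text's own figures (a 2000-bit extended mantissa, 64-bit nested numbers) and simply tracks how many "own" bits remain at each level of a single chain of nested numbers.

```python
# Sketch of the nesting arithmetic: an extended mantissa of 2000 bits,
# where each level gives up its first 64 bits to hold another 64-bit
# FP number. The 2000-bit budget and 64-bit payload are the figures
# used in the text above.

def nesting_chain(total_bits: int = 2000, payload_bits: int = 64):
    """Return the remaining 'own' mantissa bits at each nesting level."""
    chain = []
    remaining = total_bits
    while remaining >= payload_bits:
        remaining -= payload_bits
        chain.append(remaining)
    return chain

chain = nesting_chain()
# chain begins 1936, 1872, 1808, ... matching the sequence in the text;
# a single chain of 64-bit numbers inside a 2000-bit budget has 31 levels.
```

Note that one chain exhausts the 2000-bit budget after 31 levels; reaching larger counts requires the parallel "row" packing the text also describes (ten 64-bit numbers in the first 640 bits, and so on).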
The text “Between fixed point and floating point” by Dr. Gary Ray (chipdesignmag.com) presents several other non-standard floating point number systems that are more accurate than standard IEEE numbers. The MROB.com page has “Multiple base composite integer”. Nonuniform quantization is another method used with delta-sigma: “Wide dynamic range delta sigma A/D converter”, US pat. 5896101A, Melanson 1999. Other references: “Elimination of limit cycles in floating-point implementation of direct-form recursive filters”, Laakso 1994; “Simplified floating point division and square root”, Viitanen; “Tournament coding of integers”, Teuhola. Vector quantization (vector compression) can also be used with delta-sigma or other delta structures. Vector compression techniques include spherical, cubic and pyramid vector quantization, and perhaps Vector Phaseshaping Synthesis can be used with them, with other vector compression, with standard delta / delta-sigma (there are other delta methods than just delta-sigma), or with Quadrature Amplitude Modulation etc. Other vector quantization methods include Additive Quantization (Martinez) and Sample-by-sample adaptive differential vector quantization (SADVQ) by Chan, which is different from serial adaptive VQ, also called SADVQ. Takis Zourntos has made an “oversampling without delta sigma” model that is based on nonlinear control, is more stable than delta-sigma and is a one-bit structure. Multibit ADPCM can also be used with vector quantization (spherical, cubic or pyramid vector quantization), and with floating point number systems too. A VCO ADC (voltage controlled oscillator analog to digital converter) can be used with a delta-sigma structure (like the VCO structure designed by Petri Huhtala: PortOSC), and the book “Analog circuit design: low power low voltage integrated filters and smart power” 2013 has, in the section “Analog floating point converters”, delta-sigma coupled with an analog floating point system. 
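For readers unfamiliar with vector quantization, here is the generic mechanism behind the spherical/cubic/pyramid variants named above: each input vector is replaced by the index of the nearest codeword in a shared codebook. The tiny 2-D codebook below is made up purely for illustration and does not correspond to any of the cited schemes.

```python
# Minimal vector quantization sketch: encode each vector as the index of
# the nearest codebook entry (squared Euclidean distance), decode by lookup.

def vq_encode(vectors, codebook):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]

def vq_decode(indices, codebook):
    return [codebook[i] for i in indices]

# Illustrative 4-entry codebook of 2-D vectors (2 bits per vector).
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
idx = vq_encode([(0.9, 0.1), (0.2, 0.8)], codebook)
# idx == [1, 2]: nearest codewords are (1.0, 0.0) and (0.0, 1.0)
```

Structured codebooks (points on a sphere, cube or pyramid surface) exist precisely so that this nearest-neighbour search and the index coding can be done algebraically instead of by brute-force search.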
Perhaps a small 6, 8 or 10/11-bit floating point number can be used with vector compression or a delta / delta-sigma structure to improve accuracy. Unum / ubox computing is one way to improve floating point numbers, but it needs extra bits and so increases bitwidth, a real cost at least in very small microfloats; block floating point is another method. Vector phaseshaping synthesis can represent several waves at once (for making a synth sound), the patents by Clinton Hartmann (“multiple pulse per group keying”) also make several waves at once, and Feedback Amplitude Modulation (FBAM) can perhaps be combined with QAM, delta / delta-sigma or others. Combining several waves into one form saves bandwidth, and can be used with Quadrature Amplitude Modulation etc. CPFSK modulation combines frequency modulation and PCM (“On differentially demodulated multi-level digital modulation”, Griffin), and CPFSK is also a multisymbol coding method. AOMedia Video 1 (AV1) is a new video coding format; what if its video coding (motion prediction) algorithms were used with audio ADPCM or other delta coding, with spherical, cubic or pyramid vector quantization, or with AQ (additive quantization), using video motion prediction algorithms for audio prediction and making a unified audio/video codec? There is also ODelta compression (Gurulogic Oy). One-bit differential audio with delta-sigma, or the Takis Zourntos model, using modern delta-sigma noise shaping that achieves about 12 or 13 bits (70 to 80 decibels) or even more (14 bits, close to 90 decibels) of noise reduction with dither, together with the Hatami / Johansson distributed delta-sigma system, would probably lead to a very low bitrate. Even a 48 kHz sample rate can be used if the oversampling ratio is just 1.75 or 2, giving about 13.7 kHz or 12 kHz as the highest sound frequency, if efficient noise shaping is used. Higher frequencies can be made using High Frequency Replication. The 12 kHz model can be used for people over 40 years old, who don’t hear above 12 kHz anyway. 
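The bandwidth figures above follow from the usual relation between sample rate, oversampling ratio and usable signal band, sketched here for the stated 48 kHz / 1.75x and 2x cases:

```python
# With sample rate fs and oversampling ratio OSR, the usable audio band
# of an oversampled converter ends at fs / (2 * OSR).

def audio_band_khz(fs_khz: float, osr: float) -> float:
    return fs_khz / (2.0 * osr)

band_2x = audio_band_khz(48.0, 2.0)     # 12.0 kHz
band_175x = audio_band_khz(48.0, 1.75)  # ~13.71 kHz
```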
Even analog dither can be used instead of digital in a 1-bit system, if an analog circuit is used and analog dither is more efficient than digital, in 1-bit ADPCM or delta(-sigma); see “Hybrid digital-analog noise shaping in the sigma-delta conversion”, US pat. 20170222657, Ullman 2017. K2 audio and MegaBitMax (ExtraBit Mastering) are noise shaping methods; K2 has been used since the 1980s. NICAM stereo was an old sound format that used PCM but truncated 14-bit audio to 10 bits by removing least significant bits, making the PCM slightly ADPCM-style. If 1 bit is noise shaped to 4-bit accuracy, or 2-3 bits to 8 bits, the 4 bits can cover a logarithmic 7-bit integer dynamic range, and the 8 bits a 14-bit integer (logarithmic) range. The 7-bit log range can then be used for DPCM-style audio (like DDPCM on the bitsnbites.eu page), and the 14-bit range for PCM (A-law or mu-law logarithmic) sound. 7 integer bits from a 4-bit log code probably make good quality DPCM sound, using only 1 bit (if noise shaping, analog or otherwise, down to 1 bit and expansion of that 1 bit to 4-bit accuracy is possible). K2 audio and other improvement methods can be used on top. 14-bit integer (log) PCM is made from only 2-3 bits using (digital) dither and noise shaping: 2-3 bits to 8 bits with noise shaping, then these 8 bits to a 14-bit logarithmic dynamic range. A bit reduction technique similar to NICAM can drop from 16 bits to 14 bits (K2 audio uses a similar bit reduction), and then these 14 bits to logarithmic 8 bits. Audio can be improved even more using an additional header in the receiver itself: 4-bit log / 7-bit integer DPCM can be made into 14-bit integer PCM, and 8-bit log into 14-bit integer PCM, then this 14-bit integer is expanded 4 x 14 bits to 56 bits integer; the additional header (14 bits + 42-bit header) offers additional dynamic range and sound improvement, only a few bits, nowhere near the full 56-bit range, but this method, which some high-end audio noise shaping DACs use, can still improve the sound. 
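The log-code-to-linear-range idea above can be sketched with a standard mu-law compander: an n-bit logarithmic code covers roughly the dynamic range of a much wider linear word. The mu-law formula below is the standard one; the specific pairings in the text (4-bit log roughly matching a 7-bit linear range, 8-bit log matching 14-bit) are the text's own figures, not a standard, and this sketch only demonstrates the mechanism.

```python
import math

def mulaw_compress(x: float, mu: float = 255.0) -> float:
    """Map linear x in [-1, 1] to log-domain y in [-1, 1] (standard mu-law)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mulaw_expand(y: float, mu: float = 255.0) -> float:
    """Inverse of mulaw_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

def log_code(x: float, bits: int, mu: float = 255.0) -> int:
    """Quantize the compressed value to a signed n-bit integer code."""
    levels = 2 ** (bits - 1) - 1
    return round(mulaw_compress(x, mu) * levels)

def log_decode(code: int, bits: int, mu: float = 255.0) -> float:
    levels = 2 ** (bits - 1) - 1
    return mulaw_expand(code / levels, mu)
```

With 8-bit codes the round-trip error for mid-scale inputs stays well under one percent of full scale, which is why logarithmic companding recovers much of the dynamic range of a wider linear word from few bits.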
No floating point is used here, because small microfloats have such minimal mantissa accuracy (an 8-bit microfloat has only a 3-bit mantissa). However, the “Gal’s accuracy tables revisited” method extends FP accuracy by much more than just 10 bits. The smallest theoretical floating point numbers on the MROB.com page are about 4 bits, and the Wikipedia minifloat page has a 6-bit FP number. If a 1 or 2-bit number is noise shaped and dithered to 4 or 6 bits, a 4 or 6-bit FP number can be used for sound reproduction, and Gal’s accuracy tables revisited can improve quality close to that of 16-bit audio. Gal’s accuracy tables method can be applied to logarithms also.
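To make the 6-bit case concrete, here is a sketch that enumerates every value of one possible 6-bit minifloat. The 1-sign / 3-exponent / 2-mantissa split with bias 3 and subnormals is an assumed layout for illustration (the Wikipedia minifloat page discusses similar layouts); with only 64 codes, decoding is just a lookup table, which is also the shape a Gal-style correction table would take.

```python
# Hypothetical 6-bit minifloat: 1 sign bit, 3 exponent bits, 2 mantissa
# bits, exponent bias 3, with subnormals. Illustrative layout only.

def decode6(code: int) -> float:
    sign = -1.0 if (code >> 5) & 1 else 1.0
    e = (code >> 2) & 0x7
    m = code & 0x3
    if e == 0:                          # subnormal: 0.mm * 2^(1-bias)
        return sign * (m / 4.0) * 2.0 ** -2
    return sign * (1.0 + m / 4.0) * 2.0 ** (e - 3)

# All 64 representable values; table[0] is 0.0 and the largest
# finite positive value is decode6(0b011111) == 1.75 * 2**4 == 28.0.
table = [decode6(c) for c in range(64)]
```

Such an exhaustive table makes the coarseness of the format obvious: only 31 distinct positive magnitudes exist, so any accuracy gain has to come from the surrounding machinery (dither, noise shaping, correction tables) rather than from the format itself.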