Ordinary IEEE standard floating point is about 35 years old, counting from when it was first used in computer processors. Processor transistor counts have increased enormously since the 1980s. Is there now a chance to add a third arithmetic unit as standard to computer processors, alongside the integer ALU and the FPU? In "Design and implementation of complex floating point processor using FPGA" (Pavuluri 2013) there is a third ALU that uses complex numbers. In "Hardware-based floating-point design flow" (Parker, Altera, 2011) there is an efficient FP method. Unum computing is another FP method that uses IEEE standard numbers with an additional extra-bit section. In "Between fixed point and floating point" by Dr. Gary Ray (chipdesignmag.com) there are several other FP formats, one of which uses bit-reversed Elias gamma coding in the exponent, so it applies data compression to the exponent bit or bits. New data compression methods like ANS (asymmetric numeral systems) are now in use, in multisymbol coding etc. The text "Hybrid hardware/software floating-point implementations for optimized area and throughput tradeoffs" (2017) describes hybrid hardware/software computing. Unum computing uses extra bits that are integers; what if the unum bits were data-compressed bits, using Gary Ray's Elias gamma coding, ANS coding or something else, to increase information density? Standard IEEE numbers with extra data-compressed unum bits: if a program has no unum capability it uses the standard IEEE numbers, and if unum is used the ALU uses IEEE plus the compressed unum bits. Other data compression methods like Additive Quantization, which is a vector quantization method (Martinez), could perhaps be used, maybe even combined ANS + AQ (just an example, I don't know if it would work), for the unum bits if Gary Ray-type compressed floating point bits are used. But that is only an extension of standard IEEE floating point; what should the third ALU be (third after the fixed point and floating point ALUs)? There are complex-base number systems like base 2i or -1+i, or base 2+i in Rob Craigen's "Alternative number system bases?" text. Really complex number systems appear in the "infinity computer" projects with (seemingly) infinite precision, like Yaroslav D. Sergeyev's infinity computer, J.A.D.W. Anderson's "Perspex machine", Matthes' "REAL computer architecture" or Oswaldo Cadenas's computing-with-infinities system, and also "Ordinals in HOL" by Norrish. Other number base possibilities are the "Magical skew number system" (Elmasry, Jensen, Katajainen), Paul Tarau's number systems (several different ones), the Munafo PT number system and "multiple base composite integer" at MROB.com, and "Fast addition using a new number system" (1997) and "A new method for converting 2's complement to canonical signed digit number system" (1996) by Reza Hashemian. There are other number systems as well. So this third ALU could use some really efficient number base, or a logarithmic base like BTTS (balanced ternary tau) and the "metallic number system" or others at the neuraloutlet.com netpage, or ZDTNS (zero displacement ternary number system) and the "order-3 Fibonacci" encoding by Sayood, which are efficient integer bases. So there are plenty of choices of number system for this theoretical third ALU in modern computers, ranging from simple integer-based (but perhaps ternary) systems to a sophisticated complex-base "infinity computer". The floating point unit itself can also be extended using unum computing: standard IEEE numbers plus a unum extension.
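Since Gary Ray's format above relies on (bit-reversed) Elias gamma coding of the exponent field, here is a minimal Python sketch of plain Elias gamma encoding and decoding; the bit reversal and the exact field layout of Ray's format are not reproduced, only the basic variable-length code, and the example exponent values are arbitrary:

def elias_gamma_encode(n: int) -> str:
    """Encode a positive integer as an Elias gamma bit string."""
    assert n >= 1
    bits = bin(n)[2:]                        # binary digits without '0b'
    return "0" * (len(bits) - 1) + bits      # unary length prefix + value bits

def elias_gamma_decode(code: str) -> int:
    """Decode one Elias gamma code word back to the integer."""
    zeros = 0
    while code[zeros] == "0":                # count the unary prefix
        zeros += 1
    return int(code[zeros:zeros + zeros + 1], 2)

# Small exponents get short codes, so frequent values near zero cost few bits.
for e in (1, 2, 5, 17):
    cw = elias_gamma_encode(e)
    assert elias_gamma_decode(cw) == e
    print(e, cw)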
If the unum section must have the highest accuracy, the unum section must be large: an 80-bit (actually 79-bit) FP number can be extended to 128 bits using a 48 (49) bit unum section, 32-bit FP to 64 bits with a 32-bit unum section, and since graphics cards use a 24-bit FP format nowadays, a 24 (or 23) bit FP number with an 8 or 9 bit unum section makes a 32-bit number. A very large unum section is needed if the purpose is to make super-accurate FP numbers for the "FP numbers inside each other" representation, which leads to massive compression. The unused bit in the 80-bit IEEE FP number (there is one explicit "integer bit") can also be used with data compression (like the Gary Ray model etc.), increasing sign bit accuracy and putting the unused bit to proper use by adding extra information to the FP number. If the integer bit is actually used in the FP number then there is no extra bit to exploit. The sign bit is just a 1-bit plus or minus value in floating point numbers, but what if the sign bit of an IEEE number could also be something like a 1-bit ADPCM or delta-sigma value? Gary Ray's model has a one-bit exponent with data compression. So perhaps the additional bit in FP numbers (if the integer bit is unused) can be used to expand the exponent (range) value together with the sign bit, both data compressed for increased range, if mantissa accuracy is expanded using software computing or unum. If that additional exponent bit (and the sign bit too) is data compressed it can perhaps represent a large value. The sign bit could then be one of two kinds: a normal 1-bit plus or minus value, or something more exotic like a vector, ADPCM or delta(-sigma) value. Takis Zourntos has made a delta encoder that is different from delta-sigma. A sort of vector compression of the bit(s), like AQ, is perhaps also possible; delta(-sigma) with vector compression has also been studied in sound compression etc. Although only 2 bits (sign bit and one extra bit) give only 4 values for codebooks etc. Sample-by-sample adaptive differential vector quantization (SADVQ) by Chan is a method that can represent sound (a soundwave) using only 1-3 bits. Perhaps SADVQ with 1-2 bits could be applied to a floating point system as well (using the sign bit and the extra bit). Serial adaptive differential VQ is also called SADVQ, but it is different from Chan's model. Digital dither could also be used to improve the accuracy of the 2 available bits, and in the unum section too. There are also Quote Notation (math), the Q number format, and Bounded Integer Sequence Encoding (BISE); ARM processors have hardware BISE capacity, I think. So fractional values can also be used, like Gary Ray's "Richey and Saiedian format", in a floating point system, and there is tapered floating point as well. Quadrature delta(-sigma) is also studied in radio frequency communication. Spherical, cubic or pyramid vector quantization with ADPCM perhaps requires more than 1 bit to represent the ADPCM value. There is also CPFSK coding, which is frequency modulation and PCM together: "Iterative multisymbol noncoherent reception of coded CPFSK" (2010), "Multisymbol with memory noncoherent detection of CPFSK" (2017), "On differentially demodulated multi-level digital modulation" (Griffin). CPFSK is a version of multisymbol coding. Alexia Massalin made the Synthesis kernel that uses "quajects", which contain both information and pointers to codebooks etc. together; perhaps quaject-style coding could be used in data compression. In "DEC64: decimal floating point" (Hacker News) there are various proposals for improving floating point formats.
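As a rough illustration of the "standard IEEE number plus extra unum bits" idea, here is a toy Python sketch; the 64-bit layout (low 32 bits = a plain binary32, high 32 bits = a ubit plus extra fraction bits) is an assumption made up for the example, not the actual unum format, and a program without unum capability can simply read the low half:

import struct

# Toy layout (an assumption, not the real unum spec): low 32 bits hold a
# standard IEEE binary32, high 32 bits hold a "ubit" (1 = value is inexact)
# plus 31 extra fraction bits.
def pack_extended(value: float, ubit: int, extra_fraction: int) -> int:
    ieee = struct.unpack("<I", struct.pack("<f", value))[0]
    section = (ubit & 1) << 31 | (extra_fraction & 0x7FFFFFFF)
    return section << 32 | ieee

def unpack_ieee_only(word: int) -> float:
    # A program without unum capability just reads the standard IEEE part.
    return struct.unpack("<f", struct.pack("<I", word & 0xFFFFFFFF))[0]

w = pack_extended(3.14159, ubit=1, extra_fraction=0x1234)
print(hex(w), unpack_ieee_only(w))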
One possibility is to use delta-value bits instead of integer-value bits in fixed or floating point processing; either every bit or only the sign bit is a delta(-sigma) or Takis Zourntos-model delta value, and then the same dither and noise shaping methods that are used in delta-sigma modulation can be used on fixed or floating point numbers as well. There are some very efficient delta(-sigma) noise shaping methods that increase the noise distance to 12 bits (70 decibels), 13-14 bits (over 80 decibels) or almost 90 decibels (15 bits) in audio coding. With similar dither and noise shaping on numbers, quantization noise is pushed from the most significant bits to the least significant bits, perhaps making the LSBs unusable, but an increased bit width of 14-15 bits is worth more than a few bits of LSB unusability. Dithering noise can also be made audible in audio coding, increasing bit width even more; similarly, very loud ("audible" in audio coding terms) dithering noise affects the information coded in noise-shaped numbers, but again the bit width increases. Delta coding also makes it possible to use the Hatami model of pseudo-parallel delta-sigma (or the Takis Zourntos model), see "A novel speculative pseudo-parallel delta sigma modulator" (Johansson 2014). The Hatami model reduces the required oversampling ratio to only 1/16th of what is otherwise needed, so the Hatami model applied to delta-encoded numbers makes a 16-times "compression" of numbers possible (actually no data compression is used, the numbers are only represented in Hatami/Johansson pseudo-parallel delta form). The Hatami/Johansson delta model can be used in audio and video (each pixel is a 20-times-oversampled delta-sigma Hatami/Johansson value; each pixel is a 20-bit colour delta value, and with the 20X reduction only 1 bit is needed instead of 20 bits to represent 20-bit colour); see also "Video compression using color quantization and dither" (Vignesh Raja). Using 24 or 32 times reduction gives 1/24th or 1/32nd of the bit width, so the possibilities to save bit width and bandwidth with binary numbers in audio, video or any numerical information, like fixed point or floating point numbers, are enormous. The efficiency of delta(-sigma) can be improved using analog electronic components; there are VCO-ADCs (voltage-controlled-oscillator based converters) that use delta-sigma conversion and have for example two VCOs per DSM. PortOSC (portable oscillator) by Petri Huhtala is a simple analog VCO; the Delora (Harmony Systems Inc.) VCO, Pigtronix Mothership VCO, Malekko/Richter OSC2 and coupled planar VCOs are other new VCO designs. Analog electronic components are being made in a 28 nm process and there are plans to make 22 nm or 20 nm analog components; see "Hybrid digital-analog noise shaping in the sigma-delta conversion", US pat. 20170222657, Ullman 2017. The delta system (and other numerical systems) can also use spherical, cubic or pyramid vector quantization, or Additive Quantization, AQ (Martinez). ADPCM-style delta representation can be used as well. Delta-sigma conversion can also be done with only digital components and digital synthesis (DDS): "Direct-digital synthesis using delta-sigma modulated signals" (Orino), "A direct digital synthesis with tunable delta sigma modulation" (Vainikka). Improvements in floating point: "Multifunctioning floating-point MAF design with dot product support" (Gök 2007). Integer accuracy can also be increased using a signed digit representation like Gary Ray's floating point: instead of a 32-bit integer there is a 31-bit integer with a "bit-reversed Elias gamma" sign bit, or an ANS or otherwise data-compressed sign bit, or a delta (sigma / Zourntos / ADPCM or other) value sign bit.
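For the delta-sigma idea, a minimal first-order modulator sketch (the plain textbook error-feedback loop, not the Hatami/Johansson pseudo-parallel or Zourntos variants) shows how a 1-bit stream at a high oversampling ratio can carry a multi-bit value, with the quantization error pushed toward high frequencies; the oversampling ratio 32 and the test sine are arbitrary choices:

import math

def first_order_delta_sigma(samples):
    """First-order delta-sigma: 1-bit output whose local average tracks the
    input; quantization noise is shaped toward high frequencies."""
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback           # accumulate the error
        bit = 1 if integrator >= 0 else 0    # 1-bit quantizer
        feedback = 1.0 if bit else -1.0      # DAC value fed back next sample
        bits.append(bit)
    return bits

# Oversample a slow sine 32x and decimate the bitstream back to samples.
osr = 32
n = 64 * osr
x = [0.5 * math.sin(2 * math.pi * k / (8 * osr)) for k in range(n)]
bits = first_order_delta_sigma(x)
recon = [2 * sum(bits[i:i + osr]) / osr - 1 for i in range(0, n, osr)]
print(recon[:8])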
Fixed point numbers can also use a Redundant Binary Representation (RBR) in computer ALUs; RBR numbers are pairs of 2 bits, with each bit pair having its own sign (sign bit?). Now if every RBR pair had one "normal" fixed point bit and the other bit (the sign bit) were a data-compressed or delta-value bit, the effective bit width would increase to another level. Other data compression methods than ANS are Bohr compactification, ultrafilters, Stone-Čech compactification, and second-order arithmetic. Using lambda calculus directly in hardware with binary lambda calculus ("Introducing PilGRIM: a processor...") or SKI combinator logic is also possible; if these are tree-based representations then "An efficient data clustering algorithm using isoperimetric number of trees" (2017) etc. would help. There are also floating point libraries like "Gal's accuracy tables revisited" and Qfplib ("an ARM Cortex-M0 floating-point library" at the quinaplus.com netpage), which is only 436 bytes and can be used inside processor cache memory etc. Another way to use delta-sigma than the Hatami (Johansson & Karlsson) model is found in "Design of multi-bit sigma-delta modulators for digital wireless communications" by Bingxin Li, page 45: the MATLAB toolbox by Richard Schreier for third, fourth and fifth order quantizers with 23, 164 and 1428-rate efficiency (bits?), and it is mentioned that a multibit structure is better (reaching 1428 bits max?). Other texts by Hannu Tenhunen, B. Li or A. Gothenberg mention cascaded and pipelined delta-sigma (sigma-delta) modulators of high efficiency (possibly up to 1400 bits theoretically?). Although this is radio frequency communication with a large oversampling ratio, applied for example to a 64-bit integer line: instead of a 64-bit integer there is a 64-times-oversampled multibit delta-sigma value with 1428-bit accuracy, a "compression ratio" of about 22 times without using data compression. But perhaps the Hatami method is more efficient. Using the very efficient dither and noise shaping methods of delta-sigma, efficiency can be improved further. The Takis Zourntos model or a DPCM model can also be used. If a really compact representation is needed, three methods can be combined: vector quantization (like AQ or cubic, spherical, pyramid etc.) with delta (sigma etc.) coding, and then this 1-bit value is data compressed like Gary Ray's floating point exponent, using ANS or some other data compression method. So three methods are used: vector quantization, delta, and data compression. Or a quadrature delta model could be used instead of vector quantization (if they are different things). This one-bit method (it can also be a many-bit method if DPCM-style coding is used) can be used as an alternative to the sign bit in FP numbers, in RBR representation as one of the two bits in the bit pairs, etc. About alternative number systems: there are logarithmic, hybrid log/fixed, floating point / fixed, floating point / logarithmic etc., and even one multiplier format that has all three together (fixed, floating point and logarithmic) in one number system. IEEE standard FP has its roots in the 1960s although it was standardized in 1985, so it is about 50 years old now. Extending it with the unum concept makes it possible to run old code on FPUs but also newer code with unum. If the sign bit (and the integer bit of the 80-bit FP number) is efficiently delta- and data-compressed, the information density of the FP number increases; this data-compressed sign bit can be used as an alternative to the normal FP sign bit. A similar sign bit can also be used with integers, perhaps a 32-bit integer + 1 sign bit, because decoding that one bit takes much more time than the 32-bit integer.
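Redundant / signed-digit binary can be illustrated with the non-adjacent form (NAF), one canonical signed-digit representation with digits in {-1, 0, +1}; this is only a stand-in sketch for the paired-bit RBR encoding mentioned above, not that exact hardware format:

def to_naf(n: int):
    """Convert an integer to non-adjacent form, a canonical signed-digit
    (redundant binary) representation with digits in {-1, 0, +1}."""
    digits = []
    while n != 0:
        if n & 1:
            d = 2 - (n % 4)      # +1 or -1, chosen so the next digit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits                 # least significant digit first

def from_signed_digits(digits):
    return sum(d << i for i, d in enumerate(digits))

for v in (29, -13, 1000):
    nd = to_naf(v)
    assert from_signed_digits(nd) == v
    print(v, nd)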
Or use 31+1 bits in four bytes handled in a single cycle. The same goes for FP: a normal 32 or 64 bit FP number perhaps gets 1 additional bit (32+1, 64+1), or the sign bit in those numbers is data coded so that no additional bit is needed. Modern processors could have three ALUs instead of two (FP and integer). This third ALU could use complex numbers, a super-accurate number system etc., or the infinity computer principle (a RAM machine or other super-accurate system). The better the accuracy achieved with fewer bits, the better the "numbers inside each other" compression rate is (without using data compression). This improved accuracy can also be achieved with FP numbers via unum or software accuracy expansion, and with integers if the integers have a super-accurate sign bit or if an accurate integer number system (like ZDTNS) is used. Whenever a number has accuracy higher than its standard binary representation, it can be used for "numbers inside each other" compression. If a floating point number can have a 39-times mantissa accuracy improvement, then about 2000 bits of information are available instead of the 53 bits of a 64-bit FP mantissa, and those 2000 bits can store other 64-bit FP numbers: the first one consumes 64 bits of the 2000-bit accuracy and has 1936-bit accuracy itself, so now there are two FP numbers instead of just one, the first with 2000-bit mantissa accuracy and the second with 1936-bit accuracy. This "numbers inside each other" nesting can be continued until the accuracy has worn off, so dozens of FP numbers fit inside just one (more, if each nested number is itself accuracy-expanded again). The same principle can be applied to any super-accurate number system, integers, complex numbers etc. If the delta conversion uses analog components it is perhaps possible to use an extra signal processor or math co-processor outside the CPU, like the old 1980s floating point co-processors; this extra math or signal co-processor would have a partially or completely analog structure compared to the digital CPU. Analog computing can offer a power efficiency improvement according to G.E.R. Cowan, "A VLSI math co-processor...". But delta(-sigma) conversion can also be done digitally, and in multi-bit DPCM etc. form, and the Takis Zourntos model perhaps also digitally rather than in analog. Memristors and optical components are coming to computers, so perhaps the additional co-processor could use those elements; at least memristors with delta-sigma conversion have been studied. But regular analog or digital electronics will do as well. If the Hatami model (or another multibit or multirate delta-sigma) leads to bit rate efficiency (up to 16X, 24X or 32X information reduction is perhaps possible, a huge "data compression" without actually using data compression), that delta model could perhaps be used for all information storage, companding and compression. If delta modulation with or without vector quantization, or quadrature delta modulation, is used, then Feedback Amplitude Modulation (FBAM) with delta-sigma or other methods, and Vector Phaseshaping Synthesis with vector quantization or Quadrature Amplitude Modulation and others, can perhaps be used, as well as the patents by Clinton Hartmann ("Multiple pulse per group keying") and by Albert W. Wegener about on-chip data compression. Things like FBAM, multiple pulse per group keying, VPS etc. can perhaps be used with QAM and other applications. Analog electronics can be used not only for delta-sigma but also for the Karhunen-Loève Transform (KLT), which has many uses in multichannel audio and other signal processing. So if an additional analog or partially analog math/signal co-processor is used together with a traditional CPU, it may bring an improvement ("A VLSI math co-processor", G.E.R. Cowan).
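The "numbers inside each other" idea can be made concrete with a toy packing sketch: if a value really carried about 2000 bits of mantissa accuracy (the text's assumption, taken at face value here, not something the code establishes), those bits could hold further 64-bit IEEE words; in the sketch the 2000-bit field is simply a Python integer:

import struct

FIELD_BITS = 2000     # assumed effective accuracy (39x the 53-bit mantissa)

def pack(values):
    """Pack 64-bit IEEE doubles side by side into one big-integer field."""
    field = 0
    for i, v in enumerate(values):
        word = struct.unpack("<Q", struct.pack("<d", v))[0]
        field |= word << (64 * i)
    return field

def unpack(field, count):
    return [struct.unpack("<d", struct.pack("<Q", (field >> (64 * i)) & (2**64 - 1)))[0]
            for i in range(count)]

vals = [3.14159, -2.71828, 1.0e300]
assert len(vals) * 64 <= FIELD_BITS      # stay inside the accuracy budget
print(unpack(pack(vals), len(vals)))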
The netpage xlnsearch.com has many articles on logarithmic number systems. LNS libraries can also be used like FP libraries, using memory table lookup values instead of direct computation. Floating point and delta-sigma have been connected since the 1990s: "Floating point sigma delta conversion" (Kalliojärvi 1992/1994), "A Nyquist response restoring delta-sigma modulator based analog to digital and digital to analog conversion" US pat. 6373418B1 (Abbey 2002), US pat. 6252531 (Gordon 2001), which is cited in many other patents, "Multiplierless interpolator for sigma-delta analog to digital converter" 6392576B1 (Wilson 2001), "Floating point analog to digital converter (FADC)" (Shu 2009), "11-bit floating-point pipelined analog to digital converter in 18" (Seh Wah Kwa 1996), "Wide dynamic range delta sigma A/D converter" US pat. 5896101A (Melanson), and US pat. 5598158A "Digital noise shaping circuit" (Linz 1994). Also: "Single chip synthesizer for generating digital signals of tunable precision with nearly no spurs", "Digital fractional frequency synthesizer based on counters", "Design and implementation of multilevel interwidth modulation index (MI)", adaptive modulation ratio (AMR), "Quantization and greed are good: one bit phase retrieval, robustness and greedy refinement" (Mroueh 2013, extreme one-bit sampling), MEMS oscillators, "Design of multi-bit phase variable VCO-based ADC" (Madhulika 2016), and the recursively indexed quantizer, RIQ-ADPCM. Using analog circuits also makes it possible to use the KLT transform. "Compact floating point delta encoding for complex data" US pat. 20150139285A1, "Elimination of limit cycles in floating-point implementation of direct-form filters" (Laakso 1994). Block floating point is another method. "Simplified floating point division and square root" (Viitanen), "Tournament coding of integers" (Teuhola). Because there are so many floating point standards (8-bit quarter precision, 10-, 11-, 14-, 16-bit, the unofficial but used 24-bit, 32-bit, the unofficial 48-bit, 64-bit and the 80- and 128-bit extended FP formats, 11 "standard" formats altogether, and many DSP chips use their own non-standard FP formats), would it be sensible to use a small microfloat FP (8 bits perhaps) whose accuracy can be expanded when needed using software expansion (up to 39 times), hybrid software/hardware accuracy expansion, the "Gal's accuracy tables revisited" table lookup method, Qfplib (at the quinaplus.com netpage, a small FP library of only 436 bytes), the codeproject.com netpage's minimalist floating point type "Small floating point" SFP (an own 15-bit FP format for 8-bit etc. microcontrollers), so that a small FP library like that fits inside CPU cache memory, or unum computing with extra bits (8 bits FP + 8 bits unum for example)? Then all the different FP formats could be put away, because there would be just one with reconfigurable accuracy. The user can decide what accuracy to use in the FP number. Also the bit width is much smaller using only 8 bits instead of 32 or 64. This 8-bit FP can be improved using dither and "noise shaping", perhaps up to 16 bits (although dithering / noise shaping affects the information?). Using Gary Ray's model with one data-compressed exponent bit out of five (or a data-compressed sign bit; there is no sign bit in 8-bit quarter precision FP), the range is perhaps the same as 32-bit FP, bringing the 8-bit FP close to 32-bit FP. After that, software or hardware expansion can be used (table lookup, library, software computing, unum etc.) to improve accuracy even more.
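A reconfigurable-accuracy scheme has to start from some concrete microfloat; here is a minimal 8-bit minifloat encoder/decoder sketch in Python assuming a 1-4-3 layout (sign, 4 exponent bits with bias 7, 3 mantissa bits). The layout, bias and rounding behaviour are assumptions for illustration, not any of the formats cited above:

import math

BIAS, MAN_BITS = 7, 3

def encode(x: float) -> int:
    """Encode a float into the assumed 1-4-3 minifloat byte (no inf/NaN)."""
    sign = 0x80 if x < 0 else 0
    x = abs(x)
    if x == 0:
        return sign
    e = max(min(math.floor(math.log2(x)), 15 - BIAS), 1 - BIAS)
    m = round(x / 2.0 ** e * 2 ** MAN_BITS) - 2 ** MAN_BITS
    if m < 0:                                  # too small for a normal number
        return sign | round(x / 2.0 ** (1 - BIAS) * 2 ** MAN_BITS)   # subnormal
    if m == 2 ** MAN_BITS:                     # rounding overflowed the mantissa
        m, e = 0, e + 1
        if e > 15 - BIAS:                      # saturate at the largest exponent
            m, e = 2 ** MAN_BITS - 1, 15 - BIAS
    return sign | ((e + BIAS) << MAN_BITS) | m

def decode(b: int) -> float:
    sign = -1.0 if b & 0x80 else 1.0
    e = (b >> MAN_BITS) & 0xF
    m = b & (2 ** MAN_BITS - 1)
    if e == 0:                                 # subnormal: no hidden bit
        return sign * m * 2.0 ** (1 - BIAS - MAN_BITS)
    return sign * (1 + m / 2 ** MAN_BITS) * 2.0 ** (e - BIAS)

for v in (0.15, 1.0, -6.5, 100.0):
    print(v, "->", decode(encode(v)))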
Theoretical mantissa expansion using 39X software expansion gives 117 mantissa bits from just 3 in an 8-bit number, which is enough for scientific computing. At the MROB.com netpage there are many microfloats, some of which have a very large range (but low accuracy) and are not IEEE standard. On the Wikipedia minifloat page there is a 6-bit "IEEE standard" FP number. That can also be used in an 8-bit byte: 6-bit FP + 2 data-compressed unum bits etc., and the sign bit of the 6-bit number can also be a data-compressed (unum) bit. A 6-bit FP can perhaps be dithered to 12 bits. A one-byte standard using an 8- or 6-bit FP number can have its accuracy and range improved using either extra unum bits, data-compressed exponent/sign/unum bits (only 1 bit needs to be data compressed in Gary Ray's model; the data compression can be ANS or other), or dither and noise shaping. These can be applied directly in hardware. After that, library, table lookup, software/hardware hybrid etc. methods can be used if even more accuracy is needed. A "user defined floating point accuracy format": simple and cheap devices can use the same FP numbers as expensive and complicated ones, the user decides what accuracy to use, and 8 bits is far less than 64 or 32 bits with the same accuracy if needed (but much more computing time, if the 8-bit FP must be expanded to 64-bit FP accuracy). One way to improve low-accuracy 6- or 8-bit FP numbers is differential encoding like DPCM. That is accurate enough, even without additional improvements, to be used for sound or video (pixels). Another way to improve low accuracy is the audio DSP additional "header" model: high-end audio DSPs take a 16-bit integer and add a 24-bit header to it, so the result is 40-bit audio with accuracy artificially expanded from 16 to 18 or 20 bits or so. Some use 16 bits + a 48-bit header = 64 bits. Wolfson audio takes 16-bit to 24-bit integers, and those to 32-bit FP. Could similar artificial accuracy expansion using an additional header be used for information other than audio as well? For example, if information has 14-bit integer accuracy, a 42-bit header is added to it and it is processed in a 56-bit integer processor; the result is about 20-bit accuracy (from a 14-bit source), and the 56 bits can be cut down to 20 bits when the information is stored. "Complex block floating-point format with box encoding for wordlength reduction in communications systems" (2017), "as a floating point system with unique shared fixed exponent reconfigurable operations" (23 Sep 2015), "Complex floating point - a new data word representation" (2012), "Unity3d fuzzy logic a minimal yet robust floating point", smtlib.csuiowa.edu "Theory of floating point numbers", "Floating point numbers in PVS" S. Boldo (higher level formalization), "Floating point division and square root" (Kwan 2008), "Fast dithering on a data-parallel converter", "Shared floating point unit in a single chip microprocessor" (Texas Instruments 1997/2000). There are also Odelta compression (Gurulogic Oy) and Octasys Comp compression. Inexact computing in the signal processor (DSP) and in the CPU also increases speed, if programs are written so that small mistakes do not crash them, or programs are run in small snippets so that a snippet is stored in memory and can be rerun if it hits a processing fault. There is also the FP shared exponent model, used by the 14-bit OpenGL FP standard. A shared exponent can be used in images (video, computer graphics) and in sound as well. Because the sampling rate in sound is typically 48 kHz, and human hearing has on average a 12-16 kHz range, 48 kHz is 3-4 times higher than the hearing range. Sometimes a 96 kHz double sampling rate is used, 6-8 times the human range.
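The DPCM-style differential encoding mentioned above can be sketched in a few lines: each sample is coded as a quantized difference from the previous reconstructed sample, so a small word width follows a slowly varying signal better than direct quantization would (the step size 0.05 and the test signal are arbitrary assumptions):

def dpcm_encode(samples, step=0.05):
    prev, codes = 0.0, []
    for x in samples:
        d = round((x - prev) / step)      # quantized difference
        codes.append(d)
        prev += d * step                  # track what the decoder will see
    return codes

def dpcm_decode(codes, step=0.05):
    prev, out = 0.0, []
    for d in codes:
        prev += d * step
        out.append(prev)
    return out

sig = [0.0, 0.04, 0.11, 0.19, 0.25, 0.28]
print(dpcm_decode(dpcm_encode(sig)))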
A 96 kHz signal is 4 times the 24 kHz frequency that is the audio output of the sound. If four samples of 96 kHz sound are floating point numbers and all have the same exponent, probably nobody notices the difference. Use 32 bits divided into 4 x 7-bit mantissas and a 4-bit exponent. A 7-bit mantissa has a hidden bit, so it gives 8-bit accuracy. Dither increases FP (mantissa) accuracy, so added to the 7 (8) bits the result is about 12-bit accuracy, or 14 bits if noise shaping is used. The four 14-bit noise-shaped values share the same 4-bit FP exponent. Now one 32-bit number has 4 x 14-bit numbers and one 4-bit exponent inside it. K2 Audio improvement and other techniques can be used. 16-bit sound gets 4 x 3-bit mantissas; 3 bits plus the hidden bit give 4-bit accuracy, noise shaped to perhaps 8-10 bits, and all four share the same 4-bit exponent (or a 3-bit exponent and 1 sign bit). Or use Gary Ray's single exponent bit and a 3 x 4 and 1 x 3 bit asymmetrical mantissa split, if 1 bit must go to the sign bit. 8-bit FP sound has a 48 kHz sampling rate and 2 x 3-bit mantissas (up to 8-10 bits of accuracy with the hidden bit and noise shaping) and a 2-bit exponent, or 2 x 2-bit mantissas and a 4-bit exponent: the 2-bit mantissa has a hidden bit, and these 3 bits are perhaps dithered to 6 bits, with the 4-bit exponent. 16-bit sound with 2 x 5-bit mantissas, a 5-bit exponent and 1 sign bit: the 5-bit mantissa has a hidden bit, so 6 bits noise shaped to 12 bits. The same shared exponent can be used in computer graphics and video as well; usually a shared exponent is used to store RGB colour values. But there is also a two-value colour system, the Practical Color Coordinate System (PCCS). The IHS colour coordinate system has intensity, hue and saturation, of which the intensity value can mostly be eliminated. The IHS colour system can perhaps also be vectorized in colour space so that only one vector (colour value?) is needed (?). Additive Quantization or some other method could perhaps be used with it. There is also the "cube-root color coordinate system". PCCS uses hue and tone values (two colours). Coding with "chroma subsampling" in a 4x4 or 3x3 pixel grid, where the central pixel is averaged from the surrounding pixel values and only 8 pixel values are needed, is possible. Using PCCS, with only 2 colour values needed: 4 x 7-bit mantissas = 28 bits, plus a 4-bit common exponent. The mantissas and exponent are either hue or tone values, used like chroma subsampling in that there is only one exponent for all mantissas. The 7-bit mantissa has a hidden bit, the 8-bit value is dithered to 12 bits, and now gamma correction is used like in RGB video; gamma correction is Elias gamma coding that is used as a standard feature of RGB colour, and the same can be used in PCCS too. The 12-bit dithered mantissa is gamma corrected to a 16-bit value. Now there is a 32-bit number with 4 x 16-bit mantissas and a 4-bit common exponent. A 32-bit number with 8 mantissas of 3 bits each: the hidden bit takes them to 4 bits, dithered to 8 bits, gamma corrected to 12 bits, so now 8 x 12-bit mantissas and an 8-bit exponent fit inside one 32-bit number, for a 3x3 pixel grid (the 9th pixel is interpolated). For 16 bits: 4 x 3-bit mantissas with a 4-bit exponent, the mantissas dithered and gamma corrected to 12 bits. A 16-bit number with 8 x 2-bit mantissas, 1 sign bit for Gary Ray's exponent, and one mantissa only 1 bit wide to make room for it. The hidden bit takes the 2-bit mantissas to 3 bits, dithered to 6 bits, gamma corrected to 9 bits. Now there are 8 x 9-bit mantissas and 1 data-compressed exponent in a 16-bit number (the single 1-bit mantissa, 2 bits with its hidden bit, has lower accuracy than the other mantissas, perhaps 6 bits dithered and gamma corrected).
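The shared-exponent (block floating point) layout described above, four 7-bit mantissas and one 4-bit common exponent in a 32-bit word, can be sketched like this; the signed-mantissa packing and the exponent search are assumptions made for the example, not a quoted format:

def pack_block(samples):                     # four samples in [-1.0, 1.0)
    """Pack 4 samples as 7-bit signed mantissas sharing one 4-bit exponent."""
    peak = max(abs(s) for s in samples) or 1e-9
    exp = 0
    while peak * 2 ** (exp + 1) < 1.0 and exp < 15:
        exp += 1                             # common left shift shared by all
    mants = [max(-64, min(63, int(round(s * 2 ** exp * 64)))) for s in samples]
    word = exp
    for i, m in enumerate(mants):
        word |= (m & 0x7F) << (4 + 7 * i)
    return word

def unpack_block(word):
    exp = word & 0xF
    out = []
    for i in range(4):
        m = (word >> (4 + 7 * i)) & 0x7F
        if m >= 64:
            m -= 128                         # sign-extend the 7-bit mantissa
        out.append(m / 64.0 / 2 ** exp)
    return out

print(unpack_block(pack_block([0.02, -0.015, 0.018, 0.007])))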
Or use only 8 x 1-bit mantissas with hidden bits: the 2-bit mantissa is perhaps dithered to 3 bits and gamma corrected to 5 bits, so now there are 8 x 5-bit mantissas, a 7-bit exponent and 1 sign bit. An 8-bit FP with 4 x 1-bit mantissas and a 4-bit exponent: the 1-bit mantissa goes to 5 bits using dither and gamma correction. Using a two-colour system makes it possible to use the mantissa and exponent of the FP number as the two colour values, with one of them (the exponent) as a common value similar to chroma subsampling. Instead of gamma correction, ANS, AQ or some other method can be used. Small microfloats like the 8-bit formats at MROB.com can have a very large range but low accuracy, and are not IEEE standard. But the small (mantissa) accuracy can be improved using table lookup, extra unum bits, software computing etc., so that the large exponent range can be used. So much for FP. The "modern", super-accurate number systems that the third ALU could use, other than those previously mentioned, include the residue number system (the Wikipedia "redundancy (information theory)" article, the reduced-precision residue number system, the associated mixed radix system, "Reduced complexity analog-to-residue conversion employing folding number system" 2010, "Sign detection in non-redundant RNS with reduced information", "A reduced number system reduced instruction set processor", "Efficient exponentiation of primitive roots in GF(2^m)" 1997, "Using primitive roots and index to solve congruences", "A novel RNS-based SIMD RISC processor", "Direct residue-to-analog conversion scheme based on Chinese remainder theorem" 2011, "Efficient converters for residue and quadratic-residue number systems" 1993). Others are the mixed radix, factorial, combinatorial and primorial number systems. Even stranger are the "Hyper-Cantor normal form" (Cantor normal form?) and the "Base infinity number system" (Eric James Parfitt). If Parfitt's model is a tree-based number system, then data compression that compresses binary trees could be used with it. Originally the base infinity number system was going to use a QR-code type representation; perhaps Colour Construct Code or another high-information-density code could store the tree-based base infinity number system. Other references: "High-speed synchronous counters with reduced logic", "Signal processing with reduced combinatorial complexity", "Simple factorial tweezers for delicate serial and parallel processing" 2016, "Coding using a combinatorial number system" 2013/16. The asymmetric numeral system (ANS) is used in data compression nowadays; that is another possible number system. "Low power turbo equalizer architecture" 2002, "Adaptive minimum bit-error rate equalization for binary signaling" 2000, semidefinite embedding, multidimensional scaling, the complex minimal resource allocation network (CMRAN), "Duality between probability and optimization" (Akian), "Variable precision floating point division and square root" (Leeser), "FPGA based floating-point library for CORDIC algorithms" 2010, "Herbie, automated improved floating point" (Panchekha); FP libraries include LibBF (Fabrice Bellard), MSPMATHLIB, GoFast and PICFLOAT. Also, if AOMedia Video 1 (AV1) becomes the new video coding standard, its video prediction algorithms could also be used as an audio predictor for an audio codec, making a unified audio/video codec.
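For the residue number system option, a minimal sketch: an integer is stored as residues modulo pairwise-coprime moduli, multiplication works digit-wise without carries between digits, and the value is recovered with the Chinese remainder theorem (the moduli 3, 5, 7, 11 are an arbitrary choice for the example):

from math import prod

MODULI = (3, 5, 7, 11)          # pairwise coprime, dynamic range = 1155

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese remainder theorem reconstruction."""
    M = prod(MODULI)
    total = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        total += ri * Mi * pow(Mi, -1, mi)    # modular inverse via pow()
    return total % M

a, b = 17, 23
print(from_rns(rns_mul(to_rns(a), to_rns(b))), "==", (a * b) % 1155)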
Also: the fast Walsh-Hadamard Fourier transform, "The simple genetic algorithm and the Walsh transform" algorithm, WHT/DPCM, "High speed system using Walsh function", the Floyd-Warshall algorithm, the Metropolis-Hastings algorithm, "A novel data compression method" (Gordon Chalmers 2005), "Efficient data compression using prime numbers" 2013, "Coincidence, data compression and Mach's concept of Economy of thought", DFT Fourier, "Compression of trajectory data: a comprehensive evaluation", "Achieving better compression applying index-based byte pair transformation", "pipelined data compression", compressing data using Goldbach's conjecture (2010), "Real numbers and compression" (Salomon), "Tree structure lossless compression using interleaved angle", efficiently storing a list of prime numbers, US pat. 9450603B2 (Dickie 2016), prime factor compression, Q-digest, and the Magic Square genetic and Subpopulation algorithm based on novelty.