Advances in modern mathematics have made it possible for an 80-bit floating point number to have 39 times the mantissa accuracy, or 39 x 64 bits = 2496 bits, where previously the accuracy was "only" 64 bits. However, nobody seems to be interested in exploiting this accuracy. Why? 2^2496 is such an enormously large number (zillions times zillions times zillions etc. when converted to the decimal system) that I don't know if there is even a name for it, and only 80 bits are needed for this super-large value. This accuracy should be exploited and put to good use somehow. Making compact numbers, packing several numbers into one, is one way, but making compact numbers is not easy, and the only way I know is chaining values in one large two's complement number: the first 16 bits are 2^16, 16 + 16 bits are 2^32, and so on. But continuing this is a huge waste of number values, because 2^32 is about 4 billion values, while one 16-bit number is about 65 000 values and two 16-bit numbers are about 130 000 values. So if 2^32 is used for a compact number, it should contain about 65 000 of these 16-bit compact numbers (because about 65 000 x 65 000 = about 4 billion values, or 2^32), not just two 16-bit numbers. This technique of mine, putting several smaller two's complement numbers inside one larger two's complement number, is perhaps not a real "compact number" at all; it only uses a large two's complement number to represent several other two's complement numbers, so it is perhaps no "true" compact number nor data compression. On the answers.com webpage "What is a compact number" the answer is that the compact number of 5 + 9 + 4 + 5 is 5945, and that 5945 is the compact number containing four different numbers. But 5 + 9 + 4 + 5 = 24, so the number 24 should be the "true" compact number, not 5945. Using the value 24 instead of the value 5945 would be a huge improvement in information space.
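The two's complement chaining described above ("first 16 bits are 2^16, 16 + 16 bits are 2^32") is really just bit shifting, and can be sketched in a few lines. (A minimal illustration in Python; the 16-bit field width is from the example above, and the function names are my own.)

```python
def pack(values, width=16):
    # chain fixed-width numbers into one large integer,
    # each value shifted into its own bit field
    big = 0
    for i, v in enumerate(values):
        assert 0 <= v < (1 << width), "value does not fit the field"
        big |= v << (i * width)
    return big

def unpack(big, count, width=16):
    # extract the original fixed-width numbers back out
    mask = (1 << width) - 1
    return [(big >> (i * width)) & mask for i in range(count)]

nums = [5945, 24, 65000, 7]
combined = pack(nums)            # one large integer holding all four
assert unpack(combined, 4) == nums
```

This also shows why the scheme is lossless but "wasteful" in the sense described: four 16-bit values always consume exactly 64 bits, no matter how small the individual values are.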
However, there is the problem that the principal components (5, 9, 4 and 5) must be accurately extracted back from the number 24 when the compact number is decoded, and that is the huge problem. Sound processing uses two techniques, dithering and the quadrature mirror filter (QMF), to improve quality and for data compression. I am just wondering if these two can be used in this floating point thing also. Small minifloats or microfloats, the smallest being a 2-bit floating point number (1 exponent bit, 1 mantissa bit), are best for using dither to improve accuracy: a 4-bit accuracy increase is possible, 6 bits with "noise shaping", and 8 bits or more if the dither noise is allowed into the signal channel while the dynamic range (accuracy) is 8 bits or more. So a 2-bit floating point number could then have 4-8 bits of accuracy or more. Coupling dither with a logarithmic number system increases accuracy even more in very small minifloats, but then it is a logarithmic number, not a floating point number. Also, because in my example 2400 bits of accuracy are divided into 30 different 80-bit sections, is it possible that all those 30 sections get the dithered accuracy improvement, not just one base value, all inside one 2400-bit accuracy? I don't know if that is technically possible or even sensible to use. The principle of the quadrature mirror filter is to shift information from lower frequencies to higher frequencies and save information that way (I don't actually know how a QMF works, I just presume). QMF filter banks are used for 16-bit sound, and a 16-bit integer is about 65 000 values; so what about using a QMF filter bank in an 80-bit floating point number with its zillions of values (2^64 mantissa values) and extra-large range? If about 30-70 % information saving capacity is available at 16-bit integers, does the 80-bit floating point number have a greater capacity for shifting small values to higher potential and creating information storage savings? That was using QMF in an 80-bit standard accuracy number.
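The dither idea can be demonstrated with a toy quantizer: without dither, a coarse quantizer always returns the same wrong level; with dither, the average of many quantized samples converges back toward the true value. (A minimal sketch; the coarse step stands in for a tiny minifloat, and all the particular numbers are my own choices.)

```python
import random

def quantize(x, step):
    # round to the nearest quantizer level
    return step * round(x / step)

random.seed(1)
step = 0.25        # very coarse, "2-bit-like" quantizer
x = 0.30           # true value sits between quantizer levels

plain = quantize(x, step)    # always 0.25, error stuck at 0.05
dithered = [quantize(x + random.uniform(-step / 2, step / 2), step)
            for _ in range(10000)]
avg = sum(dithered) / len(dithered)   # averages back toward 0.30

assert abs(avg - x) < abs(plain - x)  # dither recovered sub-step accuracy
```

The trade-off is exactly the one the text describes: the extra accuracy is bought with noise in the channel and with many samples of the same value.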
But then another way: using the full 2400-bit two's complement accuracy of the 80-bit floating point number. That is such a large number that a QMF filter bank could shift information from the lower exponent range (below 2^800 - 2^400 or something like that) to the higher range (near 2^2400) and save information space that way, just like standard 16-bit integer audio compression does when it uses QMF filter banks. Then only 2000 or 1600 bits are used in two's complement form, not 2400, which would make computing significantly faster (because the numbers are in two's complement form) than using full 2400-bit (2^2400) computing. But I don't know if this is technically feasible or even possible. Putting numbers in a chain in two's complement form (the first 16-bit number occupies the 2^16 values, the second 16-bit number occupies the 2^17 - 2^32 range, leading to a 2^32 two's complement number, etc.) is quite awkward, but it is the only way I know to combine a set of numbers into one. Googling "compact numbers" gives only a few results about the actual subject matter (how to combine numbers, or how to make one compact number from several different numbers so that they can not only be put together but also extracted back into the separate numbers if needed). Different ways to make compact numbers are the thing I am searching for.
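On the QMF question: the simplest two-band filter bank is the Haar pair, where the low band keeps pairwise averages and the high band pairwise differences, and together the two bands reconstruct the signal exactly. Most of the signal energy lands in the low band, which is where the coding gain in audio compression comes from. (A minimal sketch, not a real QMF with longer filters; the sample values are arbitrary.)

```python
def haar_analysis(x):
    # split a signal into a low band (averages) and a high band (differences)
    low  = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
    high = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
    return low, high

def haar_synthesis(low, high):
    # perfect reconstruction: a = l + h, b = l - h
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]
    return out

x = [3.0, 3.0, 2.0, 4.0, 8.0, 8.0, 7.0, 9.0]
low, high = haar_analysis(x)
# low band carries the large values, high band only small corrections
assert haar_synthesis(low, high) == x
```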
Wikipedia: "Extreme value theorem", "Compact space", "Supercompact space", "Empty set", "Fuzzy set", "Recursive set", "Multiset", "Aleph number", "Supercompact cardinal", "Subcompact cardinal", "Strongly compact cardinal", "Limit point compactness", "Sequential compactness", "Equivalence of mappings between sets of same cardinality", "Formal language", "Harmonic analysis", "Automata theory" (Wiki & Tuomas Hytönen), "Vector space", "Complex plane", "Kautz graph", "Sparse matrix", "Bohr compactification", "Partition", "Approximate computing", "Bayesian computation", "Byzantine fault tolerance", "Paraconsistent logic", "Hermite polynomials", "Fractals", "Differential evolution", "Complete sequence", "Ostrowski numeration", "Continued fraction", "Self-synchronizing code", "Fibonacci coding", "Universal code", "List of data structures", "Enumerated type", "Markov chain". Wikipedia is full of articles about "Computable function", like "Bijective numeration", "Gödel numbers", "Gödel numbering for sequences", "Effective results in number theory", "Effective method", "Recursive function theory", "μ-recursive function", "Lambda calculus", "General recursive function", "Primitive recursive functions", "Church-Turing thesis", "Chinese remainder theorem", "Helly family", "Square-free integer", "Constant-recursive sequence", and most importantly "Ultrafilter" (with "Ultraproduct" and "Compactness theorem") and "Stone-Čech compactification". I don't know if ultrafilters and cardinal numbers are the only things that actually have something to do with compact numbers. Bijective number systems like Gödel numbers will make compact numbers (and other recursive sequences, Lucas sequences etc.). Other texts: "Ultrafilters, compactness, and the Stone-Čech compactification" Dror Bar-Natan 1993, "P-adic numbers" Jan-Hendrik Evertse 2011, "Equivalence and zero sets of certain maps in finite dimensions" Michal Feckan 1993, "Zero sets and factorization of polynomials of two variables" Micki Balaich, Mihail Cocos 2012.
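Of the articles listed above, "Gödel numbering for sequences" actually describes something close to a "true compact number": one single integer that contains a whole sequence, and from which the sequence can be extracted back exactly by factoring. A small sketch (the fixed prime list and the +1 offset for representing zeros are my own choices for this illustration):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]

def godel_encode(seq):
    # one integer = product of prime powers, one prime per sequence element
    n = 1
    for p, e in zip(PRIMES, seq):
        n *= p ** (e + 1)     # +1 so that zero elements stay visible
    return n

def godel_decode(n):
    seq = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break             # no more elements encoded
        seq.append(e - 1)
    return seq

assert godel_decode(godel_encode([5, 9, 4, 5])) == [5, 9, 4, 5]
```

The catch is that the Gödel number grows much faster than the chained two's complement form, so it is "truly compact" only in the sense of being a single decodable integer, not in the sense of saving space.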
I don't actually even understand what these texts are about. And there is a book, "Infinite Dimensional Analysis: A Hitchhiker's Guide" by Charalambos D. Aliprantis and Kim Border. However, I understand something from the webpages sfu.ca/rpyke "Countable and uncountable sets" and divisbyzero.com "Cardinals of infinite sets, part 1: four nonstandard proofs of countability". What all of those have to do with compact numbers, I don't know. If some super accurate number system is used, in binary form or another (analogue computers and optical and quantum computers can use other than binary numbers), and that number system / several number systems are layered on top of each other, the data packing capacity becomes huge: no need for internet searches anymore, because every mobile phone could have the full content of the world's internet inside the phone's memory on a single memory card. And if there is a way to make "true" compact numbers: for example, 2^2400 is such an enormous value (zillions times zillions times zillions etc. in the decimal system), and it is available in the 80-bit standard floating point number, so making, for example, a chained 16-bit compact number (each 16-bit number occupying only about 65 000 values in this huge number space), the 2400-bit accuracy and 2^16382 range of one 80-bit standard floating point number could perhaps contain thousands (?) or millions (?) of 16-bit numbers (instead of just 150, because 150 x 16 = 2400) in one huge "true compact number". If some super accurate number system is then used as a base (not just binary, or the super accurate number system is transferred to binary), using the "several layers on top of each other" principle, and in each layer using "true compact numbers" with several numbers chained into one very large compact number, then perhaps the information content of the whole universe could be squeezed into one single 80-bit number.
Perhaps "ultrafilters" or "compact cardinal numbers" are helpful in the search for "true compact numbers". Googling around this compact number thing brings only texts of difficult math that I don't understand. If there is a way to "true compact numbers", however erratic, unfinished, inaccurate or limited in range, even a non-perfect solution for chaining several values into one and extracting them back when needed (not just my simple two's-complement-on-top-of-each-other technique) would be a huge improvement in data compression, considering that an 80-bit floating point number has a 2^2496 number space available for accuracy and 2^16382 for range. Ultrafilters and compact cardinal numbers are the two things that I have found, but no other ways to make compact numbers "truly compact". Perhaps sparse sampling (compressive sampling, as in the 2014 text by Francesco Bonavolontà, where only 2 %, one 50th, of the information is needed to restore the whole), or error correction codes, or other data sorting and data compression techniques can lead toward "true compact numbers", and Trachtenberg speed system -type counting algorithms are perhaps also useful for compact numbers.

Further reading (I don't actually understand this math, it is so complicated): "Advanced Calculus" Patrick Fitzpatrick, "Geometry of Numbers" C. G. Lekkerkerker, "Dynamics and Numbers" Sergii Kolyada, "Fundamental theorems of real analysis AB2.15: Theorems of real analysis", "Infinity in compactification" on the cut-the-knot.org webpage, "Hyperbolic manifolds and discrete groups" Michael Kapovich, "Theorems on ultrafilters" Tycho Neve 2013, "Is this a known compactification of the natural numbers?" mathoverflow.net 16.3. 2016, "Near ultrafilters and LUC-compactification of real numbers" Mahmut Kocak, "Formulating Szemeredi's theorem in terms of ultrafilters" HG Zirnstein 2012, "Three-element bands in βN" Yevhen Zelenyuk, Yuliya Zelenyuk 2015, "Lecture 2: Compact sets and continuous functions" ms.dky.edu webpage, "Single variable optimization" econ.iastate.edu/classes webpage, "14.7: Maximum and minimum values" Kim Heong Kwa, "Graphing highly skewed data" Tom Hopper, "Compactness and compactification" Terence Tao, "Analysis-1: induction, sequences and limits" 29.1. 96, "Hereditarily non uniformly perfect sets" Stankewitz, Sugawa, Sumi 23.9. 2016, and six questions on the stackexchange.com webpage: "The set of zeros of a holomorphic function is finite in compact sets" 3.6. 2014, "A set that is uncountable, has measure zero, and is not compact" 26.6. 2012, "Any infinite set K has a limit point in K?" 12.8. 2012, "Examples of compact sets that are infinite dimensional and bounded?" 22.9. 2014, "Can a set be infinite and bounded" 7.8. 2014, and "How to pick up all data into hive from subdirectories" 25.12. 2013. Googling stackexchange.com and "recursive integer partitioning" brings many results, such as "Recursive integer partitioning algorithm" 28.9. 2012. From mathoverflow.net 30.12. 2014: "Set of critical values is compact (closed)". Also "Rings of functions determined by zero-sets" Hugh Gordon 1971, "Supports of continuous functions" and "Metrization of the one-point compactification" Mark Mandelkern 1971 and 1989, "Fourier analysis on number fields" Ramakrishnan, Valenza 2013, "The difference between measure zero and empty interior" Tom Leinster 28.8. 2010, "Topology of the real numbers" UC Davis mathematics, "Two examples of zero-dimensional sets in product spaces" Roman Pol 1989, and two texts from the "Mathematics Subject Classification 2000" series: "Gδ ideals of compact sets" Slawomir Solecki and "Universal measure zero sets with full Hausdorff dimension" Ondrej Zindulka; math.berkeley.edu/kozai: "Notes 8. Analysis: sets and spaces"; pirate.shu.edu: "5.2 Compact and perfect sets"; and lastly from the physicsforums.com webpage: "Proof of lim (1/n) using a hint" 21.1. 2014. Other: "On one type of compactification of positive integers" 2015, "Compactification of integers" Royden Wysoczanski 1996, "Compactness" Jerry Kazdan 2014, "Professor Smith math 295 lectures" 2010, "Pseudocompact group topologies with infinite compact subsets", the ucsd.edu webpage: "Math 140A - HW 3 solutions" and "HW 4", the msc.uky.edu webpage: "Chapter 5: Compactness", "Compact sets - Math Forum - Ask Dr. Math", the math.umaine.edu webpage: "Section 5.4 Accumulation points, Bolzano-Weierstrass and Heine-Borel theorems", "Discrete set" (Wolfram), the zonalandeducation.com webpage: "Interval notation", the math.psu.edu/katok webpage: "Basic topology", the mathcaptain.com/algebra/set-theory webpage, "Introduction to formal set theory", "Using nested iterations and the OVER clause for running aggregates", monsterdex.wordpress.com: "Generating integer partitions in C/C++ (recursive)", and jmp.com: "Support/help/partition_models".

"Formal language" is a mathematical theory mainly for text compression, but can it be applied to strings of numbers also? For example "Inflectional morphology, reverse similarity and data mining" (Alfred Holl) and "Paradigm-based morphological text analysis for natural languages" (Antonio Zamora 1986). Ypo (Ipo) P. W. M. M. van den Boom has invented the "Octasys comp" data compression (patent "Method and device for encoding and decoding of data in unique number values" 2013), and that patent cites an old text by W. D. Hageman ("Encoding verbal information as unique numbers" 1972). Can these principles be used for compact numbers also, extracting smaller numerical values from a larger one after the larger value has been decoded? Is it possible that the number 24 can, at some accuracy, be divided back into for example the numbers 5, 4, 9 and 5 after the 5+4+9+5 calculation, when the decoder receives just the number 24 and some additional information? And lastly "Some remarks on the Bohr compactification of the number line" O. V. Ivanov 1984. Googling "differential equation complex plane" or "Differential equations and function spaces in the complex plane" brings many results, as does googling "Bohr compactification of the integers" and "Fenwick additive code". Also: "On the reduction of entropy coding complexity via symbol grouping: 1 - redundancy analysis and optimal alphabet partitioning" 2004, "Compression with the polynomial transform" 2002, "Applications of Laguerre functions to data compression" Martin 2012, "High speed codebook design by the multitrack competitive learning on a systolic memory architecture" 2004, and "Locally adaptive vector quantization" (NASA). Whether any of these texts are any help for making compact numbers, numbers that themselves contain several other numbers which can be extracted back out of the one large value, I don't know. Also googling "number compactification" brings many results. Making ("encoding") a "true compact number" is easy, putting 5 + 4 + 9 + 5 together makes 24, but how does the decoder then know that this number 24 was made using the principal components 5, 4, 9 and 5 and not some other combination? That is the problem of decoding "true" compact numbers. So "true" compact numbers must have a sort of header in front of them, or inside them some "arithmetic code" so that the decoder can divide the large sum back into its components, or some sort of "information tag" within.
That arithmetic code / information tag can be quite large. For example, if an 80-bit integer is composed of 16-bit values, a 16-bit value is about 65 000 numerical values and an 80-bit integer is a million x million x million x million values (2^80), so this 80-bit number could contain 16 x million x million x million of these 16-bit (65 000-value) numbers, because one 16-bit number is 65 000 values, 16 + 16 bits is 130 000 values, and so on. Now this large 80-bit "true" compact number needs some header information describing how to decode those separate components back out of the 80 bits of information. The header / information tag could be an 80-bit floating point number, which has 2^2496 available numerical values and a 2^16382 range. So 2^2496 different "arithmetic codes" are available (because each value of the huge 2^2496 number space can be an "address" of a different arithmetic code) with a combined range of 2^16382. The "header" is much larger than the actual information, but it is needed to decode the large compact number. Now 160 bits (80-bit integer + 80-bit FP header) of information contain 16 x million x million x million different 16-bit numbers of information. That is much more than the 5 (5 x 16 = 80) or about 150 (150 x 16 = 2400) 16-bit numbers in plain integer or floating point form. That is just one example. If 80-bit floating point numbers are also used as the information store, the information storing capacity grows to astronomical proportions. If the capacity of floating point numbers is expanded using the layer-upon-layer technique, the information capacity grows yet more. So perhaps only 80 bits, 80 ones and zeros, is enough to include in itself the complete information content of the whole universe. But making an "arithmetic code" that is capable of decoding those "true" compact numbers is a complicated task, and the math behind ultrafilters and other such things is such that I don't understand it.
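The header idea can be made concrete: pack variable-width numbers into one integer, and prepend a small header recording the count and the widths. Note that, just as the text above observes, the header can easily end up larger than the payload. (A toy sketch; the byte-per-width header format is my own choice.)

```python
def encode_with_header(values):
    # pack numbers back to back at their minimal bit widths
    big, shift, widths = 0, 0, []
    for v in values:
        w = max(v.bit_length(), 1)
        big |= v << shift
        shift += w
        widths.append(w)
    # header: element count, then each width, one byte apiece
    header = bytes([len(values)] + widths)
    return header, big

def decode_with_header(header, big):
    count, widths = header[0], list(header[1:])
    out, shift = [], 0
    for w in widths[:count]:
        out.append((big >> shift) & ((1 << w) - 1))
        shift += w
    return out

vals = [5, 9, 4, 5, 65000]
h, b = encode_with_header(vals)
assert decode_with_header(h, b) == vals
# here the header is 6 bytes (48 bits) while the payload is only 29 bits
```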
The "information tag" can also be in another form than a separate header; for example, the information on how to decode the compact number can be encoded inside the number itself, like error correction codes do. The separate smaller numbers that are chained into one large value (or the one large value that contains several smaller values inside itself) would then be composed of an "arithmetic code" that includes the information on how to decode the smaller numbers back out of the unified larger value, followed by the actual numerical information. Error correction codes use roughly the same principle. Also the "blockchain" method of the different cryptocurrencies makes a long chain of information that is encoded and then decoded back. Perhaps this same "blockchain" method could be used for making compact numbers: a long blockchain that is, when needed, decoded back into its several smaller principal numbers, at some accuracy (100 % accuracy is not necessarily needed for compact number encoding & decoding). Almost similar to the modern blockchain method of cryptocurrencies is the old "executable compression" method, used for decades in the computer industry. Both make a "blockchain" out of several smaller values and then put that combination into one large information value. Perhaps those principles could be used for making "true" compact numbers also, not just cryptocurrencies and computer programs. There are other methods too: ultrafilters, Bohr compactification, compact cardinal numbers etc. And things like HashZip (compression that makes information really small but cannot decode back the information it has encoded), CABAC video compression and other compression / data sorting methods, like Finite State Entropy (FSE), q-digest data compression, error correction code principles, the "Fenwick additive code" etc., could be used in the search for true number compactification.
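The "decode information inside the number itself" idea is exactly what the self-delimiting universal codes listed earlier ("Universal code", "Fibonacci coding", "Self-synchronizing code") provide: each value carries its own length prefix, so one bit string holds many numbers and no separate header is needed. A sketch using the Elias gamma code (kept as a bit string here, since converting it to an integer would drop the leading zeros):

```python
def elias_gamma_encode(values):
    # each positive integer v is coded as (len-1 zeros) + binary(v)
    bits = ""
    for v in values:           # every v must be >= 1
        b = bin(v)[2:]
        bits += "0" * (len(b) - 1) + b
    return bits

def elias_gamma_decode(bits):
    out, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":  # count the length prefix
            zeros += 1
            i += 1
        out.append(int(bits[i:i + zeros + 1], 2))
        i += zeros + 1
    return out

vals = [5, 9, 4, 5, 24]
stream = elias_gamma_encode(vals)
assert elias_gamma_decode(stream) == vals
```

Small values cost few bits and large values cost more, so unlike the fixed 16-bit chaining, the stream adapts to the numbers it carries.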
If something like that could bring "true compact numbers" to reality, so that those numbers can be decoded back to their principal components using an "arithmetic code" with the actual information in the "blockchain", an additional "header", an "information tag" or something similar: an ordinary 80-bit floating point number has a 2^2496 information space, so if several zillions of smaller numbers could be chained into a "true compact number", that large "true compact number" put inside an 80-bit floating point number and decoded back when needed (at some accuracy), then about all the information in the world would fit inside one 80-bit floating point number. Adding the layer-upon-layer technique would expand it even more (the first 2^800 values are used to describe another 10 x 80-bit FP numbers, each of these has 10 more FP numbers inside, and these a third layer of 10 FP numbers each, so now 1000 floating point numbers are inside just one, consuming 2^2400, or 2400 bits, of the 2496 bits of available exponent accuracy). No "true compact numbers" are necessarily needed, but the information compression ratio of an 80-bit FP number is "only" about 50 000:1 (or perhaps slightly more) with the two's complement "numbers on top of each other" technique (16 bits is 2^16, 16 + 16 bits is 2^32 etc.) and the layer-upon-layer principle of layering floating point numbers. So either super accurate number systems are needed for the layer-upon-layer principle, or "true compact numbers" must be used. If all of those (layer-upon-layer numbers, super accurate number systems, true compact numbers) are used, the data packing capacity is truly astronomical: one 80-bit number would be enough for all the information that the whole universe can contain in any form. Even without "true compact numbers", using super accurate number systems and layering them layer upon layer is enough for enormous data storage, but two's complement is an uneconomical way to store data, because 2^32 is just two 16-bit numbers, yet 2^16 is 65 000 values and 2^32 is 4 billion values, so there is a huge disparity.
If 65 000 x 65 000 (= about 4 billion values) could be packed inside a 2^32 number, that would be much better, with those values extracted back to their 16-bit (65 000-value) principal components (in my example; of course any bit width will do, 16-80 bits, or more, or less). And the accuracy of "true compact number" decoding is not critical; even an inaccurate and erratic method will do, because the benefits of data packing are so huge (considering that an 80-bit FP number has a 2^2496 number space with a 2^16382 range). But even without "true compact numbers" or super accurate number systems, using a standard 80-bit floating point number with a 16-bit unum section or similar (leading to a 96-bit floating point number with a unum section, unum / ubox computing), the data packing capacity is huge and lossless. But the processor gets buried in number crunching, so perhaps a floating point library (with terabytes? or gigabytes? of memory) is needed to help the processor compute, for example, a 2^2400 floating point number; large floating point computations could take hours. Optical or quantum computers would be much faster. Also the unum/ubox concept and "A new uncertainty-bearing floating point arithmetic" Chengpu Wang 2012 would perhaps help. On the Wikipedia "Partition (number theory)" page is a "Young diagram" that reminds me of the "Base infinity number system" by Eric James Parfitt. The MIPS R18000 processor had an effective floating point ALU that was more capable than the usual Intel versions of the 1990s and used a different principle than Intel and other similar processors, and the IBM Cell processor also had a very effective FP unit. Also the text "Hardware-based floating point design flow" (Michael Parker, Altera) mentions that FPGA-based FP computations can have the same speed as integer computing, but this requires an FPGA and non-IEEE-standard FP numbers. Similar FPGA-based floating point formats have also been proposed by other writers, some of them using reversible computing.
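On the worry that the processor gets buried in 2400-bit number crunching: ordinary software arbitrary-precision integer arithmetic already handles numbers of this size routinely, without special hardware. (A quick check in Python, whose built-in integers are arbitrary precision; the particular numbers are arbitrary examples.)

```python
import time

a = (1 << 2400) - 12345   # a 2400-bit number
b = (1 << 2399) + 67890   # another 2400-bit number

t0 = time.perf_counter()
for _ in range(10000):
    c = a * b             # full 2400-bit x 2400-bit multiply
elapsed = time.perf_counter() - t0

assert c.bit_length() == 4800   # an exact 4800-bit product
# 10000 such multiplies take a small fraction of a second on a PC
print(f"{elapsed:.3f} s")
```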
Also graphics cards (GPUs) now have over 6000 floating point units at most, use a 64-bit FP format, and are used in the GPGPU concept together with the main CPU. So consider using a GPU as a floating point accelerator to solve, for example, a 2400-bit accuracy floating point computation: if the GPU could handle 80-bit numbers, and GPUs have teraflops of performance, that should make the computing time short even for the most difficult (2400-bit accuracy) floating point computations, and if the computation is augmented by a floating point library, the computation time is even shorter. Because the number of FPUs in a GPU is nearing the number of FPGA "logic slices", similar tricks to those used in FPGA floating point computation could perhaps be applied to hardwired GPU computing also. Xilinx FPGAs have an inbuilt ability to use ternary logic (googling "Xilinx FPGA ternary"); perhaps this ternary logic can use ternary number systems also, like balanced ternary, tau, tribonacci (the book "Lossless Compression Handbook" by Sayood mentions in section 3.10.5 "a new order-3 Fibonacci code", which should be the best of the tribonacci codes) and the Zero Displacement Ternary Number System (ZDTNS), which should be the best of all integer number systems, or at least of the "simple" integer number systems. Other ternary number systems are in the quadiblock.com webpage's "Changing the base" section (at the bottom of the page; quadiblock.com also has a "The proposed decimal floating point standard" section) and on the webpage wikivisually.com/wiki/talk: data_compression, "Ancient ideas, as far as I know it" (23.2. 2013). And "Constrained triple-base number system". On the stackexchange.com webpage: "Data structures - what is a good binary encoding for phi-based balanced ternary algorithms" 2012. But whether Xilinx or any other FPGA can use them, I don't know. Also the complex number system already has an FPGA implementation, in "Design and implementation of complex floating point processor using FPGA" Pavuluri, Prasad, Rambabu 2013.
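Balanced ternary itself is easy to experiment with in software, independently of any FPGA support. A small encoder/decoder with digits -1, 0, +1, just to make the number system above concrete:

```python
def to_balanced_ternary(n):
    # digits -1, 0, +1, least significant first
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # digit 2 becomes -1 with a carry
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits or [0]

def from_balanced_ternary(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))

# every integer, negative or positive, round-trips exactly
for n in range(-40, 41):
    assert from_balanced_ternary(to_balanced_ternary(n)) == n
```

One nice property visible here: negative numbers need no separate sign bit, which is part of balanced ternary's appeal as a "simple" integer number system.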
Not to mention the forthcoming Mill processor, which is a sort of modernized Philips TriMedia. If the unum concept is used, a 96-bit floating point number in fact has a 17-bit unum section, because only 79 bits are used in the 80-bit floating point number. If the sign bit is not used either, the range halves from about 2^16000 to 2^8000, but now 18 bits are available for the unum section. Leaving the sign bit to unum use can be applied in other smaller floating point / unum number combinations also: the range is halved, but one extra unum bit is gained, so if for example the unum section has only 7 bits, 8 unum bits can now be used.