 # Using floating point numbers as information storage and data compression

If a number representation has much higher precision than the usual binary integer representation (where 8 bits are stored as 8 bits), that representation can be used as an "information storage container". Floating-point numbers are an example: their maximum precision is normally used only for scientific computing that requires high accuracy. But more precision means more mantissa bits, and the more bits a number can represent, the more information it can store in those bits. The information carried in the large two's-complement bit string of a floating-point mantissa can be any information represented as bits, not just the end result of some large scientific calculation. Instead of placing integer bit fields in a row (for example 8 bits + 8 bits + 8 bits), they can be "piled on top of each other" inside one very large number: 8 bits + 8 bits gives a 16-bit number with 2^16 values, 8 bits + 8 bits + 8 bits gives a 24-bit number with 2^24 values, and so on. I know this is a computationally overcomplicated way to represent numbers, but optical and quantum computing are coming and could make such computations fast. The more accuracy a floating-point number has (more mantissa bits), the more information storage capacity it has. The paper "Algorithms for quad-double precision floating-point arithmetic" (Y. Hida et al.) states on its last page that, using Shewchuk's algorithm as improved by S. Boldo (via Sterbenz's theorem and Malcolm's improvement), floating-point precision can be extended to about 2000 bits using 39 operations (iterations?). If the mantissa of, for example, the 80-bit Intel extended format (64 mantissa bits) is expanded 39 times (39 × 64 bits), the result is about 2500 bits of precision.
More bits mean more information can be stored inside a floating-point number; in this case an 80-bit number could hold about 2500 bits of information, with an exponent range of roughly 16384 bits (in fact 16383 or 16382). That is about 31 times the accuracy and 200 times the range of the usual 80-bit integer representation, which has 80-bit accuracy and 80-bit range. The information stored inside the floating-point number can be text, audio or video quantized as bits. Because 2000 or more bits is far too much to represent one pixel of a picture, one audio sample or one block of text, 8-bit values can be chained together into one large number (8 bits + 8 bits = 16 bits, 8 + 8 + 8 bits = 24 bits, and so on) until about 2000 bits are full (250 × 8 bits), and this bit chain is then placed inside the floating-point number. One floating-point number now carries 250 pixels, 250 audio samples or 250 text characters. The 80-bit Intel FP format has a range of 16384 bit values. We could instead use 32 iterations (32 × 64), giving 2048 bits of precision. These 2048 bits can then be divided down into separate one-bit values chained "on top of each other" (1 bit + 1 bit + 1 bit, etc.) until the 2048 bits are full. Each one-bit value then has a range of 8 bits (16384 / 2048 = 8), and 8 bits is 256 in decimal. Such a one-bit value with 8-bit range could be a text character (one of 256 available letters and numbers), one pixel of a picture (with 256 levels) or one audio sample (8-bit dynamic range). So instead of one pixel, sample or text character, this one 80-bit floating-point number now carries 2048 different one-bit values, each with 8-bit range, all together: 8 bits of precision × 2048 = 16384 bits of stored information.
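The "piling bits on top of each other" idea described above is essentially concatenating fixed-width values into one arbitrary-precision integer. A minimal sketch in Python (the function names and widths are illustrative, not from the original text):

```python
# Sketch of the "piling bits on top of each other" principle:
# concatenating several 8-bit values into one big integer and back.
def pack_values(values, width=8):
    """Pack a list of small integers (each < 2**width) into one big integer."""
    packed = 0
    for v in values:
        packed = (packed << width) | v   # shift left, append next value
    return packed

def unpack_values(packed, count, width=8):
    """Recover the original list of small integers from the big integer."""
    mask = (1 << width) - 1
    out = []
    for _ in range(count):
        out.append(packed & mask)        # peel off the lowest `width` bits
        packed >>= width
    return list(reversed(out))

example = [0x12, 0x34, 0x56]
big = pack_values(example)               # one number holding all three bytes
assert big == 0x123456
assert unpack_values(big, 3) == example
```

Note that this packing is pure re-representation, not compression: the packed integer still needs 8 bits per value, which is where the document's claimed savings depend on the (speculative) extended-precision floating-point storage.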
So instead of using 16384 bits to store the information, only one 80-bit floating-point number is needed to hold the same amount. (Strictly, the 80-bit format's exponent range is 16383 or 16382 bits, not 16384.) This storage scheme does not itself use data compression, so compression methods that squeeze the information even further can be applied on top of this floating-point storage format. Logarithmic number systems, if they have similar properties (accuracy), could be used the same way; there are many kinds of experimental logarithmic, semi-logarithmic, complex-base logarithmic, etc. number systems. Complex-base number systems, again if they have similar accuracy, could also serve as information storage the way floating-point numbers do. Integer numbers can also store information, but they must first be converted to a representation with more precision than the plain two-valued bit format gives them. For example, the Zero Displacement Ternary Number System (ZDTNS) claims to be the most efficient number system. Similar systems exist, such as the signed-digit / canonical signed digit (CSD) representation in "A new number system for faster multiplication" (Hashemian 1996) and the "ternary Tau" system (Stakhov 2002, Balanced Ternary Tau, BTTS). In the "Lossless Compression Handbook", in the section "Polynomial representation" (pages 56-78), Sayood writes that the order-3 tribonacci code is the most economical way to represent numbers; see also "Tournament coding of integer sequences" (Teuhola 2007). If integer processors handle at most 56 bits, we can build a super-long integer of 63 × 56 bits = 3528 bits; that long integer is then divided into groups of three, so the ternary form can be represented in binary by simple conversion (3 bits to 2 ternary trits, so 3528 bits become 2352 trits).
If the accuracy of these ternary numbers grows exponentially compared to the ordinary binary system as the number becomes larger, the accuracy of this non-binary number might finally be millions of times that of an ordinary binary number in a 63 × 56-bit = 3528-bit super-long ternary number. If some other integer system offers exponentially expanded accuracy compared to ordinary binary, it can likewise be used in a super-long (thousands of bits) integer whose accuracy is much higher than that of an ordinary binary integer of the same length. Binary information would be put "bits on top of each other" as in the floating-point case, then encoded into a long non-binary number such as ternary or complex base; decoding repeats the process the other way around. There are methods for encoding ternary or even quaternary digits (trits or "quarts") into ordinary two-valued bits so that one bit corresponds to a 3-valued trit or 4-valued quaternary digit, but how these coding methods work with the more complex ternary number systems I don't know. Using complex number bases as information storage, such as the Munafo PT system in MROB.com's "alternative number formats", or even the "infinity computer" (Yaroslav Sergeyev), Wolfgang Matthes's REAL computer algebra, J. A. W. Anderson's Perspex Machine, or Oswaldo Cadenas's ideas about complex/real numbers, the accuracy of a single number perhaps approaches infinity, and with it the possibility of storing information as a chain of bits inside that one number.
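The binary-to-ternary conversion mentioned above (3 bits into 2 trits works because 2 trits cover 9 values and 3 bits only 8) can be sketched as a simple base conversion; this is a generic illustration, not the specific ZDTNS or BTTS encoding:

```python
# Sketch: converting a binary integer to plain base-3 digits (trits) and back.
def to_ternary(n):
    """Return the base-3 digits of a non-negative integer, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % 3)   # take the lowest trit
        n //= 3
    return digits[::-1]

def from_ternary(digits):
    """Rebuild the integer from its base-3 digits."""
    n = 0
    for d in digits:
        n = n * 3 + d
    return n

# Any 3-bit group (values 0..7) fits in 2 trits, since 3**2 = 9 > 2**3 = 8.
assert all(len(to_ternary(v)) <= 2 for v in range(8))
assert from_ternary(to_ternary(3528)) == 3528   # round trip
```

This conversion by itself does not change the information content; whether a ternary representation gives any accuracy advantage is the open question the paragraph above speculates about.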
Multiple-base number systems (Dimitrov), "The shifted number system for fast linear algebra on integer matrices", the Adapted Modular Number System (AMNS), Paul Tarau's "bihereditary numbers" (on his homepage), "Weighted bit-set encodings for redundant digit sets: theory" (2001), "A redundant digit floating point system" (Fahmy 2003), "Efficient binary-to-CSD encoder using bypass signal", "Performing arithmetic operations on round-to-nearest representations" (Kornerup 2009), "New formats for computing with real numbers" (Hormigo 2015), "Canonical Booth encoding", "Constrained triple-base number system", "Fast modular exponentiation of large numbers", "Radix-2r arithmetic by multiplication by constant", "Asymmetric high-radix signed-digit number systems for carry-free addition", "Numbers as streams of digits" (C. Frougny 2012), "ZOT-binary: a new number system with an application on big-integer multiplication", "Dynamical directions in numeration" (Barat), and "Optimal left-to-right binary signed digit recoding" (Joye 2000) are options as well. The floating-point example above used the largest common (80-bit) format, but on a smaller scale the accuracy of, for example, the 10-bit OpenGL floating-point format (5-bit exponent, 5-bit mantissa) can also be expanded. If the 5-bit mantissa is expanded 39 times it becomes 195 bits; 192 bits is 24 × 8, so one 10-bit floating-point number could replace 24 ordinary 8-bit numbers. The problem is that the range does not expand with these accuracy/mantissa-expanding algorithms. So for very small floating-point numbers an algorithm that also expands the exponent, not just the mantissa, is perhaps needed, or else a format with a very large exponent but small mantissa.
Different doubling algorithms exist that build double or quadruple precision out of single precision, such as "Representing numeric data in 32 bits while preserving 64-bit precision" (Neal 2015), shoup.net's NTL quad_float, "Twofold fast summation" (Latkin 2014), "Extended precision floating-point numbers for GPU computation", and "Vectorization of multibyte floating point data formats" (2016). There are also new and improved floating-point formats that keep the standard IEEE FP format but extend its accuracy, such as the Unum / Ubox number concept by John Gustafson, and Altera/Intel's "Floating point adder design flow" by Michael Parker (2011), which increases FP computation efficiency significantly. And there are methods that use non-standard FP formats, such as the chipdesignmag.com article "Between fixed and floating point" by Dr. Gary Ray and its table 4 of alternative floating-point formats. So it is perhaps possible to improve still further on this roughly 2000-bit (64 bits × 32 iterations) precision-in-an-80-bit-number principle. Some processors such as Intel Xeon use 256- and 512-bit floating-point units, and new AMD processors use 512-bit FP units as well (all of these reportedly based on the 512-bit Elbrus FPU, from which Intel bought patents and a licence deal). Although those are not true 512- or 256-bit FP processors but chained 4 × 64-bit and 8 × 64-bit units, they can at least to some extent operate on 256- and 512-bit floating-point values if needed. Perhaps they can then push precision efficiency beyond the 32 or 39 operations/iterations of a 64- or 80-bit number, or at least process those 32-39 iterations faster. Although this kind of storage of several number values inside one floating-point number "on top of each other" saves bit rate, it is a computationally extremely heavy and complicated solution compared to simply using separate 8-bit values in signal processing. But processors become faster every year, and soon optical computing will increase computation speed even more.
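The extended-precision techniques cited above (Shewchuk/Hida expansions, quad_float, twofold summation) are all built from error-free transformations, the best known being Knuth's TwoSum, which splits one floating-point addition into a rounded sum and an exact error term. A minimal sketch:

```python
# Knuth's TwoSum error-free transformation: the building block of the
# expansion arithmetic (Shewchuk, Hida et al.) referenced in the text.
def two_sum(a, b):
    """Return (s, e) where s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    b_virtual = s - a                      # the part of s that came from b
    e = (a - (s - b_virtual)) + (b - b_virtual)   # rounding error of a + b
    return s, e

# The tiny addend is lost in the rounded sum but preserved in the error term:
s, e = two_sum(1.0, 1e-30)
assert s == 1.0 and e == 1e-30
```

Chaining such (s, e) pairs is what lets a sequence of ordinary doubles represent one value to hundreds of bits, which is the mechanism behind the "39 iterations" precision-extension claims discussed above.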
Even faster is quantum computing. So although it is a computationally heavy solution for data-rate reduction and "compression" (no actual data compression is used; the information is just presented in a different form, as one floating-point number instead of a separate line of integer bit values, so compression methods can still be applied on top and shrink it further), in the future or perhaps already now, packing several integer values inside one floating-point number could be an economical method of storing information. Comparing a range of 16383 bits to 80 bits, the "compression" ratio is about 1:205, meaning only about 0.5% of the bits are needed once the information is put inside a floating-point number instead of a long stream of integers; comparing 2496 bits of accuracy to 80 bits, the ratio is 31:1. Other texts: "Circuit which performs split precision, signed/unsigned, fixed and floating point, real and complex multiplication", and "A new uncertainty-bearing floating-point arithmetic" (2012). And if complex number bases are used, or ternary, quaternary or even quinary (5-valued) number systems, then those "numbers" would not be traditional numbers at all but symbols like those the APL programming language uses: one "number" could be a vector, a matrix or an equation (instead of a quaternary digit, a quaternion, a four-component object), etc. That would increase information density and computation speed, because instead of one bit with the integer value 0 or 1 there would be a whole mathematical toolbox in one symbol (a ternary, quaternary or quinary "number" that is some mathematical entity). There are "index calculus" number systems that use this idea, mainly logarithmic or other index-calculus systems in binary form. For ordinary integers there are methods like Fürer's algorithm (Fürer's theorem) and its later developments, which speed up multiplication of very large numbers.
For floating point there is "Herbie", an automatic FP accuracy analysis system. For data compression there is the patent "Method and device for encoding and decoding data in unique number values" by Ypo (Ipo) P.W.M.M. van den Boom; Van den Boom has invented the Octasys Comp compression for cloud storage, and Octasys has won an innovation award. There is also the ODelta / direct ODelta method patented by Ossi Mikael Kalevo. "Dealing with large datasets (by throwing away most of the data)" by A. Heavens is about "massive data compression". The Chinese TransOS cloud OS is a theoretical but available approach to a cloud OS, and the iSpaces cloud browser/computer is a multiple-browser, multi-user solution. For other data compression, Finite State Entropy or Q-Digest and their variants are perhaps suitable. For text compression there is "Text compression and superfast searching" (O. Khurana 2005). It uses 16 bits, but 8-bit values could be used with IDBE (Intelligent Dictionary Based Encoding): 200 of the 256 available values reserved for the codebook and only 55 for text characters and numbers, with the 56th as a shift mark that opens another 8-bit (256-value) character table if 55 characters are not enough for rare text marks. Kaufman-Klein "semi-lossless compression" could also be used for text. An example of using a floating-point number to store audio with the 80-bit Intel format: 24 kHz audio channels are piled "on top of each other", 24 kHz + 24 kHz = 48 kHz, 4 × 24 kHz = 96 kHz, and so on, until 1024 channels are stacked (1024 × 24 kHz = 24.576 MHz). This roughly 25 MHz is close to a TV-system bandwidth. The 24.576 MHz band can be divided back into 1024 separate channels simply by using a frequency splitter at 24 kHz intervals.
Now an 80-bit floating-point number with 1024 bits of extended accuracy (16 iterations/operations of the 64-bit-mantissa extension software) and a 16384-bit range is used; this one floating-point number can represent 1024 bits of accuracy and a 16383-bit range across the 24.576 MHz audio stream, divided into 24 kHz channels (1024 × 24 kHz channels together). Per 24 kHz channel (of which there are 1024), the floating-point number then has 1 bit of accuracy and 16 bits of range, so this 1 bit can (?) represent a 16-bit value. This is the same accuracy as a 16-bit integer audio stream (or is it? I don't know). Now this 80-bit floating-point number can represent 1024 different audio channels with 16-bit range / 1-bit accuracy per channel. So instead of representing audio as a 1024 × 16 bits = 16384-bit integer audio stream, there is just one 80-bit floating-point number (if 1-bit accuracy with 16-bit range equals 16-bit integer accuracy; if it does not, this does not offer accuracy similar to a 16-bit integer). The bit-rate savings are huge: 80 bits versus 16384 bits. And this is not even data compression; the numbers are just represented as a floating-point value instead of integer values, so different data compression methods can be applied on top to make the bit rate even smaller. The same frequencies-on-top-of-each-other / in-a-long-string-after-each-other principle could perhaps also be used for pixels of a picture and text characters, piling the bits of information on top of each other / into one long stream.
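The bookkeeping in the audio example above can be checked with simple arithmetic; this sketch only verifies the figures quoted (channel counts, claimed precision and range), not whether the underlying "1 bit of accuracy per channel" scheme is actually sound:

```python
# Arithmetic check of the channel-stacking figures in the audio example.
# All values (1024 channels, 24 kHz, 16 iterations, 16384-bit range)
# are taken directly from the text above.
channels = 1024
sample_rate_hz = 24_000
stacked_bandwidth_hz = channels * sample_rate_hz
assert stacked_bandwidth_hz == 24_576_000       # 24.576 MHz, as claimed

precision_bits = 16 * 64                        # 16 iterations of a 64-bit mantissa
range_bits = 16384                              # claimed exponent range
assert precision_bits == 1024
assert precision_bits // channels == 1          # 1 "accuracy bit" per channel
assert range_bits // channels == 16             # 16 "range bits" per channel

integer_stream_bits = channels * 16             # the baseline being compared
assert integer_stream_bits == 16_384            # versus one 80-bit FP number
```

The open question flagged in the text, whether 1 bit of accuracy over a 16-bit range is equivalent to a 16-bit integer sample, is exactly what this arithmetic cannot settle.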

The Trachtenberg speed system of mathematics is a set of simple decimal algorithms that let even very complicated calculations be done with simple digit-shifting tricks, so that little actual calculation is needed. A modern version of it is "Global number system: high speed number system for planet" (Jadhav 2015), which adds some improvements to the original, includes a "modified Quine-McCluskey method" and additions from the Indian Vedic number system; I have seen other Indian texts on the Vedic number system as well. There are proposals that binary-coded decimal should become the new integer standard in processors: number systems such as DEC64 (a proposed standard), speleotrove.com's "A decimal floating-point specification", InterSystems $DECIMAL and $DOUBLE, the patents "System and method for converting from decimal floating point into scaled binary coded decimal" and "Decomposition of decimal floating point data", and CADAC ("Clean arithmetic with decimal base and controlled precision"). On the quadibloc.com web page is John G. Savard's quasilogarithmic number system, which is also something like binary-coded decimal, and speleotrove.com has comprehensive lists of studies of different binary-coded decimal number systems. If binary-coded decimal is used, in integer or some other form, it is perhaps possible to build an arithmetic logic unit that uses a modified Trachtenberg speed system as its means of calculation. Because the calculations are then simple digit-shifting tricks, this could speed up at least integer computation enormously. And if using the Trachtenberg system in hardware is not sensible, it could at least be used in software, where program code directs the calculations suited to the Trachtenberg system to software-based computation and the rest to direct hardware computation, if the decimal number system is used.
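One of the best-known Trachtenberg rules illustrates the digit-shifting idea: multiplication by 11 is done by "adding each digit to its neighbour", with no multiplication at all. A minimal sketch (the function name is illustrative):

```python
# One Trachtenberg rule: multiply by 11 using only digit adds and shifts
# ("add the neighbour"), working right to left with a carry.
def times_11(n):
    """Multiply a non-negative integer by 11 via the add-the-neighbour rule."""
    digits = [int(d) for d in str(n)]
    result = []
    carry = 0
    neighbour = 0                       # the digit to the right, initially 0
    for d in reversed(digits):
        s = d + neighbour + carry       # digit plus its right-hand neighbour
        result.append(s % 10)
        carry = s // 10
        neighbour = d
    s = neighbour + carry               # final step: leading digit plus carry
    while s:
        result.append(s % 10)
        s //= 10
    return int("".join(str(d) for d in reversed(result)))

assert times_11(3425) == 3425 * 11      # 37675, no multiplication performed
```

The full Trachtenberg system has analogous neighbour rules for other small multipliers, which is what makes the hardware/software ALU speculation above at least mechanically plausible.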
Earlier, before floating-point computers had been manufactured, a method to increase integer math accuracy was called "floating vectors", perhaps a sort of vector math processing in integer form. It was used in old vacuum-tube computers and was invented by James H. Wilkinson, who also developed "iterative refinement" for computing; I don't know whether that old floating-vectors technique and iterative refinement are the same thing or not. In any case, floating-vector computation was an alternative way to increase the accuracy of integer computing when floating-point calculation was not possible, and it became outdated once floating-point computers came into use. In integer processing, this floating-vector computation could perhaps still increase accuracy where an accuracy increase is needed.

If error correction codes can restore information even when up to 90% of it is destroyed, error correction codes are themselves a sort of "data compression". Although not as good as dedicated data compression formats, error correction in noisy signal conditions such as radio-frequency transmission doubles as information compression. If the radio channel is noiseless, the bit rate can be reduced to as little as 10% of the original, provided the error correction code can repair the result back to the original. If the channel becomes noisy, some sort of header before the encoded bit section can indicate where the missing bits are in the encoded bit chain. Using such a header makes a variable bit rate possible: in a less noisy channel a minimum of 10% of the bits is needed, and in a noisy channel a variable share, for example 70% down to 10%, is needed to restore the original bit chain. The header tells how many bits are missing from the encoded bitstream and where they are. Error correction is mainly used to repair errors in the bit chain, but when it is used in an environment that is not noisy, it can simultaneously serve as data compression: bits are deliberately removed from the bit chain as long as the error correction can restore it back to normal, so error correction becomes a data compression method and no dedicated compression method is needed. Removing bits leaves "empty spaces" in the bit chain, rather than the wrong bits that a noisy channel produces.
Because there are no wrong bits in the encoded bit chain, only empty places whose exact positions the header gives, and because the encoding is flexible, varying between 70% and 10% of the original bit rate and optimizable via the header for maximum efficiency, error correction codes in some form could perhaps be used as data compression and not just as error correction. When the channel becomes noisy, the codes revert to their original use as an error correction method. So this would be a flexible error-correction/data-compression combination that adapts itself to channel noise and finds the optimal compromise between data correction and compression (the encoder deliberately leaves bits out of the bit chain to shorten the bit rate whenever possible). Another way to use error correction is to use a very large number, such as the Champernowne constant, as an information carrier. A large number like the Champernowne constant, Skewes's number, Graham's number or some other number the computer can calculate could be used so that no information such as text or pictures is actually transmitted. If the Champernowne constant can contain all the possible information of the universe within a single number, that information just has to be extracted from that number. If error correction codes can restore information even when 90% of it is destroyed, then all that is needed is to find bit streams within the Champernowne constant, or some other large number, that represent the information with 10% or better accuracy; the text, picture or sound can then be pulled out of the Champernowne constant.
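The key distinction in the scheme above is between errors (wrong bits at unknown positions) and erasures (missing bits at positions the header reports), and erasures are much cheaper to fix. A minimal sketch with a single XOR parity byte, which can repair exactly one known-position erasure (a toy stand-in for the stronger codes the text assumes):

```python
# Minimal erasure-coding sketch: one XOR parity byte repairs one erased
# byte (marked None), illustrating why known erasure positions (the
# "header" idea above) are easier to fix than unknown errors.
from functools import reduce

def add_parity(data):
    """Append one XOR parity byte to a list of byte values."""
    return data + [reduce(lambda a, b: a ^ b, data, 0)]

def recover(block):
    """Fill in at most one erased position (None) using the parity byte."""
    missing = [i for i, b in enumerate(block) if b is None]
    if not missing:
        return block[:-1]                # nothing erased; drop parity
    i = missing[0]
    known_xor = reduce(lambda a, b: a ^ b,
                       (b for b in block if b is not None), 0)
    repaired = block[:]
    repaired[i] = known_xor              # XOR of all others = the erased byte
    return repaired[:-1]

coded = add_parity([10, 20, 30])
coded[1] = None                          # simulate an erasure at a known place
assert recover(coded) == [10, 20, 30]
```

Real erasure codes (Reed-Solomon, fountain codes) generalize this to many erasures per block, but none can reach the "90% destroyed" recovery rate assumed in the text without a correspondingly large parity overhead.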
Text, picture or sound is first encoded with error correction codes, and then the Champernowne constant is searched for bit streams that represent this error-coded information with at least 10% accuracy. The encoded information can be divided into small streams of bits, and matching small bit streams, resembling the information with at least 10% accuracy, are searched for in the Champernowne constant; the error correction code then restores the information. No actual information needs to be transferred, only the coordinates of those small bit streams inside the Champernowne number. The coordinates can be the positions in the Champernowne constant (which is one long stream of bits, so what is needed is knowledge of the right places in this long bitstream that are similar to the encoded small bitstreams, so the information can be pulled out), or simply the time the processor needs to calculate the Champernowne constant to a certain length, with this timing information used to find the right places in the constant. So in order to transfer information, instead of sending the text, picture or sound, only error-correction-enhanced coordinates for sorting the information out of the Champernowne constant are transmitted. If the Champernowne constant can contain all the information in the world, it can be used as a "data store"; the information just has to be gotten out of it in some way. And if error correction codes can restore information even when up to 90% is lost, suitable small bit-stream blocks containing the information with at least 10% accuracy can be found fairly easily in the Champernowne constant. So the Champernowne constant itself can be used as data storage; only the right places of these small bit-stream blocks within the huge constant must be found, and once found, only their locations inside the constant are transmitted, not the information itself.
The Champernowne constant itself is the "information carrier" and its information storage capacity is endless; some other large number, such as Graham's number or Skewes's number, could serve as storage instead. If timing information about how long the processor takes to calculate some length of the Champernowne constant is stored and only these calculation-time messages are sent, the receiving processor sorts out the actual information from the constant it is calculating using the timecodes: when a certain calculation time has elapsed, the processor has reached the right place in the Champernowne constant that contains the required line of bits. Alternatively, if the Champernowne constant is divided into small blocks of bits and only the order-in-line numbers of specific bit blocks are transmitted, the processor finds the information by calculating the constant out to the length at which all the transmitted bit blocks, each with its own "address code" along the left-to-right line of bits, are found. An 80-bit floating-point number can perhaps have 2500-bit precision and a 16384-bit range. The precision is a binary number, and the possible different combinations of 2500 bits number 2^2500. An 80-bit floating-point number can thus hold 2^2500 different address codes with which to find information in the Champernowne constant, Skewes's number, Graham's number, etc. If 2500 bits have 2^2500 combinations and an 80-bit floating-point number can address them all, then an 80-bit floating-point number can be used to find 2500 bits of information, in any arrangement, in some large number. The information storage ratio is 31 to 1 (2500 to 80).
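The address-code idea above, transmit only a position inside a computable number rather than the data itself, can be illustrated with the binary Champernowne constant (the positive integers written in binary and concatenated). This is a toy sketch: it only searches for exact matches in a short prefix, whereas the text assumes approximate matches rescued by error correction:

```python
# Sketch: build a prefix of the binary Champernowne constant and locate a
# short bit pattern inside it, returning only its offset (the "address").
def champernowne_bits(limit):
    """Concatenate the binary expansions of 1..limit into one bit string."""
    return "".join(format(n, "b") for n in range(1, limit + 1))

def address_of(pattern, limit=2000):
    """Return the offset of `pattern` in the Champernowne prefix, or -1."""
    return champernowne_bits(limit).find(pattern)

prefix = champernowne_bits(2000)
offset = address_of("101101")            # transmit this offset, not the bits
assert offset >= 0
assert prefix[offset:offset + 6] == "101101"
```

The catch the text glosses over: the offset needed to locate an arbitrary n-bit pattern is, on average, about as many bits long as the pattern itself, so exact-match addressing gives no compression; the scheme's hoped-for gain rests entirely on the error-tolerant matching.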
This is an even more overcomplicated way to store information than simply using a floating-point number on the "piling bits on top of each other" principle. If error correction is used that has the capacity to restore information when only about 17% of it is right and 83% wrong (2500-bit precision in a roughly 16000-bit range), the full range of the 80-bit FP number can be exploited: 16000 bits can be used and the information storage ratio becomes 200 to 1. But because error correction codes use about 2.5 times more bits than plain data, the storage ratio is only about 80 to 1, not 200 to 1; and because error correction is a form of data ordering, it worsens the data compression ratio, since data compression is data ordering as well. This method is a super-complicated way to store information, but quantum computing, and perhaps optical computing if it is fast enough, could use it. The 2500-bit / 31-to-1 ratio, however, does not use error correction and can be squeezed even further with data compression techniques. Perhaps the best way is to use timecode information, that is, how much time a processor spends calculating some large number: when the timecode is accurate enough, a bit chain matching the encoded information can be found in the large number, and then only the timecode of the moment the processor reaches that bit chain's place needs to be transmitted, not the bit chain itself. An 80-bit floating-point number has a maximum precision of 2^2500 states. That is such an enormously large number that a timecode could find almost every possible bit chain in the world (any line of bits) in some large number.
Even an ordinary 80-bit number has 64 mantissa bits, about 16 million × million × million (in decimal) accuracy, so even without additional accuracy tricks the timecode space is very large and perhaps sufficient, if error-corrected matching is used that does not require a 100% exact match between the encoded bit chain and the bit chain found in the large number, and only the timecode the processor needs to reach that bit chain's place is transmitted, not the bit chain itself. And if some errors are tolerated even after error correction, errors that do not make the information unusable but just "lossy", calculations can be made faster still, since an exact match is not needed. A third way to use error correction is bit-plane coding: instead of 2D bit planes, 3D bit planes can be used, and large streams of bits can be installed in a large cube, for example 1024 × 1024 × 1024 bits. Bits inside this cube can now be read not just as left-to-right bitstreams but up and down, sideways along the X and Y axes, from corner to corner, and even along spline, Bézier and Koch curves inside the cube. A single bit can be part of several bitstreams at once; the error correction code corrects the errors of the bitstreams, so one bit can be recycled and used in several streams of information simultaneously, with vectors of bits running forward, sideways, up and down, backwards, etc., inside the cubic bit plane. This is like finding pieces of information in the Champernowne constant, except that instead of one normal left-to-right bitstream, the bitstreams can run in any direction inside the 3D cubic bit plane and can be read backward to forward, sideways, up or down, or corner to corner inside the cube.
If the information is divided into suitably short bitstreams and these are coded with efficient error correction, the small bitstreams can be found not only within some length of the Champernowne constant but also inside a cubic bit plane. If the cubic bit plane holds enough bits, almost any combination of bits can be found inside it, along the X axis, along the Y axis, on a corner-to-corner line or even on a curved vector line (Bézier or spline); a computer program must simply find these small bitstreams inside the cubic bit plane and give the coordinates of where they are. Only the coordinates are transmitted; the receiving device, given the right coordinates, looks up the right places (bitstreams) inside the 3D cubic bit plane held in its memory. If error correction can repair information when up to 90% of it is wrong, then only a minimum of 10% of a bitstream's bits need to be correct. Although 90% correction capacity is an exceptional case and the practical correction capacity is less than that, error correction still makes it possible to "recycle" bits: a single bit can be part of several bitstreams, one running straight ahead, a second criss-crossing it, a third coming from above, with that one bit at the crossing point belonging to several information streams at once. So by placing information in small error-correction-coded bitstreams inside a large cubic 3D bit plane, the cube can hold thousands of bitstreams that far outnumber its own bit capacity, because every bit inside it can be "recycled" and used in several information-carrying bitstreams at once. If curved vectors are used for the bitstreams, and not just straight lines, the information storage capacity of the cubic bit plane approaches infinity.
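The "cubic bit plane" idea above can be sketched with a tiny bit cube in which the same stored bits are read out along different directions, so a single bit participates in several streams at once. Cube size, contents and function names here are illustrative assumptions:

```python
# Sketch of a 3D "cubic bit plane": bits are stored once but can be read
# along any axis or diagonal, so one bit belongs to several bitstreams.
import random

N = 8
random.seed(42)  # fixed seed: an arbitrary but reproducible bit cube
cube = [[[random.randint(0, 1) for _ in range(N)]
         for _ in range(N)] for _ in range(N)]

def read_stream(start, direction, length):
    """Read `length` bits from `start`, stepping by `direction` each time."""
    x, y, z = start
    dx, dy, dz = direction
    bits = []
    for _ in range(length):
        bits.append(cube[x][y][z])
        x, y, z = x + dx, y + dy, z + dz
    return bits

along_x = read_stream((0, 0, 0), (1, 0, 0), N)   # a straight row
diagonal = read_stream((0, 0, 0), (1, 1, 1), N)  # corner-to-corner
assert along_x[0] == diagonal[0]                 # the corner bit is shared
```

An "address code" in this scheme would be the tuple (start, direction, length), which is what the text proposes transmitting instead of the bits; the information-theoretic limits on how many distinct streams a fixed cube can really encode are not addressed in the original text.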
No actual information needs to be transferred, only the "address code" of each bitstream's position inside the cubic bit plane. The cubic bit plane itself can be some standardized arrangement of bits, for example a 1024 × 1024 × 1024 formation, present in every sending and receiving device; only the address codes of bitstreams inside this cube are transmitted, not the contents of the bitstreams themselves. If a long bitstream such as a video stream must be sent, a computer program cuts the information into smaller bitstreams of suitable length, adds error correction, then finds matching bitstream vectors inside the cubic bit plane (forward, backward, sideways, up or down, curved vectors, etc.), and sends only the address codes; the receiving device finds those bit patterns/bitstreams in its own cubic bit plane and reproduces the information. Error correction means the match need not be 100%: only the accuracy the error correction needs in order to correct is required of the bit patterns. The long information stream can be cut into several types of smaller streams, and these can lie anywhere and in any form inside the cubic bit plane: straight forward or backward, up to down, on curved bitstream lines if Bézier or spline vectors are used, even criss-crossing one another, because "recycling" of bits makes it possible for one bit to belong to several of the smaller bitstreams that together make up the large stream. Bitstreams can also be not only straight or curved but L-shaped, making a 90-degree turn midway, or follow other geometrical shapes.
Bitstreams can also be read backward to forward, and a bitstream can be recycled in this way: one bitstream starts, a second starts about 4-5 bits later and runs "inside" the first, and when the information in the two streams becomes so different that the error correction code can no longer bridge the gap, the second bitstream turns away from the first in an L-form. The two bitstreams thus share some of the same bits, and that stretch is in fact two bitstreams at once, until the second takes its L-turn. An almost infinite amount of information can then be encoded inside one bounded 3D bitplane, with different bitstreams constantly criss-crossing along different axes and positions, and bits inside the cubic bitplane belonging to several different crossing bitstreams at once. Nor is the cube the only form available: an icosahedron, dodecahedron, or other many-cornered geometric form (polyhedron) could be used if possible, or even a larger-than-three-dimensional "hypercube" in four-dimensional mathematical form, offering still more ways to assemble bitstreams inside a many-dimensional bitplane. The geometrical forms need not be ordinary polyhedra either: complex polytopes, Kepler-Poinsot solids, the "120-cell" type, etc. These forms offer a large number of vectors passing through the bitplane from different directions and along different axes, if the bitplane is not a 2D plane or a 3D cube but some more complicated geometrical form. If only the "addresses" of bitstreams are transmitted instead of the bitstreams themselves, savings in transmission capacity are achieved. The address is the bitstream's position in the bitplane (X- and Y-axis, corner to corner, or back to front) plus a bitstream shape code (straight, curved, L-form, etc.).
Error correction can repair the mismatch when a bitstream found inside the 3D bitplane does not match the transmitted information perfectly: as long as a match is found at some percentage, the error correction code can restore the original information. The 3D bitplanes must be standardised across devices, or, if some special case requires it, the 3D bitplane itself may have to be transmitted before the information transmission (sending of address codes) starts. A limited number of bits in the bitplane can then carry an almost unlimited amount of information, because the bitplane has so many usable axes that a single bit can have over 100 different directions of bitstream passing "through" it, making that one bit part of over 100 different bitstreams simultaneously. That allows a huge number of possible bitstream combinations inside the bitplane, so that almost any possible short bitstream (any particular run of 0s and 1s) can be found somewhere in the 3D bitmap. Because the bitstreams are error-coded, errors are tolerated and no exact bit match is needed. For example, when some information (a line of bits) is encoded against the standardised 3D bitplane, it is first cut into short streams of suitable length and error-correction coded; then places must be found in the bitplane whose bits match those streams to the accuracy the error correction needs to work properly; then only the "address codes", the places of these bitstreams inside the 3D bitplane, are transmitted. The receiving device searches its own standard 3D bitplane for the positions given by the address codes it has received, and decodes the information using the error correction. Different curved vectors, such as Koch, spline, and Bézier curves, can be used for bitstreams instead of straight lines or L-forms, as the lazynezumi.com "L-system tutorial" shows.
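The search-and-address step above can be sketched in a few lines. This is a minimal illustration under strong simplifying assumptions, not the full scheme: only straight axis-aligned lines (no diagonals, curves, or error correction), a small 16 x 16 x 16 cube of random bits, and invented helper names (`find_bitstream`, `read_line`).

```python
import random

def read_line(plane, start, step, n):
    """Read n bits starting at `start`, advancing by `step` for each bit."""
    x, y, z = start
    dx, dy, dz = step
    return [plane[x + k * dx][y + k * dy][z + k * dz] for k in range(n)]

def find_bitstream(plane, pattern, max_errors=0):
    """Search a cubic bit array for `pattern` along the three axis directions.
    Returns (step, start): the "address code" of the text, or None.
    Straight lines only in this sketch; curved paths are not attempted."""
    size = len(plane)
    n = len(pattern)
    for step in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        dx, dy, dz = step
        for x in range(size - (n - 1) * dx):
            for y in range(size - (n - 1) * dy):
                for z in range(size - (n - 1) * dz):
                    bits = read_line(plane, (x, y, z), step, n)
                    errors = sum(a != b for a, b in zip(bits, pattern))
                    if errors <= max_errors:
                        return step, (x, y, z)
    return None

random.seed(0)
SIZE = 16
plane = [[[random.randint(0, 1) for _ in range(SIZE)]
          for _ in range(SIZE)] for _ in range(SIZE)]

pattern = [1, 0, 1, 1, 0, 0]
address = find_bitstream(plane, pattern)
assert address is not None            # short patterns are almost always present
step, start = address
# "Decoding": the receiver reads the same bits back at the address.
assert read_line(plane, start, step, len(pattern)) == pattern
```

Note how the address (a direction plus three coordinates) is much shorter than a long pattern would be; the scheme's viability rests entirely on how often the wanted patterns actually occur in the shared cube.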

Texas Instruments' Omniview was an old technology for representing 3D video that required no special glasses and could be viewed from a wide angle. It has been forgotten, as has a related system, "Felix 3D Display: an interactive tool for volumetric imaging" (Langhans). Both belong to the "volumetric display" class of 3D displays; they were "swept volume" designs, though Langhans also describes a "static volumetric display". Another forgotten idea is "Sheet RAM" or SHRAM memory, by Richard Lienau, a class of magnetic memory that promised to replace both the hard disk and DRAM with a single fast SHRAM device. Neither innovation has been developed further or gained widespread publicity, for one reason or another; I do not know whether something was wrong with them or why they never attracted attention.

The previous examples use only floating point numbers, and although their accuracy can be improved even further using the unum concept (John Gustafson), the "floating point adder design flow" (Altera), Gary Ray's model ("Between fixed and floating point" by Dr. Gary Ray), etc., some more exotic number systems can offer even more accuracy and thus even more number (information) storage capacity. There are logarithmic and complex-base numbers, but floating point has become the standard; ALUs and processors that use logarithmic number systems, residue/redundant number systems ("one-hot residue number system"), or real/complex-base number systems have remained at the research stage, and no commercial products using them have been published. Although storing information in some form other than a simple row of integer bits makes a processor slow, because it must perform sophisticated calculations, computers become faster each year, and when quantum computing and optical computing become reality, perhaps information will be stored not as integer bits but in some number system that achieves high accuracy with few bits (and high accuracy means more bits that can store information, any information, not just the result of some high-accuracy scientific computation). So instead of being reserved for rarely used scientific computing, a number's accuracy is exploited to the maximum to store all kinds of information. If a floating point number can hold 2500-bit precision and 16,000-bit range in 80 bits, that ability can be exploited for information storage, and number systems other than plain integers can be used as "information storage containers". And because this is not data compression, the information is merely stored in a different number format than integers, different data compression methods can be applied after the integer-to-other-number-system conversion, making data rates smaller still.
There are more exotic number systems than those mentioned before: "lazy number systems", "segmented number systems", "combinatorial number systems", "mixed-radix number systems", index calculus, hyperreal/surreal numbers, etc. Petr Kurka's book "Dynamics of Number Systems: Computation with Arbitrary Precision" (2016) is perhaps the state of the art on this subject. The book "Fun with Algorithms: 5th International Conference" (2010) presents on page 158 a "new number system" (a skew binary that uses five symbols and skew weights). More in Elmasry, Jensen, Katajainen: "Two skew binary numeral systems and one application" (the magical skew number system and canonical skew). The Finite State Entropy coder uses an asymmetric numeral system (and the finite state transducer appears in Petr Kurka's book). Then there is Higher Order Logic (HOL): "Ordinals in HOL: transfinite arithmetic up to (and beyond)..." (Norrish), "Towards efficient higher-order logic learning in a first-order datalog framework" (Pahlavi), "Second order logic and set theory" (Väänänen). More number systems: "Numeral representations as purely data structures" (Ivanovic 2002, a 4-digit + invariant number system), "Hierarchical residue number systems" (HRNS, Tomczak). If the binary base is abandoned and ternary, quaternary, or pentanary base numbers are used instead, those numbers themselves come close to "combinatorial logic" computer systems: one number or symbol can then be something like an APL or J language symbol, a symbol that is in itself a matrix, an index, or some other sophisticated mathematical entity, not just 0 or 1 as in binary. If a ternary, quaternary, or pentanary base is used, the number symbols can be more varied, and encoding methods that reduce a 3-value ternary or 4-value quaternary digit to ordinary 2-value bits have been published.
"Arithmetic operation in multi-valued logic" (Patel 2010), "Application of Galois field in VLSI using multi-valued logic" (Sakharev 2013), "A cost effective approach for making BLUTs to QLUTs in FPGAs", "Arithmetic algorithms of ternary number system" (S. Das 2012), "A novel approach to ternary multiplication" (B.V.S. Vidya 2012), "Addition and multiplication of beta-expansions in generalized Tribonacci base" (Ambroz, Masakova, Pelantova 2007), "Balances and abelian complexity of certain ternary words" (Turek), "Self-determining binary representation of ternary list", "Formulation and developing of novel quaternary algebra" (2011), "New quaternary number design of some quaternary combinatorial blocks using a new logical system", "Design of high performance quaternary adders", "A survey of quaternary codes and their binary image" (Derya Özkaya 2009), "Bitwise gate grouping algorithm for mixed radix conversion", "Ternary and quaternary logic to binary conversion CMOS...", "BIN@ERN: binary-ternary compression data coding", "Binary to binary encoded ternary (BET)", "Quaternary encoding of dynamic XML data" (2006), "Real-life applications of soft computing... unary encoding, quaternary encoding", "Database and systems applications 2012: the SCOOTER labeling scheme" (pages 27-29). There is a stackoverflow.com discussion, "Algorithms based on number base systems? (closed)", from 18.3.2011, including a "meta-base enumeration n-tuple framework", and at stackexchange.com, "Data structures - what is a good binary encoding for phi-based balanced ternary algorithms?" (20.7.2012). At neuraloutlet.com there are "metallic number systems", at MROB.com alternative number system bases, at researchgate.net "Alternative number system bases?" (Rob Craigen, a base 2+i number system instead of 1+i), and "Greedy and lazy representations of numbers in the negative base system" (Tom Hejda 2013).
In "Research proposal: design of a low power IEEE 754 compliant arithmetic operator IP for mobile applications", different floating point systems are listed. Other texts: "Arithmetic units for a high performance digital signal processor" (Lai 2004), "Differential predictive floating-point analog-to-digital converter" (Croza, Dzeroz 2003), "Image compression by economical quaternary reaching method", "Guided quaternary method for wavelet-based image compression", "Design of low power multiplier with efficient full adder coding DPTAAL", "Pseudoternary coding" (Matti Pietikäinen), "Abelian complexity in minimal subshifts" (2009). Different data compression methods include byte pair encoding, morphing match chain, FSE (finite state entropy), VSEncoding (vector of splits encoding), Varchar, CONCISE, Q-digest, etc. Others: "Compressing 16-bit data using predictor values", "The algorithm and circuit design of a 400 MHz 16-bit hybrid multiplier", the Sabrewing processor, "Towards an implementation of a computer algebra system in a functional language" (Lobachev), the SINGULAR computer algebra system, "A field theory motivated approach to symbolic computer algebra" (Peeters), "Design of ternary logic 8-bit array multiplier based on QDFET", "GRFPU - high performance IEEE-754 floating point unit", "A novel implementation of radix-4 floating-point division/square-root using comparison multiples", "Charge-balanced floating-point analog-to-digital converter using cyclic conversion", "Simplified floating-point division and square root" (2011). And, perhaps more importantly, Paul Tarau and his ideas of "hereditary number systems", "compressed number representations", "giant numbers", and "tree-based numbering systems".
Tarau's texts include, for example, "Arithmetic algorithms and applications of hereditarily binary numbers", "Bijective Goedel numberings", "Arithmetic operations with tree-based natural and rational numbers", "A generic numbering system based on Catalan families of combinatorial objects", "A declarative specification of giant number arithmetic", and "The arithmetic of recursively run-length compressed natural numbers". The more accuracy a number has, the more bits that accuracy represents, and the more bits, the more information can be stored in that number. For example, 8- or 16-bit information blocks (one pixel of a picture, one sample of sound, or one text character) can be chained together and the long bit chain placed in a number of some super-accurate numbering system. If some unorthodox number system achieves, say, 1000 (binary) bits of accuracy using only 10 or 20 (binary) bits of representation, the information can be chained (8 bits + 8 bits + 8 bits, etc., until 125 x 8 bits = 1000 bits is full). Now 125 pieces of 8-bit information can be put inside a number that has 1000 binary bits of accuracy but is itself written in only 10 or 20 bits, because it uses a super-accurate numbering system. The "compression ratio" is enormous, and because no data compression is used, the numbers are merely represented in a different form than binary integers, data compression can still be applied afterwards to push the information density further. In my previous examples I used a floating point number as the "information storage container" and its mantissa bits as the storage, but other numbering systems, if they are more effective (more accurate and easier to compute with), can be used as well, including the various exotic number systems.
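The bit-chaining step ("8 bits + 8 bits + 8 bits...") is easy to show with Python's arbitrary-precision integers standing in for the high-accuracy number; `pack` and `unpack` are illustrative names, and nothing here models the actual floating point or exotic-number-system container.

```python
def pack(values, width=8):
    """Chain fixed-width values into one large integer: each value
    occupies its own `width`-bit field, "piled on top of each other"."""
    big = 0
    for i, v in enumerate(values):
        if not 0 <= v < (1 << width):
            raise ValueError("value does not fit in the field")
        big |= v << (i * width)
    return big

def unpack(big, count, width=8):
    """Recover `count` fixed-width values from the packed integer."""
    mask = (1 << width) - 1
    return [(big >> (i * width)) & mask for i in range(count)]

data = list(b"Hello, floating point container!")   # 32 text characters
packed = pack(data)                                # one 256-bit integer
assert packed.bit_length() <= 8 * len(data)
assert bytes(unpack(packed, len(data))) == b"Hello, floating point container!"
```

The packing itself saves nothing, of course: 32 bytes become one 256-bit integer. The essay's claim is that the *container* representing this integer could be smaller; that part is not modelled here.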

There are microcontroller architectures with 4-16 bit signal paths, so a 24-bit or larger unum format is unsuitable for simple microcontrollers. But if there is ever a 16-bit floating point unit in a microcontroller, unum has a chance. OpenGL has 10-, 11-, and 13-bit FP formats, derived from the 16-bit IEEE standard I presume, so a 16-bit unum floating point standard for microcontrollers is possible. If the 10-bit OpenGL FP is used, 6 extra bits (16 bits total) remain for the unum values; with the 11-bit format, 5 bits are left for unum. The microfloats section of the MROB.com netpage also lists an 8-bit floating point format by George Spelvi that has large range but small accuracy. If an 8-bit floating point format is extended with a 4- or 8-bit unum section, the result is a 12-16 bit unum/floating point format, albeit non-standard. 8-bit microcontrollers also exist. The same MROB.com microfloat section mentions a 5-bit IBM keyboard scanning floating point format used in IBM computer keyboards, using only 5 bits for the whole floating point number. Combining this 5-bit FP with a 3-bit unum section gives an 8-bit unum/FP number, if that has any realistic use in a microcontroller (it requires that the 8-bit microcontroller have a floating point unit). The smallest microcontrollers are only 4-bit, and the smallest useful logarithmic value is 4 bits, so if small 4-bit microcontrollers use a logarithmic scale instead of 4-bit integers, their precision improves. Dither can also be used to improve data accuracy: audio systems use dither in audio applications, but using it to improve the precision of other data is also useful in small 4-16 bit microcontrollers. Sony Super Bit Mapping added 6 bits of accuracy to 16-bit integer precision in an audio format (about 22-bit effective accuracy). There are other methods, like the "ExtraBit Mastering audio processor" ("Sonically improved noise shaping..."), etc. 4-8 bit microcontrollers benefit most from added dither.
Up to 10 extra bits of accuracy can be achieved, but dither noise becomes audible in the signal chain at roughly 6 bits and above. Because dither helps most at low bit widths, it could be used in future manycore microcontrollers such as the XMOS xCore or Imsys' planned 1000-core CISC microcontroller, whose architectures are 4-16 or even 32 bits (integer) wide. There is a text "Generating dithering noise for maximum likelihood estimation from quantized data" (Gustafsson, Karlsson 2013). The simplicity of low bit widths combined with the processing power of manycore microcontrollers might even challenge CPU manufacturers, if a very fast "microcontroller" with 1000 or so cores reaches CPU-level data processing capacity. And for high performance floating point computation: the MIPS R10000 and R18000 had FP units different from the typical Intel/AMD designs. The MIPS FP unit was split into separate functional sections but otherwise used standard floating point number formats. The R10000, R18000, and perhaps the proposed R20000 had FP units that were promised to be more effective than Intel's or AMD's of that time (late 1990s to early 2000s).
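A toy model (not Sony's actual Super Bit Mapping or any product's algorithm) of why dither buys sub-LSB precision: without dither, a coarse quantizer loses a fractional value entirely, while averaging many dithered samples recovers it. The function names and the uniform dither distribution are my own illustrative choices.

```python
import random

def quantize(x, step):
    """Round x to the nearest multiple of `step` (a coarse, low-bit quantizer)."""
    return step * round(x / step)

def quantize_dithered(x, step, rng):
    """Add uniform dither of +-step/2 before quantizing, as in audio practice."""
    return quantize(x + rng.uniform(-step / 2, step / 2), step)

rng = random.Random(1)
step = 1.0                    # one LSB of the coarse converter
x = 0.3                       # a value lying between quantizer levels

plain = quantize(x, step)     # always rounds to 0.0: the 0.3 is lost
avg = sum(quantize_dithered(x, step, rng) for _ in range(20000)) / 20000

assert plain == 0.0
assert abs(avg - x) < 0.02    # the average of dithered samples recovers ~0.3
```

The dithered quantizer outputs 1.0 with probability 0.3 and 0.0 otherwise, so the mean converges to the true value; the price is added noise in each individual sample, which is why the text notes dither becoming audible past a few bits of gain.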

I don't know which is the best way to use the accuracy of an extended-precision floating point number. If an 80-bit FP number has 2496-bit accuracy (39 x 64 mantissa bits), what is the best way to divide it in the "numbers on top of each other" concept? If 10 x 80 bits = 800 bits is used per layer, then three layers (800 + 800 + 800 = 2400 bits) give 10 x 10 x 10 = 1000: one thousand other 80-bit FP numbers inside just one 80-bit FP number, a "compression ratio" of 1000 to 1. But there are other ways: 7 x 320 bits + 1 x 240 bits = 2480 bits (320 bits is 4 x 80, repeated seven times, and 240 bits is 3 x 80, so 4^7 x 3 = about 49,000), and now one 80-bit FP number can include about 49,000 other 80-bit FP numbers. If simply 160 bits (2 x 80 bits) is used per layer, the result is 15 x 160 = 2400 bits; 2^15 is 32,768, so one 80-bit floating point number can include about 33,000 other 80-bit floating point numbers. This last layer, the 49,000 or 33,000 floating point numbers, is where the actual information is; the other layers are used only for representing other floating point numbers, and if needed, the inaccurate "aft section" of a floating point number that is not used to represent other FP numbers can serve as additional storage. I don't know how much time a computer would spend calculating this (the 39x extended mantissa precision is computed in software, not hardware, because no hardware system exists to calculate this "39x extended precision"), nor what arrangement of floating point numbers makes the computation fastest in this "one floating point number represents several other floating point numbers" concept. No actual data compression is used; the floating point numbers are merely "piled on top of each other / inside each other". Still, a "compression ratio" of 1000:1 to about 50,000:1 is achieved.
The layer counts are three (800 bits as 10 x 80), eight (7 x 320 bits as 4 x 80 each, plus 1 x 240 bits as 3 x 80), and fifteen (15 x 160 bits as 2 x 80 each). Whether it is computationally more efficient to have a large bit width and few layers or a small bit width and many layers, I do not know. The unum concept is not even included here; if it increases accuracy, it further increases the efficiency of this kind of "mathematical perpetuum mobile". But accuracy worsens with each layer, so it is not a true perpetuum mobile; it must stop somewhere. Nevertheless a "compression ratio" of tens of thousands is reached before accuracy worsens to unacceptable levels. If some number system other than floating point/unum is used, or different number systems are mixed in different layers, that might offer more accuracy, but it makes the system even more complicated. In any case this is a way to "compress" information generally, not just floating point numbers: the last layer, where the information is, can hold any binary information, if floating point/unum numbers are used as the "compressing" container. Texts in the field and outside it: "Stochastic optimization of floating-point programs with tunable precision", "On the maximum relative error when computing x^n in floating-point arithmetic", "Design and implementation of complex floating point processor using FPGA", "Encoding permutations as integers via the Lehmer code (JavaScript)", "Profiling floating point value ranges for reconfigurable implementation" (Brown 2012).
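The layer counts above follow from simple exponent arithmetic. A quick check, under the text's own assumptions (a 2496-bit budget from 39 x 64 mantissa bits, 80-bit numbers), pins down the approximate figures quoted:

```python
BUDGET = 39 * 64          # 2496 bits of extended-precision mantissa
WIDTH = 80                # bits per floating point number

# Three 800-bit layers, 10 numbers per layer: 10^3 containers.
assert 3 * 800 <= BUDGET and 800 // WIDTH == 10
assert 10 ** 3 == 1_000

# Seven 320-bit layers (4 numbers each) plus one 240-bit layer (3 numbers).
assert 7 * 320 + 240 <= BUDGET
assert 4 ** 7 * 3 == 49_152          # the "about 49,000" of the text

# Fifteen 160-bit layers, 2 numbers each.
assert 15 * 160 <= BUDGET
assert 2 ** 15 == 32_768             # the "about 33,000" of the text
```

So the three candidate layouts multiply out to exactly 1,000, 49,152, and 32,768 leaf numbers; whether any of them is computable in practice is the open question the text raises.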
And more: "RAIVE: runtime assessment of floating-point instability by vectorization", "Variation-adaptive stochastic computing organization VASCO", "Accuracy configurable floating point multiplier for error tolerant applications", "Auto-tuning of floating-point precision with discrete stochastic arithmetic", "Deep learning with limited numerical precision", "Algorithms with numbers" (Vazirani), the Boost 1.59.0 "Theory behind floating point comparisons", and "Robustness of numerical representations".

I simply use the same principle as a frequency splitter in audio: one 24 kHz mono channel can become two 12 kHz channels if a frequency splitter is used. A two's complement number with 800 bit positions (2^800 in the notation I use here) is simply ten times 2^80, so ten 80-bit numbers can be placed into 800 two's complement bit positions. Positions 1-80 are reserved for the first 80-bit number, positions 81-160 for the second, and so on, so a two's complement number of 800 bits can include ten 80-bit floating point numbers. If one 80-bit floating point number has 39x extended (mantissa) precision, that means 2496 bits of two's complement if the mantissa has 64 bits. Those precision bits can be used to store other 80-bit numbers with extended precision, and so on, creating a mathematical perpetuum mobile until the accuracy potential has worn off. A "compression" ratio of thousands is possible without even using data compression; the information is just presented as floating point numbers "piled on top of each other". The last layer holds the actual information as bits in two's complement form. If the 2496 bits are divided into 800-bit sections, that is 3 x 800 = 2400 bits; each 800-bit section holds ten 80-bit numbers, so this scheme consumes 2400 bits of one 80-bit FP number's accuracy and contains 10 x 10 x 10 separate 80-bit FP numbers, one thousand (1000) other 80-bit FP numbers inside just one 80-bit FP number. Inside those thousand 80-bit FP numbers is the actual information. The compression ratio is one to one thousand, without data compression. Of course, using 2^800 to store 10 x 2^80 of information is computationally heavy, and computing with 2^2400 is heavier still. Computing in software is roughly 1000 times slower than in hardware, so I do not know how long a computer would take to decode this.
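The field layout described here (bit positions 1-80 for the first number, 81-160 for the second, and so on) can be sketched with a Python integer standing in for the 800-bit two's complement value; `embed` and `extract` are illustrative names, and the extended-precision mantissa itself is not modelled.

```python
def embed(numbers, width=80):
    """Place each 80-bit value in its own field of an 800-bit integer:
    bits 0-79 hold the first number, bits 80-159 the second, and so on."""
    big = 0
    for i, n in enumerate(numbers):
        if n >= (1 << width):
            raise ValueError("number wider than its field")
        big |= n << (i * width)
    return big

def extract(big, i, width=80):
    """Read back the i-th 80-bit field."""
    return (big >> (i * width)) & ((1 << width) - 1)

# Ten arbitrary 80-bit values in one 800-bit container.
tens = [(i * 0xDEADBEEF) % (1 << 80) for i in range(10)]
big = embed(tens)
assert big.bit_length() <= 800
assert all(extract(big, i) == tens[i] for i in range(10))
```

Each 80-bit field could itself be another container in the recursive scheme the text describes; this sketch shows only one level of the nesting.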
But in any case a compression rate of at least 50,000x (and perhaps more) is achieved, floating point processors are very fast nowadays, and they will become faster as quantum computing and optical computing arrive. Adding the unum principle probably increases the accuracy and compression potential further still. But is it possible to build a hardware floating point unit that uses 39x extended precision? Modern FP units have 512-bit width, although that is simply 8 x 64 bits in parallel, not a true 512-bit datapath. If the OpenGL FP number standard is related to the 16-bit IEEE floating point standard, the smallest OpenGL version has a 5-bit exponent and 5-bit mantissa, and perhaps a 1-bit sign. If we now make an 8-bit floating point number out of this, it would have a 5-bit exponent, 1 sign bit, and a 2-bit mantissa. If that 2-bit mantissa accuracy is increased to 39x extended precision using a hardware ALU, the actual accuracy is 78 bits (2 x 39) and the width is 84 bits (78 bits for the mantissa, 5 for the exponent, and 1 sign bit). Gal's accuracy tables, or the French "Gal's accuracy tables method revisited" (Stehlé 2004), could also be used in a hardware FP unit: if that method brings 10 extra bits of accuracy to the 2-bit mantissa, the result is a 12-bit mantissa, which with a 5-bit exponent and 1 sign bit makes an 18-bit wide FP unit. Actually this "Gal's tables revisited" method brings more than 10 bits of extra accuracy. If an 8-bit unum section is added to the 8-bit FP number with the 39x mantissa accuracy increase, the result is a 16-bit number holding a 78-bit-accuracy FP value (x2 if the sign bit is counted, 156-bit accuracy) with a 5-bit exponent, 2-bit mantissa, 1 sign bit, and 8-bit unum section. If IEEE standard compatibility is discarded completely, FP numbers such as the George Spelvi 8-bit type from the MROB.com "microfloats" section, with large range but small accuracy (5-bit exponent, 3-bit mantissa), can be used.
The small accuracy can then be improved either with the Gal's tables revisited method or with 39x extended precision (the latter perhaps works only on IEEE standard numbers?). A 3-bit mantissa gives either 13-bit or 117-bit (?) accuracy depending on the method. The "5-bit IBM computer keyboard scanning format" (at MROB.com, microfloats) has a 2-bit exponent and 3-bit mantissa; its mantissa accuracy can again be expanded with Gal's tables revisited or other precision methods. Even a 4-bit floating point with a 3-bit exponent and 1-bit mantissa is possible with extended mantissa precision, and the extension can be done in hardware. The extreme minifloat format would be a 2-bit floating point: 1-bit exponent, 1-bit mantissa. If IEEE compatibility is discarded, minifloats can use tapered floating point, the Richey and Saiedian fractional format, Gary Ray's "bit reversed Elias gamma coding" for the exponent, and similar methods (at chipdesignmag.com, "Between fixed and floating point" by Dr. Gary Ray), or Quote notation, Bounded Integer Sequence Encoding, or multiple-base composite integers (at MROB.com) as exponent and mantissa, but compatibility with "standard" floating point formats is then lost. Other floating point accuracy improvement methods can also be used, such as "Löwner's theorem" in the book "Applied Numerical Linear Algebra" by James W. Demmel, page 226; there are several accuracy improvement methods for floating point computation, and I have mentioned only three. Even my proposed 16/24-bit format (11-bit exponent, 5-bit mantissa, 1 sign bit, 7 unum bits) can have a hardware FPU, because 5 x 39 = 195 but only 192 bits are used, so the required hardware width is a 192-bit mantissa, 11-bit exponent, 1 sign bit, and 7 unum bits = 211 bits; likewise the 24/32-bit FP number with a 15-bit exponent, 9-bit mantissa, 1 sign bit, and 7 unum bits: 9 x 39 = 351 bits, and 351 + 15 + 1 + 7 = 374 bits, which is less than a 512-bit wide floating point ALU.
Such an FP unit would begin with a 24- or 32-bit number and gradually expand it to 211- or 374-bit width, unlike an ordinary FPU that uses a 256- or 512-bit width from the start and produces a 256- or 512-bit result. If hardware is 1000 times faster than computing in software, floating point units using this 39x mantissa accuracy expansion could be built, and these 24- and 32-bit formats are based on "standard" floating point formats (OpenGL and IEEE). The Gal's accuracy tables revisited method could also be used in hardware as a table-lookup floating point unit (?). As for piling floating point numbers "on top of each other": if only 2x extended precision is used, although 39x is the maximum available, only 160 bits of width are needed for an 80-bit floating point number, so the FPU need only be a 160-bit wide hardware unit. When the accuracy expansion has been repeated 15 times, the accuracy potential is used up (15 x 160 = 2400 bits, still less than the 2496 bits of available accuracy), but one 80-bit FP number then contains about 33,000 other 80-bit FP numbers (2^15 = 32,768), so the "compression" ratio (without data compression) is about 1:33,000. The floating point unit can be a 160-bit wide hardware unit operating on 80-bit floating point numbers expanded to 160-bit extended precision. The expansion must be repeated 15 times before the final result (almost 33,000 other 80-bit FP numbers inside one 80-bit FP number) is achieved, and the FPU must be 160 bits wide, not 80 bits like a normal FPU. If unum bits are used, accuracy perhaps increases further, but perhaps 8 or 12 extra bits are then needed for the unum fields, giving an 88-92 bit floating point format (and doubling it gives a 176-184 bit width).
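The hypothetical 8-bit format discussed above (1 sign bit, 5-bit exponent, 2-bit mantissa) can be decoded like any IEEE-style float. This sketch assumes an IEEE-like bias and implicit leading 1, and ignores subnormals, infinities, and NaN; the function name and the exact bit layout are my own assumptions, since no such standard format exists.

```python
def decode_minifloat(byte, exp_bits=5, man_bits=2):
    """Decode a hypothetical minifloat: 1 sign bit, then `exp_bits` of
    biased exponent, then `man_bits` of mantissa with an implicit
    leading 1 (normal numbers only; no subnormal/inf/NaN handling)."""
    sign = -1 if (byte >> (exp_bits + man_bits)) & 1 else 1
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    bias = (1 << (exp_bits - 1)) - 1          # 15 for a 5-bit exponent
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# 0b0_01111_00: sign 0, exponent 15 (= bias), mantissa 0  ->  1.0
assert decode_minifloat(0b00111100) == 1.0
# 0b0_10000_10: exponent 16, mantissa 2/4  ->  1.5 * 2 = 3.0
assert decode_minifloat(0b01000010) == 3.0
```

With only two mantissa bits there are just four significand values per octave (1, 1.25, 1.5, 1.75), which makes concrete why the text reaches for Gal's tables or extended-precision tricks to recover accuracy from such narrow formats.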

There is a new text, "Solver for systems of linear equations with infinite precision" by Jiri Khun (2015). If any calculation can reach endless accuracy, then an endless amount of information can be packed into a very small number space, in this case a system of linear equations (SLE). Infinite accuracy also implies infinite calculating time, but if a very large number, or a long line of integers, can be encoded inside an SLE, which is a combination of linear equations (as far as I understand), an almost endless amount of information can be encoded inside one SLE. That requires that the result of the SLE, when the equations are solved, is a very large number. The computing method presented uses the GPU, and GPUs have teraflops of computing power. So perhaps this SLE solving method is one way to encode an almost endless amount of information in a very small space, in the equations of the SLE. Encoding means transforming a very long line of integers into an SLE that is small and does not require many numbers to write down; when the SLE is solved (decoded), the calculation produces a very large number (line of integers). That line of integers carries chained information, for example a simple 16 + 16 + 16 bits chain (meaning that a 16-bit two's complement number, 2^16, is the first value; in a 32-bit two's complement number, 2^32, one 16-bit value occupies bit positions 1-16 and another occupies positions 17-32; and so on), making the large two's complement number a "compact number", or an integer in "true compact numbers" form; true compact numbers need some algorithm to dismantle or partition the very large integer into its smaller component numbers. Also, there is a new method of vector compression, "Additive Quantization" (AQ, by Martinez), that leads to "extreme vector compression" results. Any mathematical entity that promises "endless accuracy" or "infinite precision" can be a key to endless data compression.
If a simple mathematical equation can have endless accuracy over a large number of integer values (millions or billions), then that equation can be used to store information. In my example I used a standard 80-bit floating point number, which can have an information content of almost 2^2500, i.e. a two's complement number with almost 2500 bit positions. That is not infinite accuracy, but it is close enough; 2^2500 is an enormous number. If the unum concept is used, the accuracy is even greater. If a system of linear equations can also store an almost infinite quantity of numbers, that can be used as well: a long line of integers is used to "warp" the SLE, and this SLE, which can perhaps be represented in a few lines of code, could then include in itself the whole information content of the universe, etc. If someone would just search the internet and the libraries and collect every article and study that promises "infinite precision" or endless accuracy from some relatively simple mathematical calculation, those simple calculations could then be used to encode an almost endless amount of information. When the mathematical calculation is solved, it produces an almost endless line of numbers, and in those numbers is the information, encoded as a line of integers or in other ways.
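Khun's GPU solver is not reproduced here, but the idea of solving a linear system with no rounding at all can be shown in miniature with exact rational arithmetic (Python's `fractions`), here for a 2x2 system via Cramer's rule; `solve2` is an illustrative name.

```python
from fractions import Fraction

def solve2(a, b, c, d, e, f):
    """Solve  a*x + b*y = e,  c*x + d*y = f  exactly with rationals
    (Cramer's rule).  No rounding anywhere: a minimal instance of the
    "infinite precision" linear-equation solving the text refers to."""
    a, b, c, d, e, f = map(Fraction, (a, b, c, d, e, f))
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular system")
    return (e * d - b * f) / det, (a * f - e * c) / det

# A system whose exact solution a float solver would only approximate:
x, y = solve2(1, 3, 2, 7, Fraction(1, 3), Fraction(1, 7))
assert x * 1 + y * 3 == Fraction(1, 3)      # residuals are exactly zero
assert x * 2 + y * 7 == Fraction(1, 7)
```

Exactness comes at a cost: the numerators and denominators of the rationals grow with the size of the system, which is the concrete form of the "infinite accuracy implies long calculating time" trade-off noted above.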

Musical scales in modern music that use something other than the standard octave scale have developed some mathematically very complicated ways of dividing base 12 (the octave scale) into smaller parts. Since base 12 is also one of the most efficient number bases (together with bases 3 and 6), those octave-dividing (musical) methods may provide efficient ways to divide a base-12 number, and this "octave" number base could then be used for data compression and for representing numbers with high efficiency, so that musical notation leads to effective data compression of numbers. Some musical scales have high mathematical complexity or are otherwise very sophisticated methods; some use Fibonacci numbers and the golden ratio, like the "Bohlen 833 cent" scale, one of the non-standard musical scales, which has "unique properties" and "complicated harmonic relations" (huygens-fokker.org netpage, and at archive.org, Billy Stiltler 2015, Bohlen 833 cent approximate integer). If a Tribonacci base or Ostrowski numeration is used, perhaps even more efficiency is achieved. The netpage xenharmonic.wikispaces.com has, in its "ScaleIndex" section and its subsection "Families of scales", some mathematically very sophisticated microtonal music scales; microtonal scales divide the octave into small portions. Some of these base-12-dividing scales could perhaps be used as number bases for data compression, to represent numbers. The microtonal scales at xenharmonic.wikispaces.com ScaleIndex and Families of scales include three major scale families: equal temperament (the standard scale), Fokker blocks, and moment of symmetry (MOS) scales. If data compression is the goal, perhaps the Lesfip scale, maximal harmony epimorphic scales, MOS cradle, combination product sets, overtone scales (Mode 30, the last overtone scale in the list), numerological ontemperament, superparticular-nonoctave-MOS, Marvel Woo, yantras, hemifourths, Peppermint-24, the Marveldene, and "crystal balls" are suitable as number bases for data compression.
The previous list is chosen from ScaleIndex and its subsection Families of scales; there are many more microtonal scales on those lists. "Euler genera", Fokker blocks and "High school scales" are general methods of microtonal tuning. The articles "Monzos" and "Monzos and interval space" at xenharmonic.wikispaces.com explain some of those octave-dividing systems. Wikipedia has "Limit (music)", "Define function" and other relevant articles. Modern microtonal tuning uses some extremely sophisticated methods to divide the octave (or base 12) into parts, using for example phi (the golden ratio), Fibonacci numbers, etc., and sometimes even combining two of these. Methods for representing numbers most economically also sometimes use base 12, Fibonacci / tribonacci bases, the golden ratio (the negative beta encoder), etc., so perhaps some musical scales offer ways to compress number values into a very compact form. Because interval arithmetic is used in many mathematical applications, and music is based on intervals, or on dividing larger intervals (octaves) into smaller parts (microtonal scales), perhaps musical scales / microtonal scales can offer a method of data compression for interval arithmetic and other number-based systems. This applies not just to microtonal scales but to other scales that deal with octave-dividing methods, like Bohlen 833 cent. If the octave is treated as a base-12 number, that is very effective for data compression, because base 12 is one of the most efficient number bases. On the netpage nicksworldofsynthesizers.com, in the section "interactive synths", subsection "my explanation of just intonation", subsection "my system", there is another octave-dividing method (based on just intonation), and there are many more octave-dividing methods, microtonal or otherwise. Representing fractional values in an efficient numerical form is one of the challenges of mathematics and data compression.
Because those musical scales, for example the ones at the netpage xenharmonic.wikispaces.com, are mathematically very sophisticated and complex, they may offer a way to represent fractional number values very efficiently.
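As a concrete example of the non-standard numeration systems mentioned above (Fibonacci bases, golden-ratio encoders), here is a minimal sketch of Zeckendorf representation: every positive integer is a unique sum of non-consecutive Fibonacci numbers, so the resulting digit string never contains two adjacent 1s. The function names are mine; this illustrates the numeration system only and is not, by itself, a compression scheme.

```python
# Minimal sketch of Zeckendorf (Fibonacci-base) numeration.
# Every positive integer is a unique sum of non-consecutive
# Fibonacci numbers, so the digit string never contains "11".

def zeckendorf(n):
    """Zeckendorf digits of n, most significant first."""
    if n <= 0:
        raise ValueError("n must be positive")
    fibs = [1, 2]                      # Fibonacci numbers 1, 2, 3, 5, ...
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs):           # greedy: take each Fibonacci that fits
        if f <= n:
            digits.append(1)
            n -= f
        else:
            digits.append(0)
    return digits

def from_zeckendorf(digits):
    """Decode digits (most significant first) back to an integer."""
    fibs = [1, 2]
    while len(fibs) < len(digits):
        fibs.append(fibs[-1] + fibs[-2])
    # pair each digit with its Fibonacci weight, largest first
    return sum(f for f, d in zip(fibs[len(digits) - 1::-1], digits) if d)

d = zeckendorf(100)                    # 100 = 89 + 8 + 3
assert d == [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]
assert from_zeckendorf(d) == 100
```

The forbidden "11" pattern is what the golden-ratio (beta) encoders mentioned above exploit: the redundancy it removes is closely tied to phi, which is also the interval ratio underlying the Bohlen 833 cent scale.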
