Instead of ordinary 8- or 16-bit Unicode text compression that uses 8- or 16-bit integers, Intelligent Dictionary Based Encoding (IDBE) or another dictionary-based text compression method can be used. A 16-bit integer makes very large libraries possible, so an IDBE library can hold 65 000 different sentences or syllables and only 536 different “Unicode” characters, because 16 bits gives 65 536 values. A 16-, 20- or 21-bit integer instead of Unicode, or a 16- or 32-bit floating point IDBE library, can also be used. However, floating point numbers have a large range but smaller accuracy than integers of the same size (16-bit FP has 10 bits and 32-bit FP 23 bits of mantissa accuracy). If complete sentences are in one 16-32 bit IDBE dictionary, efficient text compression is possible. Short 2-4 letter syllables can also be in the library, as IDBE uses already. I have written more about this in the “Free internet for third world” text. The English language has about 170 000 words, so 18 bits are needed. A 16-bit integer can be expanded to 18 bits using Bounded Integer Sequence Encoding, or to 19-20 bit accuracy using dither. 18 bits gives about 260 000 values, so 80 000 can be used for the most used English sentences, 170 000 for words, about 3000 for IDBE syllables 2-4 letters long, and 3000 for Unicode characters. Memory requirements are low: if one word has 10 letters and one letter needs 6 bits (a 64-symbol alphabet) or only 5 bits (32 different letters), then 170 000 X 10 X 6 bits is needed, about 10 megabits or 1.2 megabytes, for the dictionary of words. 80 000 sentences of about 10-50 letters, 25 letters on average, need 6 X 25 X 80 000 bits, 12 megabits or 1.5 megabytes. If a dithered 16-bit value with 19.5-bit accuracy is used, like HDCD audio coding for CD discs, 19.5 bits gives about 740 000 values, so about 600 000 different sentences can be in the dictionary along with the 170 000 words. The sentences then need about 90 megabits, 11 megabytes, still a low memory requirement.
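The 18-bit budget split above can be sketched as a simple range check over one code space; the boundaries below follow the example numbers in the text, and the dictionaries themselves are not shown:

```python
# Partition one 18-bit code space (262 144 values) into sub-ranges for
# sentences, words, syllables and Unicode characters, per the split above.
SENTENCES = 80_000
WORDS     = 170_000
SYLLABLES = 3_000
UNICODE   = 3_000

assert SENTENCES + WORDS + SYLLABLES + UNICODE <= 2**18  # 256 000 fits

def classify(code):
    """Return which dictionary a raw 18-bit code indexes into, plus the
    local index inside that dictionary."""
    if code < SENTENCES:
        return ("sentence", code)
    code -= SENTENCES
    if code < WORDS:
        return ("word", code)
    code -= WORDS
    if code < SYLLABLES:
        return ("syllable", code)
    code -= SYLLABLES
    if code < UNICODE:
        return ("unicode", code)
    raise ValueError("code outside the 18-bit budget")

assert classify(0) == ("sentence", 0)
assert classify(80_000) == ("word", 0)       # first code past the sentences
assert classify(250_000) == ("syllable", 0)  # 80 000 + 170 000
```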
Using floating point: 16-bit FP has about a 40-bit range and 10 (11) bits of accuracy, which can be expanded using Gal's accuracy tables (revisited) and similar methods up to 20-24 bits, a million to 16 million values, enough to store up to 15 million sentences with about a 300 megabyte memory requirement. 16-bit FP has a 16-bit positive and 24-bit negative range, so first use positive values up to 65 000 for IDBE syllables, Unicode characters (if needed) and the 65 000 most used words. If that is not enough, switch to negative values up to 24 bits (16 million values: 15-16 million possible sentences in memory, plus the remaining 110 000 words for a 170 000+ word total). Using the Intel 16-bit deep learning format with mantissa expansion by S. Boldo and others brings the capacity to 250 bits of values, so if only 21 bits, two million values, are used, and the Gal's tables revisited method increases 16-bit FP to 25-bit mantissa accuracy, then 16 different languages or writing systems (Chinese, Japanese, Arabic, Cyrillic etc.; 21 + 4 bits is 25 bits, and 4 bits is 16 values) are possible, of which only one 21-bit system is stored in computer memory at a time; switching from Latin text to Chinese, for example, means changing the 21-bit (two million value) text memory of the computer. The 21-bit number is used as a pointer into a memory of two million different sentences, words and syllables. This is like UTF-8, which uses 8-32 bits for one character (letter); this uses 16 bits for one “character”, but here the character is not a letter: it can be a syllable, a word or a complete sentence. Text can now fit in very small memory space if 16 bits is enough for a complete sentence, though the dictionary needs up to 300 megabytes at maximum. Even with a dictionary of only a few megabytes, very efficient text compression is achieved. It is not even mathematical data compression, so ordinary data compression methods can compress the text even further.
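A minimal sketch of the 16-bits-per-“character” idea, where one fixed-width code may stand for a syllable, a word or a whole sentence; all dictionary entries below are invented for illustration:

```python
# Toy dictionary-based codec in the spirit of IDBE: each 16-bit code is an
# index into a shared dictionary of sentences, words and syllables.
import struct

DICTIONARY = [
    "the quick brown fox jumps over the lazy dog",  # code 0: a sentence
    "hello",                                        # code 1: a word
    "ing",                                          # code 2: a syllable
    " ",                                            # code 3: separator
]

def encode(tokens):
    """Map each token to its 16-bit dictionary index (big-endian)."""
    codes = [DICTIONARY.index(t) for t in tokens]
    return struct.pack(f">{len(codes)}H", *codes)

def decode(data):
    codes = struct.unpack(f">{len(data) // 2}H", data)
    return "".join(DICTIONARY[c] for c in codes)

packed = encode(["hello", " ", "the quick brown fox jumps over the lazy dog"])
assert len(packed) == 6  # 3 tokens -> 6 bytes, vs. 49 bytes of raw ASCII
assert decode(packed) == "hello the quick brown fox jumps over the lazy dog"
```

The output of `encode` is not entropy-coded, so, as the text notes, an ordinary compressor can still be run on top of it.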
Possible integer (not floating point) number systems for text compression are the Zero Displacement Ternary Number System (ZDTNS), the magical skew number system (Elmasry, Jensen, Katajainen), the multiple-base composite integer at the MROB netpage, the U-value number system at the Neuraloutlet wordpress netpage, etc. The multiple-base number system (Dimitrov), multiple description coding, multiterminal source coding etc. can also be used. Another option is 8-bit floating point, like the Microsoft deep learning format with only a 2-bit significand, with those 2 bits expanded using Gal's accuracy tables or another method so that at least 17-18 bit accuracy is reached. Or use a logarithmic 8-bit format; Gal's accuracy tables can be used with logarithmic numbers too. If the significand of a floating point number can be expanded by up to 39 times, which is possible according to S. Boldo and Malcolm, almost endless dictionary storage space is available even in an 8-bit FP number. Or use 1-bit delta sigma modulation with 16-bit internal multibit or multirate processing; then 1 bit is enough to store 16 bits of information (65 000 values), so the “compression ratio” is 16:1. The large dictionary method can be used in video compression also: the video generator has in memory a large number of different textures, shapes and colours. The screen is divided into a large number of smaller screens, each with texture processing and time encoding; with a sufficient number of small screens one big picture is made. No actual visual information is sent to the receiver, only a few hundred or thousand bytes of guidance information pointing into the dictionary memory, plus time encoding. So if the viewer watches the news, he watches a Max Headroom-style computer-generated image of the newscaster, not a real human; this CGI image looks like that newscaster, a close approximation built from textures and shapes inside the receiver's dictionary, with time encoding that changes the shapes and colours of the textures as the person in the picture moves. It is genuinely low bitrate video coding, perhaps not so realistic, but low bitrate at least.
Virtual reality needs lots of pixels, but no pixels are needed in this video coding process, because the video is constructed from abstract shapes and patterns with time encoding, not individual pixels, so the pixel count can be anything in video glasses etc. The resulting virtual reality is a CGI construct, so not so convincing, but at least cheaply made. Only a large dictionary is needed for shapes and textures, and they can be coloured with different colours, so colour information can be sent with the time encoding to the receiver, together with which shape/pattern to use in every small screen that together make the big picture; the final image is a CGI construction. However, floating point numbers have a huge range, and now their accuracy can also be increased to match that range, so one 16- or 32-bit FP number can point accurately into a dictionary of billions of values and find the right one, so this kind of large dictionary is possible. Phones, not to mention computers, have gigabytes of memory, so a few gigabytes for this kind of video format is not much memory loss, and it can be used for virtual reality also. Only a few floating point numbers can build a picture or video stream, although the images are computer animation. For example, when the camera is looking at a forest, the computer finds “tree” in the dictionary, a basic CGI-generated tree matching the kind of tree the camera is filming, and places these standard trees amidst each other, in different sizes, as the camera observes the forest; if the trees sway in the wind, time encoding that makes the “trees” sway accordingly is also sent. But because it is a CGI image that the receiver builds from camera information, it is perhaps a bit crude looking. This can be used in low bitrate virtual reality transmission etc. It is like fractal image compression, but the fractals are inside computer memory, in a large dictionary one or several gigabytes in size. Only pointers to these stored fractals are sent to the receiver through the internet.
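The forest example could be sketched like this, with a made-up shape dictionary and a single sway phase standing in for the time encoding; everything here is invented for illustration:

```python
# Send a scene as dictionary pointers instead of pixels: the receiver owns
# a shape dictionary (names stand in for stored CGI models), and the sender
# transmits only (shape id, x, y, scale) tuples plus a sway phase.
SHAPES = ["pine", "birch", "oak"]   # stands in for stored CGI tree models

def encode_scene(trees):
    """trees: list of (name, x, y, scale) -> compact pointer tuples."""
    return [(SHAPES.index(name), x, y, scale) for name, x, y, scale in trees]

def decode_scene(pointers, sway_phase=0.0):
    """Rebuild a renderable description from pointers plus time encoding."""
    return [{"model": SHAPES[i], "x": x, "y": y,
             "scale": s, "sway": sway_phase}
            for i, x, y, s in pointers]

scene = [("pine", 10, 20, 1.5), ("birch", 40, 22, 0.8)]
ptrs = encode_scene(scene)          # only these tuples need to be sent
assert ptrs == [(0, 10, 20, 1.5), (1, 40, 22, 0.8)]
rebuilt = decode_scene(ptrs, sway_phase=0.25)
assert rebuilt[0]["model"] == "pine" and rebuilt[1]["sway"] == 0.25
```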
Because Intel's 16-bit bfloat16 can reach a 255-bit range and 252-bit accuracy if its mantissa is expanded 36 times using the method of S. Boldo and Malcolm, and 252 bits covers an astronomically large value space, the memory of the computer runs out before the values of a 16-bit pointer to a video dictionary of fractals do. See “Fractal coding based on image local fractal dimension” (2005). Video codecs work by dividing the picture into smaller range blocks and domain blocks; those small blocks can be constructed from predefined “building blocks” of shapes, patterns and fractals stored in a large video dictionary in memory. Only the time encoding that changes one basic shape into another over time, and the pointers that find suitable shapes in memory, need to be sent through the internet. The computer must analyse the picture the camera is filming, divide the image into smaller blocks, and find suitable building blocks for the range and domain blocks; then only the pointer information is sent through the internet. The receiving device, a telephone etc., finds the right building blocks of the image in memory using the pointers sent through the internet. Intel's 16-bit bfloat16 and Microsoft's 8-bit ms-fp8 are new floating point formats. Deep learning also uses fixed point: “dynamic fixed point” like 12-bit with stochastic rounding (Guo 2015), and Flexpoint, a 16-bit fixed point with a 5-bit shared exponent, an 8-bit format with one shared exponent, etc. Unum/posit computing is one version of floating point as well, and “Between fixed point and floating point” by Dr. Gary Ray (a chipdesignmag article), reversed Elias gamma exponent floating point, and the Minibit and Minibit+ floating point formats (2006 and 2007) can offer increased accuracy over ordinary floating point or fixed point formats. Posit floating point is related to logarithmic A-law encoding, according to Quadibloc (John G. Savard) in Google Groups, 29.9.
2017 “Beating posits at their own game” text, so Gal's accuracy tables (revisited) method can expand the posit significand just like an ordinary floating point significand, if the mantissa expansion methods of S. Boldo and Malcolm do not work on posits the way they do on ordinary floating point. So even an 8-bit pointer may be enough to point into a dictionary of millions of values of text information (letters, syllables and sentences), or into a gigabytes-large video shape / fractal pattern library. See also Octasys comp (van den Boom), “Universal lossless coding of sources with large and unbounded alphabets”, “Generating dithering noise for maximum likelihood estimation”, bounded floating point, “Improving floating point compression through binary masks”, “Representing numeric data in 32 bits while preserving 64 bit precision”, “Twofold fast summation” (Latkin 2014), “Improved floating point division and square root” (Viitanen), and “New uncertainty-bearing floating point arithmetic” (Wang). Text compression can also be semi-lossless: some of the text information is lost but the text is still readable, which would lead to even better text compression. If differential integers, like Dynamic DPCM at the bitsnbites netpage, delta sigma, or Takis Zourntos' 1-bit coding without delta sigma etc., are used in video, audio or text compression, then even a small minifloat or unum/posit in differential form is possible. Video compression can use small blocks of the bigger picture to encode the image, but if the camera is filming a tree or a human, the computer can also skip dividing them into range or domain blocks and instead make the tree or human one big block marked “tree” or “human”; computer memory has basic tree and human shapes according to tree type and human type (man, woman, child etc.)
and these basic-shape humans and trees are used with computer animation to represent real people and real trees with a small amount of information (colour, face shapes, clothing shapes etc.), so those CGI images at least somehow resemble the real person and tree that the camera is filming. A minimal amount of information is enough to send a video stream. Better realism is achieved using range and domain blocks of the big picture with a predefined dictionary of shapes and patterns; the bitrate is still low but the picture is more realistic. “Profiling floating point value ranges for reconfigurable implementation” (2012, Brown) is double fixed point, which is between fixed and floating point. Dynamic fixed point, like a 64-, 32-, 16- or 8-bit integer with Bounded Integer Sequence Encoding that uses Zero Displacement Ternary, order-3 Fibonacci (Sayood: Lossless Compression Handbook) or the constrained triple-base number system, then a (shared) exponent like the Flexpoint format, then stochastic rounding, and then dithering, would make an accurate integer. The quire is the posit/unum accumulator used for dot products; dot products operate on vectors, and vectors can be used in video or audio compression (TwinVQ audio compression). Floating point formats include Texas Instruments' “Where will floating point lead us?” with an 8-bit exponent and a 1-bit (implied) mantissa, “Between fixed and floating point” by Dr. Gary Ray at chipdesignmag, and the 10-, 11- and 14-bit graphics FP formats (the 14-bit format uses a shared exponent: 3 X 9 bits plus a shared 5 bits, 32 bits together, unlike 16-bit FP, which has a 10-bit mantissa + sign bit + 5-bit exponent, so 3 X 16 = 48 bits; the 14-bit format has only one bit less mantissa but saves a third of the bitwidth). Those minifloats with mantissa accuracy expansion can be very accurate. IBM mainframes used 64-bit FP truncated to 32 bits. See also “SZ floating point compression”, the modulo operation, “Compactification of integers” (Royden Wysoczanski 1996), and Bohr compactification.
Modern FPUs have a 512-bit width, so when an FP number of 16 or 32 bits is accuracy-expanded to hundreds of bits, it can still be processed in the FPU in a single cycle if the expanded form is 512 bits or less. Small microfloats of 4-8 bits or similar can be differential floating point / posit / quire / Gary Ray's floating point etc., in differential form. “Video compression with color quantization and dither” (Vignesh Raja). If 32-bit delta sigma modulators are being made, and the smallest analog circuits are now 16 nm wide, perhaps a 64-bit DSM, the Takis Zourntos sampling model, the “Cascaded Hadamard based parallel sigma delta modulator” (Alonso) or Johansson's “pseudo-parallel sigma delta modulator” can be made. Or use a multirate modulator. 1-bit sampling at a 144 000 Hz sampling rate is 4 X 36 000 Hz, enough for audio, and perhaps enough for compressed video also. Because DSM has very efficient noise shaping methods that add 60-70 dB of noise distance, 10-12 bits, and 16-bit dynamic range is enough for audio, only a 4-8 bit DSM is needed; the rest of the DSM bits can be used for data compression, “numbers on top of each other” (“Using floating point numbers as data storage and compression”). However, if mathematical data compression is used, the asymmetric numeral systems that Finite State Entropy uses are probably applied with the usual integers, so my previously proposed methods would not be used in an FSE / asymmetric number system environment. Whether FSE can be used without asymmetric numerals, or those previous methods combined with asymmetric numerals, I don't know. BISE (bounded integer) can be used, and/or differential form, and in ternary: BTTS (balanced ternary tau system), or ternary following the “differential ternary” texts by Nadezda Bazunova, like the “D3 = 0” ternary from her texts. BTTS is logarithmic, not integer, so it needs a “beta encoder” or something like that.
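A first-order delta sigma modulator in its simplest form shows the principle behind the 1-bit sampling discussed above: the 1-bit stream's local average tracks the input.

```python
# Minimal first-order delta sigma modulator: integrate the error between
# the input and the last 1-bit output, then quantize the integrator sign.
def dsm(samples):
    integ = 0.0
    y = 0.0
    bits = []
    for x in samples:
        integ += x - y                    # accumulate error vs. feedback
        y = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
        bits.append(y)
    return bits

# Feed a constant 0.5 for 1000 samples: the density of +1 bits settles so
# that the stream's mean tracks the input level.
bits = dsm([0.5] * 1000)
assert set(bits) <= {1.0, -1.0}
assert abs(sum(bits) / len(bits) - 0.5) < 0.01
```

Real modulators add higher-order loops and noise shaping on top of this, which is where the extra 60-70 dB of noise distance mentioned above comes from; this sketch only shows the core loop.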
“Ternary differential models” (Palitowska), “Liao style numbers for differential systems”, “A modified adaptive nonuniform sampling delta modulation (ANSDM)”, “Pseudoternary coding” (Astola, Pietikäinen), “A new number system using alternate Fibonacci numbers” (Sinha 2014), “A novel multi-bit parallel delta/sigma FM-to-digital converter” (Wisland), “Fascinating triangular numbers” (Shyamsundergupta), and “Beam shaping using a new digital noise generator” are also relevant. Either a digital approach, “Direct digital synthesis using delta sigma modulation” (Orino), or an analog one like a VCO-ADC structure (some delta sigma VCO-ADCs even have two VCOs per DSM) might work. On the Github Agnusmaximus netpage is “Quantized word vectors take 8-16X less space than ordinary vectors”; those quantized word vectors can be used in text compression.
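The quantized word vector idea can be sketched as uniform 8-bit quantization with a per-vector offset and scale; this is a generic scheme for illustration, not necessarily the exact method used on that page:

```python
# Store each float embedding component as one byte-sized code plus a
# per-vector (offset, scale) pair, cutting storage roughly 4x vs. float32.
def quantize(vec):
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / 255 or 1.0          # avoid zero scale for flat vectors
    codes = [round((v - lo) / scale) for v in vec]
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return [lo + c * scale for c in codes]

vec = [0.12, -0.5, 0.33, 0.9, -0.07]
codes, lo, scale = quantize(vec)
approx = dequantize(codes, lo, scale)
assert all(0 <= c <= 255 for c in codes)    # each code fits in one byte
# each component is recovered to within half a quantization step
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(vec, approx))
```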