
Hardware data compression chip

Data compression is needed everywhere, in datacenters and elsewhere, yet it is usually done by a computer program, not by a hardware encoder/decoder. Since software is sometimes a thousand times or more slower than hardware, a hardware data compressor should be much faster than software. Made with 5 nm manufacturing tech and optimised for speed, it would use lots of electric power, perhaps as much as a CPU, and run hot, so it would need cooling from a PC cooler or similar. This hardware data compressor can be put on a computer motherboard like any other IC, or in a separate USB stick, inside a hard disc drive, etc. The hardware compressor/decoder can also be in digital television sets, virtual glasses and so on. External compressors outside the PC cannot draw as much power as a chip inside the PC and are therefore less effective; a USB-stick compressor could have electric cooling and heat pipes in the stick to cool down the hot chip. There are compression algorithms like LZ4, FSE/ANS-based algorithms, and combinations of many algorithms in one (like Zstandard). Floating-point numbers have their own compression algorithms. Compression can average over 20x on some data, but only on average, because compression needs repetitive data; random numbers cannot be compressed. If the hardware chip includes not only video/audio codecs but also data compression (general data compression plus specific floating-point compression), then not only video and audio but any other data the PC uses can be compressed. There are different compression levels, a speed/ratio tradeoff, and the PC could automatically choose the best level for each data set. Then there is no need to buy expensive large-capacity SSD cards and hard discs, since more data fits in smaller and cheaper space, and if data compression is used, perhaps no hard disc is even needed.
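The speed/ratio tradeoff mentioned above can be illustrated in software with Python's built-in zlib (a DEFLATE implementation, used here only as a stand-in for the algorithms discussed; the sample data and sizes are illustrative assumptions):

```python
import zlib

# Repetitive data compresses well; random data would not.
data = b"sensor reading: 42.0\n" * 1000

for level in (1, 6, 9):  # fast, default, best compression
    packed = zlib.compress(data, level)
    ratio = len(data) / len(packed)
    print(f"level {level}: {len(packed)} bytes, ratio {ratio:.1f}x")

# Lossless round trip: decompression restores the original bytes.
assert zlib.decompress(packed) == data
```

A chip that exposes several such levels could pick a fast level for hot data and a slow, dense level for cold storage, as the paragraph above suggests.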
The PC then becomes cheaper, since a smaller amount of memory will do, for example in ultra-cheap PCs for the third-world market. Text can have its own separate compression, like IDBE. The chip would be a special ASIC made for data compression only, made as fast as possible. If this chip needs its own memory, for an IDBE library or other data, that memory can be integrated into the chip itself, or a memory chip can sit with the compression chip in the same plastic IC package. The chip could be removable from the motherboard, using a sort of "motherboard adapter", so that when better chips are made and data compression programs become more efficient, the compressor/decoder chip can be changed to a better one. One version of data compression is vector compression, used for compressing big data. If a PC has cheap petabyte memory, like a big LTO-type magnetic tape of petabyte capacity mass-produced like a VHS cassette at not much higher price, or optical tape memory (Folio Photonics), or an optical disc (try googling "petabyte optical disc"), that is big data, and it holds even more when the data is compressed. A version of vector quantization called additive quantization (AQ) is extreme vector compression ("Revisiting additive quantization"). AQ can perhaps be used to compress big data sets; vector quantization is used in video and audio compression, so perhaps AQ is too. Another post, "Using floating point numbers as information storage and data compression", describes how increasing the accuracy of FP numbers can be used as a data store. This FP accuracy increasing is done by software programs, but hardware digital logic would be much faster. So a block of information, any information, about 2000 bits long, is turned into an extended-accuracy floating-point number that carries about 2000 bits of accuracy in one 64-bit IEEE floating-point number, for example. That 64-bit FP number is then compressed further still using lossless floating-point compression.
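As a minimal sketch of one common lossless floating-point compression idea (the XOR predictor used by schemes such as Gorilla and FPC, not the extended-accuracy method described above), consecutive 64-bit float bit patterns are XORed; slowly varying or repeated values then give runs of zero bits that a back-end entropy coder can squeeze out. The sample values are made-up sensor readings:

```python
import struct

def float_bits(x: float) -> int:
    # Reinterpret a 64-bit IEEE float as an unsigned integer.
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def xor_deltas(values):
    # XOR each value's bit pattern with the previous one.
    # Identical values XOR to exactly 0, which is maximally compressible.
    prev = 0
    out = []
    for v in values:
        bits = float_bits(v)
        out.append(bits ^ prev)
        prev = bits
    return out

samples = [20.0, 20.0, 20.1, 20.1, 20.2]
deltas = xor_deltas(samples)
print(deltas[1])  # 0, because samples[1] == samples[0]
```

The decoder reverses the XOR chain, so the round trip is exactly lossless, which is the property the paragraph above requires.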
There are other number systems, like posit, quire, Gary Ray's floating point that has a "bit reversed Elias gamma coded exponent", and the fractional floating-point (or fixed-point/floating-point hybrid) formats by Richey and Saiedian ("A new class of floating-point data formats with…", "Performance evaluation of orthogonal…" fixed-point fractional format). Posit etc. number systems can also use extended software precision. Or binary information can simply be turned into some other number system, like the hereditary number system, giant numbers (Paul Tarau), the magical skew number system, the PT number system (MROB.com), the U-value number system (neuraloutlet.wordpress.com), ternary tau, zero displacement ternary, etc. If those number systems have much higher accuracy than standard binary integers, then they can hold information in a much denser state, even when they are encoded in binary form. So simply turning information into some other number system that has high accuracy but uses very few bits gives a kind of "data compression" (without actual data compression: nothing is taken away from the information, it is only turned into another number system). This can be done in a hardware IC chip, or in software.
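Since the paragraph above mentions an Elias-gamma-coded exponent, here is a short sketch of plain Elias gamma coding (the standard universal code; the bit-reversed variant in Gary Ray's format is not reproduced here): a positive integer n is written as its binary form prefixed by one zero bit per binary digit after the first, so small numbers take very few bits.

```python
def elias_gamma(n: int) -> str:
    # Encode a positive integer as a bit string:
    # (len(binary) - 1) zero bits, then n in binary.
    assert n >= 1, "Elias gamma is defined for positive integers"
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

for n in (1, 2, 5, 9):
    print(n, elias_gamma(n))
# 1 -> 1, 2 -> 010, 5 -> 00101, 9 -> 0001001
```

The prefix of zeros tells the decoder how many bits follow, so the code is self-delimiting, which is why it is attractive for encoding variable-length exponents.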
There is also delta-sigma modulation (DSM). New versions of it, like multibit DSM or pseudo-parallel DSM, can give a "data compression" of sorts: when a DSM does 16-bit processing of a 1-bit DSM signal at a 4x oversampling ratio, that is 16 : 4 = 4, so four times "data compression". For an audio signal this uses only 4 bits per frequency, not 16 bits like linear PCM. Pseudo-parallel DSM with 24x pseudo-parallel processing has 24-to-1 "data compression", I believe. DSM is used for audio signals, but perhaps other signal processing / data compression methods can use it too. DSM can be coupled with u-law encoding, with floating-point number systems (so that the 16-bit internal processing of the 1-bit signal in multibit DSM uses FP numbers), or with other number systems, like the floating-point variants mentioned earlier, or with numbers other than floating point / posit, like the "one hot residue number system" etc. Vector quantization can also be coupled with delta-sigma modulation. Whether AQ can be coupled with DSM, multibit DSM or pseudo-parallel DSM, I don't know. There are patents by Clinton Hartmann ("Multiple pulse per group keying") that turn 15 bits into 1 and can be used in internet or radio-frequency communication etc. Such compression methods, if they are difficult to do in software, can be put in hardware, in a special IC chip on each computer, or in a USB stick, etc.
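The basic delta-sigma idea can be sketched as a first-order modulator: an integrator accumulates the error between the input and the fed-back 1-bit output, and a quantizer emits +1 or -1 so that the average of the bit stream tracks the input. This is only the textbook first-order loop, not the multibit or pseudo-parallel variants named above; the constant test input is an assumption for illustration.

```python
def dsm_first_order(signal):
    # First-order delta-sigma modulator producing a +1/-1 bit stream.
    integrator = 0.0
    out = 0.0
    bits = []
    for x in signal:
        integrator += x - out          # accumulate error vs. feedback
        out = 1.0 if integrator >= 0 else -1.0
        bits.append(out)
    return bits

# Constant 0.5 input: mean = 2*density - 1, so the density of +1 bits
# settles at 75%.
bits = dsm_first_order([0.5] * 1000)
density = bits.count(1.0) / len(bits)
print(round(density, 2))  # 0.75
```

The information is thus carried in the bit density rather than in sample amplitudes, which is what makes the 1-bit stream so simple for hardware to process.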

A data compression chip for computers perhaps needs not only a digital IC but an optical or analog chip as well. An optical chip can do fast signal processing: "All optical delta-sigma modulator", "On chip optical modulator using epsilon-near-zero", "Optical isolation amplifier using sigma-delta…", "All optical binary delta-sigma modulator", "ASIC chip for real-time vector quantization of video signals", "VLSI chip for affine-based video compression". There are also multirate DSM, cascaded DSM and VCO-ADCs, not only multibit or pseudo-parallel / oversampling DSM. Besides sigma-delta there is one-bit modulation without delta-sigma (Takis Zourntos), and ADPCM / DPCM methods can be used: "ADPCM with pre and post filtering", "ADPCM with noise feedback". Vector quantization can be used with ADPCM / DPCM. Vector quantization models include, for example, spherical, pyramid and cubic vector quantization, SLQ, LSVQ, transform VQ, and additive quantization. "SCELP low delay audio codec". "Quantization and greed are good" (2013). "Analog to information compression" (AIC) is a new method that makes it possible to sample below Nyquist rates, sometimes at rates up to 50:1 compression in IoT sensor nodes, and Xampling (analog sampling) is another such method. "Segmented compressed sampling", "Error correcting codes for sparse signals", "PROMPT: a sparse recovery…", analog error correction codes. Analog processing makes the KLT transform and Anamorphic Stretch Transform compression possible, so perhaps an analog chip is needed too. The smallest ADC / analog chips are made using 16 nm tech in Japan. "Easy analog data compression", "Lossless analog data compression", analog data compression patents.
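The ADPCM / DPCM methods listed above rest on a simple idea that can be sketched briefly: transmit the difference from the previous sample instead of the sample itself, so smooth signals give small residuals that need fewer bits. This is plain DPCM with a trivial previous-sample predictor and integer samples (real ADPCM adds adaptive quantization of the residuals); the sample values are made up:

```python
def dpcm_encode(samples):
    # Emit the difference of each sample from the previous one.
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    # Rebuild the samples by running sums of the residuals.
    prev = 0
    out = []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

samples = [100, 102, 101, 103, 104]
res = dpcm_encode(samples)
print(res)  # [100, 2, -1, 2, 1] - small after the first value
assert dpcm_decode(res) == samples
```

A vector quantizer can then code these small residuals with a shared codebook, which is the ADPCM + VQ combination the paragraph mentions.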
So when a data / video / audio compression chip is made for computers, it perhaps needs an optical and/or analog chip alongside the digital one. All two or three chips can sit in the same plastic IC package, together with a memory chip if memory is needed for something like "Gal's accuracy tables revisited". An optical chip makes little heat, so it needs less cooling.
XXH3 is a new fast hashing algorithm (from the same author as LZ4 and Zstandard), not a compressor as such. If error correction codes are used, error correction (turbo codes etc.) can be turned into data compression of a sort: only as much information is kept as the error-correcting code needs to recover the rest, and everything else is removed, so the data becomes smaller.
Other number systems than binary can also be used: the multiple-base composite integer (on the MROB.com netpage), and quote notation by Hehner.
On the Robin Hood Coop forums netpage there is a text, "Petabyte SSD / memory card and DVD & CD audio disc", which argues that optical discs can still be used as mass memory because they are cheaper than magnetic memory. An ordinary PC could have a petabyte optical disc, optical tape memory, or magnetic tape memory of petabyte size, and it would be cheap too, much cheaper than hard disc / IC-chip magnetic memory. There is also analog IC memory (IBM and others), so analog information can be stored in analog state in an analog memory chip.

Hardware data compression chips do exist: the zEDC hardware accelerator, AHA data compressors, "compression accelerators" (Microsoft), and the hardware netpage at data-compression.org. Multiple description coding can be used; it has been used together with delta-sigma modulation. "ZKPDL: a language-based system for efficient zero-knowledge proofs and electronic cash". Colt McAnlis: "Unexplored areas in data compression" on the medium.com netpage. If a hardware compression chip can use many compression methods in one, as Zstandard software does, the hardware compressor has much flexibility (compression ratio / speed); the hardware compressors made so far seem to use only one algorithm each. For floating-point compression there are accuracy-increasing methods, binary scaling, floating-point scaling, "Gal's accuracy tables method revisited", and the method that offers endless accuracy in the text "Endless data compression is possible" on the Robin Hood Coop forums.