Because 1-bit digital signals have been studied in “compressive sensing” and other applications, perhaps 1-bit digital can be used in neuromorphic computing as well. The text “Maximum likelihood estimation of quantized data” (Gustafsson, Karlsson) studies “data dither”. Using dithering, an improvement of at most 8 - 10 bits is achieved, but the dither noise is then present in the signal chain. Coupling dither with noise shaping techniques increases the “dynamic range” (precision) of a dithered 1-bit signal to about 15 - 16 bits at most (roughly 33 000 - 65 000 numerical values), but that requires complex noise shaping and dither procedures (usually used with delta-sigma modulation, although these methods are perhaps not restricted to delta-sigma but apply to other modulation and D/A conversion methods as well). If dither noise is allowed to become clearly present in the signal chain, perhaps 20, 22 or 24-bit precision can be achieved with these methods, using 1 bit as the base. But those are complex methods (studied with delta-sigma modulation, among other modulation methods), and if neuromorphic digital neurons or neuristors (memristor-based artificial neurons) are supposed to be simple, a complex noise shaping circuit in every digital neuron is very impractical. Simpler noise shaping methods will perhaps do. Complicated “neurons”, like the European Union Human Brain Project's ARM9-processor-based “neurons”, can perhaps use complex noise shaping in every “neuron” and get a signal path of up to 24 bits while carrying only 1 bit in the signal chain, thanks to noise shaping and dither. Analogue circuits are being made at 65 nanometers, and even 32 or 28 nanometer analogue circuits have been proposed. But what is the quality of analogue transmission in these small 28 nm circuits? Probably very noisy. But if analogue transmission is achieved with quality that is sufficient for neuromorphic chips, then perhaps digital neuromorphic chips are not needed. Digital chips can be made smaller than analogue ones; however, if 28 nm is achieved with an analogue design, the difference between digital and analogue is not very big. 
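As a minimal sanity check of the dither idea above, here is a small Python sketch (illustrative only, not any particular hardware design; variable names are my own): a 1-bit quantizer alone cannot represent the value 0.3 at all, but with uniform dither the average of many 1-bit samples recovers it, which is exactly how dither trades oversampling for precision.

```python
import random

random.seed(0)

def quantize_1bit(x):
    # 1-bit quantizer: the output is +1 or -1, nothing in between.
    return 1.0 if x >= 0 else -1.0

signal = 0.3            # a constant value between -1 and +1
n = 100_000             # number of oversampled 1-bit samples

# Without dither every sample quantizes identically: no extra precision.
undithered = quantize_1bit(signal)   # always +1.0

# With uniform dither in [-1, 1) the probability of a +1 output encodes
# the signal value, so averaging the bitstream recovers it.
dithered_bits = [quantize_1bit(signal + random.uniform(-1.0, 1.0))
                 for _ in range(n)]
recovered = sum(dithered_bits) / n   # approaches 0.3 as n grows

error = abs(recovered - signal)
```

The averaging here plays the role of the decimation filter in a real converter; noise shaping would additionally push the dither noise away from the band of interest so less oversampling is needed.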
But if it is possible for the “neuron” to use 1-bit logic with 15 - 24 bit precision, with dither all the way through, so that no conversion of the 1-bit dithered information back to 15 - 24 bits is needed, but the neuron is capable of using 1-bit dithered information directly in its logic functions (logic gates), and those 1-bit signals carry 15 - 24 bits of dithered and noise-shaped precision, then even simple circuits can be used as “neurons”. But I don't know whether it is possible to manufacture 1-bit logic, 1-bit logic gates or a 1-bit CPU that uses dithered information so that 1 bit carries 10 - 24 bits of precision without needing to convert the 1-bit dithered signal back to 10 - 24 bits for the logic gate functions. Or is the only possibility that 1-bit dithered information must always be converted back to “normal” 10 - 24 bits before logic gates can use it? If 1-bit logic gates can be made that use dithered information in its dithered and noise-shaped form, that opens new possibilities for circuit design as a whole. Human senses operate on a logarithmic system, and perhaps human brain information processing as a whole operates on a logarithmic scale; adding a logarithmic scale to the dithered 1-bit signal increases accuracy and range even more. There is the concept of a “negative beta encoder” for using a fractional number system. The neuraloutlet.com web page has “metallic number systems” (logarithmic) and the U-value number system. There are many other number systems to choose from for use with a 1-bit dithered signal. The 1-bit dithered system can use, for example, the “multiple base composite integer” (at the MROB.com web page), the Zero Displacement Ternary Number System (ZDTNS), the “magical skew number system”, and on the web page iteror.org, in “Noviloka maths on the number horizon - beyond the abacus”, the “nested binary biological number system B notation” that is inspired by a biological model. 
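There is in fact an established technique, stochastic computing, in which ordinary logic gates operate directly on dithered 1-bit streams without ever converting back to multi-bit form: a value in [0, 1] is encoded as the probability of a 1 in a random bitstream, and a single AND gate then multiplies two such values. A small Python sketch (illustrative only):

```python
import random

random.seed(1)

def to_stream(p, n):
    # Encode a probability p in [0, 1] as a random 1-bit stream:
    # each bit is 1 with probability p (dither in disguise).
    return [1 if random.random() < p else 0 for _ in range(n)]

n = 200_000
a = to_stream(0.5, n)
b = to_stream(0.6, n)

# A single AND gate multiplies the two encoded values, because for
# independent streams P(a_i AND b_i = 1) = 0.5 * 0.6 = 0.3.
product_stream = [x & y for x, y in zip(a, b)]
product = sum(product_stream) / n   # approaches 0.3

error = abs(product - 0.3)
```

This is the sense in which 1-bit logic gates can use dithered information in its dithered form; the cost is that precision grows only with stream length, which is the same oversampling trade-off as in the dither discussion above.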
There are dozens or hundreds of different high-accuracy, high-range number systems that can (perhaps) be used with dithered and noise-shaped 1-bit quantization. So perhaps a 1-bit dithered and noise-shaped digital signal can have very high accuracy despite the fact that it is just a 1-bit digital signal with dither. For simple quantization, ADPCM or other simple methods are perhaps best; ADPCM has been coupled with vector quantization (cubic, pyramid, or spherical vector quantization, sometimes with Fibonacci coding). There are other DPCM schemes, such as RIQ / RIVQ A/DPCM (Recursively Indexed /Vector/ Quantization), which is somehow related to “octree” quantization, and Noise Feedback Coding DPCM (NFC-DPCM) and Adaptive Quantizer DPCM (DPCM-AQB) in the text “Subjective evaluation of four low-complexity audio coding schemes” (Joseph, Maher). Backward-adaptive DPCM also exists. The text “Quantize and conquer: a dimensionality-recursive solution to clustering, vector quantization, and image retrieval” (Avrithis) is one solution; “Distortion-limited vector quantization” (Hahn) and “An improved interpolative vector quantization scheme using non-recursive adaptive decimation” (Tsang 1995) are other RIQ-related methods, if an RIQ quantizer is going to be used with a 1-bit dithered signal. There are many ways to use a 1-bit dithered signal with quantization. Not only integer number systems: if an indexing modulation scheme is used, RIQ or other, then an “index calculus number system” or another suitable number system can be used. Authors like Dimitrov, Joux, Padmavathy, Howell, Muscadere and others have written about different index calculus methods. There is a book, “Residue number systems: algorithms and architectures” (Mohan 2002). Different indexing methods: “The use of index calculus and Mersenne primes for the design of a high-speed digital multiplier” (Fraenkel 1964), “Index number systems” (Ralph W. Ffouts 1972), “Arithmetic codes in residue number systems with magnitude index” (1978/2006), “A multilateral index number system based on the factorial approach” (1986), “Inner product computational architecture using the double-base number system” (Eskritt), “Power optimisation of FIR filter using an advanced number representation” (Reddy, Rahaul, Nithin, Valarmathi 2016). Other ways to use indexed number systems are level-index number systems; others that are not strictly index-based are tree-based number representations (on the web page stackexchange.com/questions: “Is there a tree-based numeral system? (closed)”, 30.6.2016), “hereditary number systems” like hereditary binary (Paul Tarau), giant numbers (Paul Tarau again), the base infinity number system (Eric James Parfitt), factoradic number systems, zero-based numbering etc. A newer, improved quantization scheme instead of VQ is additive quantization, as in “Revisiting additive quantization” (Martinez) and “Solving multi-codebook quantization in the GPU”. Other texts are “Additive quantization for extreme vector compression” etc. The stacked quantizer was an earlier RVQ-based method (Martinez). If those additive quantizer methods are based on a recursively indexed quantizer, perhaps recursive number systems or index calculus number systems can then be used. A 1-bit quantizer can be used in ZISC (Zero Instruction Set Computer) solutions; ZISC is a neuromorphic-style computing design. There is an article “A new approach to the classification of positional number systems” (Borisenko, Kalashnikov 2014). A 1-bit signal can use the “Direct digital synthesis using delta-sigma modulated signals” (Orino) technique, or the similar “oversampled 1-bit audio without delta-sigma modulation” by Takis Zourntos, which is more stable. A differential 1-bit system can use any other method, like a tracking ADC or dual-slope ADC, monobit or SAR ADC etc., together with dithering and noise shaping. 
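Since DPCM/ADPCM comes up repeatedly here, the core loop is worth showing concretely. This is a bare-bones DPCM sketch in Python (the step size and names are my own choices, not from any cited paper): the encoder quantizes each sample's difference against its own reconstruction, so encoder and decoder stay in sync.

```python
def dpcm_encode(samples, step=1):
    # Minimal DPCM: quantize the difference to the previous
    # *reconstructed* sample to a multiple of `step`.
    codes, pred = [], 0
    for s in samples:
        diff = s - pred
        q = round(diff / step)
        codes.append(q)
        pred += q * step       # encoder tracks the decoder's state
    return codes

def dpcm_decode(codes, step=1):
    out, pred = [], 0
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

samples = [0, 2, 5, 9, 10, 9, 7]
codes = dpcm_encode(samples)     # small differences, cheap to transmit
decoded = dpcm_decode(codes)
```

Adaptive variants (ADPCM) change `step` on the fly from the recent code history; the recursively indexed quantizer mentioned above instead emits multiple codes when a difference overflows a small fixed code range.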
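The residue number system of the Mohan book can also be shown in a few lines. This sketch (moduli 3, 5, 7 chosen only for illustration) demonstrates the carry-free addition that makes RNS attractive for fast arithmetic, with decoding via the Chinese Remainder Theorem:

```python
from math import prod

# A small residue number system (RNS) with pairwise coprime moduli.
MODULI = (3, 5, 7)          # dynamic range = 3 * 5 * 7 = 105 values

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    # Addition is carry-free: each residue channel is independent,
    # so the channels could run in parallel in hardware.
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # Decode with the Chinese Remainder Theorem.
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # modular inverse (Python 3.8+)
    return x % M

a, b = 17, 30
s = from_rns(rns_add(to_rns(a), to_rns(b)))   # 47
```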
“Hardware realization of novel pulsed neuron networks based on delta-sigma modulation with…” (2002) is one 1-bit scheme. Other differential-based modulation methods are “differential space time frequency” (DSTF), which in some versions requires a residue number system, “Simple data compression by differential analysis using bit reduction and number system theory” (2011), and “Liao style numbers of differential systems”. Also “Tournament coding of integers” (Teuhola), the “1-bit level systolic array” that uses the “Winograd Fourier transform”, “On the FFT of 1-bit dither-quantized deterministic signals” (Cheded), and “A novel fast FFT scheme” (Cheded). “Video compression with colour quantization and dithering” (Raja) applies a logarithmic scale and dithering together as video compression; the same can perhaps be applied to other data compression also. “Synthesis and analysis via Walsh-Hadamard transformation” (Várkonyi-Kóczy) is signal analysis, as are the “multi-resolution short-time Fourier transform”, the “shape-adaptive transform” (SA-DCT, on the cs.tut.fi web page), and “anamorphic stretch transform” based solutions. There is the “vector-radix FFT”, which perhaps can be improved using “additive quantization” (AQ, Martinez), and “Faster than FFT: The chirp-z-RAG-N”. If the best integer number system would be based on the decimal value 840, according to one web page, because 840 can be divided by 3, 6, or 12, which are all economical number bases, then a 1-bit signal that is dithered to 720 or 840 decimal values of accuracy, or about 9.5 bits, would be the best. When this dithered 1-bit signal is converted to binary it needs 10 bits, because 720 or 840 is between 512 and 1024 decimal values. But if 720 or 840 in decimal really is the best number base for integers, perhaps a 1-bit signal with enough dither can use this base number 720 or 840 instead of just the number 1 as its integer base. Divided by 3, 720 becomes 240, another integer base, which needs about 8 bits of dithered precision. 
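The divisibility claim about 840 is easy to check: 840 and 720 are highly composite numbers with far more divisors than a “round” decimal number of similar size, and a base-840 value indeed needs 10 bits in plain binary. A quick Python check:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

count840 = len(divisors(840))    # 840 = 2^3 * 3 * 5 * 7
count720 = len(divisors(720))    # 720 = 2^4 * 3^2 * 5
count1000 = len(divisors(1000))  # a "round" decimal number, for comparison

# 840 is divisible by the economical bases named in the text.
divisible_by_3_6_12 = all(840 % k == 0 for k in (3, 6, 12))

# 512 < 840 < 1024, so a base-840 digit needs 10 bits in binary.
bits_needed = (840).bit_length()
```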
Other integer bases to use with 1-bit dither are perhaps “bounded integer sequence encoding” (BISE), the “multiple base composite integer” (at the MROB.com web page) and Quote (mathematical) notation. The Bryce 3D graphics program uses “single precision reals” (real numbers, a fractional format?) to increase a 16-bit integer to -38 to +38 bits of accuracy (76 bits together) and then adds dither for 48 bits of accuracy in the positive number range, so it is 86(?) bits together, if I understand it right. On the neuraloutlet.com web page are the “U-value” and “metallic number systems”. If dithering is used with 1 bit, perhaps maximum dithering can be used, so that the actual 1-bit information is barely recoverable amidst the dithering noise; but then the dynamic range and information density of the 1-bit signal are at their maximum. The dithering noise can be filtered out using different filter techniques; the heavily dithered 1-bit signal can now be boosted to beyond 10-bit information density, and after the dithering noise has done its duty it can be to some extent filtered out of the final result. TwinVQ is a compression method for audio; Additive Quantization (AQ, by Martinez) is better, so perhaps an additive quantizer “TwinAQ” based compression, something like in the “extreme vector compression” texts etc., perhaps together with a dual-slope ADC or other, can be used. Quantizing signals with 1-bit delta-sigma modulation is sometimes used, as in “Single-bit, pseudo parallel processing delta-sigma modulator” (Hatami 2010/2014) and “A novel speculative pseudo-parallel delta-sigma modulation” (Johansson, Svensson 2014). Takis Zourntos has “oversampling without delta-sigma modulation” based on nonlinear control. Adding dither and noise shaping to 1 bit increases its accuracy, so perhaps a non-oversampling pseudo-parallel delta-sigma is possible, or Zourntos' model, which is more stable than delta-sigma. Perhaps adding “extreme vector compression” (additive quantization) also. 
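For reference, the first-order delta-sigma loop mentioned throughout this text fits in a few lines. A minimal Python sketch (idealized, no circuit non-idealities): the 1-bit output's running average tracks the input, because the integrator accumulates the error between the input and the fed-back output.

```python
def delta_sigma_1bit(samples):
    # First-order delta-sigma modulator: integrate the input,
    # quantize to 1 bit, feed the quantized output back.
    integ, out = 0.0, []
    for x in samples:
        integ += x                         # integrator
        y = 1.0 if integ >= 0 else -1.0    # 1-bit quantizer
        out.append(y)
        integ -= y                         # feedback subtracts the output
    return out

n = 10_000
bits = delta_sigma_1bit([0.3] * n)   # constant input 0.3
mean = sum(bits) / n                 # approaches the input value
error = abs(mean - 0.3)
```

The quantization error stays bounded in the integrator, which is why the average converges much faster than with plain dither; in frequency terms the error has been shaped toward high frequencies and averaged away.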
There is “Pulse group modulation”, written by someone called Ries in 2001, the patent “Modulation by multiple pulse per group keying and method of using the same” (US pat. 20030142691A1, Hartmann 2003), and “Processing mixed numeric and symbolic data encodings using scaling” (US pat. 7716148B2, Meng 2010). A 1-bit signal can include lots of information using these methods. Also “Equivalent complex baseband model for digital transmitter based on 1-bit quadrature pulse encoding” (2014), “Time-domain dither, dispersive codes, and controlled noise shaping in SDM” (Hawksford), “Rational dither modulation in audio signals” (Hernandez 2007), “Single-bit oversampled A/D conversion with exponential accuracy in the bit-rate” (Cvetkovic, Daubechies 2000). “Feynman machine: the universal dynamical systems computer” (2016) is a paradigm for neuromorphic computing. If, for example, instead of the integer base number 1, ZDTNS (zero displacement ternary number system) is used as the base, together with the neuraloutlet.com metallic numbers, which in one version have the logarithmic value 3.33…: now the number 10 in decimal is three metallic numbers (when divided into steps of 3.333…; this is not exactly the same as the metallic number but very close). Now instead of integer base 1 the base is integer base 10, but that number has three metallic numbers (10, 6.666…, and 3.333…) inside it and uses ZDTNS (with ternary values of 3.333…, 6.666… and 10 instead of 1, 2, and 3). Or, if the base is logarithmic, ternary 3.333… leads to 3.333, then 10, then 33.33 etc., and if two of these 33.33-value trits are used the result is roughly 1000 in the decimal system (strictly 33.33 × 33.33 ≈ 1111; the product is exactly 1000 only if the values are taken as powers 10^1.5). So the number 10 in the decimal system could be used as a combined ZDTNS & metallic number system base. But I don't know if this idea is in any way worthwhile, or whether ZDTNS and metallic numbers work together at all. Or use the neuraloutlet.com U-value number base or any other economical number base, like balanced ternary tau (BTTS) or the negative beta encoder. 
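Of the ternary systems mentioned, plain balanced ternary is the easiest to make concrete. A small Python sketch (helper names my own) converting integers to digits -1/0/+1 and back; note that negation is free (flip every digit) and no separate sign is needed:

```python
def to_balanced_ternary(n):
    # Digits are -1, 0, +1 (often written -, 0, +), most significant first.
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # represent 2 as 3 - 1: digit -1, carry 1
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits[::-1]

def from_balanced_ternary(digits):
    v = 0
    for d in digits:
        v = v * 3 + d
    return v

bt10 = to_balanced_ternary(10)        # 10 = 9 + 0 + 1
roundtrip_ok = all(from_balanced_ternary(to_balanced_ternary(k)) == k
                   for k in range(-40, 41))
```

Zero displacement ternary and ternary tau are more elaborate relatives, but the digit set {-1, 0, +1} shown here is the common starting point.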
Also, “High Frequency Replication” (HFR) is used in audio coding. Could similar methods be applied to neuromorphic “spiking” networks also? About 50% of the information space is saved using HFR in audio, but the audio content is almost the same as without HFR. Audio can use HFR from 4 kHz upwards: the 4 - 8 kHz frequencies are reconstructed using HFR, only the range up to 4 kHz carries the “real” signal, and the 8 - 15 kHz range can then simply use an aural exciter that creates harmonics based on the 1 - 8 kHz frequencies. Now a 15 kHz sound is made using only a 4 kHz base spectrum. I don't know if that audio case is even close to a neuromorphic digital spiking network signal, but that is one idea. Also, because dither is used so extensively that it almost buries the 1-bit signal, a Hadamard code or something similar could be used, the Walsh-Hadamard transform for encoding, or Reed-Solomon or another code, so that the signal can be recovered amidst very heavy dither noise, and the dither noise is then filtered out, if not completely then at least to levels at which the signal is useful. The dynamic range and information density of the 1-bit signal are then at their maximum, but so is the dither noise. Also, using sparse vector / radix / matrix / index etc. compressive sensing saves information space. The human brain works using logarithmic scales and dither, as studies suggest, so an artificial network that uses these is close to the biological brain. Also, non-uniform sample rates can be used; even delta-sigma or another non-integer mode can be used together with a logarithmic scale in addition to non-uniform sample rates. “Analog to digital conversion using nonuniform sample rates” (US pat. 5963160A), “Adaptive concurrent scan order” (US pat. 20060146936, Microsoft), “Digital filterbank for LOWRAN by monobit receiver”. 
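The Walsh-Hadamard transform suggested above for encoding is genuinely cheap: it needs only additions and subtractions, which suits simple digital neurons. A compact Python sketch of the fast transform (length must be a power of two; the inverse is the same transform scaled by 1/N):

```python
def walsh_hadamard(x):
    # Fast Walsh-Hadamard transform, butterfly style; len(x) must be 2^k.
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # only add/subtract
        h *= 2
    return x

bits = [1, 0, 1, 0, 0, 1, 1, 0]
spectrum = walsh_hadamard(bits)
# The transform is its own inverse up to a factor of N:
recovered = [v / 8 for v in walsh_hadamard(spectrum)]
```

Spreading a signal over Walsh functions like this is also the basis of CDMA-style spreading, which is why it helps recover a signal buried in heavy noise.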
Time-interleaved ADCs (in texts by Elettra Venosa) are very efficient also, as are the “multi sampling monobit receiver”, “Universal rate-efficient scalar quantization”, “Information rate for faster-than-Nyquist signaling with 1-bit quantization and oversampling at the receiver”, “Sparse sampling of signal innovations”, “Beam shaping using a new digital noise generator”, “Optimal noise shaping using least squares theory”, “Real-time multiband dynamic compression and noise shaping”, “Digital noise shaper circuit” (US pat. 55988A, Alfred Linz 1994), “Direct digital synthesis using delta-sigma modulated signals” (Orino), “A new Poisson noise filter based on weight optimization” (Jin 1998). These systems can use “differential calculus” or other methods, and for example signal generation by the BKM algorithm and its many new versions etc.: “An improved least mean kurtosis (LMK) algorithm for sparse system identification” (Yoo, Park), “Improved filtered-X least mean kurtosis algorithm for active noise control” (Zhao), “Stochastic analysis of the least mean kurtosis algorithm for Gaussian inputs” (Bershad, Bermudez), “A family of adaptive filter algorithms in noise cancellation for speech enhancement”, “Kernel least mean kurtosis based online chaotic time series prediction”, “Adaptive filtering algorithms for noise cancellation” (Falcao), “Active noise cancellation project” (Liu 2008), “Digital signal processing algorithm for noise reduction, dynamic range compression, and feedback cancellation in hearing aids” (Ngo). Sometimes quadrature mirror filters are coupled with ADPCM, and if QMF, dither and differential PCM methods are combined, the result is perhaps a 1-bit signal that has 24-bit information density, even without oversampling. QMF leads to lossy and lossless data compression formats, but perhaps data compression is too much for simple digital neurons or neuristors; simple QMF methods like the ITU telephone standard ADPCM, which is a simple “split-subband ADPCM”, or similar will perhaps do. 
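The QMF idea can be sketched with the simplest possible filter pair (Haar half-band filters; real QMF banks such as the split-subband ADPCM standard use longer filters, so this is only a toy): the signal splits into decimated low and high bands and reconstructs perfectly, after which each band could get its own DPCM bit budget.

```python
def qmf_split(x):
    # Simplest quadrature mirror pair (Haar): half-band low and high
    # bands, each decimated by 2; len(x) must be even.
    low  = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low, high

def qmf_merge(low, high):
    out = []
    for l, h in zip(low, high):
        out += [l + h, l - h]   # perfect reconstruction for Haar
    return out

x = [3, 1, 4, 1, 5, 9, 2, 6]
low, high = qmf_split(x)
rebuilt = qmf_merge(low, high)
```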
On the bitsnbites.eu web page there is DDPCM (Dynamic DPCM), which is simple. There are also “Xtreme Quality IMA-ADPCM”, which is similar to WavPack compression, warped linear prediction models, and continuously variable slope delta modulation (CVSD) models; perhaps dither and QMF filters work with them too, in the 1-bit signal path, if they are not too complicated (CVSD is by nature a 1-bit method). Actual data compression algorithms like FSE (Finite State Entropy) and its asymmetric number system are effective but perhaps computationally heavy solutions for every digital neuron or neuristor. Also, googling “differential ternary” brings many results, mainly texts by Nadezda Bazunova, like “Differential calculus D3=0 on binary and ternary associative algebras” (2011) and earlier texts. Also “Ternary differential modes” (Pilitowska), “TLDS (ternary lines differential signaling)”, “Differential cascode voltage switch (DCVS)” (2012), “Design pipelined CMOS simple ternary differential logic” (Wu 1993), “Improved error performance in SOQPSK modulation using a ternary symbol encoder” (2009/2017), “Almost-differential quasi-ternary (ADQ) code” (1976), “Ternary R2R DAC design for improved energy efficiency” (Guerber 2013), “CMOS ternary dynamic differential logic” (Herrfield 1994). There are also ways to encode ternary 3-value content into 2-bit form, or even into one bit (2 values), where the ternary information is approximated with some accuracy in 2-value binary. How that works with differential ternary I don't know. If radix 3, or value 3, is best because it is close to e, the base of the natural logarithm (the theoretically most economical radix), and can be used in the ZDTNS number system or otherwise, perhaps 1-bit signaling can use a 3-value number system also. A logarithmic base is preferred for high dynamic range information, but a logarithmic base is not an integer base, and that is a difficulty for simple math. 
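Since CVSD is by nature a 1-bit method, it is worth sketching. This is a simplified Python model (the run-of-three-equal-bits rule and the step factors are my own simplification of typical CVSD syllabic companding, not any standard's exact parameters): encoder and decoder run identical adaptation, so only the bitstream is transmitted.

```python
def _adapt(step, history, min_step, max_step):
    # Syllabic adaptation: a run of equal bits means the estimate is
    # lagging the signal, so grow the step; otherwise shrink it.
    if len(history) == 3 and len(set(history)) == 1:
        return min(step * 1.5, max_step)
    return max(step / 1.5, min_step)

def cvsd_encode(samples, min_step=1.0, max_step=16.0):
    bits, est, step, history = [], 0.0, min_step, []
    for s in samples:
        bit = 1 if s >= est else 0
        bits.append(bit)
        history = (history + [bit])[-3:]
        step = _adapt(step, history, min_step, max_step)
        est += step if bit else -step    # encoder mirrors the decoder
    return bits

def cvsd_decode(bits, min_step=1.0, max_step=16.0):
    out, est, step, history = [], 0.0, min_step, []
    for bit in bits:
        history = (history + [bit])[-3:]
        step = _adapt(step, history, min_step, max_step)
        est += step if bit else -step
        out.append(est)
    return out

ramp = [float(i) for i in range(32)]
decoded = cvsd_decode(cvsd_encode(ramp))   # tracks the ramp with 1 bit/sample
```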
There is the Renard system of numbers, or its modern version, the E-series number system; it is not a true logarithmic number system but a rounded, roughly geometric series of preferred numbers, and it works when information has a high dynamic range and a small number of numerical values (like, for example, a 1-bit dithered / QMF-filtered signal). Perhaps E-series numbers or a similar integer system, rather than a logarithmic non-integer one, can be used for simplicity in a digital neuristor. For modulation there are “New advances in pulse width modulation techniques and multilevel inverters” (Peddapelli 2014), “Variable frequency pulse width modulation” (Stork, Hammerbauer 2015), “Nonuniform sampling delta modulation - practical design studies” (Golanski), “Variable modulus delta-sigma in fractional-N frequency synthesis” (Borkowski 2007), “spread spectrum modulation”, “Space vector based dithered sigma delta modulation” (Biji Jacob), “Delta-modulation coding redesign for feedback-controlled systems” (de-Wit 2009), “Space-time vector delta-sigma modulation” (Scholnik 2002), “Adaptive sigma-delta modulation with one-bit quantization” (Zierhofer), “A 2-bit adaptive delta modulation system with improved performance” (Prosalentis 2006), “Stability of adaptive delta modulation with a forgetting factor and constant inputs”, “A new adaptive delta modulation system” (Kyaw 1973/2016), “Method and apparatus for variable sigma-delta modulation” (Herbert Ko 2006), and “A modified adaptive nonuniform sampling delta modulation” (ANSDM). The plan would be combining vector compression (space vector, space-time vector, additive quantization “extreme vector” etc.), predictive methods (a warped linear predictor etc. applied to 1 bit), radix-3 numbers that are close to the natural logarithm base (the order-3 Fibonacci code by Sayood) or E-series numbers, recursively indexed quantization with delta modulation (RIQ-ADPCM), multiple description delta methods etc., and trying to find something that is simple enough (like dither, noise shaping and a quadrature mirror filter) to be used in simple digital neuristors or similar circuits. 
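The E-series can be made concrete with the standard E12 values (12 preferred numbers per decade, as used for resistor values). A Python sketch (helper names my own) that snaps a positive value to the nearest preferred number: few distinct values, but a logarithmically spread dynamic range, which is exactly the trade-off suggested above.

```python
import math

# The twelve E12 preferred mantissas per decade (IEC 60063).
E12 = [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2]

def to_e12(x):
    # Snap a positive value to the nearest E12 preferred number.
    exp = math.floor(math.log10(x))
    mantissa = x / 10 ** exp
    best = min(E12 + [10.0], key=lambda v: abs(v - mantissa))
    if best == 10.0:                 # rounded up into the next decade
        return 10 ** (exp + 1)
    return best * 10 ** exp

q470 = to_e12(500)    # nearest preferred value is 4.7 * 100
q95 = to_e12(95)      # rounds up to 100
```

Note that nearest-in-mantissa is a simplification; real component series define bands per value, but the coarse-logarithmic character is the same.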
Delta-sigma compression uses noise shaping, and part of the frequency range is filtered out because it is too noisy for usual audio or video information. But this noisy section can still contain information: for example, MP3 players use block coding, and some sort of “header” is used together with compression that carries information about how the content is compressed. Now this filtered-out section of the signal, which is too noisy for regular audio or video, can be used as an “information store” that contains the same kind of information as block coding headers etc., in addition to the pure audio that the compression methods use. Similarly, 16-bit CD sound uses PCM audio that filters 2 kHz away from the 22 kHz audio band because of quantization noise. This 2 kHz section (or, if the cutoff filter is set at 18 kHz, since nobody hears the highest frequencies anyway, then 4 kHz is available for additional information) can be used as an “information store” also. The signal is noisy, but some information can be stored there, for example Reed-Solomon coded if CD sound is in question. This information is for the next sample, not the sample that is playing, because the decoder needs to decode the information before the extra information can be used; so there is a small processing delay in this system, which makes live recording perhaps a bit difficult, but for prerecorded material this will work. Now this extra information can be in the signal itself, and no extra “header” information blocks are needed, so bitrate is saved. There is “Quantization and greed are good: one bit phase retrieval, robustness and greedy refinements” (Mroueh 2013; extremely quantized one-bit phase-less measurements). Using tri-level (ternary) delta-sigma coding leads to significant savings in bitrate according to “Improved signal-to-noise ratio using tri-level delta-sigma modulator” (2009) and “Bitstream adders and multipliers for tri-level sigma-delta modulator”. There is pseudoternary coding (Matti Pietikäinen and Jaakko T. Astola), “BIN@ERN: binary-ternary compression data coding”, “Binary to binary encoded ternary (BET)”, “Self-determining binary representation of ternary list”, “Arithmetic with binary encoded balanced ternary numbers” (2013), “Arithmetic algorithms of ternary number systems” (S. Das 2013), “A novel approach to ternary multiplication” (Vidya 2012), “Balances and abelian complexity of a certain class of ternary words” (Turek), “Abelian complexity in minimal subshifts” (Saari 2011). Other texts: “Delta-sigma algorithmic analog-to-digital conversion” (Mulliken), “A structure of dithered nested digital delta sigma modulator” (Nouri 2016), “A novel multi-bit parallel delta/sigma FM-to-digital converter with 24 bit resolution” (Wisland), “Cascade Hadamard based parallel sigma-delta modulator” (Alonso), “A parallel delta-sigma ADC based on compressive sensing” (Xiong), “Randomized iterative reconstruction algorithms for delta-sigma A/D converters” (Marijan 2011), “Delta-sigma modulator based A/D conversion without oversampling” (1996), “Design and analysis of multi-stage quadrature sigma-delta A/D converter” (Marttila 2011), “VLSI delta-sigma cellular neural network for analog random vector generation” (1999). Vector compression and delta modulation together (in “space-time vector” or another form), using additive quantization (“extreme vector compression”), perhaps works. There is also block floating point (BFP), which sits between fixed and floating point representation: “On finite wordlength properties of block-floating-point arithmetic” (Mitra 2008); also “Integral noise shaping for quantization of pulse-width modulation” (Midya 2000), “noise coupled delta sigma modulation”, “dynamic element matching (DEM)”, “dynamic weight averaging (DWA)”, “A higher-order mismatch-shaping method for multi-bit sigma-delta modulators” (A. Lavzin 2002), “Improved stability and performance from sigma-delta modulators using 1-bit vector quantization” (Risbo 1993), “A novel speculative pseudo-parallel delta sigma modulator” (Johansson 2014), “Hardware realization of novel pulsed neural networks based on delta-sigma with GHA learning rule”. If a ternary format is used, and if tri-level delta number representation is better than just binary: on the web page “Fascinating triangular numbers” at shyamsundergupta.com there is ternary / tri-level arithmetic. Balanced ternary, ternary tau (BTTS), zero displacement ternary etc. number systems can be used, such as a “ternary tau storage system”. Even stranger numerology is on the web page “Constable Research B.V. about the number nine” (hans.wyrdweb.eu). At neuraloutlet.com there is the U-value number system, which is good for representing fractals; at MROB.com there are the multiple base composite integer and the Munafo PT system (a 17-value number system) that promises almost infinite accuracy; there is the magical skew number system etc., “Addition and multiplication in generalized tribonacci base” (2007), “A new uncertainty-bearing floating point arithmetic” (Wang 2012), “A survey of quaternary codes and their binary images” (Derya Özkaya 2009), including the “Z4 cyclic code” that transforms quaternary information to binary. Paul Tarau has several number systems, “tree-based”, “giant numbers” etc. Block floating point (BFP) uses a shared block scale and sits between fixed and floating point representation. The unum concept adds an interval arithmetic “header” to an FP number (as far as I have understood it), so BFP and unums should work well together. Also, floating point and delta (sigma) modulation are coupled in some designs. So perhaps the unum concept (interval arithmetic) and delta / delta-sigma modulation, or some other delta modulation (like Takis Zourntos' nonlinear control model, or DPCM, etc.), can be coupled together, like floating point and delta modulation are. I don't know what that kind of “unum delta modulation” would look like. 
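Block floating point is simple enough to sketch directly. A toy Python version (8-bit mantissas, names my own): one shared scale for the whole block and an integer mantissa per sample, so the per-sample storage is fixed-point cheap while the block as a whole adapts its range like floating point.

```python
def bfp_encode(block, mantissa_bits=8):
    # Block floating point: one shared scale (exponent) for the whole
    # block, integer mantissas for each sample.
    peak = max(abs(v) for v in block)
    scale = (2 ** (mantissa_bits - 1) - 1) / peak if peak else 1.0
    mantissas = [round(v * scale) for v in block]
    return mantissas, scale

def bfp_decode(mantissas, scale):
    return [m / scale for m in mantissas]

block = [0.02, -0.5, 0.25, 0.125]
mant, scale = bfp_encode(block)
decoded = bfp_decode(mant, scale)
max_err = max(abs(a - b) for a, b in zip(block, decoded))
```

A hardware BFP would use a power-of-two scale so decoding is a shift; the rational scale here just keeps the sketch short.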
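As a concrete taste of the Fibonacci-family bases that keep appearing here (the generalized tribonacci base is the order-3 cousin), this is the Zeckendorf representation in Python: every positive integer is a unique sum of non-consecutive Fibonacci numbers, which is what Fibonacci coding builds on.

```python
def zeckendorf(n):
    # Greedy Zeckendorf decomposition: largest Fibonacci number first.
    # The greedy choice automatically yields non-consecutive terms.
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

parts100 = zeckendorf(100)   # 100 = 89 + 8 + 3
total = sum(parts100)
```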
But perhaps unum/ubox information can be stored in that additional storage space: delta-sigma noise shaping shifts noise to high frequencies, and that high frequency band is then filtered out. But that noisy high frequency band can still carry information, for example unum/ubox information for the delta-modulated signal, or something similar. That information is for the next sample, because the sample the decoder reads is already being decoded. Also, a PCM or floating point signal can have unum/ubox information in the quantization noise frequency band that is normally filtered out. If that information is not unum/ubox interval arithmetic, it can be something else, for example a fractional number format like Quote notation, the Q number format, BISE (bounded integer sequence encoding), Q-digest compression information etc. (if those formats need additional information, it can be stored in the high frequency storage space), or data compression/decoding information. I don't know if the unum or ubox concept (interval arithmetic) works in delta / differential encoding, like DPCM and ADPCM, or adaptive delta modulation / delta-sigma, but that is one idea. In concept it is 1-bit differential encoding, or multi-bit differential encoding. Other texts: “Sparse composite quantization” (2015), “Pairwise quantization” (2016), “Optimum quantization and its applications” (2004), the Optimal Quantization website, “Efficient quantization based on rate-distortion optimization” (2016), “Universal rate-efficient scalar quantization” (2010), “Perceptual signal coding for more efficient usage of bit codes” (2012), “An efficient law-of-cosines-based search for vector quantization” (2004), “Robust iterative quantization for efficient p-norm similarity search” (2016), “Implicit sparse code hashing” (2015). There is the “Xampling” framework by Mishali and Eldar, sub-Nyquist sampling. 
“Extreme compressive sampling for covariance estimation” (2015), “Compressive phase-only filtering at extreme compression rates” (2016), “New approach based on compressive sampling for sample-rate enhancement” (Bonavolonta 2014), “Robust 1-bit compressive sensing via sparse vectors” (Jacques 2013), “CoSaMP: iterative signal recovery from inaccurate samples” (2008), “Single-pixel camera with compressive sensing by non-uniform sampling” (2016). There is also the VCO-ADC, oscillator-based A/D conversion. This is a bit similar to sound synthesis with oscillators. Delta-sigma 1-bit modulators also sometimes use a ring oscillator, so a 1-bit VCO-ADC DSM exists. Is it possible to use methods similar to those of sound synthesis, like Polynomial Transition Regions (PTR), Vector Phaseshaping Synthesis (VPS), the Auto-Regressive Moving Average (ARMA) filter, the Bandlimited Ramp (BLAMP) function, PolyBLEP, Adaptive Phase Distortion Synthesis, Feedback Amplitude Modulation (FBAM) etc., in sound / speech signal reproduction and decoding (not just synthesized sound for music but any sound encoding and decoding) with a 1-bit signal, and, if 1-bit modulation is used, in video coding (pixels) also, not just audio? And PortOSC (by Petri Huhtala, TuKoKe 2014 Finland competition winner), “Sound synthesis using an allpass signal chain” (no modulation needed), “bitwise logical modulation”. 
Using “delta-sigma modulation, 1-bit quantization and oversampling at the receiver”, and direct digital synthesis with DSM: “All digital frequency synthesis based on new sigma-delta modulator architectures” (2015), “High order delta sigma noise shaping” (flintbox.com 2003/2011), “Look-up table delta-sigma conversion” (Robinson 2003/2007), “Sigma-delta modulation based digital filter design techniques on FPGA” (Memon 2012; single-bit filters), “Threshold direct synthesis structure for digital delta-sigma modulation” (Song 2008), “Fractal additive synthesis: a spectral modeling of sounds for low-rate coding of quality audio” (2003), “A digital incremental oscillator: generation of sinusoidal waves in DPCM” (1996), “Granular synthesis of sound by fractal organization” (Paul Rhys); “spherical logarithmic quantization” uses CORDIC and DPCM. “A new frequency source based on sigma delta modulation and CORDIC” (2012), “A direct digital synthesis with tunable delta sigma modulation” (Vainikka), “Division-free multiquantization scheme for modern video codecs” (Das 2012), “Frequency modulation and first-order delta sigma modulation: signal representation with unity weighted Dirac pulses” (Zierhofer 2008), “Fast hologram generation of three-dimensional objects using line-redundancy and look-up table methods” (Choe 2010), “Permutation enhanced parallel reconstruction for compressive sampling” (Fang 2015), “Permutation limits of segmented compressive sampling” (Fang), “Robust simultaneous sparse approximation” (Ollila 2015), “Nonparametric simultaneous sparse recovery” (Ollila 2015), “Generalized quadratically constrained quadratic programming for signal processing” (Khabbazibasmenj 2014), “Quadrature noise-shaped encoding” (Ruotsalainen 2014), “Quantization noise reduction techniques for digital pulsed RF signals”, “True discrete cepstrum: an accurate and smooth spectral envelope estimation for music processing”, “Non-negative matrix factorization” (Koivunen 11.2.2016), “Fixed-point algorithm development” (Haghparast), Bit Angle Modulation (BAM), and other nonstandard modulation methods.
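Since “spherical logarithmic quantization” and several of the synthesis schemes above rely on CORDIC, here is the classic rotation-mode CORDIC in Python (floating point for clarity; a hardware version would use fixed-point arithmetic where the `2**-i` factors are plain bit shifts): it computes sine and cosine using only shift-and-add style operations plus a small table of arctangents.

```python
import math

def cordic_sin_cos(angle, iterations=32):
    # CORDIC in rotation mode; angle in radians, |angle| < pi/2.
    # Pre-compute the constant gain of the rotation sequence.
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0        # rotate toward zero residual
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)      # table value in hardware
    return y, x                            # (sin, cos)

s, c = cordic_sin_cos(0.5)
```

This is why CORDIC pairs naturally with the simple-circuit goal of this text: a DDS or logarithmic quantizer built on it needs no multiplier at all.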