Because 1-bit digital signals have been studied in “compressive sensing” and other applications, perhaps 1-bit digital can be used in neuromorphic computing as well. The text “Maximum likelihood estimation of quantized data” (Gustafsson, Karlsson) studies “data dither”. Using dithering, an improvement of at most 8 - 10 bits is achieved, but the dither noise is now in the signal chain. Coupling dither with noise shaping techniques increases the “dynamic range” (precision) of a dithered 1-bit signal to about 15 - 16 bits at most (about 33,000 - 65,000 numerical values), but that requires complex noise shaping and dither procedures (usually used with delta-sigma modulation, although these methods are perhaps not restricted to delta-sigma but apply to other modulation and D/A conversion as well). If dither noise is allowed to become clearly present in the signal chain, perhaps 20, 22 or 24 bit precision can be achieved with these methods, using 1 bit as the base. But those are complex methods (studied with delta-sigma modulation, among other modulation methods), and if neuromorphic digital neurons or neuristors (memristor-based artificial neurons) should be simple, a complex noise shaping circuit in every digital neuron is very impractical. Simpler noise shaping methods will perhaps do. Complicated “neurons” like the European Union brain project with its ARM9 processor based “neurons” can perhaps use complex noise shaping in every “neuron” and a signal path of up to 24 bits while carrying only 1 bit in the signal chain with noise shaping and dither. Analogue circuits are being made at 65 nanometers, and even 32 or 28 nanometer analog circuits have been proposed. But what is the quality of analogue transmission in these small 28 nm circuits? Probably very noisy. But if analogue transmission is achieved with quality sufficient for neuromorphic chips, then perhaps digital neuromorphic chips are not needed. Digital chips can be made smaller than analog ones; however, if 28 nm is achieved with analog design, the difference between digital and analog is not very big.
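As a minimal sketch of the dither idea above (my own illustration, not taken from the cited Gustafsson/Karlsson paper): a 1-bit quantizer with uniform dither spanning its decision range produces a bitstream whose time-average recovers a sub-LSB input value, trading noise for resolution.

```python
import random

def one_bit_quantize(x):
    """1-bit quantizer (+/-1 output) with uniform dither over [-1, 1].

    Because the dither spans the quantizer's decision range, the
    probability of a +1 output is (1 + x) / 2, so the expected output
    equals x even though each sample carries only one bit."""
    d = random.uniform(-1.0, 1.0)
    return 1.0 if x + d >= 0.0 else -1.0

def dithered_average(x, n):
    """Average n dithered 1-bit samples to recover x with extra precision."""
    return sum(one_bit_quantize(x) for _ in range(n)) / n

random.seed(1)
estimate = dithered_average(0.3, 100_000)
print(round(estimate, 1))  # about 0.3: sub-bit information recovered
```

Each factor of four in the averaging length buys roughly one more bit of effective resolution, which is why the 15 - 24 bit figures in the text require aggressive oversampling and/or noise shaping.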
But if it is possible for the “neuron” to use 1-bit logic with 15 - 24 bit precision with dither all the way, so that the 1-bit dithered information never needs to be converted back to 15 - 24 bits, but the neuron is capable of using 1-bit dithered information directly in its logic functions (logic gates), and those 1-bit signals contain 15 - 24 bits of precision of dithered and noise shaped information, then even simple circuits can be used as “neurons”. But I don't know if it is possible to manufacture 1-bit logic, 1-bit logic gates or a 1-bit CPU that uses dithered information so that 1 bit contains 10 - 24 bits of precision, without the need to convert the 1-bit dithered signal back to 10 - 24 bits for the logic gate functions. Or is the only possibility that 1-bit dithered information must always be converted back to “normal” 10 - 24 bits so that logic gates can use it? If 1-bit logic gates can be made that use dithered information in its dithered and noise shaped form, that opens new possibilities for circuit design on the whole. Human senses operate on a logarithmic system, and perhaps human brain information processing as a whole operates on a logarithmic scale; adding a logarithmic scale to a dithered 1-bit signal increases accuracy and range even more. There is the concept of a “negative beta encoder” for using a fractional number system. And the neuraloutlet.com web page has “metallic number systems” (logarithmic) and the U-value number system. There are many other number systems to choose from for use with a 1-bit dithered signal. The 1-bit dithered system can use for example the “multiple base composite integer” (at the MROB.com web page), the Zero Displacement Ternary Number System (ZDTNS), the “magical skew number system”, and on the web page iteror.org, under “Noviloka maths on the number horizon - beyond the abacus”, the “nested binary biological number system B notation” that is inspired by a biological model.
There are dozens or hundreds of different high-accuracy, high-range number systems that can (perhaps) be used with dithered and noise shaped 1-bit quantization. So perhaps a 1-bit dithered and noise shaped digital signal can have very high accuracy despite the fact that it is just a 1-bit digital signal with dither. For simple quantization, ADPCM or other simple methods are perhaps best; ADPCM has been coupled with vector quantization (cubic, pyramid, or spherical vector quantization, sometimes with Fibonacci coding). There are other DPCM schemes, such as RIQ / RIVQ A/DPCM (Recursively Indexed /Vector/ Quantization). This is related somehow to “Octree” quantization. There are also Noise Feedback Coding DPCM (NFC-DPCM) and Adaptive Quantizer DPCM (DPCM-AQB) in the text “Subjective evaluation of four low-complexity audio coding schemes” (Joseph, Maher). And backwards-adaptive DPCM also exists. The text “Quantize and conquer: a dimensionality-recursive solution to clustering, vector quantization, and image retrieval” (Avrithis) is one solution; “Distortion-limited vector quantization” (Hahn) and “An improved interpolative vector quantization scheme using non-recursive adaptive decimation” (Tsang 1995) are other RIQ-related methods, if an RIQ quantizer is going to be used with a 1-bit dithered signal. There are many ways to use a 1-bit dithered signal with quantization. Not only with integer number systems: if an indexing modulation scheme is used, RIQ or other, then an “index calculus number system” or another suitable number system can be used. There are authors like Dimitrov, Joux, Padmavathy, Howell, Muscadere etc. on different index calculus methods. There is a book “Residue number systems: algorithms and architectures” (Mohan 2002). Different indexing methods: “The use of index calculus and Mersenne primes for the design of a high-speed digital multiplier” (Fraenkel 1964), “Index number systems” (Ralph W.
Pfouts 1972), “Arithmetic codes in residue number systems with magnitude index” 1978/2006, “A multilateral index number system based on the factorial approach” 1986, “Inner product computational architecture using the double-base number system” (Eskritt), “Power optimisation of FIR filter using an advanced number representation” (Reddy, Rahul, Nithin, Valarmathi 2016). Other ways to use indexed number systems are level-index number systems; beyond strictly index-based ones there are tree-based number representations (on the stackexchange.com/questions web page, “Is there a tree-based numeral system? (closed)” 30.6.2016), “hereditary number systems” like hereditary binary (Paul Tarau), giant numbers (Paul Tarau again), the base infinity number system (Eric James Parfitt), factoradic number systems, zero-based numbering etc. A newer, improved quantization scheme instead of VQ is Additive Quantization, as in “Revisiting additive quantization” (Martinez) and “Solving multi-codebook quantization in the GPU”. Other texts are “Additive quantization for extreme vector compression”, etc. The stacked quantizer was an earlier RVQ-based method (Martinez). If those additive quantizer methods are based on the recursively indexed quantizer, perhaps recursive number systems or index-calculus number systems can be used. A 1-bit quantizer can be used in ZISC (Zero Instruction Set Computer) solutions; ZISC is a neuromorphic-style computing design. There is an article “A new approach to the classification of positional number systems” (Borisenko, Kalashnikov 2014). A 1-bit signal can use the technique of “Direct digital synthesis using delta-sigma modulated signals” (Orino), or the similar but more stable “oversampled 1-bit audio without delta-sigma modulation” by Takis Zourntos. A differential 1-bit system can use any other method, like a tracking ADC or dual-slope ADC, a monobit or SAR ADC etc., together with dithering and noise shaping.
“Hardware realization of novel pulsed neuron networks based on delta-sigma modulation with…” 2002 is one 1-bit scheme. Other differential-based modulation methods are “differential space time frequency” (DSTF), which in some versions requires a residue number system, and “Simple data compression by differential analysis using bit reduction and number system theory” 2011, and “Liao style numbers of differential systems”. Also “Tournament coding of integers” (Teuhola), the “1-bit level systolic array” that uses the Winograd Fourier transform, “On the FFT of 1-bit dither-quantized deterministic signals” (Cheded), “A novel fast FFT scheme” (Cheded). “Video compression with colour quantization and dithering” (Raja) applies a logarithmic scale and dithering together as video compression; the same can perhaps be applied to other data compression also. “Synthesis and analysis via Walsh-Hadamard transformation” (Várkonyi-Kóczy) is signal analysis; also the “multi-resolution short-time Fourier transform”, the “shape-adaptive transform” (SA-DCT, on the cs.tut.fi web page), and “anamorphic stretch transform” based solutions. There is the “vector-radix FFT” that can perhaps be improved using Additive Quantization (AQ, Martinez), the “multi-resolution short-time Fourier transform”, and “Faster than FFT: The chirp-z-rag-N”. If the best integer number system were based on the decimal value 840, as one web page claims, because 840 can be divided by 3, 6, or 12, which are all economical number bases, then a 1-bit signal that is dithered to 720 or 840 decimal values of accuracy, about 9.5 - 9.7 bits, would be the best. When this dithered 1-bit is converted to binary it needs 10 bits, because 720 or 840 lies between 512 and 1024 decimal values. But if 720 or 840 in decimal really is the best number base for integers, perhaps a 1-bit signal with enough dither can use this base number 720 or 840 instead of just the number 1 as its integer base. Divided by 3, 720 becomes 240, another integer base, which needs about 8 bits of dithered precision.
Other integer bases to use with 1-bit dither are perhaps “bounded integer sequence encoding” (BISE), the “multiple base composite integer” (at the MROB.com web page) and Quote (mathematical) notation. The Bryce 3D graphics program uses “single precision reals” (real numbers, a fractional format?) to extend 16-bit integers to -38 to +38 bits of accuracy (76 bits together) and then adds dither to 48 bits of accuracy in the positive number range, so it is 86(?) bits together if I understand it right. On the neuraloutlet.com web page are the “U-value” and “metallic number systems”. If dithering is used on 1 bit, perhaps maximum dithering can be used, so that the actual 1-bit information is barely recoverable amidst the dithering noise, but the dynamic range and information density of the 1-bit signal are then at their maximum level. The dithering noise can be filtered out using different filter techniques; the heavily filtered 1-bit signal can now be boosted beyond 10 bits of information density, and after the dithering noise has done its duty it can be filtered, at least to some extent, out of the final information result. TwinVQ is a compression method for audio; Additive Quantization (AQ) by Martinez is better, so perhaps an additive-quantizer “TwinAQ” based compression, something similar to the “extreme vector compression” texts etc., perhaps together with a dual-slope ADC or other, can be used. Quantizing signals to 1-bit delta-sigma modulation is sometimes used, as in “Single-bit, pseudo parallel processing delta-sigma modulator” (Hatami 2010/2014) and “A novel speculative pseudo-parallel delta-sigma modulation” (Johansson, Svensson 2014). Takis Zourntos has “Oversampling without delta-sigma modulation” based on nonlinear control. Adding dither and noise shaping to 1 bit increases its accuracy, so perhaps a non-oversampling pseudo-parallel delta-sigma is possible, or Zourntos' model, which is more stable than delta-sigma. Perhaps “extreme vector compression” (additive quantization) can be added as well.
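To make the delta-sigma discussion above concrete, here is a textbook first-order loop (a generic sketch, not the Hatami/Johansson or Zourntos designs): the integrator accumulates the input, a 1-bit quantizer emits the sign, and the feedback subtracts the coarse output, so the bitstream density encodes the value.

```python
def delta_sigma_1bit(samples):
    """Textbook first-order delta-sigma modulator: integrate the input,
    quantize to one bit, feed the quantized value back."""
    state = 0.0
    bits = []
    for x in samples:                        # input assumed in [-1, 1]
        state += x                           # integrator
        y = 1.0 if state >= 0.0 else -1.0    # 1-bit quantizer
        state -= y                           # feedback subtracts coarse output
        bits.append(y)
    return bits

# A constant input at 64x "oversampling": the average of the 1-bit
# stream reproduces the multi-bit input value.
bits = delta_sigma_1bit([0.25] * 64)
print(sum(bits) / len(bits))  # 0.25
```

The quantization error stays bounded in the integrator, which is exactly the “noise shaping” property: the error is pushed away from the signal's low-frequency content rather than eliminated.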
There is “Pulse group modulation” written by someone called Ries in 2001, the patent “Modulation by multiple pulse per group keying and method of using the same” US pat. 20030142691A1 (Hartmann 2003), and “Processing mixed numeric and symbolic data encodings using scaling” US pat. 7716148B2 (Meng 2010). A 1-bit signal can include lots of information using these methods. And “Equivalent complex baseband model for digital transmitter based on 1-bit quadrature pulse encoding” 2014, “Time-domain dither, dispersive codes, and controlled noise shaping in SDM” (Hawksford), “Rational dither modulation in audio signals” (Hernandez 2007), “Single-bit oversampled A/D conversion with exponential accuracy in the bit-rate” (Cvetkovic, Daubechies 2000). “Feynman machine: the universal dynamical systems computer” 2016 is a paradigm for neuromorphic computing. What if, for example, instead of integer base number 1, ZDTNS (zero displacement ternary number system) is used as the base, together with the neuraloutlet.com metallic numbers, which in one version have the logarithmic value 3.333…? Now the number 10 in decimal is three metallic numbers (when divided into steps of 3.333… it is not exactly the same as a metallic number, but very close). Now instead of integer base 1 the base is integer base 10, but that number has three metallic numbers (10, 6.666…, and 3.333…) inside and uses ZDTNS (with ternary values of 3.333…, 6.666… and 10 instead of 1, 2, and 3). Or if the base is logarithmic, ternary 3.333… leads to 3.333, then 10, then 33.33 etc., and if two of these 33.33-value trits are used, the result is on the order of 1000 in the decimal system (33.33 × 33.33 ≈ 1111, so the arithmetic is only approximate). So the number 10 in the decimal system could be used as a combined ZDTNS & metallic number system base. But I don't know if this idea is in any way worthwhile, or whether ZDTNS and metallic numbers work together at all. Or use the neuraloutlet.com U-value number base or any other economical number base, like balanced ternary tau (BTTS) or the negative beta encoder.
Also “High Frequency Replication” (HFR) is used in audio coding. Can similar methods be applied to neuromorphic “spiking” networks also? About 50% of the information space is saved using HFR in audio, but the audio content is almost the same as without HFR. Audio can use HFR from 4 kHz upwards: frequencies of 4 - 8 kHz are reconstructed using HFR, only the range up to 4 kHz carries a “real” signal, and the 8 - 15 kHz range can then simply use an aural exciter that creates harmonics based on the 1 - 8 kHz frequencies. Now 15 kHz sound is made using only a 4 kHz base spectrum. I don't know if that audio case is even close to a neuromorphic digital spiking network signal, but it is one idea. Also, because dither is used so extensively that it almost buries the 1-bit signal, a Hadamard code or something similar can be used - the Walsh-Hadamard transform for encoding, or Reed-Solomon or another code - so that the signal can be recovered amidst very heavy dither noise, and the dither noise is then filtered out, if not completely then at least to levels at which the signal is useful. The dynamic range and information density of the 1-bit signal are at their maximum, but so is the dither noise. Also, using sparse vector / radix / matrix / index / etc. compressive sensing saves information space. The human brain works using logarithmic scales and dither - that has been shown - so an artificial network that uses these is close to the biological brain. Also nonuniform sample rates can be used: even delta-sigma or another non-integer mode can be combined with a logarithmic scale and, in addition, non-uniform sample rates. “Analog to digital conversion using nonuniform sample rates” US pat. 5963160A, “Adaptive concurrent scan order” US pat. 20060146936 (Microsoft), “Digital filterbank for LORAN by monobit receiver”.
Time-interleaved ADCs (in texts by Elettra Venosa) are very efficient also, and the “multi sampling monobit receiver”, “Universal rate-efficient scalar quantization”, “Information rate for faster-than-Nyquist signaling with 1-bit quantization and oversampling at the receiver”, “Sparse sampling of signal innovations”, “Beam shaping using a new digital noise generator”, “Optimal noise shaping using least squares theory”, “Realtime multiband dynamic compression and noise shaping”, “Digital noise shaper circuit” US pat. 55988A (Alfred Linz 1994), “Direct-digital synthesis using delta-sigma modulated signals” (Orino), “A new Poisson noise filter based on weight optimization” (Jin 1998). These systems can use “differential calculus” or other methods, and for example signal generation by the BKM algorithm and its many new versions etc. “An improved least mean kurtosis (LMK) algorithm for sparse system identification” (Yoo, Park), “Improved filtered-X least mean kurtosis algorithm for active noise control” (Zhao), “Stochastic analysis of the least mean kurtosis algorithm for Gaussian inputs” (Bershad, Bermudez), “A family of adaptive filter algorithms in noise cancellation for speech enhancement”, “Kernel least mean kurtosis based online chaotic time series prediction”, “Adaptive filtering algorithms for noise cancellation” (Falcao), “Active noise cancellation project” (Liu 2008), “Digital signal processing algorithm for noise reduction, dynamic range compression, and feedback cancellation in hearing aids” (Ngo). Sometimes quadrature mirror filters are coupled with ADPCM, and if QMF, dither and differential PCM methods are combined, the result is perhaps a 1-bit signal that has 24-bit information density, even without oversampling. QMF leads to lossy and lossless data compression formats, but perhaps data compression is too much for simple digital neurons or neuristors; simple QMF methods like the ITU telephone standard ADPCM, which is a simple “split-subband ADPCM”, or similar, will perhaps do.
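A minimal DPCM loop (a generic sketch, not any specific cited scheme) shows why the differential methods above suit a low-bit signal chain: only the prediction error is quantized, and the encoder tracks the decoder's own reconstruction so errors do not accumulate beyond the step size.

```python
def dpcm_encode(samples, step=0.05):
    """Quantize the error between each sample and the running prediction."""
    pred, codes = 0.0, []
    for x in samples:
        q = round((x - pred) / step)   # small integer code per sample
        codes.append(q)
        pred += q * step               # mirror the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.05):
    pred, out = 0.0, []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

ramp = [i / 100 for i in range(100)]          # slowly varying input
decoded = dpcm_decode(dpcm_encode(ramp))
print(max(abs(a - b) for a, b in zip(ramp, decoded)) <= 0.03)  # True
```

Because the encoder predicts from its own quantized history, the reconstruction error is bounded by half a step as long as the signal does not change faster than the quantizer can follow (slope overload); adaptive variants like ADPCM vary `step` to handle that.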
On the bitsnbites.eu web page is DDPCM (Dynamic DPCM), which is simple. There are also “Xtreme quality IMA-ADPCM”, which is similar to WavPack compression, warped linear prediction models, and continuously variable slope delta modulation (CVSD) models; perhaps dither and QMF filters work with them too in a 1-bit signal path, if they are not too complicated (CVSD is by nature a 1-bit method). Actual data compression algorithms like FSE (Finite State Entropy) and its asymmetric number system are effective but perhaps computationally heavy solutions for every digital neuron or neuristor. Also, googling “differential ternary” brings many results, mainly texts by Nadezda Bazunova, like “Differential calculus D3=0 on binary and ternary associative algebras” 2011 and earlier texts. Also “Ternary differential modes” (Pilitowska), “TLDS (Ternary lines differential signaling)”, “Differential cascode voltage switch (DCVS)” (2012), “Design pipelined CMOS simple ternary differential logic” (Wu 1993), “Improved error performance in SOQPSK modulation using a ternary symbol encoder” (2009/2017), “Almost-differential quasi-ternary (ADQ) code” 1976, “Ternary R2R DAC design for improved energy efficiency” (Guerber 2013), “CMOS ternary dynamic differential logic” (Herrfeld 1994). There are also ways to encode 3-value ternary content into 2-bit binary form, or even to approximate ternary information to some accuracy with one bit (2 values). How that works with differential ternary I don't know. If radix 3 or value 3 is best because it is close to the base of the natural logarithm, and can be used in the ZDTNS number system or otherwise, perhaps 1-bit signaling can use a 3-value number system also. And a logarithmic base is preferred for high dynamic range information, but a logarithmic base is not an integer base, and that is a difficulty for simple math.
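Since CVSD is named above as an inherently 1-bit method, here is a hedged toy version (the step limits and growth factor are arbitrary illustration values, not from any cited design): the step size grows after runs of identical bits (slope overload) and shrinks otherwise (granular noise), and the decoder reproduces the same adaptation from the bits alone.

```python
def _adapt(step, window, min_step=0.01, max_step=0.2, growth=1.5):
    """Syllabic adaptation: three identical bits in a row signal slope
    overload, so grow the step; otherwise shrink toward min_step."""
    if len(window) == 3 and window in ([0, 0, 0], [1, 1, 1]):
        return min(step * growth, max_step)
    return max(step / growth, min_step)

def cvsd_encode(samples):
    ref, step, bits = 0.0, 0.01, []
    for x in samples:
        bit = 1 if x >= ref else 0          # one comparison = one output bit
        bits.append(bit)
        step = _adapt(step, bits[-3:])
        ref += step if bit else -step       # integrate the adapted step
    return bits

def cvsd_decode(bits):
    ref, step, out = 0.0, 0.01, []
    for i, bit in enumerate(bits):
        step = _adapt(step, bits[max(0, i - 2):i + 1])
        ref += step if bit else -step
        out.append(ref)
    return out

ramp = [i / 200 for i in range(200)]        # slope below min_step: trackable
rec = cvsd_decode(cvsd_encode(ramp))
err = sum(abs(a - b) for a, b in zip(ramp, rec)) / len(ramp)
print(err < 0.05)  # True: the 1-bit stream tracks the waveform
```

No step sizes are transmitted; both sides derive them from the bit history, which is what keeps the channel at exactly one bit per sample.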
There is the Renard system of numbers, or its modern version the E-series number system; it is a geometric (stepwise logarithmic) series rounded to convenient values, and it works when information has a high dynamic range and a small number of numerical values (like, for example, a 1-bit dithered / QMF filtered signal). Perhaps E-series numbers or a similar rounded-integer system, rather than a truly logarithmic non-integer one, can be used for simplicity in a digital neuristor. For modulation there are “New advances in pulse width modulation techniques and multilevel inverters” (Peddapelli 2014), “Variable frequency pulse width modulation” (Stork, Hammerbauer 2015), “Nonuniform sampling delta modulation - practical design studies” (Golanski), “Variable modulus delta-sigma in fractional-N frequency synthesis” (Borkowski 2007), “Spread spectrum modulation”, “Space vector based dithered sigma delta modulation” (Biji Jacob), “Delta-modulation coding redesign for feedback-controlled systems” (de-Wit 2009), “Space-time vector delta-sigma modulation” (Scholnik 2002), “Adaptive sigma-delta modulation with one-bit quantization” (Zierhofer), “A 2-bit adaptive delta modulation system with improved performance” (Prosalentis 2006), “Stability of adaptive delta modulation with a forgetting factor and constant inputs”, “A new adaptive delta modulation system” (Kyaw 1973/2016), “Method and apparatus for variable sigma-delta modulation” (Herbert Ko 2006), “A modified adaptive nonuniform sampling delta modulation” (ANSDM). One could combine vector compression (space vector, space-time vector, additive quantization “extreme vector” etc.), predictive methods (warped linear prediction etc. applied to 1 bit), radix-3 numbers that are close to the natural logarithm base (order-3 Fibonacci code by Sayood) or E-series numbers, recursively indexed quantization with delta modulation (RIQ-ADPCM), multiple description delta methods etc., trying to find something that is simple enough (like dither, noise shaping and quadrature mirror filters) to be used in simple digital neuristors or similar circuits.
Delta-sigma compression uses noise shaping, and part of the frequency range is filtered out because it is too noisy for usual audio or video information. But this noisy section can still contain information: for example, MP3 players use block coding, and a kind of “header” is carried together with the compressed data that describes how the information is compressed. The filtered-out section of the signal that is too noisy for regular audio or video could be used as an “information store” carrying the same kind of information as block coding headers etc., in addition to the pure audio content that compression methods use. Similarly, 16-bit CD sound uses PCM audio that filters roughly the top 2 kHz away from the 22 kHz band because of quantization noise. This 2 kHz section (or, if the cutoff filter is set at 18 kHz - nobody hears the highest frequencies anyway - 4 kHz is available for additional information) can be used as an “information store” also. The signal is noisy, but some information can be stored there, for example Reed-Solomon coded if CD sound is in question. This information is for the next sample, not the sample that is playing, because the decoder needs to decode the information before the extra information can be used; there is thus a small processing delay in this system, so live recording is perhaps a bit difficult, but for prerecorded material this will work. Now this extra information can be carried in the signal itself, no extra “header” information blocks are needed, and bitrate is saved. There is “Quantization and greed are good: one bit phase retrieval, robustness and greedy refinements” (Mroueh 2013) (extremely quantized one-bit phase-less measurements). Using tri-level (ternary) delta-sigma coding leads to significant savings in bitrate according to “Improved signal-to-noise ratio using tri-level delta-sigma modulator” 2009 and “Bitstream adders and multipliers for tri-level sigma-delta modulators”. There is pseudoternary coding (Matti Pietikäinen and Jaakko T.
Astola), “BIN@ERN: binary-ternary compression data coding”, “Binary to binary encoded ternary BET”, “Self-determining binary representation of ternary list”, “Arithmetic with binary encoded balanced ternary numbers” 2013, “Arithmetic algorithms of ternary number systems” (S. Das 2013), “A novel approach to ternary multiplication” (Vidya 2012), “Balances and abelian complexity of a certain class of ternary words” (Turek), “Abelian complexity in minimal subshifts” (Saari 2011). Other: “Delta-sigma algorithmic analog-to-digital conversion” (Mulliken), “A structure of dithered nested digital delta sigma modulator” (Nouri 2016), “A novel multi-bit parallel delta/sigma FM-to-digital converter with 24 bit resolution” (Wisland), “Cascade Hadamard based parallel sigma-delta modulator” (Alonso), “A parallel delta-sigma ADC based on compressive sensing” (Xiong), “Randomized iterative reconstruction algorithms for delta-sigma A/D converters” (Marijan 2011), “Delta-sigma modulator based A/D conversion without oversampling” 1996, “Design and analysis of multi-stage quadrature sigma-delta A/D converter” (Marttila 2011), “VLSI delta-sigma cellular neural network for analog random vector generation” 1999. Vector compression and delta modulation together (in “space-time vector” or another form), using Additive Quantization (“extreme vector compression”), perhaps works. There is also block floating point (BFP), which lies between fixed and floating point representation: “On finite wordlength properties of block-floating-point arithmetic” (Mitra 2008), “Integral noise shaping for quantization of pulse-width modulation” (Midya 2000), “noise coupled delta sigma modulation”, dynamic element matching (DEM), data weighted averaging (DWA), “A higher-order mismatch-shaping method for multi-bit sigma-delta modulators” A.
Lavzin 2002, “Improved stability and performance of sigma-delta modulator using 1-bit vector quantization” (Risbo 1993), “A novel speculative pseudo-parallel delta sigma modulator” (Johansson 2014), “Hardware realization of novel pulsed neural networks based on delta-sigma with GHA learning rule”. If a ternary format is used, and if tri-level delta number representation is better than plain binary, there is ternary / tri-level arithmetic on the web page “Fascinating triangular numbers” at shyamsundergupta.com. Balanced ternary, ternary tau (BTTS), zero displacement ternary etc. number systems could be used, such as a “ternary tau storage system”. Even stranger numerology is on the web page “Constable Research B.V. about the number nine” (hans.wyrdweb.eu). At neuraloutlet.com is the U-value number system that is good for representing fractals; at MROB.com are the multiple base composite integer and the Munafo PT system (a 17-value number system) that promises almost infinite accuracy; there is the magical skew number system etc., “Addition and multiplication in generalized tribonacci base” 2007, “A new uncertainty-bearing floating-point arithmetic” (Wang 2012), “A survey of quaternary codes and their binary image” (Derya Özkaya 2009), including the “Z4 cyclic code” that transforms quaternary information to binary. Paul Tarau has several number systems, “tree-based”, “giant numbers” etc. Block floating point (BFP) uses interval arithmetic and lies between fixed and floating point representation. The unum concept adds an interval arithmetic “header” to an FP number (as far as I have understood it), so BFP and unum should work well together. Also, floating point and delta (sigma) modulation are coupled in some designs. So perhaps the unum concept (interval arithmetic) and delta / delta-sigma modulation, or some other delta modulation (like Takis Zourntos' nonlinear control model, or DPCM, etc.), can be coupled together, just as floating point and delta modulation are. I don't know what that kind of “unum delta modulation” would be like.
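Balanced ternary, mentioned several times above, is easy to demonstrate concretely. This conversion sketch (the standard algorithm, not taken from the cited papers) uses digits {-1, 0, +1}; one side effect worth noting is that negation is free, since flipping every trit negates the number.

```python
def to_balanced_ternary(n):
    """Integer -> trits in {-1, 0, +1}, least significant first."""
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3               # Python's % keeps r in {0, 1, 2}
        if r == 2:
            r = -1              # digit 2 becomes -1 plus a carry
            n += 1
        n //= 3
        trits.append(r)
    return trits

def from_balanced_ternary(trits):
    return sum(t * 3 ** i for i, t in enumerate(trits))

print(to_balanced_ternary(5))   # [-1, -1, 1], i.e. 5 = 9 - 3 - 1
```

Round-tripping works for negative integers too, because Python's modulo is always non-negative, so the same carry rule covers both signs.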
But perhaps unum/ubox information can be stored in that additional storage space: delta-sigma modulated noise shaping shifts noise to high frequency, and that high frequency band is then filtered out. But that noisy high frequency band can still carry information, for example unum/ubox information for the delta-modulated signal or something similar. That information is for the next sample, because the sample the decoder reads is already being decoded. A PCM or floating point signal can likewise have unum/ubox information in the quantization noise frequency band that is normally filtered out. If that information is not unum/ubox interval arithmetic, it can be something else, for example a fractional number format like Quote notation, the Q number format, BISE (bounded integer sequence encoding), Q-digest compression information etc. (if those formats need additional information, it can be stored in the high frequency storage space), or data compression/decoding information. I don't know if the unum or ubox concept (interval arithmetic) works in delta / differential encoding, like DPCM and ADPCM, or adaptive delta modulation / delta-sigma, but that is one idea. In concept it is 1-bit differential encoding. Or multi-bit differential encoding. Other texts: “Sparse composite quantization” 2015, “Pairwise quantization” 2016, “Optimum quantization and its applications” 2004, the Optimal Quantization website, “Efficient quantization based on rate-distortion optimization” 2016, “Universal rate-efficient scalar quantization” 2010, “Perceptual signal coding for more efficient usage of bit codes” 2012, “An efficient law-of-cosine-based search for vector quantization” 2004, “Robust iterative quantization for efficient p-norm similarity search” 2016, “Implicit sparse code hashing” 2015. There is the “Xampling” theorem by Mishali and Eldar, sub-Nyquist sampling.
“Extreme compressive sampling for covariance estimation” 2015, “Compressive phase-only filtering at extreme compression rates” 2016, “New approach based on compressive sampling for sample-rate enhancement” 2014 (Bonavolonta), “Robust 1-bit compressive sensing via sparse vectors” 2013 (Jacques), “CoSaMP: iterative signal recovery from incomplete and inaccurate samples” 2008, “Single-pixel camera with compressive sensing by non-uniform sampling” 2016. There is also the VCO-ADC, oscillator-based A/D conversion. This is a bit similar to sound synthesis with oscillators. Also, 1-bit delta-sigma modulators sometimes use a ring oscillator, so a 1-bit VCO-ADC DSM exists. Is it possible to use methods similar to those of sound synthesis, like Polynomial Transition Regions (PTR), Vector Phaseshaping Synthesis (VPS), the Auto-Regressive Moving Average (ARMA) filter, the Bandlimited Ramp (BLAMP) function, PolyBLEP, Adaptive Phase Distortion Synthesis, Feedback Amplitude Modulation (FBAM) etc., in sound / speech signal reproduction and decoding (not just synth sound for music but any sound encoding and decoding), with a 1-bit signal, and, if 1-bit modulation is used, in video coding (pixels) also and not just audio? And PortOSC (Petri Huhtala, TuKoKe 2014 Finland competition winner), “Sound synthesis using an allpass signal chain” (no modulation needed), “Bitwise logical modulation”.
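The VCO-ADC idea above can be sketched in a few lines (a toy model; the `f0` and `gain` constants are made-up illustration values): the input sets the oscillator rate, the converter counts whole cycles per sample period, and the fractional phase carries over between samples, which is what gives the first-order noise shaping.

```python
def vco_adc(samples, f0=0.25, gain=0.2):
    """Toy VCO-ADC: per-sample cycle count of an input-controlled oscillator."""
    phase, counts = 0.0, []
    for x in samples:              # input assumed in [-1, 1]
        phase += f0 + gain * x     # cycles advanced during this sample
        whole = int(phase)         # completed cycles = coarse digital output
        phase -= whole             # fractional phase (the residue) carries over
        counts.append(whole)
    return counts

counts = vco_adc([0.5] * 1000)
recovered = (sum(counts) / len(counts) - 0.25) / 0.2
print(abs(recovered - 0.5) < 0.01)  # True: the input is recovered
```

Because the un-counted residue is never discarded, the total count over N samples is within one cycle of the true accumulated phase, so averaging recovers the input with error shrinking as 1/N, just as with the delta-sigma bitstream.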
Using “delta-sigma modulation, 1-bit quantization and oversampling at the receiver”, direct digital synthesis with DSM: “All digital frequency synthesis based on new sigma-delta modulator architectures” 2015, “High order delta sigma noise shaping” flintbox.com 2003/2011, “Look-up table delta-sigma conversion” (Robinson 2003/2007), “Sigma-delta modulation based digital filter design techniques on FPGA” (Memon 2012, single-bit filters), “Threshold direct synthesis structure for digital delta-sigma modulation” (Song 2008), “Fractal additive synthesis: a spectral modelling of sounds for low-rate coding of quality audio” 2003, “A digital incremental oscillator: generation of sinusoidal waves in DPCM” 1996, “Granular synthesis of sound by fractal organization” (Paul Rhys); “spherical logarithmic quantization” uses CORDIC and DPCM. “A new frequency source based on sigma delta modulation and CORDIC” 2012, “A direct digital synthesis with tunable delta sigma modulation” (Vainikka), “Division-free multiquantization scheme for modern video codecs” (Das 2012), “Frequency modulation and first-order delta sigma modulation: signal representation with unity weighted Dirac pulses” (Zierhofer 2008), “Fast hologram generation of three-dimensional objects using line-redundancy and look-up table method” (Choe 2010), “Permutation enhanced parallel reconstruction for compressive sampling” (Fang 2015), “Permutation limits of segmented compressive sampling” (Fang), “Robust simultaneous sparse approximation” (Ollila 2015), “Nonparametric simultaneous sparse recovery” (Ollila 2015), “Generalized quadratically constrained quadratic programming for signal processing” (Khabbazibasmenj 2014), “Quadrature noise shaped encoding” (Ruotsalainen 2014), “Quantization noise reduction techniques for digital pulsed RF signals”, “True discrete cepstrum: an accurate and smooth spectral envelope estimation for music processing”, “Non-negative matrix factorization” (Koivunen 11.2.2016), “Fixed-point algorithm development” (Haghparast), Bit Angle
Modulation (BAM), and other nonstandard modulation methods.
There are ways to make 1-bit information effective, for example using cascaded delta-sigma modulators or something similar, as in papers by Hannu Tenhunen, Bingxin Li and A. Gothenberg; maximum accuracy using multibit DSM is about 14 bits, although this is radio frequency communication with a large oversampling ratio. But it is still a DSM 1-bit concept reaching multibit accuracy. If a neuromorphic system uses 1-bit or 1-bit oversampled/dithered signals, multibit or 1-bit phase shift keying or QAM modulation could be used, because brainwaves are waveform (sine wave etc.) shaped. Binarized Neural Networks (BNN) are effective binary 1-bit structures for artificial intelligence that lead to huge savings in FPGA slices etc. If a tree-based logic structure is the one that the AI uses, then tree-based number systems (Paul Tarau etc.) can be used, or a “logic system based on isoperimetric trees” (FPGA). If multibit accuracy comes from a 1-bit signal, then these AI circuits are like Coarse Grained Reconfigurable Arrays / architectures (CGRA), which are simplified multibit versions of 1-bit FPGAs. Multibit processing is needed if the 1-bit signal has more than 1 bit of precision, using oversampled DSM, the Takis Zourntos model, or the Hatami/Johansson and Tenhunen/Li delta-sigma designs etc. Additive Quantization (AQ, Martinez) or Sample-by-sample Adaptive Differential Vector Quantization (SADVQ, by C. F. Chan, which uses 1 - 3 bits per sample and can be used in music synths or waveform synthesis) can perhaps be used in artificial neural nets also. Is it already possible to build a silicon chip that has the processing capacity of the human brain? “Neurosynaptic operations per second” (SUPS) is the speed measurement of the human brain; a rough estimate is a quadrillion SUPS (10^15, the number 1 followed by 15 zeros). But different estimates vary from 10^12 to 10^16 SUPS (with 10^12, 10^13, 10^14, 10^15 and 10^16 SUPS all having been proposed). What has already been built using neuromorphic chips?
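The BNN efficiency claim above comes from replacing multiply-accumulate with bitwise operations. This sketch (the generic XNOR-popcount technique, not tied to any cited FPGA design) computes a {-1, +1} dot product from packed bitmasks.

```python
def binarize(vec):
    """Pack the signs of a real vector into an integer bitmask (+ -> 1 bit)."""
    bits = 0
    for i, v in enumerate(vec):
        if v >= 0:
            bits |= 1 << i
    return bits

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1, +1} vectors of length n via XNOR + popcount."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 wherever the signs agree
    agree = bin(xnor).count("1")
    return 2 * agree - n                          # agreements minus disagreements

a = [0.5, -1.0, 2.0, -0.3]
b = [1.0, 1.0, -1.0, -1.0]
print(binary_dot(binarize(a), binarize(b), 4))  # 0, same as the sign dot product
```

On hardware, one machine word holds 32 or 64 weights and the whole dot product costs one XNOR plus one popcount instruction, which is where the FPGA slice savings come from.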
IBM True North delivers 46 billion synaptic operations per second per watt of electricity, although normally it uses only 70 milliwatts in operation. It is made using a 28 nm process. IBM is already moving to a 7 nm process in 2018, skipping the 10 nm node that other chip vendors are using. A 7 nm True North would be about 16 times faster (a very rough estimate, a plain simple example rather than an accurate performance calculation), while also using 16 times less power per SUPS. So a 28 nm-size True North chip manufactured at a 7 nm process would have 256 times more SUPS capacity (rough estimate again: 16X more transistors than the 28 nm circuit and 16X faster, so a 256X performance improvement). The 28 nm True North has a 3.5 cm2 die area, because 20 milliwatts per cm2 is the spec and 70 milliwatts is the power usage. If this 7 nm chip now uses 8 times more die area, the result is a 28 cm2 chip with 2048X the SUPS of the 2014 28 nm True North. So this chip has about 10^14 SUPS capacity, similar to what human brains have (2048 X 46 billion SUPS is about 10^14; humans about 10^15, but the cerebral cortex about 10^14 if movement control in the cerebellum takes the rest of the capacity). Power consumption is about 128 watts if it is 16 times more efficient than the 28 nm True North, but the actual power needed when peak performance is not used is 9 watts, or 128 X 70 milliwatts. The transistor count is 691 billion transistors (128 X the 5.4 billion transistors of the 28 nm True North). So this microchip has human brain processing capacity. Although the human brain is much more complicated, and this chip is digital, not analog, with only 256 synapses available per neuron, not 1000-7000 like humans have etc., some kind of model of how the human brain works can run on this microchip. The chip overcomes its low number of neurons and synapses by operating much faster than the human brain.
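The scaling arithmetic above can be checked with a few lines of Python (all the gain factors are the text's own rough assumptions, not measured figures):

```python
base_sups = 46e9       # 28 nm True North: synaptic operations per second (per watt spec)
density_gain = 16      # assumed 28 nm -> 7 nm transistor density gain
speed_gain = 16        # assumed clock/throughput gain
area_gain = 8          # 3.5 cm^2 die scaled up to 28 cm^2

same_size_gain = density_gain * speed_gain   # 256x for an equal-size die
total_gain = same_size_gain * area_gain      # 2048x for the 28 cm^2 die
total_sups = base_sups * total_gain          # ~9.4e13, i.e. about 10^14 SUPS
```

The result lands just under 10^14, which is why the text equates the hypothetical chip with cerebral-cortex-level SUPS.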
Because the human cerebrum has only 16 billion of the brain's 86 billion neurons in total, and almost all the rest are in the cerebellum, which is used only for movement coordination (motor duties), a 10^14 SUPS microchip is almost like the 10^15 SUPS human brain. And in the cerebrum only the frontal lobe does abstract thinking; it is estimated that the brain uses only about 40 bits per second of “conscious bandwidth”. This performance count does not even account for the information loss of unreliable synapses: a very large proportion of the SUPS activity of the human brain is lost to unreliable and misfiring synapses. So this microchip can be more powerful than a human brain. Not even the full cerebrum's SUPS are needed: if the AI chip does not need motor control (is not inside a moving robot), and does not need vision or hearing, then only the frontal lobe counts toward the SUPS requirement. The AI chip is then deaf and blind and has no speech formation capability. But an additional speech synthesis chip (text-to-speech conversion) like the Yamaha Pocket Miku, and a simple speech-to-text conversion program (using AI), make it possible for the AI to speak and “hear”. Not even two hemispheres are needed: some people have had one hemisphere of the brain removed because otherwise they would have died of epilepsy. Although they have only one brain hemisphere, they are living people and almost like normal persons, although with lower IQ. So only the SUPS performance of one hemisphere's frontal lobe is needed for a human-like AI chip. And because in old age the brain suffers serious loss of synaptic connections, and the neurons and synapses that remain work more slowly and less reliably than in younger people, perhaps as little as 1/3 or 1/4 of the SUPS capacity is needed, compared to younger people, if the target is simulating a very old (80-90 years) human brain. Despite the low SUPS requirements these are still human performance figures, although now of old and senile humans.
This theoretical 28 cm2 True North chip manufactured at a 7 nm process has, roughly estimated, the same SUPS capacity as the human brain. What is the cost of manufacturing such a chip? When the Apple A8 chip went into production on a 20 nm process, it had a production cost of about 20 dollars or less and a die area of about 1 cm2, at a time when 20 nm was the newest technology available for microchips. A 28 cm2 chip is so large that 450 mm wafers are needed instead of 300 mm. A transition phase is under way, and in the future 450 mm will be the production standard; new “fifth generation” 450 mm wafers make it possible to use 300 mm wafer production tools and handling. If about 20 dollars per square centimeter is the manufacturing cost at the newest node, a 28 cm2 chip will cost 560 dollars (manufacturing cost only, profit and taxes not included). If 450 mm wafers have 2.25 times the area of 300 mm wafers but the manufacturing cost per wafer stays the same (a rough estimate again), the price of a 28 cm2 chip is about 250 dollars (560/2.25). So a microchip with human brain processing capacity costs only about 250 dollars to make (this is stretching things a bit, but anyway). The same with a 10 nm process: 10 nm versus 28 nm gives 8 times the efficiency, 8 times the speed, and 8 times the transistor count; 8 X 8 = 64, and making the chip 8 times larger, to 28 cm2, gives 8 X 64 = 512 times the SUPS. 512 X 46 billion SUPS is about 2.4 X 10^13 SUPS, still roughly human brain processing capacity. Peak power consumption is about 64-65 watts but normal consumption is 4.5 watts, so laptops or even phablets could use this AI processor. I am not sure whether my power estimates are realistic, or whether the efficiency improvement holds at all when electric circuits scale to smaller dimensions. Even if peak power consumption were kilowatts for the 7 nm or 10 nm chip, it would still be a usable chip; actual operating power would be a couple of watts or a couple dozen watts (normal slow operating mode).
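The cost estimate can likewise be written out explicitly (the $20/cm2 figure and the equal-cost-per-wafer assumption come from the text and are rough):

```python
cost_per_cm2 = 20.0                          # dollars, leading-edge estimate from the text
die_area_cm2 = 28.0
cost_300mm = cost_per_cm2 * die_area_cm2     # 560 dollars on 300 mm wafers
cost_450mm = cost_300mm / 2.25               # ~249 dollars if per-wafer cost stays equal

# 10 nm variant: 8x density, 8x speed, 8x area -> 512x the True North baseline
sups_10nm = 8 * 8 * 8 * 46e9                 # ~2.4e13 SUPS
```

This is purely the text's back-of-envelope model; yield, packaging and test costs are ignored.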
This cheap AI processor could go into any consumer product that nowadays uses some sort of logic chip: washing machines (using “fuzzy logic”), dishwashers, electric ovens, microwave ovens, coffee machines, kitchen machines, bread machines, refrigerators, television sets, home stereo equipment, portable radio/CD players etc. Cars can now drive themselves, because a human driver is not needed when an AI with human processing capacity is the driver. Of course mobile phones, tablet PCs, laptop PCs, desktop PCs etc. can use this AI chip with human brain processing capacity. If the chip consumes a lot of electricity, the electric cooling used in some CCD cameras would keep it cool. If the chip is used in a refrigerator, using the refrigerator's cooling system, or two-stage cooling where liquid nitrogen cools the refrigerator coolant, makes the chip run at very low temperatures and increases efficiency. In a washing machine, using cold water as coolant is a solution; this water is pumped back into the cold water pipe so no extra water consumption increases costs. Using sparse representations and data compression methods similar to those the human brain uses (logarithmic compression, dither, sparsity etc.) makes the data fit into a smaller space. Human memory capacity is about 1.5 petabytes. According to the dramexchange.com web page, the spot DRAM price is 4.75 dollars per 128 gigabytes, or 38 dollars per terabyte. If one such AI chip is augmented with a cheaply produced 1 TB memory chip, and this memory chip uses data compression, like Finite State Entropy or another method, it would be like long-term memory in humans. Encryption is needed, for chip-to-chip communication through the internet so that AI chips can form a “hive intelligence”, and for intranet communication between different chips in the same household or apartment block. Larger chips that are more like human brains could be wafer-size chips.
450 mm wafer-size chips, if manufacturing costs are the same as for 300 mm wafers, would cost 14 000 dollars for a full-size chip or 7000 dollars for a half-wafer chip (two half-wafer chips could be like the hemispheres of a human brain), with power consumption of 7 kilowatts (7 nm full-wafer processor) or 3.5 kilowatts (10 nm full-wafer processor). Additional memory could be a large hard drive or a carousel memory using magnetic tape; a roughly 270 terabyte hard drive, or a 1 petabyte (1000 terabyte) carousel memory with 64 rolls of Fujifilm Nanocubic tape, is small enough to fit in, for example, a human-like android robot's stomach. If one roll of magnetic tape is 10 cm in diameter, like today's magnetic tape storage, and 154 terabytes fit on such a roll as Fujifilm's Nanocubic has promised, about 6 such rolls are needed for 960 terabytes of storage. Divided over 64 rolls of carousel memory, that leads to about a 3 cm diameter roll X 64. If the tape is 2 inches (5 cm) high instead of 1/2 inch (1.27 cm), the roll shrinks to 1.5 cm in diameter. If the central section of the roll is a plastic or metal hub 1 cm in diameter, and the tape is wound from 1 cm (0.4 inch) out to 1.8 cm (0.7 inch), it holds the same amount of tape as a 1.5 cm roll without a centre section. This thin but tall roll will rewind fast and will contain 15 terabytes (64 X 15 = 960) of information; 64 such rolls make up the carousel memory. The actual Swedish carousel memory took 2 seconds to find information; the average/median search time for this kind of 960 terabyte memory is perhaps similar or faster. It is also small enough to fit in the stomach of an android robot. If a 3.5 inch hard drive has 3.7 inch platters inside and 12 terabytes of capacity, then an 11 inch (28 cm) diameter platter stack 2 inches (5 cm) high instead of 0.8 inch (2 cm) has 9 X 2.5 times more capacity than an ordinary 12 terabyte HDD, so a roughly 270 terabyte HDD, about 6 cm (2.4 inch) thick and 28 cm (11 inch) wide, will fit inside an android's stomach.
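The claim that the hubbed roll holds as much tape as a solid 1.5 cm roll is just an equal-cross-section-area argument, easy to verify numerically (dimensions as in the text):

```python
import math

def pack_area(inner_d_cm, outer_d_cm):
    """Cross-sectional area of tape wound between two diameters (cm^2)."""
    return math.pi * ((outer_d_cm / 2) ** 2 - (inner_d_cm / 2) ** 2)

solid = pack_area(0.0, 1.5)    # solid pack, 1.5 cm diameter
hubbed = pack_area(1.0, 1.8)   # 1 cm hub, tape wound out to 1.8 cm diameter
# The two areas agree to within about half a percent, so the tape length is the same.

per_roll_tb = 960 / 64         # 15 TB per roll across the 64-roll carousel
```

The areas are pi*(0.75^2) and pi*(0.9^2 - 0.5^2), both about 1.76 cm2, which is why the hub does not cost any capacity.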
Also, because memory information is stored encrypted and data-compressed, decoding and encoding it takes time, but it is the same in human long-term memory: remembering something that happened long ago, or finding information in long-term memory, takes time too. Due to data compression, memory capacity is much greater than 270 or 960 terabytes and can actually exceed the 1.5 petabytes of humans. This hard drive memory can be augmented with silicon-chip magnetic memory: if the spot DRAM price is 38 dollars per terabyte, cheap 1 terabyte memory chips can be coupled with the actual AI chip (of 28 cm2 die area) to augment memory. If somewhat fewer than 57 chips of 28 cm2 fit on a 450 mm wafer, then a wafer-size neuromorphic processor can be connected to about 60 of these cheap memory chips for 60 terabytes of long-term memory. A 450 mm wafer-size AI processor will fit inside a human-size android robot. Texts: “Quantized neural networks: NNs with low precision”, “CUMF: Scale index factorization”, “PANDA: extreme K-nearest neighbour search”, “AQSort sorting algorithm”, “Outrageously large neural networks” 2017 Shazeer, “An efficient data clustering algorithm using isoperimetric number of trees”, “Number systems and structures” Hinze 2005, “A scenario tree based decomposition for solving multistage stochastic programs”. There are MPSoC, Petri nets, the Ising computer (Hitachi), the Boltzmann machine, and the Feynman Machine (an AI concept). If brainwaves are waveforms, then Vector Phaseshaping Synthesis or Feedback Amplitude Modulation (FBAM) can be used to simulate them. If delta-sigma modulation is used, then in analog form there are VCO (voltage controlled oscillator) DSM structures etc., or DSM structures based on “analog floating point format delta sigma” (in the book “Analog circuit design: low power low voltage integrated filters” 2013, section 5: “Analog floating point converters”).
28 and 32 nm analog chips have been proposed, and as digital chips move to 7 or 10 nm, analog chips at 22, 20 or 16 nm are perhaps possible, with or without memristor technology. The high noise level is not a problem if the aim is to simulate the human brain: brains also have high noise levels and information loss due to unreliability, so the unreliability and high noise of these narrow analog circuits just add realism to brain-simulating hardware. The web page aleph.se/andart2, in the section “Energetics of the brain and AI”, covers this subject. Mistakes in the circuitry do not cripple neuromorphic chips, because they can bypass nonfunctioning blocks, so wafer-size neuromorphic processors are possible; and for smaller chips, dysfunctional dies no longer need to be discarded from the wafer, because neuromorphic chips can operate even with small dysfunctional parts, unlike ordinary CPUs. If the chip is 28 cm2, fewer than 60 of them fit on a 450 mm wafer, so chips near the edge of the wafer must have at least one corner shaped so that they fit despite part of one corner missing, and be designed so that the missing corner does not alter chip function much; these are neuromorphic chips, so that is possible. Now even chips that do not fit perfectly onto the wafer can be used: they have less die area and are not as effective as full chips because one corner is missing, but they are still usable and can be sold at a slightly lower price, and the full or almost full wafer area is used despite the chips' huge 28 cm2 area. That area is just about 4.5 cm / 1.75 inch X 6.2 cm / 2.4 inch, so these chips are not so big after all and fit into cell phones etc. This AI processor, whether the gigantic 450 mm wafer size or the smaller chip, needs a smaller auxiliary ordinary CPU for encryption, internet communication, video and audio processing of internet material and similar duties. This additional “media processor” can be a standard cheap CPU or SoC, e.g. a cheap tablet PC or mobile phone chip.
The AI processor works completely separate from this ordinary CPU, in a “sandboxed” environment, to prevent hacking, just as humans operate computers through keyboard and mouse rather than a direct brain-connected interface. The AI chip has a similarly isolated connection interface to the media processor CPU; if the media CPU or internet connection is hacked, it does not affect the AI processor. The internet connection runs through optical and copper-wire Ethernet, HDMI cable, USB cable etc., and UFS memory cards can be used to increase the memory of the AI processor. Silicon-on-insulator (SOI) hardware makes it possible to have optical and electric (neuromorphic) signal paths in the same chip. Optical computers are different from electric von Neumann machines anyway, so optical neuromorphic or hybrid optical/electric neuromorphic chips are a suitable option. If AI chips are used in large refrigerators, such as big industrial refrigerators or refrigerator ships, then two-stage cooling that uses liquid helium to cool the actual refrigerator coolant makes it possible to cool AI chips down to quantum computer temperatures; now this AI becomes a quantum computer AI, expanding its capability enormously. These AI chips need large data buses to communicate with each other and with the outside world, like a 2048 or 4096 bit wide data bus at each of the four corners of the chip. For additional memory there is magnetic tape memory, hard drives or magnetic chip memory; there are claims that the cheap Chinese 50-60 dollar terabyte USB memory sticks are a hoax, but as for a low price per terabyte, a terabyte USB stick for 60 dollars is possible. The human cerebral cortex has 74 terabytes of capacity and total memory capacity is 1.5 petabytes. Cheap magnetic chip DRAM memory (perhaps even below 38 dollars per terabyte) makes cheap human-capacity memory possible, in an additional memory chip if not in the actual AI chip itself; slowness is not a problem, as human memory is also slow at times.
Using optical or copper-wire connections, these AI chips can group themselves into a large “hive intelligence”, although how safe it is to use network connections, even heavily encrypted ones, against hacking I don't know. If AIs simply share sensory information, the network connection is not so risky, but if AI thought processes are shared between AIs so that a giant AI “superbrain” becomes possible, the connections need strong encryption to prevent hacking. If the additional media chip that makes this connection possible is cheap but effective, with a cross-platform operating system like Kalray or the ICube UPU Harmony processor (the latter also suitable for cheap manycore operation, at a cheap price and with Android OS), then the AI can use different operating systems (desktop, mobile, Linux, Android, Windows and iOS apps etc.). Not only the expensive wafer-size or half-wafer processors, but also smaller chips (like the 28 cm2 one) can be used for AI, and these smaller chips can be grouped into bigger concentrations for improved efficiency: two chips (like the two hemispheres of the human brain), more chips like the lobes of the brain, 4-chip grids (4 X 2 for two brain hemispheres) etc. If the AI chip is in an android robot, additional chips for motor control (the cerebellum in humans) are needed etc.
Because intelligence is not tied to brain size (women have smaller brains than men but no IQ difference, and dull people have a similar number of synaptic connections as bright ones; intelligent people just have more efficient synaptic “wiring” in their brains, while dull people lose their brain power to synaptic noise), a candidate brain for human-level AI is a small-brained woman (brains have huge individual size variation, but that does not affect intelligence much), a woman from the south rather than the north (northern brains are slightly bigger, but that is because they have an about 20% larger visual cortex, which has nothing to do with intelligence, so those synaptic connections are wasted), and this small-brained woman should also be a genius or some very intelligent type. A detailed study of such a brain, transferred into some sort of AI structure, would make a SUPS-efficient AI possible if a humanlike artificial brain is the goal. The minimal requirement for a humanlike AI would be a very old person's (80-90 years) brain, with its smaller number of working synaptic connections, using the frontal lobe only. This AI is blind and deaf and cannot form speech; speech is created by an additional speech synthesis chip, and it can hold conversations if a simple speech-to-text conversion program lets it “hear”. Such an AI has abstract thinking only, and information must be transferred into abstract form, like speech to text (although text is a visual form too; this is just an example). Even one hemisphere's frontal lobe is enough, because people with epilepsy sometimes have one hemisphere removed and are still functional human beings. So studying these half-brain people gives information about synaptically efficient brains for AI research. But if AI processors are near the human intelligence level, that raises moral problems.
If the AI is deaf and blind, and not connected to the internet or an intranet, perhaps the only way to give it entertainment is a cheap radio receiver so that it can “listen” (through speech-to-text conversion) to radio programs, both analog and digital (in countries where digital radio broadcasts exist over radio waves). If this AI is connected to a refrigerator, oven or coffee machine and is even somewhat near the human intelligence level, what are the moral problems of dooming such an intelligence to that kind of existence? And if someone's house is plastered with intelligent AI processors, in television sets, PCs, ovens, refrigerators, coffee machines etc., and these multiple AIs each have almost equal or even better intelligence than the person who owns all these things, what are the moral concerns of using these intelligent beings, even though these AIs are not “human” (but perhaps even more intelligent than the human who owns the house)? Wireless and wired internet connections are needed, and radio and television receivers (both digital and analog TV) in each AI chip SoC, so that these intelligent AIs can have some entertainment among their usual duties. If a great hive intelligence is possible, it should be restricted to a household-level or apartment-block-level intranet for security reasons, or at the largest scale a city-level entity of several square kilometres as perhaps the largest concentration of AI: no long-range direct hive intelligence (although sensory information can be sent over long distances) and absolutely no intercontinental “supermind”, for the security reason that no hacker should be able to crack the AI system of the whole world. Because these AI processors are so large, and therefore perhaps expensive, a smaller tax than for usual silicon chips would help bring the price down, for at least the first 5 years of sales of near-human intelligence processors. If 2018 sees the first real AI processor, then perhaps a 2018-2023 low-tax period would manage to increase AI processor usage.
In many countries books have a minimal tax, because books are “educational material” (including fiction), so if these AI processors are labeled as “educational material” like books, a low tax is possible for at least a couple of years. Human processing capacity AI processors are possible (not human brain capacity, but human SUPS processing capacity, because microchips are much faster than human brains) when 10 and 7 nanometer manufacturing processes begin, and they can even be manufactured at competitive prices. If manufacturers also restrict their usual profit percentage per product (if Intel sells a four-core Atom processor for 5 dollars, it won't get much profit per processor), a similarly small profit per chip may increase AI processor sales, and the more this artificial intelligence is in use, the closer we get to the “singularity”. If the semiconductor plant works at the fastest production rate possible, and the production cost of a microprocessor is about the same on a 450 mm fifth-generation wafer as on a 300 mm wafer, chips would be 2.25 times cheaper (the cost of producing a 300 mm wafer full of processors being the same as for a 450 mm wafer). With a manufacturing cost of 20 dollars per cm2 (like the top-class Apple A8 cost Samsung when the 20 nm process was brand new, with about a 1 cm2 die area), the manufacturing cost of a 10 nm or 7 nm chip is about 250 dollars for a 28 cm2 chip area, if the chips are designed to be functional even when part of the die does not fit on the wafer and part of the die area is missing (neuromorphic chips); about 60 chips fit on a 450 mm wafer. With a small book-like tax and a small profit margin, the selling price of these chips could be under 300 dollars. A wafer-size processor would cost 14 000 dollars to make and its selling price could be under 16 000 dollars with a small tax; a half-wafer processor, 7000 dollars manufacturing cost and under 8000 dollars selling price.
Because the maximum production rate is always in use, 24 hours a day, 7 days a week, and no dysfunctional chips are discarded anymore (neuromorphic chips can themselves route around small circuitry defects), the average production cost per processor is lower than for other processors; and since only about 60 processors fit on a wafer, production must be rapid to meet demand, even more so if wafer-size chips are manufactured. A wafer-size “chip” will have 7 kilowatts peak power and 500 watts in normal operating mode at 7 nm, or 3.5 kW and 250 W at 10 nm. The transistor count would be 39 000 billion transistors at 7 nm and 19 000 billion at 10 nm on a 450 mm wafer. If two 7 nm wafer-size processors are coupled like the hemispheres of the human brain, then about 80 000 billion transistors are in use. That transistor count nears the cerebral cortex's synaptic connections (16 billion neurons, each with 1000-5000 synaptic connections). Although transistors are not synapses, and the artificial neuron and synapse count is smaller in the AI than in the human brain, the AI is about 100 times faster measured in SUPS. So there it is: almost a human brain. What it lacks in sophistication it makes up in speed, an artificial human brain (actually a cerebral cortex) that costs 30 000 dollars to make, fits on two 17.5 inch silicon platters, and is 100 times faster than a human brain, so it does everything 100 times faster than humans. Its memory capacity is not human: the cerebral cortex has 74 terabytes and humans overall, including long-term memory, 1-2.5 petabytes of capacity. True North has 428 million bits or 53.5 megabytes of internal memory; the 28 cm2 version manufactured at a 7 nm process would have 7 gigabytes and the wafer-size version only 400 gigabytes, much less than humans.
But outside magnetic memory can be used; cheap DRAM and other low-priced memory makes cheap magnetic memory possible and overcomes this defect, and if this outside memory is data-compressed, even human-size memory capacity is possible, even in a human-size android robot. If competitively priced AI chips can be made and their intelligence reaches human levels, that would have a great effect on civilization as a whole (singularity), so the faster human-capability AI comes true, the faster the singularity arrives. These chips can already be made without memristors and optical components, and memristors and optical components can be added to the design later when they become production-ready. The 7 nm and 10 nm production technology is ready in 2018, but when memristors or optical components come to the production line I don't know. Although True North has only 256 synapses per artificial neuron, 256 synapses can in fact be the average operational synapse count per neuron in very old people, 80-90 years old, although the person originally had 1000-5000 (or 7000) working synapses per neuron. Because digital synapses do not have the high information loss the human brain has, 256 synapses per neuron can in fact be a suitable number. The neuron and synapse counts of the human brain are always quoted, but how few of these connections are used in abstract thinking, and how high the information loss is due to unreliable and misfiring synapses, is not counted. So perhaps very little SUPS capacity can lead to human-brain-like performance. If the artificial brain is modeled after a “small-brained female genius” or other people who use their brains better (who have efficient synaptic “wiring”), that AI can have better than average human capacity.
A low tax on neuromorphic chips is needed for them to become really cheap, at least for a restricted time (5 years or so) when neuromorphic “brain chips” first come off the production line (2018-?). Even without a low tax they would have competitive prices, but a low tax would help increase sales, the way the low tax on books is used to increase book sales for the common good; if the common-good principle justifies low taxation of printed books, the same principle can justify low taxation of neuromorphic chips. A 28 cm2 chip made at a 7 nm process has about 135 million artificial neurons and 35 billion synapses. If about 60 of these chips fit on a 450 mm wafer, a wafer-size chip will have 8 billion neurons and 2 trillion synapses. The human cerebral cortex has about 16 billion neurons with 1000 or more synapses each (16 trillion synapses). If two wafers now make an artificial brain, like brain hemispheres, the AI will have 16 billion neurons and 4 trillion synapses: the same number of neurons as humans but only 1/4 or less of the synaptic connections. And less than a terabyte of integral memory, less than 1/50 of human capacity if the cerebral cortex has 74 terabytes of memory; but outside magnetic memory can bring the memory level to human-size petabyte class, including long-term memory. The AI will have 100 times the SUPS speed of humans, so it thinks 100 times faster than humans, with human-size intelligence, and the manufacturing cost is about 30 000 dollars (by my very rough estimate). But it is digital, not analog like humans, and the human brain is an electro-chemical entity, so although structures similar to a real brain can be built in the AI, it is still different from the human brain (actually the cerebral cortex). If it needs motor control (movement), like an android robot, it needs extra chips for those movement control duties (humans have the cerebellum for them). And the high unreliability of synaptic connections is missing in the AI; humans waste much of their brain power on synaptic noise.
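The neuron and synapse bookkeeping in this paragraph can be spelled out in a few lines (the per-chip figures are the text's own estimates):

```python
neurons_per_chip = 135e6     # 28 cm^2 chip at 7 nm, text's estimate
synapses_per_chip = 35e9
chips_per_wafer = 60

wafer_neurons = chips_per_wafer * neurons_per_chip      # ~8.1e9 per wafer
wafer_synapses = chips_per_wafer * synapses_per_chip    # ~2.1e12 per wafer
brain_neurons = 2 * wafer_neurons                       # two wafers ~ 16e9, matches cortex
synapse_fraction = 2 * wafer_synapses / 16e12           # ~0.26, about 1/4 of the cortex
```

The two-wafer configuration matches the cortex's roughly 16 billion neurons but only about a quarter of its 16 trillion synapses, which is exactly the gap the text notes.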
Even the small 28 cm2 chip, which has a very small number of neurons/synapses compared to the human brain, has the SUPS speed of the human cerebral cortex, and a manufacturing cost of “about” 250 dollars. That chip also needs (like the larger wafer processors) an additional “media processor” for internet linking and similar duties, and that additional processor is completely separated from the AI; it can be a cheap 4-7 dollar tablet PC or phone SoC at the cheapest. Additional memory is also needed, but massive bulk orders of terabyte-class DRAM or other memory would bring the price perhaps below 30 dollars per terabyte, if the “spot DRAM” price is now 4.75 dollars for 128 gigabytes. One 28 cm2 chip needs at least one 1 TB memory chip, and a wafer-size processor about 60 1 TB memory chips plus an additional hard drive or carousel memory (which the smaller 28 cm2 chip also needs if its functional capacity is near the human brain, but a data-compressed 1 TB memory chip is the minimum requirement). A half-wafer processor can also be used to bring down cost, with half the capacity. The processor should be in some way non-programmable, unlike True North, to prevent hacking. So the “neuroplasticity” of the human brain is not in the AI; for security reasons the AI should be “preprogrammed” and not fully programmable. Connections between AIs should also be strongly encrypted so that the “hive intelligence” can resist hacking, and the AI's outside-chip memory should be data-compressed and encrypted. Some data compression methods take over half an hour to decode because they compress data so tightly, but human long-term memory also sometimes takes half an hour to remember something. These AI chips can go into a large number of different consumer products, not just electronics and household machines but earrings, jewelry and other “ubiquitous” wear. The prices of these products will be higher than average consumer products due to the AI chip price, but they will probably sell well due to novelty value.
Also needed are connections to household and apartment-block intranets and to city-level internet or other networks, and TV and radio in the SoC.
Analog circuits would provide a more realistic simulation of the human brain, and there is already analog circuit production at a 28 nm process, with plans to go below this, to about 20 nm and beyond, with analog electronics. Good signal quality has been obtained at 28 nm analog, but how signal quality holds up below 28 nm I don't know (chipdesignmag.com, “Analog circuits benefit from scaling trends” 2011). In brain simulation, bad signal quality does not matter; brain signals have “bad quality” (unreliability and noise) anyway. Combining memristors and/or optical components with neuromorphic chips would also increase efficiency. The firm POET Technologies Inc. makes hybrid electric/optical components, and SOI (silicon-on-insulator) manufacturing is suitable for optics and electronics. Magnetic memory can also be analog, at least in some form, like OUM phase-change memory. Other memories are “polymer memories”, “charge trapping NVM 3D”, FeRAM, MRAM, RRAM, “conductive bridge RAM”, “metal oxide ReRAM”, T-CAM, the memristor, and “phase change memory”, all considered as memory for neuromorphic chips. But the brain operates in both digital and analog mode, and the BrainScaleS brain simulation project uses both analog and digital components for this reason. “Stochastic computing”/stochastic circuits are a model for AI and other neuromorphic designs (“The noisy brain: stochastic dynamics as a principle of brain function”). Waveform modulation like “Space vector modulation” 2013 or “Space-time domain index modulation” exists, brain activity is waveform modulation, and there is “state-space representation”. The PowerPoint file at class.ee.washington.edu/555/lectures, in “History of computation”, covers neuromorphic computing. In the previous post I used IBM True North as the example because semiconductor manufacturing is almost all digital; it uses digital logic and binary numbers to simulate brains.
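To make the stochastic computing reference concrete: in unipolar stochastic coding, a value in [0, 1] is represented as a random bitstream whose density of 1s equals the value, and a single AND gate then multiplies two values. A minimal sketch (the stream length and function names are my own choices):

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as n random bits with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(bits):
    """Decode a stream back to a value: the fraction of 1 bits."""
    return sum(bits) / len(bits)

rng = random.Random(42)     # fixed seed for reproducibility
n = 100_000
a = to_stream(0.5, n, rng)
b = to_stream(0.4, n, rng)
product = [x & y for x, y in zip(a, b)]   # one AND gate per bit multiplies the values
# stream_value(product) is close to 0.5 * 0.4 = 0.2, with accuracy ~ 1/sqrt(n)
```

The appeal for neuromorphic hardware is that arithmetic collapses to single gates and the representation tolerates bit errors gracefully, at the cost of long streams for high precision.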
But if the brain operates with tree-based logic structures, then perhaps tree-based numeral systems work better (Paul Tarau and others), or the “nested binary biological number system” from the “Noviloka maths on the number horizon - beyond the abacus” web page. If AI reaches even near-human levels, it needs the same kind of laws and legislation that, for example, animals have today (animal rights), and even more so, since these AIs have human-size intelligence while animals, except dolphins and the great apes, are very dumb. So government offices and officials for AI rights are needed, because similar offices and legislation already exist for animals, which have much less intelligence than these forthcoming AIs (2018-?). If animals have rights, then much more intelligent beings like AIs should too, with legislation to match. Each human-class AI chip needs an additional media processor, a cheap CPU/GPU SoC for the internet connection etc.; the AI uses this media processor in a completely “sandboxed” environment separate from the AI itself. The media processor needs its own memory chip, but because both cheap CPU SoCs and 128 gigabytes of magnetic memory cost about 4 dollars each at cheapest, price is not a problem; more expensive CPUs and memory up to 1 terabyte can also be used. The AI chip (28 cm2) has its own additional 1 terabyte memory chip for long-term memory, perhaps below 30 dollars per terabyte in large bulk orders. This memory chip is connected only to the AI, using strong encryption and data compression, and does not connect anywhere else. Human brains have sophisticated visual and auditory cortexes; the AI can do with a much simplified model, so its “eyesight” and hearing are perhaps bad compared to humans. Even a separate speech synthesis chip using text-to-speech conversion can replace the complex speech formation process of the human brain. If only the frontal lobe is modeled, the AI “brain” cannot see, cannot hear and cannot form speech.
It can however speak through the speech-synth chip, and “hear” through a simple speech-to-text-like abstraction conversion that its “brain” understands. It does not understand visual information, but when connected to the internet (through the media processor with an additional separate user interface) it will understand text as abstraction, not as visual text like humans, so it can “read” textual information. AIs will also have in each SoC an individual radio and TV receiver (both analog and digital) so that they can watch TV programs and listen to radio. Small integrated chip radio and TV receivers are cheap and are used for example in cellphone SoCs. If the visual and auditory cortex is missing in the AI, then when no internet connection is used only “listening” to radio through speech-to-“textual”-abstraction conversion is possible. Connections to the internet use both wired and wireless (when wired connections are not possible) links. Several AIs can be chained in intranet-style connections so that only one AI (in a household) has a connection to the internet, but the other AIs can use this connection. AIs could in some way also connect into a large chained and shared “hive intelligence” supermind, but how this is possible, and what the security demands are even in a small few-square-kilometer area connection, I don't know. So if some coffee-machine AI is making coffee, it can simultaneously surf the internet, read an e-book, listen to radio and watch TV, like humans do, if this AI has human-class intelligence, and perhaps participate in a hive-intelligence supermind also. However, memory capacity for downloading material from the net is restricted to the media processor's connected memory; the AI's own 1 terabyte additional “long-term memory” is reserved for its own abstract thoughts, like human memory. How a hive intelligence that requires some sharing of brain-process abstractions between AIs could be organized I don't know (an AI can also connect with other AIs to form a much larger intelligence, not only surf the internet). 
Intranet connections can be optical or electrical Ethernet, HDMI or other, and internet connections optical or copper-wire Ethernet etc. An additional UFS memory card or USB stick memory can be used, at least in the media processor. “Inexact computing” can increase the efficiency of ordinary CPUs when accuracy is not so important, according to Rice University. Whether there is any performance improvement in neuromorphic computing that uses “stochastic circuits” / stochastic computing or some other inexact computing model I don't know. A human-scale AI with a 450mm wafer-size processor has kilowatt-class electricity consumption. So if this processor is inside a human-like android robot it needs a wired connection to a power source, because no battery small and light enough for such consumption fits inside the android. So this android drags behind itself an electric wire that powers its AI processor, its electric or hydraulic motors and its cooling system. And because most human sensory functions are logarithmic, not only logarithmic number systems but ZDTNS (zero displacement ternary number system), the “magical skew number system” (Elmasry, Jensen, Katajainen), the neuraloutlet.com netpage's “metallic numbers” or “U-value numbers”, or the MROB.com netpage's “multiple base composite integer” could be suitable for neuromorphic computing. If a logarithmic base is the goal there is the Balanced Ternary Tau number system (BTTS); it is both logarithmic (tau) and ternary. J.A.D.W. Anderson has proposed the “Perspex machine”, transreal number system computing (transreal coding). There are other transreal coding schemes also; it is just one of the “infinity computer” theories. BTTS and other number systems can perhaps be used with it, or are somewhat related to it. Fibonacci ternary coding schemes are computationally effective also. Ternary values can be encoded as “binary coded ternary and its inverse”. 
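As a minimal sketch of the balanced-ternary idea that ternary systems like ZDTNS and BTTS build on: every integer has a unique representation in digits {-1, 0, +1}, and negation is just flipping digit signs, which is part of the appeal for hardware. The function names here are my own.

```python
def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary digits {-1, 0, +1},
    least significant digit first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:        # remainder 2 becomes digit -1 with a carry
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits or [0]

def from_balanced_ternary(digits):
    """Decode a least-significant-first balanced-ternary digit list."""
    return sum(d * 3**i for i, d in enumerate(digits))

# Example: 5 encodes as [-1, -1, 1], i.e. -1 - 3 + 9 = 5.
```

Negating a number is simply `[-d for d in digits]`, with no separate sign bit or two's-complement machinery, which is why balanced ternary keeps reappearing in alternative-arithmetic proposals like those listed above.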
There are several ways to encode ternary values into binary information, if binary is required (and ways to encode quaternary values into binary too). Some number system other than a plain integer base can also be used in neuromorphic computing, like tree-based or biologically inspired number systems. The Trachtenberg system is a group of algorithms that does math with simple digit-shifting tricks, so even large calculations can be done with simple rules; simple software can thus do what usually requires hardware like a floating-point unit or integer ALU. Removing the FP unit or integer ALU from an ordinary CPU makes the CPU much simpler, but the Trachtenberg system, even in its improved form, is not as flexible as an ordinary ALU. Vitthal B. Jadhav has made an improved Trachtenberg system in the book “Global number system” and several other publications; Jadhav has made “VJ's golden lemma”, “VJ's matrix method”, “VJ's cross-binary test”, “sliding rule multiplication”, “twisted math for quick squaring” etc., more or less relegated to mental math. If these mental-math algorithms were in hardware, fast computation using simple rules, like the humans who use these mental-math tricks, would be possible, in a neuromorphic chip or elsewhere. Inexact computing is, according to Rice University, 15 times more efficient than normal FPU or ALU operation, but leads to an 8% error rate. Suppose the processor has inbuilt error-corrector hardware: information enters this unit where an error-correction code is added, then goes to the FPU, ALU or GPU where it picks up 8% errors due to inexact computing, then returns to the error-correcting unit where the 8% errors are corrected back to normal. The processor would now be 15 times more effective than normal thanks to the accepted errors, but the error-correction unit adds extra hardware, and the error-correcting code also adds to the bit length of the information. This is for an ordinary CPU or GPU. Unum computing is coming to floating-point computation also. 
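To illustrate the kind of digit-shifting rule the Trachtenberg system uses, here is a sketch of its classic multiply-by-11 rule ("add each digit to its right-hand neighbour"), which needs only digit reads, small additions and carries rather than a general multiplier. This is the standard textbook rule, not code from Jadhav's publications.

```python
def times_11(n):
    """Trachtenberg rule for n * 11: each digit plus its right neighbour.

    Works least-significant digit first, propagating carries, so only
    single-digit additions are ever performed.
    """
    digits = [int(d) for d in str(n)]
    result, carry, prev = [], 0, 0   # prev = neighbour to the right
    for d in reversed(digits):
        s = d + prev + carry
        result.append(s % 10)
        carry = s // 10
        prev = d
    s = prev + carry                 # leading digit plus any final carry
    while s:
        result.append(s % 10)
        s //= 10
    return int(''.join(str(d) for d in reversed(result)))

# Example: times_11(52) follows 2, 5+2, 5 -> 572.
```

The point is that the whole computation is table-free and local to adjacent digits, which is why the text suggests such rules could replace a full ALU in very simple hardware, at the cost of flexibility.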
For other coding methods and neuromorphic computing: distributed arithmetic (DA) coding “with internal swapping”, “pulse group modulation”, “ODelta compression” (Gurulogic Microsystems), “Octasys comp” (Ypo P.W.M.M. van den Boom), “A simple oscillator using memristor” 2017 (for a VCO ADC, for example), Albert W. Wegener's data compression patents and Clinton Hartmann's “group keying” patents (multiple pulse / phase shift group keying that offers high efficiency). If birds like crows or parrots have high intelligence compared to their small brains, then an artificial “superbird” would have highly efficient synaptic “wiring” modeled after the biological original, and human-class intelligence using far fewer synaptic connections than humans if modeled after bird-brain biology. Birds have accurate senses too, like eyesight and hearing (in owls and bats and parrots). Or use an AI that is not modeled after any biological original but is a purely artificial entity with nothing to compare to in the biological world. If a human-like android is built it does not need a sense of smell or touch, only pressure sensors in the hands so that it can grab things properly. Humans need two eyes for stereoscopic sight; an android needs only one good eye, and the other can be a low-pixel-count eye useful only for stereoscopic distance measurement, with all accurate visual information seen by the first eye. Other rationalizations of senses etc. are possible. Memristors are coming to mass production in 2018 also: STT-MRAM (GlobalFoundries), eMRAM (TSMC), IBM phase change memory, and others like Intel XPoint, storage-class memory (SCM), Nano-RAM etc. In hard drives HAMR (heat-assisted magnetic recording) brings a 10-fold increase in data density, MAMR is another new recording method, and SHRAM memory from the early 1990s (Richard Lienau) is a forgotten concept. Organic and polymer memory is at the laboratory stage. 
The University of Massachusetts Amherst has made a CMOS memristor that can be manufactured with ordinary semiconductor plant technology, the same as silicon microchips, so cheap mass production of memristors is possible, and with it large-scale production of memristor-based AI microprocessors (from 2018-?). Also the “synapse memristor” (CNRS/Thales, Vincent Garcia, Sören Boyn, 2017) and the University of Michigan “sparse coding memristor” (2017). The concept of “artificial consciousness” is related to AI, and “electromagnetic theories of consciousness” are more esoteric ways to model the human mind. According to Wikipedia an “electromagnetic consciousness” is being built using the “row hammer effect” of memory chips, and even a Raspberry Pi can be used as an artificial intelligence platform, according to an unpublished scientific paper (information from 2017). Those theories are esoteric in nature and unproven, like “Physics of consciousness” by Gustavo Figueiredo. If consciousness is built partially on an electromagnetic base then it affects AI and artificial-consciousness studies, for example quantum-computer artificial consciousness. There is also the “memristor ratioed logic” (MRL) concept. If AI or virtual reality is coming, then much larger memory cards are needed than SD-card-size UFS memory cards. A higher internal volume that makes possible, for example, a 100 terabyte memory card is needed, if magnetic memory prices keep going down year by year. Large memory cards of high volumetric size using cheap magnetic memory, with terabytes or petabytes of capacity, are perhaps the future. If 28nm and smaller analog circuits are made, using a 1-bit digital signal with analog dither is one solution in a hybrid analog/digital circuit for greater dithering performance. When dither expands this 1 bit to multibit, the multibit processing is digital, and additive quantization (AQ) or another method can be used to expand the bit width even more. 
So this processor has two circuits: a 1-bit analog dithered signal, used in memory call-up for example (magnetic memory like magnetic phase memory can be analog also and store this 1-bit dithered signal in its original form), and then multibit digital processing once the dithered 1-bit signal has been converted to multibit. See “Data compression with low distortion and finite blocklength” (2017) and “Nonuniform dithered quantization” (2009).
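The dither principle these paragraphs rely on can be sketched numerically: a value quantized to a single bit carries almost no information per sample, but if uniform dither is added before the 1-bit decision, the average of many samples converges back to the original multibit value. A minimal sketch (function name is mine):

```python
import random

def dithered_one_bit_mean(x, n=100_000, seed=1):
    """Quantize a value x in [0,1] to n single bits using uniform dither.

    Each bit is 1 when x exceeds a fresh uniform random threshold, so
    E[bit] = x; averaging the stream recovers x to multibit precision.
    """
    rng = random.Random(seed)
    bits = (1 if x > rng.random() else 0 for _ in range(n))
    return sum(bits) / n

# With n samples the error shrinks roughly like 1/sqrt(n), i.e. each
# 4x increase in oversampling buys about one extra bit of precision.
```

This is plain dithered oversampling without noise shaping; the noise-shaping techniques mentioned earlier improve on the 1/sqrt(n) rate by pushing the quantization noise out of the signal band.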
There are concepts of AI like “conceptual blending” and “adaptive informatics”, the “Peace machine” (Timo Honkela), “Glossasoft” (Honkela), Antti Ylikoski: “On theoretical computer science and artificial intelligence”, and a post to Google Groups on 14.5.2015 by Ylikoski: “A newcomer: on the metalevel reasoning system”. “Querying string databases with modal logic” (Matti Nykänen, 1997) and “Aspects of relevance in information modelling” (Esko Marjomaa, 1997, ISBN 9514442237) are publications. Analog circuits are manufactured at 16nm now, and 14nm, 10nm, 7nm and 5nm analog manufacturing is proposed for the future. So an analog AI with many small artificial neurons is possible, if for example a wafer-size AI uses a 450mm wafer. An analog circuit can be made almost as small as a digital one, and human intelligence is an analog operation (although partly digital), so an analog electronic AI is possible and perhaps even better than digital in performance. Not to mention optical analog computers, or optical analog (not digital) AI using optical circuits. If analog circuits ever reach 5nm (which is already proposed), analog signal processing would be much better than digital, in AI at least; noise is a problem, but noise shaping techniques have been invented to correct it. Human brains have extremely high information noise levels, so noise in an analog AI circuit is perhaps not such a bad thing and only increases realism in the AI. Because 16nm analog circuits are possible now, very efficient analog AI processors using analog logic can be made even today. A 16nm analog AI is much better than digital at 16nm, 12nm, 10nm etc., or at least closer to animal / human intelligence. It is better that human-intelligence-class AI is not sold to customers but only rented for some monthly sum, so the AI remains the private property of its manufacturer. 
So any damage to the AI must be paid for by the one who rented the AI from the manufacturer, and the scale of payment for damage could be much higher than just the financial damage (manufacturing cost) of the AI. For example, a criminal punishment scale for damaging a human-intelligence-class AI could be used, leading to high payment rates if the AI is damaged; when the AI is not owned but only rented to the user, that kind of legislation is possible. And since the cost of manufacturing a human-scale AI is high, renting instead of selling lets those who cannot afford to pay the full cost immediately have a human-class AI also. So a human-scale AI can be like a rented luxury car, although if damaged the repayment sum will be much higher, or the damage may even lead to punishment under criminal law. Atomic layer deposition (ALD) perhaps makes it possible to manufacture analog electronics, and ALD is perhaps the way to make analog electronics down to 1 nanometer size. If a human neuron is 4 microns in size, and analog ALD has a 45nm manufacturing process (the artificial neuron is bigger than 45nm because it needs many components made at 45nm line width), then nearly a 1000-to-1 size reduction is possible if a biological brain is downsized to an electronic version, not to mention grey matter / white matter etc.; mostly only grey-matter neurons are needed, and the cerebellum has 4 times as many neurons as the cerebrum in only 10% of its size. So an artificial brain of about 1 cubic centimeter is perhaps possible to make. ALD manufacturing makes possible three-dimensional electronic structures with thousands (perhaps millions) of layers, so it is a good candidate for human-class AI manufacturing. Other manufacturing processes cannot make very large 3D structures. Mimicking the human brain needs three-dimensional structure manufacturing, so ALD is the best process for human-class AI. ALD is also possible as roll printing of electronics, so large structures with billions of artificial neurons are relatively cheap to make. 
A thin, 0.1 - 0.5 mm (100 - 500 micron) thick artificial brain structure with the same neuron capacity as a human brain is perhaps possible. This “flat brain” has some differences from the human brain because it is so thin but wide, but a close approximation of the human brain is perhaps possible using the ALD process.
Now that manufacturing is nearing 7nm on 300mm wafers, how about using optical lithography of proportionally similar accuracy but at the diameter of the largest LCD display substrates, about 3 metres: that gives about 70nm (65nm) accuracy on a 3 metre wafer, and manufacturing can happen in the same factories that make those 3 metre substrates for LCD TVs and displays. It is much simpler to use optical lithography at 65nm, even at 3 metre size, than at 7nm at 300mm size. The wafer has 10x the width of a 300mm wafer but 100x the area. This large 3 metre wafer is for an analog neuromorphic processor, for creating a human-scale AI. Two such large wafer-size processors can be stacked on top of each other to mimic the human cerebral hemispheres, or several such wafers stacked to mimic different parts of the brain, with optical-electronic through-hole connectors between the wafers. Although the accuracy/width ratio is in the same class as 7nm / 300mm, thanks to the larger wavelength even 65nm or 45nm (45nm being about 6.5x the 7nm feature size) on a 3 metre wafer is perhaps possible. The 3 metre wafer is a large analog neuromorphic processor mimicking the human brain or part of it. So human-level AI is possible, manufactured at standard LCD-substrate size at the largest wafer size (about 3 metres), and in the same factories, although the lithography methods of LCD screen making and processor making differ. Or use LCD-screen manufacturing tech or some other manufacturing tech, but with 65nm or 45nm accuracy on a 3m wafer and analog circuits on the wafer. There will be mistakes and faults on the wafer, but because it is a neuromorphic processor it can bypass faulty information lines and still work. 
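The scaling arithmetic above can be checked directly; all figures are taken from the text.

```python
# Wafer scaling figures from the text.
small_wafer_m = 0.3    # standard 300 mm wafer
large_wafer_m = 3.0    # ~3 m LCD-class substrate

width_ratio = large_wafer_m / small_wafer_m   # 10x the width
area_ratio = width_ratio ** 2                 # 100x the area

# Feature accuracy relative to wafer width (nm of accuracy per metre):
ratio_7nm_300mm = 7 / small_wafer_m    # ~23.3 nm per metre of wafer
ratio_65nm_3m = 65 / large_wafer_m     # ~21.7 nm per metre, same class
```

So 65nm lithography on a 3 metre substrate demands roughly the same accuracy-to-width ratio as 7nm on 300mm, which is the core of the argument that LCD-scale fabs could print such a wafer-scale neuromorphic processor.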
The European Union is using ARM processor cores for AI, in a wafer-scale 300mm package where the processor dies are not separated from the wafer; connecting the dies with a PCB board while they are still on the wafer saves costs, since the dies are not singulated and packaged individually. Some processors are faulty and some work only partially, but there are still plenty of working processor dies on the wafer. It is more compact than a computer with separate processors, and probably cheaper to make. If memristor tech becomes usable it can be used in those AI chips. The netpage “A list of chip/IP for deep learning - Shan Tang - Medium” lists firms like GreenWaves, Esperanto Technologies, Groq, KRTKL, Kalray, Mythic, Knowm, Adapteva, Koniku, KnuEdge, Pezy, Graphcore and Wave Computing, and some of these firms are making cheap microcontroller-size AI chips. 1-bit serial logic and transputers were used in supercomputers in the 1980s and 1990s. The netpage “XLNS research - overview”, in its articles section, lists many logarithmic number systems. The Stack Overflow page (2009) “8 bit sound samples to 16 bit” gives many methods for improving 8-bit information to almost 16-bit accuracy. The Google Groups comp.arch message (2017) “Re: beating posits at their own game” covers extreme gradual underflow (EGU) and hyper gradual underflow (HGU); if it does not show in Google it is shown in the Microsoft browser. Unum, posit, valid, HGU or EGU can perhaps be used in AI also. NICAM was ADPCM-like compression that used white noise, but instead of white noise, NICAM with dithering could be used to improve accuracy for low-bitrate processing: NICAM with dithering and floating point, unum, posit, valid, HGU/EGU, a logarithmic number system or other. NICAM is almost ADPCM but not quite, halfway between ADPCM and linear compression.
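As a sketch of why the logarithmic number systems on the XLNS pages appeal to simple hardware: in an LNS a positive value is stored as its logarithm, so multiplication and division reduce to addition and subtraction of the stored exponents (a textbook property of LNS, not taken from any specific system above; the function names are mine).

```python
import math

def lns_encode(x):
    """Store a positive value as its base-2 logarithm."""
    return math.log2(x)

def lns_decode(e):
    """Recover the value from its stored logarithm."""
    return 2.0 ** e

def lns_multiply(a, b):
    """Multiplication in LNS: just add the stored exponents."""
    return lns_encode(a) + lns_encode(b)

# lns_decode(lns_multiply(6.0, 7.0)) recovers 42.0 (up to float rounding).
```

Addition and subtraction are the hard operations in an LNS (they need a lookup or approximation of log2(1 + 2^d)), which is the usual trade-off against fixed point; for multiply-heavy neuromorphic workloads the trade can be favorable, matching the text's remark that human senses are roughly logarithmic.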