Using the Walsh-Hadamard transform in signal processing instead of the Fourier transform is possible, and the Hadamard transform can work in the analog domain, not only digitally; analog circuits are now made with 16nm manufacturing technology. So the Hadamard transform / Walsh functions in analog form could be more efficient than the Fourier / fast Fourier / sparse Fourier transform (if there is such a thing as a “sparse Hadamard transform” or “sparse Walsh function”). See “Application of a realtime Hadamard transform network to sound synthesis”, Bernard Hutchins 1975; “Experimental electronic music devices employing Walsh functions”, 1973; “Walsh functions in waveform synthesizers”, Insam 1974. Fuzzy granular synthesis with Walsh functions. Phase offset modulation is one of many signal modulation methods (an addition to the list of modulation methods in the previous text). Wave folding (“Simple wavefolder”, like the YouTube video “DIY analog synth project wavefolder post VCA”, 2018), the wave concatenator, and dynamic waveform morphing / crossfading are other signal processing methods, as are timeline-based synthesis (Progress Audio Kinisis) / timesplice synthesis (Synclavier) and spherical harmonics synthesis, as in the “Spherical harmonics synthesizer”. Spherical quantization is used for example in ADPCM. Whether Additive Quantization (AQ) can be used in spherical / pyramid / cubic quantizing schemes, I don't know. On the Chipdesignmag netpage is “Between fixed point and floating point” by Dr. Gary Ray, presenting different experimental floating point systems; the version that uses a “reversed Elias gamma exponent” can perhaps be used in delta sigma modulation, or in the Takis Zourntos model of one-bit modulation without delta sigma: 1-bit (reversed) Elias gamma as DSM or as the Takis Zourntos model. Or the other exponent models presented in that article, in 1-bit form, could perhaps make an efficient DSM or Takis Zourntos model. Vector quantization (additive quantization, AQ) with DSM or the Takis Zourntos 1-bit model can perhaps also be done.
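In digital form the transform in question is cheap to demonstrate: the fast Walsh-Hadamard transform needs only additions and subtractions, no multiplications or complex numbers, which is also why Walsh functions map so naturally onto simple (even analog) hardware. A minimal sketch (function name is mine):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform, natural (Hadamard) order.

    Like the FFT it needs a power-of-two length, but the butterfly
    uses only additions and subtractions.
    """
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x
```

The transform is its own inverse up to a factor of 1/n, so `fwht(fwht(x)) / len(x)` recovers the input.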
Vector phaseshaping is used in oscillators, but does it also work with DSM or the Takis Zourntos model? Or does the anamorphic stretch transform or the phase stretch transform work with DSM or the Takis Zourntos model? DSM can have internal multibit or multirate processing, although it is a 1-bit scheme. On the CHRTsynth netpage, “natural modulation” is done with tube valves and an Abraham-Bloch multivibrator; the CHRT tube valve wind controller has formant and timbre modulation. Simple analog oscillator and filter designs with an absolutely minimal number of components can be found, for example, on the EEWeb Extreme Circuits and Circuit-finder netpages, and on the bowdenshobbycircuits netpage (Bill Bowden). Examples are “How to build 2nd order op amp filters”, “Triangle / squarewave generator”, “How to build reverse bias oscillator”, “How to build low frequency sinewave oscillator”, “Simple op amp bandpass filter”, “Variable high-pass filter”, “Simple white noise generator”, “FET audio mixer”, and “Derive pure sine waves from digital signals”. Other simple designs are “Simple VCF schematic”, “Ultra simple VCF”, the Thomas Henry VCF-1, and “Very simple VCF+LFO”. Martin Vicanek has made many improvements in digital signal processing. Convolution synthesis and polyconvolution synthesis (BT Phobos) are synthesis / modulation methods, as are spectral morphing synthesis (Cube 2), harmonic content morphing synthesis (Firebird VST) and transwave synthesis (Ensoniq Fizmo). Unusual oscillators include the TMP array oscillator, the Theta oscillator (Räsänen), the Kassutronics Avalanche VCO, the Stroh Flexwave VCO, PortOSC (Petri Huhtala's portable oscillator for cell phones), the granular oscillator, “Grainy clamp-it”, the Falcontinuum VST multi-granular oscillator, the scanned oscillator, the Swarmsynth swarm oscillator, “The geometric oscillator: sound synthesis with cyclic shapes” (2017), Zero Vector VST by White Noise Audio, the Native Instruments Form synth oscillator, and the Nonlinear Instruments VCV Rack modules. Dhalang MG uses “Rössler and Lorenz attractor fractal waveforms”.
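A rough sketch of what vector phaseshaping does inside an oscillator, based on the published idea of bending the phase ramp at an inflection point (d, v) before the cosine lookup; the function and parameter names here are mine, not from any particular implementation:

```python
import numpy as np

def vps_osc(freq, d, v, sr=48000, n=48000):
    """Minimal vector phaseshaping oscillator sketch.

    A linear phase ramp is bent at the inflection point (d, v):
    phase d maps to v, so the cosine is read faster on one side of
    the breakpoint and slower on the other, reshaping the spectrum
    the way classic phase distortion does.
    """
    x = (np.arange(n) * freq / sr) % 1.0                   # raw phase in [0, 1)
    phi = np.where(x < d,
                   v * x / d,                              # before the inflection
                   v + (1.0 - v) * (x - d) / (1.0 - d))    # after it
    return -np.cos(2.0 * np.pi * phi)
```

With d = v the bend disappears and the output reduces to a plain sinusoid; moving the point away from the diagonal adds harmonics.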
Other complex sequenced waveforms are found in hardware in the “Ornament & Crime” sequencer, and in software in Modulys VST, PolyWaves VST, Subconscious VST, Seeq One VST, Photon VST, Pandemonium VST, Metatron 2 VST (“Nippy Baynes wavetable OSC”), and Hydi VST. Le Sound AudioTexture and “A new paradigm for sound design” are new methods. In Elektor magazine is “Digital formant synth (D-Formant) 130374” (2013), which uses BLEP (PolyBLEP) in a new form with large ROM tables. RCS 370 Polyphonic harmonic generator. Granular synthesis can be used for signal processing, as in the Borderlands iOS app. Fuzzy granular synthesis uses the Walsh-Hadamard transform, and that works in the analog domain, so is granular signal processing with analog electronics possible? The anamorphic stretch transform and the phase stretch transform are used in signal processing; both have something in common with the Fourier transform. Is it possible to make a Walsh-Hadamard-based anamorphic stretch transform or phase stretch transform that works in the analog domain, if 16nm analog circuits are used? That is, versions of AST and PST that relate to the Walsh-Hadamard transform the same way the originals relate to the Fourier transform, and that work with analog circuits. Analog (or digital) Walsh-Hadamard processing instead of the digital Fourier transform, with or without AST and PST. Vector phaseshaping is a version of phase distortion, so can the phase stretch transform and vector phaseshaping be combined? If vector phaseshaping works in oscillators, and analog oscillators are used in a VCO-ADC, can a VCO-ADC be built with, for example, a vector phaseshaper, or the Walsh-Hadamard transform, or the phase stretch transform, or can all of these be combined in a VCO-ADC? Or the anamorphic stretch transform in a VCO-ADC? Or the anamorphic stretch transform in DSM or the Takis Zourntos model? Or (analog or digital) Walsh-Hadamard functions in delta sigma modulation or the Takis Zourntos model, with or without vector phaseshaping, the phase stretch transform or the anamorphic stretch transform?
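Granular processing of an existing signal (the Borderlands idea) can be sketched digitally in a few lines. This is a minimal overlap-add time-stretch under my own naming, not any particular product's algorithm:

```python
import numpy as np

def granulate(src, grain_len=512, hop=256, stretch=2.0):
    """Minimal granular time-stretch sketch.

    Hann-windowed grains are read from `src` at one rate and
    overlap-added at another, so the output is `stretch` times
    longer without changing the pitch of the grains themselves.
    """
    win = np.hanning(grain_len)
    n_out = int(len(src) * stretch) + grain_len
    out = np.zeros(n_out)
    t_out = 0
    while True:
        t_in = int(t_out / stretch)                  # read position lags/leads
        if t_in + grain_len > len(src):
            break
        out[t_out:t_out + grain_len] += src[t_in:t_in + grain_len] * win
        t_out += hop
    return out
```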
A recursively indexed quantizer is used in ADPCM (RIQ-ADPCM), so perhaps it can be used with the Takis Zourntos model as well. The index-calculus number system is used in one version of the logarithmic number system. Finite State Entropy coding is data compression; how suitable those three models are for a one-bit system like DSM, I don't know. But perhaps a multibit / multirate DSM will do, or a “pseudo-parallel DSM”? Perhaps some of these methods would make for efficient signal processing, analog and digital, in DSM, the Takis Zourntos model and VCO-ADCs, or in other signal processing duties. Simple VCO and VCF designs can be used in miniature analog signal processing, for example in a VCO-ADC at 16nm manufacturing, with analog or digital Walsh-Hadamard functions in a 16nm analog or digital signal processor instead of Fourier transforms. Xampling is an analog version of sampling, sampling below the Nyquist rate; could Walsh-Hadamard functions be used in fuzzy granular (analog) sampling too? “Granular synthesis of sound by fractal organization”, Paul Rhys. Elettra Venosa: “Time-interleaved ADCs”. Compressive sampling, as in “Multi-channel simultaneous data acquisition through a compressive sampling-based approach”, means that 1/250th of the samples is enough to capture an analog signal, making sample reconstruction with a 250x compression ratio possible. “Robust compressive sensing via sparse vectors”, “Extreme compressive sensing for covariance estimation”. An audio codec, such as a speech codec, that uses sparse vectors / compressive sensing is possible, and a formant / speech synth too. “Sample by sample adaptive vector quantization” (SADVQ). Quadrature amplitude modulation is an analog compression technique in television transmissions (PAL), and there is the digital “Quadrature noise shaped encoding” (Ruotsalainen). Discrete summation formulae are a method for creating lots of harmonics with ease; FM modulation (vector phaseshaping, phase offset modulation) is another.
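The discrete summation formulae really do give many harmonics for almost no work: a whole bank of exponentially decaying partials collapses to a closed form with two sines and one cosine. A sketch of the infinite-series form (function name is mine):

```python
import numpy as np

def dsf(theta, beta, a):
    """Discrete summation formula, infinite-series form.

    Evaluates sum_{k=0}^{inf} a^k * sin(theta + k*beta) in closed
    form (requires |a| < 1): theta is the fundamental phase, beta
    the phase increment between partials, a the per-partial decay.
    """
    return (np.sin(theta) - a * np.sin(theta - beta)) / \
           (1.0 - 2.0 * a * np.cos(beta) + a * a)
```

Sweeping `a` toward 1 brightens the spectrum continuously, which is what makes DSF attractive as a cheap harmonically rich oscillator.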
Could a one-bit / multibit DSM or Takis Zourntos model ADC, or a VCO-ADC, benefit from these? “Direct digital synthesis using sigma delta modulated signals” (Orino), “A tunable direct digital synthesis with tunable delta sigma modulation” (Vainikka). Can the anamorphic stretch transform and the phase stretch transform be turned into analog form using Walsh-Hadamard functions as the model? And use Xampling? “Cascaded Hadamard-based delta sigma modulation”, Alonso. And can the phase stretch transform use vector phaseshaping, phase offset modulation, or another FM method? Is fuzzy granular analog sampling with Walsh-Hadamard functions possible? And with Xampling? US patent 8487653B2, “SDOC with FPHA and FPXC”, by Tang System, 2009. Can Xampling be used in DSM, the Takis Zourntos model or a VCO-ADC? Can a vector phaseshaping oscillator, or the “vector oscillator” by Vicanek and others, be used in a VCO-ADC? Or vectors in DSM or in the Takis Zourntos model, with or without vector phaseshaping? Feedback amplitude modulation (FBAM) and allpass filter chain (sound) synthesis are other new signal processing methods; how they compare to analog signal processing, I don't know. Can FBAM or an allpass filter chain be used in DSM, the Takis Zourntos model or a VCO-ADC? Or be used in signal processing other than (music) sound? Granular additive synthesis and granular fractal synthesis are synthesis methods. DocNashSynths has made several new synthesis methods like “NHT nested hyperbolic and triangular functions”. FOF synthesis, corpus-based concatenative sound synthesis, IRCAM CataRT. Controlled envelope single sideband modulation is another way of modulating.
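For reference, the basic 1-bit delta sigma loop that all these variants build on fits in a few lines. This is a plain first-order modulator, not the Zourntos model (which replaces the linear loop with a control-theoretic nonlinearity); names are mine:

```python
import numpy as np

def dsm1(x):
    """First-order 1-bit delta-sigma modulator sketch.

    An integrator accumulates the error between the input and the
    previous 1-bit output; the quantizer emits +1/-1. The running
    average of the bitstream tracks the input, with quantization
    noise pushed toward high frequencies (noise shaping).
    """
    integ = 0.0
    bits = np.empty(len(x))
    for i, s in enumerate(x):
        integ += s - (bits[i - 1] if i else 0.0)   # feedback of last bit
        bits[i] = 1.0 if integ >= 0.0 else -1.0    # 1-bit quantizer
    return bits
```

Feeding in a DC level of 0.5 yields a bitstream whose mean converges to 0.5, which is the whole trick: amplitude is encoded in pulse density.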
On the Google Groups forums, 10.8.1999, is the thread “Why sine wave in Fourier series?” with an answer on 12.8 by Timo Tossavainen: the Karhunen-Loève transform is better than the Fourier transform (but works only in analog electronics, for example manufactured at 16nm); Walsh-Hadamard is also suitable, as are wavelets and wavelet packets; the Slant transform suits saw waves; also the Hartley transform, and a permutation matrix for the DFT. If the permutation matrix approach is matrix computing, is it fast on a GPU? Futhark and Harlan are (rare) GPU programming languages. Another signal processing method is Alessandro Foi's “Shape-adaptive transforms filtering”; “Pointwise SA-DCT algorithms”. For alias-free waveforms there is “Alias-free nonlinear audio processing (ALINA)”. The Mcleyvier command language was used to control analog electronics. A similar programming language, built from the start with analog circuit behaviour in mind, for example for KLT in 16nm analog signal processing, is perhaps needed today. A 16nm analog circuit or analog / digital hybrid using KLT could be faster than digital processing; likewise an analog computer manufactured in a 16nm process (faster / more energy efficient than a 10nm digital computer, per the Georgia Institute of Technology), or an optical analog computer using KLT. Analog circuits are planned to be manufactured even at 5nm, although 16nm is possible today. Neuromorphic analog circuits can also be made, using KLT or another signal processing method. “A novel multi-bit delta sigma modulation FM-to-digital converter”, “Complex frequency modulation - Ian Scott's technology pages” (patented complex frequency modulation), “A nonuniform sampling adaptive delta modulation (ANSDM)”, “ELDON: floating point format for signal processing”, “Vertex data compression through vector quantization”. The quire is the part of the unum / posit system that concerns vectors: a long accumulator for exact dot products.
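Tossavainen's point about the KLT being better than Fourier is that its basis is computed from the signal itself rather than fixed in advance; in digital form this is just PCA of signal frames. A sketch (function and variable names are mine):

```python
import numpy as np

def klt(frames):
    """Karhunen-Loeve transform sketch over signal frames.

    `frames` is (n_frames, frame_len). The KLT basis is the set of
    eigenvectors of the data's own covariance matrix, so unlike the
    fixed Fourier or Walsh bases it is optimal (decorrelating, best
    energy compaction) for that particular signal.
    """
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    cov = np.cov(frames - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(evals)[::-1]          # strongest component first
    basis = evecs[:, order]
    coeffs = (frames - mean) @ basis
    return coeffs, basis, mean
```

Because the basis is orthogonal, `coeffs @ basis.T + mean` reconstructs the frames exactly, and the transformed coefficients are mutually uncorrelated.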
In the text “Gradual and tapered overflow and underflow: a functional differential equation and its approximation” (2006) it is said that a floating point number can have an overflow threshold of 10 to the power 600 000 000, a number with 600 000 000 decimal digits, enough (within its overflow range) to store all the information in the world in one floating point number. “Bounded floating point”. “New number systems seek their lost primes”, “Base infinity number system” by Eric James Parfit, “Peculiar pattern found in “random” prime numbers”, “Hyperreal structures arising from logarithm”. Benford's law is explained in “DSP guide chapter 34: explaining Benford's law” by Steven W. Smith: Benford's law, which appears everywhere in nature, is an antilogarithm phenomenon. Perhaps a similar style of explanation suits “hyperreal structures from logarithm” and the “peculiar pattern in prime numbers”. Wikipedia: “Ideal (ring) theory”, “Ideal number”. The KLT, the anamorphic stretch transform and other things that work well in the analog domain but not in digital can be used in audio, video or any signal processing, because analog circuits are made with 16nm tech nowadays and 5nm analog circuits are planned. So data compression could use analog processing instead of digital. In the text “Beating posits at their own game” by Quadibloc (John G. Savard) on the Google Groups forums, 2017 (if it does not show in the Google browser it is shown in the Microsoft browser), tapered floating point / unum is treated like the logarithmic A-law system in his HGU / EGU floating point. Is this “logarithmic floating point”, putting the logarithmic and floating point number systems together? Is it also “fractional floating point”, because logarithms are fractional values, not integers? Floating point numbers have integers as sign bit, mantissa and exponent. Can Fibonacci numbers be used with floating point the way John G. Savard's floating point system uses logarithms?
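For concreteness, the A-law companding that the Savard comparison refers to is a quasi-logarithmic map of [-1, 1]: nearly linear for small amplitudes, logarithmic for large ones, which is the same taper-the-precision idea as tapered floating point. A sketch using the standard G.711 parameter A = 87.6:

```python
import numpy as np

A = 87.6  # standard A-law compression parameter (G.711)

def alaw_compress(x):
    """A-law companding of x in [-1, 1]."""
    ax = np.abs(x)
    y = np.where(ax < 1.0 / A,
                 A * ax / (1.0 + np.log(A)),                  # linear segment
                 (1.0 + np.log(A * ax)) / (1.0 + np.log(A)))  # log segment
    return np.sign(x) * y

def alaw_expand(y):
    """Inverse of alaw_compress."""
    ay = np.abs(y)
    thr = 1.0 / (1.0 + np.log(A))
    x = np.where(ay < thr,
                 ay * (1.0 + np.log(A)) / A,
                 np.exp(ay * (1.0 + np.log(A)) - 1.0) / A)
    return np.sign(y) * x
```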
The Fibonacci series is the golden ratio / tau in integer form (a ratio of about 1.6), so it encodes a fractional value, like logarithms. In the text “DC-accurate, 32 bit DAC achieves 32 bit resolution” it is said that this DAC has theoretically infinite DNL and monotonicity. Infinite accuracy could be used in data compression. The mathematician Srinivasa Ramanujan made mathematical studies about infinity, or something like it; at least the film made about him is named “The Man Who Knew Infinity”. If Savard's floating point is logarithmic A-law, what about other logarithmic companding with floating point numbers? “Asymptotically optimal scalable coding for minimum weighted mean square error” 2001, “Geometric piecewise uniform lattice vector quantization of the memoryless Gaussian source” 2011, “Spherical logarithmic quantization” Matschkal, “On logarithmic spherical vector quantization” (LSVQ), “Gosset low complexity vector quantization” (GLCVQ), “Lattice spherical vector quantization” Krueger. Most of those have something to do with logarithmic quantization. So can they be used in a floating point number system the way Savard uses A-law, increasing accuracy dramatically? There are others like “Design of tree-structured multiple description vector quantization” (TSVQ), “Tree-structured product-codebook vector quantization”, and “An improvement to tree structured vector quantization” Chu 2013, but whether those have anything to do with logarithms I don't know. “Low delay audio compression using predictive coding” 2002 has the “weighted cascaded least mean square” (WCLMS) principle; whether it has something to do with logarithms I don't know. If analog circuits are used in processing, an “analog error cancellation logic circuit” can be used: “Active noise cancelling using analog neuro-chip with on-chip learning capability”, “Capacitor mismatch error cancellation technique for SAR-ADC”, “Error cancelling low voltage SAR-ADC”.
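The sense in which the integer Fibonacci series encodes the fractional golden ratio is simply the limit of consecutive ratios; a two-line check:

```python
def fib_ratio(n):
    """Ratio of consecutive Fibonacci numbers after n steps; it
    converges to the golden ratio phi = (1 + sqrt(5)) / 2 ~ 1.618,
    which is how the integer series carries an irrational value."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a
```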
On the “neuraloutlet wordpress” (com) netpage are “metallic numbers” and the “U-value number system”; whether those can be used in data compression I don't know. If they are logarithmic, perhaps something similar to how Savard uses A-law is possible in floating point. On the “shyamsundergupta number recreations” netpage, in “fascinating triangular numbers”, it is noted that the number sequences 1, 11, 111 etc. are all triangular numbers in base 9, so their binary representations are simple and they can be compressed easily. On the same netpage, in the “unique numbers” section, it is noted that the digital root of unique numbers is 9, and the number / base 9 appears widely in other properties of unique numbers also. So is 9 then the best base for integer computing, not 10 or 2? And in the section “curious properties of 153” is “the curious properties of binary 153”, which forms an “octagonal binary ring” with 8 bits / 255 numeral values. That is reminiscent of the “Z4 cycle code” that is used, for example, in converting quaternary base to binary. Can such properties as number / base 9 has, or number 153 with its ring of binary digits, be used in data compression? Logarithmic Cubic Vector Quantization (LCVQ), “Logarithmic quantization in the least mean squares algorithm” Aldajani, “State estimation of chaotic Lurie system with logarithmic quantization”, “Semi-logarithmic and hybrid quantization of Laplacian source”, “Finite gain stabilisation with logarithmic quantization”, “A logarithmic quantization index modulation”: can any of these be used with floating point the way Savard uses logarithms in HGU / EGU? “A bridge between numeration systems and graph directed iterated function systems”, “Wavelet audio coding by fixed point short block processing”, “preferred numbers”: can some of those be used in floating point? Preferred numbers are used in parcel sizes and are “logarithmic”, so can preferred numbers be used in floating point systems also?
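The claim that 1, 11, 111, ... in base 9 are all triangular numbers is easy to verify; a quick check (helper names are mine):

```python
def is_triangular(n):
    """True if n = k*(k+1)/2 for some integer k."""
    k = int(((8 * n + 1) ** 0.5 - 1) / 2)
    # check neighbours of the float estimate to dodge rounding
    return any(m * (m + 1) // 2 == n for m in (k, k + 1))

def base9_repunit(digits):
    """The number written as 1, 11, 111, ... in base 9."""
    return (9 ** digits - 1) // 8
```

For example `base9_repunit(3)` is 91, the 13th triangular number; in general the base-9 repunit with n digits is the triangular number with index (3^n - 1) / 2.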
In the text “Making floating point math highly efficient for AI hardware” (8.11.2018) is a list of FP systems: nonlinear significand maps / logarithmic numbers (Kingsbury, Rayner 1971), reciprocal closure (2015), binary stochastic numbers (Gaines 1969), entropy coding / tapered FP (Morris 1971; if tapered / posit FP is data compression, how about using Finite State Entropy with posit / tapered FP?), posits (2017), fraction map significand (“Universal coding of the reals”, Lindstrom 2018), exact log-linear multiply-add (ELMA), and Kulisch accumulation. ELMA is 8-bit, with 4 bits of accuracy but 24 bits of range; such a format is suitable for an ADPCM-style system. That text was written only a few months ago, so it is the latest thing in FP research. Some of those are logarithm based, like Savard's HGU / EGU. For logarithms there is the XLNSresearch (com) netpage, with a long list of studies of logarithmic number systems, including multidimensional logarithmic, index calculus DBNS, hybrid LNS / FP number systems, “Complex LNS arithmetic using high-radix redundant CORDIC algorithms” 1999, “Novel algorithm for multi-operand LNS addition and subtraction” 1995, “Common exponent floating and logarithmic radix-22X2 pipeline” 2010, “Architectures for logarithmic addition in integer rings and Galois fields” 2001, “A new approach to data conversion: Direct analog-to-residue conversion”, “A 32 bit 64-matrix parallel CMOS processor” 1999. Lucian Jurca has written about combining LNS and FP together. Other: “Parametrizable CORDIC based FP library”, the FloPoCo library, Stack Overflow 2018: “Algorithm - compression by quasi-logarithmic scale”, “High resolution FP ADC” Nandrakumar. Savard uses A-law logarithmic in FP; can those other logarithmic systems be used in FP also? And CORDIC (or the BKM algorithm) and the Fibonacci series too?
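The core trade-off that all the LNS papers above elaborate on fits in a few lines: multiplication becomes addition of the logs, while addition needs a Gaussian-logarithm correction. A toy real-valued sketch with no quantization (function names are mine):

```python
import math

def lns_encode(x):
    """Encode a positive real as its base-2 logarithm (the LNS value)."""
    return math.log2(x)

def lns_mul(lx, ly):
    """Multiplication in LNS is just addition of the log values."""
    return lx + ly

def lns_add(lx, ly):
    """Addition is the hard part in LNS: it needs the Gaussian
    logarithm correction log2(1 + 2^(ly - lx))."""
    if ly > lx:
        lx, ly = ly, lx
    return lx + math.log2(1.0 + 2.0 ** (ly - lx))

def lns_decode(lx):
    return 2.0 ** lx
```

Hardware LNS designs spend nearly all their effort on tabulating or approximating that correction term, which is where CORDIC and the multi-operand addition algorithms cited above come in.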
There are many more logarithmic systems, like complex / LNS hybrids, Monte Carlo LNS, denormal LNS, redundant LNS, residue LNS, interval (arithmetic) LNS, dual real / LNS hybrids, serial LNS, two-dimensional LNS (2DLNS), multidimensional LNS, signed digit LNS, semilogarithmic LNS, and multi-operand LNS. Those can perhaps be used as the logarithmic “steps” of floating point numbers, the way Savard uses A-law logarithmic. Also, floating point numbers have a huge disparity between exponent range and mantissa accuracy, and FP numbers are difficult to data compress. If compression is used only on the mantissa and the exponent is left uncompressed, integer compression methods can be used on the mantissa. Using software tricks to expand mantissa accuracy up to 39 times its normal accuracy is possible. Or use data compression: the integer part can be compressed using an ADPCM / delta compression type system, which is lossy compression. Ultra-low-delay audio compression techniques use ADPCM and other methods and are fast, with only about 1 millisecond of delay or even less. Bit truncation and Finite State Entropy can also be used. Then only the mantissa is data compressed, because the exponent part does not need compression, and mantissa accuracy ends up closer to the exponent range. AI research in particular has invented different minifloats, “dynamic range integers”, stochastic rounding FP, etc. The smallest minifloats are 8 and even only 4 bits (IBM, the Clover FP library, etc.). Those have really small accuracy, and it would help if the mantissa were data compressed or expanded using software. Processing would be faster if only the mantissa is compressed.
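The bit-truncation option mentioned above can be sketched directly on IEEE doubles: keep the sign and exponent (range) untouched and throw away low mantissa bits (precision), which is exactly what minifloat conversions like bfloat16 do. Helper name is mine:

```python
import struct

def truncate_mantissa(x, keep_bits):
    """Zero out all but `keep_bits` of a float64's 52-bit mantissa,
    leaving sign and exponent untouched: crude lossy compression
    that preserves dynamic range and trades away precision.
    """
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', bits & mask))[0]
```

Truncating to k mantissa bits bounds the relative error by 2^-k regardless of the number's magnitude, which is why exponent bits need no compression.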
“Decomposed algebraic integer quantization”, “Alias-free short-time Fourier transform”, the sparse fractional Fourier transform, “multi-amplitude minimum shift keying format”, sparse composite quantization, pairwise quantization, “Space vector based dithered DSM”, nonuniform sampling DSM, implicit sparse code hashing, “Analytical evaluation of VCO-ADC quantization using pulse frequency modulation” Hernandez, “Time quantized FM using time dispersive codes” Hawksford, multiple description coding DSM, “Design of multi-bit multiple-phase VCO-based ADC” 2016. I googled “zero set” together with Pisot numbers and Parry numbers, in case that has something to do with endless data compression, and I found “Ito-Sadahiro numbers vs. Parry numbers”, “Palindromic complexity of infinite words associated with simple Parry numbers”, “A family of non-sofic beta-expansions”, “Combinatorics, automata and number theory”, “Beta-shifts, their languages and computability”, and “Abelian complexity of infinite words associated with quadratic Parry numbers”. But I don't understand those mathematical formulas.