Quasicrystals are mathematical structures that are used, for example, in cosmology and quantum computing. Could quasicrystals also be used for data compression? I don't know, because I am not a mathematician. From the community.wolfram page: “1D quasicrystals by Fibonacci substitution and lattice projection”. Fibonacci numbers generalize to tribonacci (order-3 Fibonacci) numbers and so on. Do quasicrystals have a greater capacity to store information than order-3 Fibonacci, zero-displacement ternary and similar number systems? Or more storage capacity than hereditarily binary / tree-based number systems (Paul Tarau), or the number systems presented on the page “Novaloka maths on the number horizon - beyond the abacus”? Can a quasicrystal contain more information? Strange concepts such as “time crystals” derived from quasicrystals have been proposed. Unrelated to the above, there is “Mathematics without quantifiers is possible” by Erkki Hartikainen (2015); unfortunately this text is buried inside his very long internet publication, most of which has nothing to do with math, so it is a bit difficult to find. Never mind his ideological material; focus on the scientific text only. He has also written “A new proposal for empirical geometry” (2015), and “An ontological anti-relativist postulate in physics”, which is easy to find. Whether quasicrystals can be used in video, audio, text or other data compression I don't know. Or in space probe communication and other places where high compression efficiency is needed. I don't actually know the mathematical description of a quasicrystal, or even understand it. Wolfram graph: “Tridimensional trivalent graph”. See also “From prime numbers to nuclear physics and beyond” (2013), “Critical wave functions and a Cantor-set spectrum of a one-dimensional quasicrystal model” (1987), “Electronic energy spectrum of cubic Fibonacci quasicrystal” (2001), and “Between order and disorder, Hamiltonians for…” (chemnitz page).
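As a small concrete illustration of the Fibonacci substitution mentioned above (this is just the standard substitution rule A → AB, B → A that generates the 1D Fibonacci quasicrystal word, not a compression method), a minimal Python sketch:

```python
# Build a 1D Fibonacci quasicrystal word by repeated substitution:
# A -> AB, B -> A (the standard Fibonacci substitution rule).

def fibonacci_word(iterations):
    """Apply the substitution A->AB, B->A to the seed 'A' repeatedly."""
    word = "A"
    for _ in range(iterations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

w = fibonacci_word(7)
# The word lengths follow the Fibonacci numbers: 1, 2, 3, 5, 8, 13, 21, 34, ...
print(len(w))    # 34
print(w[:13])    # ABAABABAABAAB
```

The resulting aperiodic but ordered letter sequence is exactly the kind of structure the Wolfram page obtains by lattice projection.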
Multifractal systems are related to quasicrystals, and multifractals have been used in data compression: “Super-resolution reconstruction of remote sensing image using multifractal…”, the experimental M3F multifractal image compression, and other multifractal applications. The amplituhedron is used in quantum physics; it makes long equations much shorter. From the Futurism page: “New discovery simplifies quantum physics” (2013). The amplituhedron is a sort of data compression method in itself, so perhaps it could be used for data compression, and perhaps quasicrystals too. A number system based on the quasicrystal or the amplituhedron could perhaps also be made. It might resemble index-calculus-based number systems, or be based on differential equations or something like that; I don't know, I am not a mathematician. If the amplituhedron makes equations much shorter, a number system built on such equations would be an “amplituhedron number system”; if a quasicrystal packs numerical information into a small space, a quasicrystal could likewise serve as the base of some exotic number system, if that makes efficient data compression possible. There is also the “Base infinity number system” by Eric James Parfitt; if the amplituhedron or quasicrystal and the base infinity number system are combined, a very large numerical base could be represented as a short amplituhedron or quasicrystal description. That might lead to an “infinity computer” or similar concepts. It would perhaps not be an infinite-base number system, but a very-large-base system, an “almost infinite range number system”. Floating point numbers already have an extremely large range, but limited accuracy; perhaps accuracy could also be made very large if the amplituhedron or quasicrystal is used in some number system. Also “TGD as generalized number theory” by Matti Pitkänen has something about p-adic numbers and the TGD theory of physics.
If p-adic numbers can offer a way to make “true compact numbers” (in the “Floating point numbers as information storage and data compression” page), perhaps this TGD text, or related texts such as those on “fuzzy topologies”, can offer another way (“Topologies created by the fuzzy numbers”, 2017; “Connecting fuzzifying topologies and generalized ideals by means of fuzzy preorders”; etc.), and “supra fuzzy topologies” (“A note on intuitionistic supra fuzzy soft topological spaces”, “Supra fuzzy topological spaces: fuzzy topology”). Or fuzzy topologies and/or p-adic numbers could make a very efficient number system, like the amplituhedron or quasicrystal. Again, I don't know, because I am not a mathematician. There is also the skyrmion and the knotted skyrmion: “Skyrmion reshuffler comes to aid stochastic computing”. If skyrmion-based stochastic circuits make stochastic computing possible, that might in turn make it possible to use “true compact numbers”, if stochastic computing can be combined with them. Perhaps only at limited accuracy; but 100% accuracy, or anywhere near it, is not always needed. If “true compact numbers” can be used in stochastic or some other computing, even very coarse accuracy is acceptable, and the savings in information space could be huge; if 100% accuracy is needed, true compact numbers perhaps cannot be used. Stochastic computing, fuzzy topologies, supra fuzzy topologies, amplituhedrons, quasicrystals, skyrmions and knotted skyrmions are perhaps suitable starting points in the search for true compact numbers usable in (fuzzy logic / stochastic etc.) computing. There is also Benford's law; maybe it can be used in data compression (it already is, in some applications) and also in making “true compact numbers”.
“A simple explanation of Benford's law”, “Benford's law, Zipf's law and the Pareto distribution”, Math Stack Exchange: “Why does Benford's law (Zipf's law) hold?” (2010). Benford's law is somehow related to Borel sets and descriptive set theory. “The modulo 1 central limit theorem and Benford's law”, “Leading digit laws on linear Lie groups”, “Order statistics and Benford's law”. “Python - Benford's law number generator inequality”, “Python - is there a random number generator that obeys Benford's law?” Whether Benford's law can be used in making “true compact numbers”, or in other data compression, I don't know. Borel sets, descriptive set theory and similar theories (which I don't understand, because I am not a mathematician) may also help. In Steven W. Smith's “DSP guide, chapter 34: Explaining Benford's law” the point is that Benford's law is a simple (or complex) logarithm and antilogarithm matter, and there is nothing mystical or unexplained about it. Related topics are sigma algebras, Zech logarithms, and ergodic theory: “Ergodic theory: interactions with combinatorics and number theory”, Math Stack Exchange questions tagged ergodic theory, “Combinatorially designed LDPC codes using…”, the Borel hierarchy, “Expansions of Black-Scholes processes and Benford's law”. Whether any of this (ergodic theory etc.) helps in data compression or in making “true compact numbers”, I don't know. There are also the winding number, the nonzero rule, the residue theorem, and point-in-polygon principles, if any of them helps to make “true compact numbers” or helps in data compression. The book “God of the universe” by L. Charles Arnold discusses the logarithmic spiral and the relationship between the logarithmic spiral, the golden ratio and the Fibonacci series; the golden ratio and the Fibonacci series are connected to each other.
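Benford's law is easy to check numerically. A minimal sketch (just the well-known leading-digit statistic, not a compression scheme): the leading digits of powers of two follow the law, and the observed frequency of digit 1 can be compared with the predicted value log10(1 + 1/1) ≈ 0.301:

```python
import math

def leading_digit(x):
    """First decimal digit of a positive integer."""
    return int(str(x)[0])

# Count leading digits of 2^1 ... 2^2000.
N = 2000
counts = [0] * 10
for n in range(1, N + 1):
    counts[leading_digit(2 ** n)] += 1

observed = counts[1] / N            # fraction of powers starting with digit 1
predicted = math.log10(1 + 1 / 1)   # Benford prediction for digit 1, about 0.301
print(observed, predicted)
```

The agreement here comes from the equidistribution of n·log10(2) mod 1, which is the “logarithm and antilogarithm matter” the DSP guide chapter refers to.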
Perhaps this has nothing in common with the text above, but is it possible to represent a number as a point in a graphical plane (number plane), the way computer graphics cards represent graphics, and does this graphical representation reduce the bit width that the number needs? Real and complex numbers are points in the number plane (a bit plane, or a graphical two- or three-dimensional plane). So the graphics card of a PC could address one place in that bit plane accurately using ray tracing or some other graphics method, matrices or other math. I think tensor processors built on GPUs use something similar. If a number is very complicated, like a complex or hypercomplex number, doing computations with it is perhaps difficult; but if those numbers are points in a graphical plane (a 2D or 3D picture such as computer games use), perhaps they can be computed and represented using the graphics math that a GPU does. Another idea is to represent a number as a waveform (frequency), like a sound wave or radio-frequency wave, that a computer or other hardware processes. A/D conversion is partly analog: analog electronics turn the analog wave (frequency) into digital form. A VCO-ADC uses one or two voltage-controlled analog oscillators to convert to delta-sigma modulation. Does this analog processing offer some “data compression” compared to a strictly digital mathematical representation of a complex number? The complex number is turned into an analog wave (frequency) for processing. Both ideas rely on turning digital or other numerical information either into graphical form, a point in a picture (GPU processing), or into a waveform that can be represented as a frequency, like a radio or sound frequency. Complex numbers could perhaps be processed more easily and faster this way, and the bit width required to represent very complicated mathematical values could perhaps be shortened, so it is a kind of “data compression”.
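One way to make the “complex numbers as GPU math” idea concrete is the standard trick of representing a + bi as the 2×2 matrix [[a, -b], [b, a]], so that complex multiplication becomes matrix multiplication, which is exactly the kind of operation GPUs and tensor units are built for. A minimal sketch, not tied to any particular GPU API:

```python
# A complex number a+bi can be handled as the matrix [[a, -b], [b, a]];
# complex multiplication then becomes plain 2x2 matrix multiplication.

def as_matrix(z):
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul2(m, n):
    """Multiply two 2x2 matrices."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
m = matmul2(as_matrix(z), as_matrix(w))
print(m[0][0], m[1][0])   # real and imaginary parts of z*w
print(z * w)              # (5+5j), matching
```

This does not by itself save any bits, but it shows why complex (and, via larger matrices, hypercomplex) arithmetic maps naturally onto matrix hardware.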
When the frequency representation is used, oscillator-generated waveforms can serve as the “data compressor”. An oscillator usually creates waveforms from simple starting values, as in “Vector phaseshaping synthesis” or “The geometric oscillator”. If this process could be reversed, so that a complex waveform is put through the oscillator and the simple starting values are the result, the complex number would in the end be “data compressed”. The oscillator can be either analog or digital, and the processing either analog or digital. I googled “geometric number system” and “geometric oscillator” and found that almost all texts are about nuclear physics and not the “plain” math I am searching for, but I found something, like “New foundations in mathematics” by G. Sobczyk (2013). Eric James Parfitt's “Base infinity number system” is on YouTube. Also: “Foldable number systems”, “Matrices from a geometric perspective” (Coronac), “A hybrid number processor with geometric and complex arithmetic” (about logarithms, not this subject), ergodic theory, “Circular geometry oscillators”, “Geometric quantization of generalized oscillator”, “Symplectic geometry and geometric quantization”, “Harmonic function theory” (Sheldon Axler), “Complex vector formalism of harmonic oscillator in generalized algebra”, “Converting an infinite decimal expansion to a rational number”, “Harmonic oscillator potential and deformed magic numbers” (Shodhganga), “The Wablet: scanned synthesis on a multi-touch interface”.
If a complex number could be simplified to simple starting values using an oscillator, perhaps methods from sound synthesis could help, like the aforementioned VPS and geometric oscillator, and also: impulse modeling synthesis (IMS), 3D wavetable synthesis, fractional-delay inverse comb filters, detection and lateralization of a sinusoidal signal in the presence of dichotic pink noise, the hyper-dimensional digital waveguide mesh, normalized filtered correlation time-scale modification (NFC-TSM), fractional filter design based on truncated Lagrange interpolation, trendline synthesis, discrete summation formula synthesis (DSF), fast automatic inharmonicity estimation algorithms, Casio Zygotech polynomial interpolation (ZPI), impulse-based substructuring / replication of impulse response frequency characteristics, spectral synthesis, modal synthesis, Bezier curve synthesis, XOR synthesis, allpass filter chain synthesis, feedback amplitude modulation (FBAM), bandlimited waveforms using iterated polynomial interpolation, moving-average vector quantization, n-gon wave synthesis, polynomial transition regions, differentiated polynomial waveforms, granular synthesis, sample-based synthesis, wavetable synthesis, dynamic stochastic synthesis, and synthesis using a network of aural exciters (EXCTR, Samy Kramer and Jari Suominen). Some of those may help, perhaps. Different vector quantization models as well, like additive vector quantization. The idea is to turn a complex number into a point in a bit-plane picture that a GPU can handle and process fast, or into a waveform that a signal processor or similar can handle, using an oscillator to simplify it to its basic starting values. So, unlike an ordinary oscillator which produces waveforms, the complex waveform of a complex mathematical entity is driven through an analog or digital oscillator, and the end result is a mathematically simple, short-bit-width signal; driven through the oscillator again, it reproduces the original complex mathematical entity.
The data is not compressed in the usual sense; the number is just turned into a signal, and this signal is oscillator-processed so that it becomes significantly simpler than the original complex mathematical form. Vector phaseshaping or “geometric oscillator”-type digital signal processing can perhaps be used. In GPU computing, complex or hypercomplex numbers would live in 2D, 3D or 4D (two- to four-dimensional) bitmaps (pictures) in GPU memory. The complex number is then not stored at its full bit width; only pointer information to a point in the bitmap is stored, and this pointer has a smaller bit width than the complex number. The pointer can be a vector, an n-gon pointer, a matrix pointer, a ray trace, etc. Non-integer numbers like logarithms, fractional numbers and complex numbers can be represented, for example, by a triangle that has two integer sides and one non-integer side: the non-integer side is the non-integer number that is needed, so it is composed using triangles and the triangle-based computing a GPU does, and the number does not need to sit in computer memory as a long decimal expansion requiring many bits. And if the number is not fractional but some other complicated entity, using a bitmap (picture) with pointers, and storing only the pointer information rather than the number itself, can still save bit width. The number is in the bitmap picture among other bitmap numbers, and the bitmap is in GPU memory. So non-integer computing is suitable for GPU computing using graphical forms (pictures). When a number is turned into a signal (wave frequency) using an oscillator, the number value is the modulator, and the waveform is a sound frequency, radio frequency or other electrical signal inside the computer, produced by an oscillator with the number as modulator. The oscillator can be digital, digitally controlled analog, or analog (or the kind that a VCO-ADC uses).
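The triangle idea above can be sketched in a few lines: two small integer legs (a, b) stand for the hypotenuse sqrt(a² + b²), so an irrational value is encoded by two integers rather than a long decimal expansion. This is only a toy illustration of the “non-integer side” idea, not an actual GPU number system:

```python
import math

# Toy "triangle number": store two integer legs (a, b); the represented
# value is the hypotenuse sqrt(a*a + b*b), which is generally irrational.

def triangle_value(a, b):
    return math.hypot(a, b)

# (1, 1) encodes sqrt(2) without storing any decimal expansion of it.
print(triangle_value(1, 1))   # 1.4142...
print(triangle_value(3, 4))   # 5.0 (a Pythagorean triple gives an integer back)
```

Of course, only values of the form sqrt(a² + b²) are reachable this way; a real scheme would need far more triangle shapes, as the text goes on to suggest.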
Complex mathematical objects can be turned into waveforms with an oscillator, and when the procedure is reversed, driving the waveform through the oscillator, the result is a simple oscillator or vector phaseshaping operation, or impulse modeling synthesis, or Wablet scanned synthesis, that is mathematically simpler than what we started with. The simple forms can be stored in memory, and when needed they can be driven through the oscillator or vector phaseshaper again (or through IMS or the Wablet), and the result is the complex form again; it can be converted from a waveform back to another mathematical form, such as numbers, if needed. Feedback amplitude modulation (FBAM) and allpass filter chain synthesis are other methods. Still others are scanned synthesis, swarm-based synthesis, wave field synthesis, wave terrain synthesis, spiral synthesis (Petersen), numerical (sound) synthesis and Walsh-Hadamard (sound) synthesis. Perhaps error-correcting codes other than Walsh-Hadamard could also be used in signal synthesis, in the style of Walsh-Hadamard sound synthesis. The oscillator and its circuits can use direct digital synthesis (like delta-sigma DDS, Orino) if they are not analog or DCO-based. And the GPU can use a graphical number system to represent non-integer fractional numbers, based on triangles, a kind of version of Parfitt's base infinity number system: instead of the numbers 1, 2, 3 etc., its numbers are triangles with one side of fractional value. Those triangle “numbers” are computed in the GPU using graphics processing. So it is a number system for fractional numbers, whose values are computed using triangles: a triangle has three sides, and one of them is the fractional-value side, which is the “number” being represented. Instead of computing a long string of bits / decimals of some non-integer number, the computer uses GPU-computed triangles and takes the one non-integer side of a triangle as the number in question.
That may be faster than computing a long string of bits / decimals of a non-integer number. Non-integer mathematics could perhaps be done using just those triangles and GPU computing, with different kinds of triangles representing different non-integer values. The article “New number systems seek their lost primes” (2017) has more triangle representations of numerical values. There are balanced ternary tau, the negative beta encoder and other non-integer computer arithmetic, computer mathematics using logarithmic values, and other non-integer computer math; this triangle number system could use them. Of particular interest is Garret Sobczyk and his publications, on his web page and elsewhere. They are about geometric algebra, or graphical representation of mathematics (or is it?). Perhaps GPGPU computing could use those kinds of geometric or matrix approaches. It is said that an analog music synth is an analog computer; can a digital synth be part of a digital computer? Petri Huhtala's PortOSC (portable oscillator) is a cell phone analog synth. “Digital conversion method for analog signal”, patent EP 1059731 (13.12.2000). ADCs other than the VCO-ADC are possible, like the PWM-ADC, PFM-ADC and SAR-VCO ADC. There are exceptionally stable VCOs (the Steiner-Parker VCO, Pigtronix VCO, Harmony Systems Inc.
/ Delora VCO design) and other unusual VCOs: the Malekko / Richter Oscillator 2 (OSC2), the Zeroscillator, Rob Hordijk's VCO designs, or the Electro-music page (2006) “Analogue gear news 9” that mentions the Bjorn J or BJ oscillator, which is “additive” or something like that; the Radikal Technologies Swarm oscillator (time linear modulation), the Dove Audio Window Transfer Function oscillator, the Melu Instruments phase-synchronized oscillator, the Audiospektri HG-30, the RSF Kobol VCO, morphing VCOs and morphing VCFs, the switched-capacitor VCO (“Simple switched capacitor voltage-controlled oscillator”, 1983), “Multiple circuit topologies of VCOs for multistandard applications”, “Combined frequency-amplitude nonlinear modulation: theory…”, “Iris: a circular polyrhythmic music sequencer”. Iris could perhaps be used not as a sequencer but as a modulation source in vector phaseshaping or the geometric oscillator, or other waveform generation that uses geometric (circular) shapes, like VPS or the geometric oscillator. Those oscillators and techniques could perhaps be used in a VCO-ADC, or in processing complex mathematical formulas into simplified signals. So this is like an analog computer that uses a VCO, although the processing can be digital after the VCO A/D conversion, once the mathematical formula has been turned into a frequency signal and simplified using the VCO; or digital oscillators could be used. Takis Zourntos's one-bit modulation without delta-sigma, and “A novel speculative pseudo-parallel DSM” (Johansson, 2014), are newer methods. “Multiple pulse per group keying” patents by Clinton Hartmann (2003) etc. Circuits that work like the Serge modular, where one circuit has several functions, could perhaps be used. “Timeline-based modulation” (Progress Audio Kinisis, and others) and cross modulation (like cross modulation of several VCOs, the Lyre-8 drone synth, kNoB Muscarin) could perhaps be used. Another way is to use a digital GPU for graphical computing (a computer that processes numbers as graphical pictures, GPGPU computing), using for example triangles as the number system base.
Analog computer CPUs and analog DSPs have been designed; how about an analog GPU manufactured at 16 nm, since 16 nm is the smallest manufacturing technology that can use analog circuits now? An analog TV / video / graphics processor is not so different from an analog GPU. If the processing is analog, it can use fractional values; this is the main benefit of an analog computer / DSP / GPU. If an analog synth uses vactrols / optocouplers, is it then an optical analog computer? High-frequency reproduction is a method where higher frequencies are replicated from a base frequency, usually at twice the base frequency; in sound reproduction, the human hearing range (about 18 kHz) can be limited to 7 kHz for storage and transmission, and then expanded back to about a 14 kHz range when reproduced (if double-frequency-range reproduction is used). How this principle could be used in signal processing other than audio, or in oscillators etc., I don't know. A sigma-delta ADC can use floating point numbers, perhaps “differential floating point” that uses small 4-, 6- or 8-bit FP or unum / posit microfloat / microposit values, or something similar to posit computing / microfloats. Logarithmic and other number systems can be used instead of FP / integer; Takis Zourntos's one-bit scheme without delta-sigma could perhaps use them as well. Vector compression / additive quantization (AQ) with a delta-sigma ADC, etc. “Between fixed point and floating point” by Dr. Gary Ray has FP numbers with data compression (reversed Elias gamma coding). Differential microfloats / microposits with 4-8 bits using data or vector compression, etc. In CPU technology the “pointer machine” is a model of computer arithmetic. If a pointer machine can point to information, can it be used as a pointer into graphical information, like a GPU bitmap? Or as a pointer into a cubic bit plane to extract information from it? Or as a pointer into the Champernowne constant to extract information from it?
The cubic bit plane and the Champernowne constant are in the post “Using floating point numbers as information storage and data compression”. Not only analog VCOs but analog filters can be used in signal processing / number manipulation computing, like the “mutant vactrol filter” or the Oakley Sound Croglin. Volterra synthesis and “Monte Carlo and quasi-Monte Carlo for image synthesis” are synthesis methods suitable for signal processing, but perhaps also for number field manipulation? In Wikipedia's audio codec material it is noted that the a-law and u-law (mu-law) logarithmic coding used in telephone codecs is in fact a version of floating point format. If logarithmic codings are just versions of floating point representation in binary form, is it possible to use fractional rather than integer values in the floating point mantissa and exponent, making it a logarithmic or other fractional number system format? There are plenty of logarithmic number systems in the articles section of the XLNS research overview page. Turning floating point logarithmic: for example, a standard IEEE FP format that uses fractional (logarithmic) values, not integers, for the mantissa or exponent, or both? Or newer methods like unum, posit, valid or EGU with fractional / logarithmic values? EGU is in “Re: beating posits in their own game” by Quadibloc (John G. Savard), Google Groups, comp.arch, 11.8.2017 (Google's browser does not show it, but Microsoft's browser does): extremely gradual underflow (EGU), and hyper-gradual underflow (HGU). If a logarithmic system is just a version of floating point (and vice versa), are unum, posit, valid and EGU also logarithmic? Can they use fractional values instead of integers? Perhaps more accuracy is possible using fractional (logarithmic) values with floating point, unum, posit, valid and EGU. Googling Y. Hida double precision floating point brings many results, e.g. “Parallel algorithms for summing floating point numbers” (2016).
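The a-law / mu-law point can be made concrete with the continuous form of mu-law companding (the curve behind ITU-T G.711; the 8-bit quantization step of the real codec is omitted here), showing the logarithmic compress / expand pair:

```python
import math

# Continuous mu-law companding of a signal in [-1, 1] with mu = 255,
# as in G.711 telephone coding (before the 8-bit quantization step).

MU = 255.0

def mu_law_encode(x):
    """Compress: logarithmic curve gives small values more resolution."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Expand: exact inverse of the encoding curve."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.1
y = mu_law_encode(x)
print(y)                  # about 0.59: small inputs are boosted
print(mu_law_decode(y))   # round trip recovers 0.1
```

The piecewise-linear digital approximation of this curve is sign + exponent + mantissa, which is exactly why mu-law is often described as a crude floating point format.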
“Recycled error bits: energy-efficient architectural support for higher precision floating point”. Google Groups 2011: “A new numeral system” (Quoss Wimblik). “Prof. Hachioji new number system for integral numbers with four lowercase letters”. “Counting systems and the first Hilbert problem” (the Pirahã system), “A new computational approach to infinity for modelling physical phenomena”, “Stochastic arithmetic in multiprecision” (2011), “Multiple base composite integer” on the MROB page. Can a multiple-base composite integer be used with floating point, unum, posit, valid or EGU / HGU? A version of floating point could be “differential floating point”, DFP (another post in the Robin Hood forums). Can unum, valid, posit or EGU / HGU be put in a differential “microfloat” form, the way 4-bit ADPCM is a compressed form of 16-bit linear PCM? Can fractional / logarithmic values (something like a-law and mu-law encoding, which are logarithmic / floating point formats) be used with differential floating point? Is there something like a differential logarithmic system, as ADPCM is differential PCM: instead of ADPCM it would use logarithmic values like a-law or mu-law in differential form? Can unum, posit, valid or EGU / HGU use a differential logarithmic / fractional system? Or a multiple-base composite integer with unum, posit, valid or EGU in differential form? Can delta-sigma modulation, or Takis Zourntos's one-bit encoding without delta-sigma, use posit, valid, unum or EGU / HGU? Or a multiple-base composite integer? DSM is logarithmic in nature, and floating point DSM already exists. NICAM was ADPCM that used white noise, but NICAM with dithering instead of white noise, to improve accuracy, could be used in information processing: dithering with floating point, unum, valid, posit, EGU / HGU, logarithmic or some other number system, in a NICAM-style almost-ADPCM system. Using a 1-bit serial processor or a transputer with lambda calculus in hardware is perhaps also possible.
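The “ADPCM is differential PCM” relationship reduces to a simple idea: store the first sample plus the differences between neighbours, which are small for a slowly varying signal. Real ADPCM adds adaptive quantization of those differences; this toy sketch leaves that step out:

```python
# Minimal DPCM sketch: encode a sample stream as first sample + deltas;
# decoding is a running sum. Real ADPCM quantizes the deltas adaptively.

def dpcm_encode(samples):
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def dpcm_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

sig = [100, 102, 105, 104, 104, 101]
enc = dpcm_encode(sig)
print(enc)                       # [100, 2, 3, -1, 0, -3]: small deltas
print(dpcm_decode(enc) == sig)   # True: lossless round trip
```

A “differential logarithmic” system in the sense asked above would apply the same delta step to a-law / mu-law coded values instead of linear ones.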
Using the Walsh-Hadamard transform in signal processing instead of the Fourier transform is possible, and the Hadamard transform can be used in the analog domain, not just digital; analog circuits are now made with 16 nm manufacturing technology. So the Hadamard transform / Walsh functions in analog form could be more efficient than the Fourier / fast Fourier / sparse Fourier transform (if there is a “sparse Hadamard transform” or “sparse Walsh function”). “Application of a real-time Hadamard transform network to sound synthesis” (Bernard Hutchins, 1975), “Experimental electronic music devices employing Walsh functions” (1973), “Walsh functions in waveform synthesizers” (Insam, 1974). Fuzzy granular synthesis with Walsh functions. Phase offset modulation is one of many modulation methods for a signal (in addition to the modulation methods listed in the previous text). Wave folding (“Simple wavefolder”, like the YouTube video “DIY analog synth project wavefolder post VCA”, 2018), the wave concatenator, and dynamic waveform morphing / crossfading are other signal processing methods, as are timeline-based synthesis (Progress Audio Kinisis) / timesplice synthesis (Synclavier) and spherical harmonics synthesis, as in the “Spherical harmonics synthesizer”. Spherical quantization is used, for example, in ADPCM. Whether additive quantization (AQ) can be used in spherical / pyramid / cubic quantization schemes I don't know. On the Chipdesignmag page is “Between fixed point and floating point” by Dr. Gary Ray, presenting different experimental floating point systems; the version which uses a “reversed Elias gamma exponent” could perhaps be used in delta-sigma modulation, or in Takis Zourntos's one-bit modulation without delta-sigma: 1-bit (reversed) Elias gamma as DSM or as the Takis Zourntos model. Or the other exponent models presented in that article, in 1-bit form, could perhaps make an efficient DSM or Takis Zourntos model. Vector quantization (additive quantization, AQ) with DSM or the Takis Zourntos 1-bit model could perhaps also be done.
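The fast Walsh-Hadamard transform itself is short enough to show. Unlike the FFT it needs only additions and subtractions, no multiplications, which is part of why analog implementations have been proposed. A minimal sketch, assuming an input length that is a power of two:

```python
# In-place fast Walsh-Hadamard transform (unnormalized butterfly form).
# Only additions and subtractions are used; length must be a power of two.

def fwht(a):
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

data = [1, 0, 1, 0, 0, 1, 1, 0]
spec = fwht(data)
print(spec)
# The unnormalized transform is its own inverse up to a factor of len(a):
print([v // len(data) for v in fwht(spec)])   # back to data
```

The first output coefficient is just the sum of the input, the Walsh-function analogue of the DC term in a Fourier spectrum.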
Vector phaseshaping is used in oscillators, but does it work with DSM or the Takis Zourntos model too? Does the anamorphic stretch transform or the phase stretch transform work with DSM or the Takis Zourntos model? DSM can have internal multibit or multirate processing, although it is a 1-bit scheme. On the CHRTsynth page, “natural modulation” is done with vacuum tubes and an Abraham-Bloch multivibrator; the CHRT tube wind controller has formant and timbre modulation. Simple analog oscillator and filter designs with an absolutely minimal number of components are found, for example, on the EEWeb Extreme Circuits and Circuit-finder pages, and on bowdenshobbycircuits (Bill Bowden). Examples are “How to build 2nd order op amp filters”, “Triangle / squarewave generator”, “How to build reverse bias oscillator”, “How to build low frequency sinewave oscillator”, “Simple op amp bandpass filter”, “Variable high-pass filter”, “Simple white noise generator”, “FET audio mixer”, “Derive pure sine waves from digital signals”. Other simple ones are “Simple VCF schematic”, “Ultra simple VCF”, the Thomas Henry VCF-1, and “Very simple VCF+LFO”. Martin Vicanek has made many improvements in digital signal processing. Convolution synthesis and polyconvolution synthesis (BT Phobos) are synthesis / modulation methods, as are spectral morphing synthesis (Cube 2), harmonic content morphing synthesis (Firebird VST) and transwave synthesis (Ensoniq Fizmo). Unusual oscillators include the TMP array oscillator, the Theta oscillator (Räsänen), Kassutronics: Avalanche VCO, the Stroh Flexwave VCO, Petri Huhtala's PortOSC portable oscillator for cell phones, granular oscillators, “Grainy clamp-it”, the Falcontinuum VST multi-granular oscillator, scanned oscillators, the Swarmsynth swarm oscillator, “The geometric oscillator: sound synthesis with cyclic shapes” (2017), Zero Vector VST (White Noise Audio), the Native Instruments Form synth oscillator, and the Nonlinear Instruments VCV Rack modules. Dhalang MG uses “Rössler and Lorenz attractor fractal waveforms”.
Other complex sequenced waveforms are in the hardware “Ornament & Crime” sequencer, and in software: Modulys VST, PolyWaves VST, Subconscious VST, Seeq One VST, Photon VST, Pandemonium VST, Metatron 2 VST (“Nippy Baynes wavetable OSC”), Hydi VST. Le Sound AudioTexture and “A new paradigm for sound design” are new methods. Elektor magazine has “Digital formant synth (D-Formant) 130374” (2013), which uses BLEP (PolyBLEP) in a new form with large ROM tables. The RCS 370 polyphonic harmonic generator. Granular synthesis for signal processing, like the Borderlands iOS app. Fuzzy granular synthesis uses the Walsh-Hadamard transform, which works in the analog domain, so would granular signal processing with analog electronics be possible? The anamorphic stretch transform and the phase stretch transform are used in signal processing; both have something in common with the Fourier transform. Is it possible to make a Walsh-Hadamard-based anamorphic stretch transform or phase stretch transform that works in the analog domain, if 16 nm analog circuits are used? In the same way that AST and PST are related to the Fourier transform, there would now be versions of AST and PST related to the Walsh-Hadamard transform, working with analog circuits: analog (or digital) Walsh-Hadamard processing instead of the digital Fourier transform, with or without AST and PST. Vector phaseshaping is a version of phase distortion, so can the phase stretch transform and vector phaseshaping be combined? If vector phaseshaping works in oscillators, and analog oscillators are used in VCO-ADCs, can a VCO-ADC be built with, for example, a vector phaseshaper, the Walsh-Hadamard transform, or the phase stretch transform, or all of these combined? Or the anamorphic stretch transform in a VCO-ADC? Or the anamorphic stretch transform in DSM or the Takis Zourntos model? Or (analog or digital) Walsh-Hadamard functions in delta-sigma modulation or the Takis Zourntos model, with or without vector phaseshaping, the phase stretch transform or the anamorphic stretch transform?
A recursively indexed quantizer is used in ADPCM (RIQ-ADPCM), so it could perhaps be used with the Takis Zourntos model as well. The index-calculus number system is used in one version of the logarithmic number system. Finite State Entropy coding is a data compression method; how those three models suit a one-bit system like DSM, I don't know, but perhaps multibit / multirate DSM would do, or “pseudo-parallel DSM”. Any of those previous methods might make efficient signal processing possible, analog and digital, in DSM, the Takis Zourntos model and the VCO-ADC, or in other signal processing duties. Simple VCO and VCF designs can be used in miniature analog signal processing, for example in a VCO-ADC at 16 nm manufacturing, and analog or digital Walsh-Hadamard functions in a 16 nm analog or digital signal processor instead of Fourier transforms. Xampling is an analog version of sampling, sampling below the Nyquist rate; Walsh-Hadamard functions in fuzzy granular (analog) sampling also? “Granular synthesis of sound by fractal organization” (Paul Rhys). Elettra Venosa: “Time-interleaved ADCs”. In compressive sampling, like “Multi-channel simultaneous data acquisition through a compressive sampling-based approach”, 1/250th is enough to sample an analog signal, making sample reconstruction with a 250x compression ratio possible. “Robust compressive sensing via sparse vectors”, “Extreme compressive sensing for covariance estimation”. An audio codec, like a speech codec, that uses sparse vectors / compressive sensing is possible, and a formant / speech synth also. “Sample by sample adaptive vector quantization” (SADVQ). Quadrature amplitude modulation is an analog compression technique in television transmission (PAL), and there is the digital “Quadrature noise shaped encoding” (Ruotsalainen). The discrete summation formula is a method for creating lots of harmonics with ease; FM modulation (vector phaseshaping, phase offset modulation) is another.
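A first-order delta-sigma modulator, the 1-bit scheme referred to throughout, fits in a few lines: an integrator plus a 1-bit quantizer in a feedback loop, so that the running average of the output bits tracks a slowly varying input. A minimal behavioural sketch (ideal components, no noise shaping analysis):

```python
# First-order delta-sigma modulator: integrate the error between the
# input and the fed-back 1-bit output, then quantize to +1 / -1.

def dsm_first_order(samples):
    integrator = 0.0
    y = 0.0            # previous quantizer output (feedback value)
    out = []
    for x in samples:
        integrator += x - y
        y = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer
        out.append(y)
    return out

# A DC input of 0.25 produces a bit pattern whose average approaches 0.25.
N = 1000
bits = dsm_first_order([0.25] * N)
avg = sum(bits) / N
print(avg)   # close to 0.25
```

Because the integrator stays bounded, the average of the bits can deviate from the input only by an amount on the order of 1/N, which is the basic reason DSM trades sample rate for amplitude resolution.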
Can one-bit / multibit DSM or Takis Zourntos model ADCs and VCO-ADCs benefit from these? “Direct digital synthesis using sigma delta modulated signals” (Orino), “A tunable direct digital synthesis with tunable delta sigma modulation” (Vainikka). Can the Anamorphic stretch transform and Phase stretch transform be turned into analog form using Walsh-Hadamard functions as a model? And use Xampling? “Cascaded Hadamard-based delta sigma modulation” Alonso. And can the Phase stretch transform use Vector phaseshaping or Phase offset modulation or another FM method? Is fuzzy granular analog sampling with Walsh-Hadamard functions possible? And using Xampling? US patent 8487653B2 “SDOC with FPHA and FPXC” by Tang System, 2009. Can Xampling be used in DSM, the Takis Zourntos model or a VCO-ADC? Can a vector phaseshaping oscillator or “vector oscillator” by Vicanek and others be used in a VCO-ADC? Or vectors in DSM or in the Takis Zourntos model, with or without vector phaseshaping? Feedback amplitude modulation (FBAM) and allpass filter chain (sound) synthesis are other new signal processing methods; how those compare to analog signal processing I don't know. Can FBAM or an allpass filter chain be used in DSM, the Takis Zourntos model or a VCO-ADC? Or used in other signal processing than (music) sound? Granular additive synthesis and granular fractal synthesis are synthesis methods. DocNashSynths has made several new synthesis methods like “NHT nested hyperbolic and triangular functions”. FOF synthesis, corpus-based concatenative sound synthesis, IRCAM CataRT. Controlled envelope single sideband modulation is one way of modulation.
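Vector phaseshaping, as I understand it from the published papers (Kleimola and others), bends the oscillator's phase ramp through a control point (d, v) before it drives the waveform; the sketch below is my own simplified piecewise-linear version of that idea, not code from any cited source. Moving the point away from the diagonal d = v enriches the spectrum, and d = v gives back a plain cosine:

```python
import math

def vps_sample(phase, d, v):
    """One sample of piecewise-linear phase distortion (VPS-style).

    The normalized phase ramp (0..1) is bent through the control point
    (d, v): below d the warped phase rises from 0 to v, above d it
    continues from v to 1. The warped phase then drives an ordinary
    cosine oscillator. With d == v the warping is the identity.
    """
    if phase < d:
        warped = v * phase / d
    else:
        warped = v + (1.0 - v) * (phase - d) / (1.0 - d)
    return math.cos(2.0 * math.pi * warped)

# One cycle of a distorted waveform at 64 samples per period:
cycle = [vps_sample(i / 64.0, 0.3, 0.9) for i in range(64)]
```

With d = 0.3, v = 0.9 most of the cosine cycle is squeezed into the first third of the period, which is where the extra harmonics come from.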
On the Google Groups forums, 10.8.1999, is “Why sine wave in Fourier series?”, with an answer on 12.8 by Timo Tossavainen: the Karhunen-Loeve transform (KLT) is better than Fourier (but works only in analog electronics, for example manufactured at 16 nm), Walsh-Hadamard suits also, wavelets and wavelet packets also, the Slant transform is suitable for saw waves, the Hartley transform, a permutation matrix for DFT. If a permutation matrix is matrix computing, is it fast on a GPU? Futhark and Harlan are (rare) GPU programming languages. Another signal processing method is Alessandro Foi: “Shape-adaptive transforms filtering”, “Pointwise SA-DCT algorithms”. For alias-free waveforms there is “Alias-free nonlinear audio processing (ALINA)”. The Mcleyvier command language was used to control analog electronics. A similar programming language built from the start with analog circuit behaviour in mind, for example KLT, in 16 nm analog signal processing, is perhaps needed today. A 16 nm analog circuit or analog / digital hybrid using KLT can be faster than digital processing, and an analog computer manufactured in a 16 nm process (faster / more energy efficient than a 10 nm digital computer, Georgia Institute of Technology), or an optical analog computer using KLT. Analog circuits are planned to be manufactured even at 5 nm, although 16 nm is possible today. Neuromorphic analog circuits can also be made, using KLT or another signal processing method. “A novel multi-bit delta sigma modulation FM-to-digital converter”, “Complex frequency modulation - Ian Scott's technology pages” (patented complex frequency modulation), “A nonuniform sampling adaptive delta modulation ANSDM”, “ELDON: floating point format for signal processing”, “Vertex data compression through vector quantization”. The quire is the long accumulator register used with unums / posits, for exact dot products of vectors.
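The KLT mentioned in that forum answer is the optimal decorrelating transform: its basis vectors are the eigenvectors of the data's covariance matrix. Here is a self-contained sketch for the simplest useful case, pairs of adjacent samples, where the 2x2 eigendecomposition has a closed form as a rotation (my own illustration, assuming nothing beyond the textbook definition):

```python
import math

def klt_2d(pairs):
    """Karhunen-Loeve transform for pairs of correlated samples.

    The KLT basis is the eigenvectors of the covariance matrix,
    computed here in closed form for the 2x2 case (e.g. adjacent
    audio samples). Returns the rotation angle and the transformed
    coefficient pairs.
    """
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cxx = sum((x - mx) ** 2 for x, _ in pairs) / n
    cyy = sum((y - my) ** 2 for _, y in pairs) / n
    cxy = sum((x - mx) * (y - my) for x, y in pairs) / n
    # Eigenvectors of [[cxx, cxy], [cxy, cyy]] form a rotation by theta.
    theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    coeffs = [(c * x + s * y, -s * x + c * y) for x, y in pairs]
    return theta, coeffs

# With strongly correlated pairs, nearly all signal energy moves
# into the first coefficient, which is what makes the KLT useful
# for compression:
pairs = [(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.9)]
theta, coeffs = klt_2d(pairs)
```

For these correlated pairs the rotation comes out near 45 degrees and the second coefficient carries only a tiny residual, so it can be quantized coarsely or dropped.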
In the text “Gradual and tapered overflow and underflow: a functional differential equation and its approximation”, 2006, it is said that a floating point number can have an overflow threshold of 10 to the power 600,000,000, a number with 600,000,000 decimal digits, enough to store all the information in the world in one floating point number, in its overflow accuracy. “Bounded floating point”, “New number systems seek their lost primes”, “Base infinity number system” Eric James Parfitt, “Peculiar pattern found in “random” prime numbers”, “Hyperreal structures arising from logarithm”. Benford's law is explained in “DSP guide chapter 34: explaining Benford's law” by Steven W. Smith: Benford's law, which appears everywhere in nature, is an antilogarithm thing. Perhaps a similar style of explanation suits “hyperreal structures from logarithm” and the “peculiar pattern in prime numbers”. Wikipedia: “Ideal (ring theory)”, “Ideal number”. The KLT, the anamorphic stretch transform and other things that work well in the analog domain but not in digital can be used in audio, video or any signal processing, because analog circuits are made with 16 nm tech nowadays and 5 nm analog circuits are planned. So data compression can use analog processing instead of digital. In the text “Beating posits at their own game” by Quadibloc (John G. Savard) in the Google Groups forums 2017 (if it does not show in the Google browser it shows in the Microsoft browser), it is said that tapered floating point / unum is like the logarithmic A-law system in his HGU / EGU floating point. Is this “logarithmic floating point”, putting logarithmic and floating point number systems together? Is it also “fractional floating point”, because logarithms are fractional values, not integers? Floating point numbers have integers as sign bit, mantissa and exponent. Can Fibonacci numbers be used with floating point the way John G. Savard's floating point system uses logarithms?
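For reference, the A-law companding that Savard's tapered-floating-point comparison refers to looks like this (standard G.711-style continuous formulas, my own Python rendering): near zero the curve is linear and keeps fine resolution, above 1/A it becomes logarithmic with coarse steps, which is exactly the "tapered" trade-off between accuracy and range:

```python
import math

A = 87.6  # the standard A-law parameter

def alaw_compress(x):
    """Continuous A-law compression of x in [-1, 1].

    Linear below |x| = 1/A, logarithmic above: small values keep
    fine resolution, large values get coarse logarithmic steps,
    the same taper that tapered floating point applies to exponents.
    """
    ax = abs(x)
    if ax < 1.0 / A:
        y = A * ax / (1.0 + math.log(A))
    else:
        y = (1.0 + math.log(A * ax)) / (1.0 + math.log(A))
    return math.copysign(y, x)

def alaw_expand(y):
    """Inverse of alaw_compress."""
    ay = abs(y)
    if ay < 1.0 / (1.0 + math.log(A)):
        x = ay * (1.0 + math.log(A)) / A
    else:
        x = math.exp(ay * (1.0 + math.log(A)) - 1.0) / A
    return math.copysign(x, y)
```

The round trip is lossless in the continuous form; real A-law codecs then quantize the compressed value to 8 bits, which is where the actual compression happens.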
The Fibonacci series is the golden ratio / tau in integer form (about a 1.6 ratio), so it is a fractional value like logarithms. In the text “DC-accurate, 32 bit DAC achieves 32 bit resolution” it is said that this DAC has theoretically infinite DNL and monotonicity. Infinite accuracy could be used in data compression. The mathematician Srinivasa Ramanujan made mathematical studies about infinity, or something like it; at least the film made about him is named “The Man Who Knew Infinity”. If Savard's floating point is logarithmic A-law, how about other logarithmic companding with floating point numbers? “Asymptotically optimal scalable coding for minimum weighted mean square error” 2001, “Geometric piecewise uniform lattice vector quantization of the memoryless gaussian source” 2011, “Spherical logarithmic quantization” Matschkal, “On logarithmic spherical vector quantization” LSVQ, “Gosset low complexity vector quantization” GLCVQ, “Lattice spherical vector quantization” Krueger. Most of those have something to do with logarithmic quantization. So can they be used in a floating point number system the way Savard uses A-law, increasing accuracy dramatically? There are others like “Design of tree-structured multiple description vector quantization” TSVQ, “Tree-structured product-codebook vector quantization”, “An improvement to tree structured vector quantization” Chu 2013, but whether those have anything to do with logarithms I don't know. “Low delay audio compression using predictive coding” 2002 has the “weighted cascaded least mean square” WCLMS principle; whether it has something to do with logarithms I don't know. If analog circuits are used in processing, an “analog error cancellation logic circuit” can be used: “Active noise cancelling using analog neuro-chip with on-chip learning capability”, “Capacitor mismatch error cancellation technique for SAR-ADC”, “Error cancelling low voltage SAR-ADC”.
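On the question of using Fibonacci numbers as a number representation: there is a well-known concrete construction, the Zeckendorf / Fibonacci code, where every positive integer is a unique sum of non-consecutive Fibonacci numbers and the codeword is self-delimiting (it ends in the only "11" pair). The sketch below is a standard textbook version, not something from the cited texts:

```python
def fib_encode(n):
    """Fibonacci (Zeckendorf) code of a positive integer.

    Every positive integer is a unique sum of non-consecutive
    Fibonacci numbers. The codeword lists those bits least
    significant first and appends a terminating 1, so '11' only
    ever appears at the end: the code is self-delimiting.
    """
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs):        # greedy, largest Fibonacci first
        if f <= n:
            bits.append(1)
            n -= f
        else:
            bits.append(0)
    while bits and bits[0] == 0:    # strip leading zeros
        bits.pop(0)
    return bits[::-1] + [1]         # LSB first, then terminator

def fib_decode(code):
    """Inverse of fib_encode."""
    fibs = [1, 2]
    while len(fibs) < len(code) - 1:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, code[:-1]) if b)
```

For example 4 = 1 + 3 encodes as [1, 0, 1, 1]. Because the code is universal and prefix-free, it is occasionally proposed for compression of small integers, which connects back to the "Fibonacci numbers in a number system" question above.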
On the netpage “neuraloutlet wordpress” (com) are “metallic numbers” and the “U-value number system”; whether those can be used in data compression I don't know. If they are logarithmic, perhaps something similar to what Savard does with A-law is possible in floating point. On the netpage “shyamsundergupta number recreations”, in “fascinating triangular numbers”, it is said that the number sequences 1, 11, 111 etc. are all triangular numbers in base 9, so their binary representations are simple and they can be compressed easily. On the same netpage, in the “unique numbers” section, it is said that the digital root of unique numbers is 9, and the number / base 9 appears widely in other properties of unique numbers also. So is 9 then the best base for integer computing, not 10 or 2? And in the section “curious properties of 153” there is “the curious properties of binary 153”, which forms an “octagonal binary ring” with 8-bit / 255 numeral values. That reminds me of the “Z4 cycle code” that is used in converting quaternary base to binary, for example. Can such properties as number / base 9 has, or number 153 with its ring of binary digits, be used in data compression? Logarithmic Cubic Vector Quantization LCVQ, “Logarithmic quantization in the least mean squares algorithm” Aldajani, “State estimation of chaotic Lurie system with logarithmic quantization”, “Semi-logarithmic and hybrid quantization of laplacian source”, “Finite gain stabilisation with logarithmic quantization”, “A logarithmic quantization index modulation” — can any of these be used with floating point the way Savard uses logarithms in HGU / EGU? “A bridge between numeration systems and graph directed iterated function systems”, “Wavelet audio coding by fixed point short block processing”, “preferred numbers” — can some of those be used in floating point? Preferred numbers are used in parcel sizes and are “logarithmic”, so can preferred numbers be used in floating point systems also?
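The base-9 repunit claim can actually be checked directly: the repunit 1, 11, 111, ... read in base 9 has the value (9^k - 1)/8, and each such value is the triangular number T(m) with m = (3^k - 1)/2, since m(m+1)/2 = (9^k - 1)/8. A short verification (my own, using the standard test that n is triangular iff 8n + 1 is a perfect square):

```python
import math

def is_triangular(n):
    """n is triangular iff 8*n + 1 is a perfect square."""
    r = math.isqrt(8 * n + 1)
    return r * r == 8 * n + 1

# Base-9 repunits 1, 11, 111, ... have the value (9**k - 1) / 8.
# Each is T(m) with m = (3**k - 1) / 2, because
# m * (m + 1) / 2 = (9**k - 1) / 8.
repunits = [(9 ** k - 1) // 8 for k in range(1, 10)]
```

The first few values are 1, 10, 91, 820, ..., and the check confirms they are all triangular, as the netpage claims.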
In the text “Making floating point math highly efficient for AI hardware”, 8.11.2018, is a list of FP systems: nonlinear significand maps / logarithmic numbers (Kingsbury, Rayner 1971), reciprocal closure 2015, binary stochastic numbers (Gaines 1969), entropy coding / tapered FP (Morris 1971; if tapered / posit FP is data compression, how about using Finite State Entropy with posit / tapered FP), posit 2017, fraction map significand (“Universal coding of the reals” Lindstrom 2018), and exact log-linear multiply-add ELMA, Kulisch accumulation. ELMA is 8 bits with 4-bit accuracy but 24-bit range; such a format is suitable for an ADPCM-style system. That text was written only a few months ago, so it is the latest thing in FP research. Some of those are logarithm based, like Savard's HGU / EGU. For logarithms there is the XLNSresearch (com) netpage with a long list of studies of logarithmic number systems, including multidimensional logarithmic, index calculus DBNS, hybrid LNS / FP number systems, “Complex LNS arithmetic using high-radix redundant CORDIC algorithms” 1999, “Novel algorithm for multi-operand LNS addition and subtraction” 1995, “Common exponent floating and logarithmic radix-22X2 pipeline” 2010, “Architectures for logarithmic addition in integer rings and galois fields” 2001, “A new approach to data conversion: Direct analog-to-residue conversion”, “A 32 bit 64-matrix parallel CMOS processor” 1999. Lucian Jurca has written about combining LNS and FP together. Other: “Parametrizable CORDIC based FP library”, the FloPoCo library, stackoverflow 2018: “Algorithm-compression by quasi-logarithmic scale”, “High resolution FP ADC” Nandrakumar. Savard uses A-law logarithmic in FP; can those other logarithmic systems be used in FP also? And CORDIC (or the BKM algorithm) and the Fibonacci series also?
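To illustrate the kind of 8-bit format that discussion is about, here is a decoder for a hypothetical 1-4-3 minifloat layout (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7, subnormals, no infinities or NaNs). This is my own illustrative layout, not the exact ELMA, IBM, or Facebook format; it just shows the range-versus-accuracy trade-off named above — 3 mantissa bits give very coarse accuracy while 4 exponent bits cover roughly 2^-9 up to 480:

```python
def decode_minifloat_143(byte):
    """Decode an 8-bit 1-4-3 minifloat (illustrative layout).

    Fields: sign (1 bit), exponent (4 bits, bias 7), mantissa
    (3 bits). Exponent field 0 encodes subnormals; there are no
    infinities or NaNs in this toy format.
    """
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    man = byte & 0x07
    if exp == 0:                                   # subnormal values
        return sign * man * 2.0 ** -9
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

# 0x38 = sign 0, exp 7, man 0  -> 1.0
# 0x44 = sign 0, exp 8, man 4  -> 1.5 * 2 = 3.0
# 0x7F = sign 0, exp 15, man 7 -> 1.875 * 256 = 480.0
```

Between 256 and 480 the step size is 32, which is why such formats only work when paired with scaling, stochastic rounding, or the compression tricks discussed here.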
There are many more logarithmic systems, like complex / LNS hybrids, Monte Carlo LNS, denormal LNS, redundant LNS, residue LNS, interval (arithmetic) LNS, dual real / LNS hybrid, serial LNS, two-dimensional LNS (2DLNS), multidimensional LNS, signed digit LNS, semilogarithmic LNS, multi-operand LNS. Those can perhaps be used as logarithmic “steps” of floating point numbers the way Savard uses A-law. Also, floating point numbers have a huge disparity between exponent range and mantissa accuracy, and FP numbers are difficult to data compress. If compression is applied only to the mantissa and the exponent is left uncompressed, integer compression methods can be used on the mantissa. Using software tricks to expand mantissa accuracy up to 39 times its normal accuracy is possible. Or use data compression: the integer part can be compressed using an ADPCM / delta compression type system, which is lossy compression. Ultra low delay audio compression techniques use ADPCM and other techniques and are fast, only about 1 millisecond of delay or even less. Bit truncation and Finite State Entropy can also be used. Then only the mantissa is data compressed, because the exponent part does not need compression, and now mantissa accuracy is closer to the exponent range. AI research especially has invented different minifloats and “dynamic range integers”, stochastic rounding FP etc. The smallest minifloats are 8 and only 4 bits (IBM, the Clover FP library etc.). Those have really small accuracy, and if the mantissa is data compressed or expanded using software, that would help. Processing would be faster if only the mantissa is compressed.
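The "compress only the mantissa, leave the exponent" idea presupposes splitting a float into its fields, which is straightforward with bit manipulation. The sketch below (my own, for IEEE 754 doubles) shows the lossless split and reassembly; an actual compressor would then feed only the 52-bit mantissa integers to a delta/ADPCM or entropy coder while storing the 11-bit exponents raw:

```python
import struct

def split_float(x):
    """Split an IEEE 754 double into (sign, exponent, mantissa) fields.

    Only the mantissa integers would go to an integer compressor
    (delta coding, finite state entropy, ...); the exponent field
    stays uncompressed. This demonstrates the lossless split, not
    an actual compressor.
    """
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF
    mantissa = bits & ((1 << 52) - 1)
    return sign, exponent, mantissa

def join_float(sign, exponent, mantissa):
    """Reassemble the three fields into the original double."""
    bits = (sign << 63) | (exponent << 52) | mantissa
    return struct.unpack('<d', struct.pack('<Q', bits))[0]
```

For slowly varying data (audio samples, sensor traces) the exponents are nearly constant and the mantissas of neighbouring samples are close, which is what makes the delta-coding approach plausible.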
“Decomposed algebraic integer quantization”, “Alias-free short-time Fourier transform”, sparse fractional Fourier transform, “multi-amplitude minimum shift keying format”, sparse composite quantization, pairwise quantization, “Space vector based dithered DSM”, nonuniform sampling DSM, implicit sparse code hashing, “Analytical evaluation of VCO-ADC quantization using pulse frequency modulation” Hernandez, “Time quantized FM using time dispersive codes” Hawksford, multiple description coding DSM, “Design of multi-bit multiple-phase VCO-based ADC” 2016. I googled zero sets with Pisot numbers and Parry numbers, wondering if that has something to do with endless data compression, and I found “Ito-Sadahiro numbers vs. Parry numbers”, “Palindromic complexity of infinite words associated with simple Parry numbers”, “A family of non-sofic beta-expansions”, “Combinatorics, automata and number theory”, “Beta-shifts, their languages and computability”, “Abelian complexity of infinite words associated with quadratic Parry numbers”. But I don't understand those mathematical formulas.
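The beta-expansions those papers study are not hard to compute, even if the theory around them is: the greedy (Rényi) algorithm expands a real number in a non-integer base beta > 1, and for Pisot bases like the golden ratio the admissible digit strings obey simple forbidden-pattern rules (in base phi, no two consecutive 1s), which is exactly what the Parry / beta-shift literature classifies. A minimal sketch of the greedy algorithm (my own, standard textbook construction):

```python
def beta_expansion(x, beta, digits):
    """Greedy (Renyi) beta-expansion of x in [0, 1) for base beta > 1.

    Each step: d = floor(beta * r), then r <- beta * r - d.
    For Pisot bases such as the golden ratio, the admissible digit
    sequences are constrained; in base phi no '11' can appear.
    """
    out = []
    r = x
    for _ in range(digits):
        r *= beta
        d = int(r)
        out.append(d)
        r -= d
    return out

phi = (1 + 5 ** 0.5) / 2
digits = beta_expansion(0.7, phi, 20)
```

In base phi every digit is 0 or 1 and the greedy rule never produces two 1s in a row, because after a 1 the remainder drops below phi - 1; that forbidden-pattern structure is the same golden-ratio constraint as in the Fibonacci code, which is the closest thing here to a real link between quasicrystal-style sequences and data coding.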