Perhaps nothing in common with the text above, but is it possible to represent a number as a point in a graphical plane (a number plane), the way computer graphics cards represent graphics, and does this graphical representation reduce the bitwidth the number needs to be represented? Real numbers and complex numbers are points in a number plane (a bitplane, or a two- or three-dimensional graphical plane). So the graphics card of a PC can address one place in the bitplane accurately using raytracing or another graphical method, matrices, or other math; I think tensor processors built on GPUs use something similar. If a number is very complex, like a complex or hypercomplex number, doing math computations on it directly is perhaps difficult. But if those numbers are points in a graphical plane (a graphical picture), like the 2D or 3D pictures computer games use, perhaps they can be computed and represented using the graphics math a GPU does. Another idea is to represent a number as a waveform (frequency), like a sound wave or radio-frequency wave, that a computer or other hardware processes. A/D conversion is partly analog, using analog electronics to turn an analog wave (frequency) into digital form; a VCO-ADC uses one or two voltage-controlled analog oscillators to perform the conversion as delta-sigma modulation. Does this analog processing offer some “data compression” compared to a strictly mathematical digital representation of a complex number? The complex number is turned into an analog wave (frequency) for processing. Both ideas rely on turning digital or other numerical information into either graphical form, a point in a picture (GPU processing), or a waveform that can be represented as a frequency, like a radio or sound frequency. Complex numbers can perhaps be processed more easily and faster this way, and the bitwidth required to represent very complex mathematical values can perhaps be shortened, so it is a kind of “data compression”.
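The first idea above can be sketched concretely: a complex number really is just a point (x, y) in a 2D plane, and multiplying by a complex number is exactly the rotate-and-scale 2x2 matrix transform a graphics pipeline applies to points. A minimal illustration (the function names are mine, not from any GPU API):

```python
# A complex number as a 2D point: complex multiplication reduces to the
# coordinate math GPUs already do. Multiplying by (a, b) is equivalent to
# applying the 2x2 rotation-scaling matrix [[a, -b], [b, a]] to the point.

def complex_mul(p, q):
    """Multiply two complex numbers given as (x, y) points."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

def as_matrix(p):
    """The 2x2 matrix equivalent to multiplying by p."""
    a, b = p
    return [[a, -b],
            [b, a]]

# multiplying by i = (0, 1) rotates a point 90 degrees, as in graphics
print(complex_mul((3.0, 4.0), (0.0, 1.0)))  # (-4.0, 3.0)
```

This is only the representational half of the idea; whether it saves bitwidth depends on how the points are addressed in the bitmap, which the later paragraphs speculate about.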
When a frequency representation is used, oscillator-generated waveforms can act as a “data compressor”. An oscillator usually creates waveforms from simple starting values, as in “vector phaseshaping synthesis” or “the geometric oscillator”. If this process can be reversed, so that a complex waveform is put through the oscillator and the simple starting values come out as the result, then the complex number is, in the end, “data compressed”. The oscillator can be either analog or digital, and the processing either analog or digital. I googled “geometric number system” and “geometric oscillator” and found that almost all texts are about nuclear physics and not the “plain” math I am searching for, but I found something. For example: “New Foundations in Mathematics” by G. Sobczyk (2013); Eric James Parfitt’s “base infinity number system” (on YouTube); “foldable number systems”; “Matrices from a geometric perspective” (Coronac); “A hybrid number processor with geometric and complex arithmetic” (about logarithms, not this subject); ergodic theory; “circular geometry oscillators”; “Geometric quantization of generalized oscillator”; “Symplectic geometry and geometric quantization”; “Harmonic Function Theory” (Sheldon Axler); “Complex vector formalism of harmonic oscillator in generalized algebra”; “Converting an infinite decimal expansion to a rational number”; “Harmonic oscillator potential and deformed magic numbers” (Shodhganga); “The Wablet: scanned synthesis on a multi-touch interface”.
If a complex number could be simplified by an oscillator to simple starting values, perhaps some of the methods sound synthesis uses could help: the aforementioned VPS and geometric oscillator; impulse modeling synthesis (IMS); 3D wavetable synthesis; fractional-delay inverse comb filters; detection and lateralization of a sinusoidal signal in the presence of dichotic pink noise; the hyper-dimensional digital waveguide mesh; normalized filtered correlation time-scale modification (NFC-TSM); fractional filter design based on truncated Lagrange interpolation; trendline synthesis; discrete summation formula synthesis (DSF); fast automatic inharmonicity estimation algorithms; Casio Zygotech polynomial interpolation (ZPI); impulse-based substructuring / replication of impulse response frequency characteristics; spectral synthesis; modal synthesis; Bezier curve synthesis; XOR synthesis; allpass filter chain synthesis; feedback amplitude modulation (FBAM); bandlimited waveforms using iterated polynomial interpolation; moving average vector quantization; n-gon wave synthesis; polynomial transition regions; differentiated polynomial waveforms; granular synthesis; sample-based synthesis; wavetable synthesis; dynamic stochastic synthesis; synthesis using a network of aural exciters (EXCTR, Samy Kramer and Jari Suominen). Some of those may help, perhaps. Different vector quantization models as well, like additive vector quantization. The idea is to turn a complex number into a point in a bitplane picture that a GPU can handle and process fast, or into a waveform that a signal processor or other hardware can handle, using an oscillator to simplify it to its basic oscillator starting values. So, unlike an ordinary oscillator, which produces waveforms, the complex waveform of a complex mathematical entity is driven through an analog or digital oscillator, and the end result is a mathematically simple, short-bitwidth signal; driven through the oscillator again, it reproduces the original complex mathematical entity.
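The “reversed oscillator” idea above can be sketched in miniature for the simplest possible case: a pure sinusoid, however many samples long, collapses back to the three starting values (amplitude, frequency bin, phase) a forward oscillator would need to regenerate it. Here a brute-force DFT search plays the role of the reversed oscillator; real-world complex waveforms would need far more robust estimation, so this is only an illustration of the compression principle, not a general method:

```python
import math, cmath

def analyze(samples):
    """'Reversed oscillator': return (amplitude, bin, phase) of the
    strongest sinusoid in the sample list, via a brute-force DFT."""
    n = len(samples)
    best = max(range(1, n // 2), key=lambda k: abs(
        sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))))
    coeff = sum(samples[t] * cmath.exp(-2j * math.pi * best * t / n) for t in range(n))
    return 2 * abs(coeff) / n, best, cmath.phase(coeff)

def synthesize(amp, k, phase, n):
    """Forward oscillator: regenerate n samples from the starting values."""
    return [amp * math.cos(2 * math.pi * k * t / n + phase) for t in range(n)]

n = 64
wave = synthesize(0.8, 5, 0.3, n)          # 64 samples ...
amp, k, phase = analyze(wave)              # ... collapse back to 3 numbers
print(round(amp, 3), k, round(phase, 3))   # 0.8 5 0.3
```

The 64 samples are “compressed” to three starting values exactly because the signal came from an oscillator in the first place, which is the assumption the whole scheme rests on.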
The data is not compressed in the usual sense; the number is just turned into a signal, and this signal is then oscillator-processed so that it simplifies significantly compared to the original complex mathematical form. Vector phaseshaping or “geometric oscillator”-type digital signal processing can perhaps be used. In GPU computing, complex or hypercomplex numbers would live in 2D, 3D or 4D (two- to four-dimensional) bitmaps (pictures) in the memory the GPU uses. The complex number is then not stored at its full bitwidth; only pointer information to a point in the bitmap is stored. This pointer has a smaller bitwidth than the complex number, and it can be a vector, an n-gon pointer, a matrix pointer, a raytrace, etc. Non-integer numbers like logarithms, fractional numbers and complex numbers can be represented, for example, by a triangle that has two integer sides and one non-integer side; the non-integer side is the number that is needed, so it is composed using a triangle and the triangle-based computing GPUs do, and the number does not need to sit in long decimal form in computer memory, which requires many bits, because the number is fractional, not integer. And if the number is not fractional but some other complex entity, using a bitmap (picture) with pointers and storing only the pointer information, not the number itself, can still save bitwidth. The number sits in the bitmap picture among other bitmap numbers, and the bitmap in GPU memory. So non-integer computing is suitable for GPU computing using graphical forms (pictures). When a number is turned into a signal (a wave frequency) using an oscillator, the number value is the modulator, and the waveform is a sound frequency, a radio frequency, or some other frequency of electrical signal inside the computer; the oscillator produces that signal with the number as modulator. The oscillator can be digital, digitally controlled analog, or analog (or the kind a VCO-ADC uses).
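The triangle representation above can be sketched directly: an irrational length such as the square root of 2 is stored as the two integer legs of a right triangle rather than as a long binary expansion. Only the integer pair sits in memory; the hypotenuse is the represented value, and arithmetic on its square stays exact. The class name and layout here are purely illustrative:

```python
import math
from fractions import Fraction

class TriangleNumber:
    """A non-integer value stored as two integer triangle sides;
    the hypotenuse is the represented number (sketch only)."""

    def __init__(self, a, b):
        self.a, self.b = a, b              # two integer sides

    @property
    def squared(self):
        return Fraction(self.a ** 2 + self.b ** 2)  # exact, no decimals

    def value(self):
        return math.sqrt(self.squared)     # the non-integer side

sqrt2 = TriangleNumber(1, 1)       # hypotenuse = sqrt(2), stored in 2 ints
print(sqrt2.squared)               # 2  (exact, no long decimal expansion)
print(round(sqrt2.value(), 6))     # 1.414214
```

Of course this only covers values expressible as hypotenuses of integer-legged triangles; the text's broader claim, that arbitrary non-integer values could be handled this way on a GPU, remains speculation.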
Complex mathematical objects can be turned into a waveform with an oscillator, and when this procedure is reversed, driving the waveform through the oscillator, the result is a simple oscillator or vector phaseshaping operation, or impulse modeling synthesis, or Wablet scanned synthesis (in vector phaseshaping, geometric oscillator, IMS or Wablet form) that is mathematically simple, not as complex as at the beginning. The simple forms can be stored in memory, and when needed again they can be driven through the oscillator or vector phaseshaper, or through impulse modeling synthesis or the Wablet, and the result is the complex form again; it can be changed from a waveform back into another mathematical form, like numbers, if needed. Feedback amplitude modulation (FBAM) and allpass filter chain synthesis are other methods; so are scanned synthesis, swarm-based synthesis, wave field synthesis, wave terrain synthesis, spiral synthesis (Petersen), numerical (sound) synthesis and Walsh-Hadamard (sound) synthesis. Perhaps error-correcting codes other than Walsh-Hadamard can also be used in signal synthesis, in the manner of Walsh-Hadamard sound synthesis. The oscillator and circuits can use direct digital synthesis (like delta-sigma DDS, Orino) if they are not analog or DCO. And the GPU can use a graphical number system to represent non-integer fractional numbers, based on triangles, a kind of version of Parfitt’s base infinity number system: instead of the numbers 1, 2, 3 etc., its numbers are triangles, one side of which holds a fractional value. Those triangle “numbers” are computed in the GPU using graphics processing. So it is a fractional number system whose values are computed using triangles; a triangle has three sides, and one of them is the fractional-value side, which is the “number” being represented. Instead of computing a long line of decimals of some non-integer number, the computer uses GPU-computed triangles and takes the non-integer value of one triangle side as the number in question.
That may be faster than computing a long string of bits / decimals of a non-integer number. Non-integer mathematics can perhaps be done using just those triangles and GPU computing, with different kinds of triangles representing different kinds of non-integer values. The article “New number systems seek their lost primes” (2017) has more triangle representations of numerical values. There is balanced ternary tau, the negative beta encoder and other non-integer computer arithmetic, computer math using logarithmic values, and other non-integer computer math; this triangle number system could use them. Of particular interest is Garret Sobczyk and his publications, on his webpage and elsewhere. They are about geometric algebra, or graphical representation of mathematics (or is it?). Could GPU computing use those kinds of geometric or matrix approaches, in GPGPU computing? It is said that an analog music synth is an analog computer; can a digital synth be part of a digital computer? Petri Huhtala’s PortOSC (portable oscillator) is a cell phone analog synth. “Digital conversion method for analog signal”, patent EP 1059731 (13.12.2000). ADCs other than the VCO-ADC are possible, like PWM-ADC, PFM-ADC and SAR-VCO ADC. Exceptionally stable VCOs (Steiner-Parker VCO, Pigtronix VCO, Harmony Systems Inc.
/ Delora VCO design) or other unusual VCOs: Malekko / Richter Oscillator 2 (OSC2), the Zeroscillator, Rob Hordijk’s VCO designs, or the Electro-music webpage (2006) “Analogue gear news 9”, which mentions a Bjorn J or BJ oscillator that is “additive” or something; the Radikal Technologies Swarm oscillator (time linear modulation); the Dove Audio Window Transfer Function oscillator; the Melu Instruments phase-synchronized oscillator; Audiospektri HG-30; the RSF Kobol VCO; morphing VCOs and morphing VCFs; switched-capacitor VCOs (“Simple switched capacitor voltage-controlled oscillator”, 1983); “Multiple circuit topologies of VCOs for multistandard applications”; “Combined frequency-amplitude nonlinear modulation: theory…”; “Iris: a circular polyrhythmic music sequencer”. Iris can perhaps be used not as a sequencer but as a modulation source in vector phaseshaping or the geometric oscillator, or in other waveform generation that uses geometric (circular) shapes, like VPS or the geometric oscillator. Those oscillators and techniques can perhaps be used in a VCO-ADC, or in processing complex mathematical formulas into simplified signals. So this is like an analog computer that uses a VCO, although the processing can be digital after the VCO A/D conversion, once the mathematical formula is turned into a frequency signal and then simplified using the VCO; digital oscillators can also be used instead. Takis Zourntos’s one-bit modulation without delta-sigma, and “A novel speculative pseudo-parallel DSM” (Johansson, 2014), are new methods; so are the “multiple pulses per group keying” patents by Clinton Hartmann (2003 etc.). Circuits that work like the Serge modular, where one circuit has several functions, can perhaps be used, as can “timeline-based modulation” (Progress Audio Kinisis, and others) and cross modulation (like cross modulation of several VCOs, the Lyre-8 drone synth, kNoB Muscarin). Another way is to use a digital GPU for graphical computing (a computer that processes numbers as graphical pictures, GPGPU computing), using for example triangles as the number system base.
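Among the unconventional number systems mentioned above, balanced ternary is concrete enough to sketch. A minimal encoder/decoder for plain balanced ternary (digits -1, 0, +1, written here as -, 0, +; this is the classic system, not the tau variant the text names):

```python
def to_balanced_ternary(n):
    """Encode an integer into a balanced-ternary digit string."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("+")
            n -= 1
        else:               # remainder 2 acts as digit -1 with a carry
            digits.append("-")
            n += 1
        n //= 3
    return "".join(reversed(digits))

def from_balanced_ternary(s):
    """Decode a balanced-ternary string back to an integer."""
    value = 0
    for ch in s:
        value = value * 3 + {"+": 1, "0": 0, "-": -1}[ch]
    return value

print(to_balanced_ternary(5))            # "+--"
print(from_balanced_ternary("+--"))      # 5
```

Negation in this system is just flipping every digit's sign, which is one reason it keeps reappearing in unconventional-arithmetic proposals like those listed above.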
Analog computer CPUs and analog DSPs have been designed; how about an analog GPU manufactured at 16 nm, since 16 nm is the smallest manufacturing node that can use analog circuits now? An analog TV / video / graphics processor is not so different from an analog GPU. If the processing is analog it can use fractional values directly; this is the main benefit of an analog computer / DSP / GPU. If an analog synth uses vactrols / optocouplers, is it then an optical analog computer? High-frequency reconstruction is a method where higher frequencies are replicated from a base band, usually up to twice the base frequency: in sound reproduction, the human hearing range (about 18 kHz) can be limited to 7 kHz for storage and transmission, then expanded to about a 14 kHz range on playback (if double-range reconstruction is used). How this principle could be used in signal processing other than audio, or in oscillators etc., I don’t know. A sigma-delta ADC can use floating point numbers, perhaps “differential floating point” that uses small 4-, 6- or 8-bit FP or unum / posit microfloat / microposit values, or something similar to posit computing / microfloats. Logarithmic and other number systems can be used instead of FP / integer; Takis Zourntos’s one-bit method without delta-sigma can perhaps use them also, as can vector compression / additive quantization (AQ) with a delta-sigma ADC, etc. “Between fixed point and floating point” by Dr. Gary Ray has FP numbers with data compression (reversed Elias gamma coding). Differential microfloats / microposits with 4-8 bits using data or vector compression, etc. In CPU technology the “pointer machine” is a model of computer arithmetic. If a pointer machine can point to information, can it be used as a pointer into graphical information, like a GPU bitmap? Or as a pointer into a cubic bitplane to extract information from it? Or as a pointer used with the Champernowne constant to extract information from it?
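To make the “microfloat” idea above concrete, here is an 8-bit IEEE-style float with 1 sign bit, 4 exponent bits (bias 7) and 3 mantissa bits. The bit split is my illustrative choice, not a standard; posits and unums distribute bits differently, and infinity/NaN handling is omitted for brevity:

```python
BIAS, MBITS = 7, 3   # assumed 1-4-3 minifloat layout (illustrative)

def decode(byte):
    """8-bit minifloat -> Python float (no inf/NaN handling)."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> MBITS) & 0x0F
    mant = byte & 0x07
    if exp == 0:                                   # subnormal range
        return sign * mant * 2.0 ** (1 - BIAS - MBITS)
    return sign * (1 + mant / 8) * 2.0 ** (exp - BIAS)

def encode(x):
    """Nearest-match encode by brute force over all 256 codes; for a
    format this tiny, exhaustive search is simpler than rounding logic."""
    return min(range(256), key=lambda b: abs(decode(b) - x))

code = encode(3.2)
print(code, decode(code))   # 69 3.25 (nearest representable value)
```

The coarse quantization shown here is exactly why the text keeps asking about differential schemes: storing small corrections between successive values can recover some of the precision an 8-bit format throws away.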
Cubic bitplanes and the Champernowne constant are in the post “Using floating point numbers as information storage and data compression”. Not only analog VCOs but also analog filters can be used in signal processing / number-manipulation computing, like the “mutant vactrol filter” or the Oakley Sound Croglin. Are Volterra synthesis and “Monte Carlo and quasi-Monte Carlo for image synthesis” methods suitable not only for signal processing but also for number field manipulation? In Wikipedia’s audio codec section it is noted that the a-law and u-law (mu-law) logarithmic coding used in telephone codecs is in fact a version of floating point format. If logarithms are just versions of floating point representation in binary form, is it possible to use fractional, not integer, values in the floating point mantissa and exponent? Then it would be a logarithmic or other fractional number system format. There are plenty of logarithmic number systems in the articles section of the XLNS research overview webpage. Turning floating point logarithmic: for example, the standard IEEE FP format using fractional (logarithmic) values, not integers, for the mantissa or exponent, or both? Or newer methods like unum, posit, valid or EGU with fractional / logarithmic values? EGU is in “Re: beating posits in their own game” by Quadibloc (John G. Savard), Google Groups msg comp.arch pages, 11.8.2017; the Google browser does not show it, but the Microsoft browser does. EGU is “extremely gradual underflow”, and HGU “hyper-gradual underflow”. If a logarithmic system is just a version of floating point (and vice versa), are unum, posit, valid and EGU also logarithmic? Can they use fractional values, not integers? And is more accuracy possible using fractional (logarithmic) values with floating point, unum, posit, valid and EGU? Googling Y. Hida double precision floating point brings many results, among them “Parallel algorithms for summing floating point numbers” (2016).
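The mu-law observation cited above can be shown directly: companding compresses the signal logarithmically before quantization, which is what gives it its floating-point-like behavior (small signals get most of the code space, like a float's dense values near zero). A sketch of the continuous mu-law formula with the telephone-standard mu = 255:

```python
import math

MU = 255.0   # value used in North American / Japanese telephony (G.711)

def mu_encode(x):
    """Compress x in [-1, 1] logarithmically (continuous mu-law)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_decode(y):
    """Invert the companding."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)

# small inputs are expanded toward full scale before quantization,
# which is the floating-point-like (logarithmic) property in question
for x in (0.01, 0.1, 0.5):
    y = mu_encode(x)
    print(round(y, 3), round(mu_decode(y), 3))
```

The actual G.711 codec quantizes a piecewise-linear approximation of this curve into 8 bits (sign, 3-bit segment, 4-bit step), which is literally a sign / exponent / mantissa layout; the continuous formula above is the idealized version.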
Also “Recycled error bits: energy-efficient architectural support for higher precision floating point”; Google Groups 2011, “A new numeral system” (Quoss Wimblik); “Prof. Hachioji’s new number system for integral numbers with four lowercase letters”; “Counting systems and the first Hilbert problem” (the Pirahã system); “A new computational approach to infinity for modelling physical phenomena”; “Stochastic arithmetic in multiprecision” (2011); “Multiple base composite integer” on the MROB webpage. Can a multiple base composite integer be used with floating point, unum, posit, valid or EGU / HGU? A version of floating point could be “differential floating point”, DFP (another post in the Robin Hood forums). Can unum, valid, posit or EGU / HGU be put in differential “microfloat” form, the way 4-bit ADPCM is a compressed form of 16-bit linear PCM? Can fractional / logarithmic values (something like a-law and mu-law encoding, which are logarithmic / floating point formats) be used with differential floating point? Is there something like a differential logarithmic system, as ADPCM is differential PCM: instead of ADPCM it would use logarithmic values like a-law or mu-law in differential form. Can unum, posit, valid or EGU / HGU use a differential logarithmic / fractional system? Or a multiple base composite integer with unum, posit, valid or EGU in differential form? Can delta-sigma modulation, or Takis Zourntos’s one-bit encoding without delta-sigma, use posit, valid, unum or EGU / HGU? Or a multiple base composite integer? DSM is logarithmic in nature, and floating point DSM already exists. NICAM was ADPCM that used white noise, but NICAM with dithering instead of white noise, to improve accuracy, could be used in information processing: dithering with floating point, unum, valid, posit, EGU / HGU, logarithmic or some other number system, in a NICAM-style almost-ADPCM system. Using a 1-bit serial processor or transputer with lambda calculus in hardware is perhaps also possible.
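The differential idea running through this paragraph can be sketched in its simplest DPCM form: keep the first value verbatim, then store only quantized differences between consecutive values as small integers. Whether the samples are linear or log-domain (a-law / mu-law style), the differential step is the same; the step size here is an arbitrary illustration, and real ADPCM adapts it:

```python
STEP = 0.05   # fixed quantizer step (illustrative; ADPCM adapts this)

def dpcm_encode(samples):
    """First sample verbatim, then quantized deltas (small integers)."""
    first, deltas, prev = samples[0], [], samples[0]
    for x in samples[1:]:
        q = round((x - prev) / STEP)     # small-bitwidth difference code
        deltas.append(q)
        prev += q * STEP                 # track what the decoder will see
    return first, deltas

def dpcm_decode(first, deltas):
    out = [first]
    for q in deltas:
        out.append(out[-1] + q * STEP)
    return out

first, deltas = dpcm_encode([0.0, 0.1, 0.22, 0.31, 0.29])
print(deltas)                        # [2, 2, 2, 0]
print(dpcm_decode(first, deltas))    # reconstruction within one step
```

The deltas need far fewer bits than the samples, which is the whole argument for “differential microfloats” and a differential logarithmic system: apply the same trick after a logarithmic or minifloat mapping instead of to linear values.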