Continuing the discussion from Free internet for developing countries: more:
If a phone or tablet PC is to be cheap, then instead of today's practice, where even the cheapest devices carry many audio and video codecs, the device could contain only one very efficient audio codec and one very efficient video codec, or even a single data compression codec that handles text, audio and video compression and decompression, so that separate audio and video codecs are not needed. (If very low data rates are required, separate audio and video codecs are needed anyway.) For data compression there is Klaus Holtz and his autosophy theory, in the making since 1974, and Keene Enterprises' DACT compression (http://www.rkeene.org/oss/dact/). Googling "Sloot digital coding system" brings up results like the newsflash "Mathematician claims breakthrough in complexity theory" about Laslo Bapal, netpages like "100% lossless compression theory - please join me" from the encode.ru forum, and the bitcointalk.org thread "Crypto compression worth of big money - I did it!" (last quote, number 67: "He will be enriched by the experience!"), where similar coding systems are presented. Because Google directs searches for "Sloot digital coding system" to these pages, they presumably all have something to do with it. A real-world implementation of the Sloot idea does exist: MPEG-4 Structured Audio. If Sloot compression is simply sending instructions to a large dictionary, that is what MPEG-4 SA does — it sends MIDI information to a hardware synth that creates the sound. The same "Sloot digital coding system" principle can be applied anywhere, as long as the library codec at the receiving end is large enough. Small SD memory cards are reaching 512 GB capacity, so a 1 terabyte fixed (ROM) magnetic memory could be put in each phone or laptop, with the information stored in compressed form, so that the library is effectively 5 - 10 TB. Instead of only audio, even video could use a "video synth" that recreates the original picture the way Structured Audio recreates sound with a music synth. Text, images, data files etc. could be stored the same way.
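To put a rough number on the Structured Audio comparison: a MIDI-style note event is a few bytes, while one second of the CD-quality audio it triggers is over 170 kilobytes. A back-of-the-envelope sketch — the event size and note rate here are illustrative assumptions, not figures from the MPEG-4 SA standard:

```python
# Rough comparison of "send the sound" vs. "send instructions to a
# synth at the receiver" (the Structured Audio / Sloot-style idea).
# The event-stream figures are illustrative assumptions.

SAMPLE_RATE = 44_100        # CD quality, samples per second
BYTES_PER_SAMPLE = 2 * 2    # 16-bit stereo

pcm_bytes_per_sec = SAMPLE_RATE * BYTES_PER_SAMPLE      # 176,400 B/s

NOTE_EVENT_BYTES = 3        # a basic MIDI note-on message
NOTES_PER_SEC = 20          # assumed dense polyphonic passage

event_bytes_per_sec = NOTE_EVENT_BYTES * NOTES_PER_SEC  # 60 B/s

ratio = pcm_bytes_per_sec / event_bytes_per_sec
print(f"PCM: {pcm_bytes_per_sec} B/s, events: {event_bytes_per_sec} B/s")
print(f"compression factor: {ratio:.0f}x")              # 2940x
```

The factor grows further as the receiver-side library grows, since richer instructions replace more of the waveform.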
For video this would mean that a moving or still picture is divided into small microblocks, and the image of each microblock is built from a predefined library of shapes, forms, colours and image textures. A moving picture is created from keyframes built of these building blocks in memory (the library); change (movement in the picture) is created by sending information about time (how long the change from one form/shape to another takes) and the direction of change (the movement vector) for each microblock. The "video synth" then creates the moving picture by morphing between two keyframes — each assembled from the library's predefined shapes and textures — using the movement information. No per-pixel information is sent, only library codes for the geometrical shapes, textures and colours in each microblock; the encoder (camera) just finds, in the real-world image, geometrical shapes that fit well enough to represent the microblock image with at least some accuracy. If the predefined library is large enough and the microblocks small enough, any moving picture can be represented using these predefined building blocks from the receiving device's library, and no actual picture information (normal video compression keyframes) is needed anymore. Any number of pixels is now possible — 8K, 16K digital IMAX, even 100K or more — because information no longer needs to be sent on a per-pixel basis, only the microblock's shape-and-texture building-block library number plus the time and vector (direction) of the change from one microblock state to another. But the dictionary would have to be extremely large, terabytes, for anything like a realistic moving picture to be possible using microblock shape, colour and texture building blocks. A picture of any pixel count can now be sent economically, but what the picture quality would be I don't know. If an object in the picture (a macroblock) moves while the background stays the same, the microblocks inside the macroblock should move relative to the background, but that is difficult to achieve with today's video compression techniques. Sound too can be sent using a very large library — no longer just keyboard-synth MIDI information, but the whole sound field, including a singer's voice and different instruments, synthesized using Structured Audio type techniques: either short samples of real audio as "keyframes" (as in video compression) with the in-between samples created by the synth, or only a predefined very large sound library, with all sound synthesized and only a sort of "MIDI information" needed to recreate the complete sound field. Fractal compression techniques use almost the same principle: similarities are searched for in the information, and the information (picture, moving picture, sound) is recreated from those similarities.
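The microblock scheme described above is essentially vector quantization against a shared dictionary: the encoder sends only a library index per block, and the decoder (the "video synth") rebuilds the block from its own copy of the library. A deliberately tiny toy sketch — the five-entry library of flat 4x4 patches is a stand-in for the terabyte dictionary of shapes and textures envisioned here:

```python
# Toy "library coding" of an image: for each 4x4 block, send only the
# index of the closest pattern in a shared predefined library. The
# library below is a tiny illustrative stand-in for the terabyte
# shape/texture dictionary discussed in the text.

def flat_block(value):
    return [value] * 16                     # uniform 4x4 patch

LIBRARY = [flat_block(v) for v in (0, 64, 128, 192, 255)]  # shared dict

def encode_block(block):
    """Return the index of the library entry with smallest squared error."""
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(LIBRARY)), key=lambda i: sse(block, LIBRARY[i]))

def decode_block(index):
    """The receiver recreates the block from its own library copy."""
    return LIBRARY[index]

# One 4x4 block of 8-bit pixels -> a single small integer on the wire.
block = [130, 120, 125, 135] * 4
idx = encode_block(block)
print(idx, decode_block(idx)[0])   # index 2 -> reconstructed level 128
```

With a realistic library the index alone would be far smaller than the pixels it replaces, which is the whole point of the scheme; the reconstruction quality depends entirely on how rich the library is.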
But using predefined shapes and structures in a very large library (several terabytes), not even the slightest "real" information needs to be sent — only an address (a number) into the memory library that contains some predefined building block of the picture, moving picture, sound or text, from which the finished product (video, sound or text) is recreated, perhaps somewhat like fractal compression. So very low data rates are possible. The lower the data rate, the more time the decoder can spend decoding the message, so at low data rates very efficient compression techniques that need a lot of decoding time become practical, increasing compression even more. The information sent to the library codec (via the internet etc.) can itself be in compressed form, like the information inside the library codec, increasing efficiency further. Fractal compression uses "fractals" of information, but if those fractals are already in some very large codec (terabytes), only address information into the library memory needs to be sent, and this "fractal compression" uses predefined building blocks. There is also an information-packing method called HashZip (or Hash Zip) that can compress information but cannot decode what it has packed, at least not in reasonable time.
If there is a MyPhotoZip that compresses images to 1/1500th of their data size, then a one-terabyte library containing predefined "fractals" or building blocks of images and video can hold 1500 terabytes' worth in uncompressed form — enough memory to contain fractals and visual building blocks to represent every visual image in the world. Library space can also be used for storing compressed fractals and building blocks of audio and text. Using these building blocks, every image / sound / text could be reproduced.

On improving processor accuracy: back in the fifties there were James H. Wilkinson's "floating vectors" for matrix computations, used as a substitute for floating point when computers did not have floating point units. Wilkinson also presented "iterative refinement" for computers. Jakow Trachtenberg had the "Trachtenberg speed system" for computing, a collection of simple rules that reduces the complexity of calculation to a minimum, even for very long numbers. Instead of calculating, it uses tables with very simple rules where digits are shifted around and the actual calculations are minimal or nonexistent. It works on the decimal system, but software versions of the Trachtenberg speed system are available on the internet. If it is possible to do complicated calculations without actually calculating — just shifting a few digits and making very simple add and multiply steps, even for the most complicated and long numbers — that simple method would have a huge speed-up effect on computation, but it must be done in software because computers work on binary hardware.

There are Gal's accuracy tables (and their "revisited" method), CORDIC and its modern developments, and Quote notation by Eric Hehner for calculating and representing rational numbers. All of those help a simple processor do complicated calculation. There are modern methods like Fürer's algorithm and its latest (2015) developments, or Shewchuk's algorithm, which gives accurate results for floating point computation and is improved using the Sterbenz theorem on the netpage "Binary floating-point summation accuracy to full precision (Python recipe)". There are the Coq and Mizar proof management systems. All these and other multiplication, or shift-and-add, algorithms that computers use can make computers more efficient without a hardware increase. The paper "Automated floating point precision analysis" by M. Lam 2011 covers floating point optimisation. The chipdesignmag.com article "Between fixed point and floating point" by Dr. Gary Ray presents many alternative floating point formats — tapered floating point, the fractional format by Richey and Saiedean, and Gary Ray's own suggestion for a super-efficient floating point representation. The article "Hardware-based floating point design flow" by M. Parker 2011 (on the title "This article describes …") has a new floating point adder design for FPGAs. G. E. R. Cowan has made an analog math processor study, "A VLSI analog computer math co-processor for digital computers", 2005. Phase-change memory studies and real products, like those by Western Digital, have shown that phase-change memory (OUM) can be made in analogue form, in which case one memory cell can hold 1000 bits' worth of binary information in analogue form. There are FPAAs, analogue alternatives to digital FPGAs, and hybrid digital-analog FPGAs and CPLDs, although few. John L. Gustafson has introduced the unum computing principle in the book "The End of Error". Whether any of these techniques can be built in hardware using an FPGA I don't know. At Berkeley university they have made a 13-bit floating point system that is compatible with the 32-bit floating point standard — using only 13 bits, but otherwise like the 32-bit IEEE standard. Also "A new uncertainty-bearing floating point arithmetic", C. Wang 2012. On using a VLIW processor: "Simultaneous floating point sine and cosine for VLIW integer processors", "Sabrewing processor", "High radix floating-point division for VLIW integer processors", "VPFPAP: A purpose VLIW processor variable-precision floating-point arithmetic", "Implementation of binary floating point vectors on VLIW integer processors".
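As a concrete taste of the Trachtenberg idea of "calculating without calculating": the system's rule for multiplying by 11 is just "add each digit to its right-hand neighbour", with no multiplication anywhere. A minimal sketch:

```python
def times_11(n):
    """Trachtenberg rule for x11: each result digit is the digit plus
    its right-hand neighbour (plus carry) — no multiplication used."""
    digits = [int(d) for d in str(n)]
    out, carry, prev = [], 0, 0   # neighbour right of the last digit is 0
    for d in reversed(digits):
        s = d + prev + carry
        out.append(s % 10)
        carry, prev = s // 10, d
    s = prev + carry              # leading digit: first digit plus carry
    while s:
        out.append(s % 10)
        s //= 10
    return int("".join(str(d) for d in reversed(out)))

print(times_11(52))   # 572: 2, then 5+2=7, then 5
```

Only digit shifts, single-digit additions and a carry are involved, which is exactly the flavour of shortcut the Trachtenberg system offers for other small multipliers as well.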
Other ways to create predefined fractals or building blocks for a compression library are, for example, Multiple Description Coding and/or Dynamic Weight Averaging (DWA). And spherical logarithmic quantization (SLQ), as in the article by Bernd Matschkal and Johannes Huber, "Spherical logarithmic quantization", which is for audio coding but can be applied to other uses also. There are other versions of the same principle: Logarithmic Spherical Vector Quantization (LSVQ) and Logarithmic Cubic Vector Quantization (LCVQ), variations of the same. Another way is the Anamorphic Stretch Transform (AST), a coding method for images and also for other data, with variants like the Discrete AST (DAST); earlier similar coding methods were the Stretched Modulation Distribution, the Time Stretch Dispersive Fourier Transform (TS-DFT) and the Time Stretch Transform (TST), of which the new AST / DAST are improved versions. Perhaps these are the methods to build a large library of predefined building blocks of images, sound and text, besides fractal compression. For audio coding, standards like the Stream Tone Transfer Protocol (STTP) are coming. For the floating point format, new arithmetic methods like "Fast quadruple-double floating point format" and "Extended precision floating point numbers for GPU" have been made.
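Logarithmic quantization in general — not the spherical SLQ scheme itself, which also quantizes direction on a sphere — can be illustrated with classic mu-law companding from telephony: amplitudes pass through a logarithm before uniform quantization, so small signals keep relatively fine resolution. A minimal sketch using the standard mu = 255:

```python
import math

MU = 255.0  # standard mu-law parameter (as in G.711 telephony)

def mu_law_encode(x):
    """Logarithmic companding: map a sample in [-1, 1] so that small
    amplitudes occupy relatively more of the quantizer's range."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Inverse companding: expand back to linear amplitude."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

print(round(mu_law_encode(0.01), 3))  # small input 0.01 maps to ~0.229
```

The same principle — quantize the logarithm, not the raw value — is what the logarithmic spherical/cubic vector quantizers above generalize to multiple dimensions.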
If the processor is ultra simple with a low transistor count, the floating point format can be the 16-bit standard (or the 13-bit Berkeley format compatible with the 32-bit standard), or even the 11- or 10-bit OpenGL graphics formats based on the 16-bit standard. These very low accuracy floating point calculations, made with simplified hardware, could then be improved when required using software calculation methods like Shewchuk's algorithm or the proposed fast quadruple-double format — except that here the base "standard" is a 10-, 11- or 13-bit floating point value, and the "quadruple-double" values are calculated on that inaccurate basis. In the end the required accuracy is achieved using those software arithmetic methods, so there is no need for complicated hardware and the hardware arithmetic unit can be simplified. There is also a method for "Direct digital synthesis using delta-sigma modulated signals", by Orino et al. That not only simplifies analog-to-digital conversion, but its data efficiency is substantial — it is analog-to-digital conversion and compression at the same time.
And for more complex systems, if that is possible to build on an FPGA or otherwise, there is on slideshare.net "Design and implementation of a complex floating point processor using FPGA", Pavacuril 2013, "Introduction of the residue number arithmetic logic unit with brief computational complexity analysis (Rez-9 soft processor)", Olsen 2012, and the ideas presented on the netpage quadibloc.com (John G. Savard), for example the "quasilogarithmic floating point" principle.
FPGAs from at least some manufacturers have a direct hardware-based ability to use ternary (three-value) logic, but this is rarely used because computers use binary. If it is possible, one could switch from binary to ternary: balanced ternary, ternary tau, or the "Zero displacement ternary number system: the most economical way to represent numbers", Pimentel 2008. Or use a tribonacci / Fibonacci base. In the book by Sayood (2002), "Lossless Compression Handbook", pages 56 - 78, "Polynomial representations", it is explained that tribonacci code is the best way to represent numbers if the value is over 128 (7 bits), and "order-3 tribonacci" code is the best. Also "Tournament coding of integer sequences", Teuhola 2008, claims to be the best method. Logarithmic number systems (in their different versions) and residue number systems have had lots of theoretical work done on their implementation in binary computing: "A novel digital adder design based on residue number system with special set of moduli", "Design of RNS based addition subtraction and multiplication units", "A novel multiple-valued logic OHRNS adder circuit for modulo", etc. Whether these can be realized on an FPGA or not I don't know. Also: "Simplified floating-point division and square root", Viitanen 2013, "Dual fixed-point: an efficient alternative to floating point computation", "Variable-correction truncated floating-point multipliers", and "The faster-than-fast Fourier transform", MIT News 2012, a new fast and super-efficient FFT for multimedia applications. For processor architecture: the old Apollo Computer PRISM was discontinued about 20 years ago — the latest processor (DN10000) was from 1991 or earlier — so this design is old and discontinued and should be in the public domain. Apollo PRISM was in between VLIW and RISC and was the fastest processor of its time, but without much commercial success despite its powerful design.
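Balanced ternary, mentioned above, uses digits -1, 0, +1 instead of 0, 1, 2; one pleasant property is that negating a number just swaps the plus and minus digits, with no separate sign bit. A small conversion sketch, writing the -1 digit as T:

```python
def to_balanced_ternary(n):
    """Convert an integer to balanced ternary with digits {-1, 0, +1},
    written T, 0, 1 (T = -1)."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("1")
            n -= 1
        else:                 # remainder 2 means digit -1, carry +1
            digits.append("T")
            n += 1
        n //= 3
    return "".join(reversed(digits))

def from_balanced_ternary(s):
    value = 0
    for c in s:
        value = 3 * value + {"T": -1, "0": 0, "1": 1}[c]
    return value

print(to_balanced_ternary(5))    # "1TT" = 9 - 3 - 1
print(to_balanced_ternary(-5))   # "T11" — the same digits, signs swapped
```

This symmetry around zero is one reason balanced ternary keeps reappearing in the number-system literature cited here.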
For open source there is lowRISC (with even proposed optical interconnections to speed up computing) and FloPoCo, which is just an open source floating point design without a processor (INRIA, France). Other: "MatRISC: a RISC multiprocessor for matrix applications", "Super-simple matrix processor", Soliman 2015, "SPUR bus specification: SPUR, symbolic processing using RISCs", "Exploiting a new level of DLP in multimedia applications", Corbal, "High performance low latency FPGA-based …". Albert R. Wegener of Samplify Systems Inc. introduced many schemes and patents to improve data compression and transfer, but that firm does not exist anymore. For processor architecture: "The reconfigurable instruction cell array", Khawam (the RICA array), "StageNetSlice: a reconfigurable microarchitecture building block", "Floating point DSP block architecture for FPGAs", Langhammer 2015, "TRIPS: a polymorphous architecture for exploiting ILP, TLP and DLP" 2014, "Circuit merging versus dynamic partial reconfiguration - the HoMade implementation" 2015 - 2016, "Honeycomb: an application-driven online adaptive reconfigurable hardware architecture" 2012, "Approximate computing: an emerging paradigm for energy-efficient design", Han 2015. If FPGAs are switched from binary to ternary or quaternary logic, computation and data speed increase; studies of quaternary FPGAs exist, at least in theoretical form, and modern FPGAs have a (limited?) ability to use ternary logic.
Apollo PRISM development was frozen in 1991 and no PRISM 2 was released although it was ready; PRISM was finally discontinued in perhaps 1997. Information on it is on the netpages (zepa.net/apollo/) and (jim.rees.org/apollo-archive). On reddit.com, the thread "Looking around Haiku, and I found this: code (or …" says there is something called the "Iridium OS" operating system that can(?) use the PRISM instruction set, but otherwise very little information about Iridium OS is to be found. If many simple (old) processors with low transistor counts are combined together — all on one silicon chip and the same silicon die, or as separately packaged processors — one way to combine them is the Parallel Random Access Machine method, or other modern redevelopments of the PRAM principle (there are several different ones). Different developments of it have been made: "Global cellular automata: a path from parallel random access machines to practical implementations", Keller; on slideshare.net, 15.1.2015, "Matrix multiplication parallel processing"; others: "On optimal OROW-PRAM algorithms for computing recursively defined functions", "Parallel random access machines with both multiplication and…", "Oblivious parallel RAM and applications", "Simple and work-efficient parallel algorithms for the minimum spanning tree problem", Zaroliagis, "The relationship between several parallel computational models", Zhang 2008, "Anonymous processors with synchronous shared memory", Chlebus 2015, "TOTAL ECLIPSE - an efficient architectural realization of the parallel random access machine".
Also, if the Elbrus "E2K" was released as Elbrus 2000 in 2005 — yet already when its development team ended American cooperation in 1997 they said almost immediately (1998) that they had E2K ready, and in 1993 they said at some international computer conference that in Russia a VLIW processor had been developed that was superior to the Pentium or anything the West had — then E2K was in existence at least from 1993 onwards, if not even earlier. IBM has made its POWER series of processors open source to boost sales, even the most modern versions, but they of course are not license free or in the public domain. If not using processors 20 years old or older, perhaps FPGAs are the solution for low-license-cost processors without large per-unit license fees; the actual production cost of a silicon chip is so small that license fees are the main cost factor of microprocessors. To improve performance, instead of the new MIT FFT perhaps the "shape-adaptive transforms" on the netpage (cs.tut.fi/foi/sa-dct) can be used. And digital dithering: instead of using it only for sound recording, it can be applied to all kinds of data — "Generating dithering noise for maximum likelihood estimation of quantized data" 2013.

Some exotic models that push computing to extremes are ways to use infinite numerical values or transfinite numbers: James A. D. W. Anderson, "Perspex machine XII: topology of transreal numbers"; Yaroslav D. Sergeyev, "Infinity computer" and "Mathematical foundation of a computer system for storing infinite, infinitesimal quantities and executing arithmetic operations with them, Pat. Appl. 08.03.04"; W. Matthes, "The REAL computer architecture" and "The REAL computer architecture resource algebra"; Oswaldo Cadenas, patent WO 2008078098A1, "Processing systems accepting exceptional numbers" (and O. Cadenas: two's complement transreal coding). Also "Towards an implementation of a computer algebra system in a functional language", Lobachev; the SINGULAR computer algebra system; "A field-theory motivated approach to symbolic computer algebra", K. Peeters. And from stackoverflow.com: "Data structures - What is good binary encoding for phi-based balanced ternary algorithms?" and "Algorithms based on number system base (closed)", 18.3.2011. And the netpage neuraloutlet.com, on which are "metallic number systems", versions of logarithmic numbers; also the "Iterative denormal logarithmic number system (IDLNS)" (M. G. Arnold) is just one of dozens of different proposed logarithmic number systems. Whether these theories can ever be realized in real hardware, in an FPGA or otherwise, I don't know. A simpler way to make computation efficient is to use ternary or quaternary number systems, if FPGAs already have ternary logic built into the hardware. To make information even more dense, ternary and even quaternary values can be packed into binary form using some coding method, so that a short sequence of bits represents ternary or quaternary digits instead of the same number of binary digits. Different coding methods exist that code ternary and quaternary digit values into sequences of binary bits so that, when the sequence is decoded, it yields ternary or quaternary values instead of binary ones. Examples: "BIN@ERN: binary-ternary compression data coding", "Binary to binary-encoded ternary (BET)" (US patent), "Arithmetic with binary-encoded ternary numbers", Parhami 2013, "Self-determining binary representation of ternary list".
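Packing ternary values into binary, as the papers above propose, can be done quite densely by grouping trits: since 3^5 = 243 fits in one byte, five trits per byte reaches about 99% of the information-theoretic density (5 x log2(3) ≈ 7.92 bits of information per 8-bit byte). A simple sketch of such a binary-encoded-ternary packer:

```python
# Pack groups of 5 ternary digits (trits) into single bytes:
# 3**5 = 243 <= 256, so 5 trits fit in 8 bits at ~99% density.

def pack_trits(trits):
    """Encode a sequence of trits (values 0..2) as bytes, 5 per byte."""
    assert all(t in (0, 1, 2) for t in trits)
    padded = list(trits) + [0] * (-len(trits) % 5)   # pad to multiple of 5
    out = bytearray()
    for i in range(0, len(padded), 5):
        value = 0
        for t in padded[i:i + 5]:
            value = value * 3 + t       # base-3 positional packing
        out.append(value)
    return bytes(out)

def unpack_trits(data, n):
    """Decode n trits back out of the packed bytes."""
    trits = []
    for value in data:
        group = []
        for _ in range(5):
            group.append(value % 3)
            value //= 3
        trits.extend(reversed(group))
    return trits[:n]
```

A plain two-bits-per-trit encoding would waste a quarter of the bits (4 states for 3 values), so the grouped encoding is what makes ternary data practical on binary hardware.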
On quaternary numbers: "A survey of quaternary codes and their binary images", Özkaya 2009, which includes the "Z4 cycle code" among others; "Formulation of a novel quaternary algebra" 2011; "Design of some quaternary combinational blocks using a new logic system", Jahangir 2009; "A cost effective technique for mapping BLUTs to QLUTs in FPGAs"; "Arithmetic operations in multi-valued logic", Patel 2010; "Application of Galois field in VLSI using multi-valued logic", Sakhare 2013. Other: "Rotation symmetric boolean functions: a novel approach to ternary multiplication", Vidya 2012; "Arithmetic algorithms for ternary number systems", Das 2012; "Addition and multiplication in generalized Tribonacci base", Ambrož 2007; "Bitwise gate grouping algorithm for mixed radix conversion"; "Ternary and quaternary logic to binary bit conversion MOSFETs"; "2L threshold circuit for binary-quaternary encoding and decoding"; "Balances and abelian complexity of a certain class of ternary words", Turek (internet blog); "Design of 8 bit array multiplier based on ternary logic QDGFET" 2014. Other: "Code compressor for ultra-large instruction width coarse-grain reconfigurable systems"; "Practical lock freedom: efficient and practical non-blocking data structures", Sundell 2014 (and the Synthesis OS by Alexia Massalin, a sort of distributed operating system); "Differential predictive floating-point analog-to-digital converter", Croza 2003; "Charge balanced analogue to digital converter using cyclic conversion"; "Abelian complexity in minimal subshifts", Saari 2011; "Colour dither with n-best algorithm", Lemström; and for using complex numbers, on researchgate.net, "Alternative number system bases?" (Rob Graigen), a new 2+i base as an alternative to the 1+i complex base for computer arithmetic.
And on the netpage mrob.com, in the "Alternative number formats at MROB" section, just about every possible non-integer number format in the world is presented, including the "Munafo PT system", which uses 17 different values and can represent any number value using those 17 predefined values. For example, when digitizing sound or still images and processor speed is not so important, the Munafo PT system could store almost infinite precision of information and pack it into 17 symbols, so a still picture or sound sample could have that almost infinite precision too. Several one-hertz sound frequencies could perhaps (?) be included in one PT-system number, if such numbers can represent values almost to infinity — so that would be below-Nyquist-frequency coding, perhaps? Or then not. If frequencies up to 100 kHz or more can be coded accurately, then using a frequency splitter and storing five different 20 kHz channels in one 100 kHz PT-system number, five channels or five one-hertz frequencies could be stored in one number value.
The residue number systems and their "moduli" look like quaternions to me. If a quaternary number system is used, is it then possible to use quaternions as numbers — using the "modulus" of a residue number system as a single quaternary number, or as some other quaternion component? Then instead of a single binary bit, a quaternary quaternion could be used as the smallest information unit. Also, on the netpage xlnsresearch.com many logarithmic number systems and their development are presented. There are more, for example the "Two-dimensional logarithmic number system (2DLNS)", and "A new number system for faster exponent multiplication" (bit signed number, canonical signed digit). More complicated is "A new number system using alternate Fibonacci numbers as the positional weight with some engineering applications", Sinha 2014. And the new unum / ubox computing principle, which is like floating point(?). If FPGAs are used instead of old processor designs that are public domain or otherwise cheap, many old FPGAs can be grouped on a single silicon die — like the old Xilinx 4000 series, 20 years old: 64 of them manufactured in a 28 - 22 nanometer process occupy the same area of silicon as one manufactured in a 180 nanometer process, working simultaneously as parallel processors or otherwise. There is the old "Sub-nanosecond arithmetic processor (SNAP)" project from about 15 years ago. And the SHARC processor, which is VLIW but a DSP, not a general purpose CPU; it dates from 1994, and the older models (20 years old) are manufactured cheaply by Chinese clone factories. Older Xilinx 4000 series FPGAs are manufactured in China also (as clones, without license payment or otherwise). The new ePUMA is a DSP that is open source and aimed at mobile applications.
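A residue number system like those cited above can be sketched in a few lines: with the textbook moduli (3, 5, 7) a number is stored as its three remainders, addition and multiplication run carry-free and independently per modulus, and the Chinese Remainder Theorem converts back. (The moduli here are the standard small example, not taken from any particular paper.)

```python
from math import prod

MODULI = (3, 5, 7)          # pairwise coprime; covered range is 3*5*7 = 105

def to_rns(n):
    """Represent n by its residues modulo each modulus."""
    return tuple(n % m for m in MODULI)

def rns_add(a, b):
    """Carry-free addition: each residue channel is independent."""
    return tuple((x + y) % m for x, y, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Carry-free multiplication, likewise channel by channel."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Recover the integer via the Chinese Remainder Theorem."""
    M = prod(MODULI)
    total = 0
    for residue, m in zip(r, MODULI):
        Mi = M // m
        total += residue * Mi * pow(Mi, -1, m)   # modular inverse
    return total % M

print(to_rns(17))                                 # (2, 2, 3)
print(from_rns(rns_add(to_rns(17), to_rns(30))))  # 47
```

The absence of carries between the channels is what makes RNS adders attractive in the hardware papers listed above: each small modulus can be computed in parallel.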
There are new processor technologies like the Mill processor, Soft Machines' variable instruction set, the GreenArrays processor, the Parallel Ultra-Low Power (PULP) processor, or Venray Technologies' and Micro Devices Technologies' processor-in-memory devices, and new methods like "SEERAD: a high speed yet energy-efficient rounding-based approximate divider" and "Energy efficient approximate M-bit Vedic multiplier for DSP applications" — but the future of computing is in neuromorphic computing. There are projects like Arduino using microcontrollers, other open source projects using closed source microcontrollers (Renesas microcontrollers and others), and open source hobby or serious projects using PLDs and FPGAs. If neuromorphic computing is the future, it is sensible to move from Arduino / FPGA style open source projects to neuromorphic open source projects. The software is open source, at least for deep learning projects — for example Skymind and Deeplearning4j, Numenta and its NuPIC, and Nervana Systems and its Neon. But the actual hardware that uses these techniques is not open source. And deep learning is not necessarily neuromorphic computing. There is TeraDeep with its ACCEL FPGA platform and nn-X processor, which is ARM based, but whether a customer can tweak the parameters the way hobby / open source projects do with FPGAs I don't know. There is BrainChip Inc., which in the future will offer a neuromorphic chip with an ARM-style licensing model, but it is not ready yet. NeuroMem already offers the CM1K processor as open source, and projects like BrainCard have been done with it. The CM1K was discontinued by General Vision because it was too old-fashioned (?) for them and is now offered as open source. There was also the CogniMem PM1K chip, but very little information about it is available. For using an FPGA there was "DANNA, a neuromorphic computing VLSI chip", which was first a soft processor in an FPGA and is now a real hardware chip, but whether it is commercially available I don't know.
The Facebook FAIR project and its Big Sur platform are open source, but they use ordinary GPUs, not neuromorphic chips. Hewlett-Packard with its "The Machine" OS and "MONETA HP Cog Ex Machina" is open source but not all neuromorphic. On the theoretical level there are lots of publications like "Logtotem: algorithmic neural processor and its implementation on an FPGA fabric", but that and others use ordinary FPGAs to imitate neuromorphic chips. Similar theoretical work is "PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems", using the PyNN programming interface. The European FACETS project and the BrainScaleS project that followed it are open source, like the majority of neuromorphic theoretical work, "ECiS: emulated consciousness systems", the OpenCog project etc. Actual hardware that uses these theories is either Intel Quark SE based or RISC/ARM based, like the Chinese Darwin project that uses OpenCores open source RISC cores for its neuromorphic network. Knowm (knowm.org) is a real neuromorphic memristor-based platform and offers its data as open source, but is not ready yet as a hardware implementation. So the NeuroMem CM1K is the only neuromorphic chip the open source / hobbyist community can get its hands on at this moment. There is also "Mavric Semiconductors Inc.", which has something to do with neuromorphic field-programmable analogue arrays (FPAAs in neuromorphic versions), but whether it is for sale I don't know. If the Mavric neuromorphic FPAA exists, it would be an interesting thing. FPGA / FPAA solutions are suitable for neuromorphic chips because they are built from the start to be flexible. Making a neuromorphic FPGA / FPAA for the open source community should not be so difficult, because so much theoretical work on neuromorphic chips is open source — real neuromorphic hardware, not an imitation of neuromorphic systems on an FPGA like the majority of theoretical studies.
And because neuromorphic computing is basically multi-value logic, multi-value logic systems — ternary and quaternary logic, ternary and quaternary number systems, residue number systems, logarithmic number systems, complex/real number systems etc. — are easier to use with neuromorphic computing than with binary / von Neumann logic, and analogue/digital hybrid circuitry is already in use in neuromorphic chips. The Archipelago project is trying to make an open source FPGA platform; if the future of computing is neuromorphic, how this and the Archipelago project can be unified I don't know. If 20-year-old technology is free to use: already 20 years ago there were lots of neuromorphic chips. The list on the netpage "NNW in HEP: hardware - CERN", 13.11.1998, has dozens of chips that are now phased out of production; the Intel ZISC, Siemens MA-16 and Adaptive Solutions CNAPS were in production at least in some quantities 20 years ago. So if license-free neuromorphic technology is wanted, those 20-year-old designs may help. Almost every Android smartphone already has a Synaptics neuromorphic microcontroller for screen control, and some of these phones have a Sensory (sensory.com) speech recognition neuromorphic chip. Because phones / laptops are Linux / Android, can these neuromorphic chips be used the way Renesas and other closed source microcontrollers are used by the Arduino community? Sensory and Synaptics existed already 20 years ago, and so did their microcontroller chips. The Sensory neuromorphic microcontroller includes its own music synthesiser — is it like a Hartmann Neuron in mini form? For DIY synth people that would be interesting. If these neuromorphic chips could be used like Arduino for hobby projects and other circuit bending, that would offer a way into neuromorphic engineering for the masses, because 20 years from now all new electronics will be neuromorphic anyway.
And if neuromorphic chips can be open source or license free, making cheap but still effective electronic components — with low or no license cost — for people in developing countries, that would be good. Neuromorphic computing does not just make computing more efficient by itself, but even more so when combined with multi-value logic (ternary, quaternary), complex/real number systems (logarithmic or complex-base number systems, residue number systems, multi-base number systems etc.), and unum computing and other developments of computer arithmetic. Adapteva with its Parallella board is open source, and so is REX Computing, which plans a new parallel computing chip and is doing it open source. But parallel computing alone does not make a neuromorphic chip, and neuromorphic computing is the future. A REX Computing style open source neuromorphic chip project done seriously (academic theoretical work has already been done more than enough) would be a big thing, and 20 years ago there were many neuromorphic designs that are now patent/license free to copy if needed. Chinese factories make almost every old chip in existence — the majority are DSPs — already for about one dollar per item or less. Other factories offering new old chips (20-year-old designs) are in Indonesia, Malaysia and Mexico. No Chinese factory has cloned an old neuromorphic chip — not yet, perhaps.
The Elbrus E2K design existed 20 years ago. Its patents (about 100) are now owned by Intel (or Intel has license rights to them), but the oldest Elbrus patents are from the 1990s, for example a patent filed in 1996 and granted in 1998, and those patents cite the old Cydrome processor patents widely. The netpage xbitlabs.com "Elbrus E2K speculations" (1999) says that Dave Ditzel of Transmeta designed his Crusoe processor together with the Elbrus team at Sun. If the Elbrus patents were filed in 1996, and that design relied on earlier processors like the Cydrome Cydra 5 (1989), the Multiflow processor (1987), the Astronautics ZS-1 (1987), the TFP processor concept (Hsu 1994), the Sentinel concept (Mahlke, Hwu 1992), and lastly the Transmeta Crusoe, how much original work is left in the Elbrus processors? The earliest Elbrus patents are expiring, like the Hitachi SuperH patents. IBM made its POWER processors open source. So there is now more to choose from as a neuromorphic processor core than just the RISC / ARM cores used now. IBM made POWER cores open source to expand its market base and succeeded; Elbrus could follow IBM and expand its market base radically outside Russia if Elbrus were open source. These cores are grouped with interconnections into one large neuromorphic kernel in neuromorphic processors. If Elbrus is such a super efficient core and processor system, just 70 million transistors and more efficient than the Intel Itanium despite an older manufacturing process (nanometers), what stops anyone using these VLIW cores as neuromorphic processor cores instead of RISC? The Russians are probably already doing it with their year 2045 neuromorphic computing plan. And the Mill CPU, another VLIW superchip, is based at least partially on the old Philips TriMedia, a design about 30 years old. So Philips TriMedia, starting from the year 1996, must be in the public domain now? And the Mill processor is based on that.
There are PicoChip, Tilera and Kalray manycore processors for sale, and the Kalray "supercomputer on a chip" is low power (5 W) and cheap. Developing countries need cheap but efficient computer technology, and super efficient but cheap chips are on sale. For the cheapest devices like phones, ARM SoCs like the Allwinner A33, already sold at 4 dollars for tablet PCs, are suitable; cheap solutions are also needed for internet routing (Kalray) and for servers. For future neurocomputing core options, whether VLIW / TTA (transport triggered architecture) or RISC is better I don't know; some studies have been done: "The perceptual processor" (Mathew 2004), "Place coding in analog VLSI: a neuromorphic approach to ...", "SPERT: a VLIW/SIMD neuro-multiprocessor". New technology is coming from Scalable Systems Research Labs also. If Qualcomm Zeroth becomes reality (2018?) Android phones will have a neuromorphic processor, and Samsung has studied neuromorphic processors since 2013. Other studies: "A digital implementation of neuron-astrocyte interaction for neuromorphic acceleration" 2015, "Phase change memory for neuromorphic systems and applications" (Suri 2013), "Optimum microarchitecture for neuromorphic algorithms" (Wang 2011).
Xilinx FPGAs already have the capability to use ternary logic (US patent 7274211), and that is cited on the opencores.com netpages. Whether these logic gates can operate on balanced ternary tau, Zero Displacement Ternary (ZDTNS), tribonacci, or ternary (three-value) complex number bases I don't know. Multi-value logic is needed not only for neuromorphic computing but for efficient arithmetic operations. Googling "multi valued logic gates", "multi valued logic circuits", "five value logic", "pentanary logic" (-gates, -circuits) and "radix-5 logic" (-gates, -circuits) gives lots of results on these topics, for example the article "The need for a five-value logic system", and "Highly parallel residue arithmetic chip based on multiple-valued bidirectional logic" (Kameyama). On the netpage stackoverflow.com, "What is the most minimal functional programming language" (18.10.2011) gives lambda calculus and combinatory logic as examples. What if, instead of programming code, a single symbol could represent some index, vector, matrix or calculus - for example one single five-value (pentanary) symbol that itself contains a vector or matrix etc.? In five- or multi-valued logic this one pentanary "number" can contain information of lambda calculus, combinatory logic, a vector, matrix or index, some complex-base number combination, a quaternion etc. There is the programming language APL and its developments A+, J, K, and Q. APL can pack lots of information into one symbol. If multi-valued logic is used, then instead of plain number values these quaternary, pentanary etc. "numbers" can be vectors, matrices, complex-base number representations, and calculus that themselves contain a much larger amount of information than one number ever can, and if there is a logic system that can use these multi-value "logic blocks" (which are not traditional numbers) as an arithmetic unit, computation becomes much more efficient and program code much more compact.
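As a concrete taste of the balanced ternary mentioned above (a toy encoder, not tied to the Xilinx patent or any hardware): each digit ("trit") is -1, 0 or +1, negating a number is just flipping every trit, and n trits cover the symmetric range ±(3^n - 1)/2.

```python
def to_balanced_ternary(n):
    # digits are -1, 0, +1, least significant first
    if n == 0:
        return [0]
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:          # remainder 2 becomes -1 with a carry
            r, n = -1, n + 3
        trits.append(r)
        n //= 3
    return trits

def from_balanced_ternary(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

assert to_balanced_ternary(5) == [-1, -1, 1]   # 5 = 9 - 3 - 1
assert from_balanced_ternary([-1, -1, 1]) == 5
# negation is a digit-wise flip, no sign bit needed
assert from_balanced_ternary([1, 1, -1]) == -5
```

The sign-free representation is the usual selling point: there is no separate negative-number encoding, and rounding to the nearest integer is just truncation of trits.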
APL and its modern equivalents use this method, but in a binary context. When multiple-value logic is used instead of binary, the information per symbol increases, and so do the possibilities of what one symbol can represent. One bit can represent only 1 or 0. Magnetic memory (magnetic phase change OUM memory) can store a 1000-value "bit" at its maximum, i.e. 10-bit accuracy instead of the two or three bit values per memory cell that magnetic memories use nowadays. So magnetic memory storage for multi-value logic exists, but the logic gates and circuits themselves must still be designed. The increased efficiency of using multi-value representations of vectors, indexes and calculus or complex-base number systems instead of traditional binary numbers is so great that the complexity of circuit design is secondary. There is US patent 20030212724 A1 for magnetic phase multi-value memory (max. 1000 values, or 10-bit precision for one "bit"). There is the "index-calculus double-base number system" and its developments: "The use of the multi-dimensional logarithmic number system in DSP" (Dimitrov), "Complex multidimensional logarithmic number system with DSP applications" (Eskritt), "Towards a quaternion complex logarithmic number system" (Arnold), and these are just versions of logarithmic number systems; others like combinatory logic, lambda calculus, complex-base number systems, index-calculus etc. can be used in multi-valued logic instead of one bit (1 or 0) as the smallest numerical value, and these "bits" (which are three-, four- or five-value indexes, vectors or equations like quaternions) can be used the way numbers or binary bits are used now. New operating systems like the Chinese "TransOS" cloud OS, the French Xenomai RTOS / Linux, xPUD Linux, the "iSpaces cloud computer" browser originally from Ireland (?) etc. could serve as cheap multiuser operating systems, like the original Dooble distributed browser.
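The information-per-symbol argument above is easy to quantify: a symbol with r distinct values carries log2(r) bits, so a pentanary digit carries about 2.32 bits and a 1000-level memory cell holds just under 10 bits. A back-of-envelope sketch (not a statement about any specific memory part):

```python
import math

def bits_per_symbol(radix):
    # information content of one radix-r symbol, in bits
    return math.log2(radix)

assert bits_per_symbol(2) == 1.0                  # one binary digit
assert abs(bits_per_symbol(3) - 1.585) < 1e-3     # one trit ~ 1.585 bits
assert abs(bits_per_symbol(5) - 2.322) < 1e-3     # one pentanary digit
assert 9.96 < bits_per_symbol(1000) < 10.0        # 1000-level cell ~ 9.97 bits

# how many 1000-level cells would replace one 64-bit word?
cells = math.ceil(64 / bits_per_symbol(1000))
assert cells == 7
```

So the "10 bits per cell" figure quoted for the OUM patent is exactly the log2(1000) capacity bound; any real device would land somewhat below it because of noise margins.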
When searching for 20-year-old microchip designs, only the Hitachi SuperH has gained publicity, with its J-Core open source implementation (SuperH 4 / J-Core 4 enters the completely license-free public domain in 2016, because it dates from 1996). But the SHARC processors are also old (first versions from about 1994), as is the Philips TriMedia that the new Mill CPU is based on (from 1988 onwards), and lastly and most importantly the Elbrus processor, from about 1992, called either Elbrus 3, Elbrus EL-95 or Elbrus E2K (those different names are, however, for the same one processor). Although released in the 2000s, it is a 25-year-old design, with the first information dating from about 1992 and the first four and most important patents dated 1996. Elbrus is now intellectual property of Intel, which bought about 100 Elbrus patents. Those patents are based in part on earlier western designs (Cydrome, Transmeta, and others). Intel has no interest in using the Elbrus processor. Intel also has open source activities and its own open source data centre, and contributes to different open source projects. Earlier Intel had its own open source licensing system but has ceased to use it. Because Intel has no plans to use Elbrus, Intel has the opportunity to release the Elbrus designs and patents that it does not use in its own products (Intel uses some VLIW / Elbrus-type technology in high-end server processors). All those "surplus" Elbrus schematics Intel could release under its own open source licensing model. Compiling that tech together with the four early Elbrus patents from 1996 and the technical Elbrus data that is public domain or was released in the 1990s, a working Elbrus-based open source processor (like J-Core / SuperH) is perhaps possible. It would be "ElbrOS - Elbrus Open Source processor". Nowadays Elbrus is manufactured using outdated 65 nm technology and it has only 75 million transistors, compared to modern Intel or ARM designs whose transistor count is a billion or more.
Performance between Elbrus and modern designs is about the same, so Elbrus is super effective, its minimal transistor count is suitable for an open source design, and old manufacturing methods make chip production cheap. When running Windows, Elbrus has to translate code to VLIW, which slows the processor down even more, but when using native code (based on Linux) Elbrus can be faster. There are modern research projects based on VLIW, such as ρ-VEX / r-VEX, but they are microcontroller-scale like most modern open source VLIW processor projects; and there are VLIW-based FPGAs that are theoretical work only, both soft core and real hardware VLIW FPGA, but no real commercial VLIW FPGA hardware has ever been released. Theoretical works include: the Imagine stream processor (theoretical work that spawned many commercial products), "Modify the UCS 51 architecture to SIMD, VLIW and superscalar...", BioThreads VLIW FPGA, "ADRES: an architecture with tightly coupled VLIW processor...", "SynZEN: a hybrid TTA/VLIW architecture with a distributed register file", "A novel implementation of a 32-bit VLIW-MISC processor on FPGA", "A rapid reconfigurable VLIW co-processor for ternary emulation of digital designs", "Throughput oriented FPGA overlays using DSP blocks", "A scalable unsegmented multiport memory for FPGA-based systems" (2015), and US patent 8250342 "Digital signal processing engine" by Igor Kostarnov. And by coupling old CPU cores like Elbrus into one multicore processor, as Tilera, Kalray etc. are doing, processor efficiency is improved. Old Xilinx FPGAs (20-year-old models) are manufactured in China, and since both the Xilinx Virtex 1 and Spartan 1 date from November 1997, their 20-year protection is coming to a close. ARM7 processors date from 1997 and ARM9 from 1998, so their 20 years of protected production are also coming to a close.
Xilinx Zynq and Altera Cyclone couple ARM cores with an FPGA; license-free versions that use 20-year-old ARM and Xilinx designs are possible, and if manufactured at a narrower nanometer process than the 500 - 130 nm they were originally made with, performance would be better than the originals. A very low-cost FPGA coupled with some ARM core, or Hitachi SuperH / J-Core, or something exotic like an Elbrus or Philips TriMedia processor core (if there is ever a chance that Elbrus becomes open source) is now possible, if the open source community wants cheap FPGAs with hard kernels - either one or many hard processors together with FPGA fabric. It could be used in some cheap laptop PC, perhaps like Linaro, but if it is going to get use in developing countries, manufacturing costs and the license cost of the electronics must be minimal; if a 20-year-old design is license free, that will do. Old FPGA models are manufactured in China at about 1 dollar per chip at mass-production factory prices. The Elbrus processor in its first version has an ELVEES (DSP / GPU) assistant processor, which partly eliminates the need for a GPU / graphics card. And Philips TriMedia dates from 1997 - 1999 (production examples), and very little has changed since, so old TriMedia processors from 1997 - 1999 are almost the same as newer versions. TriMedia has DSP capabilities itself, and the Linux operating system has been booted on TriMedia. And because the SpiNNaker project, which is trying to build a neuromorphic computer, is using ARM9 cores - 18, or in later revisions 20, or 19 in the Chinese version ("Algorithm for mapping multilayer BP networks onto the SpiNNaker neuromorphic hardware", X. Jin 2010) ARM9 cores per chip - an open source version with less or no license cost using the ARM9TDMI is conceivable. If the open source community wants to get its hands on neuromorphic computing, building an ARM9 or ARM7 based SpiNNaker platform and selling and marketing it like Arduino or Raspberry Pi is possible, if someone just starts manufacturing chips using license-free / open source ARM7 or ARM9 hardware (possibly license free around 2018) and the SpiNNaker documentation. Because power consumption is low, grouping 18 - 20 ARM cores, or 57 or 114 cores (3 x 19 or 6 x 19 cores, Chinese version), onto a single chip and putting that chip inside a telephone makes the neuromorphic phone come true. And perhaps a Venray Technologies type "processor in memory" technique could be used. The Kalray 256-core is not open source, but at least it has low power consumption and a relatively cheap price for its capabilities, and perhaps a 1024-core version is coming in 2017. The goal is a minimum production cost but effective computing platform, and if it is going to be used inside phones etc., low power consumption is needed (and neuromorphic systems use a minimal amount of electric power). More expensive (but still cheap) configurations that use more electric power can be used inside cheap laptop PCs. The "nn-X" chip that is based on the Xilinx Zynq ("A 240 G-ops/s mobile coprocessor for deep neural networks", Gokhale 2014) is based on ARM Cortex-A9 cores and on FPGA, like the TeraDeep ACCEL. Different models are presented in "Memory and information processing in neuromorphic systems" (Indiveri 2015). There is also the Chinese Darwin project that is using open source RISC cores, but those cores are very little used elsewhere, considering ARM's big dominance. And the new lowRISC is being developed, but no commercial product has yet materialized. The SHARC processor is also old, from 1994 onwards, so its old models are going to become unlicensed hardware like the old Hitachi SuperH models, but the SHARC processor has not attracted any significant interest in neuromorphic computing despite being an efficient multiprocessor.
Neuromorphic SHARC means the "SHARC: a streaming model for FPGA accelerators" model and not the SHARC processor itself, as in the paper "Neuromorphic accelerators for neuromorphic systems" (Dantala 2011). Almost every computer program in the world is written for either Intel x86 or ARM, so a neuromorphic processor that builds on already established processors probably uses either ARM or Intel Quark. DSP software exists for the Philips TriMedia and SHARC, and one (Elbrus) is a country-specific processor, but many kinds of software for Elbrus exist.
For the use of alternate number system bases: Paul Tarau has proposed a "hereditary binary numbers" concept on his home page. There are these and other alternative number formats, like those at MROB.com, quadibloc.com (John G. Savard) and his "Unusual floating point format remembered?" thread on the dsprelated.com netpage in 2007, the neuraloutlet.com netpage and its "metallic number systems", the Zero Displacement Ternary system, and the balanced ternary tau number system. If a decimal format like John G. Savard's, or the new DEC64 binary-coded decimal, is used, then algorithms built on the decimal system, such as the Trachtenberg speed system of mathematics, which uses very simple digit-shifting tricks even for the most complicated calculations, can avoid much of the actual calculation, and that would significantly improve computation speed. There could even be a Trachtenberg-system ALU in hardware, if DEC64 or some other binary-coded decimal number system becomes standard. Complex number calculations can be used in FPGAs: "Design of a complex floating point processor using FPGA" (Pavacuril 2013). John Gustafson of AMD has proposed the unum concept. And "This paper describes a new approach ... hardware-based floating-point design flow" (Michael Parker 2011) significantly increases computing speed. In chipdesignmag.com there is the article "Between fixed and floating point" by Dr. Gary Ray, and its table 4 (alternative floating point formats). Al Wegener of Samplify Systems Inc. (a firm that does not exist anymore) has patented many data compression techniques, mainly floating point compression. There is the article "Spectral compression of mesh geometry" (Karni 2000). If suddenly a large number of new devices is being manufactured, for example when free net for developing countries comes true, perhaps it is possible to try new ideas for data compression, data formats etc., because these new devices would outnumber all previously made electronic devices and be manufactured at the scale of billions. Processors already include several different ALUs and floating point units that operate on different principles.
And even analog processing tech like G.E.R. Cowan's "A VLSI analog computer / math coprocessor" could be used, if neuromorphic computing and the Karhunen-Loeve transform require it anyway, as in "Optimal y-u-v model based on Karhunen-Loeve transformation" (Xuan, Fisher) on video compression. Adding ALUs and other processors that use these methods alongside old ALUs etc. in one processor is possible. For floating point accuracy there is Shewchuk's algorithm and an improved version of it based on Sterbenz's theorem. For example, if the minimum floating point format is the OpenGL standard 10 bits (5+5 bits), precision can be 50 bits if accuracy is increased tenfold using algorithms - almost the 64-bit floating point standard, although the range is much less. For large integers there is Fürer's algorithm and its newer, somewhat similar derivatives. But floating point numbers can't be smaller than those 10 bits, whereas logarithmic numbers can. For logarithms there are "Gal's accuracy tables" (which also work on logarithmic numbers) and "Gal's accuracy tables revisited", and others such as "Improved subtraction in the logarithmic number system", "Truncated logarithmic approximation" (Sullivan), "Improved computational precision with positional-logarithmic data" (1997), "Increasing precision with log-scale math" (Serang), "Design of a high precision logarithmic converter in a binary..." (Lee 2010), "Fast and correctly rounded logarithms in double-precision" (2005), "An iterative logarithmic multiplier with improved precision", "Highly accurate tables for elementary functions" (Luther 1995). If, for example, the pixel value of a picture or a hertz value in sound is quantized, and Gal's accuracy tables (or some other logarithmic method) reach a 10-bit precision improvement, then only one logarithmic bit is needed to store 10 bits worth of sound or picture information. There are other methods like Bounded Integer Sequence Encoding (BISE), which uses fractions of bits, quote notation for mathematical data, the Q number format etc., and Asymmetrical Stretch Transform based video and still image compression. Because processors use different floating point and integer ALUs, many in a single processor, some of them can be regular standard FP and integer types while others are based on complex numbers and alternative number systems, even J.A.W. Anderson's Perspex Machine, Wolfgang Matthes's REAL computer architecture, Ya. D. Sergeyev's Infinity Computer, or Oswaldo Cadenas's ideas - a super complicated "infinity computer ALU", boosting computational efficiency toward infinity. And if alternate number system bases such as the Munafo PT system or Zero Displacement Ternary can hold a large number of number values (bits) using only a few bits or trits, these can be used for data storage: for example, a large amount of text can be stored as one long string of bits, putting one character after another (8-bit character + 8-bit character + 8-bit character etc., all coupled into one long bit string). If integer processors operate on 56 bits max, one could make a super long integer, for example 63 x 56 bits, divisible by 3 so it can be used as a Zero Displacement Ternary super long number; if ZDTNS accuracy improves exponentially the longer (in bits) the number is, this huge 63 x 56 bit number would have perhaps thousands of times more accuracy (bits) than an ordinary 63 x 56 bit binary integer, and more bits means more capacity to store information (such as text coded into a binary chain of 8+8+8 bit letters) in the bitstream of this very long number. The same goes for the Munafo PT number system or any other alternative number base that is not straight binary. Finite State Entropy coding (FSE) is an entropy coder built on the "asymmetric numeral system" (fgiesen.wordpress.com: "FSE/ANS history correction"). Another encoding scheme is q-digest ("An algorithm for computing approximate quantiles", q-digest 2013) and its newer versions t-digest ("Class TDigest") and PMX-digest. Because even a standard ARM processor has two different standard floating point ALU units, one for DSP use and one for the rest, why not include some of these newly developed non-standard number systems in an additional ALU of new processors?
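The error-compensation idea behind the Shewchuk-style precision tricks mentioned above can be shown in a few lines: Knuth's TwoSum returns the rounded sum plus the exact rounding error, so a + b is represented exactly by the pair (s, err). This is a generic sketch of the basic building block, not Shewchuk's full expansion arithmetic:

```python
from fractions import Fraction

def two_sum(a, b):
    # Knuth's branch-free TwoSum: s is the rounded float sum,
    # err is the exact rounding error, so a + b == s + err exactly.
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

s, err = two_sum(0.1, 0.2)
assert s == 0.1 + 0.2          # the ordinary rounded result
# the pair (s, err) captures 0.1 + 0.2 with no rounding loss at all
# (Fraction converts each float to its exact rational value):
assert Fraction(0.1) + Fraction(0.2) == Fraction(s) + Fraction(err)
```

Chaining TwoSum over a list of floats is exactly how the "increase precision by algorithm" step works: the error terms carry the bits that a single fixed-width addition would throw away.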
Intel owns Altera and its new floating point adder design concept, AMD has the unum / ubox computing principle, FSE encoding has its asymmetric numeral system, etc. If the new additional ALU were really sophisticated, it would use the "infinity computer" or some other complex-number-base principle, with super computational efficiency but also complexity. If only there existed a customer who would buy a processor that uses these sophisticated techniques. The hybrid ARM/x86 processors that both AMD and VIA were making were cancelled because no customer was interested in buying them. So if some phone / tablet PC / laptop / desktop PC manufacturer wants a sophisticated chip that uses some of these alternative / complex number bases together with standard hardware, there will probably be a manufacturer willing to sell that chip to the customer.
There were plans at AMD (Skybridge) and VIA Technologies (Isaiah II) to publish a combined ARM and x86 processor that could run ARM (Android) and x86 (Linux and Windows) programs on the same processor, but these processors were abandoned. However, just this kind of combined processor, which can run both ARM mobile software and old desktop PC programs (Linux and Windows), is needed for free internet in developing countries. Processor capacity is not enough for modern desktop PC programs, but older programs will do. Perhaps an optional (extra price) large display screen like a desktop PC's, and a phone dock like Ubuntu Touch, is needed so desktop programs can be seen completely, but the big display can be just optional. Earlier, in the '90s, there were many x86 compatible processors: AMD 5x86, AMD K6, AMD K5, Cyrix 6x86, Centaur WinChip. An old 5x86 based microcontroller is still in manufacture, as is the old AM2900 (the AMD K5 is a derivative of this) microcontroller in China. If someone wants to build an x86 computer that can run old programs, he can just buy a 5x86 microcontroller from China. And if someone wants to build a license-free x86 computer architecture using those 20-year-old chips as a model, that is possible too. Anyway, Intel together with the Chinese is perhaps planning to launch the cheap SoFIA chips again, this time with Chinese production, and AMD and VIA are also doing cheap x86 chips with the Chinese, so x86 chips are perhaps going inside mobile phones, although an ARM/x86 hybrid is what is really needed. The Chinese have their own Loongson processor, which uses the same method as Elbrus to run Windows programs; it has low power consumption and is probably cheap, but now the Chinese have begun to make "real" x86 processors through license deals with Intel, AMD and VIA, and these will probably be cheap also. The hybrid ARM/x86 chips were more or less ready for production at AMD and VIA; only the customer who would buy them is missing.
If the most efficient compression is vector quantization, then in audio the TwinVQ method is perhaps best. Various methods for vector quantization and other quantization: for very low bitrates there is "Very low bitrate audio coding development" (Edler), "A low bitrate audio coding using generalized adaptive gain shape vector quantization across channels", "New design in low bitrate audio coding using combined harmonic-wavelet", "Neural network based analog to digital converter", "High frequency reconstruction of audio signals based on chaotic prediction theory", "Bandwidth extension method based on spectral envelope estimation", "Wide band audio coding based on frequency domain linear prediction", then patent WO 2014161995 A1, and "Modulation spectrum audio coding", "Audio encoding and decoding for interleaved waveform coding", "Zero order hold DAC, partial-order hold DAC", "Power-efficient high speed parallel sampling ADCs for...", "Non-uniform sampling algorithms and architectures" (Luo 2012), "Compressive sampling matching pursuit", "Fourier random sampling", "Multi-coset sampling", "Successive approximation trellis-coded vector quantization", "Residual quantization for approximate nearest neighbour search" (Yuan), "Audio de-noising, recognition and retrieval by using vectors" (Vaidya), "Low bit rate coding with binary Golay codes", "Stochastic neighbour compression", "Constrained-storage vector quantization with a universal codebook" (Ramakrishnan), "Index rendering vector quantizer". The Canadian Space Agency has developed (spie.org: "More efficient satellite data transmission") Serial Adaptive Multistage Vector Quantization (SAMVQ) and Hierarchical Self-Organising Cluster Vector Quantization (HSOCVQ). In the text "Impact of vector quantization compression in hyperspectral data" (B. Hu) there is another SAMVQ.
There are "Multiple-description multistage VQ", "Image compression using zerotree and multistage vector quantization", "Tree-search vector quantization" (TSVQ), "Multi-stage residual VQ", "Computational RAM implementation of VQ" (T. M. Le), "A novel and efficient vector quantization based compression algorithm", "Multi-stage quantization of parameter vectors from disparate signal dimensions", "Entropy constrained vector quantization", "Entropy-constrained quantization of exponentially damped sinusoid patterns", "Locally optimized product quantization for approximate nearest neighbour search", "Multistage Lattice Vector Quantization", "Trellis Residual Vector Quantization", "Multi-rate Lattice Vector Quantization", "Optimal entropy-constrained scalar quantizer design for low-bitrate ...", "Extremely low bit-rate nearest neighbour search using a set compression tree", "Side match vector quantization" (SMVQ), "Three-sided side match finite-state vector quantization" (HTSMVQ), "Embedded Zerotree Wavelet quantization" (EZW), "Adaptive Scanned Wavelet Difference Reduction" (ASWDR), "A real-time wavelet vector quantization algorithm and its VLSI...", "Zerotree wavelet vector quantization", "The golden ratio encoder" (Daubechies, 7.3.2008), "A base phi number system encoder", "Adaptive additive quantization for extreme vector compression", "Random sampling with a successive approximation ADC", "Adaptive sparse vector quantization", "An even grid based lattice vector quantization algorithm for mobile audio coding", "Low bitrate audio coding with lattice vector quantization based scalable higher order codebook extension scheme", "New design in low bitrate audio coding using combined harmonic-wavelet tree presentation", "High frequency range in low-bitrate audio coding using predictive pattern analysis", and "A micron learning vector quantization for parallel analog-to-digital data compression".
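The core idea all these papers share fits in a few lines of code (a toy codebook, stdlib only; real codecs like TwinVQ train much larger codebooks and add entropy coding on top): each input vector is replaced by the index of its nearest codeword, so only small indices need to be stored or transmitted.

```python
def nearest(codebook, v):
    # index of the codeword with the smallest squared distance to v
    def d2(c):
        return sum((a - b) ** 2 for a, b in zip(c, v))
    return min(range(len(codebook)), key=lambda i: d2(codebook[i]))

def vq_encode(codebook, vectors):
    return [nearest(codebook, v) for v in vectors]

def vq_decode(codebook, indices):
    return [codebook[i] for i in indices]

# 4-entry codebook of 2-D vectors: 2 bits per vector instead of 2 floats
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
data = [(0.1, 0.2), (0.9, 0.1), (0.2, 0.8), (1.1, 0.9)]
idx = vq_encode(codebook, data)
assert idx == [0, 1, 2, 3]
assert vq_decode(codebook, idx) == codebook   # lossy: nearest codewords back
```

Multistage and residual VQ variants then quantize the leftover error (input minus decoded codeword) with a second, smaller codebook, which is where most of the paper titles above come from.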
Perhaps nothing in common with the previous posts, but as analogue sound has gained popularity again, and there are those who would prefer analogue sound instead of digital in recording etc.: old analogue video recorders, such as VHS or old one-inch or two-inch tape professional analogue video recorders, could be used for analogue sound recording. Video in PAL format has 13.5 megahertz pixel bandwidth and 6 megahertz storage bandwidth; one audio channel needs only 20 kilohertz or less, so multitrack recording is possible by stacking frequencies on top of each other (a 20 kHz channel + a 20 kHz channel needs 40 kHz, 4 channels need 80 kHz etc.) and using a simple frequency splitter to separate the audio channels from each other (at 20 kilohertz intervals). Using the helical scan of a videotape recorder makes very high quality audio recordings possible - perhaps even higher quality than is available in the digital domain, and certainly higher quality than the usual open-reel 16- or 24-track audio recording. For the Blu-ray disc an analogue option is also possible, one that uses the same optical format as the old optical film reel soundtrack sound systems; the accuracy of the optical waveforms can be improved using Dolby-type dynamic companding, improving sound quality and reducing bandwidth. And for improving the resolution of the optical analogue waveforms, a video superresolution method like the Karhunen-Loeve transform, which works in the analogue domain, can be used: an analogue sensor scans the analogue Blu-ray disc's optical waveforms and then improves the signal quality of the output using the KLT or some other analogue method. Recording on optical Blu-ray is done using a laser, like digital Blu-ray; because analogue audio optical waveforms, like an optical reel film soundtrack, need much more space than the digital dots of an ordinary digital Blu-ray, the recording time of an analogue Blu-ray is much shorter than a digital Blu-ray disc's, but that does not matter if the recording time is anywhere near vinyl LP playing time (30 minutes for one side of a disc).
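The frequency-stacking idea can be sanity-checked in software: put tones on carriers spaced well apart inside the video bandwidth, sum them into one composite signal, and verify that each carrier is still individually recoverable (here with a Goertzel detector). A toy model with pure tones and ideal sampling, nothing like real FM video recording:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    # power at one frequency bin, via the classic Goertzel recurrence
    n = len(samples)
    k = round(n * freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

fs, n = 1_000_000, 1000            # 1 MHz sampling, 1 ms analysis window
t = [i / fs for i in range(n)]
# two "audio channels" parked on 40 kHz and 80 kHz carriers
composite = [math.sin(2 * math.pi * 40_000 * x) +
             math.sin(2 * math.pi * 80_000 * x) for x in t]

assert goertzel_power(composite, fs, 40_000) > 1e4   # channel 1 present
assert goertzel_power(composite, fs, 80_000) > 1e4   # channel 2 present
assert goertzel_power(composite, fs, 60_000) < 1.0   # empty slot stays empty
```

In practice the channels would be full 20 kHz bands rather than single tones, and the "frequency splitter" would be a bank of band-pass filters, but the principle - independent channels at non-overlapping frequency slots inside one wideband signal - is the same.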
What kind of projection laser would record the optical-reel-type analogue spectral audio data onto the Blu-ray disc's surface, I don't know. For high end CD players, different accuracy methods have been developed to aim the laser very precisely at the CD or Blu-ray disc surface. In addition to analogue video tapes or Blu-ray discs, another way to store analogue audio content is perhaps magnetic phase OUM memory, a magnetic memory that works on analogue, not digital, principles; one OUM magnetic phase memory analogue "bit" can store 10 bits (1000 numerical values) worth of information. So it is perhaps possible to build a memory card or memory stick, like an SD memory card or USB memory stick, but with the audio information inside it in analogue rather than digital (bit) form. These (analogue video cassette / tape audio recording, analogue audio on Blu-ray disc using an optical film reel soundtrack type system, and an analogue magnetic phase memory stick or memory card) are all for those who do not want to listen to or record sound using digital audio. Because most of today's digital music is actually mastered using analogue mastering and only stored as a digital soundfile after the mastering process (although almost all music nowadays is recorded digitally), that opens the possibility of listening to real analogue audio for those people who demand it. Recording analogue audio on analogue video tapes makes possible the same style of editing that analogue video editing uses, and the possibility to add audio effects the way analogue video synthesizer effect systems do. Or then use the Blu-ray disc as the analogue recording medium. Blue-violet lasers have reached terabyte data density, and discs can be manufactured two-sided like vinyl LPs, not one-sided, if more capacity is needed. Analogue waveform recording on Blu-ray obviously requires a lot more capacity and space than dots of binary code. The old LaserDisc video disc used some analogue phase modulation, but that seems like binary data coding done in an analogue way.
Other exotic old video discs that used analogue video coding had their own encoding systems also. Perhaps some analogue coding method can be found for analogue optical audio that can also encode multitrack 4-, 8-, 12- or 16-track recordings simultaneously, or simply use more bandwidth when multitrack recording is used, so that a 16-track recording needs 16 times more space than a 1-track recording. The Karhunen-Loeve transform works in the analogue domain and makes it possible to matrix many audio channels into one, two or more channels (if 2 or more channels are used as matrixing base channels, matrixing schemes from 2 channels to 6 etc. are available). Old TV formats use analogue quadrature amplitude modulation in order to save bandwidth; perhaps the same technique is possible in optical analogue storage of audio also. And on using analogue audio: old analogue computers, whose last examples were taken out of use in the 1970s and are now museum pieces, could be put back into use as musical synthesizers. They can be used the way digital computers were used in the 1970s to create computer music, but this time the computers are analogue. Those old computers, if they are still in working condition, can now be used by those purists who demand analogue sound and refuse to use digital sound in music production. Analogue synths are specially built for music production duties, but using computers instead of synths perhaps brings tonal differences to the sound, like computer music that uses digital computer programs (not soft synths with their digital oscillators / filters) to create sound. Using old analogue computers instead of analogue synths can give different results in sound, just as "pure" digital computer music is different from digital synthesizer music.
More about audio: Wave Field Synthesis (WFS) is a method that uses hundreds of channels to deliver exact positional audio data, creating an "audio hologram". There have been problems with controlling acoustic waveforms in real room acoustics, but new inventions in the field have improved the situation. There are different techniques and trade names, like coolux.de, MorrowSound True 3D, A&G 3D-EST etc. So if acoustic wave control now has some sort of solution that works at a reasonable price, and acoustic WFS processors are no longer super-complicated and expensive, it is perhaps possible to build a cheap wave field synthesis loudspeaker array, for example like a home TV theater system. An ordinary Blu-ray disc has 100 gigabytes of capacity in HD form, prototypes have 400 gigabytes, and terabyte discs using a blue-violet laser have been made. The DTS sound encoding that Blu-ray discs use is capable of carrying thousands of discrete separate audio channels; the only limitations are the storage medium's capacity to store those channels and the capacity to transmit them simultaneously (bitrate). But even a standard Blu-ray disc can have 6 x 192 kHz channels. That is almost 1200 kHz (1152 kHz) of sampling frequency available simultaneously, of which about 600 kHz (576 kHz) falls in the audible frequency range. If these 6 x 192 kHz channels are divided by a simple frequency divider into 24 kHz channels (192 / 8 = 24), then 6 x 8 = 48 channels are available, each with a 24 kHz sampling rate / 12 kHz highest audio frequency. Dolby Pro Logic is an analogue matrix encoding method that can carry at most 9+1 channels in 2 channels in its Dolby Pro Logic IIz form; if 24 such stereo pairs are in use, a total of 240 channels fit in the Blu-ray 6 x 192 kHz stream. But it is also possible to matrix channels into the stereo pairs digitally instead of in analogue form, for example with DTS:X and the different Dolby surround standards.
If the first channel encoding is done digitally, each channel pair now carries 8 digitally matrixed audio channels inside 2-channel stereo (8 channels from 2), and then on those 8 digitally encoded channels a Dolby Pro Logic style analogue matrix platform is applied, using each stereo pair to again encode 8 channels from 2; in the end the original 2 channels expand to 32 channels (8 / 2 = 4 pairs, 4 x 8 = 32). If that is possible. A Blu-ray disc at a 1152 kHz combined sampling rate has 48 channels available (24 kHz sampling rate per channel), and 2 channels are needed per encoded pair, so 24 x 32 channels is almost 800 channels (768 channels), from a standard Blu-ray disc. Other surround formats, which divide the spectrum into discrete channels only a few kilohertz wide, increase this channel number into the thousands, because almost 1200 / 600 kHz is simultaneously in use on a Blu-ray disc. Using a channel frequency division that follows the human hearing system (which has roughly a dozen, by some models about two dozen, separate frequency "channels" between 0 and 15 kHz), and letting wave field synthesis use these same bands, a realistic "audio hologram" is possible. And of course there are audio compression methods; the previous example operated in linear PCM. The channels can also be made discrete instead of matrixed inside stereo pairs, or some sophisticated 6-channel audio matrixing system could expand the 6-channel Blu-ray audio stream to perhaps several dozen matrixed channels inside the 6-channel stream. But if discrete channels are used, the maximum data rate is 24.5 megabits/sec, DTS has capacity for thousands of channels, and 32 kilobits/sec is used for one discrete DTS channel, then 768 different discrete channels fit in the 24.5 megabit/sec bitstream.
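The channel arithmetic above can be checked with a few lines of plain Python (all figures are the ones given in the text):

```python
# Plain-Python check of the Blu-ray channel-count arithmetic described above.
BASE_RATE_KHZ = 192
STREAM_CHANNELS = 6

combined_rate = STREAM_CHANNELS * BASE_RATE_KHZ   # 1152 kHz total sampling rate
sub_rate = BASE_RATE_KHZ // 8                     # 24 kHz per sub-channel
sub_channels = STREAM_CHANNELS * 8                # 48 sub-channels

# Two sub-channels form one matrixed stereo pair. Each pair first expands
# digitally 2 -> 8, then each of the 4 resulting pairs expands 2 -> 8 again
# by Pro Logic style analogue matrixing: (8 // 2) * 8 = 32 channels per pair.
pairs = sub_channels // 2                         # 24 stereo pairs
per_pair = (8 // 2) * 8                           # 32 channels from each pair
total = pairs * per_pair                          # 768 channels

print(combined_rate, sub_rate, sub_channels, total)  # 1152 24 48 768
```

The same 768 figure also falls out of the discrete-channel budget: 768 channels at 32 kbit/s each is 24,576 kbit/s, which is why it just fits the roughly 24.5 Mbit/s maximum audio rate quoted above.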
If these discrete channels are then used like analogue stereo pairs for matrix decoding, matrixed audio channels can be built inside the 768 discrete channels, and the audio channel capacity expands to thousands of channels. If a frequency division that splits the spectrum into small few-kilohertz ranges according to the human hearing system is used, passing each frequency band to a different separate loudspeaker, the available channel number is also several thousand, from an average Blu-ray disc with its maximum 24.5 megabit/sec audio data rate, if the 3D real-time audio processor that sorts out the audio can handle such a workload. For headphone listening, 2 channels with good hi-fi quality are needed, but the rest of the channels can drive a wave field synthesis system with hundreds of loudspeakers. Electrostatic loudspeakers etc. are suitable here, because one large electrostatic loudspeaker can contain several different driver elements inside it, and that speaker multiplication can be done relatively cheaply if electrostatic loudspeakers are used.
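The hearing-based band split suggested above can be sketched with the Bark critical-band scale (psychoacoustic models usually count roughly two dozen critical bands below about 15.5 kHz, somewhat more than the dozen mentioned; Traunmüller's well-known approximation formula is used here as an assumption, not as part of any of the products named above):

```python
# Sketch: band-edge frequencies of a hearing-based (Bark scale) channel
# split, using the inverse of Traunmueller's approximation
# z = 26.81 * f / (1960 + f) - 0.53.

def bark_to_hz(z):
    """Approximate upper-edge frequency (Hz) of Bark band z."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

edges = [bark_to_hz(z) for z in range(1, 24)]
for z, f in enumerate(edges, start=1):
    print(f"band {z:2d} upper edge ~ {f:7.0f} Hz")
```

Low bands come out only about 100 Hz wide while the top bands are several kilohertz wide, which is why a perceptually motivated split gives far fewer loudspeaker feeds per octave at the top of the spectrum than a uniform frequency divider would.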
For cost-free internet, finding public domain material is difficult, for example in feature films. Old silent films made before 1923 are free only in the USA, and in Europe public domain films are even older. However, the USA, England and Japan have public domain laws that allow even 1940s or 1950s material (films or music) to enter the public domain, but outside these countries the same material is not in the public domain, although the material itself is American, English or Japanese. On the internet, for example, old feature films are posted on some netpage in Japan, the USA or England, and then those netpages are deleted, because outside these countries the material is illegal to download; inside these three countries, however, the same material is legal to download. To fix the situation, if it is not possible to download this material in the western or industrial countries, perhaps in the third world countries (South and Central America, South Asia and Africa, excluding "rich" countries like the Republic of South Africa) the same copyright and public domain rules as in Japan, the USA or England could be applied to Japanese, British or American material. That would give access and rights to this material at the same level as Japanese, British and American people have for their own country's public domain material. And some directors, like Andrei Tarkovsky and Luis Buñuel, stated in their last wills that their films should go into the public domain; although that has not happened, openculture.com for example shows Tarkovsky's films on the internet for free. In the third world countries public domain material is needed, all kinds of free material including feature films. An alternative to feature films is video game cutscene films. These are actually video games edited down to their animated scenes (cutscenes) only, with the gaming scenes removed or edited into a "feature film". They can be seen on YouTube or on a special channel for them (gmdb.tv).
Because they are in fact adware, or can be considered overlong commercials for video games, they can be treated in a similar way to advertisements: they can be free of royalty payments, or the game studio can even pay some sum for the cutscene movie to be displayed, the way an advertiser pays for an advertisement. Cutscene video game movies are a cheap way to increase the amount of feature-length film material in the public domain internet, and in the free net for developing countries when it becomes reality. Because they are more advertisements for games than actual video games, although mostly made and edited by fans, the game studio can even pay for them to be displayed, unlike a regular feature film, for which the displayer (a TV or internet channel) must pay royalties.
Many very advanced music-making programs (softsynths etc.) have been made for the iPhone, but people in developing countries do not have money to buy an iPhone. However, because the iPhone uses ARM processors, and they are cheap (even the latest Apple A9 processor, with 2 billion transistors, costs only about 22 dollars to make), building a cheap music softsynth platform that looks and works like the Plugiator and uses iOS music software would be easy. However, Apple does not license the iOS operating system to outsiders. There are somewhat similar alternatives, like Darwin (the open-source core of macOS/iOS), Haiku (an open-source reimplementation of BeOS) and the Fink package system; Darwin and Haiku both use BSD components. So it could be possible to build a cheap handheld music softsynth platform (a Plugiator-like device) that, with slight programming differences, can run iOS music software on Darwin, Haiku or some combination of these as a future free-software alternative to iOS. Now, if the OS is free and the hardware something other than an iPhone or iPad, almost similar music software can run on very cheap machines, and every iOS program could have a Darwin etc. alternative: music software for developing countries and cost-free internet in the third world. No net connection is necessary in the Darwin etc. OS device, because it is a music-making tool, not a phone or laptop with a net connection, although it can be used as a computer, because a cheap iPhone or iPad alternative is a computer. The Allwinner A33 processor costs only 4 dollars in a tablet PC, and if even the Apple A9 costs only 25 - 30 dollars, even that could be used in a cheap music Plugiator. Free software for macOS/iOS such as Sparkle, Tunnelblick, Ninite, the Firebird database, MacPorts, Blender and Glade exists, as does audio software, either commercial or free, such as XME pocket studio, Max audio utility, LNX OS X, Auria etc. Advanced iOS music software could then be used in developing countries on cheap devices.
Apple could offer in its iDevices a serious handheld music workstation that is as cheap as possible but has the latest Apple A9 processor inside. It would be like an iPod Touch, but without any net or radio communication, no connection to the internet in any way, and perhaps not even a video codec, because only a graphics processor for the GUI is needed to run music software. If computer games are used on it, their video cutscenes would have to be replaced with motion-capture computer graphics, since the video codec is nonexistent. The music workstation would be like an iPod Touch or iPad made with the cheapest possible components, except for the main processor, which is the latest technology. If the main processor costs about 25 dollars, the complete device, a tablet- or phone-sized music workstation, would have a price of at most 50 dollars or so. That may be the only way iOS music software could be brought to devices cheaper than the iPhone. This device would be on sale in third world countries only, because that is its real marketplace. Cheap Android music workstations could be based on children's "educational tablet PCs" like Prasad (India) or Cheertone (China), but aimed at serious music making, not at children's toys. The Plugiator is one (expensive) example of a handheld music workstation; some, like the Monome Aleph, are extremely overpriced: the Blackfin processor inside it costs about 8 dollars, but the Monome Aleph costs 1400 dollars to buy. Old DSPs like the SHARC, Texas Instruments TMS320, ColdFire and others are over 20 years old; manufacturing 20-year-old versions of them (Chinese factories still manufacture 20-year-old designs) on a modern 16 or 28 nanometer process (instead of the 350 nanometers of 20 years ago) would hugely increase processor speed, and many more processors could be placed on a single silicon wafer, giving a very low production price despite 14, 16 or 22 nanometer manufacturing.
Coupling several dozen old, inefficient DSPs like the SHARC onto one chip and one silicon die, operating in parallel, the price would be very cheap but the performance perhaps acceptable compared to modern chips: signal processing for extremely low-priced devices aimed at developing countries. FPGAs (field-programmable gate arrays) are also approaching the 20-year mark in their sophisticated "real FPGA" families like Xilinx Virtex or Altera Cyclone. Because FPGAs by nature work in parallel, manufacturing 20-year-old versions of them with a modern narrow process and coupling several FPGAs onto one silicon die and one chip would make a cheap but usable signal-processing platform for ultra-low-price devices. Old ARM processor designs are also nearing the 20-year mark, which would make them licence-free. MPEG-7 has "virtual sound environment" capabilities, and that standard includes a large number of hardware and software synths in the DSP chip/device. Several (20-year-old) sound chip methods could be combined: Roland LA synthesis, Yamaha and Casio FM synthesis (NEC's now-discontinued sound chips for phones used FM circuits licensed from Yamaha, and NEC sound chips were used in Casio keyboards and others in the early 1980s), and Yamaha digital waveguide synthesis (the YMF724 - YMF764 series sound chips), all on a single chip, plus formant synthesis for a speech synthesizer and for music as well (Yamaha Vocaloid). One cheap hardware sound chip for a mobile phone could include them all, alongside a PCM wavetable synth. Different graphical interfaces on the phone's display could emulate old vintage 1980s Yamaha, Casio or Roland synths, so that one sound chip is used the way soft synths use one PC processor to represent several synths, increasing the sonic palette and usability.
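As a rough illustration of the FM synthesis method named above, here is a minimal two-operator FM voice in Python/NumPy (the ratio, modulation index and envelope are arbitrary example values, not any Yamaha or Casio chip preset):

```python
# Minimal 2-operator FM voice: one modulator sine bends the phase of a
# carrier sine, and a decaying envelope shapes both loudness and brightness.
import numpy as np

SR = 16000                        # low sample rate, as on a cheap chip
t = np.arange(SR) / SR            # one second of time samples

carrier_hz = 440.0
ratio = 2.0                       # modulator frequency = 2 x carrier
index = 3.0                       # modulation depth (timbral brightness)
env = np.exp(-3.0 * t)            # simple exponential decay envelope

modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
voice = env * np.sin(2 * np.pi * carrier_hz * t + index * env * modulator)

print(voice.shape)
```

Scaling the modulation index by the envelope makes the tone grow duller as it decays, a classic trick of the cheap FM chips: one multiply replaces an entire filter section, which is exactly why FM suited low-transistor-count hardware.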
MP3 player circuits can be used as a wavetable music synth, and MP3 circuits that partially combine MP3 and AAC circuitry for an economical transistor count in mobile phones have been designed, at least theoretically. MPEG-7 or another modern standard could include several hardware and software synths in every phone, tablet PC or laptop, but that hardware synth must be made in a cost-effective way. There are also new synthesis methods, as in the texts "Computationally efficient music synthesis - methods and sound design", "Smart sound generation for mobile phones", "Signal processing for sound synthesis: computer generated sounds and music" and "SenSynth: A mobile application for dynamic sensor mapping". Now that tablet PC and phone processors are manufactured on a 14 - 28 nanometer process, including the hardware synth in the DSP section of the chip that does video processing etc., rather than using separate chips for mobile phone hardware synths, brings costs down. Ten years ago Vimicro Vinno had a Linux-based (MontaVista Linux) sound synth platform for mobile phones, but that has been discontinued along with most other mobile phone sound chip makers; only Yamaha and Dream still remain. Using modal synthesis or another new method in hardware (alongside older methods such as wavetable and FM modulation) rather than in software is perhaps better if the CPU is a cheap version with limited processing capacity. Soft synths can also be used together with hardware synths in mobile phones, even the cheapest ones. A speech synth using hidden Markov model speech synthesis or similar should also be included in MPEG-7 or another standard, and speech synthesis methods like eVocaloid could be used in music making as a sound synthesis method too. Because a large number of people in the world cannot read or write, an icon-based symbol touch or visual interface is best in cheap devices.
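The wavetable idea mentioned above (one stored cycle replayed at arbitrary pitch) can be sketched as follows; the table size, sample rate and note parameters are arbitrary example values:

```python
# Sketch: wavetable oscillator with a phase accumulator and linear
# interpolation, the core of the cheap wavetable synth chips discussed above.
import numpy as np

SR = 16000
TABLE = np.sin(2 * np.pi * np.arange(256) / 256)   # one stored sine cycle

def wavetable(freq_hz, seconds):
    """Replay the stored cycle at freq_hz for the given duration."""
    n = int(SR * seconds)
    phase = (freq_hz * np.arange(n) / SR) % 1.0    # phase accumulator, 0..1
    pos = phase * len(TABLE)
    i = pos.astype(int)                            # integer table index
    frac = pos - i                                 # fractional part
    j = (i + 1) % len(TABLE)                       # wrap at table end
    return (1 - frac) * TABLE[i] + frac * TABLE[j] # linear interpolation

note = wavetable(440.0, 0.5)
print(note.shape)
```

Replacing the sine table with a sampled instrument cycle gives the classic PCM wavetable sound; the phase accumulator and interpolator stay identical, which is why one small hardware block can cover many instrument voices.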