
Variable frame rate for video compression and unified audio / video codec

Variable frame rate (VFR) for video is either 24, 25, 30, 48, 50, 60, or 72 frames per second. The flicker (refresh) rate is between 50 and 240 hertz. The actual refresh rate has little to do with the frame rate anymore, because video compression uses a few keyframes that contain the actual picture and then many interframes that contain only the changing motion. If one keyframe is reused over several seconds and not much is happening in the picture, then frame rates of 24 fps or more are unnecessary. Variable frame rate with an individual timecode for each frame already exists according to Wikipedia, and there is a text "Variable frame rate video for mobile devices" (Boroncini 2006). If frames are not changing much, a 24 fps frame rate can be dropped down to 6-8 frames per second, which are rates used in animated films. The refresh (flicker) rate, however, stays at 50 to 240 hertz. Also, the individual macroblocks of video compression, like the 64 x 64 blocks used in Google's codecs and H.265, could each have an individual frame rate. They already have something like that, since every block is coded differently, but the frame rate of the complete frame stays at 24 fps or faster. When 24 fps is unnecessary, the frame rate could drop to a minimum of 6 fps to save processor workload. If the flicker rate stays the same and the video compression reuses the same keyframe again and again, no 24 fps rate is needed. Individual frame rates for each macroblock probably save processor workload as well. Also, if a film is shot at 72 fps, then when nothing is happening in the picture 72 fps can be used in playback too, because not much information is changing from frame to frame. When something is moving in the picture, and that moving part is encoded as a block in video compression, that moving block can use only a 24 fps rate to save data rate while the static background stays at 72 fps.
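One way to read the per-macroblock idea is a policy that maps a block's measured motion to a frame rate. The sketch below is only illustrative: the motion metric (mean absolute pixel difference), the thresholds, and the allowed rates are all assumptions of mine, not any real codec's values, and the post itself discusses both "more motion, higher rate" and "more motion, lower rate to save data" policies; this sketch shows the former.

```python
# Toy per-macroblock frame rate selection (illustrative only).
# Assumption: motion is measured as the mean absolute pixel difference
# between the same block in two consecutive frames (range 0.0 .. 255.0).

ALLOWED_RATES = [6, 12, 24, 72]  # hypothetical per-block rates, all dividing 72

def block_motion(block_a, block_b):
    """Mean absolute difference between two equally sized pixel blocks."""
    n = len(block_a)
    return sum(abs(a - b) for a, b in zip(block_a, block_b)) / n

def pick_block_rate(motion, thresholds=(1.0, 5.0, 20.0)):
    """Map a motion score to a frame rate: a nearly static block can just
    reuse its keyframe at a low rate; a fast-moving block gets updates often."""
    lo, mid, hi = thresholds
    if motion < lo:
        return ALLOWED_RATES[0]   # nearly static: reuse the keyframe
    if motion < mid:
        return ALLOWED_RATES[1]
    if motion < hi:
        return ALLOWED_RATES[2]
    return ALLOWED_RATES[3]       # fast motion: full 72 fps

# Example: a static block and a block where every pixel changed by 30 levels.
static_rate = pick_block_rate(block_motion([10] * 64, [10] * 64))   # -> 6
moving_rate = pick_block_rate(block_motion([10] * 64, [40] * 64))   # -> 72
```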
Changing between 24, 48, and 72 fps inside each frame saves data rate. 72 fps can be used to increase realistic motion in scenes with a very large background where there is very little movement: that little movement can be coded efficiently without altering the static keyframe, so only a small data rate is needed for the 72 fps picture. If something is happening fast across the whole picture, the frame rate drops to 24 fps to save data rate; or if a very fast-moving car is in the picture, then that car sits in a block that uses 48 fps, for example. The other option is to drop the frame rate down to 6 fps when nothing is happening in the picture or macroblock, because 24 fps is too much if the same keyframe is used again and again in each frame. This saves processor workload and does not affect picture quality at all if the flicker/refresh rate is always 50-240 Hz no matter what the frame rate is. The flicker rate always stays the same although the frame rate changes from 72 fps down to 6 fps. Individual frame rates for macroblocks and moving quantized blocks in the picture do not affect picture quality, because compression quality is the main factor, and video compression already uses variable "frame rates" on a block-to-block basis. Dropping the frame rate down to 6 fps when nothing is happening anyway just makes the video processor's workload easier, and increasing the frame rate to 72 or 48 fps in picture blocks where little is happening increases picture quality without affecting the data rate much, if the original source material is filmed at 72 fps. Or simply use the same frame rate that the video compression itself uses, which some video codecs already support (according to Wikipedia: keyframe plus interframe structure, each frame with an individual timecode). When analogue video was surpassed by digital video compression, frame rate lost its significance; video compression with keyframes and interframes has very little to do with the actual 24-72 fps frame rate.
So if individual macroblocks each have an individual frame rate between 6 and 72 fps (72 fps divides evenly into 36, 24, 18, 12, 9, 8, and 6 fps, for example), the actual savings in data storage are not large, because video compression uses its own "frame rates" anyway. But perhaps the processor workload is lower when an almost-static picture runs at 6 fps instead of 24 fps, or picture quality is better when an almost-static picture suddenly switches from 24 fps to 72 fps. That requires the original source material to be filmed at 72 fps. Because not much changes in the picture, the storage requirements of 24 fps and 72 fps are almost the same (although small movements in small macroblocks require more storage at 72 fps than at 24 fps, when most of the picture is static the increase is not large); the only thing that changes very much is the frame rate, which triples. When lots of things happen in the picture, the frame rate drops to the normal 24 fps to save storage. And vice versa: if the frame rate drops to 6 fps when nothing in particular is happening in the picture or macroblock, the viewer does not even notice the difference, because video compression uses its own frame rates anyway. The only thing that changes is the processor workload (perhaps). The flicker/refresh rate is always about 240 hertz (or 216, 252, or 288 hertz if 72 fps is the frame rate standard); when the frame rate drops, the same picture is simply shown again and again, and video compression does a similar thing. Also, if the video and audio codec should be as efficient as possible, video and audio should use the same unified compression and codec structure. Sound processing (music sound synthesis) on the GPU instead of the CPU is an example of sound processing using circuits originally aimed at visual production. There are sound effect plugins and soft synths that use GPU video processing capabilities for sound production.
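The divisor claim can be checked with a few lines of arithmetic: on a fixed refresh rate, a lower frame rate just means each picture is flashed more times, and only even divisors of the master rate avoid uneven repetition. The 216 Hz figure is one of the 72 fps refresh multiples mentioned above; the snippet is pure arithmetic, not codec code.

```python
# Which frame rates divide evenly into a 72 fps master rate, and how many
# times each picture is repeated on a 216 Hz refresh display (216 = 3 * 72).

MASTER_FPS = 72
REFRESH_HZ = 216

even_divisors = [f for f in range(1, MASTER_FPS + 1) if MASTER_FPS % f == 0]
# -> [1, 2, 3, 4, 6, 8, 9, 12, 18, 24, 36, 72]  (note: 48 is not among them)

repeats = {fps: REFRESH_HZ // fps for fps in (6, 12, 24, 72)}
# at 6 fps each picture is flashed 36 times; at 72 fps, 3 times
```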
If that is possible, then it is possible to build a cheap and simple codec for mobile phones that uses the video circuit or GPU for the audio codec as well, using roughly similar compression methods. Now one circuit can serve as video codec/compression, graphics processor, audio codec/compression, and speech codec/compression. So instead of separate speech, audio, and video/graphics codecs there is just one codec and one circuit network instead of three or four codecs and three VLSI structures. If complicated sound production on the GPU is possible, then a sample-based audio codec for speech and music compression should be possible also. Sample-based sound systems should easily work on a GPU, like an additive synthesis plus sampled sounds hybrid for natural sound reproduction. The same goes for digital waveguide mesh sound reproduction, physical modelling synthesis, modal synthesis, beam tracing modal synthesis, wavetables together with samples, spectral synthesis, finite difference synthesis, XOR synthesis, force-based synthesis, and other numerically processed sound methods. So sound compression (an audio codec for compressing music and speech) using those aforementioned methods, and using video codec circuits for sound compression and reproduction with great efficiency, is possible. Sound samples are compressed and then reproduced using those methods, and also using the video codec's own compression methods that are otherwise used for video. Creating a cheap unified video/audio codec, for example in roll-printed electronics, using a 1-bit signal path with dither, logarithmic/complex number representation, "noise shaping", and additive quantization (AQ) is perhaps possible also. See "Revisiting additive quantization", "Solving multi-codebook quantization in the GPU" (Martinez), and "Additive quantization for extreme vector compression".
Attempts to make video circuits simpler include "Unified binarization of CABAC/CAVLC entropy coding" (2013) and "A unified 4/8/16/32-point integer IDCT architecture for multiple video standards". For audio, implementations where for example MP3 and AAC partially share circuits have been made, but a unified audio and video codec structure where both audio and video use the same circuits has not. Just one codec for speech, music, and visual information (still photos, videos, and computer graphics) for super-cheap printed electronics, perhaps using a 1-bit signal path; quality is not important, but simplicity and cheapness are. Making one codec that serves as graphics processor (GPU), video codec, still photo codec, and speech and music codec, using roughly the same compression methods and roughly the same circuits: although total unification is impossible, sharing resources and compression methods leads to a one-codec solution that is better than the separate and myriad audio, graphics, and video codecs in a cheap phone with a target price of a few dollars. This unified structure can be used in (more expensive) silicon chips also, and the audio signal path can double as a music generator (music synth), even on cheap processors. For music production: since the late 1980s there has been a music description language, Patch Work (PWGL), nowadays called Kronos. Although through its 30 years of existence it has gained virtually no publicity and is perhaps the least used of all music description languages, and only hardcore pro "serious" electroacoustic/computer music composers use it, it has advantages compared to others. Those composers have used it since the early 1990s, or became world famous after they began using PWGL. Among them are Kaija Saariaho, Paavo Heininen, Magnus Lindberg (who used one compositional PWGL tool already in the 1980s, even before Patch Work was released), Örjan Sandred, Tristan Murail, Jean-Baptiste Barrière, Marc-André Dalbavie, and Brian Ferneyhough.
These composers not only used it, some of them also participated in the PWGL development process (Lindberg from the early stages onwards, prof. Heininen supervised the PWGL project, Sandred, etc.). Although MAX/MSP and Pure Data are the standards today, and PWGL/Kronos is written in Lisp, there is a bridge between PWGL and MAX via the PWGL2BACH port. Saariaho used the FOMUS composition tool and the CHANT choir synthesizer; the latter was originally known as PW-CHANT, made for Patch Work, but is now in Csound. Other composers use it too, like Sami Kleimola, who was one of the prime developers of PWGL, and Perttu Haapanen, who uses PWGL tools exclusively for composition. A 2015 blog article by Hasan Hujairi, "Parametric composition as a possible approach for non-western art music?", presents PWGL/Kronos as a tool for composers in developing countries to make sophisticated computer-based music. PWGL/Kronos is free and, as far as I understand, open source, and it is a really heavy-duty, sophisticated compositional tool. Although work on PWGL has stopped, that is because a new interface design called Kronos is being made, and Kronos development is very active. Is it possible to make, for example, Kronos suitable for mobile Android phones (Kronos is for Windows and Apple PCs only at this moment)? It would be quite a heavyweight solution for mobile phone music making if the aim is just to make simple techno tracks etc. with simple musical backing. But if Kronos, which is free, had a mobile phone or tablet PC version, it would be a near-revolutionary platform for music making, not just for those few pro "serious" composers who use it nowadays. If Kronos worked as a simplified version on a low-quality Android phone, it would be perhaps the most complex music-making app ever for cheap phones. If the Mediatek X30 (with a PowerVR 7XT virtual-reality-capable 4-core GPU) comes to Chinese phones in 2017, and the cheapest X20 phones are about 120 dollars today, effective computing is available at a reasonably low price.
But Android is bad for soft synths because of audio latency. Better would be a mobile Linux distro made especially for sound and music making that can also run Android apps if needed, or a dual Linux/Android OS. Or use Windows Phone or Symbian as a soft synth platform. Even small Firefox OS or Chrome OS Linux kernels can be used as an offline soft synth platform without an internet connection; there are some minimal few-kilobyte music programs, the 4K demoscene makes more of them, and back when phones used Symbian, (ringtone) soft synths had a smallest hardware memory requirement of 17 kilobytes. The Mali G71 GPU has about a teraflop of computing power in its 32-core form; using the GPU in mobile phones as a soft synth platform together with the CPU makes very complicated soft synths possible, even in cheap phones. A GPU uses floating point values, but 10- or 11-bit OpenGL FP numbers could be used for processing audio, and then at the final output stage (when the sound is mono or stereo in the final output to the listener) an additional header could be added, like an integer audio processor does, making for example a 16-bit integer into 40 bits with an additional header. Although sound quality is not improved all the way to 24 bits, some improvement is achieved. So a 10- or 11-bit FP number could have an additional header, and the end result would be 32-bit FP with about 16-bit FP quality, for mobile phone sound processing using the GPU. A 1-bit signal path (printed electronics) would also be improved if an additional header were added at the final output stage. Actually the ARM Cortex-A17 would be the best cost/performance platform for phones, but it is discontinued. Cheap 64-bit computers like the Pine64 (15 dollars), the Unuiga S905 (25 dollars), and CHIP (9 dollars) exist. Adding a cheap 64-bit processor SoC to a cheap Chinese 5-6 dollar phone platform would make a really cheap and powerful few-dollar phone or tablet PC possible. Who will be the first manufacturer to do it? Whether some Chinese Android phone maker is already making a 64-bit ARM processor phone in the 10-15 dollar price range, I don't know.
Unuiga has the VR8, which is virtual glasses integrated with a processor and wifi: now the VR glasses become the "phone" and "computer", and no additional hardware is needed (except perhaps a keyboard, touchscreen, or some other control method, such as gesture control with a camera). Unuiga also has Remix OS, an Android aimed at desktop computers. When Microsoft made Windows Phone it hoped mobile phones would join Windows; now it seems that Android is coming to the desktop PC market. Google Chrome OS and Firefox OS have a small Linux kernel; otherwise they are thin client systems. But even a small Linux kernel can be used offline, so a Linux distro that works as a thin-client, web-driven Chrome OS or Firefox OS, but in non-thin-client mode works as a small and simple Linux mobile platform running simple programs without a large graphical GUI, could be usable and fast. The smallest minimal Linux needs 1 MB of RAM and ROM, and 192 kilobytes is the smallest Linux distro ever, so a Chrome OS or Firefox OS that works in a simplified manner in non-thin-client mode should be possible also. In developing countries there is no high-speed data transfer at a very cheap price, although Firefox OS has been marketed for developing countries. But the phone line price is much higher than the phone itself, so a simple low-data-rate phone system that works like a normal phone but can also be used in a data-intensive thin-client mode if needed would be a solution. But a four-core 32-bit ARM SoC was at a 4 dollar price in 2014 and a 64-bit one at 5 dollars in 2015, so in 2017 an eight-core processor SoC is probably 6 dollars. So making cheap phones that use Firefox OS or Chrome OS is a bit futile, because "real" high-computing processors are already cheap.
If not, then the phone or tablet PC is running some extensive virtual reality environment through simple video glasses, and that environment must use both the cloud and the phone's processor extensively. So a simple Linux kernel, like that of Firefox OS or Chrome OS, runs the phone's basic processes as a lightweight operating system, and the rest of the 8-core processor's computing capacity is used to run VR processes together with browser-based cloud computing. An octa-core phone SoC is probably so cheap that it can be paid for with the same advertising money that funds free internet connections in developing countries, so an octa-core phone (otherwise built from the cheapest available components, with roll-printed electronics wherever possible) could be given away for free in developing countries together with an internet connection, although that free connection would have data rate restrictions etc. compared to a paid connection. And X-tra PC is selling a PC stick at 25 dollars cheapest, which uses an Intel Atom, perhaps because it is for PCs rather than phones, and that PC stick includes memory and other hardware, all at a 25 dollar price. So cheap processor technology is possible. The text "Sandwich keyboards: fast ten-finger typing on a mobile device with adaptive touch sensing on the back side" (2013) describes a non-touchscreen phone key solution. TwinVQ is a sound compression method; perhaps a "TwinAQ" using Additive Quantization (AQ, by Martinez), as in texts like "Extreme vector compression", is possible, together perhaps with a dual-slope ADC or another converter. Sound synthesis in phones is usually done on the CPU, but if a phone uses, together with the CPU, the graphics processor (GPU), the MP3 player circuit (as a sound synth), and the video codec as a "raytracing" or sample-based musical synth, even a cheap phone can have an extensive sound synth palette when all of these are used simultaneously.
And use computationally efficient sound synthesis methods like sample-based synthesis, perceptually sparse synthesis, numerical sound synthesis, etc.; see "Sparse atomic decomposition methods of audio signals" (2017). If 4-bit ADPCM is enough for almost 16-bit quality, then for example 2-bit DDPCM (Dynamic DPCM, from a netpage) is enough if the sound covers only 4 kHz, where 4-bit ADPCM covers 16 kHz. 4 bits gives 16 values and 2 bits gives 4, so when the audio frequency range is smaller, the bit depth can be reduced in the same proportion. A 4 kHz sound can be expanded with high-frequency replication to 8 kHz and then to 15 kHz using an aural exciter working at 8-15 kHz. So simple sound synthesis that uses 2-bit DDPCM and a 4 kHz range that is then expanded to 15 kHz is possible. This simple 2-bit DDPCM, or 1 bit with dither and/or QMF, can use XOR synthesis or other simple numerical sound synthesis methods. Another 2-bit method is "A 2-bit adaptive delta modulation system with improved performance" (Proselantis 2006). Continuously variable slope delta modulation (CVSD) is a simple 1-bit method; perhaps it can be applied to simple sound synthesis also, with or without dither or QMF filters. If dither or a quadrature mirror filter bank (QMF) is used, even 1-bit sound synthesis with good quality is possible, and the dither noise can be audible in electronic sound and become part of the sound. A logarithmic scale is preferred for limited numerical values like 1-bit sound, but a logarithmic scale is not an integer scale. So the Renard number system, or its modern version, the E-series numbers, which use integers, have been proposed instead: integer-based and therefore simple, not logarithmic, but suited to limited numerical values with high dynamic range. See "Nonuniform sampling delta modulation - practical design studies" (Golanski), "Recent advances in pulse width modulation techniques" (Peddapelli 2014), and "Variable frequency pulse width modulation" (Stork, Hammerbauer 2015).
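To make the 1-bit signal path concrete, here is a minimal delta modulator: a simplified, fixed-step relative of the CVSD coder mentioned above (real CVSD adapts its step size to the slope; the fixed step here is an assumption for brevity). Each sample is coded as a single bit saying whether the signal went up or down.

```python
# Minimal 1-bit delta modulation sketch (fixed step; real CVSD is adaptive).

def delta_encode(samples, step=0.1):
    """Encode each sample as 1 bit: is the input above the running estimate?"""
    bits, estimate = [], 0.0
    for s in samples:
        bit = 1 if s > estimate else 0
        bits.append(bit)
        estimate += step if bit else -step
    return bits

def delta_decode(bits, step=0.1):
    """Rebuild the staircase approximation from the bit stream."""
    out, estimate = [], 0.0
    for bit in bits:
        estimate += step if bit else -step
        out.append(estimate)
    return out

# A slow ramp is tracked closely; a fast transient would overload the fixed
# step, which is exactly the problem CVSD's adaptive slope addresses.
ramp = [i * 0.05 for i in range(20)]
bits = delta_encode(ramp)
approx = delta_decode(bits)
```

One bit per sample at a 8 kHz sample rate (4 kHz bandwidth) is only 8 kbit/s of raw data, which is the kind of arithmetic behind the low-bit-depth claims above.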
Undersampling (Xampling) below the Nyquist rate is possible, even 50 times below the Nyquist rate, recovering the sample for sound synthesis and perceptually sparse sound reproduction; the recovered sample is almost complete although it was sampled at only 1/50th of the sample rate. There is US patent 20030142691A1 by Hartmann (2003), "Modulation by multiple pulse per group keying and the same", which uses some sort of many-pulses-together modulation. Displays with or without touchscreens can be made cheaply with printed electronics; then a cheap 7-inch tablet PC can be made and given away with a free (advertisement-paid, commercially sponsored, cost-free to the end user) internet connection, if the only difference between a 7-inch tablet PC and a 4.5-inch display phone is the screen size. The free connection can be restricted to internet only, without a normal voice call telephone connection (that being reserved for paying customers), and that internet connection can come with data restrictions, public domain material only, or without visual or even audio content, so a text-browser-style solution. The main point is that an internet connection, through a commercially sponsored (advertisement-paid) system, should be free in the poorest countries of the world, and a free phone/tablet PC made with cheap components (roll-printed electronics where possible, cheap silicon chips otherwise) can be given to the internet user for free also, because the yearly internet line costs are much higher than a few-dollar cheap tablet PC or phone. No matter how restricted the internet connection is, the main point is that it is cost-free to the end user in the poorest countries of the world. For paying customers extra features can be added, if the customer has money to pay for them. But for those who have nothing, this commercially sponsored limited internet connection with a connecting device (a phone or something else) would be most important.
Virtual reality, in a cloud-based version (googling "cloud based virtual reality" brings many results), is coming, so perhaps even free net connections in poor countries should have some sort of virtual reality environment. Because internet devices are cheap and simple on a cost-free net, perhaps a cloud-based VR environment or augmented reality environment can be used. Perhaps that VR or augmented reality needs so many resources that it takes almost all of a simple phone's or tablet PC's computing power. Cloud-based OSes like Chrome OS, Chromium OS, Firefox OS, xPUD Linux, and CoreOS use a small Linux kernel, and everything else comes from the cloud. There are also small Linux distros optimised for small kernel size. One solution for cheap VR/augmented reality is an operating system with a small (Linux) kernel that can run suitable programs and apps offline, but when a net connection is on, can also run an extensive cloud-based virtual or augmented reality environment, the way Chrome OS or Firefox OS run net apps. Because almost all of the phone's capacity is used to run net apps or the VR environment, a small (Linux) kernel that does not use much computing power while virtual reality is on is needed. I think Chrome OS and Firefox OS work in the same way, but they don't work offline. However, a small, few-dozen-megabyte Linux distro can run Linux programs. So an operating system that can be used offline like any small Linux distro, but where, when the net connection is on, a large virtual reality environment uses almost all of the phone's resources, would work: like Firefox OS or Chrome OS, but usable offline too, although with its simple Linux kernel the programs used offline cannot be as complicated as those used online.
Even Android was first planned to be only a 32 MB RAM + 32 MB ROM system, like Android Brillo, and early Windows Phone had a 128 MB requirement, etc. So a simple OS that serves as a VR platform online, the way Chrome OS is a web platform online, but retains some functionality offline so that programs and apps work without a net connection, is possible, but those apps and programs must be simple enough for the simple offline kernel. Symbian was designed from the start as a minimal operating system, but it has fallen out of favour and Linux-based systems like Android rule. Operating systems like Remix OS, Papyros, xPUD Linux, etc., exist.

In the previous post I wrote that if nothing much happens in the background of a video frame it can be coded at 72 fps without making changes to the frame. Actually those small changes must be coded at 72 fps so the frame does change from frame to frame, but the changes are small, so the data rate is small also; apart from the small moving blocks at 72 fps, the rest of the large frame can be at 24 fps or even slower, if nothing except small parts is moving in the background. Nowadays video codecs use motion estimation, for example in sports TV transmissions where movements are fast. Old silent films used 16-20 frames per second, not the 24-frame standard. Can motion estimation video codec algorithms be used to make 24 fps video from old 16-20 fps film? Although this "virtual 24 fps" does not have the same quality as "real" 24 fps, an old silent film boosted with motion estimation up to a 24 fps frame rate will probably be an improvement. Also, old silent films were shot at a slow frame rate and then played in theaters at a faster rate, making movements unnaturally fast. "Virtual 24 fps" would perhaps make movements more natural, because now they would run at real speed, not sped up by the frame rate. Also, nearly all films use a 24 fps frame rate; only in the last couple of years have faster rates such as 48 fps been in experimental use. If old (and new) films were boosted using motion estimation from 24 fps to a "virtual" 48 fps rate, perhaps that would increase realism in moving pictures. Television sets already have programmable motion estimation algorithms; sophisticated motion estimation can bring a 24 fps picture near to "real" 48 fps quality, so old films etc. could be encoded on Blu-ray disc or other digital media at a 48 fps frame rate, although the old film was originally shot at 24 fps.
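A real motion-estimation interpolator is far beyond a few lines, but the mechanics of inserting "virtual" frames can be sketched. In this toy version the in-between frame is a plain pixel average of its neighbours, which is only the crudest stand-in for the motion-compensated interpolation that TV sets actually use; frames are flat lists of pixel values for simplicity.

```python
# Crude frame-rate doubling: insert one synthetic frame between each pair.
# Real "virtual frame" generation uses motion estimation; averaging two
# frames is only a placeholder showing where the extra frames go.

def blend(frame_a, frame_b):
    """Average two frames pixel by pixel (frames are flat pixel lists)."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

def double_frame_rate(frames):
    """24 fps in -> ~48 fps out: original frames plus blended in-betweens."""
    out = []
    for cur, nxt in zip(frames, frames[1:]):
        out.append(cur)
        out.append(blend(cur, nxt))
    out.append(frames[-1])
    return out

clip = [[0, 0], [10, 10], [20, 20]]   # 3 frames of a 2-pixel "video"
doubled = double_frame_rate(clip)     # 5 frames; in-betweens at 5 and 15
```

Converting 16-20 fps silent film to "virtual 24 fps" works on the same principle, except the synthetic frames must be distributed unevenly (a 20-to-24 conversion adds one new frame per five originals).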
In the video post-production stage, when an old film is remastered for Blu-ray or DVD, the frame rate can also be expanded from 24 fps to 48 fps using motion estimation techniques while picture quality is improved by remastering. Although "real" 48 fps quality is not achieved, the blurring of moving objects can be made significantly less noticeable if sophisticated motion estimation is used at the post-production stage when old films are transferred and remastered to a video medium. Old 24 fps films could then be sold as 48 fps Blu-ray videos. Also, 3D conversion from 2D material can be done in a TV set nowadays. But 3D quality is better if the conversion is done at the video post-production stage when the video is mastered to DVD or Blu-ray. So converting old 2D films to 3D is easy and cheap, and old 2D films could be released in 3D with the conversion done in post-production, not in the TV set while the 2D video is played. Many Hollywood films were transferred to 3D using 2D-filmed material and 3D conversion only at the post-production stage. So perhaps old 2D films can be transferred to 3D form while also increasing the frame rate to 48 frames/sec. Some old films have been transferred to the IMAX format as well, like The Wizard of Oz. Old films had a 1:1.37 picture ratio, so they are easier to transfer to the IMAX format than modern widescreen movies, because the IMAX picture is narrow (but huge). Perhaps an IMAX picture looks better than regular widescreen movies when watched using video glasses. If video glasses/virtual glasses are the video medium of the future, then transferring narrow-format movies to 3D, increasing the frame rate from 24 fps to 48 fps ("virtual 48 fps"), and transferring the optical presentation to the IMAX wide-angle but narrow picture format is perhaps the solution if people watch TV and movies using video glasses. Old black-and-white films can also be computer-coloured if needed. I don't know how to make a 48 fps picture from 24 fps material.
Several (24) extra frames per second that contain movement if something is moving in the picture? If that is difficult and the picture is remastered from film using a 4K or 5K master, perhaps the quality of those additional 24 frames per second can be lowered to 2K or 1K, although the rest of the material (the other 24 frames per second, containing the original 24 fps film information) stays at 4K or 5K quality (from the 35 mm or 65 mm film negative). Now the quality of the video alternates between 4K/5K and 1K/2K from frame to frame, but those extra frames are for fast movement only, so perhaps the viewer does not notice the quality loss so much. If TV and movies are watched using video glasses, TV film transmissions using 2D films converted to 3D and to 48 fps IMAX are perhaps the future of television. Over 55 frames/sec is needed so that the human eye does not notice blurring in a motion picture, although about 45 frames/sec is enough for realistic movement. So 56 frames/sec would be ideal, fast enough that the human eye does not notice any blurring in movements. That 56 fps could be halved to 28 fps for slower transmission. But only 24/48 fps or 30/60 fps are available standards. 60 fps can be halved to 30 fps, but it does not divide easily to the 24 fps rate (24 is 40% of 60), or is it possible? 72 fps is in fact "too fast" for the human eye, although the Showscan format considered it best; but if any rate over 55 fps is so fast that the human eye does not notice a difference, 72 fps is also easily divided down to slower frame rates. The Todd-AO film format used 30 frames/sec, so making a 45 fps video stream from Todd-AO films is enough for realistic movement, if about 45 fps is enough. So a "virtual 45 fps" conversion from 30 fps Todd-AO or 30 fps TV material (old TV video at 30 fps or HDTV at 30 fps) needs only a 1.5x increase in frame rate for movements to look realistic. Only 15 additional "virtual" frames per second are needed, not 24 as in conversion from 24 fps material.
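The conversion arithmetic above is easy to check mechanically: how many frames per second must be synthesised for each conversion, and what the exact frame ratio is (which shows why 60 fps halves cleanly to 30 but not to 24).

```python
# Frame-rate conversion arithmetic for the rates discussed above.
from fractions import Fraction

def extra_frames_per_second(src_fps, dst_fps):
    """Frames per second that must be synthesised (negative means dropped)."""
    return dst_fps - src_fps

def cadence(src_fps, dst_fps):
    """Output frames per input frame, as an exact ratio."""
    return Fraction(dst_fps, src_fps)

virtual_48 = extra_frames_per_second(24, 48)   # 24 new frames/sec
virtual_45 = extra_frames_per_second(30, 45)   # only 15 for Todd-AO / 30 fps TV
down_ratio = cadence(60, 24)                   # 2/5: 24 fps is 40% of 60 fps,
                                               # so no simple frame halving works
```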
Speeding up the old TV (NTSC or PAL) frame rate to 45 fps (NTSC) or 50 fps (PAL) would increase realism; tests have shown that an increased frame rate can raise the sense of realism even more than increasing the pixel count of the picture, or even 3D. Adding "virtual" frames to old TV-standard video material would increase its quality, and it is simpler than for digital cinema due to the lower pixel count. If old films and TV video material are watched through video glasses, which is perhaps the future of television broadcasts, then converting old films to 3D, and increasing the frame rate with "virtual frames" that the original source material does not have, in those places that have movement in the picture, would make old moving picture material more suitable for virtual glasses. Increasing the picture angle, as in converting old films to the IMAX format, also improves the watching experience. Digital cinematography uses intra-frame compression, so it records 24 complete still pictures per second. Display of digital video uses inter-frame compression: on television, DVD, or Blu-ray disc, at a 24 fps frame rate only a few frames carry a complete (intra-coded) picture, and the rest are in-between frames that contain only the movement, not the whole picture. An economical way to make a 72 fps moving picture is to shoot 72 fps with 24 frames using intra-frame compression and 48 frames with inter-frame compression (intra-frame meaning "real pictures" in a few keyframes, inter-frame meaning only the moving parts of the picture are coded in the other frames); now the 72 fps picture has only 24 "real" pictures and 48 others that contain mainly movement (plus a few keyframes within those 48 frames that are "real" pictures also). Now a low-data-rate 72 fps picture can be filmed, even at 4K or 8K or even higher resolution. The Red Epic 617 camera has 28K resolution. Displaying a 72 fps picture does not mean that the data rate must be 3 times higher than for a 24 fps picture.
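One way to read the 24-intra/48-inter split is a repeating I-P-P cadence: every third frame is a complete picture, and the two frames after it carry only motion. The sketch below just counts frame types; real encoders usually space keyframes much more sparsely (a group of pictures can span seconds), so this is only an illustration of the arithmetic, not an encoder.

```python
# Frame-type pattern for the 72 fps scheme described above: with every third
# frame intra-coded ("I", a complete picture) and the rest inter-coded
# ("P", motion only), one second holds 24 intra and 48 inter frames.

def frame_types(fps=72, intra_every=3):
    """Return one second's frame-type string, e.g. 'IPPIPP...'."""
    return "".join("I" if i % intra_every == 0 else "P" for i in range(fps))

second = frame_types()
intra_count = second.count("I")   # 24 "real" pictures per second
inter_count = second.count("P")   # 48 motion-only frames per second
```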
Video compression uses motion estimation and inter-frame compression, so 72 fps needs more data than 24 fps, but not three times as much when television, internet broadcast, or DVD/Blu-ray is used. When only movement is coded and modern video codecs reach compression ratios of 1000:1 or more, movements are coded very efficiently, and the frame rate can be 72 fps while the bit rate stays quite low. So increasing the frame rate from the industry standard 24 fps to 72 fps or 60 fps does not necessarily mean that the bit rate must increase 3 or 2.5 times. Moreover, a higher frame rate is an even better way to increase the sense of realism in moving pictures than an increased pixel count or 3D. Combining 3D and a fast frame rate (like the "Hobbit" film at 48 fps) makes a very realistic presentation of a moving picture. The "soap opera effect", caused by motion smoothing, makes movements unnaturally sharp in 24 fps material when displayed on a TV. If the original transmission had instructions encoded within the video stream saying which parts should use motion smoothing and which must not, motion smoothing could be switched off automatically when it is not needed, and the soap opera effect would be gone from 24 fps material. Or post-production could increase the frame rate from 24 fps to 30 fps by adding 6 "virtual" frames per second, and the TV transmission would use this 30 fps rate; if motion smoothing works perfectly at 30 fps, it would not need to be switched off. But a frame rate increase from 24 fps to 30 fps increases the bit rate as well.