Human vision has a threshold of roughly 56 frames per second: at 56 fps movement looks perfectly natural, and it is no longer possible to tell whether one is watching a video file or seeing the scene with one's own eyes. Although about 22 fps is enough that the eye does not notice individual pictures and the image appears to move, 56 fps or faster is needed for a perfect illusion of reality. Yet video, even Hollywood film, is shot at 24, 25 or 30 frames per second. The Hobbit was shot at 48 fps, but since then fast frame rates seem to have been forgotten, apart from Douglas Trumbull, who created the "Showscan" format already in the 1980s.

A digital film camera could be sped up to 60 or 72 fps in the following way. Instead of the normal method of filming, where every frame is a raw uncompressed file or motion JPEG (the camera effectively taking a digital photograph 24-30 times per second), the camera takes digital photos at 24 or 30 fps, and in between it records video-compressed inter frames. So 24 fps becomes 24 reference frames plus 48 inter frames, and 30 fps becomes 30 reference frames plus 30 inter frames. The result is 72 fps (24 + 48) and 60 fps (30 + 30) film. Because those inter frames use video compression, and video compression in its most efficient form makes video files almost 1000 times smaller, the actual increase in data storage is only about 48/1000ths or 30/1000ths of extra space. The inter frames of a 4K film can also be only 2K resolution, not coded as full-resolution raw files or motion JPEG: filmed with a 4K camera but downscaled to 2K, which, when blown back up to 4K on playback, is somewhat blurry compared to the 4K reference frames. MDCT-style compression in any case makes a picture (theoretically) slightly blurry when upscaled from 2K to 4K, but not so noticeably that the viewer would see it. So the camera takes 24 4K pictures and 48 2K pictures every second, and during playback the 2K inter frames are upscaled back to 4K.
Because a 2K picture needs about 1/4 of the data of a 4K picture, the storage required is about 1.5 times that of a plain 24 fps picture, but the result is a 72 fps picture with 48 good-quality 2K inter frames. Those lower-quality inter frames are interleaved with the reference frames the way video compression already interleaves inter frames and reference frames, so the rate need not be a fixed 24 + 48: the bitrate can be variable, and if the picture contains a lot of movement there can be more than 24 high-quality reference frames per second. A 30 fps digital film can likewise become 60 fps (30 + 30). If video compression is used in the inter frames, the increase in storage is very small, but the played-back picture gives a perfect illusion of reality. So instead of filming only at 24, 25 or 30 fps, or experiments like Douglas Trumbull's filming at 60-120 fps at full quality, the same reference frame + inter frame coding that video compression uses can be applied to fast-frame-rate filming, offering a perfect illusion of reality for very little additional storage. This method sits between normal 24-30 fps and full-quality 60-120 fps filming, but it is much more economical than full-quality 48, 60, 72 or 120 fps, and the viewer probably cannot tell a 60 fps reference + inter frame picture from a 60 fps full-quality picture, while the storage savings are significant. 72 fps is possible too, as are 120 fps and other speeds, simply by increasing the number of inter frames relative to reference frames: a 120 fps picture with only 24 full-quality reference frames is possible (24 full quality + 96 inter frames), and the additional storage needed is still small if video compression is used in the inter frames.

The above concerns digital cinema in theatres. Home video like HD Blu-ray can have a perfect illusion of reality as well: a Blu-ray that normally runs at 30 fps keeps the same number of reference frames as usual (2-4 per second?) but the number of inter frames is increased so that the frame rate becomes 60 fps.
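The storage arithmetic above can be sanity-checked with a quick calculation. This is a minimal sketch under illustrative assumptions: uncompressed 4K frames of 4096 × 2160 pixels at 3 bytes per pixel, 2K inter frames at a quarter of that size, and a best-case 1000:1 ratio for the video-compressed inter frames; real cameras and codecs will differ.

```python
# Storage estimate for the hybrid 24 fps reference + 48 fps inter-frame scheme.
# All sizes and ratios are illustrative assumptions, not measured values.

BYTES_4K = 4096 * 2160 * 3          # one uncompressed 4K frame, 3 bytes/pixel
BYTES_2K = BYTES_4K / 4             # a 2K frame holds ~1/4 the pixels of 4K
COMPRESSION = 1000                  # assumed best-case inter-frame compression

plain_24fps = 24 * BYTES_4K                        # today's 24 fps raw filming
hybrid_uncompressed = 24 * BYTES_4K + 48 * BYTES_2K            # 2K inters, raw
hybrid_compressed = 24 * BYTES_4K + 48 * BYTES_2K / COMPRESSION

print(f"plain 24 fps            : {plain_24fps / 1e6:.1f} MB/s")
print(f"72 fps, raw 2K inters   : {hybrid_uncompressed / plain_24fps:.1f}x")
print(f"72 fps, compressed      : +{hybrid_compressed / plain_24fps - 1:.3%}")
```

The two outcomes match the figures in the text: raw 2K inter frames give the 1.5× storage mentioned above, while video-compressed inter frames add only a fraction of a percent.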
The picture must be filmed with a 60 fps camera and then encoded as a 60 fps video stream on the HD Blu-ray disc. The picture quality is not as good as 60 fps digital cinema, but it is still a 60 fps picture, so it too offers a perfect illusion of reality. Alternatively the number of reference frames can be doubled, doubling the data rate of the Blu-ray disc (and halving its playback time). A digital camera can shoot at 120 fps but be downconverted to 60 fps when encoding to HD Blu-ray; 120 fps can be interpolated down to 60 fps, so the 60 fps stream stores movement very realistically, and realistic movement leads to a perfect illusion of reality.

In third-world countries a small optical disc, 65 mm in diameter, is enough to store low- and medium-quality video. EcoDisc is a cheap DVD, and DVD-Video and DVD-Audio could be united into a single distribution format for music albums and video films on the same 65 mm disc. A music album is simply a DVD-Video disc showing a test picture at the lowest possible bitrate while the audio tracks play. A Chinese HD DVD standard, or even a small 65 mm Blu-ray disc, could also be made. A 65 mm disc has roughly 27.7% of the reflective surface of a 120 mm disc (about), and cheap Blu-ray discs that use an organic recording layer would make Blu-ray manufacturing possible in a DVD factory. The video content on these small 65 mm discs would not be HD quality but ordinary PAL or NTSC pixel counts at 24 fps or similar, or even lower pixel counts and frame rates. The video codec would be some cheap (Chinese) or completely free codec, but an effective one, more effective than standard DVD-Video. If 30, 25 or 24 fps video is downconverted to 20 fps and then simply sped up to 22 fps during playback, the way PAL speeds up 24 fps films to 25 fps, then 22 fps is enough for moving pictures and bitrate is saved (20 fps encoding compared to 24-30 fps). All this is for an ultra-cheap video/audio optical disc format for the third world.
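The 20 fps encode / 22 fps playback trick above works like the PAL film speed-up and can be quantified. A quick sketch, with the 24 fps source rate as an assumed baseline:

```python
# Encode at 20 fps, play back at 22 fps (PAL-style speed-up: 24 -> 25 fps).
# The 24 fps source rate is an assumed baseline for the comparison.

source_fps, encode_fps, playback_fps = 24, 20, 22

frames_stored = encode_fps / source_fps           # fraction of frames kept
speedup = playback_fps / encode_fps               # how much faster it plays
runtime_change = 1 / speedup - 1                  # runtime shrinks slightly
pal_speedup = 25 / 24                             # classic PAL speed-up

print(f"frames stored    : {frames_stored:.1%} of a 24 fps source")
print(f"playback speed-up: {speedup:.0%} (PAL uses {pal_speedup:.1%})")
print(f"runtime change   : {runtime_change:+.1%}")
```

So the scheme stores about a sixth fewer frames than a 24 fps source, at the cost of a roughly 9% shorter runtime; note the 10% speed-up is larger than PAL's 4.2%, so audio pitch correction would matter more here.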
If a 65 mm disc is too small to fit DVD players, a plastic adapter 80 mm or 120 mm in diameter could be made, into which the 65 mm disc is plugged before playing; or the 65 mm disc could leave the factory with a plastic ring so that the disc diameter is 80 or 120 mm. An 80 mm disc should fit all CD/DVD players. The reflective surface stays at the small 65 mm diameter, which is enough for about 130 minutes of lossless 16-bit 44.1 kHz audio (though rates like 20-bit 36 kHz should be used so that the content is not so easily pirated onto illegal CD copies). FLAC or another lossless codec stretches those 130 minutes to roughly 230. Effective video coding on the 65 mm EcoDisc DVD is enough for mediocre-quality playback video; the video codec compresses better than standard DVD but has either a free or very cheap (Chinese?) licence. The 65 mm video soundtrack needs efficient lossy audio coding, but low-quality audio (a few kilobits to a few dozen kilobits per second) is enough for an ultra-cheap DVD.

For near-cinema quality (and more expensive than the 65 mm DVD), a fast-frame-rate 60-72 fps HD Blu-ray on a 120 mm disc also needs a fast flicker rate: 240 flickers per second at 60 fps, and at least 216 flickers/sec at 72 fps? Or 288 flickers/sec? The flicker rate depends on the TV set, so the TV must be capable of a fast flicker rate and a fast frame rate to offer a perfect illusion of movement, and a perfect illusion of movement is also a perfect illusion of reality. Fast frame rate, fast flicker rate and HD Blu-ray in 3D is a perfect illusion of reality. Video compression has a 20-1000× compression ratio, and if video is highly compressed it perhaps slightly spoils the illusion of reality, but HD Blu-ray has good picture quality, and multilayer Blu-ray discs have high capacity.
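The "about 27.7%" reflective-surface figure and the ~130-minute audio capacity can be checked with annulus arithmetic. This is a rough sketch under stated assumptions: only the standard 15 mm centre hole is excluded from the data area (real discs also reserve a wider clamping zone, which shifts the percentage a little), and capacity is scaled linearly from a single-layer DVD's 4.7 GB.

```python
from math import pi

# Data area of a disc modelled as the annulus between the centre hole and
# the outer edge. Inner radius is a simplifying assumption.
HOLE_R = 7.5                                 # mm, standard 15 mm centre hole
area = lambda outer_r: pi * (outer_r**2 - HOLE_R**2)

ratio = area(65 / 2) / area(120 / 2)         # 65 mm disc vs 120 mm disc
capacity = 4.7e9 * ratio                     # scale single-layer DVD's 4.7 GB

# Uncompressed stereo PCM at 16 bit / 44.1 kHz:
pcm_rate = 44100 * 16 * 2 / 8                # 176,400 bytes per second
minutes = capacity / pcm_rate / 60

print(f"area ratio: {ratio:.1%}")
print(f"capacity  : {capacity / 1e9:.2f} GB")
print(f"PCM audio : {minutes:.0f} minutes")
```

This simple model gives about 28% of the 120 mm disc's area and roughly two hours of uncompressed CD-quality stereo, consistent with the figures in the text.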
In digital cinema (and HD Blu-ray as well), video compression can be used in the inter frames of a 60-120 fps picture, even though the reference frames are raw files or motion JPEG as today (24-30 frames of the 60-120 fps video content; the rest are inter frames for capturing motion accurately).
There are ways to improve a video picture, such as noise reduction, sharpness control (edge enhancement) and motion smoothing. These three methods have no international standard the way video codecs do; every digital-TV manufacturer uses a different system for them. If they were standardised like video codecs, every TV could use the same motion smoothing, sharpness control and noise reduction. The so-called "soap opera effect" could then be avoided: the video stream would carry instructions on when to turn motion smoothing on and off, so that when the picture contains a lot of movement motion smoothing is used automatically, and when it is not needed it turns off automatically. If these three picture-improvement methods were included in the video codecs themselves, picture quality would improve without increasing the bitrate: only a few-bit signal for turning them on and off is needed. So standardised versions of them, used in every TV the way video codecs are, are needed; the same goes for digital cinema. Picture quality can benefit from them without additional bitrate. The TV user would no longer have to turn motion smoothing, edge enhancement etc. on or off himself; it would happen automatically. And the methods could then be used in the best possible way: switched on only when needed, and switched off automatically when their effect becomes negative and they make the picture worse instead of better. The level of motion smoothing, edge enhancement etc. could also be adjusted automatically, from a small percentage to a high percentage, on a scale of 1 to 8, or 1 to 16, etc. Picture quality then improves further. The video stream carries this information in its standardised video codec.
The playback device then has standard motion smoothing, edge enhancement and noise reduction built into its video codec hardware or software. Digital TVs already have these, but they are not standardised, so every TV manufacturer uses a different system and the user must switch them on and off himself. Video compression also uses macroblocks, transform blocks or whatever they are called: a picture is divided into smaller blocks which are then compressed. Each of those smaller blocks could have its own individual motion smoothing, edge enhancement etc. One block could use motion smoothing while another block uses edge enhancement, at the same time, in different blocks of a picture frame of, for example, 16 × 16 blocks. Blocks that do not need sharpening skip these methods; they are used only in blocks that need them. This happens automatically, and a block can change sharpening method, or turn it on or off, several times per second, because it is controlled automatically by the video stream itself. The video picture becomes much sharper and clearer, without the soap opera effect etc. A standard system is needed so that any video/TV hardware manufacturer can use this per-block sharpening and motion smoothing automatically in the video codec (hardware or software). The automatic sharpening system is thus included in the video codec itself: the codec does not just compress the picture, it also applies motion smoothing, sharpness control and noise reduction (if needed) to every block of the picture frame individually, turning each on or off as needed and adjusting its level, for example from 1 to 16 (4 bits), i.e. from about 6% to full 100%, in every block individually. One bit per block signals whether a sharpening method is on, and when that bit shows it is on, an additional 4-bit level signal shows at what level, on a scale of 1 to 16, the method is used (for example).
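The per-block signalling described above (one on/off bit plus a conditional 4-bit level) can be sketched as a tiny bitstream encoder and decoder. The packing layout here is a made-up illustration of the idea, not the syntax of any real codec:

```python
# Hypothetical per-block side channel: 1 bit "enhancement on", and if on,
# a 4-bit level in 1..16 (stored as level - 1). Layout is illustrative only.

def encode_blocks(levels):
    """levels: list with None (off) or an int 1..16 per block -> bit list."""
    bits = []
    for lv in levels:
        if lv is None:
            bits.append(0)                       # off: a single 0 bit
        else:
            bits.append(1)                       # on flag...
            bits += [(lv - 1) >> i & 1 for i in range(3, -1, -1)]  # ...+ level
    return bits

def decode_blocks(bits, n_blocks):
    """Inverse of encode_blocks for a known block count."""
    levels, pos = [], 0
    for _ in range(n_blocks):
        if bits[pos] == 0:
            levels.append(None)
            pos += 1
        else:
            value = 0
            for b in bits[pos + 1:pos + 5]:      # read the 4 level bits
                value = value << 1 | b
            levels.append(value + 1)
            pos += 5
    return levels

frame = [None, 16, 3, None, 8]                   # 5 blocks of one frame
bits = encode_blocks(frame)
print(len(bits), bits)                           # 17 bits for 5 blocks
print(decode_blocks(bits, len(frame)))           # round-trips to the input
```

With this layout a block that needs no enhancement costs one bit and an enhanced block costs five, so the overhead stays at the "few bits" level the text describes even for frames with many blocks.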
This individual sharpening in every block could perhaps sharpen a video-compressed picture quite a lot. It must be an automated process, included in the video codec itself, and standardised the way video codecs are standardised. Three methods are used, perhaps all at the same time (if needed): motion smoothing, edge enhancement (sharpness control) and noise reduction. These three methods are already available in digital TVs today, but they are all different systems from different TV manufacturers, there is no universal standard, and the user must switch them on and off himself. All three could perhaps be applied simultaneously within a macroblock or transform block when they are all needed to sharpen the picture.
If a perfect illusion of reality is wanted, lossless video compression is also needed. If, for example, digital cinema uses lossless compression for 24 frames per second and lossy compression for the 96 frames per second in between, the result is 120 fps video. Lossy compression can shrink video by up to 1000 times, so it adds little on top of the 24 lossless frames. If lossless compression is used, perhaps a smaller pixel count than today's 4K or 8K is enough: 2K or 1K (HDTV quality). The unnatural look of video compression is avoided when the 24 frames are lossless; the 96 compressed frames are there to capture movement. If only 24 (or 25 or 30) lossy frames are used as today, fast movements can look blurry, along with the other unnatural-looking artifacts that video compression has. It is possible to see the difference, for example, between digital video and analogue magnetic tape such as type C open-reel videotape: compression is visible in movement and in the "frozen" background of compressed video. With lossless compression no difference can be seen. Although the pixel count may be smaller, the perfect illusion of reality that lossless compression offers, together with lossy compression of the in-between frames that are mainly for movement, makes a perfect illusion of reality in digital cinema. Cinemas today use 24, 25 or 30 frames per second and lossy compression, albeit high-quality and high-bitrate compression. But put together lossless compression of 24 frames and very efficient compression (like on Blu-ray discs, highly compressed) of 96 frames per second, and the result is 120 fps digital cinema in which nobody sees any difference between cinematography and reality, except perhaps the pixels. Or 25 frames plus 100, or plus 75 compressed frames, for a 125 or 100 fps picture.
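The lossless + lossy split above can be put into rough numbers. A sketch under stated assumptions: 2K frames at 3 bytes per pixel, a typical ~2:1 ratio for lossless image compression, and an assumed 200:1 ratio for the heavily compressed inter frames (the text says lossy compression can reach up to 1000:1):

```python
# Rough data-rate estimate for 24 lossless + 96 lossy frames/s at 2K.
# Compression ratios are illustrative assumptions, not measured values.

FRAME_2K = 2048 * 1080 * 3            # one 2K frame, 3 bytes/pixel
LOSSLESS_RATIO = 2                    # assumed lossless compression (~2:1)
LOSSY_RATIO = 200                     # assumed heavy lossy compression

lossless_part = 24 * FRAME_2K / LOSSLESS_RATIO
lossy_part = 96 * FRAME_2K / LOSSY_RATIO
total = lossless_part + lossy_part

all_lossless_120 = 120 * FRAME_2K / LOSSLESS_RATIO   # naive alternative

print(f"hybrid 120 fps      : {total / 1e6:.1f} MB/s")
print(f"all-lossless 120 fps: {all_lossless_120 / 1e6:.1f} MB/s")
print(f"lossy share of total: {lossy_part / total:.1%}")
```

Under these assumptions the 96 lossy frames contribute only a few percent of the total data rate, so the hybrid stream costs little more than the 24 lossless frames alone, while an all-lossless 120 fps stream would be several times larger.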