Topic: motion compensated deinterlacing?
Hi there,
a wild idea: SVP has to fully understand the whole motion situation from one frame to the next, in order to do its job, right? So I'm wondering if it would be feasible to make use of all the algorithms and knowledge in SVP to implement motion compensated deinterlacing as part of SVP?
How do you handle interlaced anime, for example, anyway? Do you currently require deinterlacing to be performed before SVP? I think if you moved deinterlacing into SVP, it could produce a nice quality/performance improvement, even over the best currently available AviSynth deinterlacing scripts, or what do you think? I mean, the best deinterlacing algos out there use their own motion interpolation, but I suppose what SVP does is probably superior to those in terms of quality vs. performance ratio.
Just a wild thought, though. If you don't like the idea, just toss it. I'd love it, though. Have been wishing for high quality motion compensated deinterlacing in madVR for a long time.
Another thought: would it be hard for SVP to "export" motion information for other filters (or e.g. madVR) to reuse? Debanding, denoising, even sharpening etc. algorithms might be tuned to benefit from knowing which part of the video moves where from one frame to the next. It might even be possible to "misuse" SVP to simply analyze a video sequence, without actually letting SVP modify it, just to use the motion vectors for other purposes. E.g. if you don't want to add deinterlacing yourself, one could tell SVP to analyze a video sequence, gather the detected motion information from SVP, and then use that to implement motion compensated deinterlacing externally. Basically you could extend SVP into an open motion toolbox which other software could use for all kinds of algorithms.
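Just to illustrate what I mean by "externally", here's a toy sketch in Python/NumPy. Everything here is made up for illustration (function name, vector format, block size); it has nothing to do with SVP's actual internals. The idea: given per-block motion vectors that some analyzer exported, fill the missing scan lines of a top field by copying motion-compensated blocks from the previous full frame:

```python
import numpy as np

def mc_deinterlace(field, prev_frame, vectors, block=8):
    """Fill the missing (odd) scan lines of `field` (a top-field-only
    frame with odd lines zeroed) by motion-compensated copy from
    `prev_frame`.

    `vectors` maps block indices (by, bx) to an integer (dy, dx) motion,
    i.e. where that block's content was located in the previous frame.
    This is a hypothetical interface, not SVP's real one.
    """
    h, w = field.shape
    out = field.copy()
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = vectors.get((by // block, bx // block), (0, 0))
            sy, sx = by + dy, bx + dx
            # Skip blocks whose motion-compensated source would fall
            # outside the previous frame; those lines stay as-is.
            if not (0 <= sy <= h - block and 0 <= sx <= w - block):
                continue
            src = prev_frame[sy:sy + block, sx:sx + block]
            for row in range(block):
                # Only replace the missing odd lines; the real field
                # lines the camera captured stay untouched.
                if (by + row) % 2 == 1:
                    out[by + row, bx:bx + block] = src[row, :]
    return out
```

A real implementation would of course need sub-pixel vectors, occlusion handling and a fallback (e.g. spatial interpolation) where the vectors are unreliable, but it shows how exported motion data alone would be enough to build deinterlacing outside of SVP.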
What do you think?