P.S.: Just to explain: motion-compensated deinterlacing done by SVP would work something like this:
-------
1)
Split the video into fields (e.g. ~60 fields per second for NTSC). Field X now has the odd lines set and the even lines are missing. Field X+1 has the even lines set and the odd lines are missing.
2)
Use a good interpolation algorithm, e.g. NNEDI3, to interpolate the missing scanlines of each field, so each field is turned into a full frame.
3)
Now run SVP on the output of step 2) and recalculate every frame. No new frames need to be added; every existing frame just needs to be recalculated, using its neighbor frames.
4)
Now create the final frames by combining the known "good" scanlines from the original fields with the missing scanlines taken from the results of step 3). (A rough code sketch of steps 1) to 4) follows below.)
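
To make steps 1) to 4) concrete, here is a rough Python/NumPy sketch of the pipeline. This is only an illustration under simplifying assumptions (2-D grayscale frames with an even number of lines): interpolate_field() is a crude line-doubling stand-in for NNEDI3, and recalc_with_neighbors() is just a placeholder blend where SVP's motion-compensated recalculation would have to go.

import numpy as np

def split_into_fields(interlaced_frames, tff=True):
    # Step 1): split every interlaced frame into its two fields (doubling the
    # temporal rate). Each field keeps only its own scanlines.
    fields = []
    for frame in interlaced_frames:
        order = (0, 1) if tff else (1, 0)
        for parity in order:
            fields.append((frame[parity::2], parity))  # half-height field + line parity
    return fields

def interpolate_field(field, parity, height):
    # Step 2) placeholder: fill in the missing scanlines of one field to get a
    # full frame. A real pipeline would use NNEDI3 here; naive line doubling
    # is used only to keep the sketch self-contained.
    full = np.zeros((height, field.shape[1]), dtype=field.dtype)
    full[parity::2] = field          # the known "good" scanlines
    full[1 - parity::2] = field      # crude guess for the missing scanlines
    return full

def recalc_with_neighbors(prev_frame, cur_frame, next_frame):
    # Step 3) placeholder: where SVP would recalculate the frame from its
    # neighbors with motion compensation. A plain temporal blend is used only
    # so the sketch runs end to end.
    stack = np.stack([prev_frame, cur_frame, next_frame]).astype(np.float32)
    return stack.mean(axis=0).astype(cur_frame.dtype)

def weave_final(field, parity, recalculated):
    # Step 4): keep the known scanlines from the original field and take only
    # the missing scanlines from the recalculated frame.
    final = recalculated.copy()
    final[parity::2] = field
    return final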
-------
There are 2 variants of this processing pipeline:
a) The simple variant would be for step 3) to use only frames from step 2).
b) The more complicated, and probably slightly higher-quality, variant would be to run steps 3) and 4) "interleaved": when step 3) recalculates frame X, it would use frame X-1 from step 4) and frame X+1 from step 2). See the sketch below.
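
To illustrate the difference, here is how the two driver loops could look, reusing the placeholder functions from the sketch above; the only change in variant b) is which version of frame X-1 is fed back into the recalculation.

def deinterlace_variant_a(interlaced_frames):
    # Variant a): steps run strictly in sequence; step 3) only ever reads the
    # interpolated frames produced by step 2).
    height = interlaced_frames[0].shape[0]
    fields = split_into_fields(interlaced_frames)
    bobbed = [interpolate_field(f, p, height) for f, p in fields]
    last = len(bobbed) - 1
    out = []
    for i, (field, parity) in enumerate(fields):
        recalced = recalc_with_neighbors(bobbed[max(i - 1, 0)], bobbed[i],
                                         bobbed[min(i + 1, last)])
        out.append(weave_final(field, parity, recalced))
    return out

def deinterlace_variant_b(interlaced_frames):
    # Variant b): steps 3) and 4) run interleaved; recalculating frame X uses
    # the already finished frame X-1 from step 4) and frame X+1 from step 2).
    height = interlaced_frames[0].shape[0]
    fields = split_into_fields(interlaced_frames)
    bobbed = [interpolate_field(f, p, height) for f, p in fields]
    last = len(bobbed) - 1
    out = []
    for i, (field, parity) in enumerate(fields):
        prev_frame = out[i - 1] if i > 0 else bobbed[0]
        recalced = recalc_with_neighbors(prev_frame, bobbed[i],
                                         bobbed[min(i + 1, last)])
        out.append(weave_final(field, parity, recalced))
    return out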
-------
Variant a) might be possible to implement externally, if SVP could be used as a "toolbox". Variant b) would probably be hard to realize efficiently that way, so if we want variant b), it would probably have to be implemented inside SVP.