flowreen91 wrote:

But when transcoding it with SVP, every interpolated frame has different positioning of the white lines than the non-interpolated frames.
It's like the video is shown normally on the non-interpolated frames and then the height is reduced by a few pixels on the RIFE-generated frames, which makes the pixels not align with the original movie, adding a shake-like effect on the static white lines that is obvious to big-screen users.

You might be on to something here. When TensorRT builds the model for a 1080p video, it is set up as 1920x1088, not 1920x1080. That's because the frame dimensions get padded up to a multiple of 32 (1088 is a multiple of 32; 1080 isn't).

Anyway, I hope this bug gets resolved soon, as RIFE needs to be almost perfect to justify using it over standard SVP. Standard SVP with best settings only requires 4 cores @ 60% CPU on my 12900K, while RIFE is a GPU killer.

EDIT: This bug also occurs with 1280x720 video, and the shimmering happens at that resolution too. The model is set up as 1280x736 (the next multiple of 32 above 720).
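For reference, here's a minimal sketch of that padding arithmetic, assuming the backend simply rounds each dimension up to the next multiple of 32 before building the engine (I haven't confirmed that's literally what it does):

    def padded(size, multiple=32):
        # Round a frame dimension up to the next multiple (ceiling division).
        return -(-size // multiple) * multiple

    print(padded(1080))  # 1088 -> 8 extra rows vs. the source height
    print(padded(720))   # 736  -> 16 extra rows
    print(padded(1920))  # 1920 -> width already aligned, unchanged

Those 8 (or 16) extra rows would explain the vertical offset flowreen91 described if they aren't cropped back out after interpolation.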

flowreen91 wrote:

Please try to record your screen with an obvious example of the "outlines and edges shimmer (the outline width noticeably changes) when panning or scrolling happens" issue.

http://nnl1.com/misc/TensorRT-Bug.mp4

This is a 1440p60 file, so please watch it at 100% zoom. First I play it with standard SVP, which looks great. Then I replay it with RIFE TensorRT. Focus on the uniform edges, specifically the shoulder crease marks. The shimmering becomes very obvious when blown up on a large 4K TV.

madchickendog wrote:

Is anyone having success using RIFE 2x framerate (48fps or more) with an RTX 4080 Super? I'm getting stable, smooth interpolation only at 1080p 48fps; anything more than that jitters or plays in slow motion, including slowed-down audio.

I can get 1080p120 with TensorRT and MPC Video Renderer, using 90% GPU. I can get 1080p60 with TensorRT and MadVR with the NGU Sharp scaler set to High, along with some other postprocessing like Sharpen/Crispen/Thin Edges. That uses 80% GPU and is what I normally do, because my Sony TV can then use its own Bravia XR interpolation to go from 60fps to 120fps. However, due to the TensorRT bug I'm experiencing, I've switched back to standard SVP for now. I'm able to raise the NGU Sharp scaler from High to Very High because standard SVP can be done entirely on the CPU (to force this: Application settings > GPU acceleration > no acceleration). Note that the NGU Sharp scaler is very GPU intensive, so if you're using it, be careful when adjusting.

Watching 1080p anime with RIFE set to 60fps and with MadVR postprocessing to 4K. 4080 Super, AI 4.9.

I'm finding that with RIFE TensorRT, outlines and edges shimmer (the outline width noticeably changes) when panning or scrolling happens. This does not happen with RIFE ncnn/Vulkan or standard SVP interpolation. I tried changing AI models and they all do this. Disabling MadVR doesn't seem to fix it either; it happens with all renderers. It's off-putting enough that I've stuck with standard SVP interpolation for now. Is there a fix for this, or is this a known issue? There's no point in the performance increase of TensorRT if it looks like dogshit.

Xenocyde wrote:

Is MPC-HC better for GPUs like the RTX 4080? I'm on mpv, but my screen is 1080p, so I don't think it really matters right now. I'm thinking of getting a 4K screen soon, though, so maybe MPC-HC is better for 4K?

I deleted mpv after one day: horrible GUI, and it doesn't work with the MadVR renderer. But others love it. For anime, which is >90% of the content I consume, MadVR's 4K upscaling filters are fantastic, but they're very GPU intensive. With my old video card, a GTX 1660Ti, I had to make the CPU handle SVP's regular interpolation while the GPU handled MadVR; putting both on the GPU resulted in lag. With my 4080S, RIFE+MadVR works for 1080p24 content upscaled to 4K. No comment on native 4K or HDR content, as almost no current anime is in native 4K or HDR.

VLC is the "for dummies" media player and doesn't have much customization. mpv goes too far in the other direction and is for those who think the FFmpeg command line is the greatest encoder. MPC-HC/BE is in the middle, and that's good enough for me.

RickyAstle98 wrote:

Is d3d11va-copy the same thing as D3D11 in the mpv player? How do I force D3D11 through SVP and the mpv player? A bit confused, I want more performance! smile

I use MPC-HC, but it looks like you're in copyback mode. Any copyback mode copies the decoded frames to system RAM, which allows CPU processing. D3D11 Native keeps all of the frame data in VRAM, which is much faster, but then all processing must be done by the GPU.
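For the mpv side of your question, the relevant setting should be the hwdec line in mpv.conf; I'm on MPC-HC, so treat this as a sketch to verify rather than a known-good config, and note that SVP's frame processing may still need frames in system RAM:

    # mpv.conf -- hardware decoding mode (pick one)
    hwdec=d3d11va        # native: decoded frames stay in VRAM
    #hwdec=d3d11va-copy  # copyback: frames are copied to system RAM for CPU processing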

Cool. Glad I could help.

Consider making D3D11 Native the default playback mode in future versions of SVP, as it took me a while to troubleshoot this. The only problem is that D3D11 Native doesn't work in Windows 7, so the system requirement listing would need to change.

I was going to wait for the 50-series, but the RTX 4080 Super launch was too good to pass up, so I got myself one.

Primary use is anime watching. Using the generic 4.4 filter for now. 1080p24 to 1080p60, resize to 4K via MadVR's NGU Sharp filter, then have my Sony A95K's Motionflow XR processing make it go from 60fps to 120fps.

At first the GPU was at 90% usage, and the video gradually desynced from the audio. I thought the 4080S couldn't handle it. But then I forced D3D11 rendering in MadVR, and also switched LAV video decode from DXVA2 Copyback to D3D11 Native. Now it runs great. I have no idea why SVP doesn't force these settings upon installation. If your video is lagging, double-check to see that you're not in DXVA2 Copyback mode, as that is a huge performance hit. Now my GPU is at 70-80% usage and doesn't desync. Great!

It's unfortunate that the price of admission with the MadVR renderer is a 4080S though, because I feel anything weaker wouldn't have been good enough.

dawkinscm wrote:

No, it's not getting faster. After v4.6, RIFE has effectively been getting slower because the models are getting bigger and more compute-intensive. RIFE v4.6 is about 20% faster than v4.9, which in turn is about 20% faster than v4.12.

Guess I'll wait and save for the 5090 then. I'm currently on a 4-year-old 1660Ti, which can barely handle SVP+MadVR at 1080p60; my Sony A95K TV then interpolates that to 120fps. There's no point in getting a lesser card that can't handle RIFE 4K120.
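If those ~20% figures compound multiplicatively (my assumption, not a measurement), the newer models end up around 30% slower than v4.6:

    # Back-of-the-envelope from the "~20% faster" figures quoted above.
    v46 = 1.00          # baseline throughput
    v49 = v46 / 1.2     # ~0.83x of v4.6
    v412 = v49 / 1.2    # ~0.69x of v4.6, i.e. v4.6 is roughly 44% faster than v4.12
    print(round(v49, 2), round(v412, 2))  # 0.83 0.69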

Has RIFE become any faster? Do you still need a 4090 to go from 4K24 to 4K120 in real time, without the use of TensorRT pre-caching? Or is a less powerful card good enough now? I'm considering a 4070Ti but am concerned it just won't be fast enough for RIFE 4K24 to 4K120.