22:33:32.546 [i]: AVSF: found new player instance
22:33:32.572 [i]: AVSF: filters in use: LAV Splitter Source -> LAV Video Decoder -> * -> madVR
22:33:32.573 [i]: AVSF: new video in mpc-hc64.exe (64-bit) [MPC-HC 2.0.0.0] on screen 0
22:33:32.614 [i]: Media: video 1920x1440 [PAR 1.000] at 23.976 fps
22:33:32.614 [i]: Media: codec type is AV1, YUV/4:2:0/10 bits
22:33:32.615 [i]: Playback: starting up...
22:33:32.619 [i]: Playback [73c0e3c8]: Frame server (64-bit) C:\Users\Sahil\AppData\Local\Programs\VapourSynth\core\vapoursynth.dll
22:33:32.620 [i]: Playback [73c0e3c8]: resulting video frame 1920x1440
22:33:32.621 [i]: Playback [73c0e3c8]: 4 acceptible profiles, best is 'RIFE AI engine' [5000]
22:33:32.623 [i]: Playback [73c0e3c8]: enabled while video is playing
22:33:32.626 [i]: Playback [73c0e3c8]: playing at 59.94 [23.976 *5/2] /10 bit
22:33:32.939 [E]: Playback [73c0e3c8]: VS - Python exception: [WinError 740] The requested operation requires elevation
22:33:32.939 [E]: Playback [73c0e3c8]: VS - Traceback (most recent call last):
22:33:32.939 [E]: Playback [73c0e3c8]: VS - File "src\cython\vapoursynth.pyx", line 2866, in vapoursynth._vpy_evaluate
22:33:32.939 [E]: Playback [73c0e3c8]: VS - File "src\cython\vapoursynth.pyx", line 2867, in vapoursynth._vpy_evaluate
22:33:32.939 [E]: Playback [73c0e3c8]: VS - File "C:\Users\Sahil\AppData\Roaming\SVP4\scripts\73c0e3c8.py", line 77, in <module>
22:33:32.940 [E]: Playback [73c0e3c8]: VS - smooth = interpolate(clip)
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Users\Sahil\AppData\Roaming\SVP4\scripts\73c0e3c8.py", line 56, in interpolate
22:33:32.940 [E]: Playback [73c0e3c8]: VS - smooth = RIFE(input_m,multi=rife_num,model=rife_mnum,backend=trt_backend)
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Program Files (x86)\SVP 4\rife\vsmlrt.py", line 936, in RIFE
22:33:32.940 [E]: Playback [73c0e3c8]: VS - output0 = RIFEMerge(
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Program Files (x86)\SVP 4\rife\vsmlrt.py", line 821, in RIFEMerge
22:33:32.940 [E]: Playback [73c0e3c8]: VS - return inference_with_fallback(
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Program Files (x86)\SVP 4\rife\vsmlrt.py", line 1420, in inference_with_fallback
22:33:32.940 [E]: Playback [73c0e3c8]: VS - raise e
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Program Files (x86)\SVP 4\rife\vsmlrt.py", line 1399, in inference_with_fallback
22:33:32.940 [E]: Playback [73c0e3c8]: VS - return _inference(
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Program Files (x86)\SVP 4\rife\vsmlrt.py", line 1339, in _inference
22:33:32.940 [E]: Playback [73c0e3c8]: VS - engine_path = trtexec(
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Program Files (x86)\SVP 4\rife\vsmlrt.py", line 1172, in trtexec
22:33:32.940 [E]: Playback [73c0e3c8]: VS - completed_process = subprocess.run(args, env=env, check=False, stdout=sys.stderr)
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Users\Sahil\AppData\Local\Programs\Python\Python38\Lib\subprocess.py", line 493, in run
22:33:32.940 [E]: Playback [73c0e3c8]: VS - with Popen(*popenargs, **kwargs) as process:
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Users\Sahil\AppData\Local\Programs\Python\Python38\Lib\subprocess.py", line 858, in __init__
22:33:32.940 [E]: Playback [73c0e3c8]: VS - self._execute_child(args, executable, preexec_fn, close_fds,
22:33:32.940 [E]: Playback [73c0e3c8]: VS - File "C:\Users\Sahil\AppData\Local\Programs\Python\Python38\Lib\subprocess.py", line 1311, in _execute_child
22:33:32.940 [E]: Playback [73c0e3c8]: VS - hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
22:33:32.940 [E]: Playback [73c0e3c8]: VS - OSError: [WinError 740] The requested operation requires elevation
Oddly, the issue was resolved by running MPC-HC as administrator.
Another issue: the TensorRT inference engine has to be built (and cached) for every new resolution. This caused me a lot of confusion during real-time playback, because playback just hangs while the engine builds and there's no indication of how long that will take. Once the build finishes, real-time playback works fine. It might help new users if the TensorRT version shipped with a benchmark tool containing reference footage at several common resolutions, with a notification once all the reference engines were cached.

Also, going over the vsmlrt documentation, it's extremely important to set fp16, num_streams, and (in earlier versions) workspace before the inference engine is built. How is SVP handling fp16 vs fp32, and does the GPU threads setting correspond to the num_streams parameter in vsmlrt?
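For reference, those options are passed through the backend object that the generated script hands to RIFE (the traceback above shows the call: RIFE(input_m, multi=rife_num, model=rife_mnum, backend=trt_backend)). A configuration sketch of what that presumably looks like, based on the parameter names in the vsmlrt documentation — the exact values are placeholders, and this fragment needs a full VapourSynth script around it to run:

```
# Configuration sketch only -- requires VapourSynth + vsmlrt to actually run.
import vsmlrt

trt_backend = vsmlrt.Backend.TRT(
    fp16=True,        # build the engine in half precision rather than fp32
    num_streams=2,    # parallel inference streams (the "GPU threads" setting?)
    workspace=128,    # build-time workspace size in MiB (older vsmlrt versions)
)
smooth = vsmlrt.RIFE(clip, multi=2, model=rife_model, backend=trt_backend)
```

Since all three options are baked into the engine at build time, changing any of them invalidates the cached engine the same way a resolution change does.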