I'm not here to discuss differences of opinion about this and that. The topic I wrote about is RIFE and how to do the visual testing (somewhat) properly and find which RIFE model is better for which type of content. I'll stick with that wink
https://github.com/hzwer/ECCV2022-RIFE
https://arxiv.org/abs/2011.06294

https://paperswithcode.com/paper/rife-r … estimation
The ranking shown is a snippet from the "MSU Video Frame Interpolation Benchmark" website https://videoprocessing.ai/benchmarks/v … tion.html.
Anyone who is interested in different video estimation/interpolation ML models should check it out.

The RIFE model (which is, btw., a rather outdated model from 2021 or so) already scores pretty well in the subjective score. When it comes to PSNR, SSIM and VMAF it obviously doesn't score that high.
But that is all negated once you look at the FPS numbers; Chronos-SloMo-v2 only achieves 4.x FPS.



The middle ground (in effort and result) for us, doing a bit more than child's play, would be:
1) Screen-record or capture the video (the video player) with a program.
2) Put it in a video editing software and edit/cut it.
3) Do the same for the other model.
4) Align them perfectly, frame by frame, and put both video files together (see the sketch right below).
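For step 4, a small sketch of how the two aligned recordings could be stitched into one side-by-side file. This is just my own suggestion using ffmpeg (not something SVP provides), and the file names are placeholders:

import subprocess

# Hypothetical file names - the two screen recordings, already trimmed in the
# editor so they start on exactly the same frame and have the same length.
clip_a = "scene_rife_v4.25.mkv"
clip_b = "scene_rife_v4.26.mkv"

# ffmpeg's hstack filter places the clips next to each other, so the result
# can be stepped through frame by frame in any player.
subprocess.run([
    "ffmpeg", "-y", "-i", clip_a, "-i", clip_b,
    "-filter_complex", "[0:v][1:v]hstack=inputs=2[out]",
    "-map", "[out]", "-c:v", "libx264", "-crf", "16",
    "side_by_side.mkv",
], check=True)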
What we do here is mostly child's play when it comes to visual testing, and by no means should the developers listen to us.
wink


@Drakko01
Okay, I've done my child's-play testing with the mentioned scene from that movie (noting again that our testing here is incredibly flawed).
You mean that ~10 second scene where the camera pans 360 degrees around the character with the robotic tentacles?
Yes, that's a good scene to test, with nice patterns and elaborate shapes!

Some tips first:
1) If you are doing the A-B repeat at slow-motion playback speed as mentioned, and/or switching between models, you have to play the A-B loop at least 2-3 times for the scene to be fully buffered/processed. See my attached image "screen9"? With the MPC Video Renderer, press CTRL+J to show the graph and information.
As seen in the bottom right corner, the graph is not flat but spiking and the framerate is not stable at 48, which means it's not fully buffered/processed yet (it's a demanding 3840x1600 video even for the RTX 4090, as mentioned).
2) The flawed result is seen in the attached "screen10": massive artifacts etc. which normally wouldn't be there, even at x0.25 playback speed.
When visually A-B comparing or screen-recording, make sure the graph is flat and the framerate fully stable.

Long story short - comparing v4.25 and v4.26, I sat down for a couple of minutes:
For that 10 second clip, I can't tell a clear winner.
On most elaborate/complex elements, v4.26 displays them slightly more clearly, with less noise and fewer small blocking artifacts on the patterns/the whole image during slow-to-medium movement (camera pan), as in the first 4 seconds on the red metal triangle things and the other metal parts of the tentacles.
I also spot fewer bright haloing artifacts between the tentacles in the foreground and the guy's darker black coat, and at the forest in the background.
I also spot a bit less noise and patterning on the whitish concrete pillar and the river at the end.

Conversely, v4.25 is for example visibly better in other aspects during the first 4 seconds and after: there are fewer stuttery blocks (medium-sized warping) around the metal tentacles (around the whole body), the red triangles and the other metal parts.
I also noticed v4.25 is somewhat clearer during faster movements (camera pans).

If anything, I somewhat prefer v4.26, because the pros outweigh the cons slightly.

So basically: this is what I meant. With child's-play visual testing like ours, we can maybe spot a very clear difference between older models such as 4.15 and the current v4.25/26, but definitely not that much between 4.25 and 4.26 themselves. Our testing is far too crude for that, and our conclusions differ too much because we all test differently.
In such small version jumps, some things get better, some worse.

That's why I've said it plenty of times: trust the developers. They are developing it, they do the math, they have the data, they do proper scientific testing etc., and there is hardly any point arguing against that. If they "recommend v4.26 for most scenes now", I will believe them, because my testing/conclusions (and everyone else's here) are heavily flawed, and theirs are not.


Also: please no more 3840x1600 video file testing. big_smile Way too demanding with the current models, even for this graphics card. It stutters far too much and takes too long to process/buffer. If you want me to test something next, find some 1080p footage wink

@reynbow
It's always the same page https://github.com/AmusementClub/vs-mlr … nal-models

@Drakko01

I'm not interested in the 0.25 speed test, I don't think anyone here is

The opinions of X or Y, or what someone is interested in, is not the topic here (or the one I started). I recommended you do your own testing if you don't know whom to believe or are unsure about your own testing/findings, and this is the (better) way to visually spot all/most artifacts etc.: at slow(er) playback speed, preferably x0.1 or x0.25.

Maybe I haven't made myself clear, so again:
1) Lower playback speed (e.g. x0.1, x0.25) does not make the model produce more or fewer artifacts, warping, blocking, pacing issues etc. (in quality or quantity). It's all still the same, no matter the playback speed.
2) Our eyes/brain (our perception) simply can't spot/perceive the differences well/fast enough at normal speed; yes, even with RIFE models only estimating from 24 fps to 60 fps.
We humans may distinguish fairly fast changes in light levels (or, as the mainstream puts it, "more fps"), but that doesn't mean we perceive everything equally well. It's simply too fast, i.e. too short a time span, to notice.
Can someone spot a tiny artifact covering 0.5 % of the image when the artifact itself only lasts 50 milliseconds? Obviously not. We are not some future cyborgs/androids with "10000 fps artificial eyes and brain computing" big_smile (see the quick arithmetic after this list)
3) Concluding from that: testing done at normal playback speed is flawed and leads to wrong results/findings.
4) Even v4.25/26 are both full of artifacts etc., but they are only/mostly perceptible to us humans at slow playback speed.
Anyone who has not done the visual testing at slow playback speed will be surprised how many more artifacts and how much worse pacing etc. they start to perceive.
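To put rough numbers on point 2 (just illustrative arithmetic, nothing measured; the 50 ms figure is the example from above, not a measurement):

# A short artifact simply stays on screen longer when the clip is slowed
# down, which is what makes it spottable at all.
artifact_ms = 50
for speed in (1.0, 0.5, 0.25, 0.1):
    print(f"playback x{speed}: artifact visible for ~{artifact_ms / speed:.0f} ms")
# x1.0 -> 50 ms, x0.5 -> 100 ms, x0.25 -> 200 ms, x0.1 -> 500 ms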


The goal of the Rife developers was not always aligned with what we are looking for here.
Many times the posts from members like dawkinscm,Blackfyre,dlr5668,flowreen91 ...

Fair enough. This subforum allows a broad spectrum of topics around the RIFE models.
I don't know what other things/settings you mean. The topic I started here is the RIFE models, so regarding that topic:
When it comes to visual testing for artifacts etc. (as we do here), my own testing - as time consuming as it already is with all the slow playback speeds, dozens of A-B loop repeats, note taking and screenshots - is child's play and flawed big time.
If anyone here isn't even doing that much, then theirs is even more flawed.

To do this scientifically and accurately, this is how we should do our testing, and this or similar methods are also how the RIFE developers do it:
https://netflixtechblog.com/toward-a-pr … bfa9efbd46

@RickyAstle98

Also when RIFE will support new computation levels directly from RT chain, thats increase inference performance from 84 to 100% according to NVIDIA guys!

That was a joke? big_smile
Or was something like that stated by the RIFE developers? They would have to make use of the new features.

The 5090 will have 2x as much CUDA cores than 5080, less power consumption on new compute levels! The 5080 performance is between 4080 and 4090 in rendering tasks, but games? Who knows?!

Correct.
Simply two BW-103 GPU dies glued together (as Apple has been doing with their M1 architecture since 2020, and Nvidia with Blackwell ML).
Of course (given the technical leaks), Nvidia's milking strategy will again be to castrate the GPU dies of every model in their lineup (by 10 - 15 % as usual), since there is no competition and AMD+Nvidia have been a Stackelberg duopoly for years anyway. Both companies have been manipulating the market for years, colluding on prices and products etc. There is no real competition; it's a public farce.
The currently sold RTX 4080 (Super) is in fact a relabeled RTX 4070 (AD103 GPU die). The initially introduced "RTX 4080 12GB" - which was then canceled - was in fact an RTX 4060 (AD104 GPU die).
There is no real RTX 4080 being sold; same for the RTX 4070 (in fact an RTX 4060) and the RTX 4060 (in fact an RTX 4050).
Renaming SKUs and giving them lower-tier GPU dies.
Looking at the leaks, it will be the same again for the consumer RTX 5000 series.



@dawkinscm

If the rumours are true then the 5080 doesn't suck because it will be at least as powerful as the current most powerful consumer GPU on the planet

As mentioned, that will very likely (if the leaks are true) not happen. Also: don't generalize the metrics.
At best, an RTX 5080 (BW-103 GPU die, so in fact a relabeled RTX 5070) will have 15 - 20 % less rasterization performance than an RTX 4090. In reality it should be more like 25 % less. It's a similar story with the Tensor Cores (relevant for RIFE) and other things.

1) The leaked BW-103 GPU die (RTX 5080) has roughly a third fewer shading units, ROPs, RT Cores, TMUs and Tensor Cores than the GPU in the current RTX 4090, but is somehow supposed to magically achieve the same rasterization, machine learning, ray tracing or Tensor Core (for RIFE) performance? That's not happening. The last time Nvidia delivered a GPU-architecture jump of > 50 % was back in 2007 with the 8 series. https://en.wikipedia.org/wiki/GeForce_8_series
As said: the jump from the Ampere RTX 3090 to the RTX 4090 is only so big because of the lithography jump from (effectively) 12 nm to 5 nm. Two full nodes.
Nvidia's performance jumps from GPU architecture alone have only been 10 - 20 % over the last 5 generations. The rest was lithography or simply more transistors.
2) The leaks specify the full BW-103 GPU die for the RTX 5080 (84 SM (84 full)). Nvidia has hardly ever sold a full GPU die to consumers in the last 4 - 5 generations. It will be castrated by at least 10 % as usual, just like every other model.
So it won't be 84 SM, but more like ~76 again, which means roughly 40 % fewer units than the current RTX 4090's 128 SM (rough arithmetic below this list).
3) Moore's Law has effectively been dead since TSMC's 28 nm lithography node in 2012 (whatever others say is wrong, or lying marketing).
4) For many generations (since 2012, and especially since CUDA), Nvidia's pro/Quadro/server and consumer lineups have shared the same GPU architecture basis. They only swap out/leave out certain components such as display outputs, video accelerators etc.
The TMUs, ROPs, Tensor Cores and RT Cores are very much alike.
AMD's leadership finally realized that this unified approach is the smarter way and will soon start doing the same:
https://overclock3d.net/news/gpu-displa … nd-gamers/
5) GH100 to B100 only has about 30 % more transistors overall (Hopper to Blackwell architecture). Together with the points above, this is a leak in itself, already giving away how much better Nvidia's engineers can make the RTX 5000 lineup when comparing equal GPU die to GPU die:
20 - 30 % in all aspects. The rest will be ML ("AI") marketing gimmicks such as "fake frames" (4x Frame Generation) or DLSS 4.
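To make points 1) and 2) concrete, some rough arithmetic. The 128 CUDA cores per SM matches Ada; the ~10 % cut-down is my own assumption based on past lineups, not part of the leak:

cores_per_sm = 128
full_bw103_sm = 84                       # leaked full BW-103 (RTX 5080)
cut_sm = round(full_bw103_sm * 0.90)     # assumed ~10 % castration -> ~76 SM
rtx4090_cores = 16384                    # 128 SM x 128 cores

cut_cores = cut_sm * cores_per_sm
print(f"{cut_sm} SM -> {cut_cores} cores, "
      f"~{1 - cut_cores / rtx4090_cores:.0%} fewer than the RTX 4090")
# 76 SM -> 9728 cores, ~41 % fewer than the RTX 4090's 16384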

So anyone pondering buying a graphics card for RIFE at 3840x1600 or 4K-UHD (3840x2160): either stick with the RTX 4090 or grab the upcoming RTX 5090. Sorry to "burst some bubbles".
It's all been an illegal, colluding Stackelberg duopoly since at least 2006 (with Nvidia as the Stackelberg leader):
https://www.tomshardware.com/news/nvidi … ,6311.html
https://hexus.net/business/news/corpora … itigation/
AMD has not been selling "cheaper products for less money", i.e. offering better price/performance, for at least 5 graphics card generations (since 2014).
They simply price them 10 - 20 % below Nvidia's models (e.g. the current RX 7900 XTX versus the RTX 4080), but they are 50 - 200 % worse in nearly all aspects (efficiency, features such as image reconstruction (DLSS 3 vs. FSR 2), ray tracing or ML performance, streaming etc.).
Hardly anyone uses an AMD RDNA 1/2/3 graphics card for RIFE via ncnn/Vulkan, right? The performance sucks.

The same goes for the Intel+AMD duopoly, which has controlled the consumer and server market with their x86/x64 patents for decades, eliminating any real competition.
All the company execs want is to maximize product margins to the moon and please shareholders.
The talk about "fair and legal competition" based on capitalist market doctrine and laws has been a lie and a public farce for at least 10 - 20 years.
It's all an elaborate show, but 99 % of the media continues to report as if there were real, fair and legal competition going on ...

Sorry a bit for the off-topic part. I think it has to be said, especially since hardly anyone mentions it amid the current state of run-of-the-mill media brainwashing. wink

@Drakko01
v15.5? You meant to say 4.14 or 4.15?
What do you mean by "with performance"?
As said, after my third round of elaborate testing, 4.26 is overall the best. There is no point in me using anything else, especially since both 4.25/4.26 consume so little electricity (my RTX 4090 is undervolted to 0.875 V).
If you don't know whom to believe, then do your own testing.

Note what I wrote: I do elaborate testing; I recommend doing the same. Also test on a big screen (I test on a 55-inch OLED TV from 1 meter distance).
Testing with LCD displays or LCD dimming-zone displays/TVs/monitors (misleadingly marketed as "miniLED") is not as accurate, as LCD technology lacks pixel-perfect dimming.
Lots of scenes across at least 10 real-scene movies or animations/cartoons.
I test specifically at 0.25 speed; only this way can I perceive artifacts etc. that aren't noticeable even at 0.5 speed, let alone at normal speed (1.0).
I always A-B repeat every scene a couple of times.

Additionally: with v4.18 I noticed not only way fewer artifacts, warping etc., but especially much better image smoothness and pacing.
If there are multiple elements in the foreground, middle ground and background, v4.18, and even more so 4.26, display the elements more smoothly and with better pacing.
With older models, and to a degree even with v4.18, more stuttering occurred; the background element (for example a forest) would blend in more wrongly with elements of the foreground (let's say a car); both stuttered a lot, especially if the element had a complex structure (let's say a fence or a pattern).
In some scenes with the v4.18 model I noticed the forest in the background stuttering, while with the v4.25 model, and even more so the v4.26 model, the forest was displayed smoothly.
That means the neural network wasn't able to separate elements that well in the older models.

And lastly: I'm not guessing, nor do I have to remember what I found a month ago, because I write down everything I notice.
I have a long text file where I've written down all of my findings, going back to v4.17.

If you don't want to believe any of us, then maybe you want to believe the developers themselves? big_smile
"Currently, it is recommended to choose 4.26 by default for most scenes."
If you can trust anyone's knowledge and findings, it's theirs.
https://github.com/hzwer/Practical-RIFE … 7ecf72f635

@Blackfyre
After my third round of testing, I disagree. The 4.26 model is overall, and noticeably, the best for most scenes, for animation/cartoons and real-scene movies alike.
As already linked https://www.svp-team.com/forum/viewtopi … 248#p85248
I was correct about the 4.18 model back then (many here came to the same conclusion after testing): it was overall the best for most scenes.

A few weeks after that I did the same tests again, 4.18 versus the 4.22 & 4.22 Lite models.
I didn't comment on it, but I reached the same conclusion as the authors in their readme file (4.18 was the best back then for real scenes, and 4.22 & Lite for animation) https://github.com/hzwer/Practical-RIFE … /README.md
I had thus been using 4.22 Lite for animation/cartoons until a few weeks ago.

And two weeks ago I sat down for more than an hour again and tested 40+ different scenes (animation and real scene), always at 0.25 speed to catch every artifact, warping, image smoothness and pacing issue etc., again using A-B repeat to loop through over and over.
I again reached the same conclusion as the authors: "Currently, it is recommended to choose 4.26 by default for most scenes."

I think the difference in our findings simply comes from the differences in how we test. I think that's where the wrong conclusions come from.
I think if you tested more, and especially more strictly (did you or didn't you?), you would come to the same conclusion as the authors and me.
I mean: how can the authors who train the models etc. be wrong in their conclusions and findings? Not possible. wink

It's not better in all scenes, but in most. I also found 4 - 5 scenes where the 4.18 model was still somewhat better (in artifacts, warping, image smoothness or pacing), but in the rest 4.26 was better.
And yes, I also specifically tested 4.25 versus 4.26, with the same conclusion.


And the best part about the 4.25 & 4.26 models is how little electricity they need to achieve that. Tested with the latest "v15.5: latest TensorRT library" and a couple of movies at 3840x1600 resolution and 48 fps:
4.17 lite ~ 145 watts average gpu power consumption.
4.22 lite ~ 150 watts
4.18 ~ 176 watts
4.22 ~ 187 watts
4.25 ~ 158 watts
4.26 ~ 158 watts
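To put those wattages into perspective, a bit of simple arithmetic; the electricity price and the two-hour movie length are just assumed examples:

price_per_kwh = 0.30   # assumed price in EUR/kWh, adjust for your region
movie_hours = 2
for model, watts in (("4.22", 187), ("4.25", 158), ("4.26", 158)):
    kwh = watts * movie_hours / 1000
    print(f"v{model}: {kwh:.2f} kWh (~{kwh * price_per_kwh:.2f} EUR) per movie")
# v4.22 -> ~0.37 kWh, v4.25/4.26 -> ~0.32 kWh each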

I'm using it for everything, as recommended.

And btw., RIFE uses (machine learning) neural networks and its own elaborate algorithms.
It does not use Nvidia's Optical Flow hardware (those dedicated transistors on the GPU), nor RT Cores, TMUs or ROPs.
So whatever advancements the consumer RTX 5000 generation brings for those will not help with RIFE's demands.

@Blackfyre
So here is some general buying advice for everyone, plus a bit of a prediction of the future - and I'm 99 % sure I will be right about it:
https://www.tomshardware.com/pc-compone … 12-bit-bus
https://wccftech.com/nvidia-24-gb-gefor … s-spotted/
So given what big greedy fuckers the AMD & Nvidia execs have become since 2018's Turing generation (prices weren't nearly as bad from 2000 - 2018), given that Nvidia is now first and foremost a "machine learning company" ("AI" marketing), and given that the leaks from Twitter user "kopite7kimi" are correct about 90 % of the time, I'm quite certain the upcoming RTX 5080 won't be enough.
The measly ~10k shading units & Tensor Cores won't do it, given that the RTX 4090's 16384 shading units and 512 Tensor Cores can't handle it.
Nvidia's milking strategy will likely again be to release a "Super" lineup in 2026, so a future "RTX 5080 Super" with ~14k shading units and ~450 Tensor Cores might be able to run the current RIFE 4.15 to 4.26 models.

But given how the RIFE developers keep matching their models to the hardware and keep increasing the ML & inference demand, I'd wager that for future RIFE models, to run them at the same resolutions near 4K-UHD at 48 - 60 fps, the future top-dog GPU die in the RTX 5090/6090 will be needed again wink

And btw., Nvidia's engineers won't be able to work miracles with the upcoming consumer RTX 5000 generation. The measly lithography node jump from 5 nm to 4 nm will give at most 20 - 25 % more performance (it's already the same for their Blackwell ML lineup; that's why they glued two Blackwell GPU dies together and can now market 2x the performance).
At best, they will crank up the Tensor and RT core counts by 30 - 40 % this generation (because they prioritize machine learning and ray/path tracing performance over rasterization), and that's it.
So better to wait for the 2027 RTX 6000 lineup, because that will be a big overall jump again.


Maybe, maybe, maybe the RIFE developers will see it the same way as me, change their approach and tune their upcoming models so that at least the RTX 4090 remains able to handle ~3840x1600 at 48 fps, and the RTX 5090 at 60 fps (including 4K-UHD, i.e. 3840x2160).
I can't know that. Someone would have to ask them.
In other words: the overall performance/efficiency jump from e.g. the RTX 3090 to the RTX 4090 (xx102 to xx102 GPU die) - and thus the performance for RIFE models - was only so large because they switched from Samsung's 8 nm lithography node (which is roughly equal to TSMC's 12 nm node) to TSMC's 5 nm node; effectively 12 nm to 5 nm. This time it's from 5 nm to 4 nm.
So it's obvious to me that far fewer people will buy the consumer RTX 5000 series this generation.

@Blackfyre
The topic of the RTX 4090's performance has been commented on many times, including by me.
Here it is https://www.svp-team.com/forum/viewtopi … 248#p85248


Here is some further general information:
I'm pretty sure the developers of the RIFE machine learning models are also using an RTX 4090 (either for training or inference) and are strictly tuning their parameters, model size etc. to fit within the RTX 4090's maximum machine learning & inference performance when running their models at up to ~3840x1600 resolution (so basically all 21:9 movies).
I've been monitoring this since the RTX 3090 came out in 2020 and how it behaved with RIFE's models.
The developers have always made sure that Nvidia's top-dog GPU die (the xx102 used in the xx90 cards) is at least able to run their current models at 48 fps around ~3840x1600 resolution (as I linked above).
They very likely won't create a model that the current top-dog GPU die can't run at those settings; it would defeat their purpose and obviously be the wrong path, not catering to the market's needs (us, the consumers: "What's the point of creating a model so demanding that no one can run it?").

Equally (as linked and commented), the RTX 4090 hasn't been able to run above that resolution & framerate since the RIFE 4.15 model or so (I remember it was with that version that I could no longer run 3840x1600 at 60 fps).

@Xenocyde
@Blackfyre

I've (again) tested RIFE 17.v2, 17.v2 Lite, 18.v2, 19.v2 and 20.v2 with my usual blend of test scenes, in slow motion (x0.25),
looking for the usual things such as distortions, warping of patterns and objects, and artifacts, especially during camera pans etc.
My observations match yours.

1) For me, 18.v2 is by far the best model regarding all the things mentioned.
2) 17.v2 is the second best model, but with a noticeable increase in distortion and warping of patterns and objects, especially when panning.
3) Followed by 19.v2 in third place, with the same issues.
4) 17.v2 Lite in fourth place, with the same artifacts.
5) 20.v2 is the worst of them. Not sure what happened here during training. :-(

Especially when watching in slow motion at x0.25, artifacts, distortions etc. can be observed with all models, but it's surprising how good 18.v2 and 17.v2 Lite look.
For 1080p@60 fps I keep using 17.v2 Lite, and for everything up to
3840x1600 - 4K-UHD resolution @48 fps it's 18.v2.



@Asking a question out loud
Does anyone have an explanation why even an Nvidia RTX 4090 can at best do 48 fps up to 3840x1600 - 4K-UHD resolution?
Above that, at 3840x1600 resolution @60 fps and up, playback keeps stuttering no matter what. I've already removed other bottlenecks.

Observing the HWiNFO64 metrics for
- GPU utilization
- power draw etc.
- Task Manager
there is a strange discrepancy between all of those.

Up to 3840x1600 resolution @48 fps,
- GPU-utilization keeps around 40 - 60 %
- power draw hovers around 170 - 240 watts
- Task Manager: Tensor Cores, 3D, Video Encode & Decode and Copy also far from being fully utilized.

Then, getting close to 3840x1600 resolution @60 fps and above, a sudden disproportionate jump happens and stuttering keeps occurring.
The ~25 % increase in frame rate alone doesn't seem to warrant the overall jump (rough numbers below) ... is the GPU bottlenecked by the Tensor Cores? Not enough of them?
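My own back-of-the-envelope proxy for that jump (output pixels per second; the real cost also depends on model internals, VRAM bandwidth etc., so this is only a rough comparison, not a profiler measurement):

base = 3840 * 1600 * 48                  # the case that still runs smoothly
for label, width, height, fps in (("3840x1600 @60", 3840, 1600, 60),
                                  ("3840x2160 @60", 3840, 2160, 60)):
    print(f"{label}: ~{width * height * fps / base:.2f}x the work")
# 3840x1600 @60 -> ~1.25x, 3840x2160 @60 -> ~1.69x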

https://www.svp-team.com/forum/viewtopi … 921#p84921
@Chainik
@flowreen91
Thanks again to you both for the explanations and for providing solutions. I've read both comments and tried the mentioned solutions, also with the newly uploaded file.
That one fix with replacing a line and swapping a file still didn't work out either.

1) For now I'll keep using the TensorRT 9.2 library, "v14.test4: latest TensorRT and ONNX Runtime libraries" from March 27, for the best performance/efficiency.
2) At least for now, I didn't notice much difference between NVOF and SVP motion vectors. It all looks fine, though I'll keep NVOF for best performance.

If it works, it works smile

@flowreen91
@Chainik
Thanks for the tips.

4) Switching to TensorRT 15.x works when swapping only "vstrt.dll and the folder vsmlrt-cuda" (a rough sketch of the swap is below).
I noticed a bump in the graphics card's total power consumption (undervolted RTX 4090) of about 5 - 10 % (it differs with resolution, framerate, upscaling etc.).
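For anyone repeating this, a minimal sketch of what the swap in point 4 boils down to. The paths are placeholders (the "extracted" folder is wherever the downloaded vsmlrt 7z was unpacked, and the exact archive layout may differ); only the two items mentioned above are touched, the originals are kept as backups, and it needs admin rights since the SVP folder lives under Program Files:

import shutil
from pathlib import Path

rife_dir = Path(r"C:\Program Files (x86)\SVP 4\rife")
extracted = Path(r"C:\Downloads\vsmlrt-windows-x64")   # placeholder path

for name in ("vstrt.dll", "vsmlrt-cuda"):
    src, dst = extracted / name, rife_dir / name
    if dst.exists():
        backup = dst.with_name(name + ".bak")
        if not backup.exists():
            dst.rename(backup)             # keep the original SVP files around
        elif dst.is_dir():
            shutil.rmtree(dst)             # already backed up on a previous run
        else:
            dst.unlink()
    if src.is_dir():
        shutil.copytree(src, dst)
    else:
        shutil.copy2(src, dst)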

5) If I also swap in the newer "vsmlrt.py" from the TensorRT 15 package, nothing works anymore; that means neither any machine-learning-based RIFE model nor SVP's own interpolation. See the attached image "error swapping vsmlrt_py".
Is there any way to fix this from my side or from your side as the SVP developers, or should I simply not swap that file?

6)
@flowreen91
I made the line change ("True"); nothing works.
See image attached.

Thanks in advance!

@Chainik
Maybe it's me being a layman, but could you please respond in more detail?
I asked multiple questions, also in comment 2.
If it's "no time" or "don't want to", then please say so, but completely ignoring it is not polite.

3)

you don't need anything to use 4.16+ models, just put it into the models folder

So this

vsmlrt.py
Added support for RIFE v4.17 models.

can be completely ignored and has no advantages/disadvantages (problems, issues, performance, quality etc.) if using the older TensorRT 8.5.1 provided with SVP in
combination with newer RIFE models such as 4.16 or 4.17?

@Chainik

With the newest update, please write down here, in terms a layman can understand, what's the best option to use - i.e. for achieving the highest image quality, the fewest artifacts and the best smoothness - when using RIFE 4.17 (or older versions) with the newest TensorRT (or not), regarding
- SVP motion vectors
- NVOF motion vectors
- image comparison

I use an RTX 4090 graphics card. Is there a difference in performance that users with slower or faster graphics cards need to care about?

I previously used "image comparison 12 %" because someone mentioned it here.

Someone already wrote something about it, but I'd like to hear your detailed opinion.

Thanks in advance.

@Chainik

Hello,

1)
is the older TensorRT updating method mentioned here https://www.svp-team.com/forum/viewtopi … 674#p83674
- swapping only "vstrt.dll and the folder vsmlrt-cuda" -
still correct, and does it apply to the "v15: latest TensorRT library" from June 2024 together with the newest "RIFE 4.17"?

Do no other files from the downloaded "vsmlrt-windows-x64-cuda.v15.7z" need to be swapped?
E.g. there is a new folder "vsov" (100 MB) which is currently not present in "C:\Program Files (x86)\SVP 4\rife".

Using "Rife 4.15 and 4.16" I previously updated this way to the "v14.test4: latest TensorRT and ONNX Runtime libraries" from march 27.
Despite mentioning your performance regression

enjoy" what exactly? 10-15% performance drop

https://www.svp-team.com/forum/viewtopi … 612#p84612

I noticed a drop of 5 % in GPU utilization and 7 - 8 % in GPU power when comparing the same video files for 5 minutes (and I had to redo the inference engine building for every resolution again).
So I take it that was the correct way to do it?



2)
Does "vsmlrt.py" in the "svp/rife" folder, this times has to be swapped too?
It says "vsmlrt.py: Added support for RIFE v4.17 models.", the previous TensorRT version does not mention that.

You stated something about this before, but I don't know if that applies this time too.

only for TRT>=9
with TRT8 updating vsmlrt.py most likely does nothing
[...]

Good day,

Black Friday is here, what a coincidence: out of the blue the program activation gets reset, asking me to register again.
Coincidences everywhere; 2 - 3 times a year.

So what did I buy?
1) A licence key giving me perpetual (1 computer) access to the program (which I can use offline), or
2) A "you'll own nothing" perpetual-licence subscription (the kind WEF's Klaus Schwab likes), where I own nothing (no program) and constantly have to prove via internet access that I bought this perpetual licence, in order to be granted access to your program account.

Do I get an answer from the developers?
It looks like it is 2). And 2) means that once you shut your online servers down, or decide for whatever reason to abandon this project, I lose access to it.

I'm not giving you any more money in the future if it is 2).
:-/

Greetings

Thanks for the replies.
I've got another question regarding an issue:
8) When watching a film, or an animated (cartoon, anime) or 3D-animated film, there are sometimes/often artifacts around moving objects; the faster an object moves, the more artifacts there are.
Imagine, for example, a zebra running in front of a green forest.
For me it's most visible when something moves horizontally, rarely when it moves vertically.
Changing the "mask artifact" option between low/medium/high doesn't help. I've set it to low; high makes it worse.
Changing the overall preset to "higher quality" doesn't help either.
Thanks!

7) With an Nvidia graphics card it's possible to add a per-program profile in which various things can be set. Is this useful or not needed? What about G-Sync (adaptive sync)? Does it work with SVP 4? I know that it doesn't work and should be deactivated for MPC-HC or madVR.

6) Despite the monitor being set to 144 Hz, SVP 4 sets the video to "142.657" Hz, and with the monitor set to 60 Hz, SVP sets the video to "59.940" Hz. Is this still correct?
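For context on the 59.940 number, that's just the standard NTSC-legacy arithmetic; whether the 142.657 value is the measured refresh of the 144 Hz mode or something else, the devs would have to confirm:

# Many "60 Hz" display modes actually run at 60 * 1000/1001 Hz (NTSC legacy),
# which is presumably what the reported "59.940" reflects.
print(60 * 1000 / 1001)   # 59.94005994...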

Thanks in advance!

5) Sometimes a message is shown by SVP in MPC-HC: "SVP: Playback at 142.657 Automatic Black bars:276-276, 0-2". Is this due to too heavy a load or to frames being dropped?

attachment pictures:
stuttermessage.jpg

4) Disabling madVR. Is this how the developers intended SVP 4 to work in conjunction with MPC-HC? The MPC-HC settings look like this:
- Playback -> Output -> Video Renderer: Enhanced Video Renderer (custom presenter). "Direct3D fullscreen mode" is deactivated.
- madVR is no longer active and no longer shown as a tray icon. The filters during playback now show the "Enhanced Video Renderer (custom presenter)".

attachment pictures:
svp4only.jpg

3) In the SVP 4 Manager, GPU acceleration is activated. As expected, using setup 1) (SVP 4 + madVR) puts an immense load on the CPU. The slider in the SVP 4 Manager is set to the middle position; watching 1080p content puts CPU utilization at ~40 - 50 %, watching UHD media (2160p) puts the CPU at 70 - 90 % utilization. Are there more options than the SVP 4 Manager slider to tackle this?
3.1) MPC-HC shows that frames are being dropped, even when the CPU is at 50 % utilization.
What can be done against frames being dropped?
3.2) Now that the graphics card has to render not 24 or 30 frames but 60, the GPU load is very high. No matter what profile settings I use in madVR (chroma upscaling to Lanczos, Jinc, NGU ...), the GPU load is always around 90 %.
I found a solution to this in the madVR settings: scaling algorithms -> "your profile" -> image downscaling -> processing done by GPU video logic: set to "DXVA2".
Keeping the other madVR settings the same, now even with 2160p content the GPU utilization is much lower. Is this the correct way to tackle it?

Hello,
Thanks to the devs for the program.
I have a few questions I would like the devs or someone else to kindly answer.

I installed SVP 4 Pro for the first time on a Windows 10 PC. The video player used is MPC-HC (64-bit) version 1.7.15 with madVR v0.92.12. The G-Sync monitor used can display 144 Hz at native 8-bit color and works correctly in conjunction with the Nvidia graphics card. The CPU is an Intel i7 2600K, all cores running at 4.4 GHz.
During installation everything mandatory was installed (the filters for the MPC-HC x64 version too), though I aborted the bundled MPC-HC program installation (1.7.13) because, as said, a newer version is already installed. The newest LAV Filters, 32-bit and 64-bit, were already installed.


1) Can MPC-HC be used together with madVR and SVP 4 Pro? I read on the internet beforehand that it can't, but after installing everything it looks like it can. I didn't even need to change any settings in MPC-HC or madVR.
After I load a video, the SVP message (inside MPC-HC) indicates that everything is correctly installed and running, even with madVR active simultaneously. In the SVP 4 Manager everything is running correctly too. The black bars during video playback are also being filled with color.
All media playback now looks very smooth.

I followed the instructions here https://www.svp-team.com/wiki/SVP:MPC-HC
During playback, at every start, the message shown is always: "SVP: Playback at 142.657 Automatic" and "SVP: activated Automatic".
In the madVR settings everything works as it did before without SVP 4. All madVR profiles are loaded during playback.
In madVR -> rendering -> general settings, "enable windowed overlay", "enable automatic fullscreen exclusive mode" and "smooth motion" are all deactivated.
Pressing CTRL+J (in MPC-HC) shows the usual information.

attachment pictures:
playback3.jpg


2)
Using setup 1), the monitor being set to 144 Hz means SVP will interpolate content to 144 Hz, which puts an extreme amount of load on the CPU and GPU. Making a custom profile, let's call it "60 Hz", in the SVP 4 Manager and setting "fixed frame rate 60 fps" results in SVP now showing the messages "SVP: Playback at 59.940 60 Hz" and "SVP: activated 60 Hz". madVR still shows "display ~143.8Hz" and "composition rate ~143.998Hz". Are these correct settings?

attachment pictures:
60Hz.jpg