2,001

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

@Blackfyre
So here is some general buying advice for everyone, and a bit of a prediction of the future - I'm 99 % sure I'll be correct on this:
https://www.tomshardware.com/pc-compone … 12-bit-bus
https://wccftech.com/nvidia-24-gb-gefor … s-spotted/
So given what greedy fuckers the AMD & Nvidia execs have become since 2018's Turing generation (prices weren't nearly as bad from 2000 - 2018), that Nvidia is first and foremost a "machine learning company" (marketing: "AI") now, and that the leaks from Twitter user "kopite7kimi" are correct about 90 % of the time, I'm certain the upcoming RTX 5080 won't be enough.
The measly 10k shading units & Tensor Cores won't do it, given that the RTX 4090's 16384 shading units and 512 Tensor Cores can't handle it.
Nvidia's milking strategy will likely again be to release a "Super" card lineup in 2026, so a future "RTX 5080 Super" with ~14k shading units and ~450 Tensor Cores might be able to run the current RIFE 4.15 to 4.26 models.

But given how the RIFE developers keep scaling up their models and increasing their ML & inference demands, I'd wager that for future RIFE models, running near 4K-UHD at 48 - 60 fps will again need the top-dog GPU die in the RTX 5090/6090 wink

And btw., Nvidia's engineers won't be able to work miracles with the upcoming consumer RTX 5000 generation. The measly lithography jump from 5 nm to 4 nm will give at most 20 - 25 % more performance (same story for their Blackwell ML lineup; that's why they glued two Blackwell GPU dies together and can now market 2x performance).
At best they will crank up their Tensor and RT core counts by 30 - 40 % this generation (because they prioritize machine learning and ray/path tracing over rasterization performance), and that's it.
So better to wait for the 2027 RTX 6000 lineup, because that will be a big overall jump again.


Maybe, maybe, maybe RIFE's developers will see it the same way as me and size their upcoming models so that at least the RTX 4090 can continue to handle ~3840x1600 at 48 fps, and the RTX 5090 at 60 fps (including 4K-UHD, i.e. 3840x2160).
I can't know that. Someone would have to ask them.
In other words: the overall performance/efficiency jump from e.g. Nvidia's RTX 3090 -> RTX 4090 (xx102 to xx102 GPU die), and thus the performance for RIFE models, was only so large because they switched from Samsung's 8 nm lithography node (which is almost equal to TSMC's 12 nm node) to TSMC's 5 nm node; so effectively 12 nm to 5 nm. This time it's 5 nm to 4 nm.
So it's obvious to me that far fewer people will buy the consumer RTX 5000 series this generation.

2,002

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

And btw., RIFE uses (machine learning) neural networks and its elaborate algorithms.
It does not use Nvidia's Optical Flow hardware (dedicated transistors on the GPU), RT cores, TMUs, or ROPs.
So whatever advancements this consumer RTX 5000 generation brings for those will not help with RIFE's demands.

2,003

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

@Blackfyre
After my third round of testing, I disagree. The 4.26 model is noticeably the best overall for most scenes, for both animation/cartoon and real-scene movies.
As already linked https://www.svp-team.com/forum/viewtopi … 248#p85248
I was correct about the 4.18 model (many here came to the same conclusion after testing): it was overall the best for most scenes.

A few weeks after that I ran the same tests again, with 4.18 versus the 4.22 & 4.22 Lite models.
I hadn't commented on it, but I reached the same conclusion as the authors in their README file (4.18 was best back then for real scenes, and 4.22 & Lite for animation) https://github.com/hzwer/Practical-RIFE … /README.md
I thus used 4.22 Lite for animation/cartoons until a few weeks ago.

And two weeks ago I sat down again for more than an hour and tested 40+ different scenes (animation and real scene), always at 0.25 speed to catch every artifact, warping issue, and flaw in image smoothness and pacing, again using A-B repeat to loop through each scene over and over.
I again reached the same conclusion as the authors: "Currently, it is recommended to choose 4.26 by default for most scenes."

I think the difference in our findings simply comes down to how we test; that is where the fallacy of conclusion lies.
If you tested more, and especially more strictly (did you?), I'm sure you would come to the same conclusion as the authors and me.
I mean: how can the authors, who train the models, be wrong about their own conclusions and findings? Not possible. wink

It's not better in all scenes, but in most. I also found 4 - 5 scenes where the 4.18 model was still somewhat better (in artifacts, warping, image smoothness, or pacing), but in the rest 4.26 was better.
And yes, I also specifically tested 4.25 versus 4.26, with the same conclusion.


And the best part about the 4.25 & 4.26 models is how little electricity they need to achieve that. Tested with the latest v15.5 TensorRT library and a couple of movies at 3840x1600 and 48 fps:
4.17 Lite: ~145 W average GPU power consumption
4.22 Lite: ~150 W
4.18: ~176 W
4.22: ~187 W
4.25: ~158 W
4.26: ~158 W
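For anyone who wants to reproduce such power numbers, here is a minimal sketch of how one might log the average GPU draw during playback. It assumes an Nvidia GPU with `nvidia-smi` on the PATH; the sample count and interval are arbitrary choices, not anything the poster specified:

```python
import statistics
import subprocess
import time

def parse_power_readings(csv_output: str) -> list[float]:
    """Parse watt values from nvidia-smi CSV output (one value per line)."""
    return [float(line) for line in csv_output.strip().splitlines() if line.strip()]

def average_power(samples: list[str]) -> float:
    """Mean power draw over repeated single-GPU samples."""
    return statistics.mean(parse_power_readings("\n".join(samples)))

def sample_gpu_power(n_samples: int = 30, interval_s: float = 1.0) -> float:
    """Poll nvidia-smi once per interval and return the average draw in watts."""
    samples = []
    for _ in range(n_samples):
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        samples.append(out)
        time.sleep(interval_s)
    return average_power(samples)
```

Run `sample_gpu_power()` while a movie plays with a given model to get a comparable per-model average.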

I'm using it for everything, as recommended.

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Blackfyre wrote:

Matching the FPS and display Refresh Rate gives a smoother experience.

To avoid missing frames, set the player to "Fast Sync" in the NVCP.

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

@jdg4dfv7

Thanks for all the info, I'll read all of it in detail later, but I skimmed through it and wanted to note that the 4.26 model definitely showed heavy artifacts in certain scenes I tested that were not present in 4.18 and 4.25. I gave the No Time to Die example before, but I noticed it elsewhere too. Maybe I will test again and see how it goes.

Another thing I wanted to note is 3840x1600 is what I call 4K letterbox, but there are a lot of IMAX releases too for the past few years, which are actual full scale 4K (or switch to it in many scenes), this also applies to some TV shows that are full scale 4K and not limited to 1600 vertical pixels.

This is why I use 4.25 v2 for 3840x1600 or lower at x2, and I use 4.16 v2 for full 4K at x2

The 3090 is only capable of pushing x2, and the v2 models perform better than v1.

The 5080 news sucks, but I am hoping the 5090 is not ridiculously priced in Australia (but it looks like it will, and I might just stick with the 3090 until it dies out now if the performance difference to the 4090 is not substantial).

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Blackfyre wrote:

@jdg4dfv7

Thanks for all the info, I'll read all of it in detail later, but I skimmed through it and wanted to note that the 4.26 model definitely showed heavy artifacts in certain scenes I tested that were not present in 4.18 and 4.25. I gave the No Time to Die example before, but I noticed it elsewhere too. Maybe I will test again and see how it goes.

Another thing I wanted to note is 3840x1600 is what I call 4K letterbox, but there are a lot of IMAX releases too for the past few years, which are actual full scale 4K (or switch to it in many scenes), this also applies to some TV shows that are full scale 4K and not limited to 1600 vertical pixels.

This is why I use 4.25 v2 for 3840x1600 or lower at x2, and I use 4.16 v2 for full 4K at x2

3090 is only capable of pushing x2 and v2 models perform better than v1

The 5080 news sucks, but I am hoping the 5090 is not ridiculously priced in Australia (but it looks like it will, and I might just stick with the 3090 until it dies out now if the performance difference to the 4090 is not substantial).

If the rumours are true then the 5080 doesn't suck because it will be at least as powerful as the current most powerful consumer GPU on the planet and that would be good enough for me. But if the rumours are true then the 5090 will be so far ahead of it in performance that it will feel like the 5080 sucks.

2,007 (edited by RickyAstle98 07-10-2024 09:53:13)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

dawkinscm wrote:
Blackfyre wrote:

@jdg4dfv7

Thanks for all the info, I'll read all of it in detail later, but I skimmed through it and wanted to note that the 4.26 model definitely showed heavy artifacts in certain scenes I tested that were not present in 4.18 and 4.25. I gave the No Time to Die example before, but I noticed it elsewhere too. Maybe I will test again and see how it goes.

Another thing I wanted to note is 3840x1600 is what I call 4K letterbox, but there are a lot of IMAX releases too for the past few years, which are actual full scale 4K (or switch to it in many scenes), this also applies to some TV shows that are full scale 4K and not limited to 1600 vertical pixels.

This is why I use 4.25 v2 for 3840x1600 or lower at x2, and I use 4.16 v2 for full 4K at x2

3090 is only capable of pushing x2 and v2 models perform better than v1

The 5080 news sucks, but I am hoping the 5090 is not ridiculously priced in Australia (but it looks like it will, and I might just stick with the 3090 until it dies out now if the performance difference to the 4090 is not substantial).

If the rumours are true then the 5080 doesn't suck because it will be at least as powerful as the current most powerful consumer GPU on the planet and that would be good enough for me. But if the rumours are true then the 5090 will be so far ahead of it in performance that it will feel like the 5080 sucks.

The 5090 will have 2x as many CUDA cores as the 5080, with less power consumption on the new compute levels! The 5080's performance is between the 4080 and 4090 in rendering tasks, but games? Who knows?!

2,008 (edited by RickyAstle98 07-10-2024 09:51:48)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

dawkinscm wrote:
Blackfyre wrote:

@jdg4dfv7

Thanks for all the info, I'll read all of it in detail later, but I skimmed through it and wanted to note that the 4.26 model definitely showed heavy artifacts in certain scenes I tested that were not present in 4.18 and 4.25. I gave the No Time to Die example before, but I noticed it elsewhere too. Maybe I will test again and see how it goes.

Another thing I wanted to note is 3840x1600 is what I call 4K letterbox, but there are a lot of IMAX releases too for the past few years, which are actual full scale 4K (or switch to it in many scenes), this also applies to some TV shows that are full scale 4K and not limited to 1600 vertical pixels.

This is why I use 4.25 v2 for 3840x1600 or lower at x2, and I use 4.16 v2 for full 4K at x2

3090 is only capable of pushing x2 and v2 models perform better than v1

The 5080 news sucks, but I am hoping the 5090 is not ridiculously priced in Australia (but it looks like it will, and I might just stick with the 3090 until it dies out now if the performance difference to the 4090 is not substantial).

If the rumours are true then the 5080 doesn't suck because it will be at least as powerful as the current most powerful consumer GPU on the planet and that would be good enough for me. But if the rumours are true then the 5090 will be so far ahead of it in performance that it will feel like the 5080 sucks.

Also, once RIFE supports the new computation levels directly from the RT chain, that will increase inference performance from 84 to 100 % according to the NVIDIA guys!

2,009 (edited by Drakko01 08-10-2024 10:54:02)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

jdg4dfv7 wrote:

And the best part about the 4.25 & 4.26 models is how little electricity they need to achieve that. Tested with latest v15.5: latest TensorRT library, and a couple of movies at 3840x1600 resolution and 48 fps.

What do you think about the performance of v15.5 compared with the previous version (15.4)? Pros and cons?
In my tests with RIFE 4.18/4.25/4.26 I agree more with what dawkinscm said, beyond the fact that I don't use the same x factor or resolution. I personally don't see the point of chasing maximum resolution: having SVP downscale the 4K source, increase the frame rate, and upscale back to 4K is more than enough.

2,010

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

@Drakko01
v15.5? You meant to say 4.14 or 4.15?
What do you mean by performance?
As said, after my third elaborate round of testing, 4.26 is overall the best. There is no point in me using anything else, especially since both 4.25/4.26 consume so little electricity (my RTX 4090 is undervolted to 0.875 V).
If you don't know whom to believe, then do your own testing.

Notice what I wrote: I do elaborate testing, and I recommend doing the same. Also test on a big screen (I test on a 55-inch OLED TV from 1 meter away).
Testing with LCD or LCD local-dimming displays/TVs/monitors (misleadingly marketed as "miniLED") is not as accurate, as LCD display technology lacks pixel-perfect dimming.
Test lots of scenes across at least 10 real-scene or animation/cartoon movies.
I test specifically at 0.25 speed; only this way can I perceive artifacts that go unnoticed even at 0.5 speed, let alone normal speed (1.0).
I always A-B repeat every scene a couple of times.
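If you play through mpv, that slow-speed A-B loop workflow can be scripted rather than done by hand. A sketch using mpv's real `--speed` and `--ab-loop-a`/`--ab-loop-b` options; the file name and scene times are placeholders:

```python
import subprocess

def mpv_ab_loop_cmd(video: str, a_s: float, b_s: float,
                    speed: float = 0.25) -> list[str]:
    """Build an mpv command line that loops one scene at slow speed."""
    return [
        "mpv", video,
        f"--speed={speed}",     # slow playback makes artifacts perceptible
        f"--ab-loop-a={a_s}",   # loop start in seconds
        f"--ab-loop-b={b_s}",   # loop end in seconds
    ]

# e.g. loop the scene from 1:30 to 1:42 at quarter speed:
# subprocess.run(mpv_ab_loop_cmd("movie.mkv", 90, 102))
```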

Additional: Compared with v4.18, I noticed not only far fewer artifacts, warping, etc., but especially much better image smoothness and pacing.
If there are multiple elements in the foreground, middle ground, and background, v4.25, and even more so 4.26, display the elements more smoothly and with better pacing.
With older models, and even with v4.18, there was more stuttering: a background element (for example a forest) would blend wrongly with foreground elements (say, a car); both stuttered a lot, especially if an element had a complex structure (say, a fence or a pattern).
In some scenes I noticed the forest in the background stuttering with the v4.18 model, while with v4.25, and even more so v4.26, the forest was displayed smoothly.
That means the neural network in the older models wasn't able to distinguish elements as well.

And last: I'm not guessing, and I don't have to remember what I found a month ago, because I write down everything I notice.
I have a full, long text file where I've written down all of my findings, going back to v4.17.

If you don't want to believe any of us, then maybe you want to believe the developers themselves? big_smile
"Currently, it is recommended to choose 4.26 by default for most scenes."
If anyone, you can trust their knowledge and findings.
https://github.com/hzwer/Practical-RIFE … 7ecf72f635

2,011 (edited by jdg4dfv7 09-10-2024 03:25:17)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

@RickyAstle98

Also when RIFE will support new computation levels directly from RT chain, thats increase inference performance from 84 to 100% according to NVIDIA guys!

Was that a joke? big_smile
Or was something like that actually stated by the RIFE developers? They would have to make use of the new features.

The 5090 will have 2x as much CUDA cores than 5080, less power consumption on new compute levels! The 5080 performance is between 4080 and 4090 in rendering tasks, but games? Who knows?!

Correct.
Simply two BW-103 GPU dies glued together (as Apple has been doing with its M1 architecture since 2020, and Nvidia with Blackwell ML).
Of course (given the technical leaks), Nvidia's milking strategy will again be to castrate every GPU die in the lineup (by 10 - 15 % as usual), as there is no competition and AMD+Nvidia has been a Stackelberg duopoly for some years anyway. Both companies have been manipulating the market for years, colluding on prices and products. There is no real competition; it's a public farce.
The currently sold RTX 4080 (Super) is in fact a relabeled RTX 4070 (AD104 die). The initially introduced "RTX 4080 12GB", which was then cancelled, was in fact an RTX 4060 (AD106 die).
There is no real RTX 4080 being sold; the same goes for the RTX 4070 (in fact an RTX 4060) and the RTX 4060 (in fact an RTX 4050).
Renaming SKUs, giving them lower-tier GPU dies.
Looking at the leaks, it will be the same for the consumer RTX 5000 again.



@dawkinscm

If the rumours are true then the 5080 doesn't suck because it will be at least as powerful as the current most powerful consumer GPU on the planet

As mentioned, that will very likely (if the leaks are true) not happen. Also: don't generalize the metrics.
At best, an RTX 5080 (BW-103 die, so in fact a relabeled RTX 5070) will have 15 - 20 % less rasterization performance than an RTX 4090. In reality it should be more like 25 % less. Similar story for the Tensor Cores (relevant for RIFE) and other things.

1) The leaked BW-103 GPU die (RTX 5080) has a whopping ~60 % fewer shading units, ROPs, RT cores, TMUs, and Tensor Cores than the GPU in the current RTX 4090, but is somehow supposed to magically achieve the same rasterization, machine learning, ray tracing, and Tensor Core (RIFE) performance? That is not happening. The last time Nvidia made a GPU-architecture jump of > 50 % was in 2007 with the 8 series. https://en.wikipedia.org/wiki/GeForce_8_series
As said: the Ampere RTX 3090 to 4090 jump was only so big because of the 12 nm -> 5 nm lithography jump. Two full nodes.
Nvidia's performance jumps due to GPU architecture alone have been only 10 - 20 % for the last 5 generations. The rest was lithography, or simply more transistors.
2) The leak specifies the full BW-103 die for the RTX 5080 (84 SMs, full die). Nvidia hasn't sold a full GPU die to consumers for the last 4 - 5 generations or so. It will be cut down by at least 10 % as usual, just like every other model.
So it won't be 84 SMs, but more like ~76 again, which means far fewer units than the current RTX 4090.
3) Moore's Law has been effectively dead since TSMC's 28 nm lithography node in 2012 (whatever other people say is wrong, or lying marketing).
4) For many generations (since 2012, especially since CUDA), Nvidia's pro/Quadro/server and consumer lineups have shared the same GPU architecture basis. They only swap out or leave out certain components, such as display outputs and video accelerators.
The TMUs, ROPs, Tensor Cores, and RT cores are very much alike.
AMD's leadership finally realized that this unified approach is the smarter way and will soon start doing the same
https://overclock3d.net/news/gpu-displa … nd-gamers/
5) GH100 to B100 has only ~30 % more transistors overall (Hopper to Blackwell architecture). Together with the points above, this is a leak in itself, already giving away how much better Nvidia's engineers can make the RTX 5000 lineup, comparing equal die to equal die:
20 - 30 % in all aspects. The rest will be ML ("AI") marketing gimmicks such as "fake frames" (Frame Generation up to 4x) or DLSS 4.

So for anyone pondering buying a graphics card for RIFE at 3840x1600 or 4K-UHD (3840x2160): either continue with the RTX 4090 or grab the upcoming RTX 5090. Sorry to "burst some bubbles".
It's all been an illegal, colluding Stackelberg duopoly since at least 2006 (with Nvidia as the Stackelberg leader)
https://www.tomshardware.com/news/nvidi … ,6311.html
https://hexus.net/business/news/corpora … itigation/
AMD has not been selling "cheaper products for less money", i.e. offering better price/performance, for at least 5 graphics card generations (since 2014).
They simply price their models 10 - 20 % below Nvidia's (e.g. the current RX 7900 XTX versus the RTX 4080), while being 50 - 200 % worse in nearly all other aspects (efficiency, features such as DLSS 3 vs. FSR 2 image reconstruction, ray tracing and ML performance, streaming, etc.).
Hardly anyone uses an AMD RDNA 1/2/3 graphics card for RIFE via ncnn/Vulkan, right? The performance sucks.

The same goes for the Intel+AMD duopoly, which has controlled the consumer and server markets with their x86/x64 patents for decades, eliminating any real competition.
All the company execs want is to maximize product margins to the moon and please financial shareholders.
The talk about "fair and legal competition" based on capitalist market doctrine and law has been a lie and a public farce for at least 10 - 20 years.
It's all an elaborate show, but 99 % of the media continues to report as if there were real, fair, and legal competition going on ...

Sorry for the slightly off-topic part. I think it has to be said, especially since hardly anyone mentions it amid the current run-of-the-mill media nonsense. wink

2,012 (edited by Drakko01 09-10-2024 04:56:39)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

jdg4dfv7 wrote:

@Drakko01
v15.5? You meant to say 4.14 or 4.15?
What do you mean by with performance?
As said, 4.26 aftter my third elaborate testing, overall the best. There is no point for me using anything else, especially since both 4.25/4.26 consume so little electricity (my rtx 4090 GPU is undervolted 0.875V).
If you don't know who you want to believe, than do your own testing.

Sorry if I was not clear: since I referenced your post mentioning v15.5 compared to v15.4, I thought you would understand that I was referring to the TensorRT libraries.

jdg4dfv7 wrote:

Notice what I wrote: I do elaborate testing; I recommend doing the same. Also test on a big screen (I test on a 55 inch Oled TV from 1 meter distance).
Testing with LCD-displays or LCD-dimming zone displays/TVs/monitors (aka misleadingly marketed "miniLED"), is not as accurate, as LCD-display technology lacks pixel-perfect dimming.
Lot of scenes across at least 10 real scene movies or animation/cartoon.

I've been using SVP since its Indiegogo days, and I have countless hours of testing with high-quality files over the years, more and more since the advent of RIFE.

I agree that LCD local-dimming displays/TVs are not as accurate as OLED ones. I work with what I have at the moment: a 65-inch QN85B, a 65-inch NU8000, and an old 75-inch LG.

jdg4dfv7 wrote:

I test specifically at 0.25 speed, only this way I can perceive artifacts etc. which are not perceived even under 0.5 speed, less under normal speed (1.0)
I always do A-B repeat every scene couple of times.

I'm not interested in the 0.25-speed test, and I don't think anyone here is. What interests me is minimizing or eliminating artifacts at normal speed and having the best and smoothest viewing experience. I think that is the goal of this forum.

jdg4dfv7 wrote:

Additional: With v4.18 not only I noticed way less artifacts, warping etc. but especially way better image smoothness and pacing.
If there are multiple elements in the foreground, middle-ground, background, v41.8 and even better 4.26, are displaying the elements more smoothly and with better pacing.
With older models, or even with v4.18, there was more stuttering occuring; like the background element (for example a forest) was blending in more wrongly with elements of the foregound (lets say a car); both stuttered a lot, especially if the element was complex in structure (let's say a fence, a pattern).
With some scenes and the v4.18 model, I noticed the forest scene in the background was stuttering, while with the v4.25 model, even more the v4.26 model, the forest was smoothly displayed.
That means the neural network wasn't able to distinguish elements that well with older models.

I think the same, but only regarding the v4.25 model; with 4.26 I still find errors/artifacts that are not present in 4.25, which is what I was referring to when I mentioned agreeing with dawkinscm. Beyond that, I also agree that v4.26 and v4.18 are excellent models.

A clear example of this is in Spider-Man: No Way Home (2021) 2160p UHD BluRay at 31:50, when the nanotech runs through Doc Ock's tentacles; if you have it, please test it.



jdg4dfv7 wrote:

And last: I'm not guessing or have to rememver what I found 1 month ago either, because I write down everything I notice.
I have a full, long text file were I wrote down all of my findings, back to v4.17.

I don't have to guess or remember; I simply know what my test scenes look like with each model, and I have come from RIFE 4.4/4.6 through all the models up to the current ones.

jdg4dfv7 wrote:

If you don't want to believe any of us, than maybe you want to believe the developers themselves? big_smile
"Currently, it is recommended to choose 4.26 by default for most scenes."

The goals of the RIFE developers were not always aligned with what we are looking for here.
Many times, posts from members like dawkinscm, Blackfyre, dlr5668, and flowreen91, to name a few, helped me recheck settings or combinations that brought me to the satisfactory point I am at now.
Yesterday, thanks to your post, I updated the TensorRT libraries, which improved the viewing experience with the v4.25 model.

Thank you for the time you take to both test and share with the forum.

2,013 (edited by reynbow 09-10-2024 08:24:14)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Sorry for the dumb question, but I'm struggling to find a link to download the latest models in these 80+ pages of comments.
Can anyone guide me, please?

I see this: https://github.com/hzwer/Practical-RIFE
but I'm not sure how I'm supposed to use those files with SVP

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

reynbow wrote:

sorry for the dumb question but I'm struggling to find a link to download the latest models in these 80+ pages of comments
can anyone guide me please?

I see this: https://github.com/hzwer/Practical-RIFE
but I'm not how I'm supposed to use those files with SVP

https://github.com/AmusementClub/vs-mlr … nal-models

2,015 (edited by dlr5668 09-10-2024 09:49:23)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

reynbow wrote:

sorry for the dumb question but I'm struggling to find a link to download the latest models in these 80+ pages of comments
can anyone guide me please?

I see this: https://github.com/hzwer/Practical-RIFE
but I'm not how I'm supposed to use those files with SVP

Check this https://www.svp-team.com/wiki/RIFE_AI_interpolation -> Adding TensorRT models

2,016

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

Over the last few days I've often asked myself whether I can actually see the difference between 48p and 60p.

I came across the following page, which illustrates the difference very nicely. Perhaps someone else can use it: https://frames-per-second.appspot.com/
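The gap is smaller than the raw numbers suggest; the per-frame display times work out to roughly a 4 ms difference:

```python
def frame_time_ms(fps: float) -> float:
    """Display time per frame in milliseconds."""
    return 1000.0 / fps

# 48p shows each frame for ~20.8 ms, 60p for ~16.7 ms:
# a difference of roughly 4 ms per frame.
```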

2,017 (edited by jdg4dfv7 09-10-2024 10:58:45)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

@reynbow
It's always the same page https://github.com/AmusementClub/vs-mlr … nal-models

@Drakko01

I'm not interested in the 0.25 speed test, I don't think anyone here is

The opinions of this or that person, or what someone is interested in, is not the topic here (or the one I started). I recommended you do your own testing if you don't know whom to believe or are unsure about your own findings, and slow(er) playback, preferably 0.1x or 0.25x speed, is the better way to visually spot all or most artifacts.

Maybe I haven't made myself clear, so again:
1) Lower playback speed (e.g. 0.1x, 0.25x) does not make the model produce more or fewer artifacts, warping, blocking, or pacing issues (in quality or quantity). It's all still the same, regardless of playback speed.
2) Our eyes/brain (our perception) simply can't spot the differences well or fast enough, even with RIFE models only interpolating from 24 fps to 60 fps.
We humans may distinguish higher differences in light levels (or, as the mainstream puts it, "more fps"), but that doesn't mean we perceive everything equally. It is simply too fast, i.e. too short a time span, to notice.
Can someone spot a tiny 0.5 % patch of artifacts when the artifact itself lasts only 50 milliseconds? Obviously not. We are not some future cyborgs/androids with "10000 fps artificial eyes and brain computing" big_smile
3) Thus, testing done at normal playback speed is flawed and leads to a fallacy of conclusion, i.e. wrong results/findings.
4) Even v4.25/4.26 are full of artifacts, but most are only perceptible to us humans at slow playback speed.
Everyone who has not done visual testing at slow playback speed will be surprised how many more artifacts, and how much worse pacing, they start to perceive.
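The 50 ms point is just arithmetic. A quick sketch of why slow playback helps, assuming a 60 Hz display for the example numbers:

```python
def frames_on_screen(artifact_ms: float, display_fps: float,
                     playback_speed: float) -> float:
    """Number of display frames a source artifact occupies once playback
    is slowed by `playback_speed` (1.0 = normal speed)."""
    wall_clock_ms = artifact_ms / playback_speed
    return wall_clock_ms / 1000.0 * display_fps

# A 50 ms artifact at normal speed occupies ~3 frames on a 60 Hz display.
# At 0.25x speed it stays on screen for 200 ms, i.e. ~12 frames:
# far easier for a human to catch.
```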


The goal of the Rife developers was not always aligned with what we are looking for here.
Many times the posts from members like dawkinscm,Blackfyre,dlr5668,flowreen91 ...

Fair enough. This subforum allows a broad spectrum of topics around the RIFE models.
I don't know what other things/settings you mean. The topic I started here is the RIFE models, so regarding that:
when it comes to visually testing for artifacts (as we do here), my own testing, as time-consuming as it already is with all the slow playback, the dozens of A-B loop repeats, and the writing down and taking of screenshots, is child's play and still deeply flawed.
If anyone here isn't even doing the same, their testing is even more flawed.

Scientifically and accurately, this is how we should do our testing, and this or similar methods are also how the RIFE developers do it:
https://netflixtechblog.com/toward-a-pr … bfa9efbd46
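For anyone who wants an objective metric alongside eyeballing, the Netflix work linked above is about VMAF. One common way to compute it is ffmpeg's `libvmaf` filter; this is a sketch, assuming an ffmpeg build compiled with libvmaf, and for interpolation testing you would also need a ground-truth high-fps clip to compare against (the file names are placeholders):

```python
import subprocess

def ffmpeg_vmaf_cmd(distorted: str, reference: str) -> list[str]:
    """ffmpeg invocation that scores a processed clip against a reference
    using the libvmaf filter (needs an ffmpeg build with libvmaf enabled)."""
    return [
        "ffmpeg",
        "-i", distorted,      # e.g. the interpolated/re-encoded clip
        "-i", reference,      # the ground-truth clip
        "-lavfi", "libvmaf",  # Netflix's perceptual quality metric
        "-f", "null", "-",    # discard frames; the score is printed to the log
    ]

# e.g. subprocess.run(ffmpeg_vmaf_cmd("rife_output.mkv", "reference.mkv"))
```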

2,018 (edited by RickyAstle98 09-10-2024 11:21:49)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

jdg4dfv7 wrote:

@RickyAstle98

Also when RIFE will support new computation levels directly from RT chain, thats increase inference performance from 84 to 100% according to NVIDIA guys!

That was a joke? big_smile
Or was something like that stated by the RIFE developers? They would have to make use of the new features.

The 5090 will have 2x as much CUDA cores than 5080, less power consumption on new compute levels! The 5080 performance is between 4080 and 4090 in rendering tasks, but games? Who knows?!

Correct.
Simply two BW-103 GPU-dies glued together (as Apple is doing with their M1 architecture since 2020 and Nvidia with Blackwell ML).
Of course (given technical leaks) Nvidias milking-strategy will be again, to castrate all GPU-dies of each model of their lineup (by 10 - 15 % as usually), as there is no competition and AMD+Nvidia is an Stackelberg-duopoly anyway for some years. Both companies are both manipulating the market for years, colluding on prices and products etc. There is no real competition; it's a public farce.
The currenlty sold RTX 4080 (super) are in fact relabeled RTX 4070 (AD104 GPU-die). The firstly introduced "RTX 4080 12GB" - which was then canceled - was in fact a RTX 4060 (AD-106 GPU-die).
There is no real RTX 4080 sold; same for rtx 4070 (is a rtx 4060 in fact) and the rtx 4060 (is a rtx 4050 in fact).
Renaming SKUs, giving them lower tier GPU-dies.
Looking at the leaks, it will be the same for consumer rtx 5000 again.





@dawkinscm

If the rumours are true then the 5080 doesn't suck because it will be at least as powerful as the current most powerful consumer GPU on the planet

As mentioned, that will verly likely (if the leaks are true) not happening. Also: don't generalize the metrics.
At best a rtx 5080 (BW-103 GPU- die, so in fact a relabeled RTX 5070), will have 15 - 20 % less rasterization performance than a rtx 4090. In reality it should be more of +25  less%. Similar story with Tensor Cores (for RIFE) and other things.

1) The leaked BW-103 GPU die (RTX 5080) has roughly a third fewer shading units, ROPs, RT cores, TMUs and Tensor Cores than the GPU in the current RTX 4090, but is somehow supposed to magically achieve the same rasterization, machine learning, ray tracing and Tensor Core (RIFE) performance? That would require each unit to be well over 50 % faster. Not happening. The last time Nvidia delivered an architecture jump of > 50 % was in 2007 with the 8 series: https://en.wikipedia.org/wiki/GeForce_8_series
As said: the Ampere RTX 3090 to RTX 4090 jump is only so big because of the lithography jump from Samsung's 8 nm to TSMC's 5 nm class node, roughly two full nodes.
Performance jumps from Nvidia's GPU architecture alone have been only 10 - 20 % for the last 5 generations. The rest came from lithography or simply more transistors.
2) The leaks specify only the full BW-103 GPU die for the RTX 5080 (84 of 84 SMs). Nvidia hasn't sold a full GPU die to consumers for the last 4 - 5 generations or so. It will be cut down by at least 10 % as usual, just like every other model.
So it won't be 84 SMs, but more like ~76 again, i.e. roughly 40 % fewer SMs than the RTX 4090's 128.
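The cut-down arithmetic above can be sketched in a few lines. A minimal illustration, assuming the leaked 84-SM full die, Nvidia's usual ~10 % cut, and Ada-style 128 cores per SM (all of these are rumor-mill assumptions, not confirmed specs):

```python
# Sketch of the cut-down estimate above. The 84-SM figure is a
# leak and the 10 % cut a historical pattern, not confirmed specs.
CUDA_CORES_PER_SM = 128      # holds for Ada; assumed for Blackwell

full_bw103_sm = 84           # rumored full BW-103 die
typical_cut = 0.10           # Nvidia usually disables ~10 % of SMs

enabled_sm = round(full_bw103_sm * (1 - typical_cut))
enabled_cores = enabled_sm * CUDA_CORES_PER_SM

print(enabled_sm, enabled_cores)  # ~76 SMs, ~9728 shading units
```

Compare those ~9728 shading units against the RTX 4090's 16384 and the gap discussed above falls out directly.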
3) Moore's Law has effectively been dead since TSMC's 28 nm node in 2012 (whatever other people say is wrong, or lying marketing).
4) For many generations (since about 2012, especially since CUDA), Nvidia's Pro/Quadro/server and consumer lineups have shared the same GPU architecture basis. They only swap out or leave out certain components such as display outputs, video accelerators, etc.
The TMUs, ROPs, Tensor Cores and RT cores are very much alike.
AMD's leadership finally realized that this unified approach is the smarter way and will soon start doing the same:
https://overclock3d.net/news/gpu-displa … nd-gamers/
5) GH100 to B100 has only ~30 % more transistors overall (Hopper to Blackwell architecture). Together with the points above, this is a leak in itself, already giving away how much better Nvidia's engineers can make the RTX 5000 lineup, comparing equal GPU die to GPU die:
20 - 30 % in all aspects. The rest will be ML ("AI") marketing gimmicks such as "fake frames" (4x Frame Generation) or DLSS 4.

So anyone pondering buying a graphics card for RIFE at 3840x1600 or 4K-UHD (3840x2160): either stick with the RTX 4090 or grab the upcoming RTX 5090. Sorry to "burst some bubbles".
It's all been an illegal, colluding Stackelberg duopoly since at least 2006 (with Nvidia as the Stackelberg leader):
https://www.tomshardware.com/news/nvidi … ,6311.html
https://hexus.net/business/news/corpora … itigation/
AMD has not been selling "cheaper products for less money", i.e. offering better price/performance, for at least 5 graphics card generations (since 2014).
They simply price their cards 10 - 20 % below Nvidia's counterparts (e.g. the current RX 7900 XTX versus the RTX 4080), while being 50 - 200 % worse in nearly all other aspects (efficiency; features such as image reconstruction, DLSS 3 vs. FSR 2; ray tracing and ML performance; streaming; etc.).
Hardly anyone uses an AMD RDNA 1/2/3 graphics card for RIFE via ncnn/Vulkan, right? The performance sucks.

The same goes for the Intel+AMD duopoly, which has controlled the consumer and server markets with their x86/x64 patents for decades, eliminating any real competition.
Company execs only want to maximize product margins to the moon and please financial shareholders.
The talk about "fair and legal competition" based on capitalist market doctrine and laws has been a lie and a public farce for at least 10 - 20 years.
It's all an elaborate show, but 99 % of the media continues to report as if there were some real, fair and legal competition going on ...

Sorry for the somewhat off-topic part. I think it had to be said, especially since hardly anyone mentions it amid the current run-of-the-mill media brainwashing. wink

The inference performance increase was claimed by NVIDIA itself; the new features are just an assumption!

2,019 (edited by RickyAstle98 09-10-2024 11:39:19)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

jdg4dfv7 wrote:

@reynbow
It's always the same page https://github.com/AmusementClub/vs-mlr … nal-models

@Drakko01

I'm not interested in the 0.25 speed test, I don't think anyone here is

Whose opinion someone is interested in is not the topic here (or the one I started). I recommended you do your own testing if you don't know whom to believe or are unsure about your own findings, and this is the (better) way to visually spot all/most artifacts: at slow(er) playback speed, preferably x0.1 or x0.25.

Maybe I haven't made myself clear, so again:
1) Lower playback speed (e.g. x0.1, x0.25) does not make the model produce more or fewer artifacts, warping, blocking, pacing issues, etc., in quality or quantity. It's all still the same, no matter the playback speed.
2) Our eyes/brain (our perception) simply can't spot/perceive the differences well or fast enough; yes, even with RIFE models only interpolating from 24 fps to 60 fps.
We humans may distinguish larger differences in light levels (or, as the mainstream says, "more fps"), but that doesn't mean we perceive everything equally. Some things are simply too fast, i.e. too short in time span, to notice.
Can someone spot a tiny artifact covering 0.5 % of the frame when the artifact itself lasts only 50 milliseconds? Obviously not. We are not future cyborgs/androids with "10,000 fps artificial eyes and brain computing" big_smile
3) Concluding from that: testing done at normal playback speed is flawed and leads to wrong results/findings.
4) Even v4.25/26 are both full of artifacts etc., but they are only/mostly perceptible to us humans at slow playback speed.
Everyone who has not done visual testing at slow playback speed will be surprised how many more artifacts and how much worse pacing they start to perceive.
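The timing argument in point 2) can be put into rough numbers. A small illustration (the 60 fps target and the playback speeds are just example values):

```python
# How long one frame (and any artifact baked into it) stays on
# screen, as a function of target fps and playback speed.
def visible_ms(fps: float, playback_speed: float) -> float:
    """Wall-clock display time of a single frame, in milliseconds."""
    return 1000.0 / (fps * playback_speed)

for speed in (1.0, 0.25, 0.1):
    print(f"x{speed}: {visible_ms(60, speed):.1f} ms per frame")
# At x1.0 a 60 fps frame lasts ~16.7 ms; at x0.25 the same frame
# stays visible ~66.7 ms, four times longer, which is why brief
# artifacts suddenly become easy to spot.
```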


The goal of the RIFE developers was not always aligned with what we are looking for here.
Many times the posts from members like dawkinscm, Blackfyre, dlr5668, flowreen91 ...

Fair enough. This subforum allows a broad spectrum of topics around the RIFE models.
I don't know what other things/settings you mean. The topic I started is the RIFE models, so regarding that:
When it comes to visually testing for artifacts (as we do here), my own testing, as time-consuming as it already is with all the slow playback, dozens of A-B loop repeats, note-taking and screenshots, is child's play and deeply flawed.
If anyone here is not even doing that much, then their testing is even more flawed.

To do it scientifically and accurately, this is how we should test; this (or a similar) method is also how the RIFE developers do it:
https://netflixtechblog.com/toward-a-pr … bfa9efbd46

4) 4.26 has significant artifacts for me even at normal playback speed, which doesn't happen with older models; sometimes even parts of big objects interpolate separately from the full object, which still doesn't happen with older models in my testing, no matter what playback speed is set. I don't say 4.26 will behave the same for others; at least that's how 4.26 works for me, period!

2,020

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

TUSJDFJSERTHS wrote:
reynbow wrote:

sorry for the dumb question but I'm struggling to find a link to download the latest models in these 80+ pages of comments
can anyone guide me please?

I see this: https://github.com/hzwer/Practical-RIFE
but I'm not sure how I'm supposed to use those files with SVP

https://github.com/AmusementClub/vs-mlr … nal-models

Thanks for this.
Should I be using v2 or not?

2,021 (edited by RickyAstle98 09-10-2024 11:38:15)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

reynbow wrote:
TUSJDFJSERTHS wrote:
reynbow wrote:

sorry for the dumb question but I'm struggling to find a link to download the latest models in these 80+ pages of comments
can anyone guide me please?

I see this: https://github.com/hzwer/Practical-RIFE
but I'm not sure how I'm supposed to use those files with SVP

https://github.com/AmusementClub/vs-mlr … nal-models

Thanks for this.
Should I be using v2 or not?

The v2 models give me a 16% transcoding performance increase and 14% better realtime performance! (on my 10-month-old 4070)
The answer: why not?

2,022

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

RickyAstle98 wrote:
reynbow wrote:

Thanks for this.
Should I be using v2 or not?

The v2 models give me a 16% transcoding performance increase and 14% better realtime performance! (on my 10-month-old 4070)
The answer: why not?

Dang. Well. You've sold me.

2,023 (edited by RickyAstle98 09-10-2024 11:42:03)

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

reynbow wrote:
RickyAstle98 wrote:
reynbow wrote:

Thanks for this.
Should I be using v2 or not?

The v2 models give me a 16% transcoding performance increase and 14% better realtime performance! (on my 10-month-old 4070)
The answer: why not?

Dang. Well. You've sold me.

But only about a 10% realtime increase for the new models (4.18 to 4.26)!

2,024

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

I'm not here to discuss differences of opinion or this and that. The topic I wrote about is RIFE, how to do the visual testing (somewhat) properly, and finding which RIFE model is better for what type of content. I'll stick with that wink
https://github.com/hzwer/ECCV2022-RIFE
https://arxiv.org/abs/2011.06294

https://paperswithcode.com/paper/rife-r … estimation
The ranking shown there is a snippet from the "MSU Video Frame Interpolation Benchmark" website: https://videoprocessing.ai/benchmarks/v … tion.html.
Anyone interested in different video frame estimation/interpolation ML models should check it out.

The ranked RIFE model (which is, btw., a very outdated model from around 2021) already scores pretty well on the subjective score. When it comes to PSNR, SSIM and VMAF, it obviously doesn't score that high.
But all of that is negated once you look at the FPS numbers; Chronos-SloMo-v2 only achieves 4.x FPS.
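For anyone wondering what a metric like PSNR from that benchmark actually measures: a minimal NumPy sketch (illustrative only, not the benchmark's implementation):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a frame vs. a slightly noisy copy of it.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-2, 3, size=(64, 64))
noisy = np.clip(frame.astype(int) + noise, 0, 255).astype(np.uint8)
print(f"{psnr(frame, noisy):.1f} dB")  # higher = closer to the reference
```

An interpolated frame is compared the same way against the real "ground truth" frame that was dropped before interpolation; SSIM and VMAF do the same job with more perceptually motivated math.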



The middle ground (in effort and result) for us, doing a bit more than child's play, would be:
1) Screen-record or capture the video (player) with a program.
2) Put the recording into video editing software and cut it.
3) Do the same for the other model.
4) Align them perfectly, frame by frame, and put both video files side by side.
What we do here is mostly child's play when it comes to visual testing, and by no means should the developers listen to us.
wink
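A lighter-weight variant of steps 1) - 4), assuming ffmpeg is installed: let its `fps` and `hstack` filters do the aligning and joining instead of a video editor. The file names below are placeholders:

```python
# Build an ffmpeg command that stacks two recordings side by side,
# forcing a common frame rate so the clips stay frame-aligned.
# Assumes ffmpeg is on PATH; input/output names are placeholders.
import subprocess

def side_by_side(clip_a: str, clip_b: str, out: str, fps: int = 48) -> list:
    return [
        "ffmpeg", "-i", clip_a, "-i", clip_b,
        "-filter_complex",
        f"[0:v]fps={fps}[a];[1:v]fps={fps}[b];[a][b]hstack=inputs=2",
        "-c:v", "libx264", "-crf", "16", out,
    ]

cmd = side_by_side("model_425.mkv", "model_426.mkv", "ab_compare.mkv")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Stepping through the stacked output frame by frame in a player then gives a direct A-B view of the two models on identical frames.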


@Drakko01
Okay, I did my child's-play testing with the scene you mentioned from that movie (noting again that our testing here is incredibly flawed).
You mean that ~10-second scene where the camera pans 360 degrees around the character with the robotic tentacles?
Yes, that's a good scene to test, with nice patterns and elaborate shapes!

Some tips first:
1) If you are doing the A-B repeat at slow-motion playback speed as mentioned, and/or switching between models, you have to play the A-B loop at least 2-3 times for the scene to be fully buffered/processed. See my attached image "screen9"? With the MPC video renderer, press CTRL+J to show the graph and stats.
As seen in the bottom right corner, the graph is not flat but spiking and the framerate is not stable at 48, which means it's not fully buffered/processed yet (it's a demanding 3840x1600 video even for the RTX 4090, as mentioned).
2) The flawed result is seen in the attached "screen10": massive artifacts etc. which normally wouldn't be there, even at x0.25 playback speed.
When visually A-B comparing or screen-recording, make sure the graph is flat and the framerate fully stable.

Long story short, between v4.25 and v4.26, after sitting down for a couple of minutes:
For that 10-second clip, I can't call a clear winner.
On the most elaborate/complex elements, v4.26 displays them slightly clearer, with less noise and fewer small blocks/artifacts on the patterns and the whole image during slow-to-medium movement (camera pan), e.g. in the first 4 seconds on the red metal triangles and other metal parts of those tentacles.
I also spot fewer bright haloing artifacts between the tentacles in the foreground and the guy's darker black coat, and at the forest in the background.
I also spot a bit less noise and patterning on the whitish concrete pillar and the river at the end.

Conversely, v4.25 is, for example, visibly better in other aspects during the first 4 seconds and after: there are fewer stuttery blocks (medium-sized warping) around the metal tentacles (around the whole body), the red triangles and other metal parts.
I also noticed v4.25 is somewhat clearer during faster movements (camera pans).

If anything, I somewhat prefer v4.26, because the pros outweigh the cons slightly.

So basically: this is what I meant. With child's-play visual testing like ours, we can maybe spot a very clear difference between older models such as 4.15 and the current v4.25/26, but definitely not that much between 4.25 and 4.26 themselves. Our testing is way too crude for that, and our conclusions diverge too much because our testing setups differ.
In such small version jumps, some things get better, some worse.

That's why I keep saying: trust the developers. They are developing it, they do the math, they have the data, they do proper scientific testing, and there is hardly any point arguing against that. If they "recommend v4.26 for most scenes now", I will believe them, because my testing/conclusions (and everyone else's here) are deeply flawed, and theirs are not.


Also: please, no more 3840x1600 video file testing. big_smile Way too demanding with the current models, even for such a graphics card. It stutters way too much and takes too long to process/buffer. If you want me to test something next, find some 1080p footage wink

Post's attachments

screen10.jxr 260.81 kb, 37 downloads since 2024-10-09 

screen9.jxr 320.21 kb, 32 downloads since 2024-10-09 

2,025

Re: New RIFE filter - 3x faster AI interpolation possible in SVP!!!

flowreen91 wrote:

We can see how the directly generated video of RIFE has no shake of the white lines on the right side of the screen.
But when transcoding it with SVP, every interpolated frame has different positioning of the white lines than the non-interpolated frames.
It's like you show the video normally on the non-interpolated frames and then reduce the height by a few pixels on the RIFE generated frames which makes the pixels not align with the original movie, adding a shake-like effect on the static white lines that is obvious for big screen users.

SVP devs please take a look

Looking through a magnifying glass at the 65'' OLED, I can see "shaking" in both of these converted samples (the non-RIFE one too), and moreover in a real-time RIFE conversion.
I'd say this is because the color space is converted back and forth; it's noticeable in very high-contrast areas only.
Probably hmm
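The color-space round-trip theory is easy to demonstrate: quantizing Y'CbCr to 8 bits and converting back perturbs pixel values slightly. A small NumPy sketch using BT.709 full-range coefficients (an assumption for illustration; SVP's actual pipeline may differ):

```python
import numpy as np

# BT.709 full-range RGB -> Y'CbCr matrix (illustrative choice).
M = np.array([
    [ 0.2126,  0.7152,  0.0722],
    [-0.1146, -0.3854,  0.5000],
    [ 0.5000, -0.4542, -0.0458],
])
OFFSET = np.array([0.0, 128.0, 128.0])

def roundtrip(rgb: np.ndarray) -> np.ndarray:
    """RGB -> 8-bit-quantized Y'CbCr -> RGB."""
    ycbcr = np.round(rgb @ M.T + OFFSET)          # quantization step
    return (ycbcr - OFFSET) @ np.linalg.inv(M).T  # back to RGB

rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(1000, 3)).astype(np.float64)
err = np.abs(roundtrip(rgb) - rgb).max()
print(f"max per-channel error after one round trip: {err:.2f}")
# One conversion cycle already shifts pixel values by up to a step
# or two, and the eye notices such shifts most in high-contrast
# areas, consistent with the "shaking" described above.
```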