cemaydnlar wrote:

Ok i'll give rife a try. Is a rtx 3070 enough for x2 ?

Yes, as long as the video is no more than 1080p resolution, 8-bit colour depth, and without HDR


cemaydnlar wrote:

I am selecting rife interpolation but my fps stays at 24 ? What am i doing wrong ? Can you help me use rife?

Read this first:
https://www.svp-team.com/wiki/RIFE_AI_interpolation

Set the following values:

AI model: Generic (v4)
GPU threads: 2
GPU device: https://www.svp-team.com/forum/viewtopi … 170#p80170
RIFE via CUDA: Off

If you continue to have problems read what problems others had and how they dealt with them:

whole threads:
https://www.svp-team.com/forum/viewtopic.php?id=6580
https://www.svp-team.com/forum/viewtopic.php?id=6553

also read my thread from Chainik's post of April 28 to the end, everything should work from then on:
https://www.svp-team.com/forum/viewtopi … 360#p80360

Skip the == RIFE / PyTorch installation == section altogether, it's too slow now:
https://www.svp-team.com/forum/viewtopi … 372#p80372

You are probably writing about this: https://www.svp-team.com/forum/viewtopic.php?id=6488
Remember, there are probably 2 SVP developers and thousands of SVP users. The chance that some enthusiast finds a combination of settings that gives a slightly better visual effect is simply higher. However, this requires a great deal of time and a great deal of trying, and sometimes a little more luck.

Even though someone has found slightly better settings than the others, you are still not completely happy. And you are not likely to be: the base algorithm has certain limitations that cannot be jumped over. It already offers a huge number of setting combinations, and that is a credit to the developers. Searching for optimal settings falls to the users; the developers could do it too, but there are only two of them, and the search takes an enormous amount of time.

That's why recently, thanks to the growth of GPU computing power, a lot of machine-learning-based frame interpolation models have started to appear. This makes it possible to move much faster toward the ideal of artifact-free interpolation. There is no other way: artificial intelligence is entering all areas of life.

If everyone supported the AI models, we would show their developers that we care and that their work is not in vain. Thanks to them, we are closer to the ideal of real-time artifact-free frame interpolation:

https://github.com/hzwer/arXiv2021-RIFE
https://github.com/hzwer/Practical-RIFE
https://github.com/nihui/rife-ncnn-vulkan
https://github.com/HomeOfVapourSynthEvo … cnn-Vulkan
https://www.svp-team.com/wiki/RIFE_AI_interpolation

Of course I wish you and myself to find someone who will find even better settings for the basic SVP algorithm, because it is still irreplaceable for 4K 10bit HDR files. However progress cannot be stopped and the best quality frame interpolation will be provided only by artificial intelligence.

You asked the same question almost 3 years ago and got plenty of answers: https://www.svp-team.com/forum/viewtopic.php?id=5500

You spoke up when others asked about similar problems: https://www.svp-team.com/forum/viewtopic.php?id=6139

Others have asked before you and someone even stated that he preferred FrameRateConverter over SVP for this reason: https://www.svp-team.com/forum/viewtopic.php?id=4382

I just showed you that RIFE beats both mvtools2 and FrameRateConverter in what you asked about.

It can be seen that you care a lot about this and that over the years you have not found a solution that satisfies you.

Do you think there is some magical way to make the base SVP algorithm capable of approaching what artificial intelligence can achieve? 

Base SVP interpolation uses only a tiny fraction of the computing power that RIFE requires.

You can spend hours testing every possible setting in SVP yourself, or you can use that time to do extra work that will allow you to upgrade your GPU. You can also take a shortcut, although I don't recommend it: https://www.svp-team.com/forum/viewtopi … 388#p80388

In general, RIFE handles occlusions, object boundaries, and y-axis rotational vectors better than mvtools2-based tools.

http://forum.doom9.net/showpost.php?p=1 … tcount=312

https://i.postimg.cc/T2Zg6s1y/ezgif-2-85352ef164.gif
http://forum.doom9.net/showpost.php?p=1 … tcount=323

https://i.postimg.cc/zBBm9pbB/handwave-frc-rife.gif
http://forum.doom9.net/showpost.php?p=1 … tcount=324

To-do List

Multi-frame input of the model

Frame interpolation at any time location (Done)

Eliminate artifacts as much as possible

Make the model applicable under any resolution input

Provide models with lower calculation consumption

https://github.com/hzwer/Practical-RIFE

You have the same priorities as me: less halo and fewer artifacts during real-time interpolation.

Solution: x5 interpolation for 1920x1080 23.976 fps files in realtime using SVP + RIFE filter for VapourSynth (ncnn Vulkan)

Why x5? Better smoothness than x3, and both x3 and x5 give the best interpolation results with RIFE: https://www.svp-team.com/forum/viewtopi … 345#p80345

To do x5 interpolation of 1920x1080 23.976 fps files in real time we need a graphics card 116% more powerful than the ZOTAC GAMING GeForce RTX 3070 Ti AMP Extreme Holo: https://www.svp-team.com/forum/viewtopi … 480#p80480

This will be possible soon with the NVIDIA GeForce RTX 4090.

If you want to get a really good quality with the least amount of artifacts possible use this: https://www.svp-team.com/wiki/RIFE_AI_interpolation

About halo effect around moving objects read here:
http://forum.doom9.net/showpost.php?p=1 … stcount=17
http://forum.doom9.net/showpost.php?p=1 … stcount=24

About anime read this: https://www.svp-team.com/forum/viewtopi … 308#p80308

Fast-paced scenes are difficult for any model. A model created specifically for them is here: https://github.com/google-research/frame-interpolation

If you want the best possible quality for anime, look for AI models with the best score in the ATD12K test.

If you want the best possible quality for fast paced scenes, look for AI models with the best score in the Xiph-2k test.

You can search for scores and compare them here among others:
https://arxiv.org/pdf/2204.03513.pdf
https://arxiv.org/pdf/2104.02495.pdf
https://arxiv.org/pdf/2202.04901.pdf

If you additionally want reasonably good speed and ease of use, then the only option is SVP + RIFE filter for VapourSynth (ncnn Vulkan)

ZOTAC GAMING GeForce RTX 3070 Ti AMP Extreme Holo
https://www.svp-team.com/forum/viewtopi … 477#p80477

205W & 89fps = 0.434 fps/W    0.217 interp. fps/W    83.08% eff.
155W & 81fps = 0.523 fps/W    0.261 interp. fps/W    100.00% eff.
140W & 68fps = 0.486 fps/W    0.243 interp. fps/W    92.95% eff.
125W & 50fps = 0.400 fps/W    0.200 interp. fps/W    76.54% eff.
100W & 26fps = 0.260 fps/W    0.130 interp. fps/W    49.75% eff.
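The numbers above can be recomputed from the raw watt/fps pairs. A minimal sketch, assuming x2 interpolation (so half of the output frames are interpolated) and efficiency expressed relative to the best measured point:

```python
# Recompute efficiency per watt from the measured power/fps pairs above.
# Assumption: x2 interpolation, i.e. half of the output frames are interpolated.
measurements = [(205, 89), (155, 81), (140, 68), (125, 50), (100, 26)]

fps_per_watt = [fps / watts for watts, fps in measurements]
best = max(fps_per_watt)  # the 155W point turns out to be the most efficient

for (watts, fps), eff in zip(measurements, fps_per_watt):
    interp_per_watt = eff / 2          # only the interpolated frames
    rel = 100 * eff / best             # efficiency relative to the best point
    print(f"{watts}W & {fps}fps = {eff:.3f} fps/W  "
          f"{interp_per_watt:.3f} interp. fps/W  {rel:.2f}% eff.")
```

Running it reproduces the table, e.g. 89/205 = 0.434 fps/W and 83.08% relative efficiency for the 205W point.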

You are right: for evaluating efficiency at different power limits it is completely irrelevant.

I just forgot to write that with this one test I wanted to take care of 3 things at once, and that third thing I just didn't describe:

1. performance per watt at various power limits
2. coil whine at various power limits
3. estimation of the performance of future graphics cards based on the performance of current graphics cards

Until today we had no precise result on the forum for 1080p files with RIFE Model 4.0 and the RIFE filter for VapourSynth (ncnn Vulkan): https://www.svp-team.com/forum/viewtopi … 220#p80220
and a precise result in fps allows a more accurate estimate of the capabilities of future graphics cards.
Additionally, more computing power is needed to interpolate 1920x1080 than, for example, 1920x800: https://www.svp-team.com/forum/viewtopi … 361#p80361

Until recently I was only dreaming about 1080p x2 real-time interpolation, but after the last optimization I have gained an appetite for 1080p x5. I only need the information about the interpolation factor to estimate how close we are today to my goal:

x5 - I need 120 fps (24 original frames + 96 interpolated)
x2 - to get 96 interpolated frames I would also need 96 original frames, which gives 192 fps in total

So, if your graphics card at 205W gives 89 fps (1080p x2), that means I need a graphics card 116% more powerful!
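The 116% figure follows directly from the frame arithmetic; a quick sketch of the calculation, using the fps numbers from the posts above:

```python
# How much more GPU power does x5 at ~24 fps need than the measured x2 run?
source_fps = 24            # ~23.976, rounded as in the post
target_factor = 5
interp_needed = source_fps * (target_factor - 1)   # 96 interpolated frames/s

measured_output_fps = 89   # x2 result at 205W from the post above
interp_measured = measured_output_fps / 2          # 44.5 interpolated frames/s

more_power = 100 * (interp_needed / interp_measured - 1)
print(f"{more_power:.0f}% more powerful card needed")
# prints: 116% more powerful card needed
```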

Awesome! Later this year it should be possible!

As for coil whine, you either have an exceptional unit: https://www.guru3d.com/articles-pages/z … ew,34.html or undervolting really helps here, which of course is often recommended to quiet the card down. In any case, it's good to know that a RIFE load on the Tensor Cores can be quiet.

WOW! Thanks for the instant reply!

Yes, I'm planning on undervolting as well.

Are these really the results of x5 interpolation tests for a 1920x1080 23.976 fps file?

What exactly is the card model without coil whine?

As I am also considering passive GPU cooling, the information about coil whine is very important to me. I know that this is a very subjective matter: one person can hear it even with all the fans and HDDs humming, while another cannot. With completely passive cooling of the entire computer and the use of SSDs, the coil whine coming from the GPU becomes significant. Especially since a constant, even load on the Tensor Cores is said to be a worst-case scenario:

I have never seen a GPU not coil whine during ml training and these are the top end boards, such as the a100, v100, a6000, Titan Volta, Titan RTX, 3090fe. There is always coil whine if the workload utilizes 100% of the die (since sustained voltage increases are seen), and I am not talking about the GPU utilization metric seen in Gaming, since they fail to use tensor cores that make GPUs screech.

https://www.reddit.com/r/nvidia/comment … _with_rma/

So I would also ask for information on how different power limits affect coil whine when loading Tensor Cores via RIFE.

There are opinions that coil whine is particularly troublesome when the graphics card uses all its power. Hence my guess that buying, for example, a GPU whose power section can draw 600W and limiting that draw to 50% can significantly reduce the coil whine.

Of course, from 1-2 meters away the coil whine may not be audible at all. In that case you can try to judge it from a closer distance, though of course not so close that someone's hair gets pulled into the graphics card fan wink

This is not as important to me as the above efficiency test at different power limits, but if someone would check this by the way, I would really appreciate it.

I am looking for at least 2 people who own one of the following high power consumption cards:

290W TDP - GeForce RTX 3070 Ti
320W TDP - GeForce RTX 3080
350W TDP - GeForce RTX 3080 Ti
350W TDP - GeForce RTX 3090
450W TDP - GeForce RTX 3090 Ti

and could run a RIFE interpolation test with different power limits set using MSI Afterburner.

I would be interested in x5 interpolation tests for a 1920x1080 23.976 fps file.

Why exactly these parameters? I believe this will probably be the real-time interpolation limit for RIFE on an NVIDIA GeForce RTX 4090 card, unless new optimizations appear for the RIFE models and their use of Tensor Cores.

Of course, this is about testing using the RIFE filter for VapourSynth (ncnn Vulkan) and the result in fps, so that we can accurately determine the performance for each power limit.

We don't need a graph (although a data visualization is always welcome), just the raw data will be enough, from which we can then calculate the efficiency for a given GPU in fps/W for different power limits.

What is the minimum power limit? I think 150W will be adequate.

I will be very grateful for any such test, even if the parameters differ from what I suggested. I know such tests can be time-consuming, but the results can be very interesting and useful to those who are thinking about upgrading their GPU but are a bit scared of those 600 or 900 watts of power consumption and the extra heat and noise they generate.

Coming soon (probably September 2022) are new NVIDIA graphics cards with amazing Tensor Core capabilities, but also with unbelievable power consumption:

~900W TGP - NVIDIA GeForce RTX 4090 Ti
~600W TGP - NVIDIA GeForce RTX 4090
~350W TGP - NVIDIA GeForce RTX 4080

https://wccftech.com/nvidia-geforce-rtx … rds-rumor/

Without air conditioning, which I don't have, I can't imagine an extra 600 or 900W of heat in a small room in the summer. Of course there is a choice and we can always buy an NVIDIA GeForce RTX 4080 with a 350W draw, but in the long run a more economical solution is to buy an NVIDIA GeForce RTX 4090 and drop its power limit to, say, 300W or 350W. This way we can probably get a graphics card that is efficient in terms of performance per unit of energy consumed.

In addition, I wonder about passive cooling of such a card. Today it is possible to passively cool a graphics card up to 250W, although I believe that with some modifications it will be possible to reach up to 300W, especially if the card itself has a design suitable for 600W.

Until now, I believed that lowering the power consumption of a graphics card always leads to an increase in efficiency: relatively large at first then less and less.

However, I have now found some tests that confirm this, but also show that once the power consumption is lowered below a certain level, the efficiency starts to drop dramatically. The tests indicate that this depends not so much on the architecture of the specific graphics card, but mainly on the software used for testing. This can be seen in the graphs below:

Hardware: Inno3D GeForce RTX 3090 Ti X3 OC
Software: Control

https://www.hardwareluxx.de/images/cdn02/uploads/2022/Mar/free_server_mp/inno3d-geforcertx3090ti-x3oc-performance-scaling_1920px.png

https://www.hardwareluxx.de/index.php/a … l?start=23


Hardware: MSI GeForce RTX 3090 Gaming X Trio
Software: Heaven Benchmark

https://i.ibb.co/M5jh7Dh/FPS-Curve.png
https://i.ibb.co/5GkXn5n/Pp-W-Curve.png

https://www.forum-3dcenter.org/vbulleti … st12963231

I must admit that these results worry me a little, because they indicate that lowering the power consumption to 50% does not necessarily mean an increase in efficiency. The consolation is the fact that, as we can see on the graphs, a lot depends on the software. So I am very interested to see how it looks with RIFE, which uses Tensor Cores.

I am glad that all the problems with RIFE have been solved for the moment. I take the absence of new posts on this topic as proof that everything is working as it should and everyone is enjoying RIFE working with SVP for video interpolation.

Thanks for all the testing and sharing. I haven't commented on their results recently, but I plan to come back to them again. I didn't ask for a particular type of test, so everyone posted what they thought was important and useful to others. I read everything carefully and thought about what might be important to someone like me who doesn't have a suitable graphics card yet and is just making a decision.

In fact, I would be interested in many more things, but the most important thing I already know: current graphics cards can already perform real-time interpolation at factors greater than x2. Of course, the more powerful the graphics card, the greater the possibilities.

What I am most interested in at the moment and what more I would like to find out about RIFE interpolation I will present in the next post.

ToasterPC wrote:

https://i.imgur.com/pOS0Nyr.png


1. Try to use GPU thread: 2 - this gives 3x better performance, see blackmickey1007's post:
https://www.svp-team.com/forum/viewtopi … 219#p80219

2. Try to use 8-bit video, RIFE does not natively support 10-bit video.

3. Don't use madVR upscaling, it takes away GPU resources needed for RIFE.

4. Try to use x2 interpolation first, don't go crazy with that 360Hz wink

5. List all GPUs you have in your system: integrated and dedicated - read what Chainik wrote here: https://www.svp-team.com/forum/viewtopi … 170#p80170

6. Screenshots are always welcome on this thread smile

Thanks for contributing to this thread, but FSR is an upscaling algorithm, not a video frame interpolation algorithm.

What you are writing about i.e. a combination of upscaling algorithm and video frame interpolation algorithm already exists: https://github.com/AaronFeng753/Waifu2x-Extension-GUI

There are upscaling algorithms based on machine learning (AI) that give much better quality for video than FSR can: https://paperswithcode.com/task/video-super-resolution

FSR compared to Real-ESRGAN: https://github.com/xinntao/Real-ESRGAN

is what the SVP main algorithm is compared to RIFE...

...it may be much faster, but it will not achieve the quality of Real-ESRGAN.

And if you want both functionalities running in real-time then one algorithm will take computing power away from the other.

I prefer all the GPU power for RIFE, but there are probably people who would like to split that power between upscaling and video frame interpolation.

Exactly what Chainik wrote

https://developer.nvidia.com/cuda-gpus
https://developer.nvidia.com/vulkan-driver

It should work like this:

https://github.com/hzwer/arXiv2021-RIFE/issues/207

Make yourself an interpolated video, extract the frames, and compare them with the original and with what the link above says.

Also remember that if there are duplicate frames, RIFE has to interpolate more frames per second, which puts more strain on the GPU. In other words, where normal video will not overload the GPU, video with duplicates may, and then the GPU will not be able to handle RIFE interpolation in real time.

6912x3456 59.94fps x2

My guess is that you need faster RAM. Maybe even quad channel...

Today is the big day!

Thanks to Chainik, from today everyone can test x3, x4, x5...x10 RIFE frame interpolation in real time in the easy-to-use SVP.

We gain not only better smoothness, but above all better quality of interpolated frames, especially for x3 and x5 settings!

If anyone wants proof of the better quality of these settings, they can find it here: https://arxiv.org/pdf/2204.03513.pdf in Figure 6.

Graph (b) shows the quality of the 7 interpolated frames for an x8 interpolation factor for several algorithms, including RIFE. For the RIFE algorithm the worst quality is at Time Step 4, the frame exactly halfway between two original frames. This is the same interpolated frame that we get with x2 interpolation.

So RIFE gives the best quality when there is no need to interpolate the frame exactly in the middle between two original frames, which means it is best to use interpolation factors x3, x5, x7 and so on. Of course, even with factor x4 we still get a better result than with x2, because this lowest-quality interpolated frame is displayed for only half as long as with factor x2.
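This reasoning can be made concrete by listing the intermediate time positions each factor requires; only even factors force RIFE to render the hardest t = 1/2 frame. A simple sketch:

```python
from fractions import Fraction

def timesteps(factor):
    """Time positions (between 0 and 1) of the interpolated frames
    for a given interpolation factor."""
    return [Fraction(i, factor) for i in range(1, factor)]

for factor in (2, 3, 4, 5):
    steps = timesteps(factor)
    # t = 1/2 is the lowest-quality position per Figure 6, graph (b)
    has_middle = Fraction(1, 2) in steps
    print(factor, [str(s) for s in steps], "middle frame needed:", has_middle)
```

For x2 and x4 the t = 1/2 frame appears in the list; for x3 and x5 it does not, which is why the odd factors avoid the worst-quality time step entirely.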

By the way, has anyone noticed in graph (a) of Figure 6 that an algorithm much faster than RIFE is coming for x4 and x8 interpolation factors? Looking forward to the source code: https://github.com/feinanshan/M2M_VFI

Well, yes, I can not test, so I will paste such interesting tidbits. I hope that I'll be able to enrich this thread a bit before I build a new PC. wink

I'd like to thank everyone again for the tests and explain again why I only ask and don't test anything myself.

The answer is simple: I don't have a suitable graphics card for testing, and my temporary PC is not suitable for the latest graphics cards anyway.

I've been saving money every month since 2019 to buy something really powerful. I had plans to upgrade my computer for 2020, but the situation at that time forced me to revise my plans. And maybe it's even better, because in the meantime the RIFE interpolation algorithm based on machine learning (artificial intelligence) appeared and it became clear that now the graphics card will be responsible for the best interpolation of frames in real time.

I'm now looking forward to COMPUTEX 2022, which will be held at the Taipei Nangang Exhibition Hall from May 24 to 27, 2022, where we may find out more about many new products that will be launched in the second half of this year.

It is in the second half of 2022 that I now intend to build my new PC completely from scratch. Then I will share very detailed test results using SVP & RIFE on the latest hardware that will be available then.

Until then, I would be grateful to everyone for sharing their tests here on the forum, so as to provide feedback to the SVP and RIFE developers, so that we can look forward to even better frame interpolation in the future, and to other testers, so that there are even more of us.

Special thanks to those involved in development:
https://github.com/hzwer/arXiv2021-RIFE
https://github.com/hzwer/Practical-RIFE
https://github.com/nihui/rife-ncnn-vulkan
https://github.com/HomeOfVapourSynthEvo … cnn-Vulkan
https://www.svp-team.com/wiki/RIFE_AI_interpolation

and to all those testing the implementation of RIFE in SVP here on the forum.

Huge thanks Chainik!!!

This is what I was waiting for! smile

I have another question: is there any chance to implement YUV444 format on the RIFE output, as requested by blackmickey1007 here: https://www.svp-team.com/forum/viewtopic.php?id=6582

What is your opinion Chainik on this?

nerji wrote:

Oh, and slightly off topic, I'm not sure how the AI models are created/maintained, but any investment into the anime one would be appreciated! Can only use generic v3/4 in real time currently smile


hzwer wrote:

In fact, the v4.0 version already contains the data of anime10K. I recently tried developing the "anime" and "real world" versions separately, but found no improvement.

https://github.com/hzwer/Practical-RIFE/issues/12


ATD-12K is a large-scale animation triplet dataset comprising 12,000 manually inspected triplets (10k train, 2k test), with the test set carrying rich annotations, including levels of difficulty, Regions of Interest (RoIs) on movements, and tags on motion categories.

The dataset was collected from 30 series of movies made by diverse producers, with a total duration of 25+ hours, comprising 101 clips in two resolutions (1920×1080 and 1280×720).

https://paperswithcode.com/dataset/atd-12k

In summary, the 4.0 model is trained on the largest animation triplet dataset.

Filmscans (RGB) → Blu-ray (YUV420) → RIFE (RGB) → VapourSynth (YUV420 now) → Monitor/TV (RGB) roll

... and any conversion to YUV420 is a loss of half the resolution in terms of colour... sad

By the way, can RIFE interpolate 10-bit video? 32-bit RGB means 8 bits each for red, green, blue, and alpha...