scb wrote:Guys,
I still haven't seen a full explanation of the difference between the 'normal' and 'lite' models of the same versions. In what situations should I pick the 'lite' version over the 'normal' version of a model?
The purpose of the "lite" versions was never fully clear to me, except for Rife 4.14 "lite" which is different from every model that came before it. But there is no answer to your question, you just have to try them and see which one works best for you. As of right now, if you don't have a high end card then don't bother with anything above Rife 4.9. Rife 4.13 is computationally heavy even for a 4080. Rife 4.14 "lite" is not "lite" at all unless you have a card with high memory bandwidth, like maybe a 3070 Ti and above.
StanTheMan wrote:RTX 3060 Ti, i5 12400f, 32 GB of RAM.
"Lite" versions works better everytime. From newest versions with "lite" (4.12 and so on) usual versions dropped frames and it's like don't have enough power or overloading, because it's not working on 60 FPS everytime (48 FPS works OK), but i didn't do any test, just saw the FPS and my internal feelings. So, now i'm on 4.12 "lite" version and it's great.
Newest 4.13 and 4.14 and lite too don't work properly in any scenarios with my GPU because of overloading or something. So, now i'm on 4.12 "lite" with hope for better versions in the future.
Rife 4.14 "lite" is different to the other "lite" models. It uses a technology that requires more GPU bus bandwidth but not as much compute power. Rife 4.13 "lite" is similar to regular Rife 4.9 so if you struggle with 4.9 you will struggle with 4.13. I think maybe Rife 4.14 needs more compute power and memory bandwidth than the 3060ti can provide.
jimdogma7 wrote:roshin1401 wrote:I'm new to this software. While using TensorRT, a cmd window pops up doing compilation. Does this happen for every video file opened, or only once? Also, please suggest the best RIFE model for my GPU. It's a 3070 laptop.
I'm having the same problem and unfortunately this post wasn't responded to. I've put the .onnx files in the "models" and/or "rife" folders as instructed and yes, they do show up in the AI models pulldown in the SVP manager, but when I select one and try to run a video, a CMD box pops up and after a few seconds goes away. The video then plays but is not converted, nor does the little green text in the lower left corner come up telling you what setting you're using, etc. Very bizarre. Plus, if I try to transcode the video it doesn't work either. It says conversion failed.
It seems pretty straightforward, so I don't know what I'm doing wrong here. Please help!
I keep saying this but it's worth repeating. If you are new to this then don't do anything you see here, because we are experimenting. SVP works out of the box. Just uninstall everything, go to the SVP Rife AI page https://www.svp-team.com/wiki/RIFE_AI_interpolation and follow the instructions. All the information you need about Rife, the DOS box and everything else is on that page. Once you have done that, have a look at the SVP transcoding page https://www.svp-team.com/wiki/Manual:SVPcode. I don't transcode, but once you have followed those instructions there are others on here who can help you with any specific questions you have.
RAGEdemon wrote:537.58 eliminated the pan stutters for me.
It made no difference for me when I tried it, but I do like the options that come with the installer. So I installed the latest version (for Cuda 12.3) but with those options. It works at least as well as before, with or without 537.58, plus a couple of non-SVP-related improvements.
RAGEdemon wrote:Other modes such as sync 24fps video to screen refresh, or fps x 2.5, or whole multiples, is really taxing on the GPU. An overclocked 4090 can't do it, even on the highest performing RIFE (4.4/4.6).
I tried 2.5 and it was pretty bad, but then if an overclocked 4090 can't do it, I've got no chance 
Blackfyre wrote:Thanks for the input, so looks like I'll just stick with the 3090 until 5090 is released.
3840x2160 was watchable on the 4080, but with slow pan stutters. I updated the Nvidia driver to the latest version released yesterday and used the v14Test3 TensorRT library (Cuda 12.3) and the latest vsmlrt.py with Rife 4.14v2 lite. It's still dropping frames but the slow pans are now mostly smooth. The 4090 should have no problem, so you don't need to wait for a 5090.
I currently don't have any intention of using SVP with 4K but this should help to clarify your options.
RickyAstle98 wrote:Blackfyre wrote:Question, regarding RIFE I assume the 4070, 4070 Super, 4070 Ti Super, etc are all better than the RTX 3090 with regards to RIFE yes?
I'm asking because I can sell my RTX 3090 used for around $1100 in Australia, and just buy an RTX 4070 Super or 4070 Ti Super for example on special for around the same price.
The 3090 sometimes can no longer handle 4K at 2x with the latest RIFE models. If the 4070 and higher can handle it, I am willing to make the switch, despite having half the VRAM.
Tested! My 4070 handles 2x 4.14v2 (and lite) up to 3200x1824 opt (3520x1984 opt is unstable). At exact 4K, no, so I think the 4070 Ti is close enough for the latest models!
Windows 10, a fresh install (3 months old), and NVIDIA driver 546.17 without mods...
A 4080 with Rife 4.14v2 lite plays 3840x2160 at 2x with SVP Index 1.0 and GPU at 99-100%, with frame drops. Mostly smooth, but some slow pans are a little jerky. The 4080 can barely do it, so the 4070 Ti will not.
I installed 537.58 and tbh it made little obvious difference. But the install tool was useful to completely fix some Nvidia driver issues I knew about but had only partially fixed. I just made another back up of everything. Thank you 
Blackfyre wrote:Question, regarding RIFE I assume the 4070, 4070 Super, 4070 Ti Super, etc are all better than the RTX 3090 with regards to RIFE yes?
Not necessarily. I would test 4.14 lite first, because the 3080/3090 has a wider memory bus than even the 4070 Ti/4080, so the 3090 Ti might work better with Rife 4.14 lite than anything less than a 4090. Hopefully someone can help with other card comparisons.
RAGEdemon wrote:Gentlemen,
Consistent reports are emerging regarding recent nvidia drivers causing stuttering in apps and games. It is related to the DPC latency.
Current advice is to roll back to 537.58 drivers (dated October 2023).
More info in the COMMENTS in this thread:
https://www.reddit.com/r/nvidia/comment … iscussion/
I believe I have seen these stutters in SVP too. Rolling back to the recommended driver fixed the issue. Best staying on the older driver till a confirmed fix has been integrated...
Looking through that thread, there are conflicting reports as to which Nvidia drivers are causing problems. There are also reports of problems being caused by Windows 11. I downgraded to Windows 10 months ago after having lots of issues. I switched to Nvidia Studio drivers as they are more stable, with fewer changes, and less likely to cause an issue. But I'm not a gamer. I also turned off Windows GPU scheduling on the recommendation of a VR dev.
cemaydnlar wrote:dawkinscm wrote:cemaydnlar wrote:4.14 v2 is the smoothest version of them all for me somehow...
4.14 v2 is so smooth that I made sure to back everything up 
I meant 4.14 lite v2, sorry
I am using a 3070. That might be the case. I appreciate your answer, bud, thanks.
It's OK. We were both talking about 4.14 lite v2 and it is very smooth 
cemaydnlar wrote:4.14 v2 is the smoothest version of them all for me somehow...
4.14 v2 is so smooth that I made sure to back everything up 
cemaydnlar wrote:It gave a weird error while caching but is somehow working without doing that caching cmd again. Second, I don't see any improvement in GPU usage. It's the same as 4.14. Is that normal?
If the messages come after the "profiling" statement then my understanding is that the warnings are part of the profiling process. As for the performance, that might depend on your GPU memory bandwidth. The Nvidia 3070 Ti and above start at 600GB/s memory bandwidth. The 4070 Ti has the same memory bandwidth as a 4080 at 700GB/s. But the older 3080 and 3090 series cards might perform better with 4.14 "lite" than the 4080 because of higher memory bandwidth. Below this class of card, YMMV.
The new technique used for the "lite" versions is potentially a little more accurate than the normal versions, but I suspect that GPUs with low memory bandwidth will have issues with 4.14 "lite". I'm keeping it for now and will see what happens further down the line.
Edit:
The figures from @aloola and @RickyAstle98 show that this is a heavy model. But now that everything has bedded in, Rife 4.14 "lite" (v2) is using a little less GPU than before, down from 94-100% with Rife 4.14 to 86-92% with "lite", while so far retaining the same quality as before. It's also very smooth. YMMV on GPUs with limited memory bandwidth.
So far 4.14 lite uses more GPU than the non-lite version. I thought it was maybe because of the changes I made, but I reverted to the standard SVP configs and it's the same. Maybe there is an issue they need to fix.
Update:
It uses about the same amount of GPU resources, but it works in a different way which makes it more dependent on memory bandwidth than on raw GPU resources. I've read up a little on the new methodology and it is interesting; the upshot is that the "lite" version may not be as "lite" as before.
flowreen91 wrote:There may be fixes and enhancements in the latest v14 releases which are not in v13. If you're willing to help test, I would try downloading https://github.com/AmusementClub/vs-mlr … 4.test2.7z and unzipping
and the folder
into
C:\Program Files (x86)\SVP 4\rife
I've updated and I don't see any obvious differences. But I have thought about updating for the latest fixes to TensorRT and Cuda so thanks 
flowreen91 wrote:The current default SVP installation has the 4.6/4.9 v1 models, which means the average user never saw this jiggling issue.
In the future, when the SVP devs want to add the v2 models, they will surely update the old default TensorRT libraries too, so the average user will never see this issue.
Until then we should follow @cws' guide on how to update our own TensorRT library from https://github.com/AmusementClub/vs-mlrt/releases if we want to play around with the latest RIFE models:
cws wrote:Right, TensorRT 9.x is in "beta" and not officially released. Still, v13 was released when TensorRT 8.5 was the latest version; since then there has been a newer release of TensorRT 8.x (which is 8.6).
There may be fixes and enhancements in the latest v14 releases which are not in v13. If you're willing to help test, I would try downloading https://github.com/AmusementClub/vs-mlr … 4.test2.7z and unzipping
and the folder
into
C:\Program Files (x86)\SVP 4\rife
If true then this is not so much a vsmlrt fix as it is a Cuda/TensorRT fix. I think the last time I tried this my Tensor libraries weren't updated but I might use the info above and try again.
Yep. That's where I got the last copy from too. It's configured for Catmull-Rom. The ones I saw configured for Mitchell must have been local variations by other users.
Blackfyre wrote:ontop
glsl-shader="C:\Users\Username\AppData\Roaming\mpv\Shaders\SSimDownscaler.glsl"
FYI: while I think SSimDownscaler by default was configured for Mitchell, some configs out there have it configured for Catmull-Rom.
cws wrote:It's true there's no significant performance improvement, but there may be fixes and other minor improvements.
Specifically, when using v13 (with TensorRT 8.5 / CUDA 11.8) with the v2 models, I can see some blurriness and pixels shifting in the bottom right corner if there is some obvious static content there like a logo. If this is something you can reproduce, try using the latest v14 (with TensorRT 9.x / CUDA 12.x)
Thanks. If you see a particular issue that you want fixing then this does make sense. Especially if you are saying that v14 fixes this issue for you. I still do testing of new models, changing code etc but I think SVP is stable and works well now so I would not recommend most people do this. Why fix something when it ain't broken ? 
flowreen91 wrote:dawkinscm wrote:It's a change to the code in base.py. I got this from a previous post by @blackmickey1007 for completely disabling SC.
SVP4\script\base.py
input_m = input_m.misc.SCDetect(threshold=0.1)  # or 0.2
The old line is:
input_m = input_m.misc.SCDetect(threshold=rife_sc)
and changing it manually to 0.1 or 0.2 is equivalent to:
- selecting "Scene change threshold" in SVP and choosing "10%" or "20%"
- or going to application settings, searching for "rife_sc" and setting it to 10 or 20, as in the following image:
https://gyazo.com/714dd3a8a3159f1ac7f496126d92ee70
@dawkinscm I suggest you revert that line change, since the SVP devs were nice enough to let us control it from both of the places mentioned.
For "completely disabling SC", just set the "rife_sc" application setting to 100 to get maximum smoothness with zero animation stuttering.
Thanks for clarifying. I just used the 0.1/0.2 examples from the original post. I've tried 100 and other high values before, and Rife is smooth with anything from 15% to 100% on my machine.
But you may never see the double image issue I'm talking about, depending on the type of stuff you watch. I want to fix it while keeping the smooth motion, which is pretty much impossible. With a low enough value the stutters disappear, but it also causes slow pan stutters. My actual setting is a decimal fraction below 10%, and rife_sc only accepts integers. Increasing the fps actually fixes the issue, but it causes slow pan stutters and resource issues. The only real solution is for the model to get better, and I've noticed a couple of small improvements, so hopefully that trend will continue.
BTW I didn't want to say it before without more testing but Rife 4.14 uses a little less GPU resources than 4.12/4.13. YMMV.
I use a number of videos highlighting different artefacts, but Alita might be the best because it has different types of fast movement and exposes a couple of minor artefacts, one of which has been fixed by Rife 4.14.
cemaydnlar wrote:Can you share that setting with me ?
It's a change to the code in base.py. I got this from a previous post by @blackmickey1007 for completely disabling SC.
SVP4\script\base.py
input_m = input_m.misc.SCDetect(threshold=0.1)  # or 0.2
The threshold values are just suggestions; you will need to experiment to see what works for you. You won't find a perfect setting, because when you fix double images from fast movement it will break slow panning, and when you reset for slow panning you get double images in fast movement. But you can get a compromise which improves fast movement a little without breaking slow panning. I couldn't get even this minor improvement with 4.12/4.13, but 4.14 does seem to be better overall for fast movement, so here it was possible.
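To make the edit clearer, here is a minimal sketch of the relevant part of base.py. Only the SCDetect call itself comes from the posts above; the comments and the example 0.085 value are just my illustration of picking a fractional threshold that the integer-only rife_sc setting can't express.
# SVP4\script\base.py (relevant line only, not the full script)
# original line: the threshold comes from SVP's "Scene change threshold" / rife_sc setting
# input_m = input_m.misc.SCDetect(threshold=rife_sc)
# hardcoded variant: the threshold is a fraction, so 0.1 roughly corresponds to the 10% setting
input_m = input_m.misc.SCDetect(threshold=0.085)
If an integer percentage is good enough for you, the rife_sc application setting does the same thing without touching the code.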
Rife 4.14 is out. There's a fast movement artefact I saw in one movie which is now gone. There are some SC settings I make in the code that work a lot better to reduce a different, more problematic fast movement artefact. It's still there, but less so than before. Rife 4.14 seems to be better overall for fast movement.
Blackfyre wrote:ontop
vo=gpu-next
fbo-format=rgba16hf
spirv-compiler=auto
dither-depth=auto
scale=ewa_lanczossharp
cscale=ewa_lanczossharp
dscale=ewa_lanczossharp
sigmoid-upscaling=yes
glsl-shader="C:\Users\YourUsernameHere\AppData\Roaming\mpv\Shaders\KrigBilateral.glsl"
glsl-shader="C:\Users\YourUsernameHere\AppData\Roaming\mpv\Shaders\FSRCNNX_x2_16-0-4-1.glsl"
glsl-shader="C:\Users\YourUsernameHere\AppData\Roaming\mpv\Shaders\SSimDownscaler.glsl"
I'm not going to comment on config for OLED. It's best you experiment for yourself. Here are some points to think about.
dither-depth=auto and spirv-compiler=auto are the defaults, so they're not necessary.
fbo-format is disabled in gpu-next, so it is not needed and won't work anyway.
KrigBilateral isn't really necessary and, although I've never seen it myself, it can apparently cause mistakes.
Is FSRCNNX_x2_16-0-4-1.glsl that much better than FSRCNNX_x2_8-0-4-1.glsl considering the extra processing power needed?
Do you need to use both FSRCNNX_x2_16-0-4-1.glsl and SSimDownscaler.glsl? That's a lot of processing and sharpening.
SSimDownscaler.glsl is configured by default for Mitchell but you are using ewa_lanczossharp. So that is not optimal.
You are sharpening in the upscale, the shader and the downscale. FSRCNNX might not be doing anything, though, because by default it only kicks in when a 2x or greater upscale is needed. That might be why you can get away with running FSRCNNX_x2_16 and SSimDownscaler, which are very GPU heavy.
Some shaders overwrite others, so it's best to use append when you want multiple shaders to work at the same time (see the sketch after these notes).
Hope this helps a little.
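If it helps, here is a rough sketch of what I mean by appending, reusing the shader paths from your config; the Mitchell line is only there to illustrate matching dscale to what SSimDownscaler expects by default, not a recommendation for your setup.
# append shaders so each one is added to the list instead of replacing it
glsl-shaders-append="C:\Users\YourUsernameHere\AppData\Roaming\mpv\Shaders\FSRCNNX_x2_16-0-4-1.glsl"
glsl-shaders-append="C:\Users\YourUsernameHere\AppData\Roaming\mpv\Shaders\SSimDownscaler.glsl"
# SSimDownscaler is tuned for Mitchell by default, so match dscale to it (or reconfigure the shader)
dscale=mitchell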
cws wrote:Right, TensorRT 9.x is in "beta" and not officially released. Still, v13 was released when TensorRT 8.5 was the latest version; since then there has been a newer release of TensorRT 8.x (which is 8.6).
There may be fixes and enhancements in the latest v14 releases which are not in v13. If you're willing to help test, I would try downloading https://github.com/AmusementClub/vs-mlr … 4.test2.7z and unzipping
and the folder
into
C:\Program Files (x86)\SVP 4\rife
According to the info about the code, v14 is no improvement over v13. Having personally tested the v14 versions I would say that if anything they might use a little more GPU resource. Maybe that's because they are designed for AI GPUs.
flowreen91 wrote:dawkinscm wrote:Does changing the code in the way I mentioned do this?
https://gyazo.com/c71046909c565742a9c8b94a6929f11f
For integer values higher than 2 it will play the video track at 2x/4x/8x speed while the sound track is still played at 1x.
For values lower than 2 it will crash with error 'fractional multi requires plugin akarin (https://github.com/AkarinVS/vapoursynth-plugin/releases) version v0.96g or later.'
Try to play the video with MPC-HC if you want to see big error logs on your screen when you test with different values.
dawkinscm wrote:after a few seconds the screen goes blank
that's what happens when you run out of video track 
follow aloola's suggestion and just select your FPS from the top right
Now that you have said it, it's obvious. I already knew what multi does in theory, but what I didn't realise was that in practice it directly relates to the fps, which again is now obvious. Thanks 
As stated in the docs, the v14 test releases are not designed for our GPUs even though they will work. SVP is not and should not be using them
aloola wrote:why do you need to change it there when you can easily do it here?
Thanks but that's not what I'm trying to do. Or at least I don't think that's what I'm trying to do. Does changing the code in the way I mentioned do this?