The RGBS format is supported by RIFE.
YUV444P16 can be converted to RGBS format with almost no loss.
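The "almost no loss" claim can be sanity-checked outside VapourSynth: RGBS stores each channel as a 32-bit float, whose 24-bit mantissa represents every 16-bit code value exactly, so a 16-bit → float32 → 16-bit round trip is bit-exact (any real loss comes from the matrix conversion itself, not the sample format). A minimal pure-Python sketch:

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float to the nearest 32-bit float, like RGBS storage."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Round-trip every 16-bit code value through normalized float32 [0, 1].
max_err = 0
for code in range(65536):
    f = to_f32(code / 65535.0)         # 16-bit integer -> float32
    back = round(to_f32(f * 65535.0))  # float32 -> 16-bit integer
    max_err = max(max_err, abs(back - code))

print(max_err)  # 0: every 16-bit value survives the round trip exactly
```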
SmoothVideo Project → Posts by blackmickey1007
SVP4\script\base.py
- input_m = input_m.misc.SCDetect(threshold=0.1)
+ input_m = input_m.misc.SCDetect(threshold=0.20)
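The threshold controls how different two consecutive frames must be before SCDetect flags a scene change, so raising it from 0.1 to 0.20 makes detection less sensitive (fast motion is less likely to be mistaken for a cut). A conceptual pure-Python sketch of the idea — not the plugin's actual metric, and `diffs` is hypothetical per-frame data:

```python
def flag_scene_changes(frame_diffs, threshold=0.20):
    """Flag a scene change where the normalized inter-frame difference
    (0.0 = identical frames, 1.0 = maximal change) exceeds the threshold.
    Conceptual stand-in for misc.SCDetect's behavior, not its exact metric."""
    return [d > threshold for d in frame_diffs]

# Hypothetical difference values: fast motion (~0.15) vs a real cut (0.60)
diffs = [0.02, 0.15, 0.03, 0.60, 0.04]

print(flag_scene_changes(diffs, threshold=0.10))  # fast motion also flagged
print(flag_scene_changes(diffs, threshold=0.20))  # only the real cut flagged
```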
Thank you for your comprehensive reply. I did not understand everything, but for me personally this passage was the most important:
Did I understand correctly that you want to watch 4K UHD HDR video (10bit) interpolated with RIFE?
If I understood correctly, such a conversion would give better quality and resolution than interpolating the same material from a 1080p YUV420 source, part of whose information is encoded at 540p anyway. By converting from 4K UHD HDR video (10-bit), you want to preserve 1080p throughout the chain?
I have some videos in YUV444P10 format and I want to keep the UV plane resolution, so I want SVP to support YUV444 format.
On the other hand, running RIFE on 4K video in real time will not be realistic for a few more years; scaling 4K down to 1080p is a trade-off.
By the way, can RIFE interpolate 10 bit video? 32 bit RGB means 8 bits each of red, green, blue, and alpha...
The RGBS format in VapourSynth is 32-bit floating point per channel: 32 bits (R) + 32 bits (G) + 32 bits (B), with no alpha channel.
The 8-bit-per-channel integer format in VapourSynth is called RGB24: 8 bits (R) + 8 bits (G) + 8 bits (B).
List of formats supported by VapourSynth:
http://www.vapoursynth.com/doc/pythonre … -constants
List of formats supported by AviSynth:
http://avisynth.nl/index.php/Avisynthplus_color_formats
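The storage cost of the formats mentioned here follows directly from the bit depth and the chroma subsampling; a quick pure-Python check (plain arithmetic, not a library call):

```python
def bits_per_pixel(bits, planes_subsampling):
    """Total stored bits per pixel: sum over planes of
    bits / (horizontal subsampling * vertical subsampling)."""
    return sum(bits / (sw * sh) for sw, sh in planes_subsampling)

FULL = (1, 1)   # plane at full resolution
HALF = (2, 2)   # plane subsampled 2x in each direction (YUV420 chroma)

print(bits_per_pixel(32, [FULL] * 3))          # RGBS: 96.0 bits per pixel
print(bits_per_pixel(8,  [FULL] * 3))          # RGB24: 24.0
print(bits_per_pixel(16, [FULL] * 3))          # YUV444P16: 48.0
print(bits_per_pixel(10, [FULL, HALF, HALF]))  # YUV420P10: 15.0
```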
Filmscans (RGB) → Blu-ray (YUV420) → RIFE (RGB) → VapourSynth (YUV420 now) → Monitor/TV (RGB)
... and any conversion to YUV420 is a loss of half the resolution in terms of colour...
Master Tape (YUV422) → Blu-ray (YUV420) → RIFE (RGBS) → SmoothFps_RIFE (YUV420 now) → Monitor/TV (RGB)
It looks like the difference is caused by using the wrong matrix for the color space conversion.
For example:
YUV --(BT.709 conversion) --> RGB --(BT.601 conversion) --> YUV
This can be checked using the following script.
import vapoursynth as vs
import statistics
core = vs.core
core.num_threads = 4
input_clip = VpsFilterSource
input_YUV = input_clip.resize.Point(format=vs.YUV420P16)
clip_RGB = input_YUV.resize.Point(format=vs.RGBS, matrix_in_s="709")
clip_YUV = clip_RGB.resize.Point(format=vs.YUV420P16, matrix_s="709")
# expr: absolute difference |x - y| on the Y plane
expr = 'x y - abs'
# Empty strings copy the U and V planes unchanged from the first clip
Diff = core.std.Expr([input_YUV, clip_YUV], [expr, '', ''])
Diff = core.std.PlaneStats(Diff)
# Show the difference value (PlaneStats on the Y plane)
# If the value is 0, the frames are identical; non-zero means there is a difference.
clip_YUV = core.std.CopyFrameProps(clip_YUV, Diff)
clip_YUV = core.text.FrameProps(clip_YUV)
clip_YUV.set_output()
Incorrect conversion
clip_RGB = input_YUV.resize.Point(format=vs.RGBS, matrix_in_s="709")
clip_YUV = clip_RGB.resize.Point(format=vs.YUV420P16, matrix_s="470bg")
Accuracy error (precision loss from the floating point to integer conversion):
clip_RGB = input_YUV.resize.Point(format=vs.RGB24, matrix_in_s="709")
clip_YUV = clip_RGB.resize.Point(format=vs.YUV420P16, matrix_s="709")
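The mismatched-matrix case can also be illustrated numerically outside VapourSynth. The sketch below builds R'G'B' from Y'CbCr using the BT.709 luma coefficients and converts back using BT.601 coefficients; the recovered values differ for colored pixels. (Kr/Kb are the standard constants from the two specs; the test color is arbitrary.)

```python
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Convert normalized Y'CbCr (Y' in [0,1], Cb/Cr in [-0.5,0.5]) to R'G'B'."""
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * cr
    b = y + 2.0 * (1.0 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

def rgb_to_ycbcr(r, g, b, kr, kb):
    """Convert R'G'B' back to normalized Y'CbCr."""
    y = kr * r + (1.0 - kr - kb) * g + kb * b
    cb = (b - y) / (2.0 * (1.0 - kb))
    cr = (r - y) / (2.0 * (1.0 - kr))
    return y, cb, cr

KR709, KB709 = 0.2126, 0.0722   # BT.709 luma coefficients
KR601, KB601 = 0.299, 0.114     # BT.601 luma coefficients

y0, cb0, cr0 = 0.5, 0.3, -0.2   # a saturated test color

rgb = ycbcr_to_rgb(y0, cb0, cr0, KR709, KB709)   # decoded as BT.709
y1, _, _ = rgb_to_ycbcr(*rgb, KR601, KB601)      # re-encoded as BT.601

print(abs(y1 - y0) > 1e-3)   # True: mismatched matrices shift the values

# Matching matrices round-trip cleanly:
y2, _, _ = rgb_to_ycbcr(*rgb, KR709, KB709)
print(abs(y2 - y0) < 1e-12)  # True
```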
My request is for the video resolution, not the color space.
The common video format is YUV420, which has three planes (Y, U, and V), where the width and height of the U and V planes are only half those of the Y plane.
In 2160p YUV420 video, the Y plane is 2160p while the U and V planes are only 1080p.
The YUV444 format has the same resolution for the Y, U, and V planes.
The RGB format also has three planes (R, G, and B), all with the same resolution.
RIFE only supports RGB input, but VapourSynth Filter does not support RGB output, so the RGB result must be converted back to a YUV format.
If you convert from RGB to YUV420, half the resolution of the U and V planes is thrown away.
If you want to keep the full resolution, you should convert to YUV444 format.
NOW
4K HDR Video ---> scale (power saving) ---> RGBS convert (RIFE) ---> YUV420P10 convert (output)
Format:  YUV420   YUV444   RGBS    YUV420
Y or R:  2160p    1080p    1080p   1080p
U or G:  1080p    1080p    1080p   540p
V or B:  1080p    1080p    1080p   540p
Depth:   10bit    10bit    32bit   10bit
Wanted
4K HDR Video ---> scale (power saving) ---> RGBS convert (RIFE) ---> YUV444P10 convert (output)
Format:  YUV420   YUV444   RGBS    YUV444
Y or R:  2160p    1080p    1080p   1080p
U or G:  1080p    1080p    1080p   1080p
V or B:  1080p    1080p    1080p   1080p
Depth:   10bit    10bit    32bit   10bit
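The plane resolutions in the two tables follow directly from the subsampling factors; a quick pure-Python check (format names and sizes as in the tables above):

```python
def plane_sizes(width, height, sub_w, sub_h):
    """Return (Y, U, V) plane dimensions for the given chroma subsampling
    factors (sub_w/sub_h = 1 for YUV444, 2 for YUV420)."""
    luma = (width, height)
    chroma = (width // sub_w, height // sub_h)
    return luma, chroma, chroma

# 1080p output, as in the tables above
yuv420 = plane_sizes(1920, 1080, 2, 2)
yuv444 = plane_sizes(1920, 1080, 1, 1)

print(yuv420)  # ((1920, 1080), (960, 540), (960, 540)) -> U/V are only 540p
print(yuv444)  # ((1920, 1080), (1920, 1080), (1920, 1080)) -> full-resolution U/V
```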
Request support for YUV444P8/P10/P16 format
AviSynth Filter recently added support for the YUV444P10/P16 formats, and I hope SVP will follow suit.
https://github.com/CrendKing/avisynth_f … 2189374110
If SVP supported the YUV444P10 format, then when playing 4K UHD HDR video it could convert from 4K YUV420P10 to 1080p YUV444P10 and keep the resolution of the U and V planes.
The RIFE filter could also convert from RGBS to YUV444 instead of YUV420, keeping the resolution of the U and V planes. (SmoothFps_RIFE)
### Environment ###
Windows 10
DDR4-2933 48GiB
Nvidia RTX2070 8GiB
Nvidia Driver 511.79
CUDA Toolkit 11.3
cuDNN v8.2.1 (June 7th, 2021), for CUDA 11.x
### Software ###
Python 3.10.4
VapourSynth R58-RC2
PyTorch 1.11.0 (CUDA 11.3)
vs_rife v2.0.0
VapourSynth-RIFE-ncnn-Vulkan r3 (model: 4.0)
### Tools & Settings ###
GPU-Z 2.45.0
VapourSynth Editor r19-mod-5-AC2
VapourSynth threads: core.num_threads = 4
Decoder: lsmas.LWLibavSource(format="yuv420p8", prefer_hw=3)
Video: demo.mp4 [720p]
### Result ###
1. RIFE filter for VapourSynth (PyTorch CUDA) - vs_rife v2.0.0
Interpolation: x2
RIFE model: 4.0
scale: 1.0
FP16: False
FPS: 54.115
CUDA: ~50%
PerfCap: VRel, VOp, Pwr
2. RIFE filter for VapourSynth (PyTorch CUDA) - vs_rife v2.0.0
Interpolation: x2
RIFE model: 4.0
scale: 0.5
FP16: False
FPS: 69.997
CUDA: ~40%
PerfCap: VRel, VOp
3. RIFE filter for VapourSynth (PyTorch CUDA) - vs_rife v2.0.0
Interpolation: x2
RIFE model: 4.0
scale: 0.5
FP16: True
FPS: 70.936
CUDA: ~32%
PerfCap: VRel, VOp
4. RIFE filter for VapourSynth (ncnn Vulkan) - VapourSynth-RIFE-ncnn-Vulkan r3
Interpolation: x2
RIFE model: 4.0
GPU thread: 1
tta: False
uhd: False
sc: True
FPS: 27.356
CUDA: ~1%
Compute_1: 30%
PerfCap: Idle
5. RIFE filter for VapourSynth (ncnn Vulkan) - VapourSynth-RIFE-ncnn-Vulkan r3
Interpolation: x2
RIFE model: 4.0
GPU thread: 2
tta: False
uhd: False
sc: True
FPS: 92.956
CUDA: ~15%
Compute_1: ~94%
PerfCap: VRel, VOp, Pwr
6. RIFE filter for VapourSynth (ncnn Vulkan) - VapourSynth-RIFE-ncnn-Vulkan r3
Interpolation: x2
RIFE model: 4.0
GPU thread: 2
tta: False
uhd: True
sc: True
FPS: 92.366
CUDA: ~15%
Compute_1: ~94%
PerfCap: VRel, VOp, Pwr
7. RIFE filter for VapourSynth (ncnn Vulkan) - VapourSynth-RIFE-ncnn-Vulkan r3
Interpolation: x2
RIFE model: 4.0
GPU thread: 2
tta: False
uhd: False
sc: False
FPS: 87.083
CUDA: ~15%
Compute_1: ~94%
PerfCap: VRel, VOp, Pwr
8. RIFE filter for VapourSynth (ncnn Vulkan) - VapourSynth-RIFE-ncnn-Vulkan r3
Interpolation: x2
RIFE model: 4.0
GPU thread: 3
tta: False
uhd: False
sc: True
FPS: 90.645
CUDA: ~15%
Compute_1: ~94%
PerfCap: Idle
You should give MPV Player a go, again.
What's the point of sharing it in such a format, then?
It's part of an AviSynth script.
You can use AviSynth to convert video to HFR.
levels.pel = 1; //SVSuper("{pel:1,...}");
analyse.block.w = 32; //SVAnalyse(super, "{block:{w:32,...},....}", src=last);
analyse.block.h = 32; //SVAnalyse(super, "{block:{...,h:32},....}", src=last);
analyse.main.levels = 4; //SVAnalyse(super, "{...,main:{...,levels:4}}", src=last);
analyse.main.search.type = 3; //SVAnalyse(super, "{...,main:{search:{...,type:3,...},...},...}", src=last);
analyse.main.search.distance = -5; //SVAnalyse(super, "{...,main:{search:{...,distance:-5},...},...}", src=last);
analyse.main.search.coarse.distance = -4; //SVAnalyse(super, "{...,main:{search:{coarse:{distance:-4,...},...},...},...}", src=last);
analyse.main.search.coarse.trymany = true; //SVAnalyse(super, "{...,main:{search:{coarse:{...,trymany:true},...},...},...}", src=last);
analyse.main.search.coarse.bad.range = 0; //SVAnalyse(super, "{...,main:{search:{coarse:{...,bad:{range:0},...},...},...},...}", src=last);
analyse.main.penalty.lambda = 11.5; //SVAnalyse(super, "{...,main:{...,penalty:{lambda:11.5,...},...},...}", src=last);
analyse.main.penalty.plevel = 1.6; //SVAnalyse(super, "{...,main:{...,penalty:{...,plevel:1.6,...},...},...}", src=last);
analyse.main.penalty.pnbour = 65; //SVAnalyse(super, "{...,main:{...,penalty:{...,pnbour:65},...},...}", src=last);
analyse.refine[0] = {search:{distance:4}}; //SVAnalyse(super, "{...,refine:[{search:{distance:4}}]}", src=last);
smooth.algo = 23; //SVSmoothFps(super, vectors, "{...,algo:23,...}", mt=15);
smooth.scene.mode = 0; //SVSmoothFps(super, vectors, "{...,scene:{mode:0,...}}", mt=15);
smooth.scene.limits.scene = 3000; //SVSmoothFps(super, vectors, "{...,scene:{...,limits:{scene:3000,...}}}", mt=15);
smooth.scene.limits.blocks = 40; //SVSmoothFps(super, vectors, "{...,scene:{mode:0,limits:{...,blocks:40}}}", mt=15);
This is kevingwn's old script, with the special settings modified for anime (v2).
https://imgur.com/a/mjDjs
levels.pel = 1;
analyse.block.w = 32;
analyse.block.h = 32;
analyse.main.levels = 4;
analyse.main.search.type = 3;
analyse.main.search.distance = -5;
analyse.main.search.coarse.distance = -4;
analyse.main.search.coarse.trymany = true;
analyse.main.search.coarse.bad.range = 0;
analyse.main.penalty.lambda = 11.5;
analyse.main.penalty.plevel = 1.6;
analyse.main.penalty.pnbour = 65;
analyse.refine[0] = {search:{distance:4}};
smooth.algo = 23;
smooth.scene.mode = 0;
smooth.scene.limits.scene = 3000;
smooth.scene.limits.blocks = 40;