I managed to make it work, but it is not straightforward.
Basically: 3D Blu-rays do not store side-by-side or top/bottom streams but packed frames; the codec is H.264 MVC.
This means that for each frame, the left picture is encoded as regular H.264, whereas the right picture is rebuilt from the left picture plus a differential part stored separately in the same (packed) frame.
Unfortunately, ffmpeg (used by mpv, itself used by the SVP transcoder) does not support H.264 MVC, so decoding will only give 2D frames (the differential part is ignored).
To make it work, I used a tool named BD3D2MK3D: it is a wrapper around a set of tools (tsMuxeR, mkvmerge, x264/x265 for encoding) and, most importantly, the FRIMSource library, which can decode H.264 MVC properly. Roughly, the tool does the following:
- Decode the video and demux it (I guess the encoder then needs to be fed one file per stream): one file for the video stream, one file per audio/subtitle stream
- The tool also generates an AviSynth script file that is used in the next step on the encoder command line: the script just grabs the left/right pictures given by the FRIMSource decoder and puts them together, horizontally or vertically (see the small sketch after this list). I made sure to uncheck "half" because it would divide the resolution by 2; you get a stream at twice the Full HD resolution, e.g. 3840x1080.
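To illustrate, here is a minimal sketch of the full-resolution stacking idea (not the generated script itself; the two file names are hypothetical stand-ins for the decoded left/right views):
left  = AviSource("left_view.avi")    # hypothetical 1920x1080 left-eye clip
right = AviSource("right_view.avi")   # hypothetical 1920x1080 right-eye clip
StackHorizontal(left, right)          # 3840x1080 full side-by-side; StackVertical would give 1920x2160 top/bottom
The generated script feeds the two views from the FRIMSource decoder instead, but the stacking call is the same idea.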
This procedure is not easy, though, because I had to manually modify the AviSynth script to incorporate the call to the SVP library. The script is located in the subfolder where all the demuxed files are; its name is __ENCODE_3D_MOVIE.avs
Add at the beginning of the script (the block below was built from the base.avs script in the SVP 4 program folder plus the script generated by SVP when using the SVP transcoder):
LoadPlugin("C:\Program Files (x86)\SVP 4\plugins64\svpflow2.dll")
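# stereo_type: 0 = mono, 1 = side-by-side (the layout produced by the BD3D2MK3D script), 2 = top/bottom; src_fps = source frame rate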
stereo_type = 1
src_fps = 23.976
function interpolate(clip)
{
input_um = clip #clip.resize.Point(format=vs.YUV420P8,dither_type="random")
input_m = input_um
input_m8 = input_m
super_params = "{scale:{up:0},gpu:1,rc:true}"
analyse_params = "{block:{w:8,overlap:0},main:{search:{coarse:{distance:-8,bad:{sad:2000},width:530},type:2}},refine:[{thsad:250}]}"
smoothfps_params = "{gpuid:11,gpu_qn:1,rate:{num:11,den:5},algo:23,mask:{area:100,cover:80},scene:{mode:0}}"
threads=3
demo_mode=0
nvof=1
rife = 0
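# engine selection below: nvof=1 uses SVSmoothFps_NVOF (NVIDIA Optical Flow), rife=1 would use RIFE, otherwise the classic SVSuper/SVAnalyse/SVSmoothFps chain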
nvof==1 ? eval("""
smooth = SVSmoothFps_NVOF(input_m, smoothfps_params, vec_src=input_m8, mt=threads, src=input_um)
""") : rife==1 ? eval("""
input_rife = input_m.ConvertBits(32,fulls=false,fulld=true).ConvertToPlanarRGB(matrix="709")
smooth = RIFE(input_rife,factor_num=rife_num,factor_den=rife_den,model_path=rife_mpath,gpu_id=rife_gpu,gpu_thread=rife_threads,sc=rife_sc,sc_threshold=rife_scth,yv12=input_m8)
smooth = smooth.ConvertToYUV420(matrix="709").ConvertBits(input_um.BitsPerComponent)
smooth = SVSmoothFps_RIFE(input_m, smoothfps_params, rife_out=smooth, vec_src=vec_src, src=input_um, mt=threads)
""") : eval("""
super = SVSuper(input_m8, super_params)
vectors = SVAnalyse(super, analyse_params, src=input_m8)
smooth = SVSmoothFps(input_m, super, vectors, smoothfps_params, mt=threads, src=input_um)
""")
return demo_mode==0 ? smooth : demo(input_m,smooth)
}
And at the end of the script: comment out or delete the line "Return(last)#.Info()" and replace it with:
input=last
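# for side-by-side / top-bottom input, each eye is cropped out and interpolated separately, so motion estimation stays within a single view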
stereo_type==0 ? eval(""" interpolate(input)
""") : stereo_type==1 ? eval("""
lf = interpolate(input.crop(0,0,input.width/2,0))
rf = interpolate(input.crop(input.width/2,0,0,0))
StackHorizontal(lf, rf)
""") : stereo_type==2 ? Eval("""
lf = interpolate(input.crop(0,0,0,input.height/2))
rf = interpolate(input.crop(0,input.height/2,0,0))
StackVertical(lf, rf)""") : input
Return(last) # "last" now holds the interpolated result of the block above; returning "input" would bypass the interpolation
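Optionally, before launching the long encode, the modified script can be sanity-checked from a small separate .avs (my own sketch, reusing the script name above) to confirm the doubled resolution and the new frame rate on a short excerpt:
src = Import("__ENCODE_3D_MOVIE.avs")
src = src.Trim(1000, 1500)   # a short excerpt is enough for a check
src.Subtitle(String(src.width) + "x" + String(src.height) + " @ " + String(src.framerate) + " fps")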
Now I have a full-resolution (2x Full HD) 3D MKV, smooth (60 fps in my case), that I can play on my 3D projector (I used a Zidoo device for playback).
No need to plug in my PC and configure MPC-BE for smooth 3D playback anymore.
This procedure should really be automated; I will see if I can get my hands on this tool and extend it with SVP options.