Great video, a really interesting insight and tutorial. It can be a bit of a struggle for layman types to get their heads around this stuff, but I found the map you provide really useful, as I'm also a visual learner, especially for processes such as this. Looking forward to seeing more!
Thanks for stopping by!
Great video. I have many Sony-derived DV tapes of home video, which I captured to interlaced AVI using Sony Vegas some years ago. I now have more time to convert them to a progressive format. I use DaVinci Resolve, which deinterlaces and produces good results. I've also used HandBrake and found its default BWDIF filter to be the best. However, QTGMC deinterlacing appears to be the gold standard. Hybrid can provide QTGMC deinterlacing and, importantly for someone who doesn't work in scripts, has a GUI. Many thanks for the highly informative video, which takes the time to explain and demonstrate the features achievable using Hybrid.
(29:45) Bob deinterlacing doesn't cut the frame rate in half, but rather doubles it, giving you an extremely stable 60p output when using QTGMC. This provides more temporal resolution, which should result in fewer artifacts when interpolating to 24 fps.
Yes, thank you! Did I say it halves it? I meant doubles. You are spot on about the advantage of doubling temporal resolution when bobbing to 50p or 60p. Nice catch, thank you Kooz!
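The correction above is easy to verify with rational arithmetic. Here is a quick Python sketch (illustrative only, not part of Hybrid or QTGMC) of what bobbing does to NTSC rates:

```python
from fractions import Fraction

# NTSC interlaced video: 30000/1001 frames per second, 2 fields per frame.
ntsc_frames = Fraction(30000, 1001)     # ~29.97i

# Bob deinterlacing promotes every field to a full progressive frame,
# so the output rate doubles rather than halves:
bobbed = ntsc_frames * 2
assert bobbed == Fraction(60000, 1001)  # ~59.94p

# Interpolating that down to film rate means 5 source frames for
# every 2 output frames - more temporal data, fewer artifacts:
film = Fraction(24000, 1001)            # ~23.976p
assert bobbed / film == Fraction(5, 2)

print(f"{float(bobbed):.3f} fps -> {float(film):.3f} fps")
```

Using exact fractions rather than the rounded "29.97" and "23.976" keeps the ratios clean, which is why the 59.94-to-23.976 relationship comes out as exactly 5:2.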
I watched this for the 2nd time after doing some decent research into video transfer methods and deinterlacing. This is by far the best video guide to optimising archival camera footage out there, and thankfully it uses the best tool out there by far, Hybrid. Your comments on Hybrid's filter options and ChatGPT are also spot on. It has a ton of info regarding this tool and is far easier to use for research than blind Google searching. Oh, one thing ChatGPT did say was that using bob before deinterlacing was the optimal approach for the smoothest movement. Thankfully Hybrid allows combining these processing options. Thanks very much, by the way.
@@TheBukman Thank you for this comment. I'm really making an effort to give practical guides that are complete. My next tutorial will be a beginning-to-end HDV-to-4K remaster of an old film I shot with the Canon XLH1 in 2007, along with a surprising discovery about this camera's color science (far ahead of its time).
I am not someone who comments on RUclips videos, but I just want to say thank you so much for this extremely detailed instructional video on how to use Hybrid with VapourSynth QTGMC. I have been using AviSynth with VirtualDub, but I think I am going to make the switch over to Hybrid due to how much easier this is, and AviSynth does not play well with multiple audio tracks, specifically tape formats like professional recordings on U-matic. I am in the beginning stages of starting a video transfer business with my buddy, and this is one of the most difficult parts of getting our clients professional results. I take this approach: I give them the raw archival footage and the upscaled and deinterlaced x265 for easy viewing. I tell them to put it on the cloud, and when video processing gets exponentially better in 20 years, you can give someone the files and have them make it "better" than the "viewing" files you currently have. Thank you!
@@chrisjaeger7795 You got it exactly! Good luck with your new venture! Thank you for your service to the tape archival rescue! I'm about to release a six-part series on HDV transfer and remaster to color, so hopefully you will glean more knowledge or inspiration from my mistakes!
Thank you for this really helpful video. I've been using Hybrid for a while and it is an excellent piece of software. I would like to request a video on your approach to using Hybrid for an inverse telecine process on SD video content that came from film (meaning it has 3:2 pulldown baked into the video), creating a final clean 24p output file.
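For context on the request above, here is a minimal sketch in plain Python (an illustration of the 3:2 pulldown pattern only, not Hybrid's actual IVTC filter): four film frames A B C D are spread across ten video fields in a 2-3-2-3 cadence, paired into five interlaced frames; inverse telecine matches fields back to their source frames to recover the original four.

```python
# 3:2 pulldown: 4 film frames are spread across 10 video fields (2,3,2,3),
# which are then paired into 5 interlaced video frames (23.976p -> 29.97i).
film_frames = ["A", "B", "C", "D"]
pattern = [2, 3, 2, 3]

fields = []
for frame, count in zip(film_frames, pattern):
    fields.extend([frame] * count)          # A A B B B C C D D D

video_frames = [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]
# -> [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
# frames 3 and 4 mix fields from two film frames (visible "combing")

# Inverse telecine, conceptually: match consecutive duplicate fields back
# to their source frames, recovering the original 4 progressive frames.
recovered = [f for i, f in enumerate(fields) if i == 0 or fields[i - 1] != f]
assert recovered == film_frames

print(f"{len(video_frames)} video frames -> {len(recovered)} film frames")
```

This is why a clean 24p output is possible at all: the original frames are fully present in the fields, just distributed across a 5-frame cadence that IVTC has to detect and undo.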
Cool, thanks! My vids are all Hi8.
@@AllenMichael Your music is rad. Are you running your recordings through a tape format prior to transfers?
@@zhalberd thanks. I record my music on tape, then mix it to tape; then send the tape to mastering.
Thank you for sharing your invaluable experience!
One question though: in the Crop/Resize tab, you talked about choosing the bicubic resizer over the Lanczos one, OK. But then in the Vapoursynth area (in the Resize subtab), you talk about choosing the NNEDI3 resizer. Which of the two tabs actually matters for the resize settings, and which resizer will be used, provided that we have activated Vapoursynth? Thank you!
@@sergeypianovaria Hey! I think I do that out of habit, but now that I think about it, I believe the NNEDI3 upscale settings override. I'll need to read more about that pipeline to give you a more informed response!
@@zhalberd Thank you!
@@zhalberd Yes, that's correct. I tested all of the resizers in Crop/Resize to see how they compare and discovered that when you check the Resizer box under Filtering > Frame > Resize, it overrides whatever you selected in Crop/Resize.
@@videocaptureguide thank you for confirming! These notes are good for the community 👍🏻
Hi, still new to this, but I want to learn more about Hybrid as I'm not impressed with Topaz. So do you do everything in one pass? From what I've seen, people recommend to just deinterlace in one pass, then resize/enhance in a 2nd pass, or doesn't it matter? Many thanks, and great video.
@@Upscaler23 One pass, as there is an order of operations built into Hybrid. The "frame server" serves the video file frame by frame, and Vapoursynth (or Avisynth) renders each frame (or series of frames, depending on the filter) with each filter in an assembly line from left to right. This is also how Vapoursynth scripts are written: in a specific order down the script. Or at least this is how I understand it.
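That assembly-line idea can be sketched in plain Python (a hypothetical illustration of a frame-server pipeline, not Hybrid's actual code): each filter is a function, and the result depends on the order in which the chain applies them, just as each line of a VapourSynth script feeds the next.

```python
# Each "filter" transforms a frame; the chain applies them in script order.
def deinterlace(frame):
    return f"deinterlace({frame})"

def denoise(frame):
    return f"denoise({frame})"

def resize(frame):
    return f"resize({frame})"

def run_pipeline(frames, filters):
    """Serve frames one by one through the filter chain, in order."""
    out = []
    for frame in frames:
        for flt in filters:            # assembly line: left to right
            frame = flt(frame)
        out.append(frame)
    return out

result = run_pipeline(["f0"], [deinterlace, denoise, resize])
print(result[0])   # resize(denoise(deinterlace(f0)))
```

Swapping the filter order changes the nesting, which is why deinterlacing before resizing in one pass is not the same as resizing first; the pipeline bakes the order in.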
Can you explain the next step? VHS processed by Hybrid is OK, but what about a basic color correction in DaVinci Resolve Studio?
@@matti157 Hey! That's a great idea. I'll be sure to add this to my episode idea sheet.
Hi Zack, thank you for this video. It really clarifies a LOT of things. I'm new to PCs and using an analog capture setup running XP, since that is the recommended OS for my special ATI capture card to export an AVI with a lossless video codec. Since I'm primarily a Mac user (M1 Max Pro with 32 GB RAM), the only other PC I own is a Dell Latitude company laptop with an Intel i7 1.9 GHz, 16 GB RAM, a UHD Graphics 620 card, and a 512 GB NVMe SSD installed. If all I plan to do is use Hybrid to deinterlace AVI files and use Resolve for quick trims and exporting, do you think the PC would suffice? I have read in forums that the Huffyuv lossless codec is not supported by any recent macOS and that kind of stuff, and I don't really have the funds to get another PC at the moment. I have this one laying around that I've considered just adding RAM to and maybe swapping out the SSD. What would you recommend?
@@cuckoohaus Honestly, you can run Hybrid on a toaster. Resolve is picky with GPUs, though. If the UHD 620 is an Intel integrated GPU, it may not work (as of the last time I tried) - double-check Blackmagic Design's GPU compatibility list. But Hybrid will work on just about anything. I use it on my Intel NUC as well as my Lenovo ThinkPad with an Intel Iris Xe integrated GPU.
Excellent information and presentation :) Do you have anything that covers actually getting the original footage? I've been trying to use VirtualDub2, but most of the compression formats I try either show up as progressive in MediaInfo or don't seem to be very receptive to deinterlacing within Hybrid. So I guess that's the main thing I'm attempting to do: get a good-quality interlaced copy of the original footage that I can manually work with in Hybrid. Thanks in advance! :)
@@locommotionmusic Hi! Thanks for your comment. Correct me if I misunderstood, but are you asking for the best method to capture the archival source in the first place?
Yes, I think this is what was asked. I am interested in that also. It seems these tools like Hybrid start with a source file that was captured in some sort of interlaced format from tape, which is then processed by the Hybrid workflow presented here. It would be great to get some suggestions on that initial tape-capture step, e.g. what codec, file format, and resolution are best to capture from tape with. Also, would you upscale during the tape capture process? Some capture tools provide many options, and it can get confusing if the intention is to use tools such as Hybrid for the main processing step. Plus, as you say, you should capture with no processing initially, so you can use new processing tools that come along later. I would love to know what that initial tape-to-digital capture should be for future-proofing.
@@zhalberd Correct. I realize the premise of this video was that you were working with source material provided to you, but as @TheBukman described, it can be quite the challenge to capture good-quality source material first. FWIW, after a LOT of tinkering and tweaking, I have had a little better luck getting _decent_ OCF and getting Hybrid or StaxRip to play nice with the footage. But it still seems pretty hacky on my part! :D
Hey y'all. I've made a video for you based on this request. I realize that this was a critical component of the workflow, so I hope this helps! ruclips.net/video/zB42-A5VOsQ/видео.html
Why go ProRes and not a lossless intermediate codec?
I work primarily in the film business with post-production facilities that only accept ProRes as a deliverable. Sometimes I have clients who are PC/Windows-based and they demand DNxHD or DNxHR. I will work with any format a client requests! In a perfect world, I would prefer to stick with open-source formats.
@@zhalberd I capture in losslessly compressed Huffyuv, and one thing that sucks if you want to use DaVinci Resolve is the lack of support for lossless codecs.
ProRes is good; it's just not mathematically lossless compression.
Yes please, could you do a music video?
Haha. I have a terrible voice. Not sure this is a good idea.
I mean deinterlace a music video from a VHS.
@@britsluver yes I could do that for sure
Why do video editors like to work with 23.98 fps instead of 29.97 fps, which is what 99% of NTSC video tapes are? Changing frame rates usually degrades the video. Why not keep the original frame rate?
Because they were originally recorded at 23.976. The 29.97 frame rate is the result of 3:2 pulldown. You’re restoring it to its original state.
@@rsuryase When video editors first start a project, there is a post-production meeting that usually happens. In this meeting, we discuss something called "deliverables," which is what the distributor has asked for. This can mean several different things depending on the distribution method. Theatrical distribution will require 24fps. Television streaming will require 23.976fps or 24fps (or 25fps for PAL/SECAM). Broadcast for terrestrial television, cable, and satellite is 29.97fps (for NTSC regions). Distribution determines the final format, but the master timeline settings may or may not be aligned with it, depending on how deliberate the editor or assistant editor is.

Honestly, you would be shocked at how little editors actually understand the "why" and the "how" of timeline settings. This is why you see a lot of weird stuttering and temporal artifacting in documentary films with archival media on streaming and TV these days. It comes down to poor planning and sloppy conversions. Most films with archival media shown on TV/streaming today have been deinterlaced using the basic algorithm built into the editing software and also sloppily "conformed" from 29.97 to 23.98.

There is also a movement in the post world to get rid of fractional frame rates altogether; 23.976 is totally redundant in the arbitrary world of streaming. So all of this will eventually change, but archival films will still require someone on the team who knows the secret sauce. The younger generation of assistant editors do not stick around long enough to actually learn the fundamentals, so the generational loss of the required skills has caused these weird issues. It's a long conversation for another time…
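The fractional rates in this thread come out of simple arithmetic. A quick Python sketch (illustrative only) shows that 29.97 and 23.976 are really the exact rationals 30000/1001 and 24000/1001, and that their ratio is precisely the 5:4 of 3:2 pulldown:

```python
from fractions import Fraction

# NTSC "29.97" and film "23.976" are exactly these rationals:
ntsc = Fraction(30000, 1001)
film = Fraction(24000, 1001)

print(float(ntsc))   # 29.97002997...
print(float(film))   # 23.976023976...

# 3:2 pulldown maps every 4 film frames onto 5 video frames,
# which is exactly the ratio between the two rates:
assert ntsc / film == Fraction(5, 4)

# "Conforming" 29.97 material to 23.976 without a proper inverse
# telecine drops 1 frame in 5 - the source of the stuttering:
dropped_per_second = ntsc - film
assert dropped_per_second == Fraction(6000, 1001)   # ~5.994 frames/s
```

The 1001 denominator is the NTSC color-subcarrier legacy, which is why the "get rid of fractional frame rates" movement mentioned above would simplify streaming deliverables to clean 24/30/60.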