60ish TB. I would love to use something like tdarr, but I usually end up re encoding my sample piece per movie 5 times tinkering settings, so having some template going over them would be a no go for me. Gpu encodes are never sadisfying for me, the time for CPU encoding combined with the hobby of collecting movies is just not fisable
Tim, I started my transcode project last year, took six months and saved 30tb! I do CPU encodes and only with HD content. Best quality and file sizes Vs GPU can be achieved this way.
@@joelang6126 I'm guessing you have already done so but if not; recommend using something like unbalance to consolidate onto fewer drives in the array, thus avoiding spinning up drives when not needed!
Great stuff! I've actually been looking for a bulk transcode service for quite a while that wasn't just an ffmpg batch file. Definitely will be putting this to use.
Jeff, I hope you do a video about it ! Would really like to see it setup on Proxmox, maybe with multiple nodes and more in depth, even tho this video is already pretty good!
@@FlaxTheSeedOne it doesn't matter what you use to encode, the results should be the same . Its a compression algorithm , or just math . Why would there be any difference other than raw speed? Measured in (Millions of Inter-Operations per second) or (MIPS). the creator's point was that x and h 265 are a better storage solution than x or h 264 . Saved him nearly a terabyte. If handbrake can do the same then its just as viable .
@@tobiwonkanogy2975 according to video, encode/decode hardware logic was used. Even Turing NVENC gives result like medium CPU presset. It's kind of ok for an archive but still, quality drop might be too big for someone.
@@vadnegru I don't know enough about how things get encoded and decoded then. The cpu's have a bigger section of die that works faster than a gpu ? the only thing i have found was that cpus tend to get the encode/decode features first and then graphics later on . other info seemed to suggest gpus if speed is necessary and cpu if accuracy is more important. gpus were scalable were you mainly only have one cpu socket now . VRAM is less ECC than DDR RAM is my true guess.
It should be said that transcoding video WILL degrade the quality, especially when the source video isn't lossless. You can compare it to converting an MP3 file to a lower bitrate MP3 file. You're compressing an already compressed file format so the quality degradation is doubled. When you're working with archive footage that's saved at really high quality settings, it'll probably still look fine but don't expect this method to do you any favours when applied to a library of movies or TV shows you once ripped to an already lossy format.
Try it out. I doubt you see a difference in every Day usage between a 1080p 10 Mbit/s h264 Source converted to a 1080p 5 Mbit/s h265 file. Not saying you cant see it at all, but for general stuff its not noticeable at all imo.
This was my exact question too -- not long ago I spent waaaaay too long getting GPU encoding working under WSL2 with ffmpeg, and even though I was aware there would be a quality drop, I wasn't really expecting to be at a level where I either would notice or care. But it was immediately obvious and distracting, banding all over the place, completely distracting. CPU-reencoded files were the same size (within a percent or two either way), but looked completely indistinguishable from the source (at least to my eyes -- I'm sure somebody who knew what to look for would know!). That I was able to see the quality difference was a real eye-opener (no pun intended!) for me, and I ended up just spending the extra run-time using the CPU. It might have taken a lot longer but there's a good reason why they warn you about the quality drop for GPU. If there's a way of getting around this I'd be delighted to hear it, as the time savings would be massive!
Dude, there's no difference between cpu and gpu encoding as long as you use equal settings. If you don't know what the equivalent settings are, then that's your problem, not the encoding. Unfortunately, sometimes the settings can get very complicated, and sometimes you just need to try out a few different settings.
@@jonmayer This project is not for you then. The point here was save space and not beat up the cpu while doing it. for absolute purists who don't want to change their files they should not be changing their files.
"old", a 1050 TI is currently the GPU for my wife's Proxmox gaming server/my 3d printing host... running almost exclusively Sims 4 and Cura. :) I'm only 2 min in but I have to say, I hope you discuss your TDARR settings. I've been following this project for almost 2 years and I decided this winter, when I already want to heat my apartments, is the perfect time to really define what I want out of it and see how well it does with a mixed mashup of TV and Anime.
Seems like no one has commented on this yet, sick new camera setup! It looks awesome, kind of feels less cramped from the telephoto lens and the bokeh and framing is also nicer. I have plenty of OBS Replay Buffer clips that used NVENC H.264 CQP 25, meaning it takes a lot of space in exchange for lower resource usage (OBS is on the same PC that I game on). I already encode using HandBrake, but this looks like it’ll help use my desktop’s extra resources together with my server when I sleep.
Try looking into something like LizardFS for storage moving forward. You'll be able to scale your storage with redundancy at a software/filesystem level rather than dealing with raid arrays. Need more redundancy on a single folder, no problem. Need more space, add more drives or chunkservers. If a drive or server drops, chunks from other drives or servers will be copied to re-meet the redundancy goals, instead of a crazy raid rebuild.
Have thought about that many times in the past but I don't really see a reason to do this. Adding and running an additional hard drive, or get a larger one can be cheaper initially and is more energy efficient long-term than running a graphics card to transcode back and forth all the time when using Jellyfin or Plex. Of course this might change if you have a spare gpu lying around anyway, have one in your server anyway and energy cost is cheap where you live.
Tdarr has been a saver for me! I have used it for some time, been ripping my movie collection (I have alot)! I have saved over 1.5 TB worth of data, with no notable quality loss!
It's great for space saving but beware that H265 takes more horsepower to transcode on the fly when watching content on Plex/Jellyfin, so if your home server is a fairly low powered Synology or Raspberry Pi you might want to consider leaving it as H264 and just buy more storage, otherwise you will run into buffering!
@@ZiggleFingers For many people that defeats the entire point of having a Plex server. If your goal is to save money on streaming services you're hardly saving money if you go out and replace perfectly fine playback devices every couple of years. I'm using an Xbox 360 on one TV for the kids to stream cartoons. I don't want to give them access to a newer more expensive device they're likely to damage anyway.
@@Prophes0r Ya most modern hardware but hardware is commonly being used for longer these days. My pc is 10 years old and I dont' plan on upgrading it as I only download videos to my external hard drives which I watch on my ps4 slim and ps vita slim
Just make sure all the clients are h265 decode capable and ZERO transcoding is needed, 3rd gen firestick (2019) onwards, intel 6th gen cpu or later (or Ryzen APU), & I believe 1000 series Nvidia 500 series AMD or later all do the decoding in hardware as does a Pi4.
This is really helpful. Though I guess my 9.9TB movie library will take decades to convert since i've got only 1 node with a 3070. But my first test conversions looks like I can cut the size roughly in half 😍
You'd be surprised bro, with nvenc acceleration I was able to do a typical 1080p h264, ~5-7gb in about 20-30 minutes and I was also able to do 2x concurrently without impacting performance, and that was with a 980 too. Might actually take less time than you think!
@@bobtiji yes but different efficiency, also NVENC is only good on RTX 2000 series and up. The GTX 1000 series doesn’t have good quality encoding and the files are even larger although I’m not 100% sure on the file size.
This is true, CPU do give better quality encodes and the file size is smaller. However, on my GTX1650 vs my AMD 5800x. I can get around 4-6x better performance on my GPU vs my CPU. So make a smarter conversion setup, where stuff you really want to keep quality, use CPU, stuff you don't really care about, use GPU.
I'm pretty sure it depends on the output bitrate you set, no? In general, H265 is 30% smaller than the corresponding quality of H264. But of course, you can compress a H264 100MBit/s file into a H265 10MBit/s file, but usually it's around 70MBit/s for the same quality.
I love MicroCenter! Always make it a point to go there when I head up north. Prices are very comparable and staff is great. Thanks for this video. This is truly what I needed. I’ve been transcoding “manually” via Handbrake.
I wish there was a MicroCenter near me (or ANYWHERE IN THE PACIFIC NORTHWEST -- in case they're listening). I've also been "manually" transcoding via Handbrake (in batches with presets, at least) -- excited to try this out too!
the new Apple M1 Pro/Ultra have some serious hardware encoders (can do h.265 also), and the max can do 30 streams of 4k ProRes or 7 streams of 8k ProRes. I'm curious how well it would do at this task if you threw a Mac Mini with one of those cpu's onto your network as a compute node for your video encoding
Thanks for showing us this service and your hardware setups! Really nice to watch :-) I'm not sure of your exact settings, but not enabling 10bit would actually degrade the quality of a big part of my library as it was recorded in 10bit. Also not sure about your framerate settings. Most of my videos use 60fps, some even 120fps. If everything would get converted to 30fps, it will make any slowdown in post impossible.
does tdarr support av1 yet? i think av1 will replace h265 some day. youtube already uses it too. you might want to try using av1 on your youtube uploads to check out the quality difference
IMO if you are trying to save space and get a better quality, use CPU encoding. Else if you want to transcode a video for plex (like real time resolution switching, etc) use the GPU. I got into this topic a while ago, when i was trying to rip my blueray collection and noticed the bad quality on NVENC re-encoding.
Plex can use intel quick sync and based on os it runs on it can use nvenc of nvida and amds answer to that but you card needs to support h265 decode to transcode on the fly
GPU Nvidia encoding isn't the problem here. Problem is a Nvidia decoder. Try using CPU as decoder na Nvidia GPU as encoder. That way you will get fast transcoding and maybe even better quality compared to the CPU.
@@vedranart plex has ondemad trascodig from H265 10bit to any lower quality as long as the gpu in de server has the decoder and encoder available be it intels Quick sync AMD UVD/UVE or Nvidia NVENC or NVDEC
The Nvidia Quadro P2000 has no software limitations regarding hw transcoding. Thats why its popular for plex. You can transcode 20 streams simultaneously.
If you have alot of space processor time you should rather transcode to av1 (better then h265 and open-source the actual future of video) and opus(for audio). This will take alot of time and currently doesnt have any gpu encode support. VP9 is a good alternative to h265 for better browser and mobile device support as h265 royalties are insane.
I developed a similar although not as polished service back in 2018 when I was in uni. But I stopped development after I started working and could afford buying more hdds hehe. Glad to see that someone else had the same idea
I go way WAY back with video encoding. Spent many hours @ the doom9 forums trying to squeeze every bit out of every file while keeping all the quality. Transcoding just isn't worth it unless you have huge source files, like ripping straight from a disc. You always lose a little quality and slow storage is cheap and easy.
Storage may be cheap and easy, but serving large files can be a problem, or server will end up transcoding to a worse quality on the fly. And besides there are other reasons to use an app like Tdarr or Fileflows than just to trancode. You can remux, add new audio tracks in different codecs (my stuff cant place EAC3 for example which would cause plex to to a transcode), automatically cut commercials, add chapters, remove unwanted streams to save space etc.
@@JohnAndrews_nz You have valid points but editing streams in a container isn't quite transcoding. I'd argue that's even worse than transcoding since the gains are even smaller (most of the time) but require manual work unless every stream is tagged properly. I guess everyone has different needs but I'm done with the days of 100% CPU for weeks on end for a measly 720GB.
@@drivenbydemons6537 you are correct. that would not be transcoding. however, you can still gain a lot of storage if you strip out all the additional unwanted audio tracks. Tha can save 10-15% per video at times. No where near h264 to h264, but still do that across a few TB and you can save a couple hundred GBs I'm John BTW. Switched to this account for transparency. Was on my mobile/personal account previously.
Looks interesting if you're in need of just compressing videos for storage purposes and/or watching videos locally with a compatible media player. But, if you selfhost the videos for like website/blog/streaming service, or want to make sure the videos are playable with the most basic media player (without the need of installing codecs), then h264 is still the way to go.
We are getting to the point that most devices in the wild have built in x265/HEVC decoding. Anything released 2016 and later should have it. So it really comes down to whether or not you think supporting 7+ year old hardware is worth it. In some situations it will be. Others is will not. And there are situations where you might want to store in HEVC and live transcode back to x264 for unsupported devices.
Would be super-cool if the next version added support for ai driven metadata automation where a sidecar file is generated to facilitate searching your library (digital assets management).
*There is no benefit to running 3 off line transcode streams, as each stream simply runs @ 33% of the total Encoder horse power, especially for h.265.* The '3 stream limit' is meant to satisfy real-time h.264 online/outgoing transcodes to viewing clients. I run single offline jobs on ffmpeg in Windows for h.265 and max out NVENC, whereas I run 3 streams of outgoing h.264 that can rarely max out NVENC on my plex server due to the lighter workload.
Not entirely true, depends on your setup. You may have some CPU bound tasks to perform in the processing stack, and you also then have some storage actions to take, both of which can take time, but do no use the CPU. So completely depends on your use case.
I’ll agree with others in the comments that CPU encoding yields way better results and can be done with okay speed with the right CPU. My 1950x does it well and my 5950x does it very fast. A gpu will be faster but file sizes are much larger with no quality benefit. My 3090 can do it faster but not worth the file size.
You gotta pump those numbers up those are rookie numbers 😁 Im currently at 7.8 TB of savings 😅 i took my 2070 and Quadro P1000 over 6 Weeks of 24/7 encoding
av1 is the future of video encoding. It isn't quite here yet because there isn't much hardware assist available yet, but it totally kicks h265's butt... and like... av1 is royalty free... which maybe you don't care about... but the big commercial content providers do.
keyword being future :P cos currently the support isn't there yet. but yeah AV1 will become the standard in the not too distant future (hopefully not too distant...)
You might want to check how many streams the P2200 Quadro is limited too. It may be up to 6 or unlimited. I'm unable to check now but nvidea have a nice table with the details.
The hard-coded dependency to reach out to Github with a stateful connection, concerns me. I'll wait until a proper security audit is done of the project before continuing. It was a bit suspicious when the project tried to reach out to a dozen external IP addresses when Tdarr_Server was launched and several additional external IPs when Tdarr_Node was launched.
Tdarr has been around for about 3 years, if no one has flagged it by now, I don't think it does anything nefarious. But if you are that concerned, you should do one.
You're going to be MUUUUUCH better off CPU-encoding H265; as the GPU encoders do not have the capabilities to fine tune profiles. CPU encoding will take enormously longer, but result in a good 30% decrease in file size over the GPU encoded. The reason for this is that GPU encodes only need to do so generally for uploading to streaming services, etc. They care far more about time to encode the frame than they do quality - so they generally will use a fast preset profile that the GPU provides.
CPU/Software encoding often allows for more quality to file size ratios, as hardware encoders are often limited in their capabilities as compared to software encoders. This should not be interpreted as saying Hardware encoders can't do as good of visual quality as a CPU/Software encoder. It's strictly a quality to file size ratio. A CPU/Software encoder with some of the craziest options, will be able to do a high quality image using less storage space than a Hardware encoder will be able to. Settings and CPU depending, you can fairly easily get to messing with encode settings that will triple or more the encode time. Quality vs Time often comes down to very subjective opinions with wildly different reasons for each person. If you're doing a single video and you don't care if it takes all night to encode, a CPU/Software encoder is probably better for your purposes. If you're on a tight time line and have to whip out several videos or hours worth of video, you may opt for Hardware encoding or even a combination of hardware and software encoding (just to keep all your resources busy) and skip on some of the file size settings/savings in favor of saving time. If you have zero concern for storage space constraints, a hardware encoder and speed might be a great option for you. And that whole argument of time saved vs storage size requirements is also going to be extremely reliant on the hardware *you* have available. A 5950x will pretty handily chew through some software video encoding, and if you have one of those CPUs, and you're looking for quality, you may as well use it instead of hardware encoding. But if you have a 4 core CPU and a decent GPU, you may opt for the hardware encoding option. It's all going to be very case by case, person by person... subjective. Also, a great rule of thumb is that you should always try to encode from the source if possible. Transcoding introduces quality losses and those can vary depending on exactly what you're doing. 
If by chance, when you first encoded the video, you chose to use exceptional quality settings (far in surplus of what you really needed or maybe you were targeting near lossless image quality to start with), you can likely get away with a transcode to a more efficient format, and visually see minimal quality loss, if at all. If instead, you were very aggressive, aiming for small file sizes to start with, say 480p with a bit rate that made reading text on a youtube comment section difficult to read, you should probably not transcode the file at all. And let's face it, anyone who has owned a computer for the last 10 to 20 years, is likely to have some of those old 480p videos where the settings were very obviously aggressive for storage / bandwidth savings. Leave those videos alone lol.. unless you can recreate the source material or have the source material to do a redo.
I've been planning on doing this to my 1TB+ family vids folder (h.264 1080p30 to h.265), but I'm currently waiting on AV1 hardware encoding GPUs from Intel.
This is one of those cases I really don't understand who it's supposed to target. It's still way too complicated and too much hassle for a random schmuck to deal with but at the same time someone who knows what all of this is about will have much more specific requirements and already know how to handle transcoding in a way that suits them much better.
I'm assuming you mean the video and not the software? Either way, I don't know if I'd qualify as as a random schmuck as I build/repair computers, have hosted my own servers etc. But I'm FAR from being a professional, everything I know I've learned by doing it; no education or anything. Encoding/decoding/transcoding are a very weak point for me, as are parts of networking. So for me this seems an awesome way to get started messing with this stuff. So I guess I'm who this would be targeting.
@@Mad-Lad-Chad Of course I was being hyperbolic, I can see the positive comments after all. But I genuinely don't understand why someone would think it's preferable to learn how to operate software which has this kind of narrow focus but is not at all beginner-friendly compared to a one-button-solution, rather than ffmpeg, which is barely (for the same kind of purpose) more complicated and versatile enough to handle virtually any kind of encoding-related task. Granted, I did of course try many GUIs over the years as well (many of which are just frontends for ffmpeg anyway), but I always go back to command line because it's just so much more powerful and reliable.
@@EireBallard A lot of users hate command line, but short of that it would be due to lack of knowledge I imagine. I thought this software could do more than just this specific conversion, but admittedly I was mostly listening to the video in the background as noise. I'm not even really sure what ffmpeg is.
Great video. Still, it is completely incomprehensible to me that people are still talking about "the new codec". H.265 was confirmed as a new standard by the ITU in 2013. I'm more of an AV1 friend.
I'm afraid that the electricity cost for all that gpus running hard for so long might turn out to be much higher than just expanding the storage. I was considerig similar endeavour but since most of my video library is movies and tv shows I can just redownload them in h265 for basically no added system load to my server. Interesing video and toolkit nonetheless.
@@fileflows204 At that point though you're just taking the extra cost and splitting it up to be done in significantly smaller but more frequent chunks aren't you? The amount of electricity used will still be the same for the same number of files but spread out, unless I'm missing something?
Oh gonna use a dedicated rpi for this. shrink my media library without taking up my local servers resources and wasting energy that a fully utilized x86 CPU would do. I love having a project with real purpose other than me just testing.
Seems like most video apps that are good, this is based off ffmpeg under the hood. At this point I’ve seen it used so often I’m just using it outright along with some simple bash scripts. Nice find for a distributed transcoding solution though!
I started with a rpi (RIP it died.. I made a custom ice holder to cool the cpu.. had to refill the ice canister every morning). Now i've worked up to a whole cluster with my 3090 being the heavy lifter. Here are my results for anyone interested. Performance: rpi < 1 fps M1 mac ca 1-5 fps 4vCPU VM esxi i7-4770k 24 fps RTX 3090 169 fps Not sure if there is any point to letting the RTX 3090 work on more files than 1 since my encoder is also at 100% *EDIT* So I added 2 CPU jobs and 2 GPU jobs on my gaming rig. Even though my encoder is maxed i still got 160 fps on both GPU tasks. So it is worth doing multiple tasks eventhough the encoder says its maxed out. on 720 footage RTX 3090 and I9 9900k stats with two tasks each(same results as if i let one GPU and CPU do one each): CPU tasks both get ca50 fps GPU tasks both get ca220 fps. So far making it a total of ca537 fps from my gaming rig. Pretty sweet.
I use handbrake to take direct blurays to mkv using MakeMKV with 5.1 sound, and it will drop it from 25gb to 4-5gb each file. works great with a 4tb external hd and a sony BPD-S5500
Tim, thank you for this. I've been using handbrake for years and looking for an alternative. My Plex drive currently has 60GB free of 8TB, excited to get home and give this a go
@@ryanmckee6348 well don't fix it if it ain't broken 😉 i'd rather focus on one software and master my skills there instead of many different ones - for such a use case. Also too not too sure, if the features are on par in respect of movies, like sub titles, audio etc... That makes more sense for you tube material but not necessarily for movies
Based on the transcode options seen in your video. Transcoded videos might have less video bitrates. "Settings are dependant on file bitrate Working by the logic that H265 can support the same ammount of data at half the bitrate of H264." I guess the disk space reclaimed comes mainly from that bit rate decrease.
It should be pointed out that there are a lot of options for encoding h265 videos. Tdarr is not unique in this regard. I'm unlikely to switch from FFmpeg and x265.
Short followup video? Can we see a difference before and after on a TechnoTim how to video? Are your recording direct to h.265 now? Etc. Thanks for considering.
Thanks for the video, i noticed 720p videos has a noticeable degrade of quality whereas on 1080p is very hard to spot. Also I was wondering is there any possibility to add multiple GPUs on 1 node for transcoding?
Great video, but I have a question: Did you have to create the other nodes on other computer to do that? Couldn’t you just add more GPU on the same server and create additional nodes on it? Thanks
Remeber, NVENC H.265 is way worse than CPU H.265 encoding. Yes it's fast but extremely inefficient. You could've saved double the space if it was CPU encoding.
This can be true but lots of people with large library rather faster encoding, the quality with GPU and the saving is pretty good without the long wait.
H265 encoding takes to long, more electricity usage, for not much gain in file size. You have to let it run a really long time to get small file sizes w/ acceptable quality. If you don’t care about quality then you can have really small file sizes. But tweaking encoder settings to get acceptable quality w/ small file sizes w/ h265 is tedious.
couldn't you just convert your backup server into an encoding server? Like put all 3 GPUs in the server and then throw proxmox on and pass through all the GPUs to 3 separate containers/VMs?
Both H264 and H265 are lossy compression schemes. Just be aware the going from one lossy codec to another, you will lose quality. This should really be called a re-encode rather than a transcode. In a transcode, the conversion is happening on-the-fly and then served to a client without changing the source files. The quality loss may not be a big deal to you or noticeable to the eye, but it is happening. Lossless codecs (like FLAC for audio), are fundamentally different in that you can convert between them with no generational degradation.
From what I've seen to date hardware accelerated transcoding has been developed to transcode on the fly video well, but at the expense of file size. I haven't tried this software though, so maybe this is better? I've used handbrake for 90% of my videos, and software transcoding has always provided the superior quality/file size over hardware accelerated options. Would be interested in seeing a comparison. I don't have a server myself, so Idk how to run Tdarr. lol
Tdarr/FileFlows both use ffmpeg under the hood, so aslong as you that has access to your GPU, and your GPU has the support hardware encoding, then they will work fine.
i wish you explained how you determined which nvidia driver you used. i've been looking around for hours trying to get that smi thing to work. what made you choose the 510 over any other ins the list??
2.1TB saved on my backup OTA recording server, probably 8 on my main server, but there isnt good stats on that because i had been using the predecessor to TDARR, handbrakebatchbeast, which is sometimes more reliable than TDARR if you are using CLI arguments on handbrake Side note, i was blow away with how fast a cheap alderlake processor transcodes videos, no joke, i saw frame rates over 1000FPS going from MPEG2 to H265, but the hardware transcoding issue remains un-changed. Everything, from AMD VCE, to NVidia NVENC, or Apple Video ToolBox WILL make the average file significantly larger than CPU transcoding Side side note, this issue seems to be mostly limited to GPU transcoding between different families of codecs, going from H264 to H265 is great, but going from MPEGX to H26X is just terrible
Will TDARR work with a Google Coral TPU for transcoding? Can you run multiple GPUs in one server for TDARR? Basically, build a strictly transcoding machine. Can TDARR be used to compress cctv nvr files?
Looks like it uses handbrake or ffmpeg. (eg in settings there are options like handbrakePath and ffmpegPath) Seems like a GUI script that feeds videos to servers/nodes that run them. Haven't looked in detail to see if they have their own encoding software other than really using opensource to run their subscription based business... 😆
Because its a lot easier to automate Tdarr than it is handbrake. Sure if you want one off things, handbrake is fine. But if you want to automate hundreds, thousands, tdarr is much easier.
@@fileflows204 u mean the Queue option in handbrake does not work anymore? EGADS!! .... oh wait. I just checked, yes we can still queue whole directories or file lists in one quick process... not sure how it can be easier than that... need to run multiple handbrakes on networked computers off one file list? No problem, split the list into parts, and queue each list on a separate machine and have all files shared on the network... or if u want to stream the files to the encoder, substitute FFMPEG (like tdtard does)..
@@mhavock well you have to make the queue... so that's a step... so that's not quicker. Tdarr its automatic. It no steps once setup. But if you want to use handbrake, go ahead, no one is forcing you.
the videos that you are saving are your source files? wouldnt you want to save those in the highest quality format possible, instead of a lossy one? can tdarr support multiple video cards on the same node?
There's also a line to be drawn I think. Never let perfect be the enemy of good enough. Do you need raw footage of your desk shots? Or would a compressed version serve the same purpose for next to no disadvantage. Granted if its your wedding video, you probably want to preserve the pixels but if its shaky cam footage of your holiday visit to the zoo would reencoding it from h264 to h265 really have any detrimental impact?
Should also point out that you shoudn't use this for Sonarr/Radarr content that has been arr'd (Torrented), as it'll break the filehash and be invalid (you won't be seeding mismatched files anymore... I personally keep the hardlinks and always seed what I've downloaded so this is a no-go for TV/movie files... maybe I would do it for some gaming videos or phone video backups......
sadly I have a 11th Gen Intel and an AMD gpu, (no nvidia cards available thanks to bitcoin mining), so to convert my collection at 1hr 30mins a time would take more years than I have left, and I'm only in my 50's !!!
Are you using Longhorn to store your media?! Longhorn doesn't support TRIM on volumes, so if you are using it, you aren't really saving space until you copy the data to a new Longhorn volume. Also are you storing on SSD's or HDD's? Longhorn is really bad on HDD's and I wouldn't dream of using it for media storage with replication.
Hey, just stopped by here to suggest a video using the clouds' (Google, Amazon, Oracle, Azure) free-tier GPU options to help speed conversions along. I have a new Dell server with a 9th-gen i7, and a 3700X with a 3070, and the amount of time it takes to convert to H.265 is killing me.
Tim, great content. Always learning something. Have you looked at Unmanic by any chance? I have been using your method with Tdarr since this video went up, but I'm thinking of changing to Unmanic, as the file sizes are much smaller and I can't tell any difference in the video quality. E.g., I converted my Blu-ray Star Wars from around 10 GB to between 6 GB and 8 GB using different settings/plugins with Tdarr. Using Unmanic, I got the size down to 2.5 GB.
Transcoding will lead to losing quality unless you are working with a lossless format. If you are ripping your content from CD/BD, then MP3/HEVC are a good idea. But going from another lossy codec like H.264 to H.265, you are going to lose quality.
In your video you have assigned 1 movie to 1 GPU/container, but as I found out, H.265 transcoding with CPU encoding is way more efficient (smaller files with better quality). Is it possible to use multiple Tdarr instances (Docker containers) and have each of them process only a few minutes of a single movie/clip, instead of 1 full movie each? I have a 3-node Docker swarm. In a scenario in which a movie is 30 minutes long, I would assign a 10-minute clip to each node and let Tdarr transcode and re-compose the file as output. If you want a faster system you would just add Docker swarm nodes, and you would be able to cut transcode time by the number of nodes you have. I saw an Italian guy called morrowlinux do something similar with distributed ffmpeg, but I have not understood how he did it, nor have I found any reference in the ffmpeg libraries. Can Tdarr be used for this, or does anything exist that allows something similar?
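The chunk-per-node idea above isn't a Tdarr feature, but it can be sketched with ffmpeg itself: split losslessly into segments, encode each segment on a different node, then rejoin with the concat demuxer. This is only an illustrative sketch; the file names, the 600-second segment length, and the CRF value are all assumptions, and the commands would be run with something like `subprocess.run`.

```python
def split_cmd(src, seg_seconds=600):
    # Losslessly split the source into ~10-minute chunks (stream copy;
    # cuts land on keyframes, so chunk lengths are approximate)
    return ["ffmpeg", "-i", src, "-map", "0", "-c", "copy",
            "-f", "segment", "-segment_time", str(seg_seconds),
            "-reset_timestamps", "1", "chunk_%03d.mkv"]

def encode_cmd(chunk, out):
    # Each swarm node re-encodes its own chunk with CPU x265
    return ["ffmpeg", "-i", chunk, "-c:v", "libx265", "-preset", "medium",
            "-crf", "22", "-c:a", "copy", out]

def concat_cmd(list_file, out):
    # Rejoin the encoded chunks without another re-encode;
    # list_file contains lines like: file 'enc_000.mkv'
    return ["ffmpeg", "-f", "concat", "-safe", "0", "-i", list_file,
            "-c", "copy", out]
```

Because the split and join are stream copies, only the middle step costs CPU time, which is what gets spread across the nodes.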
Tim, is it possible to make a video showing how to use an AMD GPU? Tdarr seems to only have Nvidia support. I have HandBrake and have exported the .json file for the presets; however, I am not sure how to get that into Tdarr.
unRAID (and Space Invader One) makes it so even a dumb-dumb like me can use the Intel HD graphics on my i9-9900 and my 3070 at the same time on the same server. I would even include my RX 6600 XT if AMD encoding weren't still crap (and I believe there isn't a profile in Tdarr for it).
** FINAL COUNT ** 720 GB of disk space reclaimed! How much space is your video collection taking up?
thats top secret
8.5TB atm -_- ughhh definitely going to try this
I reclaimed about 660 GB already. Started couple of months ago. Good video
About 3TB -- ran tdarr and converted to h265 and saved just over 1TB!
60ish TB. I would love to use something like Tdarr, but I usually end up re-encoding my sample piece per movie 5 times while tinkering with settings, so having some template go over them would be a no-go for me. GPU encodes are never satisfying for me, and the time for CPU encoding combined with the hobby of collecting movies is just not feasible.
Tim, I started my transcode project last year, took six months and saved 30tb!
I do CPU encodes and only with HD content. Best quality and file sizes Vs GPU can be achieved this way.
Holy shit!?
30tb savings! that's pretty massive. What was the beginning size of the library?
@@Trains-With-Shane About 60 TB of x264 content. I now have 50 TB free on my unRAID array.
@@joelang6126 I'm guessing you have already done so but if not; recommend using something like unbalance to consolidate onto fewer drives in the array, thus avoiding spinning up drives when not needed!
@@transatlant1c Already done mate lol. Drives are filled to 97% capacity before starting writing to empty drives.
Great stuff! I've actually been looking for a bulk transcode service for quite a while that wasn't just an ffmpeg batch file. Definitely will be putting this to use.
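For contrast, the "ffmpeg batch file" approach that comment mentions amounts to roughly the sketch below: walk the library and emit one re-encode command per file. The filename-based "already converted" check and the CRF value are assumptions for illustration; Tdarr and similar tools probe the actual codec instead of trusting names.

```python
from pathlib import Path

def encode_command(src, crf=22):
    # One CPU x265 re-encode, copying the audio untouched;
    # crf 22 is a commonly used middle-ground quality target (assumed here)
    out = src.rsplit(".", 1)[0] + ".x265.mkv"
    return ["ffmpeg", "-i", src, "-c:v", "libx265",
            "-crf", str(crf), "-c:a", "copy", out]

def batch_commands(root):
    # Walk the library and yield a command per file not already marked x265
    # (a naive filename check, unlike a real codec probe)
    for f in sorted(Path(root).rglob("*.mkv")):
        if ".x265" not in f.name:
            yield encode_command(str(f))
```

A batch file like this works, but it has no queueing, retries, health checks, or multi-node distribution, which is the gap tools like Tdarr fill.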
Thanks! I’d love to see you crush it with 5 nodes running GPUs!
Jeff, I hope you do a video about it! Would really like to see it set up on Proxmox, maybe with multiple nodes and more in depth, even though this video is already pretty good!
@@FlaxTheSeedOne It doesn't matter what you use to encode; the results should be the same. It's a compression algorithm, or just math. Why would there be any difference other than raw speed, measured in millions of instructions per second (MIPS)? The creator's point was that x265/H.265 is a better storage solution than x264/H.264. Saved him nearly a terabyte. If HandBrake can do the same, then it's just as viable.
@@tobiwonkanogy2975 According to the video, dedicated encode/decode hardware logic was used. Even Turing NVENC gives results like a medium CPU preset. It's kind of OK for an archive, but still, the quality drop might be too big for someone.
@@vadnegru I don't know enough about how things get encoded and decoded then.
Do CPUs have a bigger section of die that works faster than a GPU's?
The only thing I have found was that CPUs tend to get the encode/decode features first, and graphics cards later on.
Other info seemed to suggest GPUs if speed is necessary and CPUs if accuracy is more important. GPUs are scalable, whereas you mainly only have one CPU socket now.
My best guess is that VRAM is less likely to be ECC than DDR RAM.
New Techno Tim video, best part about Saturdays! I've been kicking around the idea of setting up a transcode server too.
It should be said that transcoding video WILL degrade the quality, especially when the source video isn't lossless. You can compare it to converting an MP3 file to a lower-bitrate MP3 file. You're compressing an already compressed file format, so the quality degradation compounds. When you're working with archive footage that's saved at really high quality settings, it'll probably still look fine, but don't expect this method to do you any favours when applied to a library of movies or TV shows you once ripped to an already lossy format.
Try it out. I doubt you'd see a difference in everyday usage between a 1080p 10 Mbit/s H.264 source converted to a 1080p 5 Mbit/s H.265 file. Not saying you can't see it at all, but for general stuff it's not noticeable at all, IMO.
Not to mention using the GPU to encode isn't the best way either. Sure it's faster, but there can be a quality trade off there too.
This was my exact question too -- not long ago I spent waaaaay too long getting GPU encoding working under WSL2 with ffmpeg, and even though I was aware there would be a quality drop, I wasn't really expecting it to be at a level where I would either notice or care. But it was immediately obvious and distracting, with banding all over the place. CPU re-encoded files were the same size (within a percent or two either way), but looked completely indistinguishable from the source (at least to my eyes -- I'm sure somebody who knew what to look for would notice something!).
That I was able to see the quality difference was a real eye-opener (no pun intended!) for me, and I ended up just spending the extra run-time using the CPU. It might have taken a lot longer but there's a good reason why they warn you about the quality drop for GPU. If there's a way of getting around this I'd be delighted to hear it, as the time savings would be massive!
Dude, there's no difference between cpu and gpu encoding as long as you use equal settings. If you don't know what the equivalent settings are, then that's your problem, not the encoding. Unfortunately, sometimes the settings can get very complicated, and sometimes you just need to try out a few different settings.
@@jonmayer This project is not for you then. The point here was to save space and not beat up the CPU while doing it. Absolute purists who don't want to change their files should not be changing their files.
"old", a 1050 TI is currently the GPU for my wife's Proxmox gaming server/my 3d printing host... running almost exclusively Sims 4 and Cura. :)
I'm only 2 min in, but I have to say, I hope you discuss your Tdarr settings. I've been following this project for almost 2 years, and I decided this winter, when I already want to heat my apartment, is the perfect time to really define what I want out of it and see how well it does with a mixed mashup of TV and anime.
OMG! At the 5:45 Mark, that tip on the plug-in paths literally saved me another 4 hours of research. I'll give you a SUB just for that... THX so much!
So far, I've saved 23.32TB using Tdarr - it's friggin stellar. Glad to see you're enjoying it too!!!
Thank you!
Seems like no one has commented on this yet, sick new camera setup! It looks awesome, kind of feels less cramped from the telephoto lens and the bokeh and framing is also nicer.
I have plenty of OBS Replay Buffer clips that used NVENC H.264 CQP 25, meaning it takes a lot of space in exchange for lower resource usage (OBS is on the same PC that I game on). I already encode using HandBrake, but this looks like it’ll help use my desktop’s extra resources together with my server when I sleep.
Noticed the gesture when saying thanks, having a deaf sister I found that pleasant to see
Try looking into something like LizardFS for storage moving forward.
You'll be able to scale your storage with redundancy at a software/filesystem level rather than dealing with raid arrays.
Need more redundancy on a single folder, no problem. Need more space, add more drives or chunkservers.
If a drive or server drops, chunks from other drives or servers will be copied to re-meet the redundancy goals, instead of a crazy raid rebuild.
I have thought about that many times in the past, but I don't really see a reason to do this. Adding and running an additional hard drive, or getting a larger one, can be cheaper initially and is more energy-efficient long-term than running a graphics card to transcode back and forth all the time when using Jellyfin or Plex.
Of course this might change if you have a spare gpu lying around anyway, have one in your server anyway and energy cost is cheap where you live.
Tdarr has been a lifesaver for me! I have used it for some time, ripping my movie collection (I have a lot)! I have saved over 1.5 TB worth of data, with no notable quality loss!
What were the plugins that you used ?
@@supratiksarkar3422 Tdarr_Plugin_s7x9_winsome_h265_10bit and Tdarr_Plugin_x7ab_Remove_Subs
It's great for space saving but beware that H265 takes more horsepower to transcode on the fly when watching content on Plex/Jellyfin, so if your home server is a fairly low powered Synology or Raspberry Pi you might want to consider leaving it as H264 and just buy more storage, otherwise you will run into buffering!
Most modern hardware has H.265/HEVC decoding support, even RasPis.
The killer is driver support.
@@ZiggleFingers For many people that defeats the entire point of having a Plex server. If your goal is to save money on streaming services you're hardly saving money if you go out and replace perfectly fine playback devices every couple of years. I'm using an Xbox 360 on one TV for the kids to stream cartoons. I don't want to give them access to a newer more expensive device they're likely to damage anyway.
@@Prophes0r Yeah, most modern hardware, but hardware is commonly being used for longer these days. My PC is 10 years old and I don't plan on upgrading it, as I only download videos to my external hard drives, which I watch on my PS4 Slim and PS Vita Slim.
Just make sure all the clients are H.265-decode capable and ZERO transcoding is needed: 3rd-gen Fire Stick (2019) onwards, Intel 6th-gen CPU or later (or a Ryzen APU), and I believe Nvidia 1000 series or AMD 500 series and later all do the decoding in hardware, as does a Pi 4.
This is really helpful. Though I guess my 9.9TB movie library will take decades to convert since i've got only 1 node with a 3070.
But my first test conversions looks like I can cut the size roughly in half 😍
Just thought of a new coin mining scheme
You'd be surprised, bro: with NVENC acceleration I was able to do a typical 1080p H.264 file (~5-7 GB) in about 20-30 minutes, and I was also able to do 2 concurrently without impacting performance, and that was with a 980 too. Might actually take less time than you think!
GPU H.265 is still about 1.5-2x the size of CPU H.265 at the same quality. So if you are "archiving" I would advise against GPU transcoding.
how can that be? isn't it the same codec?
@@bobtiji Yes, but with different efficiency. Also, NVENC is only good on the RTX 2000 series and up. The GTX 1000 series doesn't have good-quality encoding, and the files are even larger, although I'm not 100% sure on the file size.
@@owlmostdead9492 uh. the more you know.
This is true: CPUs do give better-quality encodes, and the file size is smaller. However, comparing my GTX 1650 vs. my AMD 5800X, I can get around 4-6x better performance on my GPU than my CPU. So make a smarter conversion setup: for stuff where you really want to keep quality, use the CPU; for stuff you don't really care about, use the GPU.
I'm pretty sure it depends on the output bitrate you set, no? In general, H.265 is ~30% smaller than H.264 at the corresponding quality. And of course, you can compress an H.264 100 Mbit/s file into an H.265 10 Mbit/s file, but usually it's around 70 Mbit/s for the same quality.
THIS is definitely worth considering. Thx Tim!
I love MicroCenter! Always make it a point to go there when I head up north. Prices are very comparable and staff is great.
Thanks for this video. This is truly what I needed. I’ve been transcoding “manually” via Handbrake.
I wish there was a MicroCenter near me (or ANYWHERE IN THE PACIFIC NORTHWEST -- in case they're listening). I've also been "manually" transcoding via Handbrake (in batches with presets, at least) -- excited to try this out too!
The new Apple M1 Pro/Max/Ultra have some serious hardware encoders (they can do H.265 too), and the Max can do 30 streams of 4K ProRes or 7 streams of 8K ProRes. I'm curious how well it would do at this task if you threw a Mac mini with one of those chips onto your network as a compute node for your video encoding.
Thanks for showing us this service and your hardware setups! Really nice to watch :-)
I'm not sure of your exact settings, but not enabling 10bit would actually degrade the quality of a big part of my library as it was recorded in 10bit. Also not sure about your framerate settings. Most of my videos use 60fps, some even 120fps. If everything would get converted to 30fps, it will make any slowdown in post impossible.
Does Tdarr support AV1 yet? I think AV1 will replace H.265 some day. YouTube already uses it too. You might want to try using AV1 on your YouTube uploads to check out the quality difference.
As of current, 3/3/24, tdarr does support av1. Just in case
IMO, if you are trying to save space and get better quality, use CPU encoding. If you want to transcode a video for Plex (like real-time resolution switching, etc.), use the GPU.
I got into this topic a while ago, when I was trying to rip my Blu-ray collection and noticed the bad quality of NVENC re-encoding.
Plex can use Intel Quick Sync and, depending on the OS it runs on, Nvidia's NVENC or AMD's answer to that, but your card needs to support H.265 decode to transcode on the fly.
Nvidia GPU encoding isn't the problem here; the problem is the Nvidia decoder. Try using the CPU as the decoder and the Nvidia GPU as the encoder. That way you will get fast transcoding and maybe even better quality compared to the CPU alone.
@@vedranart plex has ondemad trascodig from H265 10bit to any lower quality as long as the gpu in de server has the decoder and encoder available be it intels Quick sync AMD UVD/UVE or Nvidia NVENC or NVDEC
@@sojab0on Does 12th gen Intel quick sync have it?
@@vedranart Does it matter which GPU you have? I have a spare 1660 Ti and a spare RTX 3060. Is one better than the other at transcoding?
I'd love to hear what you're doing for Digital Asset Management to be able to make use of all of that archived video footage.
you're a life saver! I've been recording my gameplay to my NAS and I've been able to reclaim about 50% of my storage back ! (300GB)
The Nvidia Quadro P2000 has no software limitations regarding HW transcoding. That's why it's popular for Plex. You can transcode 20 streams simultaneously.
If you have a lot of spare processor time, you should instead transcode to AV1 (better than H.265, open source, and the actual future of video) and Opus (for audio). This will take a lot of time and currently doesn't have any GPU encode support. VP9 is a good alternative to H.265 for better browser and mobile device support, as H.265 royalties are insane.
I developed a similar, although not as polished, service back in 2018 when I was in uni. But I stopped development after I started working and could afford to buy more HDDs, hehe. Glad to see that someone else had the same idea.
I go way WAY back with video encoding. Spent many hours @ the doom9 forums trying to squeeze every bit out of every file while keeping all the quality. Transcoding just isn't worth it unless you have huge source files, like ripping straight from a disc. You always lose a little quality and slow storage is cheap and easy.
Storage may be cheap and easy, but serving large files can be a problem, or the server will end up transcoding to a worse quality on the fly. And besides, there are other reasons to use an app like Tdarr or FileFlows than just to transcode. You can remux, add new audio tracks in different codecs (my stuff can't play EAC3, for example, which would cause Plex to do a transcode), automatically cut commercials, add chapters, remove unwanted streams to save space, etc.
@@JohnAndrews_nz You have valid points but editing streams in a container isn't quite transcoding. I'd argue that's even worse than transcoding since the gains are even smaller (most of the time) but require manual work unless every stream is tagged properly. I guess everyone has different needs but I'm done with the days of 100% CPU for weeks on end for a measly 720GB.
@@drivenbydemons6537 You are correct, that would not be transcoding. However, you can still gain a lot of storage if you strip out all the additional unwanted audio tracks. That can save 10-15% per video at times. Nowhere near H.264 to H.265, but still, do that across a few TB and you can save a couple hundred GBs.
I'm John BTW. Switched to this account for transparency. Was on my mobile/personal account previously.
The sign language "thank you" was such a cherry on top. Even for a hearing person.
Looks interesting if you're in need of just compressing videos for storage purposes and/or watching videos locally with a compatible media player. But, if you selfhost the videos for like website/blog/streaming service, or want to make sure the videos are playable with the most basic media player (without the need of installing codecs), then h264 is still the way to go.
Yeah, just storage
We are getting to the point that most devices in the wild have built-in H.265/HEVC decoding. Anything released 2016 and later should have it.
So it really comes down to whether or not you think supporting 7+ year old hardware is worth it.
In some situations it will be; in others it will not. And there are situations where you might want to store in HEVC and live-transcode back to H.264 for unsupported devices.
New subscriber and amazed at your videos, and this one with tdarr is fantastic, well done sir!!
Would be super cool if the next version added support for AI-driven metadata automation, where a sidecar file is generated to facilitate searching your library (digital asset management).
*There is no benefit to running 3 offline transcode streams, as each stream simply runs at 33% of the total encoder horsepower, especially for H.265.* The "3 stream limit" is meant to satisfy real-time H.264 online/outgoing transcodes to viewing clients. I run single offline jobs with ffmpeg on Windows for H.265 and max out NVENC, whereas I run 3 streams of outgoing H.264 that can rarely max out NVENC on my Plex server due to the lighter workload.
Not entirely true; it depends on your setup. You may have some CPU-bound tasks to perform in the processing stack, and you also have some storage actions to take, both of which can take time but do not use the GPU encoder. So it completely depends on your use case.
Let's try H.265+: an additional ~20% size reduction!
I'll agree with others in the comments that CPU encoding yields way better results and can be done at OK speed with the right CPU. My 1950X does it well and my 5950X does it very fast. A GPU will be faster, but file sizes are much larger with no quality benefit. My 3090 can do it faster, but it's not worth the file size.
Great video and software! I'm on track to reclaim about 30% myself. Thank you and keep up the great work!
You gotta pump those numbers up those are rookie numbers 😁
I'm currently at 7.8 TB of savings 😅 It took my 2070 and Quadro P1000 over 6 weeks of 24/7 encoding.
AV1 is the future of video encoding. It isn't quite here yet because there isn't much hardware assist available yet, but it totally kicks H.265's butt... and AV1 is royalty-free... which maybe you don't care about... but the big commercial content providers do.
Keyword being future :P because currently the support isn't there yet. But yeah, AV1 will become the standard in the not-too-distant future (hopefully not too distant...).
You might want to check how many streams the Quadro P2200 is limited to. It may be up to 6, or unlimited. I'm unable to check now, but Nvidia has a nice table with the details.
Sounds similar to Sonarr, Radarr, and Lidarr. I don't doubt that's one of the intended use cases.
The hard-coded dependency to reach out to Github with a stateful connection, concerns me. I'll wait until a proper security audit is done of the project before continuing. It was a bit suspicious when the project tried to reach out to a dozen external IP addresses when Tdarr_Server was launched and several additional external IPs when Tdarr_Node was launched.
Tdarr has been around for about 3 years, if no one has flagged it by now, I don't think it does anything nefarious. But if you are that concerned, you should do one.
Why would tdarr need to access anything outside of your home network?
thanks for the heads up, I’m skipping tdarr.
@@majorgear1021 You can ask them, they are on github and discord.
Love me some Tdarr!!!
Well, now I can finally hang on to my footage. I was always too lazy to use ffmpeg scripts to compress my video files, but this looks promising 😅
You're going to be MUUUUUCH better off CPU-encoding H.265, as the GPU encoders don't have the capability to fine-tune profiles. CPU encoding will take enormously longer, but results in a good 30% decrease in file size over the GPU encode. The reason for this is that GPU encoders generally exist for real-time jobs like streaming: they care far more about time to encode the frame than they do about quality, so they generally use a fast preset profile that the GPU provides.
CPU/Software encoding often allows for more quality to file size ratios, as hardware encoders are often limited in their capabilities as compared to software encoders. This should not be interpreted as saying Hardware encoders can't do as good of visual quality as a CPU/Software encoder. It's strictly a quality to file size ratio. A CPU/Software encoder with some of the craziest options, will be able to do a high quality image using less storage space than a Hardware encoder will be able to.
Depending on settings and CPU, you can fairly easily get into encode settings that will triple or more the encode time. Quality vs. time often comes down to very subjective opinions, with wildly different reasons for each person. If you're doing a single video and you don't care if it takes all night to encode, a CPU/software encoder is probably better for your purposes. If you're on a tight timeline and have to whip out several videos or hours' worth of video, you may opt for hardware encoding, or even a combination of hardware and software encoding (just to keep all your resources busy), and skip some of the file-size settings/savings in favor of saving time. If you have zero concern for storage space constraints, a hardware encoder and speed might be a great option for you.
And that whole argument of time saved vs storage size requirements is also going to be extremely reliant on the hardware *you* have available. A 5950x will pretty handily chew through some software video encoding, and if you have one of those CPUs, and you're looking for quality, you may as well use it instead of hardware encoding. But if you have a 4 core CPU and a decent GPU, you may opt for the hardware encoding option. It's all going to be very case by case, person by person... subjective.
Also, a great rule of thumb is that you should always try to encode from the source if possible. Transcoding introduces quality losses and those can vary depending on exactly what you're doing.
If by chance, when you first encoded the video, you chose exceptional quality settings (far in surplus of what you really needed, or maybe you were targeting near-lossless image quality to start with), you can likely get away with a transcode to a more efficient format and see minimal quality loss, if any. If instead you were very aggressive, aiming for small file sizes to start with, say 480p at a bitrate that made text in a YouTube comment section difficult to read, you should probably not transcode the file at all. And let's face it, anyone who has owned a computer for the last 10 to 20 years is likely to have some of those old 480p videos where the settings were very obviously aggressive for storage/bandwidth savings. Leave those videos alone lol... unless you can recreate the source material or have the source material to do a redo.
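The CPU-vs-GPU trade-off described above boils down to two ffmpeg invocations like the following sketch. The specific preset and quality numbers are assumptions for illustration (NVENC's `p1`-`p7` presets and `-cq` constant-quality mode require a reasonably recent ffmpeg build and an Nvidia card), not recommendations from the video.

```python
def cpu_hevc(src, out):
    # Software x265: slow, but CRF + a slow preset squeezes the best
    # quality-per-byte out of the encoder
    return ["ffmpeg", "-i", src, "-c:v", "libx265", "-preset", "slow",
            "-crf", "20", "-c:a", "copy", out]

def gpu_hevc(src, out):
    # NVENC HEVC: far faster; -cq is the rough constant-quality analogue
    # of CRF, but expect larger files at comparable visual quality
    return ["ffmpeg", "-hwaccel", "cuda", "-i", src, "-c:v", "hevc_nvenc",
            "-preset", "p7", "-rc", "vbr", "-cq", "24", "-c:a", "copy", out]
```

A CRF/CQ value is not comparable across the two encoders, which is one reason "same settings" comparisons between CPU and GPU encodes are misleading.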
Wow dude, handling electrical equipment and heavy equipment with bare feet is too wild to watch.. 😅
I've been planning on doing this to my 1TB+ family vids folder (h.264 1080p30 to h.265), but I'm currently waiting on AV1 hardware encoding GPUs from Intel.
So glad to see your channel grow from sub 10k to almost 100k. Good job man
Thank you so much!
This is one of those cases where I really don't understand who it's supposed to target. It's still way too complicated and too much hassle for a random schmuck to deal with, but at the same time, someone who knows what all of this is about will have much more specific requirements and already know how to handle transcoding in a way that suits them much better.
I'm assuming you mean the video and not the software? Either way, I don't know if I'd qualify as a random schmuck, as I build/repair computers, have hosted my own servers, etc. But I'm FAR from being a professional; everything I know I've learned by doing it, no education or anything. Encoding/decoding/transcoding are a very weak point for me, as are parts of networking. So for me this seems an awesome way to get started messing with this stuff. So I guess I'm who this would be targeting.
@@Mad-Lad-Chad Of course I was being hyperbolic; I can see the positive comments after all. But I genuinely don't understand why someone would think it's preferable to learn how to operate software which has this kind of narrow focus but is not at all beginner-friendly compared to a one-button solution, rather than ffmpeg, which is barely more complicated (for the same kind of purpose) and versatile enough to handle virtually any kind of encoding-related task. Granted, I did of course try many GUIs over the years as well (many of which are just frontends for ffmpeg anyway), but I always go back to the command line because it's just so much more powerful and reliable.
@@EireBallard A lot of users hate command line, but short of that it would be due to lack of knowledge I imagine. I thought this software could do more than just this specific conversion, but admittedly I was mostly listening to the video in the background as noise. I'm not even really sure what ffmpeg is.
Great video. Still, it is completely incomprehensible to me that people are still talking about "the new codec". H.265 was confirmed as a new standard by the ITU in 2013. I'm more of an AV1 friend.
I'm afraid that the electricity cost for all those GPUs running hard for so long might turn out to be much higher than just expanding the storage. I was considering a similar endeavour, but since most of my video library is movies and TV shows, I can just redownload them in H.265 for basically no added load on my server. Interesting video and toolkit nonetheless.
Or set it up so that once a file is downloaded, it is converted to your format of choice then and there. Takes a little bit more time, but not much.
@@fileflows204 At that point though you're just taking the extra cost and splitting it up to be done in significantly smaller but more frequent chunks aren't you? The amount of electricity used will still be the same for the same number of files but spread out, unless I'm missing something?
Oh, gonna use a dedicated RPi for this: shrink my media library without taking up my local server's resources and wasting the energy that a fully utilized x86 CPU would.
I love having a project with real purpose other than me just testing.
Seems like most video apps that are good are based on ffmpeg under the hood. At this point I've seen it used so often I'm just using it outright along with some simple bash scripts. Nice find for a distributed transcoding solution, though!
I started with an RPi (RIP, it died... I made a custom ice holder to cool the CPU and had to refill the ice canister every morning).
Now I've worked up to a whole cluster, with my 3090 being the heavy lifter.
Here are my results for anyone interested.
Performance:
RPi: < 1 fps
M1 Mac: ~1-5 fps
4-vCPU ESXi VM (i7-4770K): 24 fps
RTX 3090: 169 fps
Not sure if there is any point in letting the RTX 3090 work on more than 1 file, since my encoder is also at 100%.
*EDIT*
So I added 2 CPU jobs and 2 GPU jobs on my gaming rig.
Even though my encoder is maxed, I still got 160 fps on both GPU tasks.
So it is worth doing multiple tasks even though the encoder says it's maxed out.
On 720p footage,
RTX 3090 and i9-9900K stats with two tasks each (same results as if I let the GPU and CPU do one each):
CPU tasks both get ~50 fps.
GPU tasks both get ~220 fps.
So far, that makes a total of ~537 fps from my gaming rig. Pretty sweet.
apply the nvidia patch to bypass the transcode restriction
I use MakeMKV to rip Blu-rays directly to MKV with 5.1 sound, then HandBrake drops them from 25 GB to 4-5 GB per file. Works great with a 4 TB external HD and a Sony BDP-S5500.
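The shrink step of that rip-then-compress workflow can also be scripted with HandBrake's command-line tool, roughly as below. This is only a sketch: the RF value of 22 and the audio passthrough mask are assumptions standing in for whatever preset the commenter actually uses.

```python
def handbrake_command(src, out):
    # HandBrakeCLI equivalent of a GUI x265 preset:
    # constant quality RF ~22, pass surround audio through untouched
    return ["HandBrakeCLI", "-i", src, "-o", out,
            "--encoder", "x265", "--quality", "22",
            "--aencoder", "copy", "--audio-copy-mask", "ac3,dts,dtshd"]
```

Pointing a loop of these commands at a folder of fresh MakeMKV rips reproduces the "25 GB down to 4-5 GB" step without touching the GUI.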
Tim, thank you for this. I've been using HandBrake for years and looking for an alternative. My Plex drive currently has 60 GB free of 8 TB; excited to get home and give this a go.
What's wrong with HandBrake?
@@nixxblikka nothing in particular, just looking for alternatives. Best practice is typically to diversify software options don't you think?
@@ryanmckee6348 Well, don't fix it if it ain't broken 😉 I'd rather focus on one piece of software and master my skills there instead of many different ones, for such a use case. Also, I'm not too sure the features are on par with respect to movies, like subtitles, audio, etc. That makes more sense for YouTube material, but not necessarily for movies.
Based on the transcode options seen in your video, transcoded videos might have lower video bitrates.
"Settings are dependent on file bitrate, working on the logic that H.265 can support the same amount of data at half the bitrate of H.264."
I guess the disk space reclaimed comes mainly from that bitrate decrease.
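The half-bitrate rule quoted above translates into a simple back-of-envelope size estimate: bitrate times duration, divided down to gigabytes. The 10 Mb/s source and 640 kb/s audio track below are assumed example numbers, not figures from the video.

```python
def size_gb(duration_s, video_kbps, audio_kbps=640):
    # kbps * seconds -> kilobits; /8 -> kilobytes; /1e6 -> gigabytes
    return (video_kbps + audio_kbps) * duration_s / 8 / 1e6

h264 = size_gb(2 * 3600, 10000)  # 2-hour movie at 10 Mb/s H.264 -> ~9.6 GB
h265 = size_gb(2 * 3600, 5000)   # same movie at the half-bitrate H.265 target -> ~5.1 GB
```

Note the saving is a bit less than half overall, because the audio track (copied, not re-encoded) stays the same size in both files.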
Thank you for this video.
It should be pointed out that there are a lot of options for encoding h265 videos. Tdarr is not unique in this regard. I'm unlikely to switch from FFmpeg and x265.
Correct; most of these (Tdarr, FileFlows, Unmanic) use ffmpeg under the hood. It comes down to your use case and how you want to automate it.
Short followup video?
Can we see a before-and-after comparison in a TechnoTim how-to video?
Are you recording direct to H.265 now? Etc.
Thanks for considering.
I just wish I had money to be able to keep old video cards around.
Unmanic is easier to set up and much faster at encoding, in my experience using both on an Unraid server.
Now we need a tutorial on setting up an open-source media server with hardware encoding (Quick Sync and GPUs).
Thanks for the video. I noticed 720p videos have noticeable quality degradation, whereas on 1080p it's very hard to spot. Also, is there any possibility of adding multiple GPUs to one node for transcoding?
Great video, but I have a question: did you have to create the other nodes on other computers? Couldn't you just add more GPUs to the same server and create additional nodes on it? Thanks.
Remember, NVENC H.265 is far worse than CPU H.265 encoding. Yes, it's fast, but it's quite inefficient. You could've saved double the space with a CPU encode.
This can be true, but lots of people with large libraries prefer faster encoding; the quality with a GPU is decent, and the savings are pretty good without the long wait.
H.265 encoding takes too long and uses more electricity for not much gain in file size. You have to let it run a really long time to get small file sizes with acceptable quality. If you don't care about quality, you can get really small files, but tweaking encoder settings to reach acceptable quality at small sizes with H.265 is tedious.
CPU transcoding is OK; GPU (Intel or NVIDIA) is poor by comparison. If you want the best-looking file for the least space, use a CPU encode.
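For anyone wanting to test the CPU-vs-GPU claim themselves, here is a sketch of the two routes being compared. File names and quality values are placeholders, and the commands are only echoed rather than run; `libx265` is ffmpeg's software HEVC encoder, `hevc_nvenc` the NVIDIA hardware one:

```shell
# libx265 on the CPU: slow, but compresses better per bit.
CPU_CMD="ffmpeg -i in.mkv -c:v libx265 -preset slow -crf 22 -c:a copy cpu.mkv"
# hevc_nvenc on an NVIDIA card: fast, but typically needs more bits
# for similar visual quality (-cq is NVENC's constant-quality knob).
GPU_CMD="ffmpeg -i in.mkv -c:v hevc_nvenc -preset slow -cq 25 -c:a copy gpu.mkv"
echo "$CPU_CMD"
echo "$GPU_CMD"
```

Encoding the same short sample both ways and comparing file sizes at matched quality is the fairest way to settle the argument for your own content.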
Couldn't you just convert your backup server into an encoding server? Put all three GPUs in it, throw Proxmox on, and pass each GPU through to a separate container/VM?
Both H.264 and H.265 are lossy compression schemes. Just be aware that going from one lossy codec to another, you will lose quality. This should really be called a re-encode rather than a transcode: in a transcode, the conversion happens on the fly and is served to a client without changing the source files.
The quality loss may not be a big deal to you, or even noticeable to the eye, but it is happening. Lossless codecs (like FLAC for audio) are fundamentally different in that you can convert between them with no generational degradation.
From what I've seen to date, hardware-accelerated transcoding has been developed to transcode video well on the fly, but at the expense of file size. I haven't tried this software, though, so maybe it's better? I've used Handbrake for 90% of my videos, and software transcoding has always given me a better quality-to-file-size ratio than the hardware-accelerated options. I'd be interested in seeing a comparison. I don't have a server myself, so I don't know how I'd run Tdarr, lol.
6 mins after posting. Did you make it to 1TB?
Very interesting. Can it use AMD cards? Several cards in one machine? Several cards in a mix of NVIDIA and AMD?
Tdarr/FileFlows both use ffmpeg under the hood, so as long as ffmpeg has access to your GPU, and your GPU supports hardware encoding, they will work fine.
I wish you'd explained how you determined which NVIDIA driver to use. I've been looking around for hours trying to get nvidia-smi to work. What made you choose 510 over the others in the list?
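On Ubuntu-based systems there's a built-in way to see which driver versions fit your card; this is a sketch assuming Ubuntu's packaged drivers (the version number 510 here is just the one mentioned above, not a recommendation):

```shell
# Hypothetical Ubuntu workflow:
#   ubuntu-drivers devices        # lists compatible nvidia-driver-* versions;
#                                 # the one tagged "recommended" is the usual pick
#   sudo apt install nvidia-driver-510
#   sudo reboot
#   nvidia-smi                    # should now print driver/CUDA versions and GPUs
#
# Deriving the package name from whichever version you picked:
VER=510
PKG="nvidia-driver-${VER}"
echo "$PKG"
```

If `nvidia-smi` still fails after a reboot, Secure Boot blocking the unsigned kernel module is a common culprit.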
I know what my home lab hour is going to be today!
24.69 GB of space saved. I only have 6,804 files with 508 transcodes.
Version .19 will finally support Dolby Vision, btw 🥳🥳
2.1 TB saved on my backup OTA recording server, and probably 8 TB on my main server, but there aren't good stats on that because I had been using a predecessor to Tdarr, handbrakebatchbeast, which is sometimes more reliable than Tdarr if you're using CLI arguments with Handbrake.
Side note: I was blown away by how fast a cheap Alder Lake processor transcodes video. No joke, I saw frame rates over 1000 fps going from MPEG-2 to H.265. But the hardware-transcoding issue remains unchanged: everything from AMD VCE to NVIDIA NVENC to Apple VideoToolbox WILL make the average file significantly larger than CPU transcoding does.
Side side note: this issue seems mostly limited to GPU transcoding between different families of codecs. Going from H.264 to H.265 is great, but going from MPEG-x to H.26x is just terrible.
Will TDARR work with a Google Coral TPU for transcoding?
Can you run multiple GPUs in one server for TDARR? Basically, build a strictly transcoding machine.
Can Tdarr be used to compress CCTV NVR files?
Why Tdarr and not Handbrake in Docker? I mean, that should be enough for a regular user, shouldn't it?
Looks like it uses HandBrake or ffmpeg (e.g., in settings there are options like handbrakePath and ffmpegPath).
Seems like a GUI script that feeds videos to servers/nodes that run them.
Haven't looked in detail to see if they have their own encoding software, or if they're really just using open source to run their subscription-based business... 😆
Because it's a lot easier to automate Tdarr than Handbrake. Sure, for one-off jobs Handbrake is fine, but if you want to automate hundreds or thousands of files, Tdarr is much easier.
@@fileflows204 You mean the Queue option in Handbrake doesn't work anymore? EGADS!! ...Oh wait, I just checked: yes, we can still queue whole directories or file lists in one quick pass. Not sure how it can be easier than that. Need to run multiple Handbrakes on networked computers off one file list? No problem: split the list into parts, queue each part on a separate machine, and share all the files on the network. Or, if you want to stream the files to the encoder, substitute ffmpeg (like Tdarr does).
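The scripted-Handbrake approach described above boils down to a loop around HandBrakeCLI, which is essentially what these GUI tools automate. A sketch with placeholder file names, using HandBrake's built-in "H.265 MKV 1080p30" preset; the commands are echoed here so you can eyeball them before running anything:

```shell
# Hypothetical batch loop: queue every MKV in a list through HandBrakeCLI.
for f in movie1.mkv movie2.mkv; do
  # ${f%.mkv} strips the extension so the output gets an -x265 suffix.
  echo HandBrakeCLI -i "$f" -o "${f%.mkv}-x265.mkv" --preset "H.265 MKV 1080p30"
done
```

Splitting the file list across machines, as suggested above, is then just a matter of giving each machine a different chunk of the list.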
@@mhavock Well, you have to make the queue, so that's a step; that's not quicker. Tdarr is automatic: no steps once it's set up.
But if you want to use Handbrake, go ahead, no one is forcing you.
You can also put multiple GPU's in a single machine...
You'd save more if you transcoded on the CPU rather than the GPU.
I like the new setup! Is this in the “darr” stack? Sonarr, Radarr, Lidarr, etc?
Thanks! I've never used the others, so maybe?
The videos you're saving are your source files? Wouldn't you want to keep those in the highest-quality format possible, instead of a lossy one?
Can Tdarr support multiple video cards on the same node?
Not for my YouTube archival footage. It's hours of footage I may never look at again.
There's also a line to be drawn, I think. Never let perfect be the enemy of good enough.
Do you need raw footage of your desk shots, or would a compressed version serve the same purpose with next to no disadvantage?
Granted, if it's your wedding video you probably want to preserve the pixels, but if it's shaky-cam footage of your holiday visit to the zoo, would re-encoding it from H.264 to H.265 really have any detrimental impact?
Should also point out that you shouldn't use this for Sonarr/Radarr content that has been arr'd (torrented), as it'll break the file hash and be invalid (you'd be seeding mismatched files). I personally keep the hardlinks and always seed what I've downloaded, so this is a no-go for TV/movie files... maybe I'd do it for some gaming videos or phone video backups.
Sadly, I have an 11th-gen Intel and an AMD GPU (no NVIDIA cards available thanks to Bitcoin mining), so converting my collection at 1 hr 30 min a time would take more years than I have left, and I'm only in my 50s!!!
Are you using Longhorn to store your media?!
Longhorn doesn't support TRIM on volumes, so if you are, you aren't really reclaiming space until you copy the data to a new Longhorn volume.
Also, are you storing on SSDs or HDDs? Longhorn is really bad on HDDs, and I wouldn't dream of using it with replication for media storage.
Hey, just stopped by to suggest a video on using the clouds' (Google, Amazon, Oracle, Azure) free-tier GPU options to help speed conversions along. I have a new Dell server, a 9th-gen i7, and a 3700X with a 3070, and the amount of time it takes to convert to H.265 is killing me.
I'm sad. You had the chance to call it, "BeastNode-3090", and you didn't. Otherwise, great video ! xD
What about adding more video cards to a server? Do you need high bandwidth, or can you use a PCIe splitter and have, like, four cards working away?
Tim, great content; always learning something. Have you looked at Unmanic by any chance? I've been using your method with Tdarr since this video went up, but I'm thinking of changing to Unmanic, since the file sizes are much smaller and I can't tell any difference in video quality. E.g., I converted my Blu-ray Star Wars from around 10 GB to between 6 and 8 GB using different settings/plugins with Tdarr; using Unmanic, I got it down to 2.5 GB.
Not yet!
Not only did I get Unmanic working faster, it's also open source, for those interested in that aspect too.
Will check it out
Transcoding will lead to quality loss unless you're working with a lossless format.
If you're ripping your content from CD/BD, then MP3/HEVC are a good idea.
But going from one lossy codec to another, like H.264 to H.265, you're going to lose quality.
Tdarr went closed-source. Invest in alternative options.
In your video you assigned one movie per GPU/container, but as I found out, H.265 transcoding with a CPU encode is far more efficient (smaller files with better quality).
Is it possible to use multiple Tdarr instances (Docker containers) and have each of them work on only a few minutes of a single movie/clip, instead of one full movie each?
I have a three-node Docker swarm. For a 30-minute movie, I'd want to assign a 10-minute clip to each node and let Tdarr transcode the pieces and recompose the file as output.
If you wanted a faster system, you'd just add swarm nodes and cut the transcode time by the number of nodes you have.
I saw an Italian guy called morrowlinux do something similar with distributed ffmpeg, but I haven't understood how he did it, nor have I found any reference in the ffmpeg libraries.
Can Tdarr be used for this, or does anything else exist that allows something similar?
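I don't believe Tdarr does per-file splitting, but the split/encode/recombine idea can be sketched with plain ffmpeg. File names here are placeholders; the segment muxer cuts on keyframes with `-c copy`, so the split itself is lossless:

```shell
# Step 1 (on the coordinator) - split losslessly into ~10-minute chunks:
#   ffmpeg -i in.mkv -c copy -map 0 -f segment -segment_time 600 part%03d.mkv
#
# Step 2 (one chunk per node) - encode each part independently:
#   ffmpeg -i part000.mkv -c:v libx265 -crf 22 -c:a copy part000-x265.mkv
#
# Step 3 - build the list file the concat demuxer expects, one line per part:
for p in part000-x265.mkv part001-x265.mkv part002-x265.mkv; do
  echo "file '$p'"
done > list.txt
cat list.txt
#
# Step 4 - stitch the encoded parts back together without re-encoding:
#   ffmpeg -f concat -safe 0 -i list.txt -c copy out-x265.mkv
```

One caveat: segment boundaries land on keyframes, so chunk lengths won't be exactly equal, and per-chunk rate control can cause slight quality variation at the seams.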
Tim, is it possible to make a video showing how to use an AMD GPU? Tdarr seems to only have NVIDIA support.
I have Handbrake and have exported the .json file for the presets, but I'm not sure how to get that into Tdarr.
Great Video! Like Always TIM!! :)
unRAID (and Spaceinvader One) make it so even a dumb-dumb like me can use the Intel HD graphics on my i9-9900 and my 3070 at the same time on the same server. I'd even include my RX 6600 XT if AMD encoding weren't still crap (and I believe there isn't a profile for it in Tdarr).
I'm at 1.5 TB. Only 18,000 more videos to go!