It should be said that transcoding video WILL degrade the quality, especially when the source video isn't lossless. You can compare it to converting an MP3 file to a lower bitrate MP3 file. You're compressing an already compressed file format so the quality degradation is doubled. When you're working with archive footage that's saved at really high quality settings, it'll probably still look fine but don't expect this method to do you any favours when applied to a library of movies or TV shows you once ripped to an already lossy format.
Try it out. I doubt you'd see a difference in everyday usage between a 1080p 10 Mbit/s h264 source and a 1080p 5 Mbit/s h265 conversion. Not saying you can't see it at all, but for general stuff it's not noticeable imo.
This was my exact question too -- not long ago I spent waaaaay too long getting GPU encoding working under WSL2 with ffmpeg, and even though I was aware there would be a quality drop, I wasn't really expecting it to be at a level where I would either notice or care. But it was immediately obvious and distracting, with banding all over the place. CPU-reencoded files were the same size (within a percent or two either way) but looked completely indistinguishable from the source (at least to my eyes -- I'm sure somebody who knew what to look for would find something!). That I was able to see the quality difference was a real eye-opener (no pun intended!) for me, and I ended up just spending the extra run-time using the CPU. It might have taken a lot longer, but there's a good reason why they warn you about the quality drop for GPU. If there's a way of getting around this I'd be delighted to hear it, as the time savings would be massive!
Dude, there's no difference between cpu and gpu encoding as long as you use equal settings. If you don't know what the equivalent settings are, then that's your problem, not the encoding. Unfortunately, sometimes the settings can get very complicated, and sometimes you just need to try out a few different settings.
@@jonmayer This project is not for you then. The point here was to save space and not beat up the CPU while doing it. Absolute purists who don't want to change their files shouldn't be changing their files.
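For anyone who wants to experiment with the "equivalent settings" point above, here is a minimal sketch of comparable CPU (libx265) and GPU (hevc_nvenc) command lines for the same source. The flag names are from common ffmpeg builds and the CRF/CQ values are assumptions to tune, not a definitive mapping; check `ffmpeg -h encoder=hevc_nvenc` for your build.

```python
# Build comparable ffmpeg command lines for a CPU (libx265) and a GPU
# (hevc_nvenc) encode of the same source, so the two outputs can be
# compared side by side for size and visible quality.

def x265_cmd(src, dst, crf=22, preset="slow"):
    return ["ffmpeg", "-i", src,
            "-c:v", "libx265", "-preset", preset, "-crf", str(crf),
            "-c:a", "copy", dst]

def nvenc_cmd(src, dst, cq=26, preset="p7"):
    # NVENC's CQ scale is not 1:1 with x265's CRF; a CQ a few points
    # away from the CRF you'd use on x265 is a common starting point.
    return ["ffmpeg", "-i", src,
            "-c:v", "hevc_nvenc", "-preset", preset,
            "-rc", "vbr", "-cq", str(cq), "-b:v", "0",
            "-c:a", "copy", dst]

print(x265_cmd("clip.mkv", "cpu.mkv"))
print(nvenc_cmd("clip.mkv", "gpu.mkv"))
```

Run both on a short sample clip and compare; pass each list to `subprocess.run()` if you prefer scripting over the shell.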
"old", a 1050 TI is currently the GPU for my wife's Proxmox gaming server/my 3d printing host... running almost exclusively Sims 4 and Cura. :) I'm only 2 min in but I have to say, I hope you discuss your TDARR settings. I've been following this project for almost 2 years and I decided this winter, when I already want to heat my apartments, is the perfect time to really define what I want out of it and see how well it does with a mixed mashup of TV and Anime.
The new Apple M1 Pro/Max/Ultra have some serious hardware encoders (they can do h.265 as well), and the Max can do 30 streams of 4K ProRes or 7 streams of 8K ProRes. I'm curious how well it would do at this task if you threw a Mac Mini with one of those chips onto your network as a compute node for your video encoding.
I have thought about that many times in the past, but I don't really see a reason to do this. Adding and running an additional hard drive, or getting a larger one, can be cheaper initially and is more energy efficient long-term than running a graphics card to transcode back and forth all the time with Jellyfin or Plex. Of course this might change if you have a spare GPU lying around anyway, or already have one in your server, and energy is cheap where you live.
I love MicroCenter! Always make it a point to go there when I head up north. Prices are very comparable and staff is great. Thanks for this video. This is truly what I needed. I’ve been transcoding “manually” via Handbrake.
I wish there was a MicroCenter near me (or ANYWHERE IN THE PACIFIC NORTHWEST -- in case they're listening). I've also been "manually" transcoding via Handbrake (in batches with presets, at least) -- excited to try this out too!
Seems like no one has commented on this yet, sick new camera setup! It looks awesome, kind of feels less cramped from the telephoto lens and the bokeh and framing is also nicer. I have plenty of OBS Replay Buffer clips that used NVENC H.264 CQP 25, meaning it takes a lot of space in exchange for lower resource usage (OBS is on the same PC that I game on). I already encode using HandBrake, but this looks like it’ll help use my desktop’s extra resources together with my server when I sleep.
does tdarr support av1 yet? i think av1 will replace h265 some day. youtube already uses it too. you might want to try using av1 on your youtube uploads to check out the quality difference
It's great for space saving but beware that H265 takes more horsepower to transcode on the fly when watching content on Plex/Jellyfin, so if your home server is a fairly low powered Synology or Raspberry Pi you might want to consider leaving it as H264 and just buy more storage, otherwise you will run into buffering!
@@ZiggleFingers For many people that defeats the entire point of having a Plex server. If your goal is to save money on streaming services you're hardly saving money if you go out and replace perfectly fine playback devices every couple of years. I'm using an Xbox 360 on one TV for the kids to stream cartoons. I don't want to give them access to a newer more expensive device they're likely to damage anyway.
@@Prophes0r Ya, most modern hardware, but hardware is commonly being used for longer these days. My PC is 10 years old and I don't plan on upgrading it, as I only download videos to my external hard drives, which I watch on my PS4 Slim and PS Vita Slim.
Just make sure all the clients are h265 decode capable and ZERO transcoding is needed: 3rd gen Fire TV Stick (2019) onwards, Intel 6th gen CPUs or later (or Ryzen APUs), and I believe Nvidia 1000 series / AMD 500 series or later all do the decoding in hardware, as does a Pi 4.
Thanks for showing us this service and your hardware setups! Really nice to watch :-) I'm not sure of your exact settings, but not enabling 10bit would actually degrade the quality of a big part of my library as it was recorded in 10bit. Also not sure about your framerate settings. Most of my videos use 60fps, some even 120fps. If everything would get converted to 30fps, it will make any slowdown in post impossible.
This is really helpful, though I guess my 9.9TB movie library will take decades to convert since I've only got 1 node with a 3070. But my first test conversions look like I can cut the size roughly in half 😍
You'd be surprised bro, with nvenc acceleration I was able to do a typical 1080p h264, ~5-7gb in about 20-30 minutes and I was also able to do 2x concurrently without impacting performance, and that was with a 980 too. Might actually take less time than you think!
Tdarr has been a saver for me! I have used it for some time, ripping my movie collection (I have a lot)! I have saved over 1.5 TB worth of data with no notable quality loss!
@@bobtiji yes but different efficiency, also NVENC is only good on RTX 2000 series and up. The GTX 1000 series doesn’t have good quality encoding and the files are even larger although I’m not 100% sure on the file size.
This is true, CPUs do give better quality encodes and smaller file sizes. However, comparing my GTX 1650 to my AMD 5800X, I get around 4-6x better performance on the GPU. So make a smarter conversion setup: for stuff where you really want to keep the quality, use the CPU; for stuff you don't really care about, use the GPU.
I'm pretty sure it depends on the output bitrate you set, no? In general, H265 is 30% smaller than the corresponding quality of H264. But of course, you can compress a H264 100MBit/s file into a H265 10MBit/s file, but usually it's around 70MBit/s for the same quality.
IMO if you are trying to save space and get a better quality, use CPU encoding. Else if you want to transcode a video for plex (like real time resolution switching, etc) use the GPU. I got into this topic a while ago, when i was trying to rip my blueray collection and noticed the bad quality on NVENC re-encoding.
Plex can use Intel Quick Sync and, depending on the OS it runs on, Nvidia's NVENC or AMD's answer to that, but your card needs to support h265 decode to transcode on the fly.
Nvidia GPU encoding isn't the problem here. The problem is the Nvidia decoder. Try using the CPU as the decoder and the Nvidia GPU as the encoder. That way you get fast transcoding and maybe even better quality compared to a GPU-only pipeline.
@@vedranart Plex has on-demand transcoding from h265 10-bit down to any lower quality, as long as the GPU in the server has the decoder and encoder available, be it Intel's Quick Sync, AMD's UVD/VCE, or Nvidia's NVDEC/NVENC.
I wish you explained how you determined which Nvidia driver to use. I've been looking around for hours trying to get nvidia-smi to work. What made you choose the 510 over any of the others in the list?
Looks interesting if you're in need of just compressing videos for storage purposes and/or watching videos locally with a compatible media player. But, if you selfhost the videos for like website/blog/streaming service, or want to make sure the videos are playable with the most basic media player (without the need of installing codecs), then h264 is still the way to go.
We are getting to the point where most devices in the wild have built-in h265/HEVC decoding. Anything released 2016 or later should have it. So it really comes down to whether or not you think supporting 7+ year old hardware is worth it. In some situations it will be; in others it will not. And there are situations where you might want to store in HEVC and live-transcode back to h264 for unsupported devices.
I go way WAY back with video encoding. Spent many hours @ the doom9 forums trying to squeeze every bit out of every file while keeping all the quality. Transcoding just isn't worth it unless you have huge source files, like ripping straight from a disc. You always lose a little quality and slow storage is cheap and easy.
Storage may be cheap and easy, but serving large files can be a problem, or the server will end up transcoding to a worse quality on the fly. And besides, there are other reasons to use an app like Tdarr or FileFlows than just to transcode. You can remux, add new audio tracks in different codecs (my stuff can't play EAC3, for example, which would cause Plex to do a transcode), automatically cut commercials, add chapters, remove unwanted streams to save space, etc.
@@JohnAndrews_nz You have valid points but editing streams in a container isn't quite transcoding. I'd argue that's even worse than transcoding since the gains are even smaller (most of the time) but require manual work unless every stream is tagged properly. I guess everyone has different needs but I'm done with the days of 100% CPU for weeks on end for a measly 720GB.
@@drivenbydemons6537 You are correct, that would not be transcoding. However, you can still gain a lot of storage if you strip out all the additional unwanted audio tracks. That can save 10-15% per video at times. Nowhere near h264 to h265, but still, do that across a few TB and you can save a couple hundred GBs. I'm John BTW. Switched to this account for transparency; I was on my mobile/personal account previously.
How is tdarr for just transcoding audio? My home theatre doesn’t support dts-hd ma audio, but many titles have it. I’d like to convert the audio to dolby digital which my HT supports.
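Tdarr can handle audio-only work via its plugins, and under the hood it comes down to the same kind of ffmpeg job. A hedged sketch of the underlying command, as an argument list (filenames are placeholders; 640k is AC3's usual maximum bitrate):

```python
# Convert the first audio track (e.g. DTS-HD MA) to Dolby Digital (AC3)
# while stream-copying everything else, so only the audio is re-encoded.

def dts_to_ac3_cmd(src, dst, bitrate="640k"):
    return ["ffmpeg", "-i", src,
            "-map", "0",        # keep all streams...
            "-c", "copy",       # ...copied by default...
            "-c:a:0", "ac3",    # ...but re-encode the first audio track
            "-b:a:0", bitrate,
            dst]

print(dts_to_ac3_cmd("movie.mkv", "movie-ac3.mkv"))
```

Because the video is stream-copied, this runs at disk speed rather than encode speed.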
Any idea how to diagnose what the transcode errors are? I have a 96GB project all in h264 and am trying to convert to h265, but it seems to only convert 3GB of the data and chucks half out as errors. Any help would be greatly appreciated.
The Nvidia Quadro P2000 has no software limitations on hw transcoding. That's why it's popular for Plex. You can transcode 20 streams simultaneously.
If you have a lot of spare processor time, you should rather transcode to AV1 (better than h265, open source, and the actual future of video) and Opus (for audio). This will take a lot of time and currently doesn't have any GPU encode support. VP9 is a good alternative to h265 for better browser and mobile device support, as h265 royalties are insane.
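For the curious, a software AV1 + Opus encode with ffmpeg looks something like the sketch below. It assumes a reasonably recent ffmpeg built with libsvtav1 and libopus; the crf/preset values are illustrative starting points, not recommendations, and AV1's `-crf` scale differs from x264/x265.

```python
# Software AV1 (SVT-AV1) video + Opus audio encode. SVT-AV1 is much
# faster than libaom-av1, but still far slower than H.264/H.265
# hardware encoders. Lower preset numbers are slower and better.

def av1_opus_cmd(src, dst, crf=32, preset=6, audio_bitrate="128k"):
    return ["ffmpeg", "-i", src,
            "-c:v", "libsvtav1", "-crf", str(crf), "-preset", str(preset),
            "-c:a", "libopus", "-b:a", audio_bitrate,
            dst]

print(av1_opus_cmd("input.mkv", "output.mkv"))
```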
For anyone who is trying to run a 10 series or 20 series card for transcoding, you can unlock the GPU pretty easily to allow more than just a couple transcodes
Will TDARR work with a Google Coral TPU for transcoding? Can you run multiple GPUs in one server for TDARR? Basically, build a strictly transcoding machine. Can TDARR be used to compress cctv nvr files?
You might want to check how many streams the P2200 Quadro is limited to. It may be up to 6, or unlimited. I'm unable to check now, but Nvidia has a nice table with the details.
Never played with tdarr, if the node is on a different physical machine, do you transfer the video files over to the new machine or does it go through your network on the main machine?
The videos that you are saving are your source files? Wouldn't you want to save those in the highest quality format possible, instead of a lossy one? Can Tdarr support multiple video cards on the same node?
There's also a line to be drawn, I think. Never let perfect be the enemy of good enough. Do you need raw footage of your desk shots, or would a compressed version serve the same purpose at next to no disadvantage? Granted, if it's your wedding video you probably want to preserve the pixels, but if it's shaky-cam footage of your holiday visit to the zoo, would re-encoding it from h264 to h265 really have any detrimental impact?
I developed a similar although not as polished service back in 2018 when I was in uni. But I stopped development after I started working and could afford buying more hdds hehe. Glad to see that someone else had the same idea
Great video, but I have a question: Did you have to create the other nodes on other computer to do that? Couldn’t you just add more GPU on the same server and create additional nodes on it? Thanks
Tim, is it possible to make a video showing how to use an AMD GPU? Tdarr seems to only have Nvidia support. I have HandBrake and have exported the .json file for the presets, but I am not sure how to get that into Tdarr.
Short followup video? Can we see a difference before and after on a TechnoTim how to video? Are your recording direct to h.265 now? Etc. Thanks for considering.
Can AMD RX580 be used or does it need nvidia card? I am running Proxmox so I was thinking about assigning my AMD card to Windows 10 machine and set up Tdarr on it. Can it be done with AMD cards?
Would be super-cool if the next version added support for ai driven metadata automation where a sidecar file is generated to facilitate searching your library (digital assets management).
Doesn't GPU encoding use built-in ASICs for the encoding? In that case, isn't it seriously hurting the quality and size compared to CPU encoding with a medium/slow preset? Especially on GPUs with old encoders like the GTX 1080; the 16xx and 20xx have newer encoder chips.
*There is no benefit to running 3 off line transcode streams, as each stream simply runs @ 33% of the total Encoder horse power, especially for h.265.* The '3 stream limit' is meant to satisfy real-time h.264 online/outgoing transcodes to viewing clients. I run single offline jobs on ffmpeg in Windows for h.265 and max out NVENC, whereas I run 3 streams of outgoing h.264 that can rarely max out NVENC on my plex server due to the lighter workload.
Not entirely true, it depends on your setup. You may have some CPU-bound tasks to perform in the processing stack, and you also then have some storage actions to take, both of which can take time but do not use the encoder. So it completely depends on your use case.
Will this work for converting variable frame rate footage like twitch vods to constant frame rate? A lot of video editing software works better with constant frame rate footage and I currently do my converting with handbrake
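In ffmpeg terms (and Tdarr plugins ultimately call ffmpeg), forcing constant frame rate is a single option. A sketch assuming a 60 fps target and a re-encode to x264; the filenames and CRF are placeholders, and older ffmpeg builds use `-vsync cfr` instead of `-fps_mode cfr`:

```python
# Force a VFR Twitch VOD to constant frame rate for editing.
# The video must be re-encoded to change frame timing; the audio
# can be stream-copied untouched.

def vfr_to_cfr_cmd(src, dst, fps=60, crf=18):
    return ["ffmpeg", "-i", src,
            "-fps_mode", "cfr", "-r", str(fps),
            "-c:v", "libx264", "-crf", str(crf),
            "-c:a", "copy",
            dst]

print(vfr_to_cfr_cmd("vod.mp4", "vod-cfr.mp4"))
```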
The hard-coded dependency to reach out to Github with a stateful connection, concerns me. I'll wait until a proper security audit is done of the project before continuing. It was a bit suspicious when the project tried to reach out to a dozen external IP addresses when Tdarr_Server was launched and several additional external IPs when Tdarr_Node was launched.
Tdarr has been around for about 3 years, if no one has flagged it by now, I don't think it does anything nefarious. But if you are that concerned, you should do one.
Like most video apps that are any good, this is based on ffmpeg under the hood. At this point I've seen it used so often I'm just using it outright along with some simple bash scripts. Nice find for a distributed transcoding solution though!
Would limiting the transcodes to 1 make any difference in the time it takes to complete 3? I feel that setting it to 3 just make each transcode take 3x as long.
You're right, it just divides them up. However, there might be gaps where the transcoder isn't being used, and running more than 1 stream ensures the encoder stays busy.
Thanks for the video. I noticed 720p videos have a noticeable quality degradation, whereas on 1080p it's very hard to spot. Also, I was wondering, is there any possibility to add multiple GPUs to 1 node for transcoding?
Try looking into something like LizardFS for storage moving forward. You'll be able to scale your storage with redundancy at a software/filesystem level rather than dealing with raid arrays. Need more redundancy on a single folder, no problem. Need more space, add more drives or chunkservers. If a drive or server drops, chunks from other drives or servers will be copied to re-meet the redundancy goals, instead of a crazy raid rebuild.
av1 is the future of video encoding. It isn't quite here yet because there isn't much hardware assist available yet, but it totally kicks h265's butt... and like... av1 is royalty free... which maybe you don't care about... but the big commercial content providers do.
keyword being future :P cos currently the support isn't there yet. but yeah AV1 will become the standard in the not too distant future (hopefully not too distant...)
Transcoding 8TB of video footage with a Xeon E5-2407 v2, would I be able to achieve that within my current lifespan? Or is a GPU really a dealbreaker when dealing with several terabytes?
The Raspberry Pi3 I use can only play h.264 movies. I'm looking for a device that can play h.265 and 4k movies, does anybody have good ideas what to buy?
In your video you assigned 1 movie per GPU/container, but as I found out, h265 CPU encodes are way more efficient (smaller files with better quality). Is it possible to use multiple Tdarr instances (docker containers) and have each of them process only a few minutes of a single movie/clip, instead of 1 full movie each? I have a 3-node docker swarm. In a scenario where a movie is 30 minutes long, I would assign a 10-minute clip to each node and let Tdarr transcode and re-compose the file as output. If you want a faster system you would just add docker swarm nodes, cutting transcode time by the number of nodes you have. I saw an Italian guy called morrowlinux do something similar with distributed ffmpeg, but I have not understood how he did it, nor have I found any reference in the ffmpeg libraries. Can Tdarr be used for this, or does anything similar exist?
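Tdarr schedules whole files per node, so splitting one movie across nodes isn't built in. The ffmpeg primitives for a DIY version do exist, though: split losslessly on keyframes with the segment muxer, encode the chunks anywhere, and rejoin them with the concat demuxer. A rough sketch (paths, chunk length, and CRF are assumptions):

```python
# DIY chunked encode: split without re-encoding, encode each chunk
# (potentially on a different machine), then concatenate. Each function
# returns an argument list suitable for subprocess.run().

def split_cmd(src, chunk_seconds=600):
    # -c copy splits losslessly; cut points land on keyframes.
    return ["ffmpeg", "-i", src, "-c", "copy", "-f", "segment",
            "-segment_time", str(chunk_seconds), "chunk-%03d.mkv"]

def encode_cmd(chunk, out):
    return ["ffmpeg", "-i", chunk, "-c:v", "libx265", "-crf", "22",
            "-c:a", "copy", out]

def concat_cmd(list_file, dst):
    # list_file contains lines like: file 'enc-000.mkv'
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", dst]

print(split_cmd("movie.mkv"))
print(concat_cmd("chunks.txt", "movie-h265.mkv"))
```

One caveat: per-chunk rate control resets at every boundary, so results can differ slightly from a single-pass encode of the whole file.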
ubuntu 22.04 thoughts? 20.04 is what I've gone to for my baremetal or VM needs (sweet sweet LTS), but I hated that I had to use 21.04 for pi USB boot. With 22.04 I'm looking forward to it. I used the beta in a VM with success, and I would love a video on your thoughts of it.
From what I've seen to date, hardware-accelerated transcoding has been developed to transcode video on the fly well, but at the expense of file size. I haven't tried this software though, so maybe this is better? I've used HandBrake for 90% of my videos, and software transcoding has always provided superior quality/file size over the hardware-accelerated options. Would be interested in seeing a comparison. I don't have a server myself, so idk how I'd run Tdarr. lol
I’ll agree with others in the comments that CPU encoding yields way better results and can be done with okay speed with the right CPU. My 1950x does it well and my 5950x does it very fast. A gpu will be faster but file sizes are much larger with no quality benefit. My 3090 can do it faster but not worth the file size.
It should be pointed out that there are a lot of options for encoding h265 videos. Tdarr is not unique in this regard. I'm unlikely to switch from FFmpeg and x265.
Tim, great content, always learning something. Have you looked at Unmanic by any chance? I have been using your method with Tdarr since this video went up, but I'm thinking of changing to Unmanic, as the file sizes are much smaller and I can't tell any difference in the video quality. E.g. I converted my Blu-ray Star Wars from around 10GB to between 6GB and 8GB using different settings/plugins with Tdarr; using Unmanic, I got the size down to 2.5GB.
You gotta pump those numbers up, those are rookie numbers 😁 I'm currently at 7.8 TB of savings 😅 It took my 2070 and Quadro P1000 over 6 weeks of 24/7 encoding.
This is one of those cases I really don't understand who it's supposed to target. It's still way too complicated and too much hassle for a random schmuck to deal with but at the same time someone who knows what all of this is about will have much more specific requirements and already know how to handle transcoding in a way that suits them much better.
I'm assuming you mean the video and not the software? Either way, I don't know if I'd qualify as a random schmuck, as I build/repair computers, have hosted my own servers, etc. But I'm FAR from being a professional; everything I know I've learned by doing it, no education or anything. Encoding/decoding/transcoding are a very weak point for me, as are parts of networking. So for me this seems an awesome way to get started messing with this stuff. So I guess I'm who this would be targeting.
@@Mad-Lad-Chad Of course I was being hyperbolic, I can see the positive comments after all. But I genuinely don't understand why someone would think it's preferable to learn how to operate software which has this kind of narrow focus but is not at all beginner-friendly compared to a one-button solution, rather than ffmpeg, which is barely more complicated (for the same kind of purpose) and versatile enough to handle virtually any kind of encoding-related task. Granted, I did of course try many GUIs over the years as well (many of which are just frontends for ffmpeg anyway), but I always go back to the command line because it's just so much more powerful and reliable.
@@EireBallard A lot of users hate command line, but short of that it would be due to lack of knowledge I imagine. I thought this software could do more than just this specific conversion, but admittedly I was mostly listening to the video in the background as noise. I'm not even really sure what ffmpeg is.
Thanks for this. I have been looking for a solution, as I have around 80TB of video footage which I need to archive; reducing the size while not affecting the quality would be a great move forward. I tried this with a Tdarr server and node on Windows 10. All seems OK until I try transcoding files over 4GB: they transcode, but the copy from the cache does not happen. Am I missing something fundamental here? Has anyone had the same issue on Windows or Linux with files greater than 4GB? (I have about 35TB which are over 4GB; max size I think is 90GB.) Any suggestions?
Tim, thank you for this. I've been using handbrake for years and looking for an alternative. My Plex drive currently has 60GB free of 8TB, excited to get home and give this a go
@@ryanmckee6348 well, don't fix it if it ain't broken 😉 I'd rather focus on one piece of software and master my skills there instead of many different ones, for such a use case. Also not too sure if the features are on par for movies, like subtitles, audio etc. That makes more sense for YouTube material but not necessarily for movies.
Hey, just stopped on here to suggest a video using the clouds' (Google, Amazon, Oracle, Azure) free tier GPU options to help speed conversions along. I have a new Dell server, a 9th gen i7, and a 3700X with a 3070, and the amount of time it takes to convert to h265 is killing me.
I've been planning on doing this to my 1TB+ family vids folder (h.264 1080p30 to h.265), but I'm currently waiting on AV1 hardware encoding GPUs from Intel.
Based on the transcode options seen in your video, transcoded videos might have lower video bitrates: "Settings are dependent on file bitrate, working by the logic that H265 can support the same amount of data at half the bitrate of H264." I guess the disk space reclaimed comes mainly from that bitrate decrease.
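That "same quality at half the bitrate" rule of thumb translates directly into file size, since size is just bitrate times duration. A quick back-of-the-envelope calculation:

```python
# Estimate file size from bitrate and duration, illustrating the common
# rule of thumb that H.265 needs roughly half the H.264 bitrate for
# similar quality.

def size_gb(bitrate_mbps, minutes):
    # megabits/s -> gigabytes: * 60 s/min, / 8 bits per byte, / 1000 MB per GB
    return bitrate_mbps * minutes * 60 / 8 / 1000

h264 = size_gb(10, 120)   # a 2 h movie at 10 Mbit/s
h265 = size_gb(5, 120)    # the same movie at half the bitrate
print(h264, h265)         # 9.0 GB vs 4.5 GB
```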
Oh, gonna use a dedicated RPi for this: shrink my media library without taking up my local server's resources and wasting the energy a fully utilized x86 CPU would. I love having a project with real purpose other than me just testing.
Looks like it uses HandBrake or ffmpeg (e.g. in settings there are options like handbrakePath and ffmpegPath). Seems like a GUI script that feeds videos to servers/nodes that run them. Haven't looked in detail to see if they have their own encoding software, or are really just using open source to run their subscription-based business... 😆
Because its a lot easier to automate Tdarr than it is handbrake. Sure if you want one off things, handbrake is fine. But if you want to automate hundreds, thousands, tdarr is much easier.
@@fileflows204 You mean the queue option in HandBrake doesn't work anymore? EGADS!! ... oh wait, I just checked, yes we can still queue whole directories or file lists in one quick process... not sure how it can be easier than that... Need to run multiple HandBrakes on networked computers off one file list? No problem: split the list into parts, queue each list on a separate machine, and have all the files shared on the network... or if you want to stream the files to the encoder, substitute ffmpeg (like Tdarr does).
@@mhavock Well, you have to make the queue... so that's a step... so that's not quicker. Tdarr is automatic: no steps once it's set up. But if you want to use HandBrake, go ahead, no one is forcing you.
Couldn't you just convert your backup server into an encoding server? Like put all 3 GPUs in the server, then throw Proxmox on it and pass through the GPUs to 3 separate containers/VMs?
First, it is very likely that the existing videos were encoded with a lossy codec, which will have introduced noise (artifacts). If I re-encode that using H265, doesn't that introduce more noise on top of the existing noise, which the H265 encoder will faithfully try to encode, because it cannot know whether something is noise or original picture? Secondly, isn't H265 itself fading out? It is proprietary and you have to pay to decode it, so a lot of video players do not support it out of the box. As far as I know, Google developed VP9 as an alternative and is now moving to AV1. So, if you encode everything again, why not do it with AV1 instead of H265? Thirdly, 700GB? I don't know, I have old 1TB HDDs that I do not use at home. The last time I bought an external HDD, it was a 10TB and it cost $185 or something. 700GB is less than $13 in that regard. Considering all the energy consumption of transcoding and the hassle, I am not sure saving $13 is worth it.
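The cost side of that argument is easy to sanity-check with the commenter's own numbers ($185 for a 10 TB drive):

```python
# Price the reclaimed disk space against simply buying more storage,
# using the $185 / 10 TB external drive figure from the comment above.

def cost_of_gb(gb, drive_price=185.0, drive_tb=10):
    # drive_tb * 1000 converts TB to GB (decimal, as drives are marketed)
    return gb / (drive_tb * 1000) * drive_price

print(round(cost_of_gb(700), 2))  # ~$12.95 for 700 GB
```

Of course this ignores the other benefits of smaller files (faster copies, lighter streaming), which is where the counterarguments in the replies come in.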
I feel like the biggest point of this is not having to add another server to your lab, because most homelab people have already filled their servers to the brim. That would mean you have to buy another server and the drives, set it up, and it's probably gonna end up sitting idle as a storage server, consuming more electricity and rarely being used. I think that would end up being more expensive.
Tdarr/FileFlows both use ffmpeg under the hood, so as long as it has access to your GPU, and your GPU supports hardware encoding, they will work fine.
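One quick way to confirm that before pointing Tdarr/FileFlows at a GPU is to ask ffmpeg which hardware H.265 encoders it was built with. A small sketch that parses `ffmpeg -encoders` output (the encoder names listed are the common hardware backends; a compiled-in encoder still needs a matching GPU/driver at runtime):

```python
import subprocess

# Hardware H.265 encoders ffmpeg may be built with:
# NVENC (Nvidia), Quick Sync (Intel), AMF (AMD on Windows), VAAPI (Linux).
HW_HEVC = ("hevc_nvenc", "hevc_qsv", "hevc_amf", "hevc_vaapi")

def hw_hevc_encoders(encoders_text):
    """Return the hardware HEVC encoder names present in -encoders output."""
    found = []
    for line in encoders_text.splitlines():
        for name in HW_HEVC:
            if name in line.split():
                found.append(name)
    return found

if __name__ == "__main__":
    try:
        out = subprocess.run(["ffmpeg", "-hide_banner", "-encoders"],
                             capture_output=True, text=True).stdout
        print(hw_hevc_encoders(out))
    except FileNotFoundError:
        print("ffmpeg not found on PATH")
```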
I rip Blu-rays directly to MKV using MakeMKV with 5.1 sound, then use HandBrake to drop each file from 25GB to 4-5GB. Works great with a 4TB external HD and a Sony BDP-S5500.
** FINAL COUNT ** 720 GB of disk space reclaimed! How much space is your video collection taking up?
thats top secret
8.5TB atm -_- ughhh definitely going to try this
I reclaimed about 660 GB already. Started a couple of months ago. Good video.
About 3TB -- ran tdarr and converted to h265 and saved just over 1TB!
60ish TB. I would love to use something like tdarr, but I usually end up re encoding my sample piece per movie 5 times tinkering settings, so having some template going over them would be a no go for me. Gpu encodes are never sadisfying for me, the time for CPU encoding combined with the hobby of collecting movies is just not fisable
New Techno Tim video, best part about Saturdays! I've been kicking around the idea of setting up a transcode server too.
Great stuff! I've actually been looking for a bulk transcode service for quite a while that wasn't just an ffmpg batch file. Definitely will be putting this to use.
Thanks! I’d love to see you crush it with 5 nodes running GPUs!
Jeff, I hope you do a video about it ! Would really like to see it setup on Proxmox, maybe with multiple nodes and more in depth, even tho this video is already pretty good!
@@FlaxTheSeedOne It doesn't matter what you use to encode, the results should be the same. It's a compression algorithm, just math. Why would there be any difference other than raw speed, measured in millions of instructions per second (MIPS)? The creator's point was that x265/h265 is a better storage solution than x264/h264. Saved him nearly a terabyte. If HandBrake can do the same, then it's just as viable.
@@tobiwonkanogy2975 according to the video, dedicated encode/decode hardware logic was used. Even Turing NVENC gives results like a medium CPU preset. It's kind of OK for an archive, but still, the quality drop might be too big for someone.
@@vadnegru Then I don't know enough about how things get encoded and decoded.
Do CPUs have a bigger section of die that works faster than a GPU's?
The only thing I've found is that CPUs tend to get new encode/decode features first, and graphics cards later on.
Other info seemed to suggest GPUs if speed is necessary and CPUs if accuracy is more important. GPUs are scalable, whereas you usually only have one CPU socket now.
My best guess is that VRAM has less ECC than DDR RAM does.
Tim, I started my transcode project last year; it took six months and saved 30tb!
I do CPU encodes and only with HD content. This way you get better quality and smaller file sizes than with a GPU.
Holy shit!?
30tb savings! that's pretty massive. What was the beginning size of the library?
@@Trains-With-Shane About 60Tb of x264 content. I now have 50TB free on my unRaid array.
@@joelang6126 I'm guessing you have already done so but if not; recommend using something like unbalance to consolidate onto fewer drives in the array, thus avoiding spinning up drives when not needed!
@@transatlant1c Already done mate lol. Drives are filled to 97% capacity before I start writing to empty drives.
OMG! At the 5:45 Mark, that tip on the plug-in paths literally saved me another 4 hours of research. I'll give you a SUB just for that... THX so much!
It should be said that transcoding video WILL degrade the quality, especially when the source video isn't lossless. You can compare it to converting an MP3 file to a lower bitrate MP3 file. You're compressing an already compressed file format, so the quality degradation compounds. When you're working with archive footage that's saved at really high quality settings, it'll probably still look fine, but don't expect this method to do you any favours when applied to a library of movies or TV shows you once ripped to an already lossy format.
Try it out. I doubt you'd see a difference in everyday usage between a 1080p 10 Mbit/s h264 source converted to a 1080p 5 Mbit/s h265 file. Not saying you can't see it at all, but for general stuff it's not noticeable at all imo.
Not to mention using the GPU to encode isn't the best way either. Sure it's faster, but there can be a quality trade off there too.
This was my exact question too -- not long ago I spent waaaaay too long getting GPU encoding working under WSL2 with ffmpeg, and even though I was aware there would be a quality drop, I wasn't really expecting it to be at a level where I would either notice or care. But it was immediately obvious, banding all over the place, completely distracting. CPU-reencoded files were the same size (within a percent or two either way), but looked completely indistinguishable from the source (at least to my eyes -- I'm sure somebody who knew what to look for would know!).
That I was able to see the quality difference was a real eye-opener (no pun intended!) for me, and I ended up just spending the extra run-time using the CPU. It might have taken a lot longer but there's a good reason why they warn you about the quality drop for GPU. If there's a way of getting around this I'd be delighted to hear it, as the time savings would be massive!
Dude, there's no difference between cpu and gpu encoding as long as you use equal settings. If you don't know what the equivalent settings are, then that's your problem, not the encoding. Unfortunately, sometimes the settings can get very complicated, and sometimes you just need to try out a few different settings.
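To make the "equivalent settings" argument above concrete, here's one way the two encodes are commonly lined up (a sketch only: the flags exist in recent ffmpeg builds, but the mapping between libx265's CRF and NVENC's CQ scale is approximate, not a true 1:1 equivalence):

```shell
#!/bin/sh
# CPU: quality-targeted x265 encode using CRF rate control.
cpu_cmd='ffmpeg -i in.mkv -c:v libx265 -preset slow -crf 22 -c:a copy out_cpu.mkv'

# GPU: the nearest NVENC analogue, constant-quality VBR mode.
# -b:v 0 lets the CQ target drive the bitrate instead of a fixed cap.
gpu_cmd='ffmpeg -i in.mkv -c:v hevc_nvenc -preset p7 -rc vbr -cq 22 -b:v 0 -c:a copy out_gpu.mkv'

echo "$cpu_cmd"
echo "$gpu_cmd"
```

Even with settings matched like this, the two encoders are different implementations, which is why people in this thread report different size/quality trade-offs between them.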
@@jonmayer This project is not for you then. The point here was to save space and not beat up the CPU while doing it. Absolute purists who don't want to change their files shouldn't be changing their files.
"old", a 1050 TI is currently the GPU for my wife's Proxmox gaming server/my 3d printing host... running almost exclusively Sims 4 and Cura. :)
I'm only 2 min in but I have to say, I hope you discuss your Tdarr settings. I've been following this project for almost 2 years, and I decided this winter, when I already want to heat my apartment, is the perfect time to really define what I want out of it and see how well it does with a mixed mashup of TV and anime.
the new Apple M1 Pro/Max/Ultra have some serious hardware encoders (they can do h.265 too), and the Max can do 30 streams of 4k ProRes or 7 streams of 8k ProRes. I'm curious how well it would do at this task if you threw a Mac Mini with one of those chips onto your network as a compute node for your video encoding
Have thought about that many times in the past, but I don't really see a reason to do this. Adding and running an additional hard drive, or getting a larger one, can be cheaper initially and is more energy efficient long-term than running a graphics card to transcode back and forth all the time when using Jellyfin or Plex.
Of course this might change if you have a spare gpu lying around anyway, have one in your server anyway and energy cost is cheap where you live.
I love MicroCenter! Always make it a point to go there when I head up north. Prices are very comparable and staff is great.
Thanks for this video. This is truly what I needed. I’ve been transcoding “manually” via Handbrake.
I wish there was a MicroCenter near me (or ANYWHERE IN THE PACIFIC NORTHWEST -- in case they're listening). I've also been "manually" transcoding via Handbrake (in batches with presets, at least) -- excited to try this out too!
Seems like no one has commented on this yet, sick new camera setup! It looks awesome, kind of feels less cramped from the telephoto lens and the bokeh and framing is also nicer.
I have plenty of OBS Replay Buffer clips that used NVENC H.264 CQP 25, meaning it takes a lot of space in exchange for lower resource usage (OBS is on the same PC that I game on). I already encode using HandBrake, but this looks like it’ll help use my desktop’s extra resources together with my server when I sleep.
does tdarr support av1 yet? i think av1 will replace h265 some day. youtube already uses it too. you might want to try using av1 on your youtube uploads to check out the quality difference
As of current, 3/3/24, tdarr does support av1. Just in case
@@DrDipsh1t Don't use tdarr av1. Use only SVT-AV1-PSY for archiving.
So far, I've saved 23.32TB using Tdarr - it's friggin stellar. Glad to see you're enjoying it too!!!
Thank you!
I'd love to hear what you're doing for Digital Asset Management to be able to make use of all of that archived video footage.
It's great for space saving but beware that H265 takes more horsepower to transcode on the fly when watching content on Plex/Jellyfin, so if your home server is a fairly low powered Synology or Raspberry Pi you might want to consider leaving it as H264 and just buy more storage, otherwise you will run into buffering!
most modern hardware has H.265/HEVC decoding support. Even RasPis.
The killer is driver support.
@@ZiggleFingers For many people that defeats the entire point of having a Plex server. If your goal is to save money on streaming services you're hardly saving money if you go out and replace perfectly fine playback devices every couple of years. I'm using an Xbox 360 on one TV for the kids to stream cartoons. I don't want to give them access to a newer more expensive device they're likely to damage anyway.
@@Prophes0r Ya, most modern hardware, but hardware is commonly being used for longer these days. My PC is 10 years old and I don't plan on upgrading it, as I only download videos to my external hard drives, which I watch on my PS4 Slim and PS Vita Slim
Just make sure all the clients are h265 decode capable and ZERO transcoding is needed: 3rd gen Fire Stick (2019) onwards, Intel 6th gen CPU or later (or a Ryzen APU), and I believe Nvidia 1000 series / AMD 500 series or later all do the decoding in hardware, as does a Pi 4.
Noticed the gesture when saying thanks, having a deaf sister I found that pleasant to see
Thanks for showing us this service and your hardware setups! Really nice to watch :-)
I'm not sure of your exact settings, but not enabling 10bit would actually degrade the quality of a big part of my library as it was recorded in 10bit. Also not sure about your framerate settings. Most of my videos use 60fps, some even 120fps. If everything would get converted to 30fps, it will make any slowdown in post impossible.
THIS is definitely worth considering. Thx Tim!
New subscriber and amazed at your videos, and this one with tdarr is fantastic, well done sir!!
This is really helpful. Though I guess my 9.9TB movie library will take decades to convert since I've got only 1 node with a 3070.
But my first test conversions looks like I can cut the size roughly in half 😍
Just thought of a new coin mining scheme
You'd be surprised bro, with nvenc acceleration I was able to do a typical 1080p h264, ~5-7gb in about 20-30 minutes and I was also able to do 2x concurrently without impacting performance, and that was with a 980 too. Might actually take less time than you think!
Tdarr has been a saver for me! I have used it for some time, ripping my movie collection (I have a lot)! I have saved over 1.5 TB worth of data, with no notable quality loss!
What were the plugins that you used ?
@@supratiksarkar3422 Tdarr_Plugin_s7x9_winsome_h265_10bit and Tdarr_Plugin_x7ab_Remove_Subs
GPU h.265 is still about 1.5-2x the size of CPU h.265 at the same quality. So if you are "archiving" I would advise against GPU transcoding.
how can that be? isn't it the same codec?
@@bobtiji yes, but different efficiency. Also, NVENC is only good on the RTX 2000 series and up. The GTX 1000 series doesn't have good quality encoding and the files are even larger, although I'm not 100% sure on the file size.
@@owlmostdead9492 uh. the more you know.
This is true, CPUs do give better quality encodes and the file size is smaller. However, on my GTX 1650 vs my AMD 5800X, I get around 4-6x better performance on the GPU. So make a smarter conversion setup: for stuff where you really want to keep quality, use CPU; for stuff you don't really care about, use GPU.
I'm pretty sure it depends on the output bitrate you set, no? In general, H265 is 30% smaller than the corresponding quality of H264. But of course, you can compress a H264 100MBit/s file into a H265 10MBit/s file, but usually it's around 70MBit/s for the same quality.
IMO if you are trying to save space and get a better quality, use CPU encoding. Else if you want to transcode a video for plex (like real time resolution switching, etc) use the GPU.
I got into this topic a while ago, when I was trying to rip my Blu-ray collection and noticed the bad quality of NVENC re-encoding.
Plex can use Intel Quick Sync and, depending on the OS it runs on, Nvidia's NVENC or AMD's answer to that, but your card needs to support h265 decode to transcode on the fly
Nvidia GPU encoding isn't the problem here. The problem is the Nvidia decoder. Try using the CPU as the decoder and the Nvidia GPU as the encoder. That way you will get fast transcoding and maybe even better quality compared to the CPU.
@@vedranart plex has ondemad trascodig from H265 10bit to any lower quality as long as the gpu in de server has the decoder and encoder available be it intels Quick sync AMD UVD/UVE or Nvidia NVENC or NVDEC
@@sojab0on Does 12th gen Intel quick sync have it?
@@vedranart Does it matter which GPU you have ? I have a spare 1660 ti and spare rtx 3060 GPU. Is one better than the other at transcoding?
I wish you explained how you determined which Nvidia driver to use. I've been looking around for hours trying to get that smi thing to work. What made you choose 510 over any other in the list??
Looks interesting if you're in need of just compressing videos for storage purposes and/or watching videos locally with a compatible media player. But, if you selfhost the videos for like website/blog/streaming service, or want to make sure the videos are playable with the most basic media player (without the need of installing codecs), then h264 is still the way to go.
Yeah, just storage
We are getting to the point that most devices in the wild have built-in H.265/HEVC decoding. Anything released 2016 and later should have it.
So it really comes down to whether or not you think supporting 7+ year old hardware is worth it.
In some situations it will be. In others it will not. And there are situations where you might want to store in HEVC and live transcode back to H.264 for unsupported devices.
I go way WAY back with video encoding. Spent many hours @ the doom9 forums trying to squeeze every bit out of every file while keeping all the quality. Transcoding just isn't worth it unless you have huge source files, like ripping straight from a disc. You always lose a little quality and slow storage is cheap and easy.
Storage may be cheap and easy, but serving large files can be a problem, or the server will end up transcoding to a worse quality on the fly. And besides, there are other reasons to use an app like Tdarr or FileFlows than just to transcode. You can remux, add new audio tracks in different codecs (my stuff can't play EAC3 for example, which would cause Plex to do a transcode), automatically cut commercials, add chapters, remove unwanted streams to save space, etc.
@@JohnAndrews_nz You have valid points but editing streams in a container isn't quite transcoding. I'd argue that's even worse than transcoding since the gains are even smaller (most of the time) but require manual work unless every stream is tagged properly. I guess everyone has different needs but I'm done with the days of 100% CPU for weeks on end for a measly 720GB.
@@drivenbydemons6537 you are correct, that would not be transcoding. However, you can still gain a lot of storage if you strip out all the additional unwanted audio tracks. That can save 10-15% per video at times. Nowhere near h264 to h265, but do that across a few TB and you can save a couple hundred GBs
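Stripping the unwanted tracks mentioned above is a lossless stream-copy in ffmpeg; a hedged sketch (filename and track selection are just examples; in practice you'd check `ffprobe` first to pick which streams to keep):

```shell
#!/bin/sh
# Keep only the first video and first audio stream, copying both untouched
# (-c copy = no re-encode, no quality loss). Prints the command (dry run);
# drop the leading echo to actually run it.
strip_extras() {
  src=$1
  dst=${src%.*}.trimmed.mkv
  echo ffmpeg -i "$src" -map 0:v:0 -map 0:a:0 -c copy "$dst"
}

strip_extras movie.mkv
```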
I'm John BTW. Switched to this account for transparency. Was on my mobile/personal account previously.
How is tdarr for just transcoding audio? My home theatre doesn’t support dts-hd ma audio, but many titles have it. I’d like to convert the audio to dolby digital which my HT supports.
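For the audio-only conversion asked about above, the underlying ffmpeg operation would look roughly like this (a sketch: filenames are hypothetical, video and subtitles are left untouched, and 640k is AC-3's maximum bitrate; Tdarr has community plugins that automate this same kind of audio transcode):

```shell
#!/bin/sh
# Re-encode only the audio to AC-3 (Dolby Digital), stream-copying video and
# subtitles. Prints the command (dry run); drop the echo to run it for real.
convert_audio() {
  src=$1
  echo ffmpeg -i "$src" -map 0 -c:v copy -c:s copy -c:a ac3 -b:a 640k "${src%.*}.ac3.mkv"
}

convert_audio title.mkv
```

Note that going from DTS-HD MA (lossless) to AC-3 (lossy) is a one-way quality loss, so some people add the AC-3 track alongside the original rather than replacing it.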
Let's try with H265+, additional ~20% size reduce!
Any idea how to diagnose what the transcode errors are? I have a 96gb project all in h264 and am trying to convert to h265, but it seems to only convert 3gb of the data and chucks half out as an error. Any help would be greatly appreciated
The Nvidia Quadro P2000 has no software limitations regarding hw transcoding. That's why it's popular for Plex. You can transcode 20 streams simultaneously.
If you have a lot of spare processor time, you should rather transcode to AV1 (better than h265, open source, and the actual future of video) and Opus (for audio). This will take a lot of time and currently doesn't have any GPU encode support. VP9 is a good alternative to h265 for better browser and mobile device support, as h265 royalties are insane.
For anyone who is trying to run a 10 series or 20 series card for transcoding, you can unlock the GPU pretty easily to allow more than just a couple transcodes
Will TDARR work with a Google Coral TPU for transcoding?
Can you run multiple GPUs in one server for TDARR? Basically, build a strictly transcoding machine.
Can TDARR be used to compress cctv nvr files?
Is there something similar for managing photos? Like auto-rotating them, finding duplicates...?
How about the power usage? And could I install tdarr on my Windows machine and convert on my Synology?
You might want to check how many streams the P2200 Quadro is limited to. It may be up to 6 or unlimited. I'm unable to check now, but Nvidia have a nice table with the details.
you're a life saver! I've been recording my gameplay to my NAS and I've been able to reclaim about 50% of my storage back ! (300GB)
Never played with tdarr. If the node is on a different physical machine, do you transfer the video files over to that machine, or do they go over the network from the main machine?
the videos that you are saving are your source files? wouldn't you want to save those in the highest quality format possible, instead of a lossy one?
can tdarr support multiple video cards on the same node?
Not for my youtube archival footage. It’s hours of footage I may never look at again
There's also a line to be drawn I think. Never let perfect be the enemy of good enough.
Do you need raw footage of your desk shots? Or would a compressed version serve the same purpose for next to no disadvantage.
Granted, if it's your wedding video you probably want to preserve the pixels, but if it's shaky cam footage of your holiday visit to the zoo, would re-encoding it from h264 to h265 really have any detrimental impact?
I developed a similar although not as polished service back in 2018 when I was in uni. But I stopped development after I started working and could afford buying more hdds hehe. Glad to see that someone else had the same idea
The sign language "thank you" was such a cherry on top. Even for a hearing person.
What about adding more video cards to a server? Do you need high bandwidth or can you use a pcie splitter and have like 4 cards working away?
Anyone know the solution for ERROR] Tdarr_Server - SyntaxError: Unexpected token u in JSON at position 0
I am looking for a self-hosted "warranty management system". Could you make some suggestions or give some advice?
Great video, but I have a question: Did you have to create the other nodes on other computers to do that? Couldn't you just add more GPUs to the same server and create additional nodes on it? Thanks
Tim, is it possible to make a video showing how to use an AMD GPU? Tdarr seems to only have Nvidia support.
I have Handbrake and have exported the .json file for the presets. However I am not sure how to get that into Tdarr.
Great video and software! I'm on track to reclaim about 30% myself. Thank you and keep up the great work!
So glad to see your channel grow from sub 10k to almost 100k. Good job man
Thank you so much!
Short followup video?
Can we see a difference before and after on a TechnoTim how to video?
Are you recording direct to h.265 now? Etc.
Thanks for considering.
Is it possible to use a GeForce GTX 670 FTW, or is there a minimum requirement?
Can AMD RX580 be used or does it need nvidia card? I am running Proxmox so I was thinking about assigning my AMD card to Windows 10 machine and set up Tdarr on it. Can it be done with AMD cards?
Would be super-cool if the next version added support for ai driven metadata automation where a sidecar file is generated to facilitate searching your library (digital assets management).
Doesn't GPU encoding use built-in ASICs for encoding purposes? In that case, isn't it seriously hurting the quality and size compared to CPU encoding with a medium/slow preset? Especially when using a GPU with an old encoder like a GTX 1080; the 16XX and 20XX have newer encoder chips.
yes, GPU encoders are optimized for streaming applications, not quality encoding.
Nice video, thanks... but all the videos I watch use NVIDIA, none use RADEON. Do you know why?
I think it's due to the NVENC encoder being more accessible.
@@TechnoTimThanks for responding...
*There is no benefit to running 3 off line transcode streams, as each stream simply runs @ 33% of the total Encoder horse power, especially for h.265.* The '3 stream limit' is meant to satisfy real-time h.264 online/outgoing transcodes to viewing clients. I run single offline jobs on ffmpeg in Windows for h.265 and max out NVENC, whereas I run 3 streams of outgoing h.264 that can rarely max out NVENC on my plex server due to the lighter workload.
Not entirely true, it depends on your setup. You may have some CPU-bound tasks to perform in the processing stack, and you also have some storage actions to take, both of which can take time but do not use the GPU. So it completely depends on your use case.
Love me some Tdarr!!!
Will this work for converting variable frame rate footage like twitch vods to constant frame rate? A lot of video editing software works better with constant frame rate footage and I currently do my converting with handbrake
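The VFR-to-CFR conversion asked about here is a standard ffmpeg filter operation; a hedged sketch (target frame rate, CRF, and filenames are just illustrative, pick your own quality target):

```shell
#!/bin/sh
# Force constant 60 fps output for editing: the fps filter duplicates or drops
# frames to hit a fixed rate, then re-encodes video while copying audio.
# Prints the command (dry run); drop the echo to run it.
vfr_to_cfr() {
  src=$1
  echo ffmpeg -i "$src" -vf fps=60 -c:v libx264 -crf 18 -preset medium -c:a copy "${src%.*}.cfr.mp4"
}

vfr_to_cfr vod.mp4
```

Whether Tdarr applies this depends on the plugin/flow you configure, but since it drives ffmpeg underneath, the same filter is available to it.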
The hard-coded dependency to reach out to Github with a stateful connection, concerns me. I'll wait until a proper security audit is done of the project before continuing. It was a bit suspicious when the project tried to reach out to a dozen external IP addresses when Tdarr_Server was launched and several additional external IPs when Tdarr_Node was launched.
Tdarr has been around for about 3 years, if no one has flagged it by now, I don't think it does anything nefarious. But if you are that concerned, you should do one.
Why would tdarr need to access anything outside of your home network?
thanks for the heads up, I’m skipping tdarr.
@@majorgear1021 You can ask them, they are on github and discord.
Do I need both the server and node versions, or can I run only the server or only the node?
Any video on how to set up a Windows tdarr node to work with a server running TrueNAS Scale? I'm probably doing something stupid :P
Like most good video apps, this is based on ffmpeg under the hood. At this point I've seen it used so often I'm just using it outright along with some simple bash scripts. Nice find for a distributed transcoding solution though!
Would limiting the transcodes to 1 make any difference in the time it takes to complete 3? I feel that setting it to 3 just make each transcode take 3x as long.
you're right, it just divides them up. However, there might be gaps where the transcoder isn't being used, and using more than 1 stream ensures maximum encoder utilization
Thanks for the video. I noticed 720p videos show noticeable quality degradation, whereas on 1080p it's very hard to spot. Also, I was wondering, is there any possibility to add multiple GPUs to 1 node for transcoding?
Try looking into something like LizardFS for storage moving forward.
You'll be able to scale your storage with redundancy at a software/filesystem level rather than dealing with raid arrays.
Need more redundancy on a single folder, no problem. Need more space, add more drives or chunkservers.
If a drive or server drops, chunks from other drives or servers will be copied to re-meet the redundancy goals, instead of a crazy raid rebuild.
av1 is the future of video encoding. It isn't quite here yet because there isn't much hardware assist available yet, but it totally kicks h265's butt... and like... av1 is royalty free... which maybe you don't care about... but the big commercial content providers do.
keyword being future :P cos currently the support isn't there yet. but yeah AV1 will become the standard in the not too distant future (hopefully not too distant...)
Transcoding 8tb of videofootage with a xeon e5-2407 v2, would I be able to achieve that within my current lifespan? Or is a gpu really a dealbreaker when dealing with several terabytes?
CPU transcoding gives you better quality and smaller file size. I say go for it!
@@TechnoTim Really? In that case there's no harm in trying, thanks
Sounds similar to Sonarr, Radarr, and Lidarr. Don't doubt that's one of the intended use cases
I am looking for a similar type of engine to optimize pictures for websites. If you find or know of something, please let me know.
Well, now I can finally hang on to my footage. I was always too lazy to use ffmpeg scripts to compress my video files, but this looks promising 😅
@1:50 never work with hardware barefoot, you never know what you drop on your feet :)
Also, what case are you using for the PC conversion with the 1080?
It's the chenbro one listed here! kit.co/TechnoTim/techno-tim-homelab-and-server-room-upgrade-2021
My Plex server is around 12TB, however I need to stick with h264; Plex doesn't work very well with h265
The Raspberry Pi3 I use can only play h.264 movies. I'm looking for a device that can play h.265 and 4k movies, does anybody have good ideas what to buy?
In your video you have assigned 1 movie to 1 GPU/container, but as I found out, h265 transcoding with CPU encoding is way more efficient (smaller files with better quality).
Is it possible to use multiple Tdarr instances (docker containers) and have each of them process only a few minutes of a single movie/clip, instead of 1 full movie each?
I have a 3-node docker swarm. In a scenario where a movie is 30 minutes long, I would be able to assign 10 minutes of clip to each node and let Tdarr transcode and re-compose the file as output.
If you wanted a faster system you would just add docker swarm nodes, and you would be able to cut transcode time by the number of nodes you have.
I saw an Italian guy called morrowlinux do something similar with distributed ffmpeg, but I haven't understood how he did it, nor have I found any reference in the ffmpeg libraries.
Can Tdarr be used for this? Or does anything similar exist?
ubuntu 22.04 thoughts? 20.04 is what I've gone to for my baremetal or VM needs (sweet sweet LTS), but I hated that I had to use 21.04 for pi USB boot. With 22.04 I'm looking forward to it. I used the beta in a VM with success, and I would love a video on your thoughts of it.
I will switch to the latest LTS probably in about a month or so
What about AV1?
From what I've seen to date hardware accelerated transcoding has been developed to transcode on the fly video well, but at the expense of file size. I haven't tried this software though, so maybe this is better? I've used handbrake for 90% of my videos, and software transcoding has always provided the superior quality/file size over hardware accelerated options. Would be interested in seeing a comparison. I don't have a server myself, so Idk how to run Tdarr. lol
I’ll agree with others in the comments that CPU encoding yields way better results and can be done with okay speed with the right CPU. My 1950x does it well and my 5950x does it very fast. A gpu will be faster but file sizes are much larger with no quality benefit. My 3090 can do it faster but not worth the file size.
Is there something like this for images too?
It should be pointed out that there are a lot of options for encoding h265 videos. Tdarr is not unique in this regard. I'm unlikely to switch from FFmpeg and x265.
Correct, most of these all use ffmpeg under the hood: Tdarr, FileFlows, Unmanic. It comes down to your use case and how you want to automate it.
I like the new setup! Is this in the “darr” stack? Sonarr, Radarr, Lidarr, etc?
Thanks! I’ve never used the others, so maybe?
Tim, great content. Always learning something. Have you looked at Unmanic by any chance? I have been using your method with tdarr since this video went up, but I'm thinking of changing to Unmanic, as the file sizes are much smaller and I can't tell any difference in the video quality. E.g., converted my Blu-ray Star Wars from around 10gb to between 6gb and 8gb using different settings/plugins with tdarr. Using Unmanic, I got the size down to 2.5gb.
Not yet!
Not only did I get Unmanic working faster, it's also open source, for those who are interested in that aspect too.
You gotta pump those numbers up those are rookie numbers 😁
I'm currently at 7.8 TB of savings 😅 It took my 2070 and Quadro P1000 over 6 weeks of 24/7 encoding
This is one of those cases I really don't understand who it's supposed to target. It's still way too complicated and too much hassle for a random schmuck to deal with but at the same time someone who knows what all of this is about will have much more specific requirements and already know how to handle transcoding in a way that suits them much better.
I'm assuming you mean the video and not the software? Either way, I don't know if I'd qualify as a random schmuck, as I build/repair computers, have hosted my own servers, etc. But I'm FAR from being a professional; everything I know I've learned by doing it, no education or anything. Encoding/decoding/transcoding are a very weak point for me, as are parts of networking. So for me this seems an awesome way to get started messing with this stuff. So I guess I'm who this would be targeting.
@@Mad-Lad-Chad Of course I was being hyperbolic, I can see the positive comments after all. But I genuinely don't understand why someone would think it's preferable to learn how to operate software which has this kind of narrow focus but is not at all beginner-friendly compared to a one-button-solution, rather than ffmpeg, which is barely (for the same kind of purpose) more complicated and versatile enough to handle virtually any kind of encoding-related task. Granted, I did of course try many GUIs over the years as well (many of which are just frontends for ffmpeg anyway), but I always go back to command line because it's just so much more powerful and reliable.
@@EireBallard A lot of users hate command line, but short of that it would be due to lack of knowledge I imagine. I thought this software could do more than just this specific conversion, but admittedly I was mostly listening to the video in the background as noise. I'm not even really sure what ffmpeg is.
Thanks for this. I have been looking for a solution, as I have around 80TB of video footage which I need to archive, and reducing the size while not affecting the quality would be a great move forward. Tried this with a Tdarr server and node on Windows 10. All seems OK until I try transcoding files over 4GB: they transcode, but the copy from the cache does not happen. Am I missing something fundamental here? Has anyone had the same issue on Windows or Linux with files greater than 4GB? (I have about 35TB of files over 4GB; max size I think is 90GB.) Any suggestions?
I'd be curious to see if it does any better with unlocked drivers
6 mins after posting. Did you make it to 1TB?
Tim, thank you for this. I've been using handbrake for years and looking for an alternative. My Plex drive currently has 60GB free of 8TB, excited to get home and give this a go
Whats wrong with handbrake?
@@nixxblikka nothing in particular, just looking for alternatives. Best practice is typically to diversify software options don't you think?
@@ryanmckee6348 well don't fix it if it ain't broken 😉 i'd rather focus on one software and master my skills there instead of many different ones - for such a use case. Also too not too sure, if the features are on par in respect of movies, like sub titles, audio etc... That makes more sense for you tube material but not necessarily for movies
Hey, just stopped by to suggest a video on using the clouds' (Google, Amazon, Oracle, Azure) free tier GPU options to help speed conversions along. I have a new Dell server, a 9th gen i7, and a 3700x with a 3070, and the amount of time it takes to convert to h265 is killing me
What do I have to do with the Windows node .json file?
I've been planning on doing this to my 1TB+ family vids folder (h.264 1080p30 to h.265), but I'm currently waiting on AV1 hardware encoding GPUs from Intel.
Based on the transcode options seen in your video, transcoded videos might have lower video bitrates.
"Settings are dependent on file bitrate, working by the logic that H265 can support the same amount of data at half the bitrate of H264."
I guess the disk space reclaimed comes mainly from that bitrate decrease.
Oh, gonna use a dedicated RPi for this. Shrink my media library without taking up my local server's resources and wasting the energy that a fully utilized x86 CPU would.
I love having a project with real purpose other than me just testing.
Why Tdarr and not Handbrake on DOCKER? I mean should be enough for regular user, isn’t it?
Looks like it uses handbrake or ffmpeg. (eg in settings there are options like handbrakePath and ffmpegPath)
Seems like a GUI script that feeds videos to servers/nodes that run them.
Haven't looked in detail to see if they have their own encoding software, or whether they're really just using open source to run their subscription-based business... 😆
Because its a lot easier to automate Tdarr than it is handbrake. Sure if you want one off things, handbrake is fine. But if you want to automate hundreds, thousands, tdarr is much easier.
@@fileflows204 You mean the queue option in HandBrake doesn't work anymore? Egads!! ... oh wait, I just checked: yes, we can still queue whole directories or file lists in one quick process. Not sure how it can be easier than that. Need to run multiple HandBrakes on networked computers off one file list? No problem, split the list into parts, queue each list on a separate machine, and share all the files on the network. Or if you want to stream the files to the encoder, substitute FFmpeg (like Tdarr does).
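The "split the list into parts" idea above can be sketched in a few lines. This just divides one master file list across N machines; the filenames and the machine count are made up for illustration:

```python
# Round-robin a master file list across N encoding machines, one chunk each,
# as described in the comment above. Each chunk would then be queued on its
# own networked machine.

def split_list(files: list[str], machines: int) -> list[list[str]]:
    """Distribute the file list round-robin across the given number of machines."""
    chunks: list[list[str]] = [[] for _ in range(machines)]
    for i, f in enumerate(files):
        chunks[i % machines].append(f)
    return chunks

movies = [f"movie_{i:03}.mkv" for i in range(10)]
for n, chunk in enumerate(split_list(movies, machines=3)):
    print(f"machine {n}: {len(chunk)} files")
```

Round-robin keeps the chunks within one file of each other in length, which is about as even as you can get without knowing each file's runtime.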
@@mhavock Well, you have to make the queue, so that's a step, so that's not quicker. Tdarr is automatic; there are no steps once it's set up.
But if you want to use HandBrake, go ahead; no one is forcing you.
Couldn't you just convert your backup server into an encoding server? Like put all three GPUs in the server, throw Proxmox on it, and pass through the GPUs to three separate containers/VMs?
First, it is very likely that the existing videos were encoded with a lossy codec, which will have introduced noise (artefacts). If I re-encode with H.265, doesn't that add more noise on top of the existing noise, which the H.265 encoder will faithfully try to encode because it cannot know whether something is noise or original picture?
Secondly, isn't H.265 itself fading out? It is proprietary and you have to pay licensing fees to decode it, so a lot of video players do not support it out of the box. As far as I know, Google developed VP9 as an alternative and is now moving to AV1. So if you are re-encoding everything anyway, why not do it with AV1 instead of H.265?
Thirdly, 700GB? I don't know, I have old 1TB HDDs sitting unused at home. The last external HDD I bought was a 10TB for about $185, which puts 700GB at less than $13. Considering all the energy consumption and hassle of transcoding, I am not sure saving $13 is worth it.
I feel like the biggest point of this is not having to add another server to your lab, because most homelab people already have their servers filled to the brim. Otherwise you would have to buy another server plus drives, set it up, and it would probably end up sitting idle as a storage server, consuming more electricity and rarely being used. I think that would end up being more expensive.
Is there a way of getting Tdarr to add meta details while encoding?
When you say add meta details what do you mean? metadata inside the video file itself, or something like an external nfo kodi file?
Very interesting. Can it use AMD cards? Several cards in one machine? Several cards in a mix of NVIDIA and AMD?
Tdarr/FileFlows both use FFmpeg under the hood, so as long as it has access to your GPU, and your GPU supports hardware encoding, they will work fine.
I use MakeMKV to rip Blu-rays straight to MKV with 5.1 sound, then HandBrake to drop each file from 25GB to 4-5GB. Works great with a 4TB external HD and a Sony BDP-S5500.
Now we need a tutorial on setting up an open-source media server with hardware encoding (Quick Sync and GPUs).