From what I could tell, speeds both ways were limited by the local SATA SSD on Phil's computer, not the NAS cache... (?) This is odd behavior though. Unraid should cache inbound writes to RAM first by default. There's plenty of RAM to go on; it could easily take the whole file, but I don't see any RAM (cache) utilization increase when writing *to* the NAS. Is this because the NAS SSD cache drive can write as fast as the "client" SSD can read, so there's no build-up of incoming data in RAM? Given this setup, with the array not being striped, and the file sizes they say they work with, NVMe storage on the clients should maximize their usable transfer speeds. No need for NVMe in the NAS, with 30+ GB of RAM available for write caching.
Drive temps aren't going to change during a file transfer to the NAS, because that file is going to your cache SSD, not the hard drives. There is a "mover" that runs every so often and moves files off of your cache and onto the hard drives. Then when you access that file, it will be read from the drives. That's also why you can add as many SSDs to the cache as you want without reconfiguring anything: the cache is just drives that get written to before the files are moved to the hard drives. Adding more cache won't increase performance unless you exceed the 1TB your current 1TB SSD cache can hold. RAIDing the cache drives together will improve performance, but simply adding more cache drives as single drives won't.
+1 for unRAID. Been running it for about a year and a half. Absolutely love it and how user friendly it is to get into. For the hardware junkies out there: dual X5670s, 72GB RAM, 8TB white label Red drives for the array (1 for parity), 1TB NVMe cache
Honestly, what you just said is more complicated than anything I saw in the video. Granted, I've never set up a NAS and I don't see that changing anytime soon, so maybe it actually is easier than it sounds.
@@TurinAlexander Installing ESXi and FreeNAS is almost just next, next, next. If you want a more advanced setup you might need to ask someone, but it ain't that complicated.
One thing you gotta keep in mind @JayzTwoCents, is mechanical hard drives are really dense, so they take much longer to heat up. You likely won't see any change if you're only pegging the drives with a 20 or 30 second read or write. When my FreeNAS rig (8x2tb drives with 3 parity: 10tb usable) does scrubs twice a month, it may take an hour or two for the drives to start climbing above 40°C. I do have a fan control script that grabs the temps every five minutes and adjusts the fans accordingly.
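A fan curve like the one that script applies can be sketched in a few lines (purely illustrative; the thresholds and duty values here are made up, and the real script would read SMART temps and write to whatever PWM interface the board exposes):

```python
def fan_duty(temp_c: float) -> int:
    """Map a drive temperature (deg C) to a fan duty cycle (percent).

    Thresholds are illustrative, not tuned values from the comment above.
    """
    if temp_c < 30:
        return 30   # idle floor: keep some airflow over the drives
    if temp_c < 40:
        return 50   # normal operating range
    if temp_c < 45:
        return 75   # scrub in progress, drives warming up
    return 100      # too hot: run the fans flat out

# A poller would grab SMART temps every five minutes and apply the duty,
# e.g. via /sys/class/hwmon PWM files on Linux (the path varies per board).
print(fan_duty(36))  # -> 50
```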
These videos are the best. It's like a mechanic trying to explain the physics and math behind the internal combustion engine. No different than Linus smacking his head up against his servers.
I used to casually watch Baracules videos a ton but I haven’t seen one suggested to me in years. I am now subscribed, I miss his videos. Glad to see him.
Jayz & Jerry, thanks for reminding me of my woes back in college whenever I walked with my friends who just happened to be super-nerdy, annoyingly geeky and frustratingly tech savvy! They'd wind around all afternoon only to finally agree at midnight on the same things they were arguing about in the first place. Still love them though. At least people thought I was a smart kid just because we walked together 😑😑😑😑😑😑😁😁😁😁😁😁
@@Jayztwocents Yes, you did. How good of a 10G connection though... You didn't actually count the corrupt packets, the re-sent packets and so on. For "brute force" stuff like single transfers it's fine... TCP deals with re-transmission if there are errors. Also, it's just ONE network cable between two computers in a relatively "clean" environment (somewhat empty warehouse, minimal "noise"). In a bundle of network cables going through walls or pipes, signals from one cable would affect others... or if the cable is routed between other computers, or behind a network rack... or if you route the bundle of cables past some electrical cable or fluorescent lighting or some microwave ovens, you could experience lost packets... and so on. The standards recommend Cat6, and require Cat6a, for signal integrity at the maximum distance.
@mariushmedias While that is what the specification states, in real-world conditions, with good quality cable, low interference, and typically under 50m, Cat5e can do 10Gbps. What you have to understand is that the specification needs to allow for various conditions: lower-spec cable that still meets specification, server rooms where lines come in bundles together, racks upon racks giving out EM interference, and of course electricians who think they know how to terminate a patch cord but wouldn't know the difference between a Cat5 and a Cat6 termination if it bit them on the ass. While I wouldn't rely on Cat5e giving a constant, no-drop-off, no-packet-loss connection at 10Gbps, it can be done.
@@backupplan6058 What it really comes down to: if you've already got at least Cat5e in place in an environment similar to the one here, it will probably be OK, but if you're going to be running new cable, go with 6a -- or in Jayz's case, seriously look into doing fiber. Personally I've always hated working with cable (particularly Cat6+), and would never trust my klutzy self to do a significant run of fiber, but I still went ahead and redid all the Cat5 in the house with Cat6a (actually got a contractor to do the longer runs between floors). As for Unraid, it looks like a neat Linux-based system, but after running a few other specialized nixes (including FreeNAS, pfSense and RouterOS), I've come to the conclusion that going with the latest standard release of something like CentOS or Ubuntu Server whenever possible is almost always going to result in the fewest regrets. But then, as a mostly Linux sysadmin for the last 20 years, I'm a glutton for punishment. Enjoyed the video immensely though. It's always good to see other people struggling with this stuff, and coming out victorious.
You kid, but... www.xtremehardware.com/forum/topic/2610-progetto-impossible-modding/?do=findComment&comment=38471 Those 2 blocks went in 5.25" bays, kinda like an adapter to cool drives from the sides. At the time it seriously made sense to cool the HDDs in such a radical way.
Just a couple of corrections... 10:50 When you're copying stuff to UnRAID it doesn't spread it out like a traditional RAID or ZFS; it writes contiguously. It only moves on to the next disk depending on your allocation method. 14:30 With UnRAID and a cache added, it's not hitting your drives in real time. It's on your SSD, and there is a timer job that moves data from your cache drive to your array at scheduled times during the day. The speed limit you're hitting of ~500MB/s is the speed of your 860 EVO. If you use an M.2 NVMe drive or RAID a second SSD, then you should effectively max the 10Gbps on writes. You will typically be limited in read speed to the speed of a single drive though, which is the downside. As for the Aquantia cards, you could have run a Linux distro like Ubuntu Server or Proxmox and used ZFS on Linux, with something like Webmin plus the ZFS plugin to give you a UI for configuration. Then you would have had all the benefits of the ZFS filesystem, with a much more robust hypervisor (KVM) for VMs than FreeNAS has. UnRAID is a good all-in-one solution though for simplicity of setup and management if you aren't needing the read speed.
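The "allocation method" mentioned above decides which single disk receives each new file. Roughly, the "most-free" method works like this (a sketch of the idea, not Unraid's code; disk names and free-space numbers are made up):

```python
# Unraid picks one target disk per write based on the share's allocation
# method; this sketches "most-free". ("High-water" and "fill-up" use
# different rules.) Values are in GB free and purely illustrative.
def pick_disk_most_free(free_space_gb: dict) -> str:
    """Return the array disk with the most free space."""
    return max(free_space_gb, key=free_space_gb.get)

disks = {"disk1": 120, "disk2": 860, "disk3": 430}
print(pick_disk_most_free(disks))  # -> disk2
```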
fucking hell man, just imagine telling someone from the 90's you have 100 terabytes of storage over a 10 gigabit network connection and watching them laugh in disbelief. makes me wonder what kind of shit we'll see in 2040.
@Kevin Prima I honestly don't think 1TB existed in the 90's. Maybe in a massive server or supercomputer, but the largest drive in 1990 was less than 1GB and the PATA limitation was 128GB until 2002. Many supercomputers of the era had no more than 6GB of total system memory, so I'm sure drive space was rather short too.
I'm genuinely concerned for Barnacles' health. He doesn't look like he's making any effort to like... Make his own life better. He doesn't look good. :\
UnRAID is good for flexibility, but not performance. It doesn't stripe and there's no read caching, so reads from the array are limited to the speed of a single disk. The cache is essentially a write-back cache unless you pin the data to the cache (set the share to cache-only). You got good read speeds because the data was still in the cache. There's a mover that moves the data to the array on a schedule. I also think I've read that you can have corruption issues if you try to use SSDs as array drives. You can mix and match disk sizes in the cache if you use BTRFS. That's how I set mine up.
Last I heard, BTRFS was still not considered 'solid' on any RAID beyond 1 or 0... (Even Red Hat finally dropped BTRFS last year; not a very good sign of a promising future in the kernel...)
@@mdd1963 Yea, I think it's pretty much dead at this point. I think some people were reporting corruption issues on the Unraid forum, but I'm not sure what RAID level they were using with it. I've been running it in RAID 1 for a few years now with no issues.
Jerry did make a video once...I sill have the VHS tape around here somewhere. 😆
That's kinda sad
I used to watch Jerry's videos but then the motor on my VHS-C adapter went out.
"You can't find it because it was Betamax not VHS." - Sony
I have a Beta...wait, how did he do that...Jerry isn't old enough to know what Beta even is.
"SUCCESSFUL JAAAAAAAAAAAAAYYYYYYY!!!'
"JERRY!?"
"Hallo..... please make me relevant"
"We're not gonna change the default creds right now" is code for "this box will keep default creds forever" 100% of the time.
basically
@Dan Gingerich Right. Netgear routers with default name and password can be cracked immediately with public wordlist. Wifi names that look like they were set up by normies usually have easy passwords that can be grabbed with RockYou or alike. I haven't looked too much into it but a lot of consumer routers probably also have WPS vulnerabilities but I've never had success. Most have active mitigation from what I've seen.
@Dan Gingerich You are 100% correct, but it's also a numbers game. The actual risk exposure for an average home network is infinitesimally small in terms of actually being targeted and hacked. So people have a false sense of security even though they are exposed because the chances of anything bad happening are so low.
Dan Gingerich But now we know your password is exactly 24 characters, which makes brute forcing easier...
"The most overkill NAS ever"
*LINUS HAS ENTERED THE CHAT*
The most Linus line of the vid: _"It's an Intel 10-core, 20-thread; It's basically just acting as a SATA controller, and the TITAN was just already in there!"_
LoL
What? A NAS build w/o Linus?
:O
*Linustechtips has left the building....*
at least its not a jellyfish
That's Illegal!
Isnt the petabyte project the most overkill NAS? 😅
JayzTwoCents joins Snazzy Labs :D
5:10 Jay complaining that GTA V takes 30 minutes to download because it's 78 gigs
***cries in 5 mbps***
Cries in 270kbps.
It actually takes only about 11 minutes on a 1Gbit line.
"visible lag"
Cries in 1.4MB/s
Cries in 1gbps
The Dynamic Duo back together again! kicking ass. always good to see Jerry again :D
I wish someone would look at me the way Jerry looks at Jay when he talks about networking...
For real
He thought he was a sandwich
I saw that too.
Lol
Why you gotta hate they are both clearly passionate about this
Hi Jay. Unraid veteran of 3-4 years here. Just wanted to comment to say in response to what you said at 14:28, if you are using a cache drive, you actually aren't dealing with spinning drives for writes to the array. It will first write to the cache SSD and then when you've scheduled unraid's "mover" script, your files will be moved from the cache to the spinning (parity protected) drives. This is also why you wouldn't expect your drives to heat up on writes to the array. That would only be applicable if you didn't have a cache drive. Because you have to write to the array while maintaining parity, it significantly reduces your write speeds to the actual array, which is why a cache drive is necessary for most use cases with Unraid. Let me know if I can answer any questions :)
Great answer! Was scratching my head at that point of the video. You saved me from having to comment on it 👍
Hi Jay,
I love unRAID, though I'm pretty sure you gave yourself a false sense of performance with your file transfer. While I couldn't verify it on-screen, I'm pretty sure your transfer speed and drive temp test were flawed. When you transferred the large file to your unRAID server, it would have been going to your SSD cache drive (that's the only way it would transfer as fast as it did). By default, this won't move to your spinny-go-round array until... I think it's 3:40am. You can also run "mover" manually and force it over to the array if you don't want to wait that long.
Once the files are on the spinny drives, your transfers off of the array (back to Phil's computer) will slow down to spinny-go-round speeds, and max out at... ~200MB/s.
Your temperature test also didn't actually occur, since the files would have never moved over to the spinny drives.
I'm also thinking your statement that you can edit directly off the array is probably wrong, as once again, once your data is on the spinny array, you're limited to the HDD speed. Without striping, that's basically going off of a single spinny-go-round HDD.
I absolutely love unRAID. I've only been using it for about a month, and it is great for my purposes (media storage).
What you could consider for editing is adding another SSD. Not to the protected array, and not as a cache drive, but using the "Unassigned Devices" plugin. Then set up a share for just that SSD. If you want to edit directly off the server, this should keep you at SSD speeds and better take advantage of your 10Gb connection. Once you're done editing, you could move it over to the protected array for long-term storage.
Love your new studio! The old studio made ME feel cramped just watching it.
Not saying that wouldn't work, but couldn't they get a similar result by creating a new "Current files" share to work out of, with Cache set to ONLY? Still, add a second SSD to the cache to try and max out I/O though
If they wanted to edit off of the NAS they should add NVMe drives not to be bottlenecked by SATA speeds
@@1480750 This would work too, and wouldn't require adding another drive. I wasn't looking. Did you see how large his cache drive was? If it's smaller, that may not be ideal due to space constraints, and adding a drive would make more sense.
@@PeanutFox It was a 1TB Samsung 860. I'd think that's a comfortable size even with a few things on the go at once. I'm using a pair of 1TB drives (ADATA SU800), getting writes in the high 400s to mid 600s, and typically have ~250GB "living" on the cache
Yeah, there's no way they're getting 500 MB/s to the actual drives. The fact that they seem completely unaware, even checking drive temps (on drives that aren't in use yet), kinda baffles me...
Linus is shaking and crying right now
Not since he built his monster SSD unit.
GOOD
Yeah because of all the dislikes.
So that was the high pitched sound I was hearing the entire time I was watching.
Linus really lost it. He even admitted that things are going bad and people should watch his videos as soon as they are uploaded and stuff. Totally desperate
"Most overkill NAS ever"
*Linus laughs in 10gb/s petabyte storage*
Th3ToxicCanadian but that’s actual storage, this system is more configured for gaming and rendering than a NAS 🤔
Barnacules Nerdgasm True... Btw, I don't know if you'll see this comment, but will you be uploading a bit more frequently now? You're still my fav YouTuber, and I've learned lots from you. I want to become a Software Engineer/Developer, and I was motivated and inspired because of you
This is California not Canada.
@@mikkelbreiler3846 What?
Except then you've got the one dude who decided to bring his job in enterprise IT support home with him by literally buying an old array. Then you do multi-path link aggregation on your fabric to end up with enough connectivity that you can not only quickly access your files, but also configure boot-from-SAN with no discernible lag. If you're just doing block storage and multiple 8Gb connections are fine for your use, I'd recommend picking up an old VNX with a few DAEs of 2TB drives and a Brocade DS5510B. It'll be somewhat expensive, but an array managing your RAID 6+2 storage pool with dedicated hot spares is about as fault tolerant as you're going to get. If you want more speed, VNX2 are also a decent option, as 16Gb SLICs are available for them, though you would also need the more expensive Brocade DS6510B if you wanted to manage your fabric at full throughput. Keep in mind that a switch between your array and your host is not a requirement, merely a convenience for future expansion. Also keep in mind that you can totally use direct attached copper (DAC) cables for short runs up to around 8-10m without needing to mess around with fiber optics for 10Gb Ethernet, if you decide to use iSCSI instead of Fibre Channel for your connectivity. Then you can pick up a Cisco Nexus 5k for your switching and be set at 10Gb speeds. DAC cables can be a surprisingly cost-effective way of connecting your array or switch to nearby devices.
Good to see something that is not QNAP, Jellyfish or 45Drives once in a while. Some good ol' selfmade server stuff. I approve!
Going with a *paid* Linux/NAS OS, however....(something about that is distasteful!)
@@mdd1963 at least that money trickles down to bug fixes in upstream stuff. Like Steam making Wine better because they're using it.
Seriously! All of a sudden every YouTuber was making videos about how the Jellyfish changed their lives!
@@cllamasful what the hell is this 'jellyfish' ? :)
nm, I found it...; they still don't really explain what the OS is based on...; seems like just a Mac-oriented (overpriced?) NAS solution...?
Am I to understand that Jerry-Rigged that computer to work?
"You think we could put a Minecraft server on it" -Jay at 18:00
FreeNAS and 10gbe problems ... I smell Aquantia NIC chips :D
Yeah :) He could have just bought a used Intel X520 off eBay for $40, but heyho... Looking forward to the "unraid ate all my data" video ;)
yes, that’s true
Could have just got a Chelsio card 🤔
Peter Pain
That sounds like a good plan. I did not think of eBay for those cards.
Yeah, you really want Intel NICs for FreeNAS if you don't want to dig through the compatibility list. Though someone did comment on Twitter that there's a WIP driver for the Aquantia NICs now
Best part about what you guys do is making people feel like they are already friends with you. You guys do a great job of drawing people in and making them feel like they're there with you!
"this is not a tutorial" that explains about 85% of his videos lol kidding
@Kevin Prima I was kinda making a joke over all his videos lol
Oh man, the absolute diversity of "tech" in your channel never stops being interesting! Overclocking/cooling experiments, car stuff, linux/network nerd stuff. Love it all!
"overkill"? Doesn't Linus have a NVME storage server?
Seems like you should still use that hardware for video rendering or video ingress on a VM.
NVMe fast-storage server, watercooled rendering server, petabyte server... and he's done a few other YouTubers' servers...
You guys sound smart. Be my friend!
Overkill for his needs. The LTT servers are spec'd out to support the simultaneous use of 6 editors plus various writers, so comparatively they aren't as overkill, since overkill is the level beyond what's required to get the job done.
@@markusr3259 True, but he called it the most overkill server
Linus is in a totally different league from Jay. He has an army of writers, camera operators, editors, and technicians to do the work for him. Jay is still doing a lot of the content himself.
12:40 Cat5e is technically not rated to handle 10Gb Ethernet. You could get away with it in your small office, but it's not technically rated for it. The distance you saw, approx 40 meters supported for 10Gb, is on Cat6 at 250MHz. Cat6a is rated to handle 10Gb up to 100 meters at 500MHz. The biggest benefit you get from Cat6 is the reduction in crosstalk, which will benefit you when you are hitting these things on a single NIC doing file uploads and accessing a game server... The difference between knowing what you are saying and saying it because you read something is vast
Sounds like a "well technically" to me. If it works, it works, right?
@@JohnDoe722 Yes, true, but it was more about him trying to prove his community wrong about Cat5e when they "looked it up" but were still wrong. Like I said, for what they were doing, yes, "it technically worked" as you say, but I was correcting what he was incorrectly trying to pass off as truth
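The ratings being argued about here fit in a tiny lookup (figures match the comment above; treat them as nominal planning numbers, not guarantees):

```python
# Nominal 10GBASE-T reach per cable category (metres) and bandwidth (MHz).
# Cat5e has no official 10GbE rating; short, clean runs often work anyway.
CABLE_RATINGS = {
    "cat5e": {"bandwidth_mhz": 100, "max_10g_m": None},
    "cat6":  {"bandwidth_mhz": 250, "max_10g_m": 40},   # spec allows ~37-55 m
    "cat6a": {"bandwidth_mhz": 500, "max_10g_m": 100},  # full 100 m
}

def rated_for_10g(category: str, run_m: float) -> bool:
    """True if the cable category is *rated* for 10GbE at this run length.

    A short Cat5e run may still carry 10GbE, as in the video; it just
    isn't rated for it."""
    limit = CABLE_RATINGS[category]["max_10g_m"]
    return limit is not None and run_m <= limit
```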
Jay!! A revisit on this would be great!! Keep up your awesomeness!!
This video makes me realize how much I miss Tech Talk. Please try to fit it in your schedules, if not, I wouldn't mind one with Phil.
Need to add something here: I had FreeNAS running on a pretty low-powered server, a dual-core i3 with 4GB of RAM. Adding plugins and shares is really simple, and getting a VM running is even easier. I haven't used Unraid, but for me FreeNAS was flawless
I've had Unraid running on a 4-core i5 with 8GB of RAM for almost 2 years. I haven't had any major problems; the ones that did happen were my own fault :) The Docker containers and apps that are available are amazing. The only downside is that you're limited to the read speed of a single HDD, but that isn't a problem on a 1Gb network. It is a problem on Jay's 10Gb network.
Hey Jay, I work at 45 Drives. You should definitely go all out and set up a real mean NAS :)
13:40 refreshing to see that smooth speed over that cat5e cable
Linus trying to hold back tears : That's ok.... I unders... I understaahahahahaaaand 😭😭😭😭😭
I've been using unraid running on a Dell T310 server for about two years now. Works awesome! I use Docker and VMs on it quite a bit. Also, I have done the same thing with Steam games: I just mounted the SMB share as a drive on my computer, then pointed my Steam downloads folder to the new mapped drive and boom!! Instant games!
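For anyone wanting to try the same Steam trick, a minimal sketch of the Windows side (the server name `tower` and share name `games` are placeholders; adjust to your own setup):

```shell
:: Map the NAS share as a drive letter (run in cmd on the gaming PC).
:: \\tower\games is an assumed server/share name, not the commenter's.
net use S: \\tower\games /persistent:yes

:: Then in Steam: Settings > Downloads > Steam Library Folders,
:: add S:\ as a new library folder and install/move games there.
```

Note that game load times will then be bound by your network and the array's read speed, which is why the 10Gb link matters here.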
5:44 He should definitely do that 😂
I built myself a NAS in a Fractal R5, been super happy with it. I added fans to the back of the cage to help pull air out, and even in heavy sustained writes my drives only hit about 36°C.
Cool video Jay!
5:38+: "It's the fact that I don't have to wait for steam."
Me: thinking of the days (2004ish) when Steam was really slow at updating itself and was often just completely useless.
It’s great seeing Barney boi smiling and laughing. I’m happy he’s doing good.
Ditto! (Was something up or going on with him? I quit following after all his releases were on goofy gaming stuff and other generic shit that did not interest me)
Hey, I'm a simple person, I see Jerry in the thumbnail I click the video
I love unRAID so much. I'm an idiot and I got it working, and there's so much community support for it. I didn't know what Docker was before unRAID but I learned and it is so convenient. I replaced several VMs with 5 or 6 docker containers and it uses so much less system resources.
The speed burst was filling up the cache in RAM, then the speed of writing to the SSD.
Yup, because UnRAID writes to individual disks, not striping across the array. They will need to copy a file bigger than the cache drive plus RAM to see the true speeds.
@@amalgroki How is it "the true speeds" if it will never appear in real world usage?
@@amalgroki - ok, but do you know of a file that's bigger than that 1TB Samsung he's using as his cache? That's a lotta cache space...
@@Broadpaw_Fox I acknowledge that, but he claimed the speeds we saw were from the hard disk array, which is not true.
From what I could tell, speed both ways are/were limited by the local SATA SSD on Phil's computer, not the NAS cache... (?) This is odd behavior though. Unraid should cache inbound writes to RAM first by default. There's plenty to go on, it could easily take the whole file in RAM, but I don't see any RAM (cache) utilization increase when writing *to* the NAS. Is this because the NAS SSD cache drive is able to write as fast as the "client" SSD can read, and thus there's no build-up of incoming data in RAM?
Going by this setup, with the array not being striped, and their suggested file sizes they work with, NVMe storage on the clients should maximize their usable transfer speeds. No need for NVMe in the NAS, with 30+ GB of RAM available for write caching.
Really nice to see em together 👍
"NIC Card" eye twitch.
Absolutely love this type of content!!!
I bet you FreeNAS will support that card after this video. lol
yeah, i doubt that.
@thegeorgezila Eventually. FreeNAS always takes time to update the FreeBSD kernel.
Drive temps aren't going to change during a file transfer to the NAS, because that file is going to your cache SSD, not the hard drives. There is a "mover" that runs every so often and moves files off of your cache and onto the hard drives. Then, when you access that file, it will be read from the drives. That's also why you can add as many SSDs to the cache as you want without reconfiguring anything: the cache is just drives that get uploaded to before the files are moved to the hard drives. Adding more cache won't increase performance unless you go over the 1TB of uploads your current 1TB SSD cache can hold. The exception is if you RAID the cache drives together; that will improve performance. Simply adding more cache drives as single drives won't.
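If you don't want to wait for the schedule, the mover can also be kicked off by hand; a sketch, assuming a stock Unraid install where the script lives at the usual `/usr/local/sbin/mover` path (verify on your own box):

```shell
# Run Unraid's mover manually from a terminal/SSH session.
# /usr/local/sbin/mover is the usual stock location on Unraid 6.
/usr/local/sbin/mover

# Watch the cache drain and the array fill as files get moved, e.g.:
df -h /mnt/cache /mnt/user0
```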
Wow, a tech youtuber without a storinator
fortunately
So good to see you guys back
Jay: Most overkill NAS ever
Linus: Am I a joke to you?
Google/Amazon/FB respond: YES
@@Damicske NASA/NSA/DoD respond: "Don't make me pull the car over!" lol
"With those socks and sandals Linus, yes".
I love when you and Jerry get together. It's magic.
lol i see that juul charger on that tower...
+1 for unriad. Been running it for about a year and a half. Absolutely love it and how user friendly it is to get into.
For the hardware junkies out there:
Dual x5670s
72GB RAM
8TB white-label Red drives for the array, and 1 for parity
1TB NVMe cache
FreeNAS isn't as complicated as described. Running it under ESXi with an HBA passed through to a FreeNAS VM is an excellent alternative.
This is exactly how I'm running it. FreeNAS runs very happily in a VM with a dedicated HBA passed through; I have yet to encounter a single issue!
Honestly, what you just said is more complicated than anything I saw in the video. Granted, I've never set up a NAS and I don't see that changing anytime soon, so maybe it actually is easier than it sounds.
@@TurinAlexander Installing ESXi and FreeNAS is almost next, next, next. If you want a more advanced setup you should ask someone, but it isn't that complicated.
And so much faster than unraid!
@jayztwocents Just wanted to thank you for all your videos. I'm working on a new hardline liquid-cooled PC and your videos have helped me a ton.
"NIC card": every time Jay says it, a network admin cries.
@Kevin Prima Honestly I think he does it to mess with us now.
Crying... just like "ATM machine".
One thing you gotta keep in mind @JayzTwoCents: mechanical hard drives are really dense, so they take much longer to heat up. You likely won't see any change if you're only pegging the drives with a 20- or 30-second read or write. When my FreeNAS rig (8x 2TB drives with 3 parity: 10TB usable) does scrubs twice a month, it may take an hour or two for the drives to start climbing above 40°C. I have a fan control script that grabs the temps every five minutes and adjusts the fans accordingly.
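The temp-to-fan logic in a script like that can be sketched as a simple step curve; a minimal example, where the thresholds, the smartctl line, and the `set_fan_speed` helper are all illustrative assumptions rather than the commenter's actual script:

```shell
#!/bin/sh
# Map a drive temperature (degrees C) to a fan duty cycle (%) via a step curve.
fan_duty() {
  temp="$1"
  if   [ "$temp" -ge 45 ]; then echo 100
  elif [ "$temp" -ge 40 ]; then echo 70
  elif [ "$temp" -ge 35 ]; then echo 50
  else echo 30
  fi
}

# In the real script this would poll every five minutes, roughly:
#   temp=$(smartctl -A /dev/ada0 | awk '/Temperature_Celsius/ {print $10}')
#   set_fan_speed "$(fan_duty "$temp")"   # set_fan_speed is hypothetical
fan_duty 42
```

Running the last line prints the duty cycle for a 42°C drive; hook the commented-out poll into cron (or a sleep loop) to make it continuous.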
“I wish I knew how to set up windows”
-Jayz TwoCents 2019
Glad to see Jerry alive and well,
This makes me want a server for literally no reason.
Budget Jay - gives you a link to amazon where you can search for the items used yourself
2 million congratz!!!
Video was uploaded 1 minute ago, and there are "Cool" comments
Cool
Cool
Cool
Cool
Cool
Barnacules looks good. About time to watch some vids of him again.
Question: Any reason you guys aren't doing both the NAS with games AND doing a Game Cache?
Cheers, for the video guys. I like seeing an unraid setup. I use FreeNAS but I always like to see what the options are.
JayzTwoCents, "I need to book an airline ticket." Airline, "Sure, what do you need?" JayzTwoCents, "Do you have any 737Max flights?"
Great to see you, Jerry!
Lol 😂 big tech youtuber did not know modem > router > switch baffles me
These videos are the best. It's like a mechanic trying to explain the physics and math behind the internal combustion engine. No different than Linus smacking his head up against his servers.
Builds a NAS to store videos of jay playing battleship that is sitting on the shelf in the background
I love Jay and Jerry videos, We need more 😫
I'm so happy that you two are hanging out again
*Phil dies in background* merch plz!!!
I used to casually watch Barnacules' videos a ton, but I haven't seen one suggested to me in years. I am now subscribed. I miss his videos; glad to see him.
Unraid rocks! Great video. I have an UNRAID build on my channel too; great OS.
Imma check it out later :)
@@Brooks__EU ty! ❤️
Jayz & Jerry, thanks for reminding me of my woes back in college whenever I walked with my friends, who just happened to be super-nerdy, annoyingly geeky, and frustratingly tech-savvy! They'd wind around all afternoon only to finally agree, at midnight, on the same things they were arguing about in the first place. Still love them though. At least people thought I was a smart kid just because we walked together 😑😑😑😑😑😑😁😁😁😁😁😁
If you're still measuring in terabytes, it's not overkill...
#humblebrag for Linus
The ending is pure GOLD!!!
No Jay, it's CAT6 that can go up to 45 meters or so. For up to 100 meters you need CAT6A cable and ideally, proper CAT6A jacks.
And yet.... we had 10G....
@@Jayztwocents Yes, you did. How good of a 10g though... You didn't actually count the corrupt packets, the re-sent packets and so on. For "brute force" stuff like single transfers it's fine... tcp deals with re-transmission if there's errors.
Also, it's just ONE network cable between two computers in a relatively "clean" environment (somewhat empty warehouse, minimal "noise"). In a bundle of network cables going through walls or pipes, signals from one cable would affect others.. or if the cable is routed between other computers, or behind a network rack ... or if you route the bundle of cables by some electrical cable or fluorescent lightning or some microwave ovens, you could experience lost packets... and so on.
The standards recommend Cat6 and Cat6a as required for signal integrity at that maximum distance.
He only went to 100ft (about 30m).
@mariushmedias While that is what the specification states, in real-world conditions, with good-quality cable, low interference, and typically under 50m, Cat5e can do 10Gbps. What you have to understand is that the specification needs to allow for various conditions: lower-spec cable that still meets specification, server rooms where lines come in bundles alongside racks upon racks giving out EM interference, and of course electricians who think they know how to terminate patch cord but wouldn't know the difference between a Cat5 and a Cat6 termination if it bit them on the ass. While I wouldn't rely on Cat5e giving a constant, no-drop-off, no-packet-loss connection at 10Gbps, it can be done.
@@backupplan6058 What it really comes down to: if you've already got at least Cat5e in place in an environment similar to the one here, it will probably be OK, but if you're going to be running new cable, go with 6a, or in Jayz's case seriously look into fiber. Personally I've always hated working with cable (particularly Cat6+), and would never trust my klutzy self to do a significant run of fiber, but I still went ahead and redid all the Cat5 in the house with Cat6a (actually got a contractor to do the longer runs between floors). As for Unraid, it looks like a neat Linux-based system, but after running a few other specialized nixes (including FreeNAS, pfSense and RouterOS), I've come to the conclusion that going with the latest standard release of something like CentOS or Ubuntu Server whenever possible is almost always going to result in the fewest regrets. But then, as a mostly Linux sysadmin for the last 20 years, I'm a glutton for punishment. Enjoyed the video immensely though. It's always good to see other people struggling with this stuff and coming out victorious.
love these two guys together
You have achieved new milestone:
*Building Nas without Linus*
Greets from the UK. Great insight keep it coming Jay.
complains game takes 30 mins
me: have you ever waited 5 days for a game?
Jay and Jerry together is a riot.
The most overkill NAS ever? Come on now.
Probs just from his perspective
I've also seen a lot more overkill NAS' than this thing...
I mean, it's dope, but definitely not worthy of the title "most overkill" XD
It's all relative. A 1TB NAS is extremely overkill for an infant.
@@Coalafied but by that means, my current game PC is "the most overkill gaming PC ever"...
Spoiler alert: it has an i5-4460 with an RX480...
@@Coalafied I bet I could get my little cousins to fill a 1TB HDD with cartoon stuff
So happy to see Barnacules back in action. Don't leave us again!
next up... watercooling hard drives by JayzTwoCents
You kid, but... www.xtremehardware.com/forum/topic/2610-progetto-impossible-modding/?do=findComment&comment=38471
Those 2 blocks went in 5.25" bays, kind of like an adapter to cool the drives from the sides.
At the time it seriously made sense to cool HDDs in such a radical way.
Says it's not a tutorial, and yet I've learned so much about setting up a NAS via UnRAID that I didn't know already. Thanks, Jay!
Bring back Tech Talk, aka Jay and Jerry talk about random shit
Just a couple of corrections...
10:50 When you're copying stuff to UnRAID, it doesn't spread it out like a traditional RAID or ZFS; it writes contiguously. It only writes to the next disk depending on your allocation method.
14:30 With UnRAID and a cache added, it's not hitting your drives in real time. It's on your SSD, and there is a timer job that moves data from your cache drive to your array at scheduled times during the day. The speed limit you're hitting of ~500MB/s is the speed of your 860 EVO. If you use an M.2 NVMe drive or RAID a second SSD, then you should effectively max the 10Gbps on writes. You will typically be limited in read speed to the speed of a single drive, though, which is the downside.
As for the Aquantia cards, you could have run a Linux distro like Ubuntu Server or Proxmox and used ZFS on Linux, with something like Webmin and the ZFS plugin to give you a UI for configuration. Then you would have had all the benefits of the ZFS filesystem, with a much more robust hypervisor (KVM) for VMs than FreeNAS has. UnRAID is a good all-in-one solution, though, for simplicity of setup and management if you aren't needing the read speed.
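For anyone curious what that alternative looks like, a rough sketch on Ubuntu Server (the pool name `tank`, the `/dev/sdX` device names, and the raidz2 layout are all placeholders, not a recommendation for Jay's exact disk set):

```shell
# Install ZFS on Linux and build a double-parity pool from four disks.
sudo apt install zfsutils-linux
sudo zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create a compressed dataset for media, then check pool health.
sudo zfs create -o compression=lz4 tank/media
zpool status tank
```

Unlike UnRAID's per-disk writes, a raidz pool stripes across the disks, which is where the read-speed advantage mentioned above comes from.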
Very good points !
Jay, nobody except Cisco suggests 40GbE anymore; 25/100 is the way to go.
San Man, what's 25/100?
@@blakethomas3169 SFP28
San Man gotcha
Good to see you guys together again.
Great video, but if you are still measuring in terabytes it isn't "The most overkill NAS ever" :O
It’s overkill based on our needs obviously
Fucking hell man, just imagine telling someone from the 90's you have 100 terabytes of storage over a 10 gigabit network connection and watching them laugh in disbelief. Makes me wonder what kind of shit we'll see in 2040.
@@X4Alpha4X too true. A 90's person would think you confused TB and Gbps with MB and Kbps.
@Kevin Prima I honestly don't think 1TB drives existed in the 90's. Maybe in a massive server or supercomputer, but the largest drive in 1990 was less than 1GB, and the PATA limitation was 128GB until 2002.
Many supercomputers of the era had no more than 6GB of total system memory, so I'm sure drive space was rather short too.
@@Jayztwocents PLEASE! Can you post the model numbers of the SATA/SAS controller and the NIC? It wasn't clear in the video. Thx
That's the shit we need to learn, Jay. Keep up the good work, bro.
I've used UnRAID for years, and it's come a long way from version 5.4
Haven't seen Jerry around for a while.. great to see him back.. what a great guy is Jerry.. both you guys are tops..
Holy Barnacules.
I like that this video is now on the frontpage of Unraids website.
It's funny you call Jerry the king of delayed projects because he's actually the king of premature.....
This was a really good video Jay.. Thanks
I'm genuinely concerned for Barnacules' health. He doesn't look like he's making any effort to, like... make his own life better. He doesn't look good. :\
UnRAID is good for flexibility, but not performance. It doesn't stripe and there's no read caching, so reads from the array are limited to the speed of a single disk. The cache is essentially a write-back cache unless you pin the data to it (set the share to cache-only). You got good read speeds because the data was still in the cache; there's a mover that moves the data to the array on a schedule. I also think I've read that you can have corruption issues if you try to use SSDs as array drives. You can mix and match disk sizes in the cache if you use BTRFS. That's how I set mine up.
Last I heard, BTRFS was still not considered "solid" on any RAID beyond 1 or 0. (Even Red Hat finally dropped BTRFS last year; not a very good sign of a promising future in the kernel...)
@@mdd1963 Yeah, I think it's pretty much dead at this point. I think some people were reporting corruption issues on the unraid forum, but I'm not sure what RAID level they were using with it. I've been running it in RAID 1 for a few years now with no issues.
Don't use FreeNAS if you don't have ECC RAM. You should also dedicate that box to just being a NAS, for data integrity.
Great video, Jay! I haven't watched your channel in a while tbh. I'm happy this is the video that got me watching again!
LMG down-thumbed this video due to Linus not being invited; well, Linus forced it!
Hey Jay, make sure jumbo frames are enabled on both ends (server and client) and you should see a good boost in sustained network transfer speeds.
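A sketch of what that looks like on the Linux/Unraid side (the interface name `eth0` and the IP `192.168.1.10` are placeholders; the Windows client needs the same MTU set in its NIC driver properties):

```shell
# Bump the NIC to a 9000-byte MTU for jumbo frames.
ip link set dev eth0 mtu 9000

# Verify end to end: 8972 = 9000 minus 20 (IP) and 8 (ICMP) header bytes;
# -M do forbids fragmentation, so an undersized hop fails loudly.
ping -M do -s 8972 192.168.1.10
```

If the ping fails, something in the path (switch, client NIC) isn't passing jumbo frames, and mismatched MTUs can hurt more than they help.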
Firefox needs 64GB of memory? Are you sure you're not mixing up Firefox with Chrome again? Lol
Good to see Jerry back in your videos! :)