Jeff, ya dummy!
Nobody's gonna run one of these things in production!
Haha not only topically relevant, but FIRST!
40th like
:D subbing, twice :)
@@JeffGeerling paying license fees? lmao
You lost me at licensing :) If this becomes a brick when you don't pay license fees, that's a problem. I remember Datto doing that to many customers. Their bare metal restore servers were literally bricks.
No different from cloud services and SaaS. I've been told that's what people want these days.
Also NetApp storage, which requires licensing in order to use their hardware with protocols like NFS, iSCSI, and SMB/CIFS.
I use 3 Datto servers as my Proxmox cluster XD
2 use standard ASRock mobos, 1 uses a modified ASRock mobo that took an ASRock mobo BIOS chip
What world are you living in? Anything commercial and enterprise-class is subscription-based. It's just the way they make money and keep the development cycle alive.
@@PeterRichardsandYoureNot the development of Ceph itself isn't funded by that subscription. The hardware itself is a bit meh (hot-swapping disks is not possible) and the software itself is just a wrapper around Ansible + cephadm.
I bought your Ansible book in 2018, which, in effect, got me to the job where I am now. There, I'm responsible for a giant Ceph cluster with 1250 OSDs. Enjoy your time with Ceph, it can be scary software when something goes sour :D ...and also a big thank you for helping me become a cloud engineer!
Send money not words
So any use of this "appliance" requires paying ongoing license fees, even after an initial subscription? If I am correct, then the forever-pay aspect of cloud storage is an integral part of this solution too. The concept of fully owning your own hardware, inherent in a Raspberry Pi cluster, must not be forgotten or lost.
Ceph is my fav for homelab - and now I have a small cluster at the office too! Finally some love.
Jeff, Pi clusters aren’t crazy. The people who build them are. I should know. I have two of them. Keep the posts coming and stay well!
Takes one to know one lol
The only thing that's crazy is the prices. This hardware isn't in my budget anymore; it only gets more expensive.
As a "very large utility company" we had a 24 pi cluster doing predictive math for peak shaving.
Shame Sony took that away from the PlayStation. Oh well, their loss.
Could the work have been done on a GPU using OpenCL, or was the job something branch-heavy, necessitating the use of the cluster?
Wow. I was introduced to Ambedded in 2018, back when the Ambedded founder's husband was teaching me OpenShift & Ceph. He told me that his wife was starting an ARM-based Ceph cluster in a box. At that time, I believe it was using Cortex-A7 or A53 cores, but with almost the same rack chassis and 8-node config.
In 2019 I met her at Computex in Taiwan, showcasing the Mars 400.
This chassis is that! And it's pretty neat! Would love to see them build a new 1U clustered model with more speed.
Funny story. A couple years ago I wanted a NAS, but I wanted it open source. I was thinking of a TrueNAS build until a Jeff on YouTube mentioned "Ceph cluster". The video of your build wasn't even out yet, but I was off to Google to learn about Ceph, and before long I decided to build a 3-node Ceph cluster out of small PCs for my home lab. So far so good!
If you have a 4th node in the cluster, in the event that one node becomes unavailable -- intentional maintenance or unexpected node failure -- you can still operate in a cluster state (three nodes) that can absorb a single node failure.
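In Ceph terms that behaviour usually comes down to the pool's size/min_size settings. A minimal sketch, assuming a replicated pool with the default per-host failure domain (the pool name and PG counts here are just placeholders):
ceph osd pool create mypool 128 128 replicated   # create a replicated pool with 128 placement groups
ceph osd pool set mypool size 3                  # keep 3 copies of each object when healthy
ceph osd pool set mypool min_size 2              # keep serving I/O as long as 2 copies are still up
With size 3 / min_size 2, losing one node's OSDs degrades the pool but doesn't block clients, which is the resilience described above.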
The most complex raspberry pi setup doesn't exi.......
This is above my pay grade.
Love to see how much you progressed on your Ceph journey. Keep up the good work!
I love watching Jeff videos as it reminds me of the challenges I faced when writing acceptance tests for 5ESS-PRX switches in Saudi Arabia 40 years ago. They ran UNIX RTR and if one of the two main processors failed, theoretically only one call was lost. Never managed to prove that!
Favorite quote today: "....but my home network is only 10GB..."
Hehe... someday I'll upgrade to 40 or 100 Gbps...
To be fair, 10Gb switches can be picked up for under $100 nowadays, NICs for $50. It’s super accessible if you need it to be.
@@TheOfficialOriginalChad My wife (sorry, I spelled BOSS wrong) uses the network for browsing on her phone. She sees no benefit to 10GB.
I hope I can sell her and get enough to pay for a switch! HAHA
@@ScottPlude 😂
Firstly I just want to say I love this. It is awesome to see more ARM clusters. That being said, starting at $3,500 this feels a bit lacking. Especially the hard drive mounting, which seems super clunky. The other thing that kinda grinds my gears is that the documentation is "license"-based, where you need to pay for support to get their documentation. Paying for actual support I understand, but gating the manual, FAQ, etc. seems a bit excessive.
Can't disagree here. Also note this machine was launched in 2019 I think. It's been a while and their newer boxes are a bit friendlier.
@@JeffGeerling Speaking of ARMs, has anyone noticed yet that you have three of them?
We opened up a vendor's box and there was an RPi 4 compute node in there, doing all the work
The Mars 400 really makes for a textbook example of why industrial ARM has taken so long to take off, and why Ampere has done so well in contrast. It has no resale value, the company behind it has no real reputation to speak of, and they gave it a heavily used product name; it has no refurb or reapplication opportunity, and it's running on closed, non-redistributable blobs with minimal oversight. It's an expensive e-waste hulk to dispose of the moment Ambedded decides to drop support in any form or fashion, and by extension a ticking time bomb for any business that decides to rely on it for mission-critical tasks, not to mention a security risk. As a CSE, I wouldn't want that thing anywhere near any network, cluster, or farm I'm responsible for.
It's running basically stock Ubuntu + Ceph. You have full access to the innards of their software. It happily runs several other stock Linux deployments if you want to reuse the box.
Our next closest option at the time was over a quarter of a million quid and ran proprietary software.
It could be the same for any other company... Take Cisco and their $%$&%y HyperFlex product line; my last job installed it ~3 years ago, and Cisco announced this year that they are dropping it completely. And that product is so proprietary it's not funny... to the point where you can't do half the standard VMware management without using their backwards and mostly broken services. Add to that the fact that unless you have active support with them, you don't even get root access to the services on your own network (they recently changed that, but *unless* you rebuilt or fresh-installed, it wasn't easy to get the password).
That's sexy gear. I wish more people knew about these ARM solutions. I know people in IT maintaining tons of servers who have no clue what an ARM SoC is. Many of their tasks could be offloaded to low-end to mid-range ARM devices consuming about 10x less power. But changing somebody's views takes time. This video can help for sure. 👍
And this server is actually an older model-the newer ones are much faster, and even more power-efficient.
Is it though? No SAS. Very poorly designed drive bays. License(s?). More drive bays than you can use. And 5 rusty spinners are going to give you maybe 800MB/s sequential writes. Doesn't seem very sexy for a base price of $3500 plus licensing.
@@sazma Why SAS? Each node has internal Flash, an M.2 and possibly an HDD or SSD. The aggregation is at the network layer. It's a textbook Ceph deployment with lots of small servers handling a smallish amount of storage each. The only major issue of this is slow rebalancing times. You can load 8 HDDs on them depending on the role of the specific node. Since this is a single chassis I would guess that the three missing HDDs are the bays for the management nodes. If you have the recommended minimum three chassis then you'll have one HDD missing per chassis.
I agree it's not full enterprise grade kit. But it fills a niche that a few years ago nobody else supported.
@@kevinthorpe8420 Because it's Enterprise price. Perhaps you're right about 8 hdds, but from what I understood from the video, 3 supervisor/mon/whatever are suggested, so that seems like a possibility rather than a likely scenario. It's a fun tech demo, for sure, but it's definitely WAY overpriced and FAR better solutions can be had for FAR cheaper.
@@sazma Claiming it can be had cheaper/better somewhere else without proof is just moving hot air.
That's such a great little 10Gbit switch from Mikrotik!
I think your videos are very informative... so I would never call you a dummy... just tease you about hoarding all the Pis... lol
How much of the total Rpi in existence does he have?
What if it's like 10-15%?
I realize that's probably not true, but it would be interesting to calculate.
@@Roy_1 Heh, judging by the millions that are out there, it's less than .0000001% :)
I'm not even in the triple-digits! (And still well below 50 total)
@@JeffGeerling Fair, I think a lot of us honestly think you have like 150 of them, which in hindsight seems a bit unrealistic haha
Jeff takes the " You will never know till you try" And I love it. Instead of asking why he asks why not and tries it. That's my type of fun!
@@JeffGeerling You know you have hoarded all the CM4s, you just won't tell us 😅
I ran Ceph on old Dell gear years ago. Very cool FS. A 10Gb network is required to get decent IO out of it.
I ran Ceph on my 7-node Proxmox cluster a few years ago. While it worked well, it had performance issues whenever it was re-balancing. From what I understand, Ceph has recently gotten a lot better at that, so I may give it another try on my backup Proxmox cluster at work. Ceph is great for what it is, but you really have to know its ins and outs to get the most out of it, which is why companies like this offer support contracts.
It's nice to know I'm not the only one that _still_ forgets sudo sometimes.
It's been 30 years, I'll remember eventually! :D
I think my command history is like 90% up-arrow, and 9% 'sudo !!'
@@JeffGeerling lolol! Command history is such a wonderful gift to the world. I can't remember how many times I've just grepped the history for that command I did 2 weeks ago!
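For anyone newer to the shell, the two tricks being referenced are roughly these (the search term is just an example):
history | grep 'ceph osd'   # dig up that command from two weeks ago
sudo !!                     # re-run the previous command, this time with sudo
(Ctrl+R reverse-search does the same job interactively.)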
The lack of vibration dampening is a concern in any data center. Vibration can be a major issue when you fill out a rack if you're still using spinning disk media. However, it might not be significant if you only have a single 1U device with 8 drives. When you scale it up, that's when you're going to have problems.
It's a good reason to only buy Exos or other rated HDDs; they are built for high-vibration environments. And racks can (and should) have a little isolation... but they still have vibration induced by fans, other drives, etc.
No, that's a very small problem.
A much, much bigger one is the lack of hot-swap. ANYTHING down in that server and you're in for a server-out operation.
So the entire benefit of having 1 OSD per server (not having to worry about multiple OSDs going down when a server dies) is entirely moot here, as you need to take it out to do even a simple disk swap.
Like, you could risk doing it while it's on, but that needs those rails that allow the cables to stay connected when the rail is fully extended... and that's a PITA to cable and has shit airflow.
Also, having the option of hot-swapping (even if it was a pair at a time) would also solve the vibration issue, as those trays almost always have some elasticity in them...
@@xani666 Nope, as long as your cables are long enough you can access the cabinet to change hard drives, M.2 drives or even an entire node without a power down.
@@kevinthorpe8420 we had a few servers mounted like that, but it's just extra work to install, it also blocks airflow, and all of it makes servicing more difficult and time-consuming. It's not worth it to save a few bucks, and looking at the price of these you won't even be saving all that much.
You can get, say, an S5TH | D52T-1ULH for similar money, but with a drive shelf built into the server and 12 slots instead of 8 in 1U.
@@kevinthorpe8420 yeah. You can have cables long enough to slide it out. However, as mentioned, it's a PITA to cable manage, and it hinders airflow at the back of the rack. Congrats, you solved the storage access issue with even bigger issues. A solution like this is fine in a 1U chassis IF you have an identical setup in another rack ready to go to switch over when SHTF. Preferably three identical ones. That way you still have redundancy when one is down. Think Kubernetes. Think etcd clusters. Think anything HA.
I would imagine the dual 10Gbps ports out the back are so you can connect them up with multi-chassis link aggregation to the upstream switches.
Also, network math within the device:
8x nodes with 2x 2.5GbE ~= 2x 10GbE
This is a really cool storage appliance for a homelab! I didn't know that I needed one until you reviewed it
And thanks for the free eBook!
We've been running those for a few years. They do what it says on the box. At the time we started this journey there was nothing in the price range to touch them, or the space (8 nodes in 1U), or the power (~100W per chassis). We've started having issues with hardware failures now that they're older and we've overloaded them, so we've had to move some of the Ceph nodes (index nodes, not storage) off this hardware, but it still works. Our biggest problem by far is that support in a UK time slot is not easy to manage. If they had a support centre in Europe then I think we'd be much happier.
"At the time we started this journey there was nothing in the price range to touch them" And that right there is why some people don't understand the price isn't actually expensive. Just the power draw alone pays for itself within a year compared to running multiple Ampere platforms. (Which weren't really available at the time these things were first a thing.)
I swear those who complain about prices have never actually worked in a medium to large business to know what those budgeting reports look like. I see this same thing with professional camera equipment from hobbyist users. Complaints about how a camera body costs $6000 or a lens costs $10,000.
A few grand for a piece of equipment is pennies on the dollar when compared to wages at home. Not to mention that money spent makes money. And many (most) companies don't have shoestring budgets. They actually have profit margins worth a darn.
This channel has opened my eyes to low-power computing.
I now have about 10 "compute devices"... nothing above 65W.
I am going to pick up 2 Pi 5s to replace 4 of those devices.
I hope one day to have everything running off battery and solar.
You make me want to try this for myself! Now THAT'S awesome influencing! Mahalo for your videos!
Finally! I was waiting for something like this again.
How do I get extra arms, Jeff?!? - Reminds me of a "Rick & Morty" episode on "Gazorpazorp"!!!! LOL - Love You, Sir!
Link aggregation is still worth it sometimes! Even though the core of your network is only 10GbE, having the storage linked to the switch at 2x (or higher) means that a single node on the network can only ever consume 50% (or less) of the storage's total available throughput, which is good at least for multi-user access. In your case it's maybe not worth it, but I just thought I'd mention it!
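For reference, a minimal sketch of what an LACP bond looks like on the Linux side with iproute2 (the interface names and address are placeholders, and the switch ports have to be configured for LACP too):
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0
Note that a single TCP stream still tops out at one member link's speed; the aggregate only helps with many clients, which is the multi-user point above.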
I don't even know what ceph is but this video is fascinating 0oO
Nice video. Though I don't work with Pis, I find the whole cluster concept fascinating.
Thanks Jeff. This video is informative and enjoyable. I know Ambedded is partnering for marketing purposes, but you don't make it a sales pitch. Great tech talk and honest reviews/testing. These kind of videos are why I have you in my notifications.
Yep! They were kinda surprised I wanted to test their hardware. I mean, I would be too haha :D
It's not like the common homelab or Pi user would buy enterprise storage appliances! But I thought it was a unique enough machine, and had some interesting parallels to the Pi clusters I build a lot, and they were game!
They're a neat company, and I know the engineers there do solid work :)
I really need to spin up an ARM powered Ceph cluster of my own when I'm finished with the Switchwire...
Definitely! It's gotten a lot better-I remember a few years ago it was a struggle to get it running. Now it's almost on par with the x86 experience, and you fight with the normal things you fight with in all clusters, instead of incompatibility issues.
@@JeffGeerling Hopefully it won't generate enough heat to melt the 3D printed server rack it's in. I really should have used ABS instead of PLA for it...
@@socketwench Sounds like it's time to prep another rack... ;)
you are living a week in the life of some of my testing! fun surprises sometimes.
☺️😌 he showed my blog diagram about Ceph 👍🌷 I’m so honored
Thank you for publishing it!
It's amazing how well done your videos are, I always learn something. And it's just so much fun to watch, not the least because you have a great talent in sharing knowledge.
Sick video Jeff, very nice layout, wonderful design.
I maintain our company's Ceph storage with >6TB and I'm confused about the "disks per host" ratio in this setup. There is a sweet spot with disks per host and sites. I would be scared to run and maintain this in production.
I will have to look for the previous video where you presented that six-CM4 compute module motherboard. Looks interesting. I would like to learn Ceph just for fun and curiosity. Anyway, this 1U server looks promising, but for a company the version with hot-swap drives that you can easily change on the fly is much more convenient. Of course it depends, and this is just my private opinion. Thanks for sharing.
I understood about 4% of that. 4.5% tops. But it's a pretty box.
Honestly if there's one takeaway, it's that manufacturers should have a go at different color schemes for the front of their boxes :D
@@JeffGeerling I have a Cobalt RaQ4 from the 90s that used to host all my websites. It died years ago, but it's such a cool looking blue 1U case with green lighting. I can't throw it away. I should really fill it with Pi's.
That is the reason it's pretty hard to find one for our projects; he bought them all.
Love the concept, less excited about the licensing aspect of it, but I appreciate that I, as a lowly home labber, am not the audience for this product.
Buy it and don't license it then. It won't brick. You can still install stock Linux. You just don't get support.
7:30 Vibration dampening is NOT for “portability”, it’s to dissipate the forces from spinning disk drives. Multiple drives tend to resonate, and even though these forces are small, they decrease life of drives significantly.
I’m guessing the dampeners were just missing, because I have never seen a disk caddy without dampening.
While I have no good use for this I can see the value in a pi cluster as a learning tool for those still using on prem hardware. My place of employment went to the cloud a few years ago though so we can pay a lot more for the same server specs as we had in our still running data center. Government work at its finest lol
The Ansible part of the video was your "Squirrel!" moment 😂
Also, I understood maybe 25% of the whole thing, maybe because I don't know much about networking. Regardless, it was a fascinating video. 👍
Haha I know; I was setting the thing up and was (pleasantly) surprised to see ansible output right in the UI they built!
I think this is the bare minimum for Klipper !
4:54 Kids these days with their "full-sized" 3.5" drives. Back in my day, we only had 5.25" drives in 3.25" high by *8 INCHES DEEP* bays... and we liked it. 5MB should be more than enough for anyone!!
Meh, back in my day a 160MB drive was the size of a washing machine drum. Which was the style at the time. (ICL 2900 series)
I like the sound of the fans with a picture of still fans...
Those are actually solid 'ducts' on the back of those server fans (they're deep!). The front has the actual spinning fan blades, then at the back I believe the ducts help create a more turbulent flow to make sure the air back there mixes (or it might be something to help with static pressure... or both!). But the best thing is if you touch that your fingers don't get chopped off. Not quite the same on the other side of those fans!
Hey man, the reason you are seeing 55k in fio under the Linux terminal is the Linux buffer/page cache. I recommend you remove your test file each time you run the test; that way it's not able to cache in memory. Awesome video.
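For reference, a hedged example of an fio invocation that sidesteps the page cache (the file path and sizes are placeholders):
fio --name=randread --filename=/mnt/ceph/testfile --rw=randread --bs=4k --iodepth=32 --size=4G --runtime=60 --time_based --direct=1
rm /mnt/ceph/testfile   # and delete the test file between runs, per the suggestion above
The --direct=1 flag uses O_DIRECT, so reads hit the storage instead of RAM.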
Silicon Graphics (SGI) Purple 😅. What do you use Ceph for?
Kubernetes / distributed storage.
Right now in my 'production' homelab I'm still running mdadm RAID and Samba. But I'm considering moving my main storage to Ceph at some point. Not sure if 'forever' but it would be nice to run a local bare-metal Ceph cluster in production. Most of my usage has been on managed instances, and they're less fun :) (and more expensive).
Liked 'cause the nice mention of Network Chuck! Already subbed.
I've unfortunately not yet had the pleasure of running a CEPH cluster... but I'd definitely love to set one up!:D
I feel stupid, I thought you said the VPN software was called Timegate. I'm glad you posted the URL here, because I did look for Timegate and was like, how is a timesheet software a VPN software? Thanks Jeff
Heh, you could travel through time with timegate!
Love your videos Jeff. Lol! Calls himself a dummy while successfully setting up a Pi cluster server. Me: watches videos on YT with a Pi 400. P.S.: I'm getting 1080p playback without a hitch on YT now. Perhaps the latest updates updated the Mali driver?
You are truly the Raspberry Pi Lord. Your videos are awesome.
EDIT: I actually commented before watching the video; even if there are no actual Raspberry Pis in this one, I still think you are awesome.
Who needs this? Raises his hand …slowly at first, and then yells “I DO!”
I don't know much at all about anything and I enjoy your videos a lot. Never touched a raspberry pi even once lol.
I love purple.
Perfect thing to run a homebrew VTL on.
Twingate doesn't inherently offer enhanced security by default. If you gain console access, such as through SSH for debugging purposes, you can easily move to other services and machines within the same Layer 2 network segment, bypassing Twingate's access control. In contrast, I prefer ZeroTier, which also supports ingress and egress filtering on a per-network basis; it's not stateful, but it's free and can be self-hosted. NOW ... I'm getting myself a Mars 400 for the office - definitely some good stuff! 😉
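For anyone comparing, the ZeroTier client side really is minimal once the service is installed; something like (the network ID below is a made-up placeholder):
sudo zerotier-cli join 1234567890abcdef   # then approve the member in the controller (hosted or self-hosted)
The flow rules for the ingress/egress filtering mentioned above are defined per network on the controller, not on the client.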
Do they ship these with a list of installed MAC addresses so you can set IP reservations before plugging it in?
How are you liking that CRS309? I have two of these, one for my lab (XCP-NG on used HP servers) and one for work's production XCP-NG (as top of rack on new Supermicro servers).
I didn't ask about pre-configuring, but the one they sent me was configured with a couple customizations out of the box, so it's probably an option to at least have them send you the MACs.
I love the CRS309-I have three of them now, they're quite handy!
@5:48 did Jeff suggest that keeping a larger number cool is 'Orwell and good'?
Great video Jeff, and looks like some interesting hardware. Who do you think would be the target audience for a machine like this? I'm not familiar with Ceph myself so I don't know its benefits, but looking at your speeds I can't help feeling that a regular server would outperform it in transfer speeds and come at a lower cost, although with higher power usage.
Generally if you needed some extremely reliable archive storage, you might consider a Mars 400 (or three!). For higher performance (like for active use, or for VMs in a cloud), you would want their faster machines (or other faster hardware, generally), so you could get lower latency and higher throughput.
But for almost all home users, and for a lot of small business use cases, a typical NAS would be quite adequate.
Twingate is the best!
This is pretty good, the 4GB of memory per node worries me though. If it's just the storage node, maybe it's fine.
I would really like to see you run Ceph on a 45Drives server. It would be cool to see over 2 PB of data storage linked together.
For now my HL15 and XL60 servers are going to get deployed a bit more traditionally. But follow Network Chuck - he has a set of 45Drives servers he's working on building into a massive Ceph cluster!
Jeff, you and trafficlightdoctor should do a collab... people keep saying Raspberry Pi should be used in traffic lights. I'd love your input, as I respect your wealth of knowledge.
Well, it's nice to have connections and get free stuff.
Would love to see perf tests on this with SATA SSDs.
0:06 well there's your problem, RSJ has taken over your account again!
A simple answer as to why you'd do anything is "fun".
Hmmm Twingate looks interesting. I've been considering setting up my Mac Mini to do that sort of thing, Twingate sounds like a much simpler solution than fighting with Apple bullshit from 2006, and at least the hardware side for it would be cheaper than buying basically anything x86.
I'll need to come up with something to do with the ancient Mini though.
I used my 2011 Mini for a few years as a simple file server. Some of the Intel minis could run Linux and even do okay with some things like Jellyfin or Plex!
@@JeffGeerling I should look into more options for it, though mine is a fair bit older than yours.
I do know I can upgrade the CPU and storage. If only there were quad-cores that would run in it; that would probably be better for any likely task I'd put to it, since I'm not a vintage Mac software enthusiast who'd run into lots of stuff that isn't multithreaded enough to benefit from the extra cores.
Nice appliance, Jeff! Do I understand it the right way: Ceph is a storage area networking (SAN) solution implemented completely in software?!
Indeed it is. The software on the Mars 400 runs basically stock Ubuntu and Ceph. You can build your own cluster if you have a pile of servers kicking around, but that's a pile of servers if you need the resilience, plus the rack space and power.
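For a concrete sense of the SAN-like part: exposing a network block device from a Ceph cluster via RBD looks roughly like this (pool and image names are placeholders):
rbd create mypool/myimage --size 100G   # create a 100 GiB image in an existing pool
sudo rbd map mypool/myimage             # attach it as a local block device, e.g. /dev/rbd0
sudo mkfs.ext4 /dev/rbd0 && sudo mount /dev/rbd0 /mnt
On top of that same cluster you also get CephFS (a POSIX filesystem) and RGW (S3-compatible object storage), so it's broader than a traditional SAN.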
Hello Jeff, can you explain how to check the computational capacity / calculations per second of a computer or server? Is this a good benchmark to select a better system for an application or web server?
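One rough way to get a raw "operations per second" number is a synthetic CPU benchmark such as sysbench, assuming it's installed; treat it as a coarse guide only, since a web or application server is better sized by load-testing the real workload:
sysbench cpu --threads=$(nproc) --time=30 run   # reports CPU events per second across all cores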
Over 400 pages?
Oh my Jeff...
The color of that server reminds me of the old SGI machines...
Hello Jeff, I'm currently training for the RHCE exam with all the Ansible stuff. I was just thinking, because you're mentioned in the book written by Sander van Vugt for the exam prep: do you know him, or did it just happen that he mentioned you as the author of Ansible roles (not directly, but as geerlingguy)?
I've communicated with him before, but I don't personally know him-I think more than half of my Ansible-related contacts are people I've never actually met in person :(
Looking forward to someone selling Raspberry pies with dedicated GPUs too lol
thanks for causing the pi shortage
One node per drive seems entirely excessive, especially for spinning rust. Fine for learning/testing, sure, but as you already want at least 3 nodes, the traditional 1U server with 8 OSDs each will be less fuss and probably even cheaper.
No hot-swapping is also a concern for anything datacenter-grade. And the worst case might be having to turn off the other 7 OSDs to replace one drive (if, say, your setup doesn't have cables long enough to take the server out while running).
Surely, when you start having multiple nodes per drive, you start running into drive cache limitations, bus contention, latency, and therefore an increased potential for data loss? If every drive served two or more nodes, sooner or later something will have to wait to be written or read. Faster performance and ultimate reliability are usually at odds with each other, so you choose the best combination for your needs.
@@another3997 That is literally the reverse of what I said. I said "one server with 8 OSDs", as in a server with 8 drives attached to it.
One node per disk is kinda wasteful. You're paying an "OS tax" for each OSD, and 8 nodes, even cheap ARM ones, cost more than one entry-level Intel box. Limiting the number of disks per server might make sense if you're running into performance issues (say, a bunch of fast NVMe OSDs), but otherwise it's wasteful.
Like I said, it's nice if you have a test cluster because you can get a bunch of machines in a small amount of rack space, but if you're testing you might want to use VMs anyway; it's far easier to make changes to that setup.
Yeah, it would make a little more sense to have 2 bays per node, but this was designed to hit a price point so it's all pretty standard. Having 8 nodes in 1U adds more resilience.
And nearly everything IS hot-swap; even the nodes themselves can be shut down individually and swapped with the chassis still live. In any case the minimum recommended install is three chassis. It's really designed for full racks of these things.
From what I've seen so far, I couldn't switch to a regular PC PSU for the Storinator. My Storinator is the 60-drive model and it seems to have larger power requirements.
Also I run all SSDs in one of them and bought 8 hybrid converters to do 128 2.5" drive bays. But logistically, there are many issues.
1. How to connect all the SAS ports without expanders?
2. How to fit everything in the case with all those wires?
3. There are only 6 connectors for drives, but I need 8.
4. The PSUs only have 50A on +5V (250W) each whereas I need about 15A with some headroom per 16 drives.
Doing all this, I found some 12V to 5V converters from Corsair that do 20A each (100W). With 6 of these, I'll be in the safe zone.
Logistically this is a pain to set up, but I'll be doing it this weekend.
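For anyone sanity-checking that 5V budget, the back-of-envelope arithmetic from the numbers above works out (all figures are that comment's own estimates):
drives=128; amps_per_16=15
echo "need ~$(( drives / 16 * amps_per_16 )) A on +5V"   # -> 120 A
echo "6 converters x 20 A = $(( 6 * 20 )) A"             # -> 120 A, right at the estimate, with the PSUs' native 5V rails as headroom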
Ooh! If you do get through those challenges please ping me via email... I've been working on getting all the parts together for the same thing. I talked to the 45Drives folks about it a bit a couple months ago too.
Hmm, "buy 3" seems like a cop-out for HA advice. If I get 3, why not get 5, skip the dual power supplies & networking, and make it cheaper and simpler? Dual top-of-rack switches and a UPS would be a better use of that money, especially if the dual supplies aren't on separate power trips and RCDs. "How HA is HA" would be an interesting series, cobbling together all the "hobbyist" gear to beat the enterprise. I'm curious how the air gets through that thing when the HDD backplane looks like it's the full height.
Nice thing about Ceph is that you can define redundancy policy to be node, rack, site, etc. aware. All the redundancy makes the whole environment more resilient. You get improved throughput as a nice side-effect. Also, one starts with what one can, and improves things as you go.
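As a rough sketch of what "rack aware" means in CRUSH terms (the bucket, rule, and pool names are placeholders; hosts then get moved under their racks the same way):
ceph osd crush add-bucket rack1 rack                             # define a rack bucket
ceph osd crush move rack1 root=default                           # hang it under the default root
ceph osd crush rule create-replicated rack-spread default rack   # place each replica in a different rack
ceph osd pool set mypool crush_rule rack-spread                  # apply the rule to a pool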
Space and power. The nearest option we had to three of these at ~300W was 7U and over a kW
Put soft foam in the gap to avoid hitting the hard disk.
Now they only need to become widely and cheaply available on the second-hand market, with good support for running Debian on the individual ARM nodes.
They used to run CentOS and now run Ubuntu. You're fine. Debian probably works out of the box. I know Ubuntu does.
Can you use some of these clusters to crack the latest TI calculator signing keys for us? I can't afford to build one, myself.
Is the "appliance" still usable after your year runs out, and you decide not to renew the license?
Seems like a good idea but wish there was a version that seemed more appropriate for home servers.... might just get the blade if it still exists when I manage to get a new job 😅
"You're crazy"
"No poor people are crazy, Jack. I'm eccentric."
(Speed)
I get why YT keeps sending me these videos of advertisements for products I would never buy. This is big money. 💵
Would love to see the documentation include IPv6. Further, it's 2024; can we switch to screwless designs for hard drives already?
7:18 then what's the point of having 8 bays? I feel like I'm missing something...
I'm guessing the three nodes without drives are the management nodes. If you have more chassis then those would be spread over several chassis, and you'd use those nodes as OSD nodes with an HDD. You'd only be missing three HDDs in the entire Ceph cluster; it's just that Jeff has the whole cluster on one chassis.
Interesting, but a little bigger than what I could use.
Jeff can you do some content on half rack 1u equipment?
I don’t know about the rest of the home lab community but I think that would be more interesting to us
It would certainly fit more racks more easily :)
Jeff, you silly goose! No one is going to use that stuff in a real production environment.
I think computing companies should try the transputer architecture again: the idea being that you have a processor with its own onboard RAM and switch, to allow any number of processors to work in parallel.
The transputer was, and is, an interesting concept; however, it didn't succeed in getting a foothold. A modern cluster of networked nodes could in theory make use of that concept, making each node equal and responsible for sorting things out with its neighbours, but I imagine there's a lot of latency involved in the process. Even if the architecture were scaled down to modern CPU die sizes, would they be any faster than current multicore, multiprocessor designs? Bus contention and latency are big issues in all high-performance systems.
@@another3997 That's how the computers on the USS Enterprise work. So maybe. Edge computing is a big thing nowadays.
If RPi evolution is ongoing, what does it mean for a multi-Pi server?
How does Twingate compare to Tailscale compare to Cloudflare? Maybe one for a comparison video?
Just running 4 of my Dell R710 servers is killing me on power. My last utility bill was $600. And most of them are usually just at idle. I am afraid to power up my entire 20 server cluster. The monthly cost savings of just a few of these machines would pay for themselves in short order. The only concern would be running my own software and os. I don't want to be locked into a specific ecosystem. I will have to do some research, but it might be worth moving over to these.
~100W per cabinet. I think the Mars 200 was 100W and the Mars 400 is 105W.
@kevinthorpe8420 my R710s have 850-watt redundant power supplies. Even at idle, they are probably pulling close to 100 watts. And there are 20 of them. Even with just the 4 running and pulling 100 watts, plus the 14,000 BTU AC unit cooling them, it is killing us on the cost of electricity. Cooler, quieter, and cheaper always looks better.
@@janmonson Ours run more like 500W each. So you are clearly wasting a lot of compute doing sweet feck all.
@@kevinthorpe8420 Web, email, cloud, and plex. Have an R310 running a private Minecraft Java server as well.
Does Ceph + (offsite backup or tape) count as 3-2-1 backup? There are more than 2 copies on Ceph, and the offsite backup is a different storage medium.
Ceph clusters can be multi-location and you can define the crush map to keep copies in both locations. However that can slow things down if the network link is slow. There are also replication strategies you can use for DR backups or archiving.
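One of those replication strategies for the DR case is RBD mirroring between two clusters; as a hedged sketch, enabling it on a pool is roughly (the pool name is a placeholder, and both sides need the rbd-mirror daemon and peering configured):
rbd mirror pool enable mypool image   # 'image' mode mirrors explicitly enabled images; 'pool' mode mirrors every journaled image in the pool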
lol oh boy. The first couple times I thought you were saying CIF really weirdly