That day when computer chips become an ultra rare commodity and the UN decides to raid Jeff's home to keep the world afloat for one last decade.
Coming next year, probably.
Lets fly to Mars
LoL
Lotta truth in that
Nice information.
Just like the world government to target an individual instead of using their own resources to help the masses
So that's where all the damn Pis keep going...
I can't. I eated them all.
They need everything... I didn't like the pay to play
Impossible!
Micro center is loaded with them
Jeff's got a sweet tooth!
Bro, now I know why it’s so hard to get one these things 😂.
Jeff bought them all 😂
Haha, no joke. I was just now getting around to snagging an RPi 4 8GB since I didn't buy one immediately when they first came out (definitely regret that one). Then the pandemic began and prices immediately jumped $50-100 even for the smallest RAM model, if you could even find a 4GB or 8GB for so long...
Then I saw a YouTube video of someone building a custom gaming rig using a Raspberry Pi 5, and realized it's been much longer than I thought since the 4 released. I never was too attracted to the dual micro-HDMI ports on the 4... or all the special case accessories required just to attach a regular 1TB M.2 NVMe SSD.
(Also, does anyone still recommend getting the new RPi 5, or is the 4 a better option? I'm leaning towards the 5, but figured I'd ask my fellow nerds in the comments.) 😂
@@brilspolymathstudios Jeff bought you? Weird
@@itbwen9672 oh wait I meant them
Can't might need that, yeah they are useful!
Yo dawg we heard you like computers so we put computers in your computers.
Computers in your computers to run computers, that connect to other computers, using computers
@@williambaumgardner489 networks in a nutshell
@@williambaumgardner489 Can't forget about the virtual computers running in each and every computer-connected, computer-controlled computer 😇
@@williambaumgardner489 Welcome to the Internet
@@Someaccount-whatelse Come and take a seat
After taking into account the huge markup on RPi boards and the extra hardware, it's probably cheaper to go with a bunch of Intel NUC boxes.
rPIs are back to 40-80€ per piece. where did you find intel NUCs at a comparable price? the ones I found started at 330€.
@@raku2122 Well, those NUC boxes will have multiple times the performance of one RPi, and they won't require a motherboard or anything else extra.
We have a lot of that kind of junk here and it's cheap, around 30 euros.
£90-120 for an i5 8500 or 9500 with 16GB RAM
If you even "think" about getting an RPi for your desktop and gaming needs... you're doing it wrong. That's like buying construction machinery for your wool-knitting needs and interests. Pis never "really" had any markup; you were always able to buy one for 10-20 USD _(most bare/basic versions)_.
RPis are development boards, home automation boards, and even industrial tech components. They cannot be used like any sort of "normal computer", even though some RPi distros seem to "support" that. Maybe... "don't" buy dev boards if you want another desktop. That's stupid and makes no sense.
Pis are now very costly; they defeated their own purpose as a DIY board long ago, and on top of that, scalping adds to it.
Yep, it goes for more than 100 dollars in my country, while a Chromebook with way more power goes for 20 dollars. Don't really see why you need it. If you want compact computers, just buy mini PCs at around the same price (in my country).
Yeah, when I was in high school they had just hit the mainstream and they were what, 12 bucks? And the Pi Zero had just come out and came FREE with the magazine announcing its release.
Even when it's not scalper prices, I can't think of a good reason for this sort of setup.
My Pi 3B came with a case, all the cords needed, and a 64GB card (which was a LOT then). I wanted 1-week shipping so I paid a little extra, totaling $38.
That's sad. I remember when they came out and didn't know it's like that now. I never had one, but always thought they were cool.
Eager to see the full video
This is the guy who was hoarding all those raspberry pi in 2023
Thank you for the RPi shortage.
Man I never thought a motherboard with some pis could look soo good
Redundancy is only achieved on a per cpu board basis.
If the power to the unit fails, all cpu boards fail.
You need more than one unit to have full redundancy.
Most rack-mounted servers have dual power supplies. And you are only really complaining about one form of redundancy. Surely you understand that a cpu board could fail without taking an entire power rail with it.
@@shanejohns7901 The enclosure shown in the video in fact has 2 PSUs. In my opinion, PSUs are among the most important parts to make redundant, because they fail more often than CPUs or memory.
Nope. Since you're being pedantic about it, I'd like to mention that true redundancy needs 3 full setups in 3 different locations, all connected via 2 separate branches of networking. If a fire takes out 1 location, that still leaves 2 for the period until a replacement is set up and all data copied over. If we only used 2 locations, then the day we actually need the redundancy we would be at risk: during the intensive operation of copying over all the data while running hard to cover the full load, the survivor would be the only available location in the world. Having to do a high-risk full data copy and run the normal workload simultaneously means the redundant system has failed to provide redundancy. Hence 3 locations are needed; multiple backups at the same location only provide some benefits, such as data parity, reduced wear and tear, and protection against low-level, localised hardware failure.
There is no such thing as "full redundancy" in reality. No way in the world you can guarantee that it will never all go offline at once.
In the real world, designers make decisions about which parts of a system most need redundancy and which might only really benefit from a single backup (like the power supply).
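To put a rough number on that trade-off, here's a back-of-the-envelope sketch of why duplicating a failure-prone part like the PSU pays off (the 99% figure is just an assumed example, not from the video, and it assumes failures are independent):

```python
# Availability of N redundant, independent copies of a component:
# the system stays up as long as at least one copy works.

def availability_with_redundancy(component_availability: float, copies: int) -> float:
    """Probability that at least one of `copies` identical,
    independently failing components is working."""
    return 1 - (1 - component_availability) ** copies

# A single PSU that is up 99% of the time:
single = availability_with_redundancy(0.99, 1)
# Two redundant PSUs push that to roughly 99.99%:
dual = availability_with_redundancy(0.99, 2)
print(single, dual)
```

Which is why a second PSU buys far more uptime per dollar than a third or fourth would.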
Work any government job and you learn this. Redundancy doesn't start until 3: one working, a second as working backup, and a third backup for spare parts. That's basic-bitch redundancy 101.
CEPH clusters can be so diverse. Love to see this.
That loop was deviously smooth
The quality of your vids is going up, keep it up!
Man the lighting on the cm4 makes them look so good
@Jeff, your learning and development is our own too. For many, many, years my colleagues, friends and I have learnt a wealth of knowledge we leverage in the real world, from the things you do.
Just while I'm here, a quick mention for Geerling Engineering: the RF videos you've been publishing as of late have been of huge interest, especially for someone curious about everything.
They were singing, "Hi Hi Mr American Pie" very very clever
Arm computing is awesome.
Clustering is underrated
Does it really scale well when extrapolating to clusters of CPUs with much bigger words/pipelines? Or compared to clustered GPUs?
Might be a stupid question, but it doesn't seem like it would translate well between x86, CUDA, and ARM.
What are the specific use cases for these set ups?
Brute forcing captchas
Opulent displays of wealth.
I have mine running Kubernetes to monitor the uptime of my other servers, because it's the most stable one.
After a very long time I've found someone putting so much effort into showing what can be achieved with a Pi SBC.
I myself run a perfectly working 2 TB NAS on a Pi 3B+.
Don't feel a need to upgrade, but would do so to turn it into a sellable, deployable product in the coming months ❤❤
Data formatting, translation, the way it is stored, etc...
Software can make things fast, a lot of people are limited by their perception of things.
Relatively slow.... still useful, and most people wouldn't notice the difference in their day-to-day lives.
That’s not a cluster. That’s an overcomplicated RAID. For clustering you need separate systems with separate points of failure.
Almost a single point of failure
CiB left the chat.
It’s 4 separate compute units. For learning and demonstration it is a cluster.
Raid could be considered a (localized) cluster of storage
Beaglebones are cheaper and are totally open source so you can hack them at the hardware level.
He was not cooking any longer...he was baking
as long as your error checking is top notch, this is cool
Or, you know, if you are mainly interested in learning about Ceph, the software aspects of clustering, and anything which doesn't need distinct physical hardware nodes, you could use a bunch of VMs and/or "Tiny Mini Micro" PCs, while spending (much) les$$$. With added benefits when it comes to (ease of) snapshotting/rollback/backup.
How is this cluster redundant when they share the same hardware and power source?
If you are on a budget you will probably be using a Lenovo Tiny, HP Mini, or Dell Micro, because they are roughly the same cost or cheaper, and they are in stock. Well, they are dumping stock: barebones HP 6th/7th-gen units are 27 bones. You need a processor ($5 and up), RAM ($5 and up), and a power supply (~$8); storage is NVMe or SATA (but PCIe x4 NVMe). Probably the smallest DDR4 you are going to find is 4GB, but the good news is that in bulk everything gets cheaper, and you don't need the special carriers or cluster board at all. Don't get me wrong, Pis are cool, I have bought several and just ordered the CM4, but not for a cluster.
Beautiful project!!! Congratulations!!!
What happens if you try to run a normal desktop OS like Raspbian on this? Can you actually do it, or is it only useful for things like Docker clusters?
Well sir now I will have to go and see your videos on how you build those because you have a very excellent point on utilizing that inexpensive way to learn how to work on the very expensive
I'm wondering if at this point it wouldn't be better to do it virtually on a system with the combined computing power.
I find these kinds of things awesome for learning stuff like K8s and Docker Swarm and seeing how they handle different kinds of loads. Works with normal Pis too, but isn't as clean, as I have learnt the rather hard(-ish) way :D
Will need to print a custom case with fan support and have this cluster up and running somewhere (at least for when I'm intending on using it and learning with it) 😅😂
Relying on a singular company’s SBCs is waiting for a disaster to happen
Right, he thinks it’s “redundancy” but it’s kind of defeated by that alone lol.
Do you have one of the Pis as the main node, like in many other clusters out there?
Yes, I usually put the pi in slot 1 as the main node.
Yuppers, got the C6, but due to the CM4 shortage I've only got 2 nodes. Hoping that the CM5 will be compatible 🤞🏻 (and available 😉)
One of the most valuable things I learned was to space out your peripherals across all lanes.
Not sure I'd ever want to deal with the mess that is CEPH again, but might need to test out one of those boards.
I'm using longhorn it was pretty easy
@@RoryEckel Setup of any of them is easy; management, and fixing it when it fails, is where things matter. Ceph's kind of a nightmare there. First I've heard of Longhorn.
Still kinda stuck on LizardFS because it's reasonably easy to fix when things break: just move chunks to a working server and you're good.
I'd like to know what, specifically, cluster servers do. And how to make them work.
A single power supply doesn't seem very redundant.
They are very useful for a learning platform.
So just for context: a Pi 4 is about 2 gigaflops for 5 watts, it's a quad-core with no hyperthreading, and it costs $100.
A 7995WX provides 192 threads and 12 teraflops for 350W... An equivalent Pi 4 cluster would need 3000W for that same compute power, or 240W for the same thread count, and cost 60k to compete on throughput.
Only if you had a very specific need for thread count rather than throughput, something like monitoring a million-plus inputs simultaneously in an RTOS environment, would it make sense. Even then it wouldn't, because anything that critical would need error correction at a hardware level.
The only people that think this is a good idea are those bad at maths... and computers... and, let's be honest, probably at getting a girlfriend too.
What I thought but didn't wanna do math.
Not very efficient in any category to include cost.
I'll ignore most of the fallacies here and just go after the simplest one.. how much would it be to build six 7995wx machines for an educational Ceph cluster?
@@hipster2283 Oh please, go ahead and point out the fallacies, and I'll go ahead and point out the fallacy that you would need 6... or even a Threadripper. Any modern CPU is an order of magnitude better in performance per watt or per dollar than a Pi. You wouldn't need a cluster in the first place if you used basic maths.
Love to see the long form with implementation in a real world use case...
There really aren't. A corporation would never set up anything like this and a hobbyist could get similar computing power far cheaper in other ways.
@@wombatillo Thanks, please share some more info to support your point of view. Sounds interesting.
"Includes paid promotion"
Aaaand that's the reason people prefer the Raspberry Pi to the cheaper and more powerful competition. The documentation and support are so good that it's easy to learn things. I do like me a Banana Pi too, but there's more frustration in doing anything weird. For testing new things it's always Raspberry Pi.
...The others seem to be slowly getting better at that too. Not there yet, but slowly getting better.
I know the m.2 drives mounted like that are probably fine but those bends give me the bends.
Is that a hard drive, Indy?
Good ol spinning rust, still can't be beat for cost per TB!
@@JeffGeerling Though it's getting closer and closer. I only need 8TB of storage for my NAS, and a single 8TB SSD is only twice the price of an 8TB hard drive.
@@achannelhasnoname5182 Just doing a quick search it seems like an 8TB SSD is 4-6 times more expensive than an 8TB HDD. What SSD are you talking about?
What do you use them for? You mentioned development. What exactly do you develop?
I work on infrastructure, and testing different application deployment architectures. For Kubernetes and distributed / redundant web application services, network storage is essential. Being able to test Ceph deployments locally (on my desk) without running VMs on one workstation is helpful to learn about network bottlenecks and different management strategies.
Systems like these are great for development houses. I used to work at a game studio who needed many many Arm cores for building and testing mobile versions of games, systems like these are exactly what we needed.
This is like teenage boys explaining to their parents why they need an RTX 4090 in their PCs.
Is there any current cluster solution that's a single system image? Newer than Kerrighed???
Well, there's this too... what I just built. It has 6 NVMe 2TB M.2 drives in a RAID 10 array; that's 6TB of redundant storage for my C: drive, with capacity for 6 more hard drives... that is, if you _don't_ want to add a card for more hard drives and just keep the Zotac running on 16 lanes...
MSI MEG X670E ACE motherboard
Ryzen 9 7950X3D cooled with a Noctua NH-D15s chromax. black and 2 NF-A15 fans.
6 Kingston FURY Renegade M.2 2TB (PCIe 4.0 x4) NVMe
2 - G.SKILL Trident Z5 Neo Series AMD EXPO 32GB (2 x 16GB) 288-Pin PC RAM DDR5 6000 (PC5 48000) for a total of 64GB RAM
Zotac RTX4090 Gaming GPU (running all of the available 16 PCIe lanes)
All inside a Fractal Define 7 XL case. I'm able to run BOTH MSI Afterburner's Kombustor GPU stress AND the CPU burner at the same time, maxing out the card's 400+ watts and the 7950X3D's 120 *_simultaneously_*, without overheating anything! Matter of fact, the Zotac never gets over 80 degrees and the 7950 never gets over 89.5, so it never throttles the cores; they run the full 4800.
No water cooling necessary.
It all comes down to cost and efficiency
Can you make an example what you develop on it?
I have for example a 7950x running on a ITX board and proxmox divided in many servers and 64GB RAM.
I've always wanted to build one but, short of launching a hello world nginx container I have no idea what to run on the thing.
Cool.
Liked your content. Subscribing 😊
It's more handy than what most might think. Scale it!🤓😎
I was hoping it would be about actual MARS
I have a ThreadRipper with 24c/48t cpu. I would be curious how much power would be required to power up the equivalent number of cores/threads using this clustering method. Using the ThreadRipper to mine Monero (thrashes the cpu 24/7) only runs me about 500 or so watts as measured at the wall.
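For a rough sense of scale, here's the thread-count arithmetic, with assumed figures: a Pi 4 has four cores with no SMT, and ~7 W per board under load is a common ballpark, not a measured number. Note that matching thread count is nothing like matching actual performance per thread:

```python
# How many Pi 4 boards would it take to match a 24c/48t Threadripper
# on raw thread count, and what would they draw at the wall?
PI_CORES = 4          # Pi 4: quad-core, no SMT
PI_WATTS_LOADED = 7   # assumed ballpark full-load draw per board

threadripper_threads = 48
boards_needed = -(-threadripper_threads // PI_CORES)  # ceiling division
cluster_watts = boards_needed * PI_WATTS_LOADED

print(boards_needed, "boards,", cluster_watts, "W")
```

So on paper a dozen boards at well under 100 W covers the thread count, but each Pi thread is far slower than a Threadripper thread, so mining throughput would not be comparable.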
My local movie theater uses Raspberry Pis to run its advertising screens.
What a very contagious cluster?
In Switzerland we have just the 16-core AMD, but this is a monster.
You should start using other SBCs than Raspberry Pi because the Raspberry Pi Foundation has big problems with transparency and openness. Maybe you can even design your own cluster board.
Damn Turing pi 4 is out already?
Which OS do you use on the Pi boards? I've been thinking that Linux Mint (with the ability to combine all nodes into a virtual computer) would be good. I can imagine running a ton of them to create a cheap personal supercomputer.
Ceph is awesome! did you roll out bare metal with a playbook or use Kubernetes with Rook helm chart?
It's Ansible, but in this case it was managed by Ambedded. I've also been working on it a little in my Pi-Cluster repo on GitHub.
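For anyone curious what "bare metal with a playbook" can look like, here's a minimal hypothetical Ansible sketch. The host groups, IP, and task layout are assumptions for illustration, not the actual playbook from the video; `cephadm bootstrap` and `ceph orch apply osd --all-available-devices` are the standard cephadm workflow:

```yaml
# Hypothetical sketch: bootstrap a Ceph cluster on the first node
# with cephadm, then let the orchestrator turn spare disks into OSDs.
- hosts: ceph_first_node
  become: true
  tasks:
    - name: Bootstrap the first monitor with cephadm
      command: cephadm bootstrap --mon-ip 10.0.0.11
      args:
        creates: /etc/ceph/ceph.conf   # skip if already bootstrapped

    - name: Use every available blank disk as an OSD
      command: ceph orch apply osd --all-available-devices
```

A real deployment would also distribute the cluster's SSH key to the other nodes and `ceph orch host add` each of them before the OSD step.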
Okay, but *why*? Isn't this expensive?
One can learn to operate any number of programs, like Docker or Kubernetes, or write your own parallelized software. I have a Pi cluster with 32 cores; I ran a basic render farm with it.
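A render farm like that boils down to fanning independent frames out to workers. Here's a toy single-machine sketch of the pattern; `render_frame` is a made-up stand-in, since a real farm would shell out to something like Blender on each node:

```python
# Toy render-farm dispatcher: a pool of workers pulls "frames" off
# the work list, the same shape as spreading renders across nodes.
from multiprocessing import Pool


def render_frame(frame_number: int) -> tuple[int, str]:
    """Pretend to render one frame; return its output filename."""
    return frame_number, f"frame_{frame_number:04d}.png"


if __name__ == "__main__":
    with Pool(processes=4) as pool:          # e.g. one worker per core
        results = pool.map(render_frame, range(8))
    print([name for _, name in results])
```

On an actual cluster the `Pool` would be replaced by a job queue the nodes pull from, but the frame-level independence is what makes rendering split up so cleanly.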
What a beast! Never going to need, but interesting.
😂
Neutron radiation can cause a bit flip and throw an error, resulting in failure. The cluster nodes are all doing the same thing; if any one differs, it is ignored.
All Hail The Pie Ambassador
When you have unlimited budget
You know what would be an awesome usecase for these pis? Virtualization.
what are these for?
What’s the point of one of these? Still don’t understand
What does it do ?
I don't understand. PI is cpu?
Dear dude,
Do u know about the idein pi zero cluster with 16 zeros?
I want to build a raspberry pi cluster or a cluster using some other microcontroller board. I'm wondering if it would be useful web development testing 🤔
What OS are you using for running this hardware?? Thanks!
Would something like this be good for an open source nas
How does one do this? What kind of motherboard sorcery is that?
When I saw that server Pi board, my first thought was: this has no production value, but I want it. What can I do with it? 😂 I can test Kubernetes 😂
Can you turn one of these into a Gaming home server?
Poor man's Cisco UCS.
When overclocking pi cluster server?
I specifically use this for stuff and sometimes also things.
But will it run crysis?
Raspberry Pis are not slow; it's how people use them.
I think these are really cool jeff, but I think people are wondering, what do you actually DO with them other than tinkering/experimenting/compiling the Linux Kernel daily(😂 Jk). What applications do these clusters have in the real world/what companies use them? I haven't seen anything like this in commercial use yet, I still mainly see large x86 servers with lots of sockets.
Edge clusters, mainly. That and education. There is a very niche use case for a low-power cluster with more than one node, that can run on one board in a small/low-power/low-cooling system. Most use cases are better served by just one faster system, and most workloads are hard to split up efficiently on small computers like these.
Whats the budget on making a complete cluster setup like the ones you're showing? I've always wanted to do this.
The Pi clusters are a bit slower (only 1 Gbps connection to outside world), but all-in they cost around $400-500 for a 6 node cluster. The Mars 400 as tested (including drives) runs around $5000, but more if you want a support plan.
raspberry pie sounds good. may I have a slice?
Can it run Doom?
thanks Jeff
I love all the stuff you do, Jeff!
I need these on my life
you’re much more likely to face a PSU or storage issue first, so not sure if this solves a serious redundancy issue. Regarding scalability, just use VMs?
I understood about 5% of that but well done. Haven't got a clue what it does but at least I know what a raspberry pi is! I was going to use one for setting up a retro gaming station, but I gave up cause Linux is super hard to understand.
Even a Pi 3 cluster can do real work; a lot of services aren't that heavy on their own.
Will an HDD work in a vacuum?