10 Watt HA Proxmox Cluster ft. ZimaBoard

  • Published Jun 15, 2024
  • Buy a ZimaBoard - amzn.to/43yaXhk
    ZimaBoard Site - link.rdwl.me/28vEQ
    ZimaBoard Review - • An Actual x86 MINI SER...
    Proxmox Setup - • Upgrading to PROXMOX -...
    Disable Swap in Proxmox - learn.umh.app/course/disable-...
    Install Proxmox on eMMC - ibug.io/blog/2022/03/install-...
    -------------------------------------------------------------------------------------------
    🛒 Amazon Shop - www.amazon.com/shop/raidowl
    👕 Merch - / raidowl
    -------------------------------------------------------------------------------------------
    🔥 Check out this week's BEST DEALS in PC Gaming from Best Buy: shop-links.co/cgDzeydlH34
    💰 Premium storage solutions from Samsung: shop-links.co/cgDzWiEKhB8
    ⚡ Keep your devices powered up with charging solutions from Anker: shop-links.co/cgDzZ755mwl
    -------------------------------------------------------------------------------------------
    Join the Discord: / discord
    Become a Channel Member!
    / @raidowl
    Support the channel on:
    Patreon - / raidowl
    Discord - bit.ly/3J53xYs
    Paypal - bit.ly/3Fcrs5V
    My Hardware:
    Intel 13900k - amzn.to/3Z6CGSY
    Samsung 980 2TB - amzn.to/3myEa85
    Logitech G513 - amzn.to/3sPS6yv
    Logitech G703 - shop-links.co/cgVV8GQizYq
    WD Ultrastar 12TB - amzn.to/3EvOPXc
    My Studio Equipment:
    Sony FX3 - shop-links.co/cgVV8HHF3mX / amzn.to/3qq4Jxl
    Sony 24mm 1.4 GM -
    Tascam DR-40x Audio Recorder - shop-links.co/cgVV8G3Xt0e
    Rode NTG4+ Mic - amzn.to/3JuElLs
    Atmos NinjaV - amzn.to/3Hi0ue1
    Godox SL150 Light - amzn.to/3Es0Qg3
    links.hostowl.net/
    0:00 Intro to my Proxmox Cluster
    0:28 What is a Proxmox Cluster?
    1:43 Hardware for my Proxmox Cluster / Zimaboard
    3:35 Setting up a Proxmox Cluster
    5:16 Setting up Ceph
    6:53 Setting up and migrating services in the Proxmox Cluster
    7:58 Configuring your Proxmox Cluster for HA
    9:36 Virtualizing kubernetes in the cluster
    10:33 What should you run on this cluster?
    11:13 Overall thoughts on my HA Proxmox Cluster
  • Science & Technology

Comments • 206

  • @TechnoTim
    @TechnoTim 1 year ago +215

    It’s ok that you had the original idea to steal my idea, I stole the idea too 😂

    • @RaidOwl
      @RaidOwl  1 year ago +59

      One day I’ll have an original idea…one day

    • @lachlanstone282
      @lachlanstone282 1 year ago +15

      With tech, it is less about the tech you are showing and more the perspective of the tech

    • @minedustry
      @minedustry 11 months ago +2

      Good ideas are contagious

  • @icequark1568
    @icequark1568 1 year ago +46

    "Cause I can, and I am a nerd." Words to live by.

    • @swollenaor
      @swollenaor 1 year ago +3

      Should be on a merch item

  • @kienanvella
    @kienanvella 1 year ago +70

    The way to get your workloads to prefer a node is to create HA groups for each node. Set the node priority to the same value for all the non-preferred nodes, and increase that value by one or two for the preferred node.
    Workloads will migrate to the up node with the highest priority first, and then prefer nodes of equal priority with the most free memory and lowest CPU usage
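For anyone wanting to try this tip, a rough sketch of what it looks like with Proxmox's `ha-manager` CLI (the group names, node names, and VMID here are made up; the same thing can be configured under Datacenter → HA → Groups in the web UI):

```sh
# One group per node; the preferred node gets the higher priority.
# Hypothetical node names: pve1, pve2, pve3.
ha-manager groupadd prefer-pve1 --nodes "pve1:2,pve2:1,pve3:1"
ha-manager groupadd prefer-pve2 --nodes "pve1:1,pve2:2,pve3:1"

# Tie an HA-managed VM to its preferred node via the group:
ha-manager add vm:100 --group prefer-pve1
```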

    • @RaidOwl
      @RaidOwl  1 year ago +12

      Interesting, thanks for the tip!

    • @markclarke4895
      @markclarke4895 1 year ago +5

      Works perfectly. Thanks for this tip!

  • @bitterrotten
    @bitterrotten 1 year ago +3

    This is such a cool project. I hadn't taken this board seriously until you posted this, but now I keep thinking about it as I try to adjust to my new southern, basement-less lifestyle where I'm baking my wife in her office with my homelab servers.

  • @tester0083
    @tester0083 1 year ago +2

    Thank you for making this video! A Proxmox cluster was the very next thing on my lab to-do list!

  • @jeytis72
    @jeytis72 1 year ago +3

    Very informative and clear. I now have a better understanding of how a cluster and Ceph work in Proxmox. Thanks

  • @thespencerowen
    @thespencerowen 7 months ago

    Such a great video. I'm doing the exact same thing with Intel NUCs. I use Ceph at work and I despise it, but seeing how simple it is inside of Proxmox has convinced me to try it at home.

  • @BrianThomas
    @BrianThomas 1 year ago +1

    I've done the exact same thing on 4 different small-form-factor machines. I'm running OpenMediaVault with file sharing and Docker/Portainer support, running all kinds of cool stuff. The must-have was a UniFi controller in Docker. Having that in an HA cluster on battery backup is AMAZING! It's very cool. I'll have to give Ceph a try though. This looks like it could save me some power. I'm using an NFS cluster for HA storage on 3 other mini PCs.

  • @buildfrom
    @buildfrom 1 year ago

    Fascinating. Would definitely like for more videos to be posted on this SBC.

  • @corpdecker
    @corpdecker 1 year ago

    Ayee, that map screenshot shows my house. What a small world.
    I need to set something like this up, but w/ more power than the ZimaBoards. I know all the options, but it's hard balancing performance vs cost vs efficiency. Thanks for the vid!

  • @th3rm-o977
    @th3rm-o977 1 year ago

    Great informative video!
    Gives me something to think about with my Intel NUCs

  • @IT-Entrepreneur
    @IT-Entrepreneur 1 year ago

    Crazy good. Exactly what I was looking for

  • @chrisa.1740
    @chrisa.1740 1 year ago +4

    "Because I can, and I'm a nerd." - RaidOwl
    That's exactly my reasoning for most of my Home Lab projects.

  • @GeoffSeeley
    @GeoffSeeley 1 year ago +29

    As others have commented, Ceph needs more memory, but it should also have its own (preferably isolated) network, the faster the better. It will constantly be copying changed blocks from one node to the other two to stay in sync. You should use that second NIC on each node for this.
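If anyone wants to follow this advice, the split is two lines in `/etc/pve/ceph.conf`; the subnets below are placeholders for whatever networks the NICs actually sit on:

```ini
[global]
    ; Client/monitor traffic (VMs reading and writing):
    public_network  = 192.168.1.0/24
    ; OSD replication and heartbeat traffic, on the second NICs:
    cluster_network = 10.10.10.0/24
```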

    • @RaidOwl
      @RaidOwl  1 year ago +6

      Noted, thanks for the tip! Not much is happening on the main NIC anyway lol

    • @walt
      @walt 1 year ago +4

      @@RaidOwl You also have those PCIe slots available on each of the Zima boards. Maybe use those for additional NVMe or a 10/25Gb NIC. I run a similar Proxmox cluster on Intel NUCs with 6W Pentium N3700 CPUs, but I use ZFS replication instead of Ceph. I wish I had those PCIe slots though; then I would upgrade the networking and try Ceph.
      The other thing you could try with these boards is a pfSense/OPNsense CARP HA router setup. They're x86 CPUs and have 2x NICs, so they should be good for that. In a different Proxmox cluster I run pfSense VMs in a combination of pfSense CARP HA and Proxmox ZFS replication HA. It works great; I haven't had network downtime in forever.

    • @seethruhead7119
      @seethruhead7119 1 year ago +1

      this is why the R86S are so interesting to me
      10gbe ceph network + 10gbe client network
      OR
      2x10gbe for a ceph ring network and 2.5gbe for client

    • @JeevaDotNet
      @JeevaDotNet 1 year ago +3

      Actually Ceph prefers a single network. "Best practices" say you want 2, cluster and public, but it's designed to work against best practices. CERN runs a flat network for Ceph; I run 3 networks for Ceph: management, VM, and OSD replication.
      If you dive deep into the logs you will see Ceph complains when running with two networks, but that error doesn't do anything.

  • @Aquavibes-xl9uu
    @Aquavibes-xl9uu 7 months ago

    This is amazing, thanks for sharing this architecture

  • @NightHawkATL
    @NightHawkATL 1 year ago +5

    Great video! I ultimately want to get to this point as well and possibly include Plex in a Supermicro 1U cluster with a P400 in each.

    • @RaidOwl
      @RaidOwl  1 year ago +1

      A little bit more horsepower than mine lol

  • @lamar9525
    @lamar9525 1 year ago +1

    Thank you, another rabbit hole for me to go down.

  • @LeoNux-um7tg
    @LeoNux-um7tg 15 days ago

    I'm going to build my Proxmox server with my old 3rd-gen quad-core laptop. Thanks for inspiring me

  • @patrickpaganini
    @patrickpaganini 10 months ago +2

    "Or if you want an actual good video ..." lol, that took me by surprise - good to have a sense of humour!

  • @Der089User
    @Der089User 1 year ago +6

    Built a cluster with 3x Minisforum EliteMini HM90 with 32GB RAM (max. is 64GB) + 512GB NVMe + 1TB SSD each - plenty of cluster power at (3x9=) 27 watts idle.
    The mini PCs also have two NICs, so I'm running a separate cluster network for the synchronization of the nodes, as a best-practice tip from Proxmox says: "Storage communication should never be on the same network as corosync!"
    I didn't want to run out of CPU power even when running the occasional Windows OS on the cluster.
    Anyway: Proxmox rocks! Hope to see more content on your channel - which is one of my favorites. 👍🏻
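For reference, a dedicated corosync link like this can be picked at cluster creation time; the cluster name and addresses below are made up for illustration:

```sh
# On the first node: create the cluster with corosync bound to a
# dedicated network, away from storage/VM traffic.
pvecm create homelab --link0 address=10.10.10.1

# On each joining node: run against the first node's management IP,
# passing the joining node's own address on the cluster network.
pvecm add 192.168.1.11 --link0 address=10.10.10.2
```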

    • @RaidOwl
      @RaidOwl  1 year ago +1

      Hell yeah that’s awesome

    • @marcin6386
      @marcin6386 1 year ago +1

      Yes! That's exactly why I recently bought the HM80. It's exactly like the HM90 but with the low-power AMD Ryzen 4800U, a 15-watt processor. They are maybe 20-25% slower, but hey! It's only a 15-watt chip! The only thing I didn't like was that they probably don't support ECC RAM. But still, when you have a cluster I believe you're covered even if there were RAM corruption. So maybe that's not the worst idea :-)

    • @moosolutions
      @moosolutions 1 year ago

      😊

  • @festro1000
    @festro1000 1 year ago +12

    I'm surprised backplane PCIe interconnects aren't a thing with SBCs like this; they would make for far better clusters.

  • @SplittingField
    @SplittingField 1 year ago +1

    I experimented with a qdevice approach: a main server, a small x86 box where critical services start if the main server is down, and an old Pi to keep quorum. It worked fine, but the way I did it, storage wasn't redundant (it was all on my NAS), and when I wanted to stop using HA I had to mess around with Proxmox files on disk directly.
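For reference, the QDevice setup described here is only a couple of commands; the IP below is a placeholder for the Pi:

```sh
# On the external vote-holder (the old Pi):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# Then, from any one cluster node, point the cluster at the Pi:
pvecm qdevice setup 192.168.1.50
```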

  • @JohnMatthew1
    @JohnMatthew1 7 months ago

    I have a similar setup, but use Lenovo mini PCs with i5s and 16GB RAM, SSD + NVMe - a great setup. Expandable to 32GB on each node; things work great.
    Just need some fast shared storage now.

  • @Rorhan90
    @Rorhan90 1 year ago

    The best reason for everything is "because i can"!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! LOVE IT!

  • @seethruhead7119
    @seethruhead7119 1 year ago

    I'm thinking about doing the same with 3-5 R86S units. The nice part is the dual 10GbE ports: 1 port for client traffic and 1 port for the Ceph network. There are also 2.5GbE ports left over for management or something else.
    Or I could use the two 10GbE ports for a ring network for Ceph, and the 2.5GbE ports for client traffic. The ring network is interesting because I could avoid buying a 10GbE switch just for Ceph

  • @MthaMenMon
    @MthaMenMon 28 days ago

    Now that is a super handy cluster. You can feed it with power from a small array of solar panels and boom. You could even host your own satellite with such a setup 😂
    And since Proxmox is just plain Debian, anything could be done with it.

  • @igordasunddas3377
    @igordasunddas3377 1 year ago +2

    My system not being HA is exactly my problem. I have set up services along with Pi-hole on it, but I don't want to put e.g. pfSense on it and have it work as a router, because if this single device dies, I don't even have access to the internet.
    Thanks for this video, as it shows how to make the server part HA. I wonder if there is an easy way to make a router highly available. My current router has worked for years without any trouble, but I am using the Pi-hole as a DNS because I wanted an intranet domain. Still: if it breaks, I have to manually adjust things to at least get the internet back.

  • @JasonsLabVideos
    @JasonsLabVideos 1 year ago

    NICE buddy !! this is awesome !!

  • @prashanthb6521
    @prashanthb6521 1 year ago +3

    Only 10 watts and so much functionality is definitely a winner.

  • @KjelltheWolf
    @KjelltheWolf 1 year ago

    This looks awesome. For testing, like you said, it's fine. Now I'm considering one or two of these for pfSense

    • @RaidOwl
      @RaidOwl  1 year ago +2

      They’re awesome for pfSense

    • @KjelltheWolf
      @KjelltheWolf 1 year ago

      @@RaidOwl cool. I looked for stuff like this but didn't have ZimaBoards on my radar here in Germany. And for 110-190€ it's kind of a steal for pfSense

  • @rjbrowning85
    @rjbrowning85 5 months ago

    Very interesting project. I see that they are selling the new ZimaBlade in a 3-unit cluster configuration. Hopefully we can get slightly better performance out of the new ZimaBlades

  • @obsolete21
    @obsolete21 1 year ago +2

    I run a similar setup (but on NUCs instead of Zima boards) - one thing I've noticed is that LXC containers migrate between nodes in a few seconds, whereas my VMs take about a minute to migrate.

    • @JohnMatthew1
      @JohnMatthew1 7 months ago

      LXCs have to shut down first to migrate, no? They're quicker since there's no full OS per se
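Right: out of the box, a running LXC container can't be live-migrated the way a QEMU VM can; it is restarted on the target. The difference shows up directly in the CLI (VMIDs and node name below are hypothetical):

```sh
# VM 100: live migration; the guest keeps running. With shared or
# replicated storage only RAM state needs to move, so it's quick.
qm migrate 100 pve2 --online

# Container 101: stopped on the source, then started on the target.
pct migrate 101 pve2 --restart
```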

  • @ochbad
    @ochbad 1 year ago +1

    great video, thanks for making it. have my engagement!

  • @urzalukaskubicek9690
    @urzalukaskubicek9690 1 year ago +1

    I am currently using TrueNAS SCALE for my VMs and containers, but this is tempting. I have two questions:
    1. What is the performance penalty compared to a single node? Is it noticeable for web services?
    2. How does upgrading Proxmox to a newer version work with a cluster like this? You just shut down node A, do the upgrade, reconnect it to the cluster with older versions? Then the next node and so on?

  • @markclarke4895
    @markclarke4895 1 year ago

    Funny, your video popped up on my phone this morning. I spent my entire weekend trying to figure out how to auto-migrate a VM back to its original node. Apparently, it can be done with hook scripts.

    • @RaidOwl
      @RaidOwl  1 year ago

      Someone in the comments mentioned you can do it by setting up HA Groups

  • @paulsimpson6290
    @paulsimpson6290 1 year ago +3

    Awesome - gave me some ideas!
    Some quick questions..
    1) Do all the SSDs / HDDs used in the Ceph nodes have to be the same size?
    2) If I started off with (say) 500GB drives and wanted to upgrade to (say) 2TB drives, could I do this one disk at a time and avoid data loss, or would I have to back everything up, upgrade all the disks, and restore?
    3) In your example, you had three 1TB disks. How much storage do you end up with on the cluster?
    4) Can you add more Ceph nodes once it's up and running?
    5) If you can add more nodes, will it start with fewer than 3 nodes? (I need to spread the cost of disk purchases out!)
    TIA

    • @javoronkov
      @javoronkov 11 months ago +1

      1) practically yes; your usable size will be limited by the size of the smallest disk.
      2) yes, it's a piece of cake.
      3) the configuration shown was R3 (replication across three nodes), so the total usable size is 1TB. You can use EC 2+1 (erasure coding with 2 data and 1 parity chunk), which will give you 2TB of space while still allowing one disk/node to fail.
      4) yes, it's a piece of cake.
      5) yes, but with some deficiencies. Proxmox cluster: 2 nodes depend heavily on each other. You won't be able to start VMs/LXCs, change the configuration and so on while you have only one node online (in the official/supported flow of things). Ceph cluster: chances are you'll end up with OSD-level redundancy, so your operation will fail if any of your disks/nodes go down.
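The arithmetic behind answers 1 and 3, as a quick back-of-the-envelope check (assuming one equal-sized OSD per node, and ignoring real-world overheads like Ceph's full ratios and metadata):

```shell
# Three nodes, one 1 TB OSD each -> 3 TB raw.
raw_tb=3

# Replicated pool with size=3 (every object stored on all three nodes):
replicas=3
echo "replicated usable: $(( raw_tb / replicas )) TB"     # 1 TB

# Erasure coding with k=2 data + m=1 parity chunks: only one third
# of the raw space goes to redundancy.
k=2
m=1
echo "EC 2+1 usable: $(( raw_tb * k / (k + m) )) TB"      # 2 TB
```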

  • @basdfgwe
    @basdfgwe 1 year ago +1

    Man, Proxmox is so good!!

  • @GustavoMsTrashCan
    @GustavoMsTrashCan 1 year ago +1

    Oh wow! That's pretty much my "wet dream" right there. Nicely done. I'm really jealous.

  • @GeorgeLee
    @GeorgeLee 1 year ago

    Does sound like a fun thing to do!!

  • @giancarlosrm
    @giancarlosrm 1 year ago

    Great video!!!!!! Love it

  • @swollenaor
    @swollenaor 1 year ago

    I am currently looking at a cluster build with Intel NUCs, but this setup would be cool to dip my toes into clusters and such.

    • @RaidOwl
      @RaidOwl  1 year ago +3

      Intel NUCs are solid too. I snagged some $50 ones off eBay and upgraded the RAM and storage to run my main k3s cluster on.

  • @JeevaDotNet
    @JeevaDotNet 1 year ago +1

    Best practice is to disable PG autoscale. It has killed a lot of Ceph clusters.
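If anyone wants to follow this, the autoscaler is a per-pool setting (the pool name below is a placeholder):

```sh
# Check the current autoscaler mode for every pool:
ceph osd pool autoscale-status

# Turn the autoscaler off for one pool:
ceph osd pool set mypool pg_autoscale_mode off
```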

  • @sichenggu5806
    @sichenggu5806 1 year ago +1

    Cool setup! I want to make my own mini cluster and this inspired me!
    Maybe the Beelink EQ12 (I don't know whether it's sold in your country, but it's available in China and Singapore) is an alternative to the ZimaBoard if you want more compute power hahahaha.
    It has the new Intel N100 (which may be 2-3X faster than the N3450, 4 cores), and the no-memory, no-SSD version costs about $150 in China.

    • @marcin6386
      @marcin6386 1 year ago

      Yeah, I went that route. But there are a few cons, with the biggest being No. 1: you don't have support for ECC RAM 😢

    • @sichenggu5806
      @sichenggu5806 1 year ago

      @@marcin6386 oh, ECC memory is critical for some situations, I forgot that...🤣

  • @4evermetalhead79
    @4evermetalhead79 1 year ago

    Cool. I dig it. 🔥

  • @emanuelpersson3168
    @emanuelpersson3168 1 year ago +1

    Bitwarden would be nice on a cluster.

  • @MarkJay
    @MarkJay 1 year ago +1

    another good option is Dell Wyse thin clients. They are low power, x86, more powerful, and about 3X cheaper than the ZimaBoard

  • @MAMDAVEM
    @MAMDAVEM 1 year ago +1

    Presumably this would be a good way to run a highly available Home Assistant home automation system? To my knowledge, although folk use Proxmox to host Home Assistant (I do), no one has posted a video on how to run one in an HA way.

  • @bluesquadron593
    @bluesquadron593 1 year ago +7

    The memory is a bottleneck here. Ceph needs a good chunk of memory. I found 16GB was not enough for a 500GB Ceph pool with a VM or two.

    • @RaidOwl
      @RaidOwl  1 year ago +1

      Yeah I think for this use case it’ll be fine. For my production system it’ll be more robust

  • @ammo2222
    @ammo2222 4 months ago

    Building an HA cluster with a single point of failure (the switch) is exactly my kind of humor👍

    • @RaidOwl
      @RaidOwl  4 months ago

      It’s only running in a single house on a single planet too, how crazy is that???

  • @VickyLovesHeadphones
    @VickyLovesHeadphones 1 year ago

    Can you run pfSense on a Proxmox cluster?
    Having a highly available router/firewall seems like a neat solution

  • @archimedes7436
    @archimedes7436 1 year ago

    Would this be a good design for a micro supercomputer in a small Pelican case? Something better than a standard laptop.

  • @JohnSmith-vs6yy
    @JohnSmith-vs6yy 1 year ago +1

    Proxmox seems pretty heavy for something like this. I'd like to see something like this with updated software. Possibly Fedora CoreOS to get auto OS updates with rollback, with k3s and Rancher dashboard, or just go full first-party with Docker Swarm with ___ dashboard (I wish there was a first-party one), replicated masters in case the master node goes down, but also sharing storage for HA. Double drives (NVMe or SATA) on the Zimaboards for redundancy to handle a single drive outage without losing full HA support for services (can drive mirroring be done with clustered storage within the same host in place of RAID?). I was thinking something like this with Cockpit to manage the system hardware and OS and Rancher to handle the apps. It's too bad that Cockpit doesn't have a Kubernetes extension. They used to, but it can't be found anymore. Probably because they're supporting Podman already. There's nothing in Cockpit (AFAIK) that creates or manages clusters unless you go over to the Red Hat corporate side of things and start looking at OpenShift, and even the community version OKD is too much for a Zimaboard. Minishift is also dead.

    • @mipmipmipmipmip
      @mipmipmipmipmip 10 months ago

      The thing is: Proxmox is just very convenient for setting up Ceph clusters!

  • @tergkyit
    @tergkyit 11 months ago

    Good idea❤❤❤❤❤❤

  • @KniferFTW
    @KniferFTW 1 year ago +1

    I was recently actually looking for a tutorial to do this lol

  • @2GuysTek
    @2GuysTek 1 year ago +3

    Love it!

    • @RaidOwl
      @RaidOwl  1 year ago +1

      ❤️❤️❤️

    • @2GuysTek
      @2GuysTek 1 year ago

      @@RaidOwl Someday I'm gonna have to try to convince you to give VMware a try!

    • @RaidOwl
      @RaidOwl  1 year ago

      @@2GuysTek good luck ;) I'll give you exactly 5 min on our next stream to make your case lol

    • @2GuysTek
      @2GuysTek 1 year ago

      Challenge accepted!

  • @deathpie5000
    @deathpie5000 9 months ago +1

    @RaidOwl please please make an in-depth Proxmox tutorial. I am struggling to understand the LVM stuff; I use ext4. For instance, if something happens to the system and I plug that hard disk into a working system, how do I access the files that were inside that Proxmox LVM? I don't know. There's just no really good in-depth Proxmox tutorial. There are a few, but I think a lot of people would really like a really good one

  • @nullify.
    @nullify. 11 months ago

    I really want to like the Zima board, but the PCIe slot sticking out bugs me. Especially with such a nice-looking case/heatsink design

  • @pichonPoP
    @pichonPoP 1 year ago +2

    I want a setup like this, due to the low power draw.

  • @framegrace1
    @framegrace1 7 months ago

    I plan to go K8s native, no Proxmox. Using something cheap as the control plane, and these Zima boards as workers. Let's see how it goes.

    • @RaidOwl
      @RaidOwl  7 months ago +1

      Goodluck soldier

  • @andyk9685
    @andyk9685 1 year ago

    Thanks !!

  • @MillionMileDrive
    @MillionMileDrive 1 month ago

    After a year, how is the onboard eMMC holding up with HA and Ceph? I have a 5-node cluster of Dell Micro PCs, and the 256GB Proxmox boot SSDs are at 80% wear after a year.

  • @a.krugliak
    @a.krugliak 8 months ago

    I have a question... how do those Zima boards hold up on a 24/7 schedule?
    And what happens if your zima1 (for example) with the SSD goes down? How do you connect the SSD as an HA NAS?

  • @undergroundnews_dk
    @undergroundnews_dk 10 months ago

    Great job - done by the Italian monks ;)

  • @mpsii
    @mpsii 1 year ago

    With the PCI Express port, can you not attach a video card for Plex?

  • @mochalatte3547
    @mochalatte3547 1 year ago +1

    "...it sits at about 10 watts total..." What hardware/software do you use to monitor these SBCs? Fantastic info. Time to give Proxmox a try (been using UNRAID for years now).

    • @RaidOwl
      @RaidOwl  1 year ago

      I had them plugged into my Vanspower portable battery bank. I have a video on it if you wanna check it out

    • @mochalatte3547
      @mochalatte3547 1 year ago

      @@RaidOwl That would be great if you can post the link. Thanks mate.

    • @RaidOwl
      @RaidOwl  1 year ago

      @@mochalatte3547 th-cam.com/video/SiZRnyz8R1U/w-d-xo.html

  • @gowinfanless
    @gowinfanless 1 year ago

    Very, very impressive. Could you do it with the R86S-U4, which has an Intel N6005 CPU with 32GB RAM + 3x 2.5G ports + 2x 10G SFP+ ports? That would be so great!!

    • @RaidOwl
      @RaidOwl  1 year ago

      Yeah for sure!

  • @AlexB-op7kb
    @AlexB-op7kb 9 months ago

    I started to set up an HA Proxmox cluster and got stuck at the "fencing" part. Is fencing optional, or does it just seem more intimidating than it is?

  • @bastothemax
    @bastothemax 1 year ago

    Question: does Proxmox have a feature to migrate VMs between nodes if the resource usage (RAM/CPU) is very high?

  • @VarunPilankar
    @VarunPilankar 10 months ago +1

    My suggestion would be using the low-power cluster as backup. Maybe consider a setup like this:
    Cluster A (High Power) - Node A1, Node A2
    Cluster B (Low Power) - Node B1, Node B2
    HA - Cluster A and Cluster B (maybe cross-wired) for a more efficient and practical use case. This also makes sense in terms of performance per watt.

  • @datatribute648
    @datatribute648 1 year ago

    Does the shared storage have any redundancy built in? Or does it just create 1 pool, essentially?

    • @RaidOwl
      @RaidOwl  1 year ago

      The size setting in the Ceph cluster dictates the redundancy, so in my case it was 3 copies.

  • @joshuagibson7512
    @joshuagibson7512 9 months ago

    I am trying to do something similar with three Gigabyte Brix machines, but I'm running into issues because creating an OSD seems to require an empty disk (i.e., it won't accept the disk where the OS is). Did you partition the Zima drive somehow before install?

    • @RaidOwl
      @RaidOwl  9 months ago

      Nah, I just used dedicated SATA drives for the OSDs

  • @discrtidunkwn
    @discrtidunkwn 1 year ago

    You can tell it's monks from the holiness in their design.

    • @RaidOwl
      @RaidOwl  1 year ago +2

      …omg

  • @AnilKumarIndia
    @AnilKumarIndia 5 months ago

    Nice video

  • @tonybeckett66
    @tonybeckett66 1 year ago +2

    I'm suffering from cluster envy

  • @AnotherSkyTV
    @AnotherSkyTV 1 year ago

    Because I can and I'm a nerd - a valid reason! 😅

  • @bassjmr
    @bassjmr 1 year ago

    I installed Proxmox on my ZimaBoard on eMMC too. The question is how long it will last until it dies. There's maybe a reason eMMC isn't supported

    • @RaidOwl
      @RaidOwl  1 year ago

      Time will tell

  • @arubial1229
    @arubial1229 11 months ago +2

    The ZimaBoard could have been straight-up God tier if it had built-in NVMe support. Installing a PCIe add-in card doesn't count.

  • @akurenda1985
    @akurenda1985 1 year ago +1

    Hey bro. I heard you want to build an HA proxmox setup to monitor your other HA proxmox setup.

    • @RaidOwl
      @RaidOwl  1 year ago

      😳😳😳

  • @MMWielebny
    @MMWielebny 3 months ago

    If you plan to have K8s on the Zima in a VM, then why use Proxmox in the first place? The only reason I can think of is to use an OS other than Linux, but resources will still limit you. Ceph can be installed inside K8s or next to it. Besides, Ceph is not only block storage: you can have S3/Swift object storage or a CephFS POSIX-compatible file server.

  • @pawe460
    @pawe460 11 months ago +1

    Hi there,
    is it possible to daisy-chain ZimaBoards, since all of them have 2 Ethernet ports, or is it necessary to connect all of them to a switch?

    • @Keith-ej1sx
      @Keith-ej1sx 10 months ago

      If the middle one died, then you have nothing connecting the other two.
      If you were then to respond with "then loop them back", well, then you have two problems: broadcast storms, and no way to network them to anything else (because all ports are taken).
      The single point of failure still exists here, and yes, it's the switch (and probably also the power board you're using to plug in all those cords).
      It's a surmountable problem though; it just needs more thought on the network side: RSTP on managed switches. Does Proxmox support clustering over two networks?

  • @thedeejlam
    @thedeejlam 1 year ago

    Would be nice with PoE for [more] convenience.

    • @RaidOwl
      @RaidOwl  1 year ago

      I agree 100%

  • @rallegade
    @rallegade 1 year ago +1

    I can't stop feeling like Ceph is a bit overkill for a cluster of this scale and size with the given hardware. Instead I would look towards ZFS and then use replication between the nodes for the VMs that need HA. This would also minimize the complexity of the setup a lot and get rid of things like split-brain issues, which can happen with Ceph 😊
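For comparison, the ZFS replication route suggested here is roughly one command per guest in Proxmox (the VMID, target node, and schedule below are made up), rather than a shared Ceph pool:

```sh
# Replicate VM 100's disks to pve2 every 15 minutes; on failover the
# VM restarts from the last synced snapshot (needs ZFS storage with
# the same name on both nodes).
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# List the configured replication jobs:
pvesr list
```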

    • @RaidOwl
      @RaidOwl  1 year ago +3

      Lol probably but I’ve never tried Ceph so this was my excuse to give it a go since Proxmox makes it easy AF

    • @rallegade
      @rallegade 1 year ago

      @@RaidOwl Agreed, that's absolutely what should be done in a lab environment to get to know how it works! But I'd still prefer ZFS for a lower-ish-end, non-production environment 😉
      Keep up the nice videos though!

  • @terrorpup
    @terrorpup 1 year ago

    I am about to do that; I am going to use a Zima board as the NAS. What are you using for storage?

    • @terrorpup
      @terrorpup 1 year ago

      ah, never mind, I saw that you are using 1TB drives, so why not a NAS?

    • @RaidOwl
      @RaidOwl  1 year ago

      I wanted the storage to stay local within the cluster for HA

  • @marcorobbe9003
    @marcorobbe9003 5 months ago

    Very interesting; that is exactly what I am planning to set up for home automation (Node-RED, Grafana, ...).
    Right now I am stuck on one topic: (how) is it possible to share data between VMs or containers?
    I think Proxmox is running on its own (the internal) disk. The VMs and containers are on the external SSD.
    When I set up a container, that container gets its own virtual HDD assigned, placed on the external SSD / the Ceph disk.
    Is it possible, and how, to have a folder / disk area / partition... let's call it a "shared folder", where different containers can read and write data?
    Later on there could be a container with a simple NAS software solution, or just an SMB share, that gives me access to that "shared folder" via LAN so I can back up that data from time to time.
    I would be very happy if someone can help me out with how to do that.
    Thanks a lot
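One common answer for containers, sketched with hypothetical IDs and paths: bind-mount a directory on the Proxmox host into each container (VMs can't use bind mounts, so for them you'd export the directory over SMB/NFS instead):

```sh
# A directory on the host, mounted into two containers at /shared:
mkdir -p /mnt/shared
pct set 101 -mp0 /mnt/shared,mp=/shared
pct set 102 -mp0 /mnt/shared,mp=/shared
```

Worth noting: a bind mount is local to one node, so in an HA cluster the shared directory itself would need to live on storage every node can see (e.g. CephFS) for migrated containers to keep working.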

  • @notafbihoneypot8487
    @notafbihoneypot8487 1 year ago

    What about a K3s cluster?

  • @ChyPyChy
    @ChyPyChy 1 year ago +1

    You could buy a much better system for that money. For example, for 120-150 dollars you could get a refurbished Lenovo ThinkCentre M700 with a Core i5 like the 6400T or 6500T, 8GB RAM and a 256GB SSD, and it takes only 15-25 watts with a much better performance rate. You'd also get a clean case and so on.

    • @pavelperina7629
      @pavelperina7629 1 year ago

      I did exactly this: a Zimaboard 216 and a Fujitsu Q956 with an i5-6400T (which I upgraded, basically doubling its price). The Zimaboard consumes 2.5W idle and 7.5W at load. The Fujitsu is more complicated: 5.5W idle without anything (but network) attached, 7-9W at very little load (added USB WiFi, running podman-compose with Nextcloud, MariaDB and cloudflared, which greatly increases wakeups per second, still like 2-5% CPU usage), and 37W at full load. But at full load it's roughly five times faster. The Zimaboard cost roughly 130 EUR including shipping, and additional costs were a miniDP-HDMI cable and a 250GB SATA hard drive. The Fujitsu cost roughly 130 EUR including a 128GB SATA drive and 8GB RAM.
      Running Nextcloud on the Zimaboard, it uses half its memory and struggles a bit when resizing JPEGs on the fly while handling an HTTPS connection. The Fujitsu feels like overkill for this task. It's almost as good as the i5-4590 desktop I used from 2015 to late 2021, and it's faster than my 5-year-old i7-7500U notebook. The power efficiency is insane.

  • @FrancescoCarucci
    @FrancescoCarucci 1 year ago

    Can you give us the link to the Italian monks in the Swiss Alps to order that board?

    • @RaidOwl
      @RaidOwl  1 year ago

      My garage lol

  • @DominiqueComte
    @DominiqueComte 1 year ago

    What is the device you are displaying the power usage with, please?

    • @RaidOwl
      @RaidOwl  1 year ago +1

      It’s a portable battery bank. I have a video on it: Vanspower

  • @trioharsanto5257
    @trioharsanto5257 10 months ago

    Does the Zima board support MikroTik RouterOS or not, sir? 😅😅

  • @Stealthmachines
    @Stealthmachines 8 months ago

    How do I run one VM across all three nodes in my cluster to take advantage of all cores?

    • @RaidOwl
      @RaidOwl 8 months ago

      You can only run a single VM on one machine at a time.

  • @tld8102
    @tld8102 1 year ago +1

    I thought this was an ARM-powered SBC and automatically assumed it was using Pimox.

    • @RaidOwl
      @RaidOwl 1 year ago

      Yeah I thought the same when I first saw one

  • @groto27
    @groto27 1 year ago +1

    Noice

  • @QuintenBuyckx
    @QuintenBuyckx 1 year ago

    Do you have an idea for sharing a USB device across this cluster?
    Let's say Home Assistant is kind of mission critical at my place, if I don't want my girlfriend screaming at me. Most HA services would switch servers without a problem, but the Z-Wave and Zigbee USB sticks are hardwired to one system only.

    • @ytdlgandalf
      @ytdlgandalf 1 year ago

      Remote USB is hard. Make the devices remote at the serial level, so use something like ser2net. I use this approach too, but for DVB-C tuners, for which you can use minisatip.
      Even then it's hard: you can't fail over the machine that has the dongle physically connected.
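
      The ser2net approach mentioned above can be sketched as a config fragment (classic /etc/ser2net.conf syntax; the TCP port, device path and baud rate here are assumptions for a typical Zigbee/Z-Wave stick, not anything shown in the video):

      ```
      # /etc/ser2net.conf -- expose a local USB serial dongle over TCP
      # 3333          TCP port to listen on
      # raw           pass bytes through unmodified
      # 600           idle timeout in seconds
      # /dev/ttyUSB0  the Zigbee/Z-Wave stick (assumed device path)
      3333:raw:600:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
      ```

      On the Home Assistant side, integrations such as Zigbee2MQTT or ZHA can then be pointed at a socket address like `socket://<node-ip>:3333` instead of a local device. As the comment notes, this only moves the serial port over the network; the node physically holding the dongle remains a single point of failure.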

  • @moellerjon
    @moellerjon 1 year ago +1

    Hard to call something HA when it's all running on the same power supply, same switch, etc...
    Here's a video idea: HA-ception - 3 instances of Proxmox running in another instance of Proxmox. (And then for the next video, HA your HA.)

    • @RaidOwl
      @RaidOwl 1 year ago +2

      When does it end?!?!?

    • @marcin6386
      @marcin6386 1 year ago +1

      ​@@RaidOwl Well, it's a never-ending story 😅 Ideally you should make a build like that 2 more times, put them in different countries and sync them together, just in case one HA build goes down 😂😂

  • @mridulranjan1069
    @mridulranjan1069 1 year ago

    You are using a single network interface on each ZimaBoard for regular networking as well as cluster networking? That never worked for me; time and again the cluster would incorrectly detect a network issue and go haywire, forcing me to reboot the entire thing!

    • @RaidOwl
      @RaidOwl 1 year ago +1

      I’ll keep my eye on it. If it acts wonky I’ll use that other one.

  • @TheFrantic5
    @TheFrantic5 1 year ago +1

    With my budget I'll have to settle for "Somewhat Available."

    • @RaidOwl
      @RaidOwl 1 year ago

      😂😂😂

    • @ShoruKen
      @ShoruKen 11 months ago

      "Somewhat available" should be a configurable option! :) Instead of Ceph or some other highly available storage option, it could make nightly or weekly snapshots or something like that, and still auto-migrate.

  • @Nosuchthingasnormalhere
    @Nosuchthingasnormalhere 1 year ago +1

    What if that single switch fails? It's not highly available then, is it?

    • @RaidOwl
      @RaidOwl 1 year ago +3

      What if my house explodes? Then it’s not highly available either

    • @gabrielporto.mikrotik
      @gabrielporto.mikrotik 1 year ago +1

      @@RaidOwl You should definitely get another house to play it safe. LoL 😅

  • @wstrater
    @wstrater 7 months ago +1

    Running Proxmox and a VM just to run Kubernetes in HA seems like overkill: you're running HA container management on top of an HA hypervisor. Why not just install an OS that supports CephFS and install Kubernetes on top? Just be sure that your etcd cluster spans all three nodes and that each node is schedulable for workloads. If one node goes down, Kubernetes can move the workload without having to first move Kubernetes and then the workload. Not as much fun, but less hassle in the long run.
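
    As a rough sketch of that bare-metal layout (not what the video does): a three-node k3s cluster with embedded etcd would look something like the commands below, where the hostnames and the token are placeholders I've made up.

    ```shell
    # node1: bootstrap the cluster with embedded etcd
    curl -sfL https://get.k3s.io | K3S_TOKEN=changeme sh -s - server --cluster-init

    # node2 and node3: join as additional servers, so etcd spans all three nodes
    curl -sfL https://get.k3s.io | K3S_TOKEN=changeme sh -s - server \
        --server https://node1:6443

    # k3s server nodes are schedulable workers by default, so losing one node
    # only loses its pods, which reschedule onto the surviving nodes
    kubectl get nodes
    ```

    With three servers, etcd keeps quorum (2 of 3) through a single node failure, which is the property the comment is relying on.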

  • @BrianThomas
    @BrianThomas 11 months ago

    I'm lost on how you are able to set up HA mode when the storage is local on each device. I have a similar setup and I want to do that.
    Can anyone point me in the right direction with clear instructions? Or explain it in detail for me?

    • @RaidOwl
      @RaidOwl 11 months ago +1

      Ceph replicates everything stored locally to the other 2 nodes as well, so if something needs to be migrated the data is already there locally.
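
      That replication behavior comes from the pool's size settings; on a Proxmox node it can be sketched roughly like this (the pool name `vm-storage` is a placeholder, not from the video):

      ```shell
      # create a 3-way replicated pool: size = total copies kept,
      # min_size = copies that must be available before I/O is allowed
      pveceph pool create vm-storage --size 3 --min_size 2

      # inspect the replication settings of the pool
      ceph osd pool get vm-storage size
      ceph osd pool get vm-storage min_size
      ```

      With size 3 on a three-node cluster, every node holds a full copy, which is why a VM can restart on a surviving node without copying its disk first.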

    • @BrianThomas
      @BrianThomas 11 months ago

      @@RaidOwl Thank you for your reply. I did a little more homework and I think I have it under control. When I first configured Ceph I did it wrong; someone posted a great tutorial 12 days ago and that's where I noticed my mistakes. I love the channel and I love your work. 1000 thanks....

  • @alqods80
    @alqods80 5 months ago

    Don't use Rancher when resources are tight; just use OpenLens, which runs on your desktop machine and connects to your k3s cluster.