Epyc Rome is so underappreciated on YouTube. Not cheap, but great bang for buck if you need PCIe lanes, and if the power consumption isn't a deal breaker it's a great platform. This is definitely a use case I wouldn't have thought of; really neat to see how you guys used it, and glad it worked out for you.
I am doing this on a 7000-series Threadripper machine and it's working pretty well. I have occasional issues with audio clicks and pops, even with the USB controllers passed through, but normally a reboot will clear it up. For peripherals, I have a KVM built into my monitor, and it switches which VM has the devices. Also, I can connect remotely with Moonlight/Sunshine. All in all, it's pretty slick.
Thanks for making this video! I've been considering using EPYC for a "remote workstation" involving virtualized desktops; this makes me feel better about it!
Glad to hear that! I was surprised how easy it was to set up and how stable it is, but it makes sense, since this tech powers the cloud if you think about it. As I mentioned in the video, if you stick to newer hardware (GTX 10xx and above), you shouldn't have any problems. Older AMD cards suffer from the reset bug (mine included), but that's possible to remedy with github.com/gnif/vendor-reset
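For anyone curious, getting vendor-reset going on a Proxmox host might look roughly like this. This is a hedged sketch, not the exact steps from the video; the PCI address `0000:0a:00.0` is a placeholder you'd replace with your own card's address (find it with `lspci`):

```shell
# Build and install the vendor-reset module via dkms (Debian-based host assumed)
apt install dkms git
git clone https://github.com/gnif/vendor-reset
cd vendor-reset && dkms install .

# Load the module at boot
echo vendor-reset >> /etc/modules

# Tell the kernel to use the device-specific reset for the GPU
# (0000:0a:00.0 is a placeholder PCI address)
echo device_specific > /sys/bus/pci/devices/0000:0a:00.0/reset_method
```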
More surprised you didn't try multi-seat given your use case. Less overhead vs VMs, since it would be just 2 separate X11/Wayland sessions running at the same time.
We wanted to keep the OSes separate by design. I like to experiment and break stuff, and she needs a stable environment to work in.
Another advantage is that you can make snapshots of VMs as backups.
6:44 If you have qemu-guest-agent installed from apt, pressing the power button should gracefully shut down all VMs. Great video!
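For reference, enabling the guest agent takes two steps, one inside the guest and one on the host. A minimal sketch, assuming a Debian/Ubuntu guest and VMID 100 (both placeholders):

```shell
# Inside the VM (Debian/Ubuntu guest assumed):
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host, enable the agent for the VM (100 is a placeholder VMID):
qm set 100 --agent enabled=1
```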
Good idea. If he has the proxmox app, he could just turn on her VM. A bit more complex, but she seems to know what she’s doing.
We made a tiny Flask app which was hosted on another machine on our LAN and interacted with the Proxmox API to control the system. We ended up not using it much, because the system took on more services, like GitHub workers and some other things, and the windows of time where it could be off shrunk so much that we just left it on.
I shall make a follow-up video, because this footage is almost a year old and the system has evolved even more.
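I don't know the internals of the app mentioned above, but the Proxmox-API side of such a control app could be sketched like this, using only the standard library (a Flask app would just expose `power_action` behind a route). The host, node name, VMID and token value are placeholders, not values from the video; the API-token auth header is standard Proxmox:

```python
import json
import urllib.request

def vm_status_url(host: str, node: str, vmid: int, action: str) -> str:
    """Build the Proxmox API endpoint for a VM power action ("start", "shutdown", ...)."""
    return f"https://{host}:8006/api2/json/nodes/{node}/qemu/{vmid}/status/{action}"

def power_action(host: str, node: str, vmid: int, action: str, token: str) -> dict:
    """POST the power action, authenticated with a Proxmox API token."""
    req = urllib.request.Request(
        vm_status_url(host, node, vmid, action),
        method="POST",
        headers={"Authorization": f"PVEAPIToken={token}"},
    )
    with urllib.request.urlopen(req) as resp:  # assumes a trusted TLS certificate
        return json.load(resp)

# e.g. power_action("proxmox.lan", "pve", 100, "start", "user@pam!app=<secret>")
```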
@@jm-tech7968 Please do a follow-up, and if possible can you include a little bit of benchmarking for when you both use it at the same time, and solo?
Great video (y). Keep it up.
~6:49 Not ideal for the power consumption? That curve shows it's under load most of the time :P So it's actually doing things... you probably wouldn't want to turn it off anyway :p I have a similar setup, and when I have the base system + 1 Linux + 1 Windows (with GPU PCI passthrough) VM running, my system drops down to like 50-60 watts of "idle" power consumption, just handling the background tasks of all 3 systems... and I have an i9 14900K + 7900 XTX, so they can draw power if they have to :P Yours is shown eating 200-400 watts constantly, so it is actually working hard :P
It's not explained well in the video: that's actually the consumption of this system + another PC working as a NAS (40W) + an SBC running Proxmox with DNS, home automation and GitLab (10W) + the whole desk with monitors and some chargers + some networking gear. As you said, the idle is around 60W for me as well, and it can spike up to like 600W under heavy load.
0:17 what keyboard is that? WHAT KEYBOARD IS THAT? ANYONE KNOW?
It is a Rapoo E9270P, but I am looking to replace it with something more reliable - the top touch bar sometimes activates on its own and usually turns the volume up to max 😅 It's so bad I made a habit of setting the system volume to 80% and dialing down everything else so I don't go deaf one day.
What client are you using to connect to the VMs? Are you using a hardware client to connect to the VMs?
The two VMs each have a GPU and a USB card passed through, with normal monitors and peripherals connected to them, so they feel like bare-metal systems.
@@jm-tech7968 I think the question was: to get to the desktop, how are you doing that? A thin client?
I haven't needed to access the VM remotely yet, I use ssh to pull files if needed, but that's about it.
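For anyone wanting to replicate the passthrough side of this, the VM config could look something like the following. The VMID and PCI addresses here are placeholders, not the actual values from this build:

```shell
# What the relevant lines of /etc/pve/qemu-server/100.conf might look like:
#   hostpci0: 0000:0a:00,pcie=1,x-vga=1   # the GPU (all functions)
#   hostpci1: 0000:0b:00,pcie=1           # the USB controller card

# Or equivalently, set from the CLI (100, 0a:00 and 0b:00 are placeholders):
qm set 100 --hostpci0 0000:0a:00,pcie=1,x-vga=1
qm set 100 --hostpci1 0000:0b:00,pcie=1
```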
Great video! Thank you for sharing.
This setup is so fun to use, even though it's a little annoying with consumer-grade hardware if the mainboard doesn't play nice. Before I went ITX, I had my 5700 and my 580 installed and used the 580 for Linux while the 5700 was for a Windows VM. If I had a partner who needed a powerful PC, I'd probably go that route again, since one can even remote connect to those VMs, which is super handy.
Yes, I agree. Backups are simple too: in our case Proxmox just takes a snapshot every week and stores the last 3 on our NAS!
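A retention policy like that ("keep the last 3 on the NAS") can be expressed as a single vzdump invocation, usually driven by the Datacenter backup scheduler in the GUI. A sketch, assuming VMIDs 100/101 and a storage named nas-backups (all placeholders):

```shell
# Snapshot-mode backup of VMs 100 and 101, keeping only the last 3 per VM
vzdump 100 101 --storage nas-backups --mode snapshot --prune-backups keep-last=3
```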
Btw, your girlfriend mentioned machine learning workloads. Is the 5700 XT enough, or does it run on the CPU?
Mostly CPU, because she works with a wide range of things, not just neural networks. But yeah, bigger models ended up on the GPU. We just upgraded to an RX 7900 between shooting this video and it going live, because we will need to run some LLMs for research locally.
great video!
This is great! I am using both a native Ubuntu install and a virtualized Windows with GPU passthrough. Having a GPU and a screen each makes it like two machines, and I can thus run 3D CAD in Windows and heavy parallel calculations on Linux. Best of both worlds. And with Barrier running on both machines, my cursor and keyboard switch seamlessly from one desktop to the other. Magic!
Oh that's a cool setup, are you using integrated graphics for the Ubuntu host?
@jm-tech7968 Not necessarily. I have a similar setup on an X570 mobo with bifurcation. The first GPU is running native Ubuntu; the second one I pass through to a Windows VM. But it could have been another Ubuntu instance. I see no reason to have a second VM since you are fine with Ubuntu anyway. It's quite easy to set up PCIe passthrough with virt-manager.
Wow, nice build, nice video, useful feedback.
"giving 600 euro to a random guy on street"
Sounded like some shady back-alley deal.
I've done the same with Unraid; I prefer its storage system. Migrated to ZFS. Though I used a 1700X and shared 8 PCIe lanes to each GPU.
The system hosted 4 VMs and 8 Docker containers.
How did you figure it out? I've been trying with no luck. I bought an ASUS ESC4000 to try this on.
I am not proficient with servers; hopefully someone from the audience notices and gives you a hint. This seems like an older 5th generation Xeon, so it may not support the features you need for this...
Hi, this is a nice and interesting video, great content!
Can you talk about the distributions you use, and what a full work day looks like using this system?
I will make a follow-up video; there are many great questions, yours included, which deserve more attention.
Well, when I saw Bazoš, I was quite surprised that you guys are Czech 😅. Nice setup for the price.
Thanks!
I thought the same, a nice surprise.
I'm glad you found it even though it's in English. Most of this content is from America, but our market is quite different.
@jm-tech7968 I agree. I always get excited about something and then find out it's unobtainable in the EU and the alternatives cost three times as much. It disappoints me every time.
Well, it's more like 16 lanes for the GPU and 4 for the M.2.
Awesome video! It's always great to see experiments with fully detailed steps instead of just a brief overview of the work.
I have some questions about your setup: why did you choose Proxmox? From my experience, Proxmox can use around 500MB to 1GB of memory right after a fresh installation.
In the video, I’m not sure if this was mentioned, but how do you handle the disks? Do you use direct passthrough to the VM, or do you create a disk image for each one?
If you're using an image, it might be better to map the disk directly into the VM by UUID and enable write-back caching.
Regarding the power-off button, with the help of acpid, you can manage various actions for what should happen when it’s pressed.
Anyway, great video, keep it up 👍
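The acpid suggestion above could be sketched like this; the event/action file paths are the acpid defaults on Debian, and the shutdown script is a hypothetical example, not the author's actual setup:

```shell
# /etc/acpi/events/powerbtn would map the button to a script:
#   event=button/power
#   action=/etc/acpi/powerbtn.sh

# A hypothetical action script: gracefully stop all running VMs, then the host.
cat > /etc/acpi/powerbtn.sh <<'EOF'
#!/bin/sh
# qm list prints: VMID NAME STATUS ... ; pick the running VMIDs
for vmid in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
    qm shutdown "$vmid"
done
shutdown -h now
EOF
chmod +x /etc/acpi/powerbtn.sh
```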
It is a single Gen4 NVMe with disk images for the VMs; it is plenty fast for our use case.
I chose Proxmox mostly because I am familiar with it and it did all I needed. Also, there is a second machine in our home lab which was already running it before this setup was built; I have a blog post about the home lab: manakjiri.cz/project/home-lab/
I'd like to make a follow-up video where I'll talk about this in a bit more detail. Thank you for the questions!
And greetings to you too :)
Why turn off the server? I never turn off my computer. It idles at 100W; that's one incandescent bulb, it's nothing.
That works out to just about 1 MWh a year; I'd say that is not a negligible amount of energy, given that an average home here consumes 3.5 MWh.
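The arithmetic behind that figure, as a quick sanity check:

```python
# Sanity-checking the "about 1 MWh a year" figure for a constant 100 W draw.
idle_watts = 100
hours_per_year = 24 * 365              # 8760
annual_kwh = idle_watts * hours_per_year / 1000
print(annual_kwh)                      # 876.0 kWh, i.e. roughly 0.9 MWh

# Compared to the 3.5 MWh/year average home mentioned above:
share_of_home = annual_kwh / 3500
print(round(share_of_home, 2))         # 0.25, a quarter of a home's annual usage
```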
5:11 I saw those exact graphics cards (RX 5700 XT) go on sale recently on Aukro. It was about €60; I waited too long and they sold out :(
I got mine from someone who upgraded to something newer. I think I overpaid given its condition, but I needed a replacement quickly because the 770 died unexpectedly. Turns out the 5700 XT is sufficient for running ollama, so I am quite happy with it in the end. If you can get it for €60, I think that's a good deal; it is also not as power hungry as I expected.
@@jm-tech7968 I’d recommend two RTX 2070 Supers if available, eBay might be worth checking. Cool setup though, I’ve thought about using cheap mini PCs to log into Parsec instead of running locally. For non-gaming, NVIDIA Quadro cards are cheaper, more power-efficient, and still solid, but mostly use mini DisplayPort.
I periodically (every year or so) give Nvidia a chance, but I find it too glitchy on Linux to be usable. I do not understand how we can still have graphical glitches in Electron apps and browsers on Wayland in this day and age. Can you attest to this? I would hope the Quadro would be better in this regard, and I would love to get better power efficiency.
Nvidia only recently started allowing Linux developers to use their open source drivers. They should have provided better support over 15 years ago, but it’s Nvidia. You’re right to stick with AMD. I have a 7800XT in one of my Linux desktops.
@@jm-tech7968 I have an RX 570 and an RTX 2070 Super. I primarily do gaming and virtualization. Nvidia up until driver version 555 had troubles with Wayland, and most distros ship with the stable 550 branch, which is not ready for Wayland. But I'm not very impressed with the performance of the Nvidia cards in OpenGL either. I like to use Minecraft as a benchmark, and my RX 570, which is 3000 Kč (120 EUR) cheaper, is 1.5 to 2.5x faster in Minecraft (Wayland and X11); I also tried mesh shaders (the Nvidium mod), but that didn't help. I have seen similar performance in Windows, so it is not a Linux issue, it is an Nvidia issue. Plus, using my PC with the RX 570 feels faster. But I do have to say that with other games (Vulkan) the performance is somewhat in line with the price difference.
Here's my build I made like a year ago.
$509 EPYC 7F72 24-Core
$502 8x 32GB 2Rx4 PC4 3200R RAM (256GB in all)
$231 EVGA Supernova 1600 P2 80+ Platinum 1600W
$95 Noctua NH-U14S TR4-SP3 CPU Cooler
$739 SuperMicro H12SSL-NT
$43 Intel SSD 512GB 660p
USD $2,119
You could buy used workstations from a couple of years ago, something like an HP Z840 with 2 Xeons, and it would have cost like $300.
Epyc Rome is going to be at least as fast, if not faster, at similar or lower TDP than a lot of the Xeons in those older workstations you can find relatively cheap. I had a Dell 7810 and replaced it with an Epyc 7302, and the Epyc stomps those V3 Xeons.
@@nadtz V4 Xeons are better. But I guess TDPs are going to be a little high, since a workstation like the Z840 has 2 CPUs, that's twice as much. Even considering that, buying 2 Xeons, something like 2698 V4s, is much better than a single 7302: you get more CPU cores, higher single-threaded & multi-threaded performance, and you could get two of them for the price of one Epyc.
@@vishwanathbondugula4593 A bit better, and V5 a bit better than V4. I own the Dell and Supermicro X9 (V3/4) and X10 (V5/6) hardware, and Rome pretty much blows all of it away. If you don't need a ton of compute, single-socket V4+ still definitely has a place with its low idle/load TDP and cost (unless you need UDIMMs), but if you want/need PCIe gen 4, Intel has nothing that competes till 2nd or 3rd gen Scalable. I wouldn't take 2x 2698 V4s over what I have now, but you do you.
@@vishwanathbondugula4593 YT keeps eating my replies, but looking at eBay you couldn't get 2 Z840s with 2x 2698 V4s + 128GB RAM for $1000, so I'm fine with what I have.
@@vishwanathbondugula4593 I'd really love to know why my replies are disappearing. Long story short, I can't find a Z840 with 2x 2698 V4s and 128GB memory for $500, and one of the reasons I went Rome was for PCIe gen 4, so I'll stick with the 7302.
Nice, finally some Czech content (subscribed).
Why not just remote into the PC? That saves the hassle of having 2 GPUs.
We wanted a desktop PC
nice!
How would this work with one or two Windows VMs playing games? Should I expect the same experience?
You can run Windows in Proxmox; the GPU passthrough is more difficult but doable. As for running games, forget about MMOs or multiplayer in general, as anti-cheat will flag you for running in a VM. Most single-player games run fine, and we have gamed on this system many times.
nice
Cory could do this without virtualization 😂
Half of my computer is second-hand; I bought a 12900K for 11k CZK.
And recently an S24 Ultra.
Coolio
Don't forget to invest in a UPS to protect this beast. Nice build BTW. I did the opposite for my wife and me: I cloned my build, down to the same case and PSU, just with less RAM, as she doesn't need as much as I do.
I don't want to sound sexist, but your wife/girlfriend/partner runs Arch? Uh… wow. That's a unicorn. You need to make sure you actually marry that one before she changes her mind.