Thanks to you and Wendell, I started using pfSense just over a month ago and really enjoy it. Unfortunately, my first attempt was to virtualize it, and boy was that a mistake. Exactly like Wendell said, I was economizing on my limited available hardware, but it was not worth it.
I have over 240 virtual pfSense firewalls, and a bunch of them are pfSense+. That is on VMware 6.7U3 Enterprise Plus. I did a virtual firewall at home on VMware, wasn't a fan. That was on 6.7U3 Essentials. I went to an SG3100 as I am replacing my big server with a tiny (low power) server. I'm putting my big server in my datacenter.
Aww, at the end I didn't see the clicky bits where the subscribe and video suggestions are. Regarding the forbidden router, the fragility is my main concern with one machine to rule them all. Would be nice if you could have a second little box that maybe can't route at full capacity / with as much might, but is basically an online replica that could be switched over to for maintenance or if the main machine gets explodey.
I love the "just pop the NIC and this SSD in another box" option. The soft NIC in xcp-ng with the X550 has been faster than expected, too. So I will show how to do proxy ARP/CARP failover with pfSense... sometime... in the not too distant future. The fast router is down? The turd router is on the job! Kind of thing.
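For the curious, CARP failover in pfSense is configured in the GUI (Firewall > Virtual IPs), but under the hood it's the FreeBSD carp(4) mechanism. A rough sketch of what that boils down to at the interface level (interface name, vhid, password, and addresses here are all made-up examples):

```shell
# Primary box: lowest advskew wins the master election and answers for the shared IP
ifconfig em0 vhid 1 advskew 0 pass examplesecret alias 192.168.1.1/24

# Backup box: higher advskew stays passive, takes over when the master
# stops sending CARP advertisements (e.g. the fast router dies)
ifconfig em0 vhid 1 advskew 100 pass examplesecret alias 192.168.1.1/24
```

Clients point their gateway at the shared 192.168.1.1, so the failover is invisible to them.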
I've gone the other way and have the "forbidden" router VM as a backup for my primary pfSense box. If that goes down, I'm back up with my VM in minutes.
That’s what node servers are for. 😉 I actually have the pfSense that I’m setting up in a 4U 4-node SuperMicro box. I’m doing all my NIC routing through VLANs configured on XCP-ng’s side and on my switches, so if I run pfSense as a high-availability VM it should just boot itself up on another node. I haven’t tested that yet, as I’m slowly building out the system and am a bit heat constrained for running all my nodes, but it should be doable.
I have a main VM router like this, and a separate simple one (actually a CM4 router you previously covered) handling a dedicated management network, using separate MoCA wiring and cheap unmanaged switches, so that I can always get access to the BMC interface of the main machine and see what's wrong, even when the main network is down.
For those who think this is a bad idea "just because": be aware that a lot of high-end commercial firewalls/routers are offered as VMs run on a cluster. Palo Alto, for example, offers virtual instances; Cisco offers virtual ASAs; etc.

The flip side to the bad juju of having your router virtualized is that it does make some things a lot more painless. OS upgrades, configuration changes, etc. are trivial to back out in their entirety by rolling back to a previous snapshot. And if you're willing to take the minor performance hit of using virtual NICs instead of hardware passthrough (assuming you've only got gigabit internet or less, which shouldn't be too much of a restriction for most), your VM becomes completely hardware independent. Big Epyc box blow up? Spin up the hypervisor on whatever spare machine you have and restore your VM files (or even just plug in the disk).

I use virtualized pfSense a LOT for internal VM segregation, home-lab workstation use (I have a lot of virtual NICs and use virtual pfSense to route between them for disposable test networks), etc. It runs on any hypervisor that FreeBSD runs on, which pretty much includes Hyper-V, VMware Workstation/Fusion, VirtualBox, etc., so if you have a high-power workstation you can experiment with fully isolated virtual networks (test Active Directory stuff, etc.) without necessarily needing a server for it. As Wendell says, just make sure you're aware of the risks (obviously, keep your pfSense instance and your hypervisor up to date to ensure you're protected from VM escape).
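The snapshot-before-upgrade workflow the comment describes is a couple of commands on XCP-ng's CLI (the VM name-label `pfsense` is an example; hedging that exact flag spellings can vary between xe versions):

```shell
# Snapshot the router VM before an upgrade; xe prints the snapshot uuid
SNAP=$(xe vm-snapshot vm=pfsense new-name-label=pre-upgrade)

# If the upgrade goes sideways, shut the VM down, revert, and boot the old state
xe vm-shutdown vm=pfsense
xe snapshot-revert snapshot-uuid="$SNAP"
xe vm-start vm=pfsense
```

Proxmox (`qm snapshot` / `qm rollback`) and ESXi have equivalent one-liners.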
The one KEY benefit I DO see with the idea of virtualising the router / firewall is: Checkpoint / Snapshot prior to an upgrade / update, and if it breaks, restore back to that snapshot, get back online within 60 seconds! Also, a great way in a lab to check out different appliances / options. In the real world, yes, I have come across complete virtual environments like this and they have been rock solid for years! Really comes down to the initial planning phase to ensure all contingencies are planned for while maintaining both security integrity and uptime... Personally, I like the blinky lights on my rack mounted 1u appliance box :D
Not to seem like a ZFS shill, and I'm sure you can achieve similar things with other tech too, but I basically have achieved this on bare metal too, if it sounds appealing to anyone. All my Linux machines are running OpenZFS with ZFS encryption and other GRUB-unsupported-features. My whole /boot directory (kernel, initramfs, etc) is stored within the encrypted ZFS pool. And then I use ZFSBootMenu to decrypt, mount, and then boot the kernel. The cool thing about all of that--besides being able to use bleeding-edge ZFS features--is that, since the entire OS is on ZFS, I can snapshot before updates or before making stupid configuration changes. If anything goes wrong and I can no longer boot, I can rollback the entire OS to the previous state from the boot menu with a single command and be up and running again immediately.
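The snapshot/rollback loop described above is just two ZFS commands; with ZFSBootMenu the rollback can be done from the boot menu's recovery shell even when the OS won't boot (pool and dataset names below are examples, not a prescription):

```shell
# Recursively snapshot the whole OS dataset tree before an update
zfs snapshot -r zroot/ROOT/default@pre-update

# If the update breaks something, destroy everything since the
# snapshot and return the OS to that exact state
zfs rollback -r zroot/ROOT/default@pre-update
```

ZFSBootMenu can also boot directly from a snapshot's clone, which is a lower-risk way to peek at the old state before committing to a rollback.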
Been running pfSense in a Hyper-V setup for my home for the past 5 years; not a single issue on the VM side. I've had my own networking issues (routing, etc.) but that's all. I also set up another pfSense VM on my remote server to create an IPsec tunnel between them. Highly recommend this type of setup. I run my pfSense VM on an old Dell R810.
I've been running pfSense as a VM on a Linux host (qemu/kvm) for a while now with a couple of NICs passed through to it, and it works just fine. Quick and easy to set up.
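For anyone wanting to replicate the qemu/kvm NIC passthrough setup, here is a rough sketch using libvirt's tools (the PCI address `03:00.0` and the VM name `pfsense` are examples; you'd substitute your own, and the host needs IOMMU/VT-d enabled):

```shell
# Find the NIC's PCI address on the host
lspci -nn | grep -i ethernet          # e.g. 03:00.0 Intel I350

# Detach it from the host's driver so the VM can claim it
virsh nodedev-detach pci_0000_03_00_0

# Describe the device in a libvirt hostdev fragment...
cat > nic-passthrough.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

# ...and attach it to the pfSense VM permanently
virsh attach-device pfsense nic-passthrough.xml --persistent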
I've had pfSense as a VM on Unraid for a year now and it has been great. My Unraid box is a dual Xeon with 32 cores. I agree that having the router in my main server is not optimal, and my next project will be an economical build for a dedicated pfSense router box. Great content guys. Thanks.
I've basically had a similar setup for about 8 months now. Proxmox host, dual-Gb NIC (one for LAN, one for WAN) via PCI passthrough to pfSense, Pi-hole DNS, gigabit internet, and a few other VMs, on an old Intel i7-6700/32GB DDR4. Works a treat.
@@MichaelSmith-fg8xh It's a Core i3-9100F with 16GB of ram running Debian 11. It's using Qemu+KVM+libvirt to run VMs, one of which is pfSense. I use virtual NICs rather than pass-through for some flexibility and haven't had any issues running 1Gbit internet and 2.5Gbit intranet.
Exactly the way my setup is going. With the summer heat wave on the horizon and electricity bill nearly doubling I am seriously considering rebuilding the homelab - having one forbidden router/VM host/ docker host/wifi AP which runs all the time (and just barely sips 30 watts) and then having another bulk data/media storage (refurbished rack server with 12 drives) that powers on only when needed.
A lot of folks are in for a shock soon. That old dual-core Xeon server with 40TB of storage and all the other junk is going to cripple them. I'll stick with a 5W router and 20W NAS box, thanks.
@@jabezhane Depends on where you are I guess, my AC kills me over the summer but the rest of the year my old v2 Xeon NAS adds a negligible amount to my power bill.
Approx a year ago I got an HP DL360 with 2 X5650s to run Unraid on. After one month I took the server down, used my laptop for the Unraid server, and used the DL360 as a PC (since it was better than my 2012 laptop) because of its 150W idle usage and close to 250W while doing very little. November last year I finally built a desktop with a Ryzen 5900X and the old 1050 Ti I was using, and finally powered down the server. In December I got an old PC from a friend for 80€ with a 4770 and used it to run Unraid 24/7.

This month my parents received the electricity bill, and over the last 12 months we had to pay an extra 480€ on top of the 66€ we already paid each month. This also includes the 70W pump I have filtering the water 24/7 for the small turtle lake we have, plus 3x12W CCTV cameras.

I finally decided to install a Shelly 1PM to monitor the power consumption of the electrical outlets in my homelab/office. Including the 2 UPSes and the server (running at roughly 20% because of the CCTV motion and person detection), I'm using around 120W with only the server on; with the gaming PC idle it's around 250W total, and when playing it gets close to 540W total. I'm expecting it to increase even more because of rising electricity prices here in Portugal; the average price per kWh I'm paying is around 0.154€. I guess it's time to invest in a couple of solar panels to offset the daily usage of not only the homelab but also washing machines and so on. Sorry for the rant.
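The always-on cost in that rant is easy to sanity-check: watts times hours per year, divided by 1000 for kWh, times the tariff. Using the commenter's 120W homelab baseline and 0.154€/kWh rate:

```shell
# Yearly cost of a constant 120W draw at 0.154 EUR/kWh
watts=120; hours=8760; price=0.154
awk -v w="$watts" -v h="$hours" -v p="$price" \
    'BEGIN { kwh = w*h/1000; printf "%.1f kWh/year -> %.2f EUR/year\n", kwh, kwh*p }'
# -> 1051.2 kWh/year -> 161.88 EUR/year
```

So the 120W baseline alone accounts for roughly a third of that 480€ overage; the gaming PC and the rest of the house cover the remainder.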
I am running pfSense as an edge firewall/router on Hyper-V using an old FX-8350 system I had lying around. Works amazingly well! I even have enough remaining system resources to run my UniFi host for three APs!
I've been running pfSense as a VM for close to 2 yrs now and it works amazingly well. The only difference is that I use Hyper-V instead of Proxmox, ESXi, or XCP-ng. I personally find Hyper-V easier to use. I've got pfBlocker, OpenVPN, Squid proxy, HAProxy, and multiple VLANs running off the pfSense. I haven't had to touch it in over a year now; it just simply works. I also have a simple Windows VM and some docker containers running off that small virtualization box. The box is a Beelink mini PC, similar to an Intel NUC, just a bit cheaper. I have a separate, much more powerful VM host machine for all other work / testing.
Been running mine in Proxmox for about 2 years now. It's mind-blowing what you can do for free. I bought an Intel motherboard that had 2 ports on it for my i7-3770 with 32GB RAM and a SAS card, and it only cost me £250 (the hard drives were more). Once you're using LXC containers, RAM is the only limit on what I can run, as only pfSense and OpenMediaVault run in VMs. I do have to double NAT but have had no issues; you just forward ports from the ISP router to the pfSense. pfSense uses the 2 onboard NIC ports, and the Proxmox box has an Intel 1000 4-port card for SMB multichannel.
I’ve had pfsense running virtually in VMWare for various applications for years :) The only reason my homelab isn’t setup that way is because my wife gets mad at me when the internet goes off
The wife is your most important client or network user. I was using a virtualized ClearOS 7 as the router because our Virgin Media 500-meg connection seemed to keep crashing the router. The thing is, although ClearOS gave faster throughput, it also would hang after a week. We ended up switching to a 60-meg VDSL connection and it's been solid.
I've been running a pfSense in VMs for over a decade. The only issue I've run into was the old minimum boot volume becoming too small around 2.3. The main trick is static IPs on the hypervisors, otherwise cold starts might be troublesome. I'm also a full time IT manager, so your mileage will vary. Also, RIP VMware if the purchase by Broadcom goes through.
Thank you for doing these. As someone who is about to dive into this without any experience... your stuff provides a calm, reasoned, information-based approach to dealing with increasingly complex problems barely understood by anyone nowadays.
Running a pfSense router for several months and very happy with how it works with both my internal networks. Left over hardware from an upgrade a few years back, discrete hardware just works well for me and the slight extra cost for electricity is worth the simplicity and makes the occasional PD easy and quick, after all my time is worth something.
I've been running pfSense as a Proxmox VM for years on an old Mac Mini cluster. Things can get tricky after a power outage or a shutdown and restart, especially with a separate management VLAN. I use virtual NICs instead of PCI passthrough; that way, in a cluster, you can migrate machines around without networking issues. I also use an EdgeRouter X as the upstream default gateway for the pfSense WAN IP, so the ISP DHCP address stays the same and it's also easy to split off a separate lab network from the edge router without messing up the "production" network. You can also have a DMZ network on a 2nd box, or logically separated with a pfSense VM running as a FW and a reverse proxy to host websites/services with SSL certs under one DHCP address.
Running ESXi on a used OptiPlex with a host of VMs (including pfSense) for over a year now with no issues. The reduced noise, clutter and power consumption is worth the risk for me.
In Romania they just introduced 10Gbit internet for $10. The 2.5Gb plan is $9, the 1Gb plan is $8. Full Duplex, international traffic included. Bought an asus xg-c100c, waiting 1-2 weeks for internet installation. If I don't hit 7 Gbps, I will buy intel x550-t1.
I ran my Astaro / Sophos NextGen Firewalls as VMs for years. The ONLY reason I went to a physical appliance was because I wanted the aesthetics of the 1u unit in my 1/4 Wallmount Rack. I still have a Sophos XG running virtualized as my HA node.
Mate, that intro is simply superior, cracks me up every time. Just wanted to let you know I love your style, slowly going through all this juicy content you've created over the years. re: virtualisation, I think as long as you understand it relatively well and have solid foundational networking knowledge it's probably the way to go. Reasonable redundancy levels should always be catered for regardless of your deployment choice - especially when failure triggers multiple voices echoing "is the Internet down?" throughout your house ;) DR & LCP (Lab Continuity Planning :D) is kinda fun anyway, right? Right?? 😅
I've got a Netgate RCC-VE 2440 with an Atom C2358 that's been faithfully routing my home internet (which is now gigabit fiber) for about seven years. It's decked out with a 4G LTE modem, WiFi card, and mSATA storage, and it was originally running pfSense but I switched to Linux for reasons. It's starting to get a little on in years so I'm excited to explore other options... thank you for this video!
Watched from a virtualized pfSense router, on ESXi. Honestly it worked so well I rolled it out at my parents' house, using my old Dell R610 host running Proxmox.
Still only 5 min into the video but couldn't help but comment once I heard "Virtualizing pfSense!". I built a beefy VM host to serve as my home lab and virtualize all the services that I use to learn on, and as a result pfSense is one of them. My VM host server has dual NICs, and I pass one of them directly to the pfSense VM using PCI passthrough so I didn't have to decide which libvirt networking configuration would have the fewest drawbacks. The VM host is a Ryzen 9 5950, 128GB RAM (non-ECC, more expensive), 20TB of disk space, and a couple of old AMD W7000 FirePros I had lying around. It currently serves pfSense, Samba file sharing, a camera security system, Plex, and Nextcloud. Love your vids and am in awe of your team's expertise!!!
If you can make it wife friendly when it goes t&ts up then I might head back to the dark side. I remember receiving several 'dark' calls from my wife when the internet wasn't running back when I had pfSense virtualised years ago. Nothing is better than the simple instruction of 'switch it off and back on again', and that doesn't tend to end happily when you have a box full of virtualized machines :)
This is so fun! As much as 10 years ago I started to run m0n0wall in a VM as it provided me more options than my default cable router. That was on ESXi 3.5, and later on I made the switch to pfSense under VMware. Nowadays I still run pfSense, but under Hyper-V, and my system currently is an i7-2600 with 16GB. It has 3 NICs: one 1Gb/s to the internet on fiber (yay!), one to the appliance network at 1Gb/s, and one at 2.5Gb/s to my machines. The day 10Gb/s switches don't cost hundreds of euros, I'll upgrade. Probably the i7-2600 too. In other VMs I have my webserver, a SAN, and a NAS. SR-IOV is enabled for the most important connection: internet.
I got this... Make sure you have a router for the management network as well, because you may want to update xcp-ng. You do need that when stuff hits the fan, which will happen! Mine is NAS, TV tuner, router, VoIP gateway, Home Assistant server, wifi/switch controller, and of course all the software a house would need.
Nice vid! Was already trying to build a "Forbidden Router" and was looking at XCP-NG so your timing is impeccable. Can't wait to see your follow up vids!
IMO, for home and small business, automatic failover high-availability is a cure worse than the disease. Bespoke software configurations and distributed systems are for the big boys who lose $10,000/minute when the network is down, and can afford a team of developers to maintain it. Better to have a dedicated bare-metal router that's cheap enough that you can buy two. The manual failover procedure is simple enough that a five year old can do it following written instructions, as long as the router and the spare aren't too high to reach. If you do software upgrades A/B style by swapping to the spare, that solves the, "if you aren't testing it, it doesn't work," problem and protects you from regression bugs. OEM business desktops have low idle power because of energy star regulations, and Kaby Lake and earlier are quite cheap used. In current market conditions, cheaper than a RasPi 4, even.
Ran into this myself: I'm using OPNSense for a router, but ended up using an old Pi B for my Unifi controller, and there are other bits I'd like to run on my router box.
Finally someone tackles the challenge of setting up pihole/unbound and Steamcache on one machine! I have waited forever to find a good tutorial on that! Thanks Wendell. Looking forward to it :)
I didn't have a good experience with the passthrough and XCP-NG. I had a GPU and a PCI USB hub. These two things had to be configured in a very specific order for them to work. And even then, the mouse stopped working whenever I played a video and only then. I actually tried with two different PCI USB hubs and I reinstalled both the hypervisor and the windows VM multiple times. However everything went real smooth with vmware. Just sharing my experience with xcpng.
Thanks Wendell for covering this topic. The homelab people in the comments may have the experience and hardware lying around to experiment with, but I expect you'll do your best to point out the caveats and pitfalls for someone just walking into this type of project, of a single networking appliance versus the herd of animals.
You broke me, Mr Wilson... 1 month, many dozens of installs. Got it all working but seem to have reached the limit of my Asus TUF Gaming X570, as it falls apart when finalized... I guess I cannot have every PCIe and NVMe slot filled? Found a 4-port 1Gb card, but PCI-X is not so common these days. Will drink until I pass out and worry about it tomorrow. Much fun man, all the best. Thanks for the challenge.
Only partway through the video but it's wild how relevant these videos are to me. Purchased a used old 4 port protectli vault on ebay but that got me thinking about whether I could just slap opnsense in a VM on one of my existing servers, bridge the LAN side, and call it a day. Thanks for sharing your expertise!
I used to give clients a choice of a virtual pfSense box or a cheap router. Using VMware back in 2013 we had about 14 virtual pfSense boxes running perfectly. Ran it like that for about 5 years with zero problems before we replaced it all with cheaper routers. Honestly the virtual pfSense was the better solution.
I had actually just set this up like two weeks ago on my Mac Pro 4,1. Ubuntu LTS as the hypervisor, with PFSense running under KVM, PCIe passthrough of the NICs. Docker. The whole nine. Seems like great minds think alike. Very interested in that newer hypervisor based on Xen, though.
I got a 1u supermicro SYS-5019D-4C-FN8TP with quad 10gb ports and a handful of 1gb ports. Used proxmox to virtualize pfsense and connect to my 2.5gb modem for > 1gb connection. Using it as my security appliance with AD, PiHole and a Linux Network Tool VM. With a quad core xeon and 32gb ram it works great. The thing I love about this server is the front facing network ports so it looks great in my rack with my switches.
Great video. I have done about the same thing with Unraid running virtualized RouterOS with a 100Gbit virtual eth + 2 dedicated eth ports. Setting it up wasn't the easiest, but it's 'hanging' :D
I implemented an old Server 2003 VM on a server once at a client to route between multiple segments on the network. It was fun but I'm sure it had some latency issues. It worked for what we needed. The Server acted as the DHCP server for one of the segments as well. Nerd fun.
Oh yeah, I did this at my parents house and host their family photos on a ZFS mirror. pfSense in a ProxMox container. I did it with two NICs, router LAN and management interface on same bridge. And one of my rack mounted routers too, for local stuff.
I just stick to stuff I can do on OpenBSD, works great for me. OpenBSD has a VM hypervisor now too, so you can put the applications in there, and keep the routing outside.
Just done this myself, but using ESXi. The free version is pretty nice (just register for a code) and allows you to do PCIe passthrough in the GUI. Passed through an i340-T4 to pfSense and an LSI SAS card to TrueNAS; both seem happy so far. Remains to be seen whether I'll keep pfSense virtualised or not though, I do like having a dedicated box for it... but going from three physical boxes to one is also really nice :)
For anyone that wants to run a home lab to tinker: you will want dedicated hardware for the network stack, just because of the nature of playing around with this stuff; it becomes a headache when you mess something up on the one box that is running your connection to the internet. Do this at your own risk, but be prepared to segment out pfSense and other bits to their own boxes.
@@jabezhane Yes your router needs to be on all the time so that's 120w all the time as opposed to 12w. However if you were going to have your server on all the time then not having a router saves 12w.
I am researching pfsense and building a router that can handle 1gb fiber up/down. What would you consider to be a good core count or processor to shoot for? I thought about using a thin client/mini tower from eBay with a 4 port intel nic. Would love to see an updated video on something like this. Thanks!
What Home Mad Scientist doesn't want the Home Street Cred that comes with taking down the whole Home Network from the comfort of their own Home Laboratory? Your husband/wife/children will love you for it! Frankenstein monsters are the best monsters.
This is the way I went, but I did it with Proxmox. XCP-ng is supposed to be great (and has its own advantages vs. Proxmox), but inertia is real & I'll probably stick with Proxmox until a specific need pulls me away from it. I run OPNsense in a VM (using a passed-through Intel I350-T2), and a Debian container that runs my services (many of them under Docker). I think this setup is only for fairly advanced users who understand & accept the tradeoffs with it & are also willing to spend time setting it up. If you've got the will & know-how to take it on, it's a really satisfying project with a nice long-term payoff.
What are the XCP-ng advantages versus Proxmox? I would prefer Proxmox because of the more up-to-date kernel... is Proxmox missing any software features?
@@Egidiusdehammo I have very limited knowledge of XCP-ng, but one neat feature it has (that Proxmox doesn't) is SDN (Software Defined Networking). I'm sure there are other advantages of XCP-ng (and Proxmox!). Personally I continue to use Proxmox because it meets my needs and I like that it's Debian-based.
Great introduction to the topic! I will be following with interest, even though I'm aware of the risks and don't plan on virtualizing my home firewall on my lab VM infrastructure (the risk of breakage from me messing with it too much is very real). The answer to that, of course, would be to treat the VM host running your firewall as an appliance. Mess around with your other hypervisors, but leave that one the [bleep] alone. Case in point: this is exactly what "network function virtualization" appliances like the Juniper NFX or Cisco ENCS series are meant for, and they're great if that's what you need. You're just not going to be in there updating the BIOS or tweaking it on a regular basis, which is probably why they're more stable than our average home lab servers. :-D
I was waiting for this video ever since you foreshadowed it about 5 months back when talking about your 100GbE EDR NIC. I am not sure whether this is due to the EPYC platform you have, but I have actually had better performance virtualizing pfSense on Proxmox. I found the ability to select the exact CPU platform on Proxmox makes a huge difference. I can't get more than 10Gb/s on XCP-ng with an FDR ConnectX-3 or EDR ConnectX-4. It would be nice if you compared the SR-IOV vs the passthrough setup, so we can tell whether there are really performance issues one way or the other. RoCE will be fantastic if you can show that as well. You're going to have a hard time saturating that 100GbE NIC, so maybe RDMA-based NVMe-oF would be another nice scenario to see. As always, thank you for sharing your findings.
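On the SR-IOV side of that comparison: with a capable NIC, the host can carve out virtual functions (VFs) and hand one to the router VM instead of passing through the whole card. A hedged sketch of the Linux sysfs mechanics (the interface name `enp65s0f0` is an example; the NIC and BIOS must both support SR-IOV):

```shell
# How many VFs does this port support?
cat /sys/class/net/enp65s0f0/device/sriov_totalvfs

# Create 4 virtual functions on it
echo 4 > /sys/class/net/enp65s0f0/device/sriov_numvfs

# Each VF now appears as its own PCI device that can be
# passed to a VM like any other PCIe device
lspci | grep -i "virtual function"
```

The appeal over full passthrough is that the host and several VMs can share one physical port, each with near-native throughput.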
I used to be proud of the fact that I turned a Raspberry Pi into a little wifi router to get around the fact that my university dorm room only had wired internet and blocked routers, but not computers that were being used like a hotspot. Now I feel like a noob.
PFSense is awesome, virtualizing it is cool too. I tried to use it for advanced routing between-VMs when there's only one hardware NIC available and it seemed to work. Also, while pfsense is better than most consumer routers, by the time you have a homelab sometimes it is more beneficial to use enterprise-grade hardware systems ... software-only routers inherently lack ASICs or other acceleration tricks.
Got a Poweredge T710 cheap on craigslist and virtualized everything with Proxmox (including pfsense) for the last year without any problems. (pfsense, plex, portainer, openmedia, transmission, heimdall, homeassistant, truenas...). Just gotta have enough memory :)
Picked up an R610 for $100 from CL and am running ESXi 6.7 on it with passthrough for FreeNAS, and passing 2 NICs to pfSense. It's been working fine for a few months now.
Did this for a while with xcp-ng and OPNsense. It was fun doing it, and I didn't have any issues with performance. The only problem, however, is that patching xcp-ng (and the subsequent reboot) would bring down the internet for everyone. The wife acceptance factor on this fun/geeky setup is very low, so I ended up abandoning it. If there were a way to easily hot-swap router VMs between two xcp-ng servers to maintain internet connectivity, well, I'd still be using it today.
This kind of stuff is a lot of fun. I have proxmox running on an old SFF optiplex 9020 with a Haswell i5 and 16 GB RAM. There's an old HP N360T(?) dual gigabit NIC in one expansion slot and a 2TB Samsung 983DCT in the other. It's running VMs for pfSense for routing, Ubuntu Server 20.04 for docker with LanCache, and Windows Server 2022 Core for file shares. The SSD is passed through to Ubuntu for the LanCache with cockpit running on top for web management. Windows Server has admin center for web management. Overall it's been a really fun project and an excellent LAN party in a box. Next step is a pi hole container and 10G on the LAN side. Unfortunately, will have to choose between full speed on the SSD or 10G NICs if I stick with the 9020 and it's pcie layout.
Exactly, and especially in today's age of specialized ASICs, you don't want to go the virtual way anymore with firewalls, because there are too many threats to check and process for a general computing CPU to cope with. 10+ years ago, yes, having VMs for firewalls was good because we had abandoned things like deep inspection and whatnot; it was too costly CPU-wise. But things have changed in the last 4-5 years, and firewalls are now legit core-switch-level devices that don't only police north-south traffic, but also the east-west traffic that is becoming more and more of an issue in this security era. Two-tier architectures are the way to go now, with access switches able to provide 10GbE to endpoints, uplinks of 25/40/100 Gbps now common, and firewalls able to process north of 400Gbps, which can save organizations a lot of money in hardware and management.
Lol, I started going down the road this video is about last year. I've been doing it because I've been dissatisfied with the network solutions I had, and I'm trying to learn more about complicated network application structures at a closer-to-enterprise level. I'm not a sysadmin, I'm barely IT, but given how hard it's been to get anyone to look at my partial college education in computer engineering and my general IT certs, I've been hoping a homelab setup would be a good way to get my foot in the door.

BTW, I did try to run pfSense on a standalone box first, but considering what else I was buying, I cheaped out a little. The processor on it was terrible, and I ran into all kinds of issues, from DHCP requests timing out to eventually the box literally not booting or letting me in to run a factory reset. Maybe I'll have it do something else later, but I gave up and just decided to virtualize it on my existing hardware (an FX-8320 beats out about every Intel Atom anyway, even if I'm only allocating one core, lol).

I set up VMware's free ESXi and went with TrueNAS Core (VMware because I read about issues using KVM with TrueNAS Core), bought a couple of compatible 1Gbps NICs for pfSense, a 10Gbps NIC for the NAS, a multigig switch, and a couple of 2.5Gbps NICs for the 2 gaming desktops, and loaded my Steam games on the 4x10TB hard drives that are striped and mirrored with a 1TB SSD cache (not NVMe because the motherboard is old). My next projects are setting up Vaultwarden on the Raspberry Pi and setting up WireGuard again, but directly on pfSense, to let other devices connect to it. I thought about doing a reverse proxy, but I don't currently have enough users or applications I want to use away from home to want to start managing that kind of exposure; I will get there eventually.
I’ve been running OPNsense in Proxmox on an i7-6700K with passthrough and could never reach my port speed, capping at ~80% of full speed. I’ve since moved to an i5-7400T and am running it natively, without a VM; I now reach full speed, with a lot more services running on OPNsense... I recently received a Mikrotik to play with, and it opened up a completely new world. I'm thinking of dropping OPNsense and going RouterOS + maybe containers (it supports docker containers, like Pi-hole, although I found OPNsense's Unbound DNS to be significantly faster).
I've been rocking a "forbidden" setup like this with TrueNAS and PfSense on my home server for a while now. Started out on bare metal on a busted laptop (no screen, so "headless" lmao) then migrated it to a Dell Optiplex 9020 with i7 4770, 32GB of RAM, 10GB SFP+ NIC, and a LSI2308 SAS HBA. The HBA is passed through to TrueNAS, and the WAN port (the 1GB RJ45 on the mobo) is passed through to pfSense. I've since upgraded to a Ryzen setup for this rig, using a 5700G, 64GiB of 3200MT/s CL22 ECC memory. This gives me 1 more PCie x16 slot for more stuff in the future! Might add a low power GPU for NVENC and home media stuff.
And yeah, having everything centralized (router, VPN, NAS, NVR, web server, management utilities, home media and smart home stuff, etc.) is definitely putting all your eggs in one basket, and it is a pain to lose internet when it goes down for maintenance.
After the 'scandal' with pfsense vs opnsense, and the debacle of their wireguard implementation, I've switched to Sophos XG and have never been happier.
That last comment made me scream: "I love the concept, I don't love how fragile it is." I've been running a media server / backup storage / NAS / virtual desktop / video rendering server / proxy server (road-warrior style) for a few years. I upgraded my storage with an array of cheap disks, adding 12 TB to it. That worked for a few months, till one of the new drives lost a superblock. Sure, Proxmox still boots, but the virtual machines' virtual OS drives are toast. I think most of the Plex media is also toast (stripe through that disk), but I haven't had time to rebuild it yet. So no free wifi, no Plex, till I get some free time.
I'm doing the same approach on my HP MicroServer, Windows Server based with Hyper-V VMs: a virtualized Sophos UTM Home for routing & VLANs, a Fedora VM for Podman containers, and Windows Server Core for Windows AD. I'd go the Proxmox/XCP-ng route if I were rebuilding it. I dealt with the fragile part by taking advantage of the IPMI and the Windows GUI.
I love the concept on eco-friendliness alone, but it's too close for comfort for my noob butt. It feels like putting the gate to my property right next to the safe in my bedroom closet.
So I have a physical box for pfSense that acts as router/FW/DHCP/DNS. What I've done is set up a second pfSense box as a VM, configured with their HA implementation, but with only DHCP and DNS syncing. This way, should my main pfSense router die, I will lose internet, but everything else on my network won't grind to a halt because DNS and DHCP went away. It's a compromise, but it gives me what I'm looking for.
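A manual-failover compromise like this pairs well with a small watchdog script that tells you when the primary box has actually died. A hedged sketch (the gateway IP, probe count, and threshold are all assumptions, not anything from the comment):

```python
import subprocess

def needs_failover(recent_ok, min_success=1):
    """True when fewer than `min_success` of the recent probes succeeded."""
    return sum(recent_ok) < min_success

def probe_gateway(host="192.168.1.1", count=3, timeout_s=1):
    """Ping the primary gateway `count` times; returns a list of booleans.

    Uses the Linux ping flags (-c count, -W timeout); adjust per platform.
    """
    results = []
    for _ in range(count):
        r = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        results.append(r.returncode == 0)
    return results
```

Run from cron on any always-on box; when `needs_failover(probe_gateway())` comes back true, that's the cue to power up the standby VM host (or just send yourself an alert).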
I've run pfSense in a VM on top of Unraid (so, KVM) for years on a used Supermicro dual-Xeon (1366-era) server board (liquidated from an old Facebook server swap-out, best I can tell). Never had an issue. I also had a used Netgate passively cooled appliance set up in a complicated HA/CARP implementation for failover purposes, because everyone told me I was making a bad choice... it never engaged once. I eventually tore it down and left a basic Linksys router (it has an on/off switch, is why) sitting on top of the cabinet with a couple of labelled cables stating "plug me in here and switch me on if the internet dies." It's only ever been needed when I tested the checklist with the missus. I've yet to have a failure. If you know what you're getting into, VM router implementations are really great.
I do the same, though the power usage is bananas for me; I may replace it with an AMD TR 12-core desktop machine. Then again, I do have 60 HDDs that contribute to that power usage.
I'm curious why you chose XCP-ng vs Proxmox? Also looking forward to performance data on virtual vs passthrough NICs. Would you ever recommend virtualizing routers with CARP/HA between VMs on different physical boxes for a business with high uptime requirements? I just recently struggled with this decision and decided to just have two 1u servers each running OPNsense bare metal in HA configuration. I didn't trust the extra complexity of VMs when uptime is key but I'm curious to hear others thoughts.
I'm really looking forward to this, since I've been running the same concept on Proxmox for three months. Sadly I haven't had time to try clustering like Jeff Geerling does, but one of these days I might have the time. I also have one other problem: the speed is the full 1Gb one way but only 100Mb the other way, though that might be a configuration error on my side. My setup is Proxmox 7.x (I don't remember) on a 12-core Xeon with 128GB RAM, with PCI passthrough on the WAN side and virtual interfaces on the LAN side, all 10Gb.
This is good content, and I like the chill music barely noticeable in the background. I use MikroTik RouterOS x86-64 on bare metal. I also have Ubiquiti gear. At any rate, this dude (sorry, I don't know your name) is NOT a level 1 tech. He's more like level 3, as he knows a lot and has proven deep knowledge of ZFS and even filesystems in general. I like the humbleness of the channel name.
I did exactly this, but with 2x Hyper-V hosts at home for around 3 years, with pfSense running CARP between the two boxes, always pinned to separate hosts with a dedicated passthrough NIC for each. I of course had an old Ubiquiti EdgeRouter configured and ready to go if SHTF, but it never actually did. I only stopped because I simplified my (running 24/7) home server setup to a single host and replaced the pfSense VMs with a FortiGate to save power and heat output...
You don't need much. I was running pfSense in a VMware environment with several other servers and it barely moved any capacity. Next, I moved it to its own PC. It's an old i3-2100 and the CPU hangs at 3%. I have multiple WANs, a lot of advanced port forwarding over those WANs, port-based VPNs, incoming client-based VPN... you have to love pfSense.
I've done basically this, but my DNS handling is done by pihole on a separate raspberry Pi. No reason you can't virtualize it though. I just happened to have a raspberry Pi handy.
Let's GO! Running Proxmox with pfSense, Pi-hole, UniFi, and experimenting with XPenology (a rip of Synology's software), with future plans for Home Assistant and a PBX, all on a Dell R210 II.
I've been running and testing pfSense "on a stick" in a VM on both VMware and Proxmox for years now using vNICs, and both have had absolutely no issue serving my 10+ VLANs and gigabit uplink; in testing it was able to route 6 gigabits between VLANs on just a single Intel i3 core. I've never experienced any kind of delays or other weird stuff; even when the tiny i3 has been fully loaded on the other cores, it's still run way better than any consumer-level router I've ever tried. The only time I've experienced delays is when pfSense shares cores with other VMs that are hammering it; then jitter increases substantially (up to around 100ms, I believe), but after dedicating a core to pfSense those problems went away.
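The jitter increase described here is easy to quantify yourself from a series of ping RTTs. A minimal sketch; this is a simplified mean-absolute-difference metric (full RFC 3550 interarrival jitter uses an exponentially smoothed estimate), which is enough to compare a shared core against a dedicated one:

```python
def jitter_ms(rtts_ms):
    """Mean absolute difference between consecutive RTT samples, in ms.

    Simplified stand-in for RFC 3550-style interarrival jitter: good
    enough to see the effect of noisy-neighbour VMs on a shared core.
    """
    if len(rtts_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    return sum(diffs) / len(diffs)
```

Feed it RTTs scraped from `ping` output while loading the other cores, then again with a pinned core, and the difference the commenter describes should show up directly.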
I'm curious whether a processor with high single-thread performance would improve pfSense: routing a single stream at 10Gb+? Service reboots/updates going quicker? Does a large CPU cache make some operations (firewall? VPN? Suricata?) significantly more performant? How would Intel's heterogeneous CPU cores work/help/hurt?
For the DNS traffic, it would be nice! I quit using a router altogether a while back, but it looks like my internet is about to get a lot faster again... It's tempting, but I'm nowhere near the hardware you're committing to such a feat: MTBF, no PCIe passthrough, and a full spool of fiber for runs with electrical isolation... STARLINK: I'm not too keen on that dish going outside, because we've had two satellite dishes blown off our house! Here's hoping Starlink will get faster... I may just build one on the cheap to handle router functions, no more.
There's a BIG reason to virtualize your pfSense: being able to spin up a second instance in under a minute. I recently had a hardware failure; fortunately I had a cold-spare machine (called Phoenix). Powered it up and my network had its gateway back online in about a minute. Sure, you can dedicate a second machine to this, but having a generic cold-spare machine with all your critical VMs ready to go is golden.
@@AbhishekKumar-nt3in No, I have a whole second machine, with 3 network ports, with the same hardware as my primary machine, powered off. I've installed proxmox and loaded it with the two critical VMs I use on my network, pfSense and pihole. It's plugged into the network and my two WAN connections. If my main machine goes down, all I do is power up this machine. After about a minute of boot time (it's actually faster than my primary machine but uses more idle power) my gateway is back up, and I can work on whatever is wrong with my main gateway without impacting my network. I do power up this second machine (isolated so it doesn't interfere with my primary gateway) from time to time to ensure it's still functional, and to update the VM image if I've made major changes to my config (rare).
@@repatch43 Awesome. Thanks for sharing. I am planning to use my TP-Link OpenWrt flashed router as my backup router if my ProxMox or the pfSense VM goes down for some reason.
If you're going with the EPYC, why not make this not only the router but also the SWITCH by using multiple multi-port Ethernet cards? You have 4x10Gbps (even more) cards available to go that route. You could even run some at only 2.5Gbps to 5Gbps if they're going to connect to WiFi APs.
I ran pfSense through VMware years ago so I could use the same system to also run my web server & other projects. It worked great; the issue was when we finally got fiber, the virtual NIC just wasn't cutting it, and I was only getting about 600Mb/s through the VM. This was on a Ryzen 1700X at the time. Maybe it's better now with faster CPUs & improvements to VMware. Otherwise I'd say PCIe passthrough is a must if you have a gigabit or higher connection coming in & you want to attempt something like this.
I almost passed through a NIC to a pfsense vm in Unraid. I just couldn't get past the idea of my internet going down every time I wanted to play musical chairs with my iommu groups.
I use a E300-9D supermicro box for this, much smaller, 8 ethernet ports of which 4 are 10GBE out of the box, and support for a single slot PCIe card which is populated with a ASM2824 switch card to add 4 more M.2 drives for a total of 5, and a mini PCIe slot for a Wifi AP (or cellular backup if you want).
People are running VMs of pfsense on Intel N100s and similar mini-pc chips. I find that a lot more interesting than throwing money, compute power and electricity at the problem.
Heh, a pfSense VM is what I've been running since I had time to figure it out during covid. It also load-balances two internet connections, because one connection, with all the videoconferencing during covid, was getting to be an annoyance with multiple people simultaneously needing more reliability and sometimes bandwidth. I'm quite happy with it, but as you say, the main reason for it is that I want to economize... not so much on hardware, but on power usage. My server board is a dual low-power Xeon, and it has a pfSense VM, a Pi-hole VM, and a webserver VM. I agree with Jeff though, duplicating the functions on a second machine on the network would be better. I can easily fall back by changing some plugs, but having some failover would be better. So I'd agree that running a "one piece of hardware does absolutely everything" solution doesn't really make sense... unless you have two of them.
I basically just got done doing this. In case stuff breaks, I have my WRT54GL in a box on the shelf so I can limp by till I can get it fixed (very glad I had that when my 10-year-old board died). Now on AM4 with a 2200GE (QEMU on Ubuntu Server); planning to run RAID arrays to make it less fragile. It runs a lot better now than it did on the FM1 socket.
Happened upon this video after setting up pfSense (along with some other VMs) in Proxmox... and after borking it with a configuration change that took my network down, needing to walk over and plug a monitor and keyboard into my server to troubleshoot the pfSense VM. I did not know what I was getting into!
Oh wow, Xen lives on in XCP-ng... thank god! I've not really played with virtualization for a while... used to use Xen and then moved to VMware in the corporate world for years... Now I'm looking to do this as a small home project to offload most small compute tasks... I try to avoid Linux at ALL costs due to just plain weirdness I don't want to spend time troubleshooting, although I made a career out of managing Linux and doing systems programming... I'd rather spend my time doing the systems programming than deal with more Linux oddities.
NOOOOO!!! Don't Virtualize it! Unless you know the risks and challenges which were well covered in this video.
Would you say ITS FORBIDDEN 🙊🙈
Thanks to you and Wendell, I started using pfSense just over a month ago and really enjoy it. Unfortunately, I first attempted it by virtualizing, and boy was that a mistake. Exactly like Wendell said, I was economizing on my limited available hardware, but it was not worth it.
@@emeraldbonsai I think only if you then run it all on a single-port Raspberry Pi, with switch VLANs being required to make it work / be "secure"
@@ShadVonHass Yeah, I gave up and installed it directly. Much better use of my time. I started this 3 years ago.
I have over 240 virtual pfSense firewalls, and a bunch of them are pfSense+. That's on VMware 6.7U3 Enterprise Plus.
I did a virtual firewall at home on vmware, wasn’t a fan. That was on 6.7U3 Essentials.
I went to an SG3100, as I'm replacing my big server with a tiny (low-power) server. I'm putting my big server in my datacenter.
Aww, at the end I didn't see the clicky bits where the subscribe and video suggestions are.
Regarding the forbidden router, the fragility is my main concern with one machine to rule them all. It would be nice if you could have a second little box that maybe can't route at full capacity / with as much might, but is basically an online replica that could be switched over to for maintenance or if the main machine gets explodey.
ayyy Jeff you're here lesss goooo!!!
I love the "just pop the nic and this ssd in another box" option.
The soft NIC in XCP-ng with the X550 has been faster than expected, too. So I will show how to do proxy ARP/CARP failover with pfSense... sometime... in the not too distant future. The fast router is down? The turd router is on the job! Kind of thing.
I've gone the other way and have the "forbidden" router VM as a backup for my primary pfSense box; if that goes down, I'm back up with my VM in minutes.
That’s what node servers are for. 😉
I actually have the pfSense that I'm setting up in a 4U 4-node Supermicro box. I'm doing all my NIC routing through VLANs configured on XCP-ng's side and on my switches, so if I set pfSense up as a high-availability VM, it should just boot itself up on another node. I haven't tested that yet, as I'm still building out the system and am a bit heat-constrained for running all my nodes, but it should be doable.
I have a main VM router like this, and a separate simple one (actually a CM4 router you previously covered) handling a dedicated management network, using separate MoCA wiring and cheap unmanaged switches, so that I can always get access to the BMC interface of the main machine and see what's wrong, even in the case where the main network is down.
For those who think this is a bad idea "just because": be aware that a lot of high-end commercial firewalls/routers are offered as VMs run on a cluster. Palo Alto, for example, offers virtual instances; Cisco offers virtual ASAs; etc.
The flip side to the bad juju of having your router virtualized is that it does make some things a lot more painless: OS upgrades, configuration changes, etc. are trivial to back out in their entirety by rolling back to a previous snapshot.
And if you're willing to take the minor performance hit of using virtual NICs instead of hardware passthrough (assuming you've only got gigabit internet or less, which shouldn't be too much of a restriction for most), your VM becomes completely hardware independent. Big EPYC box blew up? Spin up the hypervisor on whatever spare machine you have and restore your VM files (or even just plug in the disk).
I use virtualized pfSense a LOT for internal VM segregation, homelab workstation use (I have a lot of virtual NICs and use virtual pfSense to route between them for disposable test networks), etc. It runs on any hypervisor that FreeBSD runs on, which pretty much includes Hyper-V, VMware Workstation/Fusion, VirtualBox, etc., so if you have a high-power workstation you can experiment with fully isolated virtual networks (test Active Directory stuff, etc.) without necessarily needing a server for it.
As Wendell says, just make sure you're aware of the risks (obviously, keep your pfSense instance and your hypervisor up to date to ensure you're protected from VM escape).
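The snapshot-before-upgrade workflow several commenters describe can be scripted. A hedged sketch around XCP-ng's `xe` CLI; the uuid arguments are placeholders, the default label is my own, and the exact parameter names should be double-checked against your XCP-ng version before relying on this:

```python
import subprocess

def snapshot_cmd(vm_uuid, label="pre-upgrade"):
    # `xe vm-snapshot` prints the new snapshot's uuid on stdout
    return ["xe", "vm-snapshot", f"uuid={vm_uuid}", f"new-name-label={label}"]

def revert_cmd(snapshot_uuid):
    # `xe snapshot-revert` rolls the VM back to the given snapshot
    return ["xe", "snapshot-revert", f"snapshot-uuid={snapshot_uuid}"]

def run(cmd, dry_run=True):
    """Print-only by default, so the sketch is safe to try without a host."""
    if dry_run:
        return " ".join(cmd)
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()
```

Take the snapshot, do the pfSense upgrade, and if it breaks, revert; that's the "back online within 60 seconds" path.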
The one KEY benefit I DO see with the idea of virtualising the router / firewall is: Checkpoint / Snapshot prior to an upgrade / update, and if it breaks, restore back to that snapshot, get back online within 60 seconds! Also, a great way in a lab to check out different appliances / options. In the real world, yes, I have come across complete virtual environments like this and they have been rock solid for years! Really comes down to the initial planning phase to ensure all contingencies are planned for while maintaining both security integrity and uptime... Personally, I like the blinky lights on my rack mounted 1u appliance box :D
Not to seem like a ZFS shill, and I'm sure you can achieve similar things with other tech too, but I basically have achieved this on bare metal too, if it sounds appealing to anyone.
All my Linux machines are running OpenZFS with ZFS encryption and other GRUB-unsupported-features. My whole /boot directory (kernel, initramfs, etc) is stored within the encrypted ZFS pool. And then I use ZFSBootMenu to decrypt, mount, and then boot the kernel.
The cool thing about all of that--besides being able to use bleeding-edge ZFS features--is that, since the entire OS is on ZFS, I can snapshot before updates or before making stupid configuration changes. If anything goes wrong and I can no longer boot, I can rollback the entire OS to the previous state from the boot menu with a single command and be up and running again immediately.
@@omarassadi2455 That’s pretty cool!!
Been running pfSense in a Hyper-V setup for my home for the past 5 years; not a single issue on the VM side. I've had my own networking issues (routing, etc.), but that's all. I also set up another pfSense VM on my remote server to create an IPSec tunnel between them.
Highly recommend this type of setup. I run my pfSense VM on an old Dell R810.
I've been running pfsense as a VM on a Linux host (qemu/kvm) for a while now with a couple of nic's passed through to it and it works just fine. quick and easy to set up.
forbidden router is it running ruin?
I've had pfSense as a VM on Unraid for a year now and it has been great. My Unraid box is a dual Xeon with 32 cores. I agree that having the router in my main server is not optimal, and my next project will be an economical build for a dedicated pfSense router box. Great content, guys. Thanks.
I've basically had a similar setup for about 8 months now: Proxmox host, dual-Gb NIC (one for LAN, one for WAN) via PCI passthrough to pfSense, Pi-hole DNS, gigabit internet, and a few other VMs, on an old Intel i7-6700/32GB DDR4. Works a treat.
I did this exact thing over a year ago. It was easy and it's worked out very well. Highly recommend running pfSense in a KVM virtual machine.
Did you keep it that way?
@@MichaelSmith-fg8xh It's a Core i3-9100F with 16GB of ram running Debian 11. It's using Qemu+KVM+libvirt to run VMs, one of which is pfSense. I use virtual NICs rather than pass-through for some flexibility and haven't had any issues running 1Gbit internet and 2.5Gbit intranet.
Exactly the way my setup is going. With the summer heat wave on the horizon and electricity bill nearly doubling I am seriously considering rebuilding the homelab - having one forbidden router/VM host/ docker host/wifi AP which runs all the time (and just barely sips 30 watts) and then having another bulk data/media storage (refurbished rack server with 12 drives) that powers on only when needed.
A lot of folks are in for a shock soon. That old dual-core Xeon server with 40TB of storage and all the other junk is going to cripple them. I'll stick with a 5W router and a 20W NAS box, thanks.
@@jabezhane Depends on where you are I guess, my AC kills me over the summer but the rest of the year my old v2 Xeon NAS adds a negligible amount to my power bill.
Might want to consider that, AFAIK, all those power cycles and spin ups/downs will worsen the life expectancy of your hardware quite badly.
Approximately a year ago I got an HP DL360 with 2x X5650 to run Unraid on. After one month I took the server down, used my laptop for the Unraid server, and used the DL360 as a PC (since it was better than my 2012 laptop), because of the 150W idle usage and close to 250W while doing very little.
November last year I finally built a desktop with a Ryzen 5900x and old 1050ti I was using and finally power down the server.
In December I got an old PC from a friend for 80€ with a 4770 and used it to run Unraid 24/7.
This month my parents received the electricity bill, and over the last 12 months we had to pay an extra 480€ on top of the 66€ we already paid each month. This also includes the 70W pump I have filtering the water 24/7 for the small turtle lake we have, plus 3x12W CCTV cameras.
I finally decided to install a Shelly 1PM to monitor the power consumption of the electrical outlets in my homelab/office, including the 2 UPSes. With the server at roughly 20% usage (because of the CCTV motion detection and person detection), I'm using around 120W with only the server on; with the gaming PC idle it's around 250W total, and when playing it gets close to 540W total. I'm expecting the bill to increase even more because of rising electricity prices here in Portugal; the average price per kWh I'm paying is around 0.154€.
I guess it's time to invest in a couple solar panels to offset the daily usage of not only the homelab but also washing machines and so on.
Sorry for the rant.
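The figures in this thread are easy to sanity-check. A small helper using the commenter's 0.154 €/kWh rate; the 24/7 always-on assumption is mine, and real duty cycles will land lower:

```python
def annual_cost_eur(watts, eur_per_kwh=0.154, hours_per_year=24 * 365):
    """Annual electricity cost for a device drawing `watts` continuously."""
    kwh = watts / 1000 * hours_per_year
    return kwh * eur_per_kwh

# The ~120 W server-only baseline works out to roughly 162 EUR/year at
# that rate; the ~540 W gaming total would be ~728 EUR/year if (hypothetically)
# it ran 24/7.
baseline = annual_cost_eur(120)
```

This also makes the earlier comparison concrete: a 5W router plus 20W NAS (25W total) comes to about 34 €/year at the same rate, versus the old server's triple digits.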
@@almaefogo No, it's a real issue. I even downclocked my i7-5960X rig from 4GHz to 2.5GHz a few months ago as I saw this coming.
Have a solid recovery plan if you attempt this.
But you shouldn't be afraid to experiment provided you have a solid recovery plan.
I'm running pfSense as an edge firewall/router on Hyper-V using an old FX-8350 system I had lying around. Works amazingly well! I even have enough remaining system resources to run my UniFi host for three APs!
I've been running pfSense as a VM for close to 2 years now and it works amazingly well. The only difference is that I use Hyper-V instead of Proxmox, ESXi, or XCP-ng; I personally find Hyper-V easier to use. I've got pfBlocker, OpenVPN, Squid proxy, HAProxy, and multiple VLANs running off the pfSense. I haven't had to touch it in over a year now; it just simply works. I also have a simple Windows VM and some Docker containers running on that small virtualization box. The box is a Beelink mini PC, similar to an Intel NUC, just a bit cheaper. I have a separate, much more powerful VM host machine for all other work / testing.
Been running mine in Proxmox for about 2 years now. It's mind-blowing what you can do for free. I bought an Intel motherboard that had 2 ports on it for my i7-3770, with 32GB RAM and a SAS card, and it only cost me £250 (the hard drives were more). Once you're using LXC containers, RAM is the only limit on what I can run, as only pfSense and OpenMediaVault run in VMs.
I do have to double-NAT, but I've had no issues. You just pass ports through from the ISP router to the pfSense. pfSense has the 2 onboard NIC ports, and the Proxmox box has an Intel 1000 4-port card in SMB multichannel.
I’ve had pfsense running virtually in VMWare for various applications for years :)
The only reason my homelab isn’t setup that way is because my wife gets mad at me when the internet goes off
Good opportunity to sell redundant and fail over equipment. It's only to make the wife happy!
The wife is your most important client or network user. I was using a virtualized ClearOS 7 as the router because our Virgin Media 500-meg connection seemed to keep crashing the router. The thing is, although ClearOS gave faster throughput, it would also hang after a week. We ended up switching to a 60-meg VDSL connection and it's been solid.
I've been running pfSense in VMs for over a decade. The only issue I've run into was the old minimum boot volume becoming too small around 2.3. The main trick is static IPs on the hypervisors; otherwise cold starts might be troublesome.
I'm also a full time IT manager, so your mileage will vary.
Also, RIP VMware if the purchase by Broadcom goes through.
Likewise, been running it virtual for years and years. As long as you set it all up correctly, you're golden.
I have been virtualizing pfSense for years now. I have used it with Xen, Hyper-V, VMware, & VirtualBox.
Thank you for doing these. As someone who is about to dive into this without any experience... your stuff provides a calm, reasoned, information-based approach to dealing with increasingly complex problems barely understood by anyone nowadays.
Running a pfSense router for several months and very happy with how it works with both my internal networks. Leftover hardware from an upgrade a few years back; discrete hardware just works well for me, and the slight extra cost for electricity is worth the simplicity and makes the occasional PD easy and quick. After all, my time is worth something.
I've been running pfSense as a Proxmox VM for years on an old Mac Mini cluster. Things can get tricky after a power outage or a shutdown and restart, especially with a separate management VLAN. I use virtual NICs instead of PCI passthrough; that way, in a cluster, you can migrate machines around without networking issues. I also use an EdgeRouter X as the upstream DFGW for the pfSense WAN IP, so the ISP DHCP address stays the same, and it's also easy to split off a separate lab network from the edge router without messing up the "production" network. You can also have a DMZ network on a 2nd box, or logically separated, with a pfSense VM running as a FW and a reverse proxy to host websites/services with SSL certs under one DHCP address.
Running ESXi on a used OptiPlex with a host of VMs (including pfSense) for over a year now with no issues. The reduced noise, clutter and power consumption is worth the risk for me.
In Romania they just introduced 10Gbit internet for $10. The 2.5Gb plan is $9, the 1Gb plan is $8. Full Duplex, international traffic included. Bought an asus xg-c100c, waiting 1-2 weeks for internet installation. If I don't hit 7 Gbps, I will buy intel x550-t1.
I ran my Astaro / Sophos NextGen Firewalls as VMs for years. The ONLY reason I went to a physical appliance was because I wanted the aesthetics of the 1u unit in my 1/4 Wallmount Rack. I still have a Sophos XG running virtualized as my HA node.
Mate, that intro is simply superior, cracks me up every time. Just wanted to let you know I love your style, slowly going through all this juicy content you've created over the years.
re: virtualisation, I think as long as you understand it relatively well and have solid foundational networking knowledge it's probably the way to go. Reasonable redundancy levels should always be catered for regardless of your deployment choice - especially when failure triggers multiple voices echoing "is the Internet down?" throughout your house ;)
DR & LCP (Lab Continuity Planning :D) is kinda fun anyway, right? Right?? 😅
I've got a Netgate RCC-VE 2440 with an Atom C2358 that's been faithfully routing my home internet (which is now gigabit fiber) for about seven years. It's decked out with a 4G LTE modem, WiFi card, and mSATA storage, and it was originally running pfSense but I switched to Linux for reasons. It's starting to get a little on in years so I'm excited to explore other options... thank you for this video!
Watched from a virtualized pfSense router, on ESXi. Honestly it worked so well I rolled it out at my parents' house, using my old Dell R610 host running Proxmox.
Still only 5 min into the video, but couldn't help but comment once I heard "Virtualizing pfSense!". I built a beefy VM host to serve as my home lab and virtualize all the services I use to learn on, and as a result pfSense is one of them. My VM host server has dual NICs, and I pass one of them directly to the pfSense VM using PCI passthrough so I didn't have to decide which libvirt networking configuration would have the fewest drawbacks. The VM host is a Ryzen 9 5950, 128GB RAM (non-ECC, more expensive), 20TB of disk space, and a couple of old AMD W7000 FirePros I had lying around. It currently serves pfSense, Samba file sharing, a camera security system, Plex, and Nextcloud. Love your vids and am in awe of your team's expertise!!!
If you can make it wife-friendly when it goes t&ts up, then I might head back to the dark side. I remember receiving several 'dark' calls from my wife when the internet wasn't running, back when I had pfSense virtualized years ago. Nothing is better than the simple instruction of 'switch it off and back on again', and that doesn't tend to end happily when you have a box full of virtualized machines :)
This is so fun! As much as 10 years ago I started to run m0n0wall in a VM, as it provided me more options than my default cable router. That was on ESXi 3.5, and later on I made the switch to pfSense under VMware. Nowadays I still run pfSense, but under Hyper-V, and my system currently is an i7-2600 with 16GB. It has 3 NICs: one 1Gb/s to the internet on fiber (yay!), one to the appliance network at 1Gb/s, and one at 2.5Gb/s to my machines. The day 10Gb/s switches don't cost hundreds of euros, I'll upgrade. Probably the i7-2600 too. In other VMs I have my webserver, a SAN, and a NAS.
SR-IOV is enabled for the most important connection; Internet.
Same here
I got this... Make sure you have a router for the management network as well, because you may want to update XCP-ng. You do need that when stuff hits the fan, which will happen! Mine is NAS, TV tuner, router, VoIP gateway, Home Assistant server, wifi/switch controller, and of course all the software a house would need.
Nice vid! Was already trying to build a "Forbidden Router" and was looking at XCP-NG so your timing is impeccable. Can't wait to see your follow up vids!
Dude! Stop reading my mind! I was just thinking about this LAST NIGHT. You are truly awesome!
IMO, for home and small business, automatic failover high-availability is a cure worse than the disease. Bespoke software configurations and distributed systems are for the big boys who lose $10,000/minute when the network is down, and can afford a team of developers to maintain it.
Better to have a dedicated bare-metal router that's cheap enough that you can buy two. The manual failover procedure is simple enough that a five year old can do it following written instructions, as long as the router and the spare aren't too high to reach. If you do software upgrades A/B style by swapping to the spare, that solves the, "if you aren't testing it, it doesn't work," problem and protects you from regression bugs.
OEM business desktops have low idle power because of energy star regulations, and Kaby Lake and earlier are quite cheap used. In current market conditions, cheaper than a RasPi 4, even.
Ran into this myself: I'm using OPNSense for a router, but ended up using an old Pi B for my Unifi controller, and there are other bits I'd like to run on my router box.
ZenArmor is a must if your hardware is able.
Finally someone tackles the challenge of setting up Pi-hole/Unbound and a Steam cache on one machine! I have waited forever to find a good tutorial on that! Thanks Wendell. Looking forward to it :)
I didn't have a good experience with the passthrough and XCP-NG. I had a GPU and a PCI USB hub. These two things had to be configured in a very specific order for them to work. And even then, the mouse stopped working whenever I played a video and only then. I actually tried with two different PCI USB hubs and I reinstalled both the hypervisor and the windows VM multiple times.
However, everything went really smoothly with VMware. Just sharing my experience with XCP-ng.
You keep saying I shouldn't do this, and I keep thinking more and more that I want to!
Same here
Thanks Wendell for covering this topic, the homelab people in the comments may have the experience and hardware laying around to experiment with but I expect you'll do your best to point out the caveats and pitfalls for someone just walking into this type of project of a single networking appliance versus the herd of animals.
You broke me, Mr. Wilson... one month, many dozens of installs. Got it all working, but I seem to have reached the limit of my Asus TUF Gaming X570, as it falls apart when finalized... I guess I cannot have every PCIe and NVMe slot filled? Found a 4-port 1Gb card, but PCI-X is not so common these days. Will drink until I pass out and worry about it tomorrow. Much fun, man, all the best. Thanks for the challenge.
Only partway through the video but it's wild how relevant these videos are to me. Purchased a used old 4 port protectli vault on ebay but that got me thinking about whether I could just slap opnsense in a VM on one of my existing servers, bridge the LAN side, and call it a day. Thanks for sharing your expertise!
I expect Craft Computing is drinking this up like he's been locked in a McMenamin's beer cooler.
I used to give clients a choice of a virtual pfSense box or a cheap router. Using VMware back in 2013, we had about 14 virtual pfSense boxes running perfectly.
Ran it like that for about 5 years with zero problems before we replaced it all with cheaper routers.
Honestly, the virtual pfSense was the better solution.
I had actually just set this up like two weeks ago on my Mac Pro 4,1. Ubuntu LTS as the hypervisor, with pfSense running under KVM, PCIe passthrough of the NICs. Docker. The whole nine. Seems like great minds think alike. Very interested in that newer hypervisor based on Xen, though.
I got a 1U Supermicro SYS-5019D-4C-FN8TP with quad 10Gb ports and a handful of 1Gb ports. Used Proxmox to virtualize pfSense and connect to my 2.5Gb modem for a >1Gb connection. Using it as my security appliance with AD, PiHole, and a Linux network tools VM. With a quad-core Xeon and 32GB RAM it works great. The thing I love about this server is the front-facing network ports, so it looks great in my rack with my switches.
Great video. I have done about the same thing with unRAID running virtualized RouterOS with a 100Gbit virtual eth plus 2 dedicated eth ports. Setting it up wasn't the easiest, but it's 'hanging' :D
I implemented an old Server 2003 VM on a server once at a client to route between multiple segments on the network. It was fun but I'm sure it had some latency issues. It worked for what we needed. The Server acted as the DHCP server for one of the segments as well. Nerd fun.
Oh yeah, I did this at my parents' house and host their family photos on a ZFS mirror. pfSense in a Proxmox VM. I did it with two NICs, with the router LAN and management interface on the same bridge.
And one of my rack mounted routers too, for local stuff.
I just stick to stuff I can do on OpenBSD, works great for me. OpenBSD has a VM hypervisor now too, so you can put the applications in there, and keep the routing outside.
Just done this myself, but using ESXi. The free version is pretty nice (just register for a code) and allows you to do PCIe passthrough in the GUI. Passed through an i340-T4 to pfSense and an LSI SAS card to TrueNAS; both seem happy so far. Remains to be seen whether I'll keep pfSense virtualised or not though, I do like having a dedicated box for it... but going from three physical boxes to one is also really nice :)
For anyone that wants to run a home lab to tinker: you will want dedicated hardware for the network stack, because given the nature of playing around with this stuff, it becomes a headache when you mess something up on the one box that is running your connection to the internet.
Do this at your own risk, but be prepared to segment out pfSense and other bits to their own boxes.
Yes this is too risky, putting your Internet connection through your home lab. A recipe for a grumpy family.
100%. I've done it and moved to a different architecture now. Any server downtime becomes internet downtime, and that sucks.
That and factor in the energy cost over the next 4 years at least.
@@jabezhane Yes your router needs to be on all the time so that's 120w all the time as opposed to 12w. However if you were going to have your server on all the time then not having a router saves 12w.
@@wayland7150 Yeah, first rule of data club is not to get too much data in the first place. I'll stick with a 25W NAS, thanks. Lol
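The energy-cost point in this thread is easy to put rough numbers on. A minimal sketch, using the 120W and 12W figures quoted above and an assumed $0.15/kWh electricity price (your rate will vary):

```python
# Rough 4-year running-cost comparison for an always-on router/server.
# Wattages come from the thread; the $/kWh rate is an assumption.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15  # assumed electricity price, USD


def four_year_cost(watts: float, years: int = 4) -> float:
    """Cost of running a device continuously at `watts` for `years` years."""
    kwh = watts / 1000 * HOURS_PER_YEAR * years
    return kwh * PRICE_PER_KWH


server_cost = four_year_cost(120)  # always-on virtualization host
router_cost = four_year_cost(12)   # small dedicated router

print(f"120W server over 4 years: ${server_cost:.0f}")  # ~$631
print(f"12W router over 4 years:  ${router_cost:.0f}")  # ~$63
```

At those assumed rates, the difference is a few hundred dollars over the life of the hardware, which is why the "was the server going to be on anyway?" question matters so much.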
I am researching pfSense and building a router that can handle 1Gb fiber up/down. What would you consider a good core count or processor to shoot for? I thought about using a thin client/mini tower from eBay with a 4-port Intel NIC. Would love to see an updated video on something like this. Thanks!
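One rough way to reason about the sizing question above: routing throughput is bounded more by packets per second than by raw bits. A back-of-envelope sketch (frame sizes here are generic illustrative assumptions, not pfSense benchmarks):

```python
# Back-of-envelope packet rate needed to saturate a 1 Gbps link.
# All figures are illustrative assumptions, not measured pfSense numbers.
LINK_BPS = 1_000_000_000  # 1 Gbps fiber


def packets_per_second(link_bps: int, frame_bytes: int) -> float:
    """Packets/s required to fill the link at a given frame size."""
    return link_bps / (frame_bytes * 8)


full_mtu = packets_per_second(LINK_BPS, 1500)  # bulk transfers
small = packets_per_second(LINK_BPS, 64)       # worst case: tiny packets

print(f"1500-byte frames: {full_mtu:,.0f} pps")  # ~83,333 pps
print(f"64-byte frames:   {small:,.0f} pps")     # ~1,953,125 pps
```

The bulk-transfer case is easy for nearly any modern quad-core; the small-packet worst case, plus optional IDS/IPS packages, is what usually dictates the CPU choice rather than raw gigabit routing.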
What Home Mad Scientist doesn't want the Home Street Cred that comes with taking down the whole Home Network from the comfort of their own Home Laboratory? Your husband/wife/children will love you for it! Frankenstein monsters are the best monsters.
This is the way I went, but I did it with Proxmox. XCP-ng is supposed to be great (and has its own advantages vs. Proxmox), but inertia is real & I'll probably stick with Proxmox until a specific need pulls me away from it. I run OPNsense in a VM (using a passed-through Intel I350-T2), and a Debian container that runs my services (many of them under Docker). I think this setup is only for fairly advanced users who understand & accept the tradeoffs with it & are also willing to spend time setting it up. If you've got the will & know-how to take it on, it's a really satisfying project with a nice long-term payoff.
What are the XCP-ng advantages versus Proxmox? I would prefer Proxmox because of the more up-to-date kernel... does Proxmox have any software feature missing?
@@Egidiusdehammo I have very limited knowledge of XCP-ng, but one neat feature it has (that Proxmox doesn't) is SDN (Software Defined Networking). I'm sure there are other advantages of XCP-ng (and Proxmox!). Personally I continue to use Proxmox because it meets my needs and I like that it's Debian-based.
Great introduction to the topic! I will be following with interest, even though I'm aware of the risks and don't plan on virtualizing my home firewall on my lab VM infrastructure (the risk of breakage from me messing with it too much is very real).
The answer to that, of course, would be to treat the VM host running your firewall as an appliance. Mess around with your other hypervisors, but leave that one the [bleep] alone. Case in point: this is exactly what "network function virtualization" appliances like the Juniper NFX or Cisco ENCS series are meant for, and they're great if that's what you need. You're just not going to be in there updating the BIOS or tweaking it on a regular basis, which is probably why they're more stable than our average home lab servers. :-D
I have been waiting for this video ever since you foreshadowed it about 5 months back when talking about your 100GbE EDR NIC. I am not sure whether this is due to the EPYC platform you have, but I have actually had better performance virtualizing pfSense on Proxmox. I found the ability to select the exact CPU platform on Proxmox to make a huge difference. I can't get more than 10Gb/s on XCP-ng with an FDR ConnectX-3 or EDR ConnectX-4. It would be nice if you compared the SR-IOV vs passthrough setups, so we can tell whether there are really performance issues one way or another. RoCE would be fantastic if you can show that as well. You're going to have a hard time saturating that 100GbE NIC, so maybe RDMA-based NVMe-oF would be another nice scenario to see. As always, thank you for sharing your findings.
I used to be proud of the fact that I turned a Raspberry Pi into a little wifi router to get around the fact that my university dorm room only had wired internet and blocked routers, but not computers that were being used like a hotspot. Now I feel like a noob.
pfSense is awesome, and virtualizing it is cool too. I tried to use it for advanced routing between VMs when there's only one hardware NIC available, and it seemed to work.
Also, while pfSense is better than most consumer routers, by the time you have a homelab it is sometimes more beneficial to use enterprise-grade hardware... software-only routers inherently lack ASICs and other acceleration tricks.
Got a PowerEdge T710 cheap on Craigslist and have virtualized everything with Proxmox (including pfSense) for the last year without any problems (pfSense, Plex, Portainer, openmedia, Transmission, Heimdall, Home Assistant, TrueNAS...). Just gotta have enough memory :)
Picked up an R610 for $100 from CL and am running ESXi 6.7 on it, with passthrough for FreeNAS and two NICs passed to pfSense.
It's been working fine for a few months now.
Did this for a while with XCP-ng and OPNsense. It was fun, and I didn't have any issues with performance. The only problem, however, is that patching XCP-ng (and the subsequent reboot) would bring down the internet for everyone. The wife acceptance factor on this fun/geeky setup is very low, so I ended up abandoning it. If there were a way to easily hot-swap router VMs between two XCP-ng servers to maintain internet connectivity, well, I'd still be using it today.
Looking forward to the DNS video. I currently have a pi hole and want to add a Steam Cache machine to my network as well.
This kind of stuff is a lot of fun. I have Proxmox running on an old SFF Optiplex 9020 with a Haswell i5 and 16GB RAM. There's an old HP N360T(?) dual gigabit NIC in one expansion slot and a 2TB Samsung 983DCT in the other. It's running VMs for pfSense for routing, Ubuntu Server 20.04 for Docker with LanCache, and Windows Server 2022 Core for file shares. The SSD is passed through to Ubuntu for the LanCache, with Cockpit running on top for web management. Windows Server has Admin Center for web management. Overall it's been a really fun project and an excellent LAN party in a box. Next step is a pihole container and 10G on the LAN side. Unfortunately, I'll have to choose between full speed on the SSD or 10G NICs if I stick with the 9020 and its PCIe layout.
Exactly, and especially in today's age of specialized ASICs, you don't want to go the virtual way anymore with firewalls, because there are too many threats to check and process for a general-purpose CPU to cope with. 10+ years ago, yes, having VMs for firewalls was good, because we had abandoned things like deep inspection and whatnot; it was too costly CPU-wise. But things have changed in the last 4-5 years, and firewalls are now legit core-switch-level devices that police not only north-south traffic but also the east-west traffic that is becoming more and more of an issue in this security era. Two-tier architectures are the way to go now, with access switches able to provide 10GE to endpoints, uplinks of 25/40/100 Gbps now common, and firewalls able to process north of 400Gbps that can save organizations a lot of money in hardware and management.
Lol, I started going down the road this video is about last year. I've been doing it because I've been dissatisfied with the network solutions I had, and I'm trying to learn more about complicated network application structures on a closer-to-enterprise level. I'm not a sysadmin, I'm barely IT, but given how hard it's been to get anyone to look at my partial college education in computer engineering and my general IT certs, I've been hoping a homelab setup would be a good way to get my foot in the door.
BTW, I did try to run pfSense on a standalone box first, but considering what else I was buying, I cheaped out a little. The processor on it was terrible, and I ran into all kinds of issues, from DHCP requests timing out to the box eventually literally not booting or letting me in to run a factory reset. Maybe I'll have it do something else later, but I gave up and just decided to virtualize it on my existing hardware (an FX-8320 beats out about every Intel Atom anyway, even if I'm only allocating one core lol).
Okay, weird, my first edit deleted itself when I added the second edit... But yeah, I set up VMware's free ESXi and went with TrueNAS Core (did VMware because I read about issues using KVM with TrueNAS Core), bought a couple of compatible 1Gbps NICs for pfSense, a 10Gbps NIC for the NAS, a multigig switch, and a couple of 2.5Gbps NICs for the 2 gaming desktops, and loaded my Steam games on the 4x10TB hard drives that are striped and mirrored with a 1TB SSD cache (not NVMe, because the motherboard is old). My next projects are setting up Vaultwarden on the Raspberry Pi and setting WireGuard up again, but directly on pfSense, to get other devices to connect to it (thought about doing a reverse proxy, but I don't currently have enough users or applications I want to use away from home to start managing that kind of exposure; I will get there eventually).
I’ve been running OPNsense in Proxmox on i7 6700K, with passthrough and could never reach my port speed, capping at ~80% of the full speed.
I’ve moved since to i5-7400T and running it natively, without a VM - now reaching the full speed, with a lot more services running on OPNsense….
I recently received a Mikrotik to play with, and it opened up a completely new world... Thinking of dropping OPNsense and going RouterOS + maybe containers (it supports Docker containers, like Pi-hole, although I found OPNsense's Unbound DNS to be significantly faster).
I've been rocking a "forbidden" setup like this with TrueNAS and pfSense on my home server for a while now. Started out on bare metal on a busted laptop (no screen, so "headless" lmao), then migrated it to a Dell Optiplex 9020 with an i7 4770, 32GB of RAM, a 10Gb SFP+ NIC, and an LSI 2308 SAS HBA. The HBA is passed through to TrueNAS, and the WAN port (the 1Gb RJ45 on the mobo) is passed through to pfSense.
I've since upgraded this rig to a Ryzen setup, using a 5700G and 64GiB of 3200MT/s CL22 ECC memory. This gives me one more PCIe x16 slot for more stuff in the future! Might add a low-power GPU for NVENC and home media stuff.
And yeah, having everything centralized (router, VPN, NAS, NVR, web server, management utilities, home media and smart home stuff, etc.) is definitely putting all your eggs in one basket, and it is a pain to lose internet when it goes down for maintenance.
After the 'scandal' with pfSense vs OPNsense, and the debacle of their WireGuard implementation, I've switched to Sophos XG and have never been happier.
That last comment made me scream. "I love the concept, I don't love how fragile it is."
I have been running a media server / backup storage / nas / virtual desktop / video rendering server / proxy server (road warrior style) for a few years.
I upgraded my storage with an array of cheap disks, adding 12TB to it. That worked for a few months, till one of the new drives lost a superblock.
Sure, Proxmox still boots, but the virtual machines' virtual OS drives are toast. I think most of the Plex media is also toast (the stripe went through that disk), but I haven't had time to rebuild it yet. So no free wifi, no Plex, till I get some free time.
I am doing the same approach on my HP MicroServer, Windows Server based with Hyper-V VMs:
- virtualized Sophos UTM Home for routing & VLAN,
- a Fedora VM for podman containers,
- Windows server core for Windows AD.
Would go the Proxmox/XCP-ng route if I were rebuilding it.
I dealt with the fragile part by taking advantage of the IPMI and the Windows GUI.
I love the concept on eco-friendliness alone, but it's too close for comfort for my noob butt. It feels like putting the gate to my property right next to the safe in my bedroom closet.
I have pfSense running on Proxmox with PCIe passthrough for a dedicated NIC. It has been running great for over a year in this config.
So I have a physical box for pfSense that acts as router/FW/DHCP/DNS. What I've done is set up a second pfSense box as a VM, configured with their HA implementation, but with only DHCP and DNS syncing. This way, should my main pfSense router die, I will lose internet, but everything else on my network won't grind to a halt because DNS and DHCP went away. It's a compromise, but it gives me what I'm looking for.
I've run pfsense in a vm on top of unraid (so, KVM) for years on a used supermicro dual xeon(1366 era) server board (liquidated from an old facebook server swap out best I can tell). Never had an issue.
I also had a used Netgate passively cooled appliance set up in a complicated HA/CARP implementation for failover purposes, because everyone told me I was making a bad choice... it never engaged once. I eventually tore it down and left a basic Linksys router (it has an on/off switch, which is why) sitting on top of the cabinet with a couple of labelled cables stating "plug me in here and switch me on if the internet dies". It's only ever been needed when I tested the checklist with the missus. I've yet to have a failure.
If you know what you are getting into - vm router implementations are really great.
I do the same, though the power usage is bananas for me; I may replace it with an AMD TR 12-core desktop machine. Granted, I do have 60 HDDs that contribute to that power usage.
Sweet, I've been hoping to see Wendell do a show on this topic.
I like it, now I've got a new build to consider.
I'm curious why you chose XCP-ng vs Proxmox? Also looking forward to performance data on virtual vs passthrough NICs. Would you ever recommend virtualizing routers with CARP/HA between VMs on different physical boxes for a business with high uptime requirements? I just recently struggled with this decision and decided to just have two 1u servers each running OPNsense bare metal in HA configuration. I didn't trust the extra complexity of VMs when uptime is key but I'm curious to hear others thoughts.
Oh my, I've been using "THE FORBIDDEN ROUTER" for a little over 7~8 years now... lol, both on ESXi and XCP-ng.
I'm really looking forward to this, since I've been running the same concept on Proxmox for three months. Sadly I haven't had time to try clustering like Jeff Geerling does, but one of these days I might have the time. I also have one other problem: the speed is the full 1Gb one way, but the other way it's 100Mb, though that might be a configuration error on my side. My setup is Proxmox 7.x (I don't remember exactly) on a 12-core Xeon with 128GB RAM, with PCI passthrough on the WAN side and virtual interfaces on the LAN side, all 10Gb.
This is good content, and I like the chill music barely noticeable in the background. I use MikroTik RouterOS x86-64 on bare metal. I also have Ubiquiti gear. At any rate, this dude (sorry, I don't know your name) is NOT a level 1 tech. He's more like level 3, as he knows a lot and has proven deep knowledge of ZFS and even filesystems in general. I like the humbleness of the channel name.
I did exactly this, but with 2x Hyper-V hosts at home for around 3 years, with pfSense. pfSense running CARP between two boxes, always pinned to separate hosts with a dedicated passthrough NIC for each. I of course had an old Ubiquiti Edgerouter configured and ready to go if SHTF, but it never actually did.
I only stopped because I simplified my (running 24/7) home server setup to a single host and replaced the pfSense VM's with a Fortigate to save power and heat output...
You don't need much. I was running pfSense in a VMware environment with several other servers, and it barely moved any capacity. Next, I moved it to its own PC. It is an old i3 2100, and the CPU hangs at 3%. I have multiple WANs, a lot of advanced port forwarding over those WANs, port-based VPNs, incoming client-based VPN... you have to love pfSense.
That's exactly what I planned on doing! Virtualize that and include a couple of VMs for intrusion monitoring and DNS handling.
I've done basically this, but my DNS handling is done by pihole on a separate raspberry Pi. No reason you can't virtualize it though. I just happened to have a raspberry Pi handy.
Let's GO! Running Proxmox with pfSense, Pi-hole, UniFi, and experimenting with XPenology (a rip of Synology's software), with future plans for Home Assistant and a PBX, all on a Dell R210 II.
I've been running and testing pfSense "on a stick" in a VM on both VMware and Proxmox for years now using vNICs, and both have had absolutely no issue serving my 10+ VLANs and gigabit uplink; in testing it was able to route 6 gigabit between VLANs on just a single Intel i3 core.
I have never experienced any kind of delays or other weird stuff; even when the tiny i3 has been fully loaded on the other cores, it's still run way better than any consumer-level router I've ever tried. The only time I've experienced delays is when pfSense shares cores with other VMs that are hammering it; then jitter increases substantially (up to around 100ms, I believe), but after dedicating a core to pfSense those problems went away.
I am curious whether a processor with high single-thread performance would improve pfSense... routing a single stream at 10Gb+? Service reboots/updates going quicker? Does a large CPU cache make some operations (firewall? VPN? Suricata?) significantly more performant? How would Intel's heterogeneous CPU cores work/help/hurt?
For the DNS traffic, it would be nice! I quit using a router altogether a while back, but it looks like my internet is about to get a lot faster again...
It's tempting, but I'm nowhere near the hardware you're committing to such a feat! MTBF, no PCIe passthrough, and a full spool of fiber for runs for electrical isolation... As for Starlink, I'm not too keen on that dish going outside, because we've had two satellite dishes blown off of our house!
Here's hoping that Starlink will get faster... I may just build one on the cheap to handle router functions, no more.
There's a BIG reason to virtualize your pfSense: being able to spin up a second instance in under a minute.
I recently had a HW failure, fortunately I had a cold spare machine (called Phoenix). Powered it up and my network had its gateway back online in about a minute.
Sure, you can dedicate a second machine for this, but having a generic cold spare machine with all your critical VMs ready to go is golden.
Great. So you connected your SSD/HDD to your spare machine to get your network up and running again? Right?
@@AbhishekKumar-nt3in No, I have a whole second machine, with 3 network ports and the same hardware as my primary machine, powered off. I've installed Proxmox and loaded it with the two critical VMs I use on my network, pfSense and pihole. It's plugged into the network and my two WAN connections. If my main machine goes down, all I do is power up this machine. After about a minute of boot time (it's actually faster than my primary machine but uses more idle power), my gateway is back up, and I can work on whatever is wrong with my main gateway without impacting my network.
I do power up this second machine (isolated so it doesn't interfere with my primary gateway) from time to time to ensure it's still functional, and to update the VM image if I've made major changes to my config (rare).
@@repatch43 Awesome. Thanks for sharing. I am planning to use my TP-Link OpenWrt-flashed router as my backup router if my Proxmox host or the pfSense VM goes down for some reason.
If you're going with the EPYC, why not make this not only the router but also the SWITCH by using multiple multi-port Ethernet cards? You have 4x10Gbps (even more) cards available to go that route. You could even run some at only 2.5Gbps to 5Gbps if they were going to connect to WiFi APs.
I ran pfSense through VMware years ago so I could use the same system to also run my web server and other projects, and it worked great. The issue was when we finally got fiber: the virtual NIC just wasn't cutting it, and I was only getting about 600Mb/s through the VM. This was on a Ryzen 1700X at the time. Maybe it's better now with faster CPUs and improvements to VMware. Otherwise I'd say PCIe passthrough is a must if you have a gigabit or higher connection coming in and you want to attempt something like this.
I almost passed through a NIC to a pfsense vm in Unraid. I just couldn't get past the idea of my internet going down every time I wanted to play musical chairs with my iommu groups.
I use a Supermicro E300-9D box for this; it's much smaller, with 8 Ethernet ports (4 of them 10GbE out of the box), support for a single-slot PCIe card, which is populated with an ASM2824 switch card to add 4 more M.2 drives for a total of 5, and a mini PCIe slot for a WiFi AP (or cellular backup if you want).
While this is way beyond me, it's the best way to learn so I can do something 'like this' in the future
People are running VMs of pfsense on Intel N100s and similar mini-pc chips. I find that a lot more interesting than throwing money, compute power and electricity at the problem.
Great vid, been running my own routers (FreeBSD with jails) since 2004 :D
Heh, a pfSense VM is what I've been running since I had time to figure it out during covid. It also load-balances two internet connections, because one connection, with all the videoconferencing during covid, was getting to be an annoyance with multiple people simultaneously needing more reliability and sometimes more bandwidth. I'm quite happy with it, but as you say, the main reason for it is that I want to economize... not so much on hardware as on power usage. My server board is a dual low-power Xeon, and it has a pfSense VM, a pihole VM, and a webserver VM. I agree with Jeff though: duplicating the functions in a second machine on the network would be better. I can easily fall back by changing some plugs, but having some failover would be better. So I'd agree that running a 'one piece of hardware does absolutely everything' solution doesn't really make sense... unless you have two of them.
I basically just got done doing this. In case stuff breaks, I have my WRT54GL in a box on the shelf so I can limp by till I can get it fixed (very glad I had that when my 10-year-old board died). Now on AM4 with a 2200GE (QEMU on Ubuntu Server), and planning to run RAID arrays to make it less fragile. It runs a lot better now than it did on the FM1 socket.
Sophos Home Firewall is also a great option as a virtualized FW; it natively supports most platforms.
Happened upon this video after setting up pfSense (along with some other VMs) in Proxmox... and after borking it with a configuration change that took my network down, needing to walk over and plug a monitor and keyboard into my server to troubleshoot the pfSense VM. I did not know what I was getting into!
I've been thinking about doing this. Thanks for the info!
"We have exorcised the network."
Oh wow, Xen lives on in XCP-ng... thank god! I've not really played with virtualization for a while... used to use Xen and then moved to VMware in the corporate world for years... Now I'm looking to do this as a small home project to offload most small compute tasks... I try to avoid Linux at ALL costs due to just plain weirdness I don't want to spend time troubleshooting. Although I made a career out of managing Linux and doing systems programming, I'd rather spend my time doing the systems programming than deal with more Linux oddities.