Would love to see you cover more colo type stuff! I had equipment in colo back in the 2000s and loved it, and just recently set up some stuff in a colo to augment my homelab. In colo, power is almost always your biggest expense, so half rack vs full rack is a small difference for a simple circuit. I've had high setup costs before when they have to set up additional racks. It's odd they make you pay for them to set up the space for you.
My buddy and I did this a few years ago. Around here there is a company, Hurricane Electric, that basically runs the Costco of datacenters. They are mostly a transit provider, but have a few LARGE datacenters in the Bay Area. You can get 1 gig, 15 amps, and a full cab for $400/mo.
I almost miss having to drive 30 miles to my ISP where my servers were co-located......25 years ago! Back then it was a real treat to have 100M between servers with a T1 to the internet, and dealing with 2 dialup lines in MLPPP to access them from home. Looking at my full 42U cabinet now....maybe co-lo is a real option again?
The 511 building is a pretty cool building if you're into tech. I worked out of it for a short time a few years ago. The tenant list is interesting and it has a storied history as well. My understanding is that it was built as an R & D facility for Control Data Corp. Across 6th St to the SW is the Strutwear Knitting Company building, made historic for other reasons.
I did a half-way step between home DC and colo: I purchased a /28 subnet from Zen Internet while also allowing some clients to utilise shelf space within my home.
If you have full network administrative privileges, a hardware-based site-to-site VPN is the best choice rather than a mesh. Although a mesh network seems to solve many complex network configurations at first glance, what mesh really addresses is the complexity of peering between many sites, and you only have two sites here.
I use Nebula as an overlay network and am really happy with it so far. Seamless connection between all the servers and client devices no matter where they are, as long as there is an internet connection.
I have a very similar setup to you, Tim. I have free colocation space from my employer, plus my stuff at home. I messed with this a lot over the last year, and site-to-site, in my opinion, is the way to go (even using Site Magic as you seem to be). The convenience (and easier troubleshooting) is worth it. I have Tailscale on a few devices, but that is mainly an "oh shit" fallback for when I break something. I'll post some more in the Discord.
Agreed, it doesn't hurt to run both. For instance, the past few days my TS exit node container was acting super weird and rebuilding it didn't fix it; it wasn't until a few hours later that I discovered Snort was blocking part of the traffic. I'd definitely have both implemented for a PROD deployment.
Nice indeed. I also started moving my homelab to colo, and managed to snag some 10G connectivity, a 1U spot, good pricing, and a good location. Used modern hardware (CPU, memory, storage, NIC), and it's super speedy. Easily getting 10G to my home (and will expand to 25G once the colo owners upgrade their gear to 25 or 100G), and getting 0.54 ms RTT from home. Nice. I already want another server somewhere (maybe another DC), just for fun.
If I were you, I would host all my coordination servers at the colo: it has a static IP, you can set up a Netbird/Tailscale subnet router, and still have an SSH back door if the SDN fails. You can also set up a subnet router at home. The benefits will be similar to a site-to-site.
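A minimal sketch of that subnet-router idea, assuming Tailscale; the subnets below are placeholders, so swap in your own ranges (advertised routes also need approval in the Tailscale admin console):

```shell
# On a colo host: enable forwarding and advertise the rack's subnet to the tailnet
sudo sysctl -w net.ipv4.ip_forward=1
sudo tailscale up --advertise-routes=10.0.50.0/24

# On a home host: advertise the home subnet and accept the colo's routes
sudo sysctl -w net.ipv4.ip_forward=1
sudo tailscale up --advertise-routes=192.168.1.0/24 --accept-routes
```

With both routers up, hosts on either LAN can reach the other side without running Tailscale themselves, which is roughly what a site-to-site tunnel gives you.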
Hi, I personally manage multiple colo spaces, and I use hardware, redundant-path VPNs + a dynamic routing protocol (OSPF). Also be sure to have a second way in if your main tunnel goes down (e.g., your IPsec endpoint is down). PS: I'm not using Ubiquiti stuff for that setup. Also, don't forget to document your IP usage in something like an IPAM solution, and think about a good addressing plan. It helps a lot.
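For context, the dynamic-routing piece this describes can be as small as an FRR snippet on each VPN endpoint; the tunnel and LAN prefixes below are made up for illustration:

```
! /etc/frr/frr.conf (sketch, hypothetical addressing)
router ospf
 network 10.255.0.0/30 area 0    ! primary tunnel transfer net
 network 10.255.0.4/30 area 0    ! backup tunnel transfer net
 network 192.168.10.0/24 area 0  ! local LAN to advertise
```

OSPF interface costs then decide which tunnel carries traffic, and failover to the backup path happens automatically when the primary tunnel's adjacency drops.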
I haven't used colocation for about a decade. Renting individual VPSes is way cheaper and more maintainable than anything else without sacrificing performance (as long as you pick a good host). That said, I use Tailscale for pretty much everything: my homelab, my office, my commercial servers, my clients... I love how easily I can manage ACLs and quickly give anyone access to exactly what they need. Keep in mind that I also run my own DERP server. It shouldn't make much of a difference (speed/safety), but it was easy enough to self-host.
Tailscale 100%. Been using it for a few years and wouldn't look back. The ease of setup and management is unparalleled and they have pretty big ambitions to be the de-facto VPN/Access/Networking company and I believe have built the team to achieve that.
Personally, friends of mine and I run Tailscale between our houses so we can back up each other's data. We also plan to soon add family to it for their backups as well.
Rather than being tied to a supplier's hardware/software-dependent solution, I would set up a Tailscale/Headscale solution. There is far more flexibility in a VPN/SDN mesh than in a vendor-specific site-to-site solution.
Tailscale with subnet routers for the win. Site-to-site VPNs are cool, but if you add a third site or more, they become annoying to manage unless you use something like OSPF or BGP.
I would honestly do both. Having the hardware-based VPN is nice, but if something ever happened and that UDM messed up, then instead of a visit, with something like Twingate or Tailscale you could connect remotely and fix things, so long as the network connection were still somehow intact. Redundancy is never a bad thing.
I have a few VPSes on different cloud providers that I wanted to link together over a private network, plus provide access to backup storage on a server in my homelab. Since this doesn't require multiple users or access control lists, Tailscale was overkill, so I just set up my own WireGuard mesh, which has been working really well for almost a year now.
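A hand-rolled mesh like that is just each node listing every other node as a peer. A trimmed sketch of one node's config, with made-up keys, hostnames, and addresses:

```ini
# /etc/wireguard/wg0.conf on node A (placeholders throughout)
[Interface]
Address    = 10.8.0.1/24
PrivateKey = <node-A-private-key>
ListenPort = 51820

[Peer]
# node B: a VPS with a public address
PublicKey  = <node-B-public-key>
Endpoint   = vps-b.example.com:51820
AllowedIPs = 10.8.0.2/32

[Peer]
# node C: the homelab backup box; also routes its LAN over the tunnel
PublicKey  = <node-C-public-key>
Endpoint   = home.example.com:51820
AllowedIPs = 10.8.0.3/32, 192.168.1.0/24
```

Bring it up with `wg-quick up wg0`; every node carries the same peer list (minus itself), which is manageable at a handful of nodes and is exactly where tools like Tailscale start earning their keep as the count grows.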
I'd stick to the site-to-site VPN, you've basically stumbled into the most ideal setup using that and I don't see a lot of benefit to going for the overlay network route in this scenario, as cool as something like tailscale is. Awesome seeing something like plover and I'll have to see if I can find something like that more local. Been kind of wanting to move to a colo for some of my equipment but even getting quotes is a bit of a headache locally.
I have self-hosted and cloud servers in different countries, and I connect to them from many different places, so that's why I'm using a mesh VPN. But since you have both sites with the same ISP, it doesn't really matter. If you were trying to join cloud instances to the network, though, I'd go with a mesh VPN. Sometimes I need to test something in the cloud, so I automated the instance deployment to connect itself to Tailscale automatically; after the instance is terminated or powered off for some time, it gets removed from my Tailscale account. That's very handy.
All I can say is, Tailscale slaps brother. I have been extremely grateful that it is an available option for small homelab users like myself. It may not be a bad idea to at least test it out.
Had issues with a self-hosted Tailscale server; would like to see content around that. I too think that colocation is cool but too expensive for hobby projects.
May want to put your UDM Pro on the back (hot-aisle) side of the rack for cable management; the heat shouldn't be too much of a concern for it. If you moved it to the back, you would also save the 1U of space your cabling currently runs through.
Dang, those prices are high. Here in Swedenland, a 10U colo with a dedicated gigabit full-duplex port, a number of IPs, and no data cap is about $400/month, and that was just a quick look at a friend's company.
I would use a mesh network, mainly because I'm a big fan of Zero Trust, and also because you become more independent of the network at home or in the DC. But I would recommend something like Nebula. It's super fast and lightweight. It doesn't have a nice UI unless you use the hosted version, but if you use Ansible for everything, like setup and key rotation, it becomes really easy. On top of that, you could use something like NetBird (a German product). It's also a mesh VPN solution that does largely the same thing, but with nicer auth integration like SAML etc. I would use that for things like mobile devices or PCs, and Nebula for the backend stuff.
I would go with a hardware-based VPN, since that is the closest thing to what you would want in an enterprise setting, an EPLAN (depending on what you want from this rack).
It's amazing how cheap tech costs are in the USA. I have to pay at least 5 times the prices you showed. I'd use hardware to connect to the remote data center.
I would choose as follows: 1. A site-to-site VPN (even using Tailscale/WireGuard as the site edge) for the major use case, in case some system does not support TS/WG, and to reduce configuration time for a simple testing-purpose project. 2. TS/WG as a backup medium to access one host in that rack in case the UDM VPN fails.
I personally run Tailscale myself to connect my cloud and homelab stuff together for security, and I only expose what I need, when and where I need it. Though recently I've been looking into a self-hosted solution for privacy/security reasons.
IF YOU PLAN FOR MULTI-GIGABIT ROUTING TO THE INTERNET: the integrated switch in the UDMP is essentially a GbE switch connected to the UDMP, so it always limits internet and inter-VLAN routing. If you skip the internal switch and instead connect a 10G switch via the UDMP's SFP+ port, you can get 10G towards the internet and for inter-VLAN routing. That drops to about 3.5 Gb/s with IPS enabled.
I used to use hardware VPN and moved to tailscale. I was hesitant at first but don't think I'd ever go back. I can simply add another device like my phone or laptop and in 2 min I'm online with all of my devices. Hardware VPN is kind of the old way of doing things, for me it was time to learn something new and change with the times. That's my two cents, looking forward to your next video.
I would keep my main physical infra on the site-to-site, then set up a software-defined network for the virtual systems. That way you get the best of both worlds: the flexibility that comes with software and the reliability that comes with hardware solutions.
It's interesting: as soon as you said you used a UDMP for VPN, I thought, why didn't he just use Tailscale or a Cloudflare Tunnel? IMO I would run a Cloudflare LXC on each box (assuming they are Proxmox) and you'd have connectivity to them. Tailscale would also work great. Looking forward to seeing what you go with!!!
It's no question: do the site-to-site VPN. It's more secure, easier to configure and manage, the whole 9 yards. Also, as some have already said, you would likely benefit from having the UDM at the back; it typically only goes in front in homelab scenarios or in dedicated all-networking racks. That's pretty cool though; I never knew you were so close. I used to work for an ISP/MSP that owns the fiber into that stadium and colocates at that DC, although I never went there.
Hey Tim, thanks for your videos; I like them and learn from them too. I totally get why you use self-managed software, but why do you bother with your own racks and HW? Why not simply rent servers or VMs from a cloud provider? It's not expensive if you use it wisely. I'd definitely move my own self-hosted public-facing apps to the cloud, either to a container environment or, worst case, simple VMs.
Yep, I live in Chicago and colo at 350 down on Cermak. My ping is 500usec. What’s great is that pretty much every game server hosts here in downtown Chicago. So my ping to everything shows 0 milliseconds.
I had a site to site VPN for the longest time just to connect to my permanent address but it got less and less reliable to the point where I installed tailscale as a container (second one on another node in the cluster) and haven't looked back since. However I think because your ISP is located in the same data center I would just keep it to a site to site VPN.
I ran a few hosts years ago and looked into colocating, and the cheapest places I found were in NJ, NY, or TX. Idk why they're so cheap, but they are, and that's why lots of hosts have servers in those areas.
Wouldn't sharing colocation rack space be quite a risk? You'd need to set up some encryption, chassis intrusion detection, BPDU filtering, MAC security... For the connection back home: a mesh network or a Cloudflare tunnel! Since it doesn't look like you have an OOB management device (and you probably don't want datacenter remote hands to power cycle for you), I'd suggest some sort of IP KVM like Opengear or PiKVM.
Just did this, and besides all my co-workers thinking I was insane, it's been great: $90/mo for 4U. Tailscale on an EdgeRouter has been dog slow though. Now that I have a static IP and a stable site, I think I will switch to standard IPsec for the site-to-site VPN and use a regular VPN server to connect remotely. The MTU is much lower with the WireGuard overhead than with IPsec, and it occasionally gives me issues with my vSAN witness.
"It sounded like this inside"... didn't hear any difference with my PowerEdge running behind me lol
PowerEdge as in singular? lol
lol as my full rack is humming away 15ft away from me.
@@jonathan.sullivan yes, as only one is currently powered up. But it's an R730xd; it's not that loud usually. Except when it's 25°C in my room.
dude right, I have three running about 30' from me. I was like dang that's quiet
Haha, relatable
One major downside of the way you've set it up: if your UDM dies, your entire cluster state may be compromised, as nodes are no longer able to see each other.
I would personally have added a 2-port NIC (I bought some refurbed SFP+ ones for 60 bucks a pop, though I'm from Europe so your market may differ) in that unpopulated PCIe slot, then hooked up all nodes directly to each other in a mesh (A->B, B->C, C->A) with some SFP+ DAC cables (they cost like 15 bucks a pop from FS).
Then use the onboard NICs _just_ for traffic leaving the cluster.
It would add some extra costs (and some configuration complexity) but the benefits are worth it in my opinion:
- Ceph can now run over dedicated interfaces (that are also faster when using SFP+), lowering the burden on the other interfaces (less congestion).
- Your UDM failing only affects your uplink (but your cluster state itself will otherwise remain unaffected).
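If anyone wants to wire up the split described above, on the Ceph side it boils down to two lines of config; the subnets here are placeholders standing in for the onboard-NIC and SFP+-mesh networks:

```ini
# /etc/ceph/ceph.conf (sketch, hypothetical subnets)
[global]
public_network  = 192.168.1.0/24   # onboard NICs: clients + traffic leaving the cluster
cluster_network = 10.10.10.0/24    # SFP+ full mesh: OSD replication and heartbeats
```

With a 3-node triangle, every node has a direct link to both peers, so no switch is needed on the cluster network; a routed or broadcast-bond mesh setup (as described in the Proxmox full-mesh docs) additionally keeps the mesh alive if one link fails.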
And the 8 LAN ports on the UDMP are internally connected over just 1 Gb/s. It's basically a 9-port gigabit switch: 8 ports facing the outside, 1 used internally to connect to the rest of the system.
Dude, you made it into the 511 building... that's insane! That's where the hub for the entire Midwest backbone is located. I'm so jealous.
Just a bit of background: when the US Bank stadium was being constructed, there was an idea to demolish the building, since it looks just like any ordinary building. However, they were told that wasn't an option. That's when they discovered the building's true significance.
Maybe that explains the sweet, sweet ping time! Thanks for the history!
@@TechnoTim Yes, the company I work for utilizes a direct connection to the backbone, connecting all the way back to our main data center. It's not an inexpensive setup, and that location serves as a major hub for all the leading internet providers. Depending on your access level, if you venture down to the basement, you'll come across secure rooms that are off-limits, reserved for major companies like CenturyLink, Xfinity, Spectrum, and others.
Haha I would have loved to have been the one who told the developers “no you can’t tear down the major communications hub for the city and half the country. “
The 511 building is legendary. The TW Telecom colo in Minnetonka, Cyxtera in Shakopee, and the 2 DataBank colos are good visits, too.
I choose ISPs based on 511 peering.
A tip. Mount the UDM on the back side of the rack and gain back that rack space you used for the cabling to run to the back.
Exactly. Those cables are now consuming an additional U. Even in a shared rack, that extra U costs something and a U wasted on cables is an expensive waste of money.
You need to worry about airflow and where the UDM pulls its air. If it's front to back like all the other servers, it would be pulling hot air from the back of the cabinet and dumping it out the front of the cabinet. This is why most enterprise network switches have models with back-to-front airflow.
Thank you! Great idea! I proposed this a few times but they said it was fine in front. We'll see if they change their minds once the rack starts to fill up! 😂
I'm actually surprised there isn't a top of rack switch they all just plug into and get their static IP's from. I rarely had to bring my own networking equipment for my colo's.
It might be tight, but I think there was space to run the cables to the sides. If this 1U ever becomes a problem.
Also, regarding your question: given that your ISP is located in the same data center as you (lol), I recommend sticking with the hardware site-to-site VPN. It's hard to find a better or more reliable connection. From my perspective, opting for a service like ZeroTier would only introduce unnecessary overhead to your current setup.
I also would stick with the site-to-site VPN; I would never trust others to handle or be part of my private VPN connection in any manner. Tried and tested through the ages. The thing you should ask yourself is: why would you replace it? What are you not satisfied with in the current site-to-site setup? What benefit would the alternative give you over site-to-site? Would that benefit actually improve your situation or possibilities?
Hardware site to site for sure, nice setup
What about the option of a self-hosted ZeroTier controller? I've been using that for quite a while now and have gotten lots of benefits from it, though I'm not keeping my hardware in a data center. ZeroTier can also be a good access-granulation tool.
Underneath ZeroTier and all those other easy-to-configure solutions, these VPNs run on WireGuard. Self-hosting at home is great as long as your ISP has good peering; if it doesn't, your VPN can become unstable. However, self-hosting does give you some privacy if you have privacy concerns.
As an ISP network engineer, I second this.
Although you can do some cool things with Tailscale and ZeroTier, what you want from co-located equipment is reliability. The more complex things get, the more likely they are to have problems.
Hey I like that shirt you're wearing at the end! 😂
Thanks for a great design Jeff!
Funny seeing you here
@@TechnoTim You're welcome! I have your dark mode shirt too; it just hasn't hit the rotation for a day when I've been recording yet. But it'll show up soon enough :)
I went Tailscale after having OpenVPN. The biggest upside was the integration of every device: iPhone, iPad, random laptop(s), NAS in a tertiary location; suddenly they were all part of an overlay network. Since then, I have literally forgotten where some devices are located because it has become so seamless. 😂
100% recommend Tailscale. I just wish the UDM had native support (in the mgmt interface) for a Tailscale exit node.
It would be interesting to see ping time over a Tailscale network to those same machines.
I've always wanted to colocate, so this is pretty cool from a HomeLab perspective of how this all works. Yeah I can look at colocation videos online but probably none from a homelabber. Thanks Tim!
Man, $45/month is so cheap for that service. I wish we had something like this in Denmark.
I bet there is in Copenhagen?
@@emanuelpersson3168 Nothing that I've been able to find. It all targets organisations at much higher costs.
Start your own community rack? It's not that common, but some computer clubs and similar groups do it: they rent a rack, or several, and everyone shares the expenses, usually with a little extra for the organization.
Try contacting hackerspaces. For example, when I lived in CPH I went to Labitat. Okay, they don't really have a data center, but you could get rack space with decent internet for cheap. Or they would know a place to co-locate cheaply.
I vaguely know a guy with colocation space in Denmark. His company is called something like Stacket Group (I think?) and he runs some brands and such from it. Maybe you can get in contact with them and see if they will rent you space. I believe they're connected via GlobalConnect and TDC.
It's been a couple years since I've been in a data center, but it's amazing how really cold air can become really warm air in the very short time it spends inside the components of a server rack. I got to watch when they built the new air handler for the data center at work, and the ducts were big enough to walk around in... upright!
they watched Die Hard and thought "why crawling through when you can walk"
BTW, you can save 1U of space (above your UDM) by mounting your UDM at the back of the rack....that's where all of your eth/SFP ports live for your servers. This is how most people network their servers. It's also a reason why enterprise switches have back-to-front airflow.
That also assumes the depths are within reason
The UDM probably doesn't have back-to-front airflow and would just be eating hot air at that point.
I did something similar a few years back and still continuing to do this! It’s great to have it in a dc where the temperature remains the same and you can add / expand where required 🎉
Hey Tim, cable ties and loose wire trigger my DC-OCD; velcro is your friend. I would also caution you on the UniFi in the DC: having a single point of failure in front of the cluster could lead to sad times. OPNsense clustering is extremely robust, it's also getting a lot more updates than pfSense, and it runs on lightweight hardware (I repurposed a couple of old Sophos XG-115s about 18 months ago; super stable). Love the vids, thanks for putting them out there.
Good ol' 511. I'm definitely going to look into getting some stuff moved!
I vote for Tailscale. For one, I would like to see more of it in videos, and it works great. I have remote repeater sites connected and 2 ranches in different states. It does require very consistent updates, but that seems to be the only drawback besides not hosting it myself. Thanks for the videos; I enjoy them a lot.
This is awesome! I moved to the Twin Cities a year and a half ago and to know these things are a short drive away is really neat. I am currently working on my RHCSA cert and you have been a good source of motivation and inspiration during that journey. Thank you for doing what you do.
there are a lot of colo facilities here. Lots of cool stuff to see out there.
I recently went to do some maintenance on one of my company's backbones and thought I recognized the DC. Forgot I watched this video and had to find it to confirm we're colocating in the same building haha
I was thinking about doing this for awhile. Excited to watch this video.
Fellow Minnesota resident😄
Thank you for the very educational video! Somehow every time I think of doing something, you make a video a month later explaining how it can be done haha
Oohh I had no idea you also were in Minnesnowta! Hope you are ready for the almost two feet of snow coming for us this weekend ❄
I want to thumb down this comment but you're not a bad person. lol.
Thanks for this. I've been thinking of doing something similar here in the Dallas/Ft. Worth area. Running mine at home is getting loud, expensive and dumps so much heat.
I'd love to see some content around Nebula as an overlay network. Defined Networking has a pretty generous free tier in the hosted space.
Excellent move! Not having to worry about power issues is a big one.
Here are my 2 cents:
Moving the switch to the hot side of the rack is something you want to think through. I was dealing with this for years, until Cisco finally started offering fans with reverse airflow, so you're a) not obstructing the airflow in general and b) not trying to cool your switch with hot air from the servers.
It seems there is no side cable management in the racks? I started to use slim cables - less space needed, better air flow. Plus they might fit through the side of the rack.
If you are using single PSU servers, you might want to invest in an ATS so you can take advantage of dual power sources. As a bonus they offer environmental monitoring and some even remote access to reboot your equipment (Hands up who had to run to a Cisco switch and pull the plug ;)
Thanks for showing the leg work, I kinda had a feeling you got in on a deal when you agreed to colo, prices are insane these days. Cheaper to rent a dedi server and not worry about hardware failure costs.
Would love to see you cover more colo type stuff! I had equipment in colo back in the 2000s and loved it, just recently set up some stuff in a colo to augment my homelab.
In colo, power is almost always your biggest expense. So half rack vs full rack is a small difference for a simple circuit. I've had high setup costs before when they have to set up additional racks. It's odd they make you pay for them to set up the space for you.
Hi from Minnesota! I've watched your YouTube channel for a while, but I did not know you live so close to me.
My buddy and I did this a few years ago. Around here there is a company, Hurricane Electric, that basically has the Costco of datacenters. They are mostly a transit provider, but have a few LARGE datacenters in the bay area. You can get 1 gig, 15 amp, full cab for $400/mo.
Amazing video! Thanks for all your advice Tim, it has helped me out a lot during my homelab journey. Now it's time to take it to the next level 🥳
I almost miss having to drive 30 miles to my ISP where my servers were co-located......25 years ago! Back then it was a real treat to have 100M between servers with a T1 to the internet, and dealing with 2 dialup lines in MLPPP to access them from home. Looking at my full 42U cabinet now....maybe co-lo is a real option again?
The 511 building is a pretty cool building if you're into tech. I worked out of it for a short time a few years ago. The tenant list is interesting and it has a storied history as well. My understanding is that it was built as an R & D facility for Control Data Corp. Across 6th St to the SW is the Strutwear Knitting Company building, made historic for other reasons.
I went halfway from home DC to colo: I purchased a /28 subnet from Zen Internet while also allowing some clients to utilise the shelves within my home.
If you have full network administrative privileges, a hardware-based site-to-site VPN is the best choice, rather than mesh. Although a mesh network seems to solve many complex network configurations at first glance, from a site perspective, mesh addresses the complexity of peering between multiple sites. Moreover, you only have two sites here.
I use Nebula as an overlay network and am really happy with it so far. Seamless connection between all the server and client devices no matter where they are, as long as there is an internet connection.
I have a very similar setup to you, Tim. I have free colocation space from my employer, and my stuff at home. I messed with this a lot over the last year, and site-to-site in my opinion is the way to go (even using Site Magic as you seem to be). The convenience (and troubleshooting) are worth it. I have Tailscale on a few devices, but that is mainly for an "oh shit" when I break something. I'll post some more in the Discord.
Agreed, it doesn't hurt to run both. For instance, these past few days my TS exit node container was acting super weird and rebuilding it didn't fix it.. it was not until a few hours later that I discovered Snort was blocking part of the traffic.. I'd def have both implemented for a PROD deployment.
Nice indeed. I also started moving my homelab to colo, and managed to snag some 10G connectivity, a 1U spot, good pricing, and a good location. Used modern hardware (CPU, memory, storage, NIC), and it's super speedy. Easily getting 10G to my home (and will expand to 25G once the colo owners upgrade their gear to 25 or 100G), and getting 0.54 ms RTT from home. Nice. I already want another server somewhere (maybe another DC), just for fun.
If I were you, I would host all my coordination servers at the colo: it has a static IP, you can set up a NetBird/Tailscale subnet router, and you still have an SSH back door if the SDN fails. You can also set up a subnet router at home.
The benefits will be similar to a site to site.
Excellent pathway, colocation being the natural evolution of homelabbing :)
TechnoTim is the best on YouTube. He's still humble and hasn't let the publicity get to his head! Thanks man!
Been using tailscale and it keeps blowing my mind!
Cool topic! Wondering what everyone uses for remote connections in their homelabs. Mesh by software vs hardware sounds like a great video idea!
Hi, I personally manage multiple colo spaces, and I use hardware: redundant-path VPNs + a dynamic routing protocol (OSPF).
Also be sure to have a second way in if your main tunnel goes down (e.g., your IPsec endpoint is down).
PS: I'm not using UBNT stuff for that setup.
Also, don't forget to document your IP usage in something like an IPAM solution and think about a good addressing plan. It helps a lot.
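The redundant-path + OSPF idea above could look something like this in FRR on a Linux router; a minimal sketch only, where the interface names (`wg0`/`wg1` as the two tunnels), the 10.255.x point-to-point subnets, and the LAN prefix are all assumed for illustration:

```
# /etc/frr/frr.conf (fragment) -- two tunnels to the remote site, OSPF picks the path
router ospf
 ospf router-id 10.255.0.1
 network 10.255.1.0/30 area 0    # point-to-point subnet over tunnel wg0
 network 10.255.2.0/30 area 0    # point-to-point subnet over tunnel wg1
 network 192.168.10.0/24 area 0  # local LAN, advertised to the other site
!
interface wg1
 ip ospf cost 200                # make wg1 the backup; OSPF fails over if wg0 drops
```

With both tunnels up, traffic follows the lower-cost path; if the primary endpoint dies, OSPF reconverges onto the backup without manual intervention.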
Great tips, thank you! I wish I were better at networking!
Hi Tim, use the Headscale control plane. I use it myself, and it could double as a good video tutorial.
Wow! Cool stuff. Pinging your remote units faster than the ones in your house!? That had to feel good.
Oh yeah!
Another side note: the 511 building is a carrier hotel as well. It has pretty much every single ISP that is in Minnesota.
I haven't used colocation for about a decade. Renting individual VPSes is way cheaper and more maintainable than anything else without sacrificing performance (as long as you pick a good host). That said, I use Tailscale for pretty much everything: my homelab, my office, my commercial servers, my clients... I love how I can manage ACLs easily and quickly give anyone access to exactly what they need. Keep in mind that I also run my own DERP server. It shouldn't make much of a difference (speed/safety), but it was easy enough to self-host.
sharing a shared space? it's like colo-ception!
Flipping Genius! 🖖
Definitely a nice future video: how to build a hyperconverged dual-site Proxmox cluster using some routing and tunnelling tech.
Tailscale 100%. Been using it for a few years and wouldn't look back. The ease of setup and management is unparalleled and they have pretty big ambitions to be the de-facto VPN/Access/Networking company and I believe have built the team to achieve that.
Are you able to share the range of cost for the setup btw? I didn't catch that in the video.
The final stage of every home lab - data center
Personally, friends of mine and I run Tailscale between our houses so we can back up each other's data. We also plan to soon add family to it for their backups as well.
Looks good Tim, thanks for sharing.
Rather than being tied to a supplier's hardware/software-dependent solution, I would set up a Tailscale/Headscale solution. There is far more flexibility in a VPN/SDN mesh than in a vendor-specific site-to-site solution.
Tailscale with subnet routers for the win. Site-to-site VPNs are cool, but if you add a third site or more, that becomes annoying to manage unless you use something like OSPF or BGP.
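For context, the subnet-router setup is just two `tailscale up` invocations; a sketch using Tailscale's documented `--advertise-routes`/`--accept-routes` flags, where the subnets shown are example addresses, not the ones from the video:

```
# On a node in the colo rack, advertise the rack's subnet to the tailnet:
sudo tailscale up --advertise-routes=10.20.0.0/24

# On a node at home, advertise the home LAN and accept the colo's routes:
sudo tailscale up --advertise-routes=192.168.1.0/24 --accept-routes
```

Each advertised route then has to be approved once in the Tailscale admin console (or via auto-approvers in the ACL policy), after which machines on either LAN can reach the other without running the client themselves.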
I would honestly do both. Having the hardware-based VPN is nice, but if something ever happened and that UDM messed up, then instead of a visit, with something like Twingate or Tailscale you could connect remotely and fix things, so long as the network connection were still somehow intact. Redundancy is never a bad thing.
I think that's the old AT&T building. I did grad school in St Paul and remember passing by that area numerous times when I went across the river.
I have a few VPSes on different cloud providers that I wanted to link together over a private network, plus provide access to backup storage on a server in my homelab. Since this doesn't require multiple users or access control lists, Tailscale was overkill, so I just set up my own WireGuard mesh, which has been working really well for almost a year now.
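A hand-rolled WireGuard mesh like that is just one `[Peer]` section per other node in each config. A sketch for one node; all addresses, hostnames, and the key placeholders are illustrative, and the comment's own topology/addressing isn't known:

```
# /etc/wireguard/wg0.conf on node A -- full mesh: every node lists every other node
[Interface]
Address = 10.10.0.1/24
PrivateKey = <node-a-private-key>
ListenPort = 51820

[Peer]
# Node B: a VPS with a public address
PublicKey = <node-b-public-key>
Endpoint = vps-b.example.com:51820
AllowedIPs = 10.10.0.2/32

[Peer]
# Node C: homelab backup server, reachable via a port forward;
# AllowedIPs also routes its LAN so the backup target is reachable
PublicKey = <node-c-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 10.10.0.3/32, 192.168.50.0/24
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`. The trade-off versus Tailscale is exactly what the comment says: no coordination server to depend on, but adding an Nth node means touching N-1 existing configs.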
"Although it is pretty cool in there." I see what you did.
I'd stick to the site-to-site VPN, you've basically stumbled into the most ideal setup using that and I don't see a lot of benefit to going for the overlay network route in this scenario, as cool as something like tailscale is. Awesome seeing something like plover and I'll have to see if I can find something like that more local. Been kind of wanting to move to a colo for some of my equipment but even getting quotes is a bit of a headache locally.
I have self-hosted and cloud servers in different countries, and I connect to them from many different places, so that's why I'm using a mesh VPN. But since you have both sites with the same ISP, it doesn't really matter. If you are trying to join cloud instances to the network, though, I'd go with a mesh VPN. Sometimes I need to test something in the cloud, so I automated the instance deployment so it automatically connects to Tailscale, and after the instance is terminated or powered off for some time, it gets removed from my Tailscale account. That's very handy.
All I can say is, Tailscale slaps brother. I have been extremely grateful that it is an available option for small homelab users like myself. It may not be a bad idea to at least test it out.
Either site-to-site WireGuard or an overlay network such as Tailscale or Netbird. Preferably one of the latter.
Had issues with a self-hosted Tailscale server; I would like to see content around that. I too think that colocation is cool, but too expensive for hobby projects.
Maybe a dumb question, but what is the upside to putting your server into the data center instead of your home local network?
May want to put your UDM Pro on the back side of the aisle for cable management, the heat shouldn't be too much of a concern with it. If you moved it to the back, you will also save 1U of space where your cabling runs through.
BGP for site-to-site and an overlay network like ZeroTier with a self-hosted controller would be a great setup.
I'm a Ubiquiti fan, so stay like this.
I would go with Tailscale, but if I want to host my relays as well I would go with Netbird.
Cool project, can't wait for more videos .
Dang, those prices are high.
Here in Swedenland a 10U colo with dedicated gbit full duplex port, a number of IP's and no data limit is about $400/month, and that was just a quick look at a friends company.
omg to find out that you are in Minnesota this video is awesome!
I would use a mesh network, mainly because I'm a big fan of Zero Trust, and you also become more independent of the network at home or in the DC. I would recommend something like Nebula. It's super fast and lightweight. It doesn't have a nice UI unless you use the hosted version, but if you use Ansible for everything, like setup and key rotation, it becomes really easy. On top of that you can use something like NetBird (a German product). It's also a mesh VPN solution that does largely the same thing, but with nicer auth integration like SAML etc. I would use that for things like mobile devices or PCs, and Nebula for the backend stuff.
I don't know the pros and cons of either one, so I'd like you to cover a little bit of both if possible in the next one.
I would go with a hardware-based VPN, since that is the closest thing to what you would want in an enterprise setting, being an EP-LAN (depending on what you want from this rack).
It's amazing how cheap tech costs are in the USA. I have to pay at least 5 times the prices you showed.
I'd use hardware to connect to the remote data center.
Welcome to colocation; from here it's up and up.
Do yourself a favor and try to get a tour of a Tier IV facility 😊
I would choose as follows:
1. A site-to-site VPN (even using Tailscale/WireGuard as a site edge) for the major use case, in case some system does not support TS/WG, as well as to reduce configuration time for a simple testing-purpose project.
2. TS/WG as a backup medium to access one host in that rack in case the UDM VPN fails.
Why didn't you consider Hetzner servers, or just using DigitalOcean or AWS?
Awesome journey. If what you have is safe and secure, you may be adding more latency and a speed bottleneck by using Tailscale.
I personally run Tailscale myself to connect my cloud and homelab stuff together for security, and only expose what I need to and when/where I need to, though recently I've been looking into a self-hosted solution for privacy/security reasons.
IF YOU PLAN FOR MULTI-GIGABIT ROUTING TO THE INTERNET: the integrated switch in the UDMP is connected to the rest of the system over what amounts to a single 1 GbE link, so it always limits internet and inter-VLAN routing.
If you don't use the internal switch but instead connect a 10G switch via the UDMP's SFP+ port, then you can use 10G towards the internet and for inter-VLAN routing. This drops to about 3.5 Gbps with IPS enabled.
I'm sold on overlay networks like Tailscale and Zerotier. Enjoy the new digs!
I was not sure what you ended up paying. The reason I ask is whether the cost over time of providing your own HW is more or less than just renting compute.
I used to use hardware VPN and moved to tailscale. I was hesitant at first but don't think I'd ever go back. I can simply add another device like my phone or laptop and in 2 min I'm online with all of my devices.
Hardware VPN is kind of the old way of doing things, for me it was time to learn something new and change with the times.
That's my two cents, looking forward to your next video.
Not to mention all the firewall complications on both ends
I would keep my main physical infra on the site-to-site, then set up a software-defined network for the virtual systems. That way you get the best of both worlds: the flexibility that comes with software and the reliability that comes with hardware solutions.
It's interesting: as soon as you said you used a UDMP for VPN, I thought, why didn't he just use Tailscale or a Cloudflare tunnel? IMO I would run a Cloudflare LXC on each box (assuming they are Proxmox) and you'd have connectivity to them. Tailscale would also work great. Looking forward to seeing what you go with!!
Even if you go the 'regular' VPN route, definitely an overlay network like tailscale (or headscale) for the fun of it.
It's no question: do the site-to-site VPN. It's more secure, easier to configure and manage, the whole 9 yards. Also, as some have already said, you would likely benefit from having the UDM on the back; it typically only faces forward in homelab scenarios or in dedicated all-networking racks. That's pretty cool though, I never knew you were so close. I used to work for an ISP/MSP that owns the fiber into that stadium and colocates at that DC, although I never went there.
Didn’t know Techno Tim was in MN. I’m in the South Metro. Cool video again.
Hey Tim, thanks for your videos, I like them and learn from them too. I totally get why you use self-managed software, but why do you bother with your own racks and HW? Why not simply rent servers or VMs from a cloud provider? It's not expensive if you use it wisely. I'd definitely move my own self-hosted public-facing apps to the cloud, either to a container environment or, worst case, simple VMs.
Yep, I live in Chicago and colo at 350 down on Cermak. My ping is 500usec. What’s great is that pretty much every game server hosts here in downtown Chicago. So my ping to everything shows 0 milliseconds.
I had a site to site VPN for the longest time just to connect to my permanent address but it got less and less reliable to the point where I installed tailscale as a container (second one on another node in the cluster) and haven't looked back since. However I think because your ISP is located in the same data center I would just keep it to a site to site VPN.
I ran a few hosts years ago and looked into colocating, and the cheapest places I found was either NJ, NY, or TX. idk why they're so cheap but they're pretty cheap and is why lots of hosts have servers in those areas
Wouldn't sharing colocation rack space be quite a risk? You'd need to set up some encryption, chassis intrusion detection, BPDU filtering, MAC security...
For the connection back home, a mesh network or a Cloudflare container! Since it doesn't look like you have an OOB management device (and you probably don't want datacenter remote hands to help you power cycle), I would suggest some sort of IP KVM like Opengear or PiKVM.
👍
I added a comment sort of along these lines
Also verify that your power supplies are 220v capable. Most real server PSUs are full-range, but I've seen smoke on more than one occasion.
I know what the first DC quote is. I used to work for them lol.. The location you were looking at was either Edina or Eagan amirite?
Just did this, and besides all my co-workers thinking I was insane, it's been great. $90/mo for 4U. Tailscale on an EdgeRouter has been dog slow, though. Now that I have a static IP and a stable site, I think I will switch to standard IPsec for the site-to-site VPN and use a regular VPN server to connect remotely. The MTU is much lower with the WireGuard overhead than with IPsec, and it occasionally gives me issues with my vSAN witness.
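The MTU point above is just header arithmetic. A quick sketch: the 80-byte figure is the commonly cited worst case for WireGuard over IPv6 (and matches wg-quick's default interface MTU of 1420), while the IPsec number is purely an illustrative assumption, since ESP overhead varies with cipher, mode, and padding:

```python
def tunnel_mtu(base_mtu: int, overhead: int) -> int:
    """Largest inner packet that fits in the tunnel without fragmentation."""
    return base_mtu - overhead

# WireGuard data packet over IPv6: 40 (IPv6) + 8 (UDP) + 32 (WG header + auth tag)
print(tunnel_mtu(1500, 80))   # 1420 -- wg-quick's default interface MTU

# IPsec ESP overhead depends on cipher/mode; ~55-75 bytes over IPv4 is typical
print(tunnel_mtu(1500, 73))   # 1427 with an assumed 73-byte overhead
```

Whichever tunnel you pick, setting the inner interface MTU (or clamping TCP MSS) to match the real overhead is what avoids the fragmentation issues the comment describes.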
Pretty jealous. Wish I could get some lab colo-space haha
What is the definition of homelab then ? 🤔
FYI the 2680v4 is a 14 core 28 thread CPU. I know we are talking cores vs threads but just pointing it out.
Thank you, good call! You’re right, threads not cores. Editing Tim should have caught that!
I've got part of my homelab services running in the cloud ... currently using Zerotier but migrating to Nebula.