I Colocated My HomeLab in a Data Center

Comments • 364

  • @aure_eti • 6 months ago +341

    "It sounded like this inside"... I didn't hear any difference with my PowerEdge running behind me lol

    • @jonathan.sullivan • 6 months ago +9

      PowerEdge as in singular? lol

    • @xtlmeth • 6 months ago +4

      lol as my full rack is humming away 15ft away from me.

    • @aure_eti • 6 months ago +5

      @jonathan.sullivan Yes, as only one is currently powered up. But it's an R730xd; it's not that loud usually. Except when it's 25° in my room

    • @JohnWeland • 6 months ago +4

      dude right, I have three running about 30' from me. I was like dang that's quiet

    • @GabrielFoote • 6 months ago

      Haha, relatable

  • @FinlayDaG33k • 6 months ago +78

    One major downside of the way you've set it up: if your UDM dies, your entire cluster state may be compromised, as nodes are no longer able to see each other.
    I would personally have added a 2-port NIC (I bought some refurbed SFP+ ones for 60 bucks a pop - though I'm from Europe, so your market may differ) in that unpopulated PCIe slot, then hooked all nodes up directly to each other in a mesh (A->B, B->C, C->A) with some SFP+ DAC cables (they cost like 15 bucks a pop from FS); see the addressing sketch after this comment.
    Then use the onboard NICs _just_ for traffic leaving the cluster.
    It would add some extra costs (and some configuration complexity), but the benefits are worth it in my opinion:
    - Ceph can now run over dedicated interfaces (that are also faster when using SFP+), lowering the burden on the other interfaces (less congestion).
    - Your UDM failing only affects your uplink (your cluster state itself will otherwise remain unaffected).
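
A minimal sketch of the addressing such a three-node SFP+ mesh could use, generated with Python's ipaddress module. The node names and the 10.15.15.0/24 parent range are hypothetical placeholders, not anything from the video:

```python
import ipaddress
from itertools import combinations

nodes = ["node-a", "node-b", "node-c"]            # hypothetical hostnames
parent = ipaddress.ip_network("10.15.15.0/24")    # assumed dedicated cluster/Ceph range
links = parent.subnets(new_prefix=31)             # one /31 per direct link

# Pair every two nodes with its own point-to-point /31 (a full mesh for three nodes)
for (left, right), link in zip(combinations(nodes, 2), links):
    ip_left, ip_right = list(link)                # the two addresses of the /31
    print(f"{left} {ip_left}/31  <->  {right} {ip_right}/31")
```

Each node would then carry two of these /31 addresses on its SFP+ ports, and Ceph's cluster network could be pointed at the 10.15.15.0/24 range while the onboard NICs keep handling traffic that leaves the cluster.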

    • @juri14111996 • 5 months ago +3

      And the 8 LAN ports on the UDM Pro are internally connected over just 1 Gb/s. It's basically a 9-port gigabit switch: 8 ports facing the outside, 1 used internally to connect to the rest of the system.

  • @zeddy893 • 6 months ago +207

    Dude, you made it into the 511 building... that's insane! That's where the hub for all the Midwest backbone is located. I'm so jealous.
    Just a bit of background: when US Bank was constructing the stadium, there was an idea to demolish it since it looks just like any ordinary building. However, they were told that wasn't an option. That's when they discovered the true significance of that building.

    • @TechnoTim • 6 months ago +42

      Maybe that explains the sweet, sweet ping time! Thanks for the history!

    • @zeddy893 • 6 months ago

      @@TechnoTim Yes, the company I work for utilizes a direct connection to the backbone, connecting all the way back to our main data center. It's not an inexpensive setup, and that location serves as a major hub for all the leading internet providers. Depending on your access level, if you venture down to the basement, you'll come across secure rooms that are off-limits, reserved for major companies like CenturyLink, Xfinity, Spectrum, and others.

    • @stephendennis5969 • 6 months ago +34

      Haha I would have loved to have been the one who told the developers “no you can’t tear down the major communications hub for the city and half the country. “

    • @TheDillio187 • 6 months ago +12

      The 511 building is legendary. The TW Telecom colo in Minnetonka, Cyxtera in Shakopee, and the 2 Databank colos are good visits, too.

    • @mattaudio • 6 months ago +2

      I choose ISPs based on 511 peering.

  • @izatt82 • 6 months ago +160

    A tip: mount the UDM on the back side of the rack and gain back the rack space you used to run cabling to the back.

    • @TravisNewton1 • 6 months ago +30

      Exactly. Those cables are now consuming an additional U. Even in a shared rack, that extra U costs something, and a U wasted on cables is an expensive waste.

    • @TheSHELMSY • 6 months ago +19

      You need to worry about airflow and where the UDM pulls its air. If it's front-to-back like all the other servers, it would be pulling hot air from the back of the cabinet and dumping it out the front. This is why most enterprise network switches have models with back-to-front airflow.

    • @TechnoTim • 6 months ago +42

      Thank you! Great idea! I proposed this a few times but they said it was fine in front. We'll see if they change their minds once the rack starts to fill up! 😂

    • @jonathan.sullivan • 6 months ago +11

      I'm actually surprised there isn't a top-of-rack switch they all just plug into and get their static IPs from. I rarely had to bring my own networking equipment for my colos.

    • @hw2508 • 6 months ago +3

      It might be tight, but I think there was space to run the cables to the sides, if this 1U ever becomes a problem.

  • @zeddy893 • 6 months ago +100

    Also, regarding your question: given that your ISP is located in the same data center as you (lol), I recommend sticking with the hardware site-to-site VPN. It's hard to find a better or more reliable connection. From my perspective, opting for a service like ZeroTier would only introduce unnecessary overhead to your current setup.

    • @Krushx0 • 6 months ago +9

      I would also stick with the site-to-site VPN; I would never trust others to handle or be part of my private VPN connection in any manner. Tried and tested through the ages. The thing you should ask yourself is: why would you replace it? What is it that you're not satisfied with in the current site-to-site VPN setup? What benefit would the alternative give you over site-to-site? Would that benefit improve your situation or possibilities?

    • @victorzenteno1166 • 6 months ago +2

      Hardware site to site for sure, nice setup

    • @RogueRonin2501 • 6 months ago +1

      What about the option of a self-hosted ZeroTier controller? I've been using that setup for quite a while now and have gotten a lot of benefit from it, though I'm not keeping my hardware in a data center. ZeroTier can also be a good tool for granular access control.

    • @zeddy893 • 6 months ago

      Underneath ZeroTier and all those other easy-to-configure options, the VPNs run on WireGuard. If you're hosting the solution at home, self-hosting is great as long as your ISP's peering is good. If the ISP doesn't have good peering, your VPN can become unstable. However, self-hosting does give you some privacy if you have privacy concerns.

    • @denton3737 • 6 months ago

      As an ISP network engineer, I second this.
      Although you can do some cool things with Tailscale and ZeroTier, what you want from co-located equipment is reliability. The more complex things get, the more likely they are to have problems.

  • @JeffGeerling • 6 months ago +54

    Hey I like that shirt you're wearing at the end! 😂

    • @TechnoTim • 6 months ago +8

      Thanks for a great design Jeff!

    • @AndyIsHereBoi • 6 months ago

      Funny seeing you here

    • @JeffGeerling • 6 months ago +5

      @TechnoTim You're welcome! I have your dark mode shirt too, it just hasn't hit the rotation for a day when I've been recording yet. But it'll show up soon enough :)

  • @EricInTheNet • 6 months ago +22

    I went Tailscale after having OpenVPN; the biggest upside was the integration of every device: iPhone, iPad, random laptop(s), NAS in a tertiary location - suddenly they were all part of an overlay network. Since then, I have literally forgotten where some devices are located because it has become so seamless. 😂
    100% recommend Tailscale. I just wish the UDM had native support (in the management interface) for a Tailscale exit node.

  • @keyboard_g • 6 months ago +18

    It would be interesting to see ping time over a Tailscale network to those same machines.
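
A rough sketch of that experiment, comparing RTT to the same colo machine over the site-to-site address and over its Tailscale address (Linux-style ping flags; both IPs below are hypothetical placeholders):

```python
import re
import subprocess

targets = {
    "site-to-site VPN": "10.0.30.10",       # colo host via the hardware tunnel (placeholder)
    "Tailscale":        "100.101.102.103",  # the same host's Tailscale IP (placeholder)
}

for label, ip in targets.items():
    # Send five echo requests and pull the average out of the "min/avg/max/mdev" summary
    out = subprocess.run(["ping", "-c", "5", ip], capture_output=True, text=True).stdout
    m = re.search(r"= [\d.]+/([\d.]+)/", out)
    print(f"{label:17s} {ip:16s} avg RTT: {m.group(1) if m else 'n/a'} ms")
```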

  • @armedscubasteve • 6 months ago +4

    I've always wanted to colocate, so this is pretty cool from a HomeLab perspective of how this all works. Yeah I can look at colocation videos online but probably none from a homelabber. Thanks Tim!

  • @ivanmalinovski7807 • 6 months ago +37

    Man, $45/month is so cheap for that service. I wish we had something like this in Denmark.

    • @emanuelpersson3168 • 6 months ago

      I bet there is in Köpenhamn?

    • @ivanmalinovski7807 • 6 months ago

      @@emanuelpersson3168 Nothing that I've been able to find. It all targets organisations at much higher costs.

    • @RobinCernyMitSuffix • 6 months ago +4

      Start your own community rack? It's not that common, but some computer clubs and similar groups do it: they rent a rack, or multiple, and everyone shares the expenses, usually with a little extra for the organization.

    • @karliszemitis3356 • 6 months ago

      Try contacting hackerspaces. For example, when I lived in CPH I went to Labitat. Okay, they don't really have a data center, but you could get rack space with decent internet for cheap. Or they would know a place to co-locate cheaply.

    • @lilyydotdev • 6 months ago

      I vaguely know a guy with colocation space in Denmark. His company is called something like Stacket Group (I think?) and he runs some brands and stuff from it. Maybe you can get in contact with them and see if they will rent you space. I believe they're connected via GlobalConnect and TDC.

  • @Trains-With-Shane • 6 months ago +6

    It's been a couple of years since I've been in a data center, but it's amazing how really cold air can become really warm air in the very short amount of time it's inside the components of a server rack. I got to watch when they built out the new air handler for the data center at work, and the ducts were big enough to walk around in... upright!

    • @marcogenovesi8570 • 6 months ago +1

      they watched Die Hard and thought "why crawling through when you can walk"

  • @seantellsit1431 • 6 months ago +12

    BTW, you can save 1U of space (above your UDM) by moving your UDM to the back of the rack... that's where all of your Ethernet/SFP ports live for your servers. This is how most people network their servers, and also a reason why enterprise switches have back-to-front airflow.

    • @npham1198 • 6 months ago

      That’s also if depths are within reason

    • @kyrujames • 6 months ago

      The UDM probably doesn't have back-to-front airflow and would just be eating hot air at that point.

  • @matthewlandon1697 • 6 months ago

    I did something similar a few years back and am still doing it! It's great to have it in a DC where the temperature remains the same and you can add / expand where required 🎉

  • @DavidPerrettGM • 6 months ago +1

    Hey Tim, cable ties and wire trigger my DC-OCD - velcro is your friend. I would also caution you on the UniFi in the DC; having a single point of failure in front of the cluster could lead to sad times. OPNsense clustering is extremely robust, it's also getting a lot more updates than pfSense, and it runs on lightweight hardware (I repurposed a couple of old Sophos XG 115s about 18 months ago - super stable). Love the vids - thanks for putting them out there.

  • @TylerBundy260 • 6 months ago +3

    Good ol' 511. I'm definitely going to look into getting some stuff moved!

  • @Whiskey7BackRoads • 6 months ago +2

    I vote for Tailscale: for one, I would like to see more of it in videos, and it works great. I have remote repeater sites connected and 2 ranches in different states. It does require pretty frequent updates, but that seems to be the only drawback besides not hosting it myself. Thanks for the videos, enjoy them a lot.

  • @dhelmick • 6 months ago

    This is awesome! I moved to the Twin Cities a year and a half ago and to know these things are a short drive away is really neat. I am currently working on my RHCSA cert and you have been a good source of motivation and inspiration during that journey. Thank you for doing what you do.

    • @TheDillio187 • 6 months ago

      there are a lot of colo facilities here. Lots of cool stuff to see out there.

  • @Liebe-Futurel • 16 days ago

    I recently went to do some maintenance on one of my company's backbones and thought I recognized the DC. Forgot I watched this video and had to find it to confirm we're colocating in the same building haha

  • @mne36 • 6 months ago +1

    I was thinking about doing this for a while. Excited to watch this video.
    Fellow Minnesota resident 😄

    • @mne36 • 6 months ago

      Thank you for the very educational video! Somehow every time I think of doing something, you make a video a month later explaining how it can be done haha

  • @mllarson • 6 months ago +1

    Oohh I had no idea you also were in Minnesnowta! Hope you are ready for the almost two feet of snow coming for us this weekend ❄

    • @TheDillio187 • 6 months ago

      I want to thumb down this comment but you're not a bad person. lol.

  • @edb75001 • 6 months ago +1

    Thanks for this. I've been thinking of doing something similar here in the Dallas/Ft. Worth area. Running mine at home is getting loud and expensive, and it dumps so much heat.

  • @krystophv • 6 months ago +2

    I'd love to see some content around Nebula as an overlay network. Defined Networking has a pretty generous free tier in the hosted space.

  • @Deepfreezing • 6 months ago

    Excellent move! Not having to worry about power issues is a big one.
    Here are my 2 cents:
    Moving the switch to the hot side of the rack is something you want to think through. I dealt with this for years until Cisco finally started offering fans with reverse airflow, so that a) you're not obstructing the airflow in general and b) you're not trying to cool your switch with hot air from the servers.
    It seems there is no side cable management in the racks? I started to use slim cables - less space needed, better airflow. Plus they might fit through the side of the rack.
    If you are using single-PSU servers, you might want to invest in an ATS so you can take advantage of dual power sources. As a bonus they offer environmental monitoring, and some even offer remote access to reboot your equipment (hands up, who has had to run to a Cisco switch and pull the plug ;)

  • @jonathan.sullivan • 6 months ago +1

    Thanks for showing the legwork. I kinda had a feeling you got in on a deal when you agreed to colo; prices are insane these days. Cheaper to rent a dedi server and not worry about hardware failure costs.

  • @kenrobertson8239 • 6 months ago

    Would love to see you cover more colo type stuff! I had equipment in colo back in the 2000s and loved it, and just recently set up some stuff in a colo to augment my homelab.
    In colo, power is almost always your biggest expense, so half rack vs full rack is a small difference for a simple circuit. I've had high setup costs before when they have to set up additional racks. It's odd they make you pay for them to set up the space for you.
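
As a back-of-the-envelope illustration of why power dominates, a tiny calculation with made-up numbers (the draw and rate below are hypothetical, and colos usually bill per amp of circuit with cooling and redundancy baked in, so the real price sits well above this floor):

```python
watts_drawn = 300          # assumed average draw of the gear in the rack (W)
hours_per_month = 730      # roughly 24 * 365 / 12
usd_per_kwh = 0.12         # example utility rate, not a colo rate

kwh = watts_drawn * hours_per_month / 1000
print(f"{kwh:.0f} kWh/month -> about ${kwh * usd_per_kwh:.2f} at raw utility rates")
```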

  • @CRK1918 • 6 months ago +1

    Hi from Minnesota! I've watched your YouTube channel for a while, but I did not know you live so close to me.

  • @SierraGolfNiner • 5 months ago

    My buddy and I did this a few years ago. Around here there is a company, Hurricane Electric, that is basically the Costco of datacenters. They are mostly a transit provider, but have a few LARGE datacenters in the Bay Area. You can get 1 gig, 15 amps, full cab for $400/mo.

  • @diegoalejandrosarmientomun303 • 6 months ago

    Amazing video! Thanks for all your advice Tim, it has helped me out a lot during my homelab journey. Now it's time to take it to the next level 🥳

  • @thegreyfuzz • 6 months ago +3

    I almost miss having to drive 30 miles to my ISP where my servers were co-located... 25 years ago! Back then it was a real treat to have 100M between servers with a T1 to the internet, and dealing with 2 dialup lines in MLPPP to access them from home. Looking at my full 42U cabinet now... maybe co-lo is a real option again?

  • @shadowperson9 • 6 months ago

    The 511 building is a pretty cool building if you're into tech. I worked out of it for a short time a few years ago. The tenant list is interesting and it has a storied history as well. My understanding is that it was built as an R & D facility for Control Data Corp. Across 6th St to the SW is the Strutwear Knitting Company building, made historic for other reasons.

  • @martinwashington3152 • 6 months ago

    I went halfway from home DC to colo: I purchased a /28 subnet from Zen Internet while also allowing some clients to utilise the shelves within my home.

  • @toryelo • 6 months ago +3

    If you have full network administrative privileges, a hardware-based site-to-site VPN is the best choice, rather than mesh. Although a mesh network seems to solve many complex network configurations at first glance, from a site perspective, mesh addresses the complexity of peering between multiple sites. Moreover, you only have two sites here.

  • @mrhidetf2 • 6 months ago

    I use Nebula as an overlay network and am really happy with it so far. Seamless connection between all the server and client devices no matter where they are, as long as there is an internet connection

  • @novistion • 6 months ago +1

    I have a very similar setup to yours, Tim. I have free colocation space from my employer, and my stuff at home. I messed with this a lot over the last year, and site-to-site in my opinion is the way to go (even using Site Magic as you seem to be); the convenience (and troubleshooting) are worth it. I have Tailscale on a few devices, but that is mainly for an "oh shit" when I break something. I'll post some more in the Discord.

    • @TeslaMaxwell • 6 months ago

      Agreed, it doesn't hurt to run both. For instance, the past few days my TS exit node container was acting super weird and rebuilding it didn't fix it... it wasn't until a few hours later that I discovered Snort was blocking part of the traffic. I'd definitely have both implemented for a PROD deployment.

  • @movax20h • 4 months ago

    Nice indeed. I also started moving my homelab to colo, and managed to snag some 10G connectivity, a 1U spot, good pricing, and a good location. Used modern hardware (CPU, memory, storage, NIC), and it's super speedy. Easily getting 10G to my home (and will expand to 25G once the colo owners upgrade their gear to 25 or 100G), and getting 0.54ms RTT from home. Nice. I already want another server somewhere (maybe another DC), just for fun.

  • @blackphidora • 6 months ago +3

    If I were you, I would host all my coordination servers at the colo: it has a static IP, you can set up a NetBird/Tailscale subnet router, and still have an SSH back door if the SDN fails. You can also set up a subnet router at home.
    The benefits will be similar to a site-to-site.
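
A hedged sketch of that subnet-router idea: enable forwarding on a colo host that already has Tailscale installed and advertise the colo LAN into the tailnet. The 10.0.30.0/24 route is a hypothetical placeholder, and the route still needs to be approved in the admin console (or in Headscale) before peers will use it:

```python
import subprocess

COLO_SUBNET = "10.0.30.0/24"   # placeholder for the colo LAN

# Allow the host to forward packets between the tailnet and the colo LAN
subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)
subprocess.run(["sysctl", "-w", "net.ipv6.conf.all.forwarding=1"], check=True)

# Advertise the colo LAN to the rest of the tailnet
subprocess.run(["tailscale", "up", f"--advertise-routes={COLO_SUBNET}"], check=True)
```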

  • @smalltimer4370 • 6 months ago

    Excellent pathway, colocation being the natural evolution of homelabbing :)

  • @brierepooc8987 • 6 months ago

    TechnoTim is the best on YouTube. He's still humble and hasn't let the publicity get to his head! Thanks man!

  • @G4rl0ck • 6 months ago +1

    Been using tailscale and it keeps blowing my mind!

  • @jlt4219 • 6 months ago

    Cool topic! Wondering what everyone uses for remote connections in their homelabs. Mesh by software vs hardware sounds like a great video idea!

  • @sarah1202 • 6 months ago +1

    Hi, I personally manage multiple colo spaces, and I use hardware: a redundant-path VPN + a dynamic routing protocol (OSPF).
    Also be sure to have a second way in if your main tunnel goes down (e.g., your IPsec endpoint is down).
    PS: I'm not using UBNT stuff for that setup.
    Also, don't forget to document your IP usage in something like an IPAM solution and think about a good addressing plan. It helps a lot.
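
A small sketch of what "a good addressing plan" can start as: carve one parent prefix into per-site, per-purpose blocks and feed the result into whatever IPAM you use. The 10.20.0.0/16 parent and the block names are hypothetical:

```python
import ipaddress

parent = ipaddress.ip_network("10.20.0.0/16")
blocks = parent.subnets(new_prefix=24)        # hand out /24s in order

plan = {
    "home-lan":      next(blocks),
    "home-services": next(blocks),
    "colo-lan":      next(blocks),
    "colo-storage":  next(blocks),
    "vpn-transfer":  next(blocks),
}

for name, net in plan.items():
    print(f"{name:14s} {net}  ({net.num_addresses - 2} usable hosts)")
```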

    • @TechnoTim • 6 months ago +1

      Great tips, thank you! I wish I were better at networking!

  • @kenny45532 • 6 months ago

    Hi Tim, use the Headscale control plane. I use it myself, and it can double as a good video tutorial.

  • @jsbaltes • 6 months ago

    Wow! Cool stuff. Pinging your remote units faster than the ones in your house!? That had to feel good.

    • @TechnoTim • 6 months ago

      Oh yeah!

  • @izproximity • 5 months ago

    Another side note: the 511 building is a carrier hotel as well. It has pretty much every single ISP that is in Minnesota.

  • @rallisf1 • 6 months ago

    I haven't used colocation for about a decade. Renting individual VPSes is way cheaper and more maintainable than anything else without sacrificing performance (as long as you pick a good host). That said, I use Tailscale for pretty much everything: my homelab, my office, my commercial servers, my clients... I love how I can manage ACLs easily and quickly give anyone access to exactly what they need. Keep in mind that I also run my own DERP server. It shouldn't make much of a difference (speed/safety), but it was easy enough to self-host.

  • @GeekOfAllTrades • 6 months ago

    sharing a shared space? it's like colo-ception!
    Flipping Genius! 🖖

  • @PedroFonseca5 • 6 months ago

    Definitely a nice future video: how to have a hyper-converged dual-site Proxmox cluster using some routing and tunnelling tech.

  • @alanbraithwaite4724 • 6 months ago

    Tailscale 100%. Been using it for a few years and wouldn't look back. The ease of setup and management is unparalleled and they have pretty big ambitions to be the de-facto VPN/Access/Networking company and I believe have built the team to achieve that.

    • @alanbraithwaite4724 • 6 months ago

      Are you able to share the range of cost for the setup btw? I didn't catch that in the video.

  • @Mugruokgt • 5 months ago

    The final stage of every home lab - data center

  • @JoshDike-lx8gl • 6 months ago

    Personally, friends of mine and I run Tailscale between our houses so we can back up each other's data. We also plan to add family to it soon for their backups as well.

  • @ronm6585 • 6 months ago

    Looks good Tim, thanks for sharing.

  • @Jaabaa_Prime • 6 months ago

    Rather than being tied to a supplier's hardware/software-dependent solution, I would set up a Tailscale/Headscale solution. There is far more flexibility in a VPN/SDN mesh than in a vendor-specific site-to-site solution.

  • @RyanJones26 • 6 months ago +1

    Tailscale with subnet routers for the win. Site-to-site VPNs are cool, but if you add a third site or more, that becomes annoying to manage unless you use something like OSPF or BGP.

  • @Kaidesa • 6 months ago

    I would honestly do both. Having the hardware-based VPN is nice, but if something ever happened and that UDM messed up, then with something like Twingate or Tailscale you could connect remotely and fix things instead of making a visit, so long as the network connection were still somehow intact. Redundancy is never a bad thing.

  • @aflawrence • 6 months ago

    I think that's the old AT&T building. I did grad school in St. Paul and remember passing by that area numerous times when I went across the river.

  • @jsnfwlr • 6 months ago

    I have a few VPSes on different cloud providers that I wanted to link together over a private network, plus provide access to backup storage on a server in my homelab. Since this doesn't require multiple users or access control lists, Tailscale was overkill, so I just set up my own WireGuard mesh, which has been working really well for almost a year now.
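
For illustration, a sketch of how such a small WireGuard full mesh can be templated: emit a [Peer] stanza for every other node. The hostnames, endpoints, overlay IPs and placeholder keys are all hypothetical:

```python
nodes = {
    "vps-1":   {"endpoint": "vps1.example.com:51820", "wg_ip": "10.99.0.1", "pubkey": "<PUBKEY-1>"},
    "vps-2":   {"endpoint": "vps2.example.com:51820", "wg_ip": "10.99.0.2", "pubkey": "<PUBKEY-2>"},
    "homelab": {"endpoint": "home.example.com:51820", "wg_ip": "10.99.0.3", "pubkey": "<PUBKEY-3>"},
}

for name in nodes:
    print(f"# peers to append to wg0.conf on {name}")
    for peer, cfg in nodes.items():
        if peer == name:
            continue                      # a node does not peer with itself
        print("[Peer]")
        print(f"PublicKey = {cfg['pubkey']}")
        print(f"Endpoint = {cfg['endpoint']}")
        print(f"AllowedIPs = {cfg['wg_ip']}/32")
        print("PersistentKeepalive = 25")
        print()
```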

  • @willernst • 5 months ago

    "Although it is pretty cool in there." I see what you did.

  • @TrTai • 6 months ago

    I'd stick to the site-to-site VPN, you've basically stumbled into the most ideal setup using that and I don't see a lot of benefit to going for the overlay network route in this scenario, as cool as something like tailscale is. Awesome seeing something like plover and I'll have to see if I can find something like that more local. Been kind of wanting to move to a colo for some of my equipment but even getting quotes is a bit of a headache locally.

  • @jbaenaxd • 6 months ago

    I have self-hosted and cloud servers in different countries, and I connect to them from many different places, so that's why I'm using a mesh VPN. But since you have both sites with the same ISP, it doesn't really matter. If you are trying to join cloud instances to the network, though, I'd go with a mesh VPN. Sometimes I need to test something in the cloud, so I automated the instance deployment so that it automatically connects to Tailscale, and after the instance is terminated or powered off for some time, it gets removed from my Tailscale account. That's very handy.

  • @alldaytherapy2919 • 6 months ago

    All I can say is, Tailscale slaps brother. I have been extremely grateful that it is an available option for small homelab users like myself. It may not be a bad idea to at least test it out.

  • @Clarence-Homelab • 6 months ago

    Either site-to-site WireGuard or an overlay network such as Tailscale or Netbird. Preferably one of the latter.

  • @accik • 6 months ago +1

    Had issues with a self-hosted Tailscale server; would like to see content around that. I too think that colocation is cool, but it's too expensive for hobby projects.

  • @anymoustrend4074 • 6 months ago +1

    Maybe a dumb question, but what is the upside to putting your server in a data center instead of on your home local network?

  • @donovangregg5 • 6 months ago

    You may want to put your UDM Pro on the back side of the rack for cable management; the heat shouldn't be too much of a concern with it. If you move it to the back, you will also save the 1U of space your cabling runs through.

  • @shiysabiniano • 6 months ago

    BGP for site-to-site and an overlay network like ZeroTier with a self-hosted controller would be a great setup

  • @wodn184fn8 • 6 months ago

    I'm a Ubiquiti fan, so stay like this.

  • @krishnachaitanya4822 • 6 months ago +15

    I would go with Tailscale, but if I want to host my relays as well I would go with Netbird.

  • @CharlieMartorelli • 6 months ago

    Cool project, can't wait for more videos.

  • @confusedbeard69 • 6 months ago

    Dang, those prices are high.
    Here in Swedenland a 10U colo with a dedicated gigabit full-duplex port, a number of IPs and no data limit is about $400/month, and that was just a quick look at a friend's company.

  • @youwut8378 • 6 months ago

    OMG, finding out that you are in Minnesota... this video is awesome!

  • @maxmustermann9858 • 6 months ago

    I would use a mesh network, mainly because I'm a big fan of Zero Trust, and also because you become more independent of the network at home or in the DC. But I would recommend something like Nebula. It's super fast and lightweight. It doesn't have a nice UI unless you use the hosted version, but when you use Ansible for everything, like setup and key rotation, it becomes really easy. On top of that you can use something like NetBird, a German product. It's also a mesh VPN solution that mainly does the same thing but with nicer auth integration like SAML etc. I would use that for things like mobile devices or PCs, and Nebula for the backend stuff.

  • @jimmyscott5144 • 6 months ago

    I don't know the pros and cons of either one, so I'd like you to cover a little bit of both if possible in the next one.

  • @smitty683 • 6 months ago

    I would go with a hardware-based VPN since that is the closest thing to what you would want in an enterprise setting, being an EPLAN (depending on what you want from this rack).

  • @marcosoliveira8731 • 6 months ago +1

    It's amazing how cheap tech costs are in the USA. I have to pay at least 5 times the prices you showed.
    I'd use hardware to connect to the remote data center.

  • @shabsZA • 6 months ago

    Welcome to being colocated; from here it's up and up.
    Do yourself a favor and try to get a tour of a Tier IV facility 😊

  • @FredericDT • 6 months ago

    I would choose as follows:
    1. A site-to-site VPN, even using Tailscale/WireGuard as a site edge, for the major use case - in case some system does not support TS/WG, and to reduce configuration time for a simple testing-purpose project.
    2. TS/WG as a backup medium to access one host in that rack in case the UDM VPN fails.

  • @GodAtum • 6 months ago +1

    Why didn't you consider Hetzner servers, or just using DigitalOcean or AWS?

  • @Amwfilms • 6 months ago

    Awesome journey. If what you have is safe and secure, you may be adding more latency and a speed bottleneck by using Tailscale.

  • @friendlydawusky • 6 months ago

    I personally run Tailscale myself to connect my cloud and homelab stuff together for security, and only expose what I need to and when/where I need to, though recently I've been looking into a self-hosted solution for privacy/security reasons.

  • @maxmustermann194 • 5 months ago

    IF YOU PLAN FOR MULTI-GIGABIT ROUTING TO THE INTERNET: The integrated switch in the UDM Pro is essentially a GbE switch hanging off the router, so it always limits internet and inter-VLAN routing.
    If you don't use the internal switch but instead connect a 10G switch via the UDM Pro's SFP+ port, then you can use 10G towards the internet and for inter-VLAN routing. This is reduced to about 3.5 Gbit/s with IPS enabled.
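
If you want to sanity-check that kind of routing limit on your own gear, a quick sketch using iperf3's JSON output against a host in another VLAN (assumes `iperf3 -s` is already running on the target; the address is a placeholder and JSON field names may differ across iperf3 versions):

```python
import json
import subprocess

TARGET = "10.0.40.10"   # host in a different VLAN behind the router (placeholder)

result = subprocess.run(["iperf3", "-c", TARGET, "-t", "10", "-J"],
                        capture_output=True, text=True, check=True)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"Inter-VLAN throughput: {bps / 1e9:.2f} Gbit/s")
```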

  • @PcaplLite • 6 months ago

    I'm sold on overlay networks like Tailscale and ZeroTier. Enjoy the new digs!

  • @mikecharest • 6 months ago +1

    I was not sure what you ended up paying. The reason I ask is: is the cost over time of providing your own HW more or less than just renting compute?

  • @MaximumDIY • 6 months ago

    I used to use a hardware VPN and moved to Tailscale. I was hesitant at first, but I don't think I'd ever go back. I can simply add another device like my phone or laptop, and in 2 minutes I'm online with all of my devices.
    A hardware VPN is kind of the old way of doing things; for me it was time to learn something new and change with the times.
    That's my two cents, looking forward to your next video.

    • @mrmotofy • 4 months ago

      Not to mention all the firewall complications on both ends

  • @itsthebofh • 6 months ago

    I would keep my main physical infra on the site-to-site, then set up a software-defined network for the virtual systems. That way you get the best of both worlds: the flexibility that comes with software and the reliability that comes with hardware solutions.

  • @Pro-cheeseburger • 6 months ago

    It's interesting: as soon as you said you used a UDM Pro for VPN, I thought, why didn't he just use Tailscale or a Cloudflare Tunnel? IMO I would run a Cloudflare LXC on each box (assuming they are Proxmox) and you'd have connectivity to them. Tailscale would also work great. Looking forward to seeing what you go with!!!

  • @LucS0042 • 6 months ago

    Even if you go the 'regular' VPN route, definitely try an overlay network like Tailscale (or Headscale) for the fun of it.

  • @SHUTDOORproduction • 6 months ago

    It's no question: do the site-to-site VPN; it's more secure, easier to configure and manage, the whole 9 yards. Also, as some have already said, you would likely benefit from having the UDM on the back; it typically only goes in front in homelab scenarios or in dedicated all-networking racks. That's pretty cool though, I never knew you were so close. I used to work for an ISP/MSP that owns the fiber into that stadium and colocates at that DC, although I never went there.

  • @brock2633 • 6 months ago

    Didn’t know Techno Tim was in MN. I’m in the South Metro. Cool video again.

  • @al.ignatenko • 6 months ago

    Hey Tim, thanks for your videos; I like them and learn a lot too. I totally get why you use self-managed software, but why do you bother with your own racks and HW? Why not simply rent servers or VMs from a cloud provider? It's not expensive if you use it wisely. I'd definitely move my own self-hosted public-facing apps to the cloud, either to a container environment or, worst case, simple VMs.

  • @adambahe9309 • 6 months ago

    Yep, I live in Chicago and colo at 350 down on Cermak. My ping is 500usec. What’s great is that pretty much every game server hosts here in downtown Chicago. So my ping to everything shows 0 milliseconds.

  • @Redd00 • 6 months ago +1

    I had a site to site VPN for the longest time just to connect to my permanent address but it got less and less reliable to the point where I installed tailscale as a container (second one on another node in the cluster) and haven't looked back since. However I think because your ISP is located in the same data center I would just keep it to a site to site VPN.

  • @ToucheFarming • 6 months ago

    I ran a few hosts years ago and looked into colocating, and the cheapest places I found were either NJ, NY, or TX. Idk why they're so cheap, but they are, which is why lots of hosts have servers in those areas.

  • @accrevoke • 6 months ago

    Wouldn't sharing colocation rack space be quite a risk? You need to set up some encryption, chassis intrusion detection, BPDU filtering, MAC security...
    For the connection back home, a mesh network or a Cloudflare container! Since it doesn't look like you have an OOB management device (and you probably don't want datacenter remote hands to help you power cycle), I would suggest some sort of IP KVM like Opengear or PiKVM.

    • @chromerims • 6 months ago

      👍
      I added a comment sort of along these lines

  • @annihilatorg • 6 months ago

    Also verify that your power supplies are 220v capable. Most real server PSUs are full-range, but I've seen smoke on more than one occasion.

  • @izproximity • 5 months ago

    I know what the first DC quote is. I used to work for them lol.. The location you were looking at was either Edina or Eagan amirite?

  • @charlespickering • 6 months ago

    Just did this, and besides all my coworkers thinking I was insane, it's been great. $90/mo for 4U. Tailscale on an EdgeRouter has been dog slow though. Now that I have a static IP and a stable site, I think I will switch to standard IPsec for the site-to-site VPN and use a regular VPN server to connect remotely. The MTU is much lower with the WireGuard overhead than with IPsec, and it occasionally gives me issues with my vSAN Witness.
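
For reference, the arithmetic behind that MTU remark, using the commonly cited WireGuard framing numbers over an IPv4 underlay (a sketch, not a measurement of this setup):

```python
outer_ipv4 = 20   # outer IPv4 header
udp        = 8    # UDP header
wg_header  = 16   # type/reserved (4) + receiver index (4) + nonce counter (8)
auth_tag   = 16   # Poly1305 authentication tag

overhead = outer_ipv4 + udp + wg_header + auth_tag   # 60 bytes
print(f"WireGuard overhead over IPv4: {overhead} bytes -> inner MTU {1500 - overhead}")
# wg-quick defaults to MTU 1420 to stay safe even with an IPv6 outer header
# (40 bytes instead of 20), which is why the tunnel MTU drops noticeably.
```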

  • @anthonydefallo9295 • 6 months ago

    Pretty jealous. Wish I could get some lab colo-space haha

  • @GrimSpec • 6 months ago +4

    What is the definition of homelab then? 🤔

  • @thadrumr • 6 months ago +3

    FYI the 2680v4 is a 14 core 28 thread CPU. I know we are talking cores vs threads but just pointing it out.

    • @TechnoTim • 6 months ago

      Thank you, good call! You’re right, threads not cores. Editing Tim should have caught that!

  • @techaddressed • 6 months ago

    I've got part of my homelab services running in the cloud ... currently using Zerotier but migrating to Nebula.