Exclusive Insight: Visiting one of the Most Advanced Datacenters in the World

  • Published on 8 May 2024
  • Thermal Grizzly Minus Pad Extreme:
    www.thermal-grizzly.com/en/pr...
    ---------------------------------------------------------
    Support me on Patreon:
    / der8auer
    ---------------------------------------------------------
    Save 10% on your iFixit purchase: DER8AUER10
    eustore.ifixit.com/der8auer
    Pro Tech Toolkit: bit.ly/2JOFD8f
    ---------------------------------------------------------
    Find my products at Caseking:
    Delid Die Mate 2: bit.ly/2Rhv4y7
    Delid Die Mate X: bit.ly/2EYLwwG
    Skylake-X Direct Die Frame: bit.ly/2GW6yyC
    9th Gen OC Frame: bit.ly/2UVSubi
    Debug-LED: bit.ly/2QVUEt0
    Pretested CPUs: bit.ly/2Aqhf6y
    ---------------------------------------------------------
    My Equipment:
    USB- Microscope*: amzn.to/2Vi4dky
    My Camera*: amzn.to/2BN4h2O
    (*Affiliate Links: If you buy something, I will get a part of the profit)
    ---------------------------------------------------------
    Music / Credits:
    Outro:
    Dylan Sitts feat. HDBeenDope - For The Record (Dylan Sitts Remix)
    ---------------------------------------------------------
    Paid content in this video:
    - Minus Pad Extreme Promotion (not literally paid, but I own part of the company)
    Samples:
    - /
    ---------------------------------------------------------
    Timestamps
    0:00 Intro
    2:17 Advertisement
    2:36 Entrance Power & Data
    5:31 Water treatment
    6:14 Power, UPS & Diesel generator
    12:06 Power distribution
    14:34 Cooling
    18:22 Radiators on the roof
    21:44 Fire prevention
    22:19 Cooling the water
    26:49 Why all the redundancy?
    29:24 IBM Power10, Nodes & Controller
    31:31 Maintenance only by IBM
    33:42 Power10 in detail
    35:30 Why IBM Power?
    37:46 x86 Rack-Server & Blade-Server
    40:48 IBM Mainframe Z15 & Data storage
    45:57 Data Center Efficiency
    47:06 Outro
  • Science & Technology

Comments • 555

  • @izzieb
    @izzieb 2 years ago +162

    Dr Oetker has an IT business?!! They really do absolutely everything, not just pizzas.

    • @devilboner
      @devilboner 2 years ago +19

      Just imagine; their company cantina is just frozen pizzas and plastic cups of chocolate mousse EVERY DAY!

    • @izzieb
      @izzieb 2 years ago

      @@devilboner I'll take the mousse, but I'll leave their pizzas.

    • @cnst2657
      @cnst2657 2 years ago +10

      Not just any pizzas, they have fishstick pizzas.

    • @der8auer-en
      @der8auer-en 2 years ago +53

      Their cantina was a restaurant where they cook fresh for everyone :D Actually pretty impressive, too

    • @AlexKidd4Fun
      @AlexKidd4Fun 2 years ago +6

      Just think.. In the USA, Amazon used to just be a book store. 😉

  • @AJ_UK_LIVE
    @AJ_UK_LIVE 2 years ago +166

    Wow.
    Two takeaways from this video. The first is, as ever, that your content and your commitment to your international viewers are through the roof. Thank you for the English content, Roman. It's one thing to do a de-lidding video in two languages, but this is unreal.
    The second is this place! Amazing attention to detail in every single thing. Just fascinating. I have also learned that perhaps my hardware is not as top-tier as I thought /s

  • @osgrov
    @osgrov 2 years ago +261

    This is fantastic! Wow, I'm blown away by that site.
    I work in a datacenter myself, or at least I thought I did before I saw this.. Lol. This is a completely different level than what I'm used to.
    Really looking forward to part 2. :)
    Keep it up Roman, this was an amazing video.

    • @MIK33EY
      @MIK33EY 2 years ago +11

      It’s mind-blowing, isn’t it? Things I’d never thought of are in this installation, e.g. four randomly routed fibre links that don’t cross, and purified water.

    • @MrMartinSchou
      @MrMartinSchou 2 years ago +2

      It's not too surprising. WalMart was/is one of the largest users of data centers in the world. These types of businesses end up creating an enormous amount of data, and they need to know what to do with it.

    • @rstidman
      @rstidman 2 years ago

      @@MrMartinSchou Der Bauer is a German supremacist.

    • @aaronhartwig8007
      @aaronhartwig8007 2 years ago +1

      Haha. I was exactly the same. I work in a Data Centre and this one was totally next level!

    • @stang9806
      @stang9806 1 year ago +1

      I work in a data center as well, but seeing them handle the parts with no gloves made me wince a bit.

  • @Murphistic
    @Murphistic 2 years ago +51

    OK, hands down this is the most awesome data centre tour I've ever found on YouTube.
    My mind was blown multiple times: the non-crossing infrastructure, spill gate for battery acid and the reduced oxygen level.

  • @yocobicus
    @yocobicus 2 years ago +29

    Thank you so much to everyone who made this project possible. I've been waiting for a data center walkthrough with this much in-depth knowledge for the last 5 1/2 years.

  • @TimmyXaero
    @TimmyXaero 2 years ago +7

    You've outdone yourself on this video. Thoroughly enjoyed the tour; very interesting to see all of the aspects of making a stable room for the servers to run smoothly. Thank you for the very unique chance to see behind the scenes of a data center. Danke Roman. ;)

  • @hyperionxxxxx
    @hyperionxxxxx 2 years ago +5

    Congrats on getting in there to see and touch all those things and thank you for sharing it with us Roman. Seeing the datacenter up close was awesome, seeing your excitement as you described it all, even better.

  • @diegofernandez4789
    @diegofernandez4789 2 years ago +2

    Thanks for the tour. Love the details you took care about. Can't wait for the continuation.

  • @markaerovtec
    @markaerovtec 2 years ago +5

    Stunning video Roman. Totally amazed at the complexity of the facility.

  • @truculenttabasco
    @truculenttabasco 1 year ago

    These videos are amazing Roman, great work. Thanks for taking the time to share all this.

  • @SpuriousECG
    @SpuriousECG 2 years ago +10

    That was a great tour, amazing to see into a modern datacenter.

  • @chimey2010
    @chimey2010 2 years ago +1

    This is one of the best videos I've seen. SO freaking cool they let you have that level of access. What an incredible place!

  • @johngermain5146
    @johngermain5146 2 years ago +2

    Thanks for the tour, it reminds me of all the equipment I used to work on in my career.

  • @JazekFTW
    @JazekFTW 2 years ago +4

    It's just amazing that you let us watch how this industry works. Ty Roman for releasing this for the international community.

  • @andydataguy
    @andydataguy 1 year ago

    Amazing video!! So grateful for the gents who allowed you to tour and record the facility 🙌🏾🙏🏾

  • @Azeal
    @Azeal 2 years ago +86

    It's so fascinating how many different technologies (and of course therefore technological experts) have to work together perfectly to make an operation like this work so flawlessly. Awesome video!

    • @BlueRice
      @BlueRice 1 year ago +1

      Some manufacturers or companies make their own technologies solely for themselves, designing something clever that works well for what they need. I find that awesome.

  • @stanbrow
    @stanbrow 2 years ago +1

    Really enjoyed this. Thank you so much, and I hope you gave the owners a huge thank you, getting this level of access is pretty much unheard of.

  • @eldergeektromeo9868
    @eldergeektromeo9868 2 years ago +2

    Roman: Thank You again for the peek behind the curtain! Fascinating! And Mahalo to the crew at the data center as well! Just excellent in every way!

  • @larscramer9411
    @larscramer9411 2 years ago +7

    Top notch content. Extremely interesting. Thank you for this glimpse into cutting edge enterprise stuff.

  • @KenMcAllister_NC
    @KenMcAllister_NC 2 years ago

    Cool video man, appreciate that you took the time to run through the infrastructure side before hitting the tech. Most people don't realize how much engineering goes into housing these things. I work in the hyperscale arena, but those smaller enterprise/hosting sites will always have a special place in my heart!

  • @Moodieblue
    @Moodieblue 2 years ago

    I loved watching this vid - so amazing

  • @droknron
    @droknron 2 years ago +3

    Amazing video. I love these tours you're doing Roman.

  • @deeb2011
    @deeb2011 2 years ago +5

    I would like to thank you truly and sincerely for this video. I watched it from start to finish with full attention. I am soon starting a new sales job at a DC company, and I'm quite excited because it's going to be my first experience in the DC industry. This video is very educational and helped me so much to understand the nature of this business. Again, thank you so much for all the hard work you have put in to help people like me. I appreciate it and I wish you more success! Looking forward to part 2.

  • @DJSammy69.
    @DJSammy69. 2 years ago +3

    Most fascinating info about a datacenter! Just an amazing video! Mad props to Roman for doing this in English and German!!

  • @RaaynML
    @RaaynML 2 years ago

    Thank you so much for going through this whole datacenter, this is so interesting

  • @jairunet
    @jairunet 2 years ago

    Awesome work, definitely put together with a high availability mentality. Thank you and keep it up.

  • @fg-zm2yu
    @fg-zm2yu 1 year ago

    This video is great. I have been sharing it with my colleagues who need to go to power plant sites; they have "mini data rooms" over there with the same cooling, fire detection and energy backup systems. Thank you for the detailed tour!

  • @Salty4eva
    @Salty4eva 1 year ago

    Awesome video. I miss my data center days working for EMC and this brought back lots of memories.

  • @jacobreuter
    @jacobreuter 1 year ago

    This is amazing! Can't believe I'm just now finding this channel. You deserve a few hundred thousand more subscribers IMO

  • @NarekAvetisyan
    @NarekAvetisyan 1 year ago

    This was absolutely fascinating! My hat's off to the people who built this.

  • @BikerBearMTB
    @BikerBearMTB 1 year ago

    Wow, thank you! It was amazing to see such an immense, complex piece of kit. An absolute privilege to see and understand it through this video.

  • @cannesahs
    @cannesahs 2 years ago

    Thank you for making the video.
    I didn't expect to see mainframes there

  • @cLickphotographySEA
    @cLickphotographySEA 2 years ago +2

    WOW! Amazing how cleanly everything is laid out!

  • @emilantonio007
    @emilantonio007 2 years ago

    This is a great video. I was surprised and impressed by all the technology behind running a data center. And the way you explained everything was fantastic. Thank you.

  • @kwisin1337
    @kwisin1337 2 years ago +2

    Love this kind of content. Please try to find more companies that will showcase their hard work.

  • @metallurgico
    @metallurgico 2 years ago +3

    Another amazing video.. nice to get this kind of insider perspective!

  • @PitboyHarmony1
    @PitboyHarmony1 2 years ago +2

    Thats out of this world, even when compared to other data centres that i have seen tours of.
    Best. Content. Ever.

  • @Ichigo_Kuro_s
    @Ichigo_Kuro_s 2 years ago +2

    I like how everything there is clean and organized

  • @andrekz9138
    @andrekz9138 2 years ago +4

    I like shows like "How It's Made", but this video is even deeper than that! Major datacenters are a huge engineering achievement.

  • @psychosis7325
    @psychosis7325 2 years ago +20

    That fan spin up 😮 That SSD storage 😅 LTTs $1m unboxing looks quaint all of a sudden.

    • @AtanasPaunoff
      @AtanasPaunoff 2 years ago

      Ha ha, that's what I thought too :) Also I want to see Linus's reaction... He experienced an orgasm when he saw like 100GB/s, so what about 1.5TB/s :D

  • @nicknorthcutt7680
    @nicknorthcutt7680 4 months ago

    Wow, I am impressed at how professional and advanced everything is inside this data center.

  • @o0Hotiron0o
    @o0Hotiron0o 2 years ago

    What a tour, great work. Thank you.

  • @wiebowesterhof
    @wiebowesterhof 2 years ago +2

    That's some amazing setup. Appreciate the detailed video; this is the type of stuff you just can't get access to unless you're in high-end hosting or you work for a place like that. The redundancy in every aspect is mildly out of my $ range, but it's very, very impressive how they managed to do this with the power efficiency stated.

  • @NostalgiaOC
    @NostalgiaOC 2 years ago +2

    What an amazing video! Very very cool! Thanks Roman.

  • @abdiazizabdihasan5533
    @abdiazizabdihasan5533 8 months ago

    Just WOW. Thanks for taking us on this great tour. I appreciate the effort you put into this video. Really enjoyed it. Very informative, too.

  • @darkcnight
    @darkcnight 2 years ago +2

    This video was very interesting. Even though I know nothing about data centers, you explained everything in a way I can understand. Thank you.

  • @anthonyc417
    @anthonyc417 2 years ago +2

    This facility is insane. Very cool content Roman.

  • @Th3James
    @Th3James 2 years ago

    This was incredible. Thank you!

  • @NeroKoso
    @NeroKoso 2 years ago

    That was absolutely amazing to see! I never knew how advanced everything was.

  • @BaronsDeKalb
    @BaronsDeKalb 2 years ago

    That was the cleanest, best laid out, DCIM, SCIF, I have ever seen. Great Content :)

  • @samjones4327
    @samjones4327 2 years ago

    Thank you very much for this video tour! You're correct that it's incredible how much computing and storage capacity is run in this datacenter! I hope you keep these videos coming, and cheers to you!

  • @kevinheneghan9259
    @kevinheneghan9259 1 year ago

    I have 14 years of data center experience working as an operating engineer taking care of the equipment. Great video and explanation of everything. I really enjoy seeing different data centers.

  • @m0nsterbeats
    @m0nsterbeats 1 year ago

    Really cool- thanks for showing us!!!

  • @TheNewTimeNetwork
    @TheNewTimeNetwork 2 years ago +3

    Always a pleasure to look inside a datacenter.
    I had the chance to look inside a very small one in the basement of an IT service company in Cologne with my CompSci class back in school. We had a bit of extra luck, because the operator decided to reschedule the test run of one diesel generator (by one or two days) so we could witness it live in the gen room.
    Sadly, no blade servers, POWER or z-Series there, just standard 19inch Intel x86 servers, Ethernet, and fiber access.
    Eagerly awaiting part 2.

  • @ChEd1980
    @ChEd1980 2 years ago +2

    Very cool to see inside this DC. The company I work for has equipment in two datacentres, and as part of the DC team I get to visit often. I'm impressed with some of the stuff they have going on here that you don't see in a regular DC, such as the oxygen reduction; I didn't know that was a thing either!

  • @joonasrissanen4314
    @joonasrissanen4314 2 years ago +2

    Super interesting and detailed tour! Thanks :)

  • @webfreakz
    @webfreakz 1 year ago

    Great tour! Thank you

  • @Elysiann
    @Elysiann 2 years ago +35

    The gas suppression and aspirating systems were great to see for me. I work in that part of the fire industry here in Australia, and was interested in how (and what brand of) systems are used overseas. I have set up similar systems for data centres here.

    • @Kenia-sn1cg
      @Kenia-sn1cg 2 years ago

      Is it complex to do the installation? I am an engineering student and I am hugely interested in working in such fields.

    • @Elysiann
      @Elysiann 2 years ago +8

      @@Kenia-sn1cg Depending on the local requirements, standards, and authorising bodies, it's a hard field to get into.
      Here in Australia, there is a lot of licensing and training required to install gas suppression systems like this. Even small systems (a single tank of suppression agent) require a lot of licensing to install, test, commission and maintain.
      Depending on the agent used, the volumes, and the age of the system, there is a lot of regulation. Some older systems used ODP (ozone-depleting) gases, so if one is accidentally discharged, or even intentionally discharged (as in a fire), there is a lot of paperwork to inform the relevant environmental agencies.
      Most suppression systems use non-ODP gases these days, usually carbon dioxide, nitrogen, or nitrogen mixed with other inert gases like argon, or other synthetic agents.
      There is some skill required in designing gas suppression systems. You'll need to take into account factors like room size, temperature, room pressure, gas dispersal rates, the type of equipment in the room, the types of fire likely to occur (electrical, paper, plastics, etc.) and many more.
      As for the aspirating systems, most of them operate in a similar manner: they constantly sample the air via capillary tubes or sampling points, through a laser particle reader. This reader measures the obscuration percentage of the air. Some systems can measure as low as 0.01%; by contrast, a smoke detector will normally alarm at 6-8%. So having a dust-free environment is very important.
      The design of an aspirating system is generally easier, but there are still factors to consider.
      In most cases, the aspirating systems will be part of a larger smoke/fire management system used to trigger the release of the suppression system.
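The staged detection described above can be sketched as a simple threshold check. This is a hedged, illustrative sketch only: the stage names and all threshold values except the quoted 0.01% ASD sensitivity and 6-8% point-detector figure are invented, not taken from any real product.

```python
# Illustrative multi-stage alarm logic for an aspirating smoke detection
# (ASD) system. High-sensitivity ASD units can alert near 0.01 %/m
# obscuration, while a conventional point smoke detector alarms around
# 6-8 %/m. Stage names and intermediate thresholds are hypothetical.

ALARM_STAGES = [
    (0.01, "alert"),   # earliest warning: investigate
    (0.05, "action"),  # pre-alarm: prepare suppression
    (0.20, "fire1"),   # confirmed smoke: trigger shutdowns
    (2.00, "fire2"),   # release suppression agent
]

def classify_obscuration(percent_per_metre: float) -> str:
    """Return the highest alarm stage reached for a sampled air reading."""
    stage = "normal"
    for threshold, name in ALARM_STAGES:
        if percent_per_metre >= threshold:
            stage = name
    return stage

print(classify_obscuration(0.005))  # below all thresholds -> "normal"
print(classify_obscuration(0.12))   # past alert and action -> "action"
```

A conventional detector in the same logic would effectively have a single stage at 6.0, which is why a dust-free room matters so much for the ASD thresholds.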

  • @barney9008
    @barney9008 2 years ago +2

    Truly fascinating, would love to see more DC tours.

  • @mitchk
    @mitchk 2 years ago

    very cool, thanks for the tour!

  • @Jules_Diplopia
    @Jules_Diplopia 2 years ago +18

    I just love the attention to detail. If I had stayed in the industry after 2001, that is the kind of place that I would have wanted to be working in. Thanks so much for the tour. Loved it.
    Oh and Dr Oetker make the best pizzas, we have them here in the Netherlands too.
    For the last section with the backup tape drives, I would also want a further backup in a separate city. Back in the day I argued for data from Manchester, Cardiff, Glasgow and London to be backed up in each of the other centres. My bosses thought I was being OTT.

    • @AJ_UK_LIVE
      @AJ_UK_LIVE 2 years ago +4

      There is no such thing as OTT in a datacentre. You have to not only prepare for the obvious, but also the unlikely! Everyone always moans at I.T. when things go wrong, but usually it is because they did not allow enough resources in the first place.

    • @MIK33EY
      @MIK33EY 2 years ago +1

      I have to disagree with you on the pizza affirmation. All frozen pizzas are like eating pieces of hot cardboard. Dr Oetker, Chicago Town, Goodfellas, etc.… they’re all terrible.

    • @Jules_Diplopia
      @Jules_Diplopia 2 years ago +1

      @@AJ_UK_LIVE True. After I left, the company concerned forgot to keep backups up to date. They suffered a major data loss.

    • @AJ_UK_LIVE
      @AJ_UK_LIVE 2 years ago +1

      @@Jules_Diplopia A little schadenfreude there for you I'm sure.

    • @MIK33EY
      @MIK33EY 1 year ago

      @@leeroyjenkins0 Still doesn’t change the fact that they’re like eating cardboard and don’t even get me started on what comes out of the oven when compared to the packaging imagery. 😂😂

  • @Doctor_X
    @Doctor_X 2 years ago +3

    This was great! We have been using IBM Power for decades. We have both Z and Power, and are rotating out Power 8. Right now I am implementing 2x 1080s and will be migrating from 980s to the 1080s. Running IBM i, AIX, and RHEL.

  • @zekefloyd3918
    @zekefloyd3918 2 years ago +2

    This is amazing content! This DC is very impressive compared to some I have seen in Australia.

  • @SoTgRave
    @SoTgRave 2 years ago +2

    Nice and educative, thank you and thank you to the IT data center dudes.

  • @NannoWappi
    @NannoWappi 2 years ago

    Just one word to say "Awesome" 👍✌😁 Thank you for this nice go through.

  • @roaskywalker
    @roaskywalker 2 years ago

    Amazing tour and lessons thank you!

  • @pamdemonia
    @pamdemonia 1 year ago +3

    As an electrician doing a lot of commercial renovations and new construction in San Francisco, I've seen a lot of much smaller server rooms, but this is next level! Very impressed with the electrical work (power in particular), btw. Very clean and neat. Thanks for the absolutely impressive tour! And thanks to the company for giving you such access.

  • @emike09
    @emike09 2 years ago

    Love this! Now we're getting into some crazy hardware.

  • @grahamgraham4263
    @grahamgraham4263 1 year ago

    Fantastic video, really enjoyed it!🤯

  • @P34chFuzz
    @P34chFuzz 1 year ago

    Thank you for producing these in both English and German!

  • @razamadaz3417
    @razamadaz3417 2 years ago

    Really interesting to get an insight into a massive server farm, thanks.

  • @hstrinzel
    @hstrinzel 1 year ago

    Very impressive indeed! Well done, Dr. Oetker and Roman, thank you for the video!

  • @jamesdk5417
    @jamesdk5417 2 years ago +30

    It amazed me how little the infrastructure in a new data centre has changed from one I worked in over 20 years ago in Australia.

    • @Marin3r101
      @Marin3r101 2 years ago +3

      Hahahahahaha australia.... lmao. You are funny.

    • @BravoCharleses
      @BravoCharleses 2 years ago +8

      The computer technology has moved along at a blistering pace but HVAC has not changed all that much.

    • @jamesdk5417
      @jamesdk5417 2 years ago +7

      @@Marin3r101 I am so sorry your meds are not working. Best of luck for the future.

    • @adityadivine9750
      @adityadivine9750 2 years ago +2

      @@Marin3r101 I probably didn't get the joke. I'm Indian though. We have one of the biggest data centres in the world, for a population of 1.4 billion.

  • @grmnz24
    @grmnz24 1 year ago

    Excellent video, thanks very much for making it

  • @ravenfeeder1892
    @ravenfeeder1892 2 years ago +2

    Great video! I could only wish that the DCs I've been in were that well equipped and organised.

  • @aanset1
    @aanset1 1 year ago

    So cool, so clean, and a very organized DC...

  • @rekleif
    @rekleif 2 years ago

    This is the coolest video you have made to date. Nothing else comes close, imho. Wow, just wow. Thank you for this. To see you geek out like that was awesome. I bet this was one of the coolest things you have done in a while, Roman? Love from Norway.

    • @der8auer-en
      @der8auer-en 2 years ago

      Yes I also enjoyed this a lot :D

  • @nickolp-it7bo
    @nickolp-it7bo 2 years ago +4

    Incredible video and, as always, narrated with precision. Mind blown by what goes into these places. I'm wondering what the insurance figure would be to rebuild?

  • @NeonMinnen
    @NeonMinnen 1 year ago

    I don't usually comment on videos, but this was amazing. Thank you for this great content. Seeing those IBM Power E's got me hyped.

  • @ChaJ67
    @ChaJ67 2 years ago +19

    Data centers are always fun to go through. Here are some things I have come across that could make for better, more efficient data centers than the one you went through; I am sure you have thought about some of this in the videos you have made, but some of it may be new to you:
    1. The 380V DC standard for power delivery. This is ±190V nominal, with a 300V to 425V range at the servers. After the transformer steps down to 480V AC, you do one conversion to 380V DC nominal and hook the batteries up in series and parallel to equal 380V DC nominal. From there the power goes over power rails straight to the servers, and DC travels over the wires more efficiently than AC. The server power supplies have basically half the hardware in them, since they skip the AC/DC conversion, which takes a lot of power electronics to do efficiently and cleanly, and instead go straight from 380V DC to 12V DC. This is far more efficient and uses far less hardware than traditional AC power delivery to the server: something like 30% space savings in data centers that do this (and I spell out "data center" to avoid confusion with direct current), along with a big boost in efficiency. You mention the efficiency of data centers at the end of the video, which is an important metric, and this is a way to raise the actual efficiency and even save on costs when the tech is done at scale.
    2. Considering how much Germany relies on renewable energy, something else that I think should be done, especially when doing a 380V DC standard data center build, is to swap out the batteries in the battery room with LFP batteries and build for at least 4 hours of storage. As you may have noticed, the space used in the battery room is not that efficient and they built around the notion of lead acid batteries spilling, which you will also see in old telco buildings where they built everything to run on 48V DC, which takes some crazy big bus bars to do at that voltage level. The idea of having these hours of storage is you can balance against the green energy grid and may even stop taking power from the grid for a while when the electricity prices / demand is the highest. At least when there is variable pricing involved / you make a deal with the power company to help balance the grid, such a setup can save / make a data center a lot of money as you will already have the conversion hardware and the big power use case, just add more batteries to your battery room. At this those diesel generators don't need to run as much during a power outage and can have a much longer grace period to warm up, saving on electricity to keep them warm when you have a battery system with hours of storage; you just make sure to have a certain minimum storage to give the generators more time to warm up. LFP batteries in the data center is also a good thing as modern LFP batteries will last for decades. Also by the end of the year (2022), Germany will have a large LFP battery manufacturing facility in operation run by CATL, one of the biggest names in LFP battery manufacturing. So you will likely make the batteries used in the data center in Germany.
    3. Getting to where your expertise comes in: liquid cooling the high-powered components in servers with a negative-pressure liquid cooling loop. A number of data centers do this, especially for supercomputers, and it is extremely efficient and ironically uses a lot less water than the system you showed. The reason is that air is a very poor carrier of heat, so you have to cool the air to a temperature much lower than that of the components you are cooling, or the server hardware in the racks will get too hot, because the delta-T (change in temperature) with air is high. With liquid delivered directly to the hot-running components via water blocks, you can run the coolant at much higher temperatures, as the delta-T is much lower. At those coolant temperatures, even a hot summer day of say 37C outside is cool enough to need no extra cooling measures such as evaporative cooling or air conditioners. Some data centers get down to a PUE of 1.1, where this one is specified at 1.35 and the bar is 2.0. So yes, efficiency can be better, granted this one is pretty good; but they use a lot of water, which becomes a problem in places where there is not enough water to go around. That problem is getting worse with climate change, so the thinking around evaporative cooling unfortunately has to change.
    4. A number of data centers are moving to back of rack radiators.
    5. Also in your wheelhouse, use of liquid metal thermal transfer compound and high W/mK thermal pads. The idea being the more efficiently you transfer the heat to the heat sinks with a smaller delta-T (change in temperature) between the die and heat sink, the less you have to work to keep the die at or below its max target temperature. Data centers are built around keeping the components down to a certain target temperature at max load and when you throw in all of the inefficiencies of low end thermal transfer compounds, IHS's (Internal Heat Spreaders), air cooling, and heat buildup as you go through long, high powered servers, you end up spending a lot of energy and often water to reach that target inlet temperature. Also those super noisy server fans use a tonne of energy to spin that fast and get into significant air friction heating, so if you carry most of the heat away with high density water blocks where you don't have to work as hard to move the more heat dense liquid coolant around, you can use much slower, more efficient fans for the remaining lower powered air cooled components. Anything you can do to allow the target temperature to be higher reduces your PUE and/or water consumption.
    6. Shifting gears a little: use of ZFS RAIDZ in the data center. While I have used ZFS RAIDZ level 2 on Solaris, primarily on mechanical drives with SSD caching drives, ZFS under Linux and FreeBSD has gotten a lot better in recent years and supports TRIM on SSDs. RAID controllers do not support TRIM. If you have ever run SSD RAID arrays, those SSDs take a beating with hardware RAID controllers, which use the drives in a very write-intensive fashion on top of lacking TRIM. ZFS is set up more intelligently in terms of how much writing it does (at a slight cost in space usage), and TRIM support is icing on the cake, greatly reducing write amplification. I would venture that ZFS is a more reliable and flexible storage system than hardware RAID, based on my experience with it in the data center and my usage of it under Linux and FreeBSD. Where a data center might buy super-expensive 10 DWPD (Drive Writes Per Day) SSDs for hardware RAID controllers, they may find they can get the exact same job done with much less costly 1 DWPD drives using ZFS RAIDZ. The improvement you will see with ZFS RAIDZ level 2 over RAID 6 is huge. As a side note on RAID levels: with RAID 1, sometimes the mirror fails before you can rebuild it, causing data loss, and RAID 10 just amplifies this problem by adding more mirrors to an array. With as many drives as a data center has, you are guaranteed to see this happen. RAID 5 also amplifies this problem as you add more drives to the array.
    RAID 6 is a lot better at not losing your data to random physical drive failures, as you still have redundancy after a single drive failure, so the occasional second drive failure or sector loss doesn't kill you; granted, in a data center, as soon as a drive starts losing sectors you replace it right away (at least any good admin would). RAIDZ level 2 is basically ZFS's version of this, except a lot better in terms of data integrity and recovery capabilities. Standard practice where I worked was 8-drive vdevs, adding more vdevs to a zpool when increasing storage. In other words, you have an 8-drive array with 2 of the drives for redundancy, and you just add more of these arrays into a single logical 'drive' / storage pool to get to the desired storage size. If you have dealt with hardware RAID enough, you start finding there are ways to lose data in a traditional overall setup that ought to be preventable; ZFS RAIDZ level 2 is a great answer to these issues. There is a lot to explain here, but it is something you can read about, and then this already long post doesn't need to get much longer. It is also cheaper, as you don't have to spend money on all of these fancy RAID controllers; you just need simple HBAs (Host Bus Adapters) to access the drives.
    7. I was a bit surprised that, with all of the fiber you saw, there weren't any specialized high-speed interconnects such as InfiniBand. (Maybe you saw InfiniBand and just didn't recognize it?)
    I suppose this is what happens when you visit a bank's data center: they are going to be more conservative in how they set things up than, say, a scientific supercomputing center, and their hardware suppliers are going to be more traditional in their offerings, so you just won't see some of the ideas that make a data center even more efficient than the setup you saw. The most radical stuff tends to happen in hyperscale data centers; those are just even harder to get into, as the operators tend to be more secretive about how they do things.

    • @LtdJorge · 1 year ago +1

      As much as I love ZFS, it's not an FS made for scale. If you want to scale ZFS, you have to rely on a clustered FS like Gluster, or Ceph on top of it. Those Dell EMC machines you saw are NOT using RAID controllers; they either use their own specific cards to do everything on ASICs, or they do it in software. Also, those are NVMe SSDs, and the amount of memory bandwidth needed to feed them is massive. Those machines are probably using some kind of DPU that connects the fiber and the storage directly, without even going through the CPU.

  • @gasracing4000 · 2 years ago

    Very impressive! Just next-level infrastructure, I'm in awe.

  • @jmileshc · 2 years ago

    Absolutely fascinating !

  • @johngermain5146 · 2 years ago +1

    I used to do maintenance testing on data center power systems. We had been using bigger and bigger batteries and had similar issues with their hazardous properties. Now, a 1.2 MW diesel generator (Caterpillar, Cummins, Generac... like the one you showed) provides a second power source which, used with a transfer switch, allows the use of smaller and smaller batteries that only have to provide power long enough to start the generator. After the loss of mains power, once the generator has started, the transfer switch switches over to it. The data center power is drawn from UPS systems like the ones you showed, whose inverter always runs off the batteries, which are themselves charged from either the mains or the generator.
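    The sizing argument in this comment can be put as back-of-the-envelope arithmetic. This is a toy calculation; the 1.2 MW load, the ride-through times, and the 3x safety margin are all assumed numbers, not figures from the video:

```python
def battery_kwh_needed(load_kw: float, bridge_s: float, margin: float = 3.0) -> float:
    """Energy the UPS batteries must hold: carry the full load for the
    bridge interval (times a safety margin), after which the generator
    takes over via the transfer switch."""
    return load_kw * (bridge_s * margin) / 3600.0

load_kw = 1200.0  # assumed: the full 1.2 MW generator rating

# Battery-only approach: sized for a 15-minute ride-through on their own.
ride_through = battery_kwh_needed(load_kw, bridge_s=900)
# Generator + transfer switch: batteries only bridge a ~15 s engine start.
gen_bridge = battery_kwh_needed(load_kw, bridge_s=15)

print(f"15 min ride-through: {ride_through:.0f} kWh")
print(f"bridge gen start:    {gen_bridge:.0f} kWh")
```

    Under these assumptions the battery bank shrinks by a factor of 60, which is why pairing a fast-starting generator with a transfer switch lets the batteries get so much smaller.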

  • @tarfeef_4268 · 2 years ago +6

    Okay, this is awesome. I absolutely love this kind of content; the engineering challenges and super cool tech in high-end datacentres are just so cool.

  • @mrbones666 · 2 years ago +2

    Mind blowing stuff. Thanks guys.

  • @danielteivelis · 1 year ago

    Simply amazing!

  • @crion88 · 1 year ago

    This was great, thank you!

  • @TylerBrigham · 2 years ago +2

    Wow, what a high-end facility. These guys know what they are doing, and to think this started in the food industry!

  • @snarfsnarf5824 · 2 years ago +2

    I always appreciate the effort you put into publishing your videos on both your main and your EN channel, but this video goes above and beyond. Doing the full tour once must have been an ordeal to organize, but convincing them to let you go over everything twice really takes your content that extra mile.

    • @der8auer-en · 2 years ago +2

      😬 Thanks. Yes, it was indeed a lot of effort 😁

  • @Sir_Uncle_Ned · 2 years ago +3

    I’ve been to a data centre here in Australia and it’s a similar story with their infrastructure; redundancy is the key word. They didn’t just heat the generators, though: they run each generator as a grid-driven motor to keep the engine spinning, so if it’s needed, all it takes to start is opening the fuel valve, and the machine automatically switches back to being a generator.

    • @bjornroesbeke · 2 years ago

      I'd be interested to see how they've implemented that electrically.
      It would probably shave (just) a couple of seconds off the time the UPSes need to keep the load up, while using heaps of power to keep that motor spinning.
      There's little to no cost saving on the UPSes, because the two grids would still need to be switched in and out, and that doesn't happen in less than a couple of seconds.

  • @ohkay8939 · 2 years ago +16

    That was really really cool. Thank you for sharing this.
    I worked at IBM back in the 90s, and we had a comparatively miniature tape robot connected to the mainframes. Insane to think they're storing terabytes of data on tape now. I'd love to know how they made that data relatively quickly accessible - it wasn't fun with 100MB.

    • @cederian · 2 years ago +1

      LTO12 will be up to 144TB! Insanely cost effective

    • @charliestevenson1029 · 2 years ago +1

      I've worked with tape since the 1970s. Tape technology has leapt ahead of disk in terms of storage density. Fuji demonstrated the practicality of 185TB in an LTO tape package back in 2015. If you think of the surface area you have to write on in 2,000 ft of 1/2" tape, compared to the platters of a disk, you get the idea.

    • @ohkay8939 · 2 years ago +1

      @@charliestevenson1029 I get the storage density thing; what I'm curious about is the latency of accessing the specific data you want. Restores of particular files, like I said, were not fun on tapes holding much less data than these do, just because of the time it would take the tape to get where it needed to be.

    • @mycosys · 2 years ago +1

      @@ohkay8939 It's called cold storage for a reason. It's mainly intended for the worst case, and it's even on a different site. Speeds have of course improved with data density, but thousands of feet of tape is still thousands of feet of tape; there's only so fast you can move it. Even fairly small cold-storage offerings these days still quote restore times of 4+ hours.
      Why would it need to be quickly accessible when they have layers of SSD fabric?

    • @charliestevenson1029 · 2 years ago +2

      @@mycosys It's called HSM, Hierarchical Storage Management, where you have different layers of access by the speed required. Spinning disks are very expensive to buy and run, so you don't keep rarely accessed data 'online', but you might keep a stub: just enough so the end user can start accessing the file quickly. The robot kicks in to get the tape with the rest of the data.
      Since 1985 most tape drives have recorded in a serpentine fashion, so getting to the bit of data you want is not a sequential access; it's a combination of horizontal and vertical movement. LTO8, for example, records 5160 tracks on 1/2" tape. Worst case is a seek to the end of the physical tape. Data is losslessly compressed, with throughput in excess of 240 MB/sec.
      Not all tape applications are HSM; most are for offline backup security, often with remote robot infrastructure linked by fibre. Check out IBM's technical tape publications. The problem is, so many people think 'tape' is old and slow. The fact is, all large-scale data centres use it; they have to. Inside any Google data centre you see racks and racks of servers and disk, sure, but you also see robotic tape libraries. Where I used to work (seismic processing), we had petabytes and petabytes of data on nearline tape; it didn't make economic sense to have everything on spinning disk.
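      The stub-plus-recall behaviour described above can be sketched as a toy tiering rule. The thresholds and tier names here are invented for illustration; real HSM products are policy-driven and far richer:

```python
from dataclasses import dataclass

@dataclass
class FileEntry:
    name: str
    size_bytes: int
    days_since_access: int
    migrated: bool = False   # data moved to tape, only a stub left on disk

def hsm_tier(entry: FileEntry) -> str:
    """Toy HSM placement policy (thresholds are made up): hot data stays
    on disk, cold data is migrated to tape."""
    if entry.days_since_access < 30:
        return "disk"
    if entry.days_since_access < 180:
        return "nearline-tape"   # robot-mounted library, minutes to recall
    return "offline-tape"        # vaulted / remote, hours to recall

def open_file(entry: FileEntry) -> str:
    """Opening a migrated file triggers a recall: the stub can serve the
    first bytes while the robot fetches the rest from tape."""
    if entry.migrated:
        return f"recalling {entry.name} from {hsm_tier(entry)}"
    return f"reading {entry.name} from disk"
```

      The point of the stub is exactly what the comment says: the user-visible open succeeds immediately, and only the bulk of the data waits on the robot.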

  • @claudiorivieccio4967 · 2 years ago

    Fascinating! Well done! 🖖

  • @HakanCezayirli · 2 years ago +1

    Wow. This is the best video on your channel. Very impressed. Best regards from Turkey!

  • @TheBoltcranck · 2 years ago +2

    Nicely efficient building, awesome video.

  • @scottcol23 · 1 year ago

    WOW, what an amazing video. This place is on the cutting edge of tech, which I love seeing. It's really rare to get to see inside a center like this; thank you for taking the time to show us around. All of those IBM Power E1080 nodes! Each costs about $335K USD, and it looks like there were 4 nodes in just that one rack. A 256GB DDR4 memory card costs $10K, so the 16TB option in those servers adds $640K per node. So at the very minimum, each rack with 4 nodes costs about $3.9 million USD... I work in a data center; while not quite at this level, we do have the HPE Flex 280s.
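    A quick sanity check of the arithmetic, using the prices quoted in this comment (the quoted figures are the commenter's estimates, not list prices):

```python
node_base = 335_000   # quoted price per Power E1080 node, USD
card_price = 10_000   # quoted price per 256 GB DDR4 memory card
nodes_per_rack = 4    # nodes visible in the rack shown

cards_per_node = 16 * 1024 // 256        # 16 TB of memory = 64 cards
mem_per_node = cards_per_node * card_price
rack_total = nodes_per_rack * (node_base + mem_per_node)
print(cards_per_node, mem_per_node, rack_total)
```

    Which works out to $640K of memory per node and roughly $3.9 million for a fully stuffed 4-node rack.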

  • @juanbrits3002 · 2 years ago +14

    Data center without wearing ear plugs?
    Fun times...

  • @TylerBrigham · 2 years ago +2

    This is super cool. I'm a fan of this sort of content 👌👍

  • @Marcel1984nl · 2 years ago

    This is really how a datacenter should be built: everything mirrored as a complete backup installation/system. Good video, btw; you take great effort to explain almost everything about the datacenter and its installations.

  • @erickdanieltoyomarin1697 · several months ago

    Great video man. Thank you.

  • @epsilontic · 1 year ago

    This is great insight into datacenter infrastructure.