Wendell looks healthy!
I hope Wendell can find ways to enjoy this new size and can stay healthy.
Looks can be deceiving, just look at some meth addicts.
Fit and handsome 😉
Actually he does look like he's lost weight
@@bobbybobman3073 weird ass response what demons are you fighting
Holy crap! Looking good Wendell
Wendell went from 16:9 to 4:3
Wendell you are looking great, congrats man
I am so excited for this, this is why I love Wendell, I have such crazy respect for him, just wish more enterprise partners would play with him on the open-source side.
Bro I legitimately had one of those OCZ 128 gig drives and it did in fact fail on me. Absolutely hilarious to bring it up because I'm pretty sure everyone's drive failed at some point.
Wendell, you're freakin awesome, man. Love prevails. I'm so happy you feel you're worth the effort to change - I'm struggling with something similar myself, recently... let's keep doing our best!!!
That mobo where you can trade off inter-CPU speed vs. disk IO speed by recabling is cool.
Looking slimmer my dude. Good work.
True. And younger too.
@@MikeBob2023 infant blood plasma works wonders for the body
@@maddads6492 Hahahaha! 🤣🤣🤣
Damn Wendell, Looking good. Proud of you.
I would love to see MS SQL vs. ZFS MariaDB performance and latency comparisons.
ZFS does eat a lot of database performance for the benefits it provides.
@@magfal How so? My testing showed an improvement from 7000ms without ZFS to 63ms with ZFS.
@@ericneo2 go beyond what fits in the ZFS RAM read cache while testing.
Read based testing that doesn't go beyond that is testing your memory not your filesystem and IO subsystem.
For tiny workloads COW filesystems aren't detrimental but for heavy workloads they are.
COW systems slow down when they approach full capacity.
The memory cache of ZFS and the memory cache of your database are competing for memory together with your workload when you're applying a heavy load.
If you're testing against a simple file system you need to either enable compression on both or neither since file system level compression can distort performance characteristics.
In short, ZFS and proper databases have a lot of overlapping work that, when truly compared apples to apples, constitutes an overhead.
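For anyone wanting to act on that advice, here is a minimal sketch (assuming Linux with OpenZFS, which exposes ARC counters in /proc/spl/kstat/zfs/arcstats; the 500 GiB working set is just a placeholder) for checking whether a planned benchmark actually exceeds the ARC:

```python
# Hedged sketch: compare a planned benchmark working set against the ZFS ARC limit,
# so a "read test" doesn't silently turn into a RAM test. Linux/OpenZFS only.
def arcstat(field: str) -> int:
    """Return one counter from /proc/spl/kstat/zfs/arcstats (values are in bytes)."""
    with open("/proc/spl/kstat/zfs/arcstats") as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == field:
                return int(parts[-1])
    raise KeyError(field)

arc_max = arcstat("c_max")        # configured ARC ceiling
arc_size = arcstat("size")        # what the ARC currently holds
working_set = 500 * 2**30         # placeholder: planned benchmark dataset, 500 GiB

print(f"ARC max: {arc_max / 2**30:.1f} GiB, currently used: {arc_size / 2**30:.1f} GiB")
if working_set <= arc_max:
    print("Working set fits in the ARC -- a read benchmark mostly measures memory.")
else:
    print("Working set exceeds the ARC -- reads will actually hit the pool.")
```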
@@magfal "COW systems slow down when they approach full capacity." - Isn't that why they recommend a fast the ZIL/SLOG?
"proper databases have a lot of overlapping work" - I completely agree but I don't know of a better way of getting requests into a RAM Read cache. Have you come across one by any chance?
There are flash and in-memory relational databases that have solved this problem, but they are all enterprise or SaaS with a steep price and hardware requirements.
My main interest in this is repurposing old servers with plenty of RAM to perform tasks faster than new servers built with spinning rust. A 50GB DB is nothing when you have 192GB to work with.
What about a 5 way comparison? MS SQL vs MariaDB vs PostgreSQL vs Oracle vs Db2. Great video Wendell, thank you.
Wendell, you're really looking good. Keep it up man 🎉
Suggestion for an enterprise SSD duel between KIOXIA and Micron.
Great video. I would like to see a collab with Michael Larabel from Phoronix (even though I don't believe he has a YouTube channel)
Some delicious delicacies just oozing goodness that many of us will never see or understand; it's like being in an old-fashioned sweet shop, just ogling all those strange-looking things on the shelves. Thanks Wendell and team for the reveals.
it's always kinda interesting to see accelerators weaving in and out of relevance over time.
Hi Wendell! What kind of capabilities in MSSQL would require an Enterprise version of the product?
@18:15 Come for the awesome server stuff, stay In the Pale Moonlight
From entry level builds to bleeding edge enterprise technology. Wendell and the levelonetechs team has you covered. There is no other tech channel that even comes close!!!
You need hearing protection in that room. Please be safe
The best videos are ones where new and interesting hw can show us something interesting about system architectures 😊
Also, star trek puns 😂
Come for the server madness, stay for the DS9 deep cut memes
I've really been looking forward to this. Thanks
Anyone got a link to that Wallhaven blue/yellow background on the left? Can't quite make anything of the pixelation in the URL...
Yes, I have an original OCZ SSD. I use it as an external drive for booting things with Ventoy; I call it the disk of many boots.
3:22 Love a good BegaMyte of storage.
I'm so glad Kioxia decided to send a pile of drives to Wendell this time, instead of giving them to Linus again. This video was ten thousand times more interesting than Linus and his boyfriend making dumb "jokes" about humping the hardware.
The bandwidth on these things would make them amazing for crunching huge datasets with thirty+ cores. Saturating twelve channels of DDR5 with flash storage is incredible. With how Sapphire Rapids puts the memory controllers directly on the dies, though, I have to wonder how well that would work. You'll essentially never saturate the memory on such a processor, because you can't raise the number of memory channels separately from the number of cores. A friend tried to explain that NUMA on a workload on this shared memory setup would also cause nightmarish latency problems, but I wasn't quite clever enough to understand any of it. Definitely sounds like AMD's approach with a separate IO die is going to be less hassle.
As for the nightmarish latency problems - it depends on the workload.
If we assume completely random access to the memory, like with generic NUMA-unaware CPU-based ray tracing renderers or some simulation software, then yes.
But otherwise threads in software (well-written software, anyway) tend to work on their part of the memory and avoid talking to the other threads. Most of the time they don't copy the application data/state to their own memory chunk though... so that might cause problems. However, I'm pretty sure that in cases like this you can either request changes to the software or just run a few instances of it (if the workload can somehow be decentralized).
Also keep in mind that most of the time CPUs like these end up being partitioned into many LXC/docker containers or VMs.
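The "run a few instances of it" idea is easy to prototype. A rough sketch below, assuming numactl is installed; ./worker and its --shard flag are placeholders for whatever NUMA-unaware program is being split up:

```python
# Hedged sketch: launch one copy of a workload per NUMA node, with CPUs and memory
# pinned to that node via numactl, instead of hoping the software is NUMA-aware.
import subprocess

def numa_node_count() -> int:
    """Parse `numactl --hardware`; its first line looks like 'available: 4 nodes (0-3)'."""
    out = subprocess.run(["numactl", "--hardware"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.splitlines()[0].split()[1])

procs = []
for node in range(numa_node_count()):
    cmd = ["numactl", f"--cpunodebind={node}", f"--membind={node}",
           "./worker", f"--shard={node}"]   # placeholder worker + sharding flag
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```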
@@randomname3566 "Well written" is not generally a term you'd apply to CFD suites. AVX512 could perform miracles, but they're not even making use of AVX2 yet. It's all random reads all the time.
Yes yes, I know these CPUs are intended for datacentres to slice up and rent out as a VPS, it doesn't change the fact that they belong on my desk, to goof around with. The rest of the world just needs to catch up with my ambitions.
Agreed, extremely fascinating to see a deep dive into what such an over-the-top setup entails and what it might actually be used for.
How is this comment 13 days old?
@@Walczyk Patreon subscribers get early access to most videos. Also a little check mark on the forums, to mark us out as above the plebeians.
I still have my OCZ 64GB drive from at least 5 years ago, running and rocking in my NAS.
Probably older, I bought a 64GB Kingston in 2008 or so. I think that particular drive is in a laptop somewhere...
My first SSD was an OCZ 64GB, but the 128GB version was far more viable; I found the 64GB was throwing errors in Linux. But those are the drives I got to install 64-bit Win7 & Linux on, so more like a decade.
Still have one running my PfSense
@@rooksisfuniii I think I have one that probably would run, but the PC is in storage and I'm out of the country so cannot get it out to check :)
I stopped bothering with the small one, it's hard to even remember how small they were when Win7 & Sandy Bridge were new things :)
Is ZFS still suitable for such SSDs? The COW nature of it goes against a lot of SSDs' internal optimizations. In the super-HPC world WEKA is used, but it is more of a niche FS.
ZFS is for reliability, not for best speed. Even on normal SSDs it runs slower than, say, mdadm RAID + ext4/xfs. WEKA is a cluster filesystem (it joins the storage and compute resources of multiple storage nodes into a single storage pool), so it's a different animal entirely.
My OCZ drive still works! Granted, it's in an HP laptop from 2011, acting as a youtube box for while I do dishes. So not a lot of performance needed lol.
I think I still have a 30GB OCZ drive. And I think it still works, but I'm not sure. It hasn't been hooked up to anything in years. I stopped using it just because it was too small. I also have a 150GB Raptor with the acrylic cover from that same era. That one may have crashed heads. It hasn't been used in years either. My 250GB Samsung 840 Pro is still rocking as my boot drive though. I've never even touched an NVME drive as my Z87 board doesn't have any slots for them. All the newfangled stuff that's come out in the last decade just hasn't been necessary to upgrade to.
LOL I do still have one of the original OCZ SSDs. It still works... runs in a rack for a big touch panel in my kitchen. Just runs some Home Assistant widgets and music and recipes...
Wendell, Kioxia's got the new enterprise CM7 in Gen 5 in 2.5".
I used to have a 128GB OCZ drive... Yes, it did in fact die. Got it for my backup laptop; had a Samsung 830 256GB in the main one.
I thought the interconnect for dual Epyc was half the PCIe lanes from each. Did they just halve the interconnect PCIe lanes from each, and give the rest to the motherboard? Thank you guys for all the insanity that you capture, and provide as edumatainment.
I looked at the pricing and it's surprisingly reasonable.
I still have an OCZ 120 in a working system lol. I remember my first SSDs... two OCZ Vertex 30GBs in RAID 0. I saved my gas station paychecks and it was life-changing!
That shirt is very suggestive and I am here for it!
Go Wendell, goooooo, my guy soon to be giga Chad if this pace continues 😁
Still using an OCZ Vertex3 128GB. A little over 18TB host writes so not a huge load on it.
I didn't even have to make it to 33 seconds before hitting the like on this video
Knowing all the woes that you worked on with LTT in their NVMe storage server for testing CPU interrupts and all
I just picture Wendell living in a land of cubicles surrounded by random server carcasses.
How did you post this comment 9 days ago?
@@mika2666 magic
Damn Wendell, you're looking good. I'm happy for you.
Consistency and reliability are why I use enterprise drives in my desktop.
I hit a problem recently when writing some async I/O software on a 3rd gen EPYC and 4x 30TB Intel drives. I could get 25.8 GiB/s (28 GB/s) with nodes per socket (NPS) set to 4, but this drops you down to two channels of memory (~40 GiB/s) per NUMA node, so the best I could achieve was 20 GB/s (basically the max speed of read+write to RAM on two channels). Any other topology setting would throttle you somewhere else, and ensuring the buffers all fitted in L3 cache also didn't work, grr.
Fortunately 12 channels of DDR5 per socket (3 channels per NUMA node) is enough headroom even for a theoretical x16 PCIe 5.0 RAID 0. Kioxia have a press release about the CM7, but it's not clear if these are actually available other than to "selected customers". 24x PCIe 4.0 drives is still awesome though, and it's good to see the bottlenecks getting removed.
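Rough numbers behind that comment, as a sketch. My assumptions: Milan with DDR4-3200 under NPS4 gives 2 channels per NUMA node, Genoa with DDR5-4800 under NPS4 gives 3; real sustained bandwidth sits below these theoretical peaks.

```python
# Back-of-the-envelope memory bandwidth per NUMA node under NPS4.
def channel_peak_gbps(mega_transfers_per_sec: int, bus_bytes: int = 8) -> float:
    """Theoretical peak of one DDR channel: MT/s * 8 bytes per transfer."""
    return mega_transfers_per_sec * bus_bytes / 1000

milan_node = 2 * channel_peak_gbps(3200)   # ~51 GB/s peak; ~40 GiB/s observed above
genoa_node = 3 * channel_peak_gbps(4800)   # ~115 GB/s peak per NUMA node

print(f"Milan NPS4 node: {milan_node:.1f} GB/s theoretical peak")
print(f"Genoa NPS4 node: {genoa_node:.1f} GB/s theoretical peak")
# A copy pipeline writes data into RAM and reads it back out, so ~40 GiB/s of node
# bandwidth caps such a pipeline near 20 GB/s -- matching the figure above.
```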
Hi, congratulations on the content.
I have some questions related to enterprise PCIe SSDs:
1 - Can we install Windows 11 Pro on them? (Is it detected, or does it need some drivers at installation?)
2 - Do they work better on Intel 13th/14th gen or with AMD 7000? Does AMD X3D have any storage advantages?
3 - They need some adapter from M.2 to their connector; what do you recommend?
4 - Is it better than my old Intel Optane 900P?
Thank you!
Man, would love to develop software for these; been working on journaling distributed databases and an opportunity to tune and demonstrate on something like this would make customers emerge from behind every bush and from under every rock.
P.S. you look fantastic, I can see wellness right on your face.
No chapter marks again.
Making gains Wendell. Looking good
I once took a class with a database professor who told me he gave a talk at Microsoft to the SQL Server team. He had written a research database with performance which really impressed the SQL Server team. He claimed that one of the reasons for his performance was that he stored his data structures directly on the system's block devices, avoiding the FS overhead. The SQL Server people dismissed this approach because they believed customers preferred the convenience of storing data in the FS. I believe that assumption to be false, as most on-prem DB deployments use dedicated hardware. But I do wonder, how much overhead is there from the FS?
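One hedged way to get a feel for that last question yourself: time O_DIRECT reads against a file on a filesystem and against the raw block device underneath it. The sketch below is Linux-only, needs root for the raw device, and both paths are placeholders; it's a rough probe, not a proper benchmark like fio.

```python
# Hedged sketch: measure O_DIRECT read throughput from a filesystem file vs. a raw
# block device, to ballpark filesystem read-path overhead. Placeholder paths; run as root.
import mmap, os, time

BLOCK = 1 << 20          # 1 MiB per read (aligned, as O_DIRECT requires)
TOTAL = 1 << 30          # read 1 GiB in total

def read_direct(path: str) -> float:
    """Seconds to read TOTAL bytes from `path` with O_DIRECT."""
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)           # anonymous mmap is page-aligned, as O_DIRECT needs
    t0 = time.perf_counter()
    done = 0
    while done < TOTAL:
        n = os.preadv(fd, [buf], done)   # positional read into the aligned buffer
        if n == 0:
            break
        done += n
    os.close(fd)
    return time.perf_counter() - t0

for path in ("/mnt/test/bigfile.bin", "/dev/nvme0n1"):   # placeholders
    secs = read_direct(path)
    print(f"{path}: {TOTAL / secs / 1e9:.2f} GB/s")
```

Expect the sequential case to show only a small gap; the overhead usually shows up more with small random I/O and sync-heavy write patterns, which is where the raw-block-device argument tends to come from.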
You love the smell of cores or Coors?
I have a video showing "How to Enable Intel QAT Backup Compression in SQL Server 2022" for some more details about how QAT backup compression works.
th-cam.com/video/qeziSrXsE8Q/w-d-xo.html
Any sign large NAS SSDs will come down to consumer prices?
cheery-O Wendell, cannot wait to see how you feed your silicon breakfast in the next one.
I still have an OCZ Vertex 4 256GB Windows 10 system drive that works... It's been showing 1 bad block for the past 3 years... Other than that, it works somehow...
ocz hit me right in the feels. they all died so fast
Why Microsoft SQL over PostgreSQL? What's the value prop beyond Microsoft support?
SQL Server 2022 has the Intel QAT Backup compression feature that Wendell was talking about. That is critical for what Wendell is trying to do.
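For reference, a hedged sketch of kicking off a QAT-compressed backup from Python via pyodbc. Server, database name, and backup path are placeholders; the instance needs QAT hardware offload enabled first, and ALGORITHM = QAT_DEFLATE is the SQL Server 2022 option this comment refers to.

```python
# Hedged sketch: run a SQL Server 2022 backup with QAT compression through pyodbc.
# Placeholders throughout; QAT offload must already be enabled on the instance.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;",
    autocommit=True,               # BACKUP cannot run inside a user transaction
)
cur = conn.cursor()
cur.execute(
    "BACKUP DATABASE MyBigDatabase "
    "TO DISK = N'D:\\Backups\\MyBigDatabase.bak' "
    "WITH COMPRESSION (ALGORITHM = QAT_DEFLATE), STATS = 10;"
)
while cur.nextset():               # drain progress/info messages so the backup completes
    pass
```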
Wendell, how come you have amazingly informative videos and content and you do not have millions of subscribers..?
Where are the millions that could digest this level of information?
@@PointingLasersAtAircraft lol so true
Servers are a niche
I was about to ask if Wendell looks like he's losing some lbs, but it looks like I'm not the only one noticing. I noticed a couple weeks ago but wasn't sure, but it's becoming more obvious. GOOD JOB, KEEP IT UP!
Oh budddy. I love it 🙂Saturate... memory... bandwidth...
13:22 Were those old NetApp storage arrays I saw?
I have that exact shirt. Love it!
I have nightmares about those old pre-sandforce 64GB and 128GB SSDs.
Wendell looks good!
Are there more affordable NVMe SSDs for servers? I'd like to play with these and have a lead on a server that can use them, but can't afford the enterprise-level drives. Basically, does anyone have suggestions, or at least things to Google, to find these?
Just got my first CD6-R 3.84TB SSD, planning on 12, or going for 8 7.68TB CD6-Rs just for the lols. Could go HDDs, but why.
yeah I was feeling like hot shit with my CD6 and CM6 here, not any more
You look well. ❤
9:23 I do, I don't really use it anymore, haven't for many many years, because it's dog slow, but works fine as an external drive just to move some stuff around.
But what died on me were two Samsung drives: one 250GB 960 EVO that didn't even last long enough for me to finish installing Windows on it, then another 500GB 960 EVO that died just shy of its 2-year warranty expiring, and a Toshiba TR200 240GB.
Maybe I had another SSD die but I'm not sure.
Moral of the story, Samsung drives? Never again.
I like seeing Wendell getting slimmer
Means we'll be seeing him for many more years.
@@magfal Not necessarily. People with more fat are more likely to survive diseases and serious injury than skinny folk.
@@drewcipher896 to a certain point.
My weight is 120 kg and there are practically zero situations where I wouldn't be better off if I were able to drop 20 kg.
I think Wendell beat me into submission with this one.
Wendell slimmed down a ton. Well done.
Can I run a couple of those drives on an AM5 or Threadripper platform?
omg. exciting 🎉
5:50 … nice typeface! What is that? 🤔
All this pcie performance is turning him into a giga chad… no, a Giga Wendell
NICE TSHIRT! LOVE IT!
Would like to see duckdb or clickhouse test
That t-shirt is awesome
19:14 Oy, you got a licence for that?
When you understand only half of what Wendell is saying....
I really wish mainstream mobos could eat these SSDs too... or there'd be an easy way to convert M.2 to U.2...
there is! did a video on that not long ago, my holy grail video, found an adapter. just waiting on price to come down now.
I dig the shirt.
epic content
Ready Wendell for a 9GT world record to get AMD title back? 😁😎👍🏼nice drives but expensive.
Would you please talk about tech like DirectStorage and what it would do to a setup like this?
What do you mean by direct storage? Isn’t that just for games? Downvoting you because that is just dumb. This is a $100k server.
@shrddr I'm dumb, it clearly has more uses than advertised
I've got a storage fetish
"Well son, It all started when I got the Micron 9200..."
Have the Genoa issues with 2DPC been fixed with the latest updates?
I don't have any 2DPC chassis. It's kinda funny, it's... 12 channels. Would you rather have 8-channel 2DPC or... 12 channels? Seems 12 channels is better for now, esp with higher-density memories, buuuuuuut
Was wondering if the Genoa 2DPC issues might have similar roots to AM5’s 2DPC woes.
did Wendell lose weight?
1:17 WHO???
neat
that comment @16:27 X)
This is why we aren’t friends
Drinking Coors in the morning?
PLEASE migrate to RUMBLE
Why are you standing behind a monitor for 80% of the video? I was waiting for the monitor to turn on, and it never did... also, you picked up a CD8 with your left hand during the video while you were talking about it... when you put it back on the table, I almost said, "peek-a-boo!" 🤣
SFF Wendell
Wendell, looking good! Congrats on the lost weight!
Can't trust storage called "seedy"
I didn't follow how 24 drives were going to use 75% of the total memory bandwidth. 24 drives * 6 GBps = 144 GBps, but then you multiplied the 144 GBps by three, why?
Also I feel like you're going a little easy on us as far as technical details. You said signal integrity was a problem. What were the incorrect cables? What were the correct cables? Are you using MCIO cables or SlimSAS? I can lend you some SlimSAS 8i 0.5m cables if it would help, but Genoa probably uses MCIO for the PCIe 5.0 speeds, even though you are running the SSDs at 4.0 speeds (I think).
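Framing the question with rough numbers, hedged: the per-drive figure, and the idea that the x3 accounts for each byte crossing the memory bus roughly three times (the DMA write from the drive plus a read and a write when the data is processed), are guesses rather than the video's stated math.

```python
# Back-of-the-envelope: aggregate drive bandwidth vs. theoretical DDR5 bandwidth.
drives, per_drive_gbps = 24, 6.0                 # ~6 GB/s sequential per PCIe 4.0 x4 drive
aggregate = drives * per_drive_gbps              # 144 GB/s of raw flash bandwidth

channels, mt_s, bus_bytes = 12, 4800, 8          # Genoa: 12 x DDR5-4800 per socket
ddr5_peak = channels * mt_s * bus_bytes / 1000   # 460.8 GB/s theoretical per socket

print(f"drives: {aggregate:.0f} GB/s, one socket of DDR5: {ddr5_peak:.0f} GB/s peak")
print(f"if every byte crosses the memory bus ~3 times: {3 * aggregate:.0f} GB/s")
# How that maps onto "75% of total memory bandwidth" depends on which sustained
# (not theoretical) figure, and how many sockets, were being counted.
```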