Listening to DHH talk about engineering always gets me excited to build cool things!
What passion, what clarity. Thanks David and DM for this great video.
Great video! I work at the DC where 37signals has all their stuff. This has started an avalanche of #CloudExit stories, and we see more every day. Savings of 30-70% are typical when exiting the cloud. Some workloads migrate easily and some are more complicated; the more entrenched you are in managed, cloud-native services, the harder it can be. Static workloads generally see the best savings, while bursty workloads are a little more tricky. For a hybrid approach, use an AWS Direct Connect, put your static workloads in the DC, and keep your bursty traffic in AWS. You can also take advantage of some cool stuff like EKS Anywhere to run containerized workloads under a central management plane.
Outstanding truths. Glad you put this out there!
Okay, but all the arguments for why compute and other basic server infrastructure should be a utility service are still 100% true. The only problem is that utilities, being utilities, have natural-monopoly characteristics. That's why municipalities began taking ownership of utility provision in the 1880s. "The power company" in your town is likely a public utility: publicly owned, or heavily regulated with prices capped by law so it can't screw over consumers. AWS is no longer a good deal compared to self-hosting because consumers have been screwed over: the compute power of servers you can buy has massively increased since 2006, but what you get from AWS per dollar has not remotely kept pace.
There is nothing wrong with the reasoning behind why cloud should be a utility, and must be one for human progress and flourishing. The problem is that our regulators and governments have completely failed to protect consumers against monopolistic companies like Amazon.
This guy is the tech version of Russel Brand 😂😂😂
This is great stuff.
Although I don't have influence, I hope to make those who do aware. Surprisingly, we have a data center that keeps being downsized. The industry we are in is a late adopter; maybe that late-adopter property will become a benefit.
Cost obfuscation can happen in a poorly managed data center as well, but at least you have more control over that risk.
The video uses Python code while DHH's project is Ruby on Rails. 😂 Based on my six years of experience working in data centers and nearly nine years in web and software development, I agree with DHH.
There are different types of cloud, not just AWS. For bare metal, check out Hivelocity. For virtual machines, check out Linode. You pay a monthly fee and that's it. No hidden costs.
Bezos is getting ready to send an Amazon drone for a visit. I agree; I never understood the cloud appeal for medium-sized businesses.
Is @dhh coding in python now 🤔? 1:34
lol
noticed that too :D
Ruby is so unpopular that DARK MATTER couldn't find any Ruby stock footage.
As a dinosaur who doesn't have my head in the cloud, I burst out laughing at most of his facts. Huge projects I lead are strictly on cheap cloud VPSes. I still enjoy working with teams who have their heads in the cloud though.
It makes a lot of sense if you have a stable number of users, but if you are a startup it doesn't, and if you are growing rapidly it can also be problematic.
Regardless, "run the numbers" is certainly wise advice!
It's like dividing the cloud further: you buy your own hardware, have it maintained by others, and deploy your own services.
What about agility?
AMD EPYC 9965
192 cores
384 threads
2.25 GHz base
3.7 GHz boost
384 MB L3 cache
PCIe 5.0 x 128
DDR5 with 12 memory channels
6000 MT/s
576 GB/s per socket
Pro tip: you can buy more than one.
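The quoted bandwidth is internally consistent with the rest of the spec sheet: a DDR5 channel is 64 bits (8 bytes) wide, so 12 channels at 6000 MT/s gives exactly the per-socket figure above. A minimal sanity-check sketch:

```python
# DDR5 bandwidth check for the EPYC 9965 figures quoted above.
channels = 12                 # memory channels per socket
bytes_per_transfer = 8        # one 64-bit DDR5 channel moves 8 bytes per transfer
mega_transfers_per_s = 6000   # DDR5-6000

# 12 * 8 * 6000 MB/s, converted to GB/s
bandwidth_gb_s = channels * bytes_per_transfer * mega_transfers_per_s / 1000
print(f"{bandwidth_gb_s:.0f} GB/s per socket")  # 576 GB/s
```

Whether a real workload sustains that peak is another matter, but the arithmetic holds.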
3.2m because you use Ruby
For a startup, the upfront server investment is high. Worth it for him, since he already has customers.
100% correct. Fully agree with him.
DHH is 100% correct on all but one thing: racking your servers is nowhere near as simple as putting them in the cage and connecting power and Ethernet, lol. Switches, firewalls, redundancy, VLANs, power consumption, etc. all come into play, and you need a good network architect to put that in place properly. Of course, once you've done it you're good to go, and it's 1000x better than being a renter in the "cloud".
DHH uses a provider that physically handles the hardware for them. They only ever see the hardware they own through a terminal.
My son's project with 10 concurrent users in the same building still used a cloud service for around $1,000/mo.
In my mind this is a waste of money. Does anyone know what a $5,000 server can do vs. a $1,000/mo. cloud service? The bare-metal server will run circles around it. But what about security? You're in the same building; did everyone forget about the corporate firewall? The problem is that cloud is sexy and on-premise/bare metal is not.
As a side note, five years for a branded server is very conservative; it could run longer. The only issue is that older servers have less compute power than newer CPUs, and most companies fully depreciate computers in their fifth year of service.
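The break-even math above, sketched with the comment's own numbers ($5,000 one-time server vs. $1,000/mo cloud over a five-year depreciation window; a real comparison would also count colo, power, and admin time, which are omitted here):

```python
# Break-even for a one-time server purchase vs a recurring cloud bill,
# using the figures from the comment above (other costs ignored).
server_cost = 5_000      # one-time hardware purchase, USD
cloud_monthly = 1_000    # recurring cloud spend, USD/month
months = 60              # five-year depreciation window

breakeven_months = server_cost / cloud_monthly
savings = cloud_monthly * months - server_cost
print(breakeven_months)  # 5.0  -> the server pays for itself in five months
print(savings)           # 55000 -> saved over five years, before colo/power/admin
```

Even with generous allowances for hosting and maintenance, the gap at these numbers is hard to close.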
This video would have been even better without the distracting music.
Blame TH-cam. It would be trivial for TH-cam to implement an optional audio channel, so that viewers can choose their own preference, but they don't.
Then you come along and decide that everyone would be better off without the music, because that's how _you_ like it.
At least grow up and blame the right entity, meaning TH-cam.
@afterthesmash To the extent I'm blaming anyone I'm fine blaming the creator of the video thanks.
The DVD was designed in 1995, and it had optional audio support built in. Almost thirty years later, TH-cam hasn't got there yet. But you blame the content, and not TH-cam.