“We’re not going to deploy a web based JSON api to a microcontroller” quitter talk
My esp32 running wled had something to say about this. Haha. Generally speaking Chris has great content but this discussion is very specific to him.
Some PLCs provide this capability, but it is strange on an RTOS
Beware: There's a terminology collision here.
Baremetal in Embedded means "there's no code below" (no OS)
Baremetal in Cloud means "there's only 1 VM below"
I have used the Google distroless images (specifically the base or cc image if compiled with gnu targets, or the static image with musl targets) for very slim images running simple Rust binaries. It reduces both image size and attack surface, and you can even run your binaries as non-root.
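For reference, a minimal sketch of that pattern, assuming a musl-targeted binary; the project/binary name `app` and paths here are illustrative, and crates with C dependencies may additionally need `musl-tools` installed in the build stage:

```dockerfile
# Build stage: compile a statically-linked binary against musl
FROM rust:1 AS builder
WORKDIR /src
COPY . .
RUN rustup target add x86_64-unknown-linux-musl \
 && cargo build --release --target x86_64-unknown-linux-musl

# Runtime stage: distroless static image, running as non-root
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /src/target/x86_64-unknown-linux-musl/release/app /app
ENTRYPOINT ["/app"]
```

The `:nonroot` tag sets a non-root user by default, so the binary never runs as root inside the container.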
I was wondering about this very thing a few days ago. Thanks for the video
I kinda like the idea of running on Cloudflare Workers, which is pretty much AWS Lambda at the edge but with all the Cloudflare advantages like CDN for free.
Think you missed a nice option from AWS for Rust apps, which is ECR+ECS with Fargate: you just need a Docker image that builds the Rust binary, keep it running, and that's it.
Also, Elastic Beanstalk could help with that too.
On the big cloud providers you have plenty of options that require the same amount of maintenance as the "Heroku-alikes". Cloud Run is one example, btw.
I am also interested in Cloudflare or Fermyon as potential options. Dockerfiles still seem like a good way to go though. Since Cloudflare supports Rust basically inside a V8 worker, and uses things like web_sys and wasm_bindgen, I wonder what the performance is like here.
There is also a great video on Cloudflare by Michael Cann that ties nicely into all the Bevy/game-related work you do. I think Cloudflare with multiplayer edge storage could be an extremely cool video/project. The video is called: Serverless & Databaseless Event Sourcing with Cloudflare Workers & Durable Objects.
Curious what you think of something like this? As per usual, thanks for the content!
I am also curious why wasm compilation wasn't one of the preferred deployment options
ECS Fargate? Lambda can get expensive. I unfortunately speak from experience.
Hello, have you tried busybox as a base image? It has all you need to launch your binary, and it's smaller than the alpine image.
Not for this project, but it's something I could look into. I'm using two-stage builds and mostly only copying the binary over to a bullseye-slim image, which starts at 30 MB. That's quite a bit bigger than the busybox images, but it means I get easier extensibility if I need to add something to the image that isn't contained in the binary.
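A rough sketch of that kind of two-stage build, with illustrative project/binary names:

```dockerfile
# Stage 1: build the release binary on a full Rust toolchain image
FROM rust:1-bullseye AS builder
WORKDIR /src
COPY . .
RUN cargo build --release

# Stage 2: copy only the binary onto a slim Debian base
FROM debian:bullseye-slim
COPY --from=builder /src/target/release/app /usr/local/bin/app
# apt-get is still available here if the image later needs extra packages,
# which is the extensibility trade-off versus busybox/distroless bases
CMD ["/usr/local/bin/app"]
```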
Maybe it's not very trendy, but you haven't considered the single-VPS scenario, or AWS EC2. Applications written in Rust seem to me to be the way to go for that kind of distribution: small, simple programs. Without Docker, and if necessary with a simple script, they can be installed in many locations. Most importantly, you will be able to update the system on which you have installed the software.
> you haven't considered one vps scenario, or aws ec2.
Here's the point in the video where I cover why I didn't use EC2: th-cam.com/video/6yQfL-1yWNQ/w-d-xo.html
@@chrisbiscardi These are native applications running on the operating system, so why don't you want to manage them? Because it's not your level, and that's that.
@@grzegorz.bylica yes, I was pretty clear in this video that managing OS updates and EC2 configuration doesn't significantly contribute to the application in any way, so I don't have any reason to take on that extra work as it would take away time from actually doing valuable work.
@@chrisbiscardi Less work is better, but Docker is more work. It also needs to be kept up to date.
@@chrisbiscardi I agree the less work the better, but isn't updating the Docker image also an update? Additionally, a full-fledged operating system inside a Docker image is redundant.
You can still use axum for Rust Lambdas using the lambda-web crate.
true, I tend to not use "monolithic" solutions when approaching serverless functions though. The routing for say, AWS Lambda, is likely already handled by API Gateway so I don't really need to put it in my application as well. It's still a perfectly valid option though.
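For anyone curious, the axum-on-Lambda pattern mentioned above can be sketched roughly like this. This uses the official `lambda_http` crate's tower adapter rather than lambda-web, the crate APIs shown are assumptions to check against current docs, and the code only runs inside a Lambda execution environment:

```rust
use axum::{routing::get, Router};
use lambda_http::{run, Error};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // The whole axum Router is handed to the Lambda runtime as one handler,
    // so in-app routing can coexist with (or duplicate) API Gateway routing
    let app = Router::new()
        .route("/", get(|| async { "hello from lambda" }));
    run(app).await
}
```

This is the "monolithic" shape discussed above: one binary that routes internally, deployed behind a single API Gateway proxy route.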
I don't know how to use Docker, and to me it seems like compiling to wasm is actually a much more lightweight and viable option. Have you heard about Lunatic, by the way?
Wasm could become more viable in the future, and not knowing the Dockerfile syntax/docker is tough to get around, but the Dockerfile/docker approach is extremely widely used at this point by providers big and small. There are also performance and feature considerations when compiling to wasm: feature-wise, you might not even be able to send an http request from wasm on an edge platform, and performance-wise I would expect a native Rust application to perform better than something compiled to wasm (but as always, measure perf over guessing).
I haven't used Lunatic. Looks like it's an interesting project that's pretty early lifecycle-wise (judging from their mentions of WASI, etc.), so not something I'd use for a production deployment, but something I'd play around with.
@@chrisbiscardi Nix can be used to generate a teeny Docker image, no Dockerfile or docker-compose syntax needed.
Hello! Can you go deeper on Rust in embedded? What about the wasm3 project?
we'll cover more embedded for sure. Not sure when exactly though.
Cloud Run on GCloud is serverless with Docker; would you consider this a good choice, and if not, why? We've been using it for years, and there are lots of great services around it in GCloud as well (like ML content moderation). The main issue I find with these small hosting companies is they lack essential services.
"nobody gets fired for choosing one of the big providers" exists for a reason, would be my thoughts.
If Cloud Run is working for you, and you use other GCP services, then that's great! Plenty of people do way more complicated things than that, like running their own k8s clusters or using GKE, and that still works for their requirements, so I'd be reluctant to tell someone that something that is provably working should be removed or is somehow a bad solution.
I wouldn't go with Cloud Run on this project because I'm already on one of the big providers (AWS), and I'm reluctant to extend to too many others without really good reasons.
You're very much not wrong about smaller providers lacking the extensive suite of services that the larger providers have.
Rust era
I prefer vm or bare metal
We have full control over the operating system, library updates, etc., which doesn't really work on a lean Docker image. There's no need to ship the operating system to a host, nor to rebuild it with each release.