🔴 - To support my channel, I’d like to offer Mentorship/On-the-Job Support/Consulting - me@antonputra.com
This is why Nginx is the undisputed King for superior performance with minimum resource utilization.
For now =)
and the configuration is less of a nightmare than the Traefik config is
And for security, nginx is the best
Until pingora arrived
@@altairbueno5637 AFAIU it hasn't been open sourced yet. Or did I miss that?
recently i started getting your videos recommended. they are very interesting indeed. keep up the good work!
Thanks Javohir!
The best, or even the only real, comparison video I found on yt. 🙏🙏🙏
Nice testing protocol.
yeah, his videos are mind-blowing to me, I've never seen any YouTuber do benchmarks like this guy... super nice
Your videos are really great. Keep up the good work!🔥
Thanks, will do!
Very interesting to see the performance difference, thanks for making this video
No problem!
Thanks, really like your videos. Would appreciate a comparison with haproxy next time
Thanks, got it
I love you, this is exactly the kind of question I wanted answered
you're welcome =)
Thank you for sharing these well designed tests - am learning a lot!
Thank you!
Hi, Thanks, I like your videos. Would appreciate a comparison with Traefik vs Caddy 2 next time
Sure
Solid stuff, thanks!
It was obvious that nginx is too powerful and well-written a tool; it's nearly impossible to match its results in garbage-collected languages. I'll use this video as an argument 😊
You're welcome! I want to compare it with the linkerd proxy, they say it's fast :)
@@AntonPutra I'll be waiting for that comparison, subscribing so I don't miss it)
@@КириллКириллович ))
Interesting. I used Apache (httpd) as a reverse proxy, mostly for historical reasons; it would be interesting to see how it compares.
And such surprising results for gRPC, I would never have guessed it could be this way. Although I'm wondering whether the actual backend service implementation could affect the results somehow. I don't see any legitimate reason for proxies to behave this way.
I can see the request time is much longer for gRPC, so longer-lasting connections could possibly consume more resources on the proxy side. It seems the answer may be in the app. I think so. Backend analysis might help figure it out.
Although in general I'm surprised how poorly the gRPC setup behaves. I thought it was kind of the holy grail for low-latency systems. I would definitely appreciate a more in-depth analysis of the topic.
Thanks Kyrylo for the feedback. I'll try to figure it out.
Thanks for the video. I do like traefik for the simple fact it's a bit more noob friendly when using with kubernetes
yaml friendly =)
great! Would like to see a benchmark of Envoy proxy
Coming soon!
I just want to say thank you for these very interesting videos. I want you to know that you've been helping me improve in my career and get better jobs. Thank you sir! (I'm subbed!)
Thank you KWKOPS!
It was .... deep and pro 🔥🔥🔥
Thanks! Appreciate it!
Does anyone know how an Envoy-based reverse proxy compares? I'm thinking of something like Contour
Great video!
Thanks!
When you say "HTTP/1" here do you really mean 1.0 without keepalive or 1.1 with keepalive? And from the client perspective, do we actually close socket each time?
They both support keepalive, not sure about the latter
How is traefik different from envoyproxy? I know it's a fork of envoy but is it designed for edge proxy?
Can you send a link?
🚀🔥❤
😊
Great video, thank you :)
Could I consider the NGINX performance the same as the NGINX Proxy Manager (NPM)?
Based on the project's description, yes; they don't 'enhance' nginx's core functionality, mainly add TLS management on top.
Traefik API GW installation guide, WebSocket support required 💐
ok
nginx drops requests because it can process more, but the OS didn't give it enough resources - you should raise the limits.
You mean file descriptors? Too much customization; I prefer to use defaults for tests.
@@AntonPutra the default value of worker_connections is around 768 (multiplied by the number of CPUs with the default "worker_processes auto")
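For reference, the connection ceiling being discussed is the product of two directives; a minimal sketch of raising it (the numbers here are illustrative, not a recommendation):

```nginx
# One worker process per CPU core.
worker_processes auto;

events {
    # Per-worker cap; total ≈ worker_processes × worker_connections.
    # Debian/Ubuntu packages often ship 768 here; nginx's own default is 512.
    worker_connections 4096;
}

# Workers also need enough file descriptors: each proxied request
# holds at least two sockets (client side + upstream side).
worker_rlimit_nofile 8192;
```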
❤Go (Golang) vs Node JS (Microservices) performance benchmark - th-cam.com/video/ntMKNlESCpM/w-d-xo.html
❤Go (Golang) vs. Rust: (HTTP/REST API in Kubernetes) Performance Benchmark - th-cam.com/video/QWLyIBkBrl0/w-d-xo.html
❤AWS Lambda Go vs. Rust performance benchmark - th-cam.com/video/wyXIA3hfP88/w-d-xo.html
❤AWS Lambda Go vs. Node.js performance benchmark - th-cam.com/video/kJ4gfoe7gPQ/w-d-xo.html
❤AWS Lambda Python vs. Node.js performance benchmark - th-cam.com/video/B_OOim6XrI4/w-d-xo.html
Isn't using burstable vms a bad idea to do these kind of tests? You don't really have any control over when the VM bursts or not.
I ran this test at least 4 times (creating and deleting VMs), and each time the result was the same.
@@AntonPutra I think this is maybe due to the k6 load test being pretty simple (x amount of users for y time without any fluctuation or ramp-ups and ramp-downs, which could result in more bursty workloads instead of pegging the CPU in a pretty constant way). You should watch out in future videos when trying to create more advanced scenarios in combination with burstable VMs.
To be clear, I am not trying to undermine your testing methodology, I really like your videos.
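For what it's worth, k6 can express that kind of non-constant profile with ramping stages; a sketch of the `options` block (the durations and targets are made up for illustration):

```javascript
// k6 script fragment: ramp up, hold, spike, and ramp down
// instead of a single constant-VU run.
export const options = {
  stages: [
    { duration: '2m', target: 100 }, // ramp up to 100 virtual users
    { duration: '5m', target: 100 }, // steady state
    { duration: '1m', target: 300 }, // short burst
    { duration: '2m', target: 0 },   // ramp down
  ],
};
```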
@@xentricator Thanks, I'll keep it in mind
Certainly; even with "unlimited" CPU credits (the t3 default) it still throttles the CPU. It should be run on compute-optimized instances to see the difference and whether it affects the result.
what benchmarking platform do you use?
In that specific case, I used AWS and t3a.small instances. I ran tests multiple times (creating new EC2 instances each time) with the same results.
github.com/antonputra/tutorials/blob/main/lessons/144/terraform/10-traefik-ec2.tf#L3
github.com/antonputra/tutorials/blob/main/lessons/144/terraform/11-nginx-ec2.tf#L3
@@AntonPutra thanks bro, is that monitoring with the traffic and latency graphs part of an AWS service or another platform?
@@stephen.cabreros It's open-source Prometheus and Grafana. I have all the components and dashboards in my repo in case you want to reproduce it.
@@AntonPutra ok I'll check it, thank you for this
Nginx vs Pingora
will do!
At 02:11, is that the phrase «то есть»?)
? :)
Nginx vs pingora
noted!
gg
🫡
Please try Pingora
ok i'll take a look
Did you buy a Tesla?
Last March; they dropped the price by 15k today :(
are you half-indonesian ?
Nope =)
So Go sucks?
Not at all. It's great for beginners, and it's easy to find an implementation for anything you're trying to solve.
@@AntonPutra Great for beginners!? Nginx is C/C++; Golang isn't competing with a behemoth like that! Otherwise, Golang is killer.
@@AntonPutra Tell that to Google, Docker and all the other companies that use Go on a massive scale