► What should I test next?
► AWS is expensive - Infra Support Fund: buymeacoffee.com/antonputra
► Benchmarks: th-cam.com/play/PLiMWaCMwGJXmcDLvMQeORJ-j_jayKaLVn.html&si=p-UOaVM_6_SFx52H
NATS vs Kafka
Kafka vs IBM MQ
this guy has problems... where is FastAPI?
NATS vs Kafka vs Redis streams, 😁
Node.js vs Elixir (Phoenix framework)
Nginx vs nodejs/deno/bun? (only node would be fine; we know how the other 3 compare)
nginx vs caddy vs traefik please! and maybe try pingora?
and IIRC, nginx drops requests when overloaded, while caddy tries to answer all requests by sacrificing response time
would be so cool
will do!
@@dimasshidqiparikesit1338 why Pingora when there is River?
Please just accept my gratitude for all the benchmarks you're doing and making public. Also, keep doing whatever tests you find relevant. Cheers!
thank you! ❤️
Adding "multi_accept on" directive to nginx config might help availability on high loads.
Is this not the default behaviour?
@@inithinx Nope. Like I told Anton before, you need to fine-tune not only your database but Nginx as well.
@@MelroyvandenBerg makes sense.
Thanks! I'm actually going over the NGINX configuration right now, making sure it's properly optimized for the next test!
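For reference, a minimal sketch of where that directive lives in nginx config (the worker_connections value is illustrative, not a tuned recommendation):

# events block in nginx.conf -- multi_accept lets each worker accept all
# pending connections per event-loop wakeup instead of just one
events {
    worker_connections 4096;  # illustrative, size to your workload
    multi_accept on;          # off by default, as discussed above
}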
Please include caddy next time! I wonder how golang works in this case!
Also, next time try to do brotli compression as well.
Cheers!
It would be interesting to see how caddy compares to Nginx and apache.
caddy, zstd compression, h3
Caddy vs nginx please
traefik vs caddy vs nginx: the ultimate benchmark
I agree, caddy would be very interesting.
Traefik and Caddy!
1 vote for this, and compare them to nginx
traefik is not a web server
@@severgun that's true, but the comparison I wanted is as a reverse proxy rather than a web server
He already did a performance benchmark between traefik and caddy
@@HVossi92 yeah, but I wanted to see how it compares to nginx, since that's what I'm using right now. I've been thinking of switching to traefik because I've been having some strange issues that I can't really pinpoint, and I was wondering if it could be something to do with nginx
these Grafana charts are kinda ASMR for me :)
😊
Cool, but Apache (nginx probably too) has so many things to configure, e.g. prefork/worker MPM, compression level, etc.
true, i do my best to make these tests fair
I was searching for this kind of comparison for years.
i'll do more 😊
I've run nginx as a reverse proxy in the 30K r/s range for production workloads; the way nginx handles tls is kind of naive and could be improved. Basically, what happens is that there is almost always an uneven distribution of work across worker processes, and it dogpiles with tls overhead. Limiting the tls ciphersuites used can help mitigate this, so that there is less variance in how long TLS handshakes take on aggregate. Also, multi_accept on is your friend.
Thanks for the feedback! I'll see if I can monitor each individual worker/thread next time
@@AntonPutra this mostly happens from dealing with production loads where you have a diverse set of tls client implementations. Not everyone will choose the same ciphersuites. This is an example of things often omitted from synthetic benchmarks because people just don't think of it.
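A minimal sketch of the ciphersuite pinning described above; the exact list is illustrative, not a recommendation, and should be chosen for your actual client population:

# restrict negotiable ciphersuites so TLS handshake cost is more uniform
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
# note: TLSv1.3 suites are configured separately (ssl_conf_command Ciphersuites)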
Amazing tests. You got a subscriber for this bloody good content
thank you! 😊
Elixir/Gleam vs nodejs/bun/deno. Really interesting to see where Erlang VM shines.
another amazing performance benchmark and just the one I wanted to see rerun from your old videos. many thanks and great job
I'm still curious about the results tho. I'm really looking forward to seeing someone explain why nginx crashed in the last test
also I think that apache's compression algorithm is causing the high cpu usage in the first 2 tests and it would perform more like nginx if compression was off (but that's unrealistic to find in the real world)
many thanks again and looking forward to the next x vs y video, this second season is very informative
thank you! i got a couple of PRs to improve apache and nginx. if they make a significant improvement, i'll update this benchmark
Great. please do the same test for Nginx vs Haproxy too.
thanks! will do!
Valuable benchmarks! Tip: There is this insane resonance on the audio of this video (and probably more of your videos), so when you pronounce words with s, I can feel a brief pressure in my ear from my brain trying to regulate the intensity. 😅
thanks for the feedback, i'll try to reduce it
This was really interesting. I used to run Apache a lot a few years ago, and like you, I switched for the huge performance benefit of Nginx (in most cases, apparently). Now, I don't do any load balancing using nginx or apache, but this was really interesting to me, as HA is always something I've been looking for but never really managed to do (lack of hardware and knowledge in my homelab). Earned the sub, well done!
I wonder if apache and nginx use a different default compression level. The test results hint at this (even though both state 6 as the default in their docs), and diminishing returns at a higher compression level might be hurting apache in this test. It might also be worth investigating skipping compression on files smaller than 1 KB (which I think is a best practice), as well as setting the same gzip compression level on both services.
thank you for the feedback! i'll double check compression levels next time
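For anyone who wants to pin this down, a sketch of making the nginx side explicit instead of relying on defaults (apache's mod_deflate has equivalent knobs):

gzip on;
gzip_comp_level 6;     # both docs claim 6 as default; set it explicitly anyway
gzip_min_length 1024;  # skip payloads under 1 KB where gzip overhead isn't worth it
gzip_types text/plain text/css application/json application/javascript;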
Given my exposure to both apache and nginx, this lines up. If you want something to serve static content, nginx is king. I am concerned about what is happening around that 80%, though. The way I see them, nginx is like a scythe, able to cut through a metric boatload of requests, and apache is like a swiss army knife with a boatload of tools available to do everything that has ever come up in my travels (this is where I sense apache's slowness comes from, its versatility). I guess the car analogy is nginx can do a 1/4 mile straight faster, but apache could do a rally better as it's more adaptable.
I have a non-compliant endpoint that uses an api_key HTTP header, and it took effort just to get nginx to leave it alone; I then route that path to an apache container where it gets fixed.
i have found i can make nginx do everything apache does, including serve php and all that application-layer stuff people do with apache. it's not especially advisable, though.
@@MattHudsonAtx for the invalid header issue I mentioned, I haven't found a way to do it with nginx; at best I can get it to pass the header through for something else to deal with, using the ignore_invalid_headers directive.
Given I was trying to stick to just the nginx proxy manager, which is handling everything else, I would love to know an alternative way.
thanks for the feedback!
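For reference, the underscore in api_key is likely what trips nginx up, since header names with underscores are treated as invalid by default. A sketch of the two directives involved (the upstream name is made up, and this may not cover the full routing case above):

server {
    underscores_in_headers on;      # accept api_key despite the underscore
    # ignore_invalid_headers off;   # alternative: pass all invalid headers through
    location /legacy/ {
        proxy_pass http://apache_fixup;   # hypothetical upstream
    }
}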
nginx as reverse proxy with static content caching and apache as dynamic web server is a killer combo!
😊
Just searched yesterday if you already uploaded a benchmark between nginx and caddy and you just now uploaded nginx vs apache. Great starting point :)
I'll make nginx vs caddy vs traefik soon
those benchmarks are so much more useful and truthful than the "official" benchmarks from the devs
thank you!
nginx vs pingora please! great content, keep up the good work!
thank you! will do
Definitely there should be caddy and traefik in this tests! Thanks for this kind of videos!
I'll do those two as well soon
We experienced an interesting issue with a Go application and Nginx when we migrated from Python to Golang: Nginx uses A LOT more tcp packets to communicate with golang apps. At first it overloaded a load balancer cluster and then the application itself. We still haven't figured out what happened, because we were also in the process of migrating to Traefik, but it looks like Go and Nginx really want to split requests into a lot of packets, since most of the load came from TCP reassembly, and there were a lot more sockets waiting on ACK than usual.
Did you try to set `multi_accept on`?
Some sort of freak: - Add IIS to the test
A lot of organizations (corporations mostly) use IIS though, so even if IIS is bad then it would still be worthwhile to show how bad it is.
ok interesting, i'll try it out
Amazing test! I did the same test with Grafana K6, but between Nginx and OpenLiteSpeed.
Your test definitely explains why cyberpanel is the most performant of the open source hosting software I tested. It uses a combination of apache and openlitespeed (I think they perform a reverse proxy with apache and serve the website using openlitespeed).
thank you!
Great video, as always. Thank you.
thank you!
Please compare Nginx and HAProxy.
That would need various reverse proxy workloads: ones that filter traffic and others that don't, since HAProxy doesn't do the web server part.
ok will do!
Please compare PHP on Swoole/Roadrunner/FrankenPHP Server versus Rust, Go, Node.js
yes i'll do it soon
Love it with the cam, keep it up
thank you!
You’re English is very good. Not sure whether your pronunciation of ‚throughput‘ is your signature move or not. I noticed it in multiple videos..
😊
Oh, the irony
For the reverse proxy tests, can you test with the swiss army knife of reverse proxies: Envoy proxy?
It supports TLS, mTLS, TCP proxying (with or without TLS), HTTP/1, HTTP/2, and even HTTP/3, multiple options for discovering target IPs, circuit breaking, rate limiting, on-the-fly reconfiguration, and even Lua scripting in case all of that flexibility isn't enough.
i did it in the past maybe a year ago or so but will definitely refresh it with new use cases soon
really great video! can you do nginx vs tengine next? it's claimed to have better performance than nginx and I'm very curious about it. love your vids
ok noted!
very good, and the diagram for the test scenarios is beautiful and easy to understand
thank you!
Thank you very much for your hard work 😊
❤️
love u anton
❤️
Perhaps you need to try the previous version to fix problems with nginx, or build it from source too?
i may try something in the future
I agree. Both the 85% CPU behaviour and the much higher backend app CPU usage feel like regressions.
When looking at the 85% cpu breakpoint, one thing I could think of was some form of a leak, maybe try to slow down the request increase rate, it might show different results.
thanks, i'll try next time
I would like to see a test of the NGINX Stream Proxy Module, which acts as just a reverse TCP or UDP proxy, not as an HTTP proxy. I, for example, use this for some game servers where I reverse proxy both TCP and UDP packets. I set up NGINX for this because it seemed like the easiest thing to do, but I don't know if it has the best performance.
That could be one of the comparisons with HAProxy, which is also TCP-proxy capable.
Interesting, I'll try to include it in one of the new benchmarks
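A minimal sketch of that stream setup; the port and backend address are made up:

# stream block sits at top level, next to http {} -- no HTTP parsing at all
stream {
    server {
        listen 25565;               # hypothetical game server port, TCP
        proxy_pass 10.0.0.5:25565;
    }
    server {
        listen 25565 udp;           # same port, UDP
        proxy_pass 10.0.0.5:25565;
    }
}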
Ok, thank you very much for really providing these insights! I'm in the middle of making my own reverse proxy, and this is some key data. I think I might have made an RP better than both of those. 😏
my pleasure, they have a lot of built in functionality
do REST VS GRPC
GraphQL vs gRPC maybe
will do soon as well as graphql!
How do you configure Apache MPM? Fork mode or Event mode?
i use event mode, here is the original config - github.com/antonputra/tutorials/blob/219/lessons/219/apache/httpd-mpm.conf#L5-L12
i also got a pr with improvement - github.com/antonputra/tutorials/blob/main/lessons/219/apache/httpd-mpm.conf#L10-L18
I feel like the intro parts are kinda spoilery even if you're blurring out the graph legends
😊
i know a lot of people already asked for this, but i also want to see Traefik and Caddy
Did you use RSA or ECDSA certificates? ECDSA should be used most of the time, as it is faster to transmit (fewer bytes in the TLS handshake).
Also, nowadays, when used as Reverse Proxy, the connection to the backend servers (i.e. downstream) should be also encrypted, and not cleartext.
I used RSA in both proxies. Regarding the second point, it's good to have but difficult to maintain; you constantly need to renew the certificates that the application uses.
I don’t agree. Internal certificates can be automated with an internal CA and ACME, an external CA (e.g. Let’s Encrypt), or long-lasting certificates.
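Worth noting that nginx (1.11.0+) can load both certificate types side by side and pick per client, so a migration to ECDSA doesn't have to be all-or-nothing; a sketch with placeholder paths:

ssl_certificate     /etc/nginx/certs/site.rsa.crt;    # placeholder paths
ssl_certificate_key /etc/nginx/certs/site.rsa.key;
ssl_certificate     /etc/nginx/certs/site.ecdsa.crt;  # preferred by modern clients
ssl_certificate_key /etc/nginx/certs/site.ecdsa.key;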
you need to check kernel params... the tcp_mem default is always too low; that can explain the nginx problem.
thanks will check
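For anyone wanting to try this, a sketch of a sysctl drop-in; the numbers are illustrative page counts, not recommendations, and sane values depend on available RAM:

# /etc/sysctl.d/99-tcp.conf -- min / pressure / max, in pages
net.ipv4.tcp_mem = 262144 349525 524288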
love your performance test....you've saved me a lot of time on product selection!
Again Anton, great test, but you forgot to fine-tune the servers again, just like in the database test. You shouldn't use the defaults.
Why not? Don't you think most people will use the default settings? Imo this way of testing is probably the most representative of real world performance. Of course it's also interesting to see how far you can optimize, but this is definitely useful.
there should be sane defaults. many setups will run with defaults.
Agreed the defaults should be representative of the average
@@_Riux wtf no, in the "real world" people actually configure their servers, or it's just a hobby project where none of this matters.
@@_Riux People who run defaults have no traffic; if you want to talk about traffic and performance, tuning the server is a must.
I thought you were Indonesian, but it turns out you're not. It's a great video, though (and I'm still sticking to Apache + PHP MPM since I've never had such huge traffic... except for the DDOS event).
yeah, i'm not 😊 i heard apache's php integration is very good
I’m curious how Java spring webflux compares to spring boot
i'll do java soon
I'm looking at benchmarks and feeling good about choosing nginx even though my website gets 1 user per month.
haha
took me a while to realize this isn't OSS nginx; i have not played around with the F5 one. does it come with a builtin metrics module? or what did u use to export those?
great content as always!
this is OSS nginx
oss doesn't come with a metrics module. latency can only be measured at the client; server cpu/mem/net is not the nginx metrics module's responsibility
@@rafaelpirolla don't know what you're talking about, k8s exposes cpu/mem/net stats for every pod
@@rafaelpirolla makes sense that latency was obtained from clients, thank you!!
worked around this once using otel module + tempo metrics generator, but that was rather convoluted / unsatisfactory approach
yeah, it's open-source nginx. Also, the most accurate way to measure latency is from the client side, not using internal metrics. In this test i collect cpu/memory/network for web servers using node exporter since they are deployed on standalone VMs
I always had stability with Apache, but with Nginx I occasionally had warnings in my alerts as the service was restarting
It's very common in production to quickly fill up all available disk space with access logs; this is issue number one.
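Mitigations are cheap, though; a sketch of buffering (or dropping) the access log in nginx, with logrotate handling the rest:

access_log /var/log/nginx/access.log combined buffer=64k flush=5s;
# or, for pure benchmarking where the log adds nothing:
# access_log off;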
Compare go-grpc and rust-tonic please
Tonic contributors have fixed many issues and improved performance
ok i'll take a look!
please compare the performance of nginx and haproxy
ok noted!
this made me wanna see tcp vs quic
ok i may do it sometime in the future
I'd be very interested nginx VS caddy
will do soon!
Can you try others? Like Envoy? There are some other "obscure" ones .. I wonder if you can test those
i tested envoy in the past but i think it's time to refresh
Please compare River reverse proxy with Nginx
ok interesting
I feel like network usage in itself is related to requests/s: if one webserver is able to satisfy more requests in a given time, it's prone to having more network usage within that same timeframe.
Why not network usage per request?
it's common to use RPS (requests per second) to monitor http applications
Why would you activate compression instead of serving pre-compressed files?
I didn't get the question. You use compression to improve latency and overall performance. With a payload that is four times smaller, it takes less time to transmit over the network.
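Both approaches exist in nginx; for static files the gzip_static module (if compiled in) serves a pre-compressed sibling file directly and skips the CPU cost entirely, a sketch:

gzip_static on;   # serve foo.css.gz if it exists next to foo.css
gzip on;          # fall back to on-the-fly compression otherwise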
Is it possible to benchmark pingora as well? It will be easy to use it after river became available so will wait for it in future!
Thanks a lot for the benchmark!
yes just added pingora in my list
11:35 higher cpu for apps behind nginx indicates that they have more work to do, because nginx must be sending more data per second to the apps than Apache.
Please test some more experimental servers too, like maybe rpxy/sozu compared to envoy.
ok i'll take a look at them
How does Caddy compare to these two?
i'll add it as well soon
Please do Nginx vs HaProxy
ok will do!
@@AntonPutra Thanks!
Caddy, traefik, and envoy proxy!
yes will do soon!
Can you add Caddy
will do soon!
check istio gateway vs nginx.
will do! thanks
Can you do `envoy` please? it is widely used by Google GCP
A 4th test with apache's "AllowOverride None" would be nice. I've heard it improves performance but never tried it :/
ok i'll take a look!
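For reference, a sketch of the apache directive; it stops apache from stat()ing .htaccess files in every directory on each request (the path is illustrative):

<Directory "/var/www/html">
    AllowOverride None    # ignore .htaccess entirely
    Require all granted
</Directory>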
very interesting
thanks!
Try this test with Dynamic HTML Content fetched from SQL Databases.
Please do Traefik vs nginx ingress controller!!!
will do!
compare apache/nginx to traefik and caddy
yes will do soon
One future idea test, job schedulers
like airflow?
We usually use these two: nginx for ssl and reverse proxy, and apache as the php handler :/
yeah apache has nice php integration
I've noticed this weird behavior of nginx as a reverse proxy to a backend server too. Even if that backend server itself is just serving static data, the mere act of being a reverse proxy seems to cause a rather big performance hit for nginx. Weird.
thanks for the feedback
Can you please start a series on Docker networking tips, or anything related to DevOps?
It would be helpful learning from your experience.
i'll try to include as many tips as i can in the benchmarks 😊
hi, what tools are you using for monitoring and benchmark graphs?
Grafana
Thanks!
prometheus + grafana
nginx vs Caddy please!
will do!
Redbean and caddy please
ok added to my list
Something isn't quite right here. In all 3 tests, you show the requests per second synchronized until a major failure happens. The time log at the bottom seems to indicate these requests per second metrics are being gathered over the same time period.
Yet how can this be possible when one web server has a significantly higher latency, measured at the client, than the other? Once the latency difference hits 1ms, that means we should notice at least 1,000 fewer requests per second for each second that passes after that moment -- accumulating as time goes by. And, of course, this difference should accumulate even more quickly the higher the latency goes.
It looks to me like you (accidentally?) decided to normalize the graphs of each contest so the RPS would match until one of the servers failed.
Or if not, what am I missing here?
You blur the text, but the colors give it away 🥲 choose colors that aren't related to the technology.
😊
How are you exporting the results into the graphing software? Can you explain what softwares those are to do that so I can recreate this setup?
sure, I use Prometheus to scrape all the metrics and Grafana for the UI. it's all open source, and I have a bunch of tutorials on my channel
@@AntonPutra thanks!
Python Web Frameworks
Django x Flask x FastAPI
?!
i'll do ruby on rails vs node next and then vs django
Next Caddy and open litespeed
noted!
Prisma vs drizzle
ok noted!
Cowboy , Erlang and other high performers for future videos
will do soon, but first ruby on rails 😊
Nginx vs YARP
ok noted!
In the last test, are the Rust applications running on the same instance as the server? It seems like the Rust application in the Nginx case is stealing processor time from the server.
At 1:26 he explained where everything is hosted. Applications have separate machines
@@Pero12121 I missed it, thanks.
yeah, in this test web servers are deployed on dedicated vms
FastAPI would be cool
yes soon
Hi, could you create a video explaining step by step how to prepare such a testing system from scratch?
sure, but i already have some tutorials on my chanel that cover prometheus and grafana
Anton, your name is very Indonesian, more specifically, Chinese Indonesian. Do you have any association with Indonesian culture?
This is a slavic name lol.
bro, don't embarrass us... judging from his accent, Anton Putra sounds really Javanese, right? lol
no, but i was frequently told about my name when i was in bali
@severgun he was referring to my last name actually
Something is wrong with this test; I don't know what it is, but there's no way apache is better than nginx. We ran extensive tests at my previous companies and handled huge traffic, and apache was a headache.
Actually, I tested NGINX vs. Apache maybe a year ago, and NGINX performed better. However, for reverse proxy, Apache performed very well compared to just serving static content.
Like the others said, with Caddy would be amazing
Will you make a comparison between the best frameworks of zig (zzz), rust (axum), and go (fiber)? I have been waiting for this for a long time.
yes will do
Test starts at 5:21
i have timestamps in each video
@@AntonPutra nice thank you
nginx vs caddy vs traefik
will do soon!
I always wanted to see this.
my pleasure!