How do you do performance testing? What tools do you use? What do you measure?
We use Testkube, which has executors for LoadRunner or Gatling... if we can make a Ddosify executor in Testkube, then it's fine too... I gave up trying to standardize testing tools in my organization, but I would like to standardize how they're hosted and how they integrate with GitOps and CI/CD.
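For anyone curious what that standardization looks like in practice, here is a sketch assuming the kubectl-testkube CLI plugin is installed; the test name and script file are hypothetical, and executor type names and flags vary across Testkube versions:

```shell
# Register a k6 script as a Testkube test (sketch; flags may differ per version).
kubectl testkube create test --name perf-check --type k6/script --file ./load-test.js

# Trigger a run; Testkube executes the test inside the cluster, so the hosting
# and CI/CD integration stay uniform regardless of which load generator a team chose.
kubectl testkube run test perf-check
```

The point of the executor model is exactly the one in the comment: teams keep their preferred tool (LoadRunner, Gatling, k6, potentially Ddosify), while the way tests are stored, triggered, and reported is standardized once.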
Great coverage with great thoughts & guidance, very well done! ❤
Thank you for at least acknowledging the aspect of performance in this video. I want to focus on where we agree. Most performance engineering (whether under load or not) is useless and pointless the way it's being done today. It is providing little value. Not because it shouldn't be done, but because those doing it have little expertise with the discipline. Yet they are responsible for the execution of it for their organization. This is why IT still struggles with performance in modern systems in 2023. A production-like environment is a nice-to-have, but it is NOT the major issue and does not prevent a successful and valuable outcome. Performance engineers have dealt with this limitation for decades. We have a serious education issue in this area. I hope you and others will continue to bring up the topic so we can raise the bar on the lost art of performance engineering.
Was an honor to meet you at kubecon
Awesome video! (Like most of your content 😊) Your “blasphemous” comments resonate with my own opinions on performance testing in production. That being said, you’ve definitely piqued my interest in Ddosify, and if you have time, please do a video on it.
I had seen Grafana k6 before, due to Grafana's popularity, but I am not sure how it compares with Ddosify in terms of features, or which one is better. The no-code UI feature from Ddosify looks great.
Love your podcast.
Great video! It’s great to know I’m not alone when I find myself making the same arguments!
Viktor, video idea: application configuration at scale. There seem to be few to no tools that enable configuration as code and provide sufficient self-serviceability for devs (unless you’re obliged to use cloud-specific offerings, e.g. Azure DevOps).
love what you do!
I'm guessing you don't mean something like th-cam.com/video/Rg98GoEHBd4/w-d-xo.html...
Can you give me a bit more info about the idea?
What about testing services with a copy of production traffic? It seems that is not supported by Ddosify.
Ddosify does not help you create a copy of production traffic. For that, you probably need some kind of networking solution that copies production traffic to a different environment. However, if you're looking into testing production (or a copy of production), I would suggest using observability tools for that. Performance testing is mostly for deducing what would happen before it reaches production.
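For instance, if you already run a service mesh, traffic mirroring can do that copying. A sketch assuming Istio, with placeholder service names (`my-service`, `my-service-shadow`):

```shell
# Mirror 100% of live traffic to a shadow environment while users are still
# served only by the primary route (Istio VirtualService; names are placeholders).
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service      # real traffic continues to the primary service
    mirror:
      host: my-service-shadow # copy of production receiving mirrored requests
    mirrorPercentage:
      value: 100.0            # mirror all requests; shadow responses are discarded
EOF
```

The mesh discards the shadow's responses, so users never see the mirrored copy; you watch the shadow environment with your observability tools instead.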
How is this different than k6 or Gatling, for example? I can't tell :)
In this video, I used Ddosify to demonstrate certain aspects of performance testing rather than to feature the tool or compare it with others. I can do that in a separate video, though.
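For reference, the kind of run shown in the video boils down to a single CLI invocation. A sketch; the target URL is a placeholder and flags may differ across Ddosify versions (check `ddosify -h`):

```shell
# Send 1000 GET requests over 10 seconds against a placeholder target,
# ramping the load up gradually via the incremental load type.
ddosify -t https://example.com -n 1000 -d 10 -m GET -l incremental
```

The mechanics (request count, duration, load shape) are much the same as in k6 or Gatling; the differences are mostly in scripting model and UI rather than in what gets measured.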
Some observations
* Apparently, this person can only find performance issues with a production-sized environment and load. Working with environments sized differently from production is something that actual performance engineering professionals have been doing day in and day out for decades. Yes, before the cloud. Even before the Internet took off, with two-tier client-server applications, we could find issues with as few as a single user. If a system will not meet performance requirements for a single user, it is axiomatic that it will not meet them for many users.
* He's right that most performance tests are poor at understanding performance. This lack of value is due to how most performance testers are introduced to the field. Rather than going through training and mentoring to become effective and valuable, they are socially promoted from other testing areas. I have even witnessed a Big Four onsite lead tell a person to go home and watch YouTube tool videos because tomorrow they would be leading performance efforts. It is no wonder that only about 8% of people are genuinely delivering value in this space.
* Performance is about system efficiency. The more efficient the system, the better the scalability and the faster the end-user response. Efficiency also lowers support and hosting costs. The value delivered from any performance effort, from single-user analysis to multi-user test results analysis to production analysis, is to recommend system changes that improve efficiency. Until such a change is implemented, any performance effort is a cost with no return on the investment of personnel, time, or tools.
* Performance technical debt accrues until you ask the first question. If you wait for production, you will find that hundreds of architectural decisions become impossible to unwind quickly and cost-effectively to improve performance. Defects escaping to production, and their cost to fix, is a well-studied problem from ANSI, Gartner, IBM, and others. The production repair cost is orders of magnitude higher than even at user acceptance testing. Ideally, you want to catch a performance defect as close to its introduction as possible, which means asking performance questions as early as possible in the application lifecycle. We can observe daily, in the news and on social media, the consequences of asking too late, or not at all until production.
What are my bona fides in making these observations? Over the past three decades, I have helped answer performance questions impacting $1.1 trillion of the US economy. These items range from mortgage generation and securitization, stock trades, tax payments, eCommerce systems, digital currencies and the trading systems for them, to internal support systems for manufacturing, HR, payroll, logistics, purchasing, airline and hotel reservations, and call centers. I have helped eCommerce providers find millions in extra revenue from improved conversion rates without running a single test. I have helped trading exchanges recover 90% of their resource pool through configuration changes. I have helped a failed electronic currency launch become a success. I have worked with governments at all levels and with private industry, on applications ranging from internal-only two-tier systems to systems that span multinational private and public clouds. If you have an issue where performance impacts your business, you need an answer because poor performance has a cost, and you are willing, I will do your work at no upfront charge. My compensation will be a percentage of the improvement in the KPIs for your application: your response time, your total time on the system (a measure of efficiency), your improved conversion rate, your lowered hosting and support costs.
And I will do all of this without using DDOSIFY.
Dropping tools that can automate most of this, in favor of selling multi-million-dollar consultancy, is not the way to scale.
Automated or not. Agile, waterfall, DevOps, or some hybrid methodology. The offer still holds: I will do performance work for a percentage of your gain. If you don't experience at least a five percent improvement, then my team and I receive zero compensation. You don't even have to purchase any tools.
Well said, my friend.