It's the degradation of the education system. Programming in C and other older languages is hard to do well. It takes time to learn and many will never become really good at it. Many of the newer languages have greatly reduced learning curves, but that is paid for in other ways and this illustrates those ways.
Well good thing that all you need to do decent C is to just be okay at it and know how to read read to know what operations are unsafe as the spec literally tells you that
Corporations don't have a "degrade education system", if anything it is a symptom of enterprise programming and practices. Teaching how computers work is not their primary goal, their goal is faster time to onboarding which requires easy to understand programming languages and practices. Object oriented programming was the first step. It made it possible to hire engineers at never before seen rates globally.
@@Nicholas-nu9jxC is easy to learn but incredibly hard to master. Writing a resilient and sufficiently complex piece of software in C is objectively not simple.
I'd say c/c++ is not that hard for what you get in return. You learn a language that the whole world can share and expand, and that has been the case for decades. You get a language that can achieve anything. You get a language that every processor is optimized to run, and thus the fastest non-assembly language. And the great part about c++ specifically, is it is pretty much c. Class inheritance is just embedded structs. Polymorphism is just a function pointer table. ect... C++ is just all the things people normally do in a large C program project, packaged up to make it easier, so its going to perform mostly the same unless you deliberately write a c program that avoids all that for as close to assembly performance as possible. And yeah, you can do that. What we need are AI tools that can take these other slow languages and convert them to c/c++. Then, you can just tweak out any mistakes, and have a functionally equivalent natively compiled application that runs better. The real question is how libraries are translated because a scripting language eventually reaches natively compiled libraries in the interpreter or virtual machine, so its not a simple button click conversion. The goal would be to have a language that is super fast to write, but in the compilation process becomes equivalent to a hyper efficient c program.
"Back in the day" when I programmed only on VAX/VMS, one day my boss came over all excited and made me come to his office. We often wrote our programs in VAX BASIC because it was just so darn powerful, but of course even back then the same concerns arose regarding memory, execution speed, etc. So he wrote a quick program that basically just counted to 100,000,000 or something like that printing the start time/end time. For the sake of argument (it has been 30+ years) BASIC took 20 seconds to run, COBOL took 15, Pascal took 7 and C took 1.5... but the REAL shocker was FORTRAN... it completed in 0 seconds. So we compiled/machine and looked at code generated by the compiler and found that FORTRAN was so smart it optimized out everything but the final value. :D
...Also, who wrote the programs for each test type? Because different languages are stronger in different ways and often algorithms can be implemented in a fashion that takes advantage of the strengths of that language? You know what I mean?
I had this thought as well. I doubt it would make any interpreted languages faster than their compiled cousins but doing things in idiomatic ways can make a big difference.
something as simple as "count to 100,000,000,000" is not great for a benchmark, there's a good possibility that it ends up just running at whatever the clockspeed of your CPU is
a.) all rather academic usecases and b.) nobody in their right mind uses pure python for heavy lifting. you use libraries.. which are written in C(++).
In reality, Python programs outside of extremely narrow use cases (inferencing, training, any trivial array programming problem) spend most of their time in plain Python.
Academics do. I'm not even joking. We do. Are we in our right mind? 🤔Some are, others aren't. Clearly I am, otherwise I wouldn't be subscribed to Lunduke.
@@microcolonel those narrow use cases are also the most common python use cases. anything other than that is probably a cli app where performance doesn't really matter. sure, you can write a backend server using django or something but python provides little to no benefits there compared to any typical backend language like c# or js and there are great performance downsides.
It's nothing to do with Ritchie. It's all about the compiler, 40 years back all the assembly guys were whining that C compilers were terribly unoptimized.
@@werthorne true. But some of them are just c library wrappers. If you want to write a loop in python you should just use numpy (or a different language).
I think there's far less cause to be concerned about the energy requirements of developing software versus actually running it as a user. A developer might run a test, for example, that mimics the actions (and energy use) of thousands of users. That's still an efficient practice.
The reason people are able to get by with Python is because the Python libraries that are computationally expensive are written mostly in C or C++. Well, that and the whole "scalability" where people run 100 servers in a cluster to do the same thing 1 server could do if the application was well designed.
My first program was a multithreaded file optimizer script written in MS batch, where the actual optimizing was carried out by the worker script calling an external application. Even there is was very obvious how much the interpreted language acting as a coordinator slowed things down. I see this every time python is brought up "but the actual libraries are C...", and meh.
@@superfoxbat as glue an order of 1% of the program's cpu cycles are used by it, while the other 99% are used by C. If you got rid of it, the maximum performance benefit would be 1%, assuming your replacement language is infinitly faster.
python is pushed by academics that don't really program but do statistics or somethin else instead lol people that want to lazily use other people's libraries also bootstrappers for GPU-driven AI these days, but that's what python was supposed to be, a scripting language, not something to make entire programs in that's the fucking problem
Ts doesn't transpire to js the way js is jit processed. It adds tons of js you wouldn't need if it had been written in js to begin with. Python is mostly used as a wrapper language, most of the work happens in libraries that are written in c.
@@wernerviehhauser94 never had that issue. It’s called learning how to program I guess ;) When it comes to debugging it’s always business logic for me that is the challenge, never memory management and pointer management. But that’s probably a generational thing, as we grew up doing assembly and C and as a result learned how to manage memory and own the hardware. If you haven’t grown up doing it, it maybe challenging.
Hypothetically speaking, Java had much time invested in its compiler/JVM development from highly talented developers from early 1990s. Which is probably why it is so high on the list. I also agree with your point that there is no investment in compilers developments due to how much speed in hw we currently have compared with early 1990s. Which is why the recent languages don't perform as well.
@@Flackon That would be Zig I’m pretty sure. They are now already blazing fast. And the zig team isn’t blowing itself up with internal politics, so they’ll likely still be around in 40 years. 😉Man 54 years for a language to stand firm and be the core language for many kernels is an achievement.
I'm no expert, so feel free to chime in and add corrections. My understanding that adding abstraction slows down the computer. The problem with "Everyone should use C" argument based on power and time can be applied to people who code in assembly. We can go further, why not just code directly in binary if you want to go max speed and power savings? You won't need linkers or compilers. The obvious answer is that we want some level of convenience. The best language balances convenience with power saving and efficiency. But once again, the more convenient a language is, the more abstract from computer code it is, and the less efficient in general it is. The other question posed, will there be a faster language than C made? Possibly, but unlikely since it's a simplistic language that's fairly close to matching computer code. You'd need to get something closer to assembly like Fortran or something. I use C++ myself and am happy with it even if it's not the best. Anyway, that's my 2 cents.
Not everyone, not everytime one should use C, always use the right tool for the right job, but when it comes to system programming C is still king and will continue to be so
Adding abstractions does not "slow down" anything. You literally cannot write a meaningful program without abstractions. Every time you write a function, you are writing an abstraction. Compile options matter greatly and C has its own abstractions
@@jshowao Right, all languages have abstractions, I was trying to use the term 'abstraction' to refer refer to the convenience of languages. The more the language resembles regular English and/or more bells and whistles it has, typically the more the compilers have to do to get it to line up with computer code.
@@haroldcruz8550 Right, I'm not here to say C is bad. I was poking Lunduke's argumentation a bit. It's a fine language and I'm glad people are still learning it.
In the real world, python is just business logic code that does so few computations, that It's really insignificant how slow it is. Once anything remotely computationaly expensive is required, C libraries such as numpy, tensorflow, pytorch etc. Are used. So this is a really silly test. No one is going to use pure python to list the 1000000th digit of pi or break the highest prime found record. That's just not what the language is for and how it's used.
Yes and no. The problem is that some people are oblivious to proper use cases and inefficiencies, or at least _how_ inefficient it can be (ex. maybe they think it's 5x less efficient at a task instead of 50x). Because of this it ends up slowly creeping into other work over time like a weed unless it's put into it's place with reality checks like this. Most programmers are smart, but there are a ton of well-educated highly knowledgeable non-smart people who will still do mistakes like this.
@@jeffreygordon7194 ya after reading all the comments on this video I gathered that. I haven't learned Python myself, but I've read some code. I like the dynamic variable declarations and the language reminds me of basic.
Gotta keep in mind that even though Python itself may be extremely slow compared to something like C, a lot of the compute intensive Python libraries (think numpy, pytorch, etc.) actually make use of native C / CPython libraries behind the scenes, so the Python code itself doesn't really do a whole lot of computation in these scenarios, and is only used to interface with the more complex parts of a library.
Those are python libraries, so it is python. What you are stating would be like saying "This specific c library on this specific platform is written in raw assembly, so using those libraries is not really c." Obviously, everything in c is converted to assembly. Similarly with python, everything in python is eventually run natively through the python interpreter. A better description would be that the python functions in numpy tend to run closer to the native hardware. The implementation could easily change.
@@projectnemesi5950 That is not the same thing _at all_ . Python is a single threaded, garbage collected language. The instructions you write in Python are executed by an interpreter, which itself performs translations (and probably some heuristics, as well as garbage collection, just-in-time compilations, etc.) _at runtime_ , whereas in compiled languages like C or Rust, this heavy lifting is done in advance, at compile time. So yeah, it does make a tangible difference if a library is merely exposing an API that is making function calls to compiled shared objects in the background, as opposed to _also_ being interpreted at runtime.
This was a reply I made but it deserves to be a top-level comment. Ultimately, Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I think that it's overused. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Wholeheartedly agree. The beauty of Python comes from being able to pump out a script to do a simple, seldom-repeated task in an hour or so. Python is a scripting language and should be treated as such. There is no reason to be spending the time creating a website back-end in a Python framework like Django when the benefits of such a "simple syntax" are so heavily outweighed by the drastic loss in performance. You will eventually just have to rewrite the whole thing in a more efficient language or stomach the increased operational costs anyways. I'm a relatively new programmer, but for things I intend to have running 24/7, I much, much prefer slow development over slow performance. Migrating things I originally wrote in Python when younger to Rust has allowed me to do so much more with my raspberry pi micro-homelab than I originally thought would have been possible with the hardware.
Well that's the big problem, isn't it? Python was created to be a more robust alternative to bash when bash isn't quite enough for what you're trying to do. Javascript was meant to be a simple language for simple front-end tasks in a web browser. Now most the world is running on languages that were never meant to be doing what they're doing.
I agree that the Free Pascal compiler is a real gem. It also has the advantage of taking in a language that is fairly easy to compile. I will point out that the software development times don't seem to be getting faster with the newer languages and also these languages that are supposed to defend you against errors seem to be used to write a lot of buggy code.
That's due to Scrum making every issue take multiples of two weeks, and only hiring the cheapest devs, because "more is better" even though that makes the useless Scrum meetings take even longer.
Considering all Windows 10 versions come with a sort of .NET Framework 4 pre installed.. makes you wonder, if the whole OS was written in dotNET C, C# or C++, except a few Exe, which have to run on baremetal like the kernel.
Rust never claimed to be faster than C... It claims to help to write safer software with more creature comfort than C without being noticeable slower. And honestly looking at those charts they've done good job. Only 4% that's really impressive. Also Lua JIT is missing, which shouldn't if you include TypeScript.
@@cajonesalt0191 Programmers are inherently lazy. wasting 4% to make sure they don't screw up because of how lazy they are is an acceptable trade-off. That being said, I'm not certain, but I think we could minimize that 4% further by changing pc architecture.
@@cajonesalt0191 Yeah, but since it replaces the current node/TypeScript garbage it is still a huge improvement. Now, if all the unnecessary cloud nonsense, including the network traffic, would be gone we could decommission additional power plants.
@@cajonesalt0191 Rust is only slightly slower in synthetic benchmarks in a lab setting. Compare real world C projects to Rust projects, and the difference is often Rust being 100-500% more efficient than C.
I mean nobody is using Python or something for high performance application, all of that is offloaded to an actual fast language. Python for "Script you may run a few times a day and completes in 50ms" prob uses less electricity all time than it would for your brain to sit down and spend the time to write it in C
Wow you people still just don't want to admit that Python sucks... just learn AWK and be done with it. AWK is fast, easy and efficient, far easier and less complex than Python. And for data science, just learn R.
@@CaptainMarkoRamius that's correct, for R's package ecosystem is phenomenally comprehensive and well written. It's like comparing a children's tricycle to a space ship with a gravity distortion drive.
Every programming languages has to make certain trade offs because we don’t live in a perfect world. The developers of the Python language wanted to prioritise development time and ease of code maintenance over execution time and efficiency. That’s a perfectly reasonable trade off to make in a lot of cases. For cases where that isn’t desirable there’s C or Rust or whatever. What’s the problem?
I have been using pascal for over 40 years and forever looking for alternatives and no other language has come close for me and I have tried all the main competitors. I now use Lazarus Free Pascal to write fast GUI applications for Windows 11, MacOS and my favourite Arch Linux. Well done Brian, I am with you on this study and agree that these factors should be taken into account. About time a good modern C language should be developed, or an improved modern C++ developed on these criteria.
I think everyone agrees its performance is nearly identical to C. Some cases it may be slower, some it may be faster, but it's at the point where it's hard to compare.
I missed zig too, they probably didn't include it because it isn't used much yet. I think that zig is very promising but it needs more libraries and all that.
I can make the comparison especially between C, Zig, Python and maybe Rust depending on the benchmark. Which of the benchmarks do you think are the most important? Give me the top 5!
Java is a good language. A good language is one used by a lot of people to accomplish a lot of tasks. That is all that really matters. Speed is a side effect of language optimization because people actually use it. That is why c, c++, java, rust, ect.. are all the best performers of the study. And its the reason everyone is criticizing python for performing as poorly as it does despite popular adoption. I think the reason for this is most of pythons popularity are due to a few powerful libraries and prototyping. So you will see a ton of numpy scripts, or opencv, ect..
The price of abstraction is sometimes worth it. I'm a C-zealot myself but I understand the appeal of languages that make life easier. Some people cannot handle the amount of power you're given as a C-programmer, and that's fine. However, I'm still conflicted as I recognize this "slacktitude" is contributing to a slower, more unreliable software ecosystem.
thing is back in the day we used to use Python to prototype things and then once it was figured out we'd write the real thing in C++, not happening anymore
@@Zuranthusin some numerical work it's not even good for prototyping as its too slow even for test problems unless you heavily use numba, cython, jax, cupy etc, at which point it wont be much easier to develop than c++.
@@hawkanonymous2610 And again I'll point out that unless you're a newbie you'll either have written your own libraries or know which ones to use to accomplish tasks equally as fast as programming in Python. It's just a matter of your experience level.
@@transcendtientC programmers say C is the best for everything and if someone criticizes C their only response is “sKiLL IsSuE”. I would know, I write C at my job and I know a lot of those types of people.
If you do a simple search for ranking programming languages by energy efficiency it shows you what you are looking for. As you start typing it in, it even auto fills it.
As a user of many langs, I think pascal is a great combination of speed and easy to debug, plus being RAD. For me, easily the most overlooked and underrated.
@@ThomasCpp Including all of the processing time would be correct for scripting languages. Javascript can be converter into an internal format to gain speed too.
I'd love to see something like this with different versions of C compilers. C is the language for efficiency, so a lot of work was put into C compilers to make that even better.
According to one benchmark i've seen, oksh compiled with cproc, a very small compiler which is only 17479 lines of code (including code for it's backend), is only 1.35 slower than if you would compile it with gcc or clang. Oksh is probably not a good benchmark for code, but i bet with almost any c compiler you'll get higher performance than something like go or C#.
There are two definitions of speed; how fast does the program run, and how quickly can you get the job done. Often, 71x slower means it runs for a minute, while c runs for a second or so. Cool. However, if you need half an hour more to write the thing in c, and the program is basically something that does number crunching and writes it down into the database or in a file, you basically wasted 20 minutes to save one. Sure, if it has to run frequently and especially if a user has to wait for the result, by all means rewrite it in c.
And yet again I find myself pointing out that newbies will not be competent in using C and will have an increased development time. This is who Python is aimed at. With more experience comes using either your own libraries or those that you have learned to use and development time is reduced. Go ahead and ask your mother to write something in C and offer no help.
While we're talking about efficiency, the worst offender might be the web platform. It's "too much work" for most companies to write native applications anymore because all they care about is how many programmers' salaries they have to pay. Us end users are the ones that have to pay to run their inefficient software.
Most people doing python are not making complex maths stuff. They're creating scripts to iterate over a CSV file or an API that grabs some data from a db and sends it as json, and even if they're doing more chances are python is calling a c library trying the heavy lifting.
@@JodyBruchon And not to mention, academic work! I see Python heavily used by lab scientists are Stanford studying things way beyond my comprehension (like comparisons and data analysis involving zebrafish spines). It works well for them. Pytorch is also another thing academics love. So, once again: there's pros and cons to everything. It may be slower, but the trade off is convenience and familiarity within their tribe (lab). Just food for thought!
Pretty sure the default json module that comes with Python is written in pure Python for various reasons, portability being the big one. Please check this though, I could be wrong. But what I know for sure: there are several third-party JSON modules that defer to Ctypes or Cython which perform a lot better, but lack all the bells and whistles the pure Python one has.
@@koitsu2013 Indeed. Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I do think that it's overused, though. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Reptiles can only regulate their body temperature by moving to where they can warm themselves up or cool themselves down. They are optimistic about finding cold spots.
Even worse, when you consider the applications running on these languages. OpenStack (written in Python) is running in containers on top of OpenShift (written in Python). So interpreted code on top of interpreted code on top of interpreted code.
Interesting that Pascal is slower than Chapel but uses less electricity, which suggests that whatever Pascal compiler they are using isn't using the processor as intensively hence the speed difference. So an implementation of the language rather than something inherently slow in the language by design.
Pascal is C are basically equivalent beyond syntax. The only difference there is compiler. I write modern industrial Win32 GPU graphics software in Pascal.
@@LTPottenger As both C and Pascal put raw data into memory and don't wrap it in some kind of object structure, that also is pure compiler logic. My guess is on not aligning record structures to 4/8 bytes. This makes the access slower with modern CPUs but uses less memory.
A 75x slowdown is wildly optimistic. Compared to finetuned, optimized C or assembly, the slowdown factor is easily in the thousands. But it doesn't matter, because Python is mostly used for business logic or as a glue script, with the bulk of the task done in well optimized libraries written in C or other languages. Most software written in Python would not actually benefit from a big speedup if it were rewritten in C (some would, because many programmers are unfortunately not aware of the performance implications of what they do and would happily write in Python things that really should be part of a C library).
How is JavaScript just 6x slower but TypeScript 46x? TypeScript is transpiled to JavaScript before execution. Are they including the transpilation time? (That'd be a bit like including compilation time of compiled languages.)
10:40 is it me or since c is about memory manipulations and everything is directly handled by the developer, it should actually still win on the ram benchmark? programming differences.. so if the code was totally the same on both language c would probably still win on memory
when you create a NewLanguage that in some place faster or use less memory than C, it only means that C can also benefit from the same speed or memory optimization and will be also faster and/or use less memory when this trick will be implemented in the next version of C compiler. but without overhead of NewLanguage features. so probably there will be no NewLanguage that is faster and more memory efficient than C. and it only means that you can decide how much efficiency you are ready to pay for the new features. if you want a more efficient language than C, go to assembly language, but nobody now wants to do it.
Rust being faster and less power consuming than C++ was mildly surprising to me. I get a slight negative rust bias but that seems extremely small cost for more automated memory safety.
@@anon_y_mousse It is better than C++ for memory safety - the compiler catches bugs that most people never do. It has a smarter compiler so you can use your cognition within the problem domain, instead of wasting brain cells figuring out what the compiler already knows better than you. C++ people are upset they're not the best in town anymore.
Rust is exactly just as fast as C according to the most up-to-date results: energy-efficiency-languages/updated-functional-results-2020 Everything the guy in the video said about Rust is outdated by the same study.
Many of Rust's guarantees are build time rather than runtime but I am also suspicious of that. In fact having C++ be slower than C seems like a bug either in the implementation or the compiler. Templates are build time and member functions are basically just normal C functions that take in a this pointer as the first argument. Unless they are using dynamic dispatch or something I can't think of any reason the c++ version should be slower.
I would like to use C or C++ for more, but for rapid development, for systems that change just as rapidly, JITted languages are a healthy alternative (C#, Java, etc). As a C# dev, this chart can be misleading. Run a service for days and you'll notice initial memory usage seems high but compared to other languages and frameworks it does as good or better than the majority. Ruby, Lua, Perl, Python are fine for scripts, but honestly they suck for real production perf. As a language, I can't stand Ruby. It's awful. RoR is a bag of hot doodoo.
I will say, Java and C# have actually been putting in investment towards speed and efficiency. They just started with the handicap of being managed languages, which sometimes is a useful price to pay. Rust is even similar to that in a way. Every runtime check you add for safety and consistency at the language level is going to slow things down. C doesn't care and trusts the dev did that checking at programming time so there's nothing holding it back (from exploding at spectacular speed, sometimes)
Inferior error checking and many other types of issues can conspire to cause more development cycles, potentially far exceeding whatever time and energy savings are achieved by the final working application.
As far as I can tell, the applications tested are meant to make heavy use of the cpu. I am wondering about the actual impact when using a normal application which is mostly waiting for io. Eg when using application written in X vs V, how much delta in energy cost would be accumulated over a year.
LuaJIT tends to be about 100% slower (i.e., taking 2x the time) compared to optimized programs in compiled languages. It even seems to be somewhat behind the Java VM, though it does appear to hold pace with JavaScript's V8 engine (Source: Mike Pall's scimark code vs. equivalents created by others in Rust, C++, etc.) . IME, that's about the difference between competently-but-lazily-written C++ and optimized-by-experts C++, so not a bad result overall. The best results are probably achieved by combining the FFI with optimized native libraries, while minimizing context switches to get the most out of both the JIT compiler and the heuristics used by LLVM/GCC. LuaJIT without native libraries doesn't make sense, so it's not useful to benchmark interpreted code that should have been put in a native library to begin with. And the default interpreter is so slow that talking about its performance is effectively a complete waste of time, especially since it doesn't have a FFI and its C-interoperability layer incurs very high overhead costs. Still better than CPython of course, but the value of Python lies in its vast ecosystem. Lua doesn't even try to compete with its minimalist Do-It-Yourself philosophy.
This is a myth. Luajit is very competent, but it can never be as fast as C. Luajit doesn't statically compile Lua. It compiles it where possible, and still does plain interpretation where not. Code with string concatenations, for example, will run at normal Lua speed (slow). However, in this regard, pypy is very much comparable to Luajit. If they included Luajit, they would've also included pypy. I don't buy the part where they made lua look slower than python, though. That's not been my practical experience with the two. Not at all. There are things where lua can be as slow as python, but for the most part, it's not.
Not surprising seeing how despite computers becoming more powerful, because of all the bloat and inefficient coding, nothing feels all that different in performance than 15-20 years ago. Also I am not sure how many of the background processes are that useful. It;s really annoying that something like a youtube page needs I don't how many gigs of ram just a for a normal HD video...this tab right now uses 1.1 Gb ....that is insane.
In defense of Lua, the 100% interpreted version is very slow. But LuaJIT is more on par with C# and C++ in some little bench testing that I was running a while back. Also, Zig is newer and is supposed to be faster and more efficient
I have not written an article for a while. But as a physicist I must point out that the table showing *normalized* units (i.e. the best one being assigned the value 1.00) has no business putting physical units in the column headers. *Normalized* means that the units cancel out. Nonetheless, I appreciate the material revelation made by this work. Thank you for bringing it up.
We already have cython that does something similar. While widespread, it is not that impressive as a bare speedup to pure python because of python syntax, and to achieve really big speedups requires a massive rewriting with obscure annotations, which require considerable knowledge of how memory works similar to C (sometimes is just easier to rewrite the function in C). I do not see how mojo will benefit the general pythonist.
Yeah, press X to doubt. Usually this is highly dependent on compiling options and nobody uses languages that are pure, external libraries are called usually written in a different language. This was also 3 years ago.
One thing I saw that I do take objection to with regards to their test is that of using trees. The implementations can vary to the point of degrading performance a noticeable amount. I didn't see if they used hash tables too, but those can be excessively different in implementation as well. I primarily use C myself, but not for environmentalism reasons, but because I want to save my users time when they use my software. I'm not surprised about Python, because I use it fairly often too, but I am surprised about JavaScript. Yeah, I use that on occasion and it's slower than molasses in a freezer, but I would've expected that all of the work that has gone into optimizing the various interpreters would have meant it would perform better than it did. At least now there's some degree of quantification of how much the other languages suck.
I did not read the fine article referenced but are these the programming languages used to run most of the CPU time of the world? Zig and Odin are not really consuming much cycles globally. Web browsing and video rendering are the most used applications for end-users, I'd guess. Libraries are doing the CPU cycle work, and they are written in in C/C++/C#, mostly? And GPU helps so there is more research to do.
I started programming in about 1983 in BASIC. My first compiled language was FORTRAN. I was always impressed how much faster compiled languages were, but I never thought about power efficiency. I started programming in Python several years ago, and I was amazed that an interpreted language seemed instantaneous, but I never thought about how the indiscernible slower speed would consume so much more power. Interesting.
I'm deeply skeptical. These large scale comparisons are extremely difficult, as the quality of the implementation is huge factor. Naively implementing a particular algorithm in each language, when no "skilled user" of that language would do it that way, is a sure way to create very misleading results. C and C++ having significant variation is a clear indication of this - written properly, C++ is virtually performance indistinguishable from C. I think the paper is asking the right questions, but I'm not gonna be waving it around saying "we must switch to C" (as much as I would be okay with that, personally...)
Written properly for C++ (in regards to performance) just means never using templates, never using virtual methods and basically never including any header from the standard library. So basically just write C, but instead of functions where the first argument is a pointer to a struct, that struct can be a class with the function as a method and thats it. Oh and maybe you can use range based for loops and std::vec.
3 หลายเดือนก่อน +1
Looks like they used benchmarks game. So it's optimized but unidiomatic code ...
So, it makes you think. Our Linux systems often use like 0.5GB of ram, and have allot of python apps in them. So what runs in Windows, since it takes 5GB of ram, and is filled with c/cpp. > puts on a tinfoil hat
Me too, because my friend at SMHI (a meteorological institute) says that their old FORTRAN programs run circles around their new Java reimplementations. Problem is that they can't find FORTRAN programmers anymore.
This chart just reminds me why I'm an old C, assembly, Pascal, Perl, and PHP programmer. I'm 47. Each language has its pros and cons; knowing which one best fits the situation is critical. I do Python and Lua these days, but as a sysadmin most of what I'm writing are "convenience" programs and not, say, code hosting or integrated into a webserver. There are pros and cons to them all. That said: Lua surprised me, but this is from 2021 so it would be nice to see more recent versions tested. And I'd have loved to see Zig on that chart. That's probably the next PL I'm going with; I've been very, very impressed by it's implementation. Andrew Kelly knows what he's doing. F*** Rust and Go; and in the case of Go, Rob Pike of all people should have known better. He worked with Luca Cardelli for god sake!
The thing about C is that it's pretty much as close to writing in assembly as it gets. Minus the compiler optimizations, the generated assembly is as "literal" as it gets. In my opinion there's no need to "reinvent" C.
@@andrebrait sure you can write anything in C. But it is wrong to claim that it has all the features of a high level language as the original author did. If that were the case people wouldn't be implementing other languages on top of C like ECL among others. That's the whole reason people make programming languages - the host language isn't tailored to their needs. Prolog being written in C is an excellent example. In its domain, it is far more terse than C because there is a shift from imperative programming to declarative programming.
Python is 1,000x faster than using a bunch of Excel sheets and 100,000x cheaper than hiring a bunch of extra accountants and business analysts to manually re-compile a "database" from unstructured data. And the use case for basic business applications still heavily favors high-level languages like Python for most straightforward use cases (and if you need to do stochastic simulations then yeah the libraries are written in C).
Thanks for pointing this out. I know a certain switch company that, using Linux and C++ for their operating software, decided to code "non-critical functions" in Python. They were pretty good at it, incorporating the Python interpreter in the OS and getting pretty good at translating between the languages. After a while, they announced a new policy, namely going back and rewriting python programs in C/C++. This was due to customer complaints about how sluggish the OS was. The moral of the story was that, Python wasn't wasting time in any critical functions. It was wasting time *everywhere*, and general slowness was the result.
Software is going backwards in performance because it started out at the bare metal, where you HAD to understand the architecture. All this abstraction to hide the underlying hardware is where all this inefficiency comes from. Optimizing compilers are grossly overrated, and I have been saying this since the first version of C++. Most libraries are junk, and again, all comes back to dumb lazy programmers who don't want to spend the time understanding the underlying hardware. Compilers can't get better because they would need to anticipate every single possible line of code, or blocks of code, or every algo that could possibly be written, which is impossible. We need to get back to KNOWING THE HARDWARE, and KNOWING WHAT WE ARE REALLY DOING. There is no other replacement. Convenience ALWAYS comes at a price.
Its a mutual relationship. The software must know the hardware, and the hardware must know the software. Processors today are optimized for c programs. Everything is human made. The entire point of a programming language is to make things work and do it in a way that everyone can understand. Just like a spoken language, a "good" spoken language is one that is heavily used.
I always thought from the people I see using Python, that it is sort of a BASIC. Mainly for people that don't really understand computers and mainly for people who do Mathematics. As for Rust, yeah didn't drink the cool aid. Most of the arguments for using Rust and recreating the wheel which C already built, is mainly due to people who have poor modern tool chain skills in software development. (i.e. GIT, MAKE, Autoconf, Valgrind...etc.)
You can't be serious. This must be a trolling video. CPU efficiency and speed is not a primary concern. For modern programs the bottleneck is the network communication and the disk read times. If a python program is using heavy computation, it probably uses numpy, or similar libraries which is/are written in C. For companies, researchers, ease and speed of development is also a concern. Problems expressed in python or other higher level languages in a couple of lines translate to hundreds of lines in C. Python is easier to learn and faster to develop in. Developers don't have to worry about memory leaks for most cases. Also, for practical programs, Java JIT compiler can produce faster code, than C, because unlike C, java can use all the instructions of the concrete CPU it runs on. Also the memory allocation inside the program is not a system call, which makes creating new objects in Java many times faster, than calling malloc in C. You also don't mention, since this study came out, newer python versions rolled out with major performance improvements. Python 3.11 came out with 30% performance improvement.
@@A3A3adamsan That was just a poke at the irony of how a very old programming language is still vastly superior in a world preoccupied with going green.
I would be interested to see how Zig ranks in a test like this. I see Java doing its thing taking quadruple the RAM it should but being blazingly fast for a garbage collected language. What stands out most to me as someone just getting into Node is typescript being _substantially_ heavier than vanilla JS.
Compilers do a better job at optimizing machine code than hand optimized assembly, so there really isn't a point in doing that. Naive assembly will look roughly like what -O0 will emit. To improve that you'd start by adding optimizations but that's actually exactly what a C compiler does when you use -O1 -O2 -O3, so really C is just a way of automating the process of writing assembly.
@@josephp.3341 Depends on what you're writing, but some compilers are not as good as they should be at optimizing poorly written code. Even just the difference between calling a function with two arguments versus a struct with two members can cause wildly different results.
@@josephp.3341 I see this repeated a lot, but it's not really true: compilers are written by humans and the techniques compilers use are the ones humans used to use (excluding a few new peephole optimizations and whatnot). It's all about time, it takes a lot of time for a humans to repeatedly try inlining a few function calls, optimizing the result and finally figuring out whether that was worth it or whether it's best to undo all of that work. Let alone keeping all of that maintainable. Edit: Also, -O0 is (for most compilers) much worse than naive because it's too systematic, for example a human would tend to use more registers at once, use assumptions the compiler isn't allowed to consider and do simple optimizations like "call somewhere ret" -> "goto somewhere"
inb4 someone tries to incorporate transportation, man hours, and calorie consumption relative to error handling and memory safety, into some metric. Ackshtually Rust is the best if you look at it like [blah....]
Rust avoids many costly bugs that can occur in less safe languages. It might even have a formal prover like Spark Ada, which was designed for mission-critical applications like weapons and satellites.
@@CTimmerman Some of that is legitimate, some of it is just to avoid errors made by people who don't belong doing what they're doing. And should never have been incentivized, coerced, and drawn into those fields. I know how that sounds but if I went into the story of how I learned to program and the frame of life I was in, which basically amounts to constant whole body pain, fasting for days at a time, inability to sleep, and functional brain damage, the fact that I'm not only self taught but came out of it having sought out and knowing how the machine works down to the RAS and CAS signals sent to the memory controller, I can bluntly say most people don't belong anywhere near software. They just don't. They can't do recursion, they never bothered with (inline) assembly, the idea of cache misses and data locality is like a whoooaaaaa mind blown moment several years into their career, they just learned java and typescript or whatever and never bothered to learn data structures and how the machine actually works. I mean look, you can argue that it's a manpower and volume issue, so you'll architect tools and frameworks to keep all the normies on the rails so to speak. And at those organization and society scales you can make a case for that. But the fact remains, everything I stated is true. These people are not programmers, and they don't belong doing it either. Rather writing software is a surrogate, it gives them identity and money, and so they do it in order to get the money. This curns out mediocre "bare minimum" self serving types and then equally incompetent layers of management that have to try to channel that writhing malformed mass, that self serving blind beast, into doing something other than devouring itself. The whole mindset is wrong. This fixation on memory safety is partly quality of life improvements, and partly damage control. Mostly, damage control. For people who barely care what they're doing, because they're a product of the Prussian educational model, and due to the media exposure in early childhood [...]. I can say it, I lived it. That was my late teens and twenties, wasted being tortured. And even there what I created was the best. The best. No corners cut, no sloppy hypersocialized water-cooler mentality. No, the best. I take what is, and I make what ought. Those are the types of people you should either seek out, or rework aspect of society and human development in order to manufacture. Those others who can't or won't, don't belong, and shouldn't be pandered to. They need the boot. Get the hell out of here buddy.
C is a wonderful language, and I don't think anyone is surprised to find that its execution speed is much faster than Python's. So what. Execution speed isn't everything. Development time matters too, and on non-trivial projects most devs can crank out a working solution faster in Python than they can in C. Python is good enough for a lot of tasks, and at the end of the day, how much better than "good enough" do you need to be?
Agreed: Development time often matter more. Use the right tool from the job - ie, on a microcontroller that is battery powered where energy, compute, and memory is constrained C is used.
5:57 I’ve actually done a ton of embedded language testing. Compare LuaJIT vs the others and you’ll be SHOCKED by how fast it is. The “regular” lua engine (not LuaJIT) is only optimized for portability. The performance is as weak as you imply.
Well, python would've looked a whole lot faster too if they used pypy, which is about as competent as luajit. I'm only suspicious about Lua having been slower than python in their tests. Lua can be as slow as python in some things, but in my experience it generally isn't.
If you want a language that makes you care about memory, efficiency and speed, Zig is a good option to check out. More ergonomic and readable than C anyway!
wtf why C++ is 50% slower? how? I can compile C code as C++ and in some rare cases code will be FASTER (one youtuber compiled org DOOM in C++ and had 1% speed up). Same with TypeScript, how is worse than JS??? I would not believe this this research much.
@@Sneg00vik Yes we have safer `std::array` and `std::span` instead of `int[]` and `std::unqiue_ptr` instead `T*` and now where is this 50% overhead? We have RTTI and Exceptions that are heavy but if you not using them directly and have serious work load (aka start up is only fraction of time) you should not be able to notice it. Of corse we could created bloated code that will be slow but then we testing bad code or C++? We could probably create equality bad code in C. Only place where C++ is always worse is compile time, where all overloads and templates could make compiler glow red, but then if we include it in cost, how in earth python become 70x worse? When in very big program python could finish workload before C or C++ finish compilation, not mentions Rust that have similar bad compilation times as C++.
Yup. C++ uses the same exact compiler. Maybe they used specific C++ features, like vtables and RAII? I know Common Lisp code isn't the usual code, but uses typed Lisp, which is basically the C/C++ code but with worse compiler.
C++ does not need to be slower than C. Makes you wonder what exactly they did, for example did they write a custom algorithm in C, for some task rather than using a generic one from C++. In that case the study is nonsense.
Oh! This warms my stony, cold, old heart! :) Python is 71 times slower than C. I've sent this to my brother, a Python/Javascript programmer. Why does C get all the hate?
Because all the other languages have guardrails to keep you out of trouble. C you have to manage everything, which is why I would bet, that most bugs and security vulnerabilities are born.
I had a similar idea a while ago while debating whether or not performance mattered. It might not matter for your application itself, but making something 5x faster means you can get by using about 1/5th the same computing power you'd otherwise need. That means either less poweful hardware or fewer replicas, meaning lower cost. If that can be achieved simply by moving from Node.js to Java, Go or Zig, it's a good idea (especially because no one deserves the pain of doing JS on the backend, so everybody wins).
On wall street all our critical systems had to be written in c/c++. Python, java, etc. are all level 2 up ? languages and c/c++ is level 1, it talks directly to the system components. It is superior to any other language by far. where is assembly? Most of these languages are written in c/c++. C/C++ is not used because it is too complex for most people
Lua isn't that slow. I don't buy it for a second. In my experience, pure Lua is quite faster than pure Python, and internally it's a lot less bloated than python, with a whole lot less sanity checking overhead, etc. I'm finding that study suspicious.
A few observations: 1) Ada is still largely ignored even though it's actually a joy to write in. 2) They did not include Forth, which is telling. I have little to no doubt that Forth would either tie, or beat C in at least one of these metrics. The catch is, it sucks trying to read the code several months later - but that's not what this study is testing for. 😉 3) If Pascal received the amount of mindshare that C has received, it probably would beat C in all metrics. Ada is heavily influenced by Pascal, so you can get an idea of what Pascal would be able to do (It's just that Ada is to Pascal, what C++is to C - in terms of size and complexity).
Would explain why Android phones have more ram than PCs... Imagine if they used a proper language how much cheaper phones would be and have way better battery life.
Erlang slower than Javascript? This just shows that the experiment was very specific, run bunch of algorithms. Erlang ran slower does not mean we can swap JS for erlang in telecom and achieve faster tele communications.
I'm very surprised that java outperforms Go in energy and time? How is that even possible? I understand there's a GC in Go, but Java has WHOLE VIRTUAL MACHINE which is a lot of C code by itself. Then standard library, then the application. While Go compiles to the native code with an embedded GC. And Java still has A HEAVIER gc, heavier class runtime structures, heaviver runtime? Why the first entries are all round up at 1? 1 Megabyte, 1 joule, 1 ms. Also Lua is way simpler than python. How on earth does GO uses less memory than C if it has the same GC. This article is so BS honestly, just for C fanboys. No, I'm not pooing on C, it's an awesome language and Python is indeed slow, but this article got something seriously wrong.
This is pretty cool. I know the authors did some splitting of the results by compiled / virtualized / interpreted, but I think what would be even more insightful would be to break down the languages by feature (e.g., with or without run-time reflection, with or without dynamic types, with or without garbage collection, with or without strict-aliasing rules, etc.). Basically, beyond the d-measuring contest between languages, it would be nice to measure the cost that each feature adds to begin (finally!) to do some sober cost-benefit analysis. For example, I suspect Python pays a huge cost for having to translate most attribute or method call into a run-time reflection lookup, for the marginal convenience of occasionally being able to write some pretty nifty code.
you are not thinking in the economics behind python right. the energy (money equals energy consumed with that money) you've saved in engineers time due to utilizing python is WAY higher than developing everything in C
Say you can learn Python more than 70-80 times as fast as C. Which seems like an overestimation, imho. That still does not mean people who know both Python and C well, can write the same script 70-80 times faster in Python than in C.
For competent developers, development time isn't nearly so big a factor as people think. Generally, if you've been in the business for any length of time, then you've either built up your own libraries for doing things or you use specific libraries that someone else wrote. It's only for the newbies that the language's standard library really matters in most instances.
That would be an interesting test, but implementations matter more there than with a language like Python which really only has one implementation. Some were better than others and you'd either need to test them all or write one from scratch. Also, there were quite a few BASIC compilers and they would undoubtedly perform better than the interpreted versions.
Memory usage is kind of misleading. The energy graphs show one reason why, the other is if you have a 100Mb executable for C then it's a 100.5Mb executable for Rust. There's a static overhead for many languages but no overhead in the actual code.
There's also compiler optimization you are not taking into account. Rust and C does not create identical Assembly code so the memory usage won't be the same.
@@asandax6 And Rust apps are almost always blobs with all dependencies compiled in. That provides more flexibility, since there is no pressure to have a stable ABI limiting progress (also a lot of code is generated via macros), but at the expense of no reuse of loaded modules across many instances of an app and increased compilation times.
@@JanuszKrysztofiak That's what I hate about rust. It compiles with bloat. Stable ABI is important and saves memory space (which every first world developer says it's cheap and don't realize the huge number of people that can't afford phones with more than 64GB and laptops with 128GB SSD.
@@asandax6 yeah, ram is no problem when you're talking megabytes no reason to not pre-allocate a couple of megabytes these days, ram is abundant on computers these days i pre-allocate tons of memory on the heap for easier memory designing (allowing for growth) all the time because everyone has it anyway. anyone that can run a web browser with all the bloat on those today, is still EASILY within range of my program, i'd have to allocate hundreds of megabytes to even come close
I get that this is a tongue-in-cheek post. I write in C and Python ( as well as other languages ) I would be mad to use Python to write an OS or device driver, but I would be crazy to write a quick GUI tool to test my code in C, I use Python. Any power delta a running program uses, must be tiny compared with the power used by the whole computer ( including that flash GPU ) and monitors. The real killer in terms of electricity must be java-script/typescript in our browsers as these are the most used applications by your average users. , but even then it must be tiny compared with the background power usage by your setup.
The fact that Pascal is not tied with C for 1 or indiffernetiably close only illuminates the imperfection of their method. But yes compared to the BS modern languages this video is great. Long live Pascal and C, death to the modern bloated slow BS.
So essentially, use the right tool for the job. The same goes for Python modules. Numba, NumPy, and SciPy exist and aren't hard to use. Also, PyPy is easy to use for its performance gain. Why isn't Zig in that list? I've used it for a few small mathematical projects and it's pretty fast. The GNU Multiprecision Library works with it, too. You can create a dynamic library with Zig and use it with ctypes in Python.
HolyC still the king.
HolyC performance so tremendous the study was afraid to include it.
Because thou shall not take the name of HolyC in vain...
There's a reason the glowies had to eliminate the creator of HolyC.
AMEN!! 🙏
TemplateOS might be coming out soon FYI...🎉
Only those who are chosen can program in HolyC
holyc is bloated c
It's the degradation of the education system. Programming in C and other older languages is hard to do well. It takes time to learn and many will never become really good at it. Many of the newer languages have greatly reduced learning curves, but that is paid for in other ways and this illustrates those ways.
Well good thing that all you need to do decent C is to just be okay at it and know how to read read to know what operations are unsafe as the spec literally tells you that
Agreed but C is not hard. It's very simple.
Corporations don't have a "degrade education system", if anything it is a symptom of enterprise programming and practices. Teaching how computers work is not their primary goal, their goal is faster time to onboarding which requires easy to understand programming languages and practices. Object oriented programming was the first step. It made it possible to hire engineers at never before seen rates globally.
@@Nicholas-nu9jxC is easy to learn but incredibly hard to master. Writing a resilient and sufficiently complex piece of software in C is objectively not simple.
I'd say c/c++ is not that hard for what you get in return. You learn a language that the whole world can share and expand, and that has been the case for decades. You get a language that can achieve anything. You get a language that every processor is optimized to run, and thus the fastest non-assembly language. And the great part about c++ specifically, is it is pretty much c. Class inheritance is just embedded structs. Polymorphism is just a function pointer table. ect... C++ is just all the things people normally do in a large C program project, packaged up to make it easier, so its going to perform mostly the same unless you deliberately write a c program that avoids all that for as close to assembly performance as possible. And yeah, you can do that.
What we need are AI tools that can take these other slow languages and convert them to c/c++. Then, you can just tweak out any mistakes, and have a functionally equivalent natively compiled application that runs better. The real question is how libraries are translated because a scripting language eventually reaches natively compiled libraries in the interpreter or virtual machine, so its not a simple button click conversion.
The goal would be to have a language that is super fast to write, but in the compilation process becomes equivalent to a hyper efficient c program.
I did not C that coming. 😂
C la vie
"Back in the day" when I programmed only on VAX/VMS, one day my boss came over all excited and made me come to his office. We often wrote our programs in VAX BASIC because it was just so darn powerful, but of course even back then the same concerns arose regarding memory, execution speed, etc. So he wrote a quick program that basically just counted to 100,000,000 or something like that printing the start time/end time. For the sake of argument (it has been 30+ years) BASIC took 20 seconds to run, COBOL took 15, Pascal took 7 and C took 1.5... but the REAL shocker was FORTRAN... it completed in 0 seconds. So we compiled/machine and looked at code generated by the compiler and found that FORTRAN was so smart it optimized out everything but the final value. :D
...Also, who wrote the programs for each test type? Because different languages are stronger in different ways and often algorithms can be implemented in a fashion that takes advantage of the strengths of that language? You know what I mean?
And not that anyone cares, but Ada (Ada 83) is my favorite language. :)
I had this thought as well. I doubt it would make any interpreted languages faster than their compiled cousins but doing things in idiomatic ways can make a big difference.
something as simple as "count to 100,000,000,000" is not great for a benchmark, there's a good possibility that it ends up just running at whatever the clockspeed of your CPU is
@@steve55619 For a good compiler that possibility should be close to 100%.
a.) all rather academic usecases and
b.) nobody in their right mind uses pure python for heavy lifting. you use libraries.. which are written in C(++).
This was my first thought. Why reinvent the wheel, when perfectly optimized libraries exist?
In reality, Python programs outside of extremely narrow use cases (inferencing, training, any trivial array programming problem) spend most of their time in plain Python.
Academics do. I'm not even joking. We do. Are we in our right mind? 🤔Some are, others aren't. Clearly I am, otherwise I wouldn't be subscribed to Lunduke.
@@microcolonel those narrow use cases are also the most common python use cases. anything other than that is probably a cli app where performance doesn't really matter. sure, you can write a backend server using django or something but python provides little to no benefits there compared to any typical backend language like c# or js and there are great performance downsides.
@@jjtt stuff like Django and Flask is 99% of global Python running right now.
All Hail Dennis Ritchie and his 50+ year old language that trumps all except, assembly..
It's nothing to do with Ritchie. It's all about the compiler, 40 years back all the assembly guys were whining that C compilers were terribly unoptimized.
And even if it's the same assembler it has to be reoptimized for every new generation of every vendor.
Doesn't trump Pascal, which is older.
I loved Pascal, and still have a soft spot for it.
Heh. Lisp. More expressive often compiled.
Meanwhile the energy used to help the programmer with A"I" is dusting all the energy spent running the result.
most of the libraries are in python
@@werthorne true. But some of them are just c library wrappers. If you want to write a loop in python you should just use numpy (or a different language).
Correct !!! with this logic everyone should walk all the time, but accessibility should count for something
I think there's far less cause to be concerned about the energy requirements of developing software versus actually running it as a user. A developer might run a test, for example, that mimics the actions (and energy use) of thousands of users. That's still an efficient practice.
@@jeffreygordon7194 except that most creations of us won't ever see thousand of users 🙂
Pascal is waaaaay better than many people give it credit for. After all these decades for it to still outshine most others speaks for itself.
graphics.h is a bit of a problem with modern computers though lol
Lazarus Free Pascal, once mastered it's hard to bet for GUI applications and cost. $0
@@pwalkz I used to code in Borland Pascal (plus TASM) a LOT ))
The reason people are able to get by with Python is because the Python libraries that are computationally expensive are written mostly in C or C++. Well, that and the whole "scalability" where people run 100 servers in a cluster to do the same thing 1 server could do if the application was well designed.
My first program was a multithreaded file optimizer script written in MS batch, where the actual optimizing was carried out by the worker script calling an external application. Even there is was very obvious how much the interpreted language acting as a coordinator slowed things down. I see this every time python is brought up "but the actual libraries are C...", and meh.
Python is best when it is used as a glue to assemble various C components. Keep it simple and let C do what it does best under the covers.
python is the most expensive glue on earth
@@superfoxbat as glue an order of 1% of the program's cpu cycles are used by it, while the other 99% are used by C. If you got rid of it, the maximum performance benefit would be 1%, assuming your replacement language is infinitly faster.
python is pushed by academics that don't really program but do statistics or somethin else instead lol
people that want to lazily use other people's libraries
also bootstrappers for GPU-driven AI these days, but that's what python was supposed to be, a scripting language, not something to make entire programs in
that's the fucking problem
What about C being a "portable assembly language" don't people understand?
Ts doesn't transpire to js the way js is jit processed. It adds tons of js you wouldn't need if it had been written in js to begin with.
Python is mostly used as a wrapper language, most of the work happens in libraries that are written in c.
And writing C is a 100 times more fun!
As in taking 100 times longer due to debugging?
:-)
@@wernerviehhauser94 never had that issue. It’s called learning how to program I guess ;) When it comes to debugging it’s always business logic for me that is the challenge, never memory management and pointer management.
But that’s probably a generational thing, as we grew up doing assembly and C and as a result learned how to manage memory and own the hardware. If you haven’t grown up doing it, it maybe challenging.
if you're a noob yeah
@@kevinstefanov2841 You mean if you are an expert. Real men program assembly and C after all :D
And takes 10 times more time to write.
Hypothetically speaking, Java had much time invested in its compiler/JVM development from highly talented developers from early 1990s. Which is probably why it is so high on the list.
I also agree with your point that there is no investment in compilers developments due to how much speed in hw we currently have compared with early 1990s.
Which is why the recent languages don't perform as well.
Tech companies: programmer hours are expensive. Energy usage and RAM consumption, that's the customer's problem.
The planned obsolescence and mountains of e-waste, that's Pakistan's problem
More than C being the best language, it's the C compiler being the best compiler.
There is no single C compiler. There are probably a few hundred as it's been taught for a long time in compilation courses.
Fortran has the best compilers because there is no pointer aliasing in Fortran, so the compiler can do optimizations C compilers need help with.
Once once Rust has 40 years of compiler optimizations let's see which one runs better...
@@Flackon That would be Zig I’m pretty sure. They are now already blazing fast.
And the zig team isn’t blowing itself up with internal politics, so they’ll likely still be around in 40 years. 😉Man 54 years for a language to stand firm and be the core language for many kernels is an achievement.
@@Flackon Rust is unlikely to survive that long.
I'm no expert, so feel free to chime in and add corrections. My understanding that adding abstraction slows down the computer. The problem with "Everyone should use C" argument based on power and time can be applied to people who code in assembly. We can go further, why not just code directly in binary if you want to go max speed and power savings? You won't need linkers or compilers. The obvious answer is that we want some level of convenience. The best language balances convenience with power saving and efficiency.
But once again, the more convenient a language is, the more abstract from computer code it is, and the less efficient in general it is.
The other question posed, will there be a faster language than C made? Possibly, but unlikely since it's a simplistic language that's fairly close to matching computer code. You'd need to get something closer to assembly like Fortran or something. I use C++ myself and am happy with it even if it's not the best. Anyway, that's my 2 cents.
Best 2 cents I have red so far. Bang for the buck 😂
Not everyone, not everytime one should use C, always use the right tool for the right job, but when it comes to system programming C is still king and will continue to be so
Adding abstractions does not "slow down" anything. You literally cannot write a meaningful program without abstractions. Every time you write a function, you are writing an abstraction. Compile options matter greatly and C has its own abstractions
@@jshowao Right, all languages have abstractions, I was trying to use the term 'abstraction' to refer refer to the convenience of languages. The more the language resembles regular English and/or more bells and whistles it has, typically the more the compilers have to do to get it to line up with computer code.
@@haroldcruz8550 Right, I'm not here to say C is bad. I was poking Lunduke's argumentation a bit. It's a fine language and I'm glad people are still learning it.
In the real world, python is just business logic code that does so few computations, that It's really insignificant how slow it is. Once anything remotely computationaly expensive is required, C libraries such as numpy, tensorflow, pytorch etc. Are used. So this is a really silly test. No one is going to use pure python to list the 1000000th digit of pi or break the highest prime found record. That's just not what the language is for and how it's used.
Thanks for making this rather obvious point.
Yes and no. The problem is that some people are oblivious to proper use cases and inefficiencies, or at least _how_ inefficient it can be (ex. maybe they think it's 5x less efficient at a task instead of 50x). Because of this it ends up slowly creeping into other work over time like a weed unless it's put into it's place with reality checks like this.
Most programmers are smart, but there are a ton of well-educated highly knowledgeable non-smart people who will still do mistakes like this.
I see data analysts using Python all the time to crunch large data sets. They have no idea how to code in C.
@@microdesigns2000you don't need to code in c to use c bindings in Python. The library does it for you.
@@jeffreygordon7194 ya after reading all the comments on this video I gathered that. I haven't learned Python myself, but I've read some code. I like the dynamic variable declarations and the language reminds me of basic.
Gotta keep in mind that even though Python itself may be extremely slow compared to something like C, a lot of the compute intensive Python libraries (think numpy, pytorch, etc.) actually make use of native C / CPython libraries behind the scenes, so the Python code itself doesn't really do a whole lot of computation in these scenarios, and is only used to interface with the more complex parts of a library.
Not to mention doing work on GPUs, which is increasingly common these days. I’d rather use Python for setting up a GPU job than C.
Those are python libraries, so it is python. What you are stating would be like saying "This specific c library on this specific platform is written in raw assembly, so using those libraries is not really c." Obviously, everything in c is converted to assembly. Similarly with python, everything in python is eventually run natively through the python interpreter. A better description would be that the python functions in numpy tend to run closer to the native hardware. The implementation could easily change.
@@projectnemesi5950 That is not the same thing _at all_ . Python is a single threaded, garbage collected language. The instructions you write in Python are executed by an interpreter, which itself performs translations (and probably some heuristics, as well as garbage collection, just-in-time compilations, etc.) _at runtime_ , whereas in compiled languages like C or Rust, this heavy lifting is done in advance, at compile time.
So yeah, it does make a tangible difference if a library is merely exposing an API that is making function calls to compiled shared objects in the background, as opposed to _also_ being interpreted at runtime.
This was a reply I made but it deserves to be a top-level comment. Ultimately, Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I think that it's overused. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Wholeheartedly agree. The beauty of Python comes from being able to pump out a script to do a simple, seldom-repeated task in an hour or so.
Python is a scripting language and should be treated as such. There is no reason to be spending the time creating a website back-end in a Python framework like Django when the benefits of such a "simple syntax" are so heavily outweighed by the drastic loss in performance. You will eventually just have to rewrite the whole thing in a more efficient language or stomach the increased operational costs anyways.
I'm a relatively new programmer, but for things I intend to have running 24/7, I much, much prefer slow development over slow performance. Migrating things I originally wrote in Python when younger to Rust has allowed me to do so much more with my raspberry pi micro-homelab than I originally thought would have been possible with the hardware.
Well that's the big problem, isn't it? Python was created to be a more robust alternative to bash when bash isn't quite enough for what you're trying to do. Javascript was meant to be a simple language for simple front-end tasks in a web browser.
Now most the world is running on languages that were never meant to be doing what they're doing.
Sounds like a lot of excuses for clubbing baby seals.
@@FatherGapon-gw6yo They deserve it for sending ninjas to kill my parents.
@@JodyBruchon Since I know you use Windows, why not just use the Win32 API functions for extracting bits of an INF file.
I agree that the Free Pascal compiler is a real gem. It also has the advantage of taking in a language that is fairly easy to compile.
I will point out that the software development times don't seem to be getting faster with the newer languages and also these languages that are supposed to defend you against errors seem to be used to write a lot of buggy code.
That's due to Scrum making every issue take multiples of two weeks, and only hiring the cheapest devs, because "more is better" even though that makes the useless Scrum meetings take even longer.
Microsoft and Intel will now collaborate to rewrite Windows in Lua.
Considering all Windows 10 versions come with a sort of .NET Framework 4 pre installed.. makes you wonder, if the whole OS was written in dotNET C, C# or C++, except a few Exe, which have to run on baremetal like the kernel.
Yes please make this happen🙏
They should do it in rust so they can brag about doing it in rust. :|
So now we know why newer computers run slower than older computers despite being better, stronger, faster. 😊
Rust never claimed to be faster than C...
It claims to help to write safer software with more creature comfort than C without being noticeable slower.
And honestly looking at those charts they've done good job.
Only 4% that's really impressive. Also Lua JIT is missing, which shouldn't if you include TypeScript.
On the scale of all computation being done on Earth, 4% is absolutely massive.
Using Rust is genocide. The children, the children.
@@cajonesalt0191 Programmers are inherently lazy. wasting 4% to make sure they don't screw up because of how lazy they are is an acceptable trade-off. That being said, I'm not certain, but I think we could minimize that 4% further by changing pc architecture.
@@cajonesalt0191 Yeah, but since it replaces the current node/TypeScript garbage it is still a huge improvement.
Now, if all the unnecessary cloud nonsense, including the network traffic, would be gone we could decommission additional power plants.
@@cajonesalt0191 Rust is only slightly slower in synthetic benchmarks in a lab setting. Compare real world C projects to Rust projects, and the difference is often Rust being 100-500% more efficient than C.
I mean nobody is using Python or something for high performance application, all of that is offloaded to an actual fast language. Python for "Script you may run a few times a day and completes in 50ms" prob uses less electricity all time than it would for your brain to sit down and spend the time to write it in C
Wow you people still just don't want to admit that Python sucks... just learn AWK and be done with it. AWK is fast, easy and efficient, far easier and less complex than Python. And for data science, just learn R.
@@AnnatarTheMaia the rule of thumb is once you need arrays you switch your script from bash to a proper scripting language
@@AnnatarTheMaia Python’s library ecosystem is not even close to R
@@Ganerrr what does that have to do with anything I wrote?!?!?
@@CaptainMarkoRamius that's correct, for R's package ecosystem is phenomenally comprehensive and well written. It's like comparing a children's tricycle to a space ship with a gravity distortion drive.
Every programming languages has to make certain trade offs because we don’t live in a perfect world. The developers of the Python language wanted to prioritise development time and ease of code maintenance over execution time and efficiency. That’s a perfectly reasonable trade off to make in a lot of cases. For cases where that isn’t desirable there’s C or Rust or whatever. What’s the problem?
I look at python kinda like a shell scripting language.
It's there to run other programs and pipe them to each other.
Python was designed for prototyping. It's only when you use it on things that it was never designed for that it will bite you in the ass.
I have been using pascal for over 40 years and forever looking for alternatives and no other language has come close for me and I have tried all the main competitors. I now use Lazarus Free Pascal to write fast GUI applications for Windows 11, MacOS and my favourite Arch Linux. Well done Brian, I am with you on this study and agree that these factors should be taken into account. About time a good modern C language should be developed, or an improved modern C++ developed on these criteria.
I loved Pascal back in the day. I would love to see what happened to Modula-2 which I also used some at university.
Would have been very nice to have zig in the comparison.
It would be faster and more efficient than C or at the same level and if it's less it won't be by much
I think everyone agrees its performance is nearly identical to C. Some cases it may be slower, some it may be faster, but it's at the point where it's hard to compare.
The study is from 2021. It would be nice to have one from 2024 too, some of the slow languages have improved.
I missed zig too, they probably didn't include it because it isn't used much yet. I think that zig is very promising but it needs more libraries and all that.
I can make the comparison especially between C, Zig, Python and maybe Rust depending on the benchmark.
Which of the benchmarks do you think are the most important? Give me the top 5!
Java is doing surprisingly well there
Java is a good language. A good language is one used by a lot of people to accomplish a lot of tasks. That is all that really matters. Speed is a side effect of language optimization because people actually use it. That is why c, c++, java, rust, ect.. are all the best performers of the study. And its the reason everyone is criticizing python for performing as poorly as it does despite popular adoption. I think the reason for this is most of pythons popularity are due to a few powerful libraries and prototyping. So you will see a ton of numpy scripts, or opencv, ect..
The price of abstraction is sometimes worth it. I'm a C-zealot myself but I understand the appeal of languages that make life easier. Some people cannot handle the amount of power you're given as a C-programmer, and that's fine. However, I'm still conflicted as I recognize this "slacktitude" is contributing to a slower, more unreliable software ecosystem.
thing is back in the day we used to use Python to prototype things and then once it was figured out we'd write the real thing in C++, not happening anymore
@@Zuranthusin some numerical work it's not even good for prototyping as its too slow even for test problems unless you heavily use numba, cython, jax, cupy etc, at which point it wont be much easier to develop than c++.
@@hawkanonymous2610 And again I'll point out that unless you're a newbie you'll either have written your own libraries or know which ones to use to accomplish tasks equally as fast as programming in Python. It's just a matter of your experience level.
@@anon_y_mousse Gotta love the disingenuity in this response.
@@transcendtientC programmers say C is the best for everything and if someone criticizes C their only response is “sKiLL IsSuE”. I would know, I write C at my job and I know a lot of those types of people.
Could you possibly add a link to the paper in the video description so that your viewers can go through the paper?
If you do a simple search for
ranking programming languages by energy efficiency
it shows you what you are looking for. As you start typing it in, it even auto fills it.
Pascal can beat C. Nothing beats Pascal-C.
As a user of many langs, I think pascal is a great combination of speed and easy to debug, plus being RAD. For me, easily the most overlooked and underrated.
I did not expect TypeScript to be less effective than JavaScript...
That is the biggest head scatcher...
Yup. I also don't get that. It's just transpiled to js. Right?
I think it results in some junk JavaScript getting made
They must be transpiling at runtime or somthing, that gap indicates something has gone horribly wrong.
@@ThomasCpp Including all of the processing time would be correct for scripting languages. Javascript can be converter into an internal format to gain speed too.
@@kensmith5694 Converting ts->js is a compile time step, not a run time step.
I'd love to see something like this with different versions of C compilers.
C is the language for efficiency, so a lot of work was put into C compilers to make that even better.
According to one benchmark i've seen, oksh compiled with cproc, a very small compiler which is only 17479 lines of code (including code for it's backend), is only 1.35 slower than if you would compile it with gcc or clang. Oksh is probably not a good benchmark for code, but i bet with almost any c compiler you'll get higher performance than something like go or C#.
There are two definitions of speed; how fast does the program run, and how quickly can you get the job done. Often, 71x slower means it runs for a minute, while c runs for a second or so. Cool. However, if you need half an hour more to write the thing in c, and the program is basically something that does number crunching and writes it down into the database or in a file, you basically wasted 20 minutes to save one. Sure, if it has to run frequently and especially if a user has to wait for the result, by all means rewrite it in c.
And yet again I find myself pointing out that newbies will not be competent in using C and will have an increased development time. This is who Python is aimed at. With more experience comes using either your own libraries or those that you have learned to use and development time is reduced. Go ahead and ask your mother to write something in C and offer no help.
Python dev cope
@@JiggyJones0 Python devs don't need cope, they get paid. "But it's not optimal" is cope from unemployed c programmers.
While we're talking about efficiency, the worst offender might be the web platform. It's "too much work" for most companies to write native applications anymore because all they care about is how many programmers' salaries they have to pay. Us end users are the ones that have to pay to run their inefficient software.
Most people doing python are not making complex maths stuff. They're creating scripts to iterate over a CSV file or an API that grabs some data from a db and sends it as json, and even if they're doing more chances are python is calling a c library trying the heavy lifting.
You realize that heaps of the infrastructure at Google and every other big tech firm is written in Python right?
@@JodyBruchon And not to mention, academic work! I see Python heavily used by lab scientists are Stanford studying things way beyond my comprehension (like comparisons and data analysis involving zebrafish spines). It works well for them. Pytorch is also another thing academics love.
So, once again: there's pros and cons to everything. It may be slower, but the trade off is convenience and familiarity within their tribe (lab). Just food for thought!
Pretty sure the default json module that comes with Python is written in pure Python for various reasons, portability being the big one. Please check this though, I could be wrong.
But what I know for sure: there are several third-party JSON modules that defer to Ctypes or Cython which perform a lot better, but lack all the bells and whistles the pure Python one has.
@@koitsu2013 Indeed. Python is just a tool. Sometimes very fast prototyping or modification outweighs the performance loss. I do think that it's overused, though. I only write shell scripts when writing a C helper is not justifiable or is too complex. I'm one guy. I have extremely low labor availability and high compute availability. I have to balance dev time, run time, and size of workload very carefully. I want to have my textual hash database clean out in C but a shell script was 100x faster to make and I very rarely run it, so C automation makes little sense short-term. On the flip side, I wrote a shell script that parsed Windows INF files and extracting the defined sections was taking tons of time, so a simple C helper that outputs only the section requested made sense, especially since the work is almost trivial.
Not true. All AI scripts and people are using python.. Yea I know, right?
Pythons hate polar bears.
Got it.
lol
It is funny because there is a data frame library for python, written on rust, which is called Polars.
How dare you!
Reptiles can only regulate their body temperature by moving to where they can warm themselves up or cool themselves down. They are optimistic about finding cold spots.
Even worse, when you consider the applications running on these languages. OpenStack (written in Python) is running in containers on top of OpenShift (written in Python). So interpreted code on top of interpreted code on top of interpreted code.
Interesting that Pascal is slower than Chapel but uses less electricity, which suggests that whatever Pascal compiler they are using isn't using the processor as intensively hence the speed difference. So an implementation of the language rather than something inherently slow in the language by design.
Pascal is C are basically equivalent beyond syntax. The only difference there is compiler. I write modern industrial Win32 GPU graphics software in Pascal.
It uses less memory
@@LTPottenger As both C and Pascal put raw data into memory and don't wrap it in some kind of object structure, that also is pure compiler logic. My guess is on not aligning record structures to 4/8 bytes. This makes the access slower with modern CPUs but uses less memory.
A 75x slowdown is wildly optimistic. Compared to finetuned, optimized C or assembly, the slowdown factor is easily in the thousands. But it doesn't matter, because Python is mostly used for business logic or as a glue script, with the bulk of the task done in well optimized libraries written in C or other languages. Most software written in Python would not actually benefit from a big speedup if it were rewritten in C (some would, because many programmers are unfortunately not aware of the performance implications of what they do and would happily write in Python things that really should be part of a C library).
How is JavaScript just 6x slower but TypeScript 46x? TypeScript is transpiled to JavaScript before execution. Are they including the transpilation time? (That'd be a bit like including compilation time of compiled languages.)
There’s no way Java is faster than go
@@MrFunny01 Modern JVMs have quite impressive Just In Time compilers....
Yeah. Exactly. I don't understand why ts is that much slower. Typescript it's just getting transpiled into js.
@@chainingsolidJIT makes Java likely to beat Go in a benchmark due to repeatability, but I'd be interested to see how it fares in more varied usage.
I suspect they took it from the start to end.
This would include the conversion
On python you make up the energy by getting work done faster with ease of use and excellent documentation.
10:40 is it me or since c is about memory manipulations and everything is directly handled by the developer, it should actually still win on the ram benchmark? programming differences.. so if the code was totally the same on both language c would probably still win on memory
and gets the development work done 70 times faster
yeah, if you're a noob and can't write C
@@kevinstefanov2841c is not for rverything dude
when you create a NewLanguage that in some place faster or use less memory than C, it only means that C can also benefit from the same speed or memory optimization and will be also faster and/or use less memory when this trick will be implemented in the next version of C compiler. but without overhead of NewLanguage features. so probably there will be no NewLanguage that is faster and more memory efficient than C. and it only means that you can decide how much efficiency you are ready to pay for the new features. if you want a more efficient language than C, go to assembly language, but nobody now wants to do it.
Rust being faster and less power consuming than C++ was mildly surprising to me. I get a slight negative rust bias but that seems extremely small cost for more automated memory safety.
It's no better than C++ for memory safety, it just has a more annoying compiler.
@@anon_y_mousse It is better than C++ for memory safety - the compiler catches bugs that most people never do.
It has a smarter compiler so you can use your cognition within the problem domain, instead of wasting brain cells figuring out what the compiler already knows better than you.
C++ people are upset they're not the best in town anymore.
Rust is exactly just as fast as C according to the most up-to-date results:
energy-efficiency-languages/updated-functional-results-2020
Everything the guy in the video said about Rust is outdated by the same study.
@@jboss1073 Where?
Many of Rust's guarantees are build time rather than runtime but I am also suspicious of that. In fact having C++ be slower than C seems like a bug either in the implementation or the compiler. Templates are build time and member functions are basically just normal C functions that take in a this pointer as the first argument. Unless they are using dynamic dispatch or something I can't think of any reason the c++ version should be slower.
I would like to use C or C++ for more, but for rapid development, for systems that change just as rapidly, JITted languages are a healthy alternative (C#, Java, etc). As a C# dev, this chart can be misleading. Run a service for days and you'll notice initial memory usage seems high but compared to other languages and frameworks it does as good or better than the majority. Ruby, Lua, Perl, Python are fine for scripts, but honestly they suck for real production perf. As a language, I can't stand Ruby. It's awful. RoR is a bag of hot doodoo.
I will say, Java and C# have actually been putting in investment towards speed and efficiency. They just started with the handicap of being managed languages, which sometimes is a useful price to pay.
Rust is even similar to that in a way. Every runtime check you add for safety and consistency at the language level is going to slow things down. C doesn't care and trusts the dev did that checking at programming time so there's nothing holding it back (from exploding at spectacular speed, sometimes)
Inferior error checking and many other types of issues can conspire to cause more development cycles, potentially far exceeding whatever time and energy savings are achieved by the final working application.
As far as I can tell, the applications tested are meant to make heavy use of the cpu. I am wondering about the actual impact when using a normal application which is mostly waiting for io. Eg when using application written in X vs V, how much delta in energy cost would be accumulated over a year.
It's a pity they did not test LuaJIT, which can be as fast as C, as opposed to the interpreted Lua.
Most people use interpreted Lua.
Once they're running yeah. Java scored well too. They often are slow to start though.
LuaJIT tends to be about 100% slower (i.e., taking 2x the time) compared to optimized programs in compiled languages. It even seems to be somewhat behind the Java VM, though it does appear to hold pace with JavaScript's V8 engine (Source: Mike Pall's scimark code vs. equivalents created by others in Rust, C++, etc.) .
IME, that's about the difference between competently-but-lazily-written C++ and optimized-by-experts C++, so not a bad result overall. The best results are probably achieved by combining the FFI with optimized native libraries, while minimizing context switches to get the most out of both the JIT compiler and the heuristics used by LLVM/GCC. LuaJIT without native libraries doesn't make sense, so it's not useful to benchmark interpreted code that should have been put in a native library to begin with. And the default interpreter is so slow that talking about its performance is effectively a complete waste of time, especially since it doesn't have a FFI and its C-interoperability layer incurs very high overhead costs.
Still better than CPython of course, but the value of Python lies in its vast ecosystem. Lua doesn't even try to compete with its minimalist Do-It-Yourself philosophy.
This is a myth. Luajit is very competent, but it can never be as fast as C. Luajit doesn't statically compile Lua. It compiles it where possible, and still does plain interpretation where not. Code with string concatenations, for example, will run at normal Lua speed (slow).
However, in this regard, pypy is very much comparable to Luajit. If they included Luajit, they would've also included pypy.
I don't buy the part where they made lua look slower than python, though. That's not been my practical experience with the two. Not at all. There are things where lua can be as slow as python, but for the most part, it's not.
Not surprising seeing how despite computers becoming more powerful, because of all the bloat and inefficient coding, nothing feels all that different in performance than 15-20 years ago. Also I am not sure how many of the background processes are that useful. It;s really annoying that something like a youtube page needs I don't how many gigs of ram just a for a normal HD video...this tab right now uses 1.1 Gb ....that is insane.
In defense of Lua, the 100% interpreted version is very slow. But LuaJIT is more on par with C# and C++ in some little bench testing that I was running a while back.
Also, Zig is newer and is supposed to be faster and more efficient
LuaJIT is compiled LUA - the bytecode, right?
I have not written an article for a while. But as a physicist I must point out that the table showing *normalized* units (i.e. the best one being assigned the value 1.00) has no business putting physical units in the column headers. *Normalized* means that the units cancel out.
Nonetheless, I appreciate the material revelation made by this work. Thank you for bringing it up.
Mojo looks like an amazing alternative, but it is still in the early stages
Also Bend, which forks loops onto CUDA cores.
Also, Mojo is not open source, and that's repugnant.
@@nandoflorestan Because it's not released yet? It will be open sourced once it's released
We already have cython that does something similar. While widespread, it is not that impressive as a bare speedup to pure python because of python syntax, and to achieve really big speedups requires a massive rewriting with obscure annotations, which require considerable knowledge of how memory works similar to C (sometimes is just easier to rewrite the function in C).
I do not see how mojo will benefit the general pythonist.
@@lorenzomonacelli Cython works with standard Python type annotations, and Mojo if not typed probably infers types before compilation.
Yeah, press X to doubt. Usually this is highly dependent on compiling options and nobody uses languages that are pure, external libraries are called usually written in a different language. This was also 3 years ago.
One thing I saw that I do take objection to with regards to their test is that of using trees. The implementations can vary to the point of degrading performance a noticeable amount. I didn't see if they used hash tables too, but those can be excessively different in implementation as well. I primarily use C myself, but not for environmentalism reasons, but because I want to save my users time when they use my software. I'm not surprised about Python, because I use it fairly often too, but I am surprised about JavaScript. Yeah, I use that on occasion and it's slower than molasses in a freezer, but I would've expected that all of the work that has gone into optimizing the various interpreters would have meant it would perform better than it did. At least now there's some degree of quantification of how much the other languages suck.
I did not read the fine article referenced but are these the programming languages used to run most of the CPU time of the world? Zig and Odin are not really consuming much cycles globally. Web browsing and video rendering are the most used applications for end-users, I'd guess. Libraries are doing the CPU cycle work, and they are written in in C/C++/C#, mostly? And GPU helps so there is more research to do.
I do find it interesting that zig wasn't on there, never tried it but knowing it can use C headers without issue I might consider it
Because it's not release-ready yet
I started programming in about 1983 in BASIC. My first compiled language was FORTRAN. I was always impressed how much faster compiled languages were, but I never thought about power efficiency. I started programming in Python several years ago, and I was amazed that an interpreted language seemed instantaneous, but I never thought about how the indiscernible slower speed would consume so much more power. Interesting.
I'm deeply skeptical. These large scale comparisons are extremely difficult, as the quality of the implementation is huge factor.
Naively implementing a particular algorithm in each language, when no "skilled user" of that language would do it that way, is a sure way to create very misleading results.
C and C++ having significant variation is a clear indication of this - written properly, C++ is virtually performance indistinguishable from C.
I think the paper is asking the right questions, but I'm not gonna be waving it around saying "we must switch to C" (as much as I would be okay with that, personally...)
Written properly for C++ (in regards to performance) just means never using templates, never using virtual methods and basically never including any header from the standard library. So basically just write C, but instead of functions where the first argument is a pointer to a struct, that struct can be a class with the function as a method and thats it. Oh and maybe you can use range based for loops and std::vec.
Looks like they used benchmarks game. So it's optimized but unidiomatic code ...
So, it makes you think.
Our Linux systems often use like 0.5GB of ram, and have allot of python apps in them.
So what runs in Windows, since it takes 5GB of ram, and is filled with c/cpp.
> puts on a tinfoil hat
Not a result of the language . Windows is filled with bloat and other design nonsense
It's mining BitCoin 24/7
@@42Cosmic42 exactly my point, even tho they use C/C++ for that bloatware, it still uses so much more resources :D
Windows has very little C. It's almost entirely C++ and MS in-house languages like C#
UWP, .NET and spyware are what slows down Windows.
Tinfoil hat is venerating the attributes of Jupiter.
I was expecting Fortran to be highly competitive with C, but it is clearly not.
Me too
There is not a lot of incentive to improve Fortran these days. Almost all the legacy source code for it got ported to C or C++ long ago.
Older versions of Fortran would be faster than newer Fortrans.
@@metaforest However, in the TIOBE index, Fortran is among the top 10 languages, and the Intel Fortran compiler is very active!
Me too, because my friend at SMHI (a meteorological institute) says that their old FORTRAN programs run circles around their new Java reimplementations. Problem is that they can't find FORTRAN programmers anymore.
This chart just reminds me why I'm an old C, assembly, Pascal, Perl, and PHP programmer. I'm 47. Each language has its pros and cons; knowing which one best fits the situation is critical. I do Python and Lua these days, but as a sysadmin most of what I'm writing are "convenience" programs and not, say, code hosting or integrated into a webserver. There are pros and cons to them all. That said:
Lua surprised me, but this is from 2021 so it would be nice to see more recent versions tested.
And I'd have loved to see Zig on that chart. That's probably the next PL I'm going with; I've been very, very impressed by it's implementation. Andrew Kelly knows what he's doing. F*** Rust and Go; and in the case of Go, Rob Pike of all people should have known better. He worked with Luca Cardelli for god sake!
Pascal bit makes sense since Pascal was genuinely superior to C (i love C)
*IS superior, or at least, on par. Although I wish we had C preprocessor.
Both were and are still great. I wonder where Modula-2 would stand ? It was the better Pascal but never took off it seems.
The thing about C is that it's pretty much as close to writing in assembly as it gets. Minus the compiler optimizations, the generated assembly is as "literal" as it gets.
In my opinion there's no need to "reinvent" C.
C will always be around since its the next best option to assembly, yet has all features of a high level language.
It doesn't even have first class functions.
@@PixelOutlaw function pointers are quite ok
@@andrebrait sure you can write anything in C. But it is wrong to claim that it has all the features of a high level language as the original author did. If that were the case people wouldn't be implementing other languages on top of C like ECL among others. That's the whole reason people make programming languages - the host language isn't tailored to their needs. Prolog being written in C is an excellent example. In its domain, it is far more terse than C because there is a shift from imperative programming to declarative programming.
You must be high if you think that C has all the features of a high level language. Good lord.
Python is 1,000x faster than using a bunch of Excel sheets and 100,000x cheaper than hiring a bunch of extra accountants and business analysts to manually re-compile a "database" from unstructured data. And the use case for basic business applications still heavily favors high-level languages like Python for most straightforward use cases (and if you need to do stochastic simulations then yeah the libraries are written in C).
Wouldve loved to see Zig up there, given it's pushed as a direct replacement to C
zig wasn't all that popular in 2021, it's still not exactly there yet
Thanks for pointing this out. I know a certain switch company that, using Linux and C++ for their operating software, decided to code "non-critical functions" in Python. They were pretty good at it, incorporating the Python interpreter in the OS and getting pretty good at translating between the languages. After a while, they announced a new policy, namely going back and rewriting python programs in C/C++. This was due to customer complaints about how sluggish the OS was.
The moral of the story was that, Python wasn't wasting time in any critical functions. It was wasting time *everywhere*, and general slowness was the result.
No COBOL, huh?
That would require someone to write the COBOL implementations of the benchmark tasks.
COBOL is a carbon sink. 😅
Glad to C my favourite language is the best XD Been using it for the last 13 years sob plenty of experience in being memory/thread safe :)
Software is going backwards in performance because it started out at the bare metal, where you HAD to understand the architecture. All this abstraction to hide the underlying hardware is where all this inefficiency comes from.
Optimizing compilers are grossly overrated, and I have been saying this since the first version of C++.
Most libraries are junk, and again, all comes back to dumb lazy programmers who don't want to spend the time understanding the underlying hardware.
Compilers can't get better because they would need to anticipate every single possible line of code, or blocks of code, or every algo that could possibly be written, which is impossible.
We need to get back to KNOWING THE HARDWARE, and KNOWING WHAT WE ARE REALLY DOING.
There is no other replacement.
Convenience ALWAYS comes at a price.
Its a mutual relationship. The software must know the hardware, and the hardware must know the software. Processors today are optimized for c programs. Everything is human made. The entire point of a programming language is to make things work and do it in a way that everyone can understand. Just like a spoken language, a "good" spoken language is one that is heavily used.
I always thought from the people I see using Python, that it is sort of a BASIC. Mainly for people that don't really understand computers and mainly for people who do Mathematics.
As for Rust, yeah didn't drink the cool aid. Most of the arguments for using Rust and recreating the wheel which C already built, is mainly due to people who have poor modern tool chain skills in software development. (i.e. GIT, MAKE, Autoconf, Valgrind...etc.)
You can't be serious. This must be a trolling video.
CPU efficiency and speed is not a primary concern. For modern programs the bottleneck is the network communication and the disk read times.
If a python program is using heavy computation, it probably uses numpy, or similar libraries which is/are written in C.
For companies, researchers, ease and speed of development is also a concern. Problems expressed in python or other higher level languages in a couple of lines translate to hundreds of lines in C. Python is easier to learn and faster to develop in. Developers don't have to worry about memory leaks for most cases.
Also, for practical programs, Java JIT compiler can produce faster code, than C, because unlike C, java can use all the instructions of the concrete CPU it runs on. Also the memory allocation inside the program is not a system call, which makes creating new objects in Java many times faster, than calling malloc in C.
You also don't mention, since this study came out, newer python versions rolled out with major performance improvements. Python 3.11 came out with 30% performance improvement.
So now it's only 50.33 times slower? Nice. 😎
@@kneekoo Yes, roughly. So what?
If you think, that's a gotcha comment, then you must be bad at reading comprehention as well as list comprehention ;)
@@A3A3adamsan That was just a poke at the irony of how a very old programming language is still vastly superior in a world preoccupied with going green.
@@kneekoo Fair point :). It's superior in a few aspects, that can be important for some use cases.
I'd love to see a kill count or billions of dollars lost chart per million lines of code per language 😂
I would be interested to see how Zig ranks in a test like this. I see Java doing its thing taking quadruple the RAM it should but being blazingly fast for a garbage collected language. What stands out most to me as someone just getting into Node is typescript being _substantially_ heavier than vanilla JS.
Where is pure assembly ?
Compilers do a better job at optimizing machine code than hand optimized assembly, so there really isn't a point in doing that.
Naive assembly will look roughly like what -O0 will emit. To improve that you'd start by adding optimizations but that's actually exactly what a C compiler does when you use -O1 -O2 -O3, so really C is just a way of automating the process of writing assembly.
right below HolyC
@@josephp.3341 Depends on what you're writing, but some compilers are not as good as they should be at optimizing poorly written code. Even just the difference between calling a function with two arguments versus a struct with two members can cause wildly different results.
@@josephp.3341 I see this repeated a lot, but it's not really true: compilers are written by humans and the techniques compilers use are the ones humans used to use (excluding a few new peephole optimizations and whatnot). It's all about time, it takes a lot of time for a humans to repeatedly try inlining a few function calls, optimizing the result and finally figuring out whether that was worth it or whether it's best to undo all of that work. Let alone keeping all of that maintainable.
Edit: Also, -O0 is (for most compilers) much worse than naive because it's too systematic, for example a human would tend to use more registers at once, use assumptions the compiler isn't allowed to consider and do simple optimizations like "call somewhere
ret" -> "goto somewhere"
C is 765x slower than assembly.
inb4 someone tries to incorporate transportation, man hours, and calorie consumption relative to error handling and memory safety, into some metric. Ackshtually Rust is the best if you look at it like [blah....]
Rust avoids many costly bugs that can occur in less safe languages. It might even have a formal prover like Spark Ada, which was designed for mission-critical applications like weapons and satellites.
@@CTimmerman Some of that is legitimate, some of it is just to avoid errors made by people who don't belong doing what they're doing. And should never have been incentivized, coerced, and drawn into those fields. I know how that sounds but if I went into the story of how I learned to program and the frame of life I was in, which basically amounts to constant whole body pain, fasting for days at a time, inability to sleep, and functional brain damage, the fact that I'm not only self taught but came out of it having sought out and knowing how the machine works down to the RAS and CAS signals sent to the memory controller, I can bluntly say most people don't belong anywhere near software. They just don't. They can't do recursion, they never bothered with (inline) assembly, the idea of cache misses and data locality is like a whoooaaaaa mind blown moment several years into their career, they just learned java and typescript or whatever and never bothered to learn data structures and how the machine actually works.
I mean look, you can argue that it's a manpower and volume issue, so you'll architect tools and frameworks to keep all the normies on the rails so to speak. And at those organization and society scales you can make a case for that. But the fact remains, everything I stated is true. These people are not programmers, and they don't belong doing it either. Rather writing software is a surrogate, it gives them identity and money, and so they do it in order to get the money. This curns out mediocre "bare minimum" self serving types and then equally incompetent layers of management that have to try to channel that writhing malformed mass, that self serving blind beast, into doing something other than devouring itself. The whole mindset is wrong. This fixation on memory safety is partly quality of life improvements, and partly damage control. Mostly, damage control. For people who barely care what they're doing, because they're a product of the Prussian educational model, and due to the media exposure in early childhood [...].
I can say it, I lived it. That was my late teens and twenties, wasted being tortured. And even there what I created was the best. The best. No corners cut, no sloppy hypersocialized water-cooler mentality. No, the best. I take what is, and I make what ought. Those are the types of people you should either seek out, or rework aspect of society and human development in order to manufacture. Those others who can't or won't, don't belong, and shouldn't be pandered to. They need the boot. Get the hell out of here buddy.
All computational algorithms in Python are implemented in C or Fortran anyway. If you use JIT extensions such as taichi io get same performance
C is a wonderful language, and I don't think anyone is surprised to find that its execution speed is much faster than Python's.
So what. Execution speed isn't everything. Development time matters too, and on non-trivial projects most devs can crank out a working solution faster in Python than they can in C. Python is good enough for a lot of tasks, and at the end of the day, how much better than "good enough" do you need to be?
Agreed: Development time often matter more. Use the right tool from the job - ie, on a microcontroller that is battery powered where energy, compute, and memory is constrained C is used.
It all depends on your development platform. If you are coding for 'retro' machines, C is usable on all of them, Python and Ruby are not.
“Never let perfect become the enemy of good.”-Voltaire
Artificial intelligence will KILL Python at some point...
If you want energy efficiency - you can be 72 times better.
Respect for good old Java: apparently the fastest JIT language, although one of the most memory hungry... 😮
5:57 I’ve actually done a ton of embedded language testing. Compare LuaJIT vs the others and you’ll be SHOCKED by how fast it is.
The “regular” lua engine (not LuaJIT) is only optimized for portability. The performance is as weak as you imply.
Well, python would've looked a whole lot faster too if they used pypy, which is about as competent as luajit. I'm only suspicious about Lua having been slower than python in their tests. Lua can be as slow as python in some things, but in my experience it generally isn't.
If you want a language that makes you care about memory, efficiency and speed, Zig is a good option to check out. More ergonomic and readable than C anyway!
wtf why C++ is 50% slower? how? I can compile C code as C++ and in some rare cases code will be FASTER (one youtuber compiled org DOOM in C++ and had 1% speed up).
Same with TypeScript, how is worse than JS???
I would not believe this this research much.
There can be good explanations for TypeScript. The conversion time would be included and the JS code made may not be quite as good
OO is glacial but agree that well written generic C++ is as fast or faster than C
Because modern C++ is not "C with classes"
@@Sneg00vik Yes we have safer `std::array` and `std::span` instead of `int[]` and `std::unqiue_ptr` instead `T*` and now where is this 50% overhead? We have RTTI and Exceptions that are heavy but if you not using them directly and have serious work load (aka start up is only fraction of time) you should not be able to notice it.
Of corse we could created bloated code that will be slow but then we testing bad code or C++? We could probably create equality bad code in C.
Only place where C++ is always worse is compile time, where all overloads and templates could make compiler glow red, but then if we include it in cost, how in earth python become 70x worse? When in very big program python could finish workload before C or C++ finish compilation, not mentions Rust that have similar bad compilation times as C++.
Yup. C++ uses the same exact compiler. Maybe they used specific C++ features, like vtables and RAII? I know Common Lisp code isn't the usual code, but uses typed Lisp, which is basically the C/C++ code but with worse compiler.
C++ does not need to be slower than C.
Makes you wonder what exactly they did, for example did they write a custom algorithm in C, for some task rather than using a generic one from C++. In that case the study is nonsense.
Oh! This warms my stony, cold, old heart! :)
Python is 71 times slower than C.
I've sent this to my brother, a Python/Javascript programmer.
Why does C get all the hate?
Because all the other languages have guardrails to keep you out of trouble. C you have to manage everything, which is why I would bet, that most bugs and security vulnerabilities are born.
I had a similar idea a while ago while debating whether or not performance mattered. It might not matter for your application itself, but making something 5x faster means you can get by using about 1/5th the same computing power you'd otherwise need. That means either less poweful hardware or fewer replicas, meaning lower cost. If that can be achieved simply by moving from Node.js to Java, Go or Zig, it's a good idea (especially because no one deserves the pain of doing JS on the backend, so everybody wins).
Wait, what is Go's excuse?!
I think this whole article is BS. There's no way java which runs on a VM with a GC can be faster than go which runs natively with a native GC.
java is jitted, so it should perform similarly to go
@@FinaISpartan similar but not that big of a difference. And certainly go shouldn’t be that slower than Java.
@@MrFunny01Java has 20+ years of hacky optimisations so it does make sense
What vreion of go was used?
On wall street all our critical systems had to be written in c/c++. Python, java, etc. are all level 2 up ? languages and c/c++ is level 1, it talks directly to the system components. It is superior to any other language by far. where is assembly? Most of these languages are written in c/c++. C/C++ is not used because it is too complex for most people
Lua isn't that slow. I don't buy it for a second. In my experience, pure Lua is quite faster than pure Python, and internally it's a lot less bloated than python, with a whole lot less sanity checking overhead, etc.
I'm finding that study suspicious.
Experts agree that Rust is both safe and effective.
A few observations:
1) Ada is still largely ignored even though it's actually a joy to write in.
2) They did not include Forth, which is telling. I have little to no doubt that Forth would either tie, or beat C in at least one of these metrics. The catch is, it sucks trying to read the code several months later - but that's not what this study is testing for. 😉
3) If Pascal received the amount of mindshare that C has received, it probably would beat C in all metrics. Ada is heavily influenced by Pascal, so you can get an idea of what Pascal would be able to do (It's just that Ada is to Pascal, what C++is to C - in terms of size and complexity).
looks like java wasn't so stupid choice for android's main app language
Wrong conclusion yet again. In Android, the Java code does not run in a common JVM, it runs in something else, so performance may differ
Would explain why Android phones have more ram than PCs... Imagine if they used a proper language how much cheaper phones would be and have way better battery life.
Erlang slower than Javascript? This just shows that the experiment was very specific, run bunch of algorithms. Erlang ran slower does not mean we can swap JS for erlang in telecom and achieve faster tele communications.
I'm very surprised that java outperforms Go in energy and time? How is that even possible? I understand there's a GC in Go, but Java has WHOLE VIRTUAL MACHINE which is a lot of C code by itself. Then standard library, then the application. While Go compiles to the native code with an embedded GC. And Java still has A HEAVIER gc, heavier class runtime structures, heaviver runtime? Why the first entries are all round up at 1? 1 Megabyte, 1 joule, 1 ms. Also Lua is way simpler than python. How on earth does GO uses less memory than C if it has the same GC. This article is so BS honestly, just for C fanboys. No, I'm not pooing on C, it's an awesome language and Python is indeed slow, but this article got something seriously wrong.
This is pretty cool. I know the authors did some splitting of the results by compiled / virtualized / interpreted, but I think what would be even more insightful would be to break down the languages by feature (e.g., with or without run-time reflection, with or without dynamic types, with or without garbage collection, with or without strict-aliasing rules, etc.). Basically, beyond the d-measuring contest between languages, it would be nice to measure the cost that each feature adds to begin (finally!) to do some sober cost-benefit analysis. For example, I suspect Python pays a huge cost for having to translate most attribute or method call into a run-time reflection lookup, for the marginal convenience of occasionally being able to write some pretty nifty code.
you are not thinking in the economics behind python right. the energy (money equals energy consumed with that money) you've saved in engineers time due to utilizing python is WAY higher than developing everything in C
C++ is still quite good.
Why hire competent developers to do the job properly when you can cause more environmental damage instead?
That equation breaks down pretty quickly once you're running the python code at scale
Say you can learn Python more than 70-80 times as fast as C. Which seems like an overestimation, imho. That still does not mean people who know both Python and C well, can write the same script 70-80 times faster in Python than in C.
For competent developers, development time isn't nearly so big a factor as people think. Generally, if you've been in the business for any length of time, then you've either built up your own libraries for doing things or you use specific libraries that someone else wrote. It's only for the newbies that the language's standard library really matters in most instances.
Why didn't they list Basic? Not that I ever would like to touch it again, but it would be interesting to see how it performs compared to Python etc
That would be an interesting test, but implementations matter more there than with a language like Python which really only has one implementation. Some were better than others and you'd either need to test them all or write one from scratch. Also, there were quite a few BASIC compilers and they would undoubtedly perform better than the interpreted versions.
Like you said, Basic is outdated. Visual Basic dotNet is basically C# with different syntax.
Memory usage is kind of misleading. The energy graphs show one reason why, the other is if you have a 100Mb executable for C then it's a 100.5Mb executable for Rust. There's a static overhead for many languages but no overhead in the actual code.
There's also compiler optimization you are not taking into account. Rust and C does not create identical Assembly code so the memory usage won't be the same.
@@asandax6 And Rust apps are almost always blobs with all dependencies compiled in. That provides more flexibility, since there is no pressure to have a stable ABI limiting progress (also a lot of code is generated via macros), but at the expense of no reuse of loaded modules across many instances of an app and increased compilation times.
@@JanuszKrysztofiak That's what I hate about rust. It compiles with bloat. Stable ABI is important and saves memory space (which every first world developer says it's cheap and don't realize the huge number of people that can't afford phones with more than 64GB and laptops with 128GB SSD.
@@asandax6 yeah, ram is no problem when you're talking megabytes
no reason to not pre-allocate a couple of megabytes these days, ram is abundant on computers these days
i pre-allocate tons of memory on the heap for easier memory designing (allowing for growth) all the time because everyone has it anyway.
anyone that can run a web browser with all the bloat on those today, is still EASILY within range of my program, i'd have to allocate hundreds of megabytes to even come close
I get that this is a tongue-in-cheek post. I write in C and Python ( as well as other languages ) I would be mad to use Python to write an OS or device driver, but I would be crazy to write a quick GUI tool to test my code in C, I use Python. Any power delta a running program uses, must be tiny compared with the power used by the whole computer ( including that flash GPU ) and monitors. The real killer in terms of electricity must be java-script/typescript in our browsers as these are the most used applications by your average users. , but even then it must be tiny compared with the background power usage by your setup.
The fact that Pascal is not tied with C for 1 or indiffernetiably close only illuminates the imperfection of their method. But yes compared to the BS modern languages this video is great. Long live Pascal and C, death to the modern bloated slow BS.
So essentially, use the right tool for the job. The same goes for Python modules. Numba, NumPy, and SciPy exist and aren't hard to use. Also, PyPy is easy to use for its performance gain. Why isn't Zig in that list? I've used it for a few small mathematical projects and it's pretty fast. The GNU Multiprecision Library works with it, too. You can create a dynamic library with Zig and use it with ctypes in Python.