45 Computer Languages Compared: Which is FASTEST?


Comments • 679

  • @DavesGarage
    @DavesGarage  3 ปีที่แล้ว +301

    NEWS ALERT: The current winners are NOT C, C#, C++, or Assembly. If you think it should be, you'll have to join the GitHub project and make it so, because right now other more modern languages are in the lead!

    • @billowen3285
      @billowen3285 3 ปีที่แล้ว +377

      Someone is programming assembly badly if it's not winning

    • @valdisblack1541
      @valdisblack1541 3 ปีที่แล้ว +88

      @@billowen3285 That's obvious! Every language ends up as ASM code in the end.

    • @jackgerberuae
      @jackgerberuae 3 ปีที่แล้ว +53

      @@billowen3285 The programmer made a note in the code stating that a procedure can be optimised by using bit shifting, which would make it faster. He just did not bother to, so it's up to the next guy to pick up the cudgels (a C++ sketch of the idea follows this thread).
      I only have reporting skills and not the requisite assembly skills, unfortunately…🥵

    • @kamurashev
      @kamurashev 3 ปีที่แล้ว +7

      that's shocking, makes me curious as hell.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +215

      @@billowen3285 You can't call code bad unless you can improve on it!
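
  A minimal C++ sketch of the shift-for-divide change the assembly comment above hints at. This is illustrative only; the names and layout are hypothetical, not the project's actual code. Optimizing compilers make this substitution automatically, but in hand-written assembly the author has to do it:

      #include <cstdint>
      #include <vector>

      // Odd-only sieve: the flag index for an odd number n >= 3 is (n - 3) / 2.

      // Lookup written with division and modulo.
      inline bool is_set_div(const std::vector<uint8_t>& bits, uint32_t n) {
          uint32_t index = (n - 3) / 2;
          return bits[index / 8] & (1u << (index % 8));
      }

      // Same lookup with shifts and masks; for powers of two the results are identical.
      inline bool is_set_shift(const std::vector<uint8_t>& bits, uint32_t n) {
          uint32_t index = (n - 3) >> 1;               // >> 1 is the same as / 2
          return bits[index >> 3] & (1u << (index & 7u));
      }

      int main() {
          std::vector<uint8_t> bits(64, 0xFF);         // dummy flag array, all bits set
          return is_set_div(bits, 17) == is_set_shift(bits, 17) ? 0 : 1;
      }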

  • @DwayneSmith1965
    @DwayneSmith1965 3 ปีที่แล้ว +432

    Obviously SQL will win, you just have to have the correct index into the table I prepared earlier :)

    • @judewestburner
      @judewestburner 3 ปีที่แล้ว +13

      Whatever the question is, the answer is SQL

    • @kittysreview9055
      @kittysreview9055 3 ปีที่แล้ว +5

      🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +51

      The irony is that a lookup table wouldn't really help much. What you need is the "distance to next prime"; that's the unknown and hard part. Otherwise the question is figuring out which numbers were worth looking up (i.e., were prime). A sketch of such a gap table follows this thread.

    • @EwanMarshall
      @EwanMarshall 3 ปีที่แล้ว +21

      @@DavesGarage Well, you could pre-calculate the entire sieve and just have a table of:
      0, 2
      1,3
      2,5
      ...
      But yeah, that would fall afoul of the rules for good reason :D
      Could also try to obfuscate it by making the indexes for the primes some easy to calculate sequence and fill in the rest with random numbers.

    • @thebuccaneersden
      @thebuccaneersden 3 ปีที่แล้ว +21

      @@judewestburner
      Q: "Who is truly behind 9/11?"
      A: "SQL"
      I knew it!!!
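
  To make the "distance to next prime" point above concrete, here is a small hypothetical C++ sketch of what such a gap table would look like. Precomputing it is exactly the kind of shortcut the drag-race rules disallow, so this is illustration only:

      #include <cstdio>

      int main() {
          // Gaps between the consecutive primes 2, 3, 5, 7, 11, 13, 17, 19, 23, 29.
          const int gaps[] = {1, 2, 2, 4, 2, 4, 2, 4, 6};
          int prime = 2;
          std::printf("%d", prime);
          for (int gap : gaps) {
              prime += gap;               // walk the gap table to enumerate primes
              std::printf(" %d", prime);
          }
          std::printf("\n");              // prints: 2 3 5 7 11 13 17 19 23 29
          return 0;
      }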

  • @tompov227
    @tompov227 3 ปีที่แล้ว +43

    "Americans call him by value and Europeans call him by name..... well if you've got a better Pascal joke Id love to hear it" Amazing how the first joke (by value vs. by name) is funny but its even funnier when he follows it up with that

  • @delberry8777
    @delberry8777 3 ปีที่แล้ว +64

    Like your videos. Makes me feel like I'm back in the 90s when I was excited about things like programming languages etc. I'm still a software dev but now I rarely get past annoyance about new languages, frameworks, build tools etc.

  • @RvnKnight
    @RvnKnight 3 ปีที่แล้ว +65

    You brought me down nostalgia lane today as I haven't really used Pascal in over two decades. Pascal is the first compiled language I ever used.

    • @jackgerberuae
      @jackgerberuae 3 ปีที่แล้ว +5

      Ditto, but more than 30 years for me.

    • @RvnKnight
      @RvnKnight 3 ปีที่แล้ว +4

      @@jackgerberuae
      Just under 30 for me. I learned it at 14/15 and I'll be 40 soon.

    • @khoibut6206
      @khoibut6206 3 ปีที่แล้ว +5

      I am 16 currently and I learnt Pascal when I was 13

    • @jackgerberuae
      @jackgerberuae 3 ปีที่แล้ว +4

      @@khoibut6206 good for you. Pascal was so easy and clear. At the time it was competing with C, C+ and later Turbo Pascal and C++.
      Proper C+ became too complicated very quickly
      I started with Python 3 years ago, after a very long time not coding anything. It was easy, and it builds on what we knew from Pascal.
      Anyways, my silly stories… good luck. Pascal is cool

    • @nahuelcutrera
      @nahuelcutrera 3 ปีที่แล้ว +1

      me too here.

  • @paulwratt
    @paulwratt 2 ปีที่แล้ว +3

    I know this is a bit dated now (6 months on), and I did watch the series as it came out. I was going to comment, but I seriously thought someone else would make this observation; well, 544 comments later and still nobody has. The only issue I have with it is the recorded timing procedure. Anyone who has done a lot of cross-platform and/or cross-compiler work knows that the "text to screen" routines vary hugely, even with the same compiler on different platforms with the same CPU architecture, i.e. a lot of time can be lost in that routine. E.g. pipe the output to a file to see just how much (with BASH it's up to 100x faster); a sketch of timing that excludes output follows this comment. Apart from that, I'm glad someone finally took a stab at this sort of "drag race" speed test. Thanks a lot.
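
  A minimal C++ sketch of the timing style this comment argues for: keep all console output outside the timed region, so slow "text to screen" routines cannot distort the result. run_sieve() here is a deliberately naive stand-in, not any of the contest implementations:

      #include <chrono>
      #include <cstdio>

      // Naive stand-in workload; a real entry would run the actual sieve.
      int run_sieve() {
          int count = 0;
          for (int n = 2; n < 100000; ++n) {
              bool prime = true;
              for (int d = 2; d * d <= n; ++d)
                  if (n % d == 0) { prime = false; break; }
              count += prime;
          }
          return count;
      }

      int main() {
          using clock = std::chrono::steady_clock;
          const auto start = clock::now();
          int passes = 0, primes = 0;
          // No I/O inside the timed loop.
          while (std::chrono::duration<double>(clock::now() - start).count() < 5.0) {
              primes = run_sieve();
              ++passes;
          }
          const double elapsed =
              std::chrono::duration<double>(clock::now() - start).count();
          // Printing happens only after the clock has stopped.
          std::printf("passes: %d  seconds: %.3f  primes: %d\n", passes, elapsed, primes);
          return 0;
      }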

  • @yummybeers
    @yummybeers 3 ปีที่แล้ว +29

    As it’s said, there are NO coincidences. Anyone else notice that Forth was left out here? I think we know why.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +55

      Shh... I do NOT need the FORTH and ALGOL goons coming around my lab again, roughing me up, saying "It'd be a shame if something happened to your IEEE math routines".

    • @charlesbass6324
      @charlesbass6324 3 ปีที่แล้ว +2

      @@DavesGarage Yes, I'd really like to see something regarding Forth also.

    • @TurboGoth
      @TurboGoth 3 ปีที่แล้ว

      @@charlesbass6324 I think i know what Dave is having a hard time holding his tongue on with this one. My psychic powers hear him screaming, "THEN BY ALL MEANS ADD IT!!!...." and then "no, no - Don't say it, take a breath", he tells himself. =)

    • @charlesbass6324
      @charlesbass6324 3 ปีที่แล้ว

      @@TurboGoth I believe we ALL (that know) know the answer to this one. A Forth definition drops right into assembler and executes. It's damn small and fast. Dave?! You listening??? Look, I'll buy one of your coffee mugs if you just run the tests, okay?

    • @PhillipEaton
      @PhillipEaton 3 ปีที่แล้ว +1

      Forth could certainly challenge, if someone could be bothered optimising their compiler appropriately in the process. And yes, you ARE allowed to do that in Forth, encouraged in fact!

  • @excitedbox5705
    @excitedbox5705 3 ปีที่แล้ว +158

    would have been nice to include a list of the results for those who don't have the time to set this up.

    • @adarshyadav253
      @adarshyadav253 2 ปีที่แล้ว +12

      Exactly

    • @Mpivovitz
      @Mpivovitz ปีที่แล้ว

      *SOUP NAZI VOICE* no results for you!

  • @joejoesoft
    @joejoesoft 3 ปีที่แล้ว +12

    FYI: Delphi has a free Community Edition. This used to be called the "Starter" edition and cost 200 bucks. About 3 years ago, they renamed it and removed the price tag. For non-commercial use, it's free indefinitely. For commercial use, it's limited to 5 devs and under 5K revenue per year.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +1

      I would LOVE it if you could join the project and make the Delphi code we already have work with it in such a way that everyone could run it without violating any licenses! I don't know if this meets their def of non-commercial though, since I AM incorporated and make more than 5K a year... just not with Delphi :-)

    • @joejoesoft
      @joejoesoft 3 ปีที่แล้ว +5

      @@DavesGarage So, after a couple of hours of research, I don't think I could add anything helpful to the project via Delphi (a platform I've used for 20 years). I can only confirm you're correct; the licensing would still be an issue.
      Radstudio (official) has Docker images, but the licensing would be the hard part. When the Community Edition license is held by a business, you'd not be eligible. The license states "regardless of whether the Community Edition is used solely to write applications for the business' internal use or is seen by third parties outside the company or has a direct revenue associated with it" as grounds for disqualification once revenue is over 5K a year.
      There is a GPL option: Lazarus. It's "Delphi compatible", but this wouldn't be testing Delphi itself. I don't think this would be a suitable replacement in the spirit of your project.

  • @ErraticPT
    @ErraticPT 3 ปีที่แล้ว +11

    Always had a soft spot for Pascal, probably because I used Turbo Pascal on my college course and Hispeed Pascal at home; the latter was basically a clone of TP for the Atari ST. Meant I could do most of my coursework at home at night, bring it in to college during the day, and it would run with zero changes.
    Met the programmers of Hispeed Pascal at a few computer shows (a company called Hi-soft, famous for Devpac disassemblers and other dev tools for the ST and Amiga) and even pointed out a few bugs; ended up with them regularly sending me non-public beta versions. This was before the Internet proper, so it was all done via letters and packages stuffed with floppy disks!

    • @paulmoffat9306
      @paulmoffat9306 3 ปีที่แล้ว

      I have the Reverse Assembler tool that could 'translate' compiled 8080/z80 programs into Assembler. That program was called 'REVAS'. Incidentally, that name was used in S1E1 of Max Headroom. I disassembled the CP/M 2.2 OS to Assembler and re-compiled it - worked fine! FYI- still have it.

  • @MRCAGR1
    @MRCAGR1 3 ปีที่แล้ว +14

    The Ada was using 64 bits whereas the Pascal was 32 bits. What processor were they running on, 32 or 64 bit architecture, single core or multi core? What level of optimisation? Which compilers were used?

    •  ปีที่แล้ว

      Probably more importantly, Ada was using packed bits while Delphi was using one bit per byte. The latter is much faster on most processors, including the one used here, but uses eight times as much RAM.
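
  A small C++ sketch of the trade-off described above, one flag per byte versus packed bits. This is only an illustration of the two memory layouts, not either project's actual code:

      #include <cstdint>
      #include <cstdio>
      #include <vector>

      int main() {
          constexpr std::size_t limit = 1'000'000;

          // One flag per byte: 'limit' bytes of RAM, simple loads and stores.
          std::vector<uint8_t> byte_flags(limit, 1);

          // Packed bits: roughly limit/8 bytes of RAM, but each access costs a
          // shift and mask (std::vector<bool> packs bits behind operator[]).
          std::vector<bool> bit_flags(limit, true);

          // Clear multiples of 3 in both, just to exercise the two layouts.
          for (std::size_t i = 6; i < limit; i += 3) {
              byte_flags[i] = 0;
              bit_flags[i] = false;
          }

          std::printf("byte layout: %zu bytes, bit layout: about %zu bytes\n",
                      byte_flags.size(), bit_flags.size() / 8);
          return 0;
      }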

  • @arturkovacs3689
    @arturkovacs3689 3 ปีที่แล้ว +34

    2 hours after the "premiere", it's still only available in 360p in Northern Europe. Not sure if this is normal. This is merely a friendly heads up.

    • @johannesbohm6458
      @johannesbohm6458 3 ปีที่แล้ว

      At least it tells you the truth. On my device it says "1080 (auto)" and if I want to change it, it says "quality options not available", but it's clearly 360p.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +12

      I know, I think something is borked with YouTube on this one. After 12 hours, I tried editing the video and trimming a second out to see if that will "reset" it. I also uploaded an HD version alongside, but hate doing that. Frustrating!

    • @kilrahvp
      @kilrahvp 3 ปีที่แล้ว

      Still same after 6... I was actually playing with old VHS tapes and a small CRT TV, so I guess it's fitting that YT only gives me 360p. Had fun transferring the video to tape and watching it there as a nod :D

    • @HomelabExtreme
      @HomelabExtreme 3 ปีที่แล้ว

      Still the same here in Denmark :(

    • @Dr_Dude
      @Dr_Dude 3 ปีที่แล้ว

      @@DavesGarage same in US, Utah.

  • @KryptKicker5
    @KryptKicker5 3 ปีที่แล้ว +10

    I still use Pascal. Currently using it for a project. It’s my favorite language. I also really admire Niklaus Wirth. I feel better now that I’ve publicly admitted it :)

    • @stevenbliss989
      @stevenbliss989 3 ปีที่แล้ว +2

      Which Pascal?

    • @KryptKicker5
      @KryptKicker5 3 ปีที่แล้ว +3

      @@stevenbliss989 Lazarus + FPC. I used to use Delphi but it took so long for 64 bit, Unicode and Linux that I moved on. And the ridiculously high price cemented my feelings. The LCL is still not nearly as slick as the VCL but it’s cross platform and pretty impressive in its own right. FPC has always been awesome but Lazarus and the LCL have really matured a lot through the years. Personally I probably wouldn’t go back to Delphi. I’m happy with it and the community seems enthusiastic.

    • @stevenbliss989
      @stevenbliss989 3 ปีที่แล้ว +2

      ​@@KryptKicker5 Cool. Borland did fuck up Delphi after the great Product it started out being, when they went insane with crap like Inprise etc. Lately (in versions 10.4.2+) they took away floating form design - INSANITY! So where I work were will stay on 10.3.3 forever, and stopped paying for update subscriptions - until they fix this, ...likely never! Maybe in 10 years they might be where they should have been ten years ago, ...very sad! :)

    • @KryptKicker5
      @KryptKicker5 3 ปีที่แล้ว +1

      @@stevenbliss989 That's just crazy. I can believe it though. This topic came up for me not just a few days ago. To be fair, Lazarus doesn't have the best alternative. It's got a couple of docking managers, with the older more restrictive type being the go to. I haven't dealt with it but I've seen a number of forum posts about it. Seems that if you want anything resembling Delphi's older functionality you're expected to RYO. There's even an incomplete tutorial to follow :P

    • @pimpthyride
      @pimpthyride 3 ปีที่แล้ว +1

      We love you KryptKicker5.

  • @wisenber
    @wisenber 3 ปีที่แล้ว +14

    This had better be worth the manhunt and the carjacking! I only had three weeks left before I got out.

    • @WarrenGarabrandt
      @WarrenGarabrandt 3 ปีที่แล้ว +2

      He said break out not go on a rampage. Lol

  • @DavesGarage
    @DavesGarage  3 ปีที่แล้ว +38

    Make sure you're subscribed so you don't miss gold like this trailer :-)

    • @vinay866
      @vinay866 3 ปีที่แล้ว +1

      I guess it's C or assembly.

    • @moffix
      @moffix 3 ปีที่แล้ว +1

      Cannot wait for the next head-to-head. Appreciate the way you broke down Pascal and Ada. I was in college in the late 80s and coded in both. We even had a defense department grant for Ada development. Unfortunately I ended up doing C and Cobol after college.

  • @jeffreyphipps1507
    @jeffreyphipps1507 3 ปีที่แล้ว +77

    I wondered how he was going to get 45 languages compared in 22 minutes!

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +20

      I figure I should be able to get 4-5 languages in a normal episode where I don't have to explain the entire setup each time!

  • @johanrg70
    @johanrg70 3 ปีที่แล้ว +9

    Great video, got me hooked for the next one :) Assembly is potentially the fastest but it comes down to the implementation of the programmer, more so than the other languages.

    • @johanrg70
      @johanrg70 3 ปีที่แล้ว

      @jshowa o No, but similar implementations in other languages can result in bigger speed difference where the language itself is the limiting factor.

    • @johanrg70
      @johanrg70 3 ปีที่แล้ว

      @jshowa o My point here was only that to beat something like c++ when both are highly optimized is going to require more effort but it's theoretically possible with assembly. In certain other languages, it's the language itself that is the limiting factor.

  • @jackgerberuae
    @jackgerberuae 3 ปีที่แล้ว +7

    The assembly code is not yet optimized.
    It has a comment stating a critical procedure can be made faster. So there is uplift for the right skilled programmer to try.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +5

      You bet. But unless and until a human does so, assembly language is slower, because assembly language is dependent on human authors.

  • @beaconofwierd1883
    @beaconofwierd1883 3 ปีที่แล้ว +31

    I was expecting a neat bar graph of all the languages where you could clearly see clusters based on language type :(

    • @pst659
      @pst659 3 ปีที่แล้ว +2

      same his videos are not that good.

    • @AlFredo-sx2yy
      @AlFredo-sx2yy 2 ปีที่แล้ว

      @@pst659 Besides, the comparisons are unfair because the code used for some of the languages is just absurdly terribly written. Almost as if the objective was to intentionally slow down some languages, when you can very easily write a program that is hundreds of times faster lol... this channel has decayed over time tbh.

  • @berndeckenfels
    @berndeckenfels 3 ปีที่แล้ว +7

    Overflow detection at Compilation time… except when you fly an Ariane 5 (V88) and skip it

    • @fabiosemino2214
      @fabiosemino2214 3 ปีที่แล้ว

      I knew about the software error ported over from Ariane 4; was it an Ada program?

    • @berndeckenfels
      @berndeckenfels 3 ปีที่แล้ว +2

      @@fabiosemino2214 Yes, that's my understanding. The overflow condition was actually flagged by the compiler, but it had been assessed and accepted for Ariane 4 on the grounds that it would not happen. For Ariane 5 it could happen, it was triggered, and worst of all it was in code which was not even used after launch.

  • @FlaviusFernandes
    @FlaviusFernandes 3 ปีที่แล้ว +13

    You cannot do a drag race without engineers from other languages involved.

  • @jongseokpark1235
    @jongseokpark1235 3 ปีที่แล้ว +36

    I mean, we all know every language is just machine code with extra steps... Wouldn't this just depend on how optimized the compiler/interpreter is for the given ISA/microarchitecture? (Unless you're using Verilog for an FPGA or something)

    • @wintercoder6687
      @wintercoder6687 3 ปีที่แล้ว +4

      If the algorithm is optimized to the language, the fastest is going to be assembly/machine code... simply because any language at a higher level (which would be all of them) gets compiled down to assembly/machine code for execution.
      Plus other factors such as processor speed, etc., play a role.
      How will this be measured? Time? Number of execution clock cycles? Not all clock cycles are created equal.
      How do you compare ARM vs x86? (In terms of performance, since the typical platform for each is vastly different.)
      Now... if you are comparing features/functions/usage... well.. that's all subjective.

    • @Tiddle_s
      @Tiddle_s 3 ปีที่แล้ว +7

      A garbage collected language that's allocating and deallocating will be slower than one that isn't, and an interpreted language (including VM languages which compile to intermediate code) will be influenced by how well their VM is optimised for the instructions being executed or how well the compiler can substitute commands for more efficient ones (like using AVX512 instructions).
      Sometimes a language just isn't meant to be fast. Take Bash for example, it was made to be a command language (shell) for unix. Most of what it does is call other programs and pipe their inputs/outputs together or redirect their input/output from/to a file. There's no reason to optimise it as a scripting language because anything that should be optimised should be a program it can call.

    • @TurboGoth
      @TurboGoth 3 ปีที่แล้ว +1

      I think you've sorta got a good point, but consider how the performance of this particular algorithm is considered as representative of the speed of a language. Just one algorithm was chosen. And likewise, just one compiler or architecture is being chosen as representative of what that language can do. But languages don't determine performance and we all know that: tooling does. And since we can't use all possible tooling to determine language speed, we need to pick one and consider it representative. But still, I think mainstream tooling and architectures make for as representative numbers as we can get. I mean, let's say you have a bizarre fluke of a machine and a wacky compiler/linker that just kills on your particular hardware? Who cares? What would your best numbers ever really say to the rest of the world? We just need to go with mainstream here. It does represent the most relevant criteria: the real world.

    • @jongseokpark1235
      @jongseokpark1235 3 ปีที่แล้ว +1

      @@TurboGoth Yeah, I see the point of the video is comparing performance in mainstream environments, but I just wanted to mention that performance differences depend more heavily on how much the final machine code exploits the underlying hardware (threads, cache optimization, blocking, SIMD, prefetching, etc.). So if a compiler engineer from Intel can write C code which is 100 times faster than what I could in Rust, or CUDA/nvcc can generate code which is way faster than OpenCL, it's because they have deeper (often proprietary) knowledge about the hardware, not because of the language itself.

    • @TurboGoth
      @TurboGoth 3 ปีที่แล้ว

      @@jongseokpark1235 The hardware parallelism that may be applied to any given algorithm is very relevant to this discussion. Certainly, if this problem lent itself to what could be done well in CUDA/OpenCL/Vulkan using a graphics card, then C would win very unfairly, because calling these APIs is always most readily available as an option from C. And while something can be said for C for its easy use of system libraries without gobs of intermediate glue code, it certainly doesn't mean it's a faster language because of it.

  • @houstonfirefox
    @houstonfirefox 2 ปีที่แล้ว +3

    I always liked Borland Delphi, which was a successor to Turbo Pascal. One of my first jobs at Compaq involved sitting there for a week with an empty desk; then one day a shrink-wrapped boxed set of Borland Delphi 1.0 showed up on my desk. I told my manager that I had no experience programming in this language, and he said "welcome to the club, nobody else does either". Literally, there was no training around for it and I had never programmed in Pascal. Ended up writing an EDI (Electronic Data Interchange) system in it that they used until HP bought them out.
    A plus is that Delphi is the only language I know of that was compiled in itself! The source code was actually part of the installation, so any bugs in the language could be fixed.
    Ah, the good old days!

  • @sabatiniontech7256
    @sabatiniontech7256 3 ปีที่แล้ว +4

    Your comment that ADA was developed from Pascal is fundamentally incorrect. Although both derive from Algol, ADA came via PL/1. The PL/1 language was jointly developed by IBM and Bell Laboratories in the 1964 to 1965 time frame to be a general-use compiled language that produced code good enough to use as a system programming language. To that end it was used as a basis for IBM OS/360 (later zOS), extended as PL/S, the programming language for systems. It was also the language in which Multics, the preeminent non-IBM operating system, was written (UNIX is a pale copy of Multics, a single-ICS if you will, and C was a stripped-down, degenerate copy of PL/1 which required C++ to add back in the missing features).
    ADA took PL/1 and added such things as operator overloading and other object-oriented features to the language (all stolen from Smalltalk, the most important and misunderstood programming language you've never heard of).

    • @tconiam
      @tconiam 2 ปีที่แล้ว

      Yup, Ada has a lot of parents! Part of what makes Ada great, and caused it so much trouble early on, was that it took the best software engineering features from so many prior languages. Unfortunately, it pushed the compiler developers so hard it took a long time for Ada compilers to provide all the features reliably. I know, during the early '90s we essentially helped Alsys debug their PC compiler because we were using so many of the language features that we constantly ran into issues we sent back to them for fixes.

  • @VioletGiraffe
    @VioletGiraffe 3 ปีที่แล้ว +7

    So early to the video, only 360p is available! Good job, YouTube.

  • @RS-ls7mm
    @RS-ls7mm 3 ปีที่แล้ว +4

    Um, it's not the language, it's the compiler. Huge differences even with the same language.

    • @simonfarre4907
      @simonfarre4907 3 ปีที่แล้ว +2

      Which is why people's conception of C being faster than any of the other higher-level low-level languages is wrong. The reason C++ and Rust can beat C code in some cases is that those languages provide guarantees that the C compiler doesn't get, and thus will ultimately emit better assembly code.

    • @RS-ls7mm
      @RS-ls7mm 3 ปีที่แล้ว +1

      @@simonfarre4907 I find that slight differences in coding, even in the same language, have the biggest influence. The differences don't even have to make sense. It's still pretty much trial and error.
      When you see the absolutely crazy optimized machine code that comes out of a compiler, it's no wonder it can beat hand-written assembly.

    • @simonfarre4907
      @simonfarre4907 3 ปีที่แล้ว +1

      @@RS-ls7mm I agree with you there. Coding practices matter, if *squeezing* out that extra oomph is required.
      I just find it strange that people *to this day* believe that C is somehow inherently faster than C++ or Rust, as if C were somehow magic. It's not. Also, the strong(er) type systems of the C++ and Rust compilers can provide some guarantees that C compilers don't necessarily provide out of the box.
      Yeah, that's another point I keep seeing: "assembly is so much better". Yeah, if you know what you are doing, at *all* times. And nobody is (or rather, most people aren't up to it, including myself). Sure, if you're cranking out games, which is pump-and-dump kind of development, fine. But maintaining software over time in pure assembly would be dumb. Which is why nobody does it. Compiler writers (at least the ones who work on the "big" low-level ones) are pretty damn smart developers.

    • @stevenbliss989
      @stevenbliss989 3 ปีที่แล้ว

      Totally agree!

  • @hqcart1
    @hqcart1 3 ปีที่แล้ว +6

    The fastest language is obviously Assembly. It's the mother of all fast languages, and you can't be faster than that; if you got different results, then check your assembly code :)

    • @surferdude4487
      @surferdude4487 3 ปีที่แล้ว +2

      Assembly is not a true language. You cannot implement the same assembly code on different processors.

    • @hqcart1
      @hqcart1 3 ปีที่แล้ว

      @@surferdude4487 Duh, just like he is writing different code for each language, you can write assembly code for each processor architecture.

    • @surferdude4487
      @surferdude4487 3 ปีที่แล้ว

      @@hqcart1 Since he has not restricted the competition to high level computer languages, I will concede your point. Anyone that can code competently in assembly should be able to beat any compiled or interpreted programming language.

    • @hqcart1
      @hqcart1 3 ปีที่แล้ว

      @@surferdude4487 And by the way, he pinned his comment "The current winners are NOT C, C#, C++, or Assembly", so he is using assembly to compete with other languages.

  • @TurboGoth
    @TurboGoth 3 ปีที่แล้ว +9

    I can appreciate the efficiency argument for the most productive languages: optimize in terms of minimizing the cost on the most expensive resources: the programmers. =)

    • @WarrenGarabrandt
      @WarrenGarabrandt 3 ปีที่แล้ว +1

      That is exactly what high level languages do. Sure, it's possible to use a lower level language to write your user interface so it takes a thousandth of a second less time to render, but if you have to pay your programmer 10 times as much to do it, did you really save anything?

    • @TurboGoth
      @TurboGoth 3 ปีที่แล้ว +1

      @@WarrenGarabrandt Then there's the discussion of the real math of how much time is saved with efficiency. If 1 second is saved, then that's 1 second, and so what? But if it is 1 second saved by each run of the program EVER, then you need to multiply that second by how many times it is run. And by how many people? And by how tight a loop it needs to run in to maintain smooth animation in a game?

    • @WarrenGarabrandt
      @WarrenGarabrandt 3 ปีที่แล้ว

      @@TurboGoth Well, yes and no. If the time saved is small enough, people probably won't even notice it. Besides, the slowest part of any computer is almost always the person using it. Writers take about the same amount of time to write a book on a typewriter as they do on a modern high-end computer. This is because the bottleneck isn't how fast they can type, for the most part. Whether it takes Microsoft Word 2 seconds or 15 seconds to open hardly matters if the writer is going to spend eight hours in the program and only write maybe one chapter of text at most.
      There are always cases where performance is absolutely critical and you have to write the code as efficiently as possible. You're not going to find interpreted languages and rapid application development packages used for that kind of performance-critical code. The vast majority of software that is written on a daily basis is hardly what anyone would consider performance critical.
      On that note, nobody is choosing their line-of-business software, or any program for that matter, based on whether or not it's one second faster at any one operation. People buy Adobe products not because they're the fastest on the planet, but because of the ecosystem and the support. People use Windows not because it's the most efficient operating system, but because of the ecosystem and the support.
      Sure, Microsoft could spend a bunch of time making the Start menu open more quickly. They could also prioritize local file searches over web searches, which is what most people (I think) are going to use the Start menu for anyway. They don't do that because that's not how they make their money.
      The simple fact is computers are so fast today that we can afford for almost all of the programs we use to be a little inefficient, and it makes hardly any difference.
      To people like myself (and I expect probably you too) that is appalling, and I believe all of the critical software that I use should be written to be as efficient as possible. But taking a quick inventory of the programs I use the most, not one of them is written in assembly language. Most are actually written in high-level interpreted languages, and they work just fine.

    • @sebastianmestre8971
      @sebastianmestre8971 3 ปีที่แล้ว

      @@WarrenGarabrandt
      When I'm coding, I like to close my editor while I compile or test my program.
      A low resource machine might not be able to run two heavy GUI apps smoothly at the same time, so closing one to open the other is necessary for some users.
      These workflows are very unergonomic if startup time is too large.
      Also, if an app launches very quickly, I might be able to use it programatically (e.g. in an AHK script)
      In general, fast things open up use cases that one might not even consider for slower pieces of software.
      It's not a matter of 'it does what I use it for quickly enough', but of 'what could I do with it if it was 10x or 100x faster?'
      I know for a fact that performance is something Adobe engineers take very seriously. I remember watching a talk by an engineer at Adobe where he talks about how user perception of a feature changes depending on speed, and how they aim to make every feature fast enough so that their users can comfortably incorporate them into their workflow.

    • @SimGunther
      @SimGunther 3 ปีที่แล้ว

      Second most expensive: the tool (not the language)

  • @Mark.Brindle
    @Mark.Brindle 3 ปีที่แล้ว +8

    It's not really about 'language'; it's more about comparing compiler and optimizer output. E.g., Turbo Pascal and Delphi generate native code (except for the Delphi Borland released for .Net V1.1), while UCSD Pascal generates byte code which is interpreted. MS Pascal in the 80s was a dog and was killed by Turbo Pascal. C# will be slow on the first pass as the IL (byte code) is JIT-compiled and optimized to native code, and then there are the server and workstation versions of the Garbage Collector.
    It's interesting to compare high-level language features, as I have always wanted more languages on .Net (as MS promised when it was released), allowing for a mix of languages in an application and leveraging the best of each (Prolog as a rules engine, or APL for doing the financial heavy lifting). Someone did a Forth on .Net in the early days, which was pretty awesome.
    Performance will come down to the compiler and optimizer being used and how good they are.

    • @TurboGoth
      @TurboGoth 3 ปีที่แล้ว +1

      Good point about C#. While the first pass is still just one pass out of the millions that this task requires, the jit-hit should be negligible. The important piece of that puzzle would be the tradeoff in jit implementations where producing optimized code is far behind just creating working code ASAP and so the jit tradeoff should really show up blindingly. This would be one of those places where the mainly theoretical hotspot detection logic could pay in dividends where critical spots could be identified as such and then given a shot by a much more optimizing jit that may take its sweet time to produce excellent code.

    • @MGSncB
      @MGSncB 3 ปีที่แล้ว

      ​@@TurboGoth .NET Core 2.x included Tiered Compilation (it was disabled by default then, 3.x+ enabled it out of the box) where the JIT compiler emitted code with little to no optimizations in Tier 0. Hot methods are being optimized in Tier 1. They've made a ton of progress on that one in .NET 5, and the Tier 1 code can sometimes be a lot faster than no-tiered full JIT, even with MethodImpl.AggressiveOptimization - the runtime actually guides the JIT and tells it where to fold, hoist and replace. There's also a separate "Quick JIT" toggle for loops.

    •  3 ปีที่แล้ว

      UCSD Pascal generates bytecode for a virtual machine. You know, like Java?

    • @Mark.Brindle
      @Mark.Brindle 3 ปีที่แล้ว

      @@TurboGoth I agree. The Benchmark library for .Net Core separates out the jitter time. It's a minimal time in many applications, although a compiled application does not require it. The GC overhead would be interesting for each version. A good book back in V1.0 days was ".Net Performance". It's all changed now.

    • @Mark.Brindle
      @Mark.Brindle 3 ปีที่แล้ว

      @ Yes, I know, and it performed like a dog. I have no idea how they could justify the cost of it when I used it back in 1984.

  • @skf957
    @skf957 3 ปีที่แล้ว +3

    Dave, your channel just improved by some order of magnitude.
    And it was already pretty good. Thanks for this, looking forward to the next one.

  • @remlatzargonix1329
    @remlatzargonix1329 3 ปีที่แล้ว +7

    Regarding the "fastest" computer language: I think that would depend upon the task(s) you are trying to accomplish.
    Some languages might be extremely fast when it comes to array manipulation whereas others might be more efficient at looping and so on.

  • @notenoughmonkeys
    @notenoughmonkeys 3 ปีที่แล้ว +7

    Nice video, look forward to seeing where this goes. In terms of Ada vs Pascal, in all honesty, whilst you *can* get Ada closer in terms of performance, a lot of this would be achieved by going against the spirit of what Ada is supposed to provide, as much of the time will be won/lost with Ada's extensive use of run-time assertions / range checks etc., which is why it would be selected as a language in the first place, as well as, as you rightly mentioned, the fact that so many errors are pushed to the compilation phase compared to other languages.
    I have noticed that the latest commits have tweaked the compilation settings to disable run time checks etc., which is fine, it will show that Ada can close the gap if you absolutely have to, and certainly some projects I've worked on did exactly this to meet throughput needs (effectively treating the run-time checks more like debug asserts in C/C++, i.e. with an end goal of releasing with all these being suppressed by the compiler), but typically, the expectation is that these remain active, and appropriate exception handling will keep things working as intended, otherwise you lose one of the main benefits of the language.
    In terms of esoteric languages, I can chip in with DCL (targeted at the old VAXs) if I get some free time. Good luck running it though... 😀
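
  This is not Ada, but a rough C++ analogy for the run-time-check trade-off described above: .at() checks every index (loosely analogous to Ada's default range checks), while operator[] does not (loosely analogous to building with checks suppressed). The extra per-access work is the gap that suppressing checks closes:

      #include <vector>

      long sum_checked(const std::vector<int>& v) {
          long total = 0;
          for (std::size_t i = 0; i < v.size(); ++i)
              total += v.at(i);      // bounds-checked; throws on a bad index
          return total;
      }

      long sum_unchecked(const std::vector<int>& v) {
          long total = 0;
          for (std::size_t i = 0; i < v.size(); ++i)
              total += v[i];         // unchecked access
          return total;
      }

      int main() {
          std::vector<int> v(1'000'000, 1);
          return sum_checked(v) == sum_unchecked(v) ? 0 : 1;
      }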

  • @TheSilent333
    @TheSilent333 2 ปีที่แล้ว +3

    Turbo Pascal is the reason I'm a programmer today. Great video as always!

  • @walterpark8824
    @walterpark8824 3 ปีที่แล้ว +2

    I enjoy your posts, and this is a good one. But I've got to focus on the amazing first four minutes or so. It's always fun to see a smart, knowledgeable, and articulate (now there's a combination!) guy discuss tech, but the true genius of your story is the internet itself. Thank you, Tim Berners-Lee and w3! To have such volunteer resources come together so effectively and efficiently will never cease to amaze me. I'm happy to put up with a million cat videos to have access to a few collaborations like this one.

  • @joshuablanton3016
    @joshuablanton3016 3 ปีที่แล้ว +4

    "Recreationally fluent" is an excellent description of a programming language familiarity. I love that!

  • @myentertainment55
    @myentertainment55 3 ปีที่แล้ว +5

    Me after watching first episode:
    Hm, it would be interesting how java is going to perform in that test.
    Dave: NEW VIDEO with 45 languages!
    Just Wow :D

  • @glencmac
    @glencmac 3 ปีที่แล้ว +3

    Microcode is the fastest executing language. If you design your code correctly you can interleave as many as 3 or 4 programs. One program is executing while the others are waiting for bus latches to be completed. It is by far the hardest and most time consuming to write and test. Me, if I want fast development I go with C. If I want fast execution I go assembler.

  • @Phantom_Aspekt
    @Phantom_Aspekt ปีที่แล้ว +3

    I've recently decided to try and learn a programming language and thought that Lisp would be an interesting place to start. I then saw this and got curious about what the results were, and found Lisp sitting in second place behind C++. Seems Lisp is super underrated.

  • @stevenbliss989
    @stevenbliss989 3 ปีที่แล้ว +3

    Including Assembler, ...I cannot see the use! - IT WILL ALWAYS WIN.
    IT MUST, because this is what ALL languages ultimately run down to, and often even compile directly to!
    The only variable is the skill of the programmer.

    • @SianaGearz
      @SianaGearz 3 ปีที่แล้ว +1

      Depending on the target, it tends to be insanely difficult to outsmart the compiler on a microbenchmark. The compiler has a pretty good model of how long CPU instructions take to execute and when they can be executed in parallel. It will also partially unroll the loops and rearrange things into order that is not very congruent with algorithmic thinking patterns, making it into "write-only" sort of code that is extremely difficult to grasp and modify, and if the handwritten code has any chance of competing, it will grow similar highly undesirable properties. Furthermore CPUs get developed to optimise execution of code typically emitted by the C compilers and compilers derived from them, which is particularly heavily the case on modern desktop. So canonical, legible assembly is generally spectacularly bad in all regards except binary size.
      Optimised assembly also lacks educational value in the classic sense because the solutions tend to be profiling based and highly situational, so you can't just generalise them and necessarily apply them to another code, and the legibility is so bad.
      You can not compete with the machine on equal footing. Machine will eat you for breakfast. You don't compete with a shovel against an excavator for digging trenches either, do you?
      Why I say depending on the target, is that instruction set extensions such as SIMD don't get put to good use by the compiler, and in this area you have some leverage, that even accounting for manual ordering being worse than compiler's, you can still make good on performance a little. Sometimes, knowledge of unusual hardware can also help you write better code for it, for example on the ancient ARM7TDMI you want external memory or ROM code to be 16-bit Thumb, but you can have on chip scratch ram where you can copy there a small section of 32-bit code which can make use of condition prefixed instructions, looks a bit more like DSP code. Incidentally, Thumb instructions won and classic ARM instructions were eventually killed off completely, because compilers never made great use of condition prefixes even when these were available, which just goes to prove a point that compilers rule the world even when they are wrong.

  • @dannoringer
    @dannoringer 2 ปีที่แล้ว +4

    Fun. I'm an old Pascal coder, and love that language. Makes me a little sad that its score is so bad compared to the highest performer, but with computers as fast as they are Pascal will always meet my needs. (If I can find a modern Pascal compiler... good luck with that)

  • @Ranchhand323
    @Ranchhand323 2 ปีที่แล้ว +1

    Your outtakes are the best, Dave. They always make me smirk or chuckle. I hope everyone sticks around for the whole thing to enjoy the full experience.

  • @27455628
    @27455628 2 ปีที่แล้ว +1

    I wish this brought back memories for me :(
    I am just a newbie programmer who only knows JavaScript and Python.

  • @justsomeperson5110
    @justsomeperson5110 2 ปีที่แล้ว +2

    LOL I just recently stumbled across your channel and already found my way to this video and I have to say, I absolutely love this idea, in part for its implementation. Because a lot of times the "fastest" language is not so much which language is actually faster, but which implementation of a function is the most optimized. And this race covers both concepts well! Kudos to everyone involved and contributing! As an old timer who has worked in waaaaaaaaay too many different languages (computer and human) over the years, I love seeing this!

  • @HweolRidda
    @HweolRidda 2 ปีที่แล้ว +3

    It strikes me that this is not merely a test of languages. It is also a test of language implementations. I have worked in a place with foot-to-the-floor computing for 30 years. In the early days of Fortran and then C, I was involved in assessing compilers and we could easily see factors of 2 or 3 times between the same code on the same machine with different compilers. Other teams do those assessments today, but I cannot believe that conclusions from one implementation of a language can now be blindly applied to another.
    I could also quibble about using bit operations as an assessment of languages that nobody would select for bit operations. For example, I have used perl for decades for text manipulation, but bit manipulation never played a serious role in production code; if bit manipulation is a non-trivial part of a task, then invest the effort to move it from perl to something like C.

  • @bruker2211
    @bruker2211 3 ปีที่แล้ว +2

    When the upper limit (1000000) is defined as a constant literal directly in the source code, some languages can take advantage of this and find the prime numbers at compile time (metaprogramming).

    • @manojbajaj95
      @manojbajaj95 3 ปีที่แล้ว

      Exactly. The C++ solution "attempts" to calculate sqrt at compile time; why not just check whether each number is prime at compile time as well? (A sketch follows below.)
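
  A minimal C++17 sketch of the point made in this thread: with the limit as a compile-time constant, a constexpr sieve lets the compiler produce the answer before the program ever runs, which is why a drag-race benchmark has to rule this sort of thing out. The limit is kept small here so default constexpr evaluation limits are not an issue:

      #include <array>

      constexpr int LIMIT = 1000;    // small stand-in for 1'000'000

      constexpr int count_primes() {
          std::array<bool, LIMIT> composite{};      // value-initialized to false
          int count = 0;
          for (int n = 2; n < LIMIT; ++n) {
              if (!composite[n]) {
                  ++count;
                  for (int m = 2 * n; m < LIMIT; m += n)
                      composite[m] = true;
              }
          }
          return count;
      }

      // Evaluated entirely by the compiler; no sieve runs at program start-up.
      static_assert(count_primes() == 168, "there are 168 primes below 1000");

      int main() { return 0; }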

  • @MsHojat
    @MsHojat 3 ปีที่แล้ว +2

    I'm not suggesting AutoHotkey/AutoIt is fast or would do well in this (I'd imagine it would do even worse than Python), but since you mentioned the languages you've used in this video, it sounds like you've never used these? They are Windows-specific languages used to automate all sorts of simple/small (or larger) tasks quickly, as well as change/set hotkeys. It's really useful. What kind of stuff do you use instead for that? Only PowerShell? The little bit of dabbling in Python perhaps? Or would you use C++/C#?

  • @charlesbass6324
    @charlesbass6324 3 ปีที่แล้ว +4

    I do hope that Forth is also in this comparison of languages.

  • @slycordinator
    @slycordinator 3 ปีที่แล้ว +1

    If we include testing shell languages, some of the difference you get could be from which equivalent shell program you run.
    If I were to change the existing bash one to be posix compliant to run on any "/bin/sh", you could end up with differing results if sh is dash, busybox, bash, the ones included with the various BSD's, and others. They'd probably behave similarly, but I know that dash is generally faster than bash for regular sh scripts.

  • @PixelsWorkshopVideos
    @PixelsWorkshopVideos 3 ปีที่แล้ว +1

    I love it!!! Gives a tremendous insight on the conditions and deployment characteristics of every specific language.

  • @Ranoake
    @Ranoake ปีที่แล้ว +1

    I hope each iteration of finding all the primes is much less than 5 minutes, or the rounding errors would be huge.

  • @lucidmoses
    @lucidmoses 3 ปีที่แล้ว +15

    Well, it has to be assembler. If it's not, then you're just comparing bad programming. Still, I'm more interested in what's second.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +8

      You can't say it's bad programming unless you can do better, right? And it's open source! So....

    • @lucidmoses
      @lucidmoses 3 ปีที่แล้ว +11

      @@DavesGarage Sure I can. Assembler is one-to-one with the machine code. All other languages boil down to machine code, and no matter what they do you can always do the same thing in assembler. High level languages only add restrictions; the fewer the better to keep speed up, and that's what you're measuring. Now, I've done my fair share of assembler, but not on x86; I just believe the principle is the same. It's not until you include people that don't know what they're doing, or are just learning, that you can get sloppy C people beating sloppy assembler people. But bad programming can waste any amount of CPU.

    • @Tubbelol
      @Tubbelol 3 ปีที่แล้ว +2

      Depending on what kind of test we are running, C++ and C compilers could produce better machine code than humans can with assembly.

    • @AshesOfEther
      @AshesOfEther 3 ปีที่แล้ว +4

      If Assembly doesn't win, it's not necessarily because the implementation was bad. Rather, we have been optimizing compilers to a ridiculous level for decades, such that a compiler can generate machine code that is much smaller and faster than anything most programmers would think of doing.

    • @kamurashev
      @kamurashev 3 ปีที่แล้ว

      @@Tubbelol They can, but it doesn't mean they would; if the asm result is bad, then it contains some mistakes.

  • @ikannunaplays
    @ikannunaplays 3 ปีที่แล้ว +2

    BASIC and Pascal are the first languages I learned in school. Interesting to see how it turned out in the race. Also glad to see, because I haven't coded in it since.

    • @iulianalinsbengheci5438
      @iulianalinsbengheci5438 ปีที่แล้ว +1

      Same here; Visual Basic and Pascal, 2005 was the last time I coded in them.... nostalgia

  • @TheUltimaxxxx
    @TheUltimaxxxx 3 ปีที่แล้ว +2

    9:00 the video actually starts.

  • @AndrewKelley
    @AndrewKelley 3 ปีที่แล้ว +4

    I'm curious to see how the ARM asm and x86 asm are tested against each other, with respect to keeping the hardware the same.

    • @officialstrike
      @officialstrike 3 ปีที่แล้ว +2

      Definitely, it's the implementation that matters, not the language!

  • @WillyChuck
    @WillyChuck 3 ปีที่แล้ว +2

    Wait, when did Pascal become object oriented and gain classes? That's not the Pascal I grew up with.

    • @bwc1976
      @bwc1976 2 ปีที่แล้ว

      Apple used "Clascal" (later renamed Object Pascal) starting in 1983 to develop software for the Lisa and Macintosh, and Borland's Turbo Pascal first added OOP support with version 5.5 in 1989.

  • @jimiarisandi
    @jimiarisandi 3 ปีที่แล้ว +3

    I think, I guess, but it must be: nothing is faster than the one and only ASM, assembly. And the slowest could be many things; maybe BASIC. I love BASIC, or QuickBASIC.

  • @1Eagler
    @1Eagler 3 ปีที่แล้ว +8

    I used to make a lot of money with Pascal back in the 80s. Now, classes in Pascal? Like a 60-year-old woman with silicone everywhere in her body.
    Nope, I prefer to remember Pascal in her youth.

  • @SubCodeX
    @SubCodeX 3 ปีที่แล้ว +6

    This brings back sooo many childhood memories... Started out on my Cyrix "386-ish" with Borland Turbo Pascal 7.0... Hitting mode 13 with interrupt 10 (or was it the other way around?) and doing graphics like there was no tomorrow.. =P Yes, yes.. the inline ASM support was awesome! =)

  • @rootbeer666
    @rootbeer666 3 ปีที่แล้ว +1

    Awesome intro, Dave! You're a great showman. As someone who took a similar computer languages course in college I really appreciate the series and look forward to the upcoming videos. 👍

  • @IanSlothieRolfe
    @IanSlothieRolfe 3 ปีที่แล้ว +3

    I always had a soft spot for Delphi; it was one of the first packages to make developing Windows programs easy (except perhaps for Visual Basic, but that's another story!). At the time I was also using OpenROAD (formerly Ingres Windows4GL) and it had many similarities in its general object-oriented nature. I think Delphi was one of the few languages I developed GUI based programs in recreationally.

  • @Thompson8200
    @Thompson8200 3 ปีที่แล้ว +15

    It's surprising to me, although I guess it really shouldn't be, how C-centric everyone seems to be. Rust has been around for a while now and it can be faster than C, somehow that concept is shocking to people.
    It's not as if C is a special language of the gods; it plays by the same rules any other has to. It puts on its compiler pants one leg at a time. There's no reason it can't possibly be bettered or improved on in any way.

    • @mrlazda
      @mrlazda 3 ปีที่แล้ว

      Fortran is in most cases much faster than C, or basically any other language, for pure calculations; that is why nearly all programs for calculations on supercomputers are written in Fortran.
      By the way, Rust can be faster than C, but all the tests I saw where it is faster only show that the Rust library is faster than the standard C library.
      And I never saw Rust vs ICC, and according to the MySQL developers' tests the same MySQL source compiled with ICC is like 30% faster than with GCC/Clang.
      Basically, what matters most is how good the available compilers for a language are, rather than the language itself, but the language can have some impact (for example I found this explanation: "Fortran semantics say that function arguments never alias and there is an array type, where in C arrays are pointers. This is why Fortran is often faster than C." A sketch of that aliasing point follows this thread.)

    • @A1rPun
      @A1rPun 3 ปีที่แล้ว +1

      Holy-C is the special language of the gods!

    • @stevenbliss989
      @stevenbliss989 3 ปีที่แล้ว

      @@A1rPun :)
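
  The Fortran aliasing point quoted a few replies up can be sketched in C++ using the non-standard but widely supported __restrict extension (GCC, Clang, MSVC); it hands the compiler the same "these arguments never overlap" promise that Fortran makes by default:

      // With plain pointers the compiler must allow for dst and src overlapping,
      // which typically costs a runtime overlap check or blocks some reordering.
      void scale_may_alias(float* dst, const float* src, int n, float k) {
          for (int i = 0; i < n; ++i)
              dst[i] = src[i] * k;
      }

      // __restrict promises no overlap, so the loop can be vectorized freely.
      void scale_no_alias(float* __restrict dst, const float* __restrict src,
                          int n, float k) {
          for (int i = 0; i < n; ++i)
              dst[i] = src[i] * k;
      }

      int main() {
          float a[8] = {1, 2, 3, 4, 5, 6, 7, 8}, b[8];
          scale_may_alias(b, a, 8, 2.0f);
          scale_no_alias(b, a, 8, 2.0f);
          return 0;
      }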

  • @rjones6219
    @rjones6219 ปีที่แล้ว

    Reminds me of the marketing ploy of RISC computing. Back in the mid 80s, there was a push to introduce RISC computers, because customers had become fixated on MIPS ratings for comparing machine performance.

  • @anonymic79
    @anonymic79 3 ปีที่แล้ว +3

    Everyone is taking their guesses for the fastest language, but I'm more curious about the slowest. It's going to have to either be completely out of its own wheelhouse with a lot of wrangling going on under the hood, or interpreted OOP, like Java.

    • @JuryDutySummons
      @JuryDutySummons 3 ปีที่แล้ว +1

      I'm going to guess it would be something like batch or PowerShell.

    • @hrgwea
      @hrgwea 3 ปีที่แล้ว +2

      The AutoHotkey language would be pretty slow too. That thing is nothing but a line by line interpreter, without any preprocessing stage.

    • @JuryDutySummons
      @JuryDutySummons 3 ปีที่แล้ว

      @@hrgwea I wonder how some of the meme languages will do. Like the all-whitespace language, and others. :)

  • @richardwehe7325
    @richardwehe7325 2 ปีที่แล้ว

    I watched the video, but from my experience back in the old days, my answer is that nobody touches a good assembly language programmer on a PDP-11. The machine had a 16-bit address space, with larger amounts of physical memory. The operating system had the ability to allocate contiguous files on disk. Fast code execution was determined by the programmer's ability to keep the right variables in registers, cache memory, or physical memory, the ability to minimize disk seek time, the bus width, and the processor instruction set.
    Each language was intended to facilitate a particular problem representation or had some other properties that made it interesting.
    Each compiler has some sort of translation function, and perhaps an optimization function. In the case of the sieve algorithm, operating without user input, it is theoretically possible for the compiler to recognize that it has all the information necessary to actually execute the program, evaluate the accuracy of the result, and reduce the actual executable to a few print statements and an operating system wait function.
    Implementing the task with the wrong problem representation can be akin to trying to do precision work while wearing mittens.

  • @RetroTechChris
    @RetroTechChris 3 ปีที่แล้ว +2

    Ah, Ada! I worked with it extensively about 10 or 15 years ago. We switched from AdaMulti to GNAT and got about a 99% speedup in build time, I'm not even kidding! I'm guessing that Ada was probably much slower due to runtime checks being enabled. And... of course, as noted, typing was strong, so what did people do from time to time? Use unchecked conversions, which defeated the purpose of it!!

  • @enc4p
    @enc4p 3 ปีที่แล้ว +1

    Now YouTube doesn't even show the quality option when clicking the 3 dots on mobile.

  • @Roy-K
    @Roy-K 2 ปีที่แล้ว

    Loved the little LED display doing the EQ of his audio instead of the usual fire effect

  • @PSjustanormalguy
    @PSjustanormalguy 3 ปีที่แล้ว +2

    Dave you need to update your execution approach. Use C# and write a function to check prime-ness for just 1 number passed in as a parameter. Write a calling function to spawn 1,000,000 AWS lambda instances of your C# function and run those million functions for 1 second.
    Problem solved, 2021 style 😎

    • @Zzznmop
      @Zzznmop ปีที่แล้ว

      this would only benchmark the lambda runtime environments

  • @Kawa-oneechan
    @Kawa-oneechan 3 ปีที่แล้ว +3

    I may not like Pascal much, but I loved the "call by value" joke.

    • @DavesGarage
      @DavesGarage  3 ปีที่แล้ว +1

      Thanks! It's his, of course, not mine, but it always stuck with me as kind of clever...

    • @henrikcarlsen1881
      @henrikcarlsen1881 3 ปีที่แล้ว +1

      Well, he was wrong on quite a lot regarding Pascal. Didn't do his homework properly.

  • @popolony2k
    @popolony2k 2 ปีที่แล้ว +1

    Very cool... but please add the specifications of the compilers used in this test (both for Pascal and Ada).
    If you're using Turbo Pascal prior to 7.0 (maybe even 7.0), the compiler leans heavily on its runtime library, which means that each comparison test is a CALL to a generic comparison routine in the runtime library.
    For example (in pseudo Z80 ASM):
    IF (number > 10) THEN      { LD   DE, (number)
                                 LD   HL, 10
                                 CALL comparison_routine_address_at_runtime
                                 DJNZ else_statement_address }
      DoSomething
    ELSE
      DoSomethingElse;
    So the branch-test routines for all statements (IF THEN ELSE, REPEAT UNTIL, WHILE DO) were not inlined on older Pascal compilers, and the Pascal results could be better depending on the compiler version used in your test.
    About Ada...I'm a huge fan of this language.
    Regards
    PopolonY2k

  • @chrisnewman7281
    @chrisnewman7281 3 ปีที่แล้ว +3

    I thought that assembler was, but it has a steep learning curve.

    • @taragnor
      @taragnor 3 ปีที่แล้ว

      It is, and it isn't. I mean ultimately all languages boil down to machine language. You might have bytecode running in a virtual machine, like Java or an interpreter running the code like python, but ultimately whatever is actually running the code is programmed in ML. Assembly is equivalent to machine language (there's a 1 to 1 relationship between an assembly instruction and a ML instruction). So in theory, it's impossible to outperform assembly, because it is the final form of every language. The sum total of everything the computer is capable of is represented in assembly.
      Really when you talk about comparing assembly to other languages, you're talking about if the final product that the compiler generates or that the interpreter runs is faster than hand-coded assembly. The thing with assembly is that writing it is slow and hard, and optimizing it is difficult. The bigger your program is, the less time you have to optimize and likely the less efficient it will be. Compilers on the other hand are constantly getting better and are able to optimize almost as good as the best assembly hand-coders. So in practice most of the time you'll get better results from writing something in C compared to writing it in assembly.

  • @tmsaskg
    @tmsaskg ปีที่แล้ว

    I stumbled over your definition of the fastest language: calculating the AVERAGE over a number of runs.
    As a mathematician I would simply use the best result for each language for the comparison. Furthermore, I would count processor cycles for each compiled (if compilable) program for another comparison. And of course the best test is to run in a virtual machine with a stable CPU quota, for more precise run comparisons without background-process interruption. But that's probably only me...
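
  A short C++ sketch of the best-of-N idea suggested here: repeat the same workload and report the minimum time, which is the figure least affected by background-process interruptions. work() is just a hypothetical stand-in:

      #include <algorithm>
      #include <chrono>
      #include <cstdio>

      volatile long sink;   // keeps the work from being optimized away

      void work() {
          long total = 0;
          for (long i = 0; i < 10'000'000; ++i) total += i;
          sink = total;
      }

      int main() {
          using clock = std::chrono::steady_clock;
          double best = 1e300;
          for (int run = 0; run < 10; ++run) {
              const auto t0 = clock::now();
              work();
              const auto t1 = clock::now();
              best = std::min(best,
                              std::chrono::duration<double>(t1 - t0).count());
          }
          std::printf("best of 10 runs: %.6f s\n", best);
          return 0;
      }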

  • @javimm77
    @javimm77 3 years ago +3

    I think Fortran should do pretty well in the comparison. I've used it professionally for some years and it is pretty fast if used correctly! It's used extensively in HPC environments...

    • @johannweber5185
      @johannweber5185 3 years ago

      Actually, I wrote the Fortran code. My code turned out to be quite optimization-dependent - in a way I did not fully understand. My results varied between 25% and 100% of the performance on my system (Ryzen). Still, I am curious about the results obtained by Dave's team.
      There has also been quite a variation between different versions of gfortran.
      I do, however, think that my code can still be improved.

    • @javimm77
      @javimm77 3 years ago +1

      @@johannweber5185 That's interesting. I used Intel's compiler at work. In my own testing, gfortran was usually slower than Intel's Fortran, but reading Intel's documentation, the code could be optimized in a lot of ways, so I guess the same applies to gfortran. I guess it's a matter of knowing the compiler very well and the kind of optimizations that are done, but the same goes for C++, for instance. I'm curious to see the results here when the stream starts.

    • @rajatkarnatak3866
      @rajatkarnatak3866 3 years ago

      @@javimm77 Try using the -O3 optimization level when compiling with gfortran. It increases the compile time but boosts performance substantially, sometimes even matching ifort.

  • @stevenbliss989
    @stevenbliss989 3 years ago +6

    There are free versions of Delphi... again, just saying.

  • @oneeyedphotographer
    @oneeyedphotographer 2 years ago +1

    I can probably do JCL with only a little refresher. There are also JOL, REXX, and TSO's procedure language; you could do a sieve in those. I think Google can find you JOL; you will possibly need Hercules and MVS to run it. Clem Clarke is the worldwide expert, unless he's died. He'd be around 80.
    I also used to be able to program in FORTRAN, Pascal (especially Borland's implementation), PL/I, COBOL, and IBM Assembler F.

    • @mrcryptozoic817
      @mrcryptozoic817 1 year ago

      The TSO procedure language (FYI, it was called "CLIST") was an interpreted language that had some odd holes in it. You couldn't do some table processing with it. I tried for a couple of hours one evening after my "work" time. The goal was to be able to write code that anyone using CLIST could maintain. I ended up invoking a short assembler routine for that table.
      Much, much, much... later, REXX was introduced and I rewrote it in REXX, which could do everything, even get into the operating system (IBM OS/390). Then I turned it over to another team to maintain.
      (JCL was a special-purpose language for submitting work (jobs) to the OS to execute; it was never intended to do anything else, although in about 1995 I figured out how to make it send a report to a remote branch by email instead of using a courier.)
      Now, 15 years later, I'm sure IBM systems, with co-processing, can run a million prime number tests in less time than it takes to read this paragraph.

  • @bb5242
    @bb5242 3 years ago +4

    While fun, I'm not sure this kind of benchmarking is scientifically sound. The qualities I need in a programming language don't relate only to raw compiled-code speed; there are so many other details to consider beyond raw speed on a small subset of programming tasks. Also, running the code inside Docker, a virtualized environment, will not be consistent: it's convenient, but again, it's a flaw in the study design that introduces more variables.

  • @CnfuD-Choticstreaming
    @CnfuD-Choticstreaming 2 years ago

    A video on how this effort was organised and on its technical aspects would be great: how you ended up automating everything down to a single make.

  • @richardwallace6323
    @richardwallace6323 1 year ago

    In Dave's run of Ada, he didn't use the pragma suppress(all_checks). The run he did had run-time type checking turned on. Dave, rerun with the pragma and I'll bet you a cup of coffee that Ada will have a better showing.

  • @mrlazda
    @mrlazda 3 years ago +1

    Ada fell from grace in the military relatively long ago, and the last nail in the coffin was the Ariane accident (which would never have happened if Ada had done all the error handling it promised, but some of the checks were silently omitted by compilers). Most Ada programs have been converted to SPARK (Airbus replaced most of its Ada with SPARK, as did most air traffic control systems in Europe that used Ada).
    Most new military programs use C.
    One note: most ICBMs used C from the beginning and never used Ada (the Soviet, and later Russian, military and aerospace programs use mostly C and never used Ada).

    • @tconiam
      @tconiam 2 years ago

      You may want to read up on the actual cause of the Ariane 5 first-launch problem. It wasn't Ada; it was the engineers reusing Ariane 4 software and forcing a 64-bit float into a 16-bit integer. Bad engineering, not the language used. It would have failed just as spectacularly had it been written in any other language.
      The biggest problem Ada adoption has had is a chicken-and-egg problem that never recovered from its growing pains in the '80s. The compilers were new and still working through implementing capabilities that, up until that time, had never been included in a compiler before. There just weren't enough Ada programmers around, so everyone took advantage of the waiver process and used the same languages they had always used. Which, of course, never helped the shortage of experienced programmers.
      Yes, SPARK is the way to make Ada even more safe. SPARK takes the safety-first design philosophy of Ada to the extreme and is definitely meant for safety-critical systems.

    • @mrlazda
      @mrlazda 2 years ago

      @@tconiam It was an Ada problem, and the official investigation pointed to it (you need to read the official investigation document, where most of the blame is pointed at the Ada compiler and specification). Converting a 64-bit float to a 16-bit integer should be a safe operation, which it was not with the Ada compiler they used. Reusing software from a previous version is not a problem and not bad engineering, just as converting a 64-bit float to a 16-bit integer is not an engineering error; it is common practice in real-world control systems (real control actuators have physical limitations). The problem with the Ariane Ada software is that it crashed on a supposedly safe operation (according to the official investigation, the safe conversion was "silently" dropped, whatever "silently" means; in the Ada version they used, the conversion was no longer a safe operation).
      The only engineering mistake on the Ariane team's part was that they didn't do enough testing; if they had, they would have discovered the problem earlier.
      Ada as an idea had good intentions but never achieved its promises; that's why most projects dropped it, and for new systems they mostly moved to C (even the US military moved to it after the software fiascos with the F-22 and F-35; in the latter case they used a mix of Ada and C, and according to the developers the only solution is to rewrite all the Ada code in C).
      By the way, the Motor Industry Software Reliability Association (MISRA) only gives guidelines for using C or C++ for safety-critical software in the automotive industry.
      But you can write safe (or unsafe) software in any language (even in Ada) if you know what you are doing and use a tool (compiler) that follows the standard behaviour of the language specification (which was the problem with Ada in the case of Ariane; had they known about it, they could easily have worked around it, but that would just put Ada in the ranks of all the other unsafe languages).
      Edit: if you do not understand why converting a 64-bit float to a 16-bit integer (in a safe way) is not an error in real-world control systems, take a look at "integral windup" (not directly the same, but it explains the problems with the limitations of real-world actuators, and one solution for it is basically the same as converting a larger-range number to a smaller one - what you called an engineering mistake, but which is actually the right solution to the problem).
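
      Neither Ada nor the actual Ariane code, just a minimal C# sketch of the narrowing conversion this thread is arguing about (an out-of-range 64-bit float forced into a 16-bit integer), once without and once with a range check; the sensorValue name and its value are hypothetical:

          using System;

          class NarrowingDemo
          {
              static void Main()
              {
                  double sensorValue = 65000.0; // hypothetical value, too large for a 16-bit integer

                  // Unchecked: no exception is raised; the result is an
                  // unspecified, wrong value (a silent failure).
                  short silent = unchecked((short)sensorValue);
                  Console.WriteLine($"Unchecked conversion: {silent}");

                  // Checked: the out-of-range conversion raises an exception
                  // instead of producing a wrong value, which is the kind of
                  // guard a range-checking language or compiler can insert.
                  try
                  {
                      short guarded = checked((short)sensorValue);
                      Console.WriteLine($"Checked conversion: {guarded}");
                  }
                  catch (OverflowException)
                  {
                      Console.WriteLine("Checked conversion: value out of range");
                  }
              }
          }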

  • @izzieb
    @izzieb 3 years ago +2

    If you can program in many different languages, does that make you a polyglot?

  • @le9038
    @le9038 2 years ago +1

    One question about this race: did you run the programs at high priority? In the task manager (or system monitor) you can tell the system to give a program a higher priority to make it run faster.
    This is best for video games and programs calculating things as fast as possible.
    Just wondering, though, because that might affect performance.
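
    For reference, process priority can also be raised from code rather than from Task Manager; a minimal C# sketch using the standard .NET process API (whether the drag-race harness does anything like this is a separate question):

        using System;
        using System.Diagnostics;

        class PriorityDemo
        {
            static void Main()
            {
                using var current = Process.GetCurrentProcess();

                Console.WriteLine($"Before: {current.PriorityClass}");

                // Ask the OS scheduler to favour this process. The higher
                // classes may require elevated privileges on some systems.
                current.PriorityClass = ProcessPriorityClass.High;

                Console.WriteLine($"After:  {current.PriorityClass}");

                // ... run the benchmark here ...
            }
        }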

  • @wayneholzer4694
    @wayneholzer4694 2 years ago

    I am by no means an expert in any programming language. I have played a bit with C/C++/C#, Java, and Python, and I am about to dip my toes into SQL. I am trying to find a language to focus on and get really good and fluent with while keeping up with technology. I know someone who has worked with SQL for a long time, and it has caught my curiosity. It is truly amazing what a plethora of programming languages is out there; to be honest, I am surprised there is not just one universal language used across the board by now, but as we know, each serves a purpose. Thanks for another informative video.

  • @GHHodges
    @GHHodges 3 years ago +6

    Helmet on. Seatbelt fastened. Let’s race!

    • @johannesbohm6458
      @johannesbohm6458 3 years ago

      How do you have an 11-hour-old comment on a 2-hour-old video???
      Wtf

    • @GHHodges
      @GHHodges 3 years ago

      @@johannesbohm6458 I am an admin on Dave’s Garage and I often view them before they get posted. Errr…. I mean…magic!

  • @ilektrokioydio
    @ilektrokioydio 1 year ago

    Wow that intro was so cool!

  • @Antiorganizer
    @Antiorganizer 2 years ago

    Many if not most micro-benchmark experiments fail to give VMs the opportunity to fully JIT the code. Too many people don't know that a method often runs much faster the second time it is called.
    So to measure performance, you have to run the core benchmark algorithm more than once, and toss out the results of the first run, or even the first few runs. Only then will you discover the true performance.
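
    A minimal C# sketch of that warm-up pattern; Workload is a hypothetical stand-in for whatever is actually being measured:

        using System;
        using System.Diagnostics;

        static class JitWarmup
        {
            // Hypothetical stand-in for the method under test.
            static long Workload()
            {
                long sum = 0;
                for (int i = 0; i < 10_000_000; i++) sum += i % 7;
                return sum;
            }

            static void Main()
            {
                const int warmupRuns = 3;
                const int measuredRuns = 5;

                // Warm-up: the first call pays for JIT compilation (and, on
                // tiered runtimes, later recompilation), so these timings
                // are discarded rather than reported.
                for (int i = 0; i < warmupRuns; i++) Workload();

                for (int i = 0; i < measuredRuns; i++)
                {
                    var sw = Stopwatch.StartNew();
                    Workload();
                    sw.Stop();
                    Console.WriteLine($"Run {i + 1}: {sw.Elapsed.TotalMilliseconds:F2} ms");
                }
            }
        }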

  • @justsomeperson5110
    @justsomeperson5110 2 years ago +2

    Oh, also, as a side note: Ada. Hmm... Yes. As a former military 3C052, my experience with Ada is that first and foremost, it is a PITA to work in because it is ridiculously redundant. LOL But when failure is NOT an option, that's a good thing. It catches errors humans don't. However, second and lesser known, at least in my experience and that of everyone I knew of on base, Ada is probably the least-used language in the military. ROFL As long as you can prove another language is better in some way, faster, shorter development time, etc. you could get a waiver to use the better language. And, well, those Ada results in the race kind of speak for how often projects had those waivers.

    • @tconiam
      @tconiam 2 years ago

      Former 3C072 here with lots of Ada experience. Ada is a PITA if you don't like the compiler pointing out your bad coding, or if you like taking advantage of loose data types. But as Dave said, once it compiles, it's pretty much guaranteed to do what you asked of it.
      From my recollection, the real problem in the '80s-'90s wasn't the language, but the lack of trained programmers and the cost/maturity of the compilers. I swear we submitted so many bug reports to Alsys, we felt like their QC team. A real chicken-and-egg problem. It didn't help that we were stretching the compiler to its limits with multi-tasking, low-level hardware interfacing, and real-time programming on DOS 3.0 running on an 80286. Thankfully, the few remaining modern Ada compiler vendors have really nailed down their products and produce some really great stuff. If you're still interested, check out AdaCore's Community Edition compiler built on the modern GCC suite.

    • @toby9999
      @toby9999 2 years ago

      Ada was the main language used in one of our programming language courses at university, and I hated it. It felt way too verbose and clumsy.

  • @cliff8675
    @cliff8675 3 years ago +1

    Back in college I was thrown into a languages class, and we picked up an Ada compiler for our last project. I was thrown to the wolves and ended up almost line-for-line translating my Pascal code in about a week. Ahh, the good ol' days. I remember the compiler taking about half an hour to compile the smallish file, and that was if there were no errors. Thanks to Ada, I learned to batch jobs on a VAX rather quickly.

  • @russellanderson2418
    @russellanderson2418 2 years ago

    Started in machine code and assembler in 1979. The mainframe computer used wire-wrap wiring and push-button bootstrapping. I currently code in C#.

  • @kanati
    @kanati 3 years ago +1

    So isn't this not so much a drag race between the languages and more a test of which compiler is better optimized?

    • @SianaGearz
      @SianaGearz 3 years ago

      It's difficult to draw a line between a language and a language implementation. Obviously, every implementation is worse than optimally possible, and none are better. As an example, aliasing guarantees are why some modern languages derived from C can outpace C in microbenchmarks while sharing the same code-generation back end as the corresponding C compiler: they can enable optimisations not otherwise permitted.

  • @zweiwing4435
    @zweiwing4435 1 year ago

    Can you test Bun, Zig, C, Perfect Table hash #, and Fortran, to see which is the fastest?

  • @GRBtutorials
    @GRBtutorials 2 years ago +1

    0:23 But what if C, Rust and assembly are your favorites?

  • @gerthddyn
    @gerthddyn 1 year ago

    That's odd. This implementation of Pascal has features that weren't in the base language when I was in high school. I also don't think it is the way I would have done it. And static declarations are supported. I don't know who created these extensions. Given that I've seen Turbo Pascal turn in results on similar math functions as fast as C, the cruft that was added to this implementation is pretty lackluster.

  • @stevenbliss989
    @stevenbliss989 3 years ago +1

    What Pascal are you running, seriously? ...And NO, NO, NO, you cannot lump Pascal in with Delphi (which, by the way, some years back became its own recognized language, "Delphi"). Delphi is as much like Pascal as Pascal is like Ada! I wish you had not even mentioned Delphi; it has distorted it beyond recognition. ...H may be the C# guy now at M$, but he created Delphi, so do not be surprised if he gives you a few harsh words about this as well! :)
    This brings me to another point. It's not the language, it's the runtime, compiler, etc. Even comparing languages that run native CPU code against P-code, J-code, or .NET is at best not fair to the various languages.

  • @tomooo2637
    @tomooo2637 1 year ago

    Thank you for your time working through this.
    Your tests are primarily bit and integer manipulations, and it would be good if you could run through a floating-point computational analysis, in some cases using libraries (like NumPy in Python). Normally a bespoke algorithm is much faster than built-in generic libraries (Java, I mean you). (Yes, I know you cannot write anything close to NumPy in native Python - that would be insane.)
    My work is mostly scientific: designing ML algorithms from first principles, 3D molecular graphics (i.e. GL plus matrix multiplication), FFT, N-dimensional refinement - both least squares and maximum likelihood - so I generally stick to C for outright performance, though I started with F66/F77 a long time ago. I have written very C-like Java because of working requirements, and at the moment I am stuck, reluctantly, in JavaScript preparing rendering-engine data for GPUs, which is utterly horrible. Even taking class instance instantiations out of the JavaScript produced a doubling of performance (for interactive calculations) when working on 3D collision and boundary analysis for 3D rendering, which is complicated maths. (Yes, I know the GL shader engine does this, but not in the way users wanted.)
    So, could you do a review of computational analysis (complex floating-point calculations) in these languages too? I would really like to hear your views/findings on this.

  • @justinl.7401
    @justinl.7401 3 years ago

    Great presentation and I'm looking forward to the rest of the series :)
    Sidenote: If the mic you're using is the Rode Procaster, the manual says it's supposed to be

  • @gregsb3454
    @gregsb3454 2 years ago

    Cheers Dave. You have caused a major dust problem at my house: when you mentioned all the obtuse programming languages, it somehow moved all the dust covering them in my brain. Great series.

  • @aytviewer2421
    @aytviewer2421 3 years ago +1

    I must be getting old... Starting in late 1981, I have used about 75-80% of those languages at one time or another over the years. Some of them I lived and breathed in; others, I had to take someone else's mess and update it for some reason or another.

  • @zweiwing4435
    @zweiwing4435 1 year ago

    Can you test Bun, Zig, C, and Perfect Table hash # to see which is the fastest?

  • @jmi967
    @jmi967 1 year ago

    They'd be on the slowest end of the list, but someone could go totally esoteric and use Minecraft or Conway's Game of Life.

  • @tednoob
    @tednoob 3 years ago +1

    This content is made for binge watching. People of the future, I envy you.