Comments •

  • @_rid9
    @_rid9 3 years ago +58

    It's kinda sad that many modern programming challenge platforms don't support APL; it would be fun.

  • @jgd7344
    @jgd7344 3 years ago +65

    Haskell may be elegant, but by God APL is efficient.

    • @NoNameAtAll2
      @NoNameAtAll2 3 years ago +6

      "efficient" as in golfed.
      All the solutions shown outperform the APL and Haskell ones.

    • @SimonClarkstone
      @SimonClarkstone 3 years ago +12

      Even decades ago I would expect the APL interpreter to have tightly optimised routines to do signum over an array and product of an array. Since this is a modern APL interpreter, possibly it can do JITing and loop fusion to combine them into one loop.
      Haskell's usual compiler, GHC, can certainly fuse a left-fold with a map, and is likely to produce similar object code to what a C solution would compile to.

  • @dukereg
    @dukereg 3 years ago +34

    You can do a one liner in most of those languages if you use the same approach. They probably didn't because they assumed that performance characteristics were important. APLs to apples comparisons would be a lot more impressive.

    • @EricMesa
      @EricMesa 3 years ago +2

      One-liner, sure. But not 3 chars

    • @dukereg
      @dukereg 3 years ago +8

      @@EricMesa So? This isn't code golf, it's a language comparison. He was using a different approach, and emphasising line count when looking at all the other solutions. A fair comparison would be either the much longer APL solution that bails out early on 0 and only uses a few words of extra space, or else to compare the "elegant" one line solutions with the same approach in other languages that are also elegant and one line.

    • @felixthehuman
      @felixthehuman 3 years ago

      @@dukereg I've been wondering about this: I wrote a solution that early-exits on zero, but given the domain Connor works in (GPGPUs), I have to wonder whether these other solutions really have better performance characteristics, since the Haskell and APL solutions seem more easily parallelizable.

    • @dukereg
      @dukereg 3 years ago +8

      @@felixthehuman Maybe. Are you sure? For the equivalent Java one-liner you can be certain and the logic doesn't change apart from specifying the stream as parallel.
      I would much rather read ×/× than nums.parallelStream().map(java.lang.Math::signum).reduce(java.lang.Math::multiplyExact).getAsInt() or whatever, but why the conflation of language with approach? Why can't we say "the equivalent Java one-liner is disgusting and 33 times more characters"?
      Now let's say that you have one core allocated, very large arrays with a high likelihood of containing at least some 0s, don't have the memory on average to keep a whole array of signums, and need to satisfy latency and throughput requirements. Suddenly the APL solution and Java solution I gave both fail, and the top solutions on leetcode are almost optimal.
      We can't easily have these discussions because it's a rigged competition for whose language can express the most subjectively beautiful solution, instead of comparing apples to apples in the context of the speed and size realities that software engineering is all about. This is in spite of the fact that the Haskell and APL solutions aren't natural and are already optimised somewhat to deal with the realities of processing! On a blackboard when Iverson invented it, no-one would dispute that ××/ was the correct APL expression of "signum of product of", but those pesky machines with limited word sizes overflowing on large products get in the way.
      At the end of the day, if these languages are actually more elegant, that would show in fair, realistic comparisons.
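For reference, here is a sketch in Python of the early-exit, constant-space approach described above (the function name `arr_sign_early` is made up for this sketch):

```python
def arr_sign_early(nums):
    """Sign of the product: single pass, constant space, early exit on 0."""
    sign = 1
    for n in nums:
        if n == 0:
            return 0      # bail out immediately, like the top leetcode solutions do
        if n < 0:
            sign = -sign  # flip the sign for each negative element
    return sign
```

This never materializes an intermediate array of signums and stops at the first zero, which is the behavior the comment above is contrasting with the elegant one-liners.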

    • @felixthehuman
      @felixthehuman 3 years ago

      @@dukereg I definitely don't know. I just watched an APL conference video where Connor talked about how he was on the NVIDIA RAPIDS team, and it occurred to me that what works best for my dual-core laptop may be totally different from what works best for a system using a GPGPU with lots of cores.

  • @maemss
    @maemss 3 years ago +33

    In Julia, it's a bit shorter than in Haskell:
    signfunc(xs) = prod(sign.(xs))
    Because of the dot shorthand f.(xs) for map(f, xs)

    • @terrydaniel9045
      @terrydaniel9045 1 year ago +2

      sign(prod(xs)) will also work: one character less and no need to broadcast, but both are nice solutions.

    • @vytah
      @vytah 1 year ago

      @@terrydaniel9045 It's better to do signum first to avoid any potential overflows.
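To illustrate the overflow point with fixed-width integers (relevant for Julia's Int64, not for Python's bigints), here is a pure-Python simulation of wrapping 64-bit multiplication; the helper names are made up for this sketch:

```python
I64_MASK = (1 << 64) - 1

def mul_i64(a, b):
    # Multiply with 64-bit two's-complement wraparound, like int64 hardware does.
    r = (a * b) & I64_MASK
    return r - (1 << 64) if r >= (1 << 63) else r

def sign(n):
    return (n > 0) - (n < 0)

xs = [2**62, 2]  # true product is 2**63 > 0

# Product first: 2**62 * 2 wraps around to -2**63, so the reported sign is wrong.
prod_then_sign = sign(mul_i64(xs[0], xs[1]))

# Signum first: the product of signs is always in {-1, 0, 1} and cannot overflow.
sign_then_prod = sign(xs[0]) * sign(xs[1])
```

With wraparound arithmetic, product-then-sign reports -1 for a positive product, while sign-then-product stays correct.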

  • @mitchellszczepanczyk2300
    @mitchellszczepanczyk2300 3 years ago +16

    These are great videos. I learn a lot about Haskell and APL by watching these. Please make more. Thanks!

  • @cramble
    @cramble 2 years ago +3

    because someone else posted a python "oneliner" that only works with import statements not included in their "oneliner", I've made a slightly cursed solution
    def arrSign(arr): from functools import reduce ; return reduce(lambda x,y: x*y, map(lambda x: 0 if x == 0 else x / abs(x), arr), 1)
    Its cursedness comes from the local import in the function, the use of a semicolon, and the fact that `arrSign([1, 5, 0, 2, 1, -3])` returns `-0.0`.
    After reading through a bunch more comments, I think people should remember when showing off python code, Write down your imports... I'm not going to count a python "oneliner" if it has 'hidden' imports.
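One way to lose the cursedness (a hypothetical variant of the function above, with the import written out): an integer signum expression keeps everything in int, so the `-0.0` result disappears:

```python
from functools import reduce

def arr_sign_int(arr):
    # (n > 0) - (n < 0) is an integer signum: no float division, so no -0.0
    return reduce(lambda x, y: x * y, ((n > 0) - (n < 0) for n in arr), 1)
```

The booleans coerce to 0/1, so the expression evaluates to -1, 0, or 1 without any branching syntax.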

    • @cramble
      @cramble 2 years ago +1

      Also, because I think it fits, Nim as well...
      func arrSign(x : seq[int]): int = (var acc = 1; for i in x: (acc *= (if i == 0: i else: (if i > 0: 1 else: -1))); acc)
      Nim is a bit more lax than Python when it comes to newlines and indents

  • @marcrichards4402
    @marcrichards4402 3 years ago +12

    Yes, the APL and Haskell solutions are elegant, but you are evaluating the whole array, whereas if you are iterating through an array and reach a zero you can immediately return a result. For very large datasets this is a significant efficiency saving.

    • @chromosundrift
      @chromosundrift 2 years ago +3

      It's quite clear, though worth spelling out, that the compared solutions were written to be optimal by different metrics: runtime vs. elegant brevity.

    • @AndreiGeorgescu-j9p
      @AndreiGeorgescu-j9p 7 months ago +3

      This hyper obsession with performance is a clear indication of low skill. Your job isn't to write the most hyper performant code, it's to solve problems and have maintainable code. If that specific function ends up being a bottleneck for your specific technical requirements, go ahead and rewrite it instead of trying to hyper-optimize by writing bad code all the time

  • @arisweedler4703
    @arisweedler4703 3 years ago +3

    I simply have never been able to grok the grammar of APL. I’ve learned a bit about it and I do believe I can think in terms of functional programming well enough to express solutions as array based, but I’m just not there in terms of syntax.
    Love the channel. Going to do a deep dive and see if I can learn 2 parse APL.

  • @romaing.1510
    @romaing.1510 3 years ago +5

    In Python, you can do the same thing with numpy, except the functions' names are not one character long:
    def answer(nums) : return np.sign( np.prod( np.array(nums) ) )

    • @Dennis-gg9yv
      @Dennis-gg9yv 1 year ago +1

      now do it point free

    • @julians.2597
      @julians.2597 7 months ago +1

      @@Dennis-gg9yv answer = toolz.compose(np.sign, np.prod, np.array)

  • @lukacordell8186
    @lukacordell8186 3 years ago +7

    In python :
    import numpy as np
    arraySign = lambda nums : np.prod(np.sign(nums))
    :D

    • @sohangchopra6478
      @sohangchopra6478 3 years ago +3

      But the difference is that you're using an external library for Python (NumPy) whereas the Haskell and APL solutions both used only usual built-in functions.

    • @sefirotsama
      @sefirotsama 3 years ago +1

      @@sohangchopra6478 also no implicit function composition, which is the true basis of functional programming.

  • @RelatedNameHere
    @RelatedNameHere 2 years ago +1

    MATLAB's one-line solution:
    signfunc = @(x) sign(prod(x));
    Lambda calculus and the functional approach to programming is super cool, and MATLAB has been a surprisingly good way to get used to it! Perks of using a math-based language to teach yourself a type of calc, I guess 😅

  • @nickhyland9497
    @nickhyland9497 3 years ago +2

    Nice videos. I would like to see a video, though, where you take on one of the medium or hard coding platform problems, where time complexity is important.

  • @MCLooyverse
    @MCLooyverse 2 years ago

    Part of the question is whether you're going for simplicity/shortness of code (not necessarily the same), or whether you're going for the most efficient code. In JS, you could do something like
    const prodSign = a => a.reduce((acc, x) => acc * Math.sign(x), 1);
    But that's the naive approach.
    The way I'd approach the problem in C++ by default is to take as many shortcuts as possible.
    int prodSign(const std::vector<int>& a)
    {
        int acc = 0;
        for (auto e : a)
        {
            if (e == 0) return 0;
            else if (e < 0) ++acc;
        }
        return acc & 1 ? -1 : 1;
    }
    And if you still want to optimize for fewer lines (without it feeling very weird; C++ can all go on one line, which makes it a somewhat strange measure):
    int prodSign(const std::vector<int>& a) {
        int s = 0;
        for (auto e : a)
            if (e == 0) return 0;
            else if (e < 0) ++s;
        return s & 1 ? -1 : 1;
    }

  • @mahmoodal-imam2892
    @mahmoodal-imam2892 3 years ago +4

    Fascinating! I've got a question: how fast is APL compared with C?

    • @mlliarm
      @mlliarm 2 years ago +2

      Well I wouldn't compare an interpreted language with a compiled one. Maybe a more meaningful question would be, how fast is APL compared to python, julia and R.

  • @jsbarrett88
    @jsbarrett88 3 years ago +5

    It seems a lot of the simplicity gained is
    from algorithms that come baked in the languages
    and a more functional-style approach
    If you had the signum and product algorithms already,
    you could have Javascript (and I imagine other languages) also be a one liner
    In Javascript it could be:
    const arraySign = x => x.map(signum).reduce(product)
    (you could also make those algorithm names into unicode characters
    ... if you wanted to use symbols that looked similar to APL,
    not saying you should just that you could haha)
    Still not as beautiful/elegant as Haskell/APL ...
    but the difference is less striking/impressive
    than the comparison against the 10-15 line examples using for loop, if statements, and internal state
    --------
    Still love the video, and love being inspired by how other languages solve and approach problems
    Makes my day-job code (Javascript) constantly improve
    Thanks for what you do man
    Enjoy your live-streams as well (when I'm able to join)

    • @the_cheese_cultist
      @the_cheese_cultist 3 years ago

      in C# it would be
      int ArraySign(int[] input) => Math.Sign(input.Aggregate(1, (a,n) => a*n));

    • @douglasgabriel99
      @douglasgabriel99 3 years ago +1

      In JS it's possible to do:
      const arraySign = x => x.map(Math.sign).reduce(Math.imul, 1)

    • @ignaciomartinoli3881
      @ignaciomartinoli3881 3 years ago +1

      The fact that they are built-in functions means that they are probably implemented in optimized C. So even if I could write a JS helper function, it would still be much slower. It's the same reason why in Python for loops are much faster than while loops.

  • @sefirotsama
    @sefirotsama 3 years ago +1

    I find the Haskell and APL solutions equivalent: 3 symbols for APL (with composition implicit) and 3 symbols (+1 for composition) in Haskell. Also, signum reduced the code... basically the batteries included solved the problem. I'm still sold on them.

    • @ignaciomartinoli3881
      @ignaciomartinoli3881 3 years ago

      The good thing about the batteries included in the language is that they are probably implemented in optimized C.

  • @chromosundrift
    @chromosundrift 2 years ago +1

    Gotta love Haskell (and APL) but for the record, here's the same technique in Scala and Java:
    Scala lambda using "_" as the implied arg:
    val arraySign: Array[Int] => Int = _.map(_.sign).product
    a Java method (would be nicer if the param was List), reducing with the * operator in a lambda:
    int arraySign(int[] nums) {
        return Arrays.stream(nums).map(Integer::signum).reduce(1, (a, x) -> a * x);
    }
    You can use another method ref instead of the lambda in reduce if you feel it's less noisy:
    int arraySign2(int[] nums) {
        return Arrays.stream(nums).map(Integer::signum).reduce(1, Math::multiplyExact);
    }
    Others have shown similar solutions for python, julia, javascript etc. The top voted discussion comments on leetcode seem to favour runtime performance and not value brevity as much as viewers of this channel.

  • @EricMesa
    @EricMesa 3 years ago +1

    I love this series! These languages are so beautiful

  • @Baptistetriple0
    @Baptistetriple0 2 years ago +1

    Most programming languages can do something similar; it's just that those had performance as a target, so they wanted to check for 0 and return early. In Rust you could do `nums.into_iter().map(i32::signum).product()`, which is exactly the same thing.

    • @0LoneTech
      @0LoneTech 11 months ago

      Okay, if we assume it's all about optimizing for that corner case (at the expense of all non-trivial short cases):
      prodsign = foldr1 prod . map signum
        where
          prod 0 _ = 0
          prod a b = a * b
      And I can verify this exits because of the shortcut by applying it to an infinite list.
      The who's who of imperative solutions all spent way more work on reimplementing foldl for an array than on the shortcut.
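The same "verify it exits by feeding it an infinite list" check can be sketched in Python with a lazy generator (the name `prod_sign` is made up for this sketch):

```python
from itertools import count

def prod_sign(xs):
    # Map elements to their signs lazily; stop consuming the stream at the first 0.
    acc = 1
    for s in ((n > 0) - (n < 0) for n in xs):
        if s == 0:
            return 0  # short-circuit: the rest of the stream is never touched
        acc *= s
    return acc

# count(5, -1) yields 5, 4, 3, 2, 1, 0, -1, ... forever;
# prod_sign still terminates because it stops at the 0.
```

Like the lazy foldr above, the shortcut is observable precisely because an infinite input still produces an answer.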

  • @JellyMyst
    @JellyMyst 3 years ago +4

    Possibly a stupid question, but why is a low line count good? Looking at those Java solutions at the start, I might be able to shrink them down to four lines, or even less, but at the cost of readability. Not necessarily because of complicated logic, but because I'd need to use mildly obscure symbols.

    • @batlin
      @batlin 1 year ago

      I find the 3 character APL solution quite readable and less error prone than 15 lines of Java.

    • @JellyMyst
      @JellyMyst 1 year ago

      @@batlin I find it all but incomprehensible, if you want my honest opinion.

    • @batlin
      @batlin 1 year ago

      @@JellyMyst sure; it does require some effort to learn what the symbols mean and how to combine them, and not everybody wants to spend time on that.
      Personally I'm interested enough to try that, so I can see if larger programs are still easy to build and reason about.
      I've been working with overly verbose Java (and friends) codebases for over 15 years now and want to investigate other ways of doing things.

    • @JellyMyst
      @JellyMyst 1 year ago

      @@batlin That's perfectly fine, but I don't think it's necessarily more readable with fewer lines.
      Take mathematics. Using substitution and niche notation you can condense any equation, but that will not make it easier to understand. The substitution, if done excessively or clumsily, can even make it harder to read. If you substitute too much, you'll end up with an equation that just looks like a=a. If you substitute clumsily, then the structure of your equation gets lost, and that structure can be helpful in understanding.
      Both of these things can be analogous to code. If there's a function that does nought but call another function, it does nothing to help you understand the structure of the program. And if there are functions that execute some seemingly arbitrary sets of instructions, that is also at best unhelpful (and also probably called only once so it should be in-lined anyway, but that's another discussion).
      To be clear, APL may be fantastic once you learn it. I wouldn't know, since I haven't. I just don't think the philosophy that fewer lines = better code holds true.

    • @batlin
      @batlin 1 year ago +1

      @@JellyMyst well said. I think we're on the same page there: some verbosity is necessary, but some isn't. I suspect the concise nature of APL might make it easier to see at a glance the intent and behaviour of a program, but I might be wrong.

  • @samuelemorreale7510
    @samuelemorreale7510 3 years ago +6

    In python you could do "math.prod(map(numpy.sign, x))" if you want to solve the problem like you did in the video

    • @reisaki18
      @reisaki18 2 years ago

      he used primitives/built-ins, that's why using numpy is cheating 😂😂

    • @samuelemorreale7510
      @samuelemorreale7510 2 years ago

      @@reisaki18 you could use lambdas and math.copysign

    • @0LoneTech
      @0LoneTech 11 months ago

      Numpy ufuncs automatically map, so np.prod(np.sign(xs)) works just as well.

  • @felixthehuman
    @felixthehuman 3 years ago +6

    Do Haskell or APL have a way you can do early exit on zero in the reductions?

    • @BigBeesNase
      @BigBeesNase 3 years ago +1

      There is a way in Haskell but not sure about APL.

    • @BigBeesNase
      @BigBeesNase 3 years ago +2

      There is an alternative product implementation that does exactly that while maintaining the readability of the arraySign function.
      stackoverflow.com/questions/55170949/fold-thats-both-constant-space-and-short-circuiting

    • @felixthehuman
      @felixthehuman 3 years ago

      @@BigBeesNase Thanks! I paused at the problem statement and wrote a scheme solution that used fold wrapped in call/cc so I could jump out on zero (10 lines, I know) and wondered if there was a way to mod these solutions to do something similar.

    • @BigBeesNase
      @BigBeesNase 3 years ago

      @@felixthehuman can you share a gist?

    • @felixthehuman
      @felixthehuman 3 years ago

      @@BigBeesNase match is a pattern matcher.

  • @Russtopia
    @Russtopia 3 years ago +1

    Haha, @5:32 startled me watching quietly here

  • @japedr
    @japedr 3 years ago +4

    Possibly the fastest solution is to XOR all the inputs (using scan/fold/whatever) and return the sign of the result (of course, assuming two's complement), I don't know if there is UB for negative numbers in C or C++, but any sane implementation should do the right thing; maybe it can be done with memcpy or bit_cast into an unsigned integer. Otherwise an assembly implementation should not be too hard.
    This is cache- and SIMD-friendly, and there are also no branches to mispredict and no risk of overflow (well, there wasn't one if sign is applied before prod anyway...).
    The only problem is with languages that use bigints by default, I know python probably does the right thing in this case but I don't know about Haskell.
    Edit: oops, forgot about the zero... well, that should be easy too: just override the result with a zero if any input is zero. Now that I think about it, you can do it in the same scan by carrying a second accumulator that records whether any element is zero (a plain bitwise AND of the values isn't enough: 1 & 2 == 0 with no zero input).
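A sketch of the sign-bit idea in Python (assuming two's-complement semantics, which Python's `^` on negative ints follows; the function name is made up): XOR accumulates the parity of the sign bits, and a separate flag handles zeros:

```python
def array_sign_xor(nums):
    acc = 0           # XOR of all elements; it is negative iff the count of negatives is odd
    any_zero = False  # the zero check rides along in the same scan
    for n in nums:
        acc ^= n
        any_zero |= (n == 0)
    if any_zero:
        return 0
    return -1 if acc < 0 else 1
```

Both accumulators are branch-free updates, so the loop body keeps the cache- and SIMD-friendly shape described above.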

  • @amj864
    @amj864 3 years ago

    Cool video. What do you think of K?

  • @paulzupan3732
    @paulzupan3732 1 year ago

    Couldn't you avoid the map call by just doing
    > signum $ product x

  • @steveAllen0112
    @steveAllen0112 2 years ago

    The result is the same, and probably more efficient. But notice that the problem given is "Sign of the Product", not "Product of the Signs".
    Taken exactly, APL would be ××/ as opposed to ×/×.

    • @andreapiseri8521
      @andreapiseri8521 1 year ago +2

      If the result is the same, then expressing a problem in a way that makes it more efficient is a good habit, and APL makes this very simple by making the solutions just permutations of each other.
      It's neat that it mirrors tons of theorems in math that are all about knowing when it is valid to permute operators.

  • @themilkman8034
    @themilkman8034 2 years ago

    Python one line solution:
    def arrSign(arr: list[int]) -> int: return functools.reduce(operator.mul, map(lambda x: 1 if x > 0 else -1 if x < 0 else 0, arr))

  • @cramble
    @cramble 2 years ago

    A flattened two-pass version of the solution seen at @2:03 as just a function and not in a class
    def arrSign(arr): return 0 if 0 in arr else 1 - (len([i for i in arr if i < 0]) % 2) * 2
    in Haskell that'd be
    arrSign arr = if elem 0 arr then 0 else ((1-) . (*2) . (`mod` 2) . length . filter (< 0)) arr

  • @irrelevantgaymer6195
    @irrelevantgaymer6195 11 months ago

    I feel like the line count comparison is a little disingenuous, because in Haskell you're using functions which take lines of code to define. When you take that into consideration, the gap is a lot smaller. I don't know enough about APL to say anything about each of the functions you used: I don't know whether each function is built into the language or defined in the language.

  • @darkenblade986
    @darkenblade986 2 years ago +1

    Maybe I'm missing something, but I don't get it. You are replacing function names with characters and utilizing a very dense standard lib that gives you a func like signum.
    imagine signum and a list prod existed in python the solution would look like this:
    math.product( map(signum(),input_arr) ) # a one liner
    now lets replace with symbols with M denoting map, * denoting product, ` denoting signum and x being the input array
    *M`x
    yay 3 chars!?
    am i missing something ?

    • @FaranAiki
      @FaranAiki 2 years ago +1

      It should be `signum`, not `signum()`. The *M`x is invalid syntax (point-free style in Python); look below.
      There are major differences: can you do point-free style in Python if you want to generalize? APL can, Python cannot, and in this video point-free style is needed. In Haskell you can do this, maybe, but not in Python.

    • @darkenblade986
      @darkenblade986 2 years ago

      @@FaranAiki I know Python does not have point-free style. That was not the point of the comment.

    • @JacksonBenete
      @JacksonBenete 2 years ago +1

      The way he records the video is just a way for him and us to have fun, and to potentially attract new people to APL.
      He doesn't really make the point in any of the videos.
      The point isn't about solving problems in three characters.
      The point is about using the APL notation as a tool of thought, and also array-oriented programming as a paradigm for creating new mental models when you're thinking about a problem and designing a solution.
      There is something special about using a good notation, which is what the APL "symbols" are.
      Have you tried to reason about an organic reaction mechanism or organic molecules without using the established organic chemistry notation?
      Have you tried to think about and express wave functions without using wave function notation?
      What about reasoning on approximation problems and areas without derivative and integral notation? What about gradients?
      Notations are designed to give you tools for thinking; the point of APL is how the notation helps with that, once you learn it, and also how you approach the design of the solution differently.
      You can balance a chemistry equation holistically and by brute force, which is what is taught in schools, or you can turn the reactions into linear equations and solve the linear system.
      You can solve a lot of problems by reducing them with linear algebra; you can turn a graph into a matrix and solve it with linear algebra instead of solving it as a graph.
      This is known as reduction in complexity theory.
      APL is about reducing problems to array and matrix mental models: applying bitmasks, filtering, counting, reducing...
      The point is not really solving problems in three characters, though the concise solutions are a side effect of the notation.

  • @001victorOLK
    @001victorOLK 3 years ago

    One-line JavaScript solution:
    const arraySign = a => a.includes(0) ? 0 : a.filter(n => n < 0).length % 2 == 0 ? 1 : -1

    • @WilcoVerhoef
      @WilcoVerhoef 2 years ago

      This code walks 2 times over the array, and builds another one from scratch. That's a lot of unnecessary work and memory

  • @oceanmoist8553
    @oceanmoist8553 1 year ago

    Java can be like "Arrays.stream(x).map(Integer::signum).reduce((x,y) -> x*y)". Java has no syntactic sugar for built-in (primitive) functions. This solution isn't that bad.

  • @hdbrot
    @hdbrot 11 months ago

    It's a bit of a cheat, though, if you don't implement the product and sign functions. If you do that, the answer in Haskell would be something like
    arraySign [] = 1
    arraySign (x : xs)
      | x > 0 = arraySign xs
      | x < 0 = negate (arraySign xs)
      | otherwise = 0

  • @perigord6281
    @perigord6281 2 years ago

    Could've sworn ghc would have an optimization when multiplying by 0 when doing multiply folds.

  • @silentsimon1234
    @silentsimon1234 3 years ago

    you're really cool dude, thank you

  • @NikolajLepka
    @NikolajLepka 2 years ago

    I'm honestly surprised nobody came up with this solution:
    const solve = (arr) => arr.map(Math.sign).reduce((x, y) => x * y)
    I'm unsure if there's a way to run the reduce with just a Number.times or something to get rid of the lambda, but even if you can't, it's still a far more terse solution than anything you found.

    • @OnFireByte
      @OnFireByte 2 years ago +1

      1. Because the procedural way is faster than functional style.
      2. Off-topic, but I thought like you, so I did some tests and just found out that const solve = (x) => Math.sign(x.reduce((a, b) => a * b)) is actually faster than the code that you wrote, for some reason, even on big arrays (20k+); maybe JS is better at handling big numbers than big loops.

    • @WilcoVerhoef
      @WilcoVerhoef 2 years ago

      It's because each float (or int) multiplication takes the same amount of time. So it's faster to do just one signum call at the end.
      However, you might end up overflowing if you multiply first like this.

    • @NikolajLepka
      @NikolajLepka 2 years ago

      @@OnFireByte JS has some pretty intense optimisations. Even array splicing is faster than pushing or concatenating.

  • @metinersinarcan92
    @metinersinarcan92 3 years ago +1

    In python you can also do this:
    arraySign = lambda x: functools.reduce(operator.mul, map(lambda n: math.copysign(1, n), x))

    • @FaranAiki
      @FaranAiki 2 years ago +1

      In C, C++, JavaScript, Java, Assembly, you can do that too! But, what is the point? Being pedantic?

    • @metinersinarcan92
      @metinersinarcan92 2 years ago

      @@FaranAiki It is just a fun puzzle for me, writing the shortest or fastest or ...est piece of code. I am not a software engineer. Obviously, this would be ridiculous to use in production code.

    • @FaranAiki
      @FaranAiki 2 years ago +1

      @@metinersinarcan92
      It is not per se ridiculous to use in production. But what I mean is that this video shows how powerful a functional programming language can be, and Python is not one of them.

  • @kirillholt2329
    @kirillholt2329 3 years ago

    haha dope series man

  • @Caareystore153
    @Caareystore153 9 months ago

    It's a shame that leetcode doesn't support Haskell and APL.

  • @aly-bocarcisse613
    @aly-bocarcisse613 3 years ago

    🤯🤯🤯👏🏿

  • @monomere
    @monomere 2 years ago

    The JavaScript solution is kinda bad though

  • @rahulprasad2318
    @rahulprasad2318 3 years ago

    Nice

  • @mishikookropiridze
    @mishikookropiridze 3 years ago

    Same solution in Python:
    reduce(
        lambda x, y: x * y,
        map(sign, x)
    )

    • @FaranAiki
      @FaranAiki 2 years ago +1

      "Same solution in Python."
      Except it isn't. The algorithm is different and it throws a bunch of errors. Why two variables, x and y? No one needs that.
      NameError: name 'sign' is not defined.
      NameError: name 'reduce' is not defined.

    • @mishikookropiridze
      @mishikookropiridze 2 years ago

      @@FaranAiki You have to import reduce and sign: `from functools import reduce`, `from numpy import sign`.
      reduce takes a function of two arguments and an iterable object, and `reduces` the iterable based on the function provided [i.e. reduce(lambda x, y: x*y, [1, 2, 3]) = (1*2)*3 = 6].

    • @FaranAiki
      @FaranAiki 2 years ago +1

      @@mishikookropiridze
      Then, it is not, "Same solution in Python."

    • @mishikookropiridze
      @mishikookropiridze 2 years ago

      @@FaranAiki I won't argue the semantics of what constitutes the same solution; conceptually they are the same.

    • @FaranAiki
      @FaranAiki 2 years ago +1

      @@mishikookropiridze
      Conceptually, they are different. Python cannot do the point-free style thing. Haskell and APL do not even need a lambda. Just look at Python's bytecode; you will get something different from APL (Python's bytecode is larger than APL's).

  • @theMuritz
    @theMuritz 2 years ago

    Interesting

  • @juhanilepisto7361
    @juhanilepisto7361 3 years ago

    Crystal:
    x.product 0

  • @VincentZalzal
    @VincentZalzal 3 years ago +1

    Seems I'll be the lone defender of Matlab/Octave on your videos, Conor ;) I'd say Matlab is actually a lot like APL, in the sense it is an array-oriented language, but with words instead of symbols.
    prod(sign(x))
    And I guess we could solve this in a single (long) line of C++ with transform_reduce and a lambda for the missing signum function.

    • @japedr
      @japedr 3 years ago +3

      The same code works in Julia. It tries to mimic MATLAB in its good parts (array syntax), but it is free and not a pain to work with for anything nontrivial.
      Sorry, I have a strong opinion against MATLAB after having used it for almost 10 years... most code I've read in it is just terrible and I feel that the language encourages that.

    • @VincentZalzal
      @VincentZalzal 3 years ago +1

      @@japedr I can understand that and I also agree. MATLAB is good at what it was designed for, which is prototyping math-oriented code with visualization. As soon as people start writing full applications in MATLAB, then indeed, it usually becomes a mess quite quickly. I think MATLAB got a few things right, like math syntax, but modularity is terrible.
      I am actually curious about trying Julia, I've heard good things about it.

  • @maraschwartz6731
    @maraschwartz6731 3 years ago

    rip can't think of a pretty python 1-liner

    • @maraschwartz6731
      @maraschwartz6731 3 years ago

      best I can do is using functools
      from functools import reduce (reduce is foldl)
      solve = lambda x: (lambda x: 1 if x > 0 else -1 if x < 0 else 0)(reduce(lambda x,y: x*y, x, 1))
      solve([1,2,3,4]) -> 1

    • @maraschwartz6731
      @maraschwartz6731 3 years ago

      slightly better:
      solve = lambda x: reduce(lambda x,y: x*y, [1 if e > 0 else -1 if e < 0 else 0 for e in x], 1)

    • @FaranAiki
      @FaranAiki 2 years ago +1

      @@maraschwartz6731
      Use `sign` from NumPy, since Python does not have that builtin. Using the if-statements like that is nasty.

  • @MrAbrazildo
    @MrAbrazildo 1 year ago

    Even assembly can solve anything in 1 line, if the solution was already made by someone else. So, I don't admire this. What C++ can do is be compact while developing the algorithm (the zero case is pointless):
    const auto is_negative = [](auto s, auto f) { auto r = *s; while (s++ < f) r *= *s; return r < 0; };

  • @mbgdemon
    @mbgdemon 2 years ago

    Seriously, what are these people writing leetcode solutions smoking? Why does everything need to be a for loop? It's not like a python one-liner is actually hard to read if you know the language

  • @davidmoore3784
    @davidmoore3784 2 years ago

    What happens in 10 years when apl is forgotten and some poor schmuck has to read and interpret your code?

    • @mlliarm
      @mlliarm 2 years ago +4

      The language has existed for over 50 years already and still has an active community. I don't think it's going to go extinct.
      That being said, the answer to your question is pretty much the same as with any foreign natural language, or any mathematical field. You shouldn't expect to understand what is written in that language if you don't speak the language. So, I suppose that poor schmuck would have to learn to read APL first in order to understand how the code works.
      As in mathematics, there is no royal road in APL. You have to take the long and hard way.

    • @chromosundrift
      @chromosundrift 2 years ago

      @@mlliarm absolutely. It can be quite surprising to discover the age of some languages. Part of what makes APL so surprising and amazing is its relative age. Is it just me or does it feel modern?

  • @13thk
    @13thk 2 years ago

    Rust solution:
    fn solve(x: &[i32]) -> i32 {
        x.iter().copied().map(i32::signum).product()
    }

  • @realemolga6306
    @realemolga6306 2 years ago +1

    My Haskell solution is even shorter:
    let arraySign = signum . product