I think that the naming and default operations for inner_product, accumulate and partial_sum can be justified, since they belong to <numeric>. I don't see why reduce, transform_reduce (or many others) should belong in <numeric> and not <algorithm>.
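For reference, this is roughly how C++17 splits them (just a summary of the headers, not an exhaustive list):
#include <numeric>   // accumulate, inner_product, partial_sum, adjacent_difference,
                     // reduce, transform_reduce, inclusive_scan, exclusive_scan, iota
#include <algorithm> // transform, for_each, sort, copy, ... (most other algorithms)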
It's actually somewhat reassuring to see that even someone doing research at Nvidia can make such mistakes. Hoekstra's talks inspired me to try to write that specific idea, sqrt(Sum((x - mean)^2) / (n - 1)), using the standard algorithms, and I couldn't figure it out without a couple of passes.
In Haskell I got it with something like this (wrapped in a function so it type-checks):
stddev array =
    ( sqrt
    . (\x -> x / fromIntegral (length array - 1))
    . sum
    . map (^ 2)
    . map (subtract mean)
    ) array
  where
    mean = sum array / fromIntegral (length array)
But I wonder how to write it the same way in C++. I suppose the two maps can be fused into one lambda and then folded, but the division would need yet another step, and things start getting ugly, especially since the mean has to be calculated first as well.
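For what it's worth, here is a minimal C++17 sketch of that idea (the name sample_stddev and the choice of double are my own, not anything from the talk): the two maps fuse into the unary op of std::transform_reduce, but the mean still needs its own pass, and the division and sqrt stay outside the fold.
#include <cmath>
#include <functional>
#include <numeric>
#include <vector>

double sample_stddev(const std::vector<double>& xs) {
    const auto n    = static_cast<double>(xs.size());
    // First pass: the mean.
    const auto mean = std::reduce(xs.cbegin(), xs.cend(), 0.0) / n;
    // Second pass: (subtract mean, square) fused into the unary op, then a plus-reduce.
    const auto ss   = std::transform_reduce(
        xs.cbegin(), xs.cend(), 0.0, std::plus<>{},
        [mean](double x) { return (x - mean) * (x - mean); });
    return std::sqrt(ss / (n - 1.0));
}
Whether that counts as "the same way" as the Haskell pipeline is debatable; without ranges, the composition has to be spelled out as separate statements.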
Not the most relevant comment, but at th-cam.com/video/sEvYmb3eKsw/w-d-xo.html#t=1800s,
// Needs <numeric> for reduce/partial_sum, <algorithm> for std::max, <vector>.
long long int solveF(std::vector<int>& v){            // element type assumed to be int
    // Pass 0LL as the init so the reductions accumulate in long long, not int.
    long long int backDrop = std::reduce(v.cbegin(), v.cend(), 0LL);
    std::partial_sum(cbegin(v), cend(v), begin(v),
                     [](auto a, auto b){ return std::max(a, b); });
    return std::reduce(cbegin(v), cend(v), 0LL) - backDrop;
}
looks more appropriate whenever the accumulated sums might overflow the largest int.
And, while writing this, I realize that we don't need to modify the input vector in place. This code flows out almost automatically:
long long int solves(const std::vector<int>& v){      // element type assumed to be int
    long long int waterLevel = 0;
    // The combiner keeps the running maximum and, as a side effect,
    // adds each running maximum to waterLevel.
    std::reduce(v.cbegin(), v.cend(), 0ll,
                [&waterLevel](auto a, auto b){
                    auto c = std::max(a, b);
                    waterLevel += c;
                    return c;
                });
    return waterLevel - std::reduce(cbegin(v), cend(v), 0ll);
}
Is the capture an antipattern? Does it come from the lack of a way to pipe the two reductions (max and plus) together in C++17?
Thanks in advance for answers to those last two questions.
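(For comparison, one capture-free way to express the same single pass in C++17 is to fold over an explicit state with std::accumulate, which guarantees left-to-right evaluation; the name solveA and the pair-based state are just my own sketch, again assuming std::vector<int> input.)
#include <algorithm>
#include <numeric>
#include <utility>
#include <vector>

long long int solveA(const std::vector<int>& v){
    // State: (running maximum so far, sum of all running maxima).
    const auto state = std::accumulate(
        v.cbegin(), v.cend(), std::pair<int, long long>{0, 0LL},
        [](std::pair<int, long long> acc, int x){
            const int runningMax = std::max(acc.first, x);
            return std::pair<int, long long>{runningMax, acc.second + runningMax};
        });
    return state.second - std::reduce(v.cbegin(), v.cend(), 0LL);
}
That keeps the fold itself pure at the cost of spelling out the state; without a ranges-style pipeline in C++17, fusing the max-scan and the plus-reduce into one expression seems to require carrying state one way or another.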
8:45 const west!