So interesting to hear about the CDT system!
As a computer scientist I get this all the time: "Just show me how to solve this". Unfortunately that makes little to no sense unless I also know what _specifically_ you want solved. Most of us literally spend the first decade of our career (i.e. _after_ we've spent 4-7 years studying this) trying to figure it out, and then you either give up and move on to management, decide this sucks and dig your own rabbit hole to specialize in a single task, or you're among the few unicorns who get it and become an "architect".
I get your point - but in the case of this CDT event, most of us do not need to understand exactly how git works - we need to know how to use it for our own work and for working collaboratively. We also needed more time on how to actually speed up our code, rather than spending almost an hour on the importance of timing it. We didn't learn the useful things. If we'd also learned the useful things I'd have less of an issue here.
Trouble with git is that it isn't really a versioning system. It's rather a collection of related hacks, scripts and tools that try to mimic Linus Torvalds' very strong opinions on how to make software, and entirely by accident this cloud of runnables happens to be able to do version control. As it was never an explicit goal that they be usable, safe, consistent or developer friendly (and Linus is definitely not!), there's simply a lot of stuff you need to know to avoid formatting your machine, killing your server, deleting all your research data and ordering a Hawaiian pizza entirely by accident :P
When it comes to performance, the most important part is knowing what to look for (and testing is probably the most important bit there - hence why they spent time on that), and even that is really hard to condense into just a few lectures (we had several courses just for this - and that was just surface-level stuff). How to then solve it is, as I mentioned, a career-sized problem. In general: keep loops tight, keep data localized, and don't "jump around" (callbacks and domain switches have a higher cost) - but the algorithm you use will almost always be the key to solving your problem fast.
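To make the "algorithm beats micro-tweaks" point concrete, here's a minimal sketch (my own toy example, not from the course) using Python's standard `timeit` module: the exact same membership question answered by a linear list scan versus a hash-set lookup. No amount of loop polishing closes a gap like that - only the data-structure choice does.

```python
import timeit

# Same question ("is 99_999 in my data?"), two data structures.
data_list = list(range(100_000))
data_set = set(data_list)

def scan_list():
    return 99_999 in data_list   # O(n): walks the list element by element

def probe_set():
    return 99_999 in data_set    # O(1) average: single hash probe

t_list = timeit.timeit(scan_list, number=100)
t_set = timeit.timeit(probe_set, number=100)
print(f"list scan: {t_list:.4f}s   set probe: {t_set:.6f}s")
```

On any machine the set probe comes out orders of magnitude faster - which is exactly why timing your code first, before tweaking it, is worth the lecture hour.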
So the best advice I can give is: learn to test well, then try out a lot of different changes. The list of techniques that might help is endless (literally - we only have vague ideas about how much any given problem can be optimized), the size and structure of your specific problem have big implications for what works well and what doesn't, and data organization and code flow, combined with the CPU/GPU/FPGA/ASIC architecture you run on, have a huge impact as well.
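The "test well, then try lots of changes" loop can itself be sketched in a few lines (again a toy example of mine, assuming stdlib `timeit`): write several candidate implementations of the same task, benchmark them all under identical conditions, and keep the winner - rather than guessing which variant "should" be faster.

```python
import timeit

# Three candidate implementations of the same task: squares of 0..n-1.
def squares_loop(n=10_000):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def squares_comprehension(n=10_000):
    return [i * i for i in range(n)]

def squares_map(n=10_000):
    return list(map(lambda i: i * i, range(n)))

candidates = {
    "loop": squares_loop,
    "comprehension": squares_comprehension,
    "map": squares_map,
}

# Benchmark every candidate the same way; let the numbers decide.
for name, fn in candidates.items():
    t = timeit.timeit(fn, number=200)
    print(f"{name:14s} {t:.4f}s")
```

Which variant wins can shift with interpreter version, problem size and hardware - which is the whole point: measure on *your* problem, on *your* machine.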