My idea would be to split the work into a fixed number of chunks, say 1024, then spawn as many threads as I have processors, or maybe one or two more in case some thread gets stuck on I/O for a while, so the extra threads can run in the meantime. Each thread would then repeatedly take one work chunk from a shared queue until the queue is empty. This is more work for the programmer, but I believe the CPU utilization will be more even, for example when the work items are parts of an image that needs to be processed in some way (ray casting), or when converting a video file. If some part of the image is a solid color, or some part of the video is still, the speedup would still be (close to) linear.
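A minimal C++17 sketch of that approach (process_chunk is a hypothetical placeholder, not something from the video): the "shared queue" is just an atomic chunk counter, and each thread keeps fetching the next index until the chunks run out.

#include <atomic>
#include <thread>
#include <vector>

// hypothetical per-chunk work, e.g. ray-cast one band of the image
void process_chunk(int chunk) {}

int main() {
    constexpr int kChunks = 1024;                  // fixed number of chunks
    const unsigned nThreads = std::thread::hardware_concurrency() + 1; // +1 spare for I/O stalls

    std::atomic<int> next{0};                      // the shared "queue" of chunk indices
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t) {
        pool.emplace_back([&] {
            for (int c; (c = next.fetch_add(1)) < kChunks; )
                process_chunk(c);                  // take the next chunk until none are left
        });
    }
    for (auto& th : pool)
        th.join();
}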
parallel_for works along similar lines, but it does not pick a fixed chunk count; it uses the actual number of iterations, keeping the CPUs busy by handing the next item to whichever CPU is idle.
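For comparison, a minimal usage sketch, assuming the parallel_for in question is the PPL one from <ppl.h> (MSVC/Windows); tbb::parallel_for has essentially the same shape. process_pixel_row is again a hypothetical placeholder.

#include <ppl.h>

void process_pixel_row(int row);   // hypothetical per-row work

void render(int height) {
    // one logical work item per iteration; the runtime hands the next index
    // to whichever worker goes idle, so uneven rows don't leave CPUs waiting
    concurrency::parallel_for(0, height, [](int row) {
        process_pixel_row(row);
    });
}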
high quality content
FAX me and all my homie fuck with PAVEL, @LOCALHOST
Can you use modular arithmetic for getting chunks?
What do you mean by "modular arithmetic"?
@zodiacon th-cam.com/video/lJ3CD9M3nEQ/w-d-xo.html&pp=ygUdemFjaCBzdGFydCBtb2R1bGFyIGFyaXRobWV0aWM%3D
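If the question meant a static round-robin split, a hypothetical sketch (my guess, not from the video) would be: thread t processes every chunk whose index satisfies index % threadCount == t, so no shared queue is needed, at the cost of losing the dynamic balancing described above.

#include <thread>
#include <vector>

void process_chunk(int chunk);     // hypothetical per-chunk work

void run(int chunkCount) {
    const unsigned T = std::thread::hardware_concurrency();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < T; ++t) {
        pool.emplace_back([=] {
            for (int c = (int)t; c < chunkCount; c += (int)T)  // indices congruent to t (mod T)
                process_chunk(c);
        });
    }
    for (auto& th : pool)
        th.join();
}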