Killing me, changing the f-string back to .format, ahhhhh noooo :'(
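(For context, the two string-formatting styles being compared, roughly:)

```python
name = "SageMaker"
print("Hello from {}".format(name))  # .format style
print(f"Hello from {name}")          # f-string style
```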
At minute ~17:15 it's mentioned that it's possible to run all the jobs in the experiment in parallel, instead of in a for loop.
Can someone link to a tutorial / docs explaining how to do that?
Thanks!
Hello! This doc has info on running parallel SageMaker jobs: go.aws/3Pg0phd. If this doesn't have what you're looking for, feel free to post your question directly to re:Post for community guidance: go.aws/aws-repost. 📝 ^LG
** Edit: problem solved by using the for-loop and setting wait=False inside estimator.fit(); see the sketch at the end of this comment.
Hello @awssupport, no, this is not what I meant.
If you watch the video at minute 17:15, Emily Webber is talking about the example of for-loop training and says there is a better way of doing it, without a time-consuming for-loop. Neither of your example notebooks seems to tackle this issue.
I will also post this question in the community guidance, but just to be clear -
I want to send multiple training jobs with different parameters at the same time.
This is very similar to the built-in Hyperparameter Tuning Job with max_parallel_jobs > 1, but I would like to do it as a customized experiment (where I can change, for example, the amount of data I'm training on, to see how the data load affects results).
A reference to some relevant docs will be appreciated!
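For anyone else hitting this, here is a minimal sketch of the wait=False approach, assuming a PyTorch estimator like in the video; the script name, role ARN, S3 path, and the data-fraction hyperparameter are placeholders, not the notebook's actual values.

```python
from sagemaker.pytorch import PyTorch

role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # placeholder role ARN

estimators = []
for data_fraction in [0.25, 0.5, 1.0]:  # e.g. vary how much data each job trains on
    estimator = PyTorch(
        entry_point="train.py",            # placeholder training script
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        framework_version="1.13",
        py_version="py39",
        hyperparameters={"data-fraction": data_fraction},  # hypothetical parameter name
    )
    # wait=False returns as soon as the job is submitted, so the loop itself is fast
    # and all three training jobs run concurrently
    estimator.fit({"training": "s3://my-bucket/train"}, wait=False)
    estimators.append(estimator)
```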
Graphs didn't show up?
I do not see which step explicitly specifies the number of hidden layers.
The variable values are defined as part of the loop initialization ... and then used to set up the PyTorch estimator; see the rough sketch below.
Here: th-cam.com/video/zLOMYKZGxK0/w-d-xo.html
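Roughly, the pattern looks like this (a sketch, not the notebook's exact code; the script name, role ARN, and the hidden-layers hyperparameter name are assumptions):

```python
from sagemaker.pytorch import PyTorch

for hidden_layers in [1, 2, 4]:  # the loop initialization defines the value
    estimator = PyTorch(
        entry_point="train.py",                                  # placeholder script
        role="arn:aws:iam::123456789012:role/MySageMakerRole",   # placeholder role ARN
        instance_count=1,
        instance_type="ml.m5.xlarge",
        framework_version="1.13",
        py_version="py39",
        # the loop variable is handed to the estimator here ...
        hyperparameters={"hidden-layers": hidden_layers},
    )
    # ... and train.py receives it as a --hidden-layers command-line argument
    estimator.fit({"training": "s3://my-bucket/train"})
```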
Wow! Amazing!