Hi Eugene! Thanks for the awesome talk. Will you guys add an example that uses batch_sequences_with_states with sequences of different lengths to tensorflow/models repository? I have been trying to use that method without any luck. Or did I miss some working example somewhere?
I think we should probably just update the ptb_word tutorial to use batch_sequences_with_states. I've opened up an internal bug for doing this, but it may be some time.
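(Until that tutorial lands, here is a plain-Python sketch of the underlying idea: batching variable-length sequences by padding them to a common length while tracking the true lengths, which is what you'd pass as `sequence_length` to the RNN so it can ignore the padding. This is not the `batch_sequences_with_states` API itself, just the concept; `pad_batch` is a made-up helper name.)

```python
def pad_batch(sequences, pad_value=0):
    """Pad variable-length sequences to a common length, and keep the true
    lengths so the RNN can ignore the padded positions (analogous to the
    sequence_length argument of tf.nn.dynamic_rnn)."""
    max_len = max(len(s) for s in sequences)
    lengths = [len(s) for s in sequences]
    padded = [s + [pad_value] * (max_len - len(s)) for s in sequences]
    return padded, lengths

batch = [[1, 2, 3], [4, 5], [6]]
padded, lengths = pad_batch(batch)
print(padded)   # [[1, 2, 3], [4, 5, 0], [6, 0, 0]]
print(lengths)  # [3, 2, 1]
```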
A few questions.
1) You mentioned in your talk that you're going to publish a new tutorial; may I ask when it will be released?
2) Can you make the code in the slides available, by any chance?
Thanks in advance. Nice talk and awesome work.
Thanks! Example code can be found in the unit tests in this directory:
github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/seq2seq/python/kernel_tests/
The new tutorial will depend on TF 1.2, which is being released soonish. We're working hard on getting it ready for release.
Hi, thanks for the talk!
I wonder, can an RNN handle video, i.e. frames of size 224 x 224? Especially if I want to do semantic segmentation, so the output for each frame is a pixel-wise classification of the same size.
Thanks.
At 22:50 he mentions 50 "timesteps", and in the table it says number of units = 32. What is the difference between those two things? I am confused. Can anyone clarify?
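(A plain-Python sketch of the distinction, with no TensorFlow involved: timesteps is how many steps the sequence is unrolled for, i.e. how many inputs are fed in one after another; units is the size of the hidden state vector the cell carries from step to step. The input size of 8 below is an arbitrary illustrative choice.)

```python
num_timesteps = 50  # length of the unrolled sequence (e.g. 50 words fed one at a time)
num_units = 32      # size of the hidden state vector at every step

# One input sequence: 50 timesteps, each a feature vector (size 8, arbitrary).
input_size = 8
sequence = [[0.0] * input_size for _ in range(num_timesteps)]

# The hidden state is a single vector of 32 numbers, updated once per timestep.
state = [0.0] * num_units
for x_t in sequence:
    # A real cell would compute state = f(W_x @ x_t + W_h @ state + b);
    # here we only track the shapes, so the state stays a 32-vector.
    state = [0.0] * num_units

print(len(sequence))  # 50 (timesteps)
print(len(state))     # 32 (units)
```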
@Eugene Brevdo: Hi, do you know where to find AttentionDecoder class or when this class will be added to master?
We found a way to do this using the regular decoder and an RNNCell called AttentionWrapper. See tf.contrib.seq2seq.AttentionWrapper and the several attention mechanism classes there that determine how to attend.
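(For intuition, here is a minimal plain-Python sketch of the kind of computation an attention mechanism performs at each decoder step: score the decoder state against every encoder state, softmax the scores into weights, and form a weighted-sum context vector. This illustrates dot-product (Luong-style) scoring only; it is not the AttentionWrapper implementation, and `attend` is a made-up name.)

```python
import math

def attend(query, memory):
    """Dot-product attention sketch.
    query: decoder state (list of floats); memory: list of encoder states."""
    scores = [sum(q * m for q, m in zip(query, mem)) for mem in memory]
    # Softmax over the scores -> attention weights that sum to 1.
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: weighted sum of the encoder states.
    dim = len(memory[0])
    context = [sum(w * mem[i] for w, mem in zip(weights, memory))
               for i in range(dim)]
    return weights, context

memory = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three encoder timesteps
weights, context = attend([1.0, 0.0], memory)
print(round(sum(weights), 6))  # 1.0
```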
Thanks, I actually did exactly the same, following some code examples on the internet. BTW, a new seq2seq tutorial that also explains how to fully leverage a multi-GPU setup with a fast input pipeline would be great; any plans on doing that?
The unit tests do not contain the training part... any news on the tutorial / sample front?
Amazing talk! It helped me a lot! By the way, where can I get the source code or the slides from this talk?
You can find example code in the form of unit tests here:
github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/seq2seq/python/kernel_tests/
Where can I get the slides? I think the code was shown too quickly.
Can ScheduledEmbeddingTrainingHelper be used with real-valued inputs/outputs for a linear regression problem?
That helper cannot; but thanks for the feedback. I opened an internal bug to add a version that samples from either the dense RNN output (possibly with an intermediate projection) or from the ground truth next input.
Thanks Eugene. Enjoyed the talk :)
Glad you liked it! Watch our github repo today/tomorrow for a PR named "Add a ScheduledOutputTrainingHelper". That should do what you want.
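(For anyone curious what that helper would do conceptually: this is a plain-Python sketch of scheduled sampling for real-valued sequences, where at each step the next decoder input is the model's own previous output with some probability, and the ground-truth value (teacher forcing) otherwise. It is not the helper's implementation; `scheduled_inputs` is a made-up name.)

```python
import random

def scheduled_inputs(ground_truth, model_outputs, sampling_prob, seed=0):
    """Build the decoder's input sequence: at each step t > 0, feed the
    model's previous output with probability sampling_prob, otherwise
    feed the ground-truth value (teacher forcing)."""
    rng = random.Random(seed)
    inputs = [ground_truth[0]]  # the first input is always the ground-truth start
    for t in range(1, len(ground_truth)):
        if rng.random() < sampling_prob:
            inputs.append(model_outputs[t - 1])  # model's own prediction
        else:
            inputs.append(ground_truth[t])       # teacher forcing
    return inputs

truth = [0.0, 1.0, 2.0, 3.0]
preds = [0.1, 1.1, 2.1, 3.1]
print(scheduled_inputs(truth, preds, sampling_prob=0.0))  # pure teacher forcing
print(scheduled_inputs(truth, preds, sampling_prob=1.0))  # always sample
```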
No messing around, thanks again :)
Can anyone explain what decoder_sample_ids is? (30:05)
God damnit, could this guy maybe not talk through his teeth for a second??