This is the best free course on AI I've ever seen
This guy loves his coffee.
Haha! I couldn't get it off him! :)
Both of you were great, thanks a lot :-)
Security coffee
I kept expecting it to be a part of the presentation
I was noticing this for long.
Finally there it is .. the most awaited video
Thanks!
Laurence Moroney, TensorFlow is awesome, Laurence.
I am from Dublin, I think you are too, I can pick up the accent mixed in there.
I was wondering if you could do a TensorFlow video on career paths with TensorFlow, or something along those lines.
Brian Kenny Thanks Brian. 🙂 I'm from India. It would be nice to work together.
Hey Brian! Close. I grew up between Drogheda and Cardiff, Wales. :)
Good idea about career videos. Let me look into it! :)
This is a great introduction to TensorFlow and Keras. Thank you everyone, and Josh... keep up the great work!
Wow, this is gold. I love Google and how they enable those of us with less of a CS background to go in and get stuff done (or at least have fun trying)! Thank you for this talk. Btw, where do the cool ppl in ML/AI hang out? Would love a list of the ecosystem.
Fate Riddle stackoverflow datascience group
This video opened up a wide range of information for me. Thanks a lot.
The fun started when Laurence came in; I can really connect with what he shared about devs transitioning to ML.
Awesome intro video. PyCharm is awesome; I'd been looking for something like that since coming from a VS background.
Haha! Thanks! :) Glad you enjoyed!
Excellent presentation, and the concepts were covered well through hands-on code.
I tried to test the neural network in 10 lines, but the code has the following error, ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (60000, 28, 28)
I think that some code is missing.
Please send me the right code.
It is not just pip install tensorflow for python....
Not if you are using the GPU version
Yep, when using your own dev environment, there's a different install. And GPU needs NVIDIA stuff installed too. We have a couple of videos and blog posts about that, hope they help :)
Laurence Moroney I have definitely got it done, but I work on a project with people that are just starting coding and sometimes get tripped up when an error happens. Especially matching versions of GPU installs to TensorFlow installs.
Agreed, it can be really tricky. If you are learning, I would recommend just going non-GPU with pip install tensorflow to avoid any gotchas with CUDA/cuDNN drivers etc. As you get more experienced with developing in it, then it's worth looking into GPU-based training.
Use Linux/Ubuntu and Anaconda and all you need is to type this in terminal: conda install -c anaconda tensorflow-gpu
Installing GPU was Google's form of a gang initiation. Once you survive the 2 day beating you are in the club.
You're just awesome Josh. Very helpful and informative. Thanks a lot.
What am I? Chopped Liver? Hehehehe
Can someone point me to the talk he is referring to at 6:36
How would development of a hybrid programming language work, where a compiler and an interpreter run from the same platform with a toggle switch, so the IDE can process both from the same platform when necessary?
Josh, you are the God of ML!
Your clips are always nice! 🌹
Can this method use Keras layers with CTC? How would I implement that? Can you give me a sample example?
I'm facing an issue with the code :(
"Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (60000, 28, 28)"
can someone help ?
It is telling you to use a two-dimensional array.
Just read the docs and check out the function definition.
I get exactly the same error; there is a size mismatch, but I can't figure out what to change yet. I have x_train and x_label; 60000 is the number of images in MNIST and 28 is the pixel count along the x or y axis. What is the connection between them?
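The error means the first Dense layer expects 2-D input (samples, features), but MNIST images arrive as a 3-D array (samples, 28, 28). A minimal sketch of the fix, using a zero array as a stand-in for the real MNIST data:

```python
import numpy as np

# Stand-in for the MNIST training images: 60000 images of 28x28 pixels.
x_train = np.zeros((60000, 28, 28))

# A Dense input layer wants (num_samples, num_features), so flatten each
# 28x28 image into a 784-element vector. This resolves the
# "expected dense_input to have 2 dimensions" error.
x_train_flat = x_train.reshape(60000, 28 * 28)

print(x_train.shape)       # (60000, 28, 28)
print(x_train_flat.shape)  # (60000, 784)
```

Alternatively, keep the data as-is and make a `Flatten(input_shape=(28, 28))` layer the first layer of the model, which does the same reshape inside the network.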
How to show tensorflow,keras,colab,how a notebook works, mnist, eager exec and cnn in 30 minutes. Bravo
Thanks!! :)
wow thanks! great talk. I am just getting started with TensorFlow and this talk really helped me a lot.
What course? He referred to a course. Please drop a link. Thanks.
It's great that at 25:40 eager execution is shown, except you forgot to feed the data from it to any model. Show us how to do that without a session, as you claimed is possible. All you showed is reading the input files. That's nothing.
Hello from Anjin Games in sunny Hove UK. Nice demos +1 on the pose estimation. This is enabling my animation work at the moment.
Great! And say Hi to Brighton for me ! :D
Laurence Moroney, the world is built on a wall... LOL will do.
Quick question based on what you've said: can I get data if I feed rules and answers? And what is the name of this part of science? :)
Aleksander Lukashou that would be religion
Great video guys!
Watched it live during my lunch hour. Was starving, but totally worth it.
Now, I'm on colab, and I just tried doing "for image, label in dataset" in eager mode, but it's throwing me a "not an iterable" error. Are they using a newer version than what I have available, or is there just a little more omitted code there?
stackoverflow.com/questions/49658802/how-can-i-use-tf-data-datasets-in-eager-execution-mode
(found an answer, it may well work as they use it in a newer version, but for now there's just one more line involved)
you should upgrade your tensorflow to 1.8
Yup exactly. Try !pip install -U tensorflow==1.8.0
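For anyone hitting the same "not an iterable" error: once eager execution is on (TF 1.8+ with it enabled, or TF 2.x where it is the default), a dataset is directly iterable. A minimal sketch with toy tensors standing in for the talk's image/label data:

```python
import tensorflow as tf

# On TF 1.8-1.15, eager execution must be switched on first:
#   tf.compat.v1.enable_eager_execution()
# On TF 2.x it is on by default.

# Toy in-memory tensors standing in for the (image, label) pairs in the talk.
images = tf.constant([[0.1, 0.2], [0.3, 0.4]])
labels = tf.constant([0, 1])
dataset = tf.data.Dataset.from_tensor_slices((images, labels))

# With eager execution the dataset is a plain Python iterable:
# no session, no iterator.get_next() boilerplate.
for image, label in dataset:
    print(image.numpy(), label.numpy())
```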
I want to work with some real-time projects... are there any opportunities?
One of the coolest real-time projects I've seen is PoseNet in TensorFlow.js: medium.com/tensorflow/real-time-human-pose-estimation-in-the-browser-with-tensorflow-js-7dd0bc881cd5
Where can I get that book (Deep Learning with Python)?
www.deeplearningitalia.com/wp-content/uploads/2017/12/Dropbox_Chollet.pdf
Where is the code for the example he used? :( either show all code OR post a link to it.
Very well explained. Thank you so much!
How do I implement moving-object detection in TensorFlow using a CNN and Python?
Here you go! github.com/tensorflow/models/tree/master/research/object_detection
Have fun installing the proto builder :)
Could anyone give me the link to that Manning book, Deep Learning with Python?
www.deeplearningitalia.com/wp-content/uploads/2017/12/Dropbox_Chollet.pdf
Is TensorFlow used only for deep learning?
No. TF works for anything gradient-descent based, which is more general.
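A minimal sketch of that point: TF's automatic differentiation applies to any differentiable objective, not just neural networks. Here gradient descent minimizes the toy function (w - 3)^2 using `tf.GradientTape` (the eager-mode API).

```python
import tensorflow as tf

# Minimize (w - 3)^2 by plain gradient descent -- no neural network involved.
w = tf.Variable(5.0)
for _ in range(100):
    with tf.GradientTape() as tape:
        loss = (w - 3.0) ** 2
    grad = tape.gradient(loss, w)  # TF computes dloss/dw automatically
    w.assign_sub(0.1 * grad)       # one gradient-descent step

print(w.numpy())  # converges toward 3.0
```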
Where is the code from the number classifier?
As well as the dataset?
20:48 Does he say 'air'?
eVul6 he said "error".
20:08 I've never heard of a "epoch sweet spot" for training... always heard more is better.
Paul Dacus Think of it like a ball rolling down a hill. It is possible for that ball to roll out of the "sweet spot" and end up further away from the global/local minimum.
Right, I guess his example is guitar tuning, and he states that once you reach a "tuned" state that continuing to tune it will make it go out of tune, which I suppose is true if you are only tuning with constant rotation amounts on the tuning keys and only in one direction. This is sort of unrealistic in deep learning cuz learning rates and back prop take care of both these problems, and for the most part error rates will monotonically decline in a properly functioning deep learning model. If I saw an error graph like the one he is showing, I would know something is terribly wrong with the model...
tysm for this video, it's really simple and gr8!
To everybody interested in the theory behind NNs, I highly recommend Haykin's "Neural Networks and Learning Machines". It's available for free, just google it. ~900 pages of concentrated knowledge.
Thank you for the recommendation. I appreciate it.
Raspberry Pi TensorFlow Object Detection - th-cam.com/video/zqIBce4LKx8/w-d-xo.html
Sir, Docker is not installed, pls help.
nice gremlins reference
hehehehe
For anyone who wants to play with the keras-mnist example : colab.research.google.com/github/CaoManhDat/100DaysOfML/blob/master/tf-high-level/keras-getting-started.ipynb
Thank you, Laurence
this guy is amazing !
Why does tensor flow hate Java so much?
Siraj Florida discrimination in the tech industry is a hot topic these days.
Ummm....I can't speak for everyone else, but I *love* Java.
always awesome!
Thanks!
The files are gone.
speedbumphu +1
Trying to test the code, but it's missing.
the coffee got cold
FRANÇOIS CHOLLET, the author is.
Eager to use eager :)
I thought that Keras and TF were different frameworks.
Keras has been integrated into TensorFlow - raw TF code is tedious to write and understand. Keras abstracts it into something more intelligible, and allows other ML frameworks to be plugged into the backend without complete code rewrites.
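To make that concrete: the same high-level Keras API is available as `tf.keras` with no separate install. A minimal sketch of the kind of MNIST-style classifier shown in the talk (layer sizes here are illustrative, not the talk's exact code):

```python
import tensorflow as tf

# Keras ships inside TensorFlow as tf.keras.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # one 28x28 image per sample
    tf.keras.layers.Flatten(),                        # -> 784-element vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

print(model.output_shape)  # (None, 10)
```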
Am I the only one who loathes and despises Jupyter?
I have no idea where the cursor is going to end up, or what the system is doing. It's also full of indecipherable memory leaks and unwanted object caching.
tonycatman but it does allow you to test multiple manipulations on the dataset in memory, without having to restart the code and wait a long time.
Also, it's a good way of documenting what is going on, instead of being restricted to commented-out text.
I love Jupyter. It's all I code in now. So much easier than an IDE.
Good presentation
Next time leave the coffee cup on the lectern. It distracts the audience, and it also limits how much you can engage with them.
Stellar talk.
Well you definitely taught my family how to turn me into their project.. which was not fun at all...
My sincere greetings
"So when you are building a neural network, start as simply as possible and build from there." No, sorry, you need to develop a model for low bias first. It's a separation of concerns that's paramount during ML project management. There are too many knobs otherwise, as well as model architecture choices like number of layers, units per layer, and umpteen others. There are a hundred hyperparameters, so what are you going to do, grid search until the cows come home? No, you need to be intelligent. After your model has the capacity to get low bias on the training set, then you work on the overfitting as a separate task if you want to remain sane. More data examples, more regularization, dropout, batch norm, initializations, and network architecture adjustments come next as you monitor the gap between error due to bias on the training set and error due to variance on a holdout set (dev set). You certainly do not start out simple, which gets you nothing.
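The bias-vs-variance monitoring described above can be sketched in a few lines. This is a toy, hypothetical setup (random synthetic data, arbitrary layer sizes), just to show where the two error numbers come from:

```python
import numpy as np
import tensorflow as tf

# Synthetic binary-classification data standing in for a real dataset.
rng = np.random.default_rng(0)
x = rng.random((200, 4)).astype("float32")
y = (x.sum(axis=1) > 2).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% as a dev set so both error terms are visible during training.
history = model.fit(x, y, validation_split=0.2, epochs=5, verbose=0)

train_err = 1 - history.history["accuracy"][-1]    # error due to bias
val_err = 1 - history.history["val_accuracy"][-1]  # bias + variance
print(train_err, val_err)  # a growing gap between these signals overfitting
```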
20:48
Awesome!
bad guitar tuning analogy :)
TLDR anyone?
please
Now you can keras inside your tensorflow while you're tensorflowing.
Recommendable
come on tensorflow
Nice shirt.
powered by coffee
Seriously, put down the coffee, have some respect.
The slideshow claims that 80% or so of the effort is getting the data. That's maybe correct. So then tell me why another magical MNIST import is done to demonstrate how to get the data in this great new product release of TF? I have a different idea. Please explain how to use tf.data.Dataset with real datasets like IRIS or something else that is NOT built-in, something that is NOT going to fit into RAM all at once. Better than IRIS, which is small, how about HIGGS.csv? We do not EVER use built-in data for real-world projects. My goodness, another MNIST to load into memory all at once up front? This is dumber than dumb and does not match the introductory slides that claim getting the data is the hard work. Well, in TF it is plenty hard work. Show us the real thing now. Use an external dataset of sufficient size and stop the toy nonsense.
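For what it's worth, tf.data can stream an external CSV without loading it into RAM. A minimal sketch using `tf.data.TextLineDataset` (the tiny generated file here stands in for a large one like HIGGS.csv; the two-feature schema is made up for illustration):

```python
import csv
import os
import tempfile
import tensorflow as tf

# Write a tiny stand-in CSV; in practice this would be a large external file.
path = os.path.join(tempfile.mkdtemp(), "data.csv")
with open(path, "w", newline="") as f:
    csv.writer(f).writerows([[0.1, 0.2, 1], [0.3, 0.4, 0], [0.5, 0.6, 1]])

def parse(line):
    # Two float features and an int label per row (hypothetical schema).
    fields = tf.io.decode_csv(line, record_defaults=[0.0, 0.0, 0])
    return tf.stack(fields[:2]), fields[2]

# The file is read line by line and batched -- never loaded all at once.
dataset = tf.data.TextLineDataset(path).map(parse).batch(2)
for features, labels in dataset:
    print(features.shape, labels.numpy())
```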