Love this! Thanks for posting.
Glad you like it!
The code window is too small; it would be better with a bigger font. Is it possible to use GPT-2 to create an auto-encoder to vectorize documents?
Thanks Robin. This was super useful.
Is it somehow possible to use a MacBook Air M1 as a monitor for the Jetson board?
Please reply, I am new to this field.
The Jetson can act like an Ubuntu server and you can access it over a network connection using something like ssh. So yes, you can treat the Jetson like a little server and connect with an M1 Mac, no problem.
@@robingrosset6941 Thank you so much for the suggestions and the prompt reply.
Thanks for all the video on the Xavier AGX. In your opinion, would you recommend buying an AGX for training versus buying a used GTX 1080 TI and add it to an existing system? The cost seems to be pretty similar
I would say a 1080 Ti is going to be better from a speed and ease-of-use perspective. While I really like the AGX Xavier for being small and portable, a 1080 Ti is likely to be better in terms of ease of use and raw speed. I have a couple of Titan Xp cards; they are similar to a 1080 Ti and I think roughly 2x faster than the Xavier. I can do a benchmark between them if you are interested.
@@robingrosset6941 some benchmark would be super cool!
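A rough sketch of the kind of benchmark I have in mind: it just times large matrix multiplications in TensorFlow 1.15 (the version used in this thread), which is only a crude proxy for real training speed, and the matrix size and repeat count are arbitrary example values rather than anything from the video.

```python
# Crude GPU throughput comparison: run the same script on the Xavier and on
# the 1080 Ti / Titan Xp box and compare the GFLOP/s numbers.
# Written for TensorFlow 1.15; sizes are arbitrary example values.
import time
import numpy as np
import tensorflow as tf

N = 4096          # matrix dimension
REPEATS = 50      # multiplications to time

a = tf.constant(np.random.randn(N, N).astype(np.float32))
b = tf.constant(np.random.randn(N, N).astype(np.float32))
c = tf.matmul(a, b)

with tf.Session() as sess:
    sess.run(c)                      # warm-up run
    start = time.time()
    for _ in range(REPEATS):
        sess.run(c)
    elapsed = time.time() - start

flops = 2.0 * N ** 3 * REPEATS       # approx. floating point ops performed
print("approx %.1f GFLOP/s" % (flops / elapsed / 1e9))
```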
Nice, but: 1. How about the results when training it on a different language? 2. And is it possible to train GPT-2 to transform natural language into code (bash or shell commands, Python) like GPT-3?
Is there a way to make it generate text every day on its own?
Yes I am sure it can. Just keep it running. It can generate as much as you like.
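A minimal sketch of what "just keep it running" could look like; generate_text() here is a hypothetical placeholder for whatever sampling call your GPT-2 setup uses, not a function from the project.

```python
# Simple daily-generation loop. generate_text() is a hypothetical placeholder;
# swap in your own GPT-2 sampling call.
import time
import datetime

def generate_text():
    # Placeholder: call your GPT-2 sampling script/function and return the text.
    return "..."

while True:
    text = generate_text()
    stamp = datetime.datetime.now().strftime("%Y-%m-%d")
    with open("generated-%s.txt" % stamp, "a") as f:
        f.write(text + "\n")
    time.sleep(24 * 60 * 60)   # wait roughly one day before generating again
```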
@@robingrosset6941 OK, hopefully I can get this project up and running this week. Thank you for this video.
Hi, great tutorial. Can we train GPT-2 for natural language generation from structured data? I have to generate summaries/insights from tabular data but I am not sure how to feed it into GPT-2. If you can suggest something, I would really appreciate it.
Nice, do you know of a Mandarin version?
I need this for GPT-NeoX 🙏🏼
How much system RAM or GPU RAM do you think the 1558M model would have needed for the training to run at all?
What I know is that it won't fit on a Tesla K80 with 12GB of RAM per GPU, and the Jetson AGX Xavier, which uses 6GB for the CPU and 24GB for the GPU, does not handle it either. There are posts about a Tesla V100 with 32GB working okay. I am not sure, but I think 32GB might do it. I might try to calculate how much *should be required*.
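A back-of-the-envelope version of that calculation, assuming fp32 weights and a plain Adam optimizer (two extra values per parameter) and ignoring activation memory, so the real requirement is higher.

```python
# Rough lower-bound memory estimate for training the 1558M GPT-2 model.
# Assumes fp32 and plain Adam; activations are ignored.
params = 1_558_000_000
bytes_per_value = 4                    # fp32

weights   = params * bytes_per_value   # model parameters (~6.2 GB)
gradients = params * bytes_per_value   # one gradient per parameter
adam_m    = params * bytes_per_value   # Adam first-moment estimate
adam_v    = params * bytes_per_value   # Adam second-moment estimate

total = weights + gradients + adam_m + adam_v
print("Weights alone:        %.1f GB" % (weights / 1e9))
print("Training lower bound: %.1f GB" % (total / 1e9))
# Roughly 25 GB before activations, which matches why 12 GB and 24 GB of GPU
# memory are not enough and 32 GB is borderline.
```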
@@robingrosset6941 Could you take a look at this video (the latter half) th-cam.com/video/4iK-IuvatxI/w-d-xo.html and do a follow-up tutorial on how to build a Siri-like voice assistant?
Try to give the video more brightness; it would be great if you did.
I'm teaching my son Chinese and he has learned about 50 characters. Is there any way to use this AI to generate a small story with only the 50 known characters? I've been doing this manually so far. Thanks.
I would use an RNN for this; you would need a good training set with the 50 characters. There are a few good blogs on how to use RNNs with a given vocabulary.
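For anyone who wants a starting point, here is a minimal character-level RNN sketch with a fixed vocabulary. stories.txt is a hypothetical training file containing only the known characters, and this follows the standard char-RNN recipe from those blogs rather than anything from the video.

```python
# Minimal character-level RNN with a fixed vocabulary.
# "stories.txt" is a hypothetical file of training text that only uses the
# ~50 characters the child already knows.
import numpy as np
import tensorflow as tf

text = open("stories.txt", encoding="utf-8").read()
vocab = sorted(set(text))                       # should be the ~50 known characters
char_to_id = {c: i for i, c in enumerate(vocab)}
ids = np.array([char_to_id[c] for c in text])

SEQ_LEN = 32
# Build (input sequence, next character) training pairs.
X = np.stack([ids[i:i + SEQ_LEN] for i in range(len(ids) - SEQ_LEN)])
y = ids[SEQ_LEN:]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab), 64),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(vocab), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=64, epochs=10)

# Generate a short story one character at a time.
seed = ids[:SEQ_LEN].tolist()
out = []
for _ in range(200):
    probs = model.predict(np.array([seed[-SEQ_LEN:]]), verbose=0)[0]
    probs = probs / probs.sum()                 # guard against rounding error
    next_id = np.random.choice(len(vocab), p=probs)
    out.append(vocab[next_id])
    seed.append(next_id)
print("".join(out))
```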
Good stuff. You Rock!
Hi, could you please let me know which version of CUDA and tensorflow-gpu you are using to run this demo? Thanks!
CUDA is installed as part of JetPack 3.4. I cover the steps to do this here: th-cam.com/video/Z5faMHohbfs/w-d-xo.html
The TensorFlow version is 2.0. I installed it following these steps on NVIDIA's site: docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html
Note that in recent versions they changed from requiring tensorflow_gpu to just using the package name tensorflow. As long as your model training prints things like loading library libcudart.so.10.0, you know it is using CUDA and the TensorFlow GPU build.
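If you want an explicit check rather than watching for the libcudart log line, something like this works on the TensorFlow 1.x builds discussed further down in this thread (on 2.x you would use tf.config.list_physical_devices('GPU') instead).

```python
# Quick check that TensorFlow was built with CUDA and can see the GPU.
# Written for TensorFlow 1.x.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available: ", tf.test.is_gpu_available())
for d in device_lib.list_local_devices():
    print(d.name, d.memory_limit // (1024 * 1024), "MB")
```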
@@robingrosset6941 Thanks, bro. I will try it out. Lots of GPU memory problems are just making me crazy.
@@mengjunwang8687 If I run 'pip3 list', the TensorFlow version is
tensorflow==1.15.2+nv20.2.tf1
so I think the 20.2 is the NVIDIA element.
I am just confused about whether you are using the CPU or the GPU for training. How could I use my GPU to train on my data? Thank you so much.
I am using the GPU for training. On the Jetson AGX Xavier, the 32GB of RAM is shared between the CPU and the GPU. Because the memory is shared, the memory usage for the whole system includes the GPU's memory usage; this is different from a normal PC with a discrete GPU. The way to use the GPU in training is to use something like the TensorFlow library, and by default on Jetsons the TensorFlow library uses the GPU version. It is very easy to tell if it is using the GPU: when TensorFlow starts up it prints out information about the GPU it found in the system and how much memory it sees as available. At 7 minutes 49 seconds in the video, roughly in the middle of the terminal, you will see /device:GPU:0 with 24577 MB memory. I hope this helps.
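Because the 32GB is shared between the CPU and GPU on the Jetson, it can also help to stop TensorFlow 1.x from grabbing all GPU memory up front; a minimal sketch, assuming TensorFlow 1.15, with an arbitrary example fraction.

```python
# Stop TensorFlow 1.x from pre-allocating all GPU memory on the shared-memory
# Jetson. The 0.7 fraction is an arbitrary example value.
import tensorflow as tf

gpu_options = tf.GPUOptions(
    allow_growth=True,                    # allocate GPU memory only as needed
    per_process_gpu_memory_fraction=0.7,  # or cap the share explicitly
)
config = tf.ConfigProto(gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    # Build and run the training graph as usual inside this session.
    pass
```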
@@robingrosset6941 Thanks bro, I can use the GPU to train now. But I found that with the GPU the output sample is sometimes very repetitive, with the same words occurring again and again, like this: "Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Slowly, Finally, finally, finally, finally, finally, finally, finally, finally, finally, finally, finally, finally," Still trying to find out why.
I keep getting ModuleNotFoundError: No module named 'tensorflow', but I installed it with pip install tensorflow-gpu==1.15.
If you have both Python 2 and Python 3 installed, you might need to run pip3 install tensorflow-gpu==1.15. Also, if you are running on the Jetson Xavier, you can drop the gpu part altogether and it uses the GPU by default.
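If it is unclear which interpreter the package landed in, a quick sanity check from plain Python (nothing project-specific here):

```python
# Check which Python is running and which TensorFlow build it can import.
import sys
print(sys.executable)   # the interpreter that is actually running
print(sys.version)      # should be Python 3 if you installed with pip3

import tensorflow as tf
print(tf.__version__)   # e.g. 1.15.x; a +nv suffix indicates NVIDIA's Jetson build
```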
Yo, can you please give us that cleaned-up dataset? It would be a great help.
Sure I will upload the project to GitHub later today and post a link here :-)
Here is a link to GitHub with the dataset and code
github.com/robin7g/burnsbot
@@robingrosset6941 thank you very much bro
Does it work with TensorFlow 2.0? I thought it wouldn't.
I'm not able to install TensorFlow 1.0 and I don't know why :(
My mistake: if I run 'pip3 list', the TensorFlow version is tensorflow==1.15.2+nv20.2.tf1, so I think the 20.2 is the NVIDIA element. The other thing you should check is the power mode; mine is set to MAXN. Also note that the training part is sped up in the video; it probably takes about 1 hour to run.
The promised GitHub link to your source code modification would have helped