Jetson Nano: Vision Recognition Neural Network Demo

  • Published on 27 Sep 2024

Comments • 522

  • @user-uw1wq9rj8g
    @user-uw1wq9rj8g 5 years ago +12

    The best explainer on TH-cam is Mr. Barnatt!! Thank you sir.

  • @oskimac
    @oskimac 5 years ago +117

    plot twist, he puts a jellyfish and the ANN detects it as "green background".

  • @ericschleicher
    @ericschleicher 5 years ago +2

    Came here for SBC demos, got a masterclass intro to machine learning. Good simple explanation for those not already steeped in machine learning.

  • @geoffreyjohnstone5465
    @geoffreyjohnstone5465 5 years ago +2

    This is really clever. I know it can be done on much more expensive equipment but this is soo cool in that you can carry it in your pocket. I would never even think to try this kind of thing.

  • @1974UTuber
    @1974UTuber 5 years ago +28

    Great video and demonstration Chris.
    I found it interesting that it identified your background as a jellyfish each time you removed all the items from the shot.

  • @9ColorZebra
    @9ColorZebra 5 years ago +3

    Thanks Chris. I enjoyed the presentation and forwarded it to my friend in case he didn't see your notification. He went and bought a Jetson Nano.

  • @ГригорийЕрёмин-ч4й
    @ГригорийЕрёмин-ч4й 5 years ago +2

    Very happy that such videos are being released. Thank you very much for this. I hope more such highly specialized videos will be released!

  • @JohnyDays69
    @JohnyDays69 4 years ago +1

    Man, you are AWESOME. You should teach some courses for all of us who don't have great knowledge in this field.
    Your explanations are clear and easy to understand. I think I will buy one of these boards and try to dive into this advanced step.

  • @rv6amark
    @rv6amark 5 years ago +2

    Watched this video twice simply because the subject matter was so interesting and well presented. I can see similar processing going on in my "Nest" doorbell's facial recognition, which works amazingly well on its tiny processor. Thank you, Christopher, for another great Sunday morning watch. Now I can spend my afternoon exploring the links you provided.

  • @pulesjet
    @pulesjet 5 years ago +2

    WoW, I found the AI's vocabulary alone amazing. I'm still having issues understanding how the Internet truly works. World Wide Neural Nets and Nodes. How so much information can be compared in microseconds from my mind, to keyboard, to the entire complex of the net, and back to me as fast as I can type. Boggles my noodle it does. To think our minds do the same thing, all located between the ears. The Creator had its chit together for sure. Our minds are nothing more than yes's and no's being compared in a grey jelly substance we pretend to control. When you think about it, it's truly amazing we can do what we can do, eaaaa? Yet again your grey jelly teeming with yes's and no's prevails, Sir! One of the best videos I've seen from you. Thank You!

  • @resrussia
    @resrussia 5 years ago +2

    Excellent presentation of neural networks and the Jetson implementation of one. I am looking forward to seeing how it can be trained for working with specialized domains of knowledge. Excellent video and keep up the excellent work!

  • @mickybee3247
    @mickybee3247 5 years ago +2

    Fascinating video - superbly presented. I'd love to see two of these set up so it can truly determine in 3D whether it's a wooden spoon or a drumstick, and the distance to the object. Powerful and complex technology that (like most technology) will be used for good and bad.

  • @XSpImmaLion
    @XSpImmaLion 5 years ago +1

    Super interesting! Also, the fact that you went offline for this, Chris, matters a lot... I'd be pretty interested to play around with this tech a bit if it's not creepily connected to something else.
    Thanks for sharing!

  • @salilsaxena9529
    @salilsaxena9529 4 years ago +1

    Loved this video.
    The only concern I have is with the section where you explain the ANNs: there is no need for a neural network with 2 output nodes, as for binary classification a single node can do the task by simply indicating 0/1 with the help of a sigmoid (or an SVM).
    Please continue spreading practical knowledge; the world needs it.

  • @niallwood
    @niallwood 5 years ago +4

    Thanks for another great video! I have my GCSE computer science exam tomorrow and Thursday, wish me luck!

  • @RocktCityTim
    @RocktCityTim 5 years ago +1

    I remember when a Smalltalk app could recognize a phrase that you typed. It took it long seconds and would consume a 286 CPU platform that cost over $15,000. Many won't realize how amazing what you just showed us is - but it is f-ing AMAZING for $99!

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      I totally agree. Even the cloud AI vision recognition you can try for free now -- eg at cloud.google.com/vision/ -- takes a few seconds to process a still. And this board is delivering 17fps. It really is staggering.

  • @armisis
    @armisis 4 years ago

    I can't wait to try this. I want to get it to learn the family, then greet the people it knows when they come into the house and question new people, as a type of security system... Should be a fun project.

  • @NicoDsSBCs
    @NicoDsSBCs 5 years ago +1

    That's amazingly well explained, Christopher. I wish you had explained it to me the first time somebody tried to explain it to me. It took a while before I understood. With your explanation everyone can understand it in the first 3 minutes.
    I also had books with those pictures of the neural nodes, in multiple configurations (multi-layer networks...).
    It's amazing seeing how things have evolved. In the early 2000s we had to write all the libraries ourselves. And it wasn't used for anything media-like this. No pictures nor video, only text. And the output was a library of data that we had to try to interpret. What had cost a million dollars 19 years ago is far surpassed by something costing $100 now. What's the future going to bring next :)
    Amazing video, I loved every second of it. I hadn't heard about neural networks for years after CELE went bust. Now it's everywhere.
    Have a great day Christopher.

    • @NicoDsSBCs
      @NicoDsSBCs 5 years ago +1

      It was an Indian elephant. It's got small ears :) I'm watching it again :)

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      Hi Nico. Fascinating to put this in the context of your previous neural network experience. This technology is going to grow and grow.

  • @briancrane7634
    @briancrane7634 5 years ago +2

    Very nice demo indeed! Fascinating! Thank You! I must add that in order to understand confidence intervals and Deep Learning in general one must have A-levels in MATHS! In particular being able to differentiate a DL equation with respect to a matrix is key. Many, many videos and courses available at no cost on the internet for anyone with the vision (pun intended) to study them!

  • @srtcsb
    @srtcsb 5 years ago +2

    This is really good stuff. I guess if they ever figure out how a computer can define smell and /or feel, Skynet can't be far behind ;-) . But this early tech is fascinating to see and work with. Thanks for another great video Chris.

  • @risquefiasco1305
    @risquefiasco1305 5 years ago

    I wish you had explainers for every computing question I have, but thankfully you answer many on my favourite subject: the single board computer. Thank you

  • @JaimeDeFi
    @JaimeDeFi 5 years ago +18

    "Is not a coffee mug, is a tea mug" you have my thumb up! XDDD

  • @apoch003
    @apoch003 5 years ago

    That was a fun one, Chris. I could have watched it trying to recognize things all day.

  • @freesaxon6835
    @freesaxon6835 5 years ago +42

    "Not named after a fruit, but can recognise fruit " 😁

    • @perrymcclusky4695
      @perrymcclusky4695 5 years ago +1

      Free Saxon The best quote of the video!

  • @AngryRamboShow
    @AngryRamboShow 5 years ago +1

    You're awesome! Thanks for the Jetson Nano coverage. Hope you keep it coming! You have the greatest channel on TH-cam for cool tech.

  • @edrymes3653
    @edrymes3653 5 years ago +2

    Fascinating, and also a bit scary. First of all the power of the SBCs is incredible compared to my first PC, an 8086 with 720k of ram. Put that together with the AI software and you start to get a taste of the near future. Facial recognition for your front door? The possibilities are endless.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      Yes. It is not what a $99 maker board can do today that is important, but what it signals for the years ahead . . .

  • @darkholyPL
    @darkholyPL 1 year ago +1

    I love how, when there's nothing on the screen, the AI just goes: 'Umm, yup, that's a jellyfish right there!' lol

  • @Alex1891
    @Alex1891 5 years ago +1

    This ExplainingComputers upload is the most interesting-to-date for me, because of both the topic and the presenter (hey Mr Barnatt!).
    Near the beginning of the video, it is explained that artificial neural networks have an initial training phase. The example of showing one multiple pictures of rabbits is used. This is wonderful. Furthermore, unless my understanding is incorrect, it is implied that the artificial neural network would have to be told what it is looking at during the training phase. This allows it to return something understandable during the inference phase. Within the context of this video, it is able to eventually tell us that an input picture is likely a rabbit.
    I would like to pose the question of whether it is an inherent, necessary step for artificial neural networks to be told what they are looking at during their training phases. What if you showed one multiple pictures of rabbits, but did not tell it that it was looking at rabbits? Surely, upon being shown a novel picture of a rabbit in the inference phase, it would still be able to tell that it was looking at something it knew about?
    This ties into an example later in the video. When Mr Barnatt is showing the camera the ExplainingComputers mug, it is likely a "novel" experience for the artificial neural network. However, can it remember the appearance of the mug such that it would be able to usefully respond to a future request for, say, "all known images of $this", where $this might be a sample picture supplied by the user? Could this eventually go deeper, with there someday being the ability to get a useful output to the request, "Please show me all known images of $thisSpecific.", where $thisSpecific might be a sample picture supplied by the user, with the output being images of one, specific ExplainingComputers mug, identified by its unique possible cracks, discolouring, grease, and/or other possible attributes?
    Thank you for your erudition, Mr Barnatt.
    Edit: For clarity, I would like to specify that this comment is wondering about the identification of specific things. I am aware that it is possible today to do a reverse image search in Google; however, it returns images similar to the uploaded one. I am looking for something to return images showing the same thing as what was uploaded, possibly from any time that a public picture was available. Imagine the scenario in which someone in the distant future scans an unearthed ExplainingComputers mug, and, in return, receives images (and possibly video) from a massive archive of public media, possibly including photos of the person who owned the mug holding it. Then, you could take that person's image as input and see other images of that person, and learn about their life, knowing they were an ExplainingComputers fan. :-)

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      Great post, and your understanding of the training process is correct. Today, most neural nets are trained, then used for inference (in part because training is a far more resource intensive process). But there are self-learning neural net AIs -- such as Google's Deep Mind -- that do not have to be fed known training data (ie told what they are looking at, as it were).

  • @Tangobaldy
    @Tangobaldy 5 years ago

    Yay, an explaining video. Noice. The single board reviews don't interest me so much, but what they can do is great to learn about. I wonder what the DNN recognised you as?

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      I never tried looking at the camera! :) But there are no people in the list of 1000 things this sample net can recognize.

  • @thebeststooge
    @thebeststooge 5 years ago +5

    Fascinating and scary at the thought of where this will end up at the same time. I am torn over this because I know how it will be used for evil more than good.

  • @KomradeMikhail
    @KomradeMikhail 5 years ago +38

    But can it recognise a Raspberry Pi ?...
    Or a raspberry pie ?

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +17

      Sadly not. I've read the list of the 1000 things it knows. It can identify a raspberry, though.

    • @statorworksrobotics9838
      @statorworksrobotics9838 5 years ago +1

      You beat me to it

    • @brianwesley28
      @brianwesley28 4 years ago +1

      Make it look into a mirror.

  • @El_Grincho
    @El_Grincho 5 years ago +46

    It's a b... It's a b... It's a small, off-duty Czechoslovakian traffic warden!

    • @ThinkinThoed
      @ThinkinThoed 5 years ago +1

      Hahaha, that was a good reference! Now I've got to go rewatch the show, thanks. 😂

    • @jonathanmaybury5698
      @jonathanmaybury5698 5 years ago

      @@ThinkinThoed Love it LOL

    • @ethzero
      @ethzero 3 years ago

      Came here to make or like this comment!

  • @stevensexton5801
    @stevensexton5801 5 years ago +13

    I'm still looking for the jellyfish. All I can see is a cow eating grass in a very large field.

  • @MicrobyteAlan
    @MicrobyteAlan 5 years ago +15

    Good topic. Interesting and well presented. Thanks from Florida’s Space Coast

    • @gpalmerify
      @gpalmerify 5 years ago +2

      Howdy from Houston 🚀

    • @joeyhillers9460
      @joeyhillers9460 5 years ago +2

      Smack dab in the middle of Missouri

  • @IndiandragonIn
    @IndiandragonIn 5 years ago +4

    @10:29 Chris is a madlad, he needn't worry about demonetisation!

  • @jim5461
    @jim5461 3 years ago

    Steps taken to make the first command work: adjusted the resolution for my camera (3280, not 3820) and lowered the fps value to 15. Then the command worked.
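
    For anyone replicating that fix, here is a minimal sketch of the camera test pipeline with those two values swapped in, assuming the stock Raspberry Pi Camera v2 (IMX219) style CSI sensor whose full resolution is 3280x2464; the elements follow the working command quoted further down in these comments:

    gst-launch-1.0 nvarguscamerasrc ! \
      'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=15/1, format=NV12' ! \
      nvvidconv ! nvegltransform ! nveglglessink -e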

  • @trevorford8332
    @trevorford8332 5 years ago +2

    I've always been fascinated by neural networks. When I had a mini stroke it was a good opportunity to find out how the brain works, in particular when parts of the neural network die; as with brain functions, you can find many examples in life!! 😊

    • @vvwording4844
      @vvwording4844 5 years ago +2

      A stroke has its advantages: one of the main jobs of my brain after my stroke has been finding those advantages. In stroke-land curiosity and a touch of humor are good allies in the war against Big Nurse.

    • @trevorford8332
      @trevorford8332 5 years ago +2

      @@vvwording4844 Oh god yeah, definitely recommend a good sense of humour!!

  • @qzorn4440
    @qzorn4440 3 years ago +1

    Well, at least working with Raspberry Pi OpenCV is a great exercise in downloading soft-stuff to the Nano. :) Thanks, great 'hello world' neural video.

  • @MohammadAminAbouHarb
    @MohammadAminAbouHarb 4 years ago +4

    Legend says innocent jellyfish were mercilessly slaughtered on that same desk. You can still feel their tortured souls yearning for justice to be served. Poor fellas.

  • @jerrygundecker743
    @jerrygundecker743 5 years ago +5

    Your AI could be named Mr. Magoo. "Oh, AI, you've done it again!"

  • @stanrogers5613
    @stanrogers5613 5 years ago +16

    It needs an olfactory sensor module. It's high time someone made something that can definitively tell cheese from petrol.

    • @motogee3796
      @motogee3796 5 years ago +1

      Check out this mini spectrometer... it can identify foods, medicines and their quality as well.
      th-cam.com/video/YKv9ESLMOEE/w-d-xo.html

    • @totalermist
      @totalermist 5 years ago +1

      @@motogee3796 You know that's a scam, don't you? It *cannot* work. Professional equipment that's orders of magnitude more expensive and bulkier can't do what those scammers claim their product can achieve.
      If it seems too good to be true, it probably is...

    • @motogee3796
      @motogee3796 5 years ago

      @@totalermist I actually believed it... thanks for pointing that out.
      They raised $3 million on Kickstarter.

    • @totalermist
      @totalermist 5 years ago

      @@motogee3796 To be fair calling them "a scam" was a bit harsh - they at least released a product; albeit one that was several years late and very underwhelming.
      The problem is not so much the product itself, it's the hype and unrealistic goals. I genuinely believe the guy behind it wanted to make it a reality. But reality just didn't play along...

    • @mikeorjimmy2885
      @mikeorjimmy2885 3 years ago

      @@totalermist Does reality ever play along? I have noticed in my 65 years that it does only about half of the time. No flying cars, no moon trips, no fusion power.

  • @xdxfxzx
    @xdxfxzx 5 years ago +1

    Would love to see more videos with this board. The GPU on it makes it infinitely more usable than the RPi.

  • @weerobot
    @weerobot 5 years ago +38

    Fast forward 30 yrs... Say hello to the T-800...

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +11

      Exactly. That is my thought here. I am amazed that a $99 board can already do this.

    • @spuds6423
      @spuds6423 5 years ago

      @Richard Addison what was that movie where Robin Williams plays an Android that becomes more human as it is upgraded to the point where he is legally a "sentient being" and allowed to have relations with a human?? It was "Bicentennial Man"

    • @floydlooney6837
      @floydlooney6837 5 years ago +2

      No, T-800 says Hello to you, Puny Human, all glory to Skynet!

    • @Fred_PJ
      @Fred_PJ 5 years ago +4

      98.50% Sarah Connor

    • @henson2k
      @henson2k 5 years ago

      @@JamecBond Unlikely, look at self-driving cars or space exploration. Stuck for a while...

  • @BharatMohanty
    @BharatMohanty 5 years ago +1

    Nice and informative video sir.
    1. Neural network image recognition with the terminal on looks like a scene from a Hollywood movie.
    2. The neural network needs to learn that Englishmen prefer tea over coffee on any given day. 😀

  • @allluckyseven
    @allluckyseven 5 years ago +1

    Very interesting.
    I don't know exactly how they work (or this specific implementation), but seeing the wooden spoon suddenly turn into a drumstick makes me think that the AI should consider not just the current image it's seeing, but also the previous ones. Not all of them necessarily, but the wooden spoon hadn't even left the screen and it thought it was something else.
    So it should consider tracking the objects and the history of images analyzed, and maybe it could work better with a second camera or depth sensors to read what's in front of it.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      I like your line of thinking here. The demo I imagine interprets each frame in isolation.

  • @cloudcloud1
    @cloudcloud1 5 years ago +2

    *Again* a great video.🌺
    Curious what AI will do in the future?
    Paradise for technically interested people.
    I assume NVIDIA will make a lot more possible.

  • @chroma7247
    @chroma7247 5 years ago +2

    I finally figured it out. Chris is Robert Fripp, but with less guitars.

  • @rayrayray63
    @rayrayray63 5 years ago +10

    To name it after a fruit when it can recognize fruit is just nuts.

    • @mickelodiansurname9578
      @mickelodiansurname9578 5 years ago +2

      And of course nuts are technically fruit. So now it's just confusing.

    • @spuds6423
      @spuds6423 5 years ago

      @@mickelodiansurname9578 But it's certainly not a vegetable. 😃

  • @AurioDK
    @AurioDK 5 years ago +30

    I am really scared now, the world is full of jellyfish, I knew there was something between heaven and earth. Now it's been confirmed.

  • @junkmauler
    @junkmauler 5 years ago

    Would love to see you take this a tad further and actually show the learning/training process of adding new objects for detection.

  • @buck-johnson
    @buck-johnson 2 years ago +1

    I really enjoyed this video. Thanks.

  • @marathonmanchris
    @marathonmanchris 5 years ago +1

    This is a great video, thanks, I can hardly wait to try it!

  • @SaccoBelmonte
    @SaccoBelmonte 5 years ago +1

    Fascinating. You should make it harder with reflective objects such as a metal ball, mirrors, crystal objects, and see what happens.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      Yes, I should try to confuse it! Although the teapot was reflective.

    • @SaccoBelmonte
      @SaccoBelmonte 5 years ago +1

      @@ExplainingComputers Trying to confuse it will make for a great video :D

    • @SaccoBelmonte
      @SaccoBelmonte 5 years ago +1

      Keep up the good job man, you rock!

  • @magefront1485
    @magefront1485 5 years ago

    DNN is a bit confusing, as it could also refer to a Dense Neural Network.
    I think in this case the architecture of the fruit recognition network should be the well-known InceptionVx (might be v1-v4) trained on the ImageNet dataset.
    Nice video Chris, hope to see how this little beast performs on face recognition.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      I take your point on "DNN"; I used the term in part because it is the one NVIDIA use.

  • @steve6375
    @steve6375 5 years ago +1

    Great video! I would like to know how to train it to recognise new objects and how to train it to distinguish between very similar objects such as different types of apples or human faces or breeds of dog, etc.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      There is a training demo: see developer.nvidia.com/embedded/twodaystoademo and the section "Employing Deep Learning".

  • @KingJellyfishII
    @KingJellyfishII 5 years ago +13

    Early! I know this is going to be a great video before I've even watched it

  • @FailsafeFPV
    @FailsafeFPV 5 years ago +2

    Do you plan to take a look at the Atomic Pi x86 SBC? I have just bought one.

  • @kokopelli314
    @kokopelli314 5 years ago

    Add a text reader and you have a prosthesis for the visually impaired.

  • @menghuajiang3509
    @menghuajiang3509 4 years ago +1

    It cannot seem to handle transparent objects... I wonder what would happen if you showed it a mirror?

  • @panvrek8952
    @panvrek8952 5 years ago

    Wow you know, it's my first time seeing something like this. Thanks

  • @GervasitorSpaceman67
    @GervasitorSpaceman67 5 years ago +1

    Nice video ! Now I want to get one, it's nice to be able to play with AI for that price.

  • @kingmarviemarv
    @kingmarviemarv 5 years ago +1

    Great video on the Jetson Nano. I've enjoyed watching this video as a first time watcher and subscriber. Can the Jetson Nano be used as a 3D scanner as well?

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      Thanks for watching and subscribing. There is certainly 3D scanning potential here, although the Jetson Nano only has one CSI port.

  • @Gdolwell
    @Gdolwell 5 years ago +4

    As much as I love the specs of SBCs, use cases are much more interesting.

  • @user-vn7ce5ig1z
    @user-vn7ce5ig1z 5 years ago +7

    As a jellyfish, I am quite concerned that AIs are already programmed to recognize us at all costs. The future does not look good for jellyfish-kind. 😲

  • @andrewfrost8866
    @andrewfrost8866 5 years ago +1

    Excellent Chris!

  • @bcollinsks1
    @bcollinsks1 5 years ago +1

    Great job! Can't wait to get set up and try it.

  • @suvetar
    @suvetar 5 years ago +1

    I wonder if the console output could be tweaked for it to say what it thinks the alternatives are. I mean, I know it flashed up 'snorkel' when shown the water bottle for example, but when, say, it was 60% sure that the object was an elephant, what was the other 40%? I don't know if the software works like this, but perhaps that 40% could be used as back-propagation data? Just a thought anyway!
    Thanks for the great video as always! Fascinating subject and you introduce it in a very pain-free manner!

  • @DestroyerFather
    @DestroyerFather 5 years ago

    Very good stuff man amazing video

  • @peperiosh
    @peperiosh 5 years ago

    Totally Amazing! Thanx a lot for this. Fan of your site. Greetings from Lima - Perú

  • @fe911s
    @fe911s 5 years ago +2

    Thanks

  • @johandeklein5253
    @johandeklein5253 5 years ago +1

    Just fascinating

  • @Cynthia_Cantrell
    @Cynthia_Cantrell 5 years ago +4

    It appears that the computer thinks it's under water in the Caribbean and is detecting all those jellyfish that are nearly invisible. ;)

  • @EliasOda
    @EliasOda 5 years ago +2

    That whole showing objects to the camera... You made it sound like we are watching playschool... It's funny... good work.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +2

      You are right. I had not thought of it that way. But now I think of it, holding up objects for a computer to recognize is very much like showing something to a child and saying "what's this then?"

  • @spicemasterii6775
    @spicemasterii6775 5 years ago

    Amazing video. Thank you for bringing the Jetson to my attention. I was completely unaware of its existence. I might order one and follow along with your channel. Liked and subscribed.

  • @scottwatschke4192
    @scottwatschke4192 5 years ago +1

    Awesome technology.

  • @soumitradey8208
    @soumitradey8208 5 years ago +1

    Fantastic sir.
    Followed the steps!! It could identify my pet dog exactly! English cocker spaniel, 68% probability!!

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      It is when a computer system does something like that -- something to connect it to the real world around us -- that you realize things are changing. :)

    • @soumitradey8208
      @soumitradey8208 5 years ago

      @@ExplainingComputers oh yes! Fantastic feeling

  • @knarftrakiul3881
    @knarftrakiul3881 5 years ago +1

    I was 12 when Terminator came out on HBO. I would tell my friends that it would happen. They all laughed and told me it was impossible. Now look at all the robots DARPA has created. Just amazing and terrifying at the same time. Lol. They also use LIDAR along with the camera for their robots. I wonder if the JetPack software has any demos for LIDAR?

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      You are so right -- things are moving fast. I'm not aware of a specific LIDAR demo, but the board would handle this I imagine.

  • @cerberes
    @cerberes 5 years ago +1

    Very cool. Mine just arrived yesterday.

  • @Dia1Up
    @Dia1Up 5 years ago +1

    Very impressive. What hardware does it have for AI specifically? Just the normal CUDA cores? I.e. could you run this script at home with an Nvidia GPU, and/or on a Pi?

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      Just the CUDA core GPU. The software here is ARM based for the NVIDIA Jetson boards. A Pi may run something less complex (smaller neural net). A similar demo could run on an x86 PC with a CUDA core GPU.

  • @ralphups7782
    @ralphups7782 5 years ago +1

    With that AI's guess rate, I don't think I would like it to be in full control of a motor vehicle. And me all strapped in for the ride. 😲

  • @stevenlevin6765
    @stevenlevin6765 5 years ago +1

    Great video! So informative. Thank you!!!

  • @arashi926
    @arashi926 5 years ago +4

    It amused me as much as it amused you. Apparently, I've been drinking a lot of "lotion" ;-)

  • @gurpreet19
    @gurpreet19 5 years ago +1

    Thanks for the help, going to try this out.

  • @OfficialNetDevil
    @OfficialNetDevil 4 years ago +1

    Connect the DJI Osmo Pocket with a USB cable?

  • @mattias0114
    @mattias0114 4 years ago +1

    Can the program also learn from saved data that you showed it previously, like several spoons and materials?

    • @ExplainingComputers
      @ExplainingComputers  4 years ago

      The neural net can be trained (and retrained) by showing it images that you identify. I show the training process towards the end of this video: th-cam.com/video/wKMWjIKaU68/w-d-xo.html

  • @SuperU2tube
    @SuperU2tube 5 years ago +7

    It didn’t guess “wooden elephant” so I wooden trust it!

  • @야옹냐옹-q4z
    @야옹냐옹-q4z 5 years ago +1

    Please let me know how to change the pipeline string for gstCamera.
    Because [gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e] works.
    I do not know much, so may I ask you some questions? I want to change the string used by ./imagenet-camera, but I do not know where to change it. The parts I want to change are framerate, flip-method, width = 1200 -> 960, height = 900 -> 616. The command ['video/x-raw, width=1200, height900' !] does not work. Great video.
    Thank you very much.
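
    A possible way to test such changes before editing anything used by ./imagenet-camera is to adjust the gst-launch-1.0 pipeline that already works, keeping every element the same and only swapping in new caps. The values below are only placeholders (1200x900 output size, framerate=15/1, flip-method=2 for a 180-degree flip), and width=3280 rather than 3820 matches the camera fix mentioned in an earlier comment:

    gst-launch-1.0 nvarguscamerasrc ! \
      'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=15/1, format=NV12' ! \
      nvvidconv flip-method=2 ! \
      'video/x-raw, width=1200, height=900' ! \
      nvvidconv ! nvegltransform ! nveglglessink -e

    If that previews correctly, the same caps values are what would need to go into the camera pipeline that imagenet-camera builds (in the jetson-inference source the string is assembled in gstCamera.cpp, if I have that right).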

  • @aspirante.a.vagabundo
    @aspirante.a.vagabundo 3 years ago +1

    Wow! Thanks!

  • @twmbarlwmstar
    @twmbarlwmstar 5 years ago

    Really old school Clive OU feel to this week's episode. Have you seen the prices of some of these Jetsons? I think they're mainly for educationalists (they get a discount) and enterprise. A million miles from a Raspberry Pi (although the Foundation will shift a few cameras thanks to it).
    Amazing the power in such a small form factor, but completely beyond me, my bank balance and my fumbling.

  • @DavidIFernandezMunoz
    @DavidIFernandezMunoz 5 years ago +1

    Given the list of videos on the way, with over 40 in the pipeline, it certainly is bold of me to request an update on 2019 Linux distros but, well... there you are...

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      May well happen! :) I am likely to focus on Linux quite a bit in the second half of 2019 as Windows 7 support nears its end.

  • @icenesiswayons9962
    @icenesiswayons9962 5 years ago +1

    Very cool!

  • @BilisNegra
    @BilisNegra 5 years ago

    Give this to a kid and (s)he'll be absorbed using it for hours... And I would be putting objects before the camera for a good while too! Imagine the educational possibilities of this.

  • @leilaghasemzade5880
    @leilaghasemzade5880 2 years ago

    Thanks for the tutorial. I am trying to connect a camera with a MIPI port to the Jetson Nano. I have done the initial setup of the Nano, connected the camera to the board and opened the Cheese application, but I am getting *no device found*. Can I activate the camera without any dependency (TensorFlow, OpenCV or running scripts)?
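
    A minimal way to check the CSI camera without installing TensorFlow or OpenCV is to see whether a video node has appeared and then preview it with GStreamer directly (Cheese generally cannot display the raw CSI camera output on the Nano, so it is not a reliable test). The 1920x1080 at 30 fps mode below is an assumption for a Raspberry Pi Camera v2 style sensor:

    ls /dev/video*

    gst-launch-1.0 nvarguscamerasrc ! \
      'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1, format=NV12' ! \
      nvvidconv ! nvegltransform ! nveglglessink -e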

  • @IsaacPiera
    @IsaacPiera 5 years ago +1

    May I give you a security recommendation? Mute the camera when typing passwords. With enough typing data it's possible to get statistics on typing style and the timing between keys, and crack passwords using that information.

    • @ExplainingComputers
      @ExplainingComputers  5 years ago

      Thanks, but this is a video. :) So there are no remotely significant passwords used here, and hence nothing of value to crack. OS installs in videos like this rarely last more than a few hours. And there is also a thing called editing! :)

  • @godwhomismike
    @godwhomismike 5 years ago +1

    An invisible jellyfish draws near!
    Command? > Attack.
    Mike Attacks!
    The Invisible Jellyfish's hit points have been reduced by 64.
    Thou hast done well in defeating the Invisible jellyfish.

  • @PrasannaRoutray97
    @PrasannaRoutray97 5 years ago +1

    Thanks for the video. Is it possible to have some information about FPS during inference and simple tracking?

    • @ExplainingComputers
      @ExplainingComputers  5 years ago +1

      You can see an FPS display at the top of the Window I think. I will be posting another Jetson Nano AI video fairly soon! :)

  • @celestialode
    @celestialode 5 years ago +1

    Now that was impressive!

  • @KCtheAmateur-1
    @KCtheAmateur-1 5 years ago +1

    Lol, "it's not a coffee mug, that's tea in there!"

  • @Jayenkai
    @Jayenkai 5 years ago +3

    Did anyone else catch the "Nipple" during the water bottle bit?!

    • @Rich-on6fe
      @Rich-on6fe 5 years ago +1

      Yes, at 10:30 - it shows how highly trained and optimised our tiny minds are. Good to see that he was wearing clothes when showing the teapot.

  • @orleydoss3171
    @orleydoss3171 5 years ago +1

    Pretty cool 👍