Inside a Neural Network - Computerphile

  • Published 29 Jun 2016
  • Just what is happening inside a Convolutional Neural Network? Dr Mike Pound shows us the images in between the input and the result.
    How Blurs & Filters Work (Kernel Convolutions): • How Blurs & Filters Wo...
    Cookie Stealing: • Cookie Stealing - Comp...
    Rob Miles on Game Playing AI: • AI's Game Playing Chal...
    Secure Web Browsing: • Secure Web Browsing - ...
    Deep Learning: • Deep Learning - Comput...
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

Comments • 311

  • @pw7225
    @pw7225 7 years ago +356

    Dr Pound is the best lecturer here. Very clear, intelligently funny, interesting topics.
    Would deserve his own channel

  • @theREAL9er
    @theREAL9er 7 years ago +186

    The pictures he printed of the layers helped me grasp the concept so much better than other videos, so thank you

  • @mohammaddawas481
    @mohammaddawas481 5 years ago +20

    This is the best explanation of what is going on inside a neural net! Now I can imagine it more clearly.
    Thanks a lot!

  • @oliviamay
    @oliviamay 8 years ago +16

    Loving these videos with Dr. Pound, keep it up!

  • @ProphecySam
    @ProphecySam 8 years ago

    I've been studying neural networks for the last couple of months and haven't come across any resource that explains them this well. You have made it so easy with the visualization.
    I'd really appreciate more videos on topics like RNNs and how to set the number of layers, filters etc. (hyperparameters).

  • @aplcc323
    @aplcc323 6 years ago +4

    Computerphile, you single handedly helped me regain my interest with computer science.
    Thank you very much for all your videos (:

  • @astropgn
    @astropgn 8 years ago +19

    In the last video I asked what the images looked like in these various convolutions. I knew they wouldn't look anything like the input image, but I was very curious to see the process anyway.
    And now you make a video answering exactly what I wanted! Thank you so much! :)

  • @rhoneletobe
    @rhoneletobe 8 years ago +1

    So useful. As a CS student, this was more helpful than a ton of other DLNN stuff I've seen online. Thank you!

  • @SomethingUnreal
    @SomethingUnreal 8 years ago +61

    It'd be really interesting to take a network trained to detect random objects as seen by a camera, then give it the live feed from a camera and watch the activation of each neuron in realtime as the object moves about in the camera's view, or rotates around the object, etc. I guess the earlier layers would change a lot, while the deeper layers (which have a better idea of what constitutes an object) would change less.

    • @Vulcapyro
      @Vulcapyro 8 years ago +5

      Projects like this have been done, but only in the sense that they usually just output the most probable class(es) because that's usually the only real way to deal with the amount of information. For modern networks you should be able to visualize activations of a single layer in real-time, but the number of pixels you'd need for a given layer can range from thousands to millions. So doable, but probably not easy to visually parse just by looking at it.
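The real-time visualization idea above can be sketched quite simply: tile one layer's feature maps into a single image and redraw it every frame. The kernels and shapes below are random stand-ins (not a trained network), just to show the mechanics:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def activation_grid(img, kernels, cols=5):
    """Tile each kernel's response map into one displayable image."""
    maps = [conv2d(img, k) for k in kernels]
    rows = -(-len(maps) // cols)               # ceiling division
    h, w = maps[0].shape
    grid = np.zeros((rows * h, cols * w))
    for n, m in enumerate(maps):
        r, c = divmod(n, cols)
        grid[r * h:(r + 1) * h, c * w:(c + 1) * w] = m
    return grid

rng = np.random.default_rng(0)
frame = rng.random((28, 28))               # stand-in for one camera frame
kernels = rng.standard_normal((20, 5, 5))  # stand-in for a trained conv layer
grid = activation_grid(frame, kernels)
print(grid.shape)  # (96, 120): a 4x5 grid of 24x24 feature maps
```

Rendering `grid` each frame is exactly the "watch the activations move" display; as the reply says, it is easy for one layer and hard to parse for a whole deep network.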

    • @smileyball
      @smileyball 8 years ago +1

      Consider looking into visualization approaches (like saliency/heat maps/deconvolutional neural network) and approaches that focus on maximal activation (like Google's DeepDream)

    • @MadJDMTurboBoost
      @MadJDMTurboBoost 6 years ago +1

      @SomethingUnreal I'd imagine if it was programmed properly and trained long enough, it may look similar to an fMRI.

  • @talhatariqyuluqatdis
    @talhatariqyuluqatdis 7 years ago

    This guy is my second favourite on computerphile. Lovin these demos

  • @tho207
    @tho207 8 years ago +5

    what a fantastic explanation, I loved the digits convolution representation
    hope to see more videos about this!
    (RNNs)

  • @Aarrmehearties
    @Aarrmehearties 3 years ago +6

    Massively interesting and well presented, even for my aging neural network!

  • @aungthuhein007
    @aungthuhein007 7 years ago +33

    Would love to have someone like him as my professor in my life!

  • @displayoff
    @displayoff 8 years ago +34

    I love this man.

  • @dolibert
    @dolibert 6 years ago

    Mike and Rob, the stars of computerphile.
    Great content and nice puns.
    Keep it up guys

  • @ehsankiani542
    @ehsankiani542 4 years ago +1

    The best tutorial ever! Cheers, Mike!

  • @Kitsudote
    @Kitsudote 2 years ago

    Oh wow, this video made me understand neural networks in an insanely deep way. Thank you!

  • @davidm.johnston8994
    @davidm.johnston8994 6 years ago

    Thank you Mike, and thank you Sean, this video is really helping me in my quest! I'm making a small game in which I'm trying to make an AI using the TensorFlow library.

  • @cazino4
    @cazino4 3 years ago

    Excellent video! Visually, seeing the neurons light up blew me away... It was like looking at an artificial, scaled-down brain being imaged...

  • @Wurzelbrumpft1
    @Wurzelbrumpft1 8 years ago +1

    love this series about machine learning

  • @harborned
    @harborned 8 years ago

    Fantastic video! Interesting to see "inside the mind" of a neural network

  • @kevintoner6068
    @kevintoner6068 4 years ago

    wow....
    I didn't expect to understand any of that, but it was all explained perfectly. It made sense. Awesome video

  • @shifter65
    @shifter65 4 years ago

    Love the visualization!

  • @hotfrost_
    @hotfrost_ 1 year ago

    Thank you so much. This was very helpful.

  • @zakeryclarke2482
    @zakeryclarke2482 6 years ago +3

    Please do a video on the maths of forward and back propagation and how they are implemented

  • @Pfaeff
    @Pfaeff 8 years ago +71

    Doesn't google use those captchas as a crowd sourced labeling technique for their own deep learning stuff?

    • @emosp0ngebob
      @emosp0ngebob 8 years ago +8

      There's a Computerphile video on that too somewhere, again a Google project. You get shown two words, and the computer knows one of them and not the other, so when you type the two words in, the computer learns the unknown word. That's for transcribing libraries and things... I can't remember which Computerphile video it was though.

    • @zubirhusein
      @zubirhusein 8 years ago +1

      They do

    • @daniellewilson8527
      @daniellewilson8527 4 years ago +1

      I wonder if there’s a website that someone can go to to do the image things to help train the deep learning systems?

  • @s.e.7268
    @s.e.7268 3 years ago

    It was very enjoyable, thanks for the video.

  • @Sebi0043
    @Sebi0043 5 years ago

    Thank you very much! Very helpful video!

  • @crazygood150
    @crazygood150 8 years ago +4

    Who needs dual monitors when you have dual PCs! Great video btw

  • @RG-jv2nv
    @RG-jv2nv 8 years ago

    This really clarified the previous video :)

  • @TheCritic609
    @TheCritic609 8 years ago +1

    Great Video!

  • @punkkap
    @punkkap 7 years ago

    Brilliant video.

  • @SinanAkkoyun
    @SinanAkkoyun 5 years ago +2

    Dr Mike Pound, please make an in-depth tutorial series on Q-learning!

  • @heaslyben
    @heaslyben 8 years ago

    That was really interesting! Thanks!

  • @probE466
    @probE466 8 years ago +5

    Could you maybe link to the actual code? Would be interesting to look at the implementation

  • @Fireking300
    @Fireking300 8 years ago

    Great video!

  • @sedthh
    @sedthh 7 years ago +1

    please do more on this

  • @hosmanadam
    @hosmanadam 5 years ago

    Very enjoyable, thank you!

  • @shanesrandoms
    @shanesrandoms 8 years ago

    Enjoying the neural net videos. Looks like ANNs are coming back in after not really seeing much of them since the 90s.
    I remember my first exposure to the math and theory behind this was an assembly program on my 8-bit C64 back in the late 80s, creating a 3-layer backpropagation network.

  • @jambalaya201
    @jambalaya201 3 years ago +1

    How do you even visualize the output of the NN?
    Crazy, this perspective is so insightful.

  • @Zorbeltuss
    @Zorbeltuss 8 years ago

    A thought I've had when thinking about this and the previous episode: would it be possible to "reverse" the order of the convolutional neural network, getting a sort of idealized result? Probably not extremely useful in most cases, but likely somewhat usable for seeing what extra data could be used to train it for more accurate results, or perhaps for some sort of data generation.
    Doing the same for a standard neural network would not result in any usable data, I know, but it seems like it might be possible with the convolutional one.

  • @haakonvt
    @haakonvt 7 years ago

    As always, a splendid video! However, every single clip taken from the angle where the pictures of the convolutions are visible is out of focus. Pity

  • @deepquest
    @deepquest 7 years ago

    It's a really good lecture for understanding what is going on inside a NN. I am using NNs for target classification in thermal images. Is a NN a good approach for that, or should I go for another option?

  • @sirivellamadhuphotos
    @sirivellamadhuphotos 5 years ago

    @ 5:58 I got the point I was searching for. Thank you very much.

  • @kachrooabhishek
    @kachrooabhishek 2 years ago

    i just love this guy

  • @DracarmenWinterspring
    @DracarmenWinterspring 8 years ago

    With all the edge detection going on, would it be harder to recognize a 4 if some versions had the top parts join at an angle, like the 4 in this font, versus the open version as in the video? Likewise a 7 with or without the strike through it? I mean, does it remember some kind of average of all the objects in a class or all of them / all of the sufficiently different ones (which might be hard for a large database)?

  • @Kubaizi
    @Kubaizi 6 years ago

    very helpful, thanks

  • @mimArmand
    @mimArmand 5 years ago

    Very cool!
    @5:24 Grayscale is quite a few bits deep; 1-bit depth would be black & white (which is not the case in your images, looks like you have at least 16-level images, if not standard 8-bit, 256-level grayscale)

  • @ericklestrange6255
    @ericklestrange6255 4 years ago

    Amazing, I didn't know you could visualize the high-rank features

  • @airraidsiren
    @airraidsiren 7 years ago

    After watching this, the one thing I don't feel is completely explained is where the convolution kernel values come from. At first he says they are things "like Sobel edge detectors", but later says they are not manually entered, but rather learned values. That leaves the obvious question of how are they initialized? Do they start as just matrices with random entries? During the training, how are they adjusted? Is the "training" some kind of iterative search for kernel values that give the strongest response (e.g. the values that most consistently uniquely identify the one digit being learned and most strongly reject the other 8 digits?) I could use a bit more explanation on what the training process looks like and how it adjusts all the kernels.
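For what it's worth, the standard answer to the question above is: yes, the kernels start as small random matrices, and training repeatedly nudges them by gradient descent so the output error shrinks. A toy numpy sketch (not the video's Caffe setup), in which a single randomly initialised 3×3 kernel learns to imitate a fixed Sobel edge detector:

```python
import numpy as np

def correlate(img, k):
    """Valid-mode cross-correlation (what CNN 'convolution' layers compute)."""
    kh, kw = k.shape
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                      for j in range(img.shape[1] - kw + 1)]
                     for i in range(img.shape[0] - kh + 1)])

sobel = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
rng = np.random.default_rng(1)
kernel = 0.1 * rng.standard_normal((3, 3))   # random initialisation

for step in range(2000):
    img = rng.random((10, 10))                             # one training example
    err = correlate(img, kernel) - correlate(img, sobel)   # output error
    # gradient of the squared output error w.r.t. each kernel entry
    grad = np.array([[np.sum(err * img[a:a + 8, b:b + 8]) for b in range(3)]
                     for a in range(3)])
    kernel -= 0.2 * grad / err.size                        # gradient-descent step

print(np.abs(kernel - sobel).max())  # small: the kernel has drifted toward Sobel
```

In a real network the error comes from backpropagating a classification loss rather than from a handmade target filter, but the mechanics are the same: random start, then many small corrections.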

  • @jonathanstrasburg3609
    @jonathanstrasburg3609 6 years ago +1

    I realize this isn't likely to get a reply this late, but I'm trying to replicate the configuration of this network. What activation function are you using for the first fully connected layer? Is it dotplus with a renormalization? I'm assuming FC2 is a softmax layer, so maybe they are both softmax.

  • @CarterColeisInfamous
    @CarterColeisInfamous 8 years ago

    13:16 What he wants to say is that if the images are segmented then it's much easier; the segmentation problem is hard. That's why Google captchas are all mushed up on each other. Google apparently fixed the segmentation problem by just training it to recognize multiple pairs of letters

  • @davidscarlett5097
    @davidscarlett5097 8 years ago +1

    How are the outputs of the multiple kernels at each layer managed? Are they somehow merged so that the kernels of the next layer all process the same input? Or do the 20 kernels of layer 2 operate on the 20 outputs of the layer 1 kernels respectively? And if the latter, then what happens when moving from a 20 kernel layer to a 50 kernel layer? Would some of the 20 kernels of the previous layer be duplicated twice, and others duplicated three times to make up the inputs to the 50 kernels in the new layer?

  • @dino130395
    @dino130395 7 years ago

    Seeing that a lot of people are confused by this video being 50 fps, I'd like to clear that up. 50 fps is a standard frame rate for television and video in general. 60 fps is a standard for animated and generated images, like animations or games. Sure, you can do either with both, but it's generally so that high-frame-rate TV broadcasts are always 50, not 60 fps.
    The scale for TV: 25, 50, 100, 200 Hz
    The scale for computers: 30, 60, 120 Hz
    (Hz = fps)

  • @black_platypus
    @black_platypus 8 years ago

    Now I see it, too... For some reason, YT gave me the video at a lower resolution (not watching it in full screen mode, I hadn't noticed), and I was thinking "I don't understand all the people complaining about the video being "wobbly", the video looks fine to me"... Then I saw I wasn't watching it at 50 fps, so I changed the quality.
    I, too, find it a bit weird. I guess stabilization doesn't quite kick in as hard when there are more frames to interpolate between?

  • @bofk7306
    @bofk7306 8 years ago

    Can you look at your last-but-one fully connected layer and calculate the typical "distance" between different digits? E.g. just the Euclidean distance on the normalized terms in FC1.
    Would those distances depend on the particular neural network you're using, or would they be similar across all successful neural networks? That is, could you say something like: a 1 and a 7 are typically closer than a 0 and a 4?
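The measurement proposed above is easy to compute once you can dump FC1 activations; the vectors below are random stand-ins, purely to show the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(0)
fc1 = rng.random((100, 500))        # pretend FC1 activations, 100 samples
labels = np.arange(100) % 10        # pretend digit label for each sample

# mean FC1 vector per digit class, L2-normalised as the question suggests
means = np.array([fc1[labels == d].mean(axis=0) for d in range(10)])
means /= np.linalg.norm(means, axis=1, keepdims=True)

# 10x10 matrix of pairwise Euclidean distances between digit classes
dist = np.linalg.norm(means[:, None, :] - means[None, :, :], axis=-1)
print(dist.shape)  # (10, 10); dist[1, 7] would be the 1-vs-7 distance
```

With real activations you could then compare this matrix across independently trained networks to answer the second half of the question.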

  • @CarterColeisInfamous
    @CarterColeisInfamous 8 years ago

    14:36 The Google captcha API morphs the filter the more samples you get, until the training data is useless. Those image-based captchas are also being broken after all the success of ImageNet

  • @anassgxz
    @anassgxz 8 years ago +1

    Can you do an episode on histograms of oriented gradients?

  • @germangb8752
    @germangb8752 8 years ago

    I'm taking a two credit course in deep learning next week!

  • @MrArunavadatta
    @MrArunavadatta 7 years ago

    Excellent!

  • @HyzerFlip
    @HyzerFlip 8 years ago +1

    Love the out of focus shots on the pictures...

  • @beautifulsmall
    @beautifulsmall 3 years ago

    Wrote Python convolution algos on bitmaps around this time just to teach myself Python; filters and convolutions are amazing to see in action. It's a little scary to see how far we are now in 2021. Covid hasn't stopped SW engineers.

  • @mikejones-vd3fg
    @mikejones-vd3fg 8 years ago

    So how do the neural networks do this? Are there speed advantages to this network vs just regular processing?

  • @Embedonix
    @Embedonix 8 years ago

    Can you share the caffe scripts you used please?

  • @SleeveBlade
    @SleeveBlade 8 years ago +1

    Really interesting! I would be interested to see whether it is possible to start from the final convolution and find which image fits it best, as in 'what looks the most like a 2'.

    • @ruben307
      @ruben307 8 years ago

      It would be interesting to know if there can be totally different pictures that would just get the same number. Similar to hashing collisions.

    • @black_platypus
      @black_platypus 8 years ago

      Well sure, that's basically the same concept (irreversible / one-way transformations giving you an abstract result)

    • @quakquak6141
      @quakquak6141 8 years ago

      I saw somewhere a neural network that was trained to fool convolutional neural networks. Sometimes it produced normal images (in this case it would have produced a 2); other times it produced something that looked almost like pure noise, but it was still able to fool the networks

  • @iroxudont
    @iroxudont 8 years ago +10

    >using windows as a host machine and linux as a guest

    • @ebolapie
      @ebolapie 8 years ago +4

      Why anyone would use HyperV over KVM is beyond me.

    • @box8250
      @box8250 8 years ago +8

      It might be university property

    • @michaelpound9891
      @michaelpound9891 8 years ago +21

      Almost. Actually this was TeamViewer, accessing a machine on another campus that is significantly more powerful than the one I have in this office. Deep learning doesn't really work in a guest due to needing the graphics card, which is tricky to get.
      The machine to the left is also Linux, and is used for deep learning, but someone else was using it at the time of this video!

    • @iroxudont
      @iroxudont 8 years ago +1

      Michael Pound
      Why not SSH?

  • @caw25sha
    @caw25sha 8 years ago

    I am one of those strange people who draws a horizontal bar through the number 7. How would you deal with that? Would you need a separate set of 7+bar training digits (in effect an 11th character) and then map both 7 and 7+bar back to 7?

  • @SapphireCrook
    @SapphireCrook 7 years ago +2

    I never knew YT even supported 50FPS. :O
    Also, cool computer learning. Today is a day of new smarts.

  • @3zehnutters
    @3zehnutters 8 years ago +21

    Is there a GitHub link to the project, Mike?

    • @recklessroges
      @recklessroges 8 years ago +1

      github or it didn't happen ;-) "I call GnuImageManip'Prog"

    • @CodeAbstract
      @CodeAbstract 7 years ago +3

      Why do you need a GitHub page? He literally just explained the full architecture of the CNN (Convolutional Neural Network) he built. Now, if you want to test this for yourself, you can easily implement everything he said. Just find the right programming language supported by the libraries needed to complete the task. He even mentioned what he used, but you could also look at Lua with Torch, for example. All the libraries he mentioned are already there, so you won't need to code any of the layers yourself, just wire them up.

  • @felixmerz6229
    @felixmerz6229 4 years ago

    I imagined that each layer uses all its kernels on all the images of the previous layer. But that can't be right, hearing that the last convolutional layer here only outputs in a size of 50*4*4. Does that mean that there essentially are "kernel pipelines"? So kernel0 of layer1 will only be fed with the output of kernel0 of layer0?

  • @G3rain1
    @G3rain1 8 years ago +2

    It would have been way more interesting to see different examples of the same number and how they translate into the same output.

  • @Locut0s
    @Locut0s 8 years ago +1

    Very interesting! I wonder if this gives some insight into how neurones in our brains work on a very basic level?

    • @cmptrn825
      @cmptrn825 8 years ago

      Points for making me look at my screen with my head turned 90 degrees to the left until I realize I look like a crazy person

  • @slayer646464
    @slayer646464 8 years ago

    amazing

  • @ahmadibraheem1141
    @ahmadibraheem1141 3 years ago

    Hey I have a question. After the first conv layer, we are left with 20 images of 24*24 pixels. Do these 20 images transform into one 24*24 sized image, to be given as an input to the next conv. layer?

    • @TheAbdelwahab83
      @TheAbdelwahab83 2 years ago

      No, after the first conv layer you have a volume (24*24*20), and that is the input to the next conv layer, whose kernels are of size (5*5*20). If you apply one such kernel to that input volume you get one image of size 20*20, and because you have 50 filters of (5*5*20), your output will be 20*20*50
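The shapes in the reply above can be checked with a few lines of numpy (random data; the layer sizes follow the video's LeNet-style network):

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.random((24, 24, 20))              # output of the first conv layer
filters = rng.standard_normal((50, 5, 5, 20))  # 50 kernels, each 5x5x20

out = np.zeros((20, 20, 50))
for f in range(50):                            # one 2-D map per filter
    for i in range(20):
        for j in range(20):
            # each output pixel sums over the 5x5 window AND all 20 channels
            out[i, j, f] = np.sum(volume[i:i + 5, j:j + 5, :] * filters[f])

print(out.shape)  # (20, 20, 50): not 400 maps, because channels are summed
```

The key point is in the inner line: every filter spans all input channels, which is why 20 inputs and 50 filters give 50 outputs, not 400.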

  • @CarterColeisInfamous
    @CarterColeisInfamous 8 years ago

    how were the kernels generated for this one?

  • @the1exnay
    @the1exnay 8 years ago

    Would I be wrong in thinking that if you gave a convolutional neural network the ability to control where to click and what to type, gave it enough convolutions and kernels (perhaps beyond what current computers can handle), and trained it enough, then it would be able to solve any captcha, even a new one with a different interface that still used the same basic principles?

  • @pequenoZero
    @pequenoZero 8 years ago

    It would be nice if you talked a bit about how much data is needed for a CNN to be any kind of useful. The datasets in this video seem extremely big. Specifically, it would be nice to have an idea of how well it works on many "categories" with a low amount of data.

  • @MeAtHome5
    @MeAtHome5 4 years ago +1

    I always do a *very* firm two. 06:55

  • @tamebeverage
    @tamebeverage 4 years ago

    Excuse me if I have missed something obvious, but I'm not sure I understand what the input of, say, C2 is. Is it a sort of average of all of the images produced by C1?

  • @andre.queiroz
    @andre.queiroz 7 years ago +1

    For people interested in this experiment: you can actually do it in the Machine Learning course (Stanford) on Coursera

  • @CDmc98
    @CDmc98 7 years ago

    Shouldn't one be able to generate characters (letters, whatever) by going the other way around? I'm thinking, what if you tell it to generate a picture from a fully connected layer?

  • @elanvanbiljon5237
    @elanvanbiljon5237 8 years ago

    Is there any chance you could upload a copy of the source code for the CNN somewhere (or even pseudocode)? I am sure many people would greatly appreciate it :D

  • @denischikita
    @denischikita 11 months ago

    You look like a smart guy. Well done

  • @JohnHollowell
    @JohnHollowell 8 years ago +2

    I wonder, can you work backwards somewhat to get a general idea of what the original image looked like from the convolution layers?

    • @RoySchl
      @RoySchl 8 years ago +1

      I don't think so; that would be like asking "what 5 numbers did I multiply to get 3600?" There is only one possibility if you do the multiplication, but many possibilities when you try to guess backwards, and with those convolutions it's the same thing, just exponentially worse.
      Basically you drop a lot of information

    • @DagarCoH
      @DagarCoH 8 years ago

      Well, I'd say yes: if you have all the information every layer puts out, you just have to reverse the process each layer performed on its input. Since you have many processes on the one image, there should be a lot of redundancy and therefore high certainty. If you only have the output of the sixth convolution layer, however, I highly doubt that you could get much out of it.

    • @compuholic82
      @compuholic82 8 years ago +1

      Partially. The problem is that (in general) a convolution is not a reversible operation. However, you can apply something that is known as a "matched filter" which is basically a convolution with the transposed filter kernel. If you go backwards through the network you can (to some degree) reconstruct the input signal. If you look at this paper you can see how the reconstructions look like: arxiv.org/pdf/1311.2901v3.pdf
      And just to prevent confusion: The author calls it "Deconvolution". But he isn't doing a "deconvolution" as he describes in his paper. He is applying a "matched filter".
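A toy version of the matched-filter idea in the reply above (random image and kernel here; real "deconvnet" visualizations also unpool and pass back through the nonlinearity):

```python
import numpy as np

def correlate_full(x, k):
    """'Full' cross-correlation of x with a square kernel, via zero padding."""
    p = k.shape[0] - 1
    xp = np.pad(x, p)
    out = np.empty((x.shape[0] + p, x.shape[1] + p))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))
kernel = rng.standard_normal((3, 3))

response = correlate_full(img, kernel)                   # forward pass
backproj = correlate_full(response, kernel[::-1, ::-1])  # matched filter back

print(response.shape, backproj.shape)  # (10, 10) (12, 12)
```

The backprojection is larger than, and only loosely resembles, the original image, which is the sense in which convolution is not truly reversible.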

  • @MitchPleune
    @MitchPleune 8 years ago +11

    The blurry camera is annoying

  • @DrBlort
    @DrBlort 8 years ago

    How do you decide what the convolution kernels should be? Is that important, or could they be defined randomly at the beginning?

    • @rory4987
      @rory4987 4 years ago +1

      Neural network weights are set randomly and then learnt

  • @EQuivalentTube2
    @EQuivalentTube2 8 years ago +10

    The first layer looks like something Andy Warhol would do.

  • @gryzman
    @gryzman 8 years ago +2

    What's the library/program Dr Mike is using, please?

    • @realcygnus
      @realcygnus 8 years ago

      Big Corporate Top Secret

    • @MrVbarroso
      @MrVbarroso 8 years ago +7

      It's called Caffe.

    • @kolby4078
      @kolby4078 8 years ago +1

      Using Caffe on Linux

  • @vikassrivastava2680
    @vikassrivastava2680 7 years ago

    How can I get these algos if I want to do it on my machine?

  • @andreyguskov1697
    @andreyguskov1697 7 years ago

    If the first convolution layer has 20 filters and the second one has 20, does this mean that each C2 filter processes all 20 images from C1? That would make 400 images for the C2 output

  • @Remix00zero
    @Remix00zero 8 years ago

    So would it be possible to use convolutional neural networks for something general like arbitrary image matching or are they limited to narrowly trained applications like the one here?

    • @2Cerealbox
      @2Cerealbox 8 years ago

      You can. I think google uses a neural net for their "visually similar images" feature.

    • @paulhendrix8599
      @paulhendrix8599 8 years ago

      nice picture.

  • @thejll
    @thejll 1 year ago

    What if the training images have digits drawn at different scales?

  • @HolowatyVlogs
    @HolowatyVlogs 8 years ago

    Is this the first Computerphile video you've shot at 50?

  • @lohphat
    @lohphat 7 years ago

    How do you replicate the learned connections to other systems? How is the "knowledge" abstracted for transport, backup, and further improvement?
    With discrete programming, the instructions are compact and finite and are easily copied.

  • @fatehourghi1083
    @fatehourghi1083 4 years ago

    The best explanation of CNNs, thanks

  • @david2sdad
    @david2sdad 6 years ago

    Is captcha a method to filter out bots, or is it a way to coerce humans into training an AI?

  • @TheGameFreak013
    @TheGameFreak013 5 years ago

    How do you know how many kernels, layers, etc. are best suited for your needs?

    • @m3n4lyf
      @m3n4lyf 5 years ago

      That is an excellent question! Unfortunately it requires at least a moderate amount of knowledge in the subject matter to answer, so I doubt you'll be getting a satisfactory response from this resource any time soon.

  • @DrNaserRazavi
    @DrNaserRazavi 8 years ago

    Actually, the correct pronunciation of LeNet is "Lo-Net". "Le" in French is like "The" in English, but just for masculine nouns.

  • @0xChRS
    @0xChRS 8 years ago +24

    whoa why is it at 50 fps?

    • @ChristopherPuzey
      @ChristopherPuzey 8 years ago +47

      europe is poor and can't afford the extra 10

    • @deadalusdx5637
      @deadalusdx5637 8 years ago +4

      muh socialism

    • @AuroraNora3
      @AuroraNora3 8 years ago +1

      Except for Scandinavia

    • @CJSwitz
      @CJSwitz 8 years ago +17

      Based on PAL (25/50/100 Hz), whereas NA is based on NTSC (30/60/120 Hz). There is no technical reason for it anymore, but it was originally because it was clocked off the AC power grid, which ran at 60 Hz in NA and 50 Hz in Europe.

    • @ChristopherPuzey
      @ChristopherPuzey 8 years ago

      "So my monitor runs at 60 fps"

  • @marcoswappner8331
    @marcoswappner8331 4 years ago

    Let's see if someone can help me out here. The first layer here outputs 20 24×24 images (or a 20-channel image) after performing all the convolutions. The second layer will output 20 20×20 images. But how are they constructed? How do they combine the 20 channels from the previous layer? I mean, they are not applying all 20 filters to each of the 20 channels; that'd be a 400-channel output. Do they simply add the convolutions for each channel up? So channel 1 of layer 2 is the sum of the convolutions between kernel 1 of layer 2 and each of the 20 channels of layer 1?