Explaining the AI black box problem

  • Published Jul 13, 2024
  • Sheldon Fernandez, CEO of DarwinAI, explains to Tonya Hall what the AI black box is, the issues surrounding it, and how we can transform AI's black box into a glass box.
    FOLLOW US
    - Subscribe to ZDNet on YouTube: bit.ly/2HzQmyf
    - Watch more ZDNet videos: zd.net/2Hzw9Zy
    - Follow ZDNet on Twitter: / zdnet
    - Follow ZDNet on Facebook: / zdnet
    - Follow ZDNet on Instagram: / zdnet_cbsi
    - Follow ZDNet on LinkedIn: / zdnet-com
    - Follow ZDNet on Snapchat: / zdnet_cbsi
  • Entertainment

Comments • 5

  • @jareknowak8712 2 years ago

    Clear and informative.
    Good Job!!

  • @namishabhasin5857 3 years ago

    I want to know in which direction research is going in this field, and what the scope for research is.

  • @agisler87 4 years ago

    Very interesting discussion!

    • @AlgoNudger 2 years ago

      @comment sense Poor kid with poor analogies; it only shows your illiteracy in this field 🤣🤣🤣 I bet you don't even understand what algorithms are. 😂😂😂

  • @primodernious 2 years ago +2

    The network behaves like a sieve with letter-shaped holes formed in it by training. When you pour random letters through the network, only the shapes that fit the holes fall through, and only in the places where the holes were made; the rest of the network does nothing. The problem is that when you start training in other letters, they do not always create a letter-shaped hole in a blank spot. Sometimes the new holes overlap previous holes, and the next time you feed a random image to the network it can get confused: an F and a D can both be read as a D, because the holes have grown so big that other letters fall through them. This is the hills-and-valleys problem, and it is as old as the really ancient single-layer linear neural networks; nothing has really evolved there. The sieve is just an analogy.

    Patterns whose decimal values are too similar can be guessed to be the same even when they belong to different letters. The network stores its patterns in a highly fragmented, random mess. These problems were never solved when research on these networks began in the 80s. The data points in the network that correspond to a single letter are stored randomly, and every time you feed new letters into the network it trains those values randomly as well, overlapping previous data points.

    Technically, current neural networks, like the old ones, are capable of teaching themselves to do things they were not previously programmed to do. A self-driving car can teach itself to run over a pedestrian after being trained to follow a track, because the network is capable of corrupting itself over time: the values adjusted by back-propagation are not fed back into the same places in the network but get distributed a little randomly, and the similarity problem of the basic network can also confuse the self-learning network. It can teach itself moves that are worse than before, because the values with the highest accuracy become dominant in how the decimal numbers are combined into the output. That means the network can teach itself to make mistakes. The network does not care which pattern is the right one as long as the overall score of the sequence is higher; making one less mistake overall gives the network an advantage in navigating, in this case as a car, even while it makes a move that causes an accident in the meantime.

    These new errors will resolve themselves over time, and once the network has restored an improved pattern of behavior over a previous one it will be safer than before, but to get there it can make mistakes along the way. This is exactly what makes these neural black boxes so dangerous. I have seen people train basic neural networks in a car simulation, and the network does fantastically for many runs, until at some point it suddenly begins to make mistakes; then, after loads of mistakes, it improves and does even better. There is no function in the network that can prevent this dangerous behavior, because of the random nature of how the network improves itself.
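
The overlapping-holes effect this commenter describes is essentially what the literature calls catastrophic interference, and it can be reproduced in a few lines. Below is a minimal NumPy sketch (not from the video; the 3x3 bitmaps, class labels, and two-phase training schedule are all invented for illustration): a single linear softmax layer first learns two patterns, then keeps training only on a third pattern that shares most of its pixels with the first, so the old "hole" can be swallowed by the new one.

```python
# Minimal sketch of catastrophic interference ("overlapping holes").
# Everything here is invented for illustration: three 3x3 "letter"
# bitmaps, a single linear softmax layer, plain stochastic gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Flattened 3x3 bitmaps. F_like shares 8 of 9 pixels with D_like,
# so their "holes" in the sieve overlap almost completely.
D_like = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0], dtype=float)
L_like = np.array([1, 0, 0, 1, 0, 0, 1, 1, 1], dtype=float)
F_like = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0], dtype=float)

def softmax(z):
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def train(W, examples, steps=500, lr=0.5):
    """SGD on cross-entropy for one linear layer, one example at a time."""
    for _ in range(steps):
        x, y = examples[rng.integers(len(examples))]
        grad_logits = softmax(W @ x)
        grad_logits[y] -= 1.0          # d(cross-entropy)/d(logits)
        W -= lr * np.outer(grad_logits, x)
    return W

def predict(W, x):
    return "DLF"[int(np.argmax(W @ x))]

W = np.zeros((3, 9))                   # 3 classes (D, L, F), 9 pixel inputs

# Phase 1: the network only ever sees D and L.
W = train(W, [(D_like, 0), (L_like, 1)])
print("after phase 1:", predict(W, D_like), predict(W, L_like))

# Phase 2: keep training, but ONLY on the overlapping F pattern.
# F's weights grow on the pixels it shares with D, so the old D
# "hole" can be overwhelmed without D's pixels changing at all.
W = train(W, [(F_like, 2)])
print("after phase 2:", predict(W, D_like), predict(W, L_like), predict(W, F_like))
```

Running this typically shows D classified correctly after phase 1 but misread as F after phase 2, even though D itself never changed; standard remedies such as replaying old examples alongside the new ones during phase 2 avoid the effect.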