Convolutional Neural Networks from Scratch | In Depth

  • Published 28 Sep 2024

Comments • 124

  • @jackfarah7494
    @jackfarah7494 1 year ago +48

    I have been researching CNNs for about a month now. Every video I watch, I end up more confused and with no answers. I can't express how grateful I am for this video. Thank you so much for this great content and educational information. Keep it up, man!

    • @far1din
      @far1din 1 year ago +4

      Thank you my friend. Glad you got some value out of the video! 💯

    • @maxave7448
      @maxave7448 3 months ago

      @@far1din I absolutely love how this looks like a 3Blue1Brown video but doesn't throw a bunch of numbers and terms at the viewer nonstop. This is great for beginners!

  • @mendezzzzz123
    @mendezzzzz123 5 months ago +2

    This is amazing, thanks. There is nothing better for understanding this abstract concept than just visualizing it.

  • @chinmaythummalapalli8655
    @chinmaythummalapalli8655 27 days ago

    I racked my brain for hours and couldn't figure out why the feature maps aren't multiplying after each layer, and this video just helped me realize they become channels of images. It helped me relax, and I think I can go downstairs for dinner now.

    • @far1din
      @far1din 27 days ago

      Glad it helped! 😄

  • @kyugelblitz
    @kyugelblitz 2 months ago

    Can't express my gratitude, but here I am trying. Everything is shown in great detail and explained accurately and understandably. Keep up the good work.

  • @naveens482
    @naveens482 1 year ago

    I have learnt about CNNs on many platforms, but this video is the one that taught me exactly what I needed. Need more videos like this.

  • @notsoclearsky
    @notsoclearsky 1 year ago +22

    Bruh I can't thank you enough, this is some gold tier education literally. Keep up the good work

  • @im-Anarchy
    @im-Anarchy 10 months ago +2

    Sankyo zo munch vor this veautifull, vuonderfull ,amazing video. Arigatoya!

    • @far1din
      @far1din 10 months ago +1

      Thank you bradder :D

    • @im-Anarchy
      @im-Anarchy 10 months ago +1

      @@far1din Bradder???
      do you mean: A word for a particularly unattractive female, usually with reference to a slightly deformed smile.

    • @far1din
      @far1din 10 months ago +2

      😂😂 no my friend. Just saw this definition on urban dictionary.
      it’s a typo for «brother» 🥇

  • @jayeshkurdekar126
    @jayeshkurdekar126 1 year ago +3

    You are a gold 🥇 professor... wish I was a billionaire; I would have gifted it to you for your sheer clarity.

    • @far1din
      @far1din 1 year ago +2

      Haha, I hope you become a billionaire one day 💯

  • @ViralKiller
    @ViralKiller 1 year ago

    Amazing, didn't understand crap until you explained it with images...please make more

  • @shirmithNirmal-
    @shirmithNirmal- 23 days ago

    That was an awesome explanation

  • @SelfBuiltWealth
    @SelfBuiltWealth 1 month ago

    beautiful explanation❤

  • @Tezla0
    @Tezla0 1 year ago

    Really good content. You deserve more subscribers and views.

  • @manishsoni8806
    @manishsoni8806 8 months ago +1

    Awesome Explanation 😍😍

  • @lushbeard
    @lushbeard 1 year ago

    This is a fantastic level of explanation

  • @khayyamnaeem5601
    @khayyamnaeem5601 1 year ago +1

    Amazing content!

  • @OnlineClasses-rs5yf
    @OnlineClasses-rs5yf 1 year ago

    Great work... Much better than college professors

  • @debraheric2308
    @debraheric2308 1 year ago

    Wow such great content. Subscribed!

  • @khvnp1l0t
    @khvnp1l0t 4 months ago

    In the output layer, is it just the highest value after the calculations that makes the prediction? Wonderful video by the way, this has cleared up a lot of questions for me in general about how a CNN works!

  • @Temuei
    @Temuei 9 months ago

    Thanks, this video was very easy for me to understand

  • @ButcherTTV
    @ButcherTTV 4 months ago

    great video!!!

  • @HiepNguyen-bw6dj
    @HiepNguyen-bw6dj 1 year ago

    thank you so much! good explanation!

  • @tejan8427
    @tejan8427 5 days ago

    How do we know how many layers or filters we need at each layer? I mean, how can we construct our architecture?

  • @Deepak-ip1se
    @Deepak-ip1se 3 months ago

    Very nice video

  • @MalamIbnMalam
    @MalamIbnMalam 6 months ago

    Is there a website where we can solve sample problems pertaining to CNNs and RNNs?

  • @Satrix1689
    @Satrix1689 11 months ago +1

    Hi, I'm new to this. May I know how you get the bias term at the first layer, and also the bias term and weight at the output node?

    • @far1din
      @far1din 11 months ago

      Hello, the bias starts off as 0 and changes through training (backpropagation). The weights are generally assigned "randomly" and they also change through backpropagation. I will link some videos on backpropagation. I did not include the bias term in my video as I was focusing on the weights, but 3blue1brown did, and he has a really good video on this topic. Although he is explaining regular neural networks, this might help you get some clarity on the bias terms and the final fully connected layer :D
      Backpropagation in neural networks by @3blue1brown: th-cam.com/video/Ilg3gGewQ5U/w-d-xo.html
      Backpropagation in convolutional neural networks by me: th-cam.com/video/z9hJzduHToc/w-d-xo.html
      Training convolutional neural networks by me: th-cam.com/video/2nrrzIx_i4g/w-d-xo.html
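The initialization described in this reply can be sketched in plain Python. The layer sizes and the uniform range below are illustrative assumptions, not the values used in the video:

```python
import random

def init_layer(n_inputs, n_outputs, seed=42):
    """Initialize a dense layer as the reply describes: biases start off
    as 0, weights start "randomly"; both later change via backpropagation."""
    rng = random.Random(seed)
    weights = [[rng.uniform(-0.05, 0.05) for _ in range(n_inputs)]
               for _ in range(n_outputs)]
    biases = [0.0] * n_outputs  # bias starts off as 0
    return weights, biases

weights, biases = init_layer(n_inputs=98, n_outputs=10)
```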

  • @BooleanDisorder
    @BooleanDisorder 8 months ago

    But like, how did we learn to do this? How was the logic of all the layers thought out?

  • @hchattaway
    @hchattaway 1 year ago +1

    Awesome video that really helped me understand the inner workings!
    I've always wondered about each hidden layer... Is it true that for each kernel used there is one of these layers?
    And when using a tool like PyTorch, are there just standard kernels that are used for pulling out features? Is there control over the makeup of those kernels?
    Also, I can imagine, depending on the nature of the images being trained, that custom kernels could be created to best pull out features for a particular data set?
    Thanks for the awesome work!

    • @far1din
      @far1din 1 year ago +2

      Thank you my friend!
      1. I'm not sure what you are referring to, but each convolutional layer will output an activation layer. If this activation layer is «sandwiched» between the output and the input, it's called a hidden layer.
      A convolutional layer usually has many filters within it. Each filter will return one activation matrix. The activation layer is basically a combined term for all the «activation matrices». Also, some publications refer to the pooling layer as an individual layer while others don't. In this video the pooling was not considered an individual layer.
      2. There are different methods for initializing the kernels/filters, but here we just used the default initialization, which is called «glorot uniform». After the training process, you can save the weights and reuse them as you would like. You could also use different initializations or try your own custom ones in PyTorch etc. See reference 1.
      3. That's correct. The convolutional neural network shown in this video will be excellent at predicting handwritten digits, but would do poorly at detecting, for example, handwritten letters. However, the weights can be reused in order to train a model which detects handwritten letters. See ref 2, where Andrew Ng explains how to implement open source models.
      Hope this answered your doubts! 🚀
      Reference 1: discuss.pytorch.org/t/initialize-weights-of-convolution-layer/52672
      Reference 2: th-cam.com/video/cFFu__mcoIw/w-d-xo.html
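The «glorot uniform» default mentioned in point 2 draws each weight from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)). A minimal sketch, with fan sizes chosen to mimic a 5×5 filter (illustrative, not necessarily the video's exact configuration):

```python
import math
import random

def glorot_uniform(fan_in, fan_out, n_values, seed=0):
    """Sample n_values weights from the Glorot/Xavier uniform distribution:
    U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))."""
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    rng = random.Random(seed)
    return [rng.uniform(-limit, limit) for _ in range(n_values)]

# For a conv filter, the fans scale with the receptive field and channel counts:
fan_in, fan_out = 5 * 5 * 1, 5 * 5 * 2
kernel = glorot_uniform(fan_in, fan_out, n_values=5 * 5)
```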

  • @윤기좔좔엉덩이
    @윤기좔좔엉덩이 1 month ago

    What are the criteria for setting filters?

  • @domahidipeter6092
    @domahidipeter6092 3 months ago

    Does the Activation 1 layer have a dimension of (28 - 5 + 1 = 24), i.e. 24×24?
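The arithmetic in this question is right for a "valid" convolution (no padding): a 28×28 input convolved with a 5×5 kernel at stride 1 gives 28 − 5 + 1 = 24, i.e. a 24×24 activation map. The general rule, as a small sketch:

```python
def conv_output_size(input_size, kernel_size, stride=1, padding=0):
    """Spatial output size of a convolution: floor((W - K + 2P) / S) + 1."""
    return (input_size - kernel_size + 2 * padding) // stride + 1

print(conv_output_size(28, 5))            # 24 -> a 24x24 activation map
print(conv_output_size(24, 2, stride=2))  # 12 -> after 2x2 pooling at stride 2
```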

  • @hihaoay8042
    @hihaoay8042 5 months ago

    Well, I have a question: the final results are 7 and 3.6, so the model predicts the input number is 7. What about the others? I mean, in the fully connected layer from 0 to 100, they could be 0, 8, 9, 10, right? So what values will they predict? Thank you so much for your video.

  • @mennobangma
    @mennobangma 1 year ago +1

    So what is the first convolutional layer based on? Why do these filters work so well at 'analysing' numbers? What kind of edges or shapes do they detect?

    • @far1din
      @far1din 1 year ago +4

      The weights were initialized randomly at the beginning and trained for 100 epochs/iterations, if I'm not mistaken (this was a couple of months ago).
      The training data is from the MNIST dataset, which is a dataset containing only handwritten digits. That's why this network is capable of detecting handwritten digits.
      The edges and shapes detected by each of these filters post-training can be seen in the activation layers. These are the layers to the right of each filter after the convolutions.
      For an untrained network that doesn't have «trained filters», the output will most likely be blurry, have «random pixels», and the model will output «random values». As you train the model (backpropagation), the filters in the model will learn to detect shapes etc., as seen in the video.
      I should probably make a visualization for the entire process 🤔
      I hope this clarified some of your questions!

  • @rotemlv
    @rotemlv 1 year ago +1

    I'm curious why you used the sigmoid function in particular - did you get better accuracy using it in this model than with 2 ReLUs?

    • @far1din
      @far1din 1 year ago +1

      Hey Rotem, I only used sigmoid to show that different activation functions could be used. I didn't want anybody watching to think that ReLU is the only activation function. This video was made for educational purposes, and I didn't think much about the accuracy as it was above 90%.
      On a side note, I was also supposed to show max and average pooling, but realized I had used max pooling for both after doing all the animations 😪
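The two activation functions discussed here, plus the two pooling variants mentioned in the reply, are simple enough to sketch directly (a pooling "window" below is just the list of values a 2×2 window covers):

```python
import math

def sigmoid(x):
    """Squashes any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Keeps positive values, zeroes out negatives."""
    return max(0.0, x)

window = [1.0, 3.0, 2.0, 0.0]           # values under one 2x2 pooling window
max_pooled = max(window)                # max pooling keeps the largest value
avg_pooled = sum(window) / len(window)  # average pooling takes the mean
```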

    • @rotemlv
      @rotemlv 1 year ago +1

      ​@@far1din Thanks for the reply. Yeah your logic regarding showing the alternatives makes sense. It's just that from what I read, sigmoid isn't recommended (basically just "use ReLU", since using sigmoid can "kill" the gradient more easily than ReLU).
      Also (forgot to say this in my comment) - these videos are very informative and easy to follow, kind of a 3b1b vibe, with the animations.

    • @far1din
      @far1din 1 year ago

      Thank you. These animations are made with the same library (manim) that 3b1b created! :)

  • @iancoify
    @iancoify 5 months ago

    wow ty!
    -n

  • @arpitgaur4310
    @arpitgaur4310 3 months ago

    This was missing in 3b1b's video on CNNs.

  • @TheSquareClasses
    @TheSquareClasses 3 months ago +1

    You explained everything in detail with mathematics except the fully connected layers and how they work.
    Please explain that one in a different video.

  • @HoshRampageZA
    @HoshRampageZA 1 year ago +6

    Wow! I finally feel like I understand neural networks including the math at every stage. I have never seen a complicated math formula broken down so simply and elegantly. You are an excellent teacher. Thank you for this video. Subscribed.

  • @Number_Cruncher
    @Number_Cruncher 1 year ago +8

    You nicely explained the action of each layer. I wonder if there is an interpretation of the visuals that were seen in the intermediate steps. Or it would also be nice to see how the filters evolve from random to their trained configuration. Can the values of the filters be interpreted somehow? I think of edge detection, gradients or something similar.

    • @far1din
      @far1din 1 year ago +3

      Thank you!
      There is no exact interpretation of the values within the filter that I am aware of. Please comment below if there is.
      What can be seen, from this video and others, is that the first layers will detect the shapes, and the deeper layers will interpret more complex features.
      Although this is not a mathematical proof, you can see this effect by, for example, creating three copies of the same model, initializing different weights for each of the three models, and training them on the same set of data. You will notice that the filters converge to different values, but the outputs for each layer are somewhat the same. You start with some form of edge detection and move on to more complex features that in my opinion are hard to identify, at least for handwritten digits. I will try to make a video visualizing the training process so that this effect can be seen! ☺
      Here is a video by Andrew Ng explaining what deeper layers are learning: th-cam.com/video/ChoV5h7tw5A/w-d-xo.html

    • @far1din
      @far1din 1 year ago

      th-cam.com/video/2nrrzIx_i4g/w-d-xo.html

  • @rangilanaoermajhi1820
    @rangilanaoermajhi1820 1 year ago +3

    Brilliant! Looking for more visual representations! In the final softmax layer, it would be great if you could also explain how it gets to 7 using the learnt parameters. Big thanks 👍

  • @DmitrievAlexander
    @DmitrievAlexander 11 months ago +1

    First, thank you really very much! Question: if filters are 'generally random' and then 'trained' through feedback, does that mean we don't really know why the system recognises the image? Why it detects this image as '7'? Am I right? (But again, thank you very, very much. I'm a visual person, and math converted to images explained everything crystal clear!)

    • @far1din
      @far1din 11 months ago +1

      That is correct. This is a really, really small model, and it has 1138 trainable parameters. Bigger models like ResNets have tens of millions of trainable parameters. There is no way, at least as of today, that a single person or a group of people can pick and choose/guess what numbers to put in the filter. Although it should be possible :P The probability for this is almost zero.
      However, we have a feedback loop where we start off randomly and train the model with backpropagation. I will link some videos below. You can almost think of this as regression. In linear regression, you start with a scatter plot of points, but you don't know what function will give you the best fit line:
      - A line has the formula ax + b. You do some math, and you can solve for the constants a and b. Add more points, and you'll most likely get different a and b values.
      - For this model, you do backpropagation and solve for 1138 parameters. You change the training images, and you'll most likely get different filter values and biases.
      I hope this made sense! :)
      Backpropagation in neural networks by @3blue1brown: th-cam.com/video/Ilg3gGewQ5U/w-d-xo.html
      Backpropagation in convolutional neural networks by me: th-cam.com/video/z9hJzduHToc/w-d-xo.html
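The "solve for a and b" step in the regression analogy above has a closed form; a quick sketch with made-up points that lie exactly on y = 2x + 1:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b (the closed form the analogy alludes to)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```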

  • @paedrufernando2351
    @paedrufernando2351 1 year ago +3

    You helped clear up the finishing clincher for me in the world of AI... can't thank you enough.

  • @oculotronicstest2866
    @oculotronicstest2866 1 year ago +1

    Hi, can anyone tell me how the weights are assigned in the last fully connected layer? Thanks in advance : )

    • @far1din
      @far1din 11 months ago

      Initially it's «random», but then it gets trained through backpropagation!

  • @overtrust7143
    @overtrust7143 8 months ago +1

    Awesome

  • @JamieTx23
    @JamieTx23 4 days ago

    Excellent video! Thanks for taking the time and breaking it down so clearly.

  • @bambusleitung1947
    @bambusleitung1947 1 year ago +1

    Does the convolutional layer always have a stride of 1?

    • @far1din
      @far1din 1 year ago

      No, it’s something you choose.
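Stride is a hyperparameter you pick per layer. A 1-D sketch (technically cross-correlation, as most CNN libraries implement "convolution") shows how a larger stride skips positions and shrinks the output:

```python
def conv1d(signal, kernel, stride=1):
    """'Valid' 1-D convolution; stride controls how far the kernel jumps."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

signal = [1, 2, 3, 4, 5, 6]
print(conv1d(signal, [1, 1], stride=1))  # [3, 5, 7, 9, 11]
print(conv1d(signal, [1, 1], stride=2))  # [3, 7, 11]
```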

  • @r0cketRacoon
    @r0cketRacoon 2 months ago

    What happens if I specify conv layer 2 to have only 2 dimensions? Will the same kernel be applied to both images and then be added?

  • @boklausen9583
    @boklausen9583 1 year ago +2

    Brilliant explanation!! Thank you so much for sharing! Now, what is the magic (or heuristics) behind defining the various kernels and pools (sizes, strides and contents)?

    • @far1din
      @far1din 11 months ago

      It's an iterative process: trial and error until you get the best results.
      Andrew Ng actually talks about choosing hyperparameters in this video: th-cam.com/video/1waHlpKiNyY/w-d-xo.htmlsi=saG0hYPuKg5yHiji

  • @SolathPrime
    @SolathPrime 1 year ago +3

    Wow first

    • @far1din
      @far1din 1 year ago +2

      😂🔥

  • @Aldotronix
    @Aldotronix 3 months ago

    I can't understand how a computer can figure out an image after many convolutions; it seems like magic.

  • @kemaldursun8192
    @kemaldursun8192 3 months ago

    Thank you, man. It's great content and it helped me so much.

  • @birajkumarkaranjit7259
    @birajkumarkaranjit7259 9 months ago +1

    Very well explained

  • @tamurhaq
    @tamurhaq 1 year ago +1

    Excellent content. You've made this keeping in mind the viewer's intuition. Keep making more just like this. ❤

  • @SamuelMoyoNdovie
    @SamuelMoyoNdovie 25 days ago

    What an explanation man 🫡

  • @rubytejackson
    @rubytejackson 28 days ago

    Exceptional explanation you did! I have several questions, but first I'd like to ask: is it OK to support you via the Thanks button, since I don't have a PayPal account?
    Thanks
    Warmest regards
    Ruby

    • @far1din
      @far1din 27 days ago

      Of course, my friend! Feel free to shoot me a DM on X if you have any questions as well 💯

  • @sachink9102
    @sachink9102 6 months ago +1

    WooooooW ! I am speechless man ! you are THE genius

  • @ratfuk9340
    @ratfuk9340 1 year ago +1

    Awesome, this cleared things up for me. Thanks!

  • @let1742
    @let1742 1 year ago +1

    Thank you! This is the clearest explanation I've seen; I hope you will continue to produce videos of this kind!

  • @boramin3077
    @boramin3077 2 months ago

    Great explanation!

  • @eneadriancatalin
    @eneadriancatalin 1 year ago +1

    One mention: at 9:14 the sigmoid function is 1/(1+e^(-x)), and your x is already -7.36, so it will be 1/(1+e^7.36), which is almost 0 (≈0.00064)

    • @far1din
      @far1din 1 year ago +1

      Yes my friend. I had to «scale up» the pixels in order for them to be seen.
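The arithmetic from the comment above can be checked directly; sigmoid(-7.36) = 1/(1 + e^7.36) is indeed vanishingly small, which is why the pixels had to be scaled up to be visible:

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

value = sigmoid(-7.36)  # roughly 6.4e-4, effectively zero on screen
```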

  • @ahmedhesham3125
    @ahmedhesham3125 6 months ago

    good video

  • @tobiaspucher9597
    @tobiaspucher9597 4 months ago

    amazing

  • @keremkezer6826
    @keremkezer6826 8 months ago

    👏👏👏👏👏👏👏👏👏👏👏👏

  • @nelsonvanduin2583
    @nelsonvanduin2583 1 year ago +1

    Sick!

  • @nitishaggarwal-i8y
    @nitishaggarwal-i8y 1 year ago

    Awesome, man

  • @mateokladaric
    @mateokladaric 2 months ago

    Finally someone who doesn't just say "it convoluted the image and poof one magic later it works"

  • @doctorshadow2482
    @doctorshadow2482 1 year ago

    Thank you for the nice visualization.
    Two points:
    1. You promised an in-depth explanation; will it follow? In this video you don't explain where you get these filters/kernels from; an in-depth explanation doesn't assume something is "just given". I need to understand where to get them and how exactly.
    2. There are tons of videos on YouTube on this topic. It would be nice if you set yourself apart by explaining, for example, how all this could work with shift/rotation/scale of the image. Nobody covers this.

  • @bdeceulaer
    @bdeceulaer 1 year ago +1

    Brilliant visualisation and explanation! Your video clarified for me in minutes the difference between a convolutional layer and a fully connected one, the meaning of stride size, max pooling and activation functions. What is the impact of different activation functions? I assume weights, biases and filter values are determined iteratively during training. It would be great to have a visualisation video of that training phase for this same image recognition example.

    • @far1din
      @far1din 1 year ago

      Hi Bart, and thank you!
      1. The activation function.
      The activation function induces non-linearity into the network. I highly suggest you watch Andrew Ng's video on this, as he explains the mathematics behind it. I have referenced it below for you! :)
      An intuition I once heard that stuck with me is that you want these neurons in the hidden layers to fire. When using activation functions such as ReLU, this is exactly what happens. If the calculated value goes below zero, the "neuron" in the next layer is set to zero.
      2. Weights, biases and filters are set iteratively? You are correct that the weights and biases are set iteratively during backpropagation; the filter values are trained the same way, while their sizes and counts are fixed/predefined.
      I will try to make a video on the complete training process for the next video! :)
      ref: th-cam.com/video/NkOv_k7r6no/w-d-xo.html

    • @far1din
      @far1din 1 year ago

      th-cam.com/video/2nrrzIx_i4g/w-d-xo.html

  • @RAHUL1181995
    @RAHUL1181995 1 year ago

    This was really helpful... Thank you so much for the visualization... Keep up the good work... Looking forward to your future uploads.

  • @SelfBuiltWealth
    @SelfBuiltWealth 1 month ago

    This is a very unique and underrated explanation! Beautiful work, thank you so much ❤

  • @tebs1989
    @tebs1989 4 months ago

    This is the clearest and most exceptional video explanation of CNNs that I have seen so far. Thank you so much!

  • @immohobot9288
    @immohobot9288 9 months ago

    Nice explanation. It was really helpful. Thanks.

  • @riturajput9040
    @riturajput9040 4 months ago

    How are the weights initialised?

    • @andyh3970
      @andyh3970 3 months ago

      That is done during training via backpropagation. Here's a picture, the answer is 7; now adjust the weights backwards so the output neuron for 7 turns on.
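"Adjusting the weights backwards" is gradient descent: each weight is nudged against its gradient. A one-step sketch with made-up numbers (the gradients here are hypothetical, standing in for what backpropagation would compute):

```python
def sgd_step(weights, grads, lr=0.1):
    """One gradient-descent update: w <- w - lr * dL/dw."""
    return [w - lr * g for w, g in zip(weights, grads)]

w = [0.5, -0.2]
g = [1.0, -2.0]  # hypothetical gradients from backpropagation
w = sgd_step(w, g)
```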

  • @samruddhisaoji7195
    @samruddhisaoji7195 1 year ago

    Thank you! Your explanation and animations were very helpful!

  • @卞正-s5y
    @卞正-s5y 1 year ago

    I have learned a lot from this video. It is beneficial for people like me who haven't studied CNNs at all and want to learn something.

  • @Ivan-fz3ou
    @Ivan-fz3ou 1 year ago

    Awesome work! This gave me a new insight and understanding of CNNs; the intricacies and math of how it works.

  • @Blackprogger
    @Blackprogger 1 year ago

    Thank you for this great explanation! It couldn't be explained any better! Very nicely visualized and explained step by step! The best explanation of CNNs I've seen so far! Thanks!

  • @andyh3970
    @andyh3970 3 months ago

    The single best explanation I have seen. 15/10

  • @pappo-nc5yh
    @pappo-nc5yh 9 months ago

    Great video and very clear explanation, thanks!

  • @FelLoss0
    @FelLoss0 1 year ago

    Very well explained!!! New subscriber here :)

  • @prashantkesharwani9205
    @prashantkesharwani9205 8 months ago

    Thank you so much for creating such informative content. Keep it up; your channel is so underrated!

  • @danny2704
    @danny2704 1 year ago

    Really like this visualization!!

  • @imadboukhari8033
    @imadboukhari8033 9 months ago

    Great video, man, better than anyone out there. Thank you.

  • @peterpan0201
    @peterpan0201 1 year ago

    This is actually very good!

  • @samuelbrouwer430
    @samuelbrouwer430 1 year ago

    This could not be more clear. Thank you!

  • @lpschaf8943
    @lpschaf8943 1 year ago

    Even Brain Age for the DS had A.I.

  • @louissimon2463
    @louissimon2463 1 year ago

    This is excellent, thank you

  • @imotvoksim
    @imotvoksim 1 year ago

    Very thorough and great visualizations!

  • @sajanphilip8221
    @sajanphilip8221 1 year ago

    Best Explanation ever

  • @joseluisdiaz233
    @joseluisdiaz233 1 year ago

    An amazing job, thank you for your time and for sharing

  • @satellitesabunim
    @satellitesabunim 1 year ago

    Excellent video.

  • @adrianhochla3664
    @adrianhochla3664 1 year ago

    I really like this!

  • @jaybhatt6775
    @jaybhatt6775 1 year ago

    Wow, amazing illustrations!

  • @emadhajaj4245
    @emadhajaj4245 10 months ago

    Great work! Actually, it's one of the most beautiful videos made on AI.

    • @far1din
      @far1din 10 months ago

      Thank you my friend! 😃

  • @mysteriousXsecret
    @mysteriousXsecret 1 year ago

    7:13 Why do we have 4×2 filters?

    • @far1din
      @far1din 1 year ago

      Basically, you are free to choose how many filters you want and what size they should be. I chose the filters just so you (the viewers) could get a better understanding of the convolution process!

  • @XenoZeduX
    @XenoZeduX 1 year ago

    Great

  • @nithina5105
    @nithina5105 1 year ago

    Can you do a video on how Grad-CAM works?

    • @far1din
      @far1din 1 year ago

      Unfortunately, I'm not well-versed in the concept of Grad-CAM. :/

  • @way2on
    @way2on 5 months ago

    Can you believe this is overcomplicated? You could literally just do some upscaling, plus make it bi-directional.