The Jacobian matrix

Comments • 150

  • @raybroomall8383
    @raybroomall8383 6 years ago +400

    The "last video" referenced is 'Local linearity for a multivariable function'. Many years ago, back when Fortran was the coolest thing you could find in a computer, I tried to understand what linear algebra was. Back then it was hoped that if you kept solving enough problems, sooner or later the light would go on and you would know it all. As it turned out, my light was a small flickering candle. I'm 70 this year and now I have the time to take a closer look at the beauty of math. Thanks for presenting concepts rather than processes. Khan and 3b1b rock.

    • @ggsgetafaf1167
      @ggsgetafaf1167 5 years ago +2

      Thanks for your comment, now I know which video the "last video" is. :D

    • @joluju2375
      @joluju2375 5 years ago

      Thank you; for me the "last video" was "Jacobian prerequisite knowledge" in the playlist I'm watching ...

    • @suratvita
      @suratvita 5 years ago +1

      @@joluju2375 th-cam.com/video/VmfTXVG9S0U/w-d-xo.html

    • @lakshya6235
      @lakshya6235 3 years ago +6

      You rock yourself.

    • @leiberlyu1493
      @leiberlyu1493 3 years ago

      You are of great help!
      THX

  • @mrarkus7431
    @mrarkus7431 7 years ago +569

    This sounds and looks like 3blue1brown

    • @aseempatwardhan6778
      @aseempatwardhan6778 7 years ago +44

      Michael L: He is... or so I hear

    • @davepogue543
      @davepogue543 7 years ago +17

      That's awesome, I was just looking on his channel for this very topic

    • @luffyorama
      @luffyorama 7 years ago +68

      He IS 3B1B

    • @muzammil360
      @muzammil360 6 years ago +14

      Yeah, you are right. This does sound like 3Blue1Brown

    • @Fasteroid
      @Fasteroid 6 years ago +16

      I think that's because it is ( ͡° ͜ʖ ͡°)

  • @danielc4267
    @danielc4267 7 years ago +15

    It is helpful for intuition to multiply df1/dx at 2:37 by 1. df1/dx is a rate and you need to multiply it by 1 to give you the x-output of the unit vector (1,0). Note that the first column of the Jacobian represents what the unit vector (1,0) becomes after the transformation.

    • @aSeaofTroubles
      @aSeaofTroubles 7 years ago +15

      That is only true if the local linear approximation is still valid at further distances. Let me explain some more:
      The columns in the matrix track where the points (x_o+dx, y_o) and (x_o, y_o+dy) are mapped with respect to where (x_o, y_o) is mapped.
      For example, f(x_o+dx, y_o) - f(x_o, y_o) is the displacement between the images of (x_o+dx, y_o) and (x_o, y_o). This is simply our Jacobian times (dx, 0). Since the second entry is zero, we only recover the first column scaled by dx, namely (df_1/dx, df_2/dx)·dx — the rate of change of f with respect to x, times the step.
      Likewise, the Jacobian times (0, dy) gives the second column scaled by dy — the rate of change of f with respect to y, times the step.
      So the Jacobian matrix is mapping *differential* vector quantities that are in the direction of our original basis vectors. We can think of these differential vectors dx and dy as our new basis!
      But if we choose a basis that is very small, we had better make sure our transformation returns a number that isn't very small too. This is why we can imagine "normalising" by the small quantities "dx" and "dy" in the bottom of our matrix. In a normal transformation matrix, we know the denominator is simply "1". But since our function isn't actually linear, we do not have the luxury of using such a simple basis. We can only act on small vectors accurately, so we re-scale :)
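The column interpretation discussed in this thread can be checked numerically. A minimal sketch, assuming the map f(x, y) = (x + sin y, y + sin x) and the point (-2, 1) match the ones used in the video (treat both as placeholders): the first column of the analytic Jacobian should agree with the finite-difference quotient (f(p + (dx, 0)) - f(p)) / dx.

```python
import math

def f(x, y):
    # assumed example map, resembling the one in the video
    return (x + math.sin(y), y + math.sin(x))

def jacobian(x, y):
    # analytic partial derivatives of f
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

x0, y0 = -2.0, 1.0
dx = 1e-6
f0 = f(x0, y0)
f1 = f(x0 + dx, y0)
# finite-difference version of the first column: (f(p + (dx, 0)) - f(p)) / dx
col1 = [(f1[0] - f0[0]) / dx, (f1[1] - f0[1]) / dx]
J = jacobian(x0, y0)
print(col1, [J[0][0], J[1][0]])  # the two should agree closely
```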

  • @NovaWarrior77
    @NovaWarrior77 4 years ago +1

    Thank you. That last paragraph was just SO well constructed. Rest of the video too.

  • @SB-zv9ls
    @SB-zv9ls 1 year ago +1

    Simply awesome. I wish we could give him a Nobel prize or something.

  • @ClosiusBeg
    @ClosiusBeg 3 years ago

    The best explanation I've ever seen!

  • @Dhruvbala
    @Dhruvbala 4 years ago +1

    Great video! But one thing I don't quite get is why you divide by del x and del y to find the different components of the Jacobian. Could someone please explain?

    • @kartikvarshney9257
      @kartikvarshney9257 3 years ago +2

      Because we are looking for the ratio of how much the axis is stretched or squeezed

  • @joshace5988
    @joshace5988 3 years ago

    Legend! You explained that so well

  • @donlansdonlans3363
    @donlansdonlans3363 4 years ago +2

    How does he make all those animations? Such as bending the plane

    • @vishank7
      @vishank7 4 years ago +1

      He has made a project named "manim" for these animations. Check it out!

  • @yksnimus
    @yksnimus 4 years ago +2

    Where's the next vid? The vids should have previous and next links in the description

  • @kadirbasol82
    @kadirbasol82 4 years ago +1

    Is this extracting an axis of rotation? So this seems like an "eigenvector"? Is there any relationship between the Jacobian matrix and eigenvectors?

  • @samsmusichub
    @samsmusichub 5 months ago

    Hey thanks.

  • @inaamilahi5094
    @inaamilahi5094 5 years ago

    Simple and to the point (Y)

  • @harishd37
    @harishd37 5 years ago

    Shouldn't the origin also remain fixed? Won't we also need the information of where the origin moves? Just recording the information in a 2 × 2 matrix seems insufficient.

    • @doaby3979
      @doaby3979 5 years ago

      Harish D Keep in mind that when using this matrix, we’re only focusing on local points surrounding the point we originally focused on, not the grid as a whole. The fact that we’re taking partial derivatives automatically encapsulates this idea of locality. Also, the origin in this example moves because the matrix transformation isn’t linear.

  • @matejamartin2199
    @matejamartin2199 2 years ago

    Is this from precalculus?

  • @marcovillalobos5177
    @marcovillalobos5177 5 years ago +1

    Hold on, so gradient is a jacobian matrix of only 1 column because there is only f1(x)??

    • @Cessedilha
      @Cessedilha 4 years ago +1

      Only one row*
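The "one row" correction above can be seen concretely: a scalar-valued function has a 1 × n Jacobian whose single row is the gradient laid on its side. A tiny sketch with a made-up scalar function g (the function is an assumption, purely for illustration):

```python
def g(x, y):
    # scalar-valued function: one output, so the Jacobian has exactly one row
    return x * x + 3 * y

def jacobian_row(x, y, h=1e-6):
    # the single row [dg/dx, dg/dy] via central differences -- this IS the gradient
    return [(g(x + h, y) - g(x - h, y)) / (2 * h),
            (g(x, y + h) - g(x, y - h)) / (2 * h)]

row = jacobian_row(2.0, 5.0)
print(row)  # approximately [4.0, 3.0], i.e. (2x, 3) at x = 2
```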

  • @emfournet
    @emfournet 2 years ago

    3Blue1Brown? You take my hand in the darkness and lead me through perdition.

  • @charmatataa5858
    @charmatataa5858 2 years ago

    I love you, Grant

  • @ruralmetropolitan
    @ruralmetropolitan 7 years ago

    This is nice.

  • @Joshua-c
    @Joshua-c 5 years ago

    Brilliant

  • @mrscherryhuggypoinks9438
    @mrscherryhuggypoinks9438 7 years ago +3

    notif squad wer u at? no one? mkay....

  • @justin.t.mcclung
    @justin.t.mcclung 2 years ago

    No disrespect, but your symbol for the partial derivative needs work. It looks like a g

  • @lullubi5957
    @lullubi5957 4 years ago

    I don't understand nothing 🤦‍♀️

    • @aashsyed1277
      @aashsyed1277 3 years ago

      You don't understand anything or if u say nothing it means u understand :P

  • @ericobukhanich5575
    @ericobukhanich5575 2 years ago +218

    this guy knows a thing or two about maths. he should start his own channel

    • @marcopel83
      @marcopel83 2 years ago +5

      I knew it!

    • @jjqerfcvddv
      @jjqerfcvddv 2 years ago +18

      Dude!! that’s “Grant Sanderson” from 3blue1brown.

    • @sindhiyadevimaheshwaran3738
      @sindhiyadevimaheshwaran3738 1 year ago +14

      @@jjqerfcvddv bruh... that's the joke

    • @PurasamaMan
      @PurasamaMan 1 year ago +7

      @@sindhiyadevimaheshwaran3738 not everyone knows my friend - we must convert the vast unwashed into Grant's mathematical following

    • @bossdelta501
      @bossdelta501 1 year ago +1

      BRUH I LEGIT JUST THOUGHT ABOUT THAT

  • @deepakmecheri4668
    @deepakmecheri4668 5 years ago +48

    I've never seen someone make so much sense in my life. Grant, you are the GOAT

  • @hektor6766
    @hektor6766 3 years ago +9

    "Some years ago at Khan Academy, I made many videos and articles on multivariable calculus. " -Grant Sanderson (3blue, 1brown)

  • @cringotopia8850
    @cringotopia8850 1 year ago +2

    Wow, this guy is good at explaining math, maybe he should start an independent TH-cam channel or something...

  • @Lutterot
    @Lutterot 3 years ago +13

    Doing physics, I have been using Jacobians for years. This video finally lifted this beyond a 'trick' and gave me insight into what it really means.

  • @nishparadox
    @nishparadox 6 years ago +29

    Thanks Grant. You are awesome. Finally, I have understood what the Jacobian matrix really represents.

    • @maxpercer7119
      @maxpercer7119 4 years ago +1

      No you haven't. The Jacobian matrix is much deeper than this. He is just touching the tip of the iceberg.

    • @aryamanpatel8250
      @aryamanpatel8250 4 years ago +6

      @@maxpercer7119 Then please do point in the direction where I can gain an even deeper understanding.

    • @ryan_chew97
      @ryan_chew97 3 years ago

      @@aryamanpatel8250 you could say... please point him in the locally linear direction so he can arrive at his deeper destination ;)

    • @lampa298
      @lampa298 2 years ago +1

      @@maxpercer7119 oh, get lost

  • @ameyislive2843
    @ameyislive2843 4 years ago +4

    Wait is this The Talking Pi

  • @MayankGoel447
    @MayankGoel447 2 years ago +3

    Great video! I didn't get why the x component of transformed dx must be df1/dx

  • @giuseppealmontelalli840
    @giuseppealmontelalli840 4 years ago +3

    Hi, I have a question: how can I align a real surface to the CAD model by touching the real part? I have the 3D model, I take n points, then I go to the real part and start probing for the surface at the same coordinates (with a robot, for example). After that, if the real piece has a rotation/translation, I have to correct the error. Actually I don't know how to do it... could you recommend some techniques?

  • @MohitYadav09
    @MohitYadav09 1 year ago +1

    Can someone please explain: Grant said at 2:10 that the x component of the 2-D movement in output space is seen as the partial change in f1. Why do we say this? Why does that x component equal the partial of f1?

    • @gate2024-c5b
      @gate2024-c5b 1 year ago

      A change dx results in a change df, which has two components — those form the first column of the Jacobian matrix; similarly for dy

  • @sivasudharshan5444
    @sivasudharshan5444 5 years ago +3

    He is 3 blue 1 brown

  • @hr2441
    @hr2441 6 years ago +9

    I am totally confused......

  • @FugieGamers
    @FugieGamers 6 years ago +4

    Thanks for this video, it finally clicked for me, you are great.

  • @dan_pal
    @dan_pal months ago

    This is the only video on Jacobians that I actually understood. Why do other TH-camrs just start spitting equations? If I were able to understand just from that, I'd read the book!

  • @fritzzz1372
    @fritzzz1372 3 years ago +16

    This closely relates to divergence and curl.
    If you add the top-left and bottom-right entries of the final matrix (the Jacobian) — the partial derivative of f1 with respect to x plus the partial derivative of f2 with respect to y —
    you get the divergence.
    If you subtract the top-right entry from the bottom-left one — the partial derivative of f2 with respect to x minus the partial derivative of f1 with respect to y —
    you get the (2-D scalar) curl.
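The divergence/curl observation above can be sketched numerically. This reads the field as an assumed example map (resembling the one in the video), builds the Jacobian by central differences, and takes the trace and the off-diagonal difference:

```python
import math

def f(x, y):
    # assumed example vector field / map, for illustration
    return (x + math.sin(y), y + math.sin(x))

def jacobian(x, y, h=1e-6):
    # numeric 2x2 Jacobian via central differences
    f1x = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    f1y = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    f2x = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    f2y = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return [[f1x, f1y], [f2x, f2y]]

J = jacobian(-2.0, 1.0)
divergence = J[0][0] + J[1][1]  # trace: df1/dx + df2/dy
curl_z = J[1][0] - J[0][1]      # df2/dx - df1/dy (2-D scalar curl)
print(divergence, curl_z)
```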

  • @Melkboer38
    @Melkboer38 6 years ago +11

    The channel name is Khan something, but he sounds suspiciously like 3b1b

    • @RandyFortier
      @RandyFortier 6 years ago +4

      Same guy. Khan Academy has different teachers, one of whom is the same guy as from 3b1b.

  • @1.4142
    @1.4142 2 years ago +1

    soothing voice

    • @csaracho2009
      @csaracho2009 2 years ago

      It is 3Blue1Brown's voice!

  • @xruan6582
    @xruan6582 4 years ago +1

    The green and red arrows are too small. They should be zoomed in to give readers a clear idea of what happens when both x and y change a tiny amount

  • @venkatachaitanyayadla1794
    @venkatachaitanyayadla1794 3 years ago +1

    what is the name of this playlist?

  • @Juniper-111
    @Juniper-111 2 years ago +1

    grant has the most beautiful voice on TH-cam

  • @TBadalov
    @TBadalov 7 years ago +4

    Confused after transformation of the graphics :(

    • @aSeaofTroubles
      @aSeaofTroubles 7 years ago +7

      Perhaps my explanation to Daniel C above may help!
      We are mapping small changes (dx, 0) and (0, dy) to small changes of f:
      J (dx, 0) = (df_1/dx · dx, df_2/dx · dx) — the first column of the Jacobian scaled by dx, i.e. the change of f as x changes by dx.
      Likewise, J (0, dy) gives the change of f as y changes by dy.
      So, locally, we know how far we would move from the point we are evaluating if we took small steps.

  • @randomstuff9960
    @randomstuff9960 9 months ago

    Wow! He is a mathematical wizard...

  • @nicholasandrzejkiewicz
    @nicholasandrzejkiewicz 7 years ago +4

    Howto & Style? How is this not education?

    • @That_One_Guy...
      @That_One_Guy... 4 years ago

      What do you mean, how-to? This isn't some simple cooking tutorial or something, this is academic teaching

    • @nicholasandrzejkiewicz
      @nicholasandrzejkiewicz 4 years ago +1

      @@That_One_Guy... I didn't mean anything, that is how TH-cam categorized the video.

  • @rajibsarmah6744
    @rajibsarmah6744 4 years ago +1

    Change of variables in a double integral

  • @avneeshkhanna
    @avneeshkhanna 3 years ago +2

    Great video! However, I have a doubt. When you were tracking that yellow square, the grid lines transformed like a linear transformation. However, the grid itself translated to another coordinate [near (-1, 0)]. Since we know that translations are NOT linear transformations, then how can we say that the grid represents linear transformation?

    • @palantea1367
      @palantea1367 3 years ago

      He's not considering that translation, he's just considering the linear transformation around (-2,1). It's like when we are on Earth: we don't consider that the Earth is moving when we are doing some physics calculations.

    • @hektor6766
      @hektor6766 3 years ago

      At about 1:20, you can see he selects -2,1 on the original matrix. That selected point moves to near -1,0 after the various partial transformations performed throughout the video.

  • @smallvilleclark3990
    @smallvilleclark3990 1 year ago +1

    The best math teacher on TH-cam. Even kids can understand any big subjects he’s teaching.

  • @eStalker42
    @eStalker42 7 years ago +3

    What is a 1-dimensional Jacobian matrix? Just df/dx?

    • @nathanielsaxe3049
      @nathanielsaxe3049 7 years ago +4

      sounds right

    • @ozzyfromspace
      @ozzyfromspace 6 years ago

      Yup, 1x1 Jacobian Matrix is essentially a derivative of a univariate scalar function, 1 x m Jacobian Matrix is the transposed gradient vector of a multivariate scalar function. Cool beans.

  • @youzhisun3651
    @youzhisun3651 5 years ago +2

    This definitely sounds like 3blue1brown.

    • @x0cx102
      @x0cx102 4 years ago +2

      it is 3b1b! :)

  • @robmarks6800
    @robmarks6800 3 years ago +1

    What bothers me is that the nonlinear transformation translates a point to another, but this is never captured by the Jacobian matrix. Why isn't the translation important?

    • @joluju2375
      @joluju2375 3 years ago +1

      Same problem here. I hoped the next video "Computing a Jacobian matrix" would clarify that and answer this question, but nope.
      What is missing here is a real example of how the use of the jacobian matrix would give a satisfying solution to a problem otherwise too complicated.
      So far, the best I can understand is that the Jacobian matrix can simplify determining what's happening to the *neighborhood* of the point by using only linear functions, but I can't imagine a situation where I would need that.
      Finally, my best bet is that I missed something important.

  • @상추-o1f
    @상추-o1f 2 years ago

    What a nice lecture

  • @bevel1702
    @bevel1702 2 years ago

    Grant is on khan??

  • @crane8035
    @crane8035 2 years ago

    it’s grant !

  • @PunitSoni00
    @PunitSoni00 5 years ago +2

    which one is the next video? Is there a playlist for this series?

    • @slashholidae
      @slashholidae 5 years ago

      Did you ever find out?

    • @bharasiva96
      @bharasiva96 5 years ago +2

      Yes it indeed is. Here is the link to the entire playlist which is called "Multivariable Calculus":
      th-cam.com/play/PLSQl0a2vh4HC5feHa6Rc5c0wbRTx56nF7.html

  • @geophysicsadvancedseismice7542
    @geophysicsadvancedseismice7542 4 years ago

    Sir, what is the difference between Newton-Raphson and Gauss-Newton Methods... any video link regarding these methods?
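For what it's worth, the Jacobian is where these two methods meet: Newton-Raphson solves a square system F(x) = 0 by repeatedly solving J·Δ = -F, while Gauss-Newton handles over-determined least-squares problems by solving the normal equations JᵀJ·Δ = -Jᵀr instead. A minimal Newton-Raphson sketch on a made-up toy system (the system and starting point are assumptions, chosen only to illustrate the role of J):

```python
import math

def F(v):
    # toy system F(x, y) = 0: a circle of radius 2 intersected with the line x = y
    x, y = v
    return (x * x + y * y - 4.0, x - y)

def J(v):
    # analytic Jacobian of F
    x, y = v
    return [[2 * x, 2 * y], [1.0, -1.0]]

def newton_step(v):
    # solve J * delta = -F with 2x2 Cramer's rule, then update
    (a, b), (c, d) = J(v)
    f1, f2 = F(v)
    det = a * d - b * c
    dx = (-f1 * d + f2 * b) / det
    dy = (-f2 * a + f1 * c) / det
    return (v[0] + dx, v[1] + dy)

v = (1.0, 2.0)
for _ in range(20):
    v = newton_step(v)
print(v)  # converges to (sqrt(2), sqrt(2))
```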

  • @MrBemnet1
    @MrBemnet1 4 years ago

    Send me a message if anyone doesn't understand the concept

  • @puneetsharma2135
    @puneetsharma2135 3 years ago

    Should the value of the Jacobian matrix (its determinant) be high or low? What is its ideal value?
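There is no universal "ideal" value: the Jacobian determinant is the local area (or volume) scaling factor, so |det J| > 1 means the map locally expands area, |det J| < 1 means it shrinks it, and det J = 0 means it locally collapses dimensions. A small check, assuming the same example map as the video (an assumption): the area of the image of a tiny square, divided by the square's own area, approaches det J.

```python
import math

def f(x, y):
    # assumed example map
    return (x + math.sin(y), y + math.sin(x))

def jac_det(x, y):
    # analytic Jacobian determinant of f: 1*1 - cos(y)*cos(x)
    return 1.0 - math.cos(y) * math.cos(x)

x0, y0, h = -2.0, 1.0, 1e-4
# image of a tiny h-by-h square; shoelace formula for the mapped quadrilateral
pts = [f(x0, y0), f(x0 + h, y0), f(x0 + h, y0 + h), f(x0, y0 + h)]
area = 0.0
for i in range(4):
    x1, y1 = pts[i]
    x2, y2 = pts[(i + 1) % 4]
    area += x1 * y2 - x2 * y1
area = abs(area) / 2.0
ratio = area / (h * h)   # local area scaling factor
print(ratio, jac_det(x0, y0))
```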

  • @Tntpker
    @Tntpker 6 years ago

    I still don't understand why this relates to the Jacobian pointing in the direction of steepest ascent. So it's basically the gradient, but for functions that output vectors?

  • @solsticetwo3476
    @solsticetwo3476 5 years ago +1

    So, how do you know when it's a transformation and when it's a vector field?

    • @douglasmangini8744
      @douglasmangini8744 4 years ago +2

      It depends on how you see the input space, that is, whether it's filled with vectors (things that can be added to each other and scaled by numbers) or dots (simple pairs of numbers that cannot be added or scaled). If you think about vectors, then it's a transformation, like those you see in Linear Algebra, but this time they are not necessarily linear. If you think the function is mapping dots to vectors, then it's a vector field. But I think that Grant's point in this course is that those are two complementary ways of seeing the same thing; it's just that the transformation has this agile nature of taking vectors from one place to another, while vector fields are more static.

  • @Krishnajha20101
    @Krishnajha20101 6 years ago +1

    Are there functions that are not even locally linear? What would be an example of such a function?

    • @PfropfNo1
      @PfropfNo1 5 years ago +2

      Abs(x) at 0; 1/x at 0, etc.

    • @chandankar5032
      @chandankar5032 5 years ago +1

      @@PfropfNo1 Can you help a bit? I have a doubt: do all the entries in the Jacobian matrix represent a change in output space divided by a change in input space?
      Considering the Jacobian matrix, the images of the basis vectors of the input space should become the columns of the Jacobian.
      But I did not get how
      del f1/del x, del f1/del y, ... are obtained?
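The Abs(x)-at-0 example mentioned above can be made concrete: a function fails to be locally linear exactly where its one-sided difference quotients disagree, so no single slope (no Jacobian entry) describes the point. A minimal sketch:

```python
def absval(x):
    # |x|: smooth everywhere except at 0
    return x if x >= 0 else -x

h = 1e-8
right = (absval(0.0 + h) - absval(0.0)) / h  # slope approaching from the right: +1
left = (absval(0.0) - absval(0.0 - h)) / h   # slope approaching from the left: -1
print(right, left)  # the two disagree, so |x| has no derivative (no local linearization) at 0
```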

  • @antonienewman9379
    @antonienewman9379 5 years ago

    Why do you divide partial f by dx? I don't understand

  • @IJKersten
    @IJKersten 7 months ago

    This is so incredibly well explained.

  • @zeyad544
    @zeyad544 6 years ago +1

    what

  • @thefacelessmen2101
    @thefacelessmen2101 7 years ago +1

    Can you please add a link to the software you are using for this, and perhaps the code?

    • @connemignonne
      @connemignonne 7 years ago

      I bet it's proprietary

    • @farlyso
      @farlyso 7 years ago +7

      github.com/3b1b

  • @martovify
    @martovify 4 years ago

    what is the name of this series!?!?

  • @divyamgarg9078
    @divyamgarg9078 1 year ago

    Amazing video! Precisely what I was looking for. The physical intuition is so important for understanding a concept.

  • @mohamedusaid456
    @mohamedusaid456 4 years ago

    Super, more than super.

  • @Tara-li4hl
    @Tara-li4hl 1 year ago

    You're awesome❤

  • @GOODBOY-vt1cf
    @GOODBOY-vt1cf 4 years ago

    thank you so much

  • @SohamChakraborty42069
    @SohamChakraborty42069 4 years ago

    I have a question here, which kind of seems to be self-explanatory, but it would still be nice to get some confirmation. Is local linearity a property of every point in every transformation? The reason I ask this is that, due to non-differentiability at some points, we may not be able to calculate the value of some of the partial derivatives for certain kinds of functions. How should this be interpreted?

    • @Cessedilha
      @Cessedilha 4 years ago +1

      Locally linear = differentiable. If it's not differentiable at a certain point, this means that it can't be locally approximated by a linear transformation, and vice versa.

    • @SohamChakraborty42069
      @SohamChakraborty42069 4 years ago

      @@Cessedilha Thanks a lot!

  • @RCPN
    @RCPN 4 years ago +4

    Why did we divide delf1 and delf2 by delx and dely?
    I understood that the x component would be delf1 and the y component would be delf2, but then we divide them by delx and dely... Why?

    • @Cessedilha
      @Cessedilha 4 years ago +1

      The reason is that in the approximation the Jacobian is multiplied by the vector [delx,dely]. If the vector was [1,1] you'd be correct that it should be just delf1 and delf2.
      Think that the approximation (taking some liberties with notation) is dF = J*dX, where F is the vector of the function and X is the vector with the variables.
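The dF = J·dX point above can be checked numerically: the Jacobian prediction J(p)·dX matches the true change f(p + dX) - f(p) better and better as the step shrinks (the error falls off roughly quadratically). The map and point below are assumed to resemble the video's:

```python
import math

def f(x, y):
    # assumed example map
    return (x + math.sin(y), y + math.sin(x))

def J(x, y):
    # analytic Jacobian of f
    return [[1.0, math.cos(y)], [math.cos(x), 1.0]]

def approx_error(x0, y0, dx, dy):
    # compare the true change f(p + dX) - f(p) with the prediction J(p) @ dX
    f0 = f(x0, y0)
    f1 = f(x0 + dx, y0 + dy)
    true = (f1[0] - f0[0], f1[1] - f0[1])
    m = J(x0, y0)
    pred = (m[0][0] * dx + m[0][1] * dy, m[1][0] * dx + m[1][1] * dy)
    return math.hypot(true[0] - pred[0], true[1] - pred[1])

e1 = approx_error(-2.0, 1.0, 0.1, 0.1)
e2 = approx_error(-2.0, 1.0, 0.01, 0.01)
print(e1, e2)  # the error shrinks roughly 100x when the step shrinks 10x
```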

  • @exarkk
    @exarkk 5 years ago

    excellent

  • @antonienewman9379
    @antonienewman9379 5 years ago

    Partial derivatives represent rates, but I don't really get it. The values in the matrix should represent the coordinates of where the basis vectors land. Can someone make this clear?

    • @jordanjacobson6046
      @jordanjacobson6046 4 years ago

      Well, each of the partial derivatives gives you a function that tells you the rate of change of one output with respect to one variable, and when we evaluate it at a specific point, it's going to tell us what that change was. That's the important part, and he said it: we have to evaluate it, and then it turns into a matrix of numbers instead of functions.

  • @콘충이
    @콘충이 4 years ago

    Thank you!

  • @尾崎元恒-j2u
    @尾崎元恒-j2u 3 years ago

    I don't want any job other than actor or mathematician.

  • @dr_ingenium
    @dr_ingenium 7 years ago

    Does the local linearity have to be at (-2,1)? Or is every point locally linear after the transformation if you zoom in closely enough?

  • @snehasishchowdhury6900
    @snehasishchowdhury6900 6 years ago +1

    Does the Jacobian matrix make sense for a linear transformation, given that there we don't need to zoom?

    • @jamesedwards6173
      @jamesedwards6173 6 years ago +1

      Any possible linear transformation of x and y can be conceptually represented, as shown in the video, by the map (with a–f being constants):
      [ ax+by+e ]
      [ cx+dy+f ]
      (As should be expected, these are just equations for lines.)
      What happens if you take the Jacobian of this map? It reduces to precisely the linear transformation matrix that's normally used to transform (x,y) points:
      [ a b ]
      [ c d ]
      Why is this so? Why is it just constants? ... Because the Jacobian expresses how much a transformation is "changing things locally", and a _linear_ transformation changes the entire transformation space in exactly the same way (which is why lines stay parallel, and whatnot). In other words, it does not vary; it stays constant. It is comprised entirely of uniform scaling and shearing (and potentially translating).
      In short, the reason the (general, i.e., unevaluated) Jacobian shown in the video varies from point to point is _because_ the functions selected for the transformation were *not* linear (sine and cosine). If they were linear, the resulting matrix would simply have been full of constants.

    • @Cessedilha
      @Cessedilha 4 years ago

      The Jacobian matrix is a linear approximation. For a linear transformation (matrix multiplication), the Jacobian would be the linear transformation itself. Kind of like what happens in 1-D differentiation: when multiplying x by a constant, the derivative is the constant itself.
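The constant-Jacobian claim in this thread is easy to verify numerically: for a map [ax+by+e, cx+dy+f] the Jacobian comes out as [[a, b], [c, d]] at every point. A small sketch (the constants below are made up for illustration):

```python
def lin(x, y):
    # a linear map plus a shift: [a*x + b*y + e, c*x + d*y + f]
    a, b, c, d, e, f_ = 2.0, -1.0, 0.5, 3.0, 7.0, -4.0
    return (a * x + b * y + e, c * x + d * y + f_)

def num_jac(fn, x, y, h=1e-5):
    # numeric Jacobian via central differences
    fxp, fxm = fn(x + h, y), fn(x - h, y)
    fyp, fym = fn(x, y + h), fn(x, y - h)
    return [[(fxp[0] - fxm[0]) / (2 * h), (fyp[0] - fym[0]) / (2 * h)],
            [(fxp[1] - fxm[1]) / (2 * h), (fyp[1] - fym[1]) / (2 * h)]]

J_at_origin = num_jac(lin, 0.0, 0.0)
J_elsewhere = num_jac(lin, 13.0, -7.0)
print(J_at_origin, J_elsewhere)  # same matrix at both points: [[2, -1], [0.5, 3]]
```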

  • @csaracho2009
    @csaracho2009 2 years ago

    I kind of recognize this voice...