Beautiful and insightful examples. Thank you!
I love the effort you put in, bruh!! ❤️
Just love your functional analysis series
Thank you very much!
You explain things so nicely and simply.
Your efforts are appreciated.
This video is very interesting and useful.
The videos are incredibly good! Chapeau!
Thank you. Waiting for more of the functional analysis course.
At the end you say that the case of homomorphisms being injective but not surjective happens only in infinite-dimensional Banach spaces. Is that just common sense or something deeper? Is there an intuitive explanation? Thanks
I have a course about Linear Algebra, where we also discuss linear maps and dimensions. There you will find the answer: tbsom.de/s/la
Note that I made that statement for linear maps from X to X.
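To see why infinite dimensions matter here, a classical example (my own illustration, not from the video) is the right-shift operator on sequence spaces such as ℓ²: it is linear and injective, yet its image misses every sequence with a nonzero first entry, so it is not surjective. In finite dimensions the rank–nullity theorem rules this out for maps from X to X. A minimal sketch using Python lists to stand in for (truncated) sequences:

```python
def right_shift(x):
    """Right shift: (x1, x2, ...) -> (0, x1, x2, ...).
    Linear and injective, but it never hits a sequence whose
    first entry is nonzero, so it is not surjective."""
    return [0.0] + list(x)

x = [1.0, 2.0, 3.0]

# Injectivity: the input is recovered by dropping the leading zero.
assert right_shift(x)[1:] == x

# Non-surjectivity: every output starts with 0, so e.g. (1, 0, 0, ...)
# is never reached.
assert right_shift(x)[0] == 0.0
```

Note that on finite lists the shift changes the dimension, which is exactly the point: only in an infinite-dimensional space like ℓ² does the shift map the space into itself while failing to be surjective.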
Exceptionally good
Hey! Love your videos. What program/setup do you use for recording these lectures?
I use the free and beautiful program Xournal.
2:23 shouldn’t the definition of homomorphism for the multiplication operation be f(lambda*x) = f(lambda) * f(x)? If not, why not? Because that’s the definition I see more often...
Don't get confused: lambda*x is the scalar multiplication. Writing f(lambda) wouldn't make any sense since f needs vectors (and not scalars) as inputs.
Hey! I think I might be able to guess the source of your confusion based on your comment about the “f(λ)” term.
Perhaps you first heard about homomorphisms in an algebraic setting such as introductory Group Theory. If so, recall that when speaking of two groups, say (G, •) & (H, ∗), we say that a map φ: G → H that has the property,
φ(g • g’) = φ(g) ∗ φ(g’) for all g, g’ in G
is called a “homomorphism of groups”, often simply referred to as a “group homomorphism”.
Furthermore, we refer to such maps as “isomorphisms of groups” or "group isomorphisms" if φ is also bijective.
If φ is bijective, this tells us that the groups in question are essentially the same, because the mapping carries the group structure of the domain onto that of the codomain.
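To make the defining property concrete, here is a small numeric check using the exponential map φ = exp, which is a group homomorphism from (ℝ, +) to (ℝ₊, ·). This is my own illustrative example, not one from the video:

```python
import math

def phi(g):
    """phi = exp maps the group (R, +) into the group (R_+, *)."""
    return math.exp(g)

# Homomorphism property: phi(g + g') == phi(g) * phi(g') for all g, g'.
for g, gp in [(0.0, 1.0), (2.0, -3.0), (0.5, 0.25)]:
    assert math.isclose(phi(g + gp), phi(g) * phi(gp))
```

The check passes because exp(g + g') = exp(g) · exp(g'), i.e. exp converts the operation of the source group (addition) into the operation of the target group (multiplication).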
Perhaps this is the definition you were thinking of and recalling syntactically by writing f(λ•x) = f(λ)∗f(x)
[ which is indeed erroneous given the context for the map "f" as was explained in the other comment by Bright Side :) ]
As was said in the video, homomorphisms and isomorphisms are very broad and important concepts in mathematics, as they codify the idea of “preserving mathematical structures”, and hence the exact definitions vary depending on the mathematical structures in context.
Notably, the exact definitions for homomorphisms and isomorphisms differ in analytic settings versus algebraic settings. The definitions discussed in the video take place in an analytic setting.
Was my guess correct?! Either way, I hope my comment is found to be helpful to whoever may read it!
@@tylerlabus360 Yes, your guess is exactly right! I’m not familiar with the concept of homomorphisms. Question: would you have any good book/material (or even TH-cam video) to share with me about homomorphisms in an analytic setting, as you said?
I appreciate the explanation!
@@dibeos Hi again! And no problem, I’m glad that I was able to help!
Now with regards to your follow up question...
Short Answer: Unfortunately, no, I do not have any such recommendations.
Longer Answer: I don’t have any recommendations because I don’t think there are any math textbooks dedicated solely to homomorphisms in analytic settings, in part because even within a given setting (algebraic or analytic) the definitions of homomorphism and isomorphism differ from one mathematical object to another.
For example, as we saw in the video, the definitions of homomorphisms and isomorphisms for general metric spaces and Banach spaces are similar in spirit, yet technically different.
For an algebraic example of this, note that the formal definition of a ring homomorphism is again similar in spirit to that of groups, but different: with rings, the second binary operation and its structure must also be preserved.
My experience (and my overall guess) is that authors will simply state the definitions of these kinds of maps explicitly for the relevant mathematical objects of interest, as needed.
Again, not to be repetitive, but the main idea to walk away with here for these kinds of maps is that the mathematical objects in question have their ‘key’ properties preserved under such mappings.
Simply take the first example we encountered in the video to see this. The ‘key’ and defining property of vector spaces is that the elements within them behave linearly (in fact, some textbooks refer to vector spaces as ‘linear spaces’ to emphasize this, usually older texts).
The source data x + λx’ is a statement that we can add and scale vectors in the source space X; and since f(x), f(x’) live in Y, the target data f(x) + λf(x’) is a statement that we can add and scale vectors in the target space Y. We want these statements to be “preserved”.
Hence, we require by definition that if f is to be a homomorphism, then it should be a linear map, since f being linear
⟹ f(x + λx’) = f(x) + λf(x’), which accomplishes the needed preservation of the relevant mathematical structure. This is the intuition.
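The linearity condition above can be checked numerically for a concrete map, say a 2×2 matrix acting on ℝ² (my own toy example, not from the video):

```python
def f(x):
    """A linear map R^2 -> R^2 given by the matrix [[2, 1], [0, 3]]."""
    return [2 * x[0] + 1 * x[1], 0 * x[0] + 3 * x[1]]

def add(u, v):
    """Componentwise vector addition."""
    return [a + b for a, b in zip(u, v)]

def scale(lam, u):
    """Scalar multiplication."""
    return [lam * a for a in u]

# Check f(x + lam * x') == f(x) + lam * f(x') on a sample point.
x, xp, lam = [1.0, 2.0], [-3.0, 0.5], 4.0
lhs = f(add(x, scale(lam, xp)))
rhs = add(f(x), scale(lam, f(xp)))
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```

Any matrix gives such a linear map; a map like x ↦ x + 1 or x ↦ x² would fail this check, which is exactly why it is not a vector-space homomorphism.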
In metric spaces, the ‘key’ property is that we have metric structure in addition to topological structure. The distance function/metric allows us to measure finite distances between points in our space. Thus, it makes sense to ask that distances between the images of points in the codomain are no more than the distances between their preimages in the domain, if f is to be a homomorphism.
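As a sketch of that metric-space condition (assuming, as in the video, that the morphisms of metric spaces are the maps that do not increase distances), here is a check for the map f(x) = x/2 on ℝ with the usual metric:

```python
def f(x):
    """f(x) = x / 2 halves distances, so d(f(x), f(y)) <= d(x, y)."""
    return x / 2.0

def d(x, y):
    """The usual metric on R."""
    return abs(x - y)

# Nonexpansive condition: the distance between images never exceeds
# the distance between the preimages.
for x, y in [(0.0, 4.0), (-1.0, 7.5), (3.0, 3.0)]:
    assert d(f(x), f(y)) <= d(x, y)
```

A map like f(x) = 3x would violate the inequality, so under this definition it would not be a morphism of metric spaces even though it is continuous.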
I’ll leave it to you to reflect on these ideas for Banach spaces and why the definition we saw in the video makes sense. I think you should be able to convince yourself that it does.
To wrap things up, if I were you, I would simply google isomorphisms or homomorphisms for whatever mathematical objects you are interested in.
I do this often and find that Wikipedia, Wolfram’s MathWorld, or Springer’s Encyclopedia of Mathematics are good references and reading material.
Lastly, I apologize for the lengthy comment!
Good luck and best wishes with your future endeavors and studies!
To add to this: if you know any category theory, then if we take instances of a mathematical structure type as the objects of our category (e.g. a class of metric spaces, or a class of groups, etc.), we can define a "morphism system" to be any class of functions such that, taken as arrows between the objects, they form a category. In the context of (Bourbaki) structures, there is a canonical way to define isomorphisms and many natural ways to define morphisms for any given structure type. In category theory, however, the morphisms (arrows) are prior to the isomorphisms (arrows with inverses). In the video we saw this for metric spaces, where we defined the morphisms first and then defined the isomorphisms to be the (bijective) morphisms whose inverses are also morphisms (this happens to agree with the Bourbaki notion).
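One concrete requirement hidden in "they form a category" is closure under composition: composing two arrows must give another arrow. For the nonexpansive maps of metric spaces this is easy to verify numerically; the following is my own sketch, not from the video:

```python
def compose(f, g):
    """Arrow composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def d(x, y):
    """The usual metric on R."""
    return abs(x - y)

# Two nonexpansive maps on (R, |.|) ...
f = lambda x: x / 2.0
g = lambda x: abs(x)

# ... whose composite is again nonexpansive, as required for the
# class of nonexpansive maps to be the arrows of a category
# (the identity map trivially serves as the identity arrow).
h = compose(f, g)
for x, y in [(-2.0, 5.0), (1.0, -1.0), (0.0, 3.0)]:
    assert d(h(x), h(y)) <= d(x, y)
```

The same closure check is what distinguishes a workable morphism system from an arbitrary class of functions.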
Thank you very much!