I may or may not be following the JWST launch countdown a little too closely. My first haphazard glance at the video title yielded the phrase "vector from space"
In order to use the convention V(S), must all elements be linearly independent? Because in the given definition, all elements of S were linearly independent, yet it was never stated in the definition that this was necessary. Additionally, would the free vector space just be the span of S? Because the free vector space seems to simply be the set of all linear combinations of the elements of S, and from what I understand, that is the definition of the span of the set S. Great video, and very understandable! I just finished a linear algebra class, and I had to do an extra project with one of my classes in order to stay in the honors college, and I chose to do it with linear algebra. In it, we proved most everything taught in an introductory linear algebra class, but without any determinants, and it was very interesting! The reason I bring this up is because this stuff came up ALL OF THE TIME in that project!
@Joshua Grumski asked "In order to use the convention V(S), must all elements be linearly independent?" S is just a set. There is no vector space structure on S. Even if S is the underlying set of vectors for a vector space structure imposed upon S, that extra structure is forgotten and ignored. Since S is just a set, elements of S being linearly independent has no meaning.
@@billh17 alright, makes sense. So from my understanding, V(S) is an object that provides a vector space structure for the set S? And then the vectors corresponding to the elements of S, are they linearly independent, or do we “forgive and forget” that too? In other words, do the vectors corresponding to the elements of S have to form a basis for V(S)?
@@joshuagrumski7459 Note: S is not made into a vector space (the elements of S become linearly independent elements of a vector space V(S) that contains S where V(S) is the 'smallest' such vector space). So, yes the vectors corresponding to the elements of S form a basis for V(S), but they don't form a basis for the vector space they originally belong to (if S was a vector space to begin with). @Joshua Grumski asked "do we forgive and forget" the underlying vector space for S if S is a vector space? Yes and no (this is what I mean by cheat). First, note that there are two vector spaces V(S) and S if S is a vector space. They are not related (except the set S is a subset of the underlying set of vectors in V(S)). But, you don't want to think too much about S (the underlying set of vectors) being a subset of V(S). This is not going to lead to too much. The cheating will come in when we take the quotient of V * W by I where we make use of how vectors in V and W can be expressed in terms of other vectors in V and W. But, V(S) doesn't know about this (maybe "I" sort of knows about it). The same thing is done when forming the group algebra over the complex numbers C for a group G. One first forms V(G) to make a vector space. Note that this is a vector space even though G is a group. Now, technically V(G) doesn't know that G is a group: it knows only that G is a set. But, then we make use of the group structure of G within V(G) to make V(G) an algebra that has a multiplication.
@@billh17 ah, ok. So, from my understanding, V(S) defines an addition and a multiplication for the elements of S, and those definitions along with the elements of S form V(S). Is it correct to say that now? Or am I still missing something?
Maybe this might be a better explanation. It looks like you are not assuming that S is a vector space (so the things I said under that assumption don't apply). @Joshua Grumski asked "In order to use the convention V(S), must all elements be linearly independent?" The convention V(S) will define a vector space (though that must be proved). After one proves V(S) is a vector space, one can prove that S is a linearly independent set of vectors in V(S). But the proof that V(S) is a vector space doesn't depend on proving that the elements of S are linearly independent or form a basis for V(S) (which can also be proved). @Joshua Grumski said "Because in the given definition, all elements of S were linearly independent, yet it was never in the definition that this was necessary." In the given definition, the elements of S were not claimed to be linearly independent in V(S) (hard to even state before V(S) has been defined). Yes, the definition is set up so that the elements of S will be linearly independent, but that is the intention of the definition, not a requirement of it. Since "formal" sum is not quite defined sufficiently, it might appear that one is imposing that the elements of S are linearly independent, but with a more explicit definition of "formal" sum you would see that linear independence of the elements of S is not a requirement to form such sums. @Joshua Grumski asked "would the free vector space just be the span of S?" Yes, but this requires a proof (an easy one).
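To make the linear-independence point concrete, here is the one-line check in the finitely-supported-function model discussed elsewhere in this thread (a sketch under that model, not a quote from the video). Identify each s in S with the delta function δ_s, where δ_s(s) = 1 and δ_s(t) = 0 otherwise. Then for any finite F ⊆ S,

\[
\sum_{s \in F} a_s \,\delta_s = 0 \;\Longrightarrow\; a_t = \Bigl(\sum_{s \in F} a_s \,\delta_s\Bigr)(t) = 0 \quad \text{for every } t \in F,
\]

so no nontrivial linear combination of distinct elements of S vanishes, i.e. S is linearly independent in V(S).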
Here is a fun problem: Is every vector space isomorphic to a free vector space over some set S? I believe yes, because every vector space has a basis (assuming the axiom of choice). Hence, an arbitrary vector space is isomorphic to the free vector space over its own basis.
Why do we require the sum in the free product to be finite? Without this limitation we could get that the free vector space of R over R is the set of all real functions, and of R over {1, x, x^2, ...} is the set of formal power series - why limit ourselves?
Free objects typically have a universal mapping property. For vector spaces, that universal mapping property is as follows: Let F be a field. For a set S, let V_F(S) denote the free F-vector space with respect to S, and let ι : S → V_F(S) be the natural map with ι(s) = 1s. Given any F-vector space V, any set S, and any function f : S → V, there exists a unique F-linear transformation g : V_F(S) → V with g∘ι = f.

This is a generalization of the following theorem: Let V and W be F-vector spaces. Given any basis β of W and any function f : β → V, there exists a unique F-linear transformation g : W → V so that, for each x in β, f(x) = g(x). Essentially, you can define a unique morphism out of your free object by choosing where the "basis" of the free object gets sent, and you are free to choose where the basis gets sent however you like. (The concept of a free object can be generalized to any concrete category.)

This universal mapping property would fail if we allowed for infinite formal sums in V_F(S). As an example, let R be the field of real numbers and let S = {1, x, x^2, ...}. I could choose V = R[x] (the polynomial ring with coefficients in R, but viewed as an R-vector space) and the function f : S → V to be f(x^i) = x^i. If V_F(S) were to be R[[x]] (the formal power series ring with coefficients in R, but viewed as an R-vector space), there is not a unique R-linear transformation g : R[[x]] → R[x] so that g(x^i) = f(x^i) = x^i, since S is not a basis for R[[x]]. S is, indeed, linearly independent in R[[x]], so it can be extended to a basis, but the fact that it can be extended (and therefore can be extended in infinitely many ways) shows that there are infinitely many such R-linear transformations.

But you could see this as pushing the problem to only allowing finite sums generally in vector spaces. If we allowed infinite sums in vector spaces, then there would be a unique map there. But algebra typically cannot handle infinite sums except formally. If you consider R as an R-vector space, you would have to contend with questions like 1+1+1+1+... and things like 1+(-1)+1+(-1)+... To handle infinite sums in vector spaces like this, you need a way to determine which infinite sums are "good". I don't know much about Banach and Hilbert spaces; I think perhaps those are the typical ways to do this. Maybe if one has a topological vector space over a topological field, that might work too. Since I'm not very experienced with these topics, I can't tell you if your idea of allowing infinite sums would represent a "free" Banach space or Hilbert space or topological vector space, but those would be the first sort of settings I would look to investigate to see if there can be a notion of freeness there.
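A minimal Python sketch of this universal mapping property, representing elements of V_F(S) as finitely supported dicts. The names (extend_linearly, etc.) are my own illustration, not anything from the video or a library:

# Build the unique linear map g : V_F(S) -> V with g(1*s) = f(s),
# given f : S -> V and the vector-space operations of V.
def extend_linearly(f, add, scale, zero):
    def g(free_vec):  # free_vec: dict mapping elements of S to nonzero scalars
        out = zero
        for s, coeff in free_vec.items():  # finite sum, by construction
            out = add(out, scale(coeff, f(s)))
        return out
    return g

# Example: send the basis elements 'x', 'y' into R^2 (modeled as tuples), then extend.
f = {'x': (1.0, 0.0), 'y': (1.0, 1.0)}.get
add = lambda u, v: (u[0] + v[0], u[1] + v[1])
scale = lambda c, v: (c * v[0], c * v[1])
g = extend_linearly(f, add, scale, (0.0, 0.0))
print(g({'x': 3.0, 'y': -2.0}))  # 3*f('x') - 2*f('y') = (1.0, -2.0)

The point of the uniqueness claim is visible here: once f fixes the images of 'x' and 'y', linearity leaves g no freedom at all.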
But there is still an issue here: + is a formal operation between some number of elements of S. I understand I can "add" n·X_i, i.e. 3X_i = X_i "+" X_i "+" X_i, but how do you define a_i X_i where a_i is a real number? If S is the set of fruits, how can I define 3.5 banana?
With formal addition, it's always: a_i v_i + b_i v_i simplifies to (a_i+b_i) v_i, but nothing else simplifies. That is, you write a formal sum as terms with "+" between them, where each term is a number next to an element of S, and you add these formal sums to each other by combining like terms and not combining unlike terms, which means you only need to understand equality for the elements of S.
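As a sketch of that bookkeeping (my own illustration, not the video's notation): store a formal sum over S as a dict from elements of S to real coefficients. Then "3.5 banana" is just the entry {'banana': 3.5}; no multiplication of a real number by a fruit ever happens, the scalar is simply recorded next to the symbol.

def formal_add(v, w):
    # combine like terms; never combine unlike terms
    out = dict(v)
    for s, b in w.items():
        out[s] = out.get(s, 0.0) + b
    return {s: a for s, a in out.items() if a != 0.0}

def formal_scale(c, v):
    # "multiply" by rescaling every recorded coefficient
    return {s: c * a for s, a in v.items() if c != 0.0}

v = {'banana': 3.5, 'apple': 1.0}   # "3.5 banana + 1 apple"
w = {'apple': -1.0, 'orange': 2.0}  # "-1 apple + 2 orange"
print(formal_add(v, w))             # {'banana': 3.5, 'orange': 2.0}
print(formal_scale(2.0, v))         # {'banana': 7.0, 'apple': 2.0}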
I think it might have been helpful if you gave an explicit construction of V(S), as I think there is a good chance of people leaving this video still confused.
I think the point you didn't take the time to explain in either video is that you are taking a set S with no structure whatsoever and writing a*x (a in R, x in S) without defining this operation beforehand. It took time for me to understand what you mean by "formal sums" and "formal products".
As other people have expressed in other comments, I'm still not really comfortable with your use of "formal sums", especially the "empty" formal sum here. I realize your goal is more to give a feel for things, though, and not to be perfectly rigorous. Also, just curious about the Justice Nerd shirt: are you a nerd about justice in general, or is it from something specific?
Indeed, a linearly independent family has only zeros for coefficients when you look for roots of a polynomial over the family. So in R[X], whose family is infinite, namely the X^i with i in N (doing Bezout at the vector-space level), take the polynomial, namely the infinite series, sigma from 0 to n with n in N TENDING TO INFINITY: whatever the real coefficients of this series, in truth they are all zero, since we have a linearly independent family. So one will never be able to say that an X can be made algebraic, so transcendental is it; π is, to the letter, transcendental, even through its own concatenations, additive and multiplicative. For that kind of monody, for example a conoid in X, we get X, XX, XXX, etc. indefinitely, and with addition and multiplication in this notation we will find no way to make a zero in any equation. The same goes for the Schrödinger equation: it reveals this phenomenon of transcendence; indeed, the fact that the phenomena are independent of the observer means they stand in a sidereal void. Thanks, JR
Error at th-cam.com/video/UYdVeHZT_Yk/w-d-xo.html . That is not an empty sum; it is a sum with n terms whose coefficients are all 0. An empty sum has no terms (as you correctly said before).
0:25 Maybe it’s time to create a new channel? You can keep this channel for small exercises/contest problems and shorts and use the second one for more advanced topics, Q&A series, anything else? It’s an idea I just had
I have had this idea as well, but I haven't quite figured out where I land on the idea philosophically. On the one hand this makes sense as it would keep the "main channel" more consistent in terms of its level and niche. That being said, I really like how the popularity of this channel allows me to hack the system in order to get a larger number of eyes on advanced topics.
Anyway, I would really like to hear everyone's thoughts about this....
@@MichaelPennMath You should make a community post or a poll on that matter. Maybe ask that in a video for the 200K subs 😛
But yeah, that’s what I would do: two channels for two different kinds of content, for two different audiences.
The road to 200k has really slowed.... I should get better at self promotion CTAs.
I will continue to mull this over.
@@MichaelPennMath You’re right. If you reserve this type of content for a second channel, then you’re just keeping this stuff hidden for only those who are particularly interested in it. On here, more people, including myself, get to see this stuff that I probably otherwise wouldn’t. I’d be down for a combination of your typical content and this, where you could increase the frequency of this type of stuff.
Yeah, personally I disagree with splitting up the channels, though it may sadly be better for YouTube analytics...
Definitely prefer these types of videos. Not that the others aren't well done/entertaining/informative, but these are just so cool!
I think variety is the greatest though. It's harder to get into videos that are part of a series
Agreed
100% agreed. The contest videos are neat and seem to me like good ways to learn new and interesting methods for solving problems, but I much prefer these construction and exploration type videos.
This and yesterday's video about the tensor product were great. I love all your videos, but these are especially entertaining because there were some new, abstract concepts that you presented in an easy-to-understand way. It would be great if you continued making this kind of algebra video. I hope you can make them in such a way that, to understand one video, the viewer doesn't need to watch every single video in the series beforehand.
I rarely watch your "contest problems" videos, but I always watch videos like this one, where you explain various topics in algebra, number theory, etc.
Your explanations are great! Thank you!
Instead of "formal linear combinations", I think it's simpler to define the free vector space over K on S as the set of all finitely supported functions from S to K.
that is what "formal linear combination" means, so what's needed is a mention of that
I think it's much less confusing to look at the vectors of V(S) as functions v: S → R such that there are only finitely many s in S for which v(s) ≠ 0; adding two vectors v and w is just adding functions, and multiplying by a constant is just multiplying a function by a constant.
I think that's a great thing to think about, and helps to get an intuition for what these objects are like, but it's a pain to write any examples, since they're all, by definition, awkward piecewise functions. Like {7 if x=1; -1 if x=7; 0 otherwise} is obviously not supposed to simplify to 0, but it's also not a nice notation to work with.
@@iabervon perhaps writing down specific vectors is slightly more cumbersome, but in exchange there is no ambiguity in any of the arithmetic operations which go into computing with the free vectors. It also makes dimensionality obvious: if S is a set with infinite cardinality, V(S) cannot be a finite-dimensional space. A basis is also easy to construct: use the Kronecker delta. Free vector spaces as functions remove any requirement for "formal" sums and products. Instead we have literal sums and products. Much easier to compute when you know exactly where your + and * come from.
@@iabervon I agree that notation is cumbersome, but it's a separate question, first it's good to make the concept 100% clear and then use some convenient notation for it
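For reference, the definition this thread keeps circling, written out in one place (a standard formulation, not a transcript of the video):

\[
V(S) = \{\, v : S \to \mathbb{R} \mid v(s) \neq 0 \text{ for only finitely many } s \in S \,\}, \qquad (v + w)(s) = v(s) + w(s), \quad (r\,v)(s) = r\,v(s),
\]

and the Kronecker deltas \(\delta_s\) (value 1 at s, value 0 elsewhere) form a basis, with each element s of S identified with \(\delta_s\).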
We also did this. It also helps justify the notation R^n: start with R^{1,2,3,...,n}, which is V({1,2,3,...,n}); if you abbreviate the set of numbers up to n by n_, that's R^n_, and then a small notational step of dropping the _ gives R^n. So for example, R^3 = R^{1,2,3} = V_R({1,2,3}), with elements f: {1,2,3} -> R, 1 -> f_1, 2 -> f_2, 3 -> f_3, or in short f = (f_1, f_2, f_3).
I agree that this is a much better model. The use of "formal empty sum" from the video leaves me uncomfortable. I think to make that method work rigorously, you would need to toss in some extra set element to stand for this symbol.
The approach you mention here doesn't run into these issues, and is much more elegant. Also more useful for proving the universal property.
More videos like this would be awesome, I thought the last tensor product one was really interesting
The notation "sum for i = 1 to n" does not work when the empty sum is being considered. What works in general is "the sum for i < n": in the case where n = 0, there is no i < n, so that case indicates an empty sum.
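For what it's worth, this "i < n" convention is exactly what a half-open range gives you; a tiny illustration (mine, not the video's):

def sum_below(n, term):
    # sums term(i) for all i < n; for n = 0 the range is empty
    return sum(term(i) for i in range(n))

print(sum_below(3, lambda i: 10.0))  # 30.0
print(sum_below(0, lambda i: 10.0))  # 0, the empty sum (additive identity)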
Instead of talking about “formal sums”, you could say that the free vector space over R generated by a set S, V(S), is the set of all functions f:S->R such that f(x)≠0 only for a finite number of x∈S.
To connect these two definitions you only have to identify every function f with the “formal sum”
Σf(x)x (x∈S)
This is the most lifesaving video for me... every other resource on the internet is so confusing and doesn't talk about the free vector space over a vector space... thanks for the video
I really like algebra, and those "higher level" abstractions are really interesting.
On a different topic: do you have any videos on Galois fields?
11:11 maybe you could put the vector numbers in a box, or write x with any real number as a subscript?
Please do more videos like these. I'm a math grad student so these higher level videos are actually really useful to me and I'm sure for a lot of others as well!
I definitely prefer these kinds of videos as there is already a lot of competition style math videos out there and not really any videos like this.
These vector space videos have been great, I would really enjoy more videos about these. Not just vector spaces but maybe even fields and sub spaces too!
Honestly I only watch your channel for these advanced topics. Keep them coming!
I found this video *really* helpful after the tensor video. I was familiar with the free functor in category theory, but seeing this 'from the ground up' approach has given me a much better understanding of how that really works. Many thanks !!!
Yes! more videos like this!! Awesome channel btw
I quite like these videos on abstract algebra, although I'd also like to see some videos on applications of abstract algebra, or at least ones with good examples. This video certainly clarified the previous video, which is very welcome, although I thought the first one was pretty understandable on its own.

And I'm very interested in the connection between these tensor products and the tensors used in general relativity. In fact, I think a decent explanation of the mathematics of those tensors and their operations would make a very good video series. I've read through several books on tensors and GR, and I never quite seem to fully understand what's going on. Recently I saw an explanation by another YouTuber that made everything make much more sense, but I still feel a bit in the dark on it. They explained tensors as basically objects that behave like vectors or square matrices in that they fundamentally stay the same under changes of basis, even though their elements are changed, but that tensors incorporate additional bits of information, like rank-3, dimension-3 tensors being like a triplet of triplets of vectors, in the same way that a dimension-3 square matrix is a triplet of vectors.

Unfortunately, they never even mentioned covariant and contravariant tensors, and that's still a bit mysterious to me. I know the difference is in how they transform under a change of basis, but exactly what's going on is beyond me. I'd love it if you did a series explaining that stuff. The actual GR stuff is unnecessary, since the math is so interesting on its own.
Hey! On my channel I recently made a video that briefly goes over how tensors come up in physics, and where the definition comes from. I'm going to be expanding it into a fuller series where I give the gritty details about how to derive the transformation law, but the first video is more of a "bird's eye view".
at ~ 19:35, instead of b_i*w_i, shouldn't it be indexed by j since you don't know what order each vector was taken in relative to the basis? And shouldn't the upper bound be n for both v and w since they are from the same n-dimensional space?
Love these videos! It's an area of maths that I really enjoy for sure.
I just love this videos, thank you so much !!!
I like both types of videos. I love math contests, but stuff like this, vertex algebras, etc. are really interesting too.
For case (3), if the x_i's are linearly dependent vectors, shouldn't that mean V(S) is isomorphic to R^m where m < n?
Yes! Please make more videos of this type, they are my favourite videos of yours!!!!!
unfortunately, "contest problems" videos get many more views than these higher-level maths :/
I hope you choose to do what you like most rather than worry about views :)
This is very interesting. Keep up the good work.
The free vector space of a set A (up to isomorphism) is a vector space V containing A such that A is a basis (linearly independent spanning set) of V. So a quick check of V(R) shows that it has an uncountable basis and therefore is very complicated.
I really enjoy these types of videos, it's cool you're considering making more of them.
Great video! I was unclear about just one point: around 19:08, when you define vector addition for v and w, how does one deal with the situation where n does not equal m in the sums? Does one use the minimum of n and m as the number of terms in the sum of v and w, truncating the extra terms in the longer of the two?
You would use 0 as the coefficients for vectors not included.
For example, let's say I have x, y, z as three of my elements in S.
And I define v = x+3z and w = 2x-y.
Then v = x+0y+3z and w = 2x-y+0z
So v+w = 3x-y+3z
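That padding-with-zeros step is automatic if you store the formal sums as coefficient dicts, as suggested in other comments in this thread (illustrative only):

v = {'x': 1.0, 'z': 3.0}   # v = x + 3z; missing keys mean coefficient 0
w = {'x': 2.0, 'y': -1.0}  # w = 2x - y
v_plus_w = {k: v.get(k, 0.0) + w.get(k, 0.0) for k in set(v) | set(w)}
print(v_plus_w)  # coefficients of 3x - y + 3z (key order may vary)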
I honestly like these types of videos a lot more than the contest problems. Those are entertaining because most of them have crazy solutions, but these videos make you know more stuff!
I really enjoy this sort of video and the previous one on tensors. Please make more
Free vector space? More like "Fantastic learning place!" These videos are amazing and I hope you never stop making them!
Like others, these are definitely the kind of videos I prefer on your channel. Looking forward to more!
Rather than using the v * w notation, could we not simply write this as an ordered pair (v,w)? The * notation seems to be too easy to interpret as a binary operation to which it is very tempting to assign some meaning.
Isn't the set S at 20:30 basically just the cartesian product of the sets V and W?
Or am I missing something?
Lots of use of the word "formal" here, and I think it's worth clarifying. "Formal", here, means "having the form of something", not in the sense of "a formal occasion". So a formal sum is just something that looks like a sum: for example, apple+orange+pear is a formal sum of fruit. Because it's a formal sum, we're not interested in figuring out what fruit is equal to apple+orange+pear, compared to an ordinary sum in which we usually want to know what integer is equal to the sum 5+2+7. Rather, the formal sum apple+orange+pear is just some abstract object, which we will manipulate as necessary.
at 11:00, how can you sum all elements from i = 1 to n if there does not exist a bijection from R to N? Or does that not matter at all?
These sums are finite by definition. So just take the finitely many elements you're going to add, and index them 1, ..., n.
You have to change your indices for different sums, otherwise as you correctly point out we wouldn't have enough indices.
It would be so nice if you did a series of videos covering these topics (it would be even more fantastic if you could make some videos on the easiest concepts in this field. U know, like "starting at the beginning")
I really like these videos:) I would be really interested in learning how this definition of a tensor relates to the physicists tensor as something that transforms like a tensor (?)
This lecture could be a step toward defining the linear divisors on curves used in formulating the Riemann-Roch theorem in algebraic geometry. The coefficients are used to describe the orders of zeros and poles of meromorphic functions on a Riemann surface. I would love to see you wade through some algebraic geometry with tensor products, linear divisors, and differential forms, with the goal of developing intersection theory, but then you might need to introduce some other topics like topology, commutative algebra, and projective spaces. Personally, I would love to see a video on the Hartogs extension theorem used in several complex variables.
Very helpful. Now I can rewatch the other video and possibly understand it 😅. Will need to refresh my memory on quotient groups though.
Subscribed! Please make more of these videos, but I would enjoy some applications so my brain knows why you would want to develop these ideas
I love this kind of video
Agreed. More content like this, please!
If you can give a video on universal properties and how they work, I would love that.
I have a question. In your example where S was R you used a summation over the elements of R. But I thought that R could not be indexed over. Is there a way to reconcile this?
Would it not be equivalent to define V*W to be the span of the vectors x_i, with the index i drawn from the Cartesian product V x W?
I really enjoy these kind of videos. Hope to see more of them in the future :)
I have a question. When we construct a vector space, like the standard 3-dimensional space, we usually begin from the "basis" - an arbitrary set of linearly independent vectors, usually of unit length and orthogonal. It means that we need to have vectors beforehand in order to construct the vector space. All that we do is impose some kind of structure onto an already existing thing. But is there a way to construct a vector space from the ground up, or at least from a different starting point?
The usual way to construct the standard 3-dimensional vector space over R is to take the cartesian product RxRxR and define addition and scalar multiplication as follows:
1) (a, b, c) + (d, e, f) = (a + d, b + e, c + f) -- component wise addition
2) k (a, b, c) = (ka, kb, kc)
where a, b, c, d, e, f in R and k in R.
It is easy to see that this is a vector space satisfying the usual rules for being a vector space.
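A direct transcription of those two rules, with tuples standing in for elements of RxRxR (a sketch, not anyone's library):

def vec_add(u, v):
    # rule 1: component-wise addition
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(k, v):
    # rule 2: scalar multiplication in each component
    return tuple(k * a for a in v)

print(vec_add((1.0, 2.0, 3.0), (4.0, 5.0, 6.0)))  # (5.0, 7.0, 9.0)
print(vec_scale(2.0, (1.0, 2.0, 3.0)))            # (2.0, 4.0, 6.0)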
I'm hoping the series on differential forms and the like gets finished. I want to know for sure that the generalised Stokes theorem is really true.
Awesome! More of these types of videos please :)
If K is any field and S is any set, then V_K (S) is just the set of finitely supported maps of S into K (maps that are zero at all but finitely many points). Each basis element is defined in terms of each x in S, mapping x to 1 and anything that isn't x to 0. In the case of S = K = *R* (say), this is just the real-valued functions on R with finite support.
Great video as always. It would be nice however to get a precise definition of what “formal” means here, since S can be any set and we haven’t defined scalar multiplication and addition on arbitrary sets.
Even more fascinating. I can almost feel the foundation of Manifold construction here. How does this approach apply to Manifolds and Metrics?
Thank you Michael
I think of formal sums of R over R as considering the vectors to be numerals not numbers. They're just symbols without additional meaning to them.
I remember asking my teacher what the book means by a formal sum. It confused me a lot more considering the sum in question was over an infinite set.
One thing I don't understand: the standard axioms for a vector space require that the vectors form an abelian group. Is there a step missing here? You seem to be saying that (in V({a,b}) over R), a + a = (1 + 1)a = 2a, but without associativity, there doesn't seem to be a way to simplify a + b + a?
what's the difference between a covariant vector and a contravariant vector?
At 14:45, for the "more understandable" part, you can use characteristic functions of real numbers as the basis elements. Of course this makes the elements of the free vector space functions as well. This representation would be less confusing, I guess. Source: Lee, Introduction to Topological Manifolds, Ch. 5.
I'd love to see videos about the interesting aspects of Category Theory and how they relate to Set Theory
There's a vector space used to prove that regular tetrahedra by themselves cannot tile space, whereas tetrahedra and octahedra together can. It consists of the angles mod the rational multiples of a turn, somehow combined with the real numbers representing lengths of edges. I have not yet understood it. Could you explain it?
As an undergrad math student I'm really getting a lot out of these types of videos.
Hey Michael, enjoyed the video as always. From what I gather, the set S is always a basis, since it is both linearly independent and a spanning set. I wonder whether the free vector space can be used to prove that every vector space has a basis.
If that were true, then this could tie in nicely with the axiom of choice or equivalent axioms such as the Hausdorff Maximality principle.
They can't be used to prove that every vector space has a basis, because that statement is actually equivalent to choice, which is independent of ZF, and the definition here only uses ZF.
However, I do believe that you can use free vector spaces to prove that if every vector space has a basis, then the axiom of choice holds.
Just a question for you, I don’t know if you see this, but I’m really curious - what’s your experience with other fields of math? For example, I really like differential geometry, do math doctorates who specialize in algebra bother with something unrelated?
Tensors are epic. I much prefer this to contest problems.
+1 for these types of videos
Can you cover group rings next? They’re pretty related to free vector spaces and I think they’re pretty interesting.
This might be totally obvious to everyone else but i recall being utterly confused whenever someone carried out a procedure “formally”. I associated “formal” with handwaving, but once I started reading “formal” as “by formula” or “by form” then it all clicked.
Aha, I just commented about the formal product reminding me of a free monoid and then immediately saw that this video existed. Interested to see how this video helps my intuition
Really enjoyed this video! I have one question. On example 2, with S={x}, we have v=ax where a is an element of R and v is an element of V(S). But is V(S) isomorphic to R, or is R merely a subspace of V(S)? I envision R as "the x-axis", i.e., the one-dimensional line y=0, but V(S) seems to be "all lines passing through the origin", of which R would seemingly be a mere subspace.
V(S) is isomorphic to R by sending ax to a. Show that this is a bijective linear map by checking all the requirements. V(S) is just one line because x is a fixed symbol that cannot change. x doesn't even need to be a number.
@@somgesomgedus9313 I see. I didn't realize that x is not a variable. So we could take that example to be S={1} and let V(S) be all elements that consist of a formal finite sum of elements of S with coefficients from R. Then V(S) is isomorphic to R. Might have helped if we used c instead of x.
Totally make more videos like this.
10:00 crazy example using V=R
Question around minute 4, when you're doing example 1. How do you define the multiplication of "a sub i" in the Reals with an element of V. Is the multiplication inherited from the underlying field? I'm trying to see how. Thanks.
It is just juxtaposition. It is similar to the subring of all polynomials in x, y, z, ... where there are no constant terms and no quadratic or higher terms (that is, only linear terms in the variables). For example, consider the polynomial 3 x - 5 y + 7 z added to 10 x - 1 z to get 13 x - 5y + 6z. I suspect that you are not asking how x is being multiplied by 3 in this case.
@@billh17 No... because a polynomial comes from R[x], and thus multiplying two elements is well defined because any real number and also 'x' are in the ring. At least that's my understanding. What I was asking is this: we take a_i to be a member of the reals, and S a set, and then define V(S) to be sums, over the cardinality of S, of products of what? Am I thinking too hard about this? For example, if S is {apple, banana, orange}, should I just be happy with 5 apple + 10 banana and not worry about the 'product' of 5 and apple being well defined?
And 'No' was meant as an answer to your question, not meant as disagreement
@@aoehler1 asked " I should just be happy with 5 apple + 10 banana and not worry about the 'product' of 5 and apple being well defined?"
Yes. But, the product of 5 and apple is well defined: it is the juxtaposition of 5 before apple. The 'sum' + is also well defined: it is the concatenation of juxtaposed terms with other juxtaposed terms separated by the + sign.
One is trying to define the elements of the set V(S). This must be done before we can define addition and scalar multiplication on V(S) to make it into a vector space.
The + sign and the scalar multiplication sign * in the expression " 5 * apple + 10 * banana " are just constant symbols without meaning or interpretation at the beginning stage of just defining the set of elements that make up V(S). When we get to defining a vector structure on V(S), then the * will "transform" into the tensor product which is a real product.
Thanks. Makes sense now
This channel is amazing
This is very very very good!
Literally no clue what is going on, but I'm guessing: so basically the x_i part determines what dimension the vector exists in? So S = {1,2,3} gives us all possible vectors that exist in 3 (or fewer) dimensions? And when we have S = {0} we get the 0-dimensional vector space (which only has the 0 vector as an element)? And when we have S = ℝ, the free vector space consists of all vectors with real dimensions.
Although looking back I'm not that certain, as I'm unsure what summing vectors in different dimensions would look like.
If S = {0}, we get the 1 dimensional vector space (not the 0 dimensional vector space).
Michael Penn used the notation v sub x. One could also use the notation e sub x (which might be more familiar).
Thus, if S = {1, 2, 3} , then one wants to form all formal sums of e1, e2, e3 over the real numbers R. That is, one wants to consider all the following as vectors: a1 * e1 + a2 * e2 + a3 * e3 with a1, a2, a3 in R (where addition + is commutative and scalar multiplication * follows the normal rules for vectors).
If S = {0}, then one wants to form all formal sums of e0 over the real numbers R: a0 * e0 where a0 in R.
Of course, it would be clearer if one started with the set {e1, e2, e3} to begin with rather than with the set {1, 2, 3}, since we associate much more structure with 1, 2, and 3 than we do with e1, e2, e3. However, we really want to be able to form the free vector space over any set S (since later we are going to cheat and possibly make use of the additional structure that the elements of S have).
As has been pointed out in other replies, it may be easier to think of the free vector space over S as just the maps from S to R that are zero almost everywhere. This formal approach is not used in general since we want to be informal. Compare with the way a polynomial in x is sometimes defined: informally, a polynomial in x is like a0 + a1 * x + a2 * x^2 + ... + an * x^n. But what is that x? The usual formal definition of a polynomial in x is to take x to be equal to (0, 1, 0, 0, ...) and polynomials to be (a0, a1, a2, ...) where almost all the a's are 0, where one defines the sum in the usual way and the product of two such polynomial sequences using the "convolution" type product, so that x * x = (0, 0, 1, 0, 0, ...) etc.
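A quick Python sketch of that formal polynomial definition (the function name is hypothetical): coefficients live in sequences that are zero almost everywhere, and multiplication is the convolution product, so x * x really does land on (0, 0, 1, 0, ...):

```python
def poly_mul(p, q):
    """Convolution product of coefficient sequences: (p*q)[k] = sum over i+j=k of p[i]*q[j]."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

x = [0, 1]             # the indeterminate x is the sequence (0, 1, 0, 0, ...)
print(poly_mul(x, x))  # [0, 0, 1], i.e. x * x = x^2 = (0, 0, 1, 0, ...)
```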
V(S) also has vectors like... 3(0), or even just (0), and that's a non-zero vector. Honestly, it's a bit like the kind of confusion we get when we work inside set theory and don't distinguish between R the field and R the underlying set of the field.
The confusion seems to be essentially self-inflicted. We don't want to do mathematics formally, but to express things informally. Natural language and natural thought processes are more appealing to us than the language of set theory and first-order logic. I find the set-theoretic definition of the free vector space over a set S (that is, maps from S to R) more understandable and easier to do proofs with than the informal definition via "formal" sums of such and such. But after doing the simpler proofs, the informal approach appears to be more suitable for understanding more advanced results. It is amazing how much 'notation' like v * w allows us to proceed.
If you want to define the free vector space with all details, then you could say that the vector space V(S) consists of functions v: S -> R where v(s) = 0 for all but finitely many elements s of S.
One can sum and scale functions v: S -> R in V(S) pointwise, and hence V(S) forms an R-vector space: let v, w be elements of V(S) and r an element of R. Then v+w is defined to be the function s |-> v(s)+w(s) and rv the function s |-> r(v(s)). Now every element s of S can be identified with a function v_s: S -> R, where v_s(s') = 0 if s' != s and v_s(s) = 1. These functions v_s, s in S, define a basis for V(S).
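Here is a small Python sketch of exactly this construction (names are mine): a vector in V(S) is a dict sending elements of S to nonzero reals, so missing keys are 0 and the support is automatically finite; addition and scaling are pointwise, and basis(s) is the delta function v_s:

```python
def add(v, w):
    """Pointwise sum of two finitely supported functions S -> R."""
    out = dict(v)
    for s, c in w.items():
        out[s] = out.get(s, 0.0) + c
    return {s: c for s, c in out.items() if c != 0.0}  # drop zeros to keep the support tidy

def scale(r, v):
    """Pointwise scalar multiple r*v."""
    return {s: r * c for s, c in v.items() if r * c != 0.0}

def basis(s):
    """The basis vector v_s: value 1 at s, value 0 everywhere else."""
    return {s: 1.0}

# 5 apple + 10 banana, plus 2 apple - 10 banana:
u = add(scale(5.0, basis("apple")), scale(10.0, basis("banana")))
w = add(scale(2.0, basis("apple")), scale(-10.0, basis("banana")))
print(add(u, w))  # {'apple': 7.0} -- like terms combine, the banana terms cancel to 0
```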
I may be coming from a different background than people here for the contest problems, but The Algorithm has long suggested your videos to me, and I never quite knew why, since the contest problems didn't interest me. But maybe The Algorithm just knew that you'd eventually branch out into these intro-to-abstract-math videos, which I think are great :) so maybe that's a weird argument for keeping it all in one channel.
thanks
In your example 3(2) + 5(3), I don't get why we can't multiply them; they are both real numbers. I understand that you don't want to add them because of the formal sum, but why can't you multiply either? And you said that 2 and 3 are independent, but we can find a1, a2 such that a1(2) + a2(3) = 0, so they are not independent? It seems that I'm missing something important :-(
Ty for reading and your videos are amazing :-)
More about tensor pleaaaaase
I may or may not be following the JWST launch countdown a little too closely. My first haphazard glance at the video title yielded the phrase "vector from space"
More!
In order to use the convention V(S), must all elements be linearly independent? Because in the given definition, all elements of S were linearly independent, yet it was never in the definition that this was necessary.
Additionally, would the free vector space just be the span of S? Because the free vector space seems to simply be the set of all linear combinations of the elements of S, and from what I understand, that is the definition of the span of the set S.
Great video, and very understandable! I just finished up a linear algebra class, and I had to do an extra project with one of my classes in order to stay in the honors college, and I chose my linear algebra class. In it, we proved most everything taught in an introductory linear algebra class, but without any determinants, and it was very interesting! The reason I bring this up is that this stuff came up ALL OF THE TIME in that project!
@Joshua Grumski asked "In order to use the convention V(S), must all elements be linearly independent?"
S is just a set. There is no vector space structure on S. Even if S is the underlying set of vectors for a vector space structure imposed upon S, that extra structure is forgotten and ignored. Since S is just a set, elements of S being linearly independent has no meaning.
@@billh17 alright, makes sense. So from my understanding, V(S) is an object that provides a vector space structure for the set S? And then the vectors corresponding to the elements of S, are they linearly independent, or do we “forgive and forget” that too? In other words, do the vectors corresponding to the elements of S have to form a basis for V(S)?
@@joshuagrumski7459 Note: S is not made into a vector space (the elements of S become linearly independent elements of a vector space V(S) that contains S where V(S) is the 'smallest' such vector space). So, yes the vectors corresponding to the elements of S form a basis for V(S), but they don't form a basis for the vector space they originally belong to (if S was a vector space to begin with).
@Joshua Grumski asked "do we forgive and forget" the underlying vector space for S if S is a vector space? Yes and no (this is what I mean by cheating). First, note that there are two vector spaces, V(S) and S, if S is a vector space. They are not related (except that the set S is a subset of the underlying set of vectors of V(S)). But you don't want to think too much about S (the underlying set of vectors) being a subset of V(S); that observation doesn't lead anywhere much. The cheating comes in when we take the quotient of V * W by I, where we make use of how vectors in V and W can be expressed in terms of other vectors in V and W. But V(S) doesn't know about this (maybe "I" sort of knows about it).
The same thing is done when forming the group algebra over the complex numbers C for a group G. One first forms V(G) to make a vector space. Note that this is a vector space even though G is a group. Now, technically V(G) doesn't know that G is a group: it knows only that G is a set. But, then we make use of the group structure of G within V(G) to make V(G) an algebra that has a multiplication.
@@billh17 ah, ok. So, from my understanding, V(S) defines an addition and a multiplication for the elements of S, and those definitions along with the elements of S form V(S). Is it correct to say that now? Or am I still missing something?
Maybe this is a better explanation. It looks like you are not assuming that S is a vector space (so the things I said under that assumption don't apply).
@Joshua Grumski asked "In order to use the convention V(S), must all elements be linearly independent?"
The convention V(S) will define a vector space (this must be proved, though). After one proves V(S) is a vector space, one can prove that S is a linearly independent set of vectors in V(S). But the proof that V(S) is a vector space doesn't depend on first proving that the elements of S are linearly independent or form a basis for V(S) (both of which can also be proved).
@Joshua Grumski said "Because in the given definition, all elements of S were linearly independent, yet it was never in the definition that this was necessary."
In the given definition, the elements of S were not claimed to be linearly independent in V(S) (hard to state, let alone prove, before V(S) has been defined). Yes, the definition is set up so that the elements of S will be linearly independent, but that is the intention of the definition, not a requirement of it. Since "formal" sum is not defined precisely enough, it might appear that one is imposing the linear independence of the elements of S, but with a more explicit definition of "formal" sum you would see that linear independence of the elements of S is not a requirement for forming such sums.
@Joshua Grumski asked "would the free vector space just be the span of S?" Yes, but this would require a proof (easy).
Here is a fun problem: Is every vector space isomorphic to a free vector space over some set S? I believe yes, because every vector space has a basis (assuming the axiom of choice). Hence, an arbitrary vector space is isomorphic to the free vector space over its own basis.
Yeah, technically we have a pair of functors between the category of sets and the category of vector spaces.
yup! people usually say this as "every vector space is free" (which is no longer true when we generalise vector spaces to modules)
Love the new chalk!
maybe its not new, but I enjoy a lot of iridescent colors on the board. aesthetically pleasing math is the way to go.
Please more
Why do we require the sum in the free vector space to be finite?
Without this limitation we would get that the free vector space over R (with real coefficients) is the set of all functions R -> R, and over {1, x, x^2, ...} the set of formal power series. Why limit ourselves?
Free objects typically have a universal mapping property. For vector spaces, that universal mapping property is as follows:
Let F be a field. For a set S, let V_F(S) denote the free F-vector space with respect to S, and let ι : S → V_F(S) be the natural map with ι(s) = 1s.
Given any F-vector space V, any set S, and any function f : S → V, there exists a unique F-linear transformation g : V_F(S) → V with g∘ι = f.
This is a generalization of the following theorem:
Let V and W be F-vector spaces.
Given any basis β of W and any function f : β → V, there exists a unique F-linear transformation g : W → V so that, for each x in β, f(x) = g(x).
Essentially, you can define a unique morphism out of your free object by choosing where the "basis" of the free object gets sent. And you are free to choose where the basis gets sent however you like.
(The concept of a free object can be generalized to any concrete category.)
This universal mapping property would fail if we allowed for infinite formal sums in V_F(S). As an example, let R be the field of real numbers and let S = {1, x, x^2, ...}.
I could choose V = R[x] (the polynomial ring with coefficients in R, but viewed as an R-vector space) and the function f : S → V to be f(x^i) = x^i.
If V_F(S) were R[[x]] (the formal power series ring with coefficients in R, but viewed as an R-vector space), there would not be a unique R-linear transformation g : R[[x]] → R[x] with g(x^i) = f(x^i) = x^i, since S is not a basis for R[[x]]. S is, indeed, linearly independent in R[[x]], so it can be extended to a basis, but the fact that it can be extended (and therefore in infinitely many ways) shows that there are infinitely many such R-linear transformations.
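To see the finite-sum version of the universal property in action, here is a hedged Python sketch (reusing the dict representation of V(S) from the other comments; all names are mine): given f : S -> V into any vector space, linearity forces the extension g to be g(v) = Σ v(s)·f(s), and that sum is finite exactly because v is finitely supported:

```python
def linear_extension(f, vec_add, vec_scale, zero):
    """Extend f : S -> V to the unique linear map g : V(S) -> V.

    A vector in V(S) is a dict {s: coefficient}; linearity forces
    g(sum of a_s * v_s) = sum of a_s * f(s), which is all this computes.
    """
    def g(v):
        out = zero
        for s, coeff in v.items():
            out = vec_add(out, vec_scale(coeff, f(s)))
        return out
    return g

# Example: S = {"x", "y"}, V = R^2, f sends "x" to (1, 0) and "y" to (1, 1).
f = {"x": (1.0, 0.0), "y": (1.0, 1.0)}.get
g = linear_extension(
    f,
    vec_add=lambda u, w: (u[0] + w[0], u[1] + w[1]),
    vec_scale=lambda r, u: (r * u[0], r * u[1]),
    zero=(0.0, 0.0),
)
print(g({"x": 3.0, "y": 5.0}))  # (8.0, 5.0): g(3x + 5y) = 3 f(x) + 5 f(y)
```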
But you could see this as pushing the problem back to the fact that vector spaces in general only allow finite sums. If we allowed infinite sums in vector spaces, then there would be a unique map there.
But algebra typically cannot handle infinite sums except formally. If you consider R as an R-vector space, you would have to contend with questions like 1+1+1+1+... and 1+(-1)+1+(-1)+... To handle infinite sums in vector spaces like this, you need a way to determine which infinite sums are "good". I don't know much about Banach and Hilbert spaces; I think perhaps those are the typical ways to do this. Maybe a topological vector space over a topological field would work too. Since I'm not very experienced with these topics, I can't tell you whether your idea of allowing infinite sums would yield a "free" Banach space, Hilbert space, or topological vector space, but those are the first settings I would investigate to see if it can be a notion of freeness anywhere.
You wrote that S is any set; however, don't we need a definition of the addition of elements of S?
Seems to be answered at 7:46: the operation is not addition (in the common sense), and what is called 0 is not a real number.
But there is still an issue here: + is a formal operation between some number of elements of S. I understand that I can "add" n Xi, i.e. 3Xi = Xi "+" Xi "+" Xi, but how do you define ai Xi where ai is a real number?
If S is the set of fruits, how can I define 3.5 banana?
With formal addition, it's always: a_i v_i + b_i v_i simplifies to (a_i+b_i) v_i, but nothing else simplifies. That is, you write a formal sum as terms with "+" between them, where each term is a number next to an element of S, and you add these formal sums to each other by combining like terms and not combining unlike terms, which means you only need to understand equality for the elements of S.
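A tiny Python sketch of that rule (the names are hypothetical): equality of formal sums is checked by combining like terms and nothing else, so only equality of elements of S is ever needed, and a coefficient like 3.5 is just stored next to "banana":

```python
def normalize(terms):
    """Combine like terms in a list of (coefficient, element) pairs: a v + b v -> (a+b) v."""
    out = {}
    for coeff, s in terms:
        out[s] = out.get(s, 0.0) + coeff
    return {s: c for s, c in out.items() if c != 0.0}

# 3.5 banana + 2 apple + 1.5 banana equals 5 banana + 2 apple:
lhs = normalize([(3.5, "banana"), (2.0, "apple"), (1.5, "banana")])
rhs = normalize([(5.0, "banana"), (2.0, "apple")])
print(lhs == rhs)  # True -- we only ever tested equality of fruits, never multiplied them
```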
I think it might have been helpful if you gave an explicit construction of V(S), as I think there is a good chance of people leaving this video still confused.
Thanks, it is really interesting.
2:05
I think the notion you didn't take the time to explain in either video is that you are taking a set S with no structure whatsoever and writing a*x (a in R, x in S) without defining this operation beforehand. It took me some time to understand what you mean by "formal sums" and "formal products".
But, every vector space is free (assuming choice). I guess the idea here is that you can choose your basis to be as confusing as you please.
No, it's not.
@@Euler108 huh? It's absolutely true that every vector space is free, assuming the axiom of choice.
I prefer these videos a lot more than the more "random" exercises, but I also don't know how much I'm the target audience
As other people have expressed in other comments, I'm still not really comfortable with your use of "formal sums", especially the "empty" formal sum here. I realize your goal is more to give a feel for things, though, and not to be perfectly rigorous.
Also just curious about the Justice Nerd shirt, are you just a nerd about justice in general, or is it from something specific?
Indeed, a free (linearly independent) family has only zero coefficients when we look for the roots of the family's polynomials. Thus in R[X], where the family is infinite, namely the X^i with i in N (doing Bézout vector-space style), the polynomial, namely the infinite series sigma from 0 to n with n in N TENDING TO INFINITY: for all real coefficients of this series, in truth they are all zero, since we have a free family, so we can never say that X can be made algebraic, so transcendental is it.
X, LIKE PI, IS
never algebraic; it is transcendental to the letter, even up to its own concatenations, additive and multiplicative, the kind of monoid where, for example, with a cone in X we get X, XX, XXX, etc. indefinitely, and with addition and multiplication in this notation we will never find a way to make a zero in any equation.
The same goes for the Schrödinger equation: it reveals this phenomenon of transcendence.
Indeed, the fact that the phenomena are independent of the observer means they stand in a sidereal void.
Thanks JR
Error at th-cam.com/video/UYdVeHZT_Yk/w-d-xo.html . That is not an empty sum. It is a sum with n terms whose coefficients are all 0. An empty sum has no terms (as you correctly said before).