You may not realize it, but you're making awesome videos, whether it's coding theory or tensor calc. There are almost no channels like this with the same high quality as your videos. Keep up the great work
I just wanted to let you know that this content is incredibly straightforward, clearly explained, and made me understand the material way better than what I got from sitting in the lecture hall or trying to understand it myself with the book.
Thank you so much for not just making these incredible videos, but making them in a way that is not only easily digestible but entertaining to watch.
I just want to thank you for your hard work making this wonderful video. This video delivers clear, coherent lectures about the basic concepts of coding theory so that every watcher can grasp the concepts very easily.
Thanks. Unfortunately this series is currently in an unfinished state. But I'm hoping you can still get value out of it.
Linear codes use a special matrix called the generator matrix to project a binary message into a higher-dimensional space. This allows us to correct errors.
Valid and invalid words - 0:50
Moving to higher dimension - 4:15
Generator Matrix - 6:00
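The "project into a higher-dimensional space" step from the summary above can be sketched in a few lines of Python. This is a minimal sketch, assuming mod-2 arithmetic and the 3-bit repetition code from the video's cube example; the function name is just illustrative:

```python
# Encode a message by multiplying it by a generator matrix over GF(2).
# Here G = [[1, 1, 1]] is the 3-bit repetition code from the cube example:
# it maps the 1-bit messages 0 and 1 to opposite cube vertices 000 and 111.

def encode(message, G):
    """Multiply a message row vector by G, reducing each entry mod 2."""
    n = len(G[0])
    return [sum(m * G[i][j] for i, m in enumerate(message)) % 2
            for j in range(n)]

G = [[1, 1, 1]]
print(encode([0], G))  # [0, 0, 0]
print(encode([1], G))  # [1, 1, 1]
```

The same `encode` works unchanged for any generator matrix, e.g. a 4x7 one for a (4,7) code.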
Best ECC courses you can find on the internet! Thank you!
On the morning of the exam, you explained the last piece I didn't understand, thank you!
And thank you YouTube for recommending this to me last second haha
Can't find such an explanation anywhere, this is just unbelievable.
I told the police that I didn't write "weed", just 3 errors occurred in "food" during message transfer. They didn't believe me ((
Thank you for making these kind of videos!
Wonderful video. you deserve many cups of coffee.
Excellent lecture, much appreciated!!
so in your cube example, one test you could perform to decode the message is to look at the vertex in question and decide if it falls on one side or the other of a hyperplane that partitions the space. I noticed that the generator matrix [1 1 1] is also the coordinates for the vector of the normal of the hyperplane for the cube. is there an analog for the 4,7 example? is the generator matrix the normal of the hyperplane?
I'm not sure I know how to answer that question. The codewords always form a line/plane/cube/hyperplane subspace of the larger space, since the sum of any two codewords gives another codeword. The rows of the generator matrix are the "basis vectors" for this subspace, meaning you can get to all codewords in the space by scaling the rows of the G matrix by different amounts. In that cube example, the 2 codewords form a 1D line, with a single basis vector of [1 1 1], and the plane orthogonal to that line happens to cut the 3D space in 2. I'm not sure if there's a way to generalize that to higher dimensions. As I said, I just think of the rows of G as being basis vectors for the valid codeword subspace.
@@eigenchris I think it has to do with the null space of the H matrix. I love your videos btw
@@abenedict85 The "image" of the generator matrix is the same as the nullspace of the H matrix. This is why G·Hᵀ = 0.
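A quick numerical check of this claim: every row of G lies in the null space of H. The matrices below are the standard systematic (7,4) Hamming code, an assumption that matches the video's (4,7) example but may not be its exact matrices:

```python
# Verify G @ H^T == 0 (mod 2) for the systematic (7,4) Hamming code.
# G = [I_4 | P], H = [P^T | I_3].

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

# Entry (i, j) is the dot product of G's row i with H's row j, mod 2.
product = [[sum(g * h for g, h in zip(grow, hrow)) % 2 for hrow in H]
           for grow in G]
print(product)  # all zeros: the image of G sits inside the null space of H
```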
This video makes so much sense and I can relate a lot to it.
Great job! Very well explained, you are awesome
2:06 . That's what she said.
Thank you! Keep up the good content/quality!
New sub here :)
If I pass my exam I will donate to you.
thank you, you really saved my life
Let's go, FER students!
What UML diagram software are you using?
Almost all of the diagrams I make by hand in Powerpoint. That crazy 7D one I drew with the help of a short Javascript program I wrote with an HTML5 canvas. It listed all the vertices and edges for a 7D cube and projected it down to 2D space.
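For anyone curious, the vertex/edge listing and projection described above can be sketched in Python (the original was JavaScript on an HTML5 canvas; the projection angles below are arbitrary assumptions, not the ones used in the video):

```python
import itertools
import math

# List the vertices and edges of a 7D hypercube, then project each
# vertex down to 2D for drawing.

DIM = 7
vertices = list(itertools.product([0, 1], repeat=DIM))

# Two vertices share an edge iff they differ in exactly one coordinate.
edges = [(u, v) for u, v in itertools.combinations(vertices, 2)
         if sum(a != b for a, b in zip(u, v)) == 1]

def project(vertex):
    """Send axis k to a 2D unit vector at angle k*pi/7 and sum them up."""
    x = sum(c * math.cos(k * math.pi / DIM) for k, c in enumerate(vertex))
    y = sum(c * math.sin(k * math.pi / DIM) for k, c in enumerate(vertex))
    return (x, y)

points = {v: project(v) for v in vertices}
print(len(vertices), len(edges))  # 128 448
```

Drawing is then just a line segment between `points[u]` and `points[v]` for each edge.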
Very well explained, Thank you
your videos are the best. why is the first video private?
Very good lecture! Thanks
Thank you !!! 🤩Very good videos
Great lecture
Everything is connected, isn’t it? Quite fascinating.
It seems like you're referring to the process of understanding and implementing Linear Codes, specifically Error Correcting Codes (ECC) using Generator Matrices. Let me break it down into stages for you.
1. Understanding Error Correcting Codes (ECC):
ECCs are essential in data transmission and storage to protect information from errors that may occur during the process. They work by adding redundancy to the original data, allowing the receiver to detect and sometimes even correct errors.
2. Linear Codes:
Linear codes are a subset of Error Correcting Codes, where the codewords generated are linear combinations of the original information symbols. These codes have several advantages, such as efficient encoding and decoding algorithms, and the ability to correct errors based on their structure.
3. Generator Matrix:
A generator matrix is a matrix used to represent a linear code. It is an essential tool for encoding data using the code. The generator matrix, denoted as G, is a k x n matrix, where k is the number of information bits and n is the total codeword length (information bits plus parity check bits).
4. Decoding Problems:
Decoding is the process of recovering the original information from the received codeword, which may contain errors. There are different decoding techniques for linear codes, such as:
a. Hard Decision Decoding: In this method, the received signal is quantized into discrete levels, and the decoder makes a hard decision on the most likely transmitted symbol.
b. Soft Decision Decoding: This technique uses the received signal's continuous amplitude information to make more accurate decisions about the transmitted symbols.
c. Maximum Likelihood Decoding: This method aims to find the codeword that is most likely to have been transmitted, given the received signal. It is generally computationally expensive but provides the best performance.
d. Minimum Distance Decoding: This approach focuses on decoding the codeword that is closest to the received signal in terms of Hamming distance. It is based on the fact that the minimum distance between codewords determines the code's error correction capability.
5. Stages in Deciding Problems:
When dealing with decoding problems in linear codes, you would typically follow these stages:
a. Encode the original information bits using the generator matrix G.
b. Transmit the encoded message over a communication channel.
c. Receive the encoded message, which may contain errors.
d. Decode the received message using one of the decoding techniques mentioned above.
e. Determine the most likely original information bits based on the decoded message.
Remember, the choice of decoding technique and the specific stages involved may vary depending on the type of linear code being used.
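Stages a-e above can be sketched end to end in Python. This is a minimal sketch, assuming the standard systematic (7,4) Hamming code and brute-force minimum-distance decoding; the matrices and helper names are illustrative, not from any specific library:

```python
import itertools

# Stages a-e: encode with G, flip a bit in the channel, then decode by
# finding the nearest codeword in Hamming distance.

G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(msg, G):
    """Stage a: multiply the 4 information bits by G over GF(2)."""
    return tuple(sum(m * G[i][j] for i, m in enumerate(msg)) % 2
                 for j in range(len(G[0])))

def hamming(a, b):
    """Number of positions where two words differ."""
    return sum(x != y for x, y in zip(a, b))

# Codebook: all 16 codewords, mapped back to their 4-bit messages.
codebook = {encode(m, G): m for m in itertools.product([0, 1], repeat=4)}

msg = (1, 0, 1, 1)
sent = encode(msg, G)                     # stage a: encode
received = list(sent)
received[2] ^= 1                          # stages b-c: channel flips one bit
nearest = min(codebook, key=lambda c: hamming(c, received))  # stage d
decoded = codebook[nearest]               # stage e: recover information bits
print(decoded == msg)  # True: the single bit error was corrected
```

Because the code's minimum distance is 3, any single-bit error leaves the received word closer to the sent codeword than to any other, so minimum-distance decoding recovers it.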
ok
ok
you're amazing, god bless you
Would you provide an email address so I can write to you privately ?
Awesome! Support time :D
Food job!
"ablup is not a valid word"
Not yet it isn't
*taint
thank you ! Ablup !
Hi Chris, do you have a cryptocurrency wallet? I really want to sponsor you but I don't have a PayPal account.
I don't. I can look into getting one. It's late at night for me now though.
Awesome!
Very good
thank you
valid
i love you
Abmork is now a part of my English language and you can't do anything about it.
ALLIGABOR 🐊