I seriously look forward to every new video. Makes my day when I see one of these pop up.
Thank you! More are on the way :)
This series may be the best tutorial series I have ever seen. Honestly, great work. Not only is it easy to follow, it is easy to understand, and it just feels good. To be fair, a lot of the heavy lifting is already taken care of by you, but due to the nature of what is being made, we are free to look deeper into the API. Since I have a basic understanding of game graphics, I have also managed to understand each step along the way in depth, so I really appreciate these videos.
Wow, thanks!
Can't wait for the uniform buffers tutorial! Loving this series!
Should be fairly soon. I think I’ll start covering them in tutorial 17 or 18.
I love how this series both shows practical real-world examples and explains the theory in detail. I have found other resources that do one or the other, but never both at once.
Thanks! I thought it would be a good idea
Your videos are diamond
Your videos are some of the best on 3D programming!
I really love the work you're doing. Thank you so much!
Thank you! I’m glad people appreciate it!
Really great, the illustrations are immensely helpful. I instantly understood what you’re explaining.
Another precise and simple tutorial, thanks mate
Holy smokes Brendan, your content is awesome! Thank you very much!
I appreciate that!
@@BrendanGalea I also joined your Discord :-)
Clear explanation, very nice!
Thank you so much for the content! Your material is very high quality and has helped me to refresh theoretical concepts that I saw in Computer Graphics class and never applied to a graphics engine.
Really love the way you explain the graphics theories!! Thank you!
Spotted a typo in the Transformations Recap matrix page at 1:23 -- in the first Perspective Projection matrix, row 3 column 4 should be -nf/(f-n). The same page in the Perspective Projection video is correct. Thanks for the video!
This is easy to understand and easy to use :]. Thanks a bunch for making this video.
Thank you so much
My new favorite series! Are you basing your design off anything? Any good resources to learn about how you are structuring your game engine design?
For the engine design, unfortunately there isn't any one thing it's based off of; it's kind of being made up as the tutorials go 😅
I have various past projects of mine that I look at for inspiration. Usually they are kind of sloppy coding-wise, so I'll simplify and clean things up as I move them into the tutorial. Sometimes I'll look at existing open-source engines, such as Godot, to get an idea of how they structure things. Beyond that, I just try to follow generally accepted good coding practices and behaviors.
@@BrendanGalea very nice! Well I look forward to learning from this series!
Thank you!!!!!!!!
Such good tutorials; I totally love having both the explanation and the implementation.
Thank you 😊
Pure gold tutorial as always
Thank youu for explaining this tough math!
Thanks for the amazing tutorial series.
Hi Brendan, thank you for creating the best Vulkan Game Engine tutorial on the planet! :D I have a question regarding the future videos. Which topics do you expect to have covered by the end of July?
Thank you! And what I have currently planned is
Tutorial 15 - game loop timing and keyboard input
Tutorial 16 - diffuse shading
Tutorial 17 - loading models
Subject to change. Sometimes when I’m working on a video I realize covering a different topic first would make more sense.
After that, I think I’ll cover descriptor sets, uniform buffers, and textures in August.
In the beginning I thought: "Looks difficult, but it will be easy and fun to make my own engine."
Now: "Still fun, but I see why people prefer to just use a pre-made engine."
Cannot wait for one covering offscreen rendering. I'm not sure if you are going to go that far, but at the moment that's what I am not able to understand myself. I think it is similar to rendering to swapchain images and then using them as textures, but I am not sure how to do that in practice.
Anyway, this is the best video tutorial series on Vulkan. It's definitely nice for me, because I am a visual learner and a slow reader.
It’s definitely a topic I’m planning on covering. Offscreen rendering and multiple render passes are when rendering starts to get fun! But probably not going to get to it until November.
View matrices suddenly made perfect sense to me when I realized that the view matrix is the inverse of the model transformation one would use to draw the camera. To be inside the camera means your matrix in eye space must be the identity, after all: M_camera * View = I. That is nice to know if you want to write directly into the view matrix but, like me, find it unfamiliar looking.
Ahh yes, this is a great insight. The inverse view == the camera game object's model matrix. I think I'll mention this in the next video, since we'll be using the inverse view matrix there.
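As a quick sketch of this identity (assuming the camera's world transform is available as a glm::mat4 model matrix; the function name is illustrative, not from the series):

#include <glm/glm.hpp>

// If cameraModel places the camera object in world space, the view matrix is
// its inverse: the camera's own position maps back to the eye-space origin.
glm::mat4 viewFromCameraModel(const glm::mat4 &cameraModel) {
    return glm::inverse(cameraModel);  // cameraModel * view == identity
}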
I'm not sure if this is correct, but I spent a serious 2.5 hours wrapping my head around the matrices section and wanted to save anyone else curious about the details the hassle of putting this together. Sorry in advance if this is unintelligible. Also note that glm is in column-major format (this caused me some headache to figure out).
Thank you for your great series. For those writing out the matrices: remember that vec3 is a column vector, and that u, v, and w are the normalized basis vectors along their respective x, y, and z axes (not the real x, y, z, but a virtual camera's x, y, and z). The dot product of each basis vector with the position vector (assume position is the vector p.x, p.y, p.z) grabs the magnitude of the position along that axis, and therefore the translation:
The T at the end denotes the transpose, because each is a column vector with rows x cols (M x N) equal to 3x1, not the row vector it looks like in text.
The values here are with respect to the camera view; technically u.x, u.y, u.z, v.x, etc. should appear here, but this is for explanation purposes.
Column vector(u) = [1, 0, 0]T
Column vector(v) = [0, 1, 0]T
Column vector(w) = [0, 0, 1]T
Column vector(position) = [p.x, p.y, p.z]T
This is why you get -p.x, -p.y, and -p.z for this code (at least from the camera's perspective, but it still works when replacing the above orthonormal vectors with their respective u.x, u.y, u.z, v.x, etc.):
m_viewMatrix[3][0] = -glm::dot(u, position);
m_viewMatrix[3][1] = -glm::dot(v, position);
m_viewMatrix[3][2] = -glm::dot(w, position);
Altogether, the Rotation, Translation, and View matrices are:
Rotation matrix (R):
u.x, u.y, u.z, 0
v.x, v.y, v.z, 0
w.x, w.y, w.z, 0
0, 0, 0, 1
Translation matrix (T):
1, 0, 0, -p.x
0, 1, 0, -p.y
0, 0, 1, -p.z
0, 0, 0, 1
View matrix (V): V = R * T
u.x, u.y, u.z, -dot(u, position)
v.x, v.y, v.z, -dot(v, position)
w.x, w.y, w.z, -dot(w, position)
0, 0, 0, 1
In the code, you negate the values to translate back to the origin (or think of it like traversing to position x, y, z via the position vector, then negating the values to travel backwards along the arrows to where you came from).
Also, the normalize is omitted for v because it is implied: w and u are already orthogonal unit vectors, so their cross product is one too. If you took the cross product in the opposite order, you would get the opposite direction by the right-hand rule, so order matters.
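For anyone who wants to see this assembled in code, here is a sketch of building V = R * T directly with glm (makeViewMatrix is an illustrative name, not the series' exact function; remember glm::mat4 is column-major, so indexing is m[col][row]):

#include <glm/glm.hpp>

// Assemble the view matrix from the camera basis (u, v, w) and its position.
glm::mat4 makeViewMatrix(glm::vec3 position, glm::vec3 u, glm::vec3 v, glm::vec3 w) {
    glm::mat4 view{1.f};
    // Rotation rows: each basis vector spread across the first three columns.
    view[0][0] = u.x; view[1][0] = u.y; view[2][0] = u.z;
    view[0][1] = v.x; view[1][1] = v.y; view[2][1] = v.z;
    view[0][2] = w.x; view[1][2] = w.y; view[2][2] = w.z;
    // Translation column: -dot(basis, position), exactly as derived above.
    view[3][0] = -glm::dot(u, position);
    view[3][1] = -glm::dot(v, position);
    view[3][2] = -glm::dot(w, position);
    return view;
}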
👍🏻
Still working on episode 9 (adding other stuff so I can prepare to work on ImGui, like making a DOD system), but this looks cool. I have an idea to avoid floating-point precision problems (objects rendering weird), but I'm not sure; I think this tutorial might help with my idea. My idea is this:
I have cells (like a grid), and within these cells are objects. When an object enters another cell, it gets "parented" to this new cell. Now my idea is that whatever cell the camera is in gets moved back to 0, 0 in world space. So if I render objects based on cell positioning, shouldn't the floating-point issue be fixed, because the cell you're in (or the cell you're going to) will be positioned at 0,0 in world space?
Yup, that sounds like a good solution! So if I follow what you're saying, each object will have local coordinates relative to its parent cell, so an object's full coordinates (somewhat similar to a mailing address) would be its parent cell's grid location + its local coordinates, and this includes the camera object. But when rendering, the coordinates will be converted to camera space, with the camera object centred at 0,0.
I think the only gotcha you need to watch out for is the conversion to camera space coordinates. Depending on how you do the calculation, you may lose precision if you're not careful.
For example, if you try to calculate an object's global coordinates with
globalObjectCoord = objectGridLocation * gridSpacing + objectsLocalCoords;
globalCameraCoord = cameraGridLocation * gridSpacing + cameraLocalCoord;
objectInCameraSpace = globalObjectCoord - globalCameraCoord;
You would still run into precision issues, because even though objectInCameraSpace may be a value centred near 0,0, the intermediate values for globalObjectCoord and globalCameraCoord may be very large and lose precision. And you don't get precision back after it's lost, even if globalObjectCoord and globalCameraCoord are close in value.
The solution is to either use doubles for all intermediate values and convert to float only at the end, for the objectInCameraSpace value, or, I think, something like this would also work:
objectInCameraSpace = gridSpacing * (objectGridLocation - cameraGridLocation) + objectsLocalCoords - cameraLocalCoord;
Essentially, by rearranging the order in which intermediate values are calculated, you can limit precision loss by minimising the size of the intermediate values. So in this case, if an object is close to the camera, then objectGridLocation - cameraGridLocation is going to be either 0 or a small value, etc.
Hope that makes sense.
Cheers!
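A small sketch of that second approach in C++ (the struct and function names are made up for illustration; gridLocation is assumed to be an integer cell index, and localCoord a float offset within the cell):

#include <glm/glm.hpp>

// Cell-based coordinates: an integer cell index plus a small local offset.
struct CellCoord {
    glm::ivec3 gridLocation;
    glm::vec3 localCoord;
};

// Subtract the cell indices first (exact integer math) so the floats only
// ever hold small camera-relative values; the huge globals are never formed.
glm::vec3 toCameraSpace(const CellCoord &object, const CellCoord &camera,
                        float gridSpacing) {
    glm::vec3 cellDelta = glm::vec3(object.gridLocation - camera.gridLocation);
    return gridSpacing * cellDelta + object.localCoord - camera.localCoord;
}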
@@BrendanGalea Wow, thanks so much for the heads up. I still need to do the previous tutorials to catch up, but I really appreciate the help you've given me and other people.
Hey Brendan, thanks a lot for these videos! I was messing around with glm, testing glm::perspective and glm::lookAt to generate the relevant matrices. When combining glm::perspective (also multiplying [1][1] of that result by -1, as suggested in vulkan-tutorial) with the view matrix in your video, I couldn't see anything on screen. After experimenting, the glm matrices for both perspective and lookAt hold different values from the view and perspective matrices generated using your formulae.
I'm not 100% sure if I'm misunderstanding something, but I would think that at least the view projection matrix would be the same, since OpenGL is also right-handed in world space. But after looking at the code for glm::lookAtRH, the signs are different, so I am getting a little confused. Specifically, the positions where the "j" and "k" vectors should land in the 3x3 rotation component of the view matrix have opposite signs in the glm version.
Do you have any idea what's going on? Also, do you happen to know why I can't use the glm::perspective function with your view matrix, but if I use their perspective function with a [1][1] sign flip, along with their lookAt function, I do get the correct results?
Thanks a lot!
I think the difference might be because of the different default view directions. I'm using the looking-down-the-positive-z-axis convention; I think OpenGL looks down the -z axis by default.
glm::lookAtRH negates the 3rd row, Result[x][2]. I think this is to go from OpenGL's left-handed normalized device coordinates to a right-handed world space.
If you use the glm::perspective function with my view matrix, try setting the transform.translation value of the game obj to a negative z value and see if that works? But it's probably best not to mix and match. I'd either use both the glm functions or stick to the functions we wrote.
@@BrendanGalea Ah yes, it's definitely due to both the fact that OpenGL looks down the negative z and that it needs to go from RH in world space to LH in clip space. The glm::lookAt matrix indeed maps the forward vector to -z, and from there the vector for +y is also wrong, as it uses the up vector, which points in the positive y for OpenGL but the negative y for Vulkan.
Then the perspective matrix obviously takes this into account, but also flips the z again to go into clip coordinates.
I wasn't planning on mixing and matching - just wanted to understand the different co-ordinate systems a little bit better! Thanks for the help ;)
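For reference, the [1][1] sign flip discussed above looks something like this (a sketch of the vulkan-tutorial approach, assuming GLM_FORCE_DEPTH_ZERO_TO_ONE is defined so glm produces Vulkan's [0, 1] depth range):

#define GLM_FORCE_DEPTH_ZERO_TO_ONE  // Vulkan clip-space depth is [0, 1]
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Take glm's OpenGL-convention projection, then flip y because Vulkan's
// clip-space y axis points down.
glm::mat4 vulkanPerspective(float fovy, float aspect, float zNear, float zFar) {
    glm::mat4 proj = glm::perspective(fovy, aspect, zNear, zFar);
    proj[1][1] *= -1.f;
    return proj;
}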
I wanted to implement ImGui to see the camera changes in real time, but I'm new to Vulkan (and C++ programming). Do you have any tips? Thanks in advance, and keep it up with the series, fantastic work :)
I have this demo example of how to integrate imgui using tutorial 11 as the starting point. But it should be relatively straightforward to use tutorial 14 as the starting point instead
github.com/blurrypiano/littleVulkanEngine/tree/master/littleVulkanEngine/imguiDemo
You could also try following this tutorial here: vkguide.dev/docs/extra-chapter/implementing_imgui/
I haven't followed this specific part, but vkguide.dev is, in my experience, a high-quality resource.
Also, the next tutorial will cover getting keyboard input and using it to move the camera in real time. That will be released next week. I took some time off for a bit of a summer break, hence the delay in videos.
No problem thx for the answer ❤️
I was initially confused about the perspective matrix involving the orthographic matrix, since we didn't do any calculation for perspective involving ortho in the code; then I realised it was already done for us :D Would it be fair to say the following?
1. there are TWO types of projection matrices here, orthographic and perspective.
2. there is ONE type of transformation matrix here, which is the perspective transform matrix.
3. we can use the orthographic projection matrix on its own, OR we can combine it with the perspective transformation matrix to create the perspective projection matrix
Yup you got it!
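To make point 3 concrete, here is a sketch of that composition with glm (an illustrative helper, not code from the series; it assumes the conventions used in the videos, with depth mapped to [0, 1]):

#include <cmath>
#include <glm/glm.hpp>

// Build the perspective projection as ortho * perspectiveTransform. The
// perspective transform squashes the view frustum into an axis-aligned box
// (stashing z in w for the perspective divide); the orthographic matrix then
// maps that box onto the canonical view volume.
glm::mat4 perspectiveFromOrtho(float fovy, float aspect, float n, float f) {
    const float halfH = n * std::tan(fovy / 2.f);  // frustum half-height at the near plane
    const float halfW = aspect * halfH;

    glm::mat4 persp{0.f};  // column-major: persp[col][row]
    persp[0][0] = n;
    persp[1][1] = n;
    persp[2][2] = f + n;
    persp[3][2] = -f * n;
    persp[2][3] = 1.f;  // move z into w for the divide

    glm::mat4 ortho{1.f};
    ortho[0][0] = 1.f / halfW;
    ortho[1][1] = 1.f / halfH;
    ortho[2][2] = 1.f / (f - n);
    ortho[3][2] = -n / (f - n);

    // Multiplying out reproduces the direct perspective projection matrix,
    // including the f/(f-n) and -fn/(f-n) terms in the third row.
    return ortho * persp;
}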
@@BrendanGalea Thanks Brendan, loving the series; I have learned so much already thanks to you. I have one more question, about rotations. You have provided us with a lovely way of controlling camera rotation using Tait-Bryan angles and keyboard input. After some research I found that using Tait-Bryan angles also prevents the gimbal lock problem, since they do not reuse an axis. This got me thinking: if gimbal lock is not an issue, which is what people normally suggest quaternions to solve, then is there any need for quaternions with regard to camera rotation? If Tait-Bryan angles don't suffer gimbal lock, why would I need quaternions? Just a bit confused about why they would be needed if Tait-Bryan solves the problem they also solve. *edit* It turns out Tait-Bryan angles also hit gimbal lock when the middle axis is rotated ±π/2.
With this code I was able to create a camera rotating around the box, using the targeted view set to {0, 0, 2.5}:
float camx = 2.0f * glm::sin(glm::radians(timeF));   // orbit in the xz-plane, radius 2
float camy = -glm::cos(glm::radians(2.0f * timeF));  // bob up and down at twice the orbit rate
float camz = -2.0f * glm::cos(glm::radians(timeF));
timeF is just a local variable that increases by 1.0f every frame
Nice work!
Hey, I might have missed it, but is there a particular reason we want to represent every operation as a linear transformation? Is it just so that we can evaluate the composition of all the transformations as a single matrix?
Love the videos. I'm late to the party, so this question is perhaps a bit late, but why are you not using vulkan.hpp rather than the C header?
Umm, good question, which I should probably have an answer to… but I don't fully remember why. When I first started the series, I do recall debating which to use. I don't think there was any one main reason, and the hpp headers are a perfectly valid choice.
To venture a guess, I think it was because I somewhat intended the series to be a continuation for people from where vulkan-tutorial.com left off, and that uses the .h headers.
Also, I've come across more educational content using the .h headers, so I figured it might be easier for people who are learning to see consistency across different resources.
@@BrendanGalea Yeah, I get that completely. Nice series anyway. I'm using the hpp headers, and they really are nice: RAII, and queries are not as cumbersome. But alas, there's almost no educational material on them, so I'm stuck translating from C :). If you ever feel like doing a refactor episode, I think people would be interested (at least I would), since you'd be the only one. Anyway, many thanks for the great content; I really appreciate it.
Thanks for the suggestion! It might be some time before I can get around to that, but once I've covered the basics, I think that is something I'd like to do. I don't think it would be too difficult for me to create a copy of the git repo where the code for each tutorial uses the hpp headers instead.
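For anyone weighing the two styles, a minimal taste of the difference (illustrative values only, not code from the repo):

#include <vulkan/vulkan.h>    // C header, as used in the series
#include <vulkan/vulkan.hpp>  // C++ bindings being discussed

void compareStyles() {
    // C API: fill in the struct manually, including sType.
    VkApplicationInfo cInfo{};
    cInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    cInfo.pApplicationName = "demo";
    cInfo.apiVersion = VK_API_VERSION_1_0;

    // vulkan.hpp: sType is filled in automatically, enums are type-safe, and
    // handles can optionally be RAII-managed (vk::UniqueInstance, etc.).
    vk::ApplicationInfo cppInfo{"demo", 1, nullptr, 0, VK_API_VERSION_1_0};
}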
Hi, thank you for this series. For the assert in setViewTarget, would the following be acceptable?
assert(glm::all(glm::epsilonNotEqual(target, position, std::numeric_limits<float>::epsilon())) && "Direction must be a non-zero vector");
Yes, except use glm::any rather than glm::all; we only need at least one component to be non-zero.
You could also add the following in the function setViewDirection():
assert(glm::dot(direction, direction) > std::numeric_limits<float>::epsilon() &&
       "Direction must be a non-zero vector");
dot(direction, direction) is equal to the square of the length of the direction vector, so it is always non-negative; we can just check that the squared length of direction is greater than epsilon.
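Putting both checks together in one place (a sketch; the free functions are illustrative stand-ins for the camera class methods):

#include <cassert>
#include <limits>
#include <glm/glm.hpp>
#include <glm/gtc/epsilon.hpp>  // glm::epsilonNotEqual

// setViewTarget precondition: target must differ from position in at least
// one component, otherwise the view direction would be the zero vector.
void checkTarget(glm::vec3 position, glm::vec3 target) {
    assert(glm::any(glm::epsilonNotEqual(
               target, position, std::numeric_limits<float>::epsilon())) &&
           "Direction must be a non-zero vector");
}

// setViewDirection precondition: dot(d, d) == |d|^2, which cheaply rejects
// zero (and near-zero) direction vectors without taking a square root.
void checkDirection(glm::vec3 direction) {
    assert(glm::dot(direction, direction) >
               std::numeric_limits<float>::epsilon() &&
           "Direction must be a non-zero vector");
}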
Thank you for your reply. I completely agree with you.
I'm also looking forward to the upcoming faster pace. Math is great and all, but I'm not really a math guy.
hahaha well, there's always going to be a little bit of math 😅
@@BrendanGalea My "I'm not really a math guy" is probably different than the normal version of it. I studied electrical engineering, after all. If it's simple enough, and if it isn't just pure math, then I don't really care. Just as long as I don't have to prove anything.
Hi, do you plan to upload a tutorial about textures?
Also, do you have a Twitter account? Thank you!
Yes. I have a few topics to cover first, but I think tutorial 20 will cover them.
@@BrendanGalea Thank you! Your tutorials are amazing and very helpful.
I have tried to implement the projection and view matrices in my Vulkan application, but my screen is still blank even after using this method.