OGLDEV
Joined Nov 10, 2020
Creating video tutorials on modern OpenGL and Vulkan using C++ on Windows and Linux.
Check out my website for more tutorials: ogldev.org.
🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I will get a small commission (thank-you!).
🙏If you like my channel please consider supporting it:
🔹Patreon - www.patreon.com/ogldev
🔹By joining the YouTube channel as a member
🔹A one time donation will also be highly appreciated: ogldev.org/donate.html
Vertex Buffers // Vulkan For Beginners #15
🔥To try everything Brilliant has to offer free for a full 30 days, visit brilliant.org/OGLDEV/. You'll also get 20% off an annual premium subscription.
In this video we learn how to create vertex buffers in Vulkan.
🔥How to install GLM: th-cam.com/video/KEKcDvS3hDQ/w-d-xo.html
🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I will get a small commission (thank-you!).
📚 Timecodes 📚
0:00 Intro and sponsorship
1:51 Background
3:35 Changes to the app code
8:08 Creating the vertex buffer - high-level
9:33 Creating the staging buffer
13:23 Buffer memory requirements
14:04 Buffer memory type
19:17 Allocating the buffer
20:05 Binding the buffer
20:45 Populating the staging buffer
21:28 Creating the final buffer
22:39 Copying the staging buffer to the final buffer
25:31 Outro
Make sure to watch all the previous tutorials in the "Vulkan For Beginners" playlist at th-cam.com/play/PLA0dXqQjCx0RntJy1pqje9uHRF1Z5vZgA.html
Please visit ogldev.org to see more of my tutorials on modern OpenGL.
🔥Link to source:
github.com/emeiri/ogldev/tree/master/Vulkan/Tutorial15
If you want to get the same version of the code that was used in the video, check out the tag 'VULKAN_15'.
🔥Vulkan specification:
vulkan.lunarg.com/doc/view/1.3.204.1/windows/1.3-extensions/vkspec.html
This video was sponsored by Brilliant.
Feel free to comment below.
🔗Contacts:
🔹Email: ogldev1@gmail.com
🔹Instagram: @ogldev1
🔹Github: github.com/emeiri/ogldev.git
🔹TikTok - www.tiktok.com/@ogldev
🔹Twitter: @ogldev
🔹One time donations (Paypal): ogldev.org/donate.html
🔹Patreon: www.patreon.com/ogldev
🙏Credits:
🔹GTA clip by @DopeGameplays - th-cam.com/video/GOW3wuGQxzU/w-d-xo.html
🔹Sound effects by zapsplat.com
🔹Music: "Tropic Fuse" by "French Fuse" (youtube audio library)
Enjoy,
Etay Meiri
#vulkan
Views: 802
Videos
3D Camera - a GLM Story
Views: 1.8K · 21 days ago
In this video we learn how to create a 3D camera using the GLM library. This camera can be used for both OpenGL and Vulkan. 📚 Timecodes 📚 0:00 Intro 1:18 Background 3:04 Installing GLM 4:46 The camera class 7:16 Matrix/quaternion initialization 8:24 The view matrix 9:53 The perspective projection matrix 10:17 Camera class integration 11:23 Keyboard control 16:10 Calculating the velocity and acc...
Graphics Pipeline Object // Vulkan For Beginners #14
Views: 1.6K · a month ago
In this video we will render our first triangle! This will be made possible by the Graphics Pipeline Object, which will be bound to the command buffer. It will provide all the state required by the draw command (also added in this video). 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about O...
The Endless Grid (re-uploaded)
Views: 3.5K · a month ago
In this video we learn how to create an infinite grid using OpenGL. It is based on an article from the '3D Graphics Rendering Cookbook' (see link below). 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the boo...
Shaders // Vulkan For Beginners #13
Views: 1.8K · 2 months ago
In this video we learn how to create the vertex and fragment shader for our first Vulkan demo. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I will get a small com...
Programmable Vertex Pulling // Intermediate OpenGL Series
Views: 2.9K · 3 months ago
In this video we learn how to get vertices into the vertex shader using the Programmable Vertex Pulling technique. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I ...
Pass me that render, please // Vulkan For Beginners #12
Views: 3.4K · 3 months ago
In this video we learn how to use the Render Pass and Framebuffer objects. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I will get a small commission (thank-you!)...
Normal Mapping Adventures // OpenGL Intermediate Series
Views: 2.4K · 4 months ago
In this video we learn how to implement normal mapping, also known as bump mapping. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I will get a small commission (th...
Image Memory Barriers // Vulkan For Beginners #11
Views: 1.7K · 4 months ago
In this video we learn how to use Image Memory Barriers in order to transition the layout of the images in the swap chain. This fixes the errors that we saw in the previous tutorial. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you u...
Submitting Command Buffers // Vulkan For Beginners #10
Views: 1.8K · 5 months ago
In this video we learn how to submit command buffers into a Vulkan queue. This will get us the first visual response in Vulkan - the window will finally be cleared... 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy a...
The OpenGL Software Ecosystem
Views: 9K · 5 months ago
This video provides a short history of OpenGL and explains the role of software packages such as FreeGLUT, GLFW, GLEW and GLAD in setting up a working OpenGL environment. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to b...
Recording the clear command // Vulkan For Beginners #9
Views: 1.6K · 6 months ago
In this video we learn how to record the clear command into a Vulkan command buffer. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I will get a small commission (t...
Buffer Those Commands! // Vulkan For Beginners #8
Views: 2.4K · 6 months ago
In this video we learn how to allocate command buffers in Vulkan that help us reduce the overhead of the driver. 🔥 See the list of the books that I'm using as background information for my tutorials: ogldev.org/books.html. I highly recommend them for learning more about OpenGL and Vulkan. These are affiliate links so if you use them to buy any of the books the price is the same for you and I wi...
Terrain Tessellation Shaders // Terrain Rendering Episode 13
Views: 3.6K · 7 months ago
DJ! Swap that chain!!! // Vulkan For Beginners #7
Views: 2.5K · 7 months ago
It is illogical (device) // Vulkan For Beginners #6
Views: 1.7K · 8 months ago
Let's Get Physical (Device) // Vulkan For Beginners #5
Views: 2.4K · 8 months ago
Creating a Surface Handle // Vulkan For Beginners #4
Views: 2.4K · 9 months ago
Creating a Debug Callback Function // Vulkan For Beginners #3
Views: 2.7K · 9 months ago
Creating an Instance // Vulkan For Beginners #2
Views: 6K · 9 months ago
Quad Tessellation // OpenGL Tutorial #51
Views: 3.5K · 10 months ago
Direct State Access // OpenGL Tutorial #50
Views: 5K · 11 months ago
Render a Wireframe On a Solid Mesh // OpenGL Tutorial #49
Views: 4.2K · 11 months ago
Optimize Your Meshes! // OpenGL Tutorial #48
Views: 6K · a year ago
Introduction To Tessellation // OpenGL Tutorial #47
Views: 7K · a year ago
Text Rendering In OpenGL // OpenGL Tutorial #46.2
Views: 4.6K · a year ago
Text Rendering In OpenGL // OpenGL Tutorial #46.1
Views: 7K · a year ago
Billboarding With The Geometry Shader // OpenGL Tutorial #45
Views: 4.3K · a year ago
Skydome // Terrain Rendering episode #12
Views: 4.2K · a year ago
Good video. It's worth noting that in Vulkan you are not supposed to allocate memory for each buffer, since the number of allocations is limited. It's fine for learning purposes, but for a real Vulkan app you should allocate one big chunk of memory and bind each buffer to that memory at the proper offset. It's also recommended to use a single buffer for the same kind of data, again accessed with offsets. You can write your own allocator or use an existing one; AMD's VulkanMemoryAllocator is pretty good, and it is used in some commercial games.
Thanks for the feedback. The plan is indeed to render the entire frame using a single buffer. This will happen later on in the series once I finish with the basics.
Should I just start with Vulkan, or OpenGL?
If you have no prior knowledge of 3D rendering then definitely start with OpenGL. Vulkan is much more complex but at least the math part is the same so you will learn that with OpenGL.
Thanks a lot! Exactly what I needed.
Glad it helped!
Thank you for this tutorial! Have you considered adding a roughness factor for more detail? (In the next video?)
You're welcome :-) There is a roughness factor in the midpoint displacement algorithm. Is that what you mean?
@@OGLDEV In the last video about tessellation shaders, the LOD depended only on distance from camera. How about adding some kind of roughness factor for the patch, influencing final LOD tessellation level? Anyway, thanks for the good content!
Not sure if this will improve the visual result. This would mean that you will have different LODs in regions which are the same distance from the camera. Perhaps you can add a texture with random numbers and use them to offset the height at the interpolated points. I haven't tried that though.
Snow looks better if you also filter by slope instead of just altitude
Yep, this is called slope based texturing. I will probably implement it at some point.
For non pot maps, just make a temp hf which is next pot, midpoint displace it, then resample down to the original hf
Sorry, I didn't understand this comment.
@ pot = power of two, hf = heightfield
OK, got it. Yes, it makes sense. The algorithm continues running under the same constraint, but from there you can resample the heights to any size between two consecutive powers of two.
Hello, I'm a CS student currently following your OpenGL series, which is great, and I would like to know: how do you use multiple shaders at the same time? Do you have any videos on it?
You can group sets of shaders into different program objects and call glUseProgram(ProgHandle) to switch between the programs. In my shadow mapping videos you can see an example of a shader program for rendering into the shadow map and another program for rendering into the color buffer.
What do you mean by using multiple shaders? OpenGL only supports binding one shader program to the state machine at a time. If you have multiple materials with different shaders, just bind them sequentially, and on each binding render the meshes associated with that shader.
@@赟杨 oh wow thanks!
Start from 1 so need no -1
Do you mean the screen position?
3 Time of each color. 3 timer for each dimensions Each dimensions, each sprite had square and circle , and olygon and x an o super imposed onwith tecture of each and siper computer os can do fast only at location of display
Sorry, I didn't understand this comment.
To try everything Brilliant has to offer free for a full 30 days, visit brilliant.org/OGLDEV/. You'll also get 20% off an annual premium subscription.
Clone the sources: git clone --recurse-submodules github.com/emeiri/ogldev.git
If you want to get the same version that was used in the video you can check out the tag 'VULKAN_15'.
Build on Windows: open the Visual Studio solution ogldev\Windows\ogldev_vs_2022\ogldev_vs_2022.sln and build the project 'Vulkan\Tutorials\Tutorial15'.
Hello. In the fragment shader we don't need the uniform variable gShadowMapOffsetTextureSize, because this information can be retrieved with textureSize(gShadowMapOffsetTexture, 0).yz.
Also, in the for loop where we populate our filters with offsets, there's no need for the double loop over window_pos on the x and y axes, because we don't use it anywhere. We could just do: for (int window_pos = 0; window_pos < window_size * window_size; ++window_pos), which makes it a little more readable.
Also, I might be wrong, but in the fragment shader it seems fishy that when we look at the vector we use for texelFetch, we basically get vec3(i, OffsetCoord.yz). The y and z axes of our sampler3D object correspond to our window height and width, so OffsetCoord.y should be used for the z axis and OffsetCoord.z for the y axis; in short, vec3(i, OffsetCoord.zy). I know it doesn't change anything, because in this implementation the offset windows are squares, but I just wanted to point it out.
Overall this video helped me a lot to implement this myself, and the trick of rotating the data structure of offsets compared to what was used in the NVIDIA article, to make the for loops a lot easier to read, is amazing. Thank you for the video, it helped me a bunch.
You're welcome! Thanks for the feedback, you may be right on the YZ vs ZY issue but it's been so long since this video was published so I need to spend some time to look into this...
I actually did this while modding Halo 3's graphics. Thanks.
You're welcome :-)
My scene feels dark when I use light sources. Is there a reason for this? The colors look less bright than I feel they should.
The question is whether this is a bug or simply incorrect or badly tuned params of the light/material properties. You can take the assets and lighting params from an existing app that seems to work correctly and plug them into your sample to compare.
AABB/OBB frustum culling can be done very efficiently with the following criterion: if all eight corners are outside any one of the frustum's planes, then the bounding box can be culled. NVIDIA gives a very short and fast implementation in nvpro/gl_vk_meshlet_cadscene. The cull bits for one point can be found in this paper: meshshading.vzout.com/mesh_shading.pdf
See Listing 4, "Determine whether a screen position is on screen". By combining the culling bits of all corners with a bitwise AND into a final bitmask, say frustumBits, one can tell that if frustumBits is not zero, the bounding box can be culled.
Hi Etay, great video. I have a question though: in the CalcCameraOrientation function, for the DeltaQuat cast, why do you change the component order for the vec3? Your component order is y,x,z. Why not x,y,z? Thanks, Rich
I think you answered it at 23:25, going through it again
Yes, this is a combination of two quaternions: a rotation around 1,0,0 by the y-delta (y,0,0) and a rotation around 0,1,0 by the x-delta (0,x,0). Combine them together to get y,x,0.
Got it, mouse x rotates on y up, mouse y rotates on x right
Any reason why GL_LINK_STATUS is checked for equality to zero, while GL_VALIDATE_STATUS is checked for non-zero?
glGetProgramiv(ShaderProgram, GL_LINK_STATUS, &Success);
if (Success == 0) { ... }
glGetProgramiv(ShaderProgram, GL_VALIDATE_STATUS, &Success);
if (!Success) { ... }
No good reason, but note that the checks are identical: !Success is TRUE if Success equals zero. I will make them consistent; it will go into my next commit. Thanks for the feedback.
@@OGLDEV Thanks.. I wrongly assumed that the condition checks were different
Amazing content!
Appreciate it!
It was easier to do with the deprecated features.
Well, we have to keep moving forward...
Thanks, I might try one of these books, as I really want to learn Vulkan after OpenTK. I did some Vulkan before, but it is a lot harder.
You're welcome! You can use my affiliate links: ogldev.org/books.html ;-)
5 days stuck on this trying to understand why the depth texture was rendering correctly but not the color attachment. 5 days... Turns out OpenGL considers the single-channel texture to be fully transparent and I had to disable blending
Do you mean that the standard rectangular shadow map was populated correctly but the cube map was not? I don't think it has anything to do with the number of channels. The cubemap is actually a color buffer because (afaik) we can't use a cubemap as a depth buffer. So we render the depth explicitly as a color value which means that it is subject to blending operations.
@OGLDEV At one point I copy-pasted your code to make sure I wasn't making some coding mistake. It still wasn't working, so I used NVIDIA Nsight and RenderDoc to check it; the depth texture looked correct but, as mentioned, the cubemap did not render (every pixel was max float). So the glClear worked, but the actual drawing of the elements only produced results in the depth texture. In the end, in RenderDoc, I saw that the shader returned the right value for R but also an alpha value of 0, even though the shader only had one float output and the color attachment had one color channel. I solved it by using glDisable(GL_BLEND) and re-enabling it after the shadow map is done.
OK, this seems weird, because the cubemap is defined as R32F and the output from the shader is a single float, so they should match. What GPU do you have? I use NVIDIA, and I often see that subtle OpenGL spec issues are handled differently between NVIDIA and other brands. You can try outputting (value, 0, 0, 1) from the fragment shader and see whether that allows you to get rid of the blend workaround.
@@OGLDEV yeah indeed, definitely weird. I also have an Nvidia card, 4090, with Game ready drivers installed. I'll try to use the vec4 return value
@@OGLDEV So... The same thing happens on a Mac (OpenGL 4.1). If I change the output of the fragment shader from a float to a vec4 like you said it fixes the issue and I don't need to disable GL_BLEND. Thanks a lot!
If you still have issues with roll, you can try to get the roll angle using the dot product of right with the global up, and apply another quaternion on the Z axis to fix it:
auto fix = glm::dot(camera_component.right, glm::vec3(0, 1, 0));
auto fix_quat = glm::normalize(glm::angleAxis((float)fix, glm::vec3(0, 0, 1)));
camera_component.orientation = glm::normalize(fix_quat * camera_component.orientation);
Thanks for the feedback. I'm trying to follow up on your logic. You are calculating the dot product between the actual right vector and a constant up vector which is the cosine of the angle between them. This should be zero in most cases unless your right vector is "bad". Then you create a quaternion to pull the camera back to a natural position. Is my understanding correct? Is this an alternative solution to the SetUpVector function at the end of the video?
@@OGLDEV Exactly. This worked for me a long time ago. I'm not a quaternion specialist though 😁
It's always good to have alternatives :-)
Really cool way of making this camera. I was used to Joey DeVries for a long time; this is a bit different, but I love the damping! Very fun, thank you! 👍👍👍
You're welcome :-)
I keep getting perspective distortion, how do I fix this?
What's the nature of the distortion? What's the contents of the projection matrix?
@OGLDEV Basically, when the object goes to the side of the viewport it gets heavily distorted. I tried reducing the FOV, but that doesn't seem to work.
OK, and what does the projection matrix look like? Try using a square window to disable interference of the aspect ratio.
@@OGLDEV Sorry for being late to reply, but the projection matrix looks like this:
1/(aspect * tan(halfFOV)), 0, 0, 0
0, 1/(tan(halfFOV)), 0, 0
0, -((far_z + near_z) / (far_z + near_z)), -((2*far_z*near_z)/(far_z - near_z)), 0
0, 0, -1, 0
The FOV is 45 degrees, while near_z is 1.0f and far_z is 1000.0f. Sorry for the congested typing.
@@OGLDEV Sorry to bother you again, but do you have a solution yet?
For anyone interested in a more visual derivation of the math, I made a different kind of video about it: th-cam.com/video/hdv_pnMVaVE/w-d-xo.htmlsi=4IffwjaJffDpTQWi
Very, very good explanation. Some time ago I created my own camera3d with quaternions. I worked hard, and it works on the basis of quaternions. But the implementation in this video is much better. Masterful execution and code :)) Thanks for this video.
You're welcome :-)
I was watching one of your terrain LOD videos, but I wonder how skinned mesh LOD works, especially when an animation controller is playing a sequence of keyframes for the skinned mesh. How does the LOD change during such an event with skinned meshes?
I'm not sure how common it is to implement LOD on a skinned mesh. Skinning is mostly for humans, creatures, mechanical devices, etc., while LOD is for terrains. Anyway, skinning is implemented in the vertex shader, so you can still have an LOD system using tessellation, which comes after the VS. The geomipmapping algorithm which I presented in the terrain series is probably not a good match for skinning, but I need to think more about it.
@@OGLDEV ok thanks
When I look at the GLM API reference for the lookAt, lookAtRH or lookAtLH functions, they all return a 4x4 matrix, not a quaternion. I don't understand how you can store the result from either of these functions into the camera orientation property, which is a quaternion and not a matrix. A lookAt function returning a quaternion would not take the camera position as a parameter, as it does not need it for the orientation alone. Are there different versions of the GLM library, and did this change over time? Also, I find the GLM manual and reference hard to use. Maybe it's just me.
Hello, I am Aswin. I have previously asked some questions regarding game development; thanks for your time in answering me. I am in a very big confusion. I am right now learning OpenGL and Vulkan for game development. I have completed 19 videos of your OpenGL beginner guide, and I have also been watching many other YouTubers' videos. But my confusion is whether I should go with low-level game dev or learn a game engine like Unreal Engine. As an upcoming computer science graduate, I am very passionate about learning game development from scratch. Will my passion end up in success or failure? My worry is not the time spent learning OpenGL, but will a game made using OpenGL and Vulkan match the graphics of Unreal Engine? Will the time which I put in give me something in return? You have far more experience in this field, so please give suggestions on my thoughts.
You are speaking about totally different jobs. Learning graphics (i.e. Vulkan / OpenGL / DirectX / Metal and other low-level technologies) will allow you to create game engines, improve the performance of work on the GPU (for games, the film industry, artificial intelligence, etc.), or even do scientific research on more beautiful or faster computer graphics algorithms. This path almost doesn't include any game development (i.e. making levels, game scripts, game balance, writing the story, etc.), unless you make your own indie game from scratch. So if you seek a job actually making games, you'd better learn the popular game engines, prototype your own small games, and look for a position in game development. But if you're passionate about computer graphics, that's an awesome career path as well (and I enjoy every bit of it); it's just that you probably won't make many games, but you'll be near the games, supporting them from the low level.
It's different, but C++ is transferable.
To expand on the other comments in this thread - if you plan to be a professional game developer, then the question you need to ask yourself is: what's the most efficient way for me to develop this game that will provide the highest chances of completing the project successfully within the budget and time constraints? Since you (I guess) haven't completed any game yet, either from scratch or using a game engine, you may want to spend some time creating a prototype both ways. I would assume that if you pick up your first Unity/Unreal tutorial you will be able to create a simple 3D game in a couple of weeks. As far as I know, most game developers these days choose this route. However, there are also many people who don't consider themselves great programmers, and they go the game engine way simply to avoid the complexity of programming from scratch. If you do know OpenGL/Vulkan/DirectX, you will have the unique advantage of being able to work on low-level systems in companies that develop games using an in-house game engine, and there are quite a few of those. Also, try to avoid doing stuff from scratch more than once. You should build reusable components and systems that will provide you with an ever increasing library of code to help you jumpstart your next project more quickly. Bottom line: if you find graphics programming interesting, then having knowledge of both OpenGL and one or two game engines will position you perfectly for many companies. If you are an indie game developer with a limited amount of time for your next game, then a game engine is probably the safest bet. Hope this helps!
Thanks to everyone. All your answers helped me get good clarity. As of now, I have planned to spend equal time on learning both OpenGL and Unreal Engine. One small doubt: will the graphics of a game made using OpenGL, Vulkan or DirectX be equal to the graphics of Unreal Engine? If not, why? Since the game engines are made using these libraries, right? Another question: will AI replace game programmers? Once again, THANKS FOR ALL YOUR ANSWERS.
While Unreal Engine is indeed based on an existing API which serves as its graphics backend, the difference is the level of the techniques and algorithms that were implemented on top of it. You can watch GDC videos on YouTube where people talk about how they achieved the graphics that you see in today's AAA games. Regarding AI - I have no clue. Naturally some tasks can be done more quickly by AI, but can AI replace the entire workforce, or most of it? Can AI replace human creativity from this point forward?
Thank you for the clarity and precise word choices.
You're welcome :-)
Excellent video!, ❤
Thanks!
I have something very similar, but instead of using glm::lookAt I use the inverse of the camera's position matrix. It lets you look upside down without the weird snappy thing and without inverting all of your controls. I need it because I'm making a space game, but this is great for like 99% of applications.
Thanks. The inverse of the camera position simply moves the camera to the origin. What about rotating the world so that the camera is aligned with the Z axis?
Ty
:-)
Thanks very much. I was on the perspective projection matrix video today, following the first tutorial, but I was following along with GLM so I was tweaking a lot, which made me slow.
You're welcome :-)
Clone the sources: git clone --recurse-submodules github.com/emeiri/ogldev.git
If you want to get the same version that was used in the video you can check out the tag 'TUT_55_GLM_CAMERA'.
Build on Windows: open the Visual Studio solution ogldev\Windows\ogldev_vs_2022\ogldev_vs_2022.sln and build the project 'OpenGL\Tutorials\Tutorial55_GLMCamera'.
👍 Another video - you're rolling. @10:00 "Notice the order. First view and then projection". That is indeed the order of multiplying the matrices. However, some newbie may be confused about how that order is shown. You first select 'View' and then select 'm_persProjection', but the line of code reads: first m_persProjection, and then View :) But yeah, I had to write _some_ comment to help the algorithm. Cheers!
Thanks! What we are trying to accomplish is the following: P*(V*(W*v)), where v is the local-space position of the vertex. In this video we skipped W, but this is the general idea. Instead of multiplying the vertex by each matrix individually, we combine them into a single matrix. The way the evaluation works in the statement 'm_persProjection * View' is that the vertex is effectively multiplied by View first and then by the projection. So it's written left to right, but the evaluation is from right to left.
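The right-to-left evaluation described above rests on matrix multiplication being associative: (P*V)*v equals P*(V*v), which is why the matrices can be combined first. A toy 2x2 sketch (illustration only, not the tutorial's mat4 code) makes this easy to check:

```cpp
#include <array>
#include <cassert>

using Mat2 = std::array<std::array<float, 2>, 2>;
using Vec2 = std::array<float, 2>;

// Matrix * matrix product.
Mat2 Mul(const Mat2& a, const Mat2& b) {
    Mat2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Matrix * column-vector product.
Vec2 Mul(const Mat2& m, const Vec2& v) {
    return {m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]};
}
```

Combining P and V into one matrix and then applying it to v gives exactly the same result as applying V to v first and then P - which is the "evaluation from right to left" in the reply above.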
I wish math were explained more with code; math folks like terse, cryptic function names whose meaning you sometimes have to guess...
ok
I'm learning - thanks for giving out free information 🙏
You're welcome :-)
I love the structure that you employ in these videos. Each builds on the last, assuming no prior knowledge unless you've already talked about it, which makes the series very enjoyable to watch and aligns with my personal philosophy. I appreciate this series you've made and will definitely check out your website. Wish you all the best, Max
Thank you so much!
Hey, kid, I'm a computer.
me2 ;-)
thanks for the tutorial! you are a great teacher.
You're welcome :-)
I took a break for a while and came back somehow understanding ALL of what I had learned - you are a great teacher! Thank you for the tutorial; this made it quite clear for me. By far one of the coolest videos in the series - it's like you answered the EXACT questions I had.
Thanks!
Somebody please answer my question - I've been trying to figure this out for days now. Why do the x values get mapped to a larger interval in the first place? I thought it was because the horizontal field of view is actually greater, so for the same distance d the values would map to a greater interval. But in the matrix implementation we don't even use the horizontal field of view - how does that make any sense? I thought we would use the horizontal field of view for the x values and then divide by the aspect ratio; instead we still use the vertical field of view. How does it even map the values correctly?
You can structure a matrix using independent horizontal and vertical FOVs, but you will probably end up with the two axes not synchronized with one another - the image will be squeezed on one axis and stretched on the other. So the common practice is to define one FOV (it can be either one) and derive the other from it through the aspect ratio.

Let's assume that the FOV is 90 degrees. This makes tan(FOV/2)=1, so we can ignore this term in the matrix. Let's also assume that our window has a width of 2000 and a height of 1000, so the aspect ratio is 1/2. Now let's see what happens to the point (2,2,2). The matrix has 1 in cell [0][0], so x remains 2, and after perspective division (2/2) we get 1. This means that the point is exactly on the right-hand side of the (-1,-1) to (1,1) square, which is what eventually gets mapped to the viewport.

What about y? In cell [1][1] we get 2 (1/ar), so after multiplying we get 4, and after perspective division we get 2, which is way outside that normalized square. This means that to reach the top of the square without getting clipped, y has to be 1 (when z is 2).

So the bottom line is that when z=2 the visible horizontal range is -2 to 2 while the vertical range is only -1 to 1. This matches the fact that the height of the window is half the width: we simply have more pixels from left to right than from top to bottom.
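The numeric walkthrough above can be checked with a few lines of code. This sketch follows the convention in the reply (ar = height/width, placed in the denominator of cell [1][1]) and models only the two diagonal terms plus the perspective division - it is not the full projection matrix:

```cpp
#include <cassert>
#include <cmath>

struct NDC { float x, y; };

// vfov in radians; ar = height / width (as in the reply above).
NDC Project(float x, float y, float z, float vfov, float ar) {
    float d = 1.0f / std::tan(vfov / 2.0f); // = 1 for a 90 degree FOV
    float clipX = d * x;                    // cell [0][0]
    float clipY = (d / ar) * y;             // cell [1][1]
    return {clipX / z, clipY / z};          // perspective division
}
```

For the point (2,2,2) with a 2000x1000 window this yields x = 1 (exactly on the right edge of the normalized square) and y = 2 (outside it, hence clipped), matching the numbers in the explanation.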
OK, I see the source of the mixup. It looks like my latest code has changed since the time of the video, and my initial reply refers to the latest code. The aspect ratio used to be width/height and it was multiplied by 'd' in cell [0][0] (the first cell along the diagonal, which is multiplied by x). Now the aspect ratio is height/width and it goes into the denominator of cell [1][1] (the second cell along the diagonal, which is multiplied by y). If we use a horizontal field of view we indeed get the range of -old_ar to +old_ar (which is -2 to 2). Now we use a vertical field of view, so the same range is actually -1/new_ar to +1/new_ar, which is the same thing.
@@OGLDEV I get it now thank you
np
@@OGLDEV I am so sorry if I am taking too much of your time, but after reading your second reply I got a bit confused again. I am just going to say what I think makes sense, and if you can tell me what is wrong with that approach I would be forever grateful. Let's say we are using the x/y aspect ratio, and in [1][1] we simply have 1/tan(vfov/2), which makes complete sense - it puts the y values in the -1 to 1 interval. Now for x, if I use the horizontal FOV instead of the vertical (assuming, as you mentioned, that they are matched), I would get the interval from -ar to ar, and the division by ar then puts it into the -1 to 1 interval. So why not use the horizontal FOV then? And if this works, how does the vertical one even work?
Absolute goat!
Thanks!
Great series! Quick question: why is the normal not affected by the bone transformations? Thanks!
Thanks! Good question. The simple answer is that we ignore the bone transformations. This algorithm doesn't provide a good solution when it comes to normals. Some people apply the same bone transformation matrix to the normal. Depending on the model and the animation, this may provide better results than ignoring the normal. You can also add a geometry shader and recalculate the normal based on adjacency info.
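The approximation mentioned above - applying the same blended bone matrix to the normal as to the position - might look like this in a vertex shader. This is a hedged sketch with hypothetical attribute and uniform names, not the tutorial's exact shader:

```glsl
#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec3 Normal;
layout (location = 2) in ivec4 BoneIDs;
layout (location = 3) in vec4 Weights;

const int MAX_BONES = 100;      // hypothetical limit
uniform mat4 gBones[MAX_BONES]; // hypothetical uniform name

void main()
{
    // Blend the up-to-four bone matrices by their weights.
    mat4 BoneTransform = gBones[BoneIDs[0]] * Weights[0]
                       + gBones[BoneIDs[1]] * Weights[1]
                       + gBones[BoneIDs[2]] * Weights[2]
                       + gBones[BoneIDs[3]] * Weights[3];

    vec4 SkinnedPos = BoneTransform * vec4(Position, 1.0);

    // w = 0.0 so the translation part does not affect the normal.
    // This is strictly correct only when the bone matrices contain
    // rotation and uniform scale; otherwise the inverse-transpose
    // of BoneTransform should be used instead.
    vec3 SkinnedNormal = normalize((BoneTransform * vec4(Normal, 0.0)).xyz);

    // ... continue with the world/view/projection transforms as usual ...
}
```

As the reply notes, whether this looks better than leaving the normal untouched depends on the model and the animation.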
Got it! Thanks!
You're welcome :-)
Amazing tutorial, just got done with it. Thank you so much!!
Glad you liked it!
Thanks, but:
$ ./build.sh
/usr/bin/ld: cannot find -lmeshoptimizer: No such file or directory
collect2: error: ld returned 1 exit status
I solved it with: sudo apt install libmeshoptimizer-dev
Dear OGLDev, thank you so much for this amazing tutorial - it's incredibly complete yet concise.

I am trying to load a model with 16 bones per vertex. I naively tried to use int[16] and float[16] arrays in the vertex and fragment shaders, but this is causing problems, particularly with the if-statements used on the bone weights in this tutorial. I don't think it will be an issue in subsequent tutorials that use the weight only as a value, not in a condition.

One potential workaround is to use multiple vec4s, but this results in annoying and messy duplicate code in the vertex/fragment shaders. Have you run into this issue, and have you seen any workarounds? For now I think I'll just drop it to a vec4 / ivec4 same as you do here, and then convert back to arrays in the next tutorial. Strangely, comparing bone indices in the fragment shader doesn't fail - not sure if it's because it's flat or int.

This is the exact error I'm getting with an NVIDIA 4080 on Ubuntu: error C7001: inconsitent use of semantic modifiers: "<null atom>" and "<null atom>"
I thought the camera is supposed to look down toward negative z axis
Yes, when using a right-handed coordinate system. I'm using a left-handed one, so z is positive.
Why copy the device? Why not just a reference?
You mean VkDevice? All the Vulkan handles are actually pointers so you simply pass them around as is. The device itself is not copied.
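The point in the reply above can be seen directly in the Vulkan headers: dispatchable handles such as VkDevice are declared via the VK_DEFINE_HANDLE macro, which expands to an opaque struct pointer typedef. A minimal sketch replicating that expansion (vulkan.h itself is deliberately not included here):

```cpp
#include <cstddef>

// vulkan_core.h defines dispatchable handles roughly like this:
//   #define VK_DEFINE_HANDLE(object) typedef struct object##_T* object;
// The expansion is replicated below for illustration, so "VkDevice"
// here is just a typedef for a pointer to an opaque struct.
typedef struct VkDevice_T* VkDevice;

// Passing a VkDevice by value therefore copies only the pointer -
// the device object it refers to is never duplicated.
static_assert(sizeof(VkDevice) == sizeof(void*),
              "a dispatchable handle is pointer-sized");
```

This is why the handles can be freely passed around by value, as the reply says: the copy is a pointer copy, not a copy of the device.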
I have always wondered how to do this, THANK YOU!
You're welcome!