ADG Filler #48 - Is the Doom Engine a Raycaster?
- Published Feb 6, 2025
- So there's a bit of a debate out there as to whether the Doom Engine, used in a variety of games beyond just the original Doom, is a raycaster like Wolfenstein 3D. Well, after doing a bit of research, here's the definitive answer!
For more information on how the Doom Engine works, here's a very good run-down of what the source code is doing behind the scenes: fabiensanglard....
------------------------------------------
Additional Information and Corrections:
I couldn't think of a good way to demonstrate most of what I was talking about, as it can be difficult to intentionally create some of the potential issues the engine can run into. Fortunately, I stumbled across one just looking through random Doom levels with Doom Builder 2. Easy to see how it was missed too given all the textures in that area were primarily red. :B
------------------------------------------
Pixelmusement Website: www.pixelships.com
ADG on Pixelmusement: www.pixelships....
Alphabetical Index of ADG Episodes: www.pixelships....
I came being confused about how the Doom engine works.
I left being even more confused.
It is confusing, but it's also genius.
The actual levels are just flat 2D maps, as in a top-down view. How the game renders the pseudo-3D space we see in the game is still voodoo magic to me.
The first part I understand.
But how the game renders the pseudo-3D space that we see... is what I was even more confused about.
+Nicholas Markovich Since multiple techniques are being used in tandem, you pretty much have to know what those techniques are, at least at a most basic level, to understand how they all fit together. I tried to break it down but this video would've been four times as long if I had to explain each part of the process in extreme detail. x_x;
***** I do understand how the Wolfenstein 3D engine works. The player position has a cone of view and the distance to a wall or object/item/enemy is measured from player position to object and the distance determines the texture scaling needed to correctly display the object or the wall column. But the Doom Engine does not use the same method as Wolf 3D. I understand the need for a BSP tree to build sections for various floor and ceiling heights, variable lighting intensity, complex geometry, staircases, windows, raised platforms, elevators etc. What I don't understand is how texture scaling works and how it is calculated without ray casting.
Eli Malinsky OK, that I can answer. What Doom is doing is calculating the relative positions of the polygons which make up each wall in view. Knowing the shape of the polygon, and knowing that the camera can only turn on a single axis, Doom is able to rasterize each polygon in a similar way to Wolfenstein 3D. In fact, many of the textures in Doom are stored in the WAD files in a special way to optimize this process. The floors and ceilings on the other hand are even simpler, as they're just calculated as infinite planes and only drawn in the pre-calculated visplane space determined each frame. Sprites are simply scaled by distance and rasterized, no extra calculations required!
Doom isn’t a raycaster because it doesn’t cast rays. To render a wall, it will go to the nearest visible wall according to the BSP tree, calculate where the endpoints (vertices) are in screen space by their angle to the player, subtract their vertex position from the player position to get each endpoint’s distance to the player, use the inverse distance to get the height of the wall endpoints and draw them at their screen space x coordinates, then linearly interpolate between the tops of the wall, getting the slope of the line connecting the two tops, and use this slope to draw all columns in between at the correct heights. It would then go to the next closest wall according to the BSP tree and repeat etc... One caveat is that the walls would be clipped against an occlusion array before being drawn, so there was no overdraw. At no point does Doom ever cast a ray to figure out column height or to determine visible surfaces like Wolfenstein did. Instead Doom used BSP Trees to solve the VSD (Visible Surface Determination) problem. The Doom engine is actually a much more efficient engine than a raycaster like Wolfenstein 3D, which was why an early version of it was used to make the SNES port of Wolfenstein.
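To make that projection and interpolation concrete, here's a rough C sketch of the idea (my own names and simplified math, assuming a 90-degree FOV and floating point, where the real engine uses fixed point, lookup tables and clipping):

#include <math.h>

#define SCREEN_W 320
#define SCREEN_H 200

/* Project one wall endpoint into view space and screen space. */
static void project_vertex(double px, double py, double pangle,
                           double vx, double vy,
                           int *screen_x, double *inv_depth)
{
    double dx = vx - px, dy = vy - py;
    double depth = dx * cos(pangle) + dy * sin(pangle);   /* along the view direction */
    double side  = dx * sin(pangle) - dy * cos(pangle);   /* sideways offset */
    if (depth < 0.01) depth = 0.01;                       /* crude near-plane clamp */
    *inv_depth = 1.0 / depth;
    *screen_x  = (int)(SCREEN_W / 2 + side * (*inv_depth) * (SCREEN_W / 2));
}

/* Fill in the columns between the two projected endpoints by
   interpolating 1/depth, which is linear across the screen. */
static void draw_wall(int x1, double inv1, int x2, double inv2, double wall_h)
{
    if (x2 <= x1) return;
    double step = (inv2 - inv1) / (x2 - x1);
    double inv  = inv1;
    for (int x = x1; x <= x2; x++, inv += step) {
        int col_h = (int)(wall_h * inv * (SCREEN_H / 2));   /* column height shrinks with distance */
        /* draw_column(x, SCREEN_H / 2 - col_h, SCREEN_H / 2 + col_h); */
        (void)col_h;
    }
}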
This explanation is gold! I've been looking for more explanations of how doom worked.. lots of good info on Wolfenstein but Doom always was murkier on how it worked
Hail Carmack
Hail
Fascinating stuff! I remember reading in "Masters of Doom" that adding push walls in Wolfenstein 3D was a big pain for Carmack, and that his solution was a big messy hack, but I never understood why. Not that I really understand it now. Is there something about the simplicity of the visplanes in Wolfenstein (right angles, no ceilings/floor textures) that enabled him to do it?
Great video. I love seeing more technical stuff like this. There's definitely a niche audience for it.
Dome-Candy Games Wolfenstein 3D is a raycaster. It has no visplanes. :P
What made the pushable walls a pain in that game was that the raycaster was optimized to take advantage of the grid-like nature of the maps, thus predicting ahead of time where rays would strike solid walls and thus greatly reducing the number of calculations needed per ray.
I don't know the specifics, but from what I understand, pushable walls needed an entire section of code devoted just to them which had to override the normal wall rendering algorithm.
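For contrast, here's what a Wolfenstein-style grid raycast for a single screen column roughly looks like (a generic DDA walk, not id's actual code; it assumes the map is bordered by solid walls so the loop always terminates):

#include <math.h>

#define MAP_W 64
#define MAP_H 64
extern unsigned char tilemap[MAP_H][MAP_W];   /* nonzero = solid wall */

/* Returns the perpendicular distance to the first wall hit by a ray
   starting at (px,py) with direction (dx,dy). */
double cast_one_ray(double px, double py, double dx, double dy)
{
    int mx = (int)px, my = (int)py;
    int step_x = dx < 0 ? -1 : 1, step_y = dy < 0 ? -1 : 1;
    double delta_x = dx == 0 ? 1e30 : fabs(1.0 / dx);
    double delta_y = dy == 0 ? 1e30 : fabs(1.0 / dy);
    double side_x = (dx < 0 ? px - mx : mx + 1.0 - px) * delta_x;
    double side_y = (dy < 0 ? py - my : my + 1.0 - py) * delta_y;
    int side = 0;

    for (;;) {                                 /* step cell by cell through the grid */
        if (side_x < side_y) { side_x += delta_x; mx += step_x; side = 0; }
        else                 { side_y += delta_y; my += step_y; side = 1; }
        if (tilemap[my][mx])                   /* hit a solid cell */
            return side == 0 ? side_x - delta_x : side_y - delta_y;
    }
}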
***** Aha, that makes sense. Super interesting stuff. It would be interesting to see you deconstruct the Dark Forces engine and contrast it to the Doom engine as well. I always found it fascinating that it rendered so similarly to the Doom engine, but then it was also capable of multiple levels and even 3D objects.
Dome-Candy Games I'd love to see that too!
@@DomeCandyGames The dark forces engine uses portals (like build), which allow for movable sectors and multiple levels. 3D objects could be done in Doom, and don't really influence the overall rendering architecture (just render them in back to front order, like sprites are drawn).
@@Pixelmusement He should've had the walls just lift up, like in Doom. That'd just be a matter of shifting the texture upward, and I suppose casting the rays again for that section to see what was behind the wall. You'd only need to do that twice, for above / below the wall's lower limit.
i liked this and would like to hear your thoughts on quake, maybe in a future filler video
velius2014 Why filler? Quake is a DOS game and also a Windows game. I remember back in the day when I played it under DOS on my 486 at like 3-5 frames per second in a postage-stamp-sized window :D
yes
I feel like I committed a crime by playing Doom, making Doom levels and all, and never knowing what raycasting meant. I'll be honest, I still have trouble understanding it, but I really enjoyed this episode. Good video.
I think this is why I love the Doom engine so much. It's the marriage of 2D and 3D that can only be achieved in a virtual world. 2D can't be seen in a 3D world and vice versa... unless you're playing Doom!
It's something abstract and special that's kind of lost in full 3D engines. It touches upon the existential and it tickles my brain...
Don't know if that makes sense to anyone else other than me!
+Mint Jeffer Oh trust me, it makes PERFECT sense! One of the main appeals of having the world defined in this way is that it's very easy to create stuff for it by comparison to a fully 3D world. I found making custom levels for Doom and Duke Nukem 3D WAY easier than for Quake or Descent! :B
@@Pixelmusement Maybe a little late, but how different is the Build engine from the Doom engine? I mean, since Ken Silverman was doing it on his own, do they have similarities in how they render the walls etc.? Thank you so much for your video :)!
Ok, nvm, I found this : D fabiensanglard.net/duke3d/build_engine_internals.php
+Drerhu Yeah, Doom and Build render things in a similar way, but they differ in how they cull the world so that they're rendering as little as possible. Doom uses a BSP Tree which needs to be pre-built out of the level geometry, whereas Build is literally counting pixels as it renders them to figure out whether to keep rendering past the sector boundaries. Doom's method is faster, but only works with static geometry. (Later Doom Engine games like Hexen have hacks in place to allow for basic forms of full 3D objects.)
Oh yes, yes it does make sense to me.
Here are two crucial facts about the Doom renderer: solid parts of the map [anything without transparency] are rendered near to far, and objects with transparency [things and textured openings] are rendered far to near. The skyline is rendered before the sprites, textured openings, player's weapon and the other 2D graphics get drawn.
Great video, I agree with the explanations. Also good mention of some other Doom aspects like visplanes. I do remember trying to make a detailed map with lots of broken rocks in a corner and I later got the dreaded visplane error. I didn't think of combining the ones with the same height into a single sector then. I didn't even know what this error was anyway.
Unfortunately this was always a big misconception and you'd still hear people saying that Doom, Duke3D and others are raycasters, just because they see column rendering and 2D maps. As I have seen in the code, no rays are cast; instead, the walls that are visible to the camera are found (via BSP in Doom, via portals in Duke 3D) and the Z and U texture coordinates per column are calculated by interpolating from one edge of a wall line piece to the other, thus giving Z/U per column to do the stretched column rendering.
Although, the two most famous tutorials on how to build a wolfenstein engine, also mistakenly mention Doom and Duke 3D as raycasters, and one even mistakenly defines raycasting="raytracing for just 320 columns" while raytracing="for every pixel". It took me much later through scientific literature to discover that raycasting is just the first step in raytracing, casting a ray and hitting a surface, before the raytracing part which is reflecting this ray and continuing the process.
p.s. It's interesting to mention that the SNES port of Wolfenstein is not using raycasting as it proved to be too slow, but a BSP approach. I have read this in a John Carmack interview.
It's very interesting what you wrote, but at some point the Doom engine needs to calculate distances to the walls, doesn't it?
@@plrc4593 Yeah, but it's not done with raycasting. There has to be a Z found for the left/right edges of a wall, then interpolate for the rest of the columns.
@@Optimus6128 But how does the game determine, without casting a ray, what is visible when, for instance, a door opens and a next room becomes visible? Does the game do it similarly to real 3D, using some projection matrix?
@@plrc4593 Doom uses BSP trees for visibility determination.
In reality, several techniques are used in a 3D engine. So, one cannot say "a modern game is using projection matrix for real 3D" and stop there. Modern games might be using various visibility methods like BSP or Portals or other, to determine which areas to render, then do the 3D projection and rendering.
Doom is using the BSP tree to determine which wall segments are visible, then some kind of trigonometry tables to project the left/right edges of a wall (it will get the X coord and scale from distance (for which it also seems to use some other math tables but finds it with trigonometric math not raycasting)).
It would be easier to explain that levels consist of convex polygons (each such polygon having a value for floor and ceiling height). The edges of the 2D polygon where the camera currently is are clipped to the 2D view frustum (which can be described as a trapezoid put onto the map, clipping out everything outside of it) and projected onto the screen. Since each such line is essentially stretched vertically to form a wall, you can then fill that wall by drawing vertical lines from each matching pixel. Perspective correction for the texturing only needs to be done once per vertical line.
If this edge was actually shared with another polygon, the vertical line will consist of one, two or three sections, the middle one being rendered recursively for that other polygon. (The middle section of the vertical line is kind of a "portal" view to the adjacent polygon.)
Texturing the floor and the ceiling efficiently (and perspective-correctly) requires a couple of smart tricks, but it's possible (ie. you can draw them as horizontal lines, and you only need to calculate perspective correction at the endpoints of these lines).
This explains, for example, why one such convex polygonal sector can have only one wall texture, one floor texture and one ceiling texture. (If you wanted to use more than one texture, you need to split the sector.)
The nice thing about Doom's rendering engine is that the vast majority of calculations are done in 2D, and there's a minimal amount of perspective correction needed for the textures, yet you get perfectly perspective-correct texturing.
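The horizontal-line floor trick mentioned above can be sketched like this in C (my own simplified math, not Doom's fixed-point code): for a flat floor, every pixel on one screen row lies at the same depth, so the texture coordinate steps by a constant amount across the row and the perspective math is only done at the endpoints.

#include <math.h>

#define SCREEN_W 320
#define SCREEN_H 200

void draw_floor_row(int y, double cam_height, double px, double py, double pangle)
{
    /* Depth of this screen row below the horizon (horizon at SCREEN_H/2). */
    int row = y - SCREEN_H / 2;
    if (row <= 0) return;                        /* above the horizon */
    double depth = cam_height * (SCREEN_H / 2) / row;

    /* World position of the leftmost and rightmost pixel of the row,
       assuming a 90-degree field of view. */
    double lx = px + depth * (cos(pangle) - sin(pangle));
    double ly = py + depth * (sin(pangle) + cos(pangle));
    double rx = px + depth * (cos(pangle) + sin(pangle));
    double ry = py + depth * (sin(pangle) - cos(pangle));

    double step_x = (rx - lx) / SCREEN_W;        /* constant per-pixel step */
    double step_y = (ry - ly) / SCREEN_W;
    for (int x = 0; x < SCREEN_W; x++) {
        int tu = (int)(lx + x * step_x) & 63;    /* 64x64 flat texture assumed */
        int tv = (int)(ly + x * step_y) & 63;
        /* putpixel(x, y, flat_texture[tv][tu]); */
        (void)tu; (void)tv;
    }
}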
+WarpRulez That... didn't sound even remotely easier to explain. ;D
*****
Some graphics would help. It's difficult to express things graphically in a youtube comment.
+WarpRulez Given the level of maturity I've seen in some comments, you should probably be EXTREMELY thankful that graphics in comments aren't a thing... o_o;
Great explanation *****. It was interesting to know a little more of this, even though I was half way confused about the subject lol.
My guess for the next episode is that you will cover Madden. Unsure if there was a version of Madden for DOS.
I hope you get to do a Tomb Raider episode sometime, I remember running it on DOS back in the day.
Keep up the good work =D
***** There is a DOS version of Madden ;)
Very interesting stuff there, since I am a programmer but have never done anything with 3D.
By the way, I would have expected this channel to be way more popular with the sort of high-quality videos you make.
berni8k Give it a little... the subscriber count has actually gone up by 33% since switching everything over from Blip just a couple months back. That said, DOS gaming is kind of a niche thing nowadays, but it's something I know fairly well which is why I talk about it. :)
***** Good luck with the channel. It deserves it.
@@Pixelmusement Ah, I remember when there only WAS DOS gaming! Memmaker, the PC gamer's friend! Then WING, the pathetic attempt at trying to do games in Windows, that eventually mutated into Directx.
So cool. Even today, it's a very cool technical achievement. Is Maze War (the very first FPS, in 3D or maybe pseudo-3D) raycasted?
Some of the best information on doom mapping I've ever seen on here! Love Plutonia. 👊 🚀 💀🚀 👊 👊 🚀💀 🚀👊
That constant jumping is headache inducing... Instead of a meaningless flyby it would have been more interesting to see actual examples of the things you talked about, like when you showed the walls go infinitely up.
I had to shut the video off. Watching these 3D scenes quantum jump from position to position over and over again was giving me an extreme headache. It overwhelms the amazing information being spoken about. It doesn't help I am watching this on a large TV. Please take everything with that Doom editor and throw it away. Just show Doom gameplay while you chat instead. I don't think people realize how jarring sudden motion is when you are not the author.
If this engine were released today, would binary space partitioning be necessary?
Also, you didn't demonstrate why the walls can't move sideways. I would appreciate further explanation.
Nowadays culling methods are often much more advanced, but BSP is still used in some of them! The trick is that BSP trees, once compiled, cannot change mid-game since it would be impossible to do real-time. This is why the Doom Engine originally didn't have horizontally-moving walls because the walls were what defined the BSP structure. If a wall moved, the BSP structure would no longer account for all possibilities and thus things either wouldn't be culled, reducing the framerate, or culled when it shouldn't be, resulting in parts of the screen not getting rendered, leading to a hall of mirrors effect. Later iterations of the Doom Engine found ways around this by allowing disconnected walls to exist outside of the BSP calculations, provided their movement was restricted to staying inside of the sector they were positioned in, a detail I explain a bit better in my review of Hexen: th-cam.com/video/uJoqcVPjqTc/w-d-xo.html
@@Pixelmusement
Can you explain how the Doom engine draws walls and stairs? Is it raycasting? What's the fundamental logic there?
I've heard that it was running (but poorly) before BSPs.
@@AntonyCannon At a fundamental level, it's using the BSP data to decide what geometry to draw, based on the player position, then effectively turns what should be the visible walls, floors and ceilings into polygons, though it's a bit more complex than that in order to avoid rendering the same pixel more than once wherever possible, as there's a TON of optimizations in place to make this go extremely fast given the hardware of the time.
@@Pixelmusement Thank you for your reply, but it's that "turns what should be visible walls, floors and ceilings into polygons" bit that I want to know about. How?
Raycasting I can fathom, but what does this engine do to position and stretch its textures?
@@AntonyCannon Raycasting is merely a means to get an arbitrary distance from one point to another without knowing what objects are in the way. The BSP Tree tells the engine EXACTLY which walls and floors/ceilings to even attempt to render, thus raycasting ahead of time is unnecessary; it can just pull all of the wall coordinates and sector floor/ceiling heights and work from that. :B
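As a rough idea of what "pull all of the wall coordinates and sector floor/ceiling heights" means in practice, here's a stripped-down C data layout (field names are illustrative, not the exact WAD structures): each wall is a 2D line between two vertices, and the heights come from the sectors on either side of it.

typedef struct {
    short x, y;                       /* map-space vertex position */
} vertex_t;

typedef struct {
    short floor_height;               /* bottom of the sector */
    short ceiling_height;             /* top of the sector */
    short light_level;
    short floor_flat, ceiling_flat;   /* floor/ceiling texture indices */
} sector_t;

typedef struct {
    vertex_t *v1, *v2;                /* the wall's two endpoints on the 2D map */
    sector_t *front;                  /* sector the wall faces */
    sector_t *back;                   /* NULL for a solid one-sided wall */
    short     texture;                /* wall texture index */
} wall_t;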
I tried to look up scan line rendering but I didn't understand it much, how does doom draw the walls from the nodes?
It's basically creating polygons and then filling those polygons with textures. It's not much different from traditional polygon rendering, except several shortcuts are being taken to speed the process up given that there are known orientations and limitations to how those polygons are going to be oriented, given the data they're being constructed from. Heck, the textures THEMSELVES are stored in the WAD file in a special way which makes them faster to process given the way the engine works! To get any more specific than that would require some complex math and I'm not well versed on those specifics at all. :P
as a huge fan of the series this was pretty interesting. if I'm not mistaken the idea to use BSP trees was also used in much later generations of engines. don't know though about today's engines, whether this is still the way to go.
say, what do you think about the new DOOM which was shown at E3? I was positively surprised how they obviously took influence from the mod/map community of the original games (finishing moves from Brutal Doom, changing into monsters in the multiplayer reminds me of the Master of Puppets mod). and SnapMap seems to be a great way to ensure a healthy community around the game.
Sp00kyFox Haven't actually seen the new Doom yet. Been too busy with other stuff... plus I'm not as in tune with modern gaming as I once was. :P
*****
okay, but you're missing out ;) if you have a free half-hour I'd highly suggest to take a look at the DOOM presentation from the Bethesda press conference. it begins at 30:00
www.twitch.tv/bethesda/v/6189973
You still get a huge benefit on modern GPUs drawing from front-to-back as the z-buffering hardware can reject a lot of geometry before it gets to the more expensive shading stage, or even in deferred renderers you can still do a z prepass to save on expensive GBuffer updates. BSPs are useful for CPU-side collision detection when you are firing rays into a complex scene to figure out collisions more efficiently, but that's usually done with a low-polygon representation of the world. You also get some neat culling tricks from BSP, like PVS (used in Q2) or portals (Unreal Engine 1, I think).
WHY IS NO ONE TALKING ABOUT SOURCE!?!?!?
I'm not sure if I get how the rendering works. So with Doom rendering you already know how far the walls are, and then you cast a ray to the end of the map and check if the first intersection is visible, and if not, you move on to the next one further down the ray? Or am I wrong? I don't know what role visplanes play in this. Oh, and I got this idea both from your vid and this forum right here: www.gamedev.net/forums/topic/163803-bsp-doom-style-rendering/
No, Doom doesn't cast rays at all. The precalculated BSP tree will tell you which wall is closest, and you would just subtract the vertices of the wall from the player's position to get the distance between them. Then you would use linear interpolation to know how tall to draw the columns in between. The only reason you would cast rays would be to find the nearest visible wall, but Doom already solves this more efficiently with BSP trees, so there would be no point.
@@robertforster8984 This is perhaps the best explanation how the Doom engine works that I have found. Thank you very much. Could you also explain how the Doom engine draws floors and ceilings?
Great video, subbed.
7:08 "Doom did indeed do"
Say that five times in a row,
Ghost81 Doomdidindeeddodoomdidindeeddodoomdidindeeddodoomdidindeeddodoomdidindeeddo.
:P
***** You win this time. >:(
How was it possible for Hexen to have side-moving doors, while still using the Doom engine?
The Doom Engine was never a "static" entity. With each game coming out which utilized it, changes and additions were made to make the engine better. When I ultimately get around to Hexen I'll investigate how this works because there's a number of ways it could've been done while still using fixed BSP Trees to manage level rendering.
***** Thanks ;)
These sideways-moving objects were called "polyobjects" and were invented in the Hexen version of the engine, yes.
@Felipe Gomes One thing that's important to remember about Heretic and Hexen is they were NOT made by Id Software, so naturally the devs were going to give those games their own look and feel within the limits of what the Doom Engine was capable of. As for what the engine actually does, the Doom Engine is primarily a set of libraries and functions which allow for loading a particular level format and rendering a 3D viewport into that level while also being able to blit graphics and fonts onto the screen almost anywhere you want, with a few helper functions for providing menus, messaging, and rendering of the map from an overhead perspective. Beyond that, how everything in the world behaves and moves is entirely up to the coders to write themselves. :B
I'm confused about "doors not being able to go left and right" when in Hexen, at the beginning, the doors don't go left or right but turn. I'm really confused.
+Epic_Man89 I was mostly referring to the original Doom, Doom II and Heretic which exclusively relied on BSP trees for the world. Hexen still does too, but uses a slightly more advanced form of the engine which is able to incorporate moveable one-sided walls, referred to as "Polyobjects". There's some huge limitations with these but the BSP tree is able to take them into account because of those limitations.
@@Pixelmusement Now i get it!
So if I understand correctly, it's not a raycaster nor a rasterizer. Is that correct?
Actually, "Rasterizer" applies to how it draws all of the walls and sprites. The floors and ceilings however are a bit more hacked in terms of how they work, thus why they can appear to go on forever if you glitch out of bounds or fail to set a texture on a wall.(Or set a single-sided wall texture to something with transparent sections.)
Does that mean that each wall/door/column rendered is in fact a sprite being drawn onto the screen? If so, wouldn’t that mean that everything is being redrawn every time the player moves the camera?
+Vincent Carver Everything is being redrawn every FRAME, regardless of if the camera moves or not. It may sound inefficient, but it's the simplest and fastest way to handle a camera which is moving frequently, whereas optimizing for those extremely few moments where it's not moving would be pointless and would burn more RAM. As for the walls and doors and such, it would be more apt to say they're being rendered like polygons. :B
Walls and sprites use the same functions to draw them. The only difference is that walls are also transformed to introduce perspective, while sprites are simply scaled with distance.
Virtually all games redraw everything on every frame. That's not unusual in the least. It's technically possible to cache rendered sections of the screen, of course, but way way WAY more effort than it's worth.
That genius Carmack
Great video!
is there anything I can download to edit the source code of doom 1?
+AA Cannon Productions Perhaps the source code itself? Just a wild guess. Though no, I don't know where it's available from, just that it IS out there. :B
I can't find it anywhere, could you give me a link please?
Source code is available here, you'll have to make some modifications to get it to compile though. I think you need to add a sound API amongst other things: github.com/id-Software/DOOM
+=XA= HahYouMissedMe I think GitHub contains the source code of Doom and some of its offspring, the source ports. Just search GitHub for Doom and you can see stacks of repositories containing Doom's code or enhanced and/or portable versions of the said engine.
+=XA= HahYouMissedMe You need to compile it with Borland C++ 3.0.
I got much more confused
The question still remains... is it 3D or not?
+Thomas Silcher I've heard this argument before. The correct answer is: Doom is a visually 3D game with mostly (but not entirely) 2D gameplay. (Player's height is a factor for level geometry, although not for entities.)
The Doom engine doesn’t depend on the full GL(3) group of transformations, but rather on GL(2). This would make it 2D.
@@Pixelmusement Well, projectiles do in fact care about Z axis and heights.
Proof: make a room with a raised column in it. Put an imp onto it. Stand away from the imp, let him throw a fireball, run forward. Fireball goes over your head and impacts the wall.
Can you make a mirror on that engine
Not the original Doom Engine; it has no means to accomplish such a task. I wouldn't be surprised if modern source ports could do it, but then those effects would be incompatible with the DOS original. Duke Nukem 3D on the other hand has a rather crafty way of handling reflections and mirrors, but I'm not sure if it's a Build Engine feature or something the people at 3D Realms figured out how to do after the fact given how it works.
Man the black magic programmers had to resort to just to render a fake 3d environment. Are today's devs spoiled?
+Professor Monkeyface Ph.D. Yes and no. Indie devs are way spoiled with all the choices of engines which automatically do so much of the work for them, but the commercial devs making their own engines still have a massive amount of really hard work to do getting their creations to not only look good but perform well. There's even all kinds of tricks which still apply even today. For instance, the less often you switch which textures you're drawing FROM on 3D hardware, the faster everything goes, so if you can render all instances of the same object in one pass, rather than in whatever order they happen to be in, you help speed things up on the GPU; it's even better if multiple objects share the same textures! (Although this isn't always feasible/practical.)
@@Pixelmusement Not to mention that even today, getting the game to work right on all sorts of hardware configurations with a custom 3D engine is ridiculously painful. In contrast, 2D and soft 3D could have performance issues, but it would always work. The move from 2D and soft 3D to hardware 3D is only simple while you only test on your own computer :D
Is it any more fake than modern engines? Personally it seems a lot more "real" than raycasting engines like Wolfenstein3D.
Wait, It Really Is a Raycaster
Me: hahahahaha
my friend: STOP BULLY ME!
Ray casting and tracing are pretty simple to describe. Ray casting is sending out (from your eyes/camera) rays and noticing what walls / things you hit. Ray TRACING is the opposite. It sends out rays FROM ALL LIGHT SOURCES, lets them bounce around, and >>IF
+Chris Katko Actually, what you've described is referred to as "light-based" ray tracing and is less common than "eye-based" ray tracing which is what I described. "Light-based" is how eyes work in the real world, as what we visually see are the photons from light sources reflecting off of surfaces and hitting our eyes, since a single light source outputs an insanely massive number of photons in every moment. For virtual space though, "eye-based" algorithms are far more common because they compute way faster, although to achieve certain complex effects it's actually possible to combine BOTH methods together! Using the eye-based results for the basic rendering pass and stitching in light-based results for any surfaces or volumes requiring advanced lighting techniques. :B
I think in the scientific literature, raytracing is an extension of raycasting. In traditional raytracing, you still cast rays from the camera eye, hit something, but contrary to raycasting you reflect the rays at the hit points and continue tracing for n-times if you want n-reflections. One additional part is to cast a ray from the light back to the hitpoint in order to see if it hits something in between and decide whether to shadow that hit point too. But what you describe I think it's called photon mapping, which is also considered an advanced raytracing method.
In the gamedev community, there used to be a very popular tutorial for how to make a wolfenstein engine. The author had used the term raycasting (showing games like Wolfenstein, Doom, etc) while in comparison he used the term raytracing (showing prerendered games like 7th Guest). But I realized, after I discovered how graphics papers use the terms, that It was misleading in many ways (Doom isn't using raycasting, raycasting doesn't mean only cast rays per column). But that stayed in the community. Somehow people thought raycasting is the simplified wolfenstein version of 2D rays hitting. But no, there is raycasting per pixel when you want to cast a ray and check if hit voxel data for every pixel, but don't care about reflections, etc. Raycasting is also used today in game engines, when you say "I cast a ray from where I fired my gun and see if it hits an enemy".
Only reason wolfenstein 3D was fast with raycasting is that in 320*200 resolution it casted only 320 rays and checked purely in 2D block map what it hit (then it could finally find distance Z and finally draw scaled columns at the correct positions), while modern 3D raycasting/raytracing either cast one ray per pixel or multiple because of reflection bounces, so they have to do at least 320*200 raycasts or 320*200*n per frame which is a lot.
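The "cast a ray from where I fired my gun" usage mentioned above boils down to a plain 2D ray-vs-segment test; here's a generic C version (nothing Doom-specific, just an illustration): it returns 1 and the hit distance if a ray from (ox,oy) along (dx,dy) crosses the segment (x1,y1)-(x2,y2).

int ray_hits_segment(double ox, double oy, double dx, double dy,
                     double x1, double y1, double x2, double y2,
                     double *hit_dist)
{
    double sx = x2 - x1, sy = y2 - y1;
    double denom = dx * sy - dy * sx;             /* cross(ray dir, segment dir) */
    if (denom == 0.0) return 0;                   /* parallel, no hit */
    double t = ((x1 - ox) * sy - (y1 - oy) * sx) / denom;   /* distance along the ray */
    double u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom;   /* position along the segment */
    if (t >= 0.0 && u >= 0.0 && u <= 1.0) {
        *hit_dist = t;
        return 1;
    }
    return 0;
}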
In raytracing you draw, for each pixel on screen, an imaginary line passing from the viewer's eye, through the screen, into virtual space, and as it hits things it reflects or refracts off them, sending it on a new course. Each time, you take into account what it's hit. You allow so many bounces, because if you let it bounce forever it'd never finish. Then when it hits a light, you look at the list of bounces and work out what colour it would be after passing through all those bounces. You set the pixel to that colour.
In raycasting, you draw the same line, except as soon as it hits an object you stop, and whatever it has hit, you draw. In the case of Wolfenstein you don't need to do each pixel, since the walls are all purely vertical objects, so you just take the wall's top and bottom and draw the texture between them. So you only need 1 imaginary line per column of screen.
why is it a 1.5 3d engine?
...I think you meant to ask why it's a 2 1/2 D engine, the answer of which is that the original Doom engine doesn't actually render 3D shapes. The map is 2D with floor/ceiling heights defined for each "sector" to determine where to render the floor and ceiling and then numerous optimizations are made to give the appearance of 3D without actually doing 3D rendering. Since it's not real 3D the objects in the world are also limited to being sprites. Nowadays, source ports to modern computers have full 3D rendering of Doom levels and can even incorporate 3rd-party models in place of the original sprites, but the levels still follow the same original 2D sector-based format. :B
Damn, this video was way ahead of its time.
Did anyone else pick up on the way the guy ends a lot of words? Like it's a question, with an upward inflection... higher toned at the end haha
You should come visit Australia. Soooo many people speak like that and it's really weird and annoying. Source: I'm Australian.
I'm very confused about how 2D levels are stored as raw data; then the 3D part just completely screws me over
+MeatBeat What specifically has you confused?
+Pixelmusement Loading bytes from a file and making them work in different ways, such as walls blocking the player or enemies
+MeatBeat That's just "data interpretation". A wall in Doom for instance consists of X and Y coordinates for its two ends, texture definitions which are just ID numbers which match the ID numbers of the textures stored in the Doom WAD file, and a simple value which tells the engine which code to run when the wall is interacted with. It doesn't need to have its 3D height stored because that information is stored in the sectors those walls belong to. Once you've created this stuff, you simply write it to a file in a certain order so that when you read it back in from that file in the same order it fits right into the data structures you created in your code! :)
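As a toy illustration of that "data interpretation" idea, here's a C sketch that reads fixed-size wall records straight from a file into an array of structs (the record layout is made up for the example; real WAD lumps differ):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    int16_t x1, y1;        /* one end of the wall */
    int16_t x2, y2;        /* the other end */
    int16_t texture_id;    /* index into the texture list */
    int16_t special;       /* which code runs when the wall is interacted with */
} wall_record_t;
#pragma pack(pop)

wall_record_t *load_walls(const char *path, size_t *count)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    *count = (size_t)size / sizeof(wall_record_t);
    wall_record_t *walls = malloc(*count * sizeof(wall_record_t));
    if (walls) fread(walls, sizeof(wall_record_t), *count, f);
    fclose(f);
    return walls;    /* the raw bytes now "fit right into" the structs */
}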
@@jacobthesitton9142 Usually in games, even 2D ones, walls don't really block players or enemies. Instead, the game checks to see if you, or an enemy, is trying to go past a wall. If you are, it doesn't allow you to move in that direction. So really everyone in the game is just being polite and stopping before they walk through walls, or (in some games) each other.
It does the check by comparing your coordinates with those of the walls etc. If your coordinates start to coincide with a wall's coordinate, then you're going too far. This applies in 3D games where polygons are checked not to move through each other's space, but also in stuff like the original Super Mario Brothers, where the level is just bricks and ground tiles laid out in a grid.
You can demonstrate this easily in Doom by using NOCLIP to turn off the checking. You can just barge through walls, and see what happens to the renderer when you're in situations the game wasn't designed to let you get into.
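A minimal version of that "polite" movement check, in the tile-grid style (think the Super Mario example rather than Doom; names and sizes are mine): before moving, look at the destination tile and refuse the move if it's solid.

#define TILE_SIZE 16
extern unsigned char tiles[64][64];          /* nonzero = solid */

int try_move(double *x, double *y, double dx, double dy)
{
    double nx = *x + dx, ny = *y + dy;
    int tile_x = (int)(nx / TILE_SIZE);
    int tile_y = (int)(ny / TILE_SIZE);
    if (tiles[tile_y][tile_x])
        return 0;                            /* would walk into a wall: stay put */
    *x = nx; *y = ny;                        /* destination is clear: move */
    return 1;
}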
interesting.
very good video
However, it still doesn't seem like we have an answer for what to call it. Someone could say it's BSP based, sure... but BSP is not the only thing it uses either.
Especially the front-to-back rendering process, done literally in vertical lines, is why I thought it was "raycasted"; I have seen someone coding their own one.
I saw someone build one on YouTube; after he'd done the BSP bit, he showed how it's drawn from front to back (like raycasting)... how if a floor is higher, it will obscure what is behind it implicitly... the same thing happening with the ceiling (I wish I could find this video, he called it a raycasting engine too)... this gives the result that we have no room above room, and right there that is what people called a "raycasting engine".
Sure it doesn't just go up the screen, but maybe you have over-simplified ray-casting.
Personally I'm gonna avoid the argument, but I think the people who wrote the old books might know more than you think... it's just hard to put a word on it; one simple word was not gonna describe the Doom engine.
Ultimately, that's why we call it an "engine". If it was just one simple answer which did everything, it wouldn't be much of an engine, just an equation! ;)
The point is that raycasters go the width of the screen, pixel by pixel, determining wall height (by casting a ray until it hits a wall). Doom and Duke Nukem 3D go through (potentially visible) walls, one by one, calculate the four corner vertices and then interpolate between them. It's drawn front to back to avoid overdraw (which was very expensive - you didn't have VRAM access on every CPU cycle!) - if you ignore sprites, Duke Nukem 3D has _no overdraw_ - each pixel is written exactly once per frame. All you need is a very simple occlusion array (the "top" and "bottom" of every wall rendered so far) - just two values for each column of the screen.
This isn't why there's no "room over room" in Doom. That's a combination of the limitations of the BSP and the level editing. Build, which used explicit portals, could actually do room over room, though it wasn't used much (you could even have two different rooms at the same position, if you were clever enough) - as long as you never tried to render both rooms at the same time. I've always loved the Build engine for the insane things you could do in it - I've written my own Build map editor to exploit the engine for all its worth; one of my first really custom maps was a tesseract arena with the kind of alien topologies we wouldn't see again in games for a long time. The main problem with the Build engine was that a) it put a lot of the work on part of the level designers and b) the jank. There were so many things you could do with it that resulted in a rendering artifact here and there.
I've got a few software 3D renderers under my belt. I've always loved the Build-style approach far more than Doom or Quake. And I still prefer the way those engines end up kind of emulating how human vision works, while "true" 3D engines mostly emulate how _cameras_ work. And it's amazing that even today, my C# Build-style engine can push 40 FPS in Full HD :D - in contrast, my true 3D software renderer can do 60 FPS... at 320x200. Of course, I'm not done optimizing either :)
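The per-column occlusion array described above can be sketched in a few lines of C (names are mine): each screen column remembers how much of it is still open, and a new wall column is clipped to that window before being drawn.

#define SCREEN_W 320
#define SCREEN_H 200

static short open_top[SCREEN_W];   /* first still-unwritten pixel from the top */
static short open_bot[SCREEN_W];   /* last still-unwritten pixel from the bottom */

void begin_frame(void)
{
    for (int x = 0; x < SCREEN_W; x++) {
        open_top[x] = 0;
        open_bot[x] = SCREEN_H - 1;
    }
}

/* Draw one column of a solid wall at screen column x spanning y1..y2,
   clipped against whatever closer walls have already claimed. A solid
   wall (plus its floor and ceiling above/below) finishes the column;
   a two-sided "window" wall would only shrink the open range instead. */
void draw_solid_column(int x, int y1, int y2)
{
    if (y1 < open_top[x]) y1 = open_top[x];
    if (y2 > open_bot[x]) y2 = open_bot[x];
    if (y1 > y2) return;                     /* nothing of it is visible */
    /* draw_column_pixels(x, y1, y2); */
    open_top[x] = SCREEN_H;                  /* mark the column as closed so */
    open_bot[x] = -1;                        /* nothing behind gets drawn */
}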
@@LuaanTi :O You're writing your engines in C#? :P You should put some videos of them on your channel. I'm working on my raycasting engine in C# currently. I managed to implement good-looking wall texturing and quite decent and fast floor texturing. I was curious how it works in Doom and was surprised that it's reportedly not raycasting at all... I understand the idea of BSP and precalculating distances to walls but don't exactly understand how to determine what is visible without raycasting.
It's a raycasting engine if it casts rays to find which geometry to render. Doom doesn't do that so it's not a raycasting engine. It instead traverses the BSP tree to find the closest groups of walls which are convex and therefore can be rendered independent of order (assuming backface culling) and then calculates how to draw them to the screen, while ignoring columns of pixels which have already been drawn. In other words, Wolfenstein uses raycasting to find which parts of which walls to draw but Doom does it the other way around, by finding which walls it should draw and rendering them to the screen (presumably with simple projection maths? I'm not sure. Also not sure if you can call projection maths simple).
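Here's a bare-bones front-to-back BSP walk matching that description (structure and names are mine; Doom's real nodes also carry bounding boxes, child flags and subsector/seg indices):

typedef struct bsp_node {
    /* Partition line: a point (px,py) and a direction (dx,dy). */
    double px, py, dx, dy;
    struct bsp_node *front, *back;   /* NULL children mean this is a leaf */
    int subsector;                   /* convex group of walls, valid at leaves */
} bsp_node_t;

/* Which side of the partition line is the viewer on? */
static int on_front_side(const bsp_node_t *n, double vx, double vy)
{
    double cross = (vx - n->px) * n->dy - (vy - n->py) * n->dx;
    return cross <= 0.0;             /* the sign convention is arbitrary here */
}

void render_bsp(const bsp_node_t *n, double vx, double vy)
{
    if (n == NULL) return;
    if (n->front == NULL && n->back == NULL) {
        /* Leaf: render this convex subsector's walls, clipped against
           the per-column occlusion data built up so far. */
        /* render_subsector(n->subsector); */
        return;
    }
    if (on_front_side(n, vx, vy)) {
        render_bsp(n->front, vx, vy);   /* near side first... */
        render_bsp(n->back,  vx, vy);   /* ...then the far side */
    } else {
        render_bsp(n->back,  vx, vy);
        render_bsp(n->front, vx, vy);
    }
}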
@@LuaanTi I find it strange that you would make a distinction between human vision and cameras. They work in nearly the same way. Both have lenses on the front to focus light and sensors at the back to turn the light into electrical signals.
For great justice!
Okay, but is the Build engine a raycaster?
+Caleb Child I don't think so, but you could always go to Ken Silverman's website and ask him yourself. ;)
@@Pixelmusement Man, I remember playing Ken's Labyrinth. He got better!
I feel very stupid now... came to try and learn something... left with even more questions in my head... not that the video didn't explain well, but the complexity of the engine is way out of my league. Should have paid attention in the algorithm class...
+timothy chan This was more to answer the one question posed in the title, not explain everything. If you have more questions you should research the engine itself and learn more about it! ;)
Ugh, the jumpy movement causes nausea and headaches
The doom engine is a confusing mess, the source ports add more by throwing in code borrowed from the build and quake engines.
+Demo_the_man The irony being, the best-running code in the early days of gaming often looked the worst. EVERY ONE of the legendary coders of the 80s or 90s all look back at their old code and think, "Wow... how did this even work?!" ;D
Actually, the doom engine is not that confusing and is pretty well designed. It has clear abstraction layers and is organized well, which is why it had so many ports, and why people still port it to new devices to this day. The rendering algorithm (visplanes aside, perhaps), is conceptually very simple and quite elegant if you ask me. If you want a confusing mess, check out the Build source code ;)
I think Wolfenstein for the SNES was a BSP accelerated raycaster.
I would be very surprised if it didn't just do normal polygon projection like doom, since it's such a natural thing to do with the bsp tree and convex sectors like in doom. Raycasting with a bsp tree doesn't seem worthwhile at all, at least not for doom or wolf style levels. You lose all the benefits that front to back sorting gives you for occlusion.
@@Ehal256 I agree. I did some more research and found out the SNES port of Wolfenstein used a simplified version of the Doom engine that took advantage of Wolf’s simple geometry.
Why not just ask John Carmack?
how do lifts and fall damage work in doom though? If there is no z axis for characters then I understand how you can make walls appear at different heights and such, but I do not understand how lifts and fall damage would work without a 3rd value.
+Vasilis Papanikolaou There's a Z axis, it just has a lot of shortcuts in place. For instance, every sector is a 2D shape but you still have to set the height levels of the floor and ceiling, then the game is able to process that and determine how to render the sector. Entities also have a Z height, however, that height isn't checked in all calculations, which is why you can't jump over enemies or obstacles as the collision check between entities is ignoring the height and only checking X/Y coordinates and size, whereas height is absolutely checked when deciding if an entity can fit through an opening in the level geometry, or can step up from a lower sector into a higher sector. Lifts are simply adjusting the floor height of a sector, which is doable because the BSP tree which decides what to render and what not to only factors in the 2D level geometry and ignores all height values.
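A tiny sketch of those height checks in C (constants and field names are illustrative, not the engine's; the 24-unit step is Doom's well-known limit): an entity may cross from one sector into another only if it fits under the opening and the step up isn't too tall.

#define MAX_STEP_UP 24     /* Doom famously allows a 24-unit step */

typedef struct { int floor_h, ceiling_h; } sector_h_t;

int can_cross(const sector_h_t *from, const sector_h_t *to, int entity_height)
{
    /* Vertical gap between the higher floor and the lower ceiling. */
    int opening = (to->ceiling_h < from->ceiling_h ? to->ceiling_h : from->ceiling_h)
                - (to->floor_h   > from->floor_h   ? to->floor_h   : from->floor_h);
    if (opening < entity_height)                  return 0;  /* doesn't fit through */
    if (to->floor_h - from->floor_h > MAX_STEP_UP) return 0;  /* step is too high */
    return 1;
}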
Trees?
The method through which BSP is used is often referred to as a "tree" because, just like with a real tree, you start at a root and then branch off into other sections, which can then branch off into others, and others, and others, etc.
Wolfenstein 3D uses "raycasting" in some way... Furthermore, in that game levels are flat; there is no varying floor height. Walls are strictly vertical both in Wolfenstein 3D and Doom. In Wolfenstein 3D visibility checks are done with techniques that include ray casting; in Doom visibility checks are done with a mix of techniques including BSP trees.
Shoutout from Allegro
No, raytracing is NOT an evolution of raycasting; it is the opposite: raycasting is a dumbed-down version of raytracing.
+Chaotikmind I was just trying to simplify my description of what was going on, but if you REALLY want to split hairs, ray casting TECHNICALLY came first as the concept was initially introduced in 1968, but it wasn't actually called ray casting until 1982, about three years after traditional ray tracing became a thing in 1979, thus what I said is accurate and it's easy to see where the confusion can come into play as to which came first. :B
@@Pixelmusement Indeed, they were calling that raycasting in 1968, but it is still far closer to raytracing than, say, Wolfenstein's algorithm.
They surely called their algorithm (Wolfenstein's) raycasting too at the time since the method is similar, but in reality it should have been called something else! IMO.
Hall of mirrors!
What
1:58 _"sort the BSP tree data"_
What?!? No???
It sounds like English but I think it was a foreign language he was talking, so I understood nothing
Doom is not a 3D engine game
This is why it's frequently called "2 1/2D" since certain aspects ARE handled in 3D, like floor and ceiling height checks for moving between sectors, projectiles, and the rendering itself, but some aspects are handled in 2D, such as the actual map layout, hitscan weapons, activation checks for switches, and certain kinds of sprite-on-sprite collisions, notably the player with monsters.
I personally think it's closer to a 3D engine than it is to 2D. The rendering has some limitations so you can't really look up and down and it doesn't render arbitrary polygons, but entities are still processed in 3D and have gravity and the resulting visuals are definitely 3D (flattened to 2D for display on a screen, like every game).
I dunno why you guys have such a hard time with this topic. I don't know what it is anymore because everyone keeps redefining all the words but I can tell you Doom is not 3D by modern day standards because there is no up down dimension. Doom map is really 2D with a renderer that adds height to walls to make them pop up from the 2D flat map you're walking on.
+Wizardtroll Games Which is why a lot of people call it a 2 1/2 D game. Besides, the third dimension DOES play a factor in terms of whether you can cross a gap or get up a ledge; even the projectiles will check the height values and can fly over the heads of enemies or the player! Granted, hitscan weapons will not check the height of anything. It's a visually 3D game with a 2D map and a mix of 2D and 3D mechanics, so it's properly 2 1/2 D. ;)
You can dodge underneath a cacodemon's or imp's fireballs, and you can be crushed by ceilings.
If there isn’t any y coordinate (up/down dimension) then how can you have 2 adjacent sectors with different floor heights, like ledges or stairs? Or if you walk off of a ledge, the y dimension will cause you to fall instead of floating.
I would argue that it's 3D because there's a 3D world both in how it's rendered and how entities are positioned (including the player). Only the map is really 2D and even that has height information. Just because the world isn't made of arbitrary polygons doesn't make it not 3D. You see in three dimensions, you walk and jump in three dimensions, enemies shoot in three dimensions. You can't look up and down but that's just a limitation of how the 3D is rendered, not because it isn't a 3D game.
"Things" in Doom have a height property. The property is not part of the world. The world is 2d with all things on it having a height property. That doesn't make it 3D it makes it fake 3D or 2.5D
I don't care.
Ok zooner
Then why did you come here?
It's a "binary partitioning" game. This makes doom a 2D game
Err... no. BSP Trees are not 2D or 3D, they are merely a method by which the level information has been broken down for easier processing by the game engine. Doom is referred to as a 2 1/2 D (or 2.5D) game because the level data is 2-dimensional, while the gameplay itself is 3-dimensional.
BSP has absolutely nothing to do with it being a 3D or a 2D game. It's an optimization thing. It splits the environment based on what needs to be rendered, so the entire level isn't being rendered at once, which would bog down the machine.
togeskov.net/pics/XNAOctTree1.jpg
Source Engine games use BSP trees to store map data, I believe so does Unreal Engine. It's not something that determines the type of rendering done, it's something that helps you optimize your rendering.
This is all subjective analysis based on your concept of what "3D" is. Going by the logic presented in that video, you could argue that many 2D platformers are actually 1D games. Doom renders in 3D and allows the movement of objects in 3D. The levels are made in 2D and the logic of the gameplay suggests a 2D environment with height merely being an illusion as in many games rendered in 2D. So, does that make the game 2D or 3D? It all depends on your subjectivity and the smartest thing to do has always been to call Doom 2.5D or 2 1/2 D, because the arguments for calling it 2D or 3D are all right. :P
no, a bsp is only a manner of sorting out the environment. Actually, Doom's bsp is a limited form of bsp as all the splitting planes are vertical, while a regular bsp can have planes on any orientation, but it is just an acceleration structure