Basically, "Glide" was OpenGL optimized for 3dfx, and that's why it only works on 3dfx cards. It was so cool to see that logo when Glide was enabled.
This was really great! It's rare to see these old SDKs being programmed w/ on real hardware and I enjoyed the progression showing all these features. Really looking forward to the next part.
Thanks, I'm glad you enjoyed it.
I never considered Gouraud shading as something to handle close-up lights. Rather, it was always presented as a way of making faceted objects look round. There are telltale signs in NASA's spacecraft animations, where the edge of a cylinder still shows the low-poly effect in its silhouette.
Yes, I think that is right. The distinction is really between a light at infinity, which gives you flat shading (if I recall correctly), and a "closeup light" which gives you something more realistic, one of the models for which is Gouraud shading. "More realistic" here means that the amount of light reflected changes smoothly over the face of the polygon, which gives the rounder appearance you are talking about.
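To make that distinction concrete, here is a minimal C sketch (not from the video; the vertex data and helpers are made up) of the Lambert diffuse term evaluated per vertex: with a directional "infinite" light the term is identical at every vertex of a flat face, so Gouraud interpolation collapses to flat shading, while a nearby point light gives a different value at each vertex for Gouraud to smooth across the face.

```c
/* Per-vertex Lambert diffuse term. With a directional light, the light
 * direction (and hence N.L) is the same at every vertex of a flat face,
 * so Gouraud interpolation degenerates to flat shading. With a nearby
 * point light, the direction to the light differs per vertex, which is
 * what Gouraud interpolation then smooths across the face. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 normalize(vec3 v) {
    float len = sqrtf(dot(v, v));
    vec3 r = { v.x/len, v.y/len, v.z/len };
    return r;
}

/* Intensity at one vertex for a directional light (dir points toward the light). */
float intensity_directional(vec3 normal, vec3 light_dir)
{
    float d = dot(normal, normalize(light_dir));
    return d > 0.0f ? d : 0.0f;     /* same value for every vertex of a flat face */
}

/* Intensity at one vertex for a point light at light_pos. */
float intensity_point(vec3 vertex, vec3 normal, vec3 light_pos)
{
    vec3 to_light = normalize(sub(light_pos, vertex));
    float d = dot(normal, to_light); /* varies per vertex, even on a flat face */
    return d > 0.0f ? d : 0.0f;
}
```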
good to hear you pronounce Gouraud correctly! ❤
I have a Windows 98SE computer built around this same 3dfx Voodoo 1 card (the same card I had back in 1997), paired with an Nvidia MX400 and a SoundBlaster Live sound card; it has an AMD K6-2/500 CPU and 256MB of PC100 RAM. I have 42 games for this machine, many of them Glide games, such as NFS2SE, POD, and Wing Commander: Prophecy, to name a few.
@8:58 When you render a perspective projection scene it's not called "perspective correction" it's just "perspective projection". Perspective correction applies to texturemapping, because the simpler version of texturing is "affine texturemapping" and if you want your textures to not distort and warp over polygon surfaces as the camera moves through a scene then you use "perspective correct texturemapping". Just projecting polygons to a framebuffer with perspective applied is just "perspective projection", not "correction", because there's nothing incorrect about an orthogonal projection, it's just not perspective projection. For texturemapping using affine texturemapping results in "incorrect" texturing, thus "perspective correct" texturemapping.
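A small sketch of the difference described above, for a single span between two projected vertices. This is generic rasterizer code in C, not code from the video: affine texturing interpolates (s, t) linearly in screen space, while perspective-correct texturing interpolates (s/w, t/w, 1/w) and does a divide per pixel, which is presumably why Glide's vertex structure carries texture coordinates in s/w, t/w, 1/w form (sow/tow/oow), if I remember the SDK correctly.

```c
/* Toy illustration of affine vs perspective-correct texture interpolation
 * along one span between projected vertices a and b. */
typedef struct { float s, t, w; } TexVert;

/* t01 runs from 0 at vertex a to 1 at vertex b along the span. */
void sample_affine(TexVert a, TexVert b, float t01, float *s, float *t)
{
    *s = a.s + (b.s - a.s) * t01;   /* distorts as w changes along the span */
    *t = a.t + (b.t - a.t) * t01;
}

void sample_perspective(TexVert a, TexVert b, float t01, float *s, float *t)
{
    float sow = a.s / a.w + (b.s / b.w - a.s / a.w) * t01;    /* s/w  */
    float tow = a.t / a.w + (b.t / b.w - a.t / a.w) * t01;    /* t/w  */
    float oow = 1.0f / a.w + (1.0f / b.w - 1.0f / a.w) * t01; /* 1/w  */
    *s = sow / oow;                 /* per-pixel divide recovers true s, t */
    *t = tow / oow;
}
```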
This is cool! I don't think I've seen anyone utilize programming to actually demonstrate the individual rendering techniques before. Looking forward to the follow-up!
Thanks. It's a bit of work, but hopefully worth it.
It seems wrong to see polygons that actually move at 60fps with more than 4 colors in 640x480 on your channel :D
Ha ha! Yeah it took about a year less effort too!
Think of all of us who drooled and dreamed of the SGI and its 3D performance back in the days, not knowing that we already had some SGI legacy right inside our beige boxes
These rotating cubes are hypnotic. 😅
Your channel is fantastic. Your deep dives into the mechanics of these things are very insightful in demystifying how older hardware functions!
Thanks! I appreciate the feedback.
Glide was an absolute pleasure to work with, due to its simplicity (especially the first versions). If I remember correctly, the antialiasing features were rarely used by games, probably because the resolutions the card worked with were considered high enough that it wasn't worth the blending performance hit (or because everybody used CRTs, where edges are already somewhat blurry and antialiasing is not that important). In any case, their anti-aliasing method (using the line equation to calculate coverage of the triangle edges and add alpha blending just to those) was intriguing and very fast, and as far as I know no other card implemented something like it. Modern techniques like MSAA also focus on polygon edges, so in a way it was ahead of its time. Unfortunately, it had a few important drawbacks. Its reliance on alpha blending would introduce issues if the polygons are not drawn in order, for example, and at the end of the day it didn't look very accurate (probably due to gamma correction issues).
I certainly read something in the manual about alpha-blending caring about polygon order. I have forgotten how the anti-aliasing worked and so hadn't quite realised the same would apply there. I don't know how much of a performance hit it implies. At least as far as the card is concerned I don't know if it slows anything down. Of course sorting polygons with the CPU certainly involves a performance hit.
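For what it's worth, here is a rough sketch of how I read the coverage idea described above: evaluate the triangle edge's line equation at the pixel to get a signed distance, clamp it to a 0..1 coverage value, and use that as the alpha for blending the edge pixels. This is only an interpretation of the description, not the actual hardware algorithm.

```c
/* Rough sketch of edge anti-aliasing via the edge's line equation:
 * estimate how much of a pixel lies inside the triangle and use that
 * as an alpha value for blending the edge pixels. */
#include <math.h>

/* Signed distance from pixel centre (px, py) to the edge (x0,y0)-(x1,y1).
 * Positive means the pixel centre is on the inside of the edge. */
float edge_distance(float px, float py, float x0, float y0, float x1, float y1)
{
    float nx = y1 - y0, ny = x0 - x1;                  /* edge normal */
    float len = sqrtf(nx * nx + ny * ny);
    return ((px - x0) * nx + (py - y0) * ny) / len;    /* line equation / |n| */
}

/* Map distance to coverage: fully outside below -0.5, fully inside above +0.5. */
float edge_coverage(float dist)
{
    float c = dist + 0.5f;
    if (c < 0.0f) c = 0.0f;
    if (c > 1.0f) c = 1.0f;
    return c;   /* multiply into the pixel's alpha and blend over what's behind */
}
```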
I've played lots of 3DFX games and the jaggies were noticeable on our 17'' monitor. Especially if you consider that you had to use 640x480 full screen mode for most games and you couldn't use windowed mode due to the pass-through.
I've always considered that anti-aliasing was just too much performance overhead, especially when engines like Unreal and id Tech 2 hit the gaming scene. A lot of people still used Voodoo 1s. I vaguely remember playing Quake 3 Arena, and I certainly remember playing Unreal Tournament on it.
I did play one game which did use anti-aliasing, it was a simple shareware game where you controlled a ball on a bumpy landscape avoiding pits. Don't remember its name but I played it around 1999-2001.
The animated 3DFX intro you see in the video also used it, but not much else.
I had the 3dfx Diamond Voodoo 2 12MB PCI together with a Creative Nvidia Riva TNT2 Ultra 32MB AGP, a Pentium II 450MHz, and 128MB of RAM I think, on a 17" CRT. Now I've got nGlide running on my modern hardware to get those retro games running Glide + widescreen at 1440p; oh boy, those old games look really nice.
Absolutely loving this kind of content
We'll get back to this eventually. So many videos to make!
That's a blast from the past. I bought that exact card back in '97 or '98 to go with my Cyrix P166+ in order to play Tomb Raider. It went from choppy to buttery smooth and the uplift in quality of the visuals was impressive. I still have it in a box somewhere. I retired it when I bought a Nvidia TNT2 card when I upgraded to a Pentium II system.
I don't recall if I ever owned one myself, but I certainly used one as I ran a small computer shop and we sold a few of them. Naturally I had to test the hardware out before delivering to customers. 🙂
Nice video, would definitely be interested in more in-depth videos on this. We take a lot for granted these days with our OpenGL/DirectX/SDL APIs abstracting away the hardware.
Indeed. I've done some OpenGL programming (on the channel even), but it is quite a different beast.
Thanks mate. It's great to see you do something with the old hardware, unlike others that just play Doom or some other game and some benchmarks.
I see so many people put some great retro kit together, only to do nothing with it. Like a magpie with a shiny thing.
I'm a bit of an unusual collector that's for sure. But there are quite a few of us out there who actually use our machines to the max. It's much more common in the retro demo scene which I am a bit into.
Very nice video, would have been cool to also see a demo of some fog effects.
Fantastic work so far, really looking forward to the next part.
Thank you very much!
I would love to make a horror game for the voodoo card
This was a fantastic card when it came out. I was the first at my place to get one, and all my friends bought one when they saw the first games running with it (a racing game, "POD"(?), and later Quake).
I've recently rebuilt a retro PC with 2 x 3dfx Voodoo 2 cards and Windows 98 :)
That sounds like a nice machine!
Another great video Will!!
Nice video! I think it would be more interesting if for every step you also showed the relevant bit of code.
Perspective *projection*.... correction is a different thing.
Also for an infinite light source and flat faces there should be absolutely no difference between flat and gouraud shading. There should be a slight difference if the light is not infinite, depending on the relative angle between the light and each vertex normal. But if you really want to showcase gouraud shading, you need your faces to be approximating a smooth curved surface, and their vertex normals to be calculated based on that curvature. Ideally a torus or a sphere.
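A minimal sketch of what "vertex normals calculated based on that curvature" usually means in practice: accumulate the (unnormalised) face normals of every triangle sharing a vertex, then normalise the sum. The mesh layout (index triples) is just an assumption for illustration.

```c
/* Smooth vertex normals for Gouraud shading: average (and re-normalise)
 * the normals of all faces that share each vertex. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static vec3 cross(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

/* verts: nv vertices; tris: nt triangles as index triples; out: nv normals. */
void compute_vertex_normals(const vec3 *verts, int nv,
                            const int (*tris)[3], int nt, vec3 *out)
{
    for (int i = 0; i < nv; i++) { out[i].x = out[i].y = out[i].z = 0.0f; }

    for (int t = 0; t < nt; t++) {
        vec3 a = verts[tris[t][0]], b = verts[tris[t][1]], c = verts[tris[t][2]];
        vec3 fn = cross(sub(b, a), sub(c, a));   /* face normal (unnormalised) */
        for (int k = 0; k < 3; k++) {
            int v = tris[t][k];
            out[v].x += fn.x; out[v].y += fn.y; out[v].z += fn.z;
        }
    }
    for (int i = 0; i < nv; i++) {
        float len = sqrtf(out[i].x*out[i].x + out[i].y*out[i].y + out[i].z*out[i].z);
        if (len > 0.0f) { out[i].x /= len; out[i].y /= len; out[i].z /= len; }
    }
}
```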
A good example of Gouraud shading is Star Wars: TIE Fighter. It was the first game I know of that used Gouraud shading in its software renderer, in 1994. Star Wars: X-Wing, released a year earlier, used flat shading. Many 3D objects in these two games are identical, which is why you can easily see the difference between flat shading and Gouraud shading by comparing just these two DOS games.
But note this does not apply to the later high-resolution SVGA versions (Collector's CD-ROM Edition) of these two games, because the Collector's Edition uses the same enhanced engine for both games. That enhanced engine also uses texturing; the older engines do not.
really enjoying your videos. cool stuff!
I do recall the cel shading I did in the early 2000s; it was a nice effect for lighting.
Good stuff. I’d like to see a breakdown of what the hardware does vs the glide library vs the application. I’d also like to see a quick rundown of hardware registers, and what your code would look like if you bypassed glide altogether. But maybe that’s just me. 😁
I do try to describe the former a bit in the video. The register stuff would be way too much for a video. It's complex.
No worries. I just subscribed to the channel, and have started watching the previous videos about the voodoo card. Most people would probably find a video about hardware registers pretty boring. Or maybe not. You guys comment if you’d be interested.
But for me, I spent probably 2 years going through the hardware and software side of the Super FX chip from the SNES. Interestingly, it works a lot like modern pixel shaders, where the software is actually loaded to the hardware first, and that code is responsible for handling all of the matrix math and face culling.
Keep up the good work.
My big brother's old PC is lying in a closet; it has a Voodoo 2 card :D Looking forward to more videos like this.
All the voodoo stuff should work on a voodoo 2 as well. Good luck implementing something on it.
Wow. I learned a lot about graphics card programming in this one short video. Would love to hear more in the same way, where you iterate into more complicated examples. Maybe with code. Are there a lot of 3dfx programming tutorials on YouTube?
I might release the code I wrote, but it isn't particularly good code. Thanks for the comments on the video. Glad you've enjoyed it. I didn't look to see if there are 3Dfx tutorials on YouTube. Probably there is very little demand for it, so I doubt it very much.
Cool video. The 1990s were a very exciting time for computer hardware compared to today, IMO, where things are pretty homogeneous and boring. Luckily alternate ISAs like ARM64 and maybe even RISC-V are getting some mainstream exposure.
Please continue this series! For me, texturing on a Voodoo card would be a hot topic to watch. The Voodoo 1 and 2 can only handle textures up to 256x256 pixels (at 8 or 16 bit). The texture memory is not that big, so if the programmer decides to use textures that large, there can't be many differently textured shapes on the screen before the Voodoo runs out of memory.
The other interesting thing is the texture memory of the Voodoo 2. The card has 4MB of memory for each of its two TMU chips, but both TMUs have to hold the same texture in memory to work, which means that 4+4MB of texture memory is effectively only 4MB of texture memory.
I would like to see a test program where we hit the memory limit with the textures. Is it possible to check the memory usage with a function and show it on the screen? (See the sketch after this comment.)
The other thing I would like to see is this: what happens if we use the max size texture (256x256) and the camera moves close to and far from a surface where the texture is drawn? Does the picture stay pretty? When the object is far away in Z, it is probably a lot of work for the card to shrink the big texture.
The last thing I would like to see is the limitation of the Z buffer. In 1999, car racing games like NFS Porsche let you define the draw distance. During a race you could see that the trees far away in the background were only drawn in as your car advanced through the level. So the programmers had to decide beyond what distance objects are not drawn; there was fog or some black background where the objects appeared or disappeared.
Thank you in advance!
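On the question of checking texture memory usage: if I remember the Glide 2.x SDK correctly, it has functions for querying the usable TMU address range and the space a given texture would need, roughly as below. The exact function names, constants and GrTexInfo fields should be verified against glide.h before relying on this.

```c
/* Sketch, from memory of the Glide 2.x SDK, of reporting how much TMU
 * memory exists and how much a given texture would need. Check the exact
 * names and signatures against glide.h. */
#include <glide.h>
#include <stdio.h>

void report_texture_memory(GrChipID_t tmu, GrTexInfo *info)
{
    FxU32 min_addr = grTexMinAddress(tmu);   /* start of usable TMU memory */
    FxU32 max_addr = grTexMaxAddress(tmu);   /* end of usable TMU memory   */
    FxU32 needed   = grTexTextureMemRequired(GR_MIPMAPLEVELMASK_BOTH, info);

    printf("TMU %d: %lu bytes total, this texture needs %lu bytes\n",
           (int)tmu, (unsigned long)(max_addr - min_addr), (unsigned long)needed);
}
```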
Something else just came to mind: alpha blending. I remember that drawing a window in a room is not so easy. The window is an object we can see through. It is also nice if it tints the color of the background objects; a bluish tint was the common choice. Some reflection on it probably makes it more realistic.
The other thing would be a wireframe display. You say in the video that when the Voodoo card shows us a polygon such as a cube face, the sides of the cube are probably not drawn as rectangles but as 2 triangles. Is there any way to see the wireframe, i.e. how the Voodoo splits the rectangle into 2 triangles? (This is not that important, but it would be very informative.)
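One way to visualise the split yourself is to draw the edges of the two triangles with Glide's line primitive instead of (or on top of) the filled quad. This is a sketch from memory of the Glide 2.x API (grDrawLine and screen-space GrVertex coordinates), so treat the details as approximate.

```c
/* Visualise how a quad is split: draw the edges of the two triangles
 * with Glide's line primitive. Written from memory of the Glide 2.x API. */
#include <glide.h>

/* v[0..3] are the quad corners in screen coordinates, in order. */
void draw_quad_wireframe(const GrVertex v[4])
{
    /* Quad split into triangles (0,1,2) and (0,2,3). */
    grDrawLine(&v[0], &v[1]);
    grDrawLine(&v[1], &v[2]);
    grDrawLine(&v[2], &v[0]);   /* shared diagonal 0-2 */

    grDrawLine(&v[0], &v[2]);
    grDrawLine(&v[2], &v[3]);
    grDrawLine(&v[3], &v[0]);
}
```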
Thanks very much for all the interesting information and ideas for things to cover! I definitely intend to continue this series. Unfortunately I have a few other videos in the pipeline first, but it will eventually come. I'm really looking forward to getting back to it myself.
How can you even implement a Z-buffer yourself on 3D accelerated graphics? You just send commands to draw triangles to the video card and don't know where the video card draws them. Maybe you can read the drawn area back, but that would defeat the purpose of 3D acceleration. So implementing a z-buffer yourself would not just be a mess, it would be impossible :) or did I get something wrong?
If you had hardware acceleration you wouldn't implement the z-buffer yourself. It's really an integral part of the graphics rendering hardware.
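For reference, turning on the card's own depth buffering in Glide looks roughly like the sketch below, going from memory of the Glide 2.x API; the exact constants, and whether the per-vertex depth comes from z or 1/w, should be checked against the SDK manual.

```c
/* Sketch, from memory of the Glide 2.x API, of enabling the on-card depth
 * buffer instead of sorting triangles yourself. Verify the constants and
 * the per-vertex depth source (z vs 1/w) against the SDK manual. */
#include <glide.h>

void enable_hardware_depth_test(void)
{
    grDepthBufferMode(GR_DEPTHBUFFER_ZBUFFER);   /* use the on-card depth buffer */
    grDepthBufferFunction(GR_CMP_LESS);          /* nearer pixels win            */
    grDepthMask(FXTRUE);                         /* allow depth writes           */
    grBufferClear(0x000000, 0, GR_ZDEPTHVALUE_FARTHEST); /* clear colour + depth */
}
```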
Really easy going guide there, takes me back to when I tried to implement some very basic 3D models in something called Virtek 3Dware back in the day, must dig that out again one day 😋
Interesting. I've never heard of Virtek 3Dware. What hardware did it target?
@@PCRetroTech just CPU rendering, but it seemed pretty quick, well good on my old 486DX4. Was given away in magazines circa 95, even featured briefly on Bad Influence UK videogames TV show! I think it was only used on a couple of Big Red commercial games, but was fun as a teenager to tinker with even if most of it went over my head. You might find it interesting to compare with your own homebrew 3D routines.
@@mogwaay Sounds interesting for sure.
… and now we have Vulkan, D3D12 and Metal APIs. All brand new concepts. So many techs run in cycles …
Very motivational video which makes me want to start glide programming for my voodoo based retro pc. It would be awesome if IDEs like Unity would support publishing for glide.
Excellent! That is the hope with making these videos. Unfortunately, setting up the compiler and fixing the bugs in the Glide SDK build system is not trivial, but once you get past that it is all relatively straightforward.
In this case get a Linux based system like Debian 12. There you will find a current version of libglide3 and its libglide3-dev package directly available from the Debian packaging system. This should be a convenient system for programming in Glide. A CPU that supports the NX bit and PAE is recommended to run a modern Linux kernel. And a motherboard with a PCI or AGP slot is required for the Voodoo cards.
There is no point in Unity supporting Glide. The hardware that supports Glide is very old, the market is very small, and modern 3D effects are not possible with it.
Love Voodoo graphics cards - Still got my Voodoo Rush Lite, Voodoo II and Voodoo III
Nice. Interesting to know you've done this from DOS. Can you explain a little bit more about how you set up your build environment? Also, could it be done directly from Windows? What's the performance penalty of doing so?
I don't know for sure if it can be done from Windows. One can use Watcom on Windows, but I don't have any specific information on how to make the Glide SDK work there (though it is technically supposed to be possible).
Take a look at my previous video for some hints on how I set everything up.
Is the Voodoo implementation in PCem close enough to the real hardware?
It should be good unless you want to do something undocumented.
10:03 that cube looks like the Roblox logo
A side-by-side comparison of the slowly rotating cube with flat shading vs Gouraud shading would've been nice.
Cubes are supposed to have flat faces so there wouldn't be a lot of visible difference.
The best would be to use a curved surface that is represented on screen as a mesh of triangles.
e.g. using an on-screen Icosahedron to approximate a virtual sphere.
Or a low-poly model of a torus.
- Flat shading: each triangle is evenly lit. You can actually see the individual triangles.
- Gouraud shading: each *vertex* (tip of the triangle) is individually lit; inside the triangle the shading transitions smoothly from one tip to the other, so the triangles aren't visible and the object looks kind of roundish.
And the difference between Gouraud and Phong would also be visible on such an object:
- Gouraud: if a tiny detail (e.g. a specular highlight, or the effect of a light very close to the surface) "falls between" vertices, it will be missing, as light is only calculated at the tips. (You won't get a specular highlight dot on a very coarse low-poly sphere or torus. Games usually cheated by using textures instead.)
- Phong: lighting is computed for every single pixel, so tiny details will appear, e.g. a tiny specular highlight will show up on just the few pixels in the middle of a triangle.
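A compact way to see the difference in the list above is where the lighting equation runs: Gouraud evaluates it at the vertices and interpolates the resulting intensities, while Phong interpolates the normal and evaluates it at every pixel. The toy C sketch below (simple Lambert term, made-up helpers, nothing Voodoo-specific) shows why a highlight that peaks between vertices is lost under Gouraud but survives under Phong.

```c
/* Where the lighting equation runs: per vertex (Gouraud) vs per pixel (Phong). */
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3 lerp3(vec3 a, vec3 b, float t) {
    vec3 r = { a.x + (b.x-a.x)*t, a.y + (b.y-a.y)*t, a.z + (b.z-a.z)*t };
    return r;
}
static float lambert(vec3 n, vec3 l) {      /* toy lighting equation */
    float len = sqrtf(n.x*n.x + n.y*n.y + n.z*n.z);
    float d = (n.x*l.x + n.y*l.y + n.z*l.z) / len;
    return d > 0.0f ? d : 0.0f;
}

/* Gouraud: evaluate lighting at the two vertices, then interpolate the result. */
float gouraud_pixel(vec3 na, vec3 nb, vec3 light, float t)
{
    float ia = lambert(na, light);
    float ib = lambert(nb, light);
    return ia + (ib - ia) * t;   /* a peak between the vertices is lost */
}

/* Phong: interpolate the normal, evaluate lighting at every pixel. */
float phong_pixel(vec3 na, vec3 nb, vec3 light, float t)
{
    return lambert(lerp3(na, nb, t), light);
}
```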
You can get a nice comparison by just comparing the first versions of the DOS games Star Wars: X-Wing and Star Wars: TIE Fighter. Both use only software rendering and most 3D objects are the same, but TIE Fighter uses Gouraud shading while the older X-Wing only uses flat shading.
This does not apply to the later SVGA versions; those are based on an enhanced version of the engine used in TIE Fighter, the enhancements being higher resolution and texturing.
More.. Please
Coming soon I hope!
The CPU is what's actually drawing the triangles on the original Voodoos.
Again, a really good video!
So basically the geometry and animation are still done on the CPU, and Glide breaks it all up into triangles for the card to render. So the card doesn't really do all that much, but offloading as much as it does is still better than doing it all on the CPU, so you'd still be able to get better graphics?
In the past, geometry wasn't particularly complex. The big workload was texturing and simplified lighting, as well as adding effects like bump mapping, texture filtering, etc.
I kind of think of the 3dfx card as a sprite engine for the PC that just happens to be able to sort the pixels by depth.
Pretty much all the transform is done on the CPU. That turned out to be a problem later, when the 3dfx cards became faster than the CPUs of the day.
Yeah I tend to think of it as more than a sprite engine. They referred to it as a rendering engine I think. The Gouraud shading feature is pretty nice, as is the texture mapping and fog that we haven't seen yet.
@@PCRetroTech What it also does, and I would say the reason why it was popular, is that it interpolates texture pixels.
Prior to it, all 3D cards looked like a pixelated mess.
I have this one)) Maybe it's even in working condition)
It's definitely a nice card.
Bilinear filtering, and the TMU / texture mapping unit.
Thanks for showing this, I always wondered how far off Glide was from fixed-function GL.
Do you have to upload the matrix for each object? Is the coordinate data upload closer to the immediate mode API of GL 1?
The matrix is kept on the PC side. No rotations get done on the card itself. The coordinates are just specified as an array to one of the Glide functions.
@@PCRetroTech Right, I know the matrix math is on the PC side at this point in time, but I was curious whether you had to load the matrix, i.e. glLoadMatrix.
Surprised that buffers could be sent; I'd have to double check, but I didn't think early OpenGL had that.
@@Pickle136 No, there's no matrix sent to the card. It doesn't have any operations that would need it. The "buffers" here are really very short arrays. The Glide library breaks everything up into triangles, so that's the only thing sent to the card.
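To make the "matrix stays on the PC" point concrete, the per-vertex work looks roughly like the sketch below: the CPU applies the rotation matrix and perspective projection itself, fills in screen-space Glide vertices, and only then calls grDrawTriangle. The GrVertex field names (x, y, oow, r, g, b) are from memory of the Glide 2.x headers, and the projection constants are made up for illustration.

```c
/* Sketch of CPU-side transform feeding Glide: rotate and project on the CPU,
 * then hand screen-space vertices to the card. Field names from memory of
 * the Glide 2.x headers; projection constants are made up. */
#include <glide.h>

typedef struct { float x, y, z; } vec3;

/* Apply a 3x3 rotation matrix (row-major) on the CPU. */
static vec3 rotate(const float m[3][3], vec3 v)
{
    vec3 r = {
        m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
        m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
        m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z
    };
    return r;
}

void draw_triangle(const float rot[3][3], const vec3 obj[3])
{
    GrVertex v[3];
    for (int i = 0; i < 3; i++) {
        vec3 p = rotate(rot, obj[i]);
        p.z += 5.0f;                              /* push in front of the camera */

        v[i].x   = 320.0f + 200.0f * p.x / p.z;   /* perspective projection */
        v[i].y   = 240.0f + 200.0f * p.y / p.z;   /* to 640x480 screen space */
        v[i].oow = 1.0f / p.z;                    /* 1/w for the card */
        v[i].r = v[i].g = v[i].b = 255.0f;        /* flat white for the sketch */
    }
    grDrawTriangle(&v[0], &v[1], &v[2]);          /* Glide only ever sees triangles */
}
```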
What's the deal with the 3Dfx splash screens?
It's part of the dev software he was using.
The 3dfx splash screen was part of the Glide API and is marketing related. Every game that used Glide had this splash screen.
Really interesting.
I would like to see a similar analysis of the "Apple Quickdraw 3D Accelerator Card" from 1995. The QD3D accelerator was sold from Nov. 1995 onwards, and several cards could be installed to achieve more performance. A single QD3D accelerator is about 1/3 to 1/2 as fast as the same computer with a 3DFX Voodoo. The QD3D acc. has only 512K of texture memory (12 textures per scene max.) and cannot do any effects (fog) or per-pixel transparency (textures with alpha). It can do per-vertex transparency, 32-bit color, Phong shading, up to 8Kx8K viewports and trilinear texture filtering. It can also do some vertex-based boolean operations called "CSG" (constructive solid geometry). The history of the Apple 3D chipset is pretty interesting and goes back to 1988.
Wow, that card sounds like a beast. Apple sounds like it was way ahead of what was happening on the PC!
@@PCRetroTech Apple had an early start and completely blew it. They switched to OpenGL in 1998, ditching the QuickDraw 3D API completely. It lasted from 1995-1999, with the last major updates in 1997. The technology behind the QD3D chipset is very academically driven. It has more in common with raytracing than classic 3D accelerators. The hardware is built to have enough in-chip memory to render some contiguous pixels (16 in this case) of a scanline. It was designed as a single rasterizer pipeline to allow the use of multiple pipelines (chips) in a 3D accelerator. The group behind the QD3D accelerator had a very powerful (20x faster) successor in development, but the whole 3D development branch was laid off when Steve Jobs came back to Apple. The successor would have used a continuation of the technology but adapted to a W-buffer pipeline (the release of that accelerator was meant for late 1997).

The QD3D accelerator was more like a proof of concept than a finished product for the mass market. Especially the texturing component seems more like an afterthought than an integral part of the technology. I think the development was massively delayed and they had to add key features just before the release or else the whole project would have been stopped. This was most likely because of the more academic approach (disregard for development timeframes) and poor internal management rather than problems with the technology itself. I suspect that the original chipset was meant for either "NuBus" or more likely for the "PPC-PDS Bus" and had to be adapted to PCI (which explains the second ASIC on the QD3D accelerator, most likely an adapted Texas Instruments intelligent PCI bridge). The base clock of the QD3D accelerator is 40MHz, the same as the bus clock of the early NuBus PPC Macs. Some of the Apple 3D developers made their way to 3Dfx, Nvidia and SGI.

The most similar technology that led to a 3D accelerator was the PowerVR chip. They used a very similar technology (tiles) for reducing the scene complexity to allow the usage of on-chip cache. Both the PowerVR and Apple 3D technology had no real Z-buffer, because they sorted the triangles by z-index from front to back in some kind of 'bucket' and streamed the resulting list to the 3D accelerator. The accelerator would stop requesting triangles for shading when opacity reached 100%. This results in a very optimized approach to z-buffering, since no external buffer is needed for each triangle edge and the rendering is stopped even if invisible fringes are left behind. This is the often featured 'hidden surface removal' technology, which is simply part of the rendering process. The QD3D accelerator had 12KB of internal cache/attribute memory for the shading process. So in total: 512KB texture memory, 128KB object memory and 12KB of internal cache. I have measured the pins of the chips of the Apple QD3D accelerator and found that it had originally supported 18-bit textures (6x6x6) and was reduced to 16-bit (5x6x5) to use cheaper memory. I also suspect that the card can actually address 2MB of texture memory but is limited by software (hardcoded) to only use 512KB. With a clock of 40MHz, it takes (best case) about 4 cycles to generate one trilinear filtered textured pixel and 2-3 cycles to generate an untextured pixel. The polygon engine can calculate 20MLayer/s and could therefore realize a throughput of 20MPix/s with a flat scene.
The design is a multi-cycle approach where up to 7 layers (2 cycles/layer) of polygons can be processed in the pipeline in a linear fashion. The hardware can theoretically calculate an infinite amount of layers at the cost of additional cycles but it is most likely limited by the driver software.
_In regards to history_: The 3D team at Apple had an early version of the QD3D accelerator, without texturing, working at 880 KPoly/s in 1992/3, compared to the 120 KPoly/s of the released card. A Voodoo 1 would usually do between 300-550 KPoly/s depending on the computer. The early Apple QD3D accelerator was most likely a stand-alone version with its own frame buffer and 4 pipelines. The original (textureless) chipset most likely had a maximum throughput of around 10-20MPix/s per pipeline (40-80MPix/s in total), which allowed 1280x960 screens to be rendered at 30Hz. This was not possible with the simple SRAM approach to texturing; it would have taken either a 64-bit memory bus or 80MHz access speed for that to work. The costs would have been too much for a $399 accelerator card in 1994/5.
Most of my information comes from scientific papers from Apple (SIGGRAPH 1987-1997) and Apple patents (1988-1998). There is a lot of information to find there and it goes very deep to the point of information overload.
@@john_ace Wow, thank you very much for a fascinating read and for your references at the end. I focus mainly on PC hardware, but it sure is interesting to examine the environment into which the PC hardware emerged. That was truly a different beast to the Voodoo. But I hadn't realised how different the PowerVR also was. Apparently some NVIDIA products are believed to have used some of the tile-based IP.
Commodore was working on the 64-bit Hombre architecture in 1988, and it would have blown everything away if only the company hadn't been destroyed from the inside.
It would have been out by 1990, but instead we got AGA due to stupid higher-ups killing R&D.
Dave Haynie has talked a lot about the great stuff we could have had, and we could all be using something different today.
I think glide was instrumental to 3dfx's success.
Because it doesn't do much, you could plug it into the usual 3D DOS engines people had around at that time without too many problems, or take the usual 3D-engine txt-file guides floating around at the time and simply replace the trifiller portion with Glide, or swap the linefiller portion for the trifiller. I'm simplifying a bit, but that was a lot easier than adapting an existing 3D engine to some other early cards like the NV1 - and devs didn't need to use the API's 3D calculations, so they could keep all the tricks and shortcuts they had for that portion of their games.
3dfx was also good at providing the tools for everyone to develop Glide programs without fuss - and devs didn't need to port their stuff to Windows just to support it.
Have you thought about looking into software Quake's "s-buffer"? I never quite understood it from the txt files at the time (not 3dfx related).
I suspect you are right about Glide. I haven't looked into the "s-buffer". That sounds intriguing.
@@PCRetroTech Re the s-buffer: it segmented the buffer somehow so it didn't need to check z for every pixel (AFAIK the walls didn't need the check, just objects, due to the BSP).
Did you find out whether Glide does the triangle sides itself on the CPU or in hardware if you use the triangle or quad routines, instead of the horizontal-line routine in your own triangle drawing code?
@@lasskinn474 I vaguely remember something in the Glide manual about "cockpit overlays" that had some interesting interaction with the z-buffer. I think it made it so you didn't have to redraw a cockpit. Only the objects behind it had to be redrawn. I'm not sure if this is at all related to s-buffering though.
I don't know for sure if Glide does the triangle edges or not, but I suspect that is done in hardware. Of course I only supplied vertices of the triangles, nothing more.
I'm not sure what you mean with respect to horizontal lines. Certainly quads are broken into triangles by Glide.
@@PCRetroTech Hmm, or vertical, I may have mixed up the term: the left-to-right lines between the edges of the triangle. That's the slow part, especially in perspective-corrected texture mapping.
The s-buffer in the Quake software renderer didn't use a per-pixel buffer but something like segments, avoiding a check per pixel - something like that, but I never quite grasped how it worked from that txt doc, and once I got a Voodoo it became irrelevant to me at the time.
What causes the motion blur effect?
That's probably just an interaction of the camera with the monitor. I could have made the rotation steps smaller which might have made that less obvious, but it would also have looked like it was rotating slower.
@@PCRetroTech I thought it might be something like that, but you did such a good job filming the screen that I couldn't be sure it wasn't a direct capture.
The Voodoo 5 (VSA-100) offered a motion blur effect with its FSAA capabilities. Personally I never used it; motion blur is the first feature I turn off in modern games.
this was really nice vid!
Thanks I appreciate that.
Can this card ray trace?
If you add an RTX 2080 🙂
No.
whoa!
If you are willing to make the video, I would 100% want to see the more involved programming code for it. You underestimate what your audience wants. Yeah, I know it's a lot of work to do that. I'm not going to hold you by the throat for it. I get it.
Unfortunately the demand is just not there for that sort of content on YouTube, and such things take months of work to produce. I'd do it if I were a full-time YouTuber supported somehow to do that sort of thing, but that is actually not possible for at least the next three years.
If there was ever anything I'd want to see from Voodoo devs, it would be a wrapper for bump mapping, because the Voodoo could do it, but it wasn't a standard. So you ended up with a Voodoo 5 playing Doom 3 without bump mapping when even a Voodoo 2 could run the donut demo. Also, there was that Blade of Darkness game that ran real-time shadows on anything. Doom 3 never required DX8; John Carmack supported the GeForce 2. I think to truly get closure on 3dfx, someone needs to get Doom 3 running on par with a GeForce 2. Until then, the Voodoo is left incomplete. It's a real disappointment that all Voodoo development stopped at Windows XP 64. We need signed drivers, and somebody needs to compile MesaFX, because there's an update sitting there uncompiled. The existing code can bypass the 256 texture limit and run Quake 3 in high detail on a Voodoo 2, but not on Windows 10, and we never got bump mapping. Also, Aureal got a worse deal; full driver support stopped at 9x.
I thought it came out in 1996
Voodoo Graphics did, and I can find Diamond Monster 3D cards that have 1996 stamped on them. But this particular card that I was using actually has a date of 1997. It might be a later revision of the card actually, which is interesting.
@@PCRetroTech IIRC they did sell them for a few years, overlapping well into the Voodoo 2 era as a cheaper card.
The Voodoo 1 was sold from October 1996 onwards for consumer PCs. It was available in volume in Nov/Dec of 1996, right in time for Christmas. A version of the card was available a bit earlier (mid 1996) for professional users (Quantum 3D Obsidian) and for arcade machines. The Voodoo 2 was released at the beginning of 1998.
@@john_ace The Obsidian was sold in regular shops a little bit earlier than the Monster 3D where I lived.