Great post.
It was a tedious task at the time. MemMaker had the /L option, which allowed you to manually assign individual memory areas in the UMBs. Unfortunately, this was always trial and error, as some TSR programs or drivers require additional RAM during the initialization phase. What I still remember is that EMM or HIMEM weren't tied to the correct DOS manufacturer's version, so you ended up mixing MS-DOS, DR-DOS and PC-DOS. Then there were keyboard drivers for config.sys. One was from c't, around 400 or 500 bytes in size, with code page 437.
*Something else about me*
I come from the Apollo generation: amateur radio and electronics tinkering. Back in the 70s, our booking machine had BASIC and a line display, and I was already working on it at night in program mode 😀 testing the limits. About 15 years ago I switched to Linux. I first came into contact with Unix professionally in 1985/86. Today I am retired and am pleased that interest in antiques is increasing. 😅
Greetings from DE to CH.
Fun fact: there were MS-DOS machines that were incompatible with the IBM PC. Some didn't even try at any level of compatibility. The one that comes to mind first for me is the NEC PC-98 series, which was basically the "default computer" in Japan before the Windows 9x era (and it even got Japan-exclusive ports of Windows all the way up to Windows 2000 - but that's another story). So I wondered how things worked there - but well, it had a very similar memory map to the IBM PC, with 640K being the conventional memory barrier. The Japanese MS-DOS 6.2 for the NEC PC-98 even includes MemMaker!
As I don't have any of these early not-fully-compatible machines, it's hard to tell how it would behave.
Maybe someone with one of these early PC clones is around and feels like giving it a shot? I would be curious how it goes!
Such as the MS-DOS-based Tandy 2000, which, with its 8 MHz 80186 and much-enhanced graphics, was in theory a significant advancement on the PC, as it jettisoned most of the IBM PC 5150's cost-cutting decisions.
Despite being faster and having better graphics than the later-released IBM AT (6 MHz 286), it failed because it was only ~99% compatible with the PC. That 1% was a deal killer if your initials were not IBM.
I remember on my 8086 machine I was able to push to around 768K of conventional RAM, but never made it quite this high. I'm impressed! The 768K was achieved with 128K in DIP chips, 512K in SIMMs (2x 256K), and another 128K on an ISA card mapped between ROMs.
Fun times, and I kind of miss those days to be honest.
Technology has progressed significantly. Nice finds, thanks for sharing the compiling experience. 🖥️
@@8randomprettysecret8 my pleasure!
Yet another amazing video. Unearthing some really cool concepts that I certainly haven't heard about. I look forward to your networking videos!!
@@RetroTechChris yeah, I have a full backlog of topics for the next 6 months!
Wooo glad to hear it!
You put a lot of effort into these two videos and they have been VERY educational. Thank you!
@@josephphillips9243 my pleasure!
Years ago, when DOS was still a big thing, I played around with a few utilities. I never had much luck with them; programs would lock up and such. Thank you for the better understanding of using different versions of DOS. What I do remember is things like DOS 6.22 and, after playing around with it using DOS=HIGH,UMB and such, I was able to get 618-623K of base memory free. It was my stubbornness to beat the horrible MemMaker program.
In the 80s and 90s German technical magazines were gold. Not only for computers, but also for electronics.
I remember that in '87 we got a Commodore PC-10 and I stupidly thought that Commodore 64 assembly would also work on the PC-10. That was not the case.
In fact, I had to learn programming in (Turbo) Pascal, since QBasic was worthless (and I didn't have access to full QuickBASIC or GW-BASIC) and the C64 tricks did not work. Later I moved on to Delphi and a lot of other programming languages.
Thanks for bringing back all these wonderful memories from my childhood!
Really good video again. It was quite interesting to see the program written out in a magazine for its subscribers to type in and build themselves. I remember those types of magazines; they usually had some really good stuff in them, like this one. Yes, I remember using DOS memory management tools back in the late 80s/early 90s. I can't wait for the next one. Your videos are essential Sunday viewing.
Back in the late 80s (or maybe it was the early 90s? I don't remember exactly) I spent a number of hours typing a few pages of code from a magazine like this and assembling it. It was a demo of how to switch a 286 CPU into protected mode, have it perform a task there (print a message to the screen) and then re-enter real mode by doing a soft reset. It was very satisfying to see it run correctly after all that typing, assembling and debugging some typos. :P
Betrayal at Krondor. This was the game that was hardest for me to play on real hardware. From my first encounter with the game (a 486DX2@66 with 48M RAM back then) until 2-3 years ago (some Atom netbook with 1.5G RAM), its need for 621K of free RAM plus expanded memory made it very hard to play, that is, if you wanted SmartDrive and sound besides the mouse and the EMM memory manager. If you also wanted the CD-ROM to play music, add more KB to fill up the memory. Luckily, in the 486 era, I used MemMaker under DOS 6.22 and also DOS 7 (Win95 DOS mode) to arrange TSRs and drivers in memory, sorting them and choosing the slimmest. I remember having to choose between MS Mouse driver 8, 9 or 11, or AMOUSE (an accelerated mouse driver of some sort); also, the CD drivers were from OAK and from an LG CD-ROM, the LG one being far smaller. So yes, I used MemMaker a lot, each time after formatting the computer and installing MS-DOS. I once used a PC Tools utility, but I remember it was shareware or a demo and I did not like it.
Anything by Dynamix was RAM hungry. Aces of the Pacific was just as bad. Oddly enough, I know enough about upper memory now that I can have all the things loaded and still run those programs. No boot disk needed.
@@ratix98 I learned memory management back then just because I did not like boot disks (I got my computer infected with boot-sector viruses many times without knowing). I remember reading the help in MS-DOS 6.22 and writing the examples down on paper to try them in config.sys and autoexec.bat and test various configurations. Actually, I have more trouble now maxing out the memory on modern-ish hardware (anything with more than 512M of RAM) than I had with the original 486/Pentium machines.
Literally one of the best games ever made. When you look at it, especially in the context of the industry at the time, so many design aspects were brilliant. Like the Easter eggs, ffs, holy crap, so much to discover on the map.
@glytchd My favorite is going in the complete opposite direction to get to Krondor in chapter 1. I remember they even had dialogue for that specific scenario.
I remember fiddling around with freeing up conventional memory to run some DOS games on my first computer in 1999 :) (it was a Cyrix M-II PR300 thing with 32MB of RAM, 1MB Cirrus Logic graphics and a Crystal CX4235 sound card, so basically nothing to write home about, but it was my first PC) As a 10 year old, I didn't really know what exactly I was doing, but somehow I always got the game to run.
A couple of years later, when my first retro desires hit and I got a 486 from a local school for free, I was much more knowledgeable about what to do, and why, to free up enough conventional memory. Usually, MemMaker and some tweaking afterwards was more than enough.
I love anything Novell or Banyan, as that was the beginning of my career as a teenager, before very quickly moving to NT for everything, so I can't wait for next week's vid.
Even though decades have passed I still have a flame for those amazing pieces of software from the time.
Thanks for always providing a comfy retreat for my weary mind.
@@DankUser you‘re welcome!
When you were showing 800KB conventional memory (the max I saw?) - what address was DOS writing to for the screen? Isn’t B800 necessary for text output?
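(For context on why I'm asking: in a color text mode everything DOS prints lands in the buffer at segment B800, so that range normally has to stay reserved for the video card. A tiny sketch of that, Turbo C style, real-mode DOS and a color text mode assumed:)

```c
/* Minimal sketch (Borland/Turbo C, real-mode DOS, color text mode assumed):
   text output lives in the B800:0000 buffer, one 2-byte cell per character. */
#include <dos.h>

int main(void)
{
    unsigned int far *screen = (unsigned int far *) MK_FP(0xB800, 0x0000);

    screen[0] = 0x1F00 | 'H';   /* top-left cell: 'H', white on blue          */
    screen[1] = 0x1F00 | 'i';   /* low byte = character, high byte = attribute */
    return 0;
}
```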
I remember a couple of tricks that I was never able to suss out years later. I swear there was an EMS driver that would page out to the hard drive instead of real memory, for those who just needed a few pages of EMS; my HP 200LX would benefit from this, as reading memory and writing to disk was nearly the same speed. It was still technically slower than real EMS because of copying data around...
Another one was loading drivers into an EMS device. I think this only worked with REAL hardware, not a virtual setup, but I'm convinced I remember some drivers used an EMS card page for keeping themselves resident: when an interrupt came in, a conventionally resident stub would tell the EMS card to switch banks, jump into that bank, service the interrupt, then jump back out and revert the page change. Now, I think this was specific drivers, not some universal loader for any driver.
Anyway, these are all foggy and possibly confused memories, but I'd brought them up a few times and my group of retro nerds couldn't figure them out; figured they sounded interesting.
@@prozacgodretro right, the one for the 200LX is certainly correct.
I own one and had played around with that EMS driver as well.
Great video. In my high school we had a competition over who could make the most base memory available, but I was sure you could not pass 640 KB. Until today.
WAHOO! Good Morning from Detroit
Pushin' them DOS Limits!
There was/is a tool called USE!UMBS for the PC/XT, to use 1MB of RAM. However, it required a custom GAL as a memory mapper.
Using edlin like a boss!
Yeah, way faster for a quick edit, especially when working from floppies, even emulated ones.
But still, the most archaic editor ever.
Great video.
We need a shirt with Mr. Know-it-All that says “What about Windows Me?” lol
Used to do this in IBM DOS 7.0 back in the '90s.
QEMM has two important limitations to consider (which still apply to its latest version, QEMM 97): a maximum of 256 MB of RAM, and the fact that it does not allow DOS to halt the CPU, which is extremely important both for power management and for slow-down utilities (used mostly for retro gaming and backwards application compatibility).
I'm looking forward to next Sunday, because LANtastic is fantastic. :D And next Sunday is the Sunday after the big CSD parade here.
CSD = Christopher Street Day; the parade is on Saturday, August 17, 2024.
I hope that the memory management issues of the public transport are fixed by that day. 😄
I hope that we don't need to be protected by the police against some neo-Nazis who want to interfere with the parade. This month is the election of destiny; the final date is September 1, 2024.
The more I dive into retro stuff, the more I understand that overall crazy IBM PC design.
@@AncapDude
It's a child of its time. They would surely do it better today.
IMO it's even crazier how many of these concepts are still present today!
@@THEPHINTAGECOLLECTOR Backward compatibility is what made this concept successful.
The real answer to your problems is that, despite when it was released, nobody had an 8088/86 with EGA graphics, as EGA only became popular for about a year during the transition from 286 to 386. But many later cheap "turbo" 8086 clones came with 1 MB of memory (a pair of 512Kbx8 chips) and implemented shadow RAM, which copied slow 8-bit ROMs to faster 16-bit RAM.
In your VM case, if you had chosen a CGA card, expanded your memory the easy way by excluding B800-BFFF, and run one of the many utilities from back in the day that reset the total memory count (much like your German CONFRAM) followed by a soft reboot, you would get an extra 96K of conventional memory, and that's before anyone does the loadhigh part (loading into UMBs also existed for these 8088/86 boxes, as shown by your beyond).
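Those totalmem utilities were tiny; roughly this (a minimal Turbo C style sketch from memory, not any specific tool; it only helps if RAM really exists and is decoded up to the new limit, e.g. a CGA system with A000-B7FF backed by RAM):

```c
/* Sketch: the conventional memory size DOS believes in comes from the BIOS
   word at 0040:0013 (what INT 12h returns). Bump it, then restart DOS so it
   re-sizes its memory arena. Crude, and assumes the RAM is really there. */
#include <stdio.h>
#include <dos.h>

int main(void)
{
    unsigned int far *memsize = (unsigned int far *) MK_FP(0x0040, 0x0013);

    printf("BIOS reports %u KB of conventional memory\n", *memsize);

    *memsize = 736;       /* claim 640K + the 96K at A000-B7FF              */
    geninterrupt(0x19);   /* re-run the bootstrap loader (no POST), so DOS
                             boots again and re-sizes memory via INT 12h    */
    return 0;             /* never reached */
}
```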
How can the system BIOS and the VGA BIOS be discarded from upper memory?
The only solution coming to my mind is to handle all interrupts in protected mode and run DOS apps in V86 mode. But that would require reimplementing the API of both BIOSes.
There is also the option to call the BIOS API in a separate V86 memory block, but in that case far pointers into upper memory from the DOS app can't be accessed in the BIOS.
Another problem is a TSR hooking the interrupt vector table and calling the original address directly.
FREEDOS
@@vincentfernandez7328 yeah, that‘s missing, apparently (-:
3:54 as someone who doesn't know how to program, I didn't expect to be able to find the error 🤣✨
Goddamn! My data plan is depleted, so I have to watch that video @home, but for this I have to fight against the delays by the Deutsche Bahn. Thank you for traveling with Deutsche Bahn. Even when all toilets in the train are broken. 🤬
By the way, the WiFi in the city train between Halle and Leipzig is garbage, because the transfer rate bounces between 1 and 10 kilobytes per second with 1 second of latency. It's a pain here in Germany.
@@msdosm4nfred
I've had my share of Deutsche Bahn as well. Safe travels!
@@THEPHINTAGECOLLECTOR Public transport here in Leipzig was terrible during the European football championship. The entire world is laughing at Germany. 😮💨
LOL, this is going to be a good video, but I'm intrigued by "ST" and "Amiga" on that old c't magazine @ 0:54 :)
@jrherita Those were the days 🙂
Don't tell anyone, but I will do something about an Amiga 500 towards the end of the year, or maybe early 2025 ;-)
@@THEPHINTAGECOLLECTOR Awesome!! Amiggahhhh :)
Most times I boot MS-DOS without EMS memory on a 80386+, because I like to switch into the undocumented 16-bit "big" real mode, with the A20 address line on and a 4 GB segment limit for the DS, ES, FS and GS segments and 64 KB for the CS and SS segments.
In this mode we can directly write into the linear framebuffer (located in the 4th GB) of VBE 2 and VBE 3 graphics modes, for higher resolutions like 1920x1200x32 (16:10 aspect ratio) on a 28" LCD using a Radeon 9750 PCIe card.
Or, with an analog monitor (19" with 96 kHz capability), refresh-rate-controlled video modes like 800x600x32 at 140 Hz, 1024x768x32 at 100 Hz, or 1280x1024x32 at 85 Hz, together with VBE 3 hardware triple buffering for flicker-free movement of very large objects across the screen in MS-DOS.
On a PC with UDMA the HDDs are very fast, so with MS-DOS we can easily load 64 KB at a time into main memory, multiple times over. I never had a problem with not enough memory using my own DOS applications. If Windows can use a page file, then the HDD is fast enough for MS-DOS to load new files between levels while a DOS game is running.
Regarding Memory Commander - could the disk image be assuming a nonstandard disk layout? I know that Microsoft did this with their Win3.11 3.5” floppies.
There are videos on this site on nonstandard disks, I can’t remember who at the moment. Help me, hivemind!
i am A Old . . . so old i can't remember what machine generation i was running when i found QEMM . . . i thought i'd died and gone to heaven!
On my i7 laptop (I know, I'm crazy :P) I have a maximum reported conventional memory of 629 KB. I tried all I could in the past but never managed to address more conventional memory. I will try all those tricks as well!
IIRC, the best way to get the most conventional memory available was to use a monochrome graphics adapter.
I also heard about some Tandy computer that had a lot of conventional memory available, didn't it?
@@jbinary82 Tandy 2000, according to my research. Maybe others as well.
Well, that only applies to MDA, not Hercules. Hercules requires 32 KB of addressable memory, while the original MDA only demands 4 KB.
Both are considered monochrome display adapters.
@rashidisw Oh, so that's the main difference then? I remember MDA and Hercules having similar resolutions, etc.? Wasn't there an EGA-equivalent MDA resolution? Or am I thinking of VGA/MDA? I think some games like Dune 2 or Wing Commander 2 had the MDA 256- or 16-color option?
@@skilletpan5674 MDA is *text* mode only, similar to CGA text mode. Each character on screen takes up 2 bytes: one for the character itself and the other for its attributes.
MDA has 80 columns x 25 rows, so that's about 80 * 25 * 2 = 4,000 bytes.
Hercules, on the other hand, while compatible with MDA monitors and using a similar starting memory address, also offers a graphics mode, unlike MDA.
In that mode each pixel can be precisely turned on or off with 1 bit, hence monochrome.
The Hercules graphics mode has 720 x 348 pixels, which is 250,560 bits or 31,320 bytes. Due to the addressing scheme, it takes up 32K of RAM.
@@rashidisw Yeah, I know. I remembered after I made the comment and looked it up. I was thinking of MCGA, but I couldn't find the comment I made to edit it fast enough :D. I used to use all of them as a kid in the 80s and 90s. I lived through about 4 or 5 major versions of DOS as my daily drivers. I just wish OS/2 had become more popular.
QEMM v8 was the best. Also, when OpenDOS v6 and then v7 became free, that was the best way to get low memory usage of only around 4K or so (1K is always the interrupt vector table).
I was lucky because I never had issues with QEMM. I had to use Norton Commander v5 to eat up a KB or so, so that some games would run, because they didn't like having too much RAM free.
You need to exclude more if you are using those Expanded Memory LIM cards.
I am sure on my 486/Pentium systems I could push to 740K, and if I had been OK (which I was not) with monochrome video I could have pushed way up to 800K odd, but we are talking almost 30 years ago, so I cannot remember for sure. NB: it was DOS 6.22 I did that with.
I did use QEMM back in the day, but I recall it not being amazingly stable; then again, that is a very early memory for me, so I might be wrong.
I do know I strictly used MemMaker after that, which was just fine for the apps I wanted to run... Doom 2, Windows 3.11, Day of the Tentacle. 😊😊
I think pre-v6 had a bad rep, but I used QEMM v6 and v8 and they were fine for me. OpenDOS was also a good free (as of v7) option that was about the same as QEMM in terms of low memory, etc. Mind you, I'd be running OS/2 on my fantasy ultimate "DOS" PC.
Re the software that you couldn't get to work:
Did you test it with US-English DOS?
I have no proof of it, but I have a suspicion that some of the various tools and utilities weren't tested on various country/language versions of DOS and might not work with them.
(Or maybe the subconscious part of my brain just blames this as the reason why I never got DESQview/X to work :) )
@@Thesecret101-te1lm You mean the Memory Commander I mentioned?
I tried only the English version of DOS.
I'm all up for the ethernet base 2 network!
Another fun fact: some DOS games won't work with memory managers like QEMM. For example, the hentai RPG "Knights of Xentar" won't run under QEMM; it's tied to EMM386. On QEMM it throws errors.
And if you want to play "Knights of Xentar", I recommend you install the NR-18 patch, but LEAVE YOUR HANDS OUT OF YOUR PANTS! 😂
Where is that hot ass track with the sax? That song is banging
"What if you Fly" by rootkitty, niiiiiiice
@MatthewHolevinski
Check it out, it's on SoundCloud: soundcloud.com/rootkitty
@ashalya2923 kudos to you!
Jenseits von RAM und ROM
OMG. Back then, we did EVERYTHING to move forward and get away from this *****. Even Win9x had traces and remains of those archaic decisions at a time when typical end-user machines had 128 MB RAM. The ONLY excuse at that time was backwards compatibility (an argument destroyed by OS/2), and Mac OS 7-9 was even worse, despite not depending on x86 but on 68k... and still struggling with memory management.
So, what's the point in warming up this stuff at a time when $5 modules like the Raspberry Pi Pico and ESP32 come with 32-bit processors? Why not unbury an abacus and tune it using silicone oil, titanium rods, precision bearings and superlight nano-carbon balls?
The design of the 16-bit DOS PC was spiked with dumb decisions, and it was let rise in a never-ending volcano of "unavoidable" backwards compatibility of all kinds, while already in 1990 there were half a dozen competing architectures (for CPU, OS and programming APIs) that had better characteristics and outlook.
Celebrating this BIGGEST WRONG DECISION in human industrial history, which results in higher costs until today, makes my guts cramp.
Thanks for sharing your thoughts. It seems you're passionate about technology and the evolution of computing! I get where you're coming from, and compared to today's standards, the limitations and decisions made back in the MS-DOS era are fundamentally archaic.
However, the purpose of my video, and my channel in general, is to explore the history and context of past IT technology. Understanding where we've come from helps us appreciate how far we've advanced. It's not about celebrating the past as the 'best way,' but rather understanding it as a stepping stone that led to new innovations.
That said, I do agree that our present-day technology and operating systems, be it Linux, Raspberry Pi, and what not, offer incredible capabilities and advancement over the days of MS-DOS.
Again, I appreciate your feedback, and I hope we can continue to have constructive discussions about the evolution of technology.
@@THEPHINTAGECOLLECTOR Evolution is quite the keyword: DOS and Win9x never really evolved. This OS family is a symbol for staying in the past for the sake of compatibility and forcing workarounds forever.
Comparing how DOS differed from its competitors would have been exploration. Showing the whole misery to even use installed RAM with the LEADING OS of these days is ... celebrating it.
@@martinb.770 You're absolutely right: DOS and Win9x had significant limitations and often required obscure workarounds that feel totally out of place by today's standards.
Still I disagree with your viewpoint.
My goal with the video wasn’t to celebrate those limitations but to highlight the challenges that developers faced at the time, and how they worked around the constraints they had.
In my opinion, one must acknowledge the creativeness of all those people coming up with, what I believe, clever workarounds.
Were these workarounds, be it homebrew ones like the one explored in this video, or commercial products like QEMM and others, sustainable and future-proof solutions? Definitely not. Admittedly, they all just unnecessarily prolonged the life of the DOS basis.
And while this construct undeniably caused many problems in the long run, it would be wrong to pretend it never happened, let alone not to explore it.
I like your idea about comparing to other operating systems, and there will be room for this in upcoming videos.
Right now, my focus is on DOS, and DOS networking.
But I will definitely look into other systems along the way of my networking series: for certain a revisit to NT, but also early Linux versions, as well as contemporary UNIX versions of the 80s and 90s.
@@THEPHINTAGECOLLECTOR There are clever workarounds that allow people to use the full potential of some system, and there are limitations or hurdles by design that FORCE workarounds all the time and make working with it an everlasting pain, because every workaround might introduce another incompatibility.
On the other hand, there are clever designs that barely need changes and leave people the energy to use them and evolve.
Not a single pro wants this stuff back.
Take a look at those systems that made good decisions from the beginning.
Sorry for this discussion, but the topic triggers quite some anger, looking back at the lost decade called the "90s".
Everything the professionals needed was already available in '90-'93.
Take a deep look at what Microsoft considered "evolution" and what every other player in 1994 had on the table.
It's like showing what a great car the Trabant was in its ecosystem (the DDR), while the rest of Europe raced around in Golfs.
Once again sorry - but it just triggers angry memories of lost years.
@@martinb.770 True. Even though there are enthusiasts keeping the Trabi alive and restoring them, I think nobody wants to use it as a daily driver nowadays.
Same for me: I love looking back at this retro tech, but it would never occur to me to use it as a daily driver again.
Still, speaking of limitations, or their absence, by design or by intention.
As much as DOS was a CP/M clone, and thus inherited some fundamental limitations even from the 70s, Linux had similar heritage.
Although it was written from scratch, the dependence on being a UNIX clone showed in shortcomings in various areas as well, like with glibc.
Notably, the kernel did not restrict the number of file descriptors, but earlier versions of glibc did, by defining a static limit via FD_SETSIZE. Hence every tool linked against glibc inherited that limit. And this ultimately became a problem for heavy-duty web servers, where you very easily needed more than 1024 file descriptors.
So changing it required not only tinkering with the glibc header file, but also recompiling every bit of software to inherit the new definition.
It took several years as well to move away from these static compile-time definitions to dynamically definable values at runtime.
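To illustrate what that wall looked like for a select()-based server, a minimal sketch (plain C; FD_SETSIZE is still 1024 on a typical glibc build today):

```c
/* Sketch: fd_set is a fixed-size bitmap sized at compile time via FD_SETSIZE.
   A descriptor at or above that value cannot legally go into the set, which
   is what broke select()-based servers past ~1024 connections; poll()/epoll
   don't carry this limit. */
#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    printf("FD_SETSIZE = %d\n", (int) FD_SETSIZE);   /* typically 1024 */

    fd_set readfds;
    FD_ZERO(&readfds);

    int fd = 1500;                 /* imagine a busy server handed this descriptor */
    if (fd >= FD_SETSIZE) {
        /* FD_SET(fd, &readfds) here would write past the bitmap */
        fprintf(stderr, "fd %d does not fit into an fd_set\n", fd);
        return 1;
    }
    FD_SET(fd, &readfds);
    return 0;
}
```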
Having said that, that would be my personal "source of anger", as this caused me a lot of pain 25 years ago.
But it doesn't help, except for the realization that, yes, an inherently solid design can to some extent prevent such mistakes from being repeated. At least, as far as evolution goes, Linux and GNU took many steps in this area to evolve, where I agree with your earlier statement: DOS did not.
Many advancements that only arrived in the consumer mass market at a later stage were already there, as you say.
Take those SGI workstations I have, which are running IRIX. Its XFS filesystem was designed with scalability in mind and didn't suffer from "maximum file size of 2 GiB" situations, as was still the case with the FAT filesystems.
There's surely plenty of such examples, where technology was already there at the time.
Definitely an opportunity for a context-specific comparison at some point.
PS: Why *not* take an abacus, fit new sliders, silicone lube and everything, and tune it into the best-performing manually operated Rechenschieber (slide rule) of all time? Just joking ;-)
Why not simply use "device=c:\emm386.exe noems i=a000-b7ff", or include even more?
This was actually totally safe if you used CGA graphics and no MCA or VGA/EGA adapter.
Granted, it only works on a 386, but there were memory managers for the 286 that could do this as well. There was an EMM286, or Quarterdeck QRAM for 286s.
This is answered at exactly this point of the video th-cam.com/video/Nbw5klso-VY/w-d-xo.html
It's because this works on 8086/8088 systems, where none of the above will work otherwise.
@@THEPHINTAGECOLLECTOR Yes, for the 8088. There is a way with a Lo-tech RAM card though, I think. I haven't tested it.
But for the 80286 it is much easier, as stated above. And you used a virtual one. 🙂
Anyways I liked the video! :-)
@@Jerrec emm386.exe will not work on a 286. It requires a 386 or later.
@@stamasd8500 Yes, and? Did you read my whole comment? There are memory managers for a 286 that do the same. Only for an 8088 are things like this needed.
@@Jerrec emm286 does not do the same things as emm386. It is purely an expanded memory manager. Does not provide UMBs; moreover it places the EMS page in conventional memory, therefore reducing it. Even if you have another memory manager that provides UMBs, emm286 refuses to use them.