As one step in the development of the Windows 3.x/2.x display driver, I needed to replace a BIOS INT 10h call that set the video mode with “native” mode-setting code going directly to the (virtual) hardware registers. One big reason is that the (VBE 2.0) BIOS is limited to a predefined set of resolutions, whereas native mode-setting code can set more or less any resolution, enabling widescreen modes and such.
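For context, the VBE route boils down to asking the BIOS for one of its predefined mode numbers; a minimal sketch (not the actual driver code; mode 101h is the standard 640×480 256-color number, and mode_set_failed is a made-up label):

        mov     ax,4F02h                ; VBE function 02h: Set VBE Mode
        mov     bx,0101h                ; mode 101h = 640x480, 256 colors
        int     10h                     ; returns AX = 004Fh on success
        cmp     ax,004Fh
        jne     mode_set_failed         ; hypothetical error handler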
Replacing the code was not hard (I already had working and tested mode-setting code) and it worked in Windows 3.1 and 3.0 straight away. When I got around to testing Windows 2.11, I noticed that although Windows looked fine and the mouse worked, the keyboard didn’t seem to be working. Windows was just completely ignoring all keyboard input.
Curiously, the letters I fruitlessly typed in Windows popped up on the DOS command prompt as soon as I quit Windows (which was not hard using a mouse). This indicated that the keyboard input was not exactly lost, but it was not ending up in the right place somehow.
After double and triple checking, I assured myself that yes, using native display mode setting code instead of the BIOS broke the keyboard in Windows 2.11 (but not in Windows 3.x). That was, to put it mildly, not an anticipated side effect. How is that even possible?!
Fortunately I did not have to spend too much time figuring out the problem, but only because I had the Windows 2.11 DDK on hand, including the source code to KEYBOARD.DRV. And soon enough, there I found the following (excerpted) source code:
; ROM BIOS addresses

CrtMode         EQU     449H            ; rom bios data area for screen mode

...

;       When running Windows, it can happen that we will be interrupted by a popup
;       in text mode that will read keyboard (Business Network).  This very
;       wierd case is detected by finding the screen in text mode (this is
;       quite uncommon when this keyboard driver is activated).
;
ifndef TEST     ; IF you want to test this driver with the OEM keyboard test,
                ; TKEYBD.EXE, it must be assembled with the TEST flag set so
                ; that this code is NOT executed.
        cmp     byte ptr ds:[CrtMode],4 ; text mode?
        jb      jkbi0                   ; yes, jump to ROM.
endif
The code in the Windows 2.11 keyboard driver checks the CrtMode byte at address 0:449h (the current BIOS video mode) and, if it’s less than 4, lets the ROM BIOS handle the keyboard input, bypassing Windows keyboard handling.
Said code is a bit sketchy. It does not consider mono mode 7, which is also a text mode. But more importantly, it assumes that Windows must run in a graphics mode (reasonable) and that the BIOS must have been used to set that mode, changing the CrtMode byte to a value of 4 or higher (much less reasonable).
Interestingly, this clearly already broke the OEM keyboard test utility TKEYBD.EXE, which presumably in fact ran KEYBOARD.DRV in text mode.
I wondered how this works when Windows 2.11 runs with a Hercules graphics card (HGC), which does not use the BIOS to set the graphics mode (because there is no BIOS support). I could not find any code in the Hercules driver that would touch CrtMode at all. Eventually I realized that the HGC would in fact run in mode 7, so the strange logic in KEYBOARD.DRV would not kick in. Did the Business Network pop-up problem not exist on Hercules cards? Who knows.
At any rate, now I had a good idea how to un-break the keyboard in Windows 2.11 with my display driver. I just added a line of code to write some more or less random value higher than 3 to 0:449h. And sure enough, suddenly the keyboard worked in Windows 2.11!
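The quick hack amounted to something like the following sketch (not the exact driver code; any value of 4 or higher would do):

        xor     ax,ax
        mov     es,ax                   ; segment 0 (real mode only!)
        mov     byte ptr es:[449h],6    ; CrtMode = 6, i.e. some graphics mode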
But that left me with the question of what to do about Windows 3.x. Clearly Windows 3.x didn’t need CrtMode to change (at least at first glance). But perhaps some obscure code would benefit if it did?
So I left the code in for all Windows versions. And poof, when Windows 3.1 started, I ended up with a black screen and a non-functional VM. Almost the same happened with Windows 3.0, except that Windows 3.0 just instantly returned to DOS.
Except for Windows 3.0 in real mode, which worked fine. Then I realized that of course writing to 0:449h is dumb, because it can’t work in protected mode. Instead I needed to write to 40:49h, which works in both real and protected mode.
The excursion to the Windows 2.11 KEYBOARD.DRV also allowed me to finally understand a rather strange fragment of code in the Windows 3.1 Video 7 sample driver (in the physical_enable routine) that I had previously removed:
        mov     ax,040h                 ; bogus for keyboard !!!
        mov     es,ax
        mov     BYTE PTR es:[49h],06h
This code writes the value 6 into the CrtMode byte. The “bogus for keyboard !!!” comment is not exactly enlightening, to put it mildly. But seeing the KEYBOARD.DRV code, it makes sense that a display driver might need to write something there if it didn’t go through the BIOS to set the Windows graphics mode. In retrospect, perhaps I should not have removed that code…
On the other hand, I could not find any similar logic in the Windows 3.1 driver for the 8514/A, which also does not use the BIOS to set the mode. Which makes me think the issue was really specific to Windows 2.11 (or maybe more generally Windows 2.x).
Indeed checking the KEYBOARD.DRV source code in the Windows 3.1 DDK, one can find the following in the change history:
;   20 apr 89  peterbe   Removed old comments above kbic: about
;                        Business Network and W.1.00 keyboard test.
In other words, the sketchy logic was removed from KEYBOARD.DRV before Windows 3.0 was released.
In a way I was lucky that I didn’t integrate the native mode-setting code right away, because figuring out why the keyboard did not work in Windows 2.11 would have been massively more difficult.
What’s the moral of the story? Having source code access can save a lot of time and head scratching, and good comments are important. The logic in Windows 2.11 KEYBOARD.DRV is rather strange and extremely non-obvious, but at least the comments explain why it’s there and the reader can understand what’s happening.
On the other hand, the “bogus for keyboard” comment in a display driver is impossible to understand without context (context that only exists well outside of the display driver), and it’s really no better than no comment at all.
The other takeaway is that 16-bit Windows is a house of cards built on a foundation of sand. An innocent change in one place can cause something to break in a seemingly completely unrelated location. Maintaining and debugging such a system is a nightmare. It also demonstrates that a flaky house of cards can still be a major commercial success.
Wait, so the driver will not be a VBE driver? What hardware is it going to support then?
It will run on VirtualBox. Possibly bochs/qemu as well.
Japheth’s SVGAPatch has been around for years and does an adequate job of providing a VBE compatible 256-color driver for Windows 3.1.
SVGAPatch is a dirty hack that only supports three hardcoded resolutions and breaks as soon as you open a full-screen DOS app and then exit it. I’d consider it ‘better than nothing’ rather than ‘adequate’. Though like you say, even a flaky house of cards can be a success… Even then, it’s probably understandable that I’d be looking for something more polished.
And while it’s true that VBE sometimes doesn’t provide all the resolutions supported by the display device, people have been known to hack their shadow video BIOS to make them available (look for 915resolution). Hell, one could probably even create custom builds of the virtual machine’s video BIOS.
How hard do you expect it to be to adapt your driver into a VBE client? If you don’t want to work on it yourself, that’s fine, but it’s still my primary interest.
Are there any plans to add the Windows 2.11 DDK and 3.x DDKs to the Windows Library section of this site?
While finding DDKs for 95 and newer is trivial, I’m not having much luck finding the older ones at the usual locations (BetaArchive, Winworldpc, Archive.org).
Cheers and happy new year.
I’m curious, is it ok for us (the readers) to try out a WIP version of the driver?
I’m trying to figure out how these Win3.x drivers work in the three operating modes.
Am I mistaken if I say that, in order to take advantage of them, you have to write three versions of the driver? One running in real mode, one in 16-bit protected mode, and one VxD running in 32-bit protected mode?
Then, for the latter two, you have to interact with the DOS extender if you want to use the BIOS APIs, or you can stay in protected mode and directly read/write the device registers if you don’t intend to use those APIs?
I wonder for which kind of “popup” this would matter.
I haven’t found what product The Network Manager was. I vaguely remember that one of the PC-MOS related products was offered under a similar name but that could be wrong. The pop-up would be a text mode pop-up that did bad things to any graphics screen. Most of the later DOS networking utilities made a point of not doing a pop-up when in graphics mode. Windows being able to handle that unwanted intrusion is a tribute to very effective coding.
The .drv drivers are all just normal Win16 code with fancy entry points. They don’t really care what mode they’re running under. Of course, they can be sensitive to it, but in my experience writing one, it doesn’t seem to matter.
Sometimes they’re paired with a .vxd which does run in protected mode. Think MOUSE.DRV vs. *VMD (which is a built-in VXD included with the protected mode kernel, afaik)
The VDD mostly handles DOS applications run in a window. The grabber is the third piece for most video drivers; the grabber handles cut and paste with DOS apps. Mismatches between driver, VDD, and grabber take up several pages in the Win 3 troubleshooting guide. The most common warning of a mismatch is that DOS applications won’t launch under Windows. Beats crashing, I guess.
The Windows 3.1 DDK can be found in various MSDN archives.
When I get to upload something that I consider minimally functional, yes. Not quite there yet, I especially need to figure out the installation bit.
No. There is one display driver that can handle Windows 3.0 in Real, Standard, and Enhanced mode. The VxD typically handles virtualization of DOS boxes in Enhanced mode, but it’s added functionality, and it is not part of the display driver per se (that is, the VxD is not involved at all when only Windows applications run).
Windows 3.0 added all the protected-mode APIs for selector management and such, and in real mode they do nothing or some minimal equivalent. For example AllocCStoDSAlias just returns the same segment value and FreeSelector does nothing. So it’s possible to write code that works in both protected and real mode without too much trouble.
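To illustrate, the same sequence works unchanged in both modes; a sketch (not code from my driver, and patch_target is a made-up label):

        extrn   AllocCStoDSAlias:far    ; exported by KERNEL
        extrn   FreeSelector:far

        push    cs
        call    AllocCStoDSAlias        ; AX = writable data alias of CS
                                        ; (in real mode simply AX = CS)
        mov     es,ax
        mov     byte ptr es:[patch_target],0C3h ; e.g. patch a RET into the code
        push    es
        call    FreeSelector            ; harmless no-op in real mode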
Honestly, just a couple of files to dump into WINDOWS\SYSTEM and instructions to modify SYSTEM.INI wouldn’t be all that bad. In fact, personally I might even prefer it, as I have come to loathe Windows-style installation wizards that make me wonder what exactly they are doing, whether it will be possible to undo later, and how many browser toolbars they’re going to install behind my back… if I copy the files myself, at least I know what I did to my system.
Windows 3.x’s video driver installer doesn’t seem to have been developed for a world where one video card could support multiple output modes. Most created a dummy driver alias in the installer .INI for every supported mode plus a large-font variant. The actual mode selection appears to have been done with entries in WIN.INI or SYSTEM.INI.
Windows was designed for users who set the highest resolution mode that the video card plus monitor would accept. The idea that resolutions would need to be changed on the fly was still in the future, along with cards that could do that. According to the troubleshooting guide, ATI included drivers to handle their on-the-fly resolution changes, but the other vendors that tried managed to place the image where the display couldn’t show it. The troubleshooting advice was to turn off NMI on the video card and prevent mode changes.
So do you have the actual 2.11 DDK or is this the 2.11 OEM binary adaptation kit that’s available on Winworld?
BAK = DDK
The BAK is a little bit more, but for most purposes it’s the same thing. The difference is that the BAK enabled an OEM to ship their own modified Windows version, which was no longer really a thing in the Windows 3.x days.
Yes. And the Windows 3.1 installer was already a lot more flexible than the Windows 3.0 installer. Windows 3.0 had absolutely no concept of multi-resolution drivers, which is why OEMs used to ship separate binaries for each supported resolution. In Windows 3.1 Microsoft added a way to provide individual INI file settings for each selectable choice in OEMSETUP.INF, so it was possible to control the resolution and small/large fonts with a single driver binary. Only Windows 95 standardized the mode control and defined a “proper” way to specify resolution and color depth. Technically Windows 3.1 (and really 3.0) could have done it too, it just didn’t.
Part of it was that as Richard Wells says, graphics cards and monitors used to be delivered together as a matched set, so there was little need to mess with sub-optimal resolutions. That completely changed in the 1990s when cards were capable of many resolutions and the monitor was often the limiting factor. With different color depths thrown into the mix, users had even more trade-offs to balance and more need for a wide palette of options.
Was SVGA common when Windows 3.0 was released in 1990?
I guess that depends on what you call “common”. Cards like ATI VGA Wonder or Video 7 VRAM were out there in 1990 and offered up to 1024×768. How many people had a good enough monitor I have no idea. I don’t have hard numbers but I would guess that in 1990, SuperVGA cards were really just starting to get popular. Windows 3.0 itself was clearly designed to be usable in 640×480 resolution.
1990 was when Gateway switched all of its featured systems to include 1024×768 monitors. It wasn’t much of a stretch to the budget either. Going from 256K to 512K added $50 to the video card cost. The 640×480 color monitor at 41 DPI was about $300; 640×480 color monitor at 31 DPI was about $350; 1024×768 color monitor at 28 DPI could be gotten for about $400. Those prices were all for PC Brand branded 14″ monitors; other brands have slightly different prices for the same budget sized monitors. Of course, the large and multisync monitors were much more expensive.
The 1988 Paradise card I have did 800×600. Once the 256 kbit chips returned to normal pricing, the 512K became the most common type of card.
SVGA would have been the most common video setup for preinstalled copies of Windows 3.
On the other hand, looks like monitors like the NEC MultiSync Plus were quite expensive in 1988.
Yes, SVGAs seem to have been somewhat common in the late 1980s. I don’t have a good sense of how many people had the monitor to go with it.
I’m also not sure how many Windows 3.0 copies were sold with new machines vs. installed on existing systems. Something like a Deskpro 386 was probably a good target for Windows 3.0 but didn’t necessarily have anything better than standard VGA.
Most Windows 3 copies had to be sold with new machines just because of how the market was growing. Cumulative sales of all PCs and compatibles was slightly more than 30 million from 1981 to 1989 with another 17 million in 1990. After a slight slump in 1991, the increases resumed and the combined sales from 1991 to 1994 were more than 100 million. With something like 90% of prebuilt computers shipping with Windows 3.x during those years, that dwarfed the total of all computers that might possibly get an upgrade Win 3 install.
On the other hand, it looks like monitors like Sony Multiscan 1304 were already common in 1989, though existing systems in 1990 are not likely to have it.
Note that I am talking about 1990 and not 1992 here.
The NEC MultiSync came out in 1986, followed by the MultiSync II in 1987. Both could almost do 800×600 (the top and bottom of the screen got cut off). At the time, multiscan monitors were advertised more for their ability to support multiple video cards (TTL and analog RGB, covering CGA/EGA/VGA/PGC) than for higher resolution video.
Even when cards like the IBM XGA and Paradise PVGA1 came out with higher resolution modes, people didn’t typically run them. In 1992, when “better than VGA” became the norm, the focus was on higher color depth because of that hip new multimedia thing.
I am looking for good SVGA monitors. The original NEC MultiSync and Sony Multiscan 1302, for example, had a maximum horizontal sync of 35 kHz.
The high-resolution Kanji monitors were very readable. I’m not sure of the exact date of introduction, but the 1984 model of the 5551 is listed as 24-dot mono, which translates to an effective resolution of 1024×768. 24-dot with 16 colors was available before the PS/2. That made the development of VGA and the 8514 easy: monitors were already available for those resolutions and only needed a minor change to the design to handle the new video connector. It took some time for those to take hold in the US, since the cost was high but the benefits were marginal for most people not doing CAD or DTP.
In my possession is a game, Theatre of War by Artech/Three Sixty. The installer notes that this is an early SVGA game and contains a routine to test the system’s compatibility with SVGA video modes because, if I recall properly, the developers did not believe most systems would be compatible.
That game was, I think, published in 1992, if this data point is valuable in any way.
That’s what VESA VBE was designed to solve. There were many SVGA cards, but they were not compatible at the register level because everyone extended the standard VGA registers in their own way. Different models from the same vendor were barely compatible, and there was absolutely zero cross-vendor compatibility. They all did more or less the same thing, but differently. But in 1992, VBE was not yet common, and was very unlikely to be built into an SVGA card’s BIOS.
In my personal circle of acquaintances, higher than standard VGA was very rare until around 1992/93. I was 12 in 1990, when I got my first PC, though (a 286).
Where I was working in 1992, the entire department had switched to Compaqs with Super VGA. Mostly that was because the other major piece of software was designed for Sun’s graphical interface and one needed a fast computer with very high resolution displays to have a terminal for that. The software I was writing worked adequately at 640×480 but was designed to automatically scale and look even better at 800×600 and 1024×768.
In 1992 I had a 386SX with SuperVGA… but the monitor could only handle 1024×768 interlaced, which was not at all fun to work with. So 800×600 at (IIRC) 56Hz was the highest practically usable resolution, and even that wasn’t great. That sort of capability was probably common for the mass-market systems of the time.
For my purposes, the ability to do 640×480 with 256 colors was more useful than the higher resolutions.
Or even 640×400 at 256 colors back when 256k cards were common.
Yes, 640×400@256 was a thing in the early 1990s, precisely because there were enough cards that could do it but not 640×480. I believe the DAC and CRTC side was basically standard VGA (320×200 is really 640×400 because pixels and lines are doubled) but the card needed support for bank switching to address the memory.
800×600@16 was popular for the opposite reason, it needed a CRTC and DAC (and monitor) better than VGA, but did not need bank switching, which meant that as long as the BIOS could set the mode, software could work with it using only standard VGA registers and memory, without any card specific knowledge.
Part of the point of 640×400@256 was that no extra memory would be required on SVGA cards to use the mode.
Which also reminds me that VGA is a planar device. I wonder how SVGA bank switching would be different.
Not sure what you mean exactly about bank switching.
800×600@16 needs 240,000 bytes of video memory, but because in planar modes each host-addressable byte corresponds to one byte in each of the four planes, only 60,000 bytes of address space are required on the host side to address all of it. No bank switching required.
640×400@256 needs 256,000 bytes of video memory, just slightly more. But because it’s not a planar mode, the host actually needs to address all 256,000 bytes… and that requires banking (or an LFB).
And yes, both of those modes work on a 256K card, which anything calling itself VGA has to have.
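To make that concrete, writing one pixel in a packed 640×400 256-color mode goes roughly like this (a sketch; x, y, and color are assumed variables, and set_bank stands for whatever vendor-specific bank switching mechanism the card has, assuming 64 KB banks at A000h):

        ; linear = y * 640 + x;  bank = linear >> 16;  offset = linear AND 0FFFFh
        mov     ax,640
        mul     word ptr [y]            ; DX:AX = y * 640
        add     ax,word ptr [x]
        adc     dx,0                    ; DX:AX = linear pixel address
        mov     di,ax                   ; low 16 bits = offset within the window
        mov     bx,dx                   ; high bits = bank number (0..3)
        call    set_bank                ; vendor-specific, hypothetical
        mov     ax,0A000h
        mov     es,ax
        mov     al,byte ptr [color]
        mov     es:[di],al              ; write the pixel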
I am talking about compared to things like https://en.wikipedia.org/wiki/Mode_X
I think it should have been possible to do 640×400 using mode-X-like memory addressing. I don’t know if anyone did it.
That Wikipedia article is a bit funny: “Even though planar memory mode is a documented part of the VGA standard and was used in earlier commercial games, it was first widely publicized in the Mode X articles, leading many programmers to consider Mode X and planar memory synonymous.” I think it meant to say “many game programmers”. Because GUI programmers used planar modes from the beginning and were quite familiar with them.
Modes X/Y make perfect sense when one realizes that the way the host accesses video memory is quite independent of the way the CRTC accesses video memory for the purposes of display refresh. That is something standard documentation does not really talk about, or even suggest is possible.
The Windows 2.0 DDK was uploaded a while ago so that might help w/ development.
https://archive.org/details/ms-win20-ddk-rel/
A standard VGA has to double pixels horizontally when using 8 bits per pixel, so only resolutions like 320×400 or 360×480 are possible.
Pretty much every SVGA chip manufacturer came up with their own way to (a) do 8bpp and (b) set 800×600 display timings; the latter was fairly standard via the BIOS (mode 6Ah with the INT 10h set mode call), but the former didn’t have much of a standard before VESA.
It’s good to see, but no real help. The differences from the Windows 2.11 DDK are minimal. A Windows 3.0 DDK would help significantly more.
Mode 6Ah was actually defined by VESA as well. I’m not sure when, but VBE 1.0 is from October 1989 so it must have been before that.
I don’t remember offhand if the VGA CRTC registers can handle 800×600, but the clock generator definitely cannot, so vendors had to invent some way to set the appropriate clock. And clock programming was a giant mess on old SuperVGAs.
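On cards whose BIOS implemented the mode, setting it needed nothing vendor-specific at all; a sketch:

        mov     ax,006Ah                ; AH=00h set mode, AL=6Ah: 800x600 16-color
        int     10h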
Technically, DPMI code should not be hardcoding the 40h selector in protected mode. The DPMI 1.0 specification (http://sudleyplace.com/dpmione/dpmispec1.0.pdf) says that “Some DPMI version 0.9 clients are incorrectly making use of GDT selector 0040H as referring to 40:0H in real mode.”
I don’t think Windows applications, let alone drivers, could ever be considered to be generic DPMI clients.
It is interesting how much Windows 3.0 was designed around DPMI, even though it is far from a “generic DPMI client”.
Windows/386 used GEMMIS, which could be thought of as orthogonal to VCPI. GEMMIS didn’t do much except allow EMM386 to share its configuration and allow another 80386 hypervisor to basically turn EMM386 off. In typical Microsoft fashion, Windows kept requiring GEMMIS.
Windows 3.0 was implemented as DPMI servers plus (mostly) a 16-bit DPMI client. It shipped a 16-bit DPMI server called DOSX (used in Standard mode), which in turn required HIMEM.SYS. In 386 Enhanced mode, it shipped a 32-bit hypervisor which provided both 32-bit and 16-bit DPMI servers.
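For reference, a program becomes a DPMI client through the documented INT 2Fh interface; a sketch of the standard detection call (not Windows code; no_dpmi and dpmi_entry are made-up labels):

        mov     ax,1687h                ; INT 2Fh: get DPMI mode-switch entry point
        int     2Fh
        or      ax,ax                   ; AX = 0 if a DPMI host is present
        jnz     no_dpmi
        mov     word ptr [dpmi_entry],di
        mov     word ptr [dpmi_entry+2],es  ; ES:DI = mode switch entry point
                                        ; BX bit 0 = 32-bit programs supported
                                        ; SI = paragraphs for host private data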
This was an impressively virtualised and modular architecture, but Microsoft never really promoted or marketed it as such. They just claimed it was an “operating environment” or “presentation manager” for DOS. To me, the most impressive part was how low the system requirements were: Windows 3.0 could still run on a bone-stock PC XT, Standard mode on a PC AT with just 1 MB of RAM, and 386 Enhanced mode in 2 MB of RAM, and it was actually usable, on top of DOS 3.10 no less.