About twenty years ago, I bought a used IBM Model M keyboard with a PS/2 connector. I believe it cost me around $5-$10 plus shipping at the time. A good investment, given that this sort of keyboard is probably worth $100 or more these days.
About once every ten years, I clean the keyboard. This time I removed all the key caps I could and ran them through a dishwasher; that worked very well. The rest I cleaned with a damp cloth and a bit of dish soap.
Looks like new
After the cleaning, the keyboard looks really good. It was used when I got it, I used it pretty heavily for about ten years, and it’s now close to 30 years old. And yet it shows much less wear than the vast majority of keyboards of considerably newer vintage.
Every now and then I attack the large pile of floppy disks in my basement and run a bunch of them through KryoFlux. This time it was a shoebox full of OS/2 related floppies. Among them was a very incomplete set of 10 floppies labeled “PRE-RELEASE IBM OS/2 2.0 32-bit Graphics Engine and WIN-OS/2 VERSION 3.1”, with the floppy masters most likely created on August 14, 1992.
What is this really?
Now, the labeling on those floppies is a bit funny. It’s obviously not a pre-release of OS/2 2.0 if the floppies are from August 1992 and OS/2 2.0 had already been released in March/April. At the same time the floppies don’t say “OS/2 2.1 pre-release”.
When I compared the floppy contents with existing floppy images in my archive, I realized that they (nearly) match two different sets: One that I thought was an OS/2 2.1 beta, and another that I identified as OS/2 2.00.1.
Upon closer examination, the supposed 2.1 beta and OS/2 2.00.1 images were 100% identical; clearly my fault. The images had come from an October 1992 IBM PDK CD, and on the CD they are rather unhelpfully labeled “Operating System OS/2 2.X”.
Since this is IBM we’re talking about, there’s both some measure of chaos and a decently sized written record. Consulting the OS/2 V2.1 Technical Update shed some light on the confusion. Some.
Recently I had the need to use several different DOS VMs that all used an SMB network client. Although I did not use networking heavily, I noticed that there were massive differences in performance between the VMs. Copying a circa 42 MB file would take anywhere between about 5 seconds and 49 minutes (not a typo). What’s more, some VMs were fast in both directions, while others were very slow sending and yet others very slow receiving.
Since in all cases the VMs communicated with the same server (Synology NAS running Samba) from the same host (AMD Ryzen 3800X running Windows 10 and VirtualBox), there really should not be that much performance variation, and yet there it was.
In all cases, NetBIOS over TCP/IP was used as the protocol underlying SMB, and it should be said that TCP/IP greatly complicates the picture. I used three different software stacks, mostly to get some sanity checking:
Microsoft Network Client 3.11 with included TCP/IP
IBM DOS LAN Services (DLS) version 4.0 with IBM TCP/IP
IBM DOS LAN Services 5.0 with Network TeleSystems TCP/IP
The SMB clients are in all cases nearly identical. But the TCP/IP stacks are obviously different, and it matters.
Several years ago, I found out the hard way that old versions of DOS have trouble with hard disks with more than 17 sectors per track. To recap, DOS versions older than 3.3 may hang when booting from a hard disk with more than 17 sectors per track, but the exact behavior is somewhat complex.
Old DOS versions required IO.SYS to be contiguous and attempted to load it with a single disk read if possible, but perhaps to keep the code size small, the loading code isn’t very sophisticated and reads sectors from the beginning of IO.SYS until the end of the disk track. With 17 sectors per track, there can never be a problem. With lots of sectors (more than 30 can be a problem) DOS can hang, but the behavior depends on exactly where IO.SYS begins relative to the end of the disk track it’s on. Even on a disk with maximum (63) sectors per track, old DOS versions may boot fine if IO.SYS is sufficiently close to the end of the track.
A 1990 Western Digital WD1007V-SE2 ESDI Controller
Where exactly IO.SYS sits depends on the size of the FAT tables and of the root directory. In practice the root directory size is more or less fixed, but the FAT size very much depends on the size of the DOS partition. Needless to say, the behavior is unpredictable to users and highly undesirable.
The problem was fixed in 1986 or early 1987, which implies that it was already known by then. Now I have a good idea why it was known.
Some time ago I wrote that IBM OS/2 1.0 and 1.1 cannot be installed in a VM due to the way it switches between real and protected mode. At the time I did not realize that there was another obstacle, namely IBM’s use of the undocumented 80286 LOADALL instruction.
For reasons that are not very clear, Microsoft’s editions of OS/2 used 386 instructions since before OS/2 1.0 was released, but IBM’s versions did not. When Microsoft’s OS/2 kernels detected that they were running on an 80386 (or newer) processor, they used 386-specific code to return from protected mode back to real mode. This was faster than the 286 method, which required a CPU reset. (OS/2 did not use the common approach of resetting the CPU through the PC/AT keyboard controller; instead it caused a processor shutdown through a triple fault, inducing a reset that way, which ought to be faster.) Microsoft’s OS/2 kernels also didn’t use the LOADALL instruction when running on a 386.
IBM’s OS/2 1.0/1.1 kernels on the other hand didn’t care one bit about running on a 386, at least not on an AT-compatible machine—perhaps because at the time IBM had no AT-compatible 386 systems on the market or even in the pipeline. IBM’s OS/2 1.0 can still run on a 386 or newer CPU, but it requires a bit of help from the machine’s BIOS.
The calming Big Blue logo
The BIOS must properly support software that sets the CMOS shutdown status byte, resets the CPU (either by triple-faulting, through the keyboard controller, or by any other means), and regains control after the reset while minimally disturbing the machine state. That functionality is common enough because it was also used by DOS extenders and many OS boot loaders.
What’s less common is BIOS support for emulating the 286 LOADALL instruction. Even though the 80386 has its own undocumented LOADALL, a 486 or later CPU has nothing of the sort. However, a 386 or later can emulate enough of the 286 LOADALL functionality to satisfy its common uses, which include things like old HIMEM.SYS, RAMDRIVE.SYS, or OS/2.
The VirtualBox BIOS has had support for LOADALL emulation for a while now, so it should be able to run IBM’s OS/2 1.0. And it is able to run it… but not install it. Or not quite. Fortunately there’s a way around it…
Well, that too, but also nobody expects that a bland, run-of-the-mill Novell NE2000 NDIS driver would crash or hang just because it runs on a 486 or later CPU.
I wanted to try the “basic” DOS redirector shipped with Microsoft’s LAN Manager 2.0 (1990) and more or less by a flip of a coin I decided to use the NE2000 NDIS driver that came with the package. Previously I had no trouble with Microsoft’s NE2000.DOS driver shipped with LAN Manager 2.1 and Microsoft’s Network Client 2.0.
But the old LAN Manager NE2000.DOS driver (16,342 bytes, dated 11-19-90, calls itself version 0.31) loaded and then promptly hung as soon as Netbind was started:
Netbind hangs with LAN Manager 2.0 NE2000 driver
At first I naturally suspected some problem with the card configuration or the NIC hardware, but what I found was much more surprising.
The reason the driver hung actually wasn’t related to networking at all. The driver hung in a routine that was clearly trying to detect the CPU type. How can someone screw up something so simple so badly? Well…
Over the last few months I have been on and off digging into the history of early PC networking products, especially Ethernet-based ones. In that context, it is impossible to miss the classic NE2000 adapter with all its offshoots and clones. Especially in the Linux community, the NE2000 seems to have had a rather bad reputation that was in part understandable, but in part based on claims that simply make no sense upon closer examination.
A genuine Novell NE2000 card (1992) with DP83901
First let’s recap a bit. In late 1986, National Semiconductor introduced the DP8390/91/92 chip set including a complete Ethernet controller, encoder/decoder, and a transceiver. The DP8390 NIC was a relatively simple design, not as advanced as the Intel 82586 or AMD LANCE, but significantly more capable and cheaper than the low-end offering of the era, the 3Com 3C501 EtherLink.
For R&D purposes, I would very much like to get my hands on the circa 1991 Microsoft LAN Manager Network Device Driver Kit which was meant to support the development of NDIS 2.0 drivers. While it is obvious that some kind of development kit for NDIS 2.0 drivers must have existed, the exact name is actually known thanks to the Q80562 KB Article.
That same KB Article also mentions MTTOOL, a test tool that sounds very useful, but unfortunately I’ve not been able to find it anywhere. The tool itself would be helpful even without the rest of the kit.
The closest thing I could find is a 1993 NDDK (Network Device Development Kit) that supports NDIS 3.0 drivers for the Windows for Workgroups 3.11 environment. While the NDDK is valuable on its own, it is quite different and not immediately useful because it is oriented towards Windows NT and 32-bit environments, unlike the 16-bit NDIS 2.0 which supported DOS and 16-bit OS/2.
The old LAN Manager NDDK seems to have fallen through the cracks of the post-IBM-divorce chaos at Microsoft. It wasn’t documented in the older Microsoft Programmer’s Library and by the time MSDN was rolled out, the NDIS 3.0 NDDK was current. And because OS/2 had been disowned by then, Microsoft probably saw no need to widely distribute the older NDIS 2.0 kit.
Which is ironic, because although NDIS 2.0 development may finally be dead now, it was not a few years ago, with Intel offering an updated DOS network driver package as late as 2019. The newest NDIS 2.0 driver in the set is dated December 28, 2015, which means NDIS 2.0 survived even past the Windows XP era, let alone Windows 9x or Windows 3.x!
Update (June 23, 2021): Less than 3 months later, the 30+ year old NDDK popped up. It is now available here.
The other day I was trying to fill a couple of gaps in my understanding of the Intel 8237A DMA controller documentation. I wrote a small test case that performed a dummy transfer and modified the base address and count registers in various ways, and then examined what happened to the current address and count registers.
I ended up printing out the current DMA address and count at the beginning and end of the test. I noticed that the current address changed between test runs, which was quite unexpected. No one else should have been using the DMA channel, and the current address can’t just randomly change.
8237A DMA is… fun?
The change itself wasn’t random at all: The current address was being set to the base address. That happens when the base address register is written, but I was pretty sure no one was doing that.
After much head scratching, I realized that my own code was triggering the change. I had some trivial code in place to save and restore the channel’s DMA page register, and it was restoring the page register that caused the current address to change after the last state printout. That was definitely not expected to happen. So why was it happening?
For reference and comparison, here’s the sketchy one:
A likely fake Adaptec 39160 SCSI HBA
The PCB is not quite the same (ASSY 1817206-00 vs. 1817206-01) but it’s close enough. The real one has none of the sketchy labeling—it says “CH 2/B” and not “CH 27B” and so on.