The floppy controller evolution

The floppy subsystem in PCs didn’t mutate over time quite as much as, say, the hard disk subsystem, but before its extinction in the early 21st century, the floppy disk controller (FDC) did evolve noticeably.

In the original IBM PC (1981) and PC/XT (1983), the FDC was physically located on a separate diskette adapter card. The FDC itself was a NEC µPD765A or a compatible part, such as the Intel 8272A. It’s worth mentioning that nearly all floppy controllers supported up to four drives, but systems with more than two drives were extremely rare. The reason was that only two drives were supported per cable, while 99.99% of all systems only provided a single floppy cable connector.

The original FDC only supported two I/O ports: the read-only Main Status Register (MSR) and the Data Register, used for both reading and writing. The adapter card added another port, the Digital Output Register (DOR), used primarily for drive selection and controlling drive motors. Continue reading

Posted in PC architecture | 14 Comments

Xenix 286 in a VM

The fixes which were included in VirtualBox 4.0.8 happen to help not only OS/2 1.x but also Xenix. The 386 versions of Xenix 2.3.x (not necessarily older versions!) should install in a VM without trouble, but the 286 versions are trickier. The reason for that is deceptively simple: different distribution media.

Xenix is one of the few operating systems which determine the format of diskettes based on the floppy drive type. In contrast, all operating systems which use DOS formatted floppies (including DOS, OS/2, and Windows) use the information stored on the diskette itself to determine its format. Continue reading

Posted in VirtualBox, Xenix | 10 Comments

Installing OS/2 1.x in a VirtualBox VM

Installing 16-bit OS/2 in a virtual machine ranges between “tricky” and “impossible”, depending on the version of OS/2 and virtualization software used. In VirtualBox 4.0.8, things have moved further away from “impossible” and closer towards merely “tricky”. Version 4.0.8 fixed problems with floppy emulation which were making installation difficult, but plenty of hurdles still remain. Most of those have little to do with virtualization and are simply a consequence of OS/2’s age—the PCs of the late 1980s were not quite like today’s PCs. Continue reading

Posted in OS/2, VirtualBox | 29 Comments

The Fixed Disk Parameter Table

The Fixed Disk Parameter Table, or FDPT, is a structure primarily used by the BIOS in IBM compatible computers, but is also of critical importance to some (especially older) operating systems which do not use the BIOS.

The FDPT was introduced in the IBM PC/XT out of necessity when implementing hard drive (aka fixed disk) support; however, I’ll skip the PC/XT specific details. The IBM PC/AT redefined the FDPT format, but the purpose remained the same: Define the physical characteristics of a hard disk so that the same BIOS ROM could be used with more than a single drive model. Most importantly, the FDPT contained the disk geometry—the number of cylinders, heads, and sectors per track. Continue reading

Posted in BIOS, PC architecture, Virtualization | 3 Comments

Geometry Problems

When introducing hard disk support in the PC/XT back in early 1983, IBM made a very unfortunate design decision: the information about drive geometry was exposed in the BIOS, and even worse, in the boot sector stored on the disk.

SCSI storage devices always used logical block numbers to access data, and drive geometry was something internal to a disk drive, not visible to the user. But SCSI was not yet fully established in the early 1980s and IBM was looking for a cheap technology. Therefore, simple MFM hard disks and controllers were used in both the IBM PC/XT and later the PC/AT. On the hardware level, both addressed disk sectors in terms of cylinders, heads, and sectors (CHS).

Ironically, the DOS FAT file systems never cared about geometry and only used logical sector (or cluster) numbers, with a low-level BIOS driver translating from logical sector numbers to whatever addressing scheme the underlying storage required.

Continue reading

Posted in BIOS, PC architecture, Virtualization | 2 Comments

The PC floppy subsystem

The PC floppy subsystem, ubiquitous and indispensable until the early 21st century, suffered the typical fate of many “legacy” subsystems: The initial design was adequate, but did not adapt to newer and more complex hardware.

With the original IBM PC, things were simple. The only choices were between single- and double-sided 5¼” double-density drives (the single-sided drives were soon obsoleted), and how many of them to install. The number of drives was configured via switches, and the number of sides was something the operating system could easily take care of when formatting a new disk. The NEC μPD765A floppy controller was used, setting the standard for years to come. Continue reading

Posted in PC architecture | Leave a comment

HIMEM.SYS, unreal mode, and LOADALL

The previous post talked about real mode on 286+ processors, which behaves more like a slightly modified variant of protected mode than like the mode of the old 8088/8086 processors. Real mode with non-compatible selector bases or limits is usually called unreal mode or big real mode. Even though Intel never clearly documented the behavior in detail, it is more or less set in stone thanks to our dear friend, backwards compatibility.

That’s because unreal mode had some rather prominent users—one of them was Microsoft’s HIMEM.SYS, used on nearly every PC in the 1990s. While disassembling HIMEM.SYS might be instructive, it would be a waste of time in this case, since Microsoft published the full source code to several HIMEM.SYS 2.x versions. That way, the evolution of the code can be easily traced. Continue reading

Posted in DOS, x86 | 4 Comments

Will the real Real Mode please stand up?

Every programmer familiar with the x86 architecture understands the difference between real and protected address mode of the processor. It is well known that real mode is compatible with Intel’s old 16-bit 8088/8086 CPUs, while protected mode was a new feature introduced in the Intel 80286 and extended to 32-bit in the 80386. The 80386 further muddled things by introducing the Virtual-8086 mode (V86 mode), which combined elements of both.

Not every programmer knows exactly what the difference is between real and protected mode on processors that support both. Intel’s documentation, frankly, does its best to obscure and confuse the issue (a situation not unfamiliar to readers of Intel’s processor documentation) without actually lying outright. Intel suggests that real mode implies 16-bit segments with 64KB limits, 8086 style addressing, limited instruction set, no paging, no privilege levels, and no memory protection. Continue reading

Posted in x86 | 5 Comments

Display Drivers, NT and NeXTSTEP

It is instructive to compare the OS/2 and 16-bit Windows display driver model with other operating systems. Why NT and NeXTSTEP? NT because it was Microsoft’s third take on a (mostly) PC operating system, and NeXTSTEP because it was an OS with a very different heritage, ported to the PC relatively late in its life, and eventually spawning Mac OS X.

The general NT driver model is unusually complex and over-engineered. However, NT display drivers use a significantly different and much simpler model. Drivers are split into two separate components, a video miniport and the display driver proper. The video miniport always runs in kernel context, while the display driver is a DLL which runs in user or kernel context, depending on the NT version.

The miniport is primarily responsible for hardware initialization and setting modes. If applicable, it also manages the hardware cursor and DAC palettes. The display driver implements accelerated drawing operations and corresponds to the old GDI drivers in its responsibilities, but not in implementation. Both the miniport and the display driver are typically written in C, and assembly code is almost unheard of. Continue reading

Posted in NeXTSTEP, NT | 2 Comments

Display Drivers, OS/2 and 16-bit Windows

Not surprisingly, the display driver models of Windows 1.x/2.x/3.x and OS/2 1.x/2.x were quite similar to each other. This was in sharp contrast to the drivers for just about every other device; while disks or network adapters already had existing drivers in DOS, the drivers for Windows GDI had to be written from scratch. When the Presentation Manager was created, Microsoft recycled much of the existing Windows driver model.

In both OS/2 and Windows, displays and printers used the same basic driver model. The driver model was designed with a C-callable interface, but traditionally only printer drivers were written in C, while display drivers were written in assembler.

Viewed from a modern perspective, the Windows and OS/2 driver model was byzantine, overly complex, with drivers being difficult to write and nearly impossible to fully debug. The drivers could implement nearly the entire graphics engine, and even the most basic driver had to implement significant functionality. This included rendering memory bitmaps (device contexts), which seems bizarre in hindsight. Continue reading

Posted in OS/2, Windows | Leave a comment