Tracking Down a Bug

When I first encountered the Unix vi editor many years ago, I recoiled in horror. It was nothing like the editors I was used to—Borland IDEs, DOS 5.0/6.x EDIT, or OS/2 and Windows editors. But over the years, I learned to use vi.

It turns out that on many exotic *nix systems, vi is the only game in town. Sure, there might be some way to install pico or nano or whatever, but even if that is possible, it might be very difficult to get there without editing a couple of files… with vi. And when the system in question is some ancient XENIX or PC/IX or Microport UNIX or whatever, the effort of installing some other editor vastly outweighs the effort of learning vi—and that’s if the system in question even comes with a compiler.

Some time later I started using the vi editor that comes with the (Open) Watcom compilers. It’s a decent vi clone, not great but workable. It has the nice property that there are console versions for DOS, OS/2 (both 32-bit and 16-bit), and NT available (and *nix, too). The DOS version that ships with the compiler is a 32-bit DOS-extended version, but it’s also possible to build a 16-bit version, which will run on a 286 or even an 8086, and that’s occasionally useful.

Open Watcom vi/286, a 16-bit DOS version with EMS/XMS support
Open Watcom vi/os2, a 16-bit OS/2 version running in OS/2 1.1

I’ve been using the 16-bit DOS version of Open Watcom vi for some time, and it works well… except sometimes it hangs the system. But only sometimes and not very consistently. When it does hang, it usually happens when quitting the editor, but rarely also when starting it up.

I’ve never been able to track down the problem because it doesn’t happen frequently enough, and when it does, all I can see is that the system ends up in a corrupted state. Until yesterday.

Continue reading
Posted in Debugging, Development, DOS, Watcom | 14 Comments

First ROM Shadowing

The other day I was asked an interesting question: What was the first BIOS with support for ROM shadowing? In the 1990s, ROM shadowing was common, at first as a pure performance enhancement and later as a functional requirement; newer firmware is stored compressed in ROM and must be decompressed into RAM first, and firmware may also rely on writing to itself before being locked down and write protected.

Old PCs did not use ROM shadowing because it made no sense. ROMs were only marginally slower than RAM, if at all, and RAM was too precious to waste on mirroring the contents of existing ROMs. Over the years, RAM speeds shot up while ROM remained slow. By about 1990, executing BIOS code from ROM incurred a noticeable performance penalty, and at the same time devoting 64 or 128 KB to ROM shadowing was no longer prohibitively expensive.
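
The mechanics vary by chipset, but the basic sequence is always the same. As a conceptual sketch only (the chipset programming itself is vendor-specific and omitted here), assuming the chipset has been set up so that reads of the BIOS segment still go to ROM while writes land in the shadow RAM underneath:

    mov     ax, 0F000h
    mov     ds, ax          ; source segment: reads come from ROM
    mov     es, ax          ; destination segment: writes go to shadow RAM
    xor     si, si
    xor     di, di
    mov     cx, 8000h       ; 64 KB = 8000h words
    cld
    rep     movsw           ; copy the BIOS image onto itself
    ; finally, the chipset switches reads over to RAM and write-protects it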

But who did it first?

Continue reading
Posted in C&T, Compaq, IBM, PC architecture, PC history | 55 Comments

A wunderBAR Story

Or, what should be broken became solid, and what should be solid became broken.

While searching for something completely different, I came across a fascinating story of the “wunderBAR”. Since the story is very short, I’ll quote it here in full:

It seems that in the sixties, the intent was to have a vertical bar. But the keyboard standard was printed with a dirt on the camera-ready proof so that it was mass-reproduced as a broken bar. Then most manufacturer implementors did not read the book (…) and copied the drawings from the picture of the keyboard. And so we have this character for which nobody has a use, but many code tables contain it and compilers use its code point!

That’s a really interesting story! But… it doesn’t quite ring true. Then again, there could well be something to it, because no one quite seems to know what the broken bar is for.

In fact, the broken bar barely even exists anymore. In the days of DOS, the character used for the pipe symbol (on the DOS command line) or for logical OR (in C/C++, for example) was ASCII code 7Ch (124 decimal), rendered as a broken vertical bar by the fonts of at least the IBM MDA, CGA, EGA, and VGA cards. But nowadays that is no longer the case. The same ASCII code point is rendered as a solid vertical bar in Windows 10 or Linux, and it is shown as a solid vertical bar on contemporary keyboards as well. What happened?

Continue reading
Posted in Computing History, Corrections | 21 Comments

Hash Tables FTW

Over the last few weeks I did a bit of tinkering on a hobby software project. The code was written by two other people during the last several years and I finally managed to find some time to fix a couple of bugs.

The project’s goal is a functional clone of WGML 4.0, aka Watcom SCRIPT/GML, a text processor that came out of the mainframe SCRIPT tradition (which goes back to 1968 or so). WGML is used by the Open Watcom project to produce documentation in a large number of formats: Windows and OS/2 native help files, the homegrown Watcom help format, HTML, CHM, and last but not least PostScript/PDF.

Several slightly different WGML executables exist, running under either OS/2 or 32-bit extended DOS. Running those on modern systems is possible (via DOSBox and such), but quite painful, and it creates undesirable dependencies. The source code to WGML was presumably lost decades ago.

“Why don’t you just convert the documentation to some other format” is what a lot of people said, but when asked to show how, no one could, and the discussion was over. It’s actually not that easy. I’m sure it’s not impossible, but it’s much harder than it seems. As it turns out, SCRIPT includes a fairly flexible language with macros, variables (symbols), built-in functions, conditionals, and loops, and the documentation (several thousand pages) makes use of all of that.

So we’re looking at either a huge amount of extremely boring and mind-numbing work converting the documentation source files, or a huge amount of interesting and challenging work rewriting the WGML text processor. Given that no one is paid for it, only the latter is actually a realistic option.

Continue reading
Posted in Development | 4 Comments

Windows 3.x VDDVGA

While working on my Windows 3.x display driver, I ran into a vexing problem. In Windows 3.1 running in Enhanced 386 mode, I could start a DOS session and switch it to a window. But an attempt to set a mode in the DOS window (e.g. MODE CO80) would destroy the Windows desktop, preventing further drawing from happening properly. It was possible to recover by using Alt+Enter to switch the DOS window to full screen again and then returning to the desktop, but obviously that wasn’t going to cut it.

Oddly enough, this problem did not exist in Windows 3.0. And in fact it also didn’t exist in Windows 3.1 if I used the Windows 3.0 compatible VDDVGA30.386 VxD shipped with Windows 3.1 (plus the corresponding VGA30.3GR grabber).

There was clearly some difference between the VGA VDD (Virtual Display Driver) in Windows 3.0 and 3.1. The downside of the VDD is that its operation is not particularly well explained in the Windows DDK documentation. The upside is that the source code of VDDVGA.386 (plus several other VDD variants) was shipped with the Windows 3.1 DDK.

First I tried to find out what was even happening. Comparing bad/good VGA register state, I soon enough discovered that the sequencer register contents changed, switching from chained to planar mode. This would not matter if the driver used the linear framebuffer to access video memory, but for good reasons it uses banking and accesses video memory through the A0000h aperture.
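
For the curious, the chained/planar state can be read back through the standard VGA sequencer ports; a minimal sketch (the planar label is just illustrative):

    mov     dx, 3C4h        ; VGA sequencer index port
    mov     al, 04h         ; index 04h: Memory Mode register
    out     dx, al
    inc     dx              ; 3C5h: sequencer data port
    in      al, dx
    test    al, 08h         ; bit 3: Chain-4 enable
    jz      planar          ; clear = planar mode, set = chained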

But how could that even happen? The VDD is meant to virtualize VGA registers and not let DOS applications touch the real hardware. Something had to be very wrong.

Continue reading
Posted in 386, Development, Documentation, Graphics, PC history, Windows | 62 Comments

Learn Something Old Every Day, Part VII: 8087 Intricacies

The other day I investigated a report that a C runtime library modification causes programs to hang on a classic IBM 5150 PC with no math coprocessor. The runtime originally contained two separate routines, one to detect the presence of an FPU and the other to detect the FPU type.

Someone noticed that the code in the two routines looked really similar and decided to merge them. The reworked code runs just fine on 386 and later processors, with or without FPU (I’m unsure of its status on 286 machines). But it does not work on an FPU-less 8088; it causes the system to hang.

The old code looked like this:

    push  BP                 ; save BP
    mov   BP,SP              ; get access to stack
    sub   AX,AX              ; start with a preset value
    push  AX                 ; allocate space for ctrl word
    fninit                   ; initialize math coprocessor
    fnstcw word ptr -2H[bp]  ; store cntrl word in memory
    pop   AX                 ; get control word
    mov   AL,AH              ; get upper byte
    pop   BP                 ; restore BP

If the routine returned the value 3, a math coprocessor was found; otherwise there wasn’t one. (After FNINIT, the 8087 control word is 03FFh, so the upper byte read back is 03h; with no coprocessor present, nothing overwrites the zero preset that was pushed onto the stack.)

The new code looks like this:

        push    BP                  ; save BP
        mov     BP,SP               ; get access to stack
        sub     AX,AX
        push    AX                  ; allocate space for status word
        finit                       ; use default infinity mode
        fstcw   word ptr [BP-2]     ; save control word
        fwait
        pop     AX
        mov     AL,0
        cmp     AH,3
        jnz     nox87
        ...

It’s almost the same, but hangs on an 8088 without an 8087. Why does that happen?
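
One hint: per Intel documentation, the waiting and non-waiting forms of these instructions do not assemble to the same bytes. Compare the encodings (the operands here simply mirror the code above):

    finit                       ; 9B DB E3: WAIT (9Bh) prefix + FNINIT
    fninit                      ; DB E3: no WAIT prefix
    fstcw   word ptr [BP-2]     ; 9B D9 /7: WAIT prefix + FNSTCW
    fnstcw  word ptr [BP-2]     ; D9 /7: no WAIT prefix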

Continue reading
Posted in 8086/8088, Development, IBM, Intel, PC history, x87 | 10 Comments

A House of Cards

As one step in the development of the Windows 3.x/2.x display driver, I needed to replace a BIOS INT 10h call to set the video mode with “native” mode setting code that goes directly to the (virtual) hardware registers. One big reason is that the (VBE 2.0) BIOS is limited to a predefined set of resolutions, whereas native mode setting code can set more or less any resolution, enabling widescreen resolutions and such.
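
For reference, the BIOS path being replaced boils down to a VBE function 02h call; a minimal sketch (the mode number and error label are just examples):

    mov     ax, 4F02h       ; VBE function 02h: set video mode
    mov     bx, 0101h       ; e.g. mode 101h: 640x480, 256 colors
    int     10h             ; video BIOS call
    cmp     ax, 004Fh       ; AL=4Fh: supported, AH=00h: success
    jne     mode_fail       ; anything else means the mode set failed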

Replacing the code was not hard (I already had working and tested mode setting code) and it worked in Windows 3.1 and 3.0 straight away. When I got around to testing Windows 2.11, I noticed that although Windows looked fine and the mouse worked, the keyboard didn’t seem to be working. Windows was just completely ignoring all keyboard input.

No keyboard input for you!

Curiously, the letters I fruitlessly typed in Windows popped up on the DOS command prompt as soon as I quit Windows (which was not hard using a mouse). This indicated that the keyboard input was not exactly lost, but it was not ending up in the right place somehow.

After double and triple checking, I assured myself that yes, using native display mode setting code instead of the BIOS broke the keyboard in Windows 2.11 (but not in Windows 3.x). That was, to put it mildly, not an anticipated side effect. How is that even possible?!

Continue reading
Posted in Bugs, Development, Microsoft, Windows | 49 Comments

Win16 Retro Development

Several months ago I had a go at producing a high resolution 256-color driver for Windows 3.1. The effort was successful but is not yet complete. Along the way I re-learned many things I had forgotten, and learned several new ones. This blog entry is based on notes I made during development.

Windows 3.1 running Word in a usable resolution

Source Code and Development Environment

I took the Video 7 (V7) 256-color SuperVGA sample driver from the Windows 3.1 DDK as the starting point. The driver is written entirely in assembler (yay!), consisting of dozens of source files, over 1.5 MB in total size. This driver was not an ideal starting point, but it was probably the best one available.

The first order of business was establishing a development environment. While I could have done everything in a VM, I really wanted to avoid that. Developing a display driver obviously requires many restarts of Windows and inevitably also reboots, so at least two VMs would have been needed for a sane setup.

Instead I decided to set everything up on my host system running 64-bit Windows 10. Running the original 16-bit development tools was out, but that was only a minor hurdle. The critical piece was MASM 5.NT.02, a 32-bit version of MASM 5.1 rescued from an old Windows NT SDK. The Windows 3.1 DDK source code is very heavily geared towards MASM 5.1, and converting it to another assembler would have been a major effort, likely resulting in many bugs.

Fortunately MASM 5.NT.02 works just fine and assembles the source code without trouble. For the rest, I used Open Watcom 1.9 tools: wmake, wlink, and wrc (make utility, linker, and resource compiler). I used a floppy image to get the driver binary from the host system to a VM, a simpler and faster method than any sort of networking.

With everything building, the real fun started: Modifying the Video 7 driver to actually work on different “hardware”.

Continue reading
Posted in Debugging, Development, Microsoft, Windows | 27 Comments

Undefined Isn’t Unpredictable

The other day I discovered that 32-bit FreeBSD 11.2 has strange trouble running in an emulated environment. Utilities like ping or top would just hang when trying to print floating-point numbers through printf(). The dtoa() library routine was getting stuck in an endless loop (FreeBSD has excellent support for debugging the binaries shipped with the OS, so finding out where things were going wrong was unexpectedly easy).

Closer inspection identified the following instruction sequence:

    fldz                     ; push +0.0 onto the FPU stack
    fxch   st(1)             ; swap: the value to test is now in st(0)
    fucom  st(1)             ; compare st(0) against the zero in st(1)
    fstp   st(1)             ; pop the zero, keeping the tested value on top
    fnstsw ax                ; store the FPU status word in AX
    sahf                     ; copy AH into the CPU flags register
    jne ...                  ; ZF holds FPU condition code C3
    jnp ...                  ; PF holds FPU condition code C2

This code relies on “undefined” behavior. The FUCOM instruction compares two floating-point values and sets the FPU condition code bits. The FNSTSW instruction stores the bits into the AX register, where they can be tested directly; alternatively, the SAHF instruction copies them from AH into the flags register, where the bits can be conveniently tested by conditional jump instructions.

The problem is the FSTP instruction in between. According to Intel and AMD documentation, the FSTP instruction leaves the FPU condition codes in undefined state. So the FreeBSD library is testing undefined bits… but it just happens to work on all commonly available CPUs, in a very predictable and completely deterministic manner, because the FSTP instruction in reality leaves the condition bits alone. What is going on?

Continue reading
Posted in AMD, Development, Documentation, Intel | 22 Comments

Does (E)IP Wrap Around in 16-bit Segments?

The 8086/8088 is a 16-bit processor and offsets within a 64K segment always wrap around. If a one-byte instruction at offset FFFFh is executed on an 8086, execution will continue at offset 0. This is simply a consequence of the Instruction Pointer (IP) being a 16-bit register.

Funny things happen when an access crosses a segment boundary. On an 8086, it will also wrap around; accessing a word at offset FFFFh will access one byte at offset FFFFh and one byte at offset 0 of the same segment. Again, that is a consequence of 16-bit address calculations.

The 80286 got a lot smarter about this. Segment protection prevents accesses that wrap around the end of a segment, for both data and instructions. The 80386 continued using the same logic.

The 286 and 386 support one special case: stack wraparound. When the 16-bit Stack Pointer (SP) is zero, pushing (say) a word on the stack will wrap around and the new SP will be FFFEh. This feature was required for 8086 compatibility, because a full-size 64K stack needs to start with SP=0 (the pushes and pops must be aligned for the wraparound to occur; unaligned accesses will cause protection faults).
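
In code, the special case looks like this (a minimal sketch; on the 286/386 it only works with the aligned accesses described above):

    xor     sp, sp          ; SP = 0: an empty, full-size 64K stack
    push    ax              ; SP wraps around to FFFEh;
                            ; the word is stored at SS:FFFEh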

Does the instruction pointer also wrap around in a way similar to the stack segment?

Continue reading
Posted in 386, 8086/8088, Intel, x86 | 9 Comments