ICAT and Watcom

Some time ago I embarked on a minor retro development project related to this post. For convenience, I decided to do the development on a Windows host machine and (obviously) test in a VM. For compiling and linking the C code, I used the Open Watcom tools, and for the assembler code, MASM 5.1 (because nothing else likely works anyway).

The initial phase went well and I only needed the basic OS/2 kernel debugger (KDB) to analyze a few problems. But then I started toying with the idea of reworking the code base a bit, and that would really benefit from understanding the code better. Tracing code in a debugger is an excellent way of learning how it works. But KDB is definitely not up to that task, or at least not for code written in a language other than assembler.

The problem with KDB is that it’s a rough equivalent of the ancient SYMDEB debugger. It understands symbols (that is, it can match names with addresses), but it can’t map code locations to source files, and worse, it has no idea about complex types (structures, arrays, etc.). Analyzing the code and data manually is possible, by matching code and data offsets with source and header files, but in the end it is a very poor use of time that could be spent much more productively.

IBM ICAT Debugger

Back in 1996 or 1997, IBM released a debugger that could take care of this. ICAT (Interactive Code Analysis Tool) is a remote kernel debugger for OS/2, roughly similar to WinDbg: it runs on a host (OS/2) machine while debugging a target OS/2 system with the kernel debugger installed.

The ICAT debugger running on OS/2

While the standard KDB simply drives a serial terminal, ICAT uses a custom packet protocol to communicate with the target system. This protocol was added to the OS/2 kernel debugger in Warp 4 and also integrated into the later Warp 3 FixPacks.

Posted in Debugging, Development, IBM, OS/2, Watcom | Leave a comment

When Networking Doesn’t Work

Last week I spent far too much time trying to get my Windows 11 machine to talk to an antique Tyan SMDC (Server Management Daughter Card) IPMI module over the network.

At first, I tried Tyan’s own old TSO (Tyan System Operator) software in a Windows XP VM using NAT networking. The software installed without incident but was not able to discover any IPMI-enabled servers. All my attempts to manually set up communications to the server also failed. I also tried native Windows software (ipmiutil) with an effectively identical result — the SMDC would not respond.

Next I tried my old PC running Windows 10. It behaved the same — ipmiutil refused to talk to the SMDC. At this point, I strongly suspected that the SMDC was not set up right, but just to be sure, I rebooted the old system to Linux and tried ipmitool. Lo and behold, ipmitool on Linux could talk to the SMDC! I set up a similar Windows XP virtual machine on the Linux host and guess what, the TSO software worked fine as well.

All my attempts to disable Windows firewalls and such on the Windows hosts made zero difference. In desperation, I ran Wireshark on the Windows 11 system and established that yes, there is outgoing UDP traffic (to port 623 on the SMDC), and what’s more, the machine is in fact receiving replies from the SMDC! But somehow Windows was eating up the incoming UDP packets and software running on the machine never saw anything. That was, to put it mildly, suspicious.

Posted in Bugs, Intel, IPMI, Networking, PC hardware, TCP/IP | Leave a comment

Learn Something Old Every Day, Part XX: 8087 Emulation on 8086 Systems

Not too long ago I had a need and an opportunity to re-acquaint myself with the mechanism used for software emulation of the 8087 FPU on 8086/8088 machines.

As mentioned elsewhere, the 8086 CPU (1978) had a generic co-processor interface utilized by the Intel 8087 FPU (1980), initially called the Numeric Processor Extension or NPX. And although the interface was generic, the 8087 was likely the only chip which could actually use it.

The 8087 was a somewhat expensive add-on, assuming that a given system actually had a socket to plug the 8087 into (IBM PCs did, but other 8086/8088 systems did not necessarily have one). There was a largish class of software which could significantly benefit from the 8087 (e.g. spreadsheets), but in the era of shrink-wrapped software, there was a significant incentive to ship software which could use an 8087 when present, yet would still run on a bare 8086/8088 machine with no FPU.

There was also a desire to develop and test floating-point software without having to install an 8087 into every system. Given the initial limited availability of 8087 chips, it was in Intel’s best interest to give developers a way to write 8087 software without requiring 8087 hardware.

Intel released the E8087 software emulation package together with the 8087 chip. This is evidenced by the original Numerics Supplement to The 8086 Family User’s Manual from July 1980, Intel order no. 121586-001. The Numerics Supplement outlines how the E8087 package works. Actually there were two packages — the full E8087 library, and also a “partial” PE8087 library which implemented just enough functionality for Intel’s PL/M language tools. Intel’s PL/M compiler was the first high-level language translator capable of utilizing the 8087.

Because the 8086 had no facility for emulating an FPU (unlike the 80286 and later processors), the emulation mechanism was somewhat complex and required tight cooperation of assemblers/compilers, linkers, and run-time libraries.

Posted in 8086/8088, Development, Intel, LSOED, Microsoft, x87 | 19 Comments

Learn Something Old Every Day, Part XIX: Athlon XP May Be Athlon MP

Quite a while ago, I acquired a dual-socket (Socket 462 aka Socket A) board for the Athlon MP, AMD’s first entry into the multi-processor/multi-socket market. Over the course of several years, I spent quite some time searching for the board in my basement, to no avail. Until a few weeks ago I finally found it… while looking for something else, of course.

The board is a Tyan Thunder K7X Pro (S2469). It’s a nice board designed for 1U servers (it has angled DIMM brackets). The board supports up to 4GB registered PC2100 DDR DRAM (with or without ECC), it has three regular and two 64-bit PCI slots, and an AGP Pro slot. There’s an onboard ATI Rage XL graphics chip and Intel 10/100 as well as Intel Gigabit Ethernet. No onboard SCSI, which was a manufacturing option. There’s also an onboard ATA-100 controller with two channels.

The board interestingly supports both EPS12V and regular ATX (20 + 4 pin) power supplies. That is somewhat important because earlier Tyan Athlon MP boards (e.g. the very similar S2468) require ATX-GES power supplies, which are neither standard ATX nor EPS12V. And I have too many oddball PSUs already.

Unfortunately, Athlon MP processors are nowadays rather difficult to find at a reasonable price or at all, and they are even more difficult to find in pairs. So I ended up with one Athlon MP 1800+ and also several Athlon XP 1800+ processors.

Athlon XP 1800+, or is it really Athlon MP?

It is well known that Athlon XP and Athlon MP processors used the exact same core, and it was possible to convert Athlon XP CPUs to the MP variant by restoring a bridge that was laser-cut during manufacturing. This hack has been known since at least 2003.

Posted in AMD, K7, LSOED, PC hardware, PC history | 8 Comments

Mystery CPUID Bit

Yesterday I had the opportunity to test a recently acquired Athlon 1200 CPU (Thunderbird core, ceramic PGA package). I dreaded the first boot-up attempt because I have had a rather bad experience with slightly newer Palomino and Thoroughbred OPGA processors—a surprisingly high percentage of them were DOA.

AMD Athlon Model 4 (Thunderbird), 2001

But the Thunderbird Athlon (two of them actually) with OPN A1200AMS3C sprang to life and worked just fine. While running some basic tests on the CPU, I noticed that it has a completely unknown CPUID bit set, specifically bit 18 in register EDX of CPUID leaf 80000001h.

The 8000xxxxh CPUID range was originally AMD specific and, among other things, used to indicate support for the 3DNow! instruction set in the AMD K6 processors.

When Intel introduced AMD64 support, they were more or less forced to support a subset of the 8000xxxxh CPUID range as well, for compatibility with existing AMD64 software. In any case, the Athlon CPU in question is from 2001 and pre-dates Intel’s x64 processors by several years.

The usually very reliable sandpile.org lists EDX bit 18 as “reserved”. I went through available AMD CPUID documentation but nope, bit 18 is listed as “Reserved on all AMD processors” in all the documents I could find.

Posted in AMD, K7, PC hardware, Undocumented | 20 Comments

Learn Something Old Every Day, Part XVIII: How Does FPU Detection Work?

This post ended up being much longer than originally intended because halfway into writing it, I found that 286 and later CPUs don’t behave the way I had assumed they would…

While investigating a bug related to a program using floating-point math on a 386SX system with no FPU, I started pondering how exactly FPU detection works on 286 and newer CPUs. Although math co-processors became standard some 30 years ago, on old PCs they were an uncommon and expensive add-on; even in the mid-1990s, the 66 MHz 486SX2 was a usable yet FPU-less processor.

The CPU/FPU interface and FPU detection on the 8086/8088 was discussed before. To recap, the 8086/8087 interface is a little odd because it is in fact a generic co-processor interface. The 8086 was launched in 1978; probably sometime in 1979, the Intel 8089 I/O Coprocessor arrived; the 8087 only appeared in 1980.

The ESC instruction (opcode range D8h-DFh) was used for communication with a co-processor on the 8086. While the CPU didn’t exactly execute the instruction, it had to know how to decode it. The ESC instruction used a standard ModR/M byte to indicate an optional memory operand, which the CPU needed to be able to write to or read from the co-processor.

If there is no co-processor attached to an 8086, the ESC instructions simply do nothing because the co-processor isn’t there to read or write any data. However, the WAIT instruction designed for synchronization will (in a typical 8088/8086 PC design) hang indefinitely because the missing co-processor acts as if it were permanently busy. For that reason, FPU detection must use the non-waiting FNINIT/FNSTSW sequence (or an equivalent) to avoid hangs on 8086-class machines.

Additional information about what things look like from the 8087’s perspective has been recently published.

Posted in 286, 8086/8088, LSOED, x86, x87 | 39 Comments

Bitfield Pitfalls

Some time ago I ran into a bug that had been dormant for some time. The problem involved expressions where one of the operands is a bit-field.

To demonstrate the problem, I will present a reduced example:

#include <stdio.h>
#include <inttypes.h>

typedef struct {
    uint32_t    uf1 : 12;
    uint32_t    uf2 : 12;
    uint32_t    uf3 : 8;
} BF;

int main( void )
{
    BF          bf;
    uint64_t    u1, u2;

    bf.uf1 = 0x7ff;
    bf.uf2 = ~bf.uf1;

    u1 = bf.uf1 << 20;
    u2 = bf.uf2 << 20;

    printf( "u1: %016" PRIX64 "\n", u1 );
    printf( "u2: %016" PRIX64 "\n", u2 );

    return( 0 );
}

The troublesome behavior is demonstrated by the lines performing the left shift. We take a 12-bit wide bit-field, shift it left by 20 bits so that the high bit of the bit-field lines up with the high bit of uint32_t, and then convert the result to uint64_t.

The contents of u1 will be predictable. The contents of u2 perhaps not so much. Or more specifically, the resulting value of u2 depends entirely on who you ask.

Posted in C, Development, Standards | 4 Comments

DOS Memory Management

The memory management in DOS is simple, but that simplicity may be deceptive. There are several rather interesting pitfalls that programming documentation often does not mention.

DOS 1.x (1981) had no explicit memory management support. It was designed to run primarily on machines with 64K RAM or less (the original PC could not have more than 64K RAM on the system board, although RAM expansion boards did exist). A COM program could easily access (almost) 64K of memory when loaded, and many programs didn’t even rely on having that much. In fact the early PCs often had only 64K or 48K RAM installed. But the times were rapidly changing.

DOS 2.0 was developed to support the IBM PC/XT (introduced in March 1983), which came with 128K RAM standard, and models with 256K appeared soon enough. Even the older PCs could be upgraded with additional RAM, and DOS needed to have some mechanism to deal with that extra memory.

The DOS memory management was probably written sometime around summer 1982, and it meshed with the newly added process management functions (EXEC/EXIT/WAIT)—allocated memory is owned by the current process, and gets freed when that process terminates. Note that some versions of the memory manager source code (ALLOC.ASM) include a comment that says ‘Created: ARR 30 March 1983’. That cannot possibly be true because by the end of March 1983, PC DOS 2.0 was already released, and included the memory management support. The DOS 2.0 memory management functions were already documented in the PC DOS 2.0 manual dated January 1983.

Posted in Development, DOS, Microsoft, PC history | 25 Comments

Cracking DXP and SXD

There are situations where software is available only in the form of a floppy image. This is especially true of historic hardware drivers and patches, which were often distributed exclusively as floppy images; the method was quite popular with large OEMs like IBM or Compaq.

Initially, floppy images were distributed as data files, with a separate program required to write them onto a physical diskette. Typically, such programs could only write the image to a floppy and had no ability to extract individual files from the file system on the floppy image (more or less universally the FAT file system). IBM’s LOADDSKF utility is one example of such a program.

Around 1990, someone realized that the program to restore an image onto a physical floppy could be small enough (5-20 KB) that self-extracting floppy images were feasible, similar to self-extracting archives. Compared to the size of a high-density floppy, the size of a self-extracting stub was negligible. Especially for software that fit on 1-2 floppies, it was far simpler to publish 1-2 self-extracting floppy images than to provide a separate utility and documentation on how to use it. Self-extracting floppy images also tended to be self-explanatory, so separate documentation was not necessary.

Posted in Archiving, Development, Floppy Images | 4 Comments

A 100 Year Old Consul Typewriter?

Spurred by a discussion about Polish keyboard layouts, I tried to find more about the history of Czech keyboard layouts. Unfortunately, finding actual documents turned out to be very difficult.

What I did find is that prior to the current Czech keyboard layout standard (ČSN 36 9050, published in 1994), typewriter keyboard layouts were governed by ČSN 17 8151 from March 1974, titled “Psací stroje. Klávesnice s latinkou, česká a slovenská mutace.” (Czechoslovak State Norm 17 8151: Typewriters. Keyboards with Latin script, Czech and Slovak variants.). There was also ČSN 17 8152 specifying Cyrillic layouts (quite uncommon). Computer keyboards, unsurprisingly, tended to closely match typewriter layouts.

Prior to that, since about 1953, typewriter keyboards were specified by ČSN 01 6906 (Czech and Slovak layouts) and ČSN 01 6907 (Cyrillic). These supposedly replaced ČSN 1408 from 1949. I have not been able to find out anything about the content of these standards, or if there was any attempt to standardize Czech typewriter layouts before 1949. I am not even entirely sure that ČSN 1408–1949 really existed.

However, I did remember that my family still owns my grandfather’s typewriter that must have been made sometime before WWII, perhaps in the 1930s. It’s a portable typewriter in a wooden case, and research showed that it is in fact a well known model… mostly.

A 1926 Remington Portable with Czech-German keyboard

The typewriter is quite clearly a Remington Portable No. 2, easily recognizable from the shiny type guards which need to be raised together with the type bars when the typewriter is prepared for operation. Even better, Remington serial numbers are well documented and the typewriter’s serial number (NE61083) indicates that it was made in August 1926, a century ago.

But then there were questions that I had no answers for. Did Remington really make Czech typewriters in the US? If not, how did the typewriter get Czech types? And what’s with the Consul label?

Posted in Computing History, Keyboard, Typewriter | 33 Comments