Although WordStar was long suspected to be the reason (or at least one of the major reasons) for implementing the A20 gate hardware on the PC/AT, and for all the associated problems later on, it is now all but certain that this was not the case.
To recap, the earliest versions of WordStar for the IBM PC were 3.02 (probably April or May 1982), and 3.20 (likely Summer 1982). Whatever version 3.02 did or didn’t do, it was not compatible with PC DOS 1.1 or later, and thus could not have been relevant when the PC/AT was being designed. WordStar 3.20 has now been examined and found not to use the CALL 5 system call interface or do anything else that would cause problems on the PC/AT. WordStar 3.2x did use the word at offset 6 in the PSP to query the available memory, but not the call at offset 5.
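As an aside, the offset 6 query is trivial to demonstrate. Here is a minimal sketch, assuming a 16-bit DOS compiler (e.g. Open Watcom or Turbo C) that provides _psp and MK_FP in <dos.h>; it is illustrative only, not WordStar’s actual code:

    /* Read the CP/M-style "bytes available in this segment" word at
     * PSP:0006. The word is the offset operand of the far CALL at
     * PSP:0005; DOS picks a value that doubles as the usable byte
     * count within the segment. */
    #include <stdio.h>
    #include <dos.h>

    int main(void)
    {
        unsigned int avail = *(unsigned int far *)MK_FP(_psp, 6);
        printf("Bytes available in segment: %u (0x%04X)\n", avail, avail);
        return 0;
    }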
Then it turned out that a crucial piece of evidence had been hiding in (almost) plain sight all along. Richard Wells highlighted U.S. Patent 4,779,187, “Method and operating system for executing programs in a multi-mode microprocessor” by Gordon Letwin. The filing date of the patent is April 10, 1985, less than a year after the IBM PC/AT was introduced, when these sorts of problems would still have been fresh in memory.
The patent contains the following text: Some programs written for the 8086 rely on [address wrap-around] to run properly. Unfortunately, memory locations extend above 1 megabyte in the real mode of the 80286 and are not wrapped to low memory locations. Consequently, programs including those written in MicroSoft PASCAL and programs which use the “Call 5” feature of MS-DOS will fail on the standard 80286 system.
Microsoft Pascal, huh? Two paragraphs later, Pascal is mentioned again, explaining how one might work around the problems: For example, no PASCAL programs are loaded into memory below 64K, and a special instruction is placed in the lower memory locations above 1 megabyte–for example, address 100000h or 100010h.
So… Pascal programs might have trouble when loaded below 64K? What does that have to do with the A20 line? A lot, it turns out.
Too Clever By Half
A Pascal compiler (IBM Pascal 1.0, supplied by Microsoft) was part of the first batch of software packages available when the IBM PC was announced in 1981. It was also used to build commercial software, including Microsoft/IBM MASM and the Pascal compiler itself.
The early versions of MS/IBM Pascal used a memory model which might be called “mostly small”, with separate code and data, and optional far code segments. Heap, stack, and data were all located in a single physical data segment (DGROUP, naturally up to 64K).
There were certain implementation details which can only be described as “baroque”. MS Pascal had a heap growing from the bottom and a stack growing from the top. Interestingly, the stack size did not have a fixed limit, and as long as there was space in the middle, both the stack and the heap could grow. So far so good.
The problem was that statically allocated data and constants were placed at the top of the data segment, rather than at the bottom. The Pascal runtime start-up code tried to use up to 64K of memory for the data segment, and copied the static data from wherever they were loaded as part of the EXE image into their final location. The layout was very helpfully illustrated in the IBM Pascal manual (August 1981) on page 2-32.
Because the source and destination might overlap, the copy had to be done in reverse direction (from high to low addresses). Because the data to be copied was always at the top of the data segment, the copying (REP MOVSW) started at offset 65534 and continued downward.
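The effect of that downward copy can be sketched in C (illustrative names, not the actual run-time code). The key point is that with the destination above the source, a high-to-low copy is safe even when the two regions overlap:

    /* Sketch of the MS Pascal start-up copy, mimicking STD + REP MOVSW
     * with SI = DI = 0xFFFE: copy the static data word by word, from
     * the top of the 64K segment downward. */
    void copy_static_data(unsigned char *dst,  /* final DGROUP location */
                          unsigned char *src,  /* data as loaded in the EXE */
                          unsigned int words)  /* static data size in words */
    {
        unsigned long i = 65534;     /* both offsets start at 0xFFFE */
        while (words--) {
            dst[i]     = src[i];     /* one 16-bit word, low byte... */
            dst[i + 1] = src[i + 1]; /* ...and high byte */
            i -= 2;                  /* then move downward */
        }
    }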
So what happened if the Pascal-compiled executable was loaded such that the end of the data to be copied was below 64K? Why, of course, the segment register was “negative” and relied on address wrap-around to access the data!
This caused one avoidable and one unavoidable problem. The avoidable one was the copy itself: it could have been rewritten such that the segment register pointed at the lowest location of the data to be copied, with the offset adjusted accordingly. That would only have made the start-up code slightly more complicated.
A worse problem was that if there wasn’t enough memory in the system (and remember, the IBM PC was available in configurations with less than 64K RAM total!), the bottom of the data segment would still be “below zero”, and DS had to be “negative” and rely on address wrap-around. That would have been much more difficult to solve, because it would have required applying additional relocations to code and data, and the Pascal run-time was not equipped to deal with that.
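To make the “negative” DS concrete, here is a small C model of how the same segment:offset pair resolves on an 8086, which truncates physical addresses to 20 bits, versus a 286 in real mode, which does not (the values are illustrative):

    #include <stdio.h>

    unsigned long phys_8086(unsigned int seg, unsigned int off)
    {
        return ((unsigned long)seg * 16 + off) & 0xFFFFFUL; /* wraps at 1MB */
    }

    unsigned long phys_286(unsigned int seg, unsigned int off)
    {
        return (unsigned long)seg * 16 + off; /* can reach just above 1MB */
    }

    int main(void)
    {
        /* To address physical 0x004FE with offset 0xFFFE, DS must be the
         * "negative" value 0xF050: 0xF050 * 16 + 0xFFFE = 0x1004FE. */
        unsigned int ds = 0xF050, off = 0xFFFE;
        printf("8086: %05lXh\n", phys_8086(ds, off)); /* 004FEh, as intended   */
        printf("286:  %06lXh\n", phys_286(ds, off));  /* 1004FEh, wrong memory */
        return 0;
    }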
That is exactly why the Letwin patent says that the problem could be avoided by not loading Pascal programs below 64K. The minimal PC/AT configuration was 256K RAM, so it would have been theoretically doable—the difficulty would have been in detecting such programs.
And this is also likely why later language run-times were designed such that any static data were placed at the bottom of the data segment, with heap and stack above (rather than below) the static data. Then there is no need to copy static data at load time, and no need to address DGROUP in a way that requires address wrap-around.
To be absolutely clear, relying on address wrap-around was not some kind of a bug in the Pascal run-time; it was entirely intentional. How do we know that? Because Microsoft/IBM were kind enough to supply the start-up source code. The comments are unambiguous: “DX is final DS (may be negative)” and “final DS value (may be negative)”.
When Is a Bug a Bug?
The exploitation of address wrap-around, together with an unrelated signed comparison bug, raises an interesting philosophical question. Is software buggy when it fails in an environment that it was not written for, was never tested with, and which didn’t even exist when the software was written?
If the answer is “yes”, then arguably all software ever written is buggy, including the simplest hello world programs. It is always possible to change the environment in ways the software never anticipated.
If the answer is “no”, then we must accept that Microsoft Pascal was not buggy but merely odd. In 1981, it used an artifact of the 8086 architecture without any ability to predict that the artifact would go away in 1982 (when the 80286 was introduced). Likewise, it used a signed comparison for memory size which failed on systems with more than 512K RAM… at a time when a beefy IBM PC had 128K RAM.
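The signed comparison failure is easy to model: free memory counted in 16-byte paragraphs is an unsigned 16-bit quantity, but 512K corresponds to 0x8000 paragraphs, which is negative when interpreted as a signed 16-bit value. A hypothetical sketch, not the actual Pascal or WordStar code:

    #include <stdio.h>

    int main(void)
    {
        unsigned int sizes[] = { 0x2000, 0x8000, 0xA000 }; /* 128K, 512K, 640K */
        int i;

        for (i = 0; i < 3; i++) {
            short mem = (short)sizes[i];   /* the bug: treating it as signed */
            printf("%4uK (0x%04X paragraphs): enough for 64K? %s\n",
                   sizes[i] / 64, sizes[i],
                   mem >= 0x1000 ? "yes" : "no (signed compare fails)");
        }
        return 0;
    }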
The A20 Gate
Thanks to the patent, we know that Microsoft/IBM Pascal was a notable reason for preserving address wrap-around, and hence for the A20 gating logic it necessitated. On a 386, it might be possible to run the CPU in Virtual-8086 mode and use paging to simulate the wrap-around (which was in fact done in some contexts; more on that later). But that was no help with the 286-based PC/AT, which predated the 386 in any case.
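The paging trick, as used later by V86-mode memory managers, amounts to aliasing the first 64K above 1MB back onto the bottom of physical memory. A conceptual sketch, assuming a page table covering the linear range starting at 1MB (a full V86 monitor around it is of course required to make this usable):

    #define PAGE_PRESENT 0x001UL
    #define PAGE_RW      0x002UL
    #define PAGE_USER    0x004UL

    /* Map the 16 4K pages at linear 0x100000..0x10FFFF onto physical
     * 0x000000..0x00FFFF, so that wrapped-around accesses land in low
     * memory just as they would on an 8086. */
    void map_wraparound(unsigned long *pt_1mb) /* PT entries for 1MB and up */
    {
        unsigned int page;
        for (page = 0; page < 16; page++)
            pt_1mb[page] = ((unsigned long)page << 12)
                         | PAGE_PRESENT | PAGE_RW | PAGE_USER;
    }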
In a way, the Pascal run-time created the worst possible problem for the PC/AT designers. Not only were known commercial applications written in Pascal affected, but also an unknown and unknowable number of user-written applications built with the compiler. Nor was the problem confined to one clearly delineated interface (CALL 5); it lurked in fairly random code which relied on wrap-around to address arbitrary data.
While a new version of DOS might have solved the issue by forcing Pascal applications to be loaded above 64K (as the patent suggested), that was much easier said than done. By 1984, there were already several variants of the Pascal start-up code in the wild, and that is only counting official IBM/MS products. The reason the start-up source code was distributed was so that users could modify it, which meant that there was an unknown and unknowable number of modified variants of the start-up code out there. And even if all such code could be detected, software workarounds would still not have helped with any existing bootable floppies using pre-PC/AT versions of DOS (i.e. DOS 1.x/2.x).
In the end, implementing the A20 gate was the only safe choice. Initially it caused very little trouble because it was always turned off prior to booting an OS (as long as it was really turned off). The problems started a few years later, when DOS extenders came on the scene (circa 1988) and faced the unpleasant reality that there was no BIOS interface to control the A20 gate—and some not-so-PC-compatibles implemented A20 hardware controls which worked nothing like the IBM PC/AT. The problems were exacerbated in DOS 5.0 days (1991) when core DOS could be loaded into high memory and the A20 gate control really mattered.
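For reference, the classic PC/AT way to flip the gate goes through the keyboard controller. A heavily simplified sketch, assuming a 16-bit DOS compiler with inp()/outp() in <conio.h>, and omitting the timeouts, interrupt masking, and the numerous non-AT-compatible variants (PS/2 port 92h, chipset registers) that real code had to handle:

    #include <conio.h>

    static void kbc_wait(void)
    {
        /* Wait until the KBC input buffer is empty (status bit 1 clear). */
        while (inp(0x64) & 0x02)
            ;
    }

    void enable_a20_kbc(void)
    {
        kbc_wait();
        outp(0x64, 0xD1);  /* command: write the KBC output port */
        kbc_wait();
        outp(0x60, 0xDF);  /* output port value with the A20 bit (bit 1) set */
        kbc_wait();
    }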
Responsible Parties
It’s clear that the company most responsible for all the A20 gate trouble was Microsoft, although the ultimate decision no doubt lay with IBM. The actual person responsible for the Pascal start-up code may have been Bob Wallace, one of the first Microsoft employees and later the author of PC Write (written in Pascal); that is only speculation though.
In the 1990s, the most common troublemaker related to the A20 gate was EXEPACK (another Microsoft tool), or rather DOS executables packed with EXEPACK. However, it has now been established beyond reasonable doubt that—ironically—EXEPACK was created well after the A20 gate was already in place on the PC/AT.
More about EXEPACK in a follow-up post, and more about all the pain the A20 gate caused years later in another follow-up post.
Yeah, even the original PC/AT only had 512K of RAM on the motherboard, using stacked 128K DRAM chips. It was released at the same time as the Mac 512K, which used 256K DRAM chips, and you can tell that IBM was more conservative.
And to be honest, I think the fall in DRAM prices in 1985 probably took everyone by surprise. For example, even NetWare 286 was not released until 1986.
WordStar should still be considered the reason for Call 5 requiring address wrapping. Fixing Call 5 to use a direct low address works fine, but that breaks WordStar 3.3, and WordStar 3.3 was still an important enough application that breaking it would have killed DOS 5. Other programs that used the same technique to determine segment size were gone by the late 80s, and a more accurate explanation would take pages. Admittedly, it would have saved MS a lot of mockery for the roundabout way Call 5 did its thing.
OK, I see what you’re saying. Yes, there’s a good chance that WS 3.x was one of the bigger reasons why CALL 5 continued to be done the way it was initially done, even though WordStar itself did not issue CALL 5. I still think there were alternatives not requiring wrap-around. But once the decision was made that MS Pascal required wrap-around, there was no reason to change the CALL 5 implementation.
DOS 5 was developed in 1990 and released in 1991, so I doubt that WordStar 3.3 really mattered at that point. As for CALL 5, besides WordStar 3.02 there is no software that is known to use it. As I recall, EXEPACK was the most notable use of the address wraparound.
WordStar mattered. It was tested by enough beta sites that MS included a note that DOS 5 was being fixed to run WordStar. MS-DOS Small C uses Call 5; it is what caused FreeDOS to implement their Call 5 interface correctly. Obviously, there were also whatever programs Tim Paterson converted that led to the discovery of the wrapping technique.
I think there were more programs that used both Call 5 and the byte-6 segment size, based on tech notes in Desqview. The PSP values included the size of the memory allocation from the DVP file. There was a suggestion to increase the memory allocation block to 64K for COM programs that crash, which would fit with a program requiring address wrapping for Call 5. Desqview’s design makes it unlikely for any of the other troublesome programs to fall within the first 64KB.
The problem with figuring out what other programs went through the CP/M-80 to DOS-86 automatic conversion or adopted the CP/M-86 memory size check is that those are early versions. By the time DOS 5 rolled around, these applications would have been unmaintained for about 8 years. I have been looking at the old PC/Blue shareware archive to see if any of the MS-DOS COM programs ported from CP/M-80 did this.
WordStar mattered, but not version 3.3 from 1983, as it was out of date. WordStar 5.5 was current when DOS 5 was being developed, and WordStar 6.0 came out in 1991.
Small C? I’m pretty sure Microsoft wouldn’t have cared about that at all (pretty much no one else did either).
WordStar 3.02 is supposed to be the primary example of an old DOS program which used CALL 5. Anything written specifically for DOS, even DOS 1.x, would have been written to take advantage of the 8088 segmented architecture to access more than 64KB, and would almost certainly use the native INT 21h, so it is highly unlikely there is much DOS software out there which used CALL 5. Even the first DOS version of VisiCalc uses INT 21h.
I think people ran loads of old software versions back in the day just because there weren’t any really good reasons to upgrade. Some other software, or broken hardware, could have been the reason for upgrading DOS, possibly together with a new computer. Therefore it seems entirely reasonable for people to have run WordStar 3.x on DOS 5.
Yuhong Bao: 128kbit DRAM chips? That sounds like an unusual size. Are you sure that you aren’t referring to 64kbit DRAM chips? (Enough 64kbit chips to match a 16-bit data bus means 128kbytes.)
They were two 64K DRAM chips stacked together, used only on the first 5170 PC/AT motherboard.
WordStar 3.3 still has a big fan base. By the time MicroPro got NewWord and turned it into WordStar 4, other word processors were much more advanced. The upgrade rate for WordStar was modest. Conversely, with a lot of managers having WordStar 3.3, it was an easy choice for testing compatibility with new versions of DOS. If it failed, why bother checking whether DOS 5 was compatible with obscure backroom programs?
More evidence that somewhere out there exists an important application using Call 5 is the 1991 upgrade of the Novell shell. Only major customers prompt fixes like these.
o Corrected “call 5” functions for programs ported from CP/M to DOS.
ftp://ftp.mpoli.fi/unpacked/hardware/net/novell/dosup8.zip/_dosup8.exe/history.doc
“Call 5” is a poor term to use in web searches. It turns up an endless assortment of poorly OCRed advertising, but not magazine articles. The Waite Group MS-DOS Developer’s Guide covers it. I expect there is another developer guideline out there that suggests the use of the old CP/M functionality in more modern code. I just want to find it to know why, since I doubt I will find the actual program.
Shame that Novell didn’t mention the specific software using CALL 5.
For completeness, the DOS 5 and WordStar reference comes from the MS-DOS 5.0 build 224 beta README, dated June 12, 1990: “Due to a known problem, some older versions of WORDSTAR don’t work correctly with this pre-release version DOS 5.0. We know what the problem is, but the fix was not incorporated in time for this beta release. WORDSTAR 2000 seems to work fine with DOS 5.0.”
Microsoft did fix that in the following betas. So WordStar 3.x was definitely still a thing in 1990/1991, no matter how outdated.
It’s actually not known that WS 3.02 used CALL 5. There are rumors to that effect, but there are lots of rumors out there. As far as I know, no one really examined WordStar 3.02. All we reliably know about that version is that it didn’t work with PC DOS 1.1 and was swiftly replaced with version 3.2x.
EXEPACK was indeed probably the #1 user of address wrap-around, even though it was developed in the early PC/AT days.
The Small C thing is new though, from the 2000s. Irrelevant to this discussion, really.
Address wrapping is something that Tim Paterson was well aware of and deliberately exploited for the CALL 5 implementation. Same with MS Pascal: address wrapping was used intentionally. But it wasn’t always intentional, for example with EXEPACK. In certain algorithms wrap-around could happen as a side effect, without the author being aware of it.
@dosfan
The world was moving more slowly back then. Many users who bought PCs in the mid-80s as a typewriter replacement and for casual gaming didn’t replace the PC and the software that came with it until Win 3.1 became a thing.
I learnt WordStar 3.3 at school in 1991, on an Amstrad 1640.
If WordStar 3.02 was indeed directly ported from CP/M, as the rumors say it was, then it stands to reason it would have used CALL 5, but until a copy is found this can’t be verified. Also there is the question of what the heck it did that caused it to fail on PC DOS 1.1.
The history indicates WordStar was ported to CP/M-86 before going to DOS, so it is unlikely to be using Call 5 even in the earliest DOS release. Also, references at the time indicated the first release of WordStar for the PC worked fine on DOS 1.0 but not DOS 1.1. Call 5 did not break between versions. I expect the problem to relate to the introduction of double-sided disks.
Note that WordStar 3.2x showed problems running on DOS 2, requiring patches to SpellStar and the WSOVLY1.OVR file. WordStar was unusually brittle.
BTW, the rhetorical question of what really is a bug exposes a big problem in the PC industry back then: lack of specifications!
You had the schematics and the data sheets for all the included ICs, and there was documentation on what you could use various BIOS and DOS facilities for. But there wasn’t any documentation stating what you should NOT do. That is the root of this kind of problem.
With more modern computers and operating systems, the APIs are usually rather well defined, and as long as you can actually find where something goes wrong, it’s easy to determine whether the application is abusing an API or the API was implemented incorrectly compared to the documentation.
P.S. It’s strange how WordStar 3.0x could have had problems with DOS 1.1. Maybe it did some display or calculation of disk space and something overflowed? Seems unlikely though.
“Lack of specifications” is a somewhat misleading way to put it. If there were no specifications, there would be no 3rd-party hardware or software, and no problems. The hardware was specified in the form of complete schematics, and the ROM BIOS source code (99% of it) was published. That is light years beyond what you get these days.
On the software side it was very different though, and arguably the 8086/8088 promoted ill-behaved software by allowing any memory access. That made it really hard for the OS to set any boundaries, and impossible given the memory constraints. The problems were caused more by the fact that DOS was an extremely bare-bones OS, and almost every application needed to go beyond DOS. And the DOS manuals had nothing to say about that.
I too wonder what WordStar 3.02 did to be incompatible with DOS 1.1. Did it make some assumptions about DOS internals? All we have is this: “PC-DOS 1.1 came out a few weeks later. WordStar 3.02 did not run from PC-DOS 1.1, MicroPro blamed this on Microsoft for changing some of its operating system’s specs without advance warning.”
There are those who wouldn’t even call mess-dos (nor CP/M) an ‘OS’ 🙂
WordStar did a lot of in-place patching. IIRC, double-sided disks under PC DOS 1.1 had a cluster size of 2 sectors. That could throw off calculating the correct sector. Just a guess though.
MicroPro had a number of other bugs preventing programs from working on systems with more than 512KB. The bug in the install program for WordStar 3.3 differed from the bug in the WordStar application.
http://www.wordstar.org/index.php/wsdos-support/120-wordstar-for-dos-3-install-fails-with-too-little-memory-error
The link there to TN42.TXT is broken, but over on Google Groups similar information can be found as TECHNOTE WS-27.TXT. StarIndex needed a similar replacement of a signed comparison with an unsigned comparison. StarIndex STYLE and two parts of CalcStar needed more extensive patches of 12 or 15 bytes each. Finally, there is a note that Formsort 1.4 requires a system with less than 512KB; no patch was provided, the recommendation instead being to upgrade to Formsort 1.6.
Computing in the 80s was fun. And this is just with products from three larger vendors. The full range of software and hardware could turn up lots of interesting behaviors, and that is without even getting into the TSRs that hooked into other applications, directly altering memory.
It stands to reason that WordStar 3.02 used DOS FCBs for file access, so the underlying disk architecture should not have mattered. Perhaps it relied on some undocumented fields in the FCB that changed between DOS 1.0 and 1.1. DOS 1.1 did add time to directory entries (1.0 only had the date), and this info does appear in the FCB, so it is possible that WordStar 3.02 used what was a reserved field in DOS 1.0 for its own purposes and got hosed when DOS 1.1 started using that field.
Well, the hardware was well documented, but there were not many (if any) specifications on what you should and shouldn’t do on an IBM PC.
I know that at the time it was common practice not to have such specifications, but still…
Also, they probably didn’t realize how long the PC platform would survive in upgraded forms. At the time, the Apple II, TRS-80, and Commodore PET were probably the only microcomputers that had lived on for at least a few years, but all of those kept the same CPU through the years, and the hardware upgrades were minor compared to the step from an IBM PC to an IBM AT. (Well, the Apple II started using the 65C02 and later the 65816, but that was after the introduction of the IBM PC.)
Even 65C02 to 65C816 was a lesser step than 8088 to 80286, I’d say.
The longevity of the PC platform was neither planned nor predicted, that’s for sure. Of course, if IBM had been trying to build a platform for the next 10+ years, it probably wouldn’t have been nearly as successful!
Well, 40 years (roughly 1980-2020) is not bad for such a kludgy platform.
I’m sure if you told that to the original PC designers, they’d be dead certain you were joking. They’d think no one could possibly be so stupid as to use the PC platform for decades 🙂 And yet here we are.
Well, it can be argued that modern PCs do not really have much in common with the original IBM PC anyway. In terms of the hardware, operating systems ditch real mode as quickly as possible and interface with devices over PCIe/USB rather than ISA/directly (like the classical keyboard controller – which you may say can still be accessed but is in fact transparently emulated by the firmware!). In terms of software, DOS programs won’t run on a modern operating system without relying on an emulator or virtualization.
Basically the only common thing I can think of when we talk about the concept of a “PC compatible”, is expandability through a documented bus (like ISA). Everything else (like peripheral devices you connect to an external connector) I don’t really count as belonging to the PC concept.
The definition of a PC is an interesting question. But for simplicity I assume that if a computer can run (some) DOS programs on bare hardware, it’s a PC. Of course, the definition of bare hardware is another interesting question. Anyway, this condition requires at least a working BIOS, the ability to load the boot record, and working keyboard/VGA hardware (or emulation thereof).
The PC designers were aiming at a decade of use: five years of sales, with the final PC sold having about 5 years of viable operation. On the whole, they succeeded. Some things were unexpected, like the push for PCs to have lots more memory to run big spreadsheets, instead of having CFOs purchase workstations designed for that much memory.
Compare the 40 years of IBM PC derivatives development with the 10 year run of the ZX Spectrum.
It was rather recently that modern PCs started to really diverge from the original PC. Somewhere around the time the P4 was new, the ISA bus started to become rare, and once we finally replaced Windows 95/98/ME with Windows XP, support for full-scale DOS could be at least partially removed.
Most PCs were able to boot DOS until about 2010 or so. P4 machines ran DOS just fine. I have a 2011 Sandy Bridge system (Intel DQ67OW board) which also runs DOS without difficulties.
Are there PCs out there without a UEFI CSM that provides BIOS compatibility? Also, is there technical info out there as to how compatible different CSM implementations are with a real BIOS?
Ah thanks Michal. After all these years, I finally know what that LOADFIX command was for and how it worked.
AFAIK, CSM was always optional, but even that is going away.
(Ars Technica): “Intel announced that by 2020 it was going to phase out the last remaining relics of the PC BIOS, marking the full transition to UEFI firmware.”
https://arstechnica.com/gadgets/2017/11/intel-to-kill-off-the-last-vestiges-of-the-ancient-pc-bios-by-2020/
So we’ll be stuck with VMs for legacy OSes (which isn’t nearly as bad as it used to be). Still a little annoying, though.