No, the OS/2 Museum does not have either a time machine or difficulty doing basic math. As of this writing, it is August 2021 and the IBM PC was announced in August 1981, 40 years ago.
But in August 1980, one year earlier, IBM started putting together the design of the IBM PC. If that (one year) sounds like an awfully short development cycle for a product like the PC, that’s because it was, especially for a company like IBM where the typical product development cycle was closer to five years at the time. The tight schedule determined the PC design: No custom or not yet available chips, no major software development, favor proven and familiar technologies.
With these constraints in mind, the design of the PC was sketched out on a stack of papers on August 10, 1980. Some of those drawings—not previously published—are now presented here; see the explainer at the end of this article for a glossary.
A closer look at the drawings reveals that even though there were quite a few changes between August 1980 and August 1981, the core was all there from the beginning: A box with two floppy drives, five I/O expansion slots, and a detachable keyboard; an Intel 8088 CPU with optional 8087 FPU, 8259A interrupt controller, 8237-5 DMA controller; RAM on motherboard, with an option to install additional RAM expansion cards; a display adapter with separate monitor or TV.
Choosing these particular chips was no coincidence, and there was one very clear reason for it: The IBM System/23 Datamaster.
The Datamaster Legacy
The IBM PC hardware was very strongly influenced by the design team’s prior experience with the Datamaster. That may seem odd given that the Datamaster was announced in July 1981, just weeks before the PC. But in reality, the Datamaster development started in 1978 and the hardware was finished in Summer 1980. The product release was significantly delayed by difficulties with implementing IBM’s chosen BASIC variant.
The Datamaster used an 8-bit Intel 8085 CPU, 8259 interrupt controller, 8237 DMA controller, and 8253 programmable timer. It also utilized an expansion bus remarkably similar to the PC’s.
CPU Choice
The Datamaster team learned that the 64K address space of an 8-bit CPU wasn’t quite big enough for the tasks IBM had in mind for it. The Datamaster used paging to expand the addressing capabilities, but that complicated things significantly.
With that in mind, IBM wanted a CPU with a much larger address space for the PC. The Intel 8086 or 8088 was a logical choice; the slightly better performance of an 8086 was not considered worth the increased complexity and cost. The 8088 provided a good compromise between supporting 16-bit software with a huge (for the time) 1 MB address space while utilizing cheaper and familiar 8-bit infrastructure.
There was already an existing 8086 BASIC (Microsoft’s) and other tools, and porting software from the 8085 was not difficult. As an added bonus, IBM already had Intel MDS development systems that supported both 8085 and 8086 development.
With the above in mind, it’s easy to answer questions such as “why did IBM not use the Motorola 68000”. The 68000 was barely available in 1980; there was no BASIC for it yet and no software, and IBM had no experience with it. Choosing the 68000 would have delayed the PC release well beyond what IBM was willing to accept; that alone was a sufficient reason to not pick the 68000.
Internal Layout
The initial design called for a power supply taking up the left side of the PC’s enclosure, and adapter cards competing for space with the internal floppy drives. The actual design moved the power supply behind the floppy drives, leaving more room for long adapter cards.
IBM took advantage of the extra space and the adapter cards released with the IBM PC were on the larger side.
Expansion Bus
The Datamaster didn’t only influence the choice of the PC’s CPU and support chips. It also strongly influenced the PC’s 62-pin I/O expansion bus, later known as the ISA bus. How significant was the Datamaster influence? The following two diagrams should answer that:
The above is a diagram from an IBM Datamaster service manual dated December 1980. Below is a diagram of the expansion connector from the IBM PC Technical Reference dated August 1981.
The diagram is mirrored (A pins on the left vs. A pins on the right), but clearly extremely similar. What used to be page select bits 0-3 neatly turned into address bits 16-19. Interrupt and DMA levels that had a specific purpose on the Datamaster are generic on the PC. There are real differences, e.g. pin B20 used to be DMA request 0 on the Datamaster but turned into a system clock signal on the PC. Since the PC dedicated DMA channel 0 to DRAM refresh, DMA request 0 no longer made sense. Pin B04, unused on the Datamaster, turned into interrupt request 2.
This similarity meant that existing adapter cards for the Datamaster could be used in the IBM PC with very minimal changes, if any. This no doubt greatly sped up the initial stage of development because there was no need to design a brand new floppy controller or serial adapter, for instance.
Memory
The first IBM PC system board supported 16 to 64 KB RAM, with 64 KB being about the maximum of what a “personal computer” of the era would support. Also available were 32/64 KB cards which plugged into the PC’s expansion slots. With three such cards (a practical maximum; two of the five expansion slots would be taken by a display adapter and a floppy controller), the PC could be expanded to 256 KB RAM.
The prototype board shown below has Mostek MK4232 memory chips (32K×1), with room for 18 chips total. That allows installing up to 64 KB RAM (two banks of 32 KB) with parity.
Using parity was not at all typical for low-end computers at the time, but IBM felt that the added ability to detect errors was worth the small additional expense.
From the beginning, the 1 MB address space was carved into several regions. At the top, 128 KB was reserved for system firmware, 128 KB for other memory, and 128 KB for display memory. That left 640 KB available for system RAM, but that was purely theoretical—the original PC supported only up to 256 KB RAM. Even the PC/AT only supported 256 or 512 KB on the system board.
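Spelled out as rough C constants, the carve-up looks as follows; the base addresses are the conventional ones implied by the 128 KB split described above, not values quoted from the drawings.

    /* The 1 MB 8088 address space as carved up above (illustrative). */
    #define RAM_BASE        0x00000UL   /* up to 640 KB of system RAM (theoretical max) */
    #define DISPLAY_BASE    0xA0000UL   /* 128 KB reserved for display memory */
    #define OTHER_BASE      0xC0000UL   /* 128 KB of "other" memory (adapter ROM, etc.) */
    #define FIRMWARE_BASE   0xE0000UL   /* 128 KB of system firmware at the top */
    #define ADDRESS_TOP     0x100000UL  /* 1 MB total on the 8088 */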
The PC’s memory map was quite reasonable and the infamous 640 KB memory limit didn’t come into play until the late 1980s, well after the expected design life of the PC. At that point, the problem wasn’t hardware (286 and 386 processors) but rather software (DOS) holding things back.
Storage
As the drawings show, the initial PC design anticipated external 8″ drives for the PC. That ended up not happening. But the rest of the PC storage subsystem turned out more or less exactly as the initial design.
While the Datamaster used 8″ drives, the PC used 5¼″ drives. The media are much more convenient to work with (if you’ve ever seen 8″ floppies, they’re huge), and the PC form factor would not have been possible with 8″ drives.
The Datamaster used the NEC μPD765 floppy controller. It did the job, IBM engineers knew how to use it, and saw no reason to pick a different controller for the PC.
The PC used floppies with a 512-byte sector size, something that later became a ubiquitous default, so much so that any other size was extremely exotic; yet in 1980/81, typical sector sizes were 128 bytes, 256 bytes, or even 1,024 bytes. Once again, the Datamaster also used 512-byte sectors, except it used 128-byte sectors on the first track of a floppy, as was then common. The PC fortunately simplified things and used 512-byte sectors throughout.
The PC could probably have maximized the floppy storage capacity by using 1,024-byte sectors, but that was perhaps not even considered. There are interesting tradeoffs: each sector requires a certain amount of management overhead (address marks, gaps, CRCs) on the floppy, but at the same time each file tends to waste some unused space in its last sector. 512-byte sectors strike a good compromise between keeping the per-sector overhead low and not wasting too much allocation space per file.
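To make the tradeoff concrete, here is a small, purely illustrative C calculation. The raw track capacity and the per-sector formatting overhead are assumed round numbers chosen for the example, not figures from the PC design documents.

    #include <stdio.h>

    /* Toy model of the sector size tradeoff: bigger sectors waste more slack
       per file, smaller sectors spend more of each track on formatting overhead. */
    int main(void)
    {
        const double track_bytes   = 6250.0;  /* raw bytes on a double-density track (approx.) */
        const double per_sector_oh = 62.0;    /* assumed address mark/gap/CRC overhead per sector */
        const int sizes[] = { 128, 256, 512, 1024 };
        int i;

        for (i = 0; i < 4; i++) {
            int    s     = sizes[i];
            int    fit   = (int)(track_bytes / (s + per_sector_oh)); /* whole sectors per track */
            int    data  = fit * s;                                  /* usable data per track */
            double slack = s / 2.0;  /* expected unused space in a file's last sector */

            printf("%4d-byte sectors: %2d per track, %5d data bytes, ~%4.0f bytes slack per file\n",
                   s, fit, data, slack);
        }
        return 0;
    }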
The PC storage subsystem was not required to be compatible with existing systems and set its own standards.
Display Hardware
A closer look at the PC design drawings shows that one area where the final PC noticeably differed from the initial design is display hardware. Not coincidentally, that’s also where the PC significantly differed from the Datamaster.
The Datamaster display was text only and used the Intel 8275 CRT controller (CRTC). To be able to display graphics, the PC needed a different CRTC; the Motorola 6845 was chosen. The PC also needed to support TVs as well as dedicated monitors, and the characteristics of NTSC television determined what IBM’s CGA would do.
The initial PC design notably called for 80×24 text resolution, the same resolution the Datamaster used, and also the standard resolution of terminals. The PC instead ended up using the 80×25 resolution, which is extremely common today but was far from typical in 1981.
The initial design also called for a 280×192 graphics resolution (same as Apple II). The released PC (CGA) instead used 320×200 graphics. The most likely answer as to why is ‘because that was possible within the constraints of NTSC TV and 16 KB display RAM’.
The CGA had 16 KB RAM, or 16,384 bytes. A 4-color (2 bits per pixel) 320×200 resolution or a 2-color 640×200 resolution uses exactly 16,000 bytes. There would still be enough memory for a few more lines of graphics, but not for another line of text using 8×8 character cells (about the smallest cell size that’s still legible).
The text-only MDA similarly had 4 KB (4,096 bytes) of RAM. With 160 bytes required to store a line of text (80 characters each with an attribute byte), there was room for 25 lines of text—utilizing 4,000 bytes of memory—but not more.
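The arithmetic behind those figures is easy to verify; a short C check using only the numbers given above:

    #include <stdio.h>

    /* Checks the CGA/MDA display memory arithmetic described above. */
    int main(void)
    {
        int cga_ram  = 16 * 1024;            /* 16,384 bytes on the CGA */
        int mda_ram  = 4 * 1024;             /*  4,096 bytes on the MDA */
        int gfx_320  = 320 * 200 * 2 / 8;    /* 320x200 at 2 bits/pixel = 16,000 bytes */
        int gfx_640  = 640 * 200 / 8;        /* 640x200 at 1 bit/pixel  = 16,000 bytes */
        int text_row = 320 * 8 * 2 / 8;      /* one more 8-pixel-high text row = 640 bytes */
        int mda_text = 80 * 25 * 2;          /* 80x25, char + attribute = 4,000 bytes */

        printf("320x200 4-color: %d bytes; 640x200 2-color: %d bytes (of %d)\n",
               gfx_320, gfx_640, cga_ram);
        printf("Another 8x8 text row needs %d bytes, only %d remain\n",
               text_row, cga_ram - gfx_320);           /* 640 needed, 384 left */
        printf("MDA 80x25 text: %d bytes (of %d); row 26 needs %d more\n",
               mda_text, mda_ram, 80 * 2);             /* 160 needed, 96 left */
        return 0;
    }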
Although the 320×200 resolution appears to have been new with the IBM PC, the Commodore 64 (introduced a few months later) used it as well. On the PC, the supported text and graphics resolutions were about the maximum possible given the display memory size and attached display hardware.
Keyboard
The PC’s physical keyboard layout was identical to the Datamaster (Model F keyboard), but the key labels were different and the keyboard was not built into the system unit. While the Datamaster used a parallel connection to the keyboard within the system housing, the PC used a serial link through a long coiled cable.
The original 83-key PC keyboard now seems alien, since the enhanced 101/102-key layout took over in the late 1980s.
Note that the Ctrl-Alt-Del sequence (deliberately) required two hands on the original PC keyboard, since Ctrl and Alt were only on the left side of the keyboard and Del was on the right.
Technical Reference
A key ingredient of the PC’s success was the Technical Reference. While the Tech Ref wasn’t exactly a tutorial and didn’t have a whole lot of explanatory text, it included complete schematics of the IBM PC, as well as a full listing of the PC BIOS, quite thoroughly commented. Any halfway competent engineer could locate datasheets of the components IBM used and see exactly how they were connected in the PC. Or build a clone of the PC.
The IBM Technical Reference was, again, a direct result of the tight development schedule. There was no time to engage a team of technical writers and develop documentation in the style then typical for IBM products. Publishing existing schematics and BIOS source code was, on the other hand, very quick and easy—and much appreciated by engineers developing hardware and software for the IBM PC.
ASCII Island in a Sea of EBCDIC
In the 1980s, IBM systems (including the Datamaster) almost exclusively used the EBCDIC character set, different from ASCII. The PC, as the design drawings show, was meant to use ASCII from the very beginning. Perhaps surprisingly, this did not cause any significant problems during development.
The PC development team was deliberately separated from the rest of IBM. Microsoft’s BASIC used ASCII, DOS used ASCII. The Intel MDS machines also used ASCII, and that was where the PC BIOS was developed.
The ASCII/EBCDIC divide only caused a minor inconvenience for the PC development team when the BIOS listings (ASCII) were transferred to IBM’s mainframe systems before publication in the Technical Reference.
BIOS Interrupts
The Datamaster used interrupts as firmware entry points and this approach was continued on the IBM PC. While direct calls to known addresses were typical for similar machines at the time, the Datamaster could not easily use that approach because of paging.
While the software interrupt approach may seem unnatural, it turned out to be extremely flexible because it is easy to “hook” existing interrupts, add new functionality, and “chain” to the existing interrupt handler; the same concept as inheritance in object-oriented programming. IBM notably used this approach with the PC/XT hard disk controller (adding BIOS INT 13h for hard disks in a new ROM and chaining to the old BIOS service for floppy access) and the EGA (using add-on ROM to drive the EGA but falling back to the system BIOS for CGA/MDA support).
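To illustrate the hook-and-chain pattern, here is a minimal sketch in the style of later DOS C compilers; the interrupt keyword, getvect/setvect and _chain_intr are Borland/Microsoft compiler extensions, not anything the BIOS itself provides, and the sketch hooks the INT 1Ch timer tick rather than INT 13h to keep it short. The mechanism is the same: save the old vector, install a new handler, add the new functionality, pass control to the previous owner.

    #include <dos.h>

    /* Minimal sketch of hooking and chaining a software interrupt.
       Borland/Microsoft DOS C extensions assumed; this is an illustration,
       not IBM's code. */

    static void interrupt (*old_tick)(void);    /* previous owner of INT 1Ch */
    static volatile unsigned long ticks_seen = 0;

    static void interrupt new_tick(void)
    {
        ticks_seen++;              /* the new functionality added by the hook */
        _chain_intr(old_tick);     /* pass control on to the old handler */
    }

    void install_hook(void)
    {
        old_tick = getvect(0x1C);  /* remember the existing handler */
        setvect(0x1C, new_tick);   /* put our handler in front of it */
    }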
As it happens, DOS independently chose the same software interrupt approach to provide system services, with the same resulting flexibility and extensibility (and sometimes chaos).
PC Software
IBM’s original plan was to ship the PC with ROM BASIC and the CP/M operating system, both standard for personal computers at the time. Microsoft was the preeminent supplier of BASIC to OEMs and already had 8086 BASIC, so all IBM had to do was negotiate a contract for BASIC and implement the OEM interface required by Microsoft.
The PC shipped with a 32 KB ROM BASIC which was called ‘Cassette BASIC’, alluding to the fact that the BASIC ROM could only use an attached cassette tape for storage. As part of DOS, IBM also shipped Disk BASIC and Advanced BASIC (BASIC.COM and BASICA.COM) which both utilized the ROM BASIC and among other things provided additional “device drivers” allowing the ROM BASIC to use floppy disks for storage.
On the operating system side, things did not go so smoothly. There are many legends and conflicting stories, and not so many facts.
The biggest unanswered question is why the IBM PC did not ship with CP/M. There are many rumors about how Gary Kildall, the CP/M inventor and Digital Research (DRI) boss, refused to meet with IBM executives and went flying instead, or how he flew in too late for a meeting. Which doesn’t really make any sense at all, since meetings can be rescheduled.
Other rumors say that Gary Kildall was not involved at all since it was his then-wife, Dorothy McEwen, who was in charge of negotiating OEM contracts. Another version of the story goes that she wouldn’t meet with IBM execs because she was already in a different meeting with HP and then went on vacation. Again, I do not find this credible. Meetings can be rescheduled.
I believe the real story is much more prosaic and straightforward: DRI did not have a product to sell. CP/M-86 simply didn’t exist in 1980, or really 1981 for that matter.
IBM was not going to wait and it was time for Plan B. Microsoft, already contracted to provide language tools for the IBM PC (assembler, Pascal, FORTRAN, etc.) needed an OS, and knew where to find one.
The DOS story is reasonably well documented. Back in 1979, Seattle Computer Products (SCP) started building 8086-based systems and needed an operating system. CP/M for the 8086 wasn’t available, and it was unclear when it might be. Tim Paterson stepped in and threw together QDOS, Quick and Dirty OS, soon renamed to 86-DOS: A bare-bones CP/M workalike that was good enough to manage files on a floppy and launch programs. The major advantage of 86-DOS was that it enabled relatively simple porting of existing 8085 CP/M applications to the 8086, largely accomplished through machine translation.
Microsoft bought 86-DOS for cheap, licensed it to IBM, and thus acquired what amounted to a license to print money for much of the 1980s and the early 1990s.
The IBM PC, Maker of Empires
It is fairly obvious that the IBM PC laid the foundations for two business empires, neither of them IBM.
Thanks to the IBM PC, DOS became the standard PC software and Microsoft was happy to license it to any OEM. For many years, Microsoft raked in cash from licensing DOS without needing to put much effort into improving the product. Digital Research briefly threatened Microsoft’s cash cow (and it’s clear that Bill Gates was very worried), but Microsoft managed to replace DOS with Windows before DRI could really eat into Microsoft’s bottom line.
Intel, by all appearances, stumbled into their x86 empire entirely by accident. The 8086 was considered a stopgap product. The 80286 was seen as a minor update and the 80386 started out as a sort of dead-end project, before turning into a matter of major strategic importance for Intel after the iAPX 432 abysmally failed. The PC effectively forced Intel to go the x86 route.
The importance of being in the right place at the right time cannot possibly be overstated.
Summary
The IBM PC development cycle was very short, only one year from the start of the design phase to a finished and announced product. The design was jumpstarted by leaning heavily on the development team’s experience with the System/23 Datamaster. The core PC architecture was more Datamaster than not, with the notable exception of a CPU upgrade (Intel 8088 instead of 8085). The PC’s I/O subsystem, on the other hand, had only some (storage, communications) or barely any (display) relation to the Datamaster, setting new standards.
The tight schedule determined almost everything about the design of the PC, from the hardware (significant reuse of Datamaster design) to the software (using existing 3rd party software, no waiting for CP/M-86). The IBM PC was the right product at the right time, and its success and durability outstripped anyone’s wildest expectations.
Explainer
The design drawings use shorthand that may require explanation. Here’s an attempt to decode some of the acronyms and IBMese.
- CD: Card (feature card or I/O card)
- CH: Channel
- DEC: Decoding or decoder
- DRV: Drive
- 1LPC: 1 Line Per Channel—how many wires fit in the 0.1 inch space between pins on the components; higher LPC implies more expensive to manufacture
- MPU: MicroProcessor Unit aka CPU
- PCK: Parity Check
- Planar: System board
- ROS: Read-Only Storage aka ROM
- RQ/GT: Request/Grant
Sources
- The Creation of the IBM PC, David J. Bradley, BYTE, September 1990
- A Personal History of the IBM PC, David Bradley, IEEE Computer, August 2011
- Whence Came the IBM PC, Jon Titus, EDN, September 15, 2001
- Recollections of Gary Kildall, an interview with Gordon Eubanks by Clive Akass
- IBM PC Technical Reference, IBM, publication no. 6025008, August 1981
- IBM 5322 Computer Service Manual, IBM, publication SY34-0171-0, December 1980
- Personal correspondence with Dr. David J. Bradley
IBM poured a ton of money into the System/23 Datamaster…. only for it to be forgotten and rejected by the market. At least some of the engineering was reused, so it wasn’t a complete waste. Of interest is the amount of R&D IBM did on making the Datamaster user friendly: https://www.youtube.com/watch?v=TNrkvbouK14
I published this yesterday on the neozeed blog, but why not spam it here too, considering it is relevant: https://zirblazer.github.io/htmlfiles/pc_evolution.html?ver=123
A good chunk of it includes my take on the IBM PC, among other things. However, my focus is on how the platform evolved. Most of what is there came from your blog posts and modem7’s MinusZeroDegrees site.
I wanted to give it some visibility because it had been sitting hidden for a long time, mainly because it is not finished (the last time I recall working on it, I was checking in the Compaq DeskPro 386 Technical Reference what the second 8254 PIT was being used for), and also because I wasn’t sure about hotlinking heavy PDFs, as I know that is bad etiquette, but I didn’t know whether to mirror them on a site of my own, or what.
Besides the obvious thing about not having waited for the Motorola 68000 (sure, no one had a crystal ball to know the implications of going x86; one can dream…), I also wondered why IBM decided to use the 8237A DMAC instead of the more modern 8089 IOP (I suppose it would have been much more expensive, plus it had only 2 DMA channels vs. 4). The 8237A DMAC seems to have been fine at the time of the PC, but going forward it was perhaps the weakest subsystem and a pain in the butt to deal with.
There is another thing I pondered: given that the PC design focused on off-the-shelf parts, were Multibus or S-100 ever considered at any point, instead of having the PC roll its own type of bus and expansion cards? This one I find the most interesting, since the available processor choices are mentioned pretty much everywhere, but not whether IBM considered making the expansion slots compatible with something else (including any possible card from the Datamaster, given the extremely similar pinout you mention).
I never saw anyone compare Multibus capabilities to the I/O Channel (I’m aware that the standard Multibus pinout includes pins for 8 IRQs but no DMA channels, which is an easy possible technical reason to mention). Also, weren’t Multibus cards capable of being bus masters if they used the 8289 Bus Arbiter? I recall a lot of mentions of how bad the I/O Channel and then ISA were for bus mastering, even though the Intel MCS-86 chip set seems to have been able to do far more than what the IBM PC showcased in that regard.
One thing about the IBM PC is that the design was definitely cost conscious. It was not meant to be a fancy, all-powerful machine. The 8237A was entirely adequate for the PC where its main job was to support the floppy. It was less adequate for the PC/AT and just plain horrible on a 386+ machine with lots of RAM. But that was not a problem with the original PC design, the issue was that no one figured out how to evolve the PC hardware architecture to keep up with the CPUs and memory. All the major attempts to do so (PS/2, EISA) were total flops. The PC/AT was really the last significant PC architecture evolution that IBM managed to do, unfortunately.
The obvious counterexample of a constantly updated, evolving architecture is Apple. Clearly it’s not something customers value all that much, when it comes right down to it.
Bus mastering is an interesting topic. The PC could not do it, but the PC/AT could, only I’ve not been able to figure out why the PC/AT supported bus mastering adapters.
From a quick check it looks like Multibus is too new for the IBM PC. The S-100 bus is not, but I’m not sure it was even considered for either the Datamaster or the PC.
The 8237A was $20 in 1980. The 8089 was initially $200. The 8089 is also quite complex to program. It was, along with the 8271, part of Intel’s over-engineered support chips.
S100 was getting a revised specification which wasn’t finalized until 1983. Inconvenient for the PC. IBM could have done their own revision but that risked having incompatible cards. It also would require all cards and drives to have voltage converters since S100 used +8v, +16v, and -16v.
Intel Multibus specification was from around 1976 if I googled correctly. There were third party cards commercially available in 1979, as can be seen in this ad for a disk controller: https://books.google.com/books?id=Kz4EAAAAMBAJ&lpg=PA8&ots=ck6kUw0PUO&dq=Intel%20MDS-II%20multibus&hl=es&pg=PA8#v=onepage&q=Intel%20MDS-II%20multibus&f=false
Intel used Multibus for some of its reference platforms, including 8086-based ones, where it put the entire MCS-86 chip set to good use. When I learned that this was a thing in the late ’70s, I actually wondered whether IBM considered using something based on that. From a time-to-market perspective and for ease of design, it would have made a lot of sense to stay closer to the Intel reference platform than to roll a custom one, which is what IBM did.
I suppose that either IBM wanted features in the expansion slots that the standard Multibus pinout didn’t support (the DMA channels), or they didn’t consider it because the PC was based on the Datamaster design and they stuck with what they were already happy with, or a new, incompatible slot served as a soft vendor lock-in, allowing IBM to cash in with its own expansion cards before third-party products appeared on the market, whereas with an existing standard bus they would have faced competing expansion cards from day one.
Either way, I never heard whether such an option was considered at all.
I doubt it was considered. For the Datamaster, I don’t think Multibus was available (1978-9) or made sense. For the PC, using the existing Datamaster bus was much quicker than any alternative. I also don’t think IBM wanted to depend on Intel too much; all the Intel chips used in the PC were also second sourced.
For the PC, I don’t think vendor lock-in was a concern, the clear #1 concern was time and developing the core adapter cards internally was the fastest way to market.
And yes, for the floppy adapter, DMA was a must.
I suspect even PC/AT clones were not really popular until the rise of chipsets.
$200 sure sounds like enough reason to not use the 8089. The +8V/+16V/-16V voltages also sound extremely inconvenient for a machine built around 5V logic.
There’s a comment here that the MDA “started out life as an advanced video card for the IBM system 23 or some such”. If the System/23 and the PC had such a similar expansion pinout, I expect it’s quite possible that the prototypes were originally tested on a System/23. That might help to explain why it’s got some features that look unused or partly-implemented.
The Datamaster was an EBCDIC machine, not ASCII… then again a different character ROM should take care of that. The expansion connector was close enough that I think it’s plausible that cards for the PC might have been prototyped in a Datamaster machine, or inversely Datamaster cards might have been used in PC prototypes.
The Datamaster display adapter didn’t even use the 6845 CRTC and the monitor was built in, so I doubt the MDA directly grew out of some Datamaster hardware. But for prototyping, anything is possible. Looking at the initial PC design from August 1980 it’s clear that the display hardware noticeably evolved during the first few months, and maybe the weird MDA artifacts are simply a consequence of that.
The Byte magazine article by David Bradley mentions how MDA implemented a Datamaster equivalent display using 6845 instead of 8275.
The word processing feature card available for the System 23 Models 4xx takes over the video circuitry. It has built in memory and its own oscillator. There is very little information available about it since the word processing specific manuals have not been scanned. https://sysovl.info/gallery_ibm_s23guts.html leads to images of both the word processing feature card and the connector to the CRT. I can see a family resemblance to what shipped with the IBM PC.
A couple of comments on the initial hand drawn diagram.
The memory refresh was done the correct way with an 8202 attached to an external clock. The DMA controller is reserved solely for DMA. Of course, that was going to need to be changed to the 8203 when the 256K motherboards rolled out and each memory card would have needed its own DRAM controller to handle refresh. I haven’t found a price on any Intel DRAM controller but it couldn’t have been too expensive; PCJr memory sidecars used one and remained affordable.
The memory was only in 18 sockets. 32Kb (presumably stacked) chips would be used to get to the 64 KB total. I am glad IBM didn’t choose to reduce the number of sockets for production units. Stacked chips are annoying to test and upgrade.
The notes about the Personal Cassette System having no memory lead me to wonder if there was a plan to do what the PCJr did later and use video memory as main memory. Giving a KB to hold the interrupt vectors, that would lead to a BASIC system with almost 2 KB available in high resolution modes or 14 KB in text mode. Just enough to run a demo before buying an upgrade.
Good catch that the initial design used a proper DRAM controller. Actually, I noticed that the 8253 PIT is completely missing from those schematics. So at some point it was decided to replace the 8202 with the discrete arrangement they shipped, using a DMA channel and a PIT counter to refresh the DRAM?
On the early 80’s being highly creative with how to use chips really paid off.
To add: I was reading an interview with the people who were at Intel during the Crush marketing campaign at the beginning of the ’80s (when Intel repositioned itself from a mere component vendor to a full system solution provider), and they say the following about the IBM PC:
https://archive.computerhistory.org/resources/access/text/2015/09/102746836-05-01-acc.pdf
House: And I think an important thing to comment on. Although I don’t know that there was ever a direct link between the Crush campaign and the IBM PC design win, which was the defining design that made Intel the king of microprocessors for all time. But that product was introduced some time in early ’81.
We remember the telex came in early in December, and the whole internal campaign was put together in December. And right after Christmas, New Year’s, it was launched at the corporate level with our big session that we had all the corporate people.
Then after that, there were a series of things during 1980 that we were doing, including all the different presentations, advertising, futures catalog and so on that we were focused on. It was during that time, and this is another topic for another time, another video history but it was in that time that we were starting to hear about some activity in Boca Raton. That wound up, only a year after we launched the Crush campaign when IBM announced the IBM PC.
So the impact that might have had, I think we could speculate on. We don’t know. But clearly, you’re asking about the momentum in the summer we started seeing these design wins all of a sudden start piling up.
Katz: We didn’t see IBM at that time.
House: We did not see IBM. We did not see IBM. But the enthusiasm grew as we had the success in the second half of the year. And then we were hearing, as Dane knows, about the activities in Boca about the same time. So it wasn’t, I don’t know that it was ever a pin on the map.
McKenna: Well because, I think it took IBM a while to decide that they were going to outsource the processor and the software because they made everything themselves.
House: That’s another discussion but, that design, from the beginning, was using Intel.
In other, less relevant parts they do claim that many vendors were taking Intel single-board computers (and they do mention using Multibus for those) or other reference designs and making products directly based on them, rarely rolling their own custom board version (which is pretty much my point about sticking to reference platforms). Thus the IBM PC may even be an oddity, as it escaped being Crushed into a reference MCS-86 Multibus platform; it might not have, had Intel managed to infiltrate a few field engineers, heh.
Yes, the 8253 PIT is not in the Aug’80 design, even though from what I can tell, the Datamaster had one. Not sure if that was an omission or if it was added later. There is a PIT shown on the PC prototype board which would have been circa Nov-Dec 1980. I’ll try to find out more.
The creativity was definitely worth it… if the discrete chips cost something in the $10 range at quantity, we’re talking millions of dollars pretty quickly in large production runs.
I too was curious about the RAM-less model. I’m not sure how they planned to do it since the BASIC needed some RAM to work at all. But using (for example) CGA video memory as RAM should have been possible. With the caveat that the interrupt vector table had to be at address 0.
@Zir,
If you want to see “creative”, look at the Apple II. Woz was known for his creative design choices to keep component count down. It led to things like non-linear video memory addressing and disk drives that clunked on startup (because they lacked a track zero sensor, so just step the head down 40 times on boot to make sure it’s there!).
@Necasek, about the IVT:
One would naively think that Intel could’ve added a register instead of hard-wiring that magic value.
Why? Hardwiring magic addresses was extremely common in that era. And requiring RAM to be at address zero rather than somewhere else does not sound like a terribly onerous requirement. The transistor budget was limited and you didn’t just add features that didn’t solve an actual problem.
The 286 had a relocatable interrupt table… albeit only in protected mode.
It’s logical for the program to be loaded at addy 0. Placing “random” data there *is* a bit awkward.
Otherwise: like mesaid, “naively” 🙂 The transistor budget in particular did cross me mind…
It would have been relatively easy to alias the CGA card memory to start at address zero in addition to its usual address. This circuitry would be disabled if any RAM is added to the motherboard. It would have allowed IBM to offer the absolute cheapest variation of the 5150 to schools. I expect IBM could have priced the 0K on the motherboard 5150 at about $600 factoring in the educational and volume discounts which seems fairly competitive with the Apple II and TRS-80 Model I Level 2 in their respective 16K + cassette versions.
I admit this design feels more like what Sinclair would introduce instead of IBM’s default design methodology.
The classic story is that IBM didn’t really want to enter the home computer market all that much; if so: such tricks would indeed likely have been outside their FOV.
Classic stories are seldom whole stories, though.
More on the software side of things: me wonders how they came to the decision to put the BIOS in ROM, without actually providing a standard way to extend it.
Mehas this horribly-coded[0], but basically functional, proof-of-concept “DEBIOS”[1], present on the initial track of the disk, that loads and initializes ROM images from right after it.[2]
Me’s not done much w/ it, and it’s a bit late… but me *does* wonder if IBM didn’t have something like that during pee-cee development. Would be particularly useful for them to have before the ROM contents were more-or-less set in stone.
Also: why does the boot record have the signature at the *end* of the sector? (That’s even more awkward than Intel putting the IVT at 0…) What was wrong with using the ROM format on-disk?
[0] Me’s never been good at i86 assembler.
[1] “Disk-extended”.
[2] It also provides more general {boot disk, console} interfaces.
The BIOS of the IBM PC was not extendable until the third and last revision, dated 10/27/82. That one can load Option ROMs, allowing it to work with certain VGA cards and other types of disk controllers.
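For context, the option ROM convention that later BIOS scan looks for is simple: a 0x55 0xAA signature, a length byte counting 512-byte blocks, an entry point at offset 3, and a byte-sum checksum of zero. A rough sketch of such a scan follows; the scan range, step size and the peek_phys helper are illustrative assumptions, not code lifted from the BIOS listing.

    #include <dos.h>

    /* Sketch of an option ROM scan following the 55 AA header convention.
       Borland-style MK_FP is used to form far pointers to physical addresses. */

    static unsigned char peek_phys(unsigned long addr)
    {
        return *(unsigned char far *)MK_FP((unsigned)(addr >> 4),
                                           (unsigned)(addr & 0xF));
    }

    void scan_option_roms(void)
    {
        unsigned long addr, len, i;
        unsigned char sum;

        for (addr = 0xC8000UL; addr < 0xF4000UL; addr += 0x800UL) { /* 2 KB steps */
            if (peek_phys(addr) != 0x55 || peek_phys(addr + 1) != 0xAA)
                continue;                            /* no ROM signature here */
            len = peek_phys(addr + 2) * 512UL;       /* length in 512-byte blocks */
            for (sum = 0, i = 0; i < len; i++)
                sum += peek_phys(addr + i);          /* 8-bit checksum of the ROM */
            if (sum == 0) {
                /* valid ROM: the BIOS far-calls the init entry point at addr + 3 */
            }
        }
    }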
Oh, that made me remember that there is an empty socket for a ROM, U28, whose purpose was never made really clear. Since the last BIOS revision it could be used as a motherboard-plugged Option ROM, since it is loaded in the same way (and I recall hearing about a Maynard SCSI controller that came with a ROM chip to plug into U28), but what was its intended purpose before that? Or was the loadable Option ROM “specification” part of the original design, just not finalized when the IBM PC launched?
And netbooting, me gathers.
U28 always seemed like some kind of insurance policy to me.
Also, what is wrong with “BIOS in ROM”? Where would you have put it? There aren’t alternatives. And you need firmware, no exceptions. The x86 processors are hardwired to begin reading instructions from a specific location near the end of the 1 MiB physical address space, so there had to be something non-volatile with firmware code mapped there for the processor to start bootstrapping the rest of the platform.
Moreover, the BIOS was socketed. At least that made it easy to change the BIOS ROM chip for another newer version by swapping chips. Early BIOS upgrades were distributed that way, as new ROM chips.
Needing a PROM burner to correct even minor defects?
As far as me understands it: S-100 machines tended to have a trivial bootstrap in ROM, which loaded the BIOS — part of CP/M — from disk. Me wasn’t around back then, so me might be in error on that one, though.
The issue is that prices for ROMs were falling. In a 1979 Jameco ad, the 512-byte ROM chip and the 8 KB ROM chip were both about $10. IBM would be paying less in 1981; the 8 KB EPROM was down to $3 by 1983 at Jameco. It didn’t make much sense to try for an Altair-style minimal monitor that loaded more complex routines if the extra space was free.
IBM did release patches for the 5150 fairly early. TWOSIDE which allowed DOS 1.0 to use the extra surfaces of a double sided drive as drives C and D showed up in March of 1982.
Remember that DOS first booted on a 5150 in Feb 1981. IBM had done 6 months of development to reach that point, but none of that development could be done with disks attached to the 5150. I think IBM had a PROM burner in house and could easily just make a new ROM every time a bug was located. It would probably be faster than creating a prototype-only ROM with a monitor that would load new versions of the BIOS and BASIC off cassette, especially since that would have required equipping a different system with matching cassette routines.
IBM made one mistake with the 5150: the soldered in bank of RAM. Cassette BASIC should have been dropped with the PS/2 line at the latest. The extra space was needed and the PS/2 was breaking compatibility in plenty of other places.
What shape did those “patches” take…? A “U28” ROM? Driver in the initial track? DOS-specific stuff?
Of course they’d have a PROM burner… just updating a sector on the disk is a lot less of a hassle. They might also not have established the exact feature set to include. It’s about flexibility. (And, to a much more minor extent: mobo space.)
Who knows what kind of wire-wrapped prototypes they were working with, early on.
Furthermore, that all still leaves the question: why are there separate option ROM and boot record formats? And why is the latter so silly?
Some early fixes for the IBM PC. I’m sure there are more.
OCT 1981 updated BIOS – reported problem fixed in update; get new ROMs
I know the BASIC ROMs got updated but I don’t remember how it happened
TWOSIDE – DOS “driver” shipped as BASIC source that builds a short COM program
POKE 106,0 – included in some BASIC programs; fixed hangs that occurred when INKEY$ was used right after a function key was pressed
IBM APL – the big winner. A replacement 8088 could be supplied along with the 8087.
The initial motherboard prototype had 5 sockets for the ROM BASIC. U28 may have been kept in place in case the BASIC could not be shrunk to fit in 32K or if IBM decided to include a few more features.
Option ROM and boot record were designed by different people and do different things. I do not see anything silly in the boot record, though; with only a couple of months between first boot and DOS having to be locked in for release, mistakes were likely. I find its focus on maximizing data disk storage capability preferable to many of the other early disk formats.
One consideration re: things like U28: the IBM PC really does feel like IBM was trying to match or beat Apple spec-for-spec.
The Apple II as originally designed had the D0 and D8 ROM sockets unpopulated, and there were a few ROMs available for those sockets – Programmer’s Aid #1, a utility ROM that added BASIC renumbering, BASIC program concatenation, tape verification, machine language program relocation, memory testing, and BASIC sound and graphics routines, in the D0 socket being the most popular.
With the introduction of Applesoft BASIC (Apple’s adaptation of Microsoft 6502 BASIC), it needed the D0 and D8 sockets for itself, ending that ecosystem.
This is a wild guess, but I wouldn’t be surprised if IBM was trying to do something similar with U28.
“Cassette BASIC should have been dropped with the PS/2 line at the latest.”
Are you saying that PC DOS 3.3 should have shipped with GW-BASIC? At the time MS was already selling packaged product versions of MS-DOS to OEMs. It would be nice to unify the two.
@Richard Wells:
That’s informative, thanks for the elaboration.
Though me still disagrees on the diff formats — and putting the signature at the end of the boot sector, possibly in the middle of where code would be (the latter is the “silly” part). The boot record could just as well have been treated as a 1-sector option ROM, after all. Given netbooting (or should mesay, “remote IPL”): the mechanism isn’t all that different.
There is no signature at the end of a floppy boot sector required by the BIOS. Just look at boot sectors for DOS 1.x.
You’re right: meconsulted the BIOS source, and a ‘JMP BOOT_LOCN’ is all that’s there. Me bad, entirely.
Which leaves me to wonder WTH introduced that check, and why, and why so clumsily.
The other lesson, aside from the right time and place of the 8088 and Microsoft, is that IBM basically spent their budget in exactly the right place, most of the time. Most micros were compromised in one way or another, and IBM’s design basically cut the Gordian knot of that. Most noticeable is the keyboard (which I hate to put focus on because that’s all modern people seem to focus on) – pretty much every review had glowing words for the keyboard. That counted for a lot when most of your competitors skimped hard on it!
Most of the flaws I’d say are nitpicks – annoying, but not fatal, like the messed up drive change notification and the missed opportunities in MDA.
While me’d agree that those latter two examples could be considered minor, there are a number of flaws that really are gratuitous, even if somewhat tangential to the typical luser’s experience.
The outrageously overcomplicated kbd interface comes to mind. RS-232? Wussdat again? (Especially weird since it was used right “next door” for diff ports).
The keyboard interface was not outrageously complicated in the PC, far from it. It was quite simple. What exactly would be the point of using RS-232 when the keyboard was the only device that was supposed to be attached to it? The keyboard interface did exactly what it needed to do. Each key on the keyboard had a number and that was its scan code, very straightforward.
In the PC/AT it got a lot more complicated with the keyboard controller gaining its own intelligence and translating scan codes, but that’s not something the original PC can be blamed for.
Worth mentioning that the original packaged version of MS-DOS 3.20 in 1986 was very buggy, forcing MS to release 3.21.
@Necasek, about the kbd interface:
Me got me information from [0]. In short: it describes weirdo inconsistencies between the roles of both endpoints. Now, it explicitly talks about “PS/2”, so did the weirdness only get introduced w/ the AT? Or did the fellow just not get it?
(Me’s sadly lacking on me schematic-reading skills, and the 5150 techref doesn’t quite seem to spell it out.)
Either way: RS-232 was readily available and, as mesees it, there was no need to implement a custom interface.
[0] https://www.snafu.priv.at/mystuff/sunkbd.html, section “Preliminaries: PS/2”
Some of the complaints in the article are reasonable, some are not. IBM documented the keyboard behavior pretty well. The tone of the article is clearly “this is not a Sun keyboard, therefore it must be ugly and horrible”.
The keyboard interface did significantly change in the PC/AT. The PC and PC/XT used an Intel 8255 chip, which is a relatively dumb device, plus several discrete latches that converted the serial bits from the keyboard into parallel data presented to the 8255. The communication was only from keyboard to host (except the host could reset the keyboard). Scan codes directly corresponded to key positions. It really was pretty straightforward.
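A rough sketch of that path in C terms, to show how little there was to it; inportb/outportb and the interrupt keyword are Borland-style compiler extensions, and the real handler lived in the BIOS as assembly and also maintained the type-ahead buffer.

    #include <dos.h>

    /* Simplified sketch of the PC/XT INT 9 keyboard path described above. */
    static void interrupt keyboard_handler(void)
    {
        unsigned char scan   = inportb(0x60);  /* scan code = key number; bit 7 set on release */
        unsigned char port_b = inportb(0x61);

        outportb(0x61, port_b | 0x80);         /* pulse the keyboard acknowledge bit */
        outportb(0x61, port_b);

        /* ... translate 'scan' and store the result in the keyboard buffer ... */

        outportb(0x20, 0x20);                  /* non-specific EOI to the 8259 */
    }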
The PC/AT used an 8042 style microcontroller with custom firmware as a keyboard controller. The keyboard itself was significantly different and allowed bi-directional communication, accepting commands from the host (e.g. LED and typematic control). The 84-key AT keyboard produced significantly different scan codes compared to the PC keyboard, but the PC/AT keyboard controller (by default) translated them back. The Enhanced 101/102-key keyboard further complicated things. Due to changes in the keyboard layout, significant complexity was introduced so that the Enhanced keyboard would look much like the old PC keyboard to existing applications. The PS/2 keyboard design is not something that anyone sane would start out with, but it is the way it is for reasons that are not crazy.
Anyway, the part you seem to be missing is that the microcontroller in the keyboard communicated directly with a chip/microcontroller in the PC. RS-232 would have required additional chips, increasing the cost while solving no problem whatsoever.
The IBM keyboard designs were intended to make it easy to adapt to other languages and accept key chords. Most of the RS-232 keyboard designs I am familiar with were fixed to a single language and only accepted a single modifier key at a time. IBM planned on permitting CTRL-ALT-Shift-Function simultaneously pressed. Try to design an RS-232 keyboard that would do that. Keyboards across the industry were moving away from the terminal RS-232 design in 1980; IBM wasn’t alone in it.
Ah, so it became bidirectional w/ the AT. That medid certainly miss 🙂
Me’s well-aware that the scan code translation mess only appeared later on.
Apart from that, it all depends on how one sees the keyboard: an inseparable part of the machine, or a semi-standalone device that may potentially be swapped with flag-waving or an optical telegraph if desired? (Smoke signals would probably be a bad idea.)
@Richard Wells:
That would all seem pretty tangential to what interface would be used, though.
“Existing keyboards aren’t flexible enough to work with this machine” is, of course, a valid consideration. Just as much as “a 300 baud modem *really* won’t do anymore”.
IBM could’ve easily raised the standard without destroying fundamental compat.
Don’t forget that Type 5 keyboards are, indeed, basically RS-232 devices themselves. (See the article me linked to.)
Sun was stuck in the land of multi-user Unix and thus kept to the mid-70s terminal design with RS-232. Read the Type-5 specification and see what limited values are available from the keyboard and how important the switch block is. I dislike the keycodes returned being different between the US and International keyboards. The OS needs two different routines to handle the same key with the same label.
Entering a “£” symbol requires pressing 3 keys* and thus passing 6 bytes from keyboard to computer. Much more difficult than getting a modified keyboard customized for a specific nation and pressing the key with the correct symbol on it.
* 5 key on the IA. Ctrl-Shift-F1 is the compose key for keyboards without a dedicated compose key and then two more keys for the special key definition. Fun.
Me contests the notion that Sun was “stuck” there. The separation between terminal and computer makes complete sense; it’s a shame it has largely been abandoned.
Anyway, that doesn’t apply to the keyboard design; terminals of the time were already quite a little smarter than blindly relaying the keypresses.
The Sun folks made the decision to keep the keyboard interface, which is *not* the terminal interface, simple and effective. From that point of view, the IBM pee-cee kbd interface is, like with so many things IBM, hellishly overcomplicated.
Hence the tone of that article.
Key mapping does not really belong in the OS, no. The point is: it doesn’t belong in the keyboard either! It belongs in the terminal. That’s the sweet spot. By absorbing the terminal into the OS, the handling of *many* things got displaced and consequently muddled. Putting it in the keyboard is making the opposite mistake (a point the AT really drove home).
“why did IBM not use the Motorola 68000”
Actually, IBM did use the 68000 on a computer contemporary with the PC, the IBM System 9000 laboratory computer line[1] released in 1982. This was done by a different division of IBM, so knowing how IBM works the PC folks probably didn’t even know it existed. Even if they did know, reuse of the S/23 work and an 8088 was obviously the smart/correct decision given the constraints they were under.
Re: Multibus or S100
Multibus and S100 cards are large (in their standard form factors) relative to ISA, use many more signal pins than ISA, and require bigger slots on the motherboard, all of which means they are relatively more expensive to put in a computer. Using one of them would likely have changed the form factor of the PC (and been unlikely to allow 7-8 expansion slots) and would have forced a huge change in the power budget (already marginal in the PC as built).
Plus…S100 could be very finicky due to a variety of technical design decisions and I doubt IBM would have wanted to take on the support issues.
Re: Keyboard & Serial
I always thought the keyboard interface was odd as well. Why use a 40-pin 8255 + latches when you could have used a 28-pin 8251 (and you wouldn’t even have needed level converters… 5V would have been fine)? But, meh, it works; I’m sure they had a reason to do it that way.
N.B. – pretty much every port on your modern PC is a serial port now (USB, Thunderbolt, SATA, etc).
And, yes, the keyboard quality was a *huge* differentiator. Still is to some extent.
[1] https://en.wikipedia.org/wiki/IBM_System_9000
Re: 8089
I think the ‘secret’ problem with the 8089 was that Intel was pushing it heavily (right or wrong) as the “Intel version of IBM mainframe channel architecture”. I bet some IBM mainframe exec sent a memo at some point that said “thou shalt not use the 8089 in the PC, lest you step on our dainty toes”.
But in the end (as noted above), it was very expensive and limited, and compared to plain old DMA it required much more programming with its own toolchain (asm only). Wasn’t worth it.
@KJ, on all those newfangled serial interfaces:
The problem is that none of those are remotely as generic as the RS-232 interface. They’re all overspecified and/or special-purpose. Something went wrong there.
The System/23 has a parallel connection for the keyboard. I haven’t used the floor model of the System/23 so I don’t have knowledge of how hard it is to move the detached keyboard with the heavy parallel cable. The 8255 allows the interior circuitry of the PC to keep similar parallel keyboard paths as the System/23. With the latches, the keyboard could be connected through a cheaper and lighter serial cable but translated to parallel.