ESDI Adventures

At long last, I got hold of a decently functioning ESDI drive. From my earlier adventures, I had a WD1007V-SE2 controller, as well as an older WD1007A. The WD1007A (Compaq branded) used to live in a Hyundai 286 machine together with a Microscience ESDI drive. But the Microscience drive tragically died some years ago.

I also have a somewhat working ST4182E drive (152MB), but its heads have an unfortunate tendency to stick when the drive is not in use and then require manual intervention, so the drive isn’t very usable.

Now I got a CDC/Imprimis/Seagate Wren VI drive, CDC model number 94196-766 or Seagate model ST4766E. It has a formatted capacity of about 664MB and it was about the biggest ESDI drive in Seagate’s product lineup.

A 1991 ST4766E ESDI drive

The drive was sold as untested and I didn’t expect much of it. Visual inspection revealed mild corrosion in one area of the PCB, but there was no obvious cause for it (no leaky capacitors or some such). It may have been a result of sub-optimal storage conditions.

Before powering up the drive for the first time, I was rather apprehensive. But the drive spun up just fine and made the right kind of noises—heads unlatching followed by a seek test; by now I have a very good idea what a Seagate drive of CDC lineage should sound like. There were no suspicious noises, and for a full-height 5.25″ drive with 8 disks inside, the ST4766E is fairly quiet.

Controller Setup

Setting up a system with the ST4766E and WD1007V-SE2 was not entirely trivial. The WD1007V has its own BIOS, but it is not what one might expect from a disk controller BIOS. The WD BIOS can format a drive but it does not provide an INT 13h service.

Instead, a standard AT compatible BIOS is assumed. That is because the WD1007V presents an ESDI drive through the standard PC/AT style disk interface, which also happens to look awfully like IDE.
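
For reference, the register set being presented is the standard AT fixed disk “task file” at I/O ports 1F0h–1F7h. A quick sketch in C (the register names here are my own, not WD’s):

    enum at_disk_ports {
        HD_DATA    = 0x1F0,  /* 16-bit data register                 */
        HD_ERROR   = 0x1F1,  /* error (read) / write precomp (write) */
        HD_NSECTOR = 0x1F2,  /* sector count                         */
        HD_SECTOR  = 0x1F3,  /* sector number                        */
        HD_LCYL    = 0x1F4,  /* cylinder low                         */
        HD_HCYL    = 0x1F5,  /* cylinder high                        */
        HD_SDH     = 0x1F6,  /* drive/head select (SDH)              */
        HD_COMMAND = 0x1F7   /* command (write) / status (read)      */
    };

Software banging these ports cannot easily tell whether an ST-506, ESDI, or IDE disk subsystem sits on the other end, which is precisely the point.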

I plugged the WD1007V into my favorite board (Alaris Cougar) after disabling the Adaptec VLB IDE controller on the motherboard and the floppy controller on the WD1007V.

Then I ran the WD BIOS to format the drive. That took a while but went well. Next I tried to partition the drive and install DOS on it, which went rather less well.

FDISK did its job, but took an unusually long time. The DOS FORMAT command progressed… very… very… very slowly. It was in fact so slow that I was pretty sure something had to be wrong.

Which gave me some time to do a bit of research. The WD1007V manual claims that the controller only supports up to 53 sectors per track, and the drive was jumpered to 54 (which did not stop the low-level format from succeeding!). There is an old OnTrack Q&A document that also says 53 sectors was the maximum for the WD1007V. And there is an old Usenet post where the author complains that with 54 sectors per track, “the thing ran ridiculously slowly”. The drive’s own manual notes that 53 sectors is the most common setting, but does not explain why it would be.

I strongly suspect that the slowness was caused by the controller’s inability to handle 1:1 interleave at 54 sectors per track. If the controller misses every single sector and must wait a full revolution for each one to come around again, that would sure slow things down a lot.
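
A back-of-the-envelope calculation shows the scale of the problem. A small sketch (assuming, as a worst case, that every sector costs one full revolution of a 3,600 RPM disk):

    #include <stdio.h>

    int main(void)
    {
        const double rev_ms = 60000.0 / 3600.0;  /* 16.67 ms per revolution */
        const int spt = 54, bytes = 512;

        /* Best case: the whole track is read in a single revolution. */
        double best = spt * bytes * (1000.0 / rev_ms);
        /* Worst case: every sector costs one full revolution. */
        double worst = bytes * (1000.0 / rev_ms);

        printf("1:1 interleave: %.0f KB/s\n", best / 1024);  /* ~1620 KB/s */
        printf("missed sectors: %.0f KB/s (%dx slower)\n",
               worst / 1024, spt);                           /* ~30 KB/s */
        return 0;
    }

A 54-fold slowdown would comfortably explain a FORMAT that visibly crawls.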

So I gave up on the DOS FORMAT in progress, re-jumpered the drive to 53 sectors (which reduces the capacity a little), and enabled alternate sectors when formatting. After the format was done, I let the controller apply the bad sector map which is stored on the drive itself.

Note that formatting with “alternate sectors” aka spare sectors means that 1 sector per track is set aside for defect management. The alternate sector is assigned ID 0 and therefore won’t normally be used. If one bad sector is found on a track, the controller can mark it as bad and use the alternate sector instead.

While this reduces drive capacity, it is critical for operating systems that can only manage a limited number of drive defects (and a 650 MB drive can have rather more defects than a 20 MB drive, unsurprisingly). It can also be useful for systems that can only mark entire tracks as bad. For the FAT file system, it might on balance be better to not use alternate sectors and just let DOS mark the corresponding clusters as bad.
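
The capacity cost is easy to put a number on. A small sketch using this drive’s geometry (1632 cylinders, 15 heads, 53 physical sectors per track):

    #include <stdio.h>

    int main(void)
    {
        const long cyls = 1632, heads = 15, bps = 512;
        long without = cyls * heads * 53 * bps;  /* every sector usable */
        long with    = cyls * heads * 52 * bps;  /* 1 spare per track   */

        printf("no spares  : %.1f MB\n", without / 1e6);  /* 664.3 MB */
        printf("with spares: %.1f MB\n", with / 1e6);     /* 651.8 MB */
        printf("cost       : %.1f%%\n",
               100.0 * (without - with) / without);       /* ~1.9%    */
        return 0;
    }

Less than two percent of capacity buys per-track defect management, which is a very reasonable trade for operating systems with limited defect handling.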

One catch during post-formatting setup was that the motherboard BIOS detected the drive with CHS geometry 1632/15/53 (yes, it can be detected because the WD1007V ESDI controller supports the IDENTIFY DRIVE command). Except with alternate sectors enabled, that is wrong! The BIOS must be set to use 52 sectors per track, not 53.
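
For illustration, here is a minimal sketch of extracting the geometry from the IDENTIFY DRIVE data and applying the correction by hand. The word layout follows the convention later standardized by ATA (word 1 = cylinders, word 3 = heads, word 6 = sectors per track); the buffer is stubbed with the WD1007V’s values rather than read from the controller:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t id[256] = {0};
        id[1] = 1632;  /* default cylinders */
        id[3] = 15;    /* default heads */
        id[6] = 53;    /* physical sectors per track */

        int alternate_sectors = 1;  /* formatted with 1 spare per track */

        /* The controller reports physical sectors per track; with the
         * spare sector in use, one fewer is actually addressable. */
        int spt = id[6] - (alternate_sectors ? 1 : 0);

        printf("BIOS geometry to use: %d/%d/%d\n", id[1], id[3], spt);
        return 0;
    }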

This is a deficiency of the WD1007V controller. It could reduce the number of sectors per track it’s reporting when alternate sectors are enabled (there is a jumper on the controller), but WD probably didn’t think of that because in 1989, when the WD1007V was made, operating systems and BIOSes weren’t using IDENTIFY DRIVE yet.

I also let the motherboard BIOS apply geometry translation—obviously the drive’s native 1632/15/52 geometry has more than 1024 cylinders, which means that translation is required to access the full drive capacity.
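
The translation scheme itself is simple cylinder halving. A sketch of the usual approach (details varied between BIOS vendors, and a BIOS typically also reserves a cylinder or two, which is why SysInfo reports 815 cylinders below rather than 816):

    #include <stdio.h>

    int main(void)
    {
        unsigned cyls = 1632, heads = 15, spt = 52;

        /* Halve cylinders and double heads until the geometry fits
         * under the INT 13h limit of 1024 cylinders. */
        while (cyls > 1024 && heads <= 128) {
            cyls  /= 2;
            heads *= 2;
        }
        printf("translated geometry: %u/%u/%u\n", cyls, heads, spt);  /* 816/30/52 */
        return 0;
    }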

DOS Setup and Experience

At any rate, with the drive jumpered to 53 SPT and the BIOS set to 52 (to account for the “missing” alternate sector), FDISK wasn’t weirdly slow and DOS FORMAT ran at a reasonable speed, finishing in a few minutes.

DOS FORMAT discovered three additional bad sectors (or at least three bad clusters), which seems quite good for a drive made in 1991. Then I installed PC DOS 2000 on the drive without any incident, and followed with a few random utilities.

Norton SysInfo shows that the BIOS translated the drive geometry from 15 sides to 30 and halved the number of cylinders from 1630 to 815. Note that SysInfo shows the drive model as WD1007V, which is what the ESDI controller returns as the model in IDENTIFY DRIVE.

Over 600MB of disk space!

The drive is of course not that fast even by mid-1990s hard disk standards, but then again the ST4766E is a drive model released in 1988. It is a standard 3,600 RPM drive, which means the average rotational latency is 8.33ms.

The drive’s seek times are actually quite impressive for a 1988 design. Seagate gives the average seek time as 15.5ms, with a track-to-track seek time of 3ms and a maximum seek time of 37ms.

Compare this to old stepper motor drives which often had average seek times on the order of 80ms. CDC of course used voice coil actuators in their Wren drives which is why their seek times were far better.

Norton SysInfo agrees with the data provided by Seagate:

ST4766E benchmark results

Average seek time of 15.1 ms is really good for a late 1980s drive model. The transfer rate is also very good at about 1.3 MB/sec; it can’t be too far from the theoretical maximum.

While 15 Mbps corresponds to 1.875 megabytes per second, the drive’s sustained transfer rate can never be that high. There is some storage overhead and each sector needs more than the equivalent of 512 bytes on disk. More importantly, there is additional overhead caused by switching heads and seeking to the next track.
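
The numbers are easy to check. A sketch of the relevant arithmetic (using the 52 usable sectors per track as formatted here):

    #include <stdio.h>

    int main(void)
    {
        const double rev_per_sec = 3600.0 / 60.0;   /* 60 revolutions/sec */

        printf("avg rotational latency: %.2f ms\n",
               0.5 * 1000.0 / rev_per_sec);         /* 8.33 ms    */
        printf("raw interface rate    : %.3f MB/s\n",
               15e6 / 8 / 1e6);                     /* 1.875 MB/s */
        printf("per-track data rate   : %.2f MB/s\n",
               52 * 512 * rev_per_sec / 1e6);       /* ~1.60 MB/s */
        return 0;
    }

The measured 1.3 MB/sec is thus within roughly 20% of what the medium can deliver once head switches and track-to-track seeks are paid for.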

ESDI vs SCSI

The CDC/Imprimis/Seagate Wren VI drive has a bigger brother, the Wren VII with an unformatted capacity of 1.2GB—quite a bit more than the Wren VI’s 766MB. The catch is that the Wren VII was not available with an ESDI interface, only SCSI. (The Wren VI was available with both ESDI and SCSI interfaces.)

Yet when one looks at the drive details, it turns out that the Wren VII and Wren VI are mechanically more or less identical. Same number of platters, same data density. How is that possible?

The more than 50% capacity increase was possible thanks to ZBR. The Wren VII is divided into several zones with different numbers of sectors per track. The innermost zone of the Wren VII is the same as on the Wren VI, with a 15 Mbit/s transfer rate. But the outer zones use higher transfer rates and can therefore pack more sectors on a track; the outermost zone on the Wren VII uses a 21 Mbit/s transfer rate.

And this is exactly where ESDI was at a clear disadvantage compared to SCSI and even IDE. Yes, there were high capacity ESDI drives, up to 1.5GB. But these drives required faster transfer rates, up to 24 Mbit/s. For any given ESDI drive, if the drive could store N sectors per track, a SCSI variant of the same drive could store N sectors on the innermost tracks but N + M on the outer tracks.

If ZBR could increase the drive capacity by 50%, and speed up the outer tracks as well, who was going to say no to that? Around 1990, ZBR was becoming more common in SCSI drives as well as IDE, whereas ESDI was limited by the fixed transfer rate. Again the problem wasn’t that the ESDI transfer rates were low, it was that they were generally fixed.

ESDI was actually intelligent enough that it was possible to implement ZBR, because the controller could change the drive’s sectors per track and possibly the transfer rate on the fly. However, I don’t think there was any defined way for the drive to tell the controller what it was capable of.

With technologies like ZBR, it was far easier to put a smart controller on the drive itself (SCSI or IDE) rather than designing a highly complex interface between the drive and controller. Because SCSI and IDE hid these details, drive vendors could ship more intelligent drives without having to wait for new interface specifications and new controllers.

ESDI, an Intermediate Step

ESDI is an interesting evolutionary step between completely dumb ST-506 drives and self-contained, intelligent SCSI or IDE drives. An ESDI controller can discover the drive geometry and other information about the drive, and ESDI drives can store a factory defect list for the controller to use when formatting. These are clear advances compared to ST-506.

The problem with ESDI is that to achieve higher capacities, drives needed to use higher transfer rates. ESDI started at 10 Mbit/s, continued with 15 Mbit/s, and went up to 24 Mbit/s. That meant a new drive quite probably needed a new controller. And as mentioned above, technologies like ZBR were difficult if not impossible to exploit with ESDI.

ESDI could have evolved with more and more complexity being added to the interface between drive and controller. But it made much more sense to put the drive and controller together, completely hide the internal complexity, and only expose a much higher level and more stable host interface like SCSI or IDE.

With those, drive vendors could use ZBR, use higher RPMs, perform advanced defect management, and do all kinds of things which ESDI could only do with great difficulty or not at all.

In a way, by about 1990 ESDI started getting in the way more than it helped. Which is why it was completely obsoleted by SCSI on the high end and IDE on the low end, and by 1992 ESDI drives all but vanished.


18 Responses to ESDI Adventures

  1. Barolo Baron says:

    15 sides? Why not 16, or 2 sides for each platter?

  2. rasz_pl says:

    >ESDI was actually intelligent enough that it was possible to implement ZBR, because the controller could change the drive’s sectors per track and possibly the transfer rate on the fly.

    Afaik the ESDI drive itself is the source of the reference clock. The controller doesn’t know what clock to expect, so it must be built to accept anything up to its rated max speed. Looks like there is nothing stopping a theoretical purpose-built ZBR ESDI drive from working on a standard controller.

    An ST506 drive probably wouldn’t even notice being forced to work in ZBR mode by a smarter controller.

  3. Richard Wells says:

    It wasn’t until the early 90s that a robust hard drive controller could be made cheaply enough to put on every hard drive. Until then, a somewhat limited controller that could be shared between multiple drives was a much more affordable option. I really doubt anyone would have bothered with IDE drives if they cost an extra $200. The stripped down integrated controllers Compaq initially used were cheap but rather the wrong direction.

    Note that there were floppy drives with built-in controllers as well. The SAM Coupe was a noted recipient of those for its internal drives. They were not a successful concept.

  4. Michal Necasek says:

    Because CDC drives typically used a dedicated “servo surface”. This was used by the controller to find tracks and give it an idea where it is during a single disk revolution. All newer drives use “embedded servo” markings where the positioning information is intermixed with sector data.

    With an 8-platter drive, losing one surface was not a big deal, especially if it enabled information to be packed tighter on the data surfaces. For a 2-platter, let alone a 1-platter drive, a dedicated servo surface naturally wasn’t a viable proposition.

  5. Michal Necasek says:

    That (drive-driven ZBR) might work on the ESDI interface side, but not in an actual system. Because you couldn’t explain the drive geometry to the OS. Even the controller itself could have problems, certainly if it tried to do any geometry translation.

  6. Michal Necasek says:

    That depends on what kind of system you’re talking about. For larger systems with multiple drives, yes. For the typical PC, the vast majority of users needed exactly one drive, and there had to be a controller somewhere. Which is why IDE was already cost effective for Compaq in 1986-1987. That was especially true for portables where it might not even be physically possible to install more than one drive, and integrated controllers made a lot of sense.

    Looking at DISK/TREND, ESDI was effectively dead circa 1990 and drive vendors stopped introducing new ESDI drive models. Of course they kept building existing ESDI models for a while, but everything new was SCSI and IDE.

  7. MiaM says:

    Nice!

    I think you made a typo with “tracks per sector” rather than “sectors per track”. Sure, you can view it as sector 1 has more tracks than sector 70, but that seems backwards 🙂

    I wonder if ST-506 disks would actually support different speeds? I would think that the more or less analogue circuits are tuned for the data/clock rates used by MFM/RLL.

    Re floppies with a built in controller: I would think that the LS120 is the most common example, as it’s an IDE drive that can read/write regular PC 720k/1.44M floppies (and also obviously the LS120 disks).

    Technically not integrated with the drive, but almost, is the floppy drive option for, for example, the VAXstation 3100, which uses a separate PCB that converts SCSI to a regular floppy interface. You can connect this PCB to any computer using SCSI and read/write PC compatible low level format floppies. When I did a simple test with this 20-30 years ago with my Amiga, I think it never turned off the floppy motor.

    I’ve also seen a Wang external 5.25″ floppy that had a PCB that converted regular Shugart/PC floppy to SCSI/SASI. I don’t think I ever got that interface to work with anything, but the enclosure itself was great as a generic external enclosure.

    I agree that for the average user there was not much need for connecting more than one drive.

    The use case for multiple drives in, say, the early 90’s, at least from my point of view, was to be able to use two cheap old drives to achieve a capacity on par with what a far more expensive new drive would deliver. For many years I used an ST412 and a drive that was spec’d to 20M, with an RLL controller, giving me 15+30M, and one of the drives sat outside the PC case as it didn’t fit. (In theory I could have put the floppies outside, but that would have been more awkward). 🙂

    Btw with 50% more capacity thanks to a variable data rate, the extra cost of one controller per drive as compared to controllers shared by multiple drives would soon have been eaten up by the capacity increase.

    It would actually be rather easy to just check the prices in ads in old computer magazines from back in the day. Sure, the cost would be slightly higher to have variable data rates (and given the high speeds I would guess that you’d have a few crystals, one for each speed, similar to VGA pixel clocks of the same era, rather than some PLL or frequency divider like floppy controllers such as the Commodore 8-bit ones used), but on the other hand only needing one product would have saved a bit.

    Re time to switch tracks: I would think that at least switching heads within the same cylinder would be so fast that it can be done between sectors, and thus cause no speed reduction as compared to continuously reading the same track over and over. When actually moving the heads I would think that any good controller would just read whichever sector appears first. The problem might be the OS that requests a particular sector and thus would have to wait for that sector.

    I wonder if any cache program did any analysis on this, and used some sort of algorithm to determine the optimal next sector to request when moving the heads to a specific track (and reading a lot of sectors from that track)?

    P.S. if the disk/controller reports one sector too many then you’d likely have problems running Linux! I’ve written about this over and over, but I used to have a Conner 20M IDE drive that reported one sector too many and that made it not work in Linux. There surely would have been some boot arguments to the kernel to make it work, but that was outside my knowledge at the time.

    Also re multiple disks with one controller:
    If there really had been a demand for cost savings, surely there would have been controllers that could handle more than two disks?

    I can’t remember the model, but someone asked a few times in the Sharp 8-bit MZ computer facebook groups about some WD combined floppy+MFM hard disk interface that IIRC had three MFM hard disk ports and one floppy port, where the floppy was programmed as if it was another hard disk, more or less. Maybe the head selects for more than two heads were used to select multiple floppies? Either way, that controller was obviously related to the regular WD MFM controller that became the basis for IDE, but there were extra bits to select more than two drives. Unfortunately I can’t remember the name of the controller, and thus I can’t find any documentation for it. Looking at the original IBM AT hard disk controller, the drive/head register requires bits 7-6-5 to be set to 1-0-1, while bit 4 selects the drive and bits 0-3 select heads 0-15. This would obviously allow bits 7-6-5 to be used to access more than two drives, i.e. up to 16 drives on the same controller. But this never happened.

    Side track, suggestion for a future blog post perhaps?: What about the ability to jumper this original IBM AT MFM hard disk controller as primary/secondary? What is the timeline for supporting this? I know that for example Linux would use the controller directly, and for IDE with the same addresses additional disks would just work. In DOS a secondary IDE channel could be used for CD-ROMs and similar. My impression is that BIOS support for more than two hard disks started to appear around the 486 era, while motherboard BIOS support for more than one floppy controller never appeared (and support for more than two drives on the primary floppy controller was only a thing on PC/XT/8088 class computers).

    Lukewarm take: this business of features that were never fully implemented seems like a recurring theme along the IBM / Microsoft PC line.

  8. JQW says:

    I can remember installing a couple of servers with ESDI drives, one 600MB, the other 1.3GB. Surface scans took a long time – essentially, I had to wait until the following day to complete the install.

    With SCSI being already available, I’ve no idea why we stuck with ESDI, particularly as ESDI controllers didn’t do DMA transfers. RAID arrays were probably also available by then, too. These servers may have had a SCSI adaptor installed anyway for the QIC tape drive, which makes even less sense.

  9. Michal Necasek says:

    I am 99% certain some ESDI controllers could do DMA. That was purely a function of the controller, not the drive.

    Were the servers ISA bus? Or EISA? Or something else?

  10. JQW says:

    These were standard ISA bus machines, installed around 1991.

    The OS they were running certainly didn’t support DMA on ESDI drives, as ESDI and RLL/MFM drives used the same basic driver.

  11. Word Merchant says:

    Apropos veteran disk controllers, here’s an article on an ancient Intel FDC, the 8271, released around 1977, and inexplicably used in the BBC Micro.

    Makes for absolutely fascinating reading and re-reading. The design was both highly idiosyncratic and either a bit ahead of its time, or so left-field that it occupied its own timeline.

    Here you go:

    https://scarybeastsecurity.blogspot.com/2020/11/reverse-engineering-forgotten-1970s.html

  12. Josh Rodd says:

    IBM’s high end drives in the early days of the PS/2 were ESDI, and they had a fairly good controller (for its time) that implemented DMA, complete with an ABIOS-compatible BIOS to control the whole thing.

    Rather bizarrely, they implemented a type of “IDE” except instead of presenting an ST-506 interface, it presented the Microchannel DMA interface, and the connector was a Microchannel attachment, not ISA. Sometimes this is called “Direct Bus Attachment” or ESDI-DBA. As far as I know, it was only implemented on the models 50, 70, 90, and P70.

    I have no idea why integrating the controller onto the drive made more sense than either having an adapter card (which the model 50 did for its 30MB cheaper disk option, for example) or else putting ESDI logic on the motherboard. The only motherboard controllers IBM ever did were SCSI on later models.

    On the 50/70/90, it had a unique way of making the hard drive “pluggable” right into the chassis, with the connector lined up so all you had to do was slide the drive in and out. The DBA connector also carried power, unlike IDE, and was a rather long card edge connector.

    Overall it worked elegantly, except the only drives that ever existed were IBM models sized 30, 60, 120, and 160MB. Swapping the physical disk from an IBM SCSI disk of the same type doesn’t work – the hardware is identical, but the low level formatting is not the same and performing a low level format isn’t “low enough”. Virtually any DBA-ESDI drives you can find now are dead.

    By the early 1990s, IBM moved on to SCSI and then later to IDE.

    Another curiosity is that IBM never bothered with RLL drives and controllers for the PS/2 line. It was either ancient and slow MFM designs (compatible with ST-506), their proprietary ESDI, or SCSI. They never offered ESDI faster than the original 10 Mbit/s. But DBA-ESDI was ahead of ATA DMA by almost a decade!

  13. Richard Wells says:

    IBM also had a ST-412 style controller plus drive for the Model 25 and 30, effectively a hardcard without an expansion slot. The internal connector for the hard drive had 44 pins that electrically amounted to a reduced ISA slot. One might have thought the PCjr would have soured IBM on modified ISA slots. There were some offerings with a standard ISA controller but running the cable through the system was a challenge.

    Note that just about all ESDI and SCSI drives implemented RLL. The MFM drives with RLL were a bit unreliable and not something IBM would take a risk on.

  14. MiaM says:

    In hindsight the idea of having DMA lines in the ISA bus, and requiring separate logic for the I/O cards to handle DMA v.s. CPU access, was a mistake.

    I get why it was done this way – not having DMA lines would have required a split bus with “I/O” on one side and “memory” on the other side, and that would have been expensive and would also have caused problems like “XT slot 8” but worse.

    What IBM could have done, though, would have been to have a separate I/O enable line for each slot, and use a combination of I/O address decoding and the DMA controller to drive this line. Given that this would have saved 4-5 of the 6 (or more) DMA lines, there could have been four separate address lines that would reflect the regular address bus during CPU cycles, but be held low during DMA cycles. That way each card for which DMA would have been useful could just map the main data read/write register as the lowest within its I/O range, and then it would be up to drivers/OS/BIOS to either use DMA or PIO to read/write that register, without the card noticing any difference.
    Combine this with removing most IRQ lines from each slot, only having two lines where one would be a general interrupt service request and one that pulses whenever a byte/word is ready for read/write, and logic on the main board could select whether the latter would just advance the DMA counter or actually generate an interrupt.
    Given how large IBM was, they would likely have had the leverage to get Intel or possibly AMD to create a chip to do all this.
    With this it would have been possible to do things like using DMA to transfer large chunks of data via serial ports, running in the background with no character loss, and obviously any disk controller would be both DMA and PIO capable.

    Re PS/2:
    Was there an MFM/ST412 drive for the model 25/30? I.e., was there ever an option where the 20/34-pin MFM/ST412 interface was exposed between controller and drive?

    Otherwise I would argue that both the interface on the model 25/30 and that of the 50/70/90 are different types of “IDE like” interfaces. With a bus style interface between the drive assembly and the “computer”, I would say that there is no difference between ESDI, MFM and whatnot, and those words were just used to cause confusion. With a single PCB connected both to the drive’s heads and, via a bus style connection, to the “computer”/mainboard, there is no way to argue that a drive is ESDI (as in the drive sends a clock to the controller) or MFM (as in the controller runs at a fixed clock rate), since it’s all on the same PCB and in this case the PCB can be treated as a black box.

  15. Josh Rodd says:

    The 25/30 was an XTA design, but the only drives IBM sold were MFM “IDE” drives.

    The 50 and 50 Z had 20, 30, and 60MB options. The 30MB was described by IBM as “ST-506 like”. The 60 was ESDI.

    The 70/P70/90 was strictly ESDI. For whatever reason, IBM wasn’t interested in MFM/ST-506 interfaces for their DBA Microchannel drives.

    Regarding DMA in the original IBM PC: it had two purposes… DRAM refresh, and then the ability to do diskette I/O with a very simple non-buffering controller. The CPU was not fast enough to do programmed I/O without some extra circuitry on the controller, so they just used DMA instead. I view DMA as a cost saving measure.

    You’re right that some kind of slot ID information would have been very helpful, as would have been a dedicated IRQ and DMA line for each slot. Microchannel fixed this, with the extra wrinkle that Microchannel cards had to be able to behave like legacy ISA devices in many cases.

    Overall, direct bus attachment was an idea IBM pursued starting with the PC/AT, and then implemented for the 1987 PS/2 models 30, 50, and 70, yet did not use for the 60 or 80. The 90 implemented it solely so you could upgrade from a 50 or 70 and keep your OS and data – just slide the disk in. The P70 was the last model made with DBA.

  16. Richard Wells says:

    There was another MFM drive for the Model 25, a 20 MB drive paired with a controller card that fits in an expansion slot. The IBM Product Reference lists it as 4110. The earlier planar boards did not support the drive options of the later boards, so I suspect that once IBM ran out of the initial batch of drives, they used these combos instead of replacing the planar.

    There are a number of Model 25 and 30s which have a half length hard disk controller that has the connections on the end of the card and only supports a single drive. Could be MFM or ESDI, but having a 34 pin cable and a single 10 pin cable shows its limits. Not sure if it was the same pairing as 4110, but I haven’t seen that card in a machine other than the Model 25 or 30.

  17. Octocontrabass says:

    @MiaM: Bitsavers has datasheets for a WD1002-05 board that is probably similar to if not the same as the combined floppy+MFM board you’re thinking of. And, as you’ve guessed, it uses two of the head select bits to address multiple floppy drives.

    The WD1002-05 is not just related to the IBM AT drive controller, it actually uses the same chips, although probably with different firmware. The AT controller’s drive/head register requires fixed values in bits 7, 6, and 5 because those bits select the sector format, and IBM only supported 512-byte sectors with ECC. The AT controller’s “format track” command had to format the first 8 heads separately from the rest because some part of the firmware thinks bit 3 of the drive/head register is an additional drive select bit and not a head select bit.

    I doubt I’ll ever know for sure, but I wonder if someone at IBM got “8-bit” and “8 heads” mixed up when asking WD to modify their 8-bit drive controller to work on a 16-bit data bus…

  18. Michal Necasek says:

    The WD1010 controller chip is documented as supporting 8 heads and 4 drives. The 1984 IBM Options and Adapters Technical Reference does show a WD1010 chip on the adapter. Yet it also documents the adapter’s SDH register (which is mirrored in the WD1010 as well) as supporting two drives and 16 heads.
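
    For the record, the SDH encoding as IBM documents it packs into a single byte like this (a sketch):

        /* SDH byte: bits 7..5 = 101 (512-byte sectors with ECC),
         * bit 4 = drive select, bits 3..0 = head select. */
        unsigned char make_sdh(unsigned drive, unsigned head)
        {
            return (unsigned char)(0xA0 | ((drive & 1) << 4) | (head & 0x0F));
        }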

    Register 3F6h bit 3 is documented by IBM as follows: “A logical 0 enables reduced write current. A logical 1 enables head select 3.” The head select bit 3 is shown in the diagrams as HS3-/RWC- and the “more than 8 heads” bit comes from the drive table. On reflection, that probably has something to do with the ST-506 style drive interface more than with the adapter.

    What’s interesting is that the actual SDH bits are driven by the WD1014 chip. And WD documents that as supporting 8 heads and 4 drives. IBM appears to have repurposed pin 39 which WD documents as DSB2 (drive select bit 2) to mean HS3.

    Drive type 9 in the original PC/AT BIOS drive tables already had 15 heads so the support for more than 8 heads should have worked somehow. I don’t see anything special in the PC/AT BIOS format routine.

    I will note that newer circa 1986 IBM adapters have a different set of WD chips, probably equivalent to a WD1003-WA2. But a 1985 WD1002-WA2 board I have does indeed have WD1014, WD1010A, and WD10C20 chips on it, as well as a mysterious NEC 8602X7 chip which must be a WD1015 equivalent.

    At any rate, either IBM used WD chips that do not correspond to WD’s datasheets, or IBM/WD just wired it creatively. I suspect the latter.
