Recently I had an occasion to find out why NetWare 3.12, using the shipped IDE driver (IDE.DSK), behaves very, very strangely when let loose on disks bigger than about 500MB (a very foolish thing to even try). The driver loaded fine and discovered a 1GB disk fine, but when accessing it, it was extremely slow and produced a lot of errors.
After some debugging and disassembling IDE.DSK, the cause became obvious, but the chain of events that led to it was anything but.
The IDE.DSK driver in NetWare 3.12 dates from April 1993, meaning it’s older than the original ATA specification. The driver notably does not use LBA, because it was not yet standardized at the time; disks with LBA were perhaps just coming on the market, and they were generally not big enough to require LBA. Crucially, there were also no translating BIOSes yet.
The symptoms of the problem initially didn’t seem to make much sense. After working correctly for a short moment, NetWare would select non-existent device 1 (slave) on an IDE channel with only device 0 (master) present. Then it would read the alternate status register, get rather confused, and start endlessly re-calibrating and resetting the drive. In other words, completely unusable. So why would NetWare do such a thing?
The problem has to do with the dreaded geometry translation. Internally, NetWare uses logical block numbers like any sane OS. But the IDE.DSK driver needs to translate those into a CHS (Cylinder/Head/Sector) address that the IDE controller can use.
NetWare appears to query the disk geometry from the partition table if it can, or from the FDPT. If geometry translation is in use, IDE.DSK may end up with a translated CHS which is not suitable for programming into the IDE controller.
The way IDE.DSK translates from internal LBA to CHS is quite straightforward: Divide the LBA by sectors per track. The remainder plus one is written to the Sector Number register. The quotient is divided again by the number of heads. The resulting remainder is written into the Drive/head register, and the quotient is written to the Cylinder Low and High registers.
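In C-like terms the conversion amounts to something like this (a rough sketch; the register layout follows the ATA Task File, but the helper name and structure are mine, not the driver's):

    #include <stdint.h>

    /* Hedged sketch of the IDE.DSK-style LBA-to-CHS conversion.
     * 'spt' and 'heads' are whatever geometry the driver settled on. */
    struct chs_regs {
        uint8_t  sector;      /* Sector Number register, 1-based                 */
        uint8_t  drive_head;  /* bits 0-3 head, bit 4 drive select, 5-7 fixed    */
        uint16_t cylinder;    /* Cylinder Low/High registers                     */
    };

    static struct chs_regs chs_from_lba(uint32_t lba, uint32_t spt,
                                        uint32_t heads, int drive)
    {
        struct chs_regs r;

        r.sector     = (uint8_t)(lba % spt + 1);    /* remainder + 1 */
        lba         /= spt;
        r.cylinder   = (uint16_t)(lba / heads);
        /* The head number is ORed in without masking, so a head >= 16
         * spills into the drive select bit (bit 4) -- the bug at hand. */
        r.drive_head = (uint8_t)(0xA0 | ((drive & 1) << 4) | (lba % heads));
        return r;
    }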
Now, the Drive/head register is laid out such that the low four bits contain the head number, and the next bit is the device select bit. IDE.DSK was clearly written with the assumption that the head number must be in the 0-15 range and the remainder cannot overflow four bits. Except when the geometry is obtained from the BIOS or the partition table, it can overflow, because the translated geometry the BIOS uses can have up to 255 heads.
And when the head number does overflow, it's simply ORed into the Drive/head register and spills into the high four bits. Which means that if NetWare is using drive 0 but the calculated head number is 16, the drive select bit will be set to 1. And that will confuse things quite a lot, because NetWare will end up reading the status register from the wrong (or non-existent) drive, and things just won't go according to plan.
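A concrete example with a BIOS-translated 255-head, 63-sector geometry, using the hypothetical helper sketched above:

    /* Translated geometry: 255 heads, 63 sectors per track, drive 0 (master). */
    struct chs_regs r = chs_from_lba(1024, 63, 255, 0);
    /* 1024 / 63 = 16 remainder 16 -> sector 17, head 16, cylinder 0.
     * Head 16 is 0x10, so 0xA0 | 0x10 = 0xB0: bit 4 is now set and the
     * command is addressed to the (possibly absent) slave device.      */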
Needless to say, when the IDE drive size is under 500MB and physical geometry matches logical, there are no problems at all. NetWare can also handle big disks with lots of cylinders as long as the number of heads and sectors per track matches the BIOS translation.
Since NetWare 3.x (and 4.x, though not 2.x) typically boots from a small DOS partition, the boot partition can easily fit entirely below cylinder 1024, where it's addressable by the BIOS. With sufficient care, old NetWare can handle disks within the IDE CHS addressing limits (about 8GB).
There are hints, including the EDD 1.1 specification, that some versions of NetWare's disk drivers rely on the FDPT matching the physical drive geometry. It's quite possible that drives not controlled by the BIOS would pose no difficulties, but boot drives do. Hale Landis suggests that a 'Type 2 BIOS' limits cylinders to < 1024 while exposing the true cylinder count in the FDPT (and explicitly mentions Novell).
As usual, the problem with such information is that it talks about “NetWare” or “Novell” as if there was just a single NetWare IDE driver implementation; in reality, Novell’s IDE drivers evolved, and newer versions (which can still be installed even in NetWare 3.x) handle much bigger drives just fine.
As for the head number overflowing into the drive select bit… that's a pretty classic sleeper bug. The code is incorrect, but when it was written, the assumption was that the number of heads of an IDE drive must always be less than 16. And at the time, that was true. Only after the driver was released did the landscape shift such that the assumption was no longer correct.
Update: Novell documentation (NetWare 3.12 README.UPG) indicates that the 1993 versions of IDE.DSK read the boot sector and if a DOS partition is found, geometry from BIOS/CMOS is used. If no recognizable partitions are found, geometry from the disk (IDENTIFY DEVICE) is used. That is not crazy but may cause surprises.
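The partition check itself is simple enough; roughly something like the following (a sketch of a plausible test only, not disassembled from IDE.DSK):

    #include <stdint.h>

    /* Does the 512-byte MBR contain a recognizable DOS partition?
     * Roughly the decision README.UPG describes: if yes, the 1993 IDE.DSK
     * reportedly uses BIOS/CMOS geometry, otherwise IDENTIFY DEVICE data. */
    static int has_dos_partition(const uint8_t mbr[512])
    {
        if (mbr[510] != 0x55 || mbr[511] != 0xAA)
            return 0;                               /* no valid boot record */

        for (int i = 0; i < 4; i++) {
            uint8_t type = mbr[446 + i * 16 + 4];   /* partition type byte  */
            if (type == 0x01 || type == 0x04 || type == 0x06)
                return 1;   /* FAT12, FAT16 <32MB, FAT16 >=32MB             */
        }
        return 0;
    }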
If I recall correctly, it goes even more crazy with another drive of the same capacity as the slave.
3.12 behaved better with SCSI, I guess because the protocol allows for potentially bigger disks. But I'm sure a 2TB filesystem is out of the question.
Also pretty much all big disks used to be SCSI, and yeah, SCSI always used LBA and handling large drives wasn’t a big deal.
And NetWare back then probably was the one OS most likely to have to deal with big drives.
IIRC I’ve stumbled upon BIOSes that can’t handle more than 528MB with DOS (or related operating systems), but where you could enter drive parameters that just won’t work with DOS. Maybe it even filled in the parameters by autoconfig from the IDE drive.
Btw, to get NetWare to boot from a DOS partition but still trick NetWare into not detecting it as a DOS partition, you might be able to use the special partition type and patched DOS version that Compaq used for the setup partition on the computers that actually used patched versions of DOS + Windows 3.x for their disk-loaded setup program.
(I've never taken the time to tinker with that, but judging by the output when the setup starts up and how it looks and feels, it's 100% obvious that it's just a version of DOS + Win 3.x in disguise)
@MiaM: Compaq’s setup partition used something other than Win 3.x, likely GeoWorks or in-house GUI.
Regarding IDE, what exactly was the first machine that came equipped with an ATA/IDE connector and what was the first hard drive to support it? Besides references to Compaq designing it in 1986, there isn’t much more in terms of model numbers of machines equipped or drives that actually supported it.
Drives supporting only CHS vs. both CHS/LBA is a very blurred line. I have a few of those generic USB to IDE bridges that won’t work with older drives. I suspect the bridge only works with LBA aware drives, yet I have quite a few 1+GB drives that still refuse to work with it! Is it possible that some vendors were still not equipping their drives with LBA well into the ATA2 era (1995-96)?
Compaq was involved with IDE (as the first OEM customer), also Western Digital (chips) and I believe CDC (drives). One of the first IDE drives on the open market was Conner, and I think Western Digital soon followed (once they started selling drives themselves). Getting reliable information about early IDE is tricky. In large part because to software it wasn’t really distinguishable from AT disk controllers (that was the point, after all).
Yes, it is very likely that you have non-LBA 1GB+ drives. LBA wasn’t required by ATA-1, not sure about ATA-2 at the moment. There definitely were non-LBA drives with capacities over 1GB. That’s why BIOSes can do CHS->CHS translation.
In the later ATA standards, CHS was completely deprecated; it's LBA only.
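For context, the usual bit-shift style of CHS-to-CHS translation looks roughly like this (a generic sketch, not tied to any particular BIOS):

    #include <stdint.h>

    /* Rough sketch of bit-shift CHS translation: halve cylinders and double
     * heads until the cylinder count fits under the INT 13h limit of 1024.
     * (Real BIOSes also special-case the step from 128 to 240/255 heads.) */
    static void translate_chs(uint32_t *cyls, uint32_t *heads)
    {
        while (*cyls > 1024 && *heads < 128) {
            *cyls  /= 2;
            *heads *= 2;
        }
    }
    /* Example: a physical 2484/16/63 drive is presented as 621/64/63 --
     * same capacity, but 64 heads, something no physical IDE drive had. */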
There is a PDF from the old Storage SIG that lists a bit of history of the development of the original IDE designs, from being a simpler adapter for ST-506 drives to the Conner true IDE drives which first showed up in the Compaq Portable III.
http://s3.computerhistory.org/groups/compaq-conner-cp341-ide-ata-drive.pdf
Not in the PDF: the translation used by the Conner drive to go from 26 sectors per track to 17 sectors per track seems a bit confusing to newer controllers. Also, the documentation on the WD version of the Conner drive indicates that on an actual IBM AT, the Conner drive must be the only drive attached to the controller and a special jumper needs to be set. AT clones did not need that special jumper and could have two drives attached to the controller; however, the Conner drive needed to be the master if paired with a WD drive.
Amstrad adopted the XT version of the IDE interface in 1988 (using WD drives) with poor results. It was a few years before anyone trusted IDE drives in important systems.
@MiaM, Chris M.
Compaq CMOS setup utilities use the Zinc Framework toolkit to get their GUI appearance and features on DOS.
http://openzinc.org/Screenshots.html
Old Paragon disk utilities, and some versions of Ontrack disk maintenance stuff, used the same. But PowerQuest utilities used the PharLap TNT DOS Extender Win32/NT API emulation layer for the PQMagic/DriveImage apps… That's why they behave and look different from the Paragon utilities.
Chris M and especially raijnikai: Interesting, now I’m even more inclined to actually have a look at the contents of the disk on my old Compaq Deskpro P166MMX machine 🙂
(The partition type would still be usable to boot NetWare without NetWare knowing that the disk has a “DOS-ish” partition).
Chris M: I’ve read some stuff about the problems with CHS and LBA, and it seems like with some USB-to-IDE controllers you can bypass parts of the controller and talk to the disk directly by sending commands that aren’t really intended as the first choice for reading/writing. Afaik the latest versions of UAE (the most known Amiga emulator) can do this out of the box to be able to access old IDE disks, particularly to read the contents from the disk of a physical Amiga and use it in the emulator.
Michal Necasek and Richard Wells:
One of the oldest IDE drives must be the Conner 5.25″ half height 60MB IDE drive which I found in a Compaq Deskpro 386/20 (the version of the Deskpro that supplies 12V DC for the monitor using a DIN connector, IIRC a 3-pin connector). That disk reported that it had one sector more than it actually had, an off-by-one error. That itself made it really hard to use the disk in Linux. The command to ask the disk what parameters it had was afaik the first major addition to IDE/ATA compared to the original WD 1003 and compatible 16-bit ISA MFM/ST-506 controllers. However, it took a rather long time for BIOSes to be able to use this command.
There were a bunch of non-ATA-compatible IDE-ish drives back in the day. Most notable are the drives in some IBM PS/2 computers. For example, IIRC the PS/2 Model 30/286 uses four instead of three address lines to the drive, obviously for use with something other than ATA drives.
Commodore used a 20MB 3.5″ 8-bit IDE-ish drive made by WD as the default drive shipped with their A590 hard disk controller + memory expansion unit for the Amiga 500 back in the day. That disk was known to fail at some point in time, like in the late 90’s or early 00’s, so I don’t think there are any working examples of that drive still left. The A590 device is still usable though, as it also contains a SCSI controller; you only have to flip a jumper to make the disk activity LED reflect either SCSI or “XT-IDE”, otherwise both controller modes work simultaneously.
Btw that WD “XT-IDE” drive was super slow, and the “XT-IDE” interface itself was also super slow. That was actually a usable feature, as the serial port of an Amiga didn’t have any fancy buffers like the 16550 on a PC: while a sector was written or read to/from a SCSI disk using the DMA controller in the A590 you would lose characters with a 14.4 modem if you also had too much display DMA happening at the same time (i.e. at least 8 colors at 640*256), but with the slow “XT-IDE” disk there were enough free cycles for the CPU between the DMA cycles all the time, so you didn’t drop any characters. 🙂 🙂
Hmm, did that Conner disk report the highest sector number rather than the number of sectors per track, I wonder? It’s not like anything was standardized back then, so it’s not necessarily an error at all. And yes, the IDENTIFY DRIVE command was super useful and made IDE drives plug-and-play (up to a point, anyway). In the olden days the BIOS had to tell the controller what the geometry was, and in the IDE days the drive told the BIOS what the geometry was.
I’m not sure when common BIOSes (AMI, AWARD) started being able to identify IDE drives but I suspect it wasn’t until 1991 or 1992.
The Conner disk reported that it had 18 sectors, while it had 17 like any other MFM drive had at the time.
I can’t remember how IDE/ATA is supposed to work, but it seems likely that you should report the number of sectors minus one, to make 0 mean one sector, as there is no use case for a disk which actually has zero sectors and by having this offset you could report disks with 256 instead of just 255 sectors using one byte.
Re first time computers actually read parameters from the disk: The Commodore Amiga 600 was afaik the first Amiga product with an IDE interface directly on the main board, and it was able to detect the drive parameters (although that was done by the software that was used to partition the disk; I assume that the ROM code to boot from the disk just read the Amiga partition table thing and used the data stored there, as that used the same format as used on older disks which didn’t support reading parameters from the disk).
ATA (the IDENTIFY DEVICE command) reports the number of sectors per track, meaning a value of zero is not valid (but it’s a 16-bit value, so 256 would be, except that can’t really be used). Don’t ask me why, that’s how it is. The lowest valid value is 1, the highest is 63.
But that means my theory makes no sense, and I have no idea what the Conner drive was really reporting. But like I said… there was no standard at the time. The drive only needed to work with the matching host BIOS.
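For reference, the default geometry lives in the first few words of the 256-word IDENTIFY data; roughly like this (a sketch only, ignoring the current-translation words and validity bits):

    #include <stdint.h>

    /* Sketch: read the default CHS geometry from a 256-word IDENTIFY DEVICE
     * buffer (words 1, 3 and 6 per ATA-1); no field-validity checks here. */
    struct chs_geometry {
        uint16_t cylinders;
        uint16_t heads;
        uint16_t sectors_per_track;   /* valid range 1..63, never 0 */
    };

    static struct chs_geometry identify_default_chs(const uint16_t id[256])
    {
        struct chs_geometry g;
        g.cylinders         = id[1];
        g.heads             = id[3];
        g.sectors_per_track = id[6];
        return g;
    }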
Well, I’m 99.9999% sure (unless my memory is incorrect) that it stated one more sector (per head/track) than it actually had. So it must have been another kind of off-by-one.
As almost nothing used the parameter query thing at that time, it went unnoticed for a long while. Maybe me and my friend (who bought the Compaq Deskpro 386/20 at some charity shop in the late 90’s) were the only persons who ever tried to run Linux on that disk model 🙂
We made a half-assed start at checking whether it would be simple enough to tell the Linux kernel to use the correct number of sectors, but I think we deemed it easier to use another disk instead, especially as iirc the initramdisk thing was rather new at the time and required some tedious process to “decompile” the initrd image, change stuff in it, and “recompile” it, and then tell lilo (iirc grub wasn’t really a thing then) where it was, and hope that you didn’t screw up.
Conner had multiple 60± MB IDE drives. One had 38 sectors per track; another had 40 sectors per track but reported it as 39 sectors per track while also doubling heads and halving cylinders. The related SCSI model had an extra 2 cylinders with 39 sectors per track. Errors in identification seem likely with that.
Then there is the craziness of servo tracks and servo surfaces and dedicated landing zones, which varied by manufacturer. A manufacturer-recommended bundle was generally the only way to be sure of having a working setup in the early days of IDE.
IBM used a number of ST-506 and ESDI designs that had a card edge connector bolted on. Even less of a general purpose interface than early IDE since many machines only worked with one or two specific drive models. IBM tarnished the reputation of the dedicated PS/2 card edge system by using absurdly slow drives in the Model 50.
Richard Wells: But did they have that many 60MB IDE drives that were 5.25″?
The only other 5.25″ IDE drives I’ve seen except for this one are the much newer Quantum Bigfoot drives. More or less every other IDE drive I’ve seen was 3.5″ or smaller.
I remember that people jokingly said that the Z in the PS/2 model 50Z didn’t stand for Zero wait state but the Zzzznoring sound when someone falls asleep.
Which PS/2 models actually used the analogue ST-506 or ESDI signalling between the drive and the computer? I’ve read about that many times but never stumbled on one of those.
I’ve only ever had a look at the schematics of the PS/2 Model 30 (both the 8086 and the 286 models) (which afaik are kind of the same as the Model 25, except the 25 has a built-in monitor) and those surely use a weird non-standard IDE interface. Seems like it would be fairly easy to make an adapter cable for standard IDE drives if someone would make a patched version of the BIOS though.
Btw, my impression is that many “well known brands” made computers that were far slower than the no-name computers right up till Intel started mass production of chipsets and killed off all kinds of other crap like OPTi, CMD-640 and whatever was used previously. IBM might have been the first of the well known brands to start producing slow computers, but other manufacturers like HP and to some extent Compaq followed.
@MiaM
The PS/2 Model 25/30 used a modified version of the XT-IDE interface used in Tandy 1000 and Amstrad computers (which was basically ATA with only 8 data lines). The PS/2 Model 50/70 used what IBM called “direct bus attachment” drives, basically an MCA version of ATA (it was suspected to be an ESDI drive with an integrated MCA interface). The PS/2 Model 60/80 towers commonly had “bare” ESDI drives with a dedicated MCA interface card.
The above is why I was curious about the early history of the 40-pin ATA/IDE interface. One really didn’t see it in desktop computers until the 386 was common around 1990. Just about every 286 clone I came across over the years was still running MFM/RLL drives into the early 90s.
I think this might be true only for the later models, not the very early models.
That’s a good point… maybe the IDENTIFY DEVICE data was really incorrect, but if no one used it with the drive back when it was sold, it would not have been a problem. Do you know when that Conner drive would have been made?
The IBM Model 50 announcement letter refers to the included drive as ST-506. IIRC, the card the drive connects to is a specialized MCA to MFM controller. BIOS only supported that one drive model. The ESDI drive versions had a card that did nothing but pass MCA signals through to an MCA to ESDI controller mounted on the hard drive. The upgrade ESDI card added new ROMs alongside the pass through traces. http://www.walshcomptech.com/ohlandl/8550/8550_Controllers.html
IBM Personal Systems got dragged into the world of IDE and SCSI reluctantly but MFM had already run its course and ESDI combined high prices with limited capacity.
5.25″ IDE drives include the very early CDC Wren drives. Many WD 3.5″ drives were placed in special 5.25″ mountings to be sold as 5.25″ drives. Redhill has pictures of a number of them. CMS 150 MB IDE drives were 5.25″ FH, as were Priam 200 MB IDE drives. Seagate had a number of 5.25″ HH IDE drives with larger platters, as can easily be seen from the more common related SCSI and ESDI models.
My point is that I think “DBA” ESDI and MFM only came later on. Early PS/2 likely used normal ST-506 and ESDI drives.
No, the DBA MFM drive was in the Model 50 from the very first release of the PS/2. The DBA ESDI was released along with the Model 50z plus there was an upgrade for the Model 50. The DBA ESDI shows up on several other systems like the P70. The cramped cases are a good reason to run traces along cards instead of cables.
Tower systems had normal cables connected to MFM or ESDI controllers. I think the first drive offered was the 44MB MFM with the 70 MB ESDI being later and a couple of larger ESDI drives followed.
Apple still makes slow computers, and they’re doing pretty well (though nowadays it’s mostly their software, not hardware, that’s slow). Not everyone cares primarily about speed, and I would even say that the more money a purchaser has, the less likely they are to primarily care about speed. Compatibility and reliability are more important, because a 10% faster system has to run for an awfully long time to make up for servicing or customization costs.
I honestly don’t even know if Intel’s objective was to kill the other chipset makers, more likely they were faced with a purely practical problem in that the CPU development was too tied to chipset development, and to bring new CPUs to the market Intel needed new chipsets. And it was not just CPUs, it was things like PCI and AGP and AC’97 and whatever else Intel had their fingers in.
My experience is that OPTi actually made rather solid chipsets (and so did C&T), it was the VIAs and SiSes and various nearly-no-name chipsets that weren’t all that great. The CMD-640 was not a chipset but rather a specific early PCI IDE controller well known to have “quirks”. Intel used it on their own boards.
Hmm, the 1992 ThinkPads 700 and 720 used drives (2.5″) that sound similar to the Model 50/70. IBM called them ESDI, who knows what it really was. The laptops were of course MCA based. Later (bigger) hard drives for those ThinkPads were reportedly IDE with an IDE to ESDI adapter.
The newer ThinkPads (750 and on, since about 1993) were ISA and later ISA + PCI based, and used regular 2.5″ IDE drives, all the way up until the T43 in 2005.
Sounds like NIH was alive and well at IBM in the early 90s. Why not use the existing 2.5″ IDE and SCSI drives when you can come up with something new! SCSI wasn’t spared this either as IBM’s cards had non-standard cable connectors on them too.
As for them being slow…yes…. they were. IBM was selling 486SLC machines with 16-bit buses (the 24-bit addressing wasn’t an issue yet) for well above what everyone else was selling full 486DX machines for. Pushing all those SLC systems defeated one of the core reasons for creating MCA, the 32-bit data/address bus!
Regarding chipsets, SiS had very well regarded solutions in the 486 era (the 496/497 PCI set, the 471 VLB, and the 406/411 EISA/VLB). I haven’t used VIA sets that old. Intel’s push came when integrated peripherals (the “southbridge”) started to become commonplace. AMD was late to that party; why it took years of crappy VIA chipsets (and quirky-drivered nForces) for them to buy a chipset maker (ATI) and start branding their own is beyond me.
I’d say Intel’s big chipset push was Pentium/PCI. They made chipsets before (386SL for example), they bought Zymos, they had complete EISA chipsets, they had 486 PCI chipsets, but none of those were all that widespread I think. But when the Pentium came out, no one else had chipsets ready, and no one really managed to beat Intel at Pentium chipsets. The old 430NX and LX chipsets were a bit funky, but by FX they sorted it out, and 430HX was a Socket 7 mainstay for a long time.
You’re probably right that SiS 486 chipsets were good, I’m more familiar with the later stuff and SiS graphics chips which were nothing to write home about.
Totally agree that AMD should have gotten into the chipset business much earlier, they dabbled in it but weren’t really serious until they bought ATI.
Chris M:
How many address lines did Amstrad and Tandy use in their 8-bit IDE interfaces? IBM used four. The WD disk in the Commodore Amiga A590 only used two, so it might have been compatible with some (to me unknown) MFM card for some XT class machine. But there seem to have been Seagate 3.5″ IDE disks that could be configured for either 16- or 8-bit operation. IIRC I tried the 8-bit setting on one such disk and hooked it up to an A590, and IIRC the disk died, but it might have been failing before I tried that.
If the command set and register mapping on the disks for the PS/2 25/30 were the same as on a regular 16-bit IDE/ATA drive, but data access was 8 bits wide, a CF card with an adapter might work out of the box.
Kind of strange that they didn’t use 16-bit access as all 25/30 had a 16-bit data bus. (The two versions were 8086 and 80286, so no 8088 and thus no 8-bit limitation).
Michal:
Not sure what year the disk was made, but it was sitting in a Deskpro 386/20 which had this look (but was 20 instead of 25 MHz), which I believe is the original 386 from Compaq. Not sure for how long that Compaq model was made. It did look like the drive had been installed when the computer was new, or perhaps by certified field technicians, as everything was rather tidy and there was a Compaq branded (IIRC, or perhaps just the same brown-ish look) PCB for IDE and IIRC floppy. The front panel of the drive matched the color of the computer and had a perfect mechanical fit.
http://www.computinghistory.org.uk/det/16967/Compaq-Deskpro-386-25-Type-38/
It might also have had a tape streamer for DC600 or DC6150/DC6250 tapes IIRC, so it’s rather likely that it was used as a server.
And you are right, OPTi wasn’t one of the bad guys. VIA and SiS were, as you say, not very good though.
I know that the CMD-640 was just an IDE controller, but still not a very good one, and in practice computers with that IDE controller were usually not that good in other ways either.
The buyers that had the most money were usually not the same people that had to use the computer. It seems to really differ how different persons perceive lag when using a computer. For me it’s really a pain, while for others it seems not to bother them that much.
It’s kind of strange that larger companies seem not to value the speed of the computers that their employees use. If there is a two-second lag in an operation that is done 30 times a day, that’s a minute per day, which adds up to hours lost every year. In most cases the annoying lag was far worse than just two seconds, and the operations were done far more often. (For example, the lag might be due to “standard installations” of operating systems back in the DOS/Win 3.11 days not including chipset specific hard disk controller drivers and refusing to use “32-bit file access” and/or “32-bit disk access” (because it didn’t work on some of the many computers in operation).)
I haven’t really compared Apples to other computers, but nowadays it seems like web browser performance (which seems to be a question of enough CPU and a lot of RAM) is the thing that’s needed for an average user, while CPU and GPU performance is what for example gamers want. Things like disk access seem to be “fast enough” for most users nowadays. Back in the 90’s disk access could really be a bottleneck.
Re killing other chipset brands: Sure, as you say, they wanted to add stuff anyway, but at least in theory they could have made the south bridge – north bridge interface 100% universal PCI compatible (but with the legacy PC stuff at hard coded addresses, of course) and have had a business model where customers wouldn’t get any discount by buying both the north and the south bridge. That way any motherboard manufacturer could have bought an Intel north bridge and an OPTi south bridge. If we look at how chipsets performed back in the day, the most likely outcome of such a policy would probably have been that many motherboards still had Intel north and south bridges, but the motherboards for AMD might have had an Intel south bridge. Maybe that would have been a bad thing for Intel though, as one of their selling points was that the chipsets for AMD CPUs were inferior (VIA…..).
IBM would probably have done far better with the PS/2 series if they had at least openly published (preferably on stickers inside their cases and on their drives) information about how to make adapters between their cables/connectors and the standard ones.
Btw, all disks are MFM/RLL/ESDI internally, and talking about which of these kinds of data signals they use is kind of uninteresting when talking about disks with integrated controllers.
Re NIH, larger companies seem to usually have a hard time adopting trending technology. Compare with how slow the well known brands were at adopting support for MP3 and later DivX files in DVD players just some 10-15 years ago, while no-name ones had all that far earlier.
In many cases it seems like development and management act as if they live in a bubble not affected by the rest of the world.
IBM selling slow 486SLCs was probably good business, as they AFAIK made the 486SLC CPUs themselves.
Btw, re AMD/ATI and re that companies tend to not test new products with old peripherals: Back in the mid-late 00’s, AMD/ATI made graphics cards where it was kind of impossible to get anything other than 60Hz if you didn’t have the identification thing in your VGA monitor. Kind of annoying for anyone still using CRTs, possibly for example having one VGA DE-15 connector and one set of BNC connectors on a dual input monitor. The DE-15 connector had to go to the computer with the ATI/AMD card to avoid a flickery picture.
On CMD640, let’s just say that if OS/2 2.0 had actually caught on instead of turning into a fiasco, it would have been unlikely that they would have shipped it with the problems they had.
This reminds me of the ALi Aladdin-P4, which still used a PCI M1535 southbridge that has no APIC support. (Did any vendor make PCI southbridges that have FSB APIC support?)
@MiaM:
Tandy used 2 address lines, it was an interface that Western Digital used on their controllers/hard cards. The pinout and a photo of the WD controller can be found here: http://www.vcfed.org/forum/showthread.php?39749-Hard-Drive-Interfaces-XT-IDE-vs-AT-IDE
The Seagate “A/X” line of drives can be switched between ATA and XTA. Some CF cards can be run in 8-bit mode, but the list is pretty small. There was also at least one Apple II XTA interface that used the same pinout, the Applied Ingenuity InnerDrive. The circa 1990 Applied Engineering Vulcan IDE interface was full 16-bit (expensive as the Apple II bus was only 8-bit).
Here’s a nice 1990 Seagate catalog. Check out the SSDs they offered! Seagate was probably by far the #1 vendor of IDE drives in the late 1980s and early 1990s. Conner, Maxtor, and Quantum were others, but there was CDC/Imprimis, HP, and others as well.
Finding reliable information on IDE disks prior to 1990 is surprisingly difficult. Here’s a CDC/Imprimis (later Seagate) Wren VI HH which had a 32K cache and already offered the Read/Write Multiple commands.
I’m not even sure when the term “IDE” began to be commonly used. It was definitely used in 1991, but earlier it was often called “AT Bus” or similar (“PCAT” is what CDC/Imprimis called it in 1988).
So far I failed to find a detailed technical reference for any IDE drive (with complete pinout and command documentation) from before 1991 or so. Anyone?
The 8-bit mode is very common (actually required?) for CF cards. It’s a must for any memory-mapped IDE interface.
The first (1986) Deskpro 386 was only 16 MHz. I think the 20 MHz version was late 1987 or 1988. I would guess yours was 1988-ish then, and at that point there definitely were IDE drives. Do you happen to have the exact drive model?
The thing with speed is that the difference must be overwhelming to make up for any kind of problems it causes. If there’s an issue that causes an employee to not work for half a day and requires a technician, that gets real expensive real fast. If there’s data loss, it’s even worse. Worst of all, if something goes wrong, no one wants to be blamed.
To be fair Windows 3.1+ did have a standard “32-bit disk access” driver, wdctrl. Which was unusable with any halfway modern IDE drive (no LBA, no geometry translation, BIOS had to reserve a diagnostic cylinder). By the time Windows 3.1 came out that driver was almost obsolete already.
Not to the topic, but as CMD640 came up in discussion:
I have a Pentium MMX laptop with a dock to it. The dock contains a CMD640 IDE controller for a third IDE channel.
If you ever have something to test on this, I am happy to do. I prefer floppies, as I have to go to my “hobbylab” for burning CDs, which is ~30 minutes away…
Does the CMD640 control one or two IDE channels? Many of the issues would not exist if it only controlled one.
The early 8-bit XT-IDE format is documented by Tandy as the Smartdrive interface for the Tandy 1000/TL2. Looks a lot like the standard IDE connector except that all the even pins are set to ground (thus only 8-bit data) as are pins 21 and 29 (which later IDE connectors use for DMA signals).
Given that 16-bit IDE used 8 even pins for data from the beginning and 6 even pins stayed as ground to the end, that leaves only 6 even pins whose development needs to be tracked.
Afaik all CF cards can transfer data 8 bits at a time, but when doing that they still use three address lines and the ATA register / command set. That obviously makes them incompatible with any interface having only two address lines, unless there is some trickery that I don’t know about.
Were there any CF cards that can use the command set and interface that this 8-bit IDE interface uses?
Chris: Sounds like the same interface that the A590 did use. Maybe I just had bad luck, or maybe the ST351A/X disk had already failed when I tried to use it. Or in theory maybe the interface in the A590 had failed, but that seems highly unlikely as it’s connected directly to the same signals that drive a SCSI controller without buffers and shares the same DMA/bus interface to the host, and the SCSI controller still works.
Michal:
Sorry, I haven’t got any more info on the Deskpro 386. A friend had it, and in some kind of miscommunication about 20 years ago it ended up on the dump instead of at my place 🙁 The only thing I remember is that it was a Conner 5.25″ half height 60MB IDE drive, with a front faceplate matching the Deskpro.
The combination of “32-bit disk access” (protected mode IDE driver) and “32-bit file access” (protected mode filesystem with dynamic cache) in Windows 3.11 made a 486DX2-50 fly in circles around a P75 with those things switched off.
The really, really strange thing is that there were combinations of disks and controllers that made the “32-bit file access” thing not work properly IIRC, and that made everything painfully slow.
At the place I worked at back in the 90’s, there was a major upgrade rollout changing from some DOS based network client, DOS and Win 3.1, to DOS and the built-in network client in Win 3.11 for Workgroups. The same config was rolled out to many computers, ranging from 386es to Pentium systems. IIRC anything older than a P90 was scrapped during the Y2K compliance process thing, even though some computers didn’t need the extra performance and an incorrect clock wouldn’t have mattered.
Richard:
The XT-IDE interface in the Commodore Amiga A590 disk controller can be found on the second last page here:
https://computerarchive.org/files/comp/hardware/amiga/A590%20Service%20Manual%20-%20Manual-ENG.pdf
It seems to be similar to what you describe.
Based on what I read recently, all Compaq Deskpro 386 machines may have had some sort of IDE drive, but some of them were just regular ST-506 drives with an IDE controller bolted on. Conner was an early supplier of “real” IDE drives to Compaq, and the Conner CP342 was apparently the first IDE drive generally available to OEMs in June 1987. By late 1988, there were probably half a dozen of IDE drive manufacturers, although it wasn’t until 1991 or 1992 that IDE truly took over.
@Yuhong Bao:
The dock has space for two 5.25″ drives and also includes a SCSI controller, so I assume it is only one channel.
Two channels would be senseless in this case; where would you put 4 IDE drives in this dock?
For now I can say it is an HP OmniBook 2100 + its dock (which fits a few different OmniBook models). I’ll come back later with verified + detailed information.
All this talk of IDE & NetWare kind of reminds me, has a NetWare DDK ever surfaced? Was there such a thing?
It seems the answer would be to modify/write a tamer version of ISADISK to not freak out over larger geometries.
Although I have no idea just how big the filesystem can grow under NetWare.
Not that I’m any expert or anything, it was blind luck I got that Windows CE IDE driver to read disks, and I had absolutely no luck with the Darwin driver…
@Yuhong Bao + others interested in this:
I had a closer look at the dock of this HP OmniBook 2100. The model number of the dock is “HP F1477A”.
I must admit, my memory was wrong. It contains a CMD0646U, *not* a 640.
Surprisingly the dock has two 40-pin connectors, and Windows 98’s device manager shows two CMD0646 devices with both IDE channels enabled by default. So in theory my OmniBook supports 8 IDE drives.
The SCSI controller in the dock is a Symbios Logic 8100S (NCR 53C810).
The laptop’s internal IDE is 2x Intel 82371AB/EB.
Later I will test the dock with 4 IDE hard disks – two per 40pin connector.
Even though I don’t have the buggy CMD chip, I am still willing to test something on this machine if anyone wants me to. It is still a not-so-common PC configuration.
I’m not sure if I have anything with CMD-640, but I have a board with RZ-1000, a differently broken early PCI IDE chip.
I’ve never seen one, but obviously Novell had something. Both for disk and LAN drivers. There’s sample code available for newer NetWare versions but not for NW 386 AFAIK, let alone NW 286.
I did come across a document describing how to write drivers for the high-speed link for NetWare SFT III (Mirrored Server Link, or MSL).
I’d bet someone still has those DDKs. Not necessarily Novell/MF though 🙂
AFAIK CMD646 did not fix the CMD640 problems.
Could it be that ISADISK.DSK doesn’t have this problem?
I notice that – at least on my installation – there are:
IDE.DSK, 14049 bytes, timestamp 1993-04-26 16:09
and
ISADISK.DSK, 11975 bytes, timestamp 1993-04-28 07:58
I always (well… as long as I can remember 🙂 ) used ISADISK or the relevant SCSI driver, in fact I didn’t even notice IDE.DSK. Might it be that ISADISK is using a different mechanism? I just looked in the manual and it specifies ISADISK for “AT, MFM, RLL, ARLL” controller types on ISA and EISA Architecture and IDE for IDE controller type(!)
I am – at present – using a Netware 3.12 server with a physical 4GB IDE HDD (not CF but old spinning rust). I also remember using (back in the day; that means around 1995-1996) a server with two 1GB HDDs running Novell Netware 3.12.
For what it’s worth: The current server is Siemens Nixdorf PCD-5H (Pentium/75 CPU), HDD is detected as 8912/15/63, LBA translation is Enabled in BIOS Setup. I don’t know the chipset on the MB and it’s quite difficult to get to it (however, if need be…) I think the HDD is a Samsung but I’m not really sure.
It’s very possible that ISADISK.DSK doesn’t have this problem. They are similar, and obviously IDE disks are compatible with the older AT drives, but there are differences here and there and the NetWare drivers certainly are not the same.
My understanding is that one of the big differences between ISADISK.DSK and IDE.DSK is that the latter can manage disks on secondary, tertiary, and quaternary IDE interfaces (in theory up to 8 IDE drives) and also does not use the BIOS for getting drive geometry information. I don’t know off the top of my head if ISADISK.DSK can handle two controllers or not.
See for example https://groups.google.com/d/msg/fa.linux.kernel/PcKyE67MD-4/e8fIWDTertUJ
The NetWare IDEATA.HAM solves the CMD640 problem a little differently. There’s the following comment in the source code (downloaded from novell.com) in IDEATA.C: “Check for the CMD 640B IDE Controller, for which this driver will currently only support a single load instance because the controller implements a single FIFO to support both the primary and secondary channels. Because of this implementation, the controller will not support simultaneous requests to both channels, the effect of which hangs the bus and possibly even suspends the processor. Attempts to serialize the commands across both channels, per suggestions from CMD engineers, did not solve the problem.” Note that “single load instance” means “single IDE channel”, since NetWare loads an instance of the IDEATA.HAM driver for each IDE channel.