After pondering the strange TeleDisk images of SCO XENIX 386 2.2.3 (released in 1988) and not being able to make heads or tails of them, I decided it was time to simply restore the TeleDisk images onto actual floppies and boot those on a real system.
This was not entirely straightforward. The images were of 720K 3.5″ disks. On a typical system, DOS is unable to format a 1.44M 3.5″ disk as 720K; I know that from experience. TeleDisk did not fare any better. It would simply not write the 720K image onto a high-density 1.44M disk (and of course I don’t have any actual 720K disks anymore, or at least not ones I’d want to overwrite). That raised two questions: why, and what to do about it?
The latter is easier to answer. Either tape over the ‘density hole’ on a HD disk, or fill the hole with a wadded piece of paper (newspaper works well). I took the latter route, fixed up two floppies, and that allowed TeleDisk to write the 720K images (and also enabled DOS to format the floppies as 720K).
Now back to the first question: Why won’t a PC format a 1.44M floppy as 720K? That should be perfectly possible, since 720K media just uses one-half the bit density. The preceding paragraph hints that it is physically possible, but something normally prevents it. Obviously the density detect hole (found opposite the write-protect hole on HD 3.5″ floppies) has something to do with it.
In PS/2 systems, it is reportedly possible to format 1.44M media as 720K without too much hassle, with the caveat that the disks may cause trouble in actual 720K drives. In typical PCs, it simply doesn’t work. 3.5″ floppy drives have a mechanism to detect the media density; some drives can also be configured to report the detected density on pin 2 of the drive interface (HD OUT signal).
In typical PCs, the HD OUT signal is not utilized, as the host system wouldn’t know what to do with it, either because the floppy controller doesn’t support it or because the BIOS doesn’t. Instead, the floppy drive detects the media type and prevents the host system from reading/writing the medium unless the correct data rate is used (720K media use MFM encoding at 250kbps, with the disk rotating at 300rpm; 1.44M media are very similar, the key difference being that a 500kbps data rate is used). In fact the data rate is how operating systems typically distinguish between 3.5″ media types: if a disk can be read at 250kbps, it must be a 720K disk; if at 500kbps, it must be a 1.44M medium.
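To illustrate the probing logic, here is a minimal C sketch of what such a media type check could look like (this is just the general idea, not code from any actual OS). The CCR port address and its rate bits are standard AT floppy controller facts; the outportb() and fdc_read_id() helpers are assumed platform/driver primitives:

```c
/* Sketch: distinguishing 3.5" media types by data rate, roughly as a
 * PC operating system might do it. Only the CCR port and its rate
 * bits are real AT hardware; the helpers below are assumed. */
#include <stdbool.h>

#define FDC_CCR       0x3F7   /* Configuration Control Register (write-only) */
#define RATE_500KBPS  0x00    /* CCR bits 1:0 = 00 -> 500 kbps (HD media) */
#define RATE_250KBPS  0x02    /* CCR bits 1:0 = 10 -> 250 kbps (DD media) */

/* Assumed platform/driver primitives. */
void outportb(unsigned short port, unsigned char value);
bool fdc_read_id(int drive);  /* issue READ ID, true if a sector ID was found */

const char *probe_medium(int drive)
{
    /* Try 500 kbps first; the drive only lets this succeed on HD media. */
    outportb(FDC_CCR, RATE_500KBPS);
    if (fdc_read_id(drive))
        return "1.44M (HD)";

    /* Fall back to 250 kbps; this only works on DD media. */
    outportb(FDC_CCR, RATE_250KBPS);
    if (fdc_read_id(drive))
        return "720K (DD)";

    return "no readable medium";
}
```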
The obvious benefit of this scheme is that a 720K (double-density, or DD) disk cannot be accidentally formatted to 1.44M; while that might be possible, data loss would be highly likely to occur sooner or later. The less obvious drawback (perhaps not even considered a drawback by the original implementors) is that a 1.44M disk cannot be formatted to 720K because the drive will prevent it.
Note that for 5.25″ high-density drives (1.2MB), the controller may need to correctly output the DENSEL (Density Select) signal on pin 2 of the drive interface. It appears that in most 3.5″ drives this signal is not utilized, or at least not by default. The drive itself is in control of media density detection, with no input from the host. The host system merely needs to select the data rate appropriate for the medium (250/500kbps).
Anyway, back to XENIX. TeleDisk restored the strange images without complaint, which confirms that there is nothing wrong with the TeleDisk images per se. It is questionable whether the restored disks correspond to the originals, though; the trouble is that when sectors with duplicate IDs exist on a single track, they cannot be reliably written. That might be part of a copy protection scheme… then again it might not.
With the two boot floppies in hand (traditionally designated as N1 and N2), I set out to boot them on two different systems. On a Pentium II class machine, the XENIX 2.2.3 kernel panicked due to a trap almost immediately. Too new perhaps? Not sufficiently PC compatible? Who knows.
On an IBM ThinkPad (ironically a newer machine, with a Pentium M processor), XENIX got further. The kernel started up and began reading the second (filesystem) floppy. Sadly, after a while XENIX started making grinding floppy seek noises and announced that it couldn’t read a block off the disk. So that was that.
Maybe uncorrupted floppy images of XENIX 386 2.2.3 will surface eventually…
> In PS/2 systems, it is reportedly possible to format 1.44M media as 720K without too much hassle, with the caveat that the disks may cause trouble in actual 720K drives.
It’s worse than that — in many PS/2 systems (I verified this on a PS/2 Model 30), you can format ANY 3.5″ media as 1.44MB. I had a friend in the late 80s who kept wondering why his disks were failing all the time — he was formatting 720K disks as 1.44MB and the PS/2 happily let him do it.
One caveat – the PS/2 Model 30 is different from the others. In fact “modern” floppy controllers (Intel 82077AA, NatSemi PC8477B) typically have AT, PS/2, and Model 30 modes. I don’t know if the user-visible behavior was different, but on the register level the PS/2 Model 30 was different from the Model 50/60/80 etc. I guess I’ll have to try on my Model 80 when I’m near it again – all I know is that the PS/2 floppy subsystem went through quite a bit of evolution and especially the older models were substantially different from clones.
At any rate… yeah, I guess this is exactly why clone PCs implemented the hardware density select mechanism. I suspect that when they implemented it, it didn’t even cross their minds that anyone might want to voluntarily halve the capacity of a 3.5″ HD floppy 🙂
If I remember correctly, the density select pin was added at about the same time Toshiba was pushing the concept of ED disks, which had yet another density hole proposed. While using DD disks as HD disks seldom works, and HD disks formatted for DD use are viable only for short time frames, ED disks were very poor when given a different format. At $10 a disk, accidentally formatting a disk in a way that renders it unusable is not something users favor. These days the whole problem is solved by floppy drives that can only handle HD floppy formats and a decided lack of non-HD media.
The thicker material used on double density floppies was probably the best long-term storage option. Thin HD floppies don’t stand up to abuse as well, and ED floppies needed more development time.
I’m pretty sure the ‘HD hole’ has always been there… at least it’s there on my 3.5″ HD disks from 1987 (otherwise newer machines presumably couldn’t read them at all). But yes, ED floppies had the density detection hole offset a little.
Given that ED media used a completely different recording method (perpendicular recording) and a different magnetic substrate, I certainly believe that writing them as regular disks could make them completely unusable. With DD/HD disks, I think it was “just” a question of reliability, but the medium could be reformatted again and wasn’t destroyed.
I never used ED floppies myself, but my experience with HD floppies is that pre-recorded disks last quite well (still readable after 25 years with almost no problems), while user-recorded HD disks tend to go bad rather fast. And the quality of new floppies seems to have gone down significantly over the years, too.
michaln: The PS/2 Model 30 has only 720K drives. The PS/2 Model *30-286* has 1.44MB drives.
So what? I was talking about the “Model 30 mode” of floppy controllers. They never bothered to specify if it’s “Model 30-plain” or “Model 30-286”.
IBM shipped HD drives in 1987, so the production prototypes had to be manufactured in 1986. Toshiba shipped their first ED drives in 1991, with prototypes being demonstrated in 1989. InfoWorld has a short blurb about a company, Practical Computer Technologies, shipping 2.88 MB EHD drives using Toshiba technology in 1990, though I don’t know if those are the same as the 2.88 MB drives that NeXT and IBM used.
IBM’s early drives lacked the pin that checked for HD capability, though IBM did talk about how the two holes on HD floppies allowed them to be attached to a binder. I don’t have access to any internal memos for the development process so I can’t be too sure exactly what happened. I see two possibilities:
1) IBM told the early manufacturers to remove the density detecting pin from drives so that users could enjoy the same disk versus drive mismatches that happened with 5.25″ and 8″ drives. I am not a fan of late-80s IBM management, but that just seems too stupid to be plausible.
2) IBM introduced the extra hole but didn’t plan on having it detected by a computer. After the PS/2 shipped, the idea of using the IBM-supplied hole for detection purposes occurred to various manufacturers, as did adding more holes for additional densities. Toshiba was making a lot of floppy drives and had a big incentive to ensure correct density formatting. No more returns of drives when an incorrectly formatted diskette stops being readable after a few uses and the user loses all the irreplaceable data on it.
Obviously, things are different now. But it seems a bit much to expect drive manufacturers to plan for 25 years in the future when DD disks haven’t been produced in many years but are occasionally still needed.
AFAIK 1.44MB floppy support was in the AT 339 BIOS, but I don’t think IBM ever shipped this configuration.
All I can say with 100% certainty is that the BIOS listings for the XT/286 (IBM model 5162) dated 04/21/86 do include 1.44MB floppy support. I could not find any mention of any 1.44 drives in that era supported by IBM, though obviously IBM must have had something available internally.
This is an interesting question. I don’t have a really old PS/2 floppy drive at hand; the one I have here (made by Mitsubishi, P/N 1619618, FRU 64F0162) clearly does have the density detect pin. Whether it was actually used I can’t tell.
My oldest 3.5″ HD floppies were shipped as part of the MS OS/2 1.02 SDK. The files on those disks are dated Dec 1987 and I’m pretty sure the disks did go out that month. At any rate, these must be among the oldest high-density 3.5″ disks available in the PC market, only 8 months after the PS/2 line was introduced. The disks do have the HD hole.
Anyway, how about theory 3): IBM wanted to implement density detection, but for whatever reason the drives weren’t available in time. So the diskettes all sported the density detect hole but actual systems couldn’t detect anything.
My guess is also that it might be pretty easy to upgrade an existing 720K 3.5″ drive to support 1.44M media as they are mechanically the same, but implementing density detection requires additional logic and a sensor.
A link regarding early PS/2 models and the lack of floppy detection:
http://ps-2.kev009.com/ohlandl/floppy/floppy.html#Format_720K_On_144MB
From my experience with older 8-bit machines, if you need to format a HD 3.5″ floppy disk as 720K, you only need to cover the hole to the right (the one that isn’t the write-protect hole), and the disk drive will not complain about it.
It’s not supposed to be the best scenario, as the disks seem to deteriorate a bit sooner than DD disks.
Regards,
Rob
Yeah, that’s exactly what I did (see the previous article about XENIX 2.2.3). It worked just fine. If the data on the disks goes bad after a week, I can just write the disks again; it’s no bother.
The funny thing is that I probably have 30-50 3.5″ DD disks in good shape, but they all hold things like Windows 2.0, old Microsoft C and MASM, Windows 2.0 SDKs, and stuff like that. For a while in the late 1980s and early 1990s, Microsoft preferred DD 3.5″ media for DOS-based software. Apparently there were enough systems with (only) 720K 3.5″ drives that it made sense. Interestingly, their OS/2 software always came on HD floppies (both 5.25″ and 3.5″). Perhaps there weren’t any AT class machines (to speak of) with only a 720K drive.
Maybe some new XENIX 2.2 images are coming:
http://www.betaarchive.com/forum/viewtopic.php?f=16&t=33666
Maybe — I’ll believe it when I see it (and it might be one of the known versions anyway).
> Instead, the floppy drive detects the media type and prevents the host system from reading/writing the medium unless the correct data rate is used (the 720K media use MFM encoding at 250kbps, with the disk rotating at 300rpm; the 1.44M media are very similar with the key difference that 500kbps data rate is used)
I’ve put some thought into this and now have a question (please forgive my blogpost necromancy): If the floppy drive selectively prevents the host from accessing the medium, this means that the drive must have a way to detect the data rate (250 or 500kbps), correct? But IMHO this would be a rather difficult task for the drive. The 250kbps and 500kbps MFM data signals look very similar; the former merely has a smaller spectral bandwidth than the latter. The easiest way for the drive to distinguish them would be to have a PLL module with an 8MHz or 24MHz reference oscillator, just like in PC XT/AT floppy controller boards. And that would still be an expensive, redundant, complex and cumbersome solution, with benefits hardly worth the effort.
That’s a really good question (and comments added years after something was posted are standard here, so no worries).
I don’t know how it is done. It must be the drive doing it, because most 3.5″ drives neither report nor accept any media density information. The answer should be in floppy drive datasheets, but either it’s not there or I haven’t found it yet.
The drive clearly knows what the media type is (otherwise messing with the density hole would do nothing), and from the host it has no explicit information in a generic PC. So it must somehow recognize the data rate.
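One plausible approach (pure speculation on my part, not anything from a datasheet): MFM flux transitions at 250kbps are spaced 4, 6, or 8µs apart, while at 500kbps they are 2, 3, or 4µs apart, so the drive could classify the incoming write signal simply by the shortest transition interval it sees, no PLL required. Expressed as a C sketch:

```c
/* Speculative sketch: classify a write signal's data rate from the
 * spacing of its flux transitions. Legal MFM intervals are:
 *   250 kbps (DD): 4, 6 or 8 microseconds
 *   500 kbps (HD): 2, 3 or 4 microseconds
 * Only a 500 kbps signal ever contains intervals shorter than 4 us. */
typedef enum { RATE_UNKNOWN, RATE_250K, RATE_500K } data_rate;

data_rate classify_rate(const double *intervals_us, int count)
{
    if (count == 0)
        return RATE_UNKNOWN;

    double shortest = intervals_us[0];
    for (int i = 1; i < count; i++)
        if (intervals_us[i] < shortest)
            shortest = intervals_us[i];

    /* 2 us or 3 us intervals cannot occur at 250 kbps. */
    return (shortest < 3.5) ? RATE_500K : RATE_250K;
}
```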
As for the benefits, there are at least two possible explanations. It could have been a customer requirement (customers being the OEMs). Or as mentioned in a previous comment, the drive manufacturers wanted to avoid end user complaints and drive returns caused by attempts to format DD media as HD.
Media manufacturers would certainly have an incentive to prevent formatting DD media as HD. It is possible that also preventing the other mismatch (HD media formatted as DD) was a more or less unintentional side effect.
I’m sure somewhere someone has detailed 3.5″ drive documentation which explains how it’s done.
Clearly no one here has played with a TeleVideo machine and its disk drives. They were 720K 5.25″ disks that required a DOS driver to format and use.
They booted DOS 3.2.1.
“Mitsubishi 4853 5.25″ 1/2 height 720K mini-floppy disk drives” – easy enough to find.
TeleVideo TS-1603, 5 MHz 8088:
“TeleVideo TS 1603 Computer System. The TS 1603 is a 16-bit single-board microcomputer that utilizes Intel’s 8088 microprocessor. The standard system memory is 128 kilobytes (128K) with an optional expansion to 256K. The 5 1/4-inch slim-line floppy drives have a formatted capacity of 737 kilobytes each.”
My guess is that these disks are XENIX-86 for the TeleVideo TS 1603, which would be cool if there were an emulator for it.
Yuhong Bao says (January 25, 2013):
> AFAIK 1.44MB floppy support was in the AT 339 BIOS, but I don’t think IBM ever shipped this configuration.
Mr. Bao, you are correct for ATs, but IBM did ship XT 286s with 1.44M drives in them (custom order), so I would guess that the ROMs are common between the AT 339 and the XT 286 (like they shared diagnostic disks).
Keep up the good work guys, I have great respect for all the posters here. Thanks.
The ROMs are actually not the same, though the differences between the XT 286 and the 3rd AT BIOS are very minor. See here.
Those floppy images are definitely XENIX 386, so that can’t be it… they are almost certainly simply images of 3.5″ DD floppies. That’s a format SCO did use now and then.
The 96tpi 5.25″ DD (aka QD) floppies are interesting though. I had a pile of such floppies but never saw the corresponding drive. In a 1.2 MB drive it was no problem formatting the floppies with 80 tracks and 9 sectors per track for 720K total, exactly like a 3.5″ DD floppy. So TeleVideo was one of the few OEMs who used those drives.
Maybe I’ve already written about this in a comment on another post, but re density:
I had the classic problem where a modern PC seems to be unable to use a 360k drive. People have reported it to be a Windows XP issue, but to me it seems like some kind of bug related to the floppy controller and/or the BIOS.
My “solution” was to hook up a 1.44M 3.5″ and a 360k 5.25″ drive at the same time, with a straight-ish cable. I connected all signals from the controller to both drives. The signals that an old 360k drive actually outputs were connected from the 360k drive to the controller, while pins 2, 4 and 34 came from the 3.5″ drive. Not sure why I did what I did with pin 4, but pins 2 and 34 are obvious. In BIOS I configured the drive as a 1.44M 3.5″ drive. A bit dangerous as the software might bump the head at the edge behind track 39, but that would probably only happen if I tried to format a disk. The 3.5″ drive just acts as a “dummy” providing the signals that the 360k drive lacks and which are required by the controller, BIOS and/or OS.
In this configuration, while running Windows XP, the OS senses both disk change and density from the 3.5″ drive. So I have to tape over or place a bit of paper over the HD hole on a 3.5″ disk and insert it to be able to use the 5.25″ drive.
As pin 34 is the disk change signal, and pin 4 seems not to be connected, it’s obvious that pin 2 is somehow used by the drive to tell the controller what density the disk has. Not sure if the controller relays that information to the OS (and if the OS/driver actually uses that information) or not, though. So it must be the controller and/or the OS/driver that refuses to DD format a HD disk if the drive detects the HD hole.
I’ve also successfully used 3.5″ 1.44M HD drives with a Microbee CP/M computer, which has a controller only supporting the DD format. That is proof that the drive (obviously, IMHO) doesn’t inspect whatever is written or read and refuse some kinds of formats.
Btw, worth mentioning is that all 3.5″ DD disk drives used by the Commodore Amiga (which is basically the kind of drive that almost all Amigas have) have both a ready and a disk change signal, located at pins 2 and 34 (can’t remember which is which). To make a PC drive work with an Amiga you are best off finding a PC drive that has a jumper to select between ready and disk change for pin 34, and soldering wires directly to that jumper block to get both the ready and disk change signals. Preferably you can cut the trace to the media density signal (pin 2) and solder one of the required signals to that pin. In external disk drives for the Amiga, it was common for pins 2 and 34 to be swapped, as different manufacturers of the drives (the actual “internal” drive mechanisms that were used in external enclosures) made these signals available in different pin configurations. This BTW proves that the disk change signal was kind of always available even on 720k DD drives, but was never connected to the 34-pin connector when the jumpers on the drives were set for PC compatible operation.
Btw I’ve always thought that all PC OSes were kind of brain damaged re the disk change signal. I know that you have to access the drive for the signal to update, but you can either just step the head one track back and forth from time to time to detect that a disk was inserted, or on most drives just tell the drive to step to “track -1”, which the drive will refuse to do physically but which still updates the disk change signal. AmigaOS uses this (the “step to -1” thing was only officially available in later versions, but for the earlier versions there were small third party “no click” programs that would change the behaviour so the drives would stay silent). Another common thing back in the day was to just keep disks in all drives to keep them quiet 🙂
I have a board (Intel DQ67OW) which does not offer 5.25″ drive support in the BIOS and, as far as I could ascertain, truly does not support 5.25″ drives. Probably a stripped down FDC (and a BIOS adapted to reflect that).
I think you’re slightly confused about the disk change signal. You don’t need to do anything to see that the disk changed (drive door was opened, really) but you have to seek with a disk in the drive to clear the change signal. I’m not sure why they didn’t have a latch in a controller register that could be explicitly cleared. Basically the latch is in the drive instead.
Well, this computer has options for all disk types in BIOS, but my impression is that at some point in time the people involved (PC manufacturers, chipset manufacturers, BIOS manufacturers etc.) just stopped testing 360k drives in their systems.
Oh, ok. Well, the principle is the same, you have to “click” the disk to read the signal.
If it had been in the controller – when would it be set? Should the drive report a short pulse on the disk change signal the first time the drive is selected after a disk has been inserted, and if so, how short should that pulse be? Or should it report the signal continuously the first time the drive is selected?
It would have been good if they had specified that it could (also) be cleared by changing the DIR signal without using STEP. That way we would have had a way of clearing the signal with a method that officially doesn’t affect any part of the drive mechanism in any way.
It is kind of strange that they didn’t think of the use case where the OS might want to know if a disk has been inserted even if the user (or any running program) doesn’t actively access the drive. IIRC the very first Macintosh from 1984 presented an icon on the desktop when you inserted a disk – although it used a drive that differed quite a lot from the common drive types used in most other systems.
Maybe the idea was that you either used a Macintosh and thus used Mac drives, or you would still use an old style OS for the foreseeable future…
If you tried getting a 360K drive in recent years, you know that the BIOS/board vendors definitely aren’t testing them. They’re much too hard to get, and much too pointless.
The drive does not need to be “clicked” to read the change signal. It needs to be “clicked” to clear the signal. The drive change signal will become active as soon as the drive door is open (or disk is ejected in a 3.5″ drive).
I would think that ideally the signal would become inactive when there’s a disk in the drive again, but it would have to be latched somewhere, ideally in the controller/adapter such that it could be read and explicitly cleared. So you’d know if the disk was changed but no need to mess with the drive itself.
I suspect the mechanism was designed such that the drive would provide all the machinery and existing controllers could be used (well, with very minimal adaptation in the PC/AT case, just exposing the drive change signal in a register). That resulted in a less than ideal interface.
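For the record, the PC/AT exposes the drive’s latch as bit 7 of the Digital Input Register at port 3F7h, and the only way to clear it is to step the drive with a disk inserted. A minimal C sketch of the resulting polling logic (the inportb() and fdc_seek() helpers are assumed):

```c
/* Sketch: PC/AT disk change handling. Bit 7 of the Digital Input
 * Register reflects the drive's latched disk change line; the FDC
 * itself does not latch anything. The helpers below are assumed. */
typedef enum { DISK_SAME, DRIVE_EMPTY, DISK_CHANGED } change_state;

#define FDC_DIR     0x3F7   /* Digital Input Register (read-only) */
#define DIR_DSKCHG  0x80    /* bit 7: disk change line is active  */

unsigned char inportb(unsigned short port);   /* assumed platform I/O  */
void fdc_seek(int drive, int track);          /* assumed driver helper */

change_state check_disk_change(int drive)
{
    if (!(inportb(FDC_DIR) & DIR_DSKCHG))
        return DISK_SAME;       /* latch inactive: door never opened */

    /* Latch is set: the door was opened at some point. Step the head
     * out and back; the drive clears the latch only if a disk is
     * actually present. */
    fdc_seek(drive, 1);
    fdc_seek(drive, 0);

    return (inportb(FDC_DIR) & DIR_DSKCHG) ? DRIVE_EMPTY : DISK_CHANGED;
}
```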
NeXTSTEP (x86) used a method that allowed the OS to detect when a disk is inserted. They execute the READ ID command, which is specified to succeed whenever it finds a sector ID, and fail if the index signal occurs twice and no sector ID can be found. Guess what happens when there’s no disk in the drive… (I believe the method only works with a single drive, not two.)
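A rough C sketch of that trick (the command byte is the standard µPD765 READ ID opcode with the MFM bit set; the fdc_command() helper, which sends a command and collects the result bytes with a timeout, is an assumed driver primitive):

```c
/* Sketch: detecting disk insertion the way NeXTSTEP reportedly did,
 * by issuing the FDC READ ID command. 0x4A is the standard uPD765
 * READ ID opcode (0x0A) with the MFM bit (0x40) set. */
#include <stdbool.h>

#define CMD_READ_ID_MFM  0x4A

/* Assumed driver primitive: send cmd_len command bytes, wait for the
 * result phase, read res_len result bytes; false if timeout_ms
 * elapses first. */
bool fdc_command(const unsigned char *cmd, int cmd_len,
                 unsigned char *res, int res_len, int timeout_ms);

bool disk_present(int drive)
{
    unsigned char cmd[2] = { CMD_READ_ID_MFM, (unsigned char)(drive & 3) };
    unsigned char res[7];   /* ST0, ST1, ST2, C, H, R, N */

    /* A formatted disk completes with success, and even an unformatted
     * disk completes with an error after two index pulses. With no
     * disk there are no index pulses at all, so the command never
     * finishes on its own and only the timeout reveals the empty drive. */
    return fdc_command(cmd, 2, res, 7, 1000);
}
```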