The previously mentioned warez mega dump contains disk images of SCO 286 XENIX 2.1.0.
The release appears to be from February 1986. It is the oldest SCO 286 XENIX release that I know of. But there’s a hitch.
The warez archive only contains the system disks, N1 to N4 (all in 360K format of course). To install the system, the Basic Utilities disks (B1 to B3, most likely) are also required. Attempting to use a newer B1 disk from XENIX 2.1.3 failed, presumably because the older 2.1.0 kernel lacks some required interfaces. So it’s possible to boot 286 XENIX 2.1.0, but not possible to properly install it.
Who has more XENIX 2.1.0 disks? The ‘B’ disks were almost certainly the same between XENIX 286 and XENIX 86.
On a more positive note, the warez archive also contains images of 386 XENIX 2.2.2f. An archive of 386 XENIX 2.2.2f disks does exist… but those are of the 386PS kind, designed to run on PS/2 machines.
The warez dump contains images (1.2MB floppies) of 386 XENIX 2.2.2f in the 386AT variant, obviously designed for standard PC/AT compatibles.
As of this writing, this is the oldest surviving 386 XENIX, released probably in November 1987 (just about the same time as OS/2 1.0). It is also one of the oldest 386 operating systems overall.
Now what else is hiding out there?
“The release appears to be from February 1986. It is the oldest 286 XENIX release that I know of.”
Strange. Then how about the “IBM PC Xenix” from 1984, that is essentially Xenix 3.x (not System V) for the PC/AT?
Eh, that should have read “oldest SCO 286 XENIX”. Of course there are older 286 XENIX releases from Microsoft and others.
Fun fact: in Dutch, “XENIX” is often pronounced “‘k zie niks”, meaning:
“[I] don’t see anything”. (Just one of many such -ni{x,ks} puns.)
You have access to a canonical set of Novell disks. I would suggest that you start a canonical list of Xenix disks. These disks will eventually surface, the sooner the better. Thanks for your work. My question originally was which Xenix C compiler could compile Hack. There were two DOS ones that could, but Xenix would have to wait. The best we have is this, which is missing file creation dates: https://museo.freaknet.org/gallery/software/xenix/versions.txt
From the study of Tenox’s picture archive I can distinguish at least 5 different eras of MS Xenix:
1) Fresh from HCR: LSI-11, typewritten manuals. LSI-11 with programmer’s card.
2) Early ports, including a 1984 Xenix 286. ( All the 8086 ports had custom hardware for memory management. ) (Note: these are GREEN, i.e. https://qph.cf2.quoracdn.net/main-qimg-347224a294827c8b28d0240cf1b61b62 ).
3) Followed by IBM Xenix 1.0 ( blue labels ). Then SCO white labels ( which are System V, plus the BSD me-too stuff ).
That is the set I got: the runtime OS (N1~N4), the basic/extended utilities, B1~B6 ( basically all the bin commands and the man contents ), and the software development tool kit ( the Extended System Commands for Development ).
4) SCO-supported programming and distributions pre-NT ( NT basically killed Xenix at MS ).
5) The whole post-SCO stuff/Caldera/Xinuos mess. ( Intel admitted their compiler was the wrong endian at the beginning of this era. )
There is a manifest in one of the release notes, however it is missing two pages of the file list.
Somewhere there is a version check…
If someone could start posting the tar contents on github….
( p.s. without the help of Tenox, you and nCommander ( ALL GODS! ), I would still be lost. \m/ \m/ /m\ /m\ )
Those green Microsoft Xenix 286 disks are actually identical to the blue IBM Xenix disks.
SCO’s 286 Xenix looks quite a bit different… but it ought to, because the MS/IBM 286 Xenix was System III based, and SCO’s was System V.
@ForOldHack:
My personal description of the eras of Xenix is as follows:
1. Xenix 1.x. This is essentially Unix V6 for the PDP-11. Microsoft starts playing with Unix. No commercial products were released.
2. Xenix 2.x. This is based on Unix V7. Microsoft makes modifications and ports Xenix to various platforms in addition to the PDP-11, like the Motorola MC68000, Zilog Z800x and possibly others. A port for the Intel 8086 is also provided, but it is for Altos server machines, not for PCs. A notable fact about the Altos machines is that they include a PMMU, which allows multiple concurrent processes built according to the large memory model, with fork and everything. In other words, a Unix-optimized machine.
3. Xenix 3.x. This is based on Unix System III. It’s still multiplatform, and now supports proper PCs as well. SCO has been brought on board to help with the porting efforts. Microsoft made a Xenix release for the TRS-80 Model 16 with MC68000 CPU. SCO made a release for the IBM PC/XT in 1983, but it seems to be lost, except for its manual. Microsoft made two releases for the IBM PC/AT in 1984/1985 on behalf of IBM.
4. Xenix System V 2.x. This is based on Unix SVR2. Microsoft gradually lost interest in Xenix (being concentrated on OS/2) and mostly handed it down to SCO. I’m only aware of PC releases of Xenix from this point on. SCO made versions for the 8086, 80286 and 80386 CPUs.
5. SCO Unix – based on Unix SVR3/SVR3.2, requires 80386 or better, has good backward compatibility with Xenix. La(te)st version is SCO OpenServer 5.0.7.
@Michal Necasek Would you provide more information on the warez mega dump? I’ve been looking for Vibrant Graphics’ Soft-Engine for decades now :-(. It was my mistake back then to record on no-name instead of Verbatim CD-Rs.
It’s here: https://archive.org/details/ibm-wgam-wbiz-collection
I see something called “Liquid Speed” from Vibrant Graphics in there, but that doesn’t sound like what you’re looking for.
See also here: https://www.vogons.org/viewtopic.php?t=41020
For so many years my curiosity has been Xenix on the Apple Lisa… Why? Why port Xenix to an expensive computer with a frame buffer and few serial ports, designed to be a personal computer? Why would somebody who bought a Lisa buy Xenix? Since there is a copy floating around the Internet, somebody must have bought it, but it makes no sense to me. I can see a much better case for Xenix on the Tandy with the 68000 than on the Lisa.
@Fernando:
IMHO, the cases of Tandy machines with MC68000 and Apple Lisa have many similarities. Both sold poorly, both had few software titles, both were expensive (In fact, Lisa was twice as expensive as the Tandy, but this is the “Apple Tax” for you). Xenix support somewhat revived the sales of Tandy TRS-80 models 16/6000, making them more desirable to users. Perhaps there was hope that Xenix would bring the same benefits to Lisa too, but as we know now, it was not to be.
Could Xenix availability for the Lisa have been a way to tick a box, allowing it to be bought within certain organizations with, for example, a dedicated budget for “unix workstations” and whatnot?
Xenix on Lisa was the result of Apple trying to sell machines to the government, which required UNIX ( or was that slightly later, with A/UX? ). But since Microsoft wanted to sell OSes for machines like Sun workstations, there was a 68000 port to a bunch of machines: Sun-1, TRS-16; Forward was the first, and its manual is on archive.org. Fortune was next. Naturally, the Portable C Compiler was the same endian as the hardware. ( Xenix on Lisa has some files that were created 01/15/1984. Disks (c) 1983:
http://toastytech.com/guis/lisa3.html
https://thumbs.worthpoint.com/zoom/images1/1/1217/13/1983-apple-computer-xenix-lisa-1_1_3ae79fd850a287e2a96d152b5afdb8c9.jpg
http://seefigure1.com/images/xenix/xenix-timeline.jpg
Most unfortunately, there are no version numbers or release/build dates.
“Both sold poorly” The TRS-80 became the most popular, and largest, installation base for Unix. It so happens that there were stacks in the back of Radio Shack locations. ( My roommate’s location had 1 in production, 1 spare, and 3 random, some with 30+ address tags. ) They also had the largest Unix sales force ( I am joking ), as well as an acknowledgement from Dennis Ritchie.
The canonical reference has to remain the Unix history timeline, which Brian Kernighan cited in his memoir:
https://wpollock.com/Unix/UnixHistoryChart.htm
Xenix announced August 25th, 1980.
Xenix 2.3 Late 1981.
Xenix 3.0, announced April 1983. ( It would not see the light of day until November,
following the announcement by IBM of the IBM AT and the availability of IBM Xenix 1.0 ( Unix System III, MS Xenix 3.0 ). ( Are you confused yet? ) ( Now confirmed to be identical disks to MS Xenix 3.0. )
SCO Xenix would follow less than a month later with SCO Xenix 3.0 ( Unix System III, MS Xenix 3.0 ).
And the Unix wars were off and running.
More things are revealed:
https://archive.org/details/xenix-release-notes/page/n37/
SCO Xenix Release notes 2.1.3
yes, Xenix 2.1.3 only came on 360k diskettes.
pg 14
Xenix operating system -volumes N1-N5 (48tpi) [ Double Density ]
-volumes N1-N2 (96tpi) [ High Density ]
pg 36ff
“The /etc/fd96boot0 file has been included in this release to help you create a high-density bootable floppy. The following commands will allow you to use this file to create high density (96 tpi) bootable floppies; if you save these instructions into a file, you may use the shell script on the following page as a boot floppy generator.”
Well, looks like SCO Xenix 2.1.3 11/84 did not come with HD floppies,
you had to create them.
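For the curious, the generator the notes describe boils down to copying that boot block onto the start of a freshly formatted HD floppy. A rough sketch of the idea; the file names here are stand-ins so it can run anywhere, since the real thing would read /etc/fd96boot0 and write to the raw 96 tpi floppy device (whose name the quoted notes don’t give):

```shell
# Stand-ins for /etc/fd96boot0 and the raw high-density floppy device.
BOOT=fd96boot0
FLOPPY=floppy.img

printf 'fake boot block' > "$BOOT"                           # pretend boot image
dd if=/dev/zero of="$FLOPPY" bs=512 count=2400 2>/dev/null   # blank 1.2MB "floppy"
dd if="$BOOT" of="$FLOPPY" conv=notrunc 2>/dev/null          # install boot block at sector 0
dd if="$FLOPPY" bs=1 count=15 2>/dev/null                    # prints the installed boot block
```

Presumably the actual script on the notes’ following page then populated the rest of the disk with the kernel and filesystem, but that part isn’t quoted above.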
Do we have any creation dates for the files on MS Xenix 2.1?
“The TRS-80 became the most popular, and largest installation base for Unix.”
The sales of the Lisa and the TRS-80 16/16B seem to have been comparable, in the tens of thousands (in the range of 60000-80000). Coupled with the fact that the Lisa was about twice as expensive as the Tandy, I wouldn’t say that the Tandy was more successful than the Lisa, which was deemed a commercial failure. Yes, for a Unix machine the Tandy was a good seller, but for a PC, it was poor.
“Well, looks like SCO Xenix 2.1.3 11/84…”
Isn’t it “SCO Xenix System V 2.1.3”? Anyway I think this version came out on June 20, 1986, as is written in the Release Notes you’re quoting…
Judging by the copyright of 1986 written on the booting screen of this Xenix 2.1, it must be from somewhere in the first half of 1986.
oops, a correction for my previous post. I don’t think that such a thing like “MS Xenix System V 2.1” ever existed (just my personal opinion). A “MS Xenix 2.1” probably did exist, but as a very old thing, Unix V7 based, and not relevant to our discussion. We shouldn’t confuse “Xenix 2.x” and “Xenix System V 2.x”, which are two totally different things.
The SCO Xenix System V 2.1 that this article is about is certainly from the first half of 1986.
I found these files, XEN286-1~4.ZIP, but the Teledisk images 2 and 4 are corrupted. It was in the *BIZ49 archive. Also there is a *ENDEM-1~4.ZIP in the *BIZ35 archive which says Xenix Demo. I would like to get these files uncorrupted so I can untar them and look at the dates. Did tar from that far back support dates? The System V tar did, and it is claimed that the Xenix tar did too. I would guess that the disks were originally created on their VAX, sent to the distribution department for QA, and eventually to manufacturing.
I don’t think the XEN286* files in *biz49 are corrupted. The XENIX demo in *biz35, yes, the ZIP archives are corrupted.
Yes tar supported timestamps, and AFAIK they have always been there in XENIX.
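(For anyone poking at such images today: the modification time is stored right in each tar header, as 12 octal bytes at offset 136 in both the old V7 and later ustar layouts, so the dates can be read without even extracting anything. A quick illustrative sketch in Python; the member name and timestamp are of course made up:)

```python
import io
import tarfile
import time

def header_mtime(raw_header: bytes) -> int:
    """Parse the mtime field of a tar header: 12 octal bytes at offset 136."""
    field = raw_header[136:148].split(b"\0")[0].strip()
    return int(field, 8)

# Build a tiny tar in memory with a known timestamp (ustar format,
# which keeps the same mtime field layout as the old V7 tars).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tf:
    info = tarfile.TarInfo(name="unix")   # made-up member name
    info.mtime = 476064000                # 1985-02-01 00:00 UTC, for illustration
    tf.addfile(info)

first_header = buf.getvalue()[:512]       # each tar header is one 512-byte block
print(time.gmtime(header_mtime(first_header)).tm_year)   # → 1985
```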
Buried on page 5 of the Xenix 3.0 (MS) documentation is this curious reference:
“[Section] 4.8 2.3 [version] Binaries.
The system now executes Microsoft version 2.3 8086 binaries (for example, Altos
586 or Intel86/330 binaries).”
[Note: MS Xenix 2.3 is nowhere on seefigure1’s map, and the Intel86/330 is not either, but the PC/XT follows the Altos 586.]
Which makes it possible that the binaries for the extended utilities could run, but not install, which would be extremely ugly, but… par for the course. MS Xenix 3.0 would only be an OS upgrade.
Also it says there is a minimal OS that runs in 384K or less, 256K being the minimum ( no vi or vsh except with 384K+ ).
I had 512K, soon upgraded to 640K, an 8087, a second hard disk, a 10 MHz V20, a 14” paper-white monitor, a Hercules clone. Everything ran perfectly and fast, all the way through K&R, until I tried to compile Hack. Total and complete scorched earth. Total. Unrecoverable file system crash on both file systems.
@ForOldHack: Wow, that last bit sounds a *lot* like the event that
finally made me leave Luni^H^H^Hinux with it unstable fs, and well,
unstable everything…
(For NetBSD of course, not windoze or anything :x)
Get your shovels. ( I am Kidding ):
Found this:
NCR PC-8 80286 8.00 653 649 XENIX SCO SVR2.0.4,cc large,
Here:
https://web.archive.org/web/20141202093953/http://wiretap.area.com/Gopher/Library/Techdoc/Bench/dhryst.txt
Which points to this:
https://www.ncr.org.uk/pc8
So if we can find these disks for the NCR PC-8 286… or better, the release notes.
Windows can have “unstable fs”, because effectively, it has nothing but NTFS. While Linux has at least half a dozen filesystems that are feasible to use as a computer’s main filesystem. If btrfs crashes, just use ZFS 🙂
Truth is, UFS{,2} works just fine. On systems that actually bother to
implement it.
Pushing ZFS for the sake of stability seems like a comical move,
frankly. The more complex something is, the more can go wrong.
I wouldn’t describe UFS as sufficient for much of anything. By the 1990s, it was routine to buy Veritas for systems that didn’t come with an adequate file system like IRIX’s XFS or AIX’s JFS (or whose vendor packaged Veritas themselves, like HP-UX). Linux had a notable shortcoming in this regard. I’ve wasted many hours after an unplanned crash on a Solaris or Linux system, waiting on the inevitable hour-long fsck of a multi-gigabyte volume, with rubbish splattered all over /lost+found.
OS/2 wasn’t much better in this regard until they ported JFS to it, and it still couldn’t be used for the boot volume in 4.5. HPFS was pretty aggravating (I spent many more hours waiting on CHKDSKs when trying to develop drivers; eventually I got the good sense to switch to the smallest possible FAT volume on the target system.)
NT, on the other hand, had a surprisingly stable file system and did so from day one. I can’t remember which beta NTFS appeared in, but I don’t recall any serious problems with it.
In the modern era I run btrfs, and used to run ZFS before btrfs was stable. Things like checkpoints are pretty essential on a modern system.
I’m not talking about UFS on some commercial garbage a hundred years ago,
I’m talking about UFS on (mainly) BSD now.
Things like checkpoints aren’t essential on a modern system, they’re
essential on a broken system. A system that just keeps working doesn’t
need checkpoints (although of course they can be appreciated as a fancy
feature). Besides…
A lot of fancy features that keep being added to the file system layer
actually belong a layer lower, in the form of a “block system”. Me’s
long since figured that out.
IME, L{uni,inu}x folks need to start realizing that the fast-and-loose
design decisions that made some sense in the 1990s make no sense at all
anymore today. Back then it was difficult to find the space for much
complexity; now our machines are overflowing with it.
I agree with Josh re filesystem stability
Even back in the NT4 days NTFS was rock solid compared to the competition.
Also agree that fsck taking ages was a real downside of various file systems.
Re HPFS: I remember that it was recommended (not sure if it was actually based on facts or if it was the IT equivalent of astrology) to run CHKDSK more than one time on HPFS just to be sure that it really repaired any faults. Maybe the recommendation was to run CHKDSK until it found no further faults?
Re file systems: I think that the Amiga doesn’t get as much credit as it deserves for its file system. It also suffered from long file system checks if the computer crashed or was rebooted with files open for writing (i.e. dirty bit), and this was even worse in that it had no clue about multiple partitions on the same disk (i.e. it started checking all partitions at the same time; after a crash you had to manually disable all but one partition at a time for the partitions that had files open for writing, otherwise it would take ages). But the check actually repaired everything, and corruption was extremely rare. At most you lost parts of the file that was open for writing, which is expected since data that was supposed to be written might not have ended up on the disk.

There were two versions of the Amiga file system, “old file system” and “fast file system”. Both were solid, but the old one was extra solid in that every data block had links both forward and backward, so even if a sector was unreadable it was still possible to determine what was part of the same file. Fast file system got rid of the backward link and did some other stuff that gave more space for user data and less for metadata. And of course it wasn’t perfect; for example there was no dedicated directory listing, but rather all files were linked to each other and you had to traverse the linked chain to get a directory listing. There was a hash thing though, so accessing files with known names was still fast. Eventually this was solved (sort of) with a directory cache addition to the file system. By that time most people who actually did anything productivity related probably had a hard disk anyway, and the slow directory listings were mostly a problem for floppies, but still.
Remember that at the time FAT was the main competition, with its ridiculously high risk of corruption unless you always remembered to run chkdsk (or later scandisk) if there was any chance that files were open for writing at a crash/reboot.
Sorry for going off on an even wider tangent, but another tidbit re file systems: the 1541 disk drive for the Commodore 64 had nasty bugs related to saving files by overwriting an existing file with the same name. It turned out that in some circumstances certain disk operations failed to release all allocated buffers, and the save-overwrite feature was the only one that required all buffers free; if it couldn’t allocate all the buffers it needed, it would corrupt the disk. That’s two bugs directly (failing to release allocated buffers, and missing error checking when failing to allocate all buffers), but discussions as late as about ten years ago ended up finding that there was at least a third bug involved. Can’t remember the details.

Btw, a reason for very different mileage re this is that some users didn’t have/use any extensions/utilities for handling disks, and in that case they rarely if ever ran into the failed-to-release-buffers bug, while users who had such utilities were way more likely to run into it. The reason is that without any utility you had to erase/overwrite your BASIC program to get a directory listing, and thus you tended to just turn on your computer and drive, load your program with a known name, do whatever work on it, and then save-overwrite with the same known name, without ever doing a directory listing (simply relying on the status LED on the drive to tell if the save operation was successful or not), and thus the bug wasn’t triggered. However, all utilities for disk handling had a feature to list the directory without overwriting your BASIC program, and that apparently was a way to trigger the bug.
Also, if anyone wonders why directory listings overwrote your BASIC program: the reason is that disk drives were added about two years after the release of the first Commodore 8-bit computer, and apparently no one thought it was worth having dedicated directory-listing code taking up ROM space before there were even any disk drives available. The solution was that the drive would spit out the directory as a BASIC program, with fake line numbers representing the file sizes. This was obviously not great, but on the other hand it was great that everything was already built into ROM, which made sure that any software that would work with tapes would also work with disks (unless it relied on some sort of copy protection, or was hard-coded to the tape device number for data files), unlike other computers like the Atari 8-bit or ZX Spectrum where the memory map shifted if you added any other storage than tape. It’s also neat that you don’t need any boot disk to get going; the DOS is built into ROM in the drive and can do everything including formatting disks, so you just need a blank disk to use it.
@zeurkous:
“Pushing ZFS for the sake of stability seems like a comical move,
frankly.”
Then just use ext4. It’s very stable and robust and IMHO, can do everything that UFS2 can. Granted, it can’t do checkpoints, but I already learned that you don’t use them. I don’t use them either; I just see no benefit in them to justify the increase in complexity (both in their implementation and in MY workflow).
About ext4 (and its predecessor ext3) – they’ve both been dependable as rocks in my experience running multiple Linux servers for many years. I don’t think I have ever run fsck on them manually, and the occasions where they ran fsck automatically can be counted on the fingers on one hand, always due to power or motherboard failures; fsck has always been reasonably fast and the systems were up and running in no time, with no loss of data and no lost+found nonsense.
Sorry vbadesc, but after the earlier disasters (where even a humongous
ext3 journal didn’t help a fsck), me’s not going to try ext fses, ever
again.
It’s quite simple: for me uses, me’s found UFS (a bit of an exonym as us
BSD weirdos just call it “ffs”, inviting more comparisons w/ the state
of affairs on the Amiga) to be stable and reliable, while the opposite
is true of me experiences even w/ ext3. That’s enough for me to know
what to rely on.
Me’ll accept that your use case is probably quite different. Me’ll also
accept that when everything else goes fine, ext is probably more or
less[0] fine. It’s when shit hits the fan that the reliability of an fs
becomes an issue, and me’s found, well… medoesn’t need to repeat
meself again.
[0] Me’s not sure if it’s specific to ext, but there’s this nasty
security hole where the permissions of symlinks are forced to
777.
vbdasc*
Me sincere apologies — it’s hot here and me’s making silly tyops all
day :/
The permissions of symlinks being 0777 is not a security hole because nothing ever uses the permissions of symlinks for anything. (This has also been routine Unix behaviour on many Unix variants for many decades — it’s not something Linux-specific, and it’s *certainly* not filesystem-specific.)
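(A quick demonstration, on Linux at least, where lstat() reports a symlink’s mode bits as 0777 and access checks go through to the target; a throwaway sketch with made-up paths:)

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target")
    link = os.path.join(d, "link")
    with open(target, "w") as f:
        f.write("data")
    os.symlink(target, link)

    # On Linux the link's own mode bits always read as rwxrwxrwx...
    print(oct(stat.S_IMODE(os.lstat(link).st_mode)))   # → 0o777 on Linux

    # ...but they are never consulted: opening through the link is
    # governed solely by the target's permissions.
    os.chmod(target, 0o400)        # target now read-only for owner
    with open(link) as f:
        print(f.read())            # → data
```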
Actually, the handling of symlink permissions is “somewhat”
inconsistent among Unices; therefore, the permissions’d better be
correct in all cases, except where compatibility — and possible future
sanity — is not a concern.