Discussion:
z/PDOS-generic
Paul Edwards
2024-07-18 15:07:52 UTC
For 35+ years I have wondered why there was no MSDOS for
the mainframe. I now have an EBCDIC FAT32 file system on
FBA disks on the mainframe and an operating system that can
do basic manipulation, like typing files.

Search for z/PDOS-generic at https://pdos.org

PDOS-generic has never been fleshed out because I wasn't sure
if it was truly portable or whether I was missing something. The
mainframe is always my go-to place for proving portability.

I'm not sure where to go from here. I think I might get an Atari
clone operational under PDOS-generic (I already have the Amiga)
to try to prove the technique of zapping a BSS variable on load
to inform the executable of the new environment so that it doesn't
do a real trap and instead does a callback. Actually it's mainly on
the mainframe that I need to do that, as the Atari has a control
block on entry that I can fill in with the callback overrides. Note
that I have an Amiga mini-clone already using this technique,
which I run under qemu-m68k (ie user, not system) on my
Manjaro Linux on a Pinebook Pro (ARM). My main development
system is still Windows 2000 running under qemu on the PBP,
and I just remembered today that that gave me access to Outlook
Express which I used a long time ago for News, and it still works.
So I didn't need to get my ArcaOS operational after all (which
has Thunderbird).
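
To illustrate the BSS-zap technique mentioned above, a minimal sketch
(the variable and callback names here are hypothetical, not
PDOS-generic's actual interface):

  /* A pointer living in BSS, so it is zero unless the loader
     patches ("zaps") it with a callback table at load time. */
  typedef struct {
      int (*write_str)(const char *s);
  } host_callbacks_t;

  host_callbacks_t *__host_cb;   /* BSS: zero by default */

  extern int real_trap_write(const char *s);  /* native trap/SVC path */

  int do_write(const char *s)
  {
      if (__host_cb != 0)
          return __host_cb->write_str(s);  /* hosted: callback */
      return real_trap_write(s);           /* native: real trap */
  }

The loader only needs to locate __host_cb (by symbol or known offset)
and store the table's address before transferring control.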

BFN. Paul.
Grant Taylor
2024-07-19 03:40:29 UTC
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.

MS-DOS

Microsoft DOS

Micro

micro-computers are at the smallest end of the spectrum, with mainframes
and supers at the other end.

IBM provided a Disk Operating System for early and / or smaller mainframes.

But Microsoft never provided DOS for mainframes.
--
Grant. . . .
Paul Edwards
2024-07-19 10:43:13 UTC
Sure - but why not make it available anyway? What's the barrier
to someone doing that? No-one is interested? Too much work?
It didn't need to be Microsoft personally. And it can be written
in C to make things easier. Or even some other language - e.g.
CP/M was written in PL/M I think.

BFN. Paul.
Post by Grant Taylor
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.
MS-DOS
Microsoft DOS
Micro
micro-computers are at the smallest end of the spectrum, with mainframes
and supers at the other end.
IBM provided a Disk Operating System for early and / or smaller mainframes.
But Microsoft never provided DOS for mainframes.
--
Grant. . . .
Scott Lurndal
2024-07-19 16:18:13 UTC
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.

No mainframe user would ever be interested in something
so simplistically useless.
BGB-Alt
2024-07-19 22:12:40 UTC
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
No mainframe user would ever be interested in something
so simplistically useless.
It has a FAT filesystem, MZ loader, and basic console printing and
memory allocation... These cover the main bases for what one needs for
an operating system.


Granted, if one wants memory protection and multiple processes/threads,
this is no longer sufficient as now the OS needs to be able to do all
the other stuff programs might want to be able to do.

Granted, another type of thing one might need to deal with is how
programs should be able to interface with OS facilities and device drivers.


Say, for example:
Unix style: System calls identified by number, and treated like a
function call. Most devices are presented as file-like objects (mostly
using file operations or "ioctl()").

COM style interfaces: An object is given with various methods, and a
mechanism exists for mapping these method calls from userspace to kernel
space or between processes.


In my project, I used a hybrid approach, where a range of system-call
numbers were set aside for method calls. There is a system call used to
request an interface object for a given interface.

In this case, an interface ID is given as a pair of 64-bit numbers,
which may be interpreted as FOURCC's, EIGHTCC's, or a UUID/GUID. When
needed, it is possible to tell them apart by looking at bit patterns.
Current thinking is mostly that OS APIs would use FOURCC or EIGHTCC
pairs, whereas private interfaces would use GUIDs.

The object is presented (to the client application) with its VTable
mostly filled up with methods which merely exist to forward their
arguments to the corresponding system-call number (for their location
within the VTable).
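
As a rough client-side sketch of that arrangement (all names and
numbers here are illustrative, not the actual ABI):

  typedef struct ExampleDev ExampleDev;
  typedef struct {
      int (*Read)(ExampleDev *self, void *buf, long n);
      int (*Write)(ExampleDev *self, const void *buf, long n);
  } ExampleDevVtbl;
  struct ExampleDev { const ExampleDevVtbl *vt; };

  /* Assumed: a range of syscall numbers set aside for method
     calls, where VTable slot k forwards to BASE+k. */
  #define METHOD_SYSCALL_BASE 0x8000
  extern long __syscall3(long num, long a, long b, long c);

  static int fwd_Read(ExampleDev *self, void *buf, long n)
  {
      return (int)__syscall3(METHOD_SYSCALL_BASE + 0,
                             (long)self, (long)buf, n);
  }

  /* Assumed syscall: request an interface object by a pair of
     64-bit IDs (a FOURCC/EIGHTCC pair, or the halves of a GUID). */
  extern ExampleDev *os_get_interface(unsigned long long id_hi,
                                      unsigned long long id_lo);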


Some other devices could present themselves with a file-like or
socket-like interface though.

Though, say, for things like GUI/audio/etc interfaces, a COM-like
interface routed directly over syscalls would have lower overhead, say,
than trying to shoe-horn it through message passing over a socket or
similar.

...
Scott Lurndal
2024-07-19 23:21:22 UTC
Post by BGB-Alt
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
No mainframe user would ever be interested in something
so simplistically useless.
It has a FAT filesystem
Poor performance, silly filename length limitations.
Post by BGB-Alt
, MZ loader,
whatever that might be.
Post by BGB-Alt
and basic console printing and
memory allocation... These cover the main bases for what one needs for
an operating system.
Not on a million-dollar mainframe.
Dan Cross
2024-07-19 23:31:32 UTC
Post by Scott Lurndal
Post by BGB-Alt
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
No mainframe user would ever be interested in something
so simplistically useless.
It has a FAT filesystem
Poor performance, silly filename length limitations.
Post by BGB-Alt
, MZ loader,
whatever that might be.
Post by BGB-Alt
and basic console printing and
memory allocation... These cover the main bases for what one needs for
an operating system.
Not on a million-dollar mainframe.
Please don't feed the troll. Or do; it's not like this
newsgroup gets much traffic apart from this guy's
weird dos clone and ramblings about mainframes.

- Dan C.
BGB
2024-07-20 06:30:29 UTC
Post by Scott Lurndal
Post by BGB-Alt
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
No mainframe user would ever be interested in something
so simplistically useless.
It has a FAT filesystem
Poor performance, silly filename length limitations.
True enough.

But, I guess everyone thought 8.3 filenames were fine in the 80s and
early 90s (or, for some of us, they might bring back a bit of childhood
nostalgia, a memory of the times before most everything went
over to free-form long filenames).


Personally, I suspect a limit of 32 or 64 characters would probably be
fine for most uses, though most modern systems have settled on a 256
character name limit.

However, given a lot of systems have settled on a 260 character
"maxpath" or similar, the practical use of a 256 character name limit is
debatable (one can only really use a full length filename in the root
directory, which is less useful).

If it were just me, I would assume a 32-character filename limit, and a
512 character maxpath.


Granted, a 32 character limit might seem imposing for people who prefer
to use the "Hey check it out, my filename is a whole sentence or
paragraph.txt" naming convention...
Post by Scott Lurndal
Post by BGB-Alt
, MZ loader,
whatever that might be.
The MS-DOS ".EXE" format...

It was useful on MS-DOS, granted, not so much at this point.


On more modern systems, this role is typically served by ELF or PE/COFF.

Where, PE/COFF was generally a COFF binary glued onto an MZ stub (which
traditionally displays "This program cannot be run in DOS mode."
and exits).


In my own uses, I dropped the MZ EXE stub, beginning the file at the
'PE' marker. This isn't quite back to being COFF, as COFF typically
starts at the machine-type ID. But, having a magic FOURCC here is
useful (typically 'PEL4' or similar in my current use).
Post by Scott Lurndal
Post by BGB-Alt
and basic console printing and
memory allocation... These cover the main bases for what one needs for
an operating system.
Forgot to mention, it also had:
keyboard input handling;
Optional support for ANSI escape codes;
...

Well, and a variety of built-in programs, like "edit", "fdisk", and
"format".
Post by Scott Lurndal
Not on a million-dollar mainframe.
Probably not...


I was more asserting that MS-DOS can be used as an operating system (and
was used as such, at one point, on PCs), not really defending that it
would make sense to run it on a mainframe.

So, yeah, how porting an MS-DOS variant to a mainframe would make any
sense, I don't know.


I guess technically, the MS-DOS source has been released, but given much
of it is 8086 assembler, how much use it would be to try to port it
is debatable...
John Ames
2024-07-22 14:51:54 UTC
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
Dan Cross
2024-07-22 15:22:26 UTC
Post by John Ames
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
I can't think of any now for which that would be true. Maybe
DOS/VS or something?

The idea of a PC operating system on a mainframe is silly. A
single-tasking, unprotected, glorified program loader like DOS
that provided synchronous, programmed IO would be hopelessly
inefficient for a heavy-use mainframe. It's one thing on a
cheap 8- or 16-bit micro where you don't care about wasting
cycles while the user thinks about what to type next. Quite
another on your big compute engine when you want to keep the
CPUs and IO devices running as close to capacity as you can, to
maximize the return on your multi-million dollar hardware
investment.

- Dan C.
John Ames
2024-07-22 16:07:30 UTC
On Mon, 22 Jul 2024 15:22:26 -0000 (UTC)
Post by Dan Cross
I can't think of any now for which that would be true. Maybe
DOS/VS or something?
TENEX's six-character filename limit is the reason Colossal Cave
Adventure is also known as ADVENT ;)
Post by Dan Cross
The idea of a PC operating system on a mainframe is silly.
No argument there. But there's room in life for silliness.
Dan Cross
2024-07-22 17:37:56 UTC
Post by John Ames
On Mon, 22 Jul 2024 15:22:26 -0000 (UTC)
Post by Dan Cross
I can't think of any now for which that would be true. Maybe
DOS/VS or something?
TENEX's six-character filename limit is the reason Colossal Cave
Adventure is also known as ADVENT ;)
Oh, I thought we were being specific to IBM mainframes,
which is almost certainly what the OP was talking about.

ITS certainly had six-character filenames, as did TOPS-10 IIRC,
but TENEX had no such limit; consider the existence of
<SYSTEM>DIRECTORY, for instance. Certainly, any unreasonably
short name limit did not survive into TOPS-20.

https://github.com/PDP-10/tenex/blob/master/pdf/TEN-SYS-2.pdf
suggests that the "primary name string" is of
"indefinite length".
Post by John Ames
Post by Dan Cross
The idea of a PC operating system on a mainframe is silly.
No argument there. But there's room in life for silliness.
Indeed. I don't think OP is making that distinction, though.

- Dan C.
Scott Lurndal
2024-07-22 18:07:19 UTC
Post by Dan Cross
Post by John Ames
On Mon, 22 Jul 2024 15:22:26 -0000 (UTC)
Post by Dan Cross
I can't think of any now for which that would be true. Maybe
DOS/VS or something?
TENEX's six-character filename limit is the reason Colossal Cave
Adventure is also known as ADVENT ;)
Oh, I thought we were being specific to IBM mainframes,
which is almost certainly what the OP was talking about.
ITS certainly had six-character filenames, as did TOPS-10 IIRC,
but TENEX had no such limit; consider the existence of
<SYSTEM>DIRECTORY, for instance. Certainly, any unreasonably
short name limit did not survive into TOPS-20.
https://github.com/PDP-10/tenex/blob/master/pdf/TEN-SYS-2.pdf
suggests that the "primary name string" is of
"indefinite length".
Post by John Ames
Post by Dan Cross
The idea of a PC operating system on a mainframe is silly.
No argument there. But there's room in life for silliness.
Indeed. I don't think OP is making that distinction, though.
Agreed. Even the ANSI Magtape format had 17-character filenames
back in the day. Some older Burroughs systems were limited to 12
characters (six for pack/volume name and six for filename), but
large systems (e.g. B6500 et al) had a longer limit.

The original unix filesystem was limited to 14, IIRC.
Dan Cross
2024-07-22 19:38:41 UTC
Post by Scott Lurndal
Post by Dan Cross
Post by John Ames
On Mon, 22 Jul 2024 15:22:26 -0000 (UTC)
Post by Dan Cross
I can't think of any now for which that would be true. Maybe
DOS/VS or something?
TENEX's six-character filename limit is the reason Colossal Cave
Adventure is also known as ADVENT ;)
Oh, I thought we were being specific to IBM mainframes,
which is almost certainly what the OP was talking about.
ITS certainly had six-character filenames, as did TOPS-10 IIRC,
but TENEX had no such limit; consider the existence of
<SYSTEM>DIRECTORY, for instance. Certainly, any unreasonably
short name limit did not survive into TOPS-20.
https://github.com/PDP-10/tenex/blob/master/pdf/TEN-SYS-2.pdf
suggests that the "primary name string" is of
"indefinite length".
Post by John Ames
Post by Dan Cross
The idea of a PC operating system on a mainframe is silly.
No argument there. But there's room in life for silliness.
Indeed. I don't think OP is making that distinction, though.
Agreed. Even the ANSI Magtape format had 17-character filenames
back in the day. Some older Burroughs systems were limited to 12
characters (six for pack/volume name and six for filename), but
large systems (e.g. B6500 et al) had a longer limit.
The original unix filesystem was limited to 14, IIRC.
Correct. Two bytes for the inode number, and 14 for
the filename, in a 16-byte directory entry. Fixed in
4BSD, where the 4.2 filesystem has a variable length
filename (up to 255 characters) and a "reclen" field
that points to the next (occupied) entry in any given
dir. Creating a new file in some directory basically
meant doing a first-fit search through the directory
file until one could find a suitably sized "slot".
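
For reference, the V7-era on-disk entry was just (sketched in C):

  struct v7_direct {          /* 16 bytes on disk */
      unsigned short d_ino;   /* 2-byte inode number; 0 = free slot */
      char d_name[14];        /* NUL-padded, not necessarily
                                 NUL-terminated at 14 chars */
  };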

Good times.

- Dan C.
John Ames
2024-07-22 18:18:57 UTC
On Mon, 22 Jul 2024 17:37:56 -0000 (UTC)
Post by Dan Cross
ITS certainly had six-character filenames, as did TOPS-10 IIRC,
but TENEX had no such limit; consider the existence of
<SYSTEM>DIRECTORY, for instance. Certainly, any unreasonably
short name limit did not survive into TOPS-20.
I stand corrected...!
BGB
2024-07-22 19:16:26 UTC
Post by John Ames
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
Looking around a bit, it seems:
MS-DOS: 8.3
Commodore: 15.0
Apple ProDOS: 16.0
Apple Macintosh: 31.0 (HFS)
Early Unix: 14 (~ N.M where N+M+1 <= 14)

Whereas TENEX and some others were 6 character.
OS4000: 8 character
VAX/VMS (and others): 6.3
It seems 6.3 was fairly common on DEC OS's.

Others:
ISO 9660: 30 (variable format, similar to Unix)
UDF: 255
FAT32 and NTFS: 256 (UTF-16)
EXT2/3/4: 256 (UTF-8)

For most uses, a 32 character limit would probably be fine.


In many Apple systems, file type and similar was given in a hidden
"resource fork" rather than encoded in the filename via a file extension
or similar. This seems to be a bit of weirdness fairly specific to Apple
systems.


For an experimental filesystem design of mine (not used much as of yet),
I had used 48-character base names (sufficient "most of the time"), with
an optional encoding for longer names.

Basically using free-form names following Unix-like conventions, albeit
with semi-mandatory file extensions more like in Windows land (binaries
typically use '.exe' and '.dll' extensions; however, unlike Unix style
shells, the file extension is not usually given when invoking a command;
and the extension will be inferred when loading the program).


However, it allows longer names using a scheme similar to FAT32 LFN's,
just with names encoded as UTF-8. Otherwise, the design was similar to
an intermediate between EXT2 and NTFS; though trying to avoid the sorts
of needless complexity seen in NTFS. The LFN's could be omitted, in
which case the name limit would be 48 bytes as UTF-8.


For directories, I went with organizing directory entries in an AVL tree:
Typical directories are not big enough to justify the relative
complexity of a B-Tree (unless aggregating the entire directory tree
structure into a shared B-Tree).
I had gone the route of using disk blocks to encode directories.
Many directories are still big enough that linear search is undesirable.

Hashed directory lookup seems to be popular, but I went with AVL here
(but, with balancing requirements relaxed to depth +/- 3 rather than +/-
1, to reduce the number of rotations needed).
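
A sketch of the relaxed balance test (structure layout hypothetical;
on disk, the child links would be block/entry indices rather than
pointers):

  typedef struct DirEnt DirEnt;
  struct DirEnt {
      char name[48];            /* NUL-padded base name */
      unsigned inode;
      DirEnt *left, *right;
      int depth;                /* height of subtree rooted here */
  };

  static int depth_of(DirEnt *e) { return e ? e->depth : 0; }

  /* Rotate only when the subtrees differ by more than 3 levels,
     vs the classic AVL limit of 1: fewer rotations, at the cost
     of a slightly deeper tree. */
  static int needs_rotate(DirEnt *e)
  {
      int dl = depth_of(e->left), dr = depth_of(e->right);
      return (dl - dr > 3) || (dr - dl > 3);
  }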


For directory lookups, generally the tree is walked using a specialized
version of "strncmp()" over the 48 character base-name. Names are
encoded as UTF-8, and the "strncmp()" variant is designed to assume that
'char' is unsigned (the standard version could give different results
based on the signedness of 'char' or other factors).

Though, "memcmp()" could probably be used and would give the same
results here (with names NUL padded to 48 bytes as-needed).
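
Such a comparison routine is only a few lines; a sketch, assuming
NUL-padded 48-byte names:

  /* Compare two fixed-width names as unsigned bytes, so the
     result does not depend on the platform's 'char' signedness. */
  static int name_cmp48(const char *a, const char *b)
  {
      int i;
      for (i = 0; i < 48; i++) {
          unsigned char ca = (unsigned char)a[i];
          unsigned char cb = (unsigned char)b[i];
          if (ca != cb)
              return (ca < cb) ? -1 : 1;
          if (ca == 0)
              return 0;   /* both hit NUL: equal */
      }
      return 0;
  }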


As I saw it, fully variable length directory entries (like seen in EXT2)
are also undesirable.
So, in this case, directory entries are 64 bytes, with 48 bytes for the
name, and the rest for tree management data and holding inode index.

Another major structure is the inode table, which:
Is semi-recursive, the inode table itself has an inode,
is allocated much like a file.
Inodes are built from a tagged structure.
Partially inspired by NTFS.
Currently uses a block-allocation scheme similar to EXT2.
Small table of block indices:
Index 0..15: Points directly at target block;
Index 16..23: One level of indirection.
Index 24..27: Two levels of indirection.
Index 28/29: Three levels of indirection.
Index 30: Four levels of indirection.
Index 31: Five levels of indirection.
Span-based allocation was a close second place.
The tagged inode structure could also allow for span-based files.
But, I went with an EXT2 like scheme for now.
Span based allocation would have been more complicated.
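
To put rough numbers on the scheme above (assuming 4K blocks and
8-byte entries, i.e. 512 entries per indirect block; neither figure
is stated here, so treat these as illustrative):

  /* Bytes reachable through a single slot at the given index. */
  static unsigned long long slot_span(int idx)
  {
      unsigned long long span = 4096;   /* assumed block size */
      int depth, i;
      if      (idx <= 15) depth = 0;    /* direct */
      else if (idx <= 23) depth = 1;    /* single indirect */
      else if (idx <= 27) depth = 2;    /* double */
      else if (idx <= 29) depth = 3;    /* triple */
      else if (idx == 30) depth = 4;    /* quadruple */
      else                depth = 5;    /* quintuple */
      for (i = 0; i < depth; i++)
          span *= 512;                  /* assumed entries/block */
      return span;  /* e.g. idx 30: 512^4 * 4K = 256 TB */
  }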

The current implementation mostly assumes 512 byte inodes, but
technically it is variable.

In the block indirection tables, unlike EXT2, the lower-levels of
indirection have "shadowed" spaces in the higher levels of indirection.
This was mostly for sake of simplicity (it seemed simpler to just waste
some of the table entries than to go the EXT2 route). Theoretically, the
deeper tables could mirror the shallower tables, but this wasn't done in
the current implementation (easier to not bother).

Similar to filesystems like EXT2 and similar, the first 16 inodes are
currently special/reserved, and used mostly to encode filesystem
metadata (inode table, inode bitmap, root directory, block bitmap, ...).
However, one minor difference being that block numbering is relative to
the start of the partition (so, for example, block 0 in this case is a
NULL block, but technically the superblock exists at this location).
Higher numbered inodes would be used for files and similar.

For now, the special inodes are identified by magic index, unlike the
NTFS MFT which encodes a name for these special entries (maybe later
could add a "magic ID" tag or similar).

TODO might be to consider file compression. No immediate plans for
journaling support.



While a case could have been made for "just use EXT2 or similar", my
main development system is Windows, so pretty much any choice (other
than FAT32 or NTFS or similar) is a similar level of hassle.

So:
FAT32, mostly what I had ended up using thus far.
But, with some hacks to support things like symlinks and similar.
NTFS, possible, but has too much needless complexity.
EXT2, mostly more sane than NTFS, but still some questionable choices.
ExFAT, doesn't address the issues in my case.
Basically FAT but with redesigned directories
Still patent encumbered.
(For FAT32 and the core of NTFS, patents have expired).


Thus far, had been using FAT32, but using cruft to try to add things
like symlinks and similar on top of FAT32 is ugly.

...
Scott Lurndal
2024-07-22 20:14:29 UTC
Post by BGB
Post by John Ames
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
MS-DOS: 8.3
Commodore: 15.0
Apple ProDOS: 16.0
Apple Macintosh: 31.0 (HFS)
Early Unix: 14 (~ N.M where N+M+1 <= 14)
Although file suffixes had no intrinsic meaning
for Unix, and were seldom more than a single
character.
Post by BGB
Whereas TENEX and some others were 6 character.
OS4000: 8 character
VAX/VMS (and others): 6.3
VMS filenames were 17 characters originally; OpenVMS
allows much longer names.
Post by BGB
ISO 9660 30 (variable format, similar to Unix)
UDF: 255
FAT32 and NTFS: 256 (UTF-16)
EXT2/3/4: 256 (UTF-8)
POSIX defines the minimum path length (generally 1024),
but any implementation of POSIX can choose to support
longer filenames; most filesystem are limited to 255
or 256 characters for a path component.
Post by BGB
For most uses, a 32 character limit would probably be fine.
In your use cases, perhaps.
Post by BGB
Basically using free-form names following Unix-like conventions, albeit
with semi-mandatory file extensions more like in Windows land (binaries
typically use '.exe' and '.dll' extensions; however, unlike Unix style
shells, the file extension is not usually given when invoking a command;
and the extension will be inferred when loading the program).
Extensions were, and are, a pile of steaming stuff. They're
completely unnecessary as a component of a filesystem. As
a user-selected convention they're ok (for example, the gcc
driver program selects which language to compile for from
the extension (but it's optional anyway)), but the operating
system knows nothing of extensions.

Some mainframe operating systems encoded the file type in
metadata (Burroughs in the Disk File Header, unix: inode,
apple: resource fork), but that has downsides as well.
BGB
2024-07-22 23:03:29 UTC
Post by Scott Lurndal
Post by BGB
Post by John Ames
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
MS-DOS: 8.3
Commodore: 15.0
Apple ProDOS: 16.0
Apple Macintosh: 31.0 (HFS)
Early Unix: 14 (~ N.M where N+M+1 <= 14)
Although file suffixes had no intrinsic meaning
for Unix, and were seldom more than a single
character.
There were/are lots of 3 or 4 character file extensions, like ".cpp" or
".html", ...

In Linux, there are lots of multi-part extensions, like ".tar.gz", etc.

Though, I guess in traditional Unix, 1 character was common.
Post by Scott Lurndal
Post by BGB
Whereas TENEX and some others were 6 character.
OS4000: 8 character
VAX/VMS (and others): 6.3
VMS filenames were 17 characters originally; OpenVMS
allows much longer names.
When I was looking at it, VAX/VMS was listed as 6.3, whereas OpenVMS was
longer. Could be wrong, it was a fairly quick/dirty search.
Post by Scott Lurndal
Post by BGB
ISO 9660 30 (variable format, similar to Unix)
UDF: 255
FAT32 and NTFS: 256 (UTF-16)
EXT2/3/4: 256 (UTF-8)
POSIX defines the minimum path length (generally 1024),
but any implementation of POSIX can choose to support
longer filenames; most filesystem are limited to 255
or 256 characters for a path component.
OK.

Windows has a filename limit of 256, but a path-length limit of 260, so
as noted, you can only put a full-length filename into the root
directory, and putting a long-name file in a long-name directory is
likely to run into the limit.

Things like video downloaders seem to limit the first part of the
filename to around 120 characters or so (typically using the video title
as the filename, and truncating it after this point).


But, yeah, 1024 for an overall path limit makes more sense than 260.
For my own project, I had assumed 512, but either way...

Well, excluding AF_UNIX sockets, which as-is will have a 104 character
name limit... Though, this is more because of the layout for
"sockaddr_un" (where "sockaddr_storage" generally supports up to 128
bytes for the total size).

Internally though, the idea isn't that the actual path for these sockets
is used though, but rather they are mashed into a 128-bit hash (where,
internally pretty much everything can be treated as-if it were IPv6).
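
For reference, the 104 figure matches the BSD-style layout of
"sockaddr_un" (Linux instead declares sun_path as 108 bytes):

  struct sockaddr_un {
      unsigned char sun_len;        /* BSD: total struct length */
      unsigned char sun_family;     /* AF_UNIX / AF_LOCAL */
      char          sun_path[104];  /* socket pathname */
  };
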
Post by Scott Lurndal
Post by BGB
For most uses, a 32 character limit would probably be fine.
In your use cases, perhaps.
IME, the vast majority of "normal" files tend to have names shorter than
32 characters.

The video files (within YouTube or similar) seem to primarily use
shorter alphanumeric names, but the video downloaders tend to use the
title as a filename (so may generate longer names...).
Post by Scott Lurndal
Post by BGB
Basically using free-form names following Unix-like conventions, albeit
with semi-mandatory file extensions more like in Windows land (binaries
typically use '.exe' and '.dll' extensions; however, unlike Unix style
shells, the file extension is not usually given when invoking a command;
and the extension will be inferred when loading the program).
Extensions were, and are, a pile of steaming stuff. They're
completely unnecessary as a component of a filesystem. As
a user-selected convention they're ok (for example, the gcc
driver program selects which language to compile for from
the extension (but it's optional anyway)), but the operating
system knows nothing of extensions.
In my case, the filesystem driver and VFS doesn't really know much about
file extensions, but at the level of the shell and program loader, it
knows about extensions.


So, for things like opening files or "readdir()" or similar, it doesn't
care. The VFS doesn't know about LFN's either (rather, these are local
to the FAT driver). Internally, names are normalized to UTF-8 and
treated as case-sensitive (generally normalizing FAT 8.3 names to lower
case).

The handling for generating SFN's from LFN's differs slightly from
Windows regarding FAT32:
Windows: "Program Name.txt" => "PROGNA~1.TXT"
TestKern: "~HHHHHHH.~~~", where HHHHHHH is a hash of the LFN.

Mostly because the "~1" convention requires figuring out which names
already exist and advancing a sequence number (what happens when 10+
conflict?...). Simply hashing the LFN is easier (and, if an LFN exists,
no need to care about the SFN as mostly no one will see it).

It will just use an 8.3 name in cases where the filename matches an 8.3
pattern (and the case can be encoded using WinNT rules).
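
A sketch of what that hashing might look like (the hash choice here,
32-bit FNV-1a, is an assumption; the actual hash used isn't stated):

  #include <stdio.h>

  /* Reduce an LFN to an 8.3-safe name of the form "~HHHHHHH.~~~". */
  static void lfn_to_sfn(const char *lfn, char out[13])
  {
      unsigned long h = 2166136261UL;            /* FNV-1a basis */
      while (*lfn) {
          h ^= (unsigned char)*lfn++;
          h = (h * 16777619UL) & 0xFFFFFFFFUL;   /* FNV prime */
      }
      /* 7 hex digits = 28 bits of hash; collisions within one
         directory become correspondingly unlikely. */
      sprintf(out, "~%07lX.~~~", h & 0xFFFFFFFUL);
  }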


There may also be some "$META.$$$" files, but these are used internally
by the FS driver and not exposed to programs (but, would be visible if
the drive viewed from Windows). These mostly being part of a hacky
scheme to add additional metadata (along vaguely similar lines to Linux
UMSDOS; just using native VFAT LFN's for the filenames). Unlike UMSDOS
though, the table is keyed using the SFN rather than the location in
the directory (and is at least slightly less brittle).


With a new filesystem, the filesystem itself would not need to care
about file extensions, just encoding filenames (as a UTF-8 blob).

General idea was a scheme like:
0- 48: 1 entry;
49-100: 2 entry;
101-220: 4 entry.
221-256: 5 entry (though, has space for 280 bytes).

Where, each extended entry adds 60 bytes, but cuts 8 bytes off the
base-name (for the filename hash).
"OverlyLongFileNameThatIsASentance_NeedTOFindMoreToStickOnHere.txt"
Has a base name like:
"OverlyLongFileNameThatIsASentance_NeedT~HHHHHHH"
Where 'H' is the hash of the full name, and cut-off when rebuilding the
name from the LFN entries.
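
Transcribed as code, the entry-count rule above is roughly (a sketch;
sizes per the scheme described):

  /* 64-byte entries: the base entry holds 48 name bytes; each
     extension entry adds 60 bytes, but the first one costs 8
     bytes of the base name for the hash. */
  static int dirents_for_name(int len)
  {
      if (len <= 48)  return 1;
      if (len <= 100) return 2;   /* 40 + 60   */
      if (len <= 220) return 4;   /* 40 + 3*60 */
      return 5;                   /* 40 + 4*60 = 280, capped at 256 */
  }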



Though, in the case of the program loader, the extension doesn't really
determine how the file is loaded, as the loader itself mostly uses file
magic, eg:
'MZ': PE loader.
'PE': PE loader.
0x7F,'ELF': ELF Loader
'#!': Redirect ("#!pathname\n")

If it appears to be ASCII text, the extension is considered:
".bas": BASIC interpreter.
Else: Shell Script

The shell will have a list of known executable extensions, and when a
command is typed, will look it up in the following pattern:
Check current directory:
Check first for no extension;
Then tries each known executable extension.
Check everything in the PATH environment variable:
Check first for no extension;
Then, try each known extension.
Else, give up.

Once it finds a matching file, it passes it off to the loader (via a
system call). Current strategy involves trying to open each possible
name (if the open succeeds, it is seen as a hit).
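
A sketch of the magic-based dispatch (helper names hypothetical):

  extern int load_pe(const unsigned char *img, long len);
  extern int load_elf(const unsigned char *img, long len);
  extern int run_hashbang(const unsigned char *img, long len);
  extern int run_basic(const unsigned char *img, long len);
  extern int run_shell(const unsigned char *img, long len);
  extern int looks_like_text(const unsigned char *img, long len);
  extern int has_extension(const char *name, const char *ext);

  static int load_program(const char *name,
                          const unsigned char *img, long len)
  {
      if (len >= 2 && img[0] == 'M' && img[1] == 'Z')
          return load_pe(img, len);
      if (len >= 2 && img[0] == 'P' && img[1] == 'E')
          return load_pe(img, len);
      if (len >= 4 && img[0] == 0x7F && img[1] == 'E' &&
                      img[2] == 'L' && img[3] == 'F')
          return load_elf(img, len);
      if (len >= 2 && img[0] == '#' && img[1] == '!')
          return run_hashbang(img, len);  /* "#!pathname\n" */
      if (looks_like_text(img, len))
          return has_extension(name, ".bas")
               ? run_basic(img, len) : run_shell(img, len);
      return -1;  /* unknown format */
  }
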
Post by Scott Lurndal
Some mainframe operating systems encoded the file type in
metadata (Burroughs in the Disk File Header, unix: inode,
apple: resource fork), but that has downsides as well.
OK.

Metadata is annoying when files are mostly handled on systems that only
have the filename and the contents (as a big blob of bytes).

Though, generally, it is also preferable to have a file magic, such as a
FOURCC right at the start of the file or similar.

...
Scott Lurndal
2024-07-22 23:58:50 UTC
Post by BGB
Post by Scott Lurndal
Post by BGB
Post by John Ames
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
MS-DOS: 8.3
Commodore: 15.0
Apple ProDOS: 16.0
Apple Macintosh: 31.0 (HFS)
Early Unix: 14 (~ N.M where N+M+1 <= 14)
Although file suffixes had no intrinsic meaning
for Unix, and were seldom more than a single
character.
There were/are lots of 3 or 4 character file extensions, like ".cpp" or
".html", ...
In Linux, there are lots of multi-part extensions, like ".tar.gz", etc.
The point is, they are arbitrary and not required. Tar quite happily
will unpack an archive named "archive", without any extension.

We've seen, with windows, that when the operating system
(or the user) trusts the extension to accurately reflect the
content of the file, bad things happen.
Post by BGB
Post by Scott Lurndal
Post by BGB
Whereas TENEX and some others were 6 character.
OS4000: 8 character
VAX/VMS (and others): 6.3
VMS filenames were 17 characters originally; OpenVMS
allows much longer names.
When I was looking at it, VAX/VMS was listed as 6.3, whereas OpenVMS was
longer. Could be wrong, it was a fairly quick/dirty search.
I was a systems programmer on the VAX 11/780 for four years
back in the day. And we had a source license :-)
Post by BGB
But, yeah, 1024 for an overall path limit makes more sense than 260.
For my own project, I had assumed 512, but either way...
As noted, that's the POSIX minimum. Implementations are free
to support more, if properly documented.
Post by BGB
Well, excluding AF_UNIX sockets, which as-is will have a 104 character
name limit... Though, this is more because of the layout for
"sockaddr_un" (where "sockaddr_storage" generally supports up to 128
bytes for the total size).
A different namespace, of course, will have different rules.
Post by BGB
Internally though, the idea isn't that the actual path for these sockets
is used though, but rather they are mashed into a 128-bit hash (where,
internally pretty much everything can be treated as-if it were IPv6).
Post by Scott Lurndal
Post by BGB
For most uses, a 32 character limit would probably be fine.
In your use cases, perhaps.
IME, the vast majority of "normal" files tend to have names shorter than
32 characters.
The video files (within YouTube or similar) seem to primarily use
shorter alphanumeric names, but the video downloaders tend to use the
title as a filename (so may generate longer names...).
There's more to the world than what you see.

<snip>
Post by BGB
So, for things like opening files or "readdir()" or similar, it doesn't
care. The VFS doesn't know about LFN's either (rather, these are local
to the FAT driver). Internally, names are normalized to UTF-8 and
treated as case-sensitive (generally normalizing FAT 8.3 names to lower
case).
FAT? Why on earth?
Post by BGB
The handling for generating SFN's from LFN's differs slightly from
Windows: "Program Name.txt" => "PROGNA~1.TXT"
TestKern: "~HHHHHHH.~~~", where HHHHHHH is a hash of the LFN.
None of this should be necessary, and it's inherently broken
from a UI standpoint.
BGB
2024-07-23 04:06:44 UTC
Post by Scott Lurndal
Post by BGB
Post by Scott Lurndal
Post by BGB
Post by John Ames
On Fri, 19 Jul 2024 23:21:22 GMT
Post by Scott Lurndal
Poor performance, silly filename length limitations.
I dunno, 8.3 is downright spacious compared to a number of actual
mainframe operating systems...
MS-DOS: 8.3
Commodore: 15.0
Apple ProDOS: 16.0
Apple Macintosh: 31.0 (HFS)
Early Unix: 14 (~ N.M where N+M+1 <= 14)
Although file suffixes had no intrinsic meaning
for Unix, and were seldom more than a single
character.
There were/are lots of 3 or 4 character file extensions, like ".cpp" or
".html", ...
In Linux, there are lots of multi-part extensions, like ".tar.gz", etc.
The point is, they are arbitrary and not required. Tar quite happily
will unpack an archive named "archive", without any extension.
We've seen, with windows, that when the operating system
(or the user) trusts the extension to accurately reflect the
content of the file, bad things happen.
Bigger problem I think is that the OS defaults to hiding the file
extensions and many users trust the icon...

So, if they download something with a filename like "SurveyForm.pdf.exe"
with an Acrobat icon, they will assume it is a PDF.
Post by Scott Lurndal
Post by BGB
Post by Scott Lurndal
Post by BGB
Whereas TENEX and some others were 6 character.
OS4000: 8 character
VAX/VMS (and others): 6.3
VMS filenames were 17 characters originally; OpenVMS
allows much longer names.
When I was looking at it, VAX/VMS was listed as 6.3, whereas OpenVMS was
longer. Could be wrong, it was a fairly quick/dirty search.
I was a systems programmer on the VAX 11/780 for four years
back in the day. And we had a source license :-)
OK.

I didn't exist at the time that machine was new...


When my span of existence began, Compaq was making IBM PC clones, and
the NES had already been released. So, some of what information I can
gather is second hand.

Well, and I guess there was also TRON, and the "north american video
game crash" (where apparently they buried a crapload of Atari 2600 E.T.
cartridges in a landfill, ...).

Well, and then I guess Nintendo releasing the NES and Super Mario Bros,
etc. At this point, I existed.


But, like, about the earliest memories I have, are mostly of watching
the "Super Mario" cartoons, and shows like "Captain N" (at a time before
I really started messing with computers, memories from this time are
rather fragmentary).

But, these went away, and were replaced by the "Sonic The Hedgehog"
cartoons, and shows like "ReBoot". I started using computers as Windows
3.x gave way to Windows 95 (was still in elementary school at the time).

Mostly, started using computers around 3rd grade or so; at the time
computers generally running Windows 3.11 or similar (then followed by
Windows 95).

By middle school, the world had mostly moved on to Windows 98, but I was
odd and decided to run Windows NT4 (and by high-school went over to
Windows 2000, with Windows XP then making its appearance, ...).

Well, and also poking around on/off with Linux.


For me though, computers now are not all that much different from what I
had in high-school (in the early 2000s).

Most obvious changes being:
More RAM, bigger HDDs;
Loss of floppy drives and CRT monitors;
No more parallel port;
Going from IDE to SATA;
...

Well, and other changes:
The world went from flip-phones to smartphones;
Tablets appeared, and became semi popular;
Laptops went from being cheap and decent, to expensive and kinda trash.


But, now I am an aging millennial and have arguably not accomplished all
that much with my life.
Post by Scott Lurndal
Post by BGB
But, yeah, 1024 for an overall path limit makes more sense than 260.
For my own project, I had assumed 512, but either way...
As noted, that's the POSIX minimum. Implementations are free
to support more, if properly documented.
Fair enough; could increase the internal limit if needed...
Post by Scott Lurndal
Post by BGB
Well, excluding AF_UNIX sockets, which as-is will have a 104 character
name limit... Though, this is more because of the layout for
"sockaddr_un" (where "sockaddr_storage" generally supports up to 128
bytes for the total size).
A different namespace, of course, will have different rules.
Possible.

Some stuff I read implied that AF_UNIX socket addresses were supposed to
map to files in the VFS, but on current systems (like Linux) this does
not seem to be the case.

So, pretty much any arbitrary string will work, but by convention it is
meant to be a VFS path.
Post by Scott Lurndal
Post by BGB
Internally though, the idea isn't that the actual path for these sockets
is used though, but rather they are mashed into a 128-bit hash (where,
internally pretty much everything can be treated as-if it were IPv6).
Post by Scott Lurndal
Post by BGB
For most uses, a 32 character limit would probably be fine.
In your use cases, perhaps.
IME, the vast majority of "normal" files tend to have names shorter than
32 characters.
The video files (within YouTube or similar) seem to primarily use
shorter alphanumeric names, but the video downloaders tend to use the
title as a filename (so may generate longer names...).
There's more to the world than what you see.
From what I have seen, we have:
Traditional Unix paths, like:
"/usr/local/bin/x86_64-linux-elf-gcc"
Traditional Windows paths:
"C:\Program Files (x86)\Some Program\ProgName.EXE"
Traditional source-code naming conventions;
...

Most tending to, most of the time, leading to file-names shorter than 32
characters.

But, as noted, the main exception is using YouTube video titles as
filenames, but even most of these tend to only rarely exceed 100 characters.


Like, say, a "typical" example (actual file name):
"Raggedy Ann - Andy A Musical Adventure 1977 35mm Ultra HD.mp4"

Which weighs in at 62 characters... Also this movie was kinda odd.

But, yeah, I have watched some older shows / movies as well.

Well, another example, in the form of a video title:
"Rainbow Brite Beginning of Rainbow Land Part 1.mp4"

Dunno, this stuff is probably still on YouTube (goes and checks; yeah,
seems 80s Rainbow Brite is still around... I found the show enjoyable at
least).


Well, and I guess technically, if someone wanted, they could go and
binge watch all of "H.R. Pufnstuf" on YouTube, ... But, like, meh.

Well, and/or "He-Man and the Masters of the Universe" (which is at least
kinda amusing at times).


But, decided mostly to not go into writing about a bunch of old TV shows
and similar.

...
Post by Scott Lurndal
<snip>
Post by BGB
So, for things like opening files or "readdir()" or similar, it doesn't
care. The VFS doesn't know about LFN's either (rather, these are local
to the FAT driver). Internally, names are normalized to UTF-8 and
treated as case-sensitive (generally normalizing FAT 8.3 names to lower
case).
FAT? Why on earth?
Because:
I am mostly doing development from Windows;
The only filesystems that Windows natively supports on SDcards are
FAT32, NTFS, and exFAT.

If I were developing on a Linux system, I would probably have jumped
ship over to EXT2 or similar.


Comparably, neither UFS2 or MINIX-FS are particularly compelling either.
MINIX filesystem is limited;
UFS/UFS2 is crufty and weird.

Most of the other "modern" filesystems are fairly complicated (more
focused on performance and reliability on high-end systems, rather than
being designed for a resource-constrained system running from an SDcard).



Like, say, if I can run the filesystem with less LOC than what I already
need for FAT32, and little memory overhead beyond what is needed for a
block-cache and dirent cache and similar, this is good.

So, say, it needs to be under 2.5 kLOC and preferably have less than 128K
of required memory overhead (say, allowing 64K for the block-cache).


My recent experimental filesystem currently weighs in around 1.0 kLOC
(but, would be reduced a bit if read-only; around 400 LOC).

Memory reservation is currently:
~ 64K block-cache (128 sectors, 16x 4K blocks);
~ 32K inode cache (64 inodes);
~ 8K dirent cache (32 extended dirents);
~ 0.5K (superblock header).
...


This is less than currently needed for my FAT32 driver, mostly because
it needs to support 32K clusters (16x 32K = 512K). Granted, a case could
have been made for smaller block caching rather than per-cluster (I
probably would have approached caching differently had I done it now).

Though, for a read/write filesystem, 4K is a sensible block-size as this
would match the internal block size in typical SDcards (say, they expose
512B sectors to a region of SLC NAND flash, which is then backed in 4K
blocks or similar to a region of QLC NAND flash).
Post by Scott Lurndal
Post by BGB
The handling for generating SFN's from LFN's differs slightly from
Windows: "Program Name.txt" => "PROGNA~1.TXT"
TestKern: "~HHHHHHH.~~~", where HHHHHHH is a hash of the LFN.
None of this should be necessary, and it's inherently broken
from a UI standpoint.
This part of the process is mostly buried inside the FAT driver in my
case (unlike on Windows 9x, where one could see it alongside the long
filename).

Though, it seems like Windows 10 no longer exposes the shortname
directly (and short names visible via the Win32 API seem to be synthetic).
Paul Edwards
2024-08-20 20:31:16 UTC
Post by BGB
But, now I am an aging millennial and have arguably not accomplished all
that much with my life.
Didn't you email me decades ago to get some changes implemented
to PDPCLIB and you mentioned you were writing a phenomenal
number of lines of code per day? Where did all that effort go?

Regardless, what sort of thing would you consider to be
"accomplished a significant amount"? You're not going to
single-handedly reproduce Windows 11. So if that is the
bar, no-one at all has accomplished much. It's even difficult
to credit Windows itself. Who are you going to credit?
Tim Paterson? Or Bill Gates's father's (or was it his mother's?)
money?

Note that I am not dismissing Bill Gates's technical achievements
with Microsoft BASIC, but that's not Windows 11 by a very
very very long shot.

BFN. Paul.
BGB
2024-08-28 07:28:14 UTC
Post by Paul Edwards
Post by BGB
But, now I am an aging millennial and have arguably not accomplished all
that much with my life.
Didn't you email me decades ago to get some changes implemented
to PDPCLIB and you mentioned you were writing a phenomenal
number of lines of code per day? Where did all that effort go?
FWIW:

I ended up with a 3D engine, which was around 1 MLOC, sort of like
Minecraft with a Doom3 style renderer. No one cared, performance wasn't
so good (was painfully laggy), and this project fizzled.

Part of the poor performance was the use of a conservative garbage
collector, and rampant memory leaks, ... Another part was "Minecraft
style terrain rendering and stencil shadows don't mix well". Though, for
small light sources, could subset the scene geometry mostly to a
bounding-box around the light source.

But, the sun, well, the sun was kinda evil. Did later move to
shadow-maps for the sun though (though, IIRC, did RGB shadow maps to
allow for colored shadows through colored glass).


Then I wrote a new 3D engine ground-up, which was smaller and had better
performance. Few people cared, I lost motivation, and eventually it
fizzled as well. Was roughly around 0.5 MLOC, IIRC.

It had replaced the complex dynamic lighting with the use of
vertex-color lighting (with a single big rendering pass).


I started on my CPU ISA project which, checking, is around 2 MLOC
(for the C parts).

It is ~ 3.8 MLOC total, if one includes a lot of ASM and C++ code; but a
fair chunk of this is auto-generated (Verilator output, or debug ASM
output from my compiler).

There is also around 0.8 MLOC of Verilog in my project; but this drops
to 200 kLOC if only counting the current CPU core.




Ironically, the OS for my current ISA project has reused some parts from
my past 3D engine projects.

In the course of all this, ended up doing roughly 3 separate
re-implementations of the OpenGL API (the 3rd version was written to try
to leverage special features of my ISA, though it was originally
written to assume a plain software renderer).

In my current project, I have ports of GLQuake and Quake 3 Arena working
on it; though performance isn't good on a 50MHz CPU.


Ironically, parts of PDPCLIB still remain as a core part of the "OS",
though I had ended up rewriting a fair chunk of it to better fit my
use-case (the "string.c" and "math.c" stuff ended up almost entirely
rewritten, though a fair chunk of "stdio.c" and similar remains intact).
It was also expanded out to cover much of C99 and parts of C11 and C23.

Some wonky modifications were made to support DLLs, which ended up
working in an unusual way in my case:
The main binary essentially exports a COM interface to its C library;
Most of the loaded DLLs have ended up importing this COM interface,
which provides things like malloc/free, stdio backend stuff, ...


It also has a small makeshift GUI, though mostly just displays a shell
window that can be used to launch programs.


Besides my own ISA, my CPU core also runs RISC-V.

There is a possible TODO effort of trying to implement the Linux syscall
interface for RISC-V Mode, which could potentially allow me to run
binaries built by GCC against its "native" GLIBC, which could make porting
software to it easier (vs the hassle of getting GCC to use my own
runtime libraries; or trying to get programs to build using my own
compiler as a cross-compiler).


Though, I did more or less get my compiler to pretend to be GCC well
enough that for small programs, it is possible to trick "./configure"
scripts to use it as a cross compiler (doesn't scale very well, as apart
from some core POSIX libraries, most anything else is absent).

Where, for my own ISA, I am using BGBCC.
BGBCC is ~ 250 kLOC, and mostly compiles C;
Also compiles BGBScript, which sorta resembles ActionScript;
And, BGBScript2, which sorta resembles Java mixed with C#;
Albeit, unlike Java and C#, it uses manual and zone allocation.
Technically could be mixed with C, all using the same ABI;
Also an EC++ like subset of C++.
But, kinda moot as no "Modern C++" stuff has any hope of working.
But, for my current uses, C is dominant.
It is sorta wonky in that it does not use traditional object files.
It compiles into a stack-oriented bytecode and "links" from this.
The bytecode IR could be loosely compared with MSIL / CIL.
ASM code is preprocessed and forwarded as text blobs.
The backend then produces the final PE/COFF images.
Though, this mutated some as well:
Lacks MZ stub / header;
PE image is typically LZ4 compressed.
LZ4 compression makes the loading process faster.
Resource section was replaced with a WAD2 variant.
Made more sense to me than the original PE/COFF resource section.
Compiler also has a built-in format converter.
Say, to convert TGA or PNG into BMP (*1), ...

*1: General resource-section formats:
Graphics:
BMP, 4/8/16/24/32 bit.
Ye Olde standard BMP.
For 16 and 256 color, fixed palettes are used.
BMPA, 4/8 bit with a transparent color.
Basically standard, but with a transparent color.
Generally, the High-Intensity Magenta is transparent.
Or, #FF55FF (or, Color 13 in the 16-color palette)
BMP+CRAM: 2 bpp 256-color, image encoded as 8-bit CRAM.
Supports transparency in a limited form:
Only 1 non-transparent color per 4x4 block,
vs 2 colors for opaque blocks.
QOI: An image in the QOI format (lossless)
LCIF: Resembles a QOI/CRAM hybrid, lossy low/intermediate quality.
Though, BMP+CRAM is faster and has lower overhead.
UPIC: Resembles a Rice-coded JPEG
Optimized for a small low-memory-overhead decoder.
Lossy or Lossless, higher quality, but comparably slow.
Audio:
WAV, mostly PCM, A-Law, or ADPCM.


BGBCC originally started as a fork off of my BGBScript VM, which was
used as the main scripting language in my first 3D engine.

By the 2nd 3D engine, it had partly been replaced by a VM running my
(then) newer BGBScript2 language, with the engine written as a mix of C
and BGBScript2.

While I could technically use BGBScript2 in my TestKern OS, it is almost
entirely C, only really using BGBScript2 for some small test cases (it
is technically possible to use both BS and BS2 in kernel and bare-metal
contexts; and there is partial ISA level assistance for things like
tagged pointers and dynamic type-checking). Where, BS2 retains (from BS,
and its JS/AS ancestors) the ability to use optional dynamic types and
ex-nihilo objects (also BGBCC technically allows doing so in C as well,
with some non-standard syntax, but doing so is "kinda cursed").

Ironically, I am using a memory protection scheme in my ISA based on
performing ACL checks on memory pages. The basic idea for this scheme
was carried over from my original BGBScript VM (where it was applied per
object), where the idea for the scheme was (ironically) inspired partly
by how object security was passed off in the "Tron 2.0" game (in
context, as a more convoluted way of passing off the use of keycards for
doors). But, I was left thinking at the time that the idea actually
sorta made sense. But, in its present form mostly involves applying
filesystem-style checks to pages (the MMU remembers this, but raises an
exception whenever it needs the OS to sort out whether a given key can
access a given ACL).

Well, and also the use of pointers in my ISA with a 48-bit address, and
16 bits of tag metadata in the high order bits, was also itself partly a
carry-over from my Script VMs.



Near the end of my 2nd 3D engine (before the project fizzled out
entirely): It also gained the ability to load BJX2 images into the 3D
engine. In effect, the 3D engine would itself take on a role like an
OS, effectively running logical processes inside the VM (though, there
wasn't really an API to glue these into the game world).

IIRC, the idea I think was to make the "game server" able to run
programs OS style, which could then run parts of the game logic (rather
than necessarily using my BGBSCript2 language running in a VM;
potentially the BS2 VM code could be ported to BGBCC and run inside the
BJX2 VM). Though, potentially, one could also make a case for using
RISC-V ELF images. Wouldn't necessarily want to run native x86-64 code
as it would be desirable to be able to sandbox the programs inside of a
VM. In such a case, the idea would be that things like voxel
manipulation or interaction with world entities could be via COM
objects, or potentially game objects could signal events into the script
programs.


Can note that my BJX2 project was preceded by BJX1, where BJX1 started
out as a modified version of the Hitachi SH-4 ISA (most popularly used
in the SEGA Dreamcast). I had revived BGBCC initially as I needed a
compiler to target BJX1 (and SH-4). As BJX1 turned into a horrible mess
(turned 64-bit, and fragmented into multiple variants), I eventually did
a "partial reboot".

At the ASM level, initially BJX2 was very similar to BJX1, mostly
carrying over the same ASM and ABI, but with minor changes (and gaining
some features and notation inspired by the TMS320). The BJX2 ISA mutated
over time, and has since also fragmented to some extent (and its current
form also has some similarities to SH-5).

It has since drifted towards being more like RISC-V in some areas,
mostly because my CPU core can now also run RISC-V code (and, if RISC-V
needs a feature, and it is useful, may as well also have it in my own ISA).

ASM syntax/style mostly borrowed from SH-4, which seems to be in a
similar category that also includes the likes of MSP430, M68K, PDP-11,
and VAX. Well, as opposed to RISC-V using a more MIPS-like style.



I also don't really have a "proper" userland as of yet, more the kernel,
shell, and most basic programs, all exist as a single binary (so, say,
if you type "ls", the shell handles it itself; with shell instances as
kernel mode threads).

Any "actual" programs are loaded and then spawned as a new process.

Only recently-ish added the ability to redirect IO, but still doesn't
support piping IO between programs.
Supports basic shell-scripts, but lacks most more advanced shell
features (non-trivial Bash scripts will not work).


There was a 3rd 3D engine of mine, mostly because my 2nd 3D engine would
have still been too heavyweight to run on my CPU core (tried to write
something Minecraft-like that would run in a similar memory footprint to
Quake and was fast enough to be tolerable on a 50MHz CPU).

Between the engines:
Chunk Size: 16x16x16 in both engines;
Region Size: 32x32x8 in 1st engine; 16x16x16 in 2nd engine; 8x8x8 in 3rd.
Thus, in the 3rd engine, each region was a 128x128x128 meter cube.
Block Storage:
1st engine: 8 bit index or unpacked;
2nd engine: 4/8/12 bit index into table of blocks;
3rd engine: 4/8 bit index into block table, or unpacked block array.
World Size:
1st engine: Planar
2nd engine: 1024km (1048576 meters), world wraps on edge
3rd engine: 64km (65536 meters), world wraps on edge
Rendering:
1st engine: global vertex arrays, filled from each chunk
2nd engine: Per-chunk vertex arrays
3rd engine: Raycast, visible blocks drawn into global vertex arrays.
Chunk Storage:
1st engine: RLEW (same format as used for maps in Wolf3D and ROTT)
2nd engine: LZ77 + AdRiceSTF
3rd engine: RP2 (similar to LZ4)
Graphics storage:
1st engine: JPEG (modified to support Alpha channel)
2nd engine: BMP + BTIC4B (8x8 Color-Cell, AdRice Bitstream)
3rd engine: DDS (DXT1)
VFS File Storage:
1st engine: ZIP
2nd engine: BTPAK (Hierarchical Central Directory, Deflate)
Large files broken up into 1MB fragments;
3rd engine: WAD4 (Hierarchical Central Directory, RP2)
Large files broken into 128K fragments.
Audio:
1st engine: WAV (PCM)
2nd & 3rd engine: WAV (IMA ADPCM)

Both 2nd and 3rd engine used the same block-type numbers and the same
texture atlas.

Both 2nd and 3rd engine had used mostly sprite graphics, for my first 3D
engine, I had used 3D models (and skeletal animation), but this was a
lot of effort.

I then noted that sprite graphics still worked well in Doom, and
attempted to mimic the use of sprites, though generally using 4 angles
rather than 8, as 4 was easier to draw. Also using a trick as seen in
some old RPG's where one could pull off idle animations and walking by
horizontally flipping the sprite on a timer.

Initial goal (before the 2nd engine effort fizzled out) was to try to
build something like Undertale, but this was more effort, and I was
lacking a good system for dialog and managing game-event dependency trees.

My 3rd engine never got much past "try to make something that works on
my ISA and can fit in under 40-60MB of RAM".

One minor difference was for live entity serialization within regions,
where my 2nd engine had mostly embedded data within ASCII strings,
whereas the 3rd engine had used binary-serialized XML blobs (reusing the
XML code from BGBCC, which for better or worse uses XML DOM style
ASTs, though reworked to be a lot more efficient than the original DOM).



Also, there is some amount of specialized image and video codecs, etc.
There is an experimental video player and MOD/S3M player, though at
present these are not generalized enough to be usable as media players
(that would need some level of UI; thus far they load a hard-coded
file and just play it in a loop).

And, some on/off fiddling with things like Neural Nets, etc.


Recently I wrote a tool to import UFO / GLIF fonts and convert them to
a custom font format (mostly because the actual TTF format seemed
needlessly complicated), along with the code to render this style of
font. It is unclear if it will replace the use of bitmap fonts and SDFs.

Where:
  Bitmap font:
    Specialized for specific glyph sizes;
    Looks good at that size;
    Doesn't really scale.
  SDF font:
    Scalable (works best for medium glyphs);
    Relatively cheap in a computational sense;
    But relatively bulky, and eats a lot of memory.
    I stored these mostly as 8bpp BMP images, 4-bit X/Y,
    where each 16x16 glyph page is a 256x256 BMP.
    (A sketch of the sampling step follows this list.)
  Variable / Geometric font:
    Scalable (but works best for large glyphs);
    Attempts to draw small glyphs give poor results ATM;
    currently I need to draw at 4x the final size and then downsample.
    Higher per-pixel cost;
    Less memory needed to hold the font;
    Can be used to generate SDFs or triangles.
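For reference, drawing an SDF glyph mostly reduces to sampling the
stored distance and mapping it through a soft threshold (a sketch; the
0.5 edge midpoint and the smoothing width follow the usual convention,
not necessarily my exact values):

    static double clamp01(double v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

    /* d: sampled SDF value in 0..1, where 0.5 is the glyph edge;
     * w: smoothing half-width, roughly one screen pixel in SDF units. */
    double sdf_coverage(double d, double w)
    {
        double t = clamp01((d - (0.5 - w)) / (2 * w));  /* 0..1 across the edge */
        return t * t * (3 - 2 * t);  /* Hermite smoothing (smoothstep) */
    }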



For my ISA / OS project, the fonts and some other things had been
carried over from my 3D engine projects, along with a lot of the VFS
and memory-management code (it wasn't too much effort to adapt my 3D
engine VFS code to work as an OS VFS).

The main practical difference is that, as an OS VFS, it has a FAT32
driver and similar.



But I suspect I have slowed down in recent years.
Post by Paul Edwards
Regardless, what sort of thing would you consider to be
"accomplished a significant amount"? You're not going to
single-handedly reproduce Windows 11. So if that is the
bar, no-one at all has accomplished much. It's even difficult
to credit Windows itself. Who are you going to credit?
Tim Paterson? Or Bill Gates's father's (or was it his mother's?)
money?
Note that I am not dismissing Bill Gates's technical achievements
with Microsoft BASIC, but that's not Windows 11 by a very
very very long shot.
Dunno...

It just seems like a lot of other people are getting lots of
recognition, seem to be doing well financially, etc.

Meanwhile, I just sort of end up poking at stuff, and implementing
stuff, and it seems like regardless of what I do, no one gives a crap,
or like I am little better off than had I done nothing at all...
Post by Paul Edwards
BFN. Paul.
Paul Edwards
2024-08-28 08:54:20 UTC
Reply
Permalink
Post by BGB
It just seems like a lot of other people are getting lots of
recognition, seem to be doing well financially, etc.
Meanwhile, I just sort of end up poking at stuff, and implementing
stuff, and it seems like regardless of what I do, no one gives a crap,
or like I am little better off than had I done nothing at all...
Did you consider asking anyone at all if they were after
something?
Post by BGB
Where, for my own ISA, I am using BGBCC.
BGBCC is ~ 250 kLOC, and mostly compiles C;
We have struggled and struggled and struggled to try to get
a public domain C90 compiler written in C90 to produce 386
assembler.

There have been a large number of talented people who tried
to do this and fell flat on their face. I never even tried.

The closest we have is SubC.

Is this a market gap you are able and interested in filling?

By either modifying BGBCC (and making public domain if
it isn't already), or using your skills to put SubC over the line?

I can only guarantee that I will recognize your work if you do
this, but that's potentially better than no-one at all. Also, there
is likely to be more than just me who appreciate having a C90
compiler in the public domain.

We currently use the copyrighted GCC 3.2.3 (my modification
of it) in order to get full C90.

There are some other targets of interest besides 386, namely
370, ARM32, x64, 68000. ARM64 would be good too, but
we don't have that at all.

8086 is another target of interest. SubC is already being used
to produce a bootloader for PDOS/386, but Watcom is better
because of SubC's primitive nature.

Thanks. Paul.
Paul Edwards
2024-08-28 08:58:27 UTC
Reply
Permalink
Linas Vepstas was kind enough to assist in debugging
binutils i370 and now z/PDOS-generic has a GCC that
is able to do an optimized compile without crashing.

It is also able to make directories.

This is all EBCDIC.

https://pdos.org/zpg.zip

BFN. Paul.
BGB
2024-08-28 23:03:44 UTC
Reply
Permalink
Post by Paul Edwards
Post by BGB
It just seems like a lot of other people are getting lots of
recognition, seem to be doing well financially, etc.
Meanwhile, I just sort of end up poking at stuff, and implementing
stuff, and it seems like regardless of what I do, no one gives a crap,
or like I am little better off than had I done nothing at all...
Did you consider asking anyone at all if they were after
something?
I mostly just did stuff, occasionally posting about it on Usenet,
occasionally on Twitter (now known as X...).


For my 3D engines, I posted stuff about them on YouTube, to relatively
little feedback; in the time of the first 3D engine, it was mostly
people complaining about "ugly graphics" and "looks like Minecraft"
(which was sorta the thing).

The 2nd engine looked even more like Minecraft, apart from also taking
minor influences from things like Undertale and Homestuck (but,
generally, was closer to Minecraft than Undertale; apart from the use of
billboard sprites for things like NPCs).


The 3rd engine had some particularly awful sprites, mostly because:
the 2nd engine sprites were generally fairly high-res, whereas for the
3rd engine I just quickly drew some stuff and called it good. But the
3rd engine was meant more as a technical proof of concept than an
actual game.

Arguably, I could have tried to "lean into it", maybe do characters as
32x64 pixel art style (with nearest sampling), but didn't bother.

Terrain generation algorithms:
  1st engine: used Perlin noise;
  2nd engine: just used X/Y/Z hashing functions and interpolation;
  3rd engine: basically the same as the 2nd engine.

Hash functions are generally better behaved than Perlin noise, though
some care is needed, as poor hashing may lead to obvious repeating
patterns.
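The hash-and-interpolate approach looks roughly like this (a sketch of
3D value noise; the hash constants are illustrative, not my engine's
actual ones, and the integer truncation assumes non-negative
coordinates):

    /* Hash integer lattice coordinates to a pseudo-random value. */
    static unsigned hash3(int x, int y, int z)
    {
        unsigned h = (unsigned)x * 374761393u +
                     (unsigned)y * 668265263u +
                     (unsigned)z * 1274126177u;
        h = (h ^ (h >> 13)) * 1103515245u;
        return h ^ (h >> 16);
    }

    static double corner(int x, int y, int z)  /* corner value in 0..1 */
    {
        return (hash3(x, y, z) & 0xFFFF) / 65535.0;
    }

    static double lerp(double a, double b, double t)
    {
        return a + (b - a) * t;
    }

    /* Value noise at a fractional 3D position (non-negative coords). */
    double vnoise3(double x, double y, double z)
    {
        int xi = (int)x, yi = (int)y, zi = (int)z;
        double tx = x - xi, ty = y - yi, tz = z - zi;
        double c00 = lerp(corner(xi, yi,   zi  ), corner(xi+1, yi,   zi  ), tx);
        double c10 = lerp(corner(xi, yi+1, zi  ), corner(xi+1, yi+1, zi  ), tx);
        double c01 = lerp(corner(xi, yi,   zi+1), corner(xi+1, yi,   zi+1), tx);
        double c11 = lerp(corner(xi, yi+1, zi+1), corner(xi+1, yi+1, zi+1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }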



Eventually, I mostly gave up on gamedev, as I couldn't seem to come up
with anything that anyone seemed to care about, and my own motivation in
these areas had largely dried up (and most of the time, I ended up being
more motivated to fiddle with technical stuff, than to really do much in
artistic/creative directions; as "artistic creativity" seems to be an
area where I am significantly lacking).
Post by Paul Edwards
Post by BGB
Where, for my own ISA, I am using BGBCC.
BGBCC is ~ 250 kLOC, and mostly compiles C;
We have struggled and struggled and struggled to try to get
a public domain C90 compiler written in C90 to produce 386
assembler.
There have been a large number of talented people who tried
to do this and fell flat on their face. I never even tried.
The closest we have is SubC.
Is this a market gap you are able and interested in filling?
By either modifying BGBCC (and making public domain if
it isn't already), or using your skills to put SubC over the line?
It is MIT licensed, but doesn't currently produce x86 or x86-64 (as I
mostly just used MSVC and GCC for PC based development).

Rather, the backends it currently has are:
  BJX2;
  BJX1 and SH-4 (old);
  BSR1 (short-lived): another custom ISA, inspired by SuperH and MSP430.
Very early versions targeted x86 and x86-64, but this backend was
dropped long ago.
I did briefly attempt a backend for 32-bit ARM, but this was not kept
(it was in a different fork; performance of the generated code was
quite terrible, and it didn't really seem worth the bother at the time).

Much of the current backend was initially derived from an 'FRBC'
backend, which was an attempt to do a Dalvik-style register IR.
The FRBC VM was dropped: while fast, the VM was very bulky in terms of
code footprint (a combinatorial mess). But, at the time, it wasn't a
big step to go from a register IR to an actual CPU ISA, and (for a
sensibly designed ISA) it is possible to emulate things at similar
speeds to what one could get with a similar VM.

My current emulator (for BJX2) is kinda slow, but this is mostly
because it is usually trying to be cycle-accurate; as long as it is
(on the PC side of things) faster than the CPU core on the target
FPGA, this is good enough...



AFAIK, whether declaring something as public domain is legally
recognized depends on jurisdiction. I think this is why CC0 exists.

Personally, I am not all that likely to bother with going after anyone
who breaks the terms of the MIT license, as it is pretty close to "do
whatever", similar for 3 clause BSD.

It is also more C95 style, making significant use of // comments and
"long long" and similar, more or less the C dialect that MSVC supported
until around 2015 or so (when they started adding C99 stuff).



I had at one point wanted to try to make a smaller / lighter weight C
compiler, but this effort mostly fizzled out (when it started to become
obvious that I wasn't going to be able to pull off a usable C compiler
in less LOC than the Doom engine, which was part of the original design
goal).

I had also wanted to go directly from ASTs to ASM, say:
Preproc -> Parser/AST -> ASM -> OBJ -> Binary
Vs:
Preproc -> Parser/AST -> RIL -> 3AC -> Machine Code -> Binary


But, likely, the RIL and 3AC stages are in fact useful.
And, it now seems like a stack-based IR (for intermediate storage) has
more advantages than either an SSA-based IR (like in Clang/LLVM) or
traditional object files (like COFF or ELF). Well, except in terms of
performance and memory overhead (vs COFF or ELF), where in this case
the "linker" needs to do most of the heavy lifting (and needs to have
enough memory to deal with the entire program).
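Lowering such a stack IR to 3AC is mostly a matter of tracking a stack
of virtual registers (a sketch; the opcode names are illustrative, not
RIL's actual encoding):

    #include <stdio.h>

    enum { OP_PUSHI, OP_ADD, OP_MUL, OP_STORE };

    static int vstack[64], vsp, vreg;

    /* Emit one 3AC statement for one stack-IR op. */
    static void lower(int op, int arg)
    {
        int a, b, t;
        switch (op) {
        case OP_PUSHI:
            t = vreg++;
            printf("t%d = %d\n", t, arg);
            vstack[vsp++] = t;
            break;
        case OP_ADD:
        case OP_MUL:
            b = vstack[--vsp];
            a = vstack[--vsp];
            t = vreg++;
            printf("t%d = t%d %c t%d\n", t, a, (op == OP_ADD) ? '+' : '*', b);
            vstack[vsp++] = t;
            break;
        case OP_STORE:
            a = vstack[--vsp];
            printf("var%d = t%d\n", arg, a);
            break;
        }
    }

    int main(void)
    {
        /* var0 = (2 + 3) * 4 */
        lower(OP_PUSHI, 2);
        lower(OP_PUSHI, 3);
        lower(OP_ADD, 0);
        lower(OP_PUSHI, 4);
        lower(OP_MUL, 0);
        lower(OP_STORE, 0);
        return 0;
    }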

A traditional linker need only deal with compiled machine-code, so is
more a task of shuffling memory around and doing relocs; with the
compiler parts only needing to deal with a single translation unit.
Though, the main "highly memory intensive" part of the process tends
to be parsing and dealing with ASTs, which is gone by the time one is
dealing with a stack bytecode; but there is still the memory cost of
translating the bytecode into 3AC to actually compile stuff. This
doesn't ask much by modern PC standards, but is asking a lot when RAM
is measured in MB and one wants to be able to run stuff without an MMU
(it is a downside if the compiler tends to use enough RAM to make
virtual memory essentially mandatory just to run the compiler).


But, RIL's design still leaves some things to be desired. As-is, it
mostly exists as big linear blobs of bytecode, and the compiler needs to
deal with the whole thing at once. This mostly works for a compiler, but
would be undesirable for use by a VM (or for a more resource-constrained
compiler, which can't just load everything all at once).

But, efforts to change this have tended to fizzle out.
Post by Paul Edwards
I can only guarantee that I will recognize your work if you do
this, but that's potentially better than no-one at all. Also, there
is likely to be more than just me who appreciate having a C90
compiler in the public domain.
We currently use the copyrighted GCC 3.2.3 (my modification
of it) in order to get full C90.
There are some other targets of interest besides 386, namely
370, ARM32, x64, 68000. ARM64 would be good too, but
we don't have that at all.
8086 is another target of interest. SubC is already being used
to produce a bootloader for PDOS/386, but Watcom is better
because of SubC's primitive nature.
BGBCC doesn't currently support any 16-bit targets, mostly only 32 and
64 (well, and an experimental mode that used 128-bit pointers, but this
was shelved due to "at best, it was gonna suck").
Post by Paul Edwards
Thanks. Paul.
Paul Edwards
2024-08-29 03:14:13 UTC
Reply
Permalink
Post by BGB
Post by Paul Edwards
Post by BGB
Where, for my own ISA, I am using BGBCC.
BGBCC is ~ 250 kLOC, and mostly compiles C;
We have struggled and struggled and struggled to try to get
a public domain C90 compiler written in C90 to produce 386
assembler.
There have been a large number of talented people who tried
to do this and fell flat on their face. I never even tried.
The closest we have is SubC.
Is this a market gap you are able and interested in filling?
By either modifying BGBCC (and making public domain if
it isn't already), or using your skills to put SubC over the line?
It is MIT licensed, but doesn't currently produce x86 or x86-64 (as I
mostly just used MSVC and GCC for PC based development).
BJX2
BJX1 and SH-4 (old)
BSR1 (short lived)
Another custom ISA, inspired by SuperH and MSP430.
Very early versions targeted x86 and x86-64.
But, this backend was dropped long ago.
Did briefly attempt a backend for 32-bit ARM, but this was not kept.
This was in a different fork.
Performance of the generated code was quite terrible.
Didn't really seem worth the bother at the time.
Much of the current backend was initially derived from an 'FRBC'
backend, which was an attempt to do a Dalvik style register IR.
The FRBC VM was dropped, as while fast, the VM was very bulky in terms
of code footprint (combinatorial mess). But, at the time, wasn't a big
step to go from a register IR to an actual CPU ISA, and (for a sensibly
designed ISA), it is possible to emulate things at similar speeds to
what one could get with a similar VM.
My current emulator (for BJX2) is kinda slow, but this is more because
it is usually trying to be cycle-accurate, and as long as it is possible
for it to be (on the PC side of things) faster than the CPU core on the
target FPGA, this is good enough...
AFAIK, whether declaring something as public domain is legally
recognized depends on jurisdiction. I think this is why CC0 exists.
And if you believe that, then you're welcome to say that this
is public domain, but you may follow the CC0 license instead
if you wish.
Post by BGB
Personally, I am not all that likely to bother with going after anyone
who breaks the terms of the MIT license, as it is pretty close to "do
whatever", similar for 3 clause BSD.
We're not after someone who is allegedly "not going to go
after anyone", we're after some code that is NOT OWNED
by the original author because he/she has RELEASED IT
TO THE PUBLIC DOMAIN.

If the answer is "no", then please say "no".
Post by BGB
It is also more C95 style, making significant use of // comments and
"long long" and similar, more or less the C dialect that MSVC supported
until around 2015 or so (when they started adding C99 stuff).
Actually, so long as it handles C90 syntax, this would be
a step up from what we currently have.
Post by BGB
I had at one point wanted to try to make a smaller / lighter weight C
compiler, but this effort mostly fizzled out (when it started to become
obvious that I wasn't going to be able to pull off a usable C compiler
in less LOC than the Doom engine, which was part of the original design
goal).
We don't necessarily need a lighter weight compiler. That
could be done at a later date. The first thing we need is
something that will take C90 syntax.
Post by BGB
Preproc -> Parser/AST -> ASM -> OBJ -> Binary
Preproc -> Parser/AST -> RIL -> 3AC -> Machine Code -> Binary
But, likely the RIL and 3AC stages are in-fact useful.
And, it now seems like a stack-based IR (for intermediate storage) has
more advantages than either an SSA based IR (like in Clang/LLVM) or
traditional object files (like COFF or ELF). Well, except in terms of
performance and memory overhead (vs COFF or ELF), where in this case the
"linker" needs to do most of the heavy lifting (and needs to have enough
memory to deal with the entire program).
A traditional linker need only deal with compiled machine-code, so is
more a task of shuffling memory around and doing relocs; with the
compiler parts only needing to deal with a single translation unit.
Though, the main "highly memory intensive" part of the process tends to
be parsing and dealing with ASTs, which is gone by the time one is
dealing with a stack bytecode; but, there is still the memory cost of
translating the bytecode into 3AC to actually compile stuff. This
doesn't ask much by modern PC standards, but is asking a lot when RAM is
measured in MB and one wants to be able to run stuff without an MMU (it
is a downside if the compiler tends to use enough RAM as to make virtual
memory essentially mandatory to be able to run the compiler).
But, RIL's design still leaves some things to be desired. As-is, it
mostly exists as big linear blobs of bytecode, and the compiler needs to
deal with the whole thing at once. This mostly works for a compiler, but
would be undesirable for use by a VM (or for a more resource-constrained
compiler, which can't just load everything all at once).
But, efforts to change this have tended to fizzle out.
We don't need the world's best C compiler. At least not
as a first step.
Post by BGB
Post by Paul Edwards
I can only guarantee that I will recognize your work if you do
this, but that's potentially better than no-one at all. Also, there
is likely to be more than just me who appreciate having a C90
compiler in the public domain.
We currently use the copyrighted GCC 3.2.3 (my modification
of it) in order to get full C90.
There are some other targets of interest besides 386, namely
370, ARM32, x64, 68000. ARM64 would be good too, but
we don't have that at all.
8086 is another target of interest. SubC is already being used
to produce a bootloader for PDOS/386, but Watcom is better
because of SubC's primitive nature.
BGBCC doesn't currently support any 16-bit targets, mostly only 32 and
64 (well, and an experimental mode that used 128-bit pointers, but this
was shelved due to "at best, it was gonna suck").
32 and 64 would be a fantastic start, and 99% of the problem.

But if the answer is "no", the answer is "no".

So far the answer is an implied "no".

BFN. Paul.
George Neuner
2024-08-30 10:49:49 UTC
Reply
Permalink
On Thu, 29 Aug 2024 11:14:13 +0800, "Paul Edwards"
Post by Paul Edwards
Post by BGB
AFAIK, whether declaring something as public domain is legally
recognized depends on jurisdiction. I think this is why CC0 exists.
And if you believe that, then you're welcome to say that this
is public domain, but you may follow the CC0 license instead
if you wish.
BGB is correct: not all countries recognize the notion of "public
domain".

In WIPO convention countries it generally is possible to release a
work under a license that explicitly grants all rights, but the result
is not quite the same as placing the work in public domain. Without a
legal notion of "public domain" it is not possible for an author to
give up the rights afforded by the (automatic) Berne convention
copyright.

[Of course every country is a WIPO or Berne signatory ... but most
recognize one or both conventions.]

So if you really want a work to be freely usable anywhere in the
world, you can declare it as "public domain" for those countries that
recognize that notion ... but for everywhere else you have to provide
an alternative license that explicitly grants all rights.
George Neuner
2024-08-30 14:27:09 UTC
Reply
Permalink
On Fri, 30 Aug 2024 06:49:49 -0400, George Neuner
Post by George Neuner
On Thu, 29 Aug 2024 11:14:13 +0800, "Paul Edwards"
Post by Paul Edwards
Post by BGB
AFAIK, whether declaring something as public domain is legally
recognized depends on jurisdiction. I think this is why CC0 exists.
And if you believe that, then you're welcome to say that this
is public domain, but you may follow the CC0 license instead
if you wish.
BGB is correct: not all countries recognize the notion of "public
domain".
In WIPO convention countries it generally is possible to release a
work under a license that explicitly grants all rights, but the result
is not quite the same as placing the work in public domain. Without a
legal notion of "public domain" it is not possible for an author to
give up the rights afforded by the (automatic) Berne convention
copyright.
[Of course every country is a WIPO or Berne signatory ... but most
^ not
Post by George Neuner
recognize one or both conventions.]
So if you really want a work to be freely usable anywhere in the
world, you can declare it as "public domain" for those countries that
recognize that notion ... but for everywhere else you have to provide
an alternative license that explicitly grants all rights.
Sorry, should have been "... not every country ..."
Paul Edwards
2024-08-31 02:21:53 UTC
Reply
Permalink
Post by George Neuner
On Thu, 29 Aug 2024 11:14:13 +0800, "Paul Edwards"
Post by Paul Edwards
Post by BGB
AFAIK, whether declaring something as public domain is legally
recognized depends on jurisdiction. I think this is why CC0 exists.
And if you believe that, then you're welcome to say that this
is public domain, but you may follow the CC0 license instead
if you wish.
BGB is correct: not all countries recognize the notion of "public
domain".
In WIPO convention countries it generally is possible to release a
work under a license that explicitly grants all rights, but the result
is not quite the same as placing the work in public domain. Without a
legal notion of "public domain" it is not possible for an author to
give up the rights afforded by the (automatic) Berne convention
copyright.
[Of course [not] every country is a WIPO or Berne signatory ... but most
recognize one or both conventions.]
So if you really want a work to be freely usable anywhere in the
world, you can declare it as "public domain" for those countries that
recognize that notion ... but for everywhere else you have to provide
an alternative license that explicitly grants all rights.
Isn't that what I just said?

Release it as public domain but say you can use CC0 if you prefer.


BFN. Paul.
George Neuner
2024-08-31 19:30:03 UTC
Reply
Permalink
On Sat, 31 Aug 2024 10:21:53 +0800, "Paul Edwards"
Post by Paul Edwards
Post by George Neuner
On Thu, 29 Aug 2024 11:14:13 +0800, "Paul Edwards"
Post by Paul Edwards
Post by BGB
AFAIK, whether declaring something as public domain is legally
recognized depends on jurisdiction. I think this is why CC0 exists.
And if you believe that, then you're welcome to say that this
is public domain, but you may follow the CC0 license instead
if you wish.
BGB is correct: not all countries recognize the notion of "public
domain".
In WIPO convention countries it generally is possible to release a
work under a license that explicitly grants all rights, but the result
is not quite the same as placing the work in public domain. Without a
legal notion of "public domain" it is not possible for an author to
give up the rights afforded by the (automatic) Berne convention
copyright.
[Of course [not] every country is a WIPO or Berne signatory ... but most
recognize one or both conventions.]
So if you really want a work to be freely usable anywhere in the
world, you can declare it as "public domain" for those countries that
recognize that notion ... but for everywhere else you have to provide
an alternative license that explicitly grants all rights.
Isn't that what I just said?
Release it as public domain but say you can use CC0 if you prefer.
BFN. Paul.
I apologize for any offense. I only meant to add information for
those who don't know the reason for the discussion.
wolfgang kern
2024-08-30 11:29:58 UTC
Reply
Permalink
On 29/08/2024 01:03, BGB wrote:
...
Post by BGB
It just seems like a lot of other people are getting lots of
recognition, seem to be doing well financially, etc.
Meanwhile, I just sort of end up poking at stuff, and implementing
stuff, and it seems like regardless of what I do, no one gives a crap,
or like I am little better off than had I done nothing at all...
Seems we two entered the OS arena from opposite entries;
I started to write my OS on a paying client's demand ... :)

It never was a general-purpose system, but successful solutions sold my
OS w/o any advertising, so I could deliver >200 individually tailored PCs.
Most earned money came from user-desired applications rather than from
the OS and hardware [I stopped all hardware production in 1997].

All my guarantee and maintenance contracts end this year, and because
I couldn't buy main-boards w/o UEFI&GPT, I stopped working on the OS
as well.

Just recently I was asked by long-time clients to give it a try again.
I'm old, tired, and I hate all bloatware BS, but I started reading the
UEFI docs, and I had to learn [hate that like the pest] a bit of C to
convert this huge document into technically readable, RBIL-styled
short pages [in progress].
__
wolfgang
Paul Edwards
2024-09-05 23:46:38 UTC
Reply
Permalink
Mainframes are too expensive to use as a simple PC from 30 years
ago. And unlike an old PC, there are almost no programs ready to
run on Paul's system.
When I've "finished", there should be a complete toolchain
and a microemacs editor, so any C90 source code should work
(including with embedded ANSI for fullscreen text).
Times have changed and users now want more from their machines.
Times are changing again, and I want more from my users.

I'm looking forward to the day when everyone switches on their
machines and they're all bricked because Intel and AMD had a
drop dead date in their CPUs.

I'll be last man standing in the Philippines with my Zhaoxin CPU,
and of course the mainframes will still be working.

Or something like that. There was a recent bricking of machines
worldwide due to an ACCIDENT at Crowdstrike.

Now what happens when there is a DELIBERATE attack from
someone in (or who has hacked) Microsoft?

I do my development on Windows 2000 - the last version that
didn't need authentication - and I can run it under Linux on a
Zhaoxin CPU. The Zhaoxin comes with a BIOS (in Chinese -
good grief) - that allows me to run PDOS/386 too.

Last. Man. Standing.

I have my backup plan. Good luck to everyone else.

I charge $1000/minute for programming services, and $1000/minute
for time on my Zhaoxin.

You got PDOS for free though.

BFN. Paul.
J. Curtis
2024-09-06 19:08:20 UTC
Reply
Permalink
Post by Paul Edwards
I'll be last man standing in the Philippines with my Zhaoxin CPU,
and of course the mainframes will still be working.
Not without power. Without refrigeration city people won't last long.
Paul Edwards
2024-09-07 00:12:10 UTC
Reply
Permalink
Post by J. Curtis
Post by Paul Edwards
I'll be last man standing in the Philippines with my Zhaoxin CPU,
and of course the mainframes will still be working.
Not without power. Without refrigeration city people won't last long.
The power grid is dependent on computers being operational?

Regardless, while I am currently in Ligao City, Albay Province,
where I have a manually pumped water well available for when
the public water is either non-existent or dirty, I normally live
halfway between Ligao and Pio Duran. The house opposite us
slaughters pigs at 2am or something.

The grid electricity goes up and down like a yoyo in both places.

I finally found the right portable solar which can be found by
searching for "solar" at pdos.org and I lived for a couple of
months purely off solar for my computing needs. I was using
a Pinebook Pro rather than the Zhaoxin though. While both
have USB-C to charge, only the Pinebook Pro can definitely
be charged from a powerbank. The Zhaoxin says it is
charging, but reality appears to be different; I'm not sure what the
situation is, and regardless I was planning on getting a different
powerbank.

Actually - I'll take any advice on that. The solar I referenced
has a PD (power delivery) outlet that would potentially give
me a lot more power, but I need a matching outdoor powerbank
to accept it, and I don't know of anything suitable (an Amazon
reference would be good and hopefully they ship to the
Philippines).

I already have Fidonet technology software theoretically operational
on PdAndro on my Android phone that will allow me to replace
the internet. Ditto on PDOS/386 - it was actually tested there.

Admittedly there aren't a lot of people to talk to here, but
hopefully there will be some western refugees turning up in
small boats to access the last operational computer network.

Oh yeah - we have a manually operated well on our normal
property too - also protected by dwendes rather than trolls -
that's a potentially valuable concept that may have been lost
in the West, although in both places we're not actually
drinking that water. I asked if we could boil it but didn't get
a good answer and it hasn't been priority to push the issue.

The irony is that I was happy to be a city slicker in Sydney,
but being in this new environment made me take an interest
in how basic needs were able to be satisfied, and especially
whether we were dependent on Saudi Arabia. I'm obviously
not expecting further deliveries of solar panels, but I will be
armed with some computing power for some time even
without ALECO.

Theoretically.

BFN. Paul.
Paul Edwards
2025-01-22 06:34:09 UTC
Reply
Permalink
Post by Paul Edwards
For 35+ years I have wondered why there was no MSDOS for
the mainframe. I now have an EBCDIC FAT32 file system on
FBA disks on the mainframe and an operating system that can
do basic manipulation, like typing files.
And now I can compile C programs, and gccmvs (3.2.3) can
reproduce itself, byte-exact.

Which is what I normally use to judge integrity.
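The check itself is just a byte-for-byte comparison of two build
outputs. A minimal sketch of such a comparison (not the actual build
procedure):

    #include <stdio.h>

    /* Return 1 if the two files are byte-for-byte identical. */
    int same_bytes(const char *f1, const char *f2)
    {
        FILE *a = fopen(f1, "rb");
        FILE *b = fopen(f2, "rb");
        int ca = 0, cb = 0, same;

        same = (a != NULL && b != NULL);
        while (same && ca != EOF) {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca != cb) same = 0;
        }
        if (a != NULL) fclose(a);
        if (b != NULL) fclose(b);
        return same;
    }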

zpg.zip and herc32.zip from https://pdos.org

BFN. Paul.
J. Curtis
2024-07-19 23:02:30 UTC
Reply
Permalink
Post by Scott Lurndal
MS-DOS is, was, and always will be a toy
Small toys and big toys, are all toys.
Salvador Mirzo
2025-03-08 17:42:47 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
Dan Cross
2025-03-09 02:09:42 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
It depends on your definition of an operating system, I suppose.
I like the definition Mothy Roscoe (ETH) used in his OSDI'21
keynote:

The operating system is that body of software that:
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)

It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really. While it does provide a primitive filesystem,
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff. Further,
the system interface is inextricably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values. System calls are numbered, not named.
Finally, the last one is really the nail in the coffin: MS-DOS
makes absolutely no effort to protect the software principals
from each other, or even themselves; a user program can take
over and just never cede control back to DOS.
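Concretely, a DOS "system call" from C looks something like this (a
minimal sketch; int86, union REGS, and FP_OFF are the
Borland/Microsoft-style dos.h helpers, and a small memory model is
assumed so DS already addresses the string):

    #include <dos.h>

    /* Print a '$'-terminated string via DOS service 09h. */
    void dos_print(const char *msg)
    {
        union REGS r;
        r.h.ah = 0x09;         /* service number in AH: "print string" */
        r.x.dx = FP_OFF(msg);  /* argument passed in DS:DX */
        int86(0x21, &r, &r);   /* synchronous software trap to INT 21h */
    }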

So it's hard to see how DOS really qualifies as an OS, despite
the OS-like abstractions it provides.

- Dan C.
Scott Lurndal
2025-03-09 15:40:23 UTC
Reply
Permalink
Post by Dan Cross
Post by Salvador Mirzo
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
It depends on your definition of an operating system, I suppose.
I like the definition Mothy Roscoe (ETH) used in his OSDI'21
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really. While it does provide a primitive filesystem,
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff.
To partially alleviate these defects, a concept called TSR (Terminate
and Stay Resident) was developed for MS-DOS. However, conflicts
between various TSRs were endemic, and there was no hardware
protection between them, or between them and the application
code.

https://en.wikipedia.org/wiki/Terminate-and-stay-resident_program#Faults
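The pattern itself was tiny: hook an interrupt vector, do a little
resident work, and exit while leaving the code in memory. A sketch in
Turbo C style (getvect/setvect/keep are the Borland-specific helpers;
the resident-size argument here is a placeholder, not a computed
value):

    #include <dos.h>

    static void interrupt (*old_timer)(void);  /* previous INT 1Ch handler */
    static volatile unsigned ticks;

    static void interrupt new_timer(void)
    {
        ticks++;        /* some small piece of resident work */
        old_timer();    /* chain to whoever hooked the vector before us */
    }

    int main(void)
    {
        old_timer = getvect(0x1C);   /* save the old timer-tick vector */
        setvect(0x1C, new_timer);    /* hook it */
        keep(0, 4096 / 16);          /* terminate, stay resident (paragraphs) */
        return 0;                    /* never reached */
    }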
Post by Dan Cross
Further,
the system interface is inextricably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values. System calls are numbered, not named.
Finally, the last one is really the nail in the coffin: MS-DOS
makes absolutely no effort to protect the software principals
from each other, or even themselves; a user program can take
over and just never cede control back to DOS.
So it's hard to see how DOS really qualifies as an OS, despite
the OS-like abstractions it provides.
- Dan C.
Dan Cross
2025-03-10 12:38:00 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Dan Cross
Post by Salvador Mirzo
Post by Scott Lurndal
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
It depends on your definition of an operating system, I suppose.
I like the definition Mothy Roscoe (ETH) used in his OSDI'21
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really. While it does provide a primitive filesystem,
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff.
To partially alleviate these defects, a concept called TSR (Terminate and
Stay Resident) was developed for MS-DOS. However, conflicts
between various TSRs were endemic and there was no hardware
protection between them or between them and the application
code.
https://en.wikipedia.org/wiki/Terminate-and-stay-resident_program#Faults
Oh gee, as a form of mental self-protection, I had blanked that
madness out of my mind. What an awful interface. I do hand it
to the people who were writing DOS programs; without any real form
of protection between programs, let alone between DOS and
programs, it's amazing that anything there worked at all.

Still, the lack of multiprogramming shows that they were targeting
an audience that was pretty unsophisticated. I get that the
hardware was limited, but Unix on the PDP-7 supported multiple
users in 8KiW of 18-bit memory in 1969. A 16-bit system 12
years later could have supported a real OS, albeit without
useful memory protection (CPU rings and the CPL didn't show up
until the 80286). I suppose one could have played games with
segmentation to isolate a small kernel; as I understand it,
that was how the various Unix ports worked. It's weird to me
how the 8086 included support for multiprocessing systems, but
not a mode bit for a kernel.

It's fun to speculate how the world could have been different:
had IBM chosen the 68000 for the PC, I imagine we could
have had much more reasonable software rather quickly. The
ripple effects of the legacy of DOS and the 8086 are
unfortunate, at best. Oh well.

- Dan C.
Scott Lurndal
2025-03-10 14:49:52 UTC
Reply
Permalink
Post by Dan Cross
Post by Scott Lurndal
Post by Dan Cross
Post by Salvador Mirzo
Post by Scott Lurndal
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
It depends on your definition of an operating system, I suppose.
I like the definition Mothy Roscoe (ETH) used in his OSDI'21
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really. While it does provide a primitive filesystem,
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff.
To partially alleviate these defects, a concept called TSR (Terminate and
Stay Resident) was developed for MS-DOS. However, conflicts
between various TSRs were endemic and there was no hardware
protection between them or between them and the application
code.
https://en.wikipedia.org/wiki/Terminate-and-stay-resident_program#Faults
Oh gee, as a form of mental self-protection, I had blanked that
madness out of my mind. What an awful interface. I do hand it
to the people who were writing DOS programs; without any real form
of protection between programs, let alone between DOS and
programs, it's amazing that anything there worked at all.
Still, the lack of multiprogramming shows that they were targeting
an audience that was pretty unsophisticated. I get that the
hardware was limited, but Unix on the PDP-7 supported multiple
users in 8KiW of 18-bit memory in 1969.
I cut my teeth on https://en.wikipedia.org/wiki/TSS/8

Which supported multiple users in 8KW 12-bit memory in 1967.
Dan Cross
2025-03-10 15:00:00 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Dan Cross
Still, the lack of multiprogramming shows that they were targeting
an audience that was pretty unsophisticated. I get that the
hardware was limited, but Unix on the PDP-7 supported multiple
users in 8KiW of 18-bit memory in 1969.
I cut my teeth on https://en.wikipedia.org/wiki/TSS/8
Which supported multiple users in 8KW 12-bit memory in 1967.
My point exactly. :-) I get how PC designers may have wanted
to avoid timesharing, "hey, it's _my_ *personal* computer!", but
there was no excuse for not supporting multiprogramming.

- Dan C.
Salvador Mirzo
2025-03-10 12:21:38 UTC
Reply
Permalink
Post by Dan Cross
Post by Salvador Mirzo
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
It depends on your definition of an operating system, I suppose.
I like the definition Mothy Roscoe (ETH) used in his OSDI'21
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
Thanks for the definition and the reference.
Post by Dan Cross
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really. While it does provide a primitive filesystem,
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff. Further,
the system interface is inextricably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values. System calls are numbered, not named.
Finally, the last one is really the nail in the coffin: MS-DOS
makes absolutely no effort to protect the software principles
from each other, or even themselves; a user program can take
over and just never cede control back to DOS.
So it's hard to see how DOS really qualifies as an OS, despite
the OS-like abstractions it provides.
Thanks for the explanation. I now think that DOS is useful today in
illustrating the definition (in a negative way) as you just did. I
actually plan to understand more about DOS just to be able to personally
give an answer like that.

It also seems very useful precisely to expose a programmer to the entire
machine.
Dan Cross
2025-03-10 13:50:31 UTC
Reply
Permalink
Post by Salvador Mirzo
[snip]
So it's hard to see how DOS really qualifies as an OS, despite
the OS-like abstractions it provides.
Thanks for the explanation.
Certainly; happy to do it.
Post by Salvador Mirzo
I now think that DOS is useful today in
illustrating the definition (in a negative way) as you just did. I
actually plan to understand more about DOS just to be able to personally
give an answer like that.
This, I think, is reasonable.
Post by Salvador Mirzo
It also seems very useful precisely to expose a programmer to the entire
machine.
But this I'd push back on, at least until I understood the goal
a bit better. Is the intent to understand how systems work at a
low level? To understand systems architecture more generally?
Or as a "how did we get here?" exercise in systems evolution?

While understanding some bits of DOS, the BIOS, and the context
around the late-70s/early-80s PC/home computer industry is
essential for the last; for the first two, there are other, better
ways to understand the machine than studying DOS and the BIOS.
And in particular, modern systems, even those built around x86
CPUs, bear little resemblance to the original IBM PC.

Personally, I think one is better off coming at things from a
fresher, more modern perspective, unencumbered by the follies of
the past, as opposed to looking at things through the lens of
1979 Boca Raton. To understand hardware at the most basic
level, one would be better off looking at something like the
Arduino platform, which is almost stupidly simple, but by
design, very approachable. To understand architecture at a more
rational level, look at something like RISC-V; x86, and x86_64,
carries too much baggage from the past that obfuscates
understanding. To understand OS design, or even firmware, there
are better examples out there. I'd look at something like MIT's
materials for their OS course, in particular xv6.

The negative case study aside, or spelunking into the history of
the PC platform, I can't think of a good reason to study DOS.
Historically important, yes; but otherwise an exemplar of how
over-compensating for technical constraints can lead to bad
technology.

- Dan C.
Salvador Mirzo
2025-03-10 17:10:08 UTC
Reply
Permalink
[...]
Post by Dan Cross
Post by Salvador Mirzo
It also seems very useful precisely to expose a programmer to the entire
machine.
But this I'd push back on, at least until I understood the goal
a bit better. Is the intent to understand how systems work at a
low level? To understand systems architecture more generally?
Or as a, "how did we get here?" exercise in systems evolution?
Heads up---this is a long sharing of personal interests, which might be
awfully uninteresting.

I seem to have a certain interest in how things work. I never got into
hardware, though, even though my first computer book was a book called
Hardware---it helped me to put together my computer and made sense of
some BIOS options, but I can't quite remember anything else from it,
after all these years.

Hardware (by Gabriel Torres, a Brazilian author)
https://www.clubedohardware.com.br/livros/disponiveis/hardware-2%C2%AA-edi%C3%A7%C3%A3o-r33/

I was surprised to see the first edition being published in 2022. (I
read the first edition most likely in 1994 or 1995, around those days.)

Although I know the POSIX API superficially well enough to write
network servers in C, say, I also did not get at all into the UNIX
kernel---I always thought of doing so, but never quite managed to. So
it seems to me that my curiosity doesn't get too low-level. The same
thing seems to happen with networking. I've been fascinated by DNS,
SMTP, et cetera, and also IP, but when I read TCP/IP Illustrated, for
example, I don't care too much about, say, Nagle's algorithm. In other
words, there's a certain depth that my curiosity doesn't seem to care
so much about.

Nevertheless, I would certainly enjoy studying anything at all if
there's a certain context to do it in. It's not too clear what the
context is for each subject. I've ignored hardware for most of my life
and focused on understanding how to use a system like a UNIX system.
Now I think I ignored hardware too much. Lately, I even started taking
notes on hardware. (It's incredible how many acronyms you guys use,
each one abstracting some concept or some mechanism.)
Post by Dan Cross
While understanding some bits of DOS, the BIOS, and the context
around the late-70s/early-80s PC/home computer industry is
essential for the last; for the first two, there are other, better
ways to understand the machine than studying DOS and the BIOS.
And in particular, modern systems, even those built around x86
CPUs, bear little resemblance to the original IBM PC.
Personally, I think one is better off coming at things from a
fresher, more modern perspective, unencumbered by the follies of
the past, as opposed to looking at things through the lens of
1979 Boca Raton. To understand hardware at the most basic
level, one would be better off looking at something like the
Arduino platform, which is almost stupidly simple, but by
design, very approachable. To understand architecture at a more
rational level, look at something like RISC-V; x86, and x86_64,
carries too much baggage from the past that obfuscates
understanding. To understand OS design, or even firmware, there
are better examples out there. I'd look at something like MIT's
materials for their OS course, in particular xv6.
The negative case study aside, or spelunking into the history of
the PC platform, I can't think of a good reason to study DOS.
Historically important, yes; but otherwise an exemplar of how
over-compensating for technical constraints can lead to bad
technology.
Nice! I happen to agree. I did think at first that x86 should be the
choice, and even started reading Randall Hyde while I still used
Windows daily---not anymore. But I did come to the conclusion that
it's too complicated and something prettier should take its place.
Recently I was reading about the 6502, but lately I also concluded
that a RISC-V system could be the most educational for me.
Nevertheless, at least so far this is not my main focus. It's like I'm
taking the topic as a university course that's not my main area of
study. I'm interested in it, but I have other duties to care for as
well.

Some time ago, I gave the xv6 book's first chapters a read, and it
turns out they now do it on RISC-V, which reinforces the choice of
architecture. I liked what they were doing.

I signed myself up to be reminded to buy

https://arace.tech/products/milk-v-jupiter-spacemit-m1-k1-octa-core-rva22-rvv1-0-risc-v-soc-2tops-miniitx

when it'd be available in stock again. It did become available, and I
got the mail. But then I asked myself why I did it---I have no use for
a new computer; I have no project. I am not really a hardware person.
But there's no question I like to know what you guys are talking
about, say. It's hard for me to ignore things that are quite relevant
to my interests---whatever they might really be, they're definitely
computer-oriented.

And let me stress that I think history is definitely important to me
because it seems to be a big part of what I consider understanding. I
seem unable to feel that I understand something unless I understand the
historical evolution of how the thing came about. But I am not a
historian for sure; after I get the overall description of the facts on
the time line, I usually move on.

For instance, now that I understand UNIX somewhat, I try to understand
what came before it---say MULTICS, ITS, TOPS-20, TENEX are names that I
believe were systems that started before UNIX. But surely I don't
really want to write programs for MULTICS, say, even though it would be
a lot of fun for me to somehow emulate it and write notes on how it all
works, say. I would never feel I understand a system unless I put my
hands on it. (That's why I made the DOS comment about helping me put my
hands on a computer architecture [that it runs on].)

Here's an example. Being very fond of UNIX, I became interested in Plan
9---at least to see what it's about. At some point I found a book on
Plan 9 that's authored by Francisco Ballesteros. I believe it's never
been published, but it's available as a PDF on his homepage. Oh, here
it is:

Introduction to Operating System Abstractions
Using Plan 9 From Bell Labs
Francisco J. Ballesteros
https://doc.cat-v.org/plan_9/9.intro.pdf

I read the book. I wrote programs. I did some exercises. It gave me a
feeling that I have an introduction to Plan 9. Would I like to go back
to it? I would love to, but then there's a lot of other fun stuff.
Lately, I've been working on an idea in Common Lisp.

I have a lot of fun writing Lisp. My first exposure to Lisp was
Racket, which I now consider a big mistake; I am so much more
compatible with Common Lisp than with Racket. (Macro writing is so
much simpler in Common Lisp than in Racket, for example. And although
I've bought in to some of the Racket elegance, I decided to just
simplify my life in the end.) It took me many years to realize that,
but I finally did. On my first weekend with Common Lisp I noticed I
had never had more fun with a programming language---except perhaps
when I studied C while writing an IRC robot without using the standard
I/O libraries.

If I were to understand enough about RISC-V, say, to the point of being
able to read and make small changes to a certain toy operating system or
something like that, I would be very pleased to, say, give a lecture to
a computer user's group on how things work---what a computer is, how it
works, what the function of an operating system is. Of course I could
do a presentation on the topic without knowing how these things really
work, but that's not me. I like to say that 1 + 1 = 2 and then define
the successor function, define addition, define some of the numerals,
and show what Peano meant by ``1 + 1 = 2''. In fact, one of the chief
things I love about computers is that we can---without having access to
a particle accelerator, if you know what I mean---show what everything
in it is about. It's accessible. (As math is.)
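That Peano story fits in a few lines of a proof assistant; for
instance, in Lean (a sketch, using a local copy of the naturals rather
than the built-in ones):

    inductive N where
      | zero : N
      | succ : N -> N

    def add : N -> N -> N
      | m, N.zero   => m
      | m, N.succ n => N.succ (add m n)

    -- 1 + 1 = 2, by definitional unfolding of add
    example : add (N.succ N.zero) (N.succ N.zero)
            = N.succ (N.succ N.zero) := rfl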

But I wouldn't major in electrical engineering (at first), you see? I
wouldn't get as low as electricity---it's a complete change of subject.
It turns out I majored in mathematics because back then I thought it
was smarter to follow the ideas and not quite the hardware. The
hardware was too fast-changing, so I thought it was a loss to study it
seriously. Now I think that was a mistake. (It doesn't *significantly*
change.) I still think it was right to spend the years on math, but the
mistake was in ignoring hardware. I ended up falling in love with
mathematics and went as deep into it as I could, but because computers
had always been my passion, I ended up coming back to them later, and I
can't see much use for all the math I studied, even though it still
seems pretty useful indirectly---so no great regrets. (Math has been my
best training in reading. After I graduated, I thought I could read
anything---clearly an exaggeration, but I'm sure you get what I mean.)
I feel pretty happy and lucky that I enjoy so many things. It doesn't
help me to become an expert in anything, but I am not trying to win any
prize anyway. I'm not competitive; rather, I'm cooperative.
Dan Cross
2025-03-10 18:29:13 UTC
Reply
Permalink
Post by Salvador Mirzo
[...]
Post by Dan Cross
Post by Salvador Mirzo
It also seems very useful precisely to expose a programmer to the entire
machine.
But this I'd push back on, at least until I understood the goal
a bit better. Is the intent to understand how systems work at a
low level? To understand systems architecture more generally?
Or as a, "how did we get here?" exercise in systems evolution?
Heads up---this is a long sharing of personal interests, which might be
awfully uninteresting.
[snip]
I don't have much to say in response to your message other than
that I think it's perfectly fine to dabble and see where your
interests take you; have at it! You are under no obligation to
become an expert in all areas. Enjoy yourself. :-)

Plan 9 is a fun system to work with and wrap your head around.
Nemo's notes are excellent (he wrote those for his students).

- Dan C.
Paul Edwards
2025-03-10 18:38:35 UTC
Reply
Permalink
Post by Salvador Mirzo
If I were to understand enough about RISC-V, say, to the point of being
able to read and make small changes to a certain toy operating system or
something like that, I would be very pleased to, say, give a lecture to
a computer user's group on how things work---what a computer is, how it
works, what the function of an operating system is.
That looks like a semi-concrete/coherent goal to me.
Post by Salvador Mirzo
I'm not competitive; rather, I'm cooperative.
This too.

First of all - you're going to hit time constraints. So while a
fantastic multiprocessing OS and GUI would be nice, it's
not something that Tim Paterson came up with in the first
version of QDOS, even if he had had fewer resource constraints.

And you have already answered why you don't just stick
Linux on RISC-V and call it a day - it's not what others
call a "toy" OS.

Note that MSDOS was used for a very long time in business
and I never heard anyone call it a toy at the time. And use
what instead? The Amiga? Macintosh?

You're after something simple.

And ask yourself what the simplest OS that could theoretically
be written is. And you're basically back to MSDOS or
something similar. And that's exactly what I did - write something
similar to MSDOS. I challenge you to find something in PDOS
that is "excess baggage that could be removed to simplify things
for new starters".

However, I did indeed simplify PDOS by way of redesign - by
creating PDOS-generic.

So I go back to a previous question. Do you accept the concept
of a BIOS - or perhaps a BIOS-like layer - or UEFI?

If so, if you provide a BIOS for RISC-V (or use UEFI), then
PDOS-generic (the OS) will run under that already. You just
need to compile it; other than that, you will need to implement
setjmp/longjmp (assembler) too.

And one thing that isn't in PDOS-generic that I intend to add
is to replace functions like syscall PosOpenFile with a syscall
fopen. But even that is internal to the OS, because apps don't
need syscalls at all. Inspired by the Amiga, you can simply
call into callback functions. DLLs on Windows are similar.
Those things may or may not devolve into syscalls - it is hidden
from the user (which I consider to be a good thing).
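In outline, the idea is that the loader hands the application a table
of function pointers at entry, and the app calls through that instead
of trapping. A sketch with hypothetical names, not PDOS-generic's
actual interface:

    #include <stddef.h>

    /* Table of OS services, filled in by the loader (names hypothetical). */
    typedef struct OsCallbacks {
        void  *(*Xfopen)(const char *name, const char *mode);
        int    (*Xfclose)(void *stream);
        size_t (*Xfwrite)(const void *buf, size_t sz, size_t n, void *stream);
    } OsCallbacks;

    /* Application entry point: no traps, just indirect calls. */
    int appstart(OsCallbacks *os)
    {
        void *f = os->Xfopen("HELLO.TXT", "w");
        if (f != NULL) {
            os->Xfwrite("hello\n", 1, 6, f);
            os->Xfclose(f);
        }
        return 0;
    }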

And you don't specifically need to buy a RISC-V machine.
You can write an emulator. The mainframe emulator I wrote
is 3000 lines long - all that was needed to get gcc 3.2.3 to
recompile itself.
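The core of such an emulator is just a fetch-decode-execute loop; a
sketch for a made-up byte-coded CPU (the opcodes here are invented for
illustration):

    #include <stdio.h>

    #define MEMSIZE 65536

    static unsigned char mem[MEMSIZE];
    static unsigned pc = 0, acc = 0;
    static int running = 1;

    static void step(void)
    {
        unsigned char op = mem[pc++];          /* fetch */
        switch (op) {                          /* decode + execute */
        case 0x01: acc = mem[pc++]; break;     /* load immediate */
        case 0x02: acc += mem[pc++]; break;    /* add immediate */
        case 0x03: putchar((int)acc); break;   /* write ACC to console */
        default:   running = 0; break;         /* halt / bad opcode */
        }
    }

    int main(void)
    {
        /* tiny test program: load 'H', print it, then halt on opcode 0 */
        mem[0] = 0x01; mem[1] = 'H';
        mem[2] = 0x03;
        mem[3] = 0x00;
        while (running)
            step();
        return 0;
    }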

Alternatively you could focus on the BIOS (or pseudo-bios)
portion of RISC-V support. Or use Linux as a glorified
pseudo-bios. That's how PdAndro for Android phones works.

Basically, for simplicity, separate out a BIOS (or similar) from
the (simple) OS proper, and more options open up.

BFN. Paul.
Scott Lurndal
2025-03-10 19:07:14 UTC
Reply
Permalink
Post by Paul Edwards
Note that MSDOS was used for a very long time in business
Not really. Most business software ran on mainframes
and minicomputers in the MSDOS era. And once NT arrived, DOS was done.

There was very little _real_ business software written for
MSDOS.

Spreadsheets and word processing are a very small portion of
business computing. Material planning, resource planning,
human resources, enterprise payroll applications, etc
were not really available for MSDOS at any scale.
Post by Paul Edwards
and I never heard anyone call it a toy at the time.
Then you weren't listening.
Dan Cross
2025-03-10 19:09:21 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Paul Edwards
Note that MSDOS was used for a very long time in business
Not really. Most business software ran on mainframes
and minicomputers in the MSDOS era. And once NT arrived, DOS was done.
There was very little _real_ business software written for
MSDOS.
Spreadsheets and word processing are a very small portion of
business computing. Material planning, resource planning,
human resources, enterprise payroll applications, etc
were not really available for MSDOS at any scale.
Hear, hear.
Post by Scott Lurndal
Post by Paul Edwards
and I never heard anyone call it a toy at the time.
Then you weren't listening.
He's still not.

- Dan C.
John Ames
2025-03-10 20:00:06 UTC
Reply
Permalink
On Mon, 10 Mar 2025 19:07:14 GMT
Post by Scott Lurndal
Not really. Most business software ran on mainframes
and minicomputers in the MSDOS era. And once NT arrived, DOS was done.
There was very little _real_ business software written for
MSDOS.
Spreadsheets and word processing are a very small portion of
business computing. Material planning, resource planning,
human resources, enterprise payroll applications, etc
were not really available for MSDOS at any scale.
This is a nicely self-illustrative post, in that you start out with an
incendiary but extremely blinkered (if not flatly untrue) statement and
then spend the remainder of it relocating the goalposts to align with
where you kicked the ball. *Plenty* of business software ran on MS-DOS,
same as other single-tasking, unprotected microcomputer OSes that card-
carrying partisans of larger systems like to count as "not a *real* OS"
Because Reasons, and it's only by redefining "real business" to mean
"large (multi)national corporations" and "business software" to mean
"end-to-end computerized management of the entire business enterprise"
that you can even move your argument out of the realm of "demonstrably
false" and into "arguable, sort of, if you can get people to accept
your definitions exclusively."

C-, see me after class.
Dan Cross
2025-03-10 20:20:02 UTC
Reply
Permalink
Post by John Ames
On Mon, 10 Mar 2025 19:07:14 GMT
Post by Scott Lurndal
Not really. Most business software ran on mainframes
and minicomputers in the MSDOS era. And once NT arrived, DOS was done.
There was very little _real_ business software written for
MSDOS.
Spreadsheets and word processing are a very small portion of
business computing. Material planning, resource planning,
human resources, enterprise payroll applications, etc
were not really available for MSDOS at any scale.
This is a nicely self-illustrative post, in that you start out with an
incendiary but extremely blinkered (if not flatly untrue) statement and
then spend the remainder of it relocating the goalposts to align with
where you kicked the ball. *Plenty* of business software ran on MS-DOS,
same as other single-tasking, unprotected microcomputer OSes that card-
carrying partisans of larger systems like to count as "not a *real* OS"
Because Reasons, and it's only by redefining "real business" to mean
"large (multi)national corporations" and "business software" to mean
"end-to-end computerized management of the entire business enterprise"
that you can even move your argument out of the realm of "demonstrably
false" and into "arguable, sort of, if you can get people to accept
your definitions exclusively."
C-, see me after class.
I don't know much about "business software", so I can't really
comment on that, except to say that I imagine a lot of small and
perhaps even medium-sized businesses got a lot out of PCs and
DOS programs or whatnot. Maybe individuals or small teams in
bigger organizations, too. I can also imagine that they would
hit a scaling limit pretty quickly, at which point they would
want to step up to something more capable.

But I do know a lot about operating systems, and the objections
to categorizing things like MS-DOS as "a *real* OS" are not mere
handwaving that boils down to "Because Reasons"; there are
actual definitions in use across the field one can look to, and
MS-DOS et al simply do not meet them. It's great that control
software in the early PC era let people do useful work with
those machines; that doesn't mean that software was good or fit
reasonable definitions of what an "Operating System" is.

- Dan C.
Paul Edwards
2025-03-10 20:59:13 UTC
Reply
Permalink
Post by Dan Cross
Post by John Ames
On Mon, 10 Mar 2025 19:07:14 GMT
Post by Scott Lurndal
Not really. Most business software ran on mainframes
and minicomputers in the MSDOS era. And once NT arrived, DOS was done.
There was very little _real_ business software written for
MSDOS.
Spreadsheets and word processing are a very small portion of
business computing. Material planning, resource planning,
human resources, enterprise payroll applications, etc
were not really available for MSDOS at any scale.
This is a nicely self-illustrative post, in that you start out with an
incendiary but extremely blinkered (if not flatly untrue) statement and
then spend the remainder of it relocating the goalposts to align with
where you kicked the ball. *Plenty* of business software ran on MS-DOS,
same as other single-tasking, unprotected microcomputer OSes that card-
carrying partisans of larger systems like to count as "not a *real* OS"
Because Reasons, and it's only by redefining "real business" to mean
"large (multi)national corporations" and "business software" to mean
"end-to-end computerized management of the entire business enterprise"
that you can even move your argument out of the realm of "demonstrably
false" and into "arguable, sort of, if you can get people to accept
your definitions exclusively."
C-, see me after class.
I don't know much about "business software", so I can't really
comment on that, except to say that I imagine a lot of small and
perhaps even medium-sized businesses got a lot out of PCs and
DOS programs or whatnot. Maybe individuals or small teams in
bigger organizations, too. I can also imagine that they would
hit a scaling limit pretty quickly, at which point they would
want to step up to something more capable.
But I do know a lot about operating systems, and the objections
to categorizing things like MS-DOS as "a *real* OS" are not mere
handwaving that boils down to "Because Reasons"; there are
actual definitions in use across the field one can look to, and
MS-DOS et al simply do not meet them. It's great that control
software in the early PC era let people do useful work with
those machines; that doesn't mean that software was good or fit
reasonable definitions of what an "Operating System" is.
- Dan C.
It is you that doesn't have a "reasonable definition" of "operating system".

At the time of MSDOS, I never saw one columnist or any
individual who ever said that MSDOS was a misnomer,
since it isn't technically an OS.

Nor was Tim Paterson called out for putting "OS" in "QDOS"
because "it ain't an OS".

But either way, it isn't a very useful semantic debate.

BFN. Paul.
John Ames
2025-03-10 22:11:14 UTC
Reply
Permalink
On Mon, 10 Mar 2025 20:20:02 -0000 (UTC)
Post by Dan Cross
But I do know a lot about operating systems, and the objections
to categorizing things like MS-DOS as "a *real* OS" are not mere
handwaving that boils down to "Because Reasons"; there are
actual definitions in use across the field one can look to, and
MS-DOS et al simply do not meet them. It's great that control
software in the early PC era let people do useful work with
those machines; that doesn't mean that software was good or fit
reasonable definitions of what an "Operating System" is.
So let's dig into that a bit. Merriam-Webster defines an "operating
system" as:

  software that controls the operation of a computer and directs the
  processing of programs (as by assigning storage space in memory and
  controlling input and output functions)
Wikipedia, being edited by Wikipedians, is a little more weird and
obtuse:

  Software that is designed for controlling the allocation and the use
  of various hardware resources to tasks and remote terminals.
MS-DOS very definitely takes control of the computer - it does not
*hold onto it* very tightly, but there's no particular reason it should
have to. In a single-tasking, single-user environment any operation the
user invokes can be Considered Legitimate, and this loose approach to
protection makes it possible for third-party or user-written software to
hook into interrupts/API calls and extend the system easily (although
DOS users generally made less use of this than classic MacOS users did.)

It also manages memory allocation (enabling applications, drivers, and
TSRs to co-exist safely, provided they behave themselves) and handles
input and output to/from screen/keyboard, disk, and parallel and serial
ports. Again, it does not *prevent* programs from taking control of
these things themselves, but that's a trade-off - yes, you lose some
security,* assuming you even care about that, but you gain flexibility.
(Supporting new hardware is generally as simple as writing a program to
frob the appropriate ports, unless the OS needs to be able to treat it
as a standard storage/communications channel. Even then, hooking into
the necessary interrupts is fairly straightforward.)

* (And it's worth noting that, in the original PC architecture pre-286,
it's functionally impossible to do protection anyway. There's not a
damn thing *any* OS running on an 8086 can do to prevent an errant
program from scribbling over the OS/another process or frobbing an
I/O port something else is trying to manage.)
Dan Cross
2025-03-10 23:11:49 UTC
Reply
Permalink
Post by John Ames
On Mon, 10 Mar 2025 20:20:02 -0000 (UTC)
Post by Dan Cross
But I do know a lot about operating systems, and the objections
to categorizing things like MS-DOS as "a *real* OS" are not mere
handwaving that boils down to "Because Reasons"; there are
actual definitions in use across the field one can look to, and
MS-DOS et al simply do not meet them. It's great that control
software in the early PC era let people do useful work with
those machines; that doesn't mean that software was good or fit
reasonable definitions of what an "Operating System" is.
So let's dig into that a bit. Merriam-Webster defines an "operating
system" as:

  software that controls the operation of a computer and directs the
  processing of programs (as by assigning storage space in memory and
  controlling input and output functions)
Messrs. Merriam and Webster were not, to my knowledge, computer
scientists.
Post by John Ames
Wikipedia, being edited by Wikipedians, is a little more weird and
obtuse:

  Software that is designed for controlling the allocation and the use
  of various hardware resources to tasks and remote terminals.
Indeed obtuse. What about a system that does not use "remote
terminals"?

I already posted the definition I like to use, which came to me
via Mothy Roscoe, at ETH, but I'll post it again:

He defines the operating system as,
That body of software that,
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)

I further posted how I feel that MS-DOS does not meet these
criteria, and why.

So arguing about a definition from a mass-market English language
dictionary and/or Wikipedia is not, frankly, very compelling in
comparison.
Post by John Ames
MS-DOS very definitely takes control of the computer - it does not
*hold onto it* very tightly, but there's no particular reason it should
have to.
Given the above definition, there's a very good reason: how does
it otherwise protect _itself_, as a software principal, from an
errant, let alone malicious, program?

Also, a boot loader takes control of the computer, albeit
temporarily: is that also an operating system? Merely taking
control of the computer is insufficient. The OpenBoot PROM
monitor on a SPARCstation could be entered via a keyboard
shortcut, suspending Unix and the SPARC processor; was that an
"operating system"? I don't think anyone working on it thought
of it that way.
Post by John Ames
In a single-tasking, single-user environment any operation the
user invokes can be Considered Legitimate, and this loose approach to
protection makes it possible for third-party or user-written software to
hook into interrupts/API calls and extend the system easily (although
DOS users generally made less use of this than classic MacOS users did.)
While an operation a *user* invokes, such as running a command
or invoking a program, can be "Considered Legitimate" in that
context, it does not follow that every operation performed by a
program is legitimate. Programs have bugs; those bugs can
corrupt the state of the system; at best this may simply crash
the system. At worst it can lead to data corruption or loss.

DOS can't protect against that. So it fails criterion (3) of
Roscoe's definition.
Post by John Ames
It also manages memory allocation (enabling applications, drivers, and
TSRs to co-exist safely, provided they behave themselves) and handles
Actually, it did not allow TSRs to "co-exist" safely, because
the set of things required to do so goes far beyond mere memory
allocation. Since you brought up Wikipedia, the article on TSRs
that Scott linked to earlier went into this in detail, and is
well worth reading.
Post by John Ames
input and output to/from screen/keyboard, disk, and parallel and serial
ports. Again, it does not *prevent* programs from taking control of
these things themselves, but that's a trade-off - yes, you lose some
security,* assuming you even care about that, but you gain flexibility.
(Supporting new hardware is generally as simple as writing a program to
frob the appropriate ports, unless the OS needs to be able to treat it
as a standard storage/communications channel. Even then, hooking into
the necessary interrupts is fairly straightforward.)
* (And it's worth noting that, in the original PC architecture pre-286,
it's functionally impossible to do protection anyway. There's not a
damn thing *any* OS running on an 8086 can do to prevent an errant
program from scribbling over the OS/another process or frobbing an
I/O port something else is trying to manage.)
See above.

- Dan C.
Paul Edwards
2025-03-10 23:51:38 UTC
Reply
Permalink
Post by Dan Cross
Post by John Ames
On Mon, 10 Mar 2025 20:20:02 -0000 (UTC)
Post by Dan Cross
But I do know a lot about operating systems, and the objections
to categorizing things like MS-DOS as "a *real* OS" are not mere
handwaving that boils down to "Because Reasons"; there are
actual definitions in use across the field one can look to, and
MS-DOS et al simply do not meet them. It's great that control
software in the early PC era let people do useful work with
those machines; that doesn't mean that software was good or fit
reasonable definitions of what an "Operating System" is.
So let's dig into that a bit. Merriam-Webster defines an "operating
system" as:

  software that controls the operation of a computer and directs the
  processing of programs (as by assigning storage space in memory and
  controlling input and output functions)
Messrs. Merriam and Webster were not, to my knowledge, computer
scientists.
I am. And I assume Tim Paterson and Bill Gates are too.

And I'm telling you that Merriam and Webster have the
correct definition.

Mothy - whoever he is - don't bother telling me - I don't
give a shit who he is - doesn't get to unilaterally dictate
the meaning of the term.
Post by Dan Cross
Post by John Ames
Wikipedia, being edited by Wikipedians, is a little more weird and
obtuse:

  Software that is designed for controlling the allocation and the use
  of various hardware resources to tasks and remote terminals.
Indeed obtuse. What about a system that does not use "remote
terminals"?
I already posted the definition I like to use, which came to me
He defines the operating system as,
That body of software that,
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
I further posted how I feel that MS-DOS does not meet these
criteria, and why.
So arguing about a definition from a mass-market English language
dictionary and/or Wikipedia is not, frankly, very compelling in
comparison.
Arguing from Mothy, the alleged computer-scientist god who
dictates a very common English term - very, very common -
is not compelling at all.
Post by Dan Cross
Post by John Ames
MS-DOS very definitely takes control of the computer - it does not
*hold onto it* very tightly, but there's no particular reason it should
have to.
Given the above definition, there's a very good reason: how does
it otherwise protect _itself_, as a software principal, from an
errant, let alone malicious, program?
It doesn't.

Do you think there is one single person in this OPERATING SYSTEM
forum who doesn't know that MSDOS doesn't have protected memory?

Drop a name.

You don't have one.

We don't need a peanut like you to tell us again and again
that you have an operating system that has memory
protection while some of us dweebs don't.

Yeah - we get it. You're Dan the Man.

In fact I'm thinking of writing to Merriam-Webster to put
your name under the definition of "genius".
Post by Dan Cross
Also, a boot loader takes control of the computer, albeit
temporarily: is that also an operating system?
That's a specious argument. No-one here is claiming that
that is the sole definition of an operating system.

We all know already. We wouldn't be in this obscure
group if we didn't.

What we don't agree with is that Dan the Man gets to
define the English language, or who can post in this
group etc etc.
Post by Dan Cross
DOS can't protect against that. So it fails criterion (3) of
Roscoe's definition.
And you fail the first definition of "not being a moron".

The first definition being "not saying moronic things".

BFN. Paul.
John Ames
2025-03-11 15:37:45 UTC
Reply
Permalink
On Mon, 10 Mar 2025 23:11:49 -0000 (UTC)
Post by Dan Cross
I already posted the definition I like to use, which came to me
He defines the operating system as,
That body of software that,
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
You surely did; however, that does not mean that anybody else is
obligated to accept it. Specifically, let me ask:

1. Why must it multiplex anything, in a single-user system? Multi-
tasking is certainly a nicety, but why should it be considered
a necessary criterion for "real" status?
2. What is the minimum level of hardware abstraction, and why? MS-DOS
does in fact abstract the details of, e.g., filesystem access, pipes,
and to a lesser extent serial/parallel communications. You seem to be
fixated on the fact that its ABI uses x86 interrupts rather than an
alternative method; why is this important?
3. Again, in an 8086 environment this is *literally impossible.* There
is no operating system for the IBM PC or XT that *can* implement any
kind of protection. Additionally, in a single-user system, why is
this a requirement rather than a nicety?
Post by Dan Cross
Post by John Ames
MS-DOS very definitely takes control of the computer - it does not
*hold onto it* very tightly, but there's no particular reason it
should have to.
Given the above definition, there's a very good reason: how does
it otherwise protect _itself_, as a software principal, from an
errant, let alone malicious, program?
It doesn't, because it cannot. The 8086 offers absolutely no facility
for protection, nor does the PC hardware implement any kind of bolt-on
mechanism for this. That did not even become *possible* until the
introduction of the PC/AT in 1984, three years after DOS was released,
and that was limited by the infamously "brain-damaged" protected mode of
the 286.
Post by Dan Cross
Also, a boot loader takes control of the computer, albeit
temporarily: is that also an operating system? Merely taking
control of the computer is insufficient. The OpenBoot PROM
monitor on a SPARCstation could be entered via a keyboard
shortcut, suspending Unix and the SPARC processor; was that an
"operating system"?
In the strict sense, I don't see why not. A primitive one, granted, but
if ROM BASIC counts as an operating system, OpenBoot certainly would.
Dan Cross
2025-03-11 17:28:44 UTC
Reply
Permalink
Post by John Ames
On Mon, 10 Mar 2025 23:11:49 -0000 (UTC)
Post by Dan Cross
I already posted the definition I like to use, which came to me
He defines the operating system as,
That body of software that,
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
You surely did; however, that does not mean that anybody else is
obligated to accept it.
Sure. That's why I prefaced this entire subthread by saying,
"it depends on your definition." I gave the definition that I
like, and explained how DOS does not meet those criteria. Is
that unfair? Perhaps, but the question was, "why do you say
this?" and that's the answer of why I say it.
Post by John Ames
1. Why must it multiplex anything, in a single-user system? Multi-
tasking is certainly a nicety, but why should it be considered
a necessary criterion for "real" status?
Note, I said multiplexing, not multitasking. CPU is just one
hardware resource that can be multiplexed, often via
multiprogramming but sometimes via spatial allocation (e.g., in
a multiprocessor system).

But others include storage devices, memory, peripherals like
timers and other IO devices such as serial/parallel ports,
maybe a real-time clock, and so on. You like having a file
abstraction? Well, that's something that the OS often provides
for you, and is _a_ way that e.g. a storage device can be
logically multiplexed between multiple programs, even if they
do not run concurrently. Similarly with a memory allocator.

Getting back to the specific example of MS-DOS, it does provide
both, but does not provide a way to e.g., multiplex IO and
computation temporally; so performing a disk operation will
just block the program until the operation completes. Don't
want that? Go touch the hardware yourself. The control program
doesn't really help me here.
Post by John Ames
2. What is the minimum level of hardware abstraction, and why? MS-DOS
does in fact abstract the details of, e.g., filesystem access, pipes,
and to a lesser extent serial/parallel communications. You seem to be
fixated on the fact that its ABI uses x86 interrupts rather than an
alternative method; why is this important?
I would hardly say I'm "fixated" on it. It's simply a fact.

Generally, we draw a distinction between an API and ABI; the
former is often somewhat abstract, while the latter is the
concrete implementation of that abstraction.

In a lot of ways, the ABI is irrelevant for workaday
programming: you leave it up to a tool, or a system library, or
something like that, but you rarely have to think about it. The
issue with the DOS API is that it is defined _solely_ in terms
of the hardware interface, and moreover it is highly specific to
that hardware.

Consider the memory allocation API, for example. According to
"Advanced MS-DOS Programming, 2nd Ed", by Ray Duncan, if a user
program wants to use expanded memory, the program must do work
to a) first determine whether expanded memory is even available,
and b) use a different, special, interrupt to delegate to the
"expanded memory manager" to actually allocate it.

Now, one might argue that this doesn't matter that much. "Who
cares? What's the difference between saying that there's a
thing called `malloc` and poking a hex constant into %ah and
then doing `int 0x67`?" And that's a valid question; one could
imagine an interface that has both `malloc` and `expandedmalloc`
or something like that to have an API divorced from the specific
ABI, but those are _leaky abstractions_: they exist solely to
represent hardware artifacts.

So while you could "name" those things and not just number them,
the things themselves are still very tightly coupled to the
hardware. Those aren't great abstractions.
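
For concreteness, the dance Duncan describes looks roughly like
this in C (a sketch only, assuming a 16-bit DOS compiler with
Borland-style <dos.h> - getvect(), MK_FP(), int86(); the function
names are mine, not any real API's):

    #include <dos.h>

    /* (a) Is an Expanded Memory Manager even installed?  The
       convention is to look for the device name "EMMXXXX0" at
       offset 10 of the segment the INT 67h vector points into. */
    static int ems_present(void)
    {
        const char *id = "EMMXXXX0";
        const char far *sig =
            (const char far *)MK_FP(FP_SEG(getvect(0x67)), 0x000A);
        int i;
        for (i = 0; i < 8; i++)
            if (sig[i] != id[i]) return 0;
        return 1;
    }

    /* (b) Allocate n 16KB pages through the EMM itself
       (INT 67h, AH=43h); returns an EMS handle, or -1 if the
       EMM reports an error in AH. */
    static int ems_alloc_pages(unsigned int n)
    {
        union REGS r;
        r.h.ah = 0x43;
        r.x.bx = n;
        int86(0x67, &r, &r);
        return r.h.ah ? -1 : (int)r.x.dx;
    }

Compare that pair of steps with a plain malloc() call and the
leakiness is obvious.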
Post by John Ames
3. Again, in an 8086 environment this is *literally impossible.* There
is no operating system for the IBM PC or XT that *can* implement any
kind of protection. Additionally, in a single-user system, why is
this a requirement rather than a nicety?
Yup. A 6802 microcontroller with 128 bytes of RAM doesn't have
an operating system, either. That doesn't mean it isn't useful.

It's interesting that there was a port of Unix to the XT that
was, of course, subject to the same limitations. Sometimes you
are just constrained, so you make do with what you have. But
Unix at least used the segmentation facilities in the processor
to _attempt_ to shield the kernel from errant user processes;
DOS made no such attempt.
Post by John Ames
Post by Dan Cross
Post by John Ames
MS-DOS very definitely takes control of the computer - it does not
*hold onto it* very tightly, but there's no particular reason it
should have to.
Given the above definition, there's a very good reason: how does
it otherwise protect _itself_, as a software principal, from an
errant, let alone malicious, program?
It doesn't, because it cannot. The 8086 offers absolutely no facility
for protection,
Not true. It supported segmentation. It's harder to corrupt
RAM if it's not in a segment that's currently addressable.
Post by John Ames
nor does the PC hardware implement any kind of bolt-on
mechanism for this. That did not even become *possible* until the
introduction of the PC/AT in 1984, three years after DOS was released,
and that was limited by the infamously "brain-damaged" protected mode of
the 286.
And when the 286 came out, MS-DOS didn't grow to use it, even
though it was there. Nor did they try to incorporate larger
segments or virtual memory when the 386 came out. But the 386
was designed for the Unix market, not the PC, so maybe MSFT gets
a pass on that one.
Post by John Ames
Post by Dan Cross
Also, a boot loader takes control of the computer, albeit
temporarily: is that also an operating system? Merely taking
control of the computer is insufficient. The OpenBoot PROM
monitor on a SPARCstation could be entered via a keyboard
shortcut, suspending Unix and the SPARC processor; was that an
"operating system"?
In the strict sense, I don't see why not. A primitive one, granted, but
if ROM BASIC counts as an operating system, OpenBoot certainly would.
I wouldn't count ROM BASIC as an operating system, sorry.
Similarly, the people I know who worked on OpenBoot did not
consider it a real operating system in the way they thought of
SunOS.

Now my question to you: why do you care what specific label
people apply to MS-DOS?

- Dan C.
Paul Edwards
2025-03-11 18:40:03 UTC
Reply
Permalink
Post by Dan Cross
Post by John Ames
On Mon, 10 Mar 2025 23:11:49 -0000 (UTC)
Post by Dan Cross
I already posted the definition I like to use, which came to me
He defines the operating system as,
That body of software that,
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
You surely did; however, that does not mean that anybody else is
obligated to accept it.
Sure. That's why I prefaced this entire subthread by saying,
"it depends on your definition." I gave the definition that I
like, and explained how DOS does not meet those criteria. Is
that unfair?
Yes, it is unfair because you are using your own definition
of a term from the English language. You can call a dog a
cat too. It's not common usage. If you want to talk in a
foreign language, so be it. In fact, posting in Swahili would
literally be better than redefining the English language just
to confuse people or more likely - try to pull the ladder up
from under you.
Post by Dan Cross
In a lot of ways, the ABI is irrelevant for workaday
programming: you leave it up to a tool, or a system library, or
something like that, but you rarely have to think about it. The
issue with the DOS API is that it is defined _solely_ in terms
of the hardware interface, and moreover it is highly specific to
that hardware.
Nonsense. There are some functions (like memory allocation
and exec) that introduce the concept of segmentation, but
even those can be masked by the C90 equivalents in the
same way Posix does.
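
As a sketch of the kind of masking I mean (assuming a 16-bit DOS
compiler such as Turbo C or Microsoft C, which supply int86x()
and the FP_SEG/FP_OFF macros in <dos.h>; dos_open is my name for
it, not Microsoft's):

    #include <dos.h>

    /* Wrap INT 21h/AH=3Dh (open existing file) behind a
       C-callable function, the same way a Posix libc hides
       its trap mechanism. */
    int dos_open(const char far *name, int mode)
    {
        union REGS r;
        struct SREGS s;

        r.h.ah = 0x3D;                 /* DOS: open existing file  */
        r.h.al = (unsigned char)mode;  /* 0=read, 1=write, 2=both  */
        s.ds   = FP_SEG(name);         /* DS:DX -> ASCIIZ filename */
        r.x.dx = FP_OFF(name);
        int86x(0x21, &r, &r, &s);
        return r.x.cflag ? -1 : (int)r.x.ax;  /* handle, or -1    */
    }

Port the innards and the caller never notices - which is the point.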
Post by Dan Cross
Consider the memory allocation API, for example. According to,
"Advanced MS-DOS Programming, 2nd Ed", by Ray Duncan, if a user
program wants to use expanded memory, the program must do work
Expanded memory is exactly that - not normal memory. Using
it is basically accessing some non-standard device, and you
could use a device driver or some other method to access it;
it is not really something you expect the base API to provide.
It has no concept in C90 either. If there happens to
be a DOS API to access this particular bit of hardware - so
be it. There are DOS APIs to access the CDROM too. But
that should be considered "icing on the cake". There is no
requirement for an OS to provide access to either device
and still be an OS.
Post by Dan Cross
to a) first determine whether expanded memory is even available,
and b) use a different, special, interrupt to delegate to the
"expanded memory manager" to actually allocate it.
Now, one might argue that this doesn't matter that much. "Who
cares? What's the difference between saying that there's a
thing called `malloc` and poking a hex constant into %ah and
then doing `int 0x67`?" And that's a valid question; one could
imagine an interface that has both `malloc` and `expandedmalloc`
or something like that to have an API divorced from the specific
ABI, but those are _leaky abstractions_: they exist solely to
represent hardware artifacts.
So - live within the constraints of standard memory.
Post by Dan Cross
So while you could "name" those things and not just number them,
the things themselves are still very tightly coupled to the
hardware. Those aren't great abstractions.
Nonsense. And regardless, now do the same analysis
for accessing a file using the open() DOS API as
exposed and documented via Microsoft C.
Post by Dan Cross
Post by John Ames
3. Again, in an 8086 environment this is *literally impossible.* There
is no operating system for the IBM PC or XT that *can* implement any
kind of protection. Additionally, in a single-user system, why is
this a requirement rather than a nicety?
Yup. A 6802 microcontroller with 128 bytes of RAM doesn't have
an operating system, either. That doesn't mean it isn't useful.
It's interesting that there was a port of Unix to the XT that
was, of course, subject to the same limitations. Sometimes you
are just constrained, so you make do with what you have.
Porting Unix to the XT, and losing memory protection in
the process, does not stop Unix from being an OS.

Even if the XT was the original port, Unix was an OS then too.
Post by Dan Cross
But
Unix at least used the segmentation facilities in the processor
to _attempt_ to shield the kernel from errant user processes;
DOS made no such attempt.
Pardon?
Post by Dan Cross
Post by John Ames
Post by Dan Cross
Post by John Ames
MS-DOS very definitely takes control of the computer - it does not
*hold onto it* very tightly, but there's no particular reason it
should have to.
Given the above definition, there's a very good reason: how does
it otherwise protect _itself_, as a software principal, from an
errant, let alone malicious, program?
It doesn't, because it cannot. The 8086 offers absolutely no facility
for protection,
Not true. It supported segmentation. It's harder to corrupt
RAM if it's not in a segment that's currently addressable.
Nonsense. All conventional memory is addressable by
any program. And if the program was compiled using
the compact, large or huge memory model, then any
data pointer at all can accidentally trash the OS if
corrupted.
Post by Dan Cross
Post by John Ames
nor does the PC hardware implement any kind of bolt-on
mechanism for this. That did not even become *possible* until the
introduction of the PC/AT in 1984, three years after DOS was released,
and that was limited by the infamously "brain-damaged" protected mode of
the 286.
And when the 286 came out, MS-DOS didn't grow to use it, even
though it was there.
Irrelevant. There is no technical barrier to doing that, depending
on your definition of "MSDOS".
Post by Dan Cross
Nor did they try to incorporate larger
segments or virtual memory when the 386 came out. But the 386
was designed for the Unix market, not the PC, so maybe MSFT gets
a pass on that one.
A "pass" for following a business direction not approved by you?
Post by Dan Cross
Post by John Ames
Post by Dan Cross
Also, a boot loader takes control of the computer, albeit
temporarily: is that also an operating system? Merely taking
control of the computer is insufficient. The OpenBoot PROM
monitor on a SPARCstation could be entered via a keyboard
shortcut, suspending Unix and the SPARC processor; was that an
"operating system"?
In the strict sense, I don't see why not. A primitive one, granted, but
if ROM BASIC counts as an operating system, OpenBoot certainly would.
I wouldn't count ROM BASIC as an operating system, sorry.
Similarly, the people I know who worked on OpenBoot did not
consider it a real operating system in the way they thought of
SunOS.
No-one at all is disputing that SunOS is far more sophisticated
than OpenBoot as an OS.

The question is whether OpenBoot meets the definition of
an OS at all.

MSDOS certainly does. And the Wikipedia authors have it right.
I didn't bother to check whether they consider OpenBoot to be
an OS.
Post by Dan Cross
Now my question to you: why do you care what specific label
people apply to MS-DOS?
It's more when you insist that MSDOS is not an OS, going
against the common English term. Repeatedly. As if anyone
in here doesn't already know that MSDOS is not as good
as your favorite OS.

You can come in here and use your freedom of speech to
insist that a dog is a cat too. Trying to inspire an alt.os.development
revolution to change the English language. Why would I care
that you insist on being a jackass? Because it takes time to
correct the misinformation that you are using the English
language in a non-standard way, not benefitting anyone.

Just the same as you lied about PDOS having other people's
copyrighted code in it. Takes time to correct your lies.

We get it already. You have access to what you consider
to be a fantastic OS, and you want to belittle OSes that
aren't as good as that, to boost your fragile ego somehow in
reflective glory about being part of that bandwagon/movement.

You may as well be posting "you suck - ha ha ha". In fact,
that would be much better, because it doesn't require a
technical correction as it is a self-evident ad hominem attack.

You may have noticed that I never reply to your childish
attempts to get adults to not talk to me with "don't feed
the troll". If an adult is influenced by your puerile shit to
not talk to me - sounds like a win/win situation.

It's your lies that require effort to correct.

BFN. Paul.
John Ames
2025-03-11 19:41:58 UTC
Reply
Permalink
On Tue, 11 Mar 2025 17:28:44 -0000 (UTC)
Post by Dan Cross
Getting back to the specific example of MS-DOS, it does provide
both, but does not provide a way to e.g., multiplex IO and
computation temporally; so performing a disk operation will
just block the program until the operation completes. Don't
want that? Go touch the hardware yourself. The control program
doesn't really help me here.
I mean, asynchronous I/O is certainly a nicety, but why does not having
it make an OS not "real?"
Post by Dan Cross
In a lot of ways, the ABI is irrelevant for workaday
programming: you leave it up to a tool, or a system library, or
something like that, but you rarely have to think about it. The
issue with the DOS API is that it is defined _solely_ in terms
of the hardware interface, and moreover it is highly specific to
that hardware.
So while you could "name" those things and not just number them,
the things themselves are still very tightly coupled to the
hardware. Those aren't great abstractions.
So an OS that is specifically tied to a particular architecture and
uses specific sequences of instructions is "not a real OS?" Funny, you
never hear that against ITS.
Post by Dan Cross
It's interesting that there was a port of Unix to the XT that
was, of course, subject to the same limitations. Sometimes you
are just constrained, so you make do with what you have. But
Unix at least used the segmentation facilities in the processor
to _attempt_ to shield the kernel from errant user processes;
DOS made no such attempt.
Absent any means of hardware protection, "segmentation" on the part of
the OS is a gentle suggestion at best; it cannot even protect against
an *errant* process, let alone a malicious one. DOS also uses the
segmented model to allocate space to applications/drivers/etc., but
pretending that this actually impedes hazardous behavior is just empty
security theater.
Post by Dan Cross
Not true. It supported segmentation. It's harder to corrupt
RAM if it's not in a segment that's currently addressable.
Again *there is no protection.* None whatsoever. Altering the segment
registers is a non-privileged operation on the 8086, and the address-
translation mechanism is a simple shift-and-add; it is trivial for any
process to write to any part of memory, whatever its initial segment-
register values were.
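
The point is easy to demonstrate (a sketch assuming a 16-bit
real-mode compiler with MK_FP in <dos.h>):

    #include <dos.h>

    void scribble(void)
    {
        /* Forge a far pointer at the INT 20h vector (0000:0080).
           On an 8086 the store just happens - no fault, no trap,
           and nothing any OS can do about it. */
        unsigned long far *vec =
            (unsigned long far *)MK_FP(0x0000, 0x0080);
        *vec = 0L;
    }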
Post by Dan Cross
And when the 286 came out, MS-DOS didn't grow to use it, even
though it was there. Nor did they try to incorporate larger
segments or virtual memory when the 386 came out.
MS and IBM did in fact try to extend DOS using the capabilities of 286
protected mode, under both Windows/286 and OS/2 1.x. Neither worked
well or saw wide adoption, because the 286 was terrible. By the time
the 386 was gaining significant acceptance, MS was already moving on to
Windows NT, which eventually did offer protected (if limited) DOS
support via NTVDM.
Post by Dan Cross
Now my question to you: why do you care what specific label
people apply to MS-DOS?
Primarily because Certain Types insist on parroting the same pithy
dismissals over and over again, year after year after year, for no
readily apparent reason and despite their arguments being predicated on
nonintuitive definitions of common terms, which are in the best case
overly narrow and, in the least charitable interpretation, pretty
clearly constructed to support the argument they wanted to make.

Now, in fairness to yourself, Scott is the one who's really been
tossing cherry bombs in the toilet here, and I think you're catching
some flak that really ought to go his way; but you *also* are insistent
on defining common terms in such a way as to exclude things that pretty
much anybody without a partisan stake wouldn't hesitate to count in
with the group - it's not enough, apparently, to say that MS-DOS is a
*simplistic* OS, or even *not a particularly good one,* but that it's
somehow not *really* an OS at all, even though you yourself admit that
Post by Dan Cross
So it's hard to see how DOS really qualifies as an OS, despite
the OS-like abstractions it provides.
So why the semantic games? What is the actual *point* to this argument?
Dan Cross
2025-03-11 21:00:36 UTC
Reply
Permalink
Post by John Ames
On Tue, 11 Mar 2025 17:28:44 -0000 (UTC)
Post by Dan Cross
Getting back to the specific example of MS-DOS, it does provide
both, but does not provide a way to e.g., multiplex IO and
computation temporally; so performing a disk operation will
just block the program until the operation completes. Don't
want that? Go touch the hardware yourself. The control program
doesn't really help me here.
I mean, asynchronous I/O is certainly a nicety, but why does not having
it make an OS not "real?"
I did not say that it did not.

I said that MS-DOS does very limited multiplexing of hardware
resources, that multiplexing of such resources doesn't only mean
timeslicing the CPU as you seemed to suggest, and used
overlapped IO and computation as an example in answering your
question of why this might matter for a single-user,
single-tasking system.

I pointed out that MS-DOS does not, and _cannot_ support this.

You'll note that I did acknowledge ways that DOS _does_
multiplex some resources, such as providing a file abstraction
and providing a memory allocator. But it's anemic.
Post by John Ames
Post by Dan Cross
In a lot of ways, the ABI is irrelevant for workaday
programming: you leave it up to a tool, or a system library, or
something like that, but you rarely have to think about it. The
issue with the DOS API is that it is defined _solely_ in terms
of the hardware interface, and moreover it is highly specific to
that hardware.
So while you could "name" those things and not just number them,
the things themselves are still very tightly coupled to the
hardware. Those aren't great abstractions.
So an OS that is specifically tied to a particular architecture and
uses specific sequences of instructions is "not a real OS?" Funny, you
never hear that against ITS.
It's funny that you entered this thread by accusing someone else
of moving the goalposts, and yet now you yourself do so.

You asked how much abstraction of the hardware is enough.
That is a debatable point worthy of discussion, but DOS does
none.
Post by John Ames
Post by Dan Cross
It's interesting that there was a port of Unix to the XT that
was, of course, subject to the same limitations. Sometimes you
are just constrained, so you make due with what you have. But
Unix at least used the segmentation facilities in the processor
to _attempt_ to shield the kernel from errant user processes;
DOS made no such attempt.
Absent any means of hardware protection, "segmentation" on the part of
the OS is a gentle suggestion at best; it cannot even protect against
an *errant* process, let alone a malicious one. DOS also uses the
segmented model to allocate space to applications/drivers/etc., but
pretending that this actually impedes hazardous behavior is just empty
security theater.
The point is that DOS actively encourages users to side-step it
and do their own thing.
Post by John Ames
Post by Dan Cross
Not true. It supported segmentation. It's harder to corrupt
RAM if it's not in a segment that's currently addressable.
Again *there is no protection.* None whatsoever. Altering the segment
registers is a non-privileged operation on the 8086, and the address-
translation mechanism is a simple shift-and-add; it is trivial for any
process to write to any part of memory, whatever its initial segment-
register values were.
I'm aware of how the 8086 segmentation model works, thanks, but
you miss the point. In order to manipulate memory outside
of a presently loaded segment, a program must first load a
segment register to point to some segment that contains the
memory you want to manipulate. Conversely, if no such segment
is loaded, that memory cannot be manipulated, even if I know its
linear address. Loading the segment registers is an _explicit_
operation; a random store won't necessarily overwrite memory.

Crude and fallible as it is, MS-DOS (again) encourages stepping
past even this feeble mechanism, which could provide some
primitive semblance of protection.
Post by John Ames
Post by Dan Cross
And when the 286 came out, MS-DOS didn't grow to use it, even
though it was there. Nor did they try to incorporate larger
segments or virtual memory when the 386 came out.
MS and IBM did in fact try to extend DOS using the capabilities of 286
protected mode, under both Windows/286 and OS/2 1.x. Neither worked
well or saw wide adoption, because the 286 was terrible.
Oh, I'm sorry, I didn't realize you worked for Microsoft when
this was going on and knew that's why those efforts failed. Are
you still in Seattle? Maybe you know some folks I know who were
there around that time? Most of them were Windows kernel folks,
but a few were around in the DOS days.
Post by John Ames
By the time
the 386 was gaining significant acceptance, MS was already moving on to
Windows NT, which eventually did offer protected (if limited) DOS
support via NTVDM.
Post by Dan Cross
Now my question to you: why do you care what specific label
people apply to MS-DOS?
Primarily because Certain Types insist on parroting the same pithy
dismissals over and over again, year after year after year, for no
readily apparent reason and despite their arguments being predicated on
nonintuitive definitions of common terms, which are in the best case
overly narrow and,
"Nonintuitive" to whom, exactly?
Post by John Ames
in the least charitable interpretation, pretty
clearly constructed to support the argument they wanted to make.
Honestly, it strikes me that it's really the other way around.
Some people appear to have gotten so much of their identity
wrapped up in the idea that MS-DOS is an "Operating System"
that the suggestion that this might not be how people doing
serious work in the field universally see it lands like someone
calling their baby ugly.
Post by John Ames
Now, in fairness to yourself, Scott is the one who's really been
tossing cherry bombs in the toilet here, and I think you're catching
some flak that really ought to go his way; but you *also* are insistent
on defining common terms in such a way as to exclude things that pretty
much anybody without a partisan stake wouldn't hesitate to count in
with the group - it's not enough, apparently, to say that MS-DOS is a
*simplistic* OS, or even *not a particularly good one,* but that it's
somehow not *really* an OS at all, even though you yourself admit that
Post by Dan Cross
So it's hard to see how DOS really qualifies as an OS, despite
the OS-like abstractions it provides.
I said it is difficult to see how DOS qualifies as an OS, given
the definition I presented. I don't get where you are saying
that I "admit that it does fundamentally fill the role of one."

I stand by that, though I admit that I don't feel the need to
condescend to those who might see it differently. However, you
_do_ have to bring a better definition than "lol because it is
because I'm tired of people making fun of it and this is what an
English language dictionary says about it." My copy of Merriam
Webster has definitions for all kinds of common words that have
nothing to do with the definition of those same words in specialist
contexts.
Post by John Ames
So why the semantic games? What is the actual *point* to this argument?
It may be hard to accept, but words have meaning, and
specialists in the field get to define those meanings, not
dictionary editors. If you want to talk seriously about
operating systems, then one has to engage with _those_ meanings,
and not what is merely intuitive.

- Dan C.
Paul Edwards
2025-03-12 06:30:27 UTC
Reply
Permalink
Post by Dan Cross
Post by John Ames
Post by Dan Cross
It's interesting that there was a port of Unix to the XT that
was, of course, subject to the same limitations. Sometimes you
are just constrained, so you make due with what you have. But
Unix at least used the segmentation facilities in the processor
to _attempt_ to shield the kernel from errant user processes;
DOS made no such attempt.
Absent any means of hardware protection, "segmentation" on the part of
the OS is a gentle suggestion at best; it cannot even protect against
an *errant* process, let alone a malicious one. DOS also uses the
segmented model to allocate space to applications/drivers/etc., but
pretending that this actually impedes hazardous behavior is just empty
security theater.
The point is that DOS actively encourages users to side-step it
and do their own thing.
Pardon? DOS "encourages" WHAT? Loading a far address
is as normal as loading a normal linear address in a 68000.
If you are using more than 64k of memory, it is something
you do. If you aren't, there may not be any need. It depends
what you are doing.

Be specific about your apparently specious claim.
Post by Dan Cross
Post by John Ames
Post by Dan Cross
Not true. It supported segmentation. It's harder to corrupt
RAM if it's not in a segment that's currently addressable.
Again *there is no protection.* None whatsoever. Altering the segment
registers is a non-privileged operation on the 8086, and the address-
translation mechanism is a simple shift-and-add; it is trivial for any
process to write to any part of memory, whatever its initial segment-
register values were.
I'm aware of how the 8086 segmentation model works, thanks, but
Sure doesn't sound like it.
Post by Dan Cross
you miss the point. In order to manipulate memory outside
of a presently loaded segment,
What do you mean a "presently loaded segment"? Far memory
isn't "loaded", it is merely allocated/assigned and accessed.
Post by Dan Cross
a program must first load a
segment register to point to some segment that contains the
memory you want to manipulate.
Yes, and on an S/370 or 68000 you load a linear address
for the memory you wish to address. That's just a different
way of accessing more than 64k of memory. You can use
a flat 32-bit address or you can use two 16-bit registers.
In both cases it is unrelated to a separate concept of
memory protection - which doesn't exist on either the
68000 or 8086.
Post by Dan Cross
Conversely, if no such segment
is loaded, that memory cannot be manipulated, even if I know its
linear address. Loading the segment registers is an _explicit_
operation; a random store won't necessarily overwrite memory.
Loading a linear address on a 68000 - accidentally or
deliberately pointing at the OS - is a logically
equivalent _explicit_ operation.
Post by Dan Cross
Crude and fallible as it is, MS-DOS (again) encourages stepping
past even this feeble mechanism, which could provide some
primitive semblance of protection.
I have no idea what you are talking about. Be specific.
Post by Dan Cross
Post by John Ames
Post by Dan Cross
Now my question to you: why do you care what specific label
people apply to MS-DOS?
Primarily because Certain Types insist on parroting the same pithy
dismissals over and over again, year after year after year, for no
readily apparent reason and despite their arguments being predicated on
nonintuitive definitions of common terms, which are in the best case
overly narrow and,
"Nonintuitive" to whom, exactly?
Anyone speaking English - both natives and non-natives.
Post by Dan Cross
Post by John Ames
in the least charitable interpretation, pretty
clearly constructed to support the argument they wanted to make.
Honestly, it strikes me that it's really the other way around.
Some people appear to have gotten so much of their identity
wrapped up in the idea that MS-DOS is an "Operating System"
that the suggestion that this might not be how people doing
serious work in the field universally see it lands like someone
calling their baby ugly.
Ok, first of all, no group speaks with one voice. So your
"universally" is complete horseshit. It's only true if you
DEFINE "serious work" as "anyone who agrees with me"
and DEFINE "in the field" as "anyone who agrees with me".

And I personally don't mind you calling someone else's baby -
Tim Paterson's, in this case - "ugly" (although I would
dispute that). But what is unacceptable is calling his baby
an orangutan when it is absolutely definitely Homo sapiens.

It's just dishonest. And note that you have a history of being
dishonest - including just above where you claim that an
entire group of "serious" OS developers "in the field" speaks
with one voice as if you've surveyed them all - when the reality
is you're just talking out of your ass.
Post by Dan Cross
I stand by that, though I admit that I don't feel the need to
condescend to those who might see it differently. However, you
_do_ have to bring a better definition than "lol because it is
because I'm tired of people making fun of it and this is what an
English language dictionary says about it." My copy of Merriam
Webster has definitions for all kinds of common words that have
nothing to do with the definition of those same words in specialist
contexts.
The "specialist context" you are talking about is "a select
group of wankers that I associate with".

You are using that "specialist term" in alt.os.development
where everyone who isn't a complete and utter wanker
uses the Merriam Webster definition because we're
TALKING IN ENGLISH.

If you want to say that "alt.os.development" is itself a
"specialist context" - and your non-English use of the
term should apply, well I can guarantee you that we
don't speak with one voice in this "specialist context"
either.

So you will only cause confusion and/or annoy people.

If you want to do that - deliberately - fine - but you may
as well be lying about something else too - like PDOS
containing code that is copyrighted by others. Oh - you
do that too.
Post by Dan Cross
Post by John Ames
So why the semantic games? What is the actual *point* to this argument?
It may be hard to accept, but words have meaning, and
specialists in the field get to define those meanings, not
dictionary editors.
No they don't. Not in this group where not everyone is a
wanker and we are speaking in English.
Post by Dan Cross
If you want to talk seriously about
operating systems, then one has to engage with _those_ meanings,
and not what is merely intuitive.
If you want to talk seriously about operating systems, where
"seriously" is DEFINED as "wankers like you", you need to
go to that group where all the wankers that speak with one
voice reside. This is alt.os.development - most of us speak
English. You may get the occasional person who agrees
with the wankers, but it's far from universal, and we've
politely asked you to just add an adjective of "protected"
to your use of the word "operating system".

BFN. Paul.
John Ames
2025-03-11 16:04:15 UTC
Reply
Permalink
(And furthermore: what does "real" in this context even *mean,* and why
should anybody care what someone on the Internet does or doesn't count
in that category?)
Paul Edwards
2025-03-10 21:29:00 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Paul Edwards
and I never heard anyone call it a toy at the time.
Then you weren't listening.
I was - I was listening to people who were actually using
MSDOS to do productive work.

Telling me what productive work they had done.

They didn't tell me they spent the whole day getting paid
to play with a toy.


And regardless, English is DEFINED by common usage.

The Wikipedia people have correctly used the term:

https://en.wikipedia.org/wiki/MS-DOS

MS-DOS (acronym for Microsoft Disk Operating System, also known as Microsoft
DOS) is an operating system

https://en.wikipedia.org/wiki/Operating_system

For around five years, CP/M (Control Program for Microcomputers) was the
most popular operating system for microcomputers



You are free to update that and insist "CP/M is a toy, not an
operating system", but it will be reverted very quickly, because
you are wrong, and they are right.

If you want to be understood by others, you need to create your
own term that would exclude MSDOS.

You could perhaps go with "a REAL (TM) You-Beaut Operating
System as approved by Scott and Dan and various other sophists -
wankers like Tim Paterson and others can go fuck themselves".

Or you can go with some other term.

But "operating system" is already taken, I'm afraid.

I'm not sure what your goal is by insisting on changing the
term. I get it. There are better operating systems than
QDOS etc. So? Why do you feel the need to keep pointing
that out, to the point of even illegitimizing them? Why are
you trying to put the authors down? Are you one of these
hypocritical Christians who, instead of "turning the other cheek"
or "loving thy enemy", "attack the innocent" and
hope that when you get to the Pearly Gates you'll fall
back on the "Jesus has me covered, right?" copout and expect
some god to not bellow with laughter?

BFN. Paul.
Paul Edwards
2025-03-10 18:06:42 UTC
Reply
Permalink
Post by Dan Cross
Further,
the system interface is inexorably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values.
I don't believe the word "inexorably" is appropriate.

If you look at the MSDOS 4 source code, you can
see that Microsoft created wrappers for these things
for their own use. There is a DosOpen() function for
example.

The fact that they apparently didn't publish that - and
then went and repurposed that same name for 16-bit
OS/2 with an (at least nominally) different interface - is
not a technical issue.

I created my own wrappers (since I didn't have access
to the MSDOS 4 source code at the time), but even
without that, there are existing wrappers (like "open"),
provided by Microsoft's C compiler.

I don't see any reason why Unix's open() is considered
to be "proper" but Microsoft's open() isn't. That's more
of a documentation issue. But which documentation
anyway? The INT 80H used by Linux is documented too.
So MSDOS is illegitimate because Microsoft didn't move
the open() function from one bit of documentation to
another bit of (unspecified) documentation?

I don't like the name open() in either MSDOS or Unix.
I like the name DosOpen(). But I don't like what OS/2
did for types. I like what MSDOS 4 did. I may switch
my own Pos* wrappers to Dos* wrappers one day,
now that the reference exists.

BTW, I now have a standalone mainframe, depending
on definition. No Windows or Linux involved. No ASCII
seen. No Intel or AMD either.

https://groups.io/g/hercules-380/message/3143

BFN. Paul.
Scott Lurndal
2025-03-10 19:01:48 UTC
Reply
Permalink
Post by Paul Edwards
The INT 80H used by Linux is documented too.
Linux has used SYSENTER et alia since they were
introduced by Intel and AMD. INT 80 was legacy on 32-bit systems.
Post by Paul Edwards
So MSDOS is illegitimate because Microsoft didn't move
the open() function from one bit of documentation to
another bit of (unspecified) documentation?
MSDOS isn't portable (nor particularly useful).

Nobody claims it is 'illegitimate'. It is
not a real operating system, just a glorified
program loader tied to a single, long obsolete
processor architecture (8088/8086).

Useful today for hobby programming by those
interested in historical operating systems. Not useful
for current or future production software
development.
Paul Edwards
2025-03-10 21:04:04 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Paul Edwards
The INT 80H used by Linux is documented too.
Linux has used SYSENTER et alia since they were
introduced by Intel and AMD. INT 80 was legacy on 32-bit systems.
Post by Paul Edwards
So MSDOS is illegitimate because Microsoft didn't move
the open() function from one bit of documentation to
another bit of (unspecified) documentation?
MSDOS isn't portable (nor particularly useful).
Says who? Which bit of open() isn't portable?
Post by Scott Lurndal
Nobody claims it is 'illegitimate'. It is
not a real operating system, just a glorified
program loader tied to a single, long obsolete
processor architecture (8088/8086).
It's MOSTLY not tied to the 8086 unless you specifically
define it that way.

The defined interface (open() etc) - if you insist on
using that instead of fopen() - isn't tied to the 8086.

And if you simply use fopen() and other C90 functions
(which might as well be considered part of "MSDOS", just as
they are part of "Posix"), it isn't tied to the 8086 at all.
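
For example, the following is strict C90 (the file name is
made up), and the same source should compile unchanged for
an 8086 DOS target, a 386, a 68000, or a mainframe:

#include <stdio.h>

int main(void)
{
    FILE *fp;
    int c;

    fp = fopen("readme.txt", "r"); /* hypothetical file name */
    if (fp == NULL)
        return 1;
    while ((c = getc(fp)) != EOF)  /* getc and putchar are plain C90 */
        putchar(c);
    fclose(fp);
    return 0;
}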

BFN. Paul.
Paul Edwards
2025-03-10 21:47:16 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Paul Edwards
The INT 80H used by Linux is documented too.
Linux has used SYSENTER et alia since they were
introduced by Intel and AMD. INT 80 was legacy on 32-bit systems.
So? Even when INT 80H was used, it didn't mean that Linux
was tied to the 80386, or that either the OS or the applications
couldn't be ported to the 68000, which has a different instruction
for raising an interrupt.

Because a wrapper existed and was expected to be used.

Just as there are wrappers provided by Microsoft for MSDOS.
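
To sketch what I mean by a wrapper - a hedged illustration
using the C library's syscall() as found on Linux; my_write
is an illustrative name, not a real API:

#define _GNU_SOURCE           /* for the syscall() declaration on glibc */
#include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Callers use a name. Only this body knows the number and,
   beneath it, the trap instruction (INT 80H, SYSENTER, or a
   68000 trap on some other port). */
static long my_write(int fd, const void *buf, size_t len)
{
    return syscall(SYS_write, fd, buf, len);
}

Retarget the OS and only the inside of the wrapper changes;
the callers don't.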

You can argue (I'm not particularly arguing) that Linux's
documentation was better, or that it was bundled better,
but that is a separate issue.

You could also argue that the specific MSDOS source code
that you see in releases had 8086 assembler included.

So? Linux has assembler source code too. Lots of it.

You could argue that MSDOS has a higher percentage of
assembler source code. So? What's wrong with that? They
could, for example, create a mostly-C version for the 68000
(I have done that myself, in fact), and then decide to have
a hand-optimized assembler version for the 68000 as well (I
may choose to do that myself too). So? What does that
prove? It just proves that engineers are doing a good
job spending time and effort to write assembler for a
particular platform. Is your complaint that MSDOS never
had that theoretical C version to allow easy porting?
CP/M was written in a language that allowed porting.
If Microsoft saw some market demand for a 68000 version
of MSDOS, I'm pretty sure their engineers could do that.

BFN. Paul.
Waldek Hebisch
2025-03-11 18:15:54 UTC
Reply
Permalink
Post by Dan Cross
Post by Salvador Mirzo
Post by Scott Lurndal
Post by Paul Edwards
Sure - but why not make it available anyway?
MS-DOS is, was, and always will be a toy. It's not even
a real operating system.
And why is that? Is it mainly because it doesn't time-share the CPU?
It depends on your definition of an operating system, I suppose.
I like the definition Mothy Roscoe (ETH) used in his OSDI'21 keynote:
1. Multiplexes the machine's hardware resources
2. Abstracts the hardware platform
3. Protects software principals from each other
(using the hardware)
This is an oversimplified definition; any definition of similar
length will be oversimplified. But let us see how this
works.
Post by Dan Cross
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really.
Well, it does. The name "operating system" originally comes from
automating part of the work of the machine operator. In particular,
the operating system loads a program and, when the program is done,
the operating system regains control, decides what to run next,
and loads it. An operating system may load multiple programs
at the same time and multiplex between them. But loading
programs sequentially is still a form of multiplexing. If
you amend the definition above to exclude such a form of multiplexing,
then you get a rather narrow definition which excludes a lot
of historical and even current operating systems.

Also, you seem to ignore the file system. For the definition
above to make any sense, multiplexing the machine's hardware
resources must include multiplexing (coordinating) access
to external storage, which is (part of) the function of a file system.
Post by Dan Cross
While it does provide a primitive filesystem,
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff. Further,
the system interface is inexorably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values. System calls are numbered, not named.
System calls are numbered in almost all operating systems.
The names are in documentation and/or part of the programming
language support, but the actual interface is in terms of numbers.
Clearly the system call mechanism has a strong connection to the
hardware and is frequently defined in terms of instructions like
SVC.
Post by Dan Cross
Finally, the last one is really the nail in the coffin: MS-DOS
makes absolutely no effort to protect the software principals
from each other, or even themselves; a user program can take
over and just never cede control back to DOS.
Well, DOS is close to the best possible protection given the
hardware. In modern times hardware protection has gained
importance, but making hardware protection a mandatory
part of the operating system definition distorts history
quite a lot.

There is a lot of valid critique of DOS, but saying that it is
not an OS is just a silly game of words. You can pile adjectives
onto OS, like "multitasking OS" or "protected OS" (or better,
"OS using hardware protection"), and DOS will fall outside such
restricted classes of OSes. But it is clearly an OS.
--
Waldek Hebisch
Dan Cross
2025-03-11 18:29:35 UTC
Reply
Permalink
Post by Waldek Hebisch
[snip]
This is an oversimplified definition; any definition of similar
length will be oversimplified. But let us see how this
works.
Mmm...not really.
Post by Waldek Hebisch
Post by Dan Cross
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really.
[snip]
Also, you seem to ignore the file system. For the definition
Funny how in the very next paragraph you quoted, I was talking
about a filesystem. ;-P
Post by Waldek Hebisch
above to make any sense, multiplexing the machine's hardware
resources must include multiplexing (coordinating) access
to external storage, which is (part of) the function of a file system.
Post by Dan Cross
While it does provide a primitive filesystem,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(See note above)
Post by Waldek Hebisch
Post by Dan Cross
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff. Further,
the system interface is inexorably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values. System calls are numbered, not named.
System calls are numbered in almost all operating systems.
You're talking about the ABI.
Post by Waldek Hebisch
[snip]
Post by Dan Cross
Finally, the last one is really the nail in the coffin: MS-DOS
makes absolutely no effort to protect the software principals
from each other, or even themselves; a user program can take
over and just never cede control back to DOS.
Well, DOS is close to the best possible protection given the
hardware. In modern times hardware protection has gained
importance, but making hardware protection a mandatory
part of the operating system definition distorts history
quite a lot.
By the time the IBM PC came along, we'd had systems where the
OS was protected from errant programs for 20 years. For example
look up the Manchester Atlas system.
Post by Waldek Hebisch
There is a lot of valid critique of DOS, but saying that it is
not an OS is just a silly game of words. You can pile adjectives
onto OS, like "multitasking OS" or "protected OS" (or better,
"OS using hardware protection"), and DOS will fall outside such
restricted classes of OSes. But it is clearly an OS.
Well, except perhaps it is not. At least not by a very
reasonable definition that's widely accepted in the field.

I really don't see why people are so upset about this; it's not
a huge deal. DOS was ok for its time and for what it enabled
on the original IBM PC; the hardware was very limited, and so it
wasn't nearly as capable as larger systems with "real" operating
systems. Why is it a priori a bad thing to acknowledge that?

It sure seems like some people are getting worked up about a
very minor thing.

- Dan C.
Paul Edwards
2025-03-11 18:52:09 UTC
Reply
Permalink
Post by Dan Cross
Post by Waldek Hebisch
Well, DOS is close to the best possible protection given the
hardware. In modern times hardware protection has gained
importance, but making hardware protection a mandatory
part of the operating system definition distorts history
quite a lot.
By the time the IBM PC came along, we'd had systems where the
OS was protected from errant programs for 20 years. For example
look up the Manchester Atlas system.
Irrelevant.
Post by Dan Cross
Post by Waldek Hebisch
There is a lot of valid critique of DOS, but saying that it is
not an OS is just a silly game of words. You can pile adjectives
onto OS, like "multitasking OS" or "protected OS" (or better,
"OS using hardware protection"), and DOS will fall outside such
restricted classes of OSes. But it is clearly an OS.
Exactly.
Post by Dan Cross
Well, except perhaps it is not.
It is.
Post by Dan Cross
At least not by a very
reasonable definition that's widely accepted in the field.
Total nonsense. Only complete jackasses "in the field"
would say that the OS in MSDOS is a misnomer that doesn't meet
"the" technical definition of an OS.

Most people I have seen in the field - including the author
of QDOS - do not make that claim.
Post by Dan Cross
I really don't see why people are so upset about this; it's not
a huge deal.
If it's not a huge deal, then please stop insisting that
MSDOS isn't an OS, and instead just use the correct
adjective, such as "non-protected OS" to describe
MSDOS if you have some point you are trying to
make.

Hint - you have no point you are trying to make. You're
just being a jackass. You're not telling anyone here
anything they don't already know.
Post by Dan Cross
DOS was ok for its time and for what it enabled
on the original IBM PC; the hardware was very limited, and so it
wasn't nearly as capable as larger systems with "real" operating
systems. Why is it a priori a bad thing to acknowledge that?
Nobody is failing to acknowledge that MSDOS was less capable.
That's just your strawman.

All we're saying is that the English words used in
Wikipedia are correct, the OS in MSDOS is not a misnomer,
and please stop insisting that the whole world is using the
English language incorrectly, because you and some of your
jackass friends are trying to change the language.
Post by Dan Cross
It sure seems like some people are getting worked up about a
very minor thing.
If it's a minor thing, then take your own advice and just admit
you were wrong according to the widespread use of the term
both inside and outside of computer science, and drop it.

Apparently it's not minor, and you are bluffing in an attempt
to delegitimize any OS that doesn't have specific features,
for ulterior motives.

BFN. Paul.
Waldek Hebisch
2025-03-11 23:05:08 UTC
Reply
Permalink
Post by Dan Cross
[snip]
Post by Waldek Hebisch
Post by Dan Cross
It's hard to see how MS-DOS fits that definition in a meaningful
way. Does it multiplex the machine's hardware resources? Well,
no; not really.
[snip]
Also, you seem to ignore the file system. For the definition
Funny how in the very next paragraph you quoted, I was talking
about a filesystem. ;-P
Post by Waldek Hebisch
above to make any sense, multiplexing the machine's hardware
resources must include multiplexing (coordinating) access
to external storage, which is (part of) the function of a file system.
Post by Dan Cross
While it does provide a primitive filesystem,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(See note above)
You did not state the relation to "multiplex the machine's hardware",
and the quick "Well, no; not really" suggests that you do not
count this as multiplexing. I think that you should.
Post by Dan Cross
Post by Waldek Hebisch
Post by Dan Cross
and exposes some interface for memory management, it only lets
one program run at a time, and that program doesn't have to use
or honor DOS's filesystem or memory management stuff. Further,
the system interface is inexorably tied to the hardware; it's
defined in terms of synchronous software traps and specific
register values. System calls are numbered, not named.
System calls are numbered in almost all operating systems.
You're talking about the ABI.
Yes, that is what matters for programs.
Post by Dan Cross
Post by Waldek Hebisch
[snip]
Post by Dan Cross
Finally, the last one is really the nail in the coffin: MS-DOS
makes absolutely no effort to protect the software principals
from each other, or even themselves; a user program can take
over and just never cede control back to DOS.
Well, DOS is close to the best possible protection given the
hardware. In modern times hardware protection has gained
importance, but making hardware protection a mandatory
part of the operating system definition distorts history
quite a lot.
By the time the IBM PC came along, we'd had systems where the
OS was protected from errant programs for 20 years. For example
look up the Manchester Atlas system.
Sure, there were systems with memory protection. But a lot
of hardware had no memory protection, and even now such
hardware is in extensive use (granted, some folks are not
aware of such systems, and even more would not count them as computers).
Post by Dan Cross
Post by Waldek Hebisch
There is a lot of valid critique of DOS, but saying that it is
not an OS is just a silly game of words. You can pile adjectives
onto OS, like "multitasking OS" or "protected OS" (or better,
"OS using hardware protection"), and DOS will fall outside such
restricted classes of OSes. But it is clearly an OS.
Well, except perhaps it is not. At least not by a very
reasonable definition that's widely accepted in the field.
I really don't see why people are so upset about this; it's not
a huge deal. DOS was ok for its time and for what it enabled
on the original IBM PC; the hardware was very limited, and so it
wasn't nearly as capable as larger systems with "real" operating
systems. Why is it a priori a bad thing to acknowledge that?
I do not care about DOS. And I acknowledge the limitations of DOS.
I do care about clear terminology. Terminology where removing
memory protection from Linux (to make it run on hardware not
capable of memory protection) turns it into "not an OS" is
nonsense. Concerning "widely accepted": I do not think that your
interpretation of the definition is widely accepted. I mean,
putting hardware memory protection into the definition
may acknowledge its importance for some operating systems,
but it should be left optional in general. If hardware memory
protection was intended as a mandatory thing, then that is a
political statement with which I think a lot of specialists
disagree. For example, Wirth describes "system Oberon", which he
calls an operating system, but which has no hardware memory
protection.

More generally, basic terminology should be inclusive. It
is easy to add extra qualifiers to narrow the meaning. It
is awkward to use phrases like "something like an OS, but which
does not satisfy some random guy's definition of an OS".
--
Waldek Hebisch
Dan Cross
2025-03-11 23:48:42 UTC
Reply
Permalink
Post by Waldek Hebisch
[snip]
You did not state the relation to "multiplex the machine's hardware",
Actually, I did. Perhaps you are having a hard time
understanding what I wrote? Is there some way I could make it
clearer?
Post by Waldek Hebisch
and quick "Well, no; not really" suggests that you do not
count this as multiplexing, I think that you should.
See above.
Post by Waldek Hebisch
[snip]
I do not care about DOS. And I acknowledge the limitations of DOS.
I do care about clear terminology. Terminology where removing
memory protection from Linux (to make it run on hardware not
capable of memory protection) turns it into "not an OS" is
nonsense. Concerning "widely accepted": I do not think that your
interpretation of the definition is widely accepted. I mean,
putting hardware memory protection into the definition
may acknowledge its importance for some operating systems,
but it should be left optional in general. If hardware memory
protection was intended as a mandatory thing, then that is a
political statement with which I think a lot of specialists
disagree.
As I said, this is the definition due to Mothy Roscoe at ETH.
It was given in the keynote for one of the two major conferences
on the subject. Perhaps watch for yourself and then judge.

https://www.usenix.org/conference/osdi21/presentation/fri-keynote
Post by Waldek Hebisch
For example, Wirth describes "system Oberon", which he
calls an operating system, but which has no hardware memory
protection.
Speaking of ETH....
Post by Waldek Hebisch
More generally, basic terminology should be inclusive. It
is easy to add extra qualifiers to narrow the meaning. It
is awkward to use phrases like "something like an OS, but which
does not satisfy some random guy's definition of an OS".
Sounds like you have some studying to do, my boy.

- Dan C.
Waldek Hebisch
2025-03-12 02:23:00 UTC
Reply
Permalink
Post by Dan Cross
[snip]
Post by Waldek Hebisch
More generally, basic terminology should be inclusive. It
is easy to add extra qualifiers to narrow the meaning. It
is awkward to use phrases like "something like an OS, but which
does not satisfy some random guy's definition of an OS".
Sounds like you have some studying to do, my boy.
Mind you, I know how keynote lectures at conferences work.
I know enough about operating systems to have my own
opinion of what an OS is; no need to defer to some external
authority.
--
Waldek Hebisch
Dan Cross
2025-03-12 02:34:09 UTC
Reply
Permalink
Post by Waldek Hebisch
Post by Dan Cross
Sounds like you have some studying to do, my boy.
Mind you, I know how keynote lectures at conferences work.
I know enough about operating systems to have my own
opinion of what an OS is; no need to defer to some external
authority.
Pardon me if I'm skeptical of these claims. Perhaps you'd care
to cite some sources, or offer some credentials, or something of
that nature, to establish some credibility for your assertions?

- Dan C.
Paul Edwards
2025-03-12 07:41:25 UTC
Reply
Permalink
Post by Dan Cross
Post by Waldek Hebisch
Post by Dan Cross
Sounds like you have some studying to do, my boy.
Mind you, I know how keynote lectures at conferences work.
I know enough about operating systems to have my own
opinion of what an OS is; no need to defer to some external
authority.
Pardon me if I'm skeptical of these claims. Perhaps you'd care
to cite some sources, or offer some credentials, or something of
that nature, to establish some credibility for your assertions?
Which claim? That he knows about OSes? You want him to
list the books he has read or the source code he has looked
at, so that if you've read more books than him, you are superior
to him, despite the fact that you are a lying jackass who has
trouble understanding the 8086 despite claims to the
contrary?

Superior in your eyes, perhaps.

I can understand why he wouldn't want to defer to some
external authority nominated by you. So that claim?

He's no more likely to defer to your nominated external
authority than you are likely to defer to mine.

BFN. Paul.

John Ames
2024-07-19 16:35:33 UTC
Reply
Permalink
On Fri, 19 Jul 2024 18:43:13 +0800
Post by Paul Edwards
Sure - but why not make it available anyway? What's the barrier
to someone doing that? No-one is interested? Too much work?
It didn't need to be Microsoft personally. And it can be written
in C to make things easier. Or even some other language - e.g.
CP/M was written in PL/M I think.
Well, 35 yrs. ago, x86 wasn't even a thing in the "mainframe" (by which
we presumably mean "large-scale, heavy-duty business computing") space;
in 1989 e.g. CompuServe was still entirely a PDP-10 shop, IBM had just
rolled out its AS/400 line, and the IA-32 architecture was still four
years away from even going superscalar. Others here have far more
direct knowledge of the "mainframe" space than myself, and can feel
free to correct me, but AFAIK x86 systems didn't see broad acceptance
in truly heavy-duty business computing 'til the mid-'00s.

And while MS-DOS can certainly be used for classic batch processing, it
has practically no support for multitasking, which was already a thing
in the mainframe space all the way back to the '60s, because any given
batch job will not *necessarily* make maximal use of the computer, and
at large scale it makes no sense to leave available resources idle.
It's possible to set specialized utilities running as TSRs in DOS, but
the system as a whole is not designed for more than one "real" program
to run at a time - so sharing the system between large numbers of
individual jobs in a generalized way simply isn't possible.

So, in short: there was no mainframe hardware platform that it could be
ported to back in the day, and it's not well-suited for that use case.
One certainly *could* get it running on, say, a large x86 cluster as a
novelty, but it's not a huge surprise that, thus far, nobody has been
thus inclined.
Paul Edwards
2024-08-20 20:20:24 UTC
Reply
Permalink
Post by John Ames
And while MS-DOS can certainly be used for classic batch processing, it
has practically no support for multitasking, which was already a thing
So, in short: there was no mainframe hardware platform that it could be
ported to back in the day, and it's not well-suited for that use case.
One certainly *could* get it running on, say, a large x86 cluster as a
novelty, but it's not a huge surprise that, thus far, nobody has been
thus inclined.
I'm not familiar with "clusters". Could you tell me what this
"novelty" port would look like?

Thanks. Paul.
John Ames
2024-08-21 16:51:01 UTC
Reply
Permalink
On Wed, 21 Aug 2024 04:20:24 +0800
Post by Paul Edwards
I'm not familiar with "clusters". Could you tell me what this
"novelty" port would look like?
"Clusters" being "large numbers of discrete systems across which work
is distributed," an idea that goes back at least to the Transputer but
which really took off in high-performance computing in the early '00s
(IIRC,) when commodity PC hardware reached a performance level such that
it was practical to use "a bunch of PCs in a network" as a replacement
for a single high-performance computer of some other flavor, depending
on the job.

So, for the sake of argument, let's say you got MS-DOS running on such
a platform - certainly possible, since it's fundamentally a PC (leaving
aside issues of e.g. real-mode BIOS vs. UEFI or getting packet drivers
for the NIC, as well as getting the network stack going.) You then have
a large number of DOS PCs on which you can run one (1) job at a time.

Now, assuming that you're doing this because you have a large number of
jobs which you'd like to power through in a maximally efficient manner,
you'd also need some kind of supervisory system to distribute jobs from
the pile to individual nodes in the cluster. There's no reason this
couldn't also run on MS-DOS; in any case, you'd need software on both
ends to *A.* schedule the job for a particular node, *B.* provide that
node with access to the files/resources needed to do it, and *C.* keep
it rolling from one job right into the next.
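
(A minimal sketch of that supervisory loop - placeholder
jobs and nodes, no real network stack, and completion
treated as instant purely for illustration:)

#include <stdio.h>

#define NODES 4
#define JOBS  10

int main(void)
{
    int next = 0; /* round-robin pointer into the node list */
    int job;

    for (job = 0; job < JOBS; job++) {
        /* A: schedule the job for a particular node */
        printf("job %d -> node %d\n", job, next);
        /* B: ship the job's files/resources to that node (stubbed) */
        /* C: when the node reports completion, hand it the next
           job; here we just advance round-robin as if completion
           were instant */
        next = (next + 1) % NODES;
    }
    return 0;
}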

Now you've got things going; but it's a far cry from maximum efficiency,
because the hardware on each node in the cluster is almost certainly
capable of multi-threaded operation, but DOS has no support for multi-
processing at all. (It's also probably a 64-bit system, but we'll say
for the sake of argument that you've got some kind of amd64 equivalent
to a DPMI going - has anyone written one yet? - so that your application
can at least make full-ish use of a single CPU core.)

There's two ways you could go about handling this. You could attempt to
extend your single-threaded MS-DOS application into a multi-threaded
one, handling all the scheduling and resource-contention issues within
itself. At this point, you've more or less implemented a different OS
on top of DOS (like pre-NT Windows.) Not terribly ideal, since you are
(presumably) still handling I/O through DOS and your DOS-based network
stack, which are single-threaded and will bottleneck all the other
threads. (This was something that Amiga programmers used to deal with:
pre-emptive multitasking bolted awkwardly to a single-threaded DOS.)

Alternatively, you could choose to virtualize; modern implementations
of the amd64 architecture support this natively in hardware, so you can
put even your 64-bit DPMI-enabled mutant MS-DOS program in a container
such that it thinks it's running by itself on a single-threaded CPU,
and then run as many of those in parallel as you have CPU threads. Of
course, you'll need a hypervisor system in place for this; for the sake
of argument, you could probably *also* run this on MS-DOS, but I very
much doubt anyone's written such a beast, so you'd probably have to do
it yourself. You might also need to extend your supervisor-node
software to parcel out multiple jobs to each machine, unless all the
hypervisor-guest systems appear as individual nodes on the network
(which they certainly could.)

So, to summarize: all you need in order to accomplish this is 1. a DOS-
based hypervisor which almost certainly doesn't exist, 2. a 64-bit DPMI
extender which probably doesn't, 3. DOS-based remote job execution
tools which might conceivably already exist, but may not, 4. your own
particular mutant 64-bit MS-DOS application, and 5. a task sufficiently
large/intensive to justify all this effort on in the first place.

Should make for a nice weekend project!
Paul Edwards
2024-08-22 05:24:02 UTC
Reply
Permalink
Post by John Ames
So, to summarize: all you need in order to accomplish this is
This is a very complicated new system. That is not my goal.
My goal is a simple starter system. z/PDOS-generic is an
example of a simple starter system.
Post by John Ames
1. a DOS-
based hypervisor which almost certainly doesn't exist, 2. a 64-bit DPMI
extender which probably doesn't,
Note that I have 32-bit MSDOS which is accomplished by
switching from PM32 to RM16 in order to make BIOS calls.
This works on an AMD64-like processor if someone has
made a BIOS available. I actually bought a Lenovo Kaitian
with a Zhaoxin processor in order to get this. The BIOS is
literally in Chinese and I needed help from a friend in order
to know how to switch between UEFI and legacy BIOS.

Also note that I have a thin wrapper on top of UEFI that switches
a UEFI system into a mini Windows 64-bit clone. That is also
MSDOS-like.

And at this level you need to define what "MSDOS" actually means.

BFN. Paul.
Grant Taylor
2024-07-20 00:46:03 UTC
Reply
Permalink
Post by Paul Edwards
Sure - but why not make it available anyway? What's the barrier
to someone doing that? No-one is interested? Too much work?
I believe you answered your own question.
Post by Paul Edwards
It didn't need to be Microsoft personally.
Assuming the MS in MS-DOS stands for Microsoft, yes, it does need to be
Microsoft.

If you just want DOS on a mainframe, IBM did that.

Link - DOS/360 and successors - Wikipedia
- https://en.wikipedia.org/wiki/DOS/360_and_successors
--
Grant. . . .
Paul Edwards
2024-08-20 20:18:11 UTC
Reply
Permalink
Post by Grant Taylor
Post by Paul Edwards
Sure - but why not make it available anyway? What's the barrier
to someone doing that? No-one is interested? Too much work?
I believe you answered your own question.
Post by Paul Edwards
It didn't need to be Microsoft personally.
Assuming the MS in MS-DOS stands for Microsoft, yes, it does need to be
Microsoft.
If you just want DOS on a mainframe, IBM did that.
Link - DOS/360 and successors - Wikipedia
- https://en.wikipedia.org/wiki/DOS/360_and_successors
And that is really crappy compared to Microsoft's version. The
Microsoft version (or equivalent) could have been used for
debugging system problems, or experimenting on a DR site.

BFN. Paul.
Salvador Mirzo
2025-03-08 17:41:40 UTC
Reply
Permalink
Post by Grant Taylor
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.
MS-DOS
Microsoft DOS
Micro
micro-computers are the smallest end of the system with mainframes and
supers at the other end of the system.
IBM provided a Disk Operating System for early and / or smaller mainframes.
And why is it a /Disk/ Operating System? What's so /disky/ about it?
Dan Cross
2025-03-09 01:58:26 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Grant Taylor
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.
MS-DOS
Microsoft DOS
Micro
micro-computers are the smallest end of the system with mainframes and
supers at the other end of the system.
IBM provided a Disk Operating System for early and / or smaller mainframes.
And why is it a /Disk/ Operating System? What's so /disky/ about it?
Simple: it drove a system with a disk. Most early mainframes
didn't have disks, so once they came along, system software had
to evolve to meet the needs of new hardware.

IBM's DOS/360 was pretty anemic compared to its flagship OS/360.
But it was built as something of a stopgap because OS/360 was behind
schedule.

- Dan C.
Salvador Mirzo
2025-03-10 12:31:00 UTC
Reply
Permalink
Post by Dan Cross
Post by Salvador Mirzo
Post by Grant Taylor
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.
MS-DOS
Microsoft DOS
Micro
micro-computers are the smallest end of the system with mainframes and
supers at the other end of the system.
IBM provided a Disk Operating System for early and / or smaller mainframes.
And why is it a /Disk/ Operating System? What's so /disky/ about it?
Simple: it drove a system with a disk. Most early mainframes
didn't have disks, so once they came along, system software had
to evolve to meet the needs of new hardware.
IBM's DOS/360 was pretty anemic compared to its flagship OS/360.
But it was built as something of a stopgap because OS/360 was behind
schedule.
Thanks! Changing the subject a bit to the history of DOS, if that's
okay. I was not quite aware that there was a mainframe DOS in the IBM
world. So it seems to me that Microsoft found the DOS made by ``Seattle
Computer Products'' the right choice to buy because they wanted to
produce a system for IBM micro-computers---it makes sense in terms of
keeping the same user interface. But this strategy assumes that the
users of micro-computers would be more or less the same as the users
of IBM mainframes. Am I imagining things correctly here, and did the
strategy really make sense? (It could also be the case that Microsoft
just didn't have any other option.) (Background: I've watched the film
``Pirates of Silicon Valley'' a long time ago. That's how much I know
about the history of MS-DOS.)
Dan Cross
2025-03-10 14:28:28 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Dan Cross
Post by Salvador Mirzo
Post by Grant Taylor
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.
MS-DOS
Microsoft DOS
Micro
micro-computers are the smallest end of the system with mainframes and
supers at the other end of the system.
IBM provided a Disk Operating System for early and / or smaller mainframes.
And why is it a /Disk/ Operating System? What's so /disky/ about it?
Simple: it drove a system with a disk. Most early mainframes
didn't have disks, so once they came along, system software had
to evolve to meet the needs of new hardware.
IBM's DOS/360 was pretty anemic compared to its flagship OS/360.
But it was built as something of a stopgap because OS/360 was behind
schedule.
Thanks! Changing the subject a bit to the history of DOS, if that's
okay.
Yes, of course.
Post by Salvador Mirzo
I was not quite aware that there was a mainframe DOS in the IBM
world. So it seems to me that Microsoft found the DOS made by ``Seattle
Computer Products'' the right choice to buy because they wanted to
produce a system for IBM micro-computers---it makes sense in terms of
keeping the same user interface. But this strategy assumes that the
users of micro-computers would be more or less the same as the users
of IBM mainframes. Am I imagining things correctly here, and did the
strategy really make sense? (It could also be the case that Microsoft
just didn't have any other option.) (Background: I've watched the film
``Pirates of Silicon Valley'' a long time ago. That's how much I know
about the history of MS-DOS.)
Well, I would urge some caution here; I don't think that DOS/360
had much resemblance, if any, to MS-DOS: it was a batch system
for very low-end mainframes in the IBM 360 line. The name clash
is just a coincidence. At the time, lots of manufacturers were
starting to introduce "DOS" systems, since disks were relatively
new and gaining favor for long-ish term secondary storage of
data (tape was still preferred for really long-term storage; in
lots of places, this was true even up until the 1990s and into
the early 2000s). Before that, tape dominated, with occasional
use of drums for high-speed temporary storage that was nearly
random-access. When PCs started to show up on the scene, and
started to ship with floppy disks, the name "DOS" was recycled.
Indeed, lots of early PCs had "DOS" operating systems, but these
are generally completely unrelated to one another; it was just a
common term for systems that were disk-oriented.

The MS-DOS interface, inherited from QDOS, which mimicked that
of CP/M, has much more in common with DEC operating systems than
anything in the IBM mainframe world. The interface of IBM's
time-sharing systems, like VM/CMS (now z/VM), has more in common
with Multics, or CTSS (which was the predecessor of both), than,
say, TOPS-10 or TENEX or DOS/8. It may be worth clarifying that
these things didn't usually spring forth in a vacuum; a lot of
the people who were building these things in their garages and
who started the early PC companies had some experience with
mainframe and minicomputer systems; they naturally drew some
influence from those when they started putting together the UIs
for their machines.

IBM's larger machines (what we usually associate with
"mainframes") had come out of a world that was bifurcated
between scientific and business computing; systems like the 1401
were targeted towards business, which needed high throughput,
but performed relatively simple (usually decimal or integer)
calculations. Systems like the 7094 were targeted towards
scientific computing, which needed fast floating point for
complex calculations, but relatively low throughput. To
illustrate, consider charging compound interest on a bank's
portfolio of mortgage loans at the beginning of each month,
versus calculating the trajectory of a rocket. The rules for
the former may be complex, but the math is pretty simple ("take
this number, add 10 percent to it, and store it somewhere"); the
latter is hellaciously complex ("evaluate this integral to
compute the area under this curve as time varies from a to b,
but mass decreases nonlinearly as a function of fuel consumption
and decreasing drag as we move out of the atmosphere..."). A
large bank might run their mortgage interest program over a
million or more loans, while NASA's only doing the trajectory
calculation for a single mission at a time; they may run it more
than once, of course, but probably not a million times.

The IBM 360 line was supposed to unify these two worlds onto a
single ISA, hence "360" in the name, as in "a 360 degree view of
the world of computing." The problem was that the software for
the 360 was famously delivered behind schedule, well after the
hardware, as recounted in Fred Brooks's masterful, "The Mythical
Man Month"; so IBM had 360 systems sitting on loading docks but
no software to go with them. While OS/360 was still being
developed, they quickly put together stopgap operating systems
so that they could move their machines into customer hands.

DOS/360 was one of those, and it was small enough that it could
run on a 360/30 with something like 8 or 16KiB of RAM and a
disk. They also shipped a TOS/360 ("tape operating system") for
systems without disks. But it was a batch system, with no real
user interface that would be meaningful in the context of a PC
or interactive timesharing system.

IBM got into the PC market largely because they saw a business
opportunity, but it's not clear that they really believed in it;
the original IBM PC project, coming out of Florida, was run very
differently from projects in New York, and is a reflection of that.

- Dan C.
Scott Lurndal
2025-03-10 14:46:50 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Dan Cross
Post by Salvador Mirzo
Post by Grant Taylor
For 35+ years I have wondered why there was no MSDOS for the mainframe.
The answer is in the name.
MS-DOS
Microsoft DOS
Micro
micro-computers are the smallest end of the system with mainframes and
supers at the other end of the system.
IBM provided a Disk Operating System for early and / or smaller mainframes.
And why is it a /Disk/ Operating System? What's so /disky/ about it?
Simple: it drove a system with a disk. Most early mainframes
didn't have disks, so once they came along, system software had
to evolve to meet the needs of new hardware.
IBM's DOS/360 was pretty anemic compared to its flagship OS/360.
But it was built as something of a stopgap because OS/360 was behind
schedule.
Thanks! Changing the subject a bit to the history of DOS, if that's
okay. I was not quite aware that there was a mainframe DOS in the IBM
world.
There were several "Disk Operating Systems" available from various
computer manufacturers a decade before CP/M was developed and
Microsoft introduced the PC DOS
operating system. IBM, Burroughs, various others all had some form
of DOS.
Post by Salvador Mirzo
So it seems to me that Microsoft found the DOS made by ``Seattle
Computer Products'' the right choice to buy because they wanted to
produce a system for IBM micro-computers---it makes sense in terms of
keeping the same user interface.
https://en.wikipedia.org/wiki/Disk_operating_system

https://en.wikipedia.org/wiki/CP/M