Post by Grant Taylor
The modem is going to look identical to a Windows computer with
Cygwin installed.
Because it is going to be a Windows computer with Cygwin installed.
What does that even mean?
Cygwin provides Unix, so you can write crappy Unix
programs on Windows. Windows is crappy too, so
no matter which way you turn, you need to write crap.
Or someone needs to, anyway.
I've found a way of isolating myself from the crap.
Even if I put on another hat and write crappy Unix
code for a while.
Post by Grant Taylor
Cygwin supports fork().
How does that help the system(s) that are connected to the modem via
{serial,Ethernet} and {PSTN,DSL,DOCSIS}?
What does the magical modem's ability to fork() actually mean to the
connected computers? How does the modem's ability to fork() help the
connected computers?
I don't care if the modem can magically fry me an egg exactly the way
that I like it, or not, because that doesn't actually have anything to
do with the data I send out the {serial,Ethernet} port to the
{PSTN,DSL,DOCSIS} connection.
What does the modem's ability to fork() do for the connected device?
No, this is not my point. Did you know that even GCC,
a friggin' compiler, which could have been (and now is)
written in pure C90, managed to create a dependency
on "fork"? I don't use Unix crap like that myself, but
others plaster their code with it.
So long as this code that deals with TCP/IP, wall to
wall Unix crap, full of forks and opens and god knows
what else, is isolated to the modem (virtual modem
running under Cygwin under Windows), I don't really care.
So long as it just looks like a really fast modem, doing
100 Mbps via INT 14H, I don't care.
Post by Grant Taylor
Telstra even ripped up the copper wire. You have no choice but to
use fiber or 3G.
Sure I do. I always have a choice. There are always options to convert.
You didn't answer what "all I want is my modem back" means either.
I only recently found out that INT 14H even existed, and
also that MSDOS supports fopen of COM1 (but not "COM1:").
So I have only recently found out "the rules".
Now that I know the rules, I am retrospectively rewriting all
my software. Otherwise it would be unreasonable to
carpet bomb Taiwan.
But if I obey the rules, and the Taiwanese don't, it is justified
to carpet bomb them.
So, I have no problem with manufacturers bumping the speed
of my comms link up to 100 Mbps, but unless there's some
technical barrier, like the speed of light, I expect INT 14H to
give me some of that 100 Mbps. Currently INT 14H on my
Dell gives me crickets.
If INT 14H is restricted to 1 Mbps by the speed of light, no
problem, I'm happy to call INT 14H function 55 instead
which does a block write instead of a character write.
And my OS will gracefully check to see if function 55 exists
or not (not sure how to do that, but other interrupts have
the same issue), and if 55 (which would enable 100 Mbps)
doesn't exist, then it will fall back to function 1, which is
(allegedly) restricted to 1 Mbps.
Post by Grant Taylor
I had assumed that a "virtual modem" didn't entail a physical phone
line.
I'll rephrase slightly: how are the computers connected to your modem
going to change what they are doing to be in sync with the modem's change?
What happens to the data that the two computers are exchanging when your
modem does its fork()?
Does the data stream get interrupted?
Does the modem switch to a different data stream?
How are the computers connected via your modem supposed to deal with
this atypical behavior?
Your question is too complicated for me. What I know
is that my modem is already working. You can see it here:
https://sourceforge.net/p/mvs380/mvssrc/ci/master/tree/ozpd/c/modem.c
It's not pretty though.
Post by Grant Taylor
I have no knowledge of certificates,
I strongly suggest that you get at least some knowledge of certificates.
Especially since certificates are the data that SSL/TLS
encryption is based on.
I can highly recommend TLS Mastery by Michael W. Lucas. N.B. the "W."
is important.
I don't want prior art to put me into a rut. When my (working)
modem and my (working) BBS need to be enhanced to
handle "casual snooping", I'll consider what to do about
security.
The first thing I'll ask is why my ISP doesn't encrypt data
if there are so many snoopers between my ISP and the
other guy.
And if my ISP refuses, and I can't get the Australian
government to arrest them for indecent behavior, I'll
instead ask the US to carpet bomb Australia to get
some decent laws enacted.
And if the US is unwilling to bomb Australia, but is
willing to bomb Taiwan, I'll request my firmware to
take care of this instead of my ISP.
I sure as hell can't see (at this stage) why this has
anything to do with me.
Post by Grant Taylor
but I assume Windows has a way of doing that,
Contemporary Windows does. You don't have to go back too far to find
Windows that doesn't have commands to manage certificates.
And the command can't be added by a 3rd party?
Post by Grant Taylor
and the virtual modem will be running under Windows or Linux or BSD
or MacOS.
Didn't you say "Because it is going to be a Windows computer with Cygwin
installed." a moment ago? So ... why the change to now include Linux,
BSD, or macOS?
Most of this Unix crap works on all of those environments.
Even the IBM mainframe claims to be POSIX compliant.
PDOS/386 isn't POSIX compliant though. It uses some sort
of MSDOS API called Pos*, as distinct from Bos* which is
the IBM hardware interface standard.
Post by Grant Taylor
Yes, I'm throwing that in too.
I'm not sure what's more complicated, a TCP/IP stack or OpenSSL.
These GNU asshats can do both.
Post by Grant Taylor
One day someone may produce a physical modem that meets the above
criteria and that is the size of a matchbox, but until then, it will
look exactly the size of a PC.
How is it going to look (exactly) the size of a PC if it's /virtual/?
I'm lost. The virtual modem needs to run on *something*.
Why not a PC?
Post by Grant Taylor
Virtual anything still needs something physical, like RAM.
I would normally say that people know what I mean. But I'm not sure
about you.
No, I probably need something like that spelled out.
Don't assume that I have prior knowledge of anything
in particular, or am familiar with any particular
terminology.
I was surprised when I was doing Amiga work that
someone said that all programmers know how to
do 32-bit multiplication using 16-bit instructions.
I'm missing some code I need to support the
68000 and am dependent on a 68020. News to me
that everyone else knows that.
If you describe something to me in terms of C90, I
will likely understand that.
Post by Grant Taylor
I need bluetooth or fiber or something in order to do any sort of
communication.
So you /do/ need a /physical/ network interface for your /virtual/ modem.
Of course.
Post by Grant Taylor
Believe it or not, even fiber can respond to "ATDE" with "CONNECT".
I choose "not". Fiber, as the medium for transmitting light, doesn't
respond to commands. Modems or transceivers connected to fiber /may/
respond to commands. Though the commands will probably not be "CONNECT"
and almost certainly not "ATDE".
Upgrade your virtual modem. :-)
Post by Grant Taylor
100 Mbps.
Why 100 Mbps?
Why not 25 Mbps or 250 Mbps?
100 Mbps is indicative. The speed is only limited by technology.
Post by Grant Taylor
ATDAnntp://eternal-september.org
Or more importantly, what might it represent?
What significance does what it represents have?
With the above, a decent operating system will know
what to do. If your operating system doesn't know what
to do, time to upgrade.
Post by Grant Taylor
NETWORK=COM1,ATD
meaning a simple fopen of nntp://eternal-september.org
What is the association between nntp://eternal-september.org and NETWORK
(COM1,ATD)?
The OS (which reads config.sys) will recognize that
any networking request ("nntp://" is recognized as a
request for a network connection) should be directed
via COM1 (the OS knows about that too; so does
MSDOS, in fact), and that it should use the "ATD"
prefix when dialing out via COM1, as that is what
this particular modem expects.
I'm happy to negotiate these low-level details.
Post by Grant Taylor
What would be different to use an encrypted connection? How do you
communicate to the modem that it needs to use TLS?
At the moment I'm just trying to get a BBS to run.
It hasn't quite passed the "proof of concept" stage yet.
I would rather not be burdened by encryption requirements
when I haven't even got sign-off on using XON. Or demonstrated
that zmodem can work across an ASCII to EBCDIC gateway.
Or indeed, what to even code for ZRQINIT in zmodem when
running on an EBCDIC platform that may be communicating
with either another EBCDIC system or an ASCII system via
the gateway.
When I have 1990 modems working to my satisfaction (as I
said, I didn't even know INT 14H existed until recently), I'll
consider what to do about encryption, and hope that I haven't
painted myself into a corner.
But I would hope that any encryption would be handled
by something like the "Squid" that you mentioned. If there
is some technical barrier to putting Squid into my virtual
modem, ok, that's my bad luck.
Post by Grant Taylor
How do you tell the modem to connect to a different port on the remote end?
ATDAeternal-september.org:12345
Post by Grant Taylor
Do you have pre-defined devices for each possible remote destination?
It would be ideal to configure the modem or OS with something
like (in config.sys):
DEFAULT_UUCP=eternal-september.org:550
So that the end user just needs to type:
getnews DEFAULT_UUCP
But I'm willing to negotiate. That's just an example.
Post by Grant Taylor
Do you have one (or more) device(s) that you open and then pass a
parameter to?
If there is some technical reason for the firmware/modem
to need two ports to communicate on, then in config.sys:
NETWORK_SPLIT=(COM1,COM2)
Post by Grant Taylor
will do an INT 14H to port 0
What is the association between INT 14H, port 0,
nntp://eternal-september.org and NETWORK (COM1,ATD)?
The OS recognizes that to get any network connection
it needs to talk to a modem designed to accept ATD
commands, and the modem is attached to COM1.
Post by Grant Taylor
and send an ATDA followed by the destination I am interested in.
You seem to dislike layers, but layers serve a valuable purpose.
I dislike layers if they interfere with anything I am trying
to do. I read about TLS in Wikipedia and they stated outright
that it doesn't fit neatly into any of the OSI layers.
Post by Grant Taylor
One layer is the physical (or virtual counterpart) communications
between devices. E.g. RS-232, Ethernet, FDDI, etc.
The next layer is signaling protocol that runs on top of and is
independent of the underlying physical connection. E.g. ATD...
The next layer is the application data that the client application
exchanges with the remote server that you're talking to, independent of
how you direct the modem or the physical connection.
Separating the things into distinct layers that have defined
interactions with each other makes each of them a LOT simpler.
Ok, I don't have a problem with the above, but I was
doing that anyway, wasn't I?
My main concern is at the application level. I want
fopen() to behave nicely.
I care much less about the OS, and I don't care at all
about the hardware. Bluetooth/greentooth/redtooth,
you can invent as many things as you want, and so
long as the Taiwanese hide it all behind INT 14H,
I don't care. And the C library hides it behind fopen.
And the OS hides it behind either PosOpen or
CreateFile, I don't really care that much, as my apps
do not directly call either, they always go via fopen.
Post by Grant Taylor
I might have another config file ebcdic_hosts.cfg which, if the
filename in fopen is in that list, an ATDE is issued instead.
So ... you'll have an abstraction /layer/ to simplify what other things do.
I guess so. Isn't that reasonable?
Post by Grant Taylor
It should have. That was the defined interface.
I think the operative word in your statement is "was", as in past tense.
All PC compatibles used to have an 8-bit ISA slot in them. Most
businesses on main street used to have a hitching post in front of them.
Times change. Things move on.
That's hardware changes. I don't care about that moving on.
I care about the software API to interact with the hardware.
There's no reason to invalidate that, although I will accept
switching to 32-bit registers and then 64-bit registers
instead of 16-bit registers as the register size changes.
That's why INT 14H should be hidden behind a BosSerial*()
function.
Post by Grant Taylor
If manufacturers insisted that I need to call INT 14H in protected
mode, I can live with that. But that's the interface. That's what
should exist.
"should" being the operative word.
You would be amazed about how well dum-dum bullets
drive a point home. And Gurkhas are available for hire.
And I know where Taiwan is.
Post by Grant Taylor
Those are my user requirements.
I suspect that your requirements are going to mean that you have fewer
and fewer systems that meet your requirements.
Ok, sure. If I run out of manufacturers (maybe someone
carpet-bombed them or something), I'll resort to running
Bochs on an IBM mainframe or something.
My BBS might start malfunctioning if I do that though,
as it is designed to respond to keystrokes, not lines.
It is ironic that I haven't even secured characters, while
other people insist that graphics are mandatory.
i.e. I'm not even at the point where I can safely write a
character-mode program.
Post by Grant Taylor
And I'll zap the firmware to get them.
That assumes that there is a different firmware to put on the system.
There may be, or there may not be.
Isn't that what GNU asshats are for?
Post by Grant Taylor
Or carpet-bomb Taiwan. Whatever is easiest.
I seriously doubt that carpet-bombing Taiwan will get them to put
functionality you want in a system. Especially if the systems you're
using come from somewhere else.
It's the thought that counts.
Post by Grant Taylor
My user requirements are that manufacturers should continue to support
the interface that they provided, unless modern computers don't have
enough CPU power to do so, because we ran out of silicon.
And yet (effectively all) modern computers don't have 8-bit ISA slots.
And it's not because we ran out of connectors.
I don't mind that.
Post by Grant Taylor
Well, the manufacturers already said (one way or another) that INT
14H will time out instead of blocking.
Okay....
They defined that interface, so I coded to that interface.
Who is "they"? Do you by chance mean "IBM from the '80s"?
I guess so. I just look it up in RBIL (Ralf Brown's
Interrupt List). I assume he got it from somewhere definitive.
Post by Grant Taylor
But if I have a BIOS enhancement to block, that shouldn't affect any
OS that obeyed the rules.
And yet a program / OS that relies on something to timeout and detect
that there was no data won't be able to report the lack of data /
timeout to the end user when it's blocked by the BIOS.
Sure. So the end user shouldn't set "blocked reads" if
he is running such an application+OS. And the
manufacturer should probably have it time out by
default or risk being carpet-bombed.
Post by Grant Taylor
My OS obeys the rules - rules that existed in 1990 or earlier -
and thus it isn't me that is at fault.
8-bit ISA, hitching posts, ... times change. Things move on. Old rules
go out and new rules come in.
S/360 applications that obeyed the rules continue
running in 2021. That's what professionals (IBM) do.
BFN. Paul.