Post by ***@gmail.com
Post by Joe Monk
Post by ***@gmail.com
Ritchie created a language that, as far as I can tell, was
designed to last for eternity.
Which is why it has already been superseded...
https://dlang.org
https://www.rust-lang.org
https://go.dev
Which future architecture do you see on the horizon,
even long term horizon, that allows those languages
to run, but not C?
I know of past architectures that C can't really run on,
at least in any sensible form, but not future ones.
I'm not familiar with those other languages to know if
they are capable of running on those older architectures,
which gives you the confidence that they, but not C,
are more likely to run on the future ones.
This is not really a question of "not running": if you really
want, you will find some way to run C programs (maybe using
something like Bochs). Rather, in programming there are
3 desirable properties: runtime efficiency (speed and memory
use), programming convenience, and safety/correctness. And
they point in different directions. C was and still is
reasonably good at runtime efficiency. C from the start was
rather bad at the safety/correctness part: it is easy to
write C code with errors and the language offers little help
in finding them. Concerning convenience, among languages of
comparable efficiency C was about average.
Now, once you drop efficiency requirements a bit, it is
possible to significantly increase safety/correctness.
Namely, compiler-inserted checks, in particular array bounds
checks, can catch many errors. In the past bounds checks
were frequently dismissed as inefficient. But with a modern
optimizing compiler bounds checks typically add less than
10% to runtime, which means that an optimized program with
bounds checks will typically run faster than an unoptimized
one without bounds checks. However, C is rather special.
Unlike Pascal, Ada, Fortran or Cobol, it is almost
impossible to automatically insert bounds checks into a C
program without a large loss of efficiency (of the order of
3-4 times slower and bigger). This is because in C arrays
are almost immediately converted to pointers, and all the
compiler sees are pointers. And the compiler does not know
whether a pointer points to a single item or to an array
(and if it points to an array, it does not know what the
array bounds are).
In C++ one can use C arrays, with the trouble described
above, or one can use C++ vector classes, which allow
efficient bounds checking.
Concerning convenience, there is a large gain in
convenience, and also in safety/correctness, from using
automatic memory management. Usually this takes the form of
so-called garbage collection, and several languages offer
it. Old-time wisdom was that a garbage-collected language is
not good for writing an OS. But even this is debatable. For
example, VM370 is written in assembler, but it has a "memory
management" subsystem. I looked a bit at this subsystem
and AFAICS it is a garbage collector for various OS
data structures. So allocation is manual and there
are some manual deallocations, but there is also
garbage collection. In principle, making this more
automatic by using a garbage-collected language could
lead to easier programming and fewer errors.
And whatever difficulties garbage collection brings to
an OS are already there.
There are also other trends, using partial automation.
Here C++ has "smart pointers" and Rust introduced the
so-called "borrow checker". Rust promises that the compiler
can check the correctness of all deallocations. It will
take some time to see if this works as promised, but
even if it works only partially, it is better to have
partial checks than none at all.
Coming back to C, it is possible that there will be tools
which bring some of those developments to C. There is
historical precedent: in early C history there was a
program called "lint" that performed extra checks
(IIUC in the spirit of Pascal) on C programs. Classic
"lint" seems to be redundant in modern C, because the
standard is tighter now and compilers issue many
warnings for legal but suspicious programs. But there
is room for new tools. Time will tell if the C world
manages to adapt. AFAICS for some kinds of code C++ is
much more convenient than plain C. Currently programming
small micros seems to be a C stronghold. But C++ is a very
strong competitor there: one can have programs where the
object code is, say, 2k (so really small), C and C++
produce essentially the same size and speed, but C++
is more convenient to write. In fact, due to convenience
a C++ programmer is likely to make more efficiency
improvements in the program than a C programmer, so
in real life C++ may be more efficient.
Let me mention an extra thing related to OS-es. Namely,
C adopted its linking model from linkers used around 1970.
Currently, major OS-es use the C linking model, and
effectively C conventions are forced on other
languages. But it seems that one could have a better
linking model, more adapted to other languages. In the
traditional model symbols have one meaning, and the only
things that matter are the name and how it is related to
memory addresses. But in higher-level languages
symbols have types, and types could be passed to the
linker. So the linker could refuse to link symbols with
mismatched types or (for C++) could allow multiple
meanings, one for each type. That could simplify
linking from other languages to C++. OTOH a typed linker
could break some C programs. I am not sure if it would
break legal C programs (IIUC in the past there was a
belief that it would break some). A typed linker
would break an assumption present in GNU configure,
namely that it can link with wrong types to
determine the presence of functions in libraries.
--
Waldek Hebisch