
Where to use SELF - your opinion



Very well put.  I would put one point slightly differently:

>   4. typechecking doesn't necessarily find all bugs,
>      so testing will always be necessary

I would say "testing and other reliability-enhancing measures" will always
be necessary.  Other such measures might include formal correctness proofs
(manual and/or mechanical), code walkthroughs, static checking of things
other than types (e.g., Hermes' method for detecting uninitialized data),
mandated restrictions on coding style, etc.

> We usually
> don't need efficiency badly enough to write in assembler,
> after all. Perhaps some day we won't need efficiency
> badly enough to write in C++ or C.

If it weren't for the fact that a good C compiler can come within much less
than a factor of 2 of assembler in performance, we'd still be writing in
assembler.  For *system* programming (as opposed to *application*
programming -- and the dividing line admittedly is fuzzy), I think an
argument like this will hold indefinitely.  C is good enough that for a
given level of machine performance, there will be commercially and
technically valuable things that can and will be done in C that simply
cannot be done in a language that runs 5 times slower, because humans have
relatively fixed speed requirements.

I don't rule out the possibility that some much better language will come
along, with an implementation that still achieves that factor of
2-or-whatever.  When it does, people will (eventually) switch to it.  But
it will have to be *much* better, because C is very solidly entrenched in
its ecological niche.

> but the
> cost of encountering a bug was so low, I soon got over my
> psychological addiction to the security of typechecking.

The cost of encountering a bug in a research environment where fewer than a
dozen people (and often no one else) are affected is, indeed, low.
But the cost of a bug in delivered commercial software is astronomical,
because of support load, possible patch distribution, and liability.  I
suggest you talk to someone who has had experience with this.  Alberto
Savoia can probably either give you the figures outright, or point you to
someone who can.  My model of software is the commercial one -- i.e., I'm
interested in developing software that many people will pay for, use, and
rely on.

I think the word "addiction" reveals a pretty strong bias.  See below on
the value of declarations for purposes other than checking.

> I don't want to have to push a lot of type information around every time I
> make a design change, as I did in Mesa.

I agree entirely.  I think our tools for evolving designs are pretty
pitiful.  (I think that's true in Smalltalk and Self too, but the mechanics
of the evolution are simpler than in C++.)

> I'd like to see a single memory-safe language with a flexible,
> discretionary type system. During exploratory development, type errors
> could be discovered only at run time. As parts of the system become
> mature and stable, the programmer could add type annotations to help the
> system find errors and to make the program more understandable to the
> reader. Type errors would be reported but could be ignored if the
> programmer chose to run a partially correct program.

I think this reveals another difference in our opinions.  I value type
declarations for their checking properties, but I also think they provide
an extremely valuable aid to readability and to the design process itself.
If I had better tools for handling the mechanics of design changes, I would
use type declarations from the beginning, because they provide a check on
the consistency of the design, not just the code.  They are like the
skeleton of a building, on which the plumbing, wiring, walls, ceilings,
etc. then get hung.  When I design new code, I usually design the types
first; if I find myself unable to write the type signature for a routine,
or find myself writing type casts, I consider it a warning signal that the
design may need revisiting.
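
To make that concrete, here is a tiny sketch of what I mean (the names are
invented purely for illustration, and C++ is just a convenient notation):

    // Sketch only: the types and signatures are written first, and they
    // pin down the design the way a skeleton pins down a building; the
    // bodies get hung on them later.

    struct Money   { long cents; };          // no bare longs floating around
    struct Account { Money balance; };

    Money balanceOf(const Account& a) { return a.balance; }

    void transfer(Account& from, Account& to, Money amount)
    {
        from.balance.cents -= amount.cents;
        to.balance.cents   += amount.cents;
    }

    // If I caught myself writing
    //     transfer(from, to, *(Money*)&someLong);
    // the cast would be the warning signal: the design, not just the
    // code, probably needs revisiting.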

> I've had good luck writing high-performance and real-time code
> in Self and Smalltalk. The time-critical low-level code is carefully
> written to pre-allocate all the storage it needs. Since it never
> allocates anything, this code never triggers garbage collection.
> I think it's better to do things this way than to open the door
> to the possibility of dangling references by having the programmer
> explicitly allocate and free storage.

Do you have a static checking tool that can tell you definitely that "this
subsystem of the program doesn't do any dynamic allocation"?  Admittedly,
the amount of code involved is probably small enough that hand-checking is
not too onerous.  Remember also that some languages make it very difficult
to determine whether allocation is being done or not (C++ constructors are
the obvious case).  Also, remember that whether allocation happens may be a
property of the compiler.  For example,
you can't assume that a block won't require dynamic allocation.  For that
matter, in a scavenging system, you can't assume that an assignment won't
require dynamic allocation (to expand the remembered set).
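
To show what I mean about constructors, here is an invented example (the
class is made up, but the pattern is the common one):

    // The call site gives no hint that memory is being allocated,
    // because the allocation is buried inside the constructor.

    class Buffer {
        char* data;
        int   size;
    public:
        Buffer(int n) : size(n) { data = new char[n]; }   // hidden heap use
        ~Buffer()               { delete [] data; }
        char& operator[](int i) { return data[i]; }
    };

    void timeCritical()
    {
        Buffer scratch(1024);   // looks like a plain local variable,
                                // but it allocates from the heap
        scratch[0] = 'x';
    }

Nothing at the call site says "allocation"; you have to read (or trust) the
constructor to know.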

I agree that if you have a language and system in which you can relatively
easily ensure statically (there's that nasty word again :-)) that
particular code won't do allocation, that's much better than running the
risk of manual allocation bugs.
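
For completeness, here is a rough sketch of the pre-allocation discipline
you describe, again with invented names and in C++ notation only because it
is concrete: everything is grabbed up front, and the time-critical path only
recycles entries from the fixed pool, so it never touches the allocator (or
the collector) itself.

    const int POOL_SIZE = 64;

    struct Packet { char payload[256]; };

    static Packet  pool[POOL_SIZE];       // storage obtained once, statically
    static Packet* freeList[POOL_SIZE];
    static int     freeCount = 0;

    void setup()                          // run before the time-critical phase
    {
        for (int i = 0; i < POOL_SIZE; i++)
            freeList[freeCount++] = &pool[i];
    }

    Packet* grab()                        // no 'new' anywhere on this path
    {
        return freeCount > 0 ? freeList[--freeCount] : 0;
    }

    void release(Packet* p)               // hand the entry back to the pool
    {
        freeList[freeCount++] = p;
    }

The hard part, as I said, is convincing yourself statically that nothing on
the time-critical path strays outside code like this.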

This is an interesting discussion.  Let's keep it up.