
Re: Self UI

Jecel writes:
> I keep wondering how you can have traditional editing capabilities in Self
> Artificial Reality.  Maybe I am missing the point entirely and you don't
> need editors, is that it?

Well, I agree that some sort of editor is probably still needed. The
paper "Experiencing Self Objects" describes the prototype Self UI, as
included in the release.  This prototype merely allows browsing and
inspecting objects, with no support for creating objects, adding slots,
or editing code.  Of course, a full UI must support these functions.
In an artificial reality model, however, actions such as object creation and
slot addition most likely would not be achieved by conventional text editing:
you would probably grab some prototype object, clone it, and add or remove
slots as necessary by dragging slots onto or off the object.  Editing code
(in methods) could conceivably also be done in a graphical way, though I
think it may almost always be easier to type the code conventionally.
Editing capabilities would be built into the faces of objects containing code.
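To make the clone-and-add-slots idea concrete, here is a small sketch in
JavaScript (whose delegation-based object model descends from Self's).  The
names (pointPrototype, printString) are illustrative, not taken from the Self
release; adding and deleting properties here stands in for dragging slots onto
or off an object's face.

```javascript
// A hypothetical "point" prototype; the slot names are assumptions.
const pointPrototype = {
  x: 0,
  y: 0,
  printString() { return `point(${this.x}, ${this.y})`; }
};

// "Cloning" a prototype: the new object delegates to the original,
// much as a Self clone finds shared slots through its parent.
const p = Object.create(pointPrototype);

// Adding a slot by assignment (the textual analogue of dragging
// a slot onto the object).
p.z = 5;

// Removing a slot again (dragging it off).
delete p.z;

console.log(p.printString());
```

Cloning and slot manipulation are ordinary object operations here, which is
why a graphical UI could plausibly offer them without a text editor; only the
code inside methods really calls for typing.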

Jecel also brings up the question of different "views" of objects in the UI.
The representation of objects in the UI currently is a very low-level one,
and it would be useful to have alternative representations that may be
determined by the kind of object.  For example, a collection object could
have a representation that was some sort of container holding the element
objects.  Since we seek to maintain a one-to-one correspondence between
Self objects and their representations in the UI, these high-level
representations would not be separate objects on the screen; rather, the
same object would, on command, mutate itself from the low-level rep to the
high-level rep.  Or, as Jecel suggests, it would turn to reveal another
face, which would be the high-level rep.
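One way to picture the mutate-in-place idea is a single on-screen object that
keeps its identity while switching representations.  The following JavaScript
sketch is entirely hypothetical (makeUiObject, toggleRep, and the rendering
strings are my own names, not part of any Self UI): one object, two faces.

```javascript
// Hedged sketch: one UI object whose identity is fixed, but whose
// representation mutates between a low-level "slots" view and a
// high-level "container" view.
function makeUiObject(selfObject) {
  return {
    rep: "low",                       // current representation
    toggleRep() {                     // mutate in place: same object, new face
      this.rep = this.rep === "low" ? "high" : "low";
    },
    render() {
      if (this.rep === "low") {
        // Low-level rep: just the raw slot names.
        return `slots: ${Object.keys(selfObject).join(", ")}`;
      }
      // High-level rep: e.g. a collection drawn as a container of elements.
      return `container[${Object.values(selfObject).join(" | ")}]`;
    }
  };
}

const view = makeUiObject({ first: 1, second: 2 });
console.log(view.render());   // low-level rep
view.toggleRep();
console.log(view.render());   // the same object, now showing its other face
```

Because toggleRep mutates the one object rather than spawning a second view,
the one-to-one correspondence between the Self object and its screen
representation is preserved.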

> When I design integrated circuits I like to open multiple coordinated
> windows on the same layout showing different parts at the same time.  This
> would be impossible in the last paragraph's model.

Being able to compare two views of the same object at the same time certainly
has advantages in some circumstances.  I suspect that there are ways to
achieve this while still maintaining the integrity of the object identity.
We believe the sense of identity of the Self object is crucial in our UI,
because the immediacy of the object helps dissolve some of the subtle barrier
that any interface necessarily introduces.