Addendum in 1999: I've been learning more about Jini, and the natural comparison is with General Magic's Telescript. Telescript was supposedly necessary for operations like looking up phone numbers. Simpler passive queries have proven much more effective, and Telescript is no more.
John Gage and Bill Joy put on a practiced show of their view of computing.
The fundamental problem with the ideas they were presenting is that they were not
fundamental. Bill presented the Internet as a TCP/IP infrastructure rather than focusing
on the IP transport itself. I've written more about this in the IP Everywhere Initiative. It
is the IP infrastructure that is important: the ability to provide connectivity without biasing it towards specific applications.
There is, of course, the tendency to focus on a set of familiar appliances and
applications. The problem is that it isn't until we've used the technology that we really
understand the implications. The underlying mechanisms must support this exploration and
create a marketplace for solutions that benefit from leveraging the common mechanisms.
A traditional approach is to build a powerful set of elements or objects and use these,
in turn, to build applications. Hierarchical systems of objects tend to exhibit inflexibility since they inherit the assumptions of the objects below them without full freedom to reinvent. One
of the strengths of the COM (AKA OLE or ActiveX) object model is that it is
nonhierarchical: one can create new interfaces at will. In practice, COM applications tend to build on layers of objects and are thus dependent upon all of these objects behaving perfectly. It is this layering that is one of the bad ideas in computer science.
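To make the interface idea concrete, here is a minimal sketch in Java; the interface and class names are invented for illustration, and COM itself does this through its QueryInterface call:

    // Sketch only: the interface and class names are invented, not COM's own.
    interface Printable { void print(); }
    interface Faxable { void fax(); }

    // A component implements whatever interfaces it chooses; a new interface
    // can be added later without disturbing any inheritance hierarchy.
    class Device implements Printable, Faxable {
        public void print() { System.out.println("printing"); }
        public void fax() { System.out.println("faxing"); }
    }

    public class QueryDemo {
        public static void main(String[] args) {
            Object component = new Device();
            // Ask the component for a capability (QueryInterface in spirit)
            // rather than assuming its position in a class hierarchy.
            if (component instanceof Printable) {
                ((Printable) component).print();
            }
        }
    }

The client asks only for the capability it needs; nothing in the request depends on how the component is built underneath.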
Layering works well in the small and within single projects, but it makes interaction among independent applications problematic. The dependency chains act like chains of whispered messages: even if there are no obvious errors, slight semantic drifts accumulate.
More serious is the inscrutability of errors. The interfaces tend to hide complex interior
behavior.
Since the objects are procedural, they are opaque; this is characteristic of procedural implementations. It is much better to use descriptions that can be understood. Static HTML is a great example: browsers can recover very well from errors in HTML.
But with procedural approaches such as JavaScript, errors tend to be fatal. If the
JavaScript is just cute, one can recover by ignoring it. But if the key elements are
programmatic, one can do little but observe the strange behavior. And JavaScript programs typically fail because not only are they inscrutable to the environment in which they run, they don't understand their environment and so are likely to encounter unanticipated situations.
It is much better to provide a passive description that can be interpreted in context.
This doesn't mean one can never use procedural code. But procedural representations are
like apprenticeship: they can be used when neither the teacher nor the student (the application and its environment) understands the subject. But if there is a common
understanding of the subject, it is much more effective to simply describe the goals. If
there is a misunderstanding, it can be identified and dealt with.
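As a sketch of the difference, in Java with invented names: an interpreter of a passive description can skip what it doesn't understand, much as a browser skips unknown HTML tags, where a procedural program would simply fail.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A toy interpreter for a passive description; the keys are invented.
    public class DescriptionDemo {
        public static void main(String[] args) {
            Map<String, String> description = new LinkedHashMap<>();
            description.put("title", "Status Report");
            description.put("color", "blue");
            description.put("blink", "fast"); // not understood by this environment

            for (Map.Entry<String, String> entry : description.entrySet()) {
                switch (entry.getKey()) {
                    case "title": System.out.println("Title: " + entry.getValue()); break;
                    case "color": System.out.println("Color: " + entry.getValue()); break;
                    default: // ignore what we don't understand and keep going
                        System.out.println("(skipping unknown element: " + entry.getKey() + ")");
                }
            }
        }
    }

A misunderstanding shows up as a skipped element one can notice and deal with, not as a halted program.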
Approaches like Jini, which ships code around, or Bluetooth, which presumes to know the meaning of the objects it exchanges, are simply not fundamental. They are applications that must compete in the marketplace. If the focus is on those applications without providing simpler and more fundamental mechanisms, then they will fail or behave perversely.
While Bill and John can complain about the difficulties of configuring Windows and
dealing with drivers, their solution is no better and likely to be much worse because the
inscrutability is distributed over a network.
The presentation disparaged disk-based programming in favor of network-based
programming. But being network-dependent, one is simply at the mercy of too many elements
beyond one's control. So-called disk-based programming is really an approach that builds a
locally robust system which can act as one's agent when dealing with the network. This is
a much safer approach to being a network participant than accepting a lobotomy and being
reduced to total dependence on the benevolence of strangers.
And the problem was that the demo was done in a room without good connectivity to the
wireless network they were touting. There was an IP network connection to the demo laptop.
But each of the other devices had its own strange network, such as the camera that used the IEEE-1394 (FireWire) bus. The lack of a real demo made the case against their approach
very strong.
If we simply had devices cooperating on an IP (v6) network, the talk would have been pointless since
things would "just work", at least in theory. The talk would have had to focus
on working out various interactions rather than trying to explain what would happen if it
really were true that a collection of inscrutable objects would self-configure into a
meaningful system that could exchange the high-level objects that we have predefined.
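To illustrate what "just work" could mean, here is a sketch, not any real discovery protocol; the port and the description format are invented, and a device simply announces a passive, human-readable description of itself over IPv6:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Sketch only: announces a passive description as a UDP datagram to the
    // IPv6 link-local all-nodes multicast address. The port and description
    // format are made up; on some systems the address needs a zone id,
    // e.g. "ff02::1%eth0".
    public class Announce {
        public static void main(String[] args) throws Exception {
            String description = "type=camera; name=demo-cam; speaks=jpeg-over-http";
            byte[] payload = description.getBytes(StandardCharsets.UTF_8);
            InetAddress allNodes = InetAddress.getByName("ff02::1");
            try (DatagramSocket socket = new DatagramSocket()) {
                socket.send(new DatagramPacket(payload, payload.length, allNodes, 4446));
            }
        }
    }

Any listener that doesn't understand the description can simply ignore it; nothing else breaks.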
Instead of building fancier layers of Java objects, we must focus on an approach for
creating a marketplace of interfaces accessible in a shallow (versus a layered)
environment. This would be an environment where the network boundary is very visible,
since it presents a "trust" boundary. I use "trust" to include more
than security; it represents the likelihood that one's expectations will be met.