October 9, 1998

Internet World

While in New York I dropped by to see Internet World. The attendees were more interesting than the show itself. Since the Internet is such a pervasive infrastructure, it is not clear what the boundaries are, and the lack of a clear focus shouldn't have been a surprise.

Bill Joy and John Gage on Computing

Addendum in 1999: I've been learning more about Jini and the simple comparison is with General Magic's Telescript. Telescript was supposedly necessary to do operations like looking up phone numbers. Simpler passive queries have proven much more effective and Telescript is no more.

John Gage and Bill Joy put on a practiced show of their view of computing.

The fundamental problem with the ideas they were presenting is that they were not fundamental. Bill presented the Internet as a TCP/IP infrastructure rather than focusing on the IP transport itself. I've written more about this in the IP Everywhere Initiative. It is the IP infrastructure that is important – the ability to provide connectivity without biasing it towards specific applications.

There is, of course, the tendency to focus on a set of familiar appliances and applications. The problem is that it isn't until we've used the technology that we really understand the implications. The underlying mechanisms must support this exploration and create a marketplace for solutions that benefit from leveraging the common mechanisms.

A traditional approach is to build a powerful set of elements or objects and use these, in turn, to build applications. Hierarchical systems of objects tend to be inflexible since new objects inherit old assumptions without full freedom to reinvent. One of the strengths of the COM (AKA OLE or ActiveX) object model is that it is nonhierarchical. One can create new interfaces at will. In practice, though, COM applications tend to build on layers of objects and are thus dependent upon all of these objects behaving perfectly. This layering is one of the bad ideas in computer science.
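
The nonhierarchical idea can be sketched in a few lines. This is a hypothetical Python analogue of COM's QueryInterface style, not the actual COM API: an object exposes independent interfaces, and clients negotiate for the one they need rather than relying on a fixed inheritance tree.

```python
# Hypothetical sketch of COM-style interface negotiation (not real COM).
# The object exposes unrelated interfaces side by side; new ones can be
# added at will without a common ancestor.

class Printer:
    def query_interface(self, name):
        # Return the requested interface, or None if unsupported.
        return {"IPrint": self.print_page, "IStatus": self.status}.get(name)

    def print_page(self, text):
        return f"printed: {text}"

    def status(self):
        return "ready"

obj = Printer()
iface = obj.query_interface("IPrint")
result = iface("hello") if iface is not None else None   # negotiate, don't assume
```

The client asks before calling; an unsupported interface (say, "IFax") simply returns None rather than breaking a dependency chain.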

It works well in the small and within single projects but it makes the interaction among independent applications problematic. The dependency chains act like whispering chains. Even if there were no obvious errors, slight semantic drifts accumulate. More serious is the inscrutability of errors. The interfaces tend to hide complex interior behavior.

Since the objects are procedural, they are opaque. This is a characteristic of procedural implementations. It is much better to use descriptions that can be understood. Nondynamic HTML is a great example: browsers can recover very well from errors in HTML. But with procedural approaches such as JavaScript, errors tend to be fatal. If the JavaScript is just cute, one can recover by ignoring it. But if the key elements are programmatic, one can do little but observe the strange behavior. JavaScript programs typically fail because not only are they inscrutable to the environment in which they run, they don't understand their environment and so are likely to encounter unanticipated situations.

It is much better to provide a passive description that can be interpreted in context. This doesn't mean one can never use procedural code. But procedural representations are like apprenticeship. They can be used when neither the teacher nor the student (application and environment) understand the subject. But if there is a common understanding of the subject, it is much more effective to simply describe the goals. If there is a misunderstanding, it can be identified and dealt with.
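
The contrast can be made concrete with an illustrative Python sketch (the element names are hypothetical): a passive description is interpreted in context, and the interpreter can simply skip what it doesn't recognize, much as a browser recovers from unfamiliar HTML.

```python
# A passive description: data, not code. The interpreter decides what
# each element means in its own context.
description = [
    ("heading", "Welcome"),
    ("blink",   "Sale!"),      # element unknown to this interpreter
    ("text",    "Hello"),
]

def render(items):
    out = []
    for kind, value in items:
        if kind == "heading":
            out.append(value.upper())
        elif kind == "text":
            out.append(value)
        # Unrecognized kinds are ignored rather than being fatal --
        # the description degrades gracefully.
    return out

rendered = render(description)
```

A procedural representation, by contrast, can only be run and observed; the environment has no way to skip the parts it doesn't understand.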

Approaches like Jini, which ships code around, or Bluetooth, which builds knowledge of the objects it exchanges into the protocol itself, are simply not fundamental. They are applications that must compete in the marketplace. If the focus is on those applications without providing simpler and more fundamental mechanisms, then they will fail or behave perversely.

While Bill and John can complain about the difficulties of configuring Windows and dealing with drivers, their solution is no better and likely to be much worse because the inscrutability is distributed over a network.

The presentation disparaged disk-based programming in favor of network-based programming. But being network-dependent, one is simply at the mercy of too many elements beyond one's control. So-called disk-based programming is really an approach that builds a locally robust system which can act as one's agent when dealing with the network. This is a much safer approach to being a network participant than accepting a lobotomy and being reduced to total dependence on the benevolence of strangers.

An added problem was that the demo was done in a room without good connectivity to the wireless network they were touting. There was an IP network connection to the demo laptop, but each of the other devices had its own strange network, such as the camera that used the IEEE-1394 (FireWire) bus. The lack of a real demo made the case against their approach very strong.

If we simply had devices cooperating on an IP (V6) network, the talk would have been pointless since things would "just work", at least in theory. The talk would have had to focus on working out various interactions rather than trying to explain what would happen if it really were true that a collection of inscrutable objects would self-configure into a meaningful system that could exchange the high-level objects that we have predefined.

Instead of building fancier layers of Java objects, we must focus on an approach for creating a marketplace of interfaces accessible in a shallow (versus a layered) environment. This would be an environment where the network boundary is very visible, since it presents a "trust" boundary. I use "trust" to include more than security; it represents the likelihood that one's expectations will be met.

Also at the Main Show...

Very short statements.

  • Biz Travel. Spoke to George Roukas (formerly at American Express). They want to provide services to the frequent-flying business traveler, which might make the site more useful than the consumer sites. I'll need to give it a try.
  • RCN. Claims to have much of New York wired. Still waiting for them to do cable modems...
  • Elastic Networks. This is the Nortel division that provides wiring to hotel rooms and apartments using existing phone wires. I've seen them before, but in this environment their focus on apartment wiring, when paired with cable modems et al., becomes interesting.
  • Bell-Atlantic. Pushing their DSL (called Infospeed or something obscure), but their rates are triple those of cable modems. Silly.

Internet Showcase Preview

In the evening there was an Upside (David Coursey) and Microsoft-sponsored event. It was a chance for smaller companies to show their wares. Some of the offerings were:

  • Internet Naming System. Provides for dynamically updating DNS entries. Useful for transient users and others without stable addresses.
  • WRQ has a program, @Guard, for providing users with tools for IP security. Their current product is too technically demanding, but there is the potential for a more effective product.
  • Arriba is working with Kodak to provide effective management of photos.
  • Sprint was there. I talked to the representative about their claims for replacing the Internet with point-to-point connections, which completely miss the point of the Internet. On the other hand, the ATT keynote on Thursday emphasized a commitment to IP.

Probably more to write about but it's getting late...


V6: I mention version 6 of the Internet Protocol since the current version (V4) doesn't have enough addresses to support a large number of devices.
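
The footnote's point is easy to make concrete with back-of-the-envelope arithmetic comparing the two address spaces:

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
v4_addresses = 2 ** 32     # 4,294,967,296 -- about 4.3 billion total
v6_addresses = 2 ** 128    # about 3.4e38
ratio = v6_addresses // v4_addresses   # V6 has 2**96 addresses per V4 address
```

Four billion V4 addresses cannot give every device its own address; the V6 space is, for practical purposes, inexhaustible.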