July 15, 1998

In Flight

I'm writing this while flying to California. The advantage is that I am forced into isolation from the myriad projects I keep finding. But is forced isolation better than learning how to manage time better? The price I pay is that I don't have the opportunity to pursue references or interests that require the now-assumed Internet connection. Living with a full-time cable modem connection means that the Internet is "just there," as it should be.

First, the interesting question of the value of enforced isolation. Why does it work? A closely related issue is why it is useful to get magazines on paper even though I only get a chance to read a small percentage of them. Some of the issues are purely transient. Publishing electronically is rapidly becoming a viable alternative to paper simply in terms of readability. There's also the form factor. My laptop (a Toshiba Libretto 100CT) is smaller than the book I brought along to read on the flight, and it is much easier to read than paper by the dim overhead light. But not all material is available electronically, and there is still a limited amount of space on current screens.

But the real issue is the perturbation that is forced. At home (a.k.a. my office) there is a long list of high-priority tasks, interruptions, and simply interesting stuff. This is compounded by having the freedom to pursue any number of projects without the confines of a traditional job.

It's similar to the Innovator's Dilemma. How does one avoid the "hill-climbing" problem of locally optimal decisions? The short answer is diversity. Perhaps, for now, being forced into the isolation of my cubbyhole on the plane serves the function. But as connectivity improves and flexibility increases, I (as well as others) will need to find it in ourselves to vary our experiences.

Traveling does afford me the opportunity to catch up on these columns. So this one will be relatively long, though the individual topics will still be only visited rather than pursued in full depth.


IP Everywhere

Wires, wires everywhere, and what a pain! It's not simply the mess but the arbitrary limitations, and the acceptance of the limitations of old analog technologies, though analog is not unique here. Of course, I do come with the solution of "IP Everywhere".

We do need to distinguish between control/data and power. In fact, to a large extent, we have a common power distribution system, though it's still messier than it has to be due to the propensity to use lots of long wires. Even worse are the systems with external power adapters, with their own variety of voltages and their tendency to take up lots of space near the outlet. Though I'm not an expert here, I would think that transformer technology has advanced to the point where we could share some common low-voltage wiring.

But my real focus is control and data. SCSI is a good example of the problem. When it was first designed it might have made sense, but we can now run arbitrarily high-speed Ethernets, especially over short distances. And SCSI does have major distance limitations. SCSI is basically a packet protocol. Why not just use IP, carrying SCSI packets over UDP? The SCSI protocol already handles such datagrams. We could then place the devices on a common wire without worrying about which device is on which system. A SCSI/IP disk drive can still be owned by a single system simply by associating its unique ID (Ethernet or IPv6 address) with a given host. Moving the device to a different system is then a matter of simply changing this association. Of course, I do assume there is also a key associated with authorization (see the discussion below). Imagine a single cable running past one's devices, without the thick SCSI cable.
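The SCSI-over-UDP idea can be sketched in a few lines, using present-day Python for concreteness. Everything here is an assumption made for illustration: the wire format (an 8-byte device ID followed by the raw command descriptor block), the particular device ID, and the ownership table are invented, not part of any real protocol.

```python
import struct

# Hypothetical wire format: an 8-byte device ID, then the raw SCSI
# command descriptor block (CDB), carried as one UDP payload.
def wrap(device_id, cdb):
    """Address a SCSI CDB to a device on the shared wire."""
    return struct.pack("!Q", device_id) + cdb

def unwrap(datagram):
    """Recover the device ID and CDB on the receiving side."""
    (device_id,) = struct.unpack("!Q", datagram[:8])
    return device_id, datagram[8:]

# Ownership is just an association table: moving a drive to a
# different host is a one-entry change, not a recabling job.
owners = {0x0050C2FFFE123456: "host-a"}

READ_10 = bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0])  # read one block at LBA 0
device_id, cdb = unwrap(wrap(0x0050C2FFFE123456, READ_10))
assert owners[device_id] == "host-a" and cdb == READ_10
```

In real use the datagrams would of course travel over UDP sockets; the point is only that an association table, not the cabling, determines which host owns the device.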

Yes, there is a cost associated with connecting to a high-speed Ethernet, but the volumes of a common interface give it a major advantage over the mechanically complex SCSI wiring. As an aside, this does force Windows to deal with devices that are not at fixed SCSI positions, but that is desperately needed anyway.

Yes, the speed will vary. For the highest speed, one can use a short dedicated network segment. For a scanner, just placing it on a 10Mbps network should be fine. A scanner can use a fancy protocol like JetSend or simply continue to use the current SCSI protocols between it and an associated PC.

This also applies to video wiring. I don't know where my analog signal is losing quality, and I need a different wire for each kind of connection: RG-6, RG-59, S-Video, RCA, DV-1394 (4- and 6-pin), etc. So why not just 1394? Because it is yet another special-purpose cabling. For local connections, the common IP wiring should work fine. 100Mbps is very cheap, and 1Gbps is coming soon. Or one can run IP over fiber. I do assume that video can be converted to MPEG-2, though not necessarily highly compressed MPEG-2.
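A quick back-of-the-envelope calculation shows why commodity Ethernet is plausible here. The rates are rough, period-typical figures I'm assuming, not measurements: standard-definition MPEG-2 at roughly 4 to 8 Mbps, versus DV's fixed 25 Mbps.

```python
# Rough, assumed figures: SD MPEG-2 at ~6 Mbps (mid-range), DV at 25 Mbps.
mpeg2_mbps = 6
dv_mbps = 25
link_mbps = 100  # commodity Fast Ethernet

# How many simultaneous streams fit on one shared segment?
print(link_mbps // mpeg2_mbps)  # 16 MPEG-2 streams
print(link_mbps // dv_mbps)     # 4 DV streams
```

Even a single shared 100Mbps segment has headroom for many video streams, before counting a dedicated segment or gigabit links.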

IP allows all these choices to be mixed and matched. And it opens up the opportunity for many new devices.

USB tries to deliver on some of this, but it no longer offers a compelling advantage over other protocols. There is value in a 50¢ connector, but there are other ways to achieve this economy. For now, in practice, a USB serial adapter is $350 because of the limited market size.

Imagine just plugging in a scanner and using it anywhere. Imagine allowing a disk drive to be adopted by a backup system when the primary crashes (a casual cluster).

The benefits are very compelling. It's more a matter of just doing it than of designing it, given the growing availability of Ethernet and IP technology.

Access and Identity

Much of the attention on access control is focused on authenticating identity, but that's really a side issue. Just as the separation of TCP from IP was a key insight in creating the Internet, it is necessary to separate authentication and, more generally, the establishment of identity from the mechanics of defining access.

The current system is already broken. Windows/NT has a very powerful access control system that is useless because there is no effective way for a user to actually manage the access. And when one has specified access, it is in terms of the NT Domain's definition of the user. But this definition is associated with a single domain. In a noncorporate setting, managing a domain is not feasible. And then the access specifications are lost when one boots a different instance of a system or moves the data to another machine. Even in a corporate environment, the movement of laptops from one domain to another makes the whole model problematic.

The other problem is the overreliance on identity. In a system that is running many tasks simultaneously, the assumption that one's identity is established for all operations at login time means that visiting a system just to read one's mail requires a complete restart of the environment.

The answer is naively simple -- use keys (long integers) as intermediaries between the various notions of roles and identities and the resources that are being protected. One can give the keys out directly but, ideally, there would be intermediate key cabinets that provide for revocability and management.
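A minimal sketch of keys as intermediaries, with all the names (KeyCabinet, issue, check, revoke) invented for illustration. The point is that the resource checks only the key; who holds it, and why, is a separate question handled elsewhere.

```python
import secrets

class KeyCabinet:
    """Maps long random keys to the resources they unlock, so a key
    can be handed out, checked, and later revoked without any
    reference to the holder's identity."""

    def __init__(self):
        self._grants = {}  # key -> resource name

    def issue(self, resource):
        key = secrets.randbits(128)  # the "long integer"
        self._grants[key] = resource
        return key

    def check(self, key, resource):
        return self._grants.get(key) == resource

    def revoke(self, key):
        self._grants.pop(key, None)

cabinet = KeyCabinet()
key = cabinet.issue("mail folder")
assert cabinet.check(key, "mail folder")      # the key, not the person, grants access
cabinet.revoke(key)
assert not cabinet.check(key, "mail folder")  # revocation is a cabinet operation
```

Separating the two concerns means the same key can stand behind a role, a person, or a program, and the cabinet, rather than every resource, carries the management burden.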

This is obviously a deep topic that I'm only starting to address. My real concern is that the notion that authenticating an individual is the key to solving access problems is sadly naive and will only compound the problems rather than provide real solutions. Unless we allow for the complexities and ambiguities of normal interactions, computers will only reflect a naive and frustrating model of the real world.