20090430

On alternative interfaces

Reading William Gibson's Virtual Light got me thinking about the interfaces we use to work with computers and other devices and media.
Neal Stephenson's In the Beginning... Was the Command Line (the link goes to the original homepage for the essay) is a good place to start any discussion of user interfaces; it covers the basics of CLI versus GUI (and has some nice material about the OS/software industry and the marketing of its products). More importantly for the moment, he makes an excellent point about the state of 'physical' interfaces: the keyboard, mouse, display, etc.
Each of these screens is called, in Unix-speak, a tty, which is an abbreviation for teletype. So when I use my Linux system in this way I am going right back to that small room at Ames High School where I first wrote code twenty-five years ago, except that a tty is quieter and faster than a teletype, and capable of running vastly superior software, such as emacs or the GNU development tools.

This follows a bit after his reminiscences about his experiences with computers in high school, using a teletype to batch-process simple programs on a remote mainframe at a state college.
Similarly, we are still using keyboards, almost all of which (excepting things like Dvorak layouts, laptops and 'ergonomic' keyboards) are modeled on keyboards developed by IBM in the 70's and early 80's. Our monitors and displays are LCDs instead of CRTs, but they haven't changed significantly in the way they are used, or in how they interact with the user, since the days when home computers like the Amiga were hooked up to TV sets. For navigating GUIs we usually employ mice, which might have cutting-edge laser trackers instead of small rubberized balls, plus scroll wheels and a couple of extra (but generally useless) buttons, but in essence they are the same devices which Apple popularised with its first Mac OS. Just about the only real innovation in input devices in the past several decades has been stylus- and touchscreen-based input, which can be used for GUI navigation (thus replacing mice) and, on some devices (mostly PDAs and newer smartphones), for text input (although generally via an on-screen keyboard instead of a physical one).

Future interfaces
First I would like to look at display and output devices, as I think these have the most potential for change in the short term.
Virtual Light introduces the ideas of "telepresence" rigs and "virtual light" glasses.

Telepresence rigs already have something roughly equivalent in existence today. Personal video displays have been popping up from various ventures, or occasionally as concept items from some of the larger electronics firms. These displays are similar to the VR helmets used for some video games in the 80's and 90's, and essentially consist of a pair of very small LCD screens mounted in place of lenses on a pair of large glasses or a visor. Combined with an accelerometer or other form of motion sensor/tracker, this sort of system could become the core of a decent VR rig (to be honest, though, that sort of thing is primarily of use to LEOs (Law Enforcement Organizations) and military organizations as a training tool, and it would still be much more expensive than the blanks or live-fire training exercises in use now).
The main reason these things tend not to catch on is that they are primarily marketed as displays for iPods and similar portable media devices (PMDs), which just doesn't mesh well with the general use pattern of PMDs. You can't really wear an audio/video headset to watch a movie on your iPod while you're driving (not that you should be watching videos while driving anyway), and good luck doing that on the subway.

Virtual light glasses, on the other hand, should they ever come into existence, might actually be of use. The first obvious use would be the same as the original purpose claimed in the book: allowing blind people (provided they have working optic nerves) to see. Another would be similar to the purpose Warbaby puts them to, as a sort of HUD displaying relevant information about the wearer's surroundings. Again, such a use is particularly well suited to military and law enforcement purposes, but it has some civilian applications as well (imagine them as a replacement for tourist guides, as a means of displaying news, or as a display for portable computers). The major issue with such a system, though, is information overload, particularly while moving.

Future Input devices
3D and contact-less mice have been tried before, but they were largely answers to questions no one was asking. If our output is still 2-dimensional, why do we need to interact with it using a 3-dimensional input system? Contact-less mice (using an accelerometer or similar) have largely failed because, like personal video goggles, they would be most useful with mobile devices, yet there is almost never a practical opportunity to use them that way (admittedly the Nintendo Wii's remotes are of this type and seem to be popular, but I think this just demonstrates the importance of my next point).
Why have these devices largely failed to enter the mainstream? Aside from the issue of limited usability, our user interfaces aren't designed to work with them. Using one of these devices with the current generation of user interfaces is like trying to use a joystick to control a rowboat: too many unused capabilities in the input device and not enough in the controlled device. Unless your UI and your input devices are designed with each other in mind, they just won't work. Users will simply become frustrated by the limitations imposed on them (how many people still use mice without scroll wheels or their equivalent?), and customers quickly tire of paying a premium for features and capabilities that they cannot use (e.g. programmable buttons that require special proprietary software to do anything).
