Tools for Thought
by Howard Rheingold

The idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry or orthodox computer science, nor even of homebrew computerists; their work was rooted in older, equally eccentric, equally visionary, work. You can't really guess where mind-amplifying technology is going unless you understand where it came from.
- HLR
Chapter Seven:
Machines to Think With

In the spring of 1957, while he continued to carry out the duties of an MIT researcher and professor, Dr. J.C.R. Licklider kept a record of every task he performed during his working day. He didn't know it then, but that unofficial experiment prepared the way for the invention of interactive computing--the technology that bridged yesteryear's number crunchers and tomorrow's mind amplifiers.
Licklider's research specialty was psychoacoustics. During World War II, he had explored ways electronics could be applied to understanding human communications. Specifically, he wanted to learn how the human ear and brain are able to convert atmospheric vibrations into the perception of distinct sounds. After the war, MIT was the center of a number of different attempts to use electronic mechanisms to model parts of the nervous system--a movement in biology and psychology as well as engineering that was inspired by the work of Wiener and others in the interdisciplinary field of cybernetics. Licklider was one of the researchers attracted to this paradigm, not strictly out of the desire to build a new kind of machine, but out of the need for new ways to simulate the activities of the human brain. This need, inspired by cybernetics, extended simultaneously into engineering and physiology. Computers were the last thing on Licklider's mind--until his theoretical models of human perceptual mechanisms got out of hand.

By the late 1950s, Licklider was trying to build mathematical and electronic models of the mechanisms the brain uses to process the perception of sounds. Part of the excitement generated during the early days of cybernetic research came from the prospect of studying mechanical models of living organisms to help create theoretical models of the way those organisms function, and vice versa. Licklider thought he might be onto a good idea with an intricate neural model of pitch perception, but he quickly learned, to his dismay, that his mathematical model had grown too complex to work out by hand in a reasonable length of time, even using the analog computers that were then available. And until the mathematical model could be worked out, there was no hope of building a mechanical model of pitch perception.
The idea of building a mathematical or electronic model was meant to simplify the task of understanding the complexities of the brain, like plotting a graph to see the key relationships in a collection of data. But the models themselves now began to grow unmanageably complex. Like Mauchly with his meteorological data, twenty years before, Licklider found he was spending more and more of his time dealing with the calculations he needed to do to create his models, which left less time for what he considered to be his primary occupation--thinking about what all that information meant. Beneath those numbers and graphs was his real objective--the theoretical underpinnings of human communication.
Although he was primarily interested in how the brain processes auditory information, he felt that he was spending most of his time putting things into files or taking them out, as well as managing the increasing amounts of numerical data he needed to construct the models he had in mind. Out of curiosity, he wondered if any of his colleagues had looked into the way scientific researchers spent their time.
When he couldn't find any time-and-motion studies of information-shuffling researchers like himself, Licklider decided to keep track of his own activities as he went through his normal working day. "Although I was aware of the inadequacy of the sampling," he later wrote, with the modesty that he is known for among his colleagues, "I served as my own subject."
It didn't take long to discover that his main occupation, even when he wasn't keeping records of his behavior, was centered on keeping records of everything else. Astonishing as it must have seemed to any self-respecting scientist like himself, his observations revealed that about 85% of his "thinking" time was actually spent "getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it."
Like almost any other experimentalist, he couldn't begin to make sense of psychoacoustic data until he could see it translated into the form of graphs. Plotting the graphs took days. Even teaching his assistants how to plot graphs took hours. As soon as the graphs were finished and he was able to look at them, the relationships he was seeking became immediately obvious. It was grossly inefficient and tedious to spend days plotting graphs that took seconds to interpret.
While he had always thought of interpretation and evaluation as his most important function as a scientist, Licklider's analysis of his research behavior showed that most of his tasks were clerical or mechanical: "searching, calculating, plotting, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt or not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capacity."
The conclusion he reached, while it doesn't sound so radical today, was shocking when it occurred to him in 1957. A less modest man might not have been able to bring himself to face the conclusion: Licklider decided, on the basis of his informal self-study, that most of the tasks that take up the time of any technical thinker would be performed more effectively by machines.
This was a thought that was occurring to one or two other people at about the same time--notably Doug Engelbart, out in California. But because of his association with certain military-sponsored research projects at MIT in the 1950s, there was an important difference between Licklider and the others who dreamed of converting computers into some kind of mind-amplifying tool. This crucial difference was the fact that Licklider had reached his conclusion not long before circumstances put him at the center of power in the one institution capable of sponsoring the creation of an entire new technology.
At that point in the history of computer technology--a field in which Licklider had been only tangentially involved until then--no respectable computer scientist would dare suggest that computer technology ought to be totally revamped so that scientists could use these machines to help keep track of data and build theoretical models of the phenomena they were studying. To those who were wild enough to make such a suggestion--especially the young MIT computer mavericks who were founding the field of artificial intelligence around that time--the idea might have seemed too obvious and too trivial to pursue. In any case, the AI founders were more interested in replacing the scientist than the scientist's file clerk. Licklider, however, was neither a respectable computer scientist nor a computer maverick, but a psychologist with some expertise in electronics. And like any other competent investigator, he followed where the data led him.
In the late 1950s, Licklider had no real expertise in digital computer design, and although he knew that only a computer could give him what he needed, he didn't think that the kinds of computers then available, and the kinds of things they did, were suitable for building a sort of "electronic file clerk." He knew that data processing wasn't what he wanted.
If you were the Census Bureau, overflowing with information on a couple of hundred million people, and for some crazy reason you wanted to find out how many divorced people over sixty lived on farms in the Sun Belt, you could use a UNIVAC to perform the sorting and calculating needed to tell you what you wanted to know. That was data processing. If you had a payroll for 10,000 employees to calculate every other Friday and needed to transform time sheets into entries in a ledger and print up all the checks--data processing power was just what you could buy from your local IBM representative.
Data processing involved certain constraints on what could be done with computers, and constraints on how one went about doing these things. Payrolls, mathematical calculations, and census data were the proper kinds of tasks. An arcane process known as "batch processing" was the proper way to do these things. If you had a problem to solve, you had to encode your program and the data that the program was meant to operate upon, usually in one of the two major computer languages--FORTRAN and COBOL. The encoded program and data were converted into boxes full of what had become universally known as "IBM cards"--the kind you weren't supposed to spindle, fold, or mutilate. The cards were delivered to a systems administrator at the campus "computer center" or the corporate "data processing center." This specialist was the only one allowed to submit the program to the machine, and the person from whom you would retrieve your printout hours or days later.
But if you wanted to plot ten thousand points on a line, or turn a list of numbers into a graphic model of airflow patterns over an airplane wing, you wouldn't want data processing or batch processing. You would want modeling--an exotic new use for computers that the aircraft designers were pioneering. All Licklider sought, at first, was a mechanical servant to take care of the clerical and calculating work that accompanied model building. Not long after, however, he began to wonder if computers could help formulate models as well as calculate them.
When he attained tenure, later that same year, Licklider decided to join a consulting firm near Cambridge named Bolt, Beranek & Newman. They offered him an opportunity to pursue his psychoacoustic research--and a chance to learn about digital computers.
"BB&N had the first machine that Digital Equipment Company made, the PDP-1," Licklider recalled in 1983. The quarter-million-dollar machine was the first of a continuing line of what came to be called, in the style of the midsixties, "minicomputers." Instead of costing millions of dollars and occupying most of a room, these new, smaller, powerful computers only cost hundreds of thousands of dollars, and took up about the same amount of space as a couple of refrigerators. But they still required experts to operate them. Licklider therefore hired a research assistant, a college dropout who was knowledgeable about computers, an exceptionally capable young fellow by the name of Ed Fredkin, who was later to become a force in artificial intelligence research--the first of many exceptionally capable young fellows who would be drawn to Licklider's crusade to build a new kind of computer and create a new style of computing.
Fredkin and others at BB&N had the PDP-1 set up so that Licklider could directly interact with it. Instead of programming via boxes of punched cards over a period of days, it became possible to feed the programs and data to the machine via a high-speed paper tape; it was also possible to change the paper tape input while the program was running. The operator could interact with the machine for the first time. (The possibility of this kind of interaction was duly noted by a few other people who turned out to be influential figures in computer history. A couple of other young computerists at MIT, John McCarthy and Marvin Minsky, were also using a PDP-1 in ways computers weren't usually used.)
The PDP-1 was primitive in comparison with today's computers, but it was a breakthrough in 1960. Here was the model builder that Licklider had first envisioned. This fast, inexpensive, interactive computer was beginning to resemble the kind of device he had dreamed about back in his psychoacoustic lab at MIT, when he first realized how his ability to theorize always seemed constrained by the effort it took to draw graphs from data.
"I guess you could say I had a kind of religious conversion," Licklider admits, remembering how it felt, a quarter of a century ago, to get his hands on his first interactive computer. As he had suspected, it was indeed possible to use computers to help build models from experimental data and to make sense of any complicated collection of information.
Then he learned that although a computer was the right kind of machine for building his models, even the PDP-1 was hopelessly crude for the phenomena he wanted to study. Nature was far too complicated for 1960-style computers. He needed more memory components and faster processing of large numbers of calculations. As he began to think about the respective strengths and deficiencies of computers and brains, it occurred to him that what he was seeking was an alternative to the human-computer relationship as it then existed.
Since the summer of 1956, when they met at Dartmouth to define the field, several young computer and communication scientists Licklider knew from MIT had been talking about a vaguely distant future when machines would surpass human intelligence. Licklider was more concerned with the shorter-term potential of computer-human relations. Even at the beginning, he realized that technical thinkers of every kind were starting to run up against the problems he had started noticing in 1957. Let the AI fellows worry about ways to build chess-playing or language-translating machines. What he and a lot of other people needed was an intelligent assistant.
Although he was convinced by his "religious conversion to interactive computing"--a phrase that has been used over and over again by those who participated in the events that followed--Licklider still knew too little about the economics of computer technology to see how it might become possible to actually construct an intelligent laboratory assistant. Although he didn't know how or when computers would become powerful enough and cheap enough to serve as "thinking tools," he began to realize that the general-purpose computer, if it was set up in such a way that humans could interact with it directly, could evolve into something entirely different from the data processors and number crunchers of the 1950s. Although the possibility of creating a personal tool still seemed economically infeasible, the idea of modernizing a community-based resource, like a library, began to appeal to him. He got fired up about the idea Vannevar Bush had mentioned in 1945, the concept of a new kind of library to fit the world's new knowledge system.
"The PDP-1 opened me up to ideas about how people and machines like this might operate in the future," Licklider recalled in 1983, "but I never dreamed at first that it would ever become economically feasible to give everybody their own computer." It did occur to him that these new computers were excellent candidates for the super-mechanized libraries that Vannevar Bush had prophesied. In 1959, he wrote a book entitled Libraries of the Future, describing how a computer-based system might create a new kind of "thinking center."
The computerized library as he first described it in his book did not involve anything as extravagant as giving an entire computer to every person who used it. Instead he described a setup, the technical details of which he left to the future, by which different humans could use remote extensions of a central computer, all at the same time.
After he wrote the book, during the exhilarating acceleration of research that began in the post-Sputnik era, Licklider discovered what he and others who were close to developments in electronics came to call "the rule of two": continuing miniaturization of its most important components means that the cost effectiveness of computer hardware doubles every two years. It was true in 1950 and it held true in 1960, and beyond even the wildest imaginings of the transistor revolutionaries, it was still true in 1980. A small library of books and articles has been written about the ways this phenomenon has fueled the electronics revolution of the past three decades. It looks like it will continue to operate until at least 1990, when personally affordable computers will be millions of times more powerful than ENIAC.
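The arithmetic behind the rule is simple compounding, and it is easy to check against the figures cited here. The short calculation below is purely illustrative--a sketch of the rule of two, not anything Licklider himself published:

    # Illustrative sketch of the "rule of two": if cost effectiveness
    # doubles every two years, it grows by a factor of 2**(n/2) in n years.
    def rule_of_two(years: float) -> float:
        """Growth factor in hardware cost effectiveness after `years` years."""
        return 2 ** (years / 2)

    # Fifteen years of doubling gives roughly a 180-fold gain--more than
    # the hundredfold improvement over the PDP-1 mentioned below.
    print(f"15 years: {rule_of_two(15):,.0f}x")
    # Forty years (1950-1990) gives about a millionfold gain--the
    # "millions of times more powerful" than ENIAC figure.
    print(f"40 years: {rule_of_two(40):,.0f}x")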
Licklider then started to wonder about the possibility of devising something far more revolutionary than even a computerized library. When it began to dawn on him that this relentlessly exponential rate of growth would make computers over a hundred times as powerful as the PDP-1 at one tenth the cost within fifteen years, Licklider began to think about a system that included both the electronic powers of the computer and the cortical powers of the human operator. The crude interaction between the operator and the PDP-1 might be just the beginning of a powerful new kind of human-computer partnership.
A new kind of computer would have to evolve before this higher level of human-machine interaction could be possible. The way the machine was operated by people would have to change, and the machine itself would have to become much faster and more powerful. Although he was still a novice in digital computer design, Licklider was familiar with vacuum tube circuitry and enough of an expert in the hybrid discipline of "human factors engineering" to recognize that the mechanical assistant he wanted would need capabilities that would be possible only with the ultrafast computers he foresaw in the near future.
When he began applying the methods he had been using in human factors research to the informational and communication activities of technical thinkers like himself, Licklider found himself drawn to the idea of a kind of computation that was more dynamic, more of a dialogue, more of an aid in formulating as well as plotting models. Licklider set forth in 1960 the specifications for a new species of computer and a new mode of thinking to be used when operating it, a specification that is still not fully realized, a quarter of a century later:
The information processing equipment, for its part, will convert hypotheses into testable models and then test the models against data (which the human operator may designate roughly and identify as relevant when the computer presents them for his approval). The equipment will answer questions. It will simulate the mechanisms and models, carry out procedures, and display the results to the operator. It will transform data, plot graphs ("cutting the cake" in whatever way the human operator specifies, or in several alternative ways if the human operator is not sure what he wants). The equipment will interpolate, extrapolate, and transform. It will convert static equations or logical statements into dynamic models so the human operator can examine their behavior. In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.

In addition, the computer will serve as a statistical-inference, decision-theory, or game-theory machine to make elementary evaluations of suggested courses of action whenever there is enough basis to support a formal statistical analysis. Finally, it will do as much diagnosis, pattern matching, and relevance recognizing as it profitably can, but it will accept a clearly secondary status in those areas.
The first research in the 1950s into the use of computing equipment for assisting human control of complex systems was a direct result of the need for a new kind of air defense command-and-control system. Licklider, as a human factors expert, had been involved in planning these early air defense communication systems. Like the few others who saw this point as early as he did, he realized that the management of complexity was the main problem to be solved during the rest of the twentieth century and beyond. Machines would have to help us keep track of the complications of keeping global civilization alive and growing. And humans were going to need new ways of attacking the big problems that would result from our continued existence and growth.
Assuming that survival and a tolerable quality of existence are the most fundamental needs for all sane, intelligent organisms, whether they are of the biological or technological variety, Licklider wondered if the best arrangement for both the human and the human-created symbol-processing entities on this planet might not turn out to be neither a master-slave relationship nor an uneasy truce between competitors, but a partnership.
Then he found the perfect metaphor in nature for the future capabilities he had foreseen during his 1957-1958 "religious conversion" to interactive computing and during those 1958-1960 minicomputer encounters that set his mind wandering through the informational ecology of the future. The newfound metaphor showed him how to apply his computer experience to his modest discovery about how technical thinkers spend their time. The idea that resulted grew into a theory so bold and immense that it would alter not only human history but human evolution, if it proved to be true.
In 1960, in the same paper in which he talked about machines that would help formulate as well as help construct theoretical models, Licklider also set forth the concept of the kind of human-computer relationship that he was later to be instrumental in initiating:
The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative "living together in intimate association, or even close union, of two dissimilar organisms" is called symbiosis.

The problems to be overcome in achieving such a partnership were only partially a matter of building better computers and only partially a matter of learning how minds interact with information. The most important questions might not be about either the brain or the technology, but about the way they are coupled.

"Man-computer symbiosis" is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. . . . The hope is that, in not too many years, human brains and computers will be coupled together very tightly, and that the resulting partnership will think as no human being has ever thought and process data in a way not approached by the information-handling machines we know today.
Licklider, foreseeing the use of computers as tools to build better computers, concluded that 1960 would begin a transitional phase in which we humans would begin to build machines capable of learning to communicate with us, machines that would eventually help us to communicate more effectively, and perhaps more profoundly, with one another.
By this time, he had strayed far enough off the course of his psychoacoustic research to be seduced by the prospect of building the device he first envisioned as a tool to help him make sense of his laboratory data. Like Babbage, who needed a way to produce accurate logarithm tables, or Goldstine, who wanted better firing tables, or Turing, who wanted a perfectly definite way to solve mathematical and cryptological problems, Licklider began to move away from his former goals as he got caught up in the excitement of creating the tools he needed.
Except Licklider wasn't an astronomer and tinkerer like Babbage, a ballistician like Goldstine, or a mathematician and code-breaker like Turing, but an experimental psychologist with some practical electronic experience. He had set out to build a small model of one part of human awareness--pitch perception--and ended up dreaming about machines that could help him think about models.
As other software visionaries before and after him knew very well, Licklider's vision, as grandiose as it might have been, wasn't enough in itself to ensure that anything would ever happen in the real world. An experimental psychologist, even an MIT professor, is hardly in a position to set armies of computer engineers marching toward an interactive future. Like von Neumann and Goldstine meeting on the railroad platform at Aberdeen, or Mauchly and Eckert encountering each other in an electronics class at the Moore School, Licklider happened upon his destiny through accidental circumstances, because of the time he spent at a place called "Lincoln Laboratory," an MIT facility for top-secret defense research, where he was a consultant during a critical transition period in the history of information processing.
It was his expertise in the psychology of human-machine interaction that led Licklider to a position where he could make big things out of his dreams. In the early and mid-1950s, MIT and IBM were involved in building what were to be the largest computers ever built, the IBM AN/FSQ-7, as the control centers of a whole new continental air defense system for the United States. SAGE (Semi-Automatic Ground Environment) was the Air Force's answer to the new problem of potential nuclear bomber attack. The computers weighed three hundred tons, took up twenty thousand square feet of floor space, and were delivered in eighteen large vans apiece. Ultimately, the Air Force bought fifty-six of them.
MIT set up Lincoln Laboratory in Lexington, Massachusetts, to design SAGE. At the other end of the continent, System Development Corporation in Santa Monica (the center of the aircraft industry) was founded to create software for SAGE. Some of the thorniest problems that were encountered on this project had to do with devising ways to make large amounts of information available in human-readable form, quickly enough for humans to make fast decisions about that information. It just wouldn't do for your computers to take three days to evaluate all the radar and radio-transmitted data before the Air Defense Command could decide whether or not an air attack was underway.
Some of the answers to these problems were formulated in the "Whirlwind" project at the MIT computing center, where high-speed calculations were combined with computer controls that resembled aircraft controls. Other answers came from specialists in human perception (like Licklider), who devised new ways for computers to present information to people. With the exception of the small crew of the earlier Whirlwind project, SAGE operators were the first computer users who were able to see information on visual display screens; moreover, operators were able to use devices called "lightpens" to alter the graphic displays by touching the screens. There was even a primitive decision-making capacity built into the system: the computer could suggest alternate courses of action, based on its model of the developing situation.
The matter of display screens began to stray away from electronics and into the area of human perception and cognition, which was Licklider's cue to join the computer builders. But even before Lincoln Laboratory was established in 1953-1954, Licklider had been consulted about the possibility of developing a new technology for displaying computer information to human operators for the purpose of improving air defense capabilities. Undoubtedly, the seeds of his future ideas about human-computer symbiosis were first planted when he and other members of what was then called "the presentation group" considered the kinds of visual displays air defense command centers would need.
The presentation group was where he first became acquainted with Wesley Clark, one of MIT's foremost computer builders. Clark had been a principal designer of Whirlwind, the most advanced computer system to precede the SAGE project. Whirlwind, the purpose of which was to act as a kind of flight simulator, was in many ways the first hardware ancestor of the personal computer, because it was designed to be operated by a single "test pilot." It was also used for modeling aerodynamic equations. While it was only barely interactive in the sense that Licklider desired, Whirlwind was the first computer fast enough to solve aerodynamic equations in "real time"--as the event that was being modeled was actually happening. Real-time computation was not only a practical necessity for the increasingly complicated job of designing high-speed jet aircraft; it was a necessary prerequisite for creating the guidance systems of rockets, the technological successors to jet aircraft.
Ironically, by the time SAGE became fully operational in 1958, the entire concept of ground-based air defense against bomber attack had been made obsolete on one shocking day in October, 1957, when a little beeping basketball by the odd name of "Sputnik" jolted the American military, scientific, and educational establishments into a frenzy of action. The fact that the Russians could put bombs in orbit set off the most intensive peacetime military research program in history. When the Soviets repeated their triumph by putting Yuri Gagarin into space, a parallel impetus started the U.S. manned space effort on a similar course.
In the same way that the need for ballistics calculations indirectly triggered the invention of the general-purpose digital computer, the aftermath of Sputnik started the development of interactive computers, and eventually led directly to the devices now known as personal computers. Just as von Neumann found himself in the center of political-technological events in the ENIAC era, Licklider was drawn into a central role in what became known as "the ARPA era."
The "space race" caused a radical shakeup in America's defense research bureaucracy. It was decided at the highest levels that one of the factors holding up the pace of space-related research was the old, slow way of evaluating research proposals by submitting them for anonymous review by knowledgeable scientists in the field (a ritual known as "peer review" that is still the orthodox model for research funding agencies).
The new generation of Camelot-era whiz kids from the think tanks, universities, and industry, assembled by Secretary McNamara in the rosier days before Vietnam, were determined to use the momentum of the post-Sputnik scare to bring the Defense Department's science and technology bureaucracy into the space age. Something had to be done to streamline the process of technological progress in fields vital to the national security. One answer was NASA, which grew from a tiny sub-agency to a bureaucratic, scientific, and engineering force of its own. And the Defense Department created the Advanced Research Projects Agency, ARPA. ARPA's mandate was to find and fund bold projects that had a chance of advancing America's defense-related technologies by orders of magnitude--bypassing the peer review process by putting research administrators in direct contact with researchers.
Because of their involvement with previous air defense projects, a few of Licklider's friends from Lincoln, like Wesley Clark, were involved in the changeover to the fast-moving, forward-thinking, well-funded, results-oriented ARPA way of doing things. Clark designed the TX-0 and TX-2 computers at MIT and Lincoln. The first of these machines became famous as the favorite tool of the "hackers" in "building 26," who later became the legendary core of Project MAC. The second machine was designed expressly for advanced graphic display research.
Graphic displays were esoteric devices in 1960, known only to certain laboratories and defense facilities. Aside from the PDP-1, almost every computer displayed information via a teletype machine. But there was an idea floating around Lincoln that SAGE-like displays might be adapted to many kinds of computers, not just the big ones used to monitor air defenses. By 1961, the psychology of graphic displays had become something of a specialty for Licklider. Between BB&N and Lincoln, he was spending more time with electrical engineers than with psychologists.
Through his computer-oriented colleagues, Licklider became acquainted with Jack Ruina, director of ARPA in the early 1960s. Ruina wanted to do something about computerizing military command and control systems on all levels--not just air defense--and wanted to set up a special office within ARPA to develop new information processing techniques. ARPA's goal was to leapfrog over conventional research and development by funding attempts to make fundamental breakthroughs. And Licklider's notion of creating a new kind of computer capable of directly interacting with human operators via a keyboard and a display screen (instead of relying on batch processing or even paper-tape input) convinced Ruina that the minority of computer researchers Licklider was talking about might just produce such a breakthrough.
"I got Jack to see the pertinence of interactive computing, not only to military command and control, but to the whole world of day-to-day business," Licklider recalls. "So, in October, 1962 I moved into the Pentagon and became the director of the Information Processing Techniques Office." And that event, as much as any other development of that era, marked the beginning of the age of personal computing.
The unprecedented technological revolution that began with the post-Sputnik mobilization and reached a climax with Neil Armstrong's first step on the moon a little more than a decade later was in a very large part made possible by a parallel revolution in the way computers were used. The most spectacular visual shows of the space age were provided by the enormous rockets. The human story was concentrated on the men in the capsules atop the rockets. But the unsung heroics that ensured the success of the space program were conducted by men using new kinds of computers.
Remember the crew at mission control, who burst into cheers at a successful launch, and who looked so cool nineteen hours later when the astronaut and the mission depended on their solutions to unexpected glitches? When the bright young men at their computer monitors were televised during the first launches from Cape Canaveral, the picture America saw of their working habitat reflected the results of the research Licklider and the presentation group had performed. After all, the kinds of computer displays you need for NORAD (North American Air Defense Command) aren't too different from the kind you need for NASA--in both cases, groups of people are using computers to track the path of multiple objects in space. NASA and ARPA shared results in the computer field--a kind of bureaucratic cooperation that was relatively rare in the pre-Sputnik era.
Because the Russians appeared to be far ahead of us in the development of huge booster rockets, it was decided that the United States should concentrate on guidance systems and ultralight (i.e., ultraminiature) components for our less powerful rockets--a policy that was rooted in the fundamental thinking established by the ICBM committee a few years back, in the von Neumann days. Therefore the space program and the missile program both required the rapid development of very small, extremely reliable computers.
The decision of the richest, most powerful nation in history to put a major part of its resources into the development of electronic-based technologies happened at an exceptionally propitious moment in the history of electronics. The basic scientific discoveries that made the miniaturization revolution possible--the new field of semiconductor research that produced the transistor and then the integrated circuit--made it clear that 1960 was just the beginning of the rapid evolution of computers. The size, speed, cost, and energy requirements of the basic switching elements of computers changed by orders of magnitude when electron tubes replaced relays in the late 1940s, and again when transistors replaced tubes in the 1950s; now integrated circuits were about to replace transistors in the 1960s. In the blue-sky labs, where the engineers were almost outnumbered by the dreamers, they were even talking about "large-scale integration."
When basic science makes breakthroughs at such a pace, and when technological exploitation of those discoveries is so deliberately intensified, a big problem is being able to envision what's possible and preferable to do next. The ability to see a long range goal, and to encourage the right combination of boldness and pragmatism in all the subfields that could contribute to achieving it, was the particular talent that Licklider brought onto the scene. And with Licklider came a new generation of designers and engineers who had their sights on something the pre-Sputnik computer orthodoxy would have dismissed as science fiction. Suddenly, human-computer symbiosis wasn't an esoteric hypothesis in a technical journal, but a national goal.
When Licklider went to ARPA, he wasn't given a laboratory, but an office, a budget, and a mandate to raise the state of the art of information processing. He started by supporting thirteen different research groups around the country, primarily at MIT; System Development Corporation (SDC); the University of California at Berkeley, Santa Barbara, and Los Angeles; USC; Rand; Stanford Research Institute (now SRI International); Carnegie-Mellon University; and the University of Utah. And when his office decided to support a project, that meant providing thirty or forty times the budget that the researchers were accustomed to, along with access to state-of-the-art research technology and a mandate to think big and think fast.
A broad range of new capabilities that Licklider then called "interactive computing" was the ultimate goal, and the first step was an exciting new concept that came to be known as time-sharing.
Time-sharing was to be the first, most important step in the transition from batch processing to the threshold of personal computing (i.e., one person to one machine). The idea was to create computer systems capable of interacting with many programmers at the same time, instead of forcing them to wait in line with their cards or tapes.

Exploratory probes of the technologies that could make time-sharing possible had been funded by the Office of Naval Research and the Air Force Office of Scientific Research before ARPA stepped in. Licklider beefed up the support to the MIT Cambridge laboratory where AI researchers were working on their own approach to "multiaccess computing." Project MAC, as this branch became known, was the single node in the research network where AI and computer systems design were, for a few more years, cooperative rather than divergent.
MAC generated legends of its own, from the pioneering AI research of McCarthy, Minsky, Papert, Fredkin, and Weizenbaum, to the weird new breed of programmers who called themselves "hackers," who held late-night sessions of "Spacewar" with a PDP-1 they had rigged to fly simulated rockets around an oscilloscope screen and shoot dots of light at one another. MAC was one of the most important meeting grounds of both the AI prodigies of the 1970s and the software designers of the 1980s. By the end of the ARPA-supported heyday, however, the AI people and the computer systems people were no longer on the same track.
One of Licklider's first moves in 1962-1963 was to set up an MIT and Bolt, Beranek and Newman group in Massachusetts to help System Development Corporation in Santa Monica produce a transistorized version of the SAGE-based time-sharing prototypes, which were based on the old vacuum tube technology. The first step was to get into the researchers' hands a machine that was itself interactive enough to be used to design still more interactive successors--the "bootstrapping" process that became the deliberate policy of Licklider and his successors. The result was that university laboratories and think tanks around the country began to work on the components of a system that would depend on engineering and software breakthroughs that hadn't been achieved yet.
The time-sharing experience turned out to be a cultural as well as a technological watershed. As Licklider had predicted, these new tools changed the way information was processed, but they also changed the way people thought. A lot of researchers who were to later participate in the creation of personal computer technology got their first experience in the high-pressure art and science of interactive computer design in the first ARPA-funded time-sharing projects.
One of the obstacles to achieving the kind of interactive computing that Licklider and his growing cadre of "converts" envisioned lay in the slowness and low capacity of the memory component of 1950s-style computers; this hardware problem was solved when Jay Forrester, director of the Whirlwind project, came up with "magnetic core memory." The advent of transistorized computers promised even greater memory capacity and faster access time in the near future. A different problem, characterized by the batch-processing bottleneck, stemmed from the way computers were set up to accept input from human operators; a combination of hardware and software innovations was converging on direct keyboard-to-computer input.
Another of the obstacles to achieving the overall goal of interactive computing lay not in the way computers processed information--an issue that was addressed by the time-sharing effort--but in the primitive way computers were set up to display information to human operators. Lincoln Laboratory was the natural place to concentrate the graphics effort. Another graphics-focused group was started at the University of Utah. The presentation group veterans, expanded by the addition of experts in the infant technology of transistor-based computer design, began to work intensively on the problem of display devices.
Licklider remembers the first official meeting on interactive graphics, where the first wave of preliminary research was presented and discussed in order to plan the assault on the main problem of getting information from the innards of the new computers to the surface of various kinds of display screens. It was at this meeting, Licklider recalls, that Ivan Sutherland first took the stage in a spectacular way.
"Sutherland was a graduate student at the time," Licklider remembers, "and he hadn't been invited to give a paper." But because of the graphics program he was creating for his Ph.D. thesis, because he was a protÈgÈ of Claude Shannon, and because of the rumors that he was just the kind of prodigy ARPA was seeking, he was invited to the meeting. "Toward the end of one of the last sessions," according to Licklider, "Sutherland stood up and asked a question of one of the speakers." It was the kind of question that indicated that this unknown young fellow might have something interesting to say to this high-powered assemblage.
So Licklider arranged for him to speak to the group the next day: "Of course, he brought some slides, and when we saw them everyone in the room recognized his work to be quite a lot better than what had been described in the formal session." Sutherland's thesis, a program developed on the TX-2 at Lincoln, demonstrated an innovative way to handle computer graphics--and a new way of commanding the operations of computers. He called it Sketchpad, and it was evident to the assembled experts that he had leaped over their years of research to create something that even the most ambitious of them had not yet dared to attempt.
Sketchpad allowed a computer operator to use the computer to create, very rapidly, sophisticated visual models on a display screen that resembled a television set. The visual patterns could be stored in the computer's memory like any other data, and could be manipulated by the computer's processor. In a way, this was a dramatic answer to Licklider's quest for a fast model-builder. But Sketchpad was much more than a tool for creating visual displays. It was a kind of simulation language that enabled computers to translate abstractions into perceptually concrete forms. And it was a model for totally new ways of operating computers; by changing something on the display screen, it was possible, via Sketchpad, to change something in the computer's memory.
"If I had known how hard it was to do, I probably wouldn't have done it," Alan Kay remembers Sutherland saying about his now-legendary program. Not only was the technical theory bold, innovative, and sound, but the program actually worked. With a lightpen, a keyboard, a display screen, and the Sketchpad program running on the relatively crude real-time computers available in 1962, anyone could see for themselves that computers could be used for something else beside data processing. And in the case of Sketchpad, seeing was truly believing.
When he left ARPA in 1964, Licklider recommended Sutherland as the next director of the IPTO. "I had some hesitance about recommending someone so young," remembers Licklider, "but Bob Sproull, Ruina's successor as ARPA director, said he had no problem with his youth if Sutherland was really as bright as he was said to be." By that time, Sutherland, still in his early twenties, had established a track record for himself doing what ARPA liked best--racing ahead of the technology to accomplish what the orthodoxy considered impossible or failed to consider altogether.
When Sutherland took over, the various time-sharing, graphics, AI, operating systems, and programming language projects were getting into full swing, and the office was growing almost as fast as the industries that were spinning off the space-age research bonanza. Sutherland hired Bob Taylor, a young man from the research funding arm of NASA, to be his assistant, and ultimately his successor when he left IPTO in 1965. Licklider went to the IBM research center in 1964, and then back to MIT to take charge of Project MAC in 1968.
In 1983, over a quarter of a century since the spring day he decided to observe his own daily activities, Licklider is still actively counseling those who build information processing technologies. After three decades of direct experience with "the rule of two," he is not sure that information engineers have even approached the physical limits of information storage and processing.
One thing scientists and engineers know now that they didn't know when he and the others started, Licklider points out, is that "Nature is very much more hospitable to information processing than anybody had any idea of in the 1950s. We didn't realize that molecular biologists had provided an existence proof for a fantastically efficient, reliable, information processing mechanism--the molecular coding of the human genetic system. The informational equivalent of the world's entire fund of knowledge can be stored in less than a cubic centimeter of DNA, which tells us that we haven't begun to approach the physical limits of information processing technology."
The time-sharing communities, and the network of communities that followed them, were part of another dream--the prospect of computer-mediated communities throughout the world, extending beyond the computer experts to thinkers, artists, and business people. Licklider believes it is entirely possible that the on-line, interactive human-computer community he dreamed about will become technologically feasible sometime within the next decade. He knew all along that the frameworks of ideas and the first levels of hardware technology achieved in the 1960s and 1970s were only the foundation for a lot of work that remained to be done.
When the bootstrapping process of building better, cheaper, experimental interactive information processing systems intersects with the rising curve of electronic capabilities, and the dropping curve of computational costs, it will become possible for millions, rather than a thousand or two, to experience the kind of information environment the ARPA-sponsored infonauts knew.
In the early 1980s, millions of people already own personal computers that will become obsolete when versions a hundred times as fast with a thousand times the memory capacity come along at half of today's prices. When tens of millions of people get their hands on powerful enough devices, and a means for connecting them, Licklider still thinks the job will only be in its beginning stages.
Looking toward the day when the "intergalactic network" he speculated about in the midsixties becomes feasible, he remains convinced that the predicted boost in human cultural capabilities will take place, but only after enough people use an early version of the system to think up a more capable system that everybody can use: "With a large enough population involved in improving the system, it will be easier for new ideas to be born and propagated," he notes, perhaps remembering the years when interactive computing was considered a daring venture by a bunch of mavericks. The most significant issue, he still believes, is whether the medium will become truly universal.
"What proportion of the total population will be able to join that community? That's still the important question," Licklider concludes, still not sure whether this new medium will remain the exclusive property of a smaller group who might end up wielding disproportionate power over others by virtue of their access to these tools, or whether it will become the property of the entire culture, like literacy.
read on to Chapter Eight: Witness to History: The Mascot of Project MAC

howard rheingold's brainstorms
©1985 howard rheingold, all rights reserved worldwide.