Tools for Thought by Howard Rheingold

The idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry or orthodox computer science, nor even of homebrew computerists; their work was rooted in older, equally eccentric, equally visionary work. You can't really guess where mind-amplifying technology is going unless you understand where it came from.
- HLR
Chapter Thirteen:
Knowledge Engineers and Epistemological Entrepreneurs
". . . It is extremely important that the development of intelligent machines be pursued, for the human mind is not only limited in its storage and processing capacity but it also has known bugs: It is easily misled, stubborn, and even blind to the truth, especially when pushed to its limits.
"And, as is nature's way, everything gets pushed to the limit, including humans. We must find a way of organizing ourselves more effectively, of bringing together the energies of larger groups of people toward a `common goal. Intelligent systems, built from communications technology, will someday know more than any individual about what is going on in complex enterprises involving millions of people, such as a multinational corporation or a city. And they will be able to explain each person's part of the task. We will build more productive factories this way, and maybe someday a more peaceful world. We must keep in mind . . . that the capabilities of intelligence as it exists in nature are not necessarily its natural limits."
Are future computers going to become tools for extending the power of our minds, or are they going to evolve into a new kind of intelligent species that operates far beyond the limits of biological intelligence? Avron Barr, the author of the statement quoted at the beginning of this chapter, is exploring one of the most potentially explosive areas of human-computer evolution--the field that has come to be known as "knowledge engineering." To me, Barr's specialty seems to be rooted in the same idea that goes back to Licklider and Bush--the inevitability of a human-computer symbiosis. But to many other people, the idea of artificial intelligence seems to be fundamentally different from augmentation, in that the artificial intelligentsia appear to be more interested in replacing human intelligence than extending it.
Knowledge engineering is but one part of that ever-expanding area of hardware and software research that constitutes the field of AI. Unlike other artificial intelligence researchers, Avron Barr is not concerned with systems that direct an optical sensor to recognize visual patterns, help a speech-recognition system understand natural language, or guide a robot in the task of climbing stairs. He and his colleagues are trying to build systems that can transfer knowledge from experts to novices and that can use the transferred knowledge to help people make decisions about specific problems.
Barr's specialty seems to bridge the gap between those who see the future of computers in terms of "mind tools" and those who see it in terms of "the next step in the evolution of intelligence." Like the other people I met who have been involved in building tomorrow's software tools, Barr has a firm belief in the epochal quality of the changes we will face when these experiments filter down to the level of public experience. For example, consider the following scenario:
A general practitioner in a small town in the Southwest was awakened late one night by an emergency call--a six-year-old girl had been admitted to the local hospital. She was comatose, and she had a high fever. The doctor ordered all clinical tests that were available at that hour in a one-hospital town and called the pathologist. The symptoms, and the results of the first tests, weren't anything the GP or the pathologist had seen before. Drugs were available--the pharmacy was well equipped, even if specialized expertise was in short supply. But which drug?
Choosing the proper antibiotic from the hundreds of possibilities was a matter of life and death for the little girl, and neither the GP nor the pathologist was comfortable about staking the young patient's life on guesswork. They took their laboratory results over to the local community college, where one of the young programmers who always seemed to be around in the middle of the night used a microcomputer and a telephone to put them in contact with an expert in Palo Alto, California, who knew just the right questions to ask about a case like this.
"Has the patient recently had symptoms of persistent headache or other abnormal neurologic symptoms (dizziness, lethargy, etc.)?" asked the specialist in California.
"Yes," replies the local attending physician.
"Has the patient recently had objective evidence of abnormal neurological signs (nuchal rigidity, coma, seizures, etc.) documented by physician observation or examination?"
"Yes," replied the pathologist.
With the help of clues provided over the telephone by the expert, the local doctors were able to administer one more test that narrowed their search for the disease-causing organism down to one of the three possibilities suggested by the specialist. There were drugs on hand for treating the infection that the long-distance expert had helped them pinpoint. The little girl recovered. The doctor, the pathologist, and the child's family were grateful.
The specialist, a computer program named MYCIN residing in a mainframe computer at Stanford Medical Center, chalked up another diagnostic triumph to its already impressive record.
Although this particular story is fictional, the dialogue is an excerpt from a real MYCIN consultation. The program does indeed exist, and is in use as a strictly experimental diagnostic assistant. It is an example of a whole range of new computer programs known as expert systems that are now serving as intelligent assistants to human experts in fields as diverse as medicine and geology, mathematics and molecular biology, computer design and organic chemistry. Expert systems are just the first of a whole new variety of software probes that infonauts like Avron Barr are launching into the unknown regions of human-machine relationships.
These systems are both research tools and commercial products. A program called PROSPECTOR has recently helped pinpoint a molybdenum deposit worth tens of millions of dollars. A program named DENDRAL, which started out as an artificial intelligence experiment, is now owned by a consortium of chemical companies, whose chemists use it to design and synthesize potentially useful new compounds.
One important difference between an expert system and other kinds of computer programs is that the program does not simply provide answers to questions, the way a calculator provides the solutions to equations. Expert systems do, of course, suggest answers, and eventually they will venture answers accompanied by a numerical statement of "confidence" in the answer. But they do more than that. The most important part of an expert system is in the interaction between the program and the person who uses it.
The human who is faced with a specialized problem can consult the specialized program, which is able to ask the human questions of its own regarding the particulars of the problem. The consultation is a dialogue that is tailored to the specific case at hand. The program simulates the decision process of human experts, and feeds back the results of that process to the human who consults it, thus serving as a reference and guide for the person who uses it.
Expert systems as they exist today are made of three parts--a base of task-specific knowledge, a set of rules for making decisions about that knowledge, and a means of answering people's questions about the reasons for the program's recommendations. The "expert" program does not know what it knows through the raw volume of facts in the computer's memory, but by virtue of a reasoning-like process of applying the rule system to the knowledge base; it chooses among alternatives, not through brute-force calculation, but by using some of the same rules of thumb that human experts use.
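To make that three-part structure concrete, here is a minimal sketch--illustrative Python rather than the LISP such systems were actually written in, with facts and rules invented for the example. A "forward-chaining" loop applies if-then rules to the knowledge base and records every rule that fires, which is what lets the program answer the question "why?":

    # Part 1: task-specific knowledge -- facts established for this case (invented).
    facts = {"fever", "stiff_neck", "patient_is_child"}

    # Part 2: decision rules -- rules of thumb of the form "IF these facts, THEN conclude".
    rules = [
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis", "patient_is_child"}, "recommend_pediatric_dosage"),
    ]

    # Part 3: the explanation facility -- a trace recording WHY each conclusion was drawn.
    trace = []

    changed = True
    while changed:  # keep applying rules until nothing new can be concluded
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} -> {conclusion}")
                changed = True

    print("Conclusions:", sorted(facts))
    print("Because:")
    for step in trace:
        print("  ", step)

Note that the loop itself knows nothing about medicine; all of the expertise lives in the facts and the rules, which is why, as we shall see, the same engine can be emptied and refilled for a different specialty.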
Statistics about how often experts turn out to be right are the ultimate criteria for evaluating expertise--whether the expert is a person who has studied for years, or a computer program that was literally born yesterday. The methodology for conducting such an evaluation was suggested in the 1950s by Alan Turing. The "Turing test" bypasses abstract arguments about artificial intelligence by asking people to determine whether the party they are communicating with via teletype is a machine or a person. If most people can't distinguish a computer from another human, strictly by the way the other party responds to questions, then the other party is deemed to be intelligent. A similar strategy has been employed to judge the efficacy of expert systems. Why not just ask some human experts to distinguish human from machine diagnoses?
One experiment conducted by the Stanford Medical School began by submitting to MYCIN case histories of ten patients with different types of infectious meningitis. At the same time, eight human physicians, including five faculty specialists in infectious diseases, a research fellow, and a resident, were given the same information that had been fed to MYCIN. MYCIN's recommendations, unidentified as such, were then sent to eight non-Stanford specialists, along with the human physicians' recommendations and a record of the therapy the patients actually received. The outside specialists gave the highest rating to MYCIN.
In the 1980s, there is little question that expert systems can be highly effective, if not superior to human expertise, in certain highly specialized fields. Twenty years ago, few people, even inside the artificial intelligence community, were confident that it could be done at all. The normally "pure" research field of artificial intelligence strayed into this potentially controversial area of applied AI, as it was bound to, because the questions surrounding expertise are at the core of the effort to simulate human intelligence.
Edward A. Feigenbaum was one of the people from artificial intelligence research who decided, in the mid-1960s, that it is important to know how much a computer program can know, and that the best way to learn something about the question would be to try to construct an artificial expert. Joshua Lederberg, the Nobel laureate geneticist, suggested that the task of determining the molecular structure of compounds, based on data from mass spectrography and guided by the rules that are known to govern molecular bonds, was an appropriately difficult and potentially useful problem for artificial intelligence techniques. Together with software expert Bruce Buchanan and organic chemist Carl Djerassi, Lederberg and Feigenbaum started to design DENDRAL, the first expert system, in 1965, at Stanford University.
Human chemists know that the possible spatial arrangements of the atoms that make up any chemical compound depend on a number of basic rules about how different atoms can bond to one another. They also know a lot of facts about different atoms in known compounds. When they make or discover a previously unknown compound, they can gather evidence about the compound by analyzing the substance with a mass spectrograph. The mass spectrograph provides a lot of data, but no clues to what it all means.
Conventional computer-based systems had failed to provide a tool for discovering molecular structures, based on spectrographic data. The problem is that the rules allow a very large number of "near misses"--possible structures that almost, but not quite, fit all the data. There appears to be a "complexity gap" when it comes to the task of sifting through all the near misses. The far simpler computing processes that were used to discover simple structures are just not adequate for more complex structures. DENDRAL was designed to find that one "structure in a haystack" that perfectly fit the spectrographic data and the rules of chemical bonds.
It turns out that you can't just feed all the known facts into a computer and expect to get a coherent answer. That isn't the way human experts make decisions, and apparently that isn't the way you coax a computer into making a decision. What you need is an "inference engine" to fit together the rules of the game, the body of previously known facts, and the mass of new data, and then venture a guess about what it all means.
Building the right kind of "if-then" program, one with enough flexibility to use the kind of rules of thumb that human experts employ, was only the first major problem to be solved. Once you've created the program structure capable of manipulating expert knowledge, you still have to get some knowledge into the system. After feeding the computer program lots of data about molecules, and rules about how they can be combined in molecular structures, the creators of DENDRAL interviewed expert chemists, trying to specify how the experts made their decisions about which combinations and structures are likely to be useful. The resulting program became a milestone in the evolution of software, and the first of a series of software tools for chemists, biologists, and other researchers.
The process of constructing DENDRAL had another useful, unexpected side effect: The task of extracting judgment-related knowledge from human experts led to a new subfield known as "knowledge engineering." "Knowledge engineering" is the art, craft, and science of observing human experts, building models of their expertise, and refining the model until the human experts agree that it works. One of the first spinoffs from MYCIN was EMYCIN--an expert system for those people whose expertise is in building expert systems. By separating the inference engine from the body of factual knowledge, it became possible to produce expert tools for expert-systems builders, thus bootstrapping the state of the art.
While these exotic programs might seem to be distant from the mainstream of research into interactive computer systems, expert-systems research sprouted in the same laboratories that created time-sharing, chess-playing programs, Spacewar, and the hacker ethic. DENDRAL had grown out of earlier work at MIT (Project MAC, actually) on programs for performing higher-level mathematical functions like proving theorems. It became clear, with the success of DENDRAL and MYCIN, that these programs could be useful to people outside the realm of computer science. It also became clear that the kind of nontechnical questions that Weizenbaum and others had raised in regard to AI were going to be raised when this new subfield became more widely known. As the first frighteningly practical applications to the field of medicine demonstrated, artificial expertise involves important ethical as well as philosophical, psychological, and engineering considerations.
The clearest area of potential danger in applying knowledge engineering to human medicine is the possibility of misuse through misunderstanding. Although the people who built the system see it as a marvelous but thoroughly fallible tool, many people tend to give too much weight to the recommendation of a computer simply because it comes from a computer. Since medical advice often deals in life and death matters, you have to take into consideration the potential psychological impact of such an "automatic doctor" when you attempt to build something that gives medical advice to an expert.
Like all complex issues, the ethics of medical knowledge engineering have another side. It might be noted by someone from a non-Western, nonindustrial, or nonurban culture that expertise, particularly medical expertise, is a desperately scarce resource. The few medical, hygiene, and agricultural experts who are fighting the biggest humanitarian problems of the world--epidemics and famine--are spread too thin and are working too hard to keep up with scientific progress in their fields. Even in major medical centers, expertise in certain important specialties is a rare commodity.
While so many of the trappings of "modern medicine"--like CAT scanners and other medical imaging technologies--are so expensive as to be limited to a few wealthy or well-insured patients, the potential cost per patient of a software-based system is absurdly low, almost low enough to do some good in a near-future when the number of critically ill people on earth might number in the hundreds of millions.
Medicine--with all its promise and all its difficult ethical implications--appears to be one of the most promising areas of application for commercial knowledge engineering. In the mid 1970s, a physician and computer scientist at Stanford Medical School, Dr. Edward H. Shortliffe, developed MYCIN, the diagnostic system quoted in the earlier dialogue. Diagnosing a certain class of brain infections was a technically appropriate area for expert-system research, and an area of particularly pressing human need, because the speed with which the infecting agent is identified is critical to successful treatment.
MYCIN's inference engine (the part of the program that makes decisions by applying general rules to specific data), known as EMYCIN, was used by researchers at Stanford and Pacific Medical Center to produce PUFF, an expert system that assists in diagnosing certain lung disorders. An even newer system, CADUCEUS (formerly known as INTERNIST), uses AI techniques to simulate the diagnostic skills of a specific human physician--Dr. Jack Meyers of the School of Medicine at the University of Pittsburgh. Meyers and his partner, Harry Pople, Jr., a Carnegie-Mellon-trained AI expert, have been storing parts of Meyers' problem-solving style and his knowledge about the entire range of medicine, along with an impressive body of information from the medical literature. CADUCEUS is not yet complete, but it can already perform creditably when difficult cases from the medical journals are submitted to it.
Pople told Katherine Fishman, the author of The Computer Establishment, that their object is to provide "something the physician would use instead of going to the library or consulting a specialist. There aren't that many experts available, even at major centers." Among the sponsoring agencies that have shown interest in CADUCEUS are NASA, which has an obvious need for such a medical helper in manned space missions, and the Navy, which could use something similar for nuclear submarines. Special gear for astronauts and nuclear submariners might sound remote from most people's daily lives, but in recent history, the transistor radio, handheld calculators, and many other examples of new technologies have traveled from the exotic confines of NASA to the breast pockets of teenagers around the world in less than ten years.
Like the creators of previous technological advances, knowledge engineers first had to prove that expert systems could be built at all and that they were useful. That took about ten years. Next, they had to find potential areas of application--a task that didn't take nearly as long. About two dozen corporations are currently developing and selling expert systems and services. Teknowledge, founded by Feigenbaum and associates in 1981, was the first. IntelliGenetics is perhaps the most exotic, specializing in expert systems for the genetic engineering industry. Startups in this field tend toward science-fictionoid names--Machine Intelligence Corporation, Computer Thought Corporation, Symbolics, etc. Other companies already established in non-AI areas have entered the field--Xerox, DEC, IBM, Texas Instruments, and Schlumberger among them.
Expert systems are now in commercial and research use in a number of fields. A partial sampling:
KAS (Knowledge Acquisition System) and TEIRESIAS help knowledge engineers build expert systems.
ONCOCIN assists physicians in managing complex drug regimens for treating cancer patients.
MOLGEN helps molecular biologists plan DNA experiments.
GUIDON is an education expert system that teaches students by correcting answers to technical questions.
GENESIS assists scientists in planning cloning experiments.
TATR helps the Air Force plan attacks on enemy airbases.

It's hard to argue with a molybdenum deposit or a significantly high rate of successful diagnoses. As the debate over whether software is capable of acting intelligently dies down in the face of what mathematicians call an "existence proof," the question of whether computer technology ought to be applied to such areas as medicine, air traffic control, nuclear power plant operations, or nuclear weapons delivery systems is just beginning.
Some critics, prominent members of the artificial intelligentsia among them, have been sounding alarms over the potential ethical dangers of relying too much on electronic artifacts like expert systems to make decisions. Joseph Weizenbaum fears that there is great peril in relying too much on a technology that is very good at mimicking what are actually much deeper human thought processes. Expert systems are the epitome of the kind of "imperialism of instrumental reasoning" Weizenbaum rails against--the kind of thinking that sees all problems as solvable through the kind of analytical, mechanical processes a computer uses.
In a 1983 interview, Weizenbaum said: "To think that one can take a very wise teacher, for example, and by observing her capture the essence of that person to any significant degree is simply absurd. I'd say people who have that ambition, people who think that it's going to be that easy or possible at all, are simply deluded."
Avron Barr is a knowledge engineer who does not feel that he is deluded, and knowledge-based educational systems happen to be one of the areas of his expertise. Surprisingly, Barr agrees with Weizenbaum about the potential ethical danger of mixing human lives and artificial intelligence research: "Artificial intelligence doesn't exist yet," Barr emphasizes, "but I believe that the kind of research we have started to explore with knowledge-based expert systems can eventually create a tool that truly understands human inquiries. And I'm not sure that people are prepared for the ethical decisions that will accompany that kind of power."
From our conversations, and from my perusal of his written work, it has been evident to me that Barr also feels that the potential for using this technology to assist humanity is well worth pursuing, despite the dangers of misuse. Besides developing and distributing automated expertise to both specialists and ordinary citizens as an informational antitoxin to life in a complicated world, Barr likes to wonder aloud how else these software entities might be used to further positive ends. His personal dream is to eventually build an expert system that is an expert in helping humans reach agreement. If chemists and physicians can use intelligent assistants, why can't diplomats and arms-control negotiators avail themselves of the same assistance? Avron Barr's odyssey through philosophy, psychology, and computer programming has led him to suspect a deep connection between what we know individually and how we agree collectively.
I met Avron Barr in a short-order restaurant in the heart of artificial intelligence country--an establishment named "Late for the Train," located next to the Menlo Park train station. If there is an eavesdropping hit list for technological spies, this seismographic hotcake-and-sprouts joint has to be in the top five. SRI International, one of the oldest robotics research centers, and the birthplace of PROSPECTOR, the molybdenum-sniffing software assistant, is a few shady, tree-lined, affluent blocks away. The tweedy old fellow buttering a scone at the next table looked like a central-casting stereotype of a Nobel laureate.
Barr was wearing a white shirt and tie when we met. He appears to be in his midthirties. His hair is brown and well-groomed, his moustache neatly trimmed--another one of the many babyboomers who might have been hippies in the sixties, but who now go to hairstylists twice a month. He looks like the young man who used to put your groceries in the bag.
Barr got into programming in the first place because he needed a job, and he became involved with artificial intelligence because AI programmers seemed to have the only tools he could find that were capable of helping him to create the kind of programs he needed in his work for a research team. His need for a job came after he dropped out of graduate school. His undergraduate work in physics and math at Cornell led to Berkeley, in 1971, where a few months as a physics graduate student made it clear to him that he really didn't want to be a physicist, after all.
At that point, a career in computer science wasn't even on his list of goals, but programming happened to be one of his marketable skills--he had worked his way through Cornell doing scientific programming for various faculty members, stumbling along in FORTRAN, which he taught himself from a book one weekend. After he abandoned his physics career and he began to look for employment, an announcement for a research associate with programming experience came to his attention. The Stanford job called for a resident software handyman in a laboratory that was exploring the technology of instruction. He took it.
He had become a significant contributor to the research team, as well as the hired computer jockey, when he joined a small research group at Stanford's Institute for Mathematical Studies in the Social Sciences. Over the next several years, he helped design a program that taught beginners how to program in the language BASIC.
"Which meant that I had to go back to thinking about what kinds of people were going to be dealing with computers," Barr recalls, "and finding out what kinds of problems those people might have in the process of learning their first computer language.
"One of the first things that is evident is that computer programs are very different from most of the things we learn in school because programmers rarely if ever hit the right answer the first time out. Programming is debugging. So being wrong is not so much something to be avoided at all costs, but should be seen as a clue to the right way of doing it. That's why it was actually an environment rather than just an instructional program. We tried to build a curriculum for teaching BASIC, along with the handholding help people seemed to need in learning software, right into the BASIC language interpreter."
An interpreter, it must be remembered, is not a person who specializes in deciphering computer jargon, but a kind of computer program that converts commands written in a high-level language--the kind people find easier to write--into a form the computer can execute.
The very primitive communication between programmer and interpreter created much of what beginners have always found frustrating about learning old-style programming. Interpreters cannot run programs successfully unless those programs are written perfectly, without a single minor error. If a parenthesis is out of place, the interpreter simply stops operation and puts some spine-chilling message on the screen--the infamous "Fatal Error" or the enigmatic "Syntax Error."
The communication between first-time BASIC programmers and the BASIC interpreter necessary to run their programs was the part of the system Avron Barr and his colleagues were trying to make easier and less frustrating to the human user: "Usually, interpreters return cryptic 'error messages' when they are fed a program with a bug in it," Barr explains. "The program we were building was meant to use the error messages and the debugging as a way to learn how to program."
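The flavor of the approach can be suggested in a few lines of modern Python standing in for the BASIC interpreter Barr's group actually instrumented; the hint table and its wording here are invented for illustration:

    # Map fragments of the interpreter's raw error messages to beginner-oriented hints.
    HINTS = {
        "was never closed": "Something you opened--a parenthesis, bracket, or quote--was never closed.",
        "unexpected EOF": "Did you forget a closing parenthesis or quotation mark?",
        "invalid syntax": "Check the line for a missing colon, comma, or operator.",
    }

    def friendly_run(source):
        """Run a student's program, translating cryptic errors into teaching hints."""
        try:
            exec(compile(source, "<student program>", "exec"))
        except SyntaxError as err:
            print(f"There's a problem on line {err.lineno}: {err.msg}")
            for fragment, hint in HINTS.items():
                if fragment in (err.msg or ""):
                    print("Hint:", hint)

    friendly_run("print('hello'")   # a missing parenthesis draws a hint, not a "Fatal Error"

The original went much further, weaving a whole curriculum into the interpreter, but the principle is the same: the error itself is treated as the lesson.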
In order to build an interpreter that not only is able to identify errors, but also can give beginning users hints about how to go about solving the problem, Avron had to go beyond the normal tricks of the programming trade and learn about some of the exotic new notions that were beginning to emerge from AI research. This wasn't standard operating procedure for the vast majority of programmers: To most computer programmers, even scientific programmers, AI was esoteric hocus-pocus that a clique of obsessed academics did with a lot of money from the Defense Department.
When the intelligent interpreter project was finished, Barr entered the computer science department as a graduate student at Stanford, where he encountered Ed Feigenbaum. Although he had been working as a professional programmer, was surrounded by artificial intelligence types, and had even picked up a few tricks from AI hackers, this was Barr's first formal exposure to the field. Feigenbaum had an idea about writing and editing a book. Avron took on the task. They thought they could produce a general handbook on AI by the end of the summer. It took five and a half years.
Besides the course requirements of his graduate work, Barr's paying job required him to produce a general text from the contributions of hundreds of AI researchers, a book that someone in a noncomputer-related field could use to get an overview of the most significant work that had been done in AI. The job stretched out longer and longer, and during the time it took to complete his editing duties, he progressed from his master's degree to a Ph.D. in cognitive science.
By the late 1970s, Barr was not alone in feeling that the exploration and engineering of knowledge--learning how it is acquired by humans or machines, how it is represented in the mind or in software, how it is communicated between humans and computers and disseminated throughout a culture--was a central problem in philosophy, psychology, and artificial intelligence that might well be answered in surprising ways by the new discipline created by the builders of expert systems.
Computers can track large amounts of information, and they can move through that information very quickly. But when it comes to solving any but the simplest problems--the kind that a human toddler or a chessmaster can handle easily--computers run up against a severe problem. Large is never large enough when it comes to the computer memory needed, and fast is never fast enough in terms of computational speed. There is simply too much information in the world to solve problems by checking every possible solution. The difference between brute-force calculation and human knowledge is the missing link (and holy grail) of hard-core AI research.
Personal knowledge is a tricky thing to describe, and hence a difficult thing for a computer to emulate. Knowledge is more than a collection of facts, frozen into some rationally coded order. How do our minds do all the things they do when we're thinking, without consciously thinking about how to do it? How do you know which details in a sea of information are worth your attention? The difference between a novice and an expert, for example, is not simply a quantitative question of more stored facts about the area of expertise; the difference hinges, instead, on the ability to make judgments about novel problems in the field.
Chess has been the classic example of the difficulties of emulating expertise with computer programs. It is a finite game, with a limited number of clearly allowable moves, each of which has perfectly specified outcomes. Chess qualifies as a formal system in the Turing machine sense, and hence can be imitated by a computer. Give the computer the rules, the starting position, and the opponent's first move, and the computer is capable, in principle, of calculating all the possible responses to that move and formulating a response based on that calculation.
Yet, after a quarter of a century of effort, nobody has come up with an unbeatable chess-playing program. The reason that brute-force calculation hasn't defeated a human grandmaster is not rooted so much in technology as in mathematics: the combinatorial explosion is the term for the brute-force barrier noted by Shannon back in 1950. Even with only 64 squares and a limited number of allowable moves, the number of possible moves in chess multiplies so quickly that it would take uncountable years to evaluate all legal possibilities.
In chess and many other formal systems, the correct answer is a member of a very large number of possible alternatives. The problem posed by an opponent's move is best answered by a move that will lead to capturing the opponent's king. Hidden among the huge number of possible countermoves to each of the opponent's moves is one answer, or a small group of answers, that would have the best chance of achieving the final goal or some intermediate goal. The abstract domain in which the solution is hidden is known as a "problem space."
The brute-force method of finding the right chess move by generating and checking each and every possibility that could exist according to the rules is known as an "exhaustive search of the problem space." Problem space is where the combinatorial explosion lurks, waiting to be triggered by any branching more than a few levels deep.
The problem of the combinatorial explosion can be easily visualized as a tree structure. If the decisions needed to choose between different alternatives are seen as the branches of a tree, then a simple two-decision example would yield two branches on the first move, four on the next, eight on the one after that. By the time you get to sixty-four moves, each with twice as many branches as the previous move, you won't be able to see the forest for the branches. If you increase the number of cases to be decided between from two to three, it gets even more snarled: After two moves on a triple-branching tree, there are nine branches (instead of four); after three moves there are twenty-seven (instead of eight), etc., ad infinitum. So you have to build a system to weed out the legal but absurd moves, as well as a strategy to evaluate two or three moves in advance.
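The arithmetic is easy to verify. A few lines of Python make the point; the branching factors are illustrative, with thirty-five a commonly cited rough average for the number of legal moves in a chess position:

    # With branching factor b, a tree searched d levels deep ends in b**d positions.
    for b in (2, 3, 35):
        for d in (2, 3, 10):
            print(f"branching {b}, depth {d}: {b**d:,} leaf positions")

Two or three branches stay manageable for a few levels, but at thirty-five branches a mere ten levels deep yields roughly 2.8 quadrillion positions--the explosion in the numbers.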
What a machine needs to know, practically before it can get started, is the mysterious something that human chessmasters know that enables them to rule out all but a few possibilities when they look at a chessboard (or hear a chess situation described to them verbally). When a human contemplates a chess position, that person's brain accomplishes an information-processing task of cosmic complexity.
The human brain has obviously found a way to bypass the rules of exhaustive search--a way to beat the numbers involved in searching problem space. This is the vitally important trick that seems to have eluded artificial intelligence program designers from the beginning.
What does the human chessmaster do to prune the tree created by brute-force programs, and how can computers help other humans perform similar tasks? The point of expert-system building is not to outdo the brain but to help human reasoning by creating an intelligent buffer between brain processes and the complexities of the world--especially information-related complexities. A problem-pruning tool could be an important component of such an informational intermediary.
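One way to picture such a pruning tool is a search that expands only the few most promising moves at each level, with a scoring function standing in for the chessmaster's judgment. The sketch below is a toy in Python, not any real chess program; its "heuristic" is arbitrary, and the point is only the difference in node counts:

    BRANCHING = 35   # a rough average number of legal chess moves (assumed)
    DEPTH = 4

    def heuristic_score(move):
        """Stand-in for expert judgment: any cheap estimate of a move's promise."""
        return -abs(move - BRANCHING // 2)   # toy rule: prefer "central" moves

    def positions_visited(depth, keep):
        """Count positions examined when only the `keep` best moves are expanded."""
        if depth == 0:
            return 1
        best = sorted(range(BRANCHING), key=heuristic_score, reverse=True)[:keep]
        return 1 + sum(positions_visited(depth - 1, keep) for _ in best)

    print("exhaustive search:", positions_visited(DEPTH, BRANCHING))  # about 1.5 million
    print("pruned to 3 moves:", positions_visited(DEPTH, 3))          # 121

Whether the heuristic is any good is, of course, the whole problem; pruning buys nothing if it throws away the winning move.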
Human brains seem to accomplish tasks in ways that would require absurd amounts of computer power if they were to be duplicated by machines. The first expert-systems experiments were not focused exclusively on machine capabilities nor on human capabilities, but on the border between the two types of symbol processors. How could a machine be used to transfer expertise from one human to another? The emerging differences between machine capabilities and human cognitive talents were brought into sharper focus when it was demonstrated by systems like MYCIN that this kind of software was capable of measurably augmenting the power of human judgment. Doctors who used MYCIN to aid their diagnostic decision-making ended up making accurate diagnoses more often than they did before they used the program to assist them. The "reasoning" capabilities of the first expert systems were actually quite primitive, but the way these systems worked as "consultation tools" made it clear that there was great potential power in designing software systems that could interact with people in ways that simulated and augmented human knowing.
The present link between the technology of augmenting human intellect, the business of building expert systems, and the science of artificial intelligence, according to Avron Barr and his colleagues, is the role of transfer of expertise, both as a practical, valuable tool and as a probe for understanding the nature of understanding:
A key point in our current approach to building expert systems is that these key programs should not only be able to apply the corpus of expert knowledge to specific problems, but that they should also be able to interact with the users just as humans do when they learn, explain, and teach what they know. . . . These transfer of expertise (TOE) capabilities were originally necessitated by "human engineering" considerations--the people who build and use our systems needed a variety of "assistance" and "explanation" facilities. However, there is more to the idea of TOE than the implementation of needed user features: These social interactions--learning from experts, explaining one's reasoning, and teaching what one knows--are essential dimensions of human knowledge. These are as fundamental to the nature of intelligence as expert-level problem-solving, and they have changed our ideas about representation and about knowledge.

In order to make a decision with the help of an expert system, a human user must know more than just the facts of the system's recommendation. First, the human has to learn how to communicate with the computer; then he or she needs to know how the system arrived at its conclusion, in terms that he or she can understand. And in order to tell the human about the steps of its reasoning process, such systems must have a means for knowing what they know.
By this point, the exercise has become more than a mechanical search through long lists of possibilities. Problem-solving is only part of the function of a system that must also convince a human that the solution it has found is indeed the correct one. The internal and external communication aspects of this transfer process, Barr suspects, offer clues to some of the most significant problems in artificial intelligence as well as intellectual augmentation research:
We are building systems that take part in the human activity of transfer of expertise among experts, practitioners, and students in different kinds of domains. Our problems remain the same as they were before: We must find good ways to represent knowledge and metaknowledge, to carry on a dialogue, and to solve problems in the domain. But the guiding principles of our approach and the underlying constraints on our solutions have been subtly shifted: Our systems are no longer being designed solely to be expert problem solvers, using vast amounts of encoded knowledge. These are aspects of "knowing" that have so far remained unexplored in AI research: By participation in human transfer of expertise, these systems will involve more of the fabric of behavior that is the reason we ascribe knowledge and intelligence to people.

Like Doug Engelbart and Alan Kay, Barr feels that future generations will be less inhibited than present-day computer builders and users when it comes to stretching our ideas of what machines and humans can do. This adjustment of human attitudes and computer capabilities is a present-day pragmatic concern of knowledge engineers, and a long-term prerequisite for the kind of human-machine symbiosis predicted by Licklider.
In his conversations, lectures, and writing, Barr often refers to what he and other cognitively oriented computer scientists call "the flight metaphor." Early AI researchers, who were seeking pragmatic means to deal with the question of whether machines could think, compared themselves to those human inventors who not so long ago believed they would eventually build flying machines: "Today, despite our ignorance, we can point to that biological milestone, the thinking brain, in the same spirit as the scientists many hundreds of years ago pointed to the bird as a demonstration in nature that mechanisms heavier than air could fly," wrote Feigenbaum and Feldman in 1963.
"It is instructive to pursue this analogy a bit farther," Barr wrote in 1983:
Flight, as a way of dealing with the environment, takes many forms--from soaring eagles to hovering hummingbirds. If we start to study flight by examining its forms in nature, our initial understanding of what we are studying might involve terms like feathers, wings, weight-to-wing-size ratios, and probably wing flapping, too. This is the language we begin to develop--identifying regularities and making distinctions among the phenomena. But when we start to build flying artifacts, our understanding changes immediately.

Barr then cited another contributor to the flight metaphor, Seymour Papert of MIT, Project MAC, and LOGO fame, who pointed out that the most significant insights into aerodynamics occurred when inventors stopped thinking so extensively about how birds flew. Papert told a 1972 European seminar attended by Barr: "Consider how people came to understand how birds fly. Certainly we observed birds. But mainly to recognize certain phenomena. Real understanding of bird flight came from understanding flight; not birds."
The most difficult barrier faced by the first designers of artificial aviation was not in the environmental obstacles their inventions faced, nor in the nature of the materials and techniques they had available, but in their ideas of what flight could and could not be. The undeniable proof of the simple but incredible idea that flight does not require flapping wings was the most important thing achieved by the Wright brothers.
At the turn of the century, a fundamental part of the problem facing aviation designers lay in abandoning prejudices about the way things actually were so that the possible might be discerned. Those who wanted to build flying machines had to abandon their fixation with the way nature solved the problem of evolving a flying lifeform so that they might see beyond birds to understand the nature of flight. In the same sense, a fundamental part of the problem of artificial intelligence design lies in the ability to see beyond brains or computers to understand something about the nature of intelligence.
Cognitive scientists know that such knowledge can shed light on the way human brains work. Barr points out that such knowledge might expand into varieties of intelligence as different from human intelligence as a jet plane is different from an eagle.
If the flight metaphor could be faithfully extrapolated to the artificers of thinking machines and engineers of programs that understand, Barr claims, new worlds of unimaginable information-processing mechanisms would become possible--mechanisms that would be compatible with, but quite different from, the way human brains do things:
. . . Every new design brings new data about what works and what does not, and clues as to why. Every new contraption tries some different design alternative in the space defined by our theory language. And every attempt clarifies our understanding of what it means to fly.

Intelligence, like flight, is a way of dealing with the environment. Intelligence, again like flight, conveys a survival advantage to the organism or species that possesses it. The sheer usefulness, the practical value to society of being able to fly from place to place, ensured that better artificial ways to fly would be found. Barr suggests that expert systems and other knowledge-based technologies are the kind of "flying machines of the mind" that will have an equally high utilitarian value, and the economics of the marketplace will therefore drive the future exploration of their capabilities.

But there is more to the sciences of the artificial than defining the "true nature" of natural phenomena. The exploration of the artifacts themselves, the stiff-winged flying machines, because they are useful to society, will naturally extend the exploration of the various points of interface between the technology and society. While nature's explorations of the possibilities are limited by its mutation mechanism, human inventors will vary every parameter they can think of to produce effects that might be useful--exploring the constraints on the design of their machines from every angle. The space of "flight" phenomena will be populated by examples that nature has not had a chance to try.
The "applied" part of "applied AI" is one of the most significant aspects of expert systems, in Barr's opinion, because the linkage of intelligent systems with valuable social goals guarantees the further development of the young science. Because the development of better products in this particular market also means the development of better means of augmenting human intelligence, the evolution of this kind of machine will be rather closely coupled with the future evolution of human thought:
It is the goal of those who are involved in the commercial development of expert-systems technology to incorporate that technology into some device that can be sold. But the environment in which expert systems operate is our own cognitive environment; it is within this sphere of activity--people solving their problems--that the eventual expert-system products must be found useful. They will be engineered to our minds. . . .

It is a long way from the expert systems developed in the research laboratories to any products that fit into people's lives; in fact it is difficult even to envision what such products will be. Egon Loebner of Hewlett-Packard Laboratories tells of a conversation he had many years ago with Vladimir Zworykin, the inventor of television technology. Loebner asked Zworykin what he had in mind for his invention when he was developing the technology in the 1920s--what kind of product he thought his efforts would produce. The inventor said that he had a very clear idea of the eventual use of TV: He envisioned medical students in the gallery of an operating room getting a clear picture on their TV screens of the operation being conducted below them.
One cannot, at the outset, understand the application of a new technology, because it will find its way into realms of application that do not yet exist. Loebner has described this process in terms of the technological niche, paralleling evolution theory. Like the species and their environment, inventions and their applications are co-defined--they constantly evolve together, with niches representing periods of relative stability, into a new reality. . . . Thus, technological inventions change as they are applied to people's needs, and the activities that people undertake change with the availability of new technologies. And as people in industry try to push the new technology toward some profitable niche, they will also explore the nature of the underlying phenomena. Of course, it is not just the scientists and engineers who developed the new technology who are involved in this exploration: Half the job involves finding out what the new capabilities can do for people.
In order to build an expert system, a knowledge engineer needs to encode the rules a human expert uses to make decisions about problems in a specific field, then connect those decision rules with a large collection of facts about that field. The human expert is asked to test the software model. If the human expert disagrees with the system's suggested solution to a problem, then the human asks the system to reconstruct the chain of rules and facts that led to its decision.
By pinpointing the places where the program went wrong, the human expert and the knowledge engineer turn their rough mock-up into a working expert system by a process of progressive debugging. Eventually, they end up with a program that will agree with the human expert a very high proportion of the time. Consensus comes in when you ask a second expert to evaluate the system. In real life, human experts disagree with one another, even at the highest levels of expertise. This means that no matter how well an expert system agrees with one particular human expert, there is no guarantee that another expert won't catch the software making a wrong decision.
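The debugging loop just described can be sketched with the same toy engine shown earlier, plus a handful of invented test cases labeled with a human expert's verdicts; wherever the program and the expert part company, the recorded rule chain shows exactly which step to inspect:

    def consult(case_facts, rules):
        """Apply if-then rules to a case; return all conclusions plus the rule trace."""
        facts, trace = set(case_facts), []
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{sorted(conditions)} -> {conclusion}")
                    changed = True
        return facts, trace

    rules = [({"fever", "stiff_neck"}, "suspect_meningitis")]
    test_cases = [  # (observed facts, what the human expert concluded) -- invented
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"fever"}, "suspect_influenza"),
    ]

    for case_facts, expert_says in test_cases:
        conclusions, trace = consult(case_facts, rules)
        if expert_says not in conclusions:   # a disagreement: show the chain of reasoning
            print(f"Expert concluded {expert_says!r}; system did not.")
            print("Rules that fired:", trace or "(none -- a missing rule, perhaps)")

Here the second case exposes a gap: no rule covers fever without a stiff neck, so the knowledge engineer knows where a new rule--or a correction from the expert--belongs.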
The key to taking advantage of these natural disagreements between experts, Barr realized, was to build in a mechanism for "remembering experiences," for keeping around old decisions, even if they were wrong, and creating new rules from the outcome of disagreements. Taken far enough, this aspect of the system leads directly to one of the hottest issues in AI research--the question of whether programs can learn from experience. Barr was only interested in one specific aspect of this issue--the possibility of creating a means of tracking decisions and keeping track of instances where human experts disagree with each other.
"When two experts disagree," Barr explains, "they try to find ways to show each other cases where the other's knowledge is not appropriate to produce what they both agree would be the right result. The first steps of establishing consensus, then, involve figuring out where you do agree. Then you can get on to the second step--trying to find exactly where in your individual knowledge systems the disagreement lies.
"Locating the point of disagreement usually turns out to be an important part of the process, because in consciously looking for disagreements the experts realize that they don't share the same meanings for the terms they are using or that they don't share a compatible description of the goal.
"This kind of debugging isn't exciting, but it creates a foundation for the third step of consensus, where the experts have to decide what to do about each other. They can agree that one of them was wrong, they both can remain convinced that they are right, they can decide that they are both wrong or both right. They can look for an investigation or experiment that could decide the issue. Or they can decide that they both have to wait for new knowledge."
Barr believes consensus assistance is only a start on "the ultimate kind of thing we can do with intelligent assistants. Consensys started out as a way of describing how you communicate with one of these systems, in particular, how you might push the expert system to deal with two different human experts and incorporate the value of the differences that the two experts might have.
"My dream has to do with the idea that there is a purpose for us all being here, and we're all necessary for discovering that purpose. Each of us has our own little peephole onto the building being constructed. None of us know what it is, but each of us has a slightly different perspective. And all of those perspectives are necessary to figure out what's being built. It's strange that we can achieve so much as a culture in such short time, and we can get all these great ideas about how we got here and how the universe works, and yet know so little about the point of it all. I think that's a clue that computation has a role to play.
"I think of computation as an abstract idea about what it is to share an interpretation of the environment. Computation involves systematic manipulation of symbols, and symbols have a cognitive relation to the world. We need those intermediate messages between our internal representations in order to share perspectives on the world.
"I think it is indeed possible that these kinds of systems will someday be used as a way to work out differences between people. The understanding that is necessary for that to begin to happen involves admitting that we don't know what the purpose is, then finding out why we don't know, and figuring out together how we might come to understand. Perhaps computers can play a role in understanding that purpose.
"This might sound very philosophical, but the nature of understanding is at the core of the problems AI programs are up against right now. Pattern recognition in artificial vision or hearing, the ability to understand natural language, the emulation of problem-solving, the design of an intelligent computer interface-- all of these research questions involve the nature of understanding. We don't know what the purpose of understanding is, or why you have to know a whole lot about the world in general to recognize a face or understand a sentence.
"I think most of us believe that understanding is better than not understanding, and that the more we understand the better off we'll be. And I think that the descendants of today's knowledge-based expert systems will help us all to better understanding. Each of us will be able to understand better because we'll be interacting with people and with information through the assistance of expert tools. They may even help us understand things that nobody understands."
Few people object to the notion of understanding things that nobody understands--until it is suggested that the agent for achieving that understanding might be an intelligence that is made of silicon rather than protoplasm. The AI infonauts might be on a track that ultimately will bypass the near-future technologies that augment, but do not surpass, human intelligence. If Barr and his colleagues are correct, then their ideas offer strong reinforcement for the speculations that Licklider made in 1960, when he introduced the idea of a coming human-machine symbiosis. Licklider suggested that such a symbiosis was an intermediate step for the interim decades or centuries before the machines surpass our ability to keep up with them.
Even if the human-machine partnership is to be an intermediate relationship, lasting only a few human generations, those next few generations promise to be exciting indeed. When we look at the history of computing, it is clear that the experts consistently underestimate the rate at which this technology changes. Even the boldest AI pundits might be seriously underestimating the technological changes that will occur in the next fifty or one hundred years.
The paths to the future of mind-augmenting technology appear to be fanning out, the range of alternatives becoming wider and less predictable. It is possible, given past developments, that all of these paths will lead to distinct new technologies, and will precipitate significant changes in human culture. One direction seems to involve the kind of interactive, first-person fantasy amplifiers exemplified by the work of people like Alan Kay and Brenda Laurel. Engelbart's dreams of intellectual augmentation furnish a different model of how the universal tool might evolve. In the next chapter, we'll look at yet another path--one that is more connected to the history of literature than the history of machines.
Ted Nelson, our final infonaut, envisions a future in which the entire population joins the grand conversation of human culture that has heretofore been restricted to those few creators whose works have found their way to library shelves. Wild as his predictions may be, they have to be considered seriously, in light of the uncannily accurate forecasts he made back in the "old days" of personal computer history--the 1960s and 1970s.
read on to Chapter Fourteen: Xanadu, Network Culture, and Beyond

©1985 Howard Rheingold, all rights reserved worldwide.