The opportunity to do graduate studies didn’t even enter my mind. Women were not really invited, and the kind of philosophy I was interested in was quite different from the mainstream at that time. I didn’t begin my Ph.D. work at Stanford until I was about 35, and I had already experienced 15 years working in the data processing and computing environment as a programmer, systems analyst, and manager. 

At that time, computers were just beginning to become interactive — that is, using the immediate responses of the human running the program to determine the program's next actions. Usually these were simple choices, often coded as language responses, numbers, or "yes" or "no." This was difficult because it required predicting human behavior. I wanted to study this topic when I entered Stanford. Unfortunately, few faculty had studied it except through research on human conversation, and no one had applied it to the general problem of human-computer interaction.

I went to a professor teaching a course on natural language and asked, "Who could I work with on this?" They said there was this organization called Xerox PARC. I spent two or three years there working on how people use analogy to think about the things they're learning. This work would be integrated into AI programs in which the program acts as the teacher, known as intelligent tutoring systems.

I received my Ph.D. in 1983 on a topic I created called cognitive ergonomics, a combination of computer science, psychology, and linguistics. My dissertation was on how people who had used a typewriter learned to use computer text editors. The idea is that if you're going to teach someone, you need to know what they're going to do wrong and how to correct it, and teach concepts that will be analogous to their prior experience. That seems really simple, but it was very controversial in the AI community at the time — and still is!
