ACM Ubiquity Vol. 9, Issue 36 October 7 - 13, 2008
Talking with Terry Winograd
Convergence, ambient technology, and success in innovation
Terry Winograd is Professor of Computer Science at Stanford University, where he directs the program
on human-computer interaction. His SHRDLU program, developed at the MIT AI Lab, was one of the early
explorations in natural language understanding by computers. His book with Fernando Flores,
Understanding Computers and Cognition, critiques the underlying assumptions of AI and much of
computer system design, introducing directions from phenomenology. He was a founder and national
president of Computer Professionals for Social Responsibility, and is currently on sabbatical at Google, Inc.
What has surprised you most about the turn that information technology has taken?
In some sense, the way that the Web unfolded was a surprise. I was using the Internet
and FTP and Telnet 20 years before the Web. When I first saw the Web, which was pre-
Mosaic, I didn't find it very interesting. The first time I saw Mosaic, I was surprised at how
totally different the feeling was. It was immediately obvious that the introduction of
graphics with text would make a big difference and that it was a new phenomenon. But I
didn't have a feel for the way it would go commercial and spawn all the e-business and
other companies, partly because that's not an area that I ever thought about much. I was
still thinking of it in terms of academic computing people, which, of course, didn't turn out
to be its main audience at all.
Why did you find its precursor uninteresting?
Because it didn't have pictures. Transferring text and following links to other pieces of
text seemed very academic. Putting images in completely changed the feel. That was at
a point where any exchange of pictures was a mess, because you had to have the right
applications that knew the right formats. The idea that you would just click and a picture
would be on your screen was a huge difference in fluency compared to what was there. I
think the key thing was having graphics and layout, all the stuff HTML brought.
I'll interject two things: One is the Web, which was amazing, and another is a search
engine like Google.
What surprised me, which Google was part of, is that superficial search techniques over
large bodies of stuff could get you what you wanted. I grew up in the AI tradition, where
you have a complete conceptual model, and the information retrieval tradition, where you
have complex vectors of key terms and Boolean queries. The idea that you can index
billions of pages and look for a word and get what you want is quite a trick. To put it in
more abstract terms, it's the power of using simple techniques over very large numbers
versus doing carefully constructed systematic analysis.
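The "simple techniques over very large numbers" idea can be illustrated with an inverted index, the basic structure behind word lookup at web scale: each word maps to the set of documents containing it, so a query is a dictionary access rather than any deep conceptual analysis. This is a minimal sketch with made-up documents, not a description of Google's actual system.

```python
# Minimal inverted-index sketch: map each word to the set of
# document ids that contain it, then answer queries by lookup.
# The documents and ids here are purely illustrative.
from collections import defaultdict

docs = {
    1: "natural language understanding by computers",
    2: "information retrieval with boolean queries",
    3: "understanding computers and cognition",
}

# Build the index: one pass over all documents.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(word):
    """Return the sorted ids of documents containing the word."""
    return sorted(index.get(word.lower(), set()))

print(search("computers"))  # -> [1, 3]
```

The point of the sketch is that query time does not grow with the sophistication of the analysis, only with the size of the result, which is why indexing billions of pages and "looking for a word" works at all.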
You were an undergraduate mathematics major at a liberal arts college. Did you start out
as a mathematician?
I was never a mathematician, really. When I got to graduate school at MIT, I realized that
most sophomores probably knew more math than I did as a beginning graduate student.
I was formally in the applied math department because my advisor, Seymour Papert,
was in the applied math department. In fact, I was doing what would now be called
computer science. But of course it wasn't called that in those days.
Have you ever thought of yourself as a linguist?
Linguist is an interesting term. Certainly I was interested in linguistics. When I came to