6.S899 Info

From 6.034 Wiki




Week 1

Steps Toward AI by Marvin Minsky. Available here [1]

A few comments to guide your reading.

Keep in mind first that this paper was written in 1961, 57 (fifty-seven!) years ago. I suspect that's before most of you had even entered middle school.

As the guest editor's comment indicates, this is very early on in the birth of the modern version of the field; Minsky had been invited to write a tutorial overview.

As this is not a technical paper about an idea, technique or program, the standard format for writing about the paper doesn’t apply.

Recall your job is to summarize the paper in one page. Do that, and try to comment on these things as well:

a) how many of the ideas Minsky mentions do you recognize as still in use?

b) does he do a good job of laying out the structure of the field?

c) consider the sentence near the top of page 9 beginning “A computer can do, in a sense…” There are several reasons why he starts off that way. List some that seem compelling to you.

Week 2 -- Sept 28th

Note: there is no class on September 21. It's a student holiday.

The paper below is for discussion on Friday, 28 September (yes, right after the 6.034 quiz). We will discuss the evolution of rule-based expert systems. The discussion will be based on your comments and insights on the paper:

Robert K. Lindsay, Bruce G. Buchanan, Edward A. Feigenbaum, and Joshua Lederberg. "DENDRAL: A Case Study of the First Expert System for Scientific Hypothesis Formation." in Artificial Intelligence 61, 2 (1993): 209-261.

This is a rather large paper, but you have two weeks to review it and to consider the interesting ideas in it. Given the size and depth of the paper, it would be a bad idea to wait until the last minute to read it.

The paper is available at


Also: as several people found out, it's a very bad idea to wait until just before class to try to produce a hardcopy of your writeup. Printers can be hard to find and can be ornery. Plan ahead.

Week 3 -- October 5

What Are Intelligence? And Why? by Randall Davis. Available here [2]

This is another overview paper on AI, rather than a technical examination of a particular technique or program. It takes a step back to answer a core question -- what is it that we're talking about when we talk about intelligence? The paper suggests that intelligence is many things and has been interpreted differently from several distinct intellectual foundations.

Your task is to evaluate how successful the paper is in answering the questions it raises. And pay no attention to the name of the author. I expect you to be hard-headed and clear-headed in your evaluation and/or criticisms.

Week 4 -- October 12

Friday's lecture is about deep neural nets, which have been strikingly successful in computer vision, speech understanding, and a range of other classification tasks. But there is also an interesting problem with them. Deep Neural Nets are Easily Fooled, which is capitalized because that's the title of a very interesting paper, available here:


[If you are off-campus, that link might not work, in which case use this one:]


You may also wish to look at this web page:


Keep in mind, though, that I've seen it as well, so simply repeating what you see there will not be considered a good write-up.

As usual the point is not to simply summarize the paper, but to think about what interesting ideas are in there, describe those, and then evaluate them. Make your own judgments and tell me what you think and why.

Week 5 -- October 19

Given the other things going on this week, I have selected a less technical paper. It's still challenging and needs some thought, but it's also fun. It concerns a famous argument about the possibility of computers thinking. Read it over and explain how you react to the arguments. Please don't just summarize the paper; write out your own response to it. Is it convincing? If not, why not?


Note that there is no easy or obvious answer here; the idea is to take the argument seriously and think about how you might respond.

Week 6 -- October 26

You'll be learning about genetic algorithms this week. The first of these papers introduces the subject generally and hence overlaps somewhat with the class lecture; that's OK -- it's sometimes effective to hear about something twice from two different sources, as it sticks better. The second paper describes an example of GAs applied to a real-world problem.

The papers:



You should still hand in only one page of writeup. Use the first paper to learn about the technique. In writing about it, consider these issues:

a) Think carefully about where the analogies to biology are informative and where they can be misleading.

b) How might you improve the fitness function for the maze path problem?

In the tax evasion paper, what is the co-evolution that is going on? Does it work? (Don't worry too much about section 3.4).

In addition to answering these specific questions, identify your own interesting issues and discuss them.
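If it helps to have a concrete picture of what a fitness function is before tackling the question above, here is a minimal sketch for a maze path problem. The maze, the move encoding, and the scoring weights are all invented for illustration; they are not taken from the papers.

```python
# Hypothetical illustration: scoring one candidate maze path for a GA.
# A candidate is a string of moves; fitness rewards progress toward the
# goal and penalizes wall collisions and wasted steps.

GOAL = (3, 3)
WALLS = {(1, 1), (2, 1), (1, 2)}                 # cells the path may not enter
MOVES = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}

def fitness(path, start=(0, 0)):
    """Higher is better. Walk the path; bumping a wall or the border
    counts as a collision and leaves the position unchanged."""
    x, y = start
    collisions = 0
    for move in path:
        dx, dy = MOVES[move]
        nx, ny = x + dx, y + dy
        if (nx, ny) in WALLS or not (0 <= nx <= 3 and 0 <= ny <= 3):
            collisions += 1          # bumped something; stay put
        else:
            x, y = nx, ny
        if (x, y) == GOAL:
            break
    distance = abs(GOAL[0] - x) + abs(GOAL[1] - y)   # Manhattan distance
    return 100 - 10 * distance - 2 * collisions - len(path)
```

Notice how much the behavior of a GA depends on these arbitrary-looking weights -- which is exactly why the question of improving the fitness function is worth taking seriously.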

Week 7 -- November 2

You've been learning about near miss as an interesting model of learning. One of the issues in this technique is the origin of the near misses -- where do they come from? How do we know what near misses to supply?

This paper offers a real use of near-miss learning to inform a sketch recognition system that works from descriptions of shapes. The difficult part is getting those descriptions and getting them to be correct.
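To make the technique concrete, here is a minimal sketch of near-miss learning in the Winston "arch" style. The flat feature-dictionary encoding and the feature names are hypothetical; the paper's system works from structural descriptions of shapes, not feature vectors.

```python
# Hypothetical sketch of near-miss learning: a near miss differs from
# the current concept in exactly one feature, and that difference
# becomes a hard "must-not" constraint.

def refine(required, forbidden, near_miss):
    """required: dict feature -> value from positive examples.
    forbidden: dict feature -> set of disallowed values.
    Returns an updated copy of forbidden."""
    diffs = [f for f, v in required.items() if near_miss.get(f) != v]
    if len(diffs) != 1:
        raise ValueError("not a near miss: should differ in exactly one feature")
    f = diffs[0]
    forbidden = {k: set(v) for k, v in forbidden.items()}   # copy
    forbidden.setdefault(f, set()).add(near_miss[f])
    return forbidden

# An "arch": two supports with a lintel on top, supports not touching.
arch = {"supports": 2, "top": "lintel", "supports_touch": False}
# A near miss: identical except the supports touch.
near_miss = {"supports": 2, "top": "lintel", "supports_touch": True}
```

The sketch also shows why the origin of near misses matters: the teacher (or system) must produce examples that differ in exactly one relevant respect, which is the hard part the paper addresses.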


We've read several overview papers thus far in the course; this one is a description of a particular system and a particular approach to learning. That makes the framework that I supplied at the beginning of the term more relevant:

1. What is the author trying to accomplish?

2. What technical methods is the author bringing to bear?

3. Did the work succeed? What does “succeed” mean in this case?

4. If it worked, why did it work? Where it failed, why did it fail? (Failures are typically among the most interesting and revealing behaviors of a program.)

And remember, don't read the paper back to me; read it and think about it and evaluate the ideas.

Week 8 -- November 9

This Wednesday Professor Winston will be giving an overview of a core topic in AI -- representation -- that has recently been somewhat overshadowed by the interest in statistical models (e.g., neural nets, SVMs, etc.). It's important to understand how knowledge might be represented explicitly in a program, rather than indirectly (e.g., in the weights of several million neurons). His lecture will review several of the key representations that have been developed.

The paper for Friday is an overview of the topic of representation in general, trying to consider what a representation is by thinking about the fundamental roles it plays. The paper lays out five such roles -- consider each of them and explain how it does (or does not) help you understand the concept of knowledge representation. The paper assumes a familiarity with traditional knowledge representations of the sort Prof. Winston will describe on Wednesday, so pay attention.


(You will find Table I of the paper familiar; it was used in the later paper "What Are Intelligence? And Why?" that we read earlier in the term.)

Week 9 -- November 16

In 6.034 we have seen several models of learning (nearest neighbor, neural nets, SVMs, etc.). All of them share the property that they require large numbers of examples in order to be effective. Yet human learning seems nothing like that. It's remarkable how much we manage to learn from just a few examples, sometimes just one. How might this work?


As the paper indicates:

A central challenge is to explain these two aspects of human-level concept learning: How do people learn new concepts from just one or a few examples? And how do people learn such abstract, rich, and flexible representations? An even greater challenge arises when putting them together: How can learning succeed from such sparse data yet also produce such rich representations? For any theory of learning (4, 14–16), fitting a more complicated model requires more data, not less, in order to achieve some measure of good generalization, usually the difference in performance between new and old examples. Nonetheless, people seem to navigate this trade-off with remarkable agility, learning rich concepts that generalize well from sparse data.

The paper for this week explores this notion, proposes a mechanism, and demonstrates its performance on a number of problems. It's one of the more challenging papers we'll cover. Don't get lost in the math (there isn't much) or the mechanism (probabilistic programming); instead try to determine why this works. Why is the program successful? As usual there is no one right answer here, but see what you can come up with.

An old text describing the Spanish conquest of the Aztecs and Incas provides an obscure hint at one line of thought: "The Conquistadors on their Spanish Horses were seen as Centaurs, so at one were they with their horses during the conquest of Mexico and Peru."

Note also that the very last page of the pdf has pointers to supplemental material. This is becoming more commonplace, as authors are asked to include more information to support their claims. Sometimes the supplemental material is raw data and code, sometimes it's pointers to additional papers/memos. Be sure to check out the material available so you'll know to do this in the future.

Week 10 -- November 30

With the skyrocketing interest in AI and machine learning has come a recognition that the models we create can be biased, sometimes by accident and at times even despite our best efforts. Given the application of these models to real-world issues -- like being approved or turned down for a loan, or being granted parole or not -- these systems can have significant real-world consequences.

This week we'll tiptoe into a quite deep and challenging subject: the fairness of algorithms.

We'll also proceed a little differently, with a two part exercise to be done before class.

Note that order here is very important: do #1 before #2. The papers in #2 will likely change your view of the answer to #1, but the whole idea is for you to think about this on your own, trusting your own intuitions on the subject, and grappling with the issue the way everyone has to.

Don't try to rewrite your answer to #1 after reading the papers; your answers to this part will be graded on whether you made a good faith effort to think about the issue on your own.

1) For the sake of concreteness, select either one of these systems:

a) a system that takes data about applicants and decides who to admit to a very competitive college (perhaps one not far from here that dates back to 1635)
b) a system that decides whether to approve a loan request

Describe what it would mean for the system you selected to be "fair." Try to be as explicit as you can; a good definition would be computational, so it could be applied effortlessly to double-check the outcome of the system. At the very least be explicit about what kinds of things make an algorithm fair or unfair. That is, what are your criteria for something being fair?

Make it at most one page long.
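As one illustration of what a "computational" definition of fairness might look like, here is a sketch of a single common criterion, demographic parity (equal approval rates across groups). This is one of many candidate definitions, not an endorsement of it, and the data and the 0.05 tolerance are invented for illustration.

```python
# Hypothetical sketch: checking demographic parity on a batch of
# loan decisions, each recorded as a (group, approved) pair.

def approval_rates(decisions):
    """Return the fraction approved for each group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def satisfies_parity(decisions, tolerance=0.05):
    """Demographic parity: no two groups' approval rates differ by
    more than the tolerance."""
    rates = approval_rates(decisions).values()
    return max(rates) - min(rates) <= tolerance

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
# Group A approves 2/3, group B 1/3: far outside a 0.05 tolerance.
```

Note how much this sketch leaves out -- whether equal rates are even the right goal, how groups are defined, what the tolerance should be. Those gaps are exactly what your one-page answer should grapple with.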

2) Read and comment on both of these papers in the usual 1-page summary. In the summary, please do not report what the papers say; assume I have read them (I have) and tell me what you think about what they say.



In the Gentle Introduction paper pay particular attention to the different models of fairness. Does any one of them seem particularly appropriate or inappropriate to you?

In the Semantics paper, comment in particular on the reference in the abstract to a bias that is "veridical". What does that mean and why does it matter? (Yes, the second half of this question is vague, but you should be used to that from me by now. It means I want you to think about it carefully on your own.)

Just to be clear: this week you'll hand in two one-page writeups.

Week 11 -- December 7

Our final class will follow up on algorithmic fairness and look at a related issue that has been increasing in importance -- ethics in AI.

First, a followup on an issue we didn't have time for last class: consider this statement from the Narayanan paper:

Greenwald et al. found extreme effects of race as indicated simply by name. A bundle of names associated with being European American was found to be significantly more easily associated with pleasant than unpleasant terms, compared with a bundle of African-American names.

The justified implication here is of bias against African-Americans. But go a step deeper than that and ask some appropriate followup questions. To get you started: does the paper say who found the European names more easily associated with pleasant terms? Make the obvious guess and then formulate a followup question or two to see whether you can put this in a broader context, and indicate what interesting insights the answers to your questions might reveal. [Yes, it's Prof. Davis being vague again, but it's a good exercise in going past the obvious inference.]

For the ethical issues study, we'll want to consider issues like, What does it mean to do ethical research in AI? What for that matter does it mean to do ethical research in any technology?

One good, easy-to-read source of guidance on this is available from the Markkula Center:


In particular, read at least these two sections:

1) Overview of Ethics in Tech Practice
2) Framework For Ethical Decision Making

Then read about two recent projects that produced controversy for Google:

The Maven project --


A search engine for China --


Answer these questions about those projects and the articles.

a) What actually is project Maven? Do you consider it unethical? Why or why not (feel free to use the framework in the Markkula Center materials).

b) What is the image at the beginning of the Foreign Policy story and what does it suggest about the publication's view of the work?

c) Evaluate this claim from that article:

Officials stress that partnering with commercial industry on AI is a national security priority, particularly as potential U.S. adversaries such as Russia and China ramp up investments in that area. China, in particular, is dedicating $150 billion to AI through 2030, Floyd said. Meanwhile, the Defense Department is spending $7.4 billion on AI, big data, and the cloud in fiscal 2017, according to Govini.

d) It's easy to criticize Google's efforts to build a censored search engine for China, as there are numerous problems with it. But take the other side -- what possible benefits might come from it? (Serious answers only, please. "It'll make Google a lot of money" is not a serious answer, even if true.) In all ethical issues it's important (ethically!) to be able to see both sides of an issue. Ethics questions typically involve careful tradeoffs and balancing acts. You have to be able to see both sides in order to judge the tradeoffs.
