6.844 Info


Welcome to the 2019 Edition of 6.844

Overview

6.844 was created in response to requests from grad students who wanted to take 6.034 but needed graduate-level credit.

It is a supplement to 6.034---you will take 6.034 as usual and do all of that work (lectures, labs, quizzes), and in addition attend the 6.844 session and do the work required there. That session will meet every Friday 11am-12pm in 32-155. Each week there will be a reading assignment focusing on one or more of the foundational, provocative, or intriguing papers from the research literature. You will be expected to do the reading, write up a one page response to a set of questions that will be provided with the reading, and come to class prepared to discuss your (and others') answers to those questions.

The papers will help you learn how to read original research papers in the field and will focus on the science side of AI, addressing the larger scientific questions, rather than existing tools for building applications.

The class is heavy on interaction; you will not be able to just sit back and listen. To keep the class size manageable and to encourage active class participation, we do not allow listeners.

More information about the class can be found here.

Staff

Prof. Randall Davis, Instructor, davis@mit.edu
Jack Cook, Teaching Assistant, cookj@mit.edu

Week 1

The paper below is for discussion on Friday, 13 September:

"Steps Toward AI" by Marvin Minsky, available here.

A few comments to guide your reading:

Keep in mind first that this paper was written in 1961, 58 (fifty-eight!) years ago.

As the guest editor’s comment indicates, this is very early in the birth of the modern version of the field; Minsky had been invited to write a tutorial overview.


Recall that your job is to summarize the paper in one page. Do that, and also try to comment on these things:

1. How many of the ideas Minsky mentions do you recognize as still in use?

2. Does he do a good job of laying out the structure of the field?

3. What is that structure?

4. Consider the sentence near the top of page 9 beginning “A computer can do, in a sense.…” There are several reasons why he starts off that way. List some reasons that seem compelling to you.

Week 2

Note: There is no class on Friday, 20 September. It's a student holiday.

The paper below is for discussion on Friday, 27 September. We will discuss the evolution of rule-based expert systems. The discussion will be based on your comments and insights on this paper:

Robert K. Lindsay, Bruce G. Buchanan, Edward A. Feigenbaum, and Joshua Lederberg. "DENDRAL: A Case Study of the First Expert System for Scientific Hypothesis Formation" in Artificial Intelligence 61, 2 (1993): 209-261.

The paper is available here.

This is a rather long paper, but you have extra time to review it and to consider the interesting ideas in it. Given the size and depth of the paper, it would be a bad idea to wait until the last minute to read it.


A reminder from the overview info about the course:

Your weekly write-up should not be longer than one page and should be readable by someone who hasn't read the paper. The ability to write such a review is an important skill to develop. The idea is not to include a pile of mathematical formulas or lots of code in your review. We want you to learn to extract the essential take-away message of the paper, including such things as:

1. What is the author trying to accomplish, i.e., what is the problem they are trying to solve? Why is it difficult?

2. What technical methods is the author bringing to bear?

3. Did the work succeed? What does “succeed” mean in this case?

4. If it worked, why did it work? Where it failed, why did it fail? (Failures are typically among the most interesting and revealing behaviors of a program.)


Week 3

This paper is for discussion on Friday, 4 October: "What Are Intelligence? And Why?" by Randall Davis, available here.

This is another overview paper about AI, rather than a technical examination of a particular technique or program. It tries to take a step back and answer a core question: What is it that we're talking about when we talk about intelligence? The paper suggests that intelligence is many things and has been interpreted in quite different ways from several different intellectual foundations.

Your task is to evaluate how successful the paper is in answering the questions it raises. And pay no attention to the name of the author. I expect you to be hard-headed and clear-headed in your evaluation and/or criticism.


Week 4

The paper below is for discussion on Friday, 11 October.

You've recently been learning about deep neural nets, which have been strikingly successful in computer vision, speech understanding, and a range of other classification tasks. But there is also an interesting problem with them. Deep Neural Nets are Easily Fooled, which is capitalized because that's the title of a very interesting paper, available here.

If you are off-campus, that link might not work, in which case use this one.


You may also wish to look at this web page.

Keep in mind, though, that I've seen it as well, so just repeating what you see there will not be considered a good write-up.

As usual, the point is not simply to summarize the paper, but to think about what interesting ideas are in there, describe them, and then evaluate them. Make your own judgments, and tell me what you think and why.

Week 5 -- October 18

Given the other things going on this week, I have selected a less technical paper to read. It's still challenging and needs some thought, but it's also fun. It concerns a famous argument about the possibility of computers thinking. Read it over and explain how you react to the arguments. As usual, don't just summarize the paper; summarize the issues, but then describe your response to them. Are they convincing? If so, why? If not, why not?

The paper is here.

Note that there is no easy or obvious answer here; the idea is to take the argument seriously and think about how you might respond.


Week 6 -- October 25

This week's reading seems a nice way to continue our discussion of what we mean by "understand", whether talking about people or about programs. It appeared online the day before our last class and mentions the Chinese Room thought experiment (which is why it came to the attention of one of our class members, who sent me a pointer).


It discusses efforts to design successively more challenging language-understanding tasks, created as a way of working toward the goal of programs that understand natural language (and hence, more generally, understand).


One piece of background you'll need is a basic understanding of word embeddings. This brief article summarizes a research paper (cited at the end). It introduces the idea, shows you one of the very well-known examples, and explores how robust this technique is:

[1]
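
If it helps to see the idea concretely, here is a minimal sketch of the famous "king - man + woman is close to queen" analogy. The numbers below are made up purely for illustration (real embeddings such as word2vec or GloVe are learned from large corpora and have hundreds of dimensions); the point is only to show what "nearby in vector space" means.

 import numpy as np
 
 # Toy 4-dimensional word vectors; the numbers are invented for illustration.
 vectors = {
     "king":  np.array([0.80, 0.30, 0.10, 0.90]),
     "queen": np.array([0.78, 0.32, 0.85, 0.88]),
     "man":   np.array([0.10, 0.25, 0.05, 0.85]),
     "woman": np.array([0.09, 0.28, 0.80, 0.84]),
 }
 
 def cosine(u, v):
     # Cosine similarity: 1.0 means the two vectors point the same way.
     return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
 
 # The classic analogy: king - man + woman should land closest to queen.
 target = vectors["king"] - vectors["man"] + vectors["woman"]
 best = max(vectors, key=lambda w: cosine(vectors[w], target))
 print(best)  # prints "queen" with these toy numbers

The robustness question the linked article explores is essentially whether regularities like this one hold up reliably once you move beyond a handful of well-known examples.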


In a throwback to primary school, you'll also need to understand sentence diagramming. A quick intro (or refresher):

[2]


Now imagine that a sentence is just a noun phrase followed by a verb phrase. In your writeup, write this sentence with parentheses that separate it into the noun phrase and the verb phrase:

The tall person who owns the hammer that is too heavy for Sam to lift likes chocolate.


Notice how much simpler the sentence is to understand once it's divided up into its constituent parts.


Notice how two words, "person" and "likes," that are very far apart in the original sentence can now much more easily be seen as connected to one another.

You'll need this to understand the part of the article that describes treelike representations of sentences and neural nets trained with non-sequential representations.
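
If you'd like to see that tree structure made explicit, here is a small sketch using NLTK and a toy grammar I made up; it parses a deliberately simpler sentence (so as not to give away the exercise above) into a noun phrase and a verb phrase:

 import nltk
 
 # A tiny toy grammar, invented for this illustration; real grammars of
 # English are far larger and messier.
 grammar = nltk.CFG.fromstring("""
 S  -> NP VP
 NP -> Det N | Det N PP | N
 VP -> V NP
 PP -> P NP
 Det -> 'the'
 N  -> 'person' | 'hammer' | 'chocolate'
 V  -> 'likes'
 P  -> 'with'
 """)
 
 parser = nltk.ChartParser(grammar)
 sentence = "the person with the hammer likes chocolate".split()
 
 # Print the constituent structure of each parse found.
 for tree in parser.parse(sentence):
     tree.pretty_print()

The parse splits S into an NP, "the person with the hammer," and a VP, "likes chocolate," so "person" and "likes" end up adjacent near the top of the tree even though other words separate them in the flat word sequence; that is the kind of structure the article's treelike representations are meant to capture.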


With that background, the main article for this week is from Quanta Magazine:


https://www.quantamagazine.org/machines-beat-humans-on-a-reading-test-but-do-they-understand-20191017/


Points for your writeup (in addition to the sentence re-writing above):

1. Explain the challenge tasks the article describes in the effort to reach a better level of understanding in software. Do these seem to be working? Why or why not?


2. These efforts are sometimes described as a cat-and-mouse game. How does that apply here?


3. Evaluate the claim that neural network building is now a well-defined engineering practice, in the sense that the right architecture is easily determined, built, and trained. If not, why not?


Consider this from the article:

The only problem is that perfect rulebooks don't exist, because natural language is far too complex and haphazard to be reduced to a rigid set of specifications. Take syntax, for example: the rules (and rules of thumb) that define how words group into meaningful sentences. The phrase "colorless green ideas sleep furiously" has perfect syntax, but any natural speaker knows it's nonsense. What prewritten rulebook could capture this "unwritten" fact about natural language -- or innumerable others?


4. Presumably you understood the sentence as meaningless in the literal sense (i.e., leaving aside poetic interpretations). How did you do that? How did you do it in a way that would allow you to do it for innumerable other such sentences? Do you have a rule book full of unwritten facts in your head? If not, how did you figure out that this sentence (and others like it) is problematic? What do you know?

