6.S966: A Graduate Section for 6.034

Prospectus

Leader: Gerald Jay Sussman

This term I will experimentally run a graduate section of 6.034, the Introduction to Artificial Intelligence taught by Patrick Henry Winston. You will receive graduate credit if you sign up for 6.S966. However, you will be required to do a bit more work: in addition to attending the three lectures and one recitation of 6.034 each week and doing the 6.034 homework and quizzes, you will be required to attend an extra section led by me. This section will meet on Fridays from 11:00 AM to noon, just after the Friday lecture, in room 34-303.

While the final details for this extra section are not yet determined, each week you will be required to read a research paper selected to elaborate on the material presented in 6.034 for that week, and to write a one-page review of that paper to be handed in (on printed paper, not by email!) at the start of the Friday meeting. With that preparation we will discuss the week's material, elaborated by the paper you have read and commented on.

Your weekly review should not be longer than one page. Your review should be readable by someone who has not read the paper that is being reviewed. The ability to write such a review is an important skill for you to develop. It is not helpful to include a pile of mathematical formulas or lots of code in your review. What I want is for you to learn to extract the essential take-away message of the paper:

  1. What is the author trying to accomplish?
  2. What technical methods is the author bringing to bear?
  3. How successful was the resulting work?
  4. Is there some lesson for us in the paper?

If you need graduate credit, you can drop 6.034 and sign up for 6.S966 (12 units) on registration day. If you are unsure about whether you want to take 6.S966, you can decide later. Either way, attend the first session, this Friday, 8 September. I will say more about what will be involved and answer questions.

Week 1:

For the meeting on Friday, 15 September you should read the famous paper "Steps Toward Artificial Intelligence", by Marvin Minsky, in Proceedings of the IRE, January 1961.

You should write a 1-page review of this paper and hand it in at the beginning of the meeting (on paper!).

You can find a pdf of this paper at

https://courses.csail.mit.edu/6.803/pdf/steps.pdf

Week 2:

On Friday, 22 September we will discuss the evolution of "rule-based expert systems". The discussion will be based on your reviews of the paper:

Robert K. Lindsay, Bruce G. Buchanan, Edward A. Feigenbaum, and Joshua Lederberg. "DENDRAL: A Case Study of the First Expert System for Scientific Hypothesis Formation." in Artificial Intelligence 61, 2 (1993): 209-261.

This is a rather large paper, but it is full of deep ideas.

The paper is available at

http://web.mit.edu/6.034/www/6.s966/dendral-history.pdf

Weeks 3, 4:

Whoops! I forgot that the MIT calendar says that 29 September is a "Student Holiday -- no classes." So our next class will be on 6 October rather than 29 September.

On Friday, 6 October we will discuss constraint propagation and efficient dependency-directed backtracking. The discussion will be based on your reviews of the paper:

Alexey Radul and Gerald Jay Sussman; "The Art of the Propagator," MIT-CSAIL-TR-2009-002; Abridged version in Proc. 2009 International Lisp Conference, March 2009.

I am an author of this paper! Please do not feel that you have to be nice to me: I enjoy my ideas being criticized and I do not take offence. So please, let's fight, if that seems to be appropriate.

The paper is available at

http://web.mit.edu/6.034/www/6.s966/MIT-CSAIL-TR-2009-002.pdf
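
To give you a concrete feel for the cells-and-propagators idea before you read, here is a minimal sketch in Python (the paper itself uses Scheme; the names and the crude contradiction handling here are my simplified illustrations, not the paper's actual code):

  class Cell:
      """Holds a value (None = no information yet) and alerts watchers."""
      def __init__(self):
          self.content = None
          self.neighbors = []

      def add_content(self, value):
          if value is None or value == self.content:
              return                              # nothing new to report
          if self.content is not None:
              raise ValueError("contradiction")   # the paper tracks dependencies instead
          self.content = value
          for alert in self.neighbors:
              alert()                             # wake every propagator watching this cell

  def propagator(inputs, output, f):
      """When all inputs have content, compute f and add it to output."""
      def run():
          values = [c.content for c in inputs]
          if all(v is not None for v in values):
              output.add_content(f(*values))
      for c in inputs:
          c.neighbors.append(run)
      run()

  def adder(a, b, total):
      """A multidirectional constraint: any two cells determine the third."""
      propagator([a, b], total, lambda x, y: x + y)
      propagator([total, a], b, lambda t, x: t - x)
      propagator([total, b], a, lambda t, y: t - y)

  a, b, total = Cell(), Cell(), Cell()
  adder(a, b, total)
  total.add_content(10)
  a.add_content(3)
  print(b.content)    # 7, deduced by running the constraint "backwards"

The real system replaces the crude "contradiction" error with partial-information structures and dependency tracking, which is where efficient dependency-directed backtracking comes from.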


Week 5:

On Friday, 13 October we begin to think about learning as well as problem solving. Recent astonishing progress in "machine learning" has eclipsed much of the traditional work on symbolic thinking. But problems remain: the systems that result from machine-learning research have no concept of meaning--the "words" do not have referents outside of the ways in which they are used. Such systems may perform well on many tasks, but they do not smoothly interface with systems that are organized around modeling the world, which is probably essential to solving really deep problems of common sense and science.

The discussion will be based on your reviews of the paper:

"Building Machines That Learn and Think Like People", by Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, Samuel J. Gershman

The paper is available at https://arxiv.org/abs/1604.00289 and

http://web.mit.edu/6.034/www/6.s966/arXiv1604.00289v3.pdf


Week 6:

On Friday, 20 October we will think about neural nets. Unfortunately, most of the papers about neural nets, including the most famous ones, are pretty awful. They are generally of the form "I made a net with the ... architecture and ... hyperparameters. I trained it on the ... dataset. Look! Its performance is spectacular and it beats ..., ..., and ... by some margin." There is little analysis of how or why the particular system performs as described.

As a consequence, I am assigning two short papers that may provide some insight into this phenomenon, by highlighting the spectacular ways that a neural-network system may fail. The papers are:

"Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images", by Anh Nguyen, Jason Yosinski, and Jeff Clune:

https://arxiv.org/abs/1412.1897

"Understanding deep learning requires rethinking generalization", by Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals:

https://arxiv.org/abs/1611.03530

I have placed copies of these papers on

http://web.mit.edu/6.034/www/6.s966/2014_Nguyen_et_al_Deep_Neural_Nets_are_Easily_Fooled.pdf
http://web.mit.edu/6.034/www/6.s966/2017_Zhang_et_al_T_Understanding_deep_learning_requires_rethinking_generalization.pdf
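
If you want to see the heart of the Zhang et al. observation for yourself, it fits in a few lines. This is a miniature sketch, assuming you have numpy and scikit-learn installed; the sizes and settings are arbitrary choices of mine, not taken from the paper:

  import numpy as np
  from sklearn.neural_network import MLPClassifier

  rng = np.random.default_rng(0)
  X = rng.normal(size=(200, 20))       # random inputs
  y = rng.integers(0, 2, size=200)     # random labels, unrelated to X

  # An over-parameterized net trained on pure noise.
  net = MLPClassifier(hidden_layer_sizes=(512,), max_iter=5000, random_state=0)
  net.fit(X, y)
  print("training accuracy on random labels:", net.score(X, y))

The training accuracy typically comes out near 1.0: the net happily memorizes noise it cannot possibly generalize from, so a small training error by itself tells you nothing about generalization.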

On a more positive note, I suggest that you read the blog post by Christopher Olah:

http://colah.github.io/posts/2015-09-NN-Types-FP/

I still want to see only one page from you. Please do not try to summarize the papers. (Some of you seem to want to do that!) I want your reaction to the assignment: What do you think is really going on here? Do you think that this mess will soon clarify? Why or why not?


Week 7:

On Monday, 23 October we will have a lecture about genetic algorithms. These are computational mechanisms that attempt to develop solutions to problems by variation, mixing, proliferation, and selection, in a population of competing partial solutions. This idea is inspired by our very fruitful understanding of biological evolution. Genetic algorithms have been rather successful at "discovering" interesting and possibly useful optimization results. However, they are part of a bigger set of evolutionary strategies. To dig a little deeper into this idea I am assigning, for the Friday, 27 October class, the following early paper on this subject:

Thomas Bäck and Hans-Paul Schwefel; "An overview of evolutionary algorithms for parameter optimization"; in Evolutionary Computation, Volume 1, Issue 1, Spring 1993, Pages 1-23.

I am putting the paper up for you to read at:

http://web.mit.edu/6.034/www/6.s966/baeck-ec93.pdf
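
As a warm-up for the paper, the whole variation/mixing/proliferation/selection loop can be written down in a few lines. Here is a minimal sketch in Python on a toy "one-max" problem (maximize the number of 1-bits); every parameter choice below is mine, for illustration only:

  import random

  GENOME, POP, GENS = 40, 30, 60

  def fitness(g):
      return sum(g)                          # toy objective: count the 1-bits

  def crossover(p1, p2):
      cut = random.randrange(1, GENOME)
      return p1[:cut] + p2[cut:]             # mixing: splice two parents

  def mutate(g, rate=1.0 / GENOME):
      return [bit ^ (random.random() < rate) for bit in g]   # variation: flip bits

  population = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
  for _ in range(GENS):
      population.sort(key=fitness, reverse=True)
      parents = population[:POP // 2]        # selection: keep the fitter half
      children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP - len(parents))]
      population = parents + children        # proliferation of the fit

  print(max(fitness(g) for g in population))  # approaches 40

The evolutionary strategies surveyed in the paper differ in exactly these design choices: how selection is done, how much mixing versus mutation, and whether the mutation rates themselves are allowed to evolve.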

Week 8:

On Friday, 27 October 2017 Patrick will tell us about Support-Vector Machines, a rather nice mechanism that we CAN understand! Of course, things that we understand are described precisely and concisely with mathematics, and this can become an obscure cottage industry. In this case I hunted around for a relatively elementary description that is clear and readable. So on Friday, 3 November 2017 we will discuss the paper:

Javier M. Moguerza, Alberto Muñoz, "Support Vector Machines with Applications," in Statistical Science, Vol. 21, No. 3, pp. 322-336 (2006).

The paper is available at:

https://arxiv.org/abs/math/0612817

I also put it up on:

http://web.mit.edu/6.034/www/6.s966/MoguerzaMunoz-0612817.pdf
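
Since the whole point is that SVMs can be stated precisely and concisely, it may help to have the standard soft-margin formulation in front of you while you read (this is the textbook form, not quoted from the paper):

  \min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
  \quad\text{subject to}\quad y_i\,(w\cdot x_i + b)\ \ge\ 1-\xi_i,\qquad \xi_i \ge 0.

Here the x_i are training points with labels y_i in {-1, +1}: the first term maximizes the margin 2/||w||, the slack variables \xi_i let some points violate the margin, and C prices those violations. Replacing the dot product with a kernel k(x_i, x_j) gives the nonlinear versions the paper discusses.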

Weeks 9, 10:

Sorry; I forgot that 10 November is the Veterans Day holiday. So I am moving the date of this assignment to 17 November. GJS

We have been working pretty hard, reading some mathematically deep works about learning. I think it is time to take a break for some fun. So for 17 November 2017 I want you to examine some classical papers that argue that "strong AI" is impossible.

The first paper is

J. R. Lucas; "Minds, Machines and Gödel", in Philosophy Vol. 36, No. 137 (Apr. - Jul., 1961), pp. 112-127.

In this paper Lucas makes the following argument: Gödel showed that for any consistent system that contains arithmetic there is a proposition (roughly "This proposition cannot be proved") which cannot be proved or disproved in the system. (Note that if it could be proved the system would be inconsistent, and if it could be disproved the system would also be inconsistent.) However, it is apparent to a human mathematician that the proposition is "True" because it cannot be proved in the system! From this he concludes that the human is superior to the machine, because the machine cannot lift itself out of its formal system to see the truth of the proposition. The actual printed paper by Lucas is in JSTOR.

MIT people can access it at:

https://www.jstor.org/stable/3749270?seq=1#page_scan_tab_contents

However, there is an html version that is freely available at:

http://users.ox.ac.uk/~jrlucas/mmg.html


Another paper is:

Hubert L. Dreyfus, "Alchemy and Artificial Intelligence", December 1965, Rand Corporation technical report.

In this screed Dreyfus claims that some AI researchers are dishonest about the difficulties. In particular he complains about the early computer chess programs. He is convinced that chess is so hard that "significant developments" in chess playing will have to wait for an entirely new kind of computer. Of course, it was fun for me to watch Professor Dreyfus get trounced by Richard Greenblatt's chess program in the AI Laboratory of MIT Project MAC in 1967 (I think...). (Part of the impetus for Greenblatt to write the program was to beat Dreyfus!)

This paper is available on

http://web.mit.edu/6.034/www/6.s966/Dreyfus-AlchemyAndArtificialIntelligence-P3244.pdf

These are both well-written papers. The Dreyfus one is rather long, but easy to read. I want you to consider to what extent these authors were possibly right or wrong, in the light of the current state of AI.


Yet another famous paper along these lines is:

John R. Searle; "Minds, Brains, and Programs"; in Behavioral and Brain Sciences, Vol. 3, No. 3 (1980), pp. 417-457.

Here we find the famous "Chinese Room" example. I hope you enjoy it.

This paper is available on

http://web.mit.edu/6.034/www/6.s966/Searle10.1.1.83.5248.pdf

Weeks 11, 12:

Have a nice Thanksgiving.

Our next meeting is Friday, 1 December 2017. We are nearing the end of the term. As in the beginning, I want you to read a beautiful paper by Marvin Minsky. The paper is:

Marvin Lee Minsky; "Logical Versus Analogical, or Symbolic Versus Connectionist, or Neat Versus Scruffy", in AI Magazine Volume 12 Number 2 (1991).

It is available at the URL

https://www.aaai.org/ojs/index.php/aimagazine/article/view/894/812

It is also available at

http://web.mit.edu/6.034/www/6.s966/Minsky-NeatVsScruffy.pdf

Have fun.

Week 13:

Well, we are nearing the end of the term. Our next meeting is 8 December 2017. I hope that you have been enjoying this graduate seminar attached to Patrick Winston's undergraduate subject. Please be sure that you have given me a one-page writeup for each of the sessions that we have had.

For this week I thought it would be fun to think about what AI and linguistics can say about music, and what music can say about AI and linguistics. So I have two papers for you to read. One is a deep paper by Marvin Minsky and the other is a rather radical hypothesis about the similarity of music to language by linguists Katz and Pesetsky (hereafter KP). Each of these papers is a technical challenge to read, because they depend on knowledge outside of the papers, but the challenge is well worth the effort. The Minsky paper expects that you are familiar with some music theory, and the KP paper expects that you know some linguistics and a related linguistic theory of music called "Generative theory of tonal music" (GTTM). You can find out about GTTM in the Wikipedia article by that name.

Anyway, here are the papers, with URLs where you can retrieve them:


Jonah Katz and David Pesetsky; "The Identity Thesis for Language and Music"; in Sounds and Structures, Freie Universität Berlin, 2009.

http://web.mit.edu/6.034/www/6.s966/katzEtAl_11_The-Identity-.3.pdf


Marvin Minsky; "Music, Mind, and Meaning"; in Computer Music Journal, Fall 1981, Vol. 5, Number 3.

http://web.mit.edu/6.034/www/6.s966/Minsky-MusicMindMeaning.pdf