From 6.034 Wiki
This lab is due by Thursday, October 14th, at 10:00 pm.
To work on this lab, you will need to get the code:
- You can view it at: http://web.mit.edu/6.034/www/labs/lab5/
- Download it as a ZIP file: http://web.mit.edu/6.034/www/labs/lab5/lab5.zip
- Or, on Athena, attach 6.034 and copy it from /mit/6.034/www/labs/lab5/.
This lab covers k-Nearest Neighbors and Identification Trees. Your answers for this lab belong in the main file lab5.py.
During Obama's visit to MIT, you get a chance to impress him with your analytical thinking. Now, he has hired you to do some political modeling for him. He seems to surround himself with smart people that way.
He takes a moment out of his busy day to explain what you need to do. "I need a better way to tell which of my plans are going to be supported by Congress," he explains. "Do you think we can get a model of Democrats and Republicans in Congress, and which votes separate them the most?"
"Yes, we can!" you answer.
You acquire the data on how everyone in the previous Senate and House of Representatives voted on every issue. (These data are available in machine-readable form via voteview.com. We've included it in the lab directory, in the files beginning with H110 and S110.)
data_reader.py contains functions for reading data in this format.
read_congress_data("FILENAME.ord") reads a specially-formatted file that gives information about each Congressperson and the votes they cast. It returns a list of dictionaries, one for each member of Congress, including the following items:
- 'name': The name of the Congressperson.
- 'state': The state they represent.
- 'party': The party that they were elected under.
- 'votes': The votes that they cast, as a list of numbers. 1 represents a "yea" vote, -1 represents "nay", and 0 represents either that they abstained, were absent, or were not a member of Congress at the time.
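For illustration, one entry in the returned list might look like the following. The names and votes here are made up, not taken from the real data files:

```python
# Hypothetical example of one dictionary from read_congress_data("S110.ord").
# The actual names, states, and vote lists come from the data files.
example_member = {
    'name': 'SMITH',
    'state': 'MASSACH',
    'party': 'Democrat',
    'votes': [1, -1, 0, 1],   # yea, nay, absent/abstained, yea
}

# For instance, counting how often this member voted "yea":
yea_count = sum(1 for v in example_member['votes'] if v == 1)
```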
To make sense of the votes, you will also need information about what they were voting on. This is provided by read_vote_data("FILENAME.csv"), which returns a list of votes in the same order that they appear in the Congresspeople's entries. Each vote is represented as a dictionary of information, which you can convert into a readable string by running vote_info(vote).
The lab file reads in the provided data, storing them in the variables senate_people, senate_votes, house_people, and house_votes.
You decide to start by making a nearest-neighbors classifier that can tell Democrats apart from Republicans in the Senate.
We've provided a nearest_neighbors function that classifies data based on training data and a distance function. In particular, this is a third-order function:
- First, call nearest_neighbors(distance, k), with distance being the distance function you wish to use and k being the number of neighbors to check. This returns a classifier factory.
- A classifier factory is a function that makes classifiers. You call it with some training data as an argument, and it returns a classifier.
- Finally, you call the classifier with a data point (here, a Congressperson) and it returns the classification as a string.
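The three-step call chain can be sketched with nested closures. This is only an illustration of the currying pattern, not the lab's actual implementation, and the majority-vote tie-breaking here is arbitrary:

```python
def nearest_neighbors_sketch(distance, k):
    """Illustrative sketch of a third-order nearest-neighbors function.
    Returns a classifier factory; not the code provided with the lab."""
    def factory(training_data):
        def classifier(point):
            # Sort training examples by distance to the query point
            # and keep the k nearest.
            neighbors = sorted(training_data,
                               key=lambda ex: distance(ex, point))[:k]
            # Classify by majority vote among the neighbors' parties.
            parties = [ex['party'] for ex in neighbors]
            return max(set(parties), key=parties.count)
        return classifier
    return factory
```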
Much of this is handled by the evaluate(factory, group1, group2) function, which you can use to test the effectiveness of a classification strategy. You give it a classifier factory (as defined above) and two sets of data. It will train a classifier on one data set and test the results against the other, and then it will switch them and test again.
Given a list of data such as senate_people, you can divide it arbitrarily into two groups using the crosscheck_groups(data) function.
One way to measure the "distance" between Congresspeople is with the Hamming distance: the number of entries that differ. This function is provided as hamming_distance.
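As a sketch of the idea (the lab already provides hamming_distance, so this is illustrative only), assuming each Congressperson is a dictionary with a 'votes' list as described above:

```python
def hamming_distance_sketch(person1, person2):
    """Count the positions at which two vote records differ.
    Illustrative; the lab supplies its own hamming_distance."""
    return sum(1 for v1, v2 in zip(person1['votes'], person2['votes'])
               if v1 != v2)
```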
An example of putting this all together is provided in the lab code:
senate_group1, senate_group2 = crosscheck_groups(senate_people)
evaluate(nearest_neighbors(hamming_distance, 1), senate_group1, senate_group2, verbose=1)
Examine the results of this evaluation. In addition to the problems caused by independents, it's classifying Senator Johnson from South Dakota as a Republican instead of a Democrat, mainly because he missed a lot of votes while he was being treated for cancer. This is a problem with the distance function -- when one Senator votes yes and another is absent, that is less of a "disagreement" than when one votes yes and the other votes no.
You should address this. Euclidean distance is a reasonable measure for the distance between lists of discrete numeric features, and is the alternative to Hamming distance that you decide to try. Recall that the formula for Euclidean distance is:
[(x1 - y1)^2 + (x2 - y2)^2 + ... + (xn - yn)^2] ^ (1/2)
- Make a distance function called euclidean_distance that treats the votes as high-dimensional vectors, and returns the Euclidean distance between them.
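A direct translation of that formula into code might look like the following sketch, assuming the same 'votes' representation used by hamming_distance. Treat it as a starting point to check your own version against, not a definitive implementation:

```python
import math

def euclidean_distance_sketch(person1, person2):
    """Euclidean distance between two Congresspeople's vote vectors.
    A sketch of the formula above, written in modern Python."""
    return math.sqrt(sum((v1 - v2) ** 2
                         for v1, v2 in zip(person1['votes'],
                                           person2['votes'])))
```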
When you evaluate using euclidean_distance, you should get better results, except that some people are being classified as Independents. Given that there are only 2 Independents in the Senate, you want to avoid classifying someone as an Independent just because they vote similarly to one of them.
- Make a simple change to the parameters of nearest_neighbors that accomplishes this, and call the classifier factory it outputs my_classifier.
So far you've classified Democrats and Republicans, but you haven't created a model of which votes distinguish them. You want to make a classifier that explains the distinctions it makes, so you decide to use an ID-tree classifier.
idtree_maker(votes, disorder_metric) is a third-order function similar to nearest_neighbors. You initialize it by giving it a list of vote information (such as senate_votes or house_votes) and a function for calculating the disorder of two classes. It returns a classifier factory that will produce instances of the CongressIDTree class, defined in classify.py, to distinguish legislators based on their votes.
The possible decision boundaries used by CongressIDTree are, for each vote:
- Did this legislator vote YES on this vote, or not?
- Did this legislator vote NO on this vote, or not?
(These are different because it is possible for a legislator to abstain or be absent.)
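Because votes take three values (1, -1, and 0), each vote index yields two distinct binary tests. As a sketch, the two candidate boundaries for vote index i might be expressed as:

```python
# Two distinct tests per vote: a legislator who voted 0 (absent or
# abstained) fails both, which is why "yes or not" differs from
# "not no". Illustrative only.
def voted_yes(person, i):
    return person['votes'][i] == 1

def voted_no(person, i):
    return person['votes'][i] == -1
```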
You can also use CongressIDTree directly to make an ID tree over the entire data set.
If you print a CongressIDTree, you get a text representation of the tree. Each level of the ID tree shows the minimum disorder it found, the criterion that achieves that minimum, the decision it makes for legislators who match the criterion (marked with a +), and the decision for legislators who don't (marked with a -). Each decision is either a party name or another ID tree. An example is shown in the section below.
An ID tree for the entire Senate
You start by making an ID tree for the entire Senate. This doesn't leave you anything to test it on, but it will show you the votes that distinguish Republicans from Democrats the most quickly overall. You run this (which you can uncomment in your lab file):
print CongressIDTree(senate_people, senate_votes, homogeneous_disorder)
The ID tree you get here is:
Disorder: -49
Yes on S.Con.Res. 21: Kyl Amdt. No. 583; To reform the death tax by setting the exemption at $5 million per estate, indexed for inflation, and the top death tax rate at no more than 35% beginning in 2010; to avoid subjecting an estimated 119,200 families, family businesses, and family farms to the death tax each and every year; to promote continued economic growth and job creation; and to make the enhanced teacher deduction permanent.:
+ Republican
- Disorder: -44
  Yes on H.R. 1585: Feingold Amdt. No. 2924; To safely redeploy United States troops from Iraq.:
  + Democrat
  - Disorder: -3
    No on H.R. 1495: Coburn Amdt. No. 1089; To prioritize Federal spending to ensure the needs of Louisiana residents who lost their homes as a result of Hurricane Katrina and Rita are met before spending money to design or construct a nonessential visitors center.:
    + Democrat
    - Disorder: -2
      Yes on S.Res. 19: S. Res. 19; A resolution honoring President Gerald Rudolph Ford.:
      + Disorder: -4
        Yes on H.R. 6: Motion to Waive C.B.A. re: Inhofe Amdt. No. 1666; To ensure agricultural equity with respect to the renewable fuels standard.:
        + Democrat
        - Independent
      - Republican
Some things that you can observe from these results are:
- Senators like to write bills with very long-winded titles that make political points.
- The key issue that most clearly divided Democrats and Republicans was the issue that Democrats call the "estate tax" and Republicans call the "death tax", with 49 Republicans voting to reform it.
- The next key issue involved 44 Democrats voting to redeploy troops from Iraq.
- The issues below those serve only to peel off homogeneous groups of 2 to 4 people.
Implementing a better disorder metric
You should be able to reduce the depth and complexity of the tree by changing the disorder metric from the one that looks for the largest homogeneous group to the information-theoretic metric described in lecture.
You can find this formula on page 429 of the reading.
- Write the information_disorder(group1, group2) function to replace homogeneous_disorder. This function takes in the lists of classifications that fall on each side of the decision boundary, and returns the information-theoretical disorder.
information_disorder(["Democrat", "Democrat", "Democrat"], ["Republican", "Republican"]) => 0.0
information_disorder(["Democrat", "Republican"], ["Republican", "Democrat"]) => 1.0
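A hedged sketch of the information-theoretic metric follows: it computes the entropy of each branch and averages them, weighted by branch size. Check it against the formula in the reading before relying on it, and note it uses modern Python syntax:

```python
import math

def information_disorder_sketch(group1, group2):
    """Size-weighted average entropy of the two branches.
    A sketch of the textbook formula; verify against the reading."""
    def entropy(group):
        if not group:
            return 0.0
        total = len(group)
        return -sum((group.count(c) / total)
                    * math.log2(group.count(c) / total)
                    for c in set(group))
    n = len(group1) + len(group2)
    return (len(group1) / n) * entropy(group1) \
         + (len(group2) / n) * entropy(group2)
```

On the two examples above, this gives 0.0 for the perfectly separated groups and 1.0 for the maximally mixed ones.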
Once this is written, you can try making a new CongressIDTree with it. (If you're having trouble, keep in mind that your function should return a float or similar numeric value.)
Evaluating over the House of Representatives
Now, you decide to evaluate how well ID trees do in the wild, weird world of the House of Representatives.
You can try running an ID tree on the entire House and all of its votes. It's disappointing. The 110th House began with a vote on the rules of order, where everyone present voted along straight party lines. It's not a very informative result to observe that Democrats think Democrats should make the rules and Republicans think Republicans should make the rules.
Anyway, since your task was to make a tool for classifying the newly-elected Congress, you'd like it to work after a relatively small number of votes. We've provided a function, limited_house_classifier, which evaluates an ID tree classifier that uses only the most recent N votes in the House of Representatives. You just need to find a good value of N.
- Using limited_house_classifier, find a good number N_1 of votes to take into account, so that the resulting ID trees classify at least 430 Congresspeople correctly. Similarly, how many training examples (previous votes) does it take to predict at least 90 senators correctly (call this N_2)? What about 95 (N_3)? To pass the online tests, you will need values close to the minimum for each of N_1, N_2, and N_3, so keep trying smaller values that still pass the offline tests. Do the values surprise you? Is the House more unpredictable than the Senate, or is it just bigger?
- Which is better at predicting the Senate: 200 training samples, or 2000? Why?
The total number of Congresspeople in the evaluation may change, as people who didn't vote in the last N votes (perhaps because they're not in office anymore) aren't included.
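Since the number of correct classifications is not guaranteed to grow monotonically with N, one way to organize the search is a simple upward scan. In this sketch, score_with_n is a hypothetical stand-in for whatever call you make to limited_house_classifier in your lab file (its exact signature is in the provided code):

```python
def find_smallest_n(score_with_n, target, n_max=2000):
    """Return the smallest N in 1..n_max whose score meets the target,
    or None if none does. score_with_n is a hypothetical placeholder
    for a wrapper around limited_house_classifier."""
    for n in range(1, n_max + 1):
        if score_with_n(n) >= target:
            return n
    return None
```

Because the score can dip and recover as N grows, a scan from below is safer than a binary search here.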
Please answer these questions at the bottom of your lab5.py file:
- NAME: What is your name? (string)
- COLLABORATORS: Other than 6.034 staff, whom did you work with on this lab? (string, or empty string if you worked alone)
- HOW_MANY_HOURS_THIS_LAB_TOOK: Approximately how many hours did you spend on this lab? (number or string)
- WHAT_I_FOUND_INTERESTING: Which parts of this lab, if any, did you find interesting? (string)
- WHAT_I_FOUND_BORING: Which parts of this lab, if any, did you find boring or tedious? (string)
- (optional) SUGGESTIONS: What specific changes would you recommend, if any, to improve this lab for future years? (string)
(We'd ask which parts you find confusing, but if you're confused you should really ask a TA.)
When you're done, run the online tester to submit your code.
Q: For the N's for the limited_house_classifier, I got some (large) values, and it passed the offline tests, but it failed the online tests. If I subtract even 1 from the values, it doesn't classify enough people correctly. What's wrong?
A: The number of correct classifications is not a monotonic function of N. For example, if N_1 is 60, then 426 representatives are classified correctly, but if N_1 is 40, then 428 representatives are classified correctly. The online tests are not exactly the same as the offline tests. You will need to just try smaller values of N.
Q: My code passes the local eval_test but not the online one.
A: Make sure you did the part of the lab where you adjust the nearest-neighbors parameters (for my_classifier) to avoid classifying too many people as Independents.
Q: My code passes all the local tests, but my values for N_1, N_2, and N_3 (or some subset thereof) don't pass the online tests.
A: There is probably an error in your information_disorder function. Check that you've implemented the disorder formula correctly. If your information_disorder function is incorrect, it's possible to find values of N_1, N_2, and N_3 that are consistent with your information_disorder function (which is why the local tests pass) but inconsistent with the correct definition of disorder (which is why the online tests fail).