Lab 9: Adaboost



This lab is due by Friday, December 4 at 10:00pm.

To work on this lab, you will need to get the code, much like you did for the first two labs.

Online tests will be made available by the end of Tuesday, November 24. In the meantime, the local tester provides thorough unit tests for each section of the lab.

Your answers for this lab belong in the main file lab7.py.

Problems: Adaboost

In this lab, you will code the Adaboost algorithm to perform boosting.

Throughout this lab, we will assume that there are exactly two classifications, meaning that every training point that is not misclassified is classified correctly.

Helper functions

Initialize weights

First, implement initialize_weights to assign every training point a weight equal to 1/N, where N is the number of training points. This function takes in a list of training points, where each point is represented as a string. The function should return a dictionary mapping points to weights.

def initialize_weights(training_points):
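
For illustration, here is one possible sketch that follows directly from the description above. It is not necessarily the staff solution; in particular, using exact fractions for the weights (rather than plain floats) is an assumption about what the tester expects.

from fractions import Fraction

def initialize_weights(training_points):
    # Each of the N training points starts with weight 1/N. Exact
    # fractions avoid floating-point drift; whether the tester requires
    # them instead of floats is an assumption.
    N = len(training_points)
    return {point: Fraction(1, N) for point in training_points}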

Calculate error rates

Next, we want to calculate the error rate of each classifier. The error rate for a classifier h is the sum of the weights of the training points that h misclassifies.

calculate_error_rates takes in two dictionaries:

  • point_to_weight: maps each training point (represented as a string) to its weight (a number).
  • classifier_to_misclassified: maps each classifier to a list of the training points that it misclassifies. For example, this dictionary may contain entries such as "classifier_0": ["point_A", "point_C"].

Implement calculate_error_rates to return a dictionary mapping each weak classifier (a string) to its error rate (a number):

def calculate_error_rates(point_to_weight, classifier_to_misclassified):
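
As a sketch (one reasonable implementation, not necessarily the expected one), the description above translates almost directly into code: for each classifier, sum the weights of the points it misclassifies.

def calculate_error_rates(point_to_weight, classifier_to_misclassified):
    # Error rate of a classifier = sum of the weights of the training
    # points it misclassifies. A classifier that misclassifies nothing
    # gets an error rate of 0.
    return {classifier: sum(point_to_weight[point] for point in misclassified)
            for classifier, misclassified in classifier_to_misclassified.items()}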

Pick the best weak classifier

Once we have calculated the error rate of each weak classifier, we pick the best weak classifier: the one with the lowest error rate.
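
Since this revision leaves the section unfinished, the following is only a hypothetical sketch; the function name pick_best_classifier and the alphabetical tie-breaking rule are assumptions.

def pick_best_classifier(classifier_to_error_rate):
    # Choose the classifier with the smallest error rate. Sorting the
    # names first makes ties resolve alphabetically -- an assumption,
    # since the page does not specify a tie-breaking rule.
    return min(sorted(classifier_to_error_rate),
               key=lambda c: classifier_to_error_rate[c])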

Calculate voting power

After selecting the best weak classifier, we'll need to compute its voting power. If ε is the error rate of the weak classifier, then its voting power is:

1/2 * ln((1-ε)/ε)

Implement calculate_voting_power to compute a classifier's voting power, given its error rate 0 ≤ ε ≤ 1.

def calculate_voting_power(error_rate):

Hint: What voting power would you give to a weak classifier that classifies all the training points correctly? What if it misclassifies all the training points?
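
A minimal sketch follows. The formula is undefined at the endpoints, and returning infinite voting power there (positive for a perfect classifier, negative for one that is always wrong, as the hint suggests) is an assumption about what the tester expects.

from math import log, inf

def calculate_voting_power(error_rate):
    # A classifier that is always right deserves unbounded positive
    # voting power; one that is always wrong deserves unbounded negative
    # voting power (vote the opposite of whatever it says).
    if error_rate == 0:
        return inf
    if error_rate == 1:
        return -inf
    # voting power = 1/2 * ln((1 - error_rate) / error_rate)
    return 0.5 * log((1 - error_rate) / error_rate)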

Is H good enough?

Update weights

Adaboost

Using all the helper functions you've written above, implement the Adaboost algorithm.

Keep in mind that Adaboost has three exit conditions: it stops when H is "good enough" (see Is H good enough?, above), when the best weak classifier has an error rate of exactly 1/2 (its vote carries no information), or when it has run for the maximum number of rounds.
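
Building on the sketches earlier on this page, here is a rough, hypothetical sketch of how the pieces might fit together. The helper names update_weights and is_good_enough, the reweighting rule, the tie-as-mistake convention, and the function signature are all assumptions, not the lab's specified interface.

from fractions import Fraction

def update_weights(point_to_weight, misclassified_points, error_rate):
    # Hypothetical reweighting step (the standard Adaboost rule):
    #   correctly classified:  new_weight = 1/2 * 1/(1 - eps) * old_weight
    #   misclassified:         new_weight = 1/2 * 1/eps * old_weight
    new_weights = {}
    for point, weight in point_to_weight.items():
        if point in misclassified_points:
            new_weights[point] = weight * Fraction(1, 2) / error_rate
        else:
            new_weights[point] = weight * Fraction(1, 2) / (1 - error_rate)
    return new_weights

def is_good_enough(H, training_points, classifier_to_misclassified,
                   mistake_tolerance):
    # Hypothetical check: count the points whose weighted vote comes out
    # wrong (treating a tied vote as a mistake -- an assumption) and
    # compare against the tolerance.
    mistakes = 0
    for point in training_points:
        vote = sum(power if point not in classifier_to_misclassified[clf]
                   else -power
                   for clf, power in H)
        if vote <= 0:
            mistakes += 1
    return mistakes <= mistake_tolerance

def adaboost(training_points, classifier_to_misclassified,
             mistake_tolerance=0, max_rounds=1000):
    # Hypothetical driver showing the three exit conditions.
    point_to_weight = initialize_weights(training_points)
    H = []  # the ensemble: a list of (classifier, voting_power) pairs
    for _ in range(max_rounds):              # exit 3: out of rounds
        error_rates = calculate_error_rates(point_to_weight,
                                            classifier_to_misclassified)
        best = pick_best_classifier(error_rates)
        error = error_rates[best]
        if error == Fraction(1, 2):          # exit 2: no better than chance
            break
        H.append((best, calculate_voting_power(error)))
        if is_good_enough(H, training_points,
                          classifier_to_misclassified,
                          mistake_tolerance):  # exit 1: H is good enough
            break
        point_to_weight = update_weights(point_to_weight,
                                         set(classifier_to_misclassified[best]),
                                         error)
    return H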

Survey

Please answer these questions at the bottom of your lab7.py file:

  • NAME: What is your name? (string)
  • COLLABORATORS: Other than 6.034 staff, whom did you work with on this lab? (string, or empty string if you worked alone)
  • HOW_MANY_HOURS_THIS_LAB_TOOK: Approximately how many hours did you spend on this lab? (number or string)
  • WHAT_I_FOUND_INTERESTING: Which parts of this lab, if any, did you find interesting? (string)
  • WHAT_I_FOUND_BORING: Which parts of this lab, if any, did you find boring or tedious? (string)
  • (optional) SUGGESTIONS: What specific changes would you recommend, if any, to improve this lab for future years? (string)


(We'd ask which parts you find confusing, but if you're confused you should really ask a TA.)

When you're done, run the online tester to submit your code.
