1. Discussion of the loss functions studied so far for classification algorithms: 0-1, squared, SVM (hinge), logistic, and exponential loss.
2. Review of boosting and AdaBoost as stagewise minimisers of the exponential risk.
3. Detailed derivation of the parameters (thetas) and weights (alphas) for the component classifiers.
4. Examples of counting the number of labellings that can be achieved for a set of classifiers (decision stumps, arbitrary half-planes, quadrants, triangles).
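The AdaBoost steps above (stagewise exponential-risk minimisation, the derivation of the stump parameters theta and the weights alpha) can be sketched as follows. The toy 1-D dataset is an assumption for illustration, not from the lecture; the component classifiers are decision stumps h(x) = s · sign(x − theta).

```python
import math

# Assumed toy data: 1-D points with labels in {-1, +1} (not from the lecture).
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [+1, +1, -1, -1, +1, +1]

def stump_predict(theta, s, x):
    """Decision stump h(x) = s * sign(x - theta), taking sign(0) as +1."""
    return s if x >= theta else -s

def best_stump(X, y, w):
    """Pick the stump minimising the weighted 0-1 error, which is what the
    stagewise exponential-loss objective reduces to at each round."""
    pts = sorted(X)
    # candidate thresholds: one per gap between points, plus one on each side
    thetas = [pts[0] - 1.0] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [pts[-1] + 1.0]
    best = None
    for theta in thetas:
        for s in (+1, -1):
            err = sum(wi for wi, xi, yi in zip(w, X, y)
                      if stump_predict(theta, s, xi) != yi)
            if best is None or err < best[0]:
                best = (err, theta, s)
    return best

def adaboost(X, y, T=10):
    n = len(X)
    w = [1.0 / n] * n            # uniform initial weights
    ensemble = []                # list of (alpha, theta, s) triples
    for _ in range(T):
        err, theta, s = best_stump(X, y, w)
        err = max(err, 1e-12)    # guard against division by zero
        if err >= 0.5:           # no better-than-chance stump left
            break
        # the derived component weight alpha_m = (1/2) ln((1 - err)/err)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, theta, s))
        # multiplicative weight update implied by the exponential loss
        w = [wi * math.exp(-alpha * yi * stump_predict(theta, s, xi))
             for wi, xi, yi in zip(w, X, y)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(alpha * stump_predict(theta, s, x) for alpha, theta, s in ensemble)
    return +1 if score >= 0 else -1

ensemble = adaboost(X, y)
# the weighted vote of stumps drives the training error to zero on this set
train_err = sum(predict(ensemble, xi) != yi for xi, yi in zip(X, y)) / len(X)
```

No single stump can fit the + + − − + + pattern, but the weighted vote of a few stumps can, which is the point of boosting.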
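For the counting examples, the decision-stump case can be checked by brute-force enumeration. This is a sketch under assumed conventions (stumps of the form h(x) = s · sign(x − theta) on points on the line, not the lecture's own code): n distinct points admit 2(n + 1) sign/threshold combinations, and removing the two duplicated constant labellings leaves 2n distinct labellings.

```python
def stump_labellings(points):
    """Enumerate the distinct labellings of the given 1-D points that
    decision stumps h(x) = s * sign(x - theta), s in {-1, +1}, can realise."""
    pts = sorted(points)
    # one representative threshold per gap, plus one on each side
    thetas = [pts[0] - 1.0] + [(a + b) / 2 for a, b in zip(pts, pts[1:])] + [pts[-1] + 1.0]
    labellings = set()
    for theta in thetas:
        for s in (+1, -1):
            labellings.add(tuple(s if x >= theta else -s for x in pts))
    return labellings

# 2(n+1) candidate stumps, but the all-positive and all-negative labellings
# are each produced twice, so only 2n labellings are distinct.
count_4 = len(stump_labellings([0.0, 1.0, 2.0, 3.0]))   # 2n = 8
count_5 = len(stump_labellings([0.0, 1.0, 2.0, 3.0, 4.0]))  # 2n = 10
```

The same enumeration idea extends to the other families in the list (half-planes, quadrants, triangles), though the candidate-parameter grid becomes multi-dimensional.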