This introductory course on machine learning will give an overview of many techniques and algorithms in machine learning, beginning with topics such as simple perceptrons and ending with more recent topics such as boosting, support vector machines, hidden Markov models, and Bayesian networks. The course will give the student the basic ideas and intuition behind modern machine learning methods, as well as a somewhat more formal understanding of how and why they work. The underlying theme of the course is statistical inference, as this provides the foundation for most of the methods covered.
Tue/Thu 2:30-4pm in E37-212; first lecture is on September 11
Wednesdays 1-2:30, 36-144
Friday 11-12:30, 34-302
The problem sets will be graded by a rotating group of students (yes, by yourselves) under the guidance of your TA. Each problem set will be graded in a single grading session, usually on the Monday after it is due, starting at 5pm. Every student is required to participate in one grading session. Students should sign up for grading, using the electronic sign-up sheet, before the due date of the first problem set.
If you drop the class after signing up for a grading session, please be sure to let us know so we can keep track of the students available for grading. If you add the class during the term, please remember to sign up for grading.
Due dates for the problem sets are indicated in the course calendar.
The projects are due December 6.
We will provide more detailed suggestions and guidelines for the projects. Possible formats include literature reviews, theoretical derivations or analyses, applications of machine learning methods to problems you are interested in, or something else.
You will not be able to find all the course material in the text, nor do we plan to go through the chapters in order or in full. You are responsible for the material covered in lectures, recitations, and problem sets, as well as in the chapters/sections of the text specifically indicated.