Attention conservation notice: This is an attempt to increase the attendance at this semester's complex systems colloquia by blogging about them in advance. Of minimal relevance if you're not in Ann Arbor or don't care about complex systems, statistics, machine learning, improving your data analysis, or collective cognition and decision-making.
Boosting is one of a large family of procedures in statistics and machine learning for combining predictive models or classifiers, a family with semi-cute names ("bagging", "stacking", "binning", etc., etc.), which tend to work very, very well, for reasons which have not been at all clear. After all, taking the average of a large number of simple models is equivalent to building a single really big model, which Occam's Razor tells you not to do, because it's a recipe for over-fitting. Yet boosting works --- why? Ji has a profound understanding of boosting and its kin (he was too modest to mention a novel application he's worked on, so I will), but his talk will be quite accessible. (I've seen the slides.)
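For the curious, here is a minimal sketch of the flavor of boosting in question --- AdaBoost combining one-dimensional decision stumps --- on a made-up toy dataset (the data, function names, and round count are all my own illustrative choices, not anything from the talk). Each round fits the weakest of weak learners, then re-weights the data to emphasize the points it got wrong, so the next stump concentrates on the hard cases; the final classifier is a weighted vote.

```python
import math

# Toy 1-D dataset (hypothetical, for illustration): points and +/-1 labels.
# No single threshold rule can classify this +++---++ pattern perfectly.
X = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
y = [1, 1, 1, -1, -1, -1, 1, 1]

def stump_predict(threshold, polarity, x):
    """Decision stump: predict `polarity` above the threshold, its negation below."""
    return polarity if x > threshold else -polarity

def best_stump(weights):
    """Exhaustively pick the stump with the lowest weighted training error."""
    best = None
    for t in [x + 0.5 for x in X]:
        for pol in (1, -1):
            err = sum(w for w, xi, yi in zip(weights, X, y)
                      if stump_predict(t, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(rounds):
    n = len(X)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = best_stump(weights)
        err = max(err, 1e-10)  # guard against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # stump's vote strength
        ensemble.append((alpha, t, pol))
        # Re-weight: boost the weight of misclassified points, shrink the rest.
        weights = [w * math.exp(-alpha * yi * stump_predict(t, pol, xi))
                   for w, xi, yi in zip(weights, X, y)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

model = adaboost(rounds=5)
accuracy = sum(predict(model, xi) == yi
               for xi, yi in zip(X, y)) / len(X)
print(accuracy)
```

The best single stump here gets only 6 of 8 points right; a handful of boosting rounds drives the training error to zero --- which is exactly the puzzle: by Occam's-Razor logic, that ever-growing weighted vote ought to be over-fitting, and often it doesn't.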
4pm, Thursday, December 9th, in room 335 West Hall, Central Campus.
Posted at December 07, 2004 15:00 | permanent link