
Classical Supervised ML

Fill in the classical classification concepts: margins, kernels, asymmetric error costs, and imbalance-aware objectives.

Estimated time: ~75 min

  1. Step 1: Hinge loss
    Hinge loss penalizes examples that are misclassified or that lie too close to the decision boundary. It is the convex margin-based loss underlying soft-margin SVMs and emphasizes confident separation rather than calibrated probabilities. A small numeric sketch of this loss appears after the list.
  2. Step 2: Support vector machines
    A support vector machine finds the decision boundary that maximizes the margin between classes, and the solution depends only on the support vectors nearest the boundary. With kernels, SVMs can model nonlinear separators while retaining a convex optimization objective. A kernelized SVM fit is sketched after the list.
  3. Step 3: Kernel methods
    Kernel methods turn linear algorithms into nonlinear ones by replacing inner products with a kernel function that implicitly measures similarity in a higher-dimensional feature space. This is the core trick behind SVMs, kernel ridge regression, and Gaussian processes. A worked kernel-trick identity appears after the list.
  4. Step 4: Cost-sensitive learning
    Cost-sensitive learning assigns different penalties to different kinds of mistakes instead of treating every error equally. It is the right framework when the real objective is to minimize downstream harm or utility loss rather than raw misclassification rate. An expected-cost decision rule is sketched after the list.
  5. Step 5: Class imbalance and reweighting
    Class imbalance means some labels are much rarer than others, so an unweighted objective can be dominated by the majority class. Reweighting changes the loss or sampling scheme so rare classes exert more influence during training. A class-weighted fit is sketched after the list.
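
For Step 1, a minimal numeric sketch of the hinge loss. The `hinge_loss` helper and the sample scores are illustrative, not from the path:

```python
import numpy as np

def hinge_loss(scores, labels):
    # labels in {-1, +1}; scores are raw decision values f(x).
    # The loss is zero only when y * f(x) >= 1, i.e. the example sits
    # on the correct side of the boundary with at least a unit margin.
    return np.maximum(0.0, 1.0 - labels * scores)

scores = np.array([2.5, 0.3, -0.8])   # illustrative decision values
labels = np.array([1, 1, 1])          # all true positives
print(hinge_loss(scores, labels))     # [0.   0.7  1.8]
```

Note that the middle example is correctly classified (positive score) yet still penalized, because it falls inside the margin.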
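
For Step 2, a sketch of a soft-margin SVM with an RBF kernel using scikit-learn; the dataset and hyperparameters are arbitrary choices for illustration:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

# Soft-margin SVM with an RBF kernel; C trades margin width
# against margin violations on the training set.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

print("support vectors per class:", clf.n_support_)
print("train accuracy:", clf.score(X, y))
```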
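
For Step 3, a sketch of the kernel trick itself: a degree-2 polynomial kernel evaluated directly matches the inner product of an explicit feature map, so the map never has to be built. The helper names are illustrative:

```python
import numpy as np

def poly2_features(x):
    # Explicit degree-2 feature map for 2-D input (the sqrt(2)
    # scaling makes the inner product match the kernel exactly).
    x1, x2 = x
    return np.array([x1**2, x2**2, np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2, 1.0])

def poly2_kernel(x, z):
    # (x . z + 1)^2 computes the same value without building features.
    return (np.dot(x, z) + 1.0) ** 2

x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(np.dot(poly2_features(x), poly2_features(z)))  # 0.25
print(poly2_kernel(x, z))                            # 0.25
```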
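
For Step 4, a sketch of cost-sensitive prediction via an expected-cost decision rule on top of predicted probabilities; the 10:1 cost ratio is an assumed example, not a recommendation:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# cost[i, j] = cost of predicting class j when the true class is i;
# here a false negative (row 1, col 0) costs 10x a false positive.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

proba = clf.predict_proba(X)         # shape (n_samples, 2)
expected_cost = proba @ cost         # E[cost] of each possible prediction
pred = expected_cost.argmin(axis=1)  # choose the cheaper action

print("positives flagged:", pred.sum(), "vs default:", clf.predict(X).sum())
```

The cost-weighted rule flags more positives than the default 0.5 threshold, because missing a positive is assumed to be far more expensive.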
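
For Step 5, a sketch of loss reweighting on an imbalanced problem using scikit-learn's class_weight="balanced", which rescales each class's contribution by n_samples / (n_classes * class_count); the 95/5 split is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Roughly 95% of samples belong to class 0, 5% to class 1.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X, y)
weighted = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)

print("minority recall, unweighted: ", recall_score(y, plain.predict(X)))
print("minority recall, reweighted:", recall_score(y, weighted.predict(X)))
```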