
Chapter

Semi-Supervised Learning

By Kaushik Sinha
Book Data Classification


Edition 1st Edition
First Published 2014
Imprint Chapman and Hall/CRC
Pages 26
eBook ISBN 9780429102639

ABSTRACT

The criterion for choosing this function g is low probability of error: the algorithm must choose g from a class of functions G in such a way that when an unseen pair (x, y) ∈ X × Y is drawn according to PX×Y and only x is presented to the algorithm, the probability that g(x) ≠ y is minimized. In this case, the best function one can hope for is based on the conditional distribution P(Y|X) and is given by η(x) = sign[E(Y|X = x)], known as the Bayes optimal classifier. Note that when a learning algorithm constructs a function g from a training set of size l, the function g is a random quantity that depends on l; we therefore write gl to emphasize its dependence on the training set size. A natural question, then, is what properties one should expect from gl as the training set size l increases. A learning algorithm is called consistent if gl converges to η (in an appropriate mode of convergence) as l tends to infinity. This is the best one can hope for, as it guarantees that gl converges to the “right” function as the training set size increases. Of course, the “convergence rate” is also important: it specifies how fast gl converges to the right function. Thus, one would always prefer a consistent learning algorithm: its performance improves as the training set size increases, or in other words, as we have the capacity to label more and more examples.
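The notions above can be illustrated numerically. The sketch below uses a hypothetical toy distribution (not taken from the chapter) in which X is uniform on [0, 1] and P(Y = +1 | X = x) = x, so that E(Y | X = x) = 2x − 1, the Bayes optimal classifier is η(x) = sign(2x − 1), and the Bayes risk is E[min(x, 1 − x)] = 1/4. As a stand-in for a generic consistent learning algorithm, it uses k-nearest-neighbor majority vote with k ≈ √l, which is a classical consistent rule (k → ∞ and k/l → 0); the chapter itself does not prescribe this particular algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical distribution: X ~ Uniform[0, 1], P(Y = +1 | X = x) = x.
# Then eta(x) = sign(2x - 1), i.e., predict +1 iff x > 1/2, and the
# Bayes risk is E[min(x, 1 - x)] = 1/4.

def sample(n):
    """Draw n labeled pairs (x, y) from the toy P_{X x Y}."""
    x = rng.uniform(0.0, 1.0, n)
    y = np.where(rng.uniform(0.0, 1.0, n) < x, 1, -1)
    return x, y

def knn_predict(x_train, y_train, x_test, k):
    """k-nearest-neighbor majority vote (consistent if k -> inf, k/l -> 0)."""
    preds = np.empty(len(x_test))
    for i, xt in enumerate(x_test):
        idx = np.argsort(np.abs(x_train - xt))[:k]   # k closest training points
        preds[i] = np.sign(y_train[idx].sum()) or 1.0  # break ties toward +1
    return preds

# As l grows, the empirical error of g_l approaches the Bayes risk 0.25.
x_test, y_test = sample(2000)
for l in (50, 500, 5000):
    x_tr, y_tr = sample(l)
    k = max(1, int(np.sqrt(l)))  # k ~ sqrt(l) satisfies both consistency conditions
    err = np.mean(knn_predict(x_tr, y_tr, x_test, k) != y_test)
    print(f"l = {l:5d}   empirical error = {err:.3f}   (Bayes risk = 0.250)")
```

Running the loop shows the error of gl drifting down toward the Bayes risk rather than to zero: even η itself errs with probability 1/4 on this distribution, which is exactly why consistency is stated as convergence to η and not to a perfect classifier.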
