Scalings in which the graph Laplacian approaches a differential operator in
the large graph limit are used to develop understanding of a number of
algorithms for semi-supervised learning; in particular, the extensions to this
graph setting of the probit algorithm and of level set and kriging methods are
studied. Both optimization and Bayesian approaches are considered, based around
a regularizing quadratic form found from an affine transformation of the
Laplacian, raised to a possibly fractional exponent. Conditions on the
parameters defining this quadratic form are identified under which well-defined
limiting continuum analogues of the optimization and Bayesian semi-supervised
learning problems may be found, thereby shedding light on the design of
algorithms in the large graph setting. The large graph limits of the
optimization formulations are tackled through $\Gamma$-convergence, using the
recently introduced $TL^p$ metric. The small labelling noise limits of the
Bayesian formulations are also identified, and contrasted with pre-existing
harmonic function approaches to the problem.
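To fix ideas, the regularizing quadratic form described above admits the
following minimal LaTeX sketch, assuming the graph Laplacian is written $L$ and
taking $\tau$ for the affine shift and $\alpha$ for the possibly fractional
exponent; the symbols $J$, $u$, $\tau$ and $\alpha$ are illustrative choices
rather than notation fixed by the text:
\[
  % Quadratic form built from an affine transformation of the graph Laplacian,
  % raised to a possibly fractional exponent (illustrative parameterization):
  J(u) \;=\; \frac{1}{2}\,\bigl\langle u,\, (L + \tau^{2} I)^{\alpha}\, u \bigr\rangle,
  \qquad \tau > 0, \quad \alpha > 0.
\]
In the Bayesian reading, the same operator would play the role of the precision
of a Gaussian prior, $u \sim N\bigl(0, (L + \tau^{2} I)^{-\alpha}\bigr)$, so the
conditions on the defining parameters mentioned above amount to conditions on
$\tau$ and $\alpha$.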
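For reference, the $TL^p$ metric mentioned above (introduced by García Trillos
and Slepčev) compares a pair $(\mu, f)$, e.g. a function on a graph with
empirical measure $\mu$, against a continuum pair $(\nu, g)$; the following
sketch writes $\Gamma(\mu,\nu)$ for the set of couplings of $\mu$ and $\nu$,
notation assumed here rather than taken from the abstract:
\[
  % TL^p distance between (measure, function) pairs; the infimum runs over
  % couplings pi of mu and nu, transporting both base points and function values:
  d_{TL^p}\bigl((\mu,f),(\nu,g)\bigr)
  \;=\; \inf_{\pi \in \Gamma(\mu,\nu)}
  \Bigl( \int |x-y|^{p} + |f(x)-g(y)|^{p} \,\mathrm{d}\pi(x,y) \Bigr)^{1/p}.
\]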