A PAC teaching model under helpful distributions is proposed, which carries the classical ideas of teaching models over to the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example of the teaching set, is associated with each distribution; and the running time of the learning algorithm takes this new parameter into account.
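As a sketch of this parameter, under assumed notation not fixed by the abstract (write $T(c)$ for the teaching set of the target concept $c$ and $D$ for the helpful distribution), the quantity made available to the learner is
\[
  \alpha(D, c) \;=\; \Bigl( \min_{x \in T(c)} D(x) \Bigr)^{-1},
\]
so a helpful distribution, by assigning non-negligible weight to every teaching example, keeps $\alpha(D, c)$, and hence the permitted running time, small.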
An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as decision lists and DNF and CNF formulas, are proved learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions. Note that decision lists and DNF formulas are not known to be learnable in the Goldman and Mathias model.
A new simple PAC model, where "simple" refers to Kolmogorov complexity, is introduced. We show that most learnability results obtained within previously defined simple PAC models can be derived directly from more general results in our model.
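For comparison, a hedged reminder of the earlier notion (following Li and Vitányi's simple PAC setting; the symbols $\mathbf{m}$ and $K$ are standard there and are not introduced in this abstract): a distribution $D$ is called simple if it is multiplicatively dominated by the universal distribution,
\[
  \exists\, c > 0 \;\; \forall x :\quad D(x) \;\le\; c \cdot \mathbf{m}(x),
\]
where $\mathbf{m}(x) = 2^{-K(x) + O(1)}$ and $K$ denotes prefix Kolmogorov complexity.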