Paper ID: 2502.06089 • Published Feb 10, 2025
On the Computability of Multiclass PAC Learning
Pascale Gourdeau, Tosca Lechner, Ruth Urner
We study the problem of computable multiclass learnability within the
Probably Approximately Correct (PAC) learning framework of Valiant (1984). In
the recently introduced computable PAC (CPAC) learning framework of Agarwal et
al. (2020), both learners and the functions they output are required to be
computable. We focus on the case of finite label space and start by proposing a
computable version of the Natarajan dimension and showing that it characterizes
CPAC learnability in this setting. We further generalize this result by
establishing a meta-characterization of CPAC learnability for a certain family
of dimensions: computable distinguishers. Distinguishers were defined by
Ben-David et al. (1992) as a certain family of embeddings of the label space,
with each embedding giving rise to a dimension. It was shown that the
finiteness of each such dimension characterizes multiclass PAC learnability for
finite label space in the non-computable setting. We show that the
corresponding computable dimensions for distinguishers characterize CPAC
learning. We conclude our analysis by proving that the DS dimension, which
characterizes PAC learnability for infinite label space, cannot be expressed as
a distinguisher (even in the case of finite label space).
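To make the central combinatorial notion concrete, here is a minimal brute-force sketch of the (classical, non-computable-setting) Natarajan dimension for a finite hypothesis class over a finite domain. This is purely illustrative and not from the paper: the function names `n_shatters` and `natarajan_dim`, and the encoding of hypotheses as Python dicts from points to labels, are my own assumptions.

```python
from itertools import combinations, product

def n_shatters(H, S):
    """Return True if the class H N-shatters the point set S: there exist
    witness labelings f0, f1 disagreeing on every point of S such that each
    subset T of S is picked out by some h in H (h agrees with f0 on T and
    with f1 off T).  H is a list of dicts mapping domain points to labels."""
    n = len(S)
    labels = sorted({h[x] for h in H for x in S})
    for f0 in product(labels, repeat=n):
        for f1 in product(labels, repeat=n):
            if any(a == b for a, b in zip(f0, f1)):
                continue  # witnesses must disagree on every point of S
            if all(
                any(all(h[S[i]] == (f0[i] if i in T else f1[i])
                        for i in range(n))
                    for h in H)
                for r in range(n + 1)
                for T in map(set, combinations(range(n), r))
            ):
                return True
    return False

def natarajan_dim(H, X):
    """Largest d such that some d-point subset of the domain X is
    N-shattered by H (0 if no point set is shattered)."""
    dim = 0
    for d in range(1, len(X) + 1):
        if any(n_shatters(H, list(S)) for S in combinations(X, d)):
            dim = d
    return dim

# Example: over a two-point domain with binary labels, the class of all
# four functions has Natarajan dimension 2 (it coincides with the VC
# dimension in the binary case), while a single function shatters nothing.
X = [0, 1]
H_all = [{0: a, 1: b} for a in (0, 1) for b in (0, 1)]
print(natarajan_dim(H_all, X))      # → 2
print(natarajan_dim([H_all[0]], X)) # → 0
```

The paper's computable variant additionally asks for the shattering witnesses to be produced effectively; this exhaustive search only makes sense for finite classes and is meant to fix intuition for the definition.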