We prove that every online learnable class of functions of Littlestone dimension d admits a learning algorithm with finite information complexity. To this end, we use the notion of a globally stable learning algorithm. In general, the information complexity of such a globally stable algorithm is large yet finite, roughly exponential in d. We also show there is room for improvement: for a canonical online learnable class, the indicator functions of affine subspaces of dimension d, the information complexity can be upper bounded logarithmically in d.