We examine a flexible algorithmic framework for solving monotone variational inequalities in the presence of randomness and uncertainty. The proposed template encompasses a wide range of popular first-order methods, including dual averaging, dual extrapolation, and optimistic gradient algorithms, in both adaptive and non-adaptive variants. Our first result is that the method achieves the optimal rates of convergence for cocoercive problems when the profile of the randomness is known to the optimizer: O(1/√T) for absolute noise profiles, and O(1/T) for relative ones. Subsequently, we drop all prior-knowledge requirements (the absolute/relative variance of the randomness affecting the problem, the operator’s cocoercivity constant, etc.), and we analyze an adaptive instance of the method that gracefully interpolates between the above rates, achieving O(1/√T) in the absolute case and O(1/T) in the relative case. To our knowledge, this is the first universality result of its kind in the literature and, somewhat surprisingly, it shows that an extra-gradient proxy step is not required to achieve optimal rates.
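For concreteness, the sketch below shows one plausible single-call member of such a family: an optimistic gradient update paired with an AdaGrad-style adaptive step size. This is an illustrative assumption, not the paper's exact template; the names `oracle` and `gamma0` are hypothetical. It is meant only to illustrate how a single oracle call per iteration, reusing the past evaluation, can stand in for an extra-gradient proxy step.

```python
import numpy as np

def optimistic_gradient(oracle, x0, T, gamma0=1.0):
    """Hedged sketch of a single-call optimistic gradient method.

    `oracle(x)` returns a (possibly noisy) evaluation of the monotone
    operator V at x. The AdaGrad-style step size used below is one
    standard adaptive choice, not necessarily the paper's exact rule.
    """
    x = np.asarray(x0, dtype=float).copy()
    g_prev = oracle(x)   # past oracle evaluation, reused for extrapolation
    sum_sq = 0.0         # cumulative squared variation of the oracle
    avg = np.zeros_like(x)
    weight = 0.0
    for _ in range(T):
        g = oracle(x)
        # Step size shrinks with the observed variation of the operator,
        # requiring no prior knowledge of the noise profile or of the
        # cocoercivity constant (assumption: a standard adaptive rule).
        sum_sq += float(np.linalg.norm(g - g_prev) ** 2)
        gamma = gamma0 / np.sqrt(1.0 + sum_sq)
        # Optimistic update: extrapolate with the PAST evaluation, so each
        # iteration needs only one oracle call (no extra-gradient step).
        x = x - gamma * (2.0 * g - g_prev)
        g_prev = g
        # Step-size-weighted ergodic average: the point for which
        # O(1/sqrt(T)) and O(1/T) rates are typically stated.
        avg = (weight * avg + gamma * x) / (weight + gamma)
        weight += gamma
    return avg
```

As a design note, the step-size-weighted average is usually the iterate that carries guarantees of this kind; the last iterate may behave differently.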