Publication
This paper studies the non-asymptotic classification performance of the social machine learning strategy. This strategy involves an independent training phase followed by a cooperative inference phase, in which the distributed classifiers label a growing number of samples. By considering instead a finite number of samples, we derive an upper bound on the probability of misclassification. This bound characterizes the generalization ability of the social machine learning strategy in terms of the statistical properties of the classification problem and the combination policy used among the distributed classifiers. The analysis establishes that, when the training phase is consistent, the probability of error decays exponentially with the number of samples.
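The exponential decay stated above can be sketched in the generic large-deviations form, under the assumption of a consistent training phase; here \(C\) and the error exponent \(E\) are illustrative placeholders rather than the paper's actual quantities:

```latex
\mathbb{P}(\text{error}) \;\le\; C\, e^{-nE}, \qquad E > 0,
```

where \(n\) denotes the number of samples processed during the inference phase. A bound of this shape guarantees that the misclassification probability vanishes exponentially fast as more samples are observed.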