From Agreement to Asymptotic Learning
ABSTRACT: We consider a group of Bayesian agents who are each given an independent signal about an unknown state of the world, and who proceed to communicate with each other. We study the question of asymptotic learning: do agents learn the state of the world with probability that approaches one as the number of agents tends to infinity? We show that under general conditions asymptotic learning follows from agreement on posterior actions or posterior beliefs, regardless of the communication dynamics. In particular, we prove that asymptotic learning holds for a model with undirected communication networks and non-atomic private beliefs.
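A minimal sketch of the benchmark behind "asymptotic learning" (this is a toy illustration, not the model or proof from the talk): suppose n agents each receive a conditionally i.i.d. binary signal that matches a binary state with accuracy p > 1/2. If all signals could be pooled, the joint posterior on the true state tends to one as n grows; the talk's results recover this benchmark from agreement alone. The function names and parameters below are hypothetical choices for the sketch.

```python
import math
import random

def pooled_posterior(signals, p):
    """Posterior probability that the state is 1, given all n signals,
    assuming a uniform prior and symmetric signal accuracy p."""
    k = sum(signals)                              # count of 1-signals
    n = len(signals)
    # Log-likelihood ratio of state 1 vs. state 0: each 1-signal adds
    # log(p/(1-p)), each 0-signal subtracts it.
    log_odds = (2 * k - n) * math.log(p / (1 - p))
    return 1 / (1 + math.exp(-log_odds))

def simulate(n, p=0.6, seed=0):
    """Draw n signals when the true state is 1 and pool them."""
    rng = random.Random(seed)
    signals = [1 if rng.random() < p else 0 for _ in range(n)]
    return pooled_posterior(signals, p)

for n in (10, 100, 10_000):
    print(n, round(simulate(n), 4))
```

Even with weakly informative signals (p = 0.6), the pooled posterior concentrates on the true state as n grows; the substance of the talk is that agreement among the agents suffices to achieve this, without assuming the signals are ever centrally pooled.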
Joint work with Elchanan Mossel and Allan Sly.
BIO:
Omer Tamuz is a Mathematics Ph.D. student at the Weizmann Institute of Science in Israel. He is advised by Prof. Elchanan Mossel, currently at U.C. Berkeley. His areas of interest include combinatorial statistics, estimation and inference, and game theory.