Exploiting Myopic Learning

Traditional models of rationality in economics assume that agents can immediately play the equilibrium of any game they find themselves in. More often than not, however, playing an equilibrium is not immediate, but rather the result of a learning process that the agents undertake. A natural question is whether a principal can interfere with this learning process in order to steer the agents towards a certain outcome. I answer this question by showing that a principal can exploit myopic learning in a population of agents to implement social or selfish outcomes that would not be possible under the traditional fully rational agent model. For a variety of games, I show that the principal can obtain strictly better outcomes than the corresponding Nash solution, and I show how these outcomes can be implemented. The framework is general enough to accommodate many scenarios, and powerful enough to generate predictions that agree with empirically observed behavior.

Speaker Details

Mohamed Mostagir is a doctoral candidate in Economics at the California Institute of Technology, where he is an Information Sciences and Technology fellow. His interests are in microeconomic theory, industrial organization, and topics at the interface of economics, information, and operations.

Speaker: Mohamed Mostagir
Affiliation: California Institute of Technology