ABSTRACT

Coalition formation is a type of mixed-motive game in which n players strategically negotiate to secure positions in advantageous contracts. In this study, we find that systems of agents which learn by a simple linear updating rule can successfully model the outcomes of human players across five coalition formation games studied experimentally by Kahan and Rapoport (1974). “Greedy” agents, which are deterministic and maximizing in selecting whom to make offers to, achieve outcomes on par with humans within a few hundred trials. In comparison, “Matching” agents, which use probability matching to select whom to make offers to, achieve overall outcomes qualitatively similar to those of humans, though not as close as those of the Greedy agents.