ABSTRACT

In a finitely repeated linear public good game, we compare a random cash payoff mechanism for subjects, under which payment depends on the token earnings of one randomly chosen round after the whole session is completed (RRPM), with a payoff mechanism based on token earnings accumulated over all rounds (APM). RRPM has been widely used in individual decision-making experiments because it is known to control for wealth effects.1 However, a generic linear public good game has a unique dominant strategy, so wealth effects should not matter. Hence, almost all experiments in the finitely repeated linear public good game literature have used APM.2
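As an illustrative sketch of the two mechanisms (the notation here is ours and is not defined in the abstract), let $\pi_t$ denote a subject's token earnings in round $t$ of a $T$-round session and $\lambda$ a token-to-cash conversion rate; the cash payments would then take the form

$$
\text{APM: } \lambda \sum_{t=1}^{T} \pi_t, \qquad
\text{RRPM: } \lambda\, \pi_{\tau}, \quad \tau \sim \mathrm{Uniform}\{1,\dots,T\},
$$

where $\tau$ is drawn only after the whole session is completed.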