A number of studies have concluded that polarization may be rational in the sense that even ideal Bayesian agents can end up seriously divided on an issue despite receiving exactly the same evidence. In this spirit, Pallavicini, Hallsson, and Kappel demonstrate that group polarization is a very robust phenomenon in Laputa, a Bayesian model of social network deliberation. In their view, however, polarization arises because Laputa fails to take higher-order information into account in a particular way, making the model incapable of capturing full rationality. I show that taking higher-order information into account in the way proposed by Pallavicini et al. fails to block polarization. Rather, what drives polarization is expectation-based updating combined with a model of trust in a source that recognizes the possibility that the source is systematically biased. Finally, I show that polarization may be rational in a further sense: group deliberations that lead to polarization can be, and often are, associated with increased epistemic value at the group level. The upshot is a strengthened case for the rationality of polarization.
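To fix ideas, here is a minimal, hypothetical sketch (not the actual Laputa implementation, whose trust dynamics are richer) of the kind of trust mechanism the abstract gestures at: an agent's trust `tau` in a source is read as the probability that the source is reliable, with `1 - tau` the probability that it is systematically biased, i.e. tends to assert the opposite of the truth. Updating on a report is then ordinary Bayesian conditionalization, and a report from a distrusted source (`tau < 0.5`) counts as evidence against its content.

```python
def update_on_report(credence: float, tau: float, says_p: bool) -> float:
    """Posterior credence in p after the source asserts p (or not-p).

    Simplifying assumption: a reliable source asserts p iff p is true,
    and a systematically biased (anti-reliable) source asserts p iff p
    is false. Hence
        P(source says p) = credence * tau + (1 - credence) * (1 - tau),
    and Bayes' theorem gives the posterior below.
    """
    if says_p:
        num = credence * tau
        den = credence * tau + (1 - credence) * (1 - tau)
    else:
        num = credence * (1 - tau)
        den = credence * (1 - tau) + (1 - credence) * tau
    return num / den


# A trusted source's assertion of p raises credence in p (here ~0.7) ...
print(update_on_report(0.5, 0.7, says_p=True))
# ... while the very same assertion from a distrusted source lowers it
# (here ~0.3): identical evidence can push different agents apart.
print(update_on_report(0.5, 0.3, says_p=True))
```

On this toy model, two agents who hear the same report but hold different trust values move in opposite directions, which is the wedge that expectation-based trust updating can then widen over repeated rounds of deliberation.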