ABSTRACT

Speakers’ efforts to manipulate listeners (that is, their efforts to covertly influence their targets’ decision-making without their targets’ conscious awareness) are far from new. Today’s digital technologies, however, increasingly enable the widespread and effective deployment of user interfaces that manipulate their targets’ decision-making about commercial services and products, elections and voting, and more. These manipulative interfaces, made possible by predictive algorithms informed by big data, are often called “dark patterns.” First Amendment law generally presumes that listeners can protect themselves from unwelcome or harmful speech through the traditional remedies of exit and voice, avoidance and rebuttal. But First Amendment law recognizes that such self-help is often unavailable or ineffective in certain settings where comparatively knowledgeable or powerful speakers harm comparatively vulnerable listeners through their false or misleading speech, through their nondisclosures, and through their coercion. For these reasons, First Amendment law sometimes permits the government to protect listeners by prohibiting speakers’ false and misleading speech, by requiring speakers to make accurate disclosures, or by requiring speakers to stay away from listeners who prefer to be left alone. This essay proposes to add manipulation to the list of harms to listeners’ interests that justify legal protection in certain settings consistent with the First Amendment. I propose that First Amendment law can and should be understood to permit the government to intervene to protect listeners from speakers’ manipulative efforts in settings where listeners cannot protect themselves precisely because they are unaware that they are being manipulated.