Chapter five addresses a common argument in favor of SRM, as well as a prominent objection to that argument put forward by Stephen Gardiner. According to the "Lesser of Two Evils Argument," there are possible future cases in which SRM would offer the best (or least bad) response to dangerous climate change, and therefore SRM would be (morally) justified in such cases. The thought behind this type of reasoning is that SRM, despite its many problems, could be less problematic than the available alternatives. Some have argued further that this gives us sufficient reason to research and develop SRM, thus affording future generations the opportunity to deploy it if necessary. This is the so-called "Arm the Future Argument."

Against arguments of this kind, Gardiner has suggested that cases in which SRM is the best option might constitute genuine moral dilemmas, or scenarios in which all available actions involve moral wrongdoing. In genuinely dilemmatic cases, no course of action is morally permissible, meaning that SRM would not be morally justified even if it were the best option. This approach does well in acknowledging both our past moral failure to respond to climate change and the fact that SRM carries various moral ills. However, it also incurs certain costs. First, it relies on the controversial claim that there are genuine moral dilemmas. Second, and perhaps more problematically for purposes of climate policy, it risks undermining moral action-guidance in pessimistic scenarios. In bad situations, we might look to moral theory for guidance on how we ought to act, but in a genuine moral dilemma there is no fact of the matter regarding this, given that all available actions are impermissible.

Alternatively, I defend the view that scenarios in which SRM is the best (or least bad) option are not genuine moral dilemmas but rather situations calling for agent-regret, an attitude whereby one harbors regret toward some action one has performed.
Importantly, this need not involve regretting that one acted in a certain way. One might instead harbor agent-regret regarding the fact that some action was necessary, that better options were not available, or that some action produced certain outcomes. I argue that, when SRM is the best (or least bad) option available, it can be permissible to deploy it, but that it is appropriate to foster agent-regret for that deployment. This allows us to acknowledge serious ethical problems with SRM but, unlike a moral-dilemma framing, it does not undermine moral action-guidance.