There are many population models that involve a spatial component [27, 45, 155]. For example, recall the bioreactor model used in Example 12.4 and Lab 12. There, we assumed contaminant and bacteria levels were spatially uniform, but this may not always be a valid assumption. Different locations in the bioreactor may promote or discourage bacterial growth. In this case, we would add a spatial variable (or variables). The second volume of Murray [150] contains many different examples of models with spatial features. Of course, depending on the scale of the spatial resolution, the introduction of space variables can change our models from ODEs (with time as the only underlying variable) to partial differential equations (PDEs). If the spatial structure gives a metapopulation model of ODEs [79], then the systems approach to optimal control already presented is appropriate. We now turn our attention to optimal control of PDEs.

J.-L. Lions laid the foundation of the basic ideas of optimal control of partial differential equations in the 1970s [129]. There is no complete generalization of Pontryagin's Maximum Principle to partial differential equations, but the book by Li and Yong [128] deals with corresponding "maximum principle" type results. There are also counterexamples for certain infinite-dimensional systems (systems of PDEs are considered infinite-dimensional systems, while ODEs are finite dimensional). The examples we treat here do have maximum principle type results. We also call the reader's attention to the books by Barbu, Lasiecka and Triggiani, Fattorini, and Mordukhovich for a variety of results on optimal control of PDEs [5, 10, 11, 57, 110, 111, 112, 147].

Choosing the underlying solution space for the states is a crucial feature of optimal control of PDEs. Classical solutions (solutions with all the derivatives occurring in the PDE being continuous) will not exist for most nonlinear PDE problems, so deciding in what "weak" sense we are solving the PDEs is essential. We refer to Evans [56] and Friedman [66] for the rigorous definitions of Sobolev spaces and weak derivatives and give only an informal treatment here. This chapter requires more background in analysis and PDEs than the other chapters.

Let Ω be an open, connected subset of ℝⁿ. From now on, x (and occasionally y) will be the space variable associated with Ω. One can think of a weak derivative as the function that makes the appropriate integration by parts formula work: for u and v, which are integrable (in the Lebesgue sense) on Ω, we say v is the weak x_i-derivative of u if


∫_Ω u φ_{x_i} dx = − ∫_Ω v φ dx

for all φ in C_c^∞(Ω), the set of all infinitely differentiable functions on Ω with compact support.

For most parabolic PDE control problems, such as those involving diffusion, the appropriate solution space is L²([0, T]; H¹₀(Ω)). Roughly speaking, this space consists of all functions square-integrable in time that have one weak derivative in each space variable, with the functions and their weak derivatives square-integrable in space (and zero boundary values). The control set frequently consists of Lebesgue-integrable functions with specified upper and lower bounds.

The general idea of optimal control of PDEs starts with a PDE with state solution w and control u. Take A to be a partial differential operator with appropriate initial conditions (IC) and boundary conditions (BC),

Aw = f(w, u) in Ω × [0, T],

along with BC and IC, assuming the underlying variables are x for space and t for time. We are treating problems with space and time variables, but one could also treat steady-state problems with only spatial variables [26, 122, 125].

Again, the objective functional represents the goal of the problem; here we write our functional in integral form. We seek to find the optimal control u∗ in an appropriate control set such that

J(u∗) = inf_u J(u),

with objective functional

J(u) = ∫_0^T ∫_Ω g(x, t, w(x, t), u(x, t)) dx dt.
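To make this setup concrete, here is a minimal numerical sketch (not from the text): we discretize a one-dimensional heat equation w_t − w_xx = u on Ω = (0, 1) with homogeneous Dirichlet BC, so that A plays the role of the heat operator and u is a distributed source control, and we evaluate J(u) for a hypothetical running cost g = w² + u². The grid sizes, initial condition, and choice of g are all illustrative assumptions, and the quadrature is a crude rectangle rule.

```python
import numpy as np

# Hypothetical example: state equation  w_t - w_xx = u(x, t)  on (0, 1),
# w = 0 on the boundary, w(x, 0) = sin(pi x).
# Running cost g = w^2 + u^2 is an assumption chosen for this sketch.

nx, nt = 51, 2001                 # grid sizes (assumed)
T = 0.1                           # final time (assumed)
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = T / (nt - 1)
assert dt <= 0.5 * dx**2          # explicit-scheme stability condition

def solve_state(u):
    """Explicit finite-difference solve of w_t - w_xx = u with zero BC."""
    w = np.zeros((nt, nx))
    w[0] = np.sin(np.pi * x)                              # initial condition
    for n in range(nt - 1):
        lap = np.zeros(nx)
        lap[1:-1] = (w[n, 2:] - 2 * w[n, 1:-1] + w[n, :-2]) / dx**2
        w[n + 1] = w[n] + dt * (lap + u[n])
        w[n + 1, 0] = w[n + 1, -1] = 0.0                  # Dirichlet BC
    return w

def J(u):
    """J(u) = int_0^T int_Omega (w^2 + u^2) dx dt, rectangle-rule quadrature."""
    w = solve_state(u)
    g = w**2 + u**2
    return float(g.sum() * dx * dt)

u0 = np.zeros((nt, nx))           # no control
u1 = -0.5 * np.ones((nt, nx))     # a crude constant control
print(J(u0), J(u1))
```

Finding the control that minimizes J over an admissible set is then a separate (and harder) problem; the sketch only illustrates how a state solve and the double integral fit together. Note the explicit scheme requires dt ≤ dx²/2 for stability; an implicit scheme would remove that restriction.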