Optimal control is a slightly different notion. In an optimal control problem, we ask
the following question: given a particular state that we hope to reach, what is the most
efficient way of reaching that state? In other words, we know the state of the system
that we hope to reach, and the control problem is to find the solution that steers the
system to that state in the best manner. Using a model of the immune system fighting
an infection as an example, we may formulate an optimal control problem as follows:
given that we wish to eliminate the infected cells within one month, what is
the best drug treatment schedule we can devise? To summarize: in an optimal control
problem we know the state we hope to reach and we search for the best way to reach
it, whereas in an optimization problem, a goal is stated and we compare solutions to
see which one maximizes or minimizes that goal.
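As an informal illustration, the following Python sketch poses a tiny optimal control problem in the spirit of the infection example above. The dynamics, rates, and target state are invented for illustration, and the horizon is kept short so that every schedule can be checked by brute force.

```python
import itertools

# Toy version of the infection example: one state variable (the
# infected-cell count) and one binary control per day (treat or not).
# All rates and numbers are invented for illustration, and the horizon
# is shortened to 7 days so that exhaustive search over all 2**7
# schedules stays cheap.

def simulate(schedule, infected=1000.0):
    """Advance the infected-cell count one day per schedule entry."""
    for treat in schedule:
        infected *= 1.10      # untreated growth: +10% per day
        if treat:
            infected *= 0.60  # a dose kills 40% of infected cells
    return infected

TARGET = 100.0  # "eliminated" here means: below this count at the end

best = None
for schedule in itertools.product([0, 1], repeat=7):
    if simulate(schedule) < TARGET:           # target state reached?
        doses = sum(schedule)
        if best is None or doses < best[1]:   # fewest doses wins
            best = (schedule, doses)

print("best schedule:", best)
```

Here the target state (few remaining infected cells) is fixed in advance, and the search is over ways of reaching it; this is the hallmark of an optimal control problem as described above.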
We now present an example explaining several terms related to optimization and
optimal control that will be used in Sections 5.6 and 5.7. We then conclude this
section with definitions of these terms, in order to standardize and clarify
the terminology within this chapter. Note that while many of these terms are also used
in optimal control for continuous systems, there might be subtle differences in their
meanings when applied to agent-based models. Furthermore, as this chapter is meant
to provide an introduction to the topic, more formal definitions are outside the scope
of this text. For a more formal treatment, see [11].
Example 5.1. Suppose that we are modeling lung cancer and we wish to study the
effect of a certain drug. We formulate our optimal control problem as follows: which
treatment schedule should we choose in order to reduce the number of cancer cells
so that it remains below some fixed threshold over the course of one year, given that
we wish to minimize the number of times the drug is administered and maximize the
number of healthy cells? Note here that we use the term “treatment schedule” instead
of “treatment” because the drug we are administering is fixed—we want to determine
which days the drug should be administered in order to obtain optimal results.
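To make this formulation concrete, the following sketch recasts Example 5.1 as a search over daily 0/1 treatment schedules. The two-compartment model, the rates, the threshold, and the weighting of the two objectives are all placeholder choices, not values from any real study.

```python
import random

# A sketch of Example 5.1 as a search over daily 0/1 treatment
# schedules.  All rates and weights below are placeholders.

DAYS = 365
THRESHOLD = 1e4  # cancer-cell count must stay below this all year

def simulate(schedule, cancer=5e3, healthy=1e6):
    """Return (feasible, doses, final healthy count) for a schedule."""
    for treat in schedule:
        cancer *= 1.02                      # daily tumor growth
        healthy += 0.001 * (1e6 - healthy)  # healthy-cell regeneration
        if treat:
            cancer *= 0.70   # the drug kills cancer cells ...
            healthy *= 0.99  # ... and, as a side effect, healthy ones
        if cancer > THRESHOLD:
            return False, sum(schedule), healthy
    return True, sum(schedule), healthy

def cost(schedule):
    """Smaller is better: few doses, many healthy cells at the end."""
    feasible, doses, healthy = simulate(schedule)
    if not feasible:
        return float("inf")  # threshold constraint violated: reject
    return doses - 1e-4 * healthy  # relative weight is arbitrary

# Random search stands in for a real optimizer here.
random.seed(0)
candidates = [[random.randint(0, 1) for _ in range(DAYS)]
              for _ in range(200)]
best = min(candidates, key=cost)
print("doses used:", sum(best), "cost:", cost(best))
```

Note that the constraint (cancer cells below the threshold) is enforced by rejecting infeasible schedules, while the two competing objectives are folded into a single cost; how to weigh such objectives against one another is itself a modeling decision.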
In this case, there will be many variables: the number of healthy cells, the number
of cancer cells, the rate at which cancer cells grow, the rate at which healthy cells
regenerate, the expected lifespan of the patient, the frequency with which the drug
is administered, the type of drug, and so on. Some of these variables will have fixed
values: for example, the rate at which healthy cells regenerate (during intervals when
no treatment is administered) can be determined through experimental measurements,
and in fact this rate helps to define the model itself. Such variables are referred to as
model parameters—they are a part of the specification of the model. The repeated
interactions of the entities in the model, such as immune cells or rabbits, are referred
to as the model dynamics. Note that we will only have direct control over some of the
variables, and others we will simply measure by observation. For example, during
each day of the simulated treatment we can decide whether or not to administer the
drug; thus we have direct control over the value of this variable. We refer to these as
control variables, because we have direct control over their values and they exercise
control over certain aspects of the model. On the other hand, we might not be able to
control other aspects, such as the number of white blood cells present at the site of
the tumor; these we can only measure by observation.
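In code, one might organize this classification of variables as follows; the class and field names here are hypothetical and simply mirror Example 5.1.

```python
from dataclasses import dataclass, field

# One possible organization of the classification above.  The names and
# default values are placeholders mirroring Example 5.1, not part of
# any particular modeling framework.

@dataclass(frozen=True)
class ModelParameters:
    """Fixed quantities that help define the model; measured, not chosen."""
    cancer_growth_rate: float = 0.02
    healthy_regen_rate: float = 0.001

@dataclass
class ControlVariables:
    """Quantities whose values we set directly: here, the daily schedule."""
    schedule: list = field(default_factory=lambda: [0] * 365)

@dataclass
class ObservedVariables:
    """Quantities we can only measure as the model runs."""
    white_blood_cells: float = 0.0
    healthy_cells: float = 1e6
    cancer_cells: float = 5e3
```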