drugs was modelled by linear differential equations. The authors applied Pontryagin's maximum principle to this model, with the objective of minimising the quantity of tumour cells and, as constraints, an upper bound on the instantaneous drug flow and an upper bound on the total drug dose. They obtained the optimal control as a function of the adjoint vector: the optimal control saturates the dose bound when a function related to the adjoint (the switching function) is nonzero, and it follows a singular curve when this function vanishes on an interval. They then used a shooting method to construct the optimal control as a feedback function from these adjoint-vector dependent singular curves. Algorithmic details are given in [99].
In general, shooting methods give very precise results, but the switching structure, obtained by studying Pontryagin's maximum principle, must be known in advance for them to be efficient. When this structure is unknown, one can still resort to direct methods, which we describe next.
Direct Methods
Direct methods consist in fully discretising the control problem and then solving the resulting finite-dimensional optimisation problem. The discretisation of an optimal control problem yields an optimisation problem with a large number of variables. The theory of differentiable optimisation is the classical tool for such problems [24, 29, 111]. However, in order to overcome the limits of differentiable optimisation, some authors use stochastic algorithms to solve the discretised problem. We next give some examples of these techniques in the context of chemotherapy.
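Before turning to those examples, here is a minimal Python sketch of what total discretisation means in practice, on a hypothetical scalar tumour model $x' = (r - u)x$ (not taken from the cited references): the control is discretised on a grid, the state is propagated by explicit Euler inside the objective, and the resulting bound-constrained finite-dimensional problem is handed to a standard differentiable-optimisation routine.

```python
# Direct-method sketch on a hypothetical scalar tumour model x' = (r - u) x:
# the control u becomes N decision variables, and the discretised problem
# is solved with L-BFGS-B. All model values and weights are made up.
import numpy as np
from scipy.optimize import minimize

N, T = 50, 5.0                     # grid size and horizon
r, u_max, weight = 0.5, 1.0, 0.1   # growth rate, flow bound, dose weight
dt, x0 = T / N, 1.0

def objective(u):
    # Discretised cost: final tumour size plus a penalty on the total dose.
    x = x0
    for ul in u:
        x += dt * (r - ul) * x     # explicit Euler step
    return x + weight * dt * np.sum(u)

res = minimize(objective, np.full(N, 0.5 * u_max),
               method="L-BFGS-B", bounds=[(0.0, u_max)] * N)
print("optimised discretised cost:", res.fun)
```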
Gradient Algorithm
When the problem is formulated without any state constraint, one can use the gradient algorithm, as in [118]. The authors proposed a cell-cycle-dependent model written with one ODE per cell-cycle phase. They controlled the transition and death rates and optimised a linear combination of the number of cancer cells and of the total drug dose. The gradient algorithm starts here with an initial control strategy $u_0$ and the associated trajectory $x_{u_0}$. It consists in successive improvements of the discretised objective
$$F_0(u) = \sum_{l=0}^{N} f_0\bigl(t_l, x_u(t_l), u(t_l)\bigr) + g_0\bigl(T, x_u(T)\bigr)$$
by
$$u_{k+1} = P_U\bigl(u_k - \alpha \nabla F_0(u_k)\bigr),$$
where $U$ is the set of admissible controls and $\alpha$ is a step length chosen in order
to guarantee a sufficient decrease of the objective, for instance with an Armijo or
Wolfe line-search rule. When one computes the gradient of the objective with respect to the control, an adjoint vector appears that is a discrete version of the adjoint vector in Pontryagin's maximum principle.
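The following self-contained Python sketch implements this projected gradient iteration with an Armijo backtracking rule on the same hypothetical tumour model as in the previous sketch; the gradient $\nabla F_0$ is assembled by a backward sweep of the discrete adjoint just mentioned. Model parameters, weights and tolerances are illustrative assumptions.

```python
# Projected gradient u_{k+1} = P_U(u_k - alpha * grad F_0(u_k)) with Armijo
# backtracking, on the hypothetical model x' = (r - u) x; here f_0 penalises
# the dose and g_0(x) = x measures the final tumour size.
import numpy as np

N, T = 50, 5.0
r, u_max, weight = 0.5, 1.0, 0.1
dt, x0 = T / N, 1.0

def simulate(u):
    # Explicit Euler trajectory of x' = (r - u) x.
    x = np.empty(N + 1)
    x[0] = x0
    for l in range(N):
        x[l + 1] = x[l] + dt * (r - u[l]) * x[l]
    return x

def F0(u):
    # Discretised objective: final tumour size plus total-dose penalty.
    return simulate(u)[-1] + weight * dt * np.sum(u)

def grad_F0(u):
    # Backward sweep for the discrete adjoint p (g_0(x) = x gives p_N = 1),
    # then dF0/du_l = dt * weight - dt * p_{l+1} * x_l.
    x = simulate(u)
    p = np.empty(N + 1)
    p[N] = 1.0
    for l in range(N - 1, -1, -1):
        p[l] = p[l + 1] * (1.0 + dt * (r - u[l]))
    return dt * weight - dt * p[1:] * x[:-1]

def project(u):
    # P_U: projection onto the admissible box 0 <= u <= u_max.
    return np.clip(u, 0.0, u_max)

u = np.full(N, 0.5 * u_max)
for k in range(200):
    g = grad_F0(u)
    alpha = 1.0
    while alpha > 1e-12:  # Armijo backtracking along the projection arc
        u_new = project(u - alpha * g)
        if F0(u_new) <= F0(u) + 1e-4 * g.dot(u_new - u):
            break
        alpha *= 0.5
    if np.allclose(u_new, u):
        break  # projected-gradient stationary point reached
    u = u_new
print("objective after projected gradient:", F0(u))
```

Compared with the finite-difference gradients implicitly used in the previous sketch, the adjoint sweep keeps the cost of one gradient evaluation at roughly two trajectory integrations, independently of the number of control variables $N$.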