This paper deals with the finite-horizon stochastic optimal control problem with the expectation of the p-norm as the objective function and a jointly Gaussian, though not necessarily independent, additive disturbance process. We develop an approximation strategy that solves the problem over a certain class of nonlinear feedback policies while guaranteeing satisfaction of hard input constraints. For the special case p = 1, we give a bound on the suboptimality of the proposed strategy within this class of nonlinear feedback policies. We also develop a receding horizon policy that is recursively feasible with respect to state chance constraints and/or hard control input constraints in the presence of bounded disturbances. The performance of the proposed policies is examined in two numerical examples.
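To make the problem setting concrete, the following is a minimal sketch (with entirely hypothetical system parameters `a`, `b`, gain `K`, and bound `u_max`, none of which come from the paper) of a Monte Carlo estimate of the expected p-norm cost for a scalar linear system driven by correlated Gaussian disturbances, under a saturated linear feedback that respects a hard input constraint:

```python
import numpy as np

# Hypothetical scalar system  x_{k+1} = a*x_k + b*u_k + w_k, where the
# disturbances w_0, ..., w_{N-1} are jointly Gaussian but correlated.
# The (nonlinear) feedback u_k = clip(K*x_k, -u_max, u_max) enforces the
# hard input constraint |u_k| <= u_max by saturation.
rng = np.random.default_rng(0)
N, a, b, K, u_max, p = 10, 1.0, 1.0, -0.5, 1.0, 1

# Jointly Gaussian, non-independent disturbances: covariance with
# exponentially decaying cross-correlation between time steps.
Sigma = np.array([[0.8 ** abs(i - j) for j in range(N)] for i in range(N)])

def mc_cost(n_samples=20000):
    """Monte Carlo estimate of E[ || (x_1, ..., x_N) ||_p ]."""
    W = rng.multivariate_normal(np.zeros(N), Sigma, size=n_samples)
    costs = np.empty(n_samples)
    for s in range(n_samples):
        x, traj = 0.0, []
        for k in range(N):
            u = np.clip(K * x, -u_max, u_max)   # saturated feedback
            x = a * x + b * u + W[s, k]
            traj.append(x)
        costs[s] = np.linalg.norm(traj, ord=p)  # p-norm of the trajectory
    return costs.mean()

print(mc_cost())
```

The paper's contribution is an approximation strategy that optimizes over such constrained nonlinear feedback policies directly, rather than evaluating a fixed policy by sampling as above.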