Numerical Optimization with Computational Errors

Zaslavski, Alexander J.

103.99 € (VAT incl.)

This book studies approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author shows that the algorithms generate a good approximate solution if the computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution and the number of iterations needed to reach it. Researchers and students interested in optimization theory and its applications will find this book instructive and informative.
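To make the central idea concrete, here is a minimal sketch (not taken from the book) of an iterative method in which every step is perturbed by a computational error bounded above by a small constant delta; the quadratic objective, step size, and error bound are illustrative assumptions.

```python
import random

def inexact_gradient_descent(grad, x0, step, delta, iterations):
    """Gradient descent where each gradient evaluation carries an
    additive computational error of magnitude at most delta."""
    x = x0
    for _ in range(iterations):
        error = random.uniform(-delta, delta)  # bounded computational error
        x = x - step * (grad(x) + error)
    return x

# Minimize f(x) = (x - 3)^2, whose exact minimizer is x* = 3.
grad = lambda x: 2.0 * (x - 3.0)
approx = inexact_gradient_descent(grad, x0=0.0, step=0.1, delta=1e-3, iterations=200)
print(approx)  # lands within O(delta) of 3: a "good approximate solution"
```

Because the error never exceeds delta, the iterates settle in a small neighborhood of the exact minimizer rather than converging to it, which is exactly the kind of behavior the book quantifies.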

This monograph contains 16 chapters. Chapter 1 gives an introduction and an overview of the concepts needed in the book. Chapter 2 studies the subgradient projection algorithm for the minimization of convex, nonsmooth functions. The mirror descent algorithm is considered in Chapter 3. The gradient projection algorithm for the minimization of convex, smooth functions is analyzed in Chapter 4. Chapter 5 contains an extension of this algorithm to the solution of linear inverse problems arising in signal/image processing. The convergence of Weiszfeld's method in the presence of computational errors is discussed in Chapter 6 (a sketch follows this overview). Chapter 7 solves constrained convex minimization problems using the extragradient method. Chapter 8 is devoted to a generalized projected subgradient method for the minimization of a convex function over a set that is not necessarily convex. The convergence of a proximal point method in a Hilbert space in the presence of computational errors is explored in Chapter 9. Chapter 10 demonstrates the local convergence of a proximal point method in a metric space in the presence of computational errors. Chapter 11 establishes the convergence of a proximal point method to a solution of the inclusion induced by a maximal monotone operator, in the presence of computational errors. In Chapter 12 the convergence of the subgradient method for solving variational inequalities is proved in the presence of computational errors. The convergence of the subgradient method to a common solution of a finite family of variational inequalities and a finite family of fixed-point problems, in the presence of computational errors, is shown in Chapter 13. Chapter 14 is devoted to the continuous subgradient method. Penalty methods are studied in Chapter 15, and Chapter 16 is dedicated to Newton's method.
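As an illustration of one of the methods surveyed above, here is a hedged sketch of Weiszfeld's method (the topic of Chapter 6) for the geometric median, with each iterate perturbed by a bounded computational error; the anchor points, error bound delta, and iteration count are illustrative assumptions, not taken from the book.

```python
import math
import random

def weiszfeld_with_errors(points, x0, delta, iterations):
    """Weiszfeld iteration: the next iterate is a weighted average of the
    anchor points, with weights inversely proportional to the current
    distances; each step is then perturbed by a computational error of
    magnitude at most delta in every coordinate."""
    x = x0
    for _ in range(iterations):
        weights = [1.0 / max(math.dist(x, p), 1e-12) for p in points]
        total = sum(weights)
        x = tuple(sum(w * p[i] for w, p in zip(weights, points)) / total
                  for i in range(len(x)))
        # bounded computational error added to each coordinate
        x = tuple(c + random.uniform(-delta, delta) for c in x)
    return x

anchors = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
median = weiszfeld_with_errors(anchors, x0=(1.0, 1.0), delta=1e-4, iterations=100)
print(median)  # close to the geometric median of the three anchor points
```

With the error bounded by delta, the iterates reach and remain in a small neighborhood of the geometric median, mirroring the convergence-under-errors results the chapter proves.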

  • ISBN: 978-3-319-30920-0
  • Publisher: Springer
  • Binding: Hardcover
  • Pages: 270
  • Publication date: 12/07/2016
  • Number of volumes: 1
  • Language: English