If we classify numerical optimization techniques according to the way the design point is improved after each iteration, there are three kinds of optimization techniques: non-gradient-based, gradient-based, and hybrid. They are described briefly as follows:
Non-gradient-based optimization techniques do not require the objective function, f(x), to be differentiable because the algorithms do not use derivatives of f(x). Examples of non-gradient-based optimization techniques are adaptive simulated annealing, Hooke-Jeeves direct search, and the genetic algorithm (GA). These techniques tend to reach a global optimum but require a huge number of function evaluations. The GA is a well-known non-gradient-based optimization technique; it is a stochastic search or optimization algorithm that mimics Darwin's theory of biological evolution.
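As a concrete illustration, the sketch below is a minimal real-coded GA in Python. The operator choices (tournament selection, blend crossover, Gaussian mutation) and all parameter values are illustrative assumptions, not taken from any specific method above; note that only values of f are used, never its derivatives.

```python
import random

def ga_minimize(f, bounds, pop_size=30, generations=100,
                crossover_rate=0.9, mutation_rate=0.1):
    """Minimize f over box bounds with a simple real-coded GA."""
    # Random initial population inside the bounds
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [f(ind) for ind in pop]
        ranked = [ind for _, ind in sorted(zip(fitness, pop), key=lambda t: t[0])]
        new_pop = ranked[:2]                       # elitism: keep the two best
        while len(new_pop) < pop_size:
            # Tournament selection of two parents
            p1 = min(random.sample(pop, 3), key=f)
            p2 = min(random.sample(pop, 3), key=f)
            # Arithmetic (blend) crossover
            if random.random() < crossover_rate:
                a = random.random()
                child = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
            else:
                child = list(p1)
            # Gaussian mutation, clipped to the bounds
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=f)

# Example: minimize the sphere function; f is treated as a black box.
best = ga_minimize(lambda x: sum(v * v for v in x), bounds=[(-5, 5)] * 2)
```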
Gradient-based techniques define the search directions by the gradient of the function at the current point. In practice, there are many kinds of gradient-based optimization techniques, such as the generalized reduced gradient, conjugate gradient, method of feasible directions, mixed-integer optimization, sequential linear programming, sequential quadratic programming, and Davidon–Fletcher–Powell methods. Gradient-based techniques generally converge quickly, but they may require long run times as the number of variables increases. They also risk converging to a local extremum in highly nonlinear optimization problems.
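A minimal sketch of a gradient-based iteration in Python is given below, assuming the gradient is approximated by finite differences (as Fig. 1 indicates). Steepest descent with a backtracking line search is used here as a simple illustrative choice, not as any of the specific methods listed above.

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Forward-difference gradient of f at x (no analytic derivatives)."""
    fx, g = f(x), np.empty_like(x)
    for i in range(x.size):
        xh = x.copy()
        xh[i] += h
        g[i] = (f(xh) - fx) / h
    return g

def steepest_descent(f, x0, tol=1e-6, max_iter=500):
    """Gradient-based minimization: move along the negative gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = fd_gradient(f, x)
        if np.linalg.norm(g) < tol:      # (approximately) stationary point
            return x
        step = 1.0
        # Backtracking line search enforcing the Armijo decrease condition
        while step > 1e-12 and f(x - step * g) > f(x) - 1e-4 * step * (g @ g):
            step *= 0.5
        x = x - step * g
    return x

x_star = steepest_descent(lambda x: (x[0] - 1) ** 2 + 10 * (x[1] + 2) ** 2,
                          x0=[0.0, 0.0])
```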
Hybrid optimization techniques apply non-gradient-based and gradient-based techniques in combination, one after the other, in order to exploit the advantages and reduce the disadvantages of each single technique (a sketch appears at the end of Section 2.2.1). Presenting all of these optimization techniques is beyond the scope of this paper.
2.2. Common optimization methods
The term optimization method used in this paper refers to whether or not explicit objective functions are formulated. For simulation-based optimization, the objective functions are often in the form of implicit equations: the value of the objective function is unknown until simulation results are obtained. Two approaches are used to resolve the optimization problem, direct optimization and metamodel-based optimization methods, as shown in Fig. 1. The details of these two optimization methods are described as follows.
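To make the notion of an implicit objective concrete, the sketch below wraps a simulation call as a black-box objective and hands it to SciPy's derivative-free Nelder-Mead optimizer; run_simulation is a hypothetical placeholder introduced here, not part of any library or of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def run_simulation(x):
    """Hypothetical stand-in for an external solver (e.g. an FE analysis).
    A real workflow would write an input deck, launch the solver, and
    parse the response of interest from its output files."""
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2 + 5.0

def objective(x):
    # Implicit objective: its value is known only after the simulation
    # has run; no closed-form expression relating x to f(x) is available.
    return run_simulation(np.asarray(x, dtype=float))

# A derivative-free optimizer treats the simulation as a black box.
res = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)   # the design found by probing the black box
```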
2.2.1. Direct optimization methods
Direct numerical optimization is an approach in which explicit objective functions are not required. Both gradient-based and non-gradient-based optimization techniques can be applied to solve the optimization problem. Sometimes, direct optimization methods combine the GA with other optimization techniques.
Fig. 1. Classification of optimization methods. [Figure: the direct discrete optimization method (no explicit mathematical functions showing the relationship between inputs and outputs) branches into gradient-based optimization techniques (derivatives by finite difference) and non-gradient-based optimization techniques.]
It is well known that the GA tends to reach a global extremum but requires a large number of function evaluations. In contrast, gradient-based methods efficiently converge to a local extremum. If these two algorithms are combined into a hybrid system, they can reinforce each other's advantages and offset each other's disadvantages.
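The sketch below combines the two earlier examples into such a hybrid scheme; the two-stage structure (GA first, then steepest descent from the GA's best design) is one simple illustrative arrangement among many, not the paper's specific method.

```python
import numpy as np

def hybrid_minimize(f, bounds, ga_generations=50):
    """Two-stage hybrid sketch reusing ga_minimize and steepest_descent
    from the earlier examples: the GA explores globally, and its best
    design seeds a local gradient-based refinement."""
    x0 = ga_minimize(f, bounds, generations=ga_generations)   # global stage
    return steepest_descent(f, np.asarray(x0, dtype=float))   # local stage

# Multimodal test function (Rastrigin): many local minima, one global.
def rastrigin(x):
    return sum(v * v - 10 * np.cos(2 * np.pi * v) + 10 for v in x)

x_best = hybrid_minimize(rastrigin, bounds=[(-5.12, 5.12)] * 2)
```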