ΔE = Ei+1 - Ei = f(Xi+1) - f(Xi)                                  (48)

The new state or design point Xi+1 can be found using the Boltzmann probability distribution:

P(Ei+1) = min{1, exp(-ΔE/kT)}                                     (49)

The Boltzmann constant serves as a scaling factor in simulated annealing and, as such, can be chosen as 1 for simplicity. Note that if ΔE ≤ 0, Eq. (49) gives P(Ei+1) = 1 and hence the point Xi+1 is always accepted. This is a logical choice in the context of minimization of a function because the function value at Xi+1, fi+1, is better (smaller) than at Xi, fi, and hence the design vector Xi+1 must be accepted. On the other hand, when ΔE > 0, the function value fi+1 at Xi+1 is worse (larger) than the one at Xi. According to most conventional optimization procedures, the point Xi+1 cannot be accepted as the next point in the iterative process. However, the probability of accepting the point Xi+1, in spite of its being worse than Xi in terms of the objective function value, is finite (although it may be small) according to the Metropolis criterion.
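As a minimal sketch of how Eq. (49) might be implemented, the acceptance probability can be computed as follows (Python is used only for illustration; the function name metropolis_probability and the default k = 1 are assumptions consistent with the scaling-factor remark above, not part of the original text):

import math

def metropolis_probability(delta_e, temperature, k=1.0):
    """Acceptance probability of Eq. (49): P(Ei+1) = min{1, exp(-dE/(kT))}."""
    # For dE <= 0 the exponential is >= 1, so the probability is capped at 1
    # and the improved point Xi+1 is always accepted.
    return min(1.0, math.exp(-delta_e / (k * temperature)))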
Algorithm [22]
The SA algorithm [22] can be summarized as follows. Start with an initial design vector X1 (iteration number i = 1) and a high value of the temperature T. Generate a new design point randomly in the vicinity of the current design point and find the difference in the function values:

ΔE = Δf = fi+1 - fi = f(Xi+1) - f(Xi)                             (51)

If fi+1 is smaller than fi (with a negative value of Δf), accept the point Xi+1 as the next design point. Otherwise, when Δf is positive, accept the point Xi+1 as the next design point only with a probability exp(-ΔE/kT). In practice, this means generating a random number in the range (0, 1) and comparing it with exp(-ΔE/kT): if the randomly generated number is smaller than exp(-ΔE/kT), accept the point Xi+1; otherwise, reject the point Xi+1.
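The generate-and-test step just described might look like the following sketch (the helper names sa_step and objective, the uniform perturbation used to generate a point "in the vicinity" of the current point, and the default step size are illustrative assumptions):

import math
import random

def sa_step(x_current, f_current, objective, temperature, step_size=0.1, k=1.0):
    """One candidate move of the SA algorithm: generate Xi+1, evaluate
    delta_f = f(Xi+1) - f(Xi) as in Eq. (51), and apply the Metropolis criterion."""
    # Generate a new design point randomly in the vicinity of the current point.
    x_new = [xi + random.uniform(-step_size, step_size) for xi in x_current]
    f_new = objective(x_new)
    delta_f = f_new - f_current

    if delta_f < 0:
        return x_new, f_new, True                 # improvement: always accept
    # Worse point: accept only with probability exp(-dE/(kT)).
    if random.random() < math.exp(-delta_f / (k * temperature)):
        return x_new, f_new, True
    return x_current, f_current, False            # reject Xi+1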
This completes one iteration of the SA algorithm. If the point Xi+1 is rejected, then the process of generating a new design point Xi+1 randomly in the vicinity of the current design point, evaluating the corresponding objective function value fi+1, and deciding to accept Xi+1 as the new design point, based on the use of the Metropolis criterion, Eq. (50), is continued. To simulate the attainment of thermal equilibrium at every temperature, a predetermined number (n) of new points Xi+1 are tested at any specific value of the temperature T. Once the number of new design points Xi+1 tested at any temperature T exceeds the value of n, the temperature T is reduced by a pre-specified fractional value c (0 < c < 1) and the whole process is repeated. The procedure is assumed to have converged when the current value of the temperature T is sufficiently small or when the changes in the function values (Δf) are observed to be sufficiently small.
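One possible way to arrange the inner loop of n trial points and the temperature reduction by the factor c, as described above, is sketched below (it reuses the sa_step helper from the earlier sketch; the stopping tolerance t_min and the default parameter values are illustrative assumptions rather than values prescribed by the text):

def simulated_annealing(objective, x1, t_initial, n=50, c=0.5, t_min=1e-6):
    """Simulated annealing loop: test n candidate points at each temperature,
    then reduce T by the factor c until T is sufficiently small."""
    x, f_x = x1, objective(x1)
    best_x, best_f = x, f_x
    temperature = t_initial
    while temperature > t_min:                 # convergence: T sufficiently small
        for _ in range(n):                     # simulate thermal equilibrium at T
            x, f_x, _accepted = sa_step(x, f_x, objective, temperature)
            if f_x < best_f:                   # keep track of the best point seen
                best_x, best_f = x, f_x
        temperature *= c                       # cooling schedule: T <- c * T
    return best_x, best_f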
The choices of the initial temperature T, the number of iterations n before reducing the temperature, and the temperature reduction factor c play important roles in the successful convergence of the SA algorithm. For example, if the initial temperature T is too large, it requires a larger number of temperature reductions for convergence. On the other hand, if the initial temperature is chosen to be too small, the search process may be incomplete in the sense that it might fail to thoroughly investigate the design space in locating the global minimum before convergence. The temperature reduction factor c has a similar effect. Too large a value of c (such as 0.8 or 0.9) requires too much computational effort for convergence. On the other hand, too small a value of c (such as 0.1 or 0.2) may result in a faster reduction in temperature that might not permit a thorough exploration of the design space for locating the global minimum solution. Similarly, a large value of the number of iterations n will help in achieving a quasi-equilibrium state at each temperature but will result in a larger computational effort. A smaller value of n, on the other hand, might result either in premature convergence or in convergence to a local minimum (due to inadequate exploration of the design space for the global minimum).

Unfortunately, no unique set of values of T, n, and c is available that will work well for every problem. As per the recommendation of Rao [22], the initial temperature T can be chosen as the average value of the objective function computed at a number of randomly selected points in the design space. The number of iterations n can be chosen between 50 and 100 based on the computing resources and the desired accuracy of the solution. The temperature reduction factor c can be chosen between 0.4 and 0.6 for a reasonable temperature reduction strategy (also termed the cooling schedule). More complex cooling schedules, based on the expected mathematical convergence rates, have been used in the literature for the solution of complex practical optimization problems. In spite of all the research being done on SA algorithms, the choice of the initial temperature T, the number of iterations n at any specific temperature, and the temperature reduction factor (or cooling rate) c still remains an art and generally requires a trial-and-error process to find suitable values for solving any particular type of optimization problem. The SA procedure is shown as a flowchart in Figure 2 [22].
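The recommendations above might translate into a usage sketch along the following lines (continuing the earlier sketches; the two-variable test function, the design-space bounds, and the sample size of 20 random points are illustrative assumptions):

import random

def estimate_initial_temperature(objective, bounds, samples=20):
    """Rao's suggestion [22]: take the average objective function value at a
    number of randomly selected points in the design space as the initial T."""
    values = []
    for _ in range(samples):
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        values.append(objective(x))
    return sum(values) / len(values)

# Example usage with an illustrative objective function and bounds.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
bounds = [(-5.0, 5.0), (-5.0, 5.0)]
x1 = [random.uniform(lo, hi) for lo, hi in bounds]

t_initial = estimate_initial_temperature(objective, bounds)
best_x, best_f = simulated_annealing(objective, x1, t_initial, n=50, c=0.5)

Here n = 50 and c = 0.5 fall within the ranges suggested above, while t_initial follows the average-objective-value recommendation.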