The compactness of L_f(f(x₀)) implies the existence of a global minimum point x̌ ∈ L_f(f(x₀)) of Q(x, x̃*). For all x̂ ∈ ∂L_f(f(x₀)) we have that f(x̂) = f(x₀) ≥ f(x̃*), since x̃* is obtained by a local minimization within the level set L_f(f(x₀)). Then

    Q(x̂, x̃*) = exp(−‖x̂ − x̃*‖²/γ²) + 1 − exp(−τ[f(x̂) − f(x̃*) + ε]) > 0.

Let x* be a global minimum point of f(x). Then (4) implies that there exists a τ₃ such that for all τ ≥ τ₃

    Q(x*, x̃*) < 0,

from which we have

    Q(x̌, x̃*) ≤ Q(x*, x̃*) < 0,    (12)

which implies that x̌ is in the strict interior of L_f(f(x₀)). Moreover, we have

    Q(x̃*, x̃*) = 2 − exp(−τε) > 0,

and hence (12) implies that x̌ ∈ L_Q(Q(x̃*, x̃*)). Therefore, since by point (i) we know that x̌ ∉ {x ∈ L_f(f(x₀)) : f(x) ≥ f(x̃*)}, and we proved that it is in the interior of L_f(f(x₀)), we get that x̌ ∈ {x ∈ L_f(f(x₀)) : f(x) < f(x̃*)}. Thus, choosing τ̄ ≥ max{τ₂, τ₃}, the result follows.

The properties described so far hold for the unconstrained problem (2). The problem we are interested in, however, is problem (1), which presents some nonlinear constraints g(x) ≤ 0 and some box constraints l ≤ x ≤ u. In order to handle the constraints and obtain a problem of the form (2), we use an ℓ∞-penalty function. The resulting problem is

    min_{x∈ℝⁿ} p_ε(x) = f(x) + (1/ε) max{0, g₁(x), …, g_m(x)}
                         + (1/ε) max{0, l₁ − x₁, …, l_n − x_n, x₁ − u₁, …, x_n − u_n},    (13)

where ε > 0 is a penalty parameter which should be chosen sufficiently small in order for problem (13) to be equivalent to problem (1). We note that the function p_ε(x) is non-smooth on the boundary of the set {x ∈ [l, u] : g(x) ≤ 0}, but twice continuously differentiable in its interior, where p_ε(x) = f(x). Hence, under Assumption 1, p_ε(x) is twice continuously differentiable in a sufficiently small neighborhood of every strictly feasible point x*.
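As a concrete illustration, the ℓ∞-penalty function of (13) can be sketched as follows. This is a minimal sketch, not the authors' implementation; the objective `f`, constraint function `g`, and bounds used in the demo are placeholder assumptions.

```python
import numpy as np

def p_eps(x, f, g, l, u, eps=1e-3):
    """l_infinity-penalty function p_eps of problem (13):
    f(x) + (1/eps) * max{0, g_1(x), ..., g_m(x)}
         + (1/eps) * max{0, l_i - x_i, x_i - u_i}."""
    x = np.asarray(x, dtype=float)
    # Constraint violation term: zero if g(x) <= 0 componentwise
    constraint_viol = max(0.0, *np.atleast_1d(g(x)))
    # Box violation term: zero if l <= x <= u
    box_viol = max(0.0, float(np.max(l - x)), float(np.max(x - u)))
    return f(x) + (constraint_viol + box_viol) / eps

# Placeholder example: f(x) = ||x||^2 with one constraint g(x) = x_0 - 0.5 <= 0
f = lambda x: float(np.dot(x, x))
g = lambda x: np.array([x[0] - 0.5])
l, u = np.full(2, -1.0), np.full(2, 1.0)

print(p_eps([0.2, 0.1], f, g, l, u))   # strictly feasible: p_eps(x) = f(x) = 0.05
print(p_eps([0.7, 0.0], f, g, l, u))   # infeasible: 0.49 + 0.2/0.001 = 200.49
```

Note how, at the strictly feasible point, both penalty terms vanish and p_ε coincides with f, matching the smoothness remark above.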
Moreover, we note that since f(x) satisfies Assumption 1, all the level sets of p_ε(x) are compact as well.

2.2 A deterministic generation of starting points

A critical aspect of algorithm FILLED resides in Step 3. Namely, depending on the starting point x̄, it can happen that the point y produced by Step 3 does not satisfy the condition f(y) < f(x*_k), because the sequence of points produced by the local minimization algorithm leaves the level set L_f(f(x₀)). Therefore, it may be necessary to reiterate Step 3 by choosing a different starting point x̄.
The algorithm based on a filled function proposed in Lucidi and Piccialli (2002) solves this problem by extracting the needed starting points from a pseudo-random sequence. In this paper we propose a different selection rule based on a deterministic generation of points. In particular, we adopt a generation strategy, named DIRGEN, inspired by the DIRECT algorithm proposed in Jones et al. (1993). Whenever a new starting point is needed, a single iteration of a DIRECT-type algorithm is performed. This amounts to subdividing one of the largest hyperintervals of the current partition of the feasible set, using the same partitioning strategy as DIRECT. In this way it is guaranteed that the distance between successively generated starting points decreases quite uniformly. We stress the difference between the original DIRECT algorithm and the point generator DIRGEN, which consists in that DIRGEN does not perform any objective-function-based hyperinterval selection. In the limit, if applied an infinite number of times, DIRGEN produces a dense set of points in the box [l, u]. The algorithm which implements the above strategy and uses the new filled function applied to problem (13) is the following:

Algorithm FILLDIR

Data: l ≤ x₀ ≤ u, ε > 0. Set k ← 0, A_{−1} ← ∅.

Step 1. Compute x*_k by applying a local minimization algorithm to problem (13) starting from x_k.

Step 2. Define the filled function

    Q(x, x*_k) = exp(−‖x − x*_k‖²/γ²) + 1 − exp(−τ[p_ε(x) − p_ε(x*_k) + ε]).

Step 3. Generate a set of points A_k using DIRGEN. Set A_k ← A_k ∪ A_{k−1}.
    Repeat
        Choose a point x̄ ∈ A_k, set A_k ← A_k \ {x̄}, and find
            y ∈ argmin_{x∈ℝⁿ} Q(x, x*_k)
        by using a local minimization algorithm starting from x̄.
    Until p_ε(y) < p_ε(x*_k) or A_k = ∅.

Step 4. If p_ε(y) < p_ε(x*_k), then set x_{k+1} ← y, k ← k + 1 and go to Step 1;
    else set x*_{k+1} ← x*_k, k ← k + 1 and go to Step 3.

End Algorithm

In our practical implementation, we set the constant γ = 1 and the parameters τ = 100, ρ = 10⁻³, ε = 10⁻³.
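For illustration, the filled function of Step 2 with the implementation parameters γ = 1, τ = 100, ε = 10⁻³ can be sketched as follows. This is a minimal sketch; `p_eps` stands for the penalty function (13) and the quadratic placeholder objective is an assumption, not from the paper.

```python
import numpy as np

def Q(x, x_star, p_eps, gamma=1.0, tau=100.0, eps=1e-3):
    """Filled function of Step 2:
    Q(x, x*_k) = exp(-||x - x*_k||^2 / gamma^2)
                 + 1 - exp(-tau * [p_eps(x) - p_eps(x*_k) + eps])."""
    d2 = float(np.sum((np.asarray(x) - np.asarray(x_star)) ** 2))
    return np.exp(-d2 / gamma**2) + 1.0 - np.exp(-tau * (p_eps(x) - p_eps(x_star) + eps))

# At x = x*_k the value reduces to 2 - exp(-tau*eps) > 0, as in the proof above.
p = lambda x: float(np.dot(np.asarray(x), np.asarray(x)))  # placeholder objective
x_star = np.array([0.0, 0.0])
print(Q(x_star, x_star, p))   # 2 - exp(-0.1) ≈ 1.0952
```

Points with p_ε(x) < p_ε(x*_k) drive the second term negative for large τ, which is what makes a local minimization of Q from a suitable x̄ escape the current minimizer.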
Moreover, since our final aim is to solve a problem where the derivatives are not available, we use as a local minimization tool the unconstrained derivative-free minimization method DF proposed in Lucidi and Sciandrone (2002), that is, the same one used in the algorithm FILLED proposed in Lucidi and Piccialli (2002) with which we compare.

3 Particle swarm optimization

The Particle Swarm Optimization (PSO) algorithm is a recent addition to the list of global search methods (Russell et al. 2001). Since it was originally introduced in Kennedy and Eberhart (1995), the PSO algorithm has been studied by many authors (see Kennedy and Spears 1998; Parsopoulos and Vrahatis 2002; Shi and Eberhart 1998a, 1998b), and successfully applied to solve several engineering problems (see Pinto et al. 2004; Shi and Eberhart 2001; Venter and Sobieszczanski-Sobieski 2003, 2004). In the following, the standard PSO method is described and a new version is then introduced and tested.

3.1 Description of algorithm PSO

The swarm strategy simulates the social behavior of a set of individuals (particles) which share information among themselves while exploring the design-variable space. In the PSO method each particle has its own memory to remember the best places that it has visited, whereas the swarm has a global memory to remember the best place ever visited. Moreover, each particle has an adaptable velocity to move itself in the design space.
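The particle memories and adaptable velocities just described drive the standard inertia-weight PSO update (Shi and Eberhart 1998a). The sketch below is illustrative only: the inertia weight `w` and acceleration coefficients `c1`, `c2` are common choices from the PSO literature, not values prescribed by this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update of standard PSO: each particle is
    pulled toward its own best visited point (pbest, the particle memory)
    and toward the best point ever visited by the swarm (gbest)."""
    n_particles, dim = x.shape
    r1 = rng.random((n_particles, dim))   # random factors, one per component
    r2 = rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Placeholder demo: 5 particles in 2D, all memories at the origin
x = rng.standard_normal((5, 2))
v = np.zeros((5, 2))
pbest = np.zeros((5, 2))
gbest = np.zeros(2)
x, v = pso_step(x, v, pbest, gbest)
print(x)
```

In a full PSO loop, `pbest` and `gbest` would be refreshed after each step by evaluating the objective at the new positions.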