The basic elements of the method (presented in [6]) are easily summarized. Before the optimization phase starts, the shape design problem is considered affected by uncertainty: the optimal design is, obviously, unknown, and a uniform probability of occurrence in the design space is assumed. The design space is then sampled randomly: a number $S$ of geometries or shapes $\{s_j\}_{j=1}^{S}$ are generated (but no objective function evaluation is performed!) by using an arbitrary geometry deformation technique to sample the space; we have been using a Free Form Deformation technique, preferred since it allows high design flexibility and is independent of the grid topology. The KLE is then used to find the so-called principal directions of this design space. The average geometry or shape, $\bar{s}$, is defined as $\bar{s} = \frac{1}{S}\sum_{j=1}^{S} s_j$, and any geometry in the space can be represented by the Karhunen–Loève expansion $s = \bar{s} + \sum_{k=1}^{K} \alpha_k z_k$. The $z_k$ are the principal directions, defined as the solutions of the eigenproblem $R z_k = \lambda_k z_k$, where $R = (G G^T)/S$ and $G = [s_1 - \bar{s}, \ldots, s_S - \bar{s}]$. $K = 3 N_G$ is the actual dimension of the design space, and $N_G$ is the number of grid points used to discretize the hull of the sampled geometries. The eigenvalues $\lambda_k$ represent the geometric variance associated with the corresponding eigenvectors $z_k$ and are used to assess the total geometric variance and build a reduced-dimensionality space: the generic geometry or shape is then defined as $s \approx \bar{s} + \sum_{k=1}^{n} \alpha_k z_k$, where the first $n < K$ eigenvalues, ordered by decreasing magnitude, retain a prescribed fraction of $\sum_{k=1}^{K} \lambda_k$, the total geometrical variance of the design space. In conclusion, by taking a random set of designs, the KLE provides the principal directions of the design space (the eigenvectors) together with the associated geometric variance (the eigenvalues). New designs are generated as linear combinations of the principal geometries and, according to KLE theory, no greater geometric variance can be retained by any other linear expansion of order $n$. (A numerical sketch of this construction is given after the DPSO steps below.)

3.2 New Developments in Global Optimization: Derivative-Free Algorithms

Here we briefly describe an evolutionary-type, derivative-free, global optimization method, Deterministic Particle Swarm Optimization (DPSO) [7], in which substantial modifications were introduced with respect to the original PSO algorithm. The fundamental DPSO steps can be summarized as follows:

Step 0. (Initialization) Distribute a set of particles inside the design space with a deterministic distribution and given initial velocities. Set the iteration index $i = 0$.

Step 1. (Analysis) Set $i = i + 1$. For each particle of the swarm, evaluate the objective function; identify the minimum value found by the swarm, $g^i$, and the minimum value encountered by each single particle in its own history, $p_j^i$.

Step 2. (Velocity vector) For each particle, calculate the velocity vector $v_j$ using the particle's memory and the knowledge gained by the swarm:

$$v_j^{i+1} = \chi \left[ w\, v_j^i + c_1 \left( p_j^i - x_j^i \right) + c_2 \left( g^i - x_j^i \right) \right] \qquad (8)$$

Here, $\chi$ is a constriction factor introduced to limit the maximum speed of the particles, $w$ is the inertia weight, $c_1$ and $c_2$ are positive parameters (cognitive and social, respectively), and $x_j^i$ is the position of the $j$-th particle at step $i$.

Step 3. (Updating) Update the position of each particle using the velocity vector and the previous position: $x_j^{i+1} = x_j^i + v_j^{i+1}$.

Step 4. (Stopping criterion) Go to Step 1 and repeat until some convergence criterion is satisfied.
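As promised above, here is a minimal sketch (in Python with NumPy) of the KLE construction of Section 3.1: it assembles $\bar{s}$, $G$, $R$, and the eigenpairs $(\lambda_k, z_k)$, then builds a reduced basis. The shape count, grid size, 95% variance threshold, and the random stand-in geometries are illustrative assumptions, not values from [6].

```python
import numpy as np

S, N_G = 50, 200            # assumed: number of sampled shapes, grid points per shape
K = 3 * N_G                 # dimension of the design space (x, y, z per grid point)

# Random stand-in for the S sampled geometries, each flattened to a K-vector;
# in the method of [6] these would come from Free Form Deformation of the hull.
shapes = np.random.rand(S, K)

s_bar = shapes.mean(axis=0)             # average geometry s_bar
G = (shapes - s_bar).T                  # K x S matrix of deviations from the mean
R = (G @ G.T) / S                       # covariance-like matrix of the design space

# Eigenpairs: lambda_k is the geometric variance along principal direction z_k.
lam, Z = np.linalg.eigh(R)              # R is symmetric, so eigh applies
order = np.argsort(lam)[::-1]           # sort modes by decreasing variance
lam, Z = lam[order], Z[:, order]

# Keep the first n modes retaining (here) 95% of the total geometric variance.
n = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.95)) + 1

# A new design is the mean shape plus a linear combination of the n modes.
alpha = np.random.uniform(-1.0, 1.0, n) # design variables of the reduced space
s_new = s_bar + Z[:, :n] @ alpha
```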
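The DPSO iteration of Steps 0–4 can likewise be sketched in a few lines. This is a sketch under stated assumptions: the deterministic initialization of [7] is replaced by a simple line of equispaced particles, the coefficient values are illustrative only, and the stopping criterion is a fixed iteration budget.

```python
import numpy as np

def dpso(f, lb, ub, n_particles=16, n_iter=100,
         chi=0.721, w=1.0, c1=1.655, c2=1.655):
    # Step 0: deterministic initialization (equispaced points stand in for
    # the deterministic distribution of [7]); zero initial velocities.
    t = np.linspace(0.0, 1.0, n_particles)[:, None]
    x = lb + t * (ub - lb)                        # particle positions
    v = np.zeros_like(x)                          # particle velocities
    p = x.copy()                                  # personal bests p_j
    p_val = np.apply_along_axis(f, 1, x)
    g = p[np.argmin(p_val)].copy()                # swarm best g

    for _ in range(n_iter):
        # Step 2: velocity update, Eq. (8) -- fully deterministic, no
        # random coefficients as in the original PSO.
        v = chi * (w * v + c1 * (p - x) + c2 * (g - x))
        # Step 3: position update.
        x = np.clip(x + v, lb, ub)                # keep particles in the box
        # Step 1 (next iteration): evaluate and update personal/swarm bests.
        fx = np.apply_along_axis(f, 1, x)
        better = fx < p_val
        p[better], p_val[better] = x[better], fx[better]
        g = p[np.argmin(p_val)].copy()
    # Step 4: here the convergence criterion is simply the iteration budget.
    return g

# Usage: minimize a 2-D sphere function on the box [-5, 5]^2.
best = dpso(lambda z: np.sum(z**2), np.array([-5., -5.]), np.array([5., 5.]))
```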
This basic algorithm has very recently been modified [8] by integrating the global phase with a line search-based, derivative-free method (local phase), providing a robust way of forcing the convergence of a subsequence of points toward a stationary point, i.e. a point satisfying the first-order optimality conditions for the objective function. The method (LS-DF PSO, [8]) extends the DPSO scheme with a line search-based approach. Specifically, a Positively Spanning Set (PSS) is used, in which the set of search directions $D$ is defined by the unit vectors $\pm e_k$, $k = 1, \ldots, n$, as shown in the following equation (written for $n = 2$) and in Figure 1:

$$D = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix},\ \begin{pmatrix} 0 \\ 1 \end{pmatrix},\ \begin{pmatrix} -1 \\ 0 \end{pmatrix},\ \begin{pmatrix} 0 \\ -1 \end{pmatrix} \right\}$$
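To illustrate how such a set of directions is used, the sketch below performs a single poll of the objective along $\pm e_k$ with a simple step-halving rule. This is a generic directional direct-search step written for illustration; the actual line search of LS-DF PSO [8] is more elaborate (e.g. sufficient-decrease conditions and step expansion), so the acceptance rule and the `poll_step` helper here are assumptions.

```python
import numpy as np

def poll_step(f, x, step):
    """One local poll of f around x along the PSS D = {+/- e_k, k=1..n}."""
    n = len(x)
    D = np.vstack([np.eye(n), -np.eye(n)])   # the 2n coordinate directions
    fx = f(x)
    for d in D:
        trial = x + step * d
        if f(trial) < fx:                    # simple decrease: accept the move
            return trial, step
    return x, step / 2.0                     # no direction improved: contract

# Usage: one poll of the sphere function from x = (1, -2) with step 0.5.
x, step = np.array([1.0, -2.0]), 0.5
x, step = poll_step(lambda z: np.sum(z**2), x, step)
```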