In particular, the filled function in Lucidi and Piccialli (2002) has the favorable property of being globally convexized, but the price that must be paid is the existence of a known local minimum point worse than $x^*_k$ with a large basin of attraction, so that the local minimization in step 3 is often attracted by this local minimum. The filled function we introduce here is not globally convexized, but it has no stationary points at all where the objective function is higher than $f(x^*_k)$, and the numerical experiments show that the new filled function is highly reliable in practice in locating the global minimum point.

2.1 A new filled function

In this subsection we introduce a new filled function, which has the following expression:
$$Q(x, \tilde{x}^*) = \exp\left(-\frac{\|x - \tilde{x}^*\|^2}{\gamma^2}\right) + 1 - \exp\left(-\tau\left[f(x) - f(\tilde{x}^*) + \varepsilon\right]\right), \tag{3}$$
where $\tilde{x}^*$ is a known stationary point of the original objective function $f(x)$, $\gamma > 0$ is a constant, and $\varepsilon > 0$ and $\tau \geq 1$ are real parameters.

This filled function consists of two terms. The first term, $\exp(-\|x - \tilde{x}^*\|^2/\gamma^2)$, makes the point $\tilde{x}^*$ a local maximum of the filled function $Q(x, \tilde{x}^*)$ (drawing our inspiration from Renpu 1990 and Xu et al. 2001). The second term, $1 - \exp(-\tau[f(x) - f(\tilde{x}^*) + \varepsilon])$, filters out the stationary points of $f(x)$ whose objective values are greater than or equal to $f(\tilde{x}^*)$ and ensures that, for suitable values of the parameters, $Q(x, \tilde{x}^*)$ has a local minimum at a point with objective value lower than $f(\tilde{x}^*)$. More precisely, it is possible to prove the following theoretical result.

Proposition 2.1 There exists a $\bar{\tau} > 0$ such that for all $\tau \geq \bar{\tau}$ the filled function has the following properties:

(i) The point $\tilde{x}^*$ is an isolated local maximizer of the filled function $Q(x, \tilde{x}^*)$.

(ii) $Q(x, \tilde{x}^*)$ has no unconstrained stationary point in $\{x \in L_f(f(x_0)) : f(x) \geq f(\tilde{x}^*)\}$ except $\tilde{x}^*$.

(iii) If $\tilde{x}^*$ is not a global minimum of $f(x)$ and $\varepsilon$ satisfies the condition
$$0 < \varepsilon < f(\tilde{x}^*) - f(x^*), \tag{4}$$
where $x^*$ is a global minimum of $f(x)$, then all the global minimum points $\check{x}$ of the filled function $Q(x, \tilde{x}^*)$ over $L_f(f(x_0))$ belong to the region $\{x \in L_f(f(x_0)) : f(x) < f(\tilde{x}^*)\}$.

Proof First of all, we note that the gradient of $Q(x, \tilde{x}^*)$ has the following expression:
$$\nabla Q(x, \tilde{x}^*) = -\frac{2(x - \tilde{x}^*)}{\gamma^2} \exp\left(-\frac{\|x - \tilde{x}^*\|^2}{\gamma^2}\right) + \tau \nabla f(x) \exp\left(-\tau\left(f(x) - f(\tilde{x}^*) + \varepsilon\right)\right). \tag{5}$$

We begin by proving point (i). Since the point $\tilde{x}^*$ is a stationary point of problem (2), it satisfies $\nabla f(\tilde{x}^*) = 0$. Therefore, (5) implies that $\nabla Q(\tilde{x}^*, \tilde{x}^*) = 0$, and the Hessian matrix of $Q(x, \tilde{x}^*)$ at $\tilde{x}^*$ reduces to
$$\nabla^2 Q(\tilde{x}^*, \tilde{x}^*) = -\frac{2}{\gamma^2}\, I + \tau \exp(-\tau \varepsilon)\, \nabla^2 f(\tilde{x}^*),$$
so that
$$\lambda_{\max}\left(\nabla^2 Q(\tilde{x}^*, \tilde{x}^*)\right) \leq -\frac{2}{\gamma^2} + \tau \exp(-\tau \varepsilon)\, \lambda_{\max}\left(\nabla^2 f(\tilde{x}^*)\right),$$
where $\lambda_{\max}(\nabla^2 f(\tilde{x}^*))$ is the maximum eigenvalue of the matrix $\nabla^2 f(\tilde{x}^*)$. Since $\tau \exp(-\tau \varepsilon) \to 0$ as $\tau \to \infty$, the above inequality implies that there exists a $\tau_1 > 0$ such that, for all $\tau \geq \tau_1$, the Hessian matrix $\nabla^2 Q(\tilde{x}^*, \tilde{x}^*)$ is negative definite. Therefore the point $\tilde{x}^*$ is an isolated local maximizer of $Q(x, \tilde{x}^*)$ for all $\tau \geq \tau_1$.

As for point (ii), recalling the expression (5) of the gradient of $Q(x, \tilde{x}^*)$, we note that if there existed an unconstrained stationary point $\hat{x} \in L_f(f(x_0))$ of $Q(x, \tilde{x}^*)$ such that $\hat{x} \neq \tilde{x}^*$ and $f(\hat{x}) \geq f(\tilde{x}^*)$, it would have to satisfy
$$\frac{2\,\|\hat{x} - \tilde{x}^*\|}{\gamma^2} \exp\left(-\frac{\|\hat{x} - \tilde{x}^*\|^2}{\gamma^2}\right) = \tau\, \|\nabla f(\hat{x})\| \exp\left(-\tau\left(f(\hat{x}) - f(\tilde{x}^*) + \varepsilon\right)\right). \tag{9}$$
Point (i) implies the existence of a $\delta > 0$ such that $\|\hat{x} - \tilde{x}^*\| > \delta$. By Assumption 1 the compactness of $L_f(f(x_0))$ follows. Therefore there exist two constants $D$ and $L$ such that $\|\hat{x} - \tilde{x}^*\| \leq D$ and $\|\nabla f(x)\| \leq L$ for all $x \in L_f(f(x_0))$. This implies the following estimates for the two sides of (9):
$$\frac{2\delta}{\gamma^2} \exp\left(-\frac{D^2}{\gamma^2}\right) \leq \frac{2\,\|\hat{x} - \tilde{x}^*\|}{\gamma^2} \exp\left(-\frac{\|\hat{x} - \tilde{x}^*\|^2}{\gamma^2}\right) \tag{10}$$
and
$$\tau\, \|\nabla f(\hat{x})\| \exp\left(-\tau\left(f(\hat{x}) - f(\tilde{x}^*) + \varepsilon\right)\right) \leq \tau L \exp(-\tau \varepsilon). \tag{11}$$
Since the left-hand side of (10) is a positive constant independent of $\tau$, while the right-hand side of (11) tends to zero as $\tau \to \infty$, estimates (10) and (11) imply that there exists a $\tau_2 \geq \tau_1$ such that for all $\tau \geq \tau_2$ condition (9) does not hold.

Finally, we prove point (iii).
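As an aside, the quantities appearing in (3), (5), (10) and (11) can be checked numerically. The following Python sketch is purely illustrative: the test function f, the parameter values gamma, eps and tau, the point standing in for $\tilde{x}^*$, and the gradient bound L are assumptions made for this example, not quantities taken from the paper. The sketch implements Q from (3) and its gradient (5), verifies (5) by finite differences, and shows how the upper bound $\tau L \exp(-\tau \varepsilon)$ in (11) vanishes as $\tau$ grows, which is the mechanism behind point (ii).

```python
import numpy as np

# Illustrative sketch only: the test function, gamma, eps, tau, x_star and L
# below are assumptions made for this example, not choices from the paper.

def f(x):
    # A simple multimodal test function (assumed for illustration).
    return np.sum(x**2) + 2.0 * np.sin(5.0 * np.sum(x))

def grad_f(x):
    return 2.0 * x + 10.0 * np.cos(5.0 * np.sum(x)) * np.ones_like(x)

def Q(x, x_star, gamma, eps, tau):
    # Filled function (3): exp(-||x-x*||^2/gamma^2) + 1 - exp(-tau[f(x)-f(x*)+eps]).
    d2 = np.sum((x - x_star) ** 2)
    return np.exp(-d2 / gamma**2) + 1.0 - np.exp(-tau * (f(x) - f(x_star) + eps))

def grad_Q(x, x_star, gamma, eps, tau):
    # Gradient expression (5).
    d = x - x_star
    term1 = -2.0 * d / gamma**2 * np.exp(-np.sum(d**2) / gamma**2)
    term2 = tau * grad_f(x) * np.exp(-tau * (f(x) - f(x_star) + eps))
    return term1 + term2

# Assumed parameter values and points.
gamma, eps, tau = 1.0, 0.1, 20.0
x_star = np.array([0.5, -0.3])   # stands in for a known stationary point of f
x = np.array([1.2, 0.7])

# Finite-difference check of the gradient (5).
h = 1e-6
fd = np.array([(Q(x + h * e, x_star, gamma, eps, tau) -
                Q(x - h * e, x_star, gamma, eps, tau)) / (2 * h)
               for e in np.eye(2)])
print("max gradient error:", np.abs(fd - grad_Q(x, x_star, gamma, eps, tau)).max())

# The upper bound tau * L * exp(-tau * eps) from (11) vanishes as tau grows,
# which is why a large tau rules out spurious stationary points in point (ii).
L = 50.0   # assumed bound on ||grad f|| over the level set
for tau_k in [1, 10, 50, 100]:
    print(tau_k, tau_k * L * np.exp(-tau_k * eps))
```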