ABSTRACT

Various improvements have been made by researchers, engineers, and decision makers in the foundry industry to obtain high-quality products at reasonable operational costs and minimal energy consumption. Thus, various optimization approaches have been practiced by personnel in this industry, ranging from experiential strategies to statistical and metaheuristic-based approaches. The primary recipe for achieving these goals in the foundry industry is the expert manipulation of the parameters involved in the molding process.

Many concerns have been raised when dealing with emerging technologies in engineering optimization which present themselves in a multiobjective (MO) setting (Eschenauer, Koski, & Osyczka, 1990; Statnikov & Matusov, 1995). Strategies in MO optimization can be broadly classified into two groups. First, there are methods that use the concept of Pareto optimality to trace the non-dominated solutions along the Pareto curve, for instance, the strength Pareto evolutionary algorithm (Zitzler & Thiele, 1998) and the non-dominated sorting genetic algorithm II (NSGA-II) by Deb, Pratap, Agarwal, and Meyarivan (2002). The second class of techniques is known as the weighted (or scalarization) approaches. During the application of these methods, the objective functions are aggregated into a single weighted function which is then solved for various scalar (weight) values. Some well-known scalarization techniques include the weighted sum method (Triantaphyllou, 2000), goal programming (Luyben & Floudas, 1994), and the normal-boundary intersection (NBI) method (Das & Dennis, 1998). Using these techniques, the scalars (or weights) assign relative preferences to the objectives during the aggregation procedure. Hence, alternative near-optimal solution options are generated for various values of the scalars.
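As a minimal illustration of the scalarization idea (the toy objective functions and weight values below are arbitrary assumptions, not drawn from the cited works), a weighted sum aggregation can be sketched in Python as follows:

def weighted_sum(objectives, weights):
    """Aggregate a list of objective functions into one scalar function."""
    def aggregate(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))
    return aggregate

# Two toy objectives; varying the weights yields alternative trade-off solutions.
g = weighted_sum([lambda x: x**2, lambda x: (x - 2.0)**2], [0.7, 0.3])
print(g(1.0))  # scalarized objective value at x = 1.0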

In MO optimization problems, determining the most efficient solution set can be a very daunting process. A variety of concepts (such as diversity and convergence) have been proposed over the years. These ideas were then used as indicators to evaluate the solution set produced by an optimization algorithm (Grosan, 2003), and such evaluations were used to benchmark the algorithm's performance. Unfortunately, these concepts cannot absolutely state and rank the superiority of solution sets produced by one algorithm against those generated by other algorithms. The only concept that can be used generally for the overall ranking of solution sets is the idea of "Pareto-dominance." The hypervolume indicator (HVI) (Zitzler & Thiele, 1998) is a set measure reflecting the volume enclosed by a Pareto frontier approximation and a reference set (Knowles & Corne, 2003; Igel, Hansen, & Roth, 2007). The HVI thus guarantees strict monotonicity regarding Pareto dominance (Fleischer, 2003; Zitzler, Thiele, Laumanns, Fonseca, & Grunert da Fonseca, 2003). This makes the ranking of solution sets, and hence algorithms, possible for an arbitrary MO problem.

This chapter discusses the procedures and outcomes involving the implementation of metaheuristics to dominantly approximate the Pareto frontier in the objective space. Three non-modified techniques, namely differential evolution (DE), particle swarm optimization (PSO), and the gravitational search algorithm (GSA), are applied to the green sand mold system. In addition, modified versions of the PSO and DE techniques, namely chaotic DE (CDE), chaotic PSO (CPSO), and game-theoretic DE approaches, are developed and applied to the same problem. These techniques are utilized in conjunction with the weighted sum method to improve the dominance of the solution set during successive iterations. The solution characteristics (diversity and degree of dominance) of the solutions with regard to their level of dominance are also measured using two performance metrics. The next section provides details on the green sand mold system and its associated process model.

In green sand mold systems, the quality of the product obtained from the molding process is very much dependent on the physical properties of the molding sand (e.g., hardness, permeability, green compression strength, and bulk density). Unsuitable levels of these properties may result in casting defects such as poor surface finish, blowholes, scabs, pinhole porosity, and so on. Controllable parameters such as the percentage of water, percentage of clay, grain fineness number, and number of strokes heavily influence the physical properties of the molded sand. Therefore, by classifying these parameters as decision variables and the mold sand properties as the objective functions, the MO optimization problem was formulated in Surekha, Kaushik, Panduy, Vundavilli, and Parappagoudar (2012). The purpose of this formulation is the determination of the best controllable parameters for an optimal final product of the molding process. A more rigorous study on the optimization and model development of mold systems can be seen in Sushil, Satsangi, and Prajapati (2010).

In the green sand mold system, the response parameters of the mold heavily influence the quality of the final product. In Surekha et al. (2012), these parameters are selected as the objective functions. The response parameters are: green compression strength (f1), permeability (f2), hardness (f3), and bulk density (f4). These objectives in turn are influenced by the process variables, which are: the grain fineness number (A), percentage of clay content (B), percentage of water content (C), and number of strokes (D). The objective functions and the ranges of the decision variables are shown as follows:

f1 = 17.2527 − 1.7384A − 2.7463B + 32.3203C + 6.575D
     + 0.014A² + 0.0945B² − 7.7857C² − 1.2079D²
     + 0.0468AB − 0.1215AC − 0.0451AD
     + 0.5516BC + 0.6378BD + 2.689CD    (9.1)

f2 = 1192.51 − 15.98A − 35.66B + 9.51C − 105.66D
     + 0.07A² + 0.45B² − 4.13C² + 4.22D² + 0.11AB
     + 0.2AC + 0.52AD + 1.19BC + 1.99BD − 3.1CD    (9.2)

f3 = 38.2843 − 0.0494A + 2.4746B + 7.8434C + 7.774D
     + 0.001A² − 0.00389B² − 1.6988C² − 0.6556D²
     − 0.0015AB − 0.0151AC − 0.0006AD
     − 0.075BC − 0.1938BD + 0.65CD    (9.3)

f4 = 1.02616 + 0.01316A − 0.00052B − 0.06845C
     + 0.0083D − 0.00008A² + 0.0009B² + 0.0239C²
     − 0.00107D² − 0.00004AB − 0.00018AC + 0.00029AD
     − 0.00302BC − 0.00019BD − 0.00186CD    (9.4)

52 ≤ A ≤ 94
8 ≤ B ≤ 12
1.5 ≤ C ≤ 3
3 ≤ D ≤ 5    (9.5)

To obtain the size distributions of the silica sand and the grain fineness number, sieve analysis tests were carried out in Parappagoudar, Pratihar, and Datta (2007). Similarly, in Surekha et al. (2012), the authors conducted gelling index tests to determine the strength of the clay. Next, experiments were conducted by varying the combinations of the parameters using the central composite design. The mathematical model of the green mold system was then developed; its objective functions are given in Equations 9.1 through 9.4 and its constraints in Equation 9.5. The MO optimization problem statement for the green mold system is shown as follows:

Max (f1, f2, f3, f4)
subject to 52 ≤ A ≤ 94
           8 ≤ B ≤ 12
           1.5 ≤ C ≤ 3
           3 ≤ D ≤ 5    (9.6)
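For concreteness, the response models and bounds above translate directly into code. The following Python sketch simply evaluates the four objectives of Equations 9.1 through 9.4 at a candidate design point; the sample point itself is an arbitrary feasible choice within the Equation 9.5 bounds:

def f1(A, B, C, D):  # green compression strength, Equation 9.1
    return (17.2527 - 1.7384*A - 2.7463*B + 32.3203*C + 6.575*D
            + 0.014*A**2 + 0.0945*B**2 - 7.7857*C**2 - 1.2079*D**2
            + 0.0468*A*B - 0.1215*A*C - 0.0451*A*D
            + 0.5516*B*C + 0.6378*B*D + 2.689*C*D)

def f2(A, B, C, D):  # permeability, Equation 9.2
    return (1192.51 - 15.98*A - 35.66*B + 9.51*C - 105.66*D
            + 0.07*A**2 + 0.45*B**2 - 4.13*C**2 + 4.22*D**2
            + 0.11*A*B + 0.2*A*C + 0.52*A*D
            + 1.19*B*C + 1.99*B*D - 3.1*C*D)

def f3(A, B, C, D):  # mold hardness, Equation 9.3
    return (38.2843 - 0.0494*A + 2.4746*B + 7.8434*C + 7.774*D
            + 0.001*A**2 - 0.00389*B**2 - 1.6988*C**2 - 0.6556*D**2
            - 0.0015*A*B - 0.0151*A*C - 0.0006*A*D
            - 0.075*B*C - 0.1938*B*D + 0.65*C*D)

def f4(A, B, C, D):  # bulk density, Equation 9.4
    return (1.02616 + 0.01316*A - 0.00052*B - 0.06845*C + 0.0083*D
            - 0.00008*A**2 + 0.0009*B**2 + 0.0239*C**2 - 0.00107*D**2
            - 0.00004*A*B - 0.00018*A*C + 0.00029*A*D
            - 0.00302*B*C - 0.00019*B*D - 0.00186*C*D)

BOUNDS = {"A": (52, 94), "B": (8, 12), "C": (1.5, 3), "D": (3, 5)}  # Equation 9.5

# Evaluate all four responses at an arbitrary feasible design point.
print([round(f(70.0, 10.0, 2.2, 4.0), 3) for f in (f1, f2, f3, f4)])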

In the past two decades, visual observation has been the most common method employed to gauge the degree of diversity and convergence to the Pareto frontier (Mostaghim & Teich, 2003). For cases involving low-dimensional objective spaces (n < 3), these methods are deemed sufficient. However, for cases involving high-dimensional objective spaces (n ≥ 3), analysis by visualization alone is very difficult to practice and is a very inaccurate way to draw any information out of the solution vectors in the objective space. Thus, many quantitative performance metrics have been proposed and used effectively in recent times. A concise and complete review of the developments in performance metrics was given in Okabe, Jin, and Sendho (2003). The application of convergence and diversity metrics to test problems was done in Knowles and Corne (2003) and Farhang-Mehr and Azarm (2002), while the HVI was utilized for dominance determination in Zitzler (1999). The convergence and diversity metrics used in this work are specifically for cases involving high-dimensional objective spaces. In this work, real-world industrial cases with high-dimensional objective spaces are solved. These cases provide no information about the Pareto-optimal front. Therefore, the performance metrics essentially deliver convergence and diversity information relative to an approximated set.

The sigma diversity metric (SDM) evaluates the locations of the solution vectors in the objective space relative to the sigma vectors (Mostaghim & Teich, 2003). For lower-dimensional objective spaces (n < 3), metrics based on spherical and polar coordinates may be used. However, as the dimensions increase beyond three (n ≥ 3), the mentioned coordinate systems no longer define the distribution of the solution vectors effectively. In such scenarios, the SDM is highly recommended for effective computation of the solution distribution.

To begin the computation of the SDM, two types of sigma lines have to be constructed: lines that represent the solution vectors (σ′) and lines that represent the reference lines (σ). The sigma lines that represent the solution vectors can be computed as follows:

σ′ij(k) = fi(k) − fj(k) ∀ i ≠ j    (9.7)

where k denotes the index over the solution vectors; i and j denote indices over the objectives; and n denotes the total number of objectives. Then the magnitude sigma, σ′k, is computed as follows:

σ′k = √( Σi Σj≠i (σ′ij(k))² )    (9.8)

Thus, for each line in the objective space (solution vector or reference line), there exists a unique sigma value. For the sake of clarity, a two-objective distribution of the reference sigma lines as well as the solution vector sigma lines is given in Figure 9.1.

Next is the task of determining the reference sigma lines, σ. It is crucial to note that the total number of reference sigma lines must be equal to the total number of solution vector sigma lines, which is in turn equal to the number of solution vectors present. For instance, let there be a total of k σ lines, p σ′ lines, and m solution vectors in some arbitrary n-dimensional objective space; then k = p = m. The reference sigma lines have to be uniformly distributed in the objective space. To achieve this, the minimum and maximum values of σ′ are determined. Then the step interval, Δ, for even distribution is computed as follows:

Δ = (max(σ′) − min(σ′)) / m    (9.9)

Next, the reference sigma lines, σ, are determined as follows:

σk+1 = σk + Δ such that σ0 = min(σ′)    (9.10)

To compute the metric, the distance, θk, of the solution vector sigma lines σ′ from the reference sigma lines σ is calculated by taking the difference of their respective values:

θk = σ′k − σk    (9.11)

The piece-wise satisfaction condition, μk, is then imposed such that it returns binary values relative to some user-specified threshold distance, c, as follows:

μk = 1 if θk ≤ c
μk = 0 if θk > c    (9.12)

The sigma diversity metric, D, is then calculated by averaging the piece-wise satisfaction condition, μk, over the number of solution vectors, m:

D = (1/m) Σk=1..m μk    (9.13)

The algorithm for computing the sigma diversity metric, D, is shown in Algorithm 9.1.

Algorithm 9.1: Sigma Diversity Computation

Step 1: Determine the sigma lines for the solution vectors, σ′ij
Step 2: Compute the magnitude sigma, σ′k, for each of the solution vectors
Step 3: Calculate the step interval, Δ
Step 4: Construct the reference sigma lines, σk
Step 5: Compute the distance, θk, of the sigma lines σ′ from σ
Step 6: Impose the piece-wise satisfaction condition and compute μk
Step 7: Compute the sigma diversity metric, D

Thus, a high value of the sigma diversity metric, D, indicates high uniformity and diversity in the distribution of the solution vectors in the objective space.
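A compact sketch of Algorithm 9.1 is given below; the pairing of the sorted magnitude sigmas with the reference lines, the threshold value c, and the sample solution set are assumptions of this sketch rather than prescriptions from the original formulation:

import math

def sigma_diversity(solutions, c):
    """Sigma diversity metric D (Equations 9.7 through 9.13); a sketch only."""
    m = len(solutions)
    # Eqs 9.7-9.8: magnitude sigma for each solution vector.
    sig_prime = []
    for f in solutions:
        n = len(f)
        s = sum((f[i] - f[j]) ** 2 for i in range(n) for j in range(n) if i != j)
        sig_prime.append(math.sqrt(s))
    # Eq 9.9: step interval for m uniformly spaced reference sigma lines.
    delta = (max(sig_prime) - min(sig_prime)) / m
    # Eq 9.10: reference sigma lines starting from min(sigma').
    sigma_ref = [min(sig_prime) + k * delta for k in range(m)]
    # Eqs 9.11-9.12: distances and the piece-wise satisfaction condition
    # (sorting before pairing is a design choice of this sketch).
    mu = [1 if abs(sp - sr) <= c else 0
          for sp, sr in zip(sorted(sig_prime), sigma_ref)]
    # Eq 9.13: average over the m solution vectors.
    return sum(mu) / m

# Illustrative two-objective solution set and threshold (assumed values).
pts = [(1.0, 4.0), (2.0, 3.0), (3.0, 2.0), (4.0, 1.0)]
print(sigma_diversity(pts, c=0.5))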

Many past investigations have been conducted on methods to apply evolutionary algorithms to MO problems. In addition, many ideas have been proposed on how to attain a good solution spread over the Pareto frontier. Unfortunately, in all these works, the optimization goal remains vague since there are no established techniques that measure the quality of the solution sets produced by an arbitrary MO algorithm (especially in cases where there are more than two objectives in the problem). The impact of this issue magnifies in real-world problems, where it is often the case that the Pareto frontier is unknown (unlike theoretical test functions, where the Pareto frontier is known and can be used to benchmark the solution quality).

One approach that has been effective in measuring the quality of the solution set that constructs the Pareto frontier, in cases where the Pareto frontier is unknown, is the HVI (Beume, Naujoks, & Emmerich, 2007). Recently, this indicator has been frequently applied in many works involving MO problems (Igel et al., 2007). The HVI is the only strictly Pareto-compliant indicator that can be used to measure the quality of solution sets in MO optimization problems (Zitzler & Thiele, 1998). To provide a clearer picture of the idea of Pareto compliance, the concepts of Pareto dominance are formally defined in the subsequent paragraphs.

Pareto dominance can be categorized into three types: strictly dominates (≻), weakly dominates (⪰), and indifferent (~). Let two solution vectors be a and b. If the solution vector a dominates the vector b in all the objectives, then a strictly dominates b (a ≻ b). If the solution vector a dominates the vector b in some of the objectives but not all, then a weakly dominates b (a ⪰ b). Finally, if the solution vector a does not dominate the vector b and the solution vector b likewise does not dominate a, then a is indifferent to b (a ~ b).
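These relations translate into simple predicates. The sketch below uses the common "at least as good everywhere, strictly better somewhere" reading of weak dominance (a slightly stricter convention than the informal wording above) and assumes maximization, as in Equation 9.6:

def strictly_dominates(a, b):
    """a is better than b in every objective (maximization convention)."""
    return all(ai > bi for ai, bi in zip(a, b))

def weakly_dominates(a, b):
    """a is at least as good as b everywhere and strictly better somewhere."""
    return (all(ai >= bi for ai, bi in zip(a, b))
            and any(ai > bi for ai, bi in zip(a, b)))

def indifferent(a, b):
    """Neither vector dominates the other."""
    return not weakly_dominates(a, b) and not weakly_dominates(b, a)

print(strictly_dominates((3, 4), (2, 1)))  # True
print(indifferent((3, 1), (1, 3)))         # True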

Strict Pareto compliance can be defined as follows. Let there be two solution sets, say X and Y, for a particular MO problem. If the HVI value for X is greater than that of Y, then X ≻ Y or X ⪰ Y. The HVI measures the volume of the dominated section of the objective space and can be applied in multidimensional scenarios. When using the HVI, a reference point or "nadir point" needs to be defined. The nadir point is a point which is dominated by all the solutions on the approximate Pareto frontier. Relative to this point, the volume of the space of all dominated solutions can be computed. The HVI of a solution set X can be defined as follows:

HVI(X) = vol( ∪x∈X [r1, x1] × ⋯ × [rd, xd] )    (9.14)

where r = (r1, …, rd) is the reference point and vol(·) is the Lebesgue measure. For illustration purposes, a two-objective depiction of the construction of the hypervolume is given in Figure 9.2.

It can be observed in Figure 9.2 that the larger the value of the HVI, the more dominant the solution set is in the objective space. The characteristics of the HVI are as follows:

a. It is a strictly monotonic indicator.
b. Its computational effort grows exponentially with the number of solution vectors.
c. It requires an upper-bounding vector (nadir point).
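For the two-objective case depicted in Figure 9.2, the HVI reduces to a union of axis-aligned rectangles spanned between each non-dominated point and the nadir point. The following sketch (maximization convention, with an assumed, illustrative reference point) computes it by a simple sweep:

def hypervolume_2d(points, ref):
    """Hypervolume of a 2-D maximization set w.r.t. a nadir/reference point."""
    # Discard points that some other point weakly dominates.
    front = [p for p in points
             if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)]
    # Sort by decreasing f1; f2 then increases along the front.
    front.sort(key=lambda p: -p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1v, f2v in front:
        hv += (f1v - ref[0]) * (f2v - prev_f2)  # slice added by this point
        prev_f2 = f2v
    return hv

print(hypervolume_2d([(3, 1), (2, 2), (1, 3)], ref=(0, 0)))  # 6.0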

The GSA is a metaheuristic algorithm first developed by Rashedi, Nezamabadi-pour, and Saryazdi (2009). This technique was inspired by the law of gravity and the idea of interacting masses. The algorithm uses the Newtonian gravitational laws, where the search agents are the associated masses. Thus, the gravitational forces influence the motion of these masses, where lighter masses gravitate toward heavier masses (which signify good solutions) during these interactions. The gravitational force hence acts as the communication mechanism for the masses (analogous to "pheromone deposition" for ant agents in ant colony optimization [Colorni, Dorigo, & Maniezzo, 1991] and the "social component" for the particle agents in PSO). The positions of the masses correspond to the solution space in the search domain, while the masses characterize the fitness space. As the iterations proceed and gravitational interactions occur, it is expected that the masses will conglomerate at their fittest positions and provide an optimal solution to the problem.

Initially, the GSA randomly generates a distribution of masses, mi(t) (search agents), and sets an initial position, xi^d, for each of these masses. For a minimization problem, the least fit mass, mworst(t), and the fittest mass, mbest(t), at time t are calculated as follows:

mworst(t) = min mj(t), j ∈ {1, …, N}    (9.15)

mbest(t) = max mj(t), j ∈ {1, …, N}    (9.16)

For a maximization problem, it is simply vice versa. The inertial mass, m′i(t), and the gravitational masses, Mi(t), are then computed based on the fitness map developed previously:

m′i(t) = (mi(t) − mworst(t)) / (mbest(t) − mworst(t))    (9.17)

Mi(t) = m′i(t) / Σj=1..N m′j(t)    (9.18)

such that

Mai = Mpi = Mii = Mi, i ∈ [1, N]    (9.19)

Then the gravitational constant, G(t + 1), and the Euclidean distance, Rij(t), are computed as follows:

G(t + 1) = G(t) exp(−αt/Tmax)    (9.20)

Rij(t) = ‖xi(t) − xj(t)‖2    (9.21)

where α is some arbitrary constant, Tmax is the maximum number of iterations, and xi(t) and xj(t) are the positions of particles i and j at time t. The interaction forces at time t, Fij^d(t), for each of the masses are then computed:

Fij^d(t) = G(t) × (Mi(t)Mj(t) / (Rij(t) + ε)) × (xj^d(t) − xi^d(t))    (9.22)

where ε is some small parameter. The total force acting on each mass i is given in stochastic form as follows:

Fi^d(t) = Σj≠i rand(wj) Fij^d(t), rand(wj) ∈ [0, 1]    (9.23)

where rand(wj) is a randomly assigned weight. Consequently, the acceleration of each of the masses, ai^d(t), is given as follows:

ai^d(t) = Fi^d(t) / Mi(t)    (9.24)

After the computation of the particle acceleration, the particle position and velocity are calculated:

vi^d(t + 1) = rand(wj) vi^d(t) + ai^d(t)    (9.25)

xi^d(t + 1) = xi^d(t) + vi^d(t + 1)    (9.26)

where rand(wj) is a randomly assigned weight. The iterations are then continued until all mass agents are at their fittest positions in the fitness landscape and some user-specified stopping criterion is met. The GSA is presented in Algorithm 9.2.

Algorithm 9.2: Gravitational Search Algorithm (GSA)

Step 1: Initialize number of particles, mi, and initial positions, xi(0)
Step 2: Initialize algorithm parameters, G(0), α
Step 3: Compute gravitational and inertial masses based on the fitness map
Step 4: Compute the gravitational constant, G(t)
Step 5: Compute distance between agents, Rij(t)
Step 6: Compute total force, Fi^d(t), and the acceleration, ai^d(t), of each agent
Step 7: Compute new velocity vi(t) and position xi(t) for each agent
Step 8: If the fitness criterion is satisfied and t = Tmax, halt and print solutions, else proceed to step 3

The algorithmic enhancement presented here applies chaotic logistic maps (Lorenz, 1963; Jakobson, 1981; Flake, 1998) to the PSO algorithm to improve its diversification capabilities. Commonly, in most PSO algorithms, a random number generator is used to establish the initial position and velocity distributions of the particles. This provides the initial diversity in the population of solutions represented by the particles. The diversification capabilities of the PSO are enhanced by plugging the chaotic logistic map in after the random number generator segment of the algorithm. This way, the random number generator produces the initial conditions, Ni(0), for the chaotic logistic map. The logistic mapping is given as follows:

Ri(t) = λNi(t)    (9.27)

Ni(t + 1) = Ri(t)Ni(t)[1 − Ni(t)]    (9.28)

Ri(t + 1) = Ri(t) + λ′    (9.29)

where Ni(t) and Ri(t) are variables in the logistic chaotic map, and λ′ and λ are relaxation constants specified by the user. The chaotic logistic function is then iterated until the iteration limit Nmax is reached. The values obtained at the end of these iterations are then used as the initial distributions for the particle positions and velocities (xi(0) and vi(0)). Hence, another variant of the PSO algorithm (chaotic PSO or CPSO) with magnified diversification capabilities is described in this chapter. The algorithm and the flowchart for the CPSO method are given in Algorithm 9.3 and Figure 9.3, respectively.
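The chaotic seeding of Equations 9.27 through 9.29 can be made concrete in a few lines; the relaxation constants and iteration limit below are assumed, illustrative values:

import random

def chaotic_sequence(n_max=100, lam=3.9, lam_prime=0.001, n0=None):
    """Iterate Equations 9.27 through 9.29; return the final chaotic value in [0, 1]."""
    N = random.random() if n0 is None else n0  # initial condition N_i(0) from the RNG
    R = lam * N                                # Eq 9.27
    for _ in range(n_max):
        N = R * N * (1.0 - N)                  # Eq 9.28
        R = R + lam_prime                      # Eq 9.29
    return N

# Map a chaotic value onto a decision-variable range, e.g., water content C (Eq. 9.5).
low, high = 1.5, 3.0
c0 = low + (high - low) * chaotic_sequence()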

Algorithm 9.3: Chaotic Particle Swarm Optimization (CPSO)

Step 1: Set number of particles, i, and initialize parameter settings w, c1, c2, r1, r2, n0
Step 2: Randomly initialize particles' position xi(t) and velocity vi(t)
Step 3: Randomly set initial conditions for the chaotic logistic map
Step 4: Iterate chaotic logistic map
Step 5: If the iterations, n > Nmax, proceed to step 6, else go to step 4
Step 6: Calculate cognitive and social components of the particles
Step 7: Compute position xi(t + 1) and velocity vi(t + 1) of the particles at the next iteration
Step 8: Proceed with the evaluation of the fitness of each particle in the swarm
Step 9: If the fitness conditions are satisfied and t < Tmax, stop program and print solutions, else go to step 6

Specific modifications were performed in the DE algorithm to enhance its diversification capabilities by the addition of a chaotic component. First, the population of vectors, xi^G, is generated. The subsequent steps are similar to those of the regular DE algorithm, where one principal parent, xi^p, and three auxiliary parents, xi^a, are selected. Differential mutation is then performed and the mutated vector, Vi, is generated. Vi is then recombined with xi^p, and the resulting child vector, xi^child, is used as the input to the chaotic logistic map (Jakobson, 1981). This chaotic logistic mapping is presented as follows:

Ni(t) = xi^child(t)    (9.30)

Ri(t) = λNi(t)    (9.31)

Ni(t + 1) = Ri(t)Ni(t)[1 − Ni(t)]    (9.32)

Ri(t + 1) = Ri(t) + λ′    (9.33)

where Ni(t) and Ri(t) are variables in the logistic chaotic map, and λ′ and λ are relaxation constants specified by the user. The logistic mapping is then continued until a specific number of iterations is satisfied.

The final value of N(tmax) at the maximum number of iterations is incorporated into the child trial vector, xi^child, which thus undergoes a second round of mutation by the chaotic map. Next, the "knock-out" competition for next-generation survival selection is performed, and the fitness function for the child trial vector, xi^child, is evaluated. Thus, another variant of the DE algorithm with enhanced diversification capabilities, called the chaotic DE (CDE), is presented. The flowchart and the algorithm are given in Figure 9.4 and Algorithm 9.4, respectively.

Algorithm 9.4: Chaotic Differential Evolution (CDE)

Step 1: Initialize population size
Step 2: Randomly initialize the population vectors, xi^G
Step 3: Randomly select one principal parent, xi^p
Step 4: Randomly select three auxiliary parents, xi^a
Step 5: Perform differential mutation and generate the mutated vector, Vi
Step 6: Recombine Vi with xi^p
Step 7: Iterate the chaotic logistic map
Step 8: If n > Nmax, proceed to the next step, else go to step 7
Step 9: Evaluate the fitness of the new xi^child
Step 10: If the fitness criterion is satisfied, halt and print solutions, else proceed to step 3

In past years, game theory has been used extensively to model and explain systems with conflicting scenarios (e.g., biology, economics, and other social sciences) (Axelrod, 1984; Myerson, 1991; Weibull, 1995). The single primary proposition of game theory is that each player rationally chooses the strategy that maximizes its own payoff given the strategies of the other players.

Using this as the core idea, a more general form of game theory, evolutionary game theory (EGT), was produced. In EGT, the system being analyzed consists of a large number of players that randomly interact in pairs. EGT has proven to be extremely efficient in modeling the emergence and dynamics of collective behaviors. Some elements of EGT are used here to modify and improve the conventional DE technique. The EGT scheme functions as a regulating subsystem in the DE approach, resulting in the game-theoretic differential evolution (GTDE) approach. In the proposed technique, the offspring and the parents are engaged competitively to achieve the user-specified aim in each generation. The aim is to obtain highly optimal solutions that satisfy all the required constraints. The EGT-based strategies are used to construct regulatory subsystems for the DE algorithm. The EGT strategies are stated as follows:

• Cooperative move: reducing the mutation factor of the child vector
• Defective move: modifying the relation to increase the degree of mutation for the child vector

The algorithm for the GTDE approach employing the mentioned strategies is shown in Algorithm 9.5 and Figure 9.5.

Algorithm 9.5: Game-Theoretic Differential Evolution (GTDE)

Step 1: Initialize individual size and initialize the population vectors, xi^G
Step 2: Select one principal parent, xi^p, and three auxiliary parents, xi^a, at random
Step 3: Evaluate fitness of current solution relative to the previous solution
Step 4: IF current fitness, FIT(gen), is better than previous fitness, FIT(gen − 1), employ cooperative strategy between parent and child vector; ELSE employ defective strategy between parent and child vector
Step 5: Perform differential mutation and generate mutated vector, Vi
Step 6: Recombine Vi with xi^p
Step 7: Perform "knock-out" competition for next-generation survival selection
Step 8: If the fitness criterion is satisfied and gen = Tmax, halt and print solutions, else proceed to step 3

The solution sets, which are the approximations of the Pareto frontier, were obtained using the DE, PSO, and GSA methods. The approximate Pareto frontiers obtained using the DE, PSO, and GSA algorithms are shown in Figures 9.6 through 9.8. The level of dominance gauged by the HVI for the entire Pareto frontier produced by the DE, PSO, and GSA techniques is given in Figure 9.9.

The frontier produced by the GSA algorithm in Figure 9.8 appears conglomerated and localized at certain portions of the objective space. This spacing property heavily influences the ability of the algorithm to approximate the Pareto frontier. Localized frontiers such as the ones produced by the GSA algorithm miss certain solutions in the objective space, and this causes the GSA algorithm to have a lower HVI than the DE and PSO algorithms. Hence, as seen in Figure 9.9, the approximated Pareto frontier produced by the GSA algorithm is less dominant than those produced by the DE and PSO algorithms. It can be observed that the PSO algorithm produces the most dominant frontier, followed by the DE and GSA algorithms. The PSO algorithm dominates the DE algorithm by 18.08%. To obtain a better understanding of the two most efficient algorithms (DE and PSO), further analysis of the solution diversity characteristics was conducted using the sigma diversity metric. The diversity values of the frontiers generated by the PSO, GSA, and DE techniques are 0.762, 0.75, and 0.6071, respectively. The GSA technique has to be excluded from the diversity analysis since, as seen in Figure 9.8, the solutions produced by the GSA technique are localized even though they are diversely spread; this gives a false impression that the solutions have high diversity covering many regions of the objective space. This issue can be identified as one of the drawbacks of the sigma diversity metric: it cannot flag localized solutions as distinct from diversely spread solutions. Looking at the dominance levels of the PSO and DE techniques in Figure 9.9, it can be clearly seen that the degree of frontier diversity corresponds to the level of frontier dominance. Therefore, to further boost the performance of these two effective techniques, the enhanced algorithms CPSO, CDE, and GTDE were applied to the green sand molding problem. The frontiers generated by these techniques are given in Figures 9.10 through 9.12.

The level of dominance gauged by the HVI for the entire Pareto frontiers produced by the CPSO, CDE, and GTDE techniques is given in Figure 9.13.

Based on Figure 9.13, it can be stated that the CPSO technique produces the most dominant Pareto frontier, since it optimizes all the objectives to an equal extent as compared to the other enhanced algorithms. The frontier diversities of the CPSO, CDE, and GTDE techniques are 0.778, 0.75, and 0.75, respectively. Therefore, similar to the non-enhanced algorithms, the diversity of the solution spread influences the level of dominance of the Pareto frontier. In this case, the CDE and GTDE techniques produce equally diverse frontiers. The CPSO has the most diversely spread as well as the most dominant Pareto frontier among all the approaches employed to solve the green sand molding problem.

The MO optimization of the green sand mold system was performed via an exhaustive and rigorous implementation of a series of metaheuristic approaches. These explorations and numerical experiments have shown that algorithmic enhancements are necessary to boost the performance of metaheuristics during optimization. In addition, by employing measurement metrics, critical features of the solution space can be ascertained. Knowledge of the interaction between the individual algorithms and the problem at hand may also be extracted using such measurement approaches. The information gained regarding these features may prove helpful to the decision maker when considering algorithmic enhancement options or other improvements related to the optimization procedures.