ABSTRACT

In the manufacturing sector and other heavy industries, problems encountered by engineers and decision makers often present themselves in a multiobjective (MO) setting (Ganesan, Vasant, & Elamvazuthi, 2011; Ganesan, Vasant, Elamvazuthi, & Shaari, 2012). Due to stringent environmental regulations and tight energy requirements, there is a high demand for efficient strategies to optimize operations in these industries. Thus, to optimize these systems effectively, the decision maker is required to have a comprehensive understanding of both computational intelligence and material engineering.

Sand casting is one of the earliest forms of casting, where cavities are introduced into the sand and then liquid metal is poured into the cavity. Since oiled sand molds pack well and have the capability to hold their shape, they are often employed for metal casting. The liquid metal then takes the shape of the cavity and forms the metal product once cooled. Even today, sand casting remains the cheapest form of casting due to its simplicity and the availability of the materials. In addition, sand has very suitable physical properties such as permeability, strength, cohesiveness, refractoriness, adhesiveness, flowability, and collapsibility.

Due to this fact, many research efforts have been directed at optimizing the sand casting process. For instance, Kumar and Singh (2011) carried out a design-of-experiments approach using the Taguchi technique (Guharaja, Noorul Haq, & Karuppannan, 2006). In Kumar and Singh (2011), three manipulated parameters, the grain fineness number, the clay content, and the moisture content, were selected and optimized using the Taguchi technique (Guharaja et al., 2006) to obtain the maximum tensile strength. In their work, the clay content was found to have the most significant effect on the tensile strength. Thus, they concluded that a higher clay content and grain fineness number, along with a medium-level moisture content, lead to optimal tensile strength. Similarly, in Keswani and Upadhye (2012), the casting process was optimized to minimize casting defects. The optimization strategy was directed at identifying the optimal factors affecting the casting process by running a small number of experiments. The Taguchi technique was employed to maximize the signal-to-noise ratios and to minimize the noise factors. Using this strategy, the authors managed to reduce casting defects significantly.

In the past, the Taguchi technique has been widely applied for optimizing mold casting processes. Another interesting application involving this technique was carried out in the work of Kim, Shin, Lee, and Moon (2007). In that work, the correlations between the process parameters and the mechanical properties of the product were investigated using the Taguchi technique on 14.5% silicon-containing corrosion-resistant cast iron. The mechanical properties were analyzed using fractography studies, microscopic tests, hydrogen content studies, and acid resistance tests.

Besides the Taguchi approach, metaheuristics such as particle swarm optimization (PSO) have been applied to certain types of casting problems. In Ji and Jiang (2012), an optimization tool was developed to improve the cooling conditions for the spray zones of a real continuous caster. This optimization tool consists of a two-dimensional heat transfer model and the adaptive evolutionary PSO technique. By using the adaptive evolutionary PSO, the authors managed to obtain optimal control of the secondary cooling water distribution according to certain metallurgical criteria and operational constraints. In Xudong, Zhaofeng, and Man (2011), PSO was also used to approximate the mold oscillation status during the continuous casting process. In that work, the PSO was implemented effectively and the effects of variations in the algorithmic parameters on the oscillation approximation were studied. The PSO was successfully applied in Xudong et al. (2011) to solve the inverse estimation of the mold oscillation status (e.g., stiffness, empty driving, and dampness).

Recently, genetic algorithms (GAs) have also been used for optimizing the continuous casting process. In Bhattacharya, Debjani, Roychowdhury, and Das (2007), the optimum selection of mold oscillation parameters for continuous casting in steel manufacturing was carried out using a GA. The main aim of that work was to improve the lubrication inside the mold so as to reduce the intensity of oscillation marks at high casting speeds. The optimization in Bhattacharya et al. (2007) was done by implementing the GA to improve the frequency, amplitude, and waveform of the oscillation during continuous casting in a way that maximizes the lubrication, minimizes the depth of oscillation marks, and minimizes the cycle peak friction. The constraints used during the optimization were the machine operational bounds. As compared to the original parameters selected using the conventional/heuristic approaches set by the original equipment manufacturer, the selection done using the GA by Bhattacharya et al. (2007) was shown to be more effective in optimizing the individual oscillation effects. Another interesting application of GAs in casting systems was carried out by Lagdive and Inamdar (2013), in which the design of the riser or feeder was optimized. Risers are incorporated into the setup to feed liquid metal into the casting as it solidifies, compensating for solidification shrinkage. In Lagdive and Inamdar (2013), the GA technique was utilized to identify the optimal riser size for each of the casting components. It was also found that the GA technique could be applied to standard feeder shapes, since the geometric formulation can be used to calculate the volume and the area. An illustration of the optimization scenario in sand mold casting is shown in Figure 8.1.

The MO optimization of the cement-bonded sand mold system is presented in this chapter. The main idea behind these sorts of molds is that a binder element (in this case cement) is used to enhance the mold strength, the mold hardness, and the casting accuracy. The primary constituents of the cement-bonded sand mold system are cement, silica sand, and water (Surekha, Vudavilli, & Parappagoudar, 2012). However, producing cement-bonded sand molds that meet the strength and hardness requirements takes a long time. To rectify this issue, a constituent called an "accelerator" (usually calcium formate) is introduced.

The cement-bonded sand mold system problem presented in Surekha et al. (2012) was tackled using two metaheuristic techniques. Differential evolution (DE) (Storn & Price, 1995) is a population-based evolutionary algorithm derived from GAs (Holland, 1992). DE was developed in the 1990s and has been used extensively to solve problems that are non-differentiable, noncontinuous, nonlinear, noisy, flat, or multidimensional, that have many local minima and constraints, or that have a high degree of stochasticity. DE has been widely applied in many areas ranging from engineering and economics to operational research (Babu & Angira, 2003; Das, Abraham, & Konar, 2008; Ali, Pant, & Abraham, 2009).

In addition, a new variant of the DE technique, which utilizes concepts from Hopfield networks (Hopfield, 1982, 1984; Vasant, Ganesan, & Elamvazuthi, 2012), was developed. This variant was developed with the aim of improving the convergence properties of the DE algorithm and, by implication, its overall efficiency in attaining optimal solutions. This improved technique, called the Hopfield differential evolution (Hopf-DE) algorithm, was then tested and compared with the standard DE algorithm in this work. Both optimization techniques were used in conjunction with the weighted sum scalarization approach (Fishburn, 1967; Triantaphyllou, 2000) to generate a series of solutions that dominantly approximate the Pareto frontier. Analysis was then conducted to identify the individual best, median, and worst solutions as well as the frontier approximations obtained in this work. The details of the formulation employed to optimize the cement-bonded mold system are presented. The optimization strategy is discussed and an in-depth explanation of the metaheuristic mechanism is provided. The process of measuring the performance of the metaheuristic algorithms during the optimization is also analyzed.

In the cement-bonded sand mold, cement is used as a binder to produce highly accurate castings. Using cement as the binder element also gives the mold core a high compressive strength. The three critical constituents of the cement-bonded sand mold are sand, water, and cement. The major cement constituents required are dicalcium silicate (2CaO·SiO2), tricalcium aluminate (3CaO·Al2O3), tricalcium silicate (3CaO·SiO2), and tetracalcium aluminoferrite (4CaO·Al2O3·Fe2O3). To produce a mold with high compressive strength, the total casting time would otherwise have to be very long. Therefore, accelerators are often employed to reduce the casting time without sacrificing the compressive strength of the mold, by increasing the rate of hydration. An accelerator is a chemical constituent that complies with the American Society for Testing and Materials (ASTM) Type C standard. Examples of commonly used accelerators in concrete mixtures are calcium chloride, calcium nitrate, sodium chloride, calcium nitrite, calcium aluminate, sodium formate, and calcium formate (Neville, 2000). The cement-bonded sand mold model considered in this chapter was developed by Surekha et al. (2012); in that work, the accelerator utilized was calcium formate (Ca(HCOO)2). Hydration also increases the resistance of the mold to chemical reactions (Sleicher & Providence, 1943).

In the past, very few works were conducted on modeling the physical properties of cement-bonded molds relative to their input parameters (e.g., amount of cement, water content, etc.). The work of Uchikawa and Uchida (1976) provided useful information regarding the responses of compressive strength with respect to the amount of water, the amount of jet cement, the duration of testing time, and the amount of accelerator. Their rigorous studies resulted in the identification of the parameters that critically influence the compressive strength of the sand mold. The statistical viewpoint demonstrated in Montgomery (2001) has been shown to be capable of generating the response models of the cement-bonded sand mold system; the approach highlighted there is design of experiments, encompassing efficient data collection and regression modeling of the input-output relations. Another excellent approach for ascertaining the nonlinear response relations of the cement-bonded system was used in Parappagoudar, Pratihar, and Datta (2007), which utilized the response surface methodology (RSM) in tandem with the design-of-experiments approach. The RSM approach uses polynomial and localized estimations from empirical data to approximate the response relations.

The combination of the two approaches allows the modeling to be done accurately and efficiently, with a minimal number of experiments, while capturing all the critical features of the system involved. However, due to the low accuracy of linear models when it comes to nonlinear responses, the authors in Parappagoudar et al. (2007) employed the central composite design (CCD) framework (Montgomery, 2001) to construct the nonlinear response models. In Parappagoudar et al. (2007), an accelerator was employed to speed up the cement-bonded sand mold development. Experiments were then conducted, and the data obtained were employed to construct linear models for the responses of compressive strength and mold hardness. These linear models were then tested using randomized statistical tests. Consequently, the nonrotatable CCD (Anderson & Whitcomb, 2005) was used to develop the nonlinear models for the mentioned responses. Similarly, the robustness of these nonlinear models was gauged using randomized statistical tests. The authors of Parappagoudar et al. (2007) then performed comparative studies between their linear and nonlinear models with the aid of some randomly developed test cases. They identified the best model as the one with the minimal percentage deviation in the prediction of each response. Finally, they successfully completed the work by developing the relationships between the responses of the cement-bonded mold system.

The process parameters of the mold therefore heavily influence the quality of the final product. In Surekha et al. (2012), these parameters were identified as the decision variables, while the hardness (H) and the compressive strength (CS) of the mold were used as the objective functions. The process parameters were the percentage of Portland cement (A), the percentage of accelerator (B), the percentage of water content (C), and the testing time in hours (D). The objective functions and the ranges of the decision variables are shown as follows:

H = 13.5199 + 7.38194A + 11.6111B + 6.76042C + 0.113812D
    − 0.388889A^2 − 1.68056B^2 − 9.22222C^2 + 0.0493827D^2
    − 0.289062AB + 1.57813AC + 0.0130208AD + 1.59375BC
    + 0.0989583BD + 0.30208CD    (8.1)

CS = 83.8913 − 3.59381A − 2.45386B + 135.424C − 18.6127D
     + 0.213865A^2 − 3.28139B^2 − 22.7782C^2 + 1.39739D^2
     + 2.75252AB − 4.12608AC + 2.61247AD − 4.07222BC
     + 2.12947BD − 7.73506CD    (8.2)

8 ≤ A ≤ 12,  2 ≤ B ≤ 4,  3 ≤ C ≤ 8,  2 ≤ D ≤ 8    (8.3)

The MO optimization problem statement for the cement-bonded sand mold system is shown as follows:

Max (f_1, f_2),  subject to:  8 ≤ A ≤ 12,  2 ≤ B ≤ 4,  3 ≤ C ≤ 8,  2 ≤ D ≤ 8    (8.4)

where f_1 is the mold hardness, H (Equation 8.1), and f_2 is the compressive strength, CS (Equation 8.2).
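Since the two response models are explicit quadratic polynomials, they can be evaluated directly in code. The following C++ sketch is illustrative only: the function names and the test point are assumptions, and the coefficients follow the reconstruction of Equations 8.1 through 8.3 given above.

// Hedged sketch: direct evaluation of the reconstructed response models.
// Function names and the test point are illustrative assumptions.
#include <iostream>

double hardness(double A, double B, double C, double D) {            // Equation 8.1
    return 13.5199 + 7.38194*A + 11.6111*B + 6.76042*C + 0.113812*D
         - 0.388889*A*A - 1.68056*B*B - 9.22222*C*C + 0.0493827*D*D
         - 0.289062*A*B + 1.57813*A*C + 0.0130208*A*D + 1.59375*B*C
         + 0.0989583*B*D + 0.30208*C*D;
}

double compressiveStrength(double A, double B, double C, double D) { // Equation 8.2
    return 83.8913 - 3.59381*A - 2.45386*B + 135.424*C - 18.6127*D
         + 0.213865*A*A - 3.28139*B*B - 22.7782*C*C + 1.39739*D*D
         + 2.75252*A*B - 4.12608*A*C + 2.61247*A*D - 4.07222*B*C
         + 2.12947*B*D - 7.73506*C*D;
}

bool feasible(double A, double B, double C, double D) {              // Equation 8.3
    return A >= 8 && A <= 12 && B >= 2 && B <= 4 &&
           C >= 3 && C <= 8 && D >= 2 && D <= 8;
}

int main() {
    double A = 10, B = 3, C = 5, D = 5;   // an arbitrary feasible test point
    if (feasible(A, B, C, D))
        std::cout << "H = " << hardness(A, B, C, D)
                  << ", CS = " << compressiveStrength(A, B, C, D) << "\n";
}

Any candidate point (A, B, C, D) proposed by an optimizer can be screened with feasible() before the objectives are evaluated.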

In MO optimization, the weighted sum approach and other scalarization techniques are considered classical approaches. "Classical" is used in the sense that they precede most of the more modern metaheuristic approaches, such as evolutionary and swarm-based techniques. However, classical approaches are nowadays often employed in tandem with metaheuristics, since they provide a strong foundation for the overall computational strategy (Elamvazuthi, Ganesan, & Vasant, 2011; Ganesan, Elamvazuthi, Shaari, & Vasant, 2013).

One useful way to classify MO optimization methods is to divide them into two groups: preference-based and ideal procedures. Preference-based procedures usually involve the aggregation of the objective functions to form a single composite function. This function is the weighted total of the individual objective functions. The composite function is then maximized or minimized, depending on the problem definition. This approach is very useful if the relative preferences of the objectives are known prior to the optimization procedure. The relative preferences are then represented as the weights used to aggregate the individual objectives.

The ideal procedure revolves around the idea of rigorously finding multiple (non-weighted) solutions over a large range of values of the decision variables. For the selection of the desired solution, this approach requires the decision maker to have access to higher-level knowledge to aid in the design of the criterion (approximate function or vector) employed for the selection. The ideal procedure is thus a non-preference method: it does not use any preference information about the objective functions, instead making use of some form of heuristic or utility function to identify a single optimal solution. This approach is usually computationally intensive and requires a thorough search of the objective space before the desired solution option is obtained. Preference-based procedures can be further grouped into three classes: a posteriori, a priori, and interactive methods. A posteriori methods first produce a Pareto-optimal solution set and then employ preference information about the objective functions to select from it.

A priori methods use preference information about the objective functions to identify a single Pareto-optimal solution, while interactive procedures involve the employment of preference information throughout the execution of the optimization procedure. The classification of these procedures is graphically presented in Figure 8.2.

The weighted sum approach falls under the subclass of preference methods, since relative importance/preference values are assigned to the individual objective functions prior to aggregation. The weighted sum approach has been the most widely used approach in the past and is still highly relevant in today's optimization scene. Its simplicity and its strong mathematical foundation make it highly suitable to be applied on its own or hand in hand with more modern techniques such as metaheuristics.
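For reference, the weighted sum method scalarizes a K-objective maximization problem in the standard textbook form below (stated here for completeness, not as a formulation specific to this chapter):

Max_x F(x) = Σ_{k=1}^{K} w_k f_k(x),   with w_k ≥ 0 and Σ_{k=1}^{K} w_k = 1

Varying the weight vector and re-solving the scalarized problem then traces out different points of the Pareto frontier, which is exactly how the frontier approximations are generated later in this chapter.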

DE is a class of evolutionary metaheuristic algorithms first introduced in 1995 by Storn and Price. The core idea of this technique is the incorporation of perturbative methods into evolutionary algorithms. Before diving into the technical details of the DE algorithm, a brief overview of some of the applications of DE in material engineering is provided in the subsequent paragraphs.

In material engineering and industrial manufacturing, DE has become increasingly popular due to its robustness, effectiveness, and ease of use. For instance, in Bhattacharya and Sambasivam (2009), DE was employed for the optimization of the oscillation parameters in the continuous casting process of steel manufacturing. In this casting process, liquid steel is poured into a buffer vessel and consequently flows into a water-cooled copper mold via a nozzle. The water is circulated to cool the liquid steel. This continuous casting process involves two further processes, namely mold oscillation and lubrication addition. These processes are targeted at reducing the sticking of the formative steel shell to the mold and the nonuniformity of shell development due to irregular heat transfer. In Bhattacharya and Sambasivam (2009), the authors developed a mathematical formulation to express the correlation between the free oscillation parameters and their respective responses. This formulation was then used to develop an optimization model to maximize the lubrication and minimize the depth of the oscillation marks by manipulating the oscillation frequency, the oscillation amplitude/stroke, and the sinusoidal variation. The authors in Bhattacharya and Sambasivam (2009) were therefore faced with a bi-objective optimization problem. The primary goal of that work was to employ DE and GA to solve the formulated optimization problem and to check whether both algorithms return the same solutions for various casting speeds. Second, the authors compared the algorithmic efficiencies with respect to convergence characteristics and computational time. It was found that the DE approach yielded the same quality of results as the GA approach; however, the DE approach had a higher convergence rate.

The work of Jawaha and Ramamoorthy (2012) involves the dynamic optimization of injection molding using three evolutionary approaches (DE, GA, and PSO). Injection molding is a highly nonlinear, nonconvex, and multivariate procedure employed for processing plastics from thermoplastic materials. Using this procedure, thermoplastic material can be molded into complex geometrical shapes. An injection molding process consists of four different mechanisms: plasticizing, injection, clamping, and ejection. The plasticizing mechanism melts the resin material to improve its flowability into the mold. Prior to injection, the resin is mixed and melted in a temperature-controlled barrel. Next, the clamping unit adjusts the opening and closing of the mold to maintain a steady force on the mold during the injection. The ejector pin ejects the final molded product from the die. The injection molding cycle is heavily influenced by processing conditions such as injection pressure, back pressure, barrel temperature, nozzle temperature, holding pressure, clamping force, cooling time, filling time, and screw rotation speed. The authors in Jawaha and Ramamoorthy (2012) optimized the mentioned process parameters online so as to minimize product defects by employing PSO, DE, and GA. They found that all the metaheuristics performed well except PSO, which converged prematurely and generated unsatisfactory solutions as compared to GA and DE.

Another interesting application of DE in material engineering can be seen in the work of Holis, Skanderova, Placek, Dvorsky, and Zelinka (2013). In that work, DE and the self-organizing migration algorithm (SOMA) were utilized to solve the steel alloying problem. The steel alloying problem involves manipulating the weight of the input steel, the composition of the input steel, the list of alloying materials (and their prices per unit), and the target steel grade. The optimal settings of these parameters are identified to maximize the quality of the manufactured steel (i.e., the final steel grade) at a reduced per-unit price with minimal energy consumption. The DE and SOMA techniques were found to be effective in solving the problem as compared to the Simplex method. The authors in Holis et al. (2013) found that although the Simplex approach may produce a cheaper solution, the metaheuristic implementations struck a finer balance between steel grade, price per unit, and energy consumption.

The DE technique starts with the initialization of a population, P, of at least four individuals. These individuals are real-coded vectors of some size N. The initial population of individual vectors (the first generation, denoted gen = 1) is randomly generated within appropriate search ranges. One principal parent, denoted x_i^p, and three auxiliary parents, denoted x_i^a, are randomly selected from the population P. In DE, every individual in the population P becomes a principal parent, x_i^p, at one generation or another and thus has a chance of mating with the auxiliary parents, x_i^a. The three auxiliary parents then engage in "differential mutation" to generate a mutated vector, V_i:

V_i = x_1^a + F(x_2^a − x_3^a)    (8.5)

where F is the real-valued mutation amplification factor, which is usually between 0 and 1.

Next, V_i is recombined (or discretely crossed over) with x_i^p to generate the child trial vector, x_i^child. The crossover operator is defined as follows:

CR: V_i ⊗ x_i^p → x_i^child    (8.6)

The crossover probability, CR, governs the proportion of components that the child trial vector inherits from the mutated vector, V_i, as opposed to the principal parent, x_i^p. CR is an input parameter set by the user. In DE, the survival selection mechanism into the next generation is called "knock-out competition." This is a direct competition between the principal parent, x_i^p, and the child trial vector, x_i^child, to select the survivor for the next generation as follows:

x_i(gen + 1) = { x_i^child(gen),  if f(x_i^child) is better than f(x_i^p);
                 x_i^p(gen),      otherwise }    (8.7)

Therefore, the knock-out competition mechanism also serves as the fitness evaluation scheme for the DE algorithm. The DE method is described in Algorithm 8.1 and Figure 8.3.

Algorithm 8.1: Differential Evolution (DE)

Step 1: Initialize parameters N, P, CR, and F
Step 2: Randomly initialize the population vectors, x_i
Step 3: Randomly select one principal parent, x_i^p
Step 4: Randomly select three auxiliary parents, x_i^a
Step 5: Perform differential mutation and generate the mutated vector, V_i
Step 6: Recombine V_i with x_i^p
Step 7: Perform "knock-out" competition for next-generation survival selection
Step 8: If the fitness criterion is satisfied and t = Tmax, halt and print solutions; else proceed to Step 3
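To make the steps of Algorithm 8.1 concrete, a minimal C++ sketch of the classic DE/rand/1/bin scheme is given below. This is a sketch under stated assumptions rather than the authors' implementation: the control parameter values, the random seed, and the clamping of mutated components to the search bounds are illustrative choices.

// A minimal sketch of DE/rand/1/bin (Algorithm 8.1) for a maximization problem.
// Illustrative assumptions: P = 40, F = 0.8, CR = 0.9, a fixed seed, and
// clamping of mutated components to the bounds of Equation 8.3.
#include <algorithm>
#include <functional>
#include <random>
#include <vector>

using Vec = std::vector<double>;

Vec differentialEvolution(std::function<double(const Vec&)> f,
                          const Vec& lo, const Vec& hi,
                          int P = 40, double F = 0.8, double CR = 0.9,
                          int Tmax = 500) {
    const int N = static_cast<int>(lo.size());
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::uniform_int_distribution<int> pick(0, P - 1), jpick(0, N - 1);

    // Steps 1-2: randomly initialize the population within the search ranges.
    std::vector<Vec> pop(P, Vec(N));
    for (auto& x : pop)
        for (int j = 0; j < N; ++j)
            x[j] = lo[j] + u01(rng) * (hi[j] - lo[j]);

    std::vector<double> fit(P);
    for (int i = 0; i < P; ++i) fit[i] = f(pop[i]);

    for (int gen = 0; gen < Tmax; ++gen) {
        for (int i = 0; i < P; ++i) {          // Step 3: x_i acts as principal parent.
            int a, b, c;                       // Step 4: three distinct auxiliary parents.
            do { a = pick(rng); } while (a == i);
            do { b = pick(rng); } while (b == i || b == a);
            do { c = pick(rng); } while (c == i || c == a || c == b);

            // Steps 5-6: differential mutation (Equation 8.5) fused with
            // binomial crossover (Equation 8.6); jrand forces one mutated gene.
            Vec child = pop[i];
            int jrand = jpick(rng);
            for (int j = 0; j < N; ++j)
                if (j == jrand || u01(rng) < CR) {
                    double v = pop[a][j] + F * (pop[b][j] - pop[c][j]);
                    child[j] = std::clamp(v, lo[j], hi[j]);
                }

            // Step 7: knock-out competition (Equation 8.7), maximization.
            double fc = f(child);
            if (fc > fit[i]) { pop[i] = child; fit[i] = fc; }
        }
    }
    return pop[std::max_element(fit.begin(), fit.end()) - fit.begin()];
}

For the mold problem, f would be the aggregated function F of Equation 8.16, with lo = {8, 2, 3, 2} and hi = {12, 4, 8, 8} taken from Equation 8.3.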

The central ideas used during the design of the Hopf-DE spring from notions first introduced in Ising spin models (Amit, Gutfreund, & Sompolinsky, 1986) and the Hopfield neural network (HNN) (Hopfield, 1982, 1984). The statistical physics models developed from the Ising model (Dyson, 1969) inspired the development of a new type of neural net, the Hopfield recurrent artificial neural network, with improved convergence properties. The HNN developed by Hopfield (1982, 1984) was observed to have applications in optimization problems (Tank & Hopfield, 1986; Lee, Sode-Yome, & Park, 1998). One of the key features of the HNN is that the energy decreases by a finite amount whenever the network's state changes. This essential property ensures the convergence of the output whenever the network state is changed.

HNNs are usually constructed from a finite number of interlinked neurons. These neurons update their weights or their activation values (outputs from threshold neurons) independently of the other neurons in the network. It is important to note that the neurons in these networks are not directly connected to themselves and that each neuron functions as both an input and an output. In HNNs, the activation values are usually binary (+1 or −1) and the weights of the neurons are symmetric (w_ij = w_ji). Since HNNs are recurrent auto-associator networks, the inputs of the neurons are updated by the activation values of the outputs. Therefore, the input of a neuron i at cycle t + 1 is as follows:

s_i(t + 1) = Σ_j y_j(t) w_ji + θ,   such that j ≠ i    (8.8)

where y_j(t) is the activation value from the output, w_ji is the neural weight, and θ is some arbitrary constant set by the user. A piece-wise threshold function is applied to the neural inputs to obtain the activation values of the outputs, y_j(t + 1), at the next cycle (t + 1):

y_j(t + 1) = { +1,      if s_i(t + 1) > U;
               −1,      if s_i(t + 1) < U;
               y_j(t),  if s_i(t + 1) = U }    (8.9)

Therefore,

y_j(t + 1) = sgn(s_i(t + 1))    (8.10)

where U is some user-defined threshold value. A neuron i in the HNN is termed "stable" if the following condition is satisfied:

y_j(t) = sgn(s_i(t − 1))    (8.11)

In HNN systems, an additional symmetry restriction on the weights (w_ij = w_ji) is imposed. This is done so that the neurons are bidirectional. Hence, the network has the capability to settle into a stable state, and this enforces the convergent behavior of the algorithm. With this additional restriction, the energy function of the system can be defined as follows:

E = −(1/2) Σ_i Σ_j y_i y_j w_ij − θ Σ_j y_j,   j ≠ i    (8.12)

Therefore, after a finite number of steps the network converges to a stable state and the energy reaches a local minimum (analogous to the atomic energy in the simulated annealing (SA) algorithm (Metropolis, Rosenbluth, Rosenbluth, Teller, & Teller, 1953; Kirkpatrick, Gelatt, & Vecchi, 1983; Ingber, 1993)). Similarly, the ideas that built the foundations of the HNN are applied to the DE algorithm in this work. This was done to improve the convergence capabilities of the DE algorithm.
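The following C++ sketch illustrates the mechanics described above: one asynchronous update sweep (Equations 8.8 through 8.10) and the energy of Equation 8.12 for a binary HNN. The threshold U = 0 and the zero-diagonal symmetric weight matrix are illustrative assumptions, not values prescribed by the chapter.

// A hedged sketch of a binary HNN: one asynchronous sweep (Equations 8.8-8.10)
// and the energy of Equation 8.12. Assumes U = 0 and w[i][j] = w[j][i], w[i][i] = 0.
#include <vector>

// One asynchronous update sweep; returns true if any neuron changed state.
bool hopfieldSweep(std::vector<int>& y,                        // activations, +1 or -1
                   const std::vector<std::vector<double>>& w,  // symmetric weights
                   const std::vector<double>& theta) {
    bool changed = false;
    for (size_t i = 0; i < y.size(); ++i) {
        double s = theta[i];                                   // Equation 8.8
        for (size_t j = 0; j < y.size(); ++j)
            if (j != i) s += w[i][j] * y[j];
        int ynew = (s > 0) ? 1 : (s < 0 ? -1 : y[i]);          // Equations 8.9-8.10, U = 0
        if (ynew != y[i]) { y[i] = ynew; changed = true; }
    }
    return changed;
}

// Energy of Equation 8.12; each accepted flip can only lower (or keep) it.
double energy(const std::vector<int>& y,
              const std::vector<std::vector<double>>& w,
              const std::vector<double>& theta) {
    double E = 0.0;
    for (size_t i = 0; i < y.size(); ++i) {
        for (size_t j = 0; j < y.size(); ++j)
            if (j != i) E -= 0.5 * w[i][j] * y[i] * y[j];
        E -= theta[i] * y[i];
    }
    return E;
}

Because each accepted flip can only lower E, repeatedly calling hopfieldSweep until it returns false drives the network to a stable state; this is the convergence property the Hopf-DE variant borrows.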

First, the population of vectors, x_i^G, was initialized. Then, a set of random weights, w_ij, was introduced and the symmetry property was imposed. The subsequent steps are similar to the regular DE algorithm, where one principal parent, x_i^p, and three auxiliary parents, x_i^a, are randomly selected. Differential mutation is then performed and the mutated vector, V_i, is generated. V_i is then recombined with x_i^G to generate the child trial vector, x_i^child. Consequently, the function A_i (whose arguments are the weights w_ij and the child trial vector, x_i^child) is introduced as follows:

A_i(t) = k Σ_j w_ij(t) x_j^child    (8.13)

where k is a constant which is an input parameter specified by the user. In this way, A_i(t) replaces and changes the functionality of x_i^child in the regular DE algorithm. The piece-wise threshold function, s_i(t), is then computed as follows:

s_i(t + 1) = { +1,      if A_i(t) > U;
               −1,      if A_i(t) < U;
               s_i(t),  otherwise }    (8.14)

Algorithm 8.2: Hopfield DE (Hopf-DE)

Step 1: Initialize parameters N, P, CR, and F
Step 2: Randomly initialize the population vectors, x_i
Step 3: Randomly initialize the weights, w_ij(t)
Step 4: Enforce the symmetry condition on the weights
Step 5: Randomly select one principal parent, x_i^p
Step 6: Randomly select three auxiliary parents, x_i^a
Step 7: Perform differential mutation and generate the mutated vector, V_i
Step 8: Recombine V_i with x_i^G
Step 9: Compute the function A_i(t)
Step 10: Compute the energy function, E
Step 11: Perform "knock-out" competition for next-generation survival selection
Step 12: If dE < 0, proceed to the next step; else go to Step 4
Step 13: If the fitness criterion is satisfied and t = Tmax, halt and print solutions; else proceed to Step 6

where U is a constant which is an input parameter specified by the user. The energy function is then computed as follows:

E = −(1/2) Σ_i Σ_j A_j(t + 1) A_j(t) w_ij − θ Σ_i s_i(t)    (8.15)

where θ is a relaxation constant defined by the user.

Hence, a new variant of the DE algorithm is developed and is termed the Hopfield DE (Hopf-DE) algorithm. The Hopf-DE procedure is given in Algorithm 8.2 and its workflow is shown in Figure 8.4.
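Since the published description leaves some implementation freedom, the C++ fragment below is only one plausible reading of Steps 9 through 12 of Algorithm 8.2, based on the reconstructed Equations 8.13 and 8.15. The helper names (computeA, energyHopfDE) and the data layout are assumptions for illustration, not the authors' code.

// A hedged fragment: one plausible reading of Steps 9-12 of Algorithm 8.2.
#include <vector>

// Equation 8.13 (as reconstructed): A_i(t) = k * sum_j w_ij(t) x_j^child.
double computeA(const std::vector<std::vector<double>>& w,
                const std::vector<double>& child, int i, double k) {
    double a = 0.0;
    for (size_t j = 0; j < child.size(); ++j) a += w[i][j] * child[j];
    return k * a;
}

// Equation 8.15 (as reconstructed): the energy used by the dE < 0 test.
double energyHopfDE(const std::vector<double>& Anew,   // A_j(t + 1)
                    const std::vector<double>& Aold,   // A_j(t)
                    const std::vector<std::vector<double>>& w,
                    const std::vector<double>& s,      // s_i(t), from Equation 8.14
                    double theta) {
    double E = 0.0;
    for (size_t i = 0; i < s.size(); ++i) {
        for (size_t j = 0; j < Anew.size(); ++j)
            E -= 0.5 * Anew[j] * Aold[j] * w[i][j];
        E -= theta * s[i];
    }
    return E;
}

// Step 12 then reduces to: if (E_now - E_prev < 0) proceed to the next step;
// else re-randomize and re-symmetrize the weights (Step 4) and repeat.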

The weighted sum approach was used in conjunction with the DE method to solve the cement-bonded sand mold MO problem. The objective functions (Equations 8.1 and 8.2) were aggregated to form a single weighted function, F:

F = w_1 H + w_2 CS    (8.16)

where w_1 and w_2 are the scalar weights. The solution sets (approximations of the Pareto frontier) were obtained using the DE and Hopf-DE techniques in conjunction with the weighted sum approach. The quality of these solutions was measured using the value of the aggregated function, F. Since this problem involves objective maximization, the higher the F value, the more dominant the solution. For the approximation of the Pareto frontier, 50 solutions for various weights were obtained using both the DE and the Hopf-DE methods. The approximate Pareto frontiers traced using the DE algorithm are shown in Figure 8.5.
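Operationally, tracing the frontier amounts to sweeping the weight pair and re-solving the scalarized problem for each setting. The sketch below reuses the illustrative hardness, compressiveStrength, and differentialEvolution functions from the earlier snippets; the uniform grid of 50 weight pairs is an assumption, as the chapter does not state how its weights were chosen.

// A hedged sketch: sweeping 50 weight pairs through Equation 8.16 to trace
// the frontier approximation. Assumes the earlier illustrative functions.
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> lo = {8, 2, 3, 2}, hi = {12, 4, 8, 8};   // Equation 8.3
    for (int k = 0; k < 50; ++k) {
        double w1 = k / 49.0, w2 = 1.0 - w1;
        auto F = [&](const std::vector<double>& x) {             // Equation 8.16
            return w1 * hardness(x[0], x[1], x[2], x[3])
                 + w2 * compressiveStrength(x[0], x[1], x[2], x[3]);
        };
        std::vector<double> best = differentialEvolution(F, lo, hi);
        std::printf("w1 = %.2f  H = %.3f  CS = %.3f\n", w1,
                    hardness(best[0], best[1], best[2], best[3]),
                    compressiveStrength(best[0], best[1], best[2], best[3]));
    }
}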

The maximum, median, and minimum values of the aggregated objective function, F, and their associated weights are given in Table 8.1. Based on these values, the best, median, and worst individual solutions were identified. These values are shown in Table 8.2.

TABLE 8.1 Ranked Solutions and the Associated Weights Produced by the DE Algorithm

The approximate Pareto frontiers traced using the Hopf-DE algorithm are shown in Figure 8.6.

Similarly, the solutions produced by the Hopf-DE algorithm were ranked (maximum/best, median, and minimum/worst) based on the values of the aggregated objective function, F. The ranked solutions and their associated weights are given in Table 8.3, while the values of the objective functions, decision variables, and computational time are provided in Table 8.4.

The total computational time taken by the DE and the Hopf-DE algorithms to obtain the optimal solutions that construct the Pareto frontier is given in Figure 8.7.

The DE algorithm takes 14.92 seconds while the Hopf-DE algorithm takes 107.73 seconds to compute the optimal solutions for the entire Pareto frontier. From Figure 8.7, it can be observed that the DE algorithm is more computationally efficient than the Hopf-DE algorithm, requiring roughly one-seventh of the computational time, although Tables 8.2 and 8.4 show that the Hopf-DE algorithm outperforms the DE algorithm in terms of solution optimality. This may be attributed to the increase in algorithmic complexity of the Hopf-DE technique due to the addition of the Hopfield component to the original DE technique. Therefore, although the Hopf-DE may be superior in terms of obtaining Pareto-efficient solutions, it is computationally inferior to the DE algorithm. The values of the aggregated function, F, produced by the DE and Hopf-DE algorithms with respect to various weights are shown in Figures 8.8 and 8.9, respectively.

TABLE 8.2 Ranked Solutions and the Respective Computational Time Produced by the DE Algorithm

TABLE 8.3 Ranked Solutions and the Associated Weights Produced by the Hopf-DE Algorithm

TABLE 8.4 Ranked Solutions and the Respective Computational Time Produced by the Hopf-DE Algorithm

It can be seen from Figures 8.8 and 8.9 that the values of F increase sharply as w1 → 1 and w2 → 0. In material science, it is well known that compressive strength (CS) conflicts with hardness (H) for most known materials. In this case, however, prioritizing hardness (H) and neglecting CS boosts the value of the aggregated function, F, and not vice versa. This characteristic may be mainly attributed to the unique nature of the cement-bonded mold system, in which the objectives are intercorrelated in this fashion.

Hence, it can be said that the more the second objective (CS) is compromised and the first objective (H) is prioritized, the higher the value of F obtained (the optimal value).

From Tables 8.2 and 8.4, it can be stated that the Hopf-DE method outperforms the DE technique in terms of obtaining optimal values of CS and hardness (H). This may be attributed to the enhanced capabilities of the Hopfield component in handling nonlinearities, since the objective functions used in this problem are nonlinear in nature. With modifications to the energy function of the Hopf-DE as well as its respective parameters, the Hopf-DE method could be customized for solving a broader class of nonlinear problems. This capability can be mainly attributed to the convergence capabilities of the Hopfield component in handling factors arising from problems that are multivariate and nonlinear.

The DE and the Hopf-DE methods make, on average, 31.92 and 524.96 function evaluations per optimal solution point on the Pareto frontier, respectively. Thus, the DE method is more efficient in program execution than the Hopf-DE method. As with the computational time, the prime contributing factor to this difference is algorithmic complexity, where the Hopf-DE method has a higher degree of complexity than the DE method. Due to the energy function component in the Hopf-DE method, the algorithm showed stable performance and converged during each program execution.

A global solution with a better computational time may be obtained by implementing the DE and Hopf-DE methods on a high-performance computer (HPC). The HPC should be a dedicated system capable of rigorous search and processing. It is worth remarking that the Hopf-DE and DE methods were easily implemented in C++. During the implementation of the DE and the Hopf-DE algorithms on the cement-bonded mold application, both methods performed very well in terms of solution feasibility. Hence, the solutions produced by both methods did not violate any of the given constraints of this problem.

Extending the algorithmic features to include additional optimization and constraint-handling capabilities may improve both approaches. Such a strategy may produce solution sets that are closer to the global optimum. Since the execution of these sorts of algorithmically complex programs would be computationally costly, a more powerful computer would be required. Another way to improve the methods utilized in this work would be to further hybridize them with other methods such as simulated annealing (Kirkpatrick et al., 1983), genetic programming (Koza, 1992), or other classes of evolutionary algorithms. Hybridizing these methods with algorithms like DE may provide a more efficient system for handling constraints and thus pave the way for solutions closer to the global optimum.

The main ideas presented in this chapter involve the implementation of the DE and the improved Hopf-DE techniques in material engineering. The DE and Hopf-DE approaches were applied to the MO cement-bonded sand mold system to construct the approximate Pareto frontier, which was then employed to identify the best solution option. Prior to the implementation of the DE and Hopf-DE techniques, other metaheuristics such as swarm-based algorithms were attempted on the cement-bonded sand mold problem. However, due to poor results, these approaches were abandoned and the DE technique was instead applied to this problem. In line with the "No Free Lunch Theorems" (Wolpert & Macready, 1997), not all problems can be tackled using the same metaheuristic strategy; this fact applies to any form of computational technique. Thus, a certain degree of algorithmic testing is required before a suitable metaheuristic can be identified for a particular problem. Detailed analyses were performed on the computational results produced by the DE and its variant, the Hopf-DE. An examination of the performance and the quality of the solutions obtained by the DE and Hopf-DE algorithms was presented.