# Multiobjective Optimization of Bioethanol Pretreatment Process


## ABSTRACT

Multiobjective (MO) optimization problems are increasingly encountered in process optimization applications. Hence, it is imperative that a decision maker in the plant environment has multiple candidate solutions from which to select the most suitable one. In this chapter, a real-world industrial-scale problem, the MO optimization of a bioethanol pretreatment process, was solved using a number of algorithms: the genetic algorithm (GA), the gravitational search algorithm (GSA), particle swarm optimization (PSO), and differential evolution (DE). In addition, ideas from chaos theory were introduced to enhance the DE algorithm into the chaotic DE (CDE). Performance metrics such as convergence, diversity, and the hypervolume indicator (HVI) were used to analyze the dominance of the solution sets produced by these algorithms. Studies on the performance as well as the solution quality of these algorithms are presented in this chapter.

Over the past years, MO optimization techniques have been implemented in numerous engineering applications in industry. MO optimization problems can generally be divided into two classes: bi-objective problems and problems with more than two objectives. This chapter discusses chemical process problems that contain more than two objectives.

In Sankararao and Gupta (2007), the MO optimization of an industrial fluidized-bed catalytic cracking unit (FCCU) was carried out. In that work, a "jumping gene" adaptation was applied to multiobjective simulated annealing (MOSA), creating a new algorithm: the jumping gene MOSA (MOSA-jG). This adaptation was shown to improve algorithmic performance and reduce computational time. The authors tested their new algorithm on three bi-objective test cases and then went on to solve two formulations of the industrial FCCU problem. The first formulation was a bi-objective problem, while the second was a triple-objective problem subject to operational constraints. In Lim, Floquet, and Joulia (2001), the normal boundary intersection (NBI) approach was hybridized with the summation of weighted objective functions (SWOF) method to solve an industrial MO optimization problem. The problem considered was a bi-objective chemical process optimization problem where the objectives were to minimize cost and maximize pollution prevention. It was also stated that by using this hybrid NBI approach, reliable optimization results were obtained even in the nonconvex sectors of the Pareto frontier. Using this hybrid approach, the authors obtained an effective trade-off between the objectives from the Pareto frontier. Similarly, a MO optimization of steam reformer performance using the GA was done by Rajesh, Gupta, Rangaiah, and Ray (2000). Here, an adaptation was introduced to the non-dominated sorting genetic algorithm (NSGA) for MO optimization applications. In that work, the two objectives considered were the minimization of the methane feed rate and the maximization of the flow rate of carbon monoxide in the syngas stream while taking into account process requirements, heat integration, and economics (constraints). Through this MO optimization procedure, the authors aimed to minimize operating costs and maximize profits. The MO optimization was carried out successfully and the Pareto-optimal operating conditions were obtained.

A MO chemical process problem for the thermal processing of food was solved by Sendín, Alonso, and Banga (2010). In that work, two case studies (a bi-objective and a triple-objective problem) with nonlinear dynamic models were tackled. A hybrid approach consisting of the weighted Tchebycheff and NBI approaches was utilized. The MO optimization using this novel approach succeeded in constructing the Pareto-optimal set for the thermal processing of foods. In the work of Martinsa and Costa (2010), a MO optimization of a benzene production process by toluene hydrodealkylation was done. There, the MOSA algorithm was utilized to solve a bi-objective problem whose objectives were to maximize the net present value and minimize the potential environmental impact of the process. Using the MOSA approach, the authors managed to construct a Pareto-optimal set and identify the optimal solutions to the benzene production process. In Salari, Naiei, and Nabavi (2008), the MO optimization of an ethane thermal cracking reactor was carried out. The NSGA-II approach was used for MO optimization, where a bi-objective problem involving ethylene conversion and selectivity was maximized. The Pareto frontier was obtained by this method, and the authors also discussed extensively the effects of the decision variables on the objectives of this process. MO scenarios have also surfaced in the field of separation processes. For instance, in Fiandaca and Fraga (2009), the MOSA technique was used to optimize the design of the pressure swing adsorption process (a cyclic separation process). The design problem considered in that work involved the bi-objective maximization of nitrogen recovery and nitrogen purity. The results obtained provided a good approximation of the Pareto frontier with acceptable trade-offs between the objectives.

In this chapter, the details of the model as well as the strategies employed to optimize the MO bioethanol pretreatment process are discussed. The solutions obtained using the GA, GSA, PSO, DE, and CDE were analyzed using performance metrics such as convergence, diversity, and the HVI.

The pretreatment technique optimized in this chapter is based on the work of Banerjee et al. (2009). The existence of significant amounts of sugars such as holocellulose (57-61% by weight) in rice husk makes it a good potential source for the production of bioethanol. Rice husk is a form of lignocellulosic biomass. Lignocellulose is a complex structure made up of cellulose, hemicelluloses, and lignin. These complex structures are resistant to degradation and limit the potential of sources like rice husk for ethanol production (Zaldivar, Nielsen, & Olsson, 2001). Therefore, complex structures like lignocellulose require a series of treatments to release their constituent monomeric sugars, which can then be fermented to produce ethanol. Mosier et al. (2005) identified the key goals of pretreatment as reducing the biomass particle size, maximizing lignin removal, limiting the formation of inhibitors (in the form of degradation products), minimizing the loss of pentoses and hexoses, and reducing the overall cost of the process.

The pretreatment technique optimized in the work of Banerjee et al. (2009) is the "wet air oxidation" technique. This technique consumes very low amounts of fuel and has low operating costs. It is a potentially effective pretreatment technique for fractionating lignocellulose into a solubilized hemicellulose fraction and a solid cellulose-rich fraction with minimal inhibitor formation. This in turn promotes enhanced enzymatic hydrolysis of the pretreated material for the subsequent ethanol fermentation (Schmidt & Thomsen, 1998; Klinke, Ahring, Schmidt, & Thomsen, 2002).

Thus, the MO optimization of the pretreatment of rice husk using the wet air oxidation method, to obtain a cellulose-rich substrate amenable to further enzymatic hydrolysis, was the main theme of the work by Banerjee et al. (2009). In their work, the experimental data were used to build a MO optimization model through multiple regression analysis. A block diagram depicting the wet air oxidation process, with the manipulated variables as inputs and the objectives as outputs, is shown in Figure 6.1.

This model and the associated constraints are generally presented as follows:

Maximize → Cellulose yield, f1
Maximize → Lignin removal, f2
Maximize → Hemicellulose solubilization, f3
subject to process constraints. (6.1)

The objective functions (cellulose yield in %, lignin removal in %, and hemicellulose solubilization in %) are as follows:

f1 = −38.3467 + 0.6179X1 + 1.7429X2 + 3.0846X3 − 0.0177X1X2 − 0.0206X1X3 + 0.0937X2X3 (6.2)

f2 = −3.8678 + 0.5587X1 + 18.8545X2 − 6.9167X3 − 0.1066X1X2 + 0.0347X1X3 + 0.0377X2X3 (6.3)

f3 = −330.757 + 2.257X1 − 9.612X2 + 23.287X3 + 0.033X1X2 − 0.134X1X3 + 0.26X2X3 (6.4)

where X1 is the reaction temperature (°C), X2 is the air pressure (MPa), and X3 is the reaction time (minutes). The decision variables are constrained as per the experimental setup; the constraints are as follows:

X1 ∈ [170, 195]
X2 ∈ [0.5, 1]
X3 ∈ [10, 20] (6.5)
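The regression model above and its bounds can be sketched directly in code. In the sketch below, the pairing of the last three coefficients of each equation with the interaction terms X1X2, X1X3, and X2X3 is an assumption reconstructed from the text, and the function names are illustrative:

```python
# Sketch of the three regression objectives of the wet air oxidation model.
# ASSUMPTION: the last three coefficients of each equation multiply the
# interaction terms X1*X2, X1*X3, X2*X3, in that order.

def cellulose_yield(x1, x2, x3):
    """f1: cellulose yield (%). x1: temperature (degC), x2: pressure (MPa), x3: time (min)."""
    return (-38.3467 + 0.6179 * x1 + 1.7429 * x2 + 3.0846 * x3
            - 0.0177 * x1 * x2 - 0.0206 * x1 * x3 + 0.0937 * x2 * x3)

def lignin_removal(x1, x2, x3):
    """f2: lignin removal (%)."""
    return (-3.8678 + 0.5587 * x1 + 18.8545 * x2 - 6.9167 * x3
            - 0.1066 * x1 * x2 + 0.0347 * x1 * x3 + 0.0377 * x2 * x3)

def hemicellulose_solubilization(x1, x2, x3):
    """f3: hemicellulose solubilization (%)."""
    return (-330.757 + 2.257 * x1 - 9.612 * x2 + 23.287 * x3
            + 0.033 * x1 * x2 - 0.134 * x1 * x3 + 0.26 * x2 * x3)

# Decision-variable bounds from the experimental setup (Eq. 6.5).
BOUNDS = [(170.0, 195.0), (0.5, 1.0), (10.0, 20.0)]

def is_feasible(x):
    """True if every decision variable lies inside its experimental range."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, BOUNDS))
```

Under this reading, all three objectives evaluate to physically plausible percentages across the feasible box, which is a useful sanity check on the reconstruction.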

In this work, the GA, PSO, GSA, and DE were employed to solve the MO oxidation process optimization problem. In addition, for more rigorous analyses, the hybrid PSO-DE approach was also applied to the problem. The results achieved are discussed further in Section 6.5.

GAs are categorized as a class of population-based search and optimization algorithms. Detailed information about the GA can be found in Holland (1992), Melanie (1996), Hugh (1993), David (1999a,b), and Lance (2000). The parameter settings initialized prior to the execution of the GA used in this work are shown in Table 6.1, and the GA program applied in this work is shown in Figure 6.2.
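A minimal real-coded GA can be sketched as follows. The population size, crossover rate, mutation rate, and test function here are illustrative placeholders, not the settings of Table 6.1:

```python
import random

# Minimal real-coded GA sketch (tournament selection, blend crossover,
# uniform mutation). ASSUMPTION: all parameter values are illustrative.

def ga_maximize(f, bounds, pop_size=30, generations=100, pc=0.9, pm=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]

    def select():
        # Binary tournament: the fitter of two random individuals survives.
        a, b = rng.sample(pop, 2)
        return a if f(*a) > f(*b) else b

    for _ in range(generations):
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < pc:  # arithmetic (blend) crossover
                w = rng.random()
                child = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
            else:
                child = list(p1)
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < pm:  # uniform mutation
                    child[i] = rng.uniform(lo, hi)
                child[i] = min(max(child[i], lo), hi)  # clip to bounds
            children.append(child)
        pop = children
    return max(pop, key=lambda x: f(*x))

# Usage example on a simple concave test function over the pretreatment bounds.
best = ga_maximize(lambda t, p, m: -(t - 180) ** 2 - (p - 0.8) ** 2 - (m - 12) ** 2,
                   [(170, 195), (0.5, 1.0), (10, 20)])
```

For the actual chapter problem, f would be replaced by a scalarized or Pareto-ranked fitness built from the three regression objectives.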

Since its introduction by Kennedy and Eberhart (1995), the PSO algorithm has appeared in many applications. It is essentially based on the concept of the swarming (or flocking) behavior of certain species of organisms (such as birds and fish). Detailed descriptions of the PSO algorithm can be found in Maurice (2006), Konstantinos and Michael (2010), Aleksandar (2009), Acar (2012), and Said and Ahmed (2008). The initialization parameters for the PSO algorithm are shown in Table 6.2 and the execution scheme is provided in Figure 6.3.
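The core PSO update (inertia plus cognitive and social pulls) can be sketched as below. The inertia weight and acceleration coefficients are common textbook values, not necessarily the settings of Table 6.2:

```python
import random

# Minimal global-best PSO sketch. ASSUMPTION: w, c1, c2 are generic
# textbook values, not the chapter's Table 6.2 settings.

def pso_maximize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]                 # personal best positions
    gbest = list(max(pbest, key=lambda p: f(*p)))  # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d, (lo, hi) in enumerate(bounds):
                r1, r2 = rng.random(), rng.random()
                # Velocity: inertia + cognitive (pbest) + social (gbest) terms.
                v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            if f(*x[i]) > f(*pbest[i]):
                pbest[i] = list(x[i])
                if f(*pbest[i]) > f(*gbest):
                    gbest = list(pbest[i])
    return gbest

# Usage example on a concave test function over the pretreatment bounds.
gbest = pso_maximize(lambda t, p, m: -(t - 180) ** 2 - (p - 0.8) ** 2 - (m - 12) ** 2,
                     [(170, 195), (0.5, 1.0), (10, 20)])
```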

TABLE 6.2 PSO Settings

Inspired by the law of gravity and mass interactions, the GSA was first developed by Rashedi, Nezamabadi-pour, and Saryazdi (2009). This algorithm uses the Newtonian gravitational laws, where the search agents are associated with massive objects. The gravitational forces influence the motion of these objects, and lighter objects gravitate toward heavier objects during these interactions. The gravitational force hence acts as the communication mechanism for the objects (Cui & Gao, 2011), similar to the "group component" for the particles in the PSO algorithm. The fitness landscape is closely linked to the positions of the masses. During execution, the masses gravitate to their fittest positions, reaching the optima. The initial parameters for the GSA are shown in Table 6.3 and the execution scheme is provided in Figure 6.4.
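A simplified GSA iteration can be sketched as below: fitter agents are assigned larger masses, every agent accelerates toward the others in proportion to their masses, and the gravitational constant decays over time. This is a reduced form of the Rashedi et al. (2009) scheme (no Kbest elitism), and G0 and alpha are typical literature values, not the Table 6.3 settings:

```python
import math
import random

# Simplified GSA sketch for maximization. ASSUMPTIONS: all agents exert
# force (no Kbest set); g0 and alpha are typical values, not Table 6.3.

def gsa_maximize(f, bounds, n_agents=20, iters=100, g0=100.0, alpha=20.0, seed=2):
    rng = random.Random(seed)
    dim = len(bounds)
    x = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    v = [[0.0] * dim for _ in range(n_agents)]
    best_x = list(max(x, key=lambda p: f(*p)))
    for t in range(iters):
        g = g0 * math.exp(-alpha * t / iters)  # decaying gravitational constant
        fits = [f(*p) for p in x]
        best, worst = max(fits), min(fits)
        span = (best - worst) or 1.0
        raw = [(fi - worst) / span for fi in fits]     # raw masses in [0, 1]
        total = sum(raw) or 1.0
        mass = [m / total for m in raw]                # normalized masses
        for i in range(n_agents):
            acc = [0.0] * dim
            for j in range(n_agents):
                if i == j:
                    continue
                dist = math.dist(x[i], x[j]) + 1e-9
                for d in range(dim):
                    # Randomly weighted attraction toward agent j (force / m_i).
                    acc[d] += rng.random() * g * mass[j] * (x[j][d] - x[i][d]) / dist
            for d, (lo, hi) in enumerate(bounds):
                v[i][d] = rng.random() * v[i][d] + acc[d]
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
        cand = max(x, key=lambda p: f(*p))
        if f(*cand) > f(*best_x):
            best_x = list(cand)
    return best_x

# Usage example on a concave test function over the pretreatment bounds.
best = gsa_maximize(lambda t, p, m: -(t - 180) ** 2 - (p - 0.8) ** 2 - (m - 12) ** 2,
                    [(170, 195), (0.5, 1.0), (10, 20)])
```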

If, during the iteration process, the maximum fitness has been reached such that the fitness criterion is satisfied (no further improvement occurs in the objective function, no constraints are violated, and all the decision variables are nonnegative), the program is halted and the results are printed.

TABLE 6.3 GSA Parameter Settings

The pivotal idea of this technique is the incorporation of perturbative methods into evolutionary algorithms (Storn & Price, 1995). DE initializes a population of at least four real-coded vectors of some size N (individuals). The initial population of individual vectors is randomly generated within appropriate search ranges, and one principal parent and three auxiliary parents are randomly selected from the population. Each individual in the population has a chance to become a principal parent in one generation or another. The principal parent mates with the auxiliary parents through "differential mutation" to generate a mutated vector. Using knock-out competition, the fitness evaluation scheme for the DE algorithm is carried out. The parameter settings for the DE algorithm are given in Table 6.4 and the DE execution scheme is shown in Figure 6.5.
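The principal-parent/auxiliary-parent scheme described above corresponds to the classic DE/rand/1/bin variant, sketched below. F and CR are common defaults, not necessarily the settings of Table 6.4:

```python
import random

# DE/rand/1/bin sketch: each vector in turn is the principal parent; three
# auxiliary parents a, b, c drive the differential mutation; binomial
# crossover builds the trial; knock-out (greedy) selection keeps the fitter.
# ASSUMPTION: F and CR are generic defaults, not the Table 6.4 settings.

def de_maximize(f, bounds, pop_size=20, generations=100, F=0.5, CR=0.9, seed=3):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):  # i-th vector acts as the principal parent
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            jr = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or d == jr:
                    val = a[d] + F * (b[d] - c[d])  # differential mutation
                else:
                    val = pop[i][d]
                trial.append(min(max(val, lo), hi))
            if f(*trial) >= f(*pop[i]):  # knock-out competition
                pop[i] = trial
    return max(pop, key=lambda p: f(*p))

# Usage example on a concave test function over the pretreatment bounds.
best = de_maximize(lambda t, p, m: -(t - 180) ** 2 - (p - 0.8) ** 2 - (m - 12) ** 2,
                   [(170, 195), (0.5, 1.0), (10, 20)])
```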

TABLE 6.4 Differential Evolution (DE) Parameter Settings

The diversification mechanism used in this work is based on the concept of chaos theory (Lorenz, 1963). The idea here is to use chaos-based methods to increase the diversity of the population of solutions produced by the algorithm. In mathematical physics, chaotic systems are dynamical systems that are deterministic, behave in an irregular manner, are highly sensitive to initial conditions, and are impossible (or difficult) to predict in the long term (Flake, 1998). In this work, a one-dimensional chaotic map was used to increase the diversity of the population of solutions by embedding the map into the random number generation component of the algorithm. The one-dimensional chaotic map is represented as follows:

ψn+1 = f(ψn) (6.6)

The most widely studied one-dimensional map is the logistic map (Jakobson, 1981), which is as follows:

f(ψn) = rn ψn (1 − ψn) (6.7)

rn+1 = rn + 0.01 (6.8)

where ψn ∈ [0, 1] and r ∈ [0, 5]. In this mapping, as in all chaotic maps, the dynamics of the system vary for different sets of initial conditions (ψ0 and r0).
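The sensitivity to initial conditions is easy to demonstrate numerically. The sketch below iterates the logistic map with a fixed growth parameter in the chaotic regime (r = 3.9, chosen here for illustration rather than the incremented rn of Eq. 6.8) and compares two orbits that start a billionth apart:

```python
# Logistic map iteration: psi_{n+1} = r * psi_n * (1 - psi_n).
# ASSUMPTION: r is held fixed at 3.9 (a chaotic value) for illustration,
# rather than incremented by 0.01 per step as in Eq. (6.8).

def logistic_orbit(psi0, r=3.9, steps=50):
    psi = psi0
    orbit = [psi]
    for _ in range(steps):
        psi = r * psi * (1.0 - psi)
        orbit.append(psi)
    return orbit

# Two orbits starting 1e-9 apart stay close at first, then diverge widely.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)
```

This divergence is exactly what makes the map useful as a diversification source: tiny differences in the seed value produce effectively uncorrelated sequences.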

Augmentations were performed on the DE algorithm to enhance its diversification capabilities through the addition of a chaotic component. First, the population of vectors, xi^G, is generated. The subsequent steps are similar to the regular DE algorithm: one principal parent, xi^p, and three auxiliary parents, xi^a, are randomly selected. Differential mutation is then performed and the mutated vector, Vi, is generated. Vi is then recombined with xi^p to generate the child trial vector, xi^child. The obtained xi^child is used as the input to the chaotic logistic map. This chaotic logistic mapping is presented as follows:

Ni(t) = xi^child(t) (6.9)

Ri(t) = λ Ni(t) (6.10)

Ni(t + 1) = Ri(t) Ni(t)[1 − Ni(t)] (6.11)

Ri(t + 1) = Ri(t) + λ′ (6.12)

where Ni(t) and Ri(t) are variables in the logistic chaotic map, and λ and λ′ are relaxation constants specified by the user. The logistic mapping is continued until a specified number of iterations is reached. The final value at the maximum number of iterations, Ni(tmax), is incorporated into the child trial vector, xi^child. Hence, the child trial vector undergoes another round of mutation by the chaotic map. Next, the "knock-out" competition for next-generation survival selection is performed, and the fitness function for the child trial vector is evaluated. Thus, another variant of the DE algorithm, with enhanced diversification capabilities, is developed; in this work, this algorithm is called the CDE. The parameter settings specified for the CDE algorithm are shown in Table 6.5 and the execution scheme is given in Figure 6.6.

TABLE 6.5 Parameter Settings for the CDE Algorithm

In this work, the performance metrics comprise convergence, diversity, and the HVI, which are discussed briefly in the following sections. The convergence metric used in this work was developed in Deb and Jain (2002). This metric gauges the convergence of a solution set with respect to a reference set; low metric values indicate high convergence among the solution vectors. The diversity metric used in this work is the sigma diversity metric (SDM) (Mostaghim & Teich, 2003). The SDM evaluates the locations of the solution vectors in the objective space relative to the sigma vectors; high SDM values indicate high uniformity and diversity in the distribution of the solution vectors in the objective space. One approach that has been effective in measuring the quality of the solution set that constructs the Pareto frontier, in cases where the true Pareto frontier is unknown, is the HVI. The HVI is the only strictly Pareto-compliant indicator that can be used to measure the quality of solution sets in MO optimization problems. Further details can be found in Beume, Naujoks, and Emmerich (2007) and Zitzler and Thiele (1999).
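Computing the HVI for three objectives requires dedicated algorithms (see Beume, Naujoks, & Emmerich, 2007), but the bi-objective case is simple enough to sketch and conveys the idea: the indicator is the area (here, for maximization) dominated by the solution set and bounded by a reference point:

```python
# Bi-objective hypervolume sketch for MAXIMIZATION problems: the area
# dominated by the set, bounded below-left by a reference point that
# every solution dominates. Higher values = more dominant frontier.

def hypervolume_2d(points, ref):
    """points: list of (f1, f2) pairs; ref: reference point dominated by all."""
    # Keep only non-dominated points, scanning in order of descending f1.
    pts = sorted(points, key=lambda p: (-p[0], -p[1]))
    front = []
    best_f2 = float("-inf")
    for p in pts:
        if p[1] > best_f2:  # not dominated by any point with larger f1
            front.append(p)
            best_f2 = p[1]
    # Sum the rectangular slabs between consecutive frontier points.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (f1 - ref[0]) * (f2 - prev_f2)
        prev_f2 = f2
    return hv

# Usage example: the dominated point (1.5, 1.5) contributes nothing.
hv = hypervolume_2d([(1.0, 4.0), (2.0, 3.0), (3.0, 1.0), (1.5, 1.5)], ref=(0.0, 0.0))
```

Because dominated points add no area, the indicator rewards exactly the combination of convergence and spread discussed in this section.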

First, the solution characteristics in the objective space are examined. This is done by determining the convergence and diversity values of all the algorithms when applied to this problem. The values given by the convergence metric for all the algorithms in this study are shown in Figure 6.7 and Table 6.6.

As mentioned previously, the smaller the value of the convergence metric, the more convergent the solutions are on the Pareto frontier. From Figure 6.7 and Table 6.6, it can be observed that the Pareto frontier produced by the GSA is the most convergent, followed by the DE and the GA, respectively. The least convergent frontier was produced by the PSO algorithm.

The values given by the SDM for all the mentioned algorithms are shown in Figure 6.8 and Table 6.7.

As for the diversity metric, the higher its value, the more diverse the solutions are on the Pareto frontier. From Figure 6.8 and Table 6.7, it can be observed that the Pareto frontier produced by the PSO algorithm is the most diverse, followed by the GSA; the DE and the GA are equivalent in this respect. The values given by the HVI for all the mentioned algorithms are shown in Figure 6.9 and Table 6.8.

TABLE 6.6 Values of the Convergence Metric for Each Algorithm

The HVI functions such that the higher its value, the more dominant the Pareto frontier. From Figure 6.9 and Table 6.8, it can be observed that the Pareto frontier produced by the DE algorithm is the most dominant, followed by the PSO and the GSA, respectively. The least dominant frontier was produced by the GA.

TABLE 6.7 Values of the Sigma Diversity Metric for Each Algorithm

TABLE 6.8 Values of the HVI for Each Algorithm

Due to the application of the NBI method, multiple runs were executed for each algorithm to obtain the individual optima. The β-subproblem was then solved in a single run by each algorithm. To calculate the total computational time, the average computational time of the multiple runs to obtain the individual optima is summed with the computational time taken to solve the β-subproblem. The lowest computational time during execution was achieved by the GSA (5.104 s), followed by the GA (6.391 s) and the DE (13.742 s) algorithms. The PSO algorithm (20.926 s) took the longest computational time during execution. The Pareto frontiers produced by the GA, GSA, DE, and PSO algorithms when applied to this problem are presented graphically in Figure 6.10.

The next step is to analyze the influence of the diversity and convergence of the solutions on the Pareto frontiers (produced by each algorithm) on the degree of dominance. From the HVI values, the two algorithms with the lowest values can be eliminated from further analysis, since the Pareto frontiers produced by these algorithms are the least dominant; in this case, these candidates are the GSA and the GA (see Figure 6.10). It can be observed that the DE algorithm is the best and the PSO algorithm is the second best in terms of Pareto frontier dominance.

In any optimization problem, an optimal solution (or, for MO scenarios, the most dominant Pareto frontier) can only be achieved if the degree of diversity and convergence of the solution sets produced by an algorithm matches the characteristics of the objective space. Keeping this in mind, the degree of diversity and convergence of the solution sets that construct the Pareto frontier influences the degree of dominance measured by the HVI.

In Figure 6.10, it can be observed that the PSO algorithm produces solutions that are less convergent but more diverse than those of the DE algorithm. Enhancement is performed on the algorithm with the highest HVI (in this case, the DE algorithm) to further boost its performance, and the second best algorithm in terms of dominance (the PSO) indicates which enhancement is needed. By examining the solution characteristics of the sets produced by the DE algorithm, appropriate modifications can be identified. The DE algorithm produces a solution set that is more convergent than that of the PSO algorithm, so no modification is needed in this respect. However, the solution set produced by the PSO algorithm is more diverse than that of the DE algorithm; the DE algorithm is thus lacking in solution-set diversity. Therefore, the DE algorithm is modified such that the diversity of its solutions increases, so as to match the diversity properties of the PSO algorithm.

Thus, an additional component is incorporated into the DE algorithm to increase its capability to produce more diverse solutions. By improving this capability, a higher degree of dominance of the Pareto frontier should be achieved. Following this interpretation, the diversity properties of the most dominant algorithm (in this case, the DE) are improved by adding the chaotic component to the original DE, giving the CDE algorithm. The values given by the convergence metric, the diversity metric, and the HVI for the DE and CDE algorithms are shown in Figures 6.11 through 6.13, respectively.

To provide more rigorous analyses, the hybrid PSO-DE technique was also applied to the MO application problem, and some of the results presented below involve this approach. The metric values given by the HVI for the DE, CDE, and PSO-DE algorithms are shown in Table 6.9.

It can be observed from Figure 6.13 and Table 6.9 that the diversity level of the Pareto frontier produced by the CDE algorithm is higher than that of the DE algorithm. In terms of convergence (see Figure 6.11), the Pareto frontier produced by the DE algorithm is better than that of the CDE algorithm. As can be observed in Figure 6.12 and Table 6.9, the Pareto frontier produced by the CDE algorithm is more dominant (by HVI) than that of the DE algorithm, by 62.944%. Hence, it can be inferred that the solution diversity at the Pareto frontier heavily affects the degree of dominance.

The Pareto frontiers produced by the CDE and the PSO-DE algorithms are presented graphically in Figures 6.14 and 6.15, respectively.

The computational times for the execution of the CDE and PSO-DE algorithms are 33.48 and 66.301 s, respectively. Hence, the CDE algorithm is more computationally efficient than the PSO-DE algorithm, which takes the longest computational time of all the algorithms. A comparison of the individual best solution options obtained by the DE, the CDE, and the MINITAB 14 method (used in Banerjee et al., 2009) is shown in Table 6.10.

TABLE 6.9 Values of the HVI, Convergence, and Diversity Metrics for DE, CDE, and PSO-DE Algorithms

TABLE 6.10 Comparison of Individual Best Solution Options

It can be observed in Table 6.10 that the individual solutions produced by the CDE and the DE methods employed in this work improve on the solution produced by the method used in Banerjee et al. (2009) by 507.11% and 334.057%, respectively. In this industrial application, the CDE method outperforms all other methods in terms of Pareto frontier dominance. It can be stated that the CDE method is more computationally efficient and gives better solution quality than the hybrid PSO-DE method. All algorithms applied in this application performed stable computations during the search for the individual optima as well as while solving the β-subproblems (aggregated objective functions). All Pareto-efficient solutions produced by the algorithms developed in this work were feasible and no constraints were violated. The advantage of using the CDE algorithm over the other algorithms in this application is that it produces solutions that are highly dominant in terms of the MO optimization of the process parameters.

In this chapter, six algorithms (GA, PSO, GSA, DE, PSO-DE, and CDE) were employed in the optimization of the MO bioethanol pretreatment problem. The parameter settings as well as the details regarding the algorithms and their execution were described. In addition, three performance metrics, namely, the convergence metric, the diversity metric, and the HVI, were used to gauge the performance of the algorithms. It was found that the CDE outperformed all the other algorithms.