Newton-Raphson Method
Introductory Numerical Methods
S.J. Garrett , in Introduction to Actuarial and Financial Mathematical Methods, 2015
The Newton-Raphson method
As in the previous discussions, we consider a single root, x r, of the function f(x ). The Newton-Raphson method begins with an initial estimate of the root, denoted x 0≠x r, and uses the tangent of f(x) at x 0 to improve on the estimate of the root. In particular, the improvement, denoted x 1, is obtained from determining where the line tangent to f(x) at x 0 crosses the x-axis. This represents a single iteration of the Newton-Raphson method and is illustrated in Figure 13.5(a). The next iteration uses the line tangent to f(x) at x 1 to generate x 2 in exactly the same way. This is shown in Figure 13.5(b). The direction of the tangent line is determined by the sign of the local gradient at each x n and, although the illustration has a positive local gradient, the method works analogously for functions with a negative local gradient.
Figure 13.5. Illustration of the Newton-Raphson method. (a) First iteration leading to x 1 from the tangent line at f(x 0). (b) Second iteration leading to x 2 from the tangent line at f(x 1).
Before we discuss the termination criteria for the Newton-Raphson method, it is important to consider the mathematical formulation of the first iteration of the process. Essentially, we require an expression that gives x 1 from x 0 and the properties of f(x) at x 0. The gradient of the line tangent to f(x) at x 0 is, by definition, f′(x 0), which can be determined from the expression of f(x). We can also form an approximation to this gradient from the fact that the tangent line crosses the points (x 0, f(x 0)) and (x 1, f(x 1)) ≈ (x 1, 0), which of course arises from the definition of x 1. The gradient is then

(0 − f(x 0))/(x 1 − x 0)

which is equated to f′(x 0) and rearranged to give

(13.4)

x 1 = x 0 − f(x 0)/f′(x 0)
That is, we have an expression for the estimate x 1 determined entirely from known properties of the function at x 0. Clearly the expression assumes that f′(x 0) ≠ 0. If instead f′(x 0) = 0, the tangent line of the function at x 0 would be horizontal and would never cross the x-axis; the Newton-Raphson method would then fail to improve on x 0.
Example 13.9
Use the Newton-Raphson method to determine an improvement on the initial estimate of the root in the following cases.
- a. f(x) = e x − 4 from initial estimate x 0 = 1.5.
- b. g(x) = x 3 − 2x − 4 from initial estimate x 0 = 2.5.
- c. f(x) = ln(x) − 2 from initial estimate x 0 = 8.
- d. from initial estimate x 0 = 5.5.
Solution
In each case, we apply Eq. (13.4).
- a. f(x) = e x − 4 and f′(x) = e x , therefore x 1 = 1.5 − (e 1.5 − 4)/e 1.5 ≈ 1.3925.
- b. g(x) = x 3 − 2x − 4 and g′(x) = 3x 2 − 2, therefore x 1 = 2.5 − 6.625/16.75 ≈ 2.1045.
- c. f(x) = ln(x) − 2 and f′(x) = 1/x, therefore x 1 = 8 − 8(ln 8 − 2) ≈ 7.3645.
- d.
-
and , therefore .
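The single-step arithmetic above is easy to check numerically. The following sketch (plain Python, not from the text) applies Eq. (13.4) once to cases (a)–(c); case (c) assumes the function f(x) = ln(x) − 2, consistent with the values tabulated for it in Example 13.10.

```python
import math

def newton_step(f, fprime, x0):
    """One Newton-Raphson iteration: x1 = x0 - f(x0)/f'(x0) (Eq. 13.4)."""
    return x0 - f(x0) / fprime(x0)

# Case (a): f(x) = e^x - 4 from x0 = 1.5
x1_a = newton_step(lambda x: math.exp(x) - 4, math.exp, 1.5)
# Case (b): g(x) = x^3 - 2x - 4 from x0 = 2.5
x1_b = newton_step(lambda x: x**3 - 2*x - 4, lambda x: 3*x**2 - 2, 2.5)
# Case (c): f(x) = ln(x) - 2 from x0 = 8 (function assumed from Example 13.10)
x1_c = newton_step(lambda x: math.log(x) - 2, lambda x: 1/x, 8.0)

print(round(x1_a, 4), round(x1_b, 4), round(x1_c, 4))   # 1.3925 2.1045 7.3645
```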
Equation (13.4) can be generalized to give an expression for the (n + 1)th estimate under the Newton-Raphson method from properties of the nth iteration,
(13.5)

x n+1 = x n − f(x n )/f′(x n )

for n = 0, 1, 2, … and f′(x n ) ≠ 0.
The standard error estimate used in an implementation of the Newton-Raphson method is ϵ n = |x n − x n−1|. This means that an exit criterion is simply that ϵ n < ϵ for some predetermined tolerance, ϵ. That is, we terminate the iterative process when successive approximations become only marginally different.
Example 13.10
Use the Newton-Raphson method with error tolerance ϵ = 0.0001 to find the single real root of f(x) = ln(x) − 2 from the initial estimate x 0 = 8.
Solution
As in Example 13.9(c), f(x) = ln(x) − 2 and f′(x) = 1/x, and so Eq. (13.5) is written as

x n+1 = x n − x n (ln x n − 2)
Table 13.7 shows the results of the first few iterations of this expression starting from x 0 = 8. We see that only three iterations are required to achieve the root to the required error tolerance and x r ≃ 7.38906. The actual root is given by x r =e2 = 7.38906 which agrees with the estimate to the five decimal places reported.
Table 13.7. Iterations of the Newton-Raphson method for Example 13.10
| n | x n | f(x n ) | f′(x n ) | ϵ n |
|---|---|---|---|---|
| 0 | 8.00000 | 0.07944 | 0.12500 | – |
| 1 | 7.36447 | -0.00333 | 0.13579 | 0.63553 |
| 2 | 7.38902 | -0.00001 | 0.13534 | 0.02455 |
| 3 | 7.38906 | 0.00000 | 0.13534 | 0.00004 |
For a given initial estimate x 0 and a required error tolerance ϵ , the algorithm for the Newton-Raphson method can be written as follows.
Algorithm for the Newton-Raphson method
- 1. n = 0
- 2. x n+1 = x n − f(x n )/f′(x n )
- 3. if |x n+1 − x n | ≤ ϵ, accept x r ≃ x n+1 and end; otherwise
- 4. n = n + 1 and go to 2.
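The algorithm translates directly into a short loop. Here is a minimal Python sketch (not from the text); the demonstration function f(x) = ln(x) − 2, starting value and tolerance are those of Example 13.10.

```python
import math

def newton_raphson(f, fprime, x0, eps=1e-4, max_iter=100):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |x_{n+1} - x_n| <= eps."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) <= eps:
            return x_new, n          # root estimate and iterations used
        x = x_new
    raise RuntimeError("tolerance not reached within max_iter iterations")

root, iters = newton_raphson(lambda x: math.log(x) - 2, lambda x: 1/x, 8.0)
print(root, iters)   # root ~ e^2 = 7.38906 after 3 iterations
```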
Although the description of the Newton-Raphson method has been given for functions with a single root, the method can be applied perfectly well to functions with multiple roots. The root on which the method converges is of course determined by the starting value, x 0. As with the interval methods, it is sensible to have a rough idea of the number of roots that a function may possess and the approximate location of each. There are no definitive rules on how an initial estimate should be determined such that the Newton-Raphson method approaches the particular root of interest, and common sense should be applied in practice.
Example 13.11
Use the Newton-Raphson method to determine all real roots of the function f(x) = exp((x − 1)²) − 2. An error tolerance of ϵ = 0.0001 should be used.
Solution
It should be clear that, in this case, f(x) is an even function about x = 1 and has a root either side of this value. We might then guess that two initial values, x 0 = 0 and x 0 = 2, would converge to the two distinct roots. In either case, the iterative formula (Eq. 13.5) is the same and given by

x n+1 = x n − (exp((x n − 1)²) − 2)/(2(x n − 1) exp((x n − 1)²))

The iterations are shown in Table 13.8 where we see rapid convergence to the roots x r ≃ 0.16745 and x r ≃ 1.83255, that is, x r = 1 ∓ √(ln 2).
Table 13.8. Iterations of the Newton-Raphson method for the two roots in Example 13.11
| n | x n | f(x n ) | f′(x n ) | ϵ n |
|---|---|---|---|---|
| 0 | 0.00000 | 0.71828 | -5.43656 | |
| 1 | 0.13212 | 0.12382 | -3.68643 | 0.13212 |
| 2 | 0.16571 | 0.00580 | -3.34685 | 0.03359 |
| 3 | 0.16744 | 0.00001 | -3.33026 | 0.00173 |
| 4 | 0.16745 | 0.00000 | -3.33022 | 0.00000 |
| 0 | 2.00000 | 0.71828 | 5.43656 | |
| 1 | 1.86788 | 0.12382 | 3.68643 | 0.13212 |
| 2 | 1.83429 | 0.00580 | 3.34685 | 0.03359 |
| 3 | 1.83256 | 0.00001 | 3.33026 | 0.00173 |
| 4 | 1.83255 | 0.00000 | 3.33022 | 0.00000 |
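The two runs in Table 13.8 can be reproduced in a few lines of Python. This is a sketch, not from the text; it uses the function f(x) = exp((x − 1)²) − 2, which reproduces every entry in the table, together with the table's two starting values.

```python
import math

def newton_raphson(f, fprime, x0, eps=1e-4):
    """Newton-Raphson iteration until successive estimates differ by <= eps."""
    x = x0
    while True:
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new

f  = lambda x: math.exp((x - 1)**2) - 2
fp = lambda x: 2 * (x - 1) * math.exp((x - 1)**2)

r1 = newton_raphson(f, fp, 0.0)   # converges to 1 - sqrt(ln 2) ~ 0.16745
r2 = newton_raphson(f, fp, 2.0)   # converges to 1 + sqrt(ln 2) ~ 1.83255
print(round(r1, 5), round(r2, 5))
```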
Example 13.12
Repeat Example 13.8 using the Newton-Raphson method with error tolerance ϵ = 0.0001.
Solution
We are required to find the four roots of f(x) = x sin(x) − 1.2 over x ∈ [−5,5]. As in the solution to Example 13.8, we exploit the fact that the function is even and consider only x ∈ [0,5]. We note that f′(x) = sin(x) + x cos(x) and Eq. (13.5) becomes

x n+1 = x n − (x n sin(x n ) − 1.2)/(sin(x n ) + x n cos(x n ))

The solution to Example 13.8 suggests that initial values x 0 = 1 and x 0 = 2.5 are sensible choices, although other nearby starting values would serve equally well. The iterations are shown in Table 13.9 and we find four roots over the full interval [−5,5], x r ≃ ±1.26027 and x r ≃ ±2.67672.
Table 13.9. Iterations of the Newton-Raphson method for the two roots in [0,5] in Example 13.12
| n | x n | f(x n ) | f′(x n ) | ϵ n |
|---|---|---|---|---|
| 0 | 1.00000 | -0.35853 | 1.38177 | |
| 1 | 1.25947 | -0.00107 | 1.33773 | 0.25947 |
| 2 | 1.26027 | 0.00000 | 1.33726 | 0.00080 |
| 3 | 1.26027 | 0.00000 | 1.33726 | 0.00000 |
| 0 | 2.50000 | 0.29618 | -1.40439 | |
| 1 | 2.71090 | -0.06819 | -2.04582 | 0.21090 |
| 2 | 2.67756 | -0.00165 | -1.94688 | 0.03333 |
| 3 | 2.67672 | 0.00000 | -1.94435 | 0.00085 |
| 4 | 2.67672 | 0.00000 | -1.94435 | 0.00000 |
Although the definition of the error tolerance is different between the regula falsi and Newton-Raphson methods, it should be clear from the second column of Table 13.9, for example, that the Newton-Raphson method can lead to more rapid convergence. However, the Newton-Raphson method is not without its drawbacks.
The most immediate problem with the Newton-Raphson method is that it requires an explicit expression for the derivative of the function, which may not be possible to determine in practice. A strategy for avoiding this requirement will lead us to the secant method in the next section. The Newton-Raphson method (and indeed the secant method) suffers from further disadvantages concerning its use with ill-behaved functions. In terms of these gradient methods, a function is said to be ill-behaved if it has repeated roots or a very small gradient at a particular x n . In both cases, the convergence to the root may be very slow, or indeed impossible. This is of course due to the effect of the small gradient at x n on the calculation of x n+1. In particular, Eq. (13.5) shows that the correction |x n+1 − x n | = |f(x n )/f′(x n )| becomes arbitrarily large as f′(x n ) → 0.
We have previously noted that interval methods fail to cope with repeated roots, and so, even though gradient methods may be slow to converge to such roots, they are still superior in this respect.
Example 13.13
Use the Newton-Raphson method to find the repeated root of f(x) = (x−1)3 with an error tolerance ϵ = 0.0001.
Solution
The Newton-Raphson method requires iteration of

x n+1 = x n − (x n − 1) 3 /(3(x n − 1) 2 ) = x n − (x n − 1)/3
The process beginning at x 0 = 1.5 takes 20 iterations to converge. The reader is invited to confirm this using Excel, for example.
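The slow, linear convergence near the repeated root can be confirmed in a few lines of Python rather than Excel. This sketch (not from the text) counts the iterations needed from x 0 = 1.5:

```python
def newton_count(f, fprime, x0, eps=1e-4):
    """Return the root estimate and how many iterations the tolerance needed."""
    x, count = x0, 0
    while True:
        x_new = x - f(x) / fprime(x)
        count += 1
        if abs(x_new - x) <= eps:
            return x_new, count
        x = x_new

# f(x) = (x-1)^3 has a triple root at x = 1; each Newton step only removes
# a factor 2/3 of the error, so convergence is linear, not quadratic.
root, count = newton_count(lambda x: (x - 1)**3,
                           lambda x: 3 * (x - 1)**2, 1.5)
print(root, count)   # 20 iterations for a tolerance of 0.0001
```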
Example 13.14
Explore the use of the Newton-Raphson method to determine all real roots of the function g(x) = (x−2)2 − 1. Use an error tolerance of ϵ = 0.0002.
Solution
The function clearly has two real roots at x r = 1 and x r = 3 but it also has a zero gradient at x = 2. The Newton-Raphson method requires iteration of

x n+1 = x n − ((x n − 2) 2 − 1)/(2(x n − 2))
While the Newton-Raphson method converges quickly to the relevant root for a reasonable initial estimate close to either root, it can be thrown off for x 0 ∈ (1,3) and will not converge if x n = 2 for some n. An example of where the convergence is relatively slow is given in Table 13.10. Note the large jumps in successive values when x n is close to 2.
Table 13.10. Slow convergence of the Newton-Raphson method close to a turning point at x = 2
| n | x n | g(x n ) | g′(x n ) | ϵ n |
|---|---|---|---|---|
| 0 | 2.10000 | −0.99000 | 0.20000 | |
| 1 | 7.05000 | 24.50250 | 10.10000 | 4.95000 |
| 2 | 4.62401 | 5.88543 | 5.24802 | 2.42599 |
| 3 | 3.50255 | 1.25767 | 3.00511 | 1.12146 |
| 4 | 3.08404 | 0.17515 | 2.16809 | 0.41851 |
| 5 | 3.00326 | 0.00653 | 2.00652 | 0.08079 |
| 6 | 3.00001 | 0.00001 | 2.00001 | 0.00325 |
| 7 | 3.00000 | 0.00000 | 2.00000 | 0.00001 |
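The run in Table 13.10 is easy to replicate. The sketch below (plain Python, not from the text) starts at x 0 = 2.1, just beyond the turning point, and also shows the outright failure at x 0 = 2, where the gradient is zero:

```python
def newton_count(g, gprime, x0, eps=2e-4, max_iter=50):
    """Newton-Raphson with an iteration cap; returns (estimate, iterations)."""
    x, count = x0, 0
    while count < max_iter:
        x_new = x - g(x) / gprime(x)
        count += 1
        if abs(x_new - x) <= eps:
            return x_new, count
        x = x_new
    return x, count

g  = lambda x: (x - 2)**2 - 1
gp = lambda x: 2 * (x - 2)

root, count = newton_count(g, gp, 2.1)   # thrown out to x1 = 7.05 first
print(round(root, 5), count)             # eventually settles on x_r = 3

try:
    newton_count(g, gp, 2.0)             # g'(2) = 0: division by zero
except ZeroDivisionError:
    print("method fails at the turning point x = 2")
```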
URL: https://www.sciencedirect.com/science/article/pii/B9780128001561000133
Matrix-Based Methods
Esam M.A. Hussein , in Computed Radiation Imaging, 2011
10.7.5 Levenberg-Marquardt
This method combines the steepest descent method with the Newton-Raphson method by modifying the latter scheme, Eq. (10.113), to (Rao, 1996):

(10.117)

[H + λI] Δx (k) = −∇f(x (k) )

where the Lagrange multiplier λ ensures that [H + λI] is positive definite, even if the Hessian H is not. When λ is large, the effect of H in Eq. (10.117) diminishes and the iterative process is driven toward the steepest descent, while for small λ the Hessian terms influence the iterative scheme more and it becomes more like the Newton-Raphson method. Since the latter is better suited when the iterates are close to the solution, iterations should start with a large λ value, gradually decreasing λ as the iterative process progresses, eventually reaching zero. In practice this is determined by defining the so-called trust-region radius: the length of the proposed step is compared against this radius, and as long as the step would exceed it, a positive value of λ is applied; once the step falls within the trust region, λ = 0 is imposed (Moré, 1977). Notice the resemblance between the term λI in Eq. (10.117) and the corresponding term in the standard Tikhonov regularization, Eq. (10.17), indicating the regularizing effect of the Levenberg-Marquardt scheme, which adds to its effectiveness.
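The damping behavior can be illustrated concretely. The sketch below is not from the text: it uses an invented two-parameter quadratic objective standing in for the imaging problem, and solves the damped system (H + λI)Δx = −∇f for a zero and a large λ:

```python
def lm_step(H, g, lam):
    """Solve (H + lam*I) dx = -g for a 2x2 system via Cramer's rule."""
    a, b = H[0][0] + lam, H[0][1]
    c, d = H[1][0], H[1][1] + lam
    det = a * d - b * c
    dx0 = (-g[0] * d - b * -g[1]) / det
    dx1 = (a * -g[1] - -g[0] * c) / det
    return dx0, dx1

# Quadratic f(x, y) = x^2 + 10*y^2: gradient at (1, 1) is (2, 20),
# Hessian is [[2, 0], [0, 20]].
H = [[2.0, 0.0], [0.0, 20.0]]
g = [2.0, 20.0]

newton = lm_step(H, g, 0.0)      # lam = 0: pure Newton step, lands at (0, 0)
damped = lm_step(H, g, 100.0)    # large lam: short step, near -gradient direction
print(newton, damped)
```

With λ = 0 the step is the full Newton step (−1, −1); with λ = 100 the step shrinks and rotates toward the negative gradient, which is exactly the steepest-descent limit described above.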
URL: https://www.sciencedirect.com/science/article/pii/B9780123877772000100
FINITE ELEMENT METHODS
S.S. Rao , in Encyclopedia of Vibration, 2001
Iterative Method
The iterative method is similar to the Newton and Newton–Raphson methods used for the solution of nonlinear equations. In this method, the total load is applied to the structure in each iteration and the displacement is computed using an approximate but constant value of stiffness. Since an approximate value of stiffness is used in each iteration, equilibrium conditions may not be satisfied. Hence at the end of each iteration, the part of the total load that is not balanced is computed and used in the next iteration to calculate the additional increment of the displacement. The iterative procedure is continued until the equilibrium equations are satisfied to some tolerable degree. The equations corresponding to the jth iteration can be derived as follows. Let P be the total load and P j−1 (eq) be the load equilibrated at the end of the (j − 1)th iteration. Then the unbalanced load to be applied in the jth iteration (P j ) is given by:

[73]

P j = P − P j−1 (eq)
The incremental displacement in the jth iteration (ΔQ j ) can be found by solving the linear equations:
[74]

K j−1 ΔQ j = P j
so that the total displacement at the end of jth iteration can be expressed as:
[75]

Q j = Q j−1 + ΔQ j
The stiffness to be used in the jth iteration, namely K j−1, can be computed by using the tangent modulus as in the incremental method. Thus K j−1 will be the tangent stiffness (slope of the load–deflection curve) computed at the point (P j−1, Q j−1). The equilibrating load P j−1 (eq) is nothing but the load necessary to maintain the displacement Q j−1 of the actual structure. It can be calculated by first finding the strains ε j−1 as:
[76]

ε j−1 = B Q j−1

where B is the strain–displacement matrix, and then finding the load vector P j−1 (eq) from the relation σ j−1 = D ε j−1 for the stresses, or:

[77]

P j−1 (eq) = ∫ B T σ j−1 dV
The iterative procedure is shown graphically in Figure 4.
Figure 4. Iterative method.
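The unbalanced-load cycle of Eqs [73]–[75] can be sketched for a single degree of freedom. Everything below is an invented toy example, not from the text: a softening load–deflection law P(q) = 100q − 10q², a total load of 90, and a simple tolerance on the unbalanced load.

```python
def internal_force(q):
    """Toy softening load-deflection law P(q) = 100q - 10q^2 (invented)."""
    return 100.0 * q - 10.0 * q**2

def tangent_stiffness(q):
    """Tangent stiffness K = dP/dq, the slope of the load-deflection curve."""
    return 100.0 - 20.0 * q

P_total, q, tol = 90.0, 0.0, 1e-8
for j in range(1, 100):
    P_eq = internal_force(q)         # load equilibrated so far
    P_j = P_total - P_eq             # unbalanced load, Eq. [73]
    if abs(P_j) <= tol:
        break
    dq = P_j / tangent_stiffness(q)  # incremental displacement, Eq. [74]
    q = q + dq                       # total displacement, Eq. [75]

print(round(q, 6))   # settles where 100q - 10q^2 = 90, i.e. q = 1
```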
URL: https://www.sciencedirect.com/science/article/pii/B0122270851000047
Data Gathering, Analysis and Protection of Privacy Through Randomized Response Techniques: Qualitative and Quantitative Human Traits
M.J.L.F. Cruyff , ... L.E. Frank , in Handbook of Statistics, 2016
3.2 Estimation
The logistic regression model for RR data is estimated by setting up the likelihood and maximizing over the parameters. Let the observed RR data for individual i be (n 1i *, n 2i *), with (n 1i *, n 2i *) = (1,0) if individual i has answer 1, and (n 1i *, n 2i *) = (0,1) if individual i has answer 2. Then the log-likelihood is

(9)

ℓ(β 0 , β ) = Σ i (n 1i * log π 1i + n 2i * log π 2i )
For further model development it is useful to write this in matrix terms, similarly to (6). Let u be a unit vector of length 2 × 1, and collect (n 1i *, n 2i *) and the corresponding response probabilities in vectors n i * and π i , each of length 2 × 1. Then Eq. (9) can be written as

(10)

ℓ(β 0 , β ) = Σ i n i *′ log π i

where the elements of π i are defined in (7). Thus maximizing ℓ(β 0 , β ) over the parameters (β 0 , β ) yields the maximum likelihood estimates. The incorporation of person weights w i , alluded to in Section 3.1, is simply accomplished by reformulating the log-likelihood as

(11)

ℓ(β 0 , β ) = Σ i w i n i *′ log π i
Maddala (1983) provides first- and second-order derivatives of the log-likelihood and suggests using the Newton–Raphson method to maximize the log-likelihood (see also Scheers and Dayton, 1988, and Van der Heijden and van Gils, 1996). Van den Hout et al. (2007) show that the model is a member of the family of generalized linear models and propose fitting it with the iteratively reweighted least-squares algorithm, which is a very stable fitting procedure.
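To make the Newton–Raphson step concrete in this setting, here is a deliberately stripped-down sketch, not the authors' model: an intercept-only Bernoulli log-likelihood with person weights w i , maximized with the scalar update β ← β − ℓ′(β)/ℓ″(β).

```python
import math

def sigmoid(b):
    return 1.0 / (1.0 + math.exp(-b))

def newton_raphson_mle(y, w, b=0.0, tol=1e-10):
    """Maximize sum_i w_i [y_i log p + (1 - y_i) log(1 - p)], p = sigmoid(b)."""
    while True:
        p = sigmoid(b)
        score = sum(wi * (yi - p) for yi, wi in zip(y, w))   # first derivative
        info = -sum(wi * p * (1 - p) for wi in w)            # second derivative
        b_new = b - score / info
        if abs(b_new - b) < tol:
            return b_new
        b = b_new

# Invented weighted data: the MLE of p is the weighted mean of y, here 3/5.
y = [1, 0, 1, 1]
w = [1.0, 2.0, 1.0, 1.0]
b_hat = newton_raphson_mle(y, w)
print(round(sigmoid(b_hat), 6))   # 0.6
```

Because this log-likelihood is concave in β, the Newton–Raphson iteration converges quickly from β = 0; the full RR model replaces the scalar score and information with their vector and matrix analogues.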
Example
As an illustration, we report an example taken from Lensvelt-Mulders et al. (2006). The dependent variable is the work item "In the last 12 months have you taken on a small job alone or together with your friends that you got paid for without informing the social welfare agency?". As an explanatory variable we choose "I think it is more beneficial to me not to follow the rules connected to my disability insurance benefit", abbreviated as "benefit", that is measured on a five-point scale and has mean 3.67 and standard deviation 0.777. We note that many more explanatory variables have been measured in this survey, motivated from a rational choice framework, and refer to Lensvelt-Mulders et al. (2006) and Elffers et al. (2003) for details. The logistic regression model logit(noncompliance) = constant + b * benefit has estimates 0.765 for the constant and 0.751 for b. In order to study the impact of these estimates, we compare the estimated probability of noncompliance for the mean value of benefit (ie, 3.67), and the mean plus or minus one standard deviation (ie, 3.67 + 0.78 = 4.45 and 3.67 − 0.78 = 2.89). For the mean value of benefit, the estimated probability of noncompliance is 12%, for 4.45 the estimated probability is 20% and for 2.89 the estimated probability is 7%. This shows that benefit has a strong relation with the decision not to comply with the above disability insurance benefit regulation: the more people perceive their benefit if they do not comply, the more they do not comply with this work regulation.
URL: https://www.sciencedirect.com/science/article/pii/S0169716116300165
Numerical Methods
Xin-She Yang , in Engineering Mathematics with Examples and Applications, 2017
20.3 Newton-Raphson Method
Newton's method is a widely used classical method for finding a solution of a nonlinear univariate equation f(x) = 0 on an interval [a, b]. It is also referred to as the Newton-Raphson method. At any given point x n , shown in Fig. 20.2, we can approximate the function by a Taylor series
Figure 20.2. Newton's method of approximating the root x ⁎ by x n + 1 from x n .
(20.5)

f(x n+1 ) = f(x n + Δx) ≈ f(x n ) + f′(x n )Δx,

where

(20.6)

Δx = x n+1 − x n ,

which leads to

(20.7)

x n+1 ≈ x n + (f(x n+1 ) − f(x n ))/f′(x n ),

or

(20.8)

x n+1 − x n ≈ (f(x n+1 ) − f(x n ))/f′(x n ).

Since we try to find an approximation to the true root x ⁎ with f(x ⁎ ) = 0, we can use the approximation f(x n+1 ) ≈ 0 in the above expression. Thus we have the standard Newton iterative formula

(20.9)

x n+1 = x n − f(x n )/f′(x n ).
The iteration procedure starts from an initial guess value and continues until a predefined criterion is met. A good initial guess will need fewer steps; however, if there is no obvious good starting point, you can start at any point on the interval [a, b]. But if the initial value is too far from the true zero, the iteration process may fail, so it is a good idea to limit the number of iterations.
Example 20.3
To find the root of
we use the Newton-Raphson method starting from an initial guess. We know that
and thus the iteration formula becomes
Since , we have
We can see that the estimate after only three iterations is very close (to the 6th decimal place) to the true root, while the next iterate is accurate to the 10th decimal place.
We have seen that the Newton-Raphson method is very efficient, and that is why it is so widely used. Using this method, we can solve almost all root-finding problems, though care should be taken when dealing with multiple roots. Obviously, this method is not applicable to carrying out integration.
URL: https://www.sciencedirect.com/science/article/pii/B9780128097304000276
Advances in Photovoltaics: Part 2
Karsten Bothe , David Hinken , in Semiconductors and Semimetals, 2013
7.1.1 Technical implementation
To carry out the calculations as indicated in Eq. (5.79), all parameters which are available as mappings have to be matched by means of scaling, rotation and translation. Note that A loc is limited by the measurement technique having the lowest resolution.
Since the local two-diode model (see Eq. 5.56) is given implicitly for V appl , we apply the Newton–Raphson method. After only a few iterations, a value for J extr,i follows with very high precision. As indicated in Eq. (5.79), we add up all local currents to the global current and thus obtain one IV data pair. IV data pairs with a typical resolution as used for standard IV measurements have to be calculated to obtain a complete IV characteristic.
The calculated light-IV characteristic from the LIA approach has to be compared to the experimentally measured light-IV characteristic. We take good agreement of the model data with the experimental data as an indication of the physical validity of the parameters used and the applicability of the model of independent diodes.
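The inner Newton–Raphson solve for a single operating point can be sketched as follows. All parameter values below (photocurrent, saturation current, series resistance, thermal voltage) are invented placeholders for a one-diode law, not the chapter's two-diode model; the code only illustrates the implicit-equation loop.

```python
import math

# Invented one-diode parameters (the chapter uses a local two-diode model).
J_PH, J_0, R_S, V_T = 0.035, 1e-9, 0.002, 0.0257

def solve_current(V, J=0.0, tol=1e-12, max_iter=100):
    """Solve g(J) = J_ph - J_0*(exp((V + J*R_s)/V_T) - 1) - J = 0 for J."""
    for _ in range(max_iter):
        e = math.exp((V + J * R_S) / V_T)
        g = J_PH - J_0 * (e - 1.0) - J
        dg = -J_0 * (R_S / V_T) * e - 1.0   # dg/dJ
        J_new = J - g / dg
        if abs(J_new - J) < tol:
            return J_new
        J = J_new
    return J

J = solve_current(0.40)
print(J)   # extracted current density at V = 0.40 V
```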
URL: https://www.sciencedirect.com/science/article/pii/B9780123813435000057
Elements of Mathematical Optimization
André A. Keller , in Mathematical Optimization Terminology, 2018
1.5 Design and Choice of an Algorithm
Yang (2014, pp. 23–44) developed an analysis of optimization algorithms. An algorithm is viewed as an iterative process. Many optimization algorithms exist in the literature. For a given application, the question arises of how to select the best algorithm: one that provides accurate results at a reduced computing cost. This section contains some elements of this presentation.
1.5.1 Design of an Algorithm
The Newton method for nonlinear programming problems seeks to attain an optimum from a starting point x 0 . In the univariate case, for which f(x) is optimized, the objective is to converge to a stationary point where the derivative is zero. This iterative method generates a sequence of iterations of the form

x k+1 = x k − f′(x k )/f″(x k ).

In the multivariate case we have

x (k+1) = x (k) − H −1 (x (k) ) ∇f(x (k) ),

where H denotes the Hessian matrix of f.
The algorithm of the Newton method is illustrated by a pseudo-code in Table 1.1.
Table 1.1. Newton method
| Algorithm 1 | |
|---|---|
| 1 | Set k = 0, x 0 \⁎initial step⁎\ and [l, u] \⁎initial interval⁎\ |
| 2 | while (convergence criterion not satisfied) |
| 3 | Calculate x k+1 = x k − f′(x k )/f″(x k ) |
| 4 | \⁎keep iterates in the interval⁎\ If x k+1 < l then x k+1 = l; if x k+1 > u then x k+1 = u |
| 5 | If abs(x k+1 − x k ) < ϵ then STOP |
| 6 | \⁎next iteration⁎\ k = k + 1 and Goto 3 |
Solving the nonlinear system of KKT (Karush-Kuhn-Tucker) necessary conditions for an optimization problem with n design variables and m constraints gives a system of nonlinear equations, written compactly as F(x) = 0. For solving this nonlinear system, the Newton-Raphson method assumes that the iterate x (k) at iteration k is known and a change Δx (k) is calculated. Linearizing F by using the Taylor expansion, we have to solve

∇F(x (k) ) T Δx (k) = −F(x (k) )
The Newton-Raphson iteration procedure is continued until a stopping criterion is satisfied (see Arora, 2012, pp. 554–557).
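The univariate stationary-point iteration described at the start of this subsection can be sketched in a few lines; the objective f(x) = x⁴ − 3x² is an invented example, not from the text.

```python
def newton_optimize(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """Find a stationary point via Newton's method on f'(x) = 0:
    x_{k+1} = x_k - f'(x_k)/f''(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

# Invented objective f(x) = x^4 - 3x^2: minima at +/- sqrt(1.5), maximum at 0.
x_star = newton_optimize(lambda x: 4*x**3 - 6*x,
                         lambda x: 12*x**2 - 6, 1.5)
print(x_star)   # converges to sqrt(1.5) ~ 1.2247449
```

Note that the iteration only finds a stationary point; whether it is a minimum must be checked from the sign of f″, which is exactly why damped or safeguarded variants such as the bracketed pseudo-code in Table 1.1 are used in practice.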
1.5.2 Choice of an Algorithm
In 1997, the study of No Free Lunch (NFL) theorems by Wolpert and Macready (1997) was a significant step in the development of better algorithms. Indeed, the theorems proved that no algorithm is universally best for all applications. Thus, the most efficient algorithm should be sought for a given class of problems.
1.5.3 Basic Cycle of an Evolutionary Algorithm
The basic cycle is shown in Figure 1.2. The initial step consists of a population in which individuals are created at random. In the evaluation phase of the basic cycle, we evaluate all the individuals by using the objective functions of the programming problem. Next, fitness values can be assigned to individuals on this basis. Then, the fittest individuals can be selected for reproduction. Thereafter, new individuals are created by using genetic operators, such as with crossover and mutation. Closing the basic cycle, the new population including the selected individuals and offspring is transferred to the first step for evaluation, and a new cycle goes on.
Figure 1.2. Basic cycle of an evolutionary algorithm.
Reprint of Figure 1.1 from Keller, A. A. (2017). Multi-objective optimization in theory and practice. II. Evolutionary algorithms. Bentham eBooks.
URL: https://www.sciencedirect.com/science/article/pii/B9780128051665000010
The General Mixed Model
Barry Kurt Moser , in Linear Models, 1996
10.3 RANDOM PORTION ANALYSIS: RESTRICTED MAXIMUM LIKELIHOOD METHOD
The maximum likelihood estimators (MLEs) of the variance components are the values of σ2 1,…, σ2 m that maximize the function
where the maximization is performed simultaneously with respect to the k + m terms μ, σ2 1,…, σ2 m . By Theorem 6.3.1, for complete, balanced designs, the MLE of μ equals the ordinary least-squares estimator (C′DC)−1 C′R′Y and the MLE of σ2 f for f = 1,…, m equals a linear combination of the sums of squares for the m random effects and interactions. However, in general, for unbalanced designs, derivation of the MLEs of μ, σ2 1,…, σ2 m can be tedious and usually involves numerical techniques such as the Newton–Raphson method.
Russell and Bradley (1958), Anderson and Bancroft (1952), W. A. Thompson (1962), and later Patterson and R. Thompson (1971, 1974) suggested what is called a restricted maximum likelihood (REML) approach. The REML estimators of σ2 1,…, σ2 m are derived by first expressing the likelihood function in two parts, one involving the fixed parameters, μ, and the second free of these fixed parameters. To construct the REMLs, let G be any n × (n − k) matrix of rank n − k such that G′RC = 0. For example, G can be defined such that GG′ = I n − RC (C′DC)−1 C′R′ and G′G = I n−k where D = R′R. Next, transform the n × 1 random vector Y by the n × n nonsingular matrix [RC, G]′.
The distribution of G′Y is free of the fixed parameters μ. The REML estimators of the variance components are the values of σ2 1,…, σ2 m that maximize the marginal likelihood of G′Y.
As with the maximum likelihood approach, numerical techniques are often necessary to determine the REML estimates of σ2 1,…, σ2 m . Furthermore, the maximization is performed under the restriction that the estimators of σ2 1,…, σ2 m are all positive.
The SAS PROC VARCOMP routine has a Type I sum of squares and a restricted maximum likelihood option. In the next section a numerical example is analyzed using PROC VARCOMP.
URL: https://www.sciencedirect.com/science/article/pii/B9780125084659500107
Problem Solving and the Solution of Algebraic Equations
Robert G. Mortimer , in Mathematics for Physical Chemistry (Fourth Edition), 2013
5.2.4 Solving Equations Numerically with Excel
Excel is a large and versatile program, and we do not have the space to discuss all of its capabilities. It has the capability to solve equations numerically, using a command called Goal Seek. This command causes the software to change a variable until a defined function of that variable attains a specified value. It uses the Newton-Raphson method, which we will discuss in a later chapter. This method begins with a trial value of the independent variable and varies it to find the root. If there is more than one root, you must select a trial value not too far from the desired root. To use this function, you open Excel, select a cell, and enter a trial value of the independent variable. Select another cell and enter a formula giving the function that you want to equal zero (or whatever value you need). Use the address of the trial value in this formula. Now click on the tab labeled "Data" in the ribbon at the top of the window. Click on the icon labeled "What-If Analysis," which gives you three options. Click on "Goal Seek." A window opens up with three boxes. In the top box you specify the address of the cell in which your formula is entered. In the second box, you enter the value you want the function to attain, and in the third box you enter the address of the cell where you entered the trial value of the root. Click on OK and the computer does the analysis and places the root in the cell in which you put the trial value. We illustrate the process in the following example.
Example 5.9
Using Excel, find the real roots of the equation
We determine by graphing that there is a real root near x = 1 and a real root near x = 4. To find the first root, we enter the trial value 1 in cell A1 and type the following formula in cell B1: =A1^4-5*A1^3+4*A1^2-3*A1+2 and press the "Return" key. We click on the "Data" tab and then click on the "What-If Analysis" icon. A window appears, and we select "Goal Seek" in that window. A window appears with three blanks. The first says "Set cell:" and we enter the address B1 in the blank. The second blank says "To value:" and we type in a zero, since we want the function to attain the value zero. The third blank says "By changing cell:" We type in A1, since that is the cell containing our trial root. We click on "OK" and the software quickly finds the root and places the value of the root, 0.802 309, in cell A1. We then repeat the process with a trial value of 4 in cell A1. The software quickly places the value of the root, 4.188 847, in cell A1.
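Goal Seek's answers can be cross-checked with the Newton-Raphson method that the text says underlies it. This small sketch (not part of the book) refines the same two trial values for the same quartic:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Newton-Raphson root refinement, as Goal Seek does internally."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: x**4 - 5*x**3 + 4*x**2 - 3*x + 2
fp = lambda x: 4*x**3 - 15*x**2 + 8*x - 3

r1 = newton(f, fp, 1.0)   # ~0.8023, matching Goal Seek's 0.802 309
r2 = newton(f, fp, 4.0)   # ~4.1888, matching Goal Seek's 4.188 847
print(round(r1, 6), round(r2, 6))
```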
Exercise 5.10
Use Excel to find the real root of the equation
URL: https://www.sciencedirect.com/science/article/pii/B9780124158092000057
Multisite Occupancy Adsorption on Heterogeneous Solid Surfaces
W. RUDZINSKI , D.H. EVERETT , in Adsorption of Gases on Heterogeneous Surfaces, 1992
12.4 Nitta's method of evaluating the averaged adsorption isotherm
The importance of Nitta's work referred to in Section 12.3 can hardly be overestimated. It is the first solution in the literature for the analytical form of the 'local' adsorption isotherm in multisite occupancy adsorption on the heterogeneous surfaces characterised by random surface topography. The problem of evaluating the averaged adsorption isotherm θt is another separate problem which can probably be solved in a variety of ways. Therefore, Nitta's proposal concerning the numerical evaluation of θt is presented here in a separate section.
To evaluate θt , three kinds of surface coverages, θt , {θtj } and {θij } are defined by Nitta:
(12.4.1)
The constraint condition of equation 12.3.6 and the stationary-point condition of equation 12.3.8 are rewritten by use of surface coverages as follows:
(12.4.2)
(12.4.3)
where fi = Mi /M = fraction of sites of type i, and
(12.4.4)
Equations 12.4.2 and 12.4.3 are W · s simultaneous equations of W · s variables {θij }. Since θij approaches zero when θtj approaches zero, positive non-zero variables, {Yij }, are defined through equation 12.4.5:
(12.4.5)
Then the surface coverage of a pair of a site i and a group j, θij , is transformed by means of {Yij } and {θtj } as
(12.4.6)
Substituting equation 12.4.6 into equation 12.4.2, using the relation that Yij = rij Y1j for all pairs of {ij} and abbreviating Y 1j as Yj , one obtains, for each i, s simultaneous equations of s variables {Yj }:
(12.4.7)
The above simultaneous equations were solved by means of the Newton–Raphson method, details of which are described in Appendix 2 of Nitta's work.
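Nitta's Appendix 2 is not reproduced here, but the general shape of such a solver, a multivariate Newton–Raphson iteration with a finite-difference Jacobian, can be sketched on a small invented system (not the adsorption equations themselves):

```python
def newton_system(F, x, tol=1e-10, h=1e-7, max_iter=50):
    """Solve a 2-equation system F(x) = 0 by Newton-Raphson with a
    finite-difference Jacobian and a 2x2 linear solve via Cramer's rule."""
    for _ in range(max_iter):
        f0, f1 = F(x)
        # Finite-difference Jacobian, one column per perturbed variable
        fa0, fa1 = F([x[0] + h, x[1]])
        fb0, fb1 = F([x[0], x[1] + h])
        J = [[(fa0 - f0) / h, (fb0 - f0) / h],
             [(fa1 - f1) / h, (fb1 - f1) / h]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-f0 * J[1][1] + f1 * J[0][1]) / det
        dx1 = (-f1 * J[0][0] + f0 * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
        if abs(f0) + abs(f1) < tol:
            return x
    return x

# Invented system: x^2 + y^2 = 5 and x*y = 2 (solutions (1,2) and (2,1)).
sol = newton_system(lambda v: (v[0]**2 + v[1]**2 - 5, v[0]*v[1] - 2),
                    [2.5, 0.5])
print(sol)
```

In Nitta's case the unknowns are the s variables {Yj } for each site type, but the structure of the iteration is the same.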
The chemical potential for an ideal gas of pressure p is
(12.4.8)
where ∧ is the thermal de Broglie wavelength, and q g is the internal molecular partition function for the gas phase. From the equilibrium condition that μs = μg, we obtain an adsorption isotherm, written in terms of total surface coverage as
(12.4.9)
where K is an adsorption equilibrium constant defined by
(12.4.10)
and
(12.4.11)
That equation 12.4.9 is a generalisation of equation 12.4.1 for a heterogeneous surface is seen by noting that for a homogeneous surface f 1 = 1, θij = θtj , Σ θij = θt , so that Yj = 1/(1 − θt ) and the logarithmic term is zero. Equation 12.4.9 now becomes
(12.4.12)
which is equation 12.4.1.
To illustrate how the surface heterogeneity affects the behaviour of adsorption isotherms, Nitta et al. first performed some model investigations.
Figure 12.3 shows the adsorption isotherm when the adsorbed molecule consists of four identical groups each occupying one site and the surface has two different sites, a and b; the assigned parameters being such that n = 4, r a1 = 1.0, r b1 = r = 1, 10 or 100. The solid lines are the isotherms calculated at equimolar site fraction (f a = f b = 0.5).
Figure 12.3. Nitta's model investigation of the effect of r on the behaviour of θt , for a unigroup molecule with n = 4 (r = r b1; r a1 = 1.0).
After Nitta et al. 2
Figure 12.4 shows the results of the calculation when the molecule consists of two kinds of group, 1 and 2, varying the group fractions in a molecule under the condition that (n 1 + n 2) = 4; two different sites, a and b, of equal amounts are considered (f a = f b = 0.5). The value of r b1 is assumed to be unity; r b2 is fixed at 100.
Figure 12.4. Nitta's calculation of θt in the case of molecules composed of two types of segment and adsorbed on two types of active sites: f a = f b = 0.5; n 1 + n 2 = n = 4; r b1 = 1.0; r b2 = 100.
After Nitta et al. 2
An important conclusion that can be drawn from Figure 12.4 is that when f b ⩾ n 2/n, the isotherms resemble those for a homogeneous surface in shape, being parallel to each other. This is because group 2 in the molecule occupies the more active site b without competition, being accompanied by a-1 pairs in stoichiometric amounts corresponding to the b-2 pairs.
To demonstrate the applicability of their theory and their numerical method, Nitta et al. 2 analysed the adsorption data for nitrogen, oxygen and carbon monoxide on Molecular Sieves 5A and 10X, reported by Danner and coworkers. 11, 12
For the sake of data reduction, Nitta et al. assumed that each gas molecule consists of identical segments and that the surface has two kinds of surface sites, a and b; the latter site being labelled as more active.
The site fraction of the active sites, f b, was assumed to be the ratio of cations to the amount of aluminium and silicon atoms, i.e.
The values of 0.22 and 0.33 for f b were estimated in this way for the Molecular Sieves 10X and 5A respectively. Other parameters determined by Nitta et al. so as to fit the single adsorption isotherms are collected in Tables 12.1 and 12.2.
Table 12.1. The parameters for oxygen, nitrogen and carbon monoxide adsorption on the Molecular Sieve 10X, determined by Nitta et al. Here M is the amount of sites per gram of adsorbent.
| Gas | n | |||
|---|---|---|---|---|
| O2 | 1.106 | 0.0 | 2.19 × 10−2 | 4.16 × 10−3 |
| N2 | 1.455 | 584.7 | 5.38 × 10−2 | 7.02 × 10−3 |
| CO | 1.364 | 893.2 | 0.184 | 1.76 × 10−2 |
M = 8.10 mmol g−1; f a = 0.78, f b = 0.22.
Table 12.2. The parameters for oxygen, nitrogen and carbon monoxide adsorption on the Molecular Sieve 5A, found by Nitta et al.
| Gas | n | ||
|---|---|---|---|
| O2 | 1.375 | 0.0 | 0.108 |
| N2 | 1.624 | 326.5 | 0.412 |
| CO | 1.564 | 500 ** | 2.26 |
* M = 7.84 mmol g−1; f a = 0.67, f b = 0.33.
** Estimated from the values obtained for MS 10X.
Concerning these values, Nitta et al. draw attention to the fact that they are of the order of the quadrupole moments of the adsorbate molecules.
Figures 12.5 and 12.6 show the agreement between the experimental and calculated adsorption isotherms at –200 °F. The agreement looks fairly good, but it would be more convincing if similar agreement were illustrated at both the investigated temperatures.
Figure 12.5. Nitta's computer fitting of the experimental adsorption isotherms of CO, N2 and O2 on Molecular Sieve 10X at –200 °F (p 0 = 1 kPa) reported by Danner and Wenzel, 11 using the best-fit parameters collected in Table 12.1.
After Nitta et al. 2
Figure 12.6. Nitta's computer fitting of the experimental adsorption isotherms of CO, N2 and O2 on Molecular Sieve 5A at – 200 °F (p 0 = 1 kPa), reported by Danner and Wenzel 11 using the best-fit parameters collected in Table 12.2.
After Nitta et al. 2
We remark, however, that with the same number of four best-fit parameters, Rudzinski and Jagiello 13 were able to fit adsorption isotherms of N2 on the Molecular Sieve 10X at four temperatures in an excellent way.
Rudzinski's calculation was based on a model of mobile adsorption on a heterogeneous solid surface. This seems to be another example in which an analysis of adsorption isotherms only does not make it possible to discriminate definitely between various adsorption models. No doubt an analysis of calorimetric effects of adsorption would be very helpful in such cases.
The numerical method of evaluating θt , proposed by Nitta et al., can be applied only with a discrete distribution of adsorption energies. This is surely a serious disadvantage of the method. Another disadvantage lies in the fact that the expression for θt does not possess a compact analytical form, suitable for further thermodynamic analysis, i.e. further mathematical operations to be carried out to develop expressions for other thermodynamic quantities (enthalpy of adsorption, heat capacity, etc.).
URL: https://www.sciencedirect.com/science/article/pii/B9780126016901500175
Source: https://www.sciencedirect.com/topics/mathematics/newton-raphson-method