ABSTRACT

Small signal stability is the ability of a system to maintain stability when subjected to small disturbances. Small signal analysis provides valuable information about the inherent dynamic characteristics of the system and assists in its design, operation, and control. Time domain simulation and eigenanalysis are the two main approaches to study system stability. Eigenanalysis methods are widely used to perform small signal stability studies. The dynamic behavior of a system in response to small perturbations can be determined by computing the eigenvalues and eigenvectors of the system matrix. The locations of the eigenvalues can be used to investigate the system’s performance. In addition, eigenvectors can be used to estimate the relative participation of the respective states in the corresponding disturbance modes.

A scalar λ is an eigenvalue of an n × n matrix A if there exists a nonzero n × 1 vector v such that

    Av = λv    (7.1)

where v is the corresponding right eigenvector. If there exists a nonzero vector w such that

    w^T A = λ w^T    (7.2)

then w is a left eigenvector. The set of all eigenvalues of A is called the spectrum of A. Normally the term “eigenvector” refers to the right eigenvector unless denoted otherwise. The eigenvalue problem in Equation (7.1) is called the standard eigenvalue problem. Equation (7.1) can be written as

    (A − λI) v = 0    (7.3)

and thus is a homogeneous system of equations for v. This system has a nontrivial solution only if the determinant

    det(A − λI) = 0

The determinant equation is also called the characteristic equation for A and is an nth degree polynomial in λ. The eigenvalues of an n × n matrix A are the roots of the characteristic equation

    λ^n + c_{n−1} λ^{n−1} + c_{n−2} λ^{n−2} + . . . + c_1 λ + c_0 = 0    (7.4)

Therefore, there are n roots (real or complex) of the characteristic equation, and hence n eigenvalues of A.
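These definitions can be checked numerically. The sketch below uses NumPy; the 2 × 2 matrix A is an arbitrary illustrative choice, not one taken from the text:

```python
import numpy as np

# Arbitrary 2 x 2 test matrix (illustrative; eigenvalues are 5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Right eigenvectors: A v = lambda v, as in (7.1).
rvals, V = np.linalg.eig(A)
v, lam = V[:, 0], rvals[0]
print(np.allclose(A @ v, lam * v))                 # True

# Left eigenvectors satisfy w^T A = lambda w^T, as in (7.2);
# equivalently, they are right eigenvectors of A^T.
lvals, W = np.linalg.eig(A.T)
w = W[:, 0]
print(np.allclose(w @ A, lvals[0] * w))            # True

# The characteristic polynomial det(A - lambda I), as in (7.4):
coeffs = np.poly(A)        # monic coefficients [1, c_{n-1}, ..., c_0]
roots = np.roots(coeffs)   # roots of the characteristic equation
print(np.allclose(sorted(roots), sorted(rvals)))   # roots = eigenvalues
```

Note that `np.poly(A)` returns the monic characteristic polynomial of A, so its roots coincide with the spectrum of A, illustrating the equivalence stated above.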

The power method is one of the most common methods of finding the dominant eigenvalue of an n × n matrix A. The dominant eigenvalue is the largest eigenvalue in absolute value; that is, if λ1, λ2, . . . , λn are the eigenvalues of A, then λ1 is the dominant eigenvalue of A if

    |λ1| > |λi| for all i = 2, . . . , n    (7.5)

The power method is actually an approach to finding the eigenvector v1 corresponding to the dominant eigenvalue of the matrix A. Once the eigenvector is obtained, the eigenvalue can be extracted from the Rayleigh quotient:

    λ = 〈Av, v〉 / 〈v, v〉    (7.6)
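As a small check of (7.6), the quotient can be evaluated directly. In this sketch the matrix and vector are illustrative choices: v = (1, 1) is an exact eigenvector of this A for the eigenvalue 5.

```python
import numpy as np

# Illustrative matrix; (1, 1) is an exact eigenvector with eigenvalue 5.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
v = np.array([1.0, 1.0])

# Rayleigh quotient <Av, v> / <v, v> from (7.6) recovers the eigenvalue.
lam = (A @ v) @ v / (v @ v)
print(lam)   # -> 5.0
```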

Finding the eigenvector v1 is an iterative process: starting from an initial guess vector v0, a sequence of approximations vk is constructed which, under suitable conditions, converges as k goes to ∞. The iterative algorithm for the power method is straightforward:

The Power Method

1. Let k = 0 and choose v0 to be a nonzero n × 1 vector.
2. wk+1 = A vk
3. αk+1 = ‖wk+1‖
4. vk+1 = wk+1 / αk+1
5. If ‖vk+1 − vk‖ < ε, then done. Else, set k = k + 1 and go to Step 2.

The division by the norm of the vector in Step 4 is not a necessary step, but it keeps the size of the values of the eigenvector close to 1. Recall that a scalar times an eigenvector of A is still an eigenvector of A; therefore, scaling has no adverse consequence. However, without Step 4 (that is, with αk = 1 for all k), the values of the updated vector may increase or decrease to the extent that computer accuracy is affected.
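The five steps above can be sketched as follows in NumPy; the test matrix, starting vector, and tolerance are illustrative assumptions:

```python
import numpy as np

def power_method(A, v0, tol=1e-10, max_iter=1000):
    """Steps 1-5 of the power method; returns (eigenvalue, eigenvector)."""
    v = v0 / np.linalg.norm(v0)                # Step 1: nonzero initial guess
    for _ in range(max_iter):
        w = A @ v                              # Step 2: w_{k+1} = A v_k
        alpha = np.linalg.norm(w)              # Step 3: alpha_{k+1} = ||w_{k+1}||
        v_next = w / alpha                     # Step 4: normalize
        if np.linalg.norm(v_next - v) < tol:   # Step 5: convergence test
            v = v_next
            break
        v = v_next
    lam = (A @ v) @ v / (v @ v)                # eigenvalue via Rayleigh quotient (7.6)
    return lam, v

# Illustrative matrix with dominant eigenvalue 5 (the other eigenvalue is 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = power_method(A, np.array([1.0, 0.0]))
print(round(lam, 6))   # -> 5.0
```

One caveat of the stopping test in Step 5 as written: it assumes the iterates settle to a fixed vector. If the dominant eigenvalue is negative, vk alternates in sign from one iteration to the next and the test would never trigger; comparing vk+1 against ±vk handles that case.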