
Generalized Minimal Residual Method with an SSOR Right Preconditioner

How to fix the generalized minimal residual method with an SSOR right preconditioner

I am trying to implement the GMRES algorithm with a right preconditioner P to solve the linear system Ax = b.


The code runs, but the results are not accurate for me because the error is very large. With the plain GMRES method (no preconditioning matrix, i.e. with P removed from the algorithm) I get an error of about 1e-12 and it converges for the same matrix.
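For context (my own summary of right preconditioning, not part of the original post): with a right preconditioner the method solves the modified system

A P u = b,    x = P u,

so P is applied once whenever a new Krylov vector is formed and once more when the final solution is assembled. These are the two places marked "This step applies P here" in the code below.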

import numpy as np
from scipy import sparse
import matplotlib.pyplot as plt
from scipy.linalg import norm as norm
import scipy.sparse as sp
from scipy.sparse import diags

"""The program is to split the matrix into D-diagonal; L: strictly lower matrix; U strictly upper matrix
    satisfying: A = D - L - U  """
def splitMat(A):
    n,m = A.shape
    if (n == m):
        diagval = np.diag(A)
        D = diags(diagval,0).toarray()
        L = (-1)*np.tril(A,-1)
        U = (-1)*np.triu(A,1)
    else:
        raise ValueError("A needs to be a square matrix")
    return (L,D,U)

"""Preconditioned Matrix for symmetric successive over-relaxation (SSOR): """
def P_SSOR(A,w): 
    ## Split up matrix A: 
    L,D,U = splitMat(A)
    Comp1 = (D - w*U)
    Comp2 = (D - w*L)
    Comp1inv = np.linalg.inv(Comp1)
    Comp2inv = np.linalg.inv(Comp2)
    P = w*(2-w)*np.matmul(Comp1inv,np.matmul(D,Comp2inv))
    return  P

"""GMRES_SSOR using right preconditioning P: 
A - matrix of linear system Ax = b
x0 - initial guess
tol - tolerance
maxit - maximum iteration """

def myGMRES_SSOR(A,x0,b,tol,maxit):
    matrixSize = A.shape[0] 
    e = np.zeros((maxit+1,1))
    rr = 1
    rstart = 2
    X = x0
    w = 1.9 ## in ssor
    P = P_SSOR(A,w) ### preconditioned matrix 
    ### Starting the GMRES #### 
    for rs in range(0,rstart+1):
        ### first check the residual: 
        if rr<tol: 
            break 
        else: 
            r0 = (b-A.dot(x0))
            rho = norm(r0)
            e[0] = rho 
            H = np.zeros((maxit+1,maxit))
            Qcol = np.zeros((matrixSize,maxit+1))
            Qcol[:,0:1] = r0/rho 
        for k in range(1,maxit+1):
            ### Arnoldi procedure ###
            Qcol[:,k] =np.matmul(np.matmul(A,P),Qcol[:,k-1])  ### This step applies P here: 
            for j in range(0,k):
                H[j,k-1] = np.dot(np.transpose(Qcol[:,k]),Qcol[:,j])
                Qcol[:,k] = Qcol[:,k] - np.dot(H[j,k-1],Qcol[:,j])

            H[k,k-1] = norm(Qcol[:,k])
            Qcol[:,k] = Qcol[:,k]/H[k,k-1]

            ###  QR decomposition step ### 
            n = k 
            Q = np.zeros((n+1,n))
            R = np.zeros((n,n))
            R[0,0] = norm(H[0:n+2,0])
            Q[:,0] = H[0:n+1,0] / R[0,0]
            for j in range (0,n+1):
                t = H[0:n+1,j-1]
                for i in range (0,j-1):
                    R[i,j-1] = np.dot(Q[:,i],t)
                    t = t - np.dot(R[i,j-1],Q[:,i])
                R[j-1,j-1] = norm(t)
                Q[:,j-1] = t / R[j-1,j-1]

            g = np.dot(np.transpose(Q),e[0:k+1])
            Y = np.dot(np.linalg.inv(R),g) 

            Res= e[0:n] - np.dot(H[0:n,0:n],Y[0:n])
            rr = norm(Res) 

            #### second check on the residual ###
            if rr < tol: 
                break 

        #### Updating the solution with the preconditioned matrix ####
        X = X + np.matmul(np.matmul(P,Qcol[:,0:k]),Y)  ### This step applies P here
    return X 

######
A = np.random.rand(100,100)
x = np.random.rand(100,1)
b = np.matmul(A,x) 
x0 = np.zeros((100,1))
maxit = 100
tol = 0.00001
x = myGMRES_SSOR(A,x0,b,tol,maxit)
res = b - np.matmul(A,x) 
print(norm(res))
print("Solution with gmres\n",np.matmul(A,x))
print("---------------------------------------")
print("b matrix:",b)

I hope someone can help me figure out this problem!

Solution

I'm not sure where you got your symmetric successive over-relaxation (SSOR) code from, but it appears to be wrong. You also seem to assume that A is a symmetric matrix, which it is not in your random test case.
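For reference, the SSOR preconditioning matrix described on that Wikipedia page, writing A = D + L + L^T with D the diagonal of A and L its strictly lower-triangular part, has the form

M = w/(2-w) * (D/w + L) * inv(D) * (D/w + L)^T

Note that splitMat above uses the opposite sign convention, A = D - L - U.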

Following SSOR's Wikipedia entry, I replaced your P_SSOR function with

def P_SSOR(A,w): 
    L,D,U = splitMat(A)
    P = 2/(2-w) * (1/w*D+L)*np.linalg.inv(D)*(1/w*D+L).T
    return P

and your test matrix with

A = np.random.rand(100,100)
A = A + A.T

and your code converges to an error of about 12 digits, i.e. roughly 1e-12.
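Putting the two changes together, a minimal end-to-end run might look like the sketch below (it assumes the splitMat and myGMRES_SSOR functions from the question, with the corrected P_SSOR above and the call fixed to pass all five arguments):

import numpy as np
from scipy.linalg import norm

A = np.random.rand(100,100)
A = A + A.T                      # symmetric test matrix, as suggested above

x_true = np.random.rand(100,1)
b = np.matmul(A,x_true)
x0 = np.zeros((100,1))

x = myGMRES_SSOR(A, x0, b, tol=1e-5, maxit=100)
print("residual norm:", norm(b - np.matmul(A,x)))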
