
OpenCV keypoint matcher matches the wrong regions in Python, for a certificate template and image


I am using the code given on this site to align a photo of my certificate with a "template" that I created by converting my PDF certificate to PNG and removing my name, candidate ID, and certification date. Unfortunately, the keypoint matcher matches the wrong regions (click here for the image); in particular, it matches my name, candidate ID, and date to regions of the certificate they should not correspond to. Strangely, when I keypoint-match the photo against itself (i.e. without removing my name, date, etc.), it works fine.

I have tried different matchers, including SIFT and a brute-force matcher, but I still get the same problem. Does anyone know why this happens, and is there anything I can try to overcome it?

Thanks :)

Here is the code I am using:

# import the necessary packages
import numpy as np
import imutils
import cv2

def align_images(image,template,maxFeatures=500,keepPercent=0.2,debug=True):
    # convert both the input image and template to grayscale
    imageGray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
    templateGray = cv2.cvtColor(template,cv2.COLOR_BGR2GRAY)

    # use ORB to detect keypoints and extract (binary) local
    # invariant features
    orb = cv2.ORB_create(maxFeatures)
    (kpsA,descsA) = orb.detectAndCompute(imageGray,None)
    (kpsB,descsB) = orb.detectAndCompute(templateGray,None)

    # match the features
    method = cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING
    matcher = cv2.DescriptorMatcher_create(method)
    matches = matcher.match(descsA,descsB,None)

    # sort the matches by their distance (the smaller the distance,
    # the "more similar" the features are)
    matches = sorted(matches,key=lambda x:x.distance)

    # keep only the top matches
    keep = int(len(matches) * keepPercent)
    matches = matches[:keep]
    # Extract location of good matches
    points1 = np.zeros((len(matches),2),dtype=np.float32)
    points2 = np.zeros((len(matches),2),dtype=np.float32)

    # visualize the matched keypoints, keeping the drawing in a separate
    # variable so the list of matches is not overwritten before it is used
    if debug:
        matchedVis = cv2.drawMatches(image,kpsA,template,kpsB,matches,None)
        matchedVis = imutils.resize(matchedVis,width=1000)
        cv2.imshow("Matched Keypoints",matchedVis)
        cv2.waitKey(0)
    for i,match in enumerate(matches):
        points1[i,:] = kpsA[match.queryIdx].pt
        points2[i,:] = kpsB[match.trainIdx].pt

    # find the homography that maps the image onto the template
    H,mask = cv2.findHomography(points1,points2,cv2.RANSAC)


    # use the homography matrix to align the images
    (h,w) = template.shape[:2]
    aligned = cv2.warpPerspective(image,H,(w,h))

    # return the aligned image
    return aligned

image = cv2.imread(r"C:\Users\Soffie\Documents\GrayceAssignments\certificate_scanner\certif_image2.jpg")
template = cv2.imread(r"C:\Users\Soffie\Documents\GrayceAssignments\certificate_scanner\certificate_template.png")
aligned = align_images(image,template)

aligned = imutils.resize(aligned,width =700)
template = imutils.resize(template,width = 700)
stacked = np.hstack([aligned,template])
overlay = template.copy()
output = aligned.copy()
output = cv2.addWeighted(overlay,0.5,output,0.5,0)
cv2.imshow("Image Alignment Stacked",stacked)
cv2.imshow("Image Alignment Overlay",output)
cv2.waitKey(0)
