
How do I compute matching features between high-resolution images?

I am trying to match SIFT features between two images using OpenCV:

sift = cv2.xfeatures2d.SIFT_create()
kp,desc = sift.detectAndCompute(img,None)

Both images appear to contain plenty of features, around 15,000 each, drawn as green dots:

(image: detected keypoints in the first photo)

(image: detected keypoints in the second photo)

But after matching them I only retain 87 matches, and some of those are outliers.

(image: the 87 surviving matches)

I am trying to figure out what I am doing wrong. My code for matching the two images is:

import cv2
import numpy as np

def match(this_filename,this_desc,this_kp,othr_filename,othr_desc,othr_kp):

    E_RANSAC_PROB = 0.999
    F_RANSAC_PROB = 0.999
    E_PROJ_ERROR = 15.0
    F_PROJ_ERROR = 15.0
    LOWE_RATIO = 0.9
    # FLANN Matcher
    # FLANN_INDEX_KDTREE = 1 # 1? https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_matcher/py_matcher.html#basics-of-brute-force-matcher
    # index_params = dict(algorithm = FLANN_INDEX_KDTREE,trees = 5)
    # search_params = dict(checks=50)   # or pass empty dictionary
    # flann = cv2.FlannBasedMatcher(index_params,search_params)
    # matcherij = flann.knnMatch(this_desc,othr_desc,k=2)
    # matcherji = flann.knnMatch(othr_desc,this_desc,k=2)

    # BF Matcher
    this_matches = {}
    othr_matches = {}


    bf = cv2.BFMatcher()
    matcherij = bf.knnMatch(this_desc,othr_desc,k=2)
    matcherji = bf.knnMatch(othr_desc,this_desc,k=2)

    matchesij = []
    matchesji = []

    for i,(m,n) in enumerate(matcherij):
        if m.distance < LOWE_RATIO*n.distance:
            matchesij.append((m.queryIdx,m.trainIdx))

    for i,(m,n) in enumerate(matcherji):
        if m.distance < LOWE_RATIO*n.distance:
            matchesji.append((m.trainIdx,m.queryIdx))


    # Make sure matches are symmetric
    symmetric = set(matchesij).intersection(set(matchesji))
    symmetric = list(symmetric)

    this_matches[othr_filename] = [ (a,b) for (a,b) in symmetric ]
    othr_matches[this_filename] = [ (b,a) for (a,b) in symmetric ]

    if len(this_matches[othr_filename]) == 0:
        print("no symmetric matches")
        return 0

    src = np.array([ this_kp[index[0]].pt for index in this_matches[othr_filename] ])
    dst = np.array([ othr_kp[index[1]].pt for index in this_matches[othr_filename] ])

    # retain inliers that fit x.F.xT == 0
    F,inliers = cv2.findFundamentalMat(src,dst,cv2.FM_RANSAC,F_PROJ_ERROR,F_RANSAC_PROB)

    if F is None or inliers is None:
        print("no F matrix estimated")
        return 0

    inliers = inliers.ravel()

    this_matches[othr_filename] = [ this_matches[othr_filename][x] for x in range(len(inliers)) if inliers[x] ]
    othr_matches[this_filename] = [ othr_matches[this_filename][x] for x in range(len(inliers)) if inliers[x] ]

    return this_matches,othr_matches,inliers.sum()

Here are the two original images: https://www.dropbox.com/s/pvi247be2ds0noc/images.zip?dl=0

Solution

I don't understand why your code filters out matches whose distance ratio is above 0.9 (LOWE_RATIO). Those points are already matched; by filtering them you cut the matched features from 15,000 down to 839, and the inlier test then treats only 87 of those as inliers.
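As an aside, here is a minimal, OpenCV-independent sketch of how the ratio threshold trades match count against ambiguity; the (best, second-best) distance pairs below are synthetic, purely for illustration:

```python
# Lowe's ratio test on synthetic (best, second_best) distance pairs, as
# knnMatch(..., k=2) conceptually produces. A distinctive match has a best
# distance much smaller than the runner-up; for ambiguous matches (e.g. on
# repetitive texture) the two distances are close together.
def ratio_filter(pairs, ratio):
    """Keep pairs whose best match is sufficiently better than the runner-up."""
    return [p for p in pairs if p[0] < ratio * p[1]]

pairs = [(10.0, 100.0), (50.0, 60.0), (80.0, 85.0), (30.0, 90.0)]
print(len(ratio_filter(pairs, 0.9)))  # 3 -- looser threshold keeps more
print(len(ratio_filter(pairs, 0.7)))  # 2 -- stricter threshold keeps fewer
```

Lowering the threshold discards ambiguous matches earlier, which is exactly why a strict ratio plus a strict RANSAC step can shrink 15,000 features to well under a hundred.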


Also, using the code below with ORB (Oriented FAST and Rotated BRIEF), I get 500 keypoints and 158 matches. I believe it is a reasonable alternative to SIFT:

The matches then look like this: (image: ORB matches)

High resolution is not always a good thing in image processing, so I simply followed this tutorial and added a median filter. As shown below, the results are quite good:

import cv2 as cv

im1  = cv.imread('IMG_1596.png')
gry1 = cv.cvtColor(im1,cv.COLOR_BGR2GRAY)
gry1 = cv.medianBlur(gry1,ksize = 5)

im2  = cv.imread('IMG_1598.png')
gry2 = cv.cvtColor(im2,cv.COLOR_BGR2GRAY)
gry2 = cv.medianBlur(gry2,ksize = 5)

# Initiate ORB detector
orb = cv.ORB_create()

# find the keypoints and descriptors with ORB
kp1,des1 = orb.detectAndCompute(gry1,None)
kp2,des2 = orb.detectAndCompute(gry2,None)

# create BFMatcher object
bf = cv.BFMatcher(cv.NORM_HAMMING,crossCheck=True)

# Match descriptors.
matches = bf.match(des1,des2)

# Sort them in the order of their distance.
matches = sorted(matches,key = lambda x:x.distance)

im3 = cv.drawMatches(im1,kp1,im2,kp2,matches,None,flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite("ORB_RESULTS.png",im3)

len(matches)
>>> 121
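The crossCheck=True option above keeps a pair only when each descriptor picks the other as its nearest neighbour. A toy, OpenCV-independent sketch of that mutual-nearest-neighbour idea (the 1-D integer "descriptors" are purely illustrative):

```python
# Toy illustration of cross-checking, the idea behind BFMatcher's
# crossCheck=True: a pair survives only when the choice agrees in both
# directions. Real descriptors are vectors; 1-D ints keep the sketch short.
def nearest(query, candidates):
    """Index of the candidate closest to query (absolute difference)."""
    return min(range(len(candidates)), key=lambda j: abs(query - candidates[j]))

def cross_check(desc1, desc2):
    matches = []
    for i, d in enumerate(desc1):
        j = nearest(d, desc2)
        if nearest(desc2[j], desc1) == i:  # must agree in both directions
            matches.append((i, j))
    return matches

desc1 = [0, 10, 20]
desc2 = [1, 11, 12]
print(cross_check(desc1, desc2))  # [(0, 0), (1, 1)] -- 20 has no mutual partner
```

This plays the same filtering role as the ratio test: both discard matches that are not clearly one-to-one.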

(image: ORB matching results)

# Initiate SIFT detector
sift = cv.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1,des1 = sift.detectAndCompute(gry1,None)
kp2,des2 = sift.detectAndCompute(gry2,None)

# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1,des2,k=2)

# Apply ratio test
good = []
for m,n in matches:
    if m.distance < 0.75*n.distance:
        good.append([m])
        
# cv.drawMatchesKnn expects list of lists as matches.
im3 = cv.drawMatchesKnn(im1,kp1,im2,kp2,good,None,flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
cv.imwrite("SIFT_RESULTS.png",im3)
len(good)
>>> 183

(image: SIFT matching results)

For float descriptors such as SIFT, FLANN uses a KD-tree index:

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE,trees = 5)

For binary descriptors such as ORB, an LSH index is used instead:

FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,      # 12
                    key_size = 12,         # 20
                    multi_probe_level = 1) # 2
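To make that convention explicit, here is a small hypothetical helper (not part of OpenCV) that selects FLANN index parameters by descriptor type:

```python
# Hypothetical helper: pick FLANN index parameters by descriptor type.
# KD-tree search assumes a float vector space (L2 distance), as with SIFT;
# LSH works on Hamming distance, which fits binary descriptors like ORB.
FLANN_INDEX_KDTREE = 1
FLANN_INDEX_LSH = 6

def flann_index_params(binary_descriptors):
    if binary_descriptors:
        return dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6, key_size=12, multi_probe_level=1)
    return dict(algorithm=FLANN_INDEX_KDTREE, trees=5)

print(flann_index_params(False)["algorithm"])  # 1 for SIFT-style descriptors
print(flann_index_params(True)["algorithm"])   # 6 for ORB-style descriptors
```

Passing KD-tree parameters with binary descriptors (or vice versa) is a common source of poor or crashing FLANN matches.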

# Initiate SIFT detector
sift = cv.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1,des1 = sift.detectAndCompute(gry1,None)
kp2,des2 = sift.detectAndCompute(gry2,None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE,trees = 5)
search_params = dict()
flann = cv.FlannBasedMatcher(index_params,search_params)
matches = flann.knnMatch(des1,des2,k=2)

# Need to draw only good matches,so create a mask
matchesMask = [[0,0] for i in range(len(matches))]

# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.7*n.distance:
        matchesMask[i]=[1,0]
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = (255,0,0),
                   matchesMask = matchesMask,
                   flags = cv.DrawMatchesFlags_DEFAULT)
im3 = cv.drawMatchesKnn(im1,kp1,im2,kp2,matches,None,**draw_params)
cv.imwrite("SIFT_w_FLANN_RESULTS.png",im3)

(image: SIFT with FLANN matching results)
