OpenCV: finding the right threshold to decide whether images match, based on the matcher score


I am currently building a recognition program using various feature extractors and various matchers. Using the matcher's score, I want to define a score threshold that can then decide whether a result is a correct match or an incorrect one.

I am trying to understand what the DMatch distance means across the various matchers. Does a smaller distance mean a better match? If so, I am confused, because the same image in different positions returns larger values than two entirely different images do.

I ran two test cases:

  1. Comparing one image against the same image in different positions, and so on.
  2. Comparing one image against completely different images, each in several different positions, and so on.

Here are my test results:

-----------------------------------------------

Positive image average distance
Total test number: 70
Comparing with SIFT
     Use BF with Ratio Test: 874.071456255
     Use FLANN             : 516.737270464

Comparing with SURF
     Use BF with Ratio Test: 2.92960552163
     Use FLANN             : 1.47225751158

Comparing with ORB
     Use BF                : 12222.1428571
     Use BF with Ratio Test: 271.638643755

Comparing with BRISK
     Use BF                : 31928.4285714
     Use BF with Ratio Test: 1537.63658578

Maximum positive image distance
Comparing with SIFT
     Use BF with Ratio Test: 2717.88008881
     Use FLANN             : 1775.63563538

Comparing with SURF
     Use BF with Ratio Test: 4.88817568123
     Use FLANN             : 2.81848525628

Comparing with ORB
     Use BF                : 14451.0
     Use BF with Ratio Test: 1174.47851562

Comparing with BRISK
     Use BF                : 41839.0
     Use BF with Ratio Test: 3846.39746094

-----------------------------------------

Negative image average distance
Total test number: 72
Comparing with SIFT
     Use BF with Ratio Test: 750.028228866
     Use FLANN             : 394.982576052

Comparing with SURF
     Use BF with Ratio Test: 2.89866939275
     Use FLANN             : 1.59815886725

Comparing with ORB
     Use BF                : 12098.9444444
     Use BF with Ratio Test: 261.874231339

Comparing with BRISK
     Use BF                : 31165.8472222
     Use BF with Ratio Test: 1140.46670034

Minimum negative image distance
Comparing with SIFT
     Use BF with Ratio Test: 0
     Use FLANN             : 0

Comparing with SURF
     Use BF with Ratio Test: 1.25826786458
     Use FLANN             : 0.316588282585

Comparing with ORB
     Use BF                : 10170.0
     Use BF with Ratio Test: 0

Comparing with BRISK
     Use BF                : 24774.0
     Use BF with Ratio Test: 0

Moreover, in some cases when two different images are tested against each other and there are no matches at all, the matcher also returns a score of 0, which is exactly the same score as when two identical images are compared.

On further inspection, there are four main cases:

  1. Two identical images: many matches, distance = 0
  2. The same image in a different position (not identical): many matches, distance = large value
  3. Two completely different images: no matches, distance = 0
  4. Two different images: some matches, distance = small value

Finding the right threshold from these cases seems problematic, because some of them contradict each other. Normally, the more similar the images are, the lower the distance value should be.
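Cases 1 and 3 collide because the score is a plain sum over the matches: zero matches sums to 0 exactly like many perfect matches do. Below is a minimal sketch of a combined criterion that separates them (MIN_GOOD_MATCHES and MAX_AVG_DISTANCE are assumed names and values that would need tuning per detector and per descriptor norm):

MIN_GOOD_MATCHES = 10     # assumed value, tune per detector/matcher
MAX_AVG_DISTANCE = 50.0   # assumed value, tune per descriptor norm

def isMatch(good):
    # 'good' is the list of DMatch objects that survived the ratio test
    if len(good) < MIN_GOOD_MATCHES:
        # rejects cases 3 and 4: few or no matches can no longer score like a perfect match
        return False
    # average instead of summing, so the score no longer grows with the match count
    avgDistance = sum(m.distance for m in good) / len(good)
    return avgDistance <= MAX_AVG_DISTANCE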

matcher.py

import cv2
import numpy as np
from matplotlib import pyplot as plt


def useBruteForce(img1, img2, kp1, kp2, des1, des2, setDraw):
    # create BFMatcher object (NORM_HAMMING suits binary descriptors such as ORB and BRISK)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match descriptors.
    matches = bf.match(des1,des2)

    # Sort them in the order of their distance.
    matches = sorted(matches, key = lambda x:x.distance)

    totalDistance = 0
    for g in matches:
        totalDistance += g.distance

    if setDraw:
        # Draw matches.
        img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches, None, flags=2)
        plt.imshow(img3),plt.show()

    return totalDistance


def useBruteForceWithRatioTest(img1, img2, kp1, kp2, des1, des2, setDraw):
    # BFMatcher with default params (NORM_L2, which suits float descriptors such as SIFT and SURF)
    bf = cv2.BFMatcher()
    matches = bf.knnMatch(des1,des2, k=2)

    # Apply ratio test
    good = []
    for m,n in matches:
        if m.distance < 0.75*n.distance:
            good.append(m)

    totalDistance = 0
    for g in good:
        totalDistance += g.distance

    if setDraw:
        # cv2.drawMatchesKnn expects a list of lists as matches.
        img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, [good], None, flags=2)
        plt.imshow(img3),plt.show()

    return totalDistance


def useFLANN(img1, img2, kp1, kp2, des1, des2, setDraw, isBinary):
    # Fast Library for Approximate Nearest Neighbors
    MIN_MATCH_COUNT = 1
    FLANN_INDEX_KDTREE = 0
    FLANN_INDEX_LSH = 6

    if isBinary:
        # Binary descriptors (ORB, BRISK) use the LSH index
        index_params = dict(algorithm = FLANN_INDEX_LSH,
                       table_number = 6,      # 12
                       key_size = 12,         # 20
                       multi_probe_level = 1) # 2
    else:
        # Float descriptors (SIFT, SURF) use the KD-tree index
        index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)

    # checks specifies the number of times the trees in the index should be recursively traversed. Higher values give better precision but also take more time.
    search_params = dict(checks = 90)

    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    # store all the good matches as per Lowe's ratio test.
    # (with LSH, knnMatch can return fewer than k neighbours, so guard the unpacking)
    good = []
    for pair in matches:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.7*n.distance:
            good.append(m)

    totalDistance = 0
    for g in good:
        totalDistance += g.distance

    if setDraw:
        if len(good)>MIN_MATCH_COUNT:
            src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
            dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)

            M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)
            matchesMask = mask.ravel().tolist()

            h,w = img1.shape
            pts = np.float32([ [0,0],[0,h-1],[w-1,h-1],[w-1,0] ]).reshape(-1,1,2)
            dst = cv2.perspectiveTransform(pts,M)

            img2 = cv2.polylines(img2,[np.int32(dst)],True,255,3, cv2.LINE_AA)

        else:
            print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
            matchesMask = None

        draw_params = dict(matchColor = (0,255,0), # draw matches in green color
                           singlePointColor = None,
                           matchesMask = matchesMask, # draw only inliers
                           flags = 2)

        img3 = cv2.drawMatches(img1,kp1,img2,kp2,good,None,**draw_params)
        plt.imshow(img3, 'gray'),plt.show()

    return totalDistance

comparator.py

import cv2
import matcher

def check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB):
    if matcherType == 1:
        return matcher.useBruteForce(img1, img2, kp1, kp2, des1, des2, setDraw)
    elif matcherType == 2:
        return matcher.useBruteForceWithRatioTest(img1, img2, kp1, kp2, des1, des2, setDraw)
    elif matcherType == 3:
        return matcher.useFLANN(img1, img2, kp1, kp2, des1, des2, setDraw, ORB)
    else:
        print("Matcher not chosen correctly, use Brute Force matcher as default")
        return matcher.useBruteForce(img1, img2, kp1, kp2, des1, des2, setDraw)


def useORB(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1,0) # queryImage
    img2 = cv2.imread(filename2,0) # trainImage

    # Initiate ORB detector
    orb = cv2.ORB_create()

    # find the keypoints and descriptors with ORB
    kp1, des1 = orb.detectAndCompute(img1,None)
    kp2, des2 = orb.detectAndCompute(img2,None)
    ORB = True
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)


def useSIFT(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1,0) # queryImage
    img2 = cv2.imread(filename2,0) # trainImage

    # Initiate SIFT detector
    sift = cv2.xfeatures2d.SIFT_create()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    ORB = False
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)


def useSURF(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1, 0)
    img2 = cv2.imread(filename2, 0)

    # Here I set Hessian Threshold to 400
    surf = cv2.xfeatures2d.SURF_create(400)

    # Find keypoints and descriptors directly
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    ORB = False
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)


def useBRISK(filename1, filename2, matcherType, setDraw):
    img1 = cv2.imread(filename1,0) # queryImage
    img2 = cv2.imread(filename2,0) # trainImage

    # Initiate BRISK detector
    brisk = cv2.BRISK_create()

    # find the keypoints and descriptors with BRISK
    kp1, des1 = brisk.detectAndCompute(img1,None)
    kp2, des2 = brisk.detectAndCompute(img2,None)
    ORB = True # BRISK descriptors are also binary, so they take the same LSH/Hamming path as ORB
    return check(img1, img2, kp1, kp2, des1, des2, matcherType, setDraw, ORB)

Tags: python, opencv, feature-extraction, matcher, threshold
2 Answers
0 votes

Here is what the OpenCV tutorial says:

For BF matcher, first we have to create the BFMatcher object using cv.BFMatcher(). It takes two optional params. First one is normType. It specifies the distance measurement to be used. By default, it is cv.NORM_L2. It is good for SIFT, SURF etc. (cv.NORM_L1 is also there). For binary string based descriptors like ORB, BRIEF, BRISK etc., cv.NORM_HAMMING should be used, which uses Hamming distance as measurement. If ORB is using WTA_K == 3 or 4, cv.NORM_HAMMING2 should be used.

https://docs.opencv.org/3.4/dc/dc3/tutorial_py_matcher.html

So you should create different matcher objects for SIFT and for ORB (you get the idea). That is probably why the distances you compute differ so wildly.
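A minimal sketch of that suggestion (the helper name makeMatcher is hypothetical):

def makeMatcher(binaryDescriptors):
    # pick the norm that fits the descriptor type instead of using one matcher for everything
    if binaryDescriptors:
        # ORB, BRIEF, BRISK produce binary strings: compare with Hamming distance
        return cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    # SIFT and SURF produce float vectors: compare with Euclidean (L2) distance
    return cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)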


0 votes

According to the OpenCV docs, a better match should give a shorter distance:

DMatch.distance - Distance between descriptors. The lower, the better it is.

Instead of using a distance threshold to decide whether two images are a true match, I simply check that the top matches give a consistent transformation.

In my case I am working with microscopy images, so I am only interested in the x and y translation between each pair of matched images. Something like the following worked very well for me.

import numpy as np

# 'matches' comes from a descriptor matcher; image_1 and image_2 are the
# answerer's own objects whose .key_points hold the cv2.KeyPoint lists.
best_matches = sorted(matches, key=lambda x: x.distance)[:5]
x_offsets, y_offsets = [], []
for match in best_matches:
    key_point_1 = image_1.key_points[match.queryIdx]
    key_point_2 = image_2.key_points[match.trainIdx]
    x_offsets.append(key_point_1.pt[0] - key_point_2.pt[0])
    y_offsets.append(key_point_1.pt[1] - key_point_2.pt[1])

tolerance = 1  # a 1 pixel shift is acceptable
is_true_match = np.std(x_offsets) < tolerance and np.std(y_offsets) < tolerance

The nice thing about this is that you never have to qualify what match distance is good enough.

However, this gets trickier when the transformation is more complex, e.g. when scaling, rotation, and perspective changes are involved. You would need multiple sets of matches to find multiple transformation matrices, and a sensible way to measure how similar those matrices are.
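For that more general case, one common alternative (a sketch under assumptions, not the answerer's code; min_inlier_ratio is an assumed threshold to tune) is to fit a single perspective transform with RANSAC and use the inlier ratio as the match criterion:

import cv2
import numpy as np

def isConsistentMatch(kp1, kp2, good, min_inlier_ratio=0.5):
    # fit one homography to the good matches; if most of them agree with it,
    # the two images plausibly show the same scene under a perspective change
    if len(good) < 4:  # cv2.findHomography needs at least 4 point pairs
        return False
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return False
    return mask.sum() / float(len(good)) >= min_inlier_ratio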

You also run the risk of not having enough image overlap to give the number of true matches you need. That can be mitigated by using a feature detector such as SIFT, which finds more keypoints at the cost of being somewhat slower.
