I'm trying to take a wallpaper sample with repeating features and generalize the pattern onto a larger canvas by essentially mapping the sample onto its own edges using ORB or SIFT feature matching, so the pattern extends outward. The idea is like stitching a panorama together, except the pattern is stitched onto itself, completing any features that are cut off at the edges.
For this first example, I'm trying to match the truncated spaceships along the top with the full spaceships along the bottom, so that the pattern continues beyond the original edge of the wallpaper.
I've had success using ORB with OpenCV for simple tasks, such as stitching pieces of the wallpaper image back together to create a larger sample. But I haven't been able to take the complete sample and use it to build out a large pattern.
I tried ORB feature matching by comparing the center of the image against each of the outer edges (see below), but I couldn't successfully map the center image onto the outer edges as a complete image.
import cv2
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# Function to crop the center 90% of the image
def crop_center(image, crop_percentage=0.9):
    # Get dimensions of the original image
    width, height = image.size

    # Calculate dimensions for the cropped image
    new_width = int(width * crop_percentage)
    new_height = int(height * crop_percentage)

    # Calculate the coordinates for the cropping box (centered)
    left = (width - new_width) // 2
    top = (height - new_height) // 2
    right = (width + new_width) // 2
    bottom = (height + new_height) // 2

    # Crop the image
    return image.crop((left, top, right, bottom))

# Function to extract the outer 10% edges of the image
def extract_edges(image, edge_percentage=0.1):
    width, height = image.size

    # Calculate edge sizes
    edge_width = int(width * edge_percentage)
    edge_height = int(height * edge_percentage)

    # Extract the four edge regions
    edges = {
        "top": image.crop((0, 0, width, edge_height)),
        "bottom": image.crop((0, height - edge_height, width, height)),
        "left": image.crop((0, 0, edge_width, height)),
        "right": image.crop((width - edge_width, 0, width, height)),
    }
    return edges

# Function to perform ORB feature matching
def feature_match(cropped_img, edge_img):
    # Convert images to grayscale
    cropped_img_gray = cv2.cvtColor(np.array(cropped_img), cv2.COLOR_RGB2GRAY)
    edge_img_gray = cv2.cvtColor(np.array(edge_img), cv2.COLOR_RGB2GRAY)

    # Initialize ORB detector
    orb = cv2.ORB_create()

    # Detect keypoints and descriptors
    kp1, des1 = orb.detectAndCompute(cropped_img_gray, None)
    kp2, des2 = orb.detectAndCompute(edge_img_gray, None)

    # Create a Brute Force Matcher
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match descriptors
    matches = bf.match(des1, des2)

    # Sort matches based on their distance (lower distance is better)
    matches = sorted(matches, key=lambda x: x.distance)

    # Draw the top 10 matches
    matched_image = cv2.drawMatches(np.array(cropped_img), kp1, np.array(edge_img), kp2, matches[:10], None, flags=2)

    # Return the matched image
    return matched_image

# Function to display the images in a Colab notebook
def show_image(img, title="Image"):
    plt.figure(figsize=(10, 10))
    plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
    plt.title(title)
    plt.axis("off")
    plt.show()

# Main function
def find_and_show_matches(image_path):
    # Open the original image
    img = Image.open(image_path)

    # Crop the center 90% of the image
    cropped_img = crop_center(img)

    # Extract edges (top, bottom, left, right) which represent the outer 10%
    edges = extract_edges(img)

    # Perform feature matching for each edge and visualize matches
    for edge_name, edge_img in edges.items():
        matched_img = feature_match(cropped_img, edge_img)

        # Display the matches using matplotlib
        show_image(matched_img, title=f"Matches with {edge_name} edge")
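I think what the code above is still missing is the actual mapping step: drawing matches only visualizes correspondences, so to warp the center onto an edge I would presumably still need to estimate a transform from the matched keypoints (e.g. cv2.findHomography with RANSAC) and apply it with cv2.warpPerspective. A rough sketch of what I mean is below; estimate_homography is a hypothetical helper of my own, not code I have working.

import cv2
import numpy as np

def estimate_homography(center_img, edge_img, min_matches=10):
    # Hypothetical helper: estimate a homography mapping center_img onto edge_img
    # from ORB matches, with RANSAC to reject bad correspondences.
    gray1 = cv2.cvtColor(np.array(center_img), cv2.COLOR_RGB2GRAY)
    gray2 = cv2.cvtColor(np.array(edge_img), cv2.COLOR_RGB2GRAY)

    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)
    if des1 is None or des2 is None:
        return None

    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

# Usage sketch: warp the center crop into an edge crop's coordinate frame.
# H = estimate_homography(cropped_img, edges["top"])
# if H is not None:
#     warped = cv2.warpPerspective(np.array(cropped_img), H,
#                                  (edges["top"].width, edges["top"].height))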
Forget that. Use autocorrelation, done via convolution.
# the usual imports
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

# read image (placeholder path -- point it at your wallpaper sample)
im = cv.imread("wallpaper.png")

# grayscale, zero-mean, unit-variance float32 so the DC term doesn't dominate
im = cv.cvtColor(im, cv.COLOR_BGR2GRAY)
im = im.astype(np.float32)
im -= im.mean()
im /= im.std()

# autocorrelation: inverse DFT of the spectrum times its own complex conjugate
spec = cv.dft(im)
multiplied = cv.mulSpectrums(spec, spec, 0, conjB=True)
convolved = cv.idft(multiplied)

# shift it so the zero-lag (identity) peak sits at the center
convolved = np.fft.fftshift(convolved)

plt.figure(figsize=(10, 10))
plt.imshow(convolved, cmap='turbo')
plt.show()
Once that's done, you "only" need to find the non-identity peaks.
Second picture: the repetition is clearly visible.
First picture: you get artifacts from the extra "border", but the vertical repetition is visible from the two somewhat faint off-identity peaks in the vertical direction.
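A minimal sketch of that peak hunt, assuming the fftshifted autocorrelation map from the snippet above: mask out the identity peak at the center, then take the strongest remaining local maxima; their offsets from the center are candidate repeat vectors. The masking radius and number of peaks kept here are arbitrary choices, not anything the data dictates.

def find_repeat_offsets(acorr, num_peaks=4, exclude_radius=10):
    # acorr: the fftshifted autocorrelation map ("convolved" above).
    # Returns (dy, dx) offsets of the strongest non-identity peaks relative to the center.
    h, w = acorr.shape
    cy, cx = h // 2, w // 2

    masked = acorr.copy()
    yy, xx = np.ogrid[:h, :w]

    # knock out the identity peak at zero lag
    masked[(yy - cy) ** 2 + (xx - cx) ** 2 <= exclude_radius ** 2] = -np.inf

    offsets = []
    for _ in range(num_peaks):
        y, x = np.unravel_index(np.argmax(masked), masked.shape)
        offsets.append((y - cy, x - cx))
        # suppress the neighborhood of this peak so the next argmax finds a new one
        masked[(yy - y) ** 2 + (xx - x) ** 2 <= exclude_radius ** 2] = -np.inf
    return offsets

print(find_repeat_offsets(convolved))

Once you have a pair of roughly independent offsets, those are your repeat vectors, and shifting copies of the sample by them should let you continue the pattern onto a larger canvas, which was the original goal.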