Removing the black dashed line from image stitching

How do I remove the black dashed line that appears when stitching images?

I am stitching multiple images. While stitching two images, a black dashed line appears at the seam between them, as shown below.

enter image description here

Does anyone know how I can remove or get rid of this black dashed line?

Here is the main part of the stitching code. It stitches two images, and the result is then used as input for stitching the next image, until all images are done:

detector = cv2.xfeatures2d.SURF_create(400)
gray1 = cv2.cvtColor(image1,cv2.COLOR_BGR2GRAY)
ret1,mask1 = cv2.threshold(gray1,1,255,cv2.THRESH_BINARY)
kp1,descriptors1 = detector.detectAndCompute(gray1,mask1)

gray2 = cv2.cvtColor(image2,cv2.COLOR_BGR2GRAY)
ret2,mask2 = cv2.threshold(gray2,1,255,cv2.THRESH_BINARY)
kp2,descriptors2 = detector.detectAndCompute(gray2,mask2)

keypoints1Im = cv2.drawKeypoints(image1,kp1,outImage=np.array([]),color=(0,0,255),flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)
util.display("KEYPOINTS",keypoints1Im)
keypoints2Im = cv2.drawKeypoints(image2,kp2,outImage=np.array([]),color=(0,0,255),flags=cv2.DRAW_MATCHES_FLAGS_DEFAULT)
util.display("KEYPOINTS",keypoints2Im)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(descriptors2,descriptors1,k=2)

good = []
for m,n in matches:
    if m.distance < 0.55 * n.distance:
        good.append(m)

print(str(len(good)) + " Matches were Found")

if len(good) <= 10:
    return image1

matches = copy.copy(good)

matchDrawing = util.drawMatches(gray2,kp2,gray1,kp1,matches)
util.display("matches",matchDrawing)

src_pts = np.float32([ kp2[m.queryIdx].pt for m in matches ]).reshape(-1,2)
dst_pts = np.float32([ kp1[m.trainIdx].pt for m in matches ]).reshape(-1,2)

A = cv2.estimateRigidTransform(src_pts,dst_pts,fullAffine=False)

if A is None:
    HomogResult = cv2.findHomography(src_pts,dst_pts,method=cv2.RANSAC)
    H = HomogResult[0]

height1,width1 = image1.shape[:2]
height2,width2 = image2.shape[:2]

corners1 = np.float32(([0,0],[0,height1],[width1,height1],[width1,0]))
corners2 = np.float32(([0,0],[0,height2],[width2,height2],[width2,0]))

warpedCorners2 = np.zeros((4,2))

for i in range(0,4):
    cornerX = corners2[i,0]
    cornerY = corners2[i,1]
    if A is not None: #check if we're working with affine transform or perspective transform
        warpedCorners2[i,0] = A[0,0]*cornerX + A[0,1]*cornerY + A[0,2]
        warpedCorners2[i,1] = A[1,0]*cornerX + A[1,1]*cornerY + A[1,2]
    else:
        warpedCorners2[i,0] = (H[0,0]*cornerX + H[0,1]*cornerY + H[0,2])/(H[2,0]*cornerX + H[2,1]*cornerY + H[2,2])
        warpedCorners2[i,1] = (H[1,0]*cornerX + H[1,1]*cornerY + H[1,2])/(H[2,0]*cornerX + H[2,1]*cornerY + H[2,2])

allCorners = np.concatenate((corners1,warpedCorners2),axis=0)

[xMin,yMin] = np.int32(allCorners.min(axis=0).ravel() - 0.5)
[xMax,yMax] = np.int32(allCorners.max(axis=0).ravel() + 0.5)

translation = np.float32(([1,0,-1*xMin],[0,1,-1*yMin],[0,0,1]))
warpedResImg = cv2.warpPerspective(image1,translation,(xMax-xMin,yMax-yMin))


if A is None:
    fullTransformation = np.dot(translation,H) #again,images must be translated to be 100% visible in new canvas
    warpedImage2 = cv2.warpPerspective(image2,fullTransformation,(xMax-xMin,yMax-yMin))

else:
    warpedImageTemp = cv2.warpPerspective(image2,translation,(xMax-xMin,yMax-yMin))
    warpedImage2 = cv2.warpAffine(warpedImageTemp,A,(xMax-xMin,yMax-yMin))

result = np.where(warpedImage2 != 0,warpedImage2,warpedResImg)

Please help me. Thank you.

EDIT:

Input image 1 (resized)

enter image description here

Input image 2 (resized)

enter image description here

Result (resized)

enter image description here

UPDATE:

Result after @fmw42's answer:

enter image description here

Solution

The issue arises because when you do the warping, the border pixels of the image get resampled/interpolated with black background pixels. This leaves a border of non-zero, varying values around your warped image, and those border pixels show up as your black dashed lines when merged with the other image. It happens because your merge test is binary, using != 0.

So one simple thing you can do in Python/OpenCV is to mask the warped image, getting its bounds against the black background outside the image, and then erode that mask. Then use the eroded mask to trim the image boundary. This can be achieved with the following changes to the last few lines of your code:

if A is None:
    fullTransformation = np.dot(translation,H) #again,images must be translated to be 100% visible in new canvas
    warpedImage2 = cv2.warpPerspective(image2,fullTransformation,(xMax-xMin,yMax-yMin))

else:
    warpedImageTemp = cv2.warpPerspective(image2,translation,(xMax-xMin,yMax-yMin))
    warpedImage2 = cv2.warpAffine(warpedImageTemp,A,(xMax-xMin,yMax-yMin))
    mask2 = cv2.threshold(warpedImage2,0,255,cv2.THRESH_BINARY)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
    mask2 = cv2.morphologyEx(mask2,cv2.MORPH_ERODE,kernel)
    warpedImage2[mask2==0] = 0

result = np.where(warpedImage2 != 0,warpedImage2,warpedResImg)

I simply added the following lines to your code:

mask2 = cv2.threshold(warpedImage2,0,255,cv2.THRESH_BINARY)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
mask2 = cv2.morphologyEx(mask2,cv2.MORPH_ERODE,kernel)
warpedImage2[mask2==0] = 0

You can increase the kernel size if you need more erosion.

Here is the before and after. Note that I did not have SURF and tried ORB instead, which did not align well, so your roads are not aligned. But the mismatch from the misalignment actually emphasizes the issue, since it shows the dashed, jagged black border line. The fact that ORB does not work here, or that I do not have the right code to align the images, is not important: the masking does what I think you want, and it can be extended to handle all of your images.

enter image description here

Another thing that can be done in combination with the above is to feather the mask and then blend the two images using the mask gradient. This is done by blurring the mask (a bit more), then stretching the values over the inner half of the blurred border so that the gradient lives only on the outer half. Then blend the two images using the ramped mask and its inverse, as follows, with the rest of the code the same as above:

if A is None:
    fullTransformation = np.dot(translation,H) #again, images must be translated to be 100% visible in new canvas
    warpedImage2 = cv2.warpPerspective(image2,fullTransformation,(xMax-xMin,yMax-yMin))

else:
    warpedImageTemp = cv2.warpPerspective(image2,translation,(xMax-xMin,yMax-yMin))
    warpedImage2 = cv2.warpAffine(warpedImageTemp,A,(xMax-xMin,yMax-yMin))
    mask2 = cv2.threshold(warpedImage2,0,255,cv2.THRESH_BINARY)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
    mask2 = cv2.morphologyEx(mask2,cv2.MORPH_ERODE,kernel)
    warpedImage2[mask2==0] = 0
    mask2 = cv2.blur(mask2,(5,5))
    mask2 = skimage.exposure.rescale_intensity(mask2,in_range=(127.5,255),out_range=(0,255)).astype(np.float64)

result = (warpedImage2 * mask2 +  warpedResImg * (255 - mask2))/255
result = result.clip(0,255).astype(np.uint8)

cv2.imwrite("image1_image2_merged3.png",result)
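To see what the rescale_intensity step does to the feathered mask, here is a 1-D sketch on synthetic values (standalone, not part of the stitching code): everything below the blur midpoint 127.5 is clipped to 0, and the inner half of the blur is stretched back to the full 0..255 ramp, so the gradient lives only on the outer half of the blurred border.

```python
import numpy as np
from skimage.exposure import rescale_intensity

# Synthetic blurred-mask values, from deep outside (0) to deep inside (255).
vals = np.array([0.0, 64.0, 127.0, 128.0, 191.0, 255.0])

# Values below 127.5 are clipped to 0; 127.5..255 is stretched to 0..255.
out = rescale_intensity(vals, in_range=(127.5, 255), out_range=(0, 255))
print(out.round(1))
```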

Here is the result compared with the original composite:

enter image description here

ADDITION

I have corrected my ORB code to reverse the use of the images, and now it aligns. So here are 3 techniques: the original, the one using only a binary mask, and the one using a gradient mask for blending (all as described above).

enter image description here

ADDITION 2

Here are the 3 requested images: original, binary mask, gradient mask blend.

enter image description here

enter image description here

enter image description here

Here is my ORB code for the last version above.

I tried to make as few changes as possible to your code, except that I had to use ORB and had to swap the names image1 and image2 near the end.

import cv2
import matplotlib.pyplot as plt
import numpy as np
import itertools
from scipy.interpolate import UnivariateSpline
from skimage.exposure import rescale_intensity


image1 = cv2.imread("image1.jpg")
image2 = cv2.imread("image2.jpg")

gray1 = cv2.cvtColor(image1,cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(image2,cv2.COLOR_BGR2GRAY)

# Detect ORB features and compute descriptors.
MAX_FEATURES = 500
GOOD_MATCH_PERCENT = 0.15
orb = cv2.ORB_create(MAX_FEATURES)

keypoints1,descriptors1 = orb.detectAndCompute(gray1,None)
keypoints2,descriptors2 = orb.detectAndCompute(gray2,None)

# Match features.
matcher = cv2.DescriptorMatcher_create(cv2.DESCRIPTOR_MATCHER_BRUTEFORCE_HAMMING)
matches = matcher.match(descriptors1,descriptors2,None)

# Sort matches by score
matches.sort(key=lambda x: x.distance,reverse=False)

# Remove not so good matches
numGoodMatches = int(len(matches) * GOOD_MATCH_PERCENT)
matches = matches[:numGoodMatches]

# Draw top matches
imMatches = cv2.drawMatches(image1,keypoints1,image2,keypoints2,matches,None)
cv2.imwrite("/Users/fred/desktop/image1_image2_matches.png",imMatches)

# Extract location of good matches
points1 = np.zeros((len(matches),2),dtype=np.float32)
points2 = np.zeros((len(matches),2),dtype=np.float32)

for i,match in enumerate(matches):
    points1[i,:] = keypoints1[match.queryIdx].pt
    points2[i,:] = keypoints2[match.trainIdx].pt

print(points1)
print("")
print(points2)

A = cv2.estimateRigidTransform(points1,points2,fullAffine=False)
#print(A)

if A is None:
    HomogResult = cv2.findHomography(points1,points2,method=cv2.RANSAC)
    H = HomogResult[0]

height1,width1 = image1.shape[:2]
height2,width2 = image2.shape[:2]

corners1 = np.float32(([0,0],[0,height1],[width1,height1],[width1,0]))
corners2 = np.float32(([0,0],[0,height2],[width2,height2],[width2,0]))

warpedCorners2 = np.zeros((4,2))

# project corners2 into domain of image1 from A affine or H homography
for i in range(0,4):
    cornerX = corners2[i,0]
    cornerY = corners2[i,1]
    if A is not None: #check if we're working with affine transform or perspective transform
        warpedCorners2[i,0] = A[0,0]*cornerX + A[0,1]*cornerY + A[0,2]
        warpedCorners2[i,1] = A[1,0]*cornerX + A[1,1]*cornerY + A[1,2]
    else:
        warpedCorners2[i,0] = (H[0,0]*cornerX + H[0,1]*cornerY + H[0,2])/(H[2,0]*cornerX + H[2,1]*cornerY + H[2,2])
        warpedCorners2[i,1] = (H[1,0]*cornerX + H[1,1]*cornerY + H[1,2])/(H[2,0]*cornerX + H[2,1]*cornerY + H[2,2])

allCorners = np.concatenate((corners1,warpedCorners2),axis=0)

[xMin,yMin] = np.int32(allCorners.min(axis=0).ravel() - 0.5)
[xMax,yMax] = np.int32(allCorners.max(axis=0).ravel() + 0.5)

translation = np.float32(([1,0,-1*xMin],[0,1,-1*yMin],[0,0,1]))
warpedResImg = cv2.warpPerspective(image2,translation,(xMax-xMin,yMax-yMin))


if A is None:
    fullTransformation = np.dot(translation,H) #again, images must be translated to be 100% visible in new canvas
    warpedImage2 = cv2.warpPerspective(image1,fullTransformation,(xMax-xMin,yMax-yMin))

else:
    warpedImageTemp = cv2.warpPerspective(image1,translation,(xMax-xMin,yMax-yMin))
    warpedImage2 = cv2.warpAffine(warpedImageTemp,A,(xMax-xMin,yMax-yMin))
    mask2 = cv2.threshold(warpedImage2,0,255,cv2.THRESH_BINARY)[1]
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(3,3))
    mask2 = cv2.morphologyEx(mask2,cv2.MORPH_ERODE,kernel)
    warpedImage2[mask2==0] = 0
    mask2 = cv2.blur(mask2,(5,5))
    mask2 = rescale_intensity(mask2,in_range=(127.5,255),out_range=(0,255)).astype(np.float64)

result = (warpedImage2 * mask2 +  warpedResImg * (255 - mask2))/255
result = result.clip(0,255).astype(np.uint8)

cv2.imwrite("image1_image2_merged2.png",result)

In your code you had the following. Note where the names image1 and image2 are used compared with my code above.

warpedResImg = cv2.warpPerspective(image1,translation,(xMax-xMin,yMax-yMin))

Horizontal gluing

As a proof of concept, I will focus on just one of the cuts. I agree with the comments that your code is a bit lengthy and hard to work with, so step one is to glue the pictures together myself:

import cv2
import matplotlib.pyplot as plt
import numpy as np
import itertools
from scipy.interpolate import UnivariateSpline

upper_image = cv2.cvtColor(cv2.imread('yQv6W.jpg'),cv2.COLOR_BGR2RGB)/255
lower_image = cv2.cvtColor(cv2.imread('zoWJv.jpg'),cv2.COLOR_BGR2RGB)/255

result_image = np.zeros((466+139,700+22,3))
result_image[139:139+lower_image.shape[0],:lower_image.shape[1]] = lower_image
result_image[0:upper_image.shape[0],22:22+upper_image.shape[1]] = upper_image
plt.imshow(result_image)

naive glue

Ok, no black dashed line, but I admit it is not perfect either. So the next step is to align at least the street and the little path at the far right of the picture. For that I need to shrink the lower picture to a non-integer size and turn it back into a grid; I use a knn-like interpolation method for that (the Image_knn class below).

Edit: As requested in the comments, I will explain the shrinking in more detail, since it has to be done by hand again for the other stitches. The magic happens in the spline line below (I have replaced n with its value).

I first tried scaling the lower picture in the x direction so that the little path at the far right fit the upper picture. Unfortunately, the street then no longer fit. So instead I shrink according to the spline below: at pixel 0 I still want pixel 0, at 290 I want the pixel that used to be at 310, and so on. Note that 290, 510 and 310, 530 are the new and old x-coordinates of the street and the path at the gluing height, respectively.

f = UnivariateSpline([0,290,510,685],[0,310,530,700])
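As a sanity check of this remapping (assuming the control points [0, 290, 510, 685] → [0, 310, 530, 700] discussed above), one can evaluate the spline at the control points; with four points and a cubic spline, the fit passes through them:

```python
from scipy.interpolate import UnivariateSpline

# New x-coordinates should map back to the hand-picked old x-coordinates.
f = UnivariateSpline([0, 290, 510, 685], [0, 310, 530, 700])
mapped = [float(f(x)) for x in (0, 290, 510, 685)]
print([round(v, 1) for v in mapped])
```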

gluing with alignment

Much better: no black line. But maybe we can still smooth the cut a little. I figured that taking a convex combination of the upper and lower images at the cut would look better.
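The convex combination itself is just a per-row linear blend; here is a minimal standalone sketch with toy constant strips standing in for the overlapping rows:

```python
import numpy as np

# Two stand-in strips: 1.0 for the upper image, 0.0 for the lower image.
h, w = 10, 4
upper_band = np.ones((h, w))
lower_band = np.zeros((h, w))

# Weight ramps from 1 (pure upper) at the top to 0 (pure lower) at the bottom.
alpha = np.linspace(1, 0, h).reshape(-1, 1)
blend = alpha * upper_band + (1 - alpha) * lower_band
print(blend[0, 0], blend[-1, 0])  # 1.0 0.0
```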

class Image_knn():
    def fit(self,image):
        self.image = image.astype('float')

    def predict(self,x,y):
        image = self.image
        weights_x = [(1-(x % 1)).reshape(*x.shape,1),(x % 1).reshape(*x.shape,1)]
        weights_y = [(1-(y % 1)).reshape(*x.shape,1),(y % 1).reshape(*x.shape,1)]
        start_x = np.floor(x)
        start_y = np.floor(y)
        return sum([image[np.clip(start_x + dx,0,image.shape[0]-1).astype('int'),np.clip(start_y + dy,0,image.shape[1]-1).astype('int')] * weights_x[dx]*weights_y[dy]
                    for dx,dy in itertools.product(range(2),range(2))])

image_model = Image_knn()
image_model.fit(lower_image)

n = 685
f = UnivariateSpline([0,290,510,n],[0,310,530,700])
yspace = f(np.arange(n))

result_image = np.zeros((466+139,700+22,3))
a,b = np.meshgrid(np.arange(0,lower_image.shape[0]),yspace)
result_image[139:139+lower_image.shape[0],:n] = np.transpose(image_model.predict(a,b),[1,0,2])
result_image[0:upper_image.shape[0],22:22+upper_image.shape[1]] = upper_image
plt.imshow(result_image)

Final Gluing

Tilted gluing

For completeness, here is also a version glued at the top, with the bottom tilted. I glue the pictures together at one point and then turn the lower one by a few degrees around that fixed point. The seam is smoothed by fading the last rows of the upper image into the rows beneath (the rotation step that writes the tilted lower picture into result_image is not shown here):

result_image = np.zeros((466+139,700+22,3))
# ... (the rotated lower picture is written into result_image here)
transition_range = 10
result_image[0:upper_image.shape[0]-transition_range,22:22+upper_image.shape[1]] = upper_image[:-transition_range,:]
transition_pixcels = upper_image[-transition_range:,:]*np.linspace(1,0,transition_range).reshape(-1,1,1)
result_image[upper_image.shape[0]-transition_range:upper_image.shape[0],22:22+upper_image.shape[1]] *= np.linspace(0,1,transition_range).reshape(-1,1,1)
result_image[upper_image.shape[0]-transition_range:upper_image.shape[0],22:22+upper_image.shape[1]] += transition_pixcels
plt.imshow(result_image)
plt.savefig('text.jpg')

Finally, I again correct some very slight misalignment. To get the coordinates for that, I used jupyter lab and

%matplotlib widget

Tilted Gluing
