The following code finds the best-focused image within a set of frames most of the time, but for some sets it returns a higher score for an image that looks blurrier to my eye.

I am using OpenCV 3.4.2 on Linux and/or Mac.
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import static org.opencv.core.Core.BORDER_DEFAULT;

public class LaplacianExample {
    public static Double calcSharpnessScore(Mat srcImage) {
        // Remove noise with a Gaussian filter
        Mat filteredImage = new Mat();
        Imgproc.GaussianBlur(srcImage, filteredImage, new Size(3, 3), 0, 0, BORDER_DEFAULT);

        int kernel_size = 3;
        int scale = 1;
        int delta = 0;
        Mat lplImage = new Mat();
        Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_64F, kernel_size, scale, delta, Core.BORDER_DEFAULT);

        // Convert back to CV_8U (absolute values)
        Mat absLplImage = new Mat();
        Core.convertScaleAbs(lplImage, absLplImage);

        // Use the standard deviation of the absolute image as the sharpness score
        MatOfDouble median = new MatOfDouble();
        MatOfDouble std = new MatOfDouble();
        Core.meanStdDev(absLplImage, median, std);

        return Math.pow(std.get(0, 0)[0], 2);
    }
}
Here are two images taken with the same illumination (fluorescence, DAPI), shot from below the microscope slide while trying to autofocus on the coating/mask on the slide's upper surface.

I'm hoping someone can explain why my algorithm fails to detect the less blurry image. Thanks!
The main problem is that the Laplacian kernel size is too small.

You are using kernel_size = 3, which is too small for the scenario above. With these images, a 3x3 kernel is affected mainly by noise, because the edges (in the image showing more detail) are much larger than 3x3 pixels.

In other words, the spatial frequencies carrying the detail are low, while a 3x3 kernel emphasizes much higher spatial frequencies.

Possible solution: use kernel_size = 11.
There is also a small issue in your code: Core.convertScaleAbs(lplImage, absLplImage) computes the absolute value of the Laplacian result, so the standard deviation computed from it is incorrect.

I suggest the following fixes:

Use Laplacian depth CvType.CV_16S instead of CvType.CV_64F:

Imgproc.Laplacian(filteredImage, lplImage, CvType.CV_16S, kernel_size, scale, delta, Core.BORDER_DEFAULT);

Instead of Core.meanStdDev(absLplImage, median, std), compute the standard deviation on lplImage directly:

Core.meanStdDev(lplImage, median, std);
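The effect of taking the absolute value first can be seen in a tiny numeric sketch (toy numbers, not from the images above): the Laplacian of an edge produces paired positive and negative lobes around a zero mean, and folding them onto one sign shrinks the spread that meanStdDev reports.

```python
import numpy as np

# Toy Laplacian response: an edge yields paired +/- lobes around zero.
lap = np.array([0, 4, -4, 0, 4, -4, 0], dtype=np.float64)

print(np.std(lap))          # ~3.02: true spread of the signed response
print(np.std(np.abs(lap)))  # ~1.98: distorted spread after the absolute value
```

convertScaleAbs additionally saturates the result to 8 bits, clipping strong responses; computing the statistics on the signed Laplacian avoids both distortions.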
I used the following Python code for testing:
import cv2

def calc_sharpness_score(srcImage):
    """ Compute sharpness score for automatic focus """
    filteredImage = cv2.GaussianBlur(srcImage, (3, 3), 0, 0)

    kernel_size = 11
    scale = 1
    delta = 0
    #lplImage = cv2.Laplacian(filteredImage, cv2.CV_64F, ksize=kernel_size, scale=scale, delta=delta)
    lplImage = cv2.Laplacian(filteredImage, cv2.CV_16S, ksize=kernel_size, scale=scale, delta=delta)

    # Compute the standard deviation directly on the signed Laplacian
    # (do not convert to absolute values first)
    #absLplImage = cv2.convertScaleAbs(lplImage)
    #(mean, std) = cv2.meanStdDev(absLplImage)
    (mean, std) = cv2.meanStdDev(lplImage)

    return std[0][0]**2

im1 = cv2.imread('im1.jpg', cv2.IMREAD_GRAYSCALE)  # Read input image as grayscale
im2 = cv2.imread('im2.jpg', cv2.IMREAD_GRAYSCALE)  # Read input image as grayscale

var1 = calc_sharpness_score(im1)
var2 = calc_sharpness_score(im2)
Result:

var1 = 668464355
var2 = 704603944