Calculating the distance between 2 homography planes with a shared ground plane


I think the problem is easiest to explain with an image: two cubes

I have two cubes (same size) lying on a table. One side of each is marked green (to make tracking easier). I want to calculate the relative position (x, y) of the left cube to the right cube (the red line in the picture), in cube-size units.

Is this possible? I know the problem would be simple if the two green sides shared a common plane, like the top faces of the cubes, but I can't use those for tracking. I would just compute the homography from one square and multiply it with the other cube's corners.

Should I "rotate" the homography matrix by multiplying it with a 90-degree rotation matrix to get the "ground" homography? I plan to do the processing in a smartphone scenario, so the gyroscope and camera intrinsics may have arbitrary values.

opencv image-processing distance homography
2 Answers

It is possible. Let's assume (or declare) that the table is the z = 0 plane and that your first box lies at the origin of this plane. This means the green corners of the left box have the (table) coordinates (0,0,0), (1,0,0), (0,0,1) and (1,0,1) (your box has size 1). You also have the pixel coordinates of these points. If you feed these 2D/3D correspondences (plus the camera intrinsics and distortion) to cv::solvePnP, you get the relative pose of the camera to the box (and the plane).

In the next step, you have to intersect the table plane with the ray that goes from the camera center through the pixel of the lower right corner of the second green box. This intersection will look like (x, y, 0), and [x-1, y] will be the translation between the right corners of the boxes.



If you have all the information (camera intrinsics), you can do it the way FooBar answered.

But you can use the information that the points lie on a plane more directly, via a homography (no need to compute rays etc.):

Compute the homography between the image plane and the ground plane. Unfortunately you need 4 point correspondences, but only 3 cube points touching the ground plane are visible in the image. Instead you can use the top plane of the cubes, where the same distances can be measured.

First the code:

    #include "opencv2/core/core.hpp"
    #include "opencv2/calib3d/calib3d.hpp"
    #include "opencv2/highgui/highgui.hpp"
    #include "opencv2/imgproc/imgproc.hpp"
    #include "opencv2/imgcodecs.hpp"

    int main()
    {
        // calibrate plane distance for boxes
        cv::Mat input = cv::imread("../inputData/BoxPlane.jpg");


        // if we had 4 known points on the ground plane, we could use it directly;
        // instead we use the top plane of the cube (height = 1), where the same
        // distances can be measured
        std::vector<cv::Point2f> objectPoints;  
        objectPoints.push_back(cv::Point2f(0,0)); // top front
        objectPoints.push_back(cv::Point2f(1,0)); // top right
        objectPoints.push_back(cv::Point2f(0,1)); // top left
        objectPoints.push_back(cv::Point2f(1,1)); // top back
    
        // image points:
        std::vector<cv::Point2f> imagePoints;
        imagePoints.push_back(cv::Point2f(141,302));// top front
        imagePoints.push_back(cv::Point2f(334,232));// top right
        imagePoints.push_back(cv::Point2f(42,231)); // top left
        imagePoints.push_back(cv::Point2f(223,177));// top back

        cv::Point2f pointToMeasureInImage(741,200); // bottom right of second box
    
    
        // for transform we need the point(s) to be in a vector
        std::vector<cv::Point2f> sourcePoints;
        sourcePoints.push_back(pointToMeasureInImage);
        sourcePoints.push_back(cv::Point2f(718,141));
        sourcePoints.push_back(imagePoints[0]);


        // list with points that correspond to sourcePoints. This is not needed but used to create some output
        std::vector<int> distMeasureIndices;
        distMeasureIndices.push_back(1);
        distMeasureIndices.push_back(3);
        distMeasureIndices.push_back(2);


        // draw points for visualization
        for(unsigned int i=0; i<imagePoints.size(); ++i)
        {
            cv::circle(input, imagePoints[i], 5, cv::Scalar(0,255,255));
        }
    
        // compute the relation between the image plane and the real world top plane of the cubes
        cv::Mat homography = cv::findHomography(imagePoints, objectPoints);



        std::vector<cv::Point2f> destinationPoints;
        cv::perspectiveTransform(sourcePoints, destinationPoints, homography);

        // compute the distance between some defined points (here I use the input points but could be something else)
        for(unsigned int i=0; i<sourcePoints.size(); ++i)
        {
            std::cout << "distance: " << cv::norm(destinationPoints[i] - objectPoints[distMeasureIndices[i]]) << std::endl; 

            cv::circle(input, sourcePoints[i], 5, cv::Scalar(0,255,255));
            // draw the line which was measured
            cv::line(input, imagePoints[distMeasureIndices[i]], sourcePoints[i], cv::Scalar(0,255,255), 2);
        }


        // just for fun, measure distances on the 2nd box:
        float distOn2ndBox = cv::norm(destinationPoints[0]-destinationPoints[1]);
        std::cout << "distance on 2nd box: " << distOn2ndBox << " which should be near 1.0" << std::endl;
        cv::line(input, sourcePoints[0], sourcePoints[1], cv::Scalar(255,0,255), 2);


        cv::imshow("input", input);
        cv::waitKey(0);
        return 0;
    }

Here is the output, which I'd like to explain:

distance: 2.04674
distance: 2.82184
distance: 1
distance on 2nd box: 0.882265 which should be near 1.0

These distances are:

1. the yellow bottom one from one box to the other
2. the yellow top one
3. the yellow one on the first box
4. the pink one

So the length of the red line (the one you asked for) should be almost exactly 2 times the cube's side length. But as you can see, we have some error.


The better/more accurate the pixel positions you feed into the homography computation, the more accurate the results.

You need a pinhole camera model, so undistort your camera images first (in a real-world application).

Also keep in mind that you can compute distances on the ground plane directly if 4 points there are visible (and not all on the same line)!
