Questions related to computational-geometry

Computational geometry is a branch of computer science devoted to the study of algorithms that can be stated in terms of geometry.






Differentiable 2D projection of a 3D tensor in PyTorch? I need to create a 2D projection of a 3D tensor onto a plane tangent to the unit sphere in 3D space, in such a way that the projection is differentiable.

import torch
import torch.nn.functional as F
import math


def get_proj(volume, right_angle, left_angle, distance_to_obj=4, surface_extent=3,
             N_samples_per_ray=200, H_out=128, W_out=128, grid_sample_mode='bilinear'):
    """
    Generates a 2D projection of a 3D volume by casting rays from a specified camera position.

    This function simulates an orthographic projection of a 3D volume onto a 2D plane.
    The camera is positioned on a sphere centered at the origin, with its position determined
    by the provided right and left angles. Rays are cast from the camera through points on a
    plane tangent to the sphere, and the volume is sampled along these rays to produce the
    projection.

    Args:
        volume (torch.Tensor): A 5D tensor of shape (N, C, D, H, W) representing the 3D volume
            to be projected.
        right_angle (float): The azimuthal angle (in radians) determining the camera's position
            around the z-axis.
        left_angle (float): The polar angle (in radians) determining the camera's elevation
            from the xy-plane.
        distance_to_obj (float, optional): The distance from the camera to the origin.
            Defaults to 4.
        surface_extent (float, optional): The half-extent of the tangent plane in world units.
            Defines the plane's size. Defaults to 3.
        N_samples_per_ray (int, optional): The number of sample points along each ray.
            Higher values yield more accurate projections. Defaults to 200.
        H_out (int, optional): The height (in pixels) of the output 2D projection. Defaults to 128.
        W_out (int, optional): The width (in pixels) of the output 2D projection. Defaults to 128.
        grid_sample_mode (str, optional): Interpolation mode passed to F.grid_sample.
            Defaults to 'bilinear'.

    Returns:
        torch.Tensor: A 4D tensor of shape (1, 1, H_out, W_out) representing the 2D projection
        of the input volume.

    Raises:
        ValueError: If the input volume is not a 5D tensor.
        RuntimeError: If the sampling grid is out of the volume's bounds.

    Example:
        ```python
        import torch

        # Create a sample 3D volume
        volume = torch.zeros((1, 1, 32, 32, 32))
        volume[0, 0, 16, :, :] = 1  # Add a plane in the middle

        # Define camera angles
        right_angle = 0.5  # radians
        left_angle = 0.3   # radians

        # Generate the projection
        projection = get_proj(volume, right_angle, left_angle)

        # Visualize the projection
        import matplotlib.pyplot as plt
        plt.imshow(projection.squeeze().cpu().numpy(), cmap='gray')
        plt.show()
        ```

    Note:
        - Ensure that the input volume is normalized to the range [-1, 1] for proper sampling.
        - The function assumes an orthographic projection model.
        - Adjust `N_samples_per_ray` for a trade-off between performance and projection accuracy.
    """
    device = volume.device
    ra = right_angle
    la = left_angle

    # Compute camera position p on the unit sphere.
    p = torch.tensor([
        math.cos(la) * math.cos(ra),
        math.cos(la) * math.sin(ra),
        math.sin(la)
    ]).to(device)
    p *= distance_to_obj
    # p has shape (3,); after the scaling it lies at distance distance_to_obj from the origin.

    # The camera is at position p and always looks to the origin.
    # Define the opposite point on the sphere:
    q = -p
    # This will be the point of tangency of the projection plane.

    # -------------------------------------------------------------------
    # 3. Define an orthonormal basis for the projection plane tangent to the sphere at q.
    #    We need two vectors (right, up) lying in the plane.
    #    One way is to choose a reference vector not colinear with q.
    # -------------------------------------------------------------------
    ref = torch.tensor([0.0, 0.0, 1.0]).to(device)
    if torch.allclose(torch.abs(q), torch.tensor([1.0, 1.0, 1.0]).to(device) * q[0], atol=1e-3):
        ref = torch.tensor([0.0, 1.0, 0.0]).to(device)

    # Compute right as the normalized cross product of ref and q.
    right_vec = torch.cross(ref, q, dim=0)
    right_vec = right_vec / torch.norm(right_vec)
    # Compute up as the cross product of q and right.
    up_vec = torch.cross(q, right_vec, dim=0)
    up_vec = up_vec / torch.norm(up_vec)

    # -------------------------------------------------------------------
    # 4. Build the image plane grid.
    #
    # We want to form an image on the plane tangent to the sphere at q.
    #
    # A convenient parameterization is:
    #
    #     For (u, v) in some range, the 3D point on the plane is:
    #         P(u, v) = q + u * right_vec + v * up_vec.
    #
    # Note: q is perpendicular to both right_vec and up_vec, so every P(u, v)
    # stays in the plane through q that is tangent to the sphere.
    #
    # Choose an output image resolution and an extent for u and v.
    # -------------------------------------------------------------------
    # Choose an extent so that the sampled points remain in [-1, 1]^3.
    # (Since our volume covers [-1, 1]^3, a modest extent is needed.)
    extent = surface_extent  # you may adjust this value
    u_vals = torch.linspace(-extent, extent, W_out).to(device)
    v_vals = torch.linspace(-extent, extent, H_out).to(device)
    grid_v, grid_u = torch.meshgrid(v_vals, u_vals, indexing='ij')  # shapes: (H_out, W_out)

    # For each pixel (u, v) on the plane, compute its world coordinate:
    #     P = q + u * right_vec + v * up_vec.
    plane_points = q.unsqueeze(0).unsqueeze(0) + \
                   grid_u.unsqueeze(-1) * right_vec + \
                   grid_v.unsqueeze(-1) * up_vec
    # plane_points shape: (H_out, W_out, 3)

    # -------------------------------------------------------------------
    # 5. For each pixel, sample along the ray from the camera p through the point P.
    #
    # Since the camera is at p and the ray passing through a pixel is along the line
    # from p to P, the ray can be parameterized as:
    #
    #     r(t) = p + t * (P - p),  for t in [0, 1]
    #
    # t=0 gives the camera position, t=1 gives the intersection with the image plane (P).
    # -------------------------------------------------------------------
    N_samples = N_samples_per_ray
    t_vals = torch.linspace(0, 1, N_samples).to(device)  # shape: (N_samples,)

    # Expand plane_points to sample along t:
    # plane_points has shape (H_out, W_out, 3). We want to combine it with p.
    # Compute (P - p): note that p is a vector; we can reshape it appropriately.
    P_minus_p = plane_points - p.unsqueeze(0).unsqueeze(0)  # shape: (H_out, W_out, 3)

    # Now, for each t, compute the sample point:
    #     sample_point(t, u, v) = p + t * (P(u, v) - p)
    sample_grid = p.unsqueeze(0).unsqueeze(0).unsqueeze(0) + \
                  t_vals.view(N_samples, 1, 1, 1) * P_minus_p.unsqueeze(0)
    # sample_grid now has shape: (N_samples, H_out, W_out, 3).

    # Add a batch dimension (batch size 1) so that grid_sample sees a grid of shape:
    #     (1, N_samples, H_out, W_out, 3)
    sample_grid = sample_grid.unsqueeze(0)

    # IMPORTANT: grid_sample expects the grid coordinates in the normalized coordinate system
    # of the input volume. Here our volume is defined on [-1, 1]^3. Make sure that the computed
    # sample_grid falls in that range. (Depending on extent, p, etc., you may need to adjust.)
    # For our setup, choose the parameters so that sample_grid is within [-1, 1].

    # -------------------------------------------------------------------
    # 6. Use grid_sample to sample the volume along each ray and integrate.
    # -------------------------------------------------------------------
    # grid_sample expects an input volume of shape [N, C, D, H, W] and a grid of shape
    # [N, D_out, H_out, W_out, 3].
    proj_samples = F.grid_sample(volume, sample_grid, mode=grid_sample_mode, align_corners=False)
    # proj_samples has shape: (1, 1, N_samples, H_out, W_out)

    # For a simple projection (like an X-ray), integrate along the ray.
    # Here we simply sum along the sample (ray) dimension.
    proj_image = proj_samples.sum(dim=2)  # shape: (1, 1, H_out, W_out)
    return proj_image
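Since the whole point is differentiability, a quick sanity check (a minimal sketch that assumes the get_proj above is in scope; the shapes and angles are arbitrary) is to backpropagate a scalar loss through the projection and confirm that the input volume receives a gradient:

import torch

# Verify that gradients flow back to the input volume through grid_sample and the sum.
volume = torch.zeros((1, 1, 32, 32, 32), requires_grad=True)
proj = get_proj(volume, right_angle=0.5, left_angle=0.3)
loss = proj.sum()          # any scalar loss works for this check
loss.backward()
print(volume.grad.shape)   # torch.Size([1, 1, 32, 32, 32]) -- gradients reach the volume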

1 answer, 0 votes





SciPy minimize to find an inverse function?

I have a (non-invertible) function ak([u,v,w]) that takes a point on the surface of the unit octahedron (p such that |u|+|v|+|w| = 1) and returns a point on the surface of the unit sphere. Interestingly...
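A common way to attack this (a minimal sketch; the ak below is only a stand-in radial projection, not the poster's actual function) is to minimize the squared distance between ak(p) and the target sphere point while constraining p to the octahedron |u|+|v|+|w| = 1:

import numpy as np
from scipy.optimize import minimize

def ak(p):
    # Placeholder for the real octahedron-to-sphere map: radial normalization.
    return p / np.linalg.norm(p)

def invert_ak(target, x0=np.array([1/3, 1/3, 1/3])):
    # Minimize |ak(p) - target|^2 subject to |u| + |v| + |w| = 1.
    objective = lambda p: np.sum((ak(p) - target) ** 2)
    constraint = {'type': 'eq', 'fun': lambda p: np.sum(np.abs(p)) - 1.0}
    res = minimize(objective, x0, method='SLSQP', constraints=[constraint])
    return res.x

target = np.array([0.0, 0.6, 0.8])   # a point on the unit sphere
p = invert_ak(target)
print(p, ak(p))                      # p on the octahedron, ak(p) close to target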

1 answer, 0 votes

(2D) How to give a Bézier curve width using Python?

If I have a Bézier curve, how do I give it width, and how do I get the vertices of its outline? My attempt: I drew a Bézier curve using a start point, two control points, and an end point...
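One way to do it (a minimal NumPy sketch, assuming a cubic Bézier given by four control points) is to sample the curve, build unit normals from the tangents, and offset the samples by ±width/2; the two offset polylines, joined end to end, are the outline vertices:

import numpy as np

def cubic_bezier(P0, P1, P2, P3, t):
    # Evaluate the cubic Bézier at parameters t (shape (n,)); returns (n, 2) points.
    t = t[:, None]
    return ((1 - t) ** 3 * P0 + 3 * (1 - t) ** 2 * t * P1
            + 3 * (1 - t) * t ** 2 * P2 + t ** 3 * P3)

def bezier_outline(P0, P1, P2, P3, width, n=100):
    t = np.linspace(0.0, 1.0, n)
    pts = cubic_bezier(P0, P1, P2, P3, t)          # (n, 2) points on the curve
    tangents = np.gradient(pts, axis=0)            # finite-difference tangents
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)
    left = pts + 0.5 * width * normals
    right = pts - 0.5 * width * normals
    return np.vstack([left, right[::-1]])          # closed outline polygon

P0, P1, P2, P3 = map(np.array, ([0, 0], [1, 2], [3, 2], [4, 0]))
outline = bezier_outline(P0, P1, P2, P3, width=0.3)
print(outline.shape)   # (200, 2) outline vertices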

1 answer, 0 votes

Removing sliver regions from a 2D loop of points

I want to remove sliver regions from a closed 2D loop of points. For example, here are the points in my loop. Boundary node data: Node_ID U_param V_param 298 -1.570694 1.933077 859 ...
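One practical approach (a minimal sketch using Shapely, which is an assumption, with eps tuned to your U/V units) is a morphological opening: a negative buffer followed by a positive buffer of the same size removes any protrusion thinner than about 2*eps, and swapping the order of the buffers closes thin notches instead:

from shapely.geometry import Polygon

def remove_slivers(points, eps=0.05):
    # Shrink then regrow: protrusions narrower than ~2*eps disappear.
    # (Use buffer(+eps).buffer(-eps) instead to fill thin notches.)
    poly = Polygon(points)
    cleaned = poly.buffer(-eps).buffer(eps)
    return list(cleaned.exterior.coords)

# A 4 x 2 rectangle with a hair-thin tab sticking out of its top edge.
loop = [(0, 0), (4, 0), (4, 2), (2.001, 2), (2.001, 3), (2, 3), (2, 2), (0, 2)]
print(len(remove_slivers(loop)), "boundary points after cleaning")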

1 answer, 0 votes

How to construct a sphere from cones or triangular pyramids

I have cones (or triangular pyramids) with a fixed apex angle. How can I construct or fill a sphere with these cones (or triangular pyramids), as shown here: http://blog.andreaskahler.com/2009/06/
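Following the icosphere construction in the linked post, one sketch (assuming the pyramids' common apex sits at the sphere's centre) is to subdivide an icosahedron until each spherical triangle is narrower than the fixed apex angle; every face then spans one triangular pyramid from the centre, and together the pyramids fill the ball:

import numpy as np

def icosahedron():
    # Unit-sphere icosahedron (vertex and face lists as in the referenced blog post).
    phi = (1 + 5 ** 0.5) / 2
    raw = [(-1, phi, 0), (1, phi, 0), (-1, -phi, 0), (1, -phi, 0),
           (0, -1, phi), (0, 1, phi), (0, -1, -phi), (0, 1, -phi),
           (phi, 0, -1), (phi, 0, 1), (-phi, 0, -1), (-phi, 0, 1)]
    verts = [np.array(p) / np.linalg.norm(p) for p in raw]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts, faces

def subdivide(verts, faces):
    # Split every triangle into four and push the new vertices back onto the sphere.
    verts = list(verts)
    cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))
            cache[key] = len(verts) - 1
        return cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return verts, new_faces

verts, faces = icosahedron()
for _ in range(2):            # each pass roughly halves the pyramids' apex angle
    verts, faces = subdivide(verts, faces)
# Each face (A, B, C) together with the origin is one triangular pyramid; the resulting
# pyramids tile the ball, and more subdivision passes make them arbitrarily narrow.
print(len(faces), "pyramids")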

1 answer, 0 votes

Is there a pyramid beam-AABB intersection algorithm?

I'm trying to figure out a way to check whether an axis-aligned bounding box (AABB) intersects a pyramid beam. Is there a known algorithm? Does anyone know how...
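A workable sketch (in Python for brevity; it treats the beam as a convex volume bounded by outward-facing planes, exactly as in frustum culling, and the example beam below is hypothetical) tests the AABB against each plane with the usual n-vertex trick. The test is conservative: a box that no single plane separates is reported as intersecting.

import numpy as np

def aabb_outside_plane(box_min, box_max, normal, d):
    # Plane: normal . x + d = 0, with normal pointing out of the beam.
    # Pick the box corner farthest in the -normal direction (the "n-vertex"):
    # if even that corner is on the positive side, the whole box lies outside.
    n_vertex = np.where(normal >= 0, box_min, box_max)
    return np.dot(normal, n_vertex) + d > 0

def beam_intersects_aabb(planes, box_min, box_max):
    # planes: list of (normal, d) pairs describing the beam's bounding planes
    # (four side planes, plus optionally near/far planes).
    return not any(aabb_outside_plane(box_min, box_max, n, d) for n, d in planes)

# Hypothetical beam: apex at the origin, opening along +z with a 45-degree half-angle.
planes = [
    (np.array([ 1.0,  0.0, -1.0]), 0.0),   #  x <= z
    (np.array([-1.0,  0.0, -1.0]), 0.0),   # -x <= z
    (np.array([ 0.0,  1.0, -1.0]), 0.0),   #  y <= z
    (np.array([ 0.0, -1.0, -1.0]), 0.0),   # -y <= z
]
print(beam_intersects_aabb(planes, np.array([-1, -1, 2]), np.array([1, 1, 3])))   # True
print(beam_intersects_aabb(planes, np.array([5, 5, 0]), np.array([6, 6, 1])))     # False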

1 answer, 0 votes

CGAL arrangement: retrieve the original segment from a halfedge (in a natural way)

I have this code snippet, which just computes the arrangement of a few segments:

#include <iostream>
#include <vector>
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Arr_segment_traits_2.h>
#include <CGAL/Arrangement_2.h>

typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
typedef CGAL::Arr_segment_traits_2<Kernel> Traits_2;   // Traits for segments
typedef Traits_2::Segment_2 Segment_2;
typedef CGAL::Arrangement_2<Traits_2> Arrangement_2;

int main() {
    Arrangement_2 arrangement;
    std::vector<Segment_2> segments = {
        Segment_2(Kernel::Point_2(0.5, 0), Kernel::Point_2(2, 2)),
        Segment_2(Kernel::Point_2(0, 1.5), Kernel::Point_2(2, 1)),
        Segment_2(Kernel::Point_2(1, 0), Kernel::Point_2(1, 2)),
        Segment_2(Kernel::Point_2(0, 2), Kernel::Point_2(2, 0)),
        Segment_2(Kernel::Point_2(0, 1.5), Kernel::Point_2(2, 2))
    };
    CGAL::insert(arrangement, segments.begin(), segments.end());

    std::cout << "Vertices:" << std::endl;
    for (auto vit = arrangement.vertices_begin(); vit != arrangement.vertices_end(); ++vit) {
        std::cout << "(" << vit->point() << ")" << std::endl;
    }
    return 0;
}

Now, in a project, I need functionality that lets me recover the original segment of which a given halfedge is a sub-segment. For example, given the halfedge e in my drawing, I want to learn that it is a sub-segment of segment GH. Is there a way to do this in CGAL? It seems like a natural feature, but I couldn't find it anywhere.

Yes, you can use Arrangement_with_history_2, as @EfiFogel suggested. However, in your case there is a simple way to do what you want with the function CGAL::intersection: for each arrangement edge, look up a segment in the original segment vector that contains that edge. See the code below:

#include <iostream>
#include <vector>
#include <CGAL/Exact_predicates_exact_constructions_kernel.h>
#include <CGAL/Arr_segment_traits_2.h>
#include <CGAL/Arrangement_2.h>

using Kernel      = CGAL::Exact_predicates_exact_constructions_kernel;
using Traits      = CGAL::Arr_segment_traits_2<Kernel>;
using Point       = Traits::Point_2;
using Segment     = Traits::Segment_2;
using Arrangement = CGAL::Arrangement_2<Traits>;
using SVec        = std::vector<Segment>;

// ------ return true if the segment S contains the point P
bool contains(Segment const& S, Point const& P) {
    return CGAL::intersection(S, P).has_value();
}

// ------ return true if the segment S1 contains the segment S2
bool contains(Segment const& S1, Segment const& S2) {
    return contains(S1, S2.source()) && contains(S1, S2.target());
}

// ------ return index of segment in the vector V, containing the segment S
int find(SVec const& V, Segment const& S) {
    int res = -1;
    for (auto i = 0U; i < V.size(); ++i) {
        if (contains(V[i], S)) { res = i; break; }
    }
    return res;
}

int main() {
    Arrangement arrangement;
    SVec const segments = {
        {{0.5, 0  }, {2, 2}},
        {{0  , 1.5}, {2, 1}},
        {{1  , 0  }, {1, 2}},
        {{0  , 2  }, {2, 0}},
        {{0  , 1.5}, {2, 2}}
    };
    CGAL::insert(arrangement, segments.begin(), segments.end());

    std::cout << "Edges and Original Segments:" << std::endl;
    for (auto const& e : arrangement.edge_handles()) {
        Segment const s{e->source()->point(), e->target()->point()};
        std::cout << '(' << s << ") in (" << segments[find(segments, s)] << ')' << std::endl;
    }
}

If performance is a concern, you could (for example) store the original segments in a std::multiset<Segment> (rather than a vector), sorted by angle.

1 answer, 0 votes

SVD decomposition in nalgebra behaves strangely

I'm trying to compute the SVD of a matrix, and as a toy example I used a vector. I ran my code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=
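For comparison, here is what the SVD of a single-column matrix looks like in NumPy (a minimal reference sketch, not nalgebra itself); the singular value is the vector's norm, and the signs of U and V are not unique:

import numpy as np

a = np.array([[3.0], [4.0]])          # a column vector treated as a 2x1 matrix
U, s, Vt = np.linalg.svd(a, full_matrices=False)
print(s)    # [5.]                     the only singular value is the vector's norm
print(U)    # [[0.6], [0.8]] up to sign: the normalized vector
print(Vt)   # [[1.]] or [[-1.]]        the sign is not unique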

2 answers, 0 votes

How to determine whether four lines form a quadrilateral, and whether it is convex or concave?

In C++, how can I detect whether four arbitrarily given lines form a quadrilateral, whether it is convex or concave, and whether they form more than one quadrilateral? For example, like this: There is this one: And this one: ...
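A sketch of one usual approach (in Python for brevity; lines are assumed to be given as (a, b, c) with ax + by = c): for a chosen cyclic ordering of the four lines, intersect consecutive pairs to get the vertices, then classify by the signs of the cross products of consecutive edges; trying the three distinct cyclic orderings covers every quadrilateral the lines can form.

import numpy as np

def intersect(l1, l2):
    # Intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2, or None if parallel.
    A = np.array([l1[:2], l2[:2]], dtype=float)
    b = np.array([l1[2], l2[2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-12:
        return None
    return np.linalg.solve(A, b)

def classify(lines):
    # lines: four (a, b, c) tuples in the cyclic order they should be chained.
    verts = [intersect(lines[i], lines[(i + 1) % 4]) for i in range(4)]
    if any(v is None for v in verts):
        return "no quadrilateral for this ordering"
    crosses = []
    for i in range(4):
        e1 = verts[(i + 1) % 4] - verts[i]
        e2 = verts[(i + 2) % 4] - verts[(i + 1) % 4]
        crosses.append(e1[0] * e2[1] - e1[1] * e2[0])   # 2D cross product
    if all(c > 0 for c in crosses) or all(c < 0 for c in crosses):
        return "convex"
    return "concave or self-intersecting"

# x = 0, y = 0, x = 2, y = 2 chained in this order form a 2 x 2 square.
print(classify([(1, 0, 0), (0, 1, 0), (1, 0, 2), (0, 1, 2)]))   # convex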

1 answer, 0 votes

3D cave generation

I need to connect two polygons of different sizes into a single section. The polygons may lie in planes different from each other. The polygons may also be convex or non-convex. I need an algorithm...
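A common starting point (a minimal sketch; it assumes both loops have been resampled to the same vertex count and share the same winding direction) is to stitch the two rings with a strip of triangles, rotating one ring so that matched vertices are close:

import numpy as np

def stitch_loops(loop_a, loop_b):
    # loop_a, loop_b: (n, 3) arrays of 3D vertices, same length, same winding.
    n = len(loop_a)
    # Rotate loop_b so its first vertex is the one closest to loop_a[0],
    # which avoids a twisted band.
    start = np.argmin(np.linalg.norm(loop_b - loop_a[0], axis=1))
    loop_b = np.roll(loop_b, -start, axis=0)
    verts = np.vstack([loop_a, loop_b])
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris.append((i, j, n + i))          # triangle on the loop_a side
        tris.append((j, n + j, n + i))      # triangle on the loop_b side
    return verts, tris

theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
ring1 = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
ring2 = np.stack([2 * np.cos(theta), 2 * np.sin(theta), np.full_like(theta, 3.0)], axis=1)
verts, tris = stitch_loops(ring1, ring2)
print(len(verts), "vertices,", len(tris), "triangles")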

1 answer, 0 votes

How to expand a polygon until one of its boundaries reaches a point

I have code that expands a polygon; it works by multiplying the xs and ys by a factor and then re-centering the resulting polygon on the original polygon's center. I also have code to find...
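Given that the existing code scales the xs and ys about the polygon's centre, the required factor can be computed directly instead of searched for (a sketch using Shapely, which is an assumption, not the poster's code): shoot a ray from the centre through the target point, find where it crosses the original boundary, and take the ratio of the two distances.

import numpy as np
from shapely.geometry import Polygon, LineString, Point

def expansion_factor(poly_pts, target):
    poly = Polygon(poly_pts)
    c = np.array(poly.centroid.coords[0])   # scaling centre (here: the centroid)
    t = np.asarray(target, dtype=float)
    # Ray from the centre through the target, made long enough to cross the boundary.
    direction = (t - c) / np.linalg.norm(t - c)
    far = c + direction * 1e6
    hit = poly.exterior.intersection(LineString([tuple(c), tuple(far)]))
    # If the ray crosses the boundary more than once, keep the crossing nearest the centre.
    if hit.geom_type == 'MultiPoint':
        hit = min(hit.geoms, key=lambda p: Point(c[0], c[1]).distance(p))
    b = np.array([hit.x, hit.y])
    # Scaling about c by a factor k sends b to c + k * (b - c); pick k so b lands on target.
    return np.linalg.norm(t - c) / np.linalg.norm(b - c)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(expansion_factor(square, target=(5, 1)))   # 4.0: scaling by 4 puts (5, 1) on the boundary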

3 answers, 0 votes
