By Tianzhi Jia / 2022-05-16
draw_registration_result
Visualizes the alignment during registration. In this tutorial, we show the different ICP variants and the API for using them.
def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()
    source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.4459,
        front=[0.9288, -0.2951, -0.2242],
        lookat=[1.6784, 2.0612, 1.4451],
        up=[-0.3402, -0.9189, -0.1996])
TransformationEstimationPointToPoint provides the functions to compute the residuals and Jacobians for the point-to-point ICP objective.
TransformationEstimationPointToPlane provides the functions to compute the residuals and Jacobians for the point-to-plane ICP objective.
TransformationEstimationForColoredICP provides the functions to compute the residuals and Jacobians for the joint optimization objective.
Note: the tensor-based ICP implementation API differs slightly from the Eigen-based (legacy) ICP implementation, in order to support more functionality.
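The difference between the two classic objectives can be sketched in plain NumPy (a minimal illustration of the residual definitions with toy correspondences, not Open3D's internals):

```python
import numpy as np

# Toy correspondences: source points, matched target points, target normals.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
tgt = np.array([[0.1, 0.0, 0.0], [1.0, 0.2, 0.0]])
nrm = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # unit normals at target points

# Point-to-point residual: Euclidean distance between matched points.
r_p2p = np.linalg.norm(src - tgt, axis=1)

# Point-to-plane residual: displacement projected onto the target normal.
r_p2l = np.sum((src - tgt) * nrm, axis=1)

print(r_p2p)  # [0.1 0.2]
print(r_p2l)  # [-0.1 -0.2]
```

The point-to-plane variant only penalizes deviation along the surface normal, which is why it typically converges faster on smooth scans.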
Input
The transformation between the two PointClouds [open3d.t.PointCloud] is to be estimated. Note: the initial alignment is usually obtained by a global registration algorithm.
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# For Colored-ICP, the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
Max correspondence distance
A double for icp, or a utility.DoubleVector for multi-scale-icp. It is generally recommended to use 1.0x - 3.0x of the voxel-size parameter.
# For Vanilla ICP (double)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):
# `max_correspondence_distances` is proportional to the resolution, i.e. the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may use a higher value of `max_correspondence_distances` for the first, coarse
# scale, as it is not very expensive and gives more tolerance to the initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
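The 1x - 3x rule above can be checked directly against the values used in this tutorial (plain Python; the lists mirror the snippet):

```python
voxel_sizes = [0.1, 0.05, 0.025]
max_correspondence_distances = [0.3, 0.14, 0.07]

# Each search radius should fall within 1x - 3x of its voxel size.
ratios = [d / v for d, v in zip(max_correspondence_distances, voxel_sizes)]
assert all(1.0 <= r <= 3.0 for r in ratios)
print(["%.1f" % r for r in ratios])  # roughly 3.0, 2.8, 2.8
```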
Initial transform
A transformation matrix Tensor of shape [4, 4] on the CPU:0 device.
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
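Before passing it in, the initial transform can be sanity-checked as a rigid (SE(3)) transform; a quick NumPy sketch using the matrix above:

```python
import numpy as np

T = np.array([[0.862, 0.011, -0.507, 0.5],
              [-0.139, 0.967, -0.215, 0.7],
              [0.487, 0.255, 0.835, -1.4],
              [0.0, 0.0, 0.0, 1.0]])

R = T[:3, :3]
# A valid rigid transform has a (near-)orthonormal rotation block with
# determinant +1, and [0, 0, 0, 1] as its last row.
assert abs(np.linalg.det(R) - 1.0) < 1e-2
assert np.allclose(R.T @ R, np.eye(3), atol=1e-2)
assert np.allclose(T[3], [0.0, 0.0, 0.0, 1.0])
```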
Estimation Method
TransformationEstimationPointToPlane requires the target point cloud to have a normals attribute (of the same dtype as the positions attribute). TransformationEstimationForColoredICP requires the target point cloud to have a normals attribute, and both the source and target point clouds to have a colors attribute (each of the same dtype as the positions attribute).
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
Robust Kernels
Robust kernels are used for outlier rejection. For more on this, see the Robust Kernel section.
robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)
ICPConvergenceCriteria
For Multi-Scale ICP, this is a list of ICPConvergenceCriteria, one per ICP scale, which gives finer control over performance. Keep relative_fitness and relative_rmse loose (higher values) for the initial coarse scales, where we only want a rough estimate of the transformation, and tighter (lower values) for the later iterations, which do the fine-tuning.
# Convergence-Criteria for Vanilla ICP:
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:
# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for the initial scales, i.e. we are happy to consider ICP converged when the difference
# between 2 successive iterations at that scale is smaller than this value.
# We expect less accuracy (more error tolerance) from the initial coarse-scale iterations,
# and want the later scales to converge more accurately (less error tolerance).
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
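What the relative thresholds express can be illustrated with a small helper (a hypothetical check mirroring the meaning of the criteria, not Open3D's internal code):

```python
def has_converged(prev_fitness, fitness, prev_rmse, rmse,
                  relative_fitness, relative_rmse):
    # ICP stops at a scale when the change between two successive
    # iterations drops below both relative thresholds.
    return (abs(prev_fitness - fitness) < relative_fitness and
            abs(prev_rmse - rmse) < relative_rmse)

# A loose (coarse-scale) criterion declares convergence early ...
loose = has_converged(0.90, 0.90005, 0.020, 0.02002, 1e-4, 1e-4)
# ... while a strict (fine-scale) criterion keeps iterating.
strict = has_converged(0.90, 0.90005, 0.020, 0.02002, 1e-6, 1e-6)
print(loose, strict)  # True False
```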
# Vanilla ICP
voxel_size = 0.025
# A lower `voxel_size` means a higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
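The coarse-to-fine idea can be seen with a small NumPy voxel down-sampling sketch (a simplified stand-in for voxel down-sampling that keeps one centroid per occupied voxel; the random cloud is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(1000, 3))

def voxel_down_sample(points, voxel_size):
    # Bucket points into a voxel grid and keep one centroid per occupied voxel.
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n).reshape(-1, 1)
    return sums / counts

counts = [len(voxel_down_sample(points, v)) for v in (0.1, 0.05, 0.025)]
print(counts)  # fewer points at coarser (larger) voxel sizes
```

Coarser voxels merge more points, so the first ICP scales run on far fewer points than the final dense scale.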
save_loss_log
When set to True, the iteration-wise values of fitness, inlier_rmse, transformation, scale and iteration are saved in loss_log_ in the registration_result. Default: False.
save_loss_log = True
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4],
[0.0, 0.0, 0.0, 1.0]])
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
relative_rmse=0.000001,
max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
Increasing the max_correspondence_distance may resolve this, but it takes longer to process.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point-cloud before using
# ICP algorithm, for better performance.
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
draw_registration_result(source, target, registration_icp.transformation)
Multi-Scale ICP addresses the problems above. Running ICP directly on a dense point cloud is slow, and a dense cloud needs a large max_correspondence_distance to tolerate a poor initial alignment, which is expensive. On a coarse (down-sampled) cloud, a large max_correspondence_distance is cheap, and convergence is less likely to get stuck in a local minimum. Once we have a good estimate, far fewer iterations on the dense point cloud are needed to converge to a more accurate transform. Multi-Scale ICP is therefore recommended over plain ICP for efficient convergence, especially for large point clouds.
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])
# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=20),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]
# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])
# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
# Setting Verbosity to Debug, helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source, target, registration_ms_icp.transformation)
We can use the registration_result.loss_log to plot the convergence and fine-tune our application.
from matplotlib import pyplot as plt
def plot_rmse(registration_result):
    fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
    axes.set_title("Inlier RMSE vs Iteration")
    axes.plot(registration_result.loss_log["index"].numpy(),
              registration_result.loss_log["inlier_rmse"].numpy())

def plot_scale_wise_rmse(registration_result):
    scales = registration_result.loss_log["scale"].numpy()
    iterations = registration_result.loss_log["iteration"].numpy()
    num_scales = int(scales[-1][0]) + 1
    fig, axes = plt.subplots(nrows=1, ncols=num_scales, figsize=(20, 5))
    masks = {}
    for scale in range(0, num_scales):
        masks[scale] = registration_result.loss_log["scale"] == scale
        rmse = registration_result.loss_log["inlier_rmse"][masks[scale]].numpy()
        iteration = registration_result.loss_log["iteration"][
            masks[scale]].numpy()
        title_prefix = "Scale Index: " + str(scale)
        axes[scale].set_title(title_prefix + " Inlier RMSE vs Iteration")
        axes[scale].plot(iteration, rmse)
print("Vanilla ICP")
plot_rmse(registration_icp)
print("Multi Scale ICP")
plot_rmse(registration_ms_icp)
plot_scale_wise_rmse(registration_ms_icp)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
axes.set_title("Vanilla ICP and Multi-Scale ICP `Inlier RMSE` vs `Iteration`")
if len(registration_ms_icp.loss_log["index"]) > len(
registration_icp.loss_log["inlier_rmse"]):
axes.plot(registration_ms_icp.loss_log["index"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy())
else:
axes.plot(registration_icp.loss_log["index"].numpy(),
registration_icp.loss_log["inlier_rmse"].numpy(),
registration_ms_icp.loss_log["inlier_rmse"].numpy())
# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()
registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
voxel_sizes, criteria_list,
max_correspondence_distances,
init_source_to_target, estimation,
save_loss_log)
ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)
draw_registration_result(source.cpu(), target.cpu(),
registration_ms_icp.transformation)
Case of no correspondences
If no correspondences are found (no_correspondences), the fitness and inlier_rmse are 0.
max_correspondence_distance = 0.02
init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
[0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])
registration_icp = treg.icp(source, target, max_correspondence_distance,
init_source_to_target)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)
if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
    print("ICP Convergence Failed, as no correspondences were found")
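The reported values can be reproduced from correspondences by hand; a NumPy sketch of the definitions (fitness = inliers / source points; RMSE over inlier residuals; the nearest-neighbour distances are hypothetical values for illustration):

```python
import numpy as np

max_correspondence_distance = 0.15
source_points = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                          [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
# Hypothetical nearest-neighbour distances from each source point to the target.
nn_dist = np.array([0.05, 0.10, 0.40, 0.12])

inlier = nn_dist < max_correspondence_distance
fitness = inlier.sum() / len(source_points)          # inliers / source points
inlier_rmse = np.sqrt(np.mean(nn_dist[inlier] ** 2)) if inlier.any() else 0.0

print(fitness)  # 0.75
```

With no inliers at all, both values fall back to 0, which is exactly the failure case checked above.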
Information Matrix
The Information Matrix gives us further information about how well the two point clouds are aligned.
information_matrix = treg.get_information_matrix(
source, target, max_correspondence_distances[2],
registration_ms_icp.transformation)
print(information_matrix)
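One common construction of such a 6x6 information matrix in registration pipelines accumulates an approximate Hessian over the corresponding points (a sketch under that assumption; not necessarily Open3D's exact internal convention):

```python
import numpy as np

def skew(p):
    # Cross-product matrix of a 3-vector.
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def information_matrix(target_points):
    # Each correspondence to target point q contributes G^T G with
    # G = [ skew(q) | -I ]; sign conventions vary but do not affect G^T G.
    info = np.zeros((6, 6))
    for q in target_points:
        G = np.hstack([skew(q), -np.eye(3)])
        info += G.T @ G
    return info

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
info = information_matrix(pts)
print(info.shape)  # (6, 6)
```

More (and better-spread) correspondences make this matrix "larger", i.e. the alignment is better constrained in all six degrees of freedom.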
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Initial guess transform between the two point-clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
[-0.139, 0.967, -0.215, 0.7],
[0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])
draw_registration_result(source, target, trans_init)
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
max_correspondence_distance, trans_init)
print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02
# Initial alignment or source to target transform.
init_source_to_target = trans_init
# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1
# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()
reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)
draw_registration_result(source, target, reg_point_to_point.transformation)
estimation = treg.TransformationEstimationPointToPlane()
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
relative_rmse=0.0000001,
max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation, criteria,
voxel_size, save_loss_log)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
# Overriding the visualization function with the best camera view for the colored-icp sample data.
def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()
    source_temp.transform(transformation)
    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.5,
        front=[-0.2458, -0.8088, 0.5342],
        lookat=[1.7745, 2.2305, 0.9787],
        up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
source = o3d.t.io.read_point_cloud("../../test_data/ColoredICP/frag_115.ply")
target = o3d.t.io.read_point_cloud("../../test_data/ColoredICP/frag_116.ply")
# For Colored-ICP, the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
o3d.core.Dtype.Float32) / 255.0
# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)
estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()
reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)
draw_registration_result(source, target, reg_point_to_plane.transformation)
estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)
criteria_list = [
treg.ICPConvergenceCriteria(relative_fitness=0.0001,
relative_rmse=0.0001,
max_iteration=50),
treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]
max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])
voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# colored pointcloud registration
# This is implementation of following paper
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017
print("Colored point cloud registration")
s = time.time()
reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
criteria_list,
max_correspondence_distances,
init_source_to_target, estimation)
icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_multiscale_icp.fitness)
print("Inlier RMSE: ", reg_multiscale_icp.inlier_rmse)
draw_registration_result(source, target, reg_multiscale_icp.transformation)
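The joint objective from Park et al. combines a geometric (point-to-plane) term with a photometric (color) term. Schematically, per correspondence (a sketch for illustration; the weighting parameter, exposed as lambda_geometric in Open3D's colored-ICP estimation, and the toy values are assumptions):

```python
import numpy as np

def colored_icp_residual(p_src, p_tgt, n_tgt, c_src, c_tgt,
                         lambda_geometric=0.968):
    # Geometric term: squared point-to-plane distance.
    r_geo = np.dot(p_src - p_tgt, n_tgt)
    # Photometric term: squared intensity difference at the correspondence.
    r_col = c_src - c_tgt
    return lambda_geometric * r_geo**2 + (1.0 - lambda_geometric) * r_col**2

res = colored_icp_residual(np.array([0.1, 0.0, 0.0]), np.zeros(3),
                           np.array([1.0, 0.0, 0.0]), 0.5, 0.45)
print(round(res, 6))  # 0.00976
```

The color term disambiguates sliding along flat, textured surfaces (e.g. a painted wall), where a purely geometric objective is degenerate.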