【Open3D】Pipelines (Tensor): ICP Registration

By Tianzhi Jia / 2022-05-16

  • This tutorial demonstrates the ICP (Iterative Closest Point) registration algorithm. Over the years it has been a mainstay of geometric registration in both research and industry. The input is two point clouds and an initial transformation that roughly aligns the source point cloud to the target point cloud. The output is a refined transformation that tightly aligns the two point clouds. The helper function draw_registration_result visualizes the alignment during the registration process. In this tutorial, we show different ICP variants and the APIs for using them.

Helper Visualization Function

  • The function below visualizes the target point cloud and the source point cloud transformed by the alignment transformation. The target and source point clouds are painted cyan and yellow, respectively. The more and the tighter the two point clouds overlap, the better the alignment result.
import time

import numpy as np
import open3d as o3d
import open3d.core as o3c

treg = o3d.t.pipelines.registration


def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()

    source_temp.transform(transformation)

    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.4459,
        front=[0.9288, -0.2951, -0.2242],
        lookat=[1.6784, 2.0612, 1.4451],
        up=[-0.3402, -0.9189, -0.1996])

Understanding the ICP Algorithm

  • In general, the ICP algorithm iterates over two steps:
  • 1. Find the correspondence set K = {(p, q)} from the target point cloud P and the source point cloud Q transformed with the current transformation matrix T.
  • 2. Update the transformation T by minimizing an objective function E(T) defined over the correspondence set K.
  • Different variants of ICP use different objective functions E(T) [BeslAndMcKay1992] [ChenAndMedioni1992] [Park2017]; the loop itself is sketched below.
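  • Below is a minimal NumPy sketch of this two-step loop (the point-to-point variant), given only for intuition: it assumes SciPy's cKDTree for the neighbour search and a closed-form Kabsch/SVD update, and is not the Open3D implementation.
import numpy as np
from scipy.spatial import cKDTree


def icp_sketch(src, tgt, T_init, max_dist=0.07, iters=30):
    """src, tgt: (N, 3) / (M, 3) arrays; T_init: (4, 4) initial transform."""
    T = T_init.copy()
    tree = cKDTree(tgt)
    for _ in range(iters):
        # Step 1: correspondence set K = {(p, q)} within max_dist,
        # using the current transformation T.
        q = src @ T[:3, :3].T + T[:3, 3]
        dist, idx = tree.query(q, distance_upper_bound=max_dist)
        mask = np.isfinite(dist)  # unmatched points get dist = inf
        if not mask.any():
            break  # no correspondences found
        p, q = tgt[idx[mask]], q[mask]
        # Step 2: update T by minimizing E(T) = sum ||p - Tq||^2
        # (closed-form Kabsch/SVD solution for R, t).
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(qc.T @ pc)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = p.mean(0) - R @ q.mean(0)
        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T
    return T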

Comparing Different Variants of ICP

【Point-to-Point ICP】

  • We first show the point-to-point ICP algorithm [BeslAndMcKay1992], which uses the objective

$$E(T) = \sum_{(p,q)\in\mathcal{K}} \|p - Tq\|^2$$

  • TransformationEstimationPointToPoint provides functions to compute the residuals and Jacobian matrices of the point-to-point ICP objective.

【Point-to-Plane ICP】

  • The point-to-plane ICP algorithm [ChenAndMedioni1992] uses a different objective function,

$$E(T) = \sum_{(p,q)\in\mathcal{K}} \big((p - Tq)\cdot n_p\big)^2,$$

  • where $n_p$ is the normal of point p. [Rusinkiewicz2001] has shown that the point-to-plane ICP algorithm converges faster than the point-to-point ICP algorithm.
  • TransformationEstimationPointToPlane provides functions to compute the residuals and Jacobian matrices of the point-to-plane ICP objective; a small NumPy illustration of this objective follows.
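  • For intuition only, the point-to-plane objective above can be evaluated directly in NumPy, assuming the correspondences p, q and the target normals n_p are already given (this is an illustration, not the Open3D API):
import numpy as np


def point_to_plane_objective(T, p, q, n_p):
    """E(T) = sum_i ((p_i - T q_i) . n_p_i)^2 for (N, 3) arrays p, q, n_p."""
    q_t = q @ T[:3, :3].T + T[:3, 3]                 # apply T to source points
    residuals = np.einsum('ij,ij->i', p - q_t, n_p)  # signed plane distances
    return np.sum(residuals ** 2)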

【Colored ICP】

  • To further improve efficiency, [Park2017] proposes a multi-scale registration scheme.
  • TransformationEstimationForColoredICP provides functions to compute the residuals and Jacobian matrices of the joint optimization objective, written out below.
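  • For reference, the joint objective from [Park2017] can be written as follows, where $E_G$ is the geometric (point-to-plane) term, $E_C$ the photometric (color) term, and $\lambda$ the geometric weight passed as the lambda parameter of the estimation method (0.968 by default in Open3D):

$$E(T) = \lambda\, E_G(T) + (1 - \lambda)\, E_C(T)$$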

Understanding the ICP API

Note: The Tensor-based ICP implementation API is slightly different from the legacy (Eigen-based) ICP implementation, in order to support more functionality.

Input

  • The PointClouds between which the transformation is to be estimated. [open3d.t.geometry.PointCloud]
  • Source tensor point cloud. [Float32 or Float64 dtypes are supported]
  • Target tensor point cloud. [Float32 or Float64 dtypes are supported]
Note: The initial alignment is usually obtained by a global registration algorithm.
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")

# For Colored-ICP the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
    o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
    o3d.core.Dtype.Float32) / 255.0

# Initial guess transform between the two point clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
                         [-0.139, 0.967, -0.215, 0.7],
                         [0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])

draw_registration_result(source, target, trans_init)


Parameters

【Max Correspondence Distance】

  • This is the search-distance radius around each point in the source point cloud, within which the neighbour search will try to find a corresponding point in the target point cloud.
  • It is a double for icp, and a utility.DoubleVector for multi_scale_icp.
  • For each scale, it is generally kept at 1.0x - 3.0x of the voxel-size parameter.
  • This parameter is the most important one for performance tuning, as a higher radius takes longer to process (the neighbour search is performed over a larger radius).
# For Vanilla ICP (double)

# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07
# For Multi-Scale ICP (o3d.utility.DoubleVector):

# `max_correspondence_distances` is proportional to the resolution, i.e. the `voxel_sizes`.
# In general it is recommended to use values between 1x - 3x of the corresponding `voxel_sizes`.
# We may use a higher value of `max_correspondence_distances` for the first coarse
# scale, as it is not very expensive, and it gives us more tolerance to the initial alignment.
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])

【Initial Transform from Source to Target [open3d.core.Tensor]】

  • The initial estimate of the transform from source to target.
  • A transformation matrix Tensor of shape [4, 4], of type Float64, on the CPU:0 device.
  • The initial alignment is usually obtained by a global registration algorithm. See Global Registration for an example.
# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
                                    [-0.139, 0.967, -0.215, 0.7],
                                    [0.487, 0.255, 0.835, -1.4],
                                    [0.0, 0.0, 0.0, 1.0]])

【Estimation Method】

  • This sets the ICP method used to compute the transformation between the two point clouds given a set of correspondences.
  • Options:
  • o3d.t.pipelines.registration.TransformationEstimationPointToPoint()
  • Point-to-point ICP.
  • o3d.t.pipelines.registration.TransformationEstimationPointToPlane(robust_kernel)
  • Point-to-plane ICP.
  • Requires the target point cloud to have a normals attribute (of the same dtype as the positions attribute).
  • o3d.t.pipelines.registration.TransformationEstimationForColoredICP(robust_kernel, lambda)
  • Colored ICP.
  • Requires the target point cloud to have a normals attribute (of the same dtype as the positions attribute).
  • Requires the source and target point clouds to have a colors attribute (of the same dtype as the positions attribute).
  • o3d.t.pipelines.registration.TransformationEstimationForGeneralizedICP(robust_kernel, epsilon) [to be added]
  • Generalized ICP.
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()
  • The estimation methods also support robust kernels, which are used for outlier rejection. For more on this, see the Robust Kernel section. A usage sketch follows the list of methods below.
  • robust_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(method, scale, shape)
  • Method options:
  • robust_kernel.RobustKernelMethod.L2Loss
  • robust_kernel.RobustKernelMethod.L1Loss
  • robust_kernel.RobustKernelMethod.HuberLoss
  • robust_kernel.RobustKernelMethod.CauchyLoss
  • robust_kernel.RobustKernelMethod.GMLoss
  • robust_kernel.RobustKernelMethod.TukeyLoss
  • robust_kernel.RobustKernelMethod.GeneralizedLoss
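  • For example, following the constructor signature listed above, a point-to-plane estimation with a Tukey kernel for outlier rejection might look as follows (the scaling parameter 0.1 is an arbitrary illustration value, to be tuned to the expected inlier noise):
# Sketch based on the signature above: RobustKernel(method, scale, shape).
mu = 0.1  # scaling parameter of the kernel
tukey_kernel = o3d.t.pipelines.registration.robust_kernel.RobustKernel(
    o3d.t.pipelines.registration.robust_kernel.RobustKernelMethod.TukeyLoss, mu)
estimation = treg.TransformationEstimationPointToPlane(tukey_kernel)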

【ICP Convergence Criteria [relative rmse, relative fitness, max iterations]】

  • This sets the termination condition, i.e. when the iterations at a given scale can be considered converged.
  • If the relative (change with respect to the last iteration) rmse and fitness are equal to or less than the specified values, the iterations at that scale are considered converged/complete.
  • For Multi-Scale ICP it is a list of ICPConvergenceCriteria, one for each scale of ICP, which provides finer control over performance.
  • We can keep the relative_fitness and relative_rmse thresholds high (loose) for the initial scales, since there we only want a rough estimate of the transformation, and use lower values in the later iterations for fine-tuning.
  • Iterating at a higher resolution is more costly (takes more time), so we want fewer iterations at higher resolutions.
# Convergence-Criteria for Vanilla ICP:

criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
                                       relative_rmse=0.000001,
                                       max_iteration=50)
# List of Convergence-Criteria for Multi-Scale ICP:

# We can control `ConvergenceCriteria` of each `scale` individually.
# We want to keep `relative_fitness` and `relative_rmse` high (more error tolerance)
# for initial scales, i.e. we will be happy to consider ICP converged, when difference
# between 2 successive iterations for that scale is smaller than this value.
# We expect less accuracy (more error tolerance) in the initial coarse-scale iterations,
# and want our later scale convergence to be more accurate (less error tolerance).
criteria_list = [
    treg.ICPConvergenceCriteria(relative_fitness=0.0001,
                                relative_rmse=0.0001,
                                max_iteration=20),
    treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
    treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]

【Voxel Sizes】

  • The voxel size (a lower voxel size corresponds to a higher resolution) for each scale of Multi-Scale ICP.
  • We want to perform the initial iterations on a coarse point cloud (low resolution or large voxel size), since this saves time and helps avoid local minima, and then move to a dense point cloud (high resolution or small voxel size). Therefore the voxel sizes must be in strictly decreasing order.
# Vanilla ICP
voxel_size = 0.025
# Lower `voxel_size` is equivalent to higher resolution,
# and we want to perform iterations from coarse to dense resolution,
# therefore `voxel_sizes` must be in strictly decreasing order.
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])

【Save Loss Log】

  • When True, it saves the iteration-wise values of fitness, inlier_rmse, transformation, scale and iteration in loss_log in the registration_result. Default: False.
save_loss_log = True

Vanilla ICP Example

# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.07

# Initial alignment or source to target transform.
init_source_to_target = np.asarray([[0.862, 0.011, -0.507, 0.5],
                                    [-0.139, 0.967, -0.215, 0.7],
                                    [0.487, 0.255, 0.835, -1.4],
                                    [0.0, 0.0, 0.0, 1.0]])

# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()

# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.000001,
                                       relative_rmse=0.000001,
                                       max_iteration=50)
# Down-sampling voxel-size.
voxel_size = 0.025

# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
s = time.time()

registration_icp = treg.icp(source, target, max_correspondence_distance,
                            init_source_to_target, estimation, criteria,
                            voxel_size, save_loss_log)

icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)

draw_registration_result(source, target, registration_icp.transformation)


  • Now let's try with a poor initialization.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)
max_correspondence_distance = 0.07
s = time.time()

registration_icp = treg.icp(source, target, max_correspondence_distance,
                            init_source_to_target, estimation, criteria,
                            voxel_size, save_loss_log)

icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)

draw_registration_result(source, target, registration_icp.transformation)

  • As we can see, a poor initial alignment may cause ICP convergence to fail.
  • Increasing max_correspondence_distance may resolve this issue, but it takes longer to process.
init_source_to_target = o3d.core.Tensor.eye(4, o3c.float32)
max_correspondence_distance = 0.5
s = time.time()
# It is highly recommended to down-sample the point cloud before using the
# ICP algorithm, for better performance.

registration_icp = treg.icp(source, target, max_correspondence_distance,
                            init_source_to_target, estimation, criteria,
                            voxel_size, save_loss_log)

icp_time = time.time() - s
print("Time taken by ICP: ", icp_time)
print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)

draw_registration_result(source, target, registration_icp.transformation)

  • We can resolve the above problems by using Multi-Scale ICP.

Multi-Scale ICP Example

  • Problems with Vanilla ICP (shown earlier):
  • Running the ICP algorithm on dense point clouds is very slow.
  • It requires a good initial alignment:
  • If the point clouds are not well aligned, the convergence may get stuck in a local minimum during the initial iterations.
  • If the aligned point clouds do not have sufficient overlap, we need a larger max_correspondence_distance.
  • If the point clouds are heavily down-sampled (coarse), the result obtained will be inaccurate.
  • These drawbacks can be resolved with Multi-Scale ICP: we perform the initial iterations on a coarse point cloud to get a better estimate of the initial alignment, and use that alignment for convergence on a denser point cloud. ICP on a coarse point cloud is inexpensive, and it allows us to use a larger max_correspondence_distance; the convergence is also less likely to get stuck in a local minimum. Once we have a good estimate, fewer iterations on the dense point cloud are needed to converge to a more accurate transformation.
  • Multi-Scale ICP is recommended over plain ICP for efficient convergence, especially for large point clouds.
# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
voxel_sizes = o3d.utility.DoubleVector([0.1, 0.05, 0.025])

# List of Convergence-Criteria for Multi-Scale ICP:
criteria_list = [
    treg.ICPConvergenceCriteria(relative_fitness=0.0001,
                                relative_rmse=0.0001,
                                max_iteration=20),
    treg.ICPConvergenceCriteria(0.00001, 0.00001, 15),
    treg.ICPConvergenceCriteria(0.000001, 0.000001, 10)
]

# `max_correspondence_distances` for Multi-Scale ICP (o3d.utility.DoubleVector):
max_correspondence_distances = o3d.utility.DoubleVector([0.3, 0.14, 0.07])

# Initial alignment or source to target transform.
init_source_to_target = o3d.core.Tensor.eye(4, o3d.core.Dtype.Float32)

# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPlane()

# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
# Setting verbosity to Debug helps in fine-tuning the performance.
# o3d.utility.set_verbosity_level(o3d.utility.VerbosityLevel.Debug)

s = time.time()

registration_ms_icp = treg.multi_scale_icp(source, target, voxel_sizes,
                                           criteria_list,
                                           max_correspondence_distances,
                                           init_source_to_target, estimation,
                                           save_loss_log)

ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)

draw_registration_result(source, target, registration_ms_icp.transformation)

Plotting Convergence

  • We can use registration_result.loss_log to plot convergence and to fine-tune our application.
from matplotlib import pyplot as plt


def plot_rmse(registration_result):
    fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))
    axes.set_title("Inlier RMSE vs Iteration")
    axes.plot(registration_result.loss_log["index"].numpy(),
              registration_result.loss_log["inlier_rmse"].numpy())


def plot_scale_wise_rmse(registration_result):
    scales = registration_result.loss_log["scale"].numpy()
    iterations = registration_result.loss_log["iteration"].numpy()

    num_scales = scales[-1][0] + 1

    fig, axes = plt.subplots(nrows=1, ncols=num_scales, figsize=(20, 5))

    masks = {}
    for scale in range(0, num_scales):
        masks[scale] = registration_result.loss_log["scale"] == scale

        rmse = registration_result.loss_log["inlier_rmse"][masks[scale]].numpy()
        iteration = registration_result.loss_log["iteration"][
            masks[scale]].numpy()

        title_prefix = "Scale Index: " + str(scale)
        axes[scale].set_title(title_prefix + " Inlier RMSE vs Iteration")
        axes[scale].plot(iteration, rmse)
print("Vanilla ICP")
plot_rmse(registration_icp)

print("Multi Scale ICP")
plot_rmse(registration_ms_icp)
plot_scale_wise_rmse(registration_ms_icp)

fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(20, 5))

axes.set_title("Vanilla ICP and Multi-Scale ICP `Inlier RMSE` vs `Iteration`")

# Plot the longer of the two logs against its `index`; matplotlib plots the
# third array (the shorter log) against its own positional index.
if len(registration_ms_icp.loss_log["index"]) > len(
        registration_icp.loss_log["inlier_rmse"]):
    axes.plot(registration_ms_icp.loss_log["index"].numpy(),
              registration_ms_icp.loss_log["inlier_rmse"].numpy(),
              registration_icp.loss_log["inlier_rmse"].numpy())
else:
    axes.plot(registration_icp.loss_log["index"].numpy(),
              registration_icp.loss_log["inlier_rmse"].numpy(),
              registration_ms_icp.loss_log["inlier_rmse"].numpy())


Multi-Scale ICP on CUDA Device Example

# The algorithm runs on the same device as the source and target point-cloud.
source_cuda = source.cuda(0)
target_cuda = target.cuda(0)
s = time.time()

registration_ms_icp = treg.multi_scale_icp(source_cuda, target_cuda,
                                           voxel_sizes, criteria_list,
                                           max_correspondence_distances,
                                           init_source_to_target, estimation,
                                           save_loss_log)

ms_icp_time = time.time() - s
print("Time taken by Multi-Scale ICP: ", ms_icp_time)
print("Inlier Fitness: ", registration_ms_icp.fitness)
print("Inlier RMSE: ", registration_ms_icp.inlier_rmse)

draw_registration_result(source.cpu(), target.cpu(),
                         registration_ms_icp.transformation)


The Case of No Correspondences

  • If there are no correspondences, fitness and inlier_rmse are 0.
max_correspondence_distance = 0.02

init_source_to_target = np.asarray([[1.0, 0.0, 0.0, 5], [0.0, 1.0, 0.0, 7],
                                    [0.0, 0.0, 1.0, 10], [0.0, 0.0, 0.0, 1.0]])

registration_icp = treg.icp(source, target, max_correspondence_distance,
                            init_source_to_target)

print("Inlier Fitness: ", registration_icp.fitness)
print("Inlier RMSE: ", registration_icp.inlier_rmse)
print("Transformation: \n", registration_icp.transformation)

if registration_icp.fitness == 0 and registration_icp.inlier_rmse == 0:
    print("ICP Convergence Failed, as no correspondence were found")

Information Matrix

  • The information matrix (a 6 x 6 matrix) gives us further information about how well the point clouds are aligned.
information_matrix = treg.get_information_matrix(
    source, target, max_correspondence_distances[2],
    registration_ms_icp.transformation)

print(information_matrix)

  • Now that we have a basic understanding of the ICP algorithm and the API, let's try the different variants to understand the differences.

Initial Alignment

source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")

# Initial guess transform between the two point clouds.
# The ICP algorithm requires a good initial alignment to converge efficiently.
trans_init = np.asarray([[0.862, 0.011, -0.507, 0.5],
                         [-0.139, 0.967, -0.215, 0.7],
                         [0.487, 0.255, 0.835, -1.4], [0.0, 0.0, 0.0, 1.0]])

draw_registration_result(source, target, trans_init)

# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02

print("Initial alignment")
evaluation = treg.evaluate_registration(source, target,
                                        max_correspondence_distance, trans_init)

print("Fitness: ", evaluation.fitness)
print("Inlier RMSE: ", evaluation.inlier_rmse)

Point-to-Point ICP Registration


# Input point-clouds
source = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_0.pcd")
target = o3d.t.io.read_point_cloud("../../test_data/ICP/cloud_bin_1.pcd")
# Select the `Estimation Method`, and `Robust Kernel` (for outlier-rejection).
estimation = treg.TransformationEstimationPointToPoint()
# Search distance for Nearest Neighbour Search [Hybrid-Search is used].
max_correspondence_distance = 0.02

# Initial alignment or source to target transform.
init_source_to_target = trans_init

# Convergence-Criteria for Vanilla ICP
criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
                                       relative_rmse=0.0000001,
                                       max_iteration=30)

# Down-sampling voxel-size. If voxel_size < 0, original scale is used.
voxel_size = -1

# Save iteration wise `fitness`, `inlier_rmse`, etc. to analyse and tune result.
save_loss_log = True
print("Apply Point-to-Point ICP")
s = time.time()

reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
                              init_source_to_target, estimation, criteria,
                              voxel_size, save_loss_log)

icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)

draw_registration_result(source, target, reg_point_to_point.transformation)

criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
                                       relative_rmse=0.0000001,
                                       max_iteration=1000)
print("Apply Point-to-Point ICP")
s = time.time()

reg_point_to_point = treg.icp(source, target, max_correspondence_distance,
                              init_source_to_target, estimation, criteria,
                              voxel_size, save_loss_log)

icp_time = time.time() - s
print("Time taken by Point-To-Point ICP: ", icp_time)
print("Fitness: ", reg_point_to_point.fitness)
print("Inlier RMSE: ", reg_point_to_point.inlier_rmse)

draw_registration_result(source, target, reg_point_to_point.transformation)

Point-to-Plane ICP Registration

estimation = treg.TransformationEstimationPointToPlane()

criteria = treg.ICPConvergenceCriteria(relative_fitness=0.0000001,
                                       relative_rmse=0.0000001,
                                       max_iteration=30)
print("Apply Point-to-Plane ICP")
s = time.time()

reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
                              init_source_to_target, estimation, criteria,
                              voxel_size, save_loss_log)

icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)

draw_registration_result(source, target, reg_point_to_plane.transformation)

Colored ICP Registration

# Overriding the visualization function, using the best camera view for the colored-icp sample data.
def draw_registration_result(source, target, transformation):
    source_temp = source.clone()
    target_temp = target.clone()

    source_temp.transform(transformation)

    # This is a patched version for tutorial rendering.
    # Use the `draw` function for your application.
    o3d.visualization.draw_geometries(
        [source_temp.to_legacy(),
         target_temp.to_legacy()],
        zoom=0.5,
        front=[-0.2458, -0.8088, 0.5342],
        lookat=[1.7745, 2.2305, 0.9787],
        up=[0.3109, -0.5878, -0.7468])
print("1. Load two point clouds and show initial pose")
source = o3d.t.io.read_point_cloud("../../test_data/ColoredICP/frag_115.ply")
target = o3d.t.io.read_point_cloud("../../test_data/ColoredICP/frag_116.ply")

# For Colored-ICP the `colors` attribute must be of the same dtype as the `positions` and `normals` attributes.
source.point["colors"] = source.point["colors"].to(
    o3d.core.Dtype.Float32) / 255.0
target.point["colors"] = target.point["colors"].to(
    o3d.core.Dtype.Float32) / 255.0

# draw initial alignment
current_transformation = np.identity(4)
draw_registration_result(source, target, current_transformation)

Point-to-Plane Registration as a Baseline

estimation = treg.TransformationEstimationPointToPlane()
max_correspondence_distance = 0.02
init_source_to_target = np.identity(4)
print("Apply Point-to-Plane ICP")
s = time.time()

reg_point_to_plane = treg.icp(source, target, max_correspondence_distance,
                              init_source_to_target, estimation)

icp_time = time.time() - s
print("Time taken by Point-To-Plane ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)

draw_registration_result(source, target, reg_point_to_plane.transformation)

Colored Registration

estimation = treg.TransformationEstimationForColoredICP()
current_transformation = np.identity(4)

criteria_list = [
    treg.ICPConvergenceCriteria(relative_fitness=0.0001,
                                relative_rmse=0.0001,
                                max_iteration=50),
    treg.ICPConvergenceCriteria(0.00001, 0.00001, 30),
    treg.ICPConvergenceCriteria(0.000001, 0.000001, 14)
]

max_correspondence_distances = o3d.utility.DoubleVector([0.08, 0.04, 0.02])

voxel_sizes = o3d.utility.DoubleVector([0.04, 0.02, 0.01])
# Colored point-cloud registration.
# This is an implementation of the following paper:
# J. Park, Q.-Y. Zhou, V. Koltun,
# Colored Point Cloud Registration Revisited, ICCV 2017

print("Colored point cloud registration")
s = time.time()

reg_multiscale_icp = treg.multi_scale_icp(source, target, voxel_sizes,
                                          criteria_list,
                                          max_correspondence_distances,
                                          init_source_to_target, estimation)

icp_time = time.time() - s
print("Time taken by Colored ICP: ", icp_time)
print("Fitness: ", reg_point_to_plane.fitness)
print("Inlier RMSE: ", reg_point_to_plane.inlier_rmse)

draw_registration_result(source, target, reg_multiscale_icp.transformation)
