In which cases is the SLAM technique used?
I'm working in the field of augmented reality, in particular with Google's ARCore. I'd like to know whether model-based tracking requires a SLAM approach. It seems obvious to me that SLAM is not used in that case, but I can't find any article confirming it.
My second question is similar to the first and concerns the Azure Spatial Anchors technology. This technology is able to recognize a scene that was observed in a previous session. In that sense, Azure Spatial Anchors reminds me of model-based tracking, given that model-based tracking can recognize previously recorded 3D objects. So, in the same way, I'd like to know whether the Azure Spatial Anchors technology requires a SLAM approach.
Have a look at the Frequently asked questions about Azure Spatial Anchors:
Azure Spatial Anchors depends on mixed reality / augmented reality trackers. These trackers perceive the environment with cameras and track the device in 6-degrees-of-freedom (6DoF) as it moves through the space.
Given a 6DoF tracker as a building block, Azure Spatial Anchors allows you to designate certain points of interest in your real environment as "anchor" points. You might, for example, use an anchor to render content at a specific place in the real-world.
When you create an anchor, the client SDK captures environment information around that point and transmits it to the service. If another device looks for the anchor in that same space, similar data transmits to the service. That data is matched against the environment data previously stored. The position of the anchor relative to the device is then sent back for use in the application.
...
For each point in the sparse point cloud, we transmit and store a hash of the visual characteristics of that point. The hash is derived from, but does not contain, any pixel data.
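To make the workflow quoted above more concrete, here is a rough client-side sketch of creating and later locating a cloud anchor with the Azure Spatial Anchors Android SDK on top of ARCore. The class and method names (CloudSpatialAnchorSession, processFrame, createAnchorAsync, createWatcher) and the placeholder credentials are written from memory of the SDK and should be treated as assumptions to verify against the current documentation, not as a verbatim copy of the API.

```java
// Hedged sketch (not production code): wiring Azure Spatial Anchors to an
// existing ARCore session. Verify names against the current SDK docs.
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.microsoft.azure.spatialanchors.AnchorLocateCriteria;
import com.microsoft.azure.spatialanchors.CloudSpatialAnchor;
import com.microsoft.azure.spatialanchors.CloudSpatialAnchorSession;

public class AnchorSketch {

    private final CloudSpatialAnchorSession cloudSession = new CloudSpatialAnchorSession();

    public void configure(Session arCoreSession) {
        // Placeholder credentials identifying your Spatial Anchors resource.
        cloudSession.getConfiguration().setAccountId("<account-id>");
        cloudSession.getConfiguration().setAccountKey("<account-key>");
        // The cloud session piggybacks on the ARCore session's 6DoF tracking.
        cloudSession.setSession(arCoreSession);
        cloudSession.start();
    }

    public void onUpdate(Frame frame) {
        // Feed each ARCore frame to the cloud session so it can gather the
        // environment data (sparse-point hashes) described in the FAQ.
        cloudSession.processFrame(frame);
    }

    public String saveAnchor(Anchor localArCoreAnchor) throws Exception {
        // Wrap a local ARCore anchor and upload it; the service returns an
        // identifier that another device (or a later session) can query.
        CloudSpatialAnchor cloudAnchor = new CloudSpatialAnchor();
        cloudAnchor.setLocalAnchor(localArCoreAnchor);
        cloudAnchor.getLocalAnchor();
        cloudSession.createAnchorAsync(cloudAnchor).get(); // blocks; use off the UI thread
        return cloudAnchor.getIdentifier();                // persist this id
    }

    public void findAnchor(String anchorId) {
        // In a later session, watch for the previously stored anchor; results
        // are delivered through the session's anchor-located event.
        AnchorLocateCriteria criteria = new AnchorLocateCriteria();
        criteria.setIdentifiers(new String[] { anchorId });
        cloudSession.createWatcher(criteria);
    }
}
```

The point of the sketch for your question: the client never does the matching itself; it only forwards tracker data and anchor identifiers, while the service matches the environment data against what was stored earlier.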
The Microsoft Research blog reveals that the same type of visual simultaneous localization and mapping (SLAM) algorithm is used with Azure Spatial Anchors: Azure Spatial Anchors: How it works
For more details about the algorithms, which are under NDA, you can Open a tech support ticket.