Check whether the ARReferenceImage is no longer visible in the camera's view

I want to check whether the ARReferenceImage is no longer visible in the camera's view. At the moment I can check if the image's node is in the camera's view, but this node is still reported as visible when the ARReferenceImage is covered by another image or when the image is removed.

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let node = self.currentImageNode else { return }

    if let pointOfView = sceneView.pointOfView {
        let isVisible = sceneView.isNode(node, insideFrustumOf: pointOfView)
        print("Is node visible: \(isVisible)")
    }
}

So I need to check the visibility of the image itself rather than the visibility of the image's node, but I don't know if this is possible. The first screenshot shows the three boxes that are added when the image below them is found. When the found image is covered (see screenshot 2), I want to delete the boxes.

I don't think this is currently possible.

From the Recognizing Images in an AR Experience documentation:

Design your AR experience to use detected images as a starting point for virtual content.

ARKit doesn’t track changes to the position or orientation of each detected image. If you try to place virtual content that stays attached to a detected image, that content may not appear to stay in place correctly. Instead, use detected images as a frame of reference for starting a dynamic scene.


New answer for iOS 12.0

ARKit 2.0 and iOS 12 finally add this feature, either through ARImageTrackingConfiguration or through the ARWorldTrackingConfiguration.detectionImages property, which now also tracks the position of detected images.

The Apple documentation for ARImageTrackingConfiguration lists the advantages of both approaches:

With ARImageTrackingConfiguration, ARKit establishes a 3D space not by tracking the motion of the device relative to the world, but solely by detecting and tracking the motion of known 2D images in view of the camera. ARWorldTrackingConfiguration can also detect images, but each configuration has its own strengths:

  • World tracking has a higher performance cost than image-only tracking, so your session can reliably track more images at once with ARImageTrackingConfiguration.

  • Image-only tracking lets you anchor virtual content to known images only when those images are in view of the camera. World tracking with image detection lets you use known images to add virtual content to the 3D world, and continues to track the position of that content in world space even after the image is no longer in view.

  • World tracking works best in a stable, nonmoving environment. You can use image-only tracking to add virtual content to known images in more situations—for example, an advertisement inside a moving subway car.
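For reference, a minimal sketch of the iOS 12 setup might look like the following (the asset catalog group name "AR Resources" and the function name are just examples, not part of the original answer):

func runImageTracking(on sceneView: ARSCNView) {
    // Load the reference images from the asset catalog (group name is an assumption).
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else { return }

    // ARImageTrackingConfiguration (iOS 12+) keeps updating the image anchors while the images are in view.
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = referenceImages
    configuration.maximumNumberOfTrackedImages = 1

    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}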

I am not 100% certain I understand what you are asking (so apologies), but if I do, then this might help...

It seems that for insideOfFrustum to work correctly, there must be some SCNGeometry associated with the node (an SCNNode alone is not enough).

For example, if we do something like this in the delegate callback and save the added SCNNode into an array:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
        Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
        Associated Node Details = \(node)
        """)


    //3. Store The Node
    imageTargets.append(node)
}

And then use the insideOfFrustum method, 99% of the time it will say that the node is in view, even though we know it shouldn't be.

However, if we do something like this (whereby we create a transparent marker node, i.e. one that has some geometry):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {

    //1. If Our Target Image Has Been Detected Then Get The Corresponding Anchor
    guard let currentImageAnchor = anchor as? ARImageAnchor else { return }

    //2. Print The Anchor ID & Its Associated Node
    print("""
        Anchor With ID Has Been Detected \(currentImageAnchor.identifier)
        Associated Node Details = \(node)
        """)


    //3. Create A Transparent Geometry
    node.geometry = SCNSphere(radius: 0.1)
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.clear

    //4. Store The Node
    imageTargets.append(node)
}

And then call the method below, it will detect whether the ARReferenceImage is in view:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets{

        if  augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView){
            print("Node Is Visible")
        }else{
            print("Node Is Not Visible")
        }

    }

}

Regarding your other point about an SCNNode being occluded by another SCNNode, the Apple Docs state that isNode(_:insideFrustumOf:):

does not perform occlusion testing. That is, it returns true if the tested node lies within the specified viewing frustum regardless of whether that node’s contents are obscured by other geometry.
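Since the frustum test ignores occlusion, one possible (untested) approximation is to project the marker node's position into screen space and hit-test the scene at that point; if the front-most hit is the marker itself, no other SceneKit geometry is covering it. This is only a sketch, it assumes the sceneView and marker node from above, and it can only detect occlusion by virtual geometry, not by a real-world object covering the printed image:

func isNodeUnobscured(_ node: SCNNode, in sceneView: ARSCNView) -> Bool {
    // Project the node's world position into screen coordinates.
    let projected = sceneView.projectPoint(node.worldPosition)
    let screenPoint = CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))

    // Hit-test at that point; the first result is the geometry closest to the camera.
    guard let firstHit = sceneView.hitTest(screenPoint, options: nil).first else { return false }

    // If the closest geometry belongs to our node (or a child of it), nothing virtual is in front of it.
    return firstHit.node == node || firstHit.node.parent == node
}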

Apologies again if I haven't understood you correctly, but hopefully it helps to some extent...

Update:

Now that I fully understand your question, I agree with @orangenkopf that this isn't possible, since as the docs state:

ARKit doesn’t track changes to the position or orientation of each detected image.

From the Recognizing Images in an AR Experience documentation:

ARKit adds an image anchor to a session exactly once for each reference image in the session configuration’s detectionImages array. If your AR experience adds virtual content to the scene when an image is detected, that action will by default happen only once. To allow the user to experience that content again without restarting your app, call the session’s remove(anchor:) method to remove the corresponding ARImageAnchor. After the anchor is removed, ARKit will add a new anchor the next time it detects the image.

So maybe you can find a workaround that suits your case:

Let's say we have a struct that stores a detected ARImageAnchor and the associated virtual content:

struct ARImage {
    var anchor: ARImageAnchor
    var node: SCNNode
}

Then, when renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) is called, save the detected image into a temporary list of ARImage:

...

var tmpARImages: [ARImage] = []

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage

        // If the ARImage does not exist
        if !tmpARImages.contains(where: { $0.anchor.referenceImage.name == referenceImage.name }) {
            let virtualContent = SCNNode(...)
            node.addChildNode(virtualContent)

            tmpARImages.append(ARImage(anchor: imageAnchor, node: virtualContent))
        }


        // Delete anchor from the session to reactivate the image recognition
        sceneView.session.remove(anchor: anchor)    
}

If you follow, the delegate function will loop indefinitely while your camera's point of view is pointing at the image/marker... (because we remove the anchor from the session).

The idea is to combine the image-recognition loop, the detected images saved into the tmp list, and the sceneView.isNode(node, insideFrustumOf: pointOfView) function to determine whether a detected image/marker is no longer in view, as shown in the sketch below.
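Putting those pieces together, a minimal hypothetical sketch of that check (assuming the tmpARImages list and sceneView from above, and that the stored nodes have geometry so the frustum test works) could look like this:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pointOfView = sceneView.pointOfView else { return }

    for arImage in tmpARImages {
        if sceneView.isNode(arImage.node, insideFrustumOf: pointOfView) {
            // The virtual content tied to this image is still in front of the camera.
            print("\(arImage.anchor.referenceImage.name ?? "image") is still in view")
        } else {
            // The content left the frustum, so the image/marker is no longer in view.
            arImage.node.isHidden = true
        }
    }
}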

I hope that is clear...

I managed to solve the problem! I used a bit of Maybe1's code and his concept to solve it, but in a different way. The following line of code is still used to reactivate the image recognition.

// Delete anchor from the session to reactivate the image recognition
sceneView.session.remove(anchor: anchor) 

Let me explain. First we need to add some variables.

// The scnNodeBarn variable will be the node to be added when the barn image is found. Add another scnNode when you have another image.    
var scnNodeBarn: SCNNode = SCNNode()
// This variable holds the currently added scnNode (in this case scnNodeBarn when the barn image is found)     
var currentNode: SCNNode? = nil
// This variable holds the UUID of the found Image Anchor that is used to add a scnNode    
var currentARImageAnchorIdentifier: UUID?
// This variable is used to call a function when there is no new anchor added for 0.6 seconds    
var timer: Timer!

The full code, with comments, is below.

/// - Tag: ARImageAnchor-Visualizing
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }

    let referenceImage = imageAnchor.referenceImage

    // The following timer fires after 0.6 seconds, but every time an anchor is found the timer is stopped.
    // So when no ARImageAnchor is found, the timer completes, the current scene node is deleted, and the variable is set to nil
    DispatchQueue.main.async {
        if(self.timer != nil){
            self.timer.invalidate()
        }
        self.timer = Timer.scheduledTimer(timeInterval: 0.6 , target: self, selector: #selector(self.imageLost(_:)), userInfo: nil, repeats: false)
    }

    // Check whether a new image has been found based on the ARImageAnchorIdentifier; when one is found, delete the current scene node and set the variable to nil
    if(self.currentARImageAnchorIdentifier != imageAnchor.identifier &&
        self.currentARImageAnchorIdentifier != nil
        && self.currentNode != nil){
            //found new image
            self.currentNode!.removeFromParentNode()
            self.currentNode = nil
    }

    updateQueue.async {

        //If currentNode is nil, there is currently no scene node
        if(self.currentNode == nil){

            switch referenceImage.name {
                case "barn":
                    self.scnNodeBarn.transform = node.transform
                    self.sceneView.scene.rootNode.addChildNode(self.scnNodeBarn)
                    self.currentNode = self.scnNodeBarn
                default: break
            }

        }

        self.currentARImageAnchorIdentifier = imageAnchor.identifier

        // Delete anchor from the session to reactivate the image recognition
        self.sceneView.session.remove(anchor: anchor)
    }

}

Delete the node when the timer fires, which means that no new ARImageAnchor was found.

@objc
func imageLost(_ sender: Timer) {
    // No ARImageAnchor was found for 0.6 seconds, so remove the currently added node.
    self.currentNode?.removeFromParentNode()
    self.currentNode = nil
}

This way, the currently added scnNode is deleted when the image is covered or when a new image is found.

Unfortunately, this solution does not solve the positioning of the image, because:

ARKit doesn’t track changes to the position or orientation of each detected image.

The correct way to check whether an image you are tracking is not currently being tracked by ARKit is to use the "isTracked" property of the ARImageAnchor inside the renderer(_:didUpdate:for:) delegate function.

For this, I use the following struct:

struct TrackedImage {
    var name : String
    var node : SCNNode?
}

Then an array of that struct with the names of all the images:

var trackedImages : [TrackedImage] = [ TrackedImage(name: "image_1", node: nil) ]

Then, in renderer(_:didAdd:for:), set the new content into the scene and also add the node to the corresponding element of the trackedImages array:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // Check if the added anchor is a recognized ARImageAnchor
    if let imageAnchor = anchor as? ARImageAnchor{
        // Get the reference ar image
        let referenceImage = imageAnchor.referenceImage
        // Create a plane to match the detected image.
        let plane = SCNPlane(width: referenceImage.physicalSize.width, height: referenceImage.physicalSize.height)
        plane.firstMaterial?.diffuse.contents = UIColor(red: 1, green: 1, blue: 1, alpha: 0.5)
        // Create SCNNode from the plane
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        // Add the plane to the scene.
        node.addChildNode(planeNode)
        // Add the node to the tracked images
        for (index, trackedImage) in trackedImages.enumerated(){
            if(trackedImage.name == referenceImage.name){
                trackedImages[index].node = planeNode
            }
        }
    }
}

Finally, in renderer(_:didUpdate:for:), we search the array for the anchor's name and check whether the isTracked property is false.

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    if let imageAnchor = anchor as? ARImageAnchor{
        // Get the reference ar image
        let referenceImage = imageAnchor.referenceImage
        // Search the corresponding node for the ar image anchor
        for (index, trackedImage) in trackedImages.enumerated(){
            if(trackedImage.name == referenceImage.name){
                // Check if track is lost on ar image
                if(imageAnchor.isTracked){
                    // The image is being tracked
                    trackedImage.node?.isHidden = false // Show or add content
                }else{
                    // The image is lost
                    trackedImage.node?.isHidden = true // Hide or delete content
                }
                break
            }
        }
    }
}

This solution works well when you want to track multiple images at the same time and know when any of them is lost.

Note: For this solution to work, maximumNumberOfTrackedImages in the AR configuration must be set to a nonzero number.
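For example, a sketch of that configuration (assuming a world-tracking session and an asset catalog group named "AR Resources", which are assumptions for illustration):

let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    configuration.detectionImages = referenceImages
}
// isTracked is only updated for images ARKit is actively tracking,
// so this must be greater than zero (available from iOS 12).
configuration.maximumNumberOfTrackedImages = 2
sceneView.session.run(configuration)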

This code only works if you hold the device strictly horizontally or vertically. If you hold the iPhone at an angle or start to tilt it, this code doesn't work:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {

    //1. Get The Current Point Of View
    guard let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Loop Through Our Image Target Markers
    for addedNode in imageTargets{

        if  augmentedRealityView.isNode(addedNode, insideFrustumOf: pointOfView){
            print("Node Is Visible")
        }else{
            print("Node Is Not Visible")
        }

    }

}

For what it's worth, I spent hours trying to figure out how to constantly check the image reference. The didUpdate function was the answer. Then you just need to test whether the reference image is being tracked by using the .isTracked property. At that point, you can set the .isHidden property to true or false. Here is my example:

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    let trackedNode = node

    if let imageAnchor = anchor as? ARImageAnchor {

        if imageAnchor.isTracked {
            trackedNode.isHidden = false
            print("\(trackedNode.name ?? "")")
        } else {
            trackedNode.isHidden = true

            //print("\(trackedImageName)")
            print("No image in view")
        }
    }
}