ARKit SKVideoNode Playing on Render
Main issue:
I am adding this section after the fact to clarify the question. -- I can pause my video (I do not want it on a loop). When my node comes into sight, my node plays my video, even if it is paused. If my video has finished playing and it comes into sight, it restarts. I want to remove this behavior.
In my app, I have created an SKVideoNode from an AVPlayer(:URL) inside 3D space using SCNNode objects and SCNGeometry objects. I use ARKit .ImageTracking to determine when a specific image is found, and play the video from there. All is good and well, except the player decides to play on its own time, every time the AVPlayer comes into sight; it may, however, be whenever the ARImageAnchor the SCNNode is attached to comes into sight. Either way, the AVPlayer plays every time the node comes into the camera lens. I use
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
    if keyPath == "rate" {
        print((object as! AVPlayer).rate)
    }
}
to print out the rate, and it is 1 and then 0. I have added a print statement of some sort, print("Play"), to all of my functions that call player.pause() or player.play(), and none of them are invoked whenever the rate above changes. How can I find the source of what is changing my player's rate?
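One way to trace the source is block-based KVO, which reports the old and new rate together; a breakpoint inside the closure then exposes the call stack of whatever changed it. This is only a sketch (RateWatcher is a name made up here), not code from the project above:

```swift
import AVFoundation

final class RateWatcher {
    // Keep the observation alive; it is removed automatically on deinit.
    private var rateObservation: NSKeyValueObservation?

    func watch(_ player: AVPlayer) {
        rateObservation = player.observe(\.rate, options: [.old, .new]) { _, change in
            // Set a breakpoint on the next line and inspect the backtrace
            // in Xcode to see which caller triggered the change.
            print("rate: \(change.oldValue ?? 0) -> \(change.newValue ?? 0)")
        }
    }
}
```

A symbolic breakpoint on `-[AVPlayer setRate:]` achieves the same thing without code changes.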
I have checked the original root node, self.sceneview.scene.rootNode.childNodes, to make sure I am not creating extra VideoNodes/SCNNodes/AVPlayers, etc., and it seems there is only 1. Any ideas as to why the SKVideoNode/AVPlayer is playing when the SCNNode comes into the camera's sight using ARKit? Thanks in advance!
Edit 1:
Made a workaround to determine only when a user clicks on this node
let tap = UITapGestureRecognizer(target: self, action: #selector(self!.tapGesture))
tap.delegate = self!
tap.name = "MyTap"
self!.sceneView.addGestureRecognizer(tap)
and then, in the following function, I put
@objc func tapGesture(_ gesture: UITapGestureRecognizer) {
    let tappedNodes = self.sceneView.hitTest(gesture.location(in: gesture.view), options: [SCNHitTestOption.searchMode: 1])
    if !tappedNodes.isEmpty {
        for nodes in tappedNodes {
            if nodes.node == videoPlayer3D {
                videoPlayer3D.tappedVideoPlayer = true
                videoPlayer3D.playOrPause()
                break
            }
        }
    }
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
    if keyPath == "rate" {
        print((object as! AVPlayer).rate)
        if !self.tappedVideoPlayer {
            self.player.pause() //HERE
        }
    }
}
where videoPlayer3D is the SCNNode that contains the SKVideoNode.
However, I get the error com.apple.scenekit.scnview-renderer (17): EXC_BAD_ACCESS (code=2, address=0x16d8f7ad0) at the section marked "HERE" above. It seems the scene view's renderer is attempting to change my video node inside a rendering function, even though I am not even using the renderer(updateAtTime:) function; I only use
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    createVideoNode(imageAnchor)
}
to determine when I see an image, i.e. image tracking, and create the node. Any tips?
Idea 1
The error shown indicates that some renderer method is being called from an SCNView object (that is what I understand from the error), but I have no node specifically called there. I am thinking a default action may be invoked on the node when it is about to come into view; however, I am not 100% sure how to access it or determine which method it is. The objects I am using are not SCNView objects, and I do not believe they inherit from SCNView (see the 1st paragraph for the variables used). Just looking to remove the "action" of the node playing every time it is in view.
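One guess at that "default action": the renderer appears to un-pause the backing SKScene whenever its material becomes visible, which restarts the SKVideoNode. A hedged sketch that blocks outside toggling by overriding isPaused in an SKScene subclass (StaticPauseScene and pinnedPaused are names invented here, and this behavior is an assumption, not documented fact):

```swift
import SpriteKit

// Once pinnedPaused is set, attempts by the renderer (or anyone else)
// to un-pause the scene are ignored until the flag is cleared.
class StaticPauseScene: SKScene {
    var pinnedPaused = false {
        didSet { super.isPaused = pinnedPaused }
    }
    override var isPaused: Bool {
        get { super.isPaused }
        set { super.isPaused = pinnedPaused ? true : newValue }
    }
}
```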
Addition:
For those interested in following the creation of my video player, here it is. Let me know if there is anything else you'd like to see (not sure what else you may want to look at) and thanks for your help.
func createVideoNode(_ anchor: ARImageAnchor, initialPOV: SCNNode) -> My3DPlayer? {
    guard let currentFrame = self.sceneView.session.currentFrame else {
        return nil
    }
    let delegate = UIApplication.shared.delegate as! AppDelegate
    var videoPlayer: My3DPlayer!
    videoPlayer = delegate.testing ? My3DPlayer(data: nil, currentFrame: currentFrame, anchor: anchor) : My3DPlayer(data: self.urlData, currentFrame: currentFrame, anchor: anchor)

    //Create TapGesture
    let tap = UITapGestureRecognizer(target: self, action: #selector(self.tapGesture))
    tap.delegate = self
    tap.name = "MyTap"
    self.sceneView.addGestureRecognizer(tap)

    return videoPlayer
}
My3DPlayer class:
class My3DPlayer: SCNNode {

    init(geometry: SCNGeometry?) {
        super.init()
        self.geometry = geometry
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    convenience init(data: Data?, currentFrame: ARFrame, anchor: ARImageAnchor) {
        self.init(geometry: nil)
        self.createPlayer(currentFrame, data, anchor)
    }

    private func createPlayer(_ frame: ARFrame, _ data: Data?, _ anchor: ARImageAnchor) {
        let physicalSize = anchor.referenceImage.physicalSize
        print("Init Player W/ physicalSize: \(physicalSize)")

        //Create video
        if (UIApplication.shared.delegate! as! AppDelegate).testing {
            let path = Bundle.main.path(forResource: "Bear", ofType: "mov")
            self.url = URL(fileURLWithPath: path!)
        }
        else {
            let url = data!.getAVAssetURL(location: "MyLocation")
            self.url = url
        }
        let asset = AVAsset(url: self.url)
        let track = asset.tracks(withMediaType: AVMediaType.video).first!
        let playerItem = AVPlayerItem(asset: asset)
        let player = AVPlayer(playerItem: playerItem)
        self.player = player

        var videoSize = track.naturalSize.applying(track.preferredTransform)
        videoSize = CGSize(width: abs(videoSize.width), height: abs(videoSize.height))
        print("Init Video W/ size: \(videoSize)")

        //Determine if landscape or portrait
        self.landscape = videoSize.width > videoSize.height
        print(self.landscape == true ? "Landscape" : "Portrait")

        //Do something when video ended
        NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(note:)), name: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: nil)

        //Add observer to determine when Player is ready
        player.addObserver(self, forKeyPath: "status", options: [], context: nil)

        //Create video Node
        let videoNode = SKVideoNode(avPlayer: player)

        //Create 2d scene to put 2d player on - SKScene
        videoNode.position = CGPoint(x: videoSize.width/2, y: videoSize.height/2)
        videoNode.size = videoSize

        //Portrait -- //Landscape doesn't need adjustments??
        if !self.landscape {
            let width = videoNode.size.width
            videoNode.size.width = videoNode.size.height
            videoNode.size.height = width
            videoNode.position = CGPoint(x: videoNode.size.width/2, y: videoNode.size.height/2)
        }
        let scene = SKScene(size: videoNode.size)

        //Add videoNode to scene
        scene.addChild(videoNode)

        //Create Button-look even though we don't use the button. Just creates the illusion of pressing play and pause
        let image = UIImage(named: "PlayButton")!
        let texture = SKTexture(image: image)
        self.button = SKSpriteNode(texture: texture)
        self.button.position = videoNode.position

        //Makes the button look like a square
        let minimumSize = [videoSize.width, videoSize.height].min()!
        self.button.size = CGSize(width: minimumSize/4, height: minimumSize/4)
        scene.addChild(button)

        //Get ratio difference between physicalSize and video size
        let widthRatio = Float(physicalSize.width)/Float(videoSize.width)
        let heightRatio = Float(physicalSize.height)/Float(videoSize.height)
        let finalRatio = [widthRatio, heightRatio].min()!

        //Create a Plane (SCNPlane) to put the SKScene on
        let plane = SCNPlane(width: scene.size.width, height: scene.size.height)
        plane.firstMaterial?.diffuse.contents = scene
        plane.firstMaterial?.isDoubleSided = true

        //Set self.geometry = plane
        self.geometry = plane

        //Size the node correctly
        //Find the real scaling variable
        let scale = CGFloat(finalRatio)
        let appearanceAction = SCNAction.scale(to: scale, duration: 0.4)
        appearanceAction.timingMode = .easeOut

        //Set initial scale to 0, then use the action to scale up
        self.scale = SCNVector3Make(0, 0, 0)
        self.runAction(appearanceAction)
    }

    @objc func playerDidFinishPlaying(note: Notification) {
        self.player.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero)
        self.setButtonAlpha(alpha: 1)
    }
}
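The widthRatio/heightRatio/finalRatio math inside createPlayer can be pulled out into a pure helper, which makes the scaling easy to sanity-check on its own (fitScale is a name introduced here for illustration, not part of the class above):

```swift
import Foundation

/// Uniform scale that fits a video of `videoSize` (pixels) onto a reference
/// image of `physicalSize` (meters), preserving the video's aspect ratio.
func fitScale(physicalSize: CGSize, videoSize: CGSize) -> CGFloat {
    let widthRatio = physicalSize.width / videoSize.width
    let heightRatio = physicalSize.height / videoSize.height
    // Take the smaller ratio so neither dimension overflows the image.
    return min(widthRatio, heightRatio)
}
```

For a 10 cm wide reference image and a 1000x500 px video, the width ratio (0.0001) wins over the height ratio (0.0002).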
Effort 1:
I have tried to stop the tracking via:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    createVideoNode(imageAnchor)
    self.resetConfiguration(turnOnConfig: true, turnOnImageTracking: false)
}

func resetConfiguration(turnOnConfig: Bool = true, turnOnImageTracking: Bool = false) {
    let configuration = ARWorldTrackingConfiguration()
    if turnOnImageTracking {
        guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) else {
            fatalError("Missing expected asset catalog resources.")
        }
        configuration.planeDetection = .horizontal
        configuration.detectionImages = referenceImages
    }
    else {
        configuration.planeDetection = []
    }
    if turnOnConfig {
        sceneView.session.run(configuration, options: [.resetTracking])
    }
}
Above, I have tried resetting the configuration. This only seems to reset the detected planes, as the video still plays on render. Whether it is paused or finished, it resets and starts over or continues playing where it left off.
Effort 2:
I have tried
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    createVideoNode(imageAnchor)
    self.pauseTracking()
}

func pauseTracking() {
    self.sceneView.session.pause()
}
This stops everything, so the camera even freezes since nothing is being tracked. Completely useless here.
Ok. So here is a fix. See renderer(_:updateAtTime:).
var player: AVPlayer!
var videoSpriteNode: SKVideoNode!
var play = true

@objc func tap(_ recognizer: UITapGestureRecognizer) {
    if play {
        play = false
        player.pause()
    } else {
        play = true
        player.play()
    }
}

func setVideo() -> SKScene {
    let size = CGSize(width: 500, height: 500)
    let skScene = SKScene(size: size)

    let videoURL = Bundle.main.url(forResource: "video.mp4", withExtension: nil)!
    player = AVPlayer(url: videoURL)

    skScene.scaleMode = .aspectFit

    videoSpriteNode = SKVideoNode(avPlayer: player)
    videoSpriteNode.position = CGPoint(x: size.width/2, y: size.height/2)
    videoSpriteNode.size = size
    videoSpriteNode.yScale = -1
    skScene.addChild(videoSpriteNode)

    player.play()

    return skScene
}
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let image = anchor as? ARImageAnchor {
        print("found")

        let planeGeometry = SCNPlane(width: image.referenceImage.physicalSize.width, height: image.referenceImage.physicalSize.height)
        let plane = SCNNode(geometry: planeGeometry)
        planeGeometry.materials.first?.diffuse.contents = setVideo()

        plane.transform = SCNMatrix4MakeRotation(-.pi/2, 1, 0, 0)

        node.addChildNode(plane)
    }
}

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if !play {
        player.pause()
    }
}
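One optional refinement, sketched here rather than taken from the answer: renderer(_:updateAtTime:) fires every frame, so guarding on the current rate avoids issuing a redundant pause sixty times per second.

```swift
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Re-assert the paused state each frame, but only touch the player
    // when playback was actually restarted behind our back.
    if !play && player.rate != 0 {
        player.pause()
    }
}
```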
Use this idea in your code.