How to get two views to be the same width and height using CGAffineTransform
If I want 2 views that are the same width and height, with both of their centers in the middle of the screen, I use the code below and it works great. The two views end up side by side in the middle of the screen, with exactly the same width and height.
let width = view.frame.width
let insideRect = CGRect(x: 0, y: 0, width: width / 2, height: .infinity)
let rect = AVMakeRect(aspectRatio: CGSize(width: 9, height: 16), insideRect: insideRect)
// blue
leftView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
leftView.leadingAnchor.constraint(equalTo: view.leadingAnchor).isActive = true
leftView.widthAnchor.constraint(equalToConstant: rect.width).isActive = true
leftView.heightAnchor.constraint(equalToConstant: rect.height).isActive = true
// purple
rightView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
rightView.trailingAnchor.constraint(equalTo: view.trailingAnchor).isActive = true
rightView.widthAnchor.constraint(equalTo: leftView.widthAnchor).isActive = true
rightView.heightAnchor.constraint(equalTo: leftView.heightAnchor).isActive = true
How can I do the same thing using CGAffineTransform? I've tried to find a way to make rightView the same size as leftView, with no luck. The top of leftView's frame ends up in the middle of the screen rather than its center, and rightView is completely off.
let width = view.frame.width
let insideRect = CGRect(x: 0, y: 0, width: width / 2, height: .infinity)
let rect = AVMakeRect(aspectRatio: CGSize(width: 9, height: 16), insideRect: insideRect)
leftView.transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
leftView.transform = CGAffineTransform(translationX: 0, y: view.frame.height / 2)
rightView.transform = leftView.transform
rightView.transform = CGAffineTransform(translationX: rect.width, y: view.frame.height / 2)
You need to base your transforms on the output size of the composited video - its .renderSize.
Based on your other question...
So, if you have two 1280.0 x 720.0 videos and you want them side by side in a 640 x 480 render frame, you need to:
- get the size of the first video
- scale it to 320 x 480
- move it to 0, 0
then:
- get the size of the second video
- scale it to 320 x 480
- move it to 320, 0
So your scale transform would be:
let targetWidth = renderSize.width / 2.0
let targetHeight = renderSize.height
let widthScale = targetWidth / sourceVideoSize.width
let heightScale = targetHeight / sourceVideoSize.height
let scale = CGAffineTransform(scaleX: widthScale, y: heightScale)
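The matching move transforms are just translations into each half of the render frame. A minimal sketch of putting the two together, assuming both sources are the same size (so one scale works for both) and borrowing the layer-instruction names from the full code further below:

let firstMove = CGAffineTransform(translationX: 0, y: 0)                        // left half
let secondMove = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0) // right half
// apply the scale first, then the move
firstLayerInstruction.setTransform(scale.concatenating(firstMove), at: .zero)
secondLayerInstruction.setTransform(scale.concatenating(secondMove), at: .zero)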
That should get you there... except...
For my testing, I took four 8-second landscape videos. For reasons unknown to me, the "native" preferred transforms are:
Videos 1 & 3
[-1, 0, 0, -1, 1280, 720]
Videos 2 & 4
[1, 0, 0, 1, 0, 0]
So the sizes returned by track.naturalSize.applying(track.preferredTransform) end up as:
Videos 1 & 3
-1280 x -720
Videos 2 & 4
1280 x 720
This messes up the transforms.
After some experimentation, if the size is negative we need to:
- rotate the transform
- scale the transform (making sure to use positive widths / heights)
- translate the transform, adjusted for the orientation change
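Condensed into a single helper, that fix-up might look like this (a sketch only; orientationFixedTransform and its parameter names are mine, not from any API):

func orientationFixedTransform(for trackSize: CGSize, renderSize: CGSize, xOffset: CGFloat) -> CGAffineTransform {
    // start with a 180-degree rotation when the transformed size came back negative
    var t: CGAffineTransform = trackSize.width < 0 ? CGAffineTransform(rotationAngle: .pi) : .identity
    // scale with positive factors
    t = t.scaledBy(x: abs(renderSize.width / 2.0 / trackSize.width),
                   y: abs(renderSize.height / trackSize.height))
    // translate into place, compensating for the rotation if needed
    let move = trackSize.width < 0
        ? CGAffineTransform(translationX: xOffset + renderSize.width / 2.0, y: renderSize.height)
        : CGAffineTransform(translationX: xOffset, y: 0)
    return t.concatenating(move)
}

With an xOffset of 0 for the left video and renderSize.width / 2.0 for the right one, this reproduces the branches in the full code below.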
Here's a complete implementation (without saving to disk at the end):
import UIKit
import AVFoundation

class VideoViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemYellow
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let originalVideoURL1 = Bundle.main.url(forResource: "video1", withExtension: "mov"),
              let originalVideoURL2 = Bundle.main.url(forResource: "video2", withExtension: "mov")
        else { return }

        let firstAsset = AVURLAsset(url: originalVideoURL1)
        let secondAsset = AVURLAsset(url: originalVideoURL2)

        let mixComposition = AVMutableComposition()

        // add each asset's video track to the composition
        guard let firstTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange1 = CMTimeRangeMake(start: .zero, duration: firstAsset.duration)
        do {
            try firstTrack.insertTimeRange(timeRange1, of: firstAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }

        guard let secondTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
        let timeRange2 = CMTimeRangeMake(start: .zero, duration: secondAsset.duration)
        do {
            try secondTrack.insertTimeRange(timeRange2, of: secondAsset.tracks(withMediaType: .video)[0], at: .zero)
        } catch {
            return
        }

        let mainInstruction = AVMutableVideoCompositionInstruction()
        mainInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: CMTimeMaximum(firstAsset.duration, secondAsset.duration))

        // sizes as reported after applying each track's preferredTransform
        // (these can come back negative - see above)
        var track: AVAssetTrack!

        track = firstAsset.tracks(withMediaType: .video).first
        let firstSize = track.naturalSize.applying(track.preferredTransform)

        track = secondAsset.tracks(withMediaType: .video).first
        let secondSize = track.naturalSize.applying(track.preferredTransform)

        // debugging
        print("firstSize:", firstSize)
        print("secondSize:", secondSize)

        let renderSize = CGSize(width: 640, height: 480)

        var scale: CGAffineTransform!
        var move: CGAffineTransform!

        // first video goes in the left half of the render frame
        let firstLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: firstTrack)

        scale = .identity
        move = .identity

        if firstSize.width < 0 {
            // negative size: rotate 180 degrees before scaling
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / firstSize.width), y: abs(renderSize.height / firstSize.height))
        move = CGAffineTransform(translationX: 0, y: 0)
        if firstSize.width < 0 {
            // compensate for the rotation
            move = CGAffineTransform(translationX: renderSize.width / 2.0, y: renderSize.height)
        }

        firstLayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        // second video goes in the right half of the render frame
        let secondLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: secondTrack)

        scale = .identity
        move = .identity

        if secondSize.width < 0 {
            scale = CGAffineTransform(rotationAngle: .pi)
        }
        scale = scale.scaledBy(x: abs(renderSize.width / 2.0 / secondSize.width), y: abs(renderSize.height / secondSize.height))
        move = CGAffineTransform(translationX: renderSize.width / 2.0, y: 0)
        if secondSize.width < 0 {
            move = CGAffineTransform(translationX: renderSize.width, y: renderSize.height)
        }

        secondLayerInstruction.setTransform(scale.concatenating(move), at: .zero)

        mainInstruction.layerInstructions = [firstLayerInstruction, secondLayerInstruction]

        let mainCompositionInst = AVMutableVideoComposition()
        mainCompositionInst.instructions = [mainInstruction]
        mainCompositionInst.frameDuration = CMTime(value: 1, timescale: 30)
        mainCompositionInst.renderSize = renderSize

        let newPlayerItem = AVPlayerItem(asset: mixComposition)
        newPlayerItem.videoComposition = mainCompositionInst

        let player = AVPlayer(playerItem: newPlayerItem)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = view.bounds
        view.layer.addSublayer(playerLayer)
        player.seek(to: .zero)
        player.play()

        // video export code goes here...
    }
}
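If you do want to save the result to disk, a minimal export sketch (my addition, not part of the original answer; outputURL is a placeholder destination you would supply):

guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputURL = outputURL // placeholder destination file URL
exporter.outputFileType = .mov
exporter.videoComposition = mainCompositionInst
exporter.exportAsynchronously {
    // inspect exporter.status and exporter.error here
}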
The preferredTransforms can also be different for front / back cameras, mirroring, etc. But I'll leave that for you to work out.
Edit
Sample project at: https://github.com/DonMag/VideoTest
It produces a side-by-side composition (made using two 720 x 1280 video clips).