Pass data from ViewController to Representable SwiftUI
I am doing object detection and using UIViewControllerRepresentable to add my view controller. The problem is that I cannot pass data from my ViewController to my SwiftUI view, although I can print it.
Can anyone help me? Here is my code:
import SwiftUI
import AVKit
import UIKit
import Vision

let SVWidth = UIScreen.main.bounds.width

struct MaskDetectionView: View {
    let hasMaskColor = Color.green
    let noMaskColor = Color.red
    let shadowColor = Color.gray

    var body: some View {
        VStack(alignment: .center) {
            VStack(alignment: .center) {
                Text("Please place your head inside the bounded box.")
                    .font(.system(size: 15, weight: .regular, design: .default))
                Text("For better result, show your entire face.")
                    .font(.system(size: 15, weight: .regular, design: .default))
            }.padding(.top, 10)

            VStack(alignment: .center) {
                SwiftUIViewController()
                    .frame(width: SVWidth - 30, height: SVWidth + 30, alignment: .center)
                    .background(Color.white)
                    .cornerRadius(25)
                    .shadow(color: hasMaskColor, radius: 7, x: 0, y: 0)
                    .padding(.top, 30)
                Spacer()
                /// VALUE HERE
            }
        }.padding()
    }
}

struct MaskDetectionView_Previews: PreviewProvider {
    static var previews: some View {
        MaskDetectionView()
    }
}
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    var result = String()
    // ALL THE OBJECTS

    override func viewDidLoad() {
        super.viewDidLoad()
        // 1 - start session
        let capture_session = AVCaptureSession()
        //capture_session.sessionPreset = .vga640x480

        // 2 - set the device front & add input
        guard let capture_device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: .video, position: .front) else { return }
        guard let input = try? AVCaptureDeviceInput(device: capture_device) else { return }
        capture_session.addInput(input)

        // 3 - the layer on screen that shows the picture
        let previewLayer = AVCaptureVideoPreviewLayer(session: capture_session)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame.size = CGSize(width: SVWidth, height: SVWidth + 40)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        // 4 - run the session
        capture_session.startRunning()

        // 5 - the produced output aka image or video
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoQueue"))
        capture_session.addOutput(dataOutput)
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // our model
        guard let model = try? VNCoreMLModel(for: SqueezeNet(configuration: MLModelConfiguration()).model) else { return }
        // request for our model
        let request = VNCoreMLRequest(model: model) { (finishedReq, err) in
            if let error = err {
                print("failed to detect faces:", error)
                return
            }
            // result
            guard let results = finishedReq.results as? [VNClassificationObservation] else { return }
            guard let first_observation = results.first else { return }
            self.result = first_observation.identifier
            print(self.result)
        }
        guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}

struct SwiftUIViewController: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> ViewController {
        return ViewController()
    }

    func updateUIViewController(_ uiViewController: ViewController, context: Context) {
    }
}
Swift offers several ways to pass data back and forth between views and objects, for example delegation, Key-Value Observation, or, specifically for SwiftUI, property wrappers such as @State, @Binding, @ObservableObject and @ObservedObject. For displaying data in a SwiftUI view, however, you will want a property wrapper.
If you want to do it the SwiftUI way, you may want to look at the @State and @Binding property wrappers, and at how to use a coordinator in a UIViewControllerRepresentable struct. Add a @State property to your SwiftUI view and pass it as a binding to your UIViewControllerRepresentable.
// Declare a new property in struct MaskDetectionView and pass it to SwiftUIViewController as a binding
@State var result = ""
...
SwiftUIViewController(resultText: $result)

// Add your new binding as a property in the SwiftUIViewController struct
@Binding var resultText: String
This way you expose part of your SwiftUI view (for example a result string that you can use in a Text view) to the UIViewControllerRepresentable. From there you can pass it further down to the ViewController, and/or have a look at this article on coordinators: https://www.hackingwithswift.com/books/ios-swiftui/using-coordinators-to-manage-swiftui-view-controllers
In my opinion, wrapping your camera work in a separate ViewController class is unnecessary here and can be done with a coordinator instead. The following steps should get your view controller up and running:
- Create your view controller code, including the setup of the AVKit objects, in makeUIViewController
- Make sure to set context.coordinator as the delegate, not self
- Create a nested class Coordinator inside SwiftUIViewController and declare it as your AVCaptureVideoDataOutputSampleBufferDelegate
- Add a property to the coordinator to hold an instance of the view controller object, and implement the initializer and the makeCoordinator function so that the coordinator stores a reference to the view controller
- If everything is set up correctly so far, you can now implement your AVCaptureVideoDataOutputSampleBufferDelegate methods in the coordinator class and update the view controller's bound property whenever something is detected, returning a result
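Ignoring the AVKit specifics, the wiring described in these steps can be sketched in plain Swift. All names below are hypothetical stand-ins: a setter closure plays the role of the Binding<String>, and frameClassified(as:) stands in for the captureOutput delegate callback.

```swift
import Foundation

// Stand-in for the nested Coordinator class.
final class MockCoordinator {
    private let update: (String) -> Void   // stand-in for the SwiftUI binding
    init(update: @escaping (String) -> Void) { self.update = update }

    // In the real coordinator this would be the
    // AVCaptureVideoDataOutputSampleBufferDelegate callback.
    func didClassify(_ label: String) {
        update(label)   // real code: identifierBinding.wrappedValue = label
    }
}

// Stand-in for the camera view controller.
final class MockViewController {
    weak var delegate: MockCoordinator?

    // Simulates a classification result arriving from the capture pipeline.
    func frameClassified(as label: String) {
        delegate?.didClassify(label)
    }
}

// Wiring, as makeUIViewController/makeCoordinator would do it:
var state = ""                                    // stand-in for the @State property
let coordinator = MockCoordinator { state = $0 }
let vc = MockViewController()
vc.delegate = coordinator
vc.frameClassified(as: "mask")
print(state)   // prints "mask"
```

The point of the sketch is the direction of the references: the view controller only knows its delegate, and the coordinator is the single place that writes back into SwiftUI state.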
Protocols (interfaces in other languages) make this use case simple, and they are very easy to use:
1 - Define the protocol in a suitable place
2 - Implement it in the view that needs the data (class or struct)
3 - Pass a reference to the implementing object to the caller class or struct
Example below:
// Protocol
protocol MyDataReceiverDelegate {
    func dataReceived(data: String) // any type of data you need; I chose String
}

// implementer struct
struct MaskDetectionView: View, MyDataReceiverDelegate {
    func dataReceived(data: String) {
        // write your code here to process the received data
        print(data)
    }

    var body: some View {
        // your views come here
        VStack(alignment: .center) {
            SwiftUIViewController(parent: self)
        }
    }
}

// custom view
struct SwiftUIViewController: UIViewControllerRepresentable {
    let parent: MaskDetectionView

    func makeUIViewController(context: Context) -> ViewController {
        return ViewController(delegate: parent)
    }

    func updateUIViewController(_ uiViewController: ViewController, context: Context) {
    }
}

// caller class
// (your code is omitted for simplicity)
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    let delegate: MyDataReceiverDelegate

    init(delegate d: MyDataReceiverDelegate) {
        self.delegate = d
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // your code comes here
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // the rest of your code comes here
        delegate.dataReceived(data: "Data you want to pass to parent view")
    }
}
The idiomatic approach is to loop a Binding instance through the UI hierarchy, in both the SwiftUI and the UIKit code. The Binding will transparently update the data on all views connected to it, regardless of who made the change.
The data flow diagram would look similar to this:
OK, on to the implementation details. First, you need a @State to store the data coming from the UIKit side, to make it available for updates to the view:
struct MaskDetectionView: View {
    @State var clasificationIdentifier: String = ""
Next, you need to pass it to both the view controller and the SwiftUI view:
var body: some View {
    ...
    SwiftUIViewController(identifier: $clasificationIdentifier)
    ...
    // this is the "VALUE HERE" from your question
    Text("Clasification identifier: \(clasificationIdentifier)")
Now that the binding is properly injected, you need to update the UIKit side of the code so it can receive the binding.
Update your view representable to look something like this:
struct SwiftUIViewController: UIViewControllerRepresentable {
    // this is the binding that is received from the SwiftUI side
    let identifier: Binding<String>

    // this will be the delegate of the view controller; its role is to allow
    // the data transfer from UIKit to SwiftUI
    class Coordinator: ViewControllerDelegate {
        let identifierBinding: Binding<String>

        init(identifierBinding: Binding<String>) {
            self.identifierBinding = identifierBinding
        }

        func clasificationOccured(_ viewController: ViewController, identifier: String) {
            // whenever the view controller notifies its delegate about receiving a new identifier,
            // the line below will propagate the change up to SwiftUI
            identifierBinding.wrappedValue = identifier
        }
    }

    func makeUIViewController(context: Context) -> ViewController {
        let vc = ViewController()
        vc.delegate = context.coordinator
        return vc
    }

    func updateUIViewController(_ uiViewController: ViewController, context: Context) {
        // update the controller data, if needed
    }

    // this is very important: this coordinator will be used in `makeUIViewController`
    func makeCoordinator() -> Coordinator {
        Coordinator(identifierBinding: identifier)
    }
}
The last piece of the puzzle is to write the code for the view controller delegate, and the code that uses it:
protocol ViewControllerDelegate: AnyObject {
    func clasificationOccured(_ viewController: ViewController, identifier: String)
}

class ViewController: UIViewController {
    weak var delegate: ViewControllerDelegate?

    ...

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        ...
        print(self.result)

        // let's tell the delegate we found a new clasification;
        // the delegate, aka the Coordinator, will then update the Binding,
        // the Binding will update the State, and this change will be
        // propagated to the Text() element in the SwiftUI view
        delegate?.clasificationOccured(self, identifier: self.result)
    }
}
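One caveat worth noting with any of the approaches above: captureOutput is invoked on the background "videoQueue", while SwiftUI state should only be updated on the main thread. A minimal sketch of the fix, where the setter closure is a hypothetical stand-in for the coordinator's binding write:

```swift
import Foundation

// Hop to the main queue before the coordinator writes the binding,
// since the capture delegate fires on a background queue.
final class Coordinator {
    private let setIdentifier: (String) -> Void  // stand-in for identifierBinding
    init(setIdentifier: @escaping (String) -> Void) {
        self.setIdentifier = setIdentifier
    }

    func clasificationOccured(identifier: String) {
        DispatchQueue.main.async {
            // real code: self.identifierBinding.wrappedValue = identifier
            self.setIdentifier(identifier)
        }
    }
}

var captured = ""
let coordinator = Coordinator { captured = $0 }

// Simulate the delegate being called from the capture queue.
DispatchQueue(label: "videoQueue").async {
    coordinator.clasificationOccured(identifier: "mask")
}

// Pump the main run loop briefly so the main-queue block executes.
RunLoop.main.run(until: Date().addingTimeInterval(0.5))
print(captured)
```

Without the main-queue hop, writing the binding from the video queue can trigger runtime warnings about publishing changes from background threads.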