How to send multiple variables from C++ to Swift
I'm new to iOS development and I'm trying to build an app that does some image processing on the camera feed. I'm using OpenCV, which is C++, for the image processing. I managed to build an app that does this, but now I want to send some information from the C++ code back to Swift so that I can display it in the user interface. So I need to return multiple variables from C++ to Swift. I tried using a tuple, but without success. Which files do I need to change or add to return multiple things from C++ to Swift? Thanks in advance!
Here is how my code is set up:
In the ViewController.swift file I have the following code:
import UIKit
import AVFoundation
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var frameRate: UITextField!
private var captureSession: AVCaptureSession = AVCaptureSession()
private let videoDataOutput = AVCaptureVideoDataOutput()
private func addCamerainput() {
guard let device = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back).devices.first else {
fatalError("no camera found")
}
let cameraInput = try! AVCaptureDeviceInput(device: device)
self.captureSession.sessionPreset = .vga640x480
self.captureSession.addInput(cameraInput)
}
private func getFrames() {
videoDataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
videoDataOutput.alwaysDiscardsLateVideoFrames = true
videoDataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frame.processing.queue"))
self.captureSession.addOutput(videoDataOutput)
guard let connection = self.videoDataOutput.connection(with: AVMediaType.video), connection.isVideoOrientationSupported else { return }
connection.videoOrientation = .portrait
}
func captureOutput(
_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
// Here we can process the frame
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly)
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
let colorSpace = CGColorSpaceCreateDeviceRGB()
var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
guard let quartzImage = context?.makeImage() else { return }
CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly)
let image = UIImage(cgImage: quartzImage)
let processed_image = imageProcessingBridge().imageProcessing(in: image)
DispatchQueue.main.async {
self.imageView.image = processed_image
}
}
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
self.addCamerainput()
self.getFrames()
self.captureSession.startRunning()
}
}
For the image processing, I have this code in a file named imageProcessing.cpp:
#include "imageProcessing.hpp"
using namespace cv;
using namespace std;
Mat image_processing::detect_line(Mat image) {
// do some image processing with image
return image;
}
and this code is stored in a file named imageProcessing.hpp:
#ifndef imageProcessing_hpp
#define imageProcessing_hpp
#include <stdio.h>
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
class image_processing {
public:
Mat detect_line(Mat image);
};
#endif /* imageProcessing_hpp */
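Since the bridge file is Objective-C++, it can consume C++ types directly, so one way to hand back more than one value is to have the C++ layer return a small struct instead of a bare `cv::Mat`. A minimal sketch of the idea, using plain types in place of `cv::Mat` so it compiles on its own (the struct and function names here are illustrative, not part of the original project):

```cpp
#include <vector>
#include <numeric>

// Hypothetical bundle of results from line detection. In the real
// project this could also carry the processed cv::Mat; plain types
// are used here so the sketch is self-contained.
struct LineDetectionResult {
    int lineCount;    // how many lines were detected
    double avgAngle;  // mean angle of the detected lines, in degrees
};

// Stand-in for a detect_line variant that reports statistics
// alongside (or instead of) the processed image.
LineDetectionResult summarizeLines(const std::vector<double>& angles) {
    LineDetectionResult r;
    r.lineCount = static_cast<int>(angles.size());
    r.avgAngle = angles.empty()
        ? 0.0
        : std::accumulate(angles.begin(), angles.end(), 0.0) / angles.size();
    return r;
}
```

The .mm bridge can then copy the struct's fields into whatever Objective-C object it hands to Swift, since a single return value that is a struct carries as many fields as needed.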
For the link between C++ and Swift, I have 2 files, imageProcessingBridge.h and imageProcessingBridge.mm:
#ifndef imageProcessingBridge_h
#define imageProcessingBridge_h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
@interface imageProcessingBridge : NSObject
- (UIImage *) imageProcessingIn: (UIImage *) image;
@end
#endif /* imageProcessingBridge_h */
and:
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <Foundation/Foundation.h>
#import "imageProcessingBridge.h"
#import "imageProcessing.hpp"
@implementation imageProcessingBridge
- (UIImage *) imageProcessingIn:(UIImage *)image {
// convert UI image to mat
cv::Mat opencvImage;
UIImageToMat(image, opencvImage, true);
// convert colorspace
cv::Mat convertedColorSpaceImage;
cv::cvtColor(opencvImage, convertedColorSpaceImage, cv::COLOR_RGBA2RGB);
// Run image processing algorithm
image_processing imageProcessing;
cv::Mat processedImage = imageProcessing.detect_line(convertedColorSpaceImage);
// convert mat to UI image and return it to the caller
return MatToUIImage(processedImage);
}
@end
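Since the Objective-C bridge cannot return a C++ type or a Swift tuple, a common approach is to define a small Objective-C class whose properties hold everything you want to pass back; Swift then reads those properties directly. A sketch of what that could look like (the `imageProcessingResult` class and its fields are illustrative names, not part of the original project):

```objc
// imageProcessingResult.h (hypothetical new file)
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

// Container for everything the C++ side wants to report back to Swift.
@interface imageProcessingResult : NSObject
@property (nonatomic, strong) UIImage *image;      // the processed frame
@property (nonatomic, assign) NSInteger lineCount; // e.g. number of detected lines
@end

// In imageProcessingBridge.h the method would then return the container:
// - (imageProcessingResult *) imageProcessingIn: (UIImage *) image;
//
// and in imageProcessingBridge.mm, instead of returning the UIImage directly:
// imageProcessingResult *result = [[imageProcessingResult alloc] init];
// result.image = MatToUIImage(processedImage);
// result.lineCount = /* value taken from the C++ code */;
// return result;
```

As long as the new header is visible to Swift (via the bridging header), `let result = imageProcessingBridge().imageProcessing(in: image)` returns this object, and `result.image`, `result.lineCount`, etc. can be read like any other properties.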