
Sending multiple variables from C++ to Swift

I am new to iOS development and I am trying to make an app that does some kind of image processing through the camera. I am using OpenCV for the image processing, which is C++. I successfully made an app that does this, but now I want to send some information from the C++ code back to Swift so that I can display it in the user interface. So I need to return multiple variables from C++ to Swift. I tried using a tuple, but that didn't work. Which files do I need to change or add to return multiple things from C++ to Swift? Thanks in advance!

Here is how my code is set up.

In the ViewController.swift file I have the following code:

import UIKit
import AVFoundation

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var frameRate: UITextField!
    
    private var captureSession: AVCaptureSession = AVCaptureSession()
    private let videoDataOutput = AVCaptureVideoDataOutput()
    
    
    private func addCameraInput() {
        guard let device = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back).devices.first else {
            fatalError("no camera found")
        }
        let cameraInput = try! AVCaptureDeviceInput(device: device)
        self.captureSession.sessionPreset = .vga640x480
        self.captureSession.addInput(cameraInput)
    }
    
    private func getFrames() {
        videoDataOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as NSString): NSNumber(value: kCVPixelFormatType_32BGRA)] as [String: Any]
        videoDataOutput.alwaysDiscardsLateVideoFrames = true
        videoDataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frame.processing.queue"))
        self.captureSession.addOutput(videoDataOutput)
        
        guard let connection = self.videoDataOutput.connection(with: AVMediaType.video), connection.isVideoOrientationSupported else { return }
        connection.videoOrientation = .portrait
    }
    
    func captureOutput(
        _ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        
        // Here we can process the frame
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly)
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
        let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
        guard let quartzImage = context?.makeImage() else { return }
        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags.readOnly)
        let image = UIImage(cgImage: quartzImage)
        
        let processed_image = imageProcessingBridge().imageProcessing(in: image)
        
        DispatchQueue.main.async {
            self.imageView.image = processed_image
        }
    }
    
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        self.addCameraInput()
        self.getFrames()
        self.captureSession.startRunning()
    }


}

For the image processing I have this code in a file called imageProcessing.cpp:

#include "imageProcessing.hpp"

using namespace cv;
using namespace std;

Mat image_processing::detect_line(Mat image) {
    // do some image processing with image
    return image;
}

and this code in a file called imageProcessing.hpp:

#ifndef imageProcessing_hpp
#define imageProcessing_hpp

#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

class image_processing {
public:
    Mat detect_line(Mat image);
};

#endif /* imageProcessing_hpp */
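To get more than one value back out of the C++ layer, the usual first step is to bundle the values in a struct on the C++ side, since neither Swift tuples nor Objective-C can cross this boundary directly. Below is a minimal sketch of how the header could be extended; the struct name DetectionResult, the method detect_line_with_info, and the extra fields lineCount and angle are hypothetical names chosen only for illustration, not part of the original code:

// imageProcessing.hpp (sketch) -- hypothetical extension
struct DetectionResult {
    Mat image;      // the processed frame
    int lineCount;  // example extra value to show in the UI
    double angle;   // example extra value to show in the UI
};

class image_processing {
public:
    // returns the processed image together with the extra values
    DetectionResult detect_line_with_info(Mat image);
};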

For the link between C++ and Swift I have 2 files, imageProcessingBridge.h and imageProcessingBridge.mm:

#ifndef imageProcessingBridge_h
#define imageProcessingBridge_h

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface imageProcessingBridge : NSObject

- (UIImage *) imageProcessingIn: (UIImage *) image;

@end

#endif /* imageProcessingBridge_h */
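Swift only ever sees this Objective-C header, so it cannot receive a C++ struct or a Swift tuple from the bridge; a small Objective-C wrapper class whose properties carry the individual values is a common vehicle. A hedged sketch of an extension to this header (ImageProcessingResult, the category name, and the property names are hypothetical, and the header deliberately stays pure Objective-C so Swift can still import it):

// imageProcessingBridge.h (sketch) -- hypothetical extension
@interface ImageProcessingResult : NSObject
@property (nonatomic, strong) UIImage *image;  // the processed frame
@property (nonatomic, assign) int lineCount;   // example extra value
@property (nonatomic, assign) double angle;    // example extra value
@end

@interface imageProcessingBridge (MultipleValues)
- (ImageProcessingResult *) imageProcessingWithInfoIn: (UIImage *) image;
@end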

and here is imageProcessingBridge.mm:

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <Foundation/Foundation.h>
#import "imageProcessingBridge.h"
#import "imageProcessing.hpp"

@implementation imageProcessingBridge

- (UIImage *) imageProcessingIn:(UIImage *)image {
    
    // convert UI image to mat
    cv::Mat opencvImage;
    UIImageToMat(image, opencvImage, true);
    
    // convert colorspace
    cv::Mat convertedColorSpaceImage;
    cv::cvtColor(opencvImage, convertedColorSpaceImage, cv::COLOR_RGBA2RGB);
    
    // Run image processing algorithm
    image_processing imageProcessing;
    cv::Mat processedImage = imageProcessing.detect_line(convertedColorSpaceImage);
    
    // convert mat to UI image and return it to the caller
    return MatToUIImage(processedImage);
}
@end
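In the .mm file the C++ struct can then be unpacked field by field into the Objective-C wrapper. A minimal sketch under the same hypothetical names as above:

// imageProcessingBridge.mm (sketch) -- hypothetical extension
@implementation ImageProcessingResult
@end

@implementation imageProcessingBridge (MultipleValues)

- (ImageProcessingResult *) imageProcessingWithInfoIn:(UIImage *)image {
    // convert UI image to mat, exactly as in the existing method
    cv::Mat opencvImage;
    UIImageToMat(image, opencvImage, true);
    cv::Mat convertedColorSpaceImage;
    cv::cvtColor(opencvImage, convertedColorSpaceImage, cv::COLOR_RGBA2RGB);

    // run the (hypothetical) C++ method that returns a struct
    image_processing imageProcessing;
    DetectionResult cppResult = imageProcessing.detect_line_with_info(convertedColorSpaceImage);

    // copy each struct field into an Objective-C object Swift can read
    ImageProcessingResult *result = [[ImageProcessingResult alloc] init];
    result.image = MatToUIImage(cppResult.image);
    result.lineCount = cppResult.lineCount;
    result.angle = cppResult.angle;
    return result;
}

@end

On the Swift side the wrapper comes through as an ordinary class, so captureOutput could read the individual values from it. The selector should import roughly as shown below, but verify against the generated Swift interface:

    let result = imageProcessingBridge().imageProcessingWithInfo(in: image)
    DispatchQueue.main.async {
        self.imageView.image = result.image
        self.frameRate.text = "lines: \(result.lineCount)"
    }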
