How do I rotate the live object detection screen on a Temi Robot?

I am currently running android-demo-app/ObjectDetection on a Temi Robot. The preloaded images work fine, but when I press "Live" to enter the live object detection screen, the preview is rotated 90 degrees to the right.

The Temi robot has only one front-facing camera, on the same side as the screen.

I have tried changing textureView.setTransform(), ImageAnalysisConfig.Builder().setTargetRotation() and imageAnalysis.setTargetRotation(), but to no avail.

I also tried changing the activity's android:screenOrientation attribute in AndroidManifest.xml to fullSensorLandscape, but nothing changed.

I have searched up and down the Android Developer CameraX pages (first link, second link) but could not find an answer. Maybe I am just not smart enough to spot the solution there.

Any help is greatly appreciated!

AbstractCameraXActivity.java

    private void setupCameraX() {
        final TextureView textureView = getCameraPreviewTextureView();
        final PreviewConfig previewConfig = new PreviewConfig.Builder().build();
        final Preview preview = new Preview(previewConfig);
//        Matrix m = new Matrix();
//        m.postRotate(180);
//        textureView.setTransform(m); // not working
        preview.setOnPreviewOutputUpdateListener(output -> textureView.setSurfaceTexture(output.getSurfaceTexture()));

        final ImageAnalysisConfig imageAnalysisConfig =
                new ImageAnalysisConfig.Builder()
                        .setTargetResolution(new Size(500, 500))
                        .setCallbackHandler(mBackgroundHandler)
                        .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
                        //.setTargetRotation(Surface.ROTATION_0) // not working
                        .build();

        imageAnalysis = new ImageAnalysis(imageAnalysisConfig);
        imageAnalysis.setAnalyzer((image, rotationDegrees) -> {
            if (SystemClock.elapsedRealtime() - mLastAnalysisResultTime < 500) {
                return;
            }

            final R2 result = analyzeImage(image, rotationDegrees);
            if (result != null) {
                mLastAnalysisResultTime = SystemClock.elapsedRealtime();
                runOnUiThread(() -> applyToUiAnalyzeImageResult(result));
            }
        });
        //imageAnalysis.setTargetRotation(Surface.ROTATION_180); // not working
        CameraX.bindToLifecycle(this, preview, imageAnalysis);
    }

ObjectDetectionActivity.java

    @Override
    @WorkerThread
    @Nullable
    protected AnalysisResult analyzeImage(ImageProxy image, int rotationDegrees) {
        try {
            if (mModule == null) {
                mModule = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "yolov5s.torchscript.ptl"));
            }
        } catch (IOException e) {
            Log.e("Object Detection", "Error reading assets", e);
            return null;
        }

        Bitmap bitmap = imgToBitmap(Objects.requireNonNull(image.getImage()));
        Matrix matrix = new Matrix();
        matrix.postRotate(90.0f);
        bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
        Bitmap resizedBitmap = Bitmap.createScaledBitmap(bitmap, PrePostProcessor.mInputWidth, PrePostProcessor.mInputHeight, true);

        final Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(resizedBitmap, PrePostProcessor.NO_MEAN_RGB, PrePostProcessor.NO_STD_RGB);
        IValue[] outputTuple = mModule.forward(IValue.from(inputTensor)).toTuple();
        final Tensor outputTensor = outputTuple[0].toTensor();
        final float[] outputs = outputTensor.getDataAsFloatArray();

        float imgScaleX = (float) bitmap.getWidth() / PrePostProcessor.mInputWidth;
        float imgScaleY = (float) bitmap.getHeight() / PrePostProcessor.mInputHeight;
        float ivScaleX = (float) mResultView.getWidth() / bitmap.getWidth();
        float ivScaleY = (float) mResultView.getHeight() / bitmap.getHeight();

        final ArrayList<Result> results = PrePostProcessor.outputsToNMSPredictions(outputs, imgScaleX, imgScaleY, ivScaleX, ivScaleY, 0);
        return new AnalysisResult(results);
    }

AndroidManifest.xml

    <?xml version="1.0" encoding="utf-8"?>
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="org.pytorch.demo.objectdetection">

        <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
        <uses-permission android:name="android.permission.CAMERA" />

        <application
            android:allowBackup="true"
            android:icon="@mipmap/ic_launcher"
            android:label="@string/app_name"
            android:roundIcon="@mipmap/ic_launcher_round"
            android:supportsRtl="true"
            android:theme="@style/AppTheme">
            <activity android:name=".MainActivity"
                android:configChanges="orientation"
                android:screenOrientation="fullSensor">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity
                android:name=".ObjectDetectionActivity"
                android:configChanges="orientation"
                android:screenOrientation="fullSensor">
            </activity>
        </application>

    </manifest>

Update

I think I may know what the problem is now. In the setupCameraX() method of ObjectDetectionActivity, I should be manipulating the textureView, and what I need is to manipulate the pivot of the matrix transform. I am starting to see some of the camera view on the screen, but I don't know what x and y this call needs...

    final TextureView textureView = getCameraPreviewTextureView();
    final PreviewConfig previewConfig = new PreviewConfig.Builder().build();
    final Preview preview = new Preview(previewConfig);
    Matrix m = new Matrix();
    m.postRotate(180, x, y); // potential solution here
    textureView.setTransform(m); // not working
    preview.setOnPreviewOutputUpdateListener(output -> textureView.setSurfaceTexture(output.getSurfaceTexture()));
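
A common choice for that pivot is the centre of the TextureView. Below is a minimal sketch under that assumption; getWidth()/getHeight() return 0 before layout, so the transform has to be applied after the view has been laid out, for example via post():

    // Sketch only: rotate around the view centre instead of the default
    // pivot at (0, 0). Run after layout so the width/height are non-zero.
    textureView.post(() -> {
        Matrix transform = new Matrix();
        float centerX = textureView.getWidth() / 2f;
        float centerY = textureView.getHeight() / 2f;
        transform.postRotate(180, centerX, centerY);
        textureView.setTransform(transform);
    });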

Solution

I have changed the CameraX version from 1.0.0-alpha5 to 1.0.0.
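
For reference, here is a sketch of the corresponding build.gradle change; the exact artifact list and versions are assumptions (camera-view, which provides PreviewView, was still a pre-1.0 release when the core artifacts reached 1.0.0):

    // app/build.gradle -- sketch only, artifact versions are assumptions
    dependencies {
        implementation "androidx.camera:camera-core:1.0.0"
        implementation "androidx.camera:camera-camera2:1.0.0"
        implementation "androidx.camera:camera-lifecycle:1.0.0"
        // PreviewView comes from camera-view; use the pre-1.0 release that
        // pairs with camera-core 1.0.0 (e.g. 1.0.0-alpha24 at the time)
        implementation "androidx.camera:camera-view:1.0.0-alpha24"
    }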

    private void setupCameraX() {
        ListenableFuture<ProcessCameraProvider> cameraProviderFuture =
                ProcessCameraProvider.getInstance(this);

        cameraProviderFuture.addListener(() -> {
            try {
                ProcessCameraProvider cameraProvider = cameraProviderFuture.get();
                PreviewView previewView = getCameraPreviewTextureView();
                final Preview preview = new Preview.Builder()
                        .setTargetRotation(Surface.ROTATION_270) // working nicely
                        .build();
                //TODO: Check if result_view can render over preview_view

                CameraSelector cameraSelector = new CameraSelector
                        .Builder()
                        .requireLensFacing(CameraSelector.LENS_FACING_FRONT)
                        .build();

                preview.setSurfaceProvider(previewView.getSurfaceProvider());

                executor = Executors.newSingleThreadExecutor();
                imageAnalysis = new ImageAnalysis.Builder()
                        .setTargetResolution(new Size(500, 500))
                        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                        .build();
                imageAnalysis.setAnalyzer(executor, image -> {
                    Log.d("image analyzer", "Entered Analyse method");
                    if (SystemClock.elapsedRealtime() - mLastAnalysisResultTime < 500) {
                        return;
                    }

                    final T result = analyzeImage(image, 90);
                    if (result != null) {
                        mLastAnalysisResultTime = SystemClock.elapsedRealtime();
                        runOnUiThread(() -> applyToUiAnalyzeImageResult(result));
                    }
                });
                camera = cameraProvider.bindToLifecycle(
                        this, cameraSelector, imageAnalysis, preview);
            } catch (InterruptedException | ExecutionException e) {
                new AlertDialog
                        .Builder(this)
                        .setTitle("Camera setup error")
                        .setMessage(e.getMessage())
                        .setPositiveButton("Ok", (dialog, which) -> {
                        })
                        .show();
            }
        }, ContextCompat.getMainExecutor(this));
    }
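
One possible refinement (not from the original post): rather than hard-coding 90, CameraX 1.0.0 reports a per-frame rotation through ImageProxy.getImageInfo().getRotationDegrees(). Whether that value matches the 90 degrees that works on the Temi is untested, and the analyzeImage() shown earlier ignores its rotation argument (it always rotates the bitmap by 90 degrees), so it would need to honour the passed value as well. A sketch:

    // Sketch only: use the rotation CameraX reports for each frame instead of
    // a hard-coded 90. On the Temi this may or may not equal 90.
    imageAnalysis.setAnalyzer(executor, image -> {
        int rotationDegrees = image.getImageInfo().getRotationDegrees();
        final T result = analyzeImage(image, rotationDegrees);
        if (result != null) {
            mLastAnalysisResultTime = SystemClock.elapsedRealtime();
            runOnUiThread(() -> applyToUiAnalyzeImageResult(result));
        }
    });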

Note: getCameraPreviewTextureView() is the method that inflates the ViewStub. I was just following the PyTorch Android example.

    @Override
    protected PreviewView getCameraPreviewTextureView() {
        mResultView = findViewById(R.id.resultView);
        return ((ViewStub) findViewById(R.id.preview_view_stub))
                .inflate()
                .findViewById(R.id.preview_view);
    }
