Cla*_*son 6

I assume you mean a picture of what the camera sees plus the AR objects. At a high level, you need permission to write to external storage so the picture can be saved, then copy the frame out of OpenGL and save it as a PNG (for example). Here are the specifics:

Add the WRITE_EXTERNAL_STORAGE permission to AndroidManifest.xml:

   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

Then change CameraPermissionHelper to iterate over both the CAMERA and WRITE_EXTERNAL_STORAGE permissions, making sure each one has been granted:

  private static final String[] REQUIRED_PERMISSIONS = {
          Manifest.permission.WRITE_EXTERNAL_STORAGE,
          Manifest.permission.CAMERA
  };

  /**
   * Check to see we have the necessary permissions for this app.
   */
  public static boolean hasCameraPermission(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
      if (ContextCompat.checkSelfPermission(activity, p) !=
            PackageManager.PERMISSION_GRANTED) {
        return false;
      }
    }
    return true;
  }

  /**
   * Check to see we have the necessary permissions for this app,
   *   and ask for them if we don't.
   */
  public static void requestCameraPermission(Activity activity) {
    ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
            CAMERA_PERMISSION_CODE);
  }

  /**
   * Check to see if we need to show the rationale for this permission.
   */
  public static boolean shouldShowRequestPermissionRationale(Activity activity) {
    for (String p : REQUIRED_PERMISSIONS) {
      if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
        return true;
      }
    }
    return false;
  }

Next, add a couple of fields to HelloARActivity to track the dimensions of the frame, plus a boolean flag indicating when a picture should be saved.

  private int mWidth;
  private int mHeight;
  private boolean capturePicture = false;

Set the width and height in onSurfaceChanged():

 public void onSurfaceChanged(GL10 gl, int width, int height) {
     mDisplayRotationHelper.onSurfaceChanged(width, height);
     GLES20.glViewport(0, 0, width, height);
     mWidth = width;
     mHeight = height;
 }

At the bottom of onDrawFrame(), add a check for the capture flag. This should happen after all the other drawing is done:

        if (capturePicture) {
            capturePicture = false;
            try {
                SavePicture();
            } catch (IOException e) {
                Log.e(TAG, "Failed to save picture", e);
            }
        }

Then add an onClick method for a button that takes the picture, along with the actual code that saves the image:

  public void onSavePicture(View view) {
    // Here we just set a flag so the image can be copied
    // from the onDrawFrame() method. This is required for
    // OpenGL, since we need to be on the rendering thread.
    this.capturePicture = true;
  }

  /**
   * Call from the GLThread to save a picture of the current frame.
   */
  public void SavePicture() throws IOException {
    int[] pixelData = new int[mWidth * mHeight];

    // Read the pixels from the current GL frame.
    IntBuffer buf = IntBuffer.wrap(pixelData);
    buf.position(0);
    GLES20.glReadPixels(0, 0, mWidth, mHeight,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

    // Create a file in the Pictures/HelloAR album.
    final File out = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_PICTURES) + "/HelloAR", "Img" +
            Long.toHexString(System.currentTimeMillis()) + ".png");

    // Make sure the directory exists
    if (!out.getParentFile().exists()) {
      out.getParentFile().mkdirs();
    }

    // Convert the pixel data from RGBA to what Android wants, ARGB.
    int[] bitmapData = new int[pixelData.length];
    for (int i = 0; i < mHeight; i++) {
      for (int j = 0; j < mWidth; j++) {
        int p = pixelData[i * mWidth + j];
        int b = (p & 0x00ff0000) >> 16;
        int r = (p & 0x000000ff) << 16;
        int ga = p & 0xff00ff00;
        bitmapData[(mHeight - i - 1) * mWidth + j] = ga | r | b;
      }
    }
    // Create a bitmap.
    Bitmap bmp = Bitmap.createBitmap(bitmapData,
                     mWidth, mHeight, Bitmap.Config.ARGB_8888);

    // Write it to disk.
    FileOutputStream fos = new FileOutputStream(out);
    bmp.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.flush();
    fos.close();
    runOnUiThread(new Runnable() {
      @Override
      public void run() {
        showSnackbarMessage("Wrote " + out.getName(), false);
      }
    });
  }
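The RGBA-to-ARGB loop above is just a per-pixel byte swap: glReadPixels returns bytes in R,G,B,A order, which a little-endian int reads as ABGR, while Bitmap.Config.ARGB_8888 expects ARGB, so only R and B need to trade places. A minimal standalone sketch of that swap (plain Java, no Android dependencies; the class and method names are mine, for illustration only):

```java
public class RgbaToArgb {
    // Convert one pixel from the int layout glReadPixels produces on a
    // little-endian device (ABGR) to Android's ARGB_8888 int layout by
    // swapping the R and B bytes; G and A stay where they are.
    static int abgrToArgb(int p) {
        int b = (p & 0x00ff0000) >> 16; // blue byte moves to the low byte
        int r = (p & 0x000000ff) << 16; // red byte moves up to bits 16-23
        int ga = p & 0xff00ff00;        // green and alpha are untouched
        return ga | r | b;
    }

    public static void main(String[] args) {
        // Opaque red: bytes R=0xFF, G=0, B=0, A=0xFF read as int 0xFF0000FF
        System.out.println(Integer.toHexString(abgrToArgb(0xFF0000FF)));
        // → ffff0000 (opaque red in ARGB)
    }
}
```

Keeping this logic in one pure function also makes it easy to unit-test off the GL thread.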

The last step is to add the button to the end of the activity_main.xml layout:

<Button
    android:id="@+id/fboRecord_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:layout_alignStart="@+id/surfaceview"
    android:layout_alignTop="@+id/surfaceview"
    android:onClick="onSavePicture"
    android:text="Snap"
    tools:ignore="OnClick"/>


nbs*_*jan 5

Getting the image buffer

In recent ARCore SDKs, the image buffer is accessible through the public Frame class. The sample code below acquires the image buffer:

private void onSceneUpdate(FrameTime frameTime) {
    try {
        Frame currentFrame = sceneView.getArFrame();
        Image currentImage = currentFrame.acquireCameraImage();
        int imageFormat = currentImage.getFormat();
        if (imageFormat == ImageFormat.YUV_420_888) {
            Log.d("ImageFormat", "Image format is YUV_420_888");
        }
        currentImage.close();
    } catch (NotYetAvailableException e) {
        // The camera image is not available yet; try again next frame.
        Log.d("onSceneUpdate", "Camera image not yet available");
    }
}

onSceneUpdate() is called on every frame update once you register it via setOnUpdateListener(). The image will be in YUV_420_888 format, but it has the full field of view of the native high-resolution camera.

Also, don't forget to release the acquired image by calling currentImage.close(). Otherwise you will get a ResourceExhaustedException on the next run of onSceneUpdate().

Writing the acquired image buffer to a file

The following implementation converts the YUV buffer into a compressed JPEG byte array:

private static byte[] NV21toJPEG(byte[] nv21, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    return out.toByteArray();
}

public static void WriteImageInformation(Image image, String path) throws IOException {
    byte[] data = NV21toJPEG(YUV_420_888toNV21(image),
            image.getWidth(), image.getHeight());
    BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(path));
    bos.write(data);
    bos.flush();
    bos.close();
}
    
private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();

    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();

    nv21 = new byte[ySize + uSize + vSize];

    // U and V are swapped: copying the V buffer before the U buffer
    // yields NV21's VU byte order. Note this shortcut relies on the
    // U/V planes being interleaved in memory (pixel stride 2), which
    // holds on most devices but is not guaranteed by the API.
    yBuffer.get(nv21, 0, ySize);
    vBuffer.get(nv21, ySize, vSize);
    uBuffer.get(nv21, ySize + vSize, uSize);

    return nv21;
}
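Because the method simply concatenates the three buffers, the resulting byte order can be checked with toy buffers (the 2x2-frame sizes and the class/method names below are mine, for illustration only):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class Nv21Order {
    // Same copy order as YUV_420_888toNV21, on plain buffers:
    // all Y bytes first, then V before U (NV21 is Y...VUVU...).
    static byte[] toNv21(ByteBuffer yBuf, ByteBuffer uBuf, ByteBuffer vBuf) {
        int ySize = yBuf.remaining();
        int uSize = uBuf.remaining();
        int vSize = vBuf.remaining();
        byte[] nv21 = new byte[ySize + uSize + vSize];
        yBuf.get(nv21, 0, ySize);
        vBuf.get(nv21, ySize, vSize);        // V plane comes first
        uBuf.get(nv21, ySize + vSize, uSize); // then the U plane
        return nv21;
    }

    public static void main(String[] args) {
        // A 2x2 frame has four Y samples and one U and one V sample.
        ByteBuffer y = ByteBuffer.wrap(new byte[]{1, 2, 3, 4});
        ByteBuffer u = ByteBuffer.wrap(new byte[]{5});
        ByteBuffer v = ByteBuffer.wrap(new byte[]{6});
        System.out.println(Arrays.toString(toNv21(y, u, v)));
        // → [1, 2, 3, 4, 6, 5]
    }
}
```

The V sample (6) lands before the U sample (5), which is exactly the VU interleaving YuvImage expects for ImageFormat.NV21.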