
android - Providing video recording functionality with ARCore

coder · 2023-12-19

I am working from this sample ( https://github.com/google-ar/arcore-android-sdk/tree/master/samples/hello_ar_java ) and I would like to add the ability to record a video of the scene with the AR objects placed in it.

I have tried several approaches without success. Is there a recommended way to do this?

Best answer

Creating a video from an OpenGL surface is somewhat involved, but doable. I think the easiest way to understand it is to use two EGL surfaces: one for the UI and one for the media encoder. There is a good example of the EGL-level calls needed in the Grafika project on GitHub. I used it as a starting point to work out the modifications needed to ARCore's HelloAR sample. Since there are quite a few changes, I will break them down into steps.

Make changes to support writing to external storage

To save the video you need to write the video file to an accessible location, so you need to request this permission.

Declare the permission in AndroidManifest.xml:

   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Then modify CameraPermissionHelper.java to request the external-storage permission in addition to the camera permission. To do this, create an array of the required permissions, use it when requesting permissions, and iterate over it when checking the permission state:

private static final String[] REQUIRED_PERMISSIONS = {
    Manifest.permission.CAMERA,
    Manifest.permission.WRITE_EXTERNAL_STORAGE
};

public static void requestCameraPermission(Activity activity) {
  ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
      CAMERA_PERMISSION_CODE);
}

public static boolean hasCameraPermission(Activity activity) {
  for (String p : REQUIRED_PERMISSIONS) {
    if (ContextCompat.checkSelfPermission(activity, p) !=
        PackageManager.PERMISSION_GRANTED) {
      return false;
    }
  }
  return true;
}

public static boolean shouldShowRequestPermissionRationale(Activity activity) {
  for (String p : REQUIRED_PERMISSIONS) {
    if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
      return true;
    }
  }
  return false;
}
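The same iterate-and-short-circuit pattern also applies when handling the result of the permission request in onRequestPermissionsResult(), which the sample leaves to the reader. A minimal, framework-free sketch of that check (allGranted is a hypothetical helper; the constants mirror PackageManager.PERMISSION_GRANTED = 0 and PERMISSION_DENIED = -1 from the Android SDK):

```java
public class PermissionCheckSketch {
    // Mirrors PackageManager.PERMISSION_GRANTED in the Android SDK.
    static final int PERMISSION_GRANTED = 0;

    // Returns true only if every requested permission was granted.
    // An empty array means the request was cancelled by the user.
    static boolean allGranted(int[] grantResults) {
        if (grantResults.length == 0) {
            return false;
        }
        for (int r : grantResults) {
            if (r != PERMISSION_GRANTED) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allGranted(new int[] {0, 0}));   // both granted
        System.out.println(allGranted(new int[] {0, -1}));  // storage denied
    }
}
```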

Add recording to HelloARActivity

Add a simple Button and a TextView to the UI at the bottom of activity_main.xml:

<Button
   android:id="@+id/fboRecord_button"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:layout_alignStart="@+id/surfaceview"
   android:layout_alignTop="@+id/surfaceview"
   android:onClick="clickToggleRecording"
   android:text="@string/toggleRecordingOn"
   tools:ignore="OnClick"/>

<TextView
   android:id="@+id/nowRecording_text"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:layout_alignBaseline="@+id/fboRecord_button"
   android:layout_alignBottom="@+id/fboRecord_button"
   android:layout_toEndOf="@+id/fboRecord_button"
   android:text="" />

Add member variables for recording in HelloARActivity:

private VideoRecorder mRecorder;
private android.opengl.EGLConfig mAndroidEGLConfig;

Initialize mAndroidEGLConfig in onSurfaceCreated(). We will use this config object to create the encoder surface.

EGL10 egl10 = (EGL10) EGLContext.getEGL();
javax.microedition.khronos.egl.EGLDisplay display = egl10.eglGetCurrentDisplay();
int[] v = new int[2];
egl10.eglGetConfigAttrib(display, config, EGL10.EGL_CONFIG_ID, v);

EGLDisplay androidDisplay = EGL14.eglGetCurrentDisplay();
int[] attribs = {EGL14.EGL_CONFIG_ID, v[0], EGL14.EGL_NONE};
android.opengl.EGLConfig[] myConfig = new android.opengl.EGLConfig[1];
EGL14.eglChooseConfig(androidDisplay, attribs, 0, myConfig, 0, 1, v, 1);
this.mAndroidEGLConfig = myConfig[0];

Refactor the onDrawFrame() method so that all the non-drawing code executes first, and the actual drawing is done in a method named draw(). That way, while recording, we can update the ARCore frame and handle input once, draw to the UI, and then draw again to the encoder surface.

@Override
public void onDrawFrame(GL10 gl) {

 if (mSession == null) {
   return;
 }
 // Notify ARCore session that the view size changed so that
 // the perspective matrix and
 // the video background can be properly adjusted.
 mDisplayRotationHelper.updateSessionIfNeeded(mSession);

 try {
   // Obtain the current frame from ARSession. When the 
   //configuration is set to
   // UpdateMode.BLOCKING (it is by default), this will
   // throttle the rendering to the camera framerate.
   Frame frame = mSession.update();
   Camera camera = frame.getCamera();

   // Handle taps. Handling only one tap per frame, as taps are
   // usually low frequency compared to frame rate.
   MotionEvent tap = mQueuedSingleTaps.poll();
   if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
     for (HitResult hit : frame.hitTest(tap)) {
       // Check if any plane was hit, and if it was hit inside the plane polygon
       Trackable trackable = hit.getTrackable();
       if (trackable instanceof Plane
               && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
         // Cap the number of objects created. This avoids overloading both the
         // rendering system and ARCore.
         if (mAnchors.size() >= 20) {
           mAnchors.get(0).detach();
           mAnchors.remove(0);
         }
         // Adding an Anchor tells ARCore that it should track this position in
         // space. This anchor is created on the Plane to place the 3d model
         // in the correct position relative both to the world and to the plane.
         mAnchors.add(hit.createAnchor());

         // Hits are sorted by depth. Consider only closest hit on a plane.
         break;
       }
     }
   }


   // Get projection matrix.
   float[] projmtx = new float[16];
   camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

   // Get camera matrix and draw.
   float[] viewmtx = new float[16];
   camera.getViewMatrix(viewmtx, 0);

   // Compute lighting from average intensity of the image.
   final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

   // Visualize tracked points.
   PointCloud pointCloud = frame.acquirePointCloud();
   mPointCloud.update(pointCloud);


   draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
           viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

   if (mRecorder != null && mRecorder.isRecording()) {
     VideoRecorder.CaptureContext ctx = mRecorder.startCapture();
     if (ctx != null) {
       // draw again
       draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
            viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

       // restore the context
       mRecorder.stopCapture(ctx, frame.getTimestamp());
     }
   }



   // Application is responsible for releasing the point cloud resources after
   // using it.
   pointCloud.release();

   // Check if we detected at least one plane. If so, hide the loading message.
   if (mMessageSnackbar != null) {
     for (Plane plane : mSession.getAllTrackables(Plane.class)) {
       if (plane.getType() == 
              com.google.ar.core.Plane.Type.HORIZONTAL_UPWARD_FACING
               && plane.getTrackingState() == TrackingState.TRACKING) {
         hideLoadingMessage();
         break;
       }
     }
   }
 } catch (Throwable t) {
   // Avoid crashing the application due to unhandled exceptions.
   Log.e(TAG, "Exception on the OpenGL thread", t);
 }
}


private void draw(Frame frame, boolean paused,
                 float[] viewMatrix, float[] projectionMatrix,
                 Pose displayOrientedPose, float lightIntensity) {

 // Clear screen to notify driver it should not load
 // any pixels from previous frame.
 GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

 // Draw background.
 mBackgroundRenderer.draw(frame);

 // If not tracking, don't draw 3d objects.
 if (paused) {
   return;
 }

 mPointCloud.draw(viewMatrix, projectionMatrix);

 // Visualize planes.
 mPlaneRenderer.drawPlanes(
         mSession.getAllTrackables(Plane.class),
         displayOrientedPose, projectionMatrix);

 // Visualize anchors created by touch.
 float scaleFactor = 1.0f;
 for (Anchor anchor : mAnchors) {
   if (anchor.getTrackingState() != TrackingState.TRACKING) {
     continue;
   }
   // Get the current pose of an Anchor in world space.
   // The Anchor pose is
   // updated during calls to session.update() as ARCore refines
   // its estimate of the world.
   anchor.getPose().toMatrix(mAnchorMatrix, 0);

   // Update and draw the model and its shadow.
   mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
   mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
   mVirtualObject.draw(viewMatrix, projectionMatrix, lightIntensity);
   mVirtualObjectShadow.draw(viewMatrix, projectionMatrix, lightIntensity);
 }
}
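The update-once/draw-twice flow in onDrawFrame() above can be sketched in plain Java. The RenderTarget, CountingTarget, and renderFrame names here are hypothetical, purely to illustrate the control flow; the real code switches EGL surfaces rather than passing targets around:

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleRenderSketch {

    interface RenderTarget {
        void draw(String scene);
    }

    // Records every draw call so we can observe the flow.
    static class CountingTarget implements RenderTarget {
        final List<String> frames = new ArrayList<>();
        @Override
        public void draw(String scene) {
            frames.add(scene);
        }
    }

    // Always draw to the UI; issue the identical draw to the encoder only
    // while recording, mirroring the mRecorder.isRecording() check above.
    static void renderFrame(String scene, RenderTarget ui,
                            RenderTarget encoder, boolean recording) {
        ui.draw(scene);
        if (recording && encoder != null) {
            encoder.draw(scene);
        }
    }

    public static void main(String[] args) {
        CountingTarget ui = new CountingTarget();
        CountingTarget encoder = new CountingTarget();
        renderFrame("frame-0", ui, encoder, false); // before recording starts
        renderFrame("frame-1", ui, encoder, true);  // while recording
        System.out.println(ui.frames.size() + " UI draws, "
                + encoder.frames.size() + " encoder draws");
    }
}
```

Because the scene state is updated only once per frame, both surfaces see exactly the same frame content; only the draw calls are repeated.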

Handle toggling the recording:

public void clickToggleRecording(View view) {
 Log.d(TAG, "clickToggleRecording");
 if (mRecorder == null) {
   File outputFile = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_PICTURES) + "/HelloAR",
           "fbo-gl-" + Long.toHexString(System.currentTimeMillis()) + ".mp4");
   File dir = outputFile.getParentFile();
   if (!dir.exists()) {
     dir.mkdirs();
   }

   try {
     mRecorder = new VideoRecorder(mSurfaceView.getWidth(),
             mSurfaceView.getHeight(),
             VideoRecorder.DEFAULT_BITRATE, outputFile, this);
     mRecorder.setEglConfig(mAndroidEGLConfig);
   } catch (IOException e) {
     Log.e(TAG, "Exception starting recording", e);
     return; // mRecorder is still null here; avoid an NPE below
   }
 }
 mRecorder.toggleRecording();
 updateControls();
}
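The output-file naming used in clickToggleRecording() can be exercised on its own. In this sketch, outputFile is a hypothetical helper and the base directory is a placeholder for what Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES) returns on device:

```java
import java.io.File;

public class OutputNameSketch {
    // A hex timestamp keeps file names unique across recordings while
    // staying short; the "HelloAR" subdirectory matches the code above.
    static File outputFile(File baseDir, long timeMillis) {
        return new File(new File(baseDir, "HelloAR"),
                "fbo-gl-" + Long.toHexString(timeMillis) + ".mp4");
    }

    public static void main(String[] args) {
        File f = outputFile(new File("/tmp"), 0x1234L);
        System.out.println(f.getPath()); // /tmp/HelloAR/fbo-gl-1234.mp4
    }
}
```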

private void updateControls() {
 Button toggleRelease = findViewById(R.id.fboRecord_button);
 int id = (mRecorder != null && mRecorder.isRecording()) ?
         R.string.toggleRecordingOff : R.string.toggleRecordingOn;
 toggleRelease.setText(id);

 TextView tv =  findViewById(R.id.nowRecording_text);
 if (id == R.string.toggleRecordingOff) {
   tv.setText(getString(R.string.nowRecording));
 } else {
   tv.setText("");
 }
}

Implement the listener interface to receive video recording state changes:

@Override
public void onVideoRecorderEvent(VideoRecorder.VideoEvent videoEvent) {
 Log.d(TAG, "VideoEvent: " + videoEvent);
 updateControls();

 if (videoEvent == VideoRecorder.VideoEvent.RecordingStopped) {
   mRecorder = null;
 }
}

Implement the VideoRecorder class to feed images to the encoder

The VideoRecorder class feeds images to the media encoder. It creates an off-screen EGLSurface from the media encoder's input surface. The general approach is, while recording, to draw once for the UI display and then make the exact same draw calls against the media encoder surface.

The constructor takes the recording parameters and a listener to push events to during recording.

public VideoRecorder(int width, int height, int bitrate, File outputFile,
                    VideoRecorderListener listener) throws IOException {
 this.listener = listener;
 mEncoderCore = new VideoEncoderCore(width, height, bitrate, outputFile);
 mVideoRect = new Rect(0,0,width,height);
}

When recording starts, we need to create a new EGL surface for the encoder. We then notify the encoder that a new frame is coming, make the encoder surface the current EGL surface, and return so that the caller can make the draw calls.

public CaptureContext startCapture() {

 if (mVideoEncoder == null) {
   return null;
 }

 if (mEncoderContext == null) {
   mEncoderContext = new CaptureContext();
   mEncoderContext.windowDisplay = EGL14.eglGetCurrentDisplay();

   // Create a window surface, and attach it to the Surface we received.
   int[] surfaceAttribs = {
           EGL14.EGL_NONE
   };

   mEncoderContext.windowDrawSurface = EGL14.eglCreateWindowSurface(
           mEncoderContext.windowDisplay, mEGLConfig,
           mEncoderCore.getInputSurface(), surfaceAttribs, 0);
   mEncoderContext.windowReadSurface = mEncoderContext.windowDrawSurface;
 }

 CaptureContext displayContext = new CaptureContext();
 displayContext.initialize();

 // Draw for recording, swap.
 mVideoEncoder.frameAvailableSoon();


 // Make the input surface current
 // mInputWindowSurface.makeCurrent();
 EGL14.eglMakeCurrent(mEncoderContext.windowDisplay,
         mEncoderContext.windowDrawSurface, mEncoderContext.windowReadSurface,
         EGL14.eglGetCurrentContext());

 // If we don't set the scissor rect, the glClear() we use to draw the
 // light-grey background will draw outside the viewport and muck up our
 // letterboxing.  Might be better if we disabled the test immediately after
 // the glClear().  Of course, if we were clearing the frame background to
 // black it wouldn't matter.
 //
 // We do still need to clear the pixels outside the scissor rect, of course,
 // or we'll get garbage at the edges of the recording.  We can either clear
 // the whole thing and accept that there will be a lot of overdraw, or we
 // can issue multiple scissor/clear calls.  Some GPUs may have a special
 // optimization for zeroing out the color buffer.
 //
 // For now, be lazy and zero the whole thing.  At some point we need to
 // examine the performance here.
 GLES20.glClearColor(0f, 0f, 0f, 1f);
 GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

 GLES20.glViewport(mVideoRect.left, mVideoRect.top,
         mVideoRect.width(), mVideoRect.height());
 GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
 GLES20.glScissor(mVideoRect.left, mVideoRect.top,
         mVideoRect.width(), mVideoRect.height());

 return displayContext;
}

When drawing is complete, the EGL context needs to be restored back to the UI surface:

public void stopCapture(CaptureContext oldContext, long timeStampNanos) {

 if (oldContext == null) {
   return;
 }
 GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
 EGLExt.eglPresentationTimeANDROID(mEncoderContext.windowDisplay,
      mEncoderContext.windowDrawSurface, timeStampNanos);

 EGL14.eglSwapBuffers(mEncoderContext.windowDisplay,
      mEncoderContext.windowDrawSurface);


 // Restore.
 GLES20.glViewport(0, 0, oldContext.getWidth(), oldContext.getHeight());
 EGL14.eglMakeCurrent(oldContext.windowDisplay,
         oldContext.windowDrawSurface, oldContext.windowReadSurface,
         EGL14.eglGetCurrentContext());
}

Add some bookkeeping methods

public boolean isRecording() {
 return mRecording;
}

public void toggleRecording() {
 if (isRecording()) {
   stopRecording();
 } else {
   startRecording();
 }
}

protected void startRecording() {
 mRecording = true;
 if (mVideoEncoder == null) {
    mVideoEncoder = new TextureMovieEncoder2(mEncoderCore);
 }
 if (listener != null) {
   listener.onVideoRecorderEvent(VideoEvent.RecordingStarted);
 }
}

protected void stopRecording() {
 mRecording = false;
 if (mVideoEncoder != null) {
   mVideoEncoder.stopRecording();
 }
 if (listener != null) {
   listener.onVideoRecorderEvent(VideoEvent.RecordingStopped);
 }
}

public void setEglConfig(EGLConfig eglConfig) {
 this.mEGLConfig = eglConfig;
}

public enum VideoEvent {
 RecordingStarted,
 RecordingStopped
}

public interface VideoRecorderListener {

 void onVideoRecorderEvent(VideoEvent videoEvent);
}

The CaptureContext inner class keeps track of the display and surfaces, making it easy to handle the multiple surfaces used with the EGL context:

public static class CaptureContext {
 EGLDisplay windowDisplay;
 EGLSurface windowReadSurface;
 EGLSurface windowDrawSurface;
 private int mWidth;
 private int mHeight;

 public void initialize() {
   windowDisplay = EGL14.eglGetCurrentDisplay();
   windowReadSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
   windowDrawSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
   int[] v = new int[1];
   EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_WIDTH,
       v, 0);
   mWidth = v[0];
   v[0] = -1;
   EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_HEIGHT,
       v, 0);
   mHeight = v[0];
 }

 /**
  * Returns the surface's width, in pixels.
  * <p>
  * If this is called on a window surface, and the underlying
  * surface is in the process
  * of changing size, we may not see the new size right away
  * (e.g. in the "surfaceChanged"
  * callback).  The size should match after the next buffer swap.
  */
 public int getWidth() {
    if (mWidth < 0) {
      int[] v = new int[1];
      EGL14.eglQuerySurface(windowDisplay,
          windowDrawSurface, EGL14.EGL_WIDTH, v, 0);
      mWidth = v[0];
    }
    return mWidth;
 }

 /**
  * Returns the surface's height, in pixels.
  */
 public int getHeight() {
   if (mHeight < 0) {
      int[] v = new int[1];
     EGL14.eglQuerySurface(windowDisplay, windowDrawSurface,
         EGL14.EGL_HEIGHT, v, 0);
     mHeight = v[0];
   }
   return mHeight;
 }

}
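The negative-sentinel caching in getWidth()/getHeight() can be modeled without EGL. In this sketch the constructor argument stands in for the eglQuerySurface() result, and queryCount exists only to demonstrate that the query happens at most once; all names are hypothetical:

```java
public class LazySizeSketch {
    private int mWidth = -1;          // sentinel: not yet queried
    private final int surfaceWidth;   // stands in for the EGL-side value
    private int queryCount = 0;       // counts the simulated EGL queries

    public LazySizeSketch(int surfaceWidth) {
        this.surfaceWidth = surfaceWidth;
    }

    // First call performs the (simulated) query and caches the result;
    // subsequent calls return the cached value, as in CaptureContext.
    public int getWidth() {
        if (mWidth < 0) {
            queryCount++;             // would be eglQuerySurface(..., EGL_WIDTH, ...)
            mWidth = surfaceWidth;
        }
        return mWidth;
    }

    public int getQueryCount() {
        return queryCount;
    }

    public static void main(String[] args) {
        LazySizeSketch s = new LazySizeSketch(1080);
        System.out.println(s.getWidth() + " px after "
                + s.getQueryCount() + " query");
    }
}
```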

Add the VideoEncoder classes

The VideoEncoderCore class is copied from the Grafika project, along with the TextureMovieEncoder2 class.

For android - providing video recording functionality with ARCore, there is a similar question on Stack Overflow: https://stackoverflow.com/questions/47869061/
