An Analysis of Several Approaches to Audio/Video Muxing on Android

Preface


I recently ran into an audio/video processing requirement at work. For muxing audio and video on Android, the approaches I surveyed fall into three main categories: the platform MediaMuxer APIs (hardware path), mp4parser, and FFmpeg. All three can get the job done, but each has its own limitations and problems, so I'm recording the implementations and the issues here for later review. Without further ado, here is the detailed walkthrough.

Method 1 (failed)

Use MediaMuxer to combine the audio and video tracks.

Result: the audio and video are merged successfully, and playback is normal in Android's native VideoView and SurfaceView as well as in most players. However, after uploading to YouTube a problem appears: the audio becomes discontinuous. The likely cause is that YouTube re-compresses the upload, and the audio frame timing goes wrong during that re-compression.

Analysis: in the handling of MediaCodec.BufferInfo, the presentationTimeUs timestamps are wrong, so YouTube's re-compression ends up scrambling the audio. (A corrected sketch that takes each sample's real timestamp from the extractor follows the code below.)

public static void muxVideoAndAudio(String videoPath, String audioPath, String muxPath) {
    try {
        // Find the video track in the source video file
        MediaExtractor videoExtractor = new MediaExtractor();
        videoExtractor.setDataSource(videoPath);
        MediaFormat videoFormat = null;
        int videoTrackIndex = -1;
        int videoTrackCount = videoExtractor.getTrackCount();
        for (int i = 0; i < videoTrackCount; i++) {
            videoFormat = videoExtractor.getTrackFormat(i);
            String mimeType = videoFormat.getString(MediaFormat.KEY_MIME);
            if (mimeType.startsWith("video/")) {
                videoTrackIndex = i;
                break;
            }
        }

        // Find the audio track in the source audio file
        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioPath);
        MediaFormat audioFormat = null;
        int audioTrackIndex = -1;
        int audioTrackCount = audioExtractor.getTrackCount();
        for (int i = 0; i < audioTrackCount; i++) {
            audioFormat = audioExtractor.getTrackFormat(i);
            String mimeType = audioFormat.getString(MediaFormat.KEY_MIME);
            if (mimeType.startsWith("audio/")) {
                audioTrackIndex = i;
                break;
            }
        }

        videoExtractor.selectTrack(videoTrackIndex);
        audioExtractor.selectTrack(audioTrackIndex);

        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();

        MediaMuxer mediaMuxer = new MediaMuxer(muxPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        int writeVideoTrackIndex = mediaMuxer.addTrack(videoFormat);
        int writeAudioTrackIndex = mediaMuxer.addTrack(audioFormat);
        mediaMuxer.start();

        ByteBuffer byteBuffer = ByteBuffer.allocate(500 * 1024);

        // Estimate a fixed frame interval from the gap between two consecutive video samples.
        // This is the weak point of this approach: every sample is later given a synthesized,
        // evenly spaced presentationTimeUs instead of its real timestamp.
        long sampleTime = 0;
        {
            videoExtractor.readSampleData(byteBuffer, 0);
            if (videoExtractor.getSampleFlags() == MediaExtractor.SAMPLE_FLAG_SYNC) {
                videoExtractor.advance();
            }
            videoExtractor.readSampleData(byteBuffer, 0);
            long secondTime = videoExtractor.getSampleTime();
            videoExtractor.advance();
            long thirdTime = videoExtractor.getSampleTime();
            sampleTime = Math.abs(thirdTime - secondTime);
        }
        // Rewind the video track to the beginning
        videoExtractor.unselectTrack(videoTrackIndex);
        videoExtractor.selectTrack(videoTrackIndex);

        // Copy all video samples, accumulating the synthesized timestamp
        while (true) {
            int readVideoSampleSize = videoExtractor.readSampleData(byteBuffer, 0);
            if (readVideoSampleSize < 0) {
                break;
            }
            videoBufferInfo.size = readVideoSampleSize;
            videoBufferInfo.presentationTimeUs += sampleTime;
            videoBufferInfo.offset = 0;
            //noinspection WrongConstant
            videoBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME; // videoExtractor.getSampleFlags()
            mediaMuxer.writeSampleData(writeVideoTrackIndex, byteBuffer, videoBufferInfo);
            videoExtractor.advance();
        }

        // Copy all audio samples, reusing the *video* frame interval as the audio
        // timestamp step, which is what corrupts the audio timing
        while (true) {
            int readAudioSampleSize = audioExtractor.readSampleData(byteBuffer, 0);
            if (readAudioSampleSize < 0) {
                break;
            }
            audioBufferInfo.size = readAudioSampleSize;
            audioBufferInfo.presentationTimeUs += sampleTime;
            audioBufferInfo.offset = 0;
            //noinspection WrongConstant
            audioBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME; // videoExtractor.getSampleFlags()
            mediaMuxer.writeSampleData(writeAudioTrackIndex, byteBuffer, audioBufferInfo);
            audioExtractor.advance();
        }

        mediaMuxer.stop();
        mediaMuxer.release();
        videoExtractor.release();
        audioExtractor.release();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
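
For reference, here is a minimal sketch of what would likely fix this approach: instead of accumulating a fixed sampleTime step, take each sample's real timestamp and flags from the extractor (this is essentially what Method 2 below does). The helper name copyTrack is hypothetical and assumes the extractor already has the desired track selected and the muxer has been started.

// Hedged sketch: copy one track using the extractor's own timestamps and flags.
// Assumes `extractor` has the desired track selected and `muxer` is already started.
private static void copyTrack(MediaExtractor extractor, MediaMuxer muxer, int muxerTrackIndex) {
    ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        int size = extractor.readSampleData(buffer, 0);
        if (size < 0) {
            break; // end of stream
        }
        info.offset = 0;
        info.size = size;
        info.presentationTimeUs = extractor.getSampleTime(); // real timestamp, not a synthesized one
        //noinspection WrongConstant
        info.flags = extractor.getSampleFlags();
        muxer.writeSampleData(muxerTrackIndex, buffer, info);
        extractor.advance();
    }
}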

Method 2 (successful)

public static void muxVideoAudio(String videoFilePath, String audioFilePath, String outputFile) {
    try {
        MediaExtractor videoExtractor = new MediaExtractor();
        videoExtractor.setDataSource(videoFilePath);
        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioFilePath);

        MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Assumes track 0 of each source file is the track we want
        videoExtractor.selectTrack(0);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
        int videoTrack = muxer.addTrack(videoFormat);

        audioExtractor.selectTrack(0);
        MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
        int audioTrack = muxer.addTrack(audioFormat);

        LogUtil.d(TAG, "Video Format " + videoFormat.toString());
        LogUtil.d(TAG, "Audio Format " + audioFormat.toString());

        boolean sawEOS = false;
        int frameCount = 0;
        int offset = 100;
        int sampleSize = 256 * 1024;
        ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
        ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();

        videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);

        muxer.start();

        // Copy the video track, keeping each sample's own timestamp and flags
        while (!sawEOS) {
            videoBufferInfo.offset = offset;
            videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);
            if (videoBufferInfo.size < 0) {
                sawEOS = true;
                videoBufferInfo.size = 0;
            } else {
                videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
                //noinspection WrongConstant
                videoBufferInfo.flags = videoExtractor.getSampleFlags();
                muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
                videoExtractor.advance();
                frameCount++;
            }
        }

        // Copy the audio track the same way
        boolean sawEOS2 = false;
        int frameCount2 = 0;
        while (!sawEOS2) {
            frameCount2++;
            audioBufferInfo.offset = offset;
            audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);
            if (audioBufferInfo.size < 0) {
                sawEOS2 = true;
                audioBufferInfo.size = 0;
            } else {
                audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
                //noinspection WrongConstant
                audioBufferInfo.flags = audioExtractor.getSampleFlags();
                muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
                audioExtractor.advance();
            }
        }

        muxer.stop();
        muxer.release();
        // Release the extractors as well to avoid leaking resources
        videoExtractor.release();
        audioExtractor.release();
        LogUtil.d(TAG, "Output: " + outputFile);
    } catch (IOException e) {
        LogUtil.d(TAG, "Mixer Error 1 " + e.getMessage());
    } catch (Exception e) {
        LogUtil.d(TAG, "Mixer Error 2 " + e.getMessage());
    }
}

Method 3

Using mp4parser

mp4parser is an open-source toolbox for video processing. Its methods depend on the rest of the toolbox, so you need to package those pieces into a jar (or pull in the library as a dependency) and add it to your project before you can call mp4parser's methods.

compile "com.googlecode.mp4parser:isoparser:1.1.21"

問題:上傳Youtube壓縮后,視頻數(shù)據(jù)丟失嚴(yán)重,大部分就只剩下一秒鐘的時長,相當(dāng)于把視頻變成圖片了,囧

public boolean mux(String videoFile, String audioFile, final String outputFile) {
    if (isStopMux) {
        return false;
    }
    // Parse the video file
    Movie video;
    try {
        video = MovieCreator.build(videoFile);
    } catch (RuntimeException e) {
        e.printStackTrace();
        return false;
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
    // Parse the audio file
    Movie audio;
    try {
        audio = MovieCreator.build(audioFile);
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    } catch (NullPointerException e) {
        e.printStackTrace();
        return false;
    }
    // Append the first audio track to the video and rebuild the MP4 container
    Track audioTrack = audio.getTracks().get(0);
    video.addTrack(audioTrack);
    Container out = new DefaultMp4Builder().build(video);

    FileOutputStream fos;
    try {
        fos = new FileOutputStream(outputFile);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        return false;
    }
    BufferedWritableFileByteChannel byteBufferByteChannel =
            new BufferedWritableFileByteChannel(fos);
    try {
        out.writeContainer(byteBufferByteChannel);
        byteBufferByteChannel.close();
        fos.close();
        if (isStopMux) {
            return false;
        }
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                mCustomeProgressDialog.setProgress(100);
                goShareActivity(outputFile);
//              FileUtils.insertMediaDB(AddAudiosActivity.this, outputFile);
            }
        });
    } catch (IOException e) {
        e.printStackTrace();
        if (mCustomeProgressDialog.isShowing()) {
            mCustomeProgressDialog.dismiss();
        }
        ToastUtil.showShort(getString(R.string.process_failed));
        return false;
    }
    return true;
}

// Buffers writes in memory and flushes them to the underlying stream in large chunks
private static class BufferedWritableFileByteChannel implements WritableByteChannel {
    private static final int BUFFER_CAPACITY = 2000000;

    private boolean isOpen = true;
    private final OutputStream outputStream;
    private final ByteBuffer byteBuffer;
    private final byte[] rawBuffer = new byte[BUFFER_CAPACITY];

    private BufferedWritableFileByteChannel(OutputStream outputStream) {
        this.outputStream = outputStream;
        this.byteBuffer = ByteBuffer.wrap(rawBuffer);
    }

    @Override
    public int write(ByteBuffer inputBuffer) throws IOException {
        int inputBytes = inputBuffer.remaining();
        if (inputBytes > byteBuffer.remaining()) {
            // Not enough room left: flush what we have, then retry
            dumpToFile();
            byteBuffer.clear();
            if (inputBytes > byteBuffer.remaining()) {
                throw new BufferOverflowException();
            }
        }
        byteBuffer.put(inputBuffer);
        return inputBytes;
    }

    @Override
    public boolean isOpen() {
        return isOpen;
    }

    @Override
    public void close() throws IOException {
        dumpToFile();
        isOpen = false;
    }

    private void dumpToFile() {
        try {
            outputStream.write(rawBuffer, 0, byteBuffer.position());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}

Method 4

Using FFmpeg

FFmpeg has a rich set of codec plugins and detailed documentation, and compared with debugging a large amount of complex encode/decode code (yes, implementing this with MediaCodec is quite verbose and tedious), it is far simpler to debug a single ffmpeg command line.

Merge video/audio and retain both audio tracks

This works and is highly compatible, but because it uses software codecs the merge is unbearably slow, and I'm not yet familiar with the relevant FFmpeg optimizations.
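
As an illustration only, a typical ffmpeg invocation for this kind of merge, keeping the video stream as-is and mixing the video's original audio with the extra audio track, might look like the following. The file names are placeholders, and the exact filter graph and options depend on your sources and on the FFmpeg build bundled with the app:

ffmpeg -i input_video.mp4 -i extra_audio.mp3 \
  -filter_complex "[0:a][1:a]amix=inputs=2:duration=first[aout]" \
  -map 0:v -map "[aout]" -c:v copy -c:a aac -shortest output.mp4

Here -c:v copy avoids re-encoding the video, while the amix filter mixes the two audio streams into one track that is then mapped into the output.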

Summary

That's all for this article. I hope it serves as a useful reference for your study or work. If you have any questions, feel free to leave a comment.
