android - How to mux audio file and video file?
I have a 3gp file recorded from the microphone and an mp4 video file. I want to mux the audio file and the video file into a single mp4 file and save it. I have searched a lot, but found nothing helpful on using Android's MediaMuxer API.
MediaMuxer api
Update: Here is my method for muxing the two files; it throws an exception. The cause is that the destination mp4 file does not have any tracks! Can someone help me add the audio and video tracks to the muxer?
Exception:
java.lang.IllegalStateException: Failed to stop the muxer
My code:
private void cloneMediaUsingMuxer(String dstMediaPath) throws IOException {
    // Set up MediaExtractor to read from the source.
    MediaExtractor soundExtractor = new MediaExtractor();
    soundExtractor.setDataSource(audioFilePath);
    MediaExtractor videoExtractor = new MediaExtractor();
    AssetFileDescriptor afd2 = getAssets().openFd("Produce.MP4");
    videoExtractor.setDataSource(afd2.getFileDescriptor(), afd2.getStartOffset(), afd2.getLength());
    int trackCount = soundExtractor.getTrackCount();
    int trackCount2 = soundExtractor.getTrackCount();
    // Set up MediaMuxer for the destination.
    MediaMuxer muxer;
    muxer = new MediaMuxer(dstMediaPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    // Set up the tracks.
    HashMap<Integer, Integer> indexMap = new HashMap<Integer, Integer>(trackCount);
    for (int i = 0; i < trackCount; i++) {
        soundExtractor.selectTrack(i);
        MediaFormat SoundFormat = soundExtractor.getTrackFormat(i);
        int dstIndex = muxer.addTrack(SoundFormat);
        indexMap.put(i, dstIndex);
    }
    HashMap<Integer, Integer> indexMap2 = new HashMap<Integer, Integer>(trackCount2);
    for (int i = 0; i < trackCount2; i++) {
        videoExtractor.selectTrack(i);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(i);
        int dstIndex2 = muxer.addTrack(videoFormat);
        indexMap.put(i, dstIndex2);
    }
    // Copy the samples from MediaExtractor to MediaMuxer.
    boolean sawEOS = false;
    int bufferSize = MAX_SAMPLE_SIZE;
    int frameCount = 0;
    int offset = 100;
    ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    MediaCodec.BufferInfo bufferInfo2 = new MediaCodec.BufferInfo();
    muxer.start();
    while (!sawEOS) {
        bufferInfo.offset = offset;
        bufferInfo.size = soundExtractor.readSampleData(dstBuf, offset);
        bufferInfo2.offset = offset;
        bufferInfo2.size = videoExtractor.readSampleData(dstBuf, offset);
        if (bufferInfo.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        } else if (bufferInfo2.size < 0) {
            sawEOS = true;
            bufferInfo.size = 0;
            bufferInfo2.size = 0;
        } else {
            bufferInfo.presentationTimeUs = soundExtractor.getSampleTime();
            bufferInfo2.presentationTimeUs = videoExtractor.getSampleTime();
            //bufferInfo.flags = extractor.getSampleFlags();
            int trackIndex = soundExtractor.getSampleTrackIndex();
            int trackIndex2 = videoExtractor.getSampleTrackIndex();
            muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
            soundExtractor.advance();
            videoExtractor.advance();
            frameCount++;
        }
    }
    Toast.makeText(getApplicationContext(), "f:" + frameCount, Toast.LENGTH_SHORT).show();
    muxer.stop();
    muxer.release();
}
Update 2: Problem solved! Check my answer to this question.
Thanks for your help.
What you need is ffmpeg. Here is a link that helps with getting it working:
FFmpeg on Android
ffmpeg requires the NDK on Android.
Once you have that working, you can use ffmpeg to mux the audio and video together. Here is a link to a question that does it with 2 video files (the answer should be similar):
FFMPEG mux video and audio (from another video) - mapping issue
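For illustration, a typical ffmpeg invocation for this kind of mux might look like the following. This is a sketch, not the command from the linked answer: the file names are placeholders, and it assumes the 3gp audio needs re-encoding to AAC for an mp4 container, while the video stream can be copied as-is.

```shell
# Take the video stream from clip.mp4 and the audio stream from voice.3gp,
# copy the video, re-encode the audio to AAC, and write both into out.mp4.
# -shortest trims the output to the shorter of the two inputs.
ffmpeg -i clip.mp4 -i voice.3gp \
    -map 0:v:0 -map 1:a:0 \
    -c:v copy -c:a aac \
    -shortest out.mp4
```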
I had some problems with the tracks of the audio and video files. Those are gone now and my code works fine; you can use it to merge an audio file and a video file together.
Code:
private void muxing() {
    String outputFile = "";
    try {
        File file = new File(Environment.getExternalStorageDirectory() + File.separator + "final2.mp4");
        file.createNewFile();
        outputFile = file.getAbsolutePath();
        MediaExtractor videoExtractor = new MediaExtractor();
        AssetFileDescriptor afdd = getAssets().openFd("Produce.MP4");
        videoExtractor.setDataSource(afdd.getFileDescriptor(), afdd.getStartOffset(), afdd.getLength());
        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioFilePath);
        Log.d(TAG, "Video Extractor Track Count " + videoExtractor.getTrackCount());
        Log.d(TAG, "Audio Extractor Track Count " + audioExtractor.getTrackCount());
        MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        videoExtractor.selectTrack(0);
        MediaFormat videoFormat = videoExtractor.getTrackFormat(0);
        int videoTrack = muxer.addTrack(videoFormat);
        audioExtractor.selectTrack(0);
        MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
        int audioTrack = muxer.addTrack(audioFormat);
        Log.d(TAG, "Video Format " + videoFormat.toString());
        Log.d(TAG, "Audio Format " + audioFormat.toString());
        boolean sawEOS = false;
        int frameCount = 0;
        int offset = 100;
        int sampleSize = 256 * 1024;
        ByteBuffer videoBuf = ByteBuffer.allocate(sampleSize);
        ByteBuffer audioBuf = ByteBuffer.allocate(sampleSize);
        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        MediaCodec.BufferInfo audioBufferInfo = new MediaCodec.BufferInfo();
        videoExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        audioExtractor.seekTo(0, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
        muxer.start();
        while (!sawEOS) {
            videoBufferInfo.offset = offset;
            videoBufferInfo.size = videoExtractor.readSampleData(videoBuf, offset);
            if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0) {
                Log.d(TAG, "saw input EOS.");
                sawEOS = true;
                videoBufferInfo.size = 0;
            } else {
                videoBufferInfo.presentationTimeUs = videoExtractor.getSampleTime();
                videoBufferInfo.flags = videoExtractor.getSampleFlags();
                muxer.writeSampleData(videoTrack, videoBuf, videoBufferInfo);
                videoExtractor.advance();
                frameCount++;
                Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs + " Flags:" + videoBufferInfo.flags + " Size(KB) " + videoBufferInfo.size / 1024);
                Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs + " Flags:" + audioBufferInfo.flags + " Size(KB) " + audioBufferInfo.size / 1024);
            }
        }
        Toast.makeText(getApplicationContext(), "frame:" + frameCount, Toast.LENGTH_SHORT).show();
        boolean sawEOS2 = false;
        int frameCount2 = 0;
        while (!sawEOS2) {
            frameCount2++;
            audioBufferInfo.offset = offset;
            audioBufferInfo.size = audioExtractor.readSampleData(audioBuf, offset);
            if (videoBufferInfo.size < 0 || audioBufferInfo.size < 0) {
                Log.d(TAG, "saw input EOS.");
                sawEOS2 = true;
                audioBufferInfo.size = 0;
            } else {
                audioBufferInfo.presentationTimeUs = audioExtractor.getSampleTime();
                audioBufferInfo.flags = audioExtractor.getSampleFlags();
                muxer.writeSampleData(audioTrack, audioBuf, audioBufferInfo);
                audioExtractor.advance();
                Log.d(TAG, "Frame (" + frameCount + ") Video PresentationTimeUs:" + videoBufferInfo.presentationTimeUs + " Flags:" + videoBufferInfo.flags + " Size(KB) " + videoBufferInfo.size / 1024);
                Log.d(TAG, "Frame (" + frameCount + ") Audio PresentationTimeUs:" + audioBufferInfo.presentationTimeUs + " Flags:" + audioBufferInfo.flags + " Size(KB) " + audioBufferInfo.size / 1024);
            }
        }
        Toast.makeText(getApplicationContext(), "frame:" + frameCount2, Toast.LENGTH_SHORT).show();
        muxer.stop();
        muxer.release();
    } catch (IOException e) {
        Log.d(TAG, "Mixer Error 1 " + e.getMessage());
    } catch (Exception e) {
        Log.d(TAG, "Mixer Error 2 " + e.getMessage());
    }
}
Thanks to mohamad ali gharat for his answer; it helped me a lot. But I had to make some changes to the code to get it to work.
First: I changed
videoExtractor.setDataSource
to
videoExtractor.setDataSource(Environment.getExternalStorageDirectory().getPath() + "/Produce.MP4");
to load the video from the SD card.
Second: I got an error from
videoBufferInfo.flags = videoExtractor.getSampleFlags();
so I changed it to
videoBufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME;
to make it work, as this link suggests: Android MediaMuxer failed to stop
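Hard-coding BUFFER_FLAG_SYNC_FRAME marks every sample as a sync frame. An alternative is to translate the extractor's sample flags explicitly, so only real sync samples are flagged. The helper below is a hypothetical sketch for illustration, not code from the linked answer:

```java
import android.media.MediaCodec;
import android.media.MediaExtractor;

final class MuxerFlags {
    // Translate the current sample's MediaExtractor flags into the
    // MediaCodec.BufferInfo flags that MediaMuxer.writeSampleData expects.
    static int toBufferFlags(MediaExtractor extractor) {
        int sampleFlags = extractor.getSampleFlags();
        // SAMPLE_FLAG_SYNC marks a sync (key) frame on the extractor side;
        // BUFFER_FLAG_KEY_FRAME is the corresponding BufferInfo flag.
        return (sampleFlags & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
                ? MediaCodec.BUFFER_FLAG_KEY_FRAME
                : 0;
    }
}
```

Usage would be `videoBufferInfo.flags = MuxerFlags.toBufferFlags(videoExtractor);` in place of the hard-coded assignment.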
private const val MAX_SAMPLE_SIZE = 256 * 1024

fun muxAudioVideo(destination: File, audioSource: File, videoSource: File): Boolean {
    var result: Boolean
    var muxer: MediaMuxer? = null
    var muxerStarted = false
    try {
        // Set up MediaMuxer for the destination.
        muxer = MediaMuxer(destination.path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
        // Copy the samples from MediaExtractor to MediaMuxer.
        var videoFormat: MediaFormat? = null
        var audioFormat: MediaFormat? = null
        var videoTrackIndex = -1
        var audioTrackIndex = -1
        // Select the first video track of the video source.
        val extractorVideo = MediaExtractor()
        extractorVideo.setDataSource(videoSource.path)
        for (i in 0 until extractorVideo.trackCount) {
            val mime = extractorVideo.getTrackFormat(i).getString(MediaFormat.KEY_MIME)
            if (mime!!.startsWith("video/")) {
                extractorVideo.selectTrack(i)
                videoFormat = extractorVideo.getTrackFormat(i)
                break
            }
        }
        // Select the first audio track of the audio source
        // (iterate over the audio extractor's own track count).
        val extractorAudio = MediaExtractor()
        extractorAudio.setDataSource(audioSource.path)
        for (i in 0 until extractorAudio.trackCount) {
            val mime = extractorAudio.getTrackFormat(i).getString(MediaFormat.KEY_MIME)
            if (mime!!.startsWith("audio/")) {
                extractorAudio.selectTrack(i)
                audioFormat = extractorAudio.getTrackFormat(i)
                break
            }
        }
        // Register both tracks with the muxer before starting it.
        if (videoTrackIndex == -1) {
            videoTrackIndex = muxer.addTrack(videoFormat!!)
        }
        if (audioTrackIndex == -1) {
            audioTrackIndex = muxer.addTrack(audioFormat!!)
        }
        var sawEOS = false
        var sawAudioEOS = false
        val dstBuf = ByteBuffer.allocate(MAX_SAMPLE_SIZE)
        val offset = 0
        val bufferInfo = MediaCodec.BufferInfo()
        muxer.start()
        muxerStarted = true
        // Write video samples.
        while (!sawEOS) {
            bufferInfo.offset = offset
            bufferInfo.size = extractorVideo.readSampleData(dstBuf, offset)
            if (bufferInfo.size < 0) {
                sawEOS = true
                bufferInfo.size = 0
            } else {
                bufferInfo.presentationTimeUs = extractorVideo.sampleTime
                bufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME
                muxer.writeSampleData(videoTrackIndex, dstBuf, bufferInfo)
                extractorVideo.advance()
            }
        }
        // Write audio samples.
        val audioBuf = ByteBuffer.allocate(MAX_SAMPLE_SIZE)
        while (!sawAudioEOS) {
            bufferInfo.offset = offset
            bufferInfo.size = extractorAudio.readSampleData(audioBuf, offset)
            if (bufferInfo.size < 0) {
                sawAudioEOS = true
                bufferInfo.size = 0
            } else {
                bufferInfo.presentationTimeUs = extractorAudio.sampleTime
                bufferInfo.flags = MediaCodec.BUFFER_FLAG_SYNC_FRAME
                muxer.writeSampleData(audioTrackIndex, audioBuf, bufferInfo)
                extractorAudio.advance()
            }
        }
        extractorVideo.release()
        extractorAudio.release()
        result = true
    } catch (e: IOException) {
        result = false
    } finally {
        // stop() throws IllegalStateException if start() was never reached,
        // so only stop a muxer that was actually started.
        if (muxer != null) {
            if (muxerStarted) {
                muxer.stop()
            }
            muxer.release()
        }
    }
    return result
}