Sending image from android app to web app using webrtc

I am following these two code samples: one for sending an image from Android, and the other for drawing the received image onto a canvas.

Sending an image from Android over the WebRTC data channel:

https://github.com/Temasys/skylink-android-screen-sharing/blob/master/SkylinkShare/app/src/main/java/skylink/temasys/com/sg/skylinkshare/MainActivity.java

Receiving the image on the web and drawing it onto a canvas over the WebRTC data channel:

https://io2014codelabs.appspot.com/static/codelabs/webrtc-file-sharing/#7

My use case is that I want to continuously send screen images from Android to the web, so that it looks as though the screen is being shared from Android, and every change on the Android screen shows up on the web canvas.

Code on Android

This is the code that starts capturing the Android screen.

public void startProjection() {
   startActivityForResult(projectionManager.createScreenCaptureIntent(), SCREEN_REQUEST_CODE);
}

This is the code that extracts images from the screen capture I just started.

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    switch (requestCode) {
        case SCREEN_REQUEST_CODE:
            mediaProjection = projectionManager.getMediaProjection(resultCode, data);
            if (mediaProjection != null) {

                projectionStarted = true;

                // Initialize the media projection
                DisplayMetrics metrics = getResources().getDisplayMetrics();
                int density = metrics.densityDpi;
                int flags = DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY
                        | DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC;

                Display display = getWindowManager().getDefaultDisplay();
                Point size = new Point();
                display.getSize(size);

                projectionDisplayWidth = size.x;
                projectionDisplayHeight = size.y;

                imageReader = ImageReader.newInstance(projectionDisplayWidth, projectionDisplayHeight
                        , PixelFormat.RGBA_8888, 2);
                mediaProjection.createVirtualDisplay("screencap",
                        projectionDisplayWidth, projectionDisplayHeight, density,
                        flags, imageReader.getSurface(), null, handler);
                imageReader.setOnImageAvailableListener(new ImageAvailableListener(), handler);
            }
            break;
    }
}

Here is the ImageAvailableListener class:

private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        FileOutputStream fos = null;
        Bitmap bitmap = null;

        ByteArrayOutputStream stream = null;

        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * projectionDisplayWidth;

                // create bitmap
                bitmap = Bitmap.createBitmap(projectionDisplayWidth + rowPadding / pixelStride,
                        projectionDisplayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 5, stream);


                ByteBuffer byteBuffer = ByteBuffer.wrap(stream.toByteArray());
                DataChannel.Buffer buf = new DataChannel.Buffer(byteBuffer, true);

                Log.w("CONFERENCE_SCREEN", "Image size less than chunk size condition");

                client.sendDataChannelMessage(buf);

                imagesProduced++;
                Log.w("CONFERENCE_SCREEN", "captured image: " + imagesProduced);
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (fos != null) {
                try {
                    fos.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (bitmap != null) {
                bitmap.recycle();
            }

            if (image != null) {
                image.close();
            }
        }
    }
}

Web code

Creating the canvas:

var canvas = document.createElement('canvas');
canvas.classList.add('incomingPhoto');
screenAndroidImage.insertBefore(canvas, screenAndroidImage.firstChild); // screenAndroidImage is a div

Whenever an image is sent from Android, I run the following code:

if (data.data.byteLength  || typeof data.data !== 'string') {
      var context = canvas.getContext('2d');
      var img = context.createImageData(300, 150);
      img.data.set(data.data);
      context.putImageData(img, 0, 0);
      trace("Image chunk received");
}

I can see the image data arriving as ArrayBuffer{} in the web console, but nothing is rendered on the canvas.

SkylinkJS does not seem to support binary transfers at the moment. A workaround would be to encode the bytes as a Base64 string and send it to the web side as a P2P message. On the web side, convert the Base64 string back into an image and draw it onto the canvas.

For the Android SDK, see the API docs for MessagesListener and sendP2PMessage; for the Web SDK, see incomingMessage.
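As a sketch of that workaround, the sender can Base64-encode the compressed JPEG bytes before handing them to sendP2PMessage. The FrameEncoder class name is hypothetical, and java.util.Base64 is used here (in an actual Android app you might prefer android.util.Base64) so the snippet runs anywhere:

```java
import java.util.Base64;

public class FrameEncoder {
    // Encode one compressed JPEG frame as a Base64 string so it can be
    // sent as a plain-text P2P message instead of a binary transfer.
    public static String encodeFrame(byte[] jpegBytes) {
        return Base64.getEncoder().encodeToString(jpegBytes);
    }
}
```

On the web side, the incomingMessage handler can then set `'data:image/jpeg;base64,' + message` as the src of an image element, or draw that image onto the canvas once it loads.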

Found the bug and corrected it. First, the ImageAvailableListener class needs to be changed to handle the case where the image is larger than the WebRTC data channel's byte limit. If the image exceeds the limit, it is split into smaller byte chunks.

private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        FileOutputStream fos = null;
        Bitmap bitmap = null;

        ByteArrayOutputStream stream = null;

        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * projectionDisplayWidth;

                // create bitmap
                bitmap = Bitmap.createBitmap(projectionDisplayWidth + rowPadding / pixelStride,
                        projectionDisplayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);

                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 5, stream);

                byte[] imageBytes = stream.toByteArray();

                // The web side first expects a plain-text message carrying
                // the total byte count, so it knows how many bytes make up
                // one complete frame.
                client.sendDataChannelMessage(new DataChannel.Buffer(
                        Utility.toByteBuffer(String.valueOf(imageBytes.length)), false));

                if (imageBytes.length < 16000) {
                    Log.w("CONFERENCE_SCREEN", "Image fits in a single chunk");
                    client.sendDataChannelMessage(
                            new DataChannel.Buffer(ByteBuffer.wrap(imageBytes), true));
                } else {
                    // Break the image into chunks no larger than the data
                    // channel's payload limit and send them in order.
                    int offset = 0;
                    while (offset < imageBytes.length) {
                        int length = Math.min(16000, imageBytes.length - offset);
                        client.sendDataChannelMessage(new DataChannel.Buffer(
                                ByteBuffer.wrap(imageBytes, offset, length), true));
                        offset += length;
                    }
                    Log.w("CONFERENCE_SCREEN", "sending screen data to peer");
                }

                imagesProduced++;
                Log.w("CONFERENCE_SCREEN", "captured image: " + imagesProduced);
            }

        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (fos != null) {
                try {
                    fos.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (stream != null) {
                try {
                    stream.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }

            if (bitmap != null) {
                bitmap.recycle();
            }

            if (image != null) {
                image.close();
            }
        }
    }
}

Web code

The following variables should be declared outside the function that listens for incoming bytes from the data channel.

var buf;
var chunks = [];
var count;

The body of the function that listens to the data channel:

    if (typeof data.data === 'string') {
      // A text message is the header announcing the total byte count.
      buf = new Uint8ClampedArray(parseInt(data.data, 10));
      count = 0;
      chunks = [];
      console.log('Expecting a total of ' + buf.byteLength + ' bytes');
      return;
    }
    var imgdata = new Uint8ClampedArray(data.data);
    console.log('image chunk');
    buf.set(imgdata, count);
    chunks.push(data.data);
    count += imgdata.byteLength;
    if (count === buf.byteLength) {
      // We're done: all data chunks have been received.
      var builder = new Blob(chunks, { type: 'image/jpeg' });
      console.log('full image received');
      screenViewer.src = URL.createObjectURL(builder);
    }

where screenViewer is an HTML image element.
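For clarity, the receive logic above can also be written as a small self-contained state machine; createFrameAssembler is a hypothetical helper that follows the same protocol (a text message carrying the total byte count, followed by binary chunks):

```javascript
// Returns a handler that accepts either a text "length header" or a
// binary chunk (Uint8Array), and returns the reassembled frame bytes
// once all expected bytes have arrived, or null otherwise.
function createFrameAssembler() {
  var expected = 0;
  var received = 0;
  var chunks = [];
  return function onMessage(data) {
    if (typeof data === 'string') {
      // A string message announces the total byte count of the frame.
      expected = parseInt(data, 10);
      received = 0;
      chunks = [];
      return null;
    }
    chunks.push(data);
    received += data.byteLength;
    if (received === expected) {
      // All chunks arrived: concatenate them into one buffer.
      var frame = new Uint8Array(expected);
      var offset = 0;
      chunks.forEach(function (chunk) {
        frame.set(chunk, offset);
        offset += chunk.byteLength;
      });
      return frame;
    }
    return null;
  };
}
```

The returned bytes can then be wrapped in a Blob and displayed via URL.createObjectURL, exactly as in the snippet above.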