Webcam Web Server in .NET
I have been searching for days for an example project that lets you build a "web server" that simply streams webcam video to anyone who connects to it. It sounds simple enough. There are applications that do this, but I want to embed the functionality in a Windows Forms application I am working on.
So I just want to build a project that acts as an HTTP host (a web server) and serves the webcam video when you connect to it. Naturally, it should be able to serve several people at the same time.
Can anyone point me to such a project/example?
If you like, you can install the NuGet package NequeoFFmpeg; it is a C++/CLI wrapper around some FFmpeg tools. One thing you can do is use this wrapper to capture webcam data through the FFmpeg binaries. You can get my pre-built FFmpeg binaries from FFmpeg; please use the 2016_01_15 version.
Code example:
private void Capture()
{
Nequeo.Media.FFmpeg.MediaDemux demux = new Nequeo.Media.FFmpeg.MediaDemux();
demux.OpenDevice("video=Integrated Webcam", true, false);
// create instance of video writer
Nequeo.Media.FFmpeg.VideoFileWriter writer = new Nequeo.Media.FFmpeg.VideoFileWriter();
writer.Open(@"C:\Temp\Misc\ffmpeg_screen_capture_video.avi", demux.Width, demux.Height, demux.FrameRate, Nequeo.Media.FFmpeg.VideoCodec.MPEG4);
byte[] sound = null;
Bitmap[] image = null;
long audioPos = 0;
long videoPos = 0;
int captureCount = 0;
int captureNumber = 500;
while ((demux.ReadFrame(out sound, out image, out audioPos, out videoPos) > 0) && captureCount < captureNumber)
{
if (image != null && image.Length > 0)
{
captureCount++;
for (int i = 0; i < image.Length; i++)
{
writer.WriteVideoFrame(image[i]);
image[i].Dispose();
}
}
}
writer.Close();
demux.Close();
}
Set the video capture device name. In the example above I write to a file, but you can write the bitmaps to a stream instead: compress the bitmaps and write them to the stream, or convert the bitmaps to JPEG and send that to the stream.
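For example, here is a minimal sketch (using only the standard System.Drawing and System.IO types; the BitmapToJpegBytes name and the quality parameter are my own choices, not part of the sample above) that compresses a captured Bitmap into JPEG bytes which can then be written to any stream:
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

// Encode a captured Bitmap as JPEG at the given quality (0-100) and return
// the compressed bytes, ready to be written to a file, socket or stream.
private static byte[] BitmapToJpegBytes(Bitmap frame, long quality)
{
    // Find the built-in JPEG encoder.
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    using (EncoderParameters parameters = new EncoderParameters(1))
    using (MemoryStream ms = new MemoryStream())
    {
        parameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
        frame.Save(ms, jpegCodec, parameters);
        return ms.ToArray();
    }
}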
FFmpeg itself can also stream live webcam video; see the FFmpeg StreamingGuide.
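As a rough illustration only (the executable name, DirectShow device name and destination URL below are assumptions you would have to adjust), the FFmpeg command line can be launched from .NET to push the webcam as an MPEG-TS stream over UDP:
using System.Diagnostics;

// Sketch: launch ffmpeg to capture the webcam via DirectShow and stream it
// as MPEG-TS over UDP. Assumes ffmpeg is on the PATH and the device is
// named "Integrated Webcam".
private static Process StartFfmpegUdpStream()
{
    ProcessStartInfo psi = new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments = "-f dshow -i video=\"Integrated Webcam\" " +
                    "-vcodec libx264 -preset ultrafast -tune zerolatency " +
                    "-f mpegts udp://127.0.0.1:1234",
        UseShellExecute = false,
        CreateNoWindow = true
    };
    return Process.Start(psi);
}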
I have created a simple UDP server that broadcasts the images captured from the webcam. Note that sending the captured images does not involve any protocol such as RTSP: once an image has been captured from the webcam it is sent as-is, and once the client receives the image it must render it however it needs to.
This server should not be used with more than about 100 clients; if you need something more robust you will have to look for other alternatives.
For the server part you need to install the NuGet packages NequeoNetServer as well as NequeoFFmpeg:
int _clientCount = 0;
bool _stopCapture = false;
Nequeo.Net.UdpSingleServer _udpsingle = null;
Nequeo.Media.FFmpeg.MediaDemux _demux = null;
ConcurrentDictionary<IPEndPoint, Nequeo.Net.Sockets.IUdpSingleServer> _clients = null;
private void StartServer()
{
// Create the server endpoint.
Nequeo.Net.Sockets.MultiEndpointModel[] model = new Nequeo.Net.Sockets.MultiEndpointModel[]
{
// Non-secure endpoint.
new Nequeo.Net.Sockets.MultiEndpointModel()
{
Port = 514,
Addresses = new System.Net.IPAddress[]
{
System.Net.IPAddress.IPv6Any,
System.Net.IPAddress.Any
}
},
};
if (_udpsingle == null)
{
// Create the UDP server.
_udpsingle = new Nequeo.Net.UdpSingleServer(model);
_udpsingle.OnContext += UDP_Single;
}
// Create the client collection.
_clients = new ConcurrentDictionary<IPEndPoint, Nequeo.Net.Sockets.IUdpSingleServer>();
_demux = new Nequeo.Media.FFmpeg.MediaDemux();
// Start the server.
_udpsingle.Start();
_clientCount = 0;
_stopCapture = false;
// Start the capture process.
CaptureAndSend();
}
Stopping the server:
private void StopServer()
{
_clientCount = 0;
_stopCapture = true;
if (_udpsingle != null)
{
_udpsingle.Stop();
_udpsingle.Dispose();
}
_udpsingle = null;
if (_demux != null)
_demux.Close();
_demux = null;
}
Handling the messages sent by the client:
private void UDP_Single(object sender, Nequeo.Net.Sockets.IUdpSingleServer server, byte[] data, IPEndPoint endpoint)
{
string request = System.Text.Encoding.Default.GetString(data);
if (request.ToLower().Contains("connect"))
// Add the new client.
_clients.GetOrAdd(endpoint, server);
if (request.ToLower().Contains("disconnect"))
{
Nequeo.Net.Sockets.IUdpSingleServer removedServer = null;
// Remove the existing client.
_clients.TryRemove(endpoint, out removedServer);
}
}
Capture: from the demuxer you can get the capture device's _demux.Width, _demux.Height and _demux.FrameRate.
private async void CaptureAndSend()
{
await System.Threading.Tasks.Task.Run(() =>
{
if (_demux != null)
{
// Open the web cam device.
_demux.OpenDevice("video=Integrated Webcam", true, false);
byte[] sound = null;
Bitmap[] image = null;
long audioPos = 0;
long videoPos = 0;
int count = 0;
KeyValuePair<IPEndPoint, Nequeo.Net.Sockets.IUdpSingleServer>[] clientCol = null;
// Most of the time one image at a time.
MemoryStream[] imageStream = new MemoryStream[10];
int imageStreamCount = 0;
// Within this loop you can place a check if there are any clients
// connected, and if none then stop capturing until some are connected.
while ((_demux.ReadFrame(out sound, out image, out audioPos, out videoPos) > 0) && !_stopCapture)
{
imageStreamCount = 0;
count = _clients.Count;
// If count has changed.
if (_clientCount != count)
{
// Get the collection of all clients.
_clientCount = count;
clientCol = _clients.ToArray();
}
// Has an image been captured.
if (image != null && image.Length > 0)
{
// Get all clients and send.
if (clientCol != null)
{
for (int i = 0; i < image.Length; i++)
{
// Create a memory stream for each image.
imageStream[i] = new MemoryStream();
imageStreamCount++;
// Save the image to the stream.
image[i].Save(imageStream[i], System.Drawing.Imaging.ImageFormat.Jpeg);
// Cleanup.
image[i].Dispose();
}
// For each client.
foreach (KeyValuePair<IPEndPoint, Nequeo.Net.Sockets.IUdpSingleServer> client in clientCol)
{
// For each image captured.
for (int i = 0; i < imageStreamCount; i++)
{
// Send the image to this client.
client.Value.SendTo(imageStream[i].ToArray(), client.Key);
imageStream[i].Seek(0, SeekOrigin.Begin);
}
}
for (int i = 0; i < imageStreamCount; i++)
// Cleanup.
imageStream[i].Dispose();
}
}
}
}
});
}
On the client side, as mentioned above, the received data is a captured image; you need to render it and every image that follows. You can send the Width, Height and FrameRate of the captured images from the server to the client and use them to render each received image.
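For example, a hypothetical helper (the SendCaptureInfo name and the "info:" message format are my own, not part of the Nequeo API) that could be called from the UDP_Single handler above when a client sends "connect", after the demuxer has been opened:
// Reply to a newly connected client with the capture metadata so it knows
// how to render the frames that will follow. Call after _demux.OpenDevice.
private void SendCaptureInfo(Nequeo.Net.Sockets.IUdpSingleServer server, IPEndPoint endpoint)
{
    string info = string.Format("info:{0},{1},{2}",
        _demux.Width, _demux.Height, _demux.FrameRate);
    byte[] data = System.Text.Encoding.Default.GetBytes(info);
    // SendTo is the same call used by CaptureAndSend to deliver frames.
    server.SendTo(data, endpoint);
}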
Client code: the UDP client state container.
public class UdpState
{
public UdpClient u { get; set; }
public IPEndPoint e { get; set; }
}
Client code: the client should include some data buffering so that you can receive and render the images without dropping frames (a minimal buffering sketch is shown after the receive callback below).
private void Connect()
{
pictureBox1.ClientSize = new Size(320, 240);
// Create the client.
IPEndPoint ee = new IPEndPoint(IPAddress.Any, 541);
UdpClient u = new UdpClient(ee);
// Create the state.
UdpState s = new UdpState();
s.e = ee;
s.u = u;
// Connect to the server.
u.Connect("localhost", 514);
// Start the begin receive callback.
u.BeginReceive(new AsyncCallback(ReceiveCallback), s);
// Send a connect request.
byte[] connect = System.Text.Encoding.Default.GetBytes("connect");
u.Send(connect, connect.Length);
}
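When the client shuts down it should send the matching text so that the UDP_Single handler above removes it from _clients. A small sketch (the Disconnect name is mine, not part of the original sample):
// Tell the server we are leaving, then release the UDP client.
private void Disconnect(UdpClient u)
{
    byte[] disconnect = System.Text.Encoding.Default.GetBytes("disconnect");
    u.Send(disconnect, disconnect.Length);
    u.Close();
}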
In the receive callback you can render the webcam image to the picture box.
public void ReceiveCallback(IAsyncResult ar)
{
// Get the client.
UdpClient u = (UdpClient)((UdpState)(ar.AsyncState)).u;
IPEndPoint e = (IPEndPoint)((UdpState)(ar.AsyncState)).e;
// Get the image.
Byte[] receiveBytes = u.EndReceive(ar, ref e);
// Load the image from the received bytes. The stream that backs an Image
// must stay open for the lifetime of that Image, so copy the decoded frame
// into a new Bitmap before the stream is disposed.
using (MemoryStream stream = new MemoryStream(receiveBytes))
using (Image temp = Image.FromStream(stream))
{
// Add the image to the picture box.
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
pictureBox1.Image = new Bitmap(temp);
}
// Start a new receive request.
u.BeginReceive(new AsyncCallback(ReceiveCallback), (UdpState)(ar.AsyncState));
}
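As mentioned above, buffering the received frames helps avoid dropped or torn rendering, and it also moves drawing onto the UI thread instead of the receive callback's thread-pool thread. A minimal sketch under my own assumptions (the queue depth of 5 and the Windows Forms timer driven by the server's frame rate are arbitrary choices): enqueue each received frame in ReceiveCallback and render from the timer.
using System;
using System.Collections.Concurrent;
using System.Drawing;
using System.IO;
using System.Windows.Forms;

// Frames received in ReceiveCallback are queued here instead of being drawn
// immediately: _frameBuffer.Enqueue(receiveBytes);
private readonly ConcurrentQueue<byte[]> _frameBuffer = new ConcurrentQueue<byte[]>();

private void StartRenderTimer(int frameRate)
{
    System.Windows.Forms.Timer renderTimer = new System.Windows.Forms.Timer();
    // Render at roughly the capture frame rate.
    renderTimer.Interval = Math.Max(1, 1000 / frameRate);
    renderTimer.Tick += (s, e) =>
    {
        byte[] frame;
        // If the buffer is falling behind, skip ahead to the newest frames.
        while (_frameBuffer.Count > 5 && _frameBuffer.TryDequeue(out frame)) { }
        if (_frameBuffer.TryDequeue(out frame))
        {
            using (MemoryStream stream = new MemoryStream(frame))
            using (Image temp = Image.FromStream(stream))
            {
                pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
                pictureBox1.Image = new Bitmap(temp);
            }
        }
    };
    renderTimer.Start();
}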
Note: I created a UDP example, but you can use the same approach to create an HTTP (TCP) example. The NuGet package NequeoNetServer contains a custom HTTP server that you can adapt to serve any HTTP request (for instance an MJPEG stream; see the sketch after the class below).
private void StartCustomHttpServer()
{
Nequeo.Net.CustomServer c = new Nequeo.Net.CustomServer(typeof(http), 30);
c.Start();
}
The custom HTTP server class:
internal class http : Nequeo.Net.Http.CustomContext
{
/// <summary>
/// On new client Http context.
/// </summary>
/// <param name="context">The client Http context.</param>
protected override void OnHttpContext(Nequeo.Net.Http.HttpContext context)
{
// Get the headers from the stream and assign the request data.
bool headersExist = Nequeo.Net.Http.Utility.SetRequestHeaders(context, 30000, 10000);
context.HttpResponse.ContentLength = 5;
context.HttpResponse.ContentType = "text/plain";
context.HttpResponse.StatusCode = 200;
context.HttpResponse.StatusDescription = "OK";
context.HttpResponse.WriteHttpHeaders();
context.HttpResponse.Write("Hello");
}
}
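To give an idea of what serving webcam video over plain HTTP could look like, here is a sketch that uses the standard System.Net.HttpListener rather than the Nequeo custom server (whose full response API is not shown above). It writes each JPEG frame as one part of a multipart/x-mixed-replace response, which browsers display as live video in an <img> tag. The nextJpegFrame delegate is a stand-in for however you obtain the latest encoded frame.
using System;
using System.IO;
using System.Net;
using System.Text;

// Serve an MJPEG stream to one HTTP client: each captured frame is written
// as a separate JPEG part of a multipart/x-mixed-replace response.
private static void ServeMjpeg(HttpListenerContext context, Func<byte[]> nextJpegFrame)
{
    const string boundary = "frame";
    HttpListenerResponse response = context.Response;
    response.StatusCode = 200;
    response.ContentType = "multipart/x-mixed-replace; boundary=" + boundary;
    Stream output = response.OutputStream;
    try
    {
        while (true)
        {
            byte[] jpeg = nextJpegFrame();   // the latest webcam frame as JPEG bytes
            string header = "\r\n--" + boundary + "\r\n" +
                            "Content-Type: image/jpeg\r\n" +
                            "Content-Length: " + jpeg.Length + "\r\n\r\n";
            byte[] headerBytes = Encoding.ASCII.GetBytes(header);
            output.Write(headerBytes, 0, headerBytes.Length);
            output.Write(jpeg, 0, jpeg.Length);
            output.Flush();
        }
    }
    catch (Exception)
    {
        // The client disconnected; stop streaming to this connection.
    }
    finally
    {
        response.Close();
    }
}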
For anyone interested in streaming webcam data to a web browser, the code below uses WebSockets. To use the sample, get the NuGet packages NequeoWebSockets as well as NequeoFFmpeg.
Starting the WebSocket server:
TestServer.WebcamWebSocketServer wsServer = new WebcamWebSocketServer();
wsServer.UriList = new string[] { "http://localhost:2012/" };
wsServer.Start();
The webcam WebSocket server code (adapt your own code based on this example):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Net.WebSockets;
using System.Threading;
using System.IO;
using System.Drawing;
using System.Collections.Concurrent;
namespace TestServer
{
public class WebcamWebSocketServer : Nequeo.Net.WebSockets.WebSocketServer
{
/// <summary>
///
/// </summary>
public WebcamWebSocketServer()
{
OnServerInitialise();
}
/// <summary>
///
/// </summary>
/// <param name="uriList"></param>
public WebcamWebSocketServer(string[] uriList)
: base(uriList)
{
OnServerInitialise();
}
int _clientCount = 0;
private int READ_BUFFER_SIZE = 8192;
private bool _stopCapture = false;
private Nequeo.Media.FFmpeg.MediaDemux _demuxHttp = null;
ConcurrentDictionary<System.Net.WebSockets.HttpListenerWebSocketContext, WebSocket> _clients = null;
/// <summary>
///
/// </summary>
private void OnServerInitialise()
{
base.Timeout = 60;
base.HeaderTimeout = 30000;
base.RequestTimeout = 30000;
base.ResponseTimeout = 30000;
base.Name = "Nequeo Web Socket Server";
base.ServiceName = "WebSocketServer";
base.OnWebSocketContext += WebSocketServer_OnWebSocketContext;
_demuxHttp = new Nequeo.Media.FFmpeg.MediaDemux();
// Open the web cam device.
_demuxHttp.OpenDevice("video=Integrated Webcam", true, false);
_clients = new ConcurrentDictionary<HttpListenerWebSocketContext, WebSocket>();
// Start capture.
CaptureAndSend();
}
/// <summary>
///
/// </summary>
/// <param name="sender"></param>
/// <param name="context"></param>
private void WebSocketServer_OnWebSocketContext(object sender, System.Net.WebSockets.HttpListenerWebSocketContext context)
{
OnWebcamWebSocketContext(context);
}
/// <summary>
///
/// </summary>
/// <param name="context"></param>
private async void OnWebcamWebSocketContext(System.Net.WebSockets.HttpListenerWebSocketContext context)
{
WebSocket webSocket = null;
try
{
// Get the current web socket.
webSocket = context.WebSocket;
CancellationTokenSource receiveCancelToken = new CancellationTokenSource();
byte[] receiveBuffer = new byte[READ_BUFFER_SIZE];
// While the WebSocket connection remains open, run a simple
// receive loop and register the client so frames can be sent to it.
while (webSocket.State == WebSocketState.Open)
{
// Receive the next set of data.
ArraySegment<byte> arrayBuffer = new ArraySegment<byte>(receiveBuffer);
WebSocketReceiveResult receiveResult = await webSocket.ReceiveAsync(arrayBuffer, receiveCancelToken.Token);
// If the connection has been closed.
if (receiveResult.MessageType == WebSocketMessageType.Close)
{
break;
}
else
{
// Add the client
_clients.GetOrAdd(context, webSocket);
}
}
// Cancel the receive request.
if (webSocket.State != WebSocketState.Open)
receiveCancelToken.Cancel();
}
catch { }
finally
{
// Clean up by disposing the WebSocket.
if (webSocket != null)
webSocket.Dispose();
}
}
/// <summary>
///
/// </summary>
private async void CaptureAndSend()
{
await System.Threading.Tasks.Task.Run(async () =>
{
byte[] sound = null;
Bitmap[] image = null;
long audioPos = 0;
long videoPos = 0;
int count = 0;
KeyValuePair<HttpListenerWebSocketContext, WebSocket>[] clientCol = null;
// Most of the time one image at a time.
MemoryStream[] imageStream = new MemoryStream[10];
int imageStreamCount = 0;
// Within this loop you can place a check if there are any clients
// connected, and if none then stop capturing until some are connected.
while ((_demuxHttp.ReadFrame(out sound, out image, out audioPos, out videoPos) > 0) && !_stopCapture)
{
imageStreamCount = 0;
count = _clients.Count;
// If count has changed.
if (_clientCount != count)
{
// Get the collection of all clients.
_clientCount = count;
clientCol = _clients.ToArray();
}
// Has an image been captured.
if (image != null && image.Length > 0)
{
// Get all clients and send.
if (clientCol != null)
{
for (int i = 0; i < image.Length; i++)
{
// Create a memory stream for each image.
imageStream[i] = new MemoryStream();
imageStreamCount++;
// Save the image to the stream.
image[i].Save(imageStream[i], System.Drawing.Imaging.ImageFormat.Jpeg);
// Cleanup.
image[i].Dispose();
}
// For each client.
foreach (KeyValuePair<HttpListenerWebSocketContext, WebSocket> client in clientCol)
{
// For each image captured.
for (int i = 0; i < imageStreamCount; i++)
{
// Data to send: the JPEG bytes for this frame, Base64-encoded as text.
byte[] result = imageStream[i].ToArray();
string base64 = Convert.ToBase64String(result);
byte[] base64Bytes = Encoding.Default.GetBytes(base64);
try
{
// Send the Base64-encoded frame to this client.
await client.Value.SendAsync(new ArraySegment<byte>(base64Bytes),
WebSocketMessageType.Text, true, CancellationToken.None);
}
catch { }
imageStream[i].Seek(0, SeekOrigin.Begin);
}
}
for (int i = 0; i < imageStreamCount; i++)
// Cleanup.
imageStream[i].Dispose();
}
}
}
});
}
}
}
The code for a single HTML page:
<!DOCTYPE html>
<html>
<head>
<title>Test</title>
<script type="text/javascript" src="js/jquery.js"></script>
<script type="text/javascript">
var noSupportMessage = "Your browser cannot support WebSocket!";
var ws;
function appendMessage(message) {
$('body').append(message);
}
function connectSocketServer() {
var support = "MozWebSocket" in window ? 'MozWebSocket' : ("WebSocket" in window ? 'WebSocket' : null);
if (support == null) {
appendMessage("* " + noSupportMessage + "<br/>");
return;
}
appendMessage("* Connecting to server ..<br/>");
// create a new websocket and connect
ws = new window[support]('ws://localhost:2012/');
ws.binaryType = "arraybuffer";
// when data is coming from the server, this method is called
ws.onmessage = function (evt) {
if (evt.data) {
drawImage(evt.data);
}
};
// when the connection is established, this method is called
ws.onopen = function () {
appendMessage('* Connection open<br/>');
$('#messageInput').attr("disabled", "");
$('#sendButton').attr("disabled", "");
$('#connectButton').attr("disabled", "disabled");
$('#disconnectButton').attr("disabled", "");
};
// when the connection is closed, this method is called
ws.onclose = function () {
appendMessage('* Connection closed<br/>');
$('#messageInput').attr("disabled", "disabled");
$('#sendButton').attr("disabled", "disabled");
$('#connectButton').attr("disabled", "");
$('#disconnectButton').attr("disabled", "disabled");
}
}
function sendMessage() {
if (ws) {
var messageBox = document.getElementById('messageInput');
ws.send(messageBox.value);
messageBox.value = "";
}
}
function disconnectWebSocket() {
if (ws) {
ws.close();
}
}
function connectWebSocket() {
connectSocketServer();
}
window.onload = function () {
$('#messageInput').attr("disabled", "disabled");
$('#sendButton').attr("disabled", "disabled");
$('#disconnectButton').attr("disabled", "disabled");
}
function drawImage(data)
{
$("#image").attr('src', 'data:image/jpg;base64,' + data);
}
</script>
</head>
<body>
<input type="button" id="connectButton" value="Connect" onclick="connectWebSocket()" />
<input type="button" id="disconnectButton" value="Disconnect" onclick="disconnectWebSocket()" />
<input type="text" id="messageInput" />
<input type="button" id="sendButton" value="Load Remote Image" onclick="sendMessage()" /> <br />
<img id="image" src="" width="320" height="240" />
</body>
</html>