Can't hear MediaStream through WebRTC
I'm trying to build a voice chat with WebRTC, using a WebSocket connection to exchange the offers.
First I create my RTCPeerConnection:
const pc = new RTCPeerConnection(configuration);
Then I pass this function as the success callback for navigator.getUserMedia:
let localStream: MediaStream;

export function setupRtc(stream: MediaStream) {
    localStream = stream;

    // add local media stream tracks to the connection
    console.log("adding tracks");
    console.log("localStream.getTracks()", localStream.getTracks());
    localStream.getTracks().forEach((track: MediaStreamTrack) => {
        pc.addTrack(track, localStream);
        console.log("added track ", track);
    });
    console.log("done adding tracks");

    // handle the connection's remote tracks
    let remoteStream = new MediaStream();
    pc.ontrack = function (event: RTCTrackEvent) {
        console.log("ontrack", event);
        // add the received tracks to the remote stream
        event.streams[0].getTracks().forEach((track) => {
            remoteStream.addTrack(track);
        });
    };

    let remoteAudio = <HTMLMediaElement>document.getElementById('remoteAudio');
    remoteAudio.srcObject = remoteStream;
}
The caller sends the offer like this:
async function sendOffer(connectionId: string) {
    console.log("SENDING OFFER...");

    // create offer description
    let sessionDesc: RTCSessionDescriptionInit = await pc.createOffer();
    await pc.setLocalDescription(sessionDesc);

    // send offer
    socket.send(JSON.stringify({
        type: "signaling_offer",
        desc: {
            sdp: sessionDesc.sdp,
            type: sessionDesc.type
        },
        connectionId: connectionId
    }));
    console.log("SEND OFFER");
}
connectionId is an ID that both the caller and the callee share.
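As an aside, the question never shows the server side. A minimal sketch of how the relay might route messages by connectionId could look like this (every name here, including register and routeTargets, is my own illustration, not from the original code):

```typescript
// Hypothetical relay bookkeeping: each peer registers under a shared
// connectionId, and a message is forwarded to everyone in the same
// connection except its sender.
interface SignalingMessage {
    type: string;
    connectionId: string;
}

const peers = new Map<string, Set<string>>();

function register(connectionId: string, peerId: string): void {
    if (!peers.has(connectionId)) {
        peers.set(connectionId, new Set());
    }
    peers.get(connectionId)!.add(peerId);
}

function routeTargets(msg: SignalingMessage, senderId: string): string[] {
    const members = peers.get(msg.connectionId) ?? new Set<string>();
    return [...members].filter((id) => id !== senderId);
}
```

With two peers registered under the same connectionId, a message from one always resolves to exactly the other.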
After the WebSocket server forwards the offer to the callee, the callee answers with this function:
function sendAnswer(connectionId: string, offer: RTCSessionDescriptionInit) {
    console.log("SENDING ANSWER...");
    pc.setRemoteDescription(new RTCSessionDescription(offer)).then(async () => {
        const answerDesc = await pc.createAnswer();
        await pc.setLocalDescription(answerDesc);

        // send answer
        socket.send(JSON.stringify({
            type: "signaling_answer",
            desc: {
                sdp: answerDesc.sdp,
                type: answerDesc.type
            },
            connectionId: connectionId
        }));
        console.log("SEND ANSWER!");
    });
}
After the WebSocket server forwards it back, the caller handles the answer with:
async function handleAnswered(connectionId: string, answer: RTCSessionDescriptionInit) {
    console.log("HANDLING ANSWER...");
    // await so "HANDLED ANSWER!" isn't logged before the description is actually set
    await pc.setRemoteDescription(new RTCSessionDescription(answer));
    console.log("HANDLED ANSWER!");
}
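For context, the question doesn't show how these handlers are wired to socket.onmessage. A sketch of such a dispatcher (the dispatch function and Handler type are my assumptions, not part of the original code) might be:

```typescript
// Hypothetical dispatcher: parses a raw signaling message and routes it
// to the handler registered for its "type" field.
type Handler = (msg: any) => void;

function dispatch(raw: string, handlers: Record<string, Handler>): void {
    const msg = JSON.parse(raw);
    const handler = handlers[msg.type];
    if (handler) {
        handler(msg);
    } else {
        console.warn("unknown signaling message type", msg.type);
    }
}
```

On the callee this would map, for example, "signaling_offer" to sendAnswer, and on the caller "signaling_answer" to handleAnswered.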
According to the tutorials and guides I followed, the audio element should now be playing my audio, but it still looks like an audio element without a source:
(screenshot: audio element that's not playing)
Both the caller and the callee add their tracks and streams, and both receive some. I've also attached their console output:
Caller: (screenshot: caller console)
Callee: (screenshot: callee console)
It turned out I had missed a step entirely: caller and callee both need to exchange their ICE candidates. So I added the following code:
pc.onicecandidate = event => {
    if (event.candidate != undefined) {
        let candidateInit: RTCIceCandidateInit = event.candidate.toJSON();
        // send candidate
        socket.send(JSON.stringify({
            type: "signaling_add_candidate",
            candidate: candidateInit,
            connectionId: connectionId
        }));
        console.log("send candidate from caller", event.candidate);
    }
};
The server forwards each candidate to the other side: from the caller to the callee and from the callee to the caller. To add the candidates forwarded by the other side to the PeerConnection, I added this function:
function handleNewCandidate(connectionId: string, candidateInit: RTCIceCandidateInit) {
    pc.addIceCandidate(new RTCIceCandidate(candidateInit));
}
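One extra caveat worth noting: addIceCandidate rejects if it runs before setRemoteDescription has completed, which can happen when a candidate message races ahead of the answer over the WebSocket. A defensive pattern (my own sketch, not part of the original fix; CandidateQueue is a made-up name) is to buffer early candidates and flush them once the remote description is set:

```typescript
// Hypothetical helper that queues ICE candidates until the remote
// description is set, then flushes them through the supplied callback
// (e.g. (c) => pc.addIceCandidate(new RTCIceCandidate(c))).
interface CandidateInit {
    candidate: string;
    sdpMid?: string;
}

class CandidateQueue {
    private pending: CandidateInit[] = [];
    private ready = false;
    private apply: (c: CandidateInit) => void;

    constructor(apply: (c: CandidateInit) => void) {
        this.apply = apply;
    }

    add(candidate: CandidateInit): void {
        if (this.ready) {
            this.apply(candidate);
        } else {
            this.pending.push(candidate);
        }
    }

    // Call this once setRemoteDescription has resolved.
    markReady(): void {
        this.ready = true;
        for (const c of this.pending.splice(0)) {
            this.apply(c);
        }
    }
}
```

handleNewCandidate would then call queue.add(candidateInit), and markReady() would run right after setRemoteDescription resolves in both sendAnswer and handleAnswered.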