I want to pipe webcam input as a ReadableStream into a WritableStream in the browser. I've tried the MediaRecorder API, but that stream gets split into separate blobs, while I'd like one continuous stream. I'm thinking the solution might be to pipe the MediaRecorder chunks into a unified buffer and read from that as a continuous stream, but I'm not sure how to get that intermediate buffer working.
mediaRecorder = new MediaRecorder(stream, recorderOptions);
mediaRecorder.ondataavailable = handleDataAvailable;
mediaRecorder.start(1000);

async function handleDataAvailable(event) {
  if (event.data.size > 0) {
    const data = event.data; // a Blob
    // I think I need to pipe to an intermediate stream? Not sure how tho
    data.stream().pipeTo(writable);
  }
}
Currently we can't really access the raw data of a MediaStream; the closest we have for video is the MediaRecorder API, but that encodes the data and works chunk by chunk rather than as a stream.

However, there is a new MediaCapture Transform W3C group working on a MediaStreamTrackProcessor interface that does exactly what you want, and it's already available in Chrome behind the chrome://flags/#enable-experimental-web-platform-features flag:

if (window.MediaStreamTrackProcessor) {
  const track = getCanvasTrack();
  const processor = new MediaStreamTrackProcessor(track);
  const reader = processor.readable.getReader();
  readChunk();

  function readChunk() {
    reader.read().then(({ done, value }) => {
      // value is a VideoFrame
      // we can read the data in each of its planes into an ArrayBufferView
      const channels = value.planes.map((plane) => {
        const arr = new Uint8Array(plane.length);
        plane.readInto(arr);
        return arr;
      });
      value.close(); // close the VideoFrame when we're done with it
      log.textContent = "planes data (15 first values):\n" +
        channels.map((arr) => JSON.stringify([...arr.subarray(0, 15)])).join("\n");
      if (!done) {
        readChunk();
      }
    });
  }
}
else {
  console.error("your browser doesn't support this API yet");
}

function getCanvasTrack() {
  // just some noise...
  const canvas = document.getElementById("canvas");
  const ctx = canvas.getContext("2d");
  const img = new ImageData(300, 150);
  const data = new Uint32Array(img.data.buffer);
  const track = canvas.captureStream().getVideoTracks()[0];
  anim();
  return track;

  function anim() {
    for (let i = 0; i < data.length; i++) {
      data[i] = Math.random() * 0xFFFFFF + 0xFF000000;
    }
    ctx.putImageData(img, 0, 0);
    if (track.readyState === "live") {
      requestAnimationFrame(anim);
    }
  }
}
<pre id="log"></pre>
<p>
Source<br>
<canvas id="canvas"></canvas>
</p>
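As a hedged aside: the interface evolved before shipping. In Chrome as released (around Chrome 94, no flag needed), the constructor takes an init dictionary, and VideoFrame no longer exposes planes; instead you size a buffer with allocationSize() and fill it with copyTo(). A minimal sketch of the same read loop against the shipped API, reusing getCanvasTrack() from above:

if (window.MediaStreamTrackProcessor) {
  const track = getCanvasTrack();
  // the shipped constructor takes an init dictionary, not a bare track
  const processor = new MediaStreamTrackProcessor({ track });
  const reader = processor.readable.getReader();
  (async () => {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      // copy the whole frame (all planes) into one buffer,
      // sized by the frame itself
      const data = new Uint8Array(value.allocationSize());
      await value.copyTo(data);
      value.close(); // still close each VideoFrame as soon as possible
      console.log("first 15 bytes:", [...data.subarray(0, 15)]);
    }
  })();
}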
"I'm thinking the solution might be to pipe the MediaRecorder chunks into a unified buffer and read from that as a continuous stream, but I'm not sure how to get that intermediate buffer working."

Note that you still won't get raw data this way (bitmaps and/or PCM free of generational loss), but the feature you're wishing for is quite simple: about 15 lines of code, and it works in all major browsers:
function encodedStreamFromMediaRecorder(recorder, timeslice) {
  // "Takes ownership" of the MediaRecorder:
  // the MediaRecorder will be started immediately
  // and will be stopped when this stream is cancelled.
  if (recorder.state !== 'inactive')
    throw new Error('Can\'t wrap already-started MediaRecorder');
  let onDataavailable;
  return new ReadableStream({
    start(controller) {
      onDataavailable = (event) => void controller.enqueue(event.data);
      recorder.addEventListener('dataavailable', onDataavailable);
      recorder.start(timeslice);
    },
    cancel(reason) {
      recorder.stop();
      recorder.removeEventListener('dataavailable', onDataavailable);
    }
  }).pipeThrough(new TransformStream({
    // convert each Blob chunk into a Uint8Array, so consumers see
    // a continuous stream of bytes rather than separate Blobs
    async transform(chunk, controller) {
      controller.enqueue(new Uint8Array(await chunk.arrayBuffer()));
    }
  }));
}
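To connect this back to the question: once wrapped, the recorder's output is one continuous ReadableStream of Uint8Array chunks that can be piped straight into a WritableStream. A minimal usage sketch, assuming writable is your existing sink:

// run inside an async function; `writable` is assumed to be
// your existing WritableStream
const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
const recorder = new MediaRecorder(stream);
const readable = encodedStreamFromMediaRecorder(recorder, 1000);
// aborting the pipe cancels the ReadableStream,
// which in turn stops the recorder
await readable.pipeTo(writable);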
Demo:

<form action="javascript:" onsubmit="(eval(event.submitter.value))(event.target)">
  <select name="codec" hidden><option selected>audio/webm;codecs=opus</option></select>
  <input type="number" name="bitrate" value="24000" step="1000" hidden />
  <button type="submit" value="async (form) => {try {
    let codec = form.elements.codec.value;
    let bitrate = parseFloat(form.elements.bitrate.value);
    let mic = await navigator.mediaDevices.getUserMedia({audio: true});
    let mic_recorder = new MediaRecorder(mic, {mimeType: codec, audioBitrateMode: 'variable', audioBitsPerSecond: bitrate});
    let encoded_chunk_stream = encodedStreamFromMediaRecorder(mic_recorder, 1000);
    for await (const chunk of encoded_chunk_stream) {
      console.debug('Got a second of audio: %o', chunk);
    }
  } catch (error) {alert(error);}}">Record</button>
</form>
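One caveat: the for await loop relies on ReadableStream being async-iterable, which not every browser exposes yet; where it isn't available, fall back to reading with encoded_chunk_stream.getReader() in a loop as in the first answer.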