I'm writing the front end for a game engine in JavaScript. The engine runs on a server and sends images and sound to the web browser via SignalR. I'm using the React framework.
While the game runs, the server sends small WAVE-format sound samples, which are passed into this component through AudioPlayerProps.
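For context, the chunks reach the component roughly like this; the hub URL, event name, and setAudioData setter below are placeholders rather than my real code:
import { HubConnectionBuilder } from "@microsoft/signalr";
const connection = new HubConnectionBuilder()
    .withUrl("/gameHub") // placeholder hub URL
    .build();
// Each message carries one small WAVE sample as a base64 string,
// which the parent component passes down to AudioPlayer as props.data.
connection.on("PlaySound", (wavBase64: string) => {
    setAudioData(wavBase64); // hypothetical state setter in the parent
});
connection.start();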
I have two main problems with the sound. The first is that playback sounds "disjointed". The second is that after a while the sound stops playing altogether. I can see sounds being queued in the audio queue, but the 'playNextAudioTrack' method is never called. There are no errors in the console to explain this.
If this isn't the best way to provide sound for a game front end, please tell me.
Also, let me know if you'd like to see more code. This is a large multi-layered project, so I've only included what I think is relevant.
For now I'm testing in Chrome. At this stage I have to open the dev tools to get around the "user hasn't interacted with the page, so you can't play any sound" problem. I'll deal with that properly in due course.
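From what I've read, the eventual fix is to resume audio on the first user gesture. A minimal sketch (audioContext stands for whatever AudioContext the app ends up using):
// Run once, on the first user gesture, to satisfy Chrome's autoplay policy.
document.addEventListener(
    "click",
    () => {
        if (audioContext.state === "suspended") {
            audioContext.resume();
        }
    },
    { once: true }
);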
import * as React from "react";
import { useEffect, useState } from "react";

export interface AudioPlayerProps {
    data: string;
}

export const AudioPlayer = function (props: AudioPlayerProps): JSX.Element {
    const [audioQueue, setAudioQueue] = useState<string[]>([])

    useEffect(
        () => {
            if (props.data != undefined) {
                audioQueue.push(props.data);
            }
        }, [props.data]);

    const playNextAudioTrack = () => {
        if (audioQueue.length > 0) {
            const audioBase64 = audioQueue.pop();
            const newAudio = new Audio(`data:audio/wav;base64,${audioBase64}`)
            newAudio.play().then(playNextAudioTrack).catch(
                (error) => {
                    setTimeout(playNextAudioTrack, 10);
                }
            )
        }
        else {
            setTimeout(playNextAudioTrack, 10);
        }
    }

    useEffect(playNextAudioTrack, []);

    return null;
}
I solved my own problem. Here is the TypeScript class I wrote for handling chunked audio in the browser.
I'm not a JavaScript expert, so there may be mistakes, but it works :).
// mostly from https://gist.github.com/revolunet/e620e2c532b7144c62768a36b8b96da2
// Modified to play chunked audio for games

const MaxScheduled = 10;              // max buffers scheduled ahead on the audio clock
const MaxQueueLength = 2000;          // if the backlog grows past this, dump it
const MinScheduledToStopDraining = 5; // resume scheduling once the backlog shrinks to this
export class WebAudioStreamer {
    context: AudioContext;
    audioStack: AudioBuffer[];
    nextTime: number;          // AudioContext time at which the next buffer should start
    numberScheduled: number;   // buffers scheduled but not yet finished playing
    isDraining: boolean;       // true while we pause scheduling to let the backlog shrink
    isWorking: boolean;

    constructor() {
        this.isDraining = false;
        this.isWorking = false;
        this.audioStack = [];
        this.nextTime = 0;
        this.numberScheduled = 0;
        // Poll the stack and schedule anything that has been decoded.
        setInterval(() => {
            if (this.audioStack.length && !this.isWorking) {
                this.scheduleBuffers(this);
            }
        }, 0);
    }
    pushOntoAudioStack(encodedBytes: number[]) {
        // The AudioContext is created lazily, on the first chunk.
        if (this.context == undefined) {
            this.context = new window.AudioContext();
        }
        const encodedBuffer = new Uint8Array(encodedBytes).buffer;
        const streamer: WebAudioStreamer = this;
        // If the backlog has grown huge, playback has fallen hopelessly
        // behind - throw it away rather than playing stale audio.
        if (this.audioStack.length > MaxQueueLength) {
            this.audioStack = [];
        }
        // Each chunk must be a complete WAVE file (header included) for
        // decodeAudioData to accept it.
        streamer.context.decodeAudioData(encodedBuffer, function (decodedBuffer) {
            streamer.audioStack.push(decodedBuffer);
        });
    }
    scheduleBuffers(streamer: WebAudioStreamer) {
        streamer.isWorking = true;
        if (streamer.context == undefined) {
            streamer.context = new window.AudioContext();
        }
        // Once enough buffers are queued, stop scheduling ("drain") until
        // the backlog falls back to a comfortable level.
        if (streamer.isDraining && streamer.numberScheduled <= MinScheduledToStopDraining) {
            streamer.isDraining = false;
        }
        while (streamer.audioStack.length && !streamer.isDraining) {
            const buffer = streamer.audioStack.shift();
            const source = streamer.context.createBufferSource();
            source.buffer = buffer;
            source.connect(streamer.context.destination);
            if (streamer.nextTime == 0)
                streamer.nextTime = streamer.context.currentTime + 0.01; // add a little latency (10 ms here) to work well across systems - tune this if you like
            source.start(streamer.nextTime);
            streamer.nextTime += source.buffer.duration; // make the next buffer wait the length of this one before being played
            streamer.numberScheduled++;
            source.onended = function () {
                streamer.numberScheduled--;
            };
            if (streamer.numberScheduled == MaxScheduled) {
                streamer.isDraining = true;
            }
        }
        streamer.isWorking = false;
    }
}
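Usage looks roughly like this; the hub URL, event name, and byte-array payload shape are placeholders (my real wiring differs slightly):
import { HubConnectionBuilder } from "@microsoft/signalr";
const streamer = new WebAudioStreamer();
const connection = new HubConnectionBuilder()
    .withUrl("/gameHub") // placeholder hub URL
    .build();
// Assumes each message is one complete WAVE file as an array of bytes.
connection.on("PlaySound", (wavBytes: number[]) => {
    streamer.pushOntoAudioStack(wavBytes);
});
connection.start();
The "disjointed" sound goes away because each buffer is scheduled at nextTime on the AudioContext clock, back-to-back with the previous one, instead of waiting for an Audio element to finish before starting the next.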