How do I record audio (microphone + tab audio) from Google Meet with a Chrome extension?

I want to record a Google Meet meeting with a browser extension, but it could be any tab's audio (e.g. YouTube), combined with my microphone.

I know how to record tab audio.

What I don't know is how to combine the tab audio with the microphone to record a complete GMeet session, including respecting the microphone mute state in GMeet.

My current code is on GitHub: https://github.com/prokopsimek/chrome-extension-recording

What is implemented:

  • Recording tab audio
  • Opening a new tab with the audio file after the recording stops

My offscreen.tsx:

    const media = await navigator.mediaDevices.getUserMedia({
      audio: {
        mandatory: {
          chromeMediaSource: 'tab',
          chromeMediaSourceId: streamId,
        },
      },
      video: false,
    } as any);
    console.error('OFFSCREEN media', media);

    // FIXME: this causes error in recording, stops recording the offscreen
    // const micMedia = await navigator.mediaDevices.getUserMedia({
    //   audio: {
    //     mandatory: {
    //       chromeMediaSource: 'tab',
    //       chromeMediaSourceId: micStreamId,
    //     },
    //   },
    //   video: false,
    // } as any);

    // Continue to play the captured audio to the user.
    const output = new AudioContext();
    const source = output.createMediaStreamSource(media);

    const destination = output.createMediaStreamDestination();
    // const micSource = output.createMediaStreamSource(micMedia);

    source.connect(output.destination);
    source.connect(destination);
    // micSource.connect(destination);
    console.error('OFFSCREEN output', output);

    // Start recording.
    recorder = new MediaRecorder(destination.stream, { mimeType: 'video/webm' });
    recorder.ondataavailable = (event: any) => data.push(event.data);
    recorder.onstop = async () => {
      const blob = new Blob(data, { type: 'video/webm' });

      // delete local state of recording
      chrome.runtime.sendMessage({
        action: 'set-recording',
        recording: false,
      });

      window.open(URL.createObjectURL(blob), '_blank');
    };

My popup.tsx click handler:

  const handleRecordClick = () => {
    if (isRecording) {
      console.log('Attempting to stop recording');
      chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
        const currentTab = tabs[0];
        if (currentTab.id) {
          chrome.runtime.sendMessage({
            action: 'stopRecording',
            tabId: currentTab.id,
          });
          setIsRecording(false);
        }
      });
    } else {
      chrome.tabs.query({ active: true, currentWindow: true }, (tabs) => {
        const currentTab = tabs[0];
        if (currentTab.id) {
          chrome.runtime.sendMessage({
            action: 'startRecording',
            tabId: currentTab.id,
          });
          setIsRecording(true);
        }
      });
    }
  };

My background.ts, which initializes the offscreen document:

const startRecordingOffscreen = async (tabId: number) => {
  const existingContexts = await chrome.runtime.getContexts({});
  let recording = false;

  const offscreenDocument = existingContexts.find((c) => c.contextType === 'OFFSCREEN_DOCUMENT');

  // If an offscreen document is not already open, create one.
  if (!offscreenDocument) {
    console.error('OFFSCREEN no offscreen document');
    // Create an offscreen document.
    await chrome.offscreen.createDocument({
      url: 'pages/offscreen/index.html',
      reasons: [chrome.offscreen.Reason.USER_MEDIA, chrome.offscreen.Reason.DISPLAY_MEDIA],
      justification: 'Recording from chrome.tabCapture API',
    });
  } else {
    recording = offscreenDocument.documentUrl?.endsWith('#recording') ?? false;
  }

  if (recording) {
    chrome.runtime.sendMessage({
      type: 'stop-recording',
      target: 'offscreen',
    });
    chrome.action.setIcon({ path: 'icons/not-recording.png' });
    return;
  }

  // Get a MediaStream for the active tab.
  console.error('BACKGROUND getMediaStreamId');

  const streamId = await new Promise<string>((resolve) => {
    // chrome.tabCapture.getMediaStreamId({ consumerTabId: tabId }, (streamId) => {
    chrome.tabCapture.getMediaStreamId({ targetTabId: tabId }, (streamId) => {
      resolve(streamId);
    });
  });
  console.error('BACKGROUND streamId', streamId);

  const micStreamId = await new Promise<string>((resolve) => {
    chrome.tabCapture.getMediaStreamId({ consumerTabId: tabId }, (streamId) => {
      resolve(streamId);
    });
  });
  console.error('BACKGROUND micStreamId', micStreamId);

  // Send the stream ID to the offscreen document to start recording.
  chrome.runtime.sendMessage({
    type: 'start-recording',
    target: 'offscreen',
    data: streamId,
    micStreamId,
  });

  chrome.action.setIcon({ path: '/icons/recording.png' });
};

What is missing:

  • Recording the microphone
  • Combining the microphone with the tab audio
  • Respecting when the microphone is muted/unmuted in GMeet

What I am confused about:

  1. Should I really use an offscreen document?
     • I get "permission denied" for the microphone in the offscreen document (ref)
  2. What should I use for recording: an offscreen document, a content script, the popup, or something else?
  3. How do I combine the audio streams from the two sources into one file?

My goals:

  • Record audio from Google Meet (the tab + my microphone)
  • Record audio only
  • Use Manifest V3 (I expect the older manifest version to be deprecated sooner than v3)
  • If the microphone is not already allowed in the tab where I want to start recording, request permission for it there
  • When I stop the recording, a new tab opens with the blob file so it can be saved

Tags: javascript, google-chrome, google-chrome-extension, webrtc, audio-recording
1 Answer (0 votes):

I will try to address the remaining requirements one by one.

  1. Capture audio from a specific tab.

    const media = await navigator.mediaDevices.getUserMedia({
      audio: {
        mandatory: {
          chromeMediaSource: 'tab',
          chromeMediaSourceId: streamId,
        },
      },
      video: false,
    } as any);
    

Assume we create a separate function called getTabAudioStream that returns the above.
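
A minimal sketch of that wrapper, assuming it runs in the offscreen document, where the tabCapture stream ID can be consumed (the function name is just the one assumed above):

    async function getTabAudioStream(streamId) {
      // chromeMediaSource / chromeMediaSourceId are Chrome-specific constraints,
      // which is why the original snippet casts the options to `any` in TypeScript.
      return await navigator.mediaDevices.getUserMedia({
        audio: {
          mandatory: {
            chromeMediaSource: 'tab',
            chromeMediaSourceId: streamId,
          },
        },
        video: false,
      });
    }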

    • Recording the microphone
    return await navigator.mediaDevices.getUserMedia({
      audio: true, // Request microphone audio
      video: false,
    });
    

Assume we create a separate function called getMicrophoneAudioStream that returns the above.
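
A matching sketch for the microphone wrapper; calling it from an extension page will trigger Chrome's microphone permission prompt if access has not been granted yet (again, the function name is just the one assumed above):

    async function getMicrophoneAudioStream() {
      return await navigator.mediaDevices.getUserMedia({
        audio: true, // Request microphone audio
        video: false,
      });
    }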

    • Combining the microphone with the tab audio
    async function setupAudioCombination(tabStreamId) {
      const tabStream = await getTabAudioStream(tabStreamId);
      const micStream = await getMicrophoneAudioStream();
    
      const audioContext = new AudioContext();
      const tabSource = audioContext.createMediaStreamSource(tabStream);
      const micSource = audioContext.createMediaStreamSource(micStream);
      const destination = audioContext.createMediaStreamDestination();
      const micGainNode = audioContext.createGain(); // GainNode to control mic volume when the mute state updates

      // Connect both sources to the destination
      tabSource.connect(destination);
      micSource.connect(micGainNode); // Connect micSource to the GainNode
      micGainNode.connect(destination);
    
      return {
        combinedStream: destination.stream, // record this
        micGainNode,
        audioContext,
      };
    }
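
To connect this back to the question's goals, here is a hedged sketch of recording the combined stream, following the same MediaRecorder pattern as the question's offscreen.tsx but with an audio-only MIME type; startCombinedRecording is a hypothetical helper, not part of the original answer:

    async function startCombinedRecording(tabStreamId) {
      const { combinedStream, micGainNode, audioContext } =
        await setupAudioCombination(tabStreamId);

      const chunks = [];
      const recorder = new MediaRecorder(combinedStream, { mimeType: 'audio/webm' });
      recorder.ondataavailable = (event) => chunks.push(event.data);
      recorder.onstop = () => {
        const blob = new Blob(chunks, { type: 'audio/webm' });
        // Open the result in a new tab so it can be saved, as in the question.
        window.open(URL.createObjectURL(blob), '_blank');
        audioContext.close();
      };
      recorder.start();

      return { recorder, micGainNode };
    }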
    
    • Respecting when the microphone is muted/unmuted in GMeet

    You can track the microphone "mute" state in Google Meet by selecting the mute
    button via its [data-mute-button] attribute and setting up a MutationObserver
    that listens for changes to the data-is-muted attribute. That attribute toggles
    between "true" and "false" to indicate whether the microphone is muted.

    Here is a simple implementation:

    const muteButton = document.querySelector('[data-mute-button]');
    let isMuted = muteButton.getAttribute('data-is-muted') === 'true';
    
    // Create a MutationObserver to watch for changes in the `data-is-muted` attribute
    const observer = new MutationObserver((mutationsList) => {
      for (const mutation of mutationsList) {
        if (mutation.type === 'attributes' && mutation.attributeName === 'data-is-muted') {
          isMuted = muteButton.getAttribute('data-is-muted') === 'true';
          console.log('Microphone mute state changed:', isMuted ? 'Muted' : 'Unmuted');
          handleMicMuteState(isMuted);
        }
      }
    });
    
    // Start observing the mute button for attribute changes
    observer.observe(muteButton, { attributes: true, attributeFilter: ['data-is-muted'] });
    

    where handleMicMuteState mutes only the microphone audio stream via the
    micGainNode, while the tab audio continues to be recorded:

    function handleMicMuteState(isMuted) {
      if (micGainNode) {
        micGainNode.gain.value = isMuted ? 0 : 1; // Set gain to 0 to mute, 1 to unmute
      }
    }
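
    One caveat: the MutationObserver has to run in a content script on the Meet tab,
    while micGainNode lives wherever the AudioContext was created (for example, the
    offscreen document). If those are different contexts, handleMicMuteState can
    forward the state over extension messaging instead of touching the gain node
    directly. A minimal sketch, assuming a hypothetical 'mic-mute-changed' message
    shape:

    // In the content script (runs in the Google Meet tab)
    function handleMicMuteState(isMuted) {
      chrome.runtime.sendMessage({ type: 'mic-mute-changed', isMuted });
    }

    // In the offscreen document (the context that created micGainNode)
    chrome.runtime.onMessage.addListener((message) => {
      if (message.type === 'mic-mute-changed' && micGainNode) {
        micGainNode.gain.value = message.isMuted ? 0 : 1;
      }
    });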
    