Discord.js/voice: How to create an AudioResource?

I'm using the latest version of the Discord.js API, which requires @discordjs/voice to play audio in voice chat. I'm trying to create my own music bot, but I've run into a problem actually getting audio to play.

I think the problem is in how I create the AudioResource object, even though I've tried to follow the example from the Discord guide.

Here are the relevant parts of the code:

    const discord = require("discord.js")
    const ytdl = require("ytdl-core")
    const MUSIC_PATH = "./music/song.webm"
    const {
        createWriteStream,
        createReadStream,
    } = require("fs")
    const {
        joinVoiceChannel,
        createAudioPlayer,
        createAudioResource,
        StreamType,
        AudioPlayerStatus,
    } = require("@discordjs/voice") 
    const {
        prefix,
        token
    } = require("./config.json")
    const client = new discord.Client({ intents: ["GUILDS", "GUILD_MESSAGES"] }) //Intention to interact with messages
    
    const audioPlayer = {
        musicStream: createAudioPlayer(),
        connection: null,
        connectionId: null,
    }

    client.on('messageCreate', msg => {
        if (msg.author.bot || !msg.content.startsWith(prefix)) return
        let messageParts = msg.content.split(" ")

        const voiceChannel = msg.member.voice.channel
        switch (messageParts[0]) {
            case "!play":
                if (!canExecutePlayRequest(msg, voiceChannel)) return
                createChannelConnection(msg, voiceChannel)
                playMusic(messageParts[1])
                break;
            case "!skip":
                msg.reply("!skip")
                break;
            case "!stop":
                msg.reply("!stop")
                break;
            case "!disconnect":
                destroyChannelConnection(msg, voiceChannel)
                break;
            default:
                msg.reply("That's not a real command!")
        }

        /**
         * Creates connection object for channel that user is currently in. Adds said connection to audioPlayer.
         * @param {*} msg Command message
         * @param {*} voiceChannel Current voice channel of user
         */
        function createChannelConnection(msg, voiceChannel) {
            //Check for existing connection
            if (audioPlayer.connection != null) {
                //If already connected to channel of user return
                if (audioPlayer.connectionId == voiceChannel.id) return //FIXME: channel checking update when user changes

                //If connected to different channel destroy that connection first
                destroyChannelConnection(msg, voiceChannel)
            }

            //Create and save connection
            const connection = joinVoiceChannel({
                channelId: voiceChannel.id,
                guildId: voiceChannel.guild.id,
                adapterCreator: voiceChannel.guild.voiceAdapterCreator,
            })
            connection.subscribe(audioPlayer.musicStream)

            audioPlayer.connection = connection
            audioPlayer.connectionId = voiceChannel.id
        }
    })

    function playMusic(url){
        ytdl(url, { filter: 'audioonly' }).pipe(createWriteStream(MUSIC_PATH)) //works

        const resource = createAudioResource(createReadStream(MUSIC_PATH), {
            inputType: StreamType.WebmOpus,
        })
        console.log(resource)
        audioPlayer.musicStream.play(resource)
    }

A few notes:

  1. I use my MUSIC_PATH instead of join(__dirname, 'file.webm') as they do in the Discord guide I linked. I have tried both and get the same output; neither throws an error.

  2. The bot joins the voice chat without a problem. Using audio state updates, I have also confirmed that audioPlayer.musicStream.play() does put the audio player into the Playing state.

  3. Before executing a !play command, the bot checks that it has both Connect and Speak permissions, and both checks pass.

  4. This is the output of console.log(resource) when the url tries to play Joyner Lucas' Will:

    AudioResource {
      playbackDuration: 0,
      started: false,
      silenceRemaining: -1,
      edges: [
        {
          type: 'webm/opus demuxer',
          to: [Node],
          cost: 1,
          transformer: [Function: transformer],
          from: [Node]
        }
      ],
      playStream: WebmDemuxer {
        _readableState: ReadableState {
          objectMode: true,
          highWaterMark: 16,
          buffer: BufferList { head: null, tail: null, length: 0 },
          length: 0,
          pipes: [],
          flowing: false,
          ended: false,
          endEmitted: false,
          reading: false,
          constructed: true,
          sync: false,
          needReadable: true,
          emittedReadable: false,
          readableListening: true,
          resumeScheduled: false,
          errorEmitted: false,
          emitClose: true,
          autoDestroy: true,
          destroyed: false,
          errored: null,
          closed: false,
          closeEmitted: false,
          defaultEncoding: 'utf8',
          awaitDrainWriters: null,
          multiAwaitDrain: false,
          readingMore: false,
          dataEmitted: false,
          decoder: null,
          encoding: null,
          [Symbol(kPaused)]: null
        },
        _events: [Object: null prototype] {
          prefinish: [Function: prefinish],
          close: [Array],
          end: [Function: onend],
          finish: [Array],
          error: [Array],
          unpipe: [Function: onunpipe],
          readable: [Function]
        },
        _eventsCount: 7,
        _maxListeners: undefined,
        _writableState: WritableState {
          objectMode: false,
          highWaterMark: 16384,
          finalCalled: false,
          needDrain: false,
          ending: false,
          ended: false,
          finished: false,
          destroyed: false,
          decodeStrings: true,
          defaultEncoding: 'utf8',
          length: 0,
          writing: false,
          corked: 0,
          sync: true,
          bufferProcessing: false,
          onwrite: [Function: bound onwrite],
          writecb: null,
          writelen: 0,
          afterWriteTickInfo: null,
          buffered: [],
          bufferedIndex: 0,
          allBuffers: true,
          allNoop: true,
          pendingcb: 0,
          constructed: true,
          prefinished: false,
          errorEmitted: false,
          emitClose: true,
          autoDestroy: true,
          errored: null,
          closed: false,
          closeEmitted: false,
          [Symbol(kOnFinished)]: []
        },
        allowHalfOpen: true,
        _remainder: null,
        _length: 0,
        _count: 0,
        _skipUntil: null,
        _track: null,
        _incompleteTrack: {},
        _ebmlFound: false,
        [Symbol(kCapture)]: false,
        [Symbol(kCallback)]: null
      },
      metadata: null,
      silencePaddingFrames: 5
    }

Needless to say, no music plays in the voice chat. What am I doing wrong when creating this resource? It clearly isn't working properly. Could it be related to @discordjs/opus? I've seen it mentioned, but I know nothing about it, even though that dependency is included in my project.
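As a side note on point 2 above: one way to see what the player is actually doing is to log every state transition. The sketch below only assumes the event convention that @discordjs/voice audio players follow (a "stateChange" event with old and new state objects carrying a `status` string, plus an "error" event); `attachDebugLogging` itself is a hypothetical helper name, not part of the library.

```javascript
// Hypothetical debugging helper: logs every state transition of an audio
// player. Assumes the player emits "stateChange" with (oldState, newState)
// objects that carry a `status` string, and "error" with an Error, as
// @discordjs/voice audio players do.
function attachDebugLogging(player, log = console.log) {
    player.on("stateChange", (oldState, newState) => {
        log(`audio player: ${oldState.status} -> ${newState.status}`)
    })
    player.on("error", (error) => {
        log(`audio player error: ${error.message}`)
    })
    return player
}
```

Called once as `attachDebugLogging(audioPlayer.musicStream)`, this makes it easy to see whether the player ever leaves the buffering state after `play()` is called.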

Thanks in advance for any help.

I found the solution!

My hypothesis that the AudioResource creation was failing turned out to be correct. It doesn't seem to be something I did wrong, though, but rather a problem with the ytdl-core package. I never figured out exactly what was going wrong, but I have since switched to the play-dl package to stream my music into the AudioResource, like this:

    const play = require("play-dl")

    //Create Stream from Youtube URL
    const stream = await play.stream(url)

    //Create AudioResource from Stream
    let resource = createAudioResource(stream.stream, {
        inputType: stream.type
    })

    //Play resource
    audioPlayer.musicStream.play(resource)

It now creates functional AudioResources and plays the music.
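Put together, the reworked playMusic looks roughly like the sketch below. The dependencies are passed in as parameters purely so the sketch stands alone; in the bot itself they are `play.stream` (from play-dl), `createAudioResource` (from @discordjs/voice), and `audioPlayer.musicStream`.

```javascript
// Sketch of the reworked playMusic with injected dependencies:
//   getStream    - e.g. play.stream from play-dl (resolves to { stream, type })
//   makeResource - e.g. createAudioResource from @discordjs/voice
//   player       - e.g. audioPlayer.musicStream
async function playMusic(url, { getStream, makeResource, player }) {
    // play-dl resolves to an object carrying both the stream and its type
    const { stream, type } = await getStream(url)

    // Build the AudioResource directly from the stream, letting the
    // reported type select the right demuxing path
    const resource = makeResource(stream, { inputType: type })

    // Hand the resource to the subscribed audio player
    player.play(resource)
    return resource
}
```

Compared with the original version, this avoids writing to disk and reading the file back entirely: the stream goes straight from play-dl into the resource.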

It's also worth mentioning that I was missing an intent when creating my client: apparently the "GUILD_VOICE_STATES" intent is required to play audio in a voice channel.
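Concretely, that means the client creation from the top of the post becomes:

```javascript
const client = new discord.Client({
    intents: ["GUILDS", "GUILD_MESSAGES", "GUILD_VOICE_STATES"], //GUILD_VOICE_STATES added for voice playback
})
```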