Correctly counting the number of tokens in an entire request payload - OpenAI

Question · Votes: 0 · Answers: 1

17.10.24: edited the title to make it easier to search for.
Original title: Which part of an OpenAI API request payload is limited by the maximum number of tokens?


I more or less understand how tokens are computed from characters, but what exactly do I have to count? Say I have a payload like this:

{
  "model": "gpt-3.5-turbo",
  "temperature": 1,
  "max_tokens": 400,
  "presence_penalty": 0.85,
  "frequency_penalty": 0.85,
  "messages": [
    {
      "role": "system",
      "content": "prompt"
    },
    {
      "role": "assistant",
      "content": "message"
    },
    // tens of messages
  ]
}

Do I have to count the tokens of the entire payload, or only of "messages"? If only "messages", do I also have to count all the JSON syntax characters, such as spaces, brackets, and commas? What about the "role" and "content" keys? What about the "role" values? Or do I simply concatenate all the "content" values into a single string and count the tokens of that alone?

openai-api chatgpt-api gpt-3 gpt-4
1 Answer

2 votes
From my understanding and my own counts, every token in the list passed as "messages" is counted. The "role" and "content" values of each message are tokenized, and each message adds a small fixed framing overhead; the JSON syntax characters themselves (spaces, brackets, commas, and quotes) are not counted.
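You can check this for a single message with tiktoken directly. This is just a minimal sketch (it assumes tiktoken is installed and mirrors the counting rule used by the script below):

import tiktoken

# gpt-3.5-turbo uses the cl100k_base encoding.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

message = {"role": "assistant", "content": "message"}

# Only the values are tokenized; the surrounding JSON punctuation
# (braces, quotes, commas) never reaches the model.
value_tokens = sum(len(encoding.encode(v)) for v in message.values())

# Each message also carries a fixed framing overhead
# (3 tokens for the -0613 models).
print(value_tokens + 3)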

I use the following script, provided by OpenAI, to count the number of tokens in the input. I modified it to estimate the cost of the input side (rather than the output response) across multiple messages, and it has been quite accurate for me.

import tiktoken

def num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        print("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")
    if model in {
        "gpt-3.5-turbo-0613",
        "gpt-3.5-turbo-16k-0613",
        "gpt-4-0613",
        "gpt-4-32k-0613",
    }:
        tokens_per_message = 3
        tokens_per_name = 1
    elif model == "gpt-3.5-turbo-0301":
        tokens_per_message = 4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
        tokens_per_name = -1    # if there's a name, the role is omitted
    elif "gpt-3.5-turbo" in model:
        print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
        return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
    elif "gpt-4" in model:
        print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
        return num_tokens_from_messages(messages, model="gpt-4-0613")
    else:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}. See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokens_per_message
        for key, value in message.items():
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens

convo_lens = []
for ex in dataset:  # your list of inputs
    messages = ex["messages"]
    convo_lens.append(num_tokens_from_messages(messages))

n_input_tokens_in_dataset = sum(min(4096, length) for length in convo_lens)
print(f"Input portion of the data has ~{n_input_tokens_in_dataset} tokens")

# Costs per 1K tokens, as of Aug 29 2023.
costs = {
    "gpt-4-0613": {"input": 0.03, "output": 0.06},
    "gpt-4-32k-0613": {"input": 0.06, "output": 0.12},
    "gpt-3.5-turbo-0613": {"input": 0.0015, "output": 0.002},
    "gpt-3.5-turbo-16k-0613": {"input": 0.003, "output": 0.004},
}

# We select gpt-3.5-turbo here.
print(f"Cost of inference: ${(n_input_tokens_in_dataset / 1000) * costs['gpt-3.5-turbo-0613']['input']}")
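For the exact payload in the question, a quick sanity check (a sketch that assumes the script above has already been run) would be:

messages = [
    {"role": "system", "content": "prompt"},
    {"role": "assistant", "content": "message"},
]

# Per-message overhead (3 tokens each here), plus the tokens of every
# "role" and "content" value, plus 3 tokens of reply priming.
print(num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613"))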
    