Manifesto

Last Updated

April 1, 2025

Our methodology – turning ChatGPT sessions into carbon data.

We base our emissions calculations on three key components:

  1. Token-based tracking

ChatGPT, like every other large language model platform, works with tokens. Tokens are the building blocks of text that AI models use to process and generate language. A token can be as small as a single character, like "a," or as large as a whole word, depending on the context.

We estimate that an average ChatGPT prompt-response cycle processes around 750 tokens, covering both the user input and the model's generated output. This estimate is based on general observations and the averages used in calculating ChatGPT's API costs. Here's how the figure is derived:

🔸 Tokenization Rule of Thumb: OpenAI states that on average 1 token corresponds to about 0.75 words in English. By that ratio, a prompt-and-response exchange totaling roughly 560 words works out to around 750 tokens (you can check counts yourself with the sketch after this list).

🔸 Typical Query Length: Many interactions with ChatGPT involve a user prompt and the model's response, which together often fall within the range of 500–1,000 tokens. For example, a short question with a concise answer might use fewer tokens, while a longer query and detailed response might use more.

🔸 Context Window Usage: ChatGPT models like GPT-3.5 and GPT-4 have token limits (e.g., 4,096 or 32,768 tokens). Typical user queries rarely approach these limits, and estimates like 750 tokens reflect average usage within these constraints.
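
For readers who want to verify token counts themselves, here is a minimal sketch using tiktoken, OpenAI's open-source tokenizer. The sample strings are illustrative only; cl100k_base is the encoding used by GPT-3.5-turbo and GPT-4 models.

```python
# Minimal sketch: counting tokens in a prompt-response exchange with
# tiktoken, OpenAI's open-source tokenizer. Sample strings are illustrative.
import tiktoken

# cl100k_base is the encoding used by GPT-3.5-turbo and GPT-4 models.
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "What are the main drivers of data-center energy use?"
response = (
    "The biggest drivers are compute (especially GPU inference), "
    "cooling, and power-delivery overhead, usually summarized as PUE."
)

# Our estimate counts both sides of the exchange.
total_tokens = len(encoding.encode(prompt)) + len(encoding.encode(response))
print(f"Tokens in this exchange: {total_tokens}")
```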

  2. Calculating energy use from tokens

We use a token-based estimation model to calculate the energy used per AI interaction. Here's how we arrive at the figure of 0.00387 Wh per token, which underpins our real-time emissions tracking:

🔸 Baseline Reference: 2.9 Wh per Chat Query

Multiple public sources and research estimates (including figures surfaced by Perplexity and academic analyses) converge on the approximation that a single ChatGPT query consumes about 2.9 Wh of energy on average.

🔸 Token Count: 750 Tokens per Query

As noted above, an average ChatGPT prompt-response cycle processes around 750 tokens, counting both the user input and the model's generated output.

🔸 Deriving Per-Token Usage

To obtain a scalable, per-token estimate, we divide the average energy per query by the average number of tokens:

2.9 Wh per query ÷ 750 tokens ≈ 0.00387 Wh per token

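Putting the two averages together, here is a small Python sketch of the per-token figure and how it scales to a whole session. The 2.9 Wh and 750-token values are the ones above; the grid carbon-intensity constant is a hypothetical placeholder we introduce for illustration (global averages are often cited around 400–500 gCO2e/kWh) and should be replaced with a region-appropriate value.

```python
# Sketch of the per-token energy derivation and how it scales to a session.
# 2.9 Wh/query and 750 tokens/query come from the methodology above;
# GRID_GCO2E_PER_KWH is an illustrative assumption, not part of it.

WH_PER_QUERY = 2.9        # average energy per ChatGPT query (Wh)
TOKENS_PER_QUERY = 750    # average tokens per prompt-response cycle

WH_PER_TOKEN = WH_PER_QUERY / TOKENS_PER_QUERY  # ~0.00387 Wh per token

# Hypothetical grid carbon intensity; swap in a regional figure.
GRID_GCO2E_PER_KWH = 400

def session_footprint(total_tokens: int) -> tuple[float, float]:
    """Return (energy in Wh, emissions in gCO2e) for a token count."""
    energy_wh = total_tokens * WH_PER_TOKEN
    emissions_g = energy_wh / 1000 * GRID_GCO2E_PER_KWH  # Wh -> kWh
    return energy_wh, emissions_g

energy, emissions = session_footprint(750)  # one average query
print(f"{energy:.2f} Wh, {emissions:.2f} g CO2e")  # 2.90 Wh, 1.16 g CO2e
```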