# Models

BlockRun provides access to models from multiple providers through a unified API.

## List Models

`GET https://api.blockrun.ai/v1/models`

Returns a list of available models with pricing information.
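A minimal sketch of working with the returned list. The exact response schema is not shown above, so the shape below is an assumption: field names (`id`, `inputPrice`) follow the SDK example later on this page, and the `data` wrapper is hypothetical.

```python
# Hypothetical /v1/models response shape -- the "data" wrapper and
# field names are assumptions based on the SDK example on this page.
sample_response = {
    "data": [
        {"id": "openai/gpt-4o-mini", "inputPrice": 0.15, "outputPrice": 0.60},
        {"id": "anthropic/claude-haiku-4.5", "inputPrice": 1.00, "outputPrice": 5.00},
    ]
}

def cheapest_model(response: dict) -> str:
    """Return the model ID with the lowest input price."""
    return min(response["data"], key=lambda m: m["inputPrice"])["id"]

print(cheapest_model(sample_response))  # openai/gpt-4o-mini
```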

## Available Models

### OpenAI

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `openai/gpt-5.2` | GPT-5.2 | $1.75/M | $14.00/M | 400K |
| `openai/gpt-4o` | GPT-4o | $2.50/M | $10.00/M | 128K |
| `openai/gpt-4o-mini` | GPT-4o Mini | $0.15/M | $0.60/M | 128K |
| `openai/o1` | o1 | $15.00/M | $60.00/M | 200K |
| `openai/o1-mini` | o1-mini | $3.00/M | $12.00/M | 128K |

### Anthropic

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `anthropic/claude-opus-4` | Claude Opus 4 | $15.00/M | $75.00/M | 200K |
| `anthropic/claude-sonnet-4` | Claude Sonnet 4 | $3.00/M | $15.00/M | 200K |
| `anthropic/claude-haiku-4.5` | Claude Haiku 4.5 | $1.00/M | $5.00/M | 200K |

### Google

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `google/gemini-3-pro-preview` | Gemini 3 Pro | $2.00/M | $12.00/M | 1M |
| `google/gemini-2.5-pro` | Gemini 2.5 Pro | $1.25/M | $10.00/M | 1M |
| `google/gemini-2.5-flash` | Gemini 2.5 Flash | $0.15/M | $0.60/M | 1M |

### xAI

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `x-ai/grok-4-fast` | Grok 4 Fast | $0.20/M | $0.50/M | 2M |

### DeepSeek

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `deepseek/deepseek-v3-0324` | DeepSeek V3 | $0.20/M | $0.88/M | 164K |
| `deepseek/deepseek-r1` | DeepSeek R1 | $0.55/M | $2.19/M | 64K |

### Meta (via Together.ai)

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `meta-llama/llama-3.3-70b-instruct` | Llama 3.3 70B | $0.12/M | $0.30/M | 128K |
| `meta-llama/llama-3.1-405b-instruct` | Llama 3.1 405B | $2.00/M | $2.00/M | 128K |

### Qwen (via Together.ai)

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `qwen/qwen-2.5-72b-instruct` | Qwen 2.5 72B | $0.07/M | $0.26/M | 128K |

### Mistral

| Model ID | Name | Input Price | Output Price | Context |
| --- | --- | --- | --- | --- |
| `mistralai/mistral-large` | Mistral Large | $2.00/M | $6.00/M | 128K |

## Model Categories

Models are tagged with capabilities:

  • `chat` - General conversation
  • `reasoning` - Complex problem-solving
  • `coding` - Code generation and analysis
  • `vision` - Image understanding
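Assuming each model entry exposes its capability tags in a `tags` list (a hypothetical field name; the actual response schema may differ), filtering by capability could look like this:

```python
# Hypothetical: filter a model list by capability tag.
# The "tags" field name is an assumption, not confirmed by this page.
models = [
    {"id": "openai/o1", "tags": ["chat", "reasoning"]},
    {"id": "openai/gpt-4o", "tags": ["chat", "vision", "coding"]},
    {"id": "deepseek/deepseek-r1", "tags": ["reasoning"]},
]

def with_capability(models: list[dict], tag: str) -> list[str]:
    """Return the IDs of all models carrying the given capability tag."""
    return [m["id"] for m in models if tag in m["tags"]]

print(with_capability(models, "reasoning"))  # ['openai/o1', 'deepseek/deepseek-r1']
```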

## Pricing

Prices are per 1 million tokens. Your actual cost depends on:

  1. Input tokens - Length of your prompt and context
  2. Output tokens - Length of the model's response

The SDK calculates the exact cost of each request before sending it.
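As a concrete illustration, a request's cost follows directly from the table prices. The helper below is a sketch for clarity, not part of the SDK:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Cost in USD for one request, given per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: gpt-4o-mini ($0.15/M input, $0.60/M output),
# 2,000 input tokens and 500 output tokens:
cost = request_cost(2_000, 500, 0.15, 0.60)
print(f"${cost:.6f}")  # $0.000600
```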

## Example

{% tabs %} {% tab title="Python" %}

```python
from blockrun_llm import LLMClient

client = LLMClient()
models = client.list_models()

for model in models:
    print(f"{model['id']}: ${model['inputPrice']}/M input")
```

{% endtab %}

{% tab title="TypeScript" %}

```typescript
import { LLMClient } from '@blockrun/llm';

const client = new LLMClient({ privateKey: '0x...' });
const models = await client.listModels();

for (const model of models) {
  console.log(`${model.id}: $${model.inputPrice}/M input`);
}
```

{% endtab %} {% endtabs %}