Ling-1T Docs

Messages API (Claude Format)

Anthropic-compatible request structure for Ling-1T.

Use this endpoint when integrating Ling-1T with SDKs or workflows that expect Anthropic Claude semantics.

  • Endpoint: POST https://ling-1t.ai/api/v1/messages
  • Headers:
    • x-api-key: <api-key>
    • anthropic-version: 2023-06-01
  • Streaming: not yet supported (returns full response)

Request Schema

{
  "model": "inclusionai/ling-1t",
  "system": "Provide concise, well-cited answers.",
  "messages": [
    {
      "role": "user",
      "content": "Draft a launch announcement for Ling-1T."
    }
  ],
  "max_tokens": 800,
  "temperature": 0.5
}

The messages array supports the role values user and assistant. Each message's content may be either a plain string or an array of { "type": "text", "text": "..." } blocks; the two forms are equivalent for text-only requests.
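As a sketch of the two accepted content shapes, the following builds a small multi-turn history in Python (the text_block helper is illustrative, not part of any SDK):

```python
def text_block(text: str) -> dict:
    """Wrap a string in the block form: {"type": "text", "text": ...}."""
    return {"type": "text", "text": text}

messages = [
    # String form: content is a plain string.
    {"role": "user", "content": "Summarize the Ling-1T launch plan."},
    # Block-array form: equivalent for text, and the shape to use if
    # additional block types are supported later.
    {"role": "assistant", "content": [text_block("Here is a summary...")]},
    {"role": "user", "content": [text_block("Now shorten it to one sentence.")]},
]
```

Either form can be passed directly as the "messages" field of the request payload.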

Example: cURL

curl https://ling-1t.ai/api/v1/messages \
  -H "content-type: application/json" \
  -H "x-api-key: $LING1T_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "inclusionai/ling-1t",
    "system": "Provide concise, well-cited answers.",
    "messages": [
      {"role": "user", "content": "Draft a launch announcement for Ling-1T."}
    ],
    "max_tokens": 800
  }'

Node.js (TypeScript)

// Node 18+ ships a global fetch; this import is only needed on older runtimes.
import fetch from 'node-fetch';
 
const response = await fetch('https://ling-1t.ai/api/v1/messages', {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    'x-api-key': process.env.LING1T_API_KEY ?? '',
    'anthropic-version': '2023-06-01',
  },
  body: JSON.stringify({
    model: 'inclusionai/ling-1t',
    system: 'Provide concise, well-cited answers.',
    messages: [
      { role: 'user', content: 'Draft a launch announcement for Ling-1T.' },
    ],
    max_tokens: 800,
  }),
});
 
if (!response.ok) {
  throw new Error(`Request failed: ${response.status}`);
}
 
const result = (await response.json()) as {
  content?: Array<{ type: string; text?: string }>;
};
console.log(result.content?.[0]?.text);

Python

import os
import requests
 
headers = {
    "content-type": "application/json",
    "x-api-key": os.getenv("LING1T_API_KEY", ""),
    "anthropic-version": "2023-06-01",
}
 
payload = {
    "model": "inclusionai/ling-1t",
    "system": "Provide concise, well-cited answers.",
    "messages": [
        {"role": "user", "content": "Draft a launch announcement for Ling-1T."}
    ],
    "max_tokens": 800,
}
 
resp = requests.post(
    "https://ling-1t.ai/api/v1/messages",
    json=payload,
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(data["content"][0]["text"])

Response Shape

{
  "id": "msg_01HZA...",
  "type": "message",
  "role": "assistant",
  "model": "inclusionai/ling-1t",
  "content": [
    {
      "type": "text",
      "text": "Announcing Ling-1T..."
    }
  ],
  "usage": {
    "input_tokens": 215,
    "output_tokens": 438
  },
  "stop_reason": "end_turn",
  "created_at": "2025-06-23T08:42:11.201Z"
}

Use stop_reason to determine why generation ended: end_turn means the model finished naturally, while max_tokens means output was truncated by the token budget.
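A minimal sketch of acting on stop_reason, assuming the response has already been parsed into a dict as in the Python example above:

```python
def needs_continuation(response: dict) -> bool:
    """True when generation was cut off by the max_tokens budget,
    meaning the text in `content` is likely incomplete."""
    return response.get("stop_reason") == "max_tokens"

# Example parsed responses (illustrative values only)
finished = {"stop_reason": "end_turn", "content": [{"type": "text", "text": "Done."}]}
truncated = {"stop_reason": "max_tokens", "content": [{"type": "text", "text": "Announc"}]}
```

When needs_continuation returns True, a follow-up request with a higher max_tokens (or a prompt asking the model to continue) can recover the rest of the output.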

Client Integration Tips

  • Anthropic SDKs: Configure the base URL to https://ling-1t.ai/api/v1 and supply the headers above. Most clients allow overriding the endpoint.
  • Retries: Handle 429 (rate limit) and 500 responses with exponential backoff.
  • Token accounting: Read the response usage block (input_tokens, output_tokens) to feed your internal cost monitoring.
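The retry advice above can be sketched as a small wrapper. To keep it self-contained, it takes a `send` callable returning (status_code, body) rather than importing an HTTP client; the status set and delay schedule are reasonable defaults, not values mandated by the API:

```python
import random
import time

# Statuses worth retrying: rate limits and transient server errors.
RETRYABLE = {429, 500, 502, 503}

def backoff_delay(attempt: int, cap: float = 30.0) -> float:
    """Exponential delay: 1s, 2s, 4s, ..., capped at `cap` seconds."""
    return min(2.0 ** attempt, cap)

def post_with_backoff(send, max_attempts: int = 5):
    """Call `send()` until it returns a non-retryable status.

    `send` is any zero-argument callable returning (status_code, body),
    e.g. a closure around requests.post.
    """
    for attempt in range(max_attempts):
        status, body = send()
        if status not in RETRYABLE:
            return status, body
        # Full jitter: add up to 1s of randomness to avoid thundering herds.
        time.sleep(backoff_delay(attempt) + random.random())
    raise RuntimeError(f"giving up after {max_attempts} attempts")
```

With the requests-based example above, `send` would be something like `lambda: (resp := requests.post(url, json=payload, headers=headers, timeout=30)) and (resp.status_code, resp.json())` wrapped in a plain function for readability.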
