Text Generation API
Generate natural, contextual responses using RunAnythingAI's advanced language models. Whether you need general-purpose text generation or character-specific interactions, this API provides the foundation for all AI-powered conversations.
Quick Start
```javascript
// Generate text with the default model
const response = await fetch('https://api.runanythingai.com/api/text/default', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    "messages": [{ "role": "You", "content": "Hello!", "index": 0 }],
    "botName": "Assistant"
  })
});

const { id } = await response.json();
// Then poll /api/v2/status/{id} until complete
```
Endpoint
```
POST /api/text/{model_id}
```
Available Models
| Model ID | Best For | Character Type |
|---|---|---|
| `default` | General text generation, multi-purpose content | Neutral assistant |
| `Witch` | Fantasy settings, mystical interactions, magical worldbuilding | Wise, mysterious |
| `Mage` | Educational content, analysis, strategic thinking | Scholarly, intelligent |
| `Succubus` | Social dynamics, persuasive content, charismatic interactions | Alluring, confident |
| `Lightning` | Action scenes, energetic responses, quick interactions | Dynamic, spontaneous |
Model Selection
- Use `default` for general-purpose applications
- Use character models (`Witch`, `Mage`, `Succubus`, `Lightning`) when you want consistent personality and specialized behavior
- Character models work best with custom personas that match their archetype
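If your application switches between use cases at runtime, a small lookup table keeps the model choice in one place. The use-case tags below are illustrative groupings, not part of the API:

```javascript
// Hypothetical helper: map an application use case to a model ID.
// The use-case keys are illustrative; the model IDs come from the table above.
const MODEL_FOR_USE_CASE = {
  general: 'default',
  fantasy: 'Witch',
  education: 'Mage',
  persuasion: 'Succubus',
  action: 'Lightning'
};

function pickModel(useCase) {
  // Fall back to the neutral assistant for unrecognized use cases
  return MODEL_FOR_USE_CASE[useCase] ?? 'default';
}
```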
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `messages` | array | Yes | Array of conversation messages (see Message Object below) |
| `persona` | string | No | Custom personality description to guide the model's behavior |
| `botName` | string | No | Name for the AI assistant (default: "Assistant") |
| `samplingParams` | object | No | Generation parameters (see Sampling Parameters below) |
Message Object
Each message in the conversation requires these fields:
| Field | Type | Required | Description |
|---|---|---|---|
| `role` | string | Yes | Either "You" (user) or "Bot" (assistant) |
| `content` | string | Yes | The actual message text |
| `index` | number | Yes | Position in conversation (0, 1, 2...) |
| `id` | string | No | Unique message identifier |
| `messageIndex` | number | No | Alternative to `index` |
| `chatId` | string | No | Chat session identifier |
| `userId` | string | No | User identifier |
| `createdAt` | string | No | ISO timestamp |
| `upvote` | number | No | Message rating (usually 0) |
| `image` | string | No | Image URL (usually null) |
Sampling Parameters
Fine-tune the generation behavior:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_tokens` | number | 150 | Maximum tokens to generate (1-2048) |
| `temperature` | number | 0.7 | Creativity level (0.1-2.0) |
| `top_p` | number | 0.9 | Nucleus sampling parameter |
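Since out-of-range values can trigger a 400 error, it can help to clamp sampling parameters client-side before sending a request. A minimal sketch assuming the documented ranges above (the 0-1 bounds for `top_p` are an assumption, as the table does not state them):

```javascript
// Clamp sampling parameters to the documented ranges before sending.
// The top_p bounds (0-1) are an assumption, not documented above.
function clampSamplingParams({ max_tokens = 150, temperature = 0.7, top_p = 0.9 } = {}) {
  const clamp = (value, lo, hi) => Math.min(hi, Math.max(lo, value));
  return {
    max_tokens: clamp(Math.round(max_tokens), 1, 2048),
    temperature: clamp(temperature, 0.1, 2.0),
    top_p: clamp(top_p, 0, 1)
  };
}
```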
Message Format
For basic use cases, you only need `role`, `content`, and `index`. The additional fields are useful for tracking conversations in complex applications.
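For multi-turn conversations, a small helper can keep `role` and `index` consistent automatically. This is a hypothetical convenience function, not part of any SDK:

```javascript
// Hypothetical helper: build a correctly indexed messages array from
// an ordered list of turn strings, alternating "You" and "Bot".
function buildMessages(turns) {
  return turns.map((content, i) => ({
    role: i % 2 === 0 ? 'You' : 'Bot', // user speaks first
    content,
    index: i
  }));
}
```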
Response Format
Initial Response (Immediate)
When you submit a generation request, you'll immediately receive:
```json
{
  "id": "08ab17b0-e0d0-4b4d-a0cd-ea0bbdef455a",
  "tokens": -1
}
```
| Field | Type | Description |
|---|---|---|
| `id` | string | Unique request identifier for status tracking |
| `tokens` | number | Token usage (-1 indicates processing) |
Status Polling
Use the request ID to check generation progress:
```
GET /api/v2/status/{id}
```
Processing Response:
```json
{
  "status": "processing"
}
```
Completed Response:
```json
{
  "status": "completed",
  "reply": "This is the generated text response from the model.",
  "tokens": 42
}
```
Error Response:
```json
{
  "status": "error",
  "error": "Error description here"
}
```
Polling Best Practice
Poll every 1-2 seconds for most requests. Character models typically respond in 2-5 seconds.
Complete Examples
Basic Text Generation
```javascript
async function basicGeneration() {
  const response = await fetch('https://api.runanythingai.com/api/text/default', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      "messages": [
        {
          "role": "You",
          "content": "Write a creative product description for a smart coffee mug.",
          "index": 0
        }
      ],
      "botName": "Assistant",
      "samplingParams": {
        "max_tokens": 200
      }
    })
  });

  const { id } = await response.json();

  // Poll for result
  const result = await pollUntilComplete(id);
  console.log(result);
}
```
Character-Based Generation
```javascript
async function characterGeneration() {
  const response = await fetch('https://api.runanythingai.com/api/text/Mage', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      "messages": [
        {
          "id": "msg_001",
          "messageIndex": 0,
          "chatId": "study_session",
          "userId": "student_123",
          "content": "Can you explain quantum computing in simple terms?",
          "createdAt": new Date().toISOString(),
          "upvote": 0,
          "role": "You",
          "image": null,
          "index": 0
        }
      ],
      "persona": `Professor Merlin's Persona: A wise, patient professor of theoretical physics
who excels at explaining complex concepts through analogies and real-world examples.
He speaks with enthusiasm for discovery and has a gift for making the impossible
seem understandable.`,
      "botName": "Professor Merlin",
      "samplingParams": {
        "max_tokens": 250,
        "temperature": 0.8
      }
    })
  });

  const { id } = await response.json();
  const result = await pollUntilComplete(id);
  console.log(result);
}
```
Multi-turn Conversation
```javascript
async function conversationExample() {
  const conversation = [
    {
      "role": "You",
      "content": "I'm starting a small bakery. What should I focus on first?",
      "index": 0
    },
    {
      "role": "Bot",
      "content": "Congratulations on your bakery! Focus on three key areas first: perfecting 2-3 signature recipes, understanding your local market, and managing your costs carefully.",
      "index": 1
    },
    {
      "role": "You",
      "content": "That's helpful! Can you elaborate on understanding the local market?",
      "index": 2
    }
  ];

  const response = await fetch('https://api.runanythingai.com/api/text/default', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({
      "messages": conversation,
      "botName": "Business Advisor",
      "samplingParams": {
        "max_tokens": 200
      }
    })
  });

  const { id } = await response.json();
  const result = await pollUntilComplete(id);
  console.log(result);
}
```
```javascript
// Helper function for polling the status endpoint. A bounded attempt
// count prevents the loop from running forever if a request stalls.
async function pollUntilComplete(requestId, { intervalMs = 1000, maxAttempts = 60 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const statusResponse = await fetch(`https://api.runanythingai.com/api/v2/status/${requestId}`, {
      headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
    });
    const data = await statusResponse.json();

    if (data.status === 'completed') {
      return data.reply;
    } else if (data.status === 'error') {
      throw new Error(data.error);
    }

    // Still processing: wait before the next check
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Generation timed out');
}
```
Error Handling
| Status Code | Error Type | Description | Solution |
|---|---|---|---|
| 400 | Bad Request | Missing required parameters or invalid format | Check your request body structure |
| 401 | Unauthorized | Invalid or missing API key | Verify your API key is correct |
| 404 | Not Found | Model ID doesn't exist | Use a valid model: `default`, `Witch`, `Mage`, `Succubus`, `Lightning` |
| 429 | Rate Limited | Too many requests | Implement exponential backoff, check rate limits |
| 500 | Server Error | Internal processing error | Retry request, contact support if persistent |
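For 429 and 5xx responses, retrying with exponential backoff usually resolves the issue. A sketch of such a wrapper; the delays and attempt count are illustrative:

```javascript
// Retry a request with exponential backoff on 429 and 5xx responses.
// `doFetch` is any function returning a fetch-style Response.
async function fetchWithBackoff(doFetch, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const response = await doFetch();
    // Return immediately on success or on non-retryable client errors
    if (response.status !== 429 && response.status < 500) {
      return response;
    }
    // Wait 1s, 2s, 4s, ... before the next attempt
    await new Promise(resolve => setTimeout(resolve, 1000 * 2 ** attempt));
  }
  throw new Error('Request failed after retries');
}
```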
Error Response Format
```json
{
  "error": {
    "code": "INVALID_PARAMETER",
    "message": "The 'messages' parameter is required but was not provided",
    "details": {
      "parameter": "messages",
      "expected_type": "array"
    }
  },
  "status": "error"
}
```
Best Practices
✅ Do's
- Keep personas concise but descriptive (100-300 words)
- Use consistent message indexing (0, 1, 2...)
- Set appropriate `max_tokens` for your use case
- Implement proper error handling and retries
- Poll status endpoints instead of long timeouts
❌ Don'ts
- Don't exceed rate limits - implement backoff
- Don't send empty messages or missing required fields
- Don't use extremely high `max_tokens` unnecessarily
- Don't ignore error responses - handle them gracefully
- Don't poll too frequently - respect server resources
Rate Limits
| Model Type | Requests per Minute |
|---|---|
| Default | 100 |
| Character Models | 100 |
Optimization Tips
- Batch related requests when possible
- Cache responses for repeated queries
- Use appropriate `max_tokens` to reduce processing time
- Monitor your usage via response headers
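Response caching can be as simple as keying completed replies on the serialized request body. A minimal in-memory sketch; the helper names are illustrative, not part of any SDK:

```javascript
// Minimal in-memory response cache keyed by the serialized request body.
// `generate` is any async function that submits the request and returns the reply.
const responseCache = new Map();

async function generateCached(body, generate) {
  const key = JSON.stringify(body);
  if (!responseCache.has(key)) {
    // Only call the API on a cache miss
    responseCache.set(key, await generate(body));
  }
  return responseCache.get(key);
}
```

Note that `JSON.stringify` is key-order sensitive, so identical requests should build their bodies the same way to get cache hits.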
Next Steps
- Character Endpoint Guide - Deep dive into character-specific features
- Status API - Complete status polling documentation
- API Workflows - Combine endpoints for complex use cases
- Character Development Guide - Master persona creation