Status API

The Status API provides robust monitoring and retrieval capabilities for asynchronous operations in the RunAnythingAI platform. This API is essential for tracking the progress of text generation requests and retrieving results when processing completes.

Endpoint Details

GET /api/v2/status/{id}

Use this endpoint to check the current state of a previously initiated asynchronous operation.

Request Parameters

Path Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| id | string | Yes | The unique request identifier returned from text generation or character endpoints |

Header Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| Authorization | string | Yes | API key in the format: Bearer YOUR_API_KEY |
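
For reference, a minimal status check with fetch might look like the sketch below. The base URL matches the one used in the implementation examples further down; REQUEST_ID and YOUR_API_KEY are placeholders.

// Minimal status check (sketch). Replace REQUEST_ID and YOUR_API_KEY with real values.
// Run inside an async function or an ES module (top-level await).
const response = await fetch('https://api.runanythingai.com/api/v2/status/REQUEST_ID', {
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY'
  }
});
const data = await response.json();
console.log(data.status); // "processing", "completed", or "error" (see below)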

Response Specifications

The Status API returns different response structures based on the current state of the requested operation.

Processing State Response

Returned when the request is still being processed by the AI model.

{
  "status": "processing",
  "progress": 0.45,
  "estimatedTimeRemaining": "2s"
}

| Parameter | Type | Description |
| --- | --- | --- |
| status | string | Current request status ("processing") |
| progress | number | Optional: completion fraction (0-1), if available |
| estimatedTimeRemaining | string | Optional: estimated time until completion, if available |

Completed State Response

Returned when processing has successfully completed.

{
  "status": "completed",
  "reply": "This is the generated text response from the model.",
  "model": "Mage-1",
  "processingTime": "1.24s",
  "completionTimestamp": "2023-09-15T14:32:45Z"
}

| Parameter | Type | Description |
| --- | --- | --- |
| status | string | Status of the request ("completed") |
| reply | string | The generated content from the model |
| model | string | The model that generated the response |
| processingTime | string | Total time taken to process the request |
| completionTimestamp | string | ISO 8601 timestamp when the request was completed |

Error State Response

Returned when the request encountered an error during processing.

{
  "status": "error",
  "error": "Error message explaining the specific issue that occurred",
  "errorCode": "RATE_LIMIT_EXCEEDED",
  "requestId": "req_abcdef123456"
}

| Parameter | Type | Description |
| --- | --- | --- |
| status | string | Status of the request ("error") |
| error | string | Human-readable description of the error |
| errorCode | string | Machine-readable error code for programmatic handling |
| requestId | string | Unique identifier for error tracking and support |

Implementation Examples

Basic Status Polling

A standard implementation that polls at a fixed interval (one second in this example):

async function checkRequestStatus(requestId, apiKey) {
  try {
    const response = await fetch(`https://api.runanythingai.com/api/v2/status/${requestId}`, {
      headers: {
        'Authorization': `Bearer ${apiKey}`
      }
    });

    if (!response.ok) {
      const errorData = await response.json();
      throw new Error(`Status API error: ${errorData.error || response.statusText}`);
    }

    const data = await response.json();

    if (data.status === 'processing') {
      console.log(`Request ${requestId} still processing...`);
      if (typeof data.progress === 'number') {
        console.log(`Progress: ${Math.round(data.progress * 100)}%`);
      }
      // Poll again after a delay
      return new Promise(resolve => {
        setTimeout(() => resolve(checkRequestStatus(requestId, apiKey)), 1000);
      });
    } else if (data.status === 'completed') {
      console.log(`Request ${requestId} completed in ${data.processingTime || 'N/A'}`);
      return data.reply;
    } else {
      throw new Error(`Request failed: ${data.error}`);
    }
  } catch (error) {
    console.error('Status check failed:', error);
    throw error;
  }
}

// Example usage
checkRequestStatus('08ab17b0-e0d0-4b4d-a0cd-ea0bbdef455a', 'YOUR_API_KEY')
  .then(result => {
    console.log('Final result:', result);
    // Process the completed result
  })
  .catch(error => {
    // Handle errors appropriately
    console.error('Error in status polling:', error);
  });

Advanced Implementation with Exponential Backoff

A production-ready implementation with exponential backoff, an attempt limit, and an overall timeout:

/**
 * Poll for request completion with exponential backoff
 * @param {string} requestId - The ID of the request to check
 * @param {string} apiKey - RunAnythingAI API key
 * @param {Object} options - Configuration options
 * @param {number} options.initialDelay - Initial delay between polls in ms
 * @param {number} options.maxDelay - Maximum delay between polls in ms
 * @param {number} options.backoffFactor - Multiplier for backoff calculation
 * @param {number} options.maxAttempts - Maximum number of polling attempts
 * @param {number} options.timeout - Overall timeout in ms
 * @param {Function} options.onProgress - Callback for progress updates
 * @returns {Promise<string>} The completed result text
 */
async function pollWithBackoff(requestId, apiKey, options = {}) {
  const {
    initialDelay = 1000,
    maxDelay = 10000,
    backoffFactor = 1.5,
    maxAttempts = 30,
    timeout = 120000,
    onProgress = null
  } = options;

  let attempts = 0;
  let currentDelay = initialDelay;
  const startTime = Date.now();

  async function attemptPoll() {
    if (attempts >= maxAttempts) {
      throw new Error(`Maximum polling attempts (${maxAttempts}) exceeded`);
    }

    if (Date.now() - startTime > timeout) {
      throw new Error(`Polling timeout exceeded (${timeout}ms)`);
    }

    attempts++;

    try {
      const response = await fetch(`https://api.runanythingai.com/api/v2/status/${requestId}`, {
        headers: {
          'Authorization': `Bearer ${apiKey}`
        }
      });

      if (!response.ok) {
        // Handle HTTP errors
        if (response.status === 429) {
          console.warn('Rate limit hit, increasing backoff...');
          currentDelay = Math.min(currentDelay * 2, maxDelay * 2);
          await new Promise(resolve => setTimeout(resolve, currentDelay));
          return attemptPoll();
        }

        const errorText = await response.text();
        throw new Error(`HTTP error ${response.status}: ${errorText}`);
      }

      const data = await response.json();

      if (data.status === 'processing') {
        if (onProgress && typeof onProgress === 'function') {
          onProgress(data);
        }

        // Apply exponential backoff
        currentDelay = Math.min(currentDelay * backoffFactor, maxDelay);
        await new Promise(resolve => setTimeout(resolve, currentDelay));
        return attemptPoll();
      } else if (data.status === 'completed') {
        return data.reply;
      } else if (data.status === 'error') {
        throw new Error(`API error: ${data.error} (Code: ${data.errorCode || 'unknown'})`);
      } else {
        throw new Error(`Unknown status: ${data.status}`);
      }
    } catch (error) {
      // fetch rejects with a TypeError on network failures; retry those with backoff
      if (error instanceof TypeError) {
        console.warn(`Network error on attempt ${attempts}, retrying...`);
        currentDelay = Math.min(currentDelay * backoffFactor, maxDelay);
        await new Promise(resolve => setTimeout(resolve, currentDelay));
        return attemptPoll();
      }
      throw error;
    }
  }

  return attemptPoll();
}
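
A short usage sketch for pollWithBackoff; the request ID and API key are placeholders, and the onProgress callback simply logs progress when the API reports it:

// Example usage (placeholder request ID and API key)
pollWithBackoff('08ab17b0-e0d0-4b4d-a0cd-ea0bbdef455a', 'YOUR_API_KEY', {
  initialDelay: 500,
  maxDelay: 5000,
  timeout: 60000,
  onProgress: (data) => {
    if (typeof data.progress === 'number') {
      console.log(`Progress: ${Math.round(data.progress * 100)}%`);
    }
  }
})
  .then(reply => console.log('Final result:', reply))
  .catch(error => console.error('Polling failed:', error));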

Production Best Practices

  1. Intelligent Polling Strategy

    • Implement exponential backoff to reduce API load
    • Set appropriate maximum polling duration based on expected generation time
    • Use shorter initial polling intervals for better UX, increasing gradually
  2. User Experience Integration

    • Display progress indicators during processing state
    • Implement estimated wait time for longer generations
    • Provide cancellation mechanisms for long-running requests (see the sketch after this list)
  3. Error Resilience

    • Implement automatic retry for transient errors and network issues
    • Design clear error messaging for different failure scenarios
    • Cache the request ID for potential manual retry
  4. Performance Optimization

    • Cache completed results to prevent unnecessary re-fetching
    • Implement request batching for multiple simultaneous status checks
    • Use websocket connections when available for push updates instead of polling
  5. Monitoring and Analytics

    • Track response times and completion rates for different request types
    • Monitor error rates and patterns to detect systemic issues
    • Implement alerts for anomalous processing times
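
As a companion to best practice 2, the sketch below shows one way to make an individual status check cancellable with an AbortController. It only aborts the client-side HTTP request; whether the underlying generation can be cancelled server-side is outside the scope of this endpoint. The helper name is illustrative.

// Sketch: cancellable status check using AbortController.
// Calling cancel() aborts the in-flight HTTP request (not the generation itself).
function createCancellableStatusCheck(requestId, apiKey) {
  const controller = new AbortController();

  const promise = fetch(`https://api.runanythingai.com/api/v2/status/${requestId}`, {
    headers: { 'Authorization': `Bearer ${apiKey}` },
    signal: controller.signal
  }).then(response => response.json());

  return { promise, cancel: () => controller.abort() };
}

// Usage: cancel the check if the user navigates away or clicks "Cancel"
const check = createCancellableStatusCheck('REQUEST_ID', 'YOUR_API_KEY');
check.promise
  .then(data => console.log('Status:', data.status))
  .catch(error => {
    if (error.name === 'AbortError') {
      console.log('Status check cancelled');
    } else {
      console.error('Status check failed:', error);
    }
  });
// Elsewhere in the UI: check.cancel();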

HTTP Status Codes

| Status Code | Description | Recommended Action |
| --- | --- | --- |
| 200 | Successful status check | Process the response based on the status field |
| 400 | Invalid request ID format or parameter | Check the request ID format and retry with a valid format |
| 401 | Authentication failure | Verify the API key and Authorization header |
| 403 | Permission denied | Verify account permissions and subscription status |
| 404 | Request ID not found | Verify the request ID or consider it expired |
| 429 | Rate limit exceeded | Implement exponential backoff and retry |
| 500 | Server error during status check | Retry with backoff or contact support |
| 503 | Service temporarily unavailable | Retry after a longer delay |
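
One way to act on this table is a small helper that classifies responses into retryable and non-retryable cases; the function name and the returned labels are illustrative, not part of the API:

// Sketch: classify HTTP status codes from the table above.
function classifyStatusCode(statusCode) {
  if (statusCode === 200) return 'process';            // inspect the status field in the body
  if (statusCode === 429 || statusCode === 500) return 'retry-with-backoff';
  if (statusCode === 503) return 'retry-after-delay';
  if (statusCode === 404) return 'verify-or-expired';  // request ID wrong or results expired
  return 'do-not-retry';                               // 400, 401, 403: fix the request first
}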

Error Codes

Common error codes returned in the errorCode field of error responses:

| Error Code | Description | Suggested Handling |
| --- | --- | --- |
| INVALID_REQUEST_ID | The provided request ID format is invalid | Verify the request ID format |
| REQUEST_NOT_FOUND | The specified request ID does not exist | Check if the request was submitted successfully |
| REQUEST_EXPIRED | The request results are no longer available | Resubmit the original request |
| RATE_LIMIT_EXCEEDED | Too many status checks in a short time period | Implement longer intervals between status checks |
| PROCESSING_ERROR | An error occurred during text generation | Check input parameters and resubmit if needed |
| MODEL_UNAVAILABLE | The requested model is temporarily unavailable | Try an alternative model or retry later |
| CONTENT_FILTERED | The generation was blocked by content filters | Modify the input to comply with content policies |
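
A small sketch of programmatic handling based on the errorCode field; the mapping simply mirrors the suggested handling in the table above, and the function name and return values are illustrative:

// Sketch: map errorCode values from the table above to a handling strategy.
function handleErrorCode(errorCode) {
  switch (errorCode) {
    case 'RATE_LIMIT_EXCEEDED':
      return { action: 'retry', note: 'Wait longer between status checks' };
    case 'REQUEST_EXPIRED':
    case 'PROCESSING_ERROR':
      return { action: 'resubmit', note: 'Send the original request again' };
    case 'MODEL_UNAVAILABLE':
      return { action: 'retry-later', note: 'Retry later or switch models' };
    case 'CONTENT_FILTERED':
      return { action: 'modify-input', note: 'Adjust the prompt to meet content policies' };
    case 'INVALID_REQUEST_ID':
    case 'REQUEST_NOT_FOUND':
    default:
      return { action: 'inspect', note: 'Verify the request ID and submission' };
  }
}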