Advanced Features & Techniques (Part 2)

This guide continues our exploration of RunAnythingAI's advanced capabilities, building on the foundation established in Advanced Features & Techniques. Here we delve into sophisticated implementation patterns, optimization strategies, and specialized use cases for enterprise-grade AI applications.

Pro-level Content

This section covers advanced concepts and techniques. For the fundamentals, please start with Part 1.

Advanced Memory Systems

Create persistent character interactions with sophisticated memory systems that enable characters to remember past conversations, learn from experiences, and develop meaningful relationships with users.

Short-term vs. Long-term Memory

Implement a tiered memory architecture for your characters:

class CharacterMemorySystem {
constructor(characterId) {
this.characterId = characterId;
this.shortTermMemory = []; // Recent interactions, newest first
this.longTermMemory = []; // Important facts and events
this.relationshipInsights = {}; // Insights about relationships
}

// Add a new interaction to memory
addInteraction(message, userId) {
// Add to short-term memory (limited capacity)
this.shortTermMemory.unshift({
content: message,
timestamp: Date.now(),
userId
});

// Keep only the most recent interactions
if (this.shortTermMemory.length > 10) {
this.shortTermMemory.pop();
}

// Analyze for important information to add to long-term memory
this.analyzeAndStore(message, userId);
}

// Extract important facts for long-term retention
analyzeAndStore(message, userId) {
// In production, you would use an embedding model or
// classification system to identify important information

// This is a simplified example
const importantTopics = [
'name', 'job', 'family', 'preference',
'birthday', 'location', 'hobby'
];

for (const topic of importantTopics) {
if (message.toLowerCase().includes(topic)) {
this.longTermMemory.push({
topic,
content: message,
timestamp: Date.now(),
userId
});

// Update relationship insights
this.updateRelationship(userId, topic, message);
}
}
}

// Update relationship insights
updateRelationship(userId, topic, message) {
if (!this.relationshipInsights[userId]) {
this.relationshipInsights[userId] = {
familiarity: 0,
topics: {},
lastInteraction: Date.now()
};
}

const relationship = this.relationshipInsights[userId];
relationship.familiarity += 0.1; // Increase familiarity with each important interaction
relationship.lastInteraction = Date.now();

// Store topic-specific insights
if (!relationship.topics[topic]) {
relationship.topics[topic] = [];
}
relationship.topics[topic].push(message);
}

// Generate memory-infused persona for API calls
generateEnhancedPersona(basePersona, userId) {
// Start with base character definition
let enhancedPersona = basePersona;

// Add relationship context if available
const relationship = this.relationshipInsights[userId];
if (relationship && relationship.familiarity > 0.5) {
enhancedPersona += " They have a good rapport with the user.";

// Add key facts from memory
const topics = Object.keys(relationship.topics).slice(0, 3);
if (topics.length > 0) {
enhancedPersona += " They remember that the user ";

topics.forEach((topic, index) => {
const latestMemory = relationship.topics[topic].slice(-1)[0];
const simplifiedMemory = this.simplifyMemory(topic, latestMemory);

enhancedPersona += simplifiedMemory;
if (index < topics.length - 2) enhancedPersona += ", ";
else if (index === topics.length - 2) enhancedPersona += " and ";
});

enhancedPersona += ".";
}
}

return enhancedPersona;
}

// Helper to convert raw memories into digestible facts
simplifyMemory(topic, memory) {
// In production, use NLP to extract concise insights
// This is a simplified example
if (topic === 'name') return "is named " + (memory.split('name is ')[1]?.split(' ')[0] || "someone");
if (topic === 'job') return "works as " + (memory.split('work as ')[1]?.split(' ')[0] || "a professional");
if (topic === 'hobby') return "enjoys " + (memory.split('enjoy ')[1]?.split(' ')[0] || "hobbies");
return "mentioned something about " + topic;
}
}
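
Before wiring this into an API call, it can help to exercise the class on its own. The snippet below is a minimal usage sketch; the character ID, user ID, and persona strings are illustrative placeholders.

const memory = new CharacterMemorySystem('char_001');

// Each message that mentions an important topic is promoted to long-term
// memory and nudges familiarity up by 0.1
memory.addInteraction("My name is Dana.", 'user_42');
memory.addInteraction("My job keeps me busy.", 'user_42');
memory.addInteraction("My main hobby is hiking.", 'user_42');

// Remembered facts are only woven into the persona once familiarity
// exceeds 0.5, i.e. after enough topic-matching interactions
const basePersona = "Riley is a warm, attentive conversationalist.";
console.log(memory.generateEnhancedPersona(basePersona, 'user_42'));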

Implementing Memory in API Calls

Integrate memory systems with character interactions:

async function characterInteractionWithMemory(userId, message, character) {
// Retrieve or initialize memory system
const memory = getCharacterMemory(character.id);

// Add new message to memory
memory.addInteraction(message, userId);

// Generate memory-enhanced persona
const enhancedPersona = memory.generateEnhancedPersona(character.basePersona, userId);

// Get conversation history (last few exchanges)
const recentMessages = getUserConversationHistory(userId, character.id, 5);

// Make API call with enhanced persona and history
const response = await fetch(`https://api.runanythingai.com/api/text/${character.endpoint}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${API_KEY}`
},
body: JSON.stringify({
"messages": recentMessages,
"persona": enhancedPersona,
"botName": character.name,
"samplingParams": { "max_tokens": 200 }
})
});

const data = await response.json();

// Process response as usual...
return pollForCompletion(data.id);
}
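
This function assumes two helpers that are not shown here: getCharacterMemory, which returns (or creates) the memory system for a character, and getUserConversationHistory, which retrieves recent exchanges. A minimal in-memory sketch of the first might look like the following; in production you would back it with a database or Redis so memories survive restarts.

// Hypothetical in-memory registry of per-character memory systems
const memoryRegistry = new Map();

function getCharacterMemory(characterId) {
  if (!memoryRegistry.has(characterId)) {
    memoryRegistry.set(characterId, new CharacterMemorySystem(characterId));
  }
  return memoryRegistry.get(characterId);
}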

Character Ensembles and Specialized Roles

Create sophisticated multi-character systems where specialized characters handle different aspects of interaction.

Character Routing System

Route user inputs to the most appropriate specialized character based on intent:

// Define character specialists
const characters = {
technicalSupport: {
endpoint: "Mage",
name: "Alex",
persona: "Alex is a technical support specialist with extensive knowledge of software troubleshooting...",
topics: ["error", "bug", "doesn't work", "broken", "fix", "problem"]
},
productAdvice: {
endpoint: "Witch",
name: "Jamie",
persona: "Jamie is a product specialist with detailed knowledge of all product features and use cases...",
topics: ["recommend", "should I", "which one", "features", "difference", "compare"]
},
billing: {
endpoint: "Lightning",
name: "Morgan",
persona: "Morgan is a billing specialist with expertise in subscription plans, payment processing and refunds...",
topics: ["payment", "charge", "refund", "subscription", "billing", "price"]
}
};

// Intent detection function
function detectIntent(message) {
message = message.toLowerCase();

for (const [type, character] of Object.entries(characters)) {
for (const topic of character.topics) {
if (message.includes(topic)) {
return type;
}
}
}

// Default to product advice if no clear match
return "productAdvice";
}

// Character routing function
async function routeToCharacter(message, userId) {
// Detect intent
const intent = detectIntent(message);
const selectedCharacter = characters[intent];

console.log(`Routing to ${selectedCharacter.name} for ${intent}`);

// Process with the selected character
return characterInteraction(
selectedCharacter.endpoint,
message,
userId,
selectedCharacter.persona,
selectedCharacter.name
);
}
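
Substring matching returns the first specialist whose keywords appear anywhere in the message. If you need something slightly more robust, the variant below (a sketch that assumes the same characters map is in scope) scores every specialist by the number of keyword hits and routes to the highest scorer.

// Score-based alternative to detectIntent: count keyword matches per
// specialist and return the best-scoring one
function detectIntentScored(message) {
  const text = message.toLowerCase();
  let best = { type: "productAdvice", score: 0 }; // default specialist

  for (const [type, character] of Object.entries(characters)) {
    const score = character.topics.filter(topic => text.includes(topic)).length;
    if (score > best.score) {
      best = { type, score };
    }
  }

  return best.type;
}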

Advanced Voice & Audio Integration

Create rich audio experiences with advanced text-to-speech integration.

SSML Enhancement for Emotional Speech

Use Speech Synthesis Markup Language (SSML)-style markup to add emotional depth:

// Helper function to add emotional markup
function enhanceTextWithEmotions(text) {
// Wrap text enclosed in exclamation marks (e.g. !really!) in emphasis tags
if (text.includes('!')) {
text = text.replace(/!([^!]+)!/g, '<emphasis>$1</emphasis>');
}

// Add pauses at punctuation for more natural speech
text = text.replace(/\./g, '.<break strength="medium"/>');
text = text.replace(/\?/g, '?<break strength="medium"/>');
text = text.replace(/,/g, ',<break strength="weak"/>');

// Add emotional indicators
text = text.replace(/happy|glad|excited|thrilled/gi, match => `<happy>${match}</happy>`);
text = text.replace(/sad|upset|unfortunate/gi, match => `<sad>${match}</sad>`);
text = text.replace(/angry|furious|outraged/gi, match => `<angry>${match}</angry>`);

return text;
}

// Process text through TTS with emotional markup
async function expressiveTTS(text, voice = "af_nicole") {
const enhancedText = enhanceTextWithEmotions(text);

// In a real implementation, you'd use an SSML-compatible TTS service
// For RunAnythingAI, we'll use speed variations to approximate emotional speech

// Process each emotional segment with appropriate speed
const segments = enhancedText.split(/(<\w+>.*?<\/\w+>|<break.*?\/>)/g);
const audioBuffers = [];

for (const segment of segments) {
if (!segment.trim()) continue;

let segmentText = segment;
let speed = 1.0;

if (segment.startsWith('<happy>')) {
segmentText = segment.replace(/<\/?happy>/g, '');
speed = 1.15; // Slightly faster for happy emotions
} else if (segment.startsWith('<sad>')) {
segmentText = segment.replace(/<\/?sad>/g, '');
speed = 0.85; // Slower for sad emotions
} else if (segment.startsWith('<angry>')) {
segmentText = segment.replace(/<\/?angry>/g, '');
speed = 1.1; // Slightly faster for angry emotions
} else if (segment.startsWith('<emphasis>')) {
segmentText = segment.replace(/<\/?emphasis>/g, '');
// Keep normal speed but would emphasize in real SSML
} else if (segment.startsWith('<break')) {
const strength = segment.includes('medium') ? 500 :
segment.includes('weak') ? 250 : 100;
// Pause between segment requests; a real implementation would append a
// silent audio buffer of this length rather than simply waiting
await new Promise(resolve => setTimeout(resolve, strength));
continue;
}

// Generate TTS for this segment
const response = await fetch('https://api.runanythingai.com/api/audio/full', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${API_KEY}`
},
body: JSON.stringify({
"text": segmentText,
"voice": voice,
"speed": speed
})
});

const buffer = await response.arrayBuffer();
audioBuffers.push(buffer);
}

// In a real implementation, concatenate audio buffers
// For simplicity, we'll return the last one
return audioBuffers[audioBuffers.length - 1];
}
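
A quick usage sketch is shown below. The audio endpoint's exact response format isn't assumed here, so the output filename and extension are placeholders; adjust them to whatever the service actually returns.

const fs = require('fs/promises');

async function demoExpressiveTTS() {
  const audio = await expressiveTTS("I'm so happy you're here! Let's get started.");

  // Write the returned buffer to disk; the extension is a placeholder
  await fs.writeFile('expressive-output.bin', Buffer.from(audio));
}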

Enterprise Integration Patterns

Patterns for integrating RunAnythingAI into enterprise systems with high reliability and scalability requirements.

Enterprise-grade Caching Strategy

Implement a sophisticated caching system for high-volume deployments:

const Redis = require('redis');
const crypto = require('crypto');

class EnterpriseCache {
constructor(config = {}) {
this.client = Redis.createClient({ url: config.redisUrl || 'redis://localhost:6379' });
this.ttl = config.ttl || 3600; // Default 1 hour cache
this.prefix = config.prefix || 'runanything:cache:';

// Connect to Redis
this.client.connect().catch(err => {
console.error('Redis connection error:', err);
});
}

// Generate deterministic cache key from request params
generateKey(endpoint, params) {
const normalizedParams = this.normalizeParams(params);
const hash = crypto
.createHash('sha256')
.update(`${endpoint}:${JSON.stringify(normalizedParams)}`)
.digest('hex');

return `${this.prefix}${hash}`;
}

// Normalize parameters to ensure consistent keys
normalizeParams(params) {
// For chat, only include the last few messages to avoid excessive cache misses
if (params.messages && params.messages.length > 3) {
return {
...params,
messages: params.messages.slice(-3)
};
}
return params;
}

// Try to get from cache, otherwise execute function and cache result
async getOrExecute(endpoint, params, executeFn) {
const cacheKey = this.generateKey(endpoint, params);

try {
// Try to get from cache
const cachedValue = await this.client.get(cacheKey);

if (cachedValue) {
console.log('Cache hit:', cacheKey);
return JSON.parse(cachedValue);
}

console.log('Cache miss:', cacheKey);

// Execute the function
const result = await executeFn();

// Cache the result
await this.client.setEx(
cacheKey,
this.ttl,
JSON.stringify(result)
);

return result;
} catch (error) {
console.error('Cache error:', error);
// If cache fails, execute anyway
return executeFn();
}
}

// Invalidate specific cache entries
async invalidate(endpoint, params) {
const cacheKey = this.generateKey(endpoint, params);
await this.client.del(cacheKey);
}

// Close connection when done
async close() {
await this.client.quit();
}
}

// Example usage
const cache = new EnterpriseCache();

async function generateCachedResponse(prompt) {
const endpoint = 'api/text/Mage';
const params = {
messages: [{ role: 'You', content: prompt, index: 0 }],
persona: "Expert assistant persona",
botName: "Assistant"
};

return cache.getOrExecute(endpoint, params, async () => {
// This function only executes on cache miss
console.log('Executing API call for:', prompt);

// Actual API call implementation
const response = await fetch(`https://api.runanythingai.com/${endpoint}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${API_KEY}`
},
body: JSON.stringify(params)
});

const { id } = await response.json();
return pollForCompletion(id);
});
}
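
Cached responses become stale when the persona or prompt template changes. The sketch below shows targeted invalidation using the class above; note that the parameters must match the ones used when the entry was cached, since they feed into the cache key.

// Invalidate the cached entry for a specific prompt (e.g. after updating
// the persona), passing the same parameters that produced the cache key
async function invalidatePromptCache(prompt) {
  await cache.invalidate('api/text/Mage', {
    messages: [{ role: 'You', content: prompt, index: 0 }],
    persona: "Expert assistant persona",
    botName: "Assistant"
  });
}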

Advanced Error Handling and Resilience

Implement sophisticated error handling and recovery strategies for mission-critical applications.

Tiered Fallback System

Create a robust handling system with multiple fallback options:

class ResilientCharacterSystem {
constructor() {
this.primaryEndpoint = 'Witch';
this.fallbackEndpoints = ['Mage', 'Lightning', 'default'];
this.cachedResponses = new Map();
}

// Generate character response with fallbacks
async generateResponse(message, persona, botName) {
let errors = [];

// Try primary endpoint
try {
const response = await this.tryEndpoint(
this.primaryEndpoint, message, persona, botName
);
return response;
} catch (error) {
console.error(`Primary endpoint (${this.primaryEndpoint}) failed:`, error);
errors.push({ endpoint: this.primaryEndpoint, error });
}

// Try fallbacks in sequence
for (const endpoint of this.fallbackEndpoints) {
try {
console.log(`Trying fallback endpoint: ${endpoint}`);
const response = await this.tryEndpoint(endpoint, message, persona, botName);
return response;
} catch (error) {
console.error(`Fallback endpoint (${endpoint}) failed:`, error);
errors.push({ endpoint, error });
}
}

// All endpoints failed, try to use cached similar response
const cachedResponse = this.findSimilarCachedResponse(message);
if (cachedResponse) {
return {
text: cachedResponse,
source: 'cache',
fallback: true
};
}

// Last resort: Return a generic response with error details
return {
text: "I apologize, but I'm unable to respond right now. Please try again later.",
errors,
fallback: true,
source: 'generic'
};
}

// Try a specific endpoint
async tryEndpoint(endpoint, message, persona, botName) {
// Make API call to this endpoint
const response = await fetch(`https://api.runanythingai.com/api/text/${endpoint}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${API_KEY}`
},
body: JSON.stringify({
"messages": [{ role: 'You', content: message, index: 0 }],
"persona": persona,
"botName": botName,
"samplingParams": { "max_tokens": 150 }
})
});

if (!response.ok) {
throw new Error(`API error: ${response.status}`);
}

const { id } = await response.json();
const text = await pollForCompletion(id);

// Cache this successful response
this.cacheResponse(message, text);

return {
text,
source: endpoint,
fallback: endpoint !== this.primaryEndpoint
};
}

// Cache response for potential fallbacks
cacheResponse(message, response) {
// Simplified: in production use semantic similarity
const key = message.toLowerCase().slice(0, 50);
this.cachedResponses.set(key, response);

// Limit cache size
if (this.cachedResponses.size > 1000) {
// Remove oldest entry (first key)
const firstKey = this.cachedResponses.keys().next().value;
this.cachedResponses.delete(firstKey);
}
}

// Find similar cached response
findSimilarCachedResponse(message) {
// Simplified: in production use embeddings for semantic search
const searchKey = message.toLowerCase().slice(0, 50);

for (const [key, value] of this.cachedResponses.entries()) {
// Check for simple substring match (would use semantic similarity in production)
if (key.includes(searchKey) || searchKey.includes(key)) {
return value;
}
}

return null;
}
}
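
A minimal usage sketch of the resilient system; the persona and bot name here are placeholders, and pollForCompletion is assumed to be the polling helper used in the earlier examples.

const resilient = new ResilientCharacterSystem();

async function handleUserMessage(message) {
  const result = await resilient.generateResponse(
    message,
    "Nova is a helpful, upbeat assistant.",
    "Nova"
  );

  // Surface degraded responses so they can be monitored
  if (result.fallback) {
    console.warn(`Response served from fallback source: ${result.source}`);
  }

  return result.text;
}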

Next Steps

Now that you've mastered these advanced techniques, explore these resources for further development: