
Fix High Memory Usage in Chatbots

Diagnose and resolve high memory consumption in your AI chatbot. Learn about memory leaks, optimization techniques, and monitoring.

Published: 27/01/2025

Identifying Memory Issues

Check Current Usage

# PM2 memory stats
pm2 monit

# Detailed process info
pm2 show your-bot

# System-wide memory
free -h
htop

Signs of Memory Problems

  • Memory climbs continuously over time
  • Bot becomes slower before crashing
  • "JavaScript heap out of memory" errors
  • Server becomes unresponsive
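A simple way to confirm the first symptom is to sample heap usage at a fixed interval and log the growth between samples; a delta that rarely goes negative over many samples suggests a leak. This is a minimal sketch using only Node's built-in `process.memoryUsage()`:

```javascript
// Sketch: log heap growth between samples to spot a steady climb.
let lastHeapUsed = process.memoryUsage().heapUsed;

function sampleHeapGrowth() {
  const { heapUsed } = process.memoryUsage();
  const deltaMb = (heapUsed - lastHeapUsed) / 1024 / 1024;
  lastHeapUsed = heapUsed;
  return deltaMb; // MB grown (or freed, if negative) since last sample
}

// Log every minute; unref() so the timer doesn't keep the process alive
setInterval(() => {
  console.log(`heap delta: ${sampleHeapGrowth().toFixed(2)}MB`);
}, 60000).unref();
```

Occasional negative deltas are normal (garbage collection); a trend that only goes up is what you are looking for.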

Common Causes

1. Memory Leaks in Code

Symptoms: Memory grows indefinitely, never decreases

Common leak sources:

// BAD: Growing array that's never cleared
const messageHistory = [];
client.on('message', (msg) => {
  messageHistory.push(msg); // Grows forever!
});

// GOOD: Limited array with cleanup
const messageHistory = [];
const MAX_HISTORY = 100;

client.on('message', (msg) => {
  messageHistory.push(msg);
  if (messageHistory.length > MAX_HISTORY) {
    messageHistory.shift(); // Remove oldest
  }
});

2. Conversation Context Accumulation

Problem: Storing entire conversation history without limits

// BAD: Unlimited conversation storage
const conversations = new Map();

function addMessage(userId, message) {
  if (!conversations.has(userId)) {
    conversations.set(userId, []);
  }
  conversations.get(userId).push(message); // Never cleared!
}

// GOOD: Limited context with auto-cleanup
const conversations = new Map();
const MAX_MESSAGES = 20;
const CONTEXT_TIMEOUT = 30 * 60 * 1000; // 30 minutes

function addMessage(userId, message) {
  const now = Date.now();

  if (!conversations.has(userId)) {
    conversations.set(userId, { messages: [], lastActive: now });
  }

  const context = conversations.get(userId);
  context.messages.push(message);
  context.lastActive = now;

  // Limit messages
  if (context.messages.length > MAX_MESSAGES) {
    context.messages = context.messages.slice(-MAX_MESSAGES);
  }
}

// Cleanup old conversations periodically
setInterval(() => {
  const now = Date.now();
  for (const [userId, context] of conversations) {
    if (now - context.lastActive > CONTEXT_TIMEOUT) {
      conversations.delete(userId);
    }
  }
}, 5 * 60 * 1000); // Every 5 minutes

3. Event Listener Buildup

Problem: Adding listeners without removing them

// BAD: Listeners pile up
function handleMessage(channel) {
  channel.on('message', (msg) => {
    // Handle message
  });
}

// GOOD: Remove listeners when done
function handleMessage(channel) {
  const handler = (msg) => {
    // Handle message
  };

  channel.on('message', handler);

  // Remove after timeout or condition
  setTimeout(() => {
    channel.off('message', handler);
  }, 60000);
}

4. Large Response Caching

Problem: Caching API responses without expiration

// BAD: Cache grows forever
const responseCache = new Map();

async function getCachedResponse(key) {
  if (responseCache.has(key)) {
    return responseCache.get(key);
  }
  const response = await fetchResponse(key);
  responseCache.set(key, response);
  return response;
}

// GOOD: Cache with TTL
const responseCache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getCachedResponse(key) {
  const cached = responseCache.get(key);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const response = await fetchResponse(key);
  responseCache.set(key, { data: response, timestamp: Date.now() });

  // Cleanup old entries
  if (responseCache.size > 1000) {
    const oldest = [...responseCache.entries()]
      .sort((a, b) => a[1].timestamp - b[1].timestamp)
      .slice(0, 100);
    oldest.forEach(([k]) => responseCache.delete(k));
  }

  return response;
}

Quick Fixes

Increase Node.js Memory Limit

# Stop current process
pm2 delete your-bot

# Start with higher limit
pm2 start index.js --name your-bot --node-args="--max-old-space-size=2048"
pm2 save

Memory limits by available RAM:

| Server RAM | Recommended Limit |
|------------|-------------------|
| 2GB        | 1024MB            |
| 4GB        | 2048MB            |
| 8GB        | 4096MB            |

Enable Auto-Restart on Memory Threshold

pm2 start index.js --name your-bot --max-memory-restart 500M

PM2 restarts the bot whenever it exceeds 500MB, which prevents out-of-memory crashes but does not fix the underlying leak.
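The same flags can live in a PM2 ecosystem file so they survive redeploys. This is a sketch with hypothetical values; adjust the name, script path, and thresholds to your setup:

```javascript
// ecosystem.config.js -- hypothetical values; adjust to your bot
module.exports = {
  apps: [{
    name: 'your-bot',
    script: 'index.js',
    node_args: '--max-old-space-size=2048',
    max_memory_restart: '500M',   // restart when memory exceeds 500MB
    cron_restart: '0 */6 * * *',  // optional scheduled restart
  }],
};
```

Then start with `pm2 start ecosystem.config.js` instead of passing flags on the command line.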

Schedule Regular Restarts

For temporary relief while fixing leaks:

# Restart every 6 hours
pm2 start index.js --name your-bot --cron-restart="0 */6 * * *"

Finding Memory Leaks

Use Node.js Heap Snapshots

// Add to your bot
const v8 = require('v8');

// Trigger with a command or an HTTP endpoint
function takeHeapSnapshot() {
  const filename = `heap-${Date.now()}.heapsnapshot`;
  // v8.writeHeapSnapshot returns the path of the file it wrote
  const snapshotPath = v8.writeHeapSnapshot(filename);
  console.log(`Heap snapshot written to ${snapshotPath}`);
}

Analyze snapshots in Chrome DevTools:

  1. Open Chrome DevTools
  2. Go to Memory tab
  3. Load snapshot file
  4. Look for growing objects

Memory Logging

// Log memory usage periodically
setInterval(() => {
  const usage = process.memoryUsage();
  console.log({
    heapUsed: `${Math.round(usage.heapUsed / 1024 / 1024)}MB`,
    heapTotal: `${Math.round(usage.heapTotal / 1024 / 1024)}MB`,
    rss: `${Math.round(usage.rss / 1024 / 1024)}MB`,
  });
}, 60000);

PM2 Memory Monitoring

# Real-time monitoring
pm2 monit

# Export metrics
pm2 report

Optimization Tips

1. Use Streaming for Large Data

// BAD: Load entire response into memory
const response = await fetch(url);
const data = await response.json();

// GOOD: Stream large responses to disk instead
const { pipeline } = require('stream/promises');
const { Readable } = require('stream');
const fs = require('fs');

const response = await fetch(url);
// fetch's body is a web stream; convert it for Node's pipeline
await pipeline(
  Readable.fromWeb(response.body),
  fs.createWriteStream('output.json')
);

2. Process in Batches

// BAD: Process all at once
const users = await getAllUsers(); // 10,000 users in memory
users.forEach(process);

// GOOD: Process in batches
async function* getUserBatches(batchSize = 100) {
  let offset = 0;
  while (true) {
    const batch = await getUsers(offset, batchSize);
    if (batch.length === 0) break;
    yield batch;
    offset += batchSize;
  }
}

for await (const batch of getUserBatches()) {
  batch.forEach(process);
}

3. Weak References for Caches

// Use WeakMap for caches that can be garbage collected
const userCache = new WeakMap();

function getCachedUser(userObject) {
  if (userCache.has(userObject)) {
    return userCache.get(userObject);
  }
  const data = computeUserData(userObject);
  userCache.set(userObject, data);
  return data;
}
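One caveat: WeakMap keys must be objects. Keying a WeakMap by a primitive such as a user ID string throws a TypeError, so ID-keyed caches still need a regular Map with explicit TTL cleanup like the one shown earlier. A quick sketch:

```javascript
const userCache = new WeakMap();

const userObject = { id: '123' };
userCache.set(userObject, { greeting: 'hello' }); // fine: object key

// Primitive keys are rejected -- WeakMap only accepts object keys
try {
  userCache.set('123', { greeting: 'hello' });
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```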

Monitoring Setup

Create Memory Alert Script

#!/bin/bash
# /opt/scripts/memory-check.sh

BOT_NAME="your-bot"
THRESHOLD=400  # MB
WEBHOOK="your-discord-webhook"

MEM=$(pm2 jlist | jq ".[] | select(.name==\"$BOT_NAME\") | .monit.memory")
MEM_MB=$(( ${MEM:-0} / 1024 / 1024 ))

if [ "$MEM_MB" -gt "$THRESHOLD" ]; then
  curl -H "Content-Type: application/json" \
    -d "{\"content\":\"⚠️ High memory: ${MEM_MB}MB (threshold: ${THRESHOLD}MB)\"}" \
    "$WEBHOOK"
fi

Schedule check:

crontab -e
# Add:
*/5 * * * * /opt/scripts/memory-check.sh

Need Help?

Memory issues can be complex. Our maintenance service includes performance optimization and monitoring.

Need a VPS for Your Bot?

We recommend Hostinger KVM 2 VPS - reliable, fast, and perfect for AI chatbots. Get started with our recommended setup.

Get Hostinger VPS

Need Help With Setup?

Got your VPS? Let us handle the technical work. Professional setup and maintenance for OpenClaw (formerly Clawd.bot).