Rate Limits

Understanding API rate limits and how to handle them in your applications.

Overview

Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per API key and vary by plan.

Plan            | Requests/Second | Emails/Month   | Burst Limit
Free            | 10 req/s        | 5,000 emails   | 20 requests
Pro ($20/mo)    | 100 req/s       | 50,000 emails  | 200 requests
Scale ($100/mo) | 500 req/s       | 200,000 emails | 1,000 requests
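The burst column behaves like a token bucket: you can spend up to the burst limit at once, and tokens refill at your plan's sustained rate. This is a conceptual sketch of that behavior, not the provider's actual implementation:

```typescript
// Sketch: burst limits modeled as a token bucket. The bucket holds up
// to `burst` tokens, refills at `ratePerSec`, and each request spends
// one token. Illustration only; the real limiter may differ.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private ratePerSec: number,
    private burst: number,
    private now: () => number = Date.now // injectable clock for testing
  ) {
    this.tokens = burst;
    this.lastRefill = this.now();
  }

  tryAcquire(): boolean {
    const current = this.now();
    const elapsedSec = (current - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at the burst size.
    this.tokens = Math.min(this.burst, this.tokens + elapsedSec * this.ratePerSec);
    this.lastRefill = current;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Free plan: 10 req/s sustained, bursts of up to 20 requests.
let t = 0;
const bucket = new TokenBucket(10, 20, () => t);
let accepted = 0;
for (let i = 0; i < 30; i++) if (bucket.tryAcquire()) accepted++;
console.log(accepted); // 20: the burst is absorbed, the rest are rejected
```

This is why 30 back-to-back requests on the Free plan fail after the first 20, even though the sustained rate would allow all of them over three seconds.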

Rate Limit Headers

Every API response includes headers to help you track your rate limit status:

Header                | Description
X-RateLimit-Limit     | Maximum requests allowed per window
X-RateLimit-Remaining | Requests remaining in current window
X-RateLimit-Reset     | Unix timestamp when the window resets
Retry-After           | Seconds to wait (only on 429 responses)

Example Response Headers

response-headers.txt
HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 150
X-RateLimit-Reset: 1705320000
Content-Type: application/json
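In client code, these headers can be parsed into a small status object so you can slow down before hitting the limit. A minimal sketch (the header names come from the table above; the 10% threshold is an arbitrary choice):

```typescript
// Sketch: parse rate-limit headers into a typed status object and
// decide whether the client should throttle itself.
interface RateLimitStatus {
  limit: number;
  remaining: number;
  resetAt: Date; // derived from the Unix timestamp header
}

function parseRateLimitHeaders(headers: Record<string, string>): RateLimitStatus {
  return {
    limit: parseInt(headers['X-RateLimit-Limit'] ?? '0', 10),
    remaining: parseInt(headers['X-RateLimit-Remaining'] ?? '0', 10),
    resetAt: new Date(parseInt(headers['X-RateLimit-Reset'] ?? '0', 10) * 1000),
  };
}

// Throttle once fewer than 10% of requests remain in the window
// (an arbitrary safety margin, not an API requirement).
function shouldThrottle(status: RateLimitStatus): boolean {
  return status.remaining < status.limit * 0.1;
}

// Using the example response headers shown above:
const status = parseRateLimitHeaders({
  'X-RateLimit-Limit': '200',
  'X-RateLimit-Remaining': '150',
  'X-RateLimit-Reset': '1705320000',
});
console.log(shouldThrottle(status)); // false: 150 of 200 remain
```

With `fetch`, the same values come from `response.headers.get(...)`; note that the `Headers` interface is case-insensitive, so lowercase names also work there.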

Check Current Usage

You can check your current rate limit status with any API request, or use the dedicated endpoint:

terminal
curl -X GET "https://www.unosend.co/api/v1/usage" \
  -H "Authorization: Bearer un_xxxxxxxx" \
  -i

Response

response.json
{
  "plan": "pro",
  "rate_limit": {
    "requests_per_second": 100,
    "burst_limit": 200
  },
  "usage": {
    "emails_sent": 12450,
    "emails_limit": 50000,
    "period_start": "2024-01-01T00:00:00Z",
    "period_end": "2024-01-31T23:59:59Z"
  }
}
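The usage object in this response also lets you warn before hitting the monthly email cap. A small sketch using the fields shown above:

```typescript
// Sketch: compute how much of the monthly email quota is used from
// the /v1/usage response, so you can alert before hitting the cap.
interface UsageResponse {
  plan: string;
  rate_limit: { requests_per_second: number; burst_limit: number };
  usage: {
    emails_sent: number;
    emails_limit: number;
    period_start: string;
    period_end: string;
  };
}

function quotaUsedPercent(res: UsageResponse): number {
  return (res.usage.emails_sent / res.usage.emails_limit) * 100;
}

// Values from the example response above:
const example: UsageResponse = {
  plan: 'pro',
  rate_limit: { requests_per_second: 100, burst_limit: 200 },
  usage: {
    emails_sent: 12450,
    emails_limit: 50000,
    period_start: '2024-01-01T00:00:00Z',
    period_end: '2024-01-31T23:59:59Z',
  },
};

console.log(quotaUsedPercent(example).toFixed(1)); // "24.9"
```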

Handling Rate Limits

When you exceed the rate limit, the API returns a 429 Too Many Requests response:

rate-limit-response.json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "retry_after": 60
  }
}

Implementing Retry Logic

Here's how to handle rate limits with automatic retry in different languages:

cURL with Retry

terminal
# Use --retry flag for automatic retries
curl -X POST "https://www.unosend.co/api/v1/emails" \
  -H "Authorization: Bearer un_xxxxxxxx" \
  -H "Content-Type: application/json" \
  --retry 3 \
  --retry-delay 5 \
  -d '{"from": "hello@yourdomain.com", "to": "user@example.com", "subject": "Hello", "html": "<p>Hi!</p>"}'

JavaScript/TypeScript

retry-logic.ts
interface EmailPayload {
  from: string;
  to: string;
  subject: string;
  html: string;
}

const apiKey = process.env.UNOSEND_API_KEY ?? ''; // your un_... key

async function sendEmailWithRetry(
  payload: EmailPayload,
  maxRetries: number = 3
): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://www.unosend.co/api/v1/emails', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });
    
    if (response.status === 429) {
      const retryAfter = parseInt(
        response.headers.get('Retry-After') || '60',
        10
      );
      console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
      await sleep(retryAfter * 1000);
      continue;
    }
    
    return response;
  }
  
  throw new Error('Max retries exceeded');
}

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}

Python

retry_logic.py
import requests
import time

def send_email_with_retry(payload, max_retries=3):
    api_key = "un_xxxxxxxx"
    
    for attempt in range(max_retries):
        response = requests.post(
            "https://www.unosend.co/api/v1/emails",
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json"
            },
            json=payload
        )
        
        if response.status_code == 429:
            retry_after = int(response.headers.get("Retry-After", 60))
            print(f"Rate limited. Waiting {retry_after} seconds...")
            time.sleep(retry_after)
            continue
        
        return response
    
    raise Exception("Max retries exceeded")

Exponential Backoff

For more robust retry logic, use exponential backoff with jitter so that many clients retrying at once don't create a thundering herd of synchronized requests:

exponential-backoff.ts
async function sendWithExponentialBackoff(
  payload: EmailPayload,
  maxRetries: number = 5
): Promise<Response> {
  const baseDelay = 1000; // 1 second
  
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('https://www.unosend.co/api/v1/emails', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
      });
      
      if (response.status === 429) {
        // Calculate delay with exponential backoff + jitter
        const delay = baseDelay * Math.pow(2, attempt);
        const jitter = Math.random() * 1000;
        const waitTime = delay + jitter;
        
        console.log(`Rate limited. Waiting ${waitTime}ms (attempt ${attempt + 1})`);
        await sleep(waitTime);
        continue;
      }
      
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }
      
      return response;
      
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      
      const delay = baseDelay * Math.pow(2, attempt);
      await sleep(delay);
    }
  }
  
  throw new Error('Max retries exceeded');
}

Queue Pattern for Bulk Sending

For sending many emails, use a queue with rate limiting to stay within limits:

rate-limited-queue.ts
class RateLimitedQueue {
  private queue: EmailPayload[] = [];
  private processing = false;
  private requestsPerSecond: number;
  
  constructor(requestsPerSecond: number = 10) {
    this.requestsPerSecond = requestsPerSecond;
  }
  
  async add(payload: EmailPayload): Promise<void> {
    this.queue.push(payload);
    this.process();
  }
  
  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;
    
    const interval = 1000 / this.requestsPerSecond;
    
    while (this.queue.length > 0) {
      const payload = this.queue.shift()!;
      
      try {
        await sendEmailWithRetry(payload);
      } catch (error) {
        console.error('Failed to send email:', error);
      }
      
      await sleep(interval);
    }
    
    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(10); // 10 req/sec

for (const recipient of recipients) {
  queue.add({
    from: 'hello@yourdomain.com',
    to: recipient.email,
    subject: 'Hello!',
    html: '<p>Your email content</p>'
  });
}

Tip: For bulk sending, consider using the /v1/emails/batch endpoint which allows up to 100 emails per request, significantly reducing API calls.
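To stay under that 100-email cap, split recipients into chunks before calling the batch endpoint. A sketch of the chunking step (the 100-per-request limit comes from the tip above; the exact batch request body shape is documented in the batch endpoint reference, not here):

```typescript
// Sketch: split a recipient list into batches of at most 100 emails,
// the per-request cap for /v1/emails/batch mentioned above.
const BATCH_LIMIT = 100;

function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const recipients = Array.from({ length: 250 }, (_, i) => `user${i}@example.com`);
const batches = chunk(recipients, BATCH_LIMIT);
console.log(batches.length); // 3 batches: 100 + 100 + 50
```

Each batch then becomes one API call instead of up to 100, so 250 recipients cost 3 requests rather than 250.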

Best Practices

1. Monitor rate limit headers
Check X-RateLimit-Remaining and slow down before hitting limits.

2. Use batch endpoints
Send multiple emails in one request using POST /v1/emails/batch to reduce API calls.

3. Implement queuing
Queue emails during high-traffic periods and process them at a controlled rate.

4. Use webhooks instead of polling
Instead of polling for delivery status, use webhooks to receive updates asynchronously.

5. Cache responses when possible
Cache responses from endpoints like GET /v1/domains to reduce unnecessary requests.
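The caching practice above can be as simple as an in-memory TTL cache in front of slow-changing endpoints like GET /v1/domains. A minimal sketch (the 5-minute TTL is an arbitrary choice, and the injectable clock exists only to make the cache testable):

```typescript
// Sketch: a minimal in-memory cache with per-entry time-to-live,
// suitable for responses that rarely change between requests.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now // injectable clock for testing
  ) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= this.now()) {
      this.entries.delete(key); // drop expired entries lazily
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}

const cache = new TtlCache<string[]>(5 * 60 * 1000); // 5-minute TTL
// On a hit, skip the API call entirely; on a miss, fetch and store:
//   let domains = cache.get('domains');
//   if (!domains) { domains = await fetchDomains(); cache.set('domains', domains); }
```

Every cache hit is one fewer request counted against your rate limit, which matters most on the Free plan's 10 req/s budget.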

Need Higher Limits?

If you need higher rate limits for your use case, upgrade your plan or contact us for Enterprise options with custom limits.