Rate Limits
Understanding API rate limits and how to handle them in your applications.
Overview
Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per API key and vary by plan.
| Plan | Requests/Second | Emails/Month | Burst Limit |
|---|---|---|---|
| Free | 10 req/s | 5,000 emails | 20 requests |
| Pro ($20/mo) | 100 req/s | 50,000 emails | 200 requests |
| Scale ($100/mo) | 500 req/s | 200,000 emails | 1,000 requests |
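These per-second and burst limits behave like a token bucket: the bucket holds up to the burst limit in tokens and refills at the plan's sustained per-second rate, so short bursts above that rate are absorbed. A minimal sketch of that model for client-side throttling (the `TokenBucket` class is illustrative, not part of any SDK):

```typescript
// Minimal token-bucket model of the plan limits:
// capacity = burst limit, refillRate = sustained requests per second.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private refillRate: number,
    private capacity: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if a request may proceed now, consuming one token.
  tryConsume(now: number = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSeconds * this.refillRate
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Free plan: 10 req/s sustained, bursts of up to 20 requests.
const freePlanBucket = new TokenBucket(10, 20);
```

Throttling on the client like this keeps you under the server's limits proactively rather than reacting to 429 responses.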
Rate Limit Headers
Every API response includes headers to help you track your rate limit status:
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed per window |
| X-RateLimit-Remaining | Requests remaining in current window |
| X-RateLimit-Reset | Unix timestamp when window resets |
| Retry-After | Seconds to wait (only on a 429 response) |
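As a sketch, these headers can be pulled from a response into a typed object before deciding whether to slow down (the `RateLimitInfo` shape and function name are our own, not part of the API):

```typescript
interface RateLimitInfo {
  limit: number;              // X-RateLimit-Limit
  remaining: number;          // X-RateLimit-Remaining
  resetAt: Date;              // X-RateLimit-Reset (Unix seconds)
  retryAfterSeconds?: number; // Retry-After, present only on 429
}

// Works with any headers-like object, e.g. fetch's response.headers.
function parseRateLimitHeaders(
  headers: { get(name: string): string | null }
): RateLimitInfo {
  const retryAfter = headers.get('Retry-After');
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit') ?? '0', 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining') ?? '0', 10),
    resetAt: new Date(
      parseInt(headers.get('X-RateLimit-Reset') ?? '0', 10) * 1000
    ),
    retryAfterSeconds: retryAfter !== null ? parseInt(retryAfter, 10) : undefined,
  };
}
```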
Example Response Headers
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 200
X-RateLimit-Remaining: 150
X-RateLimit-Reset: 1705320000
Content-Type: application/json
```

Check Current Usage
You can check your current rate limit status with any API request, or use the dedicated endpoint:
```shell
curl -X GET "https://www.unosend.co/api/v1/usage" \
  -H "Authorization: Bearer un_xxxxxxxx" \
  -i
```

Response
```json
{
  "plan": "pro",
  "rate_limit": {
    "requests_per_second": 100,
    "burst_limit": 200
  },
  "usage": {
    "emails_sent": 12450,
    "emails_limit": 50000,
    "period_start": "2024-01-01T00:00:00Z",
    "period_end": "2024-01-31T23:59:59Z"
  }
}
```

Handling Rate Limits
When you exceed the rate limit, the API returns a 429 Too Many Requests response:
```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "retry_after": 60
  }
}
```

Implementing Retry Logic
Here's how to handle rate limits with automatic retry in different languages:
cURL with Retry
```shell
# Use --retry flag for automatic retries
curl -X POST "https://www.unosend.co/api/v1/emails" \
  -H "Authorization: Bearer un_xxxxxxxx" \
  -H "Content-Type: application/json" \
  --retry 3 \
  --retry-delay 5 \
  -d '{"from": "hello@yourdomain.com", "to": "user@example.com", "subject": "Hello", "html": "<p>Hi!</p>"}'
```

JavaScript/TypeScript
```typescript
// Payload shape used throughout these examples.
interface EmailPayload {
  from: string;
  to: string;
  subject: string;
  html: string;
}

const apiKey = 'un_xxxxxxxx';

async function sendEmailWithRetry(
  payload: EmailPayload,
  maxRetries: number = 3
): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch('https://www.unosend.co/api/v1/emails', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    if (response.status === 429) {
      // Honor the Retry-After header; fall back to 60 seconds.
      const retryAfter = parseInt(
        response.headers.get('Retry-After') || '60',
        10
      );
      console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
      await sleep(retryAfter * 1000);
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

Python
```python
import requests
import time

def send_email_with_retry(payload, max_retries=3):
    api_key = "un_xxxxxxxx"

    for attempt in range(max_retries):
        response = requests.post(
            "https://www.unosend.co/api/v1/emails",
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json"
            },
            json=payload
        )

        if response.status_code == 429:
            retry_after = int(response.headers.get("Retry-After", 60))
            print(f"Rate limited. Waiting {retry_after} seconds...")
            time.sleep(retry_after)
            continue

        return response

    raise Exception("Max retries exceeded")
```

Exponential Backoff
For robust retry logic, use exponential backoff with jitter to avoid the thundering herd problem:
```typescript
async function sendWithExponentialBackoff(
  payload: EmailPayload,
  maxRetries: number = 5
): Promise<Response> {
  const baseDelay = 1000; // 1 second

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch('https://www.unosend.co/api/v1/emails', {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${apiKey}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
      });

      if (response.status === 429) {
        // Calculate delay with exponential backoff + jitter
        const delay = baseDelay * Math.pow(2, attempt);
        const jitter = Math.random() * 1000;
        const waitTime = delay + jitter;
        console.log(`Rate limited. Waiting ${waitTime}ms (attempt ${attempt + 1})`);
        await sleep(waitTime);
        continue;
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }

      return response;
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      const delay = baseDelay * Math.pow(2, attempt);
      await sleep(delay);
    }
  }

  throw new Error('Max retries exceeded');
}
```

Queue Pattern for Bulk Sending
For sending many emails, use a queue with rate limiting to stay within limits:
```typescript
class RateLimitedQueue {
  private queue: EmailPayload[] = [];
  private processing = false;
  private requestsPerSecond: number;

  constructor(requestsPerSecond: number = 10) {
    this.requestsPerSecond = requestsPerSecond;
  }

  async add(payload: EmailPayload): Promise<void> {
    this.queue.push(payload);
    this.process();
  }

  private async process(): Promise<void> {
    if (this.processing) return;
    this.processing = true;

    const interval = 1000 / this.requestsPerSecond;

    while (this.queue.length > 0) {
      const payload = this.queue.shift()!;
      try {
        await sendEmailWithRetry(payload);
      } catch (error) {
        console.error('Failed to send email:', error);
      }
      await sleep(interval);
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(10); // 10 req/sec
for (const recipient of recipients) {
  queue.add({
    from: 'hello@yourdomain.com',
    to: recipient.email,
    subject: 'Hello!',
    html: '<p>Your email content</p>'
  });
}
```

Tip: For bulk sending, consider using the /v1/emails/batch endpoint, which allows up to 100 emails per request, significantly reducing API calls.
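To stay within that per-request cap, a small helper can split a recipient list into groups of at most 100 before calling the batch endpoint (a sketch; only the chunking logic is shown, not the request itself):

```typescript
// Split a list into chunks of at most `size` items, matching the
// batch endpoint's limit of 100 emails per request.
function chunk<T>(items: T[], size: number = 100): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// 250 recipients become three batch requests: 100 + 100 + 50.
const batches = chunk(
  Array.from({ length: 250 }, (_, i) => `user${i}@example.com`)
);
```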
Best Practices
- **Monitor rate limit headers.** Check X-RateLimit-Remaining and slow down before you hit the limit.
- **Use batch endpoints.** Send multiple emails in one request with POST /v1/emails/batch to reduce API calls.
- **Implement queuing.** Queue emails during high-traffic periods and process them at a controlled rate.
- **Use webhooks instead of polling.** Receive delivery updates asynchronously via webhooks rather than polling for status.
- **Cache responses when possible.** Cache responses from endpoints like GET /v1/domains to avoid unnecessary requests.
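The caching advice above can be as simple as an in-memory map with a time-to-live, e.g. for GET /v1/domains responses (an illustrative sketch, not a production cache):

```typescript
// Cache values for `ttlMs` milliseconds, keyed by string (e.g. request path).
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= now) {
      this.entries.delete(key); // drop expired entries lazily
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
}

// Cache domain listings for five minutes before re-fetching.
const domainCache = new TtlCache<unknown>(5 * 60 * 1000);
```

On a cache miss, fetch the endpoint once and store the parsed body; subsequent reads within the TTL never touch the API.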
Need Higher Limits?
If you need higher rate limits for your use case, upgrade your plan or contact us for Enterprise options with custom limits.