Public beta
Rate Limits
The Zippex API enforces rate limits to ensure fair usage and platform stability. All rate limits apply per API key.
Default limits
100 requests per minute
Per API key, across all endpoints.
This limit applies equally to test and live keys. If you need higher limits, contact api@zippex.com.
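To stay under the limit client-side, a simple rolling-window throttle is often enough. The sketch below is illustrative (the 100-per-minute figure is the documented default; the `RateLimiter` class itself is not part of the Zippex SDK):

```javascript
// Minimal client-side throttle: never send more than `limit` requests
// in any rolling window of `windowMs` milliseconds.
class RateLimiter {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // send times still inside the current window
  }

  // Milliseconds to wait before the next request may be sent (0 = go now).
  delayMs(now = Date.now()) {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length < this.limit) return 0;
    // Oldest in-window request determines when a slot frees up.
    return this.timestamps[0] + this.windowMs - now;
  }

  record(now = Date.now()) {
    this.timestamps.push(now);
  }
}
```

Before each call, check `limiter.delayMs()`, sleep if it is positive, then `limiter.record()` and send the request.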
Rate limit headers
Every API response includes headers that tell you your current rate limit status:
| Header | Description | Example |
|---|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per minute. | 100 |
| `X-RateLimit-Remaining` | Requests remaining in the current window. | 87 |
| `X-RateLimit-Reset` | Unix timestamp when the rate limit window resets. | 1710079560 |
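With the Fetch API, these headers can be read from any response via `Headers.get`. A small sketch (the header names are the documented ones; the helper function is illustrative):

```javascript
// Extract rate limit status from a Fetch API Response (or any object
// exposing headers.get(name)).
function readRateLimit(response) {
  const get = name => response.headers.get(name);
  return {
    limit: Number(get('X-RateLimit-Limit')),          // e.g. 100
    remaining: Number(get('X-RateLimit-Remaining')),  // e.g. 87
    // Header is a Unix timestamp in seconds; Date wants milliseconds.
    resetAt: new Date(Number(get('X-RateLimit-Reset')) * 1000),
  };
}
```

Applied to the example response below, this yields `limit: 100`, `remaining: 87`, and a `resetAt` of the window's end.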
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1710079560
Content-Type: application/json
```

Handling 429 responses
When you exceed the rate limit, the API returns a 429 Too Many Requests response:
```http
HTTP/1.1 429 Too Many Requests

{
  "error": {
    "type": "rate_limit_error",
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 1710079560.",
    "param": null
  }
}
```

Use the `X-RateLimit-Reset` header to determine when to retry. Do not retry immediately.
```javascript
async function zippexRequest(path, options = {}, retries = 3) {
  for (let attempt = 0; attempt < retries; attempt++) {
    const response = await fetch(`https://api.zippex.com${path}`, {
      ...options,
      headers: {
        'Authorization': `Bearer ${process.env.ZIPPEX_API_KEY}`,
        'Content-Type': 'application/json',
        ...options.headers,
      },
    });

    if (response.status === 429) {
      const resetTimestamp = parseInt(
        response.headers.get('X-RateLimit-Reset') || '0',
        10
      );
      const waitMs = Math.max(
        (resetTimestamp * 1000) - Date.now(),
        1000 * Math.pow(2, attempt) // Fallback: exponential backoff
      );
      console.log(`Rate limited. Waiting ${Math.ceil(waitMs / 1000)}s before retry...`);
      await new Promise(resolve => setTimeout(resolve, waitMs));
      continue;
    }

    return response;
  }
  throw new Error('Max retries exceeded');
}
```

Best practices
- Monitor the headers. Check `X-RateLimit-Remaining` in your responses and slow down proactively when it gets low.
- Use webhooks instead of polling. Subscribe to webhook events for delivery status changes rather than polling `GET /v1/deliveries/{deliveryId}` repeatedly.
- Batch where possible. Use the list endpoint with filters instead of fetching deliveries one by one.
- Implement exponential backoff. On 429 or 5xx errors, wait progressively longer between retries (1s, 2s, 4s, etc.).
- Cache responses. Quote and delivery data does not change frequently. Cache responses for a few seconds to reduce redundant calls.
- Use a single API key per environment. Do not spread requests across multiple keys to circumvent limits; this violates the terms of service.
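The first best practice can be sketched concretely: read `X-RateLimit-Remaining` on each response and pause before the next call when the window is nearly exhausted. The 10-request threshold below is an arbitrary illustrative choice, not a documented value:

```javascript
// Proactive throttle: returns how long to wait before the next request,
// based on the documented rate limit headers. Returns 0 when plenty of
// budget remains or the headers are missing.
function throttleDelayMs(response, threshold = 10) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') ?? '', 10);
  const reset = parseInt(response.headers.get('X-RateLimit-Reset') ?? '', 10);
  if (Number.isNaN(remaining) || Number.isNaN(reset)) return 0;
  if (remaining > threshold) return 0;
  // Spread the remaining budget evenly over the time left in the window.
  const msLeft = Math.max(reset * 1000 - Date.now(), 0);
  return Math.ceil(msLeft / Math.max(remaining, 1));
}
```

Calling this after each response and sleeping for the returned duration keeps traffic smooth instead of bursting into a 429.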
Related guides
Pair this page with Tracking for polling cadence and Webhooks for lower-volume event-driven updates.