Rate Limits

Zenoo enforces rate limits per project to protect service stability. Limits apply independently to each API endpoint.

Default limits

| Endpoint | Rate Limit | Description |
| --- | --- | --- |
| POST .../api (sync) | 100 requests/minute | Company and Person sync verification |
| POST .../init (async) | 100 requests/minute | Async journey initiation |
| GET .../sharable-payload/{token} (pull) | 300 requests/minute | Result polling |
| POST .../api (screening) | 200 requests/minute | Standalone AML screening |

Limits are measured on a rolling 60-second window. Each project has its own independent counters.
Staging limits are lower: 50 requests/minute across all endpoints. Do not use staging for load testing.
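
Because limits are tracked per endpoint, it helps to keep the per-endpoint numbers in one place in your client code. A minimal sketch, assuming the default production limits above (the file name and constant are illustrative, not part of any Zenoo SDK):
rate-limits.js
// Default production limits, per endpoint (requests per minute).
// Staging is lower: 50/min across all endpoints.
const RATE_LIMITS = {
  sync: 100,      // POST .../api (sync)
  init: 100,      // POST .../init (async)
  pull: 300,      // GET .../sharable-payload/{token}
  screening: 200, // POST .../api (screening)
};

module.exports = { RATE_LIMITS };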

Rate limit response

When you exceed a limit, the API returns 429 Too Many Requests with a Retry-After header.
Response headers:
HTTP/1.1 429 Too Many Requests
Retry-After: 30
Content-Type: application/json
Response body:
{
  "error": "RATE_LIMITED",
  "message": "Rate limit exceeded. Retry after 30 seconds.",
  "request_id": "req-a1b2c3d4"
}
The Retry-After value is in seconds. Always use this value instead of guessing.
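
The response body also includes a request_id. Logging it alongside the error makes individual rate-limited requests easier to trace later; a minimal sketch (the logging call is a placeholder for your own logger):
if (response.status === 429) {
  const body = await response.json();
  // Keep the request_id with the error so the request can be referenced later.
  console.warn(`Zenoo rate limited (${body.request_id}): ${body.message}`);
}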

Best practices

Implement request queuing

Don’t rely on retry loops to handle rate limits. Use a queue to smooth out request bursts.
request-queue.js
// A simple FIFO queue that spaces requests evenly to stay under the per-minute limit.
class RequestQueue {
  constructor(maxPerMinute = 90) {
    this.queue = [];
    // Minimum gap between requests, in milliseconds (90/min => ~667 ms).
    this.interval = (60 / maxPerMinute) * 1000;
    this.processing = false;
  }

  // Schedule a request factory; resolves or rejects with that request's outcome.
  enqueue(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      if (!this.processing) this.processQueue();
    });
  }

  // Drain the queue one request at a time, pausing `interval` ms between requests.
  async processQueue() {
    this.processing = true;
    while (this.queue.length > 0) {
      const { requestFn, resolve, reject } = this.queue.shift();
      try {
        const result = await requestFn();
        resolve(result);
      } catch (error) {
        reject(error);
      }
      await new Promise((r) => setTimeout(r, this.interval));
    }
    this.processing = false;
  }
}

// Usage
const queue = new RequestQueue(90); // Stay under 100/min limit
await queue.enqueue(() => fetch(zenooUrl, options));
Set maxPerMinute to 90% of your limit to leave headroom for retries.
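
Since each endpoint has its own counter, you can also run one queue per endpoint, each sized at roughly 90% of its default limit. A sketch assuming the default limits (variable names are illustrative):
// One queue per endpoint, each at ~90% of the default production limit.
const syncQueue = new RequestQueue(90);       // POST .../api (sync): 100/min
const initQueue = new RequestQueue(90);       // POST .../init (async): 100/min
const screeningQueue = new RequestQueue(180); // POST .../api (screening): 200/min
const pullQueue = new RequestQueue(270);      // GET .../sharable-payload/{token}: 300/min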

Respect the Retry-After header

When you receive a 429, always wait for the duration specified in Retry-After. Do not guess or use a fixed delay.
if (response.status === 429) {
  const retryAfter = parseInt(response.headers.get("Retry-After") || "60");
  await new Promise((r) => setTimeout(r, retryAfter * 1000));
  // Retry the request
}
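
Building on that, a request wrapper can retry automatically while still honoring Retry-After. A minimal sketch with a bounded retry count (the function name and the cap of 3 attempts are illustrative choices):
// Retries a request on 429, waiting for the server-specified Retry-After duration.
async function fetchWithRetry(url, options, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) return response;

    // Fall back to 60 seconds if the header is missing for any reason.
    const retryAfter = parseInt(response.headers.get("Retry-After") || "60", 10);
    if (attempt < maxAttempts) {
      await new Promise((r) => setTimeout(r, retryAfter * 1000));
    }
  }
  throw new Error("Zenoo rate limit: retries exhausted");
}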

Monitor queue depth

Track the size of your request queue over time. A growing queue indicates you’re approaching your rate limit and may need a higher allocation.
setInterval(() => {
  metrics.gauge("zenoo.queue_depth", queue.queue.length);
}, 10000);
If queue depth consistently exceeds 50 pending requests, contact Zenoo about increasing your limits.
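
Building on the snippet above, the same interval can raise a warning when the backlog crosses that threshold. A sketch (the metrics client and warning sink are placeholders for your own monitoring stack):
const QUEUE_DEPTH_THRESHOLD = 50;

setInterval(() => {
  const depth = queue.queue.length;
  metrics.gauge("zenoo.queue_depth", depth);
  if (depth > QUEUE_DEPTH_THRESHOLD) {
    // A sustained backlog above the threshold suggests you need a higher limit.
    console.warn(`Zenoo queue depth is ${depth}; consider requesting a higher rate limit.`);
  }
}, 10000);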

Use sync mode for batch operations

If you need to verify many entities at once, use sync mode with X-SYNC-TIMEOUT instead of initiating many async journeys. Sync requests consume fewer round trips than the init + poll pattern.
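
A minimal sketch of a sync call; the URL variable, auth headers, payload, and timeout value are placeholders that depend on your project configuration. The point is that one request replaces an init + poll sequence:
// A single sync request returns the verification result directly.
const response = await fetch(zenooSyncUrl, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-SYNC-TIMEOUT": "30", // how long to wait for the result (placeholder value)
    ...authHeaders,         // your project's authentication headers
  },
  body: JSON.stringify(entityPayload), // Company or Person payload for your project
});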

Separate polling from submission

The pull endpoint (/sharable-payload/{token}) has a higher rate limit (300/min) than the submission endpoints (100/min), so polling for results does not count against your submission limit, although it is still subject to its own 300/min cap.
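
A sketch of a modest polling loop; the base URL variable and the completion check are illustrative, since the payload shape depends on your journey configuration:
// Poll every 2 seconds: 30 requests/minute per journey, well inside the 300/min pull limit.
async function pollResult(token) {
  while (true) {
    const response = await fetch(`${zenooBaseUrl}/sharable-payload/${token}`);
    const payload = await response.json();
    if (payload.status && payload.status !== "PENDING") return payload; // illustrative completion check
    await new Promise((r) => setTimeout(r, 2000));
  }
}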

Higher limits

Default limits are sufficient for most integrations. If you need higher throughput, contact your Zenoo account manager with your current request volume, expected peak volume, and use case (batch processing, real-time onboarding, ongoing monitoring). Higher limits are available on Enterprise plans.

Next steps