The Store API enforces rate limits to protect against abuse and ensure fair usage. Rate limits are applied per API key or per IP address, depending on the endpoint.
## Default Limits
| Endpoint | Limit | Scope | Window |
|---|---|---|---|
| All endpoints | 300 requests | Per API key | 1 minute |
| `POST /auth/login` | 5 requests | Per IP | 1 minute |
| `POST /customers` | 3 requests | Per IP | 1 minute |
| `POST /auth/refresh` | 10 requests | Per IP | 1 minute |
| `POST /auth/oauth/callback` | 5 requests | Per IP | 1 minute |
The per-key limit is tracked by the API key sent in the `X-Spree-Api-Key` header. If the key is not provided, the limit falls back to the client's IP address.
Authentication endpoints have stricter per-IP limits to prevent brute-force attacks.
## Rate Limit Headers
Every Store API response includes headers that show your current rate limit usage:

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum number of requests allowed per window |
| `X-RateLimit-Remaining` | Number of requests remaining in the current window |
| `Retry-After` | Seconds to wait before retrying (only present when the limit is reached) |
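As an illustrative sketch (not part of the Spree SDK), the headers above can be read from any HTTP client's response to decide whether to pause before the next request:

```ruby
# Hedged sketch: extract the rate limit headers listed above from a
# response. `headers` stands in for any HTTP client's header hash.
def rate_limit_status(headers)
  {
    limit:       headers["X-RateLimit-Limit"]&.to_i,
    remaining:   headers["X-RateLimit-Remaining"]&.to_i,
    retry_after: headers["Retry-After"]&.to_i # only present on 429
  }
end

status = rate_limit_status(
  "X-RateLimit-Limit" => "300", "X-RateLimit-Remaining" => "299"
)
# status[:remaining] → 299; status[:retry_after] → nil
```

Checking `remaining` before issuing bursts of requests lets a client slow down proactively instead of waiting to hit a `429`.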
## Rate Limit Response
When you exceed the rate limit, the API returns a `429 Too Many Requests` response with a `Retry-After` header indicating how many seconds to wait before retrying.
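A `429` response looks roughly like the following; the status line and `Retry-After` header follow the text above, while the body shown here is an assumption (the exact error payload varies by Spree version):

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: application/json

{ "error": "Rate limit exceeded. Please retry later." }
```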
## SDK Retry Handling
The Spree SDK automatically handles rate-limited responses with built-in retry logic and exponential backoff. It respects the `Retry-After` header and, for non-GET requests, retries only on `429` status codes. For GET/HEAD requests, it also retries on `500`, `502`, `503`, and `504` errors.
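The retry behavior described above can be sketched as follows. This is a minimal illustration of the technique, not the SDK's actual implementation; the method names and backoff base are assumptions:

```ruby
# Hedged sketch of retry-with-backoff for 429 responses: a
# server-provided Retry-After wins, otherwise fall back to
# exponential backoff (0.5s, 1s, 2s, ...).
def retry_delay(attempt, retry_after: nil)
  return retry_after if retry_after # server-provided wait wins
  (2**attempt) * 0.5
end

# Yields to the caller to perform the request; retries while the
# response is a 429 and attempts remain. `sleeper` is injectable
# so the wait can be observed or skipped in tests.
def with_retries(max_attempts: 3, sleeper: ->(s) { sleep(s) })
  attempt = 0
  loop do
    response = yield
    return response unless response[:status] == 429 && attempt < max_attempts
    sleeper.call(retry_delay(attempt, retry_after: response[:retry_after]))
    attempt += 1
  end
end
```

Honoring `Retry-After` before falling back to exponential backoff avoids retrying earlier than the server has asked, which would only consume more of the rate limit window.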
## Configuring Rate Limits
If you’re self-hosting Spree, you can adjust rate limits in an initializer. Rate limiting uses `Rails.cache` as the backing store, so for production environments with multiple application servers, ensure you’re using a shared cache store such as Redis.
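The original configuration example was not recoverable, so the sketch below is illustrative only: the `api_rate_limit` option names are hypothetical and should be checked against your Spree version's documentation. The Redis cache store line, by contrast, is standard Rails configuration:

```ruby
# config/initializers/spree_rate_limits.rb
#
# HYPOTHETICAL option names -- verify against your Spree version.
Spree.config do |config|
  config.api_rate_limit = 300        # hypothetical: requests per window
  config.api_rate_limit_period = 60  # hypothetical: window in seconds
end
```

```ruby
# config/environments/production.rb
# A shared cache store so every app server sees the same rate
# limit counters (standard Rails cache configuration).
config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }
```

With an in-process or per-host cache, each application server would track its own counters and clients could exceed the intended limit by a factor of the server count; a shared Redis store keeps the counters global.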

