Limits and recommendations
Any complex system has rules, recommendations, and limits that users must follow. This page explains which actions are acceptable and which are not.
All users must follow the rules and limits described here. We may block access until any violations are fixed.
Basics
When we provide API access, we expect users to act responsibly and in good faith.
Good use means interacting with the system in a way that:
- does not create unnecessary or excessive load;
- does not reduce service availability or stability;
- does not negatively affect other users.
Bad use includes any actions that reduce performance, create unnecessary load on the infrastructure, or disrupt normal system operation, whether or not this behavior is intentional.
You are fully responsible for all actions performed under your account, including those made by automated API clients such as trading bots. If your access is blocked, we may explain the reason, but resolving the issue is your responsibility.
Examples of Undesirable Behavior
This section describes usage patterns that can negatively impact system performance and availability. Review your API client and eliminate these patterns where applicable.
The list is not exhaustive. Access may be restricted for behavior that degrades the system, even if not explicitly listed.
HTTP
Using HTTP API for real-time data streaming
Scenario description
During API client development, market data updates were implemented through HTTP API instead of WebSocket subscriptions. While this initially simplified the client architecture, it eventually resulted in the client sending anywhere from hundreds to thousands of requests per second in an attempt to keep data up to date.
Processing excessive request traffic creates unnecessary system load, increases latency for other operations, and negatively affects overall platform stability.
Recommendations
HTTP API is appropriate for:
- initial data loading;
- historical data retrieval;
- reconciliation and recovery workflows;
- other targeted or low-frequency requests.
For high-frequency real-time updates, we strongly recommend switching to WebSocket subscriptions.
To reduce system load, HTTP API requests are limited to 100 requests per second.
If a specific request does not have an equivalent in WebSocket API, its use through HTTP API is considered acceptable and compliant with fair usage expectations.
Repeated requests for static data
Scenario description
Some data exposed through HTTP API is effectively static: instrument lists, exchanges, currencies, status dictionaries, and similar reference data. Such data changes rarely — or may not change at all — yet the API client still requests it repeatedly as part of every workflow.
Repeatedly fetching unchanged data wastes system resources on unnecessary operations. Under high request volume, this behavior can negatively impact overall platform performance.
Recommendations
- Separate static, infrequently changing, and real-time data.
- Use a local cache with an explicit invalidation policy for static data:
- warm up the cache during application startup if the reference data is required for core trading logic;
- use TTLs and versioning where possible;
- refresh reference data when the trading day changes or when explicit invalidation is required.
- Follow the core principle: static data should be read from the local cache by default instead of repeatedly requesting it through HTTP API.
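The caching approach above can be sketched as follows. This is a minimal illustration: `fetch` stands in for whatever HTTP call actually retrieves the reference data, and the TTL value is an example, not a prescribed setting.

```python
import time

class TtlCache:
    """Minimal TTL cache for static reference data (instruments, currencies)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: force a fresh fetch
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get_or_fetch(self, key, fetch):
        """Read from the local cache; call `fetch` (an HTTP request in practice) only on a miss."""
        cached = self.get(key)
        if cached is not None:
            return cached
        value = fetch()
        self.put(key, value)
        return value
```

Warming up the cache at startup then amounts to calling `get_or_fetch` once for each reference dataset the trading logic depends on.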
Uneven traffic distribution
Scenario description
Even a moderate average request rate can negatively affect system performance if requests are sent in large bursts.
During sudden traffic spikes, the system must process a large number of requests simultaneously, which increases response times and raises the probability of errors. Periods of low activity do not compensate for the impact of these spikes.
Recommendations
Distribute requests evenly over time and avoid sudden traffic spikes by using:
- client-side throttling and rate limiting;
- jitter for retries and scheduled operations;
- queues and request scheduling mechanisms;
- event-driven approaches and subscriptions instead of periodic bulk polling.
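Client-side throttling with jitter can be sketched with a standard token bucket. The rate and burst values below are illustrative, not prescribed limits.

```python
import random
import time

class TokenBucket:
    """Client-side rate limiter: smooths bursts into an even request stream."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def acquire(self) -> float:
        """Consume one token; return how long to sleep before sending the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        self.tokens -= 1
        if self.tokens >= 0:
            return 0.0
        return -self.tokens / self.rate  # time until one token is available

def jittered(interval: float, spread: float = 0.1) -> float:
    """Randomize scheduled intervals so periodic jobs do not all fire at once."""
    return interval * (1 + random.uniform(-spread, spread))
```

Before each HTTP request, the client sleeps for whatever `acquire()` returns; scheduled jobs use `jittered()` so independent clients do not align into the same spike.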
Requesting large datasets without filtering
Scenario description
To retrieve information for a single instrument, the API client downloads the entire list of available exchange instruments and filters the required data locally instead of narrowing the query scope in advance.
This forces the system to return large payloads for every request, unnecessarily consuming network bandwidth and server resources while also increasing client-side processing overhead.
Recommendations
- Restrict the result set using pagination and limit parameters;
- Use filtering options to narrow the requested range of data;
- Avoid requesting full datasets unless they are actually required.
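As an illustration of narrowing the query on the server side, the sketch below builds a filtered request instead of fetching the full instrument list. The endpoint URL and parameter names here are hypothetical placeholders, not the actual API schema.

```python
from urllib.parse import urlencode

# Hypothetical endpoint used only for illustration.
BASE_URL = "https://api.example.com/md/v2/securities"

def build_instrument_query(symbol: str, exchange: str, limit: int = 1) -> str:
    """Ask the server for exactly the instrument needed, not the whole list."""
    params = {"query": symbol, "exchange": exchange, "limit": limit}
    return f"{BASE_URL}?{urlencode(params)}"
```

The same principle applies to any list endpoint: pass the narrowest filter and the smallest limit that still covers the use case.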
Excessive time synchronization
Scenario description
Access to server time can be used to estimate clock drift; however, requesting it too frequently provides little practical benefit.
If such requests are made before every operation, they begin generating a separate stream of service traffic. In addition, the returned timestamp is not intended to provide precise synchronization with external time sources.
Recommendations
Time synchronization should be infrequent and controlled:
- determine the offset between client and server time during application startup, after long idle periods, after significant network failures, or periodically at a low frequency;
- do not synchronize time before every request;
- use a locally stored offset correction value.
Note that the returned server time does not provide the precision, resolution, or synchronization guarantees required for exchange-grade timing. Because of this, aggressive synchronization strategies based on this endpoint are not suitable for high-frequency trading scenarios.
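The offset-based approach above can be sketched as follows. `fetch_server_time` is a placeholder for the actual server-time request, and treating the response as the midpoint of the round trip is a common simplification, not a precision guarantee.

```python
import time

class ServerClock:
    """Maintains a locally stored offset between client and server time."""

    def __init__(self):
        self.offset = 0.0  # server_time - client_time, in seconds

    def sync(self, fetch_server_time):
        """Call rarely: at startup, after long idle periods, or on a slow schedule."""
        t0 = time.time()
        server_time = fetch_server_time()  # one HTTP request in practice
        t1 = time.time()
        # Assume the response reflects the midpoint of the round trip.
        self.offset = server_time - (t0 + t1) / 2

    def now(self) -> float:
        """Server-time estimate from the cached offset; no network request needed."""
        return time.time() + self.offset
```

All subsequent timestamps come from `now()`, so no request is sent before individual operations.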
Aggressive retry policies
Scenario description
The API client does not stop execution when errors occur, causing large numbers of invalid requests to be continuously sent to HTTP API despite having no chance of successful execution. Depending on the error type, the user will eventually be blocked by automatic protection systems, but until then the requests continue generating unnecessary traffic and system load.
Recommendations
- At a minimum, classify errors into:
- temporary errors;
- permanent errors;
- validation errors;
- authentication/authorization errors;
- unknown errors.
- Implement automatic workflow interruption after repeated failures.
- Retry requests only where retries are actually appropriate, using backoff and jitter mechanisms.
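One possible shape for the classification-plus-bounded-retry logic is sketched below. The error-class names are illustrative; a real client would map its own HTTP status codes and error payloads onto these categories.

```python
import random

TEMPORARY = {"timeout", "rate_limited", "server_error"}
PERMANENT = {"validation", "not_found", "unauthorized"}

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0):
    """Exponential backoff with full jitter: the delay window doubles per attempt, capped."""
    for attempt in range(attempts):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

def should_retry(error_class: str, attempt: int, max_attempts: int = 5) -> bool:
    """Retry only temporary errors; stop the workflow on permanent or repeated failures."""
    if error_class in PERMANENT:
        return False
    if attempt >= max_attempts:
        return False
    return error_class in TEMPORARY  # unknown errors are not retried blindly
```

Validation and authorization failures never reach the retry path at all, so invalid requests stop after the first rejection instead of flooding the API.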
Duplicate requests
Scenario description
Due to implementation details, the API client sends multiple identical requests with the same query range simultaneously. From the system perspective, these requests are indistinguishable, causing the same operation to be executed repeatedly and wasting system resources on redundant work.
Recommendations
- Cache data shared across multiple workflows;
- Prefer local cache reads whenever possible.
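Beyond caching, concurrent identical requests can be coalesced so that only one actually reaches the API while the others wait for its result. This is a generic single-process sketch, not an API-specific mechanism.

```python
import threading

class RequestCoalescer:
    """Collapses concurrent identical requests into a single in-flight call."""

    def __init__(self):
        self._lock = threading.Lock()
        self._inflight = {}  # key -> (completion event, shared result holder)

    def fetch(self, key, do_request):
        with self._lock:
            entry = self._inflight.get(key)
            if entry is None:
                event, holder = threading.Event(), {}
                self._inflight[key] = (event, holder)
                owner = True
            else:
                event, holder = entry
                owner = False
        if owner:
            try:
                holder["result"] = do_request()  # the only actual HTTP call
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()
        else:
            event.wait()  # piggyback on the request already in flight
        return holder["result"]
```

Workflows that need the same query range then share one response instead of issuing indistinguishable duplicates.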
Frequent access token generation
Scenario description
Requests to the system are authorized using access tokens, which remain valid for 30 minutes after issuance. Continuous operation requires periodic token renewal; however, excessively frequent token requests create unnecessary load on the authorization infrastructure.
Recommendations
- Implement local token storage and reuse the same token across requests;
- Refresh tokens independently from business operations;
- Avoid unnecessary token generation requests.
The recommended refresh interval is every 20–25 minutes after the previous token was issued.
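A minimal token manager following the 20–25 minute guidance might look like this; `request_new_token` is a placeholder for the actual call to the authorization endpoint.

```python
import time

class TokenManager:
    """Caches the access token and refreshes it within the 20-25 minute window
    (tokens remain valid for 30 minutes after issuance)."""

    REFRESH_AFTER = 22 * 60  # seconds; inside the recommended 20-25 minute window

    def __init__(self, request_new_token):
        self._request_new_token = request_new_token  # one call to the auth endpoint
        self._token = None
        self._issued_at = 0.0

    def get(self) -> str:
        """Reuse the stored token; request a new one only when it is due for refresh."""
        now = time.monotonic()
        if self._token is None or now - self._issued_at >= self.REFRESH_AFTER:
            self._token = self._request_new_token()
            self._issued_at = now
        return self._token
```

Every business operation calls `get()`, so token generation stays decoupled from request volume.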
WebSocket
Using a single connection for all subscriptions
Scenario description
Due to the API client architecture, all subscriptions are concentrated within a single WebSocket connection instead of being distributed across multiple connections. Over time, this connection becomes a bottleneck for client-server communication: traffic volume grows, parsing and serialization overhead increases, and internal message queues expand. Message delivery latency also increases. In addition, losing this connection means losing a large number of subscriptions at once and complicates recovery.
Recommendations
- Distribute subscriptions logically across separate connections, for example by data type;
- Separate high-frequency streams from low-frequency but business-critical events;
- Split large numbers of similar high-frequency subscriptions across multiple connections.
We recommend limiting subscriptions to no more than 5000 per connection.
Missing local message buffering
Scenario description
This is a special case of the previous scenario. The system limits the number of unprocessed messages in the server-side buffer to 5000 entries per connection. Once this limit is exceeded, the system forcibly closes the WebSocket connection with the following error: Too many (>5000) messages in server buffer, closing WebSocket.
If the API client does not implement a local buffer for incoming messages, active subscription usage may cause frequent connection drops. This can result in data loss and trigger aggressive reconnection behavior.
Recommendations
- Replace the “receive and process immediately” approach with a “receive, store, process” workflow using a local message buffer;
- Reduce the overall volume of incoming messages by removing unnecessary subscriptions;
- Parallelize message processing across multiple execution threads.
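The "receive, store, process" workflow can be sketched with a local queue and worker threads; the socket-reading callback only enqueues, so the connection is drained quickly and the server-side buffer does not overflow.

```python
import queue
import threading

def start_consumers(buffer, handle, workers=2):
    """Drain the local buffer on worker threads ('receive, store, process')."""
    def worker():
        while True:
            message = buffer.get()
            if message is None:  # sentinel: shut the worker down
                buffer.task_done()
                return
            try:
                handle(message)  # potentially slow business logic runs here
            finally:
                buffer.task_done()
    threads = [threading.Thread(target=worker, daemon=True) for _ in range(workers)]
    for t in threads:
        t.start()
    return threads

def on_message(buffer, raw):
    """WebSocket receive callback: store immediately, never process inline."""
    buffer.put(raw)
```

Slow handlers then delay only local processing, not the reads that keep the server-side buffer below its 5000-message limit.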
WebSocket disconnections caused by server buffer overflows are tracked by automatic protection systems. Frequent disconnects may result in account restrictions.
Excessive number of connections
Scenario description
The opposite extreme of the scenarios above: the API client creates a separate WebSocket connection for every new subscription, unnecessarily increasing load on both the network and the platform infrastructure.
Recommendations
- Enforce a maximum number of simultaneously open WebSocket connections on the client side;
- Group subscriptions logically across several connections.
The recommended number of concurrently active WebSocket connections is no more than 10.
Missing subscription lifecycle management
Scenario description
Without proper subscription lifecycle management, the API client may create duplicate subscriptions, fail to unsubscribe from obsolete streams, and lose the relationship between subscriptions and their local consumers.
This behavior creates unnecessary system load, causes duplicated messages, and leads to non-deterministic behavior in client-side trading logic.
Recommendations
- Every subscription should be uniquely identifiable by the API client, for example through the guid parameter;
- Subscriptions should have an explicit and controlled lifecycle, including:
- creation;
- active usage;
- cancellation when no longer required;
- optional recovery after network failures.
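A minimal guid-based registry covering this lifecycle is sketched below; the subscription parameters in the usage example are illustrative.

```python
import uuid

class SubscriptionRegistry:
    """Tracks every subscription by a unique guid through its whole lifecycle."""

    def __init__(self):
        self._active = {}  # guid -> subscription parameters

    def subscribe(self, params: dict) -> str:
        """Return the guid to use in the subscribe message; refuses duplicates."""
        for guid, existing in self._active.items():
            if existing == params:
                return guid  # already subscribed: reuse instead of duplicating
        guid = str(uuid.uuid4())
        self._active[guid] = params
        return guid

    def unsubscribe(self, guid: str) -> None:
        """Explicit cancellation when the stream is no longer required."""
        self._active.pop(guid, None)

    def snapshot(self) -> dict:
        """Everything needed to restore subscriptions after a network failure."""
        return dict(self._active)
```

Because every local consumer holds a guid, duplicate streams are detected before a subscribe command is ever sent, and `snapshot()` supports recovery after reconnects.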
High-frequency subscription reconfiguration
Scenario description
Frequently creating and canceling identical subscriptions within one or multiple connections, as well as constantly rebuilding sets of short-lived subscriptions, creates unnecessary load on both the client and server. It increases the number of service messages and complicates subscription lifecycle management.
This behavior creates race conditions between subscribe commands, unsubscribe commands, and actual data delivery. During transitional states, the risk of event loss, duplicate streams, and incorrect interpretation of active subscriptions increases significantly.
Recommendations
- Subscription sets should change deliberately:
- define stable groups of subscriptions in advance;
- avoid excessive subscribe/unsubscribe operations for the same subscription;
- avoid constantly rebuilding subscription sets due to short-lived local conditions.
- If some subscriptions are only used to receive a single message every few minutes, consider replacing such scenarios with HTTP API.
Aggressive reconnection behavior
Scenario description
When a connection is lost, the API client immediately attempts to reconnect without analyzing the cause of the failure. This behavior not only generates sudden spikes of activity, but may also result in automatic account restrictions if the disconnect was caused by an overflow of unprocessed messages in the server buffer (see Missing local message buffering above).
Recommendations
- Use a bounded reconnection strategy with backoff and jitter mechanisms;
- Avoid reconnecting multiple connections simultaneously using identical backoff and jitter settings;
- After reconnecting, avoid subscription storms by restoring subscriptions gradually over time.
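The bounded-reconnect and gradual-resubscribe recommendations can be sketched as follows; the base delay, cap, attempt count, and batch size are illustrative values.

```python
import random

def reconnect_delays(base=1.0, cap=60.0, attempts=6):
    """Bounded exponential backoff with full jitter for reconnection attempts.
    Jitter also keeps multiple connections from reconnecting in lockstep."""
    return [random.uniform(0, min(cap, base * 2 ** n)) for n in range(attempts)]

def resubscribe_batches(guids, batch_size=100):
    """Restore subscriptions gradually, one batch at a time, to avoid a storm."""
    for i in range(0, len(guids), batch_size):
        yield guids[i:i + batch_size]
```

After `attempts` failed reconnects the client should stop and surface the failure rather than retry forever, since the disconnect may have been caused by a server-side buffer overflow that reconnecting alone will not fix.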
GraphQL
At the moment, there are no specific limitations or usage recommendations defined for GraphQL API. Following the documentation and applying reasonable engineering practices is generally sufficient for successful integration.
Common patterns
Using a single interface for every task
Scenario description
During API client development, all integration scenarios were implemented through a single system interface — either HTTP API or WebSocket API. While this approach simplifies development initially, it later leads either to increased data delivery latency or to the loss of reliable recovery mechanisms.
Recommendations
Use each available interface according to its intended purpose:
- HTTP API for commands, snapshots, historical data, and reconciliation workflows;
- WebSocket API for real-time updates and event streams;
- GraphQL API for reference and aggregated data retrieval.
Blocking Conditions
Access may be restricted if user activity is determined to negatively impact the system.
Restrictions may be applied manually and do not have a predefined duration.
If access is restricted without automated system messages, contact support:
- 📧 Email: support@alor.ru
- 🗃 Personal account
- ☎ Phone: 8 800 775-11-99, +7 495 980-24-98
Summary
| Acceptable | Undesirable |
|---|---|
| Use HTTP for commands, WebSocket for streaming | Misuse API channels |
| Distribute load evenly | Generate traffic spikes |
| Cache static data | Repeatedly request unchanged data |
| Handle errors selectively | Retry indiscriminately |
| Reuse tokens | Request tokens per operation |
| Manage connections and subscriptions | Overload or duplicate them |
| Buffer messages | Process without control |
| Maintain stable subscriptions | Frequently recreate them |
| Use controlled reconnects | Reconnect aggressively |
| Restore state after failures | Ignore inconsistencies |
| Validate and deduplicate requests | Send invalid or duplicate requests |
Limits:
- Up to 100 HTTP requests per second;
- Up to 10 WebSocket connections;
- Up to 5000 subscriptions per connection;
- Up to 5000 pending messages in buffer.