The _bulk endpoint allows for efficient processing of multiple requests in a single operation. It supports streaming, parallel or sequential processing, and atomic execution.
Bulk requests can be streamed without requiring the entire request to be loaded into memory.
Results are buffered in memory until the full stream completes, which can produce large responses for big datasets. Consider breaking very large operations into smaller batches.
For processing more than 100 items, we recommend using streaming mode instead of increasing the bulk size limit.
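The batching advice above can be sketched as a simple split of a newline-delimited bulk file; the element shape and file names below are illustrative placeholders, not part of the API:

```shell
# Generate a sample bulk file with 250 placeholder elements
# (real elements would carry actual transaction data).
seq 250 | sed 's/.*/{"action":"CREATE_TRANSACTION","data":{}}/' > big-bulk.ndjson

# Split it into batches of at most 100 elements each
# (produces batch-aa, batch-ab, batch-ac).
split -l 100 big-bulk.ndjson batch-

# Each batch can then be posted as its own bulk request, e.g.:
# for f in batch-*; do
#   curl -X POST ".../api/ledger/v2/{ledger}/_bulk" \
#     -H 'Content-Type: application/vnd.formance.ledger.api.v2.bulk+json-stream' \
#     --data-binary @"$f"
# done
```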
To enable streaming, include one of the following content type headers in your HTTP request:

- Script stream: `Content-Type: application/vnd.formance.ledger.api.v2.bulk+script-stream`
- JSON stream: `Content-Type: application/vnd.formance.ledger.api.v2.bulk+json-stream`
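A JSON stream is a sequence of newline-delimited bulk elements. A minimal sketch follows; the element shape is an assumption based on the v2 bulk actions, and the account names are placeholders:

```shell
# One bulk element per line (newline-delimited JSON).
cat > bulk.ndjson <<'EOF'
{"action":"CREATE_TRANSACTION","data":{"postings":[{"source":"world","destination":"users:001","amount":100,"asset":"USD/2"}]}}
{"action":"ADD_METADATA","data":{"targetId":"1","targetType":"TRANSACTION","metadata":{"note":"bulk"}}}
EOF

# Post it with the json-stream content type, e.g.:
# curl -X POST ".../api/ledger/v2/{ledger}/_bulk" \
#   -H 'Content-Type: application/vnd.formance.ledger.api.v2.bulk+json-stream' \
#   --data-binary @bulk.ndjson
```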
Script stream format

For a script stream, each Numscript transaction must be wrapped in `//script` and `//end` delimiters.
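A minimal sketch of a two-transaction script stream; the Numscript itself and the asset/account names are placeholders:

```shell
# Two Numscript transactions, each wrapped in //script ... //end.
cat > bulk.ns <<'EOF'
//script
send [USD/2 100] (
  source = @world
  destination = @users:001
)
//end
//script
send [USD/2 50] (
  source = @users:001
  destination = @users:002
)
//end
EOF

# Post with the script-stream content type, e.g.:
# curl -X POST ".../api/ledger/v2/{ledger}/_bulk" \
#   -H 'Content-Type: application/vnd.formance.ledger.api.v2.bulk+script-stream' \
#   --data-binary @bulk.ns
```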
The `continueOnFailure` query parameter controls how the bulk endpoint handles errors during sequential processing:

- `continueOnFailure=false` (default): processing stops at the first error; subsequent elements are not processed.
- `continueOnFailure=true`: processing continues even if individual elements fail. Failed elements are reported in the response.

The `continueOnFailure` parameter only applies when `parallel=false`. In parallel mode, elements are processed independently regardless of this setting.
Example with continue on failure:
```
POST /api/ledger/v2/{ledger}/_bulk?parallel=false&continueOnFailure=true
```
This is useful when you want to process as many elements as possible and handle failures separately, rather than stopping the entire batch on the first error.
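A sketch of building that request URL, assuming a local server; the base URL and the ledger name `my-ledger` are placeholders:

```shell
# Sequential processing that collects failures instead of aborting.
BASE_URL="http://localhost:3068"
LEDGER="my-ledger"
BULK_URL="$BASE_URL/api/ledger/v2/$LEDGER/_bulk?parallel=false&continueOnFailure=true"
echo "$BULK_URL"

# The request itself would then be, e.g.:
# curl -X POST "$BULK_URL" \
#   -H 'Content-Type: application/json' \
#   -d @elements.json
```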
Idempotency keys prevent duplicate transactions when replaying bulk requests after failures. Each bulk element can specify its own key.

For script streams, add `ik=<key>` to the script header.
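For instance, a script element carrying its own idempotency key; the key value `payment-2024-001` and the Numscript body are placeholders:

```shell
# The ik=<key> attribute goes on the //script header line.
cat > bulk-ik.ns <<'EOF'
//script ik=payment-2024-001
send [USD/2 100] (
  source = @world
  destination = @users:001
)
//end
EOF
```

Replaying the same element with the same key should not create a duplicate transaction.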
The 100-item limit applies only to non-streaming requests. If your use case requires a larger limit, you can configure it using the `--bulk-max-size` flag or the `BULK_MAX_SIZE` environment variable:
```
ledger serve --bulk-max-size 1000
```
Increasing the bulk size does not necessarily improve write performance. Test different values to find the optimal setting for your use case.