- Using the Ingest API
- Using OpenTelemetry
- Using a data shipper (Logstash, Filebeat, Metricbeat, Fluentd, etc.)
- Using the Elasticsearch Bulk API that Axiom supports natively
- Using endpoints
Ingest method
Select the method to ingest your data. Each ingest method follows a particular path:
- Client libraries
- Library extensions
- Other
Ingest API
Axiom exposes a simple REST API that accepts any of the following formats:
Ingest using JSON
application/json
- a single event or a JSON array of events
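As a sketch, the JSON body for an ingest request can be built like this. The event fields shown are hypothetical; the only fixed part is that the body is a single JSON object or a JSON array of objects, sent with Content-Type: application/json.

```python
import json

# Two hypothetical events, sent as a JSON array with
# Content-Type: application/json. A single JSON object is also accepted.
events = [
    {"service": "checkout", "status": 200, "message": "order created"},
    {"service": "checkout", "status": 500, "message": "payment failed"},
]

payload = json.dumps(events)
print(payload)
```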
Ingest using NDJSON
application/x-ndjson
- multiple JSON objects, each on a separate line
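A minimal sketch of building an NDJSON body, assuming hypothetical event fields: each event is serialized as one JSON object per line, and the request carries Content-Type: application/x-ndjson.

```python
import json

# Events serialized as NDJSON: one JSON object per line,
# sent with Content-Type: application/x-ndjson.
events = [
    {"level": "info", "message": "server started"},
    {"level": "error", "message": "connection refused"},
]

ndjson_body = "\n".join(json.dumps(event) for event in events)
print(ndjson_body)
```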
Ingest using CSV
text/csv
- must include a header line with field names separated by commas
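A sketch of building a CSV body with the required header line, using hypothetical field names; the request would carry Content-Type: text/csv.

```python
import csv
import io

# CSV body: first row is the header line with field names,
# each following row is one event. Sent with Content-Type: text/csv.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["_time", "level", "message"])            # header line
writer.writerow(["2024-05-01T12:00:00Z", "info", "ok"])   # one event per row
writer.writerow(["2024-05-01T12:00:05Z", "error", "disk full"])

csv_body = buf.getvalue()
print(csv_body)
```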
Data shippers
Configure, read, collect, and send logs to your Axiom deployment using a variety of data shippers. Data shippers are lightweight agents that acquire logs and metrics, enabling you to ship data directly into Axiom:
- AWS CloudFront
- Amazon CloudWatch
- Elastic Beats
- Fluent Bit
- Fluentd
- Heroku Log Drains
- Kubernetes
- Logstash
- Loki Multiplexer
- Syslog Proxy
- Vector
Apps
Send logs and metrics from Vercel, Netlify, and other supported apps.
Endpoints
Endpoints let you integrate Axiom into your existing data flow using tools and libraries you are already familiar with. You can create an endpoint for the following services and send the logs directly to Axiom:
Limits and requirements
Axiom applies certain limits and requirements to ingested data to guarantee good service across the platform. Some of these limits depend on your pricing plan, and some are applied system-wide. For more information, see Limits and requirements. The most important field requirement concerns the timestamp: all events stored in Axiom must have a _time timestamp field. If the data you ingest doesn't have a _time field, Axiom assigns the time of the data ingest to the events. To specify the timestamp yourself, include a _time field in the ingested data. If you include a _time field, ensure it contains timestamps in a valid time format. Axiom accepts many date strings and timestamps without knowing the format in advance, including Unix Epoch, RFC3339, and ISO 8601.
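To illustrate, here is a sketch of two hypothetical events whose _time fields use two of the accepted formats, an RFC3339/ISO 8601 string and Unix Epoch seconds:

```python
import json
from datetime import datetime, timezone

# Hypothetical events showing two accepted _time formats.
rfc3339_event = {
    "_time": datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc).isoformat(),
    "message": "RFC3339 / ISO 8601 timestamp",
}
epoch_event = {
    "_time": 1714564800,  # Unix Epoch seconds for 2024-05-01T12:00:00Z
    "message": "Unix Epoch timestamp",
}

print(json.dumps([rfc3339_event, epoch_event]))
```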
Best practices for sending data to Axiom
When sending data into Axiom, follow these best practices to optimize performance and reliability:
- Batch events: Use a log forwarder, collector, or Axiom's official SDKs to group multiple events into a single request before sending them to Axiom. This reduces the number of API calls and improves overall throughput. Avoid implementing batching within your app itself, as this introduces additional complexity and requires careful management of buffers and error handling.
- Use compression: Enable gzip or zstd compression for your requests to reduce bandwidth usage and potentially improve response times.
- Handle rate limiting and errors: Use Axiom's official libraries and SDKs, which automatically implement best practices for handling rate limiting and errors. For advanced use cases or custom implementations, consider adding a fallback mechanism to store events locally or in cold storage if ingestion consistently fails after retries.
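The compression advice above can be sketched as follows. This is a minimal, network-free example: it gzips a JSON payload as a request body would be, on the assumption that the actual request would carry a Content-Encoding: gzip header.

```python
import gzip
import json

# Compress a JSON payload before sending. The request itself would
# carry Content-Encoding: gzip so the server knows to decompress it.
events = [{"message": f"event {i}", "level": "info"} for i in range(1000)]
raw = json.dumps(events).encode("utf-8")
compressed = gzip.compress(raw)

print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")
```

Because log payloads tend to be highly repetitive, the compressed body is typically a small fraction of the raw size.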