Sending Logs¶
Overview¶
Strix accepts logs via HTTP and inserts them into ClickHouse through a buffered worker pool. All ingestion endpoints require token-based authentication and are rate-limited.
Authentication¶
Ingest tokens are scoped per-fractal and carry per-token configuration (parser type, normalization, timestamp fields). Each fractal gets a default token automatically on creation.
Getting a Token¶
- Navigate to the Ingest tab within a fractal
- Copy the token (format: strix_ingest_{32_hex_chars})
- Include it in the Authorization header:
curl -X POST http://localhost:8080/api/v1/ingest \
-H "Authorization: Bearer strix_ingest_abc123..." \
-H "Content-Type: application/json" \
-d '[{"event":"login","user":"admin"}]'
Requests without a valid token receive 401 Unauthorized.
Token Features¶
Each token has its own configuration:
- Parser type: json (default), kv (key-value), or syslog
- Normalization: Automatically normalize field names
- Timestamp fields: Define which fields hold timestamps
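To make the three parser types concrete, here is one logical event rendered in each input format. The JSON form matches the examples in these docs; the exact kv and syslog dialects Strix accepts are assumptions for illustration.

```shell
# The same login event in each parser's input format.
# JSON is shown elsewhere in these docs; the kv and syslog
# shapes below are illustrative assumptions.
json_payload='{"event":"login","user":"admin"}'
kv_payload='event=login user=admin'
syslog_payload='<34>Jan 15 10:30:00 web01 app: user admin logged in'
printf '%s\n' "$json_payload" "$kv_payload" "$syslog_payload"
```

A token configured with the kv parser would receive the second form; one configured with syslog, the third.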
Supported Formats¶
All formats are sent to POST /api/v1/ingest.
JSON array (multiple logs in one request):
curl -X POST http://localhost:8080/api/v1/ingest \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '[{"event":"login","user":"admin"},{"event":"logout","user":"admin"}]'
Single object:
curl -X POST http://localhost:8080/api/v1/ingest \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"message":"user login","source_ip":"10.0.0.1"}'
NDJSON (newline-delimited JSON):
curl -X POST http://localhost:8080/api/v1/ingest \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
--data-binary @logs.ndjson
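The NDJSON request above streams a file. A minimal logs.ndjson holds one standalone JSON object per line, with no enclosing array and no trailing commas (field names here are illustrative):

```shell
# Build a two-line NDJSON file: one JSON object per line,
# no wrapping array, no commas between lines.
cat > logs.ndjson <<'EOF'
{"event":"login","user":"admin","source_ip":"10.0.0.1"}
{"event":"logout","user":"admin","source_ip":"10.0.0.1"}
EOF
wc -l < logs.ndjson
```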
Elasticsearch Bulk API¶
Strix also accepts POST /_bulk and PUT /_bulk for compatibility with Elasticsearch-style clients. These endpoints require the same Authorization: Bearer header.
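Assuming the /_bulk endpoint follows the Elasticsearch convention of alternating action and document lines, a request body might be built like this (whether Strix reads anything from the action metadata is an assumption):

```shell
# Elasticsearch-style bulk body: an action line, then the document, repeated.
printf '%s\n' \
  '{"index":{}}' \
  '{"event":"login","user":"admin"}' \
  '{"index":{}}' \
  '{"event":"logout","user":"admin"}' > bulk_body.ndjson

# Then send it (requires a running Strix and a valid $TOKEN):
# curl -X POST http://localhost:8080/_bulk \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/x-ndjson" \
#   --data-binary @bulk_body.ndjson
wc -l < bulk_body.ndjson
```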
Fractal Routing¶
Each ingest token is scoped to a single fractal. Logs are routed to the fractal associated with the token.
Timestamps¶
Strix extracts timestamps automatically, checking these sources in order:
- Token-configured timestamp fields (set per token in the Ingest tab)
- Configured timestamp fields (set in Settings)
- Common fields: timestamp, @timestamp, time, ts, _time
- Falls back to ingestion time if none found
Supported formats: RFC3339, unix seconds/millis/micros/nanos, and common ISO variants.
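For example, the two payloads below describe the same instant using formats from the list above; the field names @timestamp and ts come from the common-fields list, and the cross-check uses GNU date:

```shell
# The same event time as RFC3339 and as unix seconds.
echo '{"message":"login","@timestamp":"2024-01-15T10:30:00Z"}'
echo '{"message":"login","ts":1705314600}'
# Cross-check that the two representations agree (GNU date):
date -u -d '2024-01-15T10:30:00Z' +%s
```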
Parsing Philosophy¶
Strix is intentionally minimal when it comes to parsing. It accepts well-structured log formats (JSON, key-value, syslog) and focuses on what happens after logs arrive: storage, querying, alerting, and collaboration at scale.
Complex log parsing and transformation (extracting fields from unstructured text, grok patterns, multi-line assembly, etc.) is deliberately out of scope. Mature, battle-tested tools already exist for this:
- Logstash - Broad plugin ecosystem for parsing and routing
- Cribl - Stream processing and log transformation
- Fluentd / Fluent Bit - Lightweight log collection and parsing
- Vector - High-performance log pipeline
Use these tools upstream of Strix to parse raw logs into structured formats before ingestion.
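As a minimal sketch of what "parse upstream" means, the pipeline below turns unstructured access-log lines into JSON suitable for ingestion; in practice one of the tools above would do this step, and the field names here are illustrative, not a Strix schema:

```shell
# Convert raw space-delimited access-log lines to one JSON object per line
# before sending them to Strix. Field names are illustrative.
printf '%s\n' \
  '10.0.0.1 GET /login 200' \
  '10.0.0.2 POST /api/v1/orders 500' |
awk '{ printf "{\"source_ip\":\"%s\",\"method\":\"%s\",\"path\":\"%s\",\"status\":%s}\n", $1, $2, $3, $4 }' > parsed.ndjson
cat parsed.ndjson
```

The resulting parsed.ndjson can then be ingested with the NDJSON curl shown earlier.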
What Strix Does Handle¶
Normalization. Strix normalizes field names across log sources to ensure consistency. This means alert rules, saved queries, and dashboards work reliably regardless of which source produced the log. Normalization can be configured per ingest token.
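A hedged sketch of what normalization accomplishes: two sources name the same field differently, and mapping both to a canonical name means one query matches every source. The specific mappings below are hypothetical, not Strix's actual rules:

```shell
# Rename source-specific field names to hypothetical canonical ones,
# so a query on "source_ip" matches logs from this source too.
echo '{"srcIP":"10.0.0.1","msg":"login"}' |
sed -e 's/"srcIP"/"source_ip"/' -e 's/"msg"/"message"/'
```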