Streams API
Manage event streams and view their ingest credentials.
Create Stream
Create a new event stream for an organization.
Endpoint
POST /api/streams
Authentication
Requires a Personal Access Token with streams:create ability.
Request Format
Headers
| Header | Value | Required |
|---|---|---|
| Authorization | Bearer <USER_PAT> | ✅ |
| Content-Type | application/json | ✅ |
Body
{
"name": "My Stream"
}
Body Fields
| Field | Type | Description | Required |
|---|---|---|---|
| name | string | Human-readable stream name | ✅ |
Response Format
{
  "id": 1,
  "uuid": "...",
  "name": "My Stream",
  "slug": "my-stream-abc123",
  "stream_setup": false,
  "team_id": 10,
  "created_at": "2025-01-01T00:00:00Z",
  "updated_at": "2025-01-01T00:00:00Z",
  "ingest": {
    "endpoint": "https://tailstream.io/api/ingest/{streamId}",
    "token": "<INGEST_TOKEN>",
    "expires_at": null
  }
}
The token field is a signed JWT containing the stream UUID (stream_id claim) and scope: ingest. expires_at is populated only when you explicitly issue an expiring token.
Get Stream
Retrieve an existing stream and its ingest details.
Endpoint
GET /api/streams/{stream}
Authentication
Requires a Personal Access Token with streams:read ability.
Response Format
Identical to the response shown above for stream creation.
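Because the ingest token is a standard JWT, its claims can be inspected without verification by base64url-decoding the payload segment. A minimal sketch; the token built below is synthetic, purely for illustration (a real token comes from the token field of the stream response):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature."""
    payload_b64 = token.split(".")[1]
    # Restore the base64url padding that JWT encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a synthetic token for demonstration only.
header = base64.urlsafe_b64encode(json.dumps({"alg": "HS256"}).encode()).rstrip(b"=").decode()
claims = {"stream_id": "ed3f0a15-b07e-4c60-8c0d-4c4dc3a2a935", "scope": "ingest"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
fake_token = f"{header}.{payload}.signature"

decoded = jwt_claims(fake_token)
print(decoded["scope"])  # ingest
```

Note that this only reads claims; it does not validate the signature, so never use it for trust decisions.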
List Stream Logs
Retrieve raw log events for a stream with optional filtering and pagination.
Endpoint
GET /api/streams/{stream}/logs
Authentication
Requires a Personal Access Token with streams:read ability. The token owner must belong to the same team as the requested stream.
Query Parameters
| Name | Type | Description |
|---|---|---|
| start_time | integer | (Optional) Millisecond Unix timestamp marking the beginning of the window. When omitted (along with end_time), acts as a "tail" returning the most recent logs. |
| end_time | integer | (Optional) Millisecond Unix timestamp marking the end of the window. When omitted, defaults to the current time. |
| limit | integer | (Optional) Number of events per request (1-500). Defaults to 100. |
| cursor | string | (Optional) Opaque cursor string for pagination. Use the next_cursor from a previous response to fetch the next page. |
| direction | string | (Optional) asc or desc for timestamp ordering. Defaults to desc (newest first). |
| filters | string | (Optional) JSON-encoded array of filter clauses. See Filters for the supported fields and operators. |
Tail Mode: When neither start_time nor end_time is provided, the endpoint returns the most recent log events (like tail -f). In this mode:
- The total count will be null (not computed, for efficiency)
- Cursor pagination still works normally
- Perfect for real-time monitoring or fetching the latest logs
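Since start_time and end_time are millisecond Unix timestamps, windows expressed in seconds must be scaled accordingly. A small sketch computing the last hour as query parameters (the helper name is my own, not part of the API):

```python
import time
from urllib.parse import urlencode

def last_hour_window(now_s=None) -> dict:
    """Return start_time/end_time query params (milliseconds) for the past hour."""
    now_s = time.time() if now_s is None else now_s
    end_ms = int(now_s * 1000)
    start_ms = end_ms - 3_600_000  # one hour in milliseconds
    return {"start_time": start_ms, "end_time": end_ms}

# Fixed timestamp (2025-01-01T01:00:00Z) so the output is reproducible.
params = last_hour_window(now_s=1735693200)
print(urlencode(params))
# start_time=1735689600000&end_time=1735693200000
```

Omitting both parameters instead switches the endpoint into tail mode, as described above.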
Filters
Each filter clause is an object with the following keys:
- field (string, required): one of is_bot, status_class, status, method, host, path, response_time_ms, level, ip, src, filename, or q. The alias search is accepted and normalized to q.
- operator (string, optional): comparison operator. Defaults to = when omitted.
- value (mixed, optional): value to compare against. Use arrays for the in and between operators.
Supported Filters
| Field | Operators | Expected value(s) | Description |
|---|---|---|---|
| is_bot | =, !=, not | Boolean or truthy/falsey literal (true, false, 1, 0, etc.) | Include or exclude events identified as bots based on User-Agent patterns. |
| status_class | =, !=, not | One of 1xx, 2xx, 3xx, 4xx, 5xx | Filter or exclude by HTTP status class. |
| status | =, ==, !=, not, >, >=, <, <=, gt, gte, lt, lte, in, not_in, between | Integer, list of integers, or [min, max] pair | Filter by specific status codes, ranges, or exclude status codes. |
| method | =, ==, !=, not, in, not_in | HTTP method string or array of methods | Match or exclude request method(s) (case-insensitive). |
| host | =, ==, !=, not, like, not_like, in, not_in | Host string or array of hostnames | Filter or exclude by request host. Use % wildcards with like/not_like. |
| path | =, ==, !=, not, like, not_like, in, not_in | Path string or array of paths | Filter or exclude by request path. Use % wildcards with like/not_like. |
| response_time_ms | =, ==, >, >=, <, <=, gt, gte, lt, lte, between | Number or [min, max] range (milliseconds) | Filter by response latency. |
| level | =, ==, !=, not, in, not_in | Log level string or array (case-insensitive) | Filter or exclude by parsed log level (e.g., INFO, WARN, ERROR). |
| ip | =, ==, !=, not, like, not_like, in, not_in | IP address string or array of IP addresses | Filter by client IP address. |
| src | =, ==, !=, not, like, not_like, in, not_in | Source string or array of sources | Filter by source from parsed fields (JSON field). |
| filename | =, ==, !=, not, like, not_like, in, not_in | File path string or array of file paths | Filter by source log file path (e.g., /var/log/nginx/access.log). |
| q (or search) | N/A | Free-form string | Case-insensitive full-text search across message, path, host, user agent, and parsed fields. No operator needed. |
Filter Operator Details
Comparison Operators:
- =, ==: Exact match (case-insensitive for method and level)
- !=, not: Not equal
- >, gt: Greater than (numeric fields only)
- >=, gte: Greater than or equal (numeric fields only)
- <, lt: Less than (numeric fields only)
- <=, lte: Less than or equal (numeric fields only)
Pattern Matching:
- like: SQL LIKE pattern match (use % wildcards, e.g., api/% or %/orders)
- not_like: Negated LIKE pattern match
Array Operators:
- in: Match any value in the array
- not_in: Exclude all values in the array
- between: Range match for numeric values; expects a [min, max] array
Filter Examples
Status code filtering:
[
{"field": "status", "operator": "=", "value": 500}
]
Status code range:
[
{"field": "status", "operator": "between", "value": [400, 499]}
]
Multiple status codes:
[
{"field": "status", "operator": "in", "value": [500, 502, 503]}
]
Path pattern matching:
[
{"field": "path", "operator": "like", "value": "/api/%"}
]
Response time threshold:
[
{"field": "response_time_ms", "operator": ">", "value": 1000}
]
Log level filtering:
[
{"field": "level", "operator": "in", "value": ["ERROR", "FATAL"]}
]
Exclude bots:
[
{"field": "is_bot", "operator": "=", "value": false}
]
Full-text search:
[
{"field": "q", "value": "timeout"}
]
Combined filters (AND logic):
[
{"field": "status", "operator": ">=", "value": 500},
{"field": "path", "operator": "like", "value": "/api/%"},
{"field": "response_time_ms", "operator": ">", "value": 1000}
]
Important Notes
Text Search Behavior:
The q field performs a case-insensitive search across multiple columns:
- Standard columns: path, host, user_agent, referrer, ip_address, service, message, raw_message, method
- JSON parsed fields: parsed_fields->>'path', parsed_fields->>'host', parsed_fields->>'src', parsed_fields->>'message'
Search uses ILIKE on PostgreSQL and LOWER() + LIKE on other databases for case-insensitive matching.
Filter Logic:
- Multiple filters use AND logic (all conditions must match)
- Filters are applied server-side before pagination
- Invalid filter fields are silently ignored
- Invalid operators default to =
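The normalization rules above (alias search becomes q, a missing operator defaults to =, unknown fields are dropped) can be mirrored client-side so malformed clauses are caught before a request is sent. A sketch under the assumption that the server behaves exactly as documented; the helper and constant names are illustrative, not an official client:

```python
# Fields accepted by the logs endpoint, per the Supported Filters table.
ALLOWED_FIELDS = {
    "is_bot", "status_class", "status", "method", "host", "path",
    "response_time_ms", "level", "ip", "src", "filename", "q",
}

def normalize_filters(clauses: list) -> list:
    """Apply the documented server-side normalization locally."""
    normalized = []
    for clause in clauses:
        field = clause.get("field")
        if field == "search":            # alias accepted and normalized to q
            field = "q"
        if field not in ALLOWED_FIELDS:  # invalid fields are silently ignored
            continue
        out = {"field": field, "operator": clause.get("operator", "=")}
        if "value" in clause:
            out["value"] = clause["value"]
        normalized.append(out)
    return normalized

print(normalize_filters([
    {"field": "search", "value": "timeout"},
    {"field": "bogus", "value": 1},
    {"field": "status", "operator": "in", "value": [500, 502]},
]))
```

Validating locally makes silently ignored clauses visible instead of producing quietly unfiltered results.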
Performance Considerations:
- JSON field filters (src) cannot use indexes and may be slower on large datasets
- Text search (q) performs full table scans; combine it with a time range and other indexed filters for better performance
- Pattern matching with like can be slow; prefix patterns (api/%) perform better than suffix patterns (%/orders)
Example Requests
Basic Request
curl "https://app.tailstream.io/api/streams/ed3f0a15-b07e-4c60-8c0d-4c4dc3a2a935/logs?start_time=1735689600000&end_time=1735693200000&limit=100" \
-H "Authorization: Bearer <USER_PAT>"
With Filters
To filter only 500 errors you can pass a JSON array of filter rules:
curl "https://app.tailstream.io/api/streams/ed3f0a15-b07e-4c60-8c0d-4c4dc3a2a935/logs?filters=%5B%7B%22field%22:%22status%22,%22operator%22:%22=%22,%22value%22:500%7D%5D" \
-H "Authorization: Bearer <USER_PAT>"
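Hand-encoding the filters parameter, as in the curl call above, is error-prone; letting a URL library JSON-encode and percent-encode it is safer. A sketch using Python's standard library, with the stream URL taken from the examples above:

```python
import json
from urllib.parse import urlencode

base = "https://app.tailstream.io/api/streams/ed3f0a15-b07e-4c60-8c0d-4c4dc3a2a935/logs"
filters = [{"field": "status", "operator": "=", "value": 500}]

# Compact separators match the wire format shown in the curl example;
# urlencode then percent-encodes the JSON for use in a query string.
query = urlencode({"filters": json.dumps(filters, separators=(",", ":"))})
url = f"{base}?{query}"
print(url)
```

The resulting URL can be passed to any HTTP client with the usual Authorization header.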
Cursor-Based Pagination
Use the next_cursor from the response to fetch the next page:
# First page
curl "https://app.tailstream.io/api/streams/ed3f0a15-b07e-4c60-8c0d-4c4dc3a2a935/logs?limit=100" \
-H "Authorization: Bearer <USER_PAT>"
# Next page
curl "https://app.tailstream.io/api/streams/ed3f0a15-b07e-4c60-8c0d-4c4dc3a2a935/logs?cursor=eyJ0cyI6IjIwMjUtMDEtMDEgMTI6MDA6MDAuMDAwMDAwIiwiaWQiOjEyMzQ1fQ==&limit=100" \
-H "Authorization: Bearer <USER_PAT>"
Response Format
{
"data": [
{
"id": 12345,
"timestamp": "2025-01-01T12:00:00Z",
"raw_message": "GET /api/orders 200 142ms",
"fields": {
"level": "INFO",
"method": "GET"
}
},
{
"id": 12346,
"timestamp": "2025-01-01T12:00:01Z",
"raw_message": "Plain log message without parsed fields"
}
],
"meta": {
"has_more": true,
"next_cursor": "eyJ0cyI6IjIwMjUtMDEtMDEgMTI6MDA6MDAuMDAwMDAwIiwiaWQiOjEyMzQ1fQ==",
"total": 1250
},
"links": {
"next": "https://app.tailstream.io/api/streams/{stream}/logs?cursor=eyJ0c..."
}
}
Response Fields:
Each log event contains:
- id: Unique identifier for the log event
- timestamp: ISO 8601 timestamp
- raw_message: The original log message
- fields: (Optional) Object containing parsed fields; only included when fields are detected
  - level: Log level (INFO, WARN, ERROR, etc.)
  - method: HTTP method (GET, POST, etc.)
Metadata:
- meta.has_more: Boolean indicating whether more results exist
- meta.next_cursor: Opaque cursor string for the next page (null if there are no more results)
- meta.total: Total count of events matching the filters (cached for 60 seconds). Null in tail mode.
- links.next: Pre-built URL for the next page (null if there are no more results)
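Because fields is only present when parsing succeeded, consumers should treat it as optional. A sketch of defensive extraction over events shaped like the sample response above (the event data is copied from that example; the helper name is my own):

```python
# Two events in the documented shape: one with parsed fields, one without.
events = [
    {"id": 12345, "timestamp": "2025-01-01T12:00:00Z",
     "raw_message": "GET /api/orders 200 142ms",
     "fields": {"level": "INFO", "method": "GET"}},
    {"id": 12346, "timestamp": "2025-01-01T12:00:01Z",
     "raw_message": "Plain log message without parsed fields"},
]

def level_of(event: dict) -> str:
    """Return the parsed log level, or UNKNOWN when no fields were detected."""
    return event.get("fields", {}).get("level", "UNKNOWN")

print([level_of(e) for e in events])  # ['INFO', 'UNKNOWN']
```

Reaching for event["fields"]["level"] directly would raise a KeyError on the second event.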
If the requested time window exceeds your plan's retention, the API responds with 422 Unprocessable Entity.
Cursor-Based Pagination
This endpoint uses cursor-based pagination for efficient traversal through large result sets:
- Opaque Cursors: Cursor strings are base64-encoded and should be treated as opaque tokens
- Consistent Performance: O(log n) queries regardless of position in the dataset
- No Offset Limits: Can paginate through millions of records efficiently
- Stable Results: Immune to concurrent inserts affecting pagination
- Forward Navigation: Supports forward traversal via next_cursor
Usage Pattern:
- Make an initial request without a cursor
- Check meta.has_more in the response
- If true, use meta.next_cursor for the next request
- Repeat until has_more is false
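The loop above can be sketched against any page-fetching function; here a stub stands in for the HTTP call (fetch_page and its two canned pages are hypothetical, simulating the documented response shape):

```python
# Two fake pages keyed by cursor; a real client would instead issue
# GET /api/streams/{stream}/logs with the cursor query parameter.
PAGES = {
    None: {"data": [{"id": 1}, {"id": 2}],
           "meta": {"has_more": True, "next_cursor": "abc"}},
    "abc": {"data": [{"id": 3}],
            "meta": {"has_more": False, "next_cursor": None}},
}

def fetch_page(cursor):
    return PAGES[cursor]

def fetch_all() -> list:
    """Follow next_cursor until has_more is false, collecting every event."""
    events, cursor = [], None
    while True:
        page = fetch_page(cursor)
        events.extend(page["data"])
        if not page["meta"]["has_more"]:
            return events
        cursor = page["meta"]["next_cursor"]

print(len(fetch_all()))  # 3
```

Treating the cursor as an opaque token, as the notes below recommend, keeps this loop valid even if the cursor encoding changes.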
Rate Limiting & Performance
- Rate Limit: 60 requests per minute per token
- Count Caching: Total counts are cached for 60 seconds to improve performance
- Efficient Queries: Uses composite indexes on (stream_id, created_at, id)
- Response Time: Consistent ~150ms regardless of dataset size or position
See Performance Documentation for detailed optimization guidelines.