Events & Pipeline ​
MapexOS provides a unified event pipeline that ingests, normalizes, routes, and persists events from heterogeneous sources into a consistent operational backbone.
At the center of this pipeline is the Events Service, which persists events in ClickHouse for high-throughput analytics and supports per-tenant retention (TTL).
Applies to v1.0.0 — HTTP and MQTT ingestion, fan-out routing via Route Groups, ClickHouse storage, and EVA-based querying.
What is an “event” in MapexOS? ​
In MapexOS, an event is the normalized representation of any incoming signal:
- IoT telemetry (temperature, humidity, battery, counters)
- Device status and health checks
- Business events from third-party systems (webhooks)
- Operational alerts generated by automation
MapexOS treats all of them as first-class events and processes them through the same pipeline.
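For example, a raw MQTT telemetry payload and a third-party webhook look nothing alike on arrival, yet both are normalized into the same Standard Event. The payload shapes below are illustrative only, not a MapexOS contract.

```ts
// 1. IoT telemetry pushed by a gateway (MQTT) -- shape is hypothetical
const rawTelemetry = {
  devEUI: "a1b2c3d4e5f60708",
  readings: { temp: 23.4, hum: 61, bat: 87 },
  ts: 1718011200,
};

// 2. Business event delivered by a third-party webhook (HTTP) -- shape is hypothetical
const rawWebhook = {
  type: "shipment.delayed",
  shipmentId: "SHP-9921",
  reason: "customs_hold",
  occurredAt: "2025-06-10T09:15:00Z",
};

// Both enter the same pipeline and are normalized into the same Standard Event shape.
```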
Event pipeline overview ​
Every payload flows through a consistent, observable pipeline:
```
Data Source (HTTP/MQTT)
│
▼
JS Execution (Template scripts)
Decode → Validate → Transform
│
▼
Router (Route Groups fan-out)
│
├── Events Service (ClickHouse persistence)
├── Rule Engine (business logic + state)
└── Triggers (technical/communication actions)
```
Pipeline stages
| Stage | Service | Purpose |
|---|---|---|
| Ingestion | Data Source (HTTP / MQTT) | Receive payloads and enforce authentication |
| Processing | JS Execution | Run template scripts (decode → validate → transform) to produce the Standard Event |
| Routing | Router | Fan-out to multiple destinations based on Route Groups |
| Storage | Events Service | Persist event streams in ClickHouse with per-tenant retention (TTL) |
| Automation | Rule Engine + Triggers | Evaluate rules, manage state, and execute actions |
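The Processing stage is where normalization happens. A minimal sketch of a template script's decode → validate → transform flow is shown below; the function names, helpers, and payload shapes are illustrative assumptions, not the actual JS Execution contract.

```ts
// Minimal sketch of a template script's three steps (names are illustrative).

function decode(raw: { readings?: { temp?: number } }) {
  // Unpack the vendor-specific payload into plain fields.
  return { temperature: raw.readings?.temp };
}

function validate(decoded: { temperature?: number }) {
  if (typeof decoded.temperature !== "number") {
    throw new Error("temperature missing or not numeric");
  }
  return { temperature: decoded.temperature };
}

function transform(valid: { temperature: number }) {
  // Produce the Standard Event described in the next section.
  return {
    eventType: "telemetry.temperature",
    eventId: crypto.randomUUID(),            // any unique identifier
    data: { temperature: valid.temperature },
    metadata: { assetUUID: "asset-123" },    // illustrative context
    created: new Date().toISOString(),
  };
}
```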
Standard Event (normalized payload) ​
All sources are transformed into a consistent Standard Event schema:
```
{
  eventType: string,   // Classification (e.g., "telemetry.temperature")
  eventId: string,     // Unique identifier
  data: object,        // Normalized payload
  metadata?: object,   // Optional context (assetUUID, orgId, correlationId)
  created: string      // ISO 8601 timestamp
}
```
This contract enables:
- Predictable interoperability across services
- Unified querying via EVA dynamic fields
- Composable automation (routes, rules, triggers)
- Traceability through consistent identifiers (eventId / correlationId)
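As a concrete illustration, a single temperature reading normalized by a template script might look like this (all identifiers and values are made up):

```ts
const example = {
  eventType: "telemetry.temperature",
  eventId: "evt-abc-123",
  data: { temperature: 23.4, unit: "C" },
  metadata: {
    assetUUID: "asset-7f3a",
    orgId: "org-123",
    correlationId: "corr-55e1",
  },
  created: "2025-06-10T09:15:00.000Z",
};
```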
Routing and fan-out ​
The Router receives Standard Events and dispatches them to one or more destinations based on Route Groups.
Route Groups ​
A Route Group defines how events are distributed:
| Field | Description |
|---|---|
| Name | Human-readable identifier |
| Conditions | Optional filters to match event types and/or values |
| Destinations | One or many targets (internal services or external outputs) |
Routing destinations ​
| Destination | Description |
|---|---|
| Events Service | Persistent storage for search, audit, and analytics |
| Rule Engine | Stateful automation and business logic evaluation |
| Triggers | Direct action execution (webhooks, notifications, queues) |
| External systems | Forward to third-party APIs, data pipelines, or downstream platforms |
Fan-out example ​
```
Route Group: "temperature-events"
├── Always → Events Service (store processed events)
├── If EVA.temperature >= 80 → Rule Engine (evaluate escalation rules)
└── If metadata.severity == "critical" → Triggers (Slack + Teams notification)
```
A single event can be routed to multiple destinations simultaneously, enabling low-latency alerting while still persisting data for reporting.
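Expressed as configuration, the route group above could look roughly like the sketch below; the field names and condition syntax are illustrative assumptions, not the authoritative Route Group schema.

```ts
const temperatureEvents = {
  name: "temperature-events",
  routes: [
    { destination: "events-service" }, // always persist processed events
    {
      conditions: [{ field: "EVA.temperature", operator: ">=", value: 80 }],
      destination: "rule-engine",
    },
    {
      conditions: [{ field: "metadata.severity", operator: "==", value: "critical" }],
      destination: "triggers",
      actions: ["slack", "teams"],
    },
  ],
};
```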
Event streams ​
MapexOS organizes persisted data into streams for performance, access control, and operational clarity.
Streams in v1.0.0 ​
| Stream | Description | Typical use |
|---|---|---|
| Processed Events | Normalized Standard Events | Dashboards, analytics, queries, reporting |
| Raw Events | Original payloads as received (debug-dependent) | Troubleshooting, replay, forensic analysis |
| Router Logs | Routing decisions and destinations | Auditing fan-out behavior |
| Rule Logs | Rule evaluations and match results | Compliance, auditability, debugging |
| Trigger Logs | Action execution outcome (HTTP status, duration, retries) | Delivery tracking and reliability |
Note
Raw/debug streams may be stored selectively depending on the asset's `debugEnabled` configuration.
Events Service ​
The Events Service consumes event streams via NATS and persists them into ClickHouse tables optimized for append-heavy workloads.
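Conceptually, one of these consumers could look like the sketch below. It uses the public nats.js client API purely for illustration; the stream and consumer names, NATS address, and persistence helper are assumptions, not the actual Events Service implementation.

```ts
import { connect } from "nats";

// Hypothetical persistence helper -- the real storage layout is described below.
async function persistToClickHouse(rows: Record<string, unknown>[]): Promise<void> {
  // e.g. batch INSERT into the stream-optimized ClickHouse table
}

async function consumeProcessedEvents() {
  const nc = await connect({ servers: "nats://localhost:4222" }); // assumed address
  const js = nc.jetstream();

  // Assumed JetStream stream and durable consumer names.
  const consumer = await js.consumers.get("EVENTS", "events-service-processed");
  const messages = await consumer.consume();

  for await (const m of messages) {
    const event = m.json<Record<string, unknown>>(); // Standard Event from events.processed
    await persistToClickHouse([event]);
    m.ack();
  }
}
```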
High-level architecture ​
```
NATS JetStream subjects
├─ events.processed
├─ events.raw
├─ logs.router
├─ logs.rules
└─ logs.triggers
│
▼
Events Service consumers
│
▼
ClickHouse
(stream-optimized tables)
```
Storage characteristics (ClickHouse)
| Capability | Description |
|---|---|
| Write performance | Optimized for high-ingest append workloads |
| Query performance | Time-range and tenant-filtered queries at scale |
| Retention (TTL) | Automatic expiration per tenant / stream |
| Analytics | Aggregations and reporting without moving data elsewhere |
Data retention (TTL) ​
MapexOS supports per-tenant retention policies, allowing each organization to control storage cost and compliance requirements.
Retention policy levels ​
| Scope | Description |
|---|---|
| Platform default | Global baseline retention |
| Per organization | Tenant-level override |
| Per stream | Different retention per stream type (processed vs logs vs raw) |
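Read the table as "most specific scope wins": a stream override beats an organization override, which beats the platform default. A minimal sketch of that precedence, assuming a hypothetical policy object (not an actual MapexOS API):

```ts
interface RetentionPolicy {
  platformDefaultDays: number;
  orgOverrideDays?: number;                    // per organization
  streamOverrideDays?: Record<string, number>; // per stream type
}

// Most specific scope wins: stream override → org override → platform default.
function effectiveRetentionDays(policy: RetentionPolicy, stream: string): number {
  return (
    policy.streamOverrideDays?.[stream] ??
    policy.orgOverrideDays ??
    policy.platformDefaultDays
  );
}

// effectiveRetentionDays(
//   { platformDefaultDays: 90, orgOverrideDays: 30, streamOverrideDays: { raw: 7 } },
//   "raw"
// ) === 7
```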
TTL behavior ​
TTL is enforced at the storage layer using time-based expiration:
```sql
-- Illustrative TTL policy (ClickHouse-style)
TTL timestamp + INTERVAL retention_days DAY
```
Querying events with EVA (dynamic fields)
MapexOS extracts EVA dynamic fields from heterogeneous payloads into typed structures that are efficient to query.
EVA storage pattern ​
Dynamic fields are stored in typed arrays to enable flexible indexing without schema migrations:
```
numberFields  Array(Tuple(field String, value Float64))
stringFields  Array(Tuple(field String, value String))
boolFields    Array(Tuple(field String, value UInt8))
dateFields    Array(Tuple(field String, value DateTime64(3)))
```
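To make the pattern concrete, the sketch below shows how a normalized payload could be flattened into these typed arrays before insertion; the splitting logic and helper name are illustrative, not the actual extraction code.

```ts
type NumberField = { field: string; value: number };
type StringField = { field: string; value: string };

function extractEvaFields(data: Record<string, unknown>) {
  const numberFields: NumberField[] = [];
  const stringFields: StringField[] = [];

  for (const [field, value] of Object.entries(data)) {
    if (typeof value === "number") numberFields.push({ field, value });
    else if (typeof value === "string") stringFields.push({ field, value });
    // bool and date handling omitted for brevity
  }
  return { numberFields, stringFields };
}

// extractEvaFields({ temperature: 23.4, model: "TH-200" })
// -> numberFields: [{ field: "temperature", value: 23.4 }]
//    stringFields: [{ field: "model", value: "TH-200" }]
```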
Query examples (illustrative)
The examples below show the intent and pattern. Exact syntax may vary depending on table naming and ClickHouse function usage.
Filter: temperature above threshold
```sql
SELECT
    timestamp,
    asset_uuid,
    arrayFirst(x -> x.field = 'temperature', numberFields).value AS temperature
FROM events_processed
WHERE
    org_id = 'org-123'
    AND arrayExists(x -> x.field = 'temperature', numberFields)
    AND arrayFirst(x -> x.field = 'temperature', numberFields).value > 30
ORDER BY timestamp DESC
LIMIT 100;
```
Aggregation: average temperature per sensor model
```sql
SELECT
    arrayFirst(x -> x.field = 'model', stringFields).value AS model,
    COUNT(*) AS events,
    AVG(arrayFirst(x -> x.field = 'temperature', numberFields).value) AS avg_temp
FROM events_processed
WHERE
    org_id = 'org-123'
    AND timestamp >= now() - INTERVAL 24 HOUR
GROUP BY model
ORDER BY events DESC;
```
Why EVA matters (enterprise view)
| Benefit | Outcome |
|---|---|
| Schema agility | Add new fields without DB migrations |
| Cross-vendor comparability | Query the same logical attribute across device types |
| Search readiness | UI and API can expose universal filters and autocomplete |
| Cost control | Avoid exploding table columns for every vendor payload |
Pipeline observability ​
MapexOS provides observability at each stage of the event lifecycle.
Debug logs are opt-in (asset-level) ​
To protect throughput and storage costs, debug and history logging is opt-in:
- `debugEnabled = true` → persists debug + history logs (step-by-step traces)
- `debugEnabled = false` → persists error-only logs (default)
Enable debug temporarily during onboarding or incident investigation, then disable it once stable.
What you can observe ​
| Stream | What it captures |
|---|---|
| JS Execution | Script execution time, memory usage, validation/transform errors |
| Router logs | Fan-out decisions, matched conditions, destinations |
| Rule logs | Match evaluation, state usage, actions selected |
| Trigger logs | Delivery outcome, response codes, duration, retry metadata |
Trace narrative (example) ​
```
eventId: evt-abc-123
├── JS Execution: template=temp-sensor-v1 processed in 12ms
├── Router: matched destinations=[events.processed, rules.evaluate]
├── Rule Engine: rule=high-temp-alert result=MATCH
└── Trigger: slack.notify delivered=200 duration=340ms
```
Best practices
| Practice | Recommendation |
|---|---|
| Design route groups by domain | Keep fan-out rules aligned to business ownership (cold-chain, compliance, energy, etc.) |
| Set TTL intentionally | Configure retention per tenant and per stream to balance cost vs governance |
| Invest in EVA fields | Extract the attributes you will search and aggregate on |
| Treat raw/debug as diagnostic | Enable debug only when necessary and timebox it |
| Monitor processing latency | Track ingestion → persistence latency and error rate trends |
| Keep eventType consistent | Use stable naming conventions for long-term analytics |
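As an illustration of the last practice, a small and stable eventType vocabulary keeps long-term analytics queries simple; the names below are examples, not a prescribed list.

```ts
// Illustrative eventType naming convention: <domain>.<measurement|action>
const EVENT_TYPES = [
  "telemetry.temperature",
  "telemetry.humidity",
  "device.status",
  "alert.threshold_exceeded",
] as const;

type EventType = (typeof EVENT_TYPES)[number];
```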
Next steps ​
- Assets & Templates — Template processing and EVA extraction
- Rules & Business Rules — Stateful automation and triggers
- Roles & Permissions — Stream-level permissions and governance
