
Events & Pipeline ​

MapexOS provides a unified event pipeline that ingests, normalizes, routes, and persists events from heterogeneous sources into a consistent operational backbone.

At the center of this pipeline is the Events Service, which persists events in ClickHouse for high-throughput analytics and supports per-tenant retention (TTL).

Applies to v1.0.0 — HTTP and MQTT ingestion, fan-out routing via Route Groups, ClickHouse storage, and EVA-based querying.


What is an “event” in MapexOS? ​

In MapexOS, an event is the normalized representation of any incoming signal:

  • IoT telemetry (temperature, humidity, battery, counters)
  • Device status and health checks
  • Business events from third-party systems (webhooks)
  • Operational alerts generated by automation

MapexOS treats all of them as first-class events and processes them through the same pipeline.


Event pipeline overview ​

Every payload flows through a consistent, observable pipeline:

txt
Data Source (HTTP/MQTT)
      │
      ▼
JS Execution (Template scripts)
Decode → Validate → Transform
      │
      ▼
Router (Route Groups fan-out)
      │
      ├── Events Service (ClickHouse persistence)
      ├── Rule Engine (business logic + state)
      └── Triggers (technical/communication actions)

Pipeline stages ​

Stage | Service | Purpose
--- | --- | ---
Ingestion | Data Source (HTTP / MQTT) | Receive payloads and enforce authentication
Processing | JS Execution | Run template scripts (decode → validate → transform) to produce the Standard Event
Routing | Router | Fan-out to multiple destinations based on Route Groups
Storage | Events Service | Persist event streams in ClickHouse with per-tenant retention (TTL)
Automation | Rule Engine + Triggers | Evaluate rules, manage state, and execute actions
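
To make the Processing stage concrete, here is a minimal sketch of what a template script's decode → validate → transform flow might look like. The function names and payload shape are illustrative assumptions; the actual hooks are defined by the JS Execution service.

ts
// Illustrative template script (hypothetical structure, not the literal JS Execution API).
// It turns a raw payload into the Standard Event described in the next section.

// Decode: parse the raw HTTP/MQTT payload (assumed here to be a JSON string).
function decode(raw: string): Record<string, unknown> {
  return JSON.parse(raw);
}

// Validate: reject payloads that lack the readings we expect.
function validate(decoded: Record<string, unknown>): void {
  if (typeof decoded.temperature !== "number") {
    throw new Error("missing numeric 'temperature' field");
  }
}

// Transform: map the decoded payload onto the Standard Event contract.
function transform(decoded: Record<string, unknown>, assetUUID: string) {
  return {
    eventType: "telemetry.temperature",
    eventId: crypto.randomUUID(),              // unique identifier
    data: { temperature: decoded.temperature },
    metadata: { assetUUID },
    created: new Date().toISOString(),         // ISO 8601 timestamp
  };
}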

Standard Event (normalized payload) ​

All sources are transformed into a consistent Standard Event schema:

ts
{
  eventType: string,     // Classification (e.g., "telemetry.temperature")
  eventId: string,       // Unique identifier
  data: object,          // Normalized payload
  metadata?: object,     // Optional context (assetUUID, orgId, correlationId)
  created: string        // ISO 8601 timestamp
}
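
For example, a decoded temperature reading could be represented as follows (field values are illustrative):

ts
{
  eventType: "telemetry.temperature",
  eventId: "evt-abc-123",
  data: { temperature: 23.4, unit: "C" },
  metadata: { assetUUID: "asset-42", orgId: "org-123", correlationId: "corr-789" },
  created: "2025-01-15T10:32:00.000Z"
}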

This contract enables:

  • Predictable interoperability across services
  • Unified querying via EVA dynamic fields
  • Composable automation (routes, rules, triggers)
  • Traceability through consistent identifiers (eventId / correlationId)

Routing and fan-out ​

The Router receives Standard Events and dispatches them to one or more destinations based on Route Groups.

Route Groups ​

A Route Group defines how events are distributed:

Field | Description
--- | ---
Name | Human-readable identifier
Conditions | Optional filters to match event types and/or values
Destinations | One or many targets (internal services or external outputs)
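
As a rough sketch, a Route Group definition could take a shape like the one below. The configuration format, operator names, and destination identifiers are illustrative assumptions, not the literal MapexOS schema.

ts
// Illustrative shape of a Route Group definition (fields follow the table above).
const temperatureEvents = {
  name: "temperature-events",
  conditions: [
    { field: "eventType", operator: "startsWith", value: "telemetry.temperature" },
  ],
  destinations: ["events-service", "rule-engine"],
};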

Routing destinations ​

Destination | Description
--- | ---
Events Service | Persistent storage for search, audit, and analytics
Rule Engine | Stateful automation and business logic evaluation
Triggers | Direct action execution (webhooks, notifications, queues)
External systems | Forward to third-party APIs, data pipelines, or downstream platforms

Fan-out example ​

txt
Route Group: "temperature-events"
├── Always → Events Service (store processed events)
├── If EVA.temperature >= 80 → Rule Engine (evaluate escalation rules)
└── If metadata.severity == "critical" → Triggers (Slack + Teams notification)

A single event can be routed to multiple destinations simultaneously, enabling low-latency alerting while still persisting data for reporting.


Event streams ​

MapexOS organizes persisted data into streams for performance, access control, and operational clarity.

Streams in v1.0.0 ​

Stream | Description | Typical use
--- | --- | ---
Processed Events | Normalized Standard Events | Dashboards, analytics, queries, reporting
Raw Events | Original payloads as received (debug-dependent) | Troubleshooting, replay, forensic analysis
Router Logs | Routing decisions and destinations | Auditing fan-out behavior
Rule Logs | Rule evaluations and match results | Compliance, auditability, debugging
Trigger Logs | Action execution outcome (HTTP status, duration, retries) | Delivery tracking and reliability

Note
Raw/debug streams may be stored selectively depending on the asset’s debugEnabled configuration.


Events Service ​

The Events Service consumes event streams via NATS and persists them into ClickHouse tables optimized for append-heavy workloads.

High-level architecture ​

txt
NATS JetStream subjects
  ├─ events.processed
  ├─ events.raw
  ├─ logs.router
  ├─ logs.rules
  └─ logs.triggers
           │
           ▼
     Events Service consumers
           │
           ▼
        ClickHouse
 (stream-optimized tables)
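
The sketch below outlines this consume-and-persist loop in TypeScript, assuming the official nats and @clickhouse/client packages. Connection details, stream, consumer, and table names are assumptions for the sketch; the actual Events Service implementation is internal to MapexOS.

ts
import { connect } from "nats";
import { createClient } from "@clickhouse/client";

// Illustrative consume-and-persist loop, not the actual Events Service code.
async function run() {
  const nc = await connect({ servers: "nats://localhost:4222" });
  const js = nc.jetstream();

  // Assumes a JetStream stream "EVENTS" with a durable consumer "events-service"
  // subscribed to the events.processed subject.
  const consumer = await js.consumers.get("EVENTS", "events-service");
  const clickhouse = createClient({ url: "http://localhost:8123" });

  const messages = await consumer.consume();
  for await (const m of messages) {
    const event = JSON.parse(new TextDecoder().decode(m.data));

    // Append the normalized event to a stream-optimized table.
    await clickhouse.insert({
      table: "events_processed",
      values: [event],
      format: "JSONEachRow",
    });

    m.ack(); // Acknowledge only after the row has been persisted.
  }
}

run().catch(console.error);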

Storage characteristics (ClickHouse) ​

Capability | Description
--- | ---
Write performance | Optimized for high-ingest append workloads
Query performance | Time-range and tenant-filtered queries at scale
Retention (TTL) | Automatic expiration per tenant / stream
Analytics | Aggregations and reporting without moving data elsewhere

Data retention (TTL) ​

MapexOS supports per-tenant retention policies, allowing each organization to control storage costs and meet its compliance requirements.

Retention policy levels ​

Scope | Description
--- | ---
Platform default | Global baseline retention
Per organization | Tenant-level override
Per stream | Different retention per stream type (processed vs logs vs raw)
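
A simple way to reason about how these scopes combine is a most-specific-wins lookup, sketched below. The shapes and the example values are illustrative, not the actual MapexOS configuration model.

ts
// Illustrative resolution of effective retention (in days): a per-stream override
// wins over a per-organization override, which wins over the platform default.
interface RetentionPolicy {
  platformDefaultDays: number;
  orgOverrideDays?: number;
  streamOverrideDays?: Record<string, number>; // keyed by stream, e.g. "raw", "processed"
}

function effectiveRetentionDays(policy: RetentionPolicy, stream: string): number {
  return (
    policy.streamOverrideDays?.[stream] ??
    policy.orgOverrideDays ??
    policy.platformDefaultDays
  );
}

// Example: raw events expire sooner than processed events for this tenant.
const policy: RetentionPolicy = {
  platformDefaultDays: 90,
  orgOverrideDays: 180,
  streamOverrideDays: { raw: 14 },
};
effectiveRetentionDays(policy, "raw");       // 14
effectiveRetentionDays(policy, "processed"); // 180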

TTL behavior ​

TTL is enforced at the storage layer using time-based expiration:

sql
-- Illustrative TTL policy (ClickHouse-style)
TTL timestamp + INTERVAL retention_days DAY

Querying events with EVA (dynamic fields) ​

MapexOS extracts EVA dynamic fields from heterogeneous payloads into typed structures that are efficient to query.

EVA storage pattern ​

Dynamic fields are stored in typed arrays to enable flexible indexing without schema migrations:

sql
numberFields  Array(Tuple(field String, value Float64))
stringFields  Array(Tuple(field String, value String))
boolFields    Array(Tuple(field String, value UInt8))
dateFields    Array(Tuple(field String, value DateTime64(3)))
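
As a sketch of what this extraction might look like before insert, the snippet below flattens a normalized payload into the typed arrays above. The row shape is illustrative; the real extraction is handled by the pipeline.

ts
// Illustrative flattening of dynamic fields into EVA typed arrays.
type NumberField = { field: string; value: number };
type StringField = { field: string; value: string };
type BoolField = { field: string; value: 0 | 1 };

function toEvaRow(data: Record<string, unknown>) {
  const numberFields: NumberField[] = [];
  const stringFields: StringField[] = [];
  const boolFields: BoolField[] = [];

  for (const [field, value] of Object.entries(data)) {
    if (typeof value === "number") numberFields.push({ field, value });
    else if (typeof value === "string") stringFields.push({ field, value });
    else if (typeof value === "boolean") boolFields.push({ field, value: value ? 1 : 0 });
  }
  return { numberFields, stringFields, boolFields };
}

// Result: { numberFields: [{ field: "temperature", value: 23.4 }],
//           stringFields: [{ field: "model", value: "TS-100" }], boolFields: [] }
toEvaRow({ temperature: 23.4, model: "TS-100" });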

Query examples (illustrative) ​

The examples below show the intent and pattern. Exact syntax may vary depending on table naming and ClickHouse function usage.

Filter: temperature above threshold

sql
SELECT
  timestamp,
  asset_uuid,
  arrayFirst(x -> x.field = 'temperature', numberFields).value AS temperature
FROM events_processed
WHERE
  org_id = 'org-123'
  AND arrayExists(x -> x.field = 'temperature', numberFields)
  AND arrayFirst(x -> x.field = 'temperature', numberFields).value > 30
ORDER BY timestamp DESC
LIMIT 100;

Aggregation: average temperature per sensor model

sql
SELECT
  arrayFirst(x -> x.field = 'model', stringFields).value AS model,
  COUNT(*) AS events,
  AVG(arrayFirst(x -> x.field = 'temperature', numberFields).value) AS avg_temp
FROM events_processed
WHERE
  org_id = 'org-123'
  AND timestamp >= now() - INTERVAL 24 HOUR
GROUP BY model
ORDER BY events DESC;

Why EVA matters (enterprise view) ​

Benefit | Outcome
--- | ---
Schema agility | Add new fields without DB migrations
Cross-vendor comparability | Query the same logical attribute across device types
Search readiness | UI and API can expose universal filters and autocomplete
Cost control | Avoid adding new table columns for every vendor payload

Pipeline observability ​

MapexOS provides observability at each stage of the event lifecycle.

Debug logs are opt-in (asset-level) ​

Debug and history logging is opt-in

To protect throughput and control storage costs:

  • debugEnabled = true → persists debug + history logs (step-by-step traces)
  • debugEnabled = false → persists error-only logs (default)

Enable debug temporarily during onboarding or incident investigation, then disable it once stable.

What you can observe ​

Stream | What it captures
--- | ---
JS Execution | Script execution time, memory usage, validation/transform errors
Router logs | Fan-out decisions, matched conditions, destinations
Rule logs | Match evaluation, state usage, actions selected
Trigger logs | Delivery outcome, response codes, duration, retry metadata

Trace narrative (example) ​

txt
eventId: evt-abc-123
├── JS Execution: template=temp-sensor-v1 processed in 12ms
├── Router: matched destinations=[events.processed, rules.evaluate]
├── Rule Engine: rule=high-temp-alert result=MATCH
└── Trigger: slack.notify delivered=200 duration=340ms

Best practices ​

Practice | Recommendation
--- | ---
Design route groups by domain | Keep fan-out rules aligned to business ownership (cold-chain, compliance, energy, etc.)
Set TTL intentionally | Configure retention per tenant and per stream to balance cost vs governance
Invest in EVA fields | Extract the attributes you will search and aggregate on
Treat raw/debug as diagnostic | Enable debug only when necessary and timebox it
Monitor processing latency | Track ingestion → persistence latency and error rate trends
Keep eventType consistent | Use stable naming conventions for long-term analytics
