# Ouroborus Tech — Scalable, Low‑Cost Environmental Monitoring

<img src="./ouroborus_logo.png" alt="Ouroborus logo" width="200">

## One‑liner
**Ouroborus** is a **manifest‑driven** environmental monitoring platform that turns **low‑cost LoRa sensor kits** into **production‑grade data products** (time‑series + geospatial dashboards + alerts) — deployable in days, scalable to thousands of sensors, and maintainable with minimal manual work.

## Why this exists (pain points we solve)

### 1) Monitoring is still too manual and too fragmented
- Field deployments often rely on **ad‑hoc sensor choices**, **handwritten notes**, and **one‑off wiring/firmware variants**.
- The result: inconsistent data, hard troubleshooting, and high operational cost.

**How Ouroborus fixes it**
- A **single source of truth project manifest (YAML)** drives everything:
  - what sensors exist
  - how they sample & transmit
  - how they are named/tagged
  - what database schema exists
  - what dashboards/alerts exist

### 2) Scaling beyond a pilot breaks quickly
- Adding "just 50 more sensors" typically forces:
  - new firmware variants
  - manual database changes
  - dashboard copy/paste
  - new naming conventions every time

**How Ouroborus fixes it**
- **Universal firmware** + **auto‑generated configs** + **templated infrastructure**.
- Scaling is a matter of adding nodes/sensors to the manifest and re‑generating artifacts.

### 3) Poor connectivity and harsh conditions
- Remote sites have:
  - intermittent coverage
  - power constraints (battery/solar)
  - harsh weather (enclosures, condensation)
  - long distances (fields, rivers, remote industrial sites)

**How Ouroborus fixes it**
- **LoRa/LoRaWAN** between sensors and gateways (low power, long range).
- Gateway forwards to a **BeagleBone with SIM** for internet backhaul.
- Robust buffering and "send every N samples" modes.
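
The "send every N samples" mode can be sketched as a small buffer that only releases a batch for transmission once N readings have accumulated (class and method names here are illustrative, not the actual firmware API):

```python
from collections import deque

class SampleBuffer:
    """Sketch of a "send every N samples" buffer: readings accumulate
    locally and are released as one batch every N samples, so the
    radio can stay off between transmissions to save power."""

    def __init__(self, transmit_every_n: int):
        self.n = transmit_every_n
        self.pending = deque()

    def add(self, reading):
        """Store a reading; return a batch to transmit when N are buffered."""
        self.pending.append(reading)
        if len(self.pending) >= self.n:
            batch = list(self.pending)
            self.pending.clear()
            return batch
        return None  # keep buffering, nothing to transmit yet
```

With `transmit_every_n: 3`, the first two `add()` calls return `None` and the third returns all three buffered readings.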

### 4) Data exists, but decisions don't (no operational workflow)
Common outcomes today:
- "Nice charts" but no action loop
- No automatic anomalies/alerts
- No compliance reporting or audit trail
- No baselines and no benchmarking against seasons/years

**How Ouroborus fixes it**
- Opinionated dashboards + alerting + retention/compression policies.
- A "digitalization layer": data becomes **actionable signals** (thresholds, trends, anomalies, device health).

### 5) Calibration and data quality issues
- Low-cost sensors drift, need calibration metadata, and produce noisy readings.
- Projects fail when users lose trust in the data.

**How Ouroborus fixes it**
- Manifest includes per-sensor metadata (depth, channel, calibration info).
- Backend stores readings with rich `meta` for traceability.
- Dashboards surface outliers and health signals (battery, RSSI/SNR, last seen).

### 6) Vendor lock‑in vs. DIY maintenance burden  
- End‑to‑end vendors can be expensive and inflexible.
- DIY systems become unmaintainable once the original builders move on.

**How Ouroborus fixes it**
- Standard building blocks:
  - **TimescaleDB (Postgres)** for time‑series
  - **Grafana** for dashboards
  - versioned manifest + templates
- Customers can extend, self‑host, or use a managed service.

---

## Digitalization benefits (what customers actually gain)

### Agriculture (farmers, agronomists, cooperatives)
- Irrigation optimization → reduced water + energy usage
- Yield stability via early stress detection (moisture/temperature/pH trends)
- Input reduction (fertilizer/chemicals) through targeted interventions
- Proof & reporting for certifications and insurer/auditor requirements
- Season-over-season learning using consistent, structured time-series

### Towns / municipalities
- Flood and stormwater monitoring (early warning, infrastructure planning)
- Urban heat islands (microclimate measurement for adaptation measures)
- Air quality hotspots (targeted mitigation)
- Smart maintenance: detect anomalies earlier (water levels, pump usage patterns)
- Transparent reporting to stakeholders and regulators

### Industrial operations (utilities, plants, logistics yards)
- Environmental compliance (logging, retention, exports)
- Site safety & operations (temperature/humidity, leak indicators, soil stability)
- Predictive maintenance (trend-based alerts, threshold deviations)
- Reduced downtime via early detection and faster root-cause analysis

Ouroborus turns monitoring into a repeatable workflow:
> install → ingest → visualize → alert → report → improve

---

## Product overview (end‑to‑end)

### Hardware/data path
1. **Sensors** connected to **Seeed / Grove** ecosystem components  
2. Sensors read by **XIAO ESP32‑S3** + LoRa radio  
3. Uplink via **LoRa/LoRaWAN** to a **concentrator/gateway** 
4. Gateway forwards to **BeagleBone + SIM** (internet backhaul)
5. Data ingested into **TimescaleDB**; visualized in **Grafana** (custom‑themed + geospatial‑first)

### Key design choice: "Manifest‑driven everything"
A **single project manifest** defines:
- customer + project metadata
- gateways and their sites
- nodes (ESP32 devices), location, sensor types, sampling config
- database schema + retention/compression policies
- dashboards and alerts to enable

---

## Single Source of Truth: Project Manifest (YAML)

This manifest is created once per project (via UI or integrator) and versioned.

```yaml
# ==========================
# Project metadata
# ==========================
customer:
  id: "farm_co"
  name: "Farm Co. GmbH"

project:
  id: "north_field_2025"
  name: "North Field Monitoring 2025"
  environment: "prod"     # prod / staging / dev
  timezone: "Europe/Berlin"

# ==========================
# Gateway(s)
# ==========================
gateways:
  - id: "gw-north-01"
    ip: "192.168.10.20"
    lorawan_region: "EU868"
    site_name: "North Field A"
    location:
      lat: 52.5200
      lon: 13.4050

# ==========================
# Nodes (ESP32 devices)
# ==========================
nodes:
  - device_id: "esp32-001"
    hw_id: null
    gateway_id: "gw-north-01"
    role: "soil_station"
    location:
      lat: 52.5201
      lon: 13.4045
      description: "North field, west corner"
    sensors:
      - type: "temperature"
        channel: 0
      - type: "humidity"
        channel: 0
    sampling:
      interval_sec: 300
      transmit_every_n_samples: 1

# ==========================
# Database / Timescale
# ==========================
database:
  schema: "north_field"
  retention_days: 730
  compression_after_days: 30

# ==========================
# Grafana / dashboards
# ==========================
grafana:
  folder: "North Field / Farm Co"
  datasource_name: "TimescaleDB"
  dashboards:
    - id: "overview"
      enabled: true
    - id: "soil_moisture"
      enabled: true
    - id: "device_health"
      enabled: true

# ==========================
# Alerts
# ==========================
alerts:
  moisture_low:
    enabled: true
    threshold: 0.22
    duration_min: 60
  gateway_offline:
    enabled: true
    duration_min: 30
```
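
Because the manifest drives everything downstream, it is worth validating before generating artifacts. A minimal sketch, assuming the key names from the example above (the function name and error style are illustrative):

```python
# Top-level sections the example manifest above always carries.
REQUIRED_TOP_LEVEL = ("customer", "project", "gateways", "nodes", "database")

def validate_manifest(manifest: dict) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = [f"missing section: {key}"
                for key in REQUIRED_TOP_LEVEL if key not in manifest]
    for node in manifest.get("nodes", []):
        if "device_id" not in node:
            problems.append("node without device_id")
        interval = node.get("sampling", {}).get("interval_sec", 0)
        if interval <= 0:
            problems.append(
                f"{node.get('device_id', '?')}: interval_sec must be > 0")
    return problems
```

Running this right after `yaml.safe_load()` turns manifest typos into one clear error list instead of failures halfway through generation.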

---

## projectctl: Generate everything from the manifest

### What it produces
- **DB schema SQL** (hypertables, indexes, retention/compression policies)
- **Grafana provisioning** (datasource + dashboards)
- **Dashboard JSONs** (templated per project)
- **Gateway configs** (env files / config payloads)
- **Device configs** (per ESP32 `config.json`)

### Minimal CLI skeleton (Python)
```python
#!/usr/bin/env python3
from pathlib import Path
import yaml
from jinja2 import Environment, FileSystemLoader

ROOT = Path(__file__).parent
TEMPLATES = ROOT / "templates"
BUILD = ROOT / "build"

env = Environment(loader=FileSystemLoader(str(TEMPLATES)), autoescape=False)

def load_manifest(path: str) -> dict:
    with open(path, "r") as f:
        return yaml.safe_load(f)

def render(template: str, **ctx) -> str:
    return env.get_template(template).render(**ctx)

def ensure_dirs():
    (BUILD / "database").mkdir(parents=True, exist_ok=True)
    (BUILD / "grafana" / "provisioning").mkdir(parents=True, exist_ok=True)
    (BUILD / "grafana" / "dashboards").mkdir(parents=True, exist_ok=True)
    (BUILD / "gateway").mkdir(parents=True, exist_ok=True)
    (BUILD / "devices").mkdir(parents=True, exist_ok=True)

def generate_all(manifest: dict):
    ensure_dirs()

    # DB schema
    (BUILD / "database" / "schema.sql").write_text(
        render("database/schema.sql.j2", database=manifest["database"])
    )

    # Gateway configs (example)
    for gw in manifest.get("gateways", []):
        (BUILD / "gateway" / f"{gw['id']}.env").write_text(
            render("gateway/config.env.j2",
                   customer=manifest["customer"],
                   project=manifest["project"],
                   gateway=gw)
        )

    # Device configs
    for node in manifest.get("nodes", []):
        (BUILD / "devices" / f"{node['device_id']}.json").write_text(
            render("device/device_config.json.j2",
                   customer=manifest["customer"],
                   project=manifest["project"],
                   node=node,
                   sampling=node.get("sampling", {}))
        )
if __name__ == "__main__":
    import sys
    # e.g. python projectctl.py manifest.yaml
    generate_all(load_manifest(sys.argv[1]))
```

### Example DB schema template (TimescaleDB)
```sql
CREATE SCHEMA IF NOT EXISTS {{ database.schema }};

CREATE TABLE IF NOT EXISTS {{ database.schema }}.measurements (
  time timestamptz NOT NULL,
  device_id text NOT NULL,
  sensor_type text NOT NULL,
  value double precision NOT NULL,
  unit text,
  meta jsonb
);

SELECT create_hypertable('{{ database.schema }}.measurements', 'time', if_not_exists => TRUE);

CREATE INDEX IF NOT EXISTS measurements_device_time_idx
  ON {{ database.schema }}.measurements (device_id, time DESC);

-- Retention & compression policies (TimescaleDB)
SELECT add_retention_policy(
  '{{ database.schema }}.measurements',
  INTERVAL '{{ database.retention_days }} days',
  if_not_exists => TRUE
);

-- Compression must be enabled on the hypertable before a policy can be added
ALTER TABLE {{ database.schema }}.measurements SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'device_id'
);

SELECT add_compression_policy(
  '{{ database.schema }}.measurements',
  INTERVAL '{{ database.compression_after_days }} days',
  if_not_exists => TRUE
);
```

---

## Universal ESP32 Firmware (one binary, many configurations)

### Key idea
- Ship **one firmware image** for all devices.
- Each device reads **config.json** from SPIFFS/LittleFS.
- Optional: backend can push config changes (downlink or periodic online sync).

### On‑device config.json (generated by projectctl)
```json
{
  "customer_id": "farm_co",
  "project_id": "north_field_2025",
  "device_id": "esp32-001",
  "gateway_id": "gw-north-01",
  "sampling": { "interval_sec": 300, "transmit_every_n_samples": 1 },
  "sensors": [
    { "type": "temperature", "channel": 0, "enabled": true },
    { "type": "humidity", "channel": 0, "enabled": true }
  ]
}
```

### Firmware modules
- **ConfigManager**: read/parse config.json; validate; defaults
- **SensorManager**: `sensor_type → driver`; read all enabled sensors
- **LoRaWANManager**: join/uplink/downlink; command channel (optional)
- **OTAUpdater**: update firmware when features change
- **Scheduler**: sampling cadence, buffering, transmit policy

### Compact payload over LoRa (not JSON)
For airtime efficiency, use a compact binary payload.

Example layout:
```
Byte 0: protocol version
Byte 1..4: device logical ID hash
Byte 5..8: unix timestamp (seconds)
Then for each reading:
  1 byte sensor_type_id
  2 bytes scaled value (int16, e.g. value * 100)
  1 byte flags (quality / reserved)
```
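
The layout above can be packed and unpacked with `struct`. This is a sketch under stated assumptions: big-endian byte order, CRC32 as the device-ID hash, and an illustrative `sensor_type_id` mapping, none of which are fixed by the layout itself:

```python
import struct
import time
import zlib

SENSOR_TYPE_IDS = {"temperature": 1, "humidity": 2}  # illustrative mapping

def encode_payload(device_id: str, readings, ts=None) -> bytes:
    """Pack the layout above: 1-byte version, 4-byte device hash,
    4-byte unix timestamp, then (type_id, int16 value*100, flags)
    per reading. `readings` is a list of (sensor_type, value, flags)."""
    ts = int(ts if ts is not None else time.time())
    dev_hash = zlib.crc32(device_id.encode())  # one possible logical-ID hash
    out = struct.pack(">BII", 1, dev_hash, ts)
    for sensor_type, value, flags in readings:
        out += struct.pack(">BhB", SENSOR_TYPE_IDS[sensor_type],
                           int(round(value * 100)), flags)
    return out

def decode_payload(payload: bytes):
    """Inverse of encode_payload; returns (version, dev_hash, ts, readings)."""
    version, dev_hash, ts = struct.unpack_from(">BII", payload, 0)
    names = {v: k for k, v in SENSOR_TYPE_IDS.items()}
    readings = []
    for off in range(9, len(payload), 4):  # 9-byte header, 4 bytes per reading
        type_id, scaled, flags = struct.unpack_from(">BhB", payload, off)
        readings.append((names[type_id], scaled / 100.0, flags))
    return version, dev_hash, ts, readings
```

A node with two readings sends a 17-byte frame (9-byte header + 2 × 4 bytes) versus well over 100 bytes for the equivalent JSON, which matters directly for LoRa airtime and duty-cycle limits.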

---

## Backend services

### Ingestion service (concept)
Responsibilities:
- decode LoRa payload
- validate device/project mapping
- write into TimescaleDB
- update device "last seen" health metrics
- integrate with MQTT/REST if needed

Pseudo‑ingest:
```python
def ingest(payload: bytes):
    msg = decode(payload)  # -> device_id, ts, readings[]
    for r in msg.readings:
        write_measurement(
            time=msg.timestamp,
            device_id=msg.device_id,
            sensor_type=r.sensor_type,
            value=r.value,
            unit=r.unit,
            meta={"gateway_id": msg.gateway_id, "q": r.quality}
        )
```
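
One way to keep `write_measurement` testable without a live database is to split building the parameterized INSERT from executing it; the pair can then be handed to any Postgres driver (e.g. psycopg). A minimal sketch, assuming the `measurements` schema shown earlier (table name and function name are illustrative):

```python
import json

MEASUREMENTS_TABLE = "north_field.measurements"  # would come from the manifest

def build_insert(time, device_id, sensor_type, value, unit=None, meta=None):
    """Return (sql, params) for one reading, matching the measurements
    table: time is a unix timestamp, meta is serialized to jsonb."""
    sql = (f"INSERT INTO {MEASUREMENTS_TABLE} "
           "(time, device_id, sensor_type, value, unit, meta) "
           "VALUES (to_timestamp(%s), %s, %s, %s, %s, %s)")
    params = (time, device_id, sensor_type, value, unit,
              json.dumps(meta or {}))
    return sql, params
```

Using placeholders rather than string interpolation for the values keeps the ingest path safe against malformed or hostile payload contents.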

### Grafana (geospatial‑first)
- Overview map: nodes colored by status/value
- Time-series panels: sensor groups by site/role
- Device health: last-seen, packet loss estimate, battery, RSSI/SNR
- Alerts from manifest: moisture low, temp high, gateway offline, etc.

---

## Why Ouroborus wins (differentiation)

### Manifest‑driven automation
- "Infrastructure as code" for environmental monitoring
- repeatable deployments across customers/projects

### Universal firmware and kit standardization
- fewer firmware variants and field mistakes
- simpler operations and faster onboarding

### Open & extensible stack
- Timescale + Grafana standardizes data and reporting
- plugin-style sensor drivers + dashboard templates

### Lower total cost of ownership
- cheaper hardware **and** cheaper operations
- fewer manual steps and less maintenance complexity

---

## Roadmap (high level)

**Phase 1 — Reliable deployments**
- standardized kits, robust ingestion, baseline dashboards, alerting

**Phase 2 — Autoprovisioning + UI**
- website wizard generates manifest
- device enrollment + key management flows
- automated dashboard generation per project

**Phase 3 — Decision layer**
- trend/anomaly alerts
- compliance report exports
- integrations (GIS, municipal systems, industrial ops tooling)

---

## Appendix: minimal self-hosted stack (example)

```yaml
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg16
    environment:
      POSTGRES_PASSWORD: example
    ports: ["5432:5432"]

  grafana:
    image: grafana/grafana:latest
    ports: ["3000:3000"]
    volumes:
      - ./build/grafana/provisioning:/etc/grafana/provisioning
      - ./build/grafana/dashboards:/var/lib/grafana/dashboards
```

---

## Notes / placeholders
- Replace any sensor catalog and pricing references with real BOM and deployment metrics.
- Add real photos in your materials:
  - sensor kit installed on-site (enclosure, mounting, cabling)
  - technician working in the field
  - dashboard screenshot (map + time-series)
