Architecture
FrogNet is Private Internet infrastructure: locally authoritative naming, resilient routing, deterministic convergence, and controlled propagation of state changes across connected nodes — over any transport available.

Core principles
These aren't aspirations. They're constraints. Every design decision in FrogNet either upholds all seven or doesn't ship.
01 — Local Authority
Every node is authoritative for its own identity, naming, and services. No upstream permission. No DNS dependency. No cloud. The node creates its own complete TCP/IP network on power-up.
02 — Transport Agnosticism
WiFi, Ethernet, ham radio, LoRa, MeshCore, WireGuard tunnels, satellite — FrogNet routes across all simultaneously. Traffic arriving on WiFi can exit over HF radio.
03 — Deterministic Convergence
When networks fragment and rejoin, state converges without consensus protocols. No pause, no reconciliation, no distributed locks. The transient database matches the physics.
04 — Controlled Propagation
Data doesn't flood blindly. Trigger rules govern what crosses each link: critical alerts immediately, bulk history when bandwidth allows, low-priority data waits.
05 — Internet Isolation
The mesh is not reachable from the internet unless deliberately configured. If any node gets internet access, that route becomes the mesh default automatically. The reverse is never true.
06 — Empirical Learning
BLDC-1 learns the structure of actual traffic empirically and gets smarter with every transaction. Network-wide intelligence reduces what crosses the wire to just what changed.
07 — Elastic Topology
Fully connected, partially connected, fragmented, or a single isolated node: every state is operational. The network grows when you add nodes, shrinks when you lose them, splits when links fail, heals when they come back.
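The trigger-rule idea above can be sketched as a priority gate per link. The three priority classes come from the text; the class names and the idle-bandwidth threshold are illustrative assumptions, not FrogNet's actual rule syntax.

```python
# Hedged sketch of per-link trigger rules. The text says: critical alerts
# cross immediately, bulk history crosses when bandwidth allows, and
# low-priority data waits. The 0.5 idle threshold is an assumption.

def should_send(priority: str, link_idle_fraction: float) -> bool:
    """Decide whether a message of the given class may cross this link now."""
    if priority == "alert":
        return True                          # critical: always, immediately
    if priority == "bulk":
        return link_idle_fraction > 0.5      # history: only on a quiet link
    return False                             # low priority: hold for later
```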
BLDC-1 protocol
The FrogNet Host is the ingress and egress point for every single network interface. All traffic passes through the host. That's what gives us compression for every message automatically. The learning process and cache work across all messages on all interfaces — regardless of transport.
When a FrogNet host connects to another, it establishes a persistent tunnel: a long-lived connection carrying multiplexed, concurrent messages. No per-request TCP handshake. Inside the tunnel: opcode, sequence number, payload. No HTTP method line, no headers, no status codes, no Content-Type. 3 bytes of overhead instead of ~800.
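The framing above can be sketched in a few lines. A 1-byte opcode plus a 2-byte sequence number is one plausible split of the 3 bytes of overhead; the actual BLDC-1 field widths are not specified here.

```python
import struct

# Assumed split of the 3-byte header: opcode (uint8) + sequence (uint16),
# big-endian. The real BLDC-1 layout may differ.
HEADER = struct.Struct("!BH")

def pack_frame(opcode: int, seq: int, payload: bytes) -> bytes:
    """Prepend the 3-byte header to the payload."""
    return HEADER.pack(opcode, seq) + payload

def unpack_frame(frame: bytes) -> tuple[int, int, bytes]:
    """Split a frame back into (opcode, seq, payload)."""
    opcode, seq = HEADER.unpack_from(frame)
    return opcode, seq, frame[HEADER.size:]
```

Compare with HTTP, where the method line, headers, and status line alone typically cost hundreds of bytes per request.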
The system watches actual traffic and learns the shape of every request and response empirically. No schema, no configuration file. Once it knows the shape, it caches the most recent response. On the next request: SAME (20 bytes), DIFF (bitmap + changed values), or FULL (stripped + LZW-compressed).
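A minimal sketch of the SAME / DIFF / FULL decision, assuming responses are modeled as lists of field values. The wire formats are illustrative, and zlib stands in for the LZW stage (Python's standard library has no LZW codec).

```python
import zlib  # stand-in for LZW compression on the FULL path

def encode_response(cache: dict, key: str, fields: list[bytes]):
    """Pick SAME, DIFF, or FULL given the last response sent for this key.

    Illustrative only: the real BLDC-1 field model and encodings are not
    specified in the text, just the three-way decision itself.
    """
    prev = cache.get(key)
    if prev == fields:
        return ("SAME", None)                # nothing changed: tiny ack only
    if prev is not None and len(prev) == len(fields):
        # DIFF: positions of changed fields plus their new values
        changed = [i for i, (a, b) in enumerate(zip(prev, fields)) if a != b]
        cache[key] = fields
        return ("DIFF", (changed, [fields[i] for i in changed]))
    # FULL: no usable cache entry, so send everything, compressed
    cache[key] = fields
    return ("FULL", zlib.compress(b"\x00".join(fields)))
```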
Remote 10.x traffic → daemon:9009 ALWAYS (semantic compression happens here). Local traffic → Apache:8080 ALWAYS (direct, no overhead). NEVER direct HTTP to remote port 80. NEVER bypass the proxy for remote traffic. This is what makes the compression network-wide.
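The dispatch rule above can be sketched as a single routing function. The 10.x test, the daemon port 9009, and Apache on 8080 come from the text; using the loopback address for local Apache traffic is an assumption.

```python
import ipaddress

# Mesh address space per the text: remote 10.x destinations go through
# the semantic-compression daemon, never directly over HTTP.
MESH_NET = ipaddress.ip_network("10.0.0.0/8")

def route(dest_ip: str) -> tuple[str, int]:
    """Pick the upstream (host, port) for an outgoing request."""
    if ipaddress.ip_address(dest_ip) in MESH_NET:
        return (dest_ip, 9009)       # remote mesh traffic: daemon, always
    return ("127.0.0.1", 8080)       # local traffic: Apache, directly
```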
Transient database
The network-wide shared database is intentionally ephemeral — a scratchpad, not a source of truth. By not trying to solve distributed consensus over a degraded radio link, we eliminated the entire class of problems that kills every other mesh data system.
Stores sensor data, presence beacons, GPS locations, chat messages, discovery state, and well-known site mappings. Every node has a local copy. The API is simple: entity + action + data. Sensors are the universal abstraction.
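A toy in-memory version of the entity + action + data call shape. The action names (report, latest) and the timestamp field are illustrative assumptions; the text fixes only the call shape and the sensors-as-universal-abstraction model.

```python
import time

class TransientStore:
    """In-memory sketch of the transient database's entity+action+data API.

    Every record carries a timestamp because consumers, not the store,
    judge staleness. A real node would back this with the shared DB.
    """
    def __init__(self):
        self._rows: dict[str, dict] = {}

    def call(self, entity: str, action: str, data: dict):
        key = f"{entity}/{data['id']}"
        if action == "report":                       # write latest reading
            self._rows[key] = {"ts": time.time(), **data}
            return {"ok": True}
        if action == "latest":                       # read, possibly stale
            return self._rows.get(key)
        raise ValueError(f"unknown action: {action}")
```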
It doesn't try to be consistent. It doesn't pause when the network fragments. It doesn't reconcile when fragments rejoin. Consumers handle stale data themselves. Sensor data is inherently time-series and lossy.
Node architecture
DHCP server, DNS (dnsmasq with the .frognet TLD), Apache on port 8080, semantic proxy on port 80. Together, these services bring up the node's complete TCP/IP network on power-up.
MySQL (database name: FrogNet), PHP API endpoints, transient DB for real-time state, permanent DB option for persistent data. Discovery layer (DVD) maps service names to addresses.
Ollama running domain-specific models completely offline. With transient DB access, AI can read and write real-time data — the sensor-to-AI-to-actuator loop runs without the internet.
ESP32 sensor boards connected via Bluetooth, WiFi, or Ethernet to Pi Zero concentrators. Sensors can be anything — the Pi Zero aggregates and pushes to the local database.
WireGuard encryption on all tunnels. Ed25519 signatures for script distribution (TOFU trust). Network membership is the security perimeter — fail-closed by default.
StreamingFrog broker manages tunnel lifecycle: CONNECT → REGISTER → TUNNEL_CREATE → WG_HANDSHAKE → POLL → DISCONNECT → bidirectional RECONNECT. Transit addresses: 10.253.200.x/30.
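The broker lifecycle reads naturally as a transition table. The state names come from the text; treating RECONNECT as a jump back to REGISTER is an assumption, since "bidirectional RECONNECT" is not detailed further.

```python
# Sketch of the StreamingFrog tunnel lifecycle as a transition table.
# POLL looping on itself, and RECONNECT returning to REGISTER, are
# assumptions layered on the sequence given in the text.
TRANSITIONS = {
    "CONNECT":       {"REGISTER"},
    "REGISTER":      {"TUNNEL_CREATE"},
    "TUNNEL_CREATE": {"WG_HANDSHAKE"},
    "WG_HANDSHAKE":  {"POLL"},
    "POLL":          {"POLL", "DISCONNECT"},
    "DISCONNECT":    {"RECONNECT"},
    "RECONNECT":     {"REGISTER"},   # either side may initiate
}

def valid(path: list[str]) -> bool:
    """Check that a sequence of states follows the lifecycle above."""
    return all(b in TRANSITIONS.get(a, set()) for a, b in zip(path, path[1:]))
```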