AEE is purpose-built for IoT sensor data: 50–200-byte payloads where zlib, LZ4, and Huffman coding fall apart. Zero heap. Zero malloc. Runs on a Cortex-M0.
IoT sensors transmit 50–200-byte payloads over cellular and LoRa. At this scale, zlib's 256 KB RAM footprint is prohibitive, LZ4 actually expands 64-byte payloads (ratio 1.03 in the benchmark below), and a per-packet Huffman tree costs more than the data it encodes. AEE uses pre-shared dictionaries tuned to your sensor alphabet: no per-packet overhead, no handshake, just smaller packets from day one.
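AEE's coder and dictionary format are not public; as a rough sketch of the general idea (a pre-shared symbol model driving an entropy coder, so nothing about the model travels with the packet), here is a toy exact arithmetic coder in Python. The frequency table, symbol range, and function names are illustrative assumptions, not AEE's actual format.

```python
from fractions import Fraction
from math import ceil, floor, log2

# Hypothetical pre-shared model: 11 temperature readings (20-30) with
# frequencies both ends know in advance -- no header, no handshake.
FREQ = dict(zip(range(20, 31), [1, 2, 5, 9, 14, 18, 14, 9, 5, 2, 1]))
TOTAL = sum(FREQ.values())
SYMS = sorted(FREQ)
CUM, run = {}, 0
for s in SYMS:          # cumulative frequency table
    CUM[s] = run
    run += FREQ[s]

def encode(msg):
    """Exact (big-rational) arithmetic coding against the shared model."""
    low, width = Fraction(0), Fraction(1)
    for s in msg:       # narrow the interval by each symbol's probability
        low += width * Fraction(CUM[s], TOTAL)
        width *= Fraction(FREQ[s], TOTAL)
    # Shortest binary fraction guaranteed to land inside [low, low + width).
    nbits = floor(-log2(width)) + 2
    return ceil(low * 2**nbits), nbits

def decode(code, nbits, n_syms):
    value = Fraction(code, 2**nbits)
    low, width, out = Fraction(0), Fraction(1), []
    for _ in range(n_syms):
        target = (value - low) / width
        for s in SYMS:  # find the symbol whose sub-interval contains target
            lo = Fraction(CUM[s], TOTAL)
            hi = Fraction(CUM[s] + FREQ[s], TOTAL)
            if lo <= target < hi:
                out.append(s)
                low += width * lo
                width *= Fraction(FREQ[s], TOTAL)
                break
    return out

msg = [25, 24, 25, 26, 25, 23, 25, 27, 25, 25]
code, nbits = encode(msg)
assert decode(code, nbits, len(msg)) == msg
print(f"{len(msg)} readings -> {nbits} bits (raw: {8 * len(msg)} bits)")
```

A production coder would use fixed-width integer arithmetic with bit-by-bit renormalization instead of big rationals, which is what makes a zero-heap Cortex-M0 implementation plausible.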
Benchmark setup: 11 distinct sensor values (temperature/humidity readings in the 20–30 range), 100,000 iterations, averaged. All figures measured on x86; expect proportionally higher latencies on a 16 MHz Cortex-M.
| Codec | Payload | Ratio (compressed ÷ original) | Encode | Decode | RAM |
|---|---|---|---|---|---|
| AEE | 64 B | 0.53 | 1.4 µs | 0.18 µs | 1.7 KB |
| zlib‑1 | 64 B | 0.78 | 4.9 µs | 2.8 µs | 256 KB |
| LZ4 | 64 B | 1.03 | 0.2 µs | 0.01 µs | 16 KB |
| AEE | 1 KB | 0.44 | 12 µs | 6.3 µs | 1.7 KB |
| zlib‑6 | 1 KB | 0.51 | 23 µs | 6.1 µs | 256 KB |
| LZ4 | 1 KB | 0.98 | 1.4 µs | 0.14 µs | 16 KB |
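The ratios above are consistent with simple entropy arithmetic. This back-of-envelope check (an illustration, not AEE's published analysis) assumes each payload byte carries one of 11 roughly equiprobable readings:

```python
from math import log2

# 11 equiprobable symbols carry log2(11) ~ 3.46 bits of information,
# stored in an 8-bit byte -- so the theoretical ratio floor is ~0.43.
bound = log2(11) / 8
print(f"theoretical ratio for 11 equiprobable symbols: {bound:.2f}")
```

The measured 0.44 (1 KB) sits at that bound, and the 0.53 at 64 B reflects fixed per-packet cost dominating at small sizes; a skewed (non-uniform) symbol distribution would push the floor lower still.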
Two paths to value: immediate savings from a firmware-only update to deployed devices, or deeper savings from a hardware redesign of your next-generation platform.
Based on ₹15/device/year license. Savings estimates assume 100K active devices transmitting hourly over cellular.
Two Indian patent applications covering the encoding algorithm and a novel ternary weight encoding for AI inference.
We license AEE for IoT platforms, chipmakers, satellite constellations, and edge computing. Let's talk about your use case.