The Infranet
Current Internet infrastructure is dominated by centralized computing architectures. These models provide high throughput, efficient resource utilization, and simplified management. However, they also concentrate risk: aggregating data in a few locations creates single points of failure and systemic vulnerabilities that can compromise reliability. At the other extreme, decentralized computing architectures, ranging from peer-to-peer and mesh networks to emerging Web3 ecosystems, distribute workloads across many nodes, offering transparency, redundancy, and resilience. Yet these systems often suffer from high coordination overhead, fragmented design, and limited scalability, making them unsuitable for demanding workloads. This creates a fundamental dichotomy in system architecture: centralization delivers efficiency but lacks robustness, while decentralization improves resilience but struggles with integration and scale.
To move beyond this trade-off, a new architectural approach is needed—one that combines the efficiency of centralization with the resilience of decentralization. We propose a distributed computing architecture that introduces a network address virtualization layer, an abstraction that decouples computation from the physical network infrastructure. By removing this dependency, the model enables compute mobility, allowing workloads to migrate seamlessly across heterogeneous environments such as cloud, edge, and IoT systems.
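As a rough illustration of what such a virtualization layer means in practice, the Go sketch below models a minimal address table in which a workload keeps a stable virtual address while the physical endpoint behind it changes as the workload migrates. This is only a sketch of the idea, not the proposed system; the names AddressTable, Bind, and Resolve are illustrative assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// VirtualAddress is a stable, location-independent identifier for a workload.
type VirtualAddress string

// PhysicalEndpoint is wherever the workload currently runs (cloud, edge, device).
type PhysicalEndpoint struct {
	Host string
	Port int
}

// AddressTable is a toy network address virtualization layer: callers resolve
// a virtual address, and a migration only updates the mapping underneath.
type AddressTable struct {
	mu      sync.RWMutex
	mapping map[VirtualAddress]PhysicalEndpoint
}

func NewAddressTable() *AddressTable {
	return &AddressTable{mapping: make(map[VirtualAddress]PhysicalEndpoint)}
}

// Bind publishes (or re-publishes) the physical location of a workload.
func (t *AddressTable) Bind(v VirtualAddress, p PhysicalEndpoint) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.mapping[v] = p
}

// Resolve translates a stable virtual address into the current physical endpoint.
func (t *AddressTable) Resolve(v VirtualAddress) (PhysicalEndpoint, bool) {
	t.mu.RLock()
	defer t.mu.RUnlock()
	p, ok := t.mapping[v]
	return p, ok
}

func main() {
	table := NewAddressTable()
	addr := VirtualAddress("inventory-service")

	// The workload starts in the cloud...
	table.Bind(addr, PhysicalEndpoint{Host: "10.0.4.17", Port: 8080})
	// ...and later migrates to an edge node; its virtual address never changes.
	table.Bind(addr, PhysicalEndpoint{Host: "192.168.1.42", Port: 8080})

	if p, ok := table.Resolve(addr); ok {
		fmt.Printf("%s -> %s:%d\n", addr, p.Host, p.Port)
	}
}
```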
This architecture enforces a clear separation between the control plane and the data plane. The control plane is defined as a logically centralized unit responsible for global coordination, configuration, and system administration. The data plane is decentralized, performing the actual execution of workloads near the source of data to maximize locality, reduce latency, and provide fault tolerance. This design achieves global coordination with local execution, delivering consistent performance across diverse environments while maintaining reliability at scale.
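The toy sketch below makes the split concrete under an assumed, simplistic placement policy: a logically centralized control plane holds the global view of nodes and decides where each workload runs, while decentralized data-plane nodes execute locally near their data. All names here are hypothetical, chosen only to illustrate the control/data separation.

```go
package main

import "fmt"

// Workload is a unit of computation with a hint about where its data lives.
type Workload struct {
	Name       string
	DataRegion string
}

// Node is a data-plane executor; it runs workloads near the data it holds.
type Node struct {
	ID     string
	Region string
}

// Execute is the decentralized data-plane half: purely local work.
func (n Node) Execute(w Workload) {
	fmt.Printf("node %s (%s) running %s locally\n", n.ID, n.Region, w.Name)
}

// ControlPlane is the logically centralized half: it holds the global view
// and decides placement, but never executes workloads itself.
type ControlPlane struct {
	nodes []Node
}

// Place picks a node in the workload's data region to maximize locality,
// falling back to the first known node if no regional match exists.
func (c *ControlPlane) Place(w Workload) Node {
	for _, n := range c.nodes {
		if n.Region == w.DataRegion {
			return n
		}
	}
	return c.nodes[0]
}

func main() {
	cp := &ControlPlane{nodes: []Node{
		{ID: "edge-1", Region: "factory-floor"},
		{ID: "cloud-1", Region: "us-east"},
	}}

	for _, w := range []Workload{
		{Name: "sensor-aggregation", DataRegion: "factory-floor"},
		{Name: "monthly-report", DataRegion: "us-east"},
	} {
		// Global coordination (control plane) followed by local execution (data plane).
		cp.Place(w).Execute(w)
	}
}
```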
We are building a distributed operating system based on our distributed computing architecture to enable Infrastructure Internetworking. By interconnecting diverse computing environments into a cohesive whole, this system allows computation to flow dynamically across the network, delivering scale and resilience without sacrificing efficiency. The network formed from these interconnections will be the foundation of the next generation of the Internet—an evolution we call the Infranet.
Mission Statement
Our mission is to build the infrastructure for the Knowledge Age.
Microstacks is a stack management system designed around the Unix philosophy of simplicity, modularity, and composability. It introduces a network address virtualization layer that decouples network addresses from the underlying network infrastructure, enabling seamless compute mobility across heterogeneous environments. It assembles a stack as a hierarchical structure of router, component, and service blocks, which represent application, transport, and network layer abstractions respectively. This hierarchical structure allows each branch of the stack to be deployed, managed, and scaled independently, enabling a new paradigm for stack management built on distributed deployment and vector scaling.
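As a hedged sketch of how such a hierarchy and its vector scaling might look, the example below models a stack as routers containing components containing services, with each leaf carrying its own replica count; scaling the stack then means adjusting a vector of per-branch counts rather than a single global one. The types and fields shown are assumptions for illustration, not the Microstacks interface.

```go
package main

import "fmt"

// Service is a network-layer block: a leaf of the stack with its own replica count.
type Service struct {
	Name     string
	Replicas int
}

// Component is a transport-layer block grouping the services it exposes.
type Component struct {
	Name     string
	Services []Service
}

// Router is an application-layer block at the root of a branch of the stack.
type Router struct {
	Name       string
	Components []Component
}

// Stack is the hierarchical structure; each Router subtree is a branch that
// can be deployed and scaled independently of its siblings.
type Stack struct {
	Name    string
	Routers []Router
}

// ScaleVector returns the per-service replica counts, branch by branch.
// Adjusting individual entries rather than one global count is the idea
// behind vector scaling, as opposed to uniform horizontal scaling.
func (s Stack) ScaleVector() map[string]int {
	v := make(map[string]int)
	for _, r := range s.Routers {
		for _, c := range r.Components {
			for _, svc := range c.Services {
				v[r.Name+"/"+c.Name+"/"+svc.Name] = svc.Replicas
			}
		}
	}
	return v
}

func main() {
	stack := Stack{
		Name: "shop",
		Routers: []Router{
			{Name: "api", Components: []Component{
				{Name: "orders", Services: []Service{{Name: "order-db", Replicas: 3}}},
				{Name: "catalog", Services: []Service{{Name: "search", Replicas: 5}}},
			}},
		},
	}

	// Each branch carries its own scale; one hot path can grow without
	// touching the rest of the stack.
	for path, replicas := range stack.ScaleVector() {
		fmt.Printf("%s -> %d replicas\n", path, replicas)
	}
}
```

In this toy model, scaling a single hot branch leaves its siblings untouched, which is the property the hierarchical structure is meant to preserve.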
Frameworks                    Microstacks                   Orchestrators
Monolithic Architecture       Modular Architecture          Microservice Architecture
Static Structure              Hierarchical Structure        Dynamic Structure
Centralized Deployment        Distributed Deployment        Decentralized Deployment
Vertical Scaling              Vector Scaling                Horizontal Scaling