RVM — The Virtual Machine Built for the Agentic Age
Agents don't fit in VMs. They need something that understands how they think.
Part of the RuVector ecosystem. Uses RuVix kernel primitives and RVF package format. Designed for Cognitum Seed, Appliance, and future chip targets.
Traditional hypervisors were built for an era of static server workloads —
long-running VMs with predictable resource needs. AI agents are different.
They spawn in milliseconds, communicate in dense, shifting graphs, share
context across trust boundaries, and die without warning. VMs are the wrong
abstraction.
RVM replaces VMs with coherence domains — lightweight, graph-structured
partitions whose isolation, scheduling, and memory placement are driven by how
agents actually communicate. When two agents start talking more, RVM moves
them closer. When trust drops, RVM splits them apart. Every mutation is
proof-gated. Every action is witnessed. The system understands its own
structure.
No KVM. No Linux. No VMs. Bare-metal Rust. Built for agents.
```
Traditional VM:  VM₁  VM₂  VM₃  VM₄     (static, opaque boxes — agents don't fit)
─────────────────────
RVM:  ┌─A──B─┐  ┌─C─┐  D                (dynamic, agent-driven domains)
      │  ↔   │──│ ↔ │──↔                (edges = agent communication weight)
      └──────┘  └───┘                   (auto-split when trust or coupling changes)
```
What Agents Need vs What They Get
| What Agents Need | VMs / Containers | RVM |
| --- | --- | --- |
| Sub-millisecond spawn | Seconds to boot | < 10µs partition switch |
| Dense, shifting comms graph | Static NIC-to-NIC | Graph-weighted CommEdges, auto-rebalanced |
| Shared context with isolation | All or nothing | Capability-gated shared memory, proof-checked |
| Per-agent fault containment | Whole-VM crash | F1–F4 graduated rollback, no reboot needed |
| Audit of every action | External log bolted on | 64-byte witness on every syscall, hash-chained |
| Hibernate and reconstruct | Kill and restart | Dormant tier → rebuilt from witness log |
| Run on 64KB MCUs | Needs gigabytes | Seed profile: 64KB–1MB, capability-enforced |
Why RVM?
Dynamic Re-isolation and Self-Healing Boundaries. Because RVM uses
graph-theoretic mincut algorithms, it can dynamically restructure its isolation
boundaries to match how workloads actually communicate. If an agent in one
partition begins communicating heavily with an agent in another, RVM
automatically triggers a partition split and migrates the agent to optimize
placement — no manual configuration. No existing hypervisor can split or merge
live partitions along a graph-theoretic cut boundary.
Memory Time Travel and Deep Forensics. Traditional virtual memory
permanently overwrites state or blindly swaps it to disk. RVM stores dormant
memory as a checkpoint combined with a delta-compressed witness trail. Any
historical state can be perfectly rebuilt on demand — days or weeks later —
because every privileged action is recorded in a tamper-evident, hash-chained
witness log. External forensic tools can reconstruct past states to answer
precise questions such as "which task mutated this vector store between 14:00
and 14:05 on Tuesday?"
Targeted Fault Rollback Without Global Reboots. When the kernel detects a
coherence violation or memory corruption it does not crash. Instead it finds
the last known-good checkpoint, replays the witness log, explicitly skips the
mutation that caused the failure, and resumes from a corrected state (DC-14,
failure classes F1–F3).
Deterministic Multi-Tenant Edge Orchestration. Existing edge orchestrators
rely on Linux-based VMs or containers, inheriting scheduling unpredictability
and no guarantee of bounded latency with provable isolation. RVM enables
scenarios such as an autonomous vehicle where safety-critical sensor-fusion
agents (Reflex mode, < 10 µs switch) are strictly isolated from low-priority
infotainment agents, or a smart factory floor running hard real-time PLC
control loops safely alongside ML inference agents.
High-Assurance Security on Extreme Microcontrollers. Through its Seed
hardware profile (ADR-138), RVM brings capability-enforced isolation,
proof-gated execution, and witness attestation to deeply constrained IoT
devices with as little as 64 KB of RAM. Delivering this level of zero-trust,
auditable security on microcontroller-class hardware is a novel capability not
provided by any existing embedded operating system.
All types are `no_std`, `forbid(unsafe_code)`, and `deny(missing_docs)` (enforced).
🔍 RVM vs State of the Art (12 differences)
| | RVM | KVM/Firecracker | seL4 | Theseus OS |
| --- | --- | --- | --- | --- |
| Primary abstraction | Coherence domains (graph-partitioned) | Virtual machines | Processes + capabilities | Cells (intralingual) |
| Isolation driver | Dynamic mincut + cut pressure | Hardware EPT/NPT | Formal verification + caps | Rust type system |
| Scheduling signal | Structural coherence (graph metrics) | CPU time / fairness | Priority / round-robin | Cooperative |
| Memory model | 4-tier reconstructable (Hot/Warm/Dormant/Cold) | Demand paging | Untyped memory + retype | Single address space |
| Audit trail | Witness-native (64B hash-chained records) | External logging | Not built-in | Not built-in |
| Mutation control | Proof-gated (3-layer: P1/P2/P3) | Unix permissions | Capability tokens | Rust ownership |
| Partition operations | Live split/merge along graph cuts | Not supported | Not supported | Not supported |
| Linux dependency | None — bare-metal | Yes (KVM is a kernel module) | None | None |
| Language | 95–99% Rust, <500 LoC assembly | C | C + Isabelle/HOL proofs | Rust |
| Target | Edge, IoT, agents | Cloud servers | Safety-critical | Research |
| Boot time | < 250ms to first witness | ~125ms (Firecracker) | Varies | N/A |
| Partition switch | < 10µs | ~2–5µs (VM exit) | ~0.5–1µs (IPC) | N/A (no isolation) |
✨ 6 Novel Capabilities (No Prior Art)
1. Kernel-Level Graph Control Loop
No existing OS uses spectral graph coherence metrics as a scheduling signal. RVM's coherence engine runs mincut algorithms in the kernel's scheduling loop — graph structure directly drives where computation runs, when partitions split, and which memory stays resident.
2. Reconstructable Memory ("Memory Time Travel")
RVM explicitly rejects demand paging. Dormant memory is stored as witness checkpoint + delta compression, not raw bytes. The system can deterministically reconstruct any historical state from the witness log.
3. Proof-Gated Infrastructure
Every state mutation requires a valid proof token verified through a three-tier system: P1 capability (<1µs), P2 policy (<100µs), P3 deep (<10ms, post-v1).
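As a rough illustration, the three-tier gate can be modeled as a dispatch over tier-specific latency budgets. All names here (`ProofTier`, `ProofToken`, `gate_mutation`) are hypothetical stand-ins, not RVM's actual API:

```rust
// Hypothetical sketch of a three-tier proof gate. Names and fields are
// assumptions for illustration; RVM's real token format is not shown here.
#[derive(Debug, PartialEq)]
enum ProofTier { P1Capability, P2Policy, P3Deep }

struct ProofToken { tier: ProofTier, valid: bool }

/// Latency budget in microseconds for each verification tier.
fn budget_us(tier: &ProofTier) -> u64 {
    match tier {
        ProofTier::P1Capability => 1,  // < 1 µs fast path
        ProofTier::P2Policy => 100,    // < 100 µs policy check
        ProofTier::P3Deep => 10_000,   // < 10 ms deep verification (post-v1)
    }
}

/// A mutation proceeds only if its proof token verifies; a real kernel
/// would also enforce the tier's deadline while checking.
fn gate_mutation(token: &ProofToken) -> Result<(), &'static str> {
    if !token.valid {
        return Err("proof rejected: mutation denied");
    }
    let _deadline = budget_us(&token.tier);
    Ok(())
}

fn main() {
    let ok = ProofToken { tier: ProofTier::P1Capability, valid: true };
    assert!(gate_mutation(&ok).is_ok());
    let bad = ProofToken { tier: ProofTier::P2Policy, valid: false };
    assert!(gate_mutation(&bad).is_err());
    assert_eq!(budget_us(&ProofTier::P3Deep), 10_000);
    println!("proof gate sketch ok");
}
```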
4. Witness-Native OS
Every privileged action emits a fixed 64-byte, FNV-1a hash-chained record. Tamper-evident by construction. Full deterministic replay from any checkpoint.
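A minimal sketch of how a 64-bit FNV-1a hash chain yields tamper evidence. The record contents below are illustrative placeholders, not RVM's actual 64-byte record layout:

```rust
// 64-bit FNV-1a hash chain over witness records (record contents are
// illustrative; the real witness format is a fixed 64-byte record).
const FNV_OFFSET: u64 = 0xcbf2_9ce4_8422_2325;
const FNV_PRIME: u64 = 0x0000_0100_0000_01b3;

fn fnv1a(seed: u64, data: &[u8]) -> u64 {
    let mut h = seed;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(FNV_PRIME);
    }
    h
}

/// Chain a new record onto the log: the previous hash seeds the next digest,
/// so altering any earlier record breaks every later link (tamper evidence).
fn chain(prev: u64, record: &[u8]) -> u64 {
    fnv1a(fnv1a(FNV_OFFSET, &prev.to_le_bytes()), record)
}

fn main() {
    let h1 = chain(0, b"syscall: map_region");
    let h2 = chain(h1, b"syscall: grant_cap");
    // Tampering with the first record changes every subsequent link.
    let h1_tampered = chain(0, b"syscall: map_regioN");
    assert_ne!(h1, h1_tampered);
    assert_ne!(h2, chain(h1_tampered, b"syscall: grant_cap"));
    println!("chain head: {:#018x}", h2);
}
```

Because each link seeds the next, verifying the chain head against a trusted checkpoint is enough to detect modification of any earlier record.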
5. Live Partition Split/Merge
Partitions split along graph-theoretic cut boundaries and merge when coherence rises. Capabilities follow ownership (DC-8), regions use weighted scoring (DC-9), merges require 7 preconditions (DC-11).
6. Edge Security on 64KB RAM
Capability-based isolation, proof-gated execution, and witness attestation on microcontroller-class hardware (Cortex-M/R, 64KB RAM).
🎯 Success Criteria (v1)
| # | Criterion | Target |
| --- | --- | --- |
| 1 | All 13 crates compile with `#![no_std]` and `#![forbid(unsafe_code)]` | Enforced |
| 2 | Cold boot to first witness | < 250ms on Appliance hardware |
| 3 | Hot partition switch | < 10 microseconds |
| 4 | Witness record is exactly 64 bytes, cache-line aligned | Compile-time asserted |
| 5 | Capability derivation depth bounded at 8 levels | Enforced |
| 6 | EMA coherence filter operates without floating-point | Implemented |
| 7 | Boot sequence is deterministic and witness-gated | Implemented |
| 8 | Remote memory traffic reduction ≥ 20% vs naive placement | Target |
| 9 | Fault recovery without global reboot (F1–F3) | Target |
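Criterion 6 (an EMA filter with no floating point) can be realized with fixed-point arithmetic. This is a sketch under assumed parameters (Q16.16 format, power-of-two smoothing factor); RVM's actual filter may differ:

```rust
// Integer-only EMA filter in Q16.16 fixed point: no FPU required, which
// matters for kernels that never enable floating-point state. Parameter
// choices here are assumptions for illustration.
struct Ema {
    state_q16: i64,   // filtered value in Q16.16
    alpha_shift: u32, // smoothing factor alpha = 1 / 2^alpha_shift
}

impl Ema {
    fn new(alpha_shift: u32) -> Self {
        Ema { state_q16: 0, alpha_shift }
    }

    /// y += (x - y) >> s, entirely in integer arithmetic.
    fn update(&mut self, sample: i64) -> i64 {
        let x_q16 = sample << 16;
        self.state_q16 += (x_q16 - self.state_q16) >> self.alpha_shift;
        self.state_q16 >> 16 // back to the integer domain
    }
}

fn main() {
    let mut ema = Ema::new(3); // alpha = 1/8
    let mut last = 0;
    for _ in 0..100 {
        last = ema.update(1000);
    }
    // Converges toward the constant input (truncation may leave it 1 low).
    assert!((999..=1000).contains(&last));
    println!("filtered: {last}");
}
```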
🏗️ Implementation Phases
Phase 1: Foundation (M0-M1) — "Can it boot and isolate?"
M0: Bare-metal Rust boot on QEMU AArch64 virt. Reset → EL2 → serial → MMU → first witness.
M5: Memory tier management. Reconstruction from dormant state.
Phase 4: Expansion (M6-M7) — "Can agents run on it?"
M6: WASM agent runtime adapter. Agent lifecycle.
M7: Seed/Appliance hardware bring-up. All success criteria.
🔐 Security Model
Capability-Based Authority. All access controlled through unforgeable kernel-resident tokens. No ambient authority. Seven rights with monotonic attenuation.
Proof-Gated Mutation. No memory remap, device mapping, migration, or partition merge without a valid proof token. Three tiers with strict latency budgets.
Witness-Native Audit. 64-byte records for every mutating operation. Hash-chained for tamper evidence. Deterministic replay from checkpoint + witness log.
Failure Classification. F1 (agent restart) → F2 (partition reconstruct) → F3 (memory rollback) → F4 (kernel reboot). Each escalation witnessed.
🖥️ Target Platforms
| Platform | Profile | RAM | Coherence Engine | WASM |
| --- | --- | --- | --- | --- |
| Seed | Tiny, persistent, event-driven | 64KB–1MB | No (DC-1) | Optional |
| Appliance | Edge hub, deterministic orchestration | 1–32GB | Yes (full) | Yes |
| Chip | Future Cognitum silicon | Tile-local | Hardware-assisted | Yes |
📚 ADR References
| ADR | Topic |
| --- | --- |
| ADR-132 | RVM top-level architecture and 15 design constraints |
1.1 RustyHermit
What it is: A Rust-based lightweight unikernel targeting scalable and predictable runtime for high-performance and cloud computing. Originally a rewrite of HermitCore.
Boot model: RustyHermit supports two deployment modes: (a) running inside a VM via the uhyve hypervisor (which itself requires KVM), and (b) running bare-metal side-by-side with Linux in a multi-kernel configuration. The uhyve path depends on KVM; the multi-kernel path allows bare-metal but assumes a Linux host for the other kernel.
Memory model: Single address space unikernel model. The application and kernel share one address space with no process isolation boundary. Memory safety comes from Rust's ownership model rather than MMU page tables.
Scheduling: Cooperative scheduling within the single unikernel image. No preemptive multitasking between isolated components. The scheduler is optimized for throughput rather than isolation.
RVM relevance: RustyHermit demonstrates that a pure-Rust kernel can achieve competitive performance, but its unikernel design lacks the isolation model RVM requires. RVM's capability-gated multi-task model is fundamentally different. However, RustyHermit's approach to no_std Rust kernel bootstrapping and its minimal dependency chain are instructive for RVM's Phase B bare-metal port.
Key lesson for RVM: Unikernels trade isolation for performance. RVM takes the opposite stance -- isolation is non-negotiable, but it must be capability-based rather than process-based.
1.2 Theseus OS
What it is: A research OS written entirely in Rust exploring "intralingual design" -- closing the semantic gap between compiler and hardware by maximally leveraging language safety and affine types.
Boot model: Boots on bare-metal x86_64 hardware (tested on Intel NUC, Thinkpad) and in QEMU. No dependency on Linux or KVM for operation. Uses a custom bootloader.
Memory model: All code runs at Ring 0 in a single virtual address space, including user applications written in purely safe Rust. Protection comes from the Rust type system rather than hardware privilege levels. The OS can guarantee at compile time that a given application or kernel component cannot violate isolation between modules.
Scheduling: Component-granularity scheduling where OS modules can be dynamically loaded and unloaded at runtime. State management is the central innovation -- Theseus minimizes the states one component holds for another, enabling live evolution of running system components.
RVM relevance: Theseus's intralingual approach is the closest philosophical match to RVM. Both systems bet on Rust's type system as a primary isolation mechanism. However, Theseus runs everything at Ring 0, while RVM uses EL1/EL0 separation with hardware MMU enforcement as a defense-in-depth layer on top of type safety.
Key lesson for RVM: Language-level isolation can replace MMU-based isolation for trusted components, but hardware-enforced boundaries remain essential for untrusted WASM workloads. RVM's hybrid approach (type safety for kernel, MMU for user components) is well-positioned.
1.3 RedLeaf
What it is: An OS developed from scratch in Rust to explore the impact of language safety on OS organization. Published at OSDI 2020.
Boot model: Boots on bare-metal x86_64. No Linux dependency. Custom bootloader with UEFI support.
Memory model: Does not rely on hardware address spaces for isolation. Instead uses only type and memory safety of the Rust language. Introduces "language domains" as the unit of isolation -- a lightweight abstraction for information hiding and fault isolation. Domains can be dynamically loaded and cleanly terminated without affecting other domains.
Scheduling: Domain-aware scheduling where the unit of execution is a domain rather than a process. Domains communicate through shared heaps with ownership transfer semantics that leverage Rust's ownership model for zero-copy IPC.
RVM relevance: RedLeaf's domain model closely parallels RVM's capability-gated task model. Both systems achieve isolation without traditional process boundaries. RedLeaf's shared heap with ownership transfer is conceptually similar to RVM's queue-based IPC with zero-copy ring buffers. RedLeaf also achieves 10Gbps network driver performance matching DPDK, demonstrating that language-based isolation does not inherently sacrifice throughput.
Key lesson for RVM: Language domains with clean termination semantics map well to RVM's RVF component model. The ability to isolate and restart a crashed driver without system-wide impact is exactly what RVM needs for agent workloads.
1.4 Tock OS
What it is: A secure embedded OS for microcontrollers, written in Rust, designed for running multiple concurrent, mutually distrustful applications.
Boot model: Runs on bare-metal Cortex-M and RISC-V microcontrollers. No OS dependency. Direct hardware boot.
Memory model: Dual isolation strategy:
- Capsules (kernel components): Language-based isolation using safe Rust. Zero overhead. Capsules can only be written in safe Rust.
- Processes (applications): Hardware MPU isolation. The MPU limits which memory addresses a process can access; violations trap to the kernel.
Scheduling: Priority-based preemptive scheduling with per-process grant regions for safe kernel-user memory sharing. Tock 2.2 (January 2025) achieved compilation on stable Rust for the first time.
RVM relevance: Tock's dual isolation model (language for trusted, hardware for untrusted) is the same architectural pattern RVM employs. Tock's capsule model directly influenced RVM's approach to kernel extensions. The 2025 TickTock formal verification effort discovered five previously unknown MPU configuration bugs and two interrupt handling bugs that broke isolation -- a cautionary result for any system relying on MPU/MMU configuration correctness.
Key lesson for RVM: Formal verification of the MMU/MPU configuration code in ruvix-aarch64 should be a priority. The TickTock results demonstrate that even mature, well-tested isolation code can harbor subtle bugs.
1.5 Hubris (Oxide Computer)
What it is: A microkernel OS for deeply embedded systems, developed by Oxide Computer Company. Written entirely in Rust. Production-deployed in Oxide rack-mount server service controllers.
Boot model: Bare-metal on ARM Cortex-M microcontrollers. No OS dependency. Static binary with all tasks compiled together.
Memory model: Strictly static architecture. No dynamic memory allocation. No runtime task creation or destruction. The kernel is approximately 2000 lines of Rust. Memory regions are assigned at compile time via a build system configuration (TOML-based task descriptions).
Scheduling: Strictly synchronous IPC model. Preemptive priority-based scheduling. Tasks that crash can be restarted without affecting the rest of the system. No driver code runs in privileged mode.
RVM relevance: Hubris demonstrates that a production-quality Rust microkernel can be extremely small (~2000 lines) while providing real isolation. Its static, no-allocation design philosophy aligns with RVM's "fixed memory layout" constraint. Hubris's approach to compile-time task configuration is analogous to RVM's RVF manifest-driven resource declaration.
Key lesson for RVM: Static resource declaration at boot (from RVF manifest) is a proven pattern. Hubris's production track record at Oxide validates the Rust microkernel approach for real hardware.
1.6 Redox OS
What it is: A complete Unix-like microkernel OS written in Rust, targeting general-purpose desktop and server use.
Boot model: Boots on bare-metal x86_64 hardware. Custom bootloader with UEFI support. The 2025-2026 roadmap includes ARM and RISC-V support.
Memory model: Traditional microkernel with hardware address space isolation. Processes run in separate address spaces. The kernel handles memory management, scheduling, and IPC. Device drivers run in userspace.
Scheduling: Standard microkernel scheduling with userspace servers. Recent 2025 improvements yielded 500-700% file I/O performance gains. Self-hosting is a key roadmap goal.
RVM relevance: Redox proves that a full microkernel OS can be written in Rust and run on real hardware. Its "everything in Rust" approach validates the toolchain. However, Redox's Unix-like POSIX interface is exactly the abstraction mismatch that RVM is designed to avoid. Redox optimizes for human-process workloads; RVM optimizes for agent-vector-graph workloads.
Key lesson for RVM: Redox's experience with driver isolation in userspace and its bare-metal boot process are directly transferable. But RVM should not adopt POSIX semantics.
1.7 Hyperlight (Microsoft)
What it is: A micro-VM manager that creates ultra-lightweight VMs with no OS inside. Open-sourced in 2024-2025, now in the CNCF Sandbox.
Boot model: Creates VMs using hardware hypervisor support (Hyper-V on Windows, KVM on Linux, mshv on Azure). The VMs themselves contain no operating system -- just a linear memory slice and a CPU. VM creation takes 1-2ms, with warm-start latency of 0.9ms.
Memory model: Each micro-VM gets a flat linear memory region. No virtual devices, no filesystem, no OS. The Hyperlight Wasm guest compiles wasmtime as a no_std Rust module that runs directly inside the micro-VM.
Scheduling: Host-managed. The micro-VMs are extremely short-lived function executions. No internal scheduler needed.
RVM relevance: Hyperlight demonstrates the "WASM-in-a-VM-with-no-OS" pattern that is extremely relevant to RVM. The key insight is that wasmtime can be compiled as a no_std component and run without any operating system. RVM's approach of embedding a WASM runtime directly in the kernel aligns with this pattern, but RVM goes further by providing kernel-native vector/graph primitives that Hyperlight lacks.
Key lesson for RVM: Wasmtime's no_std mode is production-viable. The Hyperlight architecture validates the "no OS needed for WASM execution" thesis. RVM should study Hyperlight's wasmtime-platform.h abstraction layer for the Phase B bare-metal WASM port.
2. Capability-Based Systems
2.1 seL4's Capability Model
Architecture: seL4 is the gold standard for capability-based microkernels. It was the first OS kernel to receive a complete formal proof of functional correctness (8,700 lines of C verified from abstract specification down to binary). Every kernel resource is accessed through capabilities -- unforgeable tokens managed by the kernel.
Capability structure: seL4 capabilities encode: an object pointer (which kernel object), access rights (what operations are permitted), and a badge (extra metadata for IPC demultiplexing). Capabilities are stored in CNodes (capability nodes), which are themselves accessed through capabilities, forming a recursive namespace.
Delegation and revocation: Capabilities can be copied (with equal or lesser rights), moved between CNodes, and revoked. Revocation is recursive -- revoking a capability invalidates all capabilities derived from it.
Rust bindings: The sel4-sys crate provides Rust bindings for seL4 system calls; Antmicro and Google developed a version of the bindings designed for maintainability. The seL4 Microkit framework supports Rust as a first-class language.
RVM's adoption of seL4 concepts:
- RVM's ruvix-cap crate implements seL4-style capabilities with CapRights, CapHandle, derivation trees, and epoch-based invalidation
- Maximum delegation depth of 8 (configurable) prevents unbounded chains
- Audit logging with depth-warning threshold at 4
- The GRANT_ONCE right provides non-transitive delegation (not in seL4)
- Unlike seL4's C implementation, RVM's capability manager is #![forbid(unsafe_code)]
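A toy model of the derivation rules described above (monotonic attenuation, depth bound of 8, warning threshold at 4). The types here are illustrative stand-ins, not the real ruvix-cap API:

```rust
// Sketch of capability derivation with monotonic attenuation and a bounded
// derivation depth. Names (Cap, derive) are assumptions for illustration.
use std::collections::HashSet;

const MAX_DEPTH: u8 = 8;
const WARN_DEPTH: u8 = 4;

#[derive(Clone)]
struct Cap {
    rights: HashSet<&'static str>,
    depth: u8,
}

impl Cap {
    /// Derive a child capability: rights may only shrink (monotonic
    /// attenuation) and the chain may not exceed MAX_DEPTH links.
    fn derive(&self, requested: &[&'static str]) -> Result<Cap, &'static str> {
        if self.depth + 1 > MAX_DEPTH {
            return Err("derivation depth exceeded");
        }
        if !requested.iter().all(|r| self.rights.contains(r)) {
            return Err("cannot amplify rights");
        }
        if self.depth + 1 >= WARN_DEPTH {
            // A real kernel would emit an audit witness at this threshold.
        }
        Ok(Cap { rights: requested.iter().copied().collect(), depth: self.depth + 1 })
    }
}

fn main() {
    let root = Cap { rights: ["read", "write", "map"].into_iter().collect(), depth: 0 };
    let child = root.derive(&["read"]).unwrap();
    assert!(child.derive(&["write"]).is_err()); // attenuation is monotonic
    let mut c = root.clone();
    for _ in 0..8 {
        c = c.derive(&["read"]).unwrap_or(c); // walk the chain to depth 8
    }
    assert!(c.derive(&["read"]).is_err()); // depth bound enforced
    println!("capability sketch ok");
}
```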
Gap analysis: seL4's formal verification is its strongest asset. RVM currently lacks formal proofs for its capability manager. The Tock/TickTock experience (five bugs found through verification) suggests formal verification of ruvix-cap should be prioritized.
2.2 CHERI Hardware Capabilities
Architecture: CHERI (Capability Hardware Enhanced RISC Instructions) extends processor ISAs with hardware-enforced capabilities. Rather than relying solely on page tables for memory protection, CHERI encodes bounds and permissions directly in pointer representations. Pointers become fat capabilities that carry their own access metadata.
ARM Morello: Arm's Morello evaluation platform implemented CHERI extensions on an Armv8.2-A processor. Performance evaluation on 20 C/C++ applications showed overheads ranging from negligible to 1.65x, with the highest costs in pointer-intensive workloads. However, as of 2025, Arm has stepped back from active Morello development, pushing CHERI adoption toward smaller embedded processors.
Verified temporal safety: A 2025 paper at CPP presented a formal CHERI C memory model for verified temporal safety, demonstrating that CHERI can enforce not just spatial safety (bounds) but also temporal safety (use-after-free prevention).
RVM relevance: CHERI's capability-per-pointer model is more fine-grained than RVM's capability-per-object model. If future AArch64 processors include CHERI extensions, RVM could leverage them for sub-region protection within capability boundaries. In the near term, RVM achieves similar goals through Rust's ownership system (compile-time) and MMU page tables (runtime).
Key lesson for RVM: CHERI demonstrates that hardware capabilities are feasible but face adoption challenges. RVM's software-capability approach (ruvix-cap) is the right near-term strategy, with CHERI as a future hardware acceleration path. The ruvix-hal HAL trait layer already allows for pluggable MMU implementations, which could be extended to CHERI capabilities.
2.3 Barrelfish Multikernel
Architecture: Barrelfish runs a separate small kernel ("CPU driver") on each core. Kernels share no memory. All inter-core communication is explicit message passing. The rationale: hardware cache coherence protocols are difficult to scale beyond ~80 cores, so Barrelfish makes communication explicit rather than relying on shared-memory illusions.
Capability model: Barrelfish uses a capability system where the CPU driver maintains capabilities, executes syscalls on capabilities, and schedules dispatchers. Dispatchers are the unit of scheduling -- an application spanning multiple cores has a dispatcher per core, and dispatchers never migrate.
System knowledge base: At boot, Barrelfish probes hardware to measure inter-core communication performance, stores results in a small database (SKB), and runs an optimizer to select communication patterns.
RVM relevance: Barrelfish's per-core kernel model directly informs RVM's future Phase C (SMP) design. The ruvix-smp crate already provides CPU topology management, per-CPU state tracking, IPI messaging (Reschedule, TlbFlush, FunctionCall), and lock-free atomic state transitions -- all aligned with the multikernel philosophy.
Key lesson for RVM: For multi-core RVM, the Barrelfish model suggests: (1) run a scheduler instance per core rather than a single shared scheduler, (2) use explicit message passing between per-core schedulers, (3) probe inter-core latency at boot and store in a performance database that the coherence-aware scheduler can consult.
3. Coherence Protocols
3.1 Hardware Cache Coherence: MOESI and MESIF
MESI (Modified, Exclusive, Shared, Invalid): The baseline snooping protocol. Each cache line exists in one of four states. Write operations invalidate all other copies (write-invalidate). Simple but generates high bus traffic on writes to shared data.
MOESI (adds Owned): AMD's extension. The Owned state allows a modified, shared line to serve reads directly from the owning cache rather than writing back to memory first. This reduces write-back traffic at the cost of more complex state transitions.
MESIF (adds Forward): Intel's extension. The Forward state designates exactly one cache as the responder for shared-line requests, eliminating redundant responses when multiple caches hold the same shared line. Optimized for read-heavy sharing patterns.
Scalability limits: All snooping protocols face fundamental scalability issues beyond ~32-64 cores because every cache must observe every bus transaction. This motivates the shift to directory-based protocols at higher core counts.
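The write-invalidate behaviour of baseline MESI can be captured as a small transition table. This is a deliberately simplified single-cache-line model with no bus arbitration or memory controller:

```rust
// Toy MESI transition table for one cache line, illustrating the
// write-invalidate protocol described above (simplified sketch).
#[derive(Debug, Clone, Copy, PartialEq)]
enum Mesi { Modified, Exclusive, Shared, Invalid }

#[derive(Clone, Copy)]
enum Event { LocalRead, LocalWrite, RemoteRead, RemoteWrite }

fn step(state: Mesi, ev: Event) -> Mesi {
    use Mesi::*;
    use Event::*;
    match (state, ev) {
        (Invalid, LocalRead) => Shared,      // fill from memory or a peer
        (Invalid, LocalWrite) => Modified,   // read-for-ownership, then write
        (Exclusive, LocalWrite) => Modified, // silent upgrade, no bus traffic
        (Shared, LocalWrite) => Modified,    // must invalidate other sharers
        (Modified, RemoteRead) => Shared,    // write back, then share
        (Exclusive, RemoteRead) => Shared,
        (_, RemoteWrite) => Invalid,         // write-invalidate: drop our copy
        (s, _) => s,                         // everything else is a no-op here
    }
}

fn main() {
    let mut s = Mesi::Invalid;
    s = step(s, Event::LocalRead);
    assert_eq!(s, Mesi::Shared);
    s = step(s, Event::LocalWrite);
    assert_eq!(s, Mesi::Modified);
    s = step(s, Event::RemoteWrite);
    assert_eq!(s, Mesi::Invalid);
    assert_eq!(step(Mesi::Exclusive, Event::RemoteRead), Mesi::Shared);
    println!("mesi sketch ok");
}
```

MOESI and MESIF add one state each to this table (Owned and Forward respectively) to reduce the bus traffic the baseline protocol generates on shared lines.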
3.2 Directory-Based Coherence
Architecture: Instead of broadcasting on a bus, directory protocols maintain a centralized (or distributed) directory tracking which caches hold each line. Only the relevant caches receive invalidation messages. Traffic scales with the number of sharers rather than the number of cores.
Overhead: Directory entries consume storage (bit-vector per cache line per core). For N cores with M cache lines, the directory requires O(N * M) bits. Various compression techniques (limited pointer directories, coarse directories) reduce this at the cost of precision.
Relevance to RVM: Directory-based coherence is the hardware mechanism that enables many-core scaling. RVM's SMP design should account for NUMA effects and directory-based coherence latencies when making scheduling decisions.
3.3 Software Coherence Protocols
Overview: Software coherence replaces hardware snooping/directory mechanisms with explicit software-managed cache operations. The OS or runtime issues explicit cache flush/invalidate instructions at synchronization points.
Examples:
- Linux's explicit DMA coherence management (dma_map_single with cache maintenance)
- Barrelfish's message-based coherence (no shared memory, explicit transfers)
Trade-offs: Software coherence eliminates hardware complexity but requires programmers (or compilers/runtimes) to correctly manage cache state. Errors lead to stale data or corruption. The benefit is full control over when coherence traffic occurs.
3.4 Coherence Signals as Scheduling Inputs -- The RVM Innovation
This is where RVM's design diverges from all existing systems. No existing OS uses coherence metrics as a scheduling signal. RVM's scheduler (ruvix-sched) computes an effective priority by subtracting a risk_penalty from each task's base priority, where risk_penalty is derived from the pending coherence delta -- a measure of how much a task's execution would reduce global structural coherence. The delta is computed using spectral graph theory (Fiedler value, spectral gap, effective resistance) from the ruvector-coherence crate.
Why this matters: Traditional schedulers optimize for latency, throughput, or fairness. RVM optimizes for structural consistency. A task that would introduce logical contradictions into the system's knowledge graph gets deprioritized. A task processing genuinely novel information gets boosted. This is the right scheduling objective for agent workloads where maintaining a coherent world model is more important than raw throughput.
No prior art exists for coherence-driven scheduling in operating systems. The closest analogs are:
- Database transaction schedulers that consider serializability (but these gate on commit, not schedule)
- Network quality-of-service schedulers that consider flow coherence (but this is packet-level, not semantic)
- Game engine entity-component schedulers that consider data locality (but this is cache coherence, not semantic coherence)
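A hedged sketch of the idea: this excerpt does not reproduce the exact ruvix-sched formula, so the code below assumes effective priority is base priority minus a penalty scaled from the predicted coherence drop, computed in fixed point (all names are illustrative):

```rust
// Assumed model: priority = base_priority - risk_penalty, where the penalty
// comes from the task's pending coherence delta. Not the real formula.
struct Task {
    base_priority: i32,
    // Predicted change in the spectral coherence score if this task runs:
    // negative values mean the task would *reduce* global coherence.
    pending_coherence_delta_q16: i32, // Q16.16 fixed point, no floats
}

fn effective_priority(t: &Task) -> i32 {
    let risk_penalty = if t.pending_coherence_delta_q16 < 0 {
        (-t.pending_coherence_delta_q16) >> 16 // scale the predicted drop
    } else {
        0 // novelty that raises coherence carries no penalty
    };
    t.base_priority - risk_penalty
}

fn main() {
    let decohering = Task { base_priority: 10, pending_coherence_delta_q16: -(5 << 16) };
    let neutral = Task { base_priority: 10, pending_coherence_delta_q16: 0 };
    // A task that would decohere the world model is deprioritized.
    assert!(effective_priority(&decohering) < effective_priority(&neutral));
    println!("priority sketch ok");
}
```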
4. Agent/Edge Computing Runtimes
4.1 Wasmtime Bare-Metal Embedding
Current status: Wasmtime can be compiled as a no_std Rust crate. The embedder must implement a platform abstraction layer (wasmtime-platform.h) specifying how to allocate virtual memory, handle signals, and manage threads.
Hyperlight precedent: Microsoft's Hyperlight Wasm project compiles wasmtime into a no_std guest that runs inside micro-VMs with no operating system. This is the strongest proof-of-concept for wasmtime on bare metal.
Practical considerations:
- Wasmtime's Cranelift JIT compiler works in no_std mode but requires virtual memory for code generation
- The signals-and-traps feature can be disabled for platforms without virtual memory support
- Custom memory allocators must be provided via the platform abstraction
RVM integration path: RVM's Phase B plan (weeks 35-36) specifies porting wasmtime or wasm-micro-runtime to bare metal. Given Hyperlight's success with no_std wasmtime, wasmtime is the recommended path. The ruvix-hal MMU trait can provide the virtual memory abstraction that wasmtime's platform layer requires.
4.2 Lunatic (Erlang-Like WASM Runtime)
What it is: A universal runtime for server-side applications inspired by Erlang. Actors are represented as WASM instances with per-actor sandboxing and runtime permissions.
Key features:
- Preemptive scheduling of WASM processes via a work-stealing async executor
- Per-process fine-grained resource access control (filesystem, memory, network) enforced at the syscall level
- Automatic transformation of blocking code into async operations
- Written in Rust using wasmtime and tokio, with custom stack switching
Agent workload alignment: Lunatic's actor model closely matches agent workloads:
- Each agent is an isolated WASM instance (Lunatic process)
- Agents communicate through typed message passing
- A failing agent can be restarted without affecting others (supervision trees)
- Different agents can be written in different languages (polyglot via WASM)
RVM relevance: Lunatic validates the "agents as lightweight WASM processes" model but runs on top of Linux (tokio for async I/O, wasmtime for WASM). RVM can adopt Lunatic's architectural patterns while eliminating the Linux dependency. Key patterns to adopt:
- Per-agent capability sets (RVM already has this via ruvix-cap)
- Supervision trees for agent fault recovery
- Work-stealing across cores (for Phase C SMP)
4.3 How Agent Workloads Differ from Traditional VM Workloads
| Dimension | Traditional VM/Container | Agent Workload |
| --- | --- | --- |
| Lifecycle | Long-running process | Short-lived reasoning bursts + long idle |
| State model | Files and databases | Vectors, graphs, proof chains |
| Communication | TCP/Unix sockets | Typed semantic queues with coherence scores |
| Isolation | Address space separation | Capability-gated resource access |
| Failure | Kill and restart process | Isolate, checkpoint, replay from last coherent state |
| Scheduling objective | Fairness / throughput | Coherence preservation / novelty exploration |
| Memory pattern | Heap allocation / GC | Append-only regions + slab allocators |
| Security model | User/group permissions | Proof-gated mutations with attestation witnesses |
4.4 What an Agent-Optimized Hypervisor Needs
Based on the above analysis, an agent-optimized hypervisor requires:
1. Kernel-native vector/graph stores -- Agents think in embeddings and knowledge graphs, not files. These must be first-class kernel objects, not userspace libraries serializing to disk.
2. Coherence-aware scheduling -- The scheduler must understand that not all runnable tasks should run. A task that would decohere the world model should be delayed.
3. Proof-gated mutations -- Every state change must carry a cryptographic witness. This enables checkpoint/replay, audit, and distributed attestation.
4. Zero-copy typed IPC -- Agents exchange structured data (vectors, graph patches, proof tokens), not byte streams. The queue abstraction must be typed and schema-aware.
5. Sub-millisecond task spawn -- Agent reasoning involves spawning many short-lived sub-tasks. Task creation must be cheaper than thread creation.
6. Capability delegation without kernel round-trip -- Agents frequently delegate partial authority. This should be achievable through capability derivation in user space with kernel validation on use.
7. Deterministic replay -- For debugging and audit, the kernel must support replaying a sequence of operations and reaching the same state.
All seven of these requirements are already addressed by RVM's architecture (ADR-087).
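To ground requirement 4 (zero-copy typed IPC), here is a purely illustrative sketch of a typed, bounded queue carrying structured agent messages -- none of these types are from the RVM codebase, and the `Vec`-backed buffer stands in for whatever ring buffer the kernel actually uses:

```rust
// Hypothetical sketch: schema-tagged messages for a typed agent queue.

/// Message payloads agents exchange: vectors, graph patches, proof tokens.
#[derive(Debug, Clone, PartialEq)]
enum AgentMsg {
    /// An embedding fragment (fixed capacity to avoid allocation in the send path).
    Vector { dims: usize, data: [f32; 8] },
    /// A graph patch: add an edge with a weight.
    EdgePatch { from: u32, to: u32, weight: f32 },
    /// An opaque proof token (hash of the witnessed operation).
    Proof([u8; 32]),
}

/// A bounded queue: sends fail rather than allocating (cf. INV-4).
struct TypedQueue {
    buf: Vec<AgentMsg>,
    capacity: usize,
}

impl TypedQueue {
    fn new(capacity: usize) -> Self {
        Self { buf: Vec::with_capacity(capacity), capacity }
    }

    /// Reject sends when full -- the message is handed back to the caller.
    fn send(&mut self, msg: AgentMsg) -> Result<(), AgentMsg> {
        if self.buf.len() == self.capacity {
            return Err(msg);
        }
        self.buf.push(msg);
        Ok(())
    }

    /// Pop the oldest message, if any.
    fn recv(&mut self) -> Option<AgentMsg> {
        if self.buf.is_empty() { None } else { Some(self.buf.remove(0)) }
    }
}
```

Because the messages are an enum rather than a byte stream, the receiver dispatches on the schema without parsing, which is the property the requirement is after.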
5. Graph-Partitioned Scheduling
5.1 Min-Cut Based Task Placement
Theory: Given a graph where nodes are tasks and edges represent communication volume, the minimum cut partitioning assigns tasks to processors to minimize inter-processor communication. The min-cut objective directly minimizes the scheduling overhead of cross-core data movement.
Algorithms:
Karger's randomized contraction: O(n^2) per trial; the Karger-Stein refinement finds a global min-cut with high probability in O(n^2 log^3 n)
Stoer-Wagner deterministic: O(nm + n^2 log n) for global min-cut
KaHIP/METIS multilevel: Practical tools for balanced k-way partitioning
SNN-based neural optimization (attractor, causal, morphogenetic, strange loop, time crystal)
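To make the contraction idea concrete, here is a small self-contained sketch of Karger's algorithm -- illustrative only, since the production path is ruvector-mincut; the union-find helper and xorshift RNG are ad-hoc choices for the sketch:

```rust
// Illustrative: Karger's randomized contraction for global min-cut.

/// Union-find root lookup with path compression.
fn find(parent: &mut Vec<usize>, x: usize) -> usize {
    let px = parent[x];
    if px == x {
        return x;
    }
    let root = find(parent, px);
    parent[x] = root;
    root
}

/// One contraction run: merge endpoints of random edges until two
/// super-nodes remain; edges crossing them form a (not necessarily
/// minimum) cut.
fn contract_once(n: usize, edges: &[(usize, usize)], seed: &mut u64) -> usize {
    let mut parent: Vec<usize> = (0..n).collect();
    let mut components = n;
    let mut order: Vec<usize> = (0..edges.len()).collect();
    // Fisher-Yates shuffle driven by a tiny xorshift RNG.
    for i in (1..order.len()).rev() {
        *seed ^= *seed << 13;
        *seed ^= *seed >> 7;
        *seed ^= *seed << 17;
        let j = (*seed as usize) % (i + 1);
        order.swap(i, j);
    }
    for &e in &order {
        if components == 2 {
            break;
        }
        let (a, b) = edges[e];
        let (ra, rb) = (find(&mut parent, a), find(&mut parent, b));
        if ra != rb {
            parent[ra] = rb;
            components -= 1;
        }
    }
    // Count edges crossing the two remaining super-nodes.
    edges
        .iter()
        .filter(|&&(a, b)| find(&mut parent, a) != find(&mut parent, b))
        .count()
}

/// Repeat enough trials that the true min cut survives one of them
/// with high probability, then take the best.
fn karger_min_cut(n: usize, edges: &[(usize, usize)], runs: usize) -> usize {
    let mut seed = 0x2545F4914F6CDD1Du64;
    (0..runs).map(|_| contract_once(n, edges, &mut seed)).min().unwrap()
}
```

For two triangles joined by a single bridge edge, enough trials reliably recover the bridge as the minimum cut of size 1.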
5.2 Spectral Partitioning for Workload Isolation
Theory: Spectral partitioning uses the eigenvectors of the graph Laplacian to identify natural clusters. The Fiedler vector (eigenvector corresponding to the second-smallest eigenvalue) provides a principled bisection -- the Cheeger inequality guarantees that spectral bisection produces partitions with near-optimal conductance.
Fiedler value estimation via inverse iteration with CG solver
Spectral gap ratio computation
Effective resistance sampling
Degree regularity scoring
Composite Spectral Coherence Score (SCS) with incremental updates
The SpectralTracker supports first-order perturbation updates (delta_lambda ~ v^T * delta_L * v) for incremental edge weight changes, avoiding full recomputation on every graph mutation.
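The perturbation formula can be sketched directly: for a unit eigenvector v and a Laplacian update delta_L caused by a single edge-weight change, the first-order estimate of the eigenvalue shift is v^T * delta_L * v. The helper names below are hypothetical and dense matrices are used only for clarity -- SpectralTracker would work on the sparse update:

```rust
/// First-order estimate of an eigenvalue shift under a Laplacian
/// perturbation: delta_lambda ~ v^T * delta_L * v, for unit eigenvector v.
fn eigen_shift_estimate(delta_l: &[Vec<f64>], v: &[f64]) -> f64 {
    let n = v.len();
    let mut acc = 0.0;
    for i in 0..n {
        for j in 0..n {
            acc += v[i] * delta_l[i][j] * v[j];
        }
    }
    acc
}

/// delta_L for a single edge (a, b) whose weight changes by dw:
/// +dw on both diagonal entries, -dw on the off-diagonal pair.
fn edge_delta_l(n: usize, a: usize, b: usize, dw: f64) -> Vec<Vec<f64>> {
    let mut m = vec![vec![0.0; n]; n];
    m[a][a] += dw;
    m[b][b] += dw;
    m[a][b] -= dw;
    m[b][a] -= dw;
    m
}
```

For the two-node graph the estimate is exact: bumping the edge weight by dw shifts the nonzero eigenvalue by exactly 2*dw, which is what the quadratic form returns for v = (1, -1)/sqrt(2).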
5.3 Dynamic Graph Rebalancing Under Load
Challenge: Static partitioning fails when workload patterns change at runtime. Agents spawn, terminate, and change their communication patterns dynamically.
Approaches:
Diffusion-based: Migrate load from overloaded partitions to underloaded neighbors. O(diameter) convergence. Simple but can oscillate.
Repartitioning: Periodically re-run the partitioner on the current communication graph. Expensive but globally optimal.
Incremental spectral: Track the Fiedler vector incrementally (as ruvector-coherence does) and trigger repartitioning only when the spectral gap drops below a threshold.
RVM design implication: The scheduler's partition manager (ruvix-sched/partition.rs) currently uses static round-robin partition scheduling with fixed time slices. The spectral coherence infrastructure from ruvector-coherence is already in the workspace (ruvix-sched depends on it optionally via the coherence feature flag). The path forward:
1. Monitor the inter-task communication graph using queue message counters
2. Build a Laplacian from the communication weights
3. Compute the SCS incrementally using SpectralTracker
4. When the SCS drops below threshold, trigger repartitioning using ruvector-mincut
5. Migrate tasks between partitions based on the new cut
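A toy sketch of the first and last steps under these assumptions -- the cut itself would come from ruvector-mincut, and the greedy single-task move here is only for illustration; all function names are hypothetical:

```rust
/// Step 1 sketch: derive edge weights from per-queue message counters
/// (source task, dest task, messages this epoch).
fn comm_weights(counters: &[(usize, usize, u64)]) -> Vec<(usize, usize, f64)> {
    counters.iter().map(|&(a, b, msgs)| (a, b, msgs as f64)).collect()
}

/// Total weight of edges crossing the task -> partition assignment.
fn cut_weight(edges: &[(usize, usize, f64)], part: &[usize]) -> f64 {
    edges
        .iter()
        .filter(|&&(a, b, _)| part[a] != part[b])
        .map(|&(_, _, w)| w)
        .sum()
}

/// Step 5 sketch: move the single task whose relocation most reduces
/// the cut, if any such move exists.
fn migrate_one(edges: &[(usize, usize, f64)], part: &mut Vec<usize>) -> bool {
    let base = cut_weight(edges, part);
    let nparts = part.iter().max().unwrap() + 1;
    let mut best: Option<(usize, usize, f64)> = None;
    for t in 0..part.len() {
        let orig = part[t];
        for p in 0..nparts {
            if p == orig {
                continue;
            }
            part[t] = p; // tentatively move task t to partition p
            let w = cut_weight(edges, part);
            if w < base && best.map_or(true, |(_, _, bw)| w < bw) {
                best = Some((t, p, w));
            }
        }
        part[t] = orig; // undo the tentative move
    }
    if let Some((t, p, _)) = best {
        part[t] = p;
        true
    } else {
        false
    }
}
```

With tasks 0 and 1 exchanging heavy traffic across a partition boundary, one migration collapses the cut weight to zero.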
5.4 The ruvector-sparsifier Connection
The ruvector-sparsifier crate provides dynamic spectral graph sparsification -- an "always-on compressed world model." For large task graphs, sparsification reduces the graph to O(n log n / epsilon^2) edges while preserving all cuts to within a (1+epsilon) factor. This means the scheduler can maintain an approximate communication graph at dramatically lower cost than the full graph, using it for partitioning decisions.
6. Existing RuVector Crates Relevant to Hypervisor Design
6.1 ruvector-mincut
Relevance: CRITICAL for graph-partitioned scheduling
Provides the algorithmic backbone for task-to-partition assignment
Subpolynomial dynamic min-cut means the scheduler can re-partition in response to workload changes without O(n^3) overhead
The j-Tree hierarchical decomposition (feature jtree) maps directly to multi-level partition hierarchies
The canonical min-cut feature provides deterministic partitioning -- the same communication graph always produces the same partition, enabling reproducible scheduling behavior
Integration point: Wire into ruvix-sched's PartitionManager to dynamically assign new tasks to optimal partitions based on their communication pattern with existing tasks.
6.2 ruvector-sparsifier
Relevance: HIGH for scalable partition management
Dynamic spectral sparsification keeps the scheduler's view of the task communication graph manageable as the number of tasks grows
Static and dynamic modes: static for boot-time graph reduction, dynamic for runtime maintenance
Preserves all cuts within (1+epsilon), so min-cut-based partition decisions remain valid on the sparsified graph
SIMD and WASM feature flags for acceleration
Integration point: Preprocess the inter-task communication graph through the sparsifier before feeding it to ruvector-mincut for partition computation.
6.3 ruvector-solver
Relevance: HIGH for spectral computations
Sublinear-time sparse linear system solver: O(log n) to O(sqrt(n)) for PageRank, Neumann series, forward/backward push, conjugate gradient
Direct application: solving the graph Laplacian systems needed for Fiedler vector computation and effective resistance estimation
The CG solver in ruvector-coherence/spectral.rs is a minimal inline implementation; ruvector-solver provides a more optimized, parallel version
Integration point: Replace the inline CG solver in spectral.rs with ruvector-solver's optimized implementation for faster coherence score computation in the scheduler hot path.
6.4 ruvector-cnn
Relevance: MODERATE for novelty detection
CNN feature extraction for image embeddings with SIMD acceleration
INT8 quantized inference for resource-constrained environments
The scheduler's novelty tracker (ruvix-sched/novelty.rs) computes novelty as distance from a centroid in embedding space
For vision-based agents, ruvector-cnn could provide the embedding that feeds into the novelty computation
Integration point: In RVF component space (above the kernel), vision agents use ruvector-cnn for perception. The resulting embedding vectors feed into the kernel's novelty tracker through the update_task_novelty syscall.
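A minimal sketch of the centroid-distance novelty measure described above -- function names are hypothetical, and the in-kernel version in ruvix-sched/novelty.rs may maintain the centroid incrementally rather than recomputing it:

```rust
/// Mean of a set of embedding vectors (assumed non-empty, equal dims).
fn centroid(history: &[Vec<f32>]) -> Vec<f32> {
    let dim = history[0].len();
    let mut c = vec![0.0f32; dim];
    for v in history {
        for (ci, vi) in c.iter_mut().zip(v) {
            *ci += *vi;
        }
    }
    for ci in c.iter_mut() {
        *ci /= history.len() as f32;
    }
    c
}

/// Novelty = Euclidean distance of the new embedding from the centroid.
fn novelty(history: &[Vec<f32>], embedding: &[f32]) -> f32 {
    let c = centroid(history);
    embedding
        .iter()
        .zip(&c)
        .map(|(e, ci)| (e - ci) * (e - ci))
        .sum::<f32>()
        .sqrt()
}
```

An embedding sitting at the centroid scores zero; a far-away perception scores high and would be favored by the novelty-exploration objective.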
6.5 ruvector-coherence
Relevance: CRITICAL -- already integrated
Provides the coherence measurement primitives that drive the scheduler's risk penalty
7.2 Open Questions
Formal verification scope: What subset of the ruvix kernel can be practically verified? The entire ruvix-cap crate is #![forbid(unsafe_code)] and is a good candidate. The ruvix-aarch64 crate contains inherent unsafe code (MMU manipulation) that would need different verification techniques (possibly refinement proofs as in seL4).
Coherence signal latency: Computing spectral coherence scores involves linear algebra (CG solver, power iteration). Can this be fast enough for the scheduling hot path? The inline CG solver in spectral.rs uses 10-15 iterations; benchmarking against ruvector-solver's optimized version is needed.
WASM runtime selection: Wasmtime's no_std support is proven (Hyperlight) but cranelift JIT requires virtual memory. For the initial Phase B port, should RVM use: (a) wasmtime with cranelift JIT (better performance, needs MMU), (b) wasmtime with winch baseline compiler (simpler, still needs MMU), or (c) wasm-micro-runtime (interpreter, no MMU needed, slower)?
Multi-core coherence architecture: When Phase C introduces SMP, should the scheduler use: (a) a single shared scheduler with spinlock protection (simple, doesn't scale), (b) per-core schedulers with work-stealing (Lunatic model), or (c) per-core schedulers with message-passing (Barrelfish model)? The Barrelfish data suggests (c) for >8 cores.
Dynamic partition count: The current PartitionManager uses a compile-time const generic M for maximum partitions. Should this be dynamic to support workloads with variable component counts?
7.3 Recommended Next Steps
Immediate: Wire ruvector-mincut into ruvix-sched's PartitionManager for dynamic task-to-partition assignment based on communication graph analysis.
Phase B priority: Study Hyperlight's wasmtime no_std integration for the bare-metal WASM runtime port. The wasmtime-platform.h abstraction maps cleanly to ruvix-hal traits.
Verification: Begin formal verification of ruvix-cap using Kani (Rust model checker) or Creusot. The #![forbid(unsafe_code)] constraint makes this tractable.
Benchmarking: Measure spectral coherence computation latency in the scheduling hot path. If too slow, implement a fast-path approximation that falls back to full computation periodically (the SpectralTracker already supports this with refresh_threshold).
Phase C design: Adopt Barrelfish's per-core kernel model for SMP. The ruvix-smp crate's topology and IPI infrastructure is already aligned with this approach.
RVM is a Rust-first bare-metal microhypervisor that replaces the VM abstraction with coherence domains (partitions). It runs standalone without Linux or KVM, targeting QEMU virt as the reference platform with paths to real hardware on AArch64, RISC-V, and x86-64. The hypervisor integrates RuVector's mincut, sparsifier, and solver crates as first-class subsystems driving placement, isolation, and scheduling decisions.
This document covers the full system architecture from reset vector to agent runtime.
1.1 Not a VM, Not a Container -- a Coherence Domain
Traditional hypervisors (KVM, Xen, Firecracker) virtualize hardware to run guest operating systems. Traditional containers (Docker, gVisor) share a host kernel with namespace isolation. RVM does neither.
An RVM partition is a coherence domain: a set of memory regions, capabilities, communication edges, and scheduled tasks that form a self-consistent unit of computation. Partitions are not VMs -- they have no emulated hardware, no guest kernel, no BIOS. They are not containers -- there is no host kernel to share. The hypervisor is the kernel.
The unit of isolation is defined by the graph structure of partition communication, not by hardware virtualization features. A mincut of the communication graph reveals the natural fault isolation boundary. This is a fundamentally different model.
1.2 Core Invariants
These invariants hold for every operation in the system:
| ID | Invariant | Enforcement |
|---|---|---|
| INV-1 | No mutation without proof | ProofGate<T> at type level, 3-tier verification |
| INV-2 | No access without capability | Capability table checked on every syscall |
| INV-3 | Every privileged action is witnessed | Append-only witness log, no opt-out |
| INV-4 | No unbounded allocation in syscall path | Pre-allocated structures, slab allocators |
| INV-5 | No priority inversion | Capability-based access prevents blocking on unheld resources |
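As a rough illustration of INV-1, a type-level proof gate can be shaped so that the only path to `&mut T` goes through proof verification. This is a hypothetical sketch -- RVM's actual ProofGate<T> and its 3-tier verification are certainly richer:

```rust
/// Hypothetical stand-in for a verified proof token.
#[derive(Debug, PartialEq)]
struct ProofToken {
    tier: u8,    // which of the 3 verification tiers produced it
    valid: bool, // outcome of verification (stubbed here)
}

/// Wraps mutable state so no mutation can bypass the proof check.
struct ProofGate<T> {
    inner: T,
}

impl<T> ProofGate<T> {
    fn new(value: T) -> Self {
        Self { inner: value }
    }

    /// The only way to obtain `&mut T`: present a proof that verifies.
    fn open(&mut self, proof: &ProofToken) -> Result<&mut T, &'static str> {
        if proof.valid {
            Ok(&mut self.inner)
        } else {
            Err("proof rejected")
        }
    }

    /// Reads are always allowed.
    fn read(&self) -> &T {
        &self.inner
    }
}
```

Because `open` is the sole accessor returning a mutable reference, "no mutation without proof" becomes a property the type system enforces rather than a convention reviewers must police.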
RVM boots directly from the reset vector with no dependency on any existing OS, bootloader, or hypervisor. The sequence is identical in structure across architectures, with platform-specific assembly stubs.
2.1 Stage 0: Reset Vector (Assembly)
The CPU begins execution at the platform-defined reset vector. A minimal assembly stub performs the operations that cannot be expressed in Rust.
AArch64 (EL2 entry for hypervisor mode):
```asm
// ruvix-aarch64/src/boot.S
.section .text.boot
.global _start
_start:
    // On QEMU virt, firmware drops us at EL2 (hypervisor mode)
    // x0 = DTB address

    // 1. Check we are at EL2
    mrs     x1, CurrentEL
    lsr     x1, x1, #2
    cmp     x1, #2
    b.ne    _wrong_el

    // 2. Disable MMU, caches (clean state)
    mrs     x1, sctlr_el2
    bic     x1, x1, #1              // M=0: MMU off
    bic     x1, x1, #(1 << 2)       // C=0: data cache off
    bic     x1, x1, #(1 << 12)      // I=0: instruction cache off
    msr     sctlr_el2, x1
    isb

    // 3. Set up exception vector table
    adr     x1, _exception_vectors_el2
    msr     vbar_el2, x1

    // 4. Initialize stack pointer
    adr     x1, _stack_top
    mov     sp, x1

    // 5. Clear BSS
    adr     x1, __bss_start
    adr     x2, __bss_end
.Lbss_clear:
    cmp     x1, x2
    b.ge    .Lbss_done
    str     xzr, [x1], #8
    b       .Lbss_clear
.Lbss_done:

    // 6. x0 still holds DTB address -- pass to Rust
    bl      ruvix_entry

    // Should never return
    b       .

_wrong_el:
    // If at EL1, attempt to elevate via HVC (QEMU-specific)
    // If at EL3, configure EL2 and eret
    // ...
```
RISC-V (HS-mode entry):
```asm
// ruvix-riscv/src/boot.S
.section .text.boot
.global _start
_start:
    // a0 = hart ID, a1 = DTB address
    // QEMU starts in M-mode; OpenSBI transitions to S-mode
    // We need HS-mode (hypervisor extension)

    // 1. Check for hypervisor extension
    csrr    t0, misa
    andi    t0, t0, (1 << 7)    // 'H' bit
    beqz    t0, _no_hypervisor

    // 2. Park non-boot harts
    bnez    a0, _park

    // 3. Set up stack
    la      sp, _stack_top

    // 4. Clear BSS
    la      t0, __bss_start
    la      t1, __bss_end
1:
    bge     t0, t1, 2f
    sd      zero, (t0)
    addi    t0, t0, 8
    j       1b
2:
    // 5. Enter Rust (a0=hart_id, a1=dtb)
    call    ruvix_entry

_park:
    wfi
    j       _park
```
x86-64 (VMX root mode):
```nasm
; ruvix-x86_64/src/boot.asm
; Entered from a multiboot2-compliant loader or direct long mode setup
; eax = multiboot2 magic, ebx = info struct pointer
section .text.boot
global _start
bits 64
_start:
    ; 1. Already in long mode (64-bit) from bootloader

    ; 2. Enable VMX if supported
    mov     ecx, 0x3A           ; IA32_FEATURE_CONTROL MSR
    rdmsr
    test    eax, (1 << 2)       ; VMXON outside SMX
    jz      _no_vmx

    ; 3. Set up stack
    lea     rsp, [_stack_top]

    ; 4. Clear BSS
    lea     rdi, [__bss_start]
    lea     rcx, [__bss_end]
    sub     rcx, rdi
    shr     rcx, 3
    xor     eax, eax
    rep     stosq

    ; 5. rdi = multiboot info pointer
    mov     rdi, rbx
    call    ruvix_entry
    hlt
    jmp     $
```
2.2 Stage 1: Rust Entry and Hardware Detection
The assembly stub hands off to a single Rust entry point. This function is #[no_mangle] and extern "C", receiving the DTB/multiboot pointer.
EPT (Extended Page Tables) provide second-level address translation
Running at the hypervisor level provides two key advantages over running at kernel level (EL1/Ring 0):
Two-stage address translation: The hypervisor controls the mapping from guest-physical to host-physical addresses. Partitions can have their own page tables (stage-1) while the hypervisor enforces isolation via stage-2 tables. This is strictly more powerful than single-stage translation.
Trap-and-emulate without paravirtualization: The hypervisor can trap on specific instructions (WFI, MSR, MMIO access) without requiring the partition to be aware it is virtualized. This is essential for running unmodified WASM runtimes.
Stage-2 page table setup (AArch64):
```rust
// ruvix-aarch64/src/stage2.rs

/// Stage-2 translation table for a partition.
///
/// Maps Intermediate Physical Addresses (IPA) produced by the partition's
/// stage-1 tables to actual Physical Addresses (PA). The hypervisor
/// controls this mapping exclusively.
pub struct Stage2Tables {
    /// Level-0 table base (4KB aligned)
    root: PhysAddr,
    /// Physical pages backing the table structure
    pages: ArrayVec<PhysAddr, 512>,
    /// IPA range assigned to this partition
    ipa_range: Range<u64>,
}

impl Stage2Tables {
    /// Create stage-2 tables for a partition with the given IPA range.
    ///
    /// The IPA range defines the partition's "view" of physical memory.
    /// All accesses outside this range trap to the hypervisor.
    pub fn new(
        ipa_range: Range<u64>,
        phys: &mut PhysicalAllocator,
    ) -> Result<Self, HypervisorError> {
        let root = phys.allocate_page()?;
        // Zero the root table
        unsafe { core::ptr::write_bytes(root.as_mut_ptr::<u8>(), 0, PAGE_SIZE) };
        Ok(Self {
            root,
            pages: ArrayVec::new(),
            ipa_range,
        })
    }

    /// Map an IPA to a PA with the given attributes.
    ///
    /// Enforces that the IPA falls within the partition's assigned range.
    pub fn map(
        &mut self,
        ipa: u64,
        pa: PhysAddr,
        attrs: Stage2Attrs,
        phys: &mut PhysicalAllocator,
    ) -> Result<(), HypervisorError> {
        if !self.ipa_range.contains(&ipa) {
            return Err(HypervisorError::IpaOutOfRange);
        }
        // Walk/allocate 4-level table and install entry
        self.walk_and_install(ipa, pa, attrs, phys)
    }

    /// Activate these tables for the current vCPU.
    ///
    /// Writes VTTBR_EL2 with the table base and VMID.
    pub unsafe fn activate(&self, vmid: u16) {
        let vttbr = self.root.as_u64() | ((vmid as u64) << 48);
        core::arch::asm!(
            "msr vttbr_el2, {val}",
            "isb",
            val = in(reg) vttbr,
        );
    }
}

/// Stage-2 page attributes.
#[derive(Debug, Clone, Copy)]
pub struct Stage2Attrs {
    pub readable: bool,
    pub writable: bool,
    pub executable: bool,
    /// Device memory (non-cacheable, strongly ordered)
    pub device: bool,
}
```
2.4 Stage 3: Capability Table and Kernel Object Initialization
After the MMU is active and hypervisor mode is configured, the kernel initializes its object tables:
RVM defines eight first-class kernel objects. The first six (Task, Capability, Region, Queue, Timer, Proof) are inherited from Phase A (ADR-087). The remaining two (Partition, CommEdge) plus the supplementary metric objects (CoherenceScore, CutPressure, DeviceLease) are new to the hypervisor architecture.
3.1 Partition (Coherence Domain Container)
A partition is the primary execution container. It is NOT a VM.
```rust
// ruvix-partition/src/partition.rs

/// A coherence domain: the fundamental unit of isolation in RVM.
///
/// A partition groups:
/// - A set of tasks that execute within the domain
/// - A set of memory regions owned by the domain
/// - A capability table scoped to the domain
/// - A set of CommEdges connecting to other partitions
/// - A coherence score measuring internal consistency
/// - A set of device leases for hardware access
///
/// Partitions can be split, merged, migrated, and hibernated.
/// The hypervisor manages stage-2 page tables per partition,
/// ensuring hardware-enforced memory isolation.
pub struct Partition {
    /// Unique partition identifier
    id: PartitionId,
    /// Stage-2 page tables (hardware isolation)
    stage2: Stage2Tables,
    /// Tasks belonging to this partition
    tasks: BTreeMap<TaskHandle, TaskControlBlock>,
    /// Memory regions owned by this partition
    regions: BTreeMap<RegionHandle, RegionDescriptor>,
    /// Capability table for this partition
    cap_table: CapabilityTable,
    /// Communication edges to other partitions
    comm_edges: ArrayVec<CommEdgeHandle, MAX_EDGES_PER_PARTITION>,
    /// Current coherence score (computed by solver crate)
    coherence: CoherenceScore,
    /// Current cut pressure (computed by mincut crate)
    cut_pressure: CutPressure,
    /// Active device leases
    device_leases: ArrayVec<DeviceLease, MAX_DEVICES_PER_PARTITION>,
    /// Partition state
    state: PartitionState,
    /// Witness log segment for this partition
    witness_segment: WitnessSegmentHandle,
}

/// Partition lifecycle states.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum PartitionState {
    /// Actively scheduled, tasks running
    Active,
    /// All tasks suspended, state in hot memory
    Suspended,
    /// State compressed and moved to warm tier
    Warm,
    /// State serialized to cold storage, reconstructable
    Dormant,
    /// Being split into two partitions (transient)
    Splitting,
    /// Being merged with another partition (transient)
    Merging,
    /// Being migrated to another physical node (transient)
    Migrating,
}

/// Partition identity.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, PartialOrd, Ord)]
pub struct PartitionId(u64);

/// Maximum communication edges per partition.
pub const MAX_EDGES_PER_PARTITION: usize = 64;

/// Maximum devices per partition.
pub const MAX_DEVICES_PER_PARTITION: usize = 8;
```
Partition operations trait:
```rust
/// Operations on coherence domains.
pub trait PartitionOps {
    /// Create a new empty partition with its own stage-2 address space.
    fn create(
        &mut self,
        config: PartitionConfig,
        parent_cap: CapHandle,
        proof: &ProofToken,
    ) -> Result<PartitionId, HypervisorError>;

    /// Split a partition along a mincut boundary.
    ///
    /// The mincut algorithm identifies the optimal split point.
    /// Tasks, regions, and capabilities are redistributed according
    /// to which side of the cut they fall on.
    fn split(
        &mut self,
        partition: PartitionId,
        cut: &CutResult,
        proof: &ProofToken,
    ) -> Result<(PartitionId, PartitionId), HypervisorError>;

    /// Merge two partitions into one.
    ///
    /// Requires that the partitions share at least one CommEdge
    /// and that the merged coherence score exceeds a threshold.
    fn merge(
        &mut self,
        a: PartitionId,
        b: PartitionId,
        proof: &ProofToken,
    ) -> Result<PartitionId, HypervisorError>;

    /// Transition a partition to the dormant state.
    ///
    /// Serializes all state, releases physical memory, and records
    /// a reconstruction receipt in the witness log.
    fn hibernate(
        &mut self,
        partition: PartitionId,
        proof: &ProofToken,
    ) -> Result<ReconstructionReceipt, HypervisorError>;

    /// Reconstruct a dormant partition from its receipt.
    fn reconstruct(
        &mut self,
        receipt: &ReconstructionReceipt,
        proof: &ProofToken,
    ) -> Result<PartitionId, HypervisorError>;
}
```
3.2 Capability (Unforgeable Token)
Capabilities are inherited directly from ruvix-cap (Phase A). In the hypervisor context, the capability system is extended with new object types:
```rust
// ruvix-types/src/object.rs (extended)

/// All kernel object types that can be referenced by capabilities.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u8)]
pub enum ObjectType {
    // Phase A objects
    Task = 0,
    Region = 1,
    Queue = 2,
    Timer = 3,
    VectorStore = 4,
    GraphStore = 5,
    // Hypervisor objects (new)
    Partition = 6,
    CommEdge = 7,
    DeviceLease = 8,
    WitnessLog = 9,
    PhysMemPool = 10,
}

/// Capability rights bitmap (extended for hypervisor).
bitflags! {
    pub struct CapRights: u32 {
        // Phase A rights
        const READ       = 1 << 0;
        const WRITE      = 1 << 1;
        const GRANT      = 1 << 2;
        const GRANT_ONCE = 1 << 3;
        const PROVE      = 1 << 4;
        const REVOKE     = 1 << 5;
        // Hypervisor rights (new)
        const SPLIT      = 1 << 6;  // Split a partition
        const MERGE      = 1 << 7;  // Merge partitions
        const MIGRATE    = 1 << 8;  // Migrate partition to another node
        const HIBERNATE  = 1 << 9;  // Hibernate/reconstruct
        const LEASE      = 1 << 10; // Acquire device lease
        const WITNESS    = 1 << 11; // Read witness log
    }
}
```
3.3 Witness (Audit Record)
Every privileged action produces a witness record. See Section 8 for the full design.
3.4 MemoryRegion (Typed, Tiered Memory)
Memory regions from Phase A are extended with tier awareness:
```rust
// ruvix-region/src/tiered.rs

/// Memory tier indicating thermal/access characteristics.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
#[repr(u8)]
pub enum MemoryTier {
    /// Actively accessed, in L1/L2 cache working set.
    /// Physical pages pinned, stage-2 mapped.
    Hot = 0,
    /// Recently accessed, in DRAM but not cache-hot.
    /// Physical pages allocated, stage-2 mapped but may be
    /// compressed in background.
    Warm = 1,
    /// Not recently accessed. Pages compressed in-place
    /// using LZ4. Stage-2 mapping points to compressed form.
    /// Access triggers decompression fault handled by hypervisor.
    Dormant = 2,
    /// Evicted to persistent storage (NVMe, SD card, network).
    /// Stage-2 mapping removed. Access triggers reconstruction
    /// via the reconstruction protocol.
    Cold = 3,
}

/// A memory region with ownership tracking and tier management.
pub struct TieredRegion {
    /// Base region (Immutable, AppendOnly, or Slab policy)
    inner: RegionDescriptor,
    /// Current memory tier
    tier: MemoryTier,
    /// Owning partition
    owner: PartitionId,
    /// Sharing bitmap: which partitions have read access via CommEdges
    shared_with: BitSet<256>,
    /// Last access timestamp (for tier promotion/demotion)
    last_access_ns: u64,
    /// Compressed size (if Dormant tier)
    compressed_size: Option<usize>,
    /// Reconstruction receipt (if Cold tier)
    reconstruction: Option<ReconstructionReceipt>,
}
```
3.5 CommEdge (Inter-Partition Channel)
A CommEdge is a typed, capability-checked communication channel between two partitions:
```rust
// ruvix-commedge/src/lib.rs

/// A communication edge between two partitions.
///
/// CommEdges are the only mechanism for inter-partition communication.
/// They carry typed messages, support zero-copy sharing, and are
/// tracked by the coherence graph.
pub struct CommEdge {
    /// Unique edge identifier
    id: CommEdgeHandle,
    /// Source partition
    source: PartitionId,
    /// Destination partition
    dest: PartitionId,
    /// Underlying queue (from ruvix-queue)
    queue: QueueHandle,
    /// Edge weight in the coherence graph.
    /// Updated on every message send: weight += message_bytes.
    /// Decays over time: weight *= decay_factor per epoch.
    weight: AtomicU64,
    /// Message count since last epoch
    message_count: AtomicU64,
    /// Capability required to send on this edge
    send_cap: CapHandle,
    /// Capability required to receive on this edge
    recv_cap: CapHandle,
    /// Whether this edge supports zero-copy region sharing
    zero_copy: bool,
    /// Shared memory regions (if zero_copy is true)
    shared_regions: ArrayVec<RegionHandle, 16>,
}

/// CommEdge operations.
pub trait CommEdgeOps {
    /// Create a new CommEdge between two partitions.
    ///
    /// Both partitions must hold appropriate capabilities.
    /// The edge is registered in the coherence graph.
    fn create_edge(
        &mut self,
        source: PartitionId,
        dest: PartitionId,
        config: CommEdgeConfig,
        proof: &ProofToken,
    ) -> Result<CommEdgeHandle, HypervisorError>;

    /// Send a message over a CommEdge.
    ///
    /// Updates edge weight in the coherence graph.
    fn send(
        &mut self,
        edge: CommEdgeHandle,
        msg: &[u8],
        priority: MsgPriority,
        cap: CapHandle,
    ) -> Result<(), HypervisorError>;

    /// Receive a message from a CommEdge.
    fn recv(
        &mut self,
        edge: CommEdgeHandle,
        buf: &mut [u8],
        timeout: Duration,
        cap: CapHandle,
    ) -> Result<usize, HypervisorError>;

    /// Share a memory region over a CommEdge (zero-copy).
    ///
    /// Maps the region into the destination partition's stage-2
    /// address space with read-only permissions. The source retains
    /// ownership.
    fn share_region(
        &mut self,
        edge: CommEdgeHandle,
        region: RegionHandle,
        proof: &ProofToken,
    ) -> Result<(), HypervisorError>;

    /// Destroy a CommEdge.
    ///
    /// Unmaps any shared regions and removes the edge from the
    /// coherence graph.
    fn destroy_edge(
        &mut self,
        edge: CommEdgeHandle,
        proof: &ProofToken,
    ) -> Result<(), HypervisorError>;
}
```
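The weight field's doc comment implies a per-epoch decay applied to an AtomicU64. One plausible no-float implementation, sketched here with hypothetical helper names, uses an integer num/den decay factor (e.g. 7/8 for roughly 0.875 per epoch):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Per-epoch decay of an integer edge weight without floating point:
/// weight := weight * num / den, split to avoid overflow on large weights.
fn decay_weight(weight: &AtomicU64, num: u64, den: u64) {
    // fetch_update retries if a concurrent sender bumped the weight.
    weight
        .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |w| {
            Some(w / den * num + (w % den) * num / den)
        })
        .ok();
}

/// Sender path: bump the weight by the message size in bytes.
fn on_send(weight: &AtomicU64, message_bytes: u64) {
    weight.fetch_add(message_bytes, Ordering::Relaxed);
}
```

The `fetch_update` loop keeps the decay correct under concurrent sends, which matters because CommEdge weights feed the coherence graph that drives partition decisions.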
3.6 DeviceLease (Time-Bounded Device Access)
```rust
// ruvix-partition/src/device_lease.rs

/// A time-bounded, revocable lease granting a partition access to
/// a hardware device.
///
/// Device leases are the hypervisor's mechanism for safe device
/// assignment. Unlike passthrough (where the guest owns the device
/// permanently), leases expire and can be revoked.
pub struct DeviceLease {
    /// Unique lease identifier
    id: LeaseId,
    /// Device being leased
    device: DeviceDescriptor,
    /// Partition holding the lease
    holder: PartitionId,
    /// Lease expiration (absolute time in nanoseconds)
    expires_ns: u64,
    /// Whether the lease has been revoked
    revoked: bool,
    /// MMIO region mapped into the partition's stage-2 space
    mmio_region: Option<RegionHandle>,
    /// Interrupt routing: device IRQ -> partition's virtual IRQ
    irq_routing: Option<(u32, u32)>, // (physical_irq, virtual_irq)
}

/// Lease operations.
pub trait LeaseOps {
    /// Acquire a lease on a device.
    ///
    /// Requires LEASE capability. The device's MMIO region is mapped
    /// into the partition's stage-2 address space. Interrupts from
    /// the device are routed to the partition.
    fn acquire(
        &mut self,
        device: DeviceDescriptor,
        partition: PartitionId,
        duration_ns: u64,
        cap: CapHandle,
        proof: &ProofToken,
    ) -> Result<LeaseId, HypervisorError>;

    /// Renew an existing lease.
    fn renew(
        &mut self,
        lease: LeaseId,
        additional_ns: u64,
        proof: &ProofToken,
    ) -> Result<(), HypervisorError>;

    /// Revoke a lease (immediate).
    ///
    /// Unmaps MMIO region, disables interrupt routing, resets
    /// device to safe state.
    fn revoke(
        &mut self,
        lease: LeaseId,
        proof: &ProofToken,
    ) -> Result<(), HypervisorError>;
}
```
3.7 CoherenceScore
```rust
// ruvix-pressure/src/coherence.rs

/// A coherence score for a partition, computed by the solver crate.
///
/// The score measures how "internally consistent" a partition is:
/// high coherence means the partition's tasks and data are tightly
/// coupled and should stay together. Low coherence signals that
/// the partition may benefit from splitting.
#[derive(Debug, Clone, Copy)]
pub struct CoherenceScore {
    /// Aggregate score in [0.0, 1.0]. Higher = more coherent.
    pub value: f64,
    /// Per-task contribution to the score.
    /// Identifies which tasks are most/least coupled.
    pub task_contributions: [f32; 64],
    /// Timestamp of last computation.
    pub computed_at_ns: u64,
    /// Whether the score is stale (> 1 epoch old).
    pub stale: bool,
}
```
3.8 CutPressure
```rust
// ruvix-pressure/src/cut.rs

/// Graph-derived isolation signal for a partition.
///
/// CutPressure is computed by running the ruvector-mincut algorithm
/// on the partition's communication graph. High pressure means the
/// partition has a cheap cut -- it could easily be split into two
/// independent halves.
#[derive(Debug, Clone)]
pub struct CutPressure {
    /// Minimum cut value across all edges in/out of this partition.
    /// Lower value = higher pressure to split.
    pub min_cut_value: f64,
    /// The actual cut: which edges to sever.
    pub cut_edges: ArrayVec<CommEdgeHandle, 32>,
    /// Partition IDs on each side of the proposed cut.
    pub side_a: ArrayVec<TaskHandle, 64>,
    pub side_b: ArrayVec<TaskHandle, 64>,
    /// Estimated coherence scores after split.
    pub predicted_coherence_a: f64,
    pub predicted_coherence_b: f64,
    /// Timestamp.
    pub computed_at_ns: u64,
}
```
4. Memory Architecture
4.1 Two-Stage Address Translation
RVM uses hardware-enforced two-stage address translation for partition isolation:
```
Partition Virtual Address (VA)
         |
         |  Stage-1 translation (partition's own page tables, EL1)
         |
         v
Intermediate Physical Address (IPA)
         |
         |  Stage-2 translation (hypervisor-controlled, EL2)
         |
         v
Physical Address (PA)
```
Each partition has its own stage-1 page tables (which it controls) and stage-2 page tables (which only the hypervisor can modify). This means:
A partition cannot access memory outside its assigned IPA range
The hypervisor can remap, compress, or migrate physical pages without the partition's knowledge
Zero-copy sharing is implemented by mapping the same PA into two partitions' stage-2 tables
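A toy model of that aliasing, with HashMaps standing in for hardware stage-2 tables and all names hypothetical: sharing installs the same physical page under a second partition's IPA, so a write by the owner is visible to the reader with no copy.

```rust
use std::collections::HashMap;

/// Toy stand-in for a partition's stage-2 table: IPA -> physical page index.
struct Stage2Model {
    map: HashMap<u64, usize>,
}

/// Zero-copy share: alias the owner's physical page under a reader IPA.
/// Returns None if the owner has no mapping at `owner_ipa`.
fn share_page(
    owner: &Stage2Model,
    reader: &mut Stage2Model,
    owner_ipa: u64,
    reader_ipa: u64,
) -> Option<()> {
    let pa = *owner.map.get(&owner_ipa)?;
    reader.map.insert(reader_ipa, pa); // same PA, second mapping: no copy
    Some(())
}
```

In the real system the reader's stage-2 entry would additionally be marked read-only, matching the Immutable/AppendOnly sharing rule from Section 4.3.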
4.2 Physical Memory Allocator
The physical allocator uses a buddy system with per-tier free lists:
```rust
// ruvix-physmem/src/buddy.rs

/// Physical memory allocator with tier-aware allocation.
pub struct PhysicalAllocator {
    /// Buddy allocator for each tier
    tiers: [BuddyAllocator; 4], // Hot, Warm, Dormant, Cold
    /// Total physical memory available
    total_pages: usize,
    /// Per-tier statistics
    stats: [TierStats; 4],
}

impl PhysicalAllocator {
    /// Allocate pages from a specific tier.
    pub fn allocate_pages(
        &mut self,
        count: usize,
        tier: MemoryTier,
    ) -> Result<PhysRange, AllocError> {
        self.tiers[tier as usize].allocate(count)
    }

    /// Promote pages from a colder tier to a warmer tier.
    ///
    /// This is called when a dormant region is accessed.
    pub fn promote(
        &mut self,
        range: PhysRange,
        from: MemoryTier,
        to: MemoryTier,
    ) -> Result<PhysRange, AllocError> {
        assert!(to < from, "promotion must go to a warmer tier");
        let new_range = self.tiers[to as usize].allocate(range.page_count())?;
        // Copy and decompress if needed
        self.copy_and_promote(range, new_range, from, to)?;
        self.tiers[from as usize].free(range);
        Ok(new_range)
    }

    /// Demote pages to a colder tier.
    ///
    /// Pages are compressed (Dormant) or evicted (Cold).
    pub fn demote(
        &mut self,
        range: PhysRange,
        from: MemoryTier,
        to: MemoryTier,
    ) -> Result<DemoteReceipt, AllocError> {
        assert!(to > from, "demotion must go to a colder tier");
        match to {
            MemoryTier::Dormant => self.compress_in_place(range),
            MemoryTier::Cold => self.evict_to_storage(range),
            _ => unreachable!(),
        }
    }
}
```
4.3 Memory Ownership via Rust's Type System
Memory ownership is enforced at the type level. A RegionHandle is a non-copyable token:
```rust
// ruvix-region/src/ownership.rs

/// A typed memory region handle. Non-copyable, non-clonable.
///
/// Ownership semantics:
/// - Exactly one partition owns a region at any time
/// - Transfer requires a proof and witness record
/// - Sharing creates a read-only view (not an ownership transfer)
/// - Dropping the handle does NOT free the region (the hypervisor manages lifetime)
pub struct OwnedRegion<P: RegionPolicy> {
    handle: RegionHandle,
    owner: PartitionId,
    _policy: PhantomData<P>,
}

/// Immutable region policy marker.
pub struct Immutable;
/// Append-only region policy marker.
pub struct AppendOnly;
/// Slab region policy marker.
pub struct Slab;

impl<P: RegionPolicy> OwnedRegion<P> {
    /// Transfer ownership to another partition.
    ///
    /// Consumes self, ensuring the old owner cannot use the handle.
    /// Updates stage-2 page tables for both partitions.
    pub fn transfer(
        self,
        new_owner: PartitionId,
        proof: &ProofToken,
        witness: &mut WitnessLog,
    ) -> Result<OwnedRegion<P>, HypervisorError> {
        witness.record(WitnessRecord::RegionTransfer {
            region: self.handle,
            from: self.owner,
            to: new_owner,
            proof_tier: proof.tier(),
        });
        // Remap stage-2 tables
        Ok(OwnedRegion {
            handle: self.handle,
            owner: new_owner,
            _policy: PhantomData,
        })
    }
}

/// Zero-copy sharing between partitions.
///
/// Only Immutable and AppendOnly regions can be shared (INV-4 from
/// Phase A: TOCTOU protection). Slab regions are never shared.
impl OwnedRegion<Immutable> {
    pub fn share_readonly(
        &self,
        target: PartitionId,
        edge: CommEdgeHandle,
        witness: &mut WitnessLog,
    ) -> Result<SharedRegionView, HypervisorError> {
        witness.record(WitnessRecord::RegionShare {
            region: self.handle,
            owner: self.owner,
            target,
            edge,
        });
        Ok(SharedRegionView {
            handle: self.handle,
            viewer: target,
        })
    }
}
```
4.4 Tier Management
The hypervisor runs a background tier management loop that promotes and demotes regions based on access patterns:
```rust
// ruvix-partition/src/tier_manager.rs

/// Tier management policy.
pub struct TierPolicy {
    /// Promote to Hot if accessed more than this many times per epoch
    pub hot_access_threshold: u32,
    /// Demote to Dormant if not accessed for this many epochs
    pub dormant_after_epochs: u32,
    /// Demote to Cold if dormant for this many epochs
    pub cold_after_epochs: u32,
    /// Maximum Hot tier memory (bytes) before forced demotion
    pub max_hot_bytes: usize,
    /// Compression algorithm for Dormant tier
    pub compression: CompressionAlgorithm,
}

/// Reconstruction protocol for dormant/cold state.
///
/// A reconstruction receipt contains everything needed to rebuild
/// a region from its serialized form plus the witness log.
#[derive(Debug, Clone)]
pub struct ReconstructionReceipt {
    /// Region identity
    pub region: RegionHandle,
    /// Owning partition
    pub partition: PartitionId,
    /// Hash of the serialized state
    pub state_hash: [u8; 32],
    /// Storage location (for Cold tier)
    pub storage_location: StorageLocation,
    /// Witness log range needed for replay
    pub witness_range: Range<u64>,
    /// Proof that the serialization was correct
    pub attestation: ProofAttestation,
}

#[derive(Debug, Clone)]
pub enum StorageLocation {
    /// Compressed in DRAM at the given physical address range
    CompressedDram(PhysRange),
    /// On block device at the given LBA range
    BlockDevice { device: DeviceDescriptor, lba_range: Range<u64> },
    /// On remote node (for distributed RVM)
    Remote { node_id: u64, receipt_id: u64 },
}
```
4.5 No Demand Paging
RVM does not implement demand paging, swap, or copy-on-write. All regions are physically backed at creation time. This is a deliberate design choice:
- Deterministic latency: no page fault handler in the critical path
- Simpler correctness proofs: no hidden state in page tables
- Better for real-time: no unbounded delay from swap I/O
The tradeoff is higher memory pressure, which is managed by the tier system: instead of swapping, RVM compresses (Dormant) or serializes (Cold) entire regions with explicit witness records.
5. Scheduler Design
5.1 Three Scheduling Modes
The scheduler operates in one of three modes at any given time:
```rust
// ruvix-sched/src/mode.rs

/// Scheduler operating mode.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SchedulerMode {
    /// Hard real-time mode.
    ///
    /// Activated when any partition has a deadline-critical task.
    /// Uses pure EDF (Earliest Deadline First) within partitions.
    /// No novelty boosting. No coherence-based reordering.
    /// Guaranteed bounded preemption latency.
    Reflex,

    /// Normal operating mode.
    ///
    /// Combines four signals:
    /// 1. Deadline pressure (EDF baseline)
    /// 2. Novelty signal (priority boost for new information)
    /// 3. Structural risk (deprioritize mutations that lower coherence)
    /// 4. Cut pressure (boost partitions near a split boundary)
    Flow,

    /// Recovery mode.
    ///
    /// Activated when coherence drops below a critical threshold
    /// or a partition reconstruction fails. Reduces concurrency,
    /// favors stability over throughput.
    Recovery,
}
```
5.2 Graph-Pressure-Driven Scheduling
In Flow mode, the scheduler uses the coherence graph to make decisions:
```rust
// ruvix-sched/src/graph_pressure.rs

/// Priority computation for Flow mode.
///
/// final_priority = deadline_urgency
///                + (novelty_boost * NOVELTY_WEIGHT)
///                - (structural_risk * RISK_WEIGHT)
///                + (cut_pressure_boost * PRESSURE_WEIGHT)
pub fn compute_flow_priority(
    task: &TaskControlBlock,
    partition: &Partition,
    pressure: &PressureEngine,
    now_ns: u64,
) -> FlowPriority {
    // 1. Deadline urgency: how close to missing the deadline
    let deadline_urgency = task.deadline.map(|d| {
        let remaining = d.saturating_sub(now_ns);
        // Urgency increases as deadline approaches
        1.0 / (remaining as f64 / 1_000_000.0 + 1.0)
    }).unwrap_or(0.0);

    // 2. Novelty boost: is this task processing genuinely new data?
    let novelty_boost = partition.coherence.task_contributions
        [task.handle.index() % 64] as f64;

    // 3. Structural risk: would this task's pending mutations
    //    lower the partition's coherence score?
    let structural_risk = task.pending_mutation_risk();

    // 4. Cut pressure boost: if this partition is near a split
    //    boundary, boost tasks that would reduce the cut cost
    //    (making the partition more internally coherent)
    let cut_boost = if partition.cut_pressure.min_cut_value < SPLIT_THRESHOLD {
        // Boost tasks on the heavier side of the cut
        let on_heavy_side = partition.cut_pressure.side_a.len()
            > partition.cut_pressure.side_b.len();
        if partition.cut_pressure.side_a.contains(&task.handle) == on_heavy_side {
            PRESSURE_BOOST
        } else {
            0.0
        }
    } else {
        0.0
    };

    FlowPriority {
        deadline_urgency,
        novelty_boost: novelty_boost * NOVELTY_WEIGHT,
        structural_risk: structural_risk * RISK_WEIGHT,
        cut_pressure_boost: cut_boost,
        total: deadline_urgency
            + novelty_boost * NOVELTY_WEIGHT
            - structural_risk * RISK_WEIGHT
            + cut_boost,
    }
}

const NOVELTY_WEIGHT: f64 = 0.3;
const RISK_WEIGHT: f64 = 2.0;
const PRESSURE_BOOST: f64 = 0.5;
const SPLIT_THRESHOLD: f64 = 0.2;
```
5.3 Partition Split/Merge Triggers
The scheduler monitors cut pressure and triggers structural changes:
```rust
// ruvix-sched/src/structural.rs

/// Structural change triggers evaluated every epoch.
pub fn evaluate_structural_changes(
    partitions: &[Partition],
    pressure: &PressureEngine,
    config: &StructuralConfig,
) -> Vec<StructuralAction> {
    let mut actions = Vec::new();

    for partition in partitions {
        let cp = &partition.cut_pressure;
        let cs = &partition.coherence;

        // SPLIT trigger: low mincut AND low coherence
        if cp.min_cut_value < config.split_cut_threshold
            && cs.value < config.split_coherence_threshold
            && cp.predicted_coherence_a > cs.value
            && cp.predicted_coherence_b > cs.value
        {
            actions.push(StructuralAction::Split {
                partition: partition.id,
                cut: cp.clone(),
            });
        }

        // MERGE trigger: high coherence between two partitions
        // connected by a heavy CommEdge
        for edge_handle in &partition.comm_edges {
            if let Some(edge) = pressure.get_edge(*edge_handle) {
                let weight = edge.weight.load(Ordering::Relaxed);
                if weight > config.merge_edge_threshold {
                    let other = if edge.source == partition.id {
                        edge.dest
                    } else {
                        edge.source
                    };
                    actions.push(StructuralAction::Merge {
                        a: partition.id,
                        b: other,
                        edge_weight: weight,
                    });
                }
            }
        }

        // HIBERNATE trigger: partition has been suspended for too long
        if partition.state == PartitionState::Suspended
            && partition.last_activity_ns + config.hibernate_after_ns < now_ns()
        {
            actions.push(StructuralAction::Hibernate {
                partition: partition.id,
            });
        }
    }

    actions
}
```
5.4 Per-CPU Scheduling
On multi-core systems, each CPU runs its own scheduler instance with partition affinity:
```rust
// ruvix-sched/src/percpu.rs

/// Per-CPU scheduler state.
pub struct PerCpuScheduler {
    /// CPU identifier
    cpu_id: u32,
    /// Partitions assigned to this CPU
    assigned: ArrayVec<PartitionId, 32>,
    /// Current time quantum remaining (microseconds)
    quantum_remaining: u32,
    /// Currently running task
    current: Option<TaskHandle>,
    /// Mode
    mode: SchedulerMode,
}

/// Global scheduler coordinates per-CPU instances.
pub struct GlobalScheduler {
    /// Per-CPU schedulers
    per_cpu: ArrayVec<PerCpuScheduler, MAX_CPUS>,
    /// Partition-to-CPU assignment (informed by coherence graph)
    assignment: PartitionAssignment,
    /// Global mode override (Recovery overrides all CPUs)
    global_mode: Option<SchedulerMode>,
}
```
6. IPC Design
6.1 Zero-Copy Message Passing
All inter-partition communication goes through CommEdges, which wrap the ruvix-queue ring buffers. Zero-copy is achieved by descriptor passing:
```rust
// ruvix-commedge/src/zerocopy.rs

/// A zero-copy message descriptor.
///
/// Instead of copying data, the sender places a descriptor in the
/// queue that references a shared region. The receiver reads directly
/// from the shared region.
///
/// This is safe because:
/// 1. Only Immutable or AppendOnly regions can be shared (no mutation)
/// 2. The stage-2 page tables enforce read-only access for the receiver
/// 3. The witness log records every share operation
#[derive(Debug, Clone, Copy)]
#[repr(C)]
pub struct ZeroCopyDescriptor {
    /// Shared region handle
    pub region: RegionHandle,
    /// Offset within the region
    pub offset: u32,
    /// Length of the data
    pub length: u32,
    /// Schema hash (for type checking)
    pub schema_hash: u64,
}

/// Send a zero-copy message.
///
/// The region must already be shared with the destination partition
/// via `CommEdgeOps::share_region`.
pub fn send_zerocopy(
    edge: &CommEdge,
    desc: ZeroCopyDescriptor,
    cap: CapHandle,
    cap_mgr: &CapabilityManager,
    witness: &mut WitnessLog,
) -> Result<(), HypervisorError> {
    // 1. Capability check
    let cap_entry = cap_mgr.lookup(cap)?;
    if !cap_entry.rights.contains(CapRights::WRITE) {
        return Err(HypervisorError::CapabilityDenied);
    }

    // 2. Verify region is shared with destination
    if !edge.shared_regions.contains(&desc.region) {
        return Err(HypervisorError::RegionNotShared);
    }

    // 3. Validate descriptor bounds
    //    (offset + length must be within region size)

    // 4. Enqueue descriptor in ring buffer
    edge.queue.send_raw(
        bytemuck::bytes_of(&desc),
        MsgPriority::Normal,
    )?;

    // 5. Witness
    witness.record(WitnessRecord::ZeroCopySend {
        edge: edge.id,
        region: desc.region,
        offset: desc.offset,
        length: desc.length,
    });

    Ok(())
}
```
6.2 Async Notification Mechanism
For lightweight signaling without data transfer (e.g., "new data available"), RVM provides notifications:
```rust
// ruvix-commedge/src/notification.rs

/// A notification word: a bitmask that can be atomically OR'd.
///
/// Notifications are the lightweight alternative to sending a
/// full message. A partition can wait on a notification word
/// and be woken when any bit is set.
///
/// This maps to a virtual interrupt injection at the hypervisor
/// level: setting a notification bit triggers a stage-2 fault
/// that the hypervisor converts to a virtual IRQ in the
/// destination partition.
pub struct NotificationWord {
    /// The notification bits (64 independent signals)
    bits: AtomicU64,
    /// Source partition (who can signal)
    source: PartitionId,
    /// Destination partition (who is waiting)
    dest: PartitionId,
    /// Capability required to signal
    signal_cap: CapHandle,
}

impl NotificationWord {
    /// Signal one or more notification bits.
    pub fn signal(&self, mask: u64, cap: CapHandle) -> Result<(), HypervisorError> {
        // Capability check omitted for brevity
        self.bits.fetch_or(mask, Ordering::Release);
        // Inject virtual interrupt into destination partition
        inject_virtual_irq(self.dest, NOTIFICATION_VIRQ);
        Ok(())
    }

    /// Wait for any bit in the mask to be set.
    ///
    /// Blocks the calling task until a matching bit is set.
    /// Returns the bits that were set.
    pub fn wait(&self, mask: u64) -> u64 {
        loop {
            let current = self.bits.load(Ordering::Acquire);
            let matched = current & mask;
            if matched != 0 {
                // Clear the matched bits
                self.bits.fetch_and(!matched, Ordering::AcqRel);
                return matched;
            }
            // Block task until notification IRQ
            yield_until_irq();
        }
    }
}
```
6.3 Shared Memory Regions with Witness Tracking
Every shared memory operation is witnessed:
```rust
// Witness records for IPC operations
pub enum IpcWitnessRecord {
    /// A region was shared between partitions
    RegionShared {
        region: RegionHandle,
        from: PartitionId,
        to: PartitionId,
        permissions: PagePermissions,
        edge: CommEdgeHandle,
    },
    /// A zero-copy message was sent
    ZeroCopySent {
        edge: CommEdgeHandle,
        region: RegionHandle,
        offset: u32,
        length: u32,
    },
    /// A region share was revoked
    ShareRevoked {
        region: RegionHandle,
        from: PartitionId,
        to: PartitionId,
    },
    /// A notification was signaled
    NotificationSignaled {
        source: PartitionId,
        dest: PartitionId,
        mask: u64,
    },
}
```
7. Device Model
7.1 Lease-Based Device Access
RVM does not emulate hardware. Instead, it provides direct device access through time-bounded leases. This is fundamentally different from KVM's device emulation (QEMU) or Firecracker's minimal device model (virtio).
Traditional Hypervisor:
Guest -> emulated device -> host driver -> real hardware
RVM:
Partition -> [lease check] -> real hardware (via stage-2 MMIO mapping)
The hypervisor maps device MMIO regions directly into the partition's stage-2 address space. The partition interacts with real hardware registers. The hypervisor's role is limited to:
- Granting and revoking leases
- Routing interrupts
- Enforcing lease expiration
- Resetting devices on lease revocation
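The lease-check step above can be sketched as follows. This is a hedged, self-contained illustration: the names `DeviceLease` and `LeaseTable`, and the `u32` partition/device IDs, are simplifications for this example, not RVM's actual API.

```rust
// Hypothetical sketch of lease bookkeeping (names and types are
// illustrative, not RVM's real interface).
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq)]
struct DeviceLease {
    holder: u32,        // partition holding the lease (simplified to u32)
    expires_at_ns: u64, // absolute expiration time
}

#[derive(Default)]
struct LeaseTable {
    leases: BTreeMap<u32, DeviceLease>, // device_id -> lease
}

impl LeaseTable {
    /// Grant a time-bounded lease if the device is free or its lease expired.
    fn acquire(&mut self, device: u32, holder: u32, now_ns: u64, duration_ns: u64) -> bool {
        match self.leases.get(&device) {
            Some(l) if l.expires_at_ns > now_ns => false, // still held by someone
            _ => {
                self.leases.insert(device, DeviceLease {
                    holder,
                    expires_at_ns: now_ns + duration_ns,
                });
                true
            }
        }
    }

    /// A partition may touch the device's MMIO only while its lease is current.
    fn check(&self, device: u32, holder: u32, now_ns: u64) -> bool {
        self.leases
            .get(&device)
            .map_or(false, |l| l.holder == holder && l.expires_at_ns > now_ns)
    }
}

fn main() {
    let mut t = LeaseTable::default();
    assert!(t.acquire(7, 1, 0, 1_000));     // partition 1 leases device 7
    assert!(!t.acquire(7, 2, 500, 1_000));  // partition 2 denied while lease is live
    assert!(t.check(7, 1, 900));            // holder passes the lease check
    assert!(!t.check(7, 1, 1_500));         // lease expired: MMIO access now faults
    assert!(t.acquire(7, 2, 1_500, 1_000)); // expired lease can be reclaimed
}
```

In the real system the expired-lease path would also trigger the device reset listed above before the new holder's stage-2 MMIO mapping is installed.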
7.2 Device Capability Tokens
```rust
// ruvix-drivers/src/device_cap.rs

/// A device descriptor identifying a hardware device.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub struct DeviceDescriptor {
    /// Device class
    pub class: DeviceClass,
    /// MMIO base address (physical)
    pub mmio_base: u64,
    /// MMIO region size
    pub mmio_size: usize,
    /// Primary interrupt number
    pub irq: u32,
    /// Device-specific identifier
    pub device_id: u32,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum DeviceClass {
    Uart,
    Timer,
    InterruptController,
    NetworkVirtio,
    BlockVirtio,
    Gpio,
    Rtc,
    Pci,
}

/// Device registry maintained by the hypervisor.
pub struct DeviceRegistry {
    /// All discovered devices
    devices: ArrayVec<DeviceDescriptor, 64>,
    /// Current leases: device -> (partition, expiration)
    leases: BTreeMap<DeviceDescriptor, DeviceLease>,
    /// Devices reserved for the hypervisor (never leased)
    reserved: ArrayVec<DeviceDescriptor, 8>,
}

impl DeviceRegistry {
    /// Discover devices from the device tree.
    pub fn from_dtb(dtb: &DeviceTree) -> Self {
        let mut reg = Self::new();
        for node in dtb.iter_devices() {
            let desc = DeviceDescriptor::from_dtb_node(node);
            reg.devices.push(desc);
        }
        // Reserve the interrupt controller and hypervisor timer
        reg.reserved.push(reg.find_gic().unwrap());
        reg.reserved.push(reg.find_timer().unwrap());
        reg
    }
}
```
7.3 Interrupt Routing
Interrupts from leased devices are routed to the holding partition as virtual interrupts:
```rust
// ruvix-drivers/src/irq_route.rs

/// Interrupt routing table.
///
/// Maps physical IRQs to virtual IRQs in partitions.
/// Only one partition can receive a given physical IRQ at a time.
pub struct IrqRouter {
    /// Physical IRQ -> (partition, virtual IRQ)
    routes: BTreeMap<u32, (PartitionId, u32)>,
}

impl IrqRouter {
    /// Route a physical IRQ to a partition.
    ///
    /// Called when a device lease is acquired.
    pub fn add_route(
        &mut self,
        phys_irq: u32,
        partition: PartitionId,
        virt_irq: u32,
    ) -> Result<(), HypervisorError> {
        if self.routes.contains_key(&phys_irq) {
            return Err(HypervisorError::IrqAlreadyRouted);
        }
        self.routes.insert(phys_irq, (partition, virt_irq));
        Ok(())
    }

    /// Handle a physical IRQ.
    ///
    /// Called from the hypervisor's IRQ handler. Looks up the
    /// route and injects a virtual interrupt into the target
    /// partition.
    pub fn dispatch(&self, phys_irq: u32) -> Option<(PartitionId, u32)> {
        self.routes.get(&phys_irq).copied()
    }
}
```
7.4 Virtio-Like Minimal Device Model
For devices that cannot be directly leased (shared devices, emulated devices for testing), RVM provides a minimal virtio-compatible interface:
```rust
// ruvix-drivers/src/virtio_shim.rs

/// Minimal virtio device shim.
///
/// This is NOT full virtio emulation. It provides:
/// - A single virtqueue (descriptor table + available ring + used ring)
/// - Interrupt injection via notification words
/// - Region-backed buffers (no DMA emulation)
///
/// Used for: virtio-console (debug), virtio-net (networking between
/// partitions), virtio-blk (block storage).
pub trait VirtioShim {
    /// Device type (net = 1, blk = 2, console = 3)
    fn device_type(&self) -> u32;

    /// Process available descriptors.
    fn process_queue(&mut self, queue: &VirtQueue) -> usize;

    /// Device-specific configuration read.
    fn read_config(&self, offset: u32) -> u32;

    /// Device-specific configuration write.
    fn write_config(&mut self, offset: u32, value: u32);
}
```
8. Witness Subsystem
8.1 Append-Only Log Design
The witness log is the audit backbone of RVM. Every privileged action produces a witness record. The log is append-only: there is no API to delete or modify records.
```rust
// ruvix-witness/src/log.rs

/// The kernel witness log.
///
/// Backed by a physically contiguous region in DRAM (Hot tier).
/// When the log fills, older segments are compressed to Warm tier
/// and eventually serialized to Cold tier.
///
/// The log is structured as a series of 64-byte records packed
/// into 4KB pages. Each page has a header with a running hash.
pub struct WitnessLog {
    /// Current write position (page index + offset within page)
    write_pos: AtomicU64,
    /// Physical pages backing the log
    pages: ArrayVec<PhysAddr, WITNESS_LOG_MAX_PAGES>,
    /// Running hash over all records (FNV-1a)
    chain_hash: AtomicU64,
    /// Sequence number (monotonically increasing)
    sequence: AtomicU64,
    /// Segment index for archival
    current_segment: u32,
}

/// Maximum log pages before rotation to warm tier.
pub const WITNESS_LOG_MAX_PAGES: usize = 4096; // 16 MB of hot log
```
8.2 Compact Binary Format
Each witness record is exactly 64 bytes to align with cache lines and avoid variable-length parsing:
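The document fixes the record size at 64 bytes but does not list the fields here, so the packing below is an assumption chosen only to illustrate how a fixed-size, cache-line-aligned record might be laid out (the field names and widths are hypothetical):

```rust
// Illustrative layout only: the 64-byte total is from the document,
// but this particular field packing is an assumption.
#[allow(dead_code)]
#[repr(C)]
struct WitnessRecord64 {
    kind: u16,              // record kind discriminant
    flags: u16,
    cpu: u32,               // CPU that emitted the record
    sequence: u64,          // monotonically increasing sequence number
    timestamp_ns: u64,
    arg0: u64,              // record-specific payload (partition, region, ...)
    arg1: u64,
    arg2: u64,
    chain_hash_before: u64, // running FNV-1a hash up to this record
    record_hash: u64,       // hash of this record's own contents
}

fn main() {
    // Fixed-size records mean no variable-length parsing and one record
    // per cache line on CPUs with 64-byte lines.
    assert_eq!(core::mem::size_of::<WitnessRecord64>(), 64);
    // A 4 KB log page holds 64 records, minus whatever header space the
    // real implementation reserves for the per-page running hash.
    assert_eq!(4096 / core::mem::size_of::<WitnessRecord64>(), 64);
}
```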
The witness log supports two operations: audit (verify integrity) and replay (reconstruct state).
```rust
// ruvix-witness/src/replay.rs

/// Verify the integrity of the witness log.
///
/// Walks the log from start to end, recomputing chain hashes.
/// Any break in the chain indicates tampering.
pub fn audit_log(log: &WitnessLog) -> AuditResult {
    let mut expected_hash: u64 = 0;
    let mut record_count: u64 = 0;
    let mut violations: Vec<AuditViolation> = Vec::new();

    for record in log.iter() {
        // Verify chain hash
        if record.chain_hash_before != expected_hash {
            violations.push(AuditViolation::ChainBreak {
                sequence: record.sequence,
                expected: expected_hash,
                found: record.chain_hash_before,
            });
        }

        // Verify record self-hash
        let computed = compute_record_hash(&record);
        if record.record_hash != computed {
            violations.push(AuditViolation::RecordTampered {
                sequence: record.sequence,
            });
        }

        // Advance chain
        expected_hash = fnv1a_combine(expected_hash, record.record_hash);
        record_count += 1;
    }

    AuditResult {
        total_records: record_count,
        // Compute chain_valid before moving `violations` into the struct.
        chain_valid: violations.is_empty(),
        violations,
    }
}

/// Replay a witness log to reconstruct system state.
///
/// Given a checkpoint and a witness log segment, deterministically
/// reconstructs the system state at any point in the log.
pub fn replay_from_checkpoint(
    checkpoint: &Checkpoint,
    log_segment: &[WitnessRecord],
) -> Result<KernelState, ReplayError> {
    let mut state = checkpoint.restore()?;
    for record in log_segment {
        state.apply_witness_record(record)?;
    }
    Ok(state)
}
```
8.5 Integration with Proof Verifier
The witness log and proof engine form a closed loop:
1. A task requests a mutation (e.g., vector_put_proved)
2. The proof engine verifies the proof token (3-tier routing)
3. If the proof is valid, the mutation is applied
4. A witness record is emitted (ProofVerified + VectorPut)
5. If the proof is invalid, a rejection record is emitted (ProofRejected)
6. The witness record's chain hash incorporates the proof attestation
This means the witness log contains a complete, tamper-evident history of every proof that was checked and every mutation that was applied.
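The loop can be sketched in miniature. This is a hedged, self-contained illustration: `ProofToken`, the `Witness` struct, and the multiply-xor chain step are simplified stand-ins for the real proof engine and FNV-1a chain, chosen only to show that both outcomes (apply + witness, reject + witness) flow through the same gate:

```rust
// Minimal sketch of the proof-gated mutation loop; all types here are
// simplified stand-ins, not RVM's actual API.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Verdict { Accepted, Rejected }

struct ProofToken { valid: bool, tier: u8 }

#[derive(Default)]
struct Witness {
    chain: u64,
    records: Vec<(&'static str, u64)>,
}

impl Witness {
    fn record(&mut self, kind: &'static str, payload: u64) {
        // The chain hash incorporates every record, including proof
        // attestations and rejections (stand-in for FNV-1a chaining).
        self.chain = self.chain.wrapping_mul(0x100000001b3) ^ payload;
        self.records.push((kind, payload));
    }
}

fn vector_put_proved(
    store: &mut Vec<u64>,
    value: u64,
    proof: &ProofToken,
    w: &mut Witness,
) -> Verdict {
    if !proof.valid {
        w.record("ProofRejected", proof.tier as u64);
        return Verdict::Rejected; // mutation is never applied
    }
    w.record("ProofVerified", proof.tier as u64);
    store.push(value); // mutation applied only after verification
    w.record("VectorPut", value);
    Verdict::Accepted
}

fn main() {
    let mut store = Vec::new();
    let mut w = Witness::default();
    let ok = vector_put_proved(&mut store, 42, &ProofToken { valid: true, tier: 1 }, &mut w);
    let no = vector_put_proved(&mut store, 7, &ProofToken { valid: false, tier: 1 }, &mut w);
    assert_eq!(ok, Verdict::Accepted);
    assert_eq!(no, Verdict::Rejected);
    assert_eq!(store, vec![42]);    // the rejected mutation left no trace in state...
    assert_eq!(w.records.len(), 3); // ...but both outcomes are in the witness log
}
```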
9. Agent Runtime Layer
9.1 WASM Partition Adapter
Agent workloads run as WASM modules inside partitions. The WASM runtime itself runs in the partition's address space (EL1/EL0), not in the hypervisor.
```rust
// ruvix-agent/src/adapter.rs

/// Configuration for a WASM agent partition.
pub struct AgentPartitionConfig {
    /// WASM module bytes
    pub wasm_module: &'static [u8],
    /// Memory limits
    pub max_memory_pages: u32, // Each page = 64KB
    pub initial_memory_pages: u32,
    /// Stack size for the WASM execution
    pub stack_size: usize,
    /// Capabilities granted to this agent
    pub capabilities: ArrayVec<CapHandle, 32>,
    /// Communication edges to other agents
    pub comm_edges: ArrayVec<CommEdgeConfig, 16>,
    /// Scheduling priority
    pub priority: TaskPriority,
    /// Optional deadline for real-time agents
    pub deadline: Option<Duration>,
}

/// WASM host functions exposed to agents.
///
/// These are the agent's interface to the hypervisor, mapped to
/// syscalls via the partition's capability table.
pub trait AgentHostFunctions {
    // --- Communication ---

    /// Send a message to another agent via CommEdge.
    fn send(&mut self, edge_id: u32, data: &[u8]) -> Result<(), AgentError>;

    /// Receive a message from a CommEdge.
    fn recv(&mut self, edge_id: u32, buf: &mut [u8]) -> Result<usize, AgentError>;

    /// Signal a notification.
    fn notify(&mut self, edge_id: u32, mask: u64) -> Result<(), AgentError>;

    // --- Memory ---

    /// Request a shared memory region.
    fn request_shared_region(
        &mut self,
        size: usize,
        policy: u32,
    ) -> Result<u32, AgentError>;

    /// Map a shared region from another agent.
    fn map_shared(&mut self, region_id: u32) -> Result<*const u8, AgentError>;

    // --- Vector/Graph ---

    /// Read a vector from the kernel vector store.
    fn vector_get(
        &mut self,
        store_id: u32,
        key: u64,
        buf: &mut [f32],
    ) -> Result<usize, AgentError>;

    /// Write a vector with proof.
    fn vector_put(
        &mut self,
        store_id: u32,
        key: u64,
        data: &[f32],
    ) -> Result<(), AgentError>;

    // --- Lifecycle ---

    /// Spawn a child agent.
    fn spawn_agent(&mut self, config_ptr: u32) -> Result<u32, AgentError>;

    /// Request hibernation.
    fn hibernate(&mut self) -> Result<(), AgentError>;

    /// Yield execution.
    fn yield_now(&mut self);
}
```
9.2 Agent-to-Coherence-Domain Mapping
Each agent maps to exactly one partition. Multiple agents can share a partition if they are tightly coupled (high coherence score).
Agent A ──┐
├── Partition P1 (coherence = 0.92)
Agent B ──┘
│ CommEdge (weight=1500)
v
Agent C ──── Partition P2 (coherence = 0.87)
│ CommEdge (weight=200)
v
Agent D ──┐
├── Partition P3 (coherence = 0.95)
Agent E ──┘
When the mincut algorithm detects that Agent B communicates more with Agent C than with Agent A, it will trigger a partition split, moving Agent B from P1 to P2 (or creating a new partition).
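The split decision above reduces to comparing cut costs. A worked example with assumed edge weights (the 300/1500 figures are illustrative, chosen to match the diagram's heavy B-C edge):

```rust
// Worked cut-cost example with assumed coherence-graph weights.
// If the A-B edge is lighter than B's traffic to C, the cheapest
// cut strands B on C's side, so the placement engine moves B.
fn cut_cost(edges: &[((char, char), f64)], side_a: &[char]) -> f64 {
    // Sum the weight of every edge crossing the (side_a | rest) boundary.
    edges
        .iter()
        .filter(|((u, v), _)| side_a.contains(u) != side_a.contains(v))
        .map(|(_, w)| w)
        .sum()
}

fn main() {
    // Assumed weights (messages per epoch): A-B light, B-C heavy.
    let edges = [(('A', 'B'), 300.0), (('B', 'C'), 1500.0)];

    let keep_b_with_a = cut_cost(&edges, &['A', 'B']); // cuts the B-C edge
    let move_b_to_c = cut_cost(&edges, &['A']);        // cuts the A-B edge

    assert_eq!(keep_b_with_a, 1500.0);
    assert_eq!(move_b_to_c, 300.0);
    // The minimum cut separates A from {B, C}: the split trigger fires
    // and B migrates toward C's partition.
    assert!(move_b_to_c < keep_b_with_a);
}
```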
9.3 Agent Lifecycle
```rust
// ruvix-agent/src/lifecycle.rs

/// Agent lifecycle states.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum AgentState {
    /// Being initialized (WASM module loading, capability setup)
    Initializing,
    /// Actively executing within its partition
    Running,
    /// Suspended (waiting on I/O or explicit yield)
    Suspended,
    /// Being migrated to a different partition
    Migrating {
        from: PartitionId,
        to: PartitionId,
    },
    /// Hibernated (state serialized, partition may be dormant)
    Hibernated,
    /// Being reconstructed from hibernated state
    Reconstructing,
    /// Terminated (cleanup complete)
    Terminated,
}

/// Agent migration protocol.
///
/// Migration moves an agent from one partition to another without
/// losing state. This is triggered by the mincut-based placement
/// engine when it detects that an agent is misplaced.
pub fn migrate_agent(
    agent: AgentHandle,
    from: PartitionId,
    to: PartitionId,
    kernel: &mut Kernel,
) -> Result<(), MigrationError> {
    // 1. Suspend agent
    kernel.suspend_task(agent.task)?;

    // 2. Serialize agent state (WASM memory, stack, globals)
    let state = kernel.serialize_wasm_state(agent)?;

    // 3. Create new task in destination partition
    let new_task = kernel.create_task_in_partition(to, agent.config)?;

    // 4. Restore state into new task
    kernel.restore_wasm_state(new_task, &state)?;

    // 5. Transfer owned regions
    for region in agent.owned_regions() {
        kernel.transfer_region(region, from, to)?;
    }

    // 6. Update CommEdge endpoints
    for edge in agent.comm_edges() {
        kernel.update_edge_endpoint(edge, from, to)?;
    }

    // 7. Update coherence graph
    kernel.pressure_engine.agent_migrated(agent, from, to);

    // 8. Witness
    kernel.witness_log.record(WitnessRecord::new(
        WitnessRecordKind::PartitionMigrate,
        from.0,
        to.0,
        agent.0 as u64,
    ));

    // 9. Resume agent in new partition
    kernel.resume_task(new_task)?;

    // 10. Destroy old task
    kernel.destroy_task(agent.task)?;

    Ok(())
}
```
9.4 Multi-Agent Communication
Agents communicate exclusively through CommEdges. The communication pattern is recorded in the coherence graph and drives placement decisions:
```rust
// ruvix-agent/src/communication.rs

/// Agent communication layer built on CommEdges.
pub struct AgentComm {
    /// Agent's partition
    partition: PartitionId,
    /// Named edges: edge_name -> CommEdgeHandle
    edges: BTreeMap<&'static str, CommEdgeHandle>,
    /// Message serialization format
    format: MessageFormat,
}

#[derive(Debug, Clone, Copy)]
pub enum MessageFormat {
    /// Raw bytes (no serialization overhead)
    Raw,
    /// WIT Component Model types (schema-validated)
    Wit,
    /// CBOR (compact, self-describing)
    Cbor,
}

impl AgentComm {
    /// Send a typed message to a named edge.
    pub fn send<T: Serialize>(
        &self,
        edge_name: &str,
        message: &T,
    ) -> Result<(), AgentError> {
        let edge = self.edges.get(edge_name).ok_or(AgentError::UnknownEdge)?;
        let bytes = self.serialize(message)?;
        // This goes through CommEdgeOps::send, which updates
        // the coherence graph edge weight
        syscall_queue_send(*edge, &bytes, MsgPriority::Normal)
    }

    /// Receive a typed message from a named edge.
    pub fn recv<T: Deserialize>(
        &self,
        edge_name: &str,
        timeout: Duration,
    ) -> Result<T, AgentError> {
        let edge = self.edges.get(edge_name).ok_or(AgentError::UnknownEdge)?;
        let mut buf = [0u8; 65536];
        let len = syscall_queue_recv(*edge, &mut buf, timeout)?;
        self.deserialize(&buf[..len])
    }
}
```
10. Hardware Abstraction
10.1 HAL Trait Design
The HAL defines platform-agnostic traits. Existing traits from ruvix-hal (Console, Timer, InterruptController, Mmu, PowerManagement) are extended with hypervisor-specific traits:
```rust
// ruvix-hal/src/hypervisor.rs

/// Hypervisor-specific hardware abstraction.
///
/// This trait captures the operations that differ between
/// ARM EL2, RISC-V HS-mode, and x86 VMX root mode.
pub trait HypervisorHal {
    /// Stage-2/EPT page table type
    type Stage2Table;
    /// Virtual CPU context type
    type VcpuContext;

    /// Configure the CPU for hypervisor mode.
    ///
    /// Called once during boot. Sets up:
    /// - Stage-2 translation (VTCR_EL2 / hgatp / EPT pointer)
    /// - Trap configuration (HCR_EL2 / hedeleg / VM-execution controls)
    /// - Virtual interrupt delivery
    unsafe fn init_hypervisor_mode(&self) -> Result<(), HalError>;

    /// Create a new stage-2 address space.
    fn create_stage2_table(
        &self,
        phys: &mut dyn PhysicalAllocator,
    ) -> Result<Self::Stage2Table, HalError>;

    /// Map a page in a stage-2 table.
    fn stage2_map(
        &self,
        table: &mut Self::Stage2Table,
        ipa: u64,
        pa: u64,
        attrs: Stage2Attrs,
    ) -> Result<(), HalError>;

    /// Unmap a page from a stage-2 table.
    fn stage2_unmap(
        &self,
        table: &mut Self::Stage2Table,
        ipa: u64,
    ) -> Result<(), HalError>;

    /// Switch to a partition's address space.
    ///
    /// Activates the partition's stage-2 tables and restores
    /// the vCPU context.
    unsafe fn enter_partition(
        &self,
        table: &Self::Stage2Table,
        vcpu: &Self::VcpuContext,
    );

    /// Handle a trap from a partition.
    ///
    /// Called when the partition triggers a stage-2 fault,
    /// HVC/ECALL, or trapped instruction.
    fn handle_trap(
        &self,
        vcpu: &mut Self::VcpuContext,
        trap: TrapInfo,
    ) -> TrapAction;

    /// Inject a virtual interrupt into a partition.
    fn inject_virtual_irq(
        &self,
        vcpu: &mut Self::VcpuContext,
        irq: u32,
    ) -> Result<(), HalError>;

    /// Flush stage-2 TLB entries for a partition.
    fn flush_stage2_tlb(&self, vmid: u16);
}

/// Information about a trap from a partition.
#[derive(Debug)]
pub struct TrapInfo {
    /// Trap cause
    pub cause: TrapCause,
    /// Faulting address (if applicable)
    pub fault_addr: Option<u64>,
    /// Instruction that caused the trap (for emulation)
    pub instruction: Option<u32>,
}

#[derive(Debug)]
pub enum TrapCause {
    /// Stage-2 page fault (IPA not mapped)
    Stage2Fault { ipa: u64, is_write: bool },
    /// Hypercall (HVC/ECALL/VMCALL)
    Hypercall { code: u64, args: [u64; 4] },
    /// MMIO access to an unmapped device
    MmioAccess { addr: u64, is_write: bool, value: u64, size: u8 },
    /// WFI/WFE instruction (idle)
    WaitForInterrupt,
    /// System register access (trapped MSR/CSR)
    SystemRegister { reg: u32, is_write: bool, value: u64 },
}

#[derive(Debug)]
pub enum TrapAction {
    /// Resume the partition
    Resume,
    /// Resume with modified register state
    ResumeModified,
    /// Suspend the partition's current task
    SuspendTask,
    /// Terminate the partition
    Terminate,
}
```
10.2 What Must Be in Assembly vs Rust
| Component | Language | Reason |
|---|---|---|
| Reset vector, stack setup, BSS clear | Assembly | No Rust runtime available yet |
| Exception vector table entry points | Assembly | Fixed hardware-defined layout; must save/restore registers in exact order |
| Context switch (register save/restore) | Assembly | Must atomically save all 31 GPRs + SP + PC + PSTATE |
| TLB invalidation sequences | Inline asm in Rust | Specific instruction sequences with barriers |
| Cache maintenance | Inline asm in Rust | DC/IC instructions |
| Everything else | Rust | Type safety, borrow checker, no_std ecosystem |
Target: less than 500 lines of assembly total per platform.
10.3 Platform Abstraction Summary
| Operation | AArch64 (EL2) | RISC-V (HS-mode) | x86-64 (VMX root) |
|---|---|---|---|
| Stage-2 tables | VTTBR_EL2 + VTT | hgatp + G-stage PT | EPTP + EPT |
| Trap entry | VBAR_EL2 vectors | stvec (VS traps delegate to HS) | VM-exit handler |
| Virtual IRQ | HCR_EL2.VI bit | hvip.VSEIP | Posted interrupts / VM-entry interruption |
| Hypercall | HVC instruction | ECALL from VS-mode | VMCALL instruction |
| VMID/ASID | VTTBR_EL2[63:48] | hgatp.VMID | VPID (16-bit) |
| Cache control | DC CIVAC, IC IALLU | SFENCE.VMA | INVLPG, WBINVD |
| Timer | CNTHP_CTL_EL2 | htimedelta + stimecmp | VMX preemption timer |
10.4 QEMU virt as Reference Platform
The QEMU AArch64 virt machine is the first target:
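A typical boot invocation might look like the following. The flags are illustrative assumptions, not a documented RVM command line; the key ingredient is `virtualization=on`, which exposes EL2 to the loaded image so the hypervisor can run at EL2:

```shell
# Illustrative QEMU boot (flags are assumptions, not RVM's documented
# command line). virtualization=on makes EL2 available to the image.
qemu-system-aarch64 \
    -machine virt,virtualization=on,gic-version=3 \
    -cpu cortex-a72 \
    -smp 4 -m 2G \
    -nographic \
    -kernel rvm.bin
```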
11.1 mincut Crate -> Partition Split/Merge Decisions
The ruvector-mincut crate provides the dynamic minimum cut algorithm that drives partition split/merge decisions. The integration maps the hypervisor's coherence graph to the mincut data structure:
```rust
// ruvix-pressure/src/mincut_bridge.rs
use ruvector_mincut::{MinCutBuilder, DynamicMinCut};

/// Bridge between the hypervisor coherence graph and ruvector-mincut.
pub struct MinCutBridge {
    /// The dynamic mincut structure
    mincut: Box<dyn DynamicMinCut>,
    /// Mapping: PartitionId -> mincut vertex ID
    partition_to_vertex: BTreeMap<PartitionId, usize>,
    /// Mapping: CommEdgeHandle -> mincut edge
    edge_to_mincut: BTreeMap<CommEdgeHandle, (usize, usize)>,
    /// Recomputation epoch
    epoch: u64,
}

impl MinCutBridge {
    pub fn new() -> Self {
        let mincut = MinCutBuilder::new().exact().build().expect("mincut init");
        Self {
            mincut: Box::new(mincut),
            partition_to_vertex: BTreeMap::new(),
            edge_to_mincut: BTreeMap::new(),
            epoch: 0,
        }
    }

    /// Register a new partition as a vertex.
    pub fn add_partition(&mut self, id: PartitionId) -> usize {
        let vertex = self.partition_to_vertex.len();
        self.partition_to_vertex.insert(id, vertex);
        vertex
    }

    /// Register a CommEdge as a weighted edge.
    ///
    /// Called when a CommEdge is created.
    pub fn add_edge(
        &mut self,
        edge: CommEdgeHandle,
        source: PartitionId,
        dest: PartitionId,
        initial_weight: f64,
    ) -> Result<(), PressureError> {
        let u = *self.partition_to_vertex.get(&source)
            .ok_or(PressureError::UnknownPartition)?;
        let v = *self.partition_to_vertex.get(&dest)
            .ok_or(PressureError::UnknownPartition)?;
        self.mincut.insert_edge(u, v, initial_weight)?;
        self.edge_to_mincut.insert(edge, (u, v));
        Ok(())
    }

    /// Update edge weight (called on every message send).
    ///
    /// Uses delete + insert since ruvector-mincut supports dynamic updates.
    pub fn update_weight(
        &mut self,
        edge: CommEdgeHandle,
        new_weight: f64,
    ) -> Result<(), PressureError> {
        let (u, v) = *self.edge_to_mincut.get(&edge)
            .ok_or(PressureError::UnknownEdge)?;
        let _ = self.mincut.delete_edge(u, v);
        self.mincut.insert_edge(u, v, new_weight)?;
        Ok(())
    }

    /// Compute the current minimum cut.
    ///
    /// Returns CutPressure indicating where the system should split.
    pub fn compute_pressure(&self) -> CutPressure {
        let cut = self.mincut.min_cut();
        CutPressure {
            min_cut_value: cut.value,
            cut_edges: self.translate_cut_edges(&cut),
            // ... translate partition sides
            computed_at_ns: now_ns(),
            ..Default::default()
        }
    }
}
```
API mapping from ruvector-mincut:
| mincut API | Hypervisor Use |
|---|---|
| `MinCutBuilder::new().exact().build()` | Initialize placement engine |
| `insert_edge(u, v, weight)` | Register CommEdge creation |
| `delete_edge(u, v)` | Register CommEdge destruction |
| `min_cut_value()` | Query current cut pressure |
| `min_cut() -> MinCutResult` | Get the actual cut for split decisions |
| `WitnessTree` | Certify that the computed cut is correct |
11.2 sparsifier Crate -> Efficient Graph State
The ruvector-sparsifier crate maintains a compressed shadow of the coherence graph. When the full graph becomes large (hundreds of partitions, thousands of edges), the sparsifier provides an approximate view that preserves spectral properties:
```rust
// ruvix-pressure/src/sparse_bridge.rs
use ruvector_sparsifier::{AdaptiveGeoSpar, SparseGraph, SparsifierConfig, Sparsifier};

/// Sparsified view of the coherence graph.
///
/// The full coherence graph tracks every CommEdge and its weight.
/// The sparsifier maintains a compressed version that preserves
/// the Laplacian energy within (1 +/- epsilon), enabling efficient
/// coherence score computation on large graphs.
pub struct SparseBridge {
    /// The full graph (source of truth)
    full_graph: SparseGraph,
    /// The sparsifier (compressed view)
    sparsifier: AdaptiveGeoSpar,
    /// Compression ratio
    compression: f64,
}

impl SparseBridge {
    pub fn new(epsilon: f64) -> Self {
        let full_graph = SparseGraph::new();
        let config = SparsifierConfig {
            epsilon,
            ..Default::default()
        };
        let sparsifier = AdaptiveGeoSpar::build(&full_graph, config)
            .expect("sparsifier init");
        Self {
            full_graph,
            sparsifier,
            compression: 1.0,
        }
    }

    /// Add a CommEdge to the graph.
    pub fn add_edge(
        &mut self,
        u: usize,
        v: usize,
        weight: f64,
    ) -> Result<(), PressureError> {
        self.full_graph.add_edge(u, v, weight);
        self.sparsifier.insert_edge(u, v, weight)?;
        self.compression = self.sparsifier.compression_ratio();
        Ok(())
    }

    /// Get the sparsified graph for coherence computation.
    ///
    /// The solver crate operates on this compressed graph,
    /// not the full graph.
    pub fn sparsified(&self) -> &SparseGraph {
        self.sparsifier.sparsifier()
    }

    /// Audit sparsifier quality.
    pub fn audit(&self) -> bool {
        self.sparsifier.audit().passed
    }
}
```
API mapping from ruvector-sparsifier:
| sparsifier API | Hypervisor Use |
|---|---|
| `SparseGraph::from_edges()` | Build initial coherence graph |
| `AdaptiveGeoSpar::build()` | Create compressed view |
| `insert_edge()` / `delete_edge()` | Dynamic graph updates |
| `sparsifier() -> &SparseGraph` | Feed to solver for coherence |
| `audit() -> AuditResult` | Verify compression quality |
| `compression_ratio()` | Monitor graph efficiency |
11.3 solver Crate -> Coherence Score Computation
The ruvector-solver crate computes coherence scores by solving Laplacian systems on the sparsified coherence graph:
```rust
// ruvix-pressure/src/coherence_solver.rs
use ruvector_solver::traits::{SolverEngine, SparseLaplacianSolver};
use ruvector_solver::neumann::NeumannSolver;
use ruvector_solver::types::{CsrMatrix, ComputeBudget};

/// Coherence score computation via Laplacian solver.
///
/// The coherence score of a partition is derived from the
/// effective resistance between its internal nodes. Low
/// effective resistance = high coherence (tightly coupled).
pub struct CoherenceSolver {
    /// The solver engine
    solver: NeumannSolver,
    /// Compute budget per invocation
    budget: ComputeBudget,
}

impl CoherenceSolver {
    pub fn new() -> Self {
        Self {
            solver: NeumannSolver::new(1e-4, 200), // tolerance, max_iter
            budget: ComputeBudget::default(),
        }
    }

    /// Compute the coherence score for a partition.
    ///
    /// Uses the sparsified Laplacian to compute average effective
    /// resistance between all pairs of tasks in the partition.
    /// Lower resistance = higher coherence.
    pub fn compute_coherence(
        &self,
        partition: &Partition,
        sparse_graph: &SparseGraph,
    ) -> Result<CoherenceScore, PressureError> {
        // 1. Extract the subgraph for this partition
        let subgraph = extract_partition_subgraph(partition, sparse_graph);

        // 2. Build Laplacian matrix
        let laplacian = build_laplacian(&subgraph);

        // 3. Compute effective resistance between task pairs
        let mut total_resistance = 0.0;
        let mut pairs = 0;
        let task_ids: Vec<usize> = partition.tasks.keys().map(|t| t.index()).collect();
        for i in 0..task_ids.len() {
            for j in (i + 1)..task_ids.len() {
                let r = self.solver.effective_resistance(
                    &laplacian,
                    task_ids[i],
                    task_ids[j],
                    &self.budget,
                )?;
                total_resistance += r;
                pairs += 1;
            }
        }

        // 4. Normalize: coherence = 1 / (1 + avg_resistance)
        let avg_resistance = if pairs > 0 {
            total_resistance / pairs as f64
        } else {
            0.0
        };
        let coherence_value = 1.0 / (1.0 + avg_resistance);

        Ok(CoherenceScore {
            value: coherence_value,
            task_contributions: compute_per_task_contributions(
                &laplacian,
                &task_ids,
                &self.solver,
                &self.budget,
            ),
            computed_at_ns: now_ns(),
            stale: false,
        })
    }
}
```
API mapping from ruvector-solver:
| solver API | Hypervisor Use |
|---|---|
| `NeumannSolver::new(tol, max_iter)` | Create solver for coherence computation |
| `solve(&matrix, &rhs) -> SolverResult` | General sparse linear solve |
| `effective_resistance(laplacian, s, t)` | Core coherence metric between task pairs |
| `estimate_complexity(profile, n)` | Budget estimation before solving |
| `ComputeBudget` | Bound solver computation per epoch |
11.4 Full Pressure Engine Pipeline
The three crates form a pipeline that runs every scheduler epoch:
```
CommEdge weight updates (per message)
        |
        v
[ruvector-sparsifier] -- maintain compressed coherence graph
        |
        v
[ruvector-solver]     -- compute coherence scores from Laplacian
        |
        v
[ruvector-mincut]     -- compute cut pressure from communication graph
        |
        v
Scheduler decisions:
  - Task priority adjustment (Flow mode)
  - Partition split/merge triggers
  - Agent migration signals
  - Tier promotion/demotion hints
```
```rust
// ruvix-pressure/src/engine.rs

/// The unified pressure engine.
///
/// Combines sparsifier, solver, and mincut into a single subsystem
/// that the scheduler queries every epoch.
pub struct PressureEngine {
    /// Sparsified coherence graph
    sparse: SparseBridge,
    /// Mincut for split/merge decisions
    mincut: MinCutBridge,
    /// Coherence solver
    solver: CoherenceSolver,
    /// Epoch counter
    epoch: u64,
    /// Epoch duration in nanoseconds
    epoch_duration_ns: u64,
    /// Cached results (valid for one epoch)
    cached_coherence: BTreeMap<PartitionId, CoherenceScore>,
    cached_pressure: Option<CutPressure>,
}

impl PressureEngine {
    /// Called every scheduler epoch.
    ///
    /// Recomputes coherence scores and cut pressure.
    pub fn tick(&mut self, partitions: &[Partition]) -> EpochResult {
        self.epoch += 1;

        // 1. Decay edge weights (exponential decay per epoch)
        self.sparse.decay_weights(0.95);
        self.mincut.decay_weights(0.95);

        // 2. Audit sparsifier quality
        if !self.sparse.audit() {
            self.sparse.rebuild();
        }

        // 3. Recompute coherence scores
        for partition in partitions {
            let score = self.solver.compute_coherence(partition, self.sparse.sparsified());
            if let Ok(s) = score {
                self.cached_coherence.insert(partition.id, s);
            }
        }

        // 4. Recompute cut pressure
        self.cached_pressure = Some(self.mincut.compute_pressure());

        // 5. Evaluate structural changes
        let actions = evaluate_structural_changes(partitions, self, &StructuralConfig::default());

        EpochResult {
            epoch: self.epoch,
            actions,
            coherence_scores: self.cached_coherence.clone(),
            cut_pressure: self.cached_pressure.clone(),
        }
    }

    /// Called on every CommEdge message send.
    ///
    /// Incrementally updates edge weights in both the sparsifier
    /// and the mincut structure.
    pub fn on_message_sent(&mut self, edge: CommEdgeHandle, bytes: usize) {
        if let Some((u, v)) = self.mincut.edge_to_mincut.get(&edge) {
            let new_weight = bytes as f64; // Simplified; real impl accumulates
            let _ = self.sparse.update_weight(*u, *v, new_weight);
            let _ = self.mincut.update_weight(edge, new_weight);
        }
    }
}
```
12. What Makes RVM Different
12.1 Comparison Matrix
| Property | KVM/QEMU | Firecracker | seL4 | RVM |
|---|---|---|---|---|
| Abstraction unit | VM (full hardware) | microVM (minimal HW) | Thread + address space | Coherence domain (partition) |
| Device model | Full QEMU emulation | Minimal virtio | Passthrough | Time-bounded leases |
| Isolation basis | EPT/stage-2 | EPT/stage-2 | Capabilities + page tables | Capabilities + stage-2 + graph theory |
| Scheduling | Linux CFS | Linux CFS | Priority-based | Graph-pressure-driven, 3 modes |
| IPC | Virtio rings | VSOCK | Synchronous IPC | Zero-copy CommEdges with coherence tracking |
| Audit | None built-in | None built-in | Formal proof (binary level) | Witness log (every privileged action) |
| Mutation control | None | None | Capability rights | Proof-gated (3-tier cryptographic verification) |
| Memory model | Demand paging | Demand paging (host) | Typed memory objects | Tiered (Hot/Warm/Dormant/Cold), no demand paging |
| Dynamic reconfiguration | VM migration (external) | Snapshot/restore | Static CNode tree | Mincut-driven split/merge/migrate |
| Graph awareness | None | None | None | Native: mincut, sparsifier, solver integrated |
| Agent-native | No | No (but fast boot) | No | Yes: WASM partitions, lifecycle management |
| Written in | C (QEMU) + C (Linux) | Rust (VMM) + C (Linux) | C + Isabelle/HOL proofs | Rust (< 500 lines asm per platform) |
| Host OS dependency | Linux required | Linux required | None (standalone) | None (standalone) |
12.2 Key Differentiators
1. Graph-theory-native isolation. No other hypervisor uses mincut algorithms to determine isolation boundaries. KVM and Firecracker rely on the human to define VM boundaries. seL4 relies on the human to define CNode trees. RVM computes boundaries dynamically from observed communication patterns.
2. Proof-gated mutation. seL4 has formal verification of the kernel binary, but does not gate runtime state mutations with proofs. RVM requires a cryptographic proof for every mutation, checked at three tiers (Reflex < 100ns, Standard < 100us, Deep < 10ms).
3. Witness-native auditability. The witness log is not an optional feature or an afterthought. It is woven into every syscall path. Every privileged action produces a 64-byte witness record with a chained hash. The log is tamper-evident and supports deterministic replay.
4. Coherence-driven scheduling. The scheduler does not just balance CPU load. It considers the graph structure of partition communication, novelty of incoming data, and structural risk of pending mutations. This is a fundamentally different optimization target.
5. Tiered memory without demand paging. By eliminating page faults from the critical path and replacing them with explicit tier transitions, RVM achieves deterministic latency while still supporting memory overcommit through compression and serialization.
6. Agent-native runtime. WASM agents are first-class entities with defined lifecycle states (spawn, execute, migrate, hibernate, reconstruct). The hypervisor understands agent communication patterns and uses them to optimize placement.
12.3 Threat Model
RVM assumes:
Trusted: The hypervisor binary (verified boot with ML-DSA-65 signatures), hardware
Untrusted: All partition code, all agent WASM modules, all inter-partition messages
Partially trusted: Device firmware (isolated via leases with bounded time)
The capability system ensures that a compromised partition cannot:
Access memory outside its stage-2 address space
Send messages on edges it does not hold capabilities for
Mutate kernel state without a valid proof
Read the witness log without WITNESS capability
Acquire devices without LEASE capability
Modify another partition's coherence score
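To make the denial behavior concrete, here is a minimal, self-contained sketch of a rights-bitmap check of the kind the list above implies. The names (`CapRights`, `witness_read_allowed`) follow the document's vocabulary but this is an illustrative model, not the RVM kernel code:

```rust
/// Illustrative rights bitmap, modeled after the CapRights described in
/// this document. Not the actual RVM implementation.
#[derive(Clone, Copy)]
struct CapRights(u32);

impl CapRights {
    const READ: CapRights = CapRights(1 << 0);
    const WRITE: CapRights = CapRights(1 << 1);
    const WITNESS: CapRights = CapRights(1 << 2);

    /// True iff every bit in `other` is also set in `self`.
    fn contains(self, other: CapRights) -> bool {
        self.0 & other.0 == other.0
    }
}

struct Capability {
    rights: CapRights,
}

/// Reading the witness log requires the WITNESS right, regardless of
/// any other rights the capability carries.
fn witness_read_allowed(cap: &Capability) -> bool {
    cap.rights.contains(CapRights::WITNESS)
}

fn main() {
    // A compromised partition holding only READ | WRITE...
    let compromised = Capability {
        rights: CapRights(CapRights::READ.0 | CapRights::WRITE.0),
    };
    // ...is denied witness-log access: no WITNESS bit, no read.
    assert!(!witness_read_allowed(&compromised));

    let auditor = Capability { rights: CapRights::WITNESS };
    assert!(witness_read_allowed(&auditor));
}
```

The same pattern applies to each bullet above: the syscall path checks the specific right (LEASE, GRANT, PROVE) before dispatching, so missing a single bit closes the corresponding avenue.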
12.4 Performance Targets
| Operation | Target Latency | Bound |
|---|---|---|
| Hypercall (syscall) round-trip | < 1 us | Hardware trap + capability check |
| Zero-copy message send | < 500 ns | Ring buffer enqueue + witness record |
| Notification signal | < 200 ns | Atomic OR + virtual IRQ inject |
| Proof verification (Reflex) | < 100 ns | Hash comparison |
| Proof verification (Standard) | < 100 us | Merkle witness verification |
| Proof verification (Deep) | < 10 ms | Full coherence check via solver |
| Partition split | < 50 ms | Stage-2 table creation + region remapping |
| Agent migration | < 100 ms | State serialize + transfer + restore |
| Coherence score computation | < 5 ms per epoch | Laplacian solve on sparsified graph |
| Witness record write | < 50 ns | Cache-line-aligned append |
Appendix A: Syscall Table (Extended for Hypervisor)
The Phase A syscall table (12 syscalls) is extended with hypervisor-specific operations:
Total new code: ~13,500 lines (Rust) + ~1,500 lines (assembly, 3 platforms)
Appendix C: Build and Test
```sh
# Build for QEMU AArch64 virt (hypervisor mode)
cargo build --target aarch64-unknown-none \
    --release \
    -p ruvix-nucleus \
    --features "baremetal,aarch64,hypervisor"

# Run on QEMU
qemu-system-aarch64 \
    -machine virt,virtualization=on,gic-version=3 \
    -cpu cortex-a72 \
    -m 1G \
    -smp 4 \
    -nographic \
    -kernel target/aarch64-unknown-none/release/ruvix

# Run unit tests (hosted, std feature)
cargo test --workspace --features "std,test-hosted"

# Run integration tests (QEMU)
cargo test --test qemu_integration --features "qemu-test"
```
RVM Security Model
Status: Draft -- Research document for RVM bare-metal microhypervisor security architecture.

Date: 2026-04-04
Scope
This document specifies the security model for RVM as a standalone, bare-metal, Rust-first
microhypervisor for agents and edge computing. RVM does NOT depend on Linux or KVM. It boots
directly on hardware (AArch64 primary, x86_64 secondary) and enforces all isolation through its
own MMU page tables, capability system, and proof-gated mutation protocol.
The security model builds on the primitives already implemented in Phase A (ruvix-types,
ruvix-cap, ruvix-proof, ruvix-region, ruvix-queue, ruvix-vecgraph, ruvix-nucleus) and extends
them for bare-metal operation with hardware-enforced isolation.
1. Capability-Based Authority
1.1 Design Philosophy
RVM enforces the principle of least authority through capabilities. There is no ambient
authority anywhere in the system. Every syscall requires an explicit capability handle that
authorizes the operation. This means:
No global namespaces (no filesystem paths, no PIDs, no network ports accessible by name)
No superuser or root -- the root task holds initial capabilities but cannot bypass the model
No default permissions -- a newly spawned task has exactly the capabilities its parent
explicitly grants via cap_grant
No ambient access to hardware -- device MMIO regions, interrupt lines, and DMA channels
are all gated by capabilities
1.2 Capability Structure
Capabilities are kernel-resident objects. User-space code never sees the raw capability; it
holds an opaque CapHandle that the kernel resolves through a per-task capability table.
```rust
/// The kernel-side capability. User space never sees this directly.
/// File: crates/ruvix/crates/types/src/capability.rs
#[repr(C)]
pub struct Capability {
    pub object_id: u64,          // Kernel object being referenced
    pub object_type: ObjectType, // Region, Queue, VectorStore, Task, etc.
    pub rights: CapRights,       // Bitmap of permitted operations
    pub badge: u64,              // Caller-visible demux identifier
    pub epoch: u64,              // Revocation epoch (stale handles detected)
}
```
The rights bitmap (from `crates/ruvix/crates/types/src/capability.rs`) includes `GRANT_ONCE`, a
non-transitive grant right: a capability derived through it cannot re-grant.
1.3 Capability Delegation and Attenuation
Delegation follows strict monotonic attenuation: a task can only grant capabilities it holds,
and the granted rights must be a subset of the held rights. This is enforced at the type level
in Capability::derive():
```rust
/// Derive a capability with equal or fewer rights.
/// Returns None if rights escalation is attempted or GRANT right is absent.
pub fn derive(&self, new_rights: CapRights, new_badge: u64) -> Option<Self> {
    if !self.has_rights(CapRights::GRANT) {
        return None;
    }
    if !new_rights.is_subset_of(self.rights) {
        return None;
    }
    // GRANT_ONCE strips GRANT from the derived cap
    let final_rights = if self.rights.contains(CapRights::GRANT_ONCE) {
        new_rights.difference(CapRights::GRANT).difference(CapRights::GRANT_ONCE)
    } else {
        new_rights
    };
    Some(Self { rights: final_rights, badge: new_badge, ..*self })
}
```
Delegation depth limit: Maximum 8 levels (configurable per RVF manifest). The derivation
tree tracks the full chain, and audit flags chains deeper than 4 (AUDIT_DEPTH_WARNING_THRESHOLD).
1.4 Capability Revocation
Revocation propagates through the derivation tree. When a capability is revoked:
The capability's epoch is incremented in the kernel's object table
All entries in the derivation tree rooted at the revoked capability are invalidated
Any held CapHandle referencing the old epoch returns KernelError::StaleCapability
This is O(d) where d is the number of derived capabilities, bounded by the delegation depth
limit and the per-task capability table size (1024 entries max).
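The epoch mechanism can be sketched in a few lines. This is a simplified stand-alone model (a `HashMap` in place of the kernel's object table) showing why a revoked handle fails with `StaleCapability` while a freshly granted one resolves:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum KernelError {
    StaleCapability,
}

/// Toy object table: object_id -> current revocation epoch.
/// Illustrative model of the epoch check, not RVM kernel code.
struct ObjectTable {
    epochs: HashMap<u64, u64>,
}

impl ObjectTable {
    /// Revoking bumps the epoch; every handle carrying the old epoch
    /// is invalidated without touching the handles themselves.
    fn revoke(&mut self, object_id: u64) {
        *self.epochs.entry(object_id).or_insert(0) += 1;
    }

    /// Resolve a handle: succeeds only if its epoch matches the table.
    fn resolve(&self, object_id: u64, handle_epoch: u64) -> Result<(), KernelError> {
        match self.epochs.get(&object_id) {
            Some(&e) if e == handle_epoch => Ok(()),
            _ => Err(KernelError::StaleCapability),
        }
    }
}

fn main() {
    let mut table = ObjectTable { epochs: HashMap::from([(7u64, 0u64)]) };
    assert!(table.resolve(7, 0).is_ok());

    table.revoke(7); // epoch 0 -> 1

    // The old handle is now stale...
    assert_eq!(table.resolve(7, 0), Err(KernelError::StaleCapability));
    // ...while a handle re-granted at the new epoch resolves.
    assert!(table.resolve(7, 1).is_ok());
}
```

Note that revocation is O(1) at the object table; the O(d) cost in the text comes from walking the derivation tree to invalidate derived entries.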
1.5 How This Differs from DAC/MAC
| Property | DAC (Unix) | MAC (SELinux) | Capability (RVM) |
|---|---|---|---|
| Authority source | User identity | System-wide policy labels | Explicit token per object |
| Ambient authority | Yes (UID 0) | Yes (unconfined domain) | None |
| Confused deputy | Possible | Mitigated by labels | Prevented by design |
| Delegation | chmod/chown | Policy reload | `cap_grant` with attenuation |
| Revocation | File permission change | Policy reload | Tree-propagating, epoch-based |
| Granularity | File/directory | Type/role/level | Per-object, per-right |
The critical difference: in RVM, authority is carried by the message, not the sender's
identity. A task cannot access a resource simply because of "who it is" -- it must present
a valid capability handle that was explicitly granted to it through a traceable delegation chain.
2. Proof-Gated Mutation
2.1 Invariant
No state mutation without a valid proof token. This is a kernel invariant, not a policy.
The kernel physically prevents mutation of vector stores, graph stores, and RVF mounts without
a ProofToken that passes all verification steps. Read operations (vector_get, queue_recv)
do not require proofs.
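The essence of the gate is that the proof's hash must cover exactly the bytes being applied. The sketch below models that one step in isolation; it uses FNV-1a as a stand-in for the real SHA-256 digest, and `ProofToken`/`apply_proved` here are simplified illustrations, not the RVM kernel API:

```rust
#[derive(Debug, PartialEq)]
enum KernelError {
    ProofRejected,
}

/// Simplified token: the real one also carries tier, nonce, and expiry.
struct ProofToken {
    mutation_hash: u64,
}

/// FNV-1a, used here as a stand-in for SHA-256 for brevity.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in bytes {
        h ^= b as u64;
        h = h.wrapping_mul(0x100_0000_01b3);
    }
    h
}

/// The mutation is applied only if the proof authorizes exactly these bytes.
fn apply_proved(state: &mut Vec<u8>, mutation: &[u8], proof: &ProofToken) -> Result<(), KernelError> {
    if proof.mutation_hash != fnv1a(mutation) {
        // Proof was computed over a different mutation: reject.
        return Err(KernelError::ProofRejected);
    }
    state.extend_from_slice(mutation);
    Ok(())
}

fn main() {
    let mut store = Vec::new();
    let mutation = b"put v42";

    let good = ProofToken { mutation_hash: fnv1a(mutation) };
    assert!(apply_proved(&mut store, mutation, &good).is_ok());

    // A proof for a different mutation cannot be reused for this one.
    let mismatched = ProofToken { mutation_hash: fnv1a(b"put v41") };
    assert_eq!(
        apply_proved(&mut store, mutation, &mismatched),
        Err(KernelError::ProofRejected)
    );
}
```

The full verifier adds capability, tier, expiry, window, and nonce checks on top of this hash match, as specified in the next subsection.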
2.2 What Constitutes a Valid Proof
A proof token must pass six verification steps (implemented in
`ProofVerifier::verify()`, `crates/ruvix/crates/vecgraph/src/proof_policy.rs`):

1. Capability check: the calling task must hold a capability with PROVE right on the
   target object
2. Hash match: `proof.mutation_hash == expected_mutation_hash` -- the proof authorizes
   exactly the mutation being applied
3. Tier check: `proof.tier` must satisfy the target object's proof policy
   (`policy.tier_satisfies(proof.tier)`)
4. Time bound: `current_time_ns <= proof.valid_until_ns` -- proofs expire
5. Validity window: the window `proof.valid_until_ns - current_time_ns` must not exceed
   `policy.max_validity_window_ns` (prevents pre-computing proofs far in advance)
6. Nonce uniqueness: each nonce can be consumed exactly once (ring buffer of 64 recent
   nonces prevents replay)
On bare metal, device MMIO regions are mapped into a task's address space through region_map
with a RegionPolicy::DeviceMmio variant (new for Phase B). This mapping requires:
A capability with READ and/or WRITE rights on the device object
A ProofToken with tier >= Standard proving the task's intent matches the device mapping
The device must not already be mapped to another partition (exclusive lease)
```rust
/// Extended region policy for bare-metal device access.
/// New for Phase B -- extends the existing RegionPolicy enum.
pub enum RegionPolicy {
    Immutable,
    AppendOnly { max_size: usize },
    Slab { slot_size: usize, slot_count: usize },
    /// Device MMIO region. Mapped as uncacheable, device memory.
    /// Requires proof-gated capability for mapping.
    DeviceMmio {
        phys_base: u64, // Physical base address of MMIO range
        size: usize,    // Size in bytes
        device_id: u32, // Kernel-assigned device identifier
    },
}
```
2.7 Proof-Gated Migration
Partition migration (moving a task and its state from one physical node to another in an RVM
mesh) requires a Deep-tier proof containing:
Coherence certificate showing the partition's state is consistent
Source and destination node attestation (both nodes are trusted)
Hash of the serialized partition state
Without this proof, the kernel refuses to serialize or deserialize partition state.
```rust
/// Trait for migration authorization. Implemented by the migration subsystem.
pub trait MigrationAuthority {
    /// Verify that migration of this partition is authorized.
    /// Returns the serialized partition state only if proof validates.
    fn authorize_migration(
        &mut self,
        partition_id: u32,
        destination_attestation: &ProofAttestation,
        proof: &ProofToken,
    ) -> Result<SerializedPartition, KernelError>;

    /// Accept an incoming migrated partition.
    /// Verifies the source attestation and proof before instantiating.
    fn accept_migration(
        &mut self,
        serialized: &SerializedPartition,
        source_attestation: &ProofAttestation,
        proof: &ProofToken,
    ) -> Result<PartitionHandle, KernelError>;
}
```
2.8 Proof-Gated Partition Merge/Split
Graph partitions (mincut boundaries in the vecgraph store) can only be merged or split with a
Deep-tier proof that includes the coherence impact analysis:
```rust
pub enum GraphMutationKind {
    AddNode { /* ... */ },
    RemoveNode { /* ... */ },
    AddEdge { /* ... */ },
    RemoveEdge { /* ... */ },
    UpdateWeight { /* ... */ },
    /// Merge two partitions. Requires Deep-tier proof with coherence cert.
    MergePartitions {
        source_partition: u32,
        target_partition: u32,
    },
    /// Split a partition at a mincut boundary. Requires Deep-tier proof.
    SplitPartition {
        partition: u32,
        cut_specification: MinCutSpec,
    },
}
```
3. Witness-Native Audit
3.1 Design Principle
Every privileged action in RVM emits a witness record to the kernel's append-only witness
log. "Privileged action" means any syscall that mutates kernel state: vector writes, graph
mutations, RVF mounts, task spawns, capability grants, region mappings.
3.2 Witness Record Format
Each record is 96 bytes, compact enough to sustain thousands of records per second on embedded
hardware without blocking the syscall path:
```rust
/// 96-byte witness record.
/// File: crates/ruvix/crates/nucleus/src/witness_log.rs
#[repr(C)]
pub struct WitnessRecord {
    pub sequence: u64,              // Monotonically increasing (8 bytes)
    pub kind: WitnessRecordKind,    // Boot, Mount, VectorMutation, etc. (1 byte)
    pub timestamp_ns: u64,          // Nanoseconds since boot (8 bytes)
    pub mutation_hash: [u8; 32],    // SHA-256 of the mutation data (32 bytes)
    pub attestation_hash: [u8; 32], // Hash of the proof attestation (32 bytes)
    pub resource_id: u64,           // Object identifier (8 bytes)
    // 7 bytes padding to 96
}
```
Record kinds:
| Kind | Value | Emitted By |
|---|---|---|
| Boot | 0 | `kernel_entry` at boot completion |
| Mount | 1 | `rvf_mount` syscall |
| VectorMutation | 2 | `vector_put_proved` syscall |
| GraphMutation | 3 | `graph_apply_proved` syscall |
| Checkpoint | 4 | Periodic state snapshots |
| ReplayComplete | 5 | After replaying from checkpoint |
| CapGrant | 6 | `cap_grant` syscall (proposed extension) |
| CapRevoke | 7 | Capability revocation (proposed extension) |
| TaskSpawn | 8 | `task_spawn` syscall (proposed extension) |
| DeviceMap | 9 | Device MMIO mapping (proposed extension) |
3.3 Tamper Evidence
The witness log must be tamper-evident. The current Phase A implementation uses simple
append-only semantics with FNV-1a hashing. For bare-metal, the following extensions are
required:
Hash chaining: Each witness record includes the hash of the previous record, forming a
Merkle-like chain. Tampering with any record invalidates all subsequent records.
```rust
/// Extended witness record with hash chaining for tamper evidence.
pub struct ChainedWitnessRecord {
    /// The base witness record (96 bytes).
    pub record: WitnessRecord,
    /// SHA-256 hash of the previous record's serialized bytes.
    /// For the first record (sequence 0), this is all zeros.
    pub prev_hash: [u8; 32],
    /// SHA-256(serialize(record) || prev_hash). Computed by the kernel.
    pub chain_hash: [u8; 32],
}
```
TEE signing (when available): On hardware with TrustZone (Raspberry Pi 4/5), witness
records can be signed by the Secure World using a device-unique key. This means even a
compromised kernel (EL1) cannot forge witness entries:
```rust
/// Trait for hardware-backed witness signing.
pub trait WitnessSigner {
    /// Sign a chained witness record using a hardware-bound key.
    /// On AArch64 with TrustZone, this issues an SMC to Secure World.
    /// On platforms without TEE, returns None (software chain only).
    fn sign_witness(&self, record: &ChainedWitnessRecord) -> Option<[u8; 64]>;

    /// Verify a signed witness record.
    fn verify_witness_signature(
        &self,
        record: &ChainedWitnessRecord,
        signature: &[u8; 64],
    ) -> bool;
}
```
3.4 Replayability and Forensics
The witness log, combined with periodic checkpoints, enables deterministic replay:
Checkpoint: The kernel serializes all vector stores, graph stores, capability tables,
and scheduler state to an immutable region. A WitnessRecordKind::Checkpoint record
captures the state hash and the witness sequence number at checkpoint time.
Replay: Starting from a checkpoint, the kernel replays all witness records in sequence
order, re-applying each mutation. Because mutations are deterministic (same proof token +
same state = same result), the final state is identical.
Forensic query: External tools can load the witness log and answer questions like:
"Which task mutated vector store X between timestamps T1 and T2?"
"What was the coherence score before and after each graph mutation?"
"Has the hash chain been broken?" (indicates tampering)
3.5 Witness-Enabled Rollback/Recovery
If a coherence violation is detected (coherence score drops below the configured threshold),
the kernel can:
Stop accepting new mutations to the affected partition
Find the most recent checkpoint where coherence was above threshold
Replay witnesses from that checkpoint, skipping the offending mutation
Resume normal operation from the corrected state
This requires the offending mutation to be identified by its witness record (the mutation_hash
and attestation_hash pinpoint exactly which operation caused the violation).
4. Isolation Model
4.1 Partition Isolation Guarantees
RVM partitions are the unit of isolation. Each partition consists of:
One or more tasks sharing a capability namespace
A set of regions (memory objects) accessible only through capabilities held by those tasks
Queue endpoints for controlled inter-partition communication
Isolation guarantee: A partition cannot access any memory, device, or kernel object for
which it does not hold a valid capability. This is enforced at two levels:
Software: The capability table lookup in every syscall rejects invalid or stale handles
Hardware: MMU page tables enforce that each partition's regions are mapped only in that
partition's address space, with no overlapping physical pages between partitions
(except explicitly shared immutable regions)
4.2 MMU-Enforced Memory Isolation (Bare Metal)
On bare metal, RVM directly controls the AArch64 MMU. Each partition gets its own translation
tables loaded via TTBR0_EL1 on context switch:
```rust
/// Per-partition page table management.
/// Kernel mappings use TTBR1_EL1 (shared across all partitions).
/// Partition mappings use TTBR0_EL1 (swapped on context switch).
pub trait PartitionAddressSpace {
    /// Create a new empty address space for a partition.
    fn create() -> Result<Self, KernelError>
    where
        Self: Sized;

    /// Map a region into this partition's address space.
    /// Physical pages are allocated from the kernel's physical allocator.
    /// Page table entries enforce the region's policy:
    ///   Immutable  -> PTE_USER | PTE_RO | PTE_CACHEABLE
    ///   AppendOnly -> PTE_KERNEL_RW | PTE_CACHEABLE (user writes via syscall)
    ///   Slab       -> PTE_KERNEL_RW | PTE_CACHEABLE (user writes via syscall)
    ///   DeviceMmio -> PTE_USER | PTE_DEVICE | PTE_nG (non-global, per-partition)
    fn map_region(
        &mut self,
        region: &RegionDescriptor,
        phys_pages: &[PhysFrame],
    ) -> Result<VirtAddr, KernelError>;

    /// Unmap a region, invalidating all TLB entries for those pages.
    fn unmap_region(&mut self, virt_addr: VirtAddr, size: usize) -> Result<(), KernelError>;

    /// Activate this address space (write to TTBR0_EL1 + TLBI).
    unsafe fn activate(&self);
}
```
Critical invariant: The kernel NEVER maps the same physical page as writable in two
different partitions' address spaces simultaneously. Immutable regions may be shared read-only
(content-addressable deduplication is safe for immutable data).
EL0 (user mode): All RVF components, WASM runtimes, AgentDB, all application code
Syscalls transition EL0 -> EL1 via the SVC instruction. The exception handler in EL1 validates
the capability before dispatching to the syscall implementation. Return to EL0 uses ERET.
No EL0 code can:
Read or write kernel memory (TTBR1_EL1 mappings are PTE_KERNEL_RW)
Modify page tables (page table pages are not mapped in EL0)
Disable interrupts (only EL1 can mask IRQs via DAIF)
Access device MMIO unless explicitly mapped through a capability
4.4 Side-Channel Mitigation
4.4.1 Spectre v1 (Bounds Check Bypass)
All array accesses in the kernel use bounds-checked indexing (Rust's default)
The CapabilityTable uses get() returning Option<&T>, never unchecked indexing
Critical paths include an lfence / csdb barrier after bounds checks on the syscall
dispatch path
```rust
/// Spectre-safe capability table lookup.
/// The index is bounds-checked, and a speculation barrier follows.
pub fn lookup(&self, handle: CapHandle) -> Option<&Capability> {
    let idx = handle.raw().id as usize;
    if idx >= self.entries.len() {
        return None;
    }
    // AArch64: CSDB (Consumption of Speculative Data Barrier)
    // Prevents speculative use of the result before the bounds check resolves
    #[cfg(target_arch = "aarch64")]
    unsafe {
        core::arch::asm!("csdb");
    }
    self.entries.get(idx).and_then(|e| e.as_ref())
}
```
4.4.2 Spectre v2 (Branch Target Injection)
AArch64: Enable branch prediction barriers via SCTLR_EL1 configuration
On context switch between partitions: flush branch predictor state
(IC IALLU + TLBI VMALLE1IS + DSB ISH + ISB)
Kernel compiled with -Zbranch-protection=bti (Branch Target Identification)
4.4.3 Meltdown (Rogue Data Cache Load)
AArch64 is not vulnerable to Meltdown when Privileged Access Never (PAN) is enabled
RVM enables PAN via SCTLR_EL1.PAN = 1 at boot
Kernel accesses user memory only through explicit copy routines that temporarily disable PAN
4.4.4 Microarchitectural Data Sampling (MDS)
On x86_64 (secondary target): VERW-based buffer clearing on every kernel exit
On AArch64 (primary target): Not vulnerable to known MDS variants
Defense in depth: all sensitive kernel data structures are allocated in dedicated slab
regions that are never shared across partitions
4.5 Time Isolation
Timing side channels are mitigated through several mechanisms:
Fixed-time capability lookup: The capability table lookup path executes in constant
time regardless of whether the capability is found or not (compare all entries, select
result at the end)
Scheduler noise injection: The scheduler adds a small random jitter (0-10 us) to
context switch timing to prevent a partition from inferring another partition's behavior
from scheduling patterns
Timer virtualization: Each partition sees a virtual timer (CNTVCT_EL0) that advances
at the configured rate but does not leak information about other partitions' execution.
The kernel programs CNTV_CVAL_EL0 per-partition.
Constant-time proof verification: The ProofVerifier::verify() path is written to
avoid early returns that would leak information about which check failed. All six checks
execute, and only the final result is returned.
```rust
/// Constant-time proof verification to prevent timing side channels.
/// All checks execute regardless of early failures.
pub fn verify_constant_time(
    &mut self,
    proof: &ProofToken,
    expected_hash: &[u8; 32],
    current_time_ns: u64,
    capability: &Capability,
) -> Result<ProofAttestation, KernelError> {
    let mut valid = true;

    // All checks execute -- no early return
    valid &= capability.has_rights(CapRights::PROVE);
    valid &= proof.mutation_hash == *expected_hash;
    valid &= self.policy.tier_satisfies(proof.tier);
    valid &= !proof.is_expired(current_time_ns);
    valid &= (proof.valid_until_ns.saturating_sub(current_time_ns))
        <= self.policy.max_validity_window_ns;
    let nonce_ok = self.nonce_tracker.check_and_mark(proof.nonce);
    valid &= nonce_ok;

    if valid {
        Ok(self.create_attestation(proof, current_time_ns))
    } else {
        // Roll back the nonce if overall verification failed
        if nonce_ok {
            self.nonce_tracker.unmark(proof.nonce);
        }
        Err(KernelError::ProofRejected)
    }
}
```
4.6 Coherence Domain Isolation
Each vector store and graph store belongs to a coherence domain. Coherence domains provide an
additional layer of isolation at the semantic level:
Mutations within a coherence domain are evaluated against that domain's coherence config
Coherence violations in one domain do not affect other domains
Each domain has its own proof policy, nonce tracker, and witness region
```rust
/// Coherence domain configuration.
pub struct CoherenceDomain {
    pub domain_id: u32,
    pub vector_stores: Vec<VectorStoreHandle>,
    pub graph_stores: Vec<GraphHandle>,
    pub proof_policy: ProofPolicy,
    pub min_coherence_score: u16, // 0-10000 (0.00-1.00)
    pub isolation_level: DomainIsolationLevel,
}

pub enum DomainIsolationLevel {
    /// Stores in this domain share no physical pages with other domains.
    Full,
    /// Read-only immutable data may be shared across domains.
    SharedImmutable,
}
```
5. Device Security
5.1 Lease-Based Device Access
Devices are not permanently assigned to partitions. Instead, RVM uses time-bounded,
revocable leases:
```rust
/// A time-bounded, revocable lease on a device.
pub struct DeviceLease {
    /// Capability handle authorizing device access.
    pub cap: CapHandle,
    /// Device identifier (kernel-assigned, not hardware address).
    pub device_id: DeviceId,
    /// Lease start time (nanoseconds since boot).
    pub granted_at_ns: u64,
    /// Lease expiry (0 = no expiry, must be explicitly revoked).
    pub expires_at_ns: u64,
    /// Rights on the device (READ for sensors, WRITE for actuators, both for DMA).
    pub rights: CapRights,
    /// The MMIO region mapped for this lease (None if not yet mapped).
    pub mmio_region: Option<RegionHandle>,
}

/// Trait for the device lease manager.
pub trait DeviceLeaseManager {
    /// Request a lease on a device. Requires a capability with appropriate rights.
    /// The lease is time-bounded; after expiry, the mapping is automatically torn down.
    fn request_lease(
        &mut self,
        device_id: DeviceId,
        cap: CapHandle,
        duration_ns: u64,
    ) -> Result<DeviceLease, KernelError>;

    /// Renew an existing lease. Must be called before expiry.
    fn renew_lease(
        &mut self,
        lease: &mut DeviceLease,
        additional_ns: u64,
    ) -> Result<(), KernelError>;

    /// Revoke a lease immediately. Tears down MMIO mapping and flushes DMA.
    fn revoke_lease(&mut self, lease: DeviceLease) -> Result<(), KernelError>;

    /// Check if a lease is still valid.
    fn is_lease_valid(&self, lease: &DeviceLease, current_time_ns: u64) -> bool;
}
```
Lease lifecycle:
Partition requests a lease via request_lease() with a capability
Kernel checks the capability has appropriate rights on the device object
Kernel maps the device's MMIO region into the partition's address space as
RegionPolicy::DeviceMmio with PTE_DEVICE (uncacheable) flags
Kernel programs an expiry timer; when it fires, the lease is automatically torn down
On teardown: MMIO pages are unmapped, TLB is flushed, DMA channels are reset
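The validity check at the heart of step 4 is simple enough to model directly. This stand-alone sketch mirrors the `DeviceLease` field names above but is an illustrative model of the expiry rule, not the lease manager implementation:

```rust
/// Simplified lease: only the timing fields from the DeviceLease sketch.
struct DeviceLease {
    granted_at_ns: u64,
    expires_at_ns: u64,
}

/// A lease is valid until its expiry instant; expires_at_ns == 0 means
/// "no expiry, explicit revocation only".
fn is_lease_valid(lease: &DeviceLease, now_ns: u64) -> bool {
    lease.expires_at_ns == 0 || now_ns < lease.expires_at_ns
}

fn main() {
    let lease = DeviceLease { granted_at_ns: 1_000, expires_at_ns: 5_000 };
    assert!(is_lease_valid(&lease, 4_999));
    // At the expiry instant the timer fires and teardown begins.
    assert!(!is_lease_valid(&lease, 5_000));

    // A pinned lease never expires by time.
    let pinned = DeviceLease { granted_at_ns: 1_000, expires_at_ns: 0 };
    assert!(is_lease_valid(&pinned, u64::MAX));
}
```

In the kernel, this predicate is evaluated both on the expiry timer and on every MMIO-related syscall, so a partition cannot keep using a mapping past its lease.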
5.2 DMA Isolation
DMA is the most dangerous hardware capability because DMA engines can read/write arbitrary
physical memory. RVM uses a layered defense:
5.2.1 With IOMMU (Preferred)
On platforms with an IOMMU (ARM SMMU, Intel VT-d), the kernel programs the IOMMU's page
tables to restrict each device's DMA to only the physical pages belonging to the leaseholder's
regions:
```rust
/// IOMMU-based DMA isolation.
pub trait IommuController {
    /// Create a DMA mapping for a device, restricting it to the given physical pages.
    /// The device can only DMA to/from these pages and no others.
    fn map_device_dma(
        &mut self,
        device_id: DeviceId,
        allowed_pages: &[PhysFrame],
        direction: DmaDirection,
    ) -> Result<DmaMapping, KernelError>;

    /// Remove a DMA mapping, preventing the device from accessing those pages.
    fn unmap_device_dma(
        &mut self,
        device_id: DeviceId,
        mapping: DmaMapping,
    ) -> Result<(), KernelError>;

    /// Invalidate all DMA mappings for a device (called on lease revocation).
    fn invalidate_device(&mut self, device_id: DeviceId) -> Result<(), KernelError>;
}
```
5.2.2 Without IOMMU (Bounce Buffers)
On platforms without an IOMMU (early Raspberry Pi models), DMA isolation uses bounce buffers:
The kernel allocates a dedicated physical region for DMA operations
Before a device-to-memory transfer, the kernel prepares the bounce buffer
After transfer completion, the kernel copies data from the bounce buffer to the
partition's region (after validation)
The device never has direct access to partition memory
This is slower (extra copy) but maintains the isolation invariant. The
crates/ruvix/crates/dma/ crate provides the abstraction layer.
```rust
/// Bounce buffer DMA isolation (fallback when no IOMMU).
pub struct BounceBufferDma {
    /// Kernel-owned physical region for DMA bounce.
    bounce_region: PhysRegion,
    /// Maximum bounce buffer size.
    max_bounce_size: usize,
}

impl BounceBufferDma {
    /// Execute a DMA transfer through the bounce buffer.
    /// The device only ever sees the bounce buffer's physical address.
    pub fn transfer(
        &mut self,
        device: DeviceId,
        partition_region: &RegionHandle,
        offset: usize,
        length: usize,
        direction: DmaDirection,
    ) -> Result<(), KernelError> {
        if length > self.max_bounce_size {
            return Err(KernelError::LimitExceeded);
        }
        match direction {
            DmaDirection::MemToDevice => {
                // Copy from partition region to bounce buffer
                self.copy_to_bounce(partition_region, offset, length)?;
                // Program DMA from bounce buffer to device
                self.start_dma(device, direction)?;
            }
            DmaDirection::DeviceToMem => {
                // Program DMA from device to bounce buffer
                self.start_dma(device, direction)?;
                // Wait for completion
                self.wait_completion()?;
                // Copy from bounce buffer to partition region (validated)
                self.copy_from_bounce(partition_region, offset, length)?;
            }
            DmaDirection::MemToMem => {
                return Err(KernelError::InvalidArgument);
            }
        }
        Ok(())
    }
}
```
5.3 Interrupt Routing Security
Each interrupt line is a kernel object accessed through capabilities:
Interrupt capability: A partition must hold a capability with READ right on an
interrupt object to receive interrupts from that line
Interrupt-to-queue routing: Interrupts are delivered as messages on a queue
(via sensor_subscribe), not as direct callbacks. This maintains the queue-based IPC
model and prevents a malicious interrupt handler from running in kernel context.
Priority ceiling: Interrupt processing tasks have bounded priority to prevent a
flood of interrupts from starving other partitions
Rate limiting: The kernel enforces a maximum interrupt rate per device. Interrupts
exceeding the rate are queued and delivered at the rate limit.
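The rate-limiting rule above can be sketched as a per-device token bucket; the `rate_limit_hz` field matches `InterruptRoute` below, but the refill arithmetic and the `IrqRateLimiter` type are illustrative, not the kernel's actual implementation.

```rust
/// Sketch of per-device interrupt rate limiting as a token bucket.
/// Interrupts beyond the budget are counted as queued for later delivery.
pub struct IrqRateLimiter {
    rate_limit_hz: u64, // 0 = unlimited
    tokens: u64,        // interrupts we may deliver immediately
    last_refill_ns: u64,
    pub queued: u64,    // interrupts held back for delivery at the rate limit
}

impl IrqRateLimiter {
    pub fn new(rate_limit_hz: u64) -> Self {
        Self { rate_limit_hz, tokens: rate_limit_hz, last_refill_ns: 0, queued: 0 }
    }

    /// Returns true if the interrupt is delivered now; false if it is queued.
    pub fn on_interrupt(&mut self, now_ns: u64) -> bool {
        if self.rate_limit_hz == 0 {
            return true; // unlimited
        }
        // Refill proportionally to elapsed time, capped at one second's budget.
        let elapsed = now_ns - self.last_refill_ns;
        let refill = (elapsed as u128 * self.rate_limit_hz as u128 / 1_000_000_000) as u64;
        if refill > 0 {
            self.tokens = (self.tokens + refill).min(self.rate_limit_hz);
            self.last_refill_ns = now_ns;
        }
        if self.tokens > 0 {
            self.tokens -= 1;
            true
        } else {
            self.queued += 1; // delivered later at the rate limit
            false
        }
    }
}
```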
```rust
/// Interrupt routing configuration.
pub struct InterruptRoute {
    /// Hardware interrupt number (e.g., GIC SPI number).
    pub irq_number: u32,
    /// Capability authorizing access to this interrupt.
    pub cap: CapHandle,
    /// Queue where interrupt messages are delivered.
    pub target_queue: QueueHandle,
    /// Maximum interrupt rate (interrupts per second). 0 = unlimited.
    pub rate_limit_hz: u32,
    /// Priority ceiling for the interrupt processing task.
    pub priority_ceiling: TaskPriority,
}
```
5.4 Device Capability Model
Every device in the system is represented as a kernel object with its own capability:
```rust
pub enum ObjectType {
    Task,
    Region,
    Queue,
    Timer,
    VectorStore,
    GraphStore,
    RvfMount,
    Sensor,
    /// A hardware device (UART, DMA controller, GPU, NIC, etc.)
    Device,
    /// An interrupt line (GIC SPI/PPI/SGI)
    Interrupt,
}
```
The root task (first task created at boot) receives capabilities to all devices discovered
during boot (from DTB parsing). It then distributes device capabilities to appropriate
partitions according to the RVF manifest's resource policy.
6. Boot Security
6.1 Secure Boot Chain
RVM implements a four-stage secure boot chain:
Stage 0: Hardware ROM / eFUSE
| Root of trust: device-unique key burned in silicon
| Measures and verifies Stage 1
v
Stage 1: RVM Boot Stub (ruvix-aarch64/src/boot.S + boot.rs)
| Minimal assembly: set up stack, clear BSS, jump to Rust
| Rust entry: initialize MMU, verify Stage 2 signature
| Verifies using trusted keys embedded in Stage 1 image
v
Stage 2: RVM Kernel (ruvix-nucleus)
| Full kernel initialization: cap table, proof engine, scheduler
| Verifies RVF package signature (ML-DSA-65 or Ed25519)
| SEC-001: Signature failure -> PANIC (no fallback)
v
Stage 3: Boot RVF Package
| Contains all initial RVF components
| Loaded into immutable regions
| Queue wiring and capability distribution per manifest
v
Stage 4: Application RVF Components
Runtime-mounted RVF packages, each signature-verified
6.2 Signature Verification
The existing verify_boot_signature_or_panic() in crates/ruvix/crates/cap/src/security.rs
implements SEC-001: signature failure panics the system with no fallback path. The security
feature flag disable-boot-verify is blocked at compile time for release builds:
```rust
// CVE-001 FIX: Prevent disable-boot-verify in release builds
#[cfg(all(feature = "disable-boot-verify", not(debug_assertions)))]
compile_error!(
    "SECURITY ERROR [CVE-001]: The 'disable-boot-verify' feature cannot be used \
     in release builds."
);
```
Supported algorithms:

| Algorithm | Status | Use Case |
|---|---|---|
| Ed25519 | Implemented | Primary boot signature |
| ECDSA P-256 | Supported | Legacy compatibility |
| RSA-PSS 2048 | Supported | Legacy compatibility |
| ML-DSA-65 | Planned | Post-quantum RVF signatures |
6.3 Measured Boot with Witness Log
Every boot stage emits a witness record:
Stage 1 measurement: Hash of the kernel image, stored as WitnessRecordKind::Boot
Stage 2 initialization: Each subsystem (cap manager, proof engine, scheduler)
records its initialized state
Stage 3 RVF mount: Each mounted RVF package is recorded as WitnessRecordKind::Mount
with the package hash and attestation
The boot witness log forms the root of the system's audit trail. All subsequent witness
records chain from it.
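The chaining rule can be sketched as follows: each record's chain hash commits to the previous chain hash plus the record bytes, so two logs agree exactly when their histories agree. A 64-bit FNV-1a stands in for SHA-256 purely to keep the example self-contained; `WitnessChain` is an illustrative type, not the kernel's.

```rust
/// Toy hash standing in for SHA-256; the chaining structure is what matters.
fn fnv1a(prev: u64, data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325 ^ prev;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

pub struct WitnessChain {
    pub chain_hash: u64,
    pub len: usize,
}

impl WitnessChain {
    /// The Stage 1 boot measurement seeds the chain.
    pub fn new(boot_measurement: &[u8]) -> Self {
        Self { chain_hash: fnv1a(0, boot_measurement), len: 1 }
    }

    /// Append a record; the new chain hash commits to everything before it.
    pub fn append(&mut self, record: &[u8]) {
        self.chain_hash = fnv1a(self.chain_hash, record);
        self.len += 1;
    }
}
```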
6.4 Remote Attestation for Edge Deployment
For edge deployments where RVM nodes must prove their integrity to a remote verifier:
```rust
/// Remote attestation protocol.
pub trait RemoteAttestor {
    /// Generate an attestation report that a remote verifier can check.
    /// The report includes:
    /// - Platform identity (device-unique key signed measurement)
    /// - Boot chain hashes (all four stages)
    /// - Current witness log root hash
    /// - Loaded RVF component inventory
    /// - Nonce from the challenger (prevents replay)
    fn generate_attestation_report(
        &self,
        challenge_nonce: &[u8; 32],
    ) -> Result<AttestationReport, KernelError>;

    /// Verify an attestation report from another node.
    /// Used in mesh deployments where nodes must mutually attest.
    fn verify_attestation_report(
        &self,
        report: &AttestationReport,
        expected_measurements: &MeasurementPolicy,
    ) -> Result<AttestationVerdict, KernelError>;
}

pub struct AttestationReport {
    /// Platform identifier (public key of device).
    pub platform_id: [u8; 32],
    /// Boot chain measurement (hash of all four stages).
    pub boot_measurement: [u8; 32],
    /// Current witness log chain hash (latest chain_hash).
    pub witness_root: [u8; 32],
    /// List of loaded RVF component hashes.
    pub component_inventory: Vec<[u8; 32]>,
    /// Challenge nonce from the verifier.
    pub nonce: [u8; 32],
    /// Signature over all of the above using the platform key.
    pub signature: [u8; 64],
}
```
6.5 Code Signing for Partition Images
All RVF packages must be signed before they can be mounted. The signature is verified by the
kernel's boot loader (crates/ruvix/crates/boot/src/signature.rs):
The RVF manifest specifies the signing key ID and algorithm
The kernel maintains a TrustedKeyStore (up to 8 keys, expirable)
Keys can be rotated by mounting a key-update RVF signed by an existing trusted key
The signing key hierarchy supports a two-level PKI:
Root key: Burned in eFUSE or compiled into Stage 1 (immutable)
Signing keys: Derived from root key, time-bounded, rotatable
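The key-store rules above (at most 8 keys, expirable, rotation only under an existing trusted key) can be sketched as follows; the field names and error strings are illustrative, not the actual `TrustedKeyStore` in `ruvix-cap`.

```rust
/// Sketch of an 8-slot trusted key store with expiry and rotation.
const MAX_KEYS: usize = 8;

#[derive(Clone, Copy)]
pub struct TrustedKey {
    pub id: u32,
    pub expires_at_ns: u64, // 0 = never expires (root key)
}

pub struct TrustedKeyStore {
    keys: [Option<TrustedKey>; MAX_KEYS],
}

impl TrustedKeyStore {
    /// The root key (burned into eFUSE / Stage 1) occupies the first slot.
    pub fn new(root: TrustedKey) -> Self {
        let mut keys = [None; MAX_KEYS];
        keys[0] = Some(root);
        Self { keys }
    }

    pub fn is_trusted(&self, id: u32, now_ns: u64) -> bool {
        self.keys.iter().flatten().any(|k| {
            k.id == id && (k.expires_at_ns == 0 || now_ns < k.expires_at_ns)
        })
    }

    /// A key-update RVF may only install a key if it is signed by a
    /// currently trusted key.
    pub fn rotate_in(
        &mut self,
        new_key: TrustedKey,
        signer_id: u32,
        now_ns: u64,
    ) -> Result<(), &'static str> {
        if !self.is_trusted(signer_id, now_ns) {
            return Err("signer not trusted");
        }
        let slot = self.keys.iter_mut().find(|s| s.is_none()).ok_or("store full")?;
        *slot = Some(new_key);
        Ok(())
    }
}
```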
7. Agent-Specific Security
7.1 WASM Sandbox Security Within Partitions
RVF components execute as WASM modules within partitions. The WASM sandbox provides a second
layer of isolation inside the capability boundary:
Linear memory isolation: Each WASM module has its own linear memory; it cannot access
memory of other modules or the host
Import-only system access: WASM modules can only call functions explicitly imported
from the host. The host provides a minimal syscall shim that maps WASM calls to
capability-gated RVM syscalls
Resource limits: Each WASM module has configured limits on memory size, stack depth,
execution fuel (instruction count), and table size
No raw pointer access: WASM's type system prevents arbitrary memory access. Pointers
are offsets into the linear memory, bounds-checked by the runtime
```rust
/// WASM module resource limits.
pub struct WasmResourceLimits {
    /// Maximum linear memory size in pages (64 KiB per page).
    pub max_memory_pages: u32,
    /// Maximum call stack depth.
    pub max_stack_depth: u32,
    /// Maximum execution fuel (instructions). 0 = unlimited.
    pub max_fuel: u64,
    /// Maximum number of table entries.
    pub max_table_elements: u32,
    /// Maximum number of globals.
    pub max_globals: u32,
}

/// The host interface exposed to WASM modules.
/// Every function here validates capabilities before performing the operation.
pub trait WasmHostInterface {
    fn vector_get(&self, store: u32, key: u64) -> Result<WasmVectorRef, WasmTrap>;
    fn vector_put(&self, store: u32, key: u64, data: &[f32], proof: WasmProofRef)
        -> Result<(), WasmTrap>;
    fn queue_send(&self, queue: u32, msg: &[u8], priority: u8) -> Result<(), WasmTrap>;
    fn queue_recv(&self, queue: u32, buf: &mut [u8], timeout_ms: u64)
        -> Result<usize, WasmTrap>;
    fn log(&self, level: u8, message: &str);
}
```
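The "no raw pointer access" property boils down to: a WASM "pointer" is just an offset into linear memory, and every host-side dereference is bounds-checked. A minimal sketch, with `LinearMemory` and this `WasmTrap` stand-in as illustrative types (not the real runtime's):

```rust
/// Sketch of bounds-checked linear memory access on the host side.
pub struct LinearMemory {
    bytes: Vec<u8>,
}

#[derive(Debug, PartialEq)]
pub struct WasmTrap; // stand-in for the real trap type

impl LinearMemory {
    pub fn new(pages: usize) -> Self {
        Self { bytes: vec![0; pages * 65_536] } // 64 KiB per page
    }

    /// Host-side read of a guest buffer: offset + length are validated
    /// against the linear memory size with overflow-safe arithmetic.
    pub fn read(&self, offset: u32, len: u32) -> Result<&[u8], WasmTrap> {
        let end = (offset as usize).checked_add(len as usize).ok_or(WasmTrap)?;
        self.bytes.get(offset as usize..end).ok_or(WasmTrap)
    }
}
```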
7.2 Inter-Agent Communication Security
Agents communicate exclusively through typed queues. Security properties of queue-based IPC:
Capability-gated: Both sender and receiver must hold capabilities on the queue
Typed messages: Queue schema (WIT types) is validated at send time. Malformed
messages are rejected before reaching the receiver
Zero-copy safety: Zero-copy messages use descriptors pointing into immutable or
append-only regions. The kernel rejects descriptors pointing into slab regions
(TOCTOU mitigation -- SEC-004)
No covert channels: Queue capacity is bounded and visible. The kernel does not
leak information about queue occupancy to tasks that do not hold the queue's capability
Message ordering: Messages within a priority level are delivered in FIFO order.
Cross-priority ordering is by priority (higher first). This is deterministic and
does not leak information.
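The delivery order stated above (higher priority first, FIFO within a priority level) can be captured with a heap keyed on (priority, reversed sequence number); `TypedQueue` here is an illustrative sketch, not `ruvix-queue`'s ring-buffer implementation.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Sketch of deterministic queue ordering: higher priority first, and FIFO
/// within a priority level (the sequence number breaks ties).
pub struct TypedQueue<T> {
    heap: BinaryHeap<(u8, Reverse<u64>, T)>, // (priority, Reverse(seq), msg)
    next_seq: u64,
}

impl<T: Ord> TypedQueue<T> {
    pub fn new() -> Self {
        Self { heap: BinaryHeap::new(), next_seq: 0 }
    }

    pub fn send(&mut self, priority: u8, msg: T) {
        self.heap.push((priority, Reverse(self.next_seq), msg));
        self.next_seq += 1;
    }

    pub fn recv(&mut self) -> Option<T> {
        self.heap.pop().map(|(_, _, msg)| msg)
    }
}
```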
7.3 Agent Identity and Authentication
Agents do not have traditional identities (no UIDs, no usernames). Instead, agent identity
is established through the capability chain:
Boot-time identity: An agent's initial capabilities are assigned by the RVF manifest.
The manifest is signed, so the identity is rooted in the code signer.
Runtime identity: An agent can prove its identity by demonstrating possession of
specific capabilities. A "who are you?" query is answered by "I hold capability X with
badge Y", and the verifier checks that badge against its expected value.
Attestation identity: An agent can emit an attest_emit record that binds its
capability badge to a witness entry. External verifiers can trace this back through the
witness chain to the boot attestation.
```rust
/// Agent identity is derived from capability badges, not global names.
pub struct AgentIdentity {
    /// The agent's task handle (ephemeral, changes across reboots).
    pub task: TaskHandle,
    /// Badge on the agent's primary capability (stable across reboots if
    /// assigned by the RVF manifest).
    pub primary_badge: u64,
    /// RVF component ID that spawned this agent.
    pub component_id: RvfComponentId,
    /// Hash of the WASM module binary (code identity).
    pub code_hash: [u8; 32],
}
```
7.4 Resource Limits and DoS Prevention
Each partition and each WASM module within a partition has enforceable resource limits:
```rust
/// Per-partition resource quota.
pub struct PartitionQuota {
    /// Maximum physical memory (bytes).
    pub max_memory_bytes: usize,
    /// Maximum number of tasks.
    pub max_tasks: u32,
    /// Maximum number of capabilities.
    pub max_capabilities: u32,
    /// Maximum number of queue endpoints.
    pub max_queues: u32,
    /// Maximum number of region mappings.
    pub max_regions: u32,
    /// CPU time budget per scheduling epoch (microseconds). 0 = unlimited.
    pub cpu_budget_us: u64,
    /// Maximum interrupt rate across all devices (per second).
    pub max_interrupt_rate_hz: u32,
    /// Maximum witness log entries per epoch (prevents log flooding).
    pub max_witness_entries_per_epoch: u32,
}

/// Enforcement mechanism.
pub trait QuotaEnforcer {
    /// Check if an allocation would exceed the partition's quota.
    fn check_allocation(
        &self,
        partition: PartitionHandle,
        resource: ResourceKind,
        amount: usize,
    ) -> Result<(), KernelError>;

    /// Record a resource allocation against the quota.
    fn record_allocation(
        &mut self,
        partition: PartitionHandle,
        resource: ResourceKind,
        amount: usize,
    ) -> Result<(), KernelError>;

    /// Release a resource allocation.
    fn release_allocation(
        &mut self,
        partition: PartitionHandle,
        resource: ResourceKind,
        amount: usize,
    );
}

pub enum ResourceKind {
    Memory,
    Tasks,
    Capabilities,
    Queues,
    Regions,
    CpuTime,
    WitnessEntries,
}
```
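A minimal sketch of the check-then-record pattern for one resource kind, with per-partition accounting. The `MemoryQuota` type and a bare `u32` partition handle are illustrative simplifications, not the kernel's `QuotaEnforcer` implementation.

```rust
use std::collections::HashMap;

/// Sketch of quota enforcement for one resource kind (memory bytes).
pub struct MemoryQuota {
    max_bytes: usize,
    used: HashMap<u32, usize>, // partition handle -> bytes allocated
}

impl MemoryQuota {
    pub fn new(max_bytes: usize) -> Self {
        Self { max_bytes, used: HashMap::new() }
    }

    /// Combined check + record: fails without side effects if over quota.
    pub fn allocate(&mut self, partition: u32, amount: usize) -> Result<(), &'static str> {
        let used = self.used.entry(partition).or_insert(0);
        if used.checked_add(amount).map_or(true, |n| n > self.max_bytes) {
            return Err("quota exceeded");
        }
        *used += amount;
        Ok(())
    }

    pub fn release(&mut self, partition: u32, amount: usize) {
        if let Some(used) = self.used.get_mut(&partition) {
            *used = used.saturating_sub(amount);
        }
    }
}
```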
Deep-tier proof with source/destination attestation
| Attack | Mitigation |
|---|---|
| Replay migration | Nonce in migration proof |
| Man-in-the-middle on migration | Encrypted channel + attestation binding |
8.2 What Is Out of Scope for v1
The following are explicitly NOT defended against in v1. They are acknowledged risks that
will be addressed in future iterations:
Physical access attacks: An attacker with physical access to the hardware (JTAG,
bus probing, cold boot attacks) is out of scope. Hardware security modules (HSMs) and
tamper-resistant packaging are future work.
Rowhammer / DRAM disturbance: RVM does not implement guard rows or ECC
requirements in v1. Edge hardware with ECC RAM is recommended but not enforced.
Supply chain attacks on the compiler: RVM trusts the Rust compiler. Reproducible
builds are recommended but not verified in v1.
Formal verification of the kernel: Unlike seL4, RVM is not formally verified in v1.
The kernel is written in safe Rust (with minimal unsafe in the HAL layer), but there
is no machine-checked proof of correctness.
Covert channels via power consumption: Power analysis side channels are out of scope.
RVM does not implement constant-power execution.
GPU/accelerator isolation: v1 targets CPU-only execution. GPU and accelerator DMA
isolation is future work.
Encrypted memory (SEV-SNP/TDX): v1 does not implement memory encryption. The
hypervisor trusts the physical memory bus.
Multi-tenant adversarial scheduling: The scheduler provides time isolation through
budgets and jitter, but does not defend against a sophisticated adversary performing
cache-timing analysis across many scheduling quanta.
8.3 Trust Assumptions
RVM's security argument rests on the following assumptions:
The kernel is correct (not formally verified, but written in safe Rust)
The hardware functions as documented (MMU enforces page permissions, IOMMU restricts DMA)
The boot signing key has not been compromised
The Rust compiler generates correct code
The WASM runtime (Wasmtime or WAMR) correctly enforces sandboxing
8.4 Comparison to KVM and seL4 Threat Models
| Property | KVM | seL4 | RVM |
|---|---|---|---|
| TCB size | ~2M lines (Linux kernel) | ~8.7K lines (C) | ~15K lines (Rust) |
| Formal verification | No | Yes (full functional correctness) | No (safe Rust, not verified) |
| Memory safety | C (manual) | C (verified) | Rust (compiler-enforced) |
| Capability model | No (uses DAC/MAC) | Yes (unforgeable tokens) | Yes (seL4-inspired) |
| Proof-gated mutation | No | No | Yes (unique to RVM) |
| Witness audit log | No (relies on external logging) | No | Yes (kernel-native) |
| DMA isolation | VT-d/SMMU | IOMMU-dependent | IOMMU + bounce buffer fallback |
| Side-channel defense | KPTI, IBRS, MDS mitigations | Limited (depends on platform) | CSDB, BTI, PAN, const-time paths |
| Agent-native primitives | No | No | Yes (vectors, graphs, coherence) |
| Hot-code loading | Module loading (large TCB) | No | RVF mount (capability-gated) |
Key differentiators:
RVM vs. KVM: RVM has a 100x smaller TCB. KVM inherits the entire Linux kernel as
its TCB, including filesystems, networking, drivers, and hundreds of syscalls. RVM has
12 syscalls and no ambient authority. KVM relies on Linux's DAC/MAC; RVM uses
capabilities with proof-gated mutation.
RVM vs. seL4: seL4 has formal verification, which RVM does not. However, RVM
has proof-gated mutation (no mutation without cryptographic authorization), kernel-native
witness logging, and agent-specific primitives (vector stores, graph stores, coherence
scoring). seL4 would require these as userspace servers communicating through IPC,
reintroducing overhead and expanding the trusted codebase.
9. Security Invariants Summary
The following invariants MUST hold at all times. Violation of any invariant indicates a
security breach.
| ID | Invariant | Enforcement |
|---|---|---|
| SEC-001 | Boot signature failure -> PANIC | verify_boot_signature_or_panic(), compile-time block on disable-boot-verify in release |
This document defines the GOAP (Goal-Oriented Action Planning) strategy for RVM Hypervisor Core -- a Rust-first, coherence-native microhypervisor that replaces the VM abstraction with coherence domains: graph-partitioned isolation units managed by dynamic min-cut, governed by proof-gated capabilities, and optimized for multi-agent edge computing.
RVM Hypervisor Core is NOT a KVM VMM. It is NOT a Linux module. It is a standalone hypervisor that boots bare metal, manages hardware directly, and uses coherence domains as its primary scheduling and isolation primitive. Traditional VMs are subsumed as a degenerate case (a coherence domain with a single opaque partition and no graph structure).
Current State Assessment
The RuVector project already has significant infrastructure in place:
ruvix kernel workspace -- 22 sub-crates, ~101K lines of Rust, 760 tests passing (Phase A complete)
ruvix-cap -- seL4-inspired capability system with derivation trees
ruvix-proof -- 3-tier proof engine (Reflex <100ns, Standard <100us, Deep <10ms)
ruvix-sched -- Coherence-aware scheduler with novelty boosting
ruvix-hal -- HAL traits for AArch64, RISC-V, x86 (trait definitions)
ruvix-aarch64 -- AArch64 boot, MMU stubs
ruvix-physmem -- Physical memory allocator
ruvix-boot -- 5-stage RVF boot with ML-DSA-65 signatures
ruvix-nucleus -- 12 syscalls, checkpoint/replay
ruvector-mincut -- Subpolynomial dynamic min-cut (the crown jewel)
Current world state: ruvix has 6 primitives (Task, Capability, Region, Queue, Timer, Proof) but no concept of a "coherence domain" as a first-class hypervisor object.
Goal state: Coherence domains are the primary isolation and scheduling unit, replacing the VM abstraction.
Definition
A coherence domain is a graph-structured isolation unit consisting of:
CoherenceDomain {
id: DomainId,
graph: VecGraph, // from ruvix-vecgraph, nodes=tasks, edges=data dependencies
regions: Vec<RegionHandle>, // memory owned by this domain
capabilities: CapTree, // capability subtree rooted at domain cap
coherence_score: f32, // spectral coherence metric
mincut_partition: Partition, // current min-cut boundary from ruvector-mincut
witness_log: WitnessLog, // domain-local witness chain
tier: MemoryTier, // Hot | Warm | Dormant | Cold
}
How It Replaces VMs
| VM Concept | Coherence Domain Equivalent |
|---|---|
| vCPU | Tasks within the domain's graph |
| Guest physical memory | Regions with domain-scoped capabilities |
| VM exit/enter | Partition switch (rescheduling at min-cut boundary) |
AD-1: No hardware virtualization extensions required. RVM uses capability-based isolation (software) + MMU page table partitioning (hardware) instead of VT-x/AMD-V/EL2 trap-and-emulate. This means:
No VM exits. No VMCS/VMCB. No nested page tables.
Isolation comes from: (a) capability enforcement in ruvix-cap, (b) MMU page table boundaries per domain, (c) proof-gated mutation.
A traditional VM is a degenerate coherence domain: single partition, opaque graph, no coherence scoring.
AD-2: EL2 is used for page table management only. On AArch64, the hypervisor runs at EL2. But EL2 is used purely to manage stage-2 page tables that enforce region boundaries -- not for trap-and-emulate virtualization.
AD-3: Coherence score drives everything. The coherence score (computed from the domain's graph structure via ruvector-coherence spectral methods) determines:
Scheduling priority (high coherence = more CPU time)
Memory tier (high coherence = hot tier; low coherence = demote to warm/dormant)
Migration eligibility (domains with suboptimal min-cut partition are candidates)
Reclamation order (lowest coherence reclaimed first under memory pressure)
Actions:
A2.1.1: Add CoherenceDomain struct to ruvix-types
A2.1.2: Add DomainCreate, DomainDestroy, DomainMigrate syscalls to ruvix-nucleus
A2.1.3: Implement domain-scoped capability trees in ruvix-cap
A2.1.4: Wire ruvector-coherence spectral scoring into ruvix-sched
2.2 Hardware Abstraction Layer
Current state: ruvix-hal defines traits for Console, Timer, InterruptController, Mmu, Power. ruvix-aarch64 has stubs.
Goal state: HAL supports three architectures with hypervisor-level primitives.
A2.2.1: Extend ruvix-hal with HypervisorMmu and CoherenceHardware traits
A2.2.2: Implement AArch64 EL2 page table management in ruvix-aarch64
A2.2.3: Implement GIC-600 interrupt routing per coherence domain
A2.2.4: Define RISC-V H-extension HAL (trait impl stubs)
2.3 Memory Model
Current state: ruvix-region provides Immutable/AppendOnly/Slab policies with mmap-backed storage. ruvix-physmem has a buddy allocator.
Goal state: Hybrid memory model with capability-gated regions and tiered coherence.
Design: Four-Tier Memory Hierarchy
Tier | Backing | Access Latency | Coherence State | Eviction Policy
---------|-----------------|----------------|-----------------|------------------
Hot | L1/L2 resident | <10ns | Exclusive/Modified | Never (pinned)
Warm | DRAM | ~100ns | Shared/Clean | LRU with coherence weight
Dormant | Compressed DRAM | ~1us | Invalid (reconstructable) | Coherence score threshold
Cold | NVMe/Flash | ~10us | Tombstone | Witness log pointer only
Key Innovation: Reconstructable Memory.
Dormant regions are not stored as raw bytes. They are stored as:
A witness log checkpoint hash
A delta-compressed representation (using ruvector-temporal-tensor compression)
Reconstruction instructions that can rebuild the region from the witness log
This means memory reclamation does not destroy state -- it compresses it into the witness chain.
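The demote/reconstruct round trip can be sketched as a checkpoint plus a sparse delta; `DormantRegion`, `demote`, and `reconstruct` are illustrative names (the real design uses a witness-log checkpoint hash and ruvector-temporal-tensor compression), and the sketch assumes the live region and checkpoint have equal length.

```rust
/// Sketch of the "compressed, reconstructable" dormant representation: a
/// region is stored as a checkpoint plus the bytes that differ from it,
/// and demotion is reversible by replaying the delta.
pub struct DormantRegion {
    pub checkpoint: Vec<u8>,     // in the real design: a witness-log checkpoint hash
    pub delta: Vec<(usize, u8)>, // bytes that differ from the checkpoint
}

pub fn demote(checkpoint: &[u8], live: &[u8]) -> DormantRegion {
    let delta = live
        .iter()
        .enumerate()
        .filter(|&(i, &b)| checkpoint[i] != b)
        .map(|(i, &b)| (i, b))
        .collect();
    DormantRegion { checkpoint: checkpoint.to_vec(), delta }
}

pub fn reconstruct(region: &DormantRegion) -> Vec<u8> {
    let mut bytes = region.checkpoint.clone();
    for &(offset, byte) in &region.delta {
        bytes[offset] = byte;
    }
    bytes
}
```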
Actions:
A2.3.1: Extend ruvix-region with MemoryTier enum and tier-transition methods
A2.3.2: Implement dormant-region compression using witness log + delta encoding
A2.3.3: Implement cold-tier eviction to NVMe with tombstone references
A2.3.4: Wire physical memory allocator to tier-aware allocation (hot from buddy, warm from slab pool)
A2.3.5: Define the page table structure for stage-2 domain isolation
2.4 Scheduler: Graph-Pressure-Driven
Current state: ruvix-sched has a coherence-aware scheduler with deadline pressure, novelty signal, and structural risk. Fixed partition model.
Goal state: Scheduler uses live graph state from ruvector-mincut to make scheduling decisions.
Scheduling Algorithm: CoherencePressure
EVERY scheduler_tick:
1. For each active coherence domain D:
a. Read D.graph edge weights (data flow rates between tasks)
b. Compute min-cut value via ruvector-mincut (amortized O(n^{o(1)}))
c. Compute coherence_score = spectral_gap(D.graph) / min_cut_value
d. Compute pressure = deadline_urgency * coherence_score * novelty_boost
2. Sort domains by pressure (descending)
3. Assign CPU time proportional to pressure
4. If any domain's coherence_score < threshold:
- Trigger repartition: invoke ruvector-mincut to compute new boundary
- If repartition improves score by >10%: execute migration
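Steps 1d–3 of the algorithm above can be sketched as follows, taking the coherence and min-cut inputs as already-computed numbers (in the real design they come from ruvector-mincut and spectral scoring); `DomainState` and `schedule` are illustrative names, and the sketch assumes all pressures are positive.

```rust
/// Sketch of CoherencePressure steps 1d-3: compute pressure per domain,
/// sort descending, and assign CPU shares proportional to pressure.
pub struct DomainState {
    pub id: u32,
    pub deadline_urgency: f32,
    pub coherence_score: f32, // spectral_gap / min_cut_value
    pub novelty_boost: f32,
}

/// Returns (domain id, CPU share in [0, 1]) sorted by descending pressure.
pub fn schedule(domains: &[DomainState]) -> Vec<(u32, f32)> {
    let mut scored: Vec<(u32, f32)> = domains
        .iter()
        .map(|d| (d.id, d.deadline_urgency * d.coherence_score * d.novelty_boost))
        .collect();
    // Step 2: sort by pressure, descending.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    // Step 3: CPU time proportional to pressure.
    let total: f32 = scored.iter().map(|&(_, p)| p).sum();
    scored.into_iter().map(|(id, p)| (id, p / total)).collect()
}
```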
Partition Switch Protocol (target: <10us):
switch_partition(from: DomainId, to: DomainId):
1. Save from.task_state to from.region (register dump, ~500ns)
2. Switch stage-2 page table root (TTBR write, ~100ns)
3. TLB invalidate for from domain (TLBI, ~2us on ARM)
4. Load to.task_state from to.region (~500ns)
5. Emit witness record for switch (~200ns with reflex proof)
6. Resume execution in to domain
Total budget: ~3.3us (well within 10us target)
Actions:
A2.4.1: Refactor ruvix-sched to accept graph state from ruvix-vecgraph
A2.4.2: Integrate ruvector-mincut as a scheduling oracle (no_std subset)
A2.4.3: Implement partition switch protocol in ruvix-aarch64
A2.4.4: Benchmark partition switch time on QEMU virt
2.5 IPC: Zero-Copy Message Passing
Current state: ruvix-queue provides io_uring-style ring buffers with zero-copy semantics. 47 tests passing.
Goal state: Cross-domain IPC through shared regions with capability-gated access.
Design
Inter-domain IPC:
1. Sender domain S holds Capability(Queue Q, WRITE)
2. Receiver domain R holds Capability(Queue Q, READ)
3. Queue Q is backed by a shared Region visible in both S and R stage-2 page tables
4. Messages are written as typed records with coherence metadata
5. Every send/recv emits a witness record linking the two domains
Intra-domain IPC:
Same as current ruvix-queue, but within a single stage-2 address space.
No page table switch required. Pure ring buffer.
Message Format:
```rust
struct DomainMessage {
    header: MsgHeader,        // 16 bytes: sender, receiver, type, len
    coherence: CoherenceMeta, // 8 bytes: coherence score at send time
    witness: WitnessHash,     // 32 bytes: hash linking to witness chain
    payload: [u8],            // variable: zero-copy reference into shared region
}
```
Actions:
A2.5.1: Extend ruvix-queue with cross-domain shared region support
A2.5.2: Implement capability-gated queue access for inter-domain messages
A2.5.3: Add CoherenceMeta and WitnessHash to message headers
2.6 Device Leasing
Goal state: Devices are not "assigned" to domains. They are leased with capability-bounded time windows.
```rust
struct DeviceLease {
    device_id: DeviceId,
    domain: DomainId,
    capability: CapHandle, // Revocable capability for device access
    lease_start: Timestamp,
    lease_duration: Duration,
    max_dma_budget: usize, // Maximum DMA bytes allowed during lease
    witness: WitnessHash,  // Proof of lease grant
}
```
Key properties:
Lease expiry automatically revokes capability (no explicit release needed)
DMA budget prevents device from exhausting memory during lease
Multiple domains can hold read-only leases to the same device simultaneously
Exclusive write lease requires proof of non-interference (via min-cut: device node has no shared edges)
Actions:
A2.6.1: Design DeviceLease struct and lease lifecycle
A2.6.2: Implement lease-based MMIO region mapping in ruvix-drivers
A2.6.3: Implement DMA budget enforcement in ruvix-dma
A2.6.4: Wire lease expiry to capability revocation in ruvix-cap
2.7 Witness Subsystem: Compact Append-Only Log
Current state: ruvix-boot has WitnessLog with SHA-256 chaining. ruvix-proof has 3-tier proof engine.
Goal state: Hypervisor-wide witness log that enables deterministic replay, audit, and fault recovery.
Design
WitnessLog (per coherence domain):
- Append-only ring buffer in a dedicated Region(AppendOnly)
- Each entry: [timestamp: u64, action_type: u8, proof_hash: [u8; 32], prev_hash: [u8; 32], payload: [u8; N]]
- Fixed 82-byte entries (ATTESTATION_SIZE from ruvix-types)
- Hash chain: entry[i].prev_hash = SHA256(entry[i-1])
- Compaction: when ring buffer wraps, emit a Merkle root of the evicted segment to cold storage
GlobalWitness (hypervisor-level):
- Merges per-domain witness chains at partition switch boundaries
- Enables cross-domain causality reconstruction
- Uses ruvector-dag for causal ordering
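The compaction step above (summarizing an evicted ring segment by a Merkle root) can be sketched as follows; the FNV-1a pair hash stands in for SHA-256 purely to keep the example self-contained.

```rust
/// Toy pair hash standing in for SHA-256; the tree structure is what matters.
fn h2(a: u64, b: u64) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &x in &[a, b] {
        for byte in x.to_le_bytes() {
            h ^= byte as u64;
            h = h.wrapping_mul(0x100000001b3);
        }
    }
    h
}

/// Pairwise-hash leaves up to a single root; an odd node is carried up as-is.
/// The root is the compact commitment emitted to cold storage when the
/// witness ring buffer wraps.
pub fn merkle_root(mut level: Vec<u64>) -> u64 {
    assert!(!level.is_empty());
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|c| if c.len() == 2 { h2(c[0], c[1]) } else { c[0] })
            .collect();
    }
    level[0]
}
```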
Actions:
A2.7.1: Implement per-domain witness log in ruvix-proof
A2.7.2: Implement global witness merge at partition switch
A2.7.3: Implement Merkle compaction for ring buffer overflow
A2.7.4: Implement deterministic replay from witness log + checkpoint
3. Implementation Milestones
M0: Bare-Metal Rust Boot on QEMU (No KVM, Direct Machine Code)
Goal: Boot RVM at EL2 on QEMU aarch64 virt, print to UART, emit first witness record.
Preconditions:
ruvix-hal traits defined (done)
ruvix-aarch64 boot stubs (partially done)
aarch64-boot directory with linker script and build system (exists)
Actions:
M0.1: Complete _start assembly: disable MMU, set up stack, branch to Rust
M0.2: Initialize PL011 UART via ruvix-drivers
M0.3: Initialize GIC-400 minimal (mask all interrupts except timer)
M0.4: Set up EL2 translation tables (identity mapping for kernel, device MMIO)
M0.5: Initialize witness log in a fixed RAM region
M0.6: Emit first witness record (boot attestation)
M0.7: Measure cold boot to first witness time (target: <250ms)
Keep O(n log n) edges instead of O(n^2) for coherence queries
| Use Case | Sparsified Structure | Benefit |
|---|---|---|
| Coherence scoring | Spectral gap from sparsified Laplacian | Fast coherence score without full eigendecomposition |
| Migration planning | Sparsified graph for min-cut | Approximate min-cut on sparsified graph (faster) |
| Memory accounting | Sparse representation of access patterns | Track which regions are accessed by which tasks |
Key insight: The sparsifier maintains a spectrally-equivalent graph with O(n log n / epsilon^2) edges. This means coherence scoring and min-cut computation can run on the sparse representation instead of the full graph, reducing kernel-mode computation time.
Actions:
I4.2.1: Add no_std feature to ruvector-sparsifier
I4.2.2: Implement incremental sparsification (update sparse graph on edge insert/delete)
I4.2.3: Wire sparsified graph into scheduler for fast coherence queries
I4.2.4: Benchmark: sparsified vs. full graph coherence scoring latency
Key insight: The solver's Neumann series and conjugate gradient methods can compute approximate spectral properties of the domain graph in O(sqrt(n)) time. This is fast enough for per-tick coherence scoring.
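The Neumann-series part of that insight is the identity (I - A)^{-1} x = x + Ax + A^2 x + ... for ||A|| < 1, truncated after a fixed number of terms. A self-contained sketch on a fixed 2x2 system (not the ruvector-solver API, which operates on the domain graph's Laplacian):

```rust
/// Dense 2x2 matrix-vector product for the sketch.
fn matvec(a: &[[f64; 2]; 2], x: [f64; 2]) -> [f64; 2] {
    [
        a[0][0] * x[0] + a[0][1] * x[1],
        a[1][0] * x[0] + a[1][1] * x[1],
    ]
}

/// Truncated Neumann series: sum of A^k x for k = 0 .. terms-1,
/// approximating (I - A)^{-1} x when ||A|| < 1.
pub fn neumann_solve(a: &[[f64; 2]; 2], x: [f64; 2], terms: usize) -> [f64; 2] {
    let mut sum = x;   // k = 0 term
    let mut power = x; // running A^k x
    for _ in 1..terms {
        power = matvec(a, power);
        sum[0] += power[0];
        sum[1] += power[1];
    }
    sum
}
```

With A = 0.5·I the exact answer is (I - A)^{-1} x = 2x, and the truncated series converges geometrically toward it.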
Actions:
I4.3.1: Add no_std subset of ruvector-solver (Neumann series only, no nalgebra)
I4.3.2: Implement approximate Fiedler vector computation for coherence scoring
Key insight: When a dormant region needs reconstruction, the witness log provides the exact mutation sequence. But semantic embeddings can identify which other regions contain related state, enabling speculative prefetch during reconstruction.
Actions:
I4.4.1: Implement kernel-resident micro-HNSW in ruvix-vecgraph (fixed-size, no_std)
I4.4.2: Wire novelty detection into scheduler (vector distance from recent inputs)
I4.4.3: Implement embedding-based prefetch for dormant region reconstruction
I4.4.4: Implement anomaly detection for cross-domain state divergence
4.5 Additional RuVector Crate Integration
| Crate | Integration Point | Priority |
|---|---|---|
| ruvector-raft | Cross-node consensus for multi-node coherence domains | P2 (M7) |
| ruvector-verified | Formal proofs for capability derivation correctness | P1 (M2) |
| ruvector-dag | Causal ordering in global witness log | P1 (M2) |
| ruvector-temporal-tensor | Delta compression for dormant regions | P1 (M5) |
| ruvector-coherence | Spectral coherence scoring | P0 (M3) |
| cognitum-gate-kernel | 256-tile fabric as coherence domain topology | P2 (M7) |
| ruvector-snapshot | Checkpoint/restore for domain state | P1 (M5) |
5. Success Metrics
5.1 Performance Targets
| Metric | Target | Measurement Method | Milestone |
|---|---|---|---|
| Cold boot to first witness | <250ms | QEMU timer from power-on to first witness UART print | M0 |
| Hot partition switch | <10us | ARM cycle counter around switch_partition() | M1 |
| Remote memory traffic reduction | 20% vs. static | Hardware perf counters (cache miss/remote access) | M4 |
| Tail latency reduction | 20% vs. round-robin | P99 latency of agent request/response | M4 |
| Full witness trail | 100% coverage | Audit: every syscall has witness record | M2 |
| Fault recovery without global reboot | Domain-local recovery | Kill one domain, verify others unaffected | M5 |
| WASM agent boot time | <5ms per agent | Timer around WASM instantiation | M6 |
| Zero-copy IPC latency | <100ns intra, <1us inter | Benchmark ring buffer round-trip | M1 |
| Coherence scoring overhead | <1us per domain per tick | Cycle counter around scoring function | M3 |
| Min-cut update amortized | <5us for 64-node graph | Benchmark in kernel context | M4 |
5.2 Correctness Targets

| Property | Verification Method | Milestone |
|---|---|---|
| Capability safety (no unauthorized access) | ruvector-verified + Kani | M1 |
| Witness chain integrity (no gaps, no forgery) | SHA-256 chain verification | M2 |
| Deterministic replay (same inputs → same state) | Replay 10K syscall traces | M2 |
| Proof soundness (invalid proofs always rejected) | Fuzzing + proptest | M2 |
| Isolation (domain fault does not affect others) | Inject faults, verify containment | M5 |
| Memory safety (no UB in kernel code) | Miri + Kani + `#![forbid(unsafe_code)]` where possible | |
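The witness-chain integrity property above can be sketched as follows. This is a hosted-mode illustration: it uses std's `DefaultHasher` as a stand-in for the real SHA-256 chain, and the names (`WitnessRecord`, `verify_chain`) are assumptions, not the ruvix-proof API:

```rust
// Each record's hash covers its payload plus the previous record's hash,
// so a gap, reorder, or edit anywhere breaks every link after it.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

pub struct WitnessRecord {
    pub payload: Vec<u8>,
    pub prev_hash: u64,
    pub hash: u64,
}

fn chain_hash(payload: &[u8], prev: u64) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

/// Append a record, linking it to the tail of the log (hash 0 = genesis).
pub fn append(log: &mut Vec<WitnessRecord>, payload: &[u8]) {
    let prev = log.last().map_or(0, |r| r.hash);
    log.push(WitnessRecord {
        payload: payload.to_vec(),
        prev_hash: prev,
        hash: chain_hash(payload, prev),
    });
}

/// Walk the log, recomputing every link; false means a gap or forgery.
pub fn verify_chain(log: &[WitnessRecord]) -> bool {
    let mut prev = 0u64;
    for r in log {
        if r.prev_hash != prev || r.hash != chain_hash(&r.payload, prev) {
            return false;
        }
        prev = r.hash;
    }
    true
}
```

The M2 audit would run the SHA-256 equivalent of `verify_chain` over the full per-domain and merged global logs.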
```
A2.2 (HAL) -> M0 -> M1 -> M3 -> M4 -> M5 -> M6 -> M7
                                ^
                                |
                     I4.1 (mincut no_std) -- this is the highest-risk integration
```
Highest-risk item: creating a no_std subset of ruvector-mincut that runs in kernel context within the scheduler tick budget. If the amortized min-cut update exceeds 5µs for 64-node graphs, the scheduler design must fall back to periodic (not per-tick) repartitioning.
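The periodic fallback is structurally simple: the expensive min-cut pass moves out of the per-tick path and runs every N ticks. A minimal sketch, with illustrative names (`Scheduler`, `repartition`) rather than the ruvix-sched API:

```rust
// Fallback scheduling skeleton: amortize the min-cut pass over many ticks
// instead of paying for an incremental update on every tick.
const REPARTITION_PERIOD: u64 = 100; // ticks between full min-cut passes

pub struct Scheduler {
    tick: u64,
    pub repartitions: u64,
}

impl Scheduler {
    pub fn new() -> Self {
        Self { tick: 0, repartitions: 0 }
    }

    /// Called once per scheduler tick. The common-case tick stays cheap;
    /// only every REPARTITION_PERIOD-th tick runs the graph pass.
    pub fn on_tick(&mut self) {
        self.tick += 1;
        if self.tick % REPARTITION_PERIOD == 0 {
            self.repartition();
        }
    }

    fn repartition(&mut self) {
        // Placeholder for the min-cut pass over the coherence graph.
        self.repartitions += 1;
    }
}
```

The trade-off is staleness: domain placement can lag communication-pattern shifts by up to REPARTITION_PERIOD ticks, which is why the per-tick incremental update remains the preferred design if it fits the 5µs budget.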
```rust
WorldState {
    bare_metal_research_complete: false,
    capability_model_decided: true,       // seL4-inspired, already in ruvix-cap
    scheduling_algorithm_specified: false,
    memory_model_designed: false,
    verification_strategy_decided: false,
    agent_runtime_selected: false,
    boots_on_qemu: false,
    uart_works: false,
    mmu_configured: false,
    interrupts_working: false,
    coherence_domains_exist: false,
    capabilities_enforce_isolation: true, // ruvix-cap works in hosted mode
    witness_log_records_all: false,
    proofs_gate_all_mutations: true,      // ruvix-proof works in hosted mode
    scheduler_uses_coherence: false,
    mincut_drives_partitioning: false,
    memory_tiers_work: false,
    wasm_agents_run: false,
    boot_under_250ms: false,
    switch_under_10us: false,
    traffic_reduced_20pct: false,
    tail_latency_reduced_20pct: false,
    runs_on_seed_hardware: false,
    runs_on_appliance_hardware: false,
}
```
Goal State
All fields set to true.
A* Search Heuristic
The heuristic for GOAP planning uses the number of false fields as the distance estimate. Each action sets one or more fields to true. The planner finds the minimum-cost path from initial to goal state.
Cost model:

- Research action: 1 week (cost = 1)
- Architecture action: 1-2 weeks (cost = 1.5)
- Implementation milestone: 2-5 weeks (cost = 3)
- Integration action: 1-3 weeks (cost = 2)
- Hardware bring-up: 6-8 weeks (cost = 7)
Optimal plan total estimated duration: 28-36 weeks (with parallelism in research and integration phases, critical path through M0->M1->M3->M4->M5->M6->M7).
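The planner described above can be sketched as A* over a bitmask world state, with the number-of-false-fields heuristic. The actions and costs below are illustrative, not the real plan; note that the count heuristic is only admissible when no action's cost undercuts the number of fields it sets:

```rust
// GOAP-style A*: states are bitmasks of boolean fields, actions OR bits in,
// heuristic h = count of goal fields still false.
use std::cmp::Reverse;
use std::collections::{BinaryHeap, HashMap};

struct Action {
    sets: u32, // bitmask of fields this action flips to true
    cost: u32, // rough weeks of effort
}

fn h(state: u32, goal: u32) -> u32 {
    (goal & !state).count_ones() // fields still false
}

/// A* from `start` until every bit in `goal` is set; returns total cost.
fn plan(start: u32, goal: u32, actions: &[Action]) -> Option<u32> {
    let mut open = BinaryHeap::new(); // min-heap via Reverse, ordered by f = g + h
    let mut best: HashMap<u32, u32> = HashMap::new();
    best.insert(start, 0);
    open.push(Reverse((h(start, goal), 0u32, start)));
    while let Some(Reverse((_f, g, state))) = open.pop() {
        if state & goal == goal {
            return Some(g);
        }
        if g > *best.get(&state).unwrap_or(&u32::MAX) {
            continue; // stale heap entry
        }
        for a in actions {
            let next = state | a.sets;
            let ng = g + a.cost;
            if ng < *best.get(&next).unwrap_or(&u32::MAX) {
                best.insert(next, ng);
                open.push(Reverse((ng + h(next, goal), ng, next)));
            }
        }
    }
    None
}
```

The real plan would additionally record the action sequence (not just the cost) and use the fractional cost weights from the table above.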
8. Risk Register

| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| MinCut no_std too slow for per-tick scheduling | Medium | High | Fall back to periodic repartitioning (every 100 ticks); use sparsified graph |
| EL2 page table management bugs | High | Medium | Extensive QEMU testing; Miri for unsafe blocks; compare with known-good implementations |
| WASM runtime too large for kernel integration | Medium | Medium | Use WAMR interpreter (smallest footprint), or run WASM in EL1 with EL2 capability enforcement |
| Witness log overhead degrades hot path | Low | High | Reflex proof tier (<100ns) is already within budget; batch witness records if needed |
| Hardware coherence counters unavailable | Medium | Low | Fall back to software instrumentation (memory access tracking via page faults) |
| Formal verification scope creep | High | Low | Strict verification budget: only ruvix-cap and ruvix-proof get full verification |
| Cross-node migration protocol correctness | High | High | TLA+ model before implementation; extensive simulation in qemu-swarm |
9. Existing Codebase Inventory

What We Have (and reuse directly)

| Crate | LoC (est.) | Reuse Level | Notes |
|---|---|---|---|
| ruvix-types | ~2,000 | Direct | Add CoherenceDomain, MemoryTier types |
| ruvix-cap | ~1,500 | Direct | Add domain-scoped trees |
| ruvix-proof | ~1,800 | Direct | Add per-domain witness log |
| ruvix-sched | ~1,200 | Refactor | Wire to coherence scoring |
| ruvix-region | ~1,500 | Extend | Add tier management |
| ruvix-queue | ~1,000 | Extend | Add cross-domain shared regions |
| ruvix-boot | ~2,000 | Refactor | EL2 boot sequence |
| ruvix-vecgraph | ~1,200 | Extend | Add kernel HNSW |
| ruvix-nucleus | ~3,000 | Refactor | Add domain syscalls |
| ruvix-hal | ~800 | Extend | Add HypervisorMmu traits |
| ruvix-aarch64 | ~800 | Major work | EL2 implementation |
| ruvix-drivers | ~500 | Extend | Lease-based device model |
| ruvix-physmem | ~800 | Direct | Tier-aware allocation |
| ruvix-smp | ~500 | Direct | Multi-core domain placement |
| ruvix-dma | ~400 | Extend | Budget enforcement |
| ruvix-dtb | ~400 | Direct | Device tree parsing |
| ruvix-shell | ~600 | Direct | Debug interface |
| qemu-swarm | ~3,000 | Direct | Testing infrastructure |
What We Have (reuse via no_std adaptation)

| Crate | Adaptation Needed |
|---|---|
| ruvector-mincut | no_std feature, fixed-size graph backend |
| ruvector-sparsifier | no_std feature, remove rayon |
| ruvector-solver | no_std Neumann series only |
| ruvector-coherence | Already minimal; add spectral feature |
| ruvector-verified | Lean-agentic proofs for cap verification |
| ruvector-dag | no_std causal ordering |
What We Need to Build

| Component | Estimated LoC | Milestone |
|---|---|---|
| CoherenceDomain lifecycle | ~2,000 | M1 |
| EL2 page table management | ~3,000 | M0/M1 |
| Partition switch protocol | ~500 | M1 |
| Per-domain witness log | ~1,000 | M2 |
| Global witness merge | ~800 | M2 |
| Graph-pressure scheduler | ~1,500 | M3 |
| MinCut kernel integration | ~2,000 | M4 |
| Memory tier manager | ~2,000 | M5 |
| WASM runtime adapter | ~3,000 | M6 |
| Device lease manager | ~1,000 | M6 |
| Hardware drivers (Seed/Appliance) | ~5,000 | M7 |
| **Total new code** | **~21,800** | |
Combined with ~20K lines of existing ruvix code being reused/extended, the total codebase at M7 completion is estimated at ~42K lines of Rust.
10. Next Steps (Immediate Actions)

Week 1-2: Research Sprint

- Read Theseus OS and RedLeaf papers (A1.1.1, A1.1.2)
- Audit ruvix-cap against the seL4 CNode spec (A1.2.1)
- Formalize the coherence-pressure scheduling problem (A1.3.1)
- Benchmark ruvector-mincut update latency against the kernel budget (A1.3.4)
- Select a WASM runtime (WAMR vs. wasmtime-minimal) (A1.6.1)

Week 3-4: M0 Sprint

- Complete _start assembly for AArch64 EL2 boot (M0.1)
- Initialize the PL011 UART (M0.2)
- Configure EL2 translation tables (M0.4)
- Emit the first witness record (M0.6)
- Measure boot time (M0.7)

Week 5-8: M1 + M2 Sprint

- Implement CoherenceDomain in ruvix-types (M1.1)
- Add domain syscalls (M1.2)
- Implement stage-2 page tables (M1.4)
- Wire witness logging to all syscalls (M2.2)

Week 9-12: M3 + M4 Sprint (Critical Path)

- Integrate ruvector-coherence into the scheduler (M3.1)