Describe the bug
This is a tough bug to trace, because the XNU kernel does not expose which subsystem is allocating `kalloc.1024` objects. However, based on allocation size and OrbStack's architecture (VM + networking + filesystem bridging), the following kernel subsystems are likely sources of sustained 1024-byte allocations:
1. Network stack (very likely)
- `mbuf` clusters / packet metadata (especially for bridged/NAT/virtio networking)
- Interface state (`ifnet`, routing entries)
- Socket buffers and per-connection state
- NAT / port forwarding / connection tracking tables
2. Virtualization / virtio interfaces
- `virtio-net` and `virtio-vsock` descriptor rings and buffers
- Hypervisor.framework / Virtualization.framework bookkeeping
- Per-VM device emulation state
3. Filesystem bridging (high likelihood)
- vnodes and name cache entries
- File descriptor / fileproc structures
- FUSE / VirtioFS request buffers and message objects
- Path resolution and directory enumeration caches
4. IPC / messaging
- Mach message buffers (host ↔ VM communication)
- XPC-related kernel objects triggered by helper processes
5. I/O + eventing
- `kqueue`/`kevent` structures
- File watchers / fsnotify-style activity
- Poll/select descriptor tracking
Important observation (likely key to reproduction)
This issue appears to accelerate significantly under write-heavy workloads using an in-memory filesystem:
- I was running PostgreSQL inside OrbStack with its data directory on a `tmpfs` (no bind mount / no persistent volume)
- The database existed entirely in memory and generated frequent small writes (typical OLTP/test workload)
- Under this pattern, the `kalloc.1024` growth rate increased substantially compared to normal container workloads
This suggests the leak may be triggered or amplified by:
- High-frequency small writes
- Page cache / memory-backed filesystem interactions
- VirtioFS / FUSE bridging of memory-backed filesystems
- Increased syscall + IPC churn between guest and host
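For reference, the memory-backed PostgreSQL setup described above can be recreated with something like the following. The image tag, password, mount point, and tmpfs size are illustrative assumptions, not details from this report:

```shell
# Hypothetical recreation of the tmpfs-backed PostgreSQL container described
# above: the data directory lives entirely in memory, with no bind mount or
# persistent volume. Image tag, password, and tmpfs size are assumptions.
docker run -d --name pg-tmpfs \
  --tmpfs /var/lib/postgresql/data:rw,size=2g \
  -e POSTGRES_PASSWORD=test \
  postgres:16
```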
Observed behavior
- `kalloc.1024` steadily grows over time while OrbStack is running
- Eventually exceeds ~18 GB, leading to kernel map exhaustion and a system panic
- Growth correlates with container activity, especially write-heavy workloads
After uninstalling OrbStack and switching to a Linux VM setup:
- `kalloc.1024` remains stable (~174 KB) for 48+ hours under similar workload conditions
This strongly indicates the leak originates from OrbStack's interaction with macOS kernel subsystems rather than the workload itself. I found that even after I shut down OrbStack completely, `kalloc.1024` never shrinks unless I restart the Mac. Slab allocators do not shrink often, but they do shrink eventually. A good example is the iOS Simulator, whose `kalloc.1024` usage oscillates during debugging but returns to normal once it is shut down.
To Reproduce
1. Write a script that brings up and tears down ephemeral, in-memory database containers such as PostgreSQL and assigns them different subnets.
2. Write a test that performs frequent insertions into the database.
3. Run `sudo zprint | rg data_shared.kalloc.1024` periodically; observe the growth rate increase quadratically with test activity.
4. Stop the tests and bring down the container stacks.
5. Run `zprint` again and observe the growth rate stabilize but never return to baseline.
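The steps above can be sketched as a single script. Container names, subnet ranges, loop counts, and sleep intervals are my assumptions, and the column parsed out of `zprint` in `zone_size` varies across macOS releases, so treat this as a sketch rather than the exact repro harness:

```shell
#!/bin/bash
# Sketch of the reproduction loop: bring up tmpfs-backed PostgreSQL containers
# on distinct subnets, drive small writes, sample kalloc.1024, tear down.
# All names, counts, and intervals are illustrative assumptions.
set -euo pipefail

# Pull the size column out of the matching zprint line. zprint's exact column
# layout differs between macOS releases, so the field index is an assumption.
zone_size() {
  awk -v zone="$1" '$0 ~ zone { print $3; exit }'
}

if command -v docker >/dev/null 2>&1; then
  for round in $(seq 1 10); do
    # Step 1: ephemeral, in-memory database container on its own subnet.
    docker network create --subnet "172.28.${round}.0/24" "pgnet${round}"
    docker run -d --name "pg${round}" --network "pgnet${round}" \
      --tmpfs /var/lib/postgresql/data \
      -e POSTGRES_PASSWORD=test postgres:16

    # Step 2: frequent small insertions (crude stand-in for an OLTP test).
    sleep 5  # rough wait for PostgreSQL to accept connections
    docker exec "pg${round}" psql -U postgres -c \
      'CREATE TABLE IF NOT EXISTS t (id serial, v text);'
    for i in $(seq 1 1000); do
      docker exec "pg${round}" psql -U postgres -q -c \
        "INSERT INTO t (v) VALUES ('x');"
    done

    # Step 3: sample the zone after each round.
    echo "round ${round}: $(sudo zprint | zone_size 'data_shared.kalloc.1024')"

    # Step 4: tear the stack down again.
    docker rm -f "pg${round}"
    docker network rm "pgnet${round}"
  done
fi
```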
If you stick with steps 1-3 long enough, the Mac freezes, the screen flashes pink and goes blank, the Mac restarts, and after login you are presented with an Apple crash report containing the kernel panic message "kalloc.1024 kernel map exhausted". On my system this happens between 18 and 20 GB, likely depending on allocations in other kernel maps at the time of the overflow. I reproduced the crash 4 times on:
macOS 26.3.1 : Mac Studio M1 Ultra 128GB
Expected behavior
`kalloc.1024` growth under load is expected, but the growth rate should stabilize and the number of active objects should decrease, especially after shutdown. Memory should never be held in perpetuity until restart.
Diagnostic report (REQUIRED)
I uninstalled OrbStack so my Mac would stop kernel panicking.
Screenshots and additional context (optional)
Logs in Console.app are sparse, but the kernel panic is sometimes preceded by jetsam events:
{
"build" : "macOS 26.3.1 (25D2128)",
"product" : "Mac13,2",
"kernel" : "Darwin Kernel Version 25.3.0: Wed Jan 28 20:54:46 PST 2026; root:xnu-12377.91.3~2\/RELEASE_ARM64_T6000",
"incident" : "402602B3-0942-4469-9FCB-3031F18BD830",
"crashReporterKey" : "09588BAB-6341-552C-7FDE-8CEAF5CA1D1F",
"date" : "2026-03-13 16:40:01.93 -0700",
"codeSigningMonitor" : 1,
"bug_type" : "298",
"developerMode" : 1,
"appleIntelligenceStatus" : {"state":"available"},
"timeDelta" : 80,
"memoryStatus" : {
"compressorSize" : 375051,
"compressions" : 1046712,
"decompressions" : 83374,
"zoneMapCap" : 34359738368,
"largestZone" : "APFS_4K_OBJS",
"largestZoneSize" : 1980809216,
"pageSize" : 16384,
"uncompressed" : 935297,
"zoneMapSize" : 4793171968,
"memoryPages" : {
"active" : 3587685,
"throttled" : 0,
"fileBacked" : 2313044,
"wired" : 762769,
"anonymous" : 4860174,
"purgeable" : 40951,
"inactive" : 3550632,
"free" : 38327,
"speculative" : 34901
}
},
"largestProcess" : "OrbStack Helper",