drpr/ulma
ULMA

The repository drpr/alex is now PRIVATE, and access is restricted to authorized whitelist users only.

ULMA: Universal Large Language Model Memory Architecture (Official)

⚠️ IMPORTANT NOTICE: ONGOING LEGAL ACTION & IP PROTECTION ALERT ⚠️

Due to an ongoing investigation and legal action regarding an organized academic ring that systematically plagiarized the source code, architectural designs, and proprietary technical stack of this repository to generate unauthorized academic preprints, the original source code repository (drpr/alex, the official ULMA core repository) has been permanently set to PRIVATE.

I am the exclusive original author and architect of the entire ULMA system. The complete Git commit history (with cryptographic timestamps spanning from v1.0 to the latest v2.4.0), architectural decision records (ADRs), and engineering iteration logs have been fully archived and secured as irrefutable, timestamped proof of original authorship for the ongoing legal proceedings.

Access to this repository is strictly limited to pre-authorized whitelist users only. Unauthorized access, copying, plagiarism, or commercial use of any content herein is strictly prohibited, and all access is logged for audit and rights protection purposes.

1. The Plagiarized Technical Innovations (For the Record)

For the purpose of IP assertion and ongoing investigations, it is hereby publicly declared that the following proprietary technical architectures, conceptual intersections, and engineering implementations were originally designed, coded, and committed in this repository well before any related unauthorized publications:

- Distributed Graph State Machine over NATS: The highly specific integration of NATS JetStream acting as a distributed Write-Ahead Log (WAL), combined with Two-Phase Commit (2PC) and CRDTs, to maintain strict ACID properties for graph mutations.
- Agentic MESI Cache Coherence: The novel adaptation of the hardware MESI protocol into a multi-agent software architecture, mapping state transitions to resolve multi-agent Code Property Graph (CPG) synchronization conflicts and prevent "Split-Brain" scenarios.
- Hardware-Agnostic Transport (ulma_transport_core): The unique abstraction layer allowing the system to seamlessly degrade from physical Zero-Copy RDMA (using InfiniBand Verbs and atomic CAS instructions) to NATS JetStream without altering business logic.
- Cross-Language AST-to-CPG Synchronization: The pipeline utilizing distributed NATS messaging to synchronize AST parsing (Rust/Python/JS/C) into a unified, performance-optimized Code Property Graph across distributed nodes.
- Graph Interpreter & Translator Pipelines: The specific architectural patterns combining NATS, gRPC, and hybrid retrieval logic for agentic code understanding.

Any recent preprints (e.g., those appearing on arXiv around March 2026) claiming original invention... are currently under formal DMCA takedown proceedings and investigation for academic misconduct.
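To make the "Agentic MESI Cache Coherence" item above concrete, the snoop-side state transitions of the standard hardware MESI protocol could be mapped onto agent-held CPG cache entries roughly as in the following Rust sketch. This is a minimal illustration of the generic MESI snoop rules, not the ULMA implementation; all names (`MesiState`, `BusSignal`, `on_snoop`) are hypothetical.

```rust
/// States a CPG fragment can be in inside one agent's local cache.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum MesiState {
    Modified,  // this agent holds the only, dirty copy of the fragment
    Exclusive, // sole clean copy; writable without a bus transaction
    Shared,    // clean copy that other agents may also hold
    Invalid,   // stale; must be re-fetched before use
}

/// Signals snooped from the shared bus (broadcast channel between agents).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum BusSignal {
    BusRd,      // another agent reads the fragment
    BusRdX,     // another agent requests exclusive (write) access
    Invalidate, // another agent explicitly invalidates the fragment
}

impl MesiState {
    /// Next state when a bus signal for this cache entry is snooped.
    fn on_snoop(self, sig: BusSignal) -> MesiState {
        use BusSignal::*;
        use MesiState::*;
        match (self, sig) {
            // Another agent wants to write: our copy becomes stale.
            (_, BusRdX) | (_, Invalidate) => Invalid,
            // Another agent reads while we hold M/E: downgrade to Shared
            // (in hardware, a Modified line would also flush first).
            (Modified, BusRd) | (Exclusive, BusRd) => Shared,
            // Shared and Invalid are unaffected by a remote read.
            (s, BusRd) => s,
        }
    }
}
```

Preventing a split-brain here amounts to the invariant that at most one agent can be in `Modified`/`Exclusive` for a given fragment, which the `BusRdX`-to-`Invalid` transition enforces.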

2. What is ULMA?

ULMA (Universal Large Language Model Memory Architecture) is a production-grade, strongly consistent, and cloud-native distributed memory foundation designed specifically for Long-Horizon Agentic Tasks.

Unlike conventional vector database wrappers, ULMA is engineered from the ground up using Rust, integrating the aforementioned hardware-level communication (RDMA), distributed consensus (2PC/CRDT), Code Property Graphs (CPG), and multi-level cache coherence protocols (MESI) into a unified cognitive architecture.

3. Core Architectural Pillars (The Genuine, Original Tech Stack)

The plagiarized materials merely copied surface-level concepts. They cannot replicate the underlying design rationale, engineering implementation, or iterative evolution of the genuine ULMA system (spanning from v1.0 to the current v2.4.0), which is built upon the following proprietary engineering pillars:

3.1 The Foundations: v1.0 Core Principles & the ulma-plugin Ecosystem

The conceptual foundation of ULMA was established in v1.0 (and detailed in the initial February 2026 unpublished manuscript). The system is not just a backend; it is an end-to-end cognitive loop.

The L1-L4 Memory Hierarchy (v1.0 Origin):

- L1 (Working Context): Session-bound transient memory utilizing adaptive compression algorithms with dynamic importance weights.
- L2 (Task Anchors & Shared Pool): A native state machine mapping task updates into Decision and Outcome nodes.
- L3 (Warm Index): Decoupled vector storage supporting scope-aware (local/dependency/global) hybrid retrieval.
- L4 (Cold Archive): Task-lineage persistence supporting cognitive-integrity verification.

Edge-to-Cloud Cognitive Loop (ulma-plugin): The architecture includes a tightly integrated IDE extension (ulma-plugin). It serves as the "Edge Sensors" for the LLM, continuously intercepting user keystrokes, AST modifications, and terminal execution contexts, streaming them in real time to ulma-core to construct the dynamic Code Property Graph (CPG).

3.2 Distributed Graph State Machine & Consistency (v2.0+)

- 2PC + WAL: Employs Two-Phase Commit and Write-Ahead Logging to ensure strict ACID properties for graph mutations across distributed nodes.
- 128-bit Hybrid Logical Clocks (HLC): To resolve CRDT key collisions during high-concurrency agentic tool-calling, the system implements a bespoke 128-bit HLC (64-bit physical + 32-bit logical + 32-bit node ID).
- Agentic MESI Protocol: Real-time BusRdX and Invalidate signal broadcasting via WebSockets to maintain cache coherence among collaborative multi-agent swarms.

3.3 Hardware-Agnostic Transport (ulma_transport_core)

- Zero-Copy RDMA: Native InfiniBand Verbs implementation utilizing atomic CAS instructions for physical shared-memory-pool manipulation.
- Seamless Cloud-Native Fallback: A highly abstracted Rust trait design (MemoryPool & RemoteTransport) allows the system to seamlessly degrade from physical RDMA to NATS JetStream for standard Kubernetes GPU cloud deployments without altering business logic.
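The 128-bit HLC layout described above (64-bit physical + 32-bit logical + 32-bit node ID) can be sketched in Rust using the standard hybrid-logical-clock tick/receive rules. This is an illustrative sketch under that bit-layout assumption, not the actual ULMA implementation; all type, field, and method names are hypothetical.

```rust
/// 128-bit hybrid logical clock: 64-bit physical time dominates ordering,
/// the 32-bit logical counter breaks ties within one physical instant, and
/// the 32-bit node ID makes timestamps globally unique (no CRDT key collisions).
/// Derived Ord compares fields in declaration order, which matches that intent.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct Hlc128 {
    physical: u64, // wall-clock milliseconds
    logical: u32,  // tie-breaker for events at the same physical time
    node_id: u32,  // unique per node
}

impl Hlc128 {
    pub fn new(node_id: u32) -> Self {
        Hlc128 { physical: 0, logical: 0, node_id }
    }

    /// Advance the clock for a local send/event, given the current wall time.
    pub fn tick(&mut self, wall_ms: u64) -> Hlc128 {
        if wall_ms > self.physical {
            self.physical = wall_ms;
            self.logical = 0;
        } else {
            self.logical += 1; // wall clock did not advance: bump the counter
        }
        *self
    }

    /// Merge a timestamp received from a remote node (HLC receive rule).
    pub fn observe(&mut self, remote: Hlc128, wall_ms: u64) -> Hlc128 {
        let max_physical = wall_ms.max(self.physical).max(remote.physical);
        self.logical = if max_physical == self.physical && max_physical == remote.physical {
            self.logical.max(remote.logical) + 1
        } else if max_physical == self.physical {
            self.logical + 1
        } else if max_physical == remote.physical {
            remote.logical + 1
        } else {
            0 // wall clock alone is ahead: counter resets
        };
        self.physical = max_physical;
        *self
    }

    /// Pack into a single u128, e.g. for use as a sortable CRDT map key.
    pub fn as_u128(&self) -> u128 {
        ((self.physical as u128) << 64) | ((self.logical as u128) << 32) | self.node_id as u128
    }
}
```

Because the node ID occupies the low 32 bits, two nodes that tick at the same physical millisecond with the same logical counter still produce distinct packed keys, which is the collision-resolution property the bullet above claims.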
3.4 Unified Cognitive Gateway (UCG)

- MCP Protocol Native: Full support for the Model Context Protocol, enabling out-of-the-box integration with platforms like OpenClaw and Manus.
- Consistent Hash Routing: Stateful session routing ensuring maximum L1 cache hit rates across stateless ulma-node workers.
- Enterprise Observability: Fully integrated with Prometheus metrics and the ELK stack (structured JSON tracing via request_id and tenant_id).

4. Evolution & Authenticity Proof

The genuine development of ULMA has undergone rigorous, highly complex iterations, 100% driven by the original author. Plagiarists lacking the original cryptographic Git history cannot produce the evolution logs, architectural decision records (ADRs), or specific bug-fix rationales associated with these versions:

- v1.0 (Genesis): Establishment of the L1-L4 Memory Hierarchy, the initial ulma-core monolithic design, and the ulma-plugin edge-sync architecture.
- v2.0: Introduction of the 2PC/WAL Graph State Machine and RDMA alignment.
- v2.1: Cluster benchmarking, consistency probes, and resilient HF model caching.
- v2.2: 128-bit HLC expansion and transport decoupling (RDMA/TCP/Mock).
- v2.3: Unified Cognitive Gateway, multi-tenant governance, and MESI WebSocket coherence.
- v2.4.0: Full cloud-native distributed refactoring (NATS, Qdrant, Redis, K8s).

5. Access Request & Contact

5.1 Who Can Request Access?

Only academic researchers, prospective supervisors, or authorized collaborators with a legitimate, verifiable purpose (e.g., peer review, joint research, technical verification) may request access. All requests are subject to strict review, and unauthorized requests will be rejected immediately.

5.2 How to Request Access

To request source-code access, report academic misconduct regarding this architecture, or discuss potential research collaboration, please contact the author directly via the email associated with this GitHub profile, and provide:

- Your full name, affiliation, and GitHub username.
- A clear, detailed explanation of your access purpose.
- A written commitment not to reproduce, share, or use any content for academic publication or commercial purposes without the author's prior written consent.

5.3 Intellectual Property Statement

All content in the drpr/alex (ULMA core) and related plugin repositories, including source code, technical designs, algorithms, research results, and engineering implementations, is the exclusive intellectual property of the original author, protected by copyright law, the Computer Software Protection Regulations, and international intellectual property treaties. All rights reserved.

Any unauthorized access, copying, plagiarism, academic misrepresentation, or commercial use of this content is strictly prohibited. The author reserves the right to pursue all legal remedies, including reporting academic misconduct to academic institutions, preprint administrators, and relevant regulatory authorities, and filing civil claims for damages against infringers.

Original Author & Chief Architect: [Yixiang Gao] (Alex)
Professional Background: Senior AI System and Cloud Architect (15 years of industry experience across embedded systems, Arm64 chip development, public cloud architecture design, and agent solution design)
GitHub Profile: @drpr (Official Account for ULMA Core Development)
Official Contact: [alltheright121@gmail.com]
Initial Public Release Date: January 28, 2026
Repository Privacy Status: Permanently Private (Whitelist Access Only for Verified Academic Collaborators)
