Brain-like
LLM Recursive
Database
Neurons fire and decay. Synapses form through Hebbian learning. Memories consolidate during sleep. A database that actually thinks like your brain.
neuron.Fire()
✓ Energy: 0.7 → 1.0
✓ LastFiredAt: now
hebbian.OnNeuronFired(neuronID)
✓ Co-activation detected: 3 neurons
✓ Synapse n-42:n-17 strengthened → 0.84
✓ New synapse formed: n-42:n-89
Built on proven open-source foundations
Architecture
A database that thinks
like your brain
Not another key-value store. QubicDB implements actual neuroscience principles — neurons, synapses, Hebbian learning, and organic memory lifecycle.
Neurons — Living Memory Units
Each memory is a neuron with energy, activation level, and depth. Neurons fire when accessed, decay over time, and consolidate into long-term storage — just like biological memory.
Synapses — Hebbian Learning
"Neurons that fire together, wire together." Synapses form automatically between co-activated neurons, strengthen with use, and weaken without it. Pure associative memory.
Matrix — N-Dimensional Space
Neurons exist in an organic, growing matrix. Related neurons pull closer together over time. The matrix reorganizes itself based on actual usage patterns.
Spreading Activation
Query one neuron and activation spreads through synaptic connections. Recursive traversal finds related memories you didn't explicitly search for — true associative recall.
Organic Lifecycle
Decay reduces unused memory energy. Consolidation moves important memories deeper. Pruning removes dead connections. The brain maintains itself automatically.
Brain States
Each brain transitions through Active → Idle → Sleeping → Dormant states. Consolidation happens during "sleep" — mirroring how biological memory actually works.
Memories that live and breathe
Each memory is a neuron with its own energy level, activation state, and position in N-dimensional space. Neurons fire when accessed, boosting their energy. Over time, unused neurons decay — just like biological memory.
// Neuron — the fundamental memory unit
type Neuron struct {
	ID      NeuronID
	Content string

	// Spatial position in N-dimensional matrix
	Position []float64

	// Activation dynamics
	Energy     float64 // Current activation (0.0 - 1.0)
	BaseEnergy float64 // Resting energy level

	// Depth in memory hierarchy
	// 0 = surface/hot, higher = deeper/consolidated
	Depth int

	// Temporal markers
	CreatedAt   time.Time
	LastFiredAt time.Time
	AccessCount uint64
}

// Fire activates the neuron, boosting its energy
func (n *Neuron) Fire() {
	n.Energy = min(1.0, n.Energy+0.3)
	n.LastFiredAt = time.Now()
	n.AccessCount++
}

// Decay reduces energy based on time elapsed
func (n *Neuron) Decay(rate float64) {
	elapsed := time.Since(n.LastFiredAt).Seconds()
	n.Energy = max(n.BaseEnergy, n.Energy-rate*elapsed/3600)
}

Fire together, wire together
Hebbian learning creates and strengthens connections automatically. When neurons fire within the same time window, a synapse forms between them. The more they co-activate, the stronger the connection becomes.
// HebbianEngine — "Neurons that fire together, wire together"
type HebbianEngine struct {
	matrix             *Matrix
	recentFires        map[NeuronID]time.Time
	coActivationWindow time.Duration // 5 seconds
	learningRate       float64
	forgettingRate     float64
}

// OnNeuronFired — called whenever a neuron fires
func (h *HebbianEngine) OnNeuronFired(neuronID NeuronID) {
	now := time.Now()
	// Find co-activated neurons (fired within window)
	for id, firedAt := range h.recentFires {
		if now.Sub(firedAt) <= h.coActivationWindow {
			// Strengthen or create synapse
			h.strengthenOrCreate(neuronID, id)
		}
	}
	h.recentFires[neuronID] = now
}

// Synapses strengthen with use, weaken without it
func (s *Synapse) Strengthen(delta float64) {
	s.Weight = min(1.0, s.Weight+delta)
	s.CoFireCount++
}

Recursive associative recall
Query one memory and activation spreads through synaptic connections. The search traverses the neural graph recursively, finding related memories you didn't explicitly search for — true associative intelligence.
// Search with spreading activation
func (e *Engine) Search(query string, depth int, limit int) []*Neuron {
	// Start with lexical/vector matches
	seeds := e.findSeeds(query)

	// Spread activation through synaptic connections
	activated := make(map[NeuronID]float64)
	for _, seed := range seeds {
		e.spreadActivation(seed.ID, seed.Energy, depth, activated)
	}

	// Collect and rank by accumulated activation
	results := e.collectByActivation(activated, limit)

	// Fire all accessed neurons (Hebbian reinforcement)
	for _, n := range results {
		n.Fire()
		e.hebbian.OnNeuronFired(n.ID)
	}
	return results
}

// Recursive traversal through neural graph
func (e *Engine) spreadActivation(
	id NeuronID,
	energy float64,
	depth int,
	acc map[NeuronID]float64,
) {
	if depth <= 0 || energy < 0.01 {
		return
	}
	acc[id] += energy

	// Spread to connected neurons via synapses
	for _, neighborID := range e.matrix.Adjacency[id] {
		synapse := e.getSynapse(id, neighborID)
		e.spreadActivation(neighborID, energy*synapse.Weight*0.7, depth-1, acc)
	}
}

The brain maintains itself
Background daemons run continuously: Decay reduces unused neuron energy. Consolidation moves important memories deeper. Pruning removes dead synapses. The brain reorganizes itself — no manual maintenance required.
// Brain states — like biological sleep cycles
type BrainState int

const (
	StateActive   BrainState = iota // Recently used
	StateIdle                       // No recent activity
	StateSleeping                   // Consolidation happening
	StateDormant                    // Ready for eviction
)

// Consolidation happens during "sleep"
func (dm *DaemonManager) consolidateDaemon() {
	for dm.waitInterval(dm.consolidateInterval) {
		// Only consolidate sleeping brains
		sleeping := dm.lifecycle.GetSleepingUsers()
		for _, indexID := range sleeping {
			worker := dm.pool.Get(indexID)
			worker.Submit(&Operation{Type: OpConsolidate})
			// 🌙 Index abc: consolidated 47 neurons
		}
	}
}

// Decay daemon reduces energy of unused memories
// Prune daemon removes dead synapses
// Reorg daemon optimizes spatial locality
// Persist daemon flushes to durable storage

Neural Dashboard
Watch your brain
think in real-time
Monitor neuron energy levels, observe synapse formation, track consolidation during sleep cycles, and watch the matrix reorganize itself.

Not a vector store.
A living brain.
QubicDB is built on eight interlocking mechanisms derived from computational neuroscience. Each layer compounds the others — producing recall accuracy and contextual depth that flat vector databases cannot replicate.
Engram Neurons
Biologically inspired memory units
Each memory is a living neuron — not a static row. Neurons carry energy, activation level, N-dimensional spatial position, lifecycle state, and synaptic connections. They fire when accessed, strengthening associated pathways, and decay when unused — exactly like biological engrams.
Hebbian Learning
Neurons that fire together, wire together
Synaptic connections form and strengthen automatically based on co-activation. When two neurons are accessed together, their synapse weight increases. Without reinforcement, weights decay. This creates genuine associative memory — not keyword indexes.
Spreading Activation
Recursive associative recall
A query activates seed neurons, then activation spreads recursively through the synaptic graph — depth-first, weighted by synapse strength and neuron energy. Related memories surface automatically without explicit joins or graph queries. Configurable depth controls recall breadth.
Fractal Clustering
Self-similar spatial topology
Neurons organize into fractal clusters in N-dimensional space. Three-phase algorithm: pairwise attraction toward midpoints, centroid pull toward cluster center, repulsion of unconnected neurons. Dense cores → loose shells → separated clusters. The topology mirrors actual usage patterns.
Emotional Mapping
VADER sentiment + Ekman emotion model
Every memory is analyzed for emotional valence using VADER (govader). Six Ekman primary emotions are mapped: happiness, sadness, fear, anger, disgust, surprise. At search time, sentiment-matching memories receive a boost multiplier [0.8–1.2] — emotionally congruent recall, like human memory.
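One plausible way to map sentiment congruence onto the [0.8, 1.2] multiplier is a linear blend of the two valences; `sentimentBoost` below is an assumed name and the real VADER-based mapping may differ:

```go
package main

import "fmt"

// sentimentBoost maps the alignment of query and memory valence (each in
// [-1, 1]) onto a multiplier in [0.8, 1.2]: congruent valences boost the
// score, opposed valences dampen it, and a neutral query leaves it at 1.0.
// Hypothetical sketch, not QubicDB's actual formula.
func sentimentBoost(queryValence, memoryValence float64) float64 {
	congruence := queryValence * memoryValence // in [-1, 1]
	return 1.0 + 0.2*congruence               // maps onto [0.8, 1.2]
}

func main() {
	fmt.Printf("%.2f\n", sentimentBoost(1.0, 1.0))  // congruent -> 1.20
	fmt.Printf("%.2f\n", sentimentBoost(1.0, -1.0)) // opposed   -> 0.80
	fmt.Printf("%.2f\n", sentimentBoost(0.0, 0.9))  // neutral   -> 1.00
}
```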
Organic Lifecycle
Active → Idle → Sleeping → Dormant
Background daemons continuously manage neuron health: Decay reduces energy over time (using LastDecayAt, not LastFiredAt), Consolidation moves low-energy neurons to deeper storage, Pruning removes dead synapses, Reorganization rebalances spatial positions. No manual garbage collection.
Hybrid Search Scoring
α·vector + (1-α)·tanh(lexical/10) + sentiment
Search combines three signals: vector cosine similarity via MiniLM-L6-v2 (384-dim, GGUF), lexical string matching normalized with tanh(x/10) for better score separation, and sentiment congruence boost. Alpha (default 0.6) controls vector vs lexical balance. Short queries are expanded and repeated (Springer et al. 2024).
Concurrency Model
Per-index serialized worker goroutines
Each index (user/agent) gets a dedicated BrainWorker goroutine with a serialized operation queue. No cross-index locking. Reads use RWMutex for parallelism; writes serialize through the worker. Background daemons run independently. Deadlock-free by design — repulsion reads under RLock, releases, then writes.
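The per-index serialization can be sketched with a single goroutine draining an operation channel; `BrainWorker`, `Operation`, and the buffer size here are assumptions, not QubicDB's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// Operation is a queued unit of work for one index (illustrative).
type Operation struct {
	Type string
	Run  func()
}

// BrainWorker owns one goroutine per index; because a single consumer
// drains the channel, operations on that index never race each other.
type BrainWorker struct {
	ops chan *Operation
	wg  sync.WaitGroup
}

func NewBrainWorker() *BrainWorker {
	w := &BrainWorker{ops: make(chan *Operation, 64)}
	w.wg.Add(1)
	go func() {
		defer w.wg.Done()
		for op := range w.ops { // executes strictly in submission order
			op.Run()
		}
	}()
	return w
}

// Submit enqueues an operation; the caller does not wait for execution.
func (w *BrainWorker) Submit(op *Operation) { w.ops <- op }

// Close drains the queue and stops the worker goroutine.
func (w *BrainWorker) Close() {
	close(w.ops)
	w.wg.Wait()
}

func main() {
	w := NewBrainWorker()
	count := 0
	for i := 0; i < 3; i++ {
		w.Submit(&Operation{Type: "AddNeuron", Run: func() { count++ }})
	}
	w.Close() // queue fully drained before count is read
	fmt.Println(count) // 3
}
```

Since every index gets its own worker and channel, no lock spans two indexes, which is the property that keeps the design deadlock-free.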
Why Accuracy Matters Here
Design decisions that directly improve recall quality
Spreading activation surfaces semantically related memories that pure vector search misses — associative paths through the synaptic graph.
Decay uses actual elapsed time since last decay tick (LastDecayAt), not last access — preventing stale memories from persisting.
Energy < 0.5 criterion prevents active neurons from being consolidated — only truly dormant memories move to deep storage.
tanh(x/10) normalization gives a 0.031→0.143 spread across the [0,15] range vs the old x/5 formula — better ranking differentiation.
Sentiment-matched memories score up to 20% higher — mirroring how humans recall emotionally congruent information more readily.
Write and search paths both run CleanText before embedding — ensuring consistent tokenization and preventing embedding drift.
Hybrid Scoring Formula
// α=0.6 default · sentiment_boost ∈ [0.8, 1.2] · spreading activation applied post-score
Benchmarks
Real numbers,
real hardware
Apple M3 · darwin/arm64 · go test -benchmem
github.com/qubicDB/benchmarks
AddNeuron
Write + embed + Hebbian update
Search (1K neurons)
Hybrid lexical + vector + spread activation
ParallelSearch
Read-lock concurrent search (~5.5× faster)
BrainWorker AddNeuron
Full worker queue round-trip
BrainWorker Search
Worker queue + engine search
Parallel Async Submit
Queue submit only — near-zero overhead
# Engine microbenchmarks (no model needed)
go test -run '^$' -bench 'BenchmarkMatrixEngine' \
  -benchmem ./pkg/engine

# Worker/concurrency path
go test -run '^$' -bench 'BenchmarkBrainWorker' \
  -benchmem ./pkg/concurrency

# Live vector benchmarks (requires GGUF model)
QUBICDB_VECTOR_MODEL_PATH=./dist/MiniLM-L6-v2.Q8_0.gguf \
  go test -run '^$' -bench . -benchmem ./pkg/e2e
Everything is open.
Self-host anywhere.
REST API, MCP endpoint, persistence layer, lifecycle daemons, admin UI, CLI, and all SDKs — fully open source under MIT license. No cloud lock-in.
# Pull and run
docker pull qubicdb/qubicdb:1.0.0
docker run -d -p 6060:6060 \
  -e QUBICDB_ADMIN_USER=admin \
  -e QUBICDB_ADMIN_PASSWORD=changeme \
  qubicdb/qubicdb:1.0.0

# Admin UI
docker pull qubicdb/qubicdb-ui:1.0.0
docker run -d -p 8080:80 qubicdb/qubicdb-ui:1.0.0

# Or build from source
git clone https://github.com/qubicDB/qubicdb
cd qubicdb && go build ./cmd/qubicdb/
Full REST + MCP + config docs
Machine-readable API contract
Biology, reasoning, memory suites
How to contribute
Give your AI
a real brain
Not just storage — actual neural architecture. Neurons fire and decay, synapses form through learning, memories consolidate during sleep.
Open source · MIT License · Built with neuroscience principles