Training in progress — PPL 13.64 on Polish Wikipedia

AI That Learns
After Deployment

PGAN is a new neural architecture inspired by the brain's hippocampus. It learns new facts in microseconds — no backpropagation, no GPU, on a $3 chip.

192/192 human brain tests pass
7.2x Hopfield memory density
3 published papers with DOI
$3 hardware cost (ESP32)

Three Streams,
One Architecture

Each component is mathematically proven and biologically grounded. Not a metaphor — the same equations the brain uses.

Slow Stream

S1 Memory

Dense Associative Memory on the unit circle S1. Stores patterns as explicit phase vectors — addressable, readable, writable.

alpha* = 1.0 proven capacity
7.2x denser than Hopfield
Hebbian online learning
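To make "explicit phase vectors" concrete, here is a toy phase-memory readout: similarity is a summed cosine of phase differences, and retrieval is a softmax-weighted circular mean of the stored patterns. The function name, `beta`, and the sizes are illustrative assumptions, not PGAN's actual code:

```python
import numpy as np

def s1_retrieve(query, patterns, beta=3.0):
    """Toy phase-memory readout: softmax over summed phase overlaps,
    then a circular weighted mean of the stored phase vectors."""
    overlaps = np.cos(patterns - query).sum(axis=1)   # similarity per pattern
    w = np.exp(beta * (overlaps - overlaps.max()))
    w /= w.sum()                                      # softmax weights
    z = (w[:, None] * np.exp(1j * patterns)).sum(axis=0)
    return np.angle(z)                                # retrieved phase vector

rng = np.random.default_rng(1)
X = rng.uniform(0, 2 * np.pi, size=(2, 16))  # two stored phase patterns
q = X[0] + 0.2 * rng.normal(size=16)         # noisy cue for pattern 0
out = s1_retrieve(q, X)
err = np.abs(np.angle(np.exp(1j * (out - X[0]))))
print(err.max())  # small: pattern 0 recovered despite the noise
```

Because the stored phases are plain arrays, they are directly addressable, readable, and writable, which is what distinguishes this from implicit weight-matrix storage.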
Fast Stream

S2 Attention

Dense Associative Memory on the sphere S2. Mathematically equivalent to Transformer attention — but derived from theory.

= softmax Transformer attention
alpha >= 1.56 proven capacity
1-step closed-form retrieval
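The claimed equivalence follows the modern-Hopfield result: one retrieval step from an exponential-energy dense memory is exactly a softmax-weighted readout of stored patterns. A minimal sketch; the function name and inverse temperature `beta` are illustrative assumptions, not PGAN's code:

```python
import numpy as np

def dam_retrieve(query, patterns, beta=8.0):
    """One closed-form retrieval step from a dense associative memory.

    Identical in form to softmax attention: weights = softmax(beta * X q),
    output = X^T weights, with the stored patterns X acting as both keys
    and values."""
    scores = beta * patterns @ query          # similarity to each pattern
    w = np.exp(scores - scores.max())
    w /= w.sum()                              # softmax attention weights
    return patterns.T @ w                     # weighted readout

# Illustrative example: 3 unit-norm patterns, query near pattern 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 8))
X /= np.linalg.norm(X, axis=1, keepdims=True)
q = X[0] + 0.1 * rng.normal(size=8)
out = dam_retrieve(q, X)
out /= np.linalg.norm(out)
print(round(float(out @ X[0]), 3))  # high: pattern 0 dominates the readout
```

Retrieval converges in a single step because the softmax weights concentrate exponentially on the best-matching pattern.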
Gating

CNOT Gate

Injection-locking dynamics validated on hippocampal theta-gamma coupling. Not a metaphor — identical equations.

192/192 human brain tests
8 patients hippocampal data
Turing complete (proven)
CNOT Phase Gate Dynamics
dphi/dt = omega + K · cos(phi_slow) · sin(phi_fast - phi_out)
= Hippocampal theta-gamma coupling equation
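One way to see the conditional behavior is to integrate the equation numerically. A minimal Euler sketch with illustrative constants (omega, K, and the step size are assumptions, not PGAN's values): with the slow phase at 0 the output locks onto the fast phase; with the slow phase at pi the coupling sign flips and the output locks pi away, the conditional flip behind the CNOT name.

```python
import numpy as np

def integrate_gate(phi_slow, phi_fast, omega=0.0, K=5.0, dt=0.01, steps=2000):
    """Euler-integrate dphi/dt = omega + K * cos(phi_slow) * sin(phi_fast - phi)."""
    phi = 0.0
    for _ in range(steps):
        phi += dt * (omega + K * np.cos(phi_slow) * np.sin(phi_fast - phi))
    return phi % (2 * np.pi)

# Control "off" (slow phase 0): output locks onto the fast phase.
print(round(integrate_gate(phi_slow=0.0, phi_fast=1.0), 2))    # ~1.0
# Control "on" (slow phase pi): coupling flips, output locks pi away.
print(round(integrate_gate(phi_slow=np.pi, phi_fast=1.0), 2))  # ~1.0 + pi
```

The cos(phi_slow) factor acts as the control line: it only rescales and flips the coupling, it never appears in the output phase itself.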
What No Transformer Can Do

Learn New Facts
in Microseconds

Every LLM today is frozen after training. GPT, Claude, Gemini — they forget everything when you close the chat. PGAN remembers. One Hebbian update, no GPU needed.

No backpropagation
Single multiplication: xi += eta * sin(phi - xi)
No catastrophic forgetting
Different facts live in different phases — they don't collide
Persistent memory
Save/load S1 patterns — model remembers between sessions
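The update rule in the bullets above can be checked in a few lines: `xi += eta * sin(phi - xi)` pulls the stored phase toward the observed one, so repeated cues drive the overlap toward 1. The variable names mirror the bullet; eta and the step count are illustrative:

```python
import numpy as np

eta = 0.3
xi, phi = 0.0, np.pi / 2                  # stored phase far from the new fact
print(round(float(np.cos(phi - xi)), 2))  # initial overlap: 0.0
for _ in range(10):
    xi += eta * np.sin(phi - xi)          # one Hebbian step per cue
print(round(float(np.cos(phi - xi)), 2))  # overlap after 10 steps: ~1.0
```

No gradients are involved: each step is a local update on a single stored phase, which is why it runs in microseconds on a microcontroller.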
live_learning.py
# Day: learn a new fact (microseconds, no GPU)
learner.associate(model, tokenizer,
    "Krakow", "Wawel")

# Overlap: 0.17 -> 0.75 (4.4x increase)

# Night: consolidate to permanent weights
consolidator.consolidate(
    model, tokenizer, learner)

# Morning: model knows Krakow = Wawel
# No retraining. No GPU. No cloud.

# Privacy: remove S1 = remove all
# personal knowledge (4MB file)
learner.export_public(model, "public.pt")

Proven Results

From 3,422 perplexity to 13.64. Eight versions. $350 total compute cost.

Capability | Transformer | PGAN
Formal memory theory | None | alpha* = 1.0 (proven)
Post-deployment learning | Impossible | Native (Hebbian S1)
Brain validation | None | 192/192 tests (8 patients)
Hardware path | GPU only | Analog oscillators
Catastrophic forgetting | Yes | No (phase isolation)
Edge deployment | Limited | $3 ESP32 with live learning

Published Research

Three peer-reviewable papers deposited on Zenodo, each with a DOI

The Vision

💻 Today: software PGAN on GPU & Jetson
🗣️ Near: instruction-tuned chatbot on edge
🧠 Medium: live-learning AI that remembers
Far: analog oscillator chip, AI on $1

Get in Touch

Interested in PGAN? Looking for collaboration, funding, or want to learn more?

Founder & Researcher
Krzysztof Gwozdz
krzysztof.gwozdz@myreson.ai
General Inquiries
MyReson AI
biuro@myreson.ai