© 1998–2026 Miroslav Šotek. All rights reserved. Contact: www.anulum.li | protoscience@anulum.li ORCID: https://orcid.org/0009-0009-3560-0851 License: GNU AFFERO GENERAL PUBLIC LICENSE v3 Commercial Licensing: Available
Active Development — SC-NeuroCore is under intensive development. The core engine, all 173 neuron models, the full simulation pipeline (Population → Projection → Network → SpikeMonitor → Analysis), and 19 industrialized modules (safety certification, ASIC flow, evolutionary substrate, hypervisor, chiplet generator, and more) are fully functional, tested (10 315 passing tests — 8 598 Python + 1 717 Rust), and production-deployable. We are currently completing comprehensive per-model documentation and end-to-end pipeline benchmarking across the entire model library. APIs may evolve as this work progresses.
Version: 3.14.0 Status: 173 Neuron Models (164 Bio + 9 AI) | 99.49% MNIST (ConvSNN) | 8 598 Python tests passing + 1 717 Rust tests | 100% Core Coverage | 173 Rust Neuron Models (PyO3) | 160-Model NetworkRunner | 132-Function Analysis Toolkit | wgpu GPU Backend | 19 Industrial Modules | 6 Rust Crates | 29 Notebooks
SC-NeuroCore is an open-source stochastic computing SNN framework with FPGA synthesis. 173 neuron models (164 biophysical + 9 AI-optimised) spanning 82 years of computational neuroscience (McCulloch-Pitts 1943 through ArcaneNeuron 2026) run inside a deterministic stochastic computing engine that provides:

- Bit-true Verilog RTL co-simulation and FPGA synthesis via an IR compiler (SystemVerilog + MLIR/CIRCT backends)
- An equation-to-Verilog compiler that turns arbitrary ODE strings into synthesizable Q8.8 fixed-point RTL
- Formal verification (7 SymbiYosys modules, 72 properties)
- A Rust SIMD engine 39–202× faster than Brian2 (27.7 billion synaptic events/s at 100K neurons, 173 Rust neuron models with PyO3 bindings, 160-model NetworkRunner with Rayon-parallel populations)
- A wgpu compute-shader GPU backend (21× on GTX 1060, cross-platform Vulkan/Metal/DX12), CuPy GPU acceleration, and JAX JIT training
- MPI distributed simulation (billion-neuron scale via mpi4py)
- An identity continuity substrate (persistent spiking networks with checkpointing and L16 Director control)
- A 132-function spike train analysis toolkit (24 modules, including POYO+/POSSM/NDT3/CEBRA foundation-model decoders) and 14 visualisation plots
- 13 advanced plasticity rules (pair/triplet/voltage STDP, BCM, BPTT, TBPTT, EWC, e-prop, R-STDP, MAML, STP, structural plasticity)
- 7 biological circuit primitives (gap junctions, tripartite synapse, Rall dendrite, cortical column, lateral inhibition, WTA, gamma oscillation)
- 10 model zoo configurations with 3 pre-trained weight sets and 9 hardware chip emulators
- Quantum hybrid computing (Qiskit + PennyLane + SC-to-quantum compiler)
- Surrogate gradient training reaching 99.49% MNIST accuracy
- A NIR bridge — FPGA backend for the Neuromorphic Intermediate Representation standard (18/18 primitives, recurrent edges, multi-port subgraphs; verified interop with SpikingJelly, snnTorch, and Norse)
- A SpikeInterface adapter for experimental data import
- ANN-to-SNN conversion (trained PyTorch models to rate-coded SNNs in one call)
- Trainable per-synapse delays (DelayLinear with differentiable interpolation)
- One-command FPGA synthesis (`sc-neurocore deploy model.nir --target ice40` auto-runs Yosys+nextpnr+icepack if installed; generates project files for Vivado targets)
- Per-layer adaptive bitstream length for mixed-precision SC networks
- Event-driven FPGA RTL (AER encoder, event neuron, spike router — 15–39× fewer register toggles than clock-driven at 0.01–10% activity, measured)
- A 6-codec neural data compression library (ISI, predictive, delta, streaming, AER) with a unified API and auto-recommendation engine — targeting BCI implants (Neuralink-scale 1024+ channels), neural probes (Neuropixels), neuromorphic inter-chip routing, and real-time closed-loop telemetry
8 598 Python tests passing and 1 717 Rust tests (across 6 crates).
13 CI workflows guard every push. conda-forge recipe ready.
19 industrialized modules: IEC 61508 safety certification, multi-PDK ASIC flow,
fault injection, UVM testbench generation, multi-tenant hypervisor, digital twin sync,
spintronic/memristor/chiplet mapping, evolutionary substrate with FPGA deployment,
meta-plasticity, bioware interface, federated learning, BCI studio,
explainability, neuro-symbolic predictive coding, stochastic doctor, and model zoo.
| Feature | SC-NeuroCore | snnTorch | Norse | Lava | Brian2 |
|---|---|---|---|---|---|
| Stochastic computing (bitstream) | Yes | — | — | — | — |
| Bit-true RTL co-simulation | Yes | — | — | — | — |
| Verilog / FPGA synthesis | Yes | — | — | Loihi only | — |
| IR compiler → SystemVerilog | Yes | — | — | — | — |
| Rust engine (39–202× vs Brian2) | Yes | — | — | — | — |
| Surrogate gradient training | 6 surrogates, 12 cells | Yes | Yes | Yes | — |
| PyTorch nn.Module SNN | Yes (+ SC weight export) | Yes | Yes | — | — |
| GPU acceleration | wgpu + PyTorch + CuPy | PyTorch | PyTorch | — | — |
| Neuron model library | 173 | 11 | 6 | 3 | ~5 builtin |
| Rust neuron models (PyO3) | 173 | — | — | — | — |
| NetworkRunner (fused loop) | 160 models | — | — | — | — |
| Network simulation engine | 3 backends | PyTorch | PyTorch | Lava | C++ codegen |
| MPI distributed simulation | Yes | — | — | — | — |
| Pre-trained model zoo | 10 configs, 3 weights | — | — | — | — |
| Spike train analysis | 132 functions | — | — | — | — |
| Visualization plots | 14 | — | — | — | — |
| Advanced plasticity rules | 13 | — | — | — | — |
| Biological circuits | 7 | — | — | — | — |
| SC→quantum compiler | Yes | — | — | — | — |
| Predictive coding (SC) | Yes | — | — | — | — |
| Fault tolerance benchmark | Yes | — | — | — | — |
| Phi* (IIT) estimation | Yes | — | — | — | — |
| SpikeInterface adapter | Yes | — | — | — | — |
| NIR primitives | 18/18 | — | 12 | 5 | — |
| MNIST accuracy (SNN) | 99.49% | ~95% | ~93% | — | — |
| Plasticity (STDP, R-STDP) | Yes | — | Yes | Yes | Yes |
| Quantum hybrid (Qiskit/PennyLane) | Yes | — | — | — | — |
| MLIR emitter (CIRCT) | Yes | — | — | — | — |
| Hyperdimensional computing | Yes | — | — | — | — |
| Formal verification (SymbiYosys) | 7 modules, 72 props | — | — | — | — |
| JAX JIT training | Yes | — | — | — | — |
| CuPy sparse GPU | Yes | — | — | — | — |
| AI-optimised neurons | 9 (ArcaneNeuron + 8) | — | — | — | — |
| Identity substrate | Yes | — | — | — | — |
| ANN-to-SNN conversion | Yes | — | — | — | — |
| Trainable per-synapse delays | Yes | — | — | — | — |
| One-command FPGA deploy CLI | Yes | — | — | — | — |
| Per-layer adaptive bitstream | Yes | — | — | — | — |
| Event-driven FPGA RTL (AER) | Yes | — | — | — | — |
| Raw waveform compression (24x) | Yes | — | — | — | — |
| Spike codec library (6 codecs) | Yes | — | — | — | — |
| Visual SNN Design Studio | Yes (web IDE) | Basic GUI | Jupyter | — | — |
| IEC 61508 safety certification | Yes | — | — | — | — |
| Multi-PDK ASIC flow | Yes | — | — | — | — |
| Evolutionary substrate (FPGA) | Yes | — | — | — | — |
| Multi-tenant hypervisor | Yes | — | — | — | — |
| Chiplet/memristor/spintronic | Yes | — | — | — | — |
| BCI closed-loop control | Yes | — | — | — | — |
| Federated SC learning | Yes | — | — | — | — |
| conda-forge recipe | Ready | Yes | — | — | Yes |
| PyPI package | Yes | Yes | Yes | Yes | Yes |
| License | AGPL-3.0 | MIT | LGPL-3.0 | BSD-3 | CeCILL-2.1 |
- 132-function spike train analysis toolkit — CV, Fano factor, cross-correlation, Victor-Purpura distance, SPIKE-sync, Granger causality, GPFA, SPADE pattern detection, plus 4 foundation-model decoders (POYO+, POSSM, NDT3, CEBRA). Matches Elephant + PySpike combined. Pure NumPy + Rust acceleration.
- Neural data compression library — Two layers: WaveformCodec compresses raw 10-bit electrode waveforms end-to-end (spike detection + template matching + LFP compression, 24x on 1024-channel Neuralink-scale data, fits a Bluetooth uplink). Spike raster codecs (ISI+Huffman, Predictive with 4 learnable predictors, Delta, Streaming, AER) compress binary spike trains 50-750x. Unified API: `get_codec(name)`, `recommend_codec()`. Learnable world-model predictor (99.6% accuracy). Rust backend (780x speedup). Bit-true LFSR matches Verilog RTL.
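The raster codecs above exploit the fact that inter-spike intervals are small, compressible integers. A minimal, library-independent sketch of the delta/ISI idea (the real `get_codec` API and the Huffman entropy stage are not reproduced here):

```python
import numpy as np

def isi_encode(spike_times: np.ndarray) -> np.ndarray:
    # Store the first spike time, then successive differences (ISIs).
    return np.diff(spike_times, prepend=0)

def isi_decode(deltas: np.ndarray) -> np.ndarray:
    # Cumulative sum inverts the delta transform exactly.
    return np.cumsum(deltas)

times = np.array([12, 15, 40, 41, 90, 1000])
deltas = isi_encode(times)  # [12, 3, 25, 1, 49, 910]
assert np.array_equal(isi_decode(deltas), times)
# The deltas span a much smaller range than the raw times, so an
# entropy coder (e.g. Huffman) spends far fewer bits per event.
```

The transform is lossless; the compression ratio comes entirely from the entropy coder that follows it.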
SC-NeuroCore's niche: deterministic stochastic computing with FPGA co-design — Python simulation matches synthesisable RTL bit-for-bit (deterministic LFSR seeds, Q8.8 fixed-point, cycle-exact co-simulation).
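Bit-for-bit reproducibility rests on two ingredients: fixed-seed LFSR bitstreams and Q8.8 fixed-point arithmetic. A minimal sketch of the Q8.8 quantisation step — helper names here are illustrative, not the package API:

```python
def to_q88(x: float) -> int:
    """Quantise a float to signed Q8.8 (16-bit two's complement raw word)."""
    return int(round(x * 256)) & 0xFFFF   # scale by 2^8, mask to 16 bits

def from_q88(raw: int) -> float:
    """Interpret a 16-bit raw word as a signed Q8.8 value."""
    if raw & 0x8000:                      # sign bit set -> negative value
        raw -= 0x10000
    return raw / 256

assert from_q88(to_q88(1.5)) == 1.5                    # exactly representable
assert from_q88(to_q88(-0.25)) == -0.25
assert abs(from_q88(to_q88(0.3)) - 0.3) <= 1 / 512     # worst-case rounding error
```

Because every intermediate value is masked to 16 bits, the Python golden model and the RTL see identical wrap-around behaviour.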
Measured on i5-11600K @ 3.90 GHz, 300 ms simulation, 10% connection probability.
Stored artifact: benchmarks/results/rust_scaling_benchmark.json
| Scale | SC Rust | Brian2 | Speedup | SC synaptic events/s |
|---|---|---|---|---|
| 1K neurons | 0.029 s | 2.689 s | 93× | 110 M/s |
| 5K neurons | 0.285 s | 4.681 s | 16× | 288 M/s |
| 10K neurons | 0.172 s | 6.754 s | 39× | 1.86 B/s |
| 50K neurons | 0.582 s | 31.03 s | 53× | 13.9 B/s |
| 100K neurons | 1.153 s | 232.3 s | 202× | 27.7 B/s |
SIMD primitives: 190 Gbit/s popcount (AVX-512 dispatch, Criterion benchmark:
benchmarks/results/criterion_bitstream_2026-03-26.json)
Population-Projection-Network architecture with 3 backends:
| Backend | Scope | Performance |
|---|---|---|
| Python | Any of 173 neuron models | NumPy vectorized |
| Rust NetworkRunner | 160 models in fused Rayon-parallel loop | 100K+ neurons, near-linear scaling |
| MPI | Billion-neuron distributed simulation via mpi4py | Multi-node HPC clusters |
6 topology generators (random, small-world, scale-free, ring, grid, all-to-all), 14 visualization plots (raster, voltage, ISI, cross-correlogram, PSD, firing rate, phase portrait, population activity, instantaneous rate, spike train comparison, network graph, weight matrix, connectivity, spatial), and 13 advanced plasticity rules (pair/triplet/voltage STDP, BCM, BPTT, TBPTT, EWC, e-prop, R-STDP, MAML, homeostatic, STP, structural).
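Of the rules above, pair-based STDP is the simplest to state: a presynaptic spike that precedes a postsynaptic one potentiates the synapse, and the reverse order depresses it, with exponentially decaying windows. An illustrative update (the constants are arbitrary, not the library defaults):

```python
import math

def stdp_dw(dt_ms: float, a_plus=0.01, a_minus=0.012, tau_ms=20.0) -> float:
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt_ms > 0:                                   # pre before post: potentiate
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)      # post before pre: depress

assert stdp_dw(5.0) > 0
assert stdp_dw(-5.0) < 0
assert abs(stdp_dw(100.0)) < 1e-4   # window decays to ~0 beyond ~5 tau
```

Triplet and voltage-dependent variants extend this kernel with additional spike traces and membrane terms.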
10 pre-built network configurations (Brunel balanced, cortical column, CPG, decision-making, working memory, visual cortex V1, auditory processing, MNIST classifier, SHD speech classifier, DVS gesture classifier) with 3 pre-trained weight sets (MNIST 784-128-10, SHD 700-256-20, DVS 256-256-11).
Every model has a uniform `step(current) -> spike` API, a `reset()`, and a
cited reference. One file per model in `src/sc_neurocore/neurons/models/`.
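The uniform contract makes models interchangeable in any driver loop. A toy LIF below illustrates the protocol in a self-contained form — this is a sketch of the interface, not the package's implementation:

```python
class ToyLIF:
    """Illustrative model obeying the uniform step()/reset() contract."""

    def __init__(self, v_threshold=1.0, tau_mem=20.0):
        self.v_threshold, self.tau_mem = v_threshold, tau_mem
        self.reset()

    def reset(self):
        self.v = 0.0

    def step(self, current: float) -> int:
        # Euler-integrated leaky membrane, dt = 1 ms.
        self.v += (-self.v + current) / self.tau_mem
        if self.v >= self.v_threshold:
            self.v = 0.0   # fire and reset
            return 1
        return 0

neuron = ToyLIF(v_threshold=0.5)
spikes = sum(neuron.step(0.8) for _ in range(500))  # suprathreshold drive fires
```

Any of the 173 models slots into the same loop; only the constructor parameters differ.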
| Category | Count | Examples |
|---|---|---|
| Integrate-and-fire variants | 18 | AdEx, GLIF5, ExpIF, QIF, SFA, MAT, COBA-LIF, Parametric LIF, Fractional LIF |
| Simple spiking (2D+) | 20 | FitzHugh-Nagumo, Morris-Lecar, Hindmarsh-Rose, Resonate-and-Fire, Chay |
| Biophysical (conductance-based) | 20 | Hodgkin-Huxley, Connor-Stevens, Traub-Miles, Mainen-Sejnowski, Pospischil |
| Stochastic / population / neural mass | 13 | Poisson, GLM, Jansen-Rit, Wong-Wang, Wilson-Cowan, Ermentrout-Kopell |
| Rate / plasticity / other | 12 | McCulloch-Pitts (1943), Sigmoid Rate, Astrocyte, Amari, GatedLIF (2022) |
| Hardware chip emulators | 9 | Loihi CUBA, Loihi 2, TrueNorth, BrainScaleS AdEx, SpiNNaker, Akida, DPI |
| Multi-compartment | 7 | Pinsky-Rinzel, Hay L5 Pyramidal, Rall Cable, Booth-Rinzel, Dendrify |
| Map-based (discrete-time) | 6 | Chialvo, Rulkov, Ibarz-Tanaka, Cazelles, Courbage-Nekorkin, Medvedev |
| Core (stochastic computing) | 5 | StochasticLIF, FixedPointLIF, HomeostaticLIF, Dendritic, SC-Izhikevich |
| Training cells (PyTorch) | 4 | LIF, ALIF, RecurrentLIF, EProp-ALIF |
| AI-optimized (novel) | 9 | ArcaneNeuron, MultiTimescale, AttentionGated, PredictiveCoding, SelfReferential, CompositionalBinding, DifferentiableSurrogate, ContinuousAttractor, MetaPlastic |
The flagship AI-optimized model. Five coupled subsystems in a single ODE: fast compartment (tau=5ms), working memory (tau=200ms), deep context (tau=10s), learned attention gate, and a forward self-model (predictor). The deep compartment accumulates identity: it changes only on genuine novelty (prediction errors), not routine input. Confidence modulates threshold and meta-learning rate. No equivalent in any other toolkit.
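The timescale separation can be sketched with three coupled leaky integrators where the slow "identity" compartment updates only when the self-model is surprised. This is a conceptual illustration of the gating idea, not the actual ArcaneNeuron equations:

```python
class MultiTimescaleSketch:
    """Conceptual sketch: fast / memory / deep compartments with
    novelty-gated deep updates (NOT the real ArcaneNeuron ODE)."""

    def __init__(self):
        self.fast = self.memory = self.deep = 0.0
        self.prediction = 0.0

    def step(self, x: float, dt: float = 1.0) -> float:
        error = abs(x - self.prediction)                 # forward-model surprise
        self.fast += dt / 5.0 * (x - self.fast)          # tau = 5 ms
        self.memory += dt / 200.0 * (self.fast - self.memory)  # tau = 200 ms
        # Deep compartment moves only on genuine novelty (large error).
        gate = 1.0 if error > 0.5 else 0.0
        self.deep += gate * dt / 10000.0 * (self.memory - self.deep)  # tau = 10 s
        self.prediction = self.fast                      # trivial self-model
        return error

n = MultiTimescaleSketch()
for _ in range(100):
    n.step(1.0)              # constant input quickly becomes predictable
deep_before = n.deep
for _ in range(100):
    n.step(1.0)
assert n.deep == deep_before  # no novelty -> identity (deep state) frozen
```

Once the input is predictable, the gate closes and the deep state stops drifting — the mechanism behind "changes only on genuine novelty".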
Persistent spiking network for identity continuity (sc_neurocore.identity).
| Module | Class | Purpose |
|---|---|---|
| `substrate.py` | `IdentitySubstrate` | 3-population network (HH cortical + WB inhibitory + HR memory) with STDP and small-world connectivity |
| `encoder.py` | `TraceEncoder` | LSH-based text-to-spike-pattern encoding |
| `decoder.py` | `StateDecoder` | PCA + attractor extraction + priming context generation |
| `checkpoint.py` | `Checkpoint` | Lazarus protocol: save/restore/merge complete network state (.npz) |
| `director.py` | `DirectorController` | L16 cybernetic closure: monitor, diagnose, correct network dynamics |
```
pip install sc-neurocore
```

```python
from sc_neurocore import StochasticLIFNeuron

neuron = StochasticLIFNeuron(v_threshold=1.0, tau_mem=20.0, noise_std=0.0)
spikes = sum(neuron.step(0.8) for _ in range(500))
print(f"{spikes} spikes in 500 steps")
```

Optional extras:

```
pip install sc-neurocore[full]     # all research modules
pip install sc-neurocore[gpu]      # CuPy GPU acceleration
pip install sc-neurocore[nir]      # NIR interop (Norse, snnTorch, Lava)
pip install sc-neurocore[studio]   # Visual SNN Design Studio (web IDE)
```

The optional Rust engine provides SIMD-accelerated simulation, 173 neuron models via PyO3, and fused E-I network simulation. Pre-built wheels are available for Linux, Windows, and macOS (Python 3.10–3.13):

```
pip install sc-neurocore-engine
```

When installed, SC-NeuroCore automatically uses the Rust engine for:
- NetworkRunner: 160-model fused Rayon-parallel simulation loop
- E-I network: single Rust call for connectivity + Poisson + Euler + spike detection
- Batch simulate: model dispatch loop in compiled Rust
- SIMD bitstream ops: 190 Gbit/s popcount (AVX-512)
The pure Python package works without the engine — NumPy fallbacks are used for all operations. Install the engine only when you need the performance advantage.
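The optional-backend behaviour can be sketched as a standard try-import dispatch. The pattern below is illustrative — the package's actual detection logic and the `popcount` entry point shown here are assumptions, not the real engine API:

```python
import numpy as np

try:
    import sc_neurocore_engine as _engine   # optional compiled backend
    HAS_ENGINE = True
except ImportError:
    _engine = None
    HAS_ENGINE = False

def popcount(bits: np.ndarray) -> int:
    """Count set bits, preferring the SIMD engine when present."""
    if HAS_ENGINE and hasattr(_engine, "popcount"):
        return _engine.popcount(bits)       # hypothetical engine entry point
    # NumPy fallback: reinterpret as bytes and sum the unpacked bits.
    return int(np.unpackbits(bits.view(np.uint8)).sum())

assert popcount(np.array([0b1011], dtype=np.uint64)) == 3
```

The same call sites work with or without the engine installed; only the throughput changes.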
`pip install sc-neurocore` publishes the Python suite under the public
`sc-neurocore` package name. The optional Rust engine remains part of the
repository / release-asset / source-build flow rather than a separate PyPI
runtime dependency. Source-only Frontier modules such as analysis, viz,
audio, dashboard, and swarm still require a source checkout.
```
git clone https://github.com/anulum/sc-neurocore.git
cd sc-neurocore
pip install -e ".[dev]"   # editable install with all dev tools
make preflight            # verify setup (lint + tests)
```

If you are changing the Rust bridge locally, install bridge/ in the same
environment or run source-tree commands with `PYTHONPATH=src:bridge`.
Status: Development preview. The Studio is functional but under active development. API and UI may change between releases.
A web-based IDE for designing, training, compiling, and deploying spiking neural networks — from ODE equations to FPGA bitstream in a single browser tab.
```
pip install sc-neurocore[studio]
sc-neurocore studio   # opens browser at http://127.0.0.1:8001
```

| Feature | What it does |
|---|---|
| 118 Model Browser | Browse all neuron models by category, simulate with parameter sliders |
| 18+ Analysis Views | Trace, phase portrait, ISI, f-I curve, bifurcation, heatmap, STA, frequency response, characterisation dashboard |
| Compiler Inspector | Build SC IR from equations, verify, emit SystemVerilog |
| Synthesis Dashboard | One-click Yosys synthesis to ice40/ECP5/Gowin/Xilinx, multi-target comparison, resource bars |
| Training Monitor | Live loss/accuracy curves via SSE, 6 surrogate gradients, per-layer spike rates |
| Network Canvas | Drag-and-drop populations and projections (React Flow), NIR export/import |
| Full Pipeline | Network → simulate → compile → synthesise in one click |
| Project Save/Load | Persistent workspaces as JSON, server-side storage |
No other SNN framework provides a visual design-to-hardware pipeline. snnTorch has Jupyter notebooks. Brian2 has a basic GUI. Neither goes from visual network design to FPGA resource estimation.
| Feature | SC-NeuroCore Studio | Brian2 GUI | snnTorch | Nengo GUI |
|---|---|---|---|---|
| Visual network design | Yes | Basic | No | Yes |
| ODE equation editor | Yes | No | No | No |
| Live training curves | Yes | No | TensorBoard | No |
| Verilog output viewer | Yes | No | No | No |
| FPGA synthesis | Yes | No | No | No |
| Co-simulation view | Yes | No | No | No |
Full documentation: Studio Guide
The Docker image ships with the full Rust engine (39–202× faster than Brian2):
```
# Build
make docker-build
# or: docker build -f deploy/Dockerfile -t sc-neurocore:latest .

# Run interactive Python shell
make docker-run
# or: docker run --rm -it sc-neurocore:latest

# Smoke test via docker compose
docker compose -f deploy/docker-compose.yml up
```

Pre-built images are published to GHCR on every release:

```
docker pull ghcr.io/anulum/sc-neurocore:latest
docker run --rm -it ghcr.io/anulum/sc-neurocore:latest
```

`pip install sc-neurocore` ships Core + Simulation + Domain bridges only.
Research and Frontier modules are available from source (pip install -e ".[dev]").
| Tier | Modules | Ships in wheel | Status |
|---|---|---|---|
| Core | neurons, synapses, layers, sources, utils, recorders, accel, compiler, hdl_gen, hardware, cli, exceptions | Yes | Production-ready. 100% coverage. |
| Simulation | hdc, solvers, transformers, learning, graphs, ensembles, export, pipeline, profiling, models, math, spatial, verification, security | Yes | Stable. Import explicitly. |
| Industrial | safety_cert, asic_flow, fault_injection, uvm_gen, hypervisor, digital_twin, chiplet, spintronic, memristor, analog_bridge | No | 1,173 tests. Available from source. |
| Frontier | evo_substrate, meta_plasticity, bioware, federated, bci_studio, explainability, neuro_symbolic, stochastic_doctor, model_zoo | No | 1,173 tests. Available from source. |
| Domain bridges | quantum (Qiskit/PennyLane), adapters/holonomic (JAX), scpn (Petri nets) | Yes | Requires pip install sc-neurocore[quantum] or [jax] |
| Research | robotics, physics, bio, optics, chaos, sleep, interfaces | No | Tested. Available from source. |
| Speculative | research/ (eschaton, exotic, meta, post_silicon, transcendent) | No | Theoretical. See research/README.md. |
```mermaid
graph TD
    subgraph "Python API (pip install sc-neurocore)"
        A[BitstreamEncoder] --> B[SCDenseLayer / SCConv2DLayer]
        B --> C[173 Neuron Models<br/>LIF · HH · AdEx · Izhikevich · ArcaneNeuron · ...]
        C --> NET[Network Engine<br/>Population · Projection · 3 Backends]
        C --> ID[Identity Substrate<br/>Persistent SNN · Checkpoint · Director]
        C --> D[STDP / R-STDP Synapses]
        D --> E[BitstreamSpikeRecorder]
    end
    subgraph "Acceleration"
        B --> F{Backend?}
        F -->|CPU| G[NumPy / Numba SIMD]
        F -->|GPU| H[CuPy CUDA]
        F -->|Rust| I[sc_neurocore_engine<br/>39–202× vs Brian2 · 173 neuron models<br/>160-model NetworkRunner]
        F -->|MPI| MPI[mpi4py distributed<br/>billion-neuron scale]
    end
    subgraph "Hardware Target"
        I --> J[IR Compiler]
        J --> K[SystemVerilog Emitter]
        J --> K2[MLIR/CIRCT Emitter]
        K --> L[Verilog RTL<br/>AXI-Lite + LIF Core]
        K2 --> L
        L --> M[FPGA Bitstream<br/>Xilinx / Intel]
        L --> V[Formal Verification<br/>SymbiYosys · 7 modules]
    end
    subgraph "Domain Bridges (optional)"
        B --> N[SCPN Petri Nets]
        B --> O[Quantum Hybrid<br/>Qiskit / PennyLane]
        B --> P[HDC/VSA Symbolic Memory]
    end
    style A fill:#2d6a4f,color:#fff
    style I fill:#b5651d,color:#fff
    style L fill:#1a237e,color:#fff
    style M fill:#4a148c,color:#fff
    style O fill:#6a1b9a,color:#fff
    style V fill:#004d40,color:#fff
```
```python
from sc_neurocore import (
    # Neurons
    StochasticLIFNeuron, FixedPointLIFNeuron, FixedPointLFSR,
    FixedPointBitstreamEncoder, HomeostaticLIFNeuron,
    StochasticDendriticNeuron, SCIzhikevichNeuron,
    # Synapses
    BitstreamSynapse, BitstreamDotProduct,
    StochasticSTDPSynapse, RewardModulatedSTDPSynapse,
    # Layers
    SCDenseLayer, SCConv2DLayer, SCLearningLayer,
    VectorizedSCLayer, SCRecurrentLayer, MemristiveDenseLayer,
    SCFusionLayer, StochasticAttention,
    # Utilities
    BitstreamEncoder, BitstreamAverager, RNG,
    generate_bernoulli_bitstream, generate_sobol_bitstream,
    bitstream_to_probability,
    # Sources & Recorders
    BitstreamCurrentSource, BitstreamSpikeRecorder,
)
```

```
hdl/
  sc_bitstream_encoder.v     -- LFSR-based stochastic encoder (SEED_INIT param)
  sc_bitstream_synapse.v     -- AND-gate SC multiplier
  sc_mux_add.v               -- 2-input MUX (scaled addition)
  sc_cordiv.v                -- CORDIV stochastic divider (Li et al. 2014)
  sc_dotproduct_to_current.v -- Popcount -> fixed-point current
  sc_lif_neuron.v            -- Q8.8 leaky integrate-and-fire
  sc_firing_rate_bank.v      -- Spike rate estimator
  sc_dense_layer_core.v      -- Full dense layer pipeline (decorrelated seeds)
  sc_dense_matrix_layer.v    -- N×M weight matrix layer
  sc_axil_cfg.v              -- AXI-Lite register file
  sc_axil_cfg_param.v        -- Parameterized AXI-Lite register file
  sc_axis_interface.v        -- AXI-Stream bulk bitstream I/O
  sc_dma_controller.v        -- DMA for weight upload and output readback
  sc_cdc_primitives.v        -- Clock domain crossing (2-FF sync, Gray, async FIFO)
  sc_dense_layer_top.v       -- Dense layer top wrapper
  sc_neurocore_top.v         -- System top (DMA + AXI + layers)
  sc_aer_encoder.v           -- AER spike encoder (event-driven output)
  sc_event_neuron.v          -- Event-triggered LIF (power ∝ spike rate)
  sc_aer_router.v            -- AER event distribution to target neurons
  tb_sc_*.v (7 testbenches)  -- Self-checking simulation testbenches
  formal/ (7 modules)        -- SymbiYosys formal verification properties
```
```python
from sc_neurocore.accel import xp, HAS_CUPY, to_device, to_host
from sc_neurocore.accel.gpu_backend import gpu_vec_mac

# VectorizedSCLayer auto-detects GPU
layer = VectorizedSCLayer(n_inputs=32, n_neurons=64, length=1024)
output = layer.forward(input_values)  # GPU if CuPy available, else CPU
```

The co-sim flow verifies bit-exact equivalence between the Python model and Verilog RTL:
```
# 1. Generate stimuli + expected results (Python golden model)
python scripts/cosim_gen_and_check.py --generate

# 2. Run Verilog simulation (requires Icarus Verilog)
iverilog -o tb_lif hdl/sc_lif_neuron.v hdl/tb_sc_lif_neuron.v
vvp tb_lif

# 3. Compare results
python scripts/cosim_gen_and_check.py --check
```

Every GitHub Release includes:
- wheel + sdist — Python distribution artifacts (`dist/sc_neurocore-*`)
- SBOM — CycloneDX software bill of materials (`sbom.json`)
- Changelog extract — release notes from `CHANGELOG.md`
Co-simulation traces are generated deterministically from fixed LFSR seeds. To reproduce a published benchmark:
```
git checkout v3.13.3
pip install -e ".[dev]"
python benchmarks/benchmark_suite.py --markdown > BENCHMARKS.md
```

For Verilog co-sim trace reproduction, see `scripts/cosim_gen_and_check.py`
and the seed constants in `hdl/sc_bitstream_encoder.v`.
- LFSR: 16-bit maximal-length, polynomial x^16+x^14+x^13+x^11+1, period 65535
- Seed strategy: input encoders `0xACE1 + i*7`, weight encoders `0xBEEF + i*13`
- Fixed-point: Q8.8 (DATA_WIDTH=16, FRACTION=8), signed two's complement
- Overflow: explicit bit-width masking via `_mask()` function
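The polynomial x^16+x^14+x^13+x^11+1 is the classic maximal-length 16-bit Fibonacci LFSR (taps 16, 14, 13, 11). A standalone sketch confirming the stated period of 65535 from the default input-encoder seed — the RTL's exact bit ordering may differ, but any maximal-length tap set gives the same period:

```python
def lfsr16_step(state: int) -> int:
    # Feedback = XOR of taps 16, 14, 13, 11 (counted from the output end).
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

state, period = 0xACE1, 0
while True:
    state = lfsr16_step(state)
    period += 1
    if state == 0xACE1:
        break
assert period == 65535   # maximal length: all 2^16 - 1 nonzero states visited
```

Because the seed fully determines the sequence, two runs with the same seed produce bit-identical bitstreams — the basis of the co-simulation guarantee.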
Runnable scripts in examples/:
| Script | Description |
|---|---|
| `01_basic_sc_encoding.py` | Bernoulli & Sobol bitstream encoding/decoding |
| `02_sc_neuron_layer.py` | SCDenseLayer construction, spike trains, and firing-rate summary |
| `03_ir_compile_demo.py` | IR graph building, verification, SystemVerilog emission (v3 Rust engine) |
| `04_vectorized_layer.py` | VectorizedSCLayer throughput benchmarking |
| `05_scpn_stack.py` | Full 7-layer SCPN consciousness stack with inter-layer coupling |
| `06_hdl_generation.py` | Verilog top-level generation from a network description |
| `07_ensemble_consensus.py` | Multi-agent ensemble orchestration and voting |
| `08_hdc_symbolic_query.py` | Hyper-Dimensional Computing symbolic memory (v3 Rust engine) |
| `09_safety_critical_logic.py` | Fault-tolerant Boolean logic with stochastic redundancy (v3 Rust engine) |
| `10_benchmark_report.py` | Head-to-head v2/v3 benchmark suite (v3 Rust engine) |
| `11_sc_training_demo.py` | Surrogate-gradient training of an SC dense layer (v3 Rust engine) |
| `12_load_pretrained_model.py` | Load pretrained ConvSpikingNet and classify MNIST digits |
| `jax_training_demo.py` | JAX JIT surrogate-gradient SNN training on synthetic data |
| `mnist_fpga/demo.py` | MNIST classifier: train → quantise Q8.8 → SC simulate → Verilog export |
| `mnist_conv_train.py` | ConvSpikingNet: 99.49% MNIST (learnable beta/threshold, cosine LR) |
| `mnist_surrogate/train.py` | Surrogate gradient SNN training (FastSigmoid/SuperSpike/ATan, ~95% MNIST) |
| `nir_roundtrip_demo.py` | NIR roundtrip: CubaLIF + recurrent connections, build → import → run → export |
| `norse_nir_roundtrip.py` | Norse → NIR → SC-NeuroCore roundtrip with real Norse weights |
| `snntorch_nir_roundtrip.py` | snnTorch RSynaptic → NIR → SC-NeuroCore roundtrip (CubaLIF + recurrent) |
| `spikingjelly_nir_roundtrip.py` | SpikingJelly → NIR → SC-NeuroCore roundtrip |
| `ann_to_snn_demo.py` | Convert trained PyTorch ANN to rate-coded SNN |
| `delay_training_demo.py` | Train spiking network with learnable per-synapse delays |

```
PYTHONPATH=src:bridge python examples/01_basic_sc_encoding.py
```

Examples marked (v3 Rust engine) require an available `sc_neurocore_engine`
bridge install. For source-tree runs against local bridge code, use
`PYTHONPATH=src:bridge` or install bridge/ in the same environment.
13 GitHub Actions workflows (.github/workflows/), all SHA-pinned:
| Workflow | Purpose |
|---|---|
| ci.yml | Lint (ruff format + ruff check + bandit) + Test (Python 3.10-3.14, coverage = 100%) + Build |
| v3-engine.yml | Rust engine cargo test + cargo clippy |
| v3-wheels.yml | Cross-platform wheels (Linux, macOS, Windows × Python 3.10–3.14) |
| docker.yml | Build & push Docker image to GHCR on release tags |
| docs.yml | MkDocs → GitHub Pages |
| publish.yml | Publish sc-neurocore to PyPI and engine/ to crates.io on release tags |
| release.yml | Python wheel + sdist + changelog extraction → GitHub Release |
| benchmark.yml | Performance regression tracking |
| codeql.yml | CodeQL security analysis (weekly + on push) |
| scorecard.yml | OpenSSF Scorecard |
| pre-commit.yml | Pre-commit hook validation |
| yosys-synth.yml | Yosys HDL synthesis verification |
| stale.yml | Auto-label and close stale issues |
Run the benchmark suite:
```
python benchmarks/benchmark_suite.py             # quick mode
python benchmarks/benchmark_suite.py --full      # thorough (10x)
python benchmarks/benchmark_suite.py --markdown  # output BENCHMARKS.md
```

Sample results (CPU, quick mode):
| Operation | Throughput |
|---|---|
| LFSR step | 2.25 Mstep/s |
| Bitstream encoder | 1.88 Mstep/s |
| LIF neuron step | 1.15 Mstep/s |
| vec_and (1024 words) | 45.67 Gbit/s |
| gpu_vec_mac (64x32x16w) | 6.15 GOP/s |
Live site: anulum.github.io/sc-neurocore
- Getting Started — Installation & quickstart
- Tutorials — 51 hands-on guides (SC fundamentals → MNIST → FPGA → quantum → formal verification)
- API Reference — Python package API
- Rust Engine API — Rust engine docs
- Hardware Guide — FPGA deployment workflow
- Architecture — Package architecture
- Benchmarks — Performance measurements
- CHANGELOG.md — Version history
Build docs locally:
```
pip install mkdocs mkdocs-material mkdocstrings[python]
mkdocs serve
```

```
pip install sc-neurocore           # core engine only (neurons, layers, compiler, HDL gen)
pip install sc-neurocore[accel]    # + Numba JIT acceleration
pip install sc-neurocore[gpu]      # + CuPy CUDA acceleration
pip install sc-neurocore[jax]      # + JAX backend for holonomic adapters
pip install sc-neurocore[quantum]  # + Qiskit + PennyLane quantum bridges
pip install sc-neurocore[lava]     # + Intel Lava interop (Loihi target)
pip install sc-neurocore[research] # + matplotlib, networkx, onnx, torch
pip install sc-neurocore[full]     # + numba, matplotlib, networkx, onnx, qiskit, pennylane
```

For development (includes all modules + research/frontier code from source):

```
pip install -e ".[dev]"  # editable install with pytest, mypy, ruff, hypothesis
```

Pinned dependency files for reproducible environments:

```
pip install -r requirements.txt      # runtime only
pip install -r requirements-dev.txt  # runtime + dev tools
```

The `sc_neurocore_engine` crate provides 173 Rust neuron models callable
from Python via PyO3 bindings (including ArcaneNeuron), a 160-model
NetworkRunner with Rayon-parallel population simulation (100K+ neurons),
and SIMD-accelerated primitives with dispatch across five ISAs (AVX-512,
AVX2, NEON, SVE, RISC-V V).
1 717 Rust tests across 6 workspace crates:
| Crate | Tests | Purpose |
|---|---|---|
| `sc_neurocore_engine` | 1,549 | PyO3 SIMD engine, 173 neuron models, NetworkRunner |
| `tinysc_riscv` | 83 | RISC-V SC instruction set simulator |
| `core_engine` | 22 | SC arithmetic core (standalone) |
| `autonomous_learning` | 12 | Self-modifying plasticity rules |
| `neuro_symbolic` | 28 | Hyperdimensional computing + predictive coding |
| `stochastic_doctor_core` | 23 | Bitstream diagnostics engine |
| Category | Scope |
|---|---|
| Primitives | Bernoulli + Sobol bitstream, pack/unpack, popcount, SIMD (5 ISAs) |
| Neurons | 173 models: LIF variants, HH-type, maps, hardware emulators, population, ArcaneNeuron |
| NetworkRunner | 160-model fused simulation loop with CSR projections and Rayon parallelism |
| Synapses | Static, STDP, Reward-STDP |
| Layers | Dense, Conv2D, Recurrent, Learning, Fusion, Memristive, Attention |
| Networks | Brunel, GNN, Spike recorder, Connectome, Fault injection |
| Compiler | IR builder/parser/verifier, SystemVerilog + MLIR emitters, IR bridge |
| Domain | HDC, Kuramoto, SSGF geometry |
| Training | 6 surrogate gradient functions + property tests |
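Surrogate gradients replace the non-differentiable Heaviside spike with a smooth stand-in during the backward pass only. A minimal fast-sigmoid surrogate in NumPy — one of several shapes; the slope constant here is illustrative, not the engine's default:

```python
import numpy as np

def spike_forward(v: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Forward pass: hard Heaviside spike."""
    return (v >= threshold).astype(np.float32)

def fast_sigmoid_grad(v: np.ndarray, threshold: float = 1.0, k: float = 10.0) -> np.ndarray:
    """Backward pass: derivative of the fast-sigmoid relaxation."""
    return k / (1.0 + k * np.abs(v - threshold)) ** 2

v = np.array([0.0, 0.99, 1.0, 1.5])
assert spike_forward(v).tolist() == [0.0, 0.0, 1.0, 1.0]
g = fast_sigmoid_grad(v)
assert g.argmax() == 2   # surrogate gradient peaks at the threshold
```

The other surrogate shapes (SuperSpike, ATan, etc.) differ only in this backward kernel; the forward spike stays binary.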
- GitHub Discussions — questions, ideas, show & tell
- Issue Tracker — bug reports and feature requests
- Contributing Guide — how to set up, test, and submit PRs
If you use SC-NeuroCore in your research, please cite:
```bibtex
@software{sotek2026scneurocore,
  author  = {Šotek, Miroslav},
  title   = {SC-NeuroCore: A Deterministic Stochastic Computing Framework for Neuromorphic Hardware Design},
  version = {3.14.0},
  year    = {2026},
  doi     = {10.5281/zenodo.18906614},
  url     = {https://github.com/anulum/sc-neurocore},
  license = {AGPL-3.0-or-later}
}
```

See also CITATION.cff for the machine-readable citation metadata.
This project uses LLMs for advanced control mechanisms and GitHub handling. All output is reviewed, tested, and verified by the project author.
SC-NeuroCore is dual-licensed:
- Open Source: GNU Affero General Public License v3.0 (AGPLv3)
- Commercial: Proprietary license available for integration into closed-source products
For commercial licensing enquiries, contact protoscience@anulum.li.
Developed by ANULUM / Fortis Studio

