
Q-Store Quantum ML Integration

Q-Memory provides native integration with Q-Store, an async quantum machine learning framework, enabling ultra-fast parameter checkpointing and model persistence for hybrid quantum-classical neural networks.

Q-Store is an async-first quantum ML framework that trains hybrid quantum-classical neural networks using real quantum hardware.

Key Q-Store capabilities:

  • Async quantum execution: 10-20× parallel circuit submissions
  • Quantum layers: QuantumFeatureExtractor, QuantumNonlinearity, QuantumPooling
  • Hardware backends: IonQ, AWS Braket
  • Classical integration: seamless interoperability with PyTorch/TensorFlow

Quantum parameter characteristics:

  • Type: rotation angles for quantum gates (RY, RZ, RX)
  • Format: float32 in range [-π, π]
  • Quantity: 64-16,384 parameters per model
  • Update frequency: every epoch or batch
  • Precision requirement: ~0.1 rad (NISQ-compatible)

Q-Memory advantages for this workload:

  • Native dual-parameter encoding: store (RY, RZ) pairs in a single cell
  • Fast writes: ~100 ns per cell vs 50-100 ms for SSD
  • Non-volatile: survives power cycles
  • High endurance: 10⁹ write cycles for frequent updates
  • Low power: <50 mW idle vs 5-10 W for SSD
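The ~0.1 rad precision figure follows from mapping a float32 angle in [-π, π] onto a small number of discrete levels. A minimal sketch of that quantization arithmetic (the level mapping is illustrative; the actual hardware encoding may differ):

```python
import math

def quantize_angle(theta: float, bits: int) -> int:
    """Map an angle in [-pi, pi] to the nearest of 2**bits discrete levels."""
    levels = 2 ** bits
    step = 2 * math.pi / levels          # 5 bits -> ~0.196 rad, 6 bits -> ~0.098 rad
    index = round((theta + math.pi) / step)
    return min(index, levels - 1)        # clamp theta = +pi into the top level

def dequantize_angle(index: int, bits: int) -> float:
    step = 2 * math.pi / (2 ** bits)
    return index * step - math.pi

# Worst-case quantization error is half a step:
# 5-bit: (2*pi/32)/2 ~= 0.098 rad; 6-bit: (2*pi/64)/2 ~= 0.049 rad
print(quantize_angle(0.0, 5))  # 16 (mid-scale level for theta = 0)
```

This is where the 0.098 rad / 0.049 rad precision figures in the representation table below come from.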
Q-Store Training Loop (Async)
┌────────────────────────────────────┐
│ Quantum Layer Forward Pass         │ Execute on IonQ hardware
│ 64-16,384 quantum parameters       │ 10-20× parallel circuits
└────────────┬───────────────────────┘
             ▼
┌────────────────────────────────────┐
│ Parameter Extraction               │ Extract trainable params
│ (θ_y, θ_z) rotation angle pairs    │ Float32 values
└────────────┬───────────────────────┘
             ▼
┌────────────────────────────────────┐
│ AsyncPhase2Wrapper                 │ Non-blocking write
│ Background thread pool (4 workers) │ Zero training overhead
└────────────┬───────────────────────┘
             ▼
┌────────────────────────────────────┐
│ Q-Memory Phase 2 Array             │
│ • Encode to (θ, φ) levels          │ 5-11 bit encoding
│ • Write to crossbar row            │ <1 µs parallel write
│ • BCH ECC protection               │ Error correction
└────────────────────────────────────┘

Q-Memory implements a two-tier approach for optimal performance:

Tier 1: Q-Memory (Quantum Parameters Only)

# Fast checkpoint every epoch
quantum_params = model.get_quantum_parameters()  # 64 params
await async_phase2.write_quantum_parameters_async(
    layer_id=epoch,
    parameters=quantum_params,
)
# Latency: 3.2 µs (Phase 1) or <1 µs (Phase 2)
# Overhead: 0.0001% of training time

Tier 2: Zarr on SSD (Full Model State)

# Comprehensive checkpoint every 10 epochs
if epoch % 10 == 0:
    await zarr_storage.save_checkpoint_async(
        epoch,
        model.state_dict(),  # All layers + optimizer
    )
# Latency: 50-100 ms
# Overhead: 0.16-0.33% (only when triggered)
| Backend          | Write Latency | Training Overhead | Speedup          |
| ---------------- | ------------- | ----------------- | ---------------- |
| SSD (Zarr)       | 50-100 ms     | 0.16-0.33%        | Baseline         |
| Q-Memory Phase 0 | 6.4 µs        | 0.0002%           | 7,800-15,600×    |
| Q-Memory Phase 1 | 3.2 µs        | 0.0001%           | 15,600-31,000×   |
| Q-Memory Phase 2 | <1 µs         | 0.00003%          | 50,000-100,000×  |
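The speedup column is just the ratio of the SSD latency range to each backend's write latency; a quick check:

```python
SSD_LATENCY = (50e-3, 100e-3)    # SSD (Zarr) write latency range, seconds

def speedup_range(latency_s: float) -> tuple[float, float]:
    """SSD latency range divided by the faster backend's latency."""
    return SSD_LATENCY[0] / latency_s, SSD_LATENCY[1] / latency_s

print(speedup_range(6.4e-6))   # Phase 0: ~(7812, 15625)    -> "7,800-15,600x"
print(speedup_range(3.2e-6))   # Phase 1: ~(15625, 31250)   -> "15,600-31,000x"
print(speedup_range(1e-6))     # Phase 2: ~(50000, 100000)  -> "50,000-100,000x"
```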
| Operation           | SSD Only  | Q-Memory Hybrid | Speedup |
| ------------------- | --------- | --------------- | ------- |
| Load quantum params | 5-10 sec  | 0.2 sec         | 25-50×  |
| Resume training     | Immediate | Immediate       | Same    |
| Total restart time  | 5-10 sec  | 0.2 sec         | 25-50×  |

Q-Memory Phase 1+ supports native dual-parameter storage optimized for quantum gates:

# Quantum circuit with paired gates
circuit.ry(theta_y, qubit=0) # RY gate
circuit.rz(theta_z, qubit=0) # RZ gate
# Store both in single dual-parameter cell
# θ (resistance): theta_y → 5-bit base level
# φ (capacitance): theta_z → 6-bit base level
# Result: 2× parameter density vs Phase 0
# 64 parameters → 32 cells (Phase 1) vs 64 cells (Phase 0)
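The 2× density claim is pairing arithmetic: adjacent (RY, RZ) angles share one dual-parameter cell. A tiny sketch (`pack_dual` is an illustrative helper, not the SDK API; it assumes parameters alternate RY, RZ as in the snippet above):

```python
def pack_dual(params):
    """Pack a flat list of angles into (theta_y, theta_z) cell pairs."""
    if len(params) % 2:
        raise ValueError("expected an even number of paired parameters")
    # Each tuple models one cell: theta on resistance, phi on capacitance
    return [(params[i], params[i + 1]) for i in range(0, len(params), 2)]

cells = pack_dual([0.1] * 64)   # 64 parameters
print(len(cells))               # 32 dual-parameter cells (2x density)
```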
| Representation    | Bits | Precision  | Quantum ML Impact |
| ----------------- | ---- | ---------- | ----------------- |
| Q-Store (float32) | 32   | ~10⁻⁷ rad  | Baseline          |
| Phase 0 (5-bit)   | 5    | 0.098 rad  | Sufficient ✓      |
| Phase 1 θ (5-bit) | 5    | 0.098 rad  | Sufficient ✓      |
| Phase 1 φ (6-bit) | 6    | 0.049 rad  | Excellent ✓       |

Assessment: Q-Memory precision is sufficient for NISQ quantum ML; typical gradient noise (0.1-0.5 rad) far exceeds the quantization error.

| Model Architecture | Parameters | Q-Memory Phase | Array Utilization   |
| ------------------ | ---------- | -------------- | ------------------- |
| Tiny (4q×3d)       | 24         | Phase 0        | 12 cells (0.02%)    |
| Small (6q×3d)      | 36         | Phase 0        | 18 cells (0.03%)    |
| Medium (8q×4d)     | 64         | Phase 1        | 32 cells (0.05%)    |
| Large (12q×4d)     | 96         | Phase 1        | 48 cells (0.07%)    |
| X-Large (16q×4d)   | 256        | Phase 2        | 128 cells (0.2%)    |
| XX-Large (20q×5d)  | 1,000      | Phase 2        | 500 cells (0.8%)    |
| Maximum            | 16,384     | Phase 2        | 8,192 cells (12.5%) |
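The utilization column follows from dual-parameter packing against the 256×256 (65,536-cell) array; the percentages in every row appear to be computed against that Phase 2 array size, which is the assumption here:

```python
ARRAY_CELLS = 256 * 256          # Phase 2 production array: 65,536 cells

def utilization(num_params: int) -> tuple[int, float]:
    """Cells consumed and percent of the array used, at 2 angles per cell."""
    cells = num_params // 2      # dual-parameter encoding
    return cells, 100 * cells / ARRAY_CELLS

print(utilization(64))       # (32, ~0.05%)  -> Medium model row
print(utilization(16_384))   # (8192, 12.5%) -> Maximum row
```

Even the largest supported model leaves 87.5% of the array free.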

Application: Quantum chemistry, finding molecular ground states

Q-Memory Benefits:

  • Fast parameter updates during energy minimization
  • Non-volatile storage of converged parameters
  • Low power for extended optimization runs
  • Typical usage: 32-128 parameters (Phase 1 optimal)

Application: Combinatorial optimization problems

Q-Memory Benefits:

  • Rapid checkpoint during parameter sweeps
  • Persistent storage of optimal solutions
  • High endurance for many optimization iterations
  • Typical usage: 64-256 parameters (Phase 1-2)

Application: Quantum machine learning classifiers

Q-Memory Benefits:

  • Near-zero overhead quantum layer checkpointing
  • Fast model restarts for inference
  • Seamless Q-Store integration
  • Typical usage: 64-1,024 parameters (Phase 1-2)

Scenario: Fashion MNIST classification with QuantumFeatureExtractor

Model configuration:

  • Architecture: 8 qubits, depth = 4
  • Parameters: 64 (32 RY + 32 RZ)
  • Training: 100 epochs, 30 batches/epoch
  • Hardware: IonQ quantum processor

Performance with Q-Memory Phase 2:

  • Total training time: 50 minutes
    - Quantum execution: 47.5 min (95%)
    - Classical ops: 2.4 min (4.8%)
    - Q-Memory checkpoints: 0.0032 sec (0.00006%)
  • Checkpoint overhead: negligible ✓
  • Fast restart: 0.2 sec vs 8 sec on SSD (40× faster) ✓
  • Power savings: 100× less idle power ✓
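The headline percentages and the restart speedup are simple ratios of the figures above, quickly reproduced:

```python
total_min = 50.0                     # total training time
quantum_min, classical_min = 47.5, 2.4

print(round(100 * quantum_min / total_min, 1))    # 95.0 (% on quantum hardware)
print(round(100 * classical_min / total_min, 1))  # 4.8 (% classical ops)
print(round(8.0 / 0.2))                           # 40 (x faster restart vs SSD)
```

Quantum execution dominates the budget, so checkpoint latency is irrelevant to wall-clock time.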

Q-Memory integrates with Q-Store’s quantum error mitigation:

  1. Quantum Layer (Q-Store): ZNE, PEC, MEM error mitigation
  2. Encoding Layer: Float32 → dual-parameter conversion
  3. Analog Layer (Q-Memory): BCH(15,11) hardware ECC
  4. System Layer: 1S1R selectors for crosstalk suppression
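The analog layer's BCH(15,11) code corrects a single bit error per 15-bit codeword; with t = 1 it is equivalent to the classic Hamming(15,11) code. A minimal software sketch of that correction behavior (illustrative only; the silicon implementation differs):

```python
def hamming15_encode(data_bits):
    """Encode 11 data bits into a 15-bit single-error-correcting codeword.

    Codeword positions are 1..15; positions 1, 2, 4, 8 hold parity bits.
    """
    assert len(data_bits) == 11
    word = [0] * 16                       # index 0 unused
    data = iter(data_bits)
    for pos in range(1, 16):
        if pos not in (1, 2, 4, 8):
            word[pos] = next(data)
    for p in (1, 2, 4, 8):                # even parity over covered positions
        word[p] = sum(word[i] for i in range(1, 16) if i & p) % 2
    return word[1:]

def hamming15_correct(codeword):
    """Locate and flip a single bit error; return the corrected codeword."""
    word = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4, 8):
        if sum(word[i] for i in range(1, 16) if i & p) % 2:
            syndrome += p                 # failing checks sum to error position
    if syndrome:
        word[syndrome] ^= 1
    return word[1:]

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
cw = hamming15_encode(msg)
corrupted = cw.copy()
corrupted[6] ^= 1                          # flip one stored bit
assert hamming15_correct(corrupted) == cw  # single-bit error recovered
```

Any single flipped cell bit in a codeword is silently repaired, which is why analog storage errors land below 0.001% in the combined budget.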

Combined Error Budget:

  • Quantum gate errors: 0.1-1% (dominant)
  • Parameter quantization: 0.0004 rad (negligible)
  • Analog storage errors: <0.001% (with ECC)
  • Total system error: <0.2% (dominated by quantum, not storage)

Q-Memory’s async interface enables zero-blocking integration:

# Non-blocking parameter checkpoint
async def train_epoch_async(model, data, epoch):
    # Training loop (existing Q-Store code)
    for inputs, targets in data:
        predictions = await model.forward_async(inputs)
        loss = loss_fn(predictions, targets)
        await model.backward_async(loss)
        optimizer.step()
        optimizer.zero_grad()

    # Non-blocking checkpoint (runs in background)
    checkpoint_future = async_phase2.write_quantum_parameters_async(
        layer_id=epoch,
        parameters=model.get_quantum_parameters(),
    )
    # Continue immediately, no waiting
    # Checkpoint completes in parallel with the next epoch
    return checkpoint_future

Result: Training proceeds at full speed with automatic parameter persistence
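One caveat with background checkpoints: the final epoch's write may still be in flight when training ends, so outstanding futures should be awaited before shutdown. A sketch of that drain pattern, assuming (hypothetically) that the per-epoch routine hands back its checkpoint future:

```python
import asyncio

async def run_training(train_epoch, num_epochs):
    """Run all epochs, then drain background checkpoint futures.

    train_epoch(epoch) is assumed to return the awaitable checkpoint
    future for that epoch -- an illustrative contract, not the real API.
    """
    pending = []
    for epoch in range(num_epochs):
        pending.append(await train_epoch(epoch))
    # Ensure the last parameters are durable before exiting
    return await asyncio.gather(*pending)

# Minimal stand-in showing the pattern end to end:
async def fake_epoch(epoch):
    return asyncio.sleep(0, result=epoch)   # stands in for a checkpoint future

print(asyncio.run(run_training(fake_epoch, 3)))   # [0, 1, 2]
```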

Requirements:

  • Q-Store v4.1.0+ (pip install qstore)
  • Python 3.9+ with asyncio
  • Quantum hardware access
  • Q-Memory Python SDK

Hardware availability:

  • Phase 0: simulation only (hardware in fabrication)
  • Phase 1: 4×4 prototype array (validated)
  • Phase 2: 256×256 production array (recommended)

Getting started:

  1. Install Q-Store and the Q-Memory SDK
  2. Initialize the async Phase 2 wrapper
  3. Configure the Q-Store backend to use Q-Memory
  4. Run training with automatic checkpointing
  5. Monitor performance statistics

Key benefits:

  • 10,000-100,000× faster parameter checkpointing
  • 25-50× faster model restart vs SSD
  • 0.00003% training overhead (effectively zero)
  • 100× lower idle power consumption
  • 2× parameter density with dual encoding
  • Native quantum parameter format (no conversion overhead)
  • Non-volatile persistence across power cycles
  • 10⁹-cycle endurance for frequent updates
  • Multi-layer error correction for data integrity