Applications Overview

The Q-Memory photonic platform enables a broad range of applications by providing a single chip that can perform quantum computing, AI matrix acceleration, and quantum communications — all using the same programmable optical hardware.

This page summarises the application landscape across three time horizons: near-term (achievable within the current development roadmap), medium-term (requiring a larger, more integrated platform), and long-term (enabled by fault-tolerant quantum operation).

Near-Term Applications

These applications are achievable with the current and near-future platform, without requiring full fault-tolerant quantum operation.

Quantum Key Distribution (QKD)

What it is: Using entangled photon pairs as the basis for encryption keys that cannot be intercepted without leaving a physically detectable disturbance.

Why photonics is well-suited: The platform’s on-chip entangled photon pair sources generate the exact type of quantum state required for QKD protocols. The photons are generated at telecom wavelengths, directly compatible with existing fibre optic infrastructure.

Status: Achievable from Phase 0 using characterised photon pair sources; Phase 1 enables full on-chip integration.
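The sift-and-compare core of an entanglement-based protocol can be sketched in a few lines. This is a toy classical simulation (the function name `bbm92_sift` and all numbers are ours, not the platform's actual QKD stack); real implementations add basis reconciliation over an authenticated channel, error estimation, and privacy amplification:

```python
import random

def bbm92_sift(n_pairs, seed=0):
    """Toy sift of an entanglement-based QKD round (BBM92-style).

    For each entangled pair, Alice and Bob pick a measurement basis at
    random; when the bases match, their outcomes are perfectly correlated
    and contribute one sifted key bit. Hypothetical simulation only.
    """
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        a_basis = rng.randint(0, 1)
        b_basis = rng.randint(0, 1)
        if a_basis == b_basis:
            bit = rng.randint(0, 1)  # shared outcome of the entangled pair
            alice_key.append(bit)
            bob_key.append(bit)
    return alice_key, bob_key

alice, bob = bbm92_sift(1000)
```

Because roughly half of the independent basis choices coincide, about half of the detected pairs survive sifting; sacrificing a subset of sifted bits to estimate the error rate is what reveals an eavesdropper.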

Certified Quantum Random Number Generation

What it is: Generating truly random numbers — not from software algorithms, but from the fundamentally unpredictable outcomes of quantum measurements.

Why photonics is well-suited: Single-photon detections at a beam splitter produce genuinely random outcomes, certifiable against any classical model. The platform generates and detects these events on-chip.

Status: Achievable from Phase 0; directly integrable into security and cryptographic infrastructure.
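The raw-bit-plus-extraction pipeline can be illustrated with a short simulation. The helper names and the 60/40 detector bias below are hypothetical; the point is that von Neumann extraction turns biased but independent beam-splitter clicks into unbiased bits:

```python
import random

def beamsplitter_bits(n, p_reflect=0.5, seed=1):
    """Simulate which output port each photon exits a beam splitter from."""
    rng = random.Random(seed)
    return [1 if rng.random() < p_reflect else 0 for _ in range(n)]

def von_neumann_extract(bits):
    """Classic debiasing: map the pair (0,1) -> 0 and (1,0) -> 1,
    and discard (0,0) / (1,1) pairs entirely."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

raw = beamsplitter_bits(10_000, p_reflect=0.6)  # a deliberately biased device
key = von_neumann_extract(raw)                  # unbiased, at reduced rate
```

The extractor assumes independent trials; certified QRNG goes further and bounds adversarial side information, which simple debiasing alone does not.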

Quantum Sampling

What it is: A class of computational tasks for which quantum devices can demonstrate an advantage over classical computers: sampling from a probability distribution that would take impractical classical resources to simulate exactly.

Why photonics is well-suited: The programmable optical mesh is a natural implementation of Boson Sampling and related protocols, which are among the earliest demonstrations of practical quantum advantage.

Status: Demonstrable from Phase 1.
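The connection between the mesh and hard sampling problems is the matrix permanent: each collision-free output probability of a linear interferometer is the squared modulus of the permanent of a submatrix of the mesh unitary. A minimal sketch (our own helper names, a naive O(n!) permanent, and a tiny 3-mode Fourier interferometer as the example):

```python
import itertools
import math

import numpy as np

def permanent(M):
    """Naive O(n!) permanent; fine for the tiny matrices used here."""
    n = M.shape[0]
    return sum(math.prod(M[i, p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def output_prob(U, in_modes, out_modes):
    """Probability of one photon per mode in out_modes, given single
    photons injected into in_modes of the interferometer U."""
    sub = U[np.ix_(out_modes, in_modes)]
    return abs(permanent(sub)) ** 2

# 3-mode Fourier interferometer, photons injected into modes 0 and 1.
m = 3
w = np.exp(2j * np.pi / m)
U = np.array([[w ** (j * k) for k in range(m)] for j in range(m)]) / np.sqrt(m)
probs = [output_prob(U, [0, 1], list(pair))
         for pair in itertools.combinations(range(m), 2)]
```

For this symmetric example every collision-free outcome has probability 1/9 (the remaining 2/3 of the weight sits on bunched outcomes); at tens of photons the permanents become classically intractable, which is the source of the sampling advantage.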

AI Matrix Acceleration

What it is: Using the optical interference network to perform the matrix-vector multiplications that dominate neural network inference and training — faster and at lower power than GPU-based approaches.

Why photonics is well-suited: Optical interference naturally implements matrix operations. The same hardware runs quantum algorithms and AI workloads, making the platform uniquely dual-use.

Key metrics (Phase 1–2):

  • Matrix computation time: constant, independent of matrix size
  • Power: significantly lower than GPU for equivalent throughput, especially with non-volatile optical memory eliminating static control power
  • Latency: nanosecond-scale for each matrix operation

Status: Phase 1 demonstrates AI matrix acceleration; Phase 2 targets production-scale workloads.
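How a programmable mesh realises a matrix: cascaded 2x2 Mach-Zehnder interferometers (MZIs) on neighbouring waveguides compose into an m x m unitary, and launching an amplitude vector through the chip performs the multiply in a single optical pass. A numerical sketch under our own parametrisation (a Clements-style rectangular layout is assumed; phase conventions vary between treatments):

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of one Mach-Zehnder interferometer
    (one internal phase theta, one external phase phi)."""
    s, c = np.sin(theta / 2), np.cos(theta / 2)
    return np.exp(1j * theta / 2) * np.array(
        [[np.exp(1j * phi) * s, c],
         [np.exp(1j * phi) * c, -s]])

def embed(U2, i, m):
    """Embed a 2x2 block acting on modes (i, i+1) into m modes."""
    U = np.eye(m, dtype=complex)
    U[i:i + 2, i:i + 2] = U2
    return U

rng = np.random.default_rng(0)
m = 4
mesh = np.eye(m, dtype=complex)
for layer in range(m):                      # alternating rectangular layers
    for i in range((layer % 2), m - 1, 2):
        theta, phi = rng.uniform(0, 2 * np.pi, size=2)
        mesh = embed(mzi(theta, phi), i, m) @ mesh

x = rng.normal(size=m) + 1j * rng.normal(size=m)
y = mesh @ x   # on hardware: one optical pass, latency independent of m
```

Because every MZI is unitary the composed mesh is too, which is what lets the same fabric serve both quantum interference experiments and (with input/output scaling) general matrix-vector products.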

Medium-Term Applications

These applications require a larger, more integrated platform with on-chip photon sources and real-time feed-forward correction.

Molecular Simulation (Variational Quantum Eigensolver)

What it is: Finding the ground-state energy of molecules, directly relevant to drug discovery and materials science. Quantum hardware is expected to handle this class of problem efficiently, whereas the cost of exact classical methods grows exponentially with system size.

Why photonics is well-suited: Variational algorithms iterate between quantum hardware and classical optimisers. The photonic platform’s programmable mesh implements the quantum circuit component; the CMOS electronics handle the classical loop.

Status: Targeted for Phase 1–2; requires multi-mode entangled state preparation.
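The quantum/classical split can be seen in a deliberately tiny VQE loop. The 2x2 Hamiltonian, the one-parameter ansatz, and the learning rate below are all illustrative choices of ours; on the platform the `energy` evaluation would come from photon-counting statistics rather than linear algebra:

```python
import numpy as np

# Toy 2x2 Hamiltonian standing in for a (much larger) molecular problem.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)> for the one-parameter
    ansatz psi(theta) = (cos theta, sin theta). On hardware this number
    would be estimated from measurement statistics."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Classical half of the loop: gradient descent with the parameter-shift
# rule, which evaluates the same energy circuit at shifted angles.
theta, lr = 0.1, 0.2
for _ in range(200):
    grad = energy(theta + np.pi / 4) - energy(theta - np.pi / 4)
    theta -= lr * grad

ground = np.linalg.eigvalsh(H)[0]   # exact answer, for comparison only
```

The classical optimiser needs nothing from the quantum side beyond expectation values, which is why the CMOS electronics can close the loop around the photonic circuit.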

AI Training Acceleration

What it is: Running the training loop of large language models and other deep networks partially in optics, matching GPU speed at substantially lower power consumption.

Status: Targeted for Phase 2; requires large-mode non-volatile optical mesh and high-speed reprogramming.

Long-Term Applications

These applications require fault-tolerant quantum operation with error correction.

Fault-Tolerant Quantum Computing

What it is: Running full quantum algorithms — including factoring large numbers, solving combinatorial optimisation problems, and simulating quantum chemistry — with error-corrected logical qubits.

Current state: Photonic fault-tolerant quantum computing requires the loss budget (photon loss per operation) to be below a tight threshold. The best demonstrated systems are just above that threshold; closing the gap is the primary research challenge for Phase 2+.

Status: Research direction; targeted for Phase 2–3.
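Why the loss budget is so unforgiving: per-component losses add in decibels, so end-to-end transmission decays multiplicatively with every element a photon traverses. The component values below are hypothetical placeholders, not measured platform numbers:

```python
# Hypothetical per-component insertion losses, in dB.
losses_db = {
    "source coupling": 0.2,
    "mesh propagation": 0.4,
    "switching": 0.3,
    "detector coupling": 0.1,
}
total_db = sum(losses_db.values())
transmission = 10 ** (-total_db / 10)  # fraction of photons surviving
```

Even this optimistic 1.0 dB total leaves only about 79% of photons surviving end to end, which illustrates why small per-component improvements matter so much for reaching the fault-tolerance threshold.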

Application                 Timescale   Quantum or Classical?   Key Platform Feature
Quantum key distribution    Phase 0–1   Quantum                 On-chip entangled photon pairs
Quantum random numbers      Phase 0–1   Quantum                 Single-photon detection
Quantum sampling            Phase 1     Quantum                 Programmable optical mesh
AI matrix acceleration      Phase 1–2   Classical/hybrid        Optical matrix-vector multiply
Molecular simulation        Phase 1–2   Quantum                 Feed-forward quantum circuits
AI training acceleration    Phase 2     Classical               Large non-volatile optical mesh
Fault-tolerant quantum      Phase 2–3   Quantum                 Error-corrected logical qubits