# Q-Memory Applications

The Q-Memory photonic platform enables a broad range of applications by providing a single chip that can perform quantum computing, AI matrix acceleration, and quantum communications, all using the same programmable optical hardware.
This page summarises the application landscape across two time horizons: near-term (achievable within the current development roadmap) and longer-term (enabled by fault-tolerant quantum operation).
## Near-Term Applications (Phase 0–1)

These applications are achievable with the current and near-future platform, without requiring full fault-tolerant quantum operation.
### Quantum Key Distribution (QKD)

What it is: Using entangled photon pairs to establish encryption keys that cannot be intercepted without leaving a physically detectable trace, so any eavesdropping attempt is revealed.
Why photonics is well-suited: The platform’s on-chip entangled photon pair sources generate the exact type of quantum state required for QKD protocols. The photons are generated at telecom wavelengths, directly compatible with existing fibre optic infrastructure.
Status: Achievable from Phase 0 using characterised photon pair sources; Phase 1 enables full on-chip integration.
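The entanglement-based scheme above can be illustrated with a toy sifting simulation. This is a minimal sketch, not platform API: it assumes perfect correlations and a noiseless channel, and `bbm92_sift` is a hypothetical helper name.

```python
import secrets

def bbm92_sift(n_pairs: int):
    """Toy model of entanglement-based QKD sifting (BBM92-style).

    Each entangled pair yields perfectly correlated outcomes when Alice
    and Bob happen to measure in the same basis; rounds with mismatched
    bases are discarded during sifting.
    """
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        a_basis = secrets.randbelow(2)   # Alice's basis: 0 = Z, 1 = X
        b_basis = secrets.randbelow(2)   # Bob's independent basis choice
        outcome = secrets.randbelow(2)   # shared outcome when bases match
        if a_basis == b_basis:           # keep only matched-basis rounds
            alice_key.append(outcome)
            bob_key.append(outcome)
    return alice_key, bob_key

alice, bob = bbm92_sift(1000)
print(f"sifted {len(alice)} key bits from 1000 pairs")
```

On average half the rounds survive sifting; a real protocol would follow this with error estimation and privacy amplification, which the sketch omits.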
### Certified Quantum Random Number Generation

What it is: Generating truly random numbers, not from software algorithms but from the fundamentally unpredictable outcomes of quantum measurements.
Why photonics is well-suited: Single-photon detections at a beam splitter produce genuinely random outcomes, certifiable against any classical model. The platform generates and detects these events on-chip.
Status: Achievable from Phase 0; directly integrable into security and cryptographic infrastructure.
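Raw detector clicks from a real beam splitter can carry bias (for example, from an imperfect split ratio), so QRNG pipelines typically apply classical post-processing. A minimal sketch of one standard step, von Neumann debiasing, assuming a simulated biased click stream rather than real hardware:

```python
import random

def von_neumann_extract(raw_bits):
    """Pair up raw bits and keep the first bit of each unequal pair.

    For independent, identically biased bits this yields exactly
    unbiased output, at the cost of discarding most of the stream.
    """
    out = []
    for b0, b1 in zip(raw_bits[::2], raw_bits[1::2]):
        if b0 != b1:          # equal pairs (00, 11) are discarded
            out.append(b0)
    return out

# Stand-in for a biased single-photon detector (60/40 split ratio).
rng = random.Random(0)
raw = [1 if rng.random() < 0.6 else 0 for _ in range(10_000)]
unbiased = von_neumann_extract(raw)
mean = sum(unbiased) / len(unbiased)
print(f"{len(unbiased)} output bits, mean = {mean:.3f}")
```

Device-independent certification, as mentioned above, goes further than this classical extractor: it bounds the entropy of the raw clicks themselves against any classical model.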
### Quantum Sampling

What it is: A class of computational tasks in which quantum devices can demonstrate an advantage over classical computers by sampling from a probability distribution that would take impractical classical resources to simulate exactly.
Why photonics is well-suited: The programmable optical mesh is a natural implementation of Boson Sampling and related protocols, which are among the earliest demonstrations of practical quantum advantage.
Status: Demonstrable from Phase 1.
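In Boson Sampling, the probability of detecting a collision-free output pattern is the squared modulus of the permanent of a submatrix of the interferometer's unitary, which is what makes classical simulation hard. A small numerical sketch (the mode count, input pattern, and output pattern below are arbitrary illustrative choices):

```python
from itertools import permutations

import numpy as np

def permanent(M):
    """Naive O(n!) permanent; fine for the tiny n used here."""
    n = M.shape[0]
    return sum(
        np.prod([M[i, p[i]] for i in range(n)])
        for p in permutations(range(n))
    )

def haar_unitary(m, rng):
    """Haar-random m x m unitary via QR of a complex Gaussian matrix."""
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))       # fix column phases for Haar measure

rng = np.random.default_rng(1)
U = haar_unitary(6, rng)             # 6-mode interferometer (illustrative)
inputs, outputs = [0, 1, 2], [1, 3, 5]   # 3 photons in, detected in modes 1,3,5
sub = U[np.ix_(outputs, inputs)]
prob = abs(permanent(sub)) ** 2      # probability of this output pattern
print(f"P(output pattern) = {prob:.4f}")
```

The factorial cost of the permanent is the point: the photonic mesh produces such samples physically, while exact classical simulation scales as n! in the photon number.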
### Photonic AI Matrix Acceleration

What it is: Using the optical interference network to perform the matrix-vector multiplications that dominate neural-network inference and training, at higher speed and lower power than GPU-based approaches.
Why photonics is well-suited: Optical interference naturally implements matrix operations. The same hardware runs quantum algorithms and AI workloads, making the platform uniquely dual-use.
Key metrics (Phase 1–2):
- Matrix computation time: constant, independent of matrix size
- Power: significantly lower than GPU for equivalent throughput, especially with non-volatile optical memory eliminating static control power
- Latency: nanosecond-scale for each matrix operation
Status: Phase 1 demonstrates AI matrix acceleration; Phase 2 targets production-scale workloads.
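An interference mesh natively implements a unitary transfer matrix, and a common way to realise an arbitrary weight matrix is two meshes around a diagonal attenuation stage via the singular value decomposition. A numerical sketch of that factorisation (the matrix and sizes are arbitrary; this models the maths, not any specific Q-Memory circuit):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))          # arbitrary weight matrix (illustrative)
x = rng.normal(size=4)               # input vector encoded in optical amplitudes

# M = U @ diag(s) @ Vh: two unitary meshes around per-mode attenuators.
U, s, Vh = np.linalg.svd(M)
y_mesh = U @ (s * (Vh @ x))          # mesh -> attenuation stage -> mesh
assert np.allclose(y_mesh, M @ x)
print("mesh factorisation reproduces M @ x")
```

In hardware the singular values would be rescaled by `s.max()` so every stage is passive (gain at most 1), with the global scale recovered electronically; the sketch skips that normalisation. The constant-time claim above comes from the fact that all modes interfere simultaneously, so the optical transit time does not grow with matrix size.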
## Mid-Term Applications (Phase 1–2)

These applications require a larger, more integrated platform with on-chip photon sources and real-time feed-forward correction.
### Molecular Simulation (Variational Quantum Eigensolver)

What it is: Finding the ground-state energy of molecules, a problem directly relevant to drug discovery and materials science. Quantum computers can solve this class of problem efficiently where classical computers scale exponentially.
Why photonics is well-suited: Variational algorithms iterate between quantum hardware and classical optimisers. The photonic platform’s programmable mesh implements the quantum circuit component; the CMOS electronics handle the classical loop.
Status: Targeted for Phase 1–2; requires multi-mode entangled state preparation.
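The hybrid loop above can be sketched at toy scale: the quantum hardware step is modelled as preparing a parameterised state on one dual-rail qubit, and a simple parameter scan stands in for the classical optimiser. The 2x2 Hamiltonian is an illustrative placeholder, not a real molecule.

```python
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])          # placeholder 2x2 Hamiltonian

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)> for the ansatz state."""
    psi = np.array([np.cos(theta), np.sin(theta)])   # variational ansatz
    return psi @ H @ psi

# Classical outer loop: scan the variational parameter for the minimum.
thetas = np.linspace(0.0, np.pi, 2001)
vqe_energy = min(energy(t) for t in thetas)
exact = np.linalg.eigvalsh(H).min()
print(f"VQE estimate = {vqe_energy:.4f}, exact ground state = {exact:.4f}")
```

On the real platform the `energy` evaluation would be replaced by repeated preparation and measurement on the programmable mesh, with the CMOS electronics running a gradient-based optimiser rather than a brute-force scan.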
### Photonic AI Training Acceleration

What it is: Running the training loop of large language models and other deep networks partially in optics, matching GPU speed at substantially lower power consumption.
Status: Targeted for Phase 2; requires large-mode non-volatile optical mesh and high-speed reprogramming.
## Longer-Term Applications (Phase 2–3)

These applications require fault-tolerant quantum operation with error correction.
### Fault-Tolerant Quantum Computing

What it is: Running full quantum algorithms, including factoring large numbers, solving combinatorial optimisation problems, and simulating quantum chemistry, with error-corrected logical qubits.
Current state: Photonic fault-tolerant quantum computing requires the loss budget (photon loss per operation) to be below a tight threshold. The best demonstrated systems are just above that threshold; closing the gap is the primary research challenge for Phase 2+.
Status: Research direction; targeted for Phase 2–3.
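The loss-budget constraint can be made concrete with a back-of-envelope model: if every operation transmits a photon with probability (1 - loss), a photon surviving d sequential operations does so with probability (1 - loss)^d. The dB figures and circuit depth below are illustrative placeholders, not platform specifications or actual thresholds.

```python
def survival(loss_per_op: float, depth: int) -> float:
    """Probability a photon survives `depth` operations, each losing
    a fraction `loss_per_op` of the light."""
    return (1.0 - loss_per_op) ** depth

for loss_db in (0.1, 0.03, 0.01):              # dB of loss per op (illustrative)
    loss = 1.0 - 10.0 ** (-loss_db / 10.0)     # convert dB to fractional loss
    print(f"{loss_db} dB/op -> {survival(loss, 100):.1%} survival over 100 ops")
```

The exponential in depth is why even small per-operation improvements matter: halving the per-component loss roughly squares the survival probability at fixed depth, which is the sense in which closing the gap to threshold dominates the Phase 2+ research agenda.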
## Application Comparison

| Application | Timescale | Quantum or Classical? | Key Platform Feature |
|---|---|---|---|
| Quantum key distribution | Phase 0–1 | Quantum | On-chip entangled photon pairs |
| Quantum random numbers | Phase 0–1 | Quantum | Single-photon detection |
| Quantum sampling | Phase 1 | Quantum | Programmable optical mesh |
| AI matrix acceleration | Phase 1–2 | Classical/hybrid | Optical matrix-vector multiply |
| Molecular simulation | Phase 1–2 | Quantum | Feed-forward quantum circuits |
| AI training acceleration | Phase 2 | Classical | Large non-volatile optical mesh |
| Fault-tolerant quantum | Phase 2–3 | Quantum | Error-corrected logical qubits |