Note: This program includes links that allow direct access to detailed sections of the website (keynotes). Clicking a link will automatically take you to the relevant section, even if it is located on a different page.
Alain Aspect
(2022 Physics Nobel Prize)
Keynote Speaker : Djeylan Aktas
(Slovak Academy of Sciences, Slovakia)
Chairman : Jaimes GRIEVE, Romain ALLEAUME, Laurent LABONTE
The rise of quantum computing presents a major threat to the security of current communication networks, making it imperative to act now to protect both existing and future data exchanges. Although theoretical solutions such as post-quantum cryptography (PQC) and quantum key distribution (QKD) are available, these technologies require further development before they can be integrated into current networks.
The Gilles Brassard Laboratory (GBL) responds to this challenge by serving as a research and development platform focused on advancing, testing, and validating key components of quantum-resistant communication networks. The GBL is a quantum research laboratory unique in Europe and of worldwide scope, a powerful tool equipped with an ultra-secure quantum key distribution (QKD) system. The GBL is organized into two main segments: a quantum segment dedicated to implementing and assessing cryptographic technologies such as PQC and QKD, and a radio segment that emulates terrestrial (e.g., LTE/5G/6G) and space-based communications (DVB-S, CCSDS, 5G NTN). Through collaborative efforts, the lab aims to accelerate the deployment of secure, future-proof communication systems by offering a realistic testing environment for researchers and engineers.
The National Quantum-Safe Network (NQSN) in Singapore is a nationwide collaborative platform and a field-deployed test-bed aimed at demonstrating quantum-safe cryptography solutions. NQSN links academic, public, and private members and targets trials of a quantum key distribution (QKD) network with different QKD protocols, post-quantum cryptography (PQC), and classical symmetric-key technologies.
Quantum Key Distribution (QKD) is a technology that enables the sharing of secret cryptographic keys between two distant users (Alice and Bob), with intrinsic security guaranteed by the fundamental laws of nature.
QKD has become a mature technology, and in Europe, all 27 member states are collaborating on a European Commission initiative (EuroQCI) to design, develop, and deploy a quantum communication infrastructure. In Italy, the QUID project is responsible for implementing the Italian segment of EuroQCI [1].
QKD relies on single photons to secure the distribution of the keys and, for it to become a viable real-world solution, the metrological characterization of optical components and systems is fundamental. To meet the appropriate security requirements, test and evaluation methods at the single-photon level need to be developed; in particular, since single-photon detectors represent the most vulnerable part of a QKD system, their characterization in terms of operating parameters (quantum efficiency, dead time, jitter, afterpulsing, ...) is of the utmost importance.
We present the INRIM efforts in the quantum efficiency calibration of single-photon avalanche detectors (SPADs), focusing on QKD applications. The detection efficiency is evaluated for a fibre-coupled InGaAs/InP SPAD [2, 3, 4] and for a free-space Si SPAD [5]. The calibration is performed using different experimental setups and reference standards with proper traceability chains at wavelengths of 1550 nm and 850 nm, respectively. The dependence of detection efficiency on polarization in superconducting nanowire single-photon detectors (SNSPDs) is also reported.
The work is fundamental to aligning the Italian deployment of QKD, in the framework of QUID, with validation needs, providing test services for the characterization, validation, and certification of QKD systems.
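As a minimal illustration of the kind of quantity involved (a simplified model with placeholder numbers, not the INRIM measurement procedure; dead-time and afterpulsing corrections, which matter for free-running detectors, are omitted):

```python
# Sketch: estimating SPAD detection efficiency from the click probability under
# pulsed, attenuated-laser illumination, assuming Poissonian photon statistics
# and a gated detector. All numbers below are illustrative placeholders.
import numpy as np

def detection_efficiency(clicks, pulses, dark_prob, mean_photons):
    """clicks: detector clicks registered over `pulses` trigger gates,
    dark_prob: click probability with the source blocked,
    mean_photons: calibrated mean photon number per pulse at the detector."""
    p_click = clicks / pulses
    # P_click = 1 - (1 - P_dark) * exp(-eta * mu)  =>  solve for eta
    return -np.log((1.0 - p_click) / (1.0 - dark_prob)) / mean_photons

# Example: 1e6 gates, 92,000 clicks, dark-click probability 1e-4, mu = 0.5
print(detection_efficiency(92_000, 1_000_000, 1e-4, 0.5))   # ~0.19
```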
[1] https://quid-euroqci-italy.eu/it/
[2] M. López, A. Meda, G. Porrovecchio, R. A. Starkwood (Kirkwood), M. Genovese, G. Brida, M. Šmid, C. J. Chunnilall, I. P. Degiovanni, and S. Kück, “A study to develop a robust method for measuring the detection efficiency of free-running InGaAs/InP single-photon detectors”, EPJ Quantum Technol. 7, 14, (2020).
[3] H. Georgieva, A. Meda, H. Hofer, S. M. F. Raupach, M. Gramegna, I. P. Degiovanni, M. Genovese, M. López and S. Kück, “Detection of ultra-weak laser pulses by free-running single-photon detectors: Modeling dead time and dark count effects”, Appl. Phys. Lett. 118, 174002, (2021).
[4] S. M. F. Raupach, I. P. Degiovanni, H. Georgieva, A. Meda, H. Hofer, M. Gramegna, M. Genovese, S. Kück, and M. López, “Detection rate dependence of the inherent detection efficiency in single-photon detectors based on avalanche diodes”, Phys. Rev. A 105, 042615 (2022).
[5] S. Virzì, A. Meda, E. Redolfi, M. Gramegna, G. Brida, M. Genovese, and I. P. Degiovanni, “Detection efficiency characterization for free-space single-photon detectors: Measurement facility and wavelength-dependence investigation”, Appl. Phys. Lett. 125 (22), 221108 (2024).
We present QuaNTUM, a modular and extensible quantum communication testbed under development at the Technical University of Munich (TUM) by the Quantum Communication Systems Engineering group (Prof. Tobias Vogl). The project aims to enable scalable, flexible, and secure quantum communication across fiber-based campus networks and satellite-ground links. QuaNTUM is designed as an open-access platform for experimental quantum communication protocols, quantum device benchmarking, and hybrid network integration.
The terrestrial network links quantum research institutes across the TUM Garching campus using single-mode fibers in a star-shaped topology. Each node is equipped with polarization-maintaining components, multiplexers, and time-synchronized analysis modules. A central switching hub dynamically routes quantum channels and supports fiber-to-fiber and wavelength-selective switching. Active polarization control with real-time feedback ensures low error rates and stable qubit transmission, making the infrastructure suitable for high-fidelity QKD and entanglement distribution.
A core component of QuaNTUM is its integration of deterministic single-photon sources based on optically active defects in hexagonal boron nitride (hBN). These carbon-based emitters are created through localized electron beam irradiation and exhibit stable emission in the visible spectrum with sub-Poissonian statistics and short excited-state lifetimes at room temperature. Characterization is performed using custom-built photoluminescence microscopy systems, complemented by FDTD simulations to optimize coupling into optical fibers and photonic structures.
One of these sources will be deployed in orbit as part of the QUICK³ CubeSat mission, marking one of the first demonstrations of a solid-state quantum emitter in space. By uniting fiber and free-space links with scalable hardware and open protocols, QuaNTUM serves as a forward-compatible foundation for future hybrid quantum networks—supporting both current testbed research and the long-term vision of a global quantum internet.
Keynote Speaker : Richard Versluis
(TNO / TU Delft)
Chairman : Guillaume DE GIOVANNI, Mathieu TAUPIN
As the quantum technology landscape continues to evolve, the need for advanced cryogenic solutions becomes increasingly crucial. This paper presents Thales’ innovative products and strategic roadmap aimed at enhancing the capabilities of quantum applications through cryogenics over a wide temperature range (between 2 K and 80 K). We detail Thales’ state-of-the-art cryogenic technologies, including systems designed for cooling quantum processors, sensors, and other critical components, demonstrating their potential to optimize performance and stability in quantum systems. Furthermore, we outline our vision for the future of cryogenics within quantum technology, highlighting ongoing research initiatives, collaboration efforts, and upcoming product developments. Through a comprehensive analysis of market needs and technological advancements, we emphasize Thales’ commitment to addressing the challenges of cooling and maintaining quantum coherence.
Quantum computing has recently gained interest from industry, opening new fields of application. Air Liquide Advanced Technologies, drawing on its experience with ultra-low-temperature systems (its subsidiary CryoConcept has been commercializing dilution refrigerators for scientific labs for 20 years) and with helium refrigeration and liquefaction systems for physics and industry, is actively developing solutions to address the many emerging challenges associated with quantum data centers.
Recently, the challenges of scaling up various quantum computing technologies have been highlighted through the roadmaps of several major players. One key area of development is the need for increased cryogenic cooling power, which could be provided by helium refrigerators similar to those used to cool particle accelerator equipment or fusion reactors.
This presentation will address the adaptation of solutions developed by Air Liquide Advanced Technologies over several years for industrial and scientific helium cryogenics applications. It will focus on the upcoming needs of quantum computing, particularly in terms of energy efficiency, distribution, reliability, and operability, leading to proposals for new cryogenic architectures.
By exploring these aspects, the presentation aims to contribute to the ongoing discourse surrounding the future of quantum computing and its integration into large-scale data centers, offering insights into the intricate challenges and innovative solutions within this burgeoning field.
In this paper, we present initial results demonstrating the use of the Quantum Instrumentation Control Kit (QICK) to perform measurements on a spin qubit. We used the RFSoC to perform energy-selective spin readout through a resonator coupled to the qubit. Using a frequency up-converter, we also drive the spin at its Larmor frequency (typically 15 GHz), which lies beyond the native RFSoC range. These early-stage measurements provide evidence that QICK can be adapted for spin qubit control. This work opens a path towards using an open-source hardware platform (QICK) to characterize and control spin qubits.
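As an illustration of the post-processing behind energy-selective readout (a generic threshold analysis with placeholder values, not the QICK API or the authors' pipeline):

```python
# Illustrative post-processing only: classifying an energy-selective readout
# trace. A spin-up electron tunnels out of the dot during the readout window
# and produces a transient "blip" in the demodulated resonator signal, while
# spin-down produces none. Threshold and window values are placeholders.
import numpy as np

def classify_spin(trace, threshold, readout_window):
    """trace: demodulated amplitude samples; readout_window: (start, stop) indices."""
    start, stop = readout_window
    return int(np.max(trace[start:stop]) > threshold)      # 1 = spin-up, 0 = spin-down

# Example with a synthetic trace containing a blip
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.05, 2000)
trace[800:950] += 0.4                                       # simulated tunnelling blip
print(classify_spin(trace, threshold=0.25, readout_window=(500, 1500)))  # -> 1
```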
The transition from prototype to large-scale quantum computers requires a new class of microwave interconnect solutions capable of operating reliably at ultra-low temperatures. This paper presents a modular cryogenic interconnect framework designed for compatibility with superconducting quantum processors, addressing critical requirements for signal fidelity, thermal anchoring, and system scalability. The proposed architecture integrates cryogenic attenuators, filters, and switches optimized for operation in dilution refrigerators down to 10 millikelvin. Multiphysics modeling, including electro-thermal co-simulation and mesoscopic heat transport analysis, guides component design. Measurement validation combines calibrated RF methods with qubit-based performance metrics to assess thermal noise impact and signal integrity. Standardized calibration and benchmarking protocols ensure reproducibility across platforms. This work supports ongoing standardization efforts within CEN/CENELEC and paves the way toward scalable, low-noise quantum infrastructures.
Scaling quantum computers requires control systems that are both high-performance and cryogenically efficient. Traditional microwave-based electronics struggle in this environment due to heat generation (active and passive), signal loss, and electromagnetic interference. To address these challenges, we introduce a fully photonic control architecture using Viqthor’s MENTHOR, a multichannel RF-over-Fiber (RFoF) transmitter designed for quantum computing applications. MENTHOR converts microwave signals up to 18 GHz into optical signals, transmitting them over single-mode fiber to reduce active heat load and signal loss. With an integrated cryo-compatible multi-channel photodetector, the system enables reliable signal recovery directly inside dilution refrigerators at temperatures as low as 55 K. Our cryo-photonic experimental results feature low relative intensity noise (RIN), high dynamic range, and a flat signal response, ensuring high-fidelity control for quantum operations. This platform represents a major step forward in enabling high-density, fiber-based control systems for next-generation quantum processors.
Chairman : Jérémie GUILLAUD, Simanraj SADANA
Qubit routing is a critical challenge in quantum computing, essential for implementing quantum circuits on hardware with limited connectivity. This paper introduces a novel perspective on qubit routing by exploring the use of bridge gates, which enable the execution of controlled NOT (CNOT) operations over non-adjacent qubits without re-routing the qubits. The study highlights the advantages of bridge gates compared to SWAP gates, particularly their ability to preserve qubit assignments and potentially optimize subsequent routing steps.
We propose an extension to the concept of bridge gates by generalizing them for arbitrary distances and provide constructions demonstrating their feasibility. By analysing their performance, we show that larger-distance bridge gates can significantly reduce the number of CNOT gates in certain circuits. Furthermore, a new qubit routing problem is defined, incorporating both SWAP and bridge gates, and we discuss how this impacts the complexity of the problem.
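For reference, the standard distance-2 bridge gate realizes a CNOT between next-nearest neighbours with four nearest-neighbour CNOTs while leaving the middle qubit and the qubit assignment untouched; a short check of this textbook construction (using Qiskit purely for illustration):

```python
# Standard distance-2 bridge: CNOT between qubits 0 and 2 via middle qubit 1,
# built from four nearest-neighbour CNOTs, with the middle qubit restored.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

bridge = QuantumCircuit(3)
for _ in range(2):              # CX(0,1) CX(1,2) repeated twice
    bridge.cx(0, 1)
    bridge.cx(1, 2)

target = QuantumCircuit(3)
target.cx(0, 2)                 # the logical CNOT between non-adjacent qubits

print(Operator(bridge).equiv(Operator(target)))  # True
```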
Quantum arithmetic computation requires a substantial number of scratch qubits to remain reversible. These operations necessitate qubit and gate resources equivalent to those needed for the larger of the input or output registers due to state encoding. Quantum Hamiltonian Computing (QHC) introduces a novel approach by encoding the input of a logic operation within a single rotating quantum gate. This innovation reduces the required qubit register N to the size of the output states O, where N = log2 O. Leveraging QHC principles, we present reversible half-adder and full-adder circuits that compress the standard Toffoli + CNOT layout [Vedral et al., PRA 54, 11 (1996)] from three-qubit and four-qubit formats for the quantum half-adder circuit, and five sequential Fredkin gates using five qubits [Moutinho et al., PRX Energy 2, 033002 (2023)] for the full-adder circuit, into a two-qubit, 4×4 Hilbert space. This scheme is optimized for classical logic evaluated on quantum hardware, which, owing to unitary evolution, can bypass classical CMOS energy limitations to a certain degree. Although we avoid superposition of input and output states in this manuscript, this remains feasible in principle. We see the best application for QHC in finding the minimal qubit and gate resources needed to evaluate any truth table, advancing FPGA capabilities using integrated quantum circuits or photonics.
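For comparison, the standard Toffoli + CNOT half-adder that the QHC scheme compresses is the textbook three-qubit construction (a minimal Qiskit sketch, shown only as the baseline):

```python
# Standard reversible half-adder (cf. Vedral et al.): |a, b, 0> -> |a, a XOR b, a AND b>,
# i.e. sum on the second qubit, carry on the third.
from qiskit import QuantumCircuit

half_adder = QuantumCircuit(3)
half_adder.ccx(0, 1, 2)   # carry = a AND b (computed before b is overwritten)
half_adder.cx(0, 1)       # sum   = a XOR b
```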
We introduce the concept of the quantum minimal learning machine (QMLM), a supervised similarity-based learning algorithm. The algorithm is conceptually based on a classical machine learning model and adapted to work with quantum data. We motivate the theory and run the model as an error mitigation method for various parameters.
Solving linear systems of equations is a fundamental problem that serves as a crucial foundation in numerous fields. Initiated by the pioneering Harrow-Hassidim-Lloyd algorithm, the asymptotic performance of quantum linear system solvers has steadily improved over time. However, there remains a lack of quantitative evaluation of quantum resources based on explicit gate implementations. In this study, we particularly focus on the matrix inversion method via quantum singular value transformation and explicitly construct a quantum circuit to invert specific matrices using the standard universal gate set, Clifford+T gates. We examine the detailed methods for block-encoding, QSP angle finding, and gate decompositions, and numerically evaluate the overall resources required for the quantum circuit.
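For context, the transformation targeted by QSVT-based inversion can be summarized as follows (the standard formulation; the explicit Clifford+T circuits and angle sequences of this work are not reproduced here):

```latex
% Given a block-encoding U_A of a matrix A whose singular values lie in
% [1/\kappa, 1], QSVT applies an odd polynomial P with P(x) \approx 1/(2\kappa x)
% to the singular values, so that
\bigl(\langle 0|^{\otimes a} \otimes I\bigr)\, U_{\mathrm{QSVT}}\, \bigl(|0\rangle^{\otimes a} \otimes I\bigr)
  \;=\; P^{(SV)}(A) \;\approx\; \frac{A^{-1}}{2\kappa}.
% The QSP angle sequence defining P and its decomposition into Clifford+T gates
% dominate the resource counts evaluated in the paper.
```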
Quantum Machine Learning Algorithms based on Variational Quantum Circuits (VQCs) are important candidates for useful application of quantum computing. It is known that a VQC is a linear model in a feature space determined by its architecture. Such models can be compared to classical ones using various sets of tools, and surrogate models designed to classically approximate their results were proposed. At the same time, quantum advantages for learning tasks have been proven in the case of discrete data distributions and cryptography primitives. In this work, we propose a general theory of quantum advantages for regression problems. Using previous results, we establish conditions on the weight vectors of the quantum models that are necessary to avoid dequantization. We show that this theory is compatible with previously proven quantum advantages on discrete inputs, and provides examples of advantages for continuous inputs. This separation is connected to large weight vector norm, and we suggest that this can only happen with a high dimensional feature map. Our results demonstrate that it is possible to design quantum models that cannot be classically approximated with good generalization. Finally, we discuss how concentration issues must be considered to design such instances. We expect that our work will be a starting point to design near-term quantum models that avoid dequantization methods by ensuring non-classical convergence properties, and to identify existing quantum models that can be classically approximated.
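The underlying identity, stated here only as background, is the standard one:

```latex
% For input encoding x -> \rho(x) and trained observable O_\theta, the VQC output is
f_\theta(x) \;=\; \operatorname{Tr}\!\left[\rho(x)\, O_\theta\right]
          \;=\; \langle\, \phi(x),\, w_\theta \,\rangle_{\mathrm{HS}},
% i.e. linear in the feature map \phi(x) = \rho(x) with weight operator
% w_\theta = O_\theta under the Hilbert-Schmidt inner product; the dequantization
% conditions discussed above constrain the norm of w_\theta.
```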
Chairman : Arnau RIERA, Ronin WU
Quantum Approximate Optimization Algorithm (QAOA) and Variational Quantum Eigensolver (VQE) have emerged as promising approaches for solving portfolio optimization tasks. However, the practical scalability of these methods remains a challenge due to the inherent noise and limitations of Noisy Intermediate-Scale Quantum (NISQ) devices. In this paper, we present Q-PORT (Quantum Portfolio Optimization with Resource-efficient Encoding and Scalability Analysis), a systematic study on the trade-offs between quantum circuit depth, stock encoding strategies, and scalability in quantum portfolio optimization.
We investigate the impact of multi-qubit representations per stock and multi-stock encodings per qubit while varying circuit repetitions and Ansatz types. Our experimental results indicate that increasing qubits per stock offers negligible precision gains compared to classical Mean-Variance Optimization (MVO), while encoding multiple stocks per qubit significantly improves efficiency with minimal precision loss. These findings provide a new pathway toward resource-efficient and scalable quantum portfolio optimization, paving the way for near-term financial applications in quantum computing.
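For context, a generic one-qubit-per-asset mean-variance QUBO of the kind such encodings build on is sketched below (illustrative values; not the exact Q-PORT formulation):

```python
# Generic illustration: min_x  q * x^T Sigma x - mu^T x  with x_i in {0, 1}
# marking whether asset i is selected. Multi-qubit-per-stock or multi-stock-
# per-qubit encodings change how x maps to holdings, not this objective.
import numpy as np

def mean_variance_qubo(mu, sigma, risk_aversion):
    """Return the QUBO matrix Q such that the cost is x^T Q x (x binary)."""
    n = len(mu)
    Q = risk_aversion * sigma.copy()
    Q[np.diag_indices(n)] -= mu        # linear terms absorbed on the diagonal
    return Q

mu = np.array([0.10, 0.07, 0.03])                      # expected returns (placeholder)
sigma = np.array([[0.05, 0.01, 0.00],
                  [0.01, 0.04, 0.01],
                  [0.00, 0.01, 0.02]])                  # covariance matrix (placeholder)
Q = mean_variance_qubo(mu, sigma, risk_aversion=0.5)
```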
Accurately estimating expected payoffs is central to the pricing of European call options, especially when valuation depends on low-probability events in the distribution tail. This study evaluates the performance of Quantum Amplitude Estimation (QAE), using Iterative Amplitude Estimation (IAE) and Maximum Likelihood Amplitude Estimation (MLAE), in pricing European call options based on historical Apple market data. Despite the theoretical advantages of QAE, our experiments show that both quantum estimators return an expected payoff of zero, even in scenarios where classical methods (Black-Scholes and Monte Carlo simulation) yield significantly positive values. This outcome stems from the limited resolution imposed by the small number of uncertainty qubits, which inadequately encode small-amplitude, in-the-money price regions. While the QAE circuits correctly identify the realized market outcome, they fail to capture the full expectation implied by the distribution. These results highlight the current limitations of QAE under realistic constraints and underscore the importance of enhanced encoding strategies for future quantum financial applications.
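A simple numerical illustration of this resolution effect (a generic truncated-Gaussian model with placeholder numbers, not the experimental pipeline):

```python
# With n uncertainty qubits the price S_T lives on a 2^n-point grid, and the
# quantity QAE must resolve is the (rescaled) expectation
#   sum_i p_i * max(S_i - K, 0).
# When the in-the-money region carries only a tiny probability mass, this
# amplitude falls below the precision reachable with few estimation rounds
# and is effectively read out as zero.
import numpy as np

def encoded_payoff_amplitude(n_qubits, low, high, strike, mean, sigma):
    grid = np.linspace(low, high, 2 ** n_qubits)
    probs = np.exp(-0.5 * ((grid - mean) / sigma) ** 2)
    probs /= probs.sum()                                  # truncated-Gaussian stand-in
    payoff = np.maximum(grid - strike, 0.0) / (high - strike)   # rescaled to [0, 1]
    return float(probs @ payoff)                          # amplitude QAE has to estimate

for n in (2, 4, 6):
    print(n, encoded_payoff_amplitude(n, low=150, high=210, strike=205, mean=180, sigma=10))
```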
Combinatorial optimization with a smooth and convex objective function arises naturally in applications such as discrete mean-variance portfolio optimization, where assets must be traded in integer quantities. Although optimal solutions to the associated smooth problem can be computed efficiently, existing adiabatic quantum optimization methods cannot leverage this information. Moreover, while various warm-starting strategies have been proposed for gate-based quantum optimization, none of them explicitly integrate insights from the relaxed continuous solution into the QUBO formulation. In this work, a novel approach is introduced that restricts the search space to discrete solutions in the vicinity of the continuous optimum by constructing a compact Hilbert space, thereby reducing the number of required qubits. Experiments on software solvers and a D-Wave Advantage quantum annealer demonstrate that our method outperforms state-of-the-art techniques.
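One simple way such a restriction can be realized is sketched below (an illustrative encoding, not necessarily the authors' exact construction):

```python
# Each integer holding is written as the rounded continuous optimum plus a small
# signed offset encoded with two binary variables, so only the neighbourhood of
# the relaxed solution enters the QUBO and the qubit count stays small.
import numpy as np

x_relaxed = np.array([12.3, 4.8, 0.2])          # continuous (relaxed) optimum (placeholder)
x_center = np.rint(x_relaxed).astype(int)       # [12, 5, 0]

def decode(bits):
    """bits: flat array of 2 binaries per asset; offset_i = b_{2i} + b_{2i+1} - 1,
    i.e. each holding can move by -1, 0, or +1 around the rounded optimum."""
    bits = np.asarray(bits).reshape(-1, 2)
    return x_center + bits[:, 0] + bits[:, 1] - 1

print(decode([1, 1,  0, 1,  1, 0]))             # -> [13  5  0]
```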
Clustering financial assets based on return correlations is a fundamental task in portfolio optimization and statistical arbitrage. However, classical clustering methods often fall short when dealing with signed correlation structures, typically requiring lossy transformations and heuristic assumptions such as a fixed number of clusters. In this work, we apply the Graph-based Coalition Structure Generation algorithm (GCS-Q) to directly cluster signed, weighted graphs without relying on such transformations. GCS-Q formulates each partitioning step as a QUBO problem, enabling it to leverage quantum annealing for efficient exploration of exponentially large solution spaces. We validate our approach on both synthetic and real-world financial data, benchmarking against state-of-the-art classical algorithms such as SPONGE and $k$-Medoids. Our experiments demonstrate that GCS-Q consistently achieves higher clustering quality, as measured by Adjusted Rand Index and structural balance penalties, while dynamically determining the number of clusters. These results highlight the practical utility of near-term quantum computing for graph-based unsupervised learning in financial applications.
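Schematically, one GCS-Q-style bipartition step on a signed, weighted correlation graph can be written as the following QUBO (illustrative constants; the exact objective used in the paper may differ):

```python
# Minimizing the signed cut  sum_{i<j} w_ij (x_i + x_j - 2 x_i x_j)  keeps
# positively correlated assets together and pushes negatively correlated ones
# into different parts; Q below is what an annealer-style solver would receive.
import numpy as np

def bipartition_qubo(weights):
    """weights: symmetric signed correlation matrix with zero diagonal."""
    n = weights.shape[0]
    Q = -2.0 * np.triu(weights, k=1)                 # quadratic terms -2 w_ij x_i x_j
    Q[np.diag_indices(n)] = weights.sum(axis=1)      # linear terms (sum_j w_ij) x_i
    return Q

W = np.array([[ 0.0,  0.8, -0.5],
              [ 0.8,  0.0, -0.4],
              [-0.5, -0.4,  0.0]])                   # placeholder signed correlations
Q = bipartition_qubo(W)
```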
This paper investigates the application of quantum computing to the Unit Commitment (UC) problem, a fundamental optimisation challenge in power system operations. From the literature, we survey various quantum approaches, including the Quantum Approximate Optimisation Algorithm (QAOA), Quantum Annealing (QA), and hybrid quantum-classical methods. The review shows that quantum computing is widely believed to have the potential to solve the UC problem more efficiently than classical methods. QAOA shows promise in handling binary decision variables, while hybrid methods enhance computational efficiency and scalability. Quantum Annealing is effective for smaller UC instances, with larger problems requiring partitioning. Despite current hardware limitations, advancements in quantum algorithms and hybrid methods provide a strong foundation for future research. This study highlights the transformative potential of quantum computing in optimising power systems, emphasising the need for continued innovation in quantum hardware and error mitigation techniques.
Chairman : Arnau RIERA, Mikko MÖTTÖNEN
Quantum computing has developed since the 1980s, with significant progress in its theoretical and practical applications. A critical aspect of this field is quantum hardware development, which supports research and real-world applications. One notable example of quantum computing’s potential is cryptography, where the RSA protocol is employed to secure browsers and other internet applications. The security of RSA rests on a public modulus that is the product of two prime numbers so large that even supercomputers cannot recover them by factoring in a reasonable amount of time. In 1994, Caltech alumnus Peter Shor proposed Shor’s algorithm, which exploits the unique properties of quantum computers to factorize large numbers quickly and efficiently. Implementing this algorithm on sufficiently capable quantum hardware would compromise the security of the RSA protocol. Quantum computing has been touted as revolutionary, but understanding the progress of the different quantum hardware types is vital. This paper analyzes the types of quantum hardware and the applications they are best suited for, presenting a comprehensive look at most quantum hardware in development. By understanding the current state of quantum hardware, we can gain valuable insights into the potential applications of quantum computing.
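For context, the classical post-processing step of Shor's algorithm is sketched below, with the quantum order-finding routine replaced by a brute-force loop purely for illustration:

```python
# Once the period r of a^r = 1 (mod N) is known (the step a quantum computer
# accelerates), the factors of N follow from a gcd computation.
from math import gcd

def factor_from_order(N, a):
    r = 1
    while pow(a, r, N) != 1:          # quantum order finding replaces this loop
        r += 1
    if r % 2 == 1:
        return None                    # odd period: retry with another base a
    # (a full implementation also retries when a^(r/2) = -1 mod N)
    p = gcd(pow(a, r // 2, N) - 1, N)
    q = gcd(pow(a, r // 2, N) + 1, N)
    return p, q

print(factor_from_order(15, 7))        # -> (3, 5)
```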
We report recent developments in improving coherence times of superconducting qubits at IQM Quantum Computers. We demonstrate how optimized fabrication and design result in increases in coherence times, enabling enhanced performance when integrated into quantum processing units (QPUs). We also report current efforts to mitigate the detrimental effects on coherence of remaining two-level-system (TLS) defects. Our results showcase state-of-the-art fabrication capabilities at IQM Quantum Computers, exhibiting T1 and T2 echo times up to the millisecond level in test devices. These improvements, when applied to QPU-compliant designs and combined with IQM’s quantum computer control software stack, yield consistently improved performance, enabling increased gate fidelities and paving the way to building fault-tolerant quantum computers.
Recent advancements in qubit manipulation in quantum dot arrays and classical/quantum co-integration in FDSOI spin-based quantum circuits shed light on a potential technology path for large-scale quantum computing. We present this path to design and engineer good qubits in a technology as close as possible to the most advanced industrial technological nodes. We propose and investigate qubit designs constrained by known industrial fabrication methods. More precisely, we repurpose W vias and, with a single contact patterning step, integrate both gates to define the electrochemical potential of quantum dots (QDs) and vias to define their coupling barriers in CMOS-based, linear qubit arrays. We show both simulated and experimental results of individual coupling control of QDs in arrays that were fully fabricated in a foundry on the 28 nm FD-SOI platform. We show detailed wafer-level transfer characteristics for each barrier implemented on a 1×3 linear array, at room temperature and at 2 K, which demonstrate that the vias are well-behaved MOSFET gates with electrostatic control over the Si channel.
Statistical inference involves measuring multiple systems parameterized by a common unknown parameter, with the aim of reducing uncertainty about that parameter. It is well established that information processing, such as measurement and erasure, incurs an entropic cost. In this work, we quantify the minimal entropy required to perform inference under general measurement processes, focusing on how correlations between measurements affect this cost. We derive fundamental bounds in two paradigms: one where measurements are performed simultaneously, and another where they are performed sequentially, capturing the roles of spatial and temporal correlations, respectively. In both settings, we show that inter-measurement correlations can act as an entropy reservoir, allowing part of the entropy budget to be effectively recycled when correlations are leveraged. This recycled entropy can be used to perform additional measurements without increasing the overall entropic cost, thereby improving the quality of statistical inference. While developed in the context of inference, our framework applies more broadly, offering a thermodynamic lens on correlated measurement protocols in quantum information.
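As a point of comparison, a standard background bound (not one of the new results derived in this work) already shows how correlations reduce entropic costs:

```latex
% Erasing a system S with the help of a correlated memory M costs at least
W_{\mathrm{erase}} \;\geq\; k_B T \ln 2 \; H(S\,|\,M)
  \;=\; k_B T \ln 2 \,\bigl[\, H(S) - I(S\!:\!M) \,\bigr],
% so the mutual information I(S:M) built up by correlated measurements offsets
% part of the cost, in the spirit of the "entropy reservoir" picture above.
```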
Chairman : Rémi de La VIEUVILLE, Frank PHILLIPSON
As drone traffic and air taxi services expand globally, regulators face increasing pressure to manage complex airspace safely, while operators aim to optimise energy-efficient routes. Traditional computing may not scale to meet future path planning demands, making quantum computing a promising alternative. However, the variety of qubit modalities across quantum processing units (QPUs) raises questions about which platforms are most suitable for such applications. This paper presents a hybrid quantum-classical drone path planning approach that accounts for urban obstacles and its implementation on two QPUs: D-Wave’s quantum annealer and Pasqal’s neutral atom processor. We show that both quantum solver implementations can produce paths of comparable or better quality than a classical solver, and present insights on their performance, scalability, and challenges for real-world deployment.
In the era of Industry 4.0, where automation, traceability, and secure data flows are foundational to modern industrial ecosystems, blockchain technology has emerged as a powerful tool to ensure transparency, auditability, and resistance to tampering. By offering an immutable ledger of transactions or events, blockchains are increasingly adopted in manufacturing, logistics, healthcare, and energy sectors to track critical operations and ensure trust among stakeholders.
However, the integrity of any blockchain-based system critically depends on the quality of the data it ingests. If the data recorded is predictable, manipulated, or inserted without sufficient entropy or verification, the immutability of the blockchain becomes moot. This paper introduces a proof-of-concept framework that integrates Quantum Random Number Generation (QRNG) and smart contract-based logging on Ethereum to strengthen the cryptographic robustness of industrial event recording. Each event is timestamped, tagged with a quantum-generated number, described via metadata, and hashed using SHA-256 to produce a unique fingerprint. The hash is then immutably stored on a local Ethereum blockchain. The system demonstrates a fusion of quantum physical entropy and decentralized blockchain trust.
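A minimal sketch of the event-fingerprinting step described above (field names and the entropy source are illustrative placeholders; the Ethereum smart-contract storage is not shown):

```python
# Each industrial event is serialized with its timestamp, metadata and a
# quantum-generated tag, then hashed with SHA-256 to produce the fingerprint
# that would be stored on-chain.
import hashlib
import json
import time
import secrets

def fingerprint_event(event_type, metadata, qrng_bytes):
    record = {
        "timestamp": time.time_ns(),
        "event": event_type,
        "metadata": metadata,
        "qrng_tag": qrng_bytes.hex(),      # entropy from the QRNG device
    }
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest                  # digest is what gets logged on-chain

# Placeholder entropy: a real deployment would read from the QRNG hardware
record, digest = fingerprint_event("valve_open", {"line": 3, "operator": "A12"},
                                   secrets.token_bytes(32))
print(digest)
```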
The paper proposes a Quadratic Unconstrained Binary Optimization (QUBO) formulation for a power grid fault location problem using sparse measurements. The sparse approximation problem consists of solving an underdetermined system of complex-valued equations. By enforcing the grid-dependent sparsity of the problem, the number of quantum bits necessary to solve the system is reduced, opening possibilities for industrial, large-grid applications with lowered computational cost.
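The generic reduction underlying such formulations can be sketched as follows (schematic; the paper's grid-specific encoding and sparsity enforcement are not reproduced here):

```latex
% Each unknown of the underdetermined system A x \approx b is expanded over
% binary variables, and the residual norm becomes a QUBO objective:
x_j \;=\; \sum_{k} 2^{k}\, q_{j,k}, \quad q_{j,k} \in \{0,1\},
\qquad
\min_{q}\; \bigl\lVert A\, x(q) - b \bigr\rVert_2^2 .
% Complex-valued equations are handled by splitting real and imaginary parts,
% and enforcing the expected sparsity of the fault vector removes most of the
% q_{j,k}, which is what lowers the required qubit count.
```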