Keynote speaker : to be confirmed
Chairman :
The advancement of quantum computing threatens classical cryptographic methods, necessitating the development of secure quantum key distribution (QKD) solutions for QKD Networks (QKDN). In this paper, a novel key distribution protocol, Onion Routing Relay (ORR), which integrates onion routing (OR) with post-quantum cryptography (PQC) in a key-relay (KR) model, is evaluated for QKDNs. This approach increases security by enhancing confidentiality, integrity, authenticity (the CIA principles), and anonymity in quantum-secure communications. By employing PQC-based encapsulation, ORR aims to avoid the security risks posed by malicious intermediate nodes and to ensure end-to-end security. Our results show competitive performance of the basic ORR model against current KR and trusted-node (TN) approaches, demonstrating its feasibility and applicability in high-security environments while maintaining a consistent Quality of Service (QoS). The results also show that while basic ORR incurs higher encryption overhead, it provides substantial security improvements without significantly impacting the overall key distribution time. Nevertheless, the introduction of an end-to-end authentication extension (ORR-Ext) has a significant impact on QoS, thereby limiting its suitability to applications with stringent security requirements.
Quantum Key Distribution (QKD) represents a groundbreaking method for secure communication, leveraging the principles of quantum mechanics to exchange cryptographic keys securely.
QKD networks may be categorized into point-to-point and multi-node systems. Point-to-point configurations involve a direct quantum connection between two nodes but are limited in range. Conversely, multi-node QKD networks connect several users via relay stations, extending the reach of key distribution. Multi-node QKD networks imply the need for trusted relays that serve as intermediaries for key routing, but these relays impose stringent security constraints.
In this work we propose a complete architecture for a QKD network, from the physical layer up to the network controller. Our design is based on a centralized approach to quantum key management to mitigate security risks at the trusted relays.
This paper explores the essential components, design principles, and architectural models of QKD networks, which are critical in enabling secure information exchange amid rising cybersecurity concerns. Security within QKD networks is paramount, encompassing measures to address various threats, including the establishment of quantum-safe classical networks, rigorous node authentication, and dynamic eavesdropping countermeasures. Moreover, our architecture accommodates scalability and interoperability to integrate new nodes and technologies seamlessly. All these features are addressed in this paper, with a focus on the centralized key management system we implemented to drive QKD.
This study explores the implementation of a Trusted Node (TN) with an optical switch to enhance the reach of QKD systems. The use of the switch reduces the number of required Alice/Bob pairs, thereby lowering costs. A testbed was developed within the FranceQCI project to evaluate the performance of the optical switch-based QKD system in the context of Air Traffic Control (ATC) data transmission. The setup involved two polarization-based QKD transmitters, a switch, a single receiver, and two encryptors. Results show that the optical switch effectively allows the transmission of end-to-end encryption keys. Furthermore, the availability of encryption keys remains sufficient even during physical link interruptions, ensuring data confidentiality and integrity. User-centric evaluations demonstrated that the QKD-based encryption process did not significantly impact data crossing times, aligning with operational requirements.
Continuous-variable quantum key distribution (CV-QKD) enables the secure exchange of encryption keys by transmitting quantum states over optical channels. A crucial aspect of these systems is the estimation of receiver noise, which is assumed, in the trusted noise model, to be known and not controlled by an eavesdropper. Traditionally, this noise has been estimated through electronic calibration alone, leaving out optical contributions such as local oscillator impairments. As a result, the receiver noise characterization has been incomplete, which can affect system performance, particularly in scenarios involving long transmission distances or high receiver noise, where precise receiver noise estimation becomes critical for maintaining positive secret key rates. In this work, we propose a method based on controlled attenuation at the receiver, allowing direct measurement of the total receiver noise, including both electronic and optical components. This measured value can then be integrated into the enhanced parameter estimation process to improve the accuracy of security analyses and optimize key rates in practical CV-QKD implementations.
Chairman :
Quantum Neural Networks (QNNs) represent a promising frontier in quantum machine learning, offering potential advantages in terms of expressivity and computational efficiency. However, their rapid development has outpaced critical attention to their security. In this work, we position QNN security as a foundational research challenge and present a comprehensive threat model spanning input encoding, circuit compilation, execution, and deployment. We introduce a taxonomy of emerging quantum-specific attack classes, including dataset poisoning, circuit tampering, side-channel leakage, multi-tenant interference, and federated learning threats. Our analysis highlights key gaps, including the lack of quantum-native robustness metrics, hardware variability, and cross-layer vulnerabilities. To address these issues, we outline strategic directions for developing secure-by-design QNN architectures, emphasizing formal verification, quantum-aware defenses, and adversarial benchmarking. This work lays the groundwork for a systematic approach to building trustworthy and resilient quantum learning systems.
Linear optical architectures have been extensively investigated for quantum computing and quantum machine learning applications. Recently, proposals for photonic quantum machine learning have combined linear optics with resource adaptivity, such as adaptive circuit reconfiguration, which promises to enhance expressivity and improve algorithm performance and scalability. Moreover, linear optical platforms preserve some subspaces due to the fixed number of particles during the computation, a property recently exploited to design a novel quantum convolutional neural network. This architecture has shown an advantage in running-time complexity and in the number of parameters needed with respect to other quantum neural network proposals. In this work, we design and experimentally implement the first photonic quantum convolutional neural network (PQCNN) architecture based on particle-number-preserving circuits equipped with state injection, an approach recently proposed to increase the controllability of linear optical circuits. We then experimentally validate the PQCNN for binary image classification on a photonic platform using a semiconductor quantum-dot-based single-photon source and programmable integrated photonic interferometers comprising 8 and 12 modes. To investigate the scalability of the PQCNN design, we have performed numerical simulations on datasets of different sizes. We highlight the potential utility of a simple adaptive technique for a nonlinear Boson Sampling task, compatible with near-term quantum devices.
The Bell and Clauser-Horne-Shimony-Holt (CHSH) inequalities both test quantum mechanics against local hidden variable theories but differ fundamentally in experimental feasibility. The original Bell inequality requires perfect anti-correlation (perfectly opposite outcomes when measurement settings align) and ideal detectors, assumptions unattainable in real-world setups due to noise and inefficiencies, which limits its practical utility. In contrast, the CHSH inequality tolerates imperfections, requiring only statistical correlation measurements across four setting combinations. This robustness makes it the preferred choice for experiments such as Mach-Zehnder interferometer-based tests, where quantum states (e.g., single photons) are manipulated to violate CHSH. While other Bell-type inequalities could theoretically be implemented in such setups, CHSH dominates due to its resilience to detector limitations and noise. These features make it pivotal for industry-scale Quantum Key Distribution (QKD) protocols. In QKD, entangled photon pairs are exchanged between Alice and Bob, who perform measurements in different bases chosen using genuinely random numbers. They calculate the CHSH value $S = \langle AB \rangle + \langle AB' \rangle + \langle A'B \rangle - \langle A'B' \rangle$, where the angle brackets represent correlation functions between their measurement outcomes. If Alice and Bob observe $S > 2$, this confirms that they share genuine quantum entanglement; no eavesdropper can have complete information about their key, and the correlations cannot be explained by classical physics.
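As a concrete illustration of the CHSH estimate defined above, the following minimal Python sketch samples +/-1 outcomes whose correlations follow the ideal model E(a, b) = cos(a - b) for a maximally entangled pair and evaluates S at settings chosen to maximize it; the correlation model, angles, and shot count are illustrative assumptions rather than a description of any particular experiment.

# Minimal numerical illustration of the CHSH estimate S defined above.
# Assumption: ideal maximally entangled pair with correlation E(a, b) = cos(a - b);
# outcomes are sampled per setting pair, as Alice and Bob would from measurement records.
import numpy as np

rng = np.random.default_rng(7)

def sample_correlation(a, b, shots=100_000):
    """Sample +/-1 outcome pairs whose correlation equals cos(a - b)."""
    c = np.cos(a - b)
    same = rng.random(shots) < (1 + c) / 2      # P(A == B) = (1 + c) / 2
    alice = rng.choice([-1, 1], size=shots)
    bob = np.where(same, alice, -alice)
    return np.mean(alice * bob)

# Settings that maximize S for this correlation model (S -> 2*sqrt(2) ideally).
a, a_p = 0.0, np.pi / 2
b, b_p = np.pi / 4, -np.pi / 4

S = (sample_correlation(a, b) + sample_correlation(a, b_p)
     + sample_correlation(a_p, b) - sample_correlation(a_p, b_p))
print(f"Estimated S = {S:.3f} (classical bound 2, Tsirelson bound {2*np.sqrt(2):.3f})")

Any estimate above the classical bound of 2 signals correlations that no local hidden variable model can reproduce.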
While the mathematical rigour of the original Bell inequality makes its practical adoption almost impossible, the CHSH approach is preferred not only for its experimental feasibility but also for its built-in self-testing capabilities, which are critical for real-world quantum security. This feature is essential for the mathematical formulation of QKD because it provides the theoretical foundation for device-independent security based on CHSH, the gold standard for quantum cryptography. Theoretically, it simultaneously verifies the quantum state, detects eavesdropping, enables scalable implementations, and maintains security even when the hardware cannot be trusted [1].
Despite the mathematical excellence of CHSH-based QKD, its practical engineering implementations can compromise its security foundations, creating space for so-called quantum hacking. Several real-world quantum hacking techniques have been successfully demonstrated against commercial QKD systems as of 2025, utilising methods such as the Phase Remapping Attack, Trojan Horse Attack, Time-Shift Attack, and Detector Blinding Attack. This work introduces a new method to exploit weak Random Number Generators (RNGs) in QKD systems using quantum RNG (qRNG) and machine learning. By monitoring entropy and external factors (e.g., temperature), we demonstrate how compromised RNGs enable side-channel attacks. Our framework integrates:
– Multi-modal RNG Analysis: A 12-qubit qGAN quantifies distribution similarity via relative entropy (KL divergence <0.1) and discriminator loss (3.7–17), validated on Rigetti Aspen-M-3 (80 qubits) and IonQ Aria-1 (25 qubits) hardware.
– Bias Detection: Markov chain-enhanced logistic regression identifies hardware-induced biases (59% ‘1’ frequency in compromised RNGs vs. 54% in certified devices).
– Real-Time Monitoring: A neural network classifier (58.7% accuracy) discriminates quantum-generated randomness from classical simulations using 100-bit entropy profiles.
Experimental results show CHSH scores correlate with RNG quality (Rigetti: 0.8036, IonQ: 0.8362), while gate fidelity (IonQ: 99.4% vs. Rigetti: 93.6%) impacts certifiable randomness. Combining device-independent CHSH validation with machine learning, this framework detects attacks like phase remapping and detector blinding through entropy deviations.
By bridging theoretical security with engineering realities, our methodology addresses critical QKD implementation gaps. For instance, temperature-induced RNG biases, tracked via real-time entropy monitoring, enable side-channel exploitation unless mitigated by adaptive protocols. This approach enhances NIST's "randomness extractor" pipeline, ensuring compliance with post-quantum standards under realistic noise conditions [2].
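As an illustration of the bias-detection and entropy-monitoring components listed above, the following Python sketch computes the '1' frequency and a KL divergence to the uniform distribution over 8-bit blocks for a healthy and a deliberately biased bit stream; the classical pseudorandom streams, block size, and thresholds stand in for qRNG output and the qGAN-based analysis and are illustrative assumptions only.

# Illustrative monitor for the bias-detection / entropy checks described above.
# Assumptions: the bit streams are classical stand-ins for qRNG output; the 0.59
# '1'-frequency reference point is taken from the abstract, not re-derived here.
import numpy as np

def ones_frequency(bits):
    return float(np.mean(bits))

def kl_to_uniform(bits, block=8):
    """KL divergence (bits) between the empirical 8-bit block distribution and uniform."""
    n = (len(bits) // block) * block
    blocks = np.packbits(bits[:n].reshape(-1, block), axis=1).ravel()
    counts = np.bincount(blocks, minlength=2 ** block).astype(float)
    p = counts / counts.sum()
    q = 1.0 / 2 ** block
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / q)))

rng = np.random.default_rng(0)
healthy = rng.integers(0, 2, 1_000_000, dtype=np.uint8)      # unbiased stand-in
biased = (rng.random(1_000_000) < 0.59).astype(np.uint8)      # 59% '1' frequency

for name, stream in [("healthy", healthy), ("biased", biased)]:
    print(name, f"p(1)={ones_frequency(stream):.3f}", f"KL={kl_to_uniform(stream):.4f}")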
References:
[1] Zapatero, V., van Leent, T., Arnon-Friedman, R. et al. Advances in device-independent quantum key distribution. npj Quantum Inf 9, 10 (2023).
https://www.nature.com/articles/s41534-023-00684-x
[2] Kołcz, H., Pandey, T., Shah, Y. et al. Verification of qRNG with qGAN and Classification Models. QUACC+ CTP PAS, Warsaw, Poland (2025).
Poster: https://github.com/hubertkolcz/NoiseVsRandomness
BOA: https://quacc2025.cft.edu.pl/QUACC+2025_BOA.pdf
The transition of quantum computing into practical applications depends on successfully combining quantum algorithms with existing classical software systems. Resources for creating hybrid applications are currently fragmented, which demands collaboration across various disciplines. Moreover, there are currently no clear predefined methods, standards, or solutions for these complex operations. We hereby introduce EniQmA, a quantum software engineering framework for unifying the development, deployment, and operation of hybrid quantum-classical applications. EniQmA integrates agile workflows with domain-specific toolchains and quantum DevOps practices to connect theoretical quantum algorithms with scalable industrial applications. The framework delivers structured process models together with quality standards and cohesive development tools to produce reliable and sustainable quantum software. Through its low-code/no-code capabilities, EniQmA makes quantum application development accessible to domain experts with minimal quantum expertise and to specialists with little programming experience. The framework demonstrates its effectiveness in three real-world industrial scenarios, which show how it helps move research into practical deployment. EniQmA is an essential first step toward standardizing quantum software engineering, accelerating the journey to practical quantum advantage in industrial applications.
We propose a new scheme for near-term photonic quantum devices that increases the expressive power of quantum models beyond what linear optics can achieve. This scheme relies upon state injection, a measurement-based technique that can produce states that are more controllable and can solve learning tasks believed to be classically intractable. We explain how circuits made of linear optical architectures separated by state injections are well suited for experimental implementation. In addition, we give theoretical results regarding the evolution of the purity of the resulting states, and we discuss how it impacts the distinguishability of the circuit outputs. Finally, we study a computational subroutine of learning algorithms named probability estimation, and we show that the state injection scheme we propose may offer a potential quantum advantage in a regime that can be reached more easily than with state-of-the-art adaptive techniques. Our analysis offers new possibilities for near-term advantage that rely on overcoming fewer experimental difficulties.
Keynote speaker : Frank Phillipson (TNO)
Chairman :
Rare-earth (RE) doped crystals are a promising platform for quantum memories: at cryogenic temperatures they feature narrow optical transitions and long hyperfine coherence times [1], while maintaining very large inhomogeneous absorption lines, allowing for higher degrees of multiplexing. Current storage experiments in bulk RE doped crystals display limited efficiency at long storage times due to low absorption [2] and poor light-matter interaction strength caused by free-space beam divergence, especially when optically long samples are needed. Waveguides in RE doped crystals appear as a good solution to enhance memory performance: the optical Rabi frequency remains constant in the material, and the interaction length can be extended to match the desired absorption depth [3] to reach maximum efficiency, together with improved scalability [4]. For this reason, many techniques for fabricating integrated quantum memories in RE ensembles have been investigated. Most of them either show poorer spectroscopic performance [5] or cannot support complex functions [6] such as couplers, curvatures or cavities.
Here we present the fabrication of single-crystalline Y2SiO5 waveguides. These structures are made from bulk crystals and should preserve the bulk properties. We describe our progress on the different steps of the fabrication process, including recent results on crystal bonding, YSO layer thinning down to micrometric thicknesses, and dry etching of YSO. Simulations have been performed to determine the geometry and dimensions of the waveguide structure. Efficient cryogenic fiber coupling is verified, and low-temperature erbium spectroscopy in these crystalline waveguides will be presented and compared to the bulk.
[1] M. Zhong et al., Nature 517, 177-180 (2015).
[2] Y. Ma et al., Nat. Commun. 12, 2381 (2021).
[3] M. Afzelius et al., Phys. Rev. A 79, 052329 (2009).
[4] Jing et al., Appl. Phys. Rev. 11, 031304 (2024).
[5] Ourari et al., Nature 620, 977-981 (2023).
[6] G. Corrielli et al., Phys. Rev. Appl. 5, 054013 (2016).
Quantum Information networks (QIN) aim to interconnect quantum systems via quantum repeaters through end-to-end entanglement of qubits. The integration of both satellites and ground segments is essential to provide global connectivity, even over large distances.
In this paper, we study the problem of entanglement routing in large-scale quantum networks with a hybrid space-ground architecture. We decompose the routing problem into two subcomponents: request scheduling and path selection.
We design and implement an entanglement routing strategy in an emulated European quantum network, first in a custom Python-based simulator for performance forecasting, and then into the NetSquid quantum network simulator. Our work thus provides a tool allowing for a comparison between terrestrial and satellite segments in terms of successful end-to-end entanglement in a large-scale network and shows that both segments are required for optimal service.
The National Quantum-Safe Network (NQSN) in Singapore is a nationwide collaborative platform and a field-deployed test-bed aimed at demonstrating quantum-safe cryptography solutions. NQSN links academic, public and private members and targets trials of quantum key distribution (QKD) networks with different QKD protocols, post-quantum cryptography (PQC) and classical symmetric-key technologies.
The so-called Quantum Information Networks (QIN) promise to revolutionize the world with new applications based on the interconnection of quantum devices such as quantum computers, quantum sensors and physically secured cryptographic receivers. Such networks employ photons as a means of propagating quantum information in order to create entanglement between the end-users' devices. Over long distances, satellites will become mandatory in the network as they offer better scaling of optical losses compared to optical fibres. In this paper, we focus on the physical principle at the heart of the Bell-State Measurement devices used in the QIN to swap entanglement between and inside the network nodes: the Hong-Ou-Mandel effect.
Chairman :
This work presents numerical experiments aimed at verifying solutions of Poisson's equation using two existing methodologies. First, block-diagonalization is employed to block-encode the matrix derived from Poisson's equation through the finite difference method (FDM), significantly improving computational complexity from $N$ to $\log(N)$, where $N$ is the matrix size. Second, the Quantum Singular Value Transformation (QSVT) algorithm is applied to invert the matrix. However, while block-diagonalization improves the complexity in $N$, QSVT introduces a bottleneck due to its linear dependency on the condition number $\kappa$, which grows exponentially with $N$, posing challenges for large-scale problems. To the best of our knowledge, these are the first numerical experiments solving problems with matrix size $N=1024$ and condition number $\kappa=500000$; the largest matrix size and condition number from existing works are $16$ and $<100$, respectively.
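To make the two quantities driving the cost above concrete, the following NumPy sketch assembles the standard one-dimensional finite-difference Poisson matrix and reports its size N and condition number kappa for increasing qubit counts; the 1D Dirichlet discretization is an illustrative assumption and not necessarily the discretization used in the paper.

# Build the 1D finite-difference Poisson matrix and report N and kappa,
# the two parameters the abstract identifies as driving the QSVT cost.
# Assumption: standard second-order Dirichlet discretization in 1D (illustrative only).
import numpy as np

def poisson_fdm_matrix(n):
    """Tridiagonal matrix of -u'' with homogeneous Dirichlet boundary conditions."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for n_qubits in range(3, 9):
    N = 2 ** n_qubits                      # matrix size handled with n_qubits system qubits
    A = poisson_fdm_matrix(N)
    kappa = np.linalg.cond(A)
    print(f"n_qubits={n_qubits:2d}  N={N:4d}  kappa={kappa:10.1f}")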
Fault Tree Analysis (FTA) is one of the main problems in reliability studies, in particular for the probabilistic safety assessment of complex systems such as aircraft or nuclear power plants. The extraction of Minimal Cut Sets (MCS) from a fault tree can be reduced to the problem of finding vertex separators in S-T directed acyclic graphs (S-T DAGs). Prior work introduced a quantum algorithm to identify these s-t vertex separators, but it faces scalability issues on larger instances. In this paper, we present a novel hybrid quantum-classical algorithm to solve the s-t connectivity problem using a divide-and-quantum strategy. Large s-t DAG instances are partitioned into many subgraphs that fit the qubit capacity of current quantum hardware; each subgraph is solved on a quantum device, yielding partial solutions. These solutions are then recombined classically to produce all global vertex separators of the original s-t DAG, which are mapped back to the MCS of the original fault tree. The results show that we were able to solve larger instances that had not previously been solved using a classical algorithm. We also quantify the effect of realistic noise by comparing results from noisy versus noise-free simulation, and report on prototype runs using IBM's publicly available quantum hardware.
Ensuring the safe functioning of complex systems such as nuclear facilities requires Probabilistic Safety Assessment (PSA), where identifying the Minimal Cut Sets (MCS) of a Fault Tree (FT) is a fundamental task. Identifying MCS is computationally complex due to the exponential growth of possible combinations as system size increases, which motivates the exploration of alternative computational paradigms such as quantum computing. This work investigates the application of quantum annealing to this combinatorial problem by modeling the FT as a Boolean function. We implement and evaluate two existing QUBO encodings of SAT formulations of FTs: one based on a polynomial-time reduction from the SATisfiability (SAT) problem to the Maximum Independent Set (MIS) problem, and another based on a 2-SAT reduction using reusable auxiliary variables. Since the goal is to extract all valid solutions to the SAT instances, each corresponding to a cut set, we compare these two approaches in terms of the number of distinct MCS obtained and the number of qubits required for each encoding. The results provide insights into the relative effectiveness of different encoding strategies for addressing fault-tree analysis with quantum annealers, which is a realistic application in a PSA context.
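As a small illustration of the Maximum Independent Set building block used by the first encoding, the following Python sketch constructs a MIS QUBO for a toy graph and checks it by brute force; the penalty weight and the exhaustive check are illustrative choices and not the encodings evaluated in this work.

# Illustrative QUBO for Maximum Independent Set (MIS), the target of the first
# SAT reduction mentioned above. Penalty weight and the brute-force check are
# illustrative choices; they are not the encodings evaluated in the paper.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # toy graph on 4 vertices
n, penalty = 4, 2.0

# Minimize  -sum_i x_i + penalty * sum_{(i,j) in E} x_i x_j
Q = {(i, i): -1.0 for i in range(n)}
for i, j in edges:
    Q[(i, j)] = Q.get((i, j), 0.0) + penalty

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product([0, 1], repeat=n), key=energy)
print("best assignment:", best, "energy:", energy(best))  # an independent set of maximum size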
Monte Carlo particle transport codes are well established on classical hardware and are considered the reference tool for nuclear applications. In a growing number of domains, the design of algorithms is progressively shifting towards quantum computing, where theoretical speedups over classical counterparts are expected. In some of these domains, Monte Carlo methods have already been converted to a quantum-computing-friendly setting where the expected and observed gain in complexity is quadratic.
In this work, we address the particle transport problem in view of a first implementation on these architectures. We propose a quantum algorithm to model particle transport based on discrete-time quantum walks and compare our approach to Monte Carlo and deterministic classical numerical schemes. The proof-of-concept algorithm is applied to a small 10-qubit problem by simulating its behavior on both ideal and noisy quantum computers using the Qiskit framework, with the aim of reproducing classical results.
Quantum hardware is progressing rapidly, but many high-performance algorithms are not yet feasible due to qubit overhead and the limitations of NISQ devices. Quantum pattern matching algorithms, which on average outperform classical methods [A. Montanaro, Algorithmica 77(1), 16-39 (2017)], rely on binary quantum embeddings and are not practical until we have larger qubit counts and fault-tolerant computing. As a result, intermediate pattern matching algorithms for qubit-efficient embeddings, i.e., amplitude-encoded data (for which recent advances have reduced the embedding overhead), are still lacking. In this work, we introduce a sliding window approach and explore suitable primitives, including the standard and destructive swap tests, the classical shadow method, and the use of a quantum artificial neuron, all of which have been run on NISQ devices, and evaluate their performance. Additionally, we integrate the swap test into a hybrid quantum support vector machine (QSVM) sliding window pipeline, improving noise resilience and quantum image analysis. Our proposed algorithms are straightforward to implement and applicable to pattern matching on quantum computers.
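The following NumPy sketch illustrates the quantity the swap-test primitives estimate in the sliding-window approach: the squared overlap between an amplitude-encoded window and the pattern, which a swap test returns through the ancilla statistic P(0) = (1 + |<a|b>|^2) / 2; the signal, pattern, and window length are hypothetical and no quantum backend is involved.

# Classical sketch of the sliding-window similarity that the swap-test primitives
# estimate: |<window|pattern>|^2 for amplitude-encoded windows. On hardware, a swap
# test would return P(ancilla=0) = (1 + |<a|b>|^2) / 2. Data here is hypothetical.
import numpy as np

def amplitude_encode(x):
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

signal = np.array([0.1, 0.2, 0.9, 0.8, 0.2, 0.1, 0.9, 0.7])
pattern = amplitude_encode([0.9, 0.8, 0.2, 0.1])
w = len(pattern)

for start in range(len(signal) - w + 1):
    window = amplitude_encode(signal[start:start + w])
    overlap_sq = abs(np.dot(window, pattern)) ** 2   # what the swap test estimates
    p_zero = (1 + overlap_sq) / 2                    # ancilla '0' probability in a swap test
    print(f"offset {start}: |overlap|^2 = {overlap_sq:.3f}, swap-test P(0) = {p_zero:.3f}")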
Chairman :
Quantum computing holds great promise for solving classically intractable problems such as linear systems and partial differential equations (PDEs). While fully fault-tolerant quantum computers remain out of reach, current noisy intermediate-scale quantum (NISQ) devices enable the exploration of hybrid quantum-classical algorithms. Among these, Variational Quantum Algorithms (VQAs) have emerged as a leading candidate for near-term applications.
In this work, we investigate the use of VQAs to solve PDEs arising in stationary heat transfer. These problems are discretized via the finite element method (FEM), yielding linear systems of the form Ku=f, where K is the stiffness matrix. We define a cost function that encodes the thermal energy of the system, and optimize it using various ansatz families. To improve trainability and bypass barren plateaus, we introduce a remeshing strategy which gradually increases resolution by reusing optimized parameters from coarser discretizations. Our results demonstrate convergence of scalar quantities with mesh refinement.
This work provides a practical methodology for applying VQAs to PDEs, offering insight into the capabilities and limitations of current quantum hardware.
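As a minimal illustration of the energy-style cost used for Ku = f, the following NumPy sketch assembles a 1D stiffness matrix and verifies that the quadratic functional E(u) = 0.5 u^T K u - f^T u is smallest at the FEM solution; the 1D mesh, unit conductivity, and uniform load are illustrative assumptions rather than the problems studied in the paper.

# Illustrative check of the energy-style cost behind the VQA: for K u = f the
# functional E(u) = 0.5 * u^T K u - f^T u is minimized exactly at the FEM solution.
# Assumptions: 1D stationary heat problem, uniform mesh, unit conductivity.
import numpy as np

def stiffness_1d(n_elems, length=1.0):
    """Assemble the 1D linear-element stiffness matrix with Dirichlet ends removed."""
    h = length / n_elems
    n = n_elems - 1                               # interior nodes only
    K = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
    return K, h

n_elems = 16
K, h = stiffness_1d(n_elems)
f = np.full(K.shape[0], h)                        # uniform heat source, lumped load vector

def energy(u):
    return 0.5 * u @ K @ u - f @ u

u_star = np.linalg.solve(K, f)                    # FEM solution
u_pert = u_star + 0.05 * np.random.default_rng(1).standard_normal(len(f))
print("E(solution)  =", energy(u_star))
print("E(perturbed) =", energy(u_pert), "(always larger, since K is positive definite)")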
Classical simulation of molecular systems is limited by exponential scaling, a hurdle quantum algorithms like Variational Quantum Eigensolvers (VQEs) aim to overcome. Although ADAPT-VQE enhances VQEs by dynamically building ansätze, it can remain computationally intensive. This work presents K-ADAPT-VQE, which improves efficiency by adding operators in chunks of K each iteration. Our results from simulating small molecular systems show that K-ADAPT-VQE substantially reduces VQE iterations and quantum function calls for achieving chemical accuracy in molecular ground state calculations.
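A pseudocode-style Python sketch of the chunked selection as we understand it is given below: at each iteration the K pool operators with the largest gradient magnitudes are appended at once, instead of a single operator as in standard ADAPT-VQE; the operator pool, gradient routine, and optimizer are mock placeholders, not the implementation used in this work.

# Sketch of chunked operator selection as we understand K-ADAPT-VQE: append the K
# largest-gradient pool operators per iteration instead of one. The gradient and
# energy routines below are mock placeholders, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(3)
pool = [f"op_{i}" for i in range(20)]          # hypothetical excitation-operator pool

def pool_gradients(ansatz, params):
    """Placeholder for |<psi| [H, A_i] |psi>| evaluated on a quantum backend."""
    return rng.random(len(pool))

def optimize(ansatz, params):
    """Placeholder for the classical re-optimization of all ansatz parameters."""
    return params, -1.0 - 0.01 * len(ansatz)   # mock energy that improves with depth

def k_adapt_vqe(K=3, max_iters=4, grad_tol=1e-3):
    ansatz, params, energy = [], [], None
    for _ in range(max_iters):
        grads = pool_gradients(ansatz, params)
        if np.max(grads) < grad_tol:
            break
        top_k = np.argsort(grads)[-K:]          # chunk of K operators added at once
        ansatz += [pool[i] for i in top_k]
        params += [0.0] * K
        params, energy = optimize(ansatz, params)
    return ansatz, energy

ansatz, energy = k_adapt_vqe()
print(len(ansatz), "operators selected, mock energy:", energy)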
We adopt the metric-noise-resource (MNR) framework, a holistic methodology that analyzes a performance metric and the corresponding resource cost in a unified way while also accounting for the effect of noise, for the variational quantum eigensolver (VQE). The algorithmic control parameter in VQE affecting both the noise and the metric of performance is the number of gates, which is also the primary resource. While increasing the number of gates enhances expressivity and leads to a better metric in the noiseless setting, errors accumulate in an incoherent-noise setting. This leads to a tradeoff between the resource and the metric, which we investigate to find an optimal point that minimizes the resource cost while satisfying a target metric set by the user. To this effect, we design and validate a resource estimator which provides the minimum resource cost in terms of hardware-agnostic algorithmic parameters. We convert these algorithmic parameters to physical energy consumption assuming a simple model of a quantum computer with a superconducting backend. Furthermore, we extrapolate the resource cost for larger problem sizes to infer that quantum energetic advantage (a regime where the energy consumption of solving a problem on a quantum computer is lower than on a classical computer) is unlikely for VQEs.
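The following toy model illustrates the tradeoff described above: an assumed expressivity error that shrinks with the gate count G competes with an assumed incoherent error that grows linearly in G, so the total error has an optimum and a cheapest gate count meeting a user-set target; the functional forms and rates are illustrative assumptions, not the MNR resource estimator.

# Toy model of the expressivity-vs-noise trade-off described above: noiseless error
# shrinks with gate count G while incoherent noise adds error per gate, so total
# error has an optimum. Functional forms and rates are illustrative assumptions only.
import numpy as np

def total_error(G, a=1.0, p=1e-3):
    expressivity_error = a / G          # assumed improvement of the metric with more gates
    noise_error = G * p                 # assumed accumulated incoherent error
    return expressivity_error + noise_error

gates = np.arange(1, 2001)
errors = total_error(gates)
G_opt = gates[np.argmin(errors)]
target = 0.10
feasible = gates[errors <= target]
print("optimal gate count:", G_opt, "with error", errors.min())
print("cheapest gate count meeting target", target, ":",
      feasible.min() if feasible.size else "target unreachable under this noise model")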
Quantum‐chemistry calculations on noisy hardware are bottlenecked by qubit count and measurement overhead.
We introduce a lossy-compression variant of Quantum Selected CI (QSCI) that (i) uses a chemistry-aware lossy Random Linear Encoder to compress an $M$-orbital, $N$-electron Hamiltonian to $O(N \log M)$ qubits, and (ii) restores observables via a lightweight neural-network Fermionic Expectation Decoder.
Applied to $\mathrm{C}_2$ and LiH, the method attains chemical accuracy with roughly half the qubits and determinants required by standard QSCI, pointing to a practical route toward accurate quantum chemistry on NISQ and early fault-tolerant devices.
In the field of drug discovery, it is necessary to speed up the exploration and analysis of very large datasets of molecules. A possible approach is the clustering of molecules with similar structures to limit the exploration of the chemical space. This computationally intensive operation can be mapped to the quantum domain, which offers inherent parallelism and may provide higher-quality solutions.
In this article we propose a novel quantum-based clustering pipeline that uses a quantum version of the Generative Adversarial Network (QuGAN) to cluster open-access molecular data. Our contribution capitalizes on a previously developed hybrid quantum data clustering algorithm that incorporates a molecule-specific quantum generative model for small molecular graphs (e.g., QuMolGAN).
We present preliminary results based on noiseless simulations of a quantum processing unit with $4$ qubits. They show gains in clustering quality after training our quantum generative pipeline on graph-based data from a quantum mechanical dataset (QM9) of small organic molecules. This motivates further investigation, in particular fine-tuning of the generative model's hyper-parameters, to achieve more distinct cluster separation and increased intra-cluster homogeneity.
Chairman : Víctor CANIVELL
Quantum computing has developed since the 1980s, with significant progress in its theoretical and practical applications. A critical aspect of this field is quantum hardware development, which supports research and real-world applications. One notable example of quantum computing's potential is cryptography, where the RSA protocol has been employed to secure browsers and other internet applications. RSA security rests on a public modulus that is the product of two prime numbers so large that even supercomputers cannot factor it in a reasonable amount of time. In 1994, Caltech alumnus Peter Shor proposed Shor's Algorithm, which exploits the unique properties of quantum computers to factorize large numbers quickly and efficiently. Implementing this algorithm on quantum hardware would compromise the security of the RSA protocol. Quantum computing has been touted as revolutionary, but understanding the progress of different quantum hardware types is vital. This paper aims to analyze the types of quantum hardware and the applications they are best suited for, presenting a comprehensive look at most quantum hardware currently in development. By understanding the current state of quantum hardware, we can gain valuable insights into the potential applications of quantum computing.
We report recent developments in improving coherence times of superconducting qubits at IQM Quantum Computers. We demonstrate how optimized fabrication and design result in increases in coherence times, enabling enhanced performance when integrated into quantum processing units (QPUs). We also report current efforts to mitigate the detrimental effects on coherence of remaining two-level-system (TLS) defects. Our results showcase state-of-the-art fabrication capabilities at IQM Quantum Computers, exhibiting T1 and T2 echo times up to the millisecond level in test devices. These improvements, when applied to QPU-compliant designs and combined with IQM’s quantum computer control software stack, yield consistently improved performance, enabling increased gate fidelities and paving the way to building fault-tolerant quantum computers.
Recent advancements in qubit manipulation in quantum dot arrays and classical/quantum co-integration in FDSOI spin-based quantum circuits shed light on a potential technology path for large-scale quantum computing. We present this path to design and engineer good qubits in a technology as close as possible to the most advanced industrial technological nodes. We propose and investigate a qubit design constrained by known industrial fabrication methods. More precisely, we repurpose W vias and, with a single contact patterning step, integrate both gates to define the electrochemical potential of quantum dots (QDs) and vias to define their coupling barriers in CMOS-based, linear qubit arrays. We show both simulated and experimental results of individual coupling control of QDs in arrays that were fully fabricated in a foundry on the 28nm FD-SOI platform. We show detailed wafer-level transfer characteristics for each barrier implemented on a 1×3 linear array, at room temperature and at 2K, which demonstrate that the vias are well-behaved MOSFET gates with electrostatic control over the Si channel.
Si-based qubits are considered the most promising experimental system for scaling quantum computing. Due to their intrinsic compatibility with the semiconductor industry, engineering Si-based quantum processors at large scale would benefit from well-established, mature semiconductor manufacturing and integrated circuit capabilities. In particular, the ability to co-integrate qubits and transistors would be an asset for controlling large-scale quantum circuits while keeping their size comparable to that of a classical processor.
Considering the requirements of quantum computing, we present a path to design and engineer good qubits in a technology as close as possible to the most advanced industrial technological nodes. FDSOI CMOS technology is demonstrated as a platform to co-integrate spin qubits with integrated circuits. Advancements in qubit manipulation in quantum dot arrays will be discussed, with demonstrations of the elementary operations for running a quantum computer at the µs timescale and with state-of-the-art noise figures. For cryo-control integrated circuits, the realization of elementary control and read-out based on FDSOI cryoelectronic systems and their performance will be presented. Different strategies for co-integrating qubits with integrated circuits, and their challenges, will be discussed.
Statistical inference involves measuring multiple systems parameterized by a common unknown parameter, with the aim of reducing uncertainty about that parameter. It is well established that information processing, such as measurement and erasure, incurs an entropic cost. In this work, we quantify the minimal entropy required to perform inference under general measurement processes, focusing on how correlations between measurements affect this cost. We derive fundamental bounds in two paradigms: one where measurements are performed simultaneously, and another where they are performed sequentially, capturing the roles of spatial and temporal correlations, respectively. In both settings, we show that inter-measurement correlations can act as an entropy reservoir, allowing part of the entropy budget to be effectively recycled when correlations are leveraged. This recycled entropy can be used to perform additional measurements without increasing the overall entropic cost, thereby improving the quality of statistical inference. While developed in the context of inference, our framework applies more broadly, offering a thermodynamic lens on correlated measurement protocols in quantum information.
Chairman :
Quantum computers can offer exponential speedups for certain problems like integer factoring, and polynomial (e.g., quadratic) speedups for others, such as unstructured search. To what extent quantum computers can solve partial differential equations (PDEs) remains an open question. In this work, two of the most fundamental PDEs are addressed: the diffusion equation and the convection equation, both with space- and time-dependent coefficients. We present a quantum numerical scheme based on three steps: quantum state preparation; evolution with quantum Fourier transforms and diagonal operators; and measurement of observables of interest. The evolution step combines a high-order centered finite difference with a time-splitting scheme based on product formula approximations, also known as Trotterization. A novel numerical analysis to bound the different sources of error is presented. We prove that vector norm analysis guarantees similar accuracy with exponentially fewer time steps than analyses based on operator norm for Trotterization, significantly reducing the required computational resources.
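A classical NumPy analogue of the evolution step, shown below, applies a product-formula (Strang) step that alternates diagonal propagators in Fourier space for diffusion and convection; constant coefficients and periodic boundaries are simplifying assumptions for illustration, whereas the scheme above handles space- and time-dependent coefficients.

# Classical analogue of the evolution step: a product-formula (Trotter) step applying
# diagonal operators in Fourier space for diffusion and convection. Constant
# coefficients and periodic boundaries are simplifying assumptions for illustration.
import numpy as np

N, L, D, v, dt, steps = 256, 2 * np.pi, 0.05, 1.0, 1e-3, 2000
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u = np.exp(-10 * (x - np.pi) ** 2)               # initial Gaussian profile

diffusion_half = np.exp(-D * k ** 2 * dt / 2)    # half-step diffusion propagator
convection_full = np.exp(-1j * v * k * dt)       # full-step convection propagator

for _ in range(steps):                           # Strang splitting: A/2, B, A/2
    u_hat = np.fft.fft(u) * diffusion_half
    u_hat *= convection_full
    u_hat *= diffusion_half
    u = np.real(np.fft.ifft(u_hat))

print("mass conserved to", abs(u.sum() - np.exp(-10 * (x - np.pi) ** 2).sum()))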
A fully quantum algorithm for solving the one-dimensional linear advection-diffusion equation using the lattice Boltzmann method as a numerical procedure is presented in this work. We start by presenting a state of the art of the current usage of quantum algorithms for solving ordinary and partial differential equations. We then describe two algorithms for the one-dimensional lattice Boltzmann method with two degrees of freedom. The first is an existing hybrid quantum-classical algorithm with measurements at each time step, and the second is our improved version, namely a fully quantum algorithm where only one measurement is needed at the end of the algorithm. The fully quantum algorithm is first executed on a quantum simulator and then compared with a classical approach. Subsequently, the fully quantum algorithm is run on 127-qubit quantum hardware to investigate the effect of noise and circuit depth on the output state. We find fluctuations in the final result due to qubit noise.
Quantum state preparation is a fundamental component of quantum algorithms, particularly in quantum machine learning and data processing, where classical data must be encoded efficiently into quantum states. Existing amplitude encoding techniques often rely on recursive bipartitions or tensor decompositions, which either lead to deep circuits or lack practical guidance for circuit construction. In this work, we introduce Tucker Iterative Quantum State Preparation (Q-Tucker), a novel method that adaptively constructs shallow, deterministic quantum circuits by exploiting the global entanglement structure of target states. Building upon the Tucker decomposition, our method factors the target quantum state into a core tensor and mode-specific operators, enabling direct decompositions across multiple subsystems.
We investigate the minimum edge multiway cut problem, a fundamental task in evaluating the resilience of telecommunication networks. This study benchmarks the problem across three quantum computing paradigms: quantum annealing on a D-Wave quantum processing unit, photonic variational quantum circuits simulated on Quandela’s Perceval platform, and IBM’s gate-based Quantum Approximate Optimization Algorithm (QAOA). We assess the comparative feasibility of these approaches for early-stage quantum optimization, highlighting trade-offs in circuit constraints, encoding overhead, and scalability. Our findings suggest that quantum annealing currently offers the most scalable performance for this class of problems, while photonic and gate-based approaches remain limited by hardware and simulation depth. These results provide actionable insights for designing quantum workflows targeting combinatorial optimization in telecom security and resilience analysis.
The quantum approximate optimisation ansatz (QAOA) is one of the flagship algorithms used to tackle combinatorial optimisation problems on graphs with a quantum computer, and is considered a strong candidate for early fault-tolerant advantage. In this work, I study the enhancement of the QAOA with a generator coordinate method (GCM), and achieve systematic performance improvements in the approximation ratio and fidelity for the maximal independent set on Erdős-Rényi graphs. The cost-to-solution of the present method and the QAOA are compared by analysing the number of CNOT and $T$ gates required for either algorithm. Extrapolating from the numerical results obtained for this specific problem, it is estimated that the approach surpasses QAOA for graphs of size greater than 75 using as few as eight trial states. The potential of the method for other combinatorial optimisation problems is briefly discussed.
Chairman :
The potential of quantum computing is increasing rapidly in today's world. Compared with classical computers, whose run time grows quickly as problem complexity increases, quantum computing promises more efficient solutions in shorter time frames. This research work proposes an optimized quantum algorithm to speed up the discovery of mismatches in DNA sequences using Grover's search algorithm. The DNA base pairs are encoded using qubits, and quantum oracles are used to find mismatches, yielding a quadratic speedup compared with classical search algorithms. The DNA string is divided into chunks, and Grover's algorithm is applied to each chunk while preserving coherence across the complete string. The mutation mismatch probability is quantified for each nucleotide position to help detect potential markers of mutations. An error mitigation technique is deployed in our strategy to account for the noise inherent to quantum systems, which can impact the certainty and fidelity of detecting mismatches. This research demonstrates the potential of quantum computing to transform DNA sequence analysis, offering a powerful tool for applications in the field of genetics.
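The following NumPy statevector sketch illustrates the Grover step over positions: an oracle flips the sign of amplitudes at indices where two sequences disagree, followed by inversion about the mean; the sequences, search-space size, and iteration count are hypothetical, and error mitigation is not modelled.

# Statevector sketch of the Grover search over positions: the oracle flips the sign of
# amplitudes at indices where the two sequences disagree, then diffusion inverts about
# the mean. Sequences and sizes are hypothetical; no error mitigation is modelled.
import numpy as np

reference = "ACGTACGTACGTACGT"
sample    = "ACGTACCTACGTTCGT"                 # mismatches at positions 6 and 12
N = len(reference)                             # search space = positions (here 16 = 4 qubits)
marked = [i for i in range(N) if reference[i] != sample[i]]

amps = np.full(N, 1 / np.sqrt(N))              # uniform superposition over positions
iterations = int(np.floor(np.pi / 4 * np.sqrt(N / len(marked))))

for _ in range(iterations):
    amps[marked] *= -1                         # oracle: phase-flip mismatch positions
    amps = 2 * amps.mean() - amps              # diffusion: inversion about the mean

probs = amps ** 2
print("Grover iterations:", iterations)
print("probability of measuring a mismatch position:", round(probs[marked].sum(), 3))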
As drone traffic and air taxi services expand globally, regulators face increasing pressure to manage complex airspace safely, while operators aim to optimise energy-efficient routes. Traditional computing may not scale to meet future path planning demands, making quantum computing a promising alternative. However, the variety of qubit modalities across quantum processing units (QPUs) raises questions about which platforms are most suitable for such applications. This paper presents a hybrid quantum-classical drone path planning approach that accounts for urban obstacles and its implementation on two QPUs: D-Wave’s quantum annealer and Pasqal’s neutral atom processor. We show that both quantum solver implementations can produce paths of comparable or better quality than a classical solver, and present insights on their performance, scalability, and challenges for real-world deployment.
In the era of Industry 4.0, where automation, traceability, and secure data flows are foundational to modern industrial ecosystems, blockchain technology has emerged as a powerful tool to ensure transparency, auditability, and resistance to tampering. By offering an immutable ledger of transactions or events, blockchains are increasingly adopted in manufacturing, logistics, healthcare, and energy sectors to track critical operations and ensure trust among stakeholders.
However, the integrity of any blockchain-based system critically depends on the quality of the data it ingests. If the data recorded is predictable, manipulated, or inserted without sufficient entropy or verification, the immutability of the blockchain becomes moot. This paper introduces a proof-of-concept framework that integrates Quantum Random Number Generation (QRNG) and smart contract-based logging on Ethereum to strengthen the cryptographic robustness of industrial event recording. Each event is timestamped, tagged with a quantum-generated number, described via metadata, and hashed using SHA-256 to produce a unique fingerprint. The hash is then immutably stored on a local Ethereum blockchain. The system demonstrates a fusion of quantum physical entropy and decentralized blockchain trust.
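A minimal Python sketch of the event-fingerprinting step is shown below: a timestamp, a random tag, and the event metadata are serialized and hashed with SHA-256 to produce the fingerprint that would be stored on chain; os.urandom stands in for the QRNG source, and the write to the Ethereum smart contract is omitted.

# Minimal sketch of the event-fingerprinting step: timestamp + quantum-random tag +
# metadata hashed with SHA-256. os.urandom stands in for the QRNG source, and the
# write to the Ethereum contract is omitted here.
import hashlib
import json
import os
import time

def quantum_random_tag(n_bytes=32):
    """Placeholder: replace with bytes drawn from the QRNG device."""
    return os.urandom(n_bytes)

def fingerprint_event(metadata: dict) -> dict:
    record = {
        "timestamp": time.time(),
        "qrng_tag": quantum_random_tag().hex(),
        "metadata": metadata,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()   # fingerprint stored on chain
    return record

event = fingerprint_event({"machine": "press-07", "event": "maintenance_start"})
print(event["sha256"])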
The paper proposes a Quadratic Unconstrained Binary Optimization (QUBO) formulation for a power grid fault location problem using sparse measurements. The sparse approximation problem consists of solving an underdetermined system of complex-valued equations. By enforcing the grid-dependent sparsity of the problem, the number of quantum bits necessary to solve the system is reduced, opening possibilities for industrial, large-grid applications with lowered computational cost.
This paper introduces QonSAll, a method designed to exploit the capabilities of quantum machines for identifying the set of optimal solutions to NP-Complete optimization problems. The approach relies on solving a relaxed version of the original problem, where certain complex constraints are temporarily removed. This relaxation yields a problem that quantum machines can solve more efficiently, often providing the full set of optimal solutions. If the relaxed constraints do not influence the objective function, it becomes possible to test each relaxed optimal solution for feasibility with respect to the original problem in polynomial time. The method distinguishes three possible scenarios based on the relationship between the optimal solutions of the relaxed and original problems: (I) only a subset of the relaxed optimal solutions remains valid for the original problem; (II) all optimal solutions of the relaxed problem are also optimal for the original; and (III) none of the relaxed optimal solutions apply to the original, requiring an alternative approach. QonSAll provides a structured framework to reduce problem complexity while ensuring that optimality is preserved or correctly filtered, offering a practical path toward solving difficult combinatorial problems on quantum hardware.