Monday, 1st December 2025 (Quest-IS)
Auditorium 1
08:45
QUEST-IS General Chairs (F. Barbaresco & F. Gerin)
Opening & General announcements about the conference
09:00
Welcome address by EDF Executive
09:30
Keynote speaker : Pierre Rouchon
(French Academy of Sciences)
10:00
Platinum sponsor addresses
10:40-11:10 - Coffee break
Auditorium 1
11:10
Gold sponsor addresses
11:40
Keynote speaker : Olivier Ezratty
(Author of Understanding Quantum Technologies)
12:10
Inauguration and Visit of the QUEST-IS 2025 Exhibition
12:30-13:30 - Lunch break
Auditorium 1
13:30
Keynote speaker : Marco Genovese
(INRIM)
Auditorium 1
14:00 -Parallel session "Sensors"
Chairmen : Magnus HÖIJER, Guido TORRESE
We propose a unified comparator architecture based on nitrogen-vacancy (NV) centers in diamond. The system replaces conventional magnetic pickup or flux-feedback mechanisms with a solid-state quantum magnetometer that optically detects the magnetic flux in the air gap of a magnetic core. The NV sensor is sensitive to both AC and DC magnetic fields, enabling seamless operation across different metrological regimes, and is fully compatible with the requirements of modern power electronics. The proposed architecture offers key advantages such as complete electrical isolation, a compact footprint, and potential scalability for portable or embedded applications. Modeling and recent advances in NV magnetometry suggest that the required sensitivity is achievable with current technology. By addressing both AC and DC current-measurement requirements in the power industry and in fundamental electrical standards, this concept paves the way toward a next-generation current ratio standard.
Grinding burns alter the hardness and mechanical properties of steel components and can result in cracks or failure. Nital etching is widely used in industry to detect grinding burns, but it poses environmental and safety risks and requires visual inspection. Alternative techniques such as magnetic Barkhausen noise inspection have been developed, but their automation is hindered by calibration drifts and by the sensor dimensions. Here, we introduce a novel non-destructive approach based on nitrogen-vacancy centers in diamond to detect the leakage magnetic field from grinding burns. This technology enables quantitative vector magnetic-field measurement, is contactless, and does not require calibration. Grinding burns of different sizes were successfully detected with high spatial resolution and sensitivity.
This work presents an overview of our fully open-source, hackable quantum sensor platform based on nitrogen-vacancy (NV) center diamond magnetometry. This initiative aims to democratize access to quantum sensing by providing a comprehensive, modular, and cost-effective system. The design leverages consumer off-the-shelf (COTS) components in a novel hardware configuration, complemented by open-source firmware written in the Arduino IDE, facilitating portability, ease of customization, and future-proofing the design. By lowering the barriers to entry, our sensor serves as a compact platform for education, research, and innovation in quantum technologies, embodying the ethos of open science and community-driven development.
This paper presents an applicability review of practical quantum sensors and nascent quantum-sensor technology for deployment in nondestructive evaluation in the nuclear power industry. Quantum sensors can measure several independent quantities, such as electromagnetic fields, temperature, and pressure, many of which are of interest to the nuclear power industry. It is anticipated that quantum sensors can provide value in this context beyond that of classical sensors: they can provide stable and highly accurate information and, in some cases, can measure multiple quantities simultaneously.
An exploratory exercise to identify applications where quantum sensors could provide real value to the nuclear industry was undertaken by the Electric Power Research Institute (EPRI) throughout the first three quarters of 2025, culminating in a roadmap for quantum-sensor technology development to benefit the industry. The results of this road-mapping work are discussed. We also discuss appropriate development directions for quantum-sensor technology based on the identified applications and use cases deemed most important for the power industry. Quantum sensors developed to benefit the nuclear industry will also benefit many adjacent industries.
Auditorium 2
14:00 - Parallel session "computing, algorithms, simulation"
Chairman : Frédéric BARBARESCO
We introduce BenchQC, a quantum computing benchmarking project that focuses on application-centric benchmarking using diverse industry use cases. Expanding the open-source platform QUARK, we evaluate key metrics across the quantum software stack to identify trends toward quantum utility and to distinguish viable research directions. This initiative contributes to the broader effort of establishing reliable benchmarking standards that drive the transition from experimental demonstrations to practical quantum advantages.
Delivering an application-oriented benchmark suite for the objective, multi-criteria evaluation of quantum computing performance, a key enabler of industrial adoption.
With the support of MetriQs-France, the national program on measurements, standards, and evaluation of quantum technologies within the French national quantum strategy, the BACQ project is dedicated to application-oriented benchmarks for quantum computing. The consortium, gathering THALES, EVIDEN (an Atos business), CEA, CNRS, TERATEC, and LNE, aims to establish reference performance-evaluation criteria that are meaningful for industry users.
The proposed methodology consists of aggregating low-level technical metrics and performing a multi-criteria analysis with the MYRIAD-Q tool, in order to provide operational performance indicators for the different quantum computing solutions and to highlight the qualities of service of interest to end users. The aggregation of criteria and the multi-criteria analysis allow fully explainable and transparent scores, comparisons between different quantum machines and with classical computers, and identification of the practical advantages of each quantum machine for specific applications. The project addresses both analog machines (quantum simulators and annealers) and gate-based machines, from Noisy Intermediate-Scale Quantum (NISQ) devices to Fault-Tolerant Quantum Computing (FTQC). The practical approach followed is to build a suite of benchmarks, adaptive to some extent, appropriate to the capabilities of the available machines and able to demonstrate their respective advantages, including, in the longer term, the exponential speedup of specific algorithms on FTQC machines.
Quantum AI and Quantum Machine Learning (QML) are among the most promising and dynamic research fields, with a vast variety of QML models. However, standardized benchmarking is lacking, and end users often struggle to determine whether quantum AI, and which specific approach, is suitable for their use cases. Addressing these challenges is essential to evaluate the current state of quantum AI and advance toward quantum utility.
We have developed Quant²AI, a holistic benchmarking framework for systematic comparisons of quantum AI pipelines using high-performance clusters and both quantum simulators and hardware. Our end-to-end approach evaluates not just QML models but the whole pipeline, including, e.g., preprocessing and hyperparameter variations. Its modular design enables easy integration of new components, such as alternative data-preparation steps. We provide standardized and real-world datasets, quantum and classical AI reference pipelines, state-of-the-art evaluation metrics, and intuitive visualizations. Our framework offers benchmarking as a service, both for researchers testing their newly developed quantum AI components and for end users seeking an intuitive way to identify promising quantum AI applications.
Application-oriented benchmarking of quantum computing involves measuring multiple and often conflicting Key Performance Indicators (KPIs). These KPIs must be combined to assess the overall quality of a solution or to compare two different solutions. Multi-Criteria Decision Aiding (MCDA) is a suitable methodology for this purpose. We begin by discussing the state of the art in MCDA and identifying an appropriate MCDA approach for quantum benchmarking, which involves two steps to generate an overall score from the vector of KPIs: first normalizing the KPIs, then aggregating the normalized scores. The normalized scores are not uniquely defined and correspond to different scales. We examine invariance properties, whereby a change from one admissible scale to another should not affect the comparison between options. To facilitate aggregation, some form of commensurability (comparability) across the normalized scores of the various KPIs is often assumed. As this assumption may be too strong in practice, we investigate the types of aggregation functions that accommodate different invariance assumptions.
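The normalize-then-aggregate scheme described above can be sketched in a few lines. This is a hypothetical illustration only: the KPI names, reference bounds, and weights are invented for the example, and brute-force arithmetic stands in for MYRIAD-Q.

```python
# Two-step MCDA sketch: (1) min-max normalize each raw KPI onto [0, 1],
# (2) aggregate the normalized scores. All numbers are invented.

def normalize(value, worst, best):
    """Min-max normalization: 0 at the worst reference point, 1 at the best."""
    return (value - worst) / (best - worst)

def weighted_mean(scores, weights):
    """Compensatory aggregation: a good KPI can offset a bad one."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def weakest_link(scores):
    """Non-compensatory aggregation: the overall score is the worst KPI."""
    return min(scores)

# Raw KPIs for one fictitious quantum solution: (value, worst, best).
kpis = {"fidelity": (0.92, 0.5, 1.0),
        "runtime":  (120.0, 600.0, 10.0),   # lower is better: bounds reversed
        "cost":     (40.0, 100.0, 0.0)}

scores = [normalize(v, worst, best) for v, worst, best in kpis.values()]
overall = weighted_mean(scores, [0.5, 0.3, 0.2])
floor = weakest_link(scores)
```

The contrast between the two aggregators illustrates the invariance issue discussed above: the weighted mean depends on the (non-unique) choice of normalization bounds, while the min is more robust to certain rescalings but ignores all but the worst criterion.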
Auditorium 1
16:00
Quantum Computing
Keynote speaker : To be confirmed
Auditorium 1
16:30 - Parallel session "Sensors"
Chairmen : Khulud ALMUTAIRI, Yasutaka AMAGAI
Detecting single photons in the optical frequency band is a well-established practice; however, this capability is only emerging at microwave frequencies. Single microwave photon detectors (SMPDs) are instrumental for efficiently detecting weak signals from incoherent emitters, with applications in axion searches, hybrid quantum systems, and superconducting quantum computing. At microwave frequencies, SMPDs rely on superconducting qubits to encode the presence or absence of an itinerant photon. This quantum interaction provides a quantum non-demolition (QND) measurement of the photon state, in contrast with absorptive optical single-photon detectors (SPDs). In this work, we leverage the QND nature of this interaction to repeatedly measure the photon state in a cascaded manner. By encoding the information on multiple qubits, we mitigate the intrinsic local noise of individual qubits, achieving a two-order-of-magnitude reduction in intrinsic detector noise at the cost of a slight reduction in efficiency. The photon detector concatenates two four-wave mixing processes coherently on a single chip. This scheme ensures fully quantum-coherent photon-detection dynamics, enabling dynamical tuning of the detector's bandwidth, a critical feature for practical use in setups affected by thermal photons.
We show how to balance the strengths of the parametric processes to maximize sensitivity and evaluate the core metrics of such a device. We demonstrate an intrinsic sensitivity of (8 ± 1) × 10^-24 W·Hz^-1/2 at 8.798 GHz, with the detector noise entirely dominated by the thermal noise of the input resonator. The bandwidth tunability is 100 kHz. Limitations are discussed and addressed in recent work with a detector at 11.7 GHz.
Synchronized spectroscopy measurements of acetylene and molecular-iodine reference transitions, performed with an ultrastable laser signal, may be exploited to jointly constrain the couplings of dark matter to several fundamental constants.
Quantum technology holds the potential to revolutionize fields like computing and communication by leveraging the quantum properties of quantum systems. In photonic systems, there has been significant progress in quantum technology research that extensively exploits quantum optical states (photon sources) for their unique statistical properties. For example, two states induced from the coherent state (CS), the photon-added state (PA) and the displaced Fock state (DF), which exhibit quadrature squeezing, anti-bunching, and sub-Poissonian statistics, were proven to improve the performance of quantum gates, boson sampling, and the BB84/B92 cryptographic protocols. Recently, a new approach to deriving the photon statistics of the PA state, particularly its photon-number distribution, was introduced [1]. This approach employs a modified moment-generating function (MGF) to manipulate the initial CS photon-number distribution, offering an alternative to conventional methods. While traditional derivations require performing a quantum operation first and then applying state projection (Born's rule) to obtain the resulting photon-number distribution, the modified MGF method instead tracks the transformation of the photon-number distribution directly, bypassing the need for quantum operations such as the creation operator. Nevertheless, despite its advantages, the new method is strictly limited to the PA state and overlooks the DF state. We therefore propose a generalized MGF that complements the modified MGF. This work paves the way for extending the MGF framework to a broader class of quantum-state transformations, offering new insights into quantum optical state engineering and its applications in photonic quantum technology.
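For concreteness, the photon-number distribution that such a derivation must reproduce for the single-photon-added coherent state is the standard textbook result (stated here from general knowledge, not taken from the abstract). Starting from the Poissonian statistics of the CS,

```latex
P_{\mathrm{CS}}(n) = e^{-|\alpha|^2}\,\frac{|\alpha|^{2n}}{n!},
```

photon addition shifts and reweights the distribution:

```latex
P_{\mathrm{PA}}(n) = \frac{n\,|\alpha|^{2(n-1)}}{(n-1)!\,\bigl(1+|\alpha|^2\bigr)}\,
e^{-|\alpha|^2}, \qquad n \ge 1, \qquad P_{\mathrm{PA}}(0) = 0.
```

The vanishing vacuum component and the extra factor of n are what underlie the anti-bunching and sub-Poissonian character mentioned above.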
Rydberg atoms are highly promising for microwave electric-field sensing owing to their large dipole matrix elements. While most experimental developments have so far focused on room-temperature vapors, using cold atoms in this context could open new possibilities for applications where accuracy, long-term stability, and high resolution at large integration times are required, such as calibrating blackbody shifts in state-of-the-art optical clocks or measuring the cosmic microwave background.
At ONERA, we demonstrate a novel approach to the metrology of microwave fields with cold rubidium-87 Rydberg atoms based on trap-loss spectroscopy in a magneto-optical trap (MOT). This technique stands out for its simplicity, relying solely on fluorescence measurements. With scale-factor linearity at the 1% level, a long-term frequency stability equivalent to a resolution of 5 μV/cm at 2500 s, and no noticeable drift over this period, this new measurement technique appears particularly well suited to metrology experiments. In principle, the method allows the amplitude and frequency of the microwave field to be reconstructed simultaneously, without the need for an external reference field.
Auditorium 2
16:30 - Parallel session "computing, algorithms, simulation"
Chairmen : Félicien SCHOPFER, Ronin WU
This paper explores the application of quantum computing, specifically quantum annealing on D-Wave systems, to the Multiple Hypothesis Tracking (MHT) problem in tracking systems. This involves the critical task of assigning sensor detections to the relevant targets amid challenges such as false alarms, noise, and target maneuvers. The NP-hard nature of MHT presents a significant obstacle: the number of hypotheses escalates exponentially over time, leaving traditional solvers struggling under substantial computational load. By benchmarking classical solvers against D-Wave's quantum annealing approach, the study demonstrates computational-time improvements of up to 15-fold in high-false-alarm-density scenarios. This work highlights the potential of quantum computing over classical methods in efficiently tackling complex combinatorial optimization tasks, offering a promising solution for demanding real-time tracking challenges. The research further suggests avenues for future exploration with different quantum computing technologies, underscoring the evolving landscape of technology for optimizing tracking-system efficiency.
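The detection-to-target assignment at the heart of MHT is typically cast as a QUBO before being handed to an annealer. The toy sketch below is illustrative only: the cost matrix and penalty weight are invented, and exhaustive search stands in for the D-Wave hardware.

```python
# Toy QUBO for assigning 2 detections to 2 targets. Binary variable
# x[i][j] = 1 iff detection i is assigned to target j. All numbers invented.
from itertools import product

cost = [[0.1, 0.9],   # cost[i][j]: mismatch of detection i with target j
        [0.8, 0.2]]
LAMBDA = 2.0          # penalty weight enforcing "one target per detection"

def energy(x):
    """QUBO energy: assignment cost plus quadratic constraint penalty."""
    e = sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))
    # quadratic penalty: each detection must pick exactly one target
    e += LAMBDA * sum((sum(x[i]) - 1) ** 2 for i in range(2))
    return e

# exhaustive search over the 2**4 binary assignments (annealer stand-in)
best = min((tuple(bits) for bits in product([0, 1], repeat=4)),
           key=lambda b: energy([[b[0], b[1]], [b[2], b[3]]]))
```

In practice the penalty weight must dominate the cost scale so that constraint-violating states never become ground states; here the minimum-energy configuration assigns detection 0 to target 0 and detection 1 to target 1.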
The Maximum Cardinality Matching (MCM) Gn series represents a well-established set of problems designed to evaluate the capabilities of optimization heuristics. These problems have been instrumental in probing the optimization potential of various D-Wave quantum computers, offering insights into their performance and limitations. Complementary to the NP-hard yet unconstrained MaxCut problem used in the Q-Score benchmark, the MCM Gn series provides a constrained framework that further challenges quantum optimization techniques.
In this paper, we introduce a benchmark score specifically tailored to assess the capabilities of quantum optimization heuristics such as Quantum Annealing (QA) and the Quantum Approximate Optimization Algorithm (QAOA) on quantum computers. This score aims to offer a comprehensive metric for evaluating the effectiveness of these methods in solving complex optimization problems, thereby contributing to the broader understanding of quantum computational utility and guiding future advancements in the field.
Data encoding plays a fundamental and distinctive role in Quantum Machine Learning (QML). While classical approaches process data directly as vectors, QML may require transforming classical data into quantum states through encoding circuits, known as quantum feature maps or quantum embeddings. This step leverages the inherently high-dimensional and non-linear nature of Hilbert space, enabling more efficient data separation in complex feature spaces that may be inaccessible to classical methods. The encoding step significantly affects the performance of a QML model, so it is important to choose the right encoding method for the dataset at hand. However, this choice is generally arbitrary, since there is no "universal" rule for deciding which encoding to use for a specific dataset. A variety of encoding methods using different quantum logic gates currently exist. We studied the most commonly used encoding methods and benchmarked them on several real datasets.
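One of the simplest encoding methods in this family is angle encoding, where each classical feature sets the rotation angle of one qubit. The sketch below builds the resulting statevector directly in NumPy; the function name and example features are invented for illustration and are not taken from the benchmarked methods in the abstract.

```python
# Minimal angle-encoding sketch: feature vector -> n-qubit product state.
import numpy as np

def angle_encode(features):
    """Map each real feature to one qubit via RY(theta)|0>, then tensor them.

    RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>, so n features
    yield a 2**n-dimensional product state.
    """
    state = np.array([1.0])
    for theta in features:
        qubit = np.array([np.cos(theta / 2), np.sin(theta / 2)])
        state = np.kron(state, qubit)
    return state

psi = angle_encode([0.3, 1.2])   # 2 features -> 4-dimensional statevector
```

Note that angle encoding uses one qubit per feature and produces an unentangled product state; richer feature maps add entangling gates between the rotations, which is precisely why the choice of encoding matters for downstream model performance.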
Classical quantum emulators are key tools for algorithm design, error correction, and noise mitigation. They rely on a variety of simulation methods such as statevector, tensor networks, matrix product states, decision diagrams, and others. Their performance depends not only on simulation method and circuit properties, but also on emulator-specific parameters, pre-processing, and implementation. Systematic benchmarking of emulators is essential for advancing simulation techniques and selecting the most suitable emulator for a given task. To address this need, we present an open-source, user-centric, problem-agnostic framework with a unified API for twelve leading emulators, delivering detailed performance insights for quantum computing research and development.
Current quantum computers suffer from a limited number of qubits and high error rates, limiting their practical applicability. Various techniques exist to mitigate these effects and run larger algorithms. In this work, we analyze one of these techniques, quantum circuit cutting. With circuit cutting, a quantum circuit is decomposed into smaller sub-circuits, each of which can be run on smaller quantum hardware. We compare the performance of quantum circuit cutting under different cutting strategies and then apply circuit cutting to a QAOA algorithm. Using simulations, we first show that randomized Clifford measurements outperform both Pauli and random unitary measurements. Second, we show that circuit cutting struggles to provide correct answers in noisy settings, especially as the number of circuits increases.