QUEST-IS'25

Posters program

Chairman:

Quantum Cryptanalysis Security Concerns (105) Soenil SOEBEDAR

From the vantage point of a cybersecurity, PKI, and cryptography engineer, quantum cryptanalysis is not just a theoretical risk—it’s a practical and urgent concern. As someone deeply embedded in the architecture of secure communications, I recognize how quantum computing threatens to upend foundational encryption protocols like RSA, ECC, and even symmetric systems like AES. Quantum algorithms, particularly Shor’s and Grover’s, introduce attack vectors that significantly lower the effort required to decrypt sensitive data, posing existential challenges to public key infrastructure (PKI) and data confidentiality.
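To make the threat concrete, here is a minimal sketch (an illustration of the standard security estimates, not part of the session material) of how the two named quantum algorithms degrade effective security levels: Shor's algorithm breaks RSA/ECC outright, while Grover's search roughly halves symmetric key strength.

```python
# Illustrative rule-of-thumb security estimates under known quantum attacks.
# Shor's algorithm factors / solves discrete logs in polynomial time, so
# RSA and ECC lose all security; Grover's quadratic search speedup roughly
# halves the effective bit strength of symmetric keys and hash preimages.

def effective_security_bits(scheme: str, classical_bits: int) -> int:
    """Rough post-quantum security estimate for a cryptographic primitive."""
    if scheme in ("RSA", "ECC"):            # broken by Shor's algorithm
        return 0
    if scheme in ("AES", "hash-preimage"):  # weakened by Grover's algorithm
        return classical_bits // 2
    return classical_bits

assert effective_security_bits("RSA", 2048) == 0
assert effective_security_bits("AES", 128) == 64   # why AES-256 is advised
assert effective_security_bits("AES", 256) == 128
```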

This technical session provides an engineer’s insight into the multifaceted risks quantum computing introduces to cryptographic systems, focusing on:

• The breakdown of traditional PKI structures under quantum pressure

• Evaluating the robustness of Post-Quantum Cryptography (PQC) for future-proofing applications

• Exploring the potential of Quantum Key Distribution (QKD) for unconditional security

• Implementing hybrid encryption frameworks to bridge classical systems with post-quantum protections

Participants will gain a deeper understanding of quantum cryptanalysis through a security practitioner’s lens, allowing them to better assess risk, redesign infrastructure, and implement adaptive countermeasures in preparation for the quantum era.

Key takeaways:

• Basics of Quantum Computing

• Basics of Cryptography

• Where quantum computing can and will be used, and where it should not be

• Impact on cryptography and preventive measures

• Post-Quantum Cryptography (PQC)

• What can Quantum Computing do within Cybersecurity?

• Basics about Shor’s and Grover’s algorithms

• Threats & Security Impacts of Quantum Computing

• Password Hacking & Quantum Computing

• Facts & Myths about Quantum Computing

• Impacts of Quantum Attacks on Current Encryption

• Four easy steps to prepare for quantum computing


The Current Landscape of Quantum Hardware Development - An Overview (106) Siddharth Raghav CHANDER

Quantum computing has developed since the 1980s, with significant progress in both theory and practice. A critical aspect of this field is quantum hardware development, which supports research and real-world applications. One notable example of quantum computing's potential is cryptography, where the RSA protocol is employed to secure browsers and other internet applications. RSA's security rests on a public modulus that is the product of two prime numbers so large that even supercomputers cannot factor it in a reasonable amount of time. In 1994, Caltech alumnus Peter Shor proposed Shor's algorithm, which exploits the unique properties of quantum computers to factorize large numbers quickly and efficiently. Implementing this algorithm on capable quantum hardware would compromise the security of the RSA protocol. Quantum computing has been touted as revolutionary, but understanding the progress of the different quantum hardware types is vital. This paper analyzes the types of quantum hardware and the applications each is best suited for, presenting a comprehensive look at most quantum hardware currently in development. By understanding the current state of quantum hardware, we can gain valuable insights into the potential applications of quantum computing.
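The classical half of Shor's algorithm fits in a few lines; only the order-finding step (done here by brute force) is what a quantum computer accelerates. A minimal illustration, not from the paper:

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Multiplicative order of a mod n by classical brute force -- the one
    step that Shor's quantum period-finding routine makes efficient."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_postprocess(a: int, n: int):
    """Classical post-processing: recover factors of n from the order of a.
    If r is even and a^(r/2) != -1 mod n, then gcd(a^(r/2) +/- 1, n) splits n."""
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None  # unlucky base a; retry with a different one
    x = pow(a, r // 2, n)
    return gcd(x - 1, n), gcd(x + 1, n)

# Textbook toy example: N = 15, base a = 7 has order 4, yielding 3 * 5.
assert shor_postprocess(7, 15) == (3, 5)
```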

Stock Price Trend Prediction using Quantum Neural Network algorithm (117) Shaista TARANNUM, Arjun CS, Jagarapu Surya TEJA, Kunta Indra KUMAR, Krishna Teja AVR

This paper explores the development of a Quantum Neural Network (QNN) algorithm for predicting stock price trends using Limit Order Book (LOB) data. LOB data, offering a detailed snapshot of market supply and demand, is inherently high-dimensional and complex, making it a challenging input for traditional prediction models. To address this, the project proposes a quantum-enhanced approach that converts classical LOB data into a quantum-compatible format using encoding techniques, enabling efficient processing through QNNs. Leveraging platforms such as Qiskit, PennyLane, and PyTorch, the model aims to classify stock trends (upward, downward, or stable) while demonstrating robustness against market noise. The proposed method is expected to outperform conventional deep learning models by capturing subtle patterns in financial time-series data, ultimately contributing to more informed and accurate decision-making in algorithmic trading and financial forecasting.
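As a minimal illustration of the encoding step described above (an assumed angle-encoding scheme, not necessarily the one used in the paper), each normalized LOB feature can be mapped to a single-qubit rotation:

```python
import math

def angle_encode(features):
    """Angle encoding: each normalized feature f in [0, 1] becomes the
    single-qubit state RY(pi*f)|0> = [cos(pi*f/2), sin(pi*f/2)],
    one qubit per feature."""
    states = []
    for f in features:
        theta = math.pi * f
        states.append((math.cos(theta / 2), math.sin(theta / 2)))
    return states

qubits = angle_encode([0.0, 0.5, 1.0])
assert abs(qubits[0][0] - 1.0) < 1e-9           # f = 0   -> |0>
assert abs(qubits[1][0] - qubits[1][1]) < 1e-9  # f = 0.5 -> equal superposition
assert abs(qubits[2][1] - 1.0) < 1e-9           # f = 1   -> |1>
```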

Hybrid Quantum-Classical Diffusion Models for Crack Segmentation in Concrete Panels (125) Gopal SANJEEV JOSHI, Alexander GENG, Ali MOGHISEH

Segmentation of cracks in concrete structures is a critical task in structural health monitoring, with significant implications for infrastructure maintenance and public safety. Although classical machine learning and deep learning techniques, particularly convolutional and diffusion-based architectures, have shown strong performance in image segmentation, the integration of quantum computing into such pipelines remains largely unexplored, with existing work primarily focused on image classification, as in [4]. This paper investigates the potential of hybrid quantum-classical machine learning approaches [1, 2] for the task of crack segmentation in concrete panels.

We propose a novel method that incorporates parameterized variational quantum circuits (VQCs) into classical diffusion-based segmentation models. This hybrid architecture aims to leverage the representational advantages of quantum circuits alongside the established strengths of classical deep learning. The approach is implemented and evaluated on a dataset of high-resolution concrete panel images annotated for crack structures [3]. We benchmark the hybrid model against purely classical baselines, analyzing both segmentation accuracy and computational efficiency under various constraints.

Our results provide insights into the current capabilities and limitations of hybrid quantum-classical architectures in practical computer vision applications. We further discuss the implications of quantum resource constraints and suggest directions for improving the scalability and robustness of such models in future research.
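The hybrid idea can be sketched minimally (an assumed one-qubit toy model, not the paper's architecture): a variational block composes a data-encoding rotation with a trainable rotation, and its gradient is obtained with the parameter-shift rule that hybrid training loops typically use.

```python
import math

def vqc_expectation(theta: float, x: float) -> float:
    """One-qubit variational block: RY(x) data encoding followed by a
    trainable RY(theta), measuring <Z>. RY angles about the same axis add,
    so <Z> = cos(x + theta) -- a minimal stand-in for the parameterized
    circuits inserted into the classical segmentation pipeline."""
    return math.cos(x + theta)

def parameter_shift_grad(theta: float, x: float) -> float:
    """Exact gradient d<Z>/dtheta via the parameter-shift rule,
    the standard way hybrid loops differentiate quantum circuits."""
    shift = math.pi / 2
    return (vqc_expectation(theta + shift, x)
            - vqc_expectation(theta - shift, x)) / 2

theta, x = 0.3, 1.1
grad = parameter_shift_grad(theta, x)
assert abs(grad - (-math.sin(theta + x))) < 1e-9  # matches analytic derivative
```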

Evaluating the impact of a deployed HPC-QC software stack using micro-benchmarks (126) Brice CHICHEREAU, Stéphane VIALLE, Patrick CARRIBAULT

The potential computing gains promised by quantum computing have motivated supercomputing centers to integrate Quantum Processing Units (QPUs) with supercomputers. These hybrid architectures will have to leverage new software stacks at the convergence of High Performance Computing (HPC) and Quantum Computing (QC). In this work we evaluate the influence of an existing, deployed software stack on the execution of HPC-QC codes using a micro-benchmark framework. We investigate the stack deployed at CEA's TGCC supercomputing center for use with a Pasqal analog quantum computer. The benchmarks allow us to identify possible areas of improvement in the software stack's performance with HPC-QC applications in mind.
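The spirit of such micro-benchmarking can be sketched in a few lines (an illustrative harness, not the framework used in the paper): time a small operation repeatedly and keep the least noisy measurement to isolate per-layer software overhead.

```python
import time

def microbenchmark(fn, repeats: int = 5, inner: int = 100) -> float:
    """Minimal micro-benchmark harness: time `fn` over several repeats and
    keep the best (least noise-affected) per-call latency, in the spirit of
    isolating software-stack overheads one layer at a time."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        for _ in range(inner):
            fn()
        elapsed = (time.perf_counter() - start) / inner
        best = min(best, elapsed)
    return best

# Example: overhead of a trivial no-op standing in for a circuit submission.
latency = microbenchmark(lambda: None)
assert latency >= 0.0
```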

Quantum annealing applications, challenges and limitations for optimisation problems compared to classical solvers (150) Finley Alexander QUINTON, Per Arne Sevle MYHR, Mostafa BARANI, Pedro CRESPO DEL GRANADO, Hongyu ZHANG

Quantum computing is rapidly advancing, harnessing the power of qubits’ superposition and entanglement for computational advantages over classical systems. However, scalability poses a primary challenge for these machines. By implementing a hybrid workflow between classical and quantum computing instances, D-Wave has succeeded in pushing this boundary to the realm of industrial use. Furthermore, they have recently opened up to mixed integer linear programming (MILP) problems, expanding their applicability to many relevant problems in the field of optimisation. However, the extent of their suitability for diverse problem categories and their computational advantages remains unclear. This study conducts a comprehensive examination by applying a selection of diverse case studies to benchmark the performance of D-Wave’s hybrid solver against that of industry-leading solvers such as CPLEX, Gurobi, and IPOPT. The findings indicate that D-Wave’s hybrid solver is currently most advantageous for integer quadratic objective functions and shows potential for quadratic constraints. To illustrate this, we applied it to a real-world energy problem, specifically the MILP unit commitment problem. While D-Wave can solve such problems, its performance has not yet matched that of its classical counterparts.
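The combinatorial core of the unit commitment problem mentioned above can be sketched with a toy brute-force solver (hypothetical generator data, purely illustrative; at realistic scale this exhaustive search is exactly what hybrid solvers aim to replace):

```python
from itertools import product

# Hypothetical generators: (fixed commitment cost, marginal cost, capacity)
GENERATORS = [(100, 10, 50), (80, 15, 40), (40, 30, 30)]
DEMAND = 70

def unit_commitment_bruteforce():
    """Exhaustive search over binary on/off decisions -- the MILP's
    combinatorial core -- with a merit-order dispatch for committed units."""
    best = (float("inf"), None)
    for on in product((0, 1), repeat=len(GENERATORS)):
        committed = [g for g, u in zip(GENERATORS, on) if u]
        if sum(g[2] for g in committed) < DEMAND:
            continue  # committed units cannot meet demand
        cost, left = sum(g[0] for g in committed), DEMAND
        for _fixed, marginal, cap in sorted(committed, key=lambda g: g[1]):
            take = min(left, cap)   # dispatch cheapest units first
            cost += marginal * take
            left -= take
        best = min(best, (cost, on))
    return best

cost, commitment = unit_commitment_bruteforce()
assert cost == 980 and commitment == (1, 1, 0)  # commit units 0 and 1
```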

Performing a logical computation using a neutral atom quantum processor (151) Pascal SCHOLL

Quantum computing is expected to solve outstanding problems in many fields, including optimization, material science, drug discovery, and cryptography. However, today's quantum processors are limited in both qubit number and quality, which restricts the number of use cases they can address. A solution to this problem is the emerging field of fault-tolerant quantum computing (FTQC), where qubit quality can be made arbitrarily high at the expense of qubit number. Recently, important progress has been made in this field, with the demonstration of a functional quantum error-corrected qubit and the use of error-detecting codes for performing elementary computations. In particular, we will show that the neutral atom platform has emerged as one of the most promising platforms for FTQC, compared to other platform types, because of its high-fidelity operations, large qubit numbers, and arbitrary qubit connectivity.

Here, I will discuss the utilization of Pasqal’s neutral atom quantum processing unit (QPU) to implement logical qubits and computations in a fault-tolerant fashion. I will specifically describe the implementation of an elementary use case that could be relevant at larger scale for industrial applications, namely solving elementary differential equations. I will finally describe our plans for achieving larger scale FTQC, and our typical target applications.

Performance analysis of Multi-Angle QAOA on the Min-Vertex-Cover problem with biased initial states (152) Torbjørn SMEDSHAUG, Finley Alexander QUINTON, Mostafa BARANI

The Quantum Approximate Optimization Algorithm (QAOA) is a prominent heuristic for solving combinatorial optimization problems on near-term quantum devices. While existing research has focused largely on MaxCut and standard QAOA, comparatively little attention has been paid to other problem classes and ansatz modifications. In this work, we present a systematic computational study of the performance of MultiAngle-QAOA (MA-QAOA) and standard QAOA applied to the Minimum Vertex Cover (MVC) problem. Constraints are enforced through a Lagrangian penalty formulation, to suit the unconstrained nature of QAOA circuits. We evaluate the impact of various parameter initialization techniques (e.g., Gaussian and static), as well as warm-starting and cold-starting schemes, across multiple circuit depths (p = 1, 2, 4, 10). Our simulations are conducted under both ideal (noise-free) and noisy conditions to assess robustness. Results provide insight into the suitability of ansatz choice and enhancement strategies, and can inform applications of QAOA on constrained optimization problems.
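The Lagrangian penalty formulation described above can be sketched as follows (a toy 4-node cycle graph, not the paper's benchmark set): an MVC instance becomes an unconstrained QUBO-style cost that QAOA can then minimize.

```python
from itertools import product

EDGES = [(0, 1), (0, 2), (1, 3), (2, 3)]  # 4-node cycle graph (example)
N, PENALTY = 4, 10  # penalty weight P larger than any useful cover size

def mvc_cost(x):
    """Unconstrained cost for Minimum Vertex Cover: cover size plus a
    Lagrangian penalty P for every edge left uncovered. Minimizing this
    over bitstrings is the objective a QAOA circuit would encode."""
    size = sum(x)
    violations = sum((1 - x[u]) * (1 - x[v]) for u, v in EDGES)
    return size + PENALTY * violations

# Brute-force ground state for reference (what QAOA approximates):
best = min(product((0, 1), repeat=N), key=mvc_cost)
assert mvc_cost(best) == 2  # minimum cover has 2 vertices, e.g. {1, 2}
```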

From analog to digital computing : a modular neutral-atom based platform approach (163) Lucas BEGUIN

On the one hand, fault-tolerant universal quantum computers hold great promise for solving complex problems in the fields of chemistry, material science, and optimization, and therefore creating significant value in a range of industries. However, building error-corrected digital quantum computers featuring thousands of logical qubits and providing high-value for industrial business lines still require to solve many technical hurdles and will probably be a long-standing R&D effort for all quantum technologies. On the other hand, a range of so-called Noisy-Intermediate-Scale Quantum (NISQ) devic-es (featuring a few tens to a few hundreds of physical qubits but prone to quantum noise and errors) can already be used today to implement quantum algorithms and to explore applications.

In this talk, we will present the modular hardware platform approach fol-lowed at Pasqal for engineering and building NISQ – to – FTQC evolutive in-dustrial Quantum Processing Units (QPUs) based on neutral atoms. We will give an overview on the performances of the Orion product line – Pasqal’s first generation of NISQ analog devices. We will compare some KPIs meas-ured for the Orion QPUs operated internally at Pasqal and the Orion QPUs delivered on client premises. Finally, we will show how the Orion platform will evolve to provide better analog computing performances as well as new digital computing features in the coming years.

 

Quantum Symmetry-Aware Anomaly Detection in Post-Quantum Cryptographic Protocols Using Variational Quantum Circuits (172) Vijaykrishna SOMARAJU

This paper presents a quantum machine learning approach for anomaly detection in post-quantum cryptographic (PQC) protocol flows using Cirq-based Variational Quantum Circuits (VQCs). Our model encodes simplified structural features of PQC-secured communication graphs to identify symmetry-breaking anomalies. Both brute-force and gradient-based training methods are evaluated. We demonstrate that even low-dimensional quantum circuits can detect protocol anomalies such as entropy drift or timing perturbations, revealing the potential of symmetry-aware quantum detection systems as complementary tools to classical PQC infrastructure.
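One of the anomaly signals mentioned, entropy drift, can be sketched classically (an illustrative detector, not the paper's VQC model): compare the Shannon entropy of observed protocol traffic against a baseline profile.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte stream, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def entropy_drift(baseline: bytes, observed: bytes,
                  threshold: float = 0.5) -> bool:
    """Flag an anomaly when entropy deviates from the baseline profile
    by more than the threshold."""
    return abs(shannon_entropy(baseline) - shannon_entropy(observed)) > threshold

normal = bytes(range(256)) * 4   # uniform, high-entropy traffic (8 bits/byte)
degraded = b"\x00" * 1024        # collapsed entropy (0 bits/byte)
assert entropy_drift(normal, degraded)
assert not entropy_drift(normal, normal)
```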

The MARK-BLU 1.0 Architecture: A Quantum Hash Function Framework For NISQ Research & Implementation (183) Bhvyadhirr BHARADWAJ

Quantum computation has been progressing from mere theoretical promise to experimental reality, and cryptographic primitives, more than anything else, must adapt to the noisy and limited-scale capabilities of current quantum processors. This work introduces MARK-BLU 1.0, a novel quantum hash function architecture constructed from parameterized quantum circuits explicitly designed for Noisy Intermediate-Scale Quantum (NISQ) devices. MARK-BLU 1.0 serves as both a research framework and an educational model, providing not only an understanding but also a practical and interpretable blueprint for investigating entropy generation, randomness extraction, and quantum-based hash properties. Unlike classical hash functions or existing quantum proposals that assume fault-tolerant architectures, MARK-BLU 1.0 operates within the practical limitations of current hardware, balancing circuit depth, gate fidelity, and entropy spread.

Also presented is a comprehensive evaluation of the proposed architecture's behavior under multiple classical-to-quantum input encodings, its simulated randomness profile through entropy distribution, and its sensitivity as measured by collision resistance, avalanche effect, and bit independence tests. Furthermore, the research positions MARK-BLU 1.0 within the broader landscape of quantum cryptography by contrasting it against classical and quantum hash proposals, identifying the niche it fills: that of a modular, scalable, and pedagogically rich hash function that is operable "today".
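The avalanche-effect test mentioned above can be illustrated on a classical hash (SHA-256 here as a stand-in; MARK-BLU 1.0 itself would take its place): flipping a single input bit should flip roughly half of the output bits.

```python
import hashlib

def hamming_distance(a: bytes, b: bytes) -> int:
    """Number of differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def avalanche_ratio(msg: bytes, bit: int = 0) -> float:
    """Fraction of output bits that flip when one input bit flips.
    A good hash (quantum or classical) should stay near 0.5."""
    flipped = bytes([msg[0] ^ (1 << bit)]) + msg[1:]
    d1 = hashlib.sha256(msg).digest()
    d2 = hashlib.sha256(flipped).digest()
    return hamming_distance(d1, d2) / (len(d1) * 8)

ratio = avalanche_ratio(b"quantum hash input")
assert 0.3 < ratio < 0.7  # near the ideal 50% avalanche
```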

By providing reproducible code, empirical validation, and educational scaffolding, this research work aims to catalyze both further inquiry and implementation of quantum hashing on real quantum hardware, while offering a stepping stone towards future fault-tolerant cryptographic primitives.

Requirements for Early Quantum Advantage in the Capacitated Vehicle Routing Problem (214) Chinonso ONAH, Arne-Christian VOIGT, Mark BENENMANN, Kristel MICHIELSEN

We introduce a transparent, encoding-agnostic framework for determining when the Capacitated Vehicle Routing Problem (CVRP) can achieve early quantum advantage. Our analysis shows that this is unlikely on noisy intermediate-scale quantum (NISQ) hardware even in the best-case scenario utilizing the most efficient encoding models. Closed-form resource counts combined with the latest device benchmarks yield three decisive "go/no-go" figures of merit (the quantum feasibility point plus the qubit- and gate-feasibility lines) that place any CVRP instance on a single decision diagram. Contrasting a direct QUBO mapping with the space-efficient higher-order (HOBO) encoding reveals a stark gap. Applied to early-advantage benchmarks such as Golden_5, our diagram shows that HOBO circuits require merely 7,685 qubits whereas their QUBO counterparts still exceed 200,000. In addition to identifying probable candidate instances for early quantum advantage in CVRP, our framework therefore provides the first unifying "go/no-go" metric that ingests any CVRP encoding alongside any hardware profile and highlights precisely when quantum devices could challenge classical heuristics.
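The go/no-go idea can be sketched as a feasibility predicate (illustrative formulas and a crude error model, not the paper's exact closed-form counts; the qubit figures below are the ones quoted in the abstract, the gate count and device parameters are made up):

```python
def go_no_go(required_qubits: int, required_gates: int,
             device_qubits: int, gate_error: float,
             max_error: float = 0.5) -> bool:
    """Illustrative go/no-go check in the spirit of the framework:
    an instance is feasible only if it fits on the device AND the
    accumulated gate error stays within a tolerable budget."""
    qubit_ok = required_qubits <= device_qubits
    # crude error accumulation: total failure prob ~ 1 - (1 - p)^G
    error_ok = 1 - (1 - gate_error) ** required_gates <= max_error
    return qubit_ok and error_ok

# A compact (HOBO-style) encoding may fit where a QUBO one cannot
# (hypothetical 10,000-qubit device with 1e-7 gate error):
assert go_no_go(7_685, 10**5, device_qubits=10_000, gate_error=1e-7)
assert not go_no_go(200_000, 10**5, device_qubits=10_000, gate_error=1e-7)
```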

Defect Geometry in Quantum Systems: A Structural Framework for Anomaly and Discontinuity (219) Tomonori YOSHINO

This paper introduces a novel geometric framework for understanding structural anomalies and discontinuities in quantum systems through the lens of "defect geometry." Traditionally, quantum defects, such as topological singularities, decoherence-induced disruptions, or error-induced distortions, have been treated as isolated phenomena. In contrast, we propose that such defects are not exceptions but generative structures that encode valuable information about the quantum system's underlying geometry. By extending concepts from time-weighted information geometry, we develop a formalism that integrates topological cuts, algebraic irregularities, and statistical fluctuations into a unified geometrical language. The framework introduces "defect tensors" and "stitching operations" to describe the tearing and reconnection of quantum state spaces, offering a new interpretation of error correction, entanglement transitions, and dissipative evolution. We demonstrate the applicability of this approach in several contexts, including quantum error correction, topological quantum computing, and quantum material characterization. Our formulation offers a new paradigm where defect structures are treated not as limitations but as central agents of structural evolution. This work lays the foundation for a structural theory of quantum anomaly and may open new pathways for designing fault-tolerant quantum systems using topologically-informed defect engineering.

Implementation and Optimization of Quantum Machine Learning Algorithm to explore its potential use case in the field of healthcare (228) Dhruba Jyoti DAS

This paper investigates the application of quantum computing techniques to enhance medical image classification using the MedMNIST dataset and the TensorFlow Quantum (TFQ) library.

Motivated by the growing complexity of medical image analysis and the potential advantages offered by quantum computing, this research focuses on the development of hybrid classical-quantum models that leverage variational quantum circuits (VQCs) within the TFQ framework.

Initial experiments establish baseline performance using traditional convolutional neural networks (CNNs) on the MedMNIST dataset. Subsequently, custom VQCs are designed and implemented, exploring various circuit architectures and quantum feature encoding methods. These VQCs are integrated with classical machine learning components through the TFQ library.

A comprehensive evaluation of the quantum-enhanced models is conducted, comparing their performance to the classical baseline in terms of accuracy, precision, recall, and F1 score. The influence of hyperparameters such as qubit count and circuit depth is also examined.

Results suggest that quantum algorithms have the potential to improve medical image classification accuracy, particularly in scenarios where classical models face challenges.

This research contributes to the emerging field of quantum computing for healthcare applications. By utilizing the TFQ library, the integration of quantum algorithms into existing medical image analysis workflows is facilitated. It is anticipated that these findings will encourage further exploration of quantum-enhanced solutions for complex medical image analysis tasks, ultimately leading to more accurate diagnoses and improved patient outcomes.
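The evaluation metrics named above follow directly from confusion-matrix counts; a minimal reference implementation (illustrative, with made-up counts):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts --
    the metrics used to compare quantum-enhanced and classical models."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Hypothetical counts for one binary class of a MedMNIST-style task:
acc, prec, rec, f1 = classification_metrics(tp=80, fp=10, fn=20, tn=90)
assert abs(acc - 0.85) < 1e-9
assert abs(prec - 8 / 9) < 1e-9
assert abs(rec - 0.8) < 1e-9
assert abs(f1 - 16 / 19) < 1e-9
```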

Controlling gate operations on superconducting transmon qubits coupled to a central flux-tunable resonator (229) Goeran WENDIN

Several qubits coupled to a common stripline cavity offer a natural route to scalability. In 2006, Wallquist et al. [1] theoretically investigated selective coupling of superconducting fixed-frequency charge qubits mediated by a superconducting stripline cavity terminated by a flux-biased dc superconducting quantum interference device (SQUID), which allows the resonance frequency of the cavity to be tuned. Selective entanglement of the qubit states was achieved by sweeping the cavity frequency through the qubit-cavity resonances. The circuit can accommodate several qubits and allows them to be kept at their optimal points with respect to decoherence during the whole operation. We derived an effective quantum Hamiltonian for the basic two-qubit-cavity system involved in two-qubit gate operation and analysed appropriate circuit parameters. We then presented a protocol for performing Bell inequality measurements and discussed a composite pulse sequence generating a universal controlled-phase gate.
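For orientation, the qubit-cavity interaction in such circuit-QED setups is commonly written in the standard Jaynes-Cummings (Tavis-Cummings) form; this is the textbook expression, not necessarily the exact effective Hamiltonian derived in [1]:

```latex
\frac{H}{\hbar} = \omega_r(\Phi)\, a^\dagger a
  + \sum_{j=1}^{2} \frac{\omega_{q,j}}{2}\, \sigma_z^{(j)}
  + \sum_{j=1}^{2} g_j \left( a^\dagger \sigma_-^{(j)} + a\, \sigma_+^{(j)} \right)
```

Here \(\omega_r(\Phi)\) is the cavity frequency, tunable via the flux \(\Phi\) through the terminating SQUID, \(\omega_{q,j}\) are the qubit frequencies, and \(g_j\) are the coupling strengths; sweeping \(\omega_r(\Phi)\) through resonance with a qubit switches the effective qubit-cavity interaction on and off, which is what enables the selective entanglement described above.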

The scheme was never investigated experimentally for controlling qubit gates, but was very successful in other physics contexts [2,3]. However, fifteen years later a team at IQM published several papers implementing the basic scheme of coupling a number of flux-tunable transmon qubits to a common superconducting fixed-frequency resonator cavity [4,5]. This is now the central theme of the IQM Star architecture with 24 qubits at the centre of the LUMI-Q project [6,7] funded by the EuroHPC JU.

The IQM architecture is based on flux-tunable transmons and a fixed-frequency central resonator. This architecture can be used as a test-bed for algorithms that benefit from high connectivity: it offers the flexibility to encode a qubit for quantum computation or to utilize the resonator's bosonic modes, which further enables quantum simulation of bosonic systems. The operation of the QPU platform is based on the qubit-resonator conditional Z gate and the qubit-resonator MOVE operation. The latter allows a quantum state to be transferred between one of the peripheral qubits and the computational resonator. Ref. [5] described the performance of the 6-qubit Star QPU, which achieved a genuinely multi-qubit entangled Greenberger-Horne-Zeilinger (GHZ) state over all six qubits with a readout-error-mitigated fidelity of 0.86.
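For reference, the ideal six-qubit GHZ state targeted in [5] is straightforward to write down (a textbook construction, shown here for orientation only):

```python
import math

def ghz_statevector(n: int):
    """Ideal n-qubit GHZ state (|0...0> + |1...1>)/sqrt(2), returned as a
    real amplitude vector of length 2^n in the computational basis."""
    amp = 1 / math.sqrt(2)
    state = [0.0] * (2 ** n)
    state[0] = amp    # |00...0>
    state[-1] = amp   # |11...1>
    return state

ghz6 = ghz_statevector(6)
assert abs(sum(a * a for a in ghz6) - 1.0) < 1e-12  # normalized
assert ghz6[0] == ghz6[63] != 0.0                   # equal-weight superposition
```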

In this Poster we will discuss how to extend the IQM Star architecture to also include a frequency-tunable central transmission line cavity, and elaborate on the opportunities for more powerful performance of software and algorithms aiming for fault-tolerant implementations.
