by Valeria Bartsch, Matthias Kabel and Anita Schöbel (Fraunhofer ITWM)
The Fraunhofer-Gesellschaft, in cooperation with IBM, has established a national competence network in the research field of quantum computing [L1]. The aim is to develop quantum-based computing strategies for the next generation of quantum computers. A competence centre for quantum computing has been established at Fraunhofer ITWM [L2], which aims to develop and optimise quantum algorithms for realistic industrial challenges. In this article, we describe our work on testing, evaluating, and optimising the quantum Fourier transform (QFT) in three industrial application scenarios.
A Fourier transform decomposes functions of space or time into functions of spatial or temporal frequency. Fourier transforms are needed in vastly different contexts. Application-related examples from industrial projects at Fraunhofer ITWM include:
- the non-destructive examination of materials,
- the evaluation of the homogeneity of structures and detection of deviations based on image data,
- the analysis of mechanical and thermal properties of materials, such as elasticity.
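As a minimal illustration of the transform underlying all three scenarios, the following pure-Python sketch (no quantum hardware involved) decomposes a sampled signal into its frequency components via a naive discrete Fourier transform:

```python
import cmath

def dft(samples):
    """Naive discrete Fourier transform: O(N^2) operations."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A signal containing a single frequency (bin k = 1) of amplitude 1.
signal = [cmath.exp(2j * cmath.pi * t / 8) for t in range(8)]
spectrum = dft(signal)
# All energy concentrates in bin 1: |spectrum[1]| = 8, all other bins ~ 0.
```

The fast Fourier transform and the QFT compute exactly this decomposition, only with far fewer operations than the O(N²) loop above.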
Scenario A: Non-destructive examination of materials
Fourier transforms play an important role in the mathematical formulation and solution of all the aforementioned problems. The Fourier transform is also often the bottleneck when solving large systems or when a real-time solution is required. For example, the data involved in the non-destructive examination of materials currently cannot be examined in real time. The quantum formulation of the Fourier transform, the QFT, is very memory-efficient compared to the classical formulation and scales much better. However, the price for this superiority is the complexity of encoding and readout. In our work, we investigated the quality of the results and the scalability of the QFT on real systems, and developed and implemented novel methods for encoding data on a quantum computer and reading out the results.
In our setup for the non-destructive examination of materials, we use approximately 100 transmitters and receivers operating at a frequency of 100 Hz, with 100 frequency points. To reconstruct a 100x100x100 voxel volume, about 10¹² computational operations are required. This is not feasible in real time even on a modern GPU. However, a 128x128x128 volume can be represented on just 21 qubits. Two-dimensional QFT calculations show that beyond 4 qubits, the results are randomly distributed. This means that current quantum computers are not yet qualitatively capable of solving real-world problems, but the underlying algorithms work, and we are prepared to take advantage of quantum computing once improved systems become available.
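The scaling argument can be made concrete with a quick back-of-the-envelope calculation (pure Python; the O(n²) gate count for an n-qubit QFT is the textbook figure, not a measurement on hardware):

```python
import math

voxels = 128 ** 3                    # data points in a 128x128x128 volume
qubits = int(math.log2(voxels))      # n qubits hold 2^n complex amplitudes
# 21 qubits suffice to hold 2,097,152 complex amplitudes.

classical_fft_ops = voxels * int(math.log2(voxels))  # O(N log N) for the FFT
qft_gates = qubits * (qubits + 1) // 2               # O(n^2) Hadamard/phase gates

# classical_fft_ops is roughly 44 million elementary operations, while the
# QFT circuit itself needs only 231 gates -- the catch being the additional
# cost of encoding the data and reading out the state.
```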
Scenario B: Homogeneity of structures and detection of deviations in image data
For filtering image data, the QFT has been tested on small test images. Here, too, only very small data sets (in this case 2x2 pixel greyscale images) can be evaluated. The focus of this part of the work is the encoding of image data for the quantum computer. Efficient encoding is important because, as described in Scenario A, the QFT algorithm offers an advantage over the classical Fourier transform algorithm, but reading the data in and the results out can negate this speed advantage. Here, an encoding algorithm called FRQI (Flexible Representation of Quantum Images) has been used, for which several implementations exist.
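The FRQI encoding can be illustrated by building its state vector classically. The sketch below assumes the standard FRQI definition, in which each grey value g_i in [0, 255] is mapped to an angle θ_i = (π/2)·g_i/255 and stored in the amplitudes cos(θ_i)|0⟩|i⟩ + sin(θ_i)|1⟩|i⟩; the exact mapping used in our implementation may differ:

```python
import math

def frqi_state(pixels):
    """Build the FRQI state vector for a flattened 2^n x 2^n grey image.

    One extra (colour) qubit is added: amplitude index i corresponds to
    colour qubit |0>, index n_pix + i to colour qubit |1>.
    """
    n_pix = len(pixels)                   # must be a power of four
    norm = 1.0 / math.sqrt(n_pix)
    state = [0.0] * (2 * n_pix)
    for i, g in enumerate(pixels):
        theta = (math.pi / 2) * g / 255
        state[i] = norm * math.cos(theta)          # colour qubit |0>
        state[n_pix + i] = norm * math.sin(theta)  # colour qubit |1>
    return state

# 2x2 test image (flattened): black, mid-grey, mid-grey, white.
state = frqi_state([0, 128, 128, 255])
# The resulting 8-amplitude state (3 qubits) is normalised to 1.
```

A 2x2 image thus needs 3 qubits, and in general a 2^n x 2^n image needs 2n + 1 qubits, which is where the memory advantage over classical storage comes from.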
To encode a classical image via FRQI, Qiskit’s MCRY gates can be used. However, these require a comparatively large number of gates, especially 2-qubit CNOT gates. Each CNOT needs either a physical connection between the two qubits involved or additional SWAP gates to move the states to a pair of connected qubits. Since the number of connections between the qubits is low, many SWAP gates are required, which in turn increases the depth of the circuits and hence the execution time. This is critical if the required time exceeds the coherence time, because the computer then leaves the quantum regime and the results become unusable. A high number of multi-qubit gates also means that the errors of the individual gates add up. Our self-implemented MARY implementation requires significantly fewer gates overall and, in particular, significantly fewer 2-qubit gates, as shown in Figure 1. Nevertheless, the circuit depth increases exponentially for larger images. The maximum executable image size on a real system with the MARY implementation is 32x32 pixels, four times larger than with the standard implementation.
Figure 1: Comparison of the circuit depth for a 2x2 pixel encoding on several IBMQ quantum computers. The upper plot shows the number of 2-qubit gates, the lower plot the circuit depth. The plots are taken from Geng et al. [L4].
Scenario C: Analysis of mechanical and thermal properties of materials
In the context of material characterisation, i.e. the investigation of mechanical and thermal properties of materials, the physical description of material samples leads to partial differential equations, which are usually discretised with finite elements. If a regular mesh is used for discretisation – which is the case, for example, when working directly on a CT image of the material sample – the discretised differential equation can be solved memory-efficiently and quickly using the fast Fourier transform. Replacing the Fourier transform with a QFT requires reading out the complex Fourier coefficients on the quantum computer. However, only amplitudes can be measured on a quantum computer, i.e., a direct measurement only yields the magnitude of the Fourier coefficients. This problem was solved in two steps:
- By using uniform displacement and stress boundary conditions instead of the usual periodic boundary conditions for the partial differential equation, the phase of the Fourier coefficients is known a priori. Interestingly, these uniform boundary conditions (as described by Grimm-Strele and Kabel [L3]) can be applied by simply mirroring the material sample.
- Subsequently, only the sign of the Fourier coefficients remained unknown. This could be determined with an additional amplitude measurement.
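The sign-recovery idea can be illustrated classically. The sketch below is a simplified stand-in for the hardware procedure, under our own assumptions: the coefficient is known to be real (phase fixed a priori), the first measurement yields its squared magnitude, and a second measurement of the coefficient interfered with a reference amplitude of known sign exposes the sign through the cross term:

```python
import math

def recover_real_coefficient(c_k, reference=1.0):
    """Recover a real coefficient from two amplitude-style measurements.

    Quantum measurements give probabilities, i.e. squared magnitudes:
    step 1 measures |c_k|^2; step 2 measures |c_k + reference|^2, whose
    cross term 2 * c_k * reference carries the sign of c_k.
    """
    magnitude = math.sqrt(c_k ** 2)            # step 1: |c_k|
    interference = (c_k + reference) ** 2      # step 2
    # |c_k + r|^2 = c_k^2 + 2*c_k*r + r^2  ->  isolate the cross term.
    cross_term = interference - magnitude ** 2 - reference ** 2
    sign = 1.0 if cross_term >= 0 else -1.0
    return sign * magnitude

# recover_real_coefficient(-0.3) returns -0.3: the sign is restored
# from the interference measurement.
```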
Since each measurement destroys the quantum state of the qubits, the QFT must be executed anew for each Fourier coefficient to be read out. It thus turned out that the complexity of the resulting QFT-based material characterisation corresponds to that of a naively implemented Fourier transform. However, the results of the material characterisation on the quantum computer already agree with the predictions of the classical computer up to stochastic errors. Our next step is to add gate-error mitigation techniques to decrease the errors of the QFT.
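The readout overhead can be tallied directly. The figures below are rough, assumed orders of magnitude (one QFT execution per coefficient, and on the order of N shots to estimate one amplitude to fixed precision), not measured values:

```python
import math

N = 2 ** 10                  # Fourier coefficients to read out
n = int(math.log2(N))        # qubits needed to hold them

# One QFT circuit has only O(n^2) gates -- far cheaper than the FFT.
gates_per_run = n * (n + 1) // 2

# But each measurement collapses the state, so the QFT is re-run for
# every coefficient, and estimating one amplitude to fixed precision
# needs on the order of N shots.  Total circuit executions:
total_runs = N * N           # matches the O(N^2) cost of a naive transform
```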
Acknowledgements: This work was supported by the project AnQuC-2 of the Competence Center Quantum Computing Rhineland-Palatinate (Germany). The authors are deeply indebted to Fabian Friederich, Alexander Geng, Felix Givois, Andreas Keil and Ali Moghiseh for providing the use cases and results.
[L1] Fraunhofer Competence Network Quantum Computing: https://www.fraunhofer.de/en/quantumcomputing
[L2] Competence Center Quantum Computing Rhineland-Palatinate: https://www.itwm.fraunhofer.de/en/fields-of-application/quantum-computing.html
[L3] H. Grimm-Strele, M. Kabel: “Fast Fourier transform based homogenization with mixed uniform boundary conditions”, International Journal for Numerical Methods in Engineering, DOI: 10.1002/nme.6830
[L4] A. Geng, M. Ali, K. Schladitz: “A quantum image detection for the NISQ era”, in Proc. of Quantum Techniques in Machine Learning 2021, https://arxiv.org/abs/2110.15672
Fraunhofer Institute for Industrial Mathematics (ITWM), Germany