The Way Forward: Bringing HPC and Quantum Computing Together (part 2)

Published 24/04/2022 · Updated 21/03/2024 · 8 min. read

In part one of this series, we presented the roadmap to bring high-performance computing (HPC) and quantum computing together in order to help HPC centers continue to innovate. This roadmap, which is based on the IQM and Atos survey of the state of quantum computing in HPC centers, has three steps:

  1. Gap analysis and quantum solution identification (immediate)
  2. Quantum solution design and integration (midterm)
  3. Quantum computing use case development and implementation (long-term)

The first step, which was discussed in detail in part one of our blog series, involves identifying problems that the HPC center is tackling and for which the solution currently offered by classical computers can be improved through the use of quantum computers. In this blog, we focus on the second step in the roadmap: quantum solution design and integration. That is, designing a quantum computer that will lead to the improved solutions envisioned in the first step and then integrating this quantum computer with the existing classical HPC architecture.

When carrying out the design, development and integration of the quantum computing solution, it is important to remember that for the foreseeable future, quantum computers will work as computing accelerators that require significant classical computing support. The eventual goal of a standalone, self-contained quantum computer is a laudable one, but it lies far ahead in the future. For now, the goal should be a seamless interaction between the quantum computer and the existing HPC infrastructure.

Since quantum computers will be working as computing accelerators within an existing HPC infrastructure, a close collaboration between the quantum computing provider and the HPC center is essential for the successful execution of the second step in the roadmap. This is true for both types of HPC centers that exist today: the specialized center tailored to specific use cases, typically found in private companies, and the general-purpose HPC center, which is often publicly funded or offered as a service. The nature of this collaboration should be determined by the type of center and its quantum design capabilities (for example, the ability to design quantum algorithms). For a specialized HPC center with advanced quantum computing knowledge, the collaboration could extend as far as working with the quantum computer provider to design a customized quantum computing chip. For a general-purpose HPC center at the early stages of its quantum computing innovation process, the collaboration could focus on determining where in the computation workflow quantum accelerators can be accommodated to improve the solutions to the problems that were identified in step 1 (gap analysis and quantum solution identification).

To maximize the chance of a successful collaboration, it is best for the quantum computer to be on premises. That is, the quantum computer should be located at the HPC center. Even if only general-purpose quantum accelerators are used, a high level of customization is still required to integrate quantum computers into a classical HPC infrastructure. Currently, only on-premises quantum computers allow for such customization. While cloud-based quantum computing is a useful sandbox for testing ideas as well as an excellent platform for learning, the need to carry out lengthy simulations and to serve as many users as possible makes the customization of cloud quantum computing for HPC providers challenging from both technical and financial perspectives. (It may eventually be possible to implement some standard functionalities through cloud quantum computing, as we will discuss in part three of this series.) Furthermore, cloud computing raises privacy and security concerns, as the HPC user is working with a machine that is not entirely under their control.

Another advantage of having an on-premises quantum computer is that the training of the research scientists, developers, and IT staff at the HPC center becomes more straightforward. A key aim of the second step of the roadmap is to upskill the staff at the HPC center in quantum computing, and an on-premises quantum computer makes this aim easier to achieve. Finally, troubleshooting any problems is simpler if both the quantum computer and the HPC infrastructure are at the same location.

In part one of this blog series, we recommended carrying out the first step in the roadmap with specific use cases in mind in order to keep the implementation of this second step as concrete as possible. Here, the use case we consider is an optimization problem, such as the placement of sensors in an automobile to provide maximum coverage while minimizing costs, that can be analyzed using a variational algorithm (see part one of this series).

In variational algorithms, the optimization can be separated into quantum and classical parts. These algorithms combine a quantum processing unit (QPU) and a classical central processing unit (CPU) with the goal of finding the lowest energy state of the Hamiltonian that describes the system to be optimized (see "Hamiltonians and optimization" box below). The classical computer chooses an initial set of parameters, which are supplied to the QPU; the QPU prepares the trial state of the system based on these parameters and measures the energy of this trial state. The results are then fed back to the CPU, which uses this information to select a new set of parameters. The process is repeated until the desired convergence is reached (figure 1).

Figure 1: The hybrid variational loop, in which the CPU proposes parameters and the QPU prepares and measures the trial state until convergence is reached.
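As a minimal sketch of this hybrid loop, the following Python pseudocode assumes a hypothetical `qpu` object whose `prepare_and_measure` method prepares the parameterized trial state on the hardware and returns the measured energy. The names are illustrative, not a real vendor API.

```python
import numpy as np
from scipy.optimize import minimize

def energy(params, qpu):
    # The QPU prepares the trial state defined by `params` and returns
    # the measured energy of the problem Hamiltonian.
    # `prepare_and_measure` is a hypothetical placeholder for the
    # provider's API call.
    return qpu.prepare_and_measure(params)

def variational_loop(qpu, num_params):
    # The classical CPU chooses an initial set of parameters ...
    rng = np.random.default_rng()
    initial_params = rng.uniform(0.0, 2.0 * np.pi, num_params)
    # ... then iterates: the QPU evaluates the energy, the classical
    # optimizer proposes new parameters, and the loop repeats until
    # convergence (or an iteration cap) is reached.
    result = minimize(energy, initial_params, args=(qpu,),
                      method="COBYLA", options={"maxiter": 200})
    return result.x, result.fun  # best parameters, lowest energy found
```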

How would this algorithm be implemented in an HPC center? We will assume that an HPC center has the commonly used architecture of interconnected nodes that communicate over a high-speed network. On each node there are many CPU cores with local shared memory and possibly additional GP-GPU accelerators (general-purpose computing on graphics processing units). On the quantum computer side, there will be several interconnected QPUs.

For the classical and quantum computers to communicate, an API is needed. At the moment, there is no Linux for quantum computers—that is, a generic operating system that would work on many different quantum computer hardware systems—although there is active research in this direction. Therefore, the control of the quantum computer operations is carried out from the classical computer via the API. (In this sense the QPU and GP-GPU accelerators are analogous—they don’t have their own OS, but rely on an API to communicate with the host system.)
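To make the analogy concrete, the sketch below shows what the classical side of such an API boundary might look like. All class and method names here are hypothetical; they illustrate the division of responsibilities between the host and the QPU, not any particular provider's interface.

```python
class QuantumAPI:
    """Illustrative API boundary between the HPC host and the QPU.

    Like a GP-GPU accelerator, the QPU has no operating system of its
    own: the classical host drives every operation through calls like
    these.
    """

    def compile(self, circuit, qpu_topology):
        # Translate the abstract circuit into the QPU's native gate set
        # and map logical qubits onto the physical qubit connectivity.
        raise NotImplementedError("provider-specific")

    def execute(self, compiled_circuit, shots):
        # Stream control pulses to the QPU and collect the measurement
        # outcomes, much like launching a kernel on a GP-GPU.
        raise NotImplementedError("provider-specific")

# Typical call sequence from a node in the HPC cluster:
#   api = QuantumAPI()
#   native = api.compile(circuit, qpu_topology)
#   counts = api.execute(native, shots=4096)
```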

Since API development for quantum computers is in its infancy, it is important to work with the quantum computer provider to ensure that the API chosen (or designed from scratch) is optimized for the quantum computer. The number of qubits, the error rates and the coherence time of the quantum computer all put demands on the performance of the API, its compiler and the software library it uses. To offer a blunt example: if the API cannot execute commands on the quantum computer within the coherence time, it is not of much use. Of course, the API should be flexible enough to grow with the quantum computer. As more qubits are added, as the error rate drops (either through better design or through quantum error-correction algorithms), and as the coherence time increases, the performance of the quantum computer will improve, and the API must be able to handle this improvement so that it does not become the bottleneck in the hybrid classical-quantum computing architecture (figure 2).

Figure 2: As qubit counts, error rates, and coherence times improve, the API must scale so that it does not become the bottleneck.
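The coherence-time constraint can be made concrete with a back-of-the-envelope budget check like the one below. All gate durations and the coherence time are illustrative placeholders, not measured values for any particular device.

```python
# Illustrative timing budget for a QPU. All numbers are placeholders,
# not measurements of any particular device.
SINGLE_QUBIT_GATE_NS = 20    # duration of one single-qubit gate layer
TWO_QUBIT_GATE_NS = 40       # duration of one two-qubit gate layer
READOUT_NS = 300             # duration of the final measurement
COHERENCE_TIME_NS = 50_000   # e.g. a coherence time of ~50 microseconds

def circuit_duration_ns(single_qubit_layers, two_qubit_layers):
    # Gates within a layer run in parallel, so circuit depth (the number
    # of layers), not the total gate count, sets the execution time.
    return (single_qubit_layers * SINGLE_QUBIT_GATE_NS
            + two_qubit_layers * TWO_QUBIT_GATE_NS
            + READOUT_NS)

duration = circuit_duration_ns(single_qubit_layers=30, two_qubit_layers=20)
if duration > COHERENCE_TIME_NS:
    print(f"{duration} ns exceeds the coherence window: the compiler "
          "must produce a shorter schedule.")
else:
    print(f"Circuit fits: {duration} ns of a {COHERENCE_TIME_NS} ns budget.")
```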

Once the API is in place, it should be tested with toy problems whose solutions are known, either analytically or because they have been solved to high accuracy using only a classical computer. If enough representative toy problems can be solved in a robust manner using the API, this achievement can serve as a benchmark for whether the integration of the quantum computing solution with the existing HPC infrastructure has been successful. Once the performance and reliability of the API have been established, the HPC research team can proceed to the third and final step in the roadmap, which will be discussed in part three of this blog series.
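As an illustration of such a test, the sketch below compares the energy returned by the hybrid loop against the exact ground-state energy of a toy two-qubit Hamiltonian that is small enough to diagonalize classically. The Hamiltonian and the `run_variational_solver` placeholder are hypothetical choices for illustration.

```python
import numpy as np

# A toy two-qubit Hamiltonian (Ising-type with a transverse field),
# small enough to diagonalize exactly on the classical side.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
H = np.kron(Z, Z) + 0.5 * (np.kron(X, I2) + np.kron(I2, X))

exact_energy = np.linalg.eigvalsh(H)[0]  # exact ground-state energy
print(f"Exact ground-state energy: {exact_energy:.4f}")

# `run_variational_solver` stands in for the hybrid loop of figure 1,
# executed on the QPU through the API; it is a hypothetical placeholder.
# qpu_energy = run_variational_solver(H)
# A tolerance reflecting expected hardware noise decides pass/fail:
# assert abs(qpu_energy - exact_energy) < 1e-2
```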

About the author:

Deborah Berebichez, Ph.D.

Business Development Consultant for IQM
