A key issue inhibiting the wide use of quantum computing is the tendency of a quantum system to decohere and decay through interactions with its environment. To minimize such effects, quantum computers, such as IBM's, often must be run at extremely low temperatures in expensive ultra-cold chambers. Quantum error correction alleviates this problem at the expense of requiring many physical qubits to represent one computational qubit; this added cost makes building a large-scale, universal quantum computer a much more technologically challenging endeavor. Realizing truly error-free quantum computing is a laudable goal, but one that will likely be hard to achieve in the near future. Errors will exist in any real quantum computer; understanding how those errors affect the efficiency and reliability of a quantum computer is an important step towards post-Moore's-law computing with quantum devices.
For instance, a detailed understanding of the noise processes allows errors to be mitigated through noise extrapolation: the quantum computation is run many times at different noise rates, and the results are combined on a classical computer to obtain an estimate of the noise-free result. This approach requires only extra time (rerunning the quantum computation) and no additional quantum hardware, making it well suited to small, near-term quantum devices. We have demonstrated this class of methods both in simulation and on Rigetti's quantum devices in a variety of applications [1,2].
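As a rough sketch of the extrapolation step (a toy quadratic noise model and a Richardson-style polynomial fit, not the specific protocols of [1,2]; the function names and noise parameters here are illustrative assumptions):

```python
import numpy as np

def zero_noise_extrapolate(scales, values, degree=None):
    """Fit a polynomial to expectation values measured at several
    noise-scale factors and evaluate the fit at zero noise."""
    if degree is None:
        degree = len(scales) - 1  # Richardson: interpolate exactly
    coeffs = np.polyfit(scales, values, degree)
    return np.polyval(coeffs, 0.0)

# Hypothetical noise model: the noise-free observable is 1.0 and the
# noise adds a quadratic bias in the noise-scale factor lam.
def noisy_expectation(lam, true_value=1.0):
    return true_value - 0.30 * lam + 0.05 * lam**2

scales = [1.0, 2.0, 3.0]  # e.g. artificially amplified noise levels
values = [noisy_expectation(s) for s in scales]
estimate = zero_noise_extrapolate(scales, values)
print(estimate)  # close to the noise-free value 1.0
```

The classical post-processing is just this fit; the quantum overhead is rerunning the circuit once per noise-scale factor.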
[1] Matthew Otten and Stephen K. Gray, Accounting for errors in quantum algorithms via individual error reduction, npj Quantum Information 5(1), 1-6 (2019).
[2] Matthew Otten and Stephen K. Gray, Recovering noise-free quantum observables, Physical Review A 99(1), 012338 (2019).