Noisy intermediate-scale quantum era

The current state of quantum computing[1] is referred to as the noisy intermediate-scale quantum (NISQ) era,[2][3] characterized by quantum processors containing up to roughly 1,000 qubits that are not yet advanced enough for fault tolerance or large enough to achieve quantum advantage.[4][5] These processors, which are sensitive to their environment (noisy) and prone to quantum decoherence, are not yet capable of continuous quantum error correction. This intermediate scale is described by the quantum volume, a metric based on a moderate number of qubits and gate fidelity. The term NISQ was coined by John Preskill in 2018.[6][2] According to Microsoft Azure Quantum's scheme, NISQ computation is considered level 1, the lowest of the quantum computing implementation levels.[7][8]

In October 2023, the 1,000-qubit mark was passed for the first time by Atom Computing's 1,180-qubit quantum processor.[9] However, as of 2024, only two quantum processors have over 1,000 qubits, and sub-1,000-qubit processors remain the norm.[10]

Algorithms

NISQ algorithms are quantum algorithms designed for quantum processors in the NISQ era. Common examples are the variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA), which use NISQ devices but offload some calculations to classical processors.[2] These algorithms have been successful in quantum chemistry and have potential applications in various fields including physics, materials science, data science, cryptography, biology, and finance.[2] However, due to noise during circuit execution, they often require error mitigation techniques.[11][5][12][13] These methods reduce the effect of noise by running a set of circuits and applying post-processing to the measured data. In contrast to quantum error correction, where errors are continuously detected and corrected during the run of the circuit, error mitigation can only use the outcomes of the noisy circuits.

The Quantum Hardware Landscape

Current NISQ devices typically contain between 50 and 1,000 physical qubits, with leading systems from IBM, Google, and other companies pushing these boundaries. However, these qubits are inherently noisy: they suffer from decoherence, gate errors, and measurement errors that accumulate during computation. Gate fidelities hover around 99-99.5% for single-qubit operations and 95-99% for two-qubit gates; while impressive, these rates still introduce significant errors in circuits with thousands of operations.

The fundamental challenge lies in the exponential accumulation of noise with circuit depth. Since circuit fidelity decays roughly as (1 − p)^N for a per-gate error rate p and gate count N, an error rate of 0.1% per gate leaves only about 0.999^1000 ≈ 37% fidelity after 1,000 gates, the point at which noise begins to overwhelm the signal. This constraint severely limits the depth and complexity of algorithms that can be successfully implemented on current hardware, necessitating the development of specialized NISQ algorithms that work within these constraints.[6][14]

Variational Quantum Eigensolver

The variational quantum eigensolver represents one of the most successful NISQ algorithms, specifically designed for quantum chemistry applications. VQE tackles the fundamental problem of finding the ground-state energy of molecular systems, a computation that scales exponentially with system size on classical computers but can potentially be solved in polynomial time on quantum devices.[15]

Mathematical Foundation and Implementation

VQE operates on the variational principle of quantum mechanics, which states that the expectation value of the Hamiltonian in any trial wavefunction provides an upper bound on the true ground-state energy. The algorithm constructs a parameterized quantum circuit, called an ansatz, that prepares a trial state ∣ψ(θ)⟩ approximating the ground state of a molecular Hamiltonian H.[16] The quantum processor prepares the ansatz state and measures the expectation value ⟨ψ(θ)∣H∣ψ(θ)⟩, while a classical optimizer iteratively adjusts the parameters θ to minimize the energy. This hybrid approach leverages quantum superposition to explore exponentially large molecular configuration spaces while relying on well-established classical optimization techniques.[17]
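The hybrid loop can be illustrated with a minimal statevector sketch. The toy two-qubit Hamiltonian below uses illustrative coefficients (not a real molecule), and the small hardware-efficient-style ansatz is one arbitrary choice among many; on real hardware the energy would be estimated from repeated shot measurements rather than computed exactly.

```python
import numpy as np
from scipy.optimize import minimize

# Pauli matrices used to build the toy Hamiltonian and the ansatz.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Toy two-qubit Hamiltonian (illustrative coefficients, not a real molecule).
H = 0.5 * kron(Z, I2) + 0.5 * kron(I2, Z) + 0.25 * kron(X, X)

def ansatz_state(theta):
    """Hardware-efficient-style ansatz: Ry rotations, a CNOT, more Ry rotations."""
    def ry(t):
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                                   # start in |00>
    psi = kron(ry(theta[0]), ry(theta[1])) @ psi
    psi = cnot @ psi
    return kron(ry(theta[2]), ry(theta[3])) @ psi

def energy(theta):
    """<psi(theta)|H|psi(theta)>; on hardware this is estimated from shots."""
    psi = ansatz_state(theta)
    return float(np.real(psi.conj() @ H @ psi))

rng = np.random.default_rng(1)
result = minimize(energy, x0=rng.uniform(0, np.pi, 4), method="COBYLA")
print(f"VQE energy:   {result.fun:.6f}")
print(f"Exact ground: {np.linalg.eigvalsh(H)[0]:.6f}")
```

A gradient-free optimizer such as COBYLA is used here because, on noisy hardware, shot-limited energy estimates make finite-difference gradients unreliable.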
Real-World Applications and Achievements

VQE has been successfully demonstrated on various molecular systems, from simple diatomic molecules like H₂ and LiH to more complex systems including water molecules and small organic compounds. Google's collaboration with Columbia University demonstrated VQE calculations on 16 qubits to study carbon atoms in diamond crystal structures, representing the largest quantum chemistry computation at that time.[18][19][20]

The algorithm has proven particularly valuable for studying chemical reactions, transition states, and excited-state properties. Recent implementations have achieved chemical accuracy (within 1 kcal/mol) for small molecules, demonstrating the potential for quantum advantage in materials discovery and drug development applications.[21]

Scaling Challenges and Solutions

Despite its successes, VQE faces significant scaling challenges. The number of measurements required grows polynomially with the number of qubits, while the optimization landscape becomes increasingly complex for larger systems. The fragment molecular orbital (FMO) approach combined with VQE has shown promise for addressing scalability, allowing efficient simulation of larger molecular systems by breaking them into manageable fragments.[22]

Quantum Approximate Optimization Algorithm

QAOA is a paradigmatic NISQ algorithm for solving the combinatorial optimization problems that arise in industries from finance to logistics. Developed by Farhi and colleagues, QAOA encodes optimization problems as Ising Hamiltonians and uses alternating quantum evolution operators to explore solution spaces.

Algorithm Structure and Methodology

QAOA constructs a quantum circuit consisting of p layers, each applying an evolution under a cost Hamiltonian H_C followed by an evolution under a mixer Hamiltonian H_M:

∣ψ(γ, β)⟩ = e^{−iβ_p H_M} e^{−iγ_p H_C} ⋯ e^{−iβ_1 H_M} e^{−iγ_1 H_C} ∣+⟩^{⊗n}

Classical optimization adjusts the angles γ and β to maximize the probability of measuring good solutions.[23][24]
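A minimal sketch of this alternating structure follows, assuming a depth p = 1 circuit for Max-Cut on a four-node ring graph simulated as a statevector; the edge list and starting angles are illustrative choices, not part of the algorithm itself.

```python
import numpy as np
from scipy.optimize import minimize

# Max-Cut instance assumed for illustration: a 4-node ring graph.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
dim = 2 ** n

def cut_value(bits):
    return sum(1 for i, j in edges if bits[i] != bits[j])

# The cost Hamiltonian is diagonal in the computational basis: H_C|z> = C(z)|z>.
costs = np.array([cut_value([(z >> k) & 1 for k in range(n)])
                  for z in range(dim)], dtype=float)

X = np.array([[0, 1], [1, 0]], dtype=complex)

def apply_mixer(psi, beta):
    """Apply e^{-i beta X} to every qubit (the transverse-field mixer)."""
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    psi = psi.reshape([2] * n)
    for q in range(n):
        psi = np.tensordot(rx, psi, axes=([1], [q]))
        psi = np.moveaxis(psi, 0, q)
    return psi.reshape(dim)

def qaoa_state(gamma, beta):
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+>^n
    psi = np.exp(-1j * gamma * costs) * psi              # cost evolution
    return apply_mixer(psi, beta)                        # mixer evolution

def neg_expected_cut(params):
    psi = qaoa_state(*params)
    return -float(np.real(np.sum(np.abs(psi) ** 2 * costs)))

res = minimize(neg_expected_cut, x0=[0.5, 0.5], method="COBYLA")
print(f"p=1 expected cut: {-res.fun:.3f}  (optimum for this graph is 4)")
```

Because the cost Hamiltonian is diagonal, its evolution reduces to per-basis-state phases, which is why the sketch multiplies the statevector by e^{−iγC(z)} rather than exponentiating a matrix.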
Performance Benchmarks and Quantum Advantage

Recent theoretical and experimental work has demonstrated QAOA's potential for quantum advantage on specific problem classes. For the Max-Cut problem on random graphs, QAOA at depth p = 11 has been shown to outperform standard semidefinite programming algorithms. Even more remarkably, QAOA can exploit non-adiabatic quantum effects that classical algorithms cannot access, potentially circumventing fundamental limitations that constrain classical optimization methods.[25][26][27]

Experimental implementations on quantum hardware have shown promising results for problems with up to 20-30 variables, though current hardware limitations restrict practical applications to relatively small problem sizes. The algorithm's performance improves with circuit depth p, but NISQ constraints limit the achievable depth, creating a fundamental trade-off between solution quality and hardware requirements.[28][29]

Error Mitigation: Making Noisy Quantum Computing Practical

Since NISQ devices lack full quantum error correction, error mitigation techniques become essential for extracting meaningful results from noisy quantum computations. These techniques operate through post-processing of measured data rather than actively correcting errors during computation, making them suitable for near-term hardware implementations.[30][31]

Zero-Noise Extrapolation

Zero-noise extrapolation (ZNE) is one of the most widely used error mitigation techniques: it artificially amplifies circuit noise and extrapolates results back to the zero-noise limit.[32] The method assumes that errors scale predictably with noise levels, allowing researchers to fit polynomial or exponential functions to noisy data and infer the noise-free result.[33]

Recent implementations of purity-assisted ZNE have shown improved performance by incorporating additional information about quantum state degradation. This approach can extend ZNE's effectiveness to higher error regimes where conventional extrapolation methods fail, though it requires additional measurement overhead.
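The extrapolation step itself can be sketched as follows. The exponential-decay noise model, the decay constant, and the shot count are all assumptions standing in for a real device; scale factors of 1, 3, and 5 are chosen because gate folding (replacing a gate G with G G†G) is one common way to realize them on hardware.

```python
import numpy as np

# Assumed toy noise model: the measured expectation decays exponentially
# with the noise scale factor lam, E(lam) = E_ideal * exp(-c * lam).
rng = np.random.default_rng(0)
E_ideal, c, shots = 1.0, 0.3, 4000

def measure(lam):
    """Simulate a shot-noise-limited estimate at noise scaling lam."""
    p_plus = (1 + E_ideal * np.exp(-c * lam)) / 2   # probability of outcome +1
    samples = rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus])
    return samples.mean()

# Run the same circuit at amplified noise levels.
lams = np.array([1.0, 3.0, 5.0])
vals = np.array([measure(lam) for lam in lams])

# Richardson-style extrapolation: fit a quadratic through the three
# points and evaluate it at lam = 0, the zero-noise limit.
coeffs = np.polyfit(lams, vals, deg=2)
E_zne = np.polyval(coeffs, 0.0)

print(f"raw (lam=1):      {vals[0]:+.4f}")
print(f"ZNE extrapolated: {E_zne:+.4f}")
print(f"ideal value:      {E_ideal:+.4f}")
```

With these toy parameters the raw estimate sits near 0.74 while the extrapolated value lands near 0.96, illustrating both the bias reduction and the residual error that motivates more sophisticated variants such as purity-assisted ZNE.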
Symmetry Verification and Probabilistic Error Cancellation

Symmetry verification exploits conservation laws inherent in quantum systems to detect and correct errors. For quantum chemistry calculations, symmetries such as particle-number conservation or spin conservation provide powerful error detection mechanisms: when measurement results violate these symmetries, they can be discarded or corrected through post-selection.[34]

Probabilistic error cancellation reconstructs ideal quantum operations as linear combinations of the noisy operations that can be implemented on hardware. While this approach can achieve zero bias in principle, its sampling overhead typically grows exponentially with the total circuit error, limiting practical applications to relatively low-noise scenarios.
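Symmetry-based post-selection reduces to filtering measured bitstrings, as in the sketch below; the raw counts and the choice of a two-particle sector are hypothetical, standing in for the output of a noisy particle-number-conserving chemistry circuit.

```python
from collections import Counter

def postselect_particle_number(counts, n_particles):
    """Discard bitstrings whose Hamming weight violates the expected
    particle number, then renormalize the surviving counts."""
    kept = {b: c for b, c in counts.items() if b.count("1") == n_particles}
    total = sum(kept.values())
    if total == 0:
        raise ValueError("all shots violated the symmetry")
    return {b: c / total for b, c in kept.items()}

# Hypothetical raw counts from a noisy 4-qubit circuit that should
# conserve two particles (exactly two '1' bits per valid outcome).
raw_counts = Counter({"0101": 410, "1010": 380, "0110": 95,
                      "0001": 60, "1110": 40, "0100": 15})

clean = postselect_particle_number(raw_counts, n_particles=2)
for bitstring, prob in sorted(clean.items(), key=lambda kv: -kv[1]):
    print(bitstring, f"{prob:.3f}")
```

Here 115 of the 1,000 shots violate particle-number conservation and are discarded; the discard rate is exactly the measurement overhead this technique trades for improved accuracy.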
Performance Overhead and Trade-offs

Error mitigation techniques inevitably increase measurement requirements, with overheads ranging from 2x to 10x or more depending on error rates and the specific method employed. This creates a fundamental trade-off between accuracy and experimental resources, requiring careful optimization for each application.[13]

Recent benchmarking studies comparing different mitigation strategies have shown that symmetry verification often provides the best performance for chemistry applications, while ZNE excels for optimization problems with fewer inherent symmetries. The choice of mitigation strategy significantly impacts overall algorithm performance and should be tailored to specific problem types and hardware characteristics.[35]

Quantum Advantage: Current Status and Future Prospects

The question of quantum advantage in the NISQ era remains hotly debated, with different complexity-theoretic frameworks providing varying conclusions about the computational power of noisy quantum devices. While theoretical work suggests that NISQ algorithms occupy a computational complexity class strictly between classical computing (BPP) and ideal quantum computing (BQP), experimental demonstrations of practical quantum advantage remain elusive.[36]

Theoretical Separations and Limitations

Recent complexity-theory results show that NISQ devices cannot achieve Grover-like quadratic speedups for unstructured search problems, fundamentally limiting their advantage for certain algorithm classes. However, NISQ algorithms can still achieve exponential advantages over classical methods for specific structured problems, such as the Bernstein-Vazirani problem, where the quantum approach requires only logarithmic query complexity.

For quantum state learning problems, NISQ devices face exponential limitations compared to fault-tolerant quantum computers, highlighting the importance of error correction for certain applications. These theoretical insights help define the boundaries of NISQ computational power and guide algorithm development efforts.[37]

Beyond-NISQ era

The creation of a computer with tens of thousands of qubits and sufficient error correction would eventually end the NISQ era.[4] Such beyond-NISQ devices would be able to, for example, implement Shor's algorithm for very large numbers and break RSA encryption.[38] In April 2024, researchers at Microsoft announced a significant reduction in logical error rates achieved with only four logical qubits, suggesting that quantum computing at scale could be years away instead of decades.[39]

Industry Roadmaps and Timeline Projections

Leading quantum computing companies have published ambitious roadmaps for achieving fault-tolerant quantum computing within the current decade. IBM has committed to delivering a large-scale fault-tolerant quantum computer, IBM Quantum Starling, by 2029. This system aims to execute quantum circuits comprising 100 million quantum gates on 200 logical qubits, a computational capability that would require more memory than the world's most powerful classical supercomputers to simulate. IBM's approach relies on quantum low-density parity check (qLDPC) codes, including bivariate bicycle codes, to minimize the physical-qubit overhead required for fault tolerance.[14]

Quantinuum has announced an accelerated roadmap to achieve universal fault-tolerant quantum computing by 2029-2030, building on its recent breakthroughs in implementing both Clifford and non-Clifford gates fault-tolerantly. Its trapped-ion architecture has demonstrated the industry's highest quantum volumes and longest coherence times, positioning the company as a leading contender for early fault-tolerant systems. Other companies, including Google, are pursuing alternative approaches such as surface code implementations and novel qubit architectures to achieve similar timelines.[40][41]

Independent analysis suggests that practical quantum applications may emerge around 2035-2040, assuming continued exponential growth in quantum hardware capabilities.[42] However, these projections depend critically on simultaneous advances across multiple technical domains, including qubit hardware, control systems, error correction algorithms, and quantum software stacks. The transition from NISQ to fault-tolerant quantum computing represents one of the most significant technological challenges in modern physics and engineering, requiring sustained international collaboration and investment to realize its transformative potential.[43][44]