oneAPI competes with other GPU computing stacks: CUDA by Nvidia and ROCm by AMD.
Specification
The oneAPI specification extends existing developer programming models to enable multiple hardware architectures through a data-parallel language, a set of library APIs, and a low-level hardware interface to support cross-architecture programming. It builds upon industry standards and provides an open, cross-platform developer stack.[6][7]
Data Parallel C++
DPC++[8][9] is a programming language implementation of oneAPI, built upon the ISO C++ and Khronos Group SYCL standards.[10] DPC++ is an implementation of SYCL with extensions that are proposed for inclusion in future revisions of the SYCL standard, including unified shared memory, group algorithms, and sub-groups.[11][12][13]
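A minimal sketch, not taken from the oneAPI specification itself, of what a DPC++/SYCL kernel using the unified shared memory extension can look like, assuming a DPC++ compiler (for example icpx with -fsycl) and a device that supports shared USM allocations:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q;  // binds to a default device (GPU, CPU, or other accelerator)
    constexpr size_t n = 1024;

    // Unified shared memory: a single allocation visible to both host and device.
    int* data = sycl::malloc_shared<int>(n, q);
    for (size_t i = 0; i < n; ++i) data[i] = static_cast<int>(i);

    // Data-parallel kernel: double each element on the device.
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) { data[i] *= 2; }).wait();

    std::cout << "data[42] = " << data[42] << "\n";  // host reads the same allocation
    sycl::free(data, q);
    return 0;
}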
Libraries
The set of APIs[6] spans several domains, including libraries for linear algebra, deep learning, machine learning, video processing, and others.
oneAPI DPC++ Library (oneDPL): algorithms and functions to speed DPC++ kernel programming
oneAPI Video Processing Library (oneVPL): real-time video encode, decode, transcode, and processing
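As an illustration of how one of these libraries is used, the following sketch (an assumption-laden example, not code from the specification) calls oneDPL's parallel algorithms in the oneapi::dpl namespace with a device execution policy built from a SYCL queue, using a unified shared memory allocation so the data is accessible on the device:

#include <oneapi/dpl/execution>
#include <oneapi/dpl/algorithm>
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q;
    constexpr size_t n = 1000;
    int* data = sycl::malloc_shared<int>(n, q);  // USM allocation usable by the algorithms

    // The device policy routes the algorithms below to the queue's device.
    auto policy = oneapi::dpl::execution::make_device_policy(q);
    oneapi::dpl::fill(policy, data, data + n, 3);
    int sum = oneapi::dpl::reduce(policy, data, data + n, 0);

    std::cout << "sum = " << sum << "\n";  // expected: 3000
    sycl::free(data, q);
    return 0;
}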
The source code of parts of the above libraries is available on GitHub.[14]
The oneAPI documentation also lists the "Level Zero" API defining the low-level direct-to-metal interfaces and a set of ray tracing components with its own APIs.[6]
Hardware abstraction layer
oneAPI Level Zero,[15][16][17] the low-level hardware interface, defines a set of capabilities and services that a hardware accelerator needs to interface with compiler runtimes and other developer tools.
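The flavor of this low-level interface can be seen in a short device-enumeration sketch against the Level Zero C API (assuming the Level Zero loader and headers are installed; this is an illustration, not text from the specification):

#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

int main() {
    // Initialize the Level Zero driver layer.
    if (zeInit(0) != ZE_RESULT_SUCCESS) {
        std::printf("Level Zero initialization failed\n");
        return 1;
    }

    // Enumerate drivers, then the devices each driver exposes.
    uint32_t driverCount = 0;
    zeDriverGet(&driverCount, nullptr);
    std::vector<ze_driver_handle_t> drivers(driverCount);
    zeDriverGet(&driverCount, drivers.data());

    for (ze_driver_handle_t driver : drivers) {
        uint32_t deviceCount = 0;
        zeDeviceGet(driver, &deviceCount, nullptr);
        std::vector<ze_device_handle_t> devices(deviceCount);
        zeDeviceGet(driver, &deviceCount, devices.data());

        for (ze_device_handle_t device : devices) {
            ze_device_properties_t props{};
            props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
            zeDeviceGetProperties(device, &props);
            std::printf("Device: %s\n", props.name);
        }
    }
    return 0;
}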
Huawei has released a DPC++ compiler for its Ascend AI chipset.[27]
Fujitsu has created an open-source ARM version of the oneAPI Deep Neural Network Library (oneDNN)[28] for the A64FX processor that powers its Fugaku supercomputer.
Unified Acceleration Foundation (UXL) and the future of oneAPI
The Unified Acceleration Foundation (UXL) is a technology consortium formed to continue the oneAPI initiative, with the goal of creating an open standard accelerator software ecosystem and related open standards and specification projects through Working Groups and Special Interest Groups (SIGs). The effort is positioned as a competitor to Nvidia's CUDA. The main companies behind it are Intel, Google, ARM, Qualcomm, Samsung, Imagination, and VMware.[29]