This campaign is over.
| Intel® Framework/Library | Description | Link |
| --- | --- | --- |
| Intel® oneAPI Toolkits | A cross-architecture platform for developing high-performance, data-centric applications. | |
| Intel® oneDNN (Deep Neural Network Library) | An open-source performance library for deep learning applications. | Link here (Refer pdf) |
| Intel® AI Analytics Toolkit | Tools for accelerating data science and AI workloads using Intel® architecture. | |
| Intel® Distribution of OpenVINO™ Toolkit | Optimizes deep learning models for inference on Intel® hardware, including CPUs, GPUs, and VPUs. | |
| Intel® Distribution for Python | An optimized Python distribution for faster performance on Intel® architectures. | |
| Intel® DAAL (Data Analytics Acceleration Library) | Provides fast, scalable, and high-performing data analytics algorithms. | Link here (Refer pdf) |
| Intel® Neural Compressor | A tool for model compression, quantization, and optimization to accelerate AI inference on Intel® hardware. | |
| TensorFlow* Optimizations from Intel | Optimizations and extensions for TensorFlow that boost performance on Intel® architecture. | |
| PyTorch* Optimizations from Intel | Enhances PyTorch performance on Intel® processors. | |
| Intel® Tiber™ AI Cloud | Cloud-based access to Intel’s AI accelerators for quick prototyping and development. | Link here (Refer ITAI word link) |
- Examples of pretrained models optimized with Intel® technologies on Hugging Face
- HACKSTORM: INTEL® AI PC EDITION Terms and Conditions