Intel® offers several libraries and frameworks optimized for Intel® architecture, including the Intel® AI Analytics Toolkit. Participants can use any library or framework of their choice from the list below.
This ideation challenge will lay the groundwork for generating high-quality projects and prototypes, focusing on Intel® AI PC technology and harnessing the full potential of NPU and GPU capabilities.
Intel® Framework/Library | Description | Link
Intel® oneAPI Toolkits | A cross-architecture platform for developing high-performance, data-centric applications. |
Intel® oneDNN (Deep Neural Network Library) | An open-source performance library for deep learning applications. |
Intel® AI Analytics Toolkit | Tools for accelerating data science and AI workloads on Intel® architecture. |
Intel® Distribution of OpenVINO™ Toolkit | Optimizes deep learning models for inference on Intel® hardware, including CPUs, GPUs, and VPUs (see the inference sketch after this table). |
Intel® Distribution for Python | An optimized Python distribution for faster performance on Intel® architecture. |
Intel® DAAL (Data Analytics Acceleration Library) | Provides fast, scalable, high-performance data analytics algorithms. |
Intel® Neural Compressor | A tool for model compression, quantization, and optimization to accelerate AI inference on Intel® hardware (see the quantization sketch after this table). |
TensorFlow* Optimizations from Intel® | Optimizations and extensions for TensorFlow* that boost performance on Intel® architecture. |
PyTorch* Optimizations from Intel® | Extensions that enhance PyTorch* performance on Intel® processors (see the sketch after this table). |
Intel® Tiber™ AI Cloud | Cloud-based access to Intel® AI accelerators for rapid prototyping and development. | (refer ITAI word link)
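As referenced in the table, here is a minimal sketch of OpenVINO™ inference. The "model.xml" path, the "CPU" device name, and the 1x3x224x224 input shape are illustrative assumptions; swap in your own exported IR model and, where supported, the "GPU" or "NPU" device.

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")           # assumed path to an exported OpenVINO IR model
compiled = core.compile_model(model, "CPU")    # target device; "GPU"/"NPU" where available

# Single synchronous inference with a random input of an assumed image shape.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```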
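Next, a hedged sketch of post-training INT8 quantization with Intel® Neural Compressor (2.x-style `quantization.fit` API); the tiny model and synthetic calibration data are placeholders for a real workload.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

# Placeholder FP32 model and synthetic calibration data; substitute your own.
fp32_model = torch.nn.Sequential(
    torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
).eval()
calib_loader = DataLoader(
    TensorDataset(torch.rand(64, 16), torch.zeros(64, dtype=torch.long)),
    batch_size=8,
)

# Post-training static quantization to INT8 using the calibration loader.
conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(model=fp32_model, conf=conf, calib_dataloader=calib_loader)
q_model.save("./int8_model")
```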
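Finally, a minimal sketch of the PyTorch* optimizations, assuming Intel® Extension for PyTorch* (`intel_extension_for_pytorch`) is installed; the ResNet-50 model and random input stand in for a participant's own model and data.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Placeholder model and input; substitute your own torch.nn.Module and data.
model = models.resnet50(weights=None).eval()
data = torch.rand(1, 3, 224, 224)

# ipex.optimize applies CPU-oriented graph and memory-layout optimizations;
# dtype=torch.bfloat16 assumes hardware with BF16 support and can be omitted.
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(data)
print(output.shape)
```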
Examples of Pretrained models optimized with Intel® technologies on Hugging Face
Model Name | Description | Link
Intel® organization on Hugging Face (home) | The complete collection of Intel® models on Hugging Face. |
Intel/dynamic_tinybert | A compact model for question-answering tasks, optimized for faster inference (see the pipeline sketch after this table). |
Intel/dpt-large | A depth-estimation model known for its accuracy in estimating depth from images (see the sketch after this table). |
Intel/distilbart-cnn-12-6-int8-dynamic-inc | A distilled version of BART for text-generation tasks such as summarization, with INT8 dynamic quantization for faster inference. |
Intel/ldm3d-4c | A latent diffusion model for generating 3D content (an RGB image plus depth map) from text prompts, leveraging Intel® optimizations. |
Text generation | A collection of Intel® models for text generation. |
Text-to-image generation | A collection of Intel® models for image generation. |
Intel/neural-chat-7b-v3-1-int4-inc | Intel® Neural Chat 7B model, quantized to INT4 with Intel® Neural Compressor, for real-time chat applications. |
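As noted in the table, here is a minimal sketch of running Intel/dynamic_tinybert for question answering through the Hugging Face transformers pipeline; the question and context strings are made up for illustration.

```python
from transformers import pipeline

# Load the Intel-optimized TinyBERT question-answering model from the Hugging Face Hub.
qa = pipeline("question-answering", model="Intel/dynamic_tinybert")

# Example question/context pair (illustrative only).
result = qa(
    question="What does the Intel® AI Analytics Toolkit accelerate?",
    context="The Intel® AI Analytics Toolkit provides tools for accelerating "
            "data science and AI workloads on Intel® architecture.",
)
print(result["answer"], result["score"])
```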
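And a hedged sketch for Intel/dpt-large depth estimation with the same pipeline API; the image URL is a placeholder for any RGB image.

```python
from transformers import pipeline
from PIL import Image
import requests

# Depth estimation with the Intel/dpt-large model from the Hugging Face Hub.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

# Placeholder image; substitute any local file or URL.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = depth_estimator(image)
result["depth"].save("depth_map.png")   # PIL image of the predicted depth map
```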