scikit-learn-intelex · PyPI


Intel(R) Extension for Scikit-learn*

With Intel(R) Extension for Scikit-learn you can accelerate your Scikit-learn applications and still have full conformance with all Scikit-Learn APIs and algorithms. This is a free software AI accelerator that brings over 10-100X acceleration across a variety of applications. And you do not even need to change the existing code!

The acceleration is achieved through the use of the Intel(R) oneAPI Data Analytics Library (oneDAL). Patching scikit-learn makes it a well-suited machine learning framework for dealing with real-life problems.

⚠️ Intel(R) Extension for Scikit-learn contains the scikit-learn patching functionality that was originally available in the daal4py package. All future updates for the patches will be available only in Intel(R) Extension for Scikit-learn. We recommend using the scikit-learn-intelex package instead of daal4py.
You can learn more about daal4py in daal4py documentation.

👀 Follow us on Medium

We publish blogs on Medium, so follow us to learn tips and tricks for more efficient data analysis with the help of Intel(R) Extension for Scikit-learn. Here are our latest blogs:

🔗 Important links

💬 Support

Report issues, ask questions, and provide suggestions using:

You may reach out to project maintainers privately at onedal.maintainers@intel.com

🛠 Installation

Intel(R) Extension for Scikit-learn is available at the Python Package Index,
on Anaconda Cloud in the Conda-Forge and Intel channels.
Intel(R) Extension for Scikit-learn is also available as a part of Intel® oneAPI AI Analytics Toolkit (AI Kit).

  • PyPI (recommended by default)
pip install scikit-learn-intelex
  • Anaconda Cloud from Conda-Forge channel (recommended for conda users by default)
  conda config --add channels conda-forge
  conda config --set channel_priority strict
  conda install scikit-learn-intelex
  • Anaconda Cloud from Intel channel (recommended for Intel® Distribution for Python users)
  conda config --add channels intel
  conda config --set channel_priority strict
  conda install scikit-learn-intelex
[Click to expand] ℹ️ Supported configurations

📦 PyPi channel

| OS / Python version | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 | Python 3.12 |
| :------------------ | :--------: | :--------: | :---------: | :---------: | :---------: |
| Linux               | [CPU, GPU] | [CPU, GPU] | [CPU, GPU]  | [CPU, GPU]  | [CPU, GPU]  |
| Windows             | [CPU, GPU] | [CPU, GPU] | [CPU, GPU]  | [CPU, GPU]  | [CPU, GPU]  |

📦 Anaconda Cloud: Conda-Forge channel

| OS / Python version | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 | Python 3.12 |
| :------------------ | :--------: | :--------: | :---------: | :---------: | :---------: |
| Linux               | [CPU]      | [CPU]      | [CPU]       | [CPU]       | [CPU]       |
| Windows             | [CPU]      | [CPU]      | [CPU]       | [CPU]       | [CPU]       |

📦 Anaconda Cloud: Intel channel

| OS / Python version | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 | Python 3.12 |
| :------------------ | :--------: | :--------: | :---------: | :---------: | :---------: |
| Linux               | [CPU, GPU] | [CPU, GPU] | [CPU, GPU]  | [CPU, GPU]  | [CPU, GPU]  |
| Windows             | [CPU, GPU] | [CPU, GPU] | [CPU, GPU]  | [CPU, GPU]  | [CPU, GPU]  |

⚠️ Note: GPU support is an optional dependency. The dependencies required for GPU support
will not be downloaded automatically. You need to install the dpcpp_cpp_rt package manually.

[Click to expand] ℹ️ How to install the dpcpp_cpp_rt package
pip install --upgrade dpcpp_cpp_rt
conda install dpcpp_cpp_rt -c intel

You can build the package from sources as well.

⚡️ Get started

Intel CPU optimizations patching

import numpy as np
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import DBSCAN

X = np.array([[1., 2.], [2., 2.], [2., 3.],
              [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
clustering = DBSCAN(eps=3, min_samples=2).fit(X)
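Because the patched estimators keep the stock scikit-learn API and semantics, you can sanity-check results against an unpatched run. A minimal sketch of the same DBSCAN fit with plain scikit-learn (no patch applied):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Same toy data as above, fitted with stock scikit-learn.
X = np.array([[1., 2.], [2., 2.], [2., 3.],
              [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
labels = DBSCAN(eps=3, min_samples=2).fit(X).labels_

# The first three points form one cluster, the next two another,
# and the far-away point [25., 80.] is labeled noise (-1).
print(labels.tolist())  # [0, 0, 0, 1, 1, -1]
```

The patched run should produce the same labels, since the extension only swaps the backend implementation, not the algorithm's definition.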

Intel GPU optimizations patching

import numpy as np
import dpctl
from sklearnex import patch_sklearn, config_context
patch_sklearn()

from sklearn.cluster import DBSCAN

X = np.array([[1., 2.], [2., 2.], [2., 3.],
              [8., 7.], [8., 8.], [25., 80.]], dtype=np.float32)
with config_context(target_offload="gpu:0"):
    clustering = DBSCAN(eps=3, min_samples=2).fit(X)

🚀 Scikit-learn patching

Configurations:

  • HW: c5.24xlarge AWS EC2 instance using an Intel Xeon Platinum 8275CL with 2 sockets and 24 cores per socket
  • SW: scikit-learn version 0.24.2, scikit-learn-intelex version 2021.2.3, Python 3.8

Benchmarks code

[Click to expand] ℹ️ Reproduce results
  • With Intel® Extension for Scikit-learn enabled:
python runner.py --configs config/blog/skl_conda_config.json --report
  • With the original Scikit-learn:
python runner.py --configs config/blog/skl_conda_config.json --report --no-intel-optimized

Intel(R) Extension for Scikit-learn patching affects the performance of specific Scikit-learn functionality only. Refer to the list of supported algorithms and parameters for details. When unsupported parameters are used, the package falls back to the original Scikit-learn. If the patching does not cover your scenario, submit an issue on GitHub.

⚠️ We support optimizations for the last four versions of scikit-learn. The latest release of scikit-learn-intelex-2024.0.X supports scikit-learn 1.0.X, 1.1.X, 1.2.X and 1.3.X.
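That supported range can also be checked programmatically before patching. A small sketch, assuming the widely available `packaging` library; the helper name is hypothetical, and the 1.0–1.3 range comes from the note above:

```python
from packaging.version import Version


def sklearn_supported(version_str: str) -> bool:
    """Return True when scikit-learn-intelex 2024.0.X patches this
    scikit-learn version (1.0.X through 1.3.X, per the note above)."""
    v = Version(version_str)
    return Version("1.0") <= v < Version("1.4")


print(sklearn_supported("1.3.2"))   # True
print(sklearn_supported("0.24.2"))  # False
```

In a real application you would pass `sklearn.__version__` to such a check and skip patching (or warn) when it returns False.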

📜 Intel(R) Extension for Scikit-learn verbose

To find out which implementation of an algorithm is currently used (Intel(R) Extension for Scikit-learn or original Scikit-learn), set the environment variable:

  • On Linux: export SKLEARNEX_VERBOSE=INFO
  • On Windows: set SKLEARNEX_VERBOSE=INFO

For example, for DBSCAN you get one of these print statements depending on which implementation is used:

  • SKLEARNEX INFO: sklearn.cluster.DBSCAN.fit: running accelerated version on CPU
  • SKLEARNEX INFO: sklearn.cluster.DBSCAN.fit: fallback to original Scikit-learn
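The variable can also be set from within Python, as long as this happens before the patched estimators run. A minimal sketch, equivalent to the shell commands above:

```python
import os

# Must be set before sklearnex reads it (i.e., before fitting any
# patched estimator); equivalent to the shell export/set above.
os.environ["SKLEARNEX_VERBOSE"] = "INFO"
print(os.environ["SKLEARNEX_VERBOSE"])  # INFO
```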

Read more in the documentation.