
Commit c23095f

Update index.rst and installation.md (#1693)

* Update index.rst: update per comments from technical writer.
* Update per technical writer's comments.

1 parent 2af07d1, commit c23095f

File tree

2 files changed: +2 −2 lines changed

docs/index.rst

Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 Welcome to Intel® Extension for PyTorch* Documentation
 ######################################################

-Intel® Extension for PyTorch* extends `PyTorch\* <https://github.com/pytorch/pytorch>`_ with up-to-date features and optimizations for an extra performance boost on Intel hardware. It is a heterogeneous, high performance deep learning implementation for both CPU and XPU. XPU is a user visible device that is a counterpart of the well-known CPU and CUDA in the PyTorch* community. XPU represents Intel specific kernel and graph optimizations for various “concrete” devices. XPU runtime will choose the actual device when executing AI workloads on the XPU device. The default selected device is Intel GPU. This release introduces optimized solution on XPU particularly and lets PyTorch end-users get up-to-date features and optimizations on Intel Graphics cards.
+Intel® Extension for PyTorch* extends `PyTorch\* <https://github.com/pytorch/pytorch>`_ with up-to-date features and optimizations for an extra performance boost on Intel hardware. It is a heterogeneous, high-performance deep-learning implementation for both CPU and XPU. XPU is a user visible device that is a counterpart of the well-known CPU and CUDA in the PyTorch* community. XPU represents Intel-specific kernel and graph optimizations for various “concrete” devices. The XPU runtime will choose the actual device when executing AI workloads on the XPU device. The default selected device is Intel GPU. This release introduces specific XPU solution optimizations and gives PyTorch end-users up-to-date features and optimizations on Intel Graphics cards.

 Intel® Extension for PyTorch* provides aggressive optimizations for both eager mode and graph mode. Graph mode in PyTorch* normally yields better performance from optimization techniques such as operation fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. This extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts users can enable it dynamically by ``import intel_extension_for_pytorch``. To execute AI workloads on XPU, the input tensors and models must be converted to XPU beforehand by ``input = input.to("xpu")`` and ``model = model.to("xpu")``.
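The usage pattern described in the context paragraph above (`import intel_extension_for_pytorch`, then moving both the model and the input tensors to the `"xpu"` device) can be sketched as a short script. This is a minimal illustration, not part of the commit; the tiny linear model, the input shape, and the CPU fallback guard are assumptions for demonstration only:

```python
import torch

try:
    # Loading the extension as a Python module enables XPU support.
    import intel_extension_for_pytorch  # noqa: F401
    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"
except ImportError:
    device = "cpu"  # extension not installed; fall back to CPU

# Illustrative model and input; any PyTorch model works the same way.
model = torch.nn.Linear(4, 2)
input = torch.randn(1, 4)

# As the docs note, model and inputs must be converted to XPU beforehand.
model = model.to(device)
input = input.to(device)

with torch.no_grad():
    output = model(input)
```

The conversion calls are ordinary `Tensor.to` / `Module.to`; only the `"xpu"` device string is specific to the extension.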

docs/tutorials/installation.md

Lines changed: 1 addition & 1 deletion

@@ -30,7 +30,7 @@ Intel® Extension for PyTorch\* has to work with a corresponding version of PyTo

 |Release|OS|Intel GPU|Install Intel GPU Driver|
 |-|-|-|-|
-|v1.0.0|Ubuntu 20.04|Intel® Data Center GPU Flex Series| Refer to the [Installation Guides](https://dgpu-docs.intel.com/installation-guides/ubuntu/ubuntu-focal-dc.html) for latest driver installation. If install the verified Intel® Data Center GPU Flex Series [419.40](https://dgpu-docs.intel.com/releases/stable_419_40_20220914.html), please append the specific version after components, such as `sudo apt-get install intel-opencl-icd=22.28.23726.1+i419~u20.04`|
+|v1.0.0|Ubuntu 20.04|Intel® Data Center GPU Flex Series| Refer to the [Installation Guides](https://dgpu-docs.intel.com/installation-guides/ubuntu/ubuntu-focal-dc.html) for the latest driver installation. If installing the verified Intel® Data Center GPU Flex Series [419.40](https://dgpu-docs.intel.com/releases/stable_419_40_20220914.html), use a specific version for component package names, such as `sudo apt-get install intel-opencl-icd=22.28.23726.1+i419~u20.04`|

 ### Install oneAPI Base Toolkit
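The table row above pins an exact driver package version by appending `=<version>` to the apt package name, which is standard `apt-get` syntax. A small sketch of that pattern, using the package name and version from the diff (the composed command is only printed here, not executed):

```shell
# Appending "=<version>" to a package name pins apt to an exact version.
PKG="intel-opencl-icd"
VER="22.28.23726.1+i419~u20.04"
CMD="sudo apt-get install ${PKG}=${VER}"
echo "$CMD"
# On a real system, "apt-cache madison intel-opencl-icd" lists the
# versions apt knows about, so you can confirm the pin is available.
```

This is why the docs say to "use a specific version for component package names": an unpinned install would pull the newest driver rather than the verified 419.40 stack.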
