* update example/llm README
* update llm inference README with llama optimization guide for both fp16 and woq
* install inc instead of inc&itrex
* woq run with xetla path
* remove accuracy failed log, verified with yuhua
* run with static cache, better perf
* add static cache for woq example
* remove the OCL_ICD_VENDORS workaround
* remove OCL_ICD_VENDORS in docs
* remove CCL_ROOT setting in docs
* add inc in inference requirements.txt
* add static cache in fp16/woq and add comment to explain it
* install and activate standalone dpcpp compiler for using torch.compile
* update from jun
* update transformers to 4.44.2
* ensure param format consistency for bash
* update optimize_transformers to ipex.llm.optimize
* remove IPEX_COMPUTE_ENGINE for common case, add it only in save quantized model scenario
* update optimize_transformers to ipex.llm.optimize
* Create README.md for cpp example
* update triton doc
* update for torch compile
* update known_issue for triton installation, link known issue to torch.compile guide
* update known issues
* fix typo
* format fix
* update 2 scenarios for the triton library issue
* update triton related issue
* Update torch_compile_gpu.md inference example
* Update requirements.txt for accelerate 1.1.1
* Update requirements.txt to specify huggingface-hub==0.25.2
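
Two of the commits above rename the optimization entry point from `optimize_transformers` to `ipex.llm.optimize`. As a hedged illustration only (the call site, variable names, and arguments below are assumptions, not taken from this PR), the change in an inference script would look like:

```diff
-# deprecated frontend
-model = ipex.optimize_transformers(model, dtype=amp_dtype, device="xpu")
+# current frontend
+model = ipex.llm.optimize(model, dtype=amp_dtype, device="xpu")
```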
Follow [Intel® Extension for PyTorch\* Installation](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/) to install `torch` and `intel_extension_for_pytorch` first.
````diff
 Triton could be directly installed using the following command:

-Remember to activate the oneAPI basekit by following commands.
+Remember to activate the oneAPI DPC++/C++ Compiler with the following commands.

 ```bash
 # {dpcpproot} is the dpcpp ROOT path, i.e. where you installed oneAPI DPCPP, usually /opt/intel/oneapi/compiler/latest or ~/intel/oneapi/compiler/latest
````
`docs/tutorials/getting_started.md` (0 additions, 9 deletions):

````diff
@@ -51,12 +51,3 @@ More examples, including training and usage of low precision data types are available
 There are some environment variables in runtime that can be used to configure executions on GPU. Please check [Advanced Configuration](./features/advanced_configuration.html#runtime-configuration) for more detailed information.
-
-Set `OCL_ICD_VENDORS` with default path `/etc/OpenCL/vendors`.
````
`docs/tutorials/known_issues.md` (34 additions, 9 deletions):

````diff
@@ -83,24 +83,49 @@ Troubleshooting
 If you continue seeing similar issues for other shared object files, add the corresponding files under `${MKL_DPCPP_ROOT}/lib/intel64/` by `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2), if more than one oneMKL library is installed on the system.

-- **Problem**: RuntimeError: could not create an engine.
-- **Cause**: `OCL_ICD_VENDORS` path is wrongly set when activating an existing conda environment.
-- **Solution**: `export OCL_ICD_VENDORS=/etc/OpenCL/vendors` after `conda activate`.
-
-- **Problem**: If you encounter issues related to CCL environment variable configuration when running distributed tasks.
-- **Cause**: `CCL_ROOT` path is wrongly set.
-- **Solution**: `export CCL_ROOT=${CONDA_PREFIX}`
-
 - **Problem**: If you encounter issues related to MPI environment variable configuration when running distributed tasks.
 - **Cause**: MPI environment variable configuration is not correct.
 - **Solution**: `conda deactivate` and then `conda activate` to activate the correct MPI environment variables automatically.

   ```
   conda deactivate
   conda activate
-  export OCL_ICD_VENDORS=/etc/OpenCL/vendors
   ```
+
+- **Problem**: Runtime error related to the C++ compiler with `torch.compile`: RuntimeError: Failed to find C++ compiler. Please specify via CXX environment variable.
+- **Cause**: The DPC++/C++ Compiler is not installed or not activated correctly.
+- **Solution**: [Install the DPC++/C++ Compiler](https://www.intel.com/content/www/us/en/developer/tools/oneapi/dpc-compiler-download.html) and activate it with the following commands.
+
+  ```bash
+  # {dpcpproot} is the dpcpp ROOT path, i.e. where you installed oneAPI DPCPP, usually /opt/intel/oneapi/compiler/latest or ~/intel/oneapi/compiler/latest
+  source {dpcpproot}/env/vars.sh
+  ```
+
+- **Problem**: RuntimeError: Cannot find a working triton installation. Either the package is not installed or it is too old. More information on installing Triton can be found at https://github.com/openai/triton
+- **Cause**: pytorch-triton-xpu is not installed.
+- **Solution**: Resolve the issue with the following command:
````
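
The command itself is cut off in this excerpt. Based on the package named in the Cause above, the fix is presumably along these lines; treat it as a sketch, since any required index URL or version pin is not shown in this diff:

```bash
# Sketch: install the Triton backend for Intel GPU. The package name is taken
# from the "Cause" entry above; a release-specific --index-url or version pin
# may additionally be required and is not shown here.
python -m pip install pytorch-triton-xpu
```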
````diff
 ### Conda-based environment setup with prebuilt wheel files
+
+Make sure the driver packages are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.5.10%2Bxpu&os=linux%2Fwsl2&package=pip).
+
+```bash
+# Get the Intel® Extension for PyTorch* source code
````
````diff
 ### Conda-based environment setup with compilation from source

-Make sure the driver and Base Toolkit are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.3.110%2Bxpu&os=linux%2Fwsl2&package=source).
+Make sure the driver and Base Toolkit are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.5.10%2Bxpu&os=linux%2Fwsl2&package=source).

 ```bash
 # Get the Intel® Extension for PyTorch* source code
````