
Commit 86b96dd

remove ao folder code (#895)

* remove ao folder code
* update README.md

1 parent 15aeff1 commit 86b96dd

File tree

13 files changed, +4 -7 lines changed


intel_extension_for_pytorch/__init__.py

Lines changed: 0 additions & 1 deletion
```diff
@@ -26,7 +26,6 @@
 from . import nn
 from . import jit
 from . import profiler
-from . import ao
 from . import autocast

 from .utils.verbose import verbose
```

intel_extension_for_pytorch/ao/__init__.py

Lines changed: 0 additions & 1 deletion
This file was deleted.

intel_extension_for_pytorch/ao/quantization/__init__.py

Lines changed: 0 additions & 2 deletions
This file was deleted.

intel_extension_for_pytorch/ao/quantization/README.md renamed to intel_extension_for_pytorch/quantization/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -100,7 +100,7 @@ or define your own qconfig as:
 ```python
 from torch.ao.quantization import MinMaxObserver, PlaceholderObserver, QConfig
 dynamic_qconfig = QConfig(activation = PlaceholderObserver.with_args(dtype=torch.float, compute_dtype=torch.quint8),
-                          weight = MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_channel_symmetric))
+                          weight = MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric))
 ```

 Note: For weight observer, it only supports dtype **torch.qint8**, and the qscheme only can be **torch.per_tensor_symmetric** or **torch.per_channel_symmetric**. For activation observer, it only supports dtype **torch.float**, and the compute_dtype can be **torch.quint8** or **torch.qint8**.
````
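As a quick standalone illustration of the observer constraints the README change is about (plain PyTorch, independent of this repo), the permitted per-tensor weight settings can be constructed and inspected like this:

```python
import torch
from torch.ao.quantization import MinMaxObserver, PlaceholderObserver, QConfig

# Per the README note: weight observers must use torch.qint8 with a symmetric
# qscheme; activation observers use torch.float with compute_dtype quint8/qint8.
# MinMaxObserver only supports per-tensor qschemes, which is consistent with the
# README switching its example to torch.per_tensor_symmetric.
dynamic_qconfig = QConfig(
    activation=PlaceholderObserver.with_args(dtype=torch.float, compute_dtype=torch.quint8),
    weight=MinMaxObserver.with_args(dtype=torch.qint8, qscheme=torch.per_tensor_symmetric),
)

# Instantiate the weight observer and check the settings it carries.
weight_observer = dynamic_qconfig.weight()
print(weight_observer.dtype, weight_observer.qscheme)
```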
Lines changed: 2 additions & 1 deletion
```diff
@@ -1 +1,2 @@
-from ..ao.quantization import prepare, convert, default_static_qconfig, default_dynamic_qconfig
+from ._quantize import prepare, convert
+from ._qconfig import default_static_qconfig, default_dynamic_qconfig
```

intel_extension_for_pytorch/ao/quantization/_module_swap_utils.py renamed to intel_extension_for_pytorch/quantization/_module_swap_utils.py

File renamed without changes.

intel_extension_for_pytorch/ao/quantization/_qconfig.py renamed to intel_extension_for_pytorch/quantization/_qconfig.py

File renamed without changes.

intel_extension_for_pytorch/ao/quantization/_quantization_state.py renamed to intel_extension_for_pytorch/quantization/_quantization_state.py

File renamed without changes.

intel_extension_for_pytorch/ao/quantization/_quantization_state_utils.py renamed to intel_extension_for_pytorch/quantization/_quantization_state_utils.py

File renamed without changes.

intel_extension_for_pytorch/ao/quantization/_quantize.py renamed to intel_extension_for_pytorch/quantization/_quantize.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -8,7 +8,7 @@
 
 import intel_extension_for_pytorch._C as core
 from ._quantize_utils import auto_prepare, auto_convert, copy_prepared_model
-from ... import nn
+from .. import nn
 
 def prepare(
     model,
```
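The one code change in this file compensates for the module moving up a directory: `from ... import nn` climbs two package levels (out of `ao/quantization/`), while `from .. import nn` climbs one (out of `quantization/`). A throwaway sketch with a hypothetical package layout (all names invented for illustration) demonstrates the rule:

```python
import os
import sys
import tempfile
import textwrap

root = tempfile.mkdtemp()

def write(path, text=""):
    """Create a file under the scratch root, making parent dirs as needed."""
    full = os.path.join(root, path)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    with open(full, "w") as f:
        f.write(textwrap.dedent(text))

# Layout mirroring the tree after this commit: pkg/quantization/_quantize.py
# sits one level below pkg/, so a single ".." reaches the package root where
# the nn subpackage lives.
write("pkg/__init__.py")
write("pkg/nn/__init__.py", "MARK = 'nn package'\n")
write("pkg/quantization/__init__.py")
write("pkg/quantization/_quantize.py", """\
    from .. import nn  # one parent hop: pkg/quantization -> pkg
    """)

sys.path.insert(0, root)
import pkg.quantization._quantize as q

# The nn subpackage resolved via the single-parent relative import.
print(q.nn.MARK)
```

Before the commit, `_quantize.py` lived at `pkg/ao/quantization/_quantize.py`, two levels below the root, so the triple-dot form was required; after the move it would over-shoot the package and raise `ImportError`.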

0 commit comments
