This is the official implementation of Quantized FCA: Efficient Zero-Shot Texture Anomaly Detection.
We implemented our method using PyTorch and numba. For an easy installation, you can use the provided environment file, which contains all dependencies:
conda env create -f environment.yml
conda activate qfca
Or install manually:
conda create -n qfca python=3.12
conda activate qfca
# install project dependencies
pip install torch --index-url https://download.pytorch.org/whl/cu121
pip install hydra-core omegaconf numpy tqdm timm numba opencv-python matplotlib pillow tifffile scipy scikit-image scikit-learn ruff
We include a minimal script that allows running our code on sample images to easily try our method.
By default, running the src/test.py file will compute the anomaly score for the provided example image, example/mvtec_carpet_cut_000.png.
python src/test.py
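To get a rough feel for what a quantized statistics comparison does, the toy sketch below illustrates the general idea on a single scalar feature per pixel: quantize the values into a few bins and score each tile by how much its bin histogram deviates from the global one. This is only an illustration with placeholder choices (scalar features, L1 histogram distance), not the implementation used in src/test.py.
import numpy as np

def toy_anomaly_map(features, tile_size=9, nr_bins=8):
    # features: (H, W) array with one scalar feature per pixel.
    h, w = features.shape
    # Quantize into nr_bins equal-mass bins.
    edges = np.quantile(features, np.linspace(0.0, 1.0, nr_bins + 1)[1:-1])
    quantized = np.digitize(features, edges)  # integer bin indices in [0, nr_bins)
    reference = np.bincount(quantized.ravel(), minlength=nr_bins) / quantized.size
    scores = np.zeros((h, w))
    r = tile_size // 2
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = quantized[i - r:i + r + 1, j - r:j + r + 1]
            hist = np.bincount(patch.ravel(), minlength=nr_bins) / patch.size
            scores[i, j] = np.abs(hist - reference).sum()  # local vs. global statistics
    return scores

rng = np.random.default_rng(0)
feats = rng.normal(size=(64, 64))
feats[20:30, 20:30] += 3.0  # inject an anomalous region
print(toy_anomaly_map(feats).max())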
To run the evaluation code, you have to first prepare the desired dataset.
Our data loader assumes the data follows the file structure of the MVTec anomaly detection dataset.
You can download MVTec AD here; the woven fabric textures dataset (WFT) can be downloaded here.
Then, extract the data into the datasets directory.
To use a custom dataset with the same layout, place it in datasets and add a config file under conf/dataset.
For reference, see: conf/dataset/mvtec.yaml.
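As a quick sanity check (illustrative only; the dataset and category names below are placeholders), you can verify that an extracted dataset exposes the MVTec-style subdirectories the loader expects:
from pathlib import Path

# Placeholder paths: adjust the dataset and category names to your data.
root = Path("datasets/mvtec/carpet")
for sub in ["train/good", "test", "ground_truth"]:
    print(f"{root / sub}: {'ok' if (root / sub).is_dir() else 'missing'}")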
To evaluate the method, use the src/main.py script. We use Hydra to manage the command-line interface, which makes it easy to specify the method and dataset configurations.
For example, running our QFCA+ statistics comparison (sc) with WideResNet features (fe) on the MVTec dataset is done using:
python src/main.py dataset=mvtec fe=wide sc=qfcap image_size=[1024,1024] tile_size=[9,9]
We automatically run the evaluation code after computing the anomaly scores for all images in the dataset.
You can inspect the metrics in the JSON file outputs/{experiment_name}/metrics_{padding}.json and visualize the predicted anomaly maps under the outputs/{experiment_name}/{object_name}/visualize directory.
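For instance, the metrics file can be inspected programmatically with a few lines of Python (the experiment name and padding value below are placeholders; substitute those of your run):
import json

with open("outputs/my_experiment/metrics_reflect.json") as f:  # placeholder path
    metrics = json.load(f)
for name, value in metrics.items():
    print(f"{name}: {value}")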
There are several options for feature extraction and patch statistics comparison; some are inherited from the original FCA repository, while others are newly introduced for the experiments in our QFCA paper.
To see the available options, please check the src/conf directory.
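If you do not want to browse the directory by hand, a small snippet like the one below (assuming the src/conf layout mentioned above) prints the available choices for each option group:
from pathlib import Path

# Option groups used on the command line (dataset=, fe=, sc=).
for group in ["dataset", "fe", "sc"]:
    options = sorted(p.stem for p in Path("src/conf", group).glob("*.yaml"))
    print(f"{group}: {', '.join(options)}")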
We include a simple script to run the code in real time using the webcam feed as input. To use this option, simply set dataset=live as below:
python src/main.py dataset=live sc=qfca image_size=[512,512] tile_size=[7,7] sc.method.nr_bins=8
Should you find our work useful in your research, please cite:
@inproceedings{ardelean2025quantized,
author = {Ardelean, Andrei-Timotei and Rückbeil, Patrick and Weyrich, Tim},
title = {Quantized {FCA}: Efficient Zero-Shot Texture Anomaly Detection},
booktitle = {30th Intl. Conference on Vision, Modeling, and Visualization (VMV)},
numpages = {10},
day = {30},
month = sep,
year = {2025},
authorurl = {https://reality.tf.fau.de/pub/ardelean2025quantized.html},
}
Our project builds on the original FCA repository:
@inproceedings{ardelean2023highfidelity,
title = {High-Fidelity Zero-Shot Texture Anomaly Localization Using Feature Correspondence Analysis},
author = {Ardelean, Andrei-Timotei and Weyrich, Tim},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
numpages = {11},
year = {2024},
month = jan,
day = {4},
authorurl = {https://reality.tf.fau.de/pub/ardelean2024highfidelity.html},
}
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 956585 (PRIME ITN).
Please see the LICENSE.
