This repository contains a set of scripts that show how cross-talk functions can be used to understand the differences between pipelines for the extraction of M/EEG activity from brain areas of interest. For more details about the project, please refer to the accompanying paper and consider citing it if you use the scripts from this repository:
Kapralov, N., Studenova, A., Eguinoa, R., Nolte, G., Haufe, S., Villringer, A., & Nikulin, V. (2025). Non-uniform effects of remaining field spread on the estimation of M/EEG activity and connectivity between regions of interest. bioRxiv. https://doi.org/10.64898/2025.12.09.688708
The analyses are based on the following datasets:

- The LEMON dataset.
- The RIFT dataset. NOTE: We used the `scratch/sub*-snr-and-spectra.mat` files to keep track of the subjects that were included in the original analysis, so please make sure to download them!

The following software is required:

- Python (tested with 3.12.11). The dependencies are described in `pyproject.toml` and can be installed with `pip install -e .[dev]`.
- MATLAB (tested with R2023b) and the FieldTrip toolbox (tested with v20241025), which are used to export the RIFT data to MNE-Python.
The repository is organized as follows:

- `assets` - static files used in analyses or data visualization
  - `lemon-info.fif` - an `mne.Info` object that contains template locations of the EEG channels present in the LEMON dataset (see the snippet after this list for how it can be inspected)
  - `plots.mplstyle` - a Matplotlib theme used for all plots
  - `rift-*-setup.png` - stimuli that were presented to participants in the RIFT experiment
- `data` (not tracked)
  - `derivatives` - intermediate results of all analysis steps
  - `review` - source data for the literature review (to be published)
  - `simulations` - simulated data for all experiments
- `matlab` - MATLAB scripts for exporting the conditions of interest from the RIFT dataset
- `paper` - source files required for generating the resulting PDF of the manuscript
  - `figure_components` - components of the figures that were created/post-processed manually
  - `figures` - final figures
  - `numbers` - numbers (settings, results) exported from scripts to TeX
  - `sections` - source TeX code for the main text
  - `supplementary` - source TeX code for the supplementary material
- `results` (not tracked) - plots generated by the scripts
- `scripts` - Python scripts that describe the main analysis
- `src` - functions that were implemented for the analysis, provided as a Python package `ctfeval` that can be installed locally
- `tests` - unit tests for the analysis functions
- `workflow` - a set of Snakemake rules that describe the workflow and can be used to automatically run all scripts either locally or on a SLURM cluster
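For example, the template `mne.Info` object in `assets/lemon-info.fif` can be inspected with MNE-Python. This is a minimal sketch that assumes the snippet is run from the repository root with the Python dependencies installed:

```python
import mne

# Load the template measurement info shipped in assets/ (the path is
# relative to the repository root).
info = mne.io.read_info("assets/lemon-info.fif")

# List the EEG channels whose template locations are stored in the file.
print(info["ch_names"])
```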
The analysis can be reproduced with the following steps:

- Clone the repository or download its contents.
- Navigate to the project folder.
- Install the Python dependencies by running the following command (optionally, in a dedicated environment):

  ```
  pip install -e .[dev]
  ```

  The command above makes the analysis functions available as a `ctfeval` package, e.g.:

  ```python
  from ctfeval.connectivity import data2cs_fourier
  ```

  (A quick import check is sketched after these steps.)
- Create a copy of the `.env.example` file and name it `.env`. This file is used to set all variables (e.g., paths) that are specific to your local environment. Ideally, specify absolute paths rather than relative ones (a sketch of how such variables can be read in Python is given after these steps).
- Run the script that initializes the workspace and downloads the `fsaverage` head model (a note on the underlying MNE download is given after these steps):

  ```
  python scripts/00_prepare/00_init_workspace.py
  ```

- [optional, but recommended] Run the tests to make sure that the previous steps were performed successfully. For this, a forward model also needs to be created first:

  ```
  # Create the template forward model with oct6 spacing
  python scripts/00_prepare/01_prepare_head_model.py --spacing oct6

  # All tests should pass
  pytest
  ```
- The main analysis is described as a set of Snakemake rules and can be run using one of the following commands:

  ```
  # local execution, parallelized over 8 CPU cores
  snakemake --cores 8

  # execution on a SLURM cluster (an additional config file is required, check the docs)
  snakemake --executor slurm
  ```

  Otherwise, the number prefixes of the script files indicate the order in which they are supposed to be executed.
- The PDF of the paper can be built using the Makefile in two commands:

  ```
  # collect the generated results (except manually post-processed figures)
  make collect

  # build the PDF
  make paper
  ```

- Done!
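If the installation step above succeeded, the `ctfeval` package should be importable. The snippet below is a minimal sanity check using the same import shown earlier; it does not call the function, since its arguments are not described in this README:

```python
# Minimal check that the locally installed ctfeval package can be imported.
# data2cs_fourier is the function mentioned in the installation step; refer
# to its docstring in src/ for the actual call signature.
from ctfeval.connectivity import data2cs_fourier

print(data2cs_fourier.__module__)  # expected: "ctfeval.connectivity"
```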
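The `.env` file mentioned above is typically loaded into environment variables before the scripts access local paths. The sketch below is purely illustrative: it assumes the `python-dotenv` package, and the variable name `DATA_PATH` is hypothetical; check `.env.example` for the variables that the project actually defines.

```python
import os

from dotenv import load_dotenv  # assumption: python-dotenv is available

# Read key=value pairs from the local .env file into os.environ.
load_dotenv()

# DATA_PATH is a hypothetical variable name used only for illustration.
data_path = os.environ.get("DATA_PATH")
print(data_path)
```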
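For reference, downloading the `fsaverage` template with MNE-Python usually boils down to a call like the one below. The init script may configure paths differently (e.g., via the `.env` settings), so treat this only as an illustration of the underlying MNE functionality:

```python
import mne

# Fetch the fsaverage template head model into the default MNE data folder
# and return the path to the downloaded subject directory.
fs_dir = mne.datasets.fetch_fsaverage(verbose=True)
print(fs_dir)
```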