
Conversation

@bgruening (Member)

No description provided.

toolshed.g2.bx.psu.edu/repos/ecology/srs_preprocess_s2/srs_preprocess_s2/.*:
  mem: 16
toolshed.g2.bx.psu.edu/repos/ecology/wildlife_megadetector_huggingface/wildlife_megadetector_huggingface/.*:
  gpus: 1
@bgruening (Member Author):

Is it ok to specify GPUs here?

Collaborator:

If it doesn't work without one, I would say yes.
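For reference, this is roughly how the GPU request sits in the tool entry (same id as in the hunk above); whether the job actually lands on a GPU node still depends on the destination side, which is not part of this diff:

toolshed.g2.bx.psu.edu/repos/ecology/wildlife_megadetector_huggingface/wildlife_megadetector_huggingface/.*:
  gpus: 1  # forwarded by TPV to the selected destination, which must be able to schedule GPU jobs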

mem: 12
mem: 24
env:
  EGGNOG_DBMEM: --dbmem
@bgruening (Member Author):

I have a related question here. @cat-bro, do you know whether on AU all those env variables are passed into the containers without any extra magic?
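For context, a minimal sketch of the env mechanism being asked about; the tool id is a placeholder, not the entry from this diff. TPV `env` entries are exported in the job script before the tool command runs, so with Singularity's default behaviour (host environment inherited unless `--cleanenv` is set) the variable should be visible inside the container:

placeholder_tool_id/.*:  # hypothetical id, for illustration only
  env:
    EGGNOG_DBMEM: --dbmem  # exported into the job environment; normally inherited by the container unless the runner cleans the env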

cores: 4
mem: 30
scheduling:
  require:
@bgruening (Member Author):

I can remove that. But what is the preferred way to indicate that this tool can/should run in Singularity?

    - singularity
rules:
  - if: input_size >= 0.01
    gpus: 1
@bgruening (Member Author):

Flag the GPU here if we don't want this in a shared DB.

  - if: input_size >= 0.01
    gpus: 1
    params:
      singularity_run_extra_arguments: ' --nv '
@bgruening (Member Author):

This enables Singularity to set up the container for GPU/NVIDIA support ...

Collaborator:

My initial thought would be to keep this but remove the scheduling tag for Singularity and add a comment. But in general, is running things in Singularity the recommended approach, or not?
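To make the options in this thread concrete, a rough sketch of the variant that keeps `--nv` (the tool id is a placeholder; cores, mem and the threshold are taken from the hunks above). The `scheduling`/`require` block is the part under discussion and could be dropped if Singularity is the default on the target destinations anyway; `--nv` is what actually exposes the host NVIDIA driver and GPUs inside the container:

some_gpu_tool_id/.*:  # placeholder; the real id is in the diff above
  cores: 4
  mem: 30
  scheduling:
    require:
      - singularity  # only destinations advertising the same tag are considered
  rules:
    - if: input_size >= 0.01  # input_size is in GB in TPV, so roughly >= 10 MB
      gpus: 1
      params:
        singularity_run_extra_arguments: ' --nv '  # NVIDIA GPU passthrough into the container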

  mem: 8
toolshed.g2.bx.psu.edu/repos/iuc/kraken2/kraken2/.*:
  cores: 2
  cores: 16
@bgruening (Member Author):

We have way more cores on EU ... but the memory rule here is very sophisticated ... maybe both?
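The AU memory rule itself is not reproduced in this hunk, but as a hedged illustration of what an input-size-aware memory rule can look like in TPV (assuming the usual expression evaluation for `mem`; the threshold and factor below are made up):

toolshed.g2.bx.psu.edu/repos/iuc/kraken2/kraken2/.*:
  cores: 16
  rules:
    - id: kraken2_mem_scales_with_input  # hypothetical rule id
      if: input_size >= 10  # GB of input
      mem: input_size * 2   # evaluated per job instead of a fixed value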

toolshed.g2.bx.psu.edu/repos/iuc/nanopolishcomp_eventaligncollapse/nanopolishcomp_eventaligncollapse/.*:
  cores: 10
  mem: 12
toolshed.g2.bx.psu.edu/repos/iuc/ncbi_fcs_gx/ncbi_fcs_gx/.*:
@bgruening (Member Author):

This is a very expensive and inefficient tool for VGP - TODO: look at ORG's rule.

toolshed.g2.bx.psu.edu/repos/iuc/pureclip/pureclip/.*:
  cores: 2
  mem: 32
  # 4GB is enough for most of the runs as it seems
@bgruening (Member Author):

I had this in my notes; no idea why we have so much more memory here :(

cores: 12
mem: 92
- id:
  if: input_size >= 1
@bgruening (Member Author):

We bail out when we see large files and recommend SPAdes instead.
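A minimal sketch of this bail-out pattern, assuming TPV's `fail` clause; the tool id, rule id and message are placeholders, only the `input_size >= 1` threshold comes from the hunk above:

some_assembler_id/.*:  # placeholder; the real id is in the diff above
  rules:
    - id: reject_large_inputs  # hypothetical rule id
      if: input_size >= 1  # roughly 1 GB of input data
      fail: Input is too large for this tool on this server; please use SPAdes instead.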

bgruening changed the title from "WIP: EU migrate" to "Migrate a few more entries from EU" on Dec 24, 2025.