Hello! I'm looking to retrain, or train a larger version of, the model from https://github.com/nadavbra/protein_bert, and was wondering:
- Where should I store the training data? It's over 1 TB, and it seems that most storage areas that large on the Sheffield HPC systems are temporary (scratch) storage only?
- Could that data be persisted, and could I reserve GPU time for a few weeks to a month? The original model was trained for about a month, though likely on a less powerful GPU! (In case a single month-long job isn't allowed, I've sketched the checkpoint/resume approach I'd plan to use below.)
The ProteinBERT GitHub repo has some pretty clear instructions for retraining; I'm just wondering whether I'll run into any issues using that much disk space and that much GPU time.
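
To give a sense of how I'd structure the run: rather than one continuous month-long job, I'd checkpoint and resume so the training can be split into scheduler-sized chunks. Below is a minimal, generic tf.keras sketch (ProteinBERT is Keras-based) of what I have in mind — the scratch path and the build_model() stand-in are hypothetical, and this is not ProteinBERT's actual pretraining code:

```python
import numpy as np
import tensorflow as tf

# Hypothetical checkpoint directory on scratch storage (not a real Sheffield path).
CKPT_DIR = "/scratch/brooks/proteinbert_backup"

def build_model() -> tf.keras.Model:
    # Stand-in model; in practice this would be the (larger) ProteinBERT network.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(512,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(26, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

model = build_model()

# BackupAndRestore saves training state each epoch and resumes automatically
# from the last backup when the script restarts (e.g. after a walltime kill).
backup_cb = tf.keras.callbacks.BackupAndRestore(backup_dir=CKPT_DIR)

# Dummy data just to make the sketch runnable end to end.
x = np.random.rand(1024, 512).astype("float32")
y = np.random.randint(0, 26, size=(1024,))

model.fit(x, y, epochs=5, callbacks=[backup_cb])
```

With something like that in place, each scheduler job would rerun the same script and pick up where the previous one stopped, so the month of training could be a chain of shorter jobs rather than one long GPU reservation — assuming the checkpoint directory itself can live somewhere persistent.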
Thanks a ton,
Brooks