I noticed that it requires at least 70GB to fully fine-tune the model with a batch size of 16, so I am wondering if it is possible to get it working on a 32GB GPU by reducing the batch size alone. If not, are there any suggestions for achieving this? Thanks!
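
For reference, this is roughly what I had in mind (just a sketch assuming a Hugging Face `Trainer`-style setup, which may not match this repo's training script):

```python
# Sketch only -- assumes Hugging Face transformers TrainingArguments;
# the actual argument names in this repo's script may differ.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=2,   # smaller micro-batch to fit in 32GB
    gradient_accumulation_steps=8,   # 2 * 8 = effective batch size of 16
    gradient_checkpointing=True,     # trade compute for activation memory
    fp16=True,                       # mixed precision to reduce memory further
)
```

I realize reducing the batch size mainly shrinks activation memory, not optimizer/parameter memory, so I'm not sure it will be enough on its own.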