The new Qwen vision model is out, and on paper it's better than the 2.5 we use now.
I'd like to try it out, which means we'll need to set aside time to swap models on Pegasus and test the new one.
There are several model variants (thinking/non-thinking, various sizes). I'm considering a non-thinking (for speed), non-quantized 8B model — let's see if we can fit it on our GPU, especially if we can deprecate YOLO. Alternatives would be an 8-bit 8B model or an FP16 4B model.
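For comparing the three candidates, a quick back-of-envelope, weights-only VRAM estimate (an assumption-laden sketch: it ignores the KV cache, activations, vision-encoder overhead, and runtime framework memory, so real usage will be higher):

```python
def weight_gib(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory for model weights alone, in GiB."""
    return params_billion * 1e9 * bytes_per_param / 2**30

# Candidate variants from the issue: FP16 = 2 bytes/param, int8 = 1 byte/param.
candidates = {
    "8B non-quantized (FP16)": weight_gib(8, 2),
    "8B 8-bit":                weight_gib(8, 1),
    "4B FP16":                 weight_gib(4, 2),
}

for name, gib in candidates.items():
    print(f"{name}: ~{gib:.1f} GiB weights")
```

Roughly 15 GiB, 7.5 GiB, and 7.5 GiB respectively, before any inference overhead — which is why fitting the full FP16 8B model likely hinges on freeing the memory YOLO currently occupies.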