- Private work.
- Installed miniconda (py11) on WSL2 Ubuntu.
- Played more with llama2, this time on desktop.
- New download link. Only played with two models, 7B and 7B-chat.
- PyTorch out-of-memory errors. Reduced batch size all the way to 1, still errored. Max seq len was set. Even set `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`, still wouldn't finish.
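For the record, a sketch of roughly what the run looked like, assuming the stock example script and flags from the llama repo README (the checkpoint and tokenizer paths here are placeholders, not my actual paths):

```shell
# Cap the allocator's split size to reduce fragmentation
# (a PyTorch CUDA caching-allocator knob, not a model setting).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Launch the 7B text-completion example with the smallest possible batch.
torchrun --nproc_per_node 1 example_text_completion.py \
    --ckpt_dir llama-2-7b/ \
    --tokenizer_path tokenizer.model \
    --max_seq_len 128 \
    --max_batch_size 1
```

Even with both the batch size and the allocator capped like this, the run still died with CUDA OOM.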
- Text completion and chat completion both had memory errors.
- Didn’t have time to debug, so didn’t go deep here.
- Overall, a pretty poor experience with llama: not a quick, smooth sail (though I didn't troubleshoot whether it was the model's memory requirements or just the CUDA setup on my box).
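For a rough sense of whether this was the model's footprint or the local setup, a back-of-envelope estimate (my own arithmetic, not from the llama docs): the fp16 weights alone for 7B parameters come to about 13 GiB, before activations, the KV cache, or CUDA context overhead, which already exceeds most consumer GPUs.

```python
# Rough VRAM needed just for the fp16 weights of a model,
# ignoring activations, KV cache, and CUDA context overhead.
def weight_gib(n_params: float, bytes_per_param: int = 2) -> float:
    return n_params * bytes_per_param / 2**30

print(f"7B fp16 weights: {weight_gib(7e9):.1f} GiB")  # ~13 GiB
```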
- Aquarium monthly water change, in addition to usual biweekly maintenance.
- Nate Diaz boxed well last night (unfinishable), USWNT eliminated (bad for ratings), Sandhagen looked good (wrestling??), FIDE World Cup continues with the expected favorites (mostly).
- 10 lbs of PB from yesterday: roasted peanuts, coconut oil, maple syrup, honey, salt, cinnamon. The only difference from PB to protein bars: oat milk and protein powder.