Running local models?
Anyone have experience with running quantised models locally? I use qwen2.5:7b, but only with the default quantisation Ollama provides.
Curious if there is another platform, or a way of configuring a different quantisation level in Ollama. I want to test a bigger model; my laptop has 24GB RAM, so it should be able to handle more.
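For concreteness, here is roughly what I understand the tag-based approach to look like. The specific tag and repo names below are my best guess from the Ollama and Hugging Face library pages, not something I've verified on this machine, so treat them as a sketch:

```
# My understanding: the Ollama library publishes multiple quantisations
# of the same model as separate tags, so you pick the quant level by
# pulling a specific tag instead of the default (usually ~4-bit).
# Check https://ollama.com/library/qwen2.5 for the current tag list.

# See what's installed already
ollama list

# Pull a higher-precision quant of the same 7B model
ollama pull qwen2.5:7b-instruct-q8_0

# Or a bigger model at a 4-bit quant; a 14B model at q4_K_M is
# roughly 9GB, which should fit comfortably in 24GB RAM
ollama pull qwen2.5:14b-instruct-q4_K_M
ollama run qwen2.5:14b-instruct-q4_K_M

# Ollama can also run GGUF files straight from Hugging Face, which
# opens up any quant level someone has uploaded (repo name here is
# just one example of the hf.co/{user}/{repo}:{quant} pattern):
ollama run hf.co/bartowski/Qwen2.5-14B-Instruct-GGUF:Q4_K_M
```

Is that the right mental model, or is there a better tool than Ollama for experimenting with quant levels?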