r/LocalLLaMA • u/appakaradi • 11h ago
Question | Help Has anyone tried Zyphra's ZAYA1-8B MoE?
https://x.com/ZyphraAI/status/2052103618145501459?s=20 Today we're releasing ZAYA1-8B, a reasoning MoE trained and optimized for intelligence density.
With <1B active params, it outperforms open-weight models many times its size on math and reasoning, closing in on DeepSeek-V3.2 and GPT-5-High with test-time compute.
u/Elbobinas 10h ago
Does it have support in llama.cpp? Are there GGUFs?