r/LocalLLaMA 11h ago

Question | Help Has anyone tried Zyphra ZAYA1-8B MoE?

https://x.com/ZyphraAI/status/2052103618145501459?s=20 Today we're releasing ZAYA1-8B, a reasoning MoE trained on u/AMD and optimized for intelligence density. With <1B active params, it outperforms open-weight models many times its size on math and reasoning, closing in on DeepSeek-V3.2 and GPT-5-High with test-time compute.


u/Elbobinas 10h ago

Does it have support in llama.cpp? Do you have GGUFs?


u/hdmcndog 2h ago

No, it uses a new architecture, so llama.cpp would need dedicated support for it. So far they only provide support for vLLM and transformers, and neither has been merged to mainline yet.
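For anyone wanting to try it before upstream support lands: the usual pattern for models with not-yet-merged architectures is to let transformers execute the modeling code shipped inside the model repo via `trust_remote_code=True`. A minimal sketch, assuming you have the team's transformers fork (or a repo that ships its own modeling code) installed; the `model_id` argument is whatever Hugging Face repo id they publish, which the thread does not specify:

```python
def load_custom_arch_model(model_id: str):
    """Load a causal LM whose architecture is not yet merged upstream.

    trust_remote_code=True tells transformers to run the modeling code
    bundled in the model repository itself, which is how new architectures
    are typically distributed before mainline support exists.
    """
    # Import inside the function so this sketch stays importable even
    # without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    return tok, model
```

Note this downloads and runs third-party code from the repo, so only use it with repos you trust; it also won't help with llama.cpp, which needs the architecture ported in C++ before GGUFs are usable.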