Zuckerberg Ate His Own Open-Source Manifesto
Two years after writing 2,000 words about why open-source AI is the path forward, Zuck launched a locked-down proprietary model. That's not a pivot — it's a 180.
6 transmissions tagged #llm
Google finally dropped the custom Gemma license for Apache 2.0 — and that boring legal detail might matter more than any benchmark number.
Someone ran a 397B-parameter model on a MacBook Pro using raw C and Metal shaders. Here's why that's actually impressive and not just a stunt.
On a 24 GB card, single-GPU LLM inference is usually constrained by memory traffic and KV cache growth long before raw math throughput becomes the limit.
Why production agents should be evaluated like distributed systems: trajectory-level scoring, failure taxonomies, and explicit incident budgets.
Anthropic's Claude Sonnet 4.6 delivers across-the-board upgrades in coding, computer use, and long-context reasoning, at the same price as its predecessor.
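The bandwidth-bound claim in the single-GPU inference post above can be sanity-checked with back-of-the-envelope arithmetic. This is a rough sketch, not from the post itself: the model size, quantization, bandwidth, and layer dimensions below are assumed illustrative values (roughly a 7B model at 8-bit on a ~1 TB/s card), chosen only to show why weight traffic and KV cache growth bite before raw FLOPs do.

```python
# Illustrative, assumed numbers -- not measurements from the post.
params = 7e9                  # model parameters (hypothetical 7B model)
bytes_per_param = 1           # 8-bit quantized weights
weight_bytes = params * bytes_per_param

bandwidth = 1e12              # GPU memory bandwidth, ~1 TB/s (assumed)

# Each decoded token must stream every weight through the memory system
# once, so bandwidth alone caps the token rate regardless of compute.
tokens_per_s = bandwidth / weight_bytes
print(f"~{tokens_per_s:.0f} tokens/s upper bound from weight traffic alone")

# KV cache growth: 2 tensors (K and V) per layer, fp16 (2 bytes) per value.
layers, hidden = 32, 4096     # assumed model dimensions
kv_bytes_per_token = 2 * layers * hidden * 2
context = 32_000
kv_total_gb = kv_bytes_per_token * context / 1e9
print(f"KV cache at {context} tokens: {kv_total_gb:.1f} GB")
```

With these assumed numbers, weights take ~7 GB and the KV cache at a 32k context takes ~16.8 GB, which together already saturate a 24 GB card before arithmetic throughput enters the picture.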