Skills
The models are similar – but the skills make the difference.
A collection of 11 Posts
Who actually decided that more parameters automatically mean more intelligence?
Reducing the temperature from 1.0 to 0.6 for coding tasks changed everything
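The teaser above refers to the sampling temperature. As a minimal sketch (the model name, prompt, and request shape here are illustrative placeholders in the style of an OpenAI-compatible chat API, not taken from the post), this is where the setting typically lives:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completion endpoint.
# The model name and prompt are placeholders.
request_body = {
    "model": "example-coding-model",
    "messages": [
        {"role": "user", "content": "Write a function that reverses a string."}
    ],
    # Lower temperature means less random token sampling: 0.6 instead of the
    # common default of 1.0 tends to make code generation more deterministic.
    "temperature": 0.6,
}

print(json.dumps(request_body, indent=2))
```

Values near 0 make outputs nearly deterministic, which is usually what you want for code; higher values suit creative text.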
When should I use a small local LLM and when should I use a large cloud LLM?
ACE-Step 1.5 is a new open-source text-to-audio model licensed under MIT. It achieves quality on par with proprietary models (between Suno v4.5 and v5) and generates complete songs extremely quickly (under 10 seconds per song on an RTX 3090). The model runs locally without any problems, for example on a MacBook, so there is no cloud dependency.
Whisper has long been my favorite open-source speech recognition model, as no other model was comparably reliable in everyday use. However, Alibaba has now introduced a very convincing alternative in the form of Qwen3-ASR. The models (1.7B and 0.6B) support over 50 languages, deliver very high recognition accuracy—even with background noise—and run efficiently on standard hardware. Thanks to its Apache 2.0 license and strong practical results, Qwen3-ASR is a serious new competitor for Whisper.
Which LLM will be a Millionaire? An interesting benchmark for comparing the basic knowledge of LLMs.
GLM-4.7-Flash (Reasoning) is now the most intelligent open-weights model under 100B parameters
European company Mistral AI releases two coding models. The small one fits on my MacBook.