As 2024 went on, more and more AI discourse came around to the idea that AI scaling was probably about to end. One thing I learned from this article is that "scaling laws" have a more precise meaning than I'd known.
I had a general understanding of "scaling laws" to mean that the bigger the model, the more "capabilities" were supposed to pop out. But the original meaning in AI research is about a decrease in perplexity (perplexity being jargon I also learned from this article: a measure of a model's uncertainty when predicting the next word). A decrease in perplexity is not quite the same thing as new capabilities.
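To make "perplexity" concrete, here's a toy sketch of my own (not from the article): perplexity is the exponential of the average negative log-probability a model assigns to each actual next word, so it can be read as "the model is as uncertain as if it were choosing among N equally likely words."

```python
import math

def perplexity(token_probs):
    # token_probs: probability the model assigned to each actual
    # next token in a text. Perplexity is exp of the average
    # negative log-probability; lower means less uncertainty.
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

# A model assigning probability 0.25 to each correct token is,
# on average, as uncertain as picking among 4 equally likely words.
print(round(perplexity([0.25, 0.25, 0.25, 0.25]), 6))  # → 4.0
```

The point of the distinction: scaling laws predict this number falling smoothly as models grow, but they say nothing directly about which new capabilities, if any, show up along the way.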