On LLMs and Quicksort
There is no doubt that LLMs are a genuine breakthrough in AI, although they remain far from the utopia of AGI. Reaching human-level general intelligence will require further giant leaps, and who knows what the future may hold?
When it comes to understanding how such systems work, we can approach LLMs in two ways: as a white box or as a black box. A white-box approach means examining all the intricate internal machinery and data that give the LLM its capabilities. A black-box approach, in contrast, looks only at inputs and outputs, treating the LLM as a component whose inner workings are hidden. From this perspective, using an LLM is analogous to using a well-known algorithm like quicksort: you can call a sort() function and trust that it will return a sorted list, without needing to follow the sorting process step by step. It’s true that quicksort can be fully understood and even implemented in a few lines of low-level code, something we cannot do with today’s massive LLMs, but from the black-box view this difference doesn’t matter. What matters is that both quicksort and the LLM deliver useful results, even though the LLM, unlike quicksort, is not fully reliable.
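To make the analogy concrete, here is a minimal Python sketch. The llm_complete function is a hypothetical stand-in, not any particular library’s API: from the caller’s side it is used exactly like the sort routine, through its input/output contract alone.

    def quicksort(xs: list) -> list:
        """Quicksort in a few lines: every step can be followed."""
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (quicksort([x for x in rest if x < pivot])
                + [pivot]
                + quicksort([x for x in rest if x >= pivot]))

    def llm_complete(prompt: str) -> str:
        """Hypothetical black-box call to an LLM: text in, text out.
        In practice this would wrap a real API; here it is only a stub."""
        return "stubbed response for: " + prompt

    # From the caller's side, both are used the same way: trust the
    # input/output contract and consume the result.
    numbers = quicksort([5, 3, 8, 1])                      # [1, 3, 5, 8]
    summary = llm_complete("Summarize this report in one sentence: ...")

The only asymmetry the caller must keep in mind is reliability: quicksort’s contract is guaranteed, while the LLM’s output is merely useful most of the time.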
The LLM’s process may be opaque, but it adds value beyond what previous tools could deliver. LLMs represent a new software development paradigm: an approach to programming that complements existing ones. Focusing on the paradigm, rather than on the AI vs. AGI debate, opens the door to many new ideas and greater creativity. It invites us to ask what kinds of problems can now be approached in ways that were previously impossible. This is not a new observation: Andrej Karpathy’s “Software 2.0” (2017) framed neural networks as a new programming model built on data and training, and Gwern Branwen’s “GPT-3: A New Programming Paradigm?” (2020) posed a similar question. By recognizing LLMs as a new programming paradigm, we can draw a clear line that sets aside those other debates and lets us focus on the new frontiers this paradigm opens.
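As a rough illustration of how the new paradigm complements the old one, the sketch below contrasts a conventional, fully specified function with an LLM-backed one for a task that resists explicit rules. Both functions are assumptions made for this example, and the LLM-backed version reuses the hypothetical llm_complete stub from the earlier sketch.

    def classify_sentiment_rules(text: str) -> str:
        """Traditional paradigm: the behavior is spelled out in code.
        Deterministic and inspectable, but brittle for open-ended language."""
        negative_words = {"bad", "awful", "terrible", "broken", "useless"}
        words = set(text.lower().split())
        return "negative" if words & negative_words else "positive"

    def classify_sentiment_llm(text: str, llm_complete) -> str:
        """LLM paradigm: the behavior is specified by a prompt and delegated
        to a black-box model. Flexible, but not fully reliable, so the
        caller still validates the output."""
        prompt = ("Answer with exactly one word, positive or negative.\n"
                  f"Text: {text}")
        answer = llm_complete(prompt).strip().lower()
        return answer if answer in {"positive", "negative"} else "unknown"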

