Why are LLMs, however useful they are in many domains, considered a cul-de-sac of AI development?
Let me try to simplify it for you. Imagine a sine function represented by a number of sample points, and you attempt to model that function with polynomials. You can increase the number of points (more data) and you can raise the polynomial degree (a bigger model), but you remain within the polynomial family.
What will happen: your model will interpolate better and better, and even extrapolate reliably over short ranges, but it will never reach the point where it can predict behavior far beyond the range of the training set. That, simplified, is LLMs.
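A minimal sketch of the analogy, assuming NumPy's `polyfit` as the "polynomial model": fit a sine on one period, then compare the error inside the training range against the error far outside it.

```python
import numpy as np

# Training data: one period of the sine function.
x_train = np.linspace(0, 2 * np.pi, 200)
y_train = np.sin(x_train)

# "Better model": a degree-9 polynomial fit to the training points.
coeffs = np.polyfit(x_train, y_train, 9)
model = np.poly1d(coeffs)

# Interpolation: inside the training range the fit is excellent.
x_in = np.linspace(0, 2 * np.pi, 50)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Far-range extrapolation: outside the range the polynomial diverges,
# no matter how well it matched the training points.
x_out = np.linspace(4 * np.pi, 6 * np.pi, 50)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(f"max error inside training range: {err_in:.4f}")
print(f"max error far outside the range: {err_out:.1f}")
```

More data or a higher degree shrinks the first number, but the second stays enormous: the polynomial family simply cannot express periodicity, which is the point of the analogy.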
The solution is to let AI go beyond a single fixed model family, which is the direction AI is heading right now: the ability of an AGI to create models it has never encountered in its training history. Depending on your school of thought, you may hear terms like reasoning, hierarchical planning, or meta-learning; they are not the same thing, but they share the same aim.
Exciting times!
Simplified version of LLMs vs AGI
Author
TheoLacroix
Categories
Artificial Intelligence