Tesla bull and Grok investor Pierre Ferragu has shared his thoughts on the true nature of large language models (LLMs) amid ongoing debates in the AI space.
What Happened: Over the weekend, Carlos E. Perez, co-founder of Intuition Machine, took to X, formerly Twitter, questioning the capabilities of LLMs.
In his post, Perez noted that while LLMs can tackle complex problems, they often falter on seemingly simple logical steps.
His post spotlighted a study titled “Procedural Knowledge in Pretraining Drives Reasoning in Large Language Models,” which found that LLMs’ reasoning draws heavily on procedural knowledge in their pretraining data, with programming code playing an outsized role.
The study used EK-FAC influence functions to identify which training documents most influence a model’s output for a given query.
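The core idea behind influence functions can be illustrated with a toy sketch. The study itself uses the EK-FAC approximation over a full LLM; the snippet below is not that method, just the simplest conceptual proxy, scoring each training example by the dot product of its loss gradient with the query's loss gradient (an identity-Hessian assumption) on a made-up linear model.

```python
import numpy as np

# Toy sketch of influence-style attribution, NOT the paper's EK-FAC method.
# We score each training example by how aligned its loss gradient is with
# the gradient at a query point; higher score = more "influential".
# All data here is synthetic and for illustration only.

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))   # 5 training examples, 3 features
y = rng.normal(size=5)        # training targets
w = rng.normal(size=3)        # weights of a simple linear model

def grad_loss(w, x, target):
    # Gradient of squared error 0.5 * (w·x - target)^2 with respect to w
    return (w @ x - target) * x

x_query, y_query = rng.normal(size=3), 0.0
g_query = grad_loss(w, x_query, y_query)

# Influence score of each training example on the query
scores = np.array([grad_loss(w, X[i], y[i]) @ g_query for i in range(len(X))])
ranking = np.argsort(-scores)  # most influential training examples first
print(ranking)
```

The real method replaces the identity-Hessian shortcut with an EK-FAC approximation of the inverse Hessian, which is what makes the computation tractable at LLM scale.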
The research discovered a stark contrast in how LLMs handle factual and reasoning questions. LLMs ...