Apple’s latest paper, “The Illusion of Thinking”, is a RESET BUTTON for all AI Safety narratives. We are back to Square 1.
From Abhivardhan, ft. Indic Pacific Legal Research
Okay - so Mehrdad Farajtabar and his colleagues on Apple's Machine Learning team have just dropped an awesome paper.
It proves yet again that the whole narrative around AI Safety has been twisted by think tanks, law firms, social enterprises and a lot of influencers.
Why?
[ Detailed analysis: https://www.indicpacific.com/post/the-illusion-of-thinking-apple-s-groundbreaking-research-exposes-critical-limitations-in-ai-reasoni ]
Here's the breakdown.
1️⃣ AGI is not coming - and you can't build an AI Safety standard on top of a technology that is itself not reliable.
2️⃣ This does not mean there are no legal problems in cybersecurity, data protection, privacy, intellectual property and economic law. The scam that Builder.AI turned out to be clearly proves that the marketing-industrial complex should stop messing around with the science of artificial intelligence.
3️⃣ Now, Apple's paper on the "Illusion of Thinking" also shows that standard language models do far better than reasoning models on low-complexity tasks - which itself means reasoning models have no moat there. In smaller contexts, plain language models might have a genuine usability niche after all.
4️⃣ These Large Reasoning Models can’t execute algorithms reliably - the paper finds they fail puzzles like Tower of Hanoi even when handed the exact solution algorithm (see the sketch after this list) - and that’s a dealbreaker for true AGI.
5️⃣ Scaling helps, but it’s not a magic fix - the paper even finds that models reduce their reasoning effort as problems get harder past a threshold, despite having token budget to spare. AGI needs precise, step-by-step algorithm execution to handle complex tasks - like reasoning through safety protocols or solving multi-step problems.
6️⃣ If models can’t consistently assess risks or follow safety algorithms, as seen with their failures on adversarial inputs, we’re left with a shaky system that can’t be trusted, especially in critical areas like healthcare or security.
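To make point 4️⃣ concrete: Tower of Hanoi is one of the controllable puzzles the paper tests on, because the correct answer is a fully determined move sequence produced by a textbook recursive algorithm, and the minimum solution length (2^n - 1 moves for n disks) grows exponentially - the disk count is the paper's complexity knob. Here is a minimal sketch of that ground-truth solver; the code is just the standard algorithm, and the peg labels and test harness are my own illustration, not taken from the paper:

```python
# Textbook recursive Tower of Hanoi solver - the kind of exact,
# step-by-step algorithm execution the paper checks LRM outputs against.
def hanoi(n: int, src: str, aux: str, dst: str, moves: list) -> None:
    """Append the optimal move sequence for n disks to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # park the top n-1 disks on the spare peg
    moves.append((n, src, dst))          # move the largest disk straight across
    hanoi(n - 1, aux, src, dst, moves)   # stack the n-1 disks back on top

for n in (3, 10, 15):
    moves = []
    hanoi(n, "A", "B", "C", moves)
    # Minimum solution length is 2^n - 1, so difficulty grows exponentially
    # with the disk count - the knob the paper turns until accuracy collapses.
    assert len(moves) == 2**n - 1
    print(f"n={n}: {len(moves)} moves, first three: {moves[:3]}")
```

The point of a setup like this is that grading is unambiguous: a model either produces a valid move sequence that solves the puzzle or it doesn’t - and the paper reports that past a certain disk count, it doesn’t, no matter how many "thinking" tokens it is given.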
Anyways, this reminds me of what Sarvam AI is trying to do. They are building indigenous models, and that is completely fine. I don’t think their path should be compared with OpenAI’s here.
Moral of the story: Artificial Intelligence is not ending. AI research has won. There should be some course-correction, with a specific focus on language models and alternatives as well.
But Large Reasoning Models may well be out of the game for highly complex tasks.