The myth of AI 'accelerating scientific production' - and why you should care (it's serious for India)
From Abhivardhan, our Chairperson
Okay, less of some 'events talk' around my engagements, since I'd do that later :P
Let's talk about 3 posts - yes, 3 posts - about how AI hype, as a market and social phenomenon, has transformed by the end of the first quarter of 2025.
In case you don't know (which is fine): since 2020, I have been covering, analysing and trying to understand how AI hype shapes digital markets, and how anti-competitive, anti-scientific, and misleading it can get.
For example - let's look at Post 1 first.
Arvind Narayanan discusses something highly crucial about the big notion that AI will somehow drive scientific progress in a hyperaccelerated way. Funnily enough, it cannot, because so many misguided assumptions drive papers that make little sense; and even when papers do make sense, they are few, and they cannot easily be mapped onto real-life trajectories to understand research trends effectively.
Reminds me of an overstated remark by a minister of the Government of India that Indians are publishing a lot of research papers. Yeah, I remember law professors publishing articles on criminal law in chemistry/biotech journals (check LinkedIn feeds from the last 2 years and you will find Indian faculty members & students who have done this).
Publishing a lot doesn't mean you know it all. Producing a lot does not ensure quality in AI research either - it depends.
This is why I like this realist statement by Arvind, that producing papers is "a game researchers must play for status and career progress", and I understand how the same pressure also drives people to write distorted legal & policy literature. It's not about genuineness anymore (not that it ever fully was, but the human element has become manipulative, confused and generalised).
Post 2, by Paras Chopra, is more about AI development & commercialisation.
I don't disagree with his analogy at all. Even if Generative AI may not survive in the long run, AI is going nowhere, no matter what resource damage, information overload and law-policy framework overload this wave causes. That is concerning for India too.
This brings me to the laughable aspect of Post 3, from Max Tegmark, whom I had admired so much since 2017 - and now I feel disappointed in him. He thinks that Anthropic and OpenAI can replace PhD researchers.
Again, understanding technology's clinical role in making any form of research easier doesn't imply it can be augmented or automated into some agentic solution for anyone. It doesn't happen, nor will it. Yes, there are some AI agents, and there are achievements by Google on this front which I know of - but this gold rush to further confuse people into thinking they are in "denial of this coming tsunami" is utterly problematic.
How is this ethical or safe as AI discourse? And what will happen then? Some think tank or research organisation will come up with bunkum research to regulate AI using some quantifiable parameter which, again, doesn't signify anything substantial (like model weights, for example).