This is a post authored by Mr Abhivardhan, our Chairperson.
This exposition on Twitter (X) by Daniel Jeffries, a technology futurist, on the California AI Control and Centralization Bill (SB 1047) is a must-read for everyone in technology law and policy: https://twitter.com/Dan_Jeffries1/status/1794740447052525609.

I have shared some screenshots of the long post, and what he infers about the nature of this California AI legislation clearly reflects the current unfortunate state of #artificialintelligence regulation across the world (with the possible exceptions of Europe and China, which show some understanding of safety).
I have regularly called out the dubiousness of so many "AI bills" and "AI policy documents" simply because every government institution is too obsessed with pushing an AI regulation narrative without understanding how the technology works. India's Ministry of Electronics and Information Technology published an immature and arbitrary AI Advisory back in March.
Does this mean no regulation should happen? Not at all. In fact, to foster a democratic and inclusive discourse on regulating AI (starting with standardisation), I have already proposed AIACT.IN for the Indian tech community. And trust me - I have made significant improvements in #India's first AI regulation proposal.
What, then, can be regulated? Well - there are some AI technologies that have been around for a long time, such as AI assistants (chatbots), credit-scoring tools, and other common B2B tools - since AI productivity tools are not consumer toys (something most people in tech policy do not get). If you can hold a sensible and democratic discourse with the tech community in India, the US and beyond, you can understand what can be pursued.
However, as Yann LeCun of Meta has described (and Andrew Ng had already hinted), AI doomerism has given us doomers who do not even properly critique substandard, half-baked AI test cases and use cases. They are just reading marketed news and making judgments.
They think that AI might "kill us all", so they want to restrict AI development to a few companies across the world. They then assume that, for AI risks, liability must be imposed directly on foundation models and their derivatives (which is not a proper way to assign liability).
This is how they arrive at the claim that all open-source AI must be banned, which is shocking, because open-source AI built by an MNC is very different from open-source AI built by a startup or MSME. This approach neither helps us nor creates an evolutionary path to build, standardise and deploy AI in a safer way.
For a country like India, where AI development is still nascent, we at the Indian Society of Artificial Intelligence and Law have introduced aistandard.io to initiate some standardisation (not outright regulation yet). As things progress, we will be happy to update all of you.
What do you think of Daniel's and Yann's takes on AI regulation?