Andrew Ng's recent Economic Times Interview on Application Layer Building over Large Language Models
From Abhivardhan, our Chairperson
I came across this interview of Andrew Ng by The Economic Times on India's AI opportunities.
It is an interesting interview, though somewhat perplexing to read through.
Here is a straightforward reality check on what Andrew implies about India's AI future and its scenario so far.
1️⃣ "Many jobs will have 20-30% of the tasks that AI can play a material role in. I don't think AI will replace people, but people that use AI will replace people that don't."
Yes, this is true. Many #GenerativeAI tools are productivity-enhancing tools, or, as Gary Marcus coined the term, #RoughdraftAI tools: they produce imperfect outputs meant to be perfected (read my interview with The Indian Express: https://www.linkedin.com/posts/abhivardhan_ai-ethics-generativeai-activity-7224579674113241088-BLcO?utm_source=share&utm_medium=member_desktop).
2️⃣ "But I think that a much larger fraction of India's value to be created and captured would be in building applications on top of these wonderful foundation models. India's economy has a large service sector or large industrial sector, huge agricultural sector; quite a lot of FDI (foreign direct investment) as well."
This echoes Raghuram Rajan's naive take on India's economy. While the service sector is huge, AI's potential in manufacturing, supply chains, and hardware deserves more attention. Also, wrapping applications around foundation models might not be as impactful as it seems (see this paper: https://arxiv.org/abs/2301.05397).
3️⃣ "For the application layer, [...] I think it's [GenAI investments] totally worth it, partly because it's so capital efficient [...] And I'm seeing revenues pick up. [...] Now, at the foundation model layer, given the capex spends on GPUs, I think [...] from a timing point of view, there's a lot of pressure to prove out the value of that investment in the short time horizon."
Andrew has been largely cryptic about the timing of GenAI infrastructure-layer investments. Yet he recognises that there is genuine pressure to prove out the value of those investments. Again, the possibility that cloud companies could create a vendor lock-in effect around AI remains concerning (read this: https://www.indicpacific.com/post/the-cloud-the-code-and-the-competition-microsoft-s-calculated-clash-with-openai).
Now, while the cost of building applications around foundation models is relatively much lower, we already know that many AI startups are being sold to bigger players (read this: https://www.theinformation.com/articles/the-generative-ai-startups-that-may-look-for-a-buyer).
Second, the patentability risk around building these applications or wrappers is a significant legal problem that needs to be addressed. Apart from this, the unclear bye-laws and access policies of cloud and foundation model providers will remain a pressing concern.