Sources and Reasoning as Foundational Elements to Improve AI Productisation?
From Abhivardhan, our Chairperson
Amidst my travel frenzy, I came across a recent post on X by Aravind Srinivas, which I found quite interesting.
He claims that, in the case of AI products, two simple things have made a huge difference in how UX and end-user trust can be handled 'positively': sources (citations, references) and reasoning traces (chain of thought).
So, let's understand this.
While these principles are compelling, they warrant critical examination to assess their effectiveness and limitations.
1️⃣ Sources (Citations): Transparency vs. Overhead 🔍
Srinivas argues that citations combat AI's "black box" problem by allowing users to verify claims against original sources. For example, Perplexity's answers link directly to websites, research papers, or news articles, mimicking academic rigour. This approach addresses the "trust paradox" in AI, where users engage with systems they distrust if verification mechanisms exist.
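To make the idea concrete, here is a minimal sketch in Python of an answer object that carries its sources along with it. This is not Perplexity's actual implementation (which is not public); the structure, the names, and the second source URL are illustrative assumptions. The first source is the real chain-of-thought paper by Wei et al.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """A verifiable origin for a claim: a title plus a resolvable URL."""
    title: str
    url: str

@dataclass
class CitedAnswer:
    """An answer whose claims are mapped to the sources backing them."""
    text: str             # answer body, with [n] markers inline
    sources: list[Source] # ordered so [n] resolves to sources[n-1]

def render(answer: CitedAnswer) -> str:
    """Render the answer followed by a numbered reference list,
    so a user can check every [n] marker against its origin."""
    refs = "\n".join(
        f"[{i}] {s.title} - {s.url}" for i, s in enumerate(answer.sources, 1)
    )
    return f"{answer.text}\n\nSources:\n{refs}"

# Hypothetical example: two claims, each traceable to a source.
answer = CitedAnswer(
    text="CoT prompting improves multi-step accuracy [1], at extra cost [2].",
    sources=[
        Source("Chain-of-Thought Prompting (Wei et al., 2022)",
               "https://arxiv.org/abs/2201.11903"),
        Source("Inference cost analysis", "https://example.com/cost"),
    ],
)
print(render(answer))
```

The design choice worth noting: the answer and its evidence travel as one object, so the interface can always offer a verification path without a second lookup.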
However, let's be clear:
Source quality 🏆: Citations only build trust if sources are credible. AI models risk amplifying misinformation if they reference low-quality or biased content.
User burden 🏋️: Overloading answers with citations can clutter interfaces, undermining UX simplicity—a core Perplexity selling point.
Partial transparency 🌗: Citations often highlight supporting evidence but may omit contradictory sources, creating a false sense of objectivity.
2️⃣ Reasoning Traces (Chain of Thought): Clarity vs. Complexity 🧩
Chain-of-thought (CoT) prompting forces AI to "show its work," breaking answers into logical steps.
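As a minimal illustration, here is what the two prompting styles look like side by side. This is a sketch in Python; the question is a stock example, and sending the prompt to a model is left abstract, since the zero-shot "Let's think step by step" cue (Kojima et al., 2022) works with most instruction-tuned LLM APIs.

```python
# A stock multi-step question (the classic "bat and ball" puzzle).
question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Direct prompt: the model answers in one shot, with no visible reasoning.
direct_prompt = question

# Zero-shot chain-of-thought prompt: the trailing cue elicits intermediate
# steps, so the final answer arrives with a reasoning trace the user can
# inspect.
cot_prompt = question + "\nLet's think step by step."

# Send either string to your LLM provider of choice; only the suffix differs.
print(cot_prompt)
```

The entire difference is one trailing sentence; the latency, verbosity, and cost consequences below all follow from the longer output it elicits.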
But it ain't as simple as it seems:
Performance trade-offs ⚖️: CoT increases computational costs and latency (see the rough arithmetic after this list), conflicting with Perplexity's emphasis on speed.
Illusory logic 🎭: Transparent reasoning ≠ correct reasoning. Models can generate coherent-but-wrong steps (e.g., flawed math derivations).
User fatigue 😴: Non-technical audiences may find verbose reasoning overwhelming, preferring concise answers.
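On the trade-off point, a hedged back-of-envelope sketch (all numbers below are illustrative assumptions, not any provider's real pricing) shows why: decoding cost and time-to-last-token scale roughly linearly with generated tokens, and reasoning traces multiply the token count.

```python
# Back-of-envelope sketch with illustrative numbers only.
price_per_1k_output_tokens = 0.002  # hypothetical $/1K output tokens
direct_tokens = 50                  # terse answer
cot_tokens = 300                    # same answer plus reasoning steps

direct_cost = direct_tokens / 1000 * price_per_1k_output_tokens
cot_cost = cot_tokens / 1000 * price_per_1k_output_tokens

print(f"direct: ${direct_cost:.5f}  cot: ${cot_cost:.5f}  "
      f"ratio: {cot_cost / direct_cost:.1f}x")
# -> 6x the output tokens means roughly 6x the cost, and proportionally
#    higher latency at a fixed decode speed.
```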
To conclude, here are some broader implications: 🌐
Sustainability 🌱: Maintaining citation accuracy at scale requires continuous source validation—a resource-intensive task.
Cultural bias 🌍: Trust in citations depends on users' familiarity with Western academic norms, potentially alienating global audiences.
Commercial pressures 💼: As Perplexity scales, balancing ad integrations with citation integrity could strain its "truth-seeking" culture.
That being said, do look at the Indian Society of Artificial Intelligence and Law's recommendations on the Indian Budget 2025, published as the Economic Survey of India for 2024-25 is released: https://www.isail.in/post/isail-ai-recommendations-on-ai-innovation-and-budgetary-considerations-in-india-for-2025