Indian PM’s advisors say AI can cause ‘mass schizophrenia’

India’s Economic Advisory Council to the Prime Minister (EACPM) has penned a document warning that current global AI regulations are likely to be ineffective, and recommended regulating the technology with alternative tactics – like those used in financial markets.

The Council is very worried about AI. Its document warns “Through a combination of surveillance, persuasive messaging and synthetic media generation, malevolent AI could increasingly control information ecosystems and even fabricate customized deceitful realities to coerce human behavior, inducing mass schizophrenia.”

The org criticizes the US’s approach to AI as too hands-off, the UK’s as risky for being pro-innovation and laissez-faire, and the EU’s AI rules as flawed because the bloc’s member nations have splintered, adopting different emphases and applications of enforcement measures.

The document also argues that China’s tendency to regulate with “an all-powerful centralized bureaucratic system” is flawed – as demonstrated by “the likely lab-leak origin of COVID-19.”

We’re through the looking glass here, people.

(For the record, the US Office of the Director of National Intelligence has found no indication that the virus leaked from a Chinese lab.)

But we digress.

The Council suggests AI be considered a “decentralized self-organizing system [that evolves] through feedback loops, phase transitions and sensitivity to initial conditions” and posits other examples of such systems – like nonlinear entities seen in financial markets, the behavior of ant colonies, or traffic patterns.

“Traditional methods fall short due to AI’s non-linear, unpredictable nature. AI systems are akin to Complex Adaptive Systems (CAS), where components interact and evolve in unpredictable ways,” explained [PDF] the council.
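To see why the Council thinks ex-ante risk prediction fails for such systems, consider the textbook demonstration of sensitivity to initial conditions: the logistic map. The following Python sketch is our illustration, not anything from the Council’s paper – two trajectories that start one part in a million apart bear no resemblance to each other within a few dozen steps.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), chaotic at r = 4.
# Two near-identical starting points diverge completely -- the
# "sensitivity to initial conditions" the EACPM document invokes.

def logistic_trajectory(x0: float, r: float = 4.0, steps: int = 50) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)  # perturbed by one part in a million

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
```

The two columns match at step 0, have visibly drifted by step 10, and are unrecognizable by step 25 – no amount of ex-ante inspection of the starting state would have predicted either trajectory.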

The Council isn’t keen on relying on “ex-ante” measures, as it’s impossible to know in advance the risk an AI system will present – its behavior is a result of too many factors.

The document therefore proposes India adopt five regulatory measures:

  • Instituting guardrails and partitions, which should ensure AI technologies neither exceed their intended function nor encroach on hazardous territories – like nuclear armament decision-making. If a system somehow breaches its guardrail, partitions are there to stop the breach spreading to other systems.
  • Ensuring manual overrides and authorization chokepoints that keep humans in control, and keeping them safe with multi-factor authentication and a multi-tiered review process for human decision-makers – sketched in code after this list.
  • Transparency and explainability with measures like open licensing for core algorithms to foster an audit-friendly environment, regular audits and assessments, and standardized development documentation.
  • Distinct accountability through predefined liability protocols, mandated standardized incident reporting, and investigation mechanisms.
  • Establishing a specialized regulatory body that is given a wide-ranging mandate, takes a feedback-driven approach, monitors and tracks AI system behavior, integrates automated alert systems, and establishes a national registry.
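The document stops at principles and doesn’t specify code, but for a rough feel of what the first two measures might mean in practice, here is a minimal, entirely hypothetical Python sketch of a guardrail partition plus a human-authorization chokepoint. Every name in it is our invention, not the Council’s:

```python
# Hypothetical sketch of a guardrail + human-authorization chokepoint.
# Nothing here comes from the EACPM document; all names are illustrative.

FORBIDDEN_DOMAINS = {"nuclear_command", "critical_infrastructure"}

class GuardrailViolation(Exception):
    """Raised when a proposed action strays outside its intended function."""

def check_guardrail(action_domain: str) -> None:
    # Partition: actions in hazardous domains are rejected outright,
    # so a breach in one system cannot spread into another.
    if action_domain in FORBIDDEN_DOMAINS:
        raise GuardrailViolation(f"domain '{action_domain}' is off-limits")

def execute_with_chokepoint(action, domain: str, approvals: list[str]) -> None:
    check_guardrail(domain)
    # Multi-tiered review: require sign-off from at least two distinct
    # human reviewers before the system is allowed to act.
    if len(set(approvals)) < 2:
        raise PermissionError("two independent human approvals required")
    action()

execute_with_chokepoint(lambda: print("action executed"),
                        domain="logistics",
                        approvals=["reviewer_a", "reviewer_b"])
```

The design point, as far as the document goes, is simply that the human sign-off sits between the model and the world – the AI never holds the authorization keys itself.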

The Council recommended looking to other complex adaptive systems – primarily financial markets – for ideas on how to implement its proposals.

“Insights from governing chaotic systems like financial markets demonstrate feasible regulation approaches for complex technologies,” the document observes, suggesting dedicated AI regulators could be modeled on financial regulators like India’s SEBI or the USA’s SEC.

Just as those bodies impose trading halts when markets are in danger, regulators could adopt similar “chokepoints” at which AI would be brought to heel. Compulsory financial reporting is a good model for the kind of disclosure AI operators could be required to file.
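As a rough analogy – ours, not the document’s – an AI “circuit breaker” could watch a behavioral metric and trip when it drifts too far from baseline, much as exchanges halt trading after a sharp fall. A minimal hypothetical Python sketch:

```python
# Hypothetical "circuit breaker" for an AI service, loosely modeled on
# market-wide trading halts; thresholds and names are illustrative only.

class CircuitBreaker:
    def __init__(self, baseline: float, halt_threshold: float = 0.20):
        self.baseline = baseline              # expected value of the metric
        self.halt_threshold = halt_threshold  # fractional drift that trips a halt
        self.halted = False

    def observe(self, metric: float) -> None:
        drift = abs(metric - self.baseline) / self.baseline
        if drift >= self.halt_threshold:
            self.halted = True  # chokepoint: stop serving until humans review

breaker = CircuitBreaker(baseline=0.95)   # e.g. a model's rolling accuracy
for reading in (0.94, 0.93, 0.70):        # sudden drop on the last reading
    breaker.observe(reading)
    print(f"metric={reading:.2f} halted={breaker.halted}")
```

Note that, like a trading halt, the breaker doesn’t diagnose anything – it just freezes the system and hands the problem to humans.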

The authors’ concerns are fueled by a belief that AI’s increasing ubiquity – combined with the opacity of its workings – means critical infrastructure, defense operations, and many other fields are at risk.

Among the dangers they outline are “runaway AI” where systems might recursively self-improve beyond human control and “misalign with human welfare,” and the butterfly effect – a scenario “where minor changes can lead to significant, unforeseen consequences.”

“Therefore, a system of opaque state control over algorithms, training sets, and models with a goal to maximize interests can lead to catastrophic outcomes,” the Council warned.

The document notes that its proposed regulations may mean some scenarios need to be ruled out.

“We may never allow a super connected internet of everything,” the Council concedes. But it concludes that humanity may have more to gain from strong regulations.

“Those creating AI tools will not be let off easily for supposed unintended consequences – thereby inserting an ex-ante ‘Skin in the Game’. Humans will retain override and authorization powers. Regular mandated audits will have to enforce explainability.” ®
