
Global perspectives on AI accountability: A comparative analysis and India's regulatory landscape

By: Ramendra Verma

As the current chair of the Global Partnership on Artificial Intelligence (GPAI), India wields substantial influence in the field of Artificial Intelligence (AI). Since India joined as a founding member in June 2020, GPAI's membership has more than doubled, from 15 to 29 countries, and India's impact on the international AI landscape is becoming increasingly evident. The nation is strategically positioned to drive the growth of AI startups, advocate for widespread AI implementation, and shape AI development in developing regions. By taking the lead within GPAI, India can play a pivotal role in guiding global initiatives towards ethical and accountable AI standards.

The spotlight is now on regulatory frameworks and accountability measures. Several jurisdictions, including the European Union (EU) and the United States (US), have taken significant steps to establish guidelines and regulations for the ethical and responsible use of AI. In this article, we examine the regulatory measures taken by the EU, the US, and other countries, drawing parallels and contrasts with India's current regulatory position on AI accountability.

The EU has positioned itself as a global leader in AI regulation. On 8 December 2023, after several months of intense negotiations, the European Parliament and the Council reached a political agreement on the EU Artificial Intelligence Act (AI Act). Through this legislation, the EU aims to create a harmonised legal framework for artificial intelligence across the Union, to ensure that AI is “safe” and “respects fundamental rights and EU values”.

The new AI Act takes a risk-based approach, classifying AI systems into tiers ranging from unacceptable risk through high risk down to limited and minimal risk, with each tier subject to a corresponding level of scrutiny. Applications in critical sectors, such as healthcare and transport, are labelled high-risk and face rigorous conformity assessments. At the top tier, the legislation takes a strong stance by prohibiting AI practices deemed unacceptably hazardous, such as systems that manipulate individuals or exploit their vulnerabilities. The Act also emphasises transparency and accountability in AI applications, making it mandatory for high-risk systems to provide clear explanations of their decision-making processes. Finally, the AI Act complements the principles of the General Data Protection Regulation (GDPR), underlining the need for AI technologies not only to uphold data protection standards but also to respect individuals' privacy.

In the US, AI regulation is evolving through a combination of sector-specific guidelines and principles laid out by various federal agencies. While there is no comprehensive federal legislation, several initiatives have been undertaken. The National Artificial Intelligence Initiative Act, signed into law in January 2021, establishes a national strategy for advancing AI research and development, emphasising public-private partnerships and workforce development. Agencies such as the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST) have issued guidelines for specific sectors, focusing on data privacy, fairness, and transparency. The Biden administration has issued executive orders directing federal agencies to prioritise AI research and development, emphasising the ethical use of AI and promoting international collaboration on AI standards.

Beyond the EU and the US, other countries have also taken steps to address the challenges posed by AI. Canada has released ethical guidelines for AI development, with a focus on transparency, accountability, and fairness; the guidelines are designed to steer both the public and private sectors. Singapore has established the Model AI Governance Framework, which outlines key principles for the responsible adoption of AI and likewise emphasises fairness, transparency, and accountability. China has released guidelines for AI development, focusing on national security, ethical considerations, and the responsible use of AI technologies, and is also working on updating its data protection laws.

In India, efforts toward AI regulation are in progress, with the country formulating a National AI Strategy to guide the development and deployment of AI technologies. The strategy is expected to cover research and development, infrastructure, and ethical considerations. NITI Aayog, the government's policy think tank, released draft guidelines on the ethical use of AI in 2020. The guidelines focus on fairness, accountability, and transparency in AI systems.

The Digital Personal Data Protection Act (2023) aims to regulate the processing of personal data and includes provisions that affect AI applications. The Act addresses user consent, data localisation, and individuals' rights over their data.

India faces the challenge of aligning its regulatory frameworks with global standards while addressing its unique socio-economic context. Alignment with global norms, especially in areas such as data protection and ethics, will facilitate international collaboration and the responsible deployment of AI technologies. Regulatory efforts must involve input from diverse stakeholders, including government bodies, industry, academia, and civil society; a collaborative approach ensures that regulations are comprehensive, balanced, and mindful of the interests of all parties involved. Given the rapid evolution of AI technologies, regulatory frameworks must also be dynamic and adaptable, with regular reviews and updates to keep pace with technological advances and emerging ethical considerations.

As the global community grapples with the complexities of regulating artificial intelligence, India stands at a crucial juncture. Learning from the experiences of the EU, the US, and other nations, India has the opportunity to craft a regulatory landscape that fosters innovation while upholding ethical standards and accountability.

India should prioritise uplifting the majority population while ensuring inclusive development, fair resource distribution, and addressing biases. This approach should not compromise on ethical considerations, transparency, or the well-being of marginalised communities. Leveraging technology, promoting education, and building social cohesion are essential components of a balanced strategy for sustainable and equitable development. The choices made today will shape India's future in the era of artificial intelligence, and a thoughtful, inclusive, and accountable approach will be instrumental in realising the potential benefits of AI for the nation.

This article first appeared in Business Outlook & Money on 22 February 2024.