AI's transformative power lies in its capacity to simulate human intelligence and learn from vast datasets, shaping decisions that impact diverse areas—from business and healthcare to governance and public safety. While it offers immense opportunities, AI systems also introduce ethical and governance concerns that businesses and policymakers must address to build responsible, trustworthy AI solutions. A recent study by a leading research firm revealed that security, privacy, and accuracy remain the top barriers to adopting and implementing Generative AI across enterprises.

Top challenges in adopting Gen AI, according to Fortune 500* Executives:

The study also shows that 30% of enterprises using AI reported having had a security or privacy breach in their ecosystem. These statistics reinforce the need for ethics, security, and governance to be factored into the rollout of Generative AI programs across enterprises.

AI Ethics, Security and Governance Framework

At Grant Thornton Bharat, we have built a comprehensive framework for AI Ethics, Security, and Governance, covering areas such as Responsible AI (RAI) and Explainable AI (XAI), among others. Let’s explore core pillars of ethical AI, highlight real-world frameworks, and consider the social dilemmas arising from our growing reliance on AI.

AI systems differ from traditional software by evolving through data-driven learning rather than pre-defined rules. This learning process can inadvertently incorporate biases in training data, perpetuating societal inequalities. For example, a hiring algorithm may favour certain demographics due to biased historical data. To ensure fairness, AI systems must treat everyone equally, avoiding differential impacts on similarly situated groups (based on race, sex, age, gender, ethnicity, or disabilities).

Frameworks and Tools:

  • Google’s Responsible AI Pledge: Includes fairness principles to mitigate biases through diverse datasets and regular audits.
  • Fairlearn: An open-source Python toolkit that assesses and mitigates fairness issues in AI systems.

Inclusivity in AI requires that systems serve all users fairly and without bias. However, AI can inadvertently mirror societal biases, especially when they are embedded in the training data. To ensure inclusivity, AI models must be carefully evaluated for potential biases during both the development and deployment stages.

Fostering inclusive AI is a form of socio-engineering—designing systems that reflect a broad range of voices and values. Techniques such as incorporating diverse datasets and deploying bias-reduction algorithms are essential in addressing these challenges.

Framework and Tools:

In 2023, New York City enacted an AI hiring law that mandates employers using AI or machine learning in hiring processes to conduct annual audits of their recruitment technologies. These audits must be carried out by third-party providers and assess whether the systems exhibit any bias—whether intentional or unintentional. Non-compliance with this law can result in fines ranging from $500 to $1,500 per violation and is mandatory for any business operating in New York City.

The reliability and safety of AI systems are crucial, particularly in high-stakes environments where errors can have serious consequences. It is essential to distinguish between trustworthy sources (such as verified corporate blogs) and unreliable ones to ensure accurate data input. Key aspects of reliability include content accuracy (also known as groundedness), performance, and the availability of the AI system.

Framework and Tools:

Azure AI Content Safety’s groundedness detection feature verifies that the responses generated by large language models (LLMs) are rooted in the provided source materials. This process ensures the accuracy of AI-generated responses.
OpenAI’s GPT models caution users about the potential for inaccuracies in AI outputs, underscoring the need for continual validation, regular updates, and transparency regarding inherent risks.

Azure AI Content Safety is an AI service by Microsoft that detects harmful content in both user input and AI-generated outputs, using text and image APIs to monitor applications and services for risks.

AI systems heavily rely on data, including personal information, to deliver value, making data security and privacy of utmost importance. As demonstrated by Microsoft’s Tay chatbot, AI can sometimes behave unpredictably when exposed to large volumes of unfiltered user data, raising privacy concerns.

Framework and Tools:

Google’s Vertex Responsible AI framework advocates for data minimisation and responsible data handling. With evolving global privacy laws, companies must navigate the fine balance between AI innovation and user privacy.

IBM’s watsonx.governance offers an AI risk atlas, outlining risks associated with generative AI, foundation models, and machine learning models. These risks are categorised into three types: (1) traditional AI risks, (2) risks amplified by generative AI, and (3) new risks associated with generative AI.

Similar to the OWASP Top 10 security risks for apps and data, the OWASP Top 10 for Machine Learning and Gen AI/LLM provides a structured approach to identifying and mitigating risks in these technologies.

OWASP TOP 10 Security Risks for ML

  • Input Manipulation Attack
  • Data Poisoning Attack
  • Model Inversion Attack
  • Membership Inference Attack
  • Model Theft
  • AI Supply Chain Attacks
  • Transfer Learning Attack
  • Model Skewing
  • Output Integrity Attack
  • Model Poisoning

OWASP TOP 10 Security Risks for Gen AI

  • Prompt Injection
  • Insecure Output Handling
  • Training Data Poisoning
  • Model Denial of Service
  • Supply Chain Vulnerabilities
  • Sensitive Info Disclosure
  • Insecure Plugin Design
  • Excessive Agency
  • Overreliance
  • Model Theft

AI systems often depend on intellectual property such as research papers, images, or proprietary algorithms, creating IP challenges during both development and deployment. Unauthorized use of IP by AI models can lead to ethical and legal disputes. This risk is particularly significant for generative AI and language models, as they generate text, images, audio, and video based on existing content. Unless this content is carefully curated, the risk of IP infringement increases, potentially leading to millions of dollars in penalties.

Consulting with compliance and procurement experts during data selection, model building, and testing phases can help companies navigate IP considerations. Ensuring proper attribution and permissions for external resources used in AI models is critical, and these experts should continue to be involved during the operational phase.

Framework and Tools:

The Protected Material Text Detection feature in Azure AI Content Safety checks AI-generated text for known content that may be IP protected, such as song lyrics, selected articles, and recipes. It also flags protected code, including software libraries, source code, algorithms, and other proprietary programming content sourced from known GitHub repositories.

As AI’s potential for misuse—such as in data mining or behavior analysis—becomes evident, regulatory oversight is increasingly vital. Regions like the European Union have implemented stringent AI regulations, setting standards that companies worldwide must follow.

An AI regulatory framework not only ensures compliance but also defines the ethical boundaries of AI applications. By collaborating with regulatory bodies, businesses can create systems that respect consumer rights and societal values.

Framework and Tools:

There has been a surge in international, regional, and national laws regarding data security, ownership, and AI regulation. Notable examples include GDPR in the European Union, the Australian AI Ethics Framework, the UK AI Regulations Policy, and Canada’s Bill C27, AI and Data Act. The latest addition to this list is the EU AI Act, rolled out in 2024. These frameworks define the compliance requirements for AI systems.

Additionally, the OECD AI Principles, adopted by over 40 countries, stress responsible stewardship of trustworthy AI, emphasizing transparency, fairness, and accountability in AI systems.

Explainable AI (XAI) is essential to ensure that AI systems’ decisions are understandable. For example, an AI-based resume-screening tool should be able to explain why it favored certain candidates over others. Transparent systems build user trust by making the decision-making process interpretable.

One way to achieve this is through hybrid AI approaches, where service representatives collaborate with conversational agents, maintaining human oversight while the AI system adapts and learns. This human-AI collaboration enhances transparency and mitigates potential issues in complex customer interactions.

Framework and Tools:

Diverse Counterfactual Explanations (DiCE), developed by Microsoft Research, is an XAI tool designed to explain the predictions of machine learning-based systems used in critical domains such as finance, healthcare, and education. For instance, if a machine learning model is used in an applicant tracking system (ATS), DiCE would explain why a particular candidate was selected. This helps stakeholders—such as model designers, decision-makers, and evaluators—better understand the model’s decision-making process.

Another XAI tool is Vertex Explainable AI, a component of Google Cloud’s Vertex AI offering. It provides both example-based and feature-based explanations and supports multiple ML models and image data.

AI systems, including humanoid robots, are increasingly used in sectors such as manufacturing and elder care. To ensure these systems promote positive human values, businesses must set boundaries informed by ethical guidelines (e.g., Asimov’s laws of robotics). This helps ensure that AI systems are used to protect human values and prevent harm.

Japan’s use of robots for elder care highlights the importance of establishing ethical boundaries in AI applications. These robots provide companionship and assistance while respecting human dignity.

Framework and Tools:

This principle aligns with the laws, regulations, and frameworks discussed earlier. It is included in AI ethics and governance frameworks such as the IndiaAI Responsible AI framework. The guidelines emphasize conducting critical reviews of proposed use-cases, avoiding unethical use-cases, aligning models with the values of target user groups, and adopting human-centered AI design to enhance user experience.

Accountability in AI is complex, as liability often depends on the specific parties involved. However, companies must accept responsibility for errors and biases in AI outputs and be transparent about accountability when an AI system fails. AI designers, developers, and operators must ensure they follow best practices in ethics, security, and governance during design, development, decision-making, and outcomes.

Nearly 50% of surveyed developers believe the creators of AI should be responsible for considering the broader implications of the technology (Source: Stack Overflow Developer Survey, 2018).

Establishing governance frameworks can improve accountability by clarifying responsibility for AI outcomes. Microsoft underscores the importance of ensuring security and quality, emphasizing that accountability in AI is key to building trust.

Framework and Tools:

MLOps and LLMOps tools support accountability across AI project stakeholders. For example, Azure Machine Learning’s MLOps capabilities track model metadata, capture governance data across the machine learning lifecycle, notify users of lifecycle events, and monitor applications for operational and machine-learning-related issues.

Responsible AI scorecards, dashboards, and reporting tools across various AI and generative AI tools help all stakeholders stay informed about AI ethics and governance, driving continuous improvement.

An effective AI governance framework provides guidelines for ethical AI development and deployment. Governance helps companies manage AI-related risks, from preventing misuse to addressing concerns like job displacement, while aligning AI applications with corporate values. Key governance areas include model selection criteria, Responsible AI (RAI), Explainable AI (XAI), AI monitoring and observability, and AI cost management (e.g., FinOps for AI). DataOps, MLOps, and LLMOps are integral to AI governance.

Establishing a governance body or board ensures that AI systems are regularly evaluated for ethical integrity and societal impact. Effective governance goes beyond compliance, fostering an ethical and sustainable approach to AI.

Framework and Tools:

Cloud service providers and AI infrastructure providers offer MLOps and LLMOps tools and services, such as IBM’s watsonx.ai and watsonx.governance, Vertex AI, AWS SageMaker and Bedrock, and Azure AI Studio and Azure ML.

Conclusion

As AI continues to shape society, it presents a complex dilemma: while its convenience and capabilities offer tremendous benefits, they also raise concerns about privacy, transparency, and accountability. In sectors like news, AI-driven content aggregation risks compromising journalistic integrity, while in customer service, it may reduce human empathy in interactions.

Our increasing reliance on AI brings critical questions regarding human agency, privacy, and societal impact. While the challenges surrounding AI ethics, security, and governance are significant, at Grant Thornton Bharat, we have the expertise to guide organisations through these complexities. We are committed to helping businesses develop and deploy AI solutions that align with responsible practices, ensuring fairness, transparency, and accountability. By adhering to ethical frameworks and leveraging cutting-edge tools, we support businesses in safeguarding their values while unlocking AI’s potential to drive innovation and success.

The path forward requires ongoing vigilance, collaboration, and innovation to balance AI’s vast benefits with the ethical and governance standards demanded by society.

Click here to know more about Gen AI and AI/ML Services