Apr 24, 2024

Building Trustworthy AI: A Roadmap for India

Written by: Apurba Kundu

The potential of AI is undeniable, but so are the risks. As India heads towards an AI-powered future, is the country prepared for the ethical and regulatory challenges it presents? How do we ensure responsible AI development in India, from data privacy to robust regulations? Let's answer that question.

“I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.”

–– Sam Altman on AI, in an interview with TIME Magazine

This statement underscores the profound impact that AI promises, while highlighting questions of safety, transparency, and accountability in AI systems.

As we stand on the precipice of an AI-powered future, a pivotal question demands our attention: Are we truly prepared? How do we ensure responsible and trustworthy AI? 

A significant disparity marks the current landscape. AI technology is hurtling forward with new developments nearly every week.

In stark contrast, the development and rollout of new regulations seem unable to keep up, leaving a regulatory vacuum. This raises many concerns, including potential violations of privacy and intellectual property rights, as well as algorithmic bias.

So what proactive steps can be taken to ensure that AI's development remains firmly rooted in ethical principles, fosters responsible innovation, and ultimately serves as a force for good in the world?

Part 1: Data Privacy – The Foundation of Trustworthy AI

Datasets fuel the development and training of AI models, ultimately shaping their outputs. However, in this data-driven landscape, a critical consideration emerges: privacy.

Privacy and AI are inextricably linked. The data used to train AI models often contains personal information, including what Indian law terms ‘sensitive personal data or information’, raising concerns about potential misuse and unauthorised access.

For example, health AI models are trained on datasets containing sensitive medical records. Without robust privacy safeguards, such data could be breached, compromising patient confidentiality and eroding trust in the technology itself.

This is where data privacy by design comes in. The principle emphasises the proactive integration of privacy considerations throughout the entire AI development lifecycle – a solid foundation upon which trustworthy AI is built.

Prioritising data privacy by design means making a real effort to find coherence and cooperation between the Digital Personal Data Protection Act, 2023 (DPDPA) – passed in August 2023 but yet to come into effect – and other relevant laws, such as:

  • The Information Technology Act, 2000, particularly the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (“SPDI Rules”)

  • The Indian Medical Council Act, 1956 (“MCI Act”)

  • The Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002 (“MCI Code”)

  • The Electronic Health Records Standards, 2016

  • The Medical Devices Rules, 2017

Building a Robust Data Management Process

The Why

Data privacy laws can be complex. If an organisation collects, stores, or uses any personal information, it should have a clear plan for implementing a data management process.

For AI companies, this is especially important. Educating teams about data protection laws and privacy principles from the get-go empowers an organisation to identify and address potential privacy risks during model development.

Proactive teams can then build features that allow users to control their data (like opt-out options) or minimise personal information used in training datasets. Trying to add these features later can be costly and time-consuming.

By prioritising data management practices from the start, an organisation can navigate the ever-changing regulatory landscape with confidence.

A proactive approach creates a chain reaction of positive outcomes.

First, an organisation gains the trust of its users, who value that their privacy is taken seriously. This strengthens its reputation for high standards and ethical practices, which in turn attracts new users and employees who share the organisation’s values, fostering a virtuous growth cycle.

The How

For the responsible collection, receipt, possession, storage, handling, and use of personal data, here are some key steps to consider:

  • Privacy policy: Develop a clear and accessible privacy policy that outlines data collection practices, usage limitations, and user rights. 


  • Data collection and usage: Clearly define the purpose for data collection and ensure it aligns with legitimate business objectives. Limit data collection to what is strictly necessary. Avoid collecting unnecessary personal information and list the intended recipients of the information. 


  • Data minimisation and accuracy: Collect only the minimum amount of data necessary to achieve the stated purpose, ensure it is used only for the intended purposes, and do not retain it for longer than required. Implement procedures to ensure data accuracy and provide users with mechanisms to rectify any errors.


  • Consent management: Obtain informed consent before collecting and processing personal data. This consent should be freely given, specific, informed, and unambiguous. Provide an option to decline sharing of information, and offer people the ability to withdraw their consent at any time (see the sketch after this list).


  • Grievance redressal: Establish a clear grievance redressal mechanism to address user concerns regarding data privacy, and resolve any discrepancies and grievances in a time-bound manner.


  • Data disclosure: Be transparent about data disclosure practices. Only disclose data to third parties with the user's consent or when legally required. If asked to share information with a Government agency, ensure there’s a request in writing stating the purpose of seeking such information and that the information will not be shared with any other person. 


  • Security measures: Implement robust security measures to protect data from unauthorised access or disclosure. Employ appropriate technical and organisational security measures to safeguard data against unauthorised access, alteration, or destruction. Regular risk assessments and security audits are crucial for identifying and mitigating potential vulnerabilities.
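To make the consent-management and data-minimisation steps above concrete, here is a minimal sketch in Python, assuming a simple in-memory consent store; every field name, purpose, and allowed-field mapping is illustrative rather than drawn from the DPDPA or the SPDI Rules.

```python
# A minimal sketch of a consent record and a data-minimisation filter.
# All field names, purposes, and the allowed-field mapping are
# illustrative; they are not drawn from the DPDPA or the SPDI Rules.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                      # the specific purpose consented to
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        # Users must be able to withdraw consent at any time.
        self.withdrawn_at = datetime.now(timezone.utc)

# Data minimisation: declare the fields each purpose actually needs
# and drop everything else before storage or model training.
ALLOWED_FIELDS = {"symptom_triage": {"age", "symptoms", "history"}}

def minimise(record: dict, purpose: str) -> dict:
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

consent = ConsentRecord("user-42", "symptom_triage",
                        granted_at=datetime.now(timezone.utc))
if consent.is_active():
    stored = minimise({"age": 34, "symptoms": "cough",
                       "phone_number": "xxxxxxxxxx"}, consent.purpose)
    # stored now contains only {"age": 34, "symptoms": "cough"}
```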

Alignment with Fair Information Practice Principles (FIPPs)

Consider the Fair Information Practice Principles (FIPPs) as a guiding framework for responsible data management. They emphasise core tenets like accountability, minimisation, quality and integrity, purpose specification and use limitation, security, and transparency. Aligning data management practices with these principles isn't just about compliance; it's a powerful statement about commitment to responsible data handling.

Datasheets for Enhanced Transparency

There is no standardised process for documenting datasets, which can lead to severe consequences in high-stakes domains. Datasheets for datasets address this by providing detailed information about a dataset's motivation, composition, collection process, recommended uses, and known limitations.

Datasheets facilitate better communication between dataset creators and consumers, and encourage the AI community to prioritise transparency, scrutiny, and accountability, mitigating potential biases within the data itself.
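As a rough illustration of how a datasheet might travel with a dataset in code, here is a minimal sketch; the fields loosely follow the section headings of Gebru et al.'s paper, and the example dataset is entirely hypothetical.

```python
# A minimal, machine-readable datasheet sketch. The fields loosely
# follow the sections of "Datasheets for Datasets" (Gebru et al.);
# the dataset described here is entirely hypothetical.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    motivation: str          # why was the dataset created?
    composition: str         # what do the instances represent?
    collection_process: str  # how was the data acquired, with what consent?
    recommended_uses: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)

sheet = Datasheet(
    name="clinic-notes-hi-en",
    motivation="Fine-tune a triage assistant for Hindi/English clinics.",
    composition="De-identified outpatient notes from partner clinics.",
    collection_process="Exported from EHRs with documented patient consent.",
    recommended_uses=["symptom classification"],
    limitations=["Urban clinics only; rural dialects are under-represented."],
)
```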

Part 2: AI Regulations – Building Guardrails for Trustworthy AI

AI presents a complex challenge for regulators: enabling innovation while mitigating potential risks. At the core, there needs to be a focus on regulations that support building open, transparent, and trustworthy AI. This has given rise to a dynamic and rapidly evolving regulatory landscape. While the Indian regulatory landscape is nascent, the European Union (EU) has taken a bold step by passing the EU AI Act, whose obligations will phase in over the coming years. This groundbreaking legislation is the world's first comprehensive legal framework for AI.

The EU AI Act: A Global Precedent

The EU AI Act categorises AI applications based on their risk level, with stringent requirements imposed on high-risk applications. For example, facial recognition technology or AI-powered recruitment tools will face stricter scrutiny compared to low-risk applications like chatbots.
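To illustrate this tiered structure, here is a small sketch; the four tier names match the Act's risk categories, but the example applications are indicative only, not legal classifications.

```python
# The EU AI Act's four risk tiers, with indicative examples only.
# Actual classification follows the Act's definitions and annexes.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before and after deployment"
    LIMITED = "transparency duties"
    MINIMAL = "largely unregulated"

EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "ai-powered recruitment": RiskTier.HIGH,
    "remote biometric identification": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
```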

Critically, compliance with the EU AI Act will be mandatory for any organisation offering AI models or systems within the European Union, regardless of their physical location. This signifies a global shift towards stricter regulations, prompting organisations worldwide to take notice.

Indian AI Advisory 

On March 1, 2024, the Indian government issued an advisory on AI aimed at containing the premature deployment of AI models, promoting algorithmic fairness, and curbing the spread of deepfakes. However, it was quickly withdrawn and replaced with a fresh advisory. The revised advisory settles the most contentious issue: there is no longer a requirement to obtain government permission to deploy a new ‘foundational model’ or ‘generative AI’ service in India. Instead, platforms need to be transparent about the ‘inherent fallibility and unreliability’ of the output their services generate, through a combination of labels and consent popups.
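As a sketch of how a platform might operationalise those labels and popups, consider the following; the notice text and field names are hypothetical, not wording prescribed by the advisory.

```python
# A minimal sketch of attaching a fallibility notice to model output and
# gating first use behind an acknowledgement. The notice text and field
# names are hypothetical, not prescribed by the advisory.
from dataclasses import dataclass

FALLIBILITY_NOTICE = (
    "This output is AI-generated and may be unreliable or inaccurate."
)

@dataclass
class LabelledOutput:
    text: str
    notice: str = FALLIBILITY_NOTICE

def respond(model_output: str, user_acknowledged: bool) -> LabelledOutput:
    if not user_acknowledged:
        # In a real product this would surface a consent popup instead.
        raise PermissionError("User must acknowledge the AI notice first.")
    return LabelledOutput(text=model_output)
```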

Recommendations for responsible AI development

We must prioritise the development of responsible AI practices. Here are key areas for focus:

Data and Model Development

As most LLM training datasets are built using widely available data from across the web, the systems reproduce biases and stereotypes that cause real-world harm.

There’s a need to focus on smaller, more specific models built on intentionally and locally curated datasets. These models can be fine-tuned with knowledge from subject matter experts, paving the way for strong norms around how data is collected and labelled.
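As one hedged sketch of this approach, assuming the Hugging Face transformers and datasets libraries and a locally curated JSONL file (the file path, model choice, and label schema below are all illustrative):

```python
# A minimal sketch of fine-tuning a small classifier on a locally
# curated dataset. Assumes each JSONL line has "text" and "label"
# fields; the file path and model choice are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("json", data_files="curated_local_dataset.jsonl")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

train_set = dataset["train"].map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-multilingual-cased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=train_set,
)
trainer.train()
```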

Transparency and Accountability

Advocate for increased transparency in AI tools. This means mechanisms for opening up AI tools to public inspection, modification, and remixing.

Encourage the development of industry-specific testing standards and the creation of robust third-party testing regimes to ensure responsible AI implementation.

Addressing Algorithmic Bias

Emphasise the need for demonstrably representative datasets and the establishment of clear rules for data integrity and sampling when building AI models.

Ensuring Trust and Safety

Build in mechanisms for additional safety features for high-risk use cases such as healthcare.

Combating Misinformation

Deploy effective, accessible, adaptable, and internationally interoperable technical tools, standards, or practices, including content authentication mechanisms such as watermarking or labelling, to combat the spread of deepfakes and misinformation.
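As an illustration of labelling in code, here is a minimal sketch of a signed provenance manifest, loosely in the spirit of content-authentication standards like C2PA; the key handling and manifest fields are simplified assumptions, and production systems use standardised manifests with certificate-backed signatures.

```python
# A minimal sketch of labelling generated content with a signed
# provenance manifest. The key and manifest fields are illustrative;
# real systems use certificate-backed signatures, not a shared secret.
import hashlib, hmac, json

SECRET_KEY = b"replace-with-a-managed-signing-key"  # hypothetical key

def label_content(content: bytes, generator: str) -> dict:
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"]
                == hashlib.sha256(content).hexdigest())
```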

These recommendations are just a starting point. The public and the startup community should view regulations as an opportunity, not a burden. By integrating regulatory compliance into the development process, organisations can gain a strategic advantage and become first movers.

Why Staying Ahead Matters

Navigating this evolving regulatory landscape can be daunting, but proactive engagement offers significant benefits:

  • Compliance advantage: Familiarity with existing and upcoming regulations allows an organisation to tailor its AI development practices for compliance, minimising the risk of future disruptions and penalties.


  • Future-proofing the business: A proactive approach demonstrates a commitment to responsible innovation, fostering trust with stakeholders and positioning the organisation for long-term success.


  • Shaping the future of AI: By actively engaging with the regulatory process, organisations can contribute to shaping the future of AI in India. Active participation in setting standards and industry best practices ensures regulations are effective, not excessive, fostering a thriving, responsible AI ecosystem.

Building an AI Ecosystem for India

India's rapidly growing internet user base positions it as a leader in shaping global AI practices. For India, this means fostering a robust domestic AI ecosystem that reflects its unique realities. Given the truly transformational nature of AI, we can’t afford to get it wrong. It means ensuring this technology benefits society, centring equity and access.

How should India chart its course?

  • Focus on equity: Ensure that AI benefits society by promoting inclusive governance incorporating the perspectives of diverse stakeholders, including underserved communities. By bringing together the expertise of government, industry, academia, and civil society, we can ensure AI solutions address the needs of everyone, not just the privileged few.


  • Building for Bharat: Develop AI solutions that cater to the unique needs and realities of India, including its multilingual landscape and the digital divide, where only a third of Indian women have ever used the internet. This requires fostering the development of datasets of speech, text, images, and videos for training large language models in different languages, making sure the datasets are truly representative of the challenges and diversity of the country.


  • Homegrown innovation: While leveraging existing large language models is valuable, we need to build a strong foundation that can keep up with the rapid developments, and at the same time cater to our unique needs and realities. 

The path forward is clear.

The meteoric rise of AI demands a new kind of governance – one built on co-creation, collaboration, and inclusivity. By embracing best practices for data management and responsible AI development, we can cultivate a future where AI isn't just transformative, but truly trustworthy.

———

Further reading

  1. Privacy For All: Mozilla’s Campaign to Think Globally, Act Locally on Privacy and AI

  2. Accelerating Progress Toward Trustworthy AI, Mozilla

  3. Datasheets for Datasets, Timnit Gebru et al.

  4. Checks and Balances in the Age of AI: It's Time for a Radical Rethink, by Urvashi Aneja

About the author

Apurba is a Tech Lawyer based in Bangalore, India with experience across rights-based organisations, consulting, and social impact. She’s interested in Tech Policy focussed on Responsible AI, Data Privacy, and Intellectual Property Laws. You can find more about her here.

Join the Community

People+ai is a non-profit, housed within the EkStep Foundation. Our work is designed around the belief that technology, especially ai, will cause paradigm shifts that can help India & its people reach their potential.
