Dec 4, 2025

Democratising AI Resources for Real-World Impact

Written by:

The People+ai Team

Across more than 500 use cases we have analyzed with partners, the same constraints appear repeatedly. In this blog, we explore how to address these constraints.

Across the Global South, AI deployments consistently stall not because of inadequate technology, but because the infrastructure conditions for adoption are missing. No single actor in the AI value chain possesses all the capabilities required to scale.

What is missing is not more pilots but the common spine that allows pilots to become systems. This means predictable access to compute, data that is ready for AI work, safety processes that scale with deployment, and governance structures that multiple actors can rely on. Building this shared infrastructure is what democratizing AI resources actually means.

Three ideas anchor this agenda.

Using the Use Case Adoption Framework to Surface Infrastructure Gaps

AI does not fail at scale because a model falls short on accuracy by a few percentage points. Deployments stall when coordination, governance, and operating capacity are missing, and when there is no common spine of data, compute, safety processes, and institutional ownership to carry a solution from one site to many.

The Use Case Adoption Framework provides a practical way to identify and strengthen this spine. It defines a use case as a repeatable, real-world application of AI tied to a clear persona and purpose, embedded in existing workflows, and designed to scale without losing reliability or trust.

The framework then breaks each use case into three components. First, the persona, purpose, and application: who is being served and what problem is being solved. Second, the infrastructure requirements: the data, compute, language resources, benchmarks, and governance needed for reliable operation. Third, the scaling conditions: what is needed to move from zero to one, and from one to population scale.
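To make the three components concrete, here is a minimal sketch of what a use case record could look like in code. The class and field names are our own illustration for this post; the framework itself does not prescribe a schema.

```python
from dataclasses import dataclass, field

# Illustrative record for one use case under the framework. The class and
# field names are our own sketch; the framework does not prescribe a schema.
@dataclass
class UseCase:
    # 1. Persona, purpose, and application
    persona: str                 # who is being served
    purpose: str                 # the problem being solved
    application: str             # the concrete AI application

    # 2. Infrastructure requirements for reliable operation
    data_needs: list[str] = field(default_factory=list)
    compute_needs: list[str] = field(default_factory=list)
    language_needs: list[str] = field(default_factory=list)
    governance_needs: list[str] = field(default_factory=list)

    # 3. Conditions for moving from zero to one, and to population scale
    scale_conditions: list[str] = field(default_factory=list)
    blockers: list[str] = field(default_factory=list)

# A hypothetical agriculture use case, for illustration only.
advisory = UseCase(
    persona="smallholder farmer",
    purpose="timely, localized crop advisory",
    application="voice-first advisory assistant",
    data_needs=["district weather data", "crop calendars"],
    compute_needs=["affordable inference"],
    language_needs=["Kannada speech-to-text"],
    blockers=["local compute unaffordable for inference workloads"],
)
```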

Across sectors like agriculture, education, migration, and government services, the framework reveals a consistent pattern. Very different use cases rely on the same horizontal capabilities. These include accessible and affordable compute, datasets that are structured and documented for AI use, multilingual and voice capabilities, AI safety and alignment tooling, and human talent with the right skills. Because these capabilities are today concentrated in a few hands, they must be democratized to ensure an inclusive AI ecosystem.

By systematically mapping use cases against these horizontal requirements, the framework becomes a diagnostic tool. It surfaces gaps when multiple use cases report that they cannot expand because local compute is unaffordable for inference workloads, or because district-level scheme data is not in AI-ready form.

It helps prioritize investments by identifying which constraints recur across use cases, making them strong candidates for shared infrastructure like regional data environments or subsidized compute windows for public-interest workloads. It also anchors safety by making measurable public benefit, responsible governance, and user trust explicit and traceable across deployments.

In practice, this means collecting and classifying a portfolio of high-leverage use cases from India and the Global South, using the framework to systematically document their infrastructure needs and blockers, and aggregating these findings to inform the design of shared data and compute infrastructure.
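Once a portfolio is documented this way, surfacing the recurring constraints is a short computation. The sketch below builds on the illustrative UseCase record above; the threshold and the example counts are assumptions, not real portfolio data.

```python
from collections import Counter

def recurring_constraints(portfolio: list[UseCase], min_count: int = 2):
    """Count blockers across the portfolio; constraints that recur across
    several unrelated use cases are the strongest candidates for shared
    infrastructure investment."""
    counts = Counter(b for uc in portfolio for b in uc.blockers)
    return [(blocker, n) for blocker, n in counts.most_common() if n >= min_count]

# Hypothetical output for a documented portfolio (counts are invented):
# [("local compute unaffordable for inference workloads", 14),
#  ("district scheme data not in AI-ready form", 9)]
```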

This creates a feedback loop where every new deployment strengthens the common spine for the next one, instead of remaining a bespoke pilot.

Designing Pathways to AI-Ready Data

AI-ready data is not a fixed list of datasets that every country must open. It is a design choice. Each country, region, or group of countries decides for itself which data should be prepared for AI use, for what purpose, and for whom. The same sector will make different choices in different places. One region may prioritize agriculture data for smallholder farmers. Another may focus on urban services or support for micro, small, and medium enterprises.

What makes data AI-ready is simply that once these priorities are clear, the relevant datasets are organized, structured, documented, and maintained so that AI systems can reliably work with them in that local environment. This shifts the conversation from opening all data by default to making the right data AI-ready for clear public and economic goals.

This approach requires a deliberate design process. First, define priority personas and use cases based on local needs and opportunities. Next, identify the minimum datasets these use cases depend on. Finally, agree on basic standards for quality, documentation, and access within that context.
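To sketch where this process lands, the output for a single priority dataset could be a small, agreed metadata record plus a basic readiness check. All the fields and values below are illustrative assumptions, not an official standard.

```python
# Illustrative metadata record for one priority dataset. The fields are our
# own assumptions about what the agreed standards could cover.
scheme_enrollments = {
    "name": "district_scheme_enrollments",
    "purpose": "grounding a citizen-facing benefits assistant",
    "personas": ["district welfare officer", "citizen"],
    "format": "CSV with a published column dictionary",
    "update_cadence": "monthly",
    "languages": ["hi", "kn", "en"],
    "access": "rate-limited API; aggregates only, no personal records",
    "steward": "state data office",
}

def is_ai_ready(meta: dict) -> bool:
    # Minimal check: every field the agreed standard requires is filled in.
    required = ("purpose", "format", "update_cadence", "access", "steward")
    return all(meta.get(k) for k in required)

print(is_ai_ready(scheme_enrollments))  # True
```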

This concept can also inform the design of a data embassy, a dedicated local or regional data environment, if a country chooses to create one. The key principle is that data infrastructure should be built around actual deployment needs, not as abstract infrastructure hoping to find a purpose later.

Making Compute Affordable and Diverse

Compute is the most expensive and unevenly distributed AI resource globally. A small set of countries and hyperscale cloud providers host most state-of-the-art GPU capacity, while many countries exist in what can only be described as compute deserts with no public cloud regions at all. This concentration creates a significant barrier to AI adoption in regions that need it most.

India's experience under the IndiaAI Mission demonstrates both the opportunity and the remaining challenges. The mission is deploying tens of thousands of GPUs and making them available at heavily subsidized rates of around ₹65 per GPU-hour. This is significantly lower than prevailing commercial prices and among the cheapest rates globally. It shows that public investment can meaningfully reshape the compute cost curve and create new possibilities for innovation.
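A back-of-the-envelope comparison shows what this rate means for a small team. The ₹65 figure is the subsidized rate cited above; the commercial rate and the workload size below are hypothetical placeholders, not quoted prices.

```python
# Back-of-the-envelope cost comparison. The subsidized rate is the IndiaAI
# figure cited above; the commercial rate is an assumed placeholder.
SUBSIDIZED_RATE = 65    # ₹ per GPU-hour (IndiaAI Mission)
COMMERCIAL_RATE = 250   # ₹ per GPU-hour (assumed, for illustration only)

gpus, hours = 8, 72       # a hypothetical small fine-tuning run: 8 GPUs, 3 days
gpu_hours = gpus * hours  # 576 GPU-hours

print(f"subsidized: ₹{gpu_hours * SUBSIDIZED_RATE:,}")  # ₹37,440
print(f"commercial: ₹{gpu_hours * COMMERCIAL_RATE:,}")  # ₹144,000
```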

However, several challenges remain that require attention.

The first is the funding model and targeting. Subsidized rates need clear prioritization frameworks to determine which workloads, sectors, and institutions get first access. This is particularly important for protecting research, public-interest applications, and small innovators from being crowded out by large commercial demand. The Democratising AI Resources Fund and similar blended finance instruments can be designed to preferentially underwrite such workloads. Funders can also earmark GPU capacity or budgets for specific domains, for example by committing a defined pool of funding for healthcare use cases and tracking whether that capacity is actually being used for them.

The second challenge is making diverse formats of compute accessible. Different use cases have different requirements. Some need training capacity while others need inference. Some require batch processing while others need real-time responsiveness. Some work in the cloud while others need edge deployment. Some benefit from GPUs while others require specialized accelerators. The task is to make this diversity visible and usable to small teams through clear service level agreements, transparent pricing, and simple on-ramps for startups, universities, and state departments. This builds on best practices emerging from India's leading AI builders who have experience with scalable compute access and developer-ready tooling.

The third challenge is integrating compute with other infrastructure components. Compute should not be offered as a bare-metal service. The vision already includes pre-configured workbenches where a builder can discover the right combination of datasets, models, tools, and compute for a given use case through a single interface. Linking compute nodes to regional data embassies and use-case templates transforms compute from a static asset into an active enabler of adoption.
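As a rough sketch of that single interface, a workbench lookup could resolve a use case to a pre-configured bundle in one call. The catalog entries and the function below are entirely illustrative; no such API is specified here.

```python
# Entirely illustrative: a tiny catalog plus a one-call lookup standing in
# for the "single interface" workbench idea described above.
CATALOG = {
    "crop advisory": {
        "datasets": ["district weather", "crop calendars"],
        "models": ["indic speech-to-text", "small instruction-tuned LLM"],
        "compute": "shared inference pool",
    },
    "benefits assistant": {
        "datasets": ["scheme enrollments"],
        "models": ["small instruction-tuned LLM"],
        "compute": "shared inference pool",
    },
}

def workbench(use_case: str) -> dict | None:
    """Return the pre-configured bundle for a use case, if one exists."""
    return CATALOG.get(use_case)

print(workbench("crop advisory"))
```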

The fourth challenge is international cooperation and collective bargaining. Working together, Global South countries can improve access and pricing for advanced hardware.

Democratizing compute is not only about deploying more GPUs. It is about creating predictable, affordable, and well-governed access to the right kind of compute for the right workloads.

Moving from Concepts to Action

The path forward rests on three pillars. First, using use cases as the organizing unit to keep the focus on real problems, measurable public benefit, and safe adoption. Second, creating pathways to AI-ready data through regional data embassies that ensure AI systems are grounded in local realities and usable by diverse communities. Third, democratizing compute to make high-end computational resources a shared, governable asset rather than a bottleneck.

This is about creating the conditions where AI can actually work for people at scale. Where a healthcare application tested in one district can expand to twenty without rebuilding everything from scratch. Where a government service reaches citizens in their own languages with models that understand local context. Where small teams and large institutions alike can access the resources they need to solve real problems.

Want to be part of shaping this agenda? We are actively seeking partners across government, civil society, and the private sector to pilot the Use Case Adoption Framework and co-design the AI infrastructure the Global South needs. Reach out to us at hello@peopleplus.ai to join the conversation.

From India To The World

Our work is designed around the belief that technology, especially AI, will cause paradigm shifts that can help people reach their potential. Join us in building AI systems that work for billions.

An EkStep Foundation Initiative
