The open source community is once again doing what it does best: disrupting the status quo. From DeepSeek-R1 to Meta's Llama to Ai2's OLMo 2 32B, open models are now matching or outperforming some proprietary AI alternatives at a fraction of the cost. As a result, Hugging Face recently made the case to the White House that open-source development may be America's strongest competitive advantage in AI. This sentiment was echoed in a recent study by McKinsey, Mozilla, and the Patrick J. McGovern Foundation, which found that 75% of surveyed organizations plan to increase their use of open-source AI.
In the midst of these headline-grabbing developments, I connected with Stephen Hood, Principal Lead for Open Source AI at Mozilla, to learn more about open-source AI and Mozilla's role in this movement. Through initiatives like Robust Open Online Safety Tools (ROOST) and the Builders Accelerator, Mozilla is bringing decades of open-web expertise to ensure the future of AI is built by the people. It might be their most important battle yet.
Why Should We Care About Open-Source AI?
Stephen drew a striking parallel between today's AI landscape and a pivotal moment in tech history: the browser wars of the late '90s and early 2000s. "Microsoft tried to turn the web into a feature of Windows and very nearly succeeded," Stephen explains. "It was really open source that prevented that." One company could very likely have controlled our entire online experience. Instead, open alternatives like Mozilla Firefox rose up and championed user choice.
But history seems to be repeating itself with AI. "We feel a certain sense of déjà vu," said Stephen. "This is another transformative technology change that is currently on a path to being controlled by a few for-profit corporations."
Open-source AI is more than just a technical approach — it's a philosophy about who gets to shape our digital future. The collaborative nature of open source creates a powerful innovation cycle where ideas and improvements compound. Democratization of access, as Stephen hit on above, means researchers, startups, and developers worldwide can build upon these technologies, rather than leaving that power in the hands of a few companies.
Open source also offers tremendous educational value. People can study real systems rather than simplified examples, accelerating skill development for the next generation of AI builders. And when communities can adapt models for underrepresented languages and needs that might not be commercial priorities for large companies, AI becomes more inclusive and equitable.
The list goes on and on, but here’s the bottom line: when AI is open, the code gets better, the community gets stronger, and the technology serves more people.
“History shows us the best knowledge systems, like science and open-source software, work because many different people improve them. When we build AI the same way, we get better results, and we’re more likely to be able to break the concentration of economic power that we’re heading towards with frontier models. Diverse contributors solve problems for their own communities, creating systems that work across contexts and can distribute value broadly. Collective input isn’t just theoretical — it’s practical.” - Divya Siddarth, Executive Director of the Collective Intelligence Project
The Open Source Identity Crisis
Perhaps the most highly debated question in open-source AI right now is the most basic: what exactly is “open-source AI?”
"Traditionally, it was a little simpler to draw a distinction about what is and is not open source because it was about software," Stephen explains. "Open-source AI is different because it's not just about source code anymore, it's also about the AI models themselves." Beyond source code, there are weights, parameters, training data, and more — all important ingredients in AI systems.
As the industry grappled with these questions, established institutions stepped forward to create official guidelines. The Open Source Initiative (OSI), the organization that wrote the most widely accepted standard for open-source software, spent nearly two years developing a definition for open-source AI. They established criteria that require transparent model weights and architecture…but not open training data. Rather than full disclosure of training data, OSI requires that the data be described thoroughly enough that a similar dataset could be recreated. Their approach balances transparency with practical constraints like copyright and medical privacy; obtaining full rights to a massive training dataset, for example, is often nearly impossible.
But some open source purists – who believe that open source should mean everything is open – weren't happy. They argued that without open data, the "open source" label loses its meaning. The disagreement split the open-source community along ideological lines. The Allen Institute for Artificial Intelligence (Ai2), for example, set a different benchmark for what transparency can look like — all data, code, weights, and details are freely available.
The definition isn't perfect or permanent. But at the very least, having a baseline definition gives the community something to rally around — or argue about — while the field continues its rapid evolution.
Mozilla’s Open Sourcery
Mozilla's magic lies in its transparency and collaboration. Take Robust Open Online Safety Tools (ROOST), a community effort led in partnership by Mozilla, Google, OpenAI, Project Liberty, and more. ROOST is dedicated to making the internet safer by providing free, open-source tools that help detect and prevent harmful online content. These tools are especially crucial for smaller organizations like startups, nonprofits, and governments that may not have access to advanced safety resources.
Plus, Mozilla's Llamafile is an open-source tool that packages an LLM and everything needed to run it into a single executable file that runs directly on your computer. Perfect for organizations in low-connectivity areas or those handling sensitive data, it makes open LLMs usable on everyday consumer hardware, without any specialized knowledge or skill.
Beyond tech tools, Mozilla supports open AI development through its Builders Accelerator, which funds organizations and projects filling technical gaps in the open AI ecosystem. One standout is Ersilia, a nonprofit using AI to accelerate biomedical research, particularly for neglected diseases in the Global South.
The Prescription for Global Health
Ersilia was founded on the belief that AI and data science should benefit all countries, not just those with well-funded research institutions. Their tool, the Ersilia Model Hub, is the largest collection of ready-to-use, open-source AI/ML models for infectious and neglected disease research. By focusing on diseases that disproportionately affect the Global South, Ersilia is helping researchers in resource-limited settings gain access to AI tools that can speed up experiments and reduce the cost of developing new drugs.
This open approach has created a flourishing ecosystem where researchers across the Global South are now fine-tuning, extending, and deploying these models in ways the founders never imagined. In one example, a researcher from South Africa built an AI model to predict antimalarial activity in molecules after attending an Ersilia workshop. When she shared it through the Ersilia Model Hub, another scientist in Cameroon discovered and applied it to his medicinal plant research — finding promising hits that advanced his work. Open-source AI enabled knowledge to flow across the continent, connecting researchers who might never have collaborated otherwise.
"The future of scientific innovation lies not in isolation but in the powerful combination of cutting-edge technology and open scientific collaboration." – Miquel Duran-Frigola, Chief Scientific Officer and Co-Founder of Ersilia, in his op-ed Community, Code, and Chemistry
What Nonprofits Can Learn from Mozilla
In his article "The Hyperspace Bypass," Stephen called on AI builders to prevent centralized control, protect privacy, augment humans rather than replace them, and fight bias — concerns that resonate deeply with nonprofits. His advice for organizations navigating this terrain is straightforward: "Decide what values matter to you. You have to actually decide where you're going to stand and what things you're going to prioritize." (Shameless plug: Fast Forward just released a Nonprofit AI Policy Builder that can help you put your values into writing.)
These choices may involve trade-offs. "You might decide, even though there’s a model that more consistently delivers the functionality you need today, that because of your concerns about user privacy, you're not going to use it," Stephen suggests. Instead, you might "commit to using local or open models... and have faith that we'll get there as an open source community in the longer term."
I’ve always admired how Mozilla, a nonprofit-backed tech company, has remained influential in rooms filled with tech giants. "It's all about the power of community," Stephen reveals. "What has always allowed Mozilla to punch above its weight has been the community of people who've shared our values and mission."
Building such a community "takes work... it requires trust that must be earned and clarity of purpose." For organizations seeking to shape AI's future, this community-centered approach offers a powerful blueprint.
Quick Bytes
Other Sector Stories
Fast Forward's new Nonprofit AI Policy Builder takes the guesswork out of AI governance. In just five simple steps, organizations can transform their values into practical guardrails for responsible AI implementation. Give it a try here.
Remember the agentic AI edition from January? CareerVillage.org and Digital Green are back with updates on their groundbreaking work. A recent Diginomica article reveals the behind-the-scenes strategies powering their success with this emerging tech.
Global health is getting a strong dose of GenAI. A report from the Stanford Center for Digital Health shows AI-powered health interventions delivering early wins in low- and middle-income countries. The research spotlights AI-powered nonprofits like Jacaranda Health and Noora Health, whose innovative AI approaches are already showing real-world impact.
Fast Company released its list of the most innovative nonprofits for 2025. The rankings feature Ai2, USAFacts, GiveDirectly, and the International Rescue Committee, all leveraging artificial intelligence to amplify their impact.
APN Funding News
AI for Good and Tech to the Rescue announced the AI for Good Impact Awards Programme, which celebrates impactful AI solutions that contribute to global progress. Submissions are due May 15.
MIT Solve is still accepting solutions for their 2025 Global Challenges and Indigenous Communities Fellowship. Apply by April 17.
Let’s Talk
I am living and breathing AI for humanity these days. If you are too, let’s talk!