Artificial intelligence (AI) is shaping the way we live, work, and connect with the world. From chatbots to image generators, AI is transforming our online experiences. But this change raises serious questions: Who controls the technology behind these AI systems? And how can we ensure that everyone, not just the traditional tech giants, can equitably access and contribute to this powerful tool?
To explore these questions, Mozilla commissioned two studies that take a deep dive into the challenges of access and competition in AI: “Access to Closed Foundation Models for External Researchers” (written by the data rights agency AWO) and “Stopping Big Tech From Becoming Big AI” (written by the Open Markets Institute). Together, these reports show how AI is being built, who controls it, and what changes are needed to ensure a fair and open AI ecosystem.
Why researcher access matters
“Access to Closed Foundation Models for External Researchers,” written by AWO’s Esme Harrington and Dr. Mathias Vermeulen, addresses a pressing issue: independent researchers need better conditions to access and study the AI models that large companies develop. Foundation models, the core technology behind many AI applications, are controlled primarily by a handful of large companies, which decide who can study or use them.
What are the access issues?
- Restricted access: Companies like OpenAI, Google, and others act as gatekeepers. They decide who can study their models, and access often goes to researchers whose work aligns with company priorities, leaving independent public-interest research out in the cold.
- High costs: Even when access is granted, it often comes with a price tag that puts it beyond the reach of smaller or less well-funded teams.
- Lack of transparency: These companies don’t always disclose how their models are updated or adjusted, making it nearly impossible for researchers to reproduce studies or fully understand the technology.
- Legal risks: Researchers who scrutinize these models may face legal threats if their work exposes flaws or vulnerabilities in AI systems.
The study argues that companies need to offer more affordable and transparent access to improve AI research. It also calls on governments to provide legal protections for researchers, especially when they are acting in the public interest by investigating potential risks.
Access to Closed Foundation Models for External Researchers
Read the paper
The AI Race: Is Big Tech Stifling Innovation?
The second study, written by Max von Thun and Daniel Hanley of the Open Markets Institute, takes a closer look at the competitive landscape for AI. A handful of technology giants, including Microsoft, Google, Amazon, Meta, and Apple, are building vast ecosystems to control different parts of the AI value chain, and a small number of companies control most of the key resources needed to develop advanced AI, including computing power, data, and cloud infrastructure. The result? Smaller businesses and independent innovators are locked out of the competition from the start.
What is happening in the AI market?
- Market concentration: A small number of companies control the key inputs to and distribution of AI. They own the data, computing power, and infrastructure everyone else needs to develop AI.
- Anti-competitive alliances: These large companies acquire or partner with smaller AI startups in ways that often circumvent traditional competition controls, deterring those startups from challenging Big Tech and preventing other companies from competing on a level playing field.
- Gatekeeper power: Because Big Tech controls critical infrastructure such as cloud services and app stores, it can set unfair terms for smaller competitors, charging higher prices or prioritizing its own products over others.
The study calls for strong action from governments and regulators to avoid a repeat of the market concentration seen in digital markets over the past two decades. The goal is a level playing field where smaller businesses can compete, innovate, and offer consumers more choice. That means enforcing rules that prevent tech giants from using their platforms to give their own AI products an unfair advantage, and making critical resources such as computing power and data more accessible to everyone, not just big technology companies.
Stopping Big Tech From Becoming Big AI
Read the paper
Why is this important?
AI has the potential to bring great benefits to society, but only if it is developed in an open, fair and accountable manner. Mozilla believes that a few powerful companies should not decide the future of AI. Instead, we need a diverse and vibrant ecosystem that includes open source, public, nonprofit, and private actors, where public interest research thrives, and competition fosters innovation and choice.
The findings highlight the need for change. Improving researcher access to foundation models and addressing the growing concentration of power in AI will help ensure that AI develops in ways that benefit all of us, not just big tech companies.
Mozilla is committed to advocating for a more transparent and competitive AI environment. This research is an important step toward making that vision a reality.