Industry News | 9/5/2025
Apertus opens AI: Switzerland's transparent LLM challenges giants
Switzerland's Apertus introduces a fully open, auditable large language model developed by EPFL, ETH Zurich, and CSCS. It pledges complete transparency—from architecture to training data and development steps—making it a counterweight to opaque, corporate models. The move aims to advance sovereignty, ethics, and regulator-friendly AI while inviting researchers and regulators to review every facet.
Apertus: Switzerland's fully transparent AI
A few years ago, if you heard of a new language model, you likely pictured a closed black box buried in a corporate lab. Today, Switzerland is flipping that script. The Apertus project, born from a collaboration between EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS), aims to turn open rhetoric into practical, auditable AI. It’s not just another model release; it’s a bold statement that you can, in fact, see, trace, and scrutinize every rung of the ladder from inception to deployment.
What Apertus is
- A large language model developed in two sizes: an 8-billion-parameter version for broad accessibility, and a 70-billion-parameter version for more demanding tasks. That puts Apertus in the same neighborhood as widely known open options, while promising a different kind of openness.
- Trained on the Alps supercomputer in Lugano, using a development process documented in full, with the source code, training datasets, and intermediate checkpoints all publicly available.
- Released under a permissive open-source license, designed for education, research, and commercial use without heavy licensing fees. Access is facilitated through platforms like Hugging Face, with Swiss partners such as Swisscom providing support, and the Public AI network helping users connect with the project.
This isn’t openness as a marketing slogan. Think of Apertus as a blueprint: if you want to audit model weights, you should be able to. If you want to inspect the data pipeline and the safeguards that were applied, you should be able to do that, too. If something in the model behaves oddly, you should be able to trace why. That level of openness is rare in today’s AI ecosystem, where many systems stay stubbornly opaque behind NDAs and proprietary licenses.
Why transparency matters (and how it’s baked in)
The Swiss AI Initiative is positioning Apertus as a counterweight to the concentration of AI power in a handful of global corporations. The project mirrors a broader push toward digital sovereignty and compliance with local rules, while also addressing global expectations around accountability.
- Full development reproducibility: the whole pipeline is documented, not just a glossy readme. Researchers can reproduce experiments, audit code paths, and validate results against published benchmarks.
- Open datasets and training logs: the team shares the datasets and the steps used to curate, clean, and tokenize data, including intermediate checkpoints that let others inspect model evolution.
- Linguistic breadth as a design goal: the model was trained on a dataset of 15 trillion tokens spanning more than 1,000 languages, with roughly 40% non-English content. The aim is to avoid English-language dominance and to support underrepresented languages, including Swiss German and Romansh.
For developers and regulators alike, this isn’t just about having access to a model. It’s about the ability to audit, modify, adapt, and verify that the system aligns with expectations for safety, fairness, and data protection. In a world where data protection and transparency are increasingly top of mind, Apertus is designed to be auditable without sacrificing performance.
The technical footprint
- Two open-access sizes sit in the same competitive tier as other big open models, offering a balance between usability and capability.
- Training occurred on CSCS’s Alps supercomputer, with an emphasis on traceability and reproducibility alongside raw performance.
- A permissive license lowers barriers for education, research, and commercial exploration, encouraging institutions and startups to build on top of Apertus without onerous licensing.
The accessibility story isn’t just about being free; it’s about being usable. Platform partners like Hugging Face provide a user-friendly gateway, while Swisscom and the Public AI network help connect researchers, developers, and institutions with the model. In practical terms, that means a lab bench can become a playground for students, a research office can become a testing ground for policymakers, and a startup can prototype a multilingual assistant without fear of a sudden licensing dispute.
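For developers, distribution through Hugging Face suggests the model can, in principle, be loaded with the standard `transformers` API. The sketch below is a minimal illustration under that assumption; the repository ID `swiss-ai/Apertus-8B-Instruct` is a guess at the naming convention, so check the actual model card on Hugging Face before relying on it.

```python
"""Hypothetical sketch: running an Apertus checkpoint from Hugging Face.

The model ID below is an assumption, not a confirmed repository name.
Downloading the 8B weights requires tens of GB of disk and a capable GPU.
"""

MODEL_ID = "swiss-ai/Apertus-8B-Instruct"  # assumed repo name; verify on the hub


def load_apertus(model_id: str = MODEL_ID):
    """Fetch tokenizer and weights via the standard transformers API."""
    # Imports are deferred so this module can be inspected without
    # transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_apertus()
    # Multilingual breadth is a design goal, so a Swiss German greeting
    # is a reasonable smoke test.
    prompt = "Grüezi! Describe the Alps in one sentence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the license is permissive, the same pattern would apply to fine-tuning or local deployment; the only variable is which checkpoint name the project actually publishes.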
Why this matters for regulated sectors
Apertus isn’t just a tech stunt. Its design choices respond to real-world needs in finance, healthcare, and government where trust, compliance, and explainability aren’t nice-to-haves—they’re prerequisites.
- Regulation-ready data sourcing: the training corpus relies on publicly available data, with safeguards to remove personal information and respect opt-out requests from websites.
- Alignment with data protection norms: the project explicitly acknowledges Swiss data protection laws and the transparency expectations of the EU AI Act, aiming for a model that can operate within regulated environments.
- A potential blueprint for domestic AI ecosystems: by keeping critical AI infrastructure within Swiss and European circles, Apertus addresses concerns about digital sovereignty and strategic dependencies.
Switzerland’s financial sector has been watching closely. The Swiss Bankers Association has signaled interest in a domestic LLM that can adhere to strict banking secrecy and local data protection rules. Apertus could be a proving ground for AI systems that must operate under tight regulatory scrutiny while still delivering real value in banking, insurance, and other regulated industries.
Governance, licensing, and community
Transparency is not just a feature; it’s the governance model. The Apertus team emphasizes open development, reproducibility, and broad access as foundations for trust. That approach invites feedback, forks, and community-driven improvements—handing control back to the people who rely on the technology rather than the firms that own it.
- Documentation as a living resource, not a one-off README.
- Shared experiments and checkpoints that let others verify progress and compare results on a fair, level field.
- A vision of AI as a public utility, akin to open highways and universal electricity, rather than a private, profit-driven ecosystem.
What’s next
The Apertus roadmap doesn’t end with two model sizes. The Swiss AI Initiative hints at future versions with more specialized capabilities for fields like law, health, and climate science. The overarching aim is to develop a public infrastructure for AI innovation—one that’s multilingual, auditable, and aligned with European data protections and ethics.
But here’s the thing: building a community takes time, trust, and a steady cadence of updates. If Apertus can sustain transparent, reproducible development, it could become a reference point for how large-scale AI projects are born, governed, and used in a responsible way. It’s not just about a single model; it’s about a framework for open, accountable AI that others can build on, adapt, and improve over years rather than quarters.
The big picture
Apertus isn’t just another model release. It’s an experiment in what AI can look like when openness is treated as a feature rather than a constraint. If successful, it could nudge the entire industry—especially private incumbents—to rethink transparency, governance, and engagement with regulators, academia, and the public.