Here’s what we found:

– Apertus isn’t a “Swiss ChatGPT” — it’s a foundation model designed to be adapted, not a consumer chatbot.
– It’s true that it lags behind proprietary models like GPT-4 or Claude in scale and performance. But comparing it to AI models from big US companies is like “comparing a small farmer in Valais to a massive beef producer”.
– It stands out for its ethical design, data transparency, and alignment with the EU AI Act.
– Some claims (like supporting “1,800 languages”) are misleading — yes, it handles many, but also makes trivial mistakes.
– Its ambition is global, not limited to Switzerland, though a “Swiss values charter” anchors certain principles.
– Future funding, compute resources, and scaling remain open questions.

If you’re curious to dig deeper, you can read our full fact-check here: https://www.swissinfo.ch/eng/swiss-ai/fact-and-fiction-about-the-swiss-ai-model-apertus/90110034

What's true and what's false on Switzerland's LLM Apertus

Posted by SaraIbr

3 Comments

  1. Sorry, but NOW they arrive with a communication plan? They released it on some random day, let journalists write whatever they could come up with on a subject they aren’t used to, and then react something like two months later?

    Couldn’t they have asked Apertus to generate an action plan and content pieces to broadcast on launch day? Or coordinated with Swiss hosting companies to offer Apertus as one of the available LLMs?

  2. NeighborhoodLoud4884

    A small farmer in Valais can produce high-quality beef that people may prefer over a big beef producer’s. But who prefers an LLM with significantly worse results?

    In the real world, no one gives a shit about “ethical training data”. The only thing that counts is the results.

  3. I understand the thinking behind it, but not how it made it into production. People want the best model, and China (with their great open-source Wan) and the US will probably never go back to training only on ethically sourced data, so they will lead and their models will be the ones used.

    While the idea is interesting, it would have been better to build specific agent teams, LLMs, and visual models in collaboration with real workers, so the quality is very good but also tailored to specific uses. Even if that means first training on a wide dataset and then restricting it to the ethical dataset.

    People and companies don’t use AI thinking “oh no, the environment”; they have a task and want the best result, imo.

    For example, an image-generation model based on the work of artists who would be credited, and the same for music and LLMs (e.g. in collaboration with SSR/RTS, universities, etc.) to avoid data poisoning.

    Just my thoughts though; if I’m wrong or naive on some parts, I’d gladly hear other opinions or corrections.