Not so long ago there was a post about the joint effort of ETH Zurich and EPFL to train the first Swiss LLM. The model has now been released under the name Apertus and can be downloaded here.

    The model is named Apertus – Latin for “open” – highlighting its distinctive feature: the entire development process, including its architecture, model weights, and training data and recipes, is openly accessible and fully documented.
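Since the weights are openly released, the model can presumably be run locally with standard tooling. Below is a minimal sketch using Hugging Face `transformers`; note that the model id `swiss-ai/Apertus-8B-Instruct-2509` is an assumption about where the checkpoint might live, not something confirmed by the post.

```python
# Minimal sketch: run an openly released instruct checkpoint locally.
# ASSUMPTION: the Hugging Face model id below is hypothetical; check the
# actual release page for the real checkpoint name.
MODEL_ID = "swiss-ai/Apertus-8B-Instruct-2509"


def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a prompt in the chat-message format instruct models expect."""
    return [{"role": "user", "content": user_prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply (heavy: downloads the weights)."""
    # Imported lazily so the lightweight helper above works without torch.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("What does the name 'Apertus' mean?"))
```

An 8B model needs roughly 16 GB of memory in bf16; smaller quantized variants, if published, would be the practical choice on consumer hardware.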

    * I am not affiliated with the project itself, just a huge supporter of open source and an AI/tech person by vocation.

    Swiss made LLM is here
    by u/orange_poetry in r/Switzerland





    10 Comments

    1. It’s a bit too friendly for my liking. And if you prompt it in English but the search results it gets are in German, the AI will keep answering in that language, despite the fact that the original prompt and even the web interface are set to English.

    2. Their benchmark puts it on par with Llama 3.1, so not really revolutionary, it seems, apart from the fact that it was trained only on open data.

    3. Very funny to me that they haven’t made it available for testing; the wider public – myself included – has no idea how to run a local LLM. Supposedly some Swisscom business customers get access, but there’s nothing on their website…

    4. I’ve added it as an available model to our private LLM platform (which includes RAG and MCP) and made it available to our clients for testing, but unfortunately it only handles simple requests via the API (as of now).

      I’m hoping the API will become a bit more sophisticated, since our Swiss clients would prefer a homegrown solution (in addition to Infomaniak).

    5. Why is the technical report not fully public? What’s “Accuracy” supposed to mean? Potatoes? What are the scores on common benchmarks like MMLU etc.?

    6. It’s a start. It could be useful for cases where confidentiality is a must, like healthcare. But for that it needs to get on par with other open-source models like Qwen or DeepSeek.