Posts

Tutorial: Finetuning Language Models

This notebook lets you try out finetuning the munin-7b-alpha model, or indeed any other generative model.

We'll be finetuning the model on a Danish-translated instruction-tuning dataset using the QLoRA method, as sketched below.
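To give a feel for what the notebook does, here is a minimal QLoRA sketch in Python: the base model is loaded in 4-bit precision and small LoRA adapters are attached as the only trainable parameters. The Hugging Face repo id and the LoRA hyperparameters are illustrative assumptions, not necessarily the notebook's exact settings.

```python
# Minimal QLoRA sketch: 4-bit base model + trainable LoRA adapters.
# Repo id and hyperparameters are illustrative, not the notebook's exact values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "danish-foundation-models/munin-7b-alpha"  # assumed Hub repo id

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Freeze the quantised weights and attach small trainable LoRA adapters
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train
```

From there, the adapters can be trained on the instruction dataset with a standard Hugging Face Trainer or TRL's SFTTrainer.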

Tutorial: Merging Language Models

Model merging is a relatively new method that allows one to combine the weights of different language models into a single model.

In this notebook you'll get to try this out and interact with the merged model to see the results!
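As a taste of the idea, here is a minimal sketch of the simplest kind of merge: a linear interpolation ("model soup") of two models that share the Mistral-7B architecture. The model names and the interpolation weight are illustrative assumptions; the notebook itself may use other merge methods or a dedicated tool such as mergekit.

```python
# Minimal linear merge of two same-architecture causal LMs.
# Model names and the interpolation weight alpha are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained(
    "danish-foundation-models/munin-7b-alpha", torch_dtype=torch.bfloat16
)
model_b = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.bfloat16
)

alpha = 0.5  # weight given to model_a; (1 - alpha) goes to model_b
state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Interpolate every parameter tensor between the two parent models
merged_state = {
    name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
    for name in state_a
}
model_a.load_state_dict(merged_state)
model_a.save_pretrained("munin-mistral-instruct-linear-merge")
```

Note that both parents must have identical architectures and parameter names for this element-wise interpolation to be valid.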

Releasing Munin 7B Alpha - A Danish LLM

We are excited to announce the release of the first model from the Danish Foundation Models project, nicknamed Munin 7B Alpha. This model marks the beginning of our research into Danish Large Language Models (LLMs): it was created by continuing the pre-training of the Mistral-7B-v0.1 model on the Danish Gigaword dataset, which has also been instrumental in training various Danish BERT-style models.
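For a quick first impression, the model can be loaded with Hugging Face transformers like any other causal LM. This is a minimal sketch, assuming the model is published on the Hub under the repo id shown below; as a base model it continues text rather than following instructions.

```python
# Minimal sketch: load the model and let it continue a Danish prompt.
# The repo id is an assumption; check the project's model page for the exact name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "danish-foundation-models/munin-7b-alpha"  # assumed Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Base models complete text, so give it the start of a Danish sentence
inputs = tokenizer("Danmark er et land i", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```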