AI4DH Workshops and Lectures 2026: Registration Now Open
In January and February 2026, three seminars and workshops will be offered for digital humanities and social sciences scholars who want to deepen their knowledge of AI and apply it in their research.
All seminars and workshops are free of charge.
Title: Large language models for analyses in digital humanities
Description: Large language models (LLMs) are currently redefining methodological approaches in many scientific areas, including digital humanities fields such as linguistics, social sciences, literary studies, historiography, journalism, political science, and folkloristics. Harnessing their immense capacity to digest and analyse large amounts of text, LLMs can provide insights into complex phenomena. LLMs are pretrained on huge text corpora by predicting the next token; this does not make them immune to hallucinations and biases, so their use requires a human-in-the-loop approach, careful methodological design, and strict qualitative and quantitative evaluation. We explain the inner workings of LLMs to the extent needed to understand their performance. In the context of analysing complex phenomena, we discuss the two most frequent adaptations of LLMs, fine-tuning and prompt engineering, using examples from social media analysis and folkloristics. We emphasise the need to establish trust in LLM performance when analysing complex phenomena and outline a suitable evaluation methodology.
- Date and time: 26 January 2026, 1:00 to 3:00 PM
- Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
- Instructor: Prof Dr Marko Robnik Šikonja
- Duration: 2 x 45 minutes
- Level/difficulty: Beginner
- Language of the workshop: Slovene
- Outcomes: Awareness of LLM capabilities and shortcomings in the analysis of complex phenomena; knowledge of statistical evaluation of LLMs.
Title: Understanding large language models for DH
This seminar provides the foundational knowledge for the next seminar, Adapting and Fine-tuning LLMs – a Hands-on Approach for DH.
Description: Large language models are changing the way we write, read, and perform intellectual work. We present the workings of the transformer neural network architecture and focus on decoder models, which underlie generative systems such as ChatGPT. By explaining their construction, pretraining, instruction following, preference alignment, and fine-tuning, we provide the background necessary to understand their behaviour. Building on this, we explain prompting strategies such as in-context learning and chain-of-thought reasoning.
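To make the idea of in-context learning concrete, the sketch below assembles a few-shot prompt from solved examples followed by the query to be completed. The classification task, example sentences, and labels are invented for illustration; any chat-style LLM would receive the resulting string as a single message.

```python
# Minimal sketch of few-shot prompt construction (in-context learning).
# Task, examples, and labels are invented for illustration only.

FEW_SHOT_EXAMPLES = [
    ("The minister's speech was met with loud applause.", "positive"),
    ("The new policy drew sharp criticism from unions.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble a prompt: instruction, solved examples, then the open query."""
    parts = ["Classify the sentiment of each sentence as positive or negative.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        parts.append(f"Sentence: {text}")
        parts.append(f"Sentiment: {label}")
        parts.append("")
    parts.append(f"Sentence: {query}")
    parts.append("Sentiment:")  # the model is expected to complete this line
    return "\n".join(parts)

prompt = build_few_shot_prompt("Readers praised the novel's vivid imagery.")
print(prompt)
```

The labelled examples steer the model toward the desired output format and label set without any parameter updates, which is what distinguishes in-context learning from fine-tuning.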
- Date and time: 2 February 2026, 1:00 to 3:00 PM
- Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
- Instructor: Prof Dr Marko Robnik Šikonja
- Duration: 2 x 45 minutes
- Level/difficulty: Intermediate, no prerequisite knowledge needed
- Language of the workshop: Slovene
- Outcomes: Knowledge of LLM construction and recommendations for their use.
Title: Adapting and fine-tuning LLMs – a hands-on approach for DH
Description: To use large language models (LLMs) in a research or business setting, it is necessary to
- access them on local computers or on GPU servers via an API,
- adapt them to specific needs by fine-tuning or by training on an instruction-following dataset.
We will present step by step:
1. How LLMs can be installed locally or used via an API on a local or remote server.
2. How to fine-tune encoder-only models such as BERT.
3. How to fine-tune decoder-only models such as Llama, Gemma, and GaMS using techniques such as LoRA.
4. How to design and build instruction-following datasets.
5. How to do supervised fine-tuning with instruction-following datasets.
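The LoRA technique mentioned in step 3 can be sketched numerically: instead of updating a full weight matrix W, LoRA freezes W and trains two small low-rank factors A (r × d) and B (d × r), giving the adapted weight W_eff = W + (alpha / r) · B · A. The hidden size, rank, and matrix values below are toy numbers chosen for illustration, not settings from the workshop.

```python
# Illustrative sketch of the LoRA idea: toy numbers, not a real training loop.

# Parameter-count comparison for an assumed hidden size and rank.
d_model, rank = 4096, 8
full_update_params = d_model * d_model          # a full fine-tune updates all of W
lora_params = d_model * rank + rank * d_model   # LoRA trains only A and B

# Tiny numeric demonstration of the low-rank update W_eff = W + (alpha/r) * B @ A.
d, r, alpha = 4, 2, 4

def matmul(X, Y):
    """Plain-Python matrix multiply (rows of X times columns of Y)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen weight
A = [[0.1] * d for _ in range(r)]   # trained low-rank factor, r x d
B = [[0.1] * r for _ in range(d)]   # trained low-rank factor, d x r

BA = matmul(B, A)
W_eff = [[W[i][j] + (alpha / r) * BA[i][j] for j in range(d)] for i in range(d)]

print(f"full fine-tune updates {full_update_params:,} parameters,")
print(f"LoRA trains only {lora_params:,} ({full_update_params // lora_params}x fewer)")
```

Because only A and B are trained, LoRA fine-tunes large decoder models on modest GPU hardware; in practice, libraries such as Hugging Face PEFT implement this for models like the ones named in step 3.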
- Date and time: 4 February 2026, 9:00 AM to 5:00 PM
- Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
- Instructors: Prof Dr Marko Robnik Šikonja, Aleš Žagar, Domen Vreš, Matej Klemen, Tjaša Arčon, and Rebeka Kropivšek Leskovar
- Duration: 6 x 45 minutes
- Level/difficulty: Intermediate (we recommend attending Understanding large language models for DH)
- Prerequisites: Basic knowledge of Python programming and LLM architecture. For the latter, we recommend attending the AI4DH seminars Large language models for analyses in digital humanities and Understanding large language models for DH beforehand.
- Language of the workshop: Slovene
- Outcomes: Practical knowledge of how to install LLMs locally, how to fine-tune them for specific problems, and how to adapt them to a new domain with an instruction-following dataset.
- Registration link: This seminar is limited to 15 participants. Please register only if you genuinely intend to make time and attend on 4 February.