AI4DH Workshops and Lectures 2026: Registration Now Open

In January and February 2026, three seminars and workshops will be offered for digital humanities and social sciences scholars who want to deepen their knowledge of AI and apply it in their research.

All seminars and workshops are free of charge.

Title: Large language models for analyses in digital humanities

Description: Large language models (LLMs) are currently redefining methodological approaches across many scientific fields, including the digital humanities, in areas such as linguistics, social sciences, literary studies, historiography, journalism, political science, and folkloristics. Harnessing their immense capacity to digest and analyse large amounts of text, LLMs can provide rich insights into complex phenomena. LLMs are pretrained on huge text corpora by predicting the next token; this, however, does not make them immune to hallucinations and biases, so their use requires a human-in-the-loop approach, careful methodological design, and rigorous qualitative and quantitative evaluation. We explain how LLMs work at the level needed to understand their performance. In the context of analysing complex phenomena, we discuss the two most common ways of adapting LLMs, fine-tuning and prompt engineering, using examples from social media analysis and folkloristics. We emphasise the need to establish trust in their performance when analysing complex phenomena and outline the corresponding evaluation methodology.
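
As a small, illustrative taste of the prompt-engineering adaptation mentioned above, the Python sketch below labels the sentiment of a single social media post with an off-the-shelf instruction-tuned model via the Hugging Face transformers library; the model name, prompt wording, and example post are placeholders chosen for illustration and are not part of the seminar materials.

# A minimal sketch of zero-shot prompt engineering with the transformers
# library. The model name is a placeholder: any instruction-tuned model
# you have access to can be substituted.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

post = "Another power cut tonight. Third time this week, and still no explanation."
prompt = (
    "You are annotating social media posts for a digital humanities study.\n"
    f"Post: {post}\n"
    "Label the sentiment of the post as positive, negative, or neutral, "
    "and answer with the label only.\nLabel:"
)

result = generator(prompt, max_new_tokens=5, do_sample=False)
# The pipeline returns the prompt followed by the model's continuation,
# so strip the prompt to keep only the predicted label.
print(result[0]["generated_text"][len(prompt):].strip())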

  • Date and time: 26 January 2026, 1:00 to 3:00 PM
  • Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
  • Instructor: Prof Dr Marko Robnik Šikonja
  • Duration: 2 x 45 minutes
  • Level/difficulty: Beginner
  • Language of the workshop: English
  • Outcomes: Awareness of LLM capabilities and shortcomings in the analysis of complex phenomena; knowledge of the statistical evaluation of LLMs.

Title: Understanding large language models for DH

This seminar provides the foundational knowledge for the workshop that follows, Large Language Models for Digital Humanities: A Hands-On Introduction.

Description: Large language models are changing the way we write, read, and do intellectual work. We present the workings of the transformer neural network architecture, focusing on decoder models, which underlie generative systems such as ChatGPT. By explaining their construction, pretraining, instruction following, preference alignment, and fine-tuning, we provide the background necessary to understand their behaviour. Building on this, we explain prompting strategies such as in-context learning and chain-of-thought reasoning.
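
To make in-context learning concrete ahead of the seminar, the short Python sketch below builds a few-shot prompt in which the task is specified only through labelled examples placed in the prompt itself, without updating any model weights; the example sentences and model name are illustrative placeholders.

# A minimal sketch of in-context (few-shot) learning with the transformers
# library: the model infers the task from the labelled examples in the
# prompt. Examples and model name are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

examples = [
    ("The archive visit was a complete waste of time.", "negative"),
    ("The new digitised collection is wonderfully searchable.", "positive"),
]
query = "The conference programme looks reasonably interesting."

prompt = "Classify the sentiment of each sentence.\n"
for text, label in examples:
    prompt += f"Sentence: {text}\nSentiment: {label}\n"
prompt += f"Sentence: {query}\nSentiment:"

result = generator(prompt, max_new_tokens=3, do_sample=False)
# Keep only the model's continuation after the prompt.
print(result[0]["generated_text"][len(prompt):].strip())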

  • Date and time: 2 February 2026, 1:00 to 3:00 PM
  • Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
  • Instructor: Prof Dr Marko Robnik Šikonja
  • Duration: 2 x 45 minutes
  • Level/difficulty: Intermediate, no prerequisite knowledge needed
  • Language of the workshop: English
  • Outcomes: Knowledge of LLM construction and recommendations for their use.

Title: Large Language Models for Digital Humanities: A Hands-On Introduction 

Description: Large Language Models (LLMs) are increasingly used in both research and applied settings, including Digital Humanities (DH) in the broadest sense. To work with them effectively, researchers must understand not only their conceptual foundations but also the practical aspects of deployment, inference, and evaluation.

This hands-on workshop introduces participants to the practical use of LLMs in research workflows. They will learn how to access and run modern LLMs locally or on GPU servers, configure them for efficient inference, and apply them to common text-based tasks. The workshop assumes basic familiarity with Python but no prior experience with machine-learning frameworks.

The workshop combines theoretical overviews with guided practical exercises, with the emphasis on hands-on work. By the end of the workshop, participants will be able to set up LLM-based experiments and critically assess the suitability of LLMs for DH research.

Topics covered include:

  1. Essential background concepts for working with LLMs, including machine learning basics, text representation, transformer models, and inference strategies;
  2. Installing and running LLMs locally (see the brief sketch after this list), as well as accessing models via APIs on local or remote GPU servers;
  3. Using LLMs for text classification and related analytical tasks;
  4. Advanced prompting techniques, including Retrieval-Augmented Generation (RAG), to improve text generation;
  5. Practical use of state-of-the-art inference libraries for efficient and scalable experimentation;
  6. Individual mentoring and support during hands-on exercises and exploratory tasks.
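
As a preview of the first hands-on step (topic 2 in the list above), the Python sketch below loads and queries a small causal language model locally with the transformers library. It is a minimal illustration assuming only a plain CPU machine; the workshop itself will use larger instruction-tuned models and dedicated inference libraries on GPU servers.

# A minimal sketch of running a causal LLM locally with the transformers
# library. "gpt2" is used only because it is small and freely downloadable;
# in practice a larger instruction-tuned model would play this role.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Digital humanities is"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))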

  • Date and time: 4 February 2026, 9:00 AM to 4:00 PM
  • Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
  • Instructors: Prof Dr Marko Robnik Šikonja, Aleš Žagar, Domen Vreš, Matej Klemen, Tjaša Arčon, and Rebeka Kropivšek Leskovar
  • Duration: 6 x 45 minutes
  • Level/difficulty: Intermediate 
  • Prerequisites: Basic knowledge of Python programming. We recommend attending the AI4DH seminars “Large language models for analyses in digital humanities” and “Understanding large language models for DH” beforehand.
  • Language of the workshop: English
  • Outcomes: Practical knowledge of how to use LLMs locally.
  • Registration: This workshop is limited to 15 participants. Please register only if you genuinely intend to attend the prior lectures and the workshop on 4 February.