Understanding large language models for digital humanities

Instructor: Dr Marko Robnik-Šikonja
Date and time: 2 February 2026, 1:00 to 3:00 PM
Location: University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, lecture room 03
Level: Intermediate

This seminar provides the foundational knowledge for the next seminar, Adapting and Fine-tuning LLMs – a Hands-on Approach for DH.

Large language models are changing the way we write, read, and perform intellectual work. We present the workings of the transformer neural network architecture, focusing on decoder models, which power generative systems such as ChatGPT. By explaining their construction, pretraining, instruction following, preference alignment, and fine-tuning, we provide the background needed to understand their behaviour. Building on this, we explain prompting strategies such as in-context learning and chain-of-thought reasoning.
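At the core of the decoder models described above is causal (masked) self-attention. As a rough illustration of the idea, not material from the seminar itself, the minimal NumPy sketch below (single head; all names and dimensions are illustrative assumptions) shows how each token position attends only to itself and earlier positions, which is what makes decoder models autoregressive generators:

    import numpy as np

    def causal_self_attention(x, W_q, W_k, W_v):
        # x: (seq_len, d_model) token embeddings; W_*: (d_model, d_head) projections.
        q, k, v = x @ W_q, x @ W_k, x @ W_v
        scores = q @ k.T / np.sqrt(k.shape[-1])  # (seq_len, seq_len) similarity scores
        # Causal mask: position i may attend only to positions j <= i.
        mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
        # Row-wise softmax turns masked scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v  # (seq_len, d_head) contextualised representations

    # Toy usage with random embeddings and projection weights.
    rng = np.random.default_rng(0)
    seq_len, d_model, d_head = 5, 16, 8
    x = rng.normal(size=(seq_len, d_model))
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    print(causal_self_attention(x, W_q, W_k, W_v).shape)  # (5, 8)

The prompting strategies mentioned above operate purely at the input level: an in-context-learning prompt prepends worked examples, and a chain-of-thought prompt adds intermediate reasoning steps (or a cue such as "Let's think step by step") before the model produces its answer.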

Outcomes: Knowledge of LLM construction and recommendations for their use.
Skills you will gain:

An understanding of how the transformer neural network architecture works.


Language: English