Published March 31, 2025
| Version v1
Conference paper
Affordably Fine-tuned LLMs Provide Better Answers to Course-specific MCQs
Contributors
Others:
- Alma Mater Studiorum - University of Bologna (UNIBO)
- OLAS: Fondements opérationnels, logiques et algébriques des systèmes logiciels (Centre Inria d'Université Côte d'Azur, CRISAM; Inria)
- Dipartimento di Informatica - Scienza e Ingegneria (DISI), University of Bologna
Description
In education, the ability of Large Language Models (LLMs) to generate human-like text has inspired work on how they can increase the efficiency of learning and teaching. We study the affordability of these models for educators and students by investigating how LLMs answer multiple-choice questions (MCQs) under hardware constraints and with different refinement techniques. We explore this space using generic pre-trained LLMs (the 7B, 13B, and 70B variants of LLaMA-2) to answer 162 undergraduate-level MCQs from a course on Programming Languages (PL); the MCQ dataset is a contribution of this work, which we make publicly available. Specifically, we dissect how different factors, such as fine-tuning on readily available material (parts of the course's textbook) and quantisation (to decrease resource usage), change the accuracy of the responses. The main takeaway is that smaller, textbook-fine-tuned models outperform generic larger ones (whose pre-training requires conspicuous resources), making the use of LLMs for answering MCQs affordable in terms of both resources and materials.
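For illustration, the kind of pipeline the abstract describes (fine-tuning a quantised LLaMA-2 on textbook text, then prompting it with an MCQ) could be sketched as below. The library choices (Hugging Face transformers, peft, bitsandbytes), the file name pl_textbook.txt, the hyperparameters, and the sample question are all assumptions made for this sketch, not details taken from the paper.

```python
# Sketch: LoRA fine-tuning of a 4-bit-quantised LLaMA-2 on textbook text,
# then answering a course-style MCQ. Paths, hyperparameters, and the sample
# question are illustrative assumptions, not the paper's exact setup.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # smallest of the three variants studied

# 4-bit quantisation keeps the model within a commodity-GPU memory budget.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto")

# LoRA adapters: only a small number of extra weights is actually trained.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Hypothetical plain-text dump of (parts of) the course textbook.
data = load_dataset("text", data_files={"train": "pl_textbook.txt"})["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=1,
                           num_train_epochs=1, fp16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False))
trainer.train()

# Answering a (made-up) MCQ: generate and read off the chosen option letter.
prompt = ("Question: Which evaluation strategy does Haskell use by default?\n"
          "A) call-by-value\nB) call-by-name\n"
          "C) call-by-need\nD) call-by-reference\nAnswer:")
out = model.generate(**tok(prompt, return_tensors="pt").to(model.device),
                     max_new_tokens=5)
print(tok.decode(out[0], skip_special_tokens=True))
```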
Additional details
Identifiers
- URL: https://hal.science/hal-04887593
- URN: urn:oai:HAL:hal-04887593v1
Origin repository
- UNICA