Bonjour!
I’m excited to share that I’ve completed Module 3, ‘Fine-tuning a pretrained model’, of Hugging Face’s ‘The LLM Course’ and received my certification.
This module covered fine-tuning with the Transformers library. Along the way, I:
- Learned about datasets on the Hub and modern data processing techniques
- Learned how to load and preprocess datasets efficiently, including using dynamic padding and data collators
- Implemented fine-tuning and evaluation using the high-level Trainer API
- Implemented a complete custom training loop from scratch with PyTorch
- Used Accelerate to make the training code work seamlessly on multiple GPUs or TPUs
- Applied modern optimization techniques like mixed precision training and gradient accumulation
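One idea from the list above that is easy to show in miniature is dynamic padding: instead of padding every sequence to one global maximum length, each batch is padded only to the length of its own longest sequence, which saves compute on short batches. This is a plain-Python sketch of the concept, with a made-up `PAD_ID` and token ids; in the course itself this is handled by `DataCollatorWithPadding` from the Transformers library.

```python
PAD_ID = 0  # hypothetical padding token id, for illustration only

def dynamic_pad(batch):
    """Pad a batch of token-id lists to this batch's own max length."""
    max_len = max(len(seq) for seq in batch)
    return [seq + [PAD_ID] * (max_len - len(seq)) for seq in batch]

# Two sequences of different lengths in one batch:
batch = [[101, 7592, 102], [101, 7592, 2088, 999, 102]]
padded = dynamic_pad(batch)
# → [[101, 7592, 102, 0, 0], [101, 7592, 2088, 999, 102]]
# Both rows are length 5, the longest sequence in this batch,
# rather than some fixed model-wide maximum like 512.
```

A data collator applies exactly this kind of per-batch padding on the fly inside the DataLoader, which is why it pairs so well with the Trainer API.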
It was a very insightful experience, and I’m looking forward to applying these skills to future projects.
Here is my certification for Module 3:

I’m eager to continue with the next modules and deepen my understanding of natural language processing with Hugging Face!
Stay tuned for more updates on my learning journey.
Bonne journée,
Ahana