Pieter Delobelle Research
About
Pieter Delobelle is a postdoctoral researcher at KU Leuven specializing in the alignment and fairness of large language models. His work aims to mitigate bias and promote fairness in natural language processing (NLP) systems, with a particular focus on Dutch language models. His research covers efficient training methods, the development of Dutch conversational agents, and the study of intrinsic and extrinsic biases in language modeling. Notable projects include RobBERT, a state-of-the-art Dutch language model, and Tweety-7B, a generative LLM for Dutch. The site also highlights his commitment to fair NLP, equitable models, and reducing the computational resources required for training. His research output is shared through publications and other resources.
Features
• Collaboration opportunities
• Development of Dutch LLMs
• Efficiency in NLP training
• Fairness in NLP
Average Rating: 0.0
User Ratings
No ratings available.
Alternatives
Yugeng Liu
Ph.D. candidate specializing in trustworthy machine learning and related research areas.