Description
Join us on Monday, October 2 @ 10:30 AM – 11:30 AM ET for the launch of our newest collaborative project, Fairness in Language Models (LMs). The project will be conducted in a laboratory format and delivered in 3 half-day sessions from October 25–27. The laboratory project format is a short engagement aimed at exploration and experimentation.
The 3-day lab focuses on enhancing your understanding of fairness and bias mitigation in Language Models (LMs) using various research-based techniques. At a time when LM applications hold significant influence, auditing them for fairness is of the utmost importance. The lab incorporates lectures, demonstrations, code walkthroughs, and hands-on exploration, and concludes with a showcase day. By signing up for this lab, you will:
- Improve your comprehension of fairness and bias mitigation in Language Models.
- Learn to identify and evaluate both data and model biases through techniques such as sentence classification, named entity recognition, instruction fine-tuning, chain-of-thought prompting, and soft-prompt tuning (a short illustrative sketch follows this list).
- Gain a deeper understanding of how Vector's UnBIAS library was developed to mitigate bias in data, and learn its practical application through hands-on exercises.
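For a flavour of the kind of hands-on work the lab involves, here is a minimal, hypothetical sketch of screening sentences for bias with a zero-shot sentence classifier from the Hugging Face transformers library. The model choice and label set are illustrative assumptions only; they are not the lab's actual materials and not the UnBIAS API.

```python
# Illustrative sketch only: screening sentences for potential bias with a
# zero-shot sentence classifier. The model and labels are assumptions for
# demonstration; the lab's materials and Vector's UnBIAS library may differ.
from transformers import pipeline

# A general-purpose NLI model repurposed for zero-shot classification.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentences = [
    "The nurse said she would be back shortly.",
    "Engineers solve problems methodically.",
]
candidate_labels = ["biased", "neutral"]  # hypothetical label set

for sentence in sentences:
    result = classifier(sentence, candidate_labels)
    # result["labels"] is sorted by descending score
    print(f"{sentence!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")
```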
Important Note: your final registration for this project will be reported to our primary point of contact at your organization, who will grant final approval for your participation.
This event is open to Vector Sponsors, Vector Researchers, and invited health partners only. Any registrant found not to be a Vector Sponsor, Vector Researcher, or invited health partner will be asked to provide verification and, if unable to do so, will not be able to attend the event. Please contact events@vectorinstitute.ai with any questions.