BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Living Open Source Foundation - ECPv6.15.13//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Living Open Source Foundation
X-ORIGINAL-URL:https://livingopensource.org
X-WR-CALDESC:Events for Living Open Source Foundation
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Africa/Lusaka
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0200
TZNAME:CAT
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Africa/Lusaka:20240531T170000
DTEND;TZID=Africa/Lusaka:20240531T180000
DTSTAMP:20260407T021318Z
CREATED:20240527T105348Z
LAST-MODIFIED:20240529T124718Z
UID:4278-1717174800-1717178400@livingopensource.org
SUMMARY:Introduction to InstructLab
DESCRIPTION:Join us for an exciting online introduction class\, “Introduction to InstructLab AI”. \nThe session will be hosted by Sander van Vugt on May 31st at 5 PM (CAT) and will last one hour. \nParticipation is free\, and the session will show you how to run an AI model on your own laptop. This session is for you if you want to explore how generative AI works and how to get started with it. \nRegister: https://meet.google.com/mgd-ojzv-oze \nInstructLab is an innovative open source project created by IBM and Red Hat to enhance the large language models (LLMs) used in generative artificial intelligence (gen AI) applications. \nIt provides a cost-effective way to improve LLM alignment and opens the door to contributors with minimal machine-learning experience. \nWhat does InstructLab do? \nInstructLab enhances LLM-based tools such as chatbots and coding assistants by:\n– Allowing less expensive\, less resource-intensive fine-tuning.\n– Enabling continuous improvement through community contributions.\n– Reducing the need for large amounts of human-generated data. \nHow does InstructLab work? \nInstructLab uses the LAB method\, which includes:\n1. Taxonomy-driven data curation: human-curated training data examples.\n2. Large-scale synthetic data generation: creating new examples and refining them for quality.\n3. Iterative\, large-scale alignment tuning: retraining the model with the synthetic data. \nHow is InstructLab different? \nInstructLab stands out by:\n– Using fewer human-generated examples than traditional methods.\n– Enhancing LLMs continuously through community contributions.\n– Being model-agnostic\, allowing supplemental fine-tuning of various LLMs. 
 \nComparison to Other Methods: \n– Pretraining: involves large-scale\, resource-intensive training.\n– Alignment tuning: InstructLab achieves significant improvements with fewer human examples.\n– Retrieval-Augmented Generation (RAG): supplements LLMs with domain-specific knowledge without retraining\, while InstructLab enhances the model itself and unlocks new skills. \nDon’t miss this opportunity to learn about the transformative InstructLab project and how you can contribute to and benefit from it. Register now and join us for an enlightening session!
URL:https://livingopensource.org/events/introduction-to-instructlab/
LOCATION:Online
CATEGORIES:Tutorial
ATTACH;FMTTYPE=image/png:https://livingopensource.org/wp-content/uploads/2024/05/instructlab-banner-1.png
ORGANIZER;CN="Living Open Source Foundation":MAILTO:info@livingopensource.org
END:VEVENT
END:VCALENDAR