Cloud AI is powerful. But it demands a stable internet, subscription fees, and trust that your data stays private. For many organizations in Zambia and across the region, those are not small assumptions.
This training is the answer: a full-day, hands-on session on March 17th, where expert Linux and AI trainer Sander van Vugt shows you how to host a fully functional, ChatGPT-like AI solution inside your own infrastructure. On your own hardware. Offline if needed. With your own documents built in.
IS THIS FOR YOU?
- You work in IT and want to understand what hosting AI on your own infrastructure actually requires
- Your organization or institution has limited, expensive, or unreliable internet access
- You handle sensitive data that cannot or should not leave your premises
- You want to run AI on existing hardware, without investing in expensive GPUs
- You are a Linux administrator looking to add AI hosting to your skillset
- You want to bring AI capabilities to communities or institutions that have no access to cloud services
BY THE END OF THIS DAY, YOU WILL BE ABLE TO:
- Explain the key components of a self-hosted GenAI setup, including LLMs and inference engines, so you can make informed decisions for your organization
- Run a working AI solution on low-end Linux hardware, without a GPU and without an internet connection, using llama.cpp
- Optimize a Linux system specifically for AI inference workloads
- Deploy an enterprise-grade AI solution using vLLM on Linux
- Connect your own internal documents to an AI model using RAG, so the AI answers questions based on your organization's own data
- Run a scalable, production-ready AI solution on top of Kubernetes
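To give a taste of the hands-on work: a minimal sketch of serving a quantized model with llama.cpp on a CPU-only Linux machine. The model filename is a placeholder; any GGUF model you have downloaded locally will work, and no internet connection is needed once the model is on disk.

```shell
# Build llama.cpp from source (CPU-only; no GPU toolkit required)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Serve a locally stored quantized model over an OpenAI-compatible HTTP API.
# "model.Q4_K_M.gguf" is illustrative; substitute any GGUF file on disk.
./build/bin/llama-server -m ./models/model.Q4_K_M.gguf --host 127.0.0.1 --port 8080
```

Once the server is running, any chat front end that speaks the OpenAI API can talk to it locally.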
WHY THIS MATTERS BEYOND THE DATA CENTER
Living Open Source Foundation sponsors Cheeba Primary School in Mazabuka, Zambia. The school has no library. Pupils have no reliable way to look things up, explore topics, or access knowledge beyond what their teachers can provide in the classroom.
The solution covered in this training will be deployed at the school on low-end hardware, with no internet connection required. Pupils will be able to ask questions and get answers, in the same way anyone with a smartphone and a data plan can today.
That is what on-premises AI makes possible. Attending this session directly contributes to making it happen.
WHAT WE WILL COVER
MORNING
- Understanding key components for hosting GenAI: LLMs and inference engines
- Hosting GenAI on low-end hardware without a GPU
- Optimizing Linux for AI inference workloads
- Running a hardware-friendly solution based on llama.cpp on Linux
AFTERNOON
- Running an enterprise-friendly solution based on vLLM on Linux
- Including your own documents using RAG
- Running a scalable AI solution on top of Kubernetes
- How all of this helps schools in rural Zambia move forward
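For the enterprise-friendly track, vLLM exposes the same kind of OpenAI-compatible endpoint. A minimal sketch follows; the model identifier is illustrative only, and vLLM generally expects more capable hardware than llama.cpp does.

```shell
# Install vLLM and serve a model with an OpenAI-compatible API.
# The model name below is a placeholder; substitute one your hardware can handle.
pip install vllm
vllm serve Qwen/Qwen2.5-0.5B-Instruct --host 0.0.0.0 --port 8000

# Query it from another terminal:
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-0.5B-Instruct", "prompt": "Hello", "max_tokens": 16}'
```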
PRACTICAL DETAILS
Date: Monday, March 17th
Time: 8 AM to 5 PM CAT
Location: Living Open Source Foundation headquarters, Ibex Hill off Twin Palm Road, Plot No L3270M/C, Lusaka
Price: K500 per person for non-members. Free for members
Seats available: 25
PREREQUISITES
ABOUT YOUR INSTRUCTOR
Sander van Vugt
Sander is a seasoned Linux trainer specializing in High Availability and Linux performance solutions. With 62 published books, including bestsellers like the Red Hat RHCE/RHCSA Cert Guide, and over 20 video courses, he brings extensive expertise to his training. His approach combines academic knowledge with real-world consulting experience, ensuring practical and cutting-edge Linux insights.
FREQUENTLY ASKED QUESTIONS
Can I become a member of Living Open Source Foundation?
What hardware do I need to bring?
Do I need a GPU?
Can my organization sponsor or donate hardware for the school deployment?
START HOSTING YOUR OWN GENAI
Whether you are running a school with no library, a clinic with sensitive patient records, a business with unreliable connectivity, or a government office that cannot send data abroad, this training gives you the skills to make it work.