This workshop will explain step-by-step how to set up and deploy AI-powered applications locally using Ollama. We will cover installation, configuration, and running models efficiently on your machine for seamless offline AI development.
Learning goals:
- survey local AI solutions, with a focus on Ollama and its capabilities;
- understand the system and hardware requirements for running large language models (LLMs) locally;
- install and operate LLMs on your machine using Ollama’s setup tools;
- develop basic AI-driven applications powered by locally hosted models; and
- integrate Ollama into an agent-based AI workflow, using n8n for automation and orchestration.
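As a preview of the application-building goal above, a locally hosted model can be queried over Ollama's HTTP API from a few lines of Python. This is a minimal sketch, assuming Ollama is already installed, the server is running on its default port 11434, and a model (here `llama3`) has been pulled beforehand with `ollama pull llama3`:

```python
import json
import urllib.request

# Assumption: a local Ollama server is listening on its default port 11434.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Assemble a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response is a single JSON object whose
        # "response" field holds the model's completion.
        return json.loads(resp.read())["response"]


# Example call (requires a running Ollama server with llama3 pulled):
# print(generate("llama3", "In one sentence, what is a large language model?"))
```

The same endpoint is what an n8n HTTP Request node would call in the agent-workflow portion of the session, so the building blocks here carry over directly.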
The session is intended for participants who are new to the topic and want to build a working understanding of local AI development.
Duration: 1 hr
Video
Handout