## Using Local AI with Ollama
Want to run AI completely offline? For free? In private? You can, with Ollama.
### What is Ollama?
Ollama is a tool that lets you run powerful AI models like Llama 3, Mistral, and Gemma directly on your computer.
### Step 1: Install Ollama
- Go to ollama.com and download it.
- Install it and run it.
- Open your terminal and pull a model. For example, to get a fast and smart model:
```bash
ollama run llama3
```

Wait for it to download and start. You can chat with it in the terminal to make sure it works. Then press Ctrl+D to exit the chat, but keep the Ollama app running.
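Before moving on, it can help to confirm that Ollama's local API is actually listening. A quick sanity check might look like this (assuming the default port 11434):

```bash
# List the models Ollama has downloaded locally
ollama list

# Ask the local Ollama API for the same list over HTTP
# (11434 is Ollama's default port)
curl http://localhost:11434/api/tags
```

If the curl call returns JSON that mentions llama3, the server is up and the model is ready.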
### Step 2: Configure PicoClaw
Open your config.json and change it to point to your local Ollama.
```json
{
  "api_key": "ollama",
  "base_url": "http://localhost:11434/v1",
  "model": "llama3",
  "language": "en"
}
```

- `api_key`: You can just type `ollama` (Ollama doesn't actually check it).
- `base_url`: This is the standard address for Ollama's local OpenAI-compatible API.
- `model`: Must match the name of the model you downloaded (e.g., `llama3`, `mistral`).
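If you want to verify the `base_url` and model name before touching PicoClaw, you can hit Ollama's OpenAI-compatible endpoint directly. A minimal sketch, assuming the config above:

```bash
# Send one chat message to Ollama's OpenAI-compatible endpoint
# and print the raw JSON response. The bearer token can be anything.
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ollama" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```

If this returns a chat completion instead of an error, PicoClaw will be able to use the same settings.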
### Step 3: Run PicoClaw
```bash
./picoclaw run
```

Now, every time you talk to PicoClaw, it is talking to the AI on your computer. Unplug your internet? It still works!
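Want to try a different model later? Pull it with Ollama and update the `model` field in `config.json` to match. For example (`mistral` is just one option; any model from ollama.com works the same way):

```bash
# Download another model...
ollama pull mistral

# ...then edit config.json so "model" matches the new name, e.g.
#   "model": "mistral"
# and restart PicoClaw:
./picoclaw run
```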