
Using Local AI with Ollama

Want to run AI completely offline? For free? Privately? You can, with Ollama.

What is Ollama?

Ollama is a tool that lets you run powerful AI models like Llama 3, Mistral, and Gemma directly on your computer.

Step 1: Install Ollama

  1. Go to ollama.com and download it.
  2. Install it and run it.
  3. Open your terminal and pull a model. For example, to download Llama 3 and start chatting with it:
bash
ollama run llama3

Wait for it to download and start. You can chat with it in the terminal to make sure it works. Then press Ctrl+D to exit the chat, but keep the Ollama app running.
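Before wiring up PicoClaw, you can also double-check that the Ollama server is reachable and see exactly which models it has installed. A quick check, assuming Ollama's default port of 11434:

bash
# List installed models (these names are what goes into config.json later)
ollama list

# Or ask the local server directly over HTTP
curl http://localhost:11434/api/tags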

Step 2: Configure PicoClaw

Open your config.json and change it to point to your local Ollama.

json
{
  "api_key": "ollama", 
  "base_url": "http://localhost:11434/v1",
  "model": "llama3",
  "language": "en"
}
  • api_key: Any placeholder such as ollama works, since Ollama doesn't check the key.
  • base_url: Ollama's OpenAI-compatible endpoint; 11434 is its default port.
  • model: Must match the name of a model you have pulled, exactly as shown by ollama list (e.g., llama3, mistral).
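Before launching PicoClaw, you can verify that base_url and model actually respond. Here is a minimal check against Ollama's OpenAI-compatible chat endpoint (this assumes PicoClaw talks to the standard /v1/chat/completions route behind that base_url):

bash
# Send one test message to the endpoint and model that config.json points at
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Say hello in one sentence."}]}'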

Step 3: Run PicoClaw

bash
./picoclaw run

Now, every time you talk to PicoClaw, it is talking to the AI on your computer. Unplug your internet? It still works!
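
Want to try a different model later? Pull it with Ollama and update the model field in config.json to match. For example (mistral here is just one option from the Ollama library; any model you have pulled works the same way):

bash
# Download another model, then set "model": "mistral" in config.json
ollama pull mistral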
