Using Ollama through Docker

To start the server for the first time:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
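Once the container is up, you can check that the server is actually listening. Ollama's root endpoint replies with a short status message (the exact text may vary by version):

```shell
# Check that the Ollama server inside the container is reachable on the host port.
curl http://localhost:11434
```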

To stop and start it:

docker stop ollama
docker start ollama

To interact with it:

docker exec -it ollama ollama pull gemma2
docker exec -it ollama ollama run gemma2
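The server can also be called over HTTP instead of docker exec. A minimal sketch against the generate endpoint, assuming the model has already been pulled:

```shell
# Send a one-off prompt to the running server; stream=false returns a single JSON object.
curl http://localhost:11434/api/generate -d '{
  "model": "gemma2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```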

FYI: gemma2 needs more than 8 GB of RAM to run; with only ~8 GB available it fails:

→ docker exec -it ollama ollama run gemma2
Error: model requires more system memory (9.1 GiB) than is available (8.4 GiB)
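On macOS the limit is usually not the Mac's physical RAM but the memory allocated to Docker Desktop's Linux VM (adjustable under Settings → Resources). One way to see what the daemon actually has:

```shell
# Show total memory (in bytes) available to the Docker daemon — on macOS, the VM's RAM.
docker info --format '{{.MemTotal}}'
```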

Questions

  • How can we move the models to an external SSD?

Moving Models to External SSD on macOS

To move Ollama models to an external SSD when using Docker:

  1. Stop and remove the Ollama container (the name has to be free before the container can be re-created; the named volume and the models in it survive this):
    docker stop ollama
    docker rm ollama
    
  2. Create a directory on your external SSD:
    mkdir /Volumes/YourSSD/ollama-models
    
  3. Update the Docker run command to mount the external SSD location:
    docker run -d \
      -v /Volumes/YourSSD/ollama-models:/root/.ollama \
      -p 11434:11434 \
      --name ollama \
      ollama/ollama
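After re-creating the container, it may be worth confirming that pulls actually land on the SSD. Assuming the mount above (the models subdirectory is where Ollama keeps blobs and manifests):

```shell
# Pull a model, then check that the files appeared on the external SSD.
docker exec -it ollama ollama pull gemma2
ls /Volumes/YourSSD/ollama-models/models
```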
    

If you already have models and want to move them:

  1. Copy the existing models from the Docker volume to your SSD. Using a throwaway container works whether or not the original ollama container still exists:
    docker run --rm -v ollama:/from -v /Volumes/YourSSD/ollama-models:/to alpine cp -a /from/. /to
    
  2. Remove the old container, if one still exists, and then the volume (a volume cannot be deleted while any container references it):
    docker rm ollama
    docker volume rm ollama
    
  3. Start Ollama with the new mount point as shown above.
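The whole migration can be sketched as a single script. The SSD path is a placeholder; adjust it to your volume name:

```shell
#!/bin/sh
set -e  # stop on the first error

SSD_DIR=/Volumes/YourSSD/ollama-models   # placeholder path — use your SSD's mount point

mkdir -p "$SSD_DIR"
docker stop ollama

# Copy the model data out of the named volume via a throwaway container.
docker run --rm -v ollama:/from -v "$SSD_DIR":/to alpine cp -a /from/. /to

# The old container and volume can only be removed once nothing references them.
docker rm ollama
docker volume rm ollama

# Re-create the container with the SSD directory mounted in place of the volume.
docker run -d \
  -v "$SSD_DIR":/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```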

#Development #Docker #AI #LLMs