ollama CLI Profile
Run local LLM models and inference workflows.
Install (brew)
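The CLI can be installed with Homebrew:

    brew install ollama

To keep the server running in the background, Homebrew's service manager can be used (a convenience, assuming the formula's service definition):

    brew services start ollama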
Step 1

No explicit authentication is required for local usage; the server and models run entirely on your machine.
Global Flags

    -h, --help      Show help for ollama or any subcommand
    -v, --version   Print the installed version

Category: AI Inference

Commands
ollama serve
    Start the Ollama server.
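By default the server listens on 127.0.0.1:11434. The bind address can be changed with the OLLAMA_HOST environment variable (address shown for illustration):

    OLLAMA_HOST=0.0.0.0:11434 ollama serve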
ollama create
    Create a model from a Modelfile.
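A minimal sketch, assuming a Modelfile that customizes an existing base model (all names below are illustrative):

    # Modelfile
    FROM llama3.2
    PARAMETER temperature 0.7
    SYSTEM "You are a concise assistant."

Then build and name the model, with -f pointing at the file:

    ollama create my-assistant -f Modelfile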
ollama show
    Show information for a model.
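For example (model name illustrative):

    ollama show llama3.2

Adding --modelfile prints the Modelfile the model was built from:

    ollama show --modelfile llama3.2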
ollama run
    Run a model, pulling it first if it is not present locally.
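Run interactively, or pass a prompt for a one-shot response (model name illustrative):

    ollama run llama3.2
    ollama run llama3.2 "Why is the sky blue?"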
ollama stop
    Stop a running model and unload it from memory.
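ollama ps lists the models currently loaded, which helps identify what to stop (model name illustrative):

    ollama ps
    ollama stop llama3.2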
ollama pull
    Pull a model from a registry.
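Models are addressed as name:tag, with a default tag used when none is given (tags illustrative):

    ollama pull llama3.2
    ollama pull llama3.2:1b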
ollama push
    Push a model to a registry.
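Pushing requires an account on the target registry and a namespaced model name (username and model name below are hypothetical):

    ollama push myuser/my-assistant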
Registry authentication is needed only for push; pulling public models and running local inference work without credentials.