- Install Ollama
- brew install --cask ollama
- Get a coding model
- # pick ONE to start
- ollama pull codellama:7b-instruct
- # or
- ollama pull mistral:7b-instruct
- # or
- ollama pull phi3:3.8b-mini-instruct
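Whichever model you pull, `ollama list` confirms what's on disk. A minimal sanity check, assuming the `ollama` CLI is on your PATH (it falls back to a note if it isn't installed yet):

```shell
# List locally pulled models (NAME, ID, SIZE, MODIFIED per row);
# print a hint instead if the ollama CLI isn't installed yet.
ollama list 2>/dev/null || echo "ollama CLI not found - install it first"
```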
- Validate that the LLM server is running locally
- curl http://127.0.0.1:11434/v1/models
- you should see a JSON list of your pulled models, e.g. "codellama:7b-instruct"
- Add this to Xcode
- Xcode ▸ Settings… ▸ Intelligence
- add a Model Provider (Locally Hosted, port 11434)
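To confirm the endpoint Xcode will talk to actually answers, you can hit Ollama's OpenAI-compatible chat route directly. A sketch, assuming you pulled codellama:7b-instruct (substitute whichever model you chose):

```shell
# Smoke-test the OpenAI-compatible chat endpoint on Ollama's default port.
# Prints the model's JSON reply, or a note if the server isn't reachable.
curl -sf http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "codellama:7b-instruct",
       "messages": [{"role": "user", "content": "Say hello in Swift."}]}' \
  || echo "server not reachable - is the Ollama app running?"
```

If this returns a completion, the same base URL and port should work from Xcode's provider settings.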