
Installing and setting up Ollama on macOS and running your first prompt
Introduction
Ollama is a powerful tool for running large language models (LLMs) locally on your macOS device. There are several benefits to running models locally, including enhanced privacy, reduced latency, and the ability to work offline. I have been experimenting with a PowerShell module called AiTools, which acts as a wrapper around multiple models. I have also been using Claude much more extensively of late, and to keep control of my credit usage I want a choice of models so that I reach for Claude only when it is required; more specifically, I don't want to waste my Claude credits on non-coding tasks. In this guide, we will walk through the steps to install and set up Ollama on your macOS system and run your first prompt.
Note
Don’t have Homebrew installed? Installation instructions are at https://brew.sh/
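For convenience, this is the install one-liner from that page at the time of writing; check https://brew.sh/ for the current version before running it:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"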
Steps
Installing Ollama on macOS is a straightforward process if you have Homebrew installed; you just need to run the following command in your terminal:
brew install ollama
Once the installation is complete, you can verify that Ollama is installed correctly by checking its version and seeing which models are currently running:
ollama --version
ollama ps
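Note that ollama ps talks to the Ollama server, so the server needs to be running for it to respond. If it hasn’t been started automatically, you can run it as a background service via Homebrew (the same mechanism used to restart it in the warning below):
brew services start ollama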
Picking a model
Ollama supports a variety of models that you can choose from. You can view the available models by visiting the search page at https://ollama.com/search, and the filter options on the page help you pick the model that best suits your needs. You can also get an idea of the size of a model by clicking on its name to see more details.
To download and start a model, I can run the following command in my terminal:
ollama run qwen3
Warning
I ran into an issue when running this command: it returned the error Error: listen tcp 127.0.0.1:11434: bind: address already in use. This was simply a case of restarting Ollama with brew services restart ollama and then re-running the ollama run command.
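If you hit the same error and want to confirm what is already bound to that port before restarting, one way (using the standard macOS lsof tool) is:
lsof -i :11434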
On my M1 MacBook Air, this model is way too large and the machine became really unresponsive, so a bit of experimenting is required. First, I can see which models are installed:
ollama list
I can then pick a smaller model, for example gemma3, but I need to stop the previous model first:
ollama stop qwen3
ollama run gemma3
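Running ollama run gemma3 drops you into an interactive chat where you can type your first prompt directly. You can also pass a one-shot prompt on the command line and get the response printed to the terminal; the prompt text here is just an illustrative example:
ollama run gemma3 "Explain the difference between RAM and unified memory in two sentences."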
Either way, this machine isn’t really geared for this, so I will have to experiment with other models to see what works best.
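Models that don’t work out still take up disk space. Assuming you no longer need the qwen3 download, it can be removed with:
ollama rm qwen3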
Wrapping Up
This guide has walked you through the steps to install and set up Ollama on your macOS system and run your first prompt. With Ollama, you can leverage the power of large language models locally, enhancing your productivity and privacy. Experiment with different models to find the one that best suits your needs.
#mtfbwy