Run LLMs Locally with Ollama: Chat and Automate with Python
Large language models are here to stay, and I thought it would be a good idea to write a post about how to download one and run it locally on a personal computer. Of course, the same explanation applies if you're using a virtual machine in the cloud (AWS, GCP); the key difference lies in hardware limitations and, consequently, the size of the models you're able to run. A personal computer will rarely match the capabilities of a cutting-edge virtual machine.
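To give a sense of where this post is headed, here is a minimal sketch of the end result: chatting with a locally running model from Python via Ollama's official client library (`pip install ollama`). It assumes Ollama is already installed and that a model has been pulled beforehand; the model name `llama3` is just an example, and any model available in the Ollama library would work.

```python
import ollama  # official Ollama Python client: pip install ollama

# Assumes the Ollama server is running locally and the model was
# downloaded first with: ollama pull llama3
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)

# The reply text lives under the "message" key of the response.
print(response["message"]["content"])
```

The rest of the post walks through each of these steps in detail: installing Ollama, downloading a model, chatting with it interactively, and automating it from Python.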