<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Local LLM on SoftwareStable Blog</title><link>https://blog.softwarestable.com/tags/local-llm/</link><description>Recent content in Local LLM on SoftwareStable Blog</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Thu, 26 Mar 2026 09:00:00 +0000</lastBuildDate><atom:link href="https://blog.softwarestable.com/tags/local-llm/index.xml" rel="self" type="application/rss+xml"/><item><title>Using Claude Code with Ollama</title><link>https://blog.softwarestable.com/posts/using-claude-code-with-ollama/</link><pubDate>Thu, 26 Mar 2026 09:00:00 +0000</pubDate><guid>https://blog.softwarestable.com/posts/using-claude-code-with-ollama/</guid><description>Running Large Language Models (LLMs) locally has become increasingly accessible, and combining this with Claude Code opens up powerful possibilities for AI-assisted development without relying on cloud services. In this post, I&amp;rsquo;ll walk through setting up Ollama to run LLMs locally and how to integrate it with Claude Code.
Installing Ollama: Ollama is a lightweight tool for running LLMs locally. Installation is straightforward on macOS:
brew install ollama
For other operating systems, refer to the official Ollama installation guide.</description></item></channel></rss>