How to Set up a Free, Open-Source, Local AI Assistant Using Ollama, Granite, and Open-WebUI

Suffolk University Law School LIT Lab
Apr 2, 2026

Why It Matters

Running an open‑source LLM locally gives enterprises data control, reduces operating costs, and mitigates legal risk, paving the way for broader, responsible AI deployment.

Key Takeaways

  • Install Ollama, pull a Granite model, and run it locally on a laptop.
  • Open-WebUI provides robust interface for interacting with local models.
  • Granite is fully open-source, with transparent weights and training data.
  • Running an LLM locally reduces reliance on the cloud, cutting energy consumption.
  • IBM pledges legal defense for users of Granite against copyright claims.

Summary

The Suffolk Legal Innovation and Technology (LIT) Lab workshop, hosted by IBM developer advocate JJ Asghar, walked participants through installing a free, open‑source AI stack on a personal computer. The tutorial focused on three components: Ollama, a Docker‑style container manager for LLMs; Granite, IBM’s fully open‑source foundational model; and Open‑WebUI, a feature‑rich web interface for querying the model.
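Open-WebUI is typically run as a container alongside a local Ollama instance. The following is a sketch based on Open-WebUI's standard Docker image; the port mapping and volume name are assumptions, so check the Open-WebUI documentation for the current invocation:

```shell
# Launch Open-WebUI in Docker, exposing the interface on http://localhost:3000.
# --add-host lets the container reach an Ollama server running on the host.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, Open-WebUI auto-detects any models already pulled into the local Ollama instance.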

Asghar demonstrated how Ollama’s simple commands (e.g., "ollama pull", "ollama run") retrieve and launch Granite directly on a laptop, eliminating the need for remote cloud services. He highlighted Granite’s transparency—IBM provides the model’s weights and training data, and even offers legal defense if users face copyright claims. Open‑WebUI adds document upload and retrieval‑augmented generation (RAG) capabilities, allowing users to constrain the model’s knowledge to specific data sets.
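The workflow described above can be sketched end to end in three commands. The Granite model tag below is an assumption; check the Ollama model library for the tag that matches the release you want:

```shell
# Install Ollama (official installer script for macOS/Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Download a Granite model to the local machine (tag is illustrative)
ollama pull granite3.3

# Start an interactive chat with the model, entirely on the laptop
ollama run granite3.3
```

No API key or cloud account is involved; the model weights are cached locally and all inference happens on the machine itself.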

Key moments included a live run of Granite 3.3 and Granite 4 on Asghar’s machine, illustrating comparable performance to commercial APIs while consuming far less power. He used a librarian analogy to explain RAG: the model acts as a librarian who consults a curated library (the retrieval corpus) rather than the entire internet, ensuring more reliable, domain‑specific answers. He also warned about the unsustainable energy demands of large cloud‑based models and positioned local LLMs as a greener alternative.
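The librarian analogy can be made concrete with a minimal sketch: instead of letting the model answer from everything it was trained on, first retrieve only the local documents that match the question, then hand that slice to the model as context. The file contents, the keyword-grep "retrieval" step, and the model tag are all illustrative assumptions, not the workshop's exact pipeline:

```shell
#!/bin/sh
# Build a tiny local "library" of two documents.
mkdir -p library
echo "Eviction notices in MA require 14 days for nonpayment." > library/evictions.txt
echo "Small claims court handles disputes under 7000 dollars." > library/small_claims.txt

QUESTION="How many days notice for an eviction?"

# "Librarian" step: pull only the shelf that matches the question.
# Real RAG systems use embeddings; case-insensitive grep stands in here.
CONTEXT=$(grep -il "eviction" library/*.txt | xargs cat)

# Feeding the constrained prompt to a local model might then look like:
#   printf 'Answer using only this context:\n%s\n\nQ: %s\n' "$CONTEXT" "$QUESTION" | ollama run granite3.3
printf '%s\n' "$CONTEXT"
```

Because the model only sees the retrieved passage, its answer is grounded in the curated library rather than the open internet, which is exactly the reliability benefit the analogy describes.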

For businesses, this approach offers immediate cost savings, tighter data governance, and a clear legal safety net—critical factors as AI regulation tightens. By deploying open‑source models locally, organizations can experiment, prototype, and scale AI‑driven workflows without ceding control to third‑party providers, accelerating responsible AI adoption.

Original Description

The Suffolk LIT Lab invites experts for workshops with the Document Assembly Line community on the first Wednesday of every month. In this workshop, JJ Asghar demonstrated how to set up a free, open-source, local AI assistant using Ollama, Granite, and Open-WebUI.
Here is the open-source AI workshop: https://ibm.github.io/opensource-ai-workshop/pre-work/
- Register for these workshops at tinyurl.com/litlab-dal-workshops
- Learn more about the Document Assembly Line at assemblyline.suffolklitlab.org
- Learn more about the LIT Lab at suffolklitlab.org
