
Running DeepSeek Locally: A Step-by-Step Guide


In the world of machine learning and AI, running models locally can provide significant advantages, including data privacy, reduced latency, and greater control over your environment. In this blog, we’ll walk you through the process of running DeepSeek R1 locally using Docker and Ollama. By the end of this guide, you’ll have a fully functional DeepSeek environment running on your machine.

What is DeepSeek R1?

DeepSeek R1 is an open-weight reasoning model from DeepSeek, suited to tasks such as natural language understanding, analysis, and code generation. Running it locally lets you leverage its capabilities without relying on cloud-based services, keeping your data on your own machine and your responses free of network latency.

Prerequisites

Before we begin, ensure you have the following installed on your machine:


  1. Docker: For containerizing and running applications.

  2. Ollama: A tool for managing and running machine learning models locally.
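
To confirm both are installed, you can print their versions before proceeding (the exact version strings will vary by machine):

[root@siddhesh ~]# docker --version
[root@siddhesh ~]# ollama --version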


Step 1: Pull the Open WebUI Docker Image


The first step is to pull the Open WebUI Docker image, which provides a user-friendly interface for interacting with DeepSeek R1.


Run the following command in your terminal:

[root@siddhesh ~]# docker pull ghcr.io/open-webui/open-webui:main

This command downloads the latest version of the Open WebUI image from the GitHub Container Registry. You should see output similar to this:

main: Pulling from open-webui/open-webui
7ce705000c39: Pull complete
d02d1a1ced20: Pull complete
...
Digest: sha256:b2c83b5c7b9b244999307b4b1c0e195d41268f3d3a62b84b470c0cea5c5743fd
Status: Downloaded newer image for ghcr.io/open-webui/open-webui:main
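
If you want to double-check that the image is now in your local cache, docker images will list it:

[root@siddhesh ~]# docker images ghcr.io/open-webui/open-webui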

Step 2: Run the Open WebUI Container


Once the image is downloaded, you can run the Open WebUI container using the following command:

[root@siddhesh ~]# docker run -d -p 9783:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main

Here’s what each part of the command does:

  • -d: Runs the container in detached mode (in the background).

  • -p 9783:8080: Maps port 9783 on your local machine to port 8080 in the container.

  • -v open-webui:/app/backend/data: Creates a volume for persistent data storage.

  • --name open-webui: Names the container open-webui.

  • ghcr.io/open-webui/open-webui:main: Specifies the Docker image to use.


You should see output like this:

96b39f7b331b4c342e282466142070167da7571fe6f57fdd4eabb1e00476406f

This is the container ID, confirming that the container is running.
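
As an optional sanity check, you can confirm the container is up and follow its startup logs using standard Docker commands:

[root@siddhesh ~]# docker ps --filter name=open-webui
[root@siddhesh ~]# docker logs -f open-webui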


Step 3: Run DeepSeek R1 Using Ollama


Now that the Open WebUI is running, you can use Ollama to pull and run the DeepSeek R1 model locally.

Run the following command:

[root@siddhesh ~]# ollama run deepseek-r1:1.5b

This command downloads and runs the DeepSeek R1 model. You’ll see output similar to this:

pulling manifest
pulling aabd4debf0c8... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 GB
pulling 369ca498f347... 100% ▕██████████████████████████████████████████████████████████████████████████████████████████████▏  387 B
...
verifying sha256 digest
writing manifest
success
>>>

Once the model is downloaded, you can start interacting with it directly in your terminal.
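
At the >>> prompt, type a question and press Enter; /bye exits the session. Back at the shell, you can confirm the model is registered with Ollama:

[root@siddhesh ~]# ollama list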


Step 4: Access the Open WebUI


To access the Open WebUI, open your web browser and navigate to:

http://localhost:9783

(Replace localhost with your server's IP address if Docker is running on a remote machine.) This will bring up the Open WebUI interface, where you can interact with DeepSeek R1 using a graphical interface.
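
If the page does not load, you can check from the terminal that the container is answering on the mapped port (this assumes the container from Step 2 is still running):

[root@siddhesh ~]# curl -I http://localhost:9783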



Step 5: Using DeepSeek R1 Locally


Reload the Open WebUI page, and you should see the deepseek-r1:1.5b model in the model selector. Just click on it to select it and start using it!



Step 6: Try Searching with DeepSeek R1



Here we can see the response being generated entirely on the local machine, without connecting to the internet or sending any data to an external service.

Tips for Optimal Performance

  1. Allocate Sufficient Resources: Ensure your machine has enough CPU and RAM to handle the model.

  2. Use GPU Acceleration: If available, configure Docker and Ollama to use GPU resources for faster performance (see the example after this list).

  3. Monitor Resource Usage: Use tools like docker stats to monitor container resource usage.
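
For example, on a machine with an NVIDIA GPU and the NVIDIA Container Toolkit installed (an assumption; adjust for your own hardware), you could recreate the Open WebUI container with GPU access and then watch its live resource usage:

[root@siddhesh ~]# docker rm -f open-webui
[root@siddhesh ~]# docker run -d -p 9783:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
[root@siddhesh ~]# docker stats open-webui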


Conclusion

Running DeepSeek R1 locally using Docker and Ollama is a straightforward process that provides significant benefits, including enhanced privacy and reduced latency. By following this guide, you’ve set up a robust environment for leveraging DeepSeek R1’s capabilities on your local machine.
