How to Install and Run DeepSeek Models Locally

Tayyab Akmal
March 18, 2025

In today's world, running large language models (LLMs) locally has become a game-changer for developers, researchers, and businesses. It ensures data privacy, reduces latency, and allows for greater customization. In this article, I'll guide you through the process of installing and running DeepSeek models locally using Ollama and integrating them with AnythingLLM. Let’s dive in!

Step 1: Download and Install Ollama

The first step is to download and install Ollama, a lightweight framework designed to simplify the deployment of LLMs on your local machine.

  1. Visit the official Ollama website: https://ollama.com/.
  2. Download the installer compatible with your operating system (Windows, macOS, or Linux).
  3. Follow the installation instructions provided on the website.
  4. Once installed, verify the installation by running the following command in your terminal or command prompt:
ollama --version

If the installation was successful, you should see the version number of Ollama displayed.
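Tip: besides the command-line interface, Ollama starts a small local server in the background (by default on http://localhost:11434) that other applications, including AnythingLLM later in this guide, connect to. As a quick sketch (assuming the default port), you can confirm the server is reachable with:

curl http://localhost:11434

If the server is up, it replies with a short status message such as "Ollama is running".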

Step 2: Download and Run DeepSeek Models

DeepSeek offers a range of distilled models optimized for performance and efficiency. These models are based on popular architectures like Qwen and Llama. Below are the steps to download and run these models locally:

  1. Open your terminal or command prompt.
  2. Use the ollama run command to download and execute the desired DeepSeek model. Here are the commands for two commonly used distilled variants:

DeepSeek-R1-Distill-Qwen-1.5B:

ollama run deepseek-r1:1.5b

DeepSeek-R1-Distill-Qwen-7B:

ollama run deepseek-r1:7b
  3. Once the model is downloaded, it starts running locally. You can interact with it directly via the terminal or call it from other applications, as shown in the sketch below.
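Beyond the interactive terminal session, Ollama also exposes a simple REST API on your machine, which is handy when you want to call the model from scripts or other tools. The sketch below assumes the default port 11434 and the 1.5b variant downloaded above; setting "stream" to false returns a single JSON response instead of a token stream:

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Explain in one sentence what a distilled language model is.",
  "stream": false
}'

You can also run ollama list at any time to see which models are already downloaded on your machine.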

Step 3: Integrate with AnythingLLM

To make the most out of your locally running DeepSeek model, you can integrate it with AnythingLLM, a powerful platform that allows you to build custom AI workflows.

  1. Visit the AnythingLLM website: https://anythingllm.com/.
  2. Download the installer compatible with your operating system (Windows, macOS, or Linux), then install and launch the application.
  3. Navigate to the model selection section and choose Ollama as the model provider.
  4. Specify the model you downloaded earlier (e.g., deepseek-r1:1.5b or deepseek-r1:7b).
  5. Save the configuration and test the integration by interacting with the model through AnythingLLM’s interface. A quick connectivity check is shown below.
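If AnythingLLM does not pick up your local models automatically, it typically needs the base URL of the Ollama server. As a quick sanity check (assuming Ollama's default address, http://localhost:11434), the following command lists the models the server currently exposes:

curl http://localhost:11434/api/tags

The response should include the deepseek-r1 tags you pulled earlier; that same base URL is what you point AnythingLLM at when Ollama is selected as the provider.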
"Running AI models locally is not just about technology; it's about taking control of your data, your privacy, and your innovation."

Step 4: Enjoy Your Local AI Setup

Congratulations! You now have a fully functional local AI setup powered by DeepSeek models. Whether you’re building chatbots, generating content, or performing complex analyses, this setup ensures speed, privacy, and flexibility.
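For chatbot-style use cases in particular, Ollama also offers a chat endpoint that accepts a list of messages, which makes it easy to carry conversational context between turns. This is an illustrative sketch rather than a full client, again assuming the default port and the deepseek-r1:1.5b model:

curl http://localhost:11434/api/chat -d '{
  "model": "deepseek-r1:1.5b",
  "messages": [
    { "role": "user", "content": "Summarize the benefits of running LLMs locally." }
  ],
  "stream": false
}'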

Why Run DeepSeek Models Locally?

Running DeepSeek models locally offers several advantages:

  1. Data Privacy: Your data never leaves your machine, ensuring complete confidentiality.
  2. Customization: Fine-tune the models to suit your specific needs without relying on third-party APIs.
  3. Cost Efficiency: Avoid recurring API costs by hosting the models yourself.
  4. Low Latency: Achieve faster response times compared to cloud-based solutions.


Final Thoughts

The combination of Ollama, DeepSeek models, and AnythingLLM empowers users to harness the full potential of AI while maintaining control over their infrastructure. This setup is ideal for individuals and organizations looking to innovate without compromising on security or performance.

If you found this guide helpful, feel free to share it with your network. Let’s democratize access to AI and unlock new possibilities together!


