How to Deploy the DeepSeek Large Language Model on Windows for Offline AI Interaction

In recent years, artificial intelligence has developed rapidly, and large language models (LLMs) have been widely used in various fields. However, many users want to run these models on their local computers to enable offline usage, improve response speed, and enhance data privacy. This article provides a step-by-step guide on deploying the DeepSeek large language model on a Windows system using Ollama.

1. Download and Install Ollama.

  1. Open Edge or another browser and search for `Ollama`.
  2. Find the official website [ollama.com](https://ollama.com) and click to enter.
  3. On the homepage, locate the Download button and click to go to the download page.
  4. Select the installation package for Windows and click Download for Windows, then wait for the installer to finish downloading.
  5. Once downloaded, double-click `Ollama Setup.exe` to run the installer.
  6. Click Install on the setup screen and wait for the installation to complete.

After installation, Ollama runs in the background without a visible interface, so we need to use the command-line tool to verify the installation.

2. Verify That Ollama Is Successfully Installed.

  1. Press `Win + S`, type `CMD`, and open the Command Prompt.
  2. In the Command Prompt, enter:
    ollama
  3. Press `Enter`. If you see `Usage` and related information displayed, Ollama has been successfully installed.
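Besides the `ollama` command, the installer also starts a background service that listens on Ollama's default local port, 11434. As a minimal sketch, you can check that this service is up from Python (the host, port, and function name here are illustrative):

```python
import socket

def ollama_reachable(host: str = "127.0.0.1", port: int = 11434, timeout: float = 1.0) -> bool:
    """Return True if something is listening on Ollama's default local port."""
    try:
        # Attempt a plain TCP connection; success means the server is up.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Ollama server reachable:", ollama_reachable())
```

If this prints `False` even after installation, restart Ollama from the Start menu before continuing.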

3. Download and Deploy the DeepSeek Large Language Model.

  1. Visit the [Ollama website](https://ollama.com) and click Models.
  2. Find `deepseek-r1` in the model list and click to enter its page.
  3. Choose the appropriate model version (e.g., `8B`, the 8-billion-parameter version).
  4. Copy the corresponding installation command.
  5. Return to the Command Prompt, paste, and run the command:
    ollama run deepseek-r1:8b
  6. The process will take some time; wait for the model to finish downloading and installing.
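If you want to script the download step rather than type it by hand, the CLI's `ollama pull` subcommand fetches a model without starting a chat session. A small sketch (the `pull_command` helper is hypothetical; the guard skips execution when the Ollama CLI is not on `PATH`):

```python
import shutil
import subprocess

MODEL = "deepseek-r1:8b"  # the tag copied from the Ollama model page

def pull_command(model: str) -> list:
    # "ollama pull" downloads a model without opening an interactive session
    return ["ollama", "pull", model]

cmd = pull_command(MODEL)
if shutil.which("ollama"):  # only run if the Ollama CLI is installed
    subprocess.run(cmd, check=True)
else:
    print("Ollama CLI not found; run manually:", " ".join(cmd))
```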

4. Test the Locally Deployed DeepSeek Model.

  1. In the Command Prompt, enter:
    ollama run deepseek-r1:8b
  2. Press `Enter` to start the DeepSeek interactive mode.
  3. Try entering a question to test the model, such as:
    Which is better, DeepSeek or ChatGPT?
  4. Wait for DeepSeek to generate a response and observe its speed and quality.
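The interactive test above can also be done programmatically: Ollama exposes a local REST endpoint, `POST /api/generate`, which with `"stream": false` returns one complete JSON object whose `response` field holds the model's answer. A hedged sketch using only the standard library (the URL and timeout are defaults/assumptions; the request only runs if the local server is reachable):

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama's local REST endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single complete JSON response
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_payload("deepseek-r1:8b", "Which is better, DeepSeek or ChatGPT?")

try:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        print(json.load(resp)["response"])
except OSError:
    print("Could not reach the local Ollama server; is it running?")
```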

5. Use Chatbox AI for a More Convenient Interaction.

If you prefer not to interact with the model through the command line, you can install Chatbox AI as a visual interface.

  1. Visit the [Chatbox AI official website](https://chatboxai.app/zh).
  2. Download the Windows installation package and install Chatbox AI.
  3. In the settings, choose Use Own API Key or Local Model.
  4. Select Ollama API and choose `deepseek-r1` in the model field.
  5. Click Save, and you can now interact with DeepSeek in the Chatbox interface.
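Chatbox talks to the same local Ollama server, so if `deepseek-r1` does not appear in its model field, you can check which models Ollama actually exposes via its `GET /api/tags` endpoint. A short sketch (the `model_names` helper is illustrative):

```python
import json
import urllib.request

TAGS_URL = "http://127.0.0.1:11434/api/tags"  # lists locally installed models

def model_names(tags: dict) -> list:
    # Each entry under "models" carries the tag shown in tools like Chatbox
    return [m["name"] for m in tags.get("models", [])]

try:
    with urllib.request.urlopen(TAGS_URL, timeout=5) as resp:
        print(model_names(json.load(resp)))
except OSError:
    print("Ollama server not reachable.")
```

Any tag printed here (e.g., `deepseek-r1:8b`) should be selectable in Chatbox.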

6. Conclusion: Advantages and Future Outlook of Local Deployment.

By following this guide, we have successfully deployed the DeepSeek large language model on a Windows computer. Compared to online services, local deployment improves response speed and enhances privacy protection. As hardware performance advances and AI technology evolves, running large language models locally will become increasingly popular.

7. Demo Video.

You can watch the demo video below and select your preferred subtitle language from the subtitle menu.
