Deploying DeepSeek on your server in just a few clicks
DeepSeek AI is a powerful open-source AI model that can operate without requiring a GPU. When combined with Ollama, it enables running AI locally with full control over performance and privacy.
Hosting DeepSeek on your own server ensures a high level of security, eliminating the risk of data interception via API. The official DeepSeek servers are frequently overloaded. By deploying the model locally, you can utilize AI resources exclusively for your needs, without sharing them with other users.
Note: DeepSeek is a third-party development. SpaceCore Solution LTD is not responsible for the operation of this software.
What Are the System Requirements?
Let’s compare several models with varying requirements. Each model can run on either CPU or GPU; since we are deploying on a server, this guide focuses on installing and running the model on the CPU.
| Model | RAM | Disk space | Use case |
| --- | --- | --- | --- |
| deepseek-r1:14b | 9 GB | 20 GB | Advanced capabilities in development and copywriting. Excellent balance of speed and functionality. |
| deepseek-r1:70b | 42 GB | 85 GB | High-level computations for business tasks. Deep data analysis and comprehensive development. |
| deepseek-r1:671b | 720 GB | 768 GB | The most advanced model, with computational capabilities comparable to the latest ChatGPT versions. |

The 671B model is recommended to be hosted on a high-performance dedicated server with NVMe drives.
DeepSeek 14B Installation
Let's install the 14B model, chosen for its high performance and moderate resource consumption; this guide applies to any available model, allowing you to install a different version if needed.
The installation is performed on a Hi-CPU Pulsar plan with Ubuntu 22.04, which is an ideal choice for DeepSeek.
Run the following command to update all system packages to the latest version:
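On Ubuntu 22.04 this is done with apt (run as root, or prefix each command with sudo):

```shell
# Refresh the package index and upgrade all installed packages
apt update && apt upgrade -y
```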
Ollama is a lightweight tool for downloading and running large language models locally, and it is required for deploying DeepSeek. Install it with:
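Ollama provides an official one-line installer script for Linux:

```shell
# Download and run the official Ollama install script
curl -fsSL https://ollama.com/install.sh | sh
```

The script also registers Ollama as a systemd service, so it starts automatically in the background.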
Once the installation is complete, use the following command to download the required DeepSeek model:
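For the 14B model used in this guide:

```shell
# Download the DeepSeek R1 14B model weights (about 9 GB)
ollama pull deepseek-r1:14b
```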
deepseek-r1:14b is the name of the selected model. To install a different version, simply replace it, e.g., deepseek-r1:32b
The installation process takes approximately 2 minutes on a Hi-CPU Pulsar server due to high network speed. Execute the following command to launch the DeepSeek model:
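To start the model in interactive mode:

```shell
# Start an interactive chat session with the model
ollama run deepseek-r1:14b
```

Type /bye to exit the interactive session.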
Once started, a command-line interface will appear where you can communicate with the AI.
You can also run a single query directly from the command line, for example:
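Passing the prompt as an argument runs a one-off, non-interactive query (the prompt text here is just an example):

```shell
# Run a single query and print the response to stdout
ollama run deepseek-r1:14b "Explain the difference between RAM and disk space in one paragraph"
```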
To generate a public key and enable API access, use:
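By default, Ollama serves an HTTP API on port 11434, bound to localhost only. A minimal sketch for allowing access from other hosts and sending a test request (this assumes the systemd service created by the installer; securing the port with a firewall is up to you):

```shell
# Allow Ollama to accept connections on all interfaces (systemd override)
sudo systemctl edit ollama.service
# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama

# Send a test request to the REST API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:14b",
  "prompt": "Hello",
  "stream": false
}'
```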
To check installed models and their status:
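Two Ollama subcommands cover this:

```shell
# List downloaded models and their sizes
ollama list

# Show models currently loaded in memory
ollama ps
```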
Conclusion
We highly recommend deploying DeepSeek R1 models on servers with sufficient RAM. For stable operation, choose a server with some RAM headroom beyond the model's minimum requirement and fast NVMe disks.
The server plans listed in the comparison table are perfectly optimized for DeepSeek AI hosting. We guarantee the quality and reliability of our servers at SpaceCore. Entrust your server deployment to us and build a robust infrastructure for seamless and efficient AI usage in your business!