Run DeepSeek-R1 Locally for Free in Just 3 Minutes!
#deepseek #ai #developer #coding
DeepSeek-R1 has been creating quite a buzz in the AI community. Developed by the Chinese AI company DeepSeek, this model is being compared to OpenAI's top models. The excitement around DeepSeek-R1 comes not just from its capabilities but also from the fact that it is open source, so anyone can download and run it locally. In this blog, I'll guide you through setting up DeepSeek-R1 on your machine using Ollama.
Why DeepSeek-R1?

DeepSeek-R1 stands out for several reasons:
- Cost-Effective: It is significantly cheaper to use than comparable proprietary models, and running it locally costs nothing beyond your hardware.
- Problem-Solving: Excels in reasoning and coding tasks.
- Chain-of-Thought Reasoning: It reasons through problems step by step, showing its thinking before giving a final answer.
- Open-Source: Freely available for anyone to use and modify.
Getting Started with Ollama

Before we begin, let's talk about Ollama. Ollama is a free, open-source tool that lets you run large language models locally on your own machine. With Ollama, you can easily download and run the DeepSeek-R1 model.
Step 1: Install Ollama
First, you'll need to download and install Ollama. Visit the Ollama website and download the version that matches your operating system. Follow the installation instructions provided on the site.
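The macOS and Windows builds come as regular installers. On Linux, the site provides a one-line install script; afterwards you can confirm the install from your terminal:

```bash
# Official Linux install script from the Ollama website
curl -fsSL https://ollama.com/install.sh | sh

# Verify the CLI is available
ollama --version
```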

Step 2: Download DeepSeek-R1
When you visit the DeepSeek-R1 page on the Ollama website, you'll see the available model sizes and their requirements.
You can run the 1.5b, 7b, 8b, 14b, 32b, 70b, or 671b version, and the hardware requirements grow as you choose a larger model. I used the 7b version in this tutorial.
Once Ollama is installed, open your terminal and type the following command to download the DeepSeek-R1 model:
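```bash
# Download the 7b version of DeepSeek-R1 (swap the tag for another size if you prefer)
ollama pull deepseek-r1:7b
```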
This command tells Ollama to download the model. Depending on your internet speed, this might take some time. Grab a coffee while it completes!

Step 3: Verify Installation
After downloading, verify the installation by running:
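```bash
# Show all models installed locally
ollama list
```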
You should see deepseek-r1 in the list of available models. If you do, great job! You're ready to run the model.

Step 4: Run DeepSeek-R1
Now, let's start the model using the command:
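```bash
# Start an interactive chat session with the 7b model
ollama run deepseek-r1:7b
```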
And just like that, you're interacting with DeepSeek-R1 locally. It's that simple!
Step 5: Ask a Query
Once the session starts, just type your question at the prompt and press Enter. The model performs well on coding tasks too, so let's try one of those as well.
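Prefer scripting your queries instead of typing them interactively? Ollama also serves a local REST API on port 11434 while it's running. Here's a minimal sketch using curl; the prompt text is just an example:

```bash
# Send a single prompt to the locally running model via Ollama's REST API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:7b",
  "prompt": "Write a Python function that checks whether a number is prime.",
  "stream": false
}'
```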
Additional Tips
- Hardware Requirements: Ensure your machine meets the requirements for the model size you choose; larger variants need considerably more RAM and benefit from a GPU.
- Regular Updates: Keep Ollama and DeepSeek-R1 updated to the latest versions for optimal performance (see the command after this list).
- Community Support: Join the DeepSeek and Ollama communities for troubleshooting and tips.
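To update the model itself, simply re-pull the tag; Ollama fetches the newest build if one is available:

```bash
# Re-pulling an existing tag updates it to the latest published version
ollama pull deepseek-r1:7b
```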
My First Impression About DeepSeek-R1
My first impression of DeepSeek-R1? Simply mind-blowing. It's fast, efficient, and incredibly versatile for coding and problem-solving tasks.