How to Run DeepSeek R1 Locally and Offline

Learn how to run DeepSeek R1 offline locally with this comprehensive step-by-step guide. Avoid server overload and enjoy seamless access by setting up a runtime environment via Ollama, installing the DeepSeek model, and using the Page Assist browser extension for a user-friendly interface. Perfect for AI enthusiasts seeking reliable offline AI model execution.

Rock Smith
Updated Feb 10, 2025

    A massive event just rocked the AI community—DeepSeek has gone viral worldwide! Thanks to its free and open-source nature and output quality rivaling OpenAI's o1 model, it has gained a huge following. Just take a look at GitHub—its star count has skyrocketed to over 60K, up from 16K the last time we checked. That's insane!

    DeepSeek-R1 is a cutting-edge open-source reasoning model that rivals OpenAI's o1 in mathematical, coding, and logical tasks, leveraging breakthrough reinforcement learning with distilled variants for enhanced accessibility.

    However, we’ve noticed that DeepSeek’s online version sometimes experiences server overload, making it hard to access. So today, we’re bringing you a step-by-step tutorial on how to run DeepSeek R1 locally and offline. Let’s dive right in!

    How to Run DeepSeek R1

    Step 1: Install Ollama

    First, we need to install a runtime environment that allows local execution. We recommend Ollama, which supports running all versions of the DeepSeek R1 model for free.

    Head over to Ollama's official website, click the download button, and select the version compatible with your computer. For this example, we'll use a Windows PC. The file size is around 700MB. Once downloaded, double-click the installer. After installation, you'll see the Ollama icon in your system tray—that means you're good to go!
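    If you'd like to confirm the installation from the command line, Ollama also ships a CLI. A quick sanity check looks like this (exact output varies by version):

```shell
# Confirm the Ollama CLI is on your PATH and the background service is running
ollama --version

# List locally installed models (this will be empty until Step 3)
ollama list
```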


    Step 2: Copy the DeepSeek R1 Model Code

    Now, let’s get the DeepSeek R1 model:

    • Go back to Ollama’s official website and click the "Models" tab at the top left.
    • The first option you see should be DeepSeek R1—that’s right, it’s already integrated!


    • Click on the model, and you’ll find different versions based on your computer’s VRAM capacity.
    • If you're unsure which one to choose, here's a general recommendation: lower VRAM? Pick a smaller model.
    • Slow inference speed? Drop down to a smaller version.
    • For this tutorial, we’ll use the 7B version, which is the most commonly used.
    • Click the copy button next to it—this will copy the command we need to download the model automatically.
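    For reference, the copied command for the 7B version looks like the following. The other sizes follow the same tag pattern; check Ollama's model page for the current list, since available tags may change over time:

```shell
# Download (on first run) and start the 7B distilled model
ollama run deepseek-r1:7b

# Other sizes use the same tag pattern, for example:
# ollama run deepseek-r1:1.5b   # smallest, lowest VRAM requirement
# ollama run deepseek-r1:14b    # larger, needs more VRAM
```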


    Step 3: Open Windows PowerShell

    Now, let’s install the model:

    • Find Windows PowerShell in your system's search bar and open it. (Alternatively, press Win + R, type cmd, and hit Enter to open Command Prompt.)


    • Paste the command you copied earlier and hit Enter.


    • Ollama will now automatically download and install DeepSeek R1 for you.
    • Once the installation is complete, you can start asking questions directly!
    • Try typing a question and pressing Enter—you should see "Think" appear, meaning DeepSeek R1 is processing your query.
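    A typical first session looks roughly like this (the prompt and output shown here are illustrative; your model's responses will differ):

```shell
ollama run deepseek-r1:7b
# >>> Why is the sky blue?
# <think> ...the model's step-by-step reasoning appears here... </think>
# ...followed by the final answer...
#
# Type /bye at the prompt to exit the session
```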

    However, we understand that this command-line interface may not be user-friendly for beginners. So, if you prefer a clean and intuitive UI—something like ChatGPT—let’s move on to the next step.


    Step 4: Run DeepSeek R1 Using the Page Assist Extension

    To enhance your experience, let’s set up a browser-based interface:

    • Open the Chrome Web Store and search for "Page Assist."


    • The first result is the one you want; click "Add to Chrome."
    • Once installed, find the Page Assist icon in your browser's toolbar. Open it, and you'll see it's already linked to Ollama and the model we just set up.
    • Select the model you installed earlier. Before using it, we need to tweak some basic settings.
    • Click the settings icon in the top-right corner, enable "RAG Settings," and select your model under "Embedding Model."


    • Save your changes, and you're ready to go! Now, try asking a sample question:
    "What day comes after the day before yesterday?"

    Boom! The answer appears instantly.

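    Under the hood, Page Assist talks to Ollama's local HTTP API, which listens on port 11434 by default. If you ever want to query the model without a browser, here is a minimal Python sketch (it assumes the 7B model from Step 3 is installed and the Ollama service is running):

```python
import json
import urllib.request

# Ollama's local REST endpoint (default port 11434); Page Assist uses this same API.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-r1:7b") -> dict:
    """Build the payload Ollama expects for a single, non-streamed completion."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_request("What day comes after the day before yesterday?")

# Uncomment to send the request to a running Ollama instance:
# req = urllib.request.Request(
#     OLLAMA_URL,
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```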

    Additionally, you can enable the internet search feature in the bottom-left corner to ask about the latest news. Pretty straightforward, right? You can even switch between other models if you have them installed!


    Final Thoughts on DeepSeek

    This is how you can run DeepSeek R1 locally and offline using Ollama and the Page Assist extension. If you have any questions, leave a comment on the AIPURE YouTube channel! And don't forget to like, subscribe, and turn on notifications so you don't miss our future tutorials.
