
Run Deepseek R1 on your laptop in 5 minutes or less

Learn how to run Deepseek on your laptop in 5 minutes! Discover the power of Deepseek with easy steps and explore limitless possibilities.

Rahul Kumar · May 22, 2025 · 1 min read · Updated November 28, 2025

Running and hosting an LLM like Deepseek locally lets you use your own machine as the compute engine for exploring and generating ideas, supporting tasks such as reasoning and building agents.

The simplest way to begin is with Ollama, which offers direct access to quantized, distilled versions of models such as Deepseek, Qwen, Mistral, and others. On top of that, OpenWebUI provides a user-friendly interface for working with these models locally.

Getting started

  1. Download Ollama for your machine


    Install Ollama

  2. Install Deepseek R1 using Ollama with the command:

ollama run deepseek-r1
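If your machine is tight on memory, Ollama's model library also tags the distilled R1 variants by parameter count, so you can pull a smaller build explicitly (the 7b tag below is one such variant; check the library for the sizes available):

```shell
# pull a specific distilled variant by tag
ollama pull deepseek-r1:7b

# list the models installed locally
ollama list

# run the tagged model interactively
ollama run deepseek-r1:7b
```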
  3. You can try asking a question right away in your terminal.

    Deepseek R1 in CLI with ollama

  4. Now here is the fun part: download OpenWebUI, preferably via Docker. Once the OpenWebUI container is running, visit http://localhost:3000/


    OpenWebUI

  5. Sign up with your credentials, select the model you installed in the earlier steps, and start asking questions!

    Deepseek R1 using OpenWebUI
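The Docker route for OpenWebUI mentioned above boils down to a single command. This is a sketch based on the image name and port mapping from the OpenWebUI docs; adjust the host port if 3000 is already taken on your machine:

```shell
# run OpenWebUI, mapping host port 3000 to the container's port 8080
# and persisting its data in a named volume; the --add-host flag lets
# the container reach the Ollama server running on your host
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```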

Please note that response latency depends on your machine's hardware; a GPU can be used to significantly accelerate response generation.
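Beyond the CLI and OpenWebUI, Ollama also exposes a local REST API (on port 11434 by default), so you can query the model from a script. A minimal sketch using only the standard library; the prompt text is just an example, and calling `ask()` requires the Ollama server to be running:

```python
import json
import urllib.request

# Ollama serves its REST API on localhost:11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

# Build the request payload; "stream": False asks for a single JSON
# response instead of a stream of partial chunks.
payload = {
    "model": "deepseek-r1",
    "prompt": "Why is the sky blue?",
    "stream": False,
}
body = json.dumps(payload).encode("utf-8")


def ask(url: str = OLLAMA_URL) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```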

