1 Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek", so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those: unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.

First I fetched and ran the model using Ollama itself:
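(Roughly the following, assuming Ollama's ability to pull GGUF builds straight from Hugging Face and the Q8_0 quantization, which lines up with the roughly 8.5GB download:)

    ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0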

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it as well. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
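(As an aside, the llm logs command can pull those records back out later, and the SQLite file can be opened directly in Datasette; the Datasette step assumes you have it installed separately:)

    # Show the three most recent logged prompts and responses
    llm logs -n 3

    # Open the underlying SQLite log database in Datasette
    datasette "$(llm logs path)"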

I installed the plugin and ran it like this:
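(A sketch of those two steps, assuming the same Hugging Face model reference that Ollama downloaded above:)

    llm install llm-ollama
    llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
      'a joke about a pelican and a walrus who run a tea room together'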

Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're usually found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place saucers on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and remarks, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comical chaos.