Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a whole lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.

First I fetched and ran the model using Ollama itself:
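Something like the following, where the Q8_0 quantization tag is an assumption consistent with the ~8.5GB download mentioned below:

    # Fetch and run the GGUF model directly from Hugging Face via Ollama
    # (Q8_0 is an assumed quantization tag, matching the ~8.5GB download size)
    ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0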

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it as well. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
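As a minimal sketch of what that exploration looks like, using LLM's built-in logs commands (pairing the database with Datasette is one option, not a requirement):

    # Show the most recently logged prompt and response
    llm logs -n 1
    # Print the path to the SQLite log database and browse it in Datasette
    datasette "$(llm logs path)"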

I installed the plugin and ran it like this:
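A sketch of those two steps; the model identifier mirrors the Ollama one above, and the prompt is the one quoted below:

    # Install the plugin that teaches LLM to talk to Ollama models
    llm install llm-ollama
    # Run a prompt against the model, logging the result to SQLite
    llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
      'a joke about a pelican and a walrus who run a tea room together'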

Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the </think> tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its big size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place dishes on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a grin, "This isn't so fishy anymore." The pelican smirks and remarks, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.