From 5972b37820adb6f4850a6f6380e03dcae0670708 Mon Sep 17 00:00:00 2001
From: winona19810108
Date: Mon, 17 Feb 2025 02:58:34 +0000
Subject: [PATCH] Add 'DeepSeek Open-Sources DeepSeek-R1 LLM with Performance Comparable To OpenAI's O1 Model'

---
 ...R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100644 DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md

diff --git a/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
new file mode 100644
index 0000000..d8cd45a
--- /dev/null
+++ b/DeepSeek-Open-Sources-DeepSeek-R1-LLM-with-Performance-Comparable-To-OpenAI%27s-O1-Model.md
@@ -0,0 +1,19 @@
+DeepSeek open-sourced DeepSeek-R1, an LLM fine-tuned with reinforcement learning (RL) to improve its reasoning capability. DeepSeek-R1 achieves results on par with OpenAI's o1 model on several benchmarks, including MATH-500 and SWE-bench.
+
+DeepSeek-R1 is based on DeepSeek-V3, a mixture-of-experts (MoE) model recently open-sourced by DeepSeek. This base model is fine-tuned using Group Relative Policy Optimization (GRPO), a reasoning-oriented variant of RL. The research team also performed knowledge distillation from DeepSeek-R1 to open-source Qwen and Llama models and released several versions of each.