News
📣 [10/2024] Introducing SWE-bench Multimodal! Can AI systems "see" bugs and fix them? 👀 💻 [Link]
📣 [08/2024] SWE-bench x OpenAI = SWE-bench Verified, a human-validated subset of 500 problems reviewed by software engineers! [Report]
📣 [06/2024] We've Docker-ized SWE-bench for easier, containerized, reproducible evaluation. [Report]
📣 [03/2024] Check out our latest work, SWE-agent, which achieves a 12.47% resolve rate on SWE-bench! [Link]
📣 [03/2024] We've released SWE-bench Lite! Running all of SWE-bench can take time. This subset makes it easier! [Report]
Leaderboard
SWE-bench (full test set)
| Model | % Resolved | Org | Date | Logs | Trajs | Site |
|---|---|---|---|---|---|---|
| 🆕 🥇 🤠✅ OpenHands + CodeAct v2.1 (claude-3-5-sonnet-20241022) | 29.38 | | 2024-11-03 | ✓ | ✓ | |
| 🆕 🥈 🤠 AutoCodeRover-v2.0 (Claude-3.5-Sonnet-20241022) | 24.89 | | 2024-11-21 | ✓ | ✓ | |
| 🥉 Honeycomb | 22.06 | | 2024-08-20 | ✓ | ✓ | |
| Amazon Q Developer Agent (v20240719-dev) | 19.75 | | 2024-07-21 | ✓ | ✓ | |
| Factory Code Droid | 19.27 | | 2024-06-17 | ✓ | - | |
| AutoCodeRover (v20240620) + GPT 4o (2024-05-13) | 18.83 | | 2024-06-28 | ✓ | - | |
| 🤠✅ SWE-agent + Claude 3.5 Sonnet | 18.13 | | 2024-06-20 | ✓ | ✓ | - |
| 🤠✅ AppMap Navie + GPT 4o (2024-05-13) | 14.60 | | 2024-06-15 | ✓ | - | |
| Amazon Q Developer Agent (v20240430-dev) | 13.82 | | 2024-05-09 | ✓ | - | |
| 🤠✅ SWE-agent + GPT 4 (1106) | 12.47 | | 2024-04-02 | ✓ | ✓ | |
| 🤠✅ SWE-agent + GPT 4o (2024-05-13) | 11.99 | | 2024-07-28 | ✓ | ✓ | |
| 🤠✅ SWE-agent + Claude 3 Opus | 10.51 | | 2024-04-02 | ✓ | ✓ | - |
| 🤠✅ RAG + Claude 3 Opus | 3.79 | | 2024-04-02 | ✓ | - | |
| 🤠✅ RAG + Claude 2 | 1.96 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + GPT 4 (1106) | 1.31 | | 2024-04-02 | ✓ | - | - |
| 🤠✅ RAG + SWE-Llama 13B | 0.70 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + SWE-Llama 7B | 0.70 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + ChatGPT 3.5 | 0.17 | | 2023-10-10 | ✓ | - | - |
SWE-bench Verified
| Model | % Resolved | Org | Date | Logs | Trajs | Site |
|---|---|---|---|---|---|---|
| 🆕 🥇 Amazon Q Developer Agent (v20241202-dev) | 55.00 | | 2024-12-02 | ✓ | ✓ | |
| 🆕 🥈 devlo | 54.20 | | 2024-11-08 | ✓ | ✓ | |
| 🥉 🤠✅ OpenHands + CodeAct v2.1 (claude-3-5-sonnet-20241022) | 53.00 | | 2024-10-29 | ✓ | ✓ | |
| 🆕 Engine Labs (2024-11-25) | 51.80 | | 2024-11-25 | ✓ | ✓ | |
| 🆕 🤠 Agentless-1.5 + Claude-3.5 Sonnet (20241022) | 50.80 | | 2024-12-02 | ✓ | ✓ | |
| Solver (2024-10-28) | 50.00 | | 2024-10-28 | ✓ | ✓ | |
| 🆕 Bytedance MarsCode Agent | 50.00 | | 2024-11-25 | ✓ | ✓ | |
| 🆕 nFactorial (2024-11-05) | 49.20 | | 2024-11-05 | ✓ | ✓ | |
| Tools + Claude 3.5 Sonnet (2024-10-22) | 49.00 | | 2024-10-22 | ✓ | ✓ | |
| 🤠✅ Composio SWE-Kit (2024-10-25) | 48.60 | | 2024-10-25 | ✓ | ✓ | |
| 🆕 🤠✅ AppMap Navie v2 | 47.20 | | 2024-11-06 | ✓ | ✓ | |
| Emergent E1 (v2024-10-12) | 46.60 | | 2024-10-23 | ✓ | ✓ | |
| 🆕 🤠 AutoCodeRover-v2.0 (Claude-3.5-Sonnet-20241022) | 46.20 | | 2024-11-08 | ✓ | ✓ | |
| Solver (2024-09-12) | 45.40 | | 2024-09-24 | ✓ | ✓ | |
| Gru (2024-08-24) | 45.20 | | 2024-08-24 | ✓ | ✓ | |
| Solver (2024-09-12) | 43.60 | | 2024-09-20 | ✓ | ✓ | |
| nFactorial (2024-10-30) | 41.60 | | 2024-10-30 | ✓ | ✓ | |
| 🆕 Nebius AI Qwen 2.5 72B Generator + Llama 3.1 70B Critic | 40.60 | | 2024-11-13 | ✓ | ✓ | |
| Tools + Claude 3.5 Haiku | 40.60 | | 2024-10-22 | ✓ | ✓ | |
| Honeycomb | 40.60 | | 2024-08-20 | ✓ | ✓ | |
| 🤠 Composio SWEkit + Claude 3.5 Sonnet (2024-10-16) | 40.60 | | 2024-10-16 | ✓ | ✓ | |
| EPAM AI/Run Developer Agent v20241029 + Anthropic Claude 3.5 Sonnet | 39.60 | | 2024-10-29 | ✓ | ✓ | |
| Amazon Q Developer Agent (v20240719-dev) | 38.80 | | 2024-07-21 | ✓ | ✓ | |
| 🤠 Agentless-1.5 + GPT 4o (2024-05-13) | 38.80 | | 2024-10-28 | ✓ | ✓ | |
| AutoCodeRover (v20240620) + GPT 4o (2024-05-13) | 38.40 | | 2024-06-28 | ✓ | - | |
| 🤠✅ SWE-agent + Claude 3.5 Sonnet | 33.60 | | 2024-06-20 | ✓ | ✓ | - |
| 🆕 Artemis Agent v1 (2024-11-20) | 32.00 | | 2024-11-20 | ✓ | ✓ | |
| nFactorial (2024-10-07) | 31.60 | | 2024-10-07 | ✓ | ✓ | |
| 🤠 Lingma Agent + Lingma SWE-GPT 72b (v0925) | 28.80 | | 2024-10-02 | ✓ | ✓ | |
| EPAM AI/Run Developer Agent + GPT4o | 27.00 | | 2024-10-16 | ✓ | ✓ | |
| 🤠✅ AppMap Navie + GPT 4o (2024-05-13) | 26.20 | | 2024-06-15 | ✓ | - | |
| nFactorial (2024-10-01) | 25.80 | | 2024-10-01 | ✓ | ✓ | |
| Amazon Q Developer Agent (v20240430-dev) | 25.60 | | 2024-05-09 | ✓ | - | |
| 🤠 Lingma Agent + Lingma SWE-GPT 72b (v0918) | 25.00 | | 2024-09-18 | ✓ | ✓ | |
| EPAM AI/Run Developer Agent + GPT4o | 24.00 | | 2024-08-20 | ✓ | ✓ | |
| 🤠✅ SWE-agent + GPT 4o (2024-05-13) | 23.20 | | 2024-07-28 | ✓ | ✓ | |
| 🤠✅ SWE-agent + GPT 4 (1106) | 22.40 | | 2024-04-02 | ✓ | ✓ | |
| 🤠✅ SWE-agent + Claude 3 Opus | 18.20 | | 2024-04-02 | ✓ | ✓ | - |
| 🤠 Lingma Agent + Lingma SWE-GPT 7b (v0925) | 18.20 | | 2024-10-02 | ✓ | ✓ | |
| 🤠 Lingma Agent + Lingma SWE-GPT 7b (v0918) | 10.20 | | 2024-09-18 | ✓ | ✓ | |
| 🤠✅ RAG + Claude 3 Opus | 7.00 | | 2024-04-02 | ✓ | - | |
| 🤠✅ RAG + Claude 2 | 4.40 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + GPT 4 (1106) | 2.80 | | 2024-04-02 | ✓ | - | - |
| 🤠✅ RAG + SWE-Llama 7B | 1.40 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + SWE-Llama 13B | 1.20 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + ChatGPT 3.5 | 0.40 | | 2023-10-10 | ✓ | - | - |
SWE-bench Lite
| Model | % Resolved | Org | Date | Logs | Trajs | Site |
|---|---|---|---|---|---|---|
| 🆕 🥇 Globant Code Fixer Agent | 48.33 | | 2024-11-27 | ✓ | ✓ | |
| 🆕 🥈 devlo | 47.33 | | 2024-11-22 | ✓ | ✓ | |
| 🥉 🤠✅ OpenHands + CodeAct v2.1 (claude-3-5-sonnet-20241022) | 41.67 | | 2024-10-25 | ✓ | ✓ | |
| 🤠 Composio SWE-Kit (2024-10-30) | 41.00 | | 2024-10-30 | ✓ | ✓ | |
| 🆕 🤠 Agentless-1.5 + Claude-3.5 Sonnet (20241022) | 40.67 | | 2024-12-02 | ✓ | ✓ | |
| Bytedance MarsCode Agent | 39.33 | | 2024-09-12 | ✓ | ✓ | |
| 🆕 🤠✅ Moatless Tools + Claude 3.5 Sonnet (20241022) | 38.33 | - | 2024-11-17 | ✓ | ✓ | |
| Honeycomb | 38.33 | | 2024-08-20 | ✓ | ✓ | |
| 🆕 🤠✅ AppMap Navie v2 | 36.00 | | 2024-11-13 | ✓ | ✓ | |
| Gru (2024-08-11) | 35.67 | | 2024-08-11 | ✓ | ✓ | |
| Isoform | 35.00 | - | 2024-08-29 | ✓ | ✓ | |
| SuperCoder2.0 | 34.00 | | 2024-08-06 | ✓ | ✓ | |
| Bytedance MarsCode Agent + GPT 4o (2024-05-13) | 34.00 | | 2024-07-23 | ✓ | - | |
| Alibaba Lingma Agent | 33.00 | | 2024-06-22 | ✓ | ✓ | |
| 🤠 Agentless-1.5 + GPT 4o (2024-05-13) | 32.00 | | 2024-10-28 | - | - | |
| 🆕 CodeShellTester + GPT 4o (2024-05-13) | 31.33 | | 2024-11-11 | ✓ | ✓ | |
| 🤠 AutoCodeRover (v20240620) + GPT 4o (2024-05-13) | 30.67 | | 2024-06-21 | ✓ | ✓ | |
| AIGCode Infant-Coder (2024-08-30) | 30.00 | - | 2024-09-08 | ✓ | ✓ | |
| Amazon Q Developer Agent (v20240719-dev) | 29.67 | | 2024-07-21 | ✓ | ✓ | |
| 🤠 Agentless + RepoGraph + GPT-4o | 29.67 | | 2024-08-08 | ✓ | ✓ | |
| CodeR + GPT 4 (1106) | 28.33 | | 2024-06-04 | ✓ | ✓ | |
| SIMA + GPT 4o (2024-05-13) | 27.67 | - | 2024-07-06 | ✓ | ✓ | |
| MASAI + GPT 4o (2024-05-13) | 27.33 | - | 2024-06-12 | ✓ | ✓ | |
| 🤠 Agentless + GPT 4o (2024-05-13) | 27.33 | | 2024-06-30 | ✓ | - | |
| 🤠✅ Moatless Tools + Claude 3.5 Sonnet | 26.67 | - | 2024-06-23 | ✓ | ✓ | |
| 🤠✅ OpenHands + CodeAct v1.8 | 26.67 | | 2024-07-25 | ✓ | ✓ | |
| IBM Research Agent-101 | 26.67 | | 2024-06-12 | ✓ | - | |
| 🤠 Aider + GPT 4o & Claude 3 Opus | 26.33 | | 2024-05-23 | ✓ | - | |
| HyperAgent | 25.33 | - | 2024-09-25 | ✓ | ✓ | |
| 🤠✅ Moatless Tools + GPT 4o (2024-05-13) | 24.67 | - | 2024-06-17 | ✓ | ✓ | |
| IBM AI Agent SWE-1.0 (with open LLMs) | 23.67 | | 2024-10-16 | ✓ | ✓ | |
| 🤠✅ SWE-agent + Claude 3.5 Sonnet | 23.00 | | 2024-06-20 | ✓ | ✓ | - |
| 🤠✅ AppMap Navie + GPT 4o (2024-05-13) | 21.67 | | 2024-06-15 | ✓ | - | |
| Bytedance AutoSE (based on SWE-Agent) + GPT4/GPT4o Mixed (20240828) | 21.67 | | 2024-08-28 | ✓ | ✓ | - |
| Amazon Q Developer Agent (v20240430-dev) | 20.33 | | 2024-05-09 | ✓ | - | |
| 🤠 AutoCodeRover (v20240408) + GPT 4 (0125) | 19.00 | | 2024-05-30 | ✓ | - | |
| 🤠✅ SWE-agent + GPT 4o (2024-05-13) | 18.33 | | 2024-07-28 | ✓ | ✓ | |
| 🤠✅ SWE-agent + GPT 4 (1106) | 18.00 | | 2024-04-02 | ✓ | ✓ | |
| 🤠✅ SWE-agent + Claude 3 Opus | 11.67 | | 2024-04-02 | ✓ | ✓ | - |
| 🤠✅ RAG + Claude 3 Opus | 4.33 | | 2024-04-02 | ✓ | - | |
| 🤠✅ RAG + Claude 2 | 3.00 | | 2023-10-10 | ✓ | ✓ | - |
| 🤠✅ RAG + GPT 4 (1106) | 2.67 | | 2024-04-02 | ✓ | - | - |
| 🤠✅ RAG + SWE-Llama 7B | 1.33 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + SWE-Llama 13B | 1.00 | | 2023-10-10 | ✓ | - | - |
| 🤠✅ RAG + ChatGPT 3.5 | 0.33 | | 2023-10-10 | ✓ | - | - |
SWE-bench Lite is a subset of SWE-bench curated to make evaluation less costly and more accessible [Post].
SWE-bench Verified is a human-annotator-filtered subset whose instances have been validated as solvable, giving it a ceiling of a 100% resolve rate [Post].
- The % Resolved metric is the percentage of SWE-bench instances (2,294 for the full test set, 500 for Verified, 300 for Lite) that were resolved by the model; a minimal sketch of the computation follows this list.
- ✅ Checked indicates that we, the SWE-bench team, received access to the system and were able to reproduce the patch generations.
- 🤠 Open refers to submissions with open-source code. This does not necessarily mean the underlying model is open-source.
- The leaderboard is updated once a week, on Mondays.
- If you would like to submit your model to the leaderboard, please check the submission page.
- All submissions are Pass@1, do not use `hints_text`, and are in the unassisted setting.
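For concreteness, a minimal sketch of how % Resolved is computed. The per-instance outcomes below are hypothetical (the instance ids are illustrative); the real harness derives each outcome from unit-test results:

```python
# Hypothetical per-instance outcomes: instance_id -> True if the model's patch
# made the FAIL_TO_PASS tests pass without breaking the PASS_TO_PASS tests.
results = {
    "django__django-11099": True,   # illustrative instance ids
    "sympy__sympy-13480": False,
    "astropy__astropy-6938": True,
}

TOTAL = 300  # size of the split: 2294 (test), 500 (Verified), 300 (Lite)

# Unattempted instances count as unresolved, so divide by the full split size.
pct_resolved = 100 * sum(results.values()) / TOTAL
print(f"% Resolved: {pct_resolved:.2f}")  # % Resolved: 0.67
```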
Resources
You can download the SWE-bench task instances from HuggingFace or directly as a JSON file (development and test sets). For convenience, we also provide five pre-processed datasets at different retrieval settings ("Oracle", 13K, 27K, 40K, and 50K "Llama") for fine-tuning your own model and evaluating it on SWE-bench. We recommend the 13K, 27K, or 40K datasets for evaluation; the 50K "Llama" dataset is provided for reproducing the results of the SWE-bench paper.
SWE-bench Lite is also available for download from HuggingFace.
SWE-bench Verified can be downloaded from HuggingFace.
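As a quick sketch, all three splits can be pulled with the Hugging Face `datasets` library. The identifiers below follow the `princeton-nlp` organization's naming on the hub; double-check them there before relying on this snippet:

```python
from datasets import load_dataset

# Full test set (2,294 instances), plus the two curated subsets.
full = load_dataset("princeton-nlp/SWE-bench", split="test")
lite = load_dataset("princeton-nlp/SWE-bench_Lite", split="test")
verified = load_dataset("princeton-nlp/SWE-bench_Verified", split="test")

# Each instance carries the issue text and repository metadata.
example = full[0]
print(example["instance_id"], example["repo"])
print(example["problem_statement"][:200])
```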
We also provide the full SWE-Llama model weights at 13b and 7b parameters, along with their PEFT LoRA weights.
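A hedged sketch of loading SWE-Llama for inference with `transformers` and `peft`. The base model follows the paper (SWE-Llama is fine-tuned from CodeLlama), but the adapter repo id here is an assumption based on the hub naming convention; verify the exact ids on HuggingFace:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "codellama/CodeLlama-13b-hf"           # SWE-Llama 13b is fine-tuned from CodeLlama
ADAPTER = "princeton-nlp/SWE-Llama-13b-peft"  # assumed id for the PEFT LoRA weights

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)

# Apply the LoRA adapter on top of the base weights for inference.
model = PeftModel.from_pretrained(model, ADAPTER)
```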
About
SWE-bench is a dataset that tests systems' ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution. Read more about SWE-bench in our paper!
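Concretely, an evaluated system emits one unified-diff patch per task instance, and the Docker-ized harness (see the news above) applies each patch in a container and re-runs the repository's tests. A minimal sketch of the predictions format the `swebench` harness consumes; the field names follow its documented interface, but confirm against the repository before submitting:

```python
import json

# One entry per attempted instance.
predictions = [
    {
        "instance_id": "django__django-11099",  # illustrative instance id
        "model_name_or_path": "my-agent-v1",    # hypothetical system name
        "model_patch": "diff --git a/...",      # the generated patch (truncated here)
    }
]

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)

# The harness is then invoked roughly as:
#   python -m swebench.harness.run_evaluation \
#       --dataset_name princeton-nlp/SWE-bench_Lite \
#       --predictions_path predictions.json --run_id my-run
```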
Citation
@inproceedings{
jimenez2024swebench,
title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=VTF8yNQM66}
}
Disclaimer: SWE-bench is for research purposes only. Models trained and evaluated on SWE-bench can produce unexpected results. We are not responsible for any damages caused by the use of SWE-bench, including, but not limited to, loss of profit, data, or use of data.
Usage: If you would like to use this website template for your own leaderboard, please send Carlos & John an email requesting permission. If granted, please make sure to acknowledge the SWE-bench team and link to this leaderboard on the home page of the website.
Correspondence to: carlosej@princeton.edu, johnby@stanford.edu