waitmandot committed on
Commit 07598fd · verified · 1 Parent(s): 039f3fb

Create train.json

Files changed (1)
  1. train.json +3 -0
train.json ADDED
@@ -0,0 +1,3 @@
+ {"doi": "2310.06825", "chunk-id": "0", "chunk": "Mistral 7B\nAlbert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford,\nDevendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel,\nGuillaume Lample, Lucile Saulnier, L\u00e9lio Renard Lavaud, Marie-Anne Lachaux,\nPierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timoth\u00e9e Lacroix,\nWilliam El Sayed\nAbstract\nWe introduce Mistral 7B, a 7\u2013billion-parameter language model engineered for\nsuperior performance and efficiency. Mistral 7B outperforms the best open 13B\nmodel (Llama 2) across all evaluated benchmarks, and the best released 34B\nmodel (Llama 1) in reasoning, mathematics, and code generation. Our model\nleverages grouped-query attention (GQA) for faster inference, coupled with sliding\nwindow attention (SWA) to effectively handle sequences of arbitrary length with a\nreduced inference cost. We also provide a model fine-tuned to follow instructions,\nMistral 7B \u2013 Instruct, that surpasses Llama 2 13B \u2013 chat model both on human and\nautomated benchmarks. Our models are released under the Apache 2.0 license.\nCode: https://github.com/mistralai/mistral-src", "id": "2310.06825", "title": "Mistral 7B", "summary": "We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered\nfor superior performance and efficiency. Mistral 7B outperforms Llama 2 13B\nacross all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and\ncode generation. Our model leverages grouped-query attention (GQA) for faster\ninference, coupled with sliding window attention (SWA) to effectively handle\nsequences of arbitrary length with a reduced inference cost. We also provide a\nmodel fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses\nthe Llama 2 13B -- Chat model both on human and automated benchmarks. Our\nmodels are released under the Apache 2.0 license.", "source": "http://arxiv.org/pdf/2310.06825", "authors": ["Albert Q. Jiang", "Alexandre Sablayrolles", "Arthur Mensch", "Chris Bamford", "Devendra Singh Chaplot", "Diego de las Casas", "Florian Bressand", "Gianna Lengyel", "Guillaume Lample", "Lucile Saulnier", "L\u00e9lio Renard Lavaud", "Marie-Anne Lachaux", "Pierre Stock", "Teven Le Scao", "Thibaut Lavril", "Thomas Wang", "Timoth\u00e9e Lacroix", "William El Sayed"], "categories": ["cs.CL", "cs.AI", "cs.LG"], "comment": "Models and code are available at\n https://mistral.ai/news/announcing-mistral-7b/", "journal_ref": null, "primary_category": "cs.CL", "published": "20231010", "updated": "20231010", "references": [{"id": "1808.07036"}, {"id": "1809.02789"}, {"id": "1904.10509"}, {"id": "2302.13971"}, {"id": "2009.03300"}, {"id": "2305.13245"}, {"id": "1904.09728"}, {"id": "1803.05457"}, {"id": "2103.03874"}, {"id": "1905.07830"}, {"id": "2308.12950"}, {"id": "2210.09261"}, {"id": "2310.06825"}, {"id": "2307.09288"}, {"id": "2304.06364"}, {"id": "1905.10044"}, {"id": "2110.14168"}, {"id": "2108.07732"}, {"id": "2107.03374"}, {"id": "1811.00937"}, {"id": "2004.05150"}, {"id": "1705.03551"}]}
+ {"doi": "2310.06825", "chunk-id": "1", "chunk": "automated benchmarks. Our models are released under the Apache 2.0 license.\nCode: https://github.com/mistralai/mistral-src\nWebpage: https://mistral.ai/news/announcing-mistral-7b/\n1 Introduction\nIn the rapidly evolving domain of Natural Language Processing (NLP), the race towards higher model\nperformance often necessitates an escalation in model size. However, this scaling tends to increase\ncomputational costs and inference latency, thereby raising barriers to deployment in practical,\nreal-world scenarios. In this context, the search for balanced models delivering both high-level\nperformance and efficiency becomes critically essential. Our model, Mistral 7B, demonstrates that\na carefully designed language model can deliver high performance while maintaining an efficient\ninference. Mistral 7B outperforms the previous best 13B model (Llama 2, [ 26]) across all tested\nbenchmarks, and surpasses the best 34B model (LLaMa 34B, [ 25]) in mathematics and code\ngeneration. Furthermore, Mistral 7B approaches the coding performance of Code-Llama 7B [ 20],\nwithout sacrificing performance on non-code related benchmarks.\nMistral 7B leverages grouped-query attention (GQA) [ 1], and sliding window attention (SWA) [ 6,3].\nGQA significantly accelerates the inference speed, and also reduces the memory requirement during", "id": "2310.06825", "title": "Mistral 7B", "summary": "We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered\nfor superior performance and efficiency. Mistral 7B outperforms Llama 2 13B\nacross all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and\ncode generation. Our model leverages grouped-query attention (GQA) for faster\ninference, coupled with sliding window attention (SWA) to effectively handle\nsequences of arbitrary length with a reduced inference cost. We also provide a\nmodel fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses\nthe Llama 2 13B -- Chat model both on human and automated benchmarks. Our\nmodels are released under the Apache 2.0 license.", "source": "http://arxiv.org/pdf/2310.06825", "authors": ["Albert Q. Jiang", "Alexandre Sablayrolles", "Arthur Mensch", "Chris Bamford", "Devendra Singh Chaplot", "Diego de las Casas", "Florian Bressand", "Gianna Lengyel", "Guillaume Lample", "Lucile Saulnier", "L\u00e9lio Renard Lavaud", "Marie-Anne Lachaux", "Pierre Stock", "Teven Le Scao", "Thibaut Lavril", "Thomas Wang", "Timoth\u00e9e Lacroix", "William El Sayed"], "categories": ["cs.CL", "cs.AI", "cs.LG"], "comment": "Models and code are available at\n https://mistral.ai/news/announcing-mistral-7b/", "journal_ref": null, "primary_category": "cs.CL", "published": "20231010", "updated": "20231010", "references": [{"id": "1808.07036"}, {"id": "1809.02789"}, {"id": "1904.10509"}, {"id": "2302.13971"}, {"id": "2009.03300"}, {"id": "2305.13245"}, {"id": "1904.09728"}, {"id": "1803.05457"}, {"id": "2103.03874"}, {"id": "1905.07830"}, {"id": "2308.12950"}, {"id": "2210.09261"}, {"id": "2310.06825"}, {"id": "2307.09288"}, {"id": "2304.06364"}, {"id": "1905.10044"}, {"id": "2110.14168"}, {"id": "2108.07732"}, {"id": "2107.03374"}, {"id": "1811.00937"}, {"id": "2004.05150"}, {"id": "1705.03551"}]}
+ {"doi": "2310.06825", "chunk-id": "2", "chunk": "GQA significantly accelerates the inference speed, and also reduces the memory requirement during\ndecoding, allowing for higher batch sizes hence higher throughput, a crucial factor for real-time\napplications. In addition, SWA is designed to handle longer sequences more effectively at a reduced\ncomputational cost, thereby alleviating a common limitation in LLMs. These attention mechanisms\ncollectively contribute to the enhanced performance and efficiency of Mistral 7B.arXiv:2310.06825v1 [cs.CL] 10 Oct 2023\nMistral 7B is released under the Apache 2.0 license. This release is accompanied by a reference\nimplementation1facilitating easy deployment either locally or on cloud platforms such as AWS, GCP,\nor Azure using the vLLM [ 17] inference server and SkyPilot2. Integration with Hugging Face3is\nalso streamlined for easier integration. Moreover, Mistral 7B is crafted for ease of fine-tuning across\na myriad of tasks. As a demonstration of its adaptability and superior performance, we present a chat\nmodel fine-tuned from Mistral 7B that significantly outperforms the Llama 2 13B \u2013 Chat model.\nMistral 7B takes a significant step in balancing the goals of getting high performance while keeping\nlarge language models efficient. Through our work, our aim is to help the community create more", "id": "2310.06825", "title": "Mistral 7B", "summary": "We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered\nfor superior performance and efficiency. Mistral 7B outperforms Llama 2 13B\nacross all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and\ncode generation. Our model leverages grouped-query attention (GQA) for faster\ninference, coupled with sliding window attention (SWA) to effectively handle\nsequences of arbitrary length with a reduced inference cost. We also provide a\nmodel fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses\nthe Llama 2 13B -- Chat model both on human and automated benchmarks. Our\nmodels are released under the Apache 2.0 license.", "source": "http://arxiv.org/pdf/2310.06825", "authors": ["Albert Q. Jiang", "Alexandre Sablayrolles", "Arthur Mensch", "Chris Bamford", "Devendra Singh Chaplot", "Diego de las Casas", "Florian Bressand", "Gianna Lengyel", "Guillaume Lample", "Lucile Saulnier", "L\u00e9lio Renard Lavaud", "Marie-Anne Lachaux", "Pierre Stock", "Teven Le Scao", "Thibaut Lavril", "Thomas Wang", "Timoth\u00e9e Lacroix", "William El Sayed"], "categories": ["cs.CL", "cs.AI", "cs.LG"], "comment": "Models and code are available at\n https://mistral.ai/news/announcing-mistral-7b/", "journal_ref": null, "primary_category": "cs.CL", "published": "20231010", "updated": "20231010", "references": [{"id": "1808.07036"}, {"id": "1809.02789"}, {"id": "1904.10509"}, {"id": "2302.13971"}, {"id": "2009.03300"}, {"id": "2305.13245"}, {"id": "1904.09728"}, {"id": "1803.05457"}, {"id": "2103.03874"}, {"id": "1905.07830"}, {"id": "2308.12950"}, {"id": "2210.09261"}, {"id": "2310.06825"}, {"id": "2307.09288"}, {"id": "2304.06364"}, {"id": "1905.10044"}, {"id": "2110.14168"}, {"id": "2108.07732"}, {"id": "2107.03374"}, {"id": "1811.00937"}, {"id": "2004.05150"}, {"id": "1705.03551"}]}