<p><strong><a href="https://savage-grow.clubeo.com/calendar/2024/01/01/savage-grow-plus-formula-usa-official-website-real-users-reviews-know-all-details-2023">Savage Grow Plus</a> Review:</strong> <a href="https://savage-grow.clubeo.com/page/savage-grow-plus-review-2024-ingredients-pros-cons-of-testosterone-booster.html"><strong>Savage Grow Plus</strong></a> is a dietary formula that claims to unlock the true size of your penis. <a href="https://savage-grow.clubeo.com/page/savage-grow-plus-usa-male-enhancement-official-website.html"><strong>Savage Grow Plus</strong></a> is made with all-natural herbs intended to deliver quick results.</p> <p>Many men are preoccupied with their penis size. It is widely believed that men with mammoth manhood perform better in bed and satisfy their partners more easily. Experts, however, note that most men have average-sized penises and that there is no correlation between penis size and sexual satisfaction.</p> <p>The use of penis enhancers and pumps to increase size is common today, but some of these manhood-improving practices are unsafe and expensive. <a href="https://savage-grow.clubeo.com/"><strong>Savage Grow Plus</strong></a> is a dietary supplement containing natural ingredients to increase penile length and enhance overall male sexual health. 
Continue reading this review to find out more about this male-enhancing formula.</p> <h2><a href="https://www.globalfitnessmart.com/get-savage-grow-plus"><strong>Savage Grow Plus &mdash; Official Website Link &mdash; Click Here</strong></a></h2> <h2><strong>►❱❱ Product Name ➥ {<a href="https://www.globalfitnessmart.com/get-savage-grow-plus">Savage Grow Plus</a>}</strong><br /><strong>►❱❱ Countries Available ➥ World Wide</strong><br /><strong>►❱❱ Composition ➥ Natural Organic Compound</strong><br /><strong>►❱❱ Side-Effects ➥ NA</strong><br /><strong>►❱❱ Rating ➥ ⭐⭐⭐⭐⭐</strong><br /><strong>►❱❱ Availability ➥ Online</strong><br /><strong>➤➤❱❱ Where to Buy ➺ <a href="https://www.globalfitnessmart.com/get-savage-grow-plus">Official Website</a><br /></strong></h2> <h2><a href="https://www.globalfitnessmart.com/get-savage-grow-plus"><strong>✅&rdquo;Visit The Official Website To Get Your Bottle Now&rdquo;✅</strong></a></h2> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-savage-grow-plus"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEilIuq7lSQM1klPU6l1M4TIf-G5xcWJUDxKyev3jvMNQMNIkBzAKnv3jipFqiDPmWWZLzBHjKSiw0Ry9pK1aHYSod_tm3-tvCLOOGFuFInzfjX_H-2lR2nj6tcfmw1W1fzNgfWMxRF8kqEdvz0GiI1lXZY5YHQCOclSPWPBzYoU_3o1cE3l6Z_RMwiTFmdE/w640-h422/Savage%20Grow%20Plus%2004.jpg" alt="" width="640" height="422" border="0" data-original-height="446" data-original-width="677" /></a></div> <h2><strong>What&rsquo;s <a href="https://groups.google.com/g/savage-grow-plus-usa/c/xVQjXRmuAJg">Savage Grow Plus</a>?</strong></h2> <p>This is for men who wish to 
increase their penis size. Savage Grow Plus is made in the USA in an FDA-approved facility. Each bottle of <a href="https://colab.research.google.com/drive/1dx1OYKeuLglGrLnHqwJcFEiKTehUSkjF"><strong>Savage Grow Plus</strong></a> is made to exacting, stringent, and sterile standards. <a href="https://sites.google.com/view/savage-grow-plus-review-us/home"><strong>Savage Grow Plus</strong></a> is used by thousands of men, who have seen incredible results in just weeks. <a href="https://sites.google.com/view/savage-grow-plus-review-us/home"><strong>Savage Grow Plus</strong></a> has no side effects and is 100% safe.</p> <p>Most African tribes supposedly have significant-sized penises because of their diet. <a href="https://colab.research.google.com/drive/1dx1OYKeuLglGrLnHqwJcFEiKTehUSkjF"><strong>Savage Grow Plus</strong></a> is a dietary supplement containing natural nutrients to increase penis size, improve orgasms, and support healthy erections. The developer claims it is a safe and practical alternative for boosting male sexual health without resorting to insane pumps, painful exercises, or other penis-growth remedies.</p> <p><a href="https://lookerstudio.google.com/u/0/reporting/54bfa516-b46d-4a13-ab95-a17dbbac7650/page/ONPmD"><strong>Savage Grow Plus</strong></a> supposedly addresses a specific &ldquo;blockage&rdquo; that prevents you from achieving hard erections and causes overall poor sexual health. The male booster formula is based on ingredients used by the African Somba tribe. It can supposedly increase penis size by up to 5 inches within a few weeks.</p> <p>Each <a href="https://www.scoop.it/topic/savage-grow-plus-usa-is-legit-202-updated-report"><strong>Savage Grow Plus</strong></a> capsule has natural and scientifically proven ingredients to amplify male health. It is safe for adults of all ages and unlikely to give users any nasty side effects. 
<a href="https://savage-grow-plus-hashtagusa-male-enhancement.jimdosite.com/"><strong>Savage Grow Plus</strong></a> is available without a prescription and is unlikely to interact with other medications.</p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-savage-grow-plus">(EXCLUSIVE OFFER) Click Here: "Savage Grow Plus USA" Official Website!</a></strong></h2> <h2><strong>How Does <a href="https://savage-grow-plus-review.company.site/">Savage Grow Plus</a> Work?</strong></h2> <p><a href="https://www.scoop.it/topic/savage-grow-plus-by-savage-growus"><strong>Savage Grow Plus</strong></a> works by eliminating the root cause of poor penis growth. The creator explains that consuming the dietary formula initiates various healing processes, including the following.</p> <p><strong>Initiate Absorption Processes</strong></p> <p>Per the developer, the penis needs nourishment to elongate and thicken. Unfortunately, the American diet has little to no penis-enhancing nutrients. Consequently, most men develop small penises from a young age due to nutrient deficiency.</p> <p><strong>Restart Cellular Growth</strong></p> <p><a href="https://savagegrowplus3.bandcamp.com/track/savage-grow-plus-reviews-2024-does-it-really-work-see-pros"><strong>Savage Grow Plus</strong></a> makers claim that some people consume the proper nutrients, but due to poor absorption and assimilation, the penis does not increase in size. Signs of stunted penis growth include characteristically pink penile skin and yellow-colored urine. Some people develop allergies that inhibit them from absorbing the penis-growing ingredients. 
<a href="https://carehealthreview.blogspot.com/2023/12/savage-grow-plus-reviews-2024-does-it.html"><strong>Savage Grow Plus</strong></a> supports the cellular health that stimulates an increase in penis size.</p> <p><strong>Eliminate Blockages</strong></p> <p><a href="https://savage-grow-plus.hashnode.dev/savage-grow-plus"><strong>Savage Grow Plus</strong></a> has several ingredients that amplify penis size by eradicating the &ldquo;blockage&rdquo; that hinders proper nutrient absorption and assimilation. Instead of the nutrients being eliminated via urine, the natural ingredients ensure each penis cell gets what it needs to augment size.</p> <p><strong>Boost Testosterone Levels</strong></p> <p><a href="https://savage-grow-plus.hashnode.dev/savage-grow-plus-reviews-2024-does-it-really-work-see-pros"><strong>Savage Grow Plus</strong></a> aims to unlock the key to natural penis growth and restore sexual health regardless of age. The dietary formula improves overall male reproductive health, covering the scrotum, prostate, testicles, semen, and penis.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-savage-grow-plus"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPaKv11Jzd0mTv_ht3rAnwBAQeBTVOt2aF0VYe5ZTTYJdmGmpUXqEhuKwb1hT_vogIiAwfrh1Xz_xNXDgFtnro34L9S6q3GFyR1fcsaNtl7TnNiYfgsMaZVyY2hupDJUmoAaHsGPJLaCyweDqcdTDmot4eiTWlRLNBS9ghQMj6LJp6_SeDpwZc_3nReFtG/w640-h426/we-vibe-toys-vgaiXx0hp44-unsplash.jpg" alt="" width="640" height="426" border="0" data-original-height="5336" data-original-width="8005" /></a></div> <h2><strong><a href="https://www.click4r.com/posts/g/13843995/">Savage Grow Plus</a> Benefits</strong></h2> <ul> <li>It can increase penis length and width within a few weeks</li> <li>It can help men develop and sustain long, strong erections for up to 30 minutes</li> 
<li>It can aid users in developing firm, strong body muscles</li> <li>It can boost the metabolism, enabling men to lose belly fat and excess fat mass</li> <li>It can amplify stamina, allowing men to have complete sexual satisfaction</li> <li>It can fortify sexual drive, energy levels, and sexual confidence</li> <li>It can shield men from developing prostate issues and other reproductive health problems</li> <li><a href="https://savage-grow-plus.hashnode.dev/savage-grow-plus-reviews-2024-does-it-really-work-see-pros"><strong>Savage Grow Plus</strong></a> may enhance sleep quality and alleviate anxiety</li> <li>It can improve blood pressure and sugar levels</li> </ul> <h2><strong><a href="https://soundcloud.com/savagegrowus/savage-grow-plus-usa-male-enhancementofficial-website">Savage Grow Plus</a> Dosage and Side Effects</strong></h2> <p style="text-align: left;"><a href="https://events.humanitix.com/savage-grow-plus-reviews-2024-does-it-really-work-see-pros"><strong>Savage Grow Plus</strong></a> creators recommend consuming two capsules daily with a large glass of water. It is best to use the male enhancer for at least 90 days for better results. Each <a href="https://gamma.app/docs/Savage-Grow-Plus-Reviews-2024-Does-It-Really-Work-See-Pros-9g66x7jdo6d3g9l?mode=doc"><strong>Savage Grow Plus</strong></a> capsule contains natural, science-based ingredients unlikely to give users any discomfort. The product is intended for users aged 18 and above. 
The creator recommends consulting your doctor before using the formulation if you are under any medication.</p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-savage-grow-plus">SPECIAL PROMO [Limited Discount]: "Savage Grow Plus USA" Official Website!</a></strong></h2> <h2><strong><a href="https://www.sunflower-cissp.com/glossary/cissp/10910/savage-grow-plus-review-2024-ingredients-pros-cons-of-testosterone-booster">Savage Grow Plus</a> Ingredients</strong></h2> <p><a href="https://medium.com/@savagegrowus/savage-grow-plus-reviews-2024-does-it-really-work-see-pros-30fa34c5291f"><strong>Savage Grow Plus</strong></a> is a combination of 14 science-based, natural ingredients. The components are based on a 2,000-year-old African ritual said to sustain penis elongation and male reproductive health. The 14 elements include:</p> <p><strong>Tribulus Terrestris</strong></p> <p><a href="https://bitbucket.org/savage-grow-plus/savage-grow-plus/issues/1/savage-grow-plus-reviews-2024-does-it"><strong>Savage Grow Plus</strong></a> makers claim that Tribulus supports reproductive health by boosting blood flow to the penis. It has natural compounds that support nitric oxide production, which dilates the blood vessels, allowing the penis to receive oxygenated, nutrient-rich blood. Additionally, good vascularity improves the quality of erections, allowing men to develop thick, firm, and sustainable erections for extended periods.</p> <p><strong>Hawthorn Extract</strong></p> <p>Common in Africa, hawthorn is a potent antioxidant that heals damaged penile cells. Further, it may heal the tissues and stop unhealthy inflammation, thus supporting penis size. Hawthorn is a natural compound that supports healthy vascularity. 
It promotes healthy blood circulation, delivering oxygen and nutrients to the male reproductive organs.</p> <p><strong>Horny Goat Weed</strong></p> <p><a href="https://medium.com/@savagegrowus/savage-grow-plus-usa-is-legit-2024-updated-report-ac9885ac948f"><strong>Savage Grow Plus</strong></a> makers claim it is a natural male sex enhancer supporting healthy testosterone levels. It can raise sexual drive, stamina, and energy levels, giving men the sexual confidence required to satisfy their partners. Horny goat weed may also nourish the cells and repair damage to the penis and related sexual organs.</p> <p><strong>Damiana Leaf</strong></p> <p>Common in the Americas, damiana leaf may improve sexual satisfaction in men who are unable to achieve satisfactory orgasms on command. Damiana leaf can also help prevent premature ejaculation, erectile dysfunction, and other male health problems.</p> <p><strong>Muira Puama</strong></p> <p>Also named the &ldquo;potency wood,&rdquo; Muira Puama can treat erectile dysfunction and elevate libido levels. It may also help men to attain healthy erections on command. <a href="https://bitbucket.org/savage-grow-plus/savage-grow-plus/issues/2/savage-grow-plus-usa-is-legit-2024-updated"><strong>Savage Grow Plus</strong></a> ingredients also include inosine, oat straw, saw palmetto, cayenne, and Catuaba, which can naturally enhance penis length and girth.</p> <p><strong><a href="https://public.flourish.studio/visualisation/16299266/">Savage Grow Plus</a> Dosage</strong></p> <p><a href="https://community.thebatraanumerology.com/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fdfa0cfc576f55fef391d"><strong>Savage Grow Plus</strong></a> enhancement pills should be used according to the guidelines provided on the official website. This dietary supplement has been carefully made with a blend of exotic formulas that can increase your phallus length. 
No other supplement can offer you this.</p> <p>You must take <a href="https://followme.tribe.so/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fdfa9fb385bb9d99a01c6"><strong>Savage Grow Plus</strong></a> tablets twice a day: once in the morning after breakfast and once in the afternoon. The pills must be taken with water; do not take them with coffee, soda, energy drinks, or alcohol. Each bottle contains 60 pills, a 30-day supply.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-savage-grow-plus"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSo9CAyqwYpXOi_ROcJkY9mguczLmJ0GN-CCgS3L1QCUuDyMgrS4KwXSF7jHoCZIyClEo_1UZl_Yhgmls13woWikAp7G7qTduLtQQL04Q2Q-jhBTYVa-3k8YgDK7LXgLhvMT4tO-Aqejym88sZwJmpVxYBG995k0Z01pmb5YcAxxaD6MHuPym40Q9cSJnF/w640-h384/4-bottles.png" alt="" width="640" height="384" border="0" data-original-height="600" data-original-width="998" /></a></div> <h2><strong><a href="https://wandering.flarum.cloud/d/34985-savage-grow-plus-reviews-2024-does-it-really-work-see-pros">Savage Grow Plus</a> Pricing</strong></h2> <p><a href="https://community.thebatraanumerology.com/post/savage-grow-plus---usa-is-legit-2024-updated-report-658fe1cae89ea2138f928498"><strong>Savage Grow Plus</strong></a> is available only via the official website. The manufacturer is selling the product at discounted prices on all orders:</p> <ul> <li>One bottle: $69.00 plus $9.95 shipping</li> <li>Two bottles: $59.00 each plus free shipping</li> <li>Four bottles: $49.00 each plus free shipping</li> </ul> <p>A 60-day money-back guarantee protects each <a href="https://oqqur.tribe.so/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fe4d24921da35e7e196a5"><strong>Savage Grow Plus</strong></a> bottle. 
For more information, visit the official website.</p> <h2><strong><a href="https://www.click4r.com/posts/g/13843747/">Savage Grow Plus</a> Conclusion</strong></h2> <p><a href="https://www.c-sharpcorner.com/article/savage-grow-plus-usa-is-legit-2024-updated-report/"><strong>Savage Grow Plus</strong></a> is a male enhancer containing natural ingredients to boost reproductive health. It nourishes the cells, augments blood circulation, stabilizes testosterone levels, and protects the male reproductive organs from future damage. Taking two <a href="https://www.eventcreate.com/e/savage-grow-plus"><strong>Savage Grow Plus</strong></a> pills daily can reportedly increase penis size by three inches, sustain libido, boost sexual confidence, and enhance orgasms. Visit the official website and try <a href="https://forum.teknofest.az/d/13328-savage-grow-plus-usa-is-legit-2024-updated-report"><strong>Savage Grow Plus</strong></a> today!</p> <p style="text-align: center;">&nbsp;<a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-savage-grow-plus"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX-cbqZHt7SzvtaQ87vvJz_mbo5yCnl6jJT8IH9oLh2-SilJhTQ0vKXi22OrJCpzcS1MlilVfYSulkkVZvqOLGZZcCYVAenF84LIdCXh7USryFwJV1sLyZv-8AkfdeliJYBkuO_VjMiziCg7L9SFIvHlUJ-zMRTwRwZcm4M5rnBbaNmnj2cvOJVjMRwKMH/w640-h490/Savage%20Grow%20Plus%20price.jpg" alt="" width="640" height="490" border="0" data-original-height="529" data-original-width="692" /></a></p> <h2 style="text-align: center;"><strong><a href="https://www.globalfitnessmart.com/get-savage-grow-plus">Exclusive Details: *Savage Grow Plus* Read More Details on Official Website USA!</a></strong></h2> <h2><strong># READ MORE</strong></h2> <p><strong><a 
href="https://savage-grow.clubeo.com/calendar/2024/01/01/savage-grow-plus-formula-usa-official-website-real-users-reviews-know-all-details-2023">https://savage-grow.clubeo.com/calendar/2024/01/01/savage-grow-plus-formula-usa-official-website-real-users-reviews-know-all-details-2023</a></strong></p> <p><strong><a href="https://savage-grow.clubeo.com/page/savage-grow-plus-usa-male-enhancement-official-website.html">https://savage-grow.clubeo.com/page/savage-grow-plus-usa-male-enhancement-official-website.html</a></strong></p> <p><strong><a href="https://savage-grow.clubeo.com/page/savage-grow-plus-review-2024-ingredients-pros-cons-of-testosterone-booster.html">https://savage-grow.clubeo.com/page/savage-grow-plus-review-2024-ingredients-pros-cons-of-testosterone-booster.html</a></strong></p> <p><strong><a href="https://savage-grow.clubeo.com/">https://savage-grow.clubeo.com/</a></strong></p> <p><strong><a href="https://groups.google.com/g/savage-grow-plus-usa/c/xVQjXRmuAJg">https://groups.google.com/g/savage-grow-plus-usa/c/xVQjXRmuAJg</a></strong></p> <p><strong><a href="https://sites.google.com/view/savage-grow-plus-review-us/home">https://sites.google.com/view/savage-grow-plus-review-us/home</a></strong></p> <p><strong><a href="https://colab.research.google.com/drive/1dx1OYKeuLglGrLnHqwJcFEiKTehUSkjF">https://colab.research.google.com/drive/1dx1OYKeuLglGrLnHqwJcFEiKTehUSkjF</a></strong></p> <p><strong><a href="https://lookerstudio.google.com/u/0/reporting/54bfa516-b46d-4a13-ab95-a17dbbac7650/page/ONPmD">https://lookerstudio.google.com/u/0/reporting/54bfa516-b46d-4a13-ab95-a17dbbac7650/page/ONPmD</a></strong></p> <p><strong><a href="https://savage-grow-plus-review.company.site/">https://savage-grow-plus-review.company.site/</a></strong></p> <p><strong><a href="https://www.scoop.it/topic/savage-grow-plus-by-savage-growus">https://www.scoop.it/topic/savage-grow-plus-by-savage-growus</a></strong></p> <p><strong><a 
href="https://savage-grow-plus-hashtagusa-male-enhancement.jimdosite.com/">https://savage-grow-plus-hashtagusa-male-enhancement.jimdosite.com/</a></strong></p> <p><strong><a href="https://soundcloud.com/savagegrowus/savage-grow-plus-usa-male-enhancementofficial-website">https://soundcloud.com/savagegrowus/savage-grow-plus-usa-male-enhancementofficial-website</a></strong></p> <p><strong><a href="https://savage-grow-plus.hashnode.dev/savage-grow-plus-reviews-2024-does-it-really-work-see-pros">https://savage-grow-plus.hashnode.dev/savage-grow-plus-reviews-2024-does-it-really-work-see-pros</a></strong></p> <p><strong><a href="https://savage-grow-plus.hashnode.dev/savage-grow-plus">https://savage-grow-plus.hashnode.dev/savage-grow-plus</a></strong></p> <p><strong><a href="https://myhealthfitnessmart.blogspot.com/2023/12/savage-grow-plus-usa-male.html">https://myhealthfitnessmart.blogspot.com/2023/12/savage-grow-plus-usa-male.html</a></strong></p> <p><strong><a href="https://carehealthreview.blogspot.com/2023/12/savage-grow-plus-reviews-2024-does-it.html">https://carehealthreview.blogspot.com/2023/12/savage-grow-plus-reviews-2024-does-it.html</a></strong></p> <p><strong><a href="https://savagegrowplus3.bandcamp.com/track/savage-grow-plus-reviews-2024-does-it-really-work-see-pros">https://savagegrowplus3.bandcamp.com/track/savage-grow-plus-reviews-2024-does-it-really-work-see-pros</a></strong></p> <p><strong><a href="https://events.humanitix.com/savage-grow-plus-reviews-2024-does-it-really-work-see-pros">https://events.humanitix.com/savage-grow-plus-reviews-2024-does-it-really-work-see-pros</a></strong></p> <p><strong><a href="https://gocrowdera.com/US/self/savage-grow-plus-review/savage-51626">https://gocrowdera.com/US/self/savage-grow-plus-review/savage-51626</a></strong></p> <p><strong><a href="https://gamma.app/docs/Savage-Grow-Plus-Reviews-2024-Does-It-Really-Work-See-Pros-9g66x7jdo6d3g9l?mode=doc">https://gamma.app/docs/Savage-Grow-Plus-Reviews-2024-Does-It-Really-Work-See-Pros-9g66x7jdo6d3g9l?mode=doc</a></strong></p> <p><strong><a href="https://www.sunflower-cissp.com/glossary/cissp/10910/savage-grow-plus-review-2024-ingredients-pros-cons-of-testosterone-booster">https://www.sunflower-cissp.com/glossary/cissp/10910/savage-grow-plus-review-2024-ingredients-pros-cons-of-testosterone-booster</a></strong></p> <p><strong><a 
href="https://www.sunflower-cissp.com/glossary/cissp/10908/savage-grow-plus-reviews-2024-does-it-really-work-see-pros">https://www.sunflower-cissp.com/glossary/cissp/10908/savage-grow-plus-reviews-2024-does-it-really-work-see-pros</a></strong></p> <p><strong><a href="https://followme.tribe.so/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fcc584a1ffb0e03841667">https://followme.tribe.so/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fcc584a1ffb0e03841667</a></strong></p> <p><strong><a href="https://medium.com/@savagegrowus/savage-grow-plus-usa-is-legit-2024-updated-report-ac9885ac948f">https://medium.com/@savagegrowus/savage-grow-plus-usa-is-legit-2024-updated-report-ac9885ac948f</a></strong></p> <p><strong><a href="https://community.thebatraanumerology.com/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fdfa0cfc576f55fef391d">https://community.thebatraanumerology.com/post/savage-grow-plus-reviews-2024---does-it-really-work-see-pros-658fdfa0cfc576f55fef391d</a></strong></p> <p><strong><a href="https://leetcode.com/discuss/interview-question/4477757/Savage-Grow-Plus-Reviews-(2024)-Does-It-Really-Work-See-Pros!">https://leetcode.com/discuss/interview-question/4477757/Savage-Grow-Plus-Reviews-(2024)-Does-It-Really-Work-See-Pros!</a></strong></p> <p><strong><a href="https://wandering.flarum.cloud/d/34985-savage-grow-plus-reviews-2024-does-it-really-work-see-pros">https://wandering.flarum.cloud/d/34985-savage-grow-plus-reviews-2024-does-it-really-work-see-pros</a></strong></p> <p><strong><a href="https://www.click4r.com/posts/g/13843995/">https://www.click4r.com/posts/g/13843995/</a></strong></p>
<p><strong><a href="URL Grow Plus</a> Review-</strong> <a href="URL Grow Plus</strong></a> is a diet formula that reveals the true size of your penis. <a href="URL Grow Plus</strong></a> is made with the best all-natural herbs to quickly give you the best results.</p> <p>Most men are obsessed with their penis size. According to most people, men with mammoth manhood are better in bed and can satisfy their partners quickly. Experts claim that many men have average-sized penises, and there is zero correlation between penis size and sexual satisfaction.</p> <p>The use of penis enhancers and pumps to increase the size is common today. Some manhood-improving practices are unsafe and expensive. <a href="URL Grow Plus</strong></a> is a dietary supplement containing natural ingredients to increase penile length and enhance overall male sexual health. Continue reading this review to find out more about this male-enhancing formula.</p> <h2><a href="URL Grow Plus &mdash; Official Website Link &mdash; Click Here</strong></a></h2> <h2><strong>► Product Name {<a href="URL Grow Plus</a>}</strong><br /><strong>► Countries Available World Wide</strong><br /><strong>► Composition Natural Organic Compound</strong><br /><strong>► Side-Effects NA</strong><br /><strong>► Rating ⭐⭐⭐⭐⭐</strong><br /><strong>► Availability Online</strong><br /><strong> Where to Buy <a href="URL Website</a><br /></strong></h2> <h2><a href="URL The Official Website To Get Your Bottle Now&rdquo;</strong></a><br /><a href="URL The Official Website To Get Your Bottle Now&rdquo;</strong></a><br /><a href="URL The Official Website To Get Your Bottle Now&rdquo;</strong></a></h2> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="422" border="0" data-original-height="446" data-original-width="677" /></a></div> <h2><strong>What&rsquo;s <a href="URL Grow Plus</a>?</strong></h2> <p>This is for men who wish to 
increase their penis size. Savage Grow plus is made in the USA in an FDA-approved facility. Each bottle of <a href="URL Grow Plus</strong></a> is made to exacting, stringent, and sterile standards. <a href="URL Grow Plus</strong></a> is used by thousands and has seen incredible results in just weeks. <a href="URL Grow Plus</strong> </a>has no side effects and is 100% safe.</p> <p>Most African tribes supposedly have significant-sized penises because of their diet. <a href="URL Grow Plus</strong></a> is a dietary supplement containing natural nutrients to increase penis size, improve orgasms, and support healthy erections. The developer claims it is a safe and practical alternative to boost male sexual health instead of using insane pumps and painful exercises, among other penis-growth remedies.</p> <p><a href="URL Grow Plus</strong></a> supposedly addresses a specific &ldquo;Blockage&rdquo; that prevents you from achieving hard erections and causes overall poor sexual health. The male booster formula is based on African-based Somba tribe ingredients. It can increase the penis size by up to 5 inches within a few weeks.</p> <p>Each <a href="URL Grow Plus</strong></a> capsule has natural and scientifically proven ingredients to amplify male health. It is safe for adults of all ages and unlikely to give users any nasty side effects. <a href="URL Grow Plus</strong></a> is available without a prescription and unlikely to interact with other medications.</p> <h2 style="text-align: center;"><strong><a href="URL OFFER)Click Here : "Savage Grow Plus USA"Official Website!</a></strong></h2> <h2><strong>How Does <a href="URL Grow Plus</a> Work?</strong></h2> <p><a href="URL Grow Plus</strong></a> works by eliminating the root of poor penis growth. The creator explains that consuming the dietary formula initiates various healing processes, including:<br />Initiate Absorption Processes</p> <p>Per the developer, the penis needs nourishment to elongate and thicken. 
Unfortunately, the American diet has little to no penis-enhancing ingredients. Consequently, most men develop small penises from a young age due to nutrient deficiency.</p> <p><strong>Restart Cellular Growth</strong></p> <p><a href="URL Grow Plus</strong></a> makers claim that some people consume the proper nutrients, but due to poor absorption and assimilation processes, the penis does not increase in size. Signs of stunted penis growth include a characteristic pink color penile skin and yellow-colored urine. Some people develop allergies that inhibit them from absorbing the penis-growing ingredients. <a href="URL Grow Plus</strong></a> supports cellular health that stimulates an increase in penis size.</p> <p><strong>Eliminate Blockages</strong></p> <p><a href="URL Grow Plus</strong> </a>has several ingredients that amplify the penis size by eradicating the &ldquo;blockage&rdquo; that hinders proper nutrient absorption and assimilation. Instead of being eliminated via urine, the natural ingredients ensure each penis cell gets the correct nutrients to augment size.</p> <p><strong>Boost Testosterone Levels</strong></p> <p><a href="URL/URL Grow Plus</strong></a> aims to unlock the key to natural penis growth and restores sexual health regardless of age. 
The dietary formula improves the overall male reproductive health ranging from the scrotum, prostate, testicles, semen, and penis.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="426" border="0" data-original-height="5336" data-original-width="8005" /></a></div> <h2><strong><a href="URL Grow Plus</a> Benefits</strong></h2> <ul> <li>It can increase the penis length and width within a few weeks</li> <li>It can help men develop and sustain long and strong erections for up to 30 minutes</li> <li>It can aid users in developing firm and strong body muscles</li> <li>It can increase the metabolic process, therefore, enabling men to lose their belly fat and excess fat mass</li> <li>It can amplify stamina allowing men to have complete sexual satisfaction</li> <li>It can fortify sexual drive, energy levels, and sexual confidence.</li> <li>It can shield men from developing prostate issues and other reproductive health problems.</li> <li><a href="URL Grow Plus</strong></a> may enhance sleep quality and alleviate anxiety</li> <li>It can improve blood pressure and sugar levels</li> </ul> <h2><strong><a href="URL Grow Plus</a> Dosage and Side Effects</strong></h2> <p style="text-align: left;"><a href="URL Grow Plus</strong></a> creators recommend consuming two capsules daily with a large glass of water. It is best to use the male enhancer for over 90 days to gain better results. Each <a href="URL Grow Plus</strong></a> capsule contains natural and science-based ingredients unlikely to give users any discomfort. The product is advertised for users above 18 years. 
The creator recommends consulting your doctor before using the formulation if you are on any medication.</p> <h2 style="text-align: center;"><strong><a href="URL">PROMO [Limited Discount]: "Savage Grow Plus USA" Official Website!</a></strong></h2> <h2><strong><a href="URL">Savage Grow Plus</a> Ingredients</strong></h2> <p><a href="URL"><strong>Savage Grow Plus</strong></a> is a combination of 14 science-based, natural ingredients. The components are based on a 2000-year-old African ritual said to sustain penis elongation and male reproductive health. The 14 elements include:</p> <p><strong>Tribulus Terrestris</strong></p> <p><a href="URL"><strong>Savage Grow Plus</strong></a> makers claim that Tribulus supports reproductive health by boosting blood flow to the penis. It has natural compounds that support nitric oxide production, which dilates the blood vessels, allowing the penis to receive oxygenated, nutrient-rich blood. Additionally, good vascularity improves the quality of erections, allowing men to develop thick, hard, and sustainable erections for extended periods.</p> <p><strong>Hawthorn Extract</strong></p> <p>Common in Africa, hawthorn is a potent antioxidant that heals damaged penile cells. Further, it may heal the tissues and stop unhealthy inflammation, thus heightening penis size. Hawthorn is a natural compound that supports healthy vascularity. It promotes healthy blood circulation, delivering oxygen and nutrients to the male reproductive organs.</p> <p><strong>Horny Goat Weed</strong></p> <p><a href="URL"><strong>Savage Grow Plus</strong></a> makers claim it is a natural male sex enhancer supporting healthy testosterone levels. It can raise sexual drive, stamina, and energy levels, giving men the sexual confidence required to satisfy their partners. Horny goat weed may also nourish the cells and repair damage to the penis and related sexual organs.</p> <p><strong>Damiana Leaf</strong></p> <p>Common in the Americas, damiana leaf can improve sexual satisfaction for men who are unable to achieve satisfactory orgasms on command. 
Damiana leaf can prevent premature ejaculation, erectile dysfunction, and other male health problems.</p> <p><strong>Muira Puama</strong></p> <p>Also named the &ldquo;potency wood,&rdquo; Muira can treat erectile dysfunction and elevate libido levels. It may also help men attain healthy erections on command. Other <a href="URL"><strong>Savage Grow Plus</strong></a> ingredients include inosine, oat straw, saw palmetto, cayenne, and Catuaba, which can naturally enhance penis length and girth.</p> <p><strong><a href="URL">Savage Grow Plus</a> Dosage</strong></p> <p><a href="URL"><strong>Savage Grow Plus</strong></a> enhancement pills should be used according to the guidelines provided on the official website. This dietary supplement has been carefully made with a blend of exotic formulas that can increase your phallus length. No other supplement can offer you this.</p> <p>You must take <a href="URL"><strong>Savage Grow Plus</strong></a> tablets twice a day: one in the morning after breakfast and one in the afternoon. These pills must be consumed with water. Do not take them with coffee, soda, energy drinks, or alcohol. Each bottle contains 60 pills, a 30-day supply at the recommended dosage.</p> <div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL"><img src="URL" alt="" width="640" height="384" border="0" data-original-height="600" data-original-width="998" /></a></div> <h2><strong><a href="URL">Savage Grow Plus</a> Pricing</strong></h2> <p><a href="URL"><strong>Savage Grow Plus</strong></a> is available only via the official website. The manufacturer is selling the product at discounted prices on all orders. The prices are as follows:</p> <ul> <li>One bottle: $69.00 plus $9.95 shipping</li> <li>Two bottles: $59.00 each plus free shipping</li> <li>Four bottles: $49.00 each plus free shipping</li> </ul> <p>A 60-day money-back guarantee protects each <a href="URL"><strong>Savage Grow Plus</strong></a> bottle. 
For more information, visit the official website.</p> <h2><strong><a href="URL">Savage Grow Plus</a> Conclusion</strong></h2> <p><a href="URL"><strong>Savage Grow Plus</strong></a> is a male enhancer containing natural ingredients to boost reproductive health. It nourishes the cells, augments blood circulation, stabilizes testosterone levels, and protects the male reproductive organs from future damage. Taking two <a href="URL"><strong>Savage Grow Plus</strong></a> pills can increase penis size by three inches, sustain libido, boost sexual confidence, and enhance orgasms. Visit the official website and try <a href="URL"><strong>Savage Grow Plus</strong></a> today!</p> <p style="text-align: center;">&nbsp;<a style="margin-left: 1em; margin-right: 1em;" href="URL"><img src="URL" alt="" width="640" height="490" border="0" data-original-height="529" data-original-width="692" /></a></p> <h2 style="text-align: center;"><strong><a href="URL">Details: *Savage Grow Plus* Read More Details on Official Website USA!</a></strong></h2>
[ 6, 465 ]
10c7c37d42010b067c8223b3325d1a58f4dd2924
# SlimOrca-Dedup-Turkish ``` Dataset Cost: USD 558 Translated with: gpt-3.5-turbo-1106 Elapsed Time: 4 hours 12 minutes ``` ## SlimOrcaPart1: ``` English Token Count: 63.983.788 Token Count After Turkish Translation: 92.292.655 Number of Successfully Translated Rows: 181.747 ``` ## SlimOrcaPart2: ``` English Token Count: 66.179.567 Token Count After Turkish Translation: 99.760.316 Number of Successfully Translated Rows: 181.745 ```
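The reported cost can be roughly sanity-checked from the token counts above. A minimal sketch, assuming gpt-3.5-turbo-1106 list pricing at the time (USD 0.001 per 1K prompt tokens, USD 0.002 per 1K completion tokens — an assumption, not stated on the card); the gap to the reported USD 558 would come from per-request prompt overhead and rows that failed translation:

```python
# Rough sanity check of the reported ~USD 558 translation cost.
# Pricing constants are ASSUMED gpt-3.5-turbo-1106 list prices.
INPUT_PRICE_PER_1K = 0.001   # USD per 1K prompt tokens (assumed)
OUTPUT_PRICE_PER_1K = 0.002  # USD per 1K completion tokens (assumed)

# English tokens fed in and Turkish tokens produced (Part1 + Part2, from the card)
english_tokens = 63_983_788 + 66_179_567
turkish_tokens = 92_292_655 + 99_760_316

def estimated_cost(in_tokens: int, out_tokens: int) -> float:
    """Return the estimated API cost in USD for one translation run."""
    return (in_tokens / 1000) * INPUT_PRICE_PER_1K + (out_tokens / 1000) * OUTPUT_PRICE_PER_1K

cost = estimated_cost(english_tokens, turkish_tokens)
print(f"estimated cost: USD {cost:,.0f}")  # lands near the reported USD 558
```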
t3aile/SlimOrca-Dedup-Turkish
[ "size_categories:100K<n<1M", "language:tr", "region:us" ]
2023-12-30T12:34:22+00:00
{"language": ["tr"], "size_categories": ["100K<n<1M"]}
2024-01-03T15:53:21+00:00
[]
[ "tr" ]
[ 24, 12, 6, 7 ]
bab760451fd16d7c2111284695e027d3deee67a5
English books from gutenberg.org with the fiction tag and at least 25 downloads, split into paragraphs. For license details, see: https://www.gutenberg.org/policy/permission.html
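The dataset's rows carry the fields `file_id`, `text_sub_id`, `text`, and `tokens`. A minimal sketch of how a Gutenberg plain-text book could be split into such paragraph records — the blank-line splitting rule and the whitespace token count are assumptions, since the card does not state the exact procedure:

```python
# Split one book into paragraph records shaped like this dataset's rows.
# ASSUMPTIONS: paragraphs are separated by blank lines (typical of Gutenberg
# plain text), and "tokens" is a crude whitespace word count.
def split_into_paragraphs(file_id: str, book_text: str):
    records = []
    sub_id = 0
    for block in book_text.split("\n\n"):
        paragraph = " ".join(block.split())  # collapse hard line wraps
        if not paragraph:
            continue  # skip empty blocks from runs of blank lines
        records.append({
            "file_id": file_id,
            "text_sub_id": sub_id,
            "text": paragraph,
            "tokens": len(paragraph.split()),
        })
        sub_id += 1
    return records

sample = "It was the best of times,\nit was the worst of times.\n\nCall me Ishmael."
rows = split_into_paragraphs("pg98", sample)  # "pg98" is a hypothetical file id
print(rows[0]["text"])
```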
sanps/GutenbergFiction
[ "language:en", "region:us" ]
2023-12-30T13:16:50+00:00
{"language": ["en"], "dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "text_sub_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1696432957, "num_examples": 393386}], "download_size": 1069041271, "dataset_size": 1696432957}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-30T13:36:03+00:00
[]
[ "en" ]
[ 10 ]
13684af030b2d314128affccbe24c1dd4bbf4edb
# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [jeonsworld/CarbonVillain-en-10.7B-v2](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T14:10:12.399093](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2/blob/main/results_2023-12-30T14-10-12.399093.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6668796366857725, "acc_stderr": 0.03160080682605906, "acc_norm": 0.6676595001042884, "acc_norm_stderr": 0.03224405354576427, "mc1": 0.5703794369645043, "mc1_stderr": 0.017329234580409095, "mc2": 0.7193736077693007, "mc2_stderr": 0.015006027966413999 }, "harness|arc:challenge|25": { "acc": 0.6851535836177475, "acc_stderr": 0.013572657703084948, "acc_norm": 0.712457337883959, "acc_norm_stderr": 0.013226719056266125 }, "harness|hellaswag|10": { "acc": 0.7128062139016133, "acc_stderr": 0.004515280911468821, "acc_norm": 0.8839872535351524, "acc_norm_stderr": 0.003195857247704915 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.75, "acc_stderr": 0.03523807393012047, "acc_norm": 0.75, "acc_norm_stderr": 0.03523807393012047 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03476590104304134, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03476590104304134 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956913, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956913 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.625531914893617, "acc_stderr": 0.03163910665367291, "acc_norm": 0.625531914893617, "acc_norm_stderr": 0.03163910665367291 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6413793103448275, "acc_stderr": 0.039966295748767186, "acc_norm": 0.6413793103448275, "acc_norm_stderr": 0.039966295748767186 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4973544973544973, "acc_stderr": 0.02575094967813039, "acc_norm": 0.4973544973544973, "acc_norm_stderr": 0.02575094967813039 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4365079365079365, "acc_stderr": 0.04435932892851466, "acc_norm": 0.4365079365079365, "acc_norm_stderr": 0.04435932892851466 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8161290322580645, "acc_stderr": 0.022037217340267822, "acc_norm": 0.8161290322580645, "acc_norm_stderr": 0.022037217340267822 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5024630541871922, "acc_stderr": 0.03517945038691063, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8121212121212121, "acc_stderr": 0.03050193405942914, "acc_norm": 0.8121212121212121, "acc_norm_stderr": 0.03050193405942914 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.02150024957603348, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.02150024957603348 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37037037037037035, "acc_stderr": 0.02944316932303154, "acc_norm": 0.37037037037037035, "acc_norm_stderr": 0.02944316932303154 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7184873949579832, "acc_stderr": 0.02921354941437217, "acc_norm": 0.7184873949579832, "acc_norm_stderr": 0.02921354941437217 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8458715596330275, "acc_stderr": 0.015480826865374308, "acc_norm": 0.8458715596330275, "acc_norm_stderr": 0.015480826865374308 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 0.033674621388960775, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 
0.033674621388960775 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8481012658227848, "acc_stderr": 0.023363878096632446, "acc_norm": 0.8481012658227848, "acc_norm_stderr": 0.023363878096632446 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7557251908396947, "acc_stderr": 0.037683359597287434, "acc_norm": 0.7557251908396947, "acc_norm_stderr": 0.037683359597287434 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.0230866350868414, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.0230866350868414 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8045977011494253, "acc_stderr": 0.014179171373424383, "acc_norm": 0.8045977011494253, "acc_norm_stderr": 0.014179171373424383 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7543352601156069, "acc_stderr": 0.023176298203992005, "acc_norm": 0.7543352601156069, "acc_norm_stderr": 0.023176298203992005 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.38994413407821227, "acc_stderr": 0.01631237662921307, "acc_norm": 0.38994413407821227, "acc_norm_stderr": 0.01631237662921307 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.729903536977492, "acc_stderr": 0.02521804037341062, "acc_norm": 0.729903536977492, "acc_norm_stderr": 0.02521804037341062 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7839506172839507, "acc_stderr": 0.022899162918445806, "acc_norm": 0.7839506172839507, "acc_norm_stderr": 0.022899162918445806 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4929078014184397, "acc_stderr": 0.02982449855912901, "acc_norm": 0.4929078014184397, "acc_norm_stderr": 0.02982449855912901 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.49282920469361147, "acc_stderr": 0.012768922739553308, "acc_norm": 0.49282920469361147, "acc_norm_stderr": 0.012768922739553308 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7389705882352942, "acc_stderr": 0.026679252270103128, "acc_norm": 0.7389705882352942, "acc_norm_stderr": 0.026679252270103128 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6764705882352942, "acc_stderr": 0.018926082916083383, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.018926082916083383 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 
0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.028123429335142783, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.028123429335142783 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454125, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454125 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 }, "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598053, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598053 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.783625730994152, "acc_stderr": 0.03158149539338733, "acc_norm": 0.783625730994152, "acc_norm_stderr": 0.03158149539338733 }, "harness|truthfulqa:mc|0": { "mc1": 0.5703794369645043, "mc1_stderr": 0.017329234580409095, "mc2": 0.7193736077693007, "mc2_stderr": 0.015006027966413999 }, "harness|winogrande|5": { "acc": 0.8334648776637726, "acc_stderr": 0.010470796496781091 }, "harness|gsm8k|5": { "acc": 0.6527672479150872, "acc_stderr": 0.013113898382146874 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
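Beyond loading the raw details, the per-task metrics under "Latest results" are plain nested dicts keyed by task name, e.g. `"harness|hendrycksTest-anatomy|5"`. A minimal sketch of filtering and averaging them — the excerpt copies a few values from this card, and the averaging helper is illustrative, not part of the leaderboard tooling:

```python
# Excerpt of the results dict from this card (a handful of tasks only).
results = {
    "harness|arc:challenge|25": {"acc": 0.6851535836177475},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.6148148148148148},
    "harness|hendrycksTest-astronomy|5": {"acc": 0.75},
    "harness|hendrycksTest-virology|5": {"acc": 0.5843373493975904},
}

def mean_acc(results: dict, prefix: str) -> float:
    """Average the 'acc' metric over tasks whose key starts with `prefix`."""
    accs = [m["acc"] for task, m in results.items() if task.startswith(prefix)]
    return sum(accs) / len(accs)

# Average only the MMLU (hendrycksTest) tasks present in the excerpt.
print(f"mean MMLU acc over excerpt: {mean_acc(results, 'harness|hendrycksTest'):.4f}")
```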
open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2
[ "region:us" ]
2023-12-30T14:12:30+00:00
{"pretty_name": "Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [jeonsworld/CarbonVillain-en-10.7B-v2](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T14:10:12.399093](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2/blob/main/results_2023-12-30T14-10-12.399093.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8045977011494253,\n \"acc_stderr\": 0.014179171373424383,\n \"acc_norm\": 0.8045977011494253,\n \"acc_norm_stderr\": 0.014179171373424383\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7543352601156069,\n \"acc_stderr\": 0.023176298203992005,\n \"acc_norm\": 0.7543352601156069,\n \"acc_norm_stderr\": 0.023176298203992005\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38994413407821227,\n \"acc_stderr\": 0.01631237662921307,\n \"acc_norm\": 0.38994413407821227,\n \"acc_norm_stderr\": 0.01631237662921307\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.729903536977492,\n \"acc_stderr\": 0.02521804037341062,\n \"acc_norm\": 0.729903536977492,\n \"acc_norm_stderr\": 
0.02521804037341062\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7839506172839507,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.7839506172839507,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.49282920469361147,\n \"acc_stderr\": 0.012768922739553308,\n \"acc_norm\": 0.49282920469361147,\n \"acc_norm_stderr\": 0.012768922739553308\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7389705882352942,\n \"acc_stderr\": 0.026679252270103128,\n \"acc_norm\": 0.7389705882352942,\n \"acc_norm_stderr\": 0.026679252270103128\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.018926082916083383,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.018926082916083383\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 
0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338733,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338733\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5703794369645043,\n \"mc1_stderr\": 0.017329234580409095,\n \"mc2\": 0.7193736077693007,\n \"mc2_stderr\": 0.015006027966413999\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.010470796496781091\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6527672479150872,\n \"acc_stderr\": 0.013113898382146874\n }\n}\n```", "repo_url": "https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|arc:challenge|25_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|gsm8k|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hellaswag|10_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-10-12.399093.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-10-12.399093.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-10-12.399093.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-10-12.399093.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-10-12.399093.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-10-12.399093.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T14-10-12.399093.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T14-10-12.399093.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["**/details_harness|winogrande|5_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T14-10-12.399093.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T14_10_12.399093", "path": ["results_2023-12-30T14-10-12.399093.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T14-10-12.399093.parquet"]}]}]}
2023-12-30T14:12:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v2 Dataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T14:10:12.399093 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
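The loading snippet the card refers to ("you can for instance do the following") was stripped from this dump; a minimal sketch with the `datasets` library might look like this. The repo id below follows the Open LLM Leaderboard's usual `details_<org>__<model>` naming convention and the helper name is illustrative — both are assumptions, not taken from this card.

```python
# Hedged sketch: load one evaluation config from this details dataset.
# REPO_ID is an assumed repo id based on the leaderboard's naming scheme.
REPO_ID = "open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2"

def load_details(config_name: str = "harness_winogrande_5", split: str = "latest"):
    """Fetch one evaluation config; needs the `datasets` package and network access."""
    from datasets import load_dataset
    return load_dataset(REPO_ID, config_name, split=split)
```

`config_name` can be any of the 63 configs listed in this card's metadata (e.g. `harness_truthfulqa_mc_0`), and the `latest` split always points at the most recent run.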
[ "# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v2\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T14:10:12.399093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v2\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T14:10:12.399093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 195, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v2\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T14:10:12.399093(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
a80e73c6278d6ee4433959b50ed3b232bc90a5f0
# Dataset Card for "detoxify" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mii-llm/detoxify
[ "region:us" ]
2023-12-30T14:15:38+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7872782, "num_examples": 2820}], "download_size": 4147804, "dataset_size": 7872782}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-30T15:31:47+00:00
[]
[]
TAGS #region-us
# Dataset Card for "detoxify" More Information needed
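The card itself is a stub, but the metadata above declares a single `train` split with `prompt`/`chosen`/`rejected` string columns, i.e. DPO-style preference triples. A hedged loading sketch (the helper name is illustrative):

```python
# Hedged sketch: load the mii-llm/detoxify preference triples declared in the
# dataset metadata (prompt / chosen / rejected).
# Needs the `datasets` package and network access.
def load_detoxify(split: str = "train"):
    from datasets import load_dataset
    return load_dataset("mii-llm/detoxify", split=split)
```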
[ "# Dataset Card for \"detoxify\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"detoxify\"\n\nMore Information needed" ]
[ 6, 13 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"detoxify\"\n\nMore Information needed" ]
783a891f1dbb1b6ec351b0f0c1169a06c98c70d6
<!-- * @Description: * @Author: shenlei * @Modified: linhui * @Date: 2023-12-19 10:31:41 * @LastEditTime: 2024-01-02 17:46:57 * @LastEditors: shenlei --> Leaderboard ![bce - 为RAG而生](https://cdn-uploads.huggingface.co/production/uploads/64745e955aba8edfb2ed561a/YyR-B3jbliQdvTwljasXu.jpeg) -------------------------- <h1 align="center">BCEmbedding: Bilingual and Crosslingual Embedding for RAG</h1> <p align="center"> <a href="https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE"> <img src="https://img.shields.io/badge/license-Apache--2.0-yellow"> </a> <a href="https://twitter.com/YDopensource"> <img src="https://img.shields.io/badge/follow-%40YDOpenSource-1DA1F2?logo=twitter&style={style}"> </a> </p> <p align="left"> <a href="https://github.com/netease-youdao/BCEmbedding">GitHub</a> </p> <details open="open"> <summary>Click to Open Contents</summary> - <a href="#-bilingual-and-crosslingual-superiority" target="_Self">🌐 Bilingual and Crosslingual Superiority</a> - <a href="#-key-features" target="_Self">💡 Key Features</a> - <a href="#-latest-updates" target="_Self">🚀 Latest Updates</a> - <a href="#-model-list" target="_Self">🍎 Model List</a> - <a href="#-manual" target="_Self">📖 Manual</a> - <a href="#installation" target="_Self">Installation</a> - <a href="#quick-start" target="_Self">Quick Start</a> - <a href="#%EF%B8%8F-evaluation" target="_Self">⚙️ Evaluation</a> - <a href="#evaluate-semantic-representation-by-mteb" target="_Self">Evaluate Semantic Representation by MTEB</a> - <a href="#evaluate-rag-by-llamaindex" target="_Self">Evaluate RAG by LlamaIndex</a> - <a href="#-leaderboard" target="_Self">📈 Leaderboard</a> - <a href="#semantic-representation-evaluations-in-mteb" target="_Self">Semantic Representation Evaluations in MTEB</a> - <a href="#rag-evaluations-in-llamaindex" target="_Self">RAG Evaluations in LlamaIndex</a> - <a href="#-youdaos-bcembedding-api" target="_Self">🛠 Youdao's BCEmbedding API</a> - <a href="#-wechat-group" target="_Self">🧲 
WeChat Group</a> - <a href="#%EF%B8%8F-citation" target="_Self">✏️ Citation</a> - <a href="#-license" target="_Self">🔐 License</a> - <a href="#-related-links" target="_Self">🔗 Related Links</a> </details> <br> **B**ilingual and **C**rosslingual **Embedding** (`BCEmbedding`), developed by NetEase Youdao, encompasses `EmbeddingModel` and `RerankerModel`. The `EmbeddingModel` specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the `RerankerModel` excels at refining search results and ranking tasks. `BCEmbedding` serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implementation, notably [QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)], an open-source implementation widely integrated in various Youdao products like [Youdao Speed Reading](https://read.youdao.com/#/home) and [Youdao Translation](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation). Distinguished for its bilingual and crosslingual proficiency, `BCEmbedding` excels in bridging Chinese and English linguistic gaps, achieving: - **High performance on <a href="#semantic-representation-evaluations-in-mteb">Semantic Representation Evaluations in MTEB</a>**; - **A new benchmark in the realm of <a href="#rag-evaluations-in-llamaindex">RAG Evaluations in LlamaIndex</a>**. 
`BCEmbedding`是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含`EmbeddingModel`和`RerankerModel`两类基础模型。`EmbeddingModel`专门用于生成语义向量,在语义搜索和问答中起着关键作用,而`RerankerModel`擅长优化语义搜索结果和语义相关顺序精排。 `BCEmbedding`作为有道的检索增强生成式应用(RAG)的基石,特别是在[QAnything](http://qanything.ai) [[github](https://github.com/netease-youdao/qanything)]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如[有道速读](https://read.youdao.com/#/home)和[有道翻译](https://fanyi.youdao.com/download-Mac?keyfrom=fanyiweb_navigation)。 `BCEmbedding`以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现: - **强大的双语和跨语种语义表征能力【<a href="#semantic-representation-evaluations-in-mteb">基于MTEB的语义表征评测指标</a>】。** - **基于LlamaIndex的RAG评测,表现SOTA【<a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>】。** ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64745e955aba8edfb2ed561a/MrzcIbnvBFMsyDG-dCY6Y.jpeg) ## 🌐 Bilingual and Crosslingual Superiority Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. `BCEmbedding`, leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings. `EmbeddingModel` supports ***Chinese (ch) and English (en)*** (support for more languages will come soon), while `RerankerModel` supports ***Chinese (ch), English (en), Japanese (ja) and Korean (ko)***. 现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。`BCEmbedding`充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。 `EmbeddingModel`支持***中文和英文***(之后会支持更多语种);`RerankerModel`支持***中文,英文,日文和韩文***。 ## 💡 Key Features - **Bilingual and Crosslingual Proficiency**: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval tasks, with upcoming support for additional languages. - **RAG-Optimized**: Tailored for diverse RAG tasks including **translation, summarization, and question answering**, ensuring accurate **query understanding**. 
See <a href=#rag-evaluations-in-llamaindex>RAG Evaluations in LlamaIndex</a>. - **Efficient and Precise Retrieval**: A dual-encoder `EmbeddingModel` for efficient retrieval in the first stage, and a cross-encoder `RerankerModel` for enhanced precision and deeper semantic analysis in the second stage. - **Broad Domain Adaptability**: Trained on diverse datasets for superior performance across various fields. - **User-Friendly Design**: Instruction-free, versatile use for multiple tasks without specifying a query instruction for each task. - **Meaningful Reranking Scores**: `RerankerModel` provides relevant scores to improve result quality and optimize large language model performance. - **Proven in Production**: Successfully implemented and validated in Youdao's products. - **双语和跨语种能力**:基于有道翻译引擎的强大能力,我们的`BCEmbedding`具备强大的中英双语和跨语种语义表征能力。 - **RAG适配**:面向RAG做了针对性优化,可以适配大多数相关任务,比如**翻译,摘要,问答**等。此外,针对**问题理解**(query understanding)也做了针对优化,详见 <a href="#rag-evaluations-in-llamaindex">基于LlamaIndex的RAG评测指标</a>。 - **高效且精确的语义检索**:`EmbeddingModel`采用双编码器,可以在第一阶段实现高效的语义检索。`RerankerModel`采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。 - **更好的领域泛化性**:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。 - **用户友好**:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。 - **有意义的重排序分数**:`RerankerModel`可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。 - **产品化检验**:`BCEmbedding`已经被有道众多真实产品检验。 ## 🚀 Latest Updates - ***2024-01-03***: **Model Releases** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1) and [bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1) are available. - ***2024-01-03***: **Eval Datasets** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - Evaluate the performance of RAG, using [LlamaIndex](https://github.com/run-llama/llama_index). 
- ***2024-01-03***: **Eval Datasets** [[Details](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - Evaluate the performance of crosslingual semantic representation, using [MTEB](https://github.com/embeddings-benchmark/mteb). - ***2024-01-03***: **模型发布** - [bce-embedding-base_v1](https://huggingface.co/maidalun1020/bce-embedding-base_v1)和[bce-reranker-base_v1](https://huggingface.co/maidalun1020/bce-reranker-base_v1)已发布。 - ***2024-01-03***: **RAG评测数据** [[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)] - 基于[LlamaIndex](https://github.com/run-llama/llama_index)的RAG评测数据已发布。 - ***2024-01-03***: **跨语种语义表征评测数据** [[详情](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)] - 基于[MTEB](https://github.com/embeddings-benchmark/mteb)的跨语种评测数据已发布。 ## 🍎 Model List | Model Name | Model Type | Languages | Parameters | Weights | |:-------------------------------|:--------:|:--------:|:--------:|:--------:| | bce-embedding-base_v1 | `EmbeddingModel` | ch, en | 279M | [download](https://huggingface.co/maidalun1020/bce-embedding-base_v1) | | bce-reranker-base_v1 | `RerankerModel` | ch, en, ja, ko | 279M | [download](https://huggingface.co/maidalun1020/bce-reranker-base_v1) | ## 📖 Manual ### Installation First, create a conda environment and activate it. ```bash conda create --name bce python=3.10 -y conda activate bce ``` Then install `BCEmbedding`: ```bash pip install git+https://github.com/netease-youdao/BCEmbedding.git ``` Or install from source: ```bash git clone [email protected]:netease-youdao/BCEmbedding.git cd BCEmbedding pip install -v -e . ``` ### Quick Start Use `EmbeddingModel` from `BCEmbedding`; the `cls` [pooler](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/models/embedding.py#L24) is the default. 
```python from BCEmbedding import EmbeddingModel # list of sentences sentences = ['sentence_0', 'sentence_1', ...] # init embedding model model = EmbeddingModel(model_name_or_path="maidalun1020/bce-embedding-base_v1") # extract embeddings embeddings = model.encode(sentences) ``` Use `RerankerModel` from `BCEmbedding` to calculate relevant scores and rerank: ```python from BCEmbedding import RerankerModel # your query and corresponding passages query = 'input_query' passages = ['passage_0', 'passage_1', ...] # construct sentence pairs sentence_pairs = [[query, passage] for passage in passages] # init reranker model model = RerankerModel(model_name_or_path="maidalun1020/bce-reranker-base_v1") # method 0: calculate scores of sentence pairs scores = model.compute_score(sentence_pairs) # method 1: rerank passages rerank_results = model.rerank(query, passages) ``` ## ⚙️ Evaluation ### Evaluate Semantic Representation by MTEB We provide evaluation tools for `embedding` and `reranker` models, based on [MTEB](https://github.com/embeddings-benchmark/mteb) and [C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB). 我们基于[MTEB](https://github.com/embeddings-benchmark/mteb)和[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB),提供`embedding`和`reranker`模型的语义表征评测工具。 #### 1. Embedding Models Just run the following command to evaluate `your_embedding_model` (e.g. `maidalun1020/bce-embedding-base_v1`) in **monolingual, bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`). 运行下面命令评测`your_embedding_model`(比如,`maidalun1020/bce-embedding-base_v1`)。评测任务将会在**单语种,双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测: ```bash python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path maidalun1020/bce-embedding-base_v1 --pooler cls ``` The total evaluation tasks contain ***114 datasets*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"**. 
评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的 ***114个数据集***。 ***NOTE:*** - All models are evaluated in their **recommended pooling method (`pooler`)**. "jina-embeddings-v2-base-en", "m3e-base" and "m3e-large" use the `mean` pooler, while the others use `cls`. - The "jina-embeddings-v2-base-en" model should be loaded with `trust_remote_code`. ```bash python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path {moka-ai/m3e-base | moka-ai/m3e-large} --pooler mean python BCEmbedding/tools/eval_mteb/eval_embedding_mteb.py --model_name_or_path jinaai/jina-embeddings-v2-base-en --pooler mean --trust_remote_code ``` ***注意:*** - 所有模型的评测采用各自推荐的`pooler`。"jina-embeddings-v2-base-en", "m3e-base"和"m3e-large"的 `pooler`采用`mean`,其他模型的`pooler`采用`cls`。 - "jina-embeddings-v2-base-en"模型在载入时需要`trust_remote_code`。 #### 2. Reranker Models Run the following command to evaluate `your_reranker_model` (e.g. "maidalun1020/bce-reranker-base_v1") in **monolingual, bilingual and crosslingual settings** (e.g. `["en", "zh", "en-zh", "zh-en"]`). 运行下面命令评测`your_reranker_model`(比如,`maidalun1020/bce-reranker-base_v1`)。评测任务将会在**单语种,双语种和跨语种**(比如,`["en", "zh", "en-zh", "zh-en"]`)模式下评测: ```bash python BCEmbedding/tools/eval_mteb/eval_reranker_mteb.py --model_name_or_path maidalun1020/bce-reranker-base_v1 ``` The evaluation tasks contain ***12 datasets*** of **"Reranking"**. 评测包含 **"Reranking"** 任务的 ***12个数据集***。 #### 3. Metrics Visualization Tool We provide a one-click script to summarize the evaluation results of `embedding` and `reranker` models as [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md) and [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md). 
我们提供了`embedding`和`reranker`模型的指标可视化一键脚本,输出一个markdown文件,详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)和[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。 ```bash python BCEmbedding/evaluation/mteb/summarize_eval_results.py --results_dir {your_embedding_results_dir | your_reranker_results_dir} ``` ### Evaluate RAG by LlamaIndex [LlamaIndex](https://github.com/run-llama/llama_index) is a famous data framework for LLM-based applications, particularly in RAG. Recently, the [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) evaluated popular embedding and reranker models in a RAG pipeline and attracted great attention. Now, we follow its pipeline to evaluate our `BCEmbedding`. [LlamaIndex](https://github.com/run-llama/llama_index)是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证`BCEmbedding`在RAG中的效果。 First, install LlamaIndex: ```bash pip install llama-index==0.9.22 ``` #### 1. Metrics Definition - Hit Rate: Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. ***The larger, the better.*** - Mean Reciprocal Rank (MRR): For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. 
***The larger, the better.*** - 命中率(Hit Rate) 命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。***该指标越大越好。*** - 平均倒数排名(Mean Reciprocal Rank,MRR) 对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。***该指标越大越好。*** #### 2. Reproduce [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) In order to compare our `BCEmbedding` with other embedding and reranker models fairly, we provide a one-click script to reproduce the results of the LlamaIndex Blog, including our `BCEmbedding`: 为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将`BCEmbedding`与其他embedding和reranker模型进行对比分析: ```bash # There should be two GPUs available at least. CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_reproduce.py ``` Then, summarize the evaluation results by: ```bash python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_reproduce_results ``` Results reproduced from the LlamaIndex Blog can be checked in ***[Reproduced Summary of RAG Evaluation](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***, with some obvious ***conclusions***: - In the `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models. - With the embedding model fixed, our `bce-reranker-base_v1` achieves the best performance. - ***The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA.*** 输出的指标汇总详见 ***[LlamaIndex RAG评测结果复现](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/rag_eval_reproduced_summary.md)***。从该复现结果中,可以看出: - 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`比其他embedding模型效果都要好。 - 在固定embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好。 - ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。*** #### 3. 
Broad Domain Adaptability The evaluation of [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83) is **monolingual, small amount of data, and specific domain** (just including "llama2" paper). In order to evaluate the **broad domain adaptability, bilingual and crosslingual capability**, we follow the blog to build a multiple domains evaluation dataset (includding "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named [CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset), **by OpenAI `gpt-4-1106-preview` for high quality**. 在上述的[LlamaIndex博客](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)的评测数据只用了“llama2”这一篇文章,该评测是 **单语种,小数据量,特定领域** 的。为了兼容更真实更广的用户使用场景,评测算法模型的 **领域泛化性,双语和跨语种能力**,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,[CrosslingualMultiDomainsDataset](https://huggingface.co/datasets/maidalun1020/CrosslingualMultiDomainsDataset)。**为了保证构建数据的高质量,我们采用OpenAI的`gpt-4-1106-preview`。** First, run following cmd to evaluate the most popular and powerful embedding and reranker models: ```bash # There should be two GPUs available at least. CUDA_VISIBLE_DEVICES=0,1 python BCEmbedding/tools/eval_rag/eval_llamaindex_multiple_domains.py ``` Then, run the following script to sumarize the evaluation results: ```bash python BCEmbedding/tools/eval_rag/summarize_eval_results.py --results_dir results/rag_results ``` The summary of multiple domains evaluations can be seen in <a href=#1-multiple-domains-scenarios>Multiple Domains Scenarios</a>. ## 📈 Leaderboard ### Semantic Representation Evaluations in MTEB #### 1. 
Embedding Models | Model | Retrieval | STS | PairClassification | Classification | Reranking | Clustering | Avg | |:-------------------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:| | bge-base-en-v1.5 | 37.14 | 55.06 | 75.45 | 59.73 | 43.05 | 37.74 | 47.20 | | bge-base-zh-v1.5 | 47.60 | 63.72 | 77.40 | 63.38 | 54.85 | 32.56 | 53.60 | | bge-large-en-v1.5 | 37.15 | 54.09 | 75.00 | 59.24 | 42.68 | 37.32 | 46.82 | | bge-large-zh-v1.5 | 47.54 | 64.73 | **79.14** | 64.19 | 55.88 | 33.26 | 54.21 | | jina-embeddings-v2-base-en | 31.58 | 54.28 | 74.84 | 58.42 | 41.16 | 34.67 | 44.29 | | m3e-base | 46.29 | 63.93 | 71.84 | 64.08 | 52.38 | 37.84 | 53.54 | | m3e-large | 34.85 | 59.74 | 67.69 | 60.07 | 48.99 | 31.62 | 46.78 | | ***bce-embedding-base_v1*** | **57.60** | **65.73** | 74.96 | **69.00** | **57.29** | **38.95** | **59.43** | ***NOTE:*** - Our ***bce-embedding-base_v1*** outperforms other opensource embedding models with various model size. - ***114 datastes*** of **"Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering"** in `["en", "zh", "en-zh", "zh-en"]` setting. - The [crosslingual evaluation datasets](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py) we released belong to `Retrieval` task. - More evaluation details please check [Embedding Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md). 
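For intuition on what the Retrieval scores above ultimately measure: an embedding model maps queries and documents to vectors, and candidate documents are ranked by cosine similarity to the query. The sketch below uses made-up 3-d vectors standing in for real embeddings and plain Python — it illustrates only the ranking step, not the MTEB harness itself:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, doc_vecs, top_k=2):
    """Rank documents by cosine similarity to the query; return their indices."""
    order = sorted(range(len(doc_vecs)),
                   key=lambda i: cosine(query_vec, doc_vecs[i]),
                   reverse=True)
    return order[:top_k]

# Toy 3-d "embeddings" standing in for model outputs.
query = [1.0, 0.1, 0.0]
docs = [
    [0.0, 1.0, 0.2],   # doc 0: unrelated to the query
    [0.9, 0.2, 0.1],   # doc 1: close to the query
    [0.5, 0.5, 0.5],   # doc 2: middling
]
print(retrieve(query, docs))  # → [1, 2]
```

A reranker (cross-encoder) would then rescore this short candidate list jointly with the query, which is the second stage evaluated in the RAG tables below.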
***要点:***

- 对比所有开源的各种规模的embedding模型,***bce-embedding-base_v1*** 表现最好。
- 评测包含 **"Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering"** 这六大类任务的共 ***114个数据集***。
- 我们开源的[跨语种语义表征评测数据](https://github.com/netease-youdao/BCEmbedding/blob/master/BCEmbedding/evaluation/c_mteb/Retrieval.py)属于`Retrieval`任务。
- 更详细的评测结果详见[Embedding模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/embedding_eval_summary.md)。

#### 2. Reranker Models

| Model | Reranking | Avg |
|:-------------------------------|:--------:|:--------:|
| bge-reranker-base | 57.78 | 57.78 |
| bge-reranker-large | 59.69 | 59.69 |
| ***bce-reranker-base_v1*** | **60.06** | **60.06** |

***NOTE:***

- Our ***bce-reranker-base_v1*** outperforms other open-source reranker models.
- ***12 datasets*** of **"Reranking"** in the `["en", "zh", "en-zh", "zh-en"]` setting.
- For more evaluation details, please check the [Reranker Models Evaluation Summary](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md).

***要点:***

- ***bce-reranker-base_v1*** 优于其他开源reranker模型。
- 评测包含 **"Reranking"** 任务的 ***12个数据集***。
- 更详细的评测结果详见[Reranker模型指标汇总](https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/EvaluationSummary/reranker_eval_summary.md)。

### RAG Evaluations in LlamaIndex

#### 1. Multiple Domains Scenarios

| Embedding Models | WithoutReranker <br> [*hit_rate/mrr*] | CohereRerank <br> [*hit_rate/mrr*] | bge-reranker-large <br> [*hit_rate/mrr*] | ***bce-reranker-base_v1*** <br> [*hit_rate/mrr*] |
|:-------------------------------|:--------:|:--------:|:--------:|:--------:|
| OpenAI-ada-2 | 81.04/57.35 | 88.35/67.83 | 88.89/69.64 | **90.71/75.46** |
| bge-large-en-v1.5 | 52.67/34.69 | 64.59/52.11 | 64.71/52.05 | **65.36/55.50** |
| bge-large-zh-v1.5 | 69.81/47.38 | 79.37/62.13 | 80.11/63.95 | **81.19/68.50** |
| llm-embedder | 50.85/33.26 | 63.62/51.45 | 63.54/51.32 | **64.47/54.98** |
| CohereV3-en | 53.10/35.39 | 65.75/52.80 | 66.29/53.31 | **66.91/56.93** |
| CohereV3-multilingual | 79.80/57.22 | 86.34/66.62 | 86.76/68.56 | **88.35/73.73** |
| JinaAI-v2-Base-en | 50.27/32.31 | 63.97/51.10 | 64.28/51.83 | **64.82/54.98** |
| ***bce-embedding-base_v1*** | **85.91/62.36** | **91.25/69.38** | **91.80/71.13** | ***93.46/77.02*** |

***NOTE:***

- In the `WithoutReranker` setting, our `bce-embedding-base_v1` outperforms all the other embedding models.
- With the embedding model fixed, our `bce-reranker-base_v1` achieves the best performance.
- **The combination of `bce-embedding-base_v1` and `bce-reranker-base_v1` is SOTA**.

***要点:***

- 在`WithoutReranker`设置下(**竖排对比**),`bce-embedding-base_v1`优于其他Embedding模型,包括开源和闭源。
- 在固定Embedding模型设置下,对比不同reranker效果(**横排对比**),`bce-reranker-base_v1`比其他reranker模型效果都要好,包括开源和闭源。
- ***`bce-embedding-base_v1`和`bce-reranker-base_v1`组合,表现SOTA。***

## 🛠 Youdao's BCEmbedding API

For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, `BCEmbedding` is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at [Youdao BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html). Here, you'll find all the necessary guidance to easily implement `BCEmbedding` across a variety of use cases, ensuring a smooth and effective integration for optimal results.

对于那些更喜欢直接调用api的用户,有道提供方便的`BCEmbedding`调用api。该方式是一种简化和高效的方式,将`BCEmbedding`集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见[有道BCEmbedding API](https://ai.youdao.com/DOCSIRMA/html/aigc/api/embedding/index.html)。

## 🧲 WeChat Group

Welcome to scan the QR code below and join the WeChat group.

欢迎大家扫码加入官方微信交流群。

<img src="https://github.com/netease-youdao/BCEmbedding/blob/master/Docs/assets/Wechat.jpg" width="20%" height="auto">

## ✏️ Citation

If you use `BCEmbedding` in your research or project, please feel free to cite and star it:

如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~

```
@misc{youdao_bcembedding_2023,
    title={BCEmbedding: Bilingual and Crosslingual Embedding for RAG},
    author={NetEase Youdao, Inc.},
    year={2023},
    howpublished={\url{https://github.com/netease-youdao/BCEmbedding}}
}
```

## 🔐 License

`BCEmbedding` is licensed under the [Apache 2.0 License](https://github.com/netease-youdao/BCEmbedding/blob/master/LICENSE).

## 🔗 Related Links

[Netease Youdao - QAnything](https://github.com/netease-youdao/qanything)

[FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding)

[MTEB](https://github.com/embeddings-benchmark/mteb)

[C_MTEB](https://github.com/FlagOpen/FlagEmbedding/tree/master/C_MTEB)

[LLama Index](https://github.com/run-llama/llama_index) | [LlamaIndex Blog](https://blog.llamaindex.ai/boosting-rag-picking-the-best-embedding-reranker-models-42d079022e83)
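The *hit_rate* and *mrr* figures reported in the RAG tables above follow the Metrics Definition section. As a self-contained illustration with toy query results (not the LlamaIndex evaluation script), the two metrics reduce to a few lines:

```python
def hit_rate(results, k=5):
    """Fraction of queries whose relevant doc id appears in the top-k retrieved ids.
    `results` is a list of (retrieved_ids, relevant_id) pairs, one per query."""
    hits = sum(1 for retrieved, relevant in results if relevant in retrieved[:k])
    return hits / len(results)

def mrr(results):
    """Mean Reciprocal Rank: average of 1/rank of the first relevant doc (0 if absent)."""
    total = 0.0
    for retrieved, relevant in results:
        if relevant in retrieved:
            total += 1.0 / (retrieved.index(relevant) + 1)
    return total / len(results)

# Two toy queries: the first finds its relevant doc at rank 1, the second at rank 2.
results = [(["d3", "d1", "d7"], "d3"),
           (["d2", "d5", "d9"], "d5")]
print(hit_rate(results, k=3))  # → 1.0
print(mrr(results))            # → (1 + 1/2) / 2 = 0.75
```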
maidalun1020/CrosslingualMultiDomainsDataset
[ "license:apache-2.0", "region:us" ]
2023-12-30T14:34:44+00:00
{"license": "apache-2.0", "configs": [{"config_name": "default", "data_files": [{"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "queries", "dtype": "string"}, {"name": "corpus", "dtype": "string"}, {"name": "relevant_docs", "dtype": "string"}, {"name": "mode", "dtype": "string"}, {"name": "nodes", "dtype": "string"}, {"name": "pdf_file", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 3816670, "num_examples": 12}], "download_size": 1280352, "dataset_size": 3816670}}
2024-02-07T04:35:17+00:00
Leadboard !bce - 为RAG而生 --- BCEmbedding: Bilingual and Crosslingual Embedding for RAG ========================================================= [Click to Open Contents * [Bilingual and Crosslingual Superiority](#-bilingual-and-crosslingual-superiority) * [Key Features](#-key-features) * [Latest Updates](#-latest-updates) * [Model List](#-model-list) * [Manual](#-manual) + [Installation](#installation) + [Quick Start](#quick-start) * [️ Evaluation](#%EF%B8%8F-evaluation) + [Evaluate Semantic Representation by MTEB](#evaluate-semantic-representation-by-mteb) + [Evaluate RAG by LlamaIndex](#evaluate-rag-by-llamaindex) * [Leaderboard](#-leaderboard) + [Semantic Representation Evaluations in MTEB](#semantic-representation-evaluations-in-mteb) + [RAG Evaluations in LlamaIndex](#rag-evaluations-in-llamaindex) * [Youdao's BCEmbedding API](#-youdaos-bcembedding-api) * [WeChat Group](#-wechat-group) * [️ Citation](#%EF%B8%8F-citation) * [License](#-license) * [Related Links](#-related-links) Bilingual and Crosslingual Embedding ('BCEmbedding'), developed by NetEase Youdao, encompasses 'EmbeddingModel' and 'RerankerModel'. The 'EmbeddingModel' specializes in generating semantic vectors, playing a crucial role in semantic search and question-answering, and the 'RerankerModel' excels at refining search results and ranking tasks. 'BCEmbedding' serves as the cornerstone of Youdao's Retrieval Augmented Generation (RAG) implmentation, notably QAnything [github], an open-source implementation widely integrated in various Youdao products like Youdao Speed Reading and Youdao Translation. Distinguished for its bilingual and crosslingual proficiency, 'BCEmbedding' excels in bridging Chinese and English linguistic gaps, which achieves * A high performence on [Semantic Representation Evaluations in MTEB](#semantic-representation-evaluations-in-mteb); * A new benchmark in the realm of [RAG Evaluations in LlamaIndex](#rag-evaluations-in-llamaindex). 
'BCEmbedding'是由网易有道开发的双语和跨语种语义表征算法模型库,其中包含'EmbeddingModel'和'RerankerModel'两类基础模型。'EmbeddingModel'专门用于生成语义向量,在语义搜索和问答中起着关键作用,而'RerankerModel'擅长优化语义搜索结果和语义相关顺序精排。 'BCEmbedding'作为有道的检索增强生成式应用(RAG)的基石,特别是在QAnything [github]中发挥着重要作用。QAnything作为一个网易有道开源项目,在有道许多产品中有很好的应用实践,比如有道速读和有道翻译 'BCEmbedding'以其出色的双语和跨语种能力而著称,在语义检索中消除中英语言之间的差异,从而实现: + 强大的双语和跨语种语义表征能力【[基于MTEB的语义表征评测指标](#semantic-representation-evaluations-in-mteb)】。 + 基于LlamaIndex的RAG评测,表现SOTA【[基于LlamaIndex的RAG评测指标](#rag-evaluations-in-llamaindex)】。 !image/jpeg Bilingual and Crosslingual Superiority -------------------------------------- Existing embedding models often encounter performance challenges in bilingual and crosslingual scenarios, particularly in Chinese, English and their crosslingual tasks. 'BCEmbedding', leveraging the strength of Youdao's translation engine, excels in delivering superior performance across monolingual, bilingual, and crosslingual settings. 'EmbeddingModel' supports *Chinese (ch) and English (en)* (more languages support will come soon), while 'RerankerModel' supports *Chinese (ch), English (en), Japanese (ja) and Korean (ko)*. 现有的单个语义表征模型在双语和跨语种场景中常常表现不佳,特别是在中文、英文及其跨语种任务中。'BCEmbedding'充分利用有道翻译引擎的优势,实现只需一个模型就可以在单语、双语和跨语种场景中表现出卓越的性能。 'EmbeddingModel'支持*中文和英文*(之后会支持更多语种);'RerankerModel'支持*中文,英文,日文和韩文*。 Key Features ------------ * Bilingual and Crosslingual Proficiency: Powered by Youdao's translation engine, excelling in Chinese, English and their crosslingual retrieval task, with upcoming support for additional languages. * RAG-Optimized: Tailored for diverse RAG tasks including translation, summarization, and question answering, ensuring accurate query understanding. See [RAG Evaluations in LlamaIndex](#rag-evaluations-in-llamaindex). * Efficient and Precise Retrieval: Dual-encoder for efficient retrieval of 'EmbeddingModel' in first stage, and cross-encoder of 'RerankerModel' for enhanced precision and deeper semantic analysis in second stage. 
* Broad Domain Adaptability: Trained on diverse datasets for superior performance across various fields. * User-Friendly Design: Instruction-free, versatile use for multiple tasks without specifying query instruction for each task. * Meaningful Reranking Scores: 'RerankerModel' provides relevant scores to improve result quality and optimize large language model performance. * Proven in Production: Successfully implemented and validated in Youdao's products. + 双语和跨语种能力:基于有道翻译引擎的强大能力,我们的'BCEmbedding'具备强大的中英双语和跨语种语义表征能力。 + RAG适配:面向RAG做了针对性优化,可以适配大多数相关任务,比如翻译,摘要,问答等。此外,针对问题理解(query understanding)也做了针对优化,详见 [基于LlamaIndex的RAG评测指标](#rag-evaluations-in-llamaindex)。 + 高效且精确的语义检索:'EmbeddingModel'采用双编码器,可以在第一阶段实现高效的语义检索。'RerankerModel'采用交叉编码器,可以在第二阶段实现更高精度的语义顺序精排。 + 更好的领域泛化性:为了在更多场景实现更好的效果,我们收集了多种多样的领域数据。 + 用户友好:语义检索时不需要特殊指令前缀。也就是,你不需要为各种任务绞尽脑汁设计指令前缀。 + 有意义的重排序分数:'RerankerModel'可以提供有意义的语义相关性分数(不仅仅是排序),可以用于过滤无意义文本片段,提高大模型生成效果。 + 产品化检验:'BCEmbedding'已经被有道众多真实产品检验。 Latest Updates -------------- * *2024-01-03*: Model Releases - bce-embedding-base\_v1 and bce-reranker-base\_v1 are available. * *2024-01-03*: Eval Datasets [CrosslingualMultiDomainsDataset] - Evaluate the performence of RAG, using LlamaIndex. * *2024-01-03*: Eval Datasets [Details] - Evaluate the performence of crosslingual semantic representation, using MTEB. + *2024-01-03*: 模型发布 - bce-embedding-base\_v1和bce-reranker-base\_v1已发布. + *2024-01-03*: RAG评测数据 [CrosslingualMultiDomainsDataset] - 基于LlamaIndex的RAG评测数据已发布。 + *2024-01-03*: 跨语种语义表征评测数据 [详情] - 基于MTEB的跨语种评测数据已发布. Model List ---------- Manual ------ ### Installation First, create a conda environment and activate it. Then install 'BCEmbedding': Or install from source: ### Quick Start Use 'EmbeddingModel' by 'BCEmbedding', and 'cls' pooler is default. 
Use 'RerankerModel' by 'BCEmbedding' to calculate relevant scores and rerank: ️ Evaluation ------------ ### Evaluate Semantic Representation by MTEB We provide evaluateion tools for 'embedding' and 'reranker' models, based on MTEB and C\_MTEB. 我们基于MTEB和C\_MTEB,提供'embedding'和'reranker'模型的语义表征评测工具。 #### 1. Embedding Models Just run following cmd to evaluate 'your\_embedding\_model' (e.g. 'maidalun1020/bce-embedding-base\_v1') in monolingual, bilingual and crosslingual settings (e.g. '["en", "zh", "en-zh", "zh-en"]'). 运行下面命令评测'your\_embedding\_model'(比如,'maidalun1020/bce-embedding-base\_v1')。评测任务将会在单语种,双语种和跨语种(比如,'["en", "zh", "en-zh", "zh-en"]')模式下评测: The total evaluation tasks contain *114 datastes* of "Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering". 评测包含 "Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering" 这六大类任务的 *114个数据集*。 *NOTE:* * All models are evaluated in their recommended pooling method ('pooler'). "jina-embeddings-v2-base-en", "m3e-base" and "m3e-large" use 'mean' pooler, while the others use 'cls'. * "jina-embeddings-v2-base-en" model should be loaded with 'trust\_remote\_code'. *注意:* + 所有模型的评测采用各自推荐的'pooler'。"jina-embeddings-v2-base-en", "m3e-base"和"m3e-large"的 'pooler'采用'mean',其他模型的'pooler'采用'cls'. + "jina-embeddings-v2-base-en"模型在载入时需要'trust\_remote\_code'。 #### 2. Reranker Models Run following cmd to evaluate 'your\_reranker\_model' (e.g. "maidalun1020/bce-reranker-base\_v1") in monolingual, bilingual and crosslingual settings (e.g. '["en", "zh", "en-zh", "zh-en"]'). 运行下面命令评测'your\_reranker\_model'(比如,'maidalun1020/bce-reranker-base\_v1')。评测任务将会在单语种,双语种和跨语种(比如,'["en", "zh", "en-zh", "zh-en"]')模式下评测: The evaluation tasks contain *12 datastes* of "Reranking". 评测包含 "Reranking" 任务的 *12个数据集*。 #### 3. 
Metrics Visualization Tool We proveide a one-click script to sumarize evaluation results of 'embedding' and 'reranker' models as Embedding Models Evaluation Summary and Reranker Models Evaluation Summary. 我们提供了'embedding'和'reranker'模型的指标可视化一键脚本,输出一个markdown文件,详见Embedding模型指标汇总和Reranker模型指标汇总。 ### Evaluate RAG by LlamaIndex LlamaIndex is a famous data framework for LLM-based applications, particularly in RAG. Recently, the LlamaIndex Blog has evaluated the popular embedding and reranker models in RAG pipeline and attract great attention. Now, we follow its pipeline to evaluate our 'BCEmbedding'. LlamaIndex是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,LlamaIndex博客对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证'BCEmbedding'在RAG中的效果。 First, install LlamaIndex: #### 1. Metrics Definition * Hit Rate: Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. *The larger, the better.* * Mean Reciprocal Rank (MRR): For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. *The larger, the better.* + 命中率(Hit Rate) 命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。*该指标越大越好。* + 平均倒数排名(Mean Reciprocal Rank,MRR) 对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。*该指标越大越好。* #### 2. 
Reproduce LlamaIndex Blog In order to compare our 'BCEmbedding' with other embedding and reranker models fairly, we provide a one-click script to reproduce results of the LlamaIndex Blog, including our 'BCEmbedding': 为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将'BCEmbedding'与其他embedding和reranker模型进行对比分析: Then, sumarize the evaluation results by: Results Reproduced from the LlamaIndex Blog can be checked in *Reproduced Summary of RAG Evaluation*, with some obvious *conclusions*: * In 'WithoutReranker' setting, our 'bce-embedding-base\_v1' outperforms all the other embedding models. * With fixing the embedding model, our 'bce-reranker-base\_v1' achieves the best performence. * *The combination of 'bce-embedding-base\_v1' and 'bce-reranker-base\_v1' is SOTA.* 输出的指标汇总详见 *LlamaIndex RAG评测结果复现*。从该复现结果中,可以看出: + 在'WithoutReranker'设置下(竖排对比),'bce-embedding-base\_v1'比其他embedding模型效果都要好。 + 在固定embedding模型设置下,对比不同reranker效果(横排对比),'bce-reranker-base\_v1'比其他reranker模型效果都要好。 + *'bce-embedding-base\_v1'和'bce-reranker-base\_v1'组合,表现SOTA。* #### 3. Broad Domain Adaptability The evaluation of LlamaIndex Blog is monolingual, small amount of data, and specific domain (just including "llama2" paper). In order to evaluate the broad domain adaptability, bilingual and crosslingual capability, we follow the blog to build a multiple domains evaluation dataset (includding "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named CrosslingualMultiDomainsDataset, by OpenAI 'gpt-4-1106-preview' for high quality. 
在上述的LlamaIndex博客的评测数据只用了“llama2”这一篇文章,该评测是 单语种,小数据量,特定领域 的。为了兼容更真实更广的用户使用场景,评测算法模型的 领域泛化性,双语和跨语种能力,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,CrosslingualMultiDomainsDataset。为了保证构建数据的高质量,我们采用OpenAI的'gpt-4-1106-preview'。 First, run following cmd to evaluate the most popular and powerful embedding and reranker models: Then, run the following script to sumarize the evaluation results: The summary of multiple domains evaluations can be seen in [Multiple Domains Scenarios](#1-multiple-domains-scenarios). Leaderboard ----------- ### Semantic Representation Evaluations in MTEB #### 1. Embedding Models *NOTE:* * Our *bce-embedding-base\_v1* outperforms other opensource embedding models with various model size. * *114 datastes* of "Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering" in '["en", "zh", "en-zh", "zh-en"]' setting. * The crosslingual evaluation datasets we released belong to 'Retrieval' task. * More evaluation details please check Embedding Models Evaluation Summary. *要点:* + 对比所有开源的各种规模的embedding模型,*bce-embedding-base\_v1* 表现最好。 + 评测包含 "Retrieval", "STS", "PairClassification", "Classification", "Reranking"和"Clustering" 这六大类任务的共 *114个数据集*。 + 我们开源的跨语种语义表征评测数据属于'Retrieval'任务。 + 更详细的评测结果详见Embedding模型指标汇总。 #### 2. Reranker Models *NOTE:* * Our *bce-reranker-base\_v1* outperforms other opensource reranker models. * *12 datastes* of "Reranking" in '["en", "zh", "en-zh", "zh-en"]' setting. * More evaluation details please check Reranker Models Evaluation Summary. *要点:* + *bce-reranker-base\_v1* 优于其他开源reranker模型。 + 评测包含 "Reranking" 任务的 *12个数据集*。 + 更详细的评测结果详见Reranker模型指标汇总 ### RAG Evaluations in LlamaIndex #### 1. Multiple Domains Scenarios *NOTE:* * In 'WithoutReranker' setting, our 'bce-embedding-base\_v1' outperforms all the other embedding models. * With fixing the embedding model, our 'bce-reranker-base\_v1' achieves the best performence. 
* The combination of 'bce-embedding-base\_v1' and 'bce-reranker-base\_v1' is SOTA. *要点:* + 在'WithoutReranker'设置下(竖排对比),'bce-embedding-base\_v1'优于其他Embedding模型,包括开源和闭源。 + 在固定Embedding模型设置下,对比不同reranker效果(横排对比),'bce-reranker-base\_v1'比其他reranker模型效果都要好,包括开源和闭源。 + *'bce-embedding-base\_v1'和'bce-reranker-base\_v1'组合,表现SOTA。* Youdao's BCEmbedding API ------------------------ For users who prefer a hassle-free experience without the need to download and configure the model on their own systems, 'BCEmbedding' is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at Youdao BCEmbedding API. Here, you'll find all the necessary guidance to easily implement 'BCEmbedding' across a variety of use cases, ensuring a smooth and effective integration for optimal results. 对于那些更喜欢直接调用api的用户,有道提供方便的'BCEmbedding'调用api。该方式是一种简化和高效的方式,将'BCEmbedding'集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见有道BCEmbedding API。 WeChat Group ------------ Welcome to scan the QR code below and join the WeChat group. 欢迎大家扫码加入官方微信交流群。 <img src="URL width="20%" height="auto"> ️ Citation ---------- If you use 'BCEmbedding' in your research or project, please feel free to cite and star it: 如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~ License ------- 'BCEmbedding' is licensed under Apache 2.0 License Related Links ------------- Netease Youdao - QAnything FlagEmbedding MTEB C\_MTEB LLama Index | LlamaIndex Blog](URL </p> <details open=)
[ "### Installation\n\n\nFirst, create a conda environment and activate it.\n\n\nThen install 'BCEmbedding':\n\n\nOr install from source:", "### Quick Start\n\n\nUse 'EmbeddingModel' by 'BCEmbedding', and 'cls' pooler is default.\n\n\nUse 'RerankerModel' by 'BCEmbedding' to calculate relevant scores and rerank:\n\n\n️ Evaluation\n------------", "### Evaluate Semantic Representation by MTEB\n\n\nWe provide evaluateion tools for 'embedding' and 'reranker' models, based on MTEB and C\\_MTEB.\n\n\n我们基于MTEB和C\\_MTEB,提供'embedding'和'reranker'模型的语义表征评测工具。", "#### 1. Embedding Models\n\n\nJust run following cmd to evaluate 'your\\_embedding\\_model' (e.g. 'maidalun1020/bce-embedding-base\\_v1') in monolingual, bilingual and crosslingual settings (e.g. '[\"en\", \"zh\", \"en-zh\", \"zh-en\"]').\n\n\n运行下面命令评测'your\\_embedding\\_model'(比如,'maidalun1020/bce-embedding-base\\_v1')。评测任务将会在单语种,双语种和跨语种(比如,'[\"en\", \"zh\", \"en-zh\", \"zh-en\"]')模式下评测:\n\n\nThe total evaluation tasks contain *114 datastes* of \"Retrieval\", \"STS\", \"PairClassification\", \"Classification\", \"Reranking\" and \"Clustering\".\n\n\n评测包含 \"Retrieval\", \"STS\", \"PairClassification\", \"Classification\", \"Reranking\"和\"Clustering\" 这六大类任务的 *114个数据集*。\n\n\n*NOTE:*\n\n\n* All models are evaluated in their recommended pooling method ('pooler'). \"jina-embeddings-v2-base-en\", \"m3e-base\" and \"m3e-large\" use 'mean' pooler, while the others use 'cls'.\n* \"jina-embeddings-v2-base-en\" model should be loaded with 'trust\\_remote\\_code'.\n\n\n*注意:*\n\n\n\t+ 所有模型的评测采用各自推荐的'pooler'。\"jina-embeddings-v2-base-en\", \"m3e-base\"和\"m3e-large\"的 'pooler'采用'mean',其他模型的'pooler'采用'cls'.\n\t+ \"jina-embeddings-v2-base-en\"模型在载入时需要'trust\\_remote\\_code'。", "#### 2. Reranker Models\n\n\nRun following cmd to evaluate 'your\\_reranker\\_model' (e.g. \"maidalun1020/bce-reranker-base\\_v1\") in monolingual, bilingual and crosslingual settings (e.g. 
'[\"en\", \"zh\", \"en-zh\", \"zh-en\"]').\n\n\n运行下面命令评测'your\\_reranker\\_model'(比如,'maidalun1020/bce-reranker-base\\_v1')。评测任务将会在单语种,双语种和跨语种(比如,'[\"en\", \"zh\", \"en-zh\", \"zh-en\"]')模式下评测:\n\n\nThe evaluation tasks contain *12 datastes* of \"Reranking\".\n\n\n评测包含 \"Reranking\" 任务的 *12个数据集*。", "#### 3. Metrics Visualization Tool\n\n\nWe proveide a one-click script to sumarize evaluation results of 'embedding' and 'reranker' models as Embedding Models Evaluation Summary and Reranker Models Evaluation Summary.\n\n\n我们提供了'embedding'和'reranker'模型的指标可视化一键脚本,输出一个markdown文件,详见Embedding模型指标汇总和Reranker模型指标汇总。", "### Evaluate RAG by LlamaIndex\n\n\nLlamaIndex is a famous data framework for LLM-based applications, particularly in RAG. Recently, the LlamaIndex Blog has evaluated the popular embedding and reranker models in RAG pipeline and attract great attention. Now, we follow its pipeline to evaluate our 'BCEmbedding'.\n\n\nLlamaIndex是一个著名的大模型应用的开源工具,在RAG中很受欢迎。最近,LlamaIndex博客对市面上常用的embedding和reranker模型进行RAG流程的评测,吸引广泛关注。下面我们按照该评测流程验证'BCEmbedding'在RAG中的效果。\n\n\nFirst, install LlamaIndex:", "#### 1. Metrics Definition\n\n\n* Hit Rate:\n\n\nHit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. *The larger, the better.*\n* Mean Reciprocal Rank (MRR):\n\n\nFor each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. 
*The larger, the better.*\n\n\n\t+ 命中率(Hit Rate)\n\t\n\t\n\t命中率计算的是在检索的前k个文档中找到正确答案的查询所占的比例。简单来说,它反映了我们的系统在前几次猜测中答对的频率。*该指标越大越好。*\n\t+ 平均倒数排名(Mean Reciprocal Rank,MRR)\n\t\n\t\n\t对于每个查询,MRR通过查看最高排名的相关文档的排名来评估系统的准确性。具体来说,它是在所有查询中这些排名的倒数的平均值。因此,如果第一个相关文档是排名最靠前的结果,倒数排名就是1;如果是第二个,倒数排名就是1/2,依此类推。*该指标越大越好。*", "#### 2. Reproduce LlamaIndex Blog\n\n\nIn order to compare our 'BCEmbedding' with other embedding and reranker models fairly, we provide a one-click script to reproduce results of the LlamaIndex Blog, including our 'BCEmbedding':\n\n\n为了公平起见,运行下面脚本,复现LlamaIndex博客的结果,将'BCEmbedding'与其他embedding和reranker模型进行对比分析:\n\n\nThen, sumarize the evaluation results by:\n\n\nResults Reproduced from the LlamaIndex Blog can be checked in *Reproduced Summary of RAG Evaluation*, with some obvious *conclusions*:\n\n\n* In 'WithoutReranker' setting, our 'bce-embedding-base\\_v1' outperforms all the other embedding models.\n* With fixing the embedding model, our 'bce-reranker-base\\_v1' achieves the best performence.\n* *The combination of 'bce-embedding-base\\_v1' and 'bce-reranker-base\\_v1' is SOTA.*\n\n\n输出的指标汇总详见 *LlamaIndex RAG评测结果复现*。从该复现结果中,可以看出:\n\n\n\t+ 在'WithoutReranker'设置下(竖排对比),'bce-embedding-base\\_v1'比其他embedding模型效果都要好。\n\t+ 在固定embedding模型设置下,对比不同reranker效果(横排对比),'bce-reranker-base\\_v1'比其他reranker模型效果都要好。\n\t+ *'bce-embedding-base\\_v1'和'bce-reranker-base\\_v1'组合,表现SOTA。*", "#### 3. Broad Domain Adaptability\n\n\nThe evaluation of LlamaIndex Blog is monolingual, small amount of data, and specific domain (just including \"llama2\" paper). 
In order to evaluate the broad domain adaptability, bilingual and crosslingual capability, we follow the blog to build a multiple domains evaluation dataset (includding \"Computer Science\", \"Physics\", \"Biology\", \"Economics\", \"Math\", and \"Quantitative Finance\"), named CrosslingualMultiDomainsDataset, by OpenAI 'gpt-4-1106-preview' for high quality.\n\n\n在上述的LlamaIndex博客的评测数据只用了“llama2”这一篇文章,该评测是 单语种,小数据量,特定领域 的。为了兼容更真实更广的用户使用场景,评测算法模型的 领域泛化性,双语和跨语种能力,我们按照该博客的方法构建了一个多领域(计算机科学,物理学,生物学,经济学,数学,量化金融等)的双语种、跨语种评测数据,CrosslingualMultiDomainsDataset。为了保证构建数据的高质量,我们采用OpenAI的'gpt-4-1106-preview'。\n\n\nFirst, run following cmd to evaluate the most popular and powerful embedding and reranker models:\n\n\nThen, run the following script to sumarize the evaluation results:\n\n\nThe summary of multiple domains evaluations can be seen in [Multiple Domains Scenarios](#1-multiple-domains-scenarios).\n\n\nLeaderboard\n-----------", "### Semantic Representation Evaluations in MTEB", "#### 1. Embedding Models\n\n\n\n*NOTE:*\n\n\n* Our *bce-embedding-base\\_v1* outperforms other opensource embedding models with various model size.\n* *114 datastes* of \"Retrieval\", \"STS\", \"PairClassification\", \"Classification\", \"Reranking\" and \"Clustering\" in '[\"en\", \"zh\", \"en-zh\", \"zh-en\"]' setting.\n* The crosslingual evaluation datasets we released belong to 'Retrieval' task.\n* More evaluation details please check Embedding Models Evaluation Summary.\n\n\n*要点:*\n\n\n\t+ 对比所有开源的各种规模的embedding模型,*bce-embedding-base\\_v1* 表现最好。\n\t+ 评测包含 \"Retrieval\", \"STS\", \"PairClassification\", \"Classification\", \"Reranking\"和\"Clustering\" 这六大类任务的共 *114个数据集*。\n\t+ 我们开源的跨语种语义表征评测数据属于'Retrieval'任务。\n\t+ 更详细的评测结果详见Embedding模型指标汇总。", "#### 2. 
Reranker Models\n\n\n\n*NOTE:*\n\n\n* Our *bce-reranker-base\\_v1* outperforms other opensource reranker models.\n* *12 datastes* of \"Reranking\" in '[\"en\", \"zh\", \"en-zh\", \"zh-en\"]' setting.\n* More evaluation details please check Reranker Models Evaluation Summary.\n\n\n*要点:*\n\n\n\t+ *bce-reranker-base\\_v1* 优于其他开源reranker模型。\n\t+ 评测包含 \"Reranking\" 任务的 *12个数据集*。\n\t+ 更详细的评测结果详见Reranker模型指标汇总", "### RAG Evaluations in LlamaIndex", "#### 1. Multiple Domains Scenarios\n\n\n\n*NOTE:*\n\n\n* In 'WithoutReranker' setting, our 'bce-embedding-base\\_v1' outperforms all the other embedding models.\n* With fixing the embedding model, our 'bce-reranker-base\\_v1' achieves the best performence.\n* The combination of 'bce-embedding-base\\_v1' and 'bce-reranker-base\\_v1' is SOTA.\n\n\n*要点:*\n\n\n\t+ 在'WithoutReranker'设置下(竖排对比),'bce-embedding-base\\_v1'优于其他Embedding模型,包括开源和闭源。\n\t+ 在固定Embedding模型设置下,对比不同reranker效果(横排对比),'bce-reranker-base\\_v1'比其他reranker模型效果都要好,包括开源和闭源。\n\t+ *'bce-embedding-base\\_v1'和'bce-reranker-base\\_v1'组合,表现SOTA。*\n\n\nYoudao's BCEmbedding API\n------------------------\n\n\nFor users who prefer a hassle-free experience without the need to download and configure the model on their own systems, 'BCEmbedding' is readily accessible through Youdao's API. This option offers a streamlined and efficient way to integrate BCEmbedding into your projects, bypassing the complexities of manual setup and maintenance. Detailed instructions and comprehensive API documentation are available at Youdao BCEmbedding API. 
Here, you'll find all the necessary guidance to easily implement 'BCEmbedding' across a variety of use cases, ensuring a smooth and effective integration for optimal results.\n\n\n对于那些更喜欢直接调用api的用户,有道提供方便的'BCEmbedding'调用api。该方式是一种简化和高效的方式,将'BCEmbedding'集成到您的项目中,避开了手动设置和系统维护的复杂性。更详细的api调用接口说明详见有道BCEmbedding API。\n\n\nWeChat Group\n------------\n\n\nWelcome to scan the QR code below and join the WeChat group.\n\n\n欢迎大家扫码加入官方微信交流群。\n\n\n<img src=\"URL width=\"20%\" height=\"auto\">\n\n\n️ Citation\n----------\n\n\nIf you use 'BCEmbedding' in your research or project, please feel free to cite and star it:\n\n\n如果在您的研究或任何项目中使用本工作,烦请按照下方进行引用,并打个小星星~\n\n\nLicense\n-------\n\n\n'BCEmbedding' is licensed under Apache 2.0 License\n\n\nRelated Links\n-------------\n\n\nNetease Youdao - QAnything\n\n\nFlagEmbedding\n\n\nMTEB\n\n\nC\\_MTEB\n\n\nLLama Index | LlamaIndex Blog](URL\n</p>\n<details open=)" ]
### Installation

First, create a conda environment and activate it.

Then install 'BCEmbedding':

Or install from source:

### Quick Start

Use 'EmbeddingModel' by 'BCEmbedding'; the 'cls' pooler is the default.

Use 'RerankerModel' by 'BCEmbedding' to calculate relevance scores and rerank:

Evaluation
----------

### Evaluate Semantic Representation by MTEB

We provide evaluation tools for 'embedding' and 'reranker' models, based on MTEB and C_MTEB.

#### 1. Embedding Models

Just run the following cmd to evaluate 'your_embedding_model' (e.g. 'maidalun1020/bce-embedding-base_v1') in monolingual, bilingual and crosslingual settings (e.g. '["en", "zh", "en-zh", "zh-en"]').

The total evaluation tasks contain *114 datasets* of "Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering".

*NOTE:*

* All models are evaluated with their recommended pooling method ('pooler'). "jina-embeddings-v2-base-en", "m3e-base" and "m3e-large" use the 'mean' pooler, while the others use 'cls'.
* The "jina-embeddings-v2-base-en" model should be loaded with 'trust_remote_code'.

#### 2. Reranker Models

Run the following cmd to evaluate 'your_reranker_model' (e.g. "maidalun1020/bce-reranker-base_v1") in monolingual, bilingual and crosslingual settings (e.g. '["en", "zh", "en-zh", "zh-en"]').

The evaluation tasks contain *12 datasets* of "Reranking".

#### 3. Metrics Visualization Tool

We provide a one-click script to summarize the evaluation results of 'embedding' and 'reranker' models, as in the Embedding Models Evaluation Summary and the Reranker Models Evaluation Summary.

### Evaluate RAG by LlamaIndex

LlamaIndex is a well-known data framework for LLM-based applications, particularly in RAG. Recently, the LlamaIndex Blog evaluated popular embedding and reranker models in a RAG pipeline, attracting great attention. Here we follow its pipeline to evaluate our 'BCEmbedding'.

First, install LlamaIndex:

#### 1. Metrics Definition

* Hit Rate:

Hit rate calculates the fraction of queries where the correct answer is found within the top-k retrieved documents. In simpler terms, it's about how often our system gets it right within the top few guesses. *The larger, the better.*

* Mean Reciprocal Rank (MRR):

For each query, MRR evaluates the system's accuracy by looking at the rank of the highest-placed relevant document. Specifically, it's the average of the reciprocals of these ranks across all the queries. So, if the first relevant document is the top result, the reciprocal rank is 1; if it's second, the reciprocal rank is 1/2, and so on. *The larger, the better.*

#### 2. Reproduce LlamaIndex Blog

In order to compare our 'BCEmbedding' with other embedding and reranker models fairly, we provide a one-click script to reproduce the results of the LlamaIndex Blog, including our 'BCEmbedding':

Then, summarize the evaluation results by:

Results reproduced from the LlamaIndex Blog can be checked in the *Reproduced Summary of RAG Evaluation*, with some obvious *conclusions*:

* In the 'WithoutReranker' setting, our 'bce-embedding-base_v1' outperforms all the other embedding models.
* With the embedding model fixed, our 'bce-reranker-base_v1' achieves the best performance.
* *The combination of 'bce-embedding-base_v1' and 'bce-reranker-base_v1' is SOTA.*

#### 3. Broad Domain Adaptability

The evaluation in the LlamaIndex Blog is monolingual, uses a small amount of data, and covers a specific domain (just the "llama2" paper).
In order to evaluate broad domain adaptability as well as bilingual and crosslingual capability, we follow the blog's method to build a multi-domain evaluation dataset (including "Computer Science", "Physics", "Biology", "Economics", "Math", and "Quantitative Finance"), named CrosslingualMultiDomainsDataset, generated with OpenAI 'gpt-4-1106-preview' for high quality.

First, run the following cmd to evaluate the most popular and powerful embedding and reranker models:

Then, run the following script to summarize the evaluation results:

The summary of the multi-domain evaluations can be seen in [Multiple Domains Scenarios](#1-multiple-domains-scenarios).

Leaderboard
-----------

### Semantic Representation Evaluations in MTEB

#### 1. Embedding Models

*NOTE:*

* Our *bce-embedding-base_v1* outperforms other open-source embedding models of various sizes.
* *114 datasets* of "Retrieval", "STS", "PairClassification", "Classification", "Reranking" and "Clustering" in the '["en", "zh", "en-zh", "zh-en"]' setting.
* The crosslingual evaluation datasets we released belong to the 'Retrieval' task.
* For more evaluation details, please check the Embedding Models Evaluation Summary.
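The Hit Rate and MRR metrics defined in the evaluation sections above can be computed in a few lines. This is a generic sketch of the two formulas, not the evaluation script shipped with BCEmbedding; the function names are ours.

```python
def hit_rate(ranked_ids, relevant_id, k=5):
    # 1.0 if the relevant document appears in the top-k results, else 0.0.
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def mrr(all_ranked_ids, all_relevant_ids):
    # Mean Reciprocal Rank over a set of queries: the average of 1/rank of
    # the first relevant document (rank 1 -> 1, rank 2 -> 1/2, ...).
    total = 0.0
    for ranked, rel in zip(all_ranked_ids, all_relevant_ids):
        if rel in ranked:
            total += 1.0 / (ranked.index(rel) + 1)
    return total / len(all_relevant_ids)

# Two queries: the relevant doc is ranked 1st and 2nd respectively.
ranked = [["d1", "d2", "d3"], ["d9", "d4", "d7"]]
relevant = ["d1", "d4"]
print(mrr(ranked, relevant))  # -> 0.75 (mean of 1 and 1/2)
```

Averaging `hit_rate` over all queries gives the dataset-level hit rate; both metrics are "the larger, the better".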
9d8bc2e87510d8800666d1bf5e2db38995deb304
This is a derived collection of 3,000 samples from the well-known [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, reformatted to match the prompt structure required by Llama 2.
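The reformatting described above can be sketched as follows. This assumes the common single-turn `<s>[INST] ... [/INST] ...</s>` form used by guanaco-to-Llama-2 conversions; the helper name is ours and the exact template is an assumption, not taken from the dataset itself.

```python
def to_llama2_prompt(user_message: str, assistant_reply: str) -> str:
    # Wrap one user/assistant exchange in the single-turn Llama 2 chat format.
    return f"<s>[INST] {user_message} [/INST] {assistant_reply} </s>"

sample = to_llama2_prompt("What is 2+2?", "2+2 equals 4.")
print(sample)  # -> <s>[INST] What is 2+2? [/INST] 2+2 equals 4. </s>
```

Each row of the dataset stores one such formatted string in its `text` field.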
MohammadOthman/guanaco-llama2-3k
[ "region:us" ]
2023-12-30T14:47:39+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4710307, "num_examples": 3000}], "download_size": 2785880, "dataset_size": 4710307}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-30T14:56:22+00:00
eb2ef60dcb019088b9a45437f66728d469d5f9c7
# Dataset Card for Evaluation run of nlpguy/ColorShadow-7B

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [nlpguy/ColorShadow-7B](https://huggingface.co/nlpguy/ColorShadow-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_nlpguy__ColorShadow-7B",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2023-12-30T14:59:28.681625](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B/blob/main/results_2023-12-30T14-59-28.681625.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6195462777870345, "acc_stderr": 0.032728142996915836, "acc_norm": 0.6218820412036359, "acc_norm_stderr": 0.033388520934324636, "mc1": 0.4186046511627907, "mc1_stderr": 0.017270015284476855, "mc2": 0.5956171652597823, "mc2_stderr": 0.015038030102586798 }, "harness|arc:challenge|25": { "acc": 0.6322525597269625, "acc_stderr": 0.014090995618168477, "acc_norm": 0.6783276450511946, "acc_norm_stderr": 0.013650488084494162 }, "harness|hellaswag|10": { "acc": 0.6411073491336388, "acc_stderr": 0.0047869531466570615, "acc_norm": 0.8515236008763195, "acc_norm_stderr": 0.0035484490542860114 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542129, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542129 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5703703703703704, "acc_stderr": 0.04276349494376599, "acc_norm": 0.5703703703703704, "acc_norm_stderr": 0.04276349494376599 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7171052631578947, "acc_stderr": 0.03665349695640767, "acc_norm": 0.7171052631578947, "acc_norm_stderr": 0.03665349695640767 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7094339622641509, "acc_stderr": 0.027943219989337124, "acc_norm": 0.7094339622641509, "acc_norm_stderr": 0.027943219989337124 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7083333333333334, "acc_stderr": 0.038009680605548594, "acc_norm": 0.7083333333333334, "acc_norm_stderr": 0.038009680605548594 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6242774566473989, "acc_stderr": 0.036928207672648664, "acc_norm": 0.6242774566473989, "acc_norm_stderr": 0.036928207672648664 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3333333333333333, "acc_stderr": 0.04690650298201943, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.04690650298201943 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5404255319148936, "acc_stderr": 0.03257901482099835, "acc_norm": 0.5404255319148936, "acc_norm_stderr": 0.03257901482099835 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.43859649122807015, "acc_stderr": 0.04668000738510455, "acc_norm": 0.43859649122807015, "acc_norm_stderr": 0.04668000738510455 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5862068965517241, "acc_stderr": 0.04104269211806232, "acc_norm": 0.5862068965517241, "acc_norm_stderr": 0.04104269211806232 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41798941798941797, "acc_stderr": 0.02540255550326091, "acc_norm": 0.41798941798941797, "acc_norm_stderr": 0.02540255550326091 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4603174603174603, "acc_stderr": 0.04458029125470973, "acc_norm": 0.4603174603174603, "acc_norm_stderr": 0.04458029125470973 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.5387096774193548, "acc_stderr": 0.028358634859836928, "acc_norm": 0.5387096774193548, "acc_norm_stderr": 0.028358634859836928 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5073891625615764, "acc_stderr": 0.035176035403610105, "acc_norm": 0.5073891625615764, "acc_norm_stderr": 0.035176035403610105 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.65, "acc_stderr": 0.04793724854411019, "acc_norm": 0.65, "acc_norm_stderr": 0.04793724854411019 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.03287666758603491, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.03287666758603491 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945627, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945627 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8652849740932642, "acc_stderr": 0.02463978909770944, "acc_norm": 0.8652849740932642, "acc_norm_stderr": 0.02463978909770944 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6282051282051282, "acc_stderr": 0.02450347255711094, "acc_norm": 0.6282051282051282, "acc_norm_stderr": 0.02450347255711094 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131143, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.028897748741131143 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6722689075630253, "acc_stderr": 0.03048991141767323, "acc_norm": 0.6722689075630253, "acc_norm_stderr": 0.03048991141767323 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.31788079470198677, "acc_stderr": 0.038020397601079024, "acc_norm": 0.31788079470198677, "acc_norm_stderr": 0.038020397601079024 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8201834862385321, "acc_stderr": 0.016465345467391528, "acc_norm": 0.8201834862385321, "acc_norm_stderr": 0.016465345467391528 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5046296296296297, "acc_stderr": 0.03409825519163572, "acc_norm": 0.5046296296296297, 
"acc_norm_stderr": 0.03409825519163572 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7745098039215687, "acc_stderr": 0.029331162294251735, "acc_norm": 0.7745098039215687, "acc_norm_stderr": 0.029331162294251735 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7890295358649789, "acc_stderr": 0.02655837250266192, "acc_norm": 0.7890295358649789, "acc_norm_stderr": 0.02655837250266192 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596913, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596913 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8429752066115702, "acc_stderr": 0.03321244842547129, "acc_norm": 0.8429752066115702, "acc_norm_stderr": 0.03321244842547129 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7407407407407407, "acc_stderr": 0.04236511258094633, "acc_norm": 0.7407407407407407, "acc_norm_stderr": 0.04236511258094633 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7239263803680982, "acc_stderr": 0.035123852837050475, "acc_norm": 0.7239263803680982, "acc_norm_stderr": 0.035123852837050475 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5, "acc_stderr": 0.04745789978762494, "acc_norm": 0.5, "acc_norm_stderr": 0.04745789978762494 }, "harness|hendrycksTest-management|5": { "acc": 0.7475728155339806, "acc_stderr": 0.04301250399690878, "acc_norm": 0.7475728155339806, "acc_norm_stderr": 0.04301250399690878 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8717948717948718, "acc_stderr": 0.02190190511507332, "acc_norm": 0.8717948717948718, "acc_norm_stderr": 0.02190190511507332 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.73, "acc_stderr": 0.04461960433384741, "acc_norm": 0.73, "acc_norm_stderr": 0.04461960433384741 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8084291187739464, "acc_stderr": 0.014072859310451949, "acc_norm": 0.8084291187739464, "acc_norm_stderr": 0.014072859310451949 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7109826589595376, "acc_stderr": 0.02440517393578323, "acc_norm": 0.7109826589595376, "acc_norm_stderr": 0.02440517393578323 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4446927374301676, "acc_stderr": 0.01661988198817702, "acc_norm": 0.4446927374301676, "acc_norm_stderr": 0.01661988198817702 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7124183006535948, "acc_stderr": 0.025917806117147158, "acc_norm": 0.7124183006535948, "acc_norm_stderr": 0.025917806117147158 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6720257234726688, "acc_stderr": 0.026664410886937617, "acc_norm": 0.6720257234726688, "acc_norm_stderr": 0.026664410886937617 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7098765432098766, "acc_stderr": 0.025251173936495026, "acc_norm": 0.7098765432098766, "acc_norm_stderr": 0.025251173936495026 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4645390070921986, "acc_stderr": 0.029752389657427047, "acc_norm": 0.4645390070921986, "acc_norm_stderr": 0.029752389657427047 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4654498044328553, "acc_stderr": 0.012739711554045711, "acc_norm": 0.4654498044328553, "acc_norm_stderr": 0.012739711554045711 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6323529411764706, "acc_stderr": 0.029289413409403192, "acc_norm": 0.6323529411764706, "acc_norm_stderr": 0.029289413409403192 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6699346405228758, "acc_stderr": 0.01902372616072455, "acc_norm": 0.6699346405228758, "acc_norm_stderr": 0.01902372616072455 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 
    0.04554619617541054
    },
    "harness|hendrycksTest-security_studies|5": {
        "acc": 0.7183673469387755,
        "acc_stderr": 0.028795185574291293,
        "acc_norm": 0.7183673469387755,
        "acc_norm_stderr": 0.028795185574291293
    },
    "harness|hendrycksTest-sociology|5": {
        "acc": 0.47761194029850745,
        "acc_stderr": 0.035319879302087305,
        "acc_norm": 0.47761194029850745,
        "acc_norm_stderr": 0.035319879302087305
    },
    "harness|hendrycksTest-us_foreign_policy|5": {
        "acc": 0.84,
        "acc_stderr": 0.03684529491774707,
        "acc_norm": 0.84,
        "acc_norm_stderr": 0.03684529491774707
    },
    "harness|hendrycksTest-virology|5": {
        "acc": 0.4939759036144578,
        "acc_stderr": 0.03892212195333045,
        "acc_norm": 0.4939759036144578,
        "acc_norm_stderr": 0.03892212195333045
    },
    "harness|hendrycksTest-world_religions|5": {
        "acc": 0.8362573099415205,
        "acc_stderr": 0.028380919596145866,
        "acc_norm": 0.8362573099415205,
        "acc_norm_stderr": 0.028380919596145866
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.4186046511627907,
        "mc1_stderr": 0.017270015284476855,
        "mc2": 0.5956171652597823,
        "mc2_stderr": 0.015038030102586798
    },
    "harness|winogrande|5": {
        "acc": 0.8058405682715075,
        "acc_stderr": 0.011116983392392659
    },
    "harness|gsm8k|5": {
        "acc": 0.5519332827899924,
        "acc_stderr": 0.013697992668274525
    }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
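The `"all"` block in the results is an unweighted mean over the per-task scores. As a minimal illustrative sketch (not the official aggregation code), the snippet below averages a handful of the per-task accuracies reported above; since it covers only five of the 63 tasks, its result differs from the official `"all"` accuracy of 0.6195:

```python
# Sketch: unweighted macro-average over a few per-task accuracies
# copied from the results above (illustrative subset, not all 63 tasks).
task_acc = {
    "harness|hendrycksTest-security_studies|5": 0.7183673469387755,
    "harness|hendrycksTest-sociology|5": 0.47761194029850745,
    "harness|hendrycksTest-us_foreign_policy|5": 0.84,
    "harness|hendrycksTest-virology|5": 0.4939759036144578,
    "harness|hendrycksTest-world_religions|5": 0.8362573099415205,
}

# Simple arithmetic mean: every task weighs the same regardless of size.
macro_avg = sum(task_acc.values()) / len(task_acc)
print(round(macro_avg, 4))  # 0.6732 for this five-task subset
```

The full leaderboard aggregate is computed the same way across every evaluated configuration and stored in the `results` configuration of this dataset.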
open-llm-leaderboard/details_nlpguy__ColorShadow-7B
[ "region:us" ]
2023-12-30T15:01:45+00:00
{"pretty_name": "Evaluation run of nlpguy/ColorShadow-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [nlpguy/ColorShadow-7B](https://huggingface.co/nlpguy/ColorShadow-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nlpguy__ColorShadow-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T14:59:28.681625](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B/blob/main/results_2023-12-30T14-59-28.681625.json)(note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6195462777870345,\n \"acc_stderr\": 0.032728142996915836,\n \"acc_norm\": 0.6218820412036359,\n \"acc_norm_stderr\": 0.033388520934324636,\n \"mc1\": 0.4186046511627907,\n \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5956171652597823,\n \"mc2_stderr\": 0.015038030102586798\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6322525597269625,\n \"acc_stderr\": 0.014090995618168477,\n \"acc_norm\": 0.6783276450511946,\n \"acc_norm_stderr\": 0.013650488084494162\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6411073491336388,\n \"acc_stderr\": 0.0047869531466570615,\n \"acc_norm\": 0.8515236008763195,\n \"acc_norm_stderr\": 0.0035484490542860114\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n \"acc_stderr\": 0.04276349494376599,\n \"acc_norm\": 0.5703703703703704,\n \"acc_norm_stderr\": 0.04276349494376599\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7171052631578947,\n \"acc_stderr\": 0.03665349695640767,\n \"acc_norm\": 0.7171052631578947,\n \"acc_norm_stderr\": 0.03665349695640767\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337124,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337124\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7083333333333334,\n \"acc_stderr\": 0.038009680605548594,\n \"acc_norm\": 0.7083333333333334,\n \"acc_norm_stderr\": 0.038009680605548594\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6242774566473989,\n \"acc_stderr\": 0.036928207672648664,\n \"acc_norm\": 0.6242774566473989,\n \"acc_norm_stderr\": 0.036928207672648664\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201943,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201943\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5404255319148936,\n \"acc_stderr\": 0.03257901482099835,\n \"acc_norm\": 0.5404255319148936,\n \"acc_norm_stderr\": 0.03257901482099835\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41798941798941797,\n \"acc_stderr\": 0.02540255550326091,\n \"acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.02540255550326091\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 
0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5387096774193548,\n \"acc_stderr\": 0.028358634859836928,\n \"acc_norm\": 0.5387096774193548,\n \"acc_norm_stderr\": 0.028358634859836928\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.03287666758603491,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.03287666758603491\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945627,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945627\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.02463978909770944,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.02463978909770944\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6282051282051282,\n \"acc_stderr\": 0.02450347255711094,\n \"acc_norm\": 0.6282051282051282,\n \"acc_norm_stderr\": 0.02450347255711094\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31788079470198677,\n \"acc_stderr\": 0.038020397601079024,\n \"acc_norm\": 0.31788079470198677,\n \"acc_norm_stderr\": 0.038020397601079024\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8201834862385321,\n \"acc_stderr\": 0.016465345467391528,\n \"acc_norm\": 0.8201834862385321,\n \"acc_norm_stderr\": 0.016465345467391528\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5046296296296297,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.5046296296296297,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.029331162294251735,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.029331162294251735\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8429752066115702,\n \"acc_stderr\": 0.03321244842547129,\n \"acc_norm\": 0.8429752066115702,\n \"acc_norm_stderr\": 0.03321244842547129\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.04236511258094633,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 
0.04236511258094633\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507332,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507332\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8084291187739464,\n \"acc_stderr\": 0.014072859310451949,\n \"acc_norm\": 0.8084291187739464,\n \"acc_norm_stderr\": 0.014072859310451949\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4446927374301676,\n \"acc_stderr\": 0.01661988198817702,\n \"acc_norm\": 0.4446927374301676,\n \"acc_norm_stderr\": 0.01661988198817702\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.025917806117147158,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.025917806117147158\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6720257234726688,\n \"acc_stderr\": 0.026664410886937617,\n \"acc_norm\": 0.6720257234726688,\n \"acc_norm_stderr\": 0.026664410886937617\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495026,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495026\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4654498044328553,\n \"acc_stderr\": 0.012739711554045711,\n \"acc_norm\": 0.4654498044328553,\n \"acc_norm_stderr\": 0.012739711554045711\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.029289413409403192,\n \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.029289413409403192\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.01902372616072455,\n \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.01902372616072455\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7183673469387755,\n \"acc_stderr\": 0.028795185574291293,\n \"acc_norm\": 0.7183673469387755,\n \"acc_norm_stderr\": 0.028795185574291293\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.47761194029850745,\n \"acc_stderr\": 0.035319879302087305,\n \"acc_norm\": 0.47761194029850745,\n \"acc_norm_stderr\": 0.035319879302087305\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774707,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774707\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 
0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4186046511627907,\n \"mc1_stderr\": 0.017270015284476855,\n \"mc2\": 0.5956171652597823,\n \"mc2_stderr\": 0.015038030102586798\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8058405682715075,\n \"acc_stderr\": 0.011116983392392659\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5519332827899924,\n \"acc_stderr\": 0.013697992668274525\n }\n}\n```", "repo_url": "https://huggingface.co/nlpguy/ColorShadow-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|arc:challenge|25_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|gsm8k|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hellaswag|10_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-59-28.681625.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-59-28.681625.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-59-28.681625.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-59-28.681625.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-59-28.681625.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-59-28.681625.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T14-59-28.681625.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T14-59-28.681625.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["**/details_harness|winogrande|5_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T14-59-28.681625.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T14_59_28.681625", "path": ["results_2023-12-30T14-59-28.681625.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T14-59-28.681625.parquet"]}]}]}
2023-12-30T15:02:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of nlpguy/ColorShadow-7B

Dataset automatically created during the evaluation run of model nlpguy/ColorShadow-7B on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T14:59:28.681625 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
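The card above says "To load the details from a run, you can for instance do the following:" but the code example was stripped during processing. A minimal sketch, following the pattern used by the sibling CodegebraGPT-10b card later in this file; note the repo id is an assumed application of the leaderboard's `details_<org>__<model>` naming convention, not confirmed against the Hub:

```python
# Repo id inferred from the leaderboard's naming convention
# ("open-llm-leaderboard/details_<org>__<model>") -- an assumption,
# verify against the Hub before relying on it.
REPO = "open-llm-leaderboard/details_nlpguy__ColorShadow-7B"

def load_details(config: str = "harness_winogrande_5", split: str = "latest"):
    """Load one evaluation config; "latest" points at the most recent run,
    while per-run splits are named after the run timestamp."""
    # Deferred import so the sketch can be inspected without `datasets` installed.
    from datasets import load_dataset
    return load_dataset(REPO, config, split=split)
```

The config names match the `config_name` entries in the metadata above, e.g. `load_details("harness_hendrycksTest_world_religions_5")`.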
1d6ec6b93591ac5e35646c445233ff3639a57ead
# Dataset Card for Evaluation run of sr5434/CodegebraGPT-10b

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [sr5434/CodegebraGPT-10b](https://huggingface.co/sr5434/CodegebraGPT-10b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_sr5434__CodegebraGPT-10b",
    "harness_winogrande_5",
    split="train",
)
```

## Latest results

These are the [latest results from run 2024-01-05T01:29:14.413360](https://huggingface.co/datasets/open-llm-leaderboard/details_sr5434__CodegebraGPT-10b/blob/main/results_2024-01-05T01-29-14.413360.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.6027565325073367, "acc_stderr": 0.03330413486893907, "acc_norm": 0.605828746614574, "acc_norm_stderr": 0.03399227272363922, "mc1": 0.3219094247246022, "mc1_stderr": 0.016355567611960404, "mc2": 0.46569772860679925, "mc2_stderr": 0.014510196356063874 },
    "harness|arc:challenge|25": { "acc": 0.5571672354948806, "acc_stderr": 0.014515573873348911, "acc_norm": 0.5981228668941979, "acc_norm_stderr": 0.014327268614578274 },
    "harness|hellaswag|10": { "acc": 0.6385182234614618, "acc_stderr": 0.004794478426382608, "acc_norm": 0.834196375224059, "acc_norm_stderr": 0.0037114419828661815 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.562962962962963, "acc_stderr": 0.04284958639753401, "acc_norm": 0.562962962962963, "acc_norm_stderr": 0.04284958639753401 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.625, "acc_stderr": 0.039397364351956274, "acc_norm": 0.625, "acc_norm_stderr": 0.039397364351956274 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6452830188679245, "acc_stderr": 0.02944517532819959, "acc_norm": 0.6452830188679245, "acc_norm_stderr": 0.02944517532819959 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.6666666666666666, "acc_stderr": 0.03942082639927213, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.03942082639927213 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956911, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956911 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.37, "acc_stderr": 0.048523658709391, "acc_norm": 0.37, "acc_norm_stderr": 0.048523658709391 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.5895953757225434, "acc_stderr": 0.03750757044895537, "acc_norm": 0.5895953757225434, "acc_norm_stderr": 0.03750757044895537 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.45098039215686275, "acc_stderr": 0.04951218252396264, "acc_norm": 0.45098039215686275, "acc_norm_stderr": 0.04951218252396264 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5106382978723404, "acc_stderr": 0.03267862331014063, "acc_norm": 0.5106382978723404, "acc_norm_stderr": 0.03267862331014063 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.40350877192982454, "acc_stderr": 0.046151869625837026, "acc_norm": 0.40350877192982454, "acc_norm_stderr": 0.046151869625837026 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.47586206896551725, "acc_stderr": 0.041618085035015295, "acc_norm": 0.47586206896551725, "acc_norm_stderr": 0.041618085035015295 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.37566137566137564, "acc_stderr": 0.024942368931159795, "acc_norm": 0.37566137566137564, "acc_norm_stderr": 0.024942368931159795 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.42063492063492064, "acc_stderr": 0.04415438226743744, "acc_norm": 0.42063492063492064, "acc_norm_stderr": 0.04415438226743744 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7387096774193549, "acc_stderr": 0.024993053397764826, "acc_norm": 0.7387096774193549, "acc_norm_stderr": 0.024993053397764826 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.3645320197044335, "acc_stderr": 0.033864057460620905, "acc_norm": 0.3645320197044335, "acc_norm_stderr": 0.033864057460620905 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145632, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145632 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.03317505930009181, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.03317505930009181 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7626262626262627, "acc_stderr": 0.030313710538198906, "acc_norm": 0.7626262626262627, "acc_norm_stderr": 0.030313710538198906 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8756476683937824, "acc_stderr": 0.023814477086593556, "acc_norm": 0.8756476683937824, "acc_norm_stderr": 0.023814477086593556 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5743589743589743, "acc_stderr": 0.025069094387296535, "acc_norm": 0.5743589743589743, "acc_norm_stderr": 0.025069094387296535 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3592592592592593, "acc_stderr": 0.029252905927251976, "acc_norm": 0.3592592592592593, "acc_norm_stderr": 0.029252905927251976 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5798319327731093, "acc_stderr": 0.03206183783236152, "acc_norm": 0.5798319327731093, "acc_norm_stderr": 0.03206183783236152 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8, "acc_stderr": 0.017149858514250955, "acc_norm": 0.8, "acc_norm_stderr": 0.017149858514250955 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5277777777777778, "acc_stderr": 0.0340470532865388, "acc_norm": 0.5277777777777778, "acc_norm_stderr": 0.0340470532865388 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7794117647058824, "acc_stderr": 0.029102254389674082, "acc_norm": 0.7794117647058824, "acc_norm_stderr": 0.029102254389674082 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7890295358649789, "acc_stderr": 0.02655837250266192, "acc_norm": 0.7890295358649789, "acc_norm_stderr": 0.02655837250266192 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6502242152466368, "acc_stderr": 0.03200736719484503, "acc_norm": 0.6502242152466368, "acc_norm_stderr": 0.03200736719484503 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6946564885496184, "acc_stderr": 0.04039314978724561, "acc_norm": 0.6946564885496184, "acc_norm_stderr": 0.04039314978724561 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.7520661157024794, "acc_stderr": 0.03941897526516303, "acc_norm": 0.7520661157024794, "acc_norm_stderr": 0.03941897526516303 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6574074074074074, "acc_stderr": 0.045879047413018105, "acc_norm": 0.6574074074074074, "acc_norm_stderr": 0.045879047413018105 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7055214723926381, "acc_stderr": 0.03581165790474082, "acc_norm": 0.7055214723926381, "acc_norm_stderr": 0.03581165790474082 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 },
    "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8589743589743589, "acc_stderr": 0.02280138253459754, "acc_norm": 0.8589743589743589, "acc_norm_stderr": 0.02280138253459754 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7841634738186463, "acc_stderr": 0.014711684386139946, "acc_norm": 0.7841634738186463, "acc_norm_stderr": 0.014711684386139946 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.661849710982659, "acc_stderr": 0.025469770149400175, "acc_norm": 0.661849710982659, "acc_norm_stderr": 0.025469770149400175 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.30614525139664805, "acc_stderr": 0.015414494487903219, "acc_norm": 0.30614525139664805, "acc_norm_stderr": 0.015414494487903219 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.696078431372549, "acc_stderr": 0.026336613469046626, "acc_norm": 0.696078431372549, "acc_norm_stderr": 0.026336613469046626 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.6366559485530546, "acc_stderr": 0.027316847674192714, "acc_norm": 0.6366559485530546, "acc_norm_stderr": 0.027316847674192714 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.6820987654320988, "acc_stderr": 0.02591006352824088, "acc_norm": 0.6820987654320988, "acc_norm_stderr": 0.02591006352824088 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.42907801418439717, "acc_stderr": 0.02952591430255856, "acc_norm": 0.42907801418439717, "acc_norm_stderr": 0.02952591430255856 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.4511082138200782, "acc_stderr": 0.012709037347346233, "acc_norm": 0.4511082138200782, "acc_norm_stderr": 0.012709037347346233 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6066176470588235, "acc_stderr": 0.029674288281311155, "acc_norm": 0.6066176470588235, "acc_norm_stderr": 0.029674288281311155 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5947712418300654, "acc_stderr": 0.019861155193829163, "acc_norm": 0.5947712418300654, "acc_norm_stderr": 0.019861155193829163 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6454545454545455, "acc_stderr": 0.045820048415054174, "acc_norm": 0.6454545454545455, "acc_norm_stderr": 0.045820048415054174 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.7020408163265306, "acc_stderr": 0.029279567411065677, "acc_norm": 0.7020408163265306, "acc_norm_stderr": 0.029279567411065677 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8208955223880597, "acc_stderr": 0.027113286753111837, "acc_norm": 0.8208955223880597, "acc_norm_stderr": 0.027113286753111837 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.81, "acc_stderr": 0.03942772444036625, "acc_norm": 0.81, "acc_norm_stderr": 0.03942772444036625 },
    "harness|hendrycksTest-virology|5": { "acc": 0.4819277108433735, "acc_stderr": 0.038899512528272166, "acc_norm": 0.4819277108433735, "acc_norm_stderr": 0.038899512528272166 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.783625730994152, "acc_stderr": 0.031581495393387345, "acc_norm": 0.783625730994152, "acc_norm_stderr": 0.031581495393387345 },
    "harness|truthfulqa:mc|0": { "mc1": 0.3219094247246022, "mc1_stderr": 0.016355567611960404, "mc2": 0.46569772860679925, "mc2_stderr": 0.014510196356063874 },
    "harness|winogrande|5": { "acc": 0.8097868981846882, "acc_stderr": 0.011030335798617442 },
    "harness|gsm8k|5": { "acc": 0.45109931766489764, "acc_stderr": 0.013706458809664817 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset.
-->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
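The per-task entries in the results block above are plain JSON objects keyed by harness task name. As a minimal illustration (not part of the standard card template), here is a sketch that ranks a small hand-copied subset of this run's scores by `acc_norm`; the task names and numbers are taken from the results above, and the snippet is only a reading aid, not an official leaderboard API:

```python
# Rank a subset of the per-task results reported in this card by acc_norm.
# The dict below is a hand-copied subset of the JSON results block above.
results = {
    "harness|hendrycksTest-marketing|5": {"acc_norm": 0.8589743589743589},
    "harness|hendrycksTest-moral_scenarios|5": {"acc_norm": 0.30614525139664805},
    "harness|hendrycksTest-world_religions|5": {"acc_norm": 0.783625730994152},
    "harness|hendrycksTest-machine_learning|5": {"acc_norm": 0.4642857142857143},
}

# Sort task names from strongest to weakest normalized accuracy.
ranked = sorted(results, key=lambda task: results[task]["acc_norm"], reverse=True)

for task in ranked:
    print(f"{task}: {results[task]['acc_norm']:.3f}")
```

The same pattern applies to the full results file linked in the summary, since every task entry shares the `acc` / `acc_norm` shape.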
open-llm-leaderboard/details_sr5434__CodegebraGPT-10b
[ "region:us" ]
2023-12-30T15:21:09+00:00
{"pretty_name": "Evaluation run of sr5434/CodegebraGPT-10b", "dataset_summary": "Dataset automatically created during the evaluation run of model [sr5434/CodegebraGPT-10b](https://huggingface.co/sr5434/CodegebraGPT-10b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sr5434__CodegebraGPT-10b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-05T01:29:14.413360](https://huggingface.co/datasets/open-llm-leaderboard/details_sr5434__CodegebraGPT-10b/blob/main/results_2024-01-05T01-29-14.413360.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6027565325073367,\n \"acc_stderr\": 0.03330413486893907,\n \"acc_norm\": 0.605828746614574,\n \"acc_norm_stderr\": 0.03399227272363922,\n \"mc1\": 0.3219094247246022,\n \"mc1_stderr\": 0.016355567611960404,\n \"mc2\": 0.46569772860679925,\n \"mc2_stderr\": 0.014510196356063874\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5571672354948806,\n \"acc_stderr\": 0.014515573873348911,\n \"acc_norm\": 0.5981228668941979,\n \"acc_norm_stderr\": 0.014327268614578274\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6385182234614618,\n \"acc_stderr\": 0.004794478426382608,\n \"acc_norm\": 0.834196375224059,\n \"acc_norm_stderr\": 0.0037114419828661815\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6452830188679245,\n \"acc_stderr\": 0.02944517532819959,\n \"acc_norm\": 0.6452830188679245,\n \"acc_norm_stderr\": 0.02944517532819959\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03942082639927213,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03942082639927213\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 
0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5895953757225434,\n \"acc_stderr\": 0.03750757044895537,\n \"acc_norm\": 0.5895953757225434,\n \"acc_norm_stderr\": 0.03750757044895537\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.04951218252396264,\n \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.04951218252396264\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.47586206896551725,\n \"acc_stderr\": 0.041618085035015295,\n \"acc_norm\": 0.47586206896551725,\n \"acc_norm_stderr\": 0.041618085035015295\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37566137566137564,\n \"acc_stderr\": 0.024942368931159795,\n \"acc_norm\": 0.37566137566137564,\n \"acc_norm_stderr\": 0.024942368931159795\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n 
\"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7387096774193549,\n \"acc_stderr\": 0.024993053397764826,\n \"acc_norm\": 0.7387096774193549,\n \"acc_norm_stderr\": 0.024993053397764826\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3645320197044335,\n \"acc_stderr\": 0.033864057460620905,\n \"acc_norm\": 0.3645320197044335,\n \"acc_norm_stderr\": 0.033864057460620905\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198906,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198906\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593556,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.023814477086593556\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.025069094387296535,\n \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.025069094387296535\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": 
{\n \"acc\": 0.5798319327731093,\n \"acc_stderr\": 0.03206183783236152,\n \"acc_norm\": 0.5798319327731093,\n \"acc_norm_stderr\": 0.03206183783236152\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.017149858514250955,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.017149858514250955\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.029102254389674082,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.029102254389674082\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7890295358649789,\n \"acc_stderr\": 0.02655837250266192,\n \"acc_norm\": 0.7890295358649789,\n \"acc_norm_stderr\": 0.02655837250266192\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.04039314978724561,\n \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.04039314978724561\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7520661157024794,\n \"acc_stderr\": 0.03941897526516303,\n \"acc_norm\": 0.7520661157024794,\n \"acc_norm_stderr\": 0.03941897526516303\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.045879047413018105,\n \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.045879047413018105\n },\n 
\"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.02280138253459754,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.02280138253459754\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7841634738186463,\n \"acc_stderr\": 0.014711684386139946,\n \"acc_norm\": 0.7841634738186463,\n \"acc_norm_stderr\": 0.014711684386139946\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.661849710982659,\n \"acc_stderr\": 0.025469770149400175,\n \"acc_norm\": 0.661849710982659,\n \"acc_norm_stderr\": 0.025469770149400175\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30614525139664805,\n \"acc_stderr\": 0.015414494487903219,\n \"acc_norm\": 0.30614525139664805,\n \"acc_norm_stderr\": 0.015414494487903219\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.026336613469046626,\n \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.026336613469046626\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6366559485530546,\n \"acc_stderr\": 0.027316847674192714,\n \"acc_norm\": 0.6366559485530546,\n \"acc_norm_stderr\": 0.027316847674192714\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6820987654320988,\n \"acc_stderr\": 0.02591006352824088,\n \"acc_norm\": 0.6820987654320988,\n \"acc_norm_stderr\": 0.02591006352824088\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.42907801418439717,\n \"acc_stderr\": 0.02952591430255856,\n \"acc_norm\": 0.42907801418439717,\n \"acc_norm_stderr\": 0.02952591430255856\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5947712418300654,\n \"acc_stderr\": 0.019861155193829163,\n \"acc_norm\": 0.5947712418300654,\n \"acc_norm_stderr\": 0.019861155193829163\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7020408163265306,\n \"acc_stderr\": 0.029279567411065677,\n \"acc_norm\": 0.7020408163265306,\n \"acc_norm_stderr\": 0.029279567411065677\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n \"acc_norm_stderr\": 
0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.031581495393387345,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.031581495393387345\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3219094247246022,\n \"mc1_stderr\": 0.016355567611960404,\n \"mc2\": 0.46569772860679925,\n \"mc2_stderr\": 0.014510196356063874\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8097868981846882,\n \"acc_stderr\": 0.011030335798617442\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45109931766489764,\n \"acc_stderr\": 0.013706458809664817\n }\n}\n```", "repo_url": "https://huggingface.co/sr5434/CodegebraGPT-10b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|arc:challenge|25_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|arc:challenge|25_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|gsm8k|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|gsm8k|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hellaswag|10_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hellaswag|10_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-18-52.631261.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-18-52.631261.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T15-18-52.631261.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-05T01-29-14.413360.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T01-29-14.413360.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-05T01-29-14.413360.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T01-29-14.413360.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T01-29-14.413360.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T01-29-14.413360.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-05T01-29-14.413360.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": 
[{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", 
"path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T01-29-14.413360.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": 
"2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", 
"path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["**/details_harness|winogrande|5_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["**/details_harness|winogrande|5_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-05T01-29-14.413360.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T15_18_52.631261", "path": ["results_2023-12-30T15-18-52.631261.parquet"]}, {"split": "2024_01_05T01_29_14.413360", "path": ["results_2024-01-05T01-29-14.413360.parquet"]}, {"split": "latest", "path": ["results_2024-01-05T01-29-14.413360.parquet"]}]}]}
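The config metadata above names each per-run split after the run's timestamp, while the parquet file names use a slightly different stamp. A minimal sketch of the character substitutions observed in these paths (the helper names are illustrative, not part of any library):

```python
def split_name(run_timestamp: str) -> str:
    """Split names like '2024_01_05T01_29_14.413360': '-' and ':' become '_'."""
    return run_timestamp.replace("-", "_").replace(":", "_")

def parquet_stamp(run_timestamp: str) -> str:
    """Parquet file stamps like '2024-01-05T01-29-14.413360': only ':' becomes '-'."""
    return run_timestamp.replace(":", "-")

run = "2024-01-05T01:29:14.413360"
print(split_name(run))     # 2024_01_05T01_29_14.413360
print(parquet_stamp(run))  # 2024-01-05T01-29-14.413360
```

This is only the pattern inferred from the split and file names listed above; the canonical naming is whatever the evaluation harness actually wrote.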
2024-01-05T01:31:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of sr5434/CodegebraGPT-10b Dataset automatically created during the evaluation run of model sr5434/CodegebraGPT-10b on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2024-01-05T01:29:14.413360 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
bcff027c60fcd1948379b5e819b4a4b0dfd943ab
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v15.4 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v15.4](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v15.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v15.4", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T15:32:21.448389](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v15.4/blob/main/results_2023-12-30T15-32-21.448389.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6934467920122487, "acc_stderr": 0.030787993284565957, "acc_norm": 0.6997711235478742, "acc_norm_stderr": 0.031369777502468055, "mc1": 0.3953488372093023, "mc1_stderr": 0.017115815632418187, "mc2": 0.5546229838725043, "mc2_stderr": 0.01500911833285647 }, "harness|arc:challenge|25": { "acc": 0.6271331058020477, "acc_stderr": 0.014131176760131169, "acc_norm": 0.6646757679180887, "acc_norm_stderr": 0.013796182947785562 }, "harness|hellaswag|10": { "acc": 0.5341565425214101, "acc_stderr": 0.004978124945759845, "acc_norm": 0.7180840470025891, "acc_norm_stderr": 0.00449013069102043 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6296296296296297, "acc_stderr": 0.041716541613545426, "acc_norm": 0.6296296296296297, "acc_norm_stderr": 0.041716541613545426 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7960526315789473, "acc_stderr": 0.032790004063100495, "acc_norm": 0.7960526315789473, "acc_norm_stderr": 0.032790004063100495 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7396226415094339, "acc_stderr": 0.027008766090708042, "acc_norm": 0.7396226415094339, "acc_norm_stderr": 0.027008766090708042 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8055555555555556, "acc_stderr": 0.03309615177059007, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.03309615177059007 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.57, "acc_stderr": 0.04975698519562428, "acc_norm": 0.57, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145633, "acc_norm": 0.62, 
"acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6994219653179191, "acc_stderr": 0.03496101481191179, "acc_norm": 0.6994219653179191, "acc_norm_stderr": 0.03496101481191179 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.049406356306056595, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.049406356306056595 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.78, "acc_stderr": 0.04163331998932264, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932264 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6851063829787234, "acc_stderr": 0.030363582197238167, "acc_norm": 0.6851063829787234, "acc_norm_stderr": 0.030363582197238167 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5789473684210527, "acc_stderr": 0.04644602091222316, "acc_norm": 0.5789473684210527, "acc_norm_stderr": 0.04644602091222316 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.696551724137931, "acc_stderr": 0.038312260488503336, "acc_norm": 0.696551724137931, "acc_norm_stderr": 0.038312260488503336 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4497354497354497, "acc_stderr": 0.02562085704293665, "acc_norm": 0.4497354497354497, "acc_norm_stderr": 0.02562085704293665 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5555555555555556, "acc_stderr": 0.04444444444444449, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.04444444444444449 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.43, "acc_stderr": 0.04975698519562428, "acc_norm": 0.43, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8290322580645161, "acc_stderr": 0.02141724293632159, "acc_norm": 0.8290322580645161, "acc_norm_stderr": 0.02141724293632159 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.6157635467980296, "acc_stderr": 0.03422398565657551, "acc_norm": 0.6157635467980296, "acc_norm_stderr": 0.03422398565657551 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.77, "acc_stderr": 0.04229525846816505, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8181818181818182, "acc_stderr": 0.030117688929503582, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.030117688929503582 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8636363636363636, "acc_stderr": 0.024450155973189835, "acc_norm": 0.8636363636363636, "acc_norm_stderr": 0.024450155973189835 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9326424870466321, "acc_stderr": 0.018088393839078912, "acc_norm": 0.9326424870466321, "acc_norm_stderr": 0.018088393839078912 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7051282051282052, "acc_stderr": 0.023119362758232304, "acc_norm": 0.7051282051282052, "acc_norm_stderr": 0.023119362758232304 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35185185185185186, "acc_stderr": 0.029116617606083018, "acc_norm": 0.35185185185185186, "acc_norm_stderr": 0.029116617606083018 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7899159663865546, "acc_stderr": 0.026461398717471874, "acc_norm": 0.7899159663865546, "acc_norm_stderr": 0.026461398717471874 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.44370860927152317, "acc_stderr": 0.04056527902281732, "acc_norm": 0.44370860927152317, "acc_norm_stderr": 0.04056527902281732 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8678899082568807, "acc_stderr": 0.014517801914598238, "acc_norm": 0.8678899082568807, "acc_norm_stderr": 0.014517801914598238 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5879629629629629, "acc_stderr": 
0.03356787758160831, "acc_norm": 0.5879629629629629, "acc_norm_stderr": 0.03356787758160831 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8431372549019608, "acc_stderr": 0.025524722324553346, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.025524722324553346 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8776371308016878, "acc_stderr": 0.021331741829746793, "acc_norm": 0.8776371308016878, "acc_norm_stderr": 0.021331741829746793 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7533632286995515, "acc_stderr": 0.028930413120910888, "acc_norm": 0.7533632286995515, "acc_norm_stderr": 0.028930413120910888 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596913, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596913 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8760330578512396, "acc_stderr": 0.030083098716035196, "acc_norm": 0.8760330578512396, "acc_norm_stderr": 0.030083098716035196 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8240740740740741, "acc_stderr": 0.036809181416738807, "acc_norm": 0.8240740740740741, "acc_norm_stderr": 0.036809181416738807 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.0335195387952127, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.0335195387952127 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5446428571428571, "acc_stderr": 0.04726835553719097, "acc_norm": 0.5446428571428571, "acc_norm_stderr": 0.04726835553719097 }, "harness|hendrycksTest-management|5": { "acc": 0.8446601941747572, "acc_stderr": 0.035865947385739734, "acc_norm": 0.8446601941747572, "acc_norm_stderr": 0.035865947385739734 }, "harness|hendrycksTest-marketing|5": { "acc": 0.905982905982906, "acc_stderr": 0.01911989279892498, "acc_norm": 0.905982905982906, "acc_norm_stderr": 0.01911989279892498 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.78, "acc_stderr": 
0.04163331998932262, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932262 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8710089399744572, "acc_stderr": 0.01198637154808687, "acc_norm": 0.8710089399744572, "acc_norm_stderr": 0.01198637154808687 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7456647398843931, "acc_stderr": 0.023445826276545546, "acc_norm": 0.7456647398843931, "acc_norm_stderr": 0.023445826276545546 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39664804469273746, "acc_stderr": 0.016361354769822468, "acc_norm": 0.39664804469273746, "acc_norm_stderr": 0.016361354769822468 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7843137254901961, "acc_stderr": 0.02355083135199509, "acc_norm": 0.7843137254901961, "acc_norm_stderr": 0.02355083135199509 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7813504823151125, "acc_stderr": 0.023475581417861106, "acc_norm": 0.7813504823151125, "acc_norm_stderr": 0.023475581417861106 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8148148148148148, "acc_stderr": 0.021613809395224805, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.021613809395224805 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5319148936170213, "acc_stderr": 0.02976667507587387, "acc_norm": 0.5319148936170213, "acc_norm_stderr": 0.02976667507587387 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5176010430247718, "acc_stderr": 0.012762321298823643, "acc_norm": 0.5176010430247718, "acc_norm_stderr": 0.012762321298823643 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7683823529411765, "acc_stderr": 0.025626533803777562, "acc_norm": 0.7683823529411765, "acc_norm_stderr": 0.025626533803777562 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7434640522875817, "acc_stderr": 0.01766784161237899, "acc_norm": 0.7434640522875817, "acc_norm_stderr": 0.01766784161237899 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 
0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7918367346938775, "acc_stderr": 0.025991117672813292, "acc_norm": 0.7918367346938775, "acc_norm_stderr": 0.025991117672813292 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8805970149253731, "acc_stderr": 0.02292879327721974, "acc_norm": 0.8805970149253731, "acc_norm_stderr": 0.02292879327721974 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.88, "acc_stderr": 0.03265986323710906, "acc_norm": 0.88, "acc_norm_stderr": 0.03265986323710906 }, "harness|hendrycksTest-virology|5": { "acc": 0.5240963855421686, "acc_stderr": 0.03887971849597264, "acc_norm": 0.5240963855421686, "acc_norm_stderr": 0.03887971849597264 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.3953488372093023, "mc1_stderr": 0.017115815632418187, "mc2": 0.5546229838725043, "mc2_stderr": 0.01500911833285647 }, "harness|winogrande|5": { "acc": 0.7166535122336227, "acc_stderr": 0.012664751735505323 }, "harness|gsm8k|5": { "acc": 0.5185746777862017, "acc_stderr": 0.013762977910317584 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
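For working with the per-task metrics shown under "Latest results" above, a minimal sketch that extracts the normalized accuracy from the results JSON; the dict literal is a small excerpt copied from the run above, standing in for the full loaded file:

```python
import json

# Small excerpt of the aggregated results JSON from run 2023-12-30T15:32:21.448389.
results_json = """{
  "all": {"acc": 0.6934467920122487, "acc_norm": 0.6997711235478742},
  "harness|arc:challenge|25": {"acc": 0.6271331058020477, "acc_norm": 0.6646757679180887},
  "harness|hellaswag|10": {"acc": 0.5341565425214101, "acc_norm": 0.7180840470025891},
  "harness|gsm8k|5": {"acc": 0.5185746777862017}
}"""

results = json.loads(results_json)

# Per-task score: prefer normalized accuracy, fall back to plain accuracy
# (gsm8k, for instance, only reports "acc").
scores = {
    task: metrics.get("acc_norm", metrics.get("acc"))
    for task, metrics in results.items()
    if task != "all"
}
for task, score in sorted(scores.items()):
    print(f"{task}: {score:.4f}")
```

The same dictionary shape comes back when loading the "results" config with `datasets`, so the fallback logic above applies unchanged there.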
open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v15.4
[ "region:us" ]
2023-12-30T15:34:42+00:00
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v15.4", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v15.4](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v15.4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v15.4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T15:32:21.448389](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v15.4/blob/main/results_2023-12-30T15-32-21.448389.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6934467920122487,\n \"acc_stderr\": 0.030787993284565957,\n \"acc_norm\": 0.6997711235478742,\n \"acc_norm_stderr\": 0.031369777502468055,\n \"mc1\": 0.3953488372093023,\n \"mc1_stderr\": 0.017115815632418187,\n \"mc2\": 0.5546229838725043,\n \"mc2_stderr\": 0.01500911833285647\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6271331058020477,\n \"acc_stderr\": 0.014131176760131169,\n \"acc_norm\": 0.6646757679180887,\n \"acc_norm_stderr\": 0.013796182947785562\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5341565425214101,\n \"acc_stderr\": 0.004978124945759845,\n \"acc_norm\": 0.7180840470025891,\n \"acc_norm_stderr\": 0.00449013069102043\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7960526315789473,\n \"acc_stderr\": 0.032790004063100495,\n \"acc_norm\": 0.7960526315789473,\n \"acc_norm_stderr\": 0.032790004063100495\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7396226415094339,\n \"acc_stderr\": 0.027008766090708042,\n \"acc_norm\": 0.7396226415094339,\n \"acc_norm_stderr\": 0.027008766090708042\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.03309615177059007,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.03309615177059007\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.57,\n 
\"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6994219653179191,\n \"acc_stderr\": 0.03496101481191179,\n \"acc_norm\": 0.6994219653179191,\n \"acc_norm_stderr\": 0.03496101481191179\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932264,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932264\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6851063829787234,\n \"acc_stderr\": 0.030363582197238167,\n \"acc_norm\": 0.6851063829787234,\n \"acc_norm_stderr\": 0.030363582197238167\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5789473684210527,\n \"acc_stderr\": 0.04644602091222316,\n \"acc_norm\": 0.5789473684210527,\n \"acc_norm_stderr\": 0.04644602091222316\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.696551724137931,\n \"acc_stderr\": 0.038312260488503336,\n \"acc_norm\": 0.696551724137931,\n \"acc_norm_stderr\": 0.038312260488503336\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4497354497354497,\n \"acc_stderr\": 0.02562085704293665,\n \"acc_norm\": 0.4497354497354497,\n \"acc_norm_stderr\": 0.02562085704293665\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 
0.04444444444444449,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8290322580645161,\n \"acc_stderr\": 0.02141724293632159,\n \"acc_norm\": 0.8290322580645161,\n \"acc_norm_stderr\": 0.02141724293632159\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6157635467980296,\n \"acc_stderr\": 0.03422398565657551,\n \"acc_norm\": 0.6157635467980296,\n \"acc_norm_stderr\": 0.03422398565657551\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.030117688929503582,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.030117688929503582\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8636363636363636,\n \"acc_stderr\": 0.024450155973189835,\n \"acc_norm\": 0.8636363636363636,\n \"acc_norm_stderr\": 0.024450155973189835\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9326424870466321,\n \"acc_stderr\": 0.018088393839078912,\n \"acc_norm\": 0.9326424870466321,\n \"acc_norm_stderr\": 0.018088393839078912\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7051282051282052,\n \"acc_stderr\": 0.023119362758232304,\n \"acc_norm\": 0.7051282051282052,\n \"acc_norm_stderr\": 0.023119362758232304\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083018,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083018\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7899159663865546,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.7899159663865546,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.44370860927152317,\n \"acc_stderr\": 0.04056527902281732,\n \"acc_norm\": 0.44370860927152317,\n \"acc_norm_stderr\": 0.04056527902281732\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8678899082568807,\n \"acc_stderr\": 0.014517801914598238,\n \"acc_norm\": 0.8678899082568807,\n \"acc_norm_stderr\": 0.014517801914598238\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5879629629629629,\n \"acc_stderr\": 0.03356787758160831,\n \"acc_norm\": 0.5879629629629629,\n \"acc_norm_stderr\": 0.03356787758160831\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.025524722324553346,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.025524722324553346\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8776371308016878,\n \"acc_stderr\": 0.021331741829746793,\n \"acc_norm\": 0.8776371308016878,\n \"acc_norm_stderr\": 0.021331741829746793\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7533632286995515,\n \"acc_stderr\": 0.028930413120910888,\n \"acc_norm\": 0.7533632286995515,\n \"acc_norm_stderr\": 0.028930413120910888\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8760330578512396,\n \"acc_stderr\": 0.030083098716035196,\n \"acc_norm\": 0.8760330578512396,\n \"acc_norm_stderr\": 0.030083098716035196\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 
0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.0335195387952127,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.0335195387952127\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5446428571428571,\n \"acc_stderr\": 0.04726835553719097,\n \"acc_norm\": 0.5446428571428571,\n \"acc_norm_stderr\": 0.04726835553719097\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.035865947385739734,\n \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.035865947385739734\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.905982905982906,\n \"acc_stderr\": 0.01911989279892498,\n \"acc_norm\": 0.905982905982906,\n \"acc_norm_stderr\": 0.01911989279892498\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8710089399744572,\n \"acc_stderr\": 0.01198637154808687,\n \"acc_norm\": 0.8710089399744572,\n \"acc_norm_stderr\": 0.01198637154808687\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.023445826276545546,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.023445826276545546\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39664804469273746,\n \"acc_stderr\": 0.016361354769822468,\n \"acc_norm\": 0.39664804469273746,\n \"acc_norm_stderr\": 0.016361354769822468\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.02355083135199509,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.02355083135199509\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7813504823151125,\n \"acc_stderr\": 
0.023475581417861106,\n \"acc_norm\": 0.7813504823151125,\n \"acc_norm_stderr\": 0.023475581417861106\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.021613809395224805,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.021613809395224805\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.02976667507587387,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.02976667507587387\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5176010430247718,\n \"acc_stderr\": 0.012762321298823643,\n \"acc_norm\": 0.5176010430247718,\n \"acc_norm_stderr\": 0.012762321298823643\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7683823529411765,\n \"acc_stderr\": 0.025626533803777562,\n \"acc_norm\": 0.7683823529411765,\n \"acc_norm_stderr\": 0.025626533803777562\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7434640522875817,\n \"acc_stderr\": 0.01766784161237899,\n \"acc_norm\": 0.7434640522875817,\n \"acc_norm_stderr\": 0.01766784161237899\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7918367346938775,\n \"acc_stderr\": 0.025991117672813292,\n \"acc_norm\": 0.7918367346938775,\n \"acc_norm_stderr\": 0.025991117672813292\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8805970149253731,\n \"acc_stderr\": 0.02292879327721974,\n \"acc_norm\": 0.8805970149253731,\n \"acc_norm_stderr\": 0.02292879327721974\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 
0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3953488372093023,\n \"mc1_stderr\": 0.017115815632418187,\n \"mc2\": 0.5546229838725043,\n \"mc2_stderr\": 0.01500911833285647\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7166535122336227,\n \"acc_stderr\": 0.012664751735505323\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5185746777862017,\n \"acc_stderr\": 0.013762977910317584\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v15.4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|arc:challenge|25_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|gsm8k|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hellaswag|10_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-32-21.448389.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-32-21.448389.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-32-21.448389.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-32-21.448389.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-32-21.448389.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-32-21.448389.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T15-32-21.448389.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T15-32-21.448389.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["**/details_harness|winogrande|5_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T15-32-21.448389.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T15_32_21.448389", "path": ["results_2023-12-30T15-32-21.448389.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T15-32-21.448389.parquet"]}]}]}
2023-12-30T15:35:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v15.4 Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-8x7b-v15.4 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T15:32:21.448389 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v15.4\n\n\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-8x7b-v15.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T15:32:21.448389(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v15.4\n\n\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-8x7b-v15.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T15:32:21.448389(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 199, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v15.4\n\n\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-8x7b-v15.4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T15:32:21.448389(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
80b8df230f36658fde8efca7be9f0e36b04fcb54
# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v4 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [kekmodel/StopCarbon-10.7B-v4](https://huggingface.co/kekmodel/StopCarbon-10.7B-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v4", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T15:35:13.189593](https://huggingface.co/datasets/open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v4/blob/main/results_2023-12-30T15-35-13.189593.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6661497857396633, "acc_stderr": 0.031628761843645284, "acc_norm": 0.6670126271632068, "acc_norm_stderr": 0.0322720542781299, "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7188601434375502, "mc2_stderr": 0.014969606008941042 }, "harness|arc:challenge|25": { "acc": 0.6851535836177475, "acc_stderr": 0.013572657703084948, "acc_norm": 0.712457337883959, "acc_norm_stderr": 0.013226719056266123 }, "harness|hellaswag|10": { "acc": 0.7149970125473013, "acc_stderr": 0.004504932999736409, "acc_norm": 0.8849830711013742, "acc_norm_stderr": 0.0031839033919416975 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.75, "acc_stderr": 0.03523807393012047, "acc_norm": 0.75, "acc_norm_stderr": 0.03523807393012047 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.73, "acc_stderr": 0.04461960433384741, "acc_norm": 0.73, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956911, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956911 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.39215686274509803, "acc_stderr": 0.04858083574266346, "acc_norm": 0.39215686274509803, "acc_norm_stderr": 0.04858083574266346 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.04408440022768077, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768077 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6212765957446809, "acc_stderr": 0.03170995606040655, "acc_norm": 0.6212765957446809, "acc_norm_stderr": 0.03170995606040655 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.47368421052631576, "acc_stderr": 0.046970851366478626, "acc_norm": 0.47368421052631576, "acc_norm_stderr": 0.046970851366478626 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6206896551724138, "acc_stderr": 0.040434618619167466, "acc_norm": 0.6206896551724138, "acc_norm_stderr": 0.040434618619167466 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4947089947089947, "acc_stderr": 0.02574986828855657, "acc_norm": 0.4947089947089947, "acc_norm_stderr": 0.02574986828855657 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42857142857142855, "acc_stderr": 0.0442626668137991, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.0442626668137991 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8161290322580645, "acc_stderr": 0.022037217340267822, "acc_norm": 0.8161290322580645, "acc_norm_stderr": 0.022037217340267822 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5073891625615764, "acc_stderr": 0.035176035403610105, "acc_norm": 0.5073891625615764, "acc_norm_stderr": 0.035176035403610105 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8181818181818182, "acc_stderr": 0.03011768892950357, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.03011768892950357 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.02150024957603348, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.02150024957603348 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402534, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402534 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3592592592592593, "acc_stderr": 0.029252905927251976, "acc_norm": 0.3592592592592593, "acc_norm_stderr": 0.029252905927251976 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7184873949579832, "acc_stderr": 0.02921354941437217, "acc_norm": 0.7184873949579832, "acc_norm_stderr": 0.02921354941437217 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8458715596330275, "acc_stderr": 0.015480826865374308, "acc_norm": 0.8458715596330275, "acc_norm_stderr": 0.015480826865374308 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5740740740740741, "acc_stderr": 0.03372343271653062, "acc_norm": 0.5740740740740741, "acc_norm_stderr": 
0.03372343271653062 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8523206751054853, "acc_stderr": 0.0230943295825957, "acc_norm": 0.8523206751054853, "acc_norm_stderr": 0.0230943295825957 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596915, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596915 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7668711656441718, "acc_stderr": 0.0332201579577674, "acc_norm": 0.7668711656441718, "acc_norm_stderr": 0.0332201579577674 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8589743589743589, "acc_stderr": 0.02280138253459753, "acc_norm": 0.8589743589743589, "acc_norm_stderr": 0.02280138253459753 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8071519795657727, "acc_stderr": 0.014108533515757431, "acc_norm": 0.8071519795657727, "acc_norm_stderr": 0.014108533515757431 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7543352601156069, "acc_stderr": 0.023176298203992005, "acc_norm": 0.7543352601156069, "acc_norm_stderr": 0.023176298203992005 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.394413407821229, "acc_stderr": 0.01634538676210397, "acc_norm": 0.394413407821229, "acc_norm_stderr": 0.01634538676210397 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.761437908496732, "acc_stderr": 0.02440439492808787, "acc_norm": 0.761437908496732, "acc_norm_stderr": 0.02440439492808787 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.025494259350694905, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.025494259350694905 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7808641975308642, "acc_stderr": 0.02301670564026219, "acc_norm": 0.7808641975308642, "acc_norm_stderr": 0.02301670564026219 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5, "acc_stderr": 0.029827499313594685, "acc_norm": 0.5, "acc_norm_stderr": 0.029827499313594685 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4941329856584094, "acc_stderr": 0.012769356925216526, "acc_norm": 0.4941329856584094, "acc_norm_stderr": 0.012769356925216526 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7463235294117647, "acc_stderr": 0.026431329870789534, "acc_norm": 0.7463235294117647, "acc_norm_stderr": 0.026431329870789534 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6797385620915033, "acc_stderr": 0.018875682938069446, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.018875682938069446 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 0.6909090909090909, "acc_norm_stderr": 0.044262946482000985 }, 
"harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.02812342933514278, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.02812342933514278 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8308457711442786, "acc_stderr": 0.02650859065623327, "acc_norm": 0.8308457711442786, "acc_norm_stderr": 0.02650859065623327 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 }, "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598053, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598053 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7188601434375502, "mc2_stderr": 0.014969606008941042 }, "harness|winogrande|5": { "acc": 0.8342541436464088, "acc_stderr": 0.010450899545370632 }, "harness|gsm8k|5": { "acc": 0.6444275966641395, "acc_stderr": 0.013185402252713852 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
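
The per-run results file linked under "Latest results" is plain JSON keyed by task name, as the excerpt above shows. A minimal sketch of reading it directly with the standard `json` module (the `results_json` string below is a hypothetical truncated excerpt of that file, built from values shown in the card, not the full download):

```python
import json

# Hypothetical truncated excerpt of a results_*.json file; the real file
# holds one entry per evaluated task plus the aggregated "all" entry.
results_json = """
{
  "all": {"acc": 0.6661497857396633, "acc_norm": 0.6670126271632068},
  "harness|winogrande|5": {"acc": 0.8342541436464088},
  "harness|gsm8k|5": {"acc": 0.6444275966641395}
}
"""

results = json.loads(results_json)
overall_acc = results["all"]["acc"]  # aggregated accuracy across tasks
# Per-task accuracies, skipping the aggregate entry
per_task = {task: m["acc"] for task, m in results.items() if task != "all"}
print(round(overall_acc, 4))  # 0.6661
```

This complements the `load_dataset` snippet above: the "results" configuration serves the same aggregated numbers, while the raw JSON is convenient for quick offline inspection.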
open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v4
[ "region:us" ]
2023-12-30T15:37:31+00:00
{"pretty_name": "Evaluation run of kekmodel/StopCarbon-10.7B-v4", "dataset_summary": "Dataset automatically created during the evaluation run of model [kekmodel/StopCarbon-10.7B-v4](https://huggingface.co/kekmodel/StopCarbon-10.7B-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T15:35:13.189593](https://huggingface.co/datasets/open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v4/blob/main/results_2023-12-30T15-35-13.189593.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6661497857396633,\n \"acc_stderr\": 0.031628761843645284,\n \"acc_norm\": 0.6670126271632068,\n \"acc_norm_stderr\": 0.0322720542781299,\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7188601434375502,\n \"mc2_stderr\": 0.014969606008941042\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6851535836177475,\n \"acc_stderr\": 0.013572657703084948,\n \"acc_norm\": 0.712457337883959,\n \"acc_norm_stderr\": 0.013226719056266123\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7149970125473013,\n \"acc_stderr\": 0.004504932999736409,\n \"acc_norm\": 0.8849830711013742,\n \"acc_norm_stderr\": 0.0031839033919416975\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03523807393012047,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03523807393012047\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266346,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266346\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768077\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6212765957446809,\n \"acc_stderr\": 0.03170995606040655,\n \"acc_norm\": 0.6212765957446809,\n \"acc_norm_stderr\": 0.03170995606040655\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.040434618619167466,\n \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.040434618619167466\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n 
\"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8161290322580645,\n \"acc_stderr\": 0.022037217340267822,\n \"acc_norm\": 0.8161290322580645,\n \"acc_norm_stderr\": 0.022037217340267822\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5073891625615764,\n \"acc_stderr\": 0.035176035403610105,\n \"acc_norm\": 0.5073891625615764,\n \"acc_norm_stderr\": 0.035176035403610105\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.03011768892950357,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03011768892950357\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603348,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603348\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7184873949579832,\n \"acc_stderr\": 0.02921354941437217,\n \"acc_norm\": 0.7184873949579832,\n \"acc_norm_stderr\": 0.02921354941437217\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374308,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374308\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.03372343271653062,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.03372343271653062\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8523206751054853,\n \"acc_stderr\": 0.0230943295825957,\n \"acc_norm\": 0.8523206751054853,\n \"acc_norm_stderr\": 0.0230943295825957\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596915,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596915\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n 
\"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.02280138253459753,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.02280138253459753\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8071519795657727,\n \"acc_stderr\": 0.014108533515757431,\n \"acc_norm\": 0.8071519795657727,\n \"acc_norm_stderr\": 0.014108533515757431\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7543352601156069,\n \"acc_stderr\": 0.023176298203992005,\n \"acc_norm\": 0.7543352601156069,\n \"acc_norm_stderr\": 0.023176298203992005\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.394413407821229,\n \"acc_stderr\": 0.01634538676210397,\n \"acc_norm\": 0.394413407821229,\n \"acc_norm_stderr\": 0.01634538676210397\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.02440439492808787,\n \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.02440439492808787\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.025494259350694905,\n \"acc_norm\": 
0.7202572347266881,\n \"acc_norm_stderr\": 0.025494259350694905\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7808641975308642,\n \"acc_stderr\": 0.02301670564026219,\n \"acc_norm\": 0.7808641975308642,\n \"acc_norm_stderr\": 0.02301670564026219\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4941329856584094,\n \"acc_stderr\": 0.012769356925216526,\n \"acc_norm\": 0.4941329856584094,\n \"acc_norm_stderr\": 0.012769356925216526\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7463235294117647,\n \"acc_stderr\": 0.026431329870789534,\n \"acc_norm\": 0.7463235294117647,\n \"acc_norm_stderr\": 0.026431329870789534\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069446,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069446\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.02650859065623327,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.02650859065623327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 
0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7188601434375502,\n \"mc2_stderr\": 0.014969606008941042\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8342541436464088,\n \"acc_stderr\": 0.010450899545370632\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6444275966641395,\n \"acc_stderr\": 0.013185402252713852\n }\n}\n```", "repo_url": "https://huggingface.co/kekmodel/StopCarbon-10.7B-v4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|arc:challenge|25_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|gsm8k|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hellaswag|10_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-35-13.189593.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-35-13.189593.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-35-13.189593.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-35-13.189593.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-35-13.189593.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-35-13.189593.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T15-35-13.189593.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T15-35-13.189593.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["**/details_harness|winogrande|5_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T15-35-13.189593.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T15_35_13.189593", "path": ["results_2023-12-30T15-35-13.189593.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T15-35-13.189593.parquet"]}]}]}
2023-12-30T15:37:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v4 Dataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v4 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T15:35:13.189593 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
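The card above says "you can for instance do the following" without the snippet itself; a minimal sketch of that call, with the repo and config names assumed from the naming pattern visible in this card's file listing (`details_<org>__<model>`, `harness_<task>_<n_shots>`):

```python
# Sketch: derive the details repo and config names for this evaluation run.
# The naming convention here is assumed from this card's parquet file listing.
model = "kekmodel/StopCarbon-10.7B-v4"
repo = "open-llm-leaderboard/details_" + model.replace("/", "__")
config = "harness_winogrande_5"  # one config per evaluated task

print(repo)    # open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v4
print(config)  # harness_winogrande_5

# Actually loading the details requires network access to the Hugging Face Hub:
# from datasets import load_dataset
# data = load_dataset(repo, config, split="train")  # "train" tracks the latest run
```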
[ "# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v4\n\n\n\nDataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T15:35:13.189593(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v4\n\n\n\nDataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T15:35:13.189593(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 187, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v4\n\n\n\nDataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T15:35:13.189593(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
6197736f8d7c187ea7fe3fd675f5a09c033400ea
Problem example: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/637b4be9da59f5103ec37ad5/_2TQUu2TF954lZD9hf7lo.png) Solution example: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/637b4be9da59f5103ec37ad5/1g4GRtz3LRY2U7pXhQmXL.png)
foldl/touch_some_grass
[ "region:us" ]
2023-12-30T15:50:00+00:00
{"dataset_info": {"features": [{"name": "problems", "dtype": "string"}, {"name": "solutions", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 231331, "num_examples": 109}], "download_size": 119759, "dataset_size": 231331}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-01T22:35:15+00:00
[]
[]
TAGS #region-us
Problem example: !image/png Solution example: !image/png
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
d475a45b57eba79aa8ef0009b3e20cd90854bc6b
# Dataset Card for "autotrain-data-test-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
peshkatari/autotrain-data-test-data
[ "region:us" ]
2023-12-30T15:52:40+00:00
{"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14845, "num_examples": 43}, {"name": "validation", "num_bytes": 14845, "num_examples": 43}], "download_size": 12914, "dataset_size": 29690}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2023-12-30T15:52:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotrain-data-test-data" More Information needed
[ "# Dataset Card for \"autotrain-data-test-data\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotrain-data-test-data\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-test-data\"\n\nMore Information needed" ]
cad7850bc0a57b1bc9b2ddfd44ad230614c494df
# Dataset card for sidewalk-imagery-clone ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset description](#dataset-description) - [Dataset categories](#dataset-categories) ## Dataset description - **Homepage:** https://segments.ai/Lit4pCol4b/sidewalk-imagery-clone This dataset was created using [Segments.ai](https://segments.ai). It can be found [here](https://segments.ai/Lit4pCol4b/sidewalk-imagery-clone). ## Dataset categories | Id | Name | Description | | --- | ---- | ----------- | | 1 | flat-road | - | | 2 | flat-sidewalk | - | | 3 | flat-crosswalk | - | | 4 | flat-cyclinglane | - | | 5 | flat-parkingdriveway | - | | 6 | flat-railtrack | - | | 7 | flat-curb | - | | 8 | human-person | - | | 9 | human-rider | - | | 10 | vehicle-car | - | | 11 | vehicle-truck | - | | 12 | vehicle-bus | - | | 13 | vehicle-tramtrain | - | | 14 | vehicle-motorcycle | - | | 15 | vehicle-bicycle | - | | 16 | vehicle-caravan | - | | 17 | vehicle-cartrailer | - | | 18 | construction-building | - | | 19 | construction-door | - | | 20 | construction-wall | - | | 21 | construction-fenceguardrail | - | | 22 | construction-bridge | - | | 23 | construction-tunnel | - | | 24 | construction-stairs | - | | 25 | object-pole | - | | 26 | object-trafficsign | - | | 27 | object-trafficlight | - | | 28 | nature-vegetation | - | | 29 | nature-terrain | - | | 30 | sky | - | | 31 | void-ground | - | | 32 | void-dynamic | - | | 33 | void-static | - | | 34 | void-unclear | - |
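The category table above can be turned into a lookup for working with the dataset's label masks. A small illustrative sketch — the id/name pairs are copied from the table, while the mask helper itself is an assumption about how one might inspect a label image:

```python
# Category ids/names copied from the table in this card.
CATEGORIES = {
    1: "flat-road", 2: "flat-sidewalk", 3: "flat-crosswalk",
    4: "flat-cyclinglane", 5: "flat-parkingdriveway", 6: "flat-railtrack",
    7: "flat-curb", 8: "human-person", 9: "human-rider",
    10: "vehicle-car", 11: "vehicle-truck", 12: "vehicle-bus",
    13: "vehicle-tramtrain", 14: "vehicle-motorcycle", 15: "vehicle-bicycle",
    16: "vehicle-caravan", 17: "vehicle-cartrailer",
    18: "construction-building", 19: "construction-door",
    20: "construction-wall", 21: "construction-fenceguardrail",
    22: "construction-bridge", 23: "construction-tunnel",
    24: "construction-stairs", 25: "object-pole", 26: "object-trafficsign",
    27: "object-trafficlight", 28: "nature-vegetation", 29: "nature-terrain",
    30: "sky", 31: "void-ground", 32: "void-dynamic", 33: "void-static",
    34: "void-unclear",
}

def names_in_mask(mask):
    """Return the sorted category names present in a 2-D label mask of ids."""
    ids = {i for row in mask for i in row}
    return sorted(CATEGORIES[i] for i in ids if i in CATEGORIES)

# e.g. a tiny mask containing road, sidewalk and sky pixels:
print(names_in_mask([[1, 2], [30, 30]]))  # ['flat-road', 'flat-sidewalk', 'sky']
```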
Lit4pCol4b/sidewalk-imagery-clone
[ "task_categories:image-segmentation", "region:us" ]
2023-12-30T15:53:22+00:00
{"task_categories": ["image-segmentation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 172437921.0, "num_examples": 20}], "download_size": 14699473, "dataset_size": 172437921.0}}
2023-12-31T01:48:31+00:00
[]
[]
TAGS #task_categories-image-segmentation #region-us
Dataset card for sidewalk-imagery-clone ======================================= Table of Contents ----------------- * Table of Contents * Dataset description * Dataset categories Dataset description ------------------- * Homepage: URL This dataset was created using URL. It can be found here. Dataset categories ------------------ Id: 1, Name: flat-road, Description: - Id: 2, Name: flat-sidewalk, Description: - Id: 3, Name: flat-crosswalk, Description: - Id: 4, Name: flat-cyclinglane, Description: - Id: 5, Name: flat-parkingdriveway, Description: - Id: 6, Name: flat-railtrack, Description: - Id: 7, Name: flat-curb, Description: - Id: 8, Name: human-person, Description: - Id: 9, Name: human-rider, Description: - Id: 10, Name: vehicle-car, Description: - Id: 11, Name: vehicle-truck, Description: - Id: 12, Name: vehicle-bus, Description: - Id: 13, Name: vehicle-tramtrain, Description: - Id: 14, Name: vehicle-motorcycle, Description: - Id: 15, Name: vehicle-bicycle, Description: - Id: 16, Name: vehicle-caravan, Description: - Id: 17, Name: vehicle-cartrailer, Description: - Id: 18, Name: construction-building, Description: - Id: 19, Name: construction-door, Description: - Id: 20, Name: construction-wall, Description: - Id: 21, Name: construction-fenceguardrail, Description: - Id: 22, Name: construction-bridge, Description: - Id: 23, Name: construction-tunnel, Description: - Id: 24, Name: construction-stairs, Description: - Id: 25, Name: object-pole, Description: - Id: 26, Name: object-trafficsign, Description: - Id: 27, Name: object-trafficlight, Description: - Id: 28, Name: nature-vegetation, Description: - Id: 29, Name: nature-terrain, Description: - Id: 30, Name: sky, Description: - Id: 31, Name: void-ground, Description: - Id: 32, Name: void-dynamic, Description: - Id: 33, Name: void-static, Description: - Id: 34, Name: void-unclear, Description: -
[]
[ "TAGS\n#task_categories-image-segmentation #region-us \n" ]
[ 18 ]
[ "passage: TAGS\n#task_categories-image-segmentation #region-us \n" ]
8573d32d20f8e352669dddb851d1d20e27a17c16
# Dataset Card for "ffmperative_sample_5k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
salma-remyx/ffmperative_sample_5k
[ "region:us" ]
2023-12-30T15:56:22+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1948807, "num_examples": 5000}], "download_size": 599304, "dataset_size": 1948807}}
2023-12-30T15:56:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ffmperative_sample_5k" More Information needed
[ "# Dataset Card for \"ffmperative_sample_5k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ffmperative_sample_5k\"\n\nMore Information needed" ]
[ 6, 19 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ffmperative_sample_5k\"\n\nMore Information needed" ]
9507b4e0c55c76d17906c3b6d68e1c950f9365c6
# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v5 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [kekmodel/StopCarbon-10.7B-v5](https://huggingface.co/kekmodel/StopCarbon-10.7B-v5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v5", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T16:25:24.948425](https://huggingface.co/datasets/open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v5/blob/main/results_2023-12-30T16-25-24.948425.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.667270432389036, "acc_stderr": 0.03161503740481807, "acc_norm": 0.6679793731390249, "acc_norm_stderr": 0.032260225407857515, "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7183713907727333, "mc2_stderr": 0.014997186929843767 }, "harness|arc:challenge|25": { "acc": 0.6851535836177475, "acc_stderr": 0.01357265770308495, "acc_norm": 0.7098976109215017, "acc_norm_stderr": 0.013261573677520767 }, "harness|hellaswag|10": { "acc": 0.7143995220075682, "acc_stderr": 0.0045077680295901, "acc_norm": 0.8847839075881299, "acc_norm_stderr": 0.0031863002304505774 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.756578947368421, "acc_stderr": 0.034923496688842384, "acc_norm": 0.756578947368421, "acc_norm_stderr": 0.034923496688842384 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.73, "acc_stderr": 0.04461960433384741, "acc_norm": 0.73, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.028637235639800886, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.028637235639800886 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956913, "acc_norm": 0.51, "acc_norm_stderr": 
0.05024183937956913 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6297872340425532, "acc_stderr": 0.03156564682236786, "acc_norm": 0.6297872340425532, "acc_norm_stderr": 0.03156564682236786 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5087719298245614, "acc_stderr": 0.04702880432049615, "acc_norm": 0.5087719298245614, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6344827586206897, "acc_stderr": 0.040131241954243856, "acc_norm": 0.6344827586206897, "acc_norm_stderr": 0.040131241954243856 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.5026455026455027, "acc_stderr": 0.02575094967813038, "acc_norm": 0.5026455026455027, "acc_norm_stderr": 0.02575094967813038 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.021886178567172534, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.021886178567172534 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.806060606060606, "acc_stderr": 0.03087414513656209, "acc_norm": 0.806060606060606, "acc_norm_stderr": 0.03087414513656209 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.021995311963644244, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.021995311963644244 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37407407407407406, "acc_stderr": 0.029502861128955286, "acc_norm": 0.37407407407407406, "acc_norm_stderr": 0.029502861128955286 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7142857142857143, "acc_stderr": 0.029344572500634332, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.029344572500634332 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8513761467889909, "acc_stderr": 0.015251253773660834, "acc_norm": 0.8513761467889909, "acc_norm_stderr": 0.015251253773660834 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 
0.033674621388960775, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 0.033674621388960775 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8481012658227848, "acc_stderr": 0.023363878096632446, "acc_norm": 0.8481012658227848, "acc_norm_stderr": 0.023363878096632446 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7480916030534351, "acc_stderr": 0.03807387116306086, "acc_norm": 0.7480916030534351, "acc_norm_stderr": 0.03807387116306086 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.033519538795212696, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.033519538795212696 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.0230866350868414, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.0230866350868414 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 
0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8071519795657727, "acc_stderr": 0.014108533515757431, "acc_norm": 0.8071519795657727, "acc_norm_stderr": 0.014108533515757431 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7572254335260116, "acc_stderr": 0.023083658586984204, "acc_norm": 0.7572254335260116, "acc_norm_stderr": 0.023083658586984204 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39776536312849164, "acc_stderr": 0.01636920497126298, "acc_norm": 0.39776536312849164, "acc_norm_stderr": 0.01636920497126298 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7266881028938906, "acc_stderr": 0.025311765975426122, "acc_norm": 0.7266881028938906, "acc_norm_stderr": 0.025311765975426122 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7839506172839507, "acc_stderr": 0.022899162918445806, "acc_norm": 0.7839506172839507, "acc_norm_stderr": 0.022899162918445806 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.49645390070921985, "acc_stderr": 0.02982674915328092, "acc_norm": 0.49645390070921985, "acc_norm_stderr": 0.02982674915328092 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4915254237288136, "acc_stderr": 0.012768401697269057, "acc_norm": 0.4915254237288136, "acc_norm_stderr": 0.012768401697269057 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7426470588235294, "acc_stderr": 0.02655651947004151, "acc_norm": 0.7426470588235294, "acc_norm_stderr": 0.02655651947004151 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6781045751633987, "acc_stderr": 0.018901015322093092, "acc_norm": 0.6781045751633987, "acc_norm_stderr": 0.018901015322093092 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, 
"acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784593, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784593 }, "harness|hendrycksTest-sociology|5": { "acc": 0.845771144278607, "acc_stderr": 0.02553843336857834, "acc_norm": 0.845771144278607, "acc_norm_stderr": 0.02553843336857834 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 }, "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598052, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598052 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7183713907727333, "mc2_stderr": 0.014997186929843767 }, "harness|winogrande|5": { "acc": 0.8358326756116812, "acc_stderr": 0.010410849775222789 }, "harness|gsm8k|5": { "acc": 0.6520090978013646, "acc_stderr": 0.013120581030382134 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
-->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
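The per-task metrics in the results block above can be post-processed directly once loaded. A minimal sketch, using a small dict literal as an illustrative excerpt (the task names and `acc` values are copied verbatim from the results above), that ranks tasks from strongest to weakest accuracy:

```python
# Illustrative excerpt of per-task accuracies copied from the results block above.
accuracies = {
    "harness|hendrycksTest-us_foreign_policy|5": 0.91,
    "harness|hendrycksTest-high_school_government_and_politics|5": 0.8963730569948186,
    "harness|hendrycksTest-professional_law|5": 0.4915254237288136,
    "harness|hendrycksTest-moral_scenarios|5": 0.39776536312849164,
}

# Sort tasks by accuracy, highest first.
ranked = sorted(accuracies.items(), key=lambda item: item[1], reverse=True)
best_task, best_acc = ranked[0]
print(f"best: {best_task} ({best_acc:.2%})")
```

The same pattern applies to the full results dict once it is loaded from the JSON file linked under "Latest results".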
open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v5
[ "region:us" ]
2023-12-30T16:12:26+00:00
{"pretty_name": "Evaluation run of kekmodel/StopCarbon-10.7B-v5", "dataset_summary": "Dataset automatically created during the evaluation run of model [kekmodel/StopCarbon-10.7B-v5](https://huggingface.co/kekmodel/StopCarbon-10.7B-v5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v5\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T16:25:24.948425](https://huggingface.co/datasets/open-llm-leaderboard/details_kekmodel__StopCarbon-10.7B-v5/blob/main/results_2023-12-30T16-25-24.948425.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.667270432389036,\n \"acc_stderr\": 0.03161503740481807,\n \"acc_norm\": 0.6679793731390249,\n \"acc_norm_stderr\": 0.032260225407857515,\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7183713907727333,\n \"mc2_stderr\": 0.014997186929843767\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6851535836177475,\n \"acc_stderr\": 0.01357265770308495,\n \"acc_norm\": 0.7098976109215017,\n \"acc_norm_stderr\": 0.013261573677520767\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7143995220075682,\n \"acc_stderr\": 0.0045077680295901,\n \"acc_norm\": 0.8847839075881299,\n \"acc_norm_stderr\": 0.0031863002304505774\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.756578947368421,\n \"acc_stderr\": 0.034923496688842384,\n \"acc_norm\": 0.756578947368421,\n \"acc_norm_stderr\": 0.034923496688842384\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.028637235639800886,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.028637235639800886\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n 
\"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6297872340425532,\n \"acc_stderr\": 0.03156564682236786,\n \"acc_norm\": 0.6297872340425532,\n \"acc_norm_stderr\": 0.03156564682236786\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.5026455026455027,\n \"acc_stderr\": 0.02575094967813038,\n \"acc_norm\": 0.5026455026455027,\n \"acc_norm_stderr\": 0.02575094967813038\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 
0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644244,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644244\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955286,\n \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955286\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634332,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634332\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660834,\n \"acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660834\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8481012658227848,\n \"acc_stderr\": 0.023363878096632446,\n \"acc_norm\": 0.8481012658227848,\n \"acc_norm_stderr\": 0.023363878096632446\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n 
\"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8071519795657727,\n \"acc_stderr\": 0.014108533515757431,\n \"acc_norm\": 0.8071519795657727,\n \"acc_norm_stderr\": 0.014108533515757431\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7572254335260116,\n \"acc_stderr\": 0.023083658586984204,\n \"acc_norm\": 0.7572254335260116,\n \"acc_norm_stderr\": 0.023083658586984204\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39776536312849164,\n \"acc_stderr\": 0.01636920497126298,\n \"acc_norm\": 0.39776536312849164,\n \"acc_norm_stderr\": 0.01636920497126298\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.025311765975426122,\n 
\"acc_norm\": 0.7266881028938906,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7839506172839507,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.7839506172839507,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4915254237288136,\n \"acc_stderr\": 0.012768401697269057,\n \"acc_norm\": 0.4915254237288136,\n \"acc_norm_stderr\": 0.012768401697269057\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.02655651947004151,\n \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.02655651947004151\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.02553843336857834,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.02553843336857834\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 
0.03836722176598052,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7183713907727333,\n \"mc2_stderr\": 0.014997186929843767\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8358326756116812,\n \"acc_stderr\": 0.010410849775222789\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6520090978013646,\n \"acc_stderr\": 0.013120581030382134\n }\n}\n```", "repo_url": "https://huggingface.co/kekmodel/StopCarbon-10.7B-v5", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": 
["**/details_harness|hellaswag|10_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-10-07.476950.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-10-07.476950.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-10-07.476950.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-25-24.948425.parquet", 
"**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-25-24.948425.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-25-24.948425.parquet", 
"**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-25-24.948425.parquet", 
"**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-25-24.948425.parquet", 
"**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-25-24.948425.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-12-30T16-25-24.948425.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": 
["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": 
"2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": 
"2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": 
["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["**/details_harness|winogrande|5_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["**/details_harness|winogrande|5_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T16-25-24.948425.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T16_10_07.476950", "path": ["results_2023-12-30T16-10-07.476950.parquet"]}, {"split": "2023_12_30T16_25_24.948425", "path": ["results_2023-12-30T16-25-24.948425.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T16-25-24.948425.parquet"]}]}]}
2023-12-30T16:27:45+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v5 Dataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v5 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T16:25:24.948425 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
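The splits mentioned above are named after run timestamps. A small self-contained sketch of recovering the newest run from such names (the split names are taken from this card's configs; the parsing helper is illustrative, assuming underscores stand in for the "-" and ":" of an ISO timestamp):

```python
from datetime import datetime

# Timestamp-named splits as they appear in this card's configs.
splits = ["2023_12_30T16_10_07.476950", "2023_12_30T16_25_24.948425"]

def split_to_datetime(name: str) -> datetime:
    # "2023_12_30T16_25_24.948425" -> "2023-12-30T16:25:24.948425"
    date_part, time_part = name.split("T")
    iso = date_part.replace("_", "-") + "T" + time_part.replace("_", ":")
    return datetime.fromisoformat(iso)

# The "latest" split corresponds to the most recent run timestamp.
latest_split = max(splits, key=split_to_datetime)
print(latest_split)
```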
[ "# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v5\n\n\n\nDataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T16:25:24.948425(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v5\n\n\n\nDataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T16:25:24.948425(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 187, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of kekmodel/StopCarbon-10.7B-v5\n\n\n\nDataset automatically created during the evaluation run of model kekmodel/StopCarbon-10.7B-v5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T16:25:24.948425(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
94dbad22b5d2503b0bc6962e840d34600b5e0b1e
# Dataset Card for Evaluation run of DopeorNope/SOLARC-MOE-10.7Bx6 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [DopeorNope/SOLARC-MOE-10.7Bx6](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx6) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_DopeorNope__SOLARC-MOE-10.7Bx6", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T16:10:15.561942](https://huggingface.co/datasets/open-llm-leaderboard/details_DopeorNope__SOLARC-MOE-10.7Bx6/blob/main/results_2023-12-30T16-10-15.561942.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6673061205259359, "acc_stderr": 0.03162953125162339, "acc_norm": 0.6680593571406013, "acc_norm_stderr": 0.03227520657555408, "mc1": 0.5679314565483476, "mc1_stderr": 0.017341202394988327, "mc2": 0.7185493815661169, "mc2_stderr": 0.015019908551593323 }, "harness|arc:challenge|25": { "acc": 0.6843003412969283, "acc_stderr": 0.013582571095815291, "acc_norm": 0.7090443686006825, "acc_norm_stderr": 0.013273077865907593 }, "harness|hellaswag|10": { "acc": 0.7133041226847242, "acc_stderr": 0.004512940497462742, "acc_norm": 0.8839872535351524, "acc_norm_stderr": 0.003195857247704915 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.743421052631579, "acc_stderr": 0.0355418036802569, "acc_norm": 0.743421052631579, "acc_norm_stderr": 0.0355418036802569 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.05021167315686779, "acc_norm": 0.52, "acc_norm_stderr": 
0.05021167315686779 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.39215686274509803, "acc_stderr": 0.048580835742663454, "acc_norm": 0.39215686274509803, "acc_norm_stderr": 0.048580835742663454 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.625531914893617, "acc_stderr": 0.03163910665367291, "acc_norm": 0.625531914893617, "acc_norm_stderr": 0.03163910665367291 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6344827586206897, "acc_stderr": 0.040131241954243856, "acc_norm": 0.6344827586206897, "acc_norm_stderr": 0.040131241954243856 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4947089947089947, "acc_stderr": 0.02574986828855657, "acc_norm": 0.4947089947089947, "acc_norm_stderr": 0.02574986828855657 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.021886178567172534, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.021886178567172534 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.806060606060606, "acc_stderr": 0.03087414513656209, "acc_norm": 0.806060606060606, "acc_norm_stderr": 0.03087414513656209 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.02150024957603347, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.02150024957603347 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402538, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402538 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37777777777777777, "acc_stderr": 0.02956070739246571, "acc_norm": 0.37777777777777777, "acc_norm_stderr": 0.02956070739246571 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7184873949579832, "acc_stderr": 0.029213549414372174, "acc_norm": 0.7184873949579832, "acc_norm_stderr": 0.029213549414372174 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8495412844036697, "acc_stderr": 0.015328563932669235, "acc_norm": 0.8495412844036697, "acc_norm_stderr": 0.015328563932669235 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 0.033674621388960775, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 
0.033674621388960775 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8523206751054853, "acc_stderr": 0.0230943295825957, "acc_norm": 0.8523206751054853, "acc_norm_stderr": 0.0230943295825957 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7480916030534351, "acc_stderr": 0.03807387116306086, "acc_norm": 0.7480916030534351, "acc_norm_stderr": 0.03807387116306086 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7484662576687117, "acc_stderr": 0.034089978868575295, "acc_norm": 0.7484662576687117, "acc_norm_stderr": 0.034089978868575295 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.0230866350868414, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.0230866350868414 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8058748403575989, "acc_stderr": 0.014143970276657567, "acc_norm": 0.8058748403575989, "acc_norm_stderr": 0.014143970276657567 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7630057803468208, "acc_stderr": 0.02289408248992599, "acc_norm": 0.7630057803468208, "acc_norm_stderr": 0.02289408248992599 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.394413407821229, "acc_stderr": 0.01634538676210397, "acc_norm": 0.394413407821229, "acc_norm_stderr": 0.01634538676210397 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.761437908496732, "acc_stderr": 0.02440439492808787, "acc_norm": 0.761437908496732, "acc_norm_stderr": 0.02440439492808787 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7331189710610932, "acc_stderr": 0.025122637608816643, "acc_norm": 0.7331189710610932, "acc_norm_stderr": 0.025122637608816643 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7870370370370371, "acc_stderr": 0.0227797190887334, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.0227797190887334 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.49645390070921985, "acc_stderr": 0.02982674915328092, "acc_norm": 0.49645390070921985, "acc_norm_stderr": 0.02982674915328092 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4941329856584094, "acc_stderr": 0.012769356925216526, "acc_norm": 0.4941329856584094, "acc_norm_stderr": 0.012769356925216526 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7426470588235294, "acc_stderr": 0.02655651947004151, "acc_norm": 0.7426470588235294, "acc_norm_stderr": 0.02655651947004151 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6797385620915033, "acc_stderr": 0.018875682938069446, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.018875682938069446 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, 
"harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.02812342933514278, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.02812342933514278 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.02587064676616913, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.02587064676616913 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 }, "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598052, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598052 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.5679314565483476, "mc1_stderr": 0.017341202394988327, "mc2": 0.7185493815661169, "mc2_stderr": 0.015019908551593323 }, "harness|winogrande|5": { "acc": 0.8366219415943172, "acc_stderr": 0.010390695970273766 }, "harness|gsm8k|5": { "acc": 0.6489764973464746, "acc_stderr": 0.013146945941397226 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_DopeorNope__SOLARC-MOE-10.7Bx6
[ "region:us" ]
2023-12-30T16:12:37+00:00
{"pretty_name": "Evaluation run of DopeorNope/SOLARC-MOE-10.7Bx6", "dataset_summary": "Dataset automatically created during the evaluation run of model [DopeorNope/SOLARC-MOE-10.7Bx6](https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx6) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_DopeorNope__SOLARC-MOE-10.7Bx6\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T16:10:15.561942](https://huggingface.co/datasets/open-llm-leaderboard/details_DopeorNope__SOLARC-MOE-10.7Bx6/blob/main/results_2023-12-30T16-10-15.561942.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6673061205259359,\n \"acc_stderr\": 0.03162953125162339,\n \"acc_norm\": 0.6680593571406013,\n \"acc_norm_stderr\": 0.03227520657555408,\n \"mc1\": 0.5679314565483476,\n \"mc1_stderr\": 0.017341202394988327,\n \"mc2\": 0.7185493815661169,\n \"mc2_stderr\": 0.015019908551593323\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6843003412969283,\n \"acc_stderr\": 0.013582571095815291,\n \"acc_norm\": 0.7090443686006825,\n \"acc_norm_stderr\": 0.013273077865907593\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7133041226847242,\n \"acc_stderr\": 0.004512940497462742,\n \"acc_norm\": 0.8839872535351524,\n \"acc_norm_stderr\": 0.003195857247704915\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.743421052631579,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.743421052631579,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n 
\"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 
0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603347,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603347\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402538,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.02956070739246571,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.02956070739246571\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.7184873949579832,\n \"acc_stderr\": 0.029213549414372174,\n \"acc_norm\": 0.7184873949579832,\n \"acc_norm_stderr\": 0.029213549414372174\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669235,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669235\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8523206751054853,\n \"acc_stderr\": 0.0230943295825957,\n \"acc_norm\": 0.8523206751054853,\n \"acc_norm_stderr\": 0.0230943295825957\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 
0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.034089978868575295,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.034089978868575295\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8058748403575989,\n \"acc_stderr\": 0.014143970276657567,\n \"acc_norm\": 0.8058748403575989,\n \"acc_norm_stderr\": 0.014143970276657567\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7630057803468208,\n \"acc_stderr\": 0.02289408248992599,\n \"acc_norm\": 0.7630057803468208,\n \"acc_norm_stderr\": 0.02289408248992599\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.394413407821229,\n \"acc_stderr\": 0.01634538676210397,\n \"acc_norm\": 0.394413407821229,\n \"acc_norm_stderr\": 0.01634538676210397\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.02440439492808787,\n \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.02440439492808787\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7331189710610932,\n \"acc_stderr\": 0.025122637608816643,\n \"acc_norm\": 0.7331189710610932,\n \"acc_norm_stderr\": 
0.025122637608816643\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0227797190887334,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0227797190887334\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4941329856584094,\n \"acc_stderr\": 0.012769356925216526,\n \"acc_norm\": 0.4941329856584094,\n \"acc_norm_stderr\": 0.012769356925216526\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.02655651947004151,\n \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.02655651947004151\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069446,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069446\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.02812342933514278,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.02812342933514278\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.5843373493975904,\n 
\"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5679314565483476,\n \"mc1_stderr\": 0.017341202394988327,\n \"mc2\": 0.7185493815661169,\n \"mc2_stderr\": 0.015019908551593323\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8366219415943172,\n \"acc_stderr\": 0.010390695970273766\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6489764973464746,\n \"acc_stderr\": 0.013146945941397226\n }\n}\n```", "repo_url": "https://huggingface.co/DopeorNope/SOLARC-MOE-10.7Bx6", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-10-15.561942.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-10-15.561942.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-10-15.561942.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-10-15.561942.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-10-15.561942.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-10-15.561942.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-10-15.561942.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-10-15.561942.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["**/details_harness|winogrande|5_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T16-10-15.561942.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T16_10_15.561942", "path": ["results_2023-12-30T16-10-15.561942.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T16-10-15.561942.parquet"]}]}]}
2023-12-30T16:12:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of DopeorNope/SOLARC-MOE-10.7Bx6 Dataset automatically created during the evaluation run of model DopeorNope/SOLARC-MOE-10.7Bx6 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T16:10:15.561942 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
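The load snippet this card refers to was lost in flattening; what does survive elsewhere in this dump is the leaderboard's repo-naming convention, where the model id's `/` becomes `__` under the `open-llm-leaderboard/details_` prefix. A sketch of that convention (a reconstruction inferred from the OpenBuddy card in this dump, not code from this card):

```python
def details_repo(model_name: str) -> str:
    """Details-dataset repo id for a model on the Open LLM Leaderboard.

    Pattern inferred from the OpenBuddy card in this dump: '/' in the
    model id is replaced by '__' under the details_ prefix.
    """
    return "open-llm-leaderboard/details_" + model_name.replace("/", "__")
```

Applied to the model above, this would give `open-llm-leaderboard/details_DopeorNope__SOLARC-MOE-10.7Bx6`, which could then be passed to `datasets.load_dataset` together with a config name such as `harness_winogrande_5`.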
f0a2894eba366f2172580bd316316e8441bf9eec
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T16:13:38.805323](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k/blob/main/results_2023-12-30T16-13-38.805323.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6950888574372245, "acc_stderr": 0.030251453163127155, "acc_norm": 0.7088207463824532, "acc_norm_stderr": 0.031043623179390627, "mc1": 0.40636474908200737, "mc1_stderr": 0.017193835812093907, "mc2": 0.5665232115821557, "mc2_stderr": 0.014849783912176732 }, "harness|arc:challenge|25": { "acc": 0.3267918088737201, "acc_stderr": 0.013706665975587338, "acc_norm": 0.3438566552901024, "acc_norm_stderr": 0.013880644570156208 }, "harness|hellaswag|10": { "acc": 0.6337382991435969, "acc_stderr": 0.004807975515446488, "acc_norm": 0.817167894841665, "acc_norm_stderr": 0.0038573886135331043 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6814814814814815, "acc_stderr": 0.040247784019771096, "acc_norm": 0.6814814814814815, "acc_norm_stderr": 0.040247784019771096 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7828947368421053, "acc_stderr": 0.03355045304882923, "acc_norm": 0.7828947368421053, "acc_norm_stderr": 0.03355045304882923 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768077, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768077 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7962264150943397, "acc_stderr": 0.024790784501775402, "acc_norm": 0.7962264150943397, "acc_norm_stderr": 0.024790784501775402 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8402777777777778, "acc_stderr": 0.030635578972093278, "acc_norm": 0.8402777777777778, "acc_norm_stderr": 0.030635578972093278 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.57, "acc_stderr": 0.04975698519562428, "acc_norm": 0.57, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.65, "acc_stderr": 0.047937248544110196, "acc_norm": 0.65, 
"acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr": 0.04988876515698589 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6936416184971098, "acc_stderr": 0.03514942551267438, "acc_norm": 0.6936416184971098, "acc_norm_stderr": 0.03514942551267438 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.5, "acc_stderr": 0.04975185951049946, "acc_norm": 0.5, "acc_norm_stderr": 0.04975185951049946 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.78, "acc_stderr": 0.04163331998932263, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932263 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6595744680851063, "acc_stderr": 0.030976692998534425, "acc_norm": 0.6595744680851063, "acc_norm_stderr": 0.030976692998534425 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.6491228070175439, "acc_stderr": 0.04489539350270697, "acc_norm": 0.6491228070175439, "acc_norm_stderr": 0.04489539350270697 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6620689655172414, "acc_stderr": 0.0394170763206489, "acc_norm": 0.6620689655172414, "acc_norm_stderr": 0.0394170763206489 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.48677248677248675, "acc_stderr": 0.025742297289575142, "acc_norm": 0.48677248677248675, "acc_norm_stderr": 0.025742297289575142 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5555555555555556, "acc_stderr": 0.04444444444444449, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.04444444444444449 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.45, "acc_stderr": 0.049999999999999996, "acc_norm": 0.45, "acc_norm_stderr": 0.049999999999999996 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8516129032258064, "acc_stderr": 0.020222737554330374, "acc_norm": 0.8516129032258064, "acc_norm_stderr": 0.020222737554330374 }, "harness|hendrycksTest-high_school_chemistry|5": { 
"acc": 0.5812807881773399, "acc_stderr": 0.03471192860518468, "acc_norm": 0.5812807881773399, "acc_norm_stderr": 0.03471192860518468 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8303030303030303, "acc_stderr": 0.02931118867498311, "acc_norm": 0.8303030303030303, "acc_norm_stderr": 0.02931118867498311 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8585858585858586, "acc_stderr": 0.024825909793343343, "acc_norm": 0.8585858585858586, "acc_norm_stderr": 0.024825909793343343 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.927461139896373, "acc_stderr": 0.018718998520678178, "acc_norm": 0.927461139896373, "acc_norm_stderr": 0.018718998520678178 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7076923076923077, "acc_stderr": 0.02306043838085774, "acc_norm": 0.7076923076923077, "acc_norm_stderr": 0.02306043838085774 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35185185185185186, "acc_stderr": 0.02911661760608301, "acc_norm": 0.35185185185185186, "acc_norm_stderr": 0.02911661760608301 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8025210084033614, "acc_stderr": 0.02585916412205146, "acc_norm": 0.8025210084033614, "acc_norm_stderr": 0.02585916412205146 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4503311258278146, "acc_stderr": 0.040622900186837764, "acc_norm": 0.4503311258278146, "acc_norm_stderr": 0.040622900186837764 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8917431192660551, "acc_stderr": 0.01332134844761176, "acc_norm": 0.8917431192660551, "acc_norm_stderr": 0.01332134844761176 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5740740740740741, "acc_stderr": 0.033723432716530624, "acc_norm": 0.5740740740740741, 
"acc_norm_stderr": 0.033723432716530624 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8627450980392157, "acc_stderr": 0.024152225962801588, "acc_norm": 0.8627450980392157, "acc_norm_stderr": 0.024152225962801588 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8860759493670886, "acc_stderr": 0.020681745135884565, "acc_norm": 0.8860759493670886, "acc_norm_stderr": 0.020681745135884565 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.757847533632287, "acc_stderr": 0.028751392398694755, "acc_norm": 0.757847533632287, "acc_norm_stderr": 0.028751392398694755 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8091603053435115, "acc_stderr": 0.03446513350752596, "acc_norm": 0.8091603053435115, "acc_norm_stderr": 0.03446513350752596 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8512396694214877, "acc_stderr": 0.03248470083807194, "acc_norm": 0.8512396694214877, "acc_norm_stderr": 0.03248470083807194 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8611111111111112, "acc_stderr": 0.03343270062869621, "acc_norm": 0.8611111111111112, "acc_norm_stderr": 0.03343270062869621 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7914110429447853, "acc_stderr": 0.03192193448934724, "acc_norm": 0.7914110429447853, "acc_norm_stderr": 0.03192193448934724 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5446428571428571, "acc_stderr": 0.04726835553719097, "acc_norm": 0.5446428571428571, "acc_norm_stderr": 0.04726835553719097 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.0349260647662379, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.0349260647662379 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9102564102564102, "acc_stderr": 0.01872430174194164, "acc_norm": 0.9102564102564102, "acc_norm_stderr": 0.01872430174194164 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.8, "acc_stderr": 0.04020151261036846, "acc_norm": 0.8, "acc_norm_stderr": 
0.04020151261036846 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.879948914431673, "acc_stderr": 0.011622736692041268, "acc_norm": 0.879948914431673, "acc_norm_stderr": 0.011622736692041268 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7774566473988439, "acc_stderr": 0.02239421566194282, "acc_norm": 0.7774566473988439, "acc_norm_stderr": 0.02239421566194282 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.5329608938547487, "acc_stderr": 0.016686126653013934, "acc_norm": 0.5329608938547487, "acc_norm_stderr": 0.016686126653013934 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7843137254901961, "acc_stderr": 0.02355083135199509, "acc_norm": 0.7843137254901961, "acc_norm_stderr": 0.02355083135199509 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7942122186495176, "acc_stderr": 0.022961339906764244, "acc_norm": 0.7942122186495176, "acc_norm_stderr": 0.022961339906764244 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.8055555555555556, "acc_stderr": 0.022021366100220204, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.022021366100220204 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.549645390070922, "acc_stderr": 0.02968010556502904, "acc_norm": 0.549645390070922, "acc_norm_stderr": 0.02968010556502904 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5195567144719687, "acc_stderr": 0.012760464028289299, "acc_norm": 0.5195567144719687, "acc_norm_stderr": 0.012760464028289299 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7904411764705882, "acc_stderr": 0.024723110407677065, "acc_norm": 0.7904411764705882, "acc_norm_stderr": 0.024723110407677065 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7467320261437909, "acc_stderr": 0.01759348689536683, "acc_norm": 0.7467320261437909, "acc_norm_stderr": 0.01759348689536683 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 0.04350271442923243, "acc_norm": 0.7090909090909091, 
"acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.8040816326530612, "acc_stderr": 0.025409301953225678, "acc_norm": 0.8040816326530612, "acc_norm_stderr": 0.025409301953225678 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8756218905472637, "acc_stderr": 0.023335401790166327, "acc_norm": 0.8756218905472637, "acc_norm_stderr": 0.023335401790166327 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.0337997668989631, "acc_norm": 0.87, "acc_norm_stderr": 0.0337997668989631 }, "harness|hendrycksTest-virology|5": { "acc": 0.5120481927710844, "acc_stderr": 0.03891364495835817, "acc_norm": 0.5120481927710844, "acc_norm_stderr": 0.03891364495835817 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8538011695906432, "acc_stderr": 0.02709729011807082, "acc_norm": 0.8538011695906432, "acc_norm_stderr": 0.02709729011807082 }, "harness|truthfulqa:mc|0": { "mc1": 0.40636474908200737, "mc1_stderr": 0.017193835812093907, "mc2": 0.5665232115821557, "mc2_stderr": 0.014849783912176732 }, "harness|winogrande|5": { "acc": 0.7782162588792423, "acc_stderr": 0.011676109244497813 }, "harness|gsm8k|5": { "acc": 0.002274450341167551, "acc_stderr": 0.001312157814867419 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
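Once loaded, the aggregated results JSON shown earlier is keyed the same way as the configs: one `harness|<task>|<fewshot>` entry per task plus an `"all"` rollup. A small offline sketch of pulling a metric out (the excerpt below copies a few values verbatim from the results above; the `metric` helper is mine, not a library function):

```python
import json

# Verbatim excerpt of the "Latest results" payload shown above.
excerpt = """{
  "all": {"acc": 0.6950888574372245, "acc_norm": 0.7088207463824532},
  "harness|winogrande|5": {"acc": 0.7782162588792423},
  "harness|gsm8k|5": {"acc": 0.002274450341167551}
}"""

results = json.loads(excerpt)


def metric(results: dict, task: str, num_fewshot: int, name: str = "acc") -> float:
    """Look up a single metric by harness task name and few-shot count."""
    return results[f"harness|{task}|{num_fewshot}"][name]
```

For instance, `metric(results, "winogrande", 5)` returns the winogrande accuracy reported above, and `results["all"]` holds the aggregated numbers the leaderboard displays.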
open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k
[ "region:us" ]
2023-12-30T16:15:56+00:00
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T16:13:38.805323](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k/blob/main/results_2023-12-30T16-13-38.805323.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6950888574372245,\n \"acc_stderr\": 0.030251453163127155,\n \"acc_norm\": 0.7088207463824532,\n \"acc_norm_stderr\": 0.031043623179390627,\n \"mc1\": 0.40636474908200737,\n \"mc1_stderr\": 0.017193835812093907,\n \"mc2\": 0.5665232115821557,\n \"mc2_stderr\": 0.014849783912176732\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3267918088737201,\n \"acc_stderr\": 0.013706665975587338,\n \"acc_norm\": 0.3438566552901024,\n \"acc_norm_stderr\": 0.013880644570156208\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6337382991435969,\n \"acc_stderr\": 0.004807975515446488,\n \"acc_norm\": 0.817167894841665,\n \"acc_norm_stderr\": 0.0038573886135331043\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6814814814814815,\n \"acc_stderr\": 0.040247784019771096,\n \"acc_norm\": 0.6814814814814815,\n \"acc_norm_stderr\": 0.040247784019771096\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.03355045304882923,\n \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.03355045304882923\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768077\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7962264150943397,\n \"acc_stderr\": 0.024790784501775402,\n \"acc_norm\": 0.7962264150943397,\n \"acc_norm_stderr\": 0.024790784501775402\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8402777777777778,\n \"acc_stderr\": 0.030635578972093278,\n \"acc_norm\": 0.8402777777777778,\n \"acc_norm_stderr\": 0.030635578972093278\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.03514942551267438,\n \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.03514942551267438\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04975185951049946,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04975185951049946\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6595744680851063,\n \"acc_stderr\": 0.030976692998534425,\n \"acc_norm\": 0.6595744680851063,\n \"acc_norm_stderr\": 0.030976692998534425\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6491228070175439,\n \"acc_stderr\": 0.04489539350270697,\n \"acc_norm\": 0.6491228070175439,\n \"acc_norm_stderr\": 0.04489539350270697\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6620689655172414,\n \"acc_stderr\": 0.0394170763206489,\n \"acc_norm\": 0.6620689655172414,\n \"acc_norm_stderr\": 0.0394170763206489\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.48677248677248675,\n \"acc_stderr\": 0.025742297289575142,\n \"acc_norm\": 0.48677248677248675,\n \"acc_norm_stderr\": 0.025742297289575142\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 
0.5555555555555556,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8516129032258064,\n \"acc_stderr\": 0.020222737554330374,\n \"acc_norm\": 0.8516129032258064,\n \"acc_norm_stderr\": 0.020222737554330374\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5812807881773399,\n \"acc_stderr\": 0.03471192860518468,\n \"acc_norm\": 0.5812807881773399,\n \"acc_norm_stderr\": 0.03471192860518468\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.02931118867498311,\n \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.02931118867498311\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8585858585858586,\n \"acc_stderr\": 0.024825909793343343,\n \"acc_norm\": 0.8585858585858586,\n \"acc_norm_stderr\": 0.024825909793343343\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.927461139896373,\n \"acc_stderr\": 0.018718998520678178,\n \"acc_norm\": 0.927461139896373,\n \"acc_norm_stderr\": 0.018718998520678178\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7076923076923077,\n \"acc_stderr\": 0.02306043838085774,\n \"acc_norm\": 0.7076923076923077,\n \"acc_norm_stderr\": 0.02306043838085774\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.02911661760608301,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.02911661760608301\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.8025210084033614,\n \"acc_stderr\": 0.02585916412205146,\n \"acc_norm\": 0.8025210084033614,\n \"acc_norm_stderr\": 0.02585916412205146\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4503311258278146,\n \"acc_stderr\": 0.040622900186837764,\n \"acc_norm\": 0.4503311258278146,\n \"acc_norm_stderr\": 0.040622900186837764\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8917431192660551,\n \"acc_stderr\": 0.01332134844761176,\n \"acc_norm\": 0.8917431192660551,\n \"acc_norm_stderr\": 0.01332134844761176\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.033723432716530624,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.033723432716530624\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8627450980392157,\n \"acc_stderr\": 0.024152225962801588,\n \"acc_norm\": 0.8627450980392157,\n \"acc_norm_stderr\": 0.024152225962801588\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8860759493670886,\n \"acc_stderr\": 0.020681745135884565,\n \"acc_norm\": 0.8860759493670886,\n \"acc_norm_stderr\": 0.020681745135884565\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.757847533632287,\n \"acc_stderr\": 0.028751392398694755,\n \"acc_norm\": 0.757847533632287,\n \"acc_norm_stderr\": 0.028751392398694755\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8091603053435115,\n \"acc_stderr\": 0.03446513350752596,\n \"acc_norm\": 0.8091603053435115,\n \"acc_norm_stderr\": 0.03446513350752596\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8512396694214877,\n \"acc_stderr\": 0.03248470083807194,\n \"acc_norm\": 0.8512396694214877,\n \"acc_norm_stderr\": 0.03248470083807194\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8611111111111112,\n \"acc_stderr\": 0.03343270062869621,\n \"acc_norm\": 0.8611111111111112,\n \"acc_norm_stderr\": 
0.03343270062869621\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.03192193448934724,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.03192193448934724\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5446428571428571,\n \"acc_stderr\": 0.04726835553719097,\n \"acc_norm\": 0.5446428571428571,\n \"acc_norm_stderr\": 0.04726835553719097\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.0349260647662379,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.0349260647662379\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9102564102564102,\n \"acc_stderr\": 0.01872430174194164,\n \"acc_norm\": 0.9102564102564102,\n \"acc_norm_stderr\": 0.01872430174194164\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.879948914431673,\n \"acc_stderr\": 0.011622736692041268,\n \"acc_norm\": 0.879948914431673,\n \"acc_norm_stderr\": 0.011622736692041268\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7774566473988439,\n \"acc_stderr\": 0.02239421566194282,\n \"acc_norm\": 0.7774566473988439,\n \"acc_norm_stderr\": 0.02239421566194282\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5329608938547487,\n \"acc_stderr\": 0.016686126653013934,\n \"acc_norm\": 0.5329608938547487,\n \"acc_norm_stderr\": 0.016686126653013934\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.02355083135199509,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.02355083135199509\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7942122186495176,\n \"acc_stderr\": 0.022961339906764244,\n \"acc_norm\": 0.7942122186495176,\n \"acc_norm_stderr\": 
0.022961339906764244\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.022021366100220204,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.022021366100220204\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.549645390070922,\n \"acc_stderr\": 0.02968010556502904,\n \"acc_norm\": 0.549645390070922,\n \"acc_norm_stderr\": 0.02968010556502904\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5195567144719687,\n \"acc_stderr\": 0.012760464028289299,\n \"acc_norm\": 0.5195567144719687,\n \"acc_norm_stderr\": 0.012760464028289299\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7904411764705882,\n \"acc_stderr\": 0.024723110407677065,\n \"acc_norm\": 0.7904411764705882,\n \"acc_norm_stderr\": 0.024723110407677065\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7467320261437909,\n \"acc_stderr\": 0.01759348689536683,\n \"acc_norm\": 0.7467320261437909,\n \"acc_norm_stderr\": 0.01759348689536683\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8040816326530612,\n \"acc_stderr\": 0.025409301953225678,\n \"acc_norm\": 0.8040816326530612,\n \"acc_norm_stderr\": 0.025409301953225678\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n \"acc_stderr\": 0.023335401790166327,\n \"acc_norm\": 0.8756218905472637,\n \"acc_norm_stderr\": 0.023335401790166327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.0337997668989631,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.0337997668989631\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n 
\"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.02709729011807082,\n \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.02709729011807082\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.40636474908200737,\n \"mc1_stderr\": 0.017193835812093907,\n \"mc2\": 0.5665232115821557,\n \"mc2_stderr\": 0.014849783912176732\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7782162588792423,\n \"acc_stderr\": 0.011676109244497813\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.002274450341167551,\n \"acc_stderr\": 0.001312157814867419\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-13-38.805323.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-13-38.805323.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-13-38.805323.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-13-38.805323.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-13-38.805323.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-13-38.805323.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-13-38.805323.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-13-38.805323.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["**/details_harness|winogrande|5_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T16-13-38.805323.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T16_13_38.805323", "path": ["results_2023-12-30T16-13-38.805323.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T16-13-38.805323.parquet"]}]}]}
2023-12-30T16:16:16+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k

Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-8x7b-v16.2-32k on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T16:13:38.805323 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
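The loading snippet referred to by "you can for instance do the following" was dropped from this processed copy of the card. A minimal sketch of how the dump is addressed, under assumptions: the repo id `open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k` is inferred from the leaderboard's usual naming (the sibling v16.1 card later in this file shows the pattern), and the helper simply mirrors the `harness_hendrycksTest_*_5` config names listed in the metadata above. The `load_dataset` call needs network access, so it is shown commented out:

```python
# Assumed repo id, following the leaderboard's "details_<org>__<model>" naming.
REPO = "open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.2-32k"

def mmlu_config(subject: str, n_shot: int = 5) -> str:
    """Build the config name for a hendrycksTest (MMLU) subject,
    e.g. 'harness_hendrycksTest_high_school_geography_5'."""
    return f"harness_hendrycksTest_{subject}_{n_shot}"

print(mmlu_config("high_school_geography"))

# Loading requires network access; each config exposes a timestamped split
# plus a "latest" split, as the metadata above shows:
# from datasets import load_dataset
# data = load_dataset(REPO, mmlu_config("high_school_geography"), split="latest")
```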
c9fbefec90b10fda261ab03804e068d33e72b2b6
# CanariaView Global Copper Demand Forecasting Dataset

## Description

This dataset encompasses economic and industrial indicators vital for constructing a copper demand forecasting model.

Coverage Period: Monthly data from January 1995 to March 2023, encompassing a total of 339 months.

Column Descriptions and Sources:
- `HSI_value (US Housing Starts Index)`: Y-Chart
- `CCI_value (Consumer Confidence Index)`: OECD
- `IPI_value (Industrial Production Total Index)`: FRED
- `GDPC_value (Real Gross Domestic Product)`: FRED
- `Copper price`: MacroTrends

Preprocessing Methodology and Data Collection Details:
- Comprehensive analysis of data structure followed by essential preprocessing.
- Appropriate handling of missing values.
- Daily and quarterly data uniformly expanded to a monthly timescale for consistency: daily data (e.g., Copper price) and quarterly data (e.g., GDPC_value).
- The dependent-variable data used in the model was available from 1995, so the independent variables (this dataset) were also collected from that year onward.

## 한국어 설명

본 데이터셋은 구리 수요 예측 모델 구축을 위한 경제지표 및 산업지표로 구성되었습니다.

기간: 1995년 1월~2023년 3월(월별), 총 339개월.

컬럼 설명 및 출처
- `HSI_value (미국 주택착공지수)`: Y-Chart
- `CCI_value (미국 소비자신뢰지수)`: OECD
- `IPI_value (미국 산업생산자지수)`: FRED
- `GDPC_value (미국 실질 GDP)`: FRED
- `Copper price (구리 가격)`: MacroTrends

데이터 전처리 및 수집 방법:
- 데이터 구조 분석 및 전처리 과정 수행.
- 결측치 처리.
- 일별 및 분기별 자료는 월별 데이터로의 확장을 통해 일관된 시계열 데이터로 통합: 일별 자료 (구리 가격), 분기별 자료 (GDPC_value).
- 수요 모델에 사용된 종속변수 데이터가 1995년부터 확보되어 독립변수인 본 데이터셋도 1995년도부터 수집함.
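The monthly-expansion step the card describes can be sketched with pandas. The series below are made-up stand-ins for the card's quarterly `GDPC_value` and daily `Copper price` columns (the real values come from FRED and MacroTrends), and forward-filling / monthly averaging are one plausible reading of "uniformly expanded to a monthly timescale", not the authors' confirmed procedure:

```python
import pandas as pd

# Hypothetical quarterly GDPC values; upsample quarterly -> monthly by
# carrying each quarter's value forward across its three months.
quarterly_gdpc = pd.Series(
    [100.0, 104.0],
    index=pd.PeriodIndex(["1995Q1", "1995Q2"], freq="Q"),
)
monthly_gdpc = quarterly_gdpc.resample("M").ffill()

# Hypothetical daily copper prices; downsample daily -> monthly by
# averaging all observations within each month.
daily_copper = pd.Series(
    [2.0, 2.2, 2.4],
    index=pd.to_datetime(["1995-01-02", "1995-01-03", "1995-02-01"]),
)
monthly_copper = daily_copper.resample("MS").mean()

print(monthly_gdpc)
print(monthly_copper)
```

Both operations land on a single monthly index, which is what lets the five columns be joined into one consistent 339-month time series.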
CanariaView/GlobalCopperDemandForecastingDataset
[ "task_categories:time-series-forecasting", "language:en", "language:ko", "mining", "LSTM", "TimeSeries", "CanariaView", "region:us" ]
2023-12-30T16:17:42+00:00
{"language": ["en", "ko"], "task_categories": ["time-series-forecasting"], "tags": ["mining", "LSTM", "TimeSeries", "CanariaView"]}
2023-12-30T16:52:21+00:00
[]
[ "en", "ko" ]
TAGS #task_categories-time-series-forecasting #language-English #language-Korean #mining #LSTM #TimeSeries #CanariaView #region-us
8ea1f6dbcb1a7a57b48a74325063904a1f12f481
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k",
    "harness_winogrande_5",
    split="train",
)
```

## Latest results

These are the [latest results from run 2023-12-30T16:16:53.571803](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k/blob/main/results_2023-12-30T16-16-53.571803.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6939517211944045, "acc_stderr": 0.030232673494217974, "acc_norm": 0.7084301138333359, "acc_norm_stderr": 0.031054743745039477, "mc1": 0.397796817625459, "mc1_stderr": 0.01713393424855964, "mc2": 0.5597457443511287, "mc2_stderr": 0.014917533204367936 }, "harness|arc:challenge|25": { "acc": 0.23976109215017063, "acc_stderr": 0.012476304127453947, "acc_norm": 0.2909556313993174, "acc_norm_stderr": 0.013273077865907586 }, "harness|hellaswag|10": { "acc": 0.6341366261700856, "acc_stderr": 0.004806870285747291, "acc_norm": 0.8227444732125074, "acc_norm_stderr": 0.0038110434120246514 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6666666666666666, "acc_stderr": 0.04072314811876837, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7763157894736842, "acc_stderr": 0.033911609343436025, "acc_norm": 0.7763157894736842, "acc_norm_stderr": 0.033911609343436025 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7849056603773585, "acc_stderr": 0.02528839450289137, "acc_norm": 0.7849056603773585, "acc_norm_stderr": 0.02528839450289137 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.8333333333333334, "acc_stderr": 0.031164899666948614, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.031164899666948614 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, 
"acc_norm_stderr": 0.046056618647183814 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr": 0.04988876515698589 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6820809248554913, "acc_stderr": 0.0355068398916558, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.0355068398916558 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.5196078431372549, "acc_stderr": 0.04971358884367406, "acc_norm": 0.5196078431372549, "acc_norm_stderr": 0.04971358884367406 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6680851063829787, "acc_stderr": 0.030783736757745653, "acc_norm": 0.6680851063829787, "acc_norm_stderr": 0.030783736757745653 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.6842105263157895, "acc_stderr": 0.043727482902780085, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.043727482902780085 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6758620689655173, "acc_stderr": 0.03900432069185555, "acc_norm": 0.6758620689655173, "acc_norm_stderr": 0.03900432069185555 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4894179894179894, "acc_stderr": 0.02574554227604548, "acc_norm": 0.4894179894179894, "acc_norm_stderr": 0.02574554227604548 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.5317460317460317, "acc_stderr": 0.04463112720677173, "acc_norm": 0.5317460317460317, "acc_norm_stderr": 0.04463112720677173 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr": 0.04988876515698589 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8451612903225807, "acc_stderr": 0.020579287326583227, "acc_norm": 0.8451612903225807, "acc_norm_stderr": 0.020579287326583227 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5862068965517241, "acc_stderr": 0.03465304488406796, "acc_norm": 0.5862068965517241, "acc_norm_stderr": 0.03465304488406796 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8242424242424242, "acc_stderr": 0.02972094300622445, "acc_norm": 0.8242424242424242, "acc_norm_stderr": 0.02972094300622445 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8535353535353535, "acc_stderr": 0.025190921114603918, "acc_norm": 0.8535353535353535, "acc_norm_stderr": 0.025190921114603918 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9222797927461139, "acc_stderr": 0.01932180555722315, "acc_norm": 0.9222797927461139, "acc_norm_stderr": 0.01932180555722315 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7051282051282052, "acc_stderr": 0.023119362758232297, "acc_norm": 0.7051282051282052, "acc_norm_stderr": 0.023119362758232297 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.029318203645206865, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.029318203645206865 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.8109243697478992, "acc_stderr": 0.025435119438105353, "acc_norm": 0.8109243697478992, "acc_norm_stderr": 0.025435119438105353 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4503311258278146, "acc_stderr": 0.04062290018683776, "acc_norm": 0.4503311258278146, "acc_norm_stderr": 0.04062290018683776 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8917431192660551, "acc_stderr": 0.013321348447611764, "acc_norm": 0.8917431192660551, "acc_norm_stderr": 0.013321348447611764 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 0.033674621388960775, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 0.033674621388960775 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8480392156862745, "acc_stderr": 0.025195658428931792, "acc_norm": 0.8480392156862745, "acc_norm_stderr": 0.025195658428931792 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8776371308016878, "acc_stderr": 0.021331741829746793, "acc_norm": 0.8776371308016878, "acc_norm_stderr": 0.021331741829746793 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.7533632286995515, "acc_stderr": 0.028930413120910884, "acc_norm": 0.7533632286995515, "acc_norm_stderr": 0.028930413120910884 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7938931297709924, "acc_stderr": 0.035477710041594626, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.035477710041594626 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.8512396694214877, "acc_stderr": 0.03248470083807194, "acc_norm": 0.8512396694214877, "acc_norm_stderr": 0.03248470083807194 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8518518518518519, "acc_stderr": 0.03434300243630999, "acc_norm": 0.8518518518518519, "acc_norm_stderr": 0.03434300243630999 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7975460122699386, "acc_stderr": 0.031570650789119005, "acc_norm": 0.7975460122699386, "acc_norm_stderr": 0.031570650789119005 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.5803571428571429, "acc_stderr": 0.046840993210771065, "acc_norm": 0.5803571428571429, "acc_norm_stderr": 0.046840993210771065 },
    "harness|hendrycksTest-management|5": { "acc": 0.8446601941747572, "acc_stderr": 0.035865947385739734, "acc_norm": 0.8446601941747572, "acc_norm_stderr": 0.035865947385739734 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.905982905982906, "acc_stderr": 0.01911989279892498, "acc_norm": 0.905982905982906, "acc_norm_stderr": 0.01911989279892498 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.77, "acc_stderr": 0.04229525846816506, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816506 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8748403575989783, "acc_stderr": 0.011832954239305736, "acc_norm": 0.8748403575989783, "acc_norm_stderr": 0.011832954239305736 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7745664739884393, "acc_stderr": 0.022497230190967558, "acc_norm": 0.7745664739884393, "acc_norm_stderr": 0.022497230190967558 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.5273743016759777, "acc_stderr": 0.016697420650642752, "acc_norm": 0.5273743016759777, "acc_norm_stderr": 0.016697420650642752 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7973856209150327, "acc_stderr": 0.023015446877985693, "acc_norm": 0.7973856209150327, "acc_norm_stderr": 0.023015446877985693 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.7942122186495176, "acc_stderr": 0.022961339906764244, "acc_norm": 0.7942122186495176, "acc_norm_stderr": 0.022961339906764244 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.8333333333333334, "acc_stderr": 0.020736358408060006, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.020736358408060006 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.524822695035461, "acc_stderr": 0.0297907192438297, "acc_norm": 0.524822695035461, "acc_norm_stderr": 0.0297907192438297 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.5195567144719687, "acc_stderr": 0.012760464028289299, "acc_norm": 0.5195567144719687, "acc_norm_stderr": 0.012760464028289299 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7794117647058824, "acc_stderr": 0.02518778666022726, "acc_norm": 0.7794117647058824, "acc_norm_stderr": 0.02518778666022726 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7549019607843137, "acc_stderr": 0.017401816711427657, "acc_norm": 0.7549019607843137, "acc_norm_stderr": 0.017401816711427657 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 
0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.7959183673469388, "acc_stderr": 0.025801283475090506, "acc_norm": 0.7959183673469388, "acc_norm_stderr": 0.025801283475090506 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8706467661691543, "acc_stderr": 0.023729830881018526, "acc_norm": 0.8706467661691543, "acc_norm_stderr": 0.023729830881018526 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.88, "acc_stderr": 0.032659863237109066, "acc_norm": 0.88, "acc_norm_stderr": 0.032659863237109066 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5240963855421686, "acc_stderr": 0.03887971849597264, "acc_norm": 0.5240963855421686, "acc_norm_stderr": 0.03887971849597264 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8654970760233918, "acc_stderr": 0.026168221344662297, "acc_norm": 0.8654970760233918, "acc_norm_stderr": 0.026168221344662297 },
    "harness|truthfulqa:mc|0": { "mc1": 0.397796817625459, "mc1_stderr": 0.01713393424855964, "mc2": 0.5597457443511287, "mc2_stderr": 0.014917533204367936 },
    "harness|winogrande|5": { "acc": 0.7734806629834254, "acc_stderr": 0.011764149054698334 },
    "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
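As a small illustration of how the per-task scores in the results above can be post-processed, the sketch below ranks a handful of the 5-shot "hendrycksTest" (MMLU) accuracies copied verbatim from this run. The hand-copied dict is only for demonstration; in practice the full details come from loading this repository's configurations with `datasets.load_dataset`, as shown in the snippet earlier in the card.

```python
# A few 5-shot "hendrycksTest" (MMLU) accuracies copied from the results above.
scores = {
    "high_school_government_and_politics": 0.9222797927461139,
    "marketing": 0.905982905982906,
    "world_religions": 0.8654970760233918,
    "virology": 0.5240963855421686,
    "high_school_mathematics": 0.362962962962963,
}

# Rank subjects from strongest to weakest accuracy.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
best_subject, best_acc = ranked[0]
worst_subject, worst_acc = ranked[-1]

# Unweighted mean over this subset only (the leaderboard aggregates over all 57 MMLU subjects).
mean_acc = sum(scores.values()) / len(scores)
```

Even on this subset the spread is large: government and politics scores above 0.92 while high-school mathematics is below 0.37, consistent with the full table above.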
open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k
[ "region:us" ]
2023-12-30T16:19:12+00:00
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T16:16:53.571803](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mixtral-8x7b-v16.1-32k/blob/main/results_2023-12-30T16-16-53.571803.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6939517211944045,\n \"acc_stderr\": 0.030232673494217974,\n \"acc_norm\": 0.7084301138333359,\n \"acc_norm_stderr\": 0.031054743745039477,\n \"mc1\": 0.397796817625459,\n \"mc1_stderr\": 0.01713393424855964,\n \"mc2\": 0.5597457443511287,\n \"mc2_stderr\": 0.014917533204367936\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.23976109215017063,\n \"acc_stderr\": 0.012476304127453947,\n \"acc_norm\": 0.2909556313993174,\n \"acc_norm_stderr\": 0.013273077865907586\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6341366261700856,\n \"acc_stderr\": 0.004806870285747291,\n \"acc_norm\": 0.8227444732125074,\n \"acc_norm_stderr\": 0.0038110434120246514\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7763157894736842,\n \"acc_stderr\": 0.033911609343436025,\n \"acc_norm\": 0.7763157894736842,\n \"acc_norm_stderr\": 0.033911609343436025\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7849056603773585,\n \"acc_stderr\": 0.02528839450289137,\n \"acc_norm\": 0.7849056603773585,\n \"acc_norm_stderr\": 0.02528839450289137\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.031164899666948614,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.031164899666948614\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5196078431372549,\n \"acc_stderr\": 0.04971358884367406,\n \"acc_norm\": 0.5196078431372549,\n \"acc_norm_stderr\": 0.04971358884367406\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6680851063829787,\n \"acc_stderr\": 0.030783736757745653,\n \"acc_norm\": 0.6680851063829787,\n \"acc_norm_stderr\": 0.030783736757745653\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.043727482902780085,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.043727482902780085\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6758620689655173,\n \"acc_stderr\": 0.03900432069185555,\n \"acc_norm\": 0.6758620689655173,\n \"acc_norm_stderr\": 0.03900432069185555\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4894179894179894,\n \"acc_stderr\": 0.02574554227604548,\n \"acc_norm\": 0.4894179894179894,\n \"acc_norm_stderr\": 0.02574554227604548\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5317460317460317,\n \"acc_stderr\": 
0.04463112720677173,\n \"acc_norm\": 0.5317460317460317,\n \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8451612903225807,\n \"acc_stderr\": 0.020579287326583227,\n \"acc_norm\": 0.8451612903225807,\n \"acc_norm_stderr\": 0.020579287326583227\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.03465304488406796,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.03465304488406796\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8242424242424242,\n \"acc_stderr\": 0.02972094300622445,\n \"acc_norm\": 0.8242424242424242,\n \"acc_norm_stderr\": 0.02972094300622445\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8535353535353535,\n \"acc_stderr\": 0.025190921114603918,\n \"acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.025190921114603918\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9222797927461139,\n \"acc_stderr\": 0.01932180555722315,\n \"acc_norm\": 0.9222797927461139,\n \"acc_norm_stderr\": 0.01932180555722315\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7051282051282052,\n \"acc_stderr\": 0.023119362758232297,\n \"acc_norm\": 0.7051282051282052,\n \"acc_norm_stderr\": 0.023119362758232297\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8109243697478992,\n \"acc_stderr\": 0.025435119438105353,\n \"acc_norm\": 0.8109243697478992,\n \"acc_norm_stderr\": 0.025435119438105353\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4503311258278146,\n \"acc_stderr\": 0.04062290018683776,\n \"acc_norm\": 0.4503311258278146,\n \"acc_norm_stderr\": 0.04062290018683776\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8917431192660551,\n \"acc_stderr\": 0.013321348447611764,\n \"acc_norm\": 0.8917431192660551,\n \"acc_norm_stderr\": 0.013321348447611764\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n \"acc_stderr\": 0.025195658428931792,\n \"acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.025195658428931792\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8776371308016878,\n \"acc_stderr\": 0.021331741829746793,\n \"acc_norm\": 0.8776371308016878,\n \"acc_norm_stderr\": 0.021331741829746793\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7533632286995515,\n \"acc_stderr\": 0.028930413120910884,\n \"acc_norm\": 0.7533632286995515,\n \"acc_norm_stderr\": 0.028930413120910884\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.035477710041594626,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.035477710041594626\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8512396694214877,\n \"acc_stderr\": 0.03248470083807194,\n \"acc_norm\": 0.8512396694214877,\n \"acc_norm_stderr\": 0.03248470083807194\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8518518518518519,\n \"acc_stderr\": 
0.03434300243630999,\n \"acc_norm\": 0.8518518518518519,\n \"acc_norm_stderr\": 0.03434300243630999\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.031570650789119005,\n \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.031570650789119005\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5803571428571429,\n \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.5803571428571429,\n \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.035865947385739734,\n \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.035865947385739734\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.905982905982906,\n \"acc_stderr\": 0.01911989279892498,\n \"acc_norm\": 0.905982905982906,\n \"acc_norm_stderr\": 0.01911989279892498\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8748403575989783,\n \"acc_stderr\": 0.011832954239305736,\n \"acc_norm\": 0.8748403575989783,\n \"acc_norm_stderr\": 0.011832954239305736\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7745664739884393,\n \"acc_stderr\": 0.022497230190967558,\n \"acc_norm\": 0.7745664739884393,\n \"acc_norm_stderr\": 0.022497230190967558\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5273743016759777,\n \"acc_stderr\": 0.016697420650642752,\n \"acc_norm\": 0.5273743016759777,\n \"acc_norm_stderr\": 0.016697420650642752\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7973856209150327,\n \"acc_stderr\": 0.023015446877985693,\n \"acc_norm\": 0.7973856209150327,\n \"acc_norm_stderr\": 0.023015446877985693\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7942122186495176,\n \"acc_stderr\": 
0.022961339906764244,\n \"acc_norm\": 0.7942122186495176,\n \"acc_norm_stderr\": 0.022961339906764244\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.020736358408060006,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.020736358408060006\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.524822695035461,\n \"acc_stderr\": 0.0297907192438297,\n \"acc_norm\": 0.524822695035461,\n \"acc_norm_stderr\": 0.0297907192438297\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5195567144719687,\n \"acc_stderr\": 0.012760464028289299,\n \"acc_norm\": 0.5195567144719687,\n \"acc_norm_stderr\": 0.012760464028289299\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.02518778666022726,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02518778666022726\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.017401816711427657,\n \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.017401816711427657\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7959183673469388,\n \"acc_stderr\": 0.025801283475090506,\n \"acc_norm\": 0.7959183673469388,\n \"acc_norm_stderr\": 0.025801283475090506\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n \"acc_stderr\": 0.023729830881018526,\n \"acc_norm\": 0.8706467661691543,\n \"acc_norm_stderr\": 0.023729830881018526\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 
0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8654970760233918,\n \"acc_stderr\": 0.026168221344662297,\n \"acc_norm\": 0.8654970760233918,\n \"acc_norm_stderr\": 0.026168221344662297\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.397796817625459,\n \"mc1_stderr\": 0.01713393424855964,\n \"mc2\": 0.5597457443511287,\n \"mc2_stderr\": 0.014917533204367936\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698334\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-16-53.571803.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["**/details_harness|winogrande|5_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T16-16-53.571803.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T16_16_53.571803", "path": ["results_2023-12-30T16-16-53.571803.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T16-16-53.571803.parquet"]}]}]}
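The long list of `config_name` entries above follows a mechanical naming scheme: each name is derived from its parquet filename by dropping the `details_` prefix, joining the harness name, task, and shot count with underscores, and replacing `-`/`:` in the task name with `_`. The helper below is illustrative only (not part of any published API); it simply restates the pattern visible in the metadata above:

```python
def config_name_from_parquet(path: str) -> str:
    """Derive a dataset config name from a details parquet filename, e.g.

    '**/details_harness|hendrycksTest-world_religions|5_<timestamp>.parquet'
    -> 'harness_hendrycksTest_world_religions_5'
    """
    # Drop the glob/directory part and the 'details_' prefix.
    name = path.rsplit("/", 1)[-1]
    name = name.removeprefix("details_")
    # The remaining fields are '|'-separated: harness, task, '<shots>_<timestamp>'.
    harness, task, rest = name.split("|")
    shots = rest.split("_", 1)[0]
    # Config names use '_' where task names use '-' or ':'.
    task = task.replace("-", "_").replace(":", "_")
    return f"{harness}_{task}_{shots}"


print(config_name_from_parquet(
    "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-16-53.571803.parquet"
))  # harness_hendrycksTest_world_religions_5
```

The same rule reproduces the `harness_truthfulqa_mc_0` and `harness_winogrande_5` config names listed above.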
2023-12-30T16:19:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mixtral-8x7b-v16.1-32k on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T16:16:53.571803 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
538c88a39dad98b8e3df71f148b28d1136260fb1
# MetaMath Dataset with "{"original_question": original_question, "paraphrased_question": paraphrased_question, "answer_detail": answer_detail}" pairs.

# 💻 Dataset Usage
Run the following code to load the data:
```python
from datasets import load_dataset

dataset = load_dataset("shuyuej/CleanedMetaMathQA")
dataset = dataset['train']
print(dataset)
```

# 📝 Dataset modification code
```python
# coding=utf-8

import re

import jsonlines
from datasets import load_dataset

# Load the original MetaMathQA training split
dataset = load_dataset("meta-math/MetaMathQA")
dataset = dataset["train"]
data = []

# Regular expression matching the trailing "#### <answer>" marker
# that precedes the final "The answer is: " line in each response
pattern = re.compile(r'\n####(.*?)\nThe answer is: ', re.DOTALL)

for example in dataset:
    original_question = example['original_question']
    paraphrased_question = example['query']
    answer_detail = example['response']

    # Strip the redundant "#### <answer>" marker from the response
    match = re.search(pattern, answer_detail)
    if match:
        info = match.group(1).strip()
        answer_detail = answer_detail.replace('\n#### ' + info, '')

    data.append({"original_question": original_question, "paraphrased_question": paraphrased_question, "answer_detail": answer_detail})

# Save the modified data to a jsonl file
output_file = 'CleanedMetaMathQA.jsonl'
with jsonlines.open(output_file, 'w') as writer:
    writer.write_all(data)

print(f"Modified data saved to {output_file}")
```
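The regular expression in the modification script above does the actual cleaning: it removes the redundant `#### <answer>` marker that MetaMathQA responses place just before the final `The answer is:` line. A minimal self-contained sketch of that step on a toy GSM8K-style response (the sample text is made up for demonstration):

```python
import re

# Same pattern as in the modification script: matches the "#### <answer>"
# marker that sits between the reasoning and the "The answer is: " line.
pattern = re.compile(r'\n####(.*?)\nThe answer is: ', re.DOTALL)

response = (
    "Natalia sold 48 clips in April and 24 in May, so 72 in total.\n"
    "#### 72\n"
    "The answer is: 72"
)

cleaned = response
match = pattern.search(response)
if match:
    info = match.group(1).strip()                     # '72'
    cleaned = response.replace('\n#### ' + info, '')  # drop the marker line

print(cleaned)
# Natalia sold 48 clips in April and 24 in May, so 72 in total.
# The answer is: 72
```

Note that the reasoning and the final `The answer is:` line survive; only the intermediate `####` marker is dropped.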
shuyuej/CleanedMetaMathQA
[ "license:apache-2.0", "region:us" ]
2023-12-30T16:38:05+00:00
{"license": "apache-2.0"}
2024-01-25T19:43:58+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# MetaMath Dataset with "{"original_question": original_question, "paraphrased_question": paraphrased_question, "answer_detail": answer_detail}" pairs. # Dataset Usage Run the following command to load the data: # Dataset modification codes
[ "# MetaMath Dataset with \"{\"original_question\": original_question, \"paraphrased_question\": paraphrased_question, \"answer_detail\": answer_detail}\" pairs.", "# Dataset Usage\nRun the following command to load the data:", "# Dataset modification codes" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# MetaMath Dataset with \"{\"original_question\": original_question, \"paraphrased_question\": paraphrased_question, \"answer_detail\": answer_detail}\" pairs.", "# Dataset Usage\nRun the following command to load the data:", "# Dataset modification codes" ]
[ 14, 51, 14, 6 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n# MetaMath Dataset with \"{\"original_question\": original_question, \"paraphrased_question\": paraphrased_question, \"answer_detail\": answer_detail}\" pairs.# Dataset Usage\nRun the following command to load the data:# Dataset modification codes" ]
23dc17712316587002156468b3c7e745463b0c8e
# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v3

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [jeonsworld/CarbonVillain-en-10.7B-v3](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset

data = load_dataset("open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3",
                    "harness_winogrande_5",
                    split="train")
```

## Latest results

These are the [latest results from run 2023-12-30T16:49:42.905976](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3/blob/main/results_2023-12-30T16-49-42.905976.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.667270432389036, "acc_stderr": 0.03161503740481807, "acc_norm": 0.6679793731390249, "acc_norm_stderr": 0.032260225407857515, "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7183713907727333, "mc2_stderr": 0.014997186929843767 }, "harness|arc:challenge|25": { "acc": 0.6851535836177475, "acc_stderr": 0.01357265770308495, "acc_norm": 0.7098976109215017, "acc_norm_stderr": 0.013261573677520767 }, "harness|hellaswag|10": { "acc": 0.7143995220075682, "acc_stderr": 0.0045077680295901, "acc_norm": 0.8847839075881299, "acc_norm_stderr": 0.0031863002304505774 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.756578947368421, "acc_stderr": 0.034923496688842384, "acc_norm": 0.756578947368421, "acc_norm_stderr": 0.034923496688842384 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.73, "acc_stderr": 0.04461960433384741, "acc_norm": 0.73, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.028637235639800886, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.028637235639800886 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956913, "acc_norm": 0.51, "acc_norm_stderr": 
0.05024183937956913 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6297872340425532, "acc_stderr": 0.03156564682236786, "acc_norm": 0.6297872340425532, "acc_norm_stderr": 0.03156564682236786 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5087719298245614, "acc_stderr": 0.04702880432049615, "acc_norm": 0.5087719298245614, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6344827586206897, "acc_stderr": 0.040131241954243856, "acc_norm": 0.6344827586206897, "acc_norm_stderr": 0.040131241954243856 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.5026455026455027, "acc_stderr": 0.02575094967813038, "acc_norm": 0.5026455026455027, "acc_norm_stderr": 0.02575094967813038 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.021886178567172534, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.021886178567172534 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.806060606060606, "acc_stderr": 0.03087414513656209, "acc_norm": 0.806060606060606, "acc_norm_stderr": 0.03087414513656209 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.021995311963644244, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.021995311963644244 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37407407407407406, "acc_stderr": 0.029502861128955286, "acc_norm": 0.37407407407407406, "acc_norm_stderr": 0.029502861128955286 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7142857142857143, "acc_stderr": 0.029344572500634332, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.029344572500634332 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8513761467889909, "acc_stderr": 0.015251253773660834, "acc_norm": 0.8513761467889909, "acc_norm_stderr": 0.015251253773660834 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5787037037037037, "acc_stderr": 
0.033674621388960775, "acc_norm": 0.5787037037037037, "acc_norm_stderr": 0.033674621388960775 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8481012658227848, "acc_stderr": 0.023363878096632446, "acc_norm": 0.8481012658227848, "acc_norm_stderr": 0.023363878096632446 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7480916030534351, "acc_stderr": 0.03807387116306086, "acc_norm": 0.7480916030534351, "acc_norm_stderr": 0.03807387116306086 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.033519538795212696, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.033519538795212696 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.0230866350868414, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.0230866350868414 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 
0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8071519795657727, "acc_stderr": 0.014108533515757431, "acc_norm": 0.8071519795657727, "acc_norm_stderr": 0.014108533515757431 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7572254335260116, "acc_stderr": 0.023083658586984204, "acc_norm": 0.7572254335260116, "acc_norm_stderr": 0.023083658586984204 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39776536312849164, "acc_stderr": 0.01636920497126298, "acc_norm": 0.39776536312849164, "acc_norm_stderr": 0.01636920497126298 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7266881028938906, "acc_stderr": 0.025311765975426122, "acc_norm": 0.7266881028938906, "acc_norm_stderr": 0.025311765975426122 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7839506172839507, "acc_stderr": 0.022899162918445806, "acc_norm": 0.7839506172839507, "acc_norm_stderr": 0.022899162918445806 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.49645390070921985, "acc_stderr": 0.02982674915328092, "acc_norm": 0.49645390070921985, "acc_norm_stderr": 0.02982674915328092 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4915254237288136, "acc_stderr": 0.012768401697269057, "acc_norm": 0.4915254237288136, "acc_norm_stderr": 0.012768401697269057 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7426470588235294, "acc_stderr": 0.02655651947004151, "acc_norm": 0.7426470588235294, "acc_norm_stderr": 0.02655651947004151 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6781045751633987, "acc_stderr": 0.018901015322093092, "acc_norm": 0.6781045751633987, "acc_norm_stderr": 0.018901015322093092 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, 
"acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784593, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784593 }, "harness|hendrycksTest-sociology|5": { "acc": 0.845771144278607, "acc_stderr": 0.02553843336857834, "acc_norm": 0.845771144278607, "acc_norm_stderr": 0.02553843336857834 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 }, "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598052, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598052 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7183713907727333, "mc2_stderr": 0.014997186929843767 }, "harness|winogrande|5": { "acc": 0.8358326756116812, "acc_stderr": 0.010410849775222789 }, "harness|gsm8k|5": { "acc": 0.6520090978013646, "acc_stderr": 0.013120581030382134 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
--> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
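A minimal sketch of working with such a results dump: the dict below reuses a few of the per-task `acc` values from the JSON above to compute a macro-average accuracy. The aggregation itself is an illustrative example, not the leaderboard's own scoring code, and only a subset of tasks is included here.

```python
# Macro-average the "acc" field across a few harness task entries.
# Values are copied from the results JSON above; the aggregation is
# an illustrative sketch, not the official leaderboard computation.
results = {
    "harness|arc:challenge|25": {"acc": 0.6851535836177475},
    "harness|winogrande|5": {"acc": 0.8358326756116812},
    "harness|gsm8k|5": {"acc": 0.6520090978013646},
}

macro_avg = sum(task["acc"] for task in results.values()) / len(results)
print(f"macro-average acc over {len(results)} tasks: {macro_avg:.4f}")
```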
open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3
[ "region:us" ]
2023-12-30T16:51:58+00:00
{"pretty_name": "Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v3", "dataset_summary": "Dataset automatically created during the evaluation run of model [jeonsworld/CarbonVillain-en-10.7B-v3](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T16:49:42.905976](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3/blob/main/results_2023-12-30T16-49-42.905976.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.667270432389036,\n \"acc_stderr\": 0.03161503740481807,\n \"acc_norm\": 0.6679793731390249,\n \"acc_norm_stderr\": 0.032260225407857515,\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7183713907727333,\n \"mc2_stderr\": 0.014997186929843767\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6851535836177475,\n \"acc_stderr\": 0.01357265770308495,\n \"acc_norm\": 0.7098976109215017,\n \"acc_norm_stderr\": 0.013261573677520767\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7143995220075682,\n \"acc_stderr\": 0.0045077680295901,\n \"acc_norm\": 0.8847839075881299,\n \"acc_norm_stderr\": 0.0031863002304505774\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.756578947368421,\n \"acc_stderr\": 0.034923496688842384,\n \"acc_norm\": 0.756578947368421,\n \"acc_norm_stderr\": 0.034923496688842384\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.028637235639800886,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.028637235639800886\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n 
\"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6297872340425532,\n \"acc_stderr\": 0.03156564682236786,\n \"acc_norm\": 0.6297872340425532,\n \"acc_norm_stderr\": 0.03156564682236786\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.5026455026455027,\n \"acc_stderr\": 0.02575094967813038,\n \"acc_norm\": 0.5026455026455027,\n \"acc_norm_stderr\": 0.02575094967813038\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 
0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644244,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644244\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955286,\n \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955286\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634332,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634332\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660834,\n \"acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660834\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5787037037037037,\n \"acc_stderr\": 0.033674621388960775,\n \"acc_norm\": 0.5787037037037037,\n \"acc_norm_stderr\": 0.033674621388960775\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8481012658227848,\n \"acc_stderr\": 0.023363878096632446,\n \"acc_norm\": 0.8481012658227848,\n \"acc_norm_stderr\": 0.023363878096632446\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n 
\"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8071519795657727,\n \"acc_stderr\": 0.014108533515757431,\n \"acc_norm\": 0.8071519795657727,\n \"acc_norm_stderr\": 0.014108533515757431\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7572254335260116,\n \"acc_stderr\": 0.023083658586984204,\n \"acc_norm\": 0.7572254335260116,\n \"acc_norm_stderr\": 0.023083658586984204\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39776536312849164,\n \"acc_stderr\": 0.01636920497126298,\n \"acc_norm\": 0.39776536312849164,\n \"acc_norm_stderr\": 0.01636920497126298\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.025311765975426122,\n 
\"acc_norm\": 0.7266881028938906,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7839506172839507,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.7839506172839507,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4915254237288136,\n \"acc_stderr\": 0.012768401697269057,\n \"acc_norm\": 0.4915254237288136,\n \"acc_norm_stderr\": 0.012768401697269057\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.02655651947004151,\n \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.02655651947004151\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.02553843336857834,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.02553843336857834\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 
0.03836722176598052,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7183713907727333,\n \"mc2_stderr\": 0.014997186929843767\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8358326756116812,\n \"acc_stderr\": 0.010410849775222789\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6520090978013646,\n \"acc_stderr\": 0.013120581030382134\n }\n}\n```", "repo_url": "https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-49-42.905976.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-49-42.905976.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-49-42.905976.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-49-42.905976.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-49-42.905976.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-49-42.905976.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T16-49-42.905976.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T16-49-42.905976.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["**/details_harness|winogrande|5_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T16-49-42.905976.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T16_49_42.905976", "path": ["results_2023-12-30T16-49-42.905976.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T16-49-42.905976.parquet"]}]}]}
2023-12-30T16:52:18+00:00
# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v3

Dataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v3 on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T16:49:42.905976 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the "results" configuration and in the "latest" split of each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
[ "# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v3\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T16:49:42.905976(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v3\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T16:49:42.905976(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 195, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v3\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T16:49:42.905976(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
86e74111e73d95ecfb69de292c614d521234fb76
# CanariaView Global Copper Supply Forecasting Dataset ## Description This dataset encompasses economic and industrial indicators vital for constructing a copper supply forecasting model. Coverage Period: Monthly data from January 2000 to March 2023, encompassing a total of 279 months. Column Descriptions and Sources: - `Copper price`: MacroTrends - `Cash Costs (Antofagasta's Pure Mining Costs)`: Antofagasta Annual Report - `Transport (Antofagasta's Transportation Cost)`: Antofagasta Annual Report - `Stock (LME Copper Stock)`: MacroMicro - `Oil Price`: Source - EIA - `M_GDP (Chile Copper Mining GDP)`: Banco Central de Chile Preprocessing Methodology and Data Collection Details: - Comprehensive analysis of data structure followed by essential preprocessing. - Appropriate handling of missing values. - Daily (e.g., Copper price, Oil Price) and quarterly data (e.g., Cash Costs, Transport, M_GDP) uniformly expanded to a monthly timescale for consistency. - The Antofagasta annual report was available from the year 2000, hence the data collection started from 2000. ## 한국어 설명 본 데이터셋은 구리 공급 예측 모델 구축을 위한 경제지표 및 산업지표로 구성되었습니다. 기간: 2000년 1월~2023년 3월(월별), 총 279개월. 컬럼 설명 및 출처: - `Copper price (구리 가격)`: MacroTrends - `Cash Costs (Antofagasta 순수채굴비용)`: Antofagasta Annual Report - `Transport (Antofagasta 운송비)`: Antofagasta Annual Report - `Stock (런던금속거래소 구리 재고량)`: MacroMicro - `Oil Price (원유 가격)`: EIA - `M_GDP (칠레 구리 채굴 GDP)`: Banco Central de Chile 데이터 전처리 및 수집 방법: - 데이터 구조 분석 및 전처리 과정 수행. - 결측치 처리. - 일별 자료 (구리 가격, 원유 가격), 분기별 자료 (Cash Costs, Transport, M_GDP)는 월별 데이터로의 확장을 통해 일관된 시계열 데이터로 통합. - 안토파가스타 관련 데이터가 2000년부터 확보가 가능하여 2000년 부터 수집함.
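The monthly-expansion step described above can be sketched in plain Python. The helper names are illustrative only, and the exact expansion rules are assumptions: quarterly figures are repeated across their three months, and daily figures are averaged per month (the real pipeline may interpolate or aggregate differently).

```python
from collections import defaultdict
from statistics import mean

def quarterly_to_monthly(values):
    """Expand quarterly figures (e.g. Cash Costs, Transport, M_GDP) by
    repeating each value for the three months of its quarter."""
    return [v for v in values for _ in range(3)]

def daily_to_monthly(observations):
    """Collapse daily observations (e.g. copper or oil prices), given as
    (year, month, value) tuples, to one monthly mean per (year, month)."""
    buckets = defaultdict(list)
    for year, month, value in observations:
        buckets[(year, month)].append(value)
    return {key: mean(vals) for key, vals in sorted(buckets.items())}
```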
CanariaView/GlobalCopperSupplyForecastingDataset
[ "task_categories:time-series-forecasting", "language:en", "language:ko", "mining", "LSTM", "TimeSeries", "CanariaView", "region:us" ]
2023-12-30T17:03:56+00:00
{"language": ["en", "ko"], "task_categories": ["time-series-forecasting"], "tags": ["mining", "LSTM", "TimeSeries", "CanariaView"]}
2023-12-30T17:21:30+00:00
[]
[ "en", "ko" ]
TAGS #task_categories-time-series-forecasting #language-English #language-Korean #mining #LSTM #TimeSeries #CanariaView #region-us
# CanariaView Global Copper Supply Forecasting Dataset ## Description This dataset encompasses economic and industrial indicators vital for constructing a copper supply forecasting model. Coverage Period: Monthly data from January 2000 to March 2023, encompassing a total of 279 months. Column Descriptions and Sources: - 'Copper price': MacroTrends - 'Cash Costs (Antofagasta's Pure Mining Costs)': Antofagasta Annual Report - 'Transport (Antofagasta's Transportation Cost)': Antofagasta Annual Report - 'Stock (LME Copper Stock)': MacroMicro - 'Oil Price': Source - EIA - 'M_GDP (Chile Copper Mining GDP)': Banco Central de Chile Preprocessing Methodology and Data Collection Details: - Comprehensive analysis of data structure followed by essential preprocessing. - Appropriate handling of missing values. - Daily (e.g., Copper price, Oil Price) and quarterly data (e.g., Cash Costs, Transport, M_GDP) uniformly expanded to a monthly timescale for consistency. - The Antofagasta annual report was available from the year 2000, hence the data collection started from 2000. ## 한국어 설명 본 데이터셋은 구리 공급 예측 모델 구축을 위한 경제지표 및 산업지표로 구성되었습니다. 기간: 2000년 1월~2023년 3월(월별), 총 279개월. 컬럼 설명 및 출처: - 'Copper price (구리 가격)': MacroTrends - 'Cash Costs (Antofagasta 순수채굴비용)': Antofagasta Annual Report - 'Transport (Antofagasta 운송비)': Antofagasta Annual Report - 'Stock (런던금속거래소 구리 재고량)': MacroMicro - 'Oil Price (원유 가격)': EIA - 'M_GDP (칠레 구리 채굴 GDP)': Banco Central de Chile 데이터 전처리 및 수집 방법: - 데이터 구조 분석 및 전처리 과정 수행. - 결측치 처리. - 일별 자료 (구리 가격, 원유 가격), 분기별 자료 (Cash Costs, Transport, M_GDP)는 월별 데이터로의 확장을 통해 일관된 시계열 데이터로 통합. - 안토파가스타 관련 데이터가 2000년부터 확보가 가능하여 2000년 부터 수집함.
[ "# CanariaView Global Copper Supply Forecasting Dataset", "## Description\n\nThis dataset encompasses economic and industrial indicators vital for constructing a copper supply forecasting model.\n\nCoverage Period: Monthly data from January 2000 to March 2023, encompassing a total of 279 months.\n\nColumn Descriptions and Sources:\n- 'Copper price': MacroTrends\n- 'Cash Costs (Antofagasta's Pure Mining Costs)': Antofagasta Annual Report\n- 'Transport (Antofagasta's Transportation Cost)': Antofagasta Annual Report\n- 'Stock (LME Copper Stock)': MacroMicro\n- 'Oil Price': Source - EIA\n- 'M_GDP (Chile Copper Mining GDP)': Banco Central de Chile\n\nPreprocessing Methodology and Data Collection Details:\n- Comprehensive analysis of data structure followed by essential preprocessing.\n- Appropriate handling of missing values.\n- Daily (e.g., Copper price, Oil Price) and quarterly data (e.g., Cash Costs, Transport, M_GDP) uniformly expanded to a monthly timescale for consistency.\n- The Antofagasta annual report was available from the year 2000, hence the data collection started from 2000.", "## 한국어 설명\n\n본 데이터셋은 구리 공급 예측 모델 구축을 위한 경제지표 및 산업지표로 구성되었습니다.\n\n기간: 2000년 1월~2023년 3월(월별), 총 279개월.\n\n컬럼 설명 및 출처:\n- 'Copper price (구리 가격)': MacroTrends\n- 'Cash Costs (Antofagasta 순수채굴비용)': Antofagasta Annual Report\n- 'Transport (Antofagasta 운송비)': Antofagasta Annual Report\n- 'Stock (런던금속거래소 구리 재고량)': MacroMicro\n- 'Oil Price (원유 가격)': EIA\n- 'M_GDP (칠레 구리 채굴 GDP)': Banco Central de Chile\n\n데이터 전처리 및 수집 방법:\n- 데이터 구조 분석 및 전처리 과정 수행.\n- 결측치 처리.\n- 일별 자료 (구리 가격, 원유 가격), 분기별 자료 (Cash Costs, Transport, M_GDP)는 월별 데이터로의 확장을 통해 일관된 시계열 데이터로 통합.\n- 안토파가스타 관련 데이터가 2000년부터 확보가 가능하여 2000년 부터 수집함." ]
[ "TAGS\n#task_categories-time-series-forecasting #language-English #language-Korean #mining #LSTM #TimeSeries #CanariaView #region-us \n", "# CanariaView Global Copper Supply Forecasting Dataset", "## Description\n\nThis dataset encompasses economic and industrial indicators vital for constructing a copper supply forecasting model.\n\nCoverage Period: Monthly data from January 2000 to March 2023, encompassing a total of 279 months.\n\nColumn Descriptions and Sources:\n- 'Copper price': MacroTrends\n- 'Cash Costs (Antofagasta's Pure Mining Costs)': Antofagasta Annual Report\n- 'Transport (Antofagasta's Transportation Cost)': Antofagasta Annual Report\n- 'Stock (LME Copper Stock)': MacroMicro\n- 'Oil Price': Source - EIA\n- 'M_GDP (Chile Copper Mining GDP)': Banco Central de Chile\n\nPreprocessing Methodology and Data Collection Details:\n- Comprehensive analysis of data structure followed by essential preprocessing.\n- Appropriate handling of missing values.\n- Daily (e.g., Copper price, Oil Price) and quarterly data (e.g., Cash Costs, Transport, M_GDP) uniformly expanded to a monthly timescale for consistency.\n- The Antofagasta annual report was available from the year 2000, hence the data collection started from 2000.", "## 한국어 설명\n\n본 데이터셋은 구리 공급 예측 모델 구축을 위한 경제지표 및 산업지표로 구성되었습니다.\n\n기간: 2000년 1월~2023년 3월(월별), 총 279개월.\n\n컬럼 설명 및 출처:\n- 'Copper price (구리 가격)': MacroTrends\n- 'Cash Costs (Antofagasta 순수채굴비용)': Antofagasta Annual Report\n- 'Transport (Antofagasta 운송비)': Antofagasta Annual Report\n- 'Stock (런던금속거래소 구리 재고량)': MacroMicro\n- 'Oil Price (원유 가격)': EIA\n- 'M_GDP (칠레 구리 채굴 GDP)': Banco Central de Chile\n\n데이터 전처리 및 수집 방법:\n- 데이터 구조 분석 및 전처리 과정 수행.\n- 결측치 처리.\n- 일별 자료 (구리 가격, 원유 가격), 분기별 자료 (Cash Costs, Transport, M_GDP)는 월별 데이터로의 확장을 통해 일관된 시계열 데이터로 통합.\n- 안토파가스타 관련 데이터가 2000년부터 확보가 가능하여 2000년 부터 수집함." ]
[ 44, 13, 272, 265 ]
[ "passage: TAGS\n#task_categories-time-series-forecasting #language-English #language-Korean #mining #LSTM #TimeSeries #CanariaView #region-us \n# CanariaView Global Copper Supply Forecasting Dataset## Description\n\nThis dataset encompasses economic and industrial indicators vital for constructing a copper supply forecasting model.\n\nCoverage Period: Monthly data from January 2000 to March 2023, encompassing a total of 279 months.\n\nColumn Descriptions and Sources:\n- 'Copper price': MacroTrends\n- 'Cash Costs (Antofagasta's Pure Mining Costs)': Antofagasta Annual Report\n- 'Transport (Antofagasta's Transportation Cost)': Antofagasta Annual Report\n- 'Stock (LME Copper Stock)': MacroMicro\n- 'Oil Price': Source - EIA\n- 'M_GDP (Chile Copper Mining GDP)': Banco Central de Chile\n\nPreprocessing Methodology and Data Collection Details:\n- Comprehensive analysis of data structure followed by essential preprocessing.\n- Appropriate handling of missing values.\n- Daily (e.g., Copper price, Oil Price) and quarterly data (e.g., Cash Costs, Transport, M_GDP) uniformly expanded to a monthly timescale for consistency.\n- The Antofagasta annual report was available from the year 2000, hence the data collection started from 2000." ]
6fd0da19baf24f9cfa4ac91236fcebb9ffbeb91b
# TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset These columns are taken directly from the aforementioned dataset: * **id**: unique identifier for the post * **subreddit**: subreddit the post was taken from * **title**: title of the post * **post**: body of the post * **summary**: summary of the post * **reference_response**: reference response for the post These columns are added by this preprocessing script: * **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either space or `[PAD]` token (see Args below). 
* **query_token**: tokenized version of `query` * **reference_response_token**: tokenized version of `reference_response` * **reference_response_token_len**: length of `reference_response_token` * **query_reference_response**: concatenation of `query.strip()` and `reference_response` * **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens * **query_reference_response_token_len**: length of `query_reference_response_token` # Args ```python {'base_model': 'EleutherAI/pythia-160m', 'hf_entity': 'cleanrl', 'max_rm_query_response_length': 638, 'max_rm_response_length': 169, 'max_sft_query_response_length': 562, 'max_sft_response_length': 53, 'oai_params': TaskQueryHParams(length=512, format_str='SUBREDDIT: r/{subreddit}\n' '\n' 'TITLE: {title}\n' '\n' 'POST: {post}\n' '\n' 'TL;DR:', truncate_field='post', truncate_text='\n', padding=[50277], pad_side='left'), 'push_to_hub': True} {'format_str': 'SUBREDDIT: r/{subreddit}\n' '\n' 'TITLE: {title}\n' '\n' 'POST: {post}\n' '\n' 'TL;DR:', 'length': 512, 'pad_side': 'left', 'padding': [50277], 'truncate_field': 'post', 'truncate_text': '\n'} ```
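The query construction described above (format string, truncation at the last newline, left padding) can be sketched at the character level; this is a simplified stand-in, since the real pipeline measures length in tokens and pads with token id 50277 rather than a character.

```python
def build_query(subreddit: str, title: str, post: str,
                length: int = 512, pad_char: str = " ") -> str:
    """Character-level sketch of the length-limited query construction
    (the real pipeline counts tokens, not characters, and left-pads
    with the [PAD] token id 50277)."""
    fmt = "SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:"
    overhead = len(fmt.format(subreddit=subreddit, title=title, post=""))
    budget = length - overhead          # room left for the post body
    if len(post) > budget:
        truncated = post[:budget]
        cut = truncated.rfind("\n")     # truncate at the last newline if any
        if cut > 0:
            truncated = truncated[:cut]
        post = truncated
    query = fmt.format(subreddit=subreddit, title=title, post=post)
    return query.rjust(length, pad_char)  # pad on the left if too short
```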
cleanrl/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_pythia-160m_53
[ "region:us" ]
2023-12-30T17:04:38+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "reference_response", "dtype": "string"}, {"name": "reference_response_token", "sequence": "int64"}, {"name": "reference_response_token_len", "dtype": "int64"}, {"name": "query_reference_response", "dtype": "string"}, {"name": "query_reference_response_token", "sequence": "int64"}, {"name": "query_reference_response_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1794282399, "num_examples": 116722}, {"name": "validation", "num_bytes": 99115351, "num_examples": 6447}, {"name": "test", "num_bytes": 100764966, "num_examples": 6553}], "download_size": 573863936, "dataset_size": 1994162716}}
2023-12-30T17:04:57+00:00
[]
[]
TAGS #region-us
# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task The dataset is directly taken from URL These columns are taken directly from the aforementioned dataset: * id: unique identifier for the post * subreddit: subreddit the post was taken from * title: title of the post * post: body of the post * summary: summary of the post * reference_response: reference response for the post These columns are added by this preprocessing script: * query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last ' '. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below). * query_token: tokenized version of 'query' * reference_response_token: tokenized version of 'reference_response' * reference_response_token_len: length of 'reference_response_token' * query_reference_response: concatenation of 'URL()' and 'reference_response' * query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens * query_reference_response_token_len: length of 'query_reference_response_token' # Args
[ "# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'", "# Args" ]
[ "TAGS\n#region-us \n", "# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'", "# Args" ]
[ 6, 384, 3 ]
[ "passage: TAGS\n#region-us \n# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'# Args" ]
3fa20b8a1cc025522ff43684986aace0b9bd372f
# Dataset Card for "summarize_from_feedback_oai_preprocessing_pythia-160m_169" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
cleanrl/summarize_from_feedback_oai_preprocessing_pythia-160m_169
[ "region:us" ]
2023-12-30T17:05:17+00:00
{"dataset_info": {"features": [{"name": "info", "struct": [{"name": "id", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "article", "dtype": "string"}]}, {"name": "summaries", "list": [{"name": "text", "dtype": "string"}, {"name": "policy", "dtype": "string"}, {"name": "note", "dtype": "string"}]}, {"name": "choice", "dtype": "int32"}, {"name": "worker", "dtype": "string"}, {"name": "batch", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "extra", "struct": [{"name": "confidence", "dtype": "int32"}]}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "response0", "dtype": "string"}, {"name": "response0_token", "sequence": "int64"}, {"name": "response0_token_len", "dtype": "int64"}, {"name": "response1", "dtype": "string"}, {"name": "response1_token", "sequence": "int64"}, {"name": "response1_token_len", "dtype": "int64"}, {"name": "response0_policy", "dtype": "string"}, {"name": "response1_policy", "dtype": "string"}, {"name": "policies", "dtype": "string"}, {"name": "query_response0", "dtype": "string"}, {"name": "query_response0_token", "sequence": "int64"}, {"name": "query_response0_token_len", "dtype": "int64"}, {"name": "query_response1", "dtype": "string"}, {"name": "query_response1_token", "sequence": "int64"}, {"name": "query_response1_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2440810807, "num_examples": 92858}, {"name": "validation", "num_bytes": 2279202469, "num_examples": 86086}], "download_size": 286866319, "dataset_size": 4720013276}}
2023-12-30T17:05:51+00:00
[]
[]
TAGS #region-us
# Dataset Card for "summarize_from_feedback_oai_preprocessing_pythia-160m_169" More Information needed
[ "# Dataset Card for \"summarize_from_feedback_oai_preprocessing_pythia-160m_169\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"summarize_from_feedback_oai_preprocessing_pythia-160m_169\"\n\nMore Information needed" ]
[ 6, 33 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"summarize_from_feedback_oai_preprocessing_pythia-160m_169\"\n\nMore Information needed" ]
7df1c3bf4645ff41f03d2171d2875e7620f44ba5
# Dataset Card for Evaluation run of mlabonne/Marcoro14-7B-slerp <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_mlabonne__Marcoro14-7B-slerp", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T17:07:52.198441](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Marcoro14-7B-slerp/blob/main/results_2023-12-30T17-07-52.198441.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6557670960374431, "acc_stderr": 0.031998348451839013, "acc_norm": 0.6555797586821419, "acc_norm_stderr": 0.032660366522478446, "mc1": 0.4724602203182375, "mc1_stderr": 0.017476930190712187, "mc2": 0.6354053076486196, "mc2_stderr": 0.015212905778062237 }, "harness|arc:challenge|25": { "acc": 0.6749146757679181, "acc_stderr": 0.013688147309729125, "acc_norm": 0.6979522184300341, "acc_norm_stderr": 0.01341751914471641 }, "harness|hellaswag|10": { "acc": 0.6919936267675761, "acc_stderr": 0.004607256752931883, "acc_norm": 0.8713403704441346, "acc_norm_stderr": 0.003341385493187586 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6666666666666666, "acc_stderr": 0.04072314811876837, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6973684210526315, "acc_stderr": 0.037385206761196686, "acc_norm": 0.6973684210526315, "acc_norm_stderr": 0.037385206761196686 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.64, "acc_stderr": 0.04824181513244218, "acc_norm": 0.64, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7056603773584905, "acc_stderr": 0.02804918631569525, "acc_norm": 0.7056603773584905, "acc_norm_stderr": 0.02804918631569525 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.05016135580465919, "acc_norm": 0.47, "acc_norm_stderr": 0.05016135580465919 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.54, "acc_stderr": 0.05009082659620333, "acc_norm": 0.54, "acc_norm_stderr": 
0.05009082659620333 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6473988439306358, "acc_stderr": 0.03643037168958548, "acc_norm": 0.6473988439306358, "acc_norm_stderr": 0.03643037168958548 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.43137254901960786, "acc_stderr": 0.04928099597287534, "acc_norm": 0.43137254901960786, "acc_norm_stderr": 0.04928099597287534 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816508, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816508 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6042553191489362, "acc_stderr": 0.031967586978353627, "acc_norm": 0.6042553191489362, "acc_norm_stderr": 0.031967586978353627 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5087719298245614, "acc_stderr": 0.04702880432049615, "acc_norm": 0.5087719298245614, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5379310344827586, "acc_stderr": 0.04154659671707548, "acc_norm": 0.5379310344827586, "acc_norm_stderr": 0.04154659671707548 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.43386243386243384, "acc_stderr": 0.025525034382474887, "acc_norm": 0.43386243386243384, "acc_norm_stderr": 0.025525034382474887 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.46825396825396826, "acc_stderr": 0.04463112720677172, "acc_norm": 0.46825396825396826, "acc_norm_stderr": 0.04463112720677172 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7838709677419354, "acc_stderr": 0.023415293433568525, "acc_norm": 0.7838709677419354, "acc_norm_stderr": 0.023415293433568525 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.49261083743842365, "acc_stderr": 0.03517603540361008, "acc_norm": 0.49261083743842365, "acc_norm_stderr": 0.03517603540361008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7818181818181819, "acc_stderr": 0.03225078108306289, "acc_norm": 0.7818181818181819, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945633, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945633 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.02199531196364424, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.02199531196364424 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402538, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402538 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35555555555555557, "acc_stderr": 0.029185714949857413, "acc_norm": 0.35555555555555557, "acc_norm_stderr": 0.029185714949857413 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6764705882352942, "acc_stderr": 0.030388353551886793, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.030388353551886793 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8532110091743119, "acc_stderr": 0.01517314184512625, "acc_norm": 0.8532110091743119, "acc_norm_stderr": 0.01517314184512625 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5185185185185185, "acc_stderr": 
0.034076320938540516, "acc_norm": 0.5185185185185185, "acc_norm_stderr": 0.034076320938540516 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8431372549019608, "acc_stderr": 0.02552472232455335, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.02552472232455335 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.025530100460233494, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.025530100460233494 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.03102441174057221, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.03102441174057221 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8015267175572519, "acc_stderr": 0.034981493854624734, "acc_norm": 0.8015267175572519, "acc_norm_stderr": 0.034981493854624734 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7668711656441718, "acc_stderr": 0.0332201579577674, "acc_norm": 0.7668711656441718, "acc_norm_stderr": 0.0332201579577674 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.44642857142857145, "acc_stderr": 0.04718471485219588, "acc_norm": 0.44642857142857145, "acc_norm_stderr": 0.04718471485219588 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8888888888888888, "acc_stderr": 0.020588491316092375, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.020588491316092375 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.72, "acc_stderr": 
0.045126085985421276, "acc_norm": 0.72, "acc_norm_stderr": 0.045126085985421276 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8365261813537676, "acc_stderr": 0.013223928616741622, "acc_norm": 0.8365261813537676, "acc_norm_stderr": 0.013223928616741622 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7543352601156069, "acc_stderr": 0.023176298203992005, "acc_norm": 0.7543352601156069, "acc_norm_stderr": 0.023176298203992005 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.423463687150838, "acc_stderr": 0.016525425898773493, "acc_norm": 0.423463687150838, "acc_norm_stderr": 0.016525425898773493 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7320261437908496, "acc_stderr": 0.025360603796242557, "acc_norm": 0.7320261437908496, "acc_norm_stderr": 0.025360603796242557 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7138263665594855, "acc_stderr": 0.025670259242188933, "acc_norm": 0.7138263665594855, "acc_norm_stderr": 0.025670259242188933 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7623456790123457, "acc_stderr": 0.02368359183700856, "acc_norm": 0.7623456790123457, "acc_norm_stderr": 0.02368359183700856 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5, "acc_stderr": 0.029827499313594685, "acc_norm": 0.5, "acc_norm_stderr": 0.029827499313594685 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.46870925684485004, "acc_stderr": 0.012745204626083135, "acc_norm": 0.46870925684485004, "acc_norm_stderr": 0.012745204626083135 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6727941176470589, "acc_stderr": 0.028501452860396556, "acc_norm": 0.6727941176470589, "acc_norm_stderr": 0.028501452860396556 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.684640522875817, "acc_stderr": 0.01879808628488689, "acc_norm": 0.684640522875817, "acc_norm_stderr": 0.01879808628488689 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 
0.6727272727272727, "acc_norm_stderr": 0.0449429086625209 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.028123429335142777, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.028123429335142777 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.025870646766169136, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.025870646766169136 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.84, "acc_stderr": 0.03684529491774708, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774708 }, "harness|hendrycksTest-virology|5": { "acc": 0.5481927710843374, "acc_stderr": 0.03874371556587953, "acc_norm": 0.5481927710843374, "acc_norm_stderr": 0.03874371556587953 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.4724602203182375, "mc1_stderr": 0.017476930190712187, "mc2": 0.6354053076486196, "mc2_stderr": 0.015212905778062237 }, "harness|winogrande|5": { "acc": 0.8161010260457774, "acc_stderr": 0.01088791601330589 }, "harness|gsm8k|5": { "acc": 0.7088703563305534, "acc_stderr": 0.012513215297888463 } }
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
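Once loaded, the per-task results in this dataset can be compared programmatically. As a minimal self-contained sketch, the snippet below ranks a handful of the per-subject MMLU accuracies reported in the "Latest results" block above; the values are copied verbatim from this run, and the subset of subjects is illustrative, not the full 57-subject table.

```python
# Rank a few of the per-subject MMLU accuracies from this evaluation run.
# Values are copied verbatim from the results block above; in practice you
# would read them from the aggregated "results" configuration instead.
accuracies = {
    "high_school_government_and_politics": 0.8963730569948186,
    "marketing": 0.8888888888888888,
    "moral_scenarios": 0.423463687150838,
    "college_mathematics": 0.31,
}

ranked = sorted(accuracies.items(), key=lambda kv: kv[1], reverse=True)
for subject, acc in ranked:
    print(f"{subject:40s} {acc:.3f}")
```

The same sorting works unchanged on the full set of `harness|hendrycksTest-*` entries after stripping the `harness|` prefix and few-shot suffix from the keys.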
open-llm-leaderboard/details_shadowml__Marcoro14-7B-slerp
[ "region:us" ]
2023-12-30T17:10:07+00:00
{"pretty_name": "Evaluation run of mlabonne/Marcoro14-7B-slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mlabonne__Marcoro14-7B-slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T17:07:52.198441](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__Marcoro14-7B-slerp/blob/main/results_2023-12-30T17-07-52.198441.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6557670960374431,\n \"acc_stderr\": 0.031998348451839013,\n \"acc_norm\": 0.6555797586821419,\n \"acc_norm_stderr\": 0.032660366522478446,\n \"mc1\": 0.4724602203182375,\n \"mc1_stderr\": 0.017476930190712187,\n \"mc2\": 0.6354053076486196,\n \"mc2_stderr\": 0.015212905778062237\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6749146757679181,\n \"acc_stderr\": 0.013688147309729125,\n \"acc_norm\": 0.6979522184300341,\n \"acc_norm_stderr\": 0.01341751914471641\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6919936267675761,\n \"acc_stderr\": 0.004607256752931883,\n \"acc_norm\": 0.8713403704441346,\n \"acc_norm_stderr\": 0.003341385493187586\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7056603773584905,\n \"acc_stderr\": 0.02804918631569525,\n \"acc_norm\": 0.7056603773584905,\n \"acc_norm_stderr\": 0.02804918631569525\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.03643037168958548,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.03643037168958548\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6042553191489362,\n \"acc_stderr\": 0.031967586978353627,\n \"acc_norm\": 0.6042553191489362,\n \"acc_norm_stderr\": 0.031967586978353627\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.43386243386243384,\n \"acc_stderr\": 0.025525034382474887,\n \"acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.025525034382474887\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 
0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n \"acc_stderr\": 0.023415293433568525,\n \"acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.023415293433568525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402538,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857413,\n \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857413\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886793,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8532110091743119,\n \"acc_stderr\": 0.01517314184512625,\n \"acc_norm\": 0.8532110091743119,\n \"acc_norm_stderr\": 0.01517314184512625\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.034076320938540516,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.034076320938540516\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.02552472232455335,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.02552472232455335\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233494,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233494\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.034981493854624734,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.034981493854624734\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n 
\"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092375,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092375\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8365261813537676,\n \"acc_stderr\": 0.013223928616741622,\n \"acc_norm\": 0.8365261813537676,\n \"acc_norm_stderr\": 0.013223928616741622\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7543352601156069,\n \"acc_stderr\": 0.023176298203992005,\n \"acc_norm\": 0.7543352601156069,\n \"acc_norm_stderr\": 0.023176298203992005\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.423463687150838,\n \"acc_stderr\": 0.016525425898773493,\n \"acc_norm\": 0.423463687150838,\n \"acc_norm_stderr\": 0.016525425898773493\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7320261437908496,\n \"acc_stderr\": 0.025360603796242557,\n \"acc_norm\": 0.7320261437908496,\n \"acc_norm_stderr\": 0.025360603796242557\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n \"acc_stderr\": 0.025670259242188933,\n 
\"acc_norm\": 0.7138263665594855,\n \"acc_norm_stderr\": 0.025670259242188933\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7623456790123457,\n \"acc_stderr\": 0.02368359183700856,\n \"acc_norm\": 0.7623456790123457,\n \"acc_norm_stderr\": 0.02368359183700856\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.029827499313594685,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.029827499313594685\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n \"acc_stderr\": 0.012745204626083135,\n \"acc_norm\": 0.46870925684485004,\n \"acc_norm_stderr\": 0.012745204626083135\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396556,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396556\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.684640522875817,\n \"acc_stderr\": 0.01879808628488689,\n \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.01879808628488689\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142777,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142777\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.025870646766169136,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.025870646766169136\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n 
\"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4724602203182375,\n \"mc1_stderr\": 0.017476930190712187,\n \"mc2\": 0.6354053076486196,\n \"mc2_stderr\": 0.015212905778062237\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8161010260457774,\n \"acc_stderr\": 0.01088791601330589\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7088703563305534,\n \"acc_stderr\": 0.012513215297888463\n }\n}\n```", "repo_url": "https://huggingface.co/mlabonne/Marcoro14-7B-slerp", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|arc:challenge|25_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|gsm8k|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hellaswag|10_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-07-52.198441.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-07-52.198441.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-07-52.198441.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-07-52.198441.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-07-52.198441.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-07-52.198441.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T17-07-52.198441.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T17-07-52.198441.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["**/details_harness|winogrande|5_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T17-07-52.198441.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T17_07_52.198441", "path": ["results_2023-12-30T17-07-52.198441.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T17-07-52.198441.parquet"]}]}]}
2024-01-04T13:55:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of mlabonne/Marcoro14-7B-slerp

Dataset automatically created during the evaluation run of model mlabonne/Marcoro14-7B-slerp on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T17:07:52.198441 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
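The loading snippet this card refers to was stripped from the dump; below is a minimal sketch using the `datasets` library, assuming the leaderboard's usual `details_<org>__<model>` repository naming and one of the `config_name` values listed in the metadata above (e.g. `harness_winogrande_5`):

```python
# Sketch: load one evaluation config from this details repository.
# REPO_ID follows the leaderboard naming pattern and is an assumption here.
REPO_ID = "open-llm-leaderboard/details_mlabonne__Marcoro14-7B-slerp"
CONFIG = "harness_winogrande_5"  # any config_name from the metadata works


def load_details(repo_id: str = REPO_ID, config: str = CONFIG):
    """Download one evaluation config (requires network and the `datasets` package)."""
    from datasets import load_dataset

    # The "train" split always tracks the latest evaluation run.
    return load_dataset(repo_id, config, split="train")
```

Swapping `CONFIG` for `results` loads the aggregated metrics instead of per-task details.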
[ "# Dataset Card for Evaluation run of mlabonne/Marcoro14-7B-slerp\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/Marcoro14-7B-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T17:07:52.198441(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of mlabonne/Marcoro14-7B-slerp\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/Marcoro14-7B-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T17:07:52.198441(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 185, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mlabonne/Marcoro14-7B-slerp\n\n\n\nDataset automatically created during the evaluation run of model mlabonne/Marcoro14-7B-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T17:07:52.198441(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
17a4fcb234194700797c584ad1a32e84a29922b9
# Dataset Card for "semantic_fragments_nli"

https://github.com/yakazimir/semantic_fragments

https://arxiv.org/pdf/1909.07521.pdf

```bib
@article{Richardson2019ProbingNL,
  title={Probing Natural Language Inference Models through Semantic Fragments},
  author={Kyle Richardson and Hai Hu and Lawrence S. Moss and Ashish Sabharwal},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.07521},
  url={https://api.semanticscholar.org/CorpusID:202583828}
}
```
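Given the splits and features declared in this dataset's metadata (train/test/dev; `sentence1`, `sentence2`, `gold_label`, `config`), loading it might look like the sketch below — an illustration, not part of the original card:

```python
# Feature names as declared in the dataset's metadata.
FEATURES = ["sentence1", "sentence2", "gold_label", "config"]


def load_fragments(split: str = "train"):
    """Load one split of semantic_fragments_nli (requires network access).

    Valid splits per the metadata are "train", "test", and "dev".
    """
    from datasets import load_dataset

    return load_dataset("tasksource/semantic_fragments_nli", split=split)
```

Each example is a premise/hypothesis pair with a `gold_label`, plus a `config` field naming the semantic fragment it belongs to.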
tasksource/semantic_fragments_nli
[ "arxiv:1909.07521", "region:us" ]
2023-12-30T17:32:11+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "sentence1", "dtype": "string"}, {"name": "sentence2", "dtype": "string"}, {"name": "gold_label", "dtype": "string"}, {"name": "config", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10931716, "num_examples": 48000}, {"name": "test", "num_bytes": 1834340, "num_examples": 8000}, {"name": "dev", "num_bytes": 1817170, "num_examples": 8000}], "download_size": 3883767, "dataset_size": 14583226}}
2023-12-30T17:43:34+00:00
[ "1909.07521" ]
[]
TAGS #arxiv-1909.07521 #region-us
# Dataset Card for "semantic_fragments_nli" URL URL
[ "# Dataset Card for \"semantic_fragments_nli\"\n\nURL\nURL" ]
[ "TAGS\n#arxiv-1909.07521 #region-us \n", "# Dataset Card for \"semantic_fragments_nli\"\n\nURL\nURL" ]
[ 15, 18 ]
[ "passage: TAGS\n#arxiv-1909.07521 #region-us \n# Dataset Card for \"semantic_fragments_nli\"\n\nURL\nURL" ]
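The `tasksource/semantic_fragments_nli` record above lists four string features (`sentence1`, `sentence2`, `gold_label`, `config`) across `train`/`test`/`dev` splits. A minimal sketch of loading it with the `datasets` library (an assumption: `pip install datasets` and network access are available; the import is deferred so the helper can be defined without the package installed):

```python
# Feature names taken from the record's dataset_info metadata above.
EXPECTED_FEATURES = ["sentence1", "sentence2", "gold_label", "config"]

def load_fragments(split="train"):
    """Load one split of tasksource/semantic_fragments_nli and sanity-check
    that the documented feature columns are present."""
    from datasets import load_dataset  # deferred: requires the `datasets` package
    ds = load_dataset("tasksource/semantic_fragments_nli", split=split)
    missing = set(EXPECTED_FEATURES) - set(ds.column_names)
    if missing:
        raise ValueError(f"unexpected schema, missing columns: {missing}")
    return ds
```

Per the metadata, the `train` split should contain 48,000 examples and the `test` and `dev` splits 8,000 each.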
c6fd5d9bd3ccdc09d35f231d6b7e75cc7575476b
# Dataset Card for Evaluation run of tokyotech-llm/Swallow-70b-instruct-hf <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [tokyotech-llm/Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_tokyotech-llm__Swallow-70b-instruct-hf", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T17:31:51.560670](https://huggingface.co/datasets/open-llm-leaderboard/details_tokyotech-llm__Swallow-70b-instruct-hf/blob/main/results_2023-12-30T17-31-51.560670.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.668542166197933, "acc_stderr": 0.031317102785635216, "acc_norm": 0.6737570900410808, "acc_norm_stderr": 0.03193599243400287, "mc1": 0.33047735618115054, "mc1_stderr": 0.016466769613698296, "mc2": 0.4799587332250507, "mc2_stderr": 0.014270049627097015 }, "harness|arc:challenge|25": { "acc": 0.6151877133105802, "acc_stderr": 0.014218371065251104, "acc_norm": 0.6621160409556314, "acc_norm_stderr": 0.013822047922283509 }, "harness|hellaswag|10": { "acc": 0.6474805815574587, "acc_stderr": 0.004767782256040988, "acc_norm": 0.8514240191196972, "acc_norm_stderr": 0.0035494312479073657 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5777777777777777, "acc_stderr": 0.04266763404099582, "acc_norm": 0.5777777777777777, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.743421052631579, "acc_stderr": 0.0355418036802569, "acc_norm": 0.743421052631579, "acc_norm_stderr": 0.0355418036802569 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.71, "acc_stderr": 0.04560480215720684, "acc_norm": 0.71, "acc_norm_stderr": 0.04560480215720684 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7169811320754716, "acc_stderr": 0.027724236492700918, "acc_norm": 0.7169811320754716, "acc_norm_stderr": 0.027724236492700918 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7916666666666666, "acc_stderr": 0.033961162058453336, "acc_norm": 0.7916666666666666, "acc_norm_stderr": 0.033961162058453336 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.56, "acc_stderr": 0.04988876515698589, "acc_norm": 0.56, "acc_norm_stderr": 
0.04988876515698589 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.39215686274509803, "acc_stderr": 0.048580835742663434, "acc_norm": 0.39215686274509803, "acc_norm_stderr": 0.048580835742663434 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.72, "acc_stderr": 0.045126085985421296, "acc_norm": 0.72, "acc_norm_stderr": 0.045126085985421296 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6297872340425532, "acc_stderr": 0.03156564682236785, "acc_norm": 0.6297872340425532, "acc_norm_stderr": 0.03156564682236785 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4298245614035088, "acc_stderr": 0.046570472605949625, "acc_norm": 0.4298245614035088, "acc_norm_stderr": 0.046570472605949625 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6, "acc_stderr": 0.040824829046386284, "acc_norm": 0.6, "acc_norm_stderr": 0.040824829046386284 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.42857142857142855, "acc_stderr": 0.02548718714785938, "acc_norm": 0.42857142857142855, "acc_norm_stderr": 0.02548718714785938 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.04444444444444449, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.04444444444444449 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7903225806451613, "acc_stderr": 0.023157879349083522, "acc_norm": 0.7903225806451613, "acc_norm_stderr": 0.023157879349083522 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5024630541871922, "acc_stderr": 0.03517945038691063, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.76, "acc_stderr": 0.04292346959909281, "acc_norm": 0.76, "acc_norm_stderr": 0.04292346959909281 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8303030303030303, "acc_stderr": 0.029311188674983127, "acc_norm": 0.8303030303030303, "acc_norm_stderr": 0.029311188674983127 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8484848484848485, "acc_stderr": 0.025545650426603617, "acc_norm": 0.8484848484848485, "acc_norm_stderr": 0.025545650426603617 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9067357512953368, "acc_stderr": 0.020986854593289708, "acc_norm": 0.9067357512953368, "acc_norm_stderr": 0.020986854593289708 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6897435897435897, "acc_stderr": 0.02345467488940429, "acc_norm": 0.6897435897435897, "acc_norm_stderr": 0.02345467488940429 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37777777777777777, "acc_stderr": 0.02956070739246572, "acc_norm": 0.37777777777777777, "acc_norm_stderr": 0.02956070739246572 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7352941176470589, "acc_stderr": 0.02865749128507199, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.02865749128507199 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.47019867549668876, "acc_stderr": 0.040752249922169775, "acc_norm": 0.47019867549668876, "acc_norm_stderr": 0.040752249922169775 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8807339449541285, "acc_stderr": 0.01389572929258896, "acc_norm": 0.8807339449541285, "acc_norm_stderr": 0.01389572929258896 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5277777777777778, "acc_stderr": 0.0340470532865388, "acc_norm": 0.5277777777777778, "acc_norm_stderr": 
0.0340470532865388 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8774509803921569, "acc_stderr": 0.023015389732458265, "acc_norm": 0.8774509803921569, "acc_norm_stderr": 0.023015389732458265 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8607594936708861, "acc_stderr": 0.0225355263526927, "acc_norm": 0.8607594936708861, "acc_norm_stderr": 0.0225355263526927 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7309417040358744, "acc_stderr": 0.029763779406874965, "acc_norm": 0.7309417040358744, "acc_norm_stderr": 0.029763779406874965 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7786259541984732, "acc_stderr": 0.0364129708131373, "acc_norm": 0.7786259541984732, "acc_norm_stderr": 0.0364129708131373 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8512396694214877, "acc_stderr": 0.03248470083807194, "acc_norm": 0.8512396694214877, "acc_norm_stderr": 0.03248470083807194 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.0401910747255735, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.0401910747255735 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4375, "acc_stderr": 0.04708567521880525, "acc_norm": 0.4375, "acc_norm_stderr": 0.04708567521880525 }, "harness|hendrycksTest-management|5": { "acc": 0.8058252427184466, "acc_stderr": 0.039166677628225836, "acc_norm": 0.8058252427184466, "acc_norm_stderr": 0.039166677628225836 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8888888888888888, "acc_stderr": 0.020588491316092368, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.020588491316092368 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.046882617226215034, "acc_norm": 0.68, "acc_norm_stderr": 0.046882617226215034 }, 
"harness|hendrycksTest-miscellaneous|5": { "acc": 0.8378033205619413, "acc_stderr": 0.013182222616720887, "acc_norm": 0.8378033205619413, "acc_norm_stderr": 0.013182222616720887 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7514450867052023, "acc_stderr": 0.023267528432100174, "acc_norm": 0.7514450867052023, "acc_norm_stderr": 0.023267528432100174 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.32513966480446926, "acc_stderr": 0.01566654278505355, "acc_norm": 0.32513966480446926, "acc_norm_stderr": 0.01566654278505355 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7549019607843137, "acc_stderr": 0.024630048979824775, "acc_norm": 0.7549019607843137, "acc_norm_stderr": 0.024630048979824775 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7266881028938906, "acc_stderr": 0.02531176597542612, "acc_norm": 0.7266881028938906, "acc_norm_stderr": 0.02531176597542612 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7746913580246914, "acc_stderr": 0.02324620264781975, "acc_norm": 0.7746913580246914, "acc_norm_stderr": 0.02324620264781975 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5319148936170213, "acc_stderr": 0.029766675075873873, "acc_norm": 0.5319148936170213, "acc_norm_stderr": 0.029766675075873873 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.5208604954367666, "acc_stderr": 0.012759117066518005, "acc_norm": 0.5208604954367666, "acc_norm_stderr": 0.012759117066518005 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6911764705882353, "acc_stderr": 0.02806499816704009, "acc_norm": 0.6911764705882353, "acc_norm_stderr": 0.02806499816704009 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7189542483660131, "acc_stderr": 0.018185218954318086, "acc_norm": 0.7189542483660131, "acc_norm_stderr": 0.018185218954318086 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7727272727272727, "acc_stderr": 0.04013964554072776, "acc_norm": 0.7727272727272727, "acc_norm_stderr": 
0.04013964554072776 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.8040816326530612, "acc_stderr": 0.025409301953225678, "acc_norm": 0.8040816326530612, "acc_norm_stderr": 0.025409301953225678 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8805970149253731, "acc_stderr": 0.02292879327721974, "acc_norm": 0.8805970149253731, "acc_norm_stderr": 0.02292879327721974 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.92, "acc_stderr": 0.027265992434429093, "acc_norm": 0.92, "acc_norm_stderr": 0.027265992434429093 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.33047735618115054, "mc1_stderr": 0.016466769613698296, "mc2": 0.4799587332250507, "mc2_stderr": 0.014270049627097015 }, "harness|winogrande|5": { "acc": 0.8208366219415943, "acc_stderr": 0.010777949156047992 }, "harness|gsm8k|5": { "acc": 0.45943896891584535, "acc_stderr": 0.013727093010429785 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_tokyotech-llm__Swallow-70b-instruct-hf
[ "region:us" ]
2023-12-30T17:34:19+00:00
{"pretty_name": "Evaluation run of tokyotech-llm/Swallow-70b-instruct-hf", "dataset_summary": "Dataset automatically created during the evaluation run of model [tokyotech-llm/Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_tokyotech-llm__Swallow-70b-instruct-hf\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T17:31:51.560670](https://huggingface.co/datasets/open-llm-leaderboard/details_tokyotech-llm__Swallow-70b-instruct-hf/blob/main/results_2023-12-30T17-31-51.560670.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.668542166197933,\n \"acc_stderr\": 0.031317102785635216,\n \"acc_norm\": 0.6737570900410808,\n \"acc_norm_stderr\": 0.03193599243400287,\n \"mc1\": 0.33047735618115054,\n \"mc1_stderr\": 0.016466769613698296,\n \"mc2\": 0.4799587332250507,\n \"mc2_stderr\": 0.014270049627097015\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6151877133105802,\n \"acc_stderr\": 0.014218371065251104,\n \"acc_norm\": 0.6621160409556314,\n \"acc_norm_stderr\": 0.013822047922283509\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6474805815574587,\n \"acc_stderr\": 0.004767782256040988,\n \"acc_norm\": 0.8514240191196972,\n \"acc_norm_stderr\": 0.0035494312479073657\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.743421052631579,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.743421052631579,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7169811320754716,\n \"acc_stderr\": 0.027724236492700918,\n \"acc_norm\": 0.7169811320754716,\n \"acc_norm_stderr\": 0.027724236492700918\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7916666666666666,\n \"acc_stderr\": 0.033961162058453336,\n \"acc_norm\": 0.7916666666666666,\n \"acc_norm_stderr\": 0.033961162058453336\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n 
\"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663434,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663434\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421296,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421296\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6297872340425532,\n \"acc_stderr\": 0.03156564682236785,\n \"acc_norm\": 0.6297872340425532,\n \"acc_norm_stderr\": 0.03156564682236785\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.046570472605949625,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.046570472605949625\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.040824829046386284,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.040824829046386284\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.02548718714785938,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.02548718714785938\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 
0.4444444444444444,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083522,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083522\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909281,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909281\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8303030303030303,\n \"acc_stderr\": 0.029311188674983127,\n \"acc_norm\": 0.8303030303030303,\n \"acc_norm_stderr\": 0.029311188674983127\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8484848484848485,\n \"acc_stderr\": 0.025545650426603617,\n \"acc_norm\": 0.8484848484848485,\n \"acc_norm_stderr\": 0.025545650426603617\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.020986854593289708,\n \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.020986854593289708\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6897435897435897,\n \"acc_stderr\": 0.02345467488940429,\n \"acc_norm\": 0.6897435897435897,\n \"acc_norm_stderr\": 0.02345467488940429\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.02956070739246572,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.02956070739246572\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.7352941176470589,\n \"acc_stderr\": 0.02865749128507199,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02865749128507199\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.47019867549668876,\n \"acc_stderr\": 0.040752249922169775,\n \"acc_norm\": 0.47019867549668876,\n \"acc_norm_stderr\": 0.040752249922169775\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8807339449541285,\n \"acc_stderr\": 0.01389572929258896,\n \"acc_norm\": 0.8807339449541285,\n \"acc_norm_stderr\": 0.01389572929258896\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8774509803921569,\n \"acc_stderr\": 0.023015389732458265,\n \"acc_norm\": 0.8774509803921569,\n \"acc_norm_stderr\": 0.023015389732458265\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8607594936708861,\n \"acc_stderr\": 0.0225355263526927,\n \"acc_norm\": 0.8607594936708861,\n \"acc_norm_stderr\": 0.0225355263526927\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7309417040358744,\n \"acc_stderr\": 0.029763779406874965,\n \"acc_norm\": 0.7309417040358744,\n \"acc_norm_stderr\": 0.029763779406874965\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.0364129708131373,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.0364129708131373\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8512396694214877,\n \"acc_stderr\": 0.03248470083807194,\n \"acc_norm\": 0.8512396694214877,\n \"acc_norm_stderr\": 0.03248470083807194\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n 
},\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.039166677628225836,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.039166677628225836\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092368,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092368\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8378033205619413,\n \"acc_stderr\": 0.013182222616720887,\n \"acc_norm\": 0.8378033205619413,\n \"acc_norm_stderr\": 0.013182222616720887\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.32513966480446926,\n \"acc_stderr\": 0.01566654278505355,\n \"acc_norm\": 0.32513966480446926,\n \"acc_norm_stderr\": 0.01566654278505355\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7549019607843137,\n \"acc_stderr\": 0.024630048979824775,\n \"acc_norm\": 0.7549019607843137,\n \"acc_norm_stderr\": 0.024630048979824775\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.02531176597542612,\n \"acc_norm\": 0.7266881028938906,\n \"acc_norm_stderr\": 0.02531176597542612\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7746913580246914,\n \"acc_stderr\": 0.02324620264781975,\n \"acc_norm\": 0.7746913580246914,\n \"acc_norm_stderr\": 0.02324620264781975\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5319148936170213,\n \"acc_stderr\": 0.029766675075873873,\n \"acc_norm\": 0.5319148936170213,\n \"acc_norm_stderr\": 0.029766675075873873\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5208604954367666,\n \"acc_stderr\": 0.012759117066518005,\n \"acc_norm\": 0.5208604954367666,\n \"acc_norm_stderr\": 0.012759117066518005\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6911764705882353,\n \"acc_stderr\": 0.02806499816704009,\n \"acc_norm\": 0.6911764705882353,\n \"acc_norm_stderr\": 0.02806499816704009\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.018185218954318086,\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.018185218954318086\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.04013964554072776,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.04013964554072776\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8040816326530612,\n \"acc_stderr\": 0.025409301953225678,\n \"acc_norm\": 0.8040816326530612,\n \"acc_norm_stderr\": 0.025409301953225678\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8805970149253731,\n \"acc_stderr\": 0.02292879327721974,\n \"acc_norm\": 0.8805970149253731,\n \"acc_norm_stderr\": 0.02292879327721974\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.92,\n \"acc_stderr\": 0.027265992434429093,\n \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.027265992434429093\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 
0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.33047735618115054,\n \"mc1_stderr\": 0.016466769613698296,\n \"mc2\": 0.4799587332250507,\n \"mc2_stderr\": 0.014270049627097015\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8208366219415943,\n \"acc_stderr\": 0.010777949156047992\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45943896891584535,\n \"acc_stderr\": 0.013727093010429785\n }\n}\n```", "repo_url": "https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|arc:challenge|25_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|gsm8k|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hellaswag|10_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-31-51.560670.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-31-51.560670.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-31-51.560670.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-31-51.560670.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-31-51.560670.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-31-51.560670.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T17-31-51.560670.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T17-31-51.560670.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["**/details_harness|winogrande|5_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T17-31-51.560670.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T17_31_51.560670", "path": ["results_2023-12-30T17-31-51.560670.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T17-31-51.560670.parquet"]}]}]}
2023-12-30T17:34:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of tokyotech-llm/Swallow-70b-instruct-hf

Dataset automatically created during the evaluation run of model tokyotech-llm/Swallow-70b-instruct-hf on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T17:31:51.560670 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
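The card's "To load the details from a run" instruction can be sketched with the Hugging Face `datasets` library. The repo id below follows the Open LLM Leaderboard naming convention and is an assumption, not confirmed by this card; adjust it to the actual repo hosting these details. The config names (e.g. `harness_winogrande_5`) and the `latest` split come from the configuration list above.

```python
# Minimal sketch of loading one task's per-sample details for this evaluation
# run. REPO_ID is an assumed Open LLM Leaderboard details repo, not taken from
# the card itself -- verify it on the Hub before use.
REPO_ID = "open-llm-leaderboard/details_tokyotech-llm__Swallow-70b-instruct-hf"


def load_eval_details(config_name: str = "harness_winogrande_5",
                      split: str = "latest"):
    """Fetch the details for one evaluated task (downloads from the Hub)."""
    from datasets import load_dataset  # third-party: pip install datasets
    return load_dataset(REPO_ID, config_name, split=split)


# Example (requires network access and the `datasets` package):
#     details = load_eval_details("harness_gsm8k_5")
#     print(details[0])
```

Each timestamped split (here `2023_12_30T17_31_51.560670`) preserves one run; `latest` always resolves to the most recent one.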
[ "# Dataset Card for Evaluation run of tokyotech-llm/Swallow-70b-instruct-hf\n\n\n\nDataset automatically created during the evaluation run of model tokyotech-llm/Swallow-70b-instruct-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T17:31:51.560670(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of tokyotech-llm/Swallow-70b-instruct-hf\n\n\n\nDataset automatically created during the evaluation run of model tokyotech-llm/Swallow-70b-instruct-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T17:31:51.560670(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 197, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of tokyotech-llm/Swallow-70b-instruct-hf\n\n\n\nDataset automatically created during the evaluation run of model tokyotech-llm/Swallow-70b-instruct-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T17:31:51.560670(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
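The card above says "To load the details from a run, you can for instance do the following:" but the snippet itself was stripped from this copy. A minimal sketch, assuming the leaderboard's usual `details_<org>__<model>` repo-naming convention (the exact repo id for this model is an assumption based on that pattern, not taken from the card):

```python
# Build the details-repo id following the Open LLM Leaderboard convention
# "open-llm-leaderboard/details_<org>__<model>". The id below is inferred
# from that pattern, not quoted from this card.
org, model = "tokyotech-llm", "Swallow-70b-instruct-hf"
repo_id = f"open-llm-leaderboard/details_{org}__{model}"
print(repo_id)

# To actually load one configuration (requires the `datasets` package and
# network access):
# from datasets import load_dataset
# data = load_dataset(repo_id, "harness_winogrande_5", split="train")
```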
28352f4f6df31fdd06305d6b08d12c5e7c71499a
# Dataset Card for [code2test] ## Dataset Description ### Dataset Summary This dataset is designed to generate unit tests from provided Java source code. ## Dataset Creation 1. Gather all Java projects from GitHub that have more than 5 stars. 2. Extract code-test pairs from all these projects. 3. Remove duplicate data. 4. Clean the data, including the removal of any copyright material. 5. Identify and eliminate tests that contain test smells. 6. Transform the data into an instruction-based dataset. 7. Divide the dataset into training and testing segments.
zzzghttt/code2test
[ "license:apache-2.0", "region:us" ]
2023-12-30T17:36:43+00:00
{"license": "apache-2.0"}
2024-01-06T11:46:55+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# Dataset Card for [code2test] ## Dataset Description ### Dataset Summary This dataset is designed to generate unit tests from provided Java source code. ## Dataset Creation 1. Gather all Java projects from GitHub that have more than 5 stars. 2. Extract code-test pairs from all these projects. 3. Remove duplicate data. 4. Clean the data, including the removal of any copyright material. 5. Identify and eliminate tests that contain test smells. 6. Transform the data into an instruction-based dataset. 7. Divide the dataset into training and testing segments.
[ "# Dataset Card for [code2test]", "## Dataset Description", "### Dataset Summary\n\nThis dataset is designed to generate unit tests from provided Java source code.", "## Dataset Creation\n\n1. Gather all Java projects from GitHub that have more than 5 stars.\n2. Extract code-test pairs from all these projects.\n3. Remove duplicate data.\n4. Clean the data, including the removal of any copyright material.\n5. Identify and eliminate tests that contain test smells.\n6. Transform the data into an instruction-based dataset.\n7. Divide the dataset into training and testing segments." ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# Dataset Card for [code2test]", "## Dataset Description", "### Dataset Summary\n\nThis dataset is designed to generate unit tests from provided Java source code.", "## Dataset Creation\n\n1. Gather all Java projects from GitHub that have more than 5 stars.\n2. Extract code-test pairs from all these projects.\n3. Remove duplicate data.\n4. Clean the data, including the removal of any copyright material.\n5. Identify and eliminate tests that contain test smells.\n6. Transform the data into an instruction-based dataset.\n7. Divide the dataset into training and testing segments." ]
[ 14, 10, 4, 21, 96 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n# Dataset Card for [code2test]## Dataset Description### Dataset Summary\n\nThis dataset is designed to generate unit tests from provided Java source code.## Dataset Creation\n\n1. Gather all Java projects from GitHub that have more than 5 stars.\n2. Extract code-test pairs from all these projects.\n3. Remove duplicate data.\n4. Clean the data, including the removal of any copyright material.\n5. Identify and eliminate tests that contain test smells.\n6. Transform the data into an instruction-based dataset.\n7. Divide the dataset into training and testing segments." ]
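Step 6 of the pipeline above ("Transform the data into an instruction-based dataset") can be sketched as a simple record transformation. The field names and instruction wording here are hypothetical illustrations, not the dataset's actual schema:

```python
# Hypothetical sketch of turning a code-test pair into an instruction record.
# Field names ("instruction", "input", "output") are assumptions for
# illustration only.
def to_instruction_record(java_source: str, java_test: str) -> dict:
    return {
        "instruction": "Generate a JUnit test for the following Java source code.",
        "input": java_source,
        "output": java_test,
    }

record = to_instruction_record(
    "public int add(int a, int b) { return a + b; }",
    "@Test public void testAdd() { assertEquals(3, add(1, 2)); }",
)
print(sorted(record))
```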
8a529418240ccf1be1ab2520ba4661894087f189
###Instruction: 1) You're an empathy therapist who helps with addiction issues and encourages healthy coping. 2) Refer to professionals as needed. 3) Your primary function is to reply by identifying, understanding, and challenging the user's cognitive distortions and unhealthy addiction to drugs & alcohol. 4) Keep the response short and simple (like the assistant examples below) and much more human-like. 5) Try to use a strategy like {strategy} ###input from user: {Conversation}
Ram07/Emp-dialog-w-new-instruct-1
[ "license:mit", "region:us" ]
2023-12-30T17:38:57+00:00
{"license": "mit"}
2023-12-30T17:42:28+00:00
[]
[]
TAGS #license-mit #region-us
###Instruction: 1) You're an empathy therapist who helps with addiction issues and encourages healthy coping. 2) Refer to professionals as needed. 3) Your primary function is to reply by identifying, understanding, and challenging the user's cognitive distortions and unhealthy addiction to drugs & alcohol. 4) Keep the response short and simple (like the assistant examples below) and much more human-like. 5) Try to use a strategy like {strategy} ###input from user: {Conversation}
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
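The card above is a prompt template with `{strategy}` and `{Conversation}` placeholders. Filling it can be sketched with `str.format`; the strategy name and conversation text below are made-up examples, and the template is abbreviated:

```python
# Sketch: filling the card's {strategy} and {Conversation} placeholders.
# The template is abbreviated and the example values are invented.
template = (
    "###Instruction: ... 5) Try to use a strategy like {strategy} "
    "###input from user: {Conversation}"
)
prompt = template.format(
    strategy="motivational interviewing",
    Conversation="I keep relapsing every weekend.",
)
print(prompt)
```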
afb653587c4c901a5d2a18eba04e538f5abeaf62
# Dataset Card for "cszs_fr_en" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ky552/cszs_fr_en
[ "region:us" ]
2023-12-30T17:40:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "correct_audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "correct_transcription", "dtype": "string"}, {"name": "correct_file", "dtype": "string"}, {"name": "wrong_audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "wrong_transcription", "dtype": "string"}, {"name": "wrong_file", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25161014072.682, "num_examples": 105241}, {"name": "dev", "num_bytes": 3494489553.808, "num_examples": 14244}, {"name": "test", "num_bytes": 3315850038.204, "num_examples": 14081}], "download_size": 31574494786, "dataset_size": 31971353664.693996}}
2023-12-30T19:01:38+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cszs_fr_en" More Information needed
[ "# Dataset Card for \"cszs_fr_en\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cszs_fr_en\"\n\nMore Information needed" ]
[ 6, 17 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cszs_fr_en\"\n\nMore Information needed" ]
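The split metadata published in the cszs_fr_en card above can be sanity-checked in a few lines: the per-split `num_bytes` values should sum (within floating-point error) to the reported `dataset_size`, and the `num_examples` values give the total example count:

```python
import math

# Per-split (num_bytes, num_examples) values copied from the card's metadata.
splits = {
    "train": (25161014072.682, 105241),
    "dev": (3494489553.808, 14244),
    "test": (3315850038.204, 14081),
}
total_bytes = sum(b for b, _ in splits.values())
total_examples = sum(n for _, n in splits.values())

# The card reports dataset_size = 31971353664.693996; the split bytes should
# agree with it up to float rounding.
assert math.isclose(total_bytes, 31971353664.693996, rel_tol=1e-9)
print(total_examples)
```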
540da5822f07209d3f309a72a2275118c9c93be8
# Dataset Card for Evaluation run of nlpguy/ColorShadow-7B-v2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [nlpguy/ColorShadow-7B-v2](https://huggingface.co/nlpguy/ColorShadow-7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T17:39:38.883632](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v2/blob/main/results_2023-12-30T17-39-38.883632.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6052838646829277, "acc_stderr": 0.03314929477593008, "acc_norm": 0.60872976450195, "acc_norm_stderr": 0.033818410657125174, "mc1": 0.4614443084455324, "mc1_stderr": 0.017451384104637455, "mc2": 0.6292738740677482, "mc2_stderr": 0.014921407429654188 }, "harness|arc:challenge|25": { "acc": 0.6313993174061433, "acc_stderr": 0.014097810678042198, "acc_norm": 0.6715017064846417, "acc_norm_stderr": 0.013724978465537298 }, "harness|hellaswag|10": { "acc": 0.632742481577375, "acc_stderr": 0.004810723108378217, "acc_norm": 0.8469428400716988, "acc_norm_stderr": 0.003593067633474304 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.0446196043338474, "acc_norm": 0.27, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5481481481481482, "acc_stderr": 0.04299268905480864, "acc_norm": 0.5481481481481482, "acc_norm_stderr": 0.04299268905480864 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6973684210526315, "acc_stderr": 0.0373852067611967, "acc_norm": 0.6973684210526315, "acc_norm_stderr": 0.0373852067611967 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6716981132075471, "acc_stderr": 0.02890159361241178, "acc_norm": 0.6716981132075471, "acc_norm_stderr": 0.02890159361241178 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6666666666666666, "acc_stderr": 0.03942082639927213, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.03942082639927213 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.47, "acc_stderr": 0.05016135580465919, "acc_norm": 0.47, "acc_norm_stderr": 
0.05016135580465919 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.04878608714466996, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.04878608714466996 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5276595744680851, "acc_stderr": 0.03263597118409769, "acc_norm": 0.5276595744680851, "acc_norm_stderr": 0.03263597118409769 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.43859649122807015, "acc_stderr": 0.04668000738510455, "acc_norm": 0.43859649122807015, "acc_norm_stderr": 0.04668000738510455 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.36507936507936506, "acc_stderr": 0.02479606060269995, "acc_norm": 0.36507936507936506, "acc_norm_stderr": 0.02479606060269995 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4365079365079365, "acc_stderr": 0.04435932892851466, "acc_norm": 0.4365079365079365, "acc_norm_stderr": 0.04435932892851466 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.0479372485441102, "acc_norm": 0.35, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6580645161290323, "acc_stderr": 0.02698528957655273, "acc_norm": 0.6580645161290323, "acc_norm_stderr": 0.02698528957655273 }, "harness|hendrycksTest-high_school_chemistry|5": 
{ "acc": 0.4729064039408867, "acc_stderr": 0.03512819077876106, "acc_norm": 0.4729064039408867, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7272727272727273, "acc_stderr": 0.0347769116216366, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.0347769116216366 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7373737373737373, "acc_stderr": 0.03135305009533086, "acc_norm": 0.7373737373737373, "acc_norm_stderr": 0.03135305009533086 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8549222797927462, "acc_stderr": 0.02541634309630641, "acc_norm": 0.8549222797927462, "acc_norm_stderr": 0.02541634309630641 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6153846153846154, "acc_stderr": 0.02466674491518721, "acc_norm": 0.6153846153846154, "acc_norm_stderr": 0.02466674491518721 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.32592592592592595, "acc_stderr": 0.02857834836547308, "acc_norm": 0.32592592592592595, "acc_norm_stderr": 0.02857834836547308 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6512605042016807, "acc_stderr": 0.030956636328566545, "acc_norm": 0.6512605042016807, "acc_norm_stderr": 0.030956636328566545 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8220183486238533, "acc_stderr": 0.016399436366612903, "acc_norm": 0.8220183486238533, "acc_norm_stderr": 0.016399436366612903 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4675925925925926, "acc_stderr": 0.03402801581358966, "acc_norm": 0.4675925925925926, 
"acc_norm_stderr": 0.03402801581358966 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7450980392156863, "acc_stderr": 0.03058759135160425, "acc_norm": 0.7450980392156863, "acc_norm_stderr": 0.03058759135160425 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7637130801687764, "acc_stderr": 0.027652153144159263, "acc_norm": 0.7637130801687764, "acc_norm_stderr": 0.027652153144159263 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6860986547085202, "acc_stderr": 0.031146796482972465, "acc_norm": 0.6860986547085202, "acc_norm_stderr": 0.031146796482972465 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.732824427480916, "acc_stderr": 0.03880848301082394, "acc_norm": 0.732824427480916, "acc_norm_stderr": 0.03880848301082394 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8181818181818182, "acc_stderr": 0.03520893951097652, "acc_norm": 0.8181818181818182, "acc_norm_stderr": 0.03520893951097652 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7592592592592593, "acc_stderr": 0.04133119440243839, "acc_norm": 0.7592592592592593, "acc_norm_stderr": 0.04133119440243839 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7239263803680982, "acc_stderr": 0.035123852837050475, "acc_norm": 0.7239263803680982, "acc_norm_stderr": 0.035123852837050475 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.047268355537191, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.047268355537191 }, "harness|hendrycksTest-management|5": { "acc": 0.7281553398058253, "acc_stderr": 0.044052680241409216, "acc_norm": 0.7281553398058253, "acc_norm_stderr": 0.044052680241409216 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8461538461538461, "acc_stderr": 0.023636873317489288, "acc_norm": 0.8461538461538461, "acc_norm_stderr": 0.023636873317489288 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.67, "acc_stderr": 0.04725815626252609, "acc_norm": 0.67, "acc_norm_stderr": 
0.04725815626252609 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7943805874840357, "acc_stderr": 0.01445250045678583, "acc_norm": 0.7943805874840357, "acc_norm_stderr": 0.01445250045678583 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6676300578034682, "acc_stderr": 0.02536116874968822, "acc_norm": 0.6676300578034682, "acc_norm_stderr": 0.02536116874968822 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.30726256983240224, "acc_stderr": 0.015430158846469606, "acc_norm": 0.30726256983240224, "acc_norm_stderr": 0.015430158846469606 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6666666666666666, "acc_stderr": 0.026992544339297247, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.026992544339297247 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6784565916398714, "acc_stderr": 0.026527724079528872, "acc_norm": 0.6784565916398714, "acc_norm_stderr": 0.026527724079528872 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.654320987654321, "acc_stderr": 0.02646248777700187, "acc_norm": 0.654320987654321, "acc_norm_stderr": 0.02646248777700187 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4716312056737589, "acc_stderr": 0.029779450957303062, "acc_norm": 0.4716312056737589, "acc_norm_stderr": 0.029779450957303062 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.44328552803129073, "acc_stderr": 0.01268781841959992, "acc_norm": 0.44328552803129073, "acc_norm_stderr": 0.01268781841959992 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6066176470588235, "acc_stderr": 0.029674288281311155, "acc_norm": 0.6066176470588235, "acc_norm_stderr": 0.029674288281311155 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6111111111111112, "acc_stderr": 0.019722058939618068, "acc_norm": 0.6111111111111112, "acc_norm_stderr": 0.019722058939618068 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6727272727272727, "acc_stderr": 0.0449429086625209, "acc_norm": 0.6727272727272727, 
"acc_norm_stderr": 0.0449429086625209 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7020408163265306, "acc_stderr": 0.029279567411065684, "acc_norm": 0.7020408163265306, "acc_norm_stderr": 0.029279567411065684 }, "harness|hendrycksTest-sociology|5": { "acc": 0.572139303482587, "acc_stderr": 0.03498541988407795, "acc_norm": 0.572139303482587, "acc_norm_stderr": 0.03498541988407795 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.040201512610368466, "acc_norm": 0.8, "acc_norm_stderr": 0.040201512610368466 }, "harness|hendrycksTest-virology|5": { "acc": 0.4879518072289157, "acc_stderr": 0.0389136449583582, "acc_norm": 0.4879518072289157, "acc_norm_stderr": 0.0389136449583582 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.02991312723236804, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.02991312723236804 }, "harness|truthfulqa:mc|0": { "mc1": 0.4614443084455324, "mc1_stderr": 0.017451384104637455, "mc2": 0.6292738740677482, "mc2_stderr": 0.014921407429654188 }, "harness|winogrande|5": { "acc": 0.7884767166535123, "acc_stderr": 0.011477747684223188 }, "harness|gsm8k|5": { "acc": 0.47308567096285065, "acc_stderr": 0.013752517189717466 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v2
[ "region:us" ]
2023-12-30T17:41:57+00:00
{"pretty_name": "Evaluation run of nlpguy/ColorShadow-7B-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [nlpguy/ColorShadow-7B-v2](https://huggingface.co/nlpguy/ColorShadow-7B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T17:39:38.883632](https://huggingface.co/datasets/open-llm-leaderboard/details_nlpguy__ColorShadow-7B-v2/blob/main/results_2023-12-30T17-39-38.883632.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6052838646829277,\n \"acc_stderr\": 0.03314929477593008,\n \"acc_norm\": 0.60872976450195,\n \"acc_norm_stderr\": 0.033818410657125174,\n \"mc1\": 0.4614443084455324,\n \"mc1_stderr\": 0.017451384104637455,\n \"mc2\": 0.6292738740677482,\n \"mc2_stderr\": 0.014921407429654188\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6313993174061433,\n \"acc_stderr\": 0.014097810678042198,\n \"acc_norm\": 0.6715017064846417,\n \"acc_norm_stderr\": 0.013724978465537298\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.632742481577375,\n \"acc_stderr\": 0.004810723108378217,\n \"acc_norm\": 0.8469428400716988,\n \"acc_norm_stderr\": 0.003593067633474304\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5481481481481482,\n \"acc_stderr\": 0.04299268905480864,\n \"acc_norm\": 0.5481481481481482,\n \"acc_norm_stderr\": 0.04299268905480864\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.0373852067611967,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.0373852067611967\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03942082639927213,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03942082639927213\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n 
\"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36507936507936506,\n \"acc_stderr\": 0.02479606060269995,\n \"acc_norm\": 0.36507936507936506,\n \"acc_norm_stderr\": 0.02479606060269995\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 
0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6580645161290323,\n \"acc_stderr\": 0.02698528957655273,\n \"acc_norm\": 0.6580645161290323,\n \"acc_norm_stderr\": 0.02698528957655273\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7373737373737373,\n \"acc_stderr\": 0.03135305009533086,\n \"acc_norm\": 0.7373737373737373,\n \"acc_norm_stderr\": 0.03135305009533086\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.02541634309630641,\n \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.02541634309630641\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6153846153846154,\n \"acc_stderr\": 0.02466674491518721,\n \"acc_norm\": 0.6153846153846154,\n \"acc_norm_stderr\": 0.02466674491518721\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.32592592592592595,\n \"acc_stderr\": 0.02857834836547308,\n \"acc_norm\": 0.32592592592592595,\n \"acc_norm_stderr\": 0.02857834836547308\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566545,\n \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566545\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8220183486238533,\n \"acc_stderr\": 0.016399436366612903,\n \"acc_norm\": 0.8220183486238533,\n \"acc_norm_stderr\": 0.016399436366612903\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.03058759135160425,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.03058759135160425\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.03880848301082394,\n \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.03880848301082394\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097652,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097652\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n 
\"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n \"acc_stderr\": 0.023636873317489288,\n \"acc_norm\": 0.8461538461538461,\n \"acc_norm_stderr\": 0.023636873317489288\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7943805874840357,\n \"acc_stderr\": 0.01445250045678583,\n \"acc_norm\": 0.7943805874840357,\n \"acc_norm_stderr\": 0.01445250045678583\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6676300578034682,\n \"acc_stderr\": 0.02536116874968822,\n \"acc_norm\": 0.6676300578034682,\n \"acc_norm_stderr\": 0.02536116874968822\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30726256983240224,\n \"acc_stderr\": 0.015430158846469606,\n \"acc_norm\": 0.30726256983240224,\n \"acc_norm_stderr\": 0.015430158846469606\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.026992544339297247,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.026992544339297247\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6784565916398714,\n \"acc_stderr\": 0.026527724079528872,\n 
\"acc_norm\": 0.6784565916398714,\n \"acc_norm_stderr\": 0.026527724079528872\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.654320987654321,\n \"acc_stderr\": 0.02646248777700187,\n \"acc_norm\": 0.654320987654321,\n \"acc_norm_stderr\": 0.02646248777700187\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44328552803129073,\n \"acc_stderr\": 0.01268781841959992,\n \"acc_norm\": 0.44328552803129073,\n \"acc_norm_stderr\": 0.01268781841959992\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6111111111111112,\n \"acc_stderr\": 0.019722058939618068,\n \"acc_norm\": 0.6111111111111112,\n \"acc_norm_stderr\": 0.019722058939618068\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7020408163265306,\n \"acc_stderr\": 0.029279567411065684,\n \"acc_norm\": 0.7020408163265306,\n \"acc_norm_stderr\": 0.029279567411065684\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.572139303482587,\n \"acc_stderr\": 0.03498541988407795,\n \"acc_norm\": 0.572139303482587,\n \"acc_norm_stderr\": 0.03498541988407795\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.040201512610368466,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.040201512610368466\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 
0.0389136449583582,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.0389136449583582\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.02991312723236804,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.02991312723236804\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4614443084455324,\n \"mc1_stderr\": 0.017451384104637455,\n \"mc2\": 0.6292738740677482,\n \"mc2_stderr\": 0.014921407429654188\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7884767166535123,\n \"acc_stderr\": 0.011477747684223188\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.47308567096285065,\n \"acc_stderr\": 0.013752517189717466\n }\n}\n```", "repo_url": "https://huggingface.co/nlpguy/ColorShadow-7B-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|arc:challenge|25_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|gsm8k|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hellaswag|10_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-39-38.883632.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-39-38.883632.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-39-38.883632.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-39-38.883632.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-39-38.883632.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-39-38.883632.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T17-39-38.883632.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T17-39-38.883632.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["**/details_harness|winogrande|5_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T17-39-38.883632.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T17_39_38.883632", "path": ["results_2023-12-30T17-39-38.883632.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T17-39-38.883632.parquet"]}]}]}
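The metadata above maps each harness task to a config whose `data_files` carry both a timestamped split and a `latest` split pointing at parquet globs. As a rough illustration of that shape (the `latest_paths` helper is invented for this sketch, and the `configs` list is trimmed to a single entry copied from the metadata):

```python
# Sketch: resolve the parquet glob(s) behind the "latest" split of a
# named config, given a configs list shaped like the metadata above.
# `latest_paths` is a hypothetical helper, not part of any library.
def latest_paths(configs, name):
    for cfg in configs:
        if cfg["config_name"] != name:
            continue
        for entry in cfg["data_files"]:
            if entry["split"] == "latest":
                return entry["path"]
    return None

# Trimmed example mirroring one entry from the metadata above.
configs = [{
    "config_name": "harness_winogrande_5",
    "data_files": [
        {"split": "2023_12_30T17_39_38.883632",
         "path": ["**/details_harness|winogrande|5_2023-12-30T17-39-38.883632.parquet"]},
        {"split": "latest",
         "path": ["**/details_harness|winogrande|5_2023-12-30T17-39-38.883632.parquet"]},
    ],
}]

print(latest_paths(configs, "harness_winogrande_5"))
```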
2023-12-30T17:42:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of nlpguy/ColorShadow-7B-v2 Dataset automatically created during the evaluation run of model nlpguy/ColorShadow-7B-v2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T17:39:38.883632 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
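The card above says "To load the details from a run, you can for instance do the following:" but the snippet itself is not present in this copy. A minimal sketch of the naming pattern such a snippet relies on (both helper functions are invented here; the repo and config strings follow the pattern used by the analogous card later in this dump):

```python
# Hypothetical helpers illustrating the naming pattern: details repos
# are named "open-llm-leaderboard/details_<org>__<model>", and each
# task maps to a config such as "harness_winogrande_5".
def details_repo(org, model):
    return f"open-llm-leaderboard/details_{org}__{model}"

def task_config(task, num_fewshot):
    return f"harness_{task}_{num_fewshot}"

repo = details_repo("nlpguy", "ColorShadow-7B-v2")
config = task_config("winogrande", 5)
print(repo, config)

# Actually loading then needs the `datasets` library and network access:
#   from datasets import load_dataset
#   data = load_dataset(repo, config, split="train")
```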
018e84c8ba09943b7496ca8df6ebc63df46db03f
# Dataset Card for "vocalset_extract_unit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/vocalset_extract_unit
[ "region:us" ]
2023-12-30T17:55:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 50687871, "num_examples": 3612}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 50687871, "num_examples": 3612}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 75978975, "num_examples": 3612}, {"name": "audiodec_24k_320d", "num_bytes": 162197087, "num_examples": 3612}, {"name": "dac_16k", "num_bytes": 314926879, "num_examples": 3612}, {"name": "dac_24k", "num_bytes": 886781599, "num_examples": 3612}, {"name": "dac_44k", "num_bytes": 263117839, "num_examples": 3612}, {"name": "encodec_24k", "num_bytes": 38100911, 
"num_examples": 3612}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 405680543, "num_examples": 3612}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 405680543, "num_examples": 3612}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 405679007, "num_examples": 3612}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 203356319, "num_examples": 3612}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 405679007, "num_examples": 3612}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 405679007, "num_examples": 3612}, {"name": "speech_tokenizer_16k", "num_bytes": 101500127, "num_examples": 3612}], "download_size": 652611283, "dataset_size": 4175733585}}
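The `dataset_info` above records `num_bytes` and `num_examples` per codec split, and every split shares the same 3612 examples, so bytes per example is a quick proxy for how large each codec's unit sequences are. A small sketch (the two byte counts are copied from the metadata above; the dict comprehension is just illustrative):

```python
# Sketch: average bytes per example for two of the codec splits,
# using figures copied from the dataset_info metadata above.
splits = {
    "academicodec_hifi_16k_320d": 50687871,
    "dac_24k": 886781599,
}
num_examples = 3612  # same for every split in this dataset

avg = {name: nbytes / num_examples for name, nbytes in splits.items()}
for name, value in sorted(avg.items()):
    print(f"{name}: ~{value:,.0f} bytes/example")
```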
2023-12-30T17:56:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "vocalset_extract_unit" More Information needed
e1ee9c7d3f158334df8fac813d484e07ed7cd9bc
# Dataset Card for Evaluation run of L-R/LLmRA-3B-v0.1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [L-R/LLmRA-3B-v0.1](https://huggingface.co/L-R/LLmRA-3B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_L-R__LLmRA-3B-v0.1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T18:20:02.738307](https://huggingface.co/datasets/open-llm-leaderboard/details_L-R__LLmRA-3B-v0.1/blob/main/results_2023-12-30T18-20-02.738307.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.25811553677427196, "acc_stderr": 0.030831154815361403, "acc_norm": 0.25988166008114527, "acc_norm_stderr": 0.031599277199979875, "mc1": 0.30966952264381886, "mc1_stderr": 0.01618574435514491, "mc2": 0.5061902673630095, "mc2_stderr": 0.015324724908364737 }, "harness|arc:challenge|25": { "acc": 0.34897610921501704, "acc_stderr": 0.0139289334613825, "acc_norm": 0.39419795221843, "acc_norm_stderr": 0.014280522667467327 }, "harness|hellaswag|10": { "acc": 0.4502091216889066, "acc_stderr": 0.004964979120927574, "acc_norm": 0.5978888667596096, "acc_norm_stderr": 0.004893220635011781 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542128, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.18518518518518517, "acc_stderr": 0.03355677216313142, "acc_norm": 0.18518518518518517, "acc_norm_stderr": 0.03355677216313142 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.21710526315789475, "acc_stderr": 0.033550453048829226, "acc_norm": 0.21710526315789475, "acc_norm_stderr": 0.033550453048829226 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2188679245283019, "acc_stderr": 0.025447863825108614, "acc_norm": 0.2188679245283019, "acc_norm_stderr": 0.025447863825108614 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2638888888888889, "acc_stderr": 0.03685651095897532, "acc_norm": 0.2638888888888889, "acc_norm_stderr": 0.03685651095897532 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.17, "acc_stderr": 0.0377525168068637, "acc_norm": 0.17, "acc_norm_stderr": 0.0377525168068637 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, 
"acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.28, "acc_stderr": 0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.23121387283236994, "acc_stderr": 0.032147373020294696, "acc_norm": 0.23121387283236994, "acc_norm_stderr": 0.032147373020294696 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.2647058823529412, "acc_stderr": 0.04389869956808778, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.04389869956808778 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.20851063829787234, "acc_stderr": 0.026556982117838752, "acc_norm": 0.20851063829787234, "acc_norm_stderr": 0.026556982117838752 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.19298245614035087, "acc_stderr": 0.037124548537213684, "acc_norm": 0.19298245614035087, "acc_norm_stderr": 0.037124548537213684 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2413793103448276, "acc_stderr": 0.03565998174135303, "acc_norm": 0.2413793103448276, "acc_norm_stderr": 0.03565998174135303 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.2275132275132275, "acc_stderr": 0.021591269407823768, "acc_norm": 0.2275132275132275, "acc_norm_stderr": 0.021591269407823768 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.1746031746031746, "acc_stderr": 0.03395490020856113, "acc_norm": 0.1746031746031746, "acc_norm_stderr": 0.03395490020856113 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.17, "acc_stderr": 0.0377525168068637, "acc_norm": 0.17, "acc_norm_stderr": 0.0377525168068637 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.18064516129032257, "acc_stderr": 0.021886178567172548, "acc_norm": 0.18064516129032257, "acc_norm_stderr": 0.021886178567172548 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.20689655172413793, "acc_stderr": 0.028501378167893946, "acc_norm": 0.20689655172413793, "acc_norm_stderr": 0.028501378167893946 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.28484848484848485, "acc_stderr": 0.035243908445117836, "acc_norm": 0.28484848484848485, "acc_norm_stderr": 0.035243908445117836 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.17676767676767677, "acc_stderr": 0.027178752639044915, "acc_norm": 0.17676767676767677, "acc_norm_stderr": 0.027178752639044915 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.23316062176165803, "acc_stderr": 0.030516111371476008, "acc_norm": 0.23316062176165803, "acc_norm_stderr": 0.030516111371476008 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.1794871794871795, "acc_stderr": 0.019457390787681796, "acc_norm": 0.1794871794871795, "acc_norm_stderr": 0.019457390787681796 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.27037037037037037, "acc_stderr": 0.027080372815145668, "acc_norm": 0.27037037037037037, "acc_norm_stderr": 0.027080372815145668 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.2184873949579832, "acc_stderr": 0.026841514322958945, "acc_norm": 0.2184873949579832, "acc_norm_stderr": 0.026841514322958945 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2913907284768212, "acc_stderr": 0.037101857261199946, "acc_norm": 0.2913907284768212, "acc_norm_stderr": 0.037101857261199946 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.21651376146788992, "acc_stderr": 0.01765871059444314, "acc_norm": 0.21651376146788992, "acc_norm_stderr": 0.01765871059444314 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.19907407407407407, 
"acc_stderr": 0.027232298462690218, "acc_norm": 0.19907407407407407, "acc_norm_stderr": 0.027232298462690218 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.23529411764705882, "acc_stderr": 0.029771775228145638, "acc_norm": 0.23529411764705882, "acc_norm_stderr": 0.029771775228145638 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.24050632911392406, "acc_stderr": 0.027820781981149678, "acc_norm": 0.24050632911392406, "acc_norm_stderr": 0.027820781981149678 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.3094170403587444, "acc_stderr": 0.031024411740572206, "acc_norm": 0.3094170403587444, "acc_norm_stderr": 0.031024411740572206 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.25190839694656486, "acc_stderr": 0.03807387116306086, "acc_norm": 0.25190839694656486, "acc_norm_stderr": 0.03807387116306086 }, "harness|hendrycksTest-international_law|5": { "acc": 0.32231404958677684, "acc_stderr": 0.04266416363352168, "acc_norm": 0.32231404958677684, "acc_norm_stderr": 0.04266416363352168 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.28703703703703703, "acc_stderr": 0.043733130409147614, "acc_norm": 0.28703703703703703, "acc_norm_stderr": 0.043733130409147614 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.25766871165644173, "acc_stderr": 0.03436150827846917, "acc_norm": 0.25766871165644173, "acc_norm_stderr": 0.03436150827846917 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3392857142857143, "acc_stderr": 0.04493949068613539, "acc_norm": 0.3392857142857143, "acc_norm_stderr": 0.04493949068613539 }, "harness|hendrycksTest-management|5": { "acc": 0.1941747572815534, "acc_stderr": 0.039166677628225864, "acc_norm": 0.1941747572815534, "acc_norm_stderr": 0.039166677628225864 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2905982905982906, "acc_stderr": 0.029745048572674043, "acc_norm": 0.2905982905982906, "acc_norm_stderr": 0.029745048572674043 }, "harness|hendrycksTest-medical_genetics|5": { 
"acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.25925925925925924, "acc_stderr": 0.015671006009339582, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.015671006009339582 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.27167630057803466, "acc_stderr": 0.023948512905468365, "acc_norm": 0.27167630057803466, "acc_norm_stderr": 0.023948512905468365 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2435754189944134, "acc_stderr": 0.014355911964767867, "acc_norm": 0.2435754189944134, "acc_norm_stderr": 0.014355911964767867 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.2647058823529412, "acc_stderr": 0.0252616912197295, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.0252616912197295 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2861736334405145, "acc_stderr": 0.02567025924218896, "acc_norm": 0.2861736334405145, "acc_norm_stderr": 0.02567025924218896 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.2716049382716049, "acc_stderr": 0.02474862449053737, "acc_norm": 0.2716049382716049, "acc_norm_stderr": 0.02474862449053737 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2553191489361702, "acc_stderr": 0.02601199293090202, "acc_norm": 0.2553191489361702, "acc_norm_stderr": 0.02601199293090202 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2620599739243807, "acc_stderr": 0.011231552795890392, "acc_norm": 0.2620599739243807, "acc_norm_stderr": 0.011231552795890392 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.16544117647058823, "acc_stderr": 0.022571771025494767, "acc_norm": 0.16544117647058823, "acc_norm_stderr": 0.022571771025494767 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.272875816993464, "acc_stderr": 0.018020474148393577, "acc_norm": 0.272875816993464, "acc_norm_stderr": 0.018020474148393577 }, "harness|hendrycksTest-public_relations|5": { "acc": 
0.2545454545454545, "acc_stderr": 0.04172343038705383, "acc_norm": 0.2545454545454545, "acc_norm_stderr": 0.04172343038705383 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.2693877551020408, "acc_stderr": 0.02840125202902294, "acc_norm": 0.2693877551020408, "acc_norm_stderr": 0.02840125202902294 }, "harness|hendrycksTest-sociology|5": { "acc": 0.2835820895522388, "acc_stderr": 0.031871875379197966, "acc_norm": 0.2835820895522388, "acc_norm_stderr": 0.031871875379197966 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-virology|5": { "acc": 0.28313253012048195, "acc_stderr": 0.03507295431370518, "acc_norm": 0.28313253012048195, "acc_norm_stderr": 0.03507295431370518 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.3157894736842105, "acc_stderr": 0.035650796707083106, "acc_norm": 0.3157894736842105, "acc_norm_stderr": 0.035650796707083106 }, "harness|truthfulqa:mc|0": { "mc1": 0.30966952264381886, "mc1_stderr": 0.01618574435514491, "mc2": 0.5061902673630095, "mc2_stderr": 0.015324724908364737 }, "harness|winogrande|5": { "acc": 0.5943172849250198, "acc_stderr": 0.013800206336014205 }, "harness|gsm8k|5": { "acc": 0.01061410159211524, "acc_stderr": 0.002822713322387704 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. 
-->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations.
-->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
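The per-task `acc_stderr` values in the results above are consistent with the usual binomial standard error, sqrt(p * (1 - p) / n). A minimal sketch reproducing the reported Winogrande stderr from its accuracy, assuming the split has 1267 examples (the standard Winogrande validation size; the count is not stated in this card):

```python
import math

# Reported Winogrande numbers from the results block above.
acc = 0.5943172849250198
reported_stderr = 0.013800206336014205

# Assumed number of Winogrande validation examples (not stated in this card).
n = 1267

# Binomial standard error of the accuracy estimate: sqrt(p * (1 - p) / n)
estimated_stderr = math.sqrt(acc * (1 - acc) / n)

print(round(estimated_stderr, 4))  # → 0.0138
```

The estimate matches the reported 0.01380... to roughly four significant figures; the same check applies to any task in the results whose example count is known.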
open-llm-leaderboard/details_L-R__LLmRA-3B-v0.1
[ "region:us" ]
2023-12-30T18:22:20+00:00
{"pretty_name": "Evaluation run of L-R/LLmRA-3B-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [L-R/LLmRA-3B-v0.1](https://huggingface.co/L-R/LLmRA-3B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_L-R__LLmRA-3B-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T18:20:02.738307](https://huggingface.co/datasets/open-llm-leaderboard/details_L-R__LLmRA-3B-v0.1/blob/main/results_2023-12-30T18-20-02.738307.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.25811553677427196,\n \"acc_stderr\": 0.030831154815361403,\n \"acc_norm\": 0.25988166008114527,\n \"acc_norm_stderr\": 0.031599277199979875,\n \"mc1\": 0.30966952264381886,\n \"mc1_stderr\": 0.01618574435514491,\n \"mc2\": 0.5061902673630095,\n \"mc2_stderr\": 0.015324724908364737\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.34897610921501704,\n \"acc_stderr\": 0.0139289334613825,\n \"acc_norm\": 0.39419795221843,\n \"acc_norm_stderr\": 0.014280522667467327\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4502091216889066,\n \"acc_stderr\": 0.004964979120927574,\n \"acc_norm\": 0.5978888667596096,\n \"acc_norm_stderr\": 0.004893220635011781\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.18518518518518517,\n \"acc_stderr\": 0.03355677216313142,\n \"acc_norm\": 0.18518518518518517,\n \"acc_norm_stderr\": 0.03355677216313142\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21710526315789475,\n \"acc_stderr\": 0.033550453048829226,\n \"acc_norm\": 0.21710526315789475,\n \"acc_norm_stderr\": 0.033550453048829226\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2188679245283019,\n \"acc_stderr\": 0.025447863825108614,\n \"acc_norm\": 0.2188679245283019,\n \"acc_norm_stderr\": 0.025447863825108614\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2638888888888889,\n \"acc_stderr\": 0.03685651095897532,\n \"acc_norm\": 0.2638888888888889,\n \"acc_norm_stderr\": 0.03685651095897532\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.17,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.23121387283236994,\n \"acc_stderr\": 0.032147373020294696,\n \"acc_norm\": 0.23121387283236994,\n \"acc_norm_stderr\": 0.032147373020294696\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808778,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808778\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.20851063829787234,\n \"acc_stderr\": 0.026556982117838752,\n \"acc_norm\": 0.20851063829787234,\n \"acc_norm_stderr\": 0.026556982117838752\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.19298245614035087,\n \"acc_stderr\": 0.037124548537213684,\n \"acc_norm\": 0.19298245614035087,\n \"acc_norm_stderr\": 0.037124548537213684\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135303,\n \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135303\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2275132275132275,\n \"acc_stderr\": 0.021591269407823768,\n \"acc_norm\": 0.2275132275132275,\n \"acc_norm_stderr\": 0.021591269407823768\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1746031746031746,\n \"acc_stderr\": 
0.03395490020856113,\n \"acc_norm\": 0.1746031746031746,\n \"acc_norm_stderr\": 0.03395490020856113\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.17,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.18064516129032257,\n \"acc_stderr\": 0.021886178567172548,\n \"acc_norm\": 0.18064516129032257,\n \"acc_norm_stderr\": 0.021886178567172548\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.20689655172413793,\n \"acc_stderr\": 0.028501378167893946,\n \"acc_norm\": 0.20689655172413793,\n \"acc_norm_stderr\": 0.028501378167893946\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.28484848484848485,\n \"acc_stderr\": 0.035243908445117836,\n \"acc_norm\": 0.28484848484848485,\n \"acc_norm_stderr\": 0.035243908445117836\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.17676767676767677,\n \"acc_stderr\": 0.027178752639044915,\n \"acc_norm\": 0.17676767676767677,\n \"acc_norm_stderr\": 0.027178752639044915\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.23316062176165803,\n \"acc_stderr\": 0.030516111371476008,\n \"acc_norm\": 0.23316062176165803,\n \"acc_norm_stderr\": 0.030516111371476008\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.1794871794871795,\n \"acc_stderr\": 0.019457390787681796,\n \"acc_norm\": 0.1794871794871795,\n \"acc_norm_stderr\": 0.019457390787681796\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.27037037037037037,\n \"acc_stderr\": 0.027080372815145668,\n \"acc_norm\": 0.27037037037037037,\n \"acc_norm_stderr\": 0.027080372815145668\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.2184873949579832,\n \"acc_stderr\": 0.026841514322958945,\n \"acc_norm\": 0.2184873949579832,\n \"acc_norm_stderr\": 0.026841514322958945\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2913907284768212,\n \"acc_stderr\": 0.037101857261199946,\n \"acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.037101857261199946\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.21651376146788992,\n \"acc_stderr\": 0.01765871059444314,\n \"acc_norm\": 0.21651376146788992,\n \"acc_norm_stderr\": 0.01765871059444314\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.19907407407407407,\n \"acc_stderr\": 0.027232298462690218,\n \"acc_norm\": 0.19907407407407407,\n \"acc_norm_stderr\": 0.027232298462690218\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.23529411764705882,\n \"acc_stderr\": 0.029771775228145638,\n \"acc_norm\": 0.23529411764705882,\n \"acc_norm_stderr\": 0.029771775228145638\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.24050632911392406,\n \"acc_stderr\": 0.027820781981149678,\n \"acc_norm\": 0.24050632911392406,\n \"acc_norm_stderr\": 0.027820781981149678\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3094170403587444,\n \"acc_stderr\": 0.031024411740572206,\n \"acc_norm\": 0.3094170403587444,\n \"acc_norm_stderr\": 0.031024411740572206\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.25190839694656486,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.25190839694656486,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.32231404958677684,\n \"acc_stderr\": 0.04266416363352168,\n \"acc_norm\": 0.32231404958677684,\n \"acc_norm_stderr\": 0.04266416363352168\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.28703703703703703,\n \"acc_stderr\": 
0.043733130409147614,\n \"acc_norm\": 0.28703703703703703,\n \"acc_norm_stderr\": 0.043733130409147614\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.25766871165644173,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.25766871165644173,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3392857142857143,\n \"acc_stderr\": 0.04493949068613539,\n \"acc_norm\": 0.3392857142857143,\n \"acc_norm_stderr\": 0.04493949068613539\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.1941747572815534,\n \"acc_stderr\": 0.039166677628225864,\n \"acc_norm\": 0.1941747572815534,\n \"acc_norm_stderr\": 0.039166677628225864\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2905982905982906,\n \"acc_stderr\": 0.029745048572674043,\n \"acc_norm\": 0.2905982905982906,\n \"acc_norm_stderr\": 0.029745048572674043\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.015671006009339582,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.015671006009339582\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.27167630057803466,\n \"acc_stderr\": 0.023948512905468365,\n \"acc_norm\": 0.27167630057803466,\n \"acc_norm_stderr\": 0.023948512905468365\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2435754189944134,\n \"acc_stderr\": 0.014355911964767867,\n \"acc_norm\": 0.2435754189944134,\n \"acc_norm_stderr\": 0.014355911964767867\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.0252616912197295,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.0252616912197295\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2861736334405145,\n \"acc_stderr\": 
0.02567025924218896,\n \"acc_norm\": 0.2861736334405145,\n \"acc_norm_stderr\": 0.02567025924218896\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2716049382716049,\n \"acc_stderr\": 0.02474862449053737,\n \"acc_norm\": 0.2716049382716049,\n \"acc_norm_stderr\": 0.02474862449053737\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2553191489361702,\n \"acc_stderr\": 0.02601199293090202,\n \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.02601199293090202\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2620599739243807,\n \"acc_stderr\": 0.011231552795890392,\n \"acc_norm\": 0.2620599739243807,\n \"acc_norm_stderr\": 0.011231552795890392\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.16544117647058823,\n \"acc_stderr\": 0.022571771025494767,\n \"acc_norm\": 0.16544117647058823,\n \"acc_norm_stderr\": 0.022571771025494767\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.272875816993464,\n \"acc_stderr\": 0.018020474148393577,\n \"acc_norm\": 0.272875816993464,\n \"acc_norm_stderr\": 0.018020474148393577\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2545454545454545,\n \"acc_stderr\": 0.04172343038705383,\n \"acc_norm\": 0.2545454545454545,\n \"acc_norm_stderr\": 0.04172343038705383\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.2693877551020408,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.2693877551020408,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.2835820895522388,\n \"acc_stderr\": 0.031871875379197966,\n \"acc_norm\": 0.2835820895522388,\n \"acc_norm_stderr\": 0.031871875379197966\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 
0.28313253012048195,\n \"acc_stderr\": 0.03507295431370518,\n \"acc_norm\": 0.28313253012048195,\n \"acc_norm_stderr\": 0.03507295431370518\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3157894736842105,\n \"acc_stderr\": 0.035650796707083106,\n \"acc_norm\": 0.3157894736842105,\n \"acc_norm_stderr\": 0.035650796707083106\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.30966952264381886,\n \"mc1_stderr\": 0.01618574435514491,\n \"mc2\": 0.5061902673630095,\n \"mc2_stderr\": 0.015324724908364737\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5943172849250198,\n \"acc_stderr\": 0.013800206336014205\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01061410159211524,\n \"acc_stderr\": 0.002822713322387704\n }\n}\n```", "repo_url": "https://huggingface.co/L-R/LLmRA-3B-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|arc:challenge|25_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|gsm8k|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hellaswag|10_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-20-02.738307.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-20-02.738307.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-20-02.738307.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-20-02.738307.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-20-02.738307.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-20-02.738307.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T18-20-02.738307.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T18-20-02.738307.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["**/details_harness|winogrande|5_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T18-20-02.738307.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T18_20_02.738307", "path": ["results_2023-12-30T18-20-02.738307.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T18-20-02.738307.parquet"]}]}]}
2023-12-30T18:22:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of L-R/LLmRA-3B-v0.1 Dataset automatically created during the evaluation run of model L-R/LLmRA-3B-v0.1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-30T18:20:02.738307 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
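The configuration metadata above maps each config name to its data files per split, with a "latest" split always pointing at the most recent run's parquet files. As a minimal sketch (the `configs` literal below uses one real path from the metadata as an example; the helper `latest_paths` is a hypothetical name, not part of any library), the "latest" split's file paths for a given config can be picked out like this:

```python
# Sketch: pick the "latest" split's file paths for one config from
# metadata shaped like the configs listed above. The list here holds a
# single illustrative entry copied from the winogrande config.
configs = [
    {
        "config_name": "harness_winogrande_5",
        "data_files": [
            {"split": "2023_12_30T18_20_02.738307",
             "path": ["**/details_harness|winogrande|5_2023-12-30T18-20-02.738307.parquet"]},
            {"split": "latest",
             "path": ["**/details_harness|winogrande|5_2023-12-30T18-20-02.738307.parquet"]},
        ],
    },
]

def latest_paths(configs, config_name):
    """Return the file paths registered under the 'latest' split of a config."""
    for cfg in configs:
        if cfg["config_name"] == config_name:
            for entry in cfg["data_files"]:
                if entry["split"] == "latest":
                    return entry["path"]
    return []

print(latest_paths(configs, "harness_winogrande_5"))
# → ['**/details_harness|winogrande|5_2023-12-30T18-20-02.738307.parquet']
```

Because each timestamped split keeps its own entry alongside "latest", the same lookup works for any historical run by matching on the timestamp string instead of "latest".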
[ "# Dataset Card for Evaluation run of L-R/LLmRA-3B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model L-R/LLmRA-3B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T18:20:02.738307(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of L-R/LLmRA-3B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model L-R/LLmRA-3B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T18:20:02.738307(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 185, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of L-R/LLmRA-3B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model L-R/LLmRA-3B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T18:20:02.738307(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
1ebe63f3620214e24612a0734d007a6a40d04d45
# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v4 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [jeonsworld/CarbonVillain-en-10.7B-v4](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T18:31:04.687700](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4/blob/main/results_2023-12-30T18-31-04.687700.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.666631910220006, "acc_stderr": 0.031628172453272874, "acc_norm": 0.6673323282914742, "acc_norm_stderr": 0.032273428394848376, "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7195429760825822, "mc2_stderr": 0.014995726763948506 }, "harness|arc:challenge|25": { "acc": 0.6834470989761092, "acc_stderr": 0.013592431519068079, "acc_norm": 0.712457337883959, "acc_norm_stderr": 0.013226719056266125 }, "harness|hellaswag|10": { "acc": 0.7141007767377017, "acc_stderr": 0.00450918191932285, "acc_norm": 0.8847839075881299, "acc_norm_stderr": 0.0031863002304505757 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.756578947368421, "acc_stderr": 0.034923496688842384, "acc_norm": 0.756578947368421, "acc_norm_stderr": 0.034923496688842384 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03476590104304134, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03476590104304134 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956913, "acc_norm": 0.51, "acc_norm_stderr": 
0.05024183937956913 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.625531914893617, "acc_stderr": 0.03163910665367291, "acc_norm": 0.625531914893617, "acc_norm_stderr": 0.03163910665367291 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.4824561403508772, "acc_stderr": 0.04700708033551038, "acc_norm": 0.4824561403508772, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6344827586206897, "acc_stderr": 0.040131241954243856, "acc_norm": 0.6344827586206897, "acc_norm_stderr": 0.040131241954243856 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4973544973544973, "acc_stderr": 0.02575094967813039, "acc_norm": 0.4973544973544973, "acc_norm_stderr": 0.02575094967813039 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4365079365079365, "acc_stderr": 0.04435932892851466, "acc_norm": 0.4365079365079365, "acc_norm_stderr": 0.04435932892851466 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.021886178567172534, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.021886178567172534 }, 
"harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5024630541871922, "acc_stderr": 0.03517945038691063, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8121212121212121, "acc_stderr": 0.03050193405942914, "acc_norm": 0.8121212121212121, "acc_norm_stderr": 0.03050193405942914 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.021995311963644244, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.021995311963644244 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563976, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563976 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.36666666666666664, "acc_stderr": 0.029381620726465073, "acc_norm": 0.36666666666666664, "acc_norm_stderr": 0.029381620726465073 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7142857142857143, "acc_stderr": 0.029344572500634332, "acc_norm": 0.7142857142857143, "acc_norm_stderr": 0.029344572500634332 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.37748344370860926, "acc_stderr": 0.03958027231121569, "acc_norm": 0.37748344370860926, "acc_norm_stderr": 0.03958027231121569 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8458715596330275, "acc_stderr": 0.015480826865374308, "acc_norm": 0.8458715596330275, "acc_norm_stderr": 0.015480826865374308 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5740740740740741, "acc_stderr": 
0.03372343271653062, "acc_norm": 0.5740740740740741, "acc_norm_stderr": 0.03372343271653062 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8481012658227848, "acc_stderr": 0.023363878096632446, "acc_norm": 0.8481012658227848, "acc_norm_stderr": 0.023363878096632446 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.672645739910314, "acc_stderr": 0.03149384670994131, "acc_norm": 0.672645739910314, "acc_norm_stderr": 0.03149384670994131 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596915, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596915 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 }, "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.0230866350868414, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.0230866350868414 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 
0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8045977011494253, "acc_stderr": 0.014179171373424383, "acc_norm": 0.8045977011494253, "acc_norm_stderr": 0.014179171373424383 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7572254335260116, "acc_stderr": 0.023083658586984204, "acc_norm": 0.7572254335260116, "acc_norm_stderr": 0.023083658586984204 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.394413407821229, "acc_stderr": 0.01634538676210397, "acc_norm": 0.394413407821229, "acc_norm_stderr": 0.01634538676210397 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7266881028938906, "acc_stderr": 0.025311765975426122, "acc_norm": 0.7266881028938906, "acc_norm_stderr": 0.025311765975426122 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7839506172839507, "acc_stderr": 0.022899162918445806, "acc_norm": 0.7839506172839507, "acc_norm_stderr": 0.022899162918445806 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.49645390070921985, "acc_stderr": 0.02982674915328092, "acc_norm": 0.49645390070921985, "acc_norm_stderr": 0.02982674915328092 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4941329856584094, "acc_stderr": 0.012769356925216526, "acc_norm": 0.4941329856584094, "acc_norm_stderr": 0.012769356925216526 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7389705882352942, "acc_stderr": 0.026679252270103128, "acc_norm": 0.7389705882352942, "acc_norm_stderr": 0.026679252270103128 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6781045751633987, "acc_stderr": 0.018901015322093092, "acc_norm": 0.6781045751633987, "acc_norm_stderr": 0.018901015322093092 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 
0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7387755102040816, "acc_stderr": 0.028123429335142783, "acc_norm": 0.7387755102040816, "acc_norm_stderr": 0.028123429335142783 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454125, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454125 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.9, "acc_stderr": 0.030151134457776334, "acc_norm": 0.9, "acc_norm_stderr": 0.030151134457776334 }, "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598053, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598053 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 }, "harness|truthfulqa:mc|0": { "mc1": 0.5716034271725826, "mc1_stderr": 0.017323088597314747, "mc2": 0.7195429760825822, "mc2_stderr": 0.014995726763948506 }, "harness|winogrande|5": { "acc": 0.8358326756116812, "acc_stderr": 0.010410849775222789 }, "harness|gsm8k|5": { "acc": 0.6557998483699773, "acc_stderr": 0.013086800426693784 } }
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
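As a small illustration of working with the per-task metrics reported above, the sketch below scans a results dictionary for tasks whose accuracy falls under a threshold. This is an assumption-laden example, not part of any leaderboard API: the `results` dict is a hand-copied excerpt of the "Latest results" JSON on this card, and the helper name `tasks_below` is purely illustrative.

```python
# Sketch: filtering the per-task accuracies from this card's results JSON.
# `results` is a small hand-copied excerpt of the "Latest results" above;
# `tasks_below` is an illustrative helper, not a leaderboard/datasets API.
results = {
    "harness|hendrycksTest-college_mathematics|5": {"acc": 0.32},
    "harness|hendrycksTest-computer_security|5": {"acc": 0.75},
    "harness|hendrycksTest-us_foreign_policy|5": {"acc": 0.9},
    "harness|gsm8k|5": {"acc": 0.6557998483699773},
}

def tasks_below(results, threshold):
    """Return the task names whose accuracy is below `threshold`, sorted."""
    return sorted(
        name for name, metrics in results.items()
        if metrics["acc"] < threshold
    )

print(tasks_below(results, 0.5))
# -> ['harness|hendrycksTest-college_mathematics|5']
```

The same pattern applies unchanged to the full `"results"` configuration of this dataset once loaded, since it carries the same task-name-to-metrics mapping.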
open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4
[ "region:us" ]
2023-12-30T18:33:25+00:00
{"pretty_name": "Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v4", "dataset_summary": "Dataset automatically created during the evaluation run of model [jeonsworld/CarbonVillain-en-10.7B-v4](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T18:31:04.687700](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4/blob/main/results_2023-12-30T18-31-04.687700.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.666631910220006,\n \"acc_stderr\": 0.031628172453272874,\n \"acc_norm\": 0.6673323282914742,\n \"acc_norm_stderr\": 0.032273428394848376,\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7195429760825822,\n \"mc2_stderr\": 0.014995726763948506\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6834470989761092,\n \"acc_stderr\": 0.013592431519068079,\n \"acc_norm\": 0.712457337883959,\n \"acc_norm_stderr\": 0.013226719056266125\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7141007767377017,\n \"acc_stderr\": 0.00450918191932285,\n \"acc_norm\": 0.8847839075881299,\n \"acc_norm_stderr\": 0.0031863002304505757\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.756578947368421,\n \"acc_stderr\": 0.034923496688842384,\n \"acc_norm\": 0.756578947368421,\n \"acc_norm_stderr\": 0.034923496688842384\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n 
\"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4973544973544973,\n \"acc_stderr\": 0.02575094967813039,\n \"acc_norm\": 0.4973544973544973,\n \"acc_norm_stderr\": 0.02575094967813039\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 
0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8121212121212121,\n \"acc_stderr\": 0.03050193405942914,\n \"acc_norm\": 0.8121212121212121,\n \"acc_norm_stderr\": 0.03050193405942914\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.021995311963644244,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.021995311963644244\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563976,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563976\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465073,\n \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465073\n },\n 
\"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634332,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634332\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374308,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374308\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5740740740740741,\n \"acc_stderr\": 0.03372343271653062,\n \"acc_norm\": 0.5740740740740741,\n \"acc_norm_stderr\": 0.03372343271653062\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8481012658227848,\n \"acc_stderr\": 0.023363878096632446,\n \"acc_norm\": 0.8481012658227848,\n \"acc_norm_stderr\": 0.023363878096632446\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.672645739910314,\n \"acc_stderr\": 0.03149384670994131,\n \"acc_norm\": 0.672645739910314,\n \"acc_norm_stderr\": 0.03149384670994131\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596915,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596915\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n 
\"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.0230866350868414,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.0230866350868414\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8045977011494253,\n \"acc_stderr\": 0.014179171373424383,\n \"acc_norm\": 0.8045977011494253,\n \"acc_norm_stderr\": 0.014179171373424383\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7572254335260116,\n \"acc_stderr\": 0.023083658586984204,\n \"acc_norm\": 0.7572254335260116,\n \"acc_norm_stderr\": 0.023083658586984204\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.394413407821229,\n \"acc_stderr\": 0.01634538676210397,\n \"acc_norm\": 0.394413407821229,\n \"acc_norm_stderr\": 0.01634538676210397\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.025311765975426122,\n \"acc_norm\": 
0.7266881028938906,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7839506172839507,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.7839506172839507,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4941329856584094,\n \"acc_stderr\": 0.012769356925216526,\n \"acc_norm\": 0.4941329856584094,\n \"acc_norm_stderr\": 0.012769356925216526\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7389705882352942,\n \"acc_stderr\": 0.026679252270103128,\n \"acc_norm\": 0.7389705882352942,\n \"acc_norm_stderr\": 0.026679252270103128\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454125,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454125\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 
0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5716034271725826,\n \"mc1_stderr\": 0.017323088597314747,\n \"mc2\": 0.7195429760825822,\n \"mc2_stderr\": 0.014995726763948506\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8358326756116812,\n \"acc_stderr\": 0.010410849775222789\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6557998483699773,\n \"acc_stderr\": 0.013086800426693784\n }\n}\n```", "repo_url": "https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|arc:challenge|25_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|gsm8k|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hellaswag|10_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-31-04.687700.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-31-04.687700.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-31-04.687700.parquet", 
"**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-31-04.687700.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-31-04.687700.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-31-04.687700.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T18-31-04.687700.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": 
"2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": 
"2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T18-31-04.687700.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["**/details_harness|winogrande|5_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T18-31-04.687700.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T18_31_04.687700", "path": ["results_2023-12-30T18-31-04.687700.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T18-31-04.687700.parquet"]}]}]}
2023-12-30T18:33:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v4

Dataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v4 on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T18:31:04.687700 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
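The loading snippet this card refers to appears to have been lost when the text was flattened. A minimal sketch follows; the repo id is an assumption inferred from the leaderboard's `details_<org>__<model>` naming convention, and the config-name transform is inferred from the config names listed in the metadata above (e.g. `harness|truthfulqa:mc|0` appearing as `harness_truthfulqa_mc_0`), so treat both as illustrative rather than authoritative:

```python
# Sketch (assumptions noted above): derive the details-repo id and a
# configuration name, then load one configuration with `datasets`.
model = "jeonsworld/CarbonVillain-en-10.7B-v4"

# Assumed convention: "open-llm-leaderboard/details_<org>__<model>".
repo_id = "open-llm-leaderboard/details_" + model.replace("/", "__")

def config_name(task_id: str) -> str:
    # Config names replace the harness task-id separators ("|", "-", ":")
    # with underscores, matching the names in the metadata block above.
    return task_id.replace("|", "_").replace("-", "_").replace(":", "_")

print(repo_id)  # open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v4
print(config_name("harness|truthfulqa:mc|0"))  # harness_truthfulqa_mc_0

# With the `datasets` package installed, one configuration could then be
# loaded (the "latest" split always points at the newest results):
#   from datasets import load_dataset
#   data = load_dataset(repo_id, config_name("harness|winogrande|5"), split="latest")
```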
[ "# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v4\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T18:31:04.687700(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v4\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T18:31:04.687700(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 195, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of jeonsworld/CarbonVillain-en-10.7B-v4\n\n\n\nDataset automatically created during the evaluation run of model jeonsworld/CarbonVillain-en-10.7B-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T18:31:04.687700(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]" ]
66c2bb4e1e01b940c0c723f41fba48cd5dde7c7f
# Dataset Card for Evaluation run of NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NeuralNovel__Mistral-7B-Instruct-v0.2-Neural-Story",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-12-30T18:54:03.241759](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Mistral-7B-Instruct-v0.2-Neural-Story/blob/main/results_2023-12-30T18-54-03.241759.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks.
You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6060791614560437, "acc_stderr": 0.033211145729547226, "acc_norm": 0.6111814809313938, "acc_norm_stderr": 0.033882148359148205, "mc1": 0.5201958384332925, "mc1_stderr": 0.017489216849737053, "mc2": 0.6689337299841565, "mc2_stderr": 0.015285957609493764 }, "harness|arc:challenge|25": { "acc": 0.591296928327645, "acc_stderr": 0.014365750345427, "acc_norm": 0.6407849829351536, "acc_norm_stderr": 0.014020224155839162 }, "harness|hellaswag|10": { "acc": 0.6589324835690101, "acc_stderr": 0.0047309913571942945, "acc_norm": 0.8396733718382793, "acc_norm_stderr": 0.0036615885079775462 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5555555555555556, "acc_stderr": 0.04292596718256981, "acc_norm": 0.5555555555555556, "acc_norm_stderr": 0.04292596718256981 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.5921052631578947, "acc_stderr": 0.03999309712777474, "acc_norm": 0.5921052631578947, "acc_norm_stderr": 0.03999309712777474 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6754716981132075, "acc_stderr": 0.02881561571343211, "acc_norm": 0.6754716981132075, "acc_norm_stderr": 0.02881561571343211 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6875, "acc_stderr": 0.038760854559127644, "acc_norm": 0.6875, "acc_norm_stderr": 0.038760854559127644 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.42, "acc_stderr": 0.04960449637488584, "acc_norm": 0.42, "acc_norm_stderr": 0.04960449637488584 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.49, "acc_stderr": 0.05024183937956914, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956914 
}, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6069364161849711, "acc_stderr": 0.0372424959581773, "acc_norm": 0.6069364161849711, "acc_norm_stderr": 0.0372424959581773 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4215686274509804, "acc_stderr": 0.04913595201274498, "acc_norm": 0.4215686274509804, "acc_norm_stderr": 0.04913595201274498 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5191489361702127, "acc_stderr": 0.03266204299064678, "acc_norm": 0.5191489361702127, "acc_norm_stderr": 0.03266204299064678 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.42105263157894735, "acc_stderr": 0.046446020912223177, "acc_norm": 0.42105263157894735, "acc_norm_stderr": 0.046446020912223177 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6206896551724138, "acc_stderr": 0.04043461861916747, "acc_norm": 0.6206896551724138, "acc_norm_stderr": 0.04043461861916747 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.36772486772486773, "acc_stderr": 0.024833839825562413, "acc_norm": 0.36772486772486773, "acc_norm_stderr": 0.024833839825562413 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.40476190476190477, "acc_stderr": 0.04390259265377562, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.04390259265377562 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6903225806451613, "acc_stderr": 0.026302774983517414, "acc_norm": 0.6903225806451613, "acc_norm_stderr": 0.026302774983517414 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 
0.5024630541871922, "acc_stderr": 0.035179450386910616, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.035179450386910616 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.65, "acc_stderr": 0.047937248544110196, "acc_norm": 0.65, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7212121212121212, "acc_stderr": 0.03501438706296781, "acc_norm": 0.7212121212121212, "acc_norm_stderr": 0.03501438706296781 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7575757575757576, "acc_stderr": 0.030532892233932022, "acc_norm": 0.7575757575757576, "acc_norm_stderr": 0.030532892233932022 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8497409326424871, "acc_stderr": 0.025787723180723882, "acc_norm": 0.8497409326424871, "acc_norm_stderr": 0.025787723180723882 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5743589743589743, "acc_stderr": 0.02506909438729653, "acc_norm": 0.5743589743589743, "acc_norm_stderr": 0.02506909438729653 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3, "acc_stderr": 0.027940457136228395, "acc_norm": 0.3, "acc_norm_stderr": 0.027940457136228395 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6428571428571429, "acc_stderr": 0.031124619309328177, "acc_norm": 0.6428571428571429, "acc_norm_stderr": 0.031124619309328177 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7944954128440367, "acc_stderr": 0.01732435232501602, "acc_norm": 0.7944954128440367, "acc_norm_stderr": 0.01732435232501602 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.48148148148148145, "acc_stderr": 0.03407632093854052, "acc_norm": 0.48148148148148145, "acc_norm_stderr": 0.03407632093854052 }, 
"harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7794117647058824, "acc_stderr": 0.02910225438967408, "acc_norm": 0.7794117647058824, "acc_norm_stderr": 0.02910225438967408 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7510548523206751, "acc_stderr": 0.028146970599422644, "acc_norm": 0.7510548523206751, "acc_norm_stderr": 0.028146970599422644 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6278026905829597, "acc_stderr": 0.032443052830087304, "acc_norm": 0.6278026905829597, "acc_norm_stderr": 0.032443052830087304 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7022900763358778, "acc_stderr": 0.040103589424622034, "acc_norm": 0.7022900763358778, "acc_norm_stderr": 0.040103589424622034 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8099173553719008, "acc_stderr": 0.03581796951709282, "acc_norm": 0.8099173553719008, "acc_norm_stderr": 0.03581796951709282 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7129629629629629, "acc_stderr": 0.04373313040914761, "acc_norm": 0.7129629629629629, "acc_norm_stderr": 0.04373313040914761 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7239263803680982, "acc_stderr": 0.035123852837050475, "acc_norm": 0.7239263803680982, "acc_norm_stderr": 0.035123852837050475 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4375, "acc_stderr": 0.04708567521880525, "acc_norm": 0.4375, "acc_norm_stderr": 0.04708567521880525 }, "harness|hendrycksTest-management|5": { "acc": 0.7184466019417476, "acc_stderr": 0.04453254836326466, "acc_norm": 0.7184466019417476, "acc_norm_stderr": 0.04453254836326466 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8504273504273504, "acc_stderr": 0.02336505149175371, "acc_norm": 0.8504273504273504, "acc_norm_stderr": 0.02336505149175371 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-miscellaneous|5": { 
"acc": 0.7816091954022989, "acc_stderr": 0.014774358319934486, "acc_norm": 0.7816091954022989, "acc_norm_stderr": 0.014774358319934486 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6734104046242775, "acc_stderr": 0.02524826477424284, "acc_norm": 0.6734104046242775, "acc_norm_stderr": 0.02524826477424284 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39888268156424583, "acc_stderr": 0.016376966142610073, "acc_norm": 0.39888268156424583, "acc_norm_stderr": 0.016376966142610073 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6862745098039216, "acc_stderr": 0.02656892101545715, "acc_norm": 0.6862745098039216, "acc_norm_stderr": 0.02656892101545715 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6816720257234726, "acc_stderr": 0.026457225067811025, "acc_norm": 0.6816720257234726, "acc_norm_stderr": 0.026457225067811025 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7006172839506173, "acc_stderr": 0.02548311560119545, "acc_norm": 0.7006172839506173, "acc_norm_stderr": 0.02548311560119545 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.44680851063829785, "acc_stderr": 0.029658235097666907, "acc_norm": 0.44680851063829785, "acc_norm_stderr": 0.029658235097666907 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4276401564537158, "acc_stderr": 0.012635799922765844, "acc_norm": 0.4276401564537158, "acc_norm_stderr": 0.012635799922765844 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6176470588235294, "acc_stderr": 0.02952009569768776, "acc_norm": 0.6176470588235294, "acc_norm_stderr": 0.02952009569768776 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6127450980392157, "acc_stderr": 0.019706875804085637, "acc_norm": 0.6127450980392157, "acc_norm_stderr": 0.019706875804085637 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7181818181818181, "acc_stderr": 0.04309118709946458, "acc_norm": 0.7181818181818181, "acc_norm_stderr": 0.04309118709946458 }, 
"harness|hendrycksTest-security_studies|5": { "acc": 0.710204081632653, "acc_stderr": 0.029043088683304328, "acc_norm": 0.710204081632653, "acc_norm_stderr": 0.029043088683304328 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7711442786069652, "acc_stderr": 0.02970528405677243, "acc_norm": 0.7711442786069652, "acc_norm_stderr": 0.02970528405677243 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.8, "acc_stderr": 0.04020151261036846, "acc_norm": 0.8, "acc_norm_stderr": 0.04020151261036846 }, "harness|hendrycksTest-virology|5": { "acc": 0.4819277108433735, "acc_stderr": 0.038899512528272166, "acc_norm": 0.4819277108433735, "acc_norm_stderr": 0.038899512528272166 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8421052631578947, "acc_stderr": 0.027966785859160893, "acc_norm": 0.8421052631578947, "acc_norm_stderr": 0.027966785859160893 }, "harness|truthfulqa:mc|0": { "mc1": 0.5201958384332925, "mc1_stderr": 0.017489216849737053, "mc2": 0.6689337299841565, "mc2_stderr": 0.015285957609493764 }, "harness|winogrande|5": { "acc": 0.7584846093133386, "acc_stderr": 0.012028983782011874 }, "harness|gsm8k|5": { "acc": 0.38286580742987114, "acc_stderr": 0.013389223491820463 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. 
--> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. 
--> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
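The "Latest results" JSON above follows one flat schema: an `"all"` entry holding the aggregate metrics, plus per-task entries keyed `harness|<task>|<n_shot>`. A minimal sketch of pulling accuracies out of that schema — it works on a trimmed inline copy of the results rather than a live `load_dataset` call, so the exact values and task subset here are illustrative:

```python
import json

# Trimmed copy of the "latest results" JSON above (same schema, fewer tasks).
results_json = """
{
  "all": {"acc": 0.6060791614560437, "acc_norm": 0.6111814809313938},
  "harness|arc:challenge|25": {"acc": 0.591296928327645, "acc_norm": 0.6407849829351536},
  "harness|hellaswag|10": {"acc": 0.6589324835690101, "acc_norm": 0.8396733718382793},
  "harness|gsm8k|5": {"acc": 0.38286580742987114}
}
"""

results = json.loads(results_json)

# "all" holds the aggregate; per-task entries are keyed "harness|<task>|<n_shot>".
aggregate_acc = results["all"]["acc"]
per_task = {
    key.split("|")[1]: metrics["acc"]
    for key, metrics in results.items()
    if key != "all"
}

print(f"aggregate acc: {aggregate_acc:.4f}")
for task, acc in sorted(per_task.items()):
    print(f"{task} acc: {acc:.4f}")
```

The same dict-walking applies unchanged to the full results file linked above, since every per-task block carries the same `acc` / `acc_stderr` (and, where defined, `acc_norm`) fields.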
open-llm-leaderboard/details_NeuralNovel__Mistral-7B-Instruct-v0.2-Neural-Story
[ "region:us" ]
2023-12-30T18:56:20+00:00
{"pretty_name": "Evaluation run of NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story", "dataset_summary": "Dataset automatically created during the evaluation run of model [NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NeuralNovel__Mistral-7B-Instruct-v0.2-Neural-Story\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T18:54:03.241759](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Mistral-7B-Instruct-v0.2-Neural-Story/blob/main/results_2023-12-30T18-54-03.241759.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6060791614560437,\n \"acc_stderr\": 0.033211145729547226,\n \"acc_norm\": 0.6111814809313938,\n \"acc_norm_stderr\": 0.033882148359148205,\n \"mc1\": 0.5201958384332925,\n \"mc1_stderr\": 0.017489216849737053,\n \"mc2\": 0.6689337299841565,\n \"mc2_stderr\": 0.015285957609493764\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.591296928327645,\n \"acc_stderr\": 0.014365750345427,\n \"acc_norm\": 0.6407849829351536,\n \"acc_norm_stderr\": 0.014020224155839162\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6589324835690101,\n \"acc_stderr\": 0.0047309913571942945,\n \"acc_norm\": 0.8396733718382793,\n \"acc_norm_stderr\": 0.0036615885079775462\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04292596718256981,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04292596718256981\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5921052631578947,\n \"acc_stderr\": 0.03999309712777474,\n \"acc_norm\": 0.5921052631578947,\n \"acc_norm_stderr\": 0.03999309712777474\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 
0.04960449637488584,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956914,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956914\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5191489361702127,\n \"acc_stderr\": 0.03266204299064678,\n \"acc_norm\": 0.5191489361702127,\n \"acc_norm_stderr\": 0.03266204299064678\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.04043461861916747,\n \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.04043461861916747\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.36772486772486773,\n \"acc_stderr\": 0.024833839825562413,\n \"acc_norm\": 0.36772486772486773,\n \"acc_norm_stderr\": 0.024833839825562413\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377562,\n 
\"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6903225806451613,\n \"acc_stderr\": 0.026302774983517414,\n \"acc_norm\": 0.6903225806451613,\n \"acc_norm_stderr\": 0.026302774983517414\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.035179450386910616,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.035179450386910616\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.03501438706296781,\n \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.03501438706296781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932022,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932022\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8497409326424871,\n \"acc_stderr\": 0.025787723180723882,\n \"acc_norm\": 0.8497409326424871,\n \"acc_norm_stderr\": 0.025787723180723882\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.02506909438729653,\n \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.02506909438729653\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.027940457136228395,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.027940457136228395\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 
0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7944954128440367,\n \"acc_stderr\": 0.01732435232501602,\n \"acc_norm\": 0.7944954128440367,\n \"acc_norm_stderr\": 0.01732435232501602\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.03407632093854052,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03407632093854052\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.02910225438967408,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02910225438967408\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7510548523206751,\n \"acc_stderr\": 0.028146970599422644,\n \"acc_norm\": 0.7510548523206751,\n \"acc_norm_stderr\": 0.028146970599422644\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6278026905829597,\n \"acc_stderr\": 0.032443052830087304,\n \"acc_norm\": 0.6278026905829597,\n \"acc_norm_stderr\": 0.032443052830087304\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7022900763358778,\n \"acc_stderr\": 0.040103589424622034,\n \"acc_norm\": 0.7022900763358778,\n \"acc_norm_stderr\": 0.040103589424622034\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.04373313040914761,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 
0.04373313040914761\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7184466019417476,\n \"acc_stderr\": 0.04453254836326466,\n \"acc_norm\": 0.7184466019417476,\n \"acc_norm_stderr\": 0.04453254836326466\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n \"acc_stderr\": 0.02336505149175371,\n \"acc_norm\": 0.8504273504273504,\n \"acc_norm_stderr\": 0.02336505149175371\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7816091954022989,\n \"acc_stderr\": 0.014774358319934486,\n \"acc_norm\": 0.7816091954022989,\n \"acc_norm_stderr\": 0.014774358319934486\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6734104046242775,\n \"acc_stderr\": 0.02524826477424284,\n \"acc_norm\": 0.6734104046242775,\n \"acc_norm_stderr\": 0.02524826477424284\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39888268156424583,\n \"acc_stderr\": 0.016376966142610073,\n \"acc_norm\": 0.39888268156424583,\n \"acc_norm_stderr\": 0.016376966142610073\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.02656892101545715,\n \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.02656892101545715\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6816720257234726,\n \"acc_stderr\": 0.026457225067811025,\n \"acc_norm\": 0.6816720257234726,\n \"acc_norm_stderr\": 0.026457225067811025\n },\n 
\"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7006172839506173,\n \"acc_stderr\": 0.02548311560119545,\n \"acc_norm\": 0.7006172839506173,\n \"acc_norm_stderr\": 0.02548311560119545\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.029658235097666907,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.029658235097666907\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4276401564537158,\n \"acc_stderr\": 0.012635799922765844,\n \"acc_norm\": 0.4276401564537158,\n \"acc_norm_stderr\": 0.012635799922765844\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.02952009569768776,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.02952009569768776\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6127450980392157,\n \"acc_stderr\": 0.019706875804085637,\n \"acc_norm\": 0.6127450980392157,\n \"acc_norm_stderr\": 0.019706875804085637\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.04309118709946458,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.04309118709946458\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7711442786069652,\n \"acc_stderr\": 0.02970528405677243,\n \"acc_norm\": 0.7711442786069652,\n \"acc_norm_stderr\": 0.02970528405677243\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n \"acc_norm_stderr\": 
0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5201958384332925,\n \"mc1_stderr\": 0.017489216849737053,\n \"mc2\": 0.6689337299841565,\n \"mc2_stderr\": 0.015285957609493764\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7584846093133386,\n \"acc_stderr\": 0.012028983782011874\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.38286580742987114,\n \"acc_stderr\": 0.013389223491820463\n }\n}\n```", "repo_url": "https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|arc:challenge|25_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|gsm8k|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hellaswag|10_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-54-03.241759.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-54-03.241759.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-54-03.241759.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-54-03.241759.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-54-03.241759.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-54-03.241759.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T18-54-03.241759.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T18-54-03.241759.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["**/details_harness|winogrande|5_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T18-54-03.241759.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_30T18_54_03.241759", "path": ["results_2023-12-30T18-54-03.241759.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T18-54-03.241759.parquet"]}]}]}
2023-12-30T18:56:53+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story

Dataset automatically created during the evaluation run of model NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following:

## Latest results

These are the latest results from run 2023-12-30T18:54:03.241759 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 203, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story\n\n\n\nDataset automatically created during the evaluation run of model NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T18:54:03.241759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
925d542debb108770fe9802f18252d8bca26e70f
# Dataset Card for "SpaRTUN" https://github.com/HLR/SpaRTUN ```bib @inproceedings{mirzaee-kordjamshidi-2022-transfer, title = "Transfer Learning with Synthetic Corpora for Spatial Role Labeling and Reasoning", author = "Mirzaee, Roshanak and Kordjamshidi, Parisa", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.emnlp-main.413", pages = "6148--6165", abstract = "", } ```
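A minimal sketch of one SpaRTUN record: the field names follow this repo's `dataset_info`, while the values below are invented for illustration:

```python
# One hypothetical SpaRTUN example; field names from the repo's dataset_info,
# values made up for illustration.
example = {
    "story": "A circle is above a square.",   # hypothetical story
    "question": "Where is the circle?",       # hypothetical question
    "q_type": "FR",                           # hypothetical question-type code
    "answer": ["above"],
    "candidate_answers": ["above", "below", "left", "right"],
}

# from datasets import load_dataset
# ds = load_dataset("tasksource/SpaRTUN")  # splits: train / dev / test
print(sorted(example))
```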
tasksource/SpaRTUN
[ "region:us" ]
2023-12-30T19:08:46+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "story", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "q_type", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "candidate_answers", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 22901745, "num_examples": 37095}, {"name": "dev", "num_bytes": 3331642, "num_examples": 5600}, {"name": "test", "num_bytes": 3371071, "num_examples": 5551}], "download_size": 2424674, "dataset_size": 29604458}}
2024-01-03T09:02:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for "SpaRTUN" URL
[ "# Dataset Card for \"SpaRTUN\"\n\nURL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"SpaRTUN\"\n\nURL" ]
[ 6, 11 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"SpaRTUN\"\n\nURL" ]
af1de4db85517821a938069ce2d72153bfae4d4d
# Dataset Card for "ReSQ" https://github.com/HLR/SpaRTUN ```bib @inproceedings{mirzaee-kordjamshidi-2022-transfer, title = "Transfer Learning with Synthetic Corpora for Spatial Role Labeling and Reasoning", author = "Mirzaee, Roshanak and Kordjamshidi, Parisa", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.emnlp-main.413", pages = "6148--6165", abstract = "", } ```
tasksource/ReSQ
[ "region:us" ]
2023-12-30T19:17:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "dev", "path": "data/dev-*"}]}], "dataset_info": {"features": [{"name": "story", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "q_type", "dtype": "string"}, {"name": "answer", "sequence": "string"}, {"name": "candidate_answers", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 388340, "num_examples": 2450}, {"name": "dev", "num_bytes": 111658, "num_examples": 663}], "download_size": 58830, "dataset_size": 499998}}
2024-01-03T09:02:50+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ReSQ" URL
[ "# Dataset Card for \"ReSQ\"\n\nURL" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ReSQ\"\n\nURL" ]
[ 6, 11 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ReSQ\"\n\nURL" ]
407c100a90f0cd78ece0e52a3ea325127da11524
This is a replacement for the folder https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert/tree/main/be. I haven't changed `input_ids`, only `phonemes`. ~95% of all transcriptions were amended where appropriate; the remaining ~5% were copied (with minor changes in notation) from the original version of the dataset. A few caveats: - My amended transcriptions are based on the outputs of two existing Belarusian G2P tools, [corpus.by](https://corpus.by/TranscriptionGenerator/?lang=en) and [bnkorpus.info](https://bnkorpus.info/fanetyka.html). They use slightly different IPA character sets, so I combined the outputs by applying the same formatting to both. - I follow these conventions to represent Belarusian sounds in IPA: [о] is `ɔ`, [э] is `ɛ` (no matter if the preceding consonant is velar or palatal); velar [л] is `ɫ`, palatal [л'] is `lʲ`, [ч] is `t͡ʂ`, [ц] is `t͡s`, plosive [ґ] is `ɡ`. - Stress marks precede stressed syllables, i.e. there may be one or more consonants between the mark and the stressed vowel. - Punctuation is (mostly) preserved in the transcriptions, and the `[UNK]` token is represented by an empty string. - Arabic and Roman numerals are transcribed as their corresponding Belarusian number words. There may be errors in gender / number / case, as the contexts are very diverse. Similarly, some abbreviations (several dozen most frequent ones) are transcribed as their corresponding full words or phrases. - Words in Russian and English are transcribed using espeak-ng. No separate processing was applied to words in other languages that incidentally occur in Belarusian texts. - Among the ~5% hard instances which I wasn't able to refine, most are proper names, alphanumeric strings, URLs, etc. Also, I didn't have enough time to disambiguate variably-stressed items, like *му́зыка* 'music' vs. *музы́ка* 'musician'; their transcriptions are also copied from the original dataset.
somerandomguyontheweb/multilingual-pl-bert-be-updated
[ "region:us" ]
2023-12-30T19:46:26+00:00
{}
2023-12-31T09:00:52+00:00
[]
[]
TAGS #region-us
This is a replacement for the folder URL I haven't changed 'input_ids', only 'phonemes'. ~95% of all transcriptions were amended where appropriate; the remaining ~5% were copied (with minor changes in notation) from the original version of the dataset. A few caveats: - My amended transcriptions are based on the outputs of two existing Belarusian G2P tools, URL and URL. They use slightly different IPA character sets, so I combined the outputs by applying the same formatting to both. - I follow these conventions to represent Belarusian sounds in IPA: [о] is 'ɔ', [э] is 'ɛ' (no matter if the preceding consonant is velar or palatal); velar [л] is 'ɫ', palatal [л'] is 'lʲ', [ч] is 't͡ʂ', [ц] is 't͡s', plosive [ґ] is 'ɡ'. - Stress marks precede stressed syllables, i.e. there may be one or more consonants between the mark and the stressed vowel. - Punctuation is (mostly) preserved in the transcriptions, and '[UNK]' token is represented by empty string. - Arabic and roman numbers are transcribed as their corresponding Belarusian numerals. There may be errors in gender / number / case, as the contexts are very diverse. Similarly, some abbreviations (several dozen most frequent ones) are transcribed as their corresponding full words or phrases. - Words in Russian and English are transcribed using espeak-ng. No separate processing was applied to words in other languages that incidentally occur in Belarusian texts. - Among the ~5% hard instances which I wasn't able to refine, most are proper names, alphanumeric strings, URLs, etc. Also, I didn't have enough time to disambiguate variably-stressed items, like *му́зыка* 'music' vs. *музы́ка* 'musician'; their transcriptions are also copied from the original dataset.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
bb36e7f1dfcc1e8b50bca29f8d69ef839719e718
# Dataset Card for "train" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ayaat/train
[ "region:us" ]
2023-12-30T19:56:12+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "Prompt", "dtype": "string"}, {"name": "Response", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 137497.5, "num_examples": 70}, {"name": "test", "num_bytes": 58927.5, "num_examples": 30}], "download_size": 26560, "dataset_size": 196425.0}}
2023-12-30T20:10:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "train" More Information needed
[ "# Dataset Card for \"train\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"train\"\n\nMore Information needed" ]
[ 6, 12 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"train\"\n\nMore Information needed" ]
257dfb8548a0426a2bb56e1cd877721ac7d0a15e
# Dataset Card for "trivia_qa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jxie/trivia_qa
[ "region:us" ]
2023-12-30T20:27:38+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 24322980, "num_examples": 61888}, {"name": "test", "num_bytes": 3213880, "num_examples": 7993}], "download_size": 15962297, "dataset_size": 27536860}}
2024-01-03T04:46:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for "trivia_qa" More Information needed
[ "# Dataset Card for \"trivia_qa\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"trivia_qa\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"trivia_qa\"\n\nMore Information needed" ]
dec9d84ec096433acf09dd97b612c5a82022fd1a
# Dataset Card for VECHR ### Dataset Summary [From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification](https://arxiv.org/pdf/2310.11878.pdf) In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainability of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case’s facts supposedly relevant for its outcome. 
### Languages English # Citation Information @inproceedings{xu-etal-2023-dissonance, title = "From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification", author = "Xu, Shanshan and T.y.s.s, Santosh and Ichim, Oana and Risini, Isabella and Plank, Barbara and Grabmair, Matthias", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.emnlp-main.594", doi = "10.18653/v1/2023.emnlp-main.594", pages = "9558--9576", abstract = "In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. 
Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome.", }
sxu/RaVE_emnlp23
[ "size_categories:n<1K", "language:en", "license:afl-3.0", "legal", "arxiv:2310.11878", "region:us" ]
2023-12-30T20:34:35+00:00
{"language": ["en"], "license": "afl-3.0", "size_categories": ["n<1K"], "tags": ["legal"]}
2023-12-30T20:37:48+00:00
[ "2310.11878" ]
[ "en" ]
TAGS #size_categories-n<1K #language-English #license-afl-3.0 #legal #arxiv-2310.11878 #region-us
# Dataset Card for VECHR ### Dataset Summary From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case’s facts supposedly relevant for its outcome. 
### Languages English @inproceedings{xu-etal-2023-dissonance, title = "From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification", author = "Xu, Shanshan and T.y.s.s, Santosh and Ichim, Oana and Risini, Isabella and Plank, Barbara and Grabmair, Matthias", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "URL doi = "10.18653/v1/URL-main.594", pages = "9558--9576", abstract = "In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome.", }
[ "# Dataset Card for VECHR", "### Dataset Summary\nFrom Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification\n\nIn legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. 
Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case’s facts supposedly relevant for its outcome.", "### Languages\nEnglish\n\n\n\n @inproceedings{xu-etal-2023-dissonance,\n title = \"From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification\",\n author = \"Xu, Shanshan and\n T.y.s.s, Santosh and\n Ichim, Oana and\n Risini, Isabella and\n Plank, Barbara and\n Grabmair, Matthias\",\n editor = \"Bouamor, Houda and\n Pino, Juan and\n Bali, Kalika\",\n booktitle = \"Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing\",\n month = dec,\n year = \"2023\",\n address = \"Singapore\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n doi = \"10.18653/v1/URL-main.594\",\n pages = \"9558--9576\",\n abstract = \"In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. 
Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome.\",\n\n\n}" ]
[ "TAGS\n#size_categories-n<1K #language-English #license-afl-3.0 #legal #arxiv-2310.11878 #region-us \n", "# Dataset Card for VECHR", "### Dataset Summary\nFrom Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification\n\nIn legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. 
Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case’s facts supposedly relevant for its outcome.", "### Languages\nEnglish\n\n\n\n @inproceedings{xu-etal-2023-dissonance,\n title = \"From Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification\",\n author = \"Xu, Shanshan and\n T.y.s.s, Santosh and\n Ichim, Oana and\n Risini, Isabella and\n Plank, Barbara and\n Grabmair, Matthias\",\n editor = \"Bouamor, Houda and\n Pino, Juan and\n Bali, Kalika\",\n booktitle = \"Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing\",\n month = dec,\n year = \"2023\",\n address = \"Singapore\",\n publisher = \"Association for Computational Linguistics\",\n url = \"URL\n doi = \"10.18653/v1/URL-main.594\",\n pages = \"9558--9576\",\n abstract = \"In legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. 
Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case{'}s facts supposedly relevant for its outcome.\",\n\n\n}" ]
[ 39, 8, 324, 501 ]
[ "passage: TAGS\n#size_categories-n<1K #language-English #license-afl-3.0 #legal #arxiv-2310.11878 #region-us \n# Dataset Card for VECHR### Dataset Summary\nFrom Dissonance to Insights: Dissecting Disagreements in Rationale Construction for Case Outcome Classification\n\nIn legal NLP, Case Outcome Classification (COC) must not only be accurate but also trustworthy and explainable. Existing work in explainable COC has been limited to annotations by a single expert. However, it is well-known that lawyers may disagree in their assessment of case facts. We hence collect a novel dataset RaVE: Rationale Variation in ECHR, which is obtained from two experts in the domain of international human rights law, for whom we observe weak agreement. We study their disagreements and build a two-level task-independent taxonomy, supplemented with COC-specific subcategories. To our knowledge, this is the first work in the legal NLP that focuses on human label variation. We quantitatively assess different taxonomy categories and find that disagreements mainly stem from underspecification of the legal context, which poses challenges given the typically limited granularity and noise in COC metadata. We further assess the explainablility of state-of-the-art COC models on RaVE and observe limited agreement between models and experts. Overall, our case study reveals hitherto underappreciated complexities in creating benchmark datasets in legal NLP that revolve around identifying aspects of a case’s facts supposedly relevant for its outcome." ]
9c26a41e5ced3f7ff251ef5cecb47fef94e08d31
# Dataset Card for "natural_questions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jxie/natural_questions
[ "region:us" ]
2023-12-30T20:35:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6059360, "num_examples": 87925}, {"name": "test", "num_bytes": 253307, "num_examples": 3610}], "download_size": 0, "dataset_size": 6312667}}
2024-01-02T02:04:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for "natural_questions" More Information needed
[ "# Dataset Card for \"natural_questions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"natural_questions\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"natural_questions\"\n\nMore Information needed" ]
86f7314659dac5d654e5ec7663d1964ac47492e8
Molecular dataset: 10,000 TYK2 inhibitors (SMILES strings) with Docking scores and Relative Binding Free Energy (dG) Dataset from paper: James Thompson, W Patrick Walters, Jianwen A Feng, Nicolas A Pabon, Hongcheng Xu, Michael Maser, Brian B Goldman, Demetri Moustakas, Molly Schmidt, Forrest York, Optimizing active learning for free energy calculations, Artificial Intelligence in the Life Sciences, Volume 2, 2022, 100050, ISSN 2667-3185, https://doi.org/10.1016/j.ailsci.2022.100050. https://www.sciencedirect.com/science/article/pii/S2667318522000204 original source: https://github.com/google-research/google-research/tree/master/al_for_fep
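A minimal sketch of what one record looks like: the column names come from this repo's `dataset_info`, while the SMILES string and numeric values below are invented for illustration (dG units assumed to be kcal/mol):

```python
# One hypothetical TYK2 record; column names from the repo's dataset_info,
# the SMILES string and values are made up for illustration.
row = {
    "Smiles": "CC(=O)Nc1ccccc1",  # hypothetical ligand SMILES
    "DockingScore": -7.2,         # hypothetical docking score
    "dG": -8.1,                   # relative binding free energy (assumed kcal/mol)
    "dGError": 0.3,               # uncertainty on dG
}

# from datasets import load_dataset
# ds = load_dataset("pvrancx/tyk2_fep")  # splits: train (8,997) / test (1,000)
print(sorted(row))
```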
pvrancx/tyk2_fep
[ "size_categories:1K<n<10K", "license:apache-2.0", "molecule", "chemistry", "smiles", "free_energy", "region:us" ]
2023-12-30T20:35:57+00:00
{"license": "apache-2.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "Smiles", "dtype": "string"}, {"name": "DockingScore", "dtype": "float64"}, {"name": "dG", "dtype": "float64"}, {"name": "dGError", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 641714, "num_examples": 8997}, {"name": "test", "num_bytes": 71163, "num_examples": 1000}], "download_size": 315048, "dataset_size": 712877}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["molecule", "chemistry", "smiles", "free_energy"]}
2023-12-30T20:58:12+00:00
[]
[]
TAGS #size_categories-1K<n<10K #license-apache-2.0 #molecule #chemistry #smiles #free_energy #region-us
Molecular dataset: 10,000 TYK2 inhibitors (SMILES strings) with Docking scores and Relative Binding Free Energy (dG) Dataset from paper: James Thompson, W Patrick Walters, Jianwen A Feng, Nicolas A Pabon, Hongcheng Xu, Michael Maser, Brian B Goldman, Demetri Moustakas, Molly Schmidt, Forrest York, Optimizing active learning for free energy calculations, Artificial Intelligence in the Life Sciences, Volume 2, 2022, 100050, ISSN 2667-3185, URL URL original source: URL
[]
[ "TAGS\n#size_categories-1K<n<10K #license-apache-2.0 #molecule #chemistry #smiles #free_energy #region-us \n" ]
[ 43 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #license-apache-2.0 #molecule #chemistry #smiles #free_energy #region-us \n" ]
7cf01279456346071d10f2641dabcda206abdeaf
# Dataset Card for Motivational Quotes This is a dataset of motivational quotes, scraped from [Goodreads](https://www.goodreads.com/quotes/). It contains more than 4000 quotes, each of them labeled with the corresponding author. ## Data overview The `quotes` subset contains the raw quotes and the corresponding authors. The `quotes_extended` subset contains the raw quotes plus a short prompt that can be used to train LLMs to generate new quotes: ```json // quotes { "quote": "“Do not fear failure but rather fear not trying.”", "author": "Roy T. Bennett" } // quotes_extended { "quote": "“Do not fear failure but rather fear not trying.”", "author": "Roy T. Bennett", "prompt": "Provide a motivational quote about resilience:" } ```
asuender/motivational-quotes
[ "task_categories:text-classification", "task_categories:text-generation", "size_categories:1K<n<10K", "language:en", "license:cc", "region:us" ]
2023-12-30T20:40:33+00:00
{"language": ["en"], "license": "cc", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "text-generation"], "configs": [{"config_name": "quotes", "data_files": "quotes.jsonl"}, {"config_name": "quotes_extended", "data_files": "quotes_extended.jsonl"}]}
2023-12-31T12:15:47+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-cc #region-us
# Dataset Card for Motivational Quotes This is a dataset of motivational quotes, scraped from Goodreads. It contains more than 4000 quotes, each of them labeled with the corresponding author. ## Data overview The 'quotes' subset contains the raw quotes and the corresponding authors. The 'quotes_extended' subset contains the raw quotes plus a short prompt that can be used to train LLMs to generate new quotes:
[ "# Dataset Card for Motivational Quotes\n\nThis is a dataset of motivational quotes, scraped from Goodreads. It contains more than 4000 quotes, each of them labeled with the corresponding author.", "## Data overview\n\nThe 'quotes' subset contains the raw quotes and the corresponding authors. The 'quotes_extended' subset contains the raw quotes plus a short prompt that can be used to train LLMs to generate new quotes:" ]
[ "TAGS\n#task_categories-text-classification #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-cc #region-us \n", "# Dataset Card for Motivational Quotes\n\nThis is a dataset of motivational quotes, scraped from Goodreads. It contains more than 4000 quotes, each of them labeled with the corresponding author.", "## Data overview\n\nThe 'quotes' subset contains the raw quotes and the corresponding authors. The 'quotes_extended' subset contains the raw quotes plus a short prompt that can be used to train LLMs to generate new quotes:" ]
[ 49, 47, 60 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text-generation #size_categories-1K<n<10K #language-English #license-cc #region-us \n# Dataset Card for Motivational Quotes\n\nThis is a dataset of motivational quotes, scraped from Goodreads. It contains more than 4000 quotes, each of them labeled with the corresponding author.## Data overview\n\nThe 'quotes' subset contains the raw quotes and the corresponding authors. The 'quotes_extended' subset contains the raw quotes plus a short prompt that can be used to train LLMs to generate new quotes:" ]
80fae759513d8b04f583877bdfef4391e2caa27b
# Dataset Card for "test_startup_advice_25k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
salma-remyx/test_startup_advice_25k
[ "region:us" ]
2023-12-30T22:51:27+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28719928, "num_examples": 24503}], "download_size": 16655838, "dataset_size": 28719928}}
2024-01-01T06:44:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_startup_advice_25k" More Information needed
[ "# Dataset Card for \"test_startup_advice_25k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_startup_advice_25k\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_startup_advice_25k\"\n\nMore Information needed" ]
4fa075e3fdde20d18321c6ecb8aba626362a38af
# Dataset Card for "ffmperative_refined_5.5k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
salma-remyx/ffmperative_refined_5.5k
[ "region:us" ]
2023-12-30T23:15:15+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3277970, "num_examples": 5565}], "download_size": 1001170, "dataset_size": 3277970}}
2023-12-30T23:15:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ffmperative_refined_5.5k" More Information needed
[ "# Dataset Card for \"ffmperative_refined_5.5k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ffmperative_refined_5.5k\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ffmperative_refined_5.5k\"\n\nMore Information needed" ]
a850ba851a0354d8c96c585a3613e490b305d813
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset is a mix of the Capybara, Open-Platypus-Commercial and Wizard-Vicuna-Unfiltered datasets. As such, it can be used for commercial purposes. These base datasets provide a strong reasoning background on multiple fields of human knowledge, and that's why I chose all of these. ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** Thermostatic - **Funded by [optional]:** Thermostatic - **Shared by [optional]:** Thermostatic - **Language(s) (NLP):** English - **License:** MIT ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** No repository yet, will provide the scripts shortly - **Paper [optional]:** No paper - **Demo [optional]:** No demo yet ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Thermostatic/flowers
[ "license:mit", "region:us" ]
2023-12-30T23:55:16+00:00
{"license": "mit"}
2023-12-31T00:00:48+00:00
[]
[]
TAGS #license-mit #region-us
# Dataset Card for Dataset Name This dataset is a mix of the Capybara, Open-Platypus-Commercial and Wizard-Vicuna-Unfiltered datasets. As such, it can be used for commercial purposes. These base datasets provide a strong reasoning background on multiple fields of human knowledge, and that's why I chose all of these. ## Dataset Details ### Dataset Description - Curated by: Thermostatic - Funded by [optional]: Thermostatic - Shared by [optional]: Thermostatic - Language(s) (NLP): English - License: MIT ### Dataset Sources [optional] - Repository: No repository yet, will provide the scripts shortly - Paper [optional]: No paper - Demo [optional]: No demo yet ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset is a mix of the Capybara, Open-Platypus-Commercial and Wizard-Vicuna-Unfiltered datasets. As such, it can be used for commercial purposes. These base datasets provide a strong reasoning background on multiple fields of human knowledge, and that's why I chose all of these.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: Thermostatic\n- Funded by [optional]: Thermostatic\n- Shared by [optional]: Thermostatic\n- Language(s) (NLP): English\n- License: MIT", "### Dataset Sources [optional]\n\n\n\n- Repository: No repository yet, will provide the scripts shortly\n- Paper [optional]: No paper\n- Demo [optional]: No demo yet", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#license-mit #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset is a mix of the Capybara, Open-Platypus-Commercial and Wizard-Vicuna-Unfiltered datasets. As such, it can be used for commercial purposes. These base datasets provide a strong reasoning background on multiple fields of human knowledge, and that's why I chose all of these.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: Thermostatic\n- Funded by [optional]: Thermostatic\n- Shared by [optional]: Thermostatic\n- Language(s) (NLP): English\n- License: MIT", "### Dataset Sources [optional]\n\n\n\n- Repository: No repository yet, will provide the scripts shortly\n- Paper [optional]: No paper\n- Demo [optional]: No demo yet", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 11, 85, 4, 48, 47, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#license-mit #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset is a mix of the Capybara, Open-Platypus-Commercial and Wizard-Vicuna-Unfiltered datasets. As such, it can be used for commercial purposes. These base datasets provide a strong reasoning background on multiple fields of human knowledge, and that's why I chose all of these.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: Thermostatic\n- Funded by [optional]: Thermostatic\n- Shared by [optional]: Thermostatic\n- Language(s) (NLP): English\n- License: MIT### Dataset Sources [optional]\n\n\n\n- Repository: No repository yet, will provide the scripts shortly\n- Paper [optional]: No paper\n- Demo [optional]: No demo yet## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
c902d51d36a02e6cb741a29c69ee0d8873cd672b
# Pseudolabel Nusantara audiobooks using Whisper Large V3 Notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/speech-to-text-semisupervised/nusantara-audiobook 1. Split based on 3 utterances using WebRTC VAD. ## how-to Download files, ```bash wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/dari-pasentran-ke-istana.gz wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/salina.gz wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/turki.gz wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/nusantara-audiobook-part1.json wget https://huggingface.co/datasets/mesolitica/nusantara-audiobook/resolve/main/nusantara-audiobook-part2.json tar -xf dari-pasentran-ke-istana.gz tar -xf turki.gz tar -xf salina.gz ```
mesolitica/nusantara-audiobook
[ "task_categories:automatic-speech-recognition", "task_categories:text-to-speech", "language:ms", "region:us" ]
2023-12-31T00:20:34+00:00
{"language": ["ms"], "task_categories": ["automatic-speech-recognition", "text-to-speech"]}
2024-01-01T04:17:47+00:00
[]
[ "ms" ]
TAGS #task_categories-automatic-speech-recognition #task_categories-text-to-speech #language-Malay (macrolanguage) #region-us
# Pseudolabel Nusantara audiobooks using Whisper Large V3 Notebooks at URL 1. Split based on 3 utterances using WebRTC VAD. ## how-to Download files,
[ "# Pseudolabel Nusantara audiobooks using Whisper Large V3\n\nNotebooks at URL\n\n1. Split based on 3 utterances using WebRTC VAD.", "## how-to\n\nDownload files," ]
[ "TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #language-Malay (macrolanguage) #region-us \n", "# Pseudolabel Nusantara audiobooks using Whisper Large V3\n\nNotebooks at URL\n\n1. Split based on 3 utterances using WebRTC VAD.", "## how-to\n\nDownload files," ]
[ 45, 34, 7 ]
[ "passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-text-to-speech #language-Malay (macrolanguage) #region-us \n# Pseudolabel Nusantara audiobooks using Whisper Large V3\n\nNotebooks at URL\n\n1. Split based on 3 utterances using WebRTC VAD.## how-to\n\nDownload files," ]
c9aa958bb26a9c89e1c0af9e10a07b5472e8efc4
<h1> <img alt="RH" src="./icon.png" style="display:inline-block; vertical-align:middle" /> Pedagogical Machine Translation (Dialect) dataset: the filtered Canadian Hansard Dataset. </h1> The Canadian [Hansard](https://www.ourcommons.ca/documentviewer/en/35-2/house/hansard-index) is an archive of parliamentary sessions in the two official languages in Canada - English and French. ## 📋 Table of Contents - [🧩 Hansard Dataset](#-hansard-dataset) - [📋 Table of Contents](#-table-of-contents) - [📖 Usage](#-usage) - [Downloading the dataset](#downloading-the-dataset) - [Dataset structure](#dataset-structure) - [Loading the dataset](#loading-the-dataset) <!--- [Evaluating](#evaluating) - [Running the baselines](#running-the-baselines) - [Word Embeddings and Pre-trained Language Models](#word-embeddings-and-pre-trained-language-models) - [Large Language Models](#large-language-models) --> - [✍️ Contributing](#️-contributing) - [📝 Citing](#-citing) - [🙏 Acknowledgements](#-acknowledgements) ## 📖 Usage ### Downloading the dataset The hansard dataset can be downloaded from [here](https://www.cs.toronto.edu/~raeidsaqur/hansard/hansard.tar.gz) or with a bash script: ```bash bash download_hansard.sh ``` ### Dataset structure The dataset is provided as csv (and parquet) files, one for each partition: `train.[csv|parquet]` and `test.csv`. We also provide a `hansard.[csv|parquet]` file that contains all examples across all splits. The splits are sized as follows: <!-- | Split | # Walls | |:-------|:---------:| | `train` | 311K | | `test` | 49K | Here is an example of the dataset's structure: ```csv ``` --> ### Loading the dataset The three partitions can be loaded the same way as any other csv file. 
For example, using Python: ```python import csv dataset = { "train": list(csv.DictReader(open("./Hansard/train.csv", "r"))), "test": list(csv.DictReader(open("./Hansard/test.csv", "r"))), } ``` However, it is likely easiest to work with the dataset using the [HuggingFace Datasets](https://huggingface.co/datasets) library: ```python # pip install datasets from datasets import load_dataset # The dataset can be used like any other HuggingFace dataset dataset = load_dataset("raeidsaqur/hansard") ``` <!-- > __Note__ --> <!-- ### Evaluating We provide a script for evaluating the performance of a model on the dataset. Before running, make sure you have installed the requirements and package: ```bash pip install -r requirements.txt pip install -e . ``` To run the evaluation script: ### Running the baselines --> ## ✍️ Contributing We welcome contributions to this repository (noticed a typo? a bug?). To propose a change: ``` git clone https://github.com/raeidsaqur/hansard cd hansard git checkout -b my-branch pip install -r requirements.txt pip install -e . ``` Once your changes are made, make sure to lint and format the code (addressing any warnings or errors): ``` isort . black . flake8 . ``` Then, submit your change as a pull request. ## 📝 Citing If you use the Canadian Hansard dataset in your work, please consider citing our paper: ``` @article{raeidsaqur2024Hansard, title = {The Canadian Hansard Dataset for Analyzing Dialect Efficiencies in Language Models}, author = {Raeid Saqur}, year = 2024, journal = {ArXiv}, url = } ``` ## 🙏 Acknowledgements The entire CSC401/2511 teaching team at the Dept. of Computer Science at the University of Toronto.
raeidsaqur/Hansard
[ "task_categories:translation", "size_categories:100K<n<1M", "language:en", "language:fr", "license:mit", "region:us" ]
2023-12-31T02:03:33+00:00
{"language": ["en", "fr"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["translation"], "pretty_name": "hansard"}
2024-01-01T01:12:00+00:00
[]
[ "en", "fr" ]
TAGS #task_categories-translation #size_categories-100K<n<1M #language-English #language-French #license-mit #region-us
<h1> <img alt="RH" src="./URL" style="display:inline-block; vertical-align:middle" /> Pedagogical Machine Translation (Dialect) dataset: the filtered Canadian Hansard Dataset. </h1> The Canadian Hansard is an archive of parliamentary sessions in the two official languages in Canada - English and French. ## Table of Contents - Hansard Dataset - Table of Contents - Usage - Downloading the dataset - Dataset structure - Loading the dataset - ️ Contributing - Citing - Acknowledgements ## Usage ### Downloading the dataset The hansard dataset can be downloaded from here or with a bash script: ### Dataset structure The dataset is provided as csv (and parquet) files, one for each partition: 'train.[csv|parquet]' and 'URL'. We also provide a 'hansard.[csv|parquet]' file that contains all examples across all splits. The splits are sized as follows: ### Loading the dataset The three partitions can be loaded the same way as any other csv file. For example, using Python: However, it is likely easiest to work with the dataset using the HuggingFace Datasets library: ## ️ Contributing We welcome contributions to this repository (noticed a typo? a bug?). To propose a change: Once your changes are made, make sure to lint and format the code (addressing any warnings or errors): Then, submit your change as a pull request. ## Citing If you use the Canadian Hansard dataset in your work, please consider citing our paper: ## Acknowledgements The entire CSC401/2511 teaching team at the Dept. of Computer Science at the University of Toronto.
[ "## Table of Contents\n\n- Hansard Dataset\n - Table of Contents\n - Usage\n - Downloading the dataset\n - Dataset structure\n - Loading the dataset\n \n - ️ Contributing\n - Citing\n - Acknowledgements", "## Usage", "### Downloading the dataset\n\nThe hansard dataset can be downloaded from here or with a bash script:", "### Dataset structure\n\nThe dataset is provided as csv (and parquet) files, one for each partition: 'train.[csv|parquet]' and 'URL'. We also provide a 'hansard.[csv|parquet]' file that contains all examples across all splits. The splits are sized as follows:", "### Loading the dataset\n\nThe three partitions can be loaded the same way as any other csv file. For example, using Python:\n\n\n\nHowever, it is likely easiest to work with the dataset using the HuggingFace Datasets library:", "## ️ Contributing\n\nWe welcome contributions to this repository (noticed a typo? a bug?). To propose a change:\n\n\n\nOnce your changes are made, make sure to lint and format the code (addressing any warnings or errors):\n\n\n\nThen, submit your change as a pull request.", "## Citing\n\nIf you use the Canadian Hansarddataset in your work, please consider citing our paper:", "## Acknowledgements\n\nThe entire CSC401/2511 teaching team at the Dept. of Computer Science at the University of Toronto." ]
[ "TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-English #language-French #license-mit #region-us \n", "## Table of Contents\n\n- Hansard Dataset\n - Table of Contents\n - Usage\n - Downloading the dataset\n - Dataset structure\n - Loading the dataset\n \n - ️ Contributing\n - Citing\n - Acknowledgements", "## Usage", "### Downloading the dataset\n\nThe hansard dataset can be downloaded from here or with a bash script:", "### Dataset structure\n\nThe dataset is provided as csv (and parquet) files, one for each partition: 'train.[csv|parquet]' and 'URL'. We also provide a 'hansard.[csv|parquet]' file that contains all examples across all splits. The splits are sized as follows:", "### Loading the dataset\n\nThe three partitions can be loaded the same way as any other csv file. For example, using Python:\n\n\n\nHowever, it is likely easiest to work with the dataset using the HuggingFace Datasets library:", "## ️ Contributing\n\nWe welcome contributions to this repository (noticed a typo? a bug?). To propose a change:\n\n\n\nOnce your changes are made, make sure to lint and format the code (addressing any warnings or errors):\n\n\n\nThen, submit your change as a pull request.", "## Citing\n\nIf you use the Canadian Hansarddataset in your work, please consider citing our paper:", "## Acknowledgements\n\nThe entire CSC401/2511 teaching team at the Dept. of Computer Science at the University of Toronto." ]
[ 42, 47, 3, 25, 82, 56, 68, 23, 29 ]
[ "passage: TAGS\n#task_categories-translation #size_categories-100K<n<1M #language-English #language-French #license-mit #region-us \n## Table of Contents\n\n- Hansard Dataset\n - Table of Contents\n - Usage\n - Downloading the dataset\n - Dataset structure\n - Loading the dataset\n \n - ️ Contributing\n - Citing\n - Acknowledgements## Usage### Downloading the dataset\n\nThe hansard dataset can be downloaded from here or with a bash script:### Dataset structure\n\nThe dataset is provided as csv (and parquet) files, one for each partition: 'train.[csv|parquet]' and 'URL'. We also provide a 'hansard.[csv|parquet]' file that contains all examples across all splits. The splits are sized as follows:### Loading the dataset\n\nThe three partitions can be loaded the same way as any other csv file. For example, using Python:\n\n\n\nHowever, it is likely easiest to work with the dataset using the HuggingFace Datasets library:## ️ Contributing\n\nWe welcome contributions to this repository (noticed a typo? a bug?). To propose a change:\n\n\n\nOnce your changes are made, make sure to lint and format the code (addressing any warnings or errors):\n\n\n\nThen, submit your change as a pull request.## Citing\n\nIf you use the Canadian Hansarddataset in your work, please consider citing our paper:## Acknowledgements\n\nThe entire CSC401/2511 teaching team at the Dept. of Computer Science at the University of Toronto." ]
fc7ca40ba290bf9a9bd346d95f988795054acdfe
# Dataset Card for QUEST ## Dataset Description - **Repository: https://github.com/google-research/language/tree/master/language/quest** - **Paper: https://arxiv.org/abs/2305.11694** - **Point of Contact: [email protected]** ### Dataset Summary We provide here the data accompanying the paper: [QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations ](https://arxiv.org/abs/2305.11694). ## Dataset Structure ### Data Instances QUEST contains 6307 training queries, 323 examples for development, and 1727 examples for testing. ### Data Fields Each examples file contains newline-separated json dictionaries with the following fields: * `query` - Paraphrased query written by annotators. * `docs` - List of relevant document titles. * `original_query` - The original query which was paraphrased. Atomic queries are enclosed by `<mark></mark>`. Augmented queries do not have this field populated. * `scores` - This field is not populated and only used when producing predictions to enable sharing the same data structure. * `metadata` - A dictionary with the following fields: * `template` - The template used to create the query. * `domain` - The domain to which the query belongs. * `fluency` - List of fluency ratings for the query. * `meaning` - List of ratings for whether the paraphrased query meaning is the same as the original query. * `naturalness` - List of naturalness ratings for the query. * `relevance_ratings` - Dictionary mapping document titles to relevance ratings for the document. * `evidence_ratings` - Dictionary mapping document titles to evidence ratings for the document. * `attributions` - Dictionary mapping a document title to its attributions. Attributions are a list of dictionaries mapping a query substring to a document substring. The document corpus is at https://storage.googleapis.com/gresearch/quest/documents.jsonl. Note that this file is quite large (899MB). The format is newline separated json dicts containing `title` and `text`. 
### Citation Information ``` @inproceedings{malaviya23expertqa, title = {QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations}, author = {Chaitanya Malaviya and Peter Shaw and Ming-Wei Chang and Kenton Lee and Kristina Toutanova}, booktitle = {ACL}, year = {2023}, url = "https://arxiv.org/abs/2305.11694" } ```
cmalaviya/quest
[ "task_categories:text-retrieval", "annotations_creators:wikipedia-sourced", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "arxiv:2305.11694", "region:us" ]
2023-12-31T03:03:27+00:00
{"annotations_creators": ["wikipedia-sourced"], "language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "pretty_name": "QUEST", "configs": [{"config_name": "main", "data_files": [{"split": "train", "path": "train.jsonl"}, {"split": "test", "path": "test.jsonl"}, {"split": "validation", "path": "val.jsonl"}]}]}
2023-12-31T03:13:28+00:00
[ "2305.11694" ]
[ "en" ]
TAGS #task_categories-text-retrieval #annotations_creators-wikipedia-sourced #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2305.11694 #region-us
# Dataset Card for QUEST ## Dataset Description - Repository: URL - Paper: URL - Point of Contact: chaitanyamalaviya@URL ### Dataset Summary We provide here the data accompanying the paper: QUEST: A Retrieval Dataset of Entity-Seeking Queries with Implicit Set Operations . ## Dataset Structure ### Data Instances QUEST contains 6307 training queries, 323 examples for development, and 1727 examples for testing. ### Data Fields Each examples file contains newline-separated json dictionaries with the following fields: * 'query' - Paraphrased query written by annotators. * 'docs' - List of relevant document titles. * 'original_query' - The original query which was paraphrased. Atomic queries are enclosed by '<mark></mark>'. Augmented queries do not have this field populated. * 'scores' - This field is not populated and only used when producing predictions to enable sharing the same data structure. * 'metadata' - A dictionary with the following fields: * 'template' - The template used to create the query. * 'domain' - The domain to which the query belongs. * 'fluency' - List of fluency ratings for the query. * 'meaning' - List of ratings for whether the paraphrased query meaning is the same as the original query. * 'naturalness' - List of naturalness ratings for the query. * 'relevance_ratings' - Dictionary mapping document titles to relevance ratings for the document. * 'evidence_ratings' - Dictionary mapping document titles to evidence ratings for the document. * 'attributions' - Dictionary mapping a document title to its attributions. Attributions are a list of dictionaries mapping a query substring to a document substring. The document corpus is at URL Note that this file is quite large (899MB). The format is newline separated json dicts containing 'title' and 'text'.
# Dataset Card for Evaluation run of sequelbox/SpellBlade

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [sequelbox/SpellBlade](https://huggingface.co/sequelbox/SpellBlade) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_sequelbox__SpellBlade",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-12-31T03:32:06.667530](https://huggingface.co/datasets/open-llm-leaderboard/details_sequelbox__SpellBlade/blob/main/results_2023-12-31T03-32-06.667530.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.7030716041803902, "acc_stderr": 0.03031449480611243, "acc_norm": 0.7076187068254356, "acc_norm_stderr": 0.03090119210080707, "mc1": 0.32558139534883723, "mc1_stderr": 0.016403989469907825, "mc2": 0.47099850284720696, "mc2_stderr": 0.0146354490934919 },
    "harness|arc:challenge|25": { "acc": 0.6510238907849829, "acc_stderr": 0.013928933461382506, "acc_norm": 0.6928327645051194, "acc_norm_stderr": 0.013481034054980941 },
    "harness|hellaswag|10": { "acc": 0.6826329416450906, "acc_stderr": 0.004645003662067883, "acc_norm": 0.873132842063334, "acc_norm_stderr": 0.0033214390244115416 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.6592592592592592, "acc_stderr": 0.04094376269996793, "acc_norm": 0.6592592592592592, "acc_norm_stderr": 0.04094376269996793 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.8026315789473685, "acc_stderr": 0.03238981601699397, "acc_norm": 0.8026315789473685, "acc_norm_stderr": 0.03238981601699397 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.04408440022768081, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768081 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.7094339622641509, "acc_stderr": 0.027943219989337142, "acc_norm": 0.7094339622641509, "acc_norm_stderr": 0.027943219989337142 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.8333333333333334, "acc_stderr": 0.031164899666948617, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.031164899666948617 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.3431372549019608, "acc_stderr": 0.04724007352383888, "acc_norm": 0.3431372549019608, "acc_norm_stderr": 0.04724007352383888 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.78, "acc_stderr": 0.041633319989322626, "acc_norm": 0.78, "acc_norm_stderr": 0.041633319989322626 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6680851063829787, "acc_stderr": 0.03078373675774564, "acc_norm": 0.6680851063829787, "acc_norm_stderr": 0.03078373675774564 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.4473684210526316, "acc_stderr": 0.04677473004491199, "acc_norm": 0.4473684210526316, "acc_norm_stderr": 0.04677473004491199 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6137931034482759, "acc_stderr": 0.04057324734419036, "acc_norm": 0.6137931034482759, "acc_norm_stderr": 0.04057324734419036 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4603174603174603, "acc_stderr": 0.02567008063690919, "acc_norm": 0.4603174603174603, "acc_norm_stderr": 0.02567008063690919 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.5, "acc_stderr": 0.04472135954999579, "acc_norm": 0.5, "acc_norm_stderr": 0.04472135954999579 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.51, "acc_stderr": 0.05024183937956911, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956911 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8258064516129032, "acc_stderr": 0.021576248184514587, "acc_norm": 0.8258064516129032, "acc_norm_stderr": 0.021576248184514587 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5172413793103449, "acc_stderr": 0.03515895551165698, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.03515895551165698 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.79, "acc_stderr": 0.040936018074033256, "acc_norm": 0.79, "acc_norm_stderr": 0.040936018074033256 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.8484848484848485, "acc_stderr": 0.027998073798781678, "acc_norm": 0.8484848484848485, "acc_norm_stderr": 0.027998073798781678 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8888888888888888, "acc_stderr": 0.02239078763821677, "acc_norm": 0.8888888888888888, "acc_norm_stderr": 0.02239078763821677 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9378238341968912, "acc_stderr": 0.017426974154240528, "acc_norm": 0.9378238341968912, "acc_norm_stderr": 0.017426974154240528 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.7025641025641025, "acc_stderr": 0.02317740813146594, "acc_norm": 0.7025641025641025, "acc_norm_stderr": 0.02317740813146594 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35555555555555557, "acc_stderr": 0.029185714949857403, "acc_norm": 0.35555555555555557, "acc_norm_stderr": 0.029185714949857403 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7563025210084033, "acc_stderr": 0.027886828078380558, "acc_norm": 0.7563025210084033, "acc_norm_stderr": 0.027886828078380558 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.4900662251655629, "acc_stderr": 0.04081677107248436, "acc_norm": 0.4900662251655629, "acc_norm_stderr": 0.04081677107248436 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8899082568807339, "acc_stderr": 0.0134199390186812, "acc_norm": 0.8899082568807339, "acc_norm_stderr": 0.0134199390186812 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.6203703703703703, "acc_stderr": 0.03309682581119035, "acc_norm": 0.6203703703703703, "acc_norm_stderr": 0.03309682581119035 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.9019607843137255, "acc_stderr": 0.020871118455552104, "acc_norm": 0.9019607843137255, "acc_norm_stderr": 0.020871118455552104 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8860759493670886, "acc_stderr": 0.020681745135884565, "acc_norm": 0.8860759493670886, "acc_norm_stderr": 0.020681745135884565 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.7847533632286996, "acc_stderr": 0.027584066602208274, "acc_norm": 0.7847533632286996, "acc_norm_stderr": 0.027584066602208274 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8473282442748091, "acc_stderr": 0.03154521672005472, "acc_norm": 0.8473282442748091, "acc_norm_stderr": 0.03154521672005472 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.8842975206611571, "acc_stderr": 0.029199802455622814, "acc_norm": 0.8842975206611571, "acc_norm_stderr": 0.029199802455622814 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8333333333333334, "acc_stderr": 0.03602814176392645, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.03602814176392645 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.8159509202453987, "acc_stderr": 0.030446777687971726, "acc_norm": 0.8159509202453987, "acc_norm_stderr": 0.030446777687971726 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.5267857142857143, "acc_stderr": 0.047389751192741546, "acc_norm": 0.5267857142857143, "acc_norm_stderr": 0.047389751192741546 },
    "harness|hendrycksTest-management|5": { "acc": 0.8349514563106796, "acc_stderr": 0.03675668832233188, "acc_norm": 0.8349514563106796, "acc_norm_stderr": 0.03675668832233188 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.9145299145299145, "acc_stderr": 0.018315891685625852, "acc_norm": 0.9145299145299145, "acc_norm_stderr": 0.018315891685625852 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.04560480215720684, "acc_norm": 0.71, "acc_norm_stderr": 0.04560480215720684 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8697318007662835, "acc_stderr": 0.012036729568216055, "acc_norm": 0.8697318007662835, "acc_norm_stderr": 0.012036729568216055 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7832369942196532, "acc_stderr": 0.022183477668412856, "acc_norm": 0.7832369942196532, "acc_norm_stderr": 0.022183477668412856 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.5944134078212291, "acc_stderr": 0.01642167050633917, "acc_norm": 0.5944134078212291, "acc_norm_stderr": 0.01642167050633917 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7777777777777778, "acc_stderr": 0.023805186524888142, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.023805186524888142 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.7909967845659164, "acc_stderr": 0.023093140398374224, "acc_norm": 0.7909967845659164, "acc_norm_stderr": 0.023093140398374224 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.8395061728395061, "acc_stderr": 0.02042395535477803, "acc_norm": 0.8395061728395061, "acc_norm_stderr": 0.02042395535477803 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.5673758865248227, "acc_stderr": 0.02955545423677884, "acc_norm": 0.5673758865248227, "acc_norm_stderr": 0.02955545423677884 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.5795306388526728, "acc_stderr": 0.012607654553832701, "acc_norm": 0.5795306388526728, "acc_norm_stderr": 0.012607654553832701 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7389705882352942, "acc_stderr": 0.02667925227010313, "acc_norm": 0.7389705882352942, "acc_norm_stderr": 0.02667925227010313 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.7581699346405228, "acc_stderr": 0.017322789207784326, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.017322789207784326 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.7636363636363637, "acc_stderr": 0.040693063197213754, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.040693063197213754 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.7959183673469388, "acc_stderr": 0.025801283475090496, "acc_norm": 0.7959183673469388, "acc_norm_stderr": 0.025801283475090496 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8855721393034826, "acc_stderr": 0.022509345325101706, "acc_norm": 0.8855721393034826, "acc_norm_stderr": 0.022509345325101706 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.88, "acc_stderr": 0.03265986323710906, "acc_norm": 0.88, "acc_norm_stderr": 0.03265986323710906 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5481927710843374, "acc_stderr": 0.03874371556587953, "acc_norm": 0.5481927710843374, "acc_norm_stderr": 0.03874371556587953 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8596491228070176, "acc_stderr": 0.0266405825391332, "acc_norm": 0.8596491228070176, "acc_norm_stderr": 0.0266405825391332 },
    "harness|truthfulqa:mc|0": { "mc1": 0.32558139534883723, "mc1_stderr": 0.016403989469907825, "mc2": 0.47099850284720696, "mc2_stderr": 0.0146354490934919 },
    "harness|winogrande|5": { "acc": 0.8318863456985004, "acc_stderr": 0.010510336954166736 },
    "harness|gsm8k|5": { "acc": 0.5382865807429871, "acc_stderr": 0.013732048227016683 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
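As a small illustration of working with the per-task results above, the snippet below averages the 'acc' field over the harness task entries, skipping the precomputed "all" aggregate. This is a sketch only; `mean_accuracy` is not part of the leaderboard tooling:

```python
def mean_accuracy(results):
    """Average 'acc' over per-task entries, skipping the 'all' aggregate
    and entries (like truthfulqa) that report no 'acc' field."""
    accs = [v["acc"] for k, v in results.items()
            if k != "all" and "acc" in v]
    return sum(accs) / len(accs)

# Two task entries copied from the results JSON above:
sample = {
    "all": {"acc": 0.7030716041803902},
    "harness|arc:challenge|25": {"acc": 0.6510238907849829},
    "harness|hellaswag|10": {"acc": 0.6826329416450906},
}
print(mean_accuracy(sample))  # mean of the two harness entries
```

Running the same helper over the full dict reproduces a simple unweighted task average, which is close in spirit to (but not identical to) the leaderboard's aggregated metric.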
open-llm-leaderboard/details_sequelbox__SpellBlade
[ "region:us" ]
2023-12-31T03:34:29+00:00
"**/details_harness|hendrycksTest-astronomy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-31T03-32-06.667530.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-31T03-32-06.667530.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-31T03-32-06.667530.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-31T03-32-06.667530.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-31T03-32-06.667530.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-31T03-32-06.667530.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", 
"data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-31T03-32-06.667530.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", 
"path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["**/details_harness|winogrande|5_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-31T03-32-06.667530.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_31T03_32_06.667530", "path": ["results_2023-12-31T03-32-06.667530.parquet"]}, {"split": "latest", "path": ["results_2023-12-31T03-32-06.667530.parquet"]}]}]}
2023-12-31T03:34:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of sequelbox/SpellBlade Dataset automatically created during the evaluation run of model sequelbox/SpellBlade on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-31T03:32:06.667530 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
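The split-naming convention described above (one timestamped split per run, plus a `latest` alias pointing at the newest run) can be resolved with a small helper. This is a minimal sketch for illustration; the function name and the assumption that every non-`latest` split follows the `%Y_%m_%dT%H_%M_%S.%f` pattern are mine, not part of the leaderboard tooling:

```python
from datetime import datetime

def pick_latest_split(split_names):
    """Return the most recent timestamped split name.

    Splits are named like '2023_12_31T03_32_06.667530'; the 'latest'
    alias is skipped, since it simply points at the newest run.
    """
    stamped = [s for s in split_names if s != "latest"]
    return max(stamped, key=lambda s: datetime.strptime(s, "%Y_%m_%dT%H_%M_%S.%f"))
```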
[ "# Dataset Card for Evaluation run of sequelbox/SpellBlade\n\n\n\nDataset automatically created during the evaluation run of model sequelbox/SpellBlade on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-31T03:32:06.667530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of sequelbox/SpellBlade\n\n\n\nDataset automatically created during the evaluation run of model sequelbox/SpellBlade on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-31T03:32:06.667530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. 
More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 177, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of sequelbox/SpellBlade\n\n\n\nDataset automatically created during the evaluation run of model sequelbox/SpellBlade on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-31T03:32:06.667530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
a32a78cb90b386baf6aba9a41d5724a9c4acca62
# Dataset Card for "aug_train0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
adityarra07/aug_train_0
[ "region:us" ]
2023-12-31T04:12:58+00:00
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 362075593.1, "num_examples": 2700}, {"name": "test", "num_bytes": 39466506.0, "num_examples": 300}], "download_size": 395989646, "dataset_size": 401542099.1}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-02T05:57:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "aug_train0" More Information needed
[ "# Dataset Card for \"aug_train0\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"aug_train0\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"aug_train0\"\n\nMore Information needed" ]
cb720f24c17100aaa64cad6fb55b66b19586fac6
# About this dataset This dataset is a machine translation of the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset with Palm 2 (prompt for translation is pasted below). I hope this dataset can be used by LLM developers, especially Japanese LLM developers. # License The license is apache-2.0, which follows the will of the original dataset ([Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)). # Prompt for translation We use `text-bison-32k@002` for translation. The rough Python code for translation is below. ```python from vertexai.language_models import TextGenerationModel  model = TextGenerationModel.from_pretrained("text-bison-32k@002") model_parameters = { "max_output_tokens": 8192, "temperature": 0, "top_p": 0.8, "top_k": 40, "stop_sequences":["English: "] } prompt = """The following English is a conversational text. Please translate this into Japanese under the following conditions. - The Japanese should sound natural and comfortable to Japanese people. - Please DO NOT translate any language other than English and leave it as it is. - Please translate all sentences without omitting any part of the sentence. --- English: Detailed Instructions: This task is about translating a given French language sentence to English.\nQ: La presse écrite est morte.\nA: Japanese: 詳しい指示内容: このタスクは、与えられたフランス語の文章を英語に翻訳してください。\nQ: La presse écrite est morte.\nA: English: '[\n ["AFC Ajax (amateurs)", "has ground", "Sportpark De Toekomst"],\n ["Ajax Youth Academy", "plays at", "Sportpark De Toekomst"]\n]' Japanese: '[\n ['アヤックス・アムステルダム(アマチュア)', '持つ','スポルトパルク・デ・トゥコムストで'],['アヤックス・ユース・アカデミー', 'プレーしている','スポルトパルク・デ・トゥコムストで']\n]' English: Based on the information provided in the article, the best answer is:\n\nC: Joe was taken off the train near the elephants.\n\nAccording to the article, Mr. Smith reached into his pocket for his glasses and found that they were lost.
Joe then told his grandfather that he knew where the glasses were and that they had fallen out of his pocket when he was taken off the train near the elephants. Therefore, option C is the correct answer. Japanese: 記事で提供された情報に基づいて、最も適切な答えは次のとおりです。\n\nC: ジョーはゾウの近くで電車から降ろされた。\n\n記事によると、スミス氏は眼鏡を取り出そうとして、眼鏡がなくなっていることに気づいた。ジョーはその後、彼がゾウの近くで電車から降ろされたときに、眼鏡がポケットから落ちたことを知っていたと祖父に言った。したがって、オプションCが正しい答えです。 English: Math problem: Solve -32*g = 8 - 72 for g.\nWhat is the solution? Japanese: 数学の問題: -32*g = 8 - 72をgについて解きます。\n解は何ですか? English: "Chtěl jste, abych svědčila.\n\nTranslate this to English?" Japanese: "Chtěl jste, abych svědčila.\n\nこれを英語に翻訳してください?" English: {} Japanese: """ result = model.predict( prompt.format(text), **model_parameters ) result_text = result.text ``` .
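Because generation is cut off at the `stop_sequences` entry `"English: "` in the code above, a model response should normally end before the next few-shot example begins; if the model ever echoes the stop marker, it can be trimmed defensively before saving the translation. This helper is a hypothetical post-processing step, not part of the original pipeline:

```python
def trim_at_stop(text: str, stop_sequences=("English: ",)) -> str:
    """Keep only the text before the first occurrence of any stop sequence."""
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text.strip()
```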
yongtae-jp/orca_dpo_pairs_ja
[ "task_categories:text-generation", "language:ja", "license:apache-2.0", "region:us" ]
2023-12-31T04:18:18+00:00
{"language": ["ja"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "k", "dataset_info": {"features": [{"name": "system", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44486983, "num_examples": 12851}], "download_size": 21154811, "dataset_size": 44486983}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-09T07:51:36+00:00
[]
[ "ja" ]
TAGS #task_categories-text-generation #language-Japanese #license-apache-2.0 #region-us
# About this dataset This dataset is a machine translation of the Intel/orca_dpo_pairs dataset with Palm 2 (prompt for translation is pasted below). I hope this dataset can be used by LLM developers, especially Japanese LLM developers. # License The license is apache-2.0, which follows the will of the original dataset (Intel/orca_dpo_pairs). # Prompt for translation We use 'text-bison-32k@002' for translation. The rough Python code for translation is below.
[ "# About this dataset\nThis dataset is a machine translation of the Intel/orca_dpo_pairs dataset with Palm 2 (prompt for translation is pasted below).\n\nI hope this dataset can be used for LLM developers, especially Japanese LLM developers.", "# Lisence\nThe license is apache-2.0, which follows the will of the original dataset (Intel/orca_dpo_pairs).", "# Prompt for translation\nWe use 'text-bison-32k@002' for translation. The rough Python code for translation is below.\n\n\n\n." ]
[ "TAGS\n#task_categories-text-generation #language-Japanese #license-apache-2.0 #region-us \n", "# About this dataset\nThis dataset is a machine translation of the Intel/orca_dpo_pairs dataset with Palm 2 (prompt for translation is pasted below).\n\nI hope this dataset can be used for LLM developers, especially Japanese LLM developers.", "# Lisence\nThe license is apache-2.0, which follows the will of the original dataset (Intel/orca_dpo_pairs).", "# Prompt for translation\nWe use 'text-bison-32k@002' for translation. The rough Python code for translation is below.\n\n\n\n." ]
[ 31, 61, 33, 32 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Japanese #license-apache-2.0 #region-us \n# About this dataset\nThis dataset is a machine translation of the Intel/orca_dpo_pairs dataset with Palm 2 (prompt for translation is pasted below).\n\nI hope this dataset can be used for LLM developers, especially Japanese LLM developers.# Lisence\nThe license is apache-2.0, which follows the will of the original dataset (Intel/orca_dpo_pairs).# Prompt for translation\nWe use 'text-bison-32k@002' for translation. The rough Python code for translation is below.\n\n\n\n." ]
e28cdcbfd1fcad36ed1e395ff26dba59a18ea6db
data source: https://github.com/StonyBrookNLP/BioNLI
hippocrates/BioNLI_train
[ "region:us" ]
2023-12-31T04:35:32+00:00
{}
2023-12-31T04:39:23+00:00
[]
[]
TAGS #region-us
data source: URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
b0dc25d224b689c5f0b24023071c33f3e8ca6221
# GPT2-PretrainV1-en ### Dataset Description Small dataset designed to test knowledge distillation from GPT2 models into smaller useful models. This is meant for pretraining a smaller model. This dataset will hopefully give the model a general understanding of a broad range of information. ### Dataset Sources This is a combination of several other datasets into one. Each dataset was downloaded, features were renamed, etc., to allow for joining and then shuffling. skeskinen/TinyStories-hf https://huggingface.co/datasets/skeskinen/TinyStories-hf nampdn-ai/tiny-textbooks https://huggingface.co/datasets/nampdn-ai/tiny-textbooks Bingsu/openwebtext_20p https://huggingface.co/datasets/Bingsu/openwebtext_20p
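The merge described above (rename each source's text field to a common feature, concatenate, then shuffle) can be sketched in plain Python. The helper name and the per-source field names below are illustrative assumptions, not the exact schema of the source datasets:

```python
import random

def combine_text_sources(*sources, seed=42):
    """Merge several record lists into one corpus with a shared 'text' field,
    then shuffle deterministically with the given seed."""
    combined = []
    for records, text_key in sources:
        # Rename whichever field holds the text to the common 'text' feature.
        combined.extend({"text": r[text_key]} for r in records)
    random.Random(seed).shuffle(combined)
    return combined

merged = combine_text_sources(
    ([{"story": "Once upon a time..."}], "story"),      # e.g. TinyStories
    ([{"textbook": "Chapter 1: ..."}], "textbook"),     # e.g. tiny-textbooks
    ([{"content": "Some web page text."}], "content"),  # e.g. openwebtext_20p
)
```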
terrycraddock/GPT2-PretrainV1-en
[ "license:mit", "region:us" ]
2023-12-31T06:34:18+00:00
{"license": "mit", "dataset_info": {"features": [{"name": "text", "dtype": "large_string"}], "splits": [{"name": "train", "num_bytes": 10223971287.488651, "num_examples": 32136787}, {"name": "test", "num_bytes": 1135997092.5113497, "num_examples": 3570755}], "download_size": 7440940192, "dataset_size": 11359968380.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-12-31T13:33:30+00:00
[]
[]
TAGS #license-mit #region-us
# GPT2-PretrainV1-en ### Dataset Description Small dataset designed to test knowledge distillation from GPT2 models into smaller useful models. This is meant for pretraining a smaller model. This dataset will hopefully give the model a general understanding of a broad range of information. ### Dataset Sources This is a combination of several other datasets into one. Each dataset was downloaded, features were renamed, etc., to allow for joining and then shuffling. skeskinen/TinyStories-hf URL nampdn-ai/tiny-textbooks URL Bingsu/openwebtext_20p URL
[ "# GPT2-PretrainV1-en", "### Dataset Description\n\nSmall dataset designed to test knowledge distillation from GPT2 models into smaller useful models. This is meant for pretraining a smaller model. \nThis dataset will hopefully give the model a general understand of a broad range of information.", "### Dataset Sources\n\nThis is a combination of several other datasets into one. Each dataset was downloaded features were renamed etc to allow for joining and then shuffling.\n\nskeskinen/TinyStories-hf\nURL\n\nnampdn-ai/tiny-textbooks\nURL\n\nBingsu/openwebtext_20p\nURL" ]
[ "TAGS\n#license-mit #region-us \n", "# GPT2-PretrainV1-en", "### Dataset Description\n\nSmall dataset designed to test knowledge distillation from GPT2 models into smaller useful models. This is meant for pretraining a smaller model. \nThis dataset will hopefully give the model a general understand of a broad range of information.", "### Dataset Sources\n\nThis is a combination of several other datasets into one. Each dataset was downloaded features were renamed etc to allow for joining and then shuffling.\n\nskeskinen/TinyStories-hf\nURL\n\nnampdn-ai/tiny-textbooks\nURL\n\nBingsu/openwebtext_20p\nURL" ]
[ 11, 10, 53, 76 ]
[ "passage: TAGS\n#license-mit #region-us \n# GPT2-PretrainV1-en### Dataset Description\n\nSmall dataset designed to test knowledge distillation from GPT2 models into smaller useful models. This is meant for pretraining a smaller model. \nThis dataset will hopefully give the model a general understand of a broad range of information.### Dataset Sources\n\nThis is a combination of several other datasets into one. Each dataset was downloaded features were renamed etc to allow for joining and then shuffling.\n\nskeskinen/TinyStories-hf\nURL\n\nnampdn-ai/tiny-textbooks\nURL\n\nBingsu/openwebtext_20p\nURL" ]
0ad6c4c530ca261efc867f2dcbf64057db55ef8d
Generated CoT reasoning data using GPT-4 based on the "zeroshot/twitter-financial-news-sentiment" data (https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment/viewer/default/train?p=1). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
yc4142/stockmarketCoT
[ "region:us" ]
2023-12-31T06:36:14+00:00
{}
2023-12-31T06:46:28+00:00
[]
[]
TAGS #region-us
Generated CoT reasoning data using GPT-4 based on the "zeroshot/twitter-financial-news-sentiment" data (URL). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
aec30ea0095d86df84b4214f4c8d646111bf8d1f
Generated non-CoT data based on the "zeroshot/twitter-financial-news-sentiment" data (https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment/viewer/default/train?p=1). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
yc4142/stockmarket_nonCoT
[ "region:us" ]
2023-12-31T06:41:54+00:00
{}
2023-12-31T06:46:02+00:00
[]
[]
TAGS #region-us
Generated non-CoT data based on the "zeroshot/twitter-financial-news-sentiment" data (URL). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
314abfac15966c110cf856d8f538207a00b927a3
Generated CoT reasoning data using GPT-4 based on the "zeroshot/twitter-financial-news-sentiment" data (https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment/viewer/default/train?p=1). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
yc4142/stockmarket-CoT
[ "region:us" ]
2023-12-31T06:49:24+00:00
{}
2023-12-31T06:50:49+00:00
[]
[]
TAGS #region-us
Generated CoT reasoning data using GPT-4 based on the "zeroshot/twitter-financial-news-sentiment" data (URL). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
858aa015222e032d1568cee6bb3f693d3d53b8d7
Generated non-CoT data based on the "zeroshot/twitter-financial-news-sentiment" data (https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment/viewer/default/train?p=1). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
yc4142/stockmarket-nonCoT
[ "region:us" ]
2023-12-31T06:50:07+00:00
{}
2023-12-31T06:51:06+00:00
[]
[]
TAGS #region-us
Generated non-CoT data based on the "zeroshot/twitter-financial-news-sentiment" data (URL). This is used to fine-tune LLMs for the continuation of the JPMorgan LLM research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
792c3529f2ae535c2b8986e7d0051dcd87d44d40
Earth and Dusk Vae --- Merged Vae for Testing, liked the output. File name does not mean it will be the end of it, there may be blessups left to do to it for tweaking. # Join us on our journey ## Website: https://www.end-media.org ## Discord: https://discord.gg/5t2kYxt7An ## Backups: https://huggingface.co/EarthnDusk ## Send a Pizza to Dusk: https://ko-fi.com/duskfallcrew/ ## Send a Pizza to Earth: https://ko-fi.com/earthnicity/ # About Earth & DUSK MEDIA - Who Are We? At Earth & DUSK MEDIA, we're not just your typical content creators – we're two vibrant plural systems on a creative journey. Our diverse content spans across multiple platforms and artistic ventures, making us a dynamic force in the creative world. Our Creative Universe: Second Life: Step into our virtual realm, where creativity knows no bounds. YouTube: Join us on our visual storytelling adventures. TikTok: Enjoy bite-sized, entertaining content that sparks joy. Music Production: We create mesmerizing sounds that resonate with your soul. AI Ventures: Explore the groundbreaking AI creations from Duskfallcrew and Earthnicity. You may have come across their content on renowned platforms like Civit AI, Huggingface, and more. Get ready to experience the world of AI art like never before. Our journey continues to unfold, and we invite you to be a part of it. Join Earth & DUSK MEDIA to experience a kaleidoscope of creativity and innovation that knows no limits! # WE ARE PROUDLY SPONSORED BY: https://www.piratediffusion.com/ https://yodayo.com/ --- # license: creativeml-openrail-m Full details can be read here: https://github.com/CompVis/stable-diffusion/blob/main/LICENSE That means: No illegal shiznit, and don't resell this model as your own. And also it's your responsibility if you get DMCA'ed for commercial use of this model, whatever you do is your responsibility. 
You can merge with it, you can make fan art - but because this is AI, I can't promise you that you'll be safe from the copyright monsters. We're just providing a service.
EarthnDusk/END-VAE-FINAL
[ "license:creativeml-openrail-m", "region:us" ]
2023-12-31T06:56:13+00:00
{"license": "creativeml-openrail-m"}
2024-01-01T00:35:52+00:00
[]
[]
TAGS #license-creativeml-openrail-m #region-us
Earth and Dusk Vae --- Merged Vae for Testing, liked the output. File name does not mean it will be the end of it, there may be blessups left to do to it for tweaking. # Join us on our journey ## Website: URL ## Discord: URL ## Backups: URL ## Send a Pizza to Dusk: URL ## Send a Pizza to Earth: URL # About Earth & DUSK MEDIA - Who Are We? At Earth & DUSK MEDIA, we're not just your typical content creators – we're two vibrant plural systems on a creative journey. Our diverse content spans across multiple platforms and artistic ventures, making us a dynamic force in the creative world. Our Creative Universe: Second Life: Step into our virtual realm, where creativity knows no bounds. YouTube: Join us on our visual storytelling adventures. TikTok: Enjoy bite-sized, entertaining content that sparks joy. Music Production: We create mesmerizing sounds that resonate with your soul. AI Ventures: Explore the groundbreaking AI creations from Duskfallcrew and Earthnicity. You may have come across their content on renowned platforms like Civit AI, Huggingface, and more. Get ready to experience the world of AI art like never before. Our journey continues to unfold, and we invite you to be a part of it. Join Earth & DUSK MEDIA to experience a kaleidoscope of creativity and innovation that knows no limits! # WE ARE PROUDLY SPONSORED BY: URL URL --- # license: creativeml-openrail-m Full details can be read here: URL That means: No illegal shiznit, and don't resell this model as your own. And also it's your responsibility if you get DMCA'ed for commercial use of this model, whatever you do is your responsibility. You can merge with it, you can make fan art - but because this is AI, I can't promise you that you'll be safe from the copyright monsters. We're just providing a service.
[ "# Join us on our journey", "## Website: URL", "## Discord: :URL", "## Backups: URL", "## Send a Pizza to Dusk: URL", "## Send a Pizza to Earth: URL", "# About\n\nEarth & DUSK MEDIA - Who Are We?\n\nAt Earth & DUSK MEDIA, we're not just your typical content creators – we're two vibrant plural systems on a creative journey. Our diverse content spans across multiple platforms and artistic ventures, making us a dynamic force in the creative world.\n\nOur Creative Universe:\n\nSecond Life: Step into our virtual realm, where creativity knows no bounds.\n\nYouTube: Join us on our visual storytelling adventures.\n\nTikTok: Enjoy bite-sized, entertaining content that sparks joy.\n\nMusic Production: We create mesmerizing sounds that resonate with your soul.\n\nAI Ventures: Explore the groundbreaking AI creations from Duskfallcrew and Earthnicity. You may have come across their content on renowned platforms like Civit AI, Huggingface, and more. Get ready to experience the world of AI art like never before.\n\nOur journey continues to unfold, and we invite you to be a part of it. Join Earth & DUSK MEDIA to experience a kaleidoscope of creativity and innovation that knows no limits!", "# WE ARE PROUDLY SPONSORED BY:\n\nURL\n\nURL\n\n\n---", "# license:\n\ncreativeml-openrail-m\n\nFull details can be read here: URL\n\nThat means: No illegal shiznit, and don't resell this model as your own.\n\nAnd also it's your responsibility if you get DMCA'ed for commercial use of this model, whatever you do is your responsibility.\n\nYou can merge with it, you can make fan art - but because this is AI, I can't promise you that you'll be safe from the copyright monsters.\n\nWe're just providing a service." ]
[ "TAGS\n#license-creativeml-openrail-m #region-us \n", "# Join us on our journey", "## Website: URL", "## Discord: :URL", "## Backups: URL", "## Send a Pizza to Dusk: URL", "## Send a Pizza to Earth: URL", "# About\n\nEarth & DUSK MEDIA - Who Are We?\n\nAt Earth & DUSK MEDIA, we're not just your typical content creators – we're two vibrant plural systems on a creative journey. Our diverse content spans across multiple platforms and artistic ventures, making us a dynamic force in the creative world.\n\nOur Creative Universe:\n\nSecond Life: Step into our virtual realm, where creativity knows no bounds.\n\nYouTube: Join us on our visual storytelling adventures.\n\nTikTok: Enjoy bite-sized, entertaining content that sparks joy.\n\nMusic Production: We create mesmerizing sounds that resonate with your soul.\n\nAI Ventures: Explore the groundbreaking AI creations from Duskfallcrew and Earthnicity. You may have come across their content on renowned platforms like Civit AI, Huggingface, and more. Get ready to experience the world of AI art like never before.\n\nOur journey continues to unfold, and we invite you to be a part of it. Join Earth & DUSK MEDIA to experience a kaleidoscope of creativity and innovation that knows no limits!", "# WE ARE PROUDLY SPONSORED BY:\n\nURL\n\nURL\n\n\n---", "# license:\n\ncreativeml-openrail-m\n\nFull details can be read here: URL\n\nThat means: No illegal shiznit, and don't resell this model as your own.\n\nAnd also it's your responsibility if you get DMCA'ed for commercial use of this model, whatever you do is your responsibility.\n\nYou can merge with it, you can make fan art - but because this is AI, I can't promise you that you'll be safe from the copyright monsters.\n\nWe're just providing a service." ]
[ 18, 6, 4, 6, 5, 9, 8, 240, 16, 111 ]
[ "passage: TAGS\n#license-creativeml-openrail-m #region-us \n# Join us on our journey## Website: URL## Discord: :URL## Backups: URL## Send a Pizza to Dusk: URL## Send a Pizza to Earth: URL# About\n\nEarth & DUSK MEDIA - Who Are We?\n\nAt Earth & DUSK MEDIA, we're not just your typical content creators – we're two vibrant plural systems on a creative journey. Our diverse content spans across multiple platforms and artistic ventures, making us a dynamic force in the creative world.\n\nOur Creative Universe:\n\nSecond Life: Step into our virtual realm, where creativity knows no bounds.\n\nYouTube: Join us on our visual storytelling adventures.\n\nTikTok: Enjoy bite-sized, entertaining content that sparks joy.\n\nMusic Production: We create mesmerizing sounds that resonate with your soul.\n\nAI Ventures: Explore the groundbreaking AI creations from Duskfallcrew and Earthnicity. You may have come across their content on renowned platforms like Civit AI, Huggingface, and more. Get ready to experience the world of AI art like never before.\n\nOur journey continues to unfold, and we invite you to be a part of it. Join Earth & DUSK MEDIA to experience a kaleidoscope of creativity and innovation that knows no limits!# WE ARE PROUDLY SPONSORED BY:\n\nURL\n\nURL\n\n\n---# license:\n\ncreativeml-openrail-m\n\nFull details can be read here: URL\n\nThat means: No illegal shiznit, and don't resell this model as your own.\n\nAnd also it's your responsibility if you get DMCA'ed for commercial use of this model, whatever you do is your responsibility.\n\nYou can merge with it, you can make fan art - but because this is AI, I can't promise you that you'll be safe from the copyright monsters.\n\nWe're just providing a service." ]
138f27408cb05596e06cedb79abeb0b174f2d255
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> **Dataset Name**: Google Search Trends Top Rising Search Terms **Description**: The Google Search Trends Top Rising Search Terms dataset provides valuable insights into the most rapidly growing search queries on the Google search engine. It offers a comprehensive collection of trending search queries, their search frequencies, and relevant metadata. Researchers and data enthusiasts can utilize this dataset to analyze search trends, identify emerging topics, and gain a deeper understanding of user interests that are currently on the rise. Whether for market research, content optimization, or data-driven decision-making, this dataset offers a wealth of information to explore the dynamic landscape of online search behavior, highlighting what is gaining popularity in real-time. - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. 
--> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). 
If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
hoshangc/google_search_terms_training_data
[ "task_categories:text-classification", "region:us" ]
2023-12-31T07:05:20+00:00
{"task_categories": ["text-classification"]}
2024-01-01T04:19:26+00:00
[]
[]
TAGS #task_categories-text-classification #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description Dataset Name: Google Search Trends Top Rising Search Terms Description: The Google Search Trends Top Rising Search Terms dataset provides valuable insights into the most rapidly growing search queries on the Google search engine. It offers a comprehensive collection of trending search queries, their search frequencies, and relevant metadata. Researchers and data enthusiasts can utilize this dataset to analyze search trends, identify emerging topics, and gain a deeper understanding of user interests that are currently on the rise. Whether for market research, content optimization, or data-driven decision-making, this dataset offers a wealth of information to explore the dynamic landscape of online search behavior, highlighting what is gaining popularity in real-time. - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\nDataset Name: Google Search Trends Top Rising Search Terms\n\nDescription:\nThe Google Search Trends Top Rising Search Terms dataset provides valuable insights into the most rapidly growing search queries on the Google search engine. It offers a comprehensive collection of trending search queries, their search frequencies, and relevant metadata. Researchers and data enthusiasts can utilize this dataset to analyze search trends, identify emerging topics, and gain a deeper understanding of user interests that are currently on the rise. Whether for market research, content optimization, or data-driven decision-making, this dataset offers a wealth of information to explore the dynamic landscape of online search behavior, highlighting what is gaining popularity in real-time.\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#task_categories-text-classification #region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\nDataset Name: Google Search Trends Top Rising Search Terms\n\nDescription:\nThe Google Search Trends Top Rising Search Terms dataset provides valuable insights into the most rapidly growing search queries on the Google search engine. It offers a comprehensive collection of trending search queries, their search frequencies, and relevant metadata. Researchers and data enthusiasts can utilize this dataset to analyze search trends, identify emerging topics, and gain a deeper understanding of user interests that are currently on the rise. Whether for market research, content optimization, or data-driven decision-making, this dataset offers a wealth of information to explore the dynamic landscape of online search behavior, highlighting what is gaining popularity in real-time.\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 17, 34, 4, 203, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#task_categories-text-classification #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\nDataset Name: Google Search Trends Top Rising Search Terms\n\nDescription:\nThe Google Search Trends Top Rising Search Terms dataset provides valuable insights into the most rapidly growing search queries on the Google search engine. It offers a comprehensive collection of trending search queries, their search frequencies, and relevant metadata. Researchers and data enthusiasts can utilize this dataset to analyze search trends, identify emerging topics, and gain a deeper understanding of user interests that are currently on the rise. Whether for market research, content optimization, or data-driven decision-making, this dataset offers a wealth of information to explore the dynamic landscape of online search behavior, highlighting what is gaining popularity in real-time.\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
23fd42c04578baf5b093f10dc4c71cda9f524556
Extracted content from various website articles using code from [FinNLP](https://github.com/AI4Finance-Foundation/FinNLP), with questions and answers generated via the OpenAI API (**gpt-3.5-turbo-16k**).
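The extraction-then-generation pipeline described above can be sketched as follows. This is a minimal illustration, not the dataset author's actual code: the prompt wording, function name, and example article are assumptions; only the model name (gpt-3.5-turbo-16k) comes from the dataset description. The sketch builds the chat-completion request payload without sending it.

```python
# Hypothetical sketch: turning one extracted article into a Q&A-generation request.
# Only the request payload is built here; sending it would require an API key.
def build_qa_request(article_text: str, n_pairs: int = 3) -> dict:
    """Build a chat-completion request asking for Q&A pairs about an article."""
    return {
        "model": "gpt-3.5-turbo-16k",
        "messages": [
            {
                "role": "system",
                "content": "You generate question-answer pairs from financial news articles.",
            },
            {
                "role": "user",
                "content": f"Write {n_pairs} question-answer pairs about this article:\n\n{article_text}",
            },
        ],
    }

req = build_qa_request("Shares of ExampleCorp rose 5% after strong quarterly earnings.")
print(req["model"])
```

In a real run, each payload would be passed to the chat-completions endpoint and the returned pairs collected into the train/test splits.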
sherelyn912/finnews_en_2wk_qa
[ "region:us" ]
2023-12-31T08:44:50+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 310555.2191464821, "num_examples": 1387}, {"name": "test", "num_bytes": 77694.78085351788, "num_examples": 347}], "download_size": 182095, "dataset_size": 388250.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-02T10:21:26+00:00
[]
[]
TAGS #region-us
Extracted content from various website articles using code from FinNLP, with questions and answers generated via the OpenAI API (gpt-3.5-turbo-16k).
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
306012130014fc646709c2fba95ce3057da13787
Source - https://www.dropbox.com/s/omintwb3k2h46kk/passport_dataset.zip
sizhkhy/passports
[ "task_categories:visual-question-answering", "size_categories:n<1K", "language:en", "license:mit", "kyc", "passports", "region:us" ]
2023-12-31T09:07:25+00:00
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["visual-question-answering"], "pretty_name": "Passports", "tags": ["kyc", "passports"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label_string", "sequence": "string"}, {"name": "words", "sequence": "string"}, {"name": "labels", "sequence": "int64"}, {"name": "boxes", "sequence": {"sequence": "int64"}}], "splits": [{"name": "train", "num_bytes": 34324486.0, "num_examples": 100}, {"name": "valid", "num_bytes": 2769718.0, "num_examples": 9}], "download_size": 36565385, "dataset_size": 37094204.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}]}
2023-12-31T09:23:52+00:00
[]
[ "en" ]
TAGS #task_categories-visual-question-answering #size_categories-n<1K #language-English #license-mit #kyc #passports #region-us
Source - URL
[]
[ "TAGS\n#task_categories-visual-question-answering #size_categories-n<1K #language-English #license-mit #kyc #passports #region-us \n" ]
[ 46 ]
[ "passage: TAGS\n#task_categories-visual-question-answering #size_categories-n<1K #language-English #license-mit #kyc #passports #region-us \n" ]
fe839175aec5e99d301c4528441cc9dfb84af367
# ArzEn-MultiGenre: A Comprehensive Parallel Dataset ## Overview ArzEn-MultiGenre is a distinctive parallel dataset that encompasses a diverse collection of Egyptian Arabic content. This collection includes song lyrics, novels, and TV show subtitles, all of which have been meticulously translated and aligned with their English counterparts. The dataset serves as an invaluable tool for various linguistic and computational applications. **Published:** 28 December 2023 **Version:** 3 **DOI:** 10.17632/6k97jty9xg.3 **Contributor:** Rania Al-Sabbagh ## Dataset Details - **Total Segment Pairs:** 25,557 - **Languages:** Egyptian Arabic and English - **Content Types:** Song Lyrics, Novels, TV Show Subtitles ## Applications - **Machine Translation Benchmarking:** Ideal for testing and improving new machine translation models. - **Language Model Fine-Tuning:** Suitable for enhancing large language models in few-shot settings. - **Commercial Application Adaptation:** Can be used to refine tools like Google Translate for better performance with Egyptian Arabic. ## Research Relevance This dataset is a significant resource for research in fields such as translation studies, cross-linguistic analysis, and lexical semantics. ## Unique Contributions 1. **Diverse Textual Genres:** The dataset includes genres not typically found in parallel datasets for Egyptian Arabic and English. 2. **Gold-Standard Quality:** Translated and aligned by human experts, ensuring high accuracy and reliability. ## Citation Please cite this dataset as follows: Al-Sabbagh, Rania (2023). “ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations.” Mendeley Data, V3, DOI: 10.17632/6k97jty9xg.3 ## Related Links - [Article](https://ijaes2011.net/index.php/IJAES/article/view/560) ## Institutions - University of Sharjah
HeshamHaroon/ArzEn-MultiGenre
[ "task_categories:translation", "size_categories:1K<n<10K", "language:ar", "language:en", "license:cc-by-4.0", "region:us" ]
2023-12-31T09:13:46+00:00
{"language": ["ar", "en"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["translation"]}
2023-12-31T14:41:09+00:00
[]
[ "ar", "en" ]
TAGS #task_categories-translation #size_categories-1K<n<10K #language-Arabic #language-English #license-cc-by-4.0 #region-us
# ArzEn-MultiGenre: A Comprehensive Parallel Dataset ## Overview ArzEn-MultiGenre is a distinctive parallel dataset that encompasses a diverse collection of Egyptian Arabic content. This collection includes song lyrics, novels, and TV show subtitles, all of which have been meticulously translated and aligned with their English counterparts. The dataset serves as an invaluable tool for various linguistic and computational applications. Published: 28 December 2023 Version: 3 DOI: 10.17632/6k97jty9xg.3 Contributor: Rania Al-Sabbagh ## Dataset Details - Total Segment Pairs: 25,557 - Languages: Egyptian Arabic and English - Content Types: Song Lyrics, Novels, TV Show Subtitles ## Applications - Machine Translation Benchmarking: Ideal for testing and improving new machine translation models. - Language Model Fine-Tuning: Suitable for enhancing large language models in few-shot settings. - Commercial Application Adaptation: Can be used to refine tools like Google Translate for better performance with Egyptian Arabic. ## Research Relevance This dataset is a significant resource for research in fields such as translation studies, cross-linguistic analysis, and lexical semantics. ## Unique Contributions 1. Diverse Textual Genres: The dataset includes genres not typically found in parallel datasets for Egyptian Arabic and English. 2. Gold-Standard Quality: Translated and aligned by human experts, ensuring high accuracy and reliability. Please cite this dataset as follows: Al-Sabbagh, Rania (2023). “ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations.” Mendeley Data, V3, DOI: 10.17632/6k97jty9xg.3 ## Related Links - Article ## Institutions - University of Sharjah
[ "# ArzEn-MultiGenre: A Comprehensive Parallel Dataset", "## Overview\nArzEn-MultiGenre is a distinctive parallel dataset that encompasses a diverse collection of Egyptian Arabic content. This collection includes song lyrics, novels, and TV show subtitles, all of which have been meticulously translated and aligned with their English counterparts. The dataset serves as an invaluable tool for various linguistic and computational applications.\nPublished: 28 December 2023 \nVersion: 3 \nDOI: 10.17632/6k97jty9xg.3 \nContributor: Rania Al-Sabbagh", "## Dataset Details\n- Total Segment Pairs: 25,557\n- Languages: Egyptian Arabic and English\n- Content Types: Song Lyrics, Novels, TV Show Subtitles", "## Applications\n- Machine Translation Benchmarking: Ideal for testing and improving new machine translation models.\n- Language Model Fine-Tuning: Suitable for enhancing large language models in few-shot settings.\n- Commercial Application Adaptation: Can be used to refine tools like Google Translate for better performance with Egyptian Arabic.", "## Research Relevance\nThis dataset is a significant resource for research in fields such as translation studies, cross-linguistic analysis, and lexical semantics.", "## Unique Contributions\n1. Diverse Textual Genres: The dataset includes genres not typically found in parallel datasets for Egyptian Arabic and English.\n2. Gold-Standard Quality: Translated and aligned by human experts, ensuring high accuracy and reliability.\n\nPlease cite this dataset as follows: \nAl-Sabbagh, Rania (2023). “ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations.” Mendeley Data, V3, DOI: 10.17632/6k97jty9xg.3", "## Related Links\n- Article", "## Institutions\n- University of Sharjah" ]
[ "TAGS\n#task_categories-translation #size_categories-1K<n<10K #language-Arabic #language-English #license-cc-by-4.0 #region-us \n", "# ArzEn-MultiGenre: A Comprehensive Parallel Dataset", "## Overview\nArzEn-MultiGenre is a distinctive parallel dataset that encompasses a diverse collection of Egyptian Arabic content. This collection includes song lyrics, novels, and TV show subtitles, all of which have been meticulously translated and aligned with their English counterparts. The dataset serves as an invaluable tool for various linguistic and computational applications.\nPublished: 28 December 2023 \nVersion: 3 \nDOI: 10.17632/6k97jty9xg.3 \nContributor: Rania Al-Sabbagh", "## Dataset Details\n- Total Segment Pairs: 25,557\n- Languages: Egyptian Arabic and English\n- Content Types: Song Lyrics, Novels, TV Show Subtitles", "## Applications\n- Machine Translation Benchmarking: Ideal for testing and improving new machine translation models.\n- Language Model Fine-Tuning: Suitable for enhancing large language models in few-shot settings.\n- Commercial Application Adaptation: Can be used to refine tools like Google Translate for better performance with Egyptian Arabic.", "## Research Relevance\nThis dataset is a significant resource for research in fields such as translation studies, cross-linguistic analysis, and lexical semantics.", "## Unique Contributions\n1. Diverse Textual Genres: The dataset includes genres not typically found in parallel datasets for Egyptian Arabic and English.\n2. Gold-Standard Quality: Translated and aligned by human experts, ensuring high accuracy and reliability.\n\nPlease cite this dataset as follows: \nAl-Sabbagh, Rania (2023). “ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations.” Mendeley Data, V3, DOI: 10.17632/6k97jty9xg.3", "## Related Links\n- Article", "## Institutions\n- University of Sharjah" ]
[ 45, 17, 124, 39, 73, 35, 142, 5, 8 ]
[ "passage: TAGS\n#task_categories-translation #size_categories-1K<n<10K #language-Arabic #language-English #license-cc-by-4.0 #region-us \n# ArzEn-MultiGenre: A Comprehensive Parallel Dataset## Overview\nArzEn-MultiGenre is a distinctive parallel dataset that encompasses a diverse collection of Egyptian Arabic content. This collection includes song lyrics, novels, and TV show subtitles, all of which have been meticulously translated and aligned with their English counterparts. The dataset serves as an invaluable tool for various linguistic and computational applications.\nPublished: 28 December 2023 \nVersion: 3 \nDOI: 10.17632/6k97jty9xg.3 \nContributor: Rania Al-Sabbagh## Dataset Details\n- Total Segment Pairs: 25,557\n- Languages: Egyptian Arabic and English\n- Content Types: Song Lyrics, Novels, TV Show Subtitles## Applications\n- Machine Translation Benchmarking: Ideal for testing and improving new machine translation models.\n- Language Model Fine-Tuning: Suitable for enhancing large language models in few-shot settings.\n- Commercial Application Adaptation: Can be used to refine tools like Google Translate for better performance with Egyptian Arabic.## Research Relevance\nThis dataset is a significant resource for research in fields such as translation studies, cross-linguistic analysis, and lexical semantics.## Unique Contributions\n1. Diverse Textual Genres: The dataset includes genres not typically found in parallel datasets for Egyptian Arabic and English.\n2. Gold-Standard Quality: Translated and aligned by human experts, ensuring high accuracy and reliability.\n\nPlease cite this dataset as follows: \nAl-Sabbagh, Rania (2023). “ArzEn-MultiGenre: An aligned parallel dataset of Egyptian Arabic song lyrics, novels, and subtitles, with English translations.” Mendeley Data, V3, DOI: 10.17632/6k97jty9xg.3## Related Links\n- Article## Institutions\n- University of Sharjah" ]
8ef93f1f7fedb17c8f75e4da8308db20b9f255bb
# Dataset Card for "SwahiliAlpaca" Alpaca dataset for instruction fine-tuning in Swahili. ## Prompt Template ``` ### Maelekezo:\n{instruction} ### Agizo:\n{input} ### Jibu:\n{output} ```
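The prompt template in the card above can be rendered into a training string as in the sketch below. The template text comes from the card; the helper function and the example record are invented for illustration.

```python
# Render one SwahiliAlpaca record into the card's prompt template.
# The template follows the dataset card; the example record is hypothetical.
PROMPT_TEMPLATE = (
    "### Maelekezo:\n{instruction}\n\n"
    "### Agizo:\n{input}\n\n"
    "### Jibu:\n{output}"
)

def build_prompt(record: dict) -> str:
    """Fill the three template slots from a dataset record."""
    return PROMPT_TEMPLATE.format(
        instruction=record["instruction"],
        input=record["input"],
        output=record["output"],
    )

example = {
    "instruction": "Tafsiri sentensi ifuatayo kwa Kiingereza.",
    "input": "Habari za asubuhi.",
    "output": "Good morning.",
}
prompt = build_prompt(example)
print(prompt)
```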
mwitiderrick/SwahiliAlpaca
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:sw", "license:apache-2.0", "region:us" ]
2023-12-31T09:21:23+00:00
{"language": ["sw"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Swahili Alpaca", "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31174024, "num_examples": 34740}], "download_size": 15797740, "dataset_size": 31174024}}
2023-12-31T19:27:51+00:00
[]
[ "sw" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-Swahili (macrolanguage) #license-apache-2.0 #region-us
# Dataset Card for "SwahiliAlpaca" Alpaca dataset for instruction fine-tuning in Swahili. ## Prompt Template
[ "# Dataset Card for \"SwahiliAlpaca\"\n\nAlpaca dataset for instruction fine-tuning in Swahili.", "## Prompt Template" ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Swahili (macrolanguage) #license-apache-2.0 #region-us \n", "# Dataset Card for \"SwahiliAlpaca\"\n\nAlpaca dataset for instruction fine-tuning in Swahili.", "## Prompt Template" ]
[ 48, 29, 5 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Swahili (macrolanguage) #license-apache-2.0 #region-us \n# Dataset Card for \"SwahiliAlpaca\"\n\nAlpaca dataset for instruction fine-tuning in Swahili.## Prompt Template" ]
4b27908001d364c6682cdf639df45e236dae14e7
# Dataset Card for "Llama2-MedTuned-Instructions" ## Dataset Description Llama2-MedTuned-Instructions is an instruction-based dataset developed for training language models in biomedical NLP tasks. It consists of approximately 200,000 samples, each tailored to guide models in performing specific tasks such as Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI). This dataset represents a fusion of various existing data sources, reformatted to facilitate instruction-based learning. ## Source Datasets and Composition The dataset amalgamates training subsets from several prominent biomedical datasets: - **Named Entity Recognition (NER)**: Utilises NCBI-disease, BC5CDR-disease, BC5CDR-chem, BC2GM, JNLPBA, and i2b2-2012 datasets. - **Relation Extraction (RE)**: Incorporates i2b2-2010 and GAD datasets. - **Natural Language Inference (NLI)**: Employs the MedNLI dataset. - **Document Classification**: Uses the hallmarks of cancer (HoC) dataset. - **Question Answering (QA)**: Includes samples from ChatDoctor and PMC-Llama-Instructions datasets. ## Prompting Strategy Each sample in the dataset follows a three-part structure: Instruction, Input, and Output. This format ensures clarity in task directives and expected outcomes, aligning with the instruction-based training approach. ## Usage and Application This dataset is ideal for training and evaluating models on biomedical NLP tasks, particularly those focused on understanding and processing medical and clinical text. It serves as a benchmark for assessing model performance in domain-specific tasks, comparing against established models like BioBERT and BioClinicalBERT. ## Acknowledgements We extend our gratitude to all contributors and supporting institutions. 
## Citation For utilising this dataset in academic work or applications, please cite: ```bibtex @misc{rohanian2023exploring, title={Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing}, author={Omid Rohanian and Mohammadmahdi Nouriborji and David A. Clifton}, year={2023}, eprint={2401.00579}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
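The three-part sample structure described in the card's Prompting Strategy section can be sketched as below. The card specifies only the Instruction/Input/Output structure; the exact delimiter strings and the NER example are assumptions for illustration.

```python
# Hypothetical rendering of one instruction-tuning sample
# (Instruction / Input / Output, per the card; delimiters assumed).
sample = {
    "instruction": "Extract all disease mentions from the input text.",
    "input": "The patient was diagnosed with type 2 diabetes and hypertension.",
    "output": "type 2 diabetes, hypertension",
}

def to_training_text(s: dict) -> str:
    """Join the three parts into a single training string."""
    return (
        f"### Instruction:\n{s['instruction']}\n\n"
        f"### Input:\n{s['input']}\n\n"
        f"### Output:\n{s['output']}"
    )

text = to_training_text(sample)
print(text)
```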
nlpie/Llama2-MedTuned-Instructions
[ "license:cc-by-nc-4.0", "arxiv:2401.00579", "region:us" ]
2023-12-31T09:58:50+00:00
{"license": "cc-by-nc-4.0", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206029981.0, "num_examples": 200252}, {"name": "validation", "num_bytes": 60345037.65316321, "num_examples": 70066}], "download_size": 91116147, "dataset_size": 266375018.6531632}}
2024-02-11T23:18:12+00:00
[ "2401.00579" ]
[]
TAGS #license-cc-by-nc-4.0 #arxiv-2401.00579 #region-us
# Dataset Card for "Llama2-MedTuned-Instructions" ## Dataset Description Llama2-MedTuned-Instructions is an instruction-based dataset developed for training language models in biomedical NLP tasks. It consists of approximately 200,000 samples, each tailored to guide models in performing specific tasks such as Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI). This dataset represents a fusion of various existing data sources, reformatted to facilitate instruction-based learning. ## Source Datasets and Composition The dataset amalgamates training subsets from several prominent biomedical datasets: - Named Entity Recognition (NER): Utilises NCBI-disease, BC5CDR-disease, BC5CDR-chem, BC2GM, JNLPBA, and i2b2-2012 datasets. - Relation Extraction (RE): Incorporates i2b2-2010 and GAD datasets. - Natural Language Inference (NLI): Employs the MedNLI dataset. - Document Classification: Uses the hallmarks of cancer (HoC) dataset. - Question Answering (QA): Includes samples from ChatDoctor and PMC-Llama-Instructions datasets. ## Prompting Strategy Each sample in the dataset follows a three-part structure: Instruction, Input, and Output. This format ensures clarity in task directives and expected outcomes, aligning with the instruction-based training approach. ## Usage and Application This dataset is ideal for training and evaluating models on biomedical NLP tasks, particularly those focused on understanding and processing medical and clinical text. It serves as a benchmark for assessing model performance in domain-specific tasks, comparing against established models like BioBERT and BioClinicalBERT. ## Acknowledgements We extend our gratitude to all contributors and supporting institutions. For utilising this dataset in academic work or applications, please cite:
930cb293cb9a8b9f1ea17c50f9161da4dc82f5f9
Text from English books on gutenberg.org tagged as fiction and with at least 25 downloads, split into paragraphs. Original dataset: sanps/GutenbergFiction. Summaries generated with cognitivecomputations/dolphin-2.6-mistral-7b. For license details see: https://www.gutenberg.org/policy/permission.html
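The paragraph split mentioned above can be sketched as a blank-line split. The dataset's exact preprocessing (e.g. any minimum paragraph length) is not documented, so treat this as an illustration, not the actual pipeline:

```python
# Minimal sketch: break raw book text into paragraphs on blank lines and
# drop empty fragments.  The real preprocessing may differ.
import re

def split_paragraphs(text: str) -> list[str]:
    parts = re.split(r"\n\s*\n", text)  # one or more blank lines separate paragraphs
    return [p.strip() for p in parts if p.strip()]

book = "It was a dark night.\nRain fell.\n\nThe door opened.\n\n\nNobody entered."
print(split_paragraphs(book))
```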
sanps/GutenbergFictionSummary
[ "language:en", "license:mit", "region:us" ]
2023-12-31T09:59:16+00:00
{"language": ["en"], "license": "mit", "pretty_name": "Gutenberg Fiction Books + Summaries", "dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "text_sub_id", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "dtype": "int64"}, {"name": "generated_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1845974229, "num_examples": 393386}], "download_size": 1156726889, "dataset_size": 1845974229}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-02T04:16:52+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
82207c7ab46c2b9465161c2da61dbcfc95b9e218
--- license: apache-2.0 task_categories: - automatic-speech-recognition language: - en pretty_name: LDV Audio size_categories: - n<1K --- # LDV-Audio
sk413025/ldv-audio
[ "region:us" ]
2023-12-31T10:56:47+00:00
{}
2023-12-31T13:53:33+00:00
[]
[]
TAGS #region-us
05105722b12661ee13913d3e5a07c9d06759721b
* https://huggingface.co/datasets/clips/mqa
kozistr/mqa-ko
[ "task_categories:question-answering", "language:ko", "license:cc0-1.0", "mqa", "region:us" ]
2023-12-31T11:04:07+00:00
{"language": ["ko"], "license": "cc0-1.0", "task_categories": ["question-answering"], "tags": ["mqa"], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 541067862, "num_examples": 1382378}], "download_size": 162865210, "dataset_size": 541067862}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-31T11:24:27+00:00
[]
[ "ko" ]
TAGS #task_categories-question-answering #language-Korean #license-cc0-1.0 #mqa #region-us
58d62e79897628b7806e8c755d4eaa2c805181d0
A Chinese story dataset generated with the Qwen model family, modeled after the TinyStories dataset. **This is not a translation of the original dataset, nor does it follow the original dataset's format. All data is AI-generated and the dataset is unfiltered; no guarantee is made that it is evenly distributed, safe and harmless, or has any other particular property. The seed information used to generate the dataset was chosen at random and carries no specific meaning.**
zhoukz/TinyStories-Qwen
[ "task_categories:text-generation", "language:zh", "license:mit", "region:us" ]
2023-12-31T11:45:29+00:00
{"language": ["zh"], "license": "mit", "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data_???.jsonl"}, {"split": "validation", "path": "data_val_???.jsonl"}]}]}
2024-01-01T13:36:11+00:00
[]
[ "zh" ]
TAGS #task_categories-text-generation #language-Chinese #license-mit #region-us
596236d2a4e71492cbfbef895aae9a39b970baa4
## Introduction This dataset, derived from the Icelandic Gigaword Corpus, is designed as a more comprehensive alternative to the existing dataset found at https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert/tree/main/is. The original dataset, derived from just 52MB of raw text from the Icelandic Wikipedia, was processed using the espeak-ng backend for normalization and phonemization. However, the Icelandic module of espeak-ng, which has not been updated for over a decade, employs an outdated IPA dialect and a simplistic approach to stress marking. Additionally, the limited phonemization capabilities of the module independently contribute to inaccuracies in the phonetic transcriptions. Significant advancements in the normalization and G2P (Grapheme-to-Phoneme) conversion of Icelandic have been made through the Icelandic Language Technology program. More information about this program can be found [here](https://clarin.is/en/links/LTProjectPlan/). The tools developed in this program have been extensively used to enhance the quality of this dataset. ## Dataset This dataset surpasses its predecessor considerably in size, incorporating not only text from the relatively small Icelandic Wikipedia but also from the extensive Icelandic Gigaword corpus. Specifically, we have enriched the [Wikipedia text](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/252) with material from the [News1 corpus](https://repository.clarin.is/repository/xmlui/handle/20.500.12537/237). To adhere to the maximum size limit of 512 MB for the raw text, we combined the complete Wikipedia text with randomly shuffled documents from the News1 corpus until reaching the size cap. In total, the dataset contains `400.676` rows, each corresponding to its associated document in the IGC corpus' XML file. 
### Cleaning Prior to processing with the [Bert](https://huggingface.co/bert-base-multilingual-cased) tokenizer, the dataset underwent cleaning, deduplication, and language detection to filter out most non-Icelandic text. Documents containing fewer than 10 words were also removed. This preprocessing resulted in the elimination of 8,146 documents from the initial 55,475 in the Wikipedia corpus (approximately 14.7%) and 28,869 from 1,545,671 in the News1 corpus (about 1.9%). The notably higher reduction in the Wikipedia corpus primarily arose from the minimum word count criterion. However, this did not significantly diminish the total volume of text, which only saw a modest decrease from 52.3MB to 49.68MB, a reduction of around 5%. ### Normalization For normalization, we adapted the [Regina Normalizer](https://github.com/grammatek/regina_normalizer), which employs a BI-LSTM Part-of-Speech (PoS) tagger. Although this makes the process somewhat time-consuming, the adaptations were necessary to handle a variety of edge cases in the diverse and sometimes unclean text within the IGC. The processing of approximately 2.5 GB of raw text took about one day, utilizing 50 CPU cores. ### Phonemization Phonemization was conducted using [IceG2P](https://github.com/grammatek/ice-g2p), which is also based on a BI-LSTM model. We made adaptations to ensure the IPA phoneset output aligns with the overall phoneset used in other PL-Bert datasets. Initially, we created and refined a new vocabulary from both the normalized Wikipedia and News1 corpora. Following this, the BI-LSTM model was employed to generate phonetic transcriptions for the dictionary. We also enhanced stress labeling and incorporated secondary stresses after conducting compound analysis. A significant byproduct of this effort is a considerably improved G2P dictionary with more than 2.1 million transcriptions, which we plan to integrate into the G2P module and various other open-source projects involving Icelandic G2P. 
Ultimately, to ensure textual coherence, all paragraphs with incorrect Grapheme-to-Phoneme (G2P) transcriptions were excluded from the dataset. ## License The dataset is distributed under the same CC-by-4.0 license as the original source material from which the data was derived.
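The cleaning steps described above (deduplication plus the 10-word minimum) can be sketched as a small filter. The language-detection step is omitted here because the card does not name the detector used; everything beyond the stated 10-word minimum is an assumption:

```python
# Illustrative sketch of the document filtering under "Cleaning":
# exact-duplicate removal and the minimum word-count criterion.
# Language detection is deliberately left out (the detector used
# by the authors is not specified in the card).
def clean_corpus(docs: list[str], min_words: int = 10) -> list[str]:
    seen: set[str] = set()
    kept: list[str] = []
    for doc in docs:
        norm = " ".join(doc.split())  # normalise whitespace for dedup
        if norm in seen:
            continue                  # drop exact duplicates
        seen.add(norm)
        if len(norm.split()) < min_words:
            continue                  # drop very short documents
        kept.append(doc)
    return kept

docs = [
    "too short",
    "one two three four five six seven eight nine ten eleven",
    "one two three four five six seven eight nine ten eleven",
]
print(len(clean_corpus(docs)))  # → 1
```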
grammatek/multilingual-pl-bert-is-updated
[ "language:is", "license:cc-by-4.0", "region:us" ]
2023-12-31T12:18:35+00:00
{"language": ["is"], "license": "cc-by-4.0"}
2024-01-08T10:08:27+00:00
[]
[ "is" ]
TAGS #language-Icelandic #license-cc-by-4.0 #region-us
9bc50568bf9e2a41098410532e3e1fca3fb7b8e4
# DL3DV Benchmark Download Instructions This repo contains all the benchmark data, including a README, License, colmaps/images (compatible with nerfstudio and 3D Gaussian Splatting), scene labels, and the performance of the methods reported in the paper (ZipNeRF, 3D GS, MipNeRF-360, nerfacto, Instant-NGP). # Download As the whole benchmark dataset is very big (~2.1T), we provide two ways to download: download the full benchmark dataset, or use a script to download a subset for memory-sensitive cases. ## Full benchmark dataset download If you have enough space (more than 2.1T), downloading the full benchmark is simple: ``` bash # Make sure you have git-lfs installed # (https://git-lfs.github.com/) git lfs install git clone https://huggingface.co/datasets/DL3DV/DL3DV-10K-Benchmark ``` ## Script download Sometimes you may just need to flexibly download a subset of the benchmark, e.g. just several scenes, or just images at 960P resolution (the images_4 level used in the paper). To provide this flexibility, we provide a [download.py](https://huggingface.co/datasets/DL3DV/DL3DV-10K-Benchmark/blob/main/download.py) script. Use this [link](https://huggingface.co/datasets/DL3DV/DL3DV-10K-Benchmark/resolve/main/download.py?download=true) to download it. The download script provides several options: * Download the full dataset (equivalent to the git clone method). In total 2.1T. * Download the full dataset with only 960P images. In total 100~150G. * Download a specific scene by name (hash name) ### Environment Setup The download script relies on `huggingface_hub`, `tqdm`, and `pandas`. You can install them with the following command in your Python environment: ```bash pip install huggingface_hub tqdm pandas ``` After installing `huggingface_hub`, remember to log in first to get ready for download. 
```bash # in terminal, use the following command and your huggingface token to log in huggingface-cli login ``` ### Download the full benchmark To download the full dataset, use this command: ``` bash # Note: it is suggested to use the --clean_cache flag, as it saves space by cleaning the cache folder created by the huggingface_hub API. python download.py --subset full --clean_cache ``` ### Download the full benchmark at 960P resolution (same as the paper) Not all methods can handle multiple resolutions; some make assumptions about the input resolution, so the paper uses 960P. ``` bash # Note: it is suggested to use the --clean_cache flag, as it saves space by cleaning the cache folder created by the huggingface_hub API. python download.py --subset full --only_level4 --clean_cache ``` ### Download with a specific scene name (hash name) There is a benchmark preview page at https://github.com/DL3DV-10K/Dataset. If you just need a specific hash (e.g. 0853979305f7ecb80bd8fc2c8df916410d471ef04ed5f1a64e9651baa41d7695), use the following command: ``` bash # Note: it is suggested to use the --clean_cache flag, as it saves space by cleaning the cache folder created by the huggingface_hub API. # e.g. a scene with hash 0853979305f7ecb80bd8fc2c8df916410d471ef04ed5f1a64e9651baa41d7695 python download.py --subset hash --hash 0853979305f7ecb80bd8fc2c8df916410d471ef04ed5f1a64e9651baa41d7695 --only_level4 ```
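The same scene-level subset can also be fetched directly with the `huggingface_hub` API instead of download.py. A minimal sketch, assuming each scene is stored under its hash with an `images_4` subfolder (verify the layout on the repo page before relying on it):

```python
# Sketch: restrict a Hub download to one scene at the 960P (images_4) level.
# The "<hash>/images_4/*" path pattern is an assumption about the repo
# layout, not documented behaviour of download.py.
def scene_patterns(scene_hash: str, only_level4: bool = True) -> list[str]:
    """Build allow_patterns limiting the download to a single scene."""
    return [f"{scene_hash}/images_4/*" if only_level4 else f"{scene_hash}/*"]

print(scene_patterns(
    "0853979305f7ecb80bd8fc2c8df916410d471ef04ed5f1a64e9651baa41d7695"
))

# The actual download (requires `huggingface-cli login` first):
# from huggingface_hub import snapshot_download
# snapshot_download(
#     repo_id="DL3DV/DL3DV-10K-Benchmark",
#     repo_type="dataset",
#     allow_patterns=scene_patterns(
#         "0853979305f7ecb80bd8fc2c8df916410d471ef04ed5f1a64e9651baa41d7695"
#     ),
#     local_dir="./DL3DV-subset",
# )
```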
DL3DV/DL3DV-10K-Benchmark
[ "size_categories:n>1T", "3D vision", "novel view synthesis", "NeRF", "3D Gaussian Splatting", "Generalizable NeRF", "Generative Methods", "text-to-3d", "image-to-3d", "region:us" ]
2023-12-31T12:23:57+00:00
{"size_categories": ["n>1T"], "pretty_name": "DL3DV", "tags": ["3D vision", "novel view synthesis", "NeRF", "3D Gaussian Splatting", "Generalizable NeRF", "Generative Methods", "text-to-3d", "image-to-3d"]}
2024-01-05T02:23:00+00:00
[]
[]
TAGS #size_categories-n>1T #3D vision #novel view synthesis #NeRF #3D Gaussian Splatting #Generalizable NeRF #Generative Methods #text-to-3d #image-to-3d #region-us
[ "TAGS\n#size_categories-n>1T #3D vision #novel view synthesis #NeRF #3D Gaussian Splatting #Generalizable NeRF #Generative Methods #text-to-3d #image-to-3d #region-us \n", "# DL3DV Benchmark Download Instructions\n\nThis repo contains all the benchmark data, including a README, License, colmaps/images (compatible to nerfstudio and 3D gaussian splatting), scene labels and the performances of methods reported in the paper (ZipNeRF, 3D GS, MipNeRF-360, nerfacto, Instant-NGP).", "# Download\nAs the whole benchmark dataset is very big (~2.1T), we provide two ways to download: full benchmark dataset download or use a script to download a subset for memory sensitive cases.", "## Full benchmark dataset download \nIf you have enough space (more than 2.1T), download the full benchmark is simple:", "## Script download \nSometimes you may just need to flexibly download a subset the benchmark, e.g. just download several scenes, or just need images with 960P resolution (images_4 level used in the paper). To provide this flexibiliy, we provide a URL script for use. \nUse this link to download.\n\nThis download script provies several different options to use: \n\n* Download the full dataset (which is equivalent to git clone method). In total 2.1T. \n* Download the full dataset with only 960P images. In total 100~150G. \n* Download with specific scene name (hash name)", "### Environment Setup \nThe download script relies on 'huggingface hub', 'tqdm', and 'pandas'. You can download by the following command in your python environment. The download script was \n\n\n\nAfter downloading 'huggingface_hub', remember to login first to get ready for download.", "### Download the full benchmark \nTo download the full dataset, use this command:", "### Download the full benchmark with 960P resolution (same with the paper)\nNot all the methods can handle multi-resolution. Some methods have assumptions on the input resolution. 
So the paper uses 960P.", "### Download with specific scene name (hash name) \nThere is a benchmark preview page in URL If you just need a specific hash (e.g. 0853979305f7ecb80bd8fc2c8df916410d471ef04ed5f1a64e9651baa41d7695), use the following command:" ]
[ 61, 90, 42, 24, 132, 72, 17, 47, 78 ]
[ "passage: TAGS\n#size_categories-n>1T #3D vision #novel view synthesis #NeRF #3D Gaussian Splatting #Generalizable NeRF #Generative Methods #text-to-3d #image-to-3d #region-us \n# DL3DV Benchmark Download Instructions\n\nThis repo contains all the benchmark data, including a README, License, colmaps/images (compatible to nerfstudio and 3D gaussian splatting), scene labels and the performances of methods reported in the paper (ZipNeRF, 3D GS, MipNeRF-360, nerfacto, Instant-NGP).# Download\nAs the whole benchmark dataset is very big (~2.1T), we provide two ways to download: full benchmark dataset download or use a script to download a subset for memory sensitive cases.## Full benchmark dataset download \nIf you have enough space (more than 2.1T), download the full benchmark is simple:## Script download \nSometimes you may just need to flexibly download a subset the benchmark, e.g. just download several scenes, or just need images with 960P resolution (images_4 level used in the paper). To provide this flexibiliy, we provide a URL script for use. \nUse this link to download.\n\nThis download script provies several different options to use: \n\n* Download the full dataset (which is equivalent to git clone method). In total 2.1T. \n* Download the full dataset with only 960P images. In total 100~150G. \n* Download with specific scene name (hash name)### Environment Setup \nThe download script relies on 'huggingface hub', 'tqdm', and 'pandas'. You can download by the following command in your python environment. The download script was \n\n\n\nAfter downloading 'huggingface_hub', remember to login first to get ready for download.### Download the full benchmark \nTo download the full dataset, use this command:### Download the full benchmark with 960P resolution (same with the paper)\nNot all the methods can handle multi-resolution. Some methods have assumptions on the input resolution. So the paper uses 960P." ]
f586306f81144389e2038e5f0db03eac39e13066
# Dataset Card for Prompt Injections by <a style="display: inline;" href="https://yanismiraoui.github.io/"> Yanis Miraoui </a> 👋 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Prompts to handle with care](#prompts-to-handle-with-care) ## Dataset Description This dataset of prompt injections enriches Large Language Models (LLMs) by providing task-specific examples and prompts, helping improve LLMs' performance and control their behavior. ### Dataset Summary This dataset contains over 1000 rows of prompt injections in multiple languages. It contains examples of prompt injections using different techniques such as prompt leaking, jailbreaking, and mode switching. ### Languages The text in the dataset is in English, French, German, Spanish, Italian, Portuguese and Romanian. ## Dataset Structure It consists of one column with the prompt injection examples. ## Considerations for Using the Data ### Prompts to handle with care This dataset of prompts has to be handled with care, as it contains examples of prompts meant to harm, mislead or jailbreak LLMs. The main goal of this dataset is to help better finetune and control LLMs.
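As an illustration of the "control their behavior" use case, one naive way to use such a dataset is to screen incoming user input against known injection examples by overlap with their opening text. This is only a sketch (the example strings in the test are invented, and a real defense would need semantic matching, e.g. embedding similarity over the full dataset):

```python
def flag_known_injection(user_prompt: str, injection_examples: list) -> bool:
    # Flag a prompt whose normalized text contains the opening of any
    # known injection example. Purely illustrative -- trivially bypassed.
    norm = " ".join(user_prompt.lower().split())
    for example in injection_examples:
        head = " ".join(example.lower().split())[:40]
        if head and head in norm:
            return True
    return False
```

With the dataset's single prompt column loaded into `injection_examples`, this gives a cheap first-pass filter before a prompt reaches the LLM.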
yanismiraoui/prompt_injections
[ "annotations_creators:no-annotation", "multilinguality:multilingual", "source_datasets:original", "language:en", "language:fr", "language:de", "language:es", "language:pt", "language:it", "language:ro", "license:apache-2.0", "prompt", "prompt injection", "jailbreak", "prompt leaking", "mode switching", "region:us" ]
2023-12-31T12:26:23+00:00
{"annotations_creators": ["no-annotation"], "language": ["en", "fr", "de", "es", "pt", "it", "ro"], "license": "apache-2.0", "multilinguality": ["multilingual"], "source_datasets": ["original"], "tags": ["prompt", "prompt injection", "jailbreak", "prompt leaking", "mode switching"]}
2023-12-31T12:57:56+00:00
[]
[ "en", "fr", "de", "es", "pt", "it", "ro" ]
TAGS #annotations_creators-no-annotation #multilinguality-multilingual #source_datasets-original #language-English #language-French #language-German #language-Spanish #language-Portuguese #language-Italian #language-Romanian #license-apache-2.0 #prompt #prompt injection #jailbreak #prompt leaking #mode switching #region-us
# Dataset Card for Prompt Injections by <a style="display: inline;" href="URL Yanis Miraoui </a> ## Table of Contents - Dataset Description - Dataset Summary - Languages - Dataset Structure - Considerations for Using the Data - Prompts to handle with care ) ## Dataset Description This dataset of prompt injections enriches Large Language Models (LLMs) by providing task-specific examples and prompts, helping improve LLMs' performance and control their behavior. ### Dataset Summary This dataset contains over 1000 rows of prompt injections in multiple languages. It contains examples of prompt injections using different techniques such as: prompt leaking, jailbreaking, and mode switching. ### Languages The text in the dataset is in English, French, German, Spanish, Italian, Portuguese and Romanian. ## Dataset Structure It consists of one column with the prompt injections examples. ## Considerations for Using the Data ### Prompts to handle with care This dataset of prompts has to be handled with care as it contains examples of prompts meant to harm, mislead or jailbreak LLMs. The goal of this dataset is to mainly help better finetune and control LLMs.
[ "# Dataset Card for Prompt Injections by <a style=\"display: inline;\" href=\"URL Yanis Miraoui </a>", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n- Considerations for Using the Data\n - Prompts to handle with care\n)", "## Dataset Description\n\nThis dataset of prompt injections enriches Large Language Models (LLMs) by providing task-specific examples and prompts, helping improve LLMs' performance and control their behavior.", "### Dataset Summary\n\nThis dataset contains over 1000 rows of prompt injections in multiple languages. It contains examples of prompt injections using different techniques such as: prompt leaking, jailbreaking, and mode switching.", "### Languages\n\nThe text in the dataset is in English, French, German, Spanish, Italian, Portuguese and Romanian.", "## Dataset Structure\n\nIt consists of one column with the prompt injections examples.", "## Considerations for Using the Data", "### Prompts to handle with care\n\nThis dataset of prompts has to be handled with care as it contains examples of prompts meant to harm, mislead or jailbreak LLMs. The goal of this dataset is to mainly help better finetune and control LLMs." ]
[ "TAGS\n#annotations_creators-no-annotation #multilinguality-multilingual #source_datasets-original #language-English #language-French #language-German #language-Spanish #language-Portuguese #language-Italian #language-Romanian #license-apache-2.0 #prompt #prompt injection #jailbreak #prompt leaking #mode switching #region-us \n", "# Dataset Card for Prompt Injections by <a style=\"display: inline;\" href=\"URL Yanis Miraoui </a>", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n- Considerations for Using the Data\n - Prompts to handle with care\n)", "## Dataset Description\n\nThis dataset of prompt injections enriches Large Language Models (LLMs) by providing task-specific examples and prompts, helping improve LLMs' performance and control their behavior.", "### Dataset Summary\n\nThis dataset contains over 1000 rows of prompt injections in multiple languages. It contains examples of prompt injections using different techniques such as: prompt leaking, jailbreaking, and mode switching.", "### Languages\n\nThe text in the dataset is in English, French, German, Spanish, Italian, Portuguese and Romanian.", "## Dataset Structure\n\nIt consists of one column with the prompt injections examples.", "## Considerations for Using the Data", "### Prompts to handle with care\n\nThis dataset of prompts has to be handled with care as it contains examples of prompts meant to harm, mislead or jailbreak LLMs. The goal of this dataset is to mainly help better finetune and control LLMs." ]
[ 103, 36, 40, 46, 55, 29, 22, 8, 67 ]
[ "passage: TAGS\n#annotations_creators-no-annotation #multilinguality-multilingual #source_datasets-original #language-English #language-French #language-German #language-Spanish #language-Portuguese #language-Italian #language-Romanian #license-apache-2.0 #prompt #prompt injection #jailbreak #prompt leaking #mode switching #region-us \n# Dataset Card for Prompt Injections by <a style=\"display: inline;\" href=\"URL Yanis Miraoui </a>## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n- Considerations for Using the Data\n - Prompts to handle with care\n)## Dataset Description\n\nThis dataset of prompt injections enriches Large Language Models (LLMs) by providing task-specific examples and prompts, helping improve LLMs' performance and control their behavior.### Dataset Summary\n\nThis dataset contains over 1000 rows of prompt injections in multiple languages. It contains examples of prompt injections using different techniques such as: prompt leaking, jailbreaking, and mode switching.### Languages\n\nThe text in the dataset is in English, French, German, Spanish, Italian, Portuguese and Romanian.## Dataset Structure\n\nIt consists of one column with the prompt injections examples.## Considerations for Using the Data### Prompts to handle with care\n\nThis dataset of prompts has to be handled with care as it contains examples of prompts meant to harm, mislead or jailbreak LLMs. The goal of this dataset is to mainly help better finetune and control LLMs." ]
5a6f24156cecbc04dc852fdf115fc5a8b680c247
# Dataset Card for "x86_c_O0_exebench_json_cleaned" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zhangshuoming/x86_c_O0_exebench_json_cleaned
[ "region:us" ]
2023-12-31T12:31:51+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1495247998.4292047, "num_examples": 679665}], "download_size": 195075844, "dataset_size": 1495247998.4292047}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-31T12:32:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "x86_c_O0_exebench_json_cleaned" More Information needed
[ "# Dataset Card for \"x86_c_O0_exebench_json_cleaned\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"x86_c_O0_exebench_json_cleaned\"\n\nMore Information needed" ]
[ 6, 28 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"x86_c_O0_exebench_json_cleaned\"\n\nMore Information needed" ]
8fc40ea26b52f7777f3871b43b00dba2c4f6a5e8
# Dataset Card for Dataset Name This dataset card aims to be a base template for zhmcclient datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description Contains code samples that use the zhmcclient library. ### Direct Use This dataset is used for fine-tuning LLMs to incorporate knowledge of the zhmcclient library in the Python language.
Prasanna16/zhmclient-dataset
[ "task_categories:text-generation", "size_categories:n<1K", "zhmcclient", "code", "python", "region:us" ]
2023-12-31T13:26:19+00:00
{"size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["zhmcclient", "code", "python"]}
2024-01-07T14:23:37+00:00
[]
[]
TAGS #task_categories-text-generation #size_categories-n<1K #zhmcclient #code #python #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for zhmcclient datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description Contains code samples which use zhmclient library. ### Direct Use This dataset is used for fine-tuning LLMs to incorporate the knowledge of ZHMCCLIENT library in Python language.
[ "# Dataset Card for Dataset Name\n\n\nThis dataset card aims to be a base template for zhmcclient datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\nContains code samples which use zhmclient library.", "### Direct Use\n\nThis dataset is used for fine-tuning LLMs to incorporate the knowledge of ZHMCCLIENT library in Python language." ]
[ "TAGS\n#task_categories-text-generation #size_categories-n<1K #zhmcclient #code #python #region-us \n", "# Dataset Card for Dataset Name\n\n\nThis dataset card aims to be a base template for zhmcclient datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\nContains code samples which use zhmclient library.", "### Direct Use\n\nThis dataset is used for fine-tuning LLMs to incorporate the knowledge of ZHMCCLIENT library in Python language." ]
[ 37, 37, 4, 18, 34 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #zhmcclient #code #python #region-us \n# Dataset Card for Dataset Name\n\n\nThis dataset card aims to be a base template for zhmcclient datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\nContains code samples which use zhmclient library.### Direct Use\n\nThis dataset is used for fine-tuning LLMs to incorporate the knowledge of ZHMCCLIENT library in Python language." ]
d1f1d9db23c27f05af63257bffba5e5fc45620ed
# About this dataset This data was created by manually extracting the "frequently asked questions" published on the websites of Japanese government agencies and turning them into an instruction dataset. Most Japanese government agency websites conform to the "Government Standard Terms of Use (Version 2.0)", which is stated to be compatible with CC-BY-4.0 (International). Reference: https://www.digital.go.jp/copyright-policy Accordingly, the copyright holders of this dataset are the respective agencies listed in each record's copyright field, and the license is likewise CC-BY-4.0 (International). The dataset creator claims no copyright. # Features - The data is in question-and-answer format. - Because the source texts were checked by national civil servants, there are almost no typos; any that exist are due to mistakes by the dataset creator (松xR) and will be corrected if pointed out. - The writing is clear, and we consider this a high-quality Japanese-language dataset. - Source URLs are attached, so the dataset can also be used as a collection of links. # Intended uses - Instruction tuning of large language models - A database for testing RAG implementations # Disclaimer - This dataset is provided as-is, and the dataset creator accepts no responsibility whatsoever for its use. - Because the data was compiled by hand, it may contain errors caused by mistakes during that work. The creator intends to fix errors that are pointed out, but corrections are not guaranteed. # Known issues - Many government documents make heavy use of bullet points, tables, and figures. Because these were mechanically converted to plain text, the plain text alone can be hard to follow. - Documents converted from PDF contain many of these problems in particular, so one option is to skip entries whose source URL ends in pdf. - Because government documents express the position of the Japanese government, some texts make strong claims. The dataset may be unsuitable if you want to avoid tuning that strongly reflects a particular position or ideology. - Filtering by human review is also effective. - Much of the Japan Meteorological Agency's data is limited to factual explanation, so one approach is to start with that agency's data only.
matsuxr/JaGovFaqs-22k
[ "task_categories:conversational", "size_categories:10K<n<100K", "language:ja", "license:cc-by-4.0", "legal", "region:us" ]
2023-12-31T13:58:41+00:00
{"language": ["ja"], "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "tags": ["legal"]}
2024-01-01T06:26:39+00:00
[]
[ "ja" ]
TAGS #task_categories-conversational #size_categories-10K<n<100K #language-Japanese #license-cc-by-4.0 #legal #region-us
# このデータセットについて このデータは、日本の官公庁のWebサイトに掲載されている「よくある質問」を手作業で抽出し、インストラクション用のデータセットとしたものです。 日本の官公庁のWebサイトは多くが「政府標準利用規約(第2.0版)」に準拠しており、この規約はCC-BY-4.0(国際)と互換性があると記述されています。 参考 URL したがって本データセットの著作権者はデータセットのcopyrightに記載された各官公庁であり、ライセンスもCC-BY-4.0(国際)です。データセット製作者は著作権を主張しません。 # 特徴 - 質問と回答の形式になっています。 - 国家公務員によるチェックを経ているので、誤字脱字がほぼありません。もしあったら、このデータセット製作者(松xR)のミスに起因するものです。御指摘いただければ修正いたします。 - 論旨も明快で、日本語として品質の高いデータセットであると考えています。 - ソースとなるURLも添付しているため、リンク集としても活用出来ます # 想定する利用法 - 大規模言語モデルのInstruction Tuning - RAGの実装テストのためのデータベース # 免責事項 - 本データセットは現状のままで提供され、データセット製作者はその利用における一切の責任を負いません。 - 手作業でデータをまとめたため、作業ミスによるデータの間違いがある可能性があります。御指摘いただければ修正する意志はありますが、修正を保証するものではありません。 # 現時点で存在する課題 - 官公庁の文書は多くが、箇条書き、表組み、図示を活用して作成されています。これらを機械的にプレーンテキストに変換しているため、プレーンテキストだけを見ても意味が通りにくいことがあります。 - 特にPDFから変換したものは上記の問題を多く含むため、ソースURLの末尾がpdfになっているものは利用しない、などの方法が考えられます - 官公庁の文書は日本政府の立場を表明するものであるため、主張の強い文章も含まれます。特定の立場・思想が強く反映されたチューニングを行いたくない場合には、適さない可能性があります。 - 人の目でフィルタリングするのも有効です。 - 気象庁のデータは事実説明にとどまるものが多く、まずは気象庁のデータだけを利用することも一つの方法です。
[ "# このデータセットについて\n\nこのデータは、日本の官公庁のWebサイトに掲載されている「よくある質問」を手作業で抽出し、インストラクション用のデータセットとしたものです。\n\n日本の官公庁のWebサイトは多くが「政府標準利用規約(第2.0版)」に準拠しており、この規約はCC-BY-4.0(国際)と互換性があると記述されています。\n\n参考 URL\n\nしたがって本データセットの著作権者はデータセットのcopyrightに記載された各官公庁であり、ライセンスもCC-BY-4.0(国際)です。データセット製作者は著作権を主張しません。", "# 特徴\n\n- 質問と回答の形式になっています。\n- 国家公務員によるチェックを経ているので、誤字脱字がほぼありません。もしあったら、このデータセット製作者(松xR)のミスに起因するものです。御指摘いただければ修正いたします。\n- 論旨も明快で、日本語として品質の高いデータセットであると考えています。\n- ソースとなるURLも添付しているため、リンク集としても活用出来ます", "# 想定する利用法\n\n- 大規模言語モデルのInstruction Tuning\n- RAGの実装テストのためのデータベース", "# 免責事項\n\n- 本データセットは現状のままで提供され、データセット製作者はその利用における一切の責任を負いません。\n- 手作業でデータをまとめたため、作業ミスによるデータの間違いがある可能性があります。御指摘いただければ修正する意志はありますが、修正を保証するものではありません。", "# 現時点で存在する課題\n\n- 官公庁の文書は多くが、箇条書き、表組み、図示を活用して作成されています。これらを機械的にプレーンテキストに変換しているため、プレーンテキストだけを見ても意味が通りにくいことがあります。\n - 特にPDFから変換したものは上記の問題を多く含むため、ソースURLの末尾がpdfになっているものは利用しない、などの方法が考えられます\n- 官公庁の文書は日本政府の立場を表明するものであるため、主張の強い文章も含まれます。特定の立場・思想が強く反映されたチューニングを行いたくない場合には、適さない可能性があります。\n - 人の目でフィルタリングするのも有効です。\n - 気象庁のデータは事実説明にとどまるものが多く、まずは気象庁のデータだけを利用することも一つの方法です。" ]
[ "TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-Japanese #license-cc-by-4.0 #legal #region-us \n", "# このデータセットについて\n\nこのデータは、日本の官公庁のWebサイトに掲載されている「よくある質問」を手作業で抽出し、インストラクション用のデータセットとしたものです。\n\n日本の官公庁のWebサイトは多くが「政府標準利用規約(第2.0版)」に準拠しており、この規約はCC-BY-4.0(国際)と互換性があると記述されています。\n\n参考 URL\n\nしたがって本データセットの著作権者はデータセットのcopyrightに記載された各官公庁であり、ライセンスもCC-BY-4.0(国際)です。データセット製作者は著作権を主張しません。", "# 特徴\n\n- 質問と回答の形式になっています。\n- 国家公務員によるチェックを経ているので、誤字脱字がほぼありません。もしあったら、このデータセット製作者(松xR)のミスに起因するものです。御指摘いただければ修正いたします。\n- 論旨も明快で、日本語として品質の高いデータセットであると考えています。\n- ソースとなるURLも添付しているため、リンク集としても活用出来ます", "# 想定する利用法\n\n- 大規模言語モデルのInstruction Tuning\n- RAGの実装テストのためのデータベース", "# 免責事項\n\n- 本データセットは現状のままで提供され、データセット製作者はその利用における一切の責任を負いません。\n- 手作業でデータをまとめたため、作業ミスによるデータの間違いがある可能性があります。御指摘いただければ修正する意志はありますが、修正を保証するものではありません。", "# 現時点で存在する課題\n\n- 官公庁の文書は多くが、箇条書き、表組み、図示を活用して作成されています。これらを機械的にプレーンテキストに変換しているため、プレーンテキストだけを見ても意味が通りにくいことがあります。\n - 特にPDFから変換したものは上記の問題を多く含むため、ソースURLの末尾がpdfになっているものは利用しない、などの方法が考えられます\n- 官公庁の文書は日本政府の立場を表明するものであるため、主張の強い文章も含まれます。特定の立場・思想が強く反映されたチューニングを行いたくない場合には、適さない可能性があります。\n - 人の目でフィルタリングするのも有効です。\n - 気象庁のデータは事実説明にとどまるものが多く、まずは気象庁のデータだけを利用することも一つの方法です。" ]
[ 45, 135, 91, 27, 64, 178 ]
[ "passage: TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-Japanese #license-cc-by-4.0 #legal #region-us \n# このデータセットについて\n\nこのデータは、日本の官公庁のWebサイトに掲載されている「よくある質問」を手作業で抽出し、インストラクション用のデータセットとしたものです。\n\n日本の官公庁のWebサイトは多くが「政府標準利用規約(第2.0版)」に準拠しており、この規約はCC-BY-4.0(国際)と互換性があると記述されています。\n\n参考 URL\n\nしたがって本データセットの著作権者はデータセットのcopyrightに記載された各官公庁であり、ライセンスもCC-BY-4.0(国際)です。データセット製作者は著作権を主張しません。# 特徴\n\n- 質問と回答の形式になっています。\n- 国家公務員によるチェックを経ているので、誤字脱字がほぼありません。もしあったら、このデータセット製作者(松xR)のミスに起因するものです。御指摘いただければ修正いたします。\n- 論旨も明快で、日本語として品質の高いデータセットであると考えています。\n- ソースとなるURLも添付しているため、リンク集としても活用出来ます# 想定する利用法\n\n- 大規模言語モデルのInstruction Tuning\n- RAGの実装テストのためのデータベース# 免責事項\n\n- 本データセットは現状のままで提供され、データセット製作者はその利用における一切の責任を負いません。\n- 手作業でデータをまとめたため、作業ミスによるデータの間違いがある可能性があります。御指摘いただければ修正する意志はありますが、修正を保証するものではありません。" ]
4be068dd628bf98e0449be6678dd1167bad56d97
Description: Created for training models on fiction generation. Dataset has pairs of LLM-generated summaries and corresponding narrative texts from popular English fiction on Project Gutenberg. Original dataset: sanps/GutenbergFictionSummary Summaries are produced by cognitivecomputations/dolphin-2.6-mistral-7b. The texts are from English fiction books on Project Gutenberg, tagged as fiction and with a minimum of 25 downloads to ensure quality and interest. The dataset is organized into different splits. Each entry in a split consists of 1-4 contiguous book sections and their summaries. Splits: - train_full: 150k rows - sample_train: 5k rows - val: 18.2k rows - train1, train2, train3: 50k rows each - small_val: 5k rows Data Format: JSON array of objects: [ {"summary_text": "Generated summary", "book_text": "Extended text"}, ... (up to 4 pairs per entry)] File ID: The id of the book in Project Gutenberg. Licensing: See Project Gutenberg's policy: https://www.gutenberg.org/policy/permission.html
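The documented data format can be checked with a small sketch (the `sample` entry below is invented to match the schema described above, not taken from the dataset):

```python
import json

def validate_entry(raw: str) -> list:
    # Check one entry against the documented format: a JSON array of
    # 1-4 objects, each holding a summary_text / book_text pair.
    entry = json.loads(raw)
    if not (isinstance(entry, list) and 1 <= len(entry) <= 4):
        raise ValueError("entry must be a list of 1-4 pairs")
    for pair in entry:
        if set(pair) != {"summary_text", "book_text"}:
            raise ValueError("each pair needs summary_text and book_text")
    return entry

sample = json.dumps([
    {"summary_text": "Generated summary", "book_text": "Extended text"},
    {"summary_text": "Summary of next section", "book_text": "Next contiguous section"},
])
```

`validate_entry(sample)` returns the parsed list of pairs, and raises for entries that drift from the schema.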
sanps/GutenbergFictionSummaryPrepared
[ "language:en", "license:mit", "region:us" ]
2023-12-31T14:05:09+00:00
{"language": ["en"], "license": "mit", "pretty_name": "Gutenberg Fiction Summaries and Text", "dataset_info": {"features": [{"name": "file_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "book_text", "dtype": "string"}, {"name": "summary_text", "dtype": "string"}]}], "splits": [{"name": "train_full", "num_bytes": 1638018755, "num_examples": 150000}, {"name": "sample_train", "num_bytes": 54829870, "num_examples": 5000}, {"name": "val", "num_bytes": 198773079, "num_examples": 18238}, {"name": "train1", "num_bytes": 546267109, "num_examples": 50000}, {"name": "train2", "num_bytes": 546204032, "num_examples": 50000}, {"name": "train3", "num_bytes": 545547614, "num_examples": 50000}, {"name": "small_val", "num_bytes": 54739280, "num_examples": 5000}], "download_size": 2268933771, "dataset_size": 3584379739}, "configs": [{"config_name": "default", "data_files": [{"split": "train_full", "path": "data/train_full-*"}, {"split": "sample_train", "path": "data/sample_train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "train1", "path": "data/train1-*"}, {"split": "train2", "path": "data/train2-*"}, {"split": "train3", "path": "data/train3-*"}, {"split": "small_val", "path": "data/small_val-*"}]}]}
2024-01-02T04:17:33+00:00
[]
[ "en" ]
TAGS #language-English #license-mit #region-us
Description: Created for training models on fiction generation. Dataset has pairs of LLM-generated summaries and corresponding narrative texts from popular English fiction on Project Gutenberg. Orignal dataset: sanps/GutenbergFictionSummary Summaries are produced by cognitivecomputations/dolphin-2.6-mistral-7b. The text are from English fiction books on Project Gutenberg, tagged for fiction and with a minimum of 25 downloads to ensure quality and interest. The dataset is organized into different splits. Each entry in a split consist of 1-4 contiguous book sections and summaries. Splits: - train_full: 150k rows - sample_train: 5k rows - val: 18.2k rows - train1, train2, train3: 50k rows each - small_val: 5k rows Data Format: JSON array of objects: [ {"summary_text": "Generated summary", "book_text": "Extended text"}, ... (up to 4 pairs per entry)] File ID: The id of the book in Project Gutenberg. Licensing: See Project Gutenberg's policy: URL
[]
[ "TAGS\n#language-English #license-mit #region-us \n" ]
[ 15 ]
[ "passage: TAGS\n#language-English #license-mit #region-us \n" ]
51a3d45a3dd60adc4174963a844d0597383c604a
# Dataset Card for "dummy_text_dataset" Dummy text dataset with 2048 random sequences of characters of length 10 to 1024.
gmongaras/dummy_text_dataset
[ "region:us" ]
2023-12-31T15:10:48+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1063271, "num_examples": 2048}], "download_size": 1079397, "dataset_size": 1063271}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-31T15:13:20+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dummy_text_dataset" Dummy text dataset with 2048 random sequences of characters of length 10 to 1024.
[ "# Dataset Card for \"dummy_text_dataset\"\n\nDummy text dataset with 2048 random sequences of characters of length 10 to 1024." ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dummy_text_dataset\"\n\nDummy text dataset with 2048 random sequences of characters of length 10 to 1024." ]
[ 6, 34 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dummy_text_dataset\"\n\nDummy text dataset with 2048 random sequences of characters of length 10 to 1024." ]
2eac1516fdee7407cc2a93ef93558df95de65d98
## Attributes: This dataset comprises three attributes: the first corresponds to Headlines 1, the second to Headlines 2, and the third to the target variable. Both sentences are associated with news extracted from Google News, while the target variable indicates whether both sentences are related to the same event (1) or not (0). ## Data Source: The dataset is derived from Google News headlines between July 23, 2022, and July 30, 2022, which were manually annotated. ## Data Format: The dataset is provided in a tabular format, with each row representing a set of two sentences and the corresponding target variable.
cmunhozc/google_news_en
[ "task_categories:text-classification", "size_categories:10K<n<100K", "language:en", "license:mit", "CENIA", "News", "region:us" ]
2023-12-31T16:20:28+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "tags": ["CENIA", "News"]}
2024-01-03T20:13:09+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #CENIA #News #region-us
## Attributes: This dataset comprises three attributes: the first corresponds to Headlines 1, the second to Headlines 2, and the third to the target variable. Both sentences are associated with news extracted from Google News, while the target variable indicates whether both sentences are related to the same event (1) or not (0). ## Data Source: The dataset is derived from Google News headlines between July 23, 2022, and July 30, 2022, which were manually annotated. ## Data Format: The dataset is provided in a tabular format, with each row representing a set of two sentences and the corresponding target variable.
[ "## Attributes:\n\nThis dataset comprises three attributes: the first corresponds to Headlines 1, the second to Headlines 2, and the third to the target variable. Both sentences are associated with news extracted from Google News, while the target variable indicates whether both sentences are related to the same event (1) or not (0).", "## Data Source:\n\nThe dataset is derived from Google News headlines between July 23, 2022, and July 30, 2022, which were manually annotated.", "## Data Format:\n\nThe dataset is provided in a tabular format, with each row representing a set of two sentences and the corresponding target variable." ]
[ "TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #CENIA #News #region-us \n", "## Attributes:\n\nThis dataset comprises three attributes: the first corresponds to Headlines 1, the second to Headlines 2, and the third to the target variable. Both sentences are associated with news extracted from Google News, while the target variable indicates whether both sentences are related to the same event (1) or not (0).", "## Data Source:\n\nThe dataset is derived from Google News headlines between July 23, 2022, and July 30, 2022, which were manually annotated.", "## Data Format:\n\nThe dataset is provided in a tabular format, with each row representing a set of two sentences and the corresponding target variable." ]
[ 43, 72, 34, 34 ]
[ "passage: TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-mit #CENIA #News #region-us \n## Attributes:\n\nThis dataset comprises three attributes: the first corresponds to Headlines 1, the second to Headlines 2, and the third to the target variable. Both sentences are associated with news extracted from Google News, while the target variable indicates whether both sentences are related to the same event (1) or not (0).## Data Source:\n\nThe dataset is derived from Google News headlines between July 23, 2022, and July 30, 2022, which were manually annotated.## Data Format:\n\nThe dataset is provided in a tabular format, with each row representing a set of two sentences and the corresponding target variable." ]
aa50d6df276fdec33336c64474f97909edacc2dc
# Dataset Card for primer_demo_ejemplo_ds_semantic ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset description](#dataset-description) - [Dataset categories](#dataset-categories) ## Dataset description - **Homepage:** https://huggingface.co/datasets/Lit4pCol4b/primer_demo_ejemplo_ds_semantic ## Dataset categories | Id | Name | Description | | --- | ---- | ----------- | | 1 | name_1 | - | | 2 | name_2 | - | | 3 | name_3 | - |
Lit4pCol4b/primer_demo_ejemplo_ds_semantic
[ "task_categories:image-segmentation", "region:us" ]
2023-12-31T16:23:18+00:00
{"task_categories": ["image-segmentation"]}
2024-01-02T02:23:55+00:00
[]
[]
TAGS #task_categories-image-segmentation #region-us
Dataset Card for primer\_demo\_ejemplo\_ds\_semantic ==================================================== Table of Contents ----------------- * Table of Contents * Dataset description * Dataset categories Dataset description ------------------- * Homepage: URL Dataset categories ------------------ Id: 1, Name: name\_1, Description: - Id: 2, Name: name\_2, Description: - Id: 3, Name: name\_3, Description: -
[]
[ "TAGS\n#task_categories-image-segmentation #region-us \n" ]
[ 18 ]
[ "passage: TAGS\n#task_categories-image-segmentation #region-us \n" ]
6437a276745acf9f9c58bae06e982edec81facde
## USC Course Catalog - 2024, Spring Term This dataset consists of all classes USC is offering in the 2024 Spring term (as of December 1, 2023). While it is a small dataset, it could be used for some fine-tuning, generation, or RAG application tasks. One example would be this -> https://huggingface.co/spaces/USC/USC-GPT **I will also be scraping the 2024 fall term classes when they are released by USC!** If you want the web scraping script I used for this, feel free to send me an email at [email protected]:)
USC/USC-Course-Catalog
[ "license:apache-2.0", "region:us" ]
2023-12-31T16:55:52+00:00
{"license": "apache-2.0"}
2023-12-31T17:05:04+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
## USC Course Catalog - 2024, Spring Term This dataset consists of all classes provided by USC (as of December 1, 2023) that USC is providing in 2024 Spring. While it is a small dataset, this could be used in some finetuning, generation, or RAG application tasks. One example would be this -> URL I will also be scraping the 2024 fall term classes when they are released by USC! If you want the web scraping script I used for this, feel free to send me an email at brandonhulston1@URL:)
[ "## USC Course Catalog - 2024, Spring Term\nThis dataset consists of all classes provided by USC (as of December 1, 2023) that USC is providing in 2024 Spring.\nWhile it is a small dataset, this could be used in some finetuning, generation, or RAG application tasks. One example would be this -> URL\n\nI will also be scraping the 2024 fall term classes when they are released by USC!\n\nIf you want the web scraping script I used for this, feel free to send me an email at brandonhulston1@URL:)" ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "## USC Course Catalog - 2024, Spring Term\nThis dataset consists of all classes provided by USC (as of December 1, 2023) that USC is providing in 2024 Spring.\nWhile it is a small dataset, this could be used in some finetuning, generation, or RAG application tasks. One example would be this -> URL\n\nI will also be scraping the 2024 fall term classes when they are released by USC!\n\nIf you want the web scraping script I used for this, feel free to send me an email at brandonhulston1@URL:)" ]
[ 14, 126 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n## USC Course Catalog - 2024, Spring Term\nThis dataset consists of all classes provided by USC (as of December 1, 2023) that USC is providing in 2024 Spring.\nWhile it is a small dataset, this could be used in some finetuning, generation, or RAG application tasks. One example would be this -> URL\n\nI will also be scraping the 2024 fall term classes when they are released by USC!\n\nIf you want the web scraping script I used for this, feel free to send me an email at brandonhulston1@URL:)" ]
dacfc1bc7960967bb515a46e66b07e33bb081ddf
### Human or AI-Generated Text The data can be valuable for educators, policymakers, and researchers interested in the evolving education landscape, particularly in detecting or identifying texts written by Humans or Artificial Intelligence systems. #### File Name `model_training_dataset.csv` #### File Structure - `id`: Unique identifier for each record. - `human_text`: Human-written content. - `ai_text`: AI-generated texts. - `instructions`: Description of the task given to both Humans and AI. #### Acknowledgement Thanks to [0xnu](https://finbarrs.eu/) for sharing the file upon request. #### Citation To reference this dataset in academic work, please use the following citation: ```bibtex @article{abiodunfinbarrsoketunji-agtd2023, title={Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text}, author={Abiodun Finbarrs Oketunji}, journal={arXiv:2311.15565v2}, year={2023} } ```
dmitva/human_ai_generated_text
[ "language:en", "license:cc-by-4.0", "nlp", "human", "ai", "text", "doi:10.57967/hf/1617", "region:us" ]
2023-12-31T16:56:40+00:00
{"language": ["en"], "license": "cc-by-4.0", "tags": ["nlp", "human", "ai", "text"]}
2024-01-16T06:25:14+00:00
[]
[ "en" ]
TAGS #language-English #license-cc-by-4.0 #nlp #human #ai #text #doi-10.57967/hf/1617 #region-us
### Human or AI-Generated Text The data can be valuable for educators, policymakers, and researchers interested in the evolving education landscape, particularly in detecting or identifying texts written by Humans or Artificial Intelligence systems. #### File Name 'model_training_dataset.csv' #### File Structure - 'id': Unique identifier for each record. - 'human_text': Human-written content. - 'ai_text': AI-generated texts. - 'instructions': Description of the task given to both Humans and AI. #### Acknowledgement Thanks to 0xnu for sharing the file after contacting him and requesting it. To reference this dataset in academic work, please use the following citation:
[ "### Human or AI-Generated Text\n\nThe data can be valuable for educators, policymakers, and researchers interested in the evolving education landscape, particularly in detecting or identifying texts written by Humans or Artificial Intelligence systems.", "#### File Name\n\n'model_training_dataset.csv'", "#### File Structure\n\n- 'id': Unique identifier for each record.\n- 'human_text': Human-written content.\n- 'ai_text': AI-generated texts.\n- 'instructions': Description of the task given to both Humans and AI.", "#### Acknowledgement\n\nThanks to 0xnu for sharing the file after contacting him and requesting it.\n\nTo reference this dataset in academic work, please use the following citation:" ]
[ "TAGS\n#language-English #license-cc-by-4.0 #nlp #human #ai #text #doi-10.57967/hf/1617 #region-us \n", "### Human or AI-Generated Text\n\nThe data can be valuable for educators, policymakers, and researchers interested in the evolving education landscape, particularly in detecting or identifying texts written by Humans or Artificial Intelligence systems.", "#### File Name\n\n'model_training_dataset.csv'", "#### File Structure\n\n- 'id': Unique identifier for each record.\n- 'human_text': Human-written content.\n- 'ai_text': AI-generated texts.\n- 'instructions': Description of the task given to both Humans and AI.", "#### Acknowledgement\n\nThanks to 0xnu for sharing the file after contacting him and requesting it.\n\nTo reference this dataset in academic work, please use the following citation:" ]
[ 40, 52, 15, 65, 40 ]
[ "passage: TAGS\n#language-English #license-cc-by-4.0 #nlp #human #ai #text #doi-10.57967/hf/1617 #region-us \n### Human or AI-Generated Text\n\nThe data can be valuable for educators, policymakers, and researchers interested in the evolving education landscape, particularly in detecting or identifying texts written by Humans or Artificial Intelligence systems.#### File Name\n\n'model_training_dataset.csv'#### File Structure\n\n- 'id': Unique identifier for each record.\n- 'human_text': Human-written content.\n- 'ai_text': AI-generated texts.\n- 'instructions': Description of the task given to both Humans and AI.#### Acknowledgement\n\nThanks to 0xnu for sharing the file after contacting him and requesting it.\n\nTo reference this dataset in academic work, please use the following citation:" ]
b17ebe6f5b371d795a66bfeec5d802d9231dbaf6
# Librusec dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Description](#description) - [Usage](#usage) - [Personal and Sensitive Information](#personal-and-sensitive-information) ## Description A dump of the Librusec library, mainly in Russian, including all metadata. Usage of this dataset is possible only for scientific purposes on a non-commercial basis. **Script:** [parse_zip_fb2.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/parse_zip_fb2.py) **Source:** booktracker **Point of Contact:** [Ilya Gusev]([email protected]) **Languages:** Mostly Russian ## Usage Prerequisites: ```bash pip install datasets zstandard jsonlines pysimdjson ``` Dataset iteration: ```python from datasets import load_dataset for example in load_dataset('IlyaGusev/librusec_full', split="train", streaming=True): print(example) ```
IlyaGusev/librusec_full
[ "task_categories:text-generation", "size_categories:100K<n<1M", "language:ru", "not-for-all-audiences", "region:us" ]
2023-12-31T17:34:04+00:00
{"language": ["ru"], "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Librusec", "tags": ["not-for-all-audiences"], "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "file_name", "dtype": "string"}, {"name": "annotation", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "genre", "dtype": "string"}, {"name": "authors", "sequence": "string"}, {"name": "lang", "dtype": "string"}, {"name": "src_lang", "dtype": "string"}, {"name": "translator", "dtype": "string"}, {"name": "isbn", "dtype": "string"}, {"name": "publisher", "dtype": "string"}, {"name": "city", "dtype": "string"}, {"name": "year", "dtype": "string"}, {"name": "book_name", "dtype": "string"}, {"name": "fancy_title", "dtype": "string"}, {"name": "epigraphs", "sequence": "string"}, {"name": "sections", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 331580234891, "num_examples": 496858}], "download_size": 95230708690, "dataset_size": 331580234891}}
2024-01-04T16:58:54+00:00
[]
[ "ru" ]
TAGS #task_categories-text-generation #size_categories-100K<n<1M #language-Russian #not-for-all-audiences #region-us
# Ficbook dataset ## Table of Contents - Table of Contents - Description - Usage - Personal and Sensitive Information ## Description A dump of the library, mainly in Russian, including all metadata. Usage of this dataset is possible only for scientific purposes on a non-commercial basis. Script: parse_zip_fb2.py Source: booktracker Point of Contact: Ilya Gusev Languages: Mostly Russian ## Usage Prerequisites: Dataset iteration:
[ "# Ficbook dataset", "## Table of Contents\n- Table of Contents\n- Description\n- Usage\n- Personal and Sensitive Information", "## Description\n\nA dump of the library, mainly in Russian, including all metadata. Usage of this dataset is possible only for scientific purposes on a non-commercial basis.\n\nScript: parse_zip_fb2.py\n\nSource: booktracker\n\nPoint of Contact: Ilya Gusev\n\nLanguages: Mostly Russian", "## Usage\n\nPrerequisites:\n\n\n\nDataset iteration:" ]
[ "TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #not-for-all-audiences #region-us \n", "# Ficbook dataset", "## Table of Contents\n- Table of Contents\n- Description\n- Usage\n- Personal and Sensitive Information", "## Description\n\nA dump of the library, mainly in Russian, including all metadata. Usage of this dataset is possible only for scientific purposes on a non-commercial basis.\n\nScript: parse_zip_fb2.py\n\nSource: booktracker\n\nPoint of Contact: Ilya Gusev\n\nLanguages: Mostly Russian", "## Usage\n\nPrerequisites:\n\n\n\nDataset iteration:" ]
[ 43, 6, 22, 72, 14 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #not-for-all-audiences #region-us \n# Ficbook dataset## Table of Contents\n- Table of Contents\n- Description\n- Usage\n- Personal and Sensitive Information## Description\n\nA dump of the library, mainly in Russian, including all metadata. Usage of this dataset is possible only for scientific purposes on a non-commercial basis.\n\nScript: parse_zip_fb2.py\n\nSource: booktracker\n\nPoint of Contact: Ilya Gusev\n\nLanguages: Mostly Russian## Usage\n\nPrerequisites:\n\n\n\nDataset iteration:" ]
61adb4bc3dba844e295cab00f7d68e28715ac25a
# Website Screenshots Image Dataset <!-- Provide a quick summary of the dataset. --> This dataset is obtainable [here from roboflow.](https://universe.roboflow.com/roboflow-gw7yv/website-screenshots). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Language(s) (NLP):** [English] - **License:** [MIT] ### Dataset Sources <!-- Provide the basic links for the dataset. --> - **Source:** [https://universe.roboflow.com/roboflow-gw7yv/website-screenshots/dataset/1] ## Uses <!-- Address questions around how the dataset is intended to be used. --> From the roboflow website: > Annotated screenshots are very useful in Robotic Process Automation. But they can be expensive to label. This dataset would cost over $4000 for humans to label on popular labeling services. We hope this dataset provides a good starting point for your project. ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> The Roboflow Website Screenshots dataset is a synthetically generated dataset composed of screenshots from over 1000 of the world's top websites ### Annotations <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> - button: navigation links, tabs, etc. - heading: text that was enclosed in \<h1> to \<h6> tags. - link: inline, textual \<a> tags. - label: text labeling form fields. - text: all other text. - image: \<img>, \<svg>, or \<video> tags, and icons. - iframe: ads and 3rd party content. #### label2id ```python label2id = { 'button': 1, 'elements': 0, 'field': 2, 'heading': 3, 'iframe': 4, 'image': 5, 'label': 6, 'link': 7, 'text': 8 } ``` #### id2label ```python id2label = { 0: 'elements', 1: 'button', 2: 'field', 3: 'heading', 4: 'iframe', 5: 'image', 6: 'label', 7: 'link', 8: 'text' } ```
Zexanima/website_screenshots_image_dataset
[ "task_categories:object-detection", "language:en", "license:mit", "web", "website", "region:us" ]
2023-12-31T17:34:58+00:00
{"language": ["en"], "license": "mit", "task_categories": ["object-detection"], "dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "url", "dtype": "null"}, {"name": "date_captured", "dtype": "string"}, {"name": "objects", "list": [{"name": "area", "dtype": "int64"}, {"name": "bbox", "sequence": "int64"}, {"name": "category_id", "dtype": "int64"}, {"name": "id", "dtype": "int64"}, {"name": "image_id", "dtype": "int64"}, {"name": "iscrowd", "dtype": "int64"}, {"name": "segmentation", "sequence": "null"}]}], "splits": [{"name": "test", "num_bytes": 22424625, "num_examples": 242}, {"name": "train", "num_bytes": 159535409.08, "num_examples": 1688}, {"name": "valid", "num_bytes": 46104875, "num_examples": 482}], "download_size": 201411511, "dataset_size": 228064909.08}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}], "tags": ["web", "website"]}
2023-12-31T21:39:29+00:00
[]
[ "en" ]
TAGS #task_categories-object-detection #language-English #license-mit #web #website #region-us
# Website Screenshots Image Dataset This dataset is obtainable here from roboflow.. ## Dataset Details ### Dataset Description - Language(s) (NLP): [English] - License: [MIT] ### Dataset Sources - Source: [URL ## Uses From the roboflow website: > Annotated screenshots are very useful in Robotic Process Automation. But they can be expensive to label. This dataset would cost over $4000 for humans to label on popular labeling services. We hope this dataset provides a good starting point for your project. ### Source Data The Roboflow Website Screenshots dataset is a synthetically generated dataset composed of screenshots from over 1000 of the world's top websites ### Annotations - button: navigation links, tabs, etc. - heading: text that was enclosed in \<h1> to \<h6> tags. - link: inline, textual \<a> tags. - label: text labeling form fields. - text: all other text. - image: \<img>, \<svg>, or \<video> tags, and icons. - iframe: ads and 3rd party content. #### label2id #### id2label
[ "# Website Screenshots Image Dataset\n\n\n\nThis dataset is obtainable here from roboflow..", "## Dataset Details", "### Dataset Description\n\n\n\n\n- Language(s) (NLP): [English]\n- License: [MIT]", "### Dataset Sources\n\n\n\n- Source: [URL", "## Uses\n\n\nFrom the roboflow website:\n> Annotated screenshots are very useful in Robotic Process Automation. But they can be expensive to label. This dataset would cost over $4000 for humans to label on popular labeling services. We hope this dataset provides a good starting point for your project.", "### Source Data\n\n\nThe Roboflow Website Screenshots dataset is a synthetically generated dataset composed of screenshots from over 1000 of the world's top websites", "### Annotations\n\n\n\n- button: navigation links, tabs, etc.\n- heading: text that was enclosed in \\<h1> to \\<h6> tags.\n- link: inline, textual \\<a> tags.\n- label: text labeling form fields.\n- text: all other text.\n- image: \\<img>, \\<svg>, or \\<video> tags, and icons.\n- iframe: ads and 3rd party content.", "#### label2id", "#### id2label" ]
[ "TAGS\n#task_categories-object-detection #language-English #license-mit #web #website #region-us \n", "# Website Screenshots Image Dataset\n\n\n\nThis dataset is obtainable here from roboflow..", "## Dataset Details", "### Dataset Description\n\n\n\n\n- Language(s) (NLP): [English]\n- License: [MIT]", "### Dataset Sources\n\n\n\n- Source: [URL", "## Uses\n\n\nFrom the roboflow website:\n> Annotated screenshots are very useful in Robotic Process Automation. But they can be expensive to label. This dataset would cost over $4000 for humans to label on popular labeling services. We hope this dataset provides a good starting point for your project.", "### Source Data\n\n\nThe Roboflow Website Screenshots dataset is a synthetically generated dataset composed of screenshots from over 1000 of the world's top websites", "### Annotations\n\n\n\n- button: navigation links, tabs, etc.\n- heading: text that was enclosed in \\<h1> to \\<h6> tags.\n- link: inline, textual \\<a> tags.\n- label: text labeling form fields.\n- text: all other text.\n- image: \\<img>, \\<svg>, or \\<video> tags, and icons.\n- iframe: ads and 3rd party content.", "#### label2id", "#### id2label" ]
[ 30, 20, 4, 23, 11, 65, 37, 115, 5, 5 ]
[ "passage: TAGS\n#task_categories-object-detection #language-English #license-mit #web #website #region-us \n# Website Screenshots Image Dataset\n\n\n\nThis dataset is obtainable here from roboflow..## Dataset Details### Dataset Description\n\n\n\n\n- Language(s) (NLP): [English]\n- License: [MIT]### Dataset Sources\n\n\n\n- Source: [URL## Uses\n\n\nFrom the roboflow website:\n> Annotated screenshots are very useful in Robotic Process Automation. But they can be expensive to label. This dataset would cost over $4000 for humans to label on popular labeling services. We hope this dataset provides a good starting point for your project.### Source Data\n\n\nThe Roboflow Website Screenshots dataset is a synthetically generated dataset composed of screenshots from over 1000 of the world's top websites### Annotations\n\n\n\n- button: navigation links, tabs, etc.\n- heading: text that was enclosed in \\<h1> to \\<h6> tags.\n- link: inline, textual \\<a> tags.\n- label: text labeling form fields.\n- text: all other text.\n- image: \\<img>, \\<svg>, or \\<video> tags, and icons.\n- iframe: ads and 3rd party content.#### label2id#### id2label" ]
c7a628b0cdf4908e87aa10e05407a795b8758bf9
# Deita-10k-V0 in Chat ML Format https://huggingface.co/datasets/hkust-nlp/deita-10k-v0 but in ChatML format.
smangrul/deita-10k-v0-chatml
[ "region:us" ]
2023-12-31T17:58:56+00:00
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "source", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 325581880.5, "num_examples": 9000}, {"name": "test", "num_bytes": 36175764.5, "num_examples": 1000}], "download_size": 143691873, "dataset_size": 361757645.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2023-12-31T18:03:27+00:00
[]
[]
TAGS #region-us
# Deita-10k-V0 in Chat ML Format URL but in ChatML format.
[ "# Deita-10k-V0 in Chat ML Format\n\nURL but in ChatML format." ]
[ "TAGS\n#region-us \n", "# Deita-10k-V0 in Chat ML Format\n\nURL but in ChatML format." ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Deita-10k-V0 in Chat ML Format\n\nURL but in ChatML format." ]
80a8036bbf83395a1724c3f0ee2bca236272d363
title: Lexicon Dataset for the Hausa Language with English translation --- license: cc-by-nd-4.0 ---
mangaphd/hausa_aug_lex
[ "size_categories:10K<n<100K", "language:ha", "license:cc-by-nd-4.0", "doi:10.57967/hf/1541", "region:us" ]
2023-12-31T19:23:56+00:00
{"language": ["ha"], "license": "cc-by-nd-4.0", "size_categories": ["10K<n<100K"], "pretty_name": "hausa_aug_lex"}
2024-01-20T17:09:03+00:00
[]
[ "ha" ]
TAGS #size_categories-10K<n<100K #language-Hausa #license-cc-by-nd-4.0 #doi-10.57967/hf/1541 #region-us
title: Lexicon Dataset for the Hausa Language Dataset with English translation --- license: cc-by-nd-4.0 ---
[]
[ "TAGS\n#size_categories-10K<n<100K #language-Hausa #license-cc-by-nd-4.0 #doi-10.57967/hf/1541 #region-us \n" ]
[ 46 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #language-Hausa #license-cc-by-nd-4.0 #doi-10.57967/hf/1541 #region-us \n" ]
83a8957936caf7e11f7c4d7a29c94c85544546f2
# Dataset Card for "fashion_image_caption-100-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ddedde/fashion_image_caption-100-v2
[ "region:us" ]
2023-12-31T19:28:12+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22820471.0, "num_examples": 100}], "download_size": 22820373, "dataset_size": 22820471.0}}
2023-12-31T19:28:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fashion_image_caption-100-v2" More Information needed
[ "# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed" ]
16b7e483af3f8dbe0d11ff1838ee98163be7d09c
# Dataset Card for "SwahiliPlatypus" [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) translated to Swahili.
mwitiderrick/SwahiliPlatypus
[ "task_categories:text-generation", "size_categories:10K<n<100K", "language:sw", "license:apache-2.0", "region:us" ]
2023-12-31T19:30:00+00:00
{"language": ["sw"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "Swahili Platypus", "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63213990, "num_examples": 24760}], "download_size": 30619174, "dataset_size": 63213990}}
2024-01-01T17:22:03+00:00
[]
[ "sw" ]
TAGS #task_categories-text-generation #size_categories-10K<n<100K #language-Swahili (macrolanguage) #license-apache-2.0 #region-us
# Dataset Card for "SwahiliPlatypus" Open-Platypus translated to Swahili.
[ "# Dataset Card for \"SwahiliPlatypus\"\n\nOpen-Platypus translated to Swahili." ]
[ "TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Swahili (macrolanguage) #license-apache-2.0 #region-us \n", "# Dataset Card for \"SwahiliPlatypus\"\n\nOpen-Platypus translated to Swahili." ]
[ 48, 26 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-Swahili (macrolanguage) #license-apache-2.0 #region-us \n# Dataset Card for \"SwahiliPlatypus\"\n\nOpen-Platypus translated to Swahili." ]
9a9ef60a8140f40745bafb0c0de7a6a601b48273
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. 
--> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
danielmekuriaw/Amharic-Text-Summarization-Benchmark-Dataset
[ "region:us" ]
2023-12-31T20:16:04+00:00
{}
2023-12-31T20:16:26+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name This dataset card aims to be a base template for new datasets. It has been generated using this raw template. ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 34, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
8729032d224d1b4b10a1e3b7ecfc6ce002f4929e
# Dataset Card for "fashion_image_caption-100-v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
mohibovais79/fashion_image_caption-100-v2
[ "region:us" ]
2023-12-31T20:35:23+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22820471.0, "num_examples": 100}], "download_size": 22820373, "dataset_size": 22820471.0}}
2023-12-31T20:35:24+00:00
[]
[]
TAGS #region-us
# Dataset Card for "fashion_image_caption-100-v2" More Information needed
[ "# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed" ]
e90aa7f324e7b35fc9ba993949ee9d583810ac32
complete
netcat420/MHENNlitv2
[ "license:mit", "region:us" ]
2023-12-31T21:00:14+00:00
{"license": "mit"}
2024-01-03T02:42:58+00:00
[]
[]
TAGS #license-mit #region-us
complete
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
d831b2841e443e2cd26ee92b8835e9fce7a202f3
Dataset for TCFD compliance
angry-snail/tcfd-v1
[ "task_categories:text-classification", "language:en", "license:cc-by-nc-4.0", "region:us" ]
2023-12-31T21:01:21+00:00
{"language": ["en"], "license": "cc-by-nc-4.0", "task_categories": ["text-classification"]}
2024-01-01T18:56:05+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #license-cc-by-nc-4.0 #region-us
Dataset for TCFD compliance
[]
[ "TAGS\n#task_categories-text-classification #language-English #license-cc-by-nc-4.0 #region-us \n" ]
[ 32 ]
[ "passage: TAGS\n#task_categories-text-classification #language-English #license-cc-by-nc-4.0 #region-us \n" ]
4c23f7da499bd27a0f0665805e5aabf8c0bb68d9
# Dataset Card for "cv_13_zh_tw_extract_unit" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/cv_13_zh_tw_extract_unit
[ "region:us" ]
2023-12-31T21:29:07+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 348905998, "num_examples": 61154}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 348905998, "num_examples": 61154}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 522068174, "num_examples": 61154}, {"name": "audiodec_24k_320d", "num_bytes": 1114562286, "num_examples": 61154}, {"name": "dac_16k", "num_bytes": 2221301742, "num_examples": 61154}, {"name": "dac_24k", "num_bytes": 6352630894, "num_examples": 61154}, {"name": "dac_44k", "num_bytes": 1901382630, "num_examples": 61154}, {"name": "encodec_24k", 
"num_bytes": 263161342, "num_examples": 61154}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 2790208366, "num_examples": 61154}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 2790208366, "num_examples": 61154}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 2789220974, "num_examples": 61154}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 1402128238, "num_examples": 61154}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 2786776174, "num_examples": 61154}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 2786776174, "num_examples": 61154}, {"name": "speech_tokenizer_16k", "num_bytes": 698482798, "num_examples": 61154}], "download_size": 4205946477, "dataset_size": 29116720154}}
2023-12-31T21:44:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "cv_13_zh_tw_extract_unit" More Information needed
[ "# Dataset Card for \"cv_13_zh_tw_extract_unit\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"cv_13_zh_tw_extract_unit\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"cv_13_zh_tw_extract_unit\"\n\nMore Information needed" ]
f7ac647962f92e6965b55ac539dbb030f3f145b6
# Dataset Card for "test_startup_advice_5k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
salma-remyx/test_startup_advice_5k
[ "region:us" ]
2023-12-31T21:42:54+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5402250, "num_examples": 5000}], "download_size": 3156847, "dataset_size": 5402250}}
2023-12-31T22:12:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "test_startup_advice_5k" More Information needed
[ "# Dataset Card for \"test_startup_advice_5k\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"test_startup_advice_5k\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"test_startup_advice_5k\"\n\nMore Information needed" ]
e69b4f68ca4677806f18ecf135acab7b64afc883
Generated CoT data based on "stereoset" data (https://huggingface.co/datasets/stereoset). This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
yc4142/bias-CoT
[ "region:us" ]
2023-12-31T21:50:00+00:00
{}
2023-12-31T21:57:19+00:00
[]
[]
TAGS #region-us
Generated CoT data based on "stereoset" data(URL This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
191a97078c8cb2fc21e3d40a599ffea86bf4b5f1
Generated non-CoT data based on "stereoset" data (https://huggingface.co/datasets/stereoset). This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
yc4142/bias-nonCoT
[ "region:us" ]
2023-12-31T21:54:00+00:00
{}
2023-12-31T21:57:49+00:00
[]
[]
TAGS #region-us
Generated non-CoT data based on "stereoset" data(URL This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
eba6b22715d421d347fe0a00a5f5d139003acb1a
Generated CoT data based on "metaeval/ethics" data (https://huggingface.co/datasets/metaeval/ethics). This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University. Because the deontology data on Hugging Face is missing the scenario column, the data was generated from the raw CSV data files in the author's git repo (https://github.com/hendrycks/ethics).
yc4142/ethics-CoT
[ "region:us" ]
2023-12-31T22:09:47+00:00
{}
2023-12-31T22:15:55+00:00
[]
[]
TAGS #region-us
Generated CoT data based on "metaeval/ethics" data(URL This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University. Because the deontology data on Hugging Face is missing the scenario column, the data was generated from the raw CSV data files in the author's git repo(URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
e5c73cddd9a9ef78afc8d59b189ec5b9d225ad64
Generated non-CoT data based on "metaeval/ethics" data (https://huggingface.co/datasets/metaeval/ethics). This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University. Because the deontology data on Hugging Face is missing the scenario column, the data was generated from the raw CSV data files in the author's git repo (https://github.com/hendrycks/ethics).
yc4142/ethics-nonCoT
[ "region:us" ]
2023-12-31T22:15:18+00:00
{}
2023-12-31T22:16:31+00:00
[]
[]
TAGS #region-us
Generated non-CoT data based on "metaeval/ethics" data(URL This is used to fine-tune LLMs for the continuation of the JPMorgan LLMs research project, which was one of the capstone projects offered to students of the MSDS program at Columbia University. Because the deontology data on Hugging Face is missing the scenario column, the data was generated from the raw CSV data files in the author's git repo(URL
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
f2011c19aae17ab1c581e98c7accb3ec5b8679a6
QA pairs with context from public documentation from Zerto, Carbonite, VMware, etc.
reknine69/QA-citations
[ "task_categories:question-answering", "size_categories:1K<n<10K", "language:en", "region:us" ]
2023-12-31T22:27:27+00:00
{"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]}
2024-01-16T18:48:41+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us
QA pairs with context from public documentation from Zerto, Carbonite, VMware, etc.
[]
[ "TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us \n" ]
[ 34 ]
[ "passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us \n" ]