awacke1 committed
Commit b0a0a1e · Parent: 738c22e

Create app.py

Files changed (1): app.py +38 -0
app.py ADDED
@@ -0,0 +1,38 @@
+ import streamlit as st
+
+ def main():
+     st.title("SQuAD: Stanford Question Answering Dataset")
+     st.header("What is SQuAD?")
+
+     st.markdown("""
+     Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text, or span, from the corresponding reading passage, or the question may be unanswerable.
+
+     SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
+
+     SQuAD 1.1, the previous version of the dataset, contains 100,000+ question-answer pairs on 500+ articles.
+     """)
+
+     st.header("Getting Started")
+     st.markdown("""
+     We've built a few resources to help you get started with the dataset.
+
+     Download a copy of the dataset (distributed under the CC BY-SA 4.0 license).
+
+     To evaluate your models, we have also made available the evaluation script we will use for official evaluation, along with a sample prediction file that the script will take as input. To run the evaluation, use `python evaluate-v2.0.py <path_to_dev-v2.0> <path_to_predictions>`.
+
+     Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, we do not release the test set to the public. Instead, we require you to submit your model so that we can run it on the test set for you. A tutorial walks you through official evaluation of your model.
+
+     Because SQuAD is an ongoing effort, we expect the dataset to evolve. To keep up to date with major changes to the dataset, please subscribe to the mailing list.
+     """)
+
+     st.header("Have Questions?")
+     st.markdown("""
+     Ask us questions at our Google group or at [email protected].
+     """)
+
+ if __name__ == "__main__":
+     main()
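The evaluation command mentioned in the app's text takes a predictions file as its second argument. As a minimal sketch, assuming the official SQuAD 2.0 prediction format (a JSON object mapping question IDs to answer strings, with an empty string signaling abstention on an unanswerable question), such a file could be produced like this; the question IDs and answers below are purely illustrative:

```python
import json

# Hypothetical predictions: question ID -> predicted answer span.
# An empty string means the model abstains (judges the question unanswerable),
# which SQuAD 2.0 systems must do when no answer is supported by the paragraph.
predictions = {
    "example-qid-answerable": "Normans",  # predicted answer span
    "example-qid-unanswerable": "",       # abstain: no supported answer
}

# Write the file that would be passed as <path_to_predictions>.
with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```

The empty-string convention is what lets the official script score both answer accuracy and the system's ability to abstain.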