|
{ |
|
"paper_id": "S14-1008", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:33:05.946947Z" |
|
}, |
|
"title": "Learning the Peculiar Value of Actions", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Dahlmeier", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "SAP Asia", |
|
"location": { |
|
"country": "Singapore" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We consider the task of automatically estimating the value of human actions. We cast the problem as a supervised learningto-rank problem between pairs of action descriptions. We present a large, novel data set for this task which consists of challenges from the I Will If You Will Earth Hour challenge. We show that an SVM ranking model with simple linguistic features can accurately predict the relative value of actions.", |
|
"pdf_parse": { |
|
"paper_id": "S14-1008", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We consider the task of automatically estimating the value of human actions. We cast the problem as a supervised learningto-rank problem between pairs of action descriptions. We present a large, novel data set for this task which consists of challenges from the I Will If You Will Earth Hour challenge. We show that an SVM ranking model with simple linguistic features can accurately predict the relative value of actions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The question on how humans conceptualize value is of great interest to researchers in various fields, including linguistics (Jackendoff, 2006) . The link between value and language arises from the fact that we cannot directly observe value due to its abstract nature and instead often study language expressions that describe actions which have some value attached to them. This creates an interesting link between the semantics of the words that describe the actions and the underlying moral value of the actions. Jackendoff (2006) describes value as an \"internal accounting system\" for ethical decision processes that exhibits both valence (good or bad) and magnitude (better or worse). Most interestingly, value is governed by a \"peculiar logic\" that provides constraints on which actions are deemed morally acceptable and which are not. In particular, the principal of reciprocity states that the valence and magnitude of reciprocal actions (actions that are done \"in return\" for something else) should match, i.e., positive valued actions should This work is licenced under a Creative Commons Attribution 4.0 International License. Page numbers and proceedings footer are added by the organizers. License details: http: //creativecommons.org/licenses/by/4.0/ match with positive valued reciprocal actions (reactions) of similar magnitude, and conversely negatively valued actions should match with negative valued reciprocal actions (reactions) of similar magnitude.", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 142, |
|
"text": "(Jackendoff, 2006)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 532, |
|
"text": "Jackendoff (2006)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we consider the task of automatically estimating the value of actions. We present a simple and effective method for learning the value of actions from ranked pairs of textual action descriptions based on a statistical learning-to-rank approach. Our experiments are based on a novel data set that we create from challenges submitted to the I Will if You Will Earth Hour challenge where participants pledge to do something daring or challenging if other people commit to sustainable actions for the planet. Our method achieves a surprisingly high accuracy of up to 94.72% in a 10-fold cross-validation experiment. The results show that the value of actions can accurately be estimated by machine learning methods based on lexical descriptions of the actions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The main contribution of this paper is that we show how the semantics of value in language can accurately be learned from empirical data using a learning-to-rank approach. Our work shows an interesting link between empirical research on semantics in natural language processing and the concept of value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our approach is based on the concept of value as presented by Jackendoff (2006) who describes value as an abstract property that is attributed to objects, persons, and actions. He further describes logical inference rules that humans use to determine which actions are deemed morally acceptable and which are not. The most important inference rule for our work is the principal of reciprocation, things that are done \"in return\" for some other action (Fiengo and Lasnik, 1973) . In English, this relation is often expressed by the prepo-sition for, as shown by the following example sentences (Jackendoff, 2006 The first two examples describe actions with positive value, while the last two examples describe actions with negative value. We expect that the valence values of reciprocal actions match: positively valued actions demand a positively valued action in return, while negatively valued actions trigger negatively valued responses. If we switch the example sentences and match positive actions with negative actions, we get sentences that sound counter-intuitive or perhaps comical (we prefix counter-intuitive sentences with a hash character '#').", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 79, |
|
"text": "Jackendoff (2006)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 476, |
|
"text": "(Fiengo and Lasnik, 1973)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 593, |
|
"end": 610, |
|
"text": "(Jackendoff, 2006", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Logic of Value", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. #Susan insulted Sam for behaving nicely. 2. #Lois slashed Fred's tires for fixing her computer.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Logic of Value", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Similarly, we expect that the magnitudes of value between reciprocal actions match. Sentences where the magnitude of the value of the response action does not match the magnitude of the initial action seem odd or socially inappropriate (overacting/underacting). We observe that reciprocal actions typically match each other in valence and magnitude. Coming back to our initial goal of learning the value of actions, this gives us a method for comparing the value of actions that were done in return to the same initial action.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Logic of Value", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The I Will If You Will (IWIYW) challenge 1 is part of the World Wildlife Fund's Earth Hour campaign 1 www.earthhour.org/i-will-if-you-will I will quit smoking if you will start recycling. (500 people) I will adopt a panda if you will start recycling. (1000 people) I will dance gangnam style if you will plant a tree. (100 people) I will dye my hair red if you will upload an IWIYW challenge. (500 people) I will learn Java if you will upload an IWIYW challenge. (10,000 people) which has the goal to increase awareness of sustainability issues. In this challenge, participants make a pledge to do something daring or challenging if a certain number of people commit to sustainable actions for the planet. The challenges are created by ordinary people on the Earth Hour campaign website. Each challenge takes the form of a simple school yard dare: I will do X, if you will do Y, where X is typically some daring or challenging task that the challenge creator commits to do if a sufficient number of people commit to do action Y which is some sustainable action for the planet. Together with the textual description, each challenge includes the number of people that need to commit to doing Y in order for the challenge creator to perform X. Examples of the challenges are shown in Table 1 . It is important to note that during the challenge creation on the IWIYW website, the X challenge is a free text input field that allows the author to come up with creative and interesting challenges. The sustainable actions Y and the number of people that need to commit to it are usually chosen from a fixed list of choices. As a result, there is a large number of different X actions and a comparably smaller number of Y actions. The collected challenges provide a unique data set that allows us to quantitatively measure the value of each promised task by the number of people that need to fulfill the sustainable action.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1281, |
|
"end": 1288, |
|
"text": "Table 1", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "I Will If You Will challenge", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this section, we present our approach for estimating the value of actions. Our approach casts the problem as a supervised learning-to-rank problem between pairs of actions. Given, a textual description of an action a, we want to estimate its value magnitude v. We represent the action a via a set of features that are extracted from the description of the action. We use a linear model that combines the features into a single scalar value for the value", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "v v = w T x a ,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "where x a is the feature vector for action description a and w is a learned weight vector. The goal is to learn a suitable weight vector w that approximates the true relationship between textual expressions of actions and their magnitude of value. Instead of estimating the value directly, we take an alternative approach and consider the task of learning the relative ranking of pairs of actions. We follow the pairwise approach to ranking (Herbrich et al., 1999; Cao et al., 2007) that reduces ranking to a binary classification problem. Ranking the values v 1 and v 2 of two actions a 1 and a 2 is equivalent to determining the sign of the dot product between the weight vector w and the difference between the feature vectors x a 1 and x", |
|
"cite_spans": [ |
|
{ |
|
"start": 441, |
|
"end": 464, |
|
"text": "(Herbrich et al., 1999;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 465, |
|
"end": 482, |
|
"text": "Cao et al., 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "a 2 . v 1 > v 2 \u21d4 w T x a 1 > w T x a 2 \u21d4 w T x a 1 \u2212 w T x a 2 > 0 \u21d4 w T (x a 1 \u2212 x a 2 ) > 0", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each ranking pair of actions, we create two complimentary classification instances:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "(x a 1 \u2212 x a 2 , l 1 ) and (x a 2 \u2212 x a 1 , l 2 )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": ", where the labels are l 1 = +1, l 2 = \u22121 if the first challenge has higher value than the second challenge and l 1 = \u22121, l 2 = +1 otherwise. We can train a standard linear classifier on the generated training instances to learn the weight vector w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the case of the IWIYW data, there is no explicit ranking between actions. However, we are able to create ranking pairs for the IWIYW data in the following way. As we have seen, there is only a small set of different You Will challenges that are reciprocal actions for a diverse set of I Will challenges. Thus, many I Will challenges will end up having the same You Will challenge. We can use the You Will challenges as a pivot to effectively \"join\" the I Will challenges. The number of required people to perform Y induces a natural ordering between the values of the I Will actions where a higher number of required participants means that the I Will task has higher value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For example, for the challenges displayed in Table 1, we can use the common You Will challenges to create the following ranked challenge pairs. I will quit smoking < I will adopt a panda I will dye my hair red < I will learn Java 3According to the examples, adopting a panda has higher value than quitting smoking and learning Java has higher value than dying ones hair red. The third challenge does not share a common You Will challenge with any other challenge and therefore no ranking pairs can be formed with it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "As the IWIYW challenges are created online in a non-controlled environment, we have to expect that there is some noise in the automatically created ranked challenges. However, a robust learning algorithm has to be able to handle a certain amount of noise. We note that our method is not limited to the IWIYW data set but can be applied to any data set of actions where relative rankings are provided or can be induced.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The choice of appropriate feature representations is crucial to the success of any machine learning method. We start by parsing each I Will If You Will challenge with a constituency parser. Because each challenge has the same I Will If You Will structure, it is easy to identify the subtrees that correspond to the I Will and You Will parts of the challenge. An example parse tree of a challenge is shown in Figure 1 . The yield of the You Will subtree serves as a pivot to join different I Will challenges. To represent the I Will action a as a feature vector x a , we extract the following lexical and syntax features from the I Will subtree of the sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 408, |
|
"end": 416, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Verb: We extract the verb of the I Will clause as a feature. To identify the verb, we pick the left-most verb of the I Will subtree based on its part-of-speech (POS) tag. We extract the lowercased word token as a feature. For example, for the sentence in Figure 1 , the verb feature is verb=quit. If the verb is negated (the left sibling of the I Will subtree spans exactly the word not), we add the postfix NOT to the verb feature, for example verb=quit NOT.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 257, |
|
"end": 265, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Object: We take the right sibling of the I will verb as the object of the action. If the right sibling is a particle with constituent label PRT, e.g., travel around the UK on bike, we skip the particle and take the next sibling as the object. If the object is a prepositional phrase with constituent tag PP, e.g., go without electricity for a month, we take the second child of the prepositional phrase as the object phrase. We then extract two features to represent the object. First, we extract the lowercased head word of the object as a feature. Second, we extract the concatenation of all the words in the yield of the object node as a single feature to capture the complete argument for longer objects. In our example sentence, the object head feature and the complete object feature are identical: object head=smoking and object=smoking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Unigram: We take all lowercased words that are not stopwords in the I Will part of the sentence as binary features. In our example sentence, the unigram features unigr quit and unigr smoking would be active.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\u2022 Bigram: We take all lowercased bigrams in the I Will part of the sentence as binary features. We do not remove stopwords for bigram features. In our example sentence, the bigram features bigr quit smoking would be active.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We note that our method is not restricted to these feature templates. More sophisticated features, like tree kernels (Collins and Duffy, 2002) or se-mantic role labeling (Palmer et al., 2010) , can be imagined.", |
|
"cite_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 142, |
|
"text": "(Collins and Duffy, 2002)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 170, |
|
"end": 191, |
|
"text": "(Palmer et al., 2010)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We evaluate our approach using standard 10-fold cross-validation and report macro-average accuracy scores for each of the feature sets. The classifier in all our experiments is a linear SVM implemented in SVM-light (Joachims, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 215, |
|
"end": 231, |
|
"text": "(Joachims, 2006)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "We obtained a snapshot of 18,290 challenges created during the 2013 IWIYW challenge. The snapshot was taken in mid May 2013, just 1.5 weeks before the 2013 Earth Hour event day. We perform the following pre-processing. We normalize the text to proper UTF-8 encoding and remove challenges where the complete sentence contained less than 7 tokens. These challenges were usually empty or incomplete. We filter the challenges using the langid.py tool (Lui and Baldwin, 2012) and only keep English challenges. We normalized the casing of the sentences by first lowercasing all texts and then re-casing each sentence with a simple re-casing model that replaces a word with its most frequent casing form. The re-casing model is trained on the Brown corpus (Ku and Francis, 1967) . We tokenize the sentences with the Penn Treebank tokenizer. We parse the sentences with the Stanford parser (Klein and Manning, 2003a; Klein and Manning, 2003b) to ob- We create binary classifications examples between pairs of actions as described in Section 4. As we create all possible combinations between I Will challenges with common You Will challenges, the number of ranking pairs for training is large. In our case, we ended up with over 840,000 classification instances. We note that not every I Will action is guaranteed to be included in the final set of ranking pairs as challenges with a unique You Will part that is not found in any other challenge cannot be joined and are effectively ignored. However, this is not a problem for our experiments. The binary classification instances are used to train and test a ranking model for estimating the value of actions as described in the last section.", |
|
"cite_spans": [ |
|
{ |
|
"start": 447, |
|
"end": 470, |
|
"text": "(Lui and Baldwin, 2012)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 771, |
|
"text": "(Ku and Francis, 1967)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 908, |
|
"text": "(Klein and Manning, 2003a;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 909, |
|
"end": 934, |
|
"text": "Klein and Manning, 2003b)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The results of our cross-validation experiments are shown in Table 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 61, |
|
"end": 68, |
|
"text": "Table 2", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The random baseline for all experiments is 50%. Just using the verb of the I Will action as a feature improves over the random baseline to 62.41%. Using a unigram bag-of-words representation of the actions achieves a very respectable score of 84.81%. When we combine unigrams with the verb feature, we achieve 85.73%. One of the most surprising results of our experiments is that the object of the action alone is a very effective feature, achieving 89.04%. When combined with the verb feature, the object feature achieves 91.15% which shows that the verb and object carry most of the relevant information that the model requires to gauge the value of actions. Using bigrams as features, seems to catch this information just as accurately, achieving 92.51% accuracy. The score is further improved by combining the different feature sets. The best result of 94.72% is obtained by combining all the features: unigrams, bigrams, verb, and object. In summary, these results show that our method is able to accurately predict the relative value of actions using simple linguistic features, which is the main contribution of this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "The concept of value and reciprocity has been extensively studied in the social sciences (Gergen and Greenberg, 1980), anthropology (Sahlins, 1972) , economics (Fehr and G\u00e4chter, 2000) , and philosophy (Becker, 1990) . In linguistics, value has been studied by Jackendoff (2006) . His work forms the starting point of our approach.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 100, |
|
"text": "(Gergen and", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 101, |
|
"end": 147, |
|
"text": "Greenberg, 1980), anthropology (Sahlins, 1972)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 184, |
|
"text": "(Fehr and G\u00e4chter, 2000)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 202, |
|
"end": 216, |
|
"text": "(Becker, 1990)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 278, |
|
"text": "Jackendoff (2006)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "In natural language processing, there has been very little work on the concept of value. Paul et al. (2009) and Girju and Paul (2011) address the problem of semi-automatically mining patterns that encode reciprocal relationships using pronoun templates. Their work focuses on mining patterns of reciprocity while our work uses expressions of reciprocal actions to learn the value of actions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 107, |
|
"text": "Paul et al. (2009)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 133, |
|
"text": "Girju and Paul (2011)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "None of the above works tries to estimate the value of actions, as we do in this work. In fact, we are not aware of any other work that tries to estimate the value of actions from lexical expressions of value.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have presented a simple and effective method for learning the value of actions from reciprocal sentences. We show that our SVM-based ranking model with simple linguistic features is able to accurately rank pairs of actions from the I Will If You Will Earth Hour challenge, achieving an accuracy of up to 94.72%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We thank Sid Das from Earth Hour for sharing the IWIYW data with us. We thank Marek Kowalkiewicz for helpful discussions. The research is partially funded by the Economic Development Board and the National Research Foundation of Singapore.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Reciprocity. University of", |
|
"authors": [ |
|
{

"first": "Lawrence",

"middle": [

"C"

],

"last": "Becker",

"suffix": ""

}
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lawrence C Becker, editor. 1990. Reciprocity. Uni- versity of Chicago Press.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Learning to rank: from pairwise approach to listwise approach", |
|
"authors": [ |
|
{ |
|
"first": "Zhe", |
|
"middle": [], |
|
"last": "Cao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tie-Yan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Feng", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hang", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 24th International Conference on Machine Learning (ICML)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "129--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the 24th International Conference on Machine Learning (ICML), pages 129-136.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Convolution kernels for natural language", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nigel", |
|
"middle": [], |
|
"last": "Duffy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Advances in Neural Information Processing Systems 14 (NIPS 2001)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "625--632", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins and Nigel Duffy. 2002. Convolution kernels for natural language. In Advances in Neu- ral Information Processing Systems 14 (NIPS 2001), pages 625-632.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Cooperation and punishment in public goods experiments", |
|
"authors": [ |
|
{ |
|
"first": "Ernst", |
|
"middle": [], |
|
"last": "Fehr", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "G\u00e4chter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "980--994", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ernst Fehr and Simon G\u00e4chter. 2000. Cooperation and punishment in public goods experiments. pages 980-994.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The logical structure of reciprocal sentences in English. Foundations of language", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Fiengo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Howard", |
|
"middle": [], |
|
"last": "Lasnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1973, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "447--468", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Fiengo and Howard Lasnik. 1973. The logical structure of reciprocal sentences in English. Foun- dations of language, pages 447-468.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Social exchange: Advances in theory and research", |
|
"authors": [ |
|
{

"first": "Kenneth",

"middle": [

"J"

],

"last": "Gergen",

"suffix": ""

},

{

"first": "Martin",

"middle": [

"S"

],

"last": "Greenberg",

"suffix": ""

},

{

"first": "Richard",

"middle": [

"H"

],

"last": "Willis",

"suffix": ""

}
|
"suffix": "" |
|
} |
|
], |
|
"year": 1980, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kenneth J. Gergen and Willis Richard H. Greenberg, Martin S., editors. 1980. Social exchange: Ad- vances in theory and research. Plenum Press.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Modeling reciprocity in social interactions with probabilistic latent space models", |
|
"authors": [ |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{

"first": "Michael",

"middle": [

"J"

],

"last": "Paul",

"suffix": ""

}
|
], |
|
"year": 2011, |
|
"venue": "Natural Language Engineering", |
|
"volume": "17", |
|
"issue": "1", |
|
"pages": "1--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roxana Girju and Michael J Paul. 2011. Modeling reciprocity in social interactions with probabilistic latent space models. Natural Language Engineer- ing, 17(1):1-36.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Support vector learning for ordinal regression", |
|
"authors": [ |
|
{ |
|
"first": "Ralf", |
|
"middle": [], |
|
"last": "Herbrich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thore", |
|
"middle": [], |
|
"last": "Graepel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Klaus", |
|
"middle": [], |
|
"last": "Obermayer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 1999 International Conference on Articial Neural Networks", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "97--102", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ralf Herbrich, Thore Graepel, and Klaus Obermayer. 1999. Support vector learning for ordinal regres- sion. In In Proceedings of the 1999 International Conference on Articial Neural Networks, pages 97- 102.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The peculiar logic of value", |
|
"authors": [ |
|
{ |
|
"first": "Ray", |
|
"middle": [], |
|
"last": "Jackendoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of Cognition and Culture", |
|
"volume": "6", |
|
"issue": "3-4", |
|
"pages": "375--407", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ray Jackendoff. 2006. The peculiar logic of value. Journal of Cognition and Culture, 6(3-4):375-407.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Training linear SVMs in linear time", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "217--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 2006. Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 217-226.",
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Accurate unlexicalized parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2003a. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), pages 423-430.",
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Fast exact inference with a factored model for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Advances in Neural Information Processing Systems", |
|
"volume": "15", |
|
"issue": "", |
|
"pages": "423--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Klein and Christopher D. Manning. 2003b. Fast exact inference with a factored model for natural language parsing. Advances in Neural Information Processing Systems 15 (NIPS 2002), pages 423-430.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Computational Analysis of Present-Day American English", |
|
"authors": [ |
|
{ |
|
"first": "Henry", |
|
"middle": [], |
|
"last": "Kučera",
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W. Nelson", |
|
"middle": [], |
|
"last": "Francis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1967, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Henry Kučera and W. Nelson Francis. 1967. Computational Analysis of Present-Day American English. Brown University Press.",
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "An off-the-shelf language identification tool",
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--30", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Lui and Timothy Baldwin. 2012. An off-the-shelf language identification tool. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012), pages 25-30.",
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Semantic role labeling", |
|
"authors": [ |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Gildea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "3", |
|
"issue": "1", |
|
"pages": "1--103", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Martha Palmer, Daniel Gildea, and Nianwen Xue. 2010. Semantic role labeling. Synthesis Lectures on Human Language Technologies, 3(1):1-103.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Mining the web for reciprocal relationships", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Paul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roxana", |
|
"middle": [], |
|
"last": "Girju", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chen", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Paul, Roxana Girju, and Chen Li. 2009. Mining the web for reciprocal relationships. In Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL), pages 75-83.",
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Parse tree of an I Will If You Will challenge. The subtrees governing the I Will and You Will parts of the sentence are marked."
|
}, |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Examples of I Will If You Will challenges.", |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table><tr><td>tain a constituency parse tree for each challenge.</td></tr><tr><td>After pre-processing, we are left with 5,499 challenges (4,982 unique), with 4,474 unique I Will challenges and 70 unique You Will challenges.</td></tr></table>",

"type_str": "table",

"text": "Results of 10-fold cross-validation experiments.",
|
"html": null |
|
} |
|
} |
|
} |
|
} |