|
{ |
|
"paper_id": "U10-1009", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:09:23.463796Z" |
|
}, |
|
"title": "Classifying User Forum Participants: Separating the Gurus from the Hacks, and Other Tales of the Internet", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Melbourne", |
|
"location": { |
|
"postCode": "3010", |
|
"settlement": "VIC", |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Melbourne", |
|
"location": { |
|
"postCode": "3010", |
|
"settlement": "VIC", |
|
"country": "Australia" |
|
} |
|
}, |
|
"email": "" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper introduces a novel user classification task in the context of web user forums. We present a definition of four basic user characteristics and an annotated dataset. We outline a series of approaches for predicting user characteristics, utilising aggregated post features and user/thread network analysis in a supervised learning context. Using the proposed feature sets, we achieve results above both a naive baseline and a bag-ofwords approach, for all four of our basic user characteristics. In all cases, our bestperforming classifier is statistically indistinct from an upper bound based on the inter-annotator agreement for the task.", |
|
"pdf_parse": { |
|
"paper_id": "U10-1009", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper introduces a novel user classification task in the context of web user forums. We present a definition of four basic user characteristics and an annotated dataset. We outline a series of approaches for predicting user characteristics, utilising aggregated post features and user/thread network analysis in a supervised learning context. Using the proposed feature sets, we achieve results above both a naive baseline and a bag-ofwords approach, for all four of our basic user characteristics. In all cases, our bestperforming classifier is statistically indistinct from an upper bound based on the inter-annotator agreement for the task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The most natural form of communication is through dialogue, and in the Internet age this manifests itself via modalities such as forums and mailing lists. What these systems have in common is that they are a textual representation of a threaded discourse. The Internet is full of communities which engage in innumerable discourses, generating massive quantities of data in the process. This data is rich in information, and with the help of computers we are able to archive it, index it, query it and retrieve it. In theory, this would allow people to take a question to an online community, search its archives for the same or similar questions, follow up on the contents of prior discussion and find an answer. However, in practice, search forum accessibility tends to be limited at best, prompting recent interest in information access for user forums (Cong et al., 2008; Elsas and Carbonell, 2009; Seo et al., 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 855, |
|
"end": 874, |
|
"text": "(Cong et al., 2008;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 875, |
|
"end": 901, |
|
"text": "Elsas and Carbonell, 2009;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 902, |
|
"end": 919, |
|
"text": "Seo et al., 2009)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "One problem with current approaches to accessing forum data is that they tend not to take into account the structure of the discourse itself, or other characteristics of the forum or forum participants. The bag-of-words (BOW) model common in information retrieval (IR) and text categorisation discards all contextual information. However, even in IR it has long been known that much more information than simple term occurrence is available. In the modern era of web search, for example, extensive use is made of link structure (Brin and Page, 1998) , anchor text, document zones, and a plethora of other document (and query, click stream and user) features (Manning et al., 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 549, |
|
"text": "(Brin and Page, 1998)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 658, |
|
"end": 680, |
|
"text": "(Manning et al., 2008)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The natural question to ask at this point is, What additional structure can we extract from web forum data? Previous work has been done in extracting useful information from various dimensions of web forums, such as the post-level structure . One dimension that has received relatively little attention is how we can use information about the identity of the participants to extract useful information from a web forum. In this work we will examine how we can utilize such user-level structure to improve performance over a user classification task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We have used the term threaded discourse to describe online data that represents a record of messages exchanged between a group of participants. In this work, we examine data from LinuxQuestions, a popular Internet forum for Linux-related troubleshooting. Aside from a limited set of features specific to the Linux-related troubleshooting domain, however, our techniques are domaininspecific and expected to generalize to any data that can be interpreted as a threaded discourse.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This work is part of ILIAD , an ongoing effort to improve information access in linux forums. Our contribution to the project is techniques to identify characteristics of forum users, building on earlier work in the space (Lui, 2009) . The problem that we face here is twofold: Firstly, there is no established ontology for characteristics of forum users. To address this, we have designed a set of attributes that we expect to be helpful in improving information access over forum data. Secondly, in order to exploit user characteristics we would need to evaluate a large number of users. This quantity of data would be much too large to be processed manually. We therefore apply supervised machine learning techniques to allow us to effectively discover the characteristics of a large number of forum users in an automated fashion. Lui and Baldwin (2009b) showed that user-level structure is useful in predicting percieved quality of forum posts. The data they evaluate over is extracted from Nabble, where the ratings provided by users are interpeted as the gold-standard for a correct classification. The task was originally proposed by and further explored by . In both cases, the authors focus on heuristic postlevel features, which are used to predict perceived quality of posts using a supervised machine learning approach. Lui and Baldwin (2009b) showed that features based on user-level structure outperformed the benchmark set by on a closely-related task, by using user-level structure to inform a post-level classification task. We build on this work by utilizing the user-level structure to perform our novel userlevel classification task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 233, |
|
"text": "(Lui, 2009)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 834, |
|
"end": 857, |
|
"text": "Lui and Baldwin (2009b)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 1332, |
|
"end": 1355, |
|
"text": "Lui and Baldwin (2009b)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In work on thread classification, Baldwin et al. (2007) attempted to classify forum threads scraped from Linux-related newsgroups according to three attributes: (1) Task Oriented: is the thread about a specific problem?; (2) Complete: is the problem described in adequate detail?; and (3) Solved: has a solution been provided? They manually annotated a set of 250 threads for these attributes, and extracted a set of features to describe each thread based on the aggregation of features from posts in different sections of the thread. We provide a novel extension of this concept, whereby we aggregate posts from a given user. Wanas et al. (2008) develop a set of post-level features for a classification task involving post and rating data from Slashdot. Their task involves classifying posts into one of three quality levels (High, Medium or Low), where the gold-standard is provided by user annotations from the forum. This is conceptually very similar to our task, and we build on this feature set.", |
|
"cite_spans": [ |
|
{ |
|
"start": 34, |
|
"end": 55, |
|
"text": "Baldwin et al. (2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 627, |
|
"end": 646, |
|
"text": "Wanas et al. (2008)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Extracting community structure from networks can yield insights into the relationships between users in a forum (Newman and Girvan, 2004; Drineas et al., 2004; Chapanond et al., 2005) , and could in turn aid in engineering descriptions of the users more suited to a particular task. Agrawal et al. (2003) describe a technique for partitioning the users in an online community based on their opinion on a given topic. They find that basic text classification techniques are unable to do better than the majority-class baseline for this particular task. They then describe a technique based on modeling the community as a reply-to network, with users as individual nodes, and edges indicating that a user has replied to a post by another user; using this representation, they are able to do much better than the baseline. Fortuna et al. (2007) build on this work, defining additional classes of networks that represent some of the relationships present in an online community. Part of our feature set is derived from modelling Internet forum users on the basis of the interactions that exist between them, such as a tendency to reply to each other or to coparticipate in threads. We extend the social network analysis of Agrawal et al. (2003) and Fortuna et al. (2007) to generate user-level features. Malouf and Mullen (2008) present the task of determining the political leaning of users on a U.S. political discussion site. They apply network analysis to the task, based on the observation that users tend to quote users of opposing political leaning more than they quote those of similar political leaning. They found that standard text categorisation methods performed poorly over their task, and that the results were improved significantly by incorporating network-derived features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 137, |
|
"text": "(Newman and Girvan, 2004;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 138, |
|
"end": 159, |
|
"text": "Drineas et al., 2004;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 160, |
|
"end": 183, |
|
"text": "Chapanond et al., 2005)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 283, |
|
"end": 304, |
|
"text": "Agrawal et al. (2003)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 820, |
|
"end": 841, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1219, |
|
"end": 1240, |
|
"text": "Agrawal et al. (2003)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 1245, |
|
"end": 1266, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1300, |
|
"end": 1324, |
|
"text": "Malouf and Mullen (2008)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In a similar vein, Carvalho et al. (2007) used a combination of textual features (in the form of \"email speech acts\") and network-based features to learn which users were team leaders. They found that the network-based features enhanced classification accuracy. Sentiment analysis (Pang and Lee, 2008) relates to this work as one of our user characteristics (POSITIVITY) is an expression of user sentiment. However, sentiment analysis has tended to focus on individual documents, and rarely takes into account the author. An exception to this is the work of Thomas et al. (2006) , who attempted to predict which way each speaker in a U.S. Congressional debate on a proposed bill voted, on the basis of both what was said and the indication of agreement between speakers. Their task is related to ours in that it involves a user-level classification, but it focused on extracting information identifying where the speakers agree and disagree. Expert finding is the task of ranking experts relative to each of a series of queries, and has been part of the TREC Enterprise Track (Craswell et al., 2005; Soboroff et al., 2006; Balog et al., 2006; Fang and Zhai, 2007) . The challenge is to estimate the likelihood of a given individual being an expert on a particular topic, on the basis of a document collection. There is certainly scope to evaluate the utility of the user characteristics proposed in this research in the context of the TREC expert finding task, although only a small fraction of the document collection (the mailing list archives) has the threaded structure requisite for our methods, and our focus is on the general characteristics of the user rather than their topic-specific expertise.", |
|
"cite_spans": [ |
|
{ |
|
"start": 19, |
|
"end": 41, |
|
"text": "Carvalho et al. (2007)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 281, |
|
"end": 301, |
|
"text": "(Pang and Lee, 2008)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 558, |
|
"end": 578, |
|
"text": "Thomas et al. (2006)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1076, |
|
"end": 1099, |
|
"text": "(Craswell et al., 2005;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1100, |
|
"end": 1122, |
|
"text": "Soboroff et al., 2006;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 1123, |
|
"end": 1142, |
|
"text": "Balog et al., 2006;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1143, |
|
"end": 1163, |
|
"text": "Fang and Zhai, 2007)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We have designed a set of user-level attributes which we expect to be useful in improving information access over forum data. The attributes were selected based on our personal experiences in interacting with online communities. In this, we sought to capture the attributes of users who provide meaningful contributions, as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "CLARITY: How clear is what the user meant in each of their posts, in the broader context of the thread?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "PROFICIENCY: What level of perceived technical competence does the user have in their posts?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "POSITIVITY: How positive is the user in their posts?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "EFFORT: How much effort does the user put into their posts?", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Each user-level attribute is quantified by way of a 5 point ordinal scale, as detailed in Table 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 90, |
|
"end": 97, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "While we have described the four attributes as if they were orthogonal to each other, in reality there are obvious overlaps. For example, high clarity often implies high effort, but the reverse is not necessarily true. For simplicity, we do not consider the interactions between the characteristics in this work, leaving it as a possibility for further research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "User Characteristics", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We created a new dataset specifically for this work based on data crawled from LinuxQuestions, 1 a popular Internet forum for Linux troubleshooting. From this forum, we scraped a background collection of 34157 threads, spanning 126094 posts by 25361 users.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to evaluate how well we can automatically rate forum users in each of our four user characteristics (from Section 3), we randomly selected 50 users who had each participated in more than 15 different threads in the full dataset. We asked four independent annotators to annotate the 50 users over each of the 4 attributes. The annotators all had a computer science background, and had participated in Linux-related online communities. For each attribute, the annotators were asked to choose a rating on a five-point scale, based on the description of user attributes from Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "For each of the 50 users, we randomly selected 15 threads that they had participated in, and partitioned these into 5 separate annotation instances as follows: for the first instance, we selected 1 thread; for the second instance we selected 2 threads; and so on, giving us 5 instances, each with 1 to 5 threads. This gave us a total of 250 annotation instances (with 5 instances per user). We chose to annotate each user multiple times in order to build a more complete picture of the user. Each instance presented a different number of threads to the annotator, in order to give the annotators maximal context in annotating a user while still minimizing the number of threads we required the user to have participated in.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Each annotator was asked to rate all 250 annotation instances, meaning that they actually saw each of the 50 users a total of five times each. Annotators were not alerted to the fact that they would annotate each user five times, and all usernames were removed from the threads before being displayed to the annotator. However, for a given annotation instance, the annotator was alerted to which posts the user being annotated had authored. The posts of other users in those threads where also presented to provide the full thread context, but the annotators were instructed to use those posts only to interpret the posts of the user in question.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Since each annotator annotated each user 5 times for each attribute, we compute a score for each user-annotator-attribute combination, which Puts obvious effort into their post 5 Turbo", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Goes out of his/her way in trying to make a contribution; an eager beaver! is simply the sum across the 5 annotations. Using this score, we then rank the users for each pairing of annotator-quality. We formulated the user-level classification task as four separate classification tasks, across the four attributes. In order to account for subtle variance in annotators' interpretations of the ordinal scale, we took a non-parametric approach to the data: we pooled all of the annotator ratings and established a single ranking over all the annotated users for each attribute. We then discretized this ranking into 5 equal-sized bins, in order to provide a more coarse-grained view of the relative ordering between users. Therefore, our task can be interpreted as assigning each user to their corresponding uniformly-distributed quintile on each attribute.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We calculate inter-annotator agreement on each of the four attributes via leave-one-out crossvalidation. For each user-annotator-attribute combination, we calculate two scores: the sum of ratings given by the annotator being considered, and the sum of ratings given by all the other annotators. For each of the four attributes, we rank the users based on each of these two scores, and com- (Kendall, 1938) between the two ranklists (Table 2) , as well as the p-value for the significance of the \u03c4 value.", |
|
"cite_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 405, |
|
"text": "(Kendall, 1938)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 441, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Inter-annotator Agreement", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We see that for all attributes, there is a statistically significant correlation between the annotations. This correlation is strongest in the EFFORT and PROFICIENCY attributes, and weakest in the CLARITY attribute. This is partly to be expected, since CLARITY is more subjective than EFFORT or PROFICIENCY. POSITIVITY shows an interesting quirk, where the ratings from one annotator appear completely uncorrelated with those of all the others. This suggests that POSITIVITY as an attribute is slightly more subjective than the others.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Inter-annotator Agreement", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We extract features for each user based on aggregating post-level features and via social network analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The most basic feature set we consider is a simple bag-of-words (BOW), computed as the sum of the bag-of-words model over each of the user's individual posts.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Aggregate Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We also make use of two post-level feature sets from the literature on web user forum classification. The first is that of Baldwin et al. (2007) (BALDWIN P ost ), and outlined in Table 3 . It was designed to represent key posts in a thread for a thread-level classification (see Section 2) task. We compute this feature set for each of a user's posts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 123, |
|
"end": 144, |
|
"text": "Baldwin et al. (2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 3", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Aggregate Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The second is that of Wanas et al. (2008) , and is described in Table 4 . In this case, it was developed for a post-level classification task rating post quality, and thus lends itself readily to our postaggregate user representation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 22, |
|
"end": 41, |
|
"text": "Wanas et al. (2008)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 64, |
|
"end": 71, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Post-Aggregate Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "From each of BALDWIN P ost and WANAS, we derive a user-level feature set by finding the mean of each feature value over all of the user's posts in the full dataset. For boolean features, this can be directly interpreted as the proportion of the user's posts in which the feature is present. These feature sets are referred to as BALDWIN P ost AGG and WANAS AGG respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Aggregate Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "Whereas it is possible for us to engineer a novel post-level feature set, our aim in this research is not to analyze the feature sets themselves, but rather to show that our techniques utilizing user-level structure perform better than techniques which ignore this information. We leave post-level feature engineering as an open avenue of further work. Fortuna et al. (2007) , but the application to user-level feature extraction is novel. POSTAFTER is modeled on the reply-to network described in Fortuna et al. (2007) . Our data does not contain explicit annotation about the reply structure in a thread, so we approximate this information by the temporal relationship between posts. There exist more sophisticated approaches to the discovery of reply structure in a thread (Kim et al., 2010), and we consider integrating such methods to be an important avenue of further work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 353, |
|
"end": 374, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 498, |
|
"end": 519, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Post-Aggregate Features", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "POSTAFTER is parametrized with two values: dist and count. Being a User Network, the nodes represent users. Two users A1 and A2 have a directed edge from A1 to A2 if and only if A1 submits a post to a thread that is within dist posts after a post in the same thread by A2 on at least count occasions. Note that this can occur more than once in a single thread. For our experiments, we used dist = 1 and count = 3. THREADPART is implemented as described in Fortuna et al. (2007) : nodes are again users, and each undirected edge indicates that two users have posted in the same thread on at least k occasions. Fortuna et al. (2007) set k = 5, but we only report on results for k = 2 and k = 3, as we found that for our dataset, the network is too sparse for higher values.", |
|
"cite_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 477, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 609, |
|
"end": 630, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "COMMONAUTHORS is also implemented as described in Fortuna et al. (2007) : nodes are threads, and each undirected edge indicates that two threads have at least m users in common. We follow Fortuna et al. (2007) in setting m = 3.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 71, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 188, |
|
"end": 209, |
|
"text": "Fortuna et al. (2007)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In User Networks, the edges represent some relationship between users. From a User Network, we generate a feature vector v for each user. v is of length N , where N is the total number of nodes, or equivalently, the total number of users in the network. v has at least one feature set to 1, which corresponds to the user described by this feature vector, which we will hereafter refer to as the originator. Features representing users directly connected to the originator in the network receive a feature value of 1, and users that are second-level neighbours of the originator are set to a feature value of 0.5. All other values in v are set to 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For Thread Networks, edges represent relationships between threads. The method for computing a feature vector is similar to that for User Networks. The key difference is that in this instance, nodes represent threads and not users. Therefore, to describe a particular user, we consider threads that the user has posted in. We define a vector v of length T , where T is the total number of threads in the forum. Given the set S 0 of threads that the user has posted in, for each thread in S 0 , we assign the value 1 to the feature in v corresponding to that thread. We then consider S 1 , the set of immediate neighbours of S 0 , and assign the value 1 to their corresponding features in v. Finally, we consider S 2 , the immediate neighbours of S 1 , and assign the value of 0.5 to their corresponding features. All other features are assigned the value 0.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Network Features", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "In all experiments, we build our classifiers using a support vector machine (SVM: Joachims (1998)), using bsvm (Hsu and Lin, 2006) with a linear kernel. For each combination of features, we evaluate it by carrying out 10-fold cross-validation. The partitioning is performed once and re-used for each pairing of learner and feature set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Our experiments were performed using hydrat (Lui and Baldwin, 2009a) , an opensource framework for comparing classification systems. hydrat provides facilities for managing and combining feature sets, setting up cross-validation tasks and automatically computing corresponding results. Features were extracted from the forum data using forum features, 2 a Python module implementing a data model for forum data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 68, |
|
"text": "(Lui and Baldwin, 2009a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We evaluate our classifiers using microaveraged F-score (F \u00b5 ), reflecting the average performance per-document. As our classes are ordinal (representing quintiles of users), we additionally present results based on mean absolute error (MAE). MAE is the average absolute distance of the predicted (P red) ordinal value from the goldstandard (G) value. It is a reflection of how far off the mark the average prediction is, with an MAE of 0 indicating perfect classifier performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "6" |
|
}, |
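Both metrics are straightforward to compute; a sketch (noting that for single-label classification, where each user receives exactly one predicted class, micro-averaged F-score reduces to plain accuracy):

```python
def micro_f_score(gold, pred):
    # With exactly one label per document, micro-averaged precision,
    # recall, and hence F-score all equal accuracy.
    return sum(p == g for g, p in zip(gold, pred)) / len(gold)

def mean_absolute_error(gold, pred):
    # Mean absolute distance of the predicted quintile from the gold
    # quintile; 0 means every prediction is exactly right.
    return sum(abs(p - g) for g, p in zip(gold, pred)) / len(gold)

gold, pred = [1, 2, 3, 4, 5], [1, 3, 3, 5, 4]
micro_f_score(gold, pred)        # -> 0.4
mean_absolute_error(gold, pred)  # -> 0.6
```

The example shows why the two metrics are complementary: only two of five predictions are exactly right, but every error is off by a single quintile, so MAE stays low.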
|
{ |
|
"text": "As a baseline, we use a simple majority-class (ZeroR) classifier. A benchmark classifier is constructed over a BOW feature set, as is standard in text categorization. To derive an upper bound for the task, we perform leave-one-out cross-validation over our annotations, calculating the mean F-score and MAE between each annotator and the combination of the remaining annotators.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "6" |
|
}, |
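A sketch of the leave-one-out upper bound computation follows; combining the held-out annotators by per-item majority vote is our assumption here (the paper does not specify the combination method), and accuracy stands in for micro-averaged F-score:

```python
from collections import Counter

def annotator_upper_bound(annotations):
    """Score each annotator against the combined labels of the
    remaining annotators, and average over annotators.

    annotations: one list of labels per annotator, all covering the
    same items in the same order.
    """
    scores = []
    for i, ann in enumerate(annotations):
        others = [a for j, a in enumerate(annotations) if j != i]
        # Assumed combination method: per-item majority vote.
        combined = [Counter(labels).most_common(1)[0][0]
                    for labels in zip(*others)]
        scores.append(sum(p == g for p, g in zip(ann, combined)) / len(ann))
    return sum(scores) / len(scores)
```

With four annotators who mostly agree, the score approaches 1; systematic disagreement by any one annotator lowers the mean, which is what makes this a realistic ceiling for classifier performance.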
|
{ |
|
"text": "When comparing a result to a baseline or a benchmark value, we also compute the p-value for a two-tailed paired t-test. In line with standard practice, we interpret p < 0.05 as statistically significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Methodology", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "First, we present results for each of the feature sets in isolation over the four user characteristics (Table 5). In each case, we present the results for the majority-class (ZeroR) baseline and the bag-of-words (BOW) benchmark in the first two rows. Statistically-significant improvements over ZeroR (including for BOW) are suffixed with \" * \", and statistically-significant improvements over BOW are suffixed with \" + \". The best overall result for a given task across all combinations of feature sets is presented in boldface; for CLARITY and POSITIVITY, it is achieved with a single feature set, in both instances a User Network feature set. The benchmark results (BOW) are considerably more impressive than the ZeroR baseline. For CLARITY, THREADPART 3 achieves the best result for the task, beating BOW at a level of statistical significance for F \u00b5 . Recall that the THREADPART feature sets are based on a graph of co-participation in threads, suggesting that knowledge of which users co-post to threads is informative in predicting how clear their posts are on average. In other words, there are clusters of users whose respective post clarity is mutually predictive.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 111, |
|
"text": "(Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For POSITIVITY, POSTAFTER beats the BOW benchmark, though not at a level of statistical significance in this case. POSTAFTER may capture POSITIVITY due to sets of antagonistic users who respond to each other's posts negatively (e.g. who commonly engage in flame wars), or to cooperative users who engage in a mutually-supportive dialogue, each building positively on the previous poster's comments.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For both CLARITY and POSITIVITY, the aforementioned individual feature sets achieve the best overall results in our experiments, i.e. combining these feature sets with BOW or other feature sets did not improve the results. In both cases, the MAE is around 1.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "For PROFICIENCY and EFFORT, the BOW F \u00b5 results were notably higher, to the degree that none of the feature sets in isolation were able to better it. As a result, we looked to the combination of up to three feature sets, and present in Table 6 the best-achieved results with two or three feature sets for PROFICIENCY and EFFORT. In both cases, it is the combination of the BOW feature set with one of the User Network feature sets and one of the post-level feature sets that produces the best result, illustrating the complementary nature of the three basic feature set types. Results for the BOW feature set in isolation, along with results for BOW combined with each of the two feature sets in the best-performing method, are presented to illustrate the relative effect of each. In the case of PROFICIENCY, THREADPART 2 and BALDWIN Post AGG both lead to increased F \u00b5 when combined with BOW, as compared to BOW alone (though only the combination of all three is significantly better than BOW alone). That is, PROFICIENCY appears to be the most multi-faceted of the four user classification attributes, in being best captured through the combination of lexical choice, macro post-level features, and network-based analysis of thread co-participation. With the network-based features, we suggest this is largely a negative effect, in that \"hacks\" and \"newbies\" are characterised by a lack of thread co-participation.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 243, |
|
"text": "Table 6", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "With EFFORT, BOW achieves by far its highest F \u00b5 across all four classification tasks, and the combination with THREADPART 3 and WANAS AGG barely surpasses it, at a level which is not statistically significant.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "That the best results in all four classification tasks are achieved with network-based features (possibly in combination with other feature sets) is telling, and underlines the potential of network analysis for user classification. The aggregate post-level feature sets BALDWIN Post AGG and WANAS AGG are less effective, but bear in mind that they were not tailored specifically for the user classification task; it is thus a positive result that they have an impact when aggregated over user-level structure, which suggests that further work in customizing the per-post feature set will yield further improvements on this task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, we turn to analysis of inter-annotator agreement for the four user classification subtasks, to gauge the quality of the results achieved by our best classifiers in each case. In Table 7, we reproduce the BOW and best F \u00b5 results from Tables 5 and 6, and additionally present the mean inter-annotator (MIA) F \u00b5 based on leave-one-out cross-validation. We also present the p-value for the two-tailed paired t-test for each of BOW-MIA and best-MIA. In addition to being able to compare the F \u00b5 values directly, we can observe that for CLARITY, PROF(ICIENCY) and POS(ITIVITY), the best-performing classifier is both significantly better than the BOW benchmark (and ZeroR baseline), and statistically indistinguishable from the upper bound figure. In the case of EFFORT, there is no significant difference between BOW and the upper bound, so it would be highly unlikely that we could achieve a significant improvement over BOW with any of our classifiers. In summary, we were able to consistently exceed the majority-class baseline on this task using user-level features, attaining results that were competitive with those of a state-of-the-art bag-of-words benchmark. In most cases our results exceeded the benchmark at a high level of statistical significance, with network-based features featuring prominently across all classification subtasks.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 187, |
|
"end": 194, |
|
"text": "Table 7", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Given that the intention of this work is to enhance information access over web forum data, the next step we intend to take is to apply our trained classifiers to a larger corpus of web forum data, and assess the impact of the predictions in a task-based evaluation. Examples of such tasks include predicting perceived post quality and identifying troubleshooting-oriented threads (Baldwin et al., 2007). We also note that there is limited room for progress given our current interpretation of the inter-annotator agreement. We intend to further analyze the annotations. In particular, since each annotator annotated each user five times, we intend to study the interaction between the number of context posts and the ratings given by the annotator.",
|
"cite_spans": [ |
|
{ |
|
"start": 381, |
|
"end": 403, |
|
"text": "(Baldwin et al., 2007)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Further Work", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "In this work, we introduced a novel user classification task over web user forums. We prepared an annotated dataset relevant to the task, which we will release to the research community.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We extracted user-level features over aggregations of user posts, as well as via analysis of social networks in a web forum. We investigated each feature set we defined both in isolation and in combination with the benchmark feature sets. We have shown that these user-level features can consistently outperform a majority-class baseline over a user classification task.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "We succeeded in showing that user-level features have empirical utility in user classification, and we expect that the use of these features will generalize well to tasks over other aspects of threaded discourse, for example in profiling users or in ranking threads for information retrieval.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "9" |
|
}, |
|
{ |
|
"text": "http://www.linuxquestions.org", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://github.com/saffsd/forum_features",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Centre of Excellence program.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Mining newsgroups using networks arising from social behavior", |
|
"authors": [ |
|
{ |
|
"first": "Rakesh", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramakrishnan", |
|
"middle": [], |
|
"last": "Sridhar Rajagopalan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yirong", |
|
"middle": [], |
|
"last": "Srikant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the Twelfth International World Wide Web Conference (WWW'03)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "529--535", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rakesh Agrawal, Sridhar Rajagopalan, Ramakrishnan Srikant, and Yirong Xu. 2003. Mining newsgroups using networks arising from social behavior. In Proceedings of the Twelfth International World Wide Web Conference (WWW'03), pages 529-535, Budapest, Hungary.",
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic Thread Classification for Linux User Forum Information Access", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Martinez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Twelfth Australasian Document Computing Symposium (ADCS 2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "72--79", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin, David Martinez, and Richard Baron Penman. 2007. Automatic Thread Classification for Linux User Forum Information Access. In Proceedings of the Twelfth Australasian Document Computing Symposium (ADCS 2007), pages 72-79, Melbourne, Australia.",
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Intelligent Linux Information Access by Data Mining : the ILIAD Project", |
|
"authors": [ |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Martinez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Richard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Su", |
|
"middle": [ |
|
"Nam" |
|
], |
|
"last": "Penman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mackinlay", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL 2010 Workshop on Computational Linguistics in a World of Social Media: #SocialMedia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "15--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Timothy Baldwin, David Martinez, Richard B Penman, Su Nam Kim, Marco Lui, Li Wang, and Andrew Mackinlay. 2010. Intelligent Linux Information Access by Data Mining: the ILIAD Project. In Proceedings of the NAACL 2010 Workshop on Computational Linguistics in a World of Social Media: #SocialMedia, pages 15-16, Los Angeles, USA.",
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Formal models for expert finding in enterprise corpora", |
|
"authors": [ |
|
{ |
|
"first": "Krisztian", |
|
"middle": [], |
|
"last": "Balog", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leif", |
|
"middle": [], |
|
"last": "Azzopardi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of 29th International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "43--50", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krisztian Balog, Leif Azzopardi, and Maarten de Rijke. 2006. Formal models for expert finding in enterprise corpora. In Proceedings of 29th International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2006), pages 43-50.",
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The anatomy of a largescale hypertextual web search engine. Computer Networks and ISDN Systems", |
|
"authors": [ |
|
{ |
|
"first": "Sergei", |
|
"middle": [], |
|
"last": "Brin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Larry", |
|
"middle": [], |
|
"last": "Page", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "107--117", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sergei Brin and Larry Page. 1998. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117.",
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Discovering leadership roles in email workgroups", |
|
"authors": [ |
|
{ |
|
"first": "Vitor", |
|
"middle": [], |
|
"last": "Carvalho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wen", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Cohen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 4th Conference on Email and Anti-Spam (CEAS 2007)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vitor Carvalho, Wen Wu, and William Cohen. 2007. Discovering leadership roles in email workgroups. In Proceedings of the 4th Conference on Email and Anti-Spam (CEAS 2007), Mountain View, USA.",
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Graph theoretic and spectral analysis of Enron email data", |
|
"authors": [ |
|
{ |
|
"first": "Anurat", |
|
"middle": [], |
|
"last": "Chapanond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mukkai", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Krishnamoorthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00fclent", |
|
"middle": [], |
|
"last": "Yener", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Computational and Mathematical Organization Theory", |
|
"volume": "11", |
|
"issue": "3", |
|
"pages": "265--281", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Anurat Chapanond, Mukkai S. Krishnamoorthy, and B\u00fclent Yener. 2005. Graph theoretic and spectral analysis of Enron email data. Computational and Mathematical Organization Theory, 11(3):265-281.",
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Finding question-answer pairs from online forums", |
|
"authors": [ |
|
{ |
|
"first": "Gao", |
|
"middle": [], |
|
"last": "Cong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Long", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Yew", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of 31st International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR'08)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "467--474", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gao Cong, Long Wang, Chin-Yew Lin, Young-In Song, and Yueheng Sun. 2008. Finding question-answer pairs from online forums. In Proceedings of 31st International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR'08), pages 467-474, Singapore. Nick Craswell, Arjen P. de Vries, and Ian Soboroff. 2005. Overview of the TREC-2005 Enterprise track. In Proceedings of the 14th Text REtrieval Conference (TREC 2005), Gaithersburg, USA.",
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Studying e-mail graphs for intelligence monitoring and analysis in the absence of semantic information", |
|
"authors": [ |
|
{ |
|
"first": "Petros", |
|
"middle": [], |
|
"last": "Drineas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mukkai", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Krishnamoorthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Sofka", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B\u00fclent", |
|
"middle": [], |
|
"last": "Yener", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "of the IEEE International Conference on Intelligence and Security Informatics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "297--306", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Petros Drineas, Mukkai S. Krishnamoorthy, Michael D. Sofka, and B\u00fclent Yener. 2004. Studying e-mail graphs for intelligence monitoring and analysis in the absence of semantic information. In Proceedings of the IEEE International Conference on Intelligence and Security Informatics, pages 297-306, Tucson, USA.",
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "It pays to be picky: An evaluation of thread retrieval in online forums", |
|
"authors": [ |
|
{ |
|
"first": "Jonathan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Elsas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaime", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Carbonell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of 32nd International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR'09)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "714--715", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jonathan L. Elsas and Jaime G. Carbonell. 2009. It pays to be picky: An evaluation of thread retrieval in online forums. In Proceedings of 32nd International ACM-SIGIR Conference on Research and Development in Information Retrieval (SIGIR'09), pages 714-715, Boston, USA.",
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Probabilistic models for expert finding", |
|
"authors": [ |
|
{ |
|
"first": "Hui", |
|
"middle": [], |
|
"last": "Fang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 29th European Conference on Information Retrieval (ECIR'07)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "418--430", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hui Fang and ChengXiang Zhai. 2007. Probabilistic models for expert finding. In Proceedings of the 29th European Conference on Information Retrieval (ECIR'07), pages 418-430, Rome, Italy.",
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Improving the classification of newsgroup messages through social network analysis", |
|
"authors": [ |
|
{ |
|
"first": "Blaz", |
|
"middle": [], |
|
"last": "Fortuna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natasa", |
|
"middle": [], |
|
"last": "Eduarda Mendes Rodrigues", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Milic-Frayling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management (CIKM '07)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "877--880", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blaz Fortuna, Eduarda Mendes Rodrigues, and Natasa Milic-Frayling. 2007. Improving the classification of newsgroup messages through social network analysis. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management (CIKM '07), pages 877-880, Lisboa, Portugal. Chih-Wei Hsu and Chih-Jen Lin. 2006. BSVM-2.06. http://www.csie.ntu.edu.tw/cjlin/bsvm/. Retrieved on 15/09/2009.",
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Text categorization with support vector machines: learning with many relevant features", |
|
"authors": [ |
|
{ |
|
"first": "Thorsten", |
|
"middle": [], |
|
"last": "Joachims", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 10th European Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "137--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thorsten Joachims. 1998. Text categorization with support vector machines: learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, pages 137-142, Chemnitz, Germany.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "A new measure of rank correlation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Kendall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1938, |
|
"venue": "Biometrika", |
|
"volume": "30", |
|
"issue": "1-2", |
|
"pages": "81--93", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. G. Kendall. 1938. A new measure of rank correlation. Biometrika, 30(1-2):81-93.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Tagging and Linking Web Forum Posts", |
|
"authors": [ |
|
{ |
|
"first": "Nam", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "192--202", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Su Nam Kim, Li Wang, and Timothy Baldwin. 2010. Tagging and Linking Web Forum Posts. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 192-202, Uppsala, Sweden. Marco Lui and Timothy Baldwin. 2009a. hydrat. http://hydrat.googlecode.com. Retrieved on 15/09/2009.",
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "You Are What You Post: User-level Features in Threaded Discourse", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Baldwin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 14th Australasian Document Computing Symposium", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "98--105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Lui and Timothy Baldwin. 2009b. You Are What You Post: User-level Features in Threaded Discourse. In Proceedings of the 14th Australasian Document Computing Symposium, pages 98-105, Sydney, Australia.",
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Impact of user characteristics on online forum classification tasks. Honours thesis", |
|
"authors": [ |
|
{ |
|
"first": "Marco", |
|
"middle": [], |
|
"last": "Lui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marco Lui. 2009. Impact of user characteristics on online forum classification tasks. Honours thesis, The University of Melbourne, Australia.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Taking sides: Graph-based user classification for informal online political discourse", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Malouf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tony", |
|
"middle": [], |
|
"last": "Mullen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Internet Research", |
|
"volume": "18", |
|
"issue": "2", |
|
"pages": "177--190", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Malouf and Tony Mullen. 2008. Taking sides: Graph-based user classification for informal online political discourse. Internet Research, 18(2):177-190.",
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Introduction to Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prabhakar", |
|
"middle": [], |
|
"last": "Raghavan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hinrich", |
|
"middle": [], |
|
"last": "Sch\u00fctze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press, Cambridge, UK.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Finding and evaluating community structure in networks", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michelle", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Girvan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark E.J. Newman and Michelle Girvan. 2004. Finding and evaluating community structure in networks. Physical Review E, 69. Article Number 26113.",
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135.",
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Online community search using thread structure", |
|
"authors": [ |
|
{ |
|
"first": "Jangwon", |
|
"middle": [], |
|
"last": "Seo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"Bruce" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1907--1910", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jangwon Seo, W. Bruce Croft, and David A. Smith. 2009. Online community search using thread structure. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009), pages 1907-1910, Hong Kong, China.",
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Overview of the TREC-2006 Enterprise track", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Soboroff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Arjen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nick", |
|
"middle": [], |
|
"last": "De Vries", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Craswell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 15th Text REtrieval Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Soboroff, Arjen P. de Vries, and Nick Craswell. 2006. Overview of the TREC-2006 Enterprise track. In Proceedings of the 15th Text REtrieval Conference (TREC 2006), Gaithersburg, USA.",
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts", |
|
"authors": [ |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Thomas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "327--335", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 327-335, Sydney, Australia.",
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Automatic scoring of online discussion posts", |
|
"authors": [ |
|
{ |
|
"first": "Nayer", |
|
"middle": [], |
|
"last": "Wanas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Motaz", |
|
"middle": [], |
|
"last": "El-Saban", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Heba", |
|
"middle": [], |
|
"last": "Ashour", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Waleed", |
|
"middle": [], |
|
"last": "Ammar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2nd ACM Workshop on Information Credibility on the web (WICOW '08)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nayer Wanas, Motaz El-Saban, Heba Ashour, and Waleed Ammar. 2008. Automatic scoring of online discussion posts. In Proceedings of the 2nd ACM Workshop on Information Credibility on the Web (WICOW '08), Napa Valley, USA.",
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Predicting the perceived quality of web forum posts", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Weimer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Conference on Recent Advances in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Weimer and Iryna Gurevych. 2007. Predicting the perceived quality of web forum posts. In Proceedings of the 2007 Conference on Recent Advances in Natural Language Processing (RANLP-07), Borovets, Bulgaria.",
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Automatically assessing the post quality in online discussions on software", |
|
"authors": [ |
|
{ |
|
"first": "Markus", |
|
"middle": [], |
|
"last": "Weimer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iryna", |
|
"middle": [], |
|
"last": "Gurevych", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "M\u00fchlh\u00e4user", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL: Interactive Poster and Demonstration Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "125--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Markus Weimer, Iryna Gurevych, and Max M\u00fchlh\u00e4user. 2007. Automatically assessing the post quality in online discussions on software. In Proceedings of the 45th Annual Meeting of the ACL: Interactive Poster and Demonstration Sessions, pages 125-128, Prague, Czech Republic.",
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"text": "",

"type_str": "table",

"content": "<table><tr><td>Attribute</td><td>Value</td><td>Description</td></tr><tr><td>CLARITY</td><td>1 Unintelligible</td><td>It is impossible to make sense of the user's posts; clear as mud!</td></tr><tr><td/><td>2 Somewhat confused</td><td>The meaning of the user's posts is ambiguous or open to interpretation</td></tr><tr><td/><td>3 Comprehensible</td><td>With some effort, it is possible to understand the meaning of the post</td></tr><tr><td/><td>4 Reasonably clear</td><td>You occasionally question the meaning of the user's posts</td></tr><tr><td/><td>5 Very clear</td><td>Meaning is always immediately obvious relative to the thread; sparkling clarity!</td></tr><tr><td>PROFICIENCY</td><td>1 Hack</td><td>The posts of this user make it patently obvious that they have no technical knowledge relevant to the threads they participate in; get off the forum!</td></tr><tr><td/><td>2 Newbie</td><td>Has limited understanding of the very basics, but nothing more</td></tr><tr><td/><td>3 Average</td><td>Usually able to make a meaningful technical contribution, but struggles with more difficult/specialized problems</td></tr><tr><td/><td>4 Veteran</td><td>User gives the impression of knowing what they are talking about, with good insights into the topic of the thread but also some gaps in their knowledge</td></tr><tr><td/><td>5 Guru</td><td>The posts of this user inspire supreme confidence, and leave the reader with a warm, fuzzy feeling!</td></tr><tr><td>POSITIVITY</td><td>1 Demon</td><td>Deliberately and systematically negative with no positive contribution; the prince/princess of evil!</td></tr><tr><td/><td>2 Snark</td><td>The user is somewhat hurtful in their posts</td></tr><tr><td/><td>3 Dull</td><td>The user's posts express no strong sentiment</td></tr><tr><td/><td>4 Jolly</td><td>The user's posts are generally pleasant</td></tr><tr><td/><td>5 Solar</td><td>Goes out of his/her way in trying to make a positive contribution in all possible ways; positively radiant!</td></tr><tr><td>EFFORT</td><td>1 Loser</td><td>Zero effort on the part of the user</td></tr><tr><td/><td>2 Slacker</td><td>Obvious deficiency in effort</td></tr><tr><td/><td>3 Plodder</td><td>User's posts are unremarkable in terms of the effort put in</td></tr><tr><td/><td>4 Strider</td><td/></tr></table>"
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"text": "", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"text": "The BALDWIN Post feature set",
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Feature name Description</td><td>Type</td></tr><tr><td>onTopic</td><td>Post's relevance to the topic of a thread</td><td>Real</td></tr><tr><td>overlapPrev</td><td colspan=\"2\">Post's largest overlap with a previous post Real</td></tr><tr><td>overlapDist</td><td>Distance to previous overlapping post</td><td>Integer</td></tr><tr><td>timeliness</td><td>Ratio of time from prev post to average</td><td>Real</td></tr><tr><td/><td>inter-post interval</td><td/></tr><tr><td>lengthiness</td><td>Ratio of post length to average post length</td><td>Real</td></tr><tr><td/><td>in thread</td><td/></tr><tr><td>emoticons</td><td>Ratio of emoticons to sentences</td><td>Real</td></tr><tr><td>capitals</td><td>Ratio of capitals to sentences</td><td>Real</td></tr><tr><td>weblinks</td><td>Ratio of links to number of sentences</td><td>Real</td></tr></table>" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"text": "The WANAS feature set",

"type_str": "table",

"content": "<table><tr><td>relationships within the forum. Building on Fortuna et al. (2007), we consider User Networks, where each node represents a user, and Thread Networks, where each node represents a thread. In this work, we consider two User Networks and one Thread Network, namely: (1) POSTAFTER, (2) THREADPART, and (3) COMMONAUTHORS, respectively. The networks we define build directly on work done by</td></tr></table>"
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"text": "Results for individual feature sets.", |
|
"type_str": "table", |
|
"content": "<table/>" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"num": null, |
|
"text": "Results for augmented feature sets", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Attribute</td><td>BOW</td><td>Best</td><td>MIA</td><td>pBoW</td><td>pBest</td></tr><tr><td>CLARITY</td><td>0.120</td><td>0.260</td><td>0.240</td><td>0.049</td><td>0.723</td></tr><tr><td>PROF</td><td>0.240</td><td>0.360</td><td>0.395</td><td>0.009</td><td>0.427</td></tr><tr><td>POS</td><td>0.140</td><td>0.220</td><td>0.335</td><td>0.011</td><td>0.126</td></tr><tr><td>EFFORT</td><td>0.320</td><td>0.340</td><td>0.410</td><td>0.108</td><td>0.193</td></tr></table>"
|
}, |
|
"TABREF10": { |
|
"html": null, |
|
"num": null, |
|
"text": "BOW benchmark, best result and mean inter-annotator (MIA) F\u00b5 over each user attribute",
|
"type_str": "table", |
|
"content": "<table/>" |
|
} |
|
} |
|
} |
|
} |