{
"paper_id": "O16-2004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:05:05.151996Z"
},
"title": "Identifying the Names of Complex Search Tasks with Task-Related Entities",
"authors": [
{
"first": "Ting-Xuan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Wen-Hsiang",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Conventional search engines usually consider a search query corresponding only to a simple task. Nevertheless, due to the explosive growth of web usage in recent years, more and more queries are driven by complex tasks. A complex task may consist of multiple sub-tasks. To accomplish a complex task, users may need to obtain information of various task-related entities corresponding to the sub-tasks. Users usually have to issue a series of queries for each entity during searching a complex search task. For example, the complex task \"travel to Beijing\" may involve several task-related entities, such as \"hotel room,\" \"flight tickets,\" and \"maps\". Understanding complex tasks with task-related entities can allow a search engine to suggest integrated search results for each sub-task simultaneously. To understand and improve user behavior when searching a complex task, we propose an entity-driven complex task model (ECTM) based on exploiting microblogs and query logs. Experimental results show that our ECTM is effective in identifying the comprehensive task-related entities for a complex task and generates good quality complex task names based on the identified task-related entities.",
"pdf_parse": {
"paper_id": "O16-2004",
"_pdf_hash": "",
"abstract": [
{
"text": "Conventional search engines usually consider a search query corresponding only to a simple task. Nevertheless, due to the explosive growth of web usage in recent years, more and more queries are driven by complex tasks. A complex task may consist of multiple sub-tasks. To accomplish a complex task, users may need to obtain information of various task-related entities corresponding to the sub-tasks. Users usually have to issue a series of queries for each entity during searching a complex search task. For example, the complex task \"travel to Beijing\" may involve several task-related entities, such as \"hotel room,\" \"flight tickets,\" and \"maps\". Understanding complex tasks with task-related entities can allow a search engine to suggest integrated search results for each sub-task simultaneously. To understand and improve user behavior when searching a complex task, we propose an entity-driven complex task model (ECTM) based on exploiting microblogs and query logs. Experimental results show that our ECTM is effective in identifying the comprehensive task-related entities for a complex task and generates good quality complex task names based on the identified task-related entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conventional search engines usually consider single queries corresponding only to a simple search need. In reality, however, more and more queries are driven by complex search tasks (Guo & Agichtein, 2010; Jones & Klinkner, 2008) . Generally, a real-life complex search task usually has more than one sub-task to be accomplished. Therefore, users usually cannot accomplish a complex search task by submitting only a single query. Some researchers have worked to try to identify sub-tasks in order to help users deal with complex search tasks (Tan \uf02a Departmenr of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan E-mail: [email protected]; [email protected] et al., 2006; MacKay et al., 2008; Ji et al., 2011; Kotov et al., 2011; Yamamoto et al., 2012) .",
"cite_spans": [
{
"start": 182,
"end": 205,
"text": "(Guo & Agichtein, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 206,
"end": 229,
"text": "Jones & Klinkner, 2008)",
"ref_id": "BIBREF7"
},
{
"start": 533,
"end": 551,
"text": "(Tan et al., 2006;",
"ref_id": "BIBREF17"
},
{
"start": 552,
"end": 572,
"text": "MacKay et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 573,
"end": 589,
"text": "Ji et al., 2011;",
"ref_id": "BIBREF6"
},
{
"start": 590,
"end": 609,
"text": "Kotov et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 610,
"end": 632,
"text": "Yamamoto et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Nevertheless, only identifying sub-tasks in a complex search task is not sufficient to help users who want to search the complex task name directly e.g., \"\u5317\u4eac\u65c5\u904a (travel to Beijing)\". Understanding complex task names for complex search tasks can help search engines deal with complex-task-based queries. A complex search task can be represented by a task event and a task topic. For example, as shown in Figure 1 , a complex task \"travel to Beijing,\" which is composed of a task topic \"Beijing\" and a task event \"travel,\" has at least three sub-tasks, including \"book flight ticket,\" \"reserve hotel room,\" and \"survey maps\". Users need to issue at least three queries for each sub-task including \"Beijing flight ticket,\" \"Beijing hotel,\" and \"Beijing map\". The queries targeting a sub-task usually focus on a task-related entity, such as \"flight ticket,\" \"hotel room,\" and \"maps\". Therefore, understanding task-related entities is very important for a complex task and can help search engines provide integrated search results containing a variety of information of distinct task-related entities. When users search for a complex task, we have found the users often have a task event that triggers the users to perform exploratory or comparative search behaviors, such as \"prepare something,\" \"buy something,\" or \"travel somewhere\". Furthermore, the search behaviors are usually around a certain task topic that is the subject of interest in the complex task. Users may describe the task event and task topic of their complex task with various task-related entities in microblogs, e.g., Twitter or Weibo 1 . Microblogs are a miniature version of traditional weblogs. In recent years, many users have posted and shared their life details with others on microblogs every day. Due to the post length limitation (only 140 characters for Weibo), users tend to describe only key points of their life task. Table 1 shows an example of a microblog. We find the user, who has an ongoing complex task \"\u5317\u4eac\u65c5\u904a (travel to Beijing),\" mentioned two task-related entities \"\u6a5f\u7968 (flight ticket)\" and \"\u98ef\u5e97 (hotel)\". To understand and model a complex search task, some researchers have analyzed long and short-term user search behavior based on a single user's search sessions (Agichtein et al., 2012; Liao et al., 2012; Mihalkova & Mooney, 2009; Tan et al., 2006) . Nevertheless, a single user's search session from query logs may not be sufficient in identifying complex search tasks since complex tasks may cross search sessions or be interleaved in a single search session. In this work, we address the problem of how to help users efficiently accomplish a complex task when submitting a single query or multiple queries. To complete the scenario illustrated in Figure 1 , a complex task name with several task-related entities must be identified. Basically, the problem can be divided into the following three major sub-problems.",
"cite_spans": [
{
"start": 2252,
"end": 2276,
"text": "(Agichtein et al., 2012;",
"ref_id": "BIBREF0"
},
{
"start": 2277,
"end": 2295,
"text": "Liao et al., 2012;",
"ref_id": "BIBREF10"
},
{
"start": 2296,
"end": 2321,
"text": "Mihalkova & Mooney, 2009;",
"ref_id": "BIBREF15"
},
{
"start": 2322,
"end": 2339,
"text": "Tan et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 402,
"end": 410,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1898,
"end": 1905,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 2741,
"end": 2749,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "1. Collect queries related to the same complex search task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "2. Extract task-related entities from the collected queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "3. Automatically identify the name of the complex search task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The above three problems are very important but non-trivial to solve. We propose an entity-driven complex task model (ECTM) to automatically identify complex task names and task-related entities based on various web resources. To evaluate our proposed ECTM, we have conducted extensive experiments on a large dataset of real-world query logs. The experimental results show that our ECTM is able to identify a comprehensive task name for a complex task with related entities and generate good quality complex task names based on the identified task-related entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Search session partition: Recent studies show that about 75% of search sessions are searching for complex tasks (Field & Allan, 2013) . To help users deal with their search tasks, researchers have devoted effort to understand and identify tasks from search sessions. Boldi et al. (2008) proposed a graph-based approach to dividing a long-term search session into search tasks. Guo and Agichtein (2010) investigated the hierarchical structure of a search task with a series of search actions based on search sessions. Agichetin et al. (2012) conducted a comprehensive analysis of search tasks and classified them based on several aspects, such as intent, motivation, complexity, work-or-fun, time-sensitive, and continued-or-not. Beeferman and Berger (2000) proposed a graph-based iterative approach to clustering query logs. Wen et al. (2001) clustered similar queries based on query content and document clicks. Cui et al. (2011) grouped search queries from a click-through log with similar search tasks using the random walk method. Unfortunately, exploring only the query content and click-through information for query clustering may not obtain precise results since queries usually are short and ambiguous. Note that the above works only focus on single-goal task discovery, while our work focuses on identifying a complex task with relevant sub-tasks over a series of sessions.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "(Field & Allan, 2013)",
"ref_id": null
},
{
"start": 267,
"end": 286,
"text": "Boldi et al. (2008)",
"ref_id": "BIBREF2"
},
{
"start": 377,
"end": 401,
"text": "Guo and Agichtein (2010)",
"ref_id": "BIBREF5"
},
{
"start": 729,
"end": 756,
"text": "Beeferman and Berger (2000)",
"ref_id": "BIBREF1"
},
{
"start": 825,
"end": 842,
"text": "Wen et al. (2001)",
"ref_id": "BIBREF19"
},
{
"start": 913,
"end": 930,
"text": "Cui et al. (2011)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Cross-session task prediction: Kotov et al. (2011) noticed that a complex task may require a user to issue a series of queries, spanning a long period of time and multiple search sessions. Thus, they addressed the problem of modeling and analyzing complex cross-session search tasks. Lucchese et al. (2011) tried to identify task-based sessions in query logs by semantic-based features extracted from Wiktionary and Wikipedia to overcome a lack of semantic information. Ji et al. (2011) proposed a graph-based regularization algorithm to predict popular search tasks and simultaneously classify queries and web pages by building two content-based classifiers. White et al. (2013) improved the traditional personalization methods for search-result re-ranking by exploiting similar tasks from other users to re-rank search results. Wang et al. (2011) addressed the problem of extracting cross-session tasks and proposed a task partition algorithm based on several pairwise similarity features. Raman et al. (2013) investigated intrinsic diversity (ID) for a search task and proposed a re-ranking algorithm according to the ID tasks.",
"cite_spans": [
{
"start": 31,
"end": 50,
"text": "Kotov et al. (2011)",
"ref_id": "BIBREF8"
},
{
"start": 284,
"end": 306,
"text": "Lucchese et al. (2011)",
"ref_id": "BIBREF13"
},
{
"start": 470,
"end": 486,
"text": "Ji et al. (2011)",
"ref_id": "BIBREF6"
},
{
"start": 660,
"end": 679,
"text": "White et al. (2013)",
"ref_id": "BIBREF20"
},
{
"start": 830,
"end": 848,
"text": "Wang et al. (2011)",
"ref_id": "BIBREF18"
},
{
"start": 992,
"end": 1011,
"text": "Raman et al. (2013)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "To discriminate between simple and complex tasks, we define simple tasks as triggering only one sub-task. Complex tasks may trigger more than one sub-task. A complex task consists of several sub-tasks, and each sub-task goal may be composed of a sequence of search queries. Therefore, modeling the sub-tasks is necessary for identifying a complex task. Jones and Klinkner (2008) proposed a classification-based method to divide a single search session into tasks and sub-tasks based on the four types of features, including time, word, query log sequence, and web search. Lin et al. (2012) defined a search goal as an action-entity pair and utilized a web trigram to identify fine-grained search goals. Jones, Yamamoto, et al. (2012) proposed an approach to mining sub-tasks for a task using query clustering based on bid phrases provided by advertisers. The most important difference between our work and previous works is that we further try to identify task names with related task-intrinsic entities. To the best of our knowledge, there is no existing approach to utilizing microblogs in dealing with task identification and identifying human-interpretable names. In this work, we propose an entity-driven complex task model (ECTM) to deal with the problems mentioned above. To the best of our knowledge, there is no existing approach to utilizing multiple resources in dealing with task identification and identifying human-interpretable names for complex search tasks with various task-related entities.",
"cite_spans": [
{
"start": 353,
"end": 378,
"text": "Jones and Klinkner (2008)",
"ref_id": "BIBREF7"
},
{
"start": 572,
"end": 589,
"text": "Lin et al. (2012)",
"ref_id": "BIBREF11"
},
{
"start": 703,
"end": 733,
"text": "Jones, Yamamoto, et al. (2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complex task search:",
"sec_num": null
},
{
"text": "To improve current search engines without support for complex task searches, we proposed an entity-driven complex task model (ECTM), which can automatically identify the name of a complex task and discover task-related entities behind the complex task. Figure 2 shows our ECTM architecture. The ECTM utilizes web resources, including query logs and microblogs. Given a search query that is driven by a latent complex task, there are three major stages in the ECTM to suggest integrated search results. ",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3.1"
},
{
"text": "Integrate an expanded task-coherent query set to help identify the latent complex search task. In this stage, we utilize query logs and user search sessions to collect task-coherent queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion:",
"sec_num": "1."
},
{
"text": "Extract multiple task-related entities from a task-coherent query set, then retrieve microblog posts based on extracted task-related entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Information Model:",
"sec_num": "2."
},
{
"text": "Identify the complex task name consisting of a task topic and a task event extracted from the retrieved microblog posts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Identification:",
"sec_num": "3."
},
{
"text": "In the following, we describe the details of each major stage in the ECTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Identification:",
"sec_num": "3."
},
{
"text": "In fact, only using a single query is insufficient for finding the latent complex task. We thus try to extract other relevant task-coherent queries from search sessions. Although users may persistently search for the same complex task over a period of time, they may also simultaneously search for multiple interleaved tasks (Liu & Beklin, 2010; MacKay & Watters, 2008 ). Therefore, identifying task-coherent queries from search sessions is an important and non-trivial issue. The process for extracting task-coherent queries is described below.",
"cite_spans": [
{
"start": 325,
"end": 345,
"text": "(Liu & Beklin, 2010;",
"ref_id": null
},
{
"start": 346,
"end": 368,
"text": "MacKay & Watters, 2008",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "Given an input query q and query logs Q, we first separate queries in the query logs into search sessions by the time gap of 24 hours. We extract search sessions containing the input query q and obtain a set of sessions S q . To extract task-coherent queries Q t from the session set S q , we employ a log-linear model (LLM) with the following three useful features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
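{
"text": "To make the session partition step concrete, below is a minimal Python sketch (ours, not the authors' code; the record layout is hypothetical, while the 24-hour threshold follows the description above) that splits one user's time-ordered query records into sessions:\n\nimport datetime as dt\n\ndef partition_sessions(records, gap=dt.timedelta(hours=24)):\n    # records: time-ordered (timestamp, query) pairs for a single user.\n    sessions, current, last = [], [], None\n    for ts, query in records:\n        # Open a new session when the idle gap exceeds the 24-hour threshold.\n        if last is not None and ts - last > gap:\n            sessions.append(current)\n            current = []\n        current.append(query)\n        last = ts\n    if current:\n        sessions.append(current)\n    return sessions\n\nrecords = [(dt.datetime(2016, 5, 1, 9, 0), 'Beijing flight ticket'),\n           (dt.datetime(2016, 5, 1, 9, 5), 'Beijing hotel'),\n           (dt.datetime(2016, 5, 3, 20, 0), 'Beijing map')]\nprint(partition_sessions(records))  # two sessions: the third query is two days later",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},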
{
"text": "Average Query Frequency: Generally, the frequency of a query reflects its popularity. To avoid a long session resulting in high query frequency, we calculate the normalized query frequency as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "| | \u2211 , | | \u2208 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "where freq(q t ,s) is the frequency of the query q t in the session s, is the sessions containing q t , |s| is the number of queries in the session s, and | | is the number of sessions containing query q t in the set .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "exp | | | | (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "where | | is the number of sessions containing the input query q in the set , | | is the number of sessions containing query q t in the set , and exp \u2022 is the exponential function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "Average Query Distance: Since queries that close to the input query in a search session may have high task-coherence degree for the latent complex task. We thus use normal distribution to estimate the task-coherence for each query:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "\u221a (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "where \u03c3 is standard deviation (which is set to 6.07, according to our training dataset, see Section 4.1.2), d is the average number of queries between q t , and input query q in sessions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "We employ a log-linear model to calculate the probability of each candidate task-coherent query based on the features described above:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "; \u2211 | | (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "where Q t is the set of all candidate queries in the session set S q , |F| is the number of used feature functions , W is the set of weighting parameters w i of feature functions, and Z(Q t ) is a normalizing factor set to the value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
{
"text": "\u2211 exp \u2211 | | \u2208 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},
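{
"text": "The following Python sketch shows how the three features combine in the log-linear model of Eq. (4). It is a minimal illustration under our reconstructions of Eqs. (1)-(3); the feature values and weights below are hypothetical, whereas in the paper the weights w_i are trained on the labeled query pairs of Section 4.1.2.\n\nimport math\n\ndef f_aqf(qt, sessions_with_qt):\n    # Eq. (1): average per-session relative frequency of the candidate q_t.\n    return sum(s.count(qt) / len(s) for s in sessions_with_qt) / len(sessions_with_qt)\n\ndef f_sc(n_sessions_qt, n_sessions_q):\n    # Eq. (2), as reconstructed: session coverage of q_t within S_q.\n    return math.exp(n_sessions_qt / n_sessions_q)\n\ndef f_aqd(d, sigma=6.07):\n    # Eq. (3): Gaussian weighting of the average query distance d.\n    return math.exp(-d * d / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))\n\ndef rank_candidates(features, weights):\n    # Eq. (4): features maps each q_t to its (f_AQF, f_SC, f_AQD) values.\n    scores = {q: math.exp(sum(w * f for w, f in zip(weights, fs)))\n              for q, fs in features.items()}\n    z = sum(scores.values())  # normalizing factor Z(Q_t)\n    return {q: s / z for q, s in scores.items()}\n\nfeatures = {'Beijing hotel': (0.30, 1.8, 0.05), 'nba scores': (0.05, 1.1, 0.01)}\nprint(rank_candidates(features, weights=(1.0, 0.5, 2.0)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Coherent Query Expansion",
"sec_num": "3.2"
},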
{
"text": "Sometimes, complex search task names will not occur in the expanded query set Q t ; therefore, we cannot directly identify a complex task name from the query set Q t . In this stage, we expand the content of Q t by extracting task-related entities and use the entities to retrieve microblog posts from a microblog search service, such as Weibo 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Information Model",
"sec_num": "3.3"
},
{
"text": "We first try to extract task-related entities from expanded task-coherent query set Q t . In fact, the queries in the query set Q t usually consist of a task topic and a task-related entity. For example, a query \"\u5317\u4eac\u6a5f\u7968 (Beijing flight ticket)\" contains a topic \"\u5317\u4eac (Beijing)\" and an entity \"\u6a5f\u7968 (flight ticket)\". To realize the Part-Of-Speech (POS) of the task-related entity in the query set Q t , we generated statistics on 2000 queries randomly selected from a query log. The entities were labeled with a POS tag by a Chinese segmentation and tagging tool. Table 2 shows the results of the POS tag distribution of queries. We find that most entities are common nouns (87.5%), such as cellphone or flight ticket. For the POS of the task topic, we find that 78.9% of task topics are proper nouns and 19.8% are common nouns. Therefore, we extract task-related entities from the query set Q t by extracting all common nouns in each set of queries and select the top-frequency proper noun as a candidate task topic. We thus can obtain a candidate task topic and a list of task-related entities E t ordered by the occurrence frequency. ",
"cite_spans": [],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Task-Related Entity Extraction",
"sec_num": "3.3.1"
},
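{
"text": "A minimal Python sketch of this extraction step is shown below. We assume an off-the-shelf Chinese segmenter/POS tagger (here jieba.posseg; the paper does not name its tool) and a simplified tag mapping in which 'n' marks common nouns and 'nr'/'ns'/'nz' mark proper nouns:\n\nfrom collections import Counter\nimport jieba.posseg as pseg  # assumed tagger; any Chinese POS tagger would do\n\ndef extract_topic_and_entities(queries):\n    common, proper = Counter(), Counter()\n    for q in queries:\n        for word, flag in pseg.cut(q):\n            if flag == 'n':                    # common noun -> candidate entity\n                common[word] += 1\n            elif flag in ('nr', 'ns', 'nz'):   # proper noun -> candidate topic\n                proper[word] += 1\n    topic = proper.most_common(1)[0][0] if proper else None\n    entities = [w for w, _ in common.most_common()]  # E_t, frequency-ordered\n    return topic, entities",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Entity Extraction",
"sec_num": "3.3.1"
},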
{
"text": "To identify a complex task name that does not occur in queries, the basic idea is to collect microblog posts from microblog search engines based on the given task-related entities. According to our observations, a microblog post containing most of the task-related entities may also contain the task name (see the example in Table 1 ). Unfortunately, a microblog post may contain only a portion of the entities required for a complex task. To overcome the above problem, we identify pseudo queries based on all subsets containing two or three entities from top-n entities of E t . To make sure that each pseudo query is relevant to the candidate topic t, we combine the candidate topic t with each pseudo query and retrieve a set of microblog posts via microblog search engines.",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Task-Related Microblog Retrieval",
"sec_num": "3.3.2"
},
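{
"text": "The pseudo-query construction can be summarized with a short Python sketch (query strings are joined with spaces purely for illustration):\n\nfrom itertools import combinations\n\ndef build_pseudo_queries(topic, entities, n=5):\n    # All 2- and 3-entity subsets of the top-n entities in E_t, each combined\n    # with the candidate topic t so the retrieved posts stay on-topic.\n    top = entities[:n]\n    subsets = list(combinations(top, 2)) + list(combinations(top, 3))\n    return [' '.join((topic,) + s) for s in subsets]\n\n# With n = 5 this yields C(5,2) + C(5,3) = 10 + 10 = 20 pseudo queries,\n# the count used in the parameter-selection experiment (Section 4.1.6).\nprint(len(build_pseudo_queries('Beijing', ['flight ticket', 'hotel', 'map', 'visa', 'weather'])))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task-Related Microblog Retrieval",
"sec_num": "3.3.2"
},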
{
"text": "Based on the identified task-related entities and microblog posts in the previous stage, we aim to identify a complex task name. To identify a suitable task name, we utilize conditional random field (CRF) (Lafferty et al., 2001) to automatically label each term in a microblog post with a task-semantic tag.",
"cite_spans": [
{
"start": 205,
"end": 228,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Identification",
"sec_num": "3.4"
},
{
"text": "To realize the structure of a complex task name, we annotated 244 distinct complex task names from 513 search sessions (the details of the annotation process will be described in Section 4.1.2). Table 3 shows the statistics of the structure distribution. We found most task names consist of a task topic and a task event, such as \"\u8cfc\u8cb7\u4e09\u661f\u624b\u6a5f (buy Samsung cellphone),\" where \"\u4e09\u661f (Samsung)\" is the task topic, and \"\u8cfc\u8cb7\u624b\u6a5f (buy cellphone)\" is the task event. We also find that the task event usually is composed of a transitive verb (i.e., buy) and an event object (i.e., cellphone). Nevertheless, some events are intransitive verbs needing no event object, such as \"\u82f1\u8a9e\u5b78\u7fd2 (English learning),\" where \"\u5b78\u7fd2 (learning)\" is an intransitive verb in Chinese. Therefore, we define the two types of events as Event 1 and Event 2, where Event 1 consist of a transitive verb (E 1V ) and an object (E 1O ), and Event 2 is only an intransitive verb. We aim to automatically label each term in a microblog post with one of the five task-semantic tags, T (topic), E 1V (Event 1), E 1O (Event object), E 2 (Event 2), and O (Others). To automatically label each term with a task-semantic tag, we employ a supervised probabilistic graphical model, conditional random field (CRF), which is suitable to predict the latent structure of sentences [10] . CRF can predict sequences of labels for sequences of terms in a sentence. We use a popular CRF implementation \"CRF++,\" 3 which can adopt multiple features for each term. In the following, we describe the two types of features for complex task name identification, including term-based and post-based features.",
"cite_spans": [
{
"start": 1315,
"end": 1319,
"text": "[10]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Automatically Labeling of Task Name",
"sec_num": "3.4.1"
},
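{
"text": "To illustrate the labeling scheme, the sketch below writes tokens in the column format CRF++ expects: one token per line, whitespace-separated feature columns, the gold tag last, and a blank line between posts. The feature columns shown (POS, candidate-topic flag, stop-word flag) are an illustrative subset of the features described in Section 3.4.2:\n\ndef to_crfpp_rows(tokens):\n    # tokens: (term, pos, is_topic, is_stop, tag) with tag in {T, E1V, E1O, E2, O}.\n    rows = ['%s %s %d %d %s' % tok for tok in tokens]\n    return '\\n'.join(rows) + '\\n\\n'  # the blank line terminates the sequence\n\n# \u8cfc\u8cb7 (buy) / \u4e09\u661f (Samsung) / \u624b\u6a5f (cellphone) labeled E1V / T / E1O:\nprint(to_crfpp_rows([('\u8cfc\u8cb7', 'Vt', 0, 0, 'E1V'),\n                     ('\u4e09\u661f', 'Np', 1, 0, 'T'),\n                     ('\u624b\u6a5f', 'Nc', 0, 0, 'E1O')]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatically Labeling of Task Name",
"sec_num": "3.4.1"
},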
{
"text": "(1) Term-based features There are five term-based features proposed in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Complex Task Name Identification",
"sec_num": "3.4.2"
},
{
"text": "Stop word: A stop word usually is unimportant and not a task name. We consider stop word as a binary feature to indicate if the term is a task name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Complex Task Name Identification",
"sec_num": "3.4.2"
},
{
"text": "Candidate topic: In the previous stage (see Section 3.3.1), we extracted a candidate topic from task-coherent query set Q t . Therefore, we can utilize the candidate topic as a binary feature for indicating if a term is a task topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Complex Task Name Identification",
"sec_num": "3.4.2"
},
{
"text": "Term frequency: Generally, a term in a post with high frequency may indicate the term is more important than other terms in the post. The term frequency is normalized by dividing the largest frequency of terms in the post. The normalized term frequency is divided into three ranges, including high [1, 0.8), middle (0.2, 0.8], and low [0, 0.2]. Document frequency: A term occurring in several search-result posts may indicate the term is a task topic or a task event. The reason is that the search-result posts usually are related to a certain complex task. We normalize the document frequency by dividing the post number of search results. The normalized document frequency is divided into three intervals, including high [1, 0.8), middle (0.2, 0.8], and low [0, 0.2].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for Complex Task Name Identification",
"sec_num": "3.4.2"
},
{
"text": "According to our observation, the POS tag of a task topic usually is a proper noun (N p ) or a common noun (N c ), and the POS tag of a task event usually is a transitive verb (V t ) + common noun (N c ) or an intransitive verb (V i ). To enhance the accuracy of the CRF model, we only use four types of POS tag \"V i ,\" \"V t ,\" \"N c ,\" and \"N p \" and others are labeled as \"Others\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "(2) Post-based Features According to our observation, a post describing a complex task usually is more important or popular for an author or the author's friends. For example, when users write posts talking about their wedding, they will receive more attention than other ordinary posts. Therefore, we try to calculate a post importance score based on four post features, including descriptive, interactive, attractive, and influential degrees, according to the metadata of microblog posts. Figure 3 shows a real example of a microblog post collected from Weibo. We can see that there is some metadata on the post, such as the click times of \"like,\" \"share,\" and \"comment\". Descriptive degree: Generally, a complex task needs more words to describe a variety of subtasks. Thus, we assume that a microblog post p with longer context can provide more content about the complex task. Nevertheless, some spam posts may contain long text with repeated terms. Therefore, we calculate the entropy to represent post descriptive degree:",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 499,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 log \u2208",
"eq_num": "(5)"
}
],
"section": "POS tag:",
"sec_num": null
},
{
"text": "where Term(p) is a set of terms in post p and P(w) is the occurrence probability of term w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
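{
"text": "A direct Python rendering of Eq. (5), with P(w) estimated from within-post term counts (our reading of the definition):\n\nimport math\nfrom collections import Counter\n\ndef descriptive_degree(terms):\n    # Eq. (5): entropy of the post's term distribution. A long spam post that\n    # repeats a few terms gets low entropy despite its length.\n    counts = Counter(terms)\n    total = sum(counts.values())\n    return -sum((c / total) * math.log(c / total) for c in counts.values())\n\nprint(descriptive_degree(['hotel', 'flight', 'map', 'visa']))  # varied terms: high\nprint(descriptive_degree(['buy', 'buy', 'buy', 'buy']))        # repetitive: 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},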
{
"text": "Interactive degree: If a post has many \"comments,\" we assume the post is more interactive. An interactive post has higher probability of mentioning a complex task. We formulate the interactive degree of a post as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "\u2208 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "where CommentCount(p) is the \"comment\" number of a post p and where max \u2022 function returns the max \"comment\" number of a post p i in the post set P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "Attractive degree: If a post receives many \"likes,\" we assume the post is more attractive. An attractive post has higher probability to mention a complex task. We formulate the attraction of a post as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "\u2208 (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "where LikeCount(p) is the \"like\" number of a post p and where max \u2022 function returns the max \"like\" number of a post p i in the post set P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "Influential degree: If a post was shared many times, we assume the post is more influential. An influential post has higher probability to mention a complex task. We formulate the influential degree of a post as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "\u2208 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "where ShareCount(p) is the \"share\" number of a post p and where max \u2022 function returns the max \"share\" number of a post p i in the post set P.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
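{
"text": "Eqs. (6)-(8) share one form, a post's count divided by the maximum count over the retrieved post set P, so a single helper covers all three degrees (a small sketch with made-up counts):\n\ndef normalized_degree(count, all_counts):\n    # Shared form of Eqs. (6)-(8): count / max count over the post set P.\n    m = max(all_counts)\n    return count / m if m else 0.0\n\nposts = [{'comment': 12, 'like': 40, 'share': 3},\n         {'comment': 2, 'like': 5, 'share': 9}]\nfor key in ('comment', 'like', 'share'):\n    column = [p[key] for p in posts]\n    print(key, [round(normalized_degree(c, column), 2) for c in column])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},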
{
"text": "Since the CRF model only accept features for terms (not for posts), we need to transform the post importance score to term importance score. The basic idea is that, if a term occurs often in more important posts, we can assume the term is important. Therefore, we calculate average term importance based on post importance as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "\u2211 \u2208 | | (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "where PostImportance(p i ) can be replaced by one of the above four feature functions, P w is a set of posts containing the term w, and |P w | is the number of posts in the set P w . We further normalize f TermImportance (w) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "\u2208 (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
{
"text": "where W is the set of all terms. Finally, we divide the normalized term importance score into three ranges, high [1, 0.8), middle (0.2, 0.8], and low [0, 0.2].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},
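{
"text": "The post-to-term transformation of Eqs. (9)-(10) and the final discretization can be sketched as follows, where post_importance stands for any one of the four degree functions above:\n\ndef term_importance(term, posts, post_importance):\n    # Eq. (9): average importance of the posts P_w that contain the term.\n    pw = [p for p in posts if term in p['terms']]\n    return sum(post_importance(p) for p in pw) / len(pw) if pw else 0.0\n\ndef bin_score(x):\n    # Discretize a normalized [0, 1] score into the three CRF feature ranges.\n    return 'high' if x > 0.8 else 'middle' if x > 0.2 else 'low'\n\ndef term_feature(term, posts, vocabulary, post_importance):\n    # Eq. (10): normalize by the maximum term importance over all terms W.\n    peak = max(term_importance(w, posts, post_importance) for w in vocabulary)\n    raw = term_importance(term, posts, post_importance)\n    return bin_score(raw / peak) if peak else 'low'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS tag:",
"sec_num": null
},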
{
"text": "Based on the task-semantic tagging results of the CRF model, we can calculate the highest frequency task-semantic tagging terms, including e 1V , e 1O , e 2 , and t, for each type of task-semantic tag E 1V , E 1O , E 2 , and T, respectively. To compose a semantically suitable complex task name c, we use a rule-based algorithm that considers the frequency and POS of each task-semantic tagging term. Figure 4 shows the algorithm of complex task name composition (CTNC), which combines a task topic and a task event into a complex task name, where Freq(e) is the frequency of an event e and POS(t) is the POS tag of a topic t. We first compare the term frequency of a transitive event e 1V and an intransitive event e 2 . If the frequency of e 2 is greater than e 1V , CTNC simply returns a complex task name composed of <t+e 2 >, e.g., \"\u5317\u4eac<t>\u65c5\u904a<e2> (Beijing<t> travel<e 2 >)\". Otherwise, if the topic t is a common noun, CTNC returns a complex task name composed of <e 1V +t>, e.g., \"\u5b78\u7fd2<e 1V >\u82f1 \u8a9e<t> (learn<e 1V > English<t>)\". Otherwise, if the topic t is a proper noun, CTNC returns a complex task name composed of e 1V +t+e 1O , e.g., \"\u8cfc\u8cb7<e 1V >\u4e09\u661f< t >\u624b\u6a5f<e 1O > (buy<e 1V > Samsung< t > cellphone<e 1O >)\".",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 409,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Complex Task Name Composition",
"sec_num": "3.4.3"
},
{
"text": "Input: task-semantic tagging terms , , , Output: A complex task name If < Then return Else If is \"common noun\" Then return Else return ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm: complex task name composition",
"sec_num": null
},
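{
"text": "The CTNC rules translate directly into Python (a transcription of the algorithm above, where freq and pos are lookup functions over the tagging results):\n\ndef compose_task_name(e1v, e1o, e2, t, freq, pos):\n    # CTNC: prefer <t + e2> when the intransitive event dominates; otherwise\n    # place the transitive event around the topic according to the topic's POS.\n    if freq(e1v) < freq(e2):\n        return t + e2         # e.g. Beijing<t> travel<e2>\n    if pos(t) == 'common noun':\n        return e1v + t        # e.g. learn<e1v> English<t>\n    return e1v + t + e1o      # e.g. buy<e1v> Samsung<t> cellphone<e1o>",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complex Task Name Composition",
"sec_num": "3.4.3"
},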
{
"text": "We use a month of query logs from the Sogou search engine as our dataset. The query logs contain 21,422,773 records with 3,163,170 distinct queries. We group these query records into search sessions according to user ID. Since a complex task may span a period of time, we used 24 hours as the time gap to segment search sessions, which resulted in 264,360 search sessions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1.1"
},
{
"text": "In this work, we employed three annotators to label each query with a task-related entity and a latent complex task name. Since the query logs are diverse and often ambiguous, heuristically labeling the task-related entities and task names for each query may lead to inconsistent results. To identify reasonable and consistent training/testing data for evaluating our ECTM, a formal annotation method procedure should be provided. In the following, we describe the guidelines for annotators on how to label a task-related entity and a complex task name for each search query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "4.1.2"
},
{
"text": "In general, a search query should give one entity that users focus on. Annotators thus only focus on queries containing exactly one entity and discard other queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "4.1.2"
},
{
"text": "To better interpret task-related entities and task names for queries, annotators are encouraged to exploit external resources, e.g., clicked pages, search results for queries, or query context (i.e., other queries in the same search session).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "4.1.2"
},
{
"text": "Since a complex task should be determined based on the whole search session, annotators should complete the labelling of all task-related entities in a search session before they begin to label task names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "4.1.2"
},
{
"text": "From the 264,360 obtained sessions, we randomly labeled 5,142 sessions with task names and entities. For each task, we further examined the labeled results, and unified the similar task names annotated by different annotators. For instance, \"\u5317\u4eac\u65c5\u904a (travel to Beijing)\" and \"\u5317\u4eac\u65c5\u884c (trip to Beijing)\" would be unified to \"\u5317\u4eac\u65c5\u904a (travel to Beijing)\". Table 4 shows an example search session of our labeled results. In fact, it is not easy to find a search session containing a good complex search task search intent. Each query belonging to a complex search task was labeled with a task-related entity. After excluding the tasks containing less than three entities and some controversial tasks, we obtained only 523 complex tasks from 513 sessions. We found that, although there were many interleaving simple tasks in one session, few complex tasks occurred in the same session. In other words, we found users seldom deal with two complex tasks simultaneously within the same period of time. The statistics of the labeled results are shown in Table 5 . On average, there are 5.68 (2972 / 523) entities per task in our labeled dataset. Training data: We sampled 100 complex tasks as the training dataset, which contained 724 queries and 424 distinct entities. For the log-linear model (LLM) used in task-coherent query expansion, we identified pairwise training data according to our labeled data set. Each query pair indicates if two queries belong to the same complex task. Therefore, we could obtain 4950 (i.e., the combination number ) query pairs to train LLM. For the CRF model used in task name identification, we manually labeled 30 microblog posts retrieved for each complex task; thus, there were 3000 microblog posts for training the CRF model.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 4",
"ref_id": null
},
{
"start": 1037,
"end": 1044,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Labeling",
"sec_num": "4.1.2"
},
{
"text": "We used the remaining 423 complex tasks as the testing dataset, which contained 3017 queries and 1426 distinct entities. For each testing complex task, we randomly selected a query from the testing task as the input query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing data:",
"sec_num": null
},
{
"text": "Since the automatically identified task name may be semantically the same as our annotated task name but represented in different lexicons, we cannot simply judge each identified task name by an automatic keyword matching approach. To overcome the above problem, we employed three judges to give scores independently for identified task names of all compared methods (the details of the compared methods will be described in Section 4.1.5). To ensure the fairness for all compared methods, we designed a labeling procedure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Quality Labeling",
"sec_num": "4.1.3"
},
{
"text": "1. Merge all identified task names from different methods into a large task name list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Quality Labeling",
"sec_num": "4.1.3"
},
{
"text": "2. Remove the duplicated task names in the task name list. 3. Shuffle the task names in order to hide the information of the original rank of different methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Quality Labeling",
"sec_num": "4.1.3"
},
{
"text": "In order to give a relevance score, the judges could look at our pre-labeled task names and survey the information of the complex tasks. The score for each task name should be 0, 1, or 2. We define the criterion of giving a relevance score as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Quality Labeling",
"sec_num": "4.1.3"
},
{
"text": "Bad (score of 0): A bad complex task name is irrelevant to the complex task or is semantically unsuitable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Quality Labeling",
"sec_num": "4.1.3"
},
{
"text": "Fair (score of 1): A fair task name is semantically suitable but the judges cannot determine whether the task name can represent the complex task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Quality Labeling",
"sec_num": "4.1.3"
},
{
"text": "A good task name is semantically suitable and semantically the same as the pre-labeled task name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Good (score of 2):",
"sec_num": null
},
{
"text": "Since there were three judges (thus, we had three relevance scores for each task), we needed to decide the final relevance scores for evaluating performance of the compared methods. We used the majority decision to decide the final relevance score for each task. For instance, when a name was labeled with relevance scores 1, 0, and 0, the final relevance score would be 0. If a task name was labeled with three distinct relevance scores 0, 1, and 2, the final relevance score would be 1. We only considered a task name with a relevance score of 2 as a correct task name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Good (score of 2):",
"sec_num": null
},
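{
"text": "This decision rule is easy to state in code (a tiny sketch; with three judges and scores in {0, 1, 2}, any outcome without a majority is exactly the all-distinct case, which resolves to the middle score 1):\n\nfrom collections import Counter\n\ndef final_relevance(scores):\n    # Majority vote over the three judges' scores; three distinct scores\n    # (0, 1, 2) have no majority and fall back to the middle score 1.\n    label, votes = Counter(scores).most_common(1)[0]\n    return label if votes >= 2 else 1\n\nprint(final_relevance([1, 0, 0]))  # -> 0\nprint(final_relevance([0, 1, 2]))  # -> 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Good (score of 2):",
"sec_num": null
},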
{
"text": "Inclusion rate: The inclusion rate is to evaluate the fraction of the top n identified complex task names that include at least one correct complex task name. The equation of inclusion rate is given as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "| | | |",
"eq_num": "(11)"
}
],
"section": "Evaluation Metrics",
"sec_num": "4.1.4"
},
{
"text": "where |T| is the number of testing tasks (i.e., 423) and where |inclusion(T)| is the number of testing tasks that can find at least one correct task name in top n identified complex task names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1.4"
},
{
"text": "Since we only need one correct task name for each testing data, we use MRR, which only considers the rank of the first returned correct task name for each testing data task. To calculate the MRR, the equation is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mean reciprocal rank (MRR):",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "| | \u2211 | |",
"eq_num": "(12)"
}
],
"section": "Mean reciprocal rank (MRR):",
"sec_num": null
},
{
"text": "where |T| is the number of testing tasks and rank(i) is the rank of the first identified correct task name of the i th testing task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mean reciprocal rank (MRR):",
"sec_num": null
},
{
"text": "To realize the overall quality of the top n identified task name, we also use NDCG to evaluate the performance for different methods. According to scores of identified task name, we first have to calculate the DCG as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized discounted cumulative gain (NDCG):",
"sec_num": null
},
{
"text": "\u2211 (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized discounted cumulative gain (NDCG):",
"sec_num": null
},
{
"text": "where n is the number of identified task names and \u2208 0,1,2 is the relevance score of the top-i task name. In order to normalize the DCG value from 0 to 1, the DCG divided by IDCG, called NDCG, is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized discounted cumulative gain (NDCG):",
"sec_num": null
},
{
"text": "where IDCG can be calculated the same as DCG with an ideal rank, which was ranked by labeled scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized discounted cumulative gain (NDCG):",
"sec_num": null
},
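{
"text": "For reference, the three metrics can be computed as below. This is a sketch under our reconstructions of Eqs. (11)-(13); in particular, we assume the common (2^r - 1) / log2(i + 1) gain form for the DCG:\n\nimport math\n\ndef inclusion_rate(found_flags):\n    # Eq. (11): fraction of testing tasks with a correct name in the top n.\n    return sum(found_flags) / len(found_flags)\n\ndef mrr(first_correct_ranks):\n    # Eq. (12): mean of 1 / rank of the first correct task name per task.\n    return sum(1.0 / r for r in first_correct_ranks) / len(first_correct_ranks)\n\ndef dcg(scores):\n    # Eq. (13): scores are the relevance labels r_i in {0, 1, 2}, in rank order.\n    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(scores))\n\ndef ndcg(scores):\n    # NDCG = DCG / IDCG, where the IDCG uses the ideal (label-sorted) order.\n    ideal = dcg(sorted(scores, reverse=True))\n    return dcg(scores) / ideal if ideal else 0.0\n\nprint(inclusion_rate([1, 0, 1]), mrr([1, 2, 1]), round(ndcg([2, 0, 1]), 3))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Normalized discounted cumulative gain (NDCG):",
"sec_num": null
},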
{
"text": "LRM_SERP (linear regression model with search engine result snippet): This method was proposed by Zeng et al. (2004) to identify salient phrases from search-result snippets. Salient phrases are identified using several regression models with five proposed features, including TFIDF, phrase length, intra-cluster similarity, cluster entropy, and phrase independence. We used the linear regression model (LRM) with the five features proposed in their work to extract salient phrases as task names from search-result snippets. Since using only the testing input query to collect search-result snippets may not be fair for identifying complex task names, we used our produced pseudo queries with task-related information model to collect search-result snippets from a web search engine (see Section 3.3). The weight for each feature was set as in Zeng et al.'s work. LRM_MB (linear regression model with microblog): We also tried to adopt a linear regression model with microblog posts in order to compare the resources of SERP and microblog based on the five features proposed in Zeng et al. LRM_MB+ (linear regression model with microblog plus): This was used to compare the performance of the linear regression model (LRM) and with our proposed microblog features with quantified values in LRM_MB.",
"cite_spans": [
{
"start": 98,
"end": 116,
"text": "Zeng et al. (2004)",
"ref_id": "BIBREF22"
},
{
"start": 843,
"end": 862,
"text": "Zeng et al.'s work.",
"ref_id": null
},
{
"start": 1077,
"end": 1088,
"text": "Zeng et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Identification Method Comparison",
"sec_num": "4.1.5"
},
{
"text": "ECTM (entity-driven complex task model): This is our proposed entity-driven complex task model, which utilizes microblog as extending data for a task-coherent query set. We used all features proposed in this work for training a CRF model to identify the name of complex task. The only difference between LRM_MB+ and our method is that the former first extracts bigrams and trigrams from posts as candidate phrases before using LRM to determine their correctness, and our method uses CRF to directly label each term in the posts with a task-semantic tag (i.e., topic or event).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Name Identification Method Comparison",
"sec_num": "4.1.5"
},
{
"text": "For each testing task, we need to determine the number of the top n selected task-related entities to produce pseudo queries for retrieving microblog posts. Since we use all subsets containing two or three entities of the top n entity set to produce pseudo queries, the entity number is critical to the number of produced pseudo queries (i.e., the total number of pseudo queries is plus , where is the k-combination of the entity set containing n entities), also making it critical to computational time. To realize how many entities should be selected for each testing task to achieve the best performance of identifying a complex task name, we randomly selected 15 complex tasks from the testing dataset to make the preliminary experiment. For each pseudo query, we retrieved the top 20 microblog posts and search-result snippets via a microblog search engine and a web search engine, respectively. We calculated the average top 3 inclusion rate for each testing task of LRM_SERP and LRM_MB. Figure 5 shows the different number of entities for producing pseudo queries along with the average top 3 inclusion rate. We can see both LRM_SERP and LRM_MB achieved better inclusion rates when using five entities to compose 20 (i.e., ) pseudo queries. Generally, a small number of entities achieved a worse inclusion rate since the retrieved microblog posts or search-result snippets for a complex task were insufficient. When the number of selected entities was greater than five, the number of unrelated entities also increased, resulting in a worse inclusion rate. To understand the correct number of microblog posts that should be used in our training data, we also conducted a preliminary experiment. Figure 6 shows that, when we used 30 posts, our ECTM could achieve the best precision. Therefore, in the following experiments, we used the top five entities for all methods when identifying pseudo queries. Table 6 shows the overall results of task name identification using different methods. Generally, the four methods achieved adequate top 5 inclusion rate (0.68, 0.72, 0.79, and 0.83 for LRM_SERP, LRM_MB, LRM_MB+, and ECTM, respectively). We found LRM_MB using a microblog is better than LRM_SERP using an SERP in identifying the complex task name. According to our analysis, microblog posts contain more task names, even in tail posts. On the contrary, only a few top-ranked search result snippets contain complex task names. The reason is that microblog posts are identified by users; thus, they have more likelihood to talk about a real-life complex task. Therefore, LRM_MB can achieve better performance than LRM_SERP. We also compared the performance between using and not using our proposed features in a microblog post (i.e., LRM_MB and LRM_MB+). Using microblog features can slightly improve the overall performance since important posts usually mention a complex task. Our proposed ECTM using CRF to automatically label task-semantic tags for each term can improve the performance significantly, and it achieved Top-1 MRR of 0.57. ",
"cite_spans": [],
"ref_spans": [
{
"start": 994,
"end": 1002,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1702,
"end": 1710,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1909,
"end": 1916,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Selection",
"sec_num": "4.1.6"
},
{
"text": "To realize whether the identified complex task names are semantically suitable, some example of top-1 identified task names are shown in Table 7 . For the query \"\u5317\u4eac\u6a5f\u7968 (Beijing flight ticket),\" only our ECTM identified correct task name \"\u5317\u4eac\u65c5\u904a (Beijing travel)\". LRM_SERP identified incorrect task name \"\u65c5\u884c\u793e\u65c5\u884c (travel agency travel),\" which is not semantically suitable. The reason is that search-result snippets sometimes are not represented as complete natural language sentences. Therefore, when we try to extract a bigram or trigram, there are some combinations that are not semantically suitable. LRM_MB and LRM_MB+ identified incorrect task names \"\u8a02\u8cfc\u81ea\u7531\u884c (reserve independent travel)\" and \"\u570b\u5916\u65c5\u904a\u7db2\u8a02 (online ordering to travel in foreign countries),\" which are semantically suitable but not very related with Beijing travel. The reason is that the above two methods do not consider the task topic when identifying complex task names. According to our observation, the task topic of a complex task usually occurs in almost every search query. For the testing task query \"\u6df1\u5733\u6703\u8a08\u5f85\u9047 (Shenzhen accounting salary),\" only LRM_MB+ identified the correct task name \"\u7533\u8acb\u6703\u8a08\u5de5\u4f5c (apply for accounting work),\" and our ECTM identified an incorrect task name \"\u7533\u8acb\u6df1\u5733\u5de5\u4f5c (apply for Shenzhen work)\". According to our analysis, ECTM identified an incorrect task topic \"\u6df1\u5733 (Shenzhen),\" which is a location name in China and occurs many times in a task-coherent query set (see Section 3.2). Nevertheless, the correct task topic should be \"\u6703\u8a08 (accounting)\", which specifies the career searched by the user. As a result, we find our ECTM may identify incorrect task names when the task topic is not the most frequent term in the task-coherent query set. ",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "MB",
"sec_num": null
},
{
"text": "For our proposed ECTM, we discuss two issues of producing pseudo queries with a candidate topic and the limitations of task name composition in the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "(1) Producing pseudo queries with a candidate topic: In general, pseudo queries containing a candidate topic can achieve better precision when retrieving microblog posts. If the pseudo queries do not contain a topic, it is hard to find the correct topic for the task name. For example, two tasks \"\u5317\u4eac\u65c5\u904a (travel to Beijing)\" and \"\u9752\u5cf6\u65c5\u904a (travel to Tsingtao)\" may have many of the same task-related entities, such as \"\u6a5f\u7968 (flight tickets),\" \"\u98ef\u5e97 (hotel),\" and \"\u5730\u5716 (maps)\". Therefore, all of the microblog posts containing these entities may be retrieved and result in our ECTM identify an incorrect task name. Nevertheless, when the identified candidate topic is incorrect, our task name identification model usually is unable to identify a correct task name. How to improve the precision of identifying a task topic is still an important issue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "(2) The limitations of task name composition: Our proposed ECTM can identify task names composed of correct task-semantic tags effectively. Nevertheless, some identified task names that contain an event object may have semantic flaws. For instance, we identified a complex task name \"\u6cbb\u7642\u5317\u4eac\u6253\u9f3e (treat Beijing snore),\" which is composed of an event \"\u6cbb \u7642 (treat),\" a topic \"\u5317\u4eac (Beijing),\" and an event object \"\u6253\u9f3e (snore),\" based on our definition of task name composition strategy, which considers only POS patterns. Actually, the more semantically suitable task name is \"\u5317\u4eac\u6cbb\u7642\u6253\u9f3e (Beijing treat snore)\" that is composed of a topic \"\u5317\u4eac (Beijing),\" an event \"\u6cbb\u7642 (treat),\" and an event object \"\u6253\u9f3e (snore)\". In this work, we still cannot provide a perfect solution for composing task names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "In this work, we proposed an entity-driven complex task model (ECTM), which addressed the problem of improving the user experience when searching for a complex task. We exploited various web resources, including query logs, microblogs, and search-result snippets, to enhance the performance of our ECTM. To identify a human-interpretable complex task name from short-content queries, we utilized microblog posts and investigated several useful features to train the CRF model to automatically identify complex task names. Experimental results show that ECTM efficiently identifies complex task names with various task-related entities. Nevertheless, there are still some problems that need to be solved. In the future, we will try to investigate other useful features to improve the task name identification when dealing with real-life complex task queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},
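{
"text": "A toy sketch of the CRF tagging step, for readers who want to reproduce it: the paper trained its model with CRF++ (see the footnote below), whereas this sketch substitutes the python-crfsuite bindings, and the features, tag set, and training sequence are illustrative assumptions rather than the paper's actual configuration.

import pycrfsuite

def term_features(terms, i):
    # Simplified per-term features: surface form, POS tag, and the
    # previous POS tag; the paper's feature set is richer.
    term, pos = terms[i]
    feats = ['term=' + term, 'pos=' + pos]
    if i > 0:
        feats.append('prev_pos=' + terms[i - 1][1])
    return feats

# One toy training sequence: segmented terms with task-semantic tags.
terms = [('Beijing', 'Nb'), ('travel', 'VC'), ('flight-ticket', 'Na')]
labels = ['TOPIC', 'EVENT', 'ENTITY']

trainer = pycrfsuite.Trainer(verbose=False)
trainer.append([term_features(terms, i) for i in range(len(terms))], labels)
trainer.train('task_tagger.crfsuite')

tagger = pycrfsuite.Tagger()
tagger.open('task_tagger.crfsuite')
print(tagger.tag([term_features(terms, i) for i in range(len(terms))]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6."
},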
{
"text": "Weibo: http://weibo.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Session Coverage: The queries occurring in several sessions are candidates in terms of task-coherence. To collect queries occurring in many sessions, we use average session frequency, which can be calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
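{
"text": "The formula itself is not shown above, so the following reconstruction is purely an assumption: it reads the average session frequency of a query as the fraction of sessions in which the query occurs, so that queries spanning many sessions score higher.

def avg_session_frequency(query, sessions):
    # Hypothetical reconstruction of the missing formula: the fraction
    # of sessions that contain the query.
    hits = sum(1 for session in sessions if query in session)
    return hits / len(sessions)

sessions = [['Beijing travel', 'Beijing hotel'],
            ['Beijing travel', 'Beijing flight ticket'],
            ['Tsingtao travel']]
print(avg_session_frequency('Beijing travel', sessions))  # 0.666...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},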
{
"text": "http://s.weibo.com/weibo/%E5%8C%97%E4%BA%AC%E6%97%85%E9%81%8A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "CRF++: http://crfpp.googlecode.com/svn/trunk/doc/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Search, Interrupted: Understanding and Predicting Search Task Continuation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "White",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "P",
"middle": [
"N"
],
"last": "Bennett",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agichtein, E., White, R. W., Dumais, S. T., & Bennett, P. N. (2012). Search, Interrupted: Understanding and Predicting Search Task Continuation. In Proc. of SIGIR 2012.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Agglomerative Clustering of a Search Engine Query log",
"authors": [
{
"first": "D",
"middle": [],
"last": "Beeferman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Berger",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of KDD '00",
"volume": "",
"issue": "",
"pages": "407--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beeferman, D. & Berger, A. (2000). Agglomerative Clustering of a Search Engine Query log. In Proc. of KDD '00, 407-416.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Query-Flow Graph: Model and Applications",
"authors": [
{
"first": "P",
"middle": [],
"last": "Boldi",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bonchi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Donato",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gionis",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vigna",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boldi, P., Bonchi, F., Castillo, C., Donato, D., Gionis, A., & Vigna, S. (2008). The Query-Flow Graph: Model and Applications. In Proc. of CIKM 2008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-view Random Walk Framework for Search Task Discovery from Click-through Log",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Du",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cui, J., Liu, H., Yan, J., Ji L., Jin R., He, J., Gu, Y., Chen, Z., & Du, X. (2011). Multi-view Random Walk Framework for Search Task Discovery from Click-through Log. In Proc. of CIKM 2011.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Task-Aware Query Recommendation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Feild",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Allan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feild, H. & Allan, J. (2013). Task-Aware Query Recommendation. In Proc. of SIGIR 2013.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ready to Buy or Just Browsing? Detecting Web Searcher Goals from Interaction Data",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Agichtein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo, Q. & Agichtein, E. (2010). Ready to Buy or Just Browsing? Detecting Web Searcher Goals from Interaction Data. In Proc. of SIGIR 2010.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning Search Tasks in Queries and Web Pages via Graph Regularization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "W",
"middle": [
"V"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, M., Yan, J., Gu, S., Han, J., He, X., Zhang, W. V., & Chen, Z. (2011). Learning Search Tasks in Queries and Web Pages via Graph Regularization. In Proc. of SIGIR 2011.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Beyond the Session Timeout: Automatic Hierarchical Segmentation of Search Topics in Query Logs",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Klinkner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CIKM '08",
"volume": "",
"issue": "",
"pages": "699--708",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jones, R., & Klinkner, K. (2008). Beyond the Session Timeout: Automatic Hierarchical Segmentation of Search Topics in Query Logs. In Proc. of CIKM '08, 699-708.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Modeling and Analysis of Cross-Session Search Tasks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kotov",
"suffix": ""
},
{
"first": "P",
"middle": [
"N"
],
"last": "Bennett",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "White",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Teevan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kotov, A., Bennett, P. N., White, R. W., Dumais, S. T., & Teevan, J. (2011). Modeling and Analysis of Cross-Session Search Tasks. In Proc. of SIGIR 2011.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, J., Mccallum, A., & Pereira, F. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proc. of ICML, 282-289.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Evaluating the Effectiveness of Search Task Trails",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "L.-W",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liao, Z., Song, Y., He, L.-W., & Huang, Y. (2012). Evaluating the Effectiveness of Search Task Trails. In Proc. of WWW 2012.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Active Objects: Actions for Entity-Centric Search",
"authors": [
{
"first": "T",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Fuxman",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, T., Pantel, P., Gamon, M., Kannan, A., & Fuxman, A. (2012). Active Objects: Actions for Entity-Centric Search. In Proc. of WWW 2012.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Personalizing Information Retrieval for Multi-Session Tasks: The Roles of Task Stage and Task Type",
"authors": [
{
"first": "J &",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "N",
"middle": [
"J"
],
"last": "Belkin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, J & Belkin, N. J. (2010). Personalizing Information Retrieval for Multi-Session Tasks: The Roles of Task Stage and Task Type. In Proc. of SIGIR 2010.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Identifying Task-based Sessions in Search Engine Query Logs",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lucchese",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Orlando",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Perego",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Silvestri",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Tolomei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of WSDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucchese, C., Orlando, S., Perego, R., Silvestri, F., & Tolomei, G. (2011). Identifying Task-based Sessions in Search Engine Query Logs. In Proc. of WSDM 2011.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploring Multi-Session Web Tasks",
"authors": [
{
"first": "B",
"middle": [],
"last": "Mackay",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Watters",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CHI '08",
"volume": "",
"issue": "",
"pages": "1187--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MacKay, B. & Watters, C. (2008). Exploring Multi-Session Web Tasks. In Proc. of CHI '08, 1187-1196.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to Disambiguate Search Queries form Short Sessions",
"authors": [
{
"first": "L",
"middle": [],
"last": "Mihalkova",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of ECML '09",
"volume": "",
"issue": "",
"pages": "111--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalkova, L. & Mooney, R. (2009). Learning to Disambiguate Search Queries form Short Sessions. In Proc. of ECML '09, 111-127.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Toward Whole-Session Relevance: Exploring Intrinsic Diversity in Web Search",
"authors": [
{
"first": "K",
"middle": [],
"last": "Raman",
"suffix": ""
},
{
"first": "P",
"middle": [
"N"
],
"last": "Bennett",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Collins-Thompson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raman, K., Bennett, P. N., & Collins-Thompson, K. (2013). Toward Whole-Session Relevance: Exploring Intrinsic Diversity in Web Search. In Proc. of SIGIR 2013.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Mining Long-Term Search History to Improve Search Accuracy",
"authors": [
{
"first": "B",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of KDD '06",
"volume": "",
"issue": "",
"pages": "718--723",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tan, B., Shen, X., & Zhai, C. (2006). Mining Long-Term Search History to Improve Search Accuracy. In Proc. of KDD '06, 718-723.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Identifying Popular Search Goals behind Search Queries to Improve Web Search Ranking",
"authors": [
{
"first": "T.-X",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "W.-S",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of AIRS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wang, T.-X., & Lu, W.-S. (2011). Identifying Popular Search Goals behind Search Queries to Improve Web Search Ranking. In Proc. of AIRS 2011.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Clustering User Queries of Search Engine",
"authors": [
{
"first": "J.-R",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "J.-Y",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "H.-J",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen, J.-R., Nie, J.-Y., & Zhang, H.-J. (2001). Clustering User Queries of Search Engine. In Proc. of WWW 2001.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Enhancing Personalized Search by Mining and Modeling Task Behavior",
"authors": [
{
"first": "R",
"middle": [
"W"
],
"last": "White",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of WWW",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "White, R. W., Chu, W., Hassan, A., He, X., Song, Y., & Wang, H. (2013). Enhancing Personalized Search by Mining and Modeling Task Behavior. In Proc. of WWW 2013.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Wisdom of Advertisers: Mining Subgoals via Query Clustering",
"authors": [
{
"first": "T",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sakai",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iwata",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "J.-R",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamamoto, T., Sakai, T., Iwata, M., Yu, C., Wen, J.-R., & Tanaka, K. (2012). The Wisdom of Advertisers: Mining Subgoals via Query Clustering. In Proc. of CIKM 2012.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to Cluster Web Search Results",
"authors": [
{
"first": "H.-J",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Q.-C",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "W.-Y",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeng, H.-J., He, Q.-C., Chen, Z., Ma, W.-Y., & Ma, J. (2004). Learning to Cluster Web Search Results. In Proc. of SIGIR 2004.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The structure of a complex task with task-related entities and search queries.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Architecture of our proposed entity-driven complex task model.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "A real example post containing various metadata, such as the times of \"like,\" \"share,\" and \"comment\"",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "The algorithm of complex task name composition4. Experiments",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "The top 3 task name inclusion rate of different number of entities for producing pseudo queries The precision for different number of microblog posts (MB) in complex task name identification.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td>Chinese</td><td>English Translation</td></tr><tr><td>\u4eca\u5929\u5df2\u7d93\u8a02\u597d\u6a5f\u7968\uff0c\u53ea\u5269\u4e0b\u627e\u9593\u98ef</td><td/></tr><tr><td>\u5e97\uff0c\u5c31\u7b49\u8457\u4e0b\u79ae\u62dc\u53bb\u5317\u4eac\u65c5\u904a\u4e86\u597d</td><td/></tr><tr><td>\u671f\u5f85!</td><td/></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>POS tag</td><td>Entity</td><td>Topic</td></tr><tr><td>Common Noun</td><td>87.5%</td><td>19.8%</td></tr><tr><td>Proper Noun</td><td>7.3%</td><td>78.9%</td></tr><tr><td>Others</td><td>5.2%</td><td>1.3%</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Task Name Pattern</td><td>Percentage</td><td>Example</td></tr><tr><td>Topic + Event 1</td><td>54.92%</td><td>\u8cfc\u8cb7\u4e09\u661f\u624b\u6a5f (Buy Samsung cellphone)</td></tr><tr><td>Topic + Event 2</td><td>40.16%</td><td>\u82f1\u8a9e\u5b78\u7fd2 (English learning)</td></tr><tr><td>Others</td><td>4.92%</td><td>--</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF3": {
"content": "<table><tr><td/><td>Chinese</td><td/><td colspan=\"2\">English Translation</td></tr><tr><td>Query</td><td colspan=\"2\">Task-related Entity</td><td>Query</td><td>Task-related Entity</td></tr><tr><td>\u7d50\u5a5a\u9078\u8cfc\u9996\u98fe</td><td>\u9996\u98fe</td><td colspan=\"2\">Wedding jewelry purchase</td><td>Jewelry</td></tr><tr><td>\u7d50\u5a5a\u9996\u98fe</td><td>\u9996\u98fe</td><td colspan=\"2\">Wedding jewelry</td><td>Jewelry</td></tr><tr><td>\u7d50\u5a5a\u79ae\u670d</td><td>\u79ae\u670d</td><td colspan=\"2\">Order wedding dress</td><td>Dress</td></tr><tr><td>\u7d50\u5a5a\u6212\u6307\u5c55\u793a</td><td>\u6212\u6307</td><td colspan=\"2\">Wedding ring gallery</td><td>Ring</td></tr><tr><td>\u9ec3\u91d1\u6212\u6307</td><td>\u6212\u6307</td><td colspan=\"2\">Gold ring</td><td>Ring</td></tr><tr><td>\u8cfc\u8cb7\u9ec3\u91d1\u6212\u6307</td><td>\u6212\u6307</td><td colspan=\"2\">Buy gold ring</td><td>Ring</td></tr><tr><td/><td>Data Type</td><td>Total Count</td><td colspan=\"2\">Distinct Count</td></tr><tr><td/><td>Complex Task</td><td>523</td><td>244</td></tr><tr><td/><td>Query</td><td>3,741</td><td>1,715</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td/><td>0.37</td><td>0.41</td><td>0.47</td><td>0.22</td><td>0.35</td><td>0.31</td><td>0.37</td><td>0.45</td><td>0.68</td></tr><tr><td>LRM_MB</td><td>0.42</td><td>0.52</td><td>0.54</td><td>0.27</td><td>0.46</td><td>0.42</td><td>0.42</td><td>0.65</td><td>0.72</td></tr><tr><td>LRM_MB+</td><td>0.48</td><td>0.54</td><td>0.59</td><td>0.39</td><td>0.54</td><td>0.53</td><td>0.48</td><td>0.68</td><td>0.79</td></tr><tr><td>ECTM</td><td>0.57</td><td>0.63</td><td>0.66</td><td>0.45</td><td>0.61</td><td>0.58</td><td>0.57</td><td>0.71</td><td>0.83</td></tr></table>",
"html": null,
"text": "",
"type_str": "table",
"num": null
}
}
}
}