{
"paper_id": "O14-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:04:40.998502Z"
},
"title": "Unsupervised Approach for Automatic Keyword Extraction from Arabic Documents",
"authors": [
{
"first": "Arafat",
"middle": [],
"last": "Awajan",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present an unsupervised two-phase approach to extract keywords from Arabic documents that combines statistical analysis and linguistic information. The first phase detects all the N-grams that may be considered keywords. In the second phase, the N-grams are analyzed using a morphological analyzer to replace the words of the N-grams with their base forms that are the roots for the derived words and the stems for the non-derivative words. The N-grams that have the same base forms are regrouped and their counts accumulated. The ones that appear more frequently are then selected as keywords. An experiment is conducted to evaluate the proposed approach by comparing the extracted keywords with those manually selected. The results show that the proposed approach achieved an average precision of 0.51.",
"pdf_parse": {
"paper_id": "O14-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present an unsupervised two-phase approach to extract keywords from Arabic documents that combines statistical analysis and linguistic information. The first phase detects all the N-grams that may be considered keywords. In the second phase, the N-grams are analyzed using a morphological analyzer to replace the words of the N-grams with their base forms that are the roots for the derived words and the stems for the non-derivative words. The N-grams that have the same base forms are regrouped and their counts accumulated. The ones that appear more frequently are then selected as keywords. An experiment is conducted to evaluate the proposed approach by comparing the extracted keywords with those manually selected. The results show that the proposed approach achieved an average precision of 0.51.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Keyword extraction is the process of identifying a short list of words or noun phrases that capture the most important ideas or topics covered in a document. Keyword extraction has been used in a variety of natural language processing applications, such as informat ion retrieval systems, digital library searching, web content management, document clustering, and text summarization (Rose et al. 2010) . Although keywords are very useful for a large spectrum of applications, only a limited number of documents with keywords are available on-line. Therefore, appropriate tools that can automatically extract keywords from text are increasingly needed with the continually growing amount of electronic textual content available online.",
"cite_spans": [
{
"start": 384,
"end": 402,
"text": "(Rose et al. 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, an unsupervised two-phase approach for keyword extraction from Arabic \uf02a Princess Sumaya University for Technology -Department of Computer Science, Amman -Jordan E-mail: [email protected] documents is described. The proposed method combines the document's statistics and the linguistic features of the Arabic language to automatically extract keywords from a single document in a domain-independent way. In the first phase, all the N-grams are extracted and those considered as potential candidate keywords are retained. In the second phase, the candidate keywords are analyzed linguistically by a morphological analyzer that replaces each term with its base form, which are the roots of the derived words and the stems of the non-derivative words. The candidate keywords are then grouped in such a way that the keywords extracted from similar roots and stems are put together and their counts accumulated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This paper is organized as follows. In section 2, we present related works and the main approaches to keyword extraction. Section 3 highlights the main Arabic language features used in our technique. A detailed description of the proposed technique and its two phases provided in Section 4 and Section 5. Section 6 consists of the experimental results and the main findings of the evaluation of the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Existing automatic keyword extraction methods can be divided into two main approaches: supervised and unsupervised (Pudota et al. 2010; Hasan and Ng 2010) . In the supervised approach, the keyword extractor is trained to determine whether a given word or phrase is a keyword or not. An annotated set of documents with predefined keywords is always used in the learning phase. All the terms and noun phrases in the text are considered as potential keywords, but only those that match with keywords assigned to the annotated data are selected.",
"cite_spans": [
{
"start": 115,
"end": 135,
"text": "(Pudota et al. 2010;",
"ref_id": "BIBREF17"
},
{
"start": 136,
"end": 154,
"text": "Hasan and Ng 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The main disadvantages of this approach are its dependency on the learning model, the documents used as the training set, and the documents' domains. Furthermore, training data and learning processes are usually time-consuming (Turney 2000; Turney and Pantel 2010; Frank et al. 1999; Hulth 2003; Hulth 2004 ).",
"cite_spans": [
{
"start": 227,
"end": 240,
"text": "(Turney 2000;",
"ref_id": "BIBREF19"
},
{
"start": 241,
"end": 264,
"text": "Turney and Pantel 2010;",
"ref_id": "BIBREF20"
},
{
"start": 265,
"end": 283,
"text": "Frank et al. 1999;",
"ref_id": "BIBREF7"
},
{
"start": 284,
"end": 295,
"text": "Hulth 2003;",
"ref_id": "BIBREF11"
},
{
"start": 296,
"end": 306,
"text": "Hulth 2004",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The unsupervised approach for keyphrase extraction avoids the need for annotated documents. It uses language modeling and statistical analysis to select the potential keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "A candidate keyword is often selected based on features such as its frequency in the document, the position of its first occurrence in a document, and its linguistic attributes, such as its stem and part-of-speech (POS) tag (Matsuo and Ishizuka 2004; Mihalcea and Tarau 2004; Liu et al. 2009 ). The unsupervised methods are in general domain-independent and less expensive since they do not require building an annotated corpus.",
"cite_spans": [
{
"start": 224,
"end": 250,
"text": "(Matsuo and Ishizuka 2004;",
"ref_id": "BIBREF15"
},
{
"start": 251,
"end": 275,
"text": "Mihalcea and Tarau 2004;",
"ref_id": "BIBREF16"
},
{
"start": 276,
"end": 291,
"text": "Liu et al. 2009",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Keyword extraction algorithms from both approaches have been successfully developed and implemented for documents in the European languages (Rose et al. 2010; Liu et al. 2009; Matsuo et al. 2004) . However, despite the fact that Arabic is one of the major international languages making up about 4% of the Internet content, not many studies about extracting Arabic keywords have been performed. El-Shishtawy and Al-Sammak (2009) presented a supervised method that uses linguistic knowledge and machine learning techniques to extract Arabic keywords. The system uses an annotated Arabic data set of 30 documents from a specific domain, compiled by the authors as a training data set. The keywords from the documents' data set used to evaluate their system were assigned manually.",
"cite_spans": [
{
"start": 140,
"end": 158,
"text": "(Rose et al. 2010;",
"ref_id": "BIBREF18"
},
{
"start": 159,
"end": 175,
"text": "Liu et al. 2009;",
"ref_id": "BIBREF13"
},
{
"start": 176,
"end": 195,
"text": "Matsuo et al. 2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "An unsupervised keyphrase extraction system (KP-Miner) was proposed by El-Beltagy and Rafea (2008) . This system was basically developed for the English language and then adapted to work with the Arabic language. Statistical analysis of the texts was conducted in order to determine the most weighted terms. Two main conditions are considered; the first states that a phrase has to have appeared at least n times in the document from which the keywords are to be extracted, and the second condition is related to the position where a candidate keyphrase first appears within an input document. The linguistic analyses performed on the texts are limited to stop word removal and word stemming.",
"cite_spans": [
{
"start": 71,
"end": 98,
"text": "El-Beltagy and Rafea (2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "The hypothesis defended in this work is that using the linguistic features of the Arabic languagemainly its rich and complex morphological structuremay present an attractive paradigm to improve the extraction of keywords. The proposed approach is designed to work on a single document without any prior knowledge about its content or domain. Typically, a generic unsupervised keyphrase extractor features two steps; the first is to extract as many candidate words as possible, and the second is to apply the linguistic knowledge of the text language to tune the final list of extracted keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "Arabic is a Semitic language with rich morphology that is a combination of non-concatenative morphology and concatenative morphology. Regarding the concatenative aspect, an Arabic word is composed of a stem, affixes, and clitics. The affixes are concatenative morphemes that mark the tense, gender, and/or number of the word (Al-Sughaiyer and Al-Kharashi 2004). A clitic is a symbol consisting of one to three letters that can be attached to the beginning or the end of a word. It represents another part of speech, such as a preposition, a conjunction, the definite article, or an object pronoun (Habash 2010; Awajan 2007; Diab et al. 2007) . In terms of their formation, most of the stems obey non-concatenative rules and are generated according to the root-and-pattern scheme. In general, an Arabic word may be decomposed in its components according to the structure shown in figure 1. For example, the word \u202b,\"\u0648\u0627\u0644\u0627\u0644\u0639\u0628\u0648\u0646\"\u202c or \"and the players\" in English, consists of the clitics \u202b\"\u0648\"\u202c and \u202b,\"\u0627\u0644\"\u202c the stem \u202b,\"\u0627\u0644\u0639\u0628\"\u202c and the postfix \u202b.\"\u0648\u0646\"\u202c Its stem is generated from the root \u202b,\"\u0644\u0639\u0628\"\u202c according to the pattern \u202b.\"\u0641\u0627\u0639\u0644\"\u202c Figure 2 shows the steps for a word formation. The combinatory nature of the Arabic language morphology creates an important obstacle for different natural language processing applications, including keyword extraction. This property, generally known as \"data sparseness\", results in a large number of words generated from the same root but with different stems (Benajiba et al. 2009) . Consequently, the grouping of words according to their surface or stems cannot give keywords that Non-Concatenative Morphology",
"cite_spans": [
{
"start": 597,
"end": 610,
"text": "(Habash 2010;",
"ref_id": "BIBREF9"
},
{
"start": 611,
"end": 623,
"text": "Awajan 2007;",
"ref_id": "BIBREF0"
},
{
"start": 624,
"end": 641,
"text": "Diab et al. 2007)",
"ref_id": "BIBREF4"
},
{
"start": 1488,
"end": 1510,
"text": "(Benajiba et al. 2009)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1124,
"end": 1134,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The Features of Arabic Language",
"sec_num": "3."
},
{
"text": "accurately reflect the content of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenative Morphology",
"sec_num": null
},
{
"text": "In order to tackle this problem, we need to conduct a deeper morphological analysis to extract the roots and to consider their properties in order to group related words and increase the weight of those representing the main ideas covered by the text. The linguistic analysis we are proposing will be applied at two different levels of the keyword extraction. The input text is preprocessed to assign each word with its POS in order to detect all the possible N -grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenative Morphology",
"sec_num": null
},
{
"text": "The detected N-grams are then post-processed to extract the roots, and to group the N-grams generated from the same roots, and to accumulate their weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concatenative Morphology",
"sec_num": null
},
{
"text": "This phase consists of several operations: sentence delimiting, tokenization, and POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "4.1"
},
{
"text": "The input text is processed to delimit sentences, following the assumption that no keyphrase parts are located separately in two or more different sentences (Pudota et al. 2010) .",
"cite_spans": [
{
"start": 157,
"end": 177,
"text": "(Pudota et al. 2010)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "4.1"
},
{
"text": "Punctuation marks, such as commas, semicolons, and dots, are used to divide the input documents into sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "4.1"
},
{
"text": "Tokenization aims at turning a text into a list of individual words or tokens (Manning et al. 2009) . As the clitics attached to a word always refer to other entities, such as pronouns, prepositions, conjunctions, and the definite article, a tokenizer is applied to separate all the clitics except the definite article from the word. The tokenizer is repeatedly applied until the word stops changing.",
"cite_spans": [
{
"start": 78,
"end": 99,
"text": "(Manning et al. 2009)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "4.1"
},
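The clitic-separation step can be illustrated with a short sketch. This is an assumption-level illustration only, not the paper's tokenizer: the clitic inventories below are small illustrative subsets, the length checks are a crude guard against stripping letters that belong to the stem, and the definite article is deliberately left attached, as described above.

```python
# Illustrative clitic-separation sketch (not the paper's tokenizer).
# The clitic lists are small, assumed subsets; the definite article "ال"
# is intentionally left attached to the word.
PROCLITICS = ("و", "ف", "ب", "ك", "ل")                 # conjunctions / prepositions
ENCLITICS = ("ها", "هم", "هن", "كم", "نا", "ه", "ك")    # attached pronouns

def strip_clitics(word):
    """Repeatedly strip leading/trailing clitics until the word stops changing."""
    prev = None
    while word != prev:
        prev = word
        for p in PROCLITICS:
            if word.startswith(p) and len(word) > len(p) + 2:
                word = word[len(p):]
                break
        for e in ENCLITICS:
            if word.endswith(e) and len(word) > len(e) + 2:
                word = word[:-len(e)]
                break
    return word
```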
{
"text": "We then assign a POS tag to each token using the Stanford Arabic parser (Green and Manning. 2010) . The assigned POS tags are later used to select the possible N-grams, remove the verbs, and remove meaningless terms, such as the stop words.",
"cite_spans": [
{
"start": 72,
"end": 97,
"text": "(Green and Manning. 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Part-of-Speech Tagging",
"sec_num": "4.1"
},
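For readers who want to reproduce this preprocessing step, a rough present-day equivalent can be obtained with an off-the-shelf tagger. The sketch below uses Stanza's Arabic pipeline purely as a stand-in for the Stanford Arabic parser used in the paper; the tool choice and the simplified tag handling are assumptions, not the original setup.

```python
# Stand-in preprocessing sketch using Stanza's Arabic models
# (not the Stanford Arabic parser used in the paper).
# Requires: pip install stanza
import stanza

stanza.download("ar")                                       # one-time model download
nlp = stanza.Pipeline("ar", processors="tokenize,mwt,pos")

def tagged_sentences(text):
    """Return, per sentence, a list of (token, UPOS tag) pairs."""
    doc = nlp(text)
    return [[(w.text, w.upos) for w in sent.words] for sent in doc.sentences]
```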
{
"text": "A keyword is typically a combination of nouns and/or adjectives. Furthermore, the number of terms that are allowed in a keyword is often limited to three words. Thus, each sentence is processed to extract all the possible N-grams that constitute a sequence of adjacent words with a maximum length of three words. All the N-grams that contain verbs, stop words, or clitics ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Extraction and Filtering",
"sec_num": "4.2"
},
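A minimal sketch of this candidate-extraction step is given below, assuming the tagged-sentence representation from the preprocessing phase and a simplified tag set ('NOUN', 'ADJ', ...); the exact tags produced by the parser used in the paper differ in detail.

```python
# Candidate N-gram extraction (lengths 1-3) from one POS-tagged sentence.
# Assumes simplified tags; verbs and stop words carry other tags, so any
# N-gram containing them is filtered out.
def candidate_ngrams(tagged_sentence, max_len=3):
    """tagged_sentence: list of (token, pos) pairs for a single sentence."""
    keep = {"NOUN", "ADJ"}
    candidates = []
    for n in range(1, max_len + 1):
        for i in range(len(tagged_sentence) - n + 1):
            window = tagged_sentence[i:i + n]
            tags = [pos for _, pos in window]
            if any(pos not in keep for pos in tags):
                continue                      # drop N-grams with verbs, stop words, etc.
            if n == 1 and tags[0] != "NOUN":
                continue                      # unigrams must be nouns
            candidates.append(tuple(tok for tok, _ in window))
    return candidates
```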
{
"text": "All the N-grams generated from the same base forms are grouped together, their counts accumulated, and represented by their NNG. A vector representation of the text is produced where each detected NNG and its frequency are listed. In this work, we define the frequency of a normalized N-gram NGi noted Freq (NGi) as the sum of all the N-grams having the same base forms of NGi.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Clustering and Weighting",
"sec_num": "5.2"
},
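The grouping of N-grams by base forms can be sketched as follows. Here base_form is passed in as a callable standing in for the Alkhalil-based analyzer described elsewhere in the paper (root for derivative words, stem for non-derivative words); the interface is an assumption.

```python
# Sketch of normalized-N-gram (NNG) grouping: each N-gram is mapped to the
# tuple of its words' base forms, counts of N-grams sharing an NNG are
# accumulated, and the surface forms seen for each NNG are remembered.
from collections import Counter, defaultdict

def group_by_base_form(ngrams, base_form):
    freq = Counter()                     # Freq(NGi): total count of the NNG
    surfaces = defaultdict(Counter)      # surface realizations of each NNG
    for ngram in ngrams:
        nng = tuple(base_form(w) for w in ngram)
        freq[nng] += 1
        surfaces[nng][ngram] += 1
    return freq, surfaces
```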
{
"text": "Each normalized N-gram should be assigned a weight that represents its relevance to be selected as a keyword. The keyword frequency and the keyword degree are generally considered for scoring potential keywords (Rose et al. 2010, Mihalcea and Tarau 2004) . The weight of a normalized N-gram NGi is given by the following formula: ",
"cite_spans": [
{
"start": 211,
"end": 242,
"text": "(Rose et al. 2010, Mihalcea and",
"ref_id": null
},
{
"start": 243,
"end": 254,
"text": "Tarau 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Clustering and Weighting",
"sec_num": "5.2"
},
{
"text": "where m is the number of Normalized N-grams.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Clustering and Weighting",
"sec_num": "5.2"
},
{
"text": "As the unigrams are generally more frequent than the bi-grams and bi-grams are more frequent than tri-grams, we need to correct the weight of N-grams by introducing a new ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "N-gram Clustering and Weighting",
"sec_num": "5.2"
},
{
"text": "The list of N-grams is reordered according to their scores since the highest scores determine the potential candidate keywords. The number of extracted keywords is set by the user. The selection of keywords is done according to the following rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords Selection",
"sec_num": "5.3"
},
{
"text": "-If two N-grams have the same score, the longer one will be selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords Selection",
"sec_num": "5.3"
},
{
"text": "-If two candidate keywords have the same number of components and the same score, we select the higher degree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords Selection",
"sec_num": "5.3"
},
{
"text": "-If an N-gram is selected, all the possible combinations of its components will be removed from the list of N-grams to guaranty that an extracted keyword will not be included in another one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords Selection",
"sec_num": "5.3"
},
{
"text": "The list of keywords is then built by replacing each selected normalized N-gram by the most frequent of its surface N-gram in the original text. Therefore, the list of keywords that will be associated with the document will have more readable form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords Selection",
"sec_num": "5.3"
},
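The selection rules above can be approximated with the sketch below. It assumes precomputed score and degree values per normalized N-gram and a table of surface forms; treating "all possible combinations of components" as a covered-word set is a simplification of the third rule.

```python
# Approximate implementation of the three selection rules plus the final
# replacement of each normalized N-gram by its most frequent surface form.
# `scored`, `degree`: dicts keyed by NNG tuples; `surfaces`: dict of Counters.
def select_keywords(scored, degree, surfaces, k):
    # Rank by score; ties broken by length, then by degree (rules 1 and 2).
    ranked = sorted(scored, key=lambda g: (scored[g], len(g), degree[g]), reverse=True)
    selected, covered = [], set()
    for nng in ranked:
        if len(selected) == k:
            break
        if any(w in covered for w in nng):   # rule 3 (simplified): no overlap
            continue
        selected.append(nng)
        covered.update(nng)
    return [" ".join(surfaces[nng].most_common(1)[0][0]) for nng in selected]
```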
{
"text": "In order to evaluate the performance of the proposed system, an experiment was carried out to test it by comparing the extracted keywords against the manually assigned ones. A collection of 70 journal articles and article abstract selected from six journals and covering different domains was used. The dataset is divided into three groups according to their size [table 1 ].",
"cite_spans": [],
"ref_spans": [
{
"start": 364,
"end": 372,
"text": "[table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "6."
},
{
"text": "The average number of words per article is 3406. Each one of these articles was assigned a list of keywords. The number of keywords varies from 2 to 14, with an average of 5.14 keywords per document. The number of extracted keywords is set to the same number of keywords assigned manually to the documents, so the number of false positive detections and false negative detections will be equal, and the three measures P, R, and F will be identical. Table 1 shows the main results of the conducted experiment. An average precision of 0.51 was achieved. Since the primary analysis of the dataset showed that only about 73% of the human-generated keywords appear in the document texts, this result can be considered as a good result. The results have shown also that better results are achieved with larger documents. ",
"cite_spans": [],
"ref_spans": [
{
"start": 449,
"end": 456,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments and Evaluation",
"sec_num": "6."
},
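The equality of the three measures under this setup is easy to verify: with the number of extracted keywords fixed to the number of reference keywords, false positives and false negatives are equal in number, so precision, recall, and F-measure coincide. A quick check, with made-up keyword lists for illustration:

```python
# Precision, recall, and F-measure for a single document.
def prf(extracted, reference):
    tp = len(set(extracted) & set(reference))
    p = tp / len(extracted)
    r = tp / len(reference)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# When |extracted| == |reference|, the three values coincide:
print(prf(["a", "b", "c", "d"], ["a", "b", "x", "y"]))    # (0.5, 0.5, 0.5)
```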
{
"text": "This paper proposed an unsupervised two-stage approach for keyword extraction from Arabic texts that avoids the necessity of annotated data. The conducted experiments showed that the proposed method can extract keywords from single documents in a domain-independent way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "The linguistic analysis of the texts and the grouping of N-grams according to their linguistic features improve the quality of extracted keywords. An average precision of 0.51 was achieved in despite the fact that that only about 73% of the human-assigned keywords appear in the document texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Al-Sughaier, I., Al-Kharashi, I. (2004) . Arabic morphological analysis techniques: A comprehensive survey. Journal of The American Society for Information Science and Technology (JASIST), 55(3), 189-213.",
"cite_spans": [
{
"start": 17,
"end": 39,
"text": "Al-Kharashi, I. (2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Arabic Text Preprocessing for the Natural Language Processing Applications",
"authors": [
{
"first": "A",
"middle": [],
"last": "Awajan",
"suffix": ""
}
],
"year": 2007,
"venue": "Arab Gulf Journal of Scientific Research",
"volume": "25",
"issue": "4",
"pages": "179--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Awajan, A. (2007). Arabic Text Preprocessing for the Natural Language Processing Applications. Arab Gulf Journal of Scientific Research, 25(4), 179-189.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Arabic Named Entity Recognition: A Feature-Driven Study",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Benajiba",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "17",
"issue": "5",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benajiba, Y., Diab, M., Rosso, P. (2009). Arabic Named Entity Recognition: A Feature-Driven Study. IEEE Transactions on Audio, Speech, and Language Processing, 17(5), 926-934.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Morphosyntactic analysis system for Arabic texts, International Arab Conference on Information Technology (ACIT)",
"authors": [
{
"first": "Alkhalil Morpho",
"middle": [],
"last": "Sys",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alkhalil Morpho Sys: A Morphosyntactic analysis system for Arabic texts, International Arab Conference on Information Technology (ACIT). Riyadh, Saudi Arabia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic Processing of Modern Standard Arabic Text. Chapter in Arabic Computational Morphology",
"authors": [
{
"first": "M",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "159--179",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diab, M., Hacioglu, K., JURAFSKY, D. (2007). Automatic Processing of Modern Standard Arabic Text. Chapter in Arabic Computational Morphology. Springer Ed. 159-179.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "KP-Miner: A keyphrase extraction system for English and Arabic documents",
"authors": [
{
"first": "S",
"middle": [],
"last": "El-Beltagy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Rafea",
"suffix": ""
}
],
"year": 2008,
"venue": "Information Systems",
"volume": "34",
"issue": "1",
"pages": "132--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "El-Beltagy S., & Rafea A. (2008). KP-Miner: A keyphrase extraction system for English and Arabic documents, Information Systems. 34(1), 132-144.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Arabic Keyphrase Extraction using Linguistic knowledge and Machine Learning Techniques",
"authors": [
{
"first": "T",
"middle": [],
"last": "El-Shishtawy",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sammak",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Second International Conference on Arabic Language Resources and Tools, The MEDAR Consortium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "El-Shishtawy, T., & Al-Sammak, A. (2009). Arabic Keyphrase Extraction using Linguistic knowledge and Machine Learning Techniques, In Proceedings of the Second International Conference on Arabic Language Resources and Tools, The MEDAR Consortium, Cairo, Egypt.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Domain-Specific Keyphrase Extraction",
"authors": [
{
"first": "E",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Paynter",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gutwin",
"suffix": ""
},
{
"first": "C",
"middle": [
"G"
],
"last": "Nevill-Manning",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "668--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank, E., Paynter, G.W., Witten, I.H., Gutwin, C., Nevill-Manning, C.G. (1999). Domain-Specific Keyphrase Extraction. Proceedings of the International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 668-673.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Better Arabic Parsing: Baselines, Evaluations, and Analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING-Beijing",
"volume": "",
"issue": "",
"pages": "394--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Green, S., and Manning, C. D. (2010). Better Arabic Parsing: Baselines, Evaluations, and Analysis. In COLING-Beijing. 394-402.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Introduction to Arabic Natural Language Processing",
"authors": [
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Habash, N. (2010). Introduction to Arabic Natural Language Processing. Morgan & Claypool Publishers, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conundrums in unsupervised Keyphrase Extraction: Making Sense of the State-of-the-Art",
"authors": [
{
"first": "K",
"middle": [
"S"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING 2010",
"volume": "",
"issue": "",
"pages": "365--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasan, K.S., NG, V. (2010). Conundrums in unsupervised Keyphrase Extraction: Making Sense of the State-of-the-Art. COLING 2010, 365-373.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improved automatic keyword extraction given more linguistic knowledge",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hulth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hulth, A. (2003). Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan,",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Combining Machine Learning and Natural Language Processing for Automatic Keyword Extraction. Doctoral dissertation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hulth",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hulth, A. (2004). Combining Machine Learning and Natural Language Processing for Automatic Keyword Extraction. Doctoral dissertation. Department of Computer and Systems Sciences, Stockholm University.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Clustering to Find Exemplar Terms for Keyphrase Extraction",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "257--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Z., Li, P., Zheng, Y., Sun, M. (2009). Clustering to Find Exemplar Terms for Keyphrase Extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Singapore. 257-266.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Introduction to Information Retrieval",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, C. D., Raghavan, P., Sch\u00fctze, H. (2009). Introduction to Information Retrieval. Cambridge University Press.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Keyword Extraction from a Single Document using Word Co-occurrence Statistical Information",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Matsuo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2004,
"venue": "International Journal on Artificial Intelligence Tools",
"volume": "13",
"issue": "1",
"pages": "157--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matsuo, Y., Ishizuka, M. (2004). Keyword Extraction from a Single Document using Word Co-occurrence Statistical Information. International Journal on Artificial Intelligence Tools, 13(1), 157-169",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "TextRank: Brining order into texts",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihalcea, R., Tarau, P. (2004). TextRank: Brining order into texts. In Proceedings of EMNLP 2004, Association for Computational Linguistics, Barcelona, Spain. 404-411.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A New Domain Independent Keyphrase Extraction System",
"authors": [
{
"first": "N",
"middle": [],
"last": "Pudota",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Dattolo",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Baruzzo",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Tasso",
"suffix": ""
}
],
"year": 2010,
"venue": "Digital Libraries: Communications in Computer and Information Science",
"volume": "91",
"issue": "",
"pages": "67--78",
"other_ids": {
"DOI": [
"http://link.springer.com/book/10.1007/978-3-642-15850-6"
]
},
"num": null,
"urls": [],
"raw_text": "Pudota, N., Dattolo, A., Baruzzo, A., Tasso, C. (2010). A New Domain Independent Keyphrase Extraction System. Digital Libraries: Communications in Computer and Information Science, 91, 67-78.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic keyword extraction from individual documents",
"authors": [
{
"first": "S",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Cramer",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Cowley",
"suffix": ""
}
],
"year": 2010,
"venue": "Text Mining: Applications and Theory",
"volume": "",
"issue": "",
"pages": "3--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rose, S., Engel, D., Cramer, N., Cowley, W. (2010). Automatic keyword extraction from individual documents. Text Mining: Applications and Theory edited by Michael W. Berry and Jacob Kogan, John Wiley & Sons, Ltd. 3-20",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning Algorithm for Keyphrase Extraction",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
}
],
"year": 2000,
"venue": "Information Retrieval",
"volume": "2",
"issue": "4",
"pages": "303--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P. D. (2000). Learning Algorithm for Keyphrase Extraction. Information Retrieval, 2(4), 303-336.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "From Frequency to Meaning: Vector Space Models of Semantics",
"authors": [
{
"first": "P",
"middle": [
"D"
],
"last": "Turney",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pantel",
"suffix": ""
}
],
"year": 2010,
"venue": "Journal of Artificial Intelligence Research",
"volume": "37",
"issue": "",
"pages": "141--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, P. D., & Pantel, P. (2010). From Frequency to Meaning: Vector Space Models of Semantics. Journal of Artificial Intelligence Research, 37, 141-188.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Proclitic(s)+[Prefix(es)]] + stem + [Suffix(es) + [Enclitic]] Root pattern Arabic derivative word structure Root: \u202b:\u0644\u0639\u0628(\u202c lEb) (to play) Stem: \u202b)\u0627\u0644\u0639\u0628(\u202c (player) Word \u202b)\u0648\u0627\u0644\u0627\u0644\u0639\u0628\u0648\u0646(\u202c (and the players)"
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Arabic word formation (Example)Arabic words are classified into two categories: derivative words and non-derivative words. The stems of derivative words are generated from the roots according to standard patterns or templates. These standard patterns represent the major spelling rules governing Arabic words. Based on the above, a derivative Arabic word can be represented by its root along with its morphological pattern, and its roots carry its basic conceptual meaning.Non-derivative words include two sub-categories: fixed words and foreign words. Fixed words are a set of words that do not obey the derivation rules. These words are generally stop words, such as pronouns, prepositions, conjunctions, question words, and the like. The foreign words are nouns borrowed from foreign languages."
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "are removed. Only the N-grams that have their members labeled with one of the POS tags marking nouns or adjectives are retained. In addition, the unigrams that are not labeled as nouns are removed from the N-gram list.Figure 3shows the detected unigrams, bi-grams, and trigrams from a sentence."
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "grams is the process of reducing the words of an N-gram into their base forms. This process will allow the clustering of N-grams carrying the same information, hence reducing the sparseness of the text's potential keywords. To achieve this objective, a word morphological analyzer is developed based on the Alkhalil Morpho-Syntactic System (Boudlal et al. 2010). It is applied individually to the words on the list of N-grams. The morphological structures produced by the analyzer are used to determine the category of words, derivative or non-derivative. The derivative words are represented by their root along with their morphological pattern, and the non-derivative words are represented by their stem, permitting different N-grams that have common base forms to reinforce each other in scoring and to reduce the number of redundant terms and concepts. Each N-gram is associated with its list of base forms called the normalized N-grams (NNG) at the end of this step."
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "measure called score. The N-gram score takes into account the relevance of individual components forming the N-gram. The score of a unigram is equal to its weight since a unigram has one component. The score of other N-grams (bigrams, trigrams, \u2026 ) is given by the following formula: where the T1, T2,\u2026, TN represent the N roots/stems of the normalized N-gram NGi. The degree of an N-gram is calculated as the sum of its Weight and the Weights of all the higher structures containing this N-gram. Thus, the degree favors terms occurring frequently in longer candidate keywords, and the score favors the frequent terms regardless of their co-occurrence with other terms."
},
"TABREF0": {
"content": "<table><tr><td>Sentence in Arabic:</td><td colspan=\"10\">\u202b\u0627\u0644\u0647\u0627\u0634\u0645\u064a\u0629\u202c \u202b\u0627\u0627\u0644\u0631\u062f\u0646\u064a\u0629\u202c \u202b\u0627\u0644\u0645\u0645\u0644\u0643\u0629\u202c \u202b\u0627\u0644\u0649\u202c \u202b\u0631\u0629\u202c</td><td colspan=\"2\">\u202b\u0628\u0632\u064a\u0627\u202c \u202b\u0627\u0627\u0644\u0645\u0631\u064a\u0643\u064a\u202c \u202b\u0627\u0644\u0631\u0626\u064a\u0633\u202c \u202b\u0642\u0627\u0645\u202c</td></tr><tr><td>Input Sentence in English: Tokenization:</td><td>\u202b\u0627\u0644\u0647\u0627\u0634\u0645\u064a\u0629\u202c</td><td colspan=\"3\">| \u202b\u0627\u0627\u0644\u0631\u062f\u0646\u064a\u0629\u202c</td><td colspan=\"8\">| \u202b\u0627\u0644\u0645\u0645\u0644\u0643\u0629\u202c | \u202b\u0627\u0644\u0649\u202c | \u202b\u0632\u064a\u0627\u0631\u0629\u202c | \u202b\u0628\u202c \u202b\u0627\u0627\u0644\u0645\u0631\u064a\u0643\u064a|\u202c</td><td>\u202b\u0627\u0644\u0631\u0626\u064a\u0633|\u202c | \u202b\u0642\u0627\u0645\u202c</td></tr><tr><td>Unigrams:</td><td colspan=\"2\">\u202b\u0627\u0644\u0647\u0627\u0634\u0645\u064a\u0629\u202c</td><td>-</td><td colspan=\"2\">\u202b\u0627\u0623\u0644\u0631\u062f\u0646\u064a\u0629\u202c</td><td colspan=\"2\">-</td><td>\u202b\u0627\u0644\u0645\u0645\u0644\u0643\u0629\u202c</td><td>-</td><td colspan=\"2\">\u202b\u0632\u064a\u0627\u0631\u0629\u202c</td><td>-</td><td>\u202b\u0627\u0627\u0644\u0645\u0631\u064a\u0643\u064a\u202c</td><td>-</td><td>\u202b\u0627\u0644\u0631\u0626\u064a\u0633\u202c</td></tr><tr><td>Bi-grams:</td><td colspan=\"6\">\u202b\u0627\u0644\u0647\u0627\u0634\u0645\u064a\u0629\u202c \u202b\u0627\u0627\u0644\u0631\u062f\u0646\u064a\u0629\u202c</td><td>-</td><td colspan=\"5\">\u202b\u0627\u0627\u0644\u0631\u062f\u0646\u064a\u0629\u202c \u202b\u0627\u0644\u0645\u0645\u0644\u0643\u0629\u202c</td><td>-</td><td>\u202b\u0627\u0627\u0644\u0645\u0631\u064a\u0643\u064a\u202c \u202b\u0627\u0644\u0631\u0626\u064a\u0633\u202c</td></tr><tr><td>Tri-grams:</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "The American president visited the Hashemite Kingdom of Jordan."
},
"TABREF1": {
"content": "<table><tr><td>Dataset</td><td>Number of</td><td>Average of words</td><td>Precision</td></tr><tr><td/><td>Documents</td><td>per article</td><td/></tr><tr><td>1</td><td>22</td><td>6523</td><td>0.56</td></tr><tr><td>2</td><td>28</td><td>3238</td><td>0.54</td></tr><tr><td>3</td><td>20</td><td>212</td><td>0.41</td></tr><tr><td>All</td><td>70</td><td>3406</td><td>0.51</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": ""
}
}
}
}