{
"paper_id": "2019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:54:38.346089Z"
},
"title": "A Hybrid Approach of Deep Semantic Matching and Deep Rank for Context Aware Question Answer System",
"authors": [
{
"first": "Shu-Yi",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Chia-Hao",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Mo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lian-Xin",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yu-Sheng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jian-Ping",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most existing Question Answer Systems focus on searching for answers in a Knowledge-Base (KB) and ignore context-aware information. Many Question Answer models perform well on public data-sets but are too complicated to be efficient in real-world cases. Because effectiveness, concurrency, and system availability are equally important in industrial settings with large volumes of data and requests, we propose a Context Aware Question Answer System based on Information Retrieval with Deep Semantic Matching and Deep Rank. It has been applied to an online question answer system for the insurance domain. By these means, we achieve both high QPS (Queries Per Second) and effectiveness. Our approach improves the system's ability to understand questions through context-aware coreference resolution, subject completion, and long sentence compression. After matching questions are recalled from ElasticSearch, Siamese CBOW (Continuous Bag-Of-Words), and KBQA, unreasonable candidates are filtered out by entity alignment. After the results are sorted by the deep rank model using co-occurrence words and semantic features, our system either asks a clarification question or outputs the answer. Finally, for questions that we are unable to answer, a dialogue mining module is developed as part of our Smart Knowledge-Base Platform. This yields a more than 10-fold efficiency improvement for the manpower involved in the data labeling process.",
"pdf_parse": {
"paper_id": "2019",
"_pdf_hash": "",
"abstract": [
{
"text": "Most existing Question Answer Systems focus on searching for answers in a Knowledge-Base (KB) and ignore context-aware information. Many Question Answer models perform well on public data-sets but are too complicated to be efficient in real-world cases. Because effectiveness, concurrency, and system availability are equally important in industrial settings with large volumes of data and requests, we propose a Context Aware Question Answer System based on Information Retrieval with Deep Semantic Matching and Deep Rank. It has been applied to an online question answer system for the insurance domain. By these means, we achieve both high QPS (Queries Per Second) and effectiveness. Our approach improves the system's ability to understand questions through context-aware coreference resolution, subject completion, and long sentence compression. After matching questions are recalled from ElasticSearch, Siamese CBOW (Continuous Bag-Of-Words), and KBQA, unreasonable candidates are filtered out by entity alignment. After the results are sorted by the deep rank model using co-occurrence words and semantic features, our system either asks a clarification question or outputs the answer. Finally, for questions that we are unable to answer, a dialogue mining module is developed as part of our Smart Knowledge-Base Platform. This yields a more than 10-fold efficiency improvement for the manpower involved in the data labeling process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Question answer systems are widely used in intelligent customer service, personal assistants, and dialogue robots. In 2018, pretraining techniques based on massive corpora achieved breakthroughs in multiple NLP tasks, including semantic matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Representative models are ELMo [9] , GPT [10] , and BERT [8] . Higher accuracy than Siamese CBOW can be achieved by fine-tuning BERT on downstream tasks, but this makes inference much slower, and the running efficiency does not meet the requirements of our online products. We propose a high-efficiency contextual reference resolution solution based on syntax analysis to solve the problems of missing subjects and pronoun resolution in the question-and-answer scenario of the insurance industry, and it achieves good results. Voice input brings convenience to users but at the same time introduces typos into the text produced by speech recognition. We use an insurance-specific noun dictionary with a Transformer-based error correction model [7] to improve the input from ASR. To increase the accuracy of matching the user's input against terms from the Knowledge-Base, we use an efficient sentence compression algorithm that filters out insignificant content while retaining the core content for the insurance industry. Finally, we rank all the candidates from the retrieval module and output the answer. Our contributions are as follows:",
"cite_spans": [
{
"start": 31,
"end": 34,
"text": "[9]",
"ref_id": "BIBREF8"
},
{
"start": 41,
"end": 45,
"text": "[10]",
"ref_id": "BIBREF9"
},
{
"start": 53,
"end": 56,
"text": "[8]",
"ref_id": "BIBREF7"
},
{
"start": 743,
"end": 746,
"text": "[7]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "\u2022 We propose novel and efficient methods for error correction, sentiment analysis, coreference resolution, and sentence compression to enhance question comprehension, especially in the insurance domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "\u2022 We combine ElasticSearch, deep semantic matching, and KBQA in an IR framework to quickly recall matching questions, and we improve QA accuracy through deep learning ranking while ensuring the overall efficiency of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "\u2022 We propose several new industry test set construction methods and QA evaluation methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "\u2022 We provide full life-cycle management and optimization of QA knowledge, including question type identification, clustering, and annotation dispatch for unanswered questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Most existing professional-domain question answering systems search the Knowledge-Base for the best matching questions (similarity matching between KB questions and the user query) through information retrieval. Some existing question-and-answer systems, such as Ali Xiaomi and Baidu AnyQ, handle single-round questions and answers and do not consider context information. AliMe from Alibaba, which combines Knowledge-Base search and Seq2Seq generation, has made achievements in the e-commerce domain [2] . We use the same question-query similarity matching method as AliMe and Baidu AnyQ while also taking the context of the chat history into account.",
"cite_spans": [
{
"start": 516,
"end": 519,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RELATED WORK",
"sec_num": "2"
},
{
"text": "Our overall system architecture is shown in Figure 1 . The user's question (the query) is taken as input; if it arrives as voice, it is first converted into text. The context information is passed to the pre-processing module. After error correction and coreference resolution, the processed query is passed to the retrieval module, which returns the best matches to the user's question from ElasticSearch text retrieval, from semantic retrieval based on Siamese CBOW, and from KBQA based on the knowledge graph. The resulting question list is passed to the sorting module, where the multi-way matching lists are merged, unreasonable matching questions are removed through entity alignment, and the final related question list is generated by deep learning ranking. Finally, the answer is returned to the user according to the matching question and the business type. We use open-source NLP tools together with an insurance terminology dictionary for word segmentation, part-of-speech tagging, and entity recognition. Multi-intention detection splits the sentence by punctuation and then classifies each part. Question rewriting mainly targets insurance product names, and sentiment analysis is used to judge the user's intents of affirmation, negation, and double negation. Below we describe the implementation in more detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "SYSTEM OVERVIEW",
"sec_num": "3"
},
{
"text": "Step 1: Divide the long sentence into several short sentences by punctuation or spaces, then classify the short sentences and remove filler (chit-chat) content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long Sentence Compression",
"sec_num": "3.1.1"
},
{
"text": "Step 2: Apply a sentence compression scheme based on probability and syntax analysis, retaining only the core sentence components; the insurance keyword dictionary is consulted to ensure that keywords are retained. A sketch of both steps is given after the example below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long Sentence Compression",
"sec_num": "3.1.1"
},
{
"text": "Example: Hello, I bought an insurance for my son in 2006 and I only paid 581 yuan for a year, however I didn't pay for it after that. Now I want the customer service to refund my money.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long Sentence Compression",
"sec_num": "3.1.1"
},
{
"text": "Compress result: I bought an insurance in 2006. Now I want to refund my money.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long Sentence Compression",
"sec_num": "3.1.1"
},
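{
"text": "A minimal illustrative Python sketch of the two compression steps above (an editor's sketch, not the paper's implementation): it splits by punctuation, then keeps only clauses that contain a core insurance keyword. The INSURANCE_KEYWORDS set is a hypothetical stand-in for the paper's insurance keyword dictionary, and the keyword test stands in for the trained clause classifier and syntax analysis.\n\nimport re\n\nINSURANCE_KEYWORDS = {'insurance', 'refund', 'premium', 'policy', 'claim'}  # hypothetical dictionary\n\ndef compress(question: str) -> str:\n    # Step 1: split the long sentence into short clauses by punctuation.\n    clauses = [c.strip() for c in re.split(r'[,.;!?]', question) if c.strip()]\n    # Step 2: retain only clauses containing a core insurance keyword.\n    kept = [c for c in clauses if set(c.lower().split()) & INSURANCE_KEYWORDS]\n    return '. '.join(kept) + '.'\n\nprint(compress('Hello, I bought an insurance for my son in 2006 and I only paid 581 yuan for a year, however I did not pay for it after that. Now I want the customer service to refund my money.'))\n# -> 'I bought an insurance for my son in 2006 and I only paid 581 yuan for a year. Now I want the customer service to refund my money.'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Long Sentence Compression",
"sec_num": "3.1.1"
},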
{
"text": "Two solutions are provided for the business to choose from. The simple solution is based on error correction with an insurance noun dictionary: according to the results of the preceding word segmentation and syntactic analysis, candidate nouns are converted into PinYin and compared with the proper nouns in the dictionary for correction. The general solution is a Transformer model combined with the special noun dictionary; its training data consist of about 32 million sentences of universal corpora from public news plus a PinYin dictionary from the insurance domain. The encoder input mixes the PinYin of out-of-dictionary Chinese characters with the Chinese characters of in-dictionary words. The decoder output is pure Chinese characters, where the in-dictionary Chinese characters from the input do not participate in prediction and are output directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Correction",
"sec_num": "3.1.2"
},
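{
"text": "A minimal sketch of the simple dictionary-based correction path described above (an editor's illustration, not the paper's code): candidate nouns are converted to PinYin and looked up in a proper-noun dictionary keyed by PinYin, so ASR homophone errors map back to the correct insurance term. Both tables below are tiny hypothetical stand-ins; a real system would use a full PinYin converter and the complete insurance noun dictionary.\n\n# Hypothetical PinYin-keyed dictionary of insurance proper nouns.\nPINYIN_DICT = {'ping an fu': '\u5e73\u5b89\u798f', 'shou xian': '\u5bff\u9669'}\n\n# Tiny character-to-PinYin table standing in for a real converter.\nCHAR_PINYIN = {'\u5e73': 'ping', '\u5b89': 'an', '\u798f': 'fu', '\u5bcc': 'fu', '\u5bff': 'shou', '\u9669': 'xian'}\n\ndef correct_noun(noun: str) -> str:\n    # Convert the candidate noun to PinYin and look it up in the dictionary.\n    pinyin = ' '.join(CHAR_PINYIN.get(ch, ch) for ch in noun)\n    return PINYIN_DICT.get(pinyin, noun)\n\nprint(correct_noun('\u5e73\u5b89\u5bcc'))  # ASR homophone typo, corrected to '\u5e73\u5b89\u798f'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Correction",
"sec_num": "3.1.2"
},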
{
"text": "We use the context chat history as the reference for coreference resolution. Our implementation consists of word segmentation, part-of-speech tagging, dependency syntax analysis, subject-predicate extraction, and entity substitution. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "3.1.3"
},
{
"text": "(Question) What is the price of life insurance? (Answer) 300 yuan per year. (Follow-up question) How about car insurance? (Coreference resolution result) What is the price of car insurance? In KBQA, the module receives the pre-processed question information, characterized by the context information, the entity types, and the entity relationships; it predicts the subject entity to be queried through a question recognition model [1] and retrieves the neighboring nodes centered on that entity from the KG.",
"cite_spans": [
{
"start": 310,
"end": 313,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "3.1.3"
},
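{
"text": "A minimal sketch of the entity-substitution step above (an editor's illustration; the paper performs full dependency syntax analysis and subject-predicate extraction to find the entities): the entity of the follow-up question is substituted into the predicate frame of the previous question.\n\ndef resolve_followup(prev_question: str, prev_entity: str, new_entity: str) -> str:\n    # Reuse the predicate frame of the previous question with the new entity,\n    # e.g. 'What is the price of life insurance?' + 'car insurance'\n    # -> 'What is the price of car insurance?'\n    return prev_question.replace(prev_entity, new_entity)\n\nprint(resolve_followup('What is the price of life insurance?', 'life insurance', 'car insurance'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Coreference Resolution",
"sec_num": "3.1.3"
},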
{
"text": "The ranking module includes a deep ranking model and rule-based sorting. The deep ranking model is mainly used to merge and score the answers from the multiple recalls. Rule-based sorting verifies the ranked answers against business rules to ensure both the stability and the reasonability of the results. For the deep ranking model, we adopt the commonly used pair-wise ranking approach. Because this formulation makes data collection less difficult, we define the format of an input sample as the pair <user query, candidate query>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Module",
"sec_num": "3.3"
},
{
"text": "We construct a scorer so that the scores of the correctly matched samples are as high as possible",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Module",
"sec_num": "3.3"
},
{
"text": "(normalized to [0,1]) and the scores of the mismatched samples are as low as possible. The deep ranking model is an interaction model: it considers not only the semantic vectors of the two parts but also the interaction information between them, which yields more accurate matching. In addition to semantic features, our model uses co-occurrence words in the <user query, candidate query> pair to model literal features. To better match the user's intention, we built an intent classifier for the insurance industry and extract intent features from the user query and the candidate queries as additional inputs to the sorting model. We have also made some attempts with sentence-level features and obtained good results, as Figure 2 shows. The output module gets the matching question list from the rank module. If the confidence is lower than a preset threshold, the system responds with a clarification question, letting the user confirm the question they want to ask, and poses a related question. If the confidence is high, the answer corresponding to the top matching question, or a recommended question, is returned according to the business rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Module",
"sec_num": "3.3"
},
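{
"text": "A minimal sketch of the clarification-versus-answer decision described above (an editor's illustration; the threshold value and data layout are assumptions, and in practice the threshold would be tuned per business line):\n\nfrom typing import List, Tuple\n\nCONFIDENCE_THRESHOLD = 0.8  # assumed value\n\ndef respond(ranked: List[Tuple[str, float, str]]) -> str:\n    # ranked: (matched_question, score, answer) triples sorted by the deep rank model.\n    top_question, top_score, top_answer = ranked[0]\n    if top_score < CONFIDENCE_THRESHOLD:\n        # Low confidence: ask the user to confirm which question they meant.\n        options = ', '.join(q for q, _, _ in ranked[:3])\n        return f'Did you mean one of these questions: {options}?'\n    # High confidence: return the answer of the top matching question.\n    return top_answer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Module",
"sec_num": "3.3"
},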
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(q, d) = FN([v_q, v_d, v_qd])",
"eq_num": null
}
],
"section": "Ranking Module",
"sec_num": "3.3"
},
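{
"text": "A minimal PyTorch sketch of the pair-wise scorer and hinge loss above (an editor's illustration; the layer sizes and the element-wise product used for the interaction feature v_qd are assumptions): score(q, d) = FN([v_q, v_d, v_qd]) with scores in [0, 1], trained with L = max(0, 1 - score(q, d+) + score(q, d-)).\n\nimport torch\nimport torch.nn as nn\n\nDIM = 128  # assumed embedding size\n\nclass PairwiseRanker(nn.Module):\n    def __init__(self, dim: int = DIM):\n        super().__init__()\n        # FN: feed-forward net over [v_q, v_d, v_qd] producing a score in [0, 1].\n        self.fn = nn.Sequential(nn.Linear(3 * dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())\n\n    def score(self, v_q: torch.Tensor, v_d: torch.Tensor) -> torch.Tensor:\n        v_qd = v_q * v_d  # interaction feature (element-wise product, an assumption)\n        return self.fn(torch.cat([v_q, v_d, v_qd], dim=-1)).squeeze(-1)\n\ndef hinge_loss(model, v_q, v_pos, v_neg):\n    # L(q, d+, d-; theta) = max(0, 1 - score(q, d+) + score(q, d-))\n    return torch.relu(1.0 - model.score(v_q, v_pos) + model.score(v_q, v_neg)).mean()\n\nmodel = PairwiseRanker()\nv_q, v_pos, v_neg = (torch.randn(8, DIM) for _ in range(3))\nloss = hinge_loss(model, v_q, v_pos, v_neg)\nloss.backward()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Module",
"sec_num": "3.3"
},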
{
"text": "The intelligent Knowledge-Base plays a behind-the-scenes role in the Q&A system. In addition to providing the FAQ engine with raw material, it also manages and optimizes the life-cycle of question-and-answer knowledge. The specific process is shown in Figure 3 . Our system achieved good results on the insurance business test sets described in Section 4 and provides online service for one hundred million customers.",
"cite_spans": [],
"ref_spans": [
{
"start": 257,
"end": 265,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Intelligent Knowledge-Base",
"sec_num": "3.5"
},
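{
"text": "A minimal sketch of the literal test set generation described in Section 4 (an editor's illustration; the keyword and synonym tables are hypothetical stand-ins for the paper's dictionaries): each labeled question is perturbed by deleting a non-keyword, injecting noise, and substituting synonyms, and a robust model should keep its prediction on the perturbed variants.\n\nimport random\n\nKEYWORDS = {'insurance', 'refund', 'premium'}   # hypothetical keyword dictionary\nSYNONYMS = {'buy': 'purchase', 'price': 'cost'}  # hypothetical synonym table\n\ndef perturb(question: str) -> list:\n    words = question.lower().rstrip('?').split()\n    variants = []\n    # Delete one non-keyword word.\n    non_kw = [w for w in words if w not in KEYWORDS]\n    if non_kw:\n        dropped = random.choice(non_kw)\n        variants.append(' '.join(w for w in words if w != dropped))\n    # Inject a noise token.\n    variants.append(' '.join(words + ['asdf']))\n    # Synonym substitution.\n    variants.append(' '.join(SYNONYMS.get(w, w) for w in words))\n    return variants\n\nprint(perturb('What is the price to buy insurance?'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intelligent Knowledge-Base",
"sec_num": "3.5"
},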
{
"text": "This paper presents a context-aware question-and-answer system with error correction, coreference resolution, long sentence compression, ElasticSearch and deep semantic matching with Siamese CBOW, and deep learning ranking. Our approach performs well both in engineering terms and in model accuracy. The architecture supports the high concurrency requirements of real-world use cases and has the high availability expected of a standard production environment. We have already applied this system in online intelligent customer service bots, AI assistants, AI selling bots, and other human-computer interaction AI products. In the future, we hope our question-and-answer system can support multimedia interaction, such as pictures, audio, and video in addition to text and voice, so that we can solve more problems for users with more intelligence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hierarchical Types Constrained Topic Entity Detection for Knowledge Base Question Answering",
"authors": [
{
"first": "Yunqi",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Manling",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuanzhuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yantao",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Xiaolong",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM 2018, Lyon, France",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunqi Qiu, Manling Li, Yuanzhuo Wang, Yantao Jia, Xiaolong Jin, 2018, Hierarchical Types Constrained Topic Entity Detection for Knowledge Base Question Answering, ACM 2018, April 23-27, 2018, Lyon, France.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine",
"authors": [
{
"first": "Minghui",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Feng-Lin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Siyu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Weipeng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Haiqing",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chu",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL 2017",
"volume": "",
"issue": "",
"pages": "498--503",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minghui Qiu, Feng-Lin Li, Siyu Wang, Xing Gao, Yan Chen, Weipeng Zhao, Haiqing Chen, Jun Huang, Wei Chu, 2017, AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine, ACL 2017, pages 498-503.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Convolutional Neural Networks for Sentence Classification",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim Y. Convolutional Neural Networks for Sentence Classification[C] Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2014: 1746-1751.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Siamese CBOW: Optimizing Word Embeddings for Sentence Representations",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kenter",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Borisov",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "de Rijke",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL 2016",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kenter, Alexey Borisov, Maarten de Rijke, 2016, Siamese CBOW: Optimizing Word Embeddings for Sentence Representations, ACL 2016.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning Text Similarity with Siamese Recurrent Networks",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Neculoiu",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Versteegh",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Rotaru",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL 2016 Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "148--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Neculoiu, Maarten Versteegh, Mihai Rotaru, 2016, Learning Text Similarity with Siamese Recurrent Networks, ACL 2016 Proceedings of the 1st Workshop on Representation Learning for NLP, pages 148-157.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "373--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn, Alessandro Moschitti, 2015, Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks, Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 373-382.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "Computation and Language 2018",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 2018, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Computation and Language 2018.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner,Christopher Clark, Kenton Lee, Luke Zettlemoyer, 2018, Deep contextualized word representations, NAACL 2018",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving language understanding with unsupervised learning",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "Technical report, OpenAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "The overall architecture of our QA system",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "3.2 Retrieval Module: The retrieval module includes keyword search, deep semantic matching, and KBQA recall, using the complementary advantages of these three methods to increase the number and diversity of recalled answers. The keyword search uses the open-source ElasticSearch (ES) engine. For deep semantic retrieval, we use a deep semantic model to compute semantic vector representations of the user query and of the knowledge in the Knowledge-Base (standard questions and extension questions), and use the Annoy algorithm to quickly find the best-matching semantic vectors. The deep semantic model is built on a Siamese network [5]. For each query, annotated similar questions are used as positive samples, while negative samples are generated by random sampling, redrawn at each iteration; this greatly increases the randomness of the training data and improves the generalization ability of the model. Inspired by the loss definitions used in face recognition, we use AM-Softmax as the loss function and achieve the best results.",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "L(q, d+, d-; \u03b8) = max(0, 1 - score(q, d+) + score(q, d-)), where score(q, d) is the matching score of query q and document d, v_q, v_d, and v_qd are the inputs of the neural network, and L(q, d+, d-; \u03b8) is the hinge loss of a training sample pair. Taking into account the professional requirements of question answering in the insurance field and the fact that the sorting model cannot achieve 100% accuracy, we add prior knowledge of the insurance industry to the rule sorting to ensure professional question answering. Rule sorting mainly considers the alignment of professional entity information between the user query and the candidate question: the best matching question should be consistent with the entities described by the user query, avoiding irrelevant answers.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Our Deep Rank Architecture",
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"uris": null,
"text": "Our Intelligent KB Module Architecture",
"type_str": "figure"
},
"TABREF0": {
"text": "In terms of feature extraction, to better capture local word-order relationships and context information, we experimented with LSTM, CNN, BERT, and other networks. BERT performs best but takes too long for online inference. Moreover, because the annotation quality of a large-scale industrial corpus is limited, some data noise exists; the more complex the model, the more noise it fits, so its generalization ability is not as good as that of a simpler model. We therefore chose the CBOW [4] model for feature extraction. Considering that Chinese word segmentation has limited accuracy in specific domains, to reduce the influence of segmentation errors we use pre-training vectors of multiple granularities to build our model: character embeddings, word embeddings, and high-frequency phrase vectors. Character embeddings handle literal matching, word embeddings represent word semantics, and phrase vectors capture local word-order relationships; together they achieve the best results. We also benchmarked different models on an insurance domain dataset; the results are shown in Table 1 (Benchmark results of deep semantic models). The sentence vector is obtained by normalizing the sum of the word embeddings in the sentence, and cos \u03b8 is the cosine similarity between the user query vector and a question's vector. Both s and m are hyperparameters of the AM-Softmax loss, where s is the scale factor and m determines the size of the classifier's boundary.",
"html": null,
"num": null,
"content": "<table><tr><td>Method</td><td>Siamese LSTM</td><td>Siamese CNN</td><td>Siamese CBOW</td><td>BERT</td></tr><tr><td>Recall</td><td>80.6%</td><td>83.5%</td><td>85.2%</td><td>88.9%</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"text": "4 Evaluation Metrics: The Q&A assessment indicators mainly include the number of valid questions, Top1 response accuracy, Top3 accuracy, effective question response accuracy, and knowledge coverage. The test set is divided into 5 categories: online log sampling used for model evaluation, bad case collections, high-frequency question mining used for algorithm regression testing, semantic test sets written to fully cover the requirements, and literal test sets generated from the corpus by deleting non-keywords, adding noise, synonym substitution, and other methods to evaluate the robustness of the model.",
"html": null,
"num": null,
"content": "<table><tr><td>[Flowchart residue from the Intelligent KB module (Figure 3): user queries and Q&amp;A pairs are preprocessed on Hadoop (duplicate removal, filtering of no-answer questions with context), passed to a log mining module for filtering (binary classification) and dispatching (multiclass classification with owner prediction) on a GPU cluster, then clustered and auto-filled into the Knowledge-Bases.]</td></tr></table>",
"type_str": "table"
}
}
}
}