|
{ |
|
"paper_id": "C69-1301", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:32:04.526331Z" |
|
}, |
|
"title": "A DIRECTED RANDOM PARAGRAPH GENERATOR", |
|
"authors": [ |
|
{ |
|
"first": "Stanley", |
|
"middle": [ |
|
"Y W" |
|
], |
|
"last": "Su", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The RAND Corporation", |
|
"location": { |
|
"settlement": "Santa Monica", |
|
"region": "California",
"country": "USA"
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kenneth", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Harper", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The RAND Corporation", |
|
"location": { |
|
"settlement": "Santa Monica", |
|
"region": "California",
"country": "USA"
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "", |
|
"pdf_parse": { |
|
"paper_id": "C69-1301", |
|
"_pdf_hash": "", |
|
"abstract": [], |
|
"body_text": [ |
|
{ |
|
"text": "The work described in the present paper represents a combination of two widely different approaches to the study of language. The first of these, the automatic generation of sentences by computer, is recent and highly specialized:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Yngve (1962) , Sakai and Nagao (1965) , Arsent'eva (1965) , Lomkovskaja (1965) , Friedman (1967), and Harper (1967) have applied a sentence generator to the study of syntactic and semantic problems of the level of the (isolated) sentence. The second, the study of units of discourse larger than the sentence, is as old as rhetoric, and extremely broad in scope; it includes, in one way or another, such diverse fields as beyond--the-sentence analysis (cf. Hendricks, 1967) and the linguistic study of literary texts (Bailey, 1968, 53--76) . The present study is an application of the technique of sentence generation to an analysis of the paragraph; the latter is seen as a unit of discourse composed of lower-level units (sentences), and characterized by some kind of structure.",
|
"cite_spans": [ |
|
{ |
|
"start": 6, |
|
"end": 12, |
|
"text": "(1962)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 15, |
|
"end": 37, |
|
"text": "Sakai and Nagao (1965)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 40, |
|
"end": 57, |
|
"text": "Arsent'eva (1965)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 60, |
|
"end": 78, |
|
"text": "Lomkovskaja (1965)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 81, |
|
"end": 101, |
|
"text": "Friedman (1967), and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 102, |
|
"end": 115, |
|
"text": "Harper (1967)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 472, |
|
"text": "(cf. Hendricks, 1967)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 516, |
|
"end": 538, |
|
"text": "(Bailey, 1968, 53--76)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "[Table 1: matrix of governing probabilities for the word classes VT, S, VI, N, A, DV, DS, and I; entries are the constants 0 and 1 or the probabilities P1 through P7.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The governing probabilities for a word are independent of each other. In paragraph generation the decision to select a dependent type will be made without regard to the selection of other dependent types. For example, a noun can have probabilities P6 and P7 of being the governor of a noun and an adjective respectively. The selection of a noun as a dependent based on P6 will not affect, and will not be affected by, the selection of an adjective as a dependent.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are two types of co-occurrence data accompanying every word in the glossary: a set of governing probabilities and a list of dependents.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The probability values associated with a word are determined on the basis of the syntactic behavior of the word in the processed text. If a noun occurs in 75 instances as the governor of an adjective in 100 occurrences in a text, the probability of having an adjective as a dependent is 0.75. The zeroes and ones in Table I are constant for all words in the glossary.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 320, |
|
"end": 327, |
|
"text": "Table I", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "These values are not listed in the sets of probability values for the entries of the glossary; however, they are known to the system. For instance, the set of probability values for a transitive verb will contain P1, P2, and P3.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The probability 1 of governing a noun as object will not be listed in the data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The second type of co-occurrence data accompanying every word in the glossary is a list of possible dependents.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The list is specified in terms of word numbers and semantic classes (to be described later). It contains the words that actually appear in the processed physics text as dependents of the word with which the list is associated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the lists of dependents are compiled on the basis of word co-occurrence in the text, legitimate word combinations are guaranteed.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the list of dependents for a verb, those words which can only be the subject are marked \"S\" and those which can only be the direct object are marked \"O\". (The extent of the semantic classes is shown in Table 2.) The restriction pattern in Fig. 2 specifies that the sentence to be generated should contain a transitive verb which belongs to either semantic class C1 or C2. The verb should govern (1) a noun as the subject of the sentence,",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 157, |
|
"end": 164, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 194, |
|
"end": 200, |
|
"text": "Fig. 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(2) an object which is to be selected from the words in semantic class C15 or the specified words W1 and W2, and",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) an adverb which does not belong to semantic class C19. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "I. INTRODUCTION", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The general strategy for generating a paragraph is, first, to generate the initial sentence based on a selected restriction pattern, and then to generate each noninitial sentence based not only on a selected restriction pattern but also on the semantic properties of the words in all the previously generated sentences of the paragraph. The algorithm and the sentence generation procedure can best be illustrated by an example. Let us suppose that the restriction pattern shown in Fig. 4(a) is chosen for a sentence.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 489, |
|
"text": "Fig. 4(a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "4.2. The Generation Algorithm",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For ease of reference we will letter the steps involved in this procedure.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2. The Generation Algorithm",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "a. If the restriction pattern specifies a restriction on the selection of the sentence governor (usually a transitive verb (VT) or an intransitive verb (VI)), a VT or VI will be randomly chosen from the specified semantic class(es) or word(s). Otherwise a VT or VI will be randomly chosen.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2. The Generation Algorithm",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In our example the restriction pattern in Fig. 4(a) specifies that a word should be selected from the word class VT which is not a member of the semantic classes C1, C2, and C3, but is a governor of a word in C16, a word in word class N, and a word in C19. (Note also that the sentence should not contain a sentence adverb.)",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 51, |
|
"text": "Fig. 4(a)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "4.2. The Generation Algorithm",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are 16 candidates which satisfy the restrictions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "4.2. The Generation Algorithm",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "They are shown in Fig. 4(b). If the sentence to be generated is the first sentence of a paragraph, a noun which is in the dependent list associated with the verb 3336 and also a member of C16 is chosen. However, if the sentence is a noninitial one, the procedure CRITERIA is called to form a probability reweighting table on the basis of the criteria applicable to the verb 3336 and to this local structure (i.e., a VT dominates an NS). All candidates (those words which belong to C16 and which are in the dependent list associated with 3336) are first assigned an equal weight. Then the probability reweighting table is used to adjust the weights of the candidates. Fig. 4(b) shows the candidates for the node NS. An individual word is randomly chosen from the candidates based on their different weights:",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 26, |
|
"text": "Fig. 4(b", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 27, |
|
"end": 36, |
|
"text": "Fig. 4(a)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 675, |
|
"end": 684, |
|
"text": "Fig. 4(b)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "4.2. The Generation Algorithm",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "word number 2625 whose internal address is 317.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "DS(--DS) N(+N)",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A noun is to be selected as the object of the verb 3336. As in d.1, the restriction pattern is consulted and, if the sentence is a noninitial one, the procedure CRITERIA is called. Fig. 4(b) shows the candidates for the node NO. The same probability reweighting scheme is applied to adjust the weights of the candidates. A word is selected at random: word number 1610 whose address is 261.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 190, |
|
"text": "Fig. 4(b)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "d.2.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An adverb is to be selected for the verb 3336.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similar to the previous procedure, the restriction pattern restricts the selection of candidates; CRITERIA is called for a noninitial sentence to construct the probability reweighting table. g. We now move from the third level to the fourth level of the dependency tree structure. Since the only word on the fourth level is an adjective, which does not govern, we have reached the lowest level. The generation of a sentence is completed. Fig. 4(c) shows the generated sentence. (In the Russian sentence, morphology is ignored.)",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 431, |
|
"end": 440, |
|
"text": "Fig. 4(c)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The restriction pattern of the sentence just generated, together with those of the previously generated sentences, is again used as the basis for selecting the restriction pattern for the next sentence. To distinguish between these two attributes, they will be discussed separately, in an admittedly artificial way.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The chief function of the restriction pattern is to achieve intersentence development, and an overall pattern to the sequence of sentence pairs; to a degree, lexical coherence is also affected through the restriction pattern (e.g., through the recurrence of semantic classes). The main function of the probability reweighting tables is to",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "achieve cohesion, through the device of increasing the likelihood of lexical recurrence; the principle of development is also implemented here, to the extent that similar, but not identical, words are chosen in noninitial sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In general it may be said that the restriction pattern is designed to effect an overall pattern, whereas the reweighting tables are more local in effect, dealing with purely lexical materials. (2) from whole to part, or from multiplicity to singularity (presumably a variation of the first-cited principle);",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) past action to present; (4) \"other\" to \"present\" agent;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(5) \"other\" to \"present\" place; (6) cause to effect (more rarely, the reverse); (7) action to purpose of the action;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(8) action to means of performing the action; (9) simple rephrasing. Lack of space prevents illustration of these principles; it should be obvious that even this small stock of strategies will suffice for the production of innumerable paragraphs. It should also be noted that a random ordering of sentences built on the above pairwise strategies will produce less than satisfactory results; certain sequences of sentence pairs are more likely than others to fit into an acceptable pattern for the paragraph.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "d.3.",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The nature/ of scattering/ was investigated/ in an earlier paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VT(+VT)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The use of patterns to control development is summarized in Table 3.",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 60, |
|
"end": 67, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "VT(+VT)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the verb investigate in sentence (a) belongs to semantic class C2, and C2 contains such verbs as \"study\" and \"investigate\" which specify very general actions, the node VT(+C7) in pattern 2 controls the selection of a verb of greater specificity. The node DV(+C19) in pattern 1 and pattern 2, and the node DV(+C20) in pattern 3, introduce the time progression and location change to the paragraph. Class C19 contains such adverbs as \"in an earlier paper,\" \"in paper I,\" \"in an earlier study,\" etc., which specify that the time is past.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VT(+VT)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Class C20 contains such adverbs as \"in the present work,\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VT(+VT)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\"in the present paper,\" etc., which specify the different locations in which some actions were performed. The noun phrase \"the nature of scattering\" in a. is excessively general; a more obvious shortcoming is the lack of continuity in the noun object. Such deficiencies suggest the need for greater cohesion.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "VT(+VT)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As a first approximation we have chosen to implement the following principles of cohesion: (1) selection of a \"concrete\" word in noun phrases; (2) word repetition;",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.2. Cohesion",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) use of hypernyms and synonyms; the conditions under which this can be done remain to be specified.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.2. Cohesion",
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(3) The creation of \"lexical fields\" (containing, e.g., such words as \"to photograph,\" \"camera,\" \"film, ",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "5.2. Cohesion",
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": {}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"num": null, |
|
"uris": null, |
|
"text": "To repeat: the object of our investigation is the paragraph; the technique is analysis by synthesis, i.e. via the automatic generation of strings of sentences that possess the properties of paragraphs. Harper's earlier sentence generation program differed from other versions in its use of data on lexical co-occurrence and word behavior, both obtained from machine analysis of written text. These data are incorporated with some modifications in a new program designed to produce strings of sentences that possess the properties of coherence and development found in \"real\" discourse. (The actual goal is the production of isolated paragraphs, not an extended discourse.) In essence the program is designed (i) to generate an initial sentence; (ii) to \"inspect\" the result in order to determine strategies for producing the following sentence; (iii) to build a second sentence, making use of one of these strategies, and employing, in addition, such criteria of cohesion as lexical class recurrence, substitution, anaphora, and synonymy; (iv) to continue the process for a prescribed number of sentences, observing both the general strategic principles and the lexical context. Analysis of the output will lead to modification of the input materials, and the cycle will be repeated. This paper describes the implementations of these ideas, and discusses the theoretical implications of the paragraph generator. First we give a description of the language materials on which the generator operates. The next section deals with a program which converts the language data into tables with associative links to minimize the storage requirement and access time. Section 4 describes: (1) the function of the main components of the generation program, (2) the generation algorithm. Section 5 describes the implementation of some linguistic assumptions about semantic and structural connections in a discourse.",
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"num": null, |
|
"uris": null, |
|
"text": "The co-occurrence data can be regarded as either syntactic or semantic. They are distinguished here from both the dependency rules and part of speech designation, and from the semantic classes that have been established. At present, seventy-four semantic classes have been set up. Some of these are formed distributionally (i.e., on the basis of their tendency to co-occur syntactically with the same words in text; cf. Harper, 1965); other classes contain words of the same root, synonyms, hypernyms, and words arbitrarily classified as \"concrete.\" The semantic classifications are highly tentative, and are subject to modification. Their extent is shown in",
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"num": null, |
|
"uris": null, |
|
"text": ", (4), (5), (6), and (7). The advantages are (i) reduction in storage requirements, and (ii) capacity for rapid selection of a word from a part of speech or a semantic class. The disadvantage is that we have placed a restriction on the amount of additional data that may be added to the existing lists. To avoid modifying the program when new data are added, indices (such as x, y, and z in Fig. 1) to the reserved spaces in tables (1), (3), and (7) are made input parameters to the program. At present the parameters are set to leave space for expansion of input data. Further expansion can be handled simply by readjusting the input parameters.",
|
"type_str": "figure" |
|
}, |
|
"FIGREF3": { |
|
"num": null, |
|
"uris": null, |
|
"text": "are again used as the basis for selecting the restriction pattern for the next sentence. At the present stage of development no criterion is used to determine the end of a paragraph. The number of restriction patterns input to the pattern selection procedure determines the number of sentences in a paragraph. When the sentences of a paragraph have been generated, glossary lookup is performed and the transliterated Russian forms and their structural relations are printed. 5. IMPLEMENTATION OF LINGUISTIC ASSUMPTIONS The structure of paragraphs is poorly understood, and is in any event subject to enormous variety. Nevertheless, we have adopted a simplified model, which postulates that the units (sentences) of a paragraph should be arranged in a recognizable pattern. Specifically, it is assumed that each pair of sentences should be characterized by the attributes of development and cohesion. Development implies progression; for example, some kind of spatial, temporal, or logical movement: a paragraph can be assumed to \"get somewhere.\" Cohesion, on the other hand, implies continuity or relatedness; as such, it is a kind of curb on progression. Although it is difficult, perhaps impossible,",
|
"type_str": "figure" |
|
}, |
|
"FIGREF4": { |
|
"num": null, |
|
"uris": null, |
|
"text": "Analysis of hundreds of sentence pairs, and scores of paragraphs, of Russian scientific texts suggests that the following principles of development are commonly employed in intersentence connection: (1) progress from the general to the specific (more rarely, the reverse);",
|
"type_str": "figure" |
|
}, |
|
"FIGREF5": { |
|
"num": null, |
|
"uris": null, |
|
"text": "It was stated in Sec. 1 that the computer program would provide a means of inspecting the initial sentence of a paragraph before deciding on a strategy for further development. ... probability/ of absorption. c. A theory/ of interaction/ is worked out/ in the present paper.",
|
"type_str": "figure" |
|
}, |
|
"FIGREF6": { |
|
"num": null, |
|
"uris": null, |
|
"text": "in pattern 2 controls the selection of a verb of greater specificity; the verbs in C7 are appropriate. The node VT(+C7) in pattern 3 serves this purpose, i.e., to control the development of actions from generality to specificity. The node NS(+C16) in pattern 2 specifies that an agent for the second sentence is not the present author implicitly specified in the third sentence. This restriction introduces another type of text development, i.e. from other writer to present author.",
|
"type_str": "figure" |
|
}, |
|
"FIGREF7": { |
|
"num": null, |
|
"uris": null, |
|
"text": "suggests partial introduction of coherence into the sentence-sequence. For example, the verbs in all sentences are in some degree related, and a general parallelism is maintained in the selection of agents and adverbs of time and location. Nonetheless it is clear that sentences a. through d. do not form a good paragraph. One deficiency is the excessively general character of the noun phrase,",
|
"type_str": "figure" |
|
}, |
|
"FIGREF8": { |
|
"num": null, |
|
"uris": null, |
|
"text": "(4) use of anaphoric words; (5) increased repetition of members of the same semantic classes; (6) avoidance of word repetition within a single noun phrase. The implementation of these tactics is carried out in the reweighting table described in Sec. 4. In essence each tactic is a criterion for determining which words and semantic classes, together with reweighting values, are to be entered into the reweighting table. The content of the table depends on the criteria applied for each word selection; the main generation routine accepts the table as input for control of the selection, without \"knowing\" which criteria are used in forming the table. When the principles of cohesion have been applied, sentences a. through d. above might have the following form: a'. The nature of nuclear scattering was investigated in an earlier paper. b'. Belov proposed in paper (1) a means of determining the probability of such phenomena. c'. A theory of proton scattering is worked out in the present paper. d'. A method of analyzing the interaction of these particles is proposed. The following are some of the improvements in this sentence-sequence over a. through d.: (1) The addition of the \"concrete\" adjectives nuclear and proton gives the noun phrases in a'. and c'. a specificity that is lacking in a. and c. This effect is forced upon the generation routine by requiring the selection of dependents in a noun phrase to continue until a word coded as \"concrete\" has been chosen. (Since the effect may also be one of very long noun phrases, a counter-effect is achieved by constant up-weighting of the semantic class of concrete words in the reweighting table.) (2) The recurrence of scattering in a'. and c'. increases continuity in the sentence sequence. The generation program achieves word repetition in adjacent or nearly adjacent sentences by entering the noun-subjects or noun-objects of previously generated sentences (the choice between noun-subject or noun-object is made by reference to the restriction pattern for the sentence being generated) into the reweighting table, together with a high positive reweighting value. Moreover, the possible governors of these nouns are also entered into the table with the same reweighting value. The value controls the probability of repeating one of the nouns in a previously generated sentence or of selecting a noun which is the governor of a word in a previously generated sentence. In the latter case word repetition will occur on the next level of dependency structure, i.e., when the program selects a dependent for the selected governor. (3) The selection in b'. and d'. of phenomenon and particle, hypernyms of scattering and proton respectively, introduces semantic continuity and, in addition, reduces the redundancy and monotony of word repetition. The use of hypernyms and synonyms is implemented by entering any hypernym and synonym of the words in previous sentences into the reweighting table with a positive reweighting value, thus increasing the probability of their selection. (4) The hypernyms phenomenon and particle in b'. and d'. acquire \"concreteness\" by the addition of the anaphoric dependents such and these. The concreteness of the noun phrase such phenomena in b'. has presumably been provided by the dependents of scattering in a'. In the present system the addition of an anaphoric dependent for a hypernym automatically terminates the selection of other dependents for the hypernym. (5) The selection of proton and interaction in c'. and d'. is a result of increasing the repetition of members of the same semantic class: the semantic classes represented by nuclear in a'. and scattering in c'. are up-weighted during the generation of c'. and d'. (6) The undesirable repetition in d. is eliminated: words generated in a noun phrase are entered with negative value in the reweighting table, so that their repetition in the same phrase is inhibited. The program implementing even these few tactics is currently operational, and produces output in reasonable times. Using the strategies for achieving development and cohesion so far developed, it is capable of generating ten-sentence strings in approximately fifteen seconds. Some of the main difficulties connected with the output are the following: (1) Deficiencies in the co-occurrence data affect the quality of individual sentences. For example, some nouns have very few dependents, a characteristic deriving from their behavior in the text on which the data is based; the selection of one of these nouns in a sentence may nullify the effect of applying strategies for development or cohesion. In general a generated paragraph is only as strong as the weakest link; defective single sentences can disturb the implementation of structural principles. (2) The grammar permits the generation of simple sentences only. Complex or compound sentences can, of course, be created by the device of juxtaposing these simple sentences with the help of conjunctions or relatives;",
|
"type_str": "figure" |
|
}, |
|
"FIGREF9": { |
|
"num": null, |
|
"uris": null, |
|
"text": "\") would greatly increase the effect of cohesion. Distributional data for the formation of such \"fields\" is not readily available; if the classes are to be intuitively created, the result will be inconsistent with our present system of classification. Study of these problems continues through analysis of the output. The effects of strengthening or relaxing various criteria for achieving development and cohesion have been observed in a series of experiments. The use of alternative sets of language input data (e.g., different dependent probabilities or semantic classes) is also contemplated. (It should be emphasized that the program is not oriented on a particular language or set of language data.) The experimental design of the generation program is consistent with this kind of modification.",
|
"type_str": "figure" |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td colspan=\"2\">SEMANTIC DATA</td><td/></tr><tr><td/><td>Number of</td><td>Number of</td></tr><tr><td>Classification</td><td>Classes</td><td>Words in Class</td></tr><tr><td>Distributional Classes</td><td>22</td><td>150</td></tr><tr><td>Hypernym Classes</td><td>10</td><td>160</td></tr><tr><td>Word Families</td><td>25</td><td>52</td></tr><tr><td>Synonym-antonym Classes</td><td>16</td><td>48</td></tr><tr><td>\"Concrete\" Words</td><td>1</td><td>54</td></tr><tr><td>TOTAL</td><td>74</td><td>464</td></tr><tr><td colspan=\"3\">The language materials described above are punched on approximately 2500 cards. The data are processed by a conversion program in order to form the data base for the paragraph generation program.</td></tr></table>",
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>[Fig. 1: linked data tables, including (2) a lookup table of word numbers and addresses, (3) words in semantic classes, (4) a dependent list, (5) a hypernym list, and (6) a semantic class list. The lookup table (2) allows us to replace word numbers with their addresses after all data have been processed.]</td></tr></table>",
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"content": "<table><tr><td>--13--</td><td/></tr><tr><td colspan=\"2\">down to any desired leve~. Table (i) is linked to tables</td></tr><tr><td colspan=\"2\">(4), (5), (6), and (7), and tables (4), (5), and (6) are</td></tr><tr><td colspan=\"2\">linked either directly back to table (I) or indirectly</td></tr><tr><td colspan=\"2\">through table (8) and then table (3) back to table (i).</td></tr><tr><td colspan=\"2\">Thus, access to any piece of information in these data</td></tr><tr><td colspan=\"2\">tables is gained by simple table lookup.</td></tr><tr><td colspan=\"2\">In view of the variability in the number of words in</td></tr><tr><td colspan=\"2\">each part-of--speech and semantic class, and in the number</td></tr><tr><td colspan=\"2\">of governing probabilities, hypernyms, ser~ntic classes</td></tr><tr><td colspan=\"2\">and dependents associated with each word, we have packed</td></tr><tr><td colspan=\"2\">these data in large arrays as illustrated in tables (i),</td></tr><tr><td>associated with these dependents.</td><td>In turn we can trace third</td></tr><tr><td colspan=\"2\">level dependent lists. We can easily continue this operation</td></tr></table>", |
|
"type_str": "table", |
|
"text": "", |
|
"num": null, |
|
"html": null |
|
}, |
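The packed-array scheme described in the table text above (variable-length records stored in large arrays, reached by simple table lookup) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation; the function names and the dictionary-based lookup table are our own assumptions, standing in for the word-number-to-address table of the original.

```python
# Hypothetical sketch: variable-length per-word records packed end-to-end
# in one flat array, with a lookup table mapping each word number to the
# address (offset) of its record. Names are ours, not from the paper.

def pack_records(records):
    """Pack {word_number: [items...]} into (flat_array, lookup_table).

    Each record is stored as a negative counter (-n, echoing the paper's
    figures) followed by its n items; the lookup table maps a word number
    to the record's starting address in the flat array.
    """
    flat, lookup = [], {}
    for word_no, items in records.items():
        lookup[word_no] = len(flat)   # address of this word's record
        flat.append(-len(items))      # negative counter, e.g. -3
        flat.extend(items)
    return flat, lookup

def fetch(flat, lookup, word_no):
    """Retrieve a word's items by simple table lookup."""
    addr = lookup[word_no]
    n = -flat[addr]                   # counter is stored negated
    return flat[addr + 1 : addr + 1 + n]
```

Any list attached to a word (dependents, hypernyms, semantic classes) can be stored and fetched this way, which is why access to any piece of information reduces to table lookup.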
|
"TABREF5": { |
|
"content": "<table><tr><td colspan=\"3\">the general \"development\" of paragraphs will be described added, and a random number in the range from I to the</td></tr><tr><td colspan=\"3\">in a later section. Identifier total sum is generated to determine which candidate should Weight</td></tr><tr><td>4.1~3. be selected.</td><td colspan=\"2\">Discourse Relator (RELATOR~ Input to this C2 +5</td></tr><tr><td colspan=\"3\">procedure are (i) a dependent type, (2) a probability W I --7 4.1.5. Word Generator ~WOR-GEN). This procedure finds</td></tr><tr><td colspan=\"3\">value, and (3) a restriction pattern. This procedure \u00bd +3 all possible candidates which satisfy the restrictions</td></tr><tr><td colspan=\"3\">determines whether the given dependent type conflicts C12 -4 specified in s restriction pattern, and assigns different</td></tr><tr><td colspan=\"3\">with the restrictions specified in the pattern. weights to them on the basis of the contents of a proba-If no</td></tr><tr><td colspan=\"3\">conflict is found, this procedure determines whether a bility reweighting table. It selects a word st random</td></tr><tr><td colspan=\"3\">As illustrated in the pattern, each node in a pattern word should be selected from the given dependent type Fig. 3 --Format of a reweighting table from the candidates according to their weights.</td></tr><tr><td colspan=\"3\">contains a word class and selection restrictions which are based on the input probability value. If the selection of</td></tr><tr><td colspan=\"3\">positively or negatively specified in terms of semantic</td></tr><tr><td colspan=\"2\">class(es), specific word(s) or a word class.</td><td>Restriction</td></tr><tr><td colspan=\"2\">patterns are stored in the following form:</td><td>Q-PIP2..oPn.</td></tr><tr><td colspan=\"3\">Q is a single pattern, or a combination of patterns, and The selection of</td></tr><tr><td colspan=\"3\">PIP2...P n are single restriction patterns. 
any word from these five will satisfy the restriction Essentially,</td></tr><tr><td colspan=\"3\">Q-PIP2...Pn is e rule which specifies that if a sentence pattern for the sentence. Instead of randomly selecting</td></tr><tr><td colspan=\"3\">(or string of sentences) whose sentence skeleton(s) matches one word out of these five candidates, we may want to</td></tr><tr><td colspan=\"3\">Q, then it can be followed by a sentence whose sentence increase the probability of selecting a word which will</td></tr><tr><td colspan=\"3\">skeleton is one of these Ps. have semantic connections with the word(s) in the preceding Thus, one of these Ps is</td></tr><tr><td colspan=\"3\">randomly selected to be used as a restriction pattern for or current sentence. When there are choices in word selec-</td></tr><tr><td colspan=\"3\">a succeeding sentence. tion, all candidates are preassigned equal weights, and The pattern selection procedure is</td></tr><tr><td colspan=\"3\">not yet coded. criteria relevant to the current selection are applied to At present, strings of restriction patterns</td></tr><tr><td colspan=\"3\">are given directly to the pattern selection routine. form a reweighting table. If a word in the list of can--The</td></tr><tr><td colspan=\"3\">generation program generates strings of sentences under didates matches a word or belongs to a semantic class in</td></tr><tr><td colspan=\"3\">the control oz direction of the restrictions specified in the table, the associated weight is added to its preassigned</td></tr><tr><td colspan=\"3\">the patterns. weight. The final positive weights of all candidates are The use of restriction patterns to control</td></tr></table>", |
|
"type_str": "table", |
|
"text": "The subject of the sentence should not govern an adjective.", |
|
"num": null, |
|
"html": null |
|
}, |
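The word-generator selection scheme described above (equal preassigned weights, additive reweighting by matching word or semantic class, then a random draw from 1 to the sum of the final positive weights) can be sketched as follows. This is a minimal reconstruction under our own assumptions: the function name, the base weight of 10, and the example data are ours, not the paper's.

```python
import random

# Sketch of weighted random word selection via a reweighting table.
# `reweighting` maps a word or a semantic-class identifier to a
# (possibly negative) weight adjustment, as in Fig. 3.

def select_word(candidates, semantic_class, reweighting,
                base_weight=10, rng=random):
    weights = {}
    for word in candidates:
        w = base_weight                        # equal preassigned weight
        for key, delta in reweighting.items():
            # match on the word itself or on one of its semantic classes
            if key == word or key in semantic_class.get(word, ()):
                w += delta
        weights[word] = w
    # only candidates with a positive final weight remain in the draw
    positive = [(word, w) for word, w in weights.items() if w > 0]
    total = sum(w for _, w in positive)
    draw = rng.randint(1, total)               # random number in 1..total
    for word, w in positive:                   # walk the cumulative sums
        draw -= w
        if draw <= 0:
            return word
```

A candidate whose class appears with +5 in the table becomes five draws more likely than its equally weighted rivals, which is exactly how semantic connections to earlier words are favored.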
|
"TABREF8": { |
|
"content": "<table><tr><td colspan=\"3\">that the adjective fails the probability test, none is</td></tr><tr><td colspan=\"3\">chosen for the word 1610. Since the restriction pattern</td></tr><tr><td colspan=\"3\">specifies that a word should be selected from the word</td></tr><tr><td colspan=\"3\">class N as a dependent of the word 1610, the same operation</td></tr><tr><td colspan=\"3\">described in step d. is performed to select the word 2466,</td></tr><tr><td colspan=\"3\">whose address is 308.</td><td>\u00a2</td></tr><tr><td colspan=\"3\">e.3. The adverb 6505 selected in d.3. has no dependent</td></tr><tr><td colspan=\"3\">since adverbs never govern.</td></tr><tr><td colspan=\"3\">f. We now move to the third level of the dependency</td></tr><tr><td colspan=\"3\">tree structure. The noun 2466 may govern the dependent</td></tr><tr><td colspan=\"3\">types adjective and noun. Let us assume that the dependent</td></tr><tr><td colspan=\"3\">type A passes the tests described in step c. and the</td></tr><tr><td colspan=\"3\">dependent type N fails. An adjective 2263, whose address</td></tr><tr><td colspan=\"3\">is 42, is selected from the list of candidates shown in</td></tr><tr><td colspan=\"3\">e. The dependents of the words 2625, 1610, and 6505 the figure.</td></tr><tr><td colspan=\"3\">are now considered with respect to their possible dependents</td></tr><tr><td colspan=\"3\">and associated probabilities.</td><td>We are working from the top</td></tr><tr><td colspan=\"3\">to the second level of the dependency tree structure.</td></tr><tr><td>e.l.</td><td colspan=\"2\">The noun 2625 may govern the dependent types</td></tr><tr><td colspan=\"2\">adjective and noun.</td><td>Each of these is considered in turn</td></tr><tr><td colspan=\"3\">by the same operations described in steps b. and c. 
For</td></tr><tr><td colspan=\"3\">brevity, let us assume that none of these dependent types</td></tr><tr><td colspan=\"3\">pass the probability test.</td><td>Thus, no word is selected from</td></tr><tr><td colspan=\"3\">these dependent types.</td></tr><tr><td>e.2.</td><td colspan=\"2\">The noun 1610 may govern the dependent types</td></tr><tr><td colspan=\"3\">adjective and noun with different probability.</td><td>Assuming</td></tr></table>", |
|
"type_str": "table", |
|
"text": "table, and an adverb is randomly selected. In the figure we see the candidates for the node DV, and the adverb 6505, whose address is 179, is chosen.", |
|
"num": null, |
|
"html": null |
|
} |
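The level-by-level walk in steps e. through f. above (for each selected word, put each possible dependent type through a probability test, and for each type that passes, draw a word from that class and expand it in turn) can be sketched as a recursive generator. This is an illustrative reconstruction only; the lexicon layout, function name, and probability values are our own assumptions.

```python
import random

# Hedged sketch of top-down dependency-tree generation. Each lexicon
# entry lists (dependent_type, probability) pairs and, per type, the
# candidate words that may fill it.

def generate(word, lexicon, rng):
    """Return a dependency tree (word, [subtrees]) grown top-down."""
    subtrees = []
    for dep_type, prob in lexicon[word]["dependents"]:
        if rng.random() >= prob:      # probability test fails: no dependent
            continue
        candidates = lexicon[word]["candidates"][dep_type]
        child = rng.choice(candidates)     # pick a word of this class
        subtrees.append(generate(child, lexicon, rng))
    return (word, subtrees)
```

With probabilities of 1.0 and 0.0 the walk is deterministic, reproducing the worked example's shape: a noun governing a noun governing an adjective, with failed types simply omitted.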
|
} |
|
} |
|
} |