{
"paper_id": "T75-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:43:22.998678Z"
},
"title": "p.ll Author should read: p.30 Author should read: p.60 Author should read: p.84 Author should read: p.94 Author should read: ERRATA",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Riesbeck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Yale University",
"location": {}
},
"email": ""
},
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rockefeller University",
"location": {}
},
"email": ""
},
{
"first": "Joseph",
"middle": [
"D"
],
"last": "Becker",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wisconsin",
"location": {}
},
"email": ""
},
{
"first": "Newman",
"middle": [
"Sheldon"
],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wisconsin",
"location": {}
},
"email": ""
},
{
"first": "Carl",
"middle": [],
"last": "Hewitt",
"suffix": "",
"affiliation": {
"laboratory": "Artificial Intelligence Lab, MIT",
"institution": "",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "T75-1006",
"_pdf_hash": "",
"abstract": [],
"body_text": [],
"back_matter": [],
"bib_entries": {},
"ref_entries": {
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>understanding process ought to be sensitive to the understander's style</td></tr><tr><td>and purpose.\"</td></tr><tr><td>p.198 Table should read:</td></tr><tr><td>Act Name: Request</td></tr><tr><td>Argument List:</td></tr><tr><td>agent: A; recipient: R; message: M; requested response: X</td></tr><tr><td>Can Conditions:</td></tr><tr><td>CI: A EXPECTS that A CAN CAUSE some action such that</td></tr><tr><td>that action results in R KNOWING A's message.</td></tr><tr><td>C2: A EXPECTS that R will CHOOSE to UNDERSTAND A's message</td></tr><tr><td>C3: A BELIEVES that R BELIEVES certain propositions;</td></tr><tr><td>AND</td></tr><tr><td>A EXPECTS that [R's KNOWING A's message AND R BELIEVING</td></tr><tr><td>certain propositions] will result in R BELIEVING:</td></tr><tr><td>(I) A WANTS X</td></tr><tr><td>(2) A WANTS R to CAUSE X</td></tr><tr><td>(3) A BELIEVES that R CAN CAUSE X</td></tr><tr><td>(4) A EXPECTS that A's REQUESTING may MOTIVATE</td></tr><tr><td>R to CAUSE X</td></tr><tr><td>(5) A BELIEVES that R was NOT MOTIVATED to</td></tr><tr><td>CAUSE X prior to A's REQUEST.</td></tr><tr><td>Goal Hypotheses:</td></tr><tr><td>GI: R BELIEVES that A WANTS R to CAUSE X</td></tr><tr><td>G2: A's REQUEST may MOTIVATE R to CAUSE X</td></tr><tr><td>Outcome Possibilities:</td></tr><tr><td>O1: R will UNDERSTAND A's COMMUNICATIONACT</td></tr><tr><td>02: If Someone PERCEIVES A's message, then that</td></tr><tr><td>Someone CAN UNDERSTAND A's COMMUNICATIONACT</td></tr><tr><td>Motivational Hypotheses:</td></tr><tr><td>MI: A WANTS R to BELIEVE that A WANTS R to CAUSE X</td></tr><tr><td>M2: A WANTS X</td></tr><tr><td>M3: A WANTS R to CAUSE X</td></tr><tr><td>Normative Obligations:</td></tr><tr><td>NI: If someone BELIEVES that A is COMMUNICATING then that</td></tr><tr><td>someone BELIEVES that A OUGHT to UNDERSTAND A's message</td></tr><tr><td>N2: If R BELIEVES that A is COMMUNICATING to R then R</td></tr><tr><td>BELIEVES that R CUGHT to UNDERSTAND A's message</td></tr><tr><td>N3: If R BELIEVES that A is REQUESTING that</td></tr><tr><td>R CAUSE X AND R EXPECTS to NOT CAUSE X</td></tr><tr><td>then R BELIEVES that R OUGHT to EXPLAIN</td></tr><tr><td>to A why R EXPECTS to NOT CAUSE X</td></tr><tr><td>TABLE I. Representation of the Action REQUEST</td></tr><tr><td>22.</td></tr></table>",
"html": null,
"text": "p.104 Author should read: Marvin Minsky, Artificial Intelligence Lab, MIT p.142 First sentence of the final paragraph should read: \"What we are saying here is perhaps nothing more nor less than the conventional wisdom that the"
}
}
}
}