|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T02:13:45.677793Z" |
|
}, |
|
"title": "Grounded PCFG Induction with Images", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The Ohio State University", |
|
"location": { |
|
"settlement": "Columbus", |
|
"region": "OH", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "The Ohio State University", |
|
"location": { |
|
"settlement": "Columbus", |
|
"region": "OH", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Recent work in unsupervised parsing has tried to incorporate visual information into learning, but results suggest that these models need linguistic bias to compete against models that only rely on text. This work proposes grammar induction models which use visual information from images for labeled parsing, and achieve state-of-the-art results on grounded grammar induction on several languages. Results indicate that visual information is especially helpful in languages where high frequency words are more broadly distributed. Comparison between models with and without visual information shows that the grounded models are able to use visual information for proposing noun phrases, gathering useful information from images for unknown words, and achieving better performance at prepositional phrase attachment prediction. 1 Recent grammar induction models are able to produce accurate grammars and labeled parses with raw text only (Jin et al., 2018b, 2019; Kim et al., 2019b,a; Drozdov et al., 2019), providing evidence against the poverty of the stimulus argument (Chomsky, 1965), and showing that many linguistic distinctions like lexical and phrasal categories can be directly induced from raw text statistics. However, as computational-level models of human syntax acquisition, they lack semantic, pragmatic and environmental information which human learners seem to use (Gleitman, 1990; Pinker and MacWhinney, 1987; Tomasello, 2003). This paper proposes novel grounded neuralnetwork-based models of grammar induction which take into account information extracted from images in learning. Performance comparisons show", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Recent work in unsupervised parsing has tried to incorporate visual information into learning, but results suggest that these models need linguistic bias to compete against models that only rely on text. This work proposes grammar induction models which use visual information from images for labeled parsing, and achieve state-of-the-art results on grounded grammar induction on several languages. Results indicate that visual information is especially helpful in languages where high frequency words are more broadly distributed. Comparison between models with and without visual information shows that the grounded models are able to use visual information for proposing noun phrases, gathering useful information from images for unknown words, and achieving better performance at prepositional phrase attachment prediction. 1 Recent grammar induction models are able to produce accurate grammars and labeled parses with raw text only (Jin et al., 2018b, 2019; Kim et al., 2019b,a; Drozdov et al., 2019), providing evidence against the poverty of the stimulus argument (Chomsky, 1965), and showing that many linguistic distinctions like lexical and phrasal categories can be directly induced from raw text statistics. However, as computational-level models of human syntax acquisition, they lack semantic, pragmatic and environmental information which human learners seem to use (Gleitman, 1990; Pinker and MacWhinney, 1987; Tomasello, 2003). This paper proposes novel grounded neuralnetwork-based models of grammar induction which take into account information extracted from images in learning. Performance comparisons show", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Figure 1: Examples of disambiguating information provided by images for the prepositional phrase attachment of the sentence Mary eats spaghetti with a friend (Gokcen et al., 2018) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 158, |
|
"end": 179, |
|
"text": "(Gokcen et al., 2018)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "that the proposed models achieve state-of-the-art results on multilingual induction datasets, even without help from linguistic knowledge or pretrained image encoders. Experiments show several specific benefits attributable to the use of visual information in induction. First, as a proxy to semantics, the co-occurrences between objects in images and referring words and expressions, such as the word spaghetti and the plate of spaghetti in Figure 1 , 2 provide clues to the induction model about the syntactic categories of such linguistic units, which may complement distributional cues from word collocation which normal grammar inducers rely on solely for induction. Also, pictures may help disambiguate different syntactic relations: induction models are not able to resolve many prepositional phrase attachment ambiguities with only text -for example in Figure 1 , there is little information in the text of Mary eats spaghetti with a friend for the induction models to induce a high attachment structure where a friend is a companion -and images may provide information to resolve these ambiguities. Finally, images may provide grounding information for unknown words when their syntactic properties are not clearly indicated by sentential context.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 442, |
|
"end": 450, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 861, |
|
"end": 869, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Existing unsupervised PCFG inducers exploit naturally-occurring cognitive and developmental constraints, such as punctuation as a proxy to prosody (Seginer, 2007) , human memory constraints (Noji and Johnson, 2016; Shain et al., 2016; Jin et al., 2018b) , and morphology , to regulate the posterior of grammars which are known to be extremely multimodal (Johnson et al., 2007) . Models in Shi et al. (2019) also match embeddings of word spans to encoded images to induce unlabeled hierarchical structures with a concreteness measure . Additionally, visual information is observed to provide grounding for words describing concrete objects, helping to identify and categorize such words. This hypothesis is termed 'noun bias' in language acquisition (Gentner, 1982 (Gentner, , 2006 Waxman et al., 2013) , through which the early acquisition of nouns is attributed to nouns referring to observable objects. However, the models in Shi et al. (2019) also rely on language-specific branching bias to outperform other text-based models, and images are encoded by pretrained object classifiers trained with large datasets, with no ablation to show the benefit of visual information for unsupervised parsing. Visual information has also been used for joint training of prepositional phrase attachment models (Christie et al., 2016) suggesting that visual information may contain semantic information to help disambiguate prepositional phrase attachment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 147, |
|
"end": 162, |
|
"text": "(Seginer, 2007)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 190, |
|
"end": 214, |
|
"text": "(Noji and Johnson, 2016;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 215, |
|
"end": 234, |
|
"text": "Shain et al., 2016;", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 235, |
|
"end": 253, |
|
"text": "Jin et al., 2018b)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 354, |
|
"end": 376, |
|
"text": "(Johnson et al., 2007)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 389, |
|
"end": 406, |
|
"text": "Shi et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 749, |
|
"end": 763, |
|
"text": "(Gentner, 1982", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 764, |
|
"end": 780, |
|
"text": "(Gentner, , 2006", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 801, |
|
"text": "Waxman et al., 2013)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 928, |
|
"end": 945, |
|
"text": "Shi et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1300, |
|
"end": 1323, |
|
"text": "(Christie et al., 2016)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The full grounded grammar induction model used in these experiments, ImagePCFG, consists of two parts: a word-based PCFG induction model and a vision model, as shown in Figure 2 . The two parts have their own objective functions. The PCFG induction model, called NoImagePCFG when trained by itself, can be trained by maximizing the marginal probability P(\u03c3) of sentences \u03c3. This part functions similarly to previously proposed PCFG induction models (Jin et al., 2018a; Kim et al., 2019a) where a PCFG is induced through maximization of the data likelihood of the training corpus marginalized over latent syntactic trees.", |
|
"cite_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 468, |
|
"text": "(Jin et al., 2018a;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 469, |
|
"end": 487, |
|
"text": "Kim et al., 2019a)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 177, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grounded Grammar Induction Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The image encoder-decoder network in the vision model is trained to reconstruct the original image after passing through an information bottleneck. The latent encoding from the image encoder may be seen as a compressed representation of vi-sual information in the image, some of which is semantic, relating to objects in the image. We hypothesize that semantic information can be helpful in syntax induction, potentially through helping three tasks mentioned above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grounded Grammar Induction Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In contrast to the full model where the encoded visual representations are trained from scratch, the ImagePrePCFG model uses image embeddings encoded by pretrained image classifiers with parameters fixed during induction training. We hypothesize that pretrained image classifiers may provide useful information about objects in an image, but for grammar induction it is better to allow the inducer to decide which kind of information may help induction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grounded Grammar Induction Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The two parts are connected through a syntacticvisual loss function connecting a syntactic sentence embedding projected from word embeddings and an image embedding. We hypothesize that visual information in the encoded images may help constrain the search space of syntactic embeddings of words with supporting evidence of lexical attributes such as concreteness for nouns or correlating adjectives with properties of objects. 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grounded Grammar Induction Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The PCFG induction model is factored into three submodels: a nonterminal expansion model, a terminal expansion model and a split model, which distinguishes terminal and nonterminal expansions. The binary-branching non-terminal expansion rule probabilities, 4 and unary-branching terminal expansion rule probabilities in a factored Chomskynormal-form PCFG can be parameterized with these three submodels. Given a tree as a set \u03c4 of nodes \u03b7 undergoing non-terminal expansions c \u03b7 \u2192 c \u03b71 c \u03b72 (where \u03b7 \u2208 {1, 2} * is a Gorn address specifying a path of left or right branches from the root), and a set \u03c4 of nodes \u03b7 undergoing terminal expansions c \u03b7 \u2192 w \u03b7 (where w \u03b7 is the word at node \u03b7) in a parse of sentence \u03c3, the marginal a giraffe is eating leaves < l a t e x i t s h a 1 _ b a s e 6 4 = \" / t + R k o Y i T P P 2 q C O / l 4 7 1 ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P(\u03c3) = \u03c4,\u03c4 \u03b7\u2208\u03c4 P(c \u03b7 \u2192 c \u03b71 c \u03b72 ) \u2022 \u03b7\u2208\u03c4 P(c \u03b7 \u2192 w \u03b7 ) (1)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
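{

"text": "The marginal in Equation 1 can be computed without enumerating trees using the inside algorithm, as noted in Section 5. Below is a minimal PyTorch sketch of that dynamic program for a single sentence; the function and variable names are illustrative rather than taken from the published implementation, the computation is shown in probability space rather than the log space normally used for numerical stability, and the rule tensors are assumed to already include the Term/non-Term split factors defined below.\n\nimport torch\n\ndef inside_marginal(word_ids, binary_probs, lex_probs, root=0):\n    # word_ids: (n,) LongTensor of vocabulary indices for one sentence\n    # binary_probs: (C, C, C) tensor with binary_probs[a, b, c] = P(a -> b c)\n    # lex_probs: (C, V) tensor with lex_probs[a, w] = P(a -> w)\n    n = word_ids.shape[0]\n    C = lex_probs.shape[0]\n    # chart[i][j] holds the inside probabilities of all categories over words i..j-1\n    chart = [[None] * (n + 1) for _ in range(n)]\n    for i in range(n):\n        chart[i][i + 1] = lex_probs[:, word_ids[i]]\n    for width in range(2, n + 1):\n        for i in range(n - width + 1):\n            j = i + width\n            span = torch.zeros(C)\n            for k in range(i + 1, j):\n                # combine every left child b with every right child c (Equation 1)\n                pair = torch.outer(chart[i][k], chart[k][j])\n                span = span + torch.einsum('abc,bc->a', binary_probs, pair)\n            chart[i][j] = span\n    # marginal probability of the sentence under the start category\n    return chart[0][n][root]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Induction model",

"sec_num": "3.1"

},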
|
{ |
|
"text": "We first define a set of Bernoulli distributions that distribute probability mass between terminal and nonterminal rules, so that the lexical expansion model can be tied to the image model (see Section 4.2):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P(Term | c \u03b7 ) = softmax {0,1} (ReLU(W spl x B,c \u03b7 + b spl )), (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where c \u03b7 is a non-terminal category, W spl \u2208 R 2\u00d7h and b spl \u2208 R 2 are model parameters for hidden vectors of size h, and x B,c \u03b7 \u2208 R h the result of a multilayered residual network (Kim et al., 2019a) . The residual network consists of B architecturally identical residual blocks. For an input vector x b\u22121,c each residual block b performs the following computation:", |
|
"cite_spans": [ |
|
{ |
|
"start": 183, |
|
"end": 202, |
|
"text": "(Kim et al., 2019a)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x b,c = ReLU(W b ReLU(W b x b\u22121,c + b b ) + b b ) + x b\u22121,c ,", |
|
"eq_num": "(3)" |
|
} |
|
], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "with base case:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "x 0,c = ReLU(W 0 E \u03b4 c + b 0 )", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where \u03b4 c is a Kronecker delta function -a vector with value one at index c and zeros everywhere else -and E \u2208 R d\u00d7C is an embedding matrix for each nonterminal category c with embedding size d, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "W 0 \u2208 R h\u00d7d , W b , W b \u2208 R h\u00d7h and b 0 , b b , b b \u2208 R h", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "are model parameters with latent representations of size h. B is set to 2 in all models following Kim et al. (2019a) . Binary-branching non-terminal expansion rule probabilities for each non-terminal category c \u03b7 and left and right children c \u03b71 c \u03b72 are defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 116, |
|
"text": "Kim et al. (2019a)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(c \u03b7 \u2192 c \u03b71 c \u03b72 ) = P(Term=0 | c \u03b7 ) \u2022 softmax c \u03b71 ,c \u03b72 (W nont E \u03b4 c \u03b7 + b nont ),", |
|
"eq_num": "(5)" |
|
} |
|
], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where W nont \u2208 R C 2 \u00d7d and b nont \u2208 R C 2 are parameters of the model. The lexical unary-expansion rule probabilities for a preterminal category c \u03b7 and a word w \u03b7 at node \u03b7 are defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "P(c \u03b7 \u2192 w \u03b7 ) = P(Term=1 | c \u03b7 ) \u2022 exp(n c \u03b7 ,w \u03b7 ) w exp(n c \u03b7 ,w ) (6) n c,w = ReLU(w lex n B,c,w + b lex )", |
|
"eq_num": "(7)" |
|
} |
|
], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where w is the generated word type, and w lex \u2208 R h and b lex \u2208 R are model parameters. Similarly,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "n b,c,w = ReLU(W b ReLU(W b n b\u22121,c,w + b b ) + b b ) + n b\u22121,c,w ,", |
|
"eq_num": "(8)" |
|
} |
|
], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "with base case:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "n 0,c,w = ReLU(W 0 E \u03b4 c L \u03b4 w ) + b 0 )", |
|
"eq_num": "(9)" |
|
} |
|
], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "where W 0 \u2208 R h\u00d72d , W b , W b \u2208 R h\u00d7h and b 0 , b b , b b \u2208 R h are model parameters for latent representations of size h. L is a matrix of syntactic word embeddings for all words in vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Induction model", |
|
"sec_num": "3.1" |
|
}, |
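{

"text": "To make the preceding parameterization concrete, the following is a compact PyTorch sketch of Equations 2-9 with C nonterminal categories, vocabulary size V, and embedding and hidden sizes d = h = 64 as in the appendix. The class and attribute names are illustrative, and details such as initialization are simplifications rather than the published implementation.\n\nimport torch\nimport torch.nn as nn\n\nclass ResidualMLP(nn.Module):\n    # B architecturally identical residual blocks over a size-h vector (Equations 3-4 and 8-9)\n    def __init__(self, in_dim, h, num_blocks=2):\n        super().__init__()\n        self.inp = nn.Linear(in_dim, h)\n        self.blocks = nn.ModuleList(\n            [nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h), nn.ReLU())\n             for _ in range(num_blocks)])\n\n    def forward(self, x):\n        x = torch.relu(self.inp(x))\n        for block in self.blocks:\n            x = block(x) + x\n        return x\n\nclass PCFGParams(nn.Module):\n    def __init__(self, C, V, d=64, h=64):\n        super().__init__()\n        self.E = nn.Parameter(torch.randn(C, d))  # nonterminal category embeddings\n        self.L = nn.Parameter(torch.randn(V, d))  # syntactic word embeddings\n        self.split_net = ResidualMLP(d, h)\n        self.split_out = nn.Linear(h, 2)          # W_spl, b_spl in Equation 2\n        self.nont_out = nn.Linear(d, C * C)       # W_nont, b_nont in Equation 5\n        self.lex_net = ResidualMLP(2 * d, h)\n        self.lex_out = nn.Linear(h, 1)            # w_lex, b_lex in Equation 7\n\n    def forward(self):\n        C, V = self.E.shape[0], self.L.shape[0]\n        # P(Term | c): Bernoulli split between terminal and nonterminal expansions (Equation 2)\n        term = torch.softmax(torch.relu(self.split_out(self.split_net(self.E))), dim=-1)\n        # P(c -> c1 c2): nonterminal split probability times a softmax over child pairs (Equation 5)\n        nont = term[:, 0:1] * torch.softmax(self.nont_out(self.E), dim=-1)\n        # score every (category, word) pair through the lexical residual network (Equations 6-9)\n        pairs = torch.cat([self.E[:, None, :].expand(C, V, -1),\n                           self.L[None, :, :].expand(C, V, -1)], dim=-1)\n        scores = torch.relu(self.lex_out(self.lex_net(pairs))).squeeze(-1)\n        lex = term[:, 1:2] * torch.softmax(scores, dim=-1)\n        return nont.view(C, C, C), lex            # P(c -> c1 c2) and P(c -> w)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Induction model",

"sec_num": "3.1"

},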
|
{ |
|
"text": "The vision model consists of an image encoderdecoder network and a syntactic-visual projector. The image encoder-decoder network encodes an image into an image embedding and then decodes that back into the original image. This reconstruction constrains the information in the image embedding to be closely representative of the original image. The syntactic-visual projector projects word embeddings used in the calculation of lexical expansion probabilities into the space of image embeddings, building a connection between the space of syntactic information and the space of visual information.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Vision model", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The image encoder employs a ResNet18 architecture (He et al., 2016) which encodes an image with 3 channels into a single vector. The encoder consists of four blocks of residual convolutional networks. The image decoder decodes an image from a visual vector generated by the image encoder. The image decoder used in the joint model is the image generator from DCGAN (Radford et al., 2016) , where a series of transposed convolutions and batch normalizations attempts to recover an image from an image embedding. 5", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 67, |
|
"text": "(He et al., 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 365, |
|
"end": 387, |
|
"text": "(Radford et al., 2016)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The image encoder-decoder network", |
|
"sec_num": "4.1" |
|
}, |
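{

"text": "A minimal PyTorch sketch of this encoder-decoder follows, using the torchvision ResNet18 trunk (trained from scratch, not pretrained) as the encoder and a small DCGAN-style stack of transposed convolutions and batch normalizations as the decoder. The 64-dimensional image embedding and the 3 x 64 x 64 image size follow the appendix; the decoder channel widths are illustrative assumptions.\n\nimport torch\nimport torch.nn as nn\nfrom torchvision.models import resnet18\n\nclass ImageAutoencoder(nn.Module):\n    def __init__(self, embed_dim=64):\n        super().__init__()\n        trunk = resnet18()                           # randomly initialized ResNet18 trunk\n        trunk.fc = nn.Linear(trunk.fc.in_features, embed_dim)\n        self.encoder = trunk                         # image -> image embedding e_m\n        self.decoder = nn.Sequential(                # DCGAN-style generator: e_m -> 3 x 64 x 64\n            nn.ConvTranspose2d(embed_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),\n            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),\n            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),\n            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),\n            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())\n\n    def forward(self, images):\n        e_m = self.encoder(images)                   # (batch, embed_dim)\n        recon = self.decoder(e_m[:, :, None, None])  # (batch, 3, 64, 64)\n        return e_m, recon",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The image encoder-decoder network",

"sec_num": "4.1"

},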
|
{ |
|
"text": "The projector model is a CNN-based neural network which takes a concatenated sentence embedding matrix M \u03c3 \u2208 R |\u03c3|\u00d7d as input, where embeddings in M \u03c3 are taken from L, and returns the syntactic-visual embedding e \u03c3 . The jth full lengthwise convolutional kernel is defined as a matrix K j \u2208 R u j \u00d7k j d which slides across the sentence matrix M to produce a feature map, where u j is the number of channels in the kernel, k j is the width of the kernel, and d is the height of the kernel which is equal to the size of the syntactic word embeddings. Because the kernel is as high as the embeddings, it produces one vector of length u j for each window. The full feature map F j \u2208 R u j \u00d7H j , where H j is total number of valid submatrices for the kernel, is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The syntactic-visual projector", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "F j = h (K j vec(M \u03c3 [h..k j +h\u22121, * ] ) + b j ) \u03b4 h . (10)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The syntactic-visual projector", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Finally, an average pooling layer and a linear transform are applied to feature maps from different kernels:f", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The syntactic-visual projector", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= [mean(F 1 ) . . . mean(F j )] ,", |
|
"eq_num": "(11)" |
|
} |
|
], |
|
"section": "The syntactic-visual projector", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "e \u03c3 = tanh(W pool ReLU(f) + b pool ).", |
|
"eq_num": "(12)" |
|
} |
|
], |
|
"section": "The syntactic-visual projector", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "All Ks, bs and Ws here are parameters of the projector.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The syntactic-visual projector", |
|
"sec_num": "4.2" |
|
}, |
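{

"text": "A PyTorch sketch of the projector in Equations 10-12 follows, with kernel widths 1 through 5 and 128 output channels as in the appendix. A Conv1d over word positions plays the role of the full-height lengthwise kernels, sentences are assumed to be at least as long as the widest kernel, and the module and parameter names are illustrative.\n\nimport torch\nimport torch.nn as nn\n\nclass SyntacticVisualProjector(nn.Module):\n    def __init__(self, d=64, embed_dim=64, channels=128, widths=(1, 2, 3, 4, 5)):\n        super().__init__()\n        # each kernel spans the full embedding height d, so it reduces to a Conv1d over positions (Equation 10)\n        self.convs = nn.ModuleList(\n            [nn.Conv1d(d, channels, kernel_size=w) for w in widths])\n        self.out = nn.Linear(channels * len(widths), embed_dim)   # W_pool, b_pool in Equation 12\n\n    def forward(self, M_sigma):\n        # M_sigma: (batch, sentence_length, d) matrix of syntactic word embeddings from L\n        x = M_sigma.transpose(1, 2)                           # (batch, d, length)\n        feature_maps = [conv(x) for conv in self.convs]       # each (batch, channels, H_j)\n        # average pooling over positions, concatenated across kernels (Equation 11)\n        f = torch.cat([fm.mean(dim=-1) for fm in feature_maps], dim=-1)\n        return torch.tanh(self.out(torch.relu(f)))            # e_sigma (Equation 12)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "The syntactic-visual projector",

"sec_num": "4.2"

},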
|
{ |
|
"text": "There are three different kinds of objectives used in the optimization of the full grounded induction model. The first loss is the marginal likelihood loss for the PCFG induction model described in Equation 1, which can be calculated with the Inside algorithm. The second loss is the syntactic-visual loss. Given the encoded image embedding e m and the projected syntactic-visual embedding e \u03c3 of a sentence \u03c3, the syntactic-visual loss is the mean squared error of these two embeddings:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "L(e m , e \u03c3 ) = (e m \u2212 e \u03c3 ) (e m \u2212 e \u03c3 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The third loss is the reconstruction loss of the image. Given the original image represented as a vector i m and the reconstructed image\u00ee m , the reconstruction objective is the mean squared error of the corresponding pixel values of the two images:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(m) = (i m \u2212\u00ee m ) (i m \u2212\u00ee m ).", |
|
"eq_num": "(14)" |
|
} |
|
], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Models with different sets of input optimize the three losses differently for clean ablation. NoIm-agePCFG, which learns from text only, optimizes the negative marginal likelihood loss (the negative of Equation 1) using gradient descent. The model with pretrained image encoders, ImagePrePCFG, optimizes the negative marginal likelihood and the syntactic-visual loss (Equation 13) simultaneously. The full grounded grammar induction model Im-agePCFG learns from text and images jointly by minimizing all three objectives: negative marginal likelihood, syntactic-visual loss and image reconstruction loss Equation 14:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "L(\u03c3, m) = \u2212P(\u03c3) + L(e m , e \u03c3 ) + L(m).", |
|
"eq_num": "(15)" |
|
} |
|
], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
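{

"text": "A sketch of one joint ImagePCFG update under Equation 15 follows, reusing the inside_marginal, PCFGParams, ImageAutoencoder and SyntacticVisualProjector sketches above. Batching, log-space computation of the marginal, and the Adam settings from the appendix are omitted, and all names are illustrative rather than the published implementation.\n\nimport torch\n\ndef imagepcfg_step(word_ids, image, pcfg, autoencoder, projector, optimizer):\n    # word_ids: (n,) LongTensor for one caption; image: (1, 3, 64, 64) tensor\n    optimizer.zero_grad()\n    binary_probs, lex_probs = pcfg()\n    # negative marginal likelihood of the caption (Equation 1, computed with the inside algorithm)\n    neg_marginal = -inside_marginal(word_ids, binary_probs, lex_probs)\n    # syntactic-visual loss between image and sentence embeddings (Equation 13)\n    e_m, recon = autoencoder(image)\n    e_sigma = projector(pcfg.L[word_ids][None, :, :])\n    sv_loss = ((e_m - e_sigma) ** 2).sum()\n    # image reconstruction loss (Equation 14)\n    recon_loss = ((image - recon) ** 2).sum()\n    loss = neg_marginal + sv_loss + recon_loss    # Equation 15\n    loss.backward()\n    optimizer.step()\n    return loss.item()\n\nUnder this setup, NoImagePCFG keeps only the first term, and ImagePrePCFG replaces e_m with a fixed pretrained image embedding and drops the reconstruction term.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Optimization",

"sec_num": "5"

},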
|
{ |
|
"text": "6 Experiment methods", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Experiments described in this paper use the MSCOCO caption data set (Lin et al., 2015) and the Multi30k dataset (Elliott et al., 2016) , which contains pairs of images and descriptions of images written by human annotators. Captions in the MSCOCO data set are in English, whereas captions in the Multi30k dataset are in English, German and French. Captions are automatically parsed (Kitaev and Klein, 2018) to generate a version of the reference set with constituency trees. 6 In addition to these datasets with captions generated by human annotators, we automatically translate the English captions into Chinese, Polish and Korean using Google Translate, 7 and parse the resulting translations into constituency trees, which are then used in experiments to probe the interactions between visual information and grammar induction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 86, |
|
"text": "(Lin et al., 2015)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 112, |
|
"end": 134, |
|
"text": "(Elliott et al., 2016)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 382, |
|
"end": 406, |
|
"text": "(Kitaev and Klein, 2018)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 475, |
|
"end": 476, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Results from models proposed in this paper -NoImagePCFG, ImagePrePCFG and ImagePCFG -are compared with published results from Shi et al. 2019, which include PRPN (Shen et al., 2018) , ON-LSTM (Shen et al., 2019) as well as the grounded VG-NSL models which uses either head final bias (VG-NSL+H) or head final bias and Fasttext embeddings (VG-NSL+H+F) as inductive biases from external sources. All of these models only induce unlabeled structures and have been evaluated with unlabeled F1 scores. We additionally report the labeled evaluation score Recall-Homogeneity (Rosenberg and Hirschberg, 2007; Jin and Schuler, 2020) for better comparison between the proposed models. All evaluation is done on Viterbi parse trees of the test set from 5 different runs. Details about hyper-parameters and results on development data sets can be found in the appendix. However, importantly, the tuned hyperparameters for the grammar induction model are the same across the three proposed models, which facilitates direct comparisons among these models to determine the effect of visual information on induction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 181, |
|
"text": "(Shen et al., 2018)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 192, |
|
"end": 211, |
|
"text": "(Shen et al., 2019)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 568, |
|
"end": 600, |
|
"text": "(Rosenberg and Hirschberg, 2007;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 601, |
|
"end": 623, |
|
"text": "Jin and Schuler, 2020)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Optimization", |
|
"sec_num": "5" |
|
}, |
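{

"text": "For concreteness, the following is a small sketch of corpus-level unlabeled bracketing F1 over span sets extracted from gold and Viterbi trees. It is a standard formulation shown for illustration, not the exact evaluation script of the cited work, and whether F1 is micro-averaged over the corpus or averaged per sentence differs between papers.\n\ndef unlabeled_f1(gold_span_sets, pred_span_sets):\n    # each element is a set of (start, end) spans for one sentence, excluding single words\n    matched = gold_total = pred_total = 0\n    for gold, pred in zip(gold_span_sets, pred_span_sets):\n        matched += len(gold & pred)\n        gold_total += len(gold)\n        pred_total += len(pred)\n    precision = matched / pred_total if pred_total else 0.0\n    recall = matched / gold_total if gold_total else 0.0\n    if precision + recall == 0.0:\n        return 0.0\n    return 2 * precision * recall / (precision + recall)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Optimization",

"sec_num": "5"

},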
|
{ |
|
"text": "Both unlabeled and labeled evaluation results are shown in Table 1 with left-and right-branching baselines. First, trees induced by the PCFG induction models are more accurate than trees induced with all other models, showing that the family of PCFG induction models is better at capturing syntactic regularities and provides a much stronger baseline for grammar induction. Second, using the NoImagePCFG model as a baseline, results from both the ImagePCFG model, where raw images are used as input, and the ImagePrePCFG model, where images encoded by pretrained image classifiers are used as input, do not show strong indication of benefits of visual information in induction. The baseline NoImagePCFG outperforms other models by significant margins on all languages in unlabeled evaluation. Compared to seemingly large gains between text-based models like PRPN and ON-LSTM 8 and the grounded models like VG-NSL+H on French and German observed by Shi et al. (2019) , the only positive gain between NoIm-agePCFG and ImagePCFG shown in Table 1 is the labeled evaluation on French where ImagePCFG outperforms NoImagePCFG by a small margin. Because the only difference between NoImagePCFG and ImagePCFG models is whether the visual information influences the syntactic word embeddings, the results indicate that on these languages, visual information does not seem to help induction. The gain seen in previous results may therefore be from external inductive biases. Finally, the Im-agePrePCFG model performs at slightly lower accuracies than the ImagePCFG model consistently across different languages, datasets and evaluation metrics, showing that the information needed in grammar induction from images is not the same as information needed for image classification, and such information can be extracted from images without annotated image classification data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 948, |
|
"end": 965, |
|
"text": "Shi et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 59, |
|
"end": 66, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1035, |
|
"end": 1042, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Standard set: no replication of effect for visual information", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "One potential advantage of using visual information in induction is to ground nouns and noun phrases. For example, if images like in Figure 1 are consistently presented to models with sentences describing spaghetti, the models may learn the categorize words and phrases which could be linked with objects in images as nominal units and then bootstrap other lexical categories. However, in the test languages above, a narrow set of very high fre- quency words such as determiners provide strong identifying information for nouns and noun phrases, which may greatly diminish the advantage contributed by visual information. In such cases, visual information may even be harmful, as models may attend to other information in images which is irrelevant to induction. Korean, Polish and Chinese are chosen as representatives of languages with no definite articles, and in which statistical information provided by high frequency words is less reliable because there are more such word types. Table 2 shows the performance scores of the three proposed systems on these languages. Comparing to results in Table 1, the models with visual information in the input significantly outperform the baseline model, NoImagePCFG, on a majority of the additional test datasets. Figure 3 shows the correlation between the RH difference between the ImagePCFG model and the NoImagePCFG model on each language in an image dataset, and the distribution of high frequency words in that language, defined as the number of word types needed to account for 10% of the number of word tokens in the Universal Dependency (Nivre et al., 2016) corpus of a language. 9 The figure shows that the largest gain brought by visual information in induction is on Korean, where the number of high frequency word types is also highest. Results on Chinese and Polish also show a benefit for visual information, although the gain is much smaller and less consistent. It also shows that when there is a trend of positive correlation between the number of high frequency words and the gain brought by visual information, factors other than high frequency words are at play as well in determining the final induction outcome for each dataset in each language in the visually grounded setup, which are left for investigation in future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1591, |
|
"end": 1611, |
|
"text": "(Nivre et al., 2016)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 1634, |
|
"end": 1635, |
|
"text": "9", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 133, |
|
"end": 141, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 987, |
|
"end": 994, |
|
"text": "Table 2", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1260, |
|
"end": 1268, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Languages with wider distribution of high-frequency word types: positive effect", |
|
"sec_num": "6.2" |
|
}, |
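{

"text": "For reference, a small sketch of the measure used in Figure 3: the number of word types needed to account for 10% of the word tokens in a corpus. Tokenization of the Universal Dependencies corpus is assumed to be handled elsewhere, and the function name is illustrative.\n\nfrom collections import Counter\n\ndef high_frequency_type_count(tokens, coverage=0.10):\n    # smallest number of most-frequent word types whose token counts reach the given coverage\n    counts = Counter(tokens)\n    threshold = coverage * len(tokens)\n    covered = 0\n    for num_types, (_, count) in enumerate(counts.most_common(), start=1):\n        covered += count\n        if covered >= threshold:\n            return num_types\n    return len(counts)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Languages with wider distribution of high-frequency word types: positive effect",

"sec_num": "6.2"

},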
|
{ |
|
"text": "We hypothesize three specific ways that visual information may help grammar induction. First, a strong correlation between words and objects in images can help identification and categorization of nouns and noun phrases, especially on languages where nouns and noun phrases are not readily identifiable by neighboring high frequency words. Second, visual information may provide bottom-up information for unknown word embeddings. Languages where neighboring words can reliably predict the grammatical category of an unknown word may build robust representations of unknown word embeddings, but the construction of the UNK embedding may also benefit from bottom-up information from images, especially when sentential context is not enough to build informative UNK embeddings. Finally, semantic information inside images may be helpful in solving syntactic ambiguities like prepositional phrase attachment in languages like English. Results from experiments described below with the ImagePCFG and NoIm-agePCFG models show evidence of all three ways.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Analysis of advantages of visual information", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "The 'Noun bias' hypothesis (Gentner, 1982) postulates that visual information in the induction process may impact how words are categorized grammatically, and nouns may receive an advantage because they correspond to objects in images. However, objects in images are often described with phrases, not single words. For example, captions like a red car is parked on the street, are common in both caption datasets, where the objects in the image may associate more strongly with modifier words like red than the head noun car.", |
|
"cite_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 42, |
|
"text": "(Gentner, 1982)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Grounding of nouns and noun phrases", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "Evaluations are carried out on the parsed sentences of all languages from two caption datasets using a part-of-speech homogeneity metric (Rosenberg and Hirschberg, 2007) for measuring the partof-speech accuracy, and an unlabeled NP recall score for measuring how many noun phrases in gold annotation are also found in the induced trees. Results in Figure 4 first show that the POS homogeneity scores from different models on the same induction dataset are extremely close to each other. Given that nouns are one of the categories with the most numerous tokens, the almost identical performance of POS homogeneity across different models indicates that the unsupervised clustering accuracy for nouns across different models is also very close, in contrast to substantial RH score differences on English and Korean.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 169, |
|
"text": "(Rosenberg and Hirschberg, 2007)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 348, |
|
"end": 356, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grounding of nouns and noun phrases", |
|
"sec_num": "7.1" |
|
}, |
|
{ |
|
"text": "However, NP recall scores show a pattern of performance ranking that resembles the ranking observed in Tables 1 and 2 . For all datasets except for the Polish Multi30k dataset, when the RH score of ImagePCFG is higher than NoImagePCFG, the NP recall score for the ImagePCFG model is also higher. Significance testing with permutation sampling shows that all performance differences are significant (p < 0.01). 10 High accuracy on noun phrases is crucial to high accuracy of other constituents such as prepositional phrases and verb phrases, which usually contain noun phrases, and eventually leads to high overall accuracy. This result suggests that the benefit contributed by visual information works at phrasal levels, most likely E n g l i s h K o r e a n P o l i s h C h i n e s e G e r m a n E n g l i s h F r e n c h K o r e a n grounding phrases, not words, with objects in images.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 103, |
|
"end": 117, |
|
"text": "Tables 1 and 2", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grounding of nouns and noun phrases", |
|
"sec_num": "7.1" |
|
}, |
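{

"text": "A sketch of the unlabeled NP recall used in this comparison, assuming gold trees are available as labeled spans and induced trees as unlabeled span sets; the restriction to multi-word spans and the span encoding are illustrative choices rather than the exact evaluation setup.\n\ndef np_recall(gold_labeled_spans, induced_span_sets):\n    # gold_labeled_spans: per sentence, a list of (label, start, end) from the gold parse\n    # induced_span_sets: per sentence, a set of (start, end) from the induced tree\n    found = total = 0\n    for gold, induced in zip(gold_labeled_spans, induced_span_sets):\n        for label, start, end in gold:\n            if label == 'NP' and end - start > 1:\n                total += 1\n                if (start, end) in induced:\n                    found += 1\n    return found / total if total else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Grounding of nouns and noun phrases",

"sec_num": "7.1"

},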
|
{ |
|
"text": "The informativeness of unknown word embeddings is tested among the induction models across different languages. An UNK test set is created by randomly replacing one word in one sentence with an UNK symbol if the sentence has no unknown words present. Table 3 shows the labeled evaluation results on the multilingual datasets. 11 First, performance on the UNK test sets on all languages is lower than on the normal test sets, showing that replacing random words with UNK symbols does impact performance. The performance ranking of the models on a majority of the languages is consistent with the ranking on the normal test set. The ranking of the models on one dataset, the Chinese Multi30k, is reversed on the UNK test set, where the ImagePCFG models show significantly higher performance than the NoImagePCFG models (Chinese: p < 0.01, permutation test on unlabeled F1). This result indicates that the ImagePCFG model in which visual information is supplied during train-ing may have built more informative embeddings for the unknown word symbols, helping the model to outperform the model without visual information on a majority of datasets where UNK symbols are frequent.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 251, |
|
"end": 258, |
|
"text": "Table 3", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Informativeness of the UNK embedding", |
|
"sec_num": "7.2" |
|
}, |
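{

"text": "A sketch of the UNK test set construction described above: for every sentence that contains no unknown words, one word is replaced with the UNK symbol. The choice of a uniformly random position is an assumption consistent with the description, and the function name is illustrative.\n\nimport random\n\ndef make_unk_test_set(sentences, vocabulary, unk='UNK', seed=0):\n    # sentences: list of lists of word strings; vocabulary: set of in-vocabulary words\n    rng = random.Random(seed)\n    unk_sentences = []\n    for sentence in sentences:\n        words = list(sentence)\n        if all(w in vocabulary for w in words):\n            words[rng.randrange(len(words))] = unk\n        unk_sentences.append(words)\n    return unk_sentences",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Informativeness of the UNK embedding",

"sec_num": "7.2"

},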
|
{ |
|
"text": "Finally, visual information may provide semantic information to resolve structural ambiguities. Word quintuples such as (a) hotel caught fire during (a) storm were extracted from English Wikipedia and the attachment locations were automatically labeled either as 'n' for low attachment, where the prepositional phrase adjoins the direct object, or 'v' for high attachment, where the prepositional phrase adjoins the main verb (Nakashole and Mitchell, 2015) . 168 test items are selected by human annotators for evaluation, within which 119 are sentences with high attached PPs and 49 are with low attached PPs. For evaluation of PP attachment with induced trees, one test item is labeled correct when the induced tree puts the main verb and the direct object into one constituent and it is labeled as 'v'. For example, if the induced tree has caught fire as a constituent, it counts as correct for the above example with high attachment. Low attachment trees must have a constituent with the direct object and the prepositional phrase. For example, for the sentence (a) guide gives talks about animals, the induced tree must have talks about animals. Average accuracies for all sentences as well as for sentences with high attachment or low attachment with induced grammars are shown in Figure 5 . Results show that the models trained with visual information on both datasets show significantly higher performance on the PP attachment task in most of the categories, except for the low attachment category with Multi30k models where the performance from both models is not significantly different. This is in contrast to the higher performance of the NoImagePCFG models on unlabeled F1 and labeled RH than that of the ImagePCFG models on English from both caption datasets. Results indicate that induction models use visual information for weighting competing latent syntactic trees for a sentence, which is consistent with the third hypothesized advantage of visual information for induction. This also indicates that the reason that the overall parsing performance of Im-agePCFG on English is lower than NoImagePCFG lies within other syntactic structures, which is left for future work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 426, |
|
"end": 456, |
|
"text": "(Nakashole and Mitchell, 2015)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1287, |
|
"end": 1295, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Prepositional phrase attachment", |
|
"sec_num": "7.3" |
|
}, |
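{

"text": "A sketch of the scoring rule described above, given the token positions of the main verb, the direct object and the end of the prepositional phrase in each test item together with the induced span set. Treating the criterion as an exact span match is one reading of the description and is an assumption, as are the names.\n\ndef pp_attachment_correct(label, verb_idx, obj_idx, pp_end_idx, induced_spans):\n    # induced_spans: set of (start, end) spans from the induced tree, end exclusive\n    if label == 'v':\n        # high attachment: the main verb and the direct object form one constituent\n        return (verb_idx, obj_idx + 1) in induced_spans\n    # low attachment: the direct object and the prepositional phrase form one constituent\n    return (obj_idx, pp_end_idx + 1) in induced_spans\n\ndef pp_attachment_accuracy(items, induced_span_sets):\n    # items: list of (label, verb_idx, obj_idx, pp_end_idx) tuples\n    correct = sum(pp_attachment_correct(*item, spans)\n                  for item, spans in zip(items, induced_span_sets))\n    return correct / len(items) if items else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Prepositional phrase attachment",

"sec_num": "7.3"

},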
|
{ |
|
"text": "This work proposed several novel neural networkbased models of grammar induction which take into account visual information in induction. These models achieve state-of-the-art results on multilingual induction datasets without any help from linguistic knowledge or pretrained image encoders. Further analyses isolated three hypothesized benefits of visual information: it helps categorize noun phrases, represent unknown words and resolve syntactic ambiguities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "ResNet18 architecture, 14 and the decoder employs the decoder architecture in the DCGAN model. 15 A batch size of 2 is used in training. Adam is used as the optimizer, with the initial learning rate at 5 \u00d7 10 \u22124 . The loss on the validation set is checked every 20000 batches, and training is stopped when the validation loss has not been lowered for 10 checkpoints. The model with the lowest validation loss is used as the candidate model for test evaluation, where best parses are generated with the Viterbi algorithm on an inside chart. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "The system implementation and translated datasets used in this work can be found at https://github.com/ lifengjin/imagepcfg.(a) friend as companion (b) friend as condiment", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://github.com/ajdagokcen/ madlyambiguous-repo", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The syntactic nature of word embeddings indicates that any lexical-specific semantic information in these embeddings may be abstract, which is generally not sufficient for visual reconstruction. Experiments with syntactic embeddings show that it is difficult to extract semantic information from them and present visually.4 These include the expansion rules generating the top node in the tree.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Details of these models can be found in the cited work and the appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The multilingual parsing accuracy for all languages used in this work has been validated inFried et al. (2019) and verified inShi et al. (2019).7 https://translate.google.com/.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PCFG induction models where a grammar is induced generally perform better in parsing evaluation than sequence models where only syntactic structures are induced(Kim et al., 2019a;.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Korean has 41, Chinese and Polish have 5, German has 4, English has 3 and French has 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Significance testing is not done on POS homogeneity due to the possibility that the same induced POS label may mean different things in different induced grammars.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The unlabeled evaluation results can be found in the appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The data set can be found at https://github.com/ ExplorerFreda/VGNSL along with image embeddings encoded by pretrained image encoders.13 The data set can be found at https://github.com/ multi30k/dataset along with image embeddings encoded by pretrained image encoders.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://pytorch.org/docs/stable/_modules/ torchvision/models/resnet.html#resnet1815 https://github.com/pytorch/examples/blob/ master/dcgan/main.py", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The authors would like to thank the anonymous reviewers for their helpful comments. Computations for this project were partly run on the Ohio Supercomputer Center (1987) . This work was supported by the Presidential Fellowship from the Ohio State University. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. This work was also supported by the National Science Foundation grant #1816891. All views expressed are those of the authors and do not necessarily reflect the views of the National Science Foundation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 156, |
|
"end": 169, |
|
"text": "Center (1987)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The MSCOCO caption dataset used in Shi et al. (2019) contains 413,915 sentences in the training set, and 5000 sentences in the development and test sets respectively. 12 Every image is accompanied by 5 captions,and there are 82,783 images in total in the training set. The image embeddings of size 2048 used in Shi et al. (2019) are encoded by an image classifier with ResNet128 architecture trained with on the ImageNet classification task (Deng et al., 2009) .The Multi30k caption dataset contains 29,000 sentences in the training set, and 1,014 sentences in the development and 1,000 in the test set in four different languages, all of which except Czech are used in this work thanks to the availability of high accuracy constituency parsers in these languages. 13 There are as many images as there are captions in the training set. The image embeddings of size 2048 provided with the dataset are encoded by an image classifier with ResNet50 architecture also trained with on the ImageNet classification task.For data preprocessing, following Shi et al. (2019) , the size of the vocabulary is limited to 10,000 for all languages and datasets. All raw images are resized to 3 \u00d7 64 \u00d7 64 and normalized with means [0.485, 0.456, 0.406] and standard deviations [0.229, 0.224, 0.225] , calculated from images in ImageNet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 52, |
|
"text": "Shi et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 328, |
|
"text": "Shi et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 441, |
|
"end": 460, |
|
"text": "(Deng et al., 2009)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 765, |
|
"end": 767, |
|
"text": "13", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1046, |
|
"end": 1063, |
|
"text": "Shi et al. (2019)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1249, |
|
"end": 1281, |
|
"text": "deviations [0.229, 0.224, 0.225]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Details of datasets", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The hyperparameters used in all proposed models are tuned with the MSCOCO English development set. For the grammar induction model, the size of word and syntactic category embeddings, as well as the size of hidden intermediary representations is 64. The size of the image embedding in the ImagePCFG system is also 64. All out-ofvocabulary words are replaced by the UNK symbol. Sentences with more than 40 words in the training set are trimmed down to 40 words. For the projector model, five different convolutional kernels, from (1,64) to (5,64), are used with 128 output channels. The trainable image encoder employs a", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Hyperparameters", |
|
"sec_num": null |
|
} |
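Appendix B specifies a projector with five convolutional kernels, (1,64) through (5,64), with 128 output channels each over 64-dimensional embeddings. The sketch below shows one plausible way to assemble such a multi-kernel convolutional module in PyTorch; the max-pooling over time, the final linear projection to the 64-dimensional image-embedding space, and the name ConvProjector are assumptions made for illustration rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn


class ConvProjector(nn.Module):
    """Multi-kernel convolutional projector over word embeddings.

    Kernel shapes (1, 64) through (5, 64) with 128 output channels each,
    matching Appendix B; pooling and the output projection are assumptions.
    """

    def __init__(self, emb_dim=64, out_dim=64, channels=128, widths=(1, 2, 3, 4, 5)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, channels, kernel_size=(w, emb_dim)) for w in widths]
        )
        self.proj = nn.Linear(channels * len(widths), out_dim)

    def forward(self, word_embs):
        # word_embs: (batch, sent_len, emb_dim)
        x = word_embs.unsqueeze(1)                  # (batch, 1, sent_len, emb_dim)
        pooled = []
        for conv in self.convs:
            h = torch.relu(conv(x)).squeeze(3)      # (batch, channels, sent_len - w + 1)
            pooled.append(h.max(dim=2).values)      # max over time -> (batch, channels)
        return self.proj(torch.cat(pooled, dim=1))  # (batch, out_dim)


if __name__ == "__main__":
    # Project a batch of eight 40-word sentences with 64-dimensional embeddings.
    projector = ConvProjector()
    print(projector(torch.randn(8, 40, 64)).shape)  # torch.Size([8, 64])
```

Max-pooling each kernel width over time gives a fixed-size sentence representation regardless of sentence length, which is why this style of projector pairs naturally with variable-length captions.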
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Aspects of the Theory of Syntax", |
|
"authors": [ |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Chomsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1965, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Resolving language and vision ambiguities together: Joint segmentation & Prepositional attachment resolution in captioned scenes", |
|
"authors": [ |
|
{ |
|
"first": "Gordon", |
|
"middle": [], |
|
"last": "Christie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankit", |
|
"middle": [], |
|
"last": "Laddha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aishwarya", |
|
"middle": [], |
|
"last": "Agrawal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stanislaw", |
|
"middle": [], |
|
"last": "Antol", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yash", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Kochersberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dhruv", |
|
"middle": [], |
|
"last": "Batra", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing, Proceedings", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1493--1503", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/d16-1156" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gordon Christie, Ankit Laddha, Aishwarya Agrawal, Stanislaw Antol, Yash Goyal, Kevin Kochersberger, and Dhruv Batra. 2016. Resolving language and vision ambiguities together: Joint segmentation & Prepositional attachment resolution in captioned scenes. In EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing, Proceed- ings, pages 1493-1503.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Imagenet: A large-scale hierarchical image database", |
|
"authors": [ |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "2009 IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "248--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hier- archical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. Ieee.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Unsupervised labeled parsing with deep inside-outside recursive autoencoders", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Drozdov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Verga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi-Pei", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1507--1512", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1161" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Drozdov, Patrick Verga, Yi-Pei Chen, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised labeled parsing with deep inside-outside recursive autoencoders. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1507-1512, Hong Kong, China. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Multi30K: Multilingual English-German Image Descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Desmond", |
|
"middle": [], |
|
"last": "Elliott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stella", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Khalil", |
|
"middle": [], |
|
"last": "Sima'an", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucia", |
|
"middle": [], |
|
"last": "Specia", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 5th Workshop on Vision and Language", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "70--74", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/W16-3210" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Desmond Elliott, Stella Frank, Khalil Sima'an, and Lu- cia Specia. 2016. Multi30K: Multilingual English- German Image Descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70- 74, Berlin, Germany. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Cross-domain generalization of neural constituency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Fried", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikita", |
|
"middle": [], |
|
"last": "Kitaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "323--330", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1031" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Fried, Nikita Kitaev, and Dan Klein. 2019. Cross-domain generalization of neural constituency parsers. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 323-330, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Why nouns are learned before verbs: Linguistic relativity versus natural partitioning", |
|
"authors": [ |
|
{ |
|
"first": "Dedre", |
|
"middle": [], |
|
"last": "Gentner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1982, |
|
"venue": "Language, thought, and culture", |
|
"volume": "2", |
|
"issue": "1", |
|
"pages": "301--334", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dedre Gentner. 1982. Why nouns are learned be- fore verbs: Linguistic relativity versus natural par- titioning. Language development: Vol. 2. Language, thought, and culture, 2(1):301-334.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Why Verbs Are Hard to Learn", |
|
"authors": [ |
|
{ |
|
"first": "Dedre", |
|
"middle": [], |
|
"last": "Gentner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Action Meets Word: How Children Learn Verbs", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "544--564", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1093/acprof:oso/9780195170009.003.0022" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dedre Gentner. 2006. Why Verbs Are Hard to Learn. In K. Hirsh-Pasek and R. Golinkoff, editors, Action Meets Word: How Children Learn Verbs, pages 544- 564. Oxford University Press.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The Structural Sources of Verb Meanings", |
|
"authors": [ |
|
{ |
|
"first": "Lila", |
|
"middle": [], |
|
"last": "Gleitman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Language Acquisition", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "3--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lila Gleitman. 1990. The Structural Sources of Verb Meanings. Language Acquisition, 1(1):3-55.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Madly ambiguous: A game for learning about structural ambiguity and why it's hard for computers", |
|
"authors": [ |
|
{ |
|
"first": "Ajda", |
|
"middle": [], |
|
"last": "Gokcen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "White", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--55", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N18-5011" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ajda Gokcen, Ethan Hill, and Michael White. 2018. Madly ambiguous: A game for learning about struc- tural ambiguity and why it's hard for computers. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Demonstrations, pages 51-55, New Orleans, Louisiana. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Deep residual learning for image recognition", |
|
"authors": [ |
|
{ |
|
"first": "Kaiming", |
|
"middle": [], |
|
"last": "He", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyu", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shaoqing", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jian", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition", |
|
"volume": "2016", |
|
"issue": "", |
|
"pages": "770--778", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/CVPR.2016.90" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE Computer So- ciety Conference on Computer Vision and Pattern Recognition, volume 2016-Decem, pages 770-778.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Concreteness and subjectivity as dimensions of lexical meaning", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 -Proceedings of the Conference", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "725--731", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/p14-2118" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill and Anna Korhonen. 2014. Concreteness and subjectivity as dimensions of lexical meaning. In 52nd Annual Meeting of the Association for Com- putational Linguistics, ACL 2014 -Proceedings of the Conference, volume 2, pages 725-731.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Multi-Modal Models for Concrete and Abstract Concept Meaning", |
|
"authors": [ |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roi", |
|
"middle": [], |
|
"last": "Reichart", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anna", |
|
"middle": [], |
|
"last": "Korhonen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "285--296", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl{_}a{_}00183" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2014. Multi-Modal Models for Concrete and Abstract Con- cept Meaning. Transactions of the Association for Computational Linguistics, 2:285-296.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Depthbounding is effective: Improvements and evaluation of unsupervised PCFG induction", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Finale", |
|
"middle": [], |
|
"last": "Doshi-Velez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lane", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2721--2731", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D18-1292" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018a. Depth- bounding is effective: Improvements and evaluation of unsupervised PCFG induction. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2721-2731, Brus- sels, Belgium. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Unsupervised grammar induction with depth-bounded PCFG", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Finale", |
|
"middle": [], |
|
"last": "Doshi-Velez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lane", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "211--224", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1162/tacl_a_00016" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2018b. Un- supervised grammar induction with depth-bounded PCFG. Transactions of the Association for Compu- tational Linguistics, 6:211-224.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Unsupervised learning of PCFGs with normalizing flow", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Finale", |
|
"middle": [], |
|
"last": "Doshi-Velez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lane", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2442--2452", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1234" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifeng Jin, Finale Doshi-Velez, Timothy Miller, Lane Schwartz, and William Schuler. 2019. Unsuper- vised learning of PCFGs with normalizing flow. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2442-2452, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Variance of average surprisal: A better predictor for quality of grammar from unsupervised PCFG induction", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2453--2463", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1235" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifeng Jin and William Schuler. 2019. Variance of average surprisal: A better predictor for quality of grammar from unsupervised PCFG induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2453-2463, Florence, Italy. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The Importance of Category Labels in Grammar Induction with Child-directed Utterances", |
|
"authors": [ |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Proceedings of 16th International Conference on Parsing Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lifeng Jin and William Schuler. 2020. The Impor- tance of Category Labels in Grammar Induction with Child-directed Utterances. In Proceedings of 16th International Conference on Parsing Technologies, Seattle, USA. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Bayesian Inference for PCFGs via Markov chain Monte Carlo", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharon", |
|
"middle": [], |
|
"last": "Goldwater", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of Human Language Technologies: The Conference of the North American Chapter of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2007. Bayesian Inference for PCFGs via Markov chain Monte Carlo. Proceedings of Hu- man Language Technologies: The Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 139-146.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Compound probabilistic context-free grammars for grammar induction", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2369--2385", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1228" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim, Chris Dyer, and Alexander Rush. 2019a. Compound probabilistic context-free grammars for grammar induction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 2369-2385, Florence, Italy. Asso- ciation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Unsupervised recurrent neural network grammars", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Rush", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adhiguna", |
|
"middle": [], |
|
"last": "Kuncoro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Dyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G\u00e1bor", |
|
"middle": [], |
|
"last": "Melis", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1105--1117", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/N19-1114" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kun- coro, Chris Dyer, and G\u00e1bor Melis. 2019b. Unsu- pervised recurrent neural network grammars. In Pro- ceedings of the 2019 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long and Short Papers), pages 1105-1117, Minneapolis, Minnesota. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Constituency parsing with a self-attentive encoder", |
|
"authors": [ |
|
{ |
|
"first": "Nikita", |
|
"middle": [], |
|
"last": "Kitaev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "2676--2686", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P18-1249" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikita Kitaev and Dan Klein. 2018. Constituency pars- ing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676-2686, Melbourne, Australia. Associa- tion for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Microsoft COCO: Common Objects in Context", |
|
"authors": [ |
|
{ |
|
"first": "Tsung-Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Maire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lubomir", |
|
"middle": [], |
|
"last": "Bourdev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Girshick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Hays", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deva", |
|
"middle": [], |
|
"last": "Ramanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3686--3693", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/CVPR.2014.471" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Doll\u00e1r. 2015. Microsoft COCO: Common Objects in Context. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3686-3693.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "A knowledge-intensive model for prepositional phrase attachment", |
|
"authors": [ |
|
{ |
|
"first": "Ndapandula", |
|
"middle": [], |
|
"last": "Nakashole", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "ACL-IJCNLP 2015 -53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "365--375", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3115/v1/p15-1036" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ndapandula Nakashole and Tom M Mitchell. 2015. A knowledge-intensive model for prepositional phrase attachment. In ACL-IJCNLP 2015 -53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing of the Asian Feder- ation of Natural Language Processing, Proceedings of the Conference, volume 1, pages 365-375.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Universal Dependencies v1: A Multilingual Treebank Collection", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Filip", |
|
"middle": [], |
|
"last": "Ginter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Goldberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "McDonald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sampo", |
|
"middle": [], |
|
"last": "Pyysalo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Natalia", |
|
"middle": [], |
|
"last": "Silveira", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Reut", |
|
"middle": [], |
|
"last": "Tsarfaty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Zeman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre, Marie-Catherine De Marneffe, Filip Gin- ter, Yoav Goldberg, Jan Haji\u010d, Christopher D Man- ning, Ryan Mcdonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A Multilingual Treebank Collection. In Proceedings of Language Resources and Evaluation Conference.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Using Leftcorner Parsing to Encode Universal Structural Constraints in Grammar Induction", |
|
"authors": [ |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Noji", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Johnson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "33--43", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroshi Noji and Mark Johnson. 2016. Using Left- corner Parsing to Encode Universal Structural Con- straints in Grammar Induction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 33-43.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "The bootstrapping problem in language acquisition. Mechanisms of language acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Pinker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Macwhinney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "399--441", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steven Pinker and B MacWhinney. 1987. The boot- strapping problem in language acquisition. Mecha- nisms of language acquisition, pages 399-441.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Unsupervised representation learning with deep convolutional generative adversarial networks", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Radford", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Metz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 4th International Conference on Learning Representations. International Conference on Learning Representations", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Radford, Luke Metz, and Soumith Chintala. 2016. Unsupervised representation learning with deep con- volutional generative adversarial networks. In Pro- ceedings of the 4th International Conference on Learning Representations. International Conference on Learning Representations, ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Vmeasure: A conditional entropy-based external cluster evaluation measure", |
|
"authors": [ |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Rosenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hirschberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andrew Rosenberg and Julia Hirschberg. 2007. V- measure: A conditional entropy-based external clus- ter evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural lan- guage learning (EMNLP-CoNLL).", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Fast Unsupervised Incremental Parsing", |
|
"authors": [ |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Seginer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the Annual Meeting of the Association of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "384--391", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoav Seginer. 2007. Fast Unsupervised Incremental Parsing. In Proceedings of the Annual Meeting of the Association of Computational Linguistics, pages 384-391.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Memory-bounded left-corner unsupervised grammar induction on child-directed input", |
|
"authors": [ |
|
{ |
|
"first": "Cory", |
|
"middle": [], |
|
"last": "Shain", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Bryce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lifeng", |
|
"middle": [], |
|
"last": "Jin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victoria", |
|
"middle": [], |
|
"last": "Krakovna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Finale", |
|
"middle": [], |
|
"last": "Doshi-Velez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timothy", |
|
"middle": [], |
|
"last": "Miller", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "William", |
|
"middle": [], |
|
"last": "Schuler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lane", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "964--975", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cory Shain, William Bryce, Lifeng Jin, Vic- toria Krakovna, Finale Doshi-Velez, Timothy Miller, William Schuler, and Lane Schwartz. 2016. Memory-bounded left-corner unsupervised gram- mar induction on child-directed input. In Proceed- ings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Technical Pa- pers, pages 964-975, Osaka, Japan. The COLING 2016 Organizing Committee.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Neural Language Modeling by Jointly Learning Syntax and Lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Yikang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhouhan", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chin-Wei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron Courville. 2018. Neural Language Modeling by Jointly Learning Syntax and Lexicon. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Yikang", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shawn", |
|
"middle": [], |
|
"last": "Tan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Sordoni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Courville", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ICLR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered Neurons: Integrat- ing Tree Structures into Recurrent Neural Networks. In ICLR.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Visually grounded neural syntax acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Haoyue", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiayuan", |
|
"middle": [], |
|
"last": "Mao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karen", |
|
"middle": [], |
|
"last": "Livescu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1842--1861", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P19-1180" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax ac- quisition. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1842-1861, Florence, Italy. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Constructing a language: A usage-based theory of language acquisition", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Tomasello", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Tomasello. 2003. Constructing a language: A usage-based theory of language acquisition. Har- vard University Press, Cambridge, MA, US.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Are Nouns Learned Before Verbs? Infants Provide Insight into a Longstanding Debate NIH Public Access Author Manuscript", |
|
"authors": [ |
|
{ |
|
"first": "Sandra", |
|
"middle": [], |
|
"last": "Waxman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaolan", |
|
"middle": [], |
|
"last": "Fu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sudha", |
|
"middle": [], |
|
"last": "Arunachalam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Erin", |
|
"middle": [], |
|
"last": "Leddon", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [], |
|
"last": "Geraghty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hyun-Joo", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Child Dev, and Perspect Author", |
|
"volume": "7", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1111/cdep.12032" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandra Waxman, Xiaolan Fu, Sudha Arunachalam, Erin Leddon, Kathleen Geraghty, Hyun-Joo Song, Child Dev, and Perspect Author. 2013. Are Nouns Learned Before Verbs? Infants Provide Insight into a Longstanding Debate NIH Public Access Author Manuscript. Child Dev Perspect, 7(3).", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "I B X 5 9 l 5 c 9 6 d j / l o z l n s H I E / c j 5 / A N o o o Q Y = < / l a t e x i t > e < l a t e x i t s h a 1 _ b a s e 6 4 = \"", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "7 8 9 5 H r R P e e G Y N / Y D 3 8 Q W G q 6 g x < / l a t e x i t > Different configurations of PCFG induction models: the model without vision (NoImagePCFG), the model with a pretrained image encoder (ImagePrePCFG) and the model with images (ImagePCFG.)", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "The correlation between number of word types needed to account for 10% of word tokens in a language (log # High Freq Words) and the RH gain from NoImagePCFG to ImagePCFG on different languages on the two different image datasets.", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"text": "The POS Homogeneity and NP Recall scores for the ImagePCFG and NoImagePCFG models across the test languages (** : p < 0.01). The average overall accuracy as well as accuracies for high and low attachment sentences in PP attachment evaluation for models with and without visual information (** : p < 0.01, * :p < 0.05).", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF1": { |
|
"text": "Table 1: Averages and standard deviations of labeled Recall-Homogeneity and unlabeled F1 scores of various unsupervised grammar inducers on the MSCOCO and Multi30k caption datasets. VG-NSL+H: VG-NSL system with head final bias. VG-NSL+H+F: VG-NSL system with head final bias and Fasttext word embeddings.(** : the unlabeled performance difference between NoImagePCFG and ImagePCFG is significant p < 0.01.)", |
|
"content": "<table><tr><td/><td colspan=\"2\">MSCOCO</td><td/><td/><td/><td>Multi30k</td><td/></tr><tr><td>Models</td><td colspan=\"2\">English**</td><td colspan=\"2\">English**</td><td colspan=\"2\">German**</td><td/><td>French**</td></tr><tr><td>F1</td><td/><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td></tr><tr><td colspan=\"2\">Left-branching 23.3</td><td>-</td><td>22.6</td><td>-</td><td>34.7</td><td>-</td><td>19.0</td><td>-</td></tr><tr><td colspan=\"2\">Right-branching 21.4</td><td>-</td><td>11.3</td><td>-</td><td>12.1</td><td>-</td><td>11.0</td><td>-</td></tr><tr><td colspan=\"3\">PRPN 52.5\u00b12.6 -</td><td colspan=\"2\">30.8\u00b117.9 -</td><td colspan=\"2\">31.5\u00b18.9 -</td><td colspan=\"2\">27.5\u00b17.0 -</td></tr><tr><td colspan=\"3\">ON-LSTM 45.5\u00b13.3 -</td><td colspan=\"2\">38.7\u00b112.7 -</td><td colspan=\"2\">34.9\u00b112.3 -</td><td colspan=\"2\">27.7\u00b15.6 -</td></tr><tr><td colspan=\"3\">VG-NSL+H 53.3\u00b10.2 -</td><td colspan=\"2\">38.7\u00b10.2 -</td><td colspan=\"2\">38.3\u00b10.2 -</td><td colspan=\"2\">38.1\u00b10.6 -</td></tr><tr><td colspan=\"3\">VG-NSL+H+F 54.4\u00b10.4 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"9\">NoImagePCFG 60.0\u00b18.2 47.6\u00b110.0 59.4\u00b17.7 51.6\u00b18.5 48.1\u00b15.2 53.7\u00b15.2 44.3\u00b15.1 43.8\u00b15.2</td></tr><tr><td colspan=\"9\">ImagePrePCFG 55.6\u00b17.5 42.3\u00b17.3 47.0\u00b17.0 40.5\u00b17.2 46.2\u00b17.4 51.1\u00b18.0 42.6\u00b110.3 43.4\u00b110.8</td></tr><tr><td colspan=\"9\">ImagePCFG 55.1\u00b12.7 42.5\u00b11.5 48.2\u00b14.9 40.5\u00b15.0 47.0\u00b15.5 51.8\u00b18.4 43.6\u00b15.5 44.5\u00b16.3</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "1\u00b18.5 22.3\u00b16.8 58.9\u00b13.7 47.1\u00b13.8 61.2\u00b13.5 48.5\u00b13.7 ImagePrePCFG 39.0\u00b14.1 23.5\u00b13.2 60.5\u00b11.8 49.8\u00b13.3 60.0\u00b14.6 47.2\u00b14.5 ImagePCFG 45.0\u00b12.2 27.1\u00b12.6 53.6\u00b18.3 41.3\u00b17.8 64.9\u00b16.6 51.2\u00b18.6", |
|
"content": "<table><tr><td>Models on MSCOCO</td><td>Korean**</td><td/><td>Polish**</td><td/><td>Chinese**</td></tr><tr><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td></tr><tr><td>NoImagePCFG 38.Models on Multi30k</td><td>Korean**</td><td/><td>Polish</td><td/><td>Chinese**</td></tr><tr><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td></tr><tr><td colspan=\"6\">NoImagePCFG 30.7\u00b15.6 22.8\u00b13.1 49.6\u00b14.6 39.9\u00b15.1 59.1\u00b13.3 53.2\u00b14.7</td></tr><tr><td colspan=\"6\">ImagePrePCFG 27.1\u00b14.4 19.9\u00b13.4 48.4\u00b13.1 38.3\u00b12.9 57.9\u00b17.0 51.0\u00b17.7</td></tr><tr><td colspan=\"6\">ImagePCFG 44.9\u00b11.3 33.8\u00b12.1 49.7\u00b17.2 40.4\u00b16.1 58.5\u00b13.2 52.8\u00b14.6</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Averages and standard deviations of labeled Recall-Homogeneity and unlabeled F1 scores of various unsupervised grammar inducers on the MSCOCO and Multi30k caption datasets in the additional languages with high numbers of high-frequency word types. (** : the unlabeled performance difference between NoImagePCFG and ImagePCFG is significant p < 0.01.)", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "NoImagePCFG 46.2 21.7 45.8 46.0 52.8 49.9 42.2 22.8 38.9 51.6 ImagePCFG 41.2 26.4 40.2 48.1 51.3 39.9 42.6 33.2 39.7 53.2", |
|
"content": "<table><tr><td>Models</td><td colspan=\"2\">MSCOCO</td><td/><td/><td/><td colspan=\"2\">Multi30k</td><td/><td/></tr><tr><td>En</td><td>Ko</td><td>Pl</td><td>Zh</td><td>De</td><td>En</td><td>Fr</td><td>Ko</td><td>Pl</td><td>Zh</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Average labeled Recall-Homogeneity of the NoImagePCFG and ImagePCFG models on the MSCOCO and Multi30k caption datasets with random words replaced by the UNK symbol. Standard deviations across the datasets are similar to what is reported in Table 1 and 2. Chinese Multi30k is the one on which the NoImagePCFG model outperforms the ImagePCFG model on the normal test set but not on the UNK test set.", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "and 5 report unlabeled F1 and labeled RH results on the development sets in the multilingual caption datasets. Results show that development and test results are very similar, indicating that the general characteristics of the two sets are very close. 3\u00b18.2 46.4\u00b111.0 38.6\u00b18.7 22.6\u00b16.9 59.5\u00b13.8 47.5\u00b13.9 ImagePrePCFG 55.7\u00b17.5 39.6\u00b15.4 39.5\u00b14.2 24.1\u00b13.4 61.2\u00b11.6 50.1\u00b13.3 ImagePCFG 55.4\u00b12.7 43.2\u00b11.8 45.1\u00b12.3 27.5\u00b12.6 54.3\u00b18.3 41.6\u00b17.9", |
|
"content": "<table><tr><td>Models</td><td>English</td><td/><td>Korean</td><td/><td>Polish</td><td>Chinese</td></tr><tr><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1 RH</td></tr><tr><td>NoImagePCFG 60.</td><td/><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Averages and standard deviations of labeled Recall-Homogeneity and unlabeled F1 scores of various unsupervised grammar inducers on the MSCOCO caption development datasets. 2\u00b15.7 53.6\u00b15.7 59.1\u00b18.1 52.2\u00b18.5 43.8\u00b14.9 43.2\u00b15.2 ImagePrePCFG 44.8\u00b17.9 50.0\u00b18.3 46.7\u00b17.3 40.7\u00b17.5 42.3\u00b110.3 42.8\u00b110.5 ImagePCFG 45.6\u00b15.2 50.6\u00b18.5 47.7\u00b15.4 40.9\u00b15.2 43.1\u00b15.1 43.9\u00b15.5 6\u00b15.7 22.2\u00b13.0 49.4\u00b14.9 40.0\u00b15.3 59.7\u00b13.3 53.6\u00b14.7 ImagePrePCFG 27.0\u00b14.8 19.2\u00b13.6 48.5\u00b13.1 38.5\u00b13.1 55.5\u00b19.3 48.3\u00b110.4 ImagePCFG 45.1\u00b11.1 33.4\u00b11.9 49.5\u00b17.6 40.8\u00b16.3 58.3\u00b13.2 52.1\u00b14.3", |
|
"content": "<table><tr><td>Models</td><td>German</td><td/><td>English</td><td/><td>French</td></tr><tr><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td></tr><tr><td>NoImagePCFG 47.Models</td><td>Korean</td><td/><td>Polish</td><td/><td>Chinese</td></tr><tr><td>F1</td><td>RH</td><td>F1</td><td>RH</td><td>F1</td><td>RH</td></tr><tr><td>NoImagePCFG 30.</td><td/><td/><td/><td/><td/></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"text": "Averages and standard deviations of labeled Recall-Homogeneity and unlabeled F1 scores of various unsupervised grammar inducers on the Multi30k caption development datasets.", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |