|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:48:51.328049Z" |
|
}, |
|
"title": "Did You \"Read\" the Next Episode? Using Textual Cues for Predicting Podcast Popularity", |
|
"authors": [ |
|
{ |
|
"first": "Brihi", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indraprastha Institute of Information Technology", |
|
"location": { |
|
"settlement": "Delhi" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Shravika", |
|
"middle": [], |
|
"last": "Mittal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indraprastha Institute of Information Technology", |
|
"location": { |
|
"settlement": "Delhi" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Chetan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Indraprastha Institute of Information Technology", |
|
"location": { |
|
"settlement": "Delhi" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Podcasts are an easily accessible medium of entertainment and information, often covering content from a variety of domains. However, only a few of them garner enough attention to be deemed 'popular'. In this work, we investigate the textual cues that assist in differing popular podcasts from unpopular ones. Despite having very similar polarity and subjectivity, the lexical cues contained in the podcasts are significantly different. Thus, we employ a triplet-based training method, to learn a text-based representation of a podcast, which is then used for a downstream task of \"popularity prediction\". Our best model received an F1 score of 0.82, achieving a relative improvement over the best baseline by 12.3%. * *Equal contribution. Ordered randomly.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Podcasts are an easily accessible medium of entertainment and information, often covering content from a variety of domains. However, only a few of them garner enough attention to be deemed 'popular'. In this work, we investigate the textual cues that assist in differing popular podcasts from unpopular ones. Despite having very similar polarity and subjectivity, the lexical cues contained in the podcasts are significantly different. Thus, we employ a triplet-based training method, to learn a text-based representation of a podcast, which is then used for a downstream task of \"popularity prediction\". Our best model received an F1 score of 0.82, achieving a relative improvement over the best baseline by 12.3%. * *Equal contribution. Ordered randomly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Predicting the popularity of media content, such as songs, podcasts, etc., before its release can have significant implications for the producers, artists, etc. Traditionally, this task has been attempted with hand-crafted feature sets (Tsagkias et al., 2008) , and utilising various audio features (Dhanaraj and Logan, 2005) . However, handcrafted feature sets are often not scalable, while audio-based features ignore the textual cues that are present in the data. Recently, with the rise in popularity and efficacy of Deep Learning, Neural network-based models (Yang et al., 2017; Zangerle et al., 2019) have also been proposed for hit-song prediction. There have also been some attempts to learn a general representation for media content, but only based on the audio of the content, not from the textual cues.", |
|
"cite_spans": [ |
|
{ |
|
"start": 236, |
|
"end": 259, |
|
"text": "(Tsagkias et al., 2008)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 325, |
|
"text": "(Dhanaraj and Logan, 2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 564, |
|
"end": 583, |
|
"text": "(Yang et al., 2017;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 584, |
|
"end": 606, |
|
"text": "Zangerle et al., 2019)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this work, we attempt to study the following: How does the textual content of popular podcasts differ from that of unpopular ones? First, we conduct experiments to assess the polarity of popular podcasts, and observe that it is quite similar to that of unpopular podcasts. This observation is also prevalent while studying the subjectivity of the transcripts. Furthermore, there is little to no variation when polarity and subjectivity are studied over time. We then analyse the differences in the keywords and the general topical categories interspersed between popular and unpopular podcasts. It is observed that content generally centered around 'Politics', 'Crime' or 'Media' is more popular than others. Keeping this in mind, we design a triplet-training method, that leverages similarities between the popular and unpopular podcast samples to create representations that are useful in the downstream podcast popularity prediction task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The problem of \"popularity prediction\" has been explored for different types of media content, in a variety of ways. For instance, Hit song prediction has been an active area of research. Dhanaraj and Logan (2005) used spectral features like MFCCs to train an SVM for predicting whether a song would be a hit or not. Yang et al. (2017) proposed a Convolutional Neural Network based architecture for predicting the popularity of a song, using audio-based features. More recently, Zangerle et al. (2019) employed a combination of low-level and high-level audio descriptors for training Neural Networks on a regression task. However, these works have not taken textual cues into account when predicting the popularity of a song. Sanghi and Brown (2014) made an attempt to use lyricbased features that incorporated the rhyming quality of the song. However, they did not learn a representation based on the lyrics.", |
|
"cite_spans": [ |
|
{ |
|
"start": 188, |
|
"end": 213, |
|
"text": "Dhanaraj and Logan (2005)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 335, |
|
"text": "Yang et al. (2017)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 726, |
|
"end": 749, |
|
"text": "Sanghi and Brown (2014)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "For podcasts, Tsagkias et al. (2008) gave a framework for assessing the credibility of pod-casts. Their notion of credibility included preference of the listeners. The framework was also shown to be reasonably effective in predicting popular podcasts (Tsagkias et al., 2009) . This framework included highly refined hand-crafted features, based on both audio, textual and content describing the podcast on its platform. Recently, proposed a GAN-based model, for learning representations of podcasts, based on non-textual features, and showed its applications in downstream tasks like music retrieval and popularity prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 36, |
|
"text": "Tsagkias et al. (2008)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 251, |
|
"end": 274, |
|
"text": "(Tsagkias et al., 2009)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Finally, popularity prediction is also challenging because of the class imbalance that is inherent in the problem definition itself. Popular podcasts or songs would always be in a minority in a corpus. This makes the task of learning a good representation for them difficult. To overcome this, we exploit the triplet-based training procedure (Hoffer and Ailon, 2015) for generating a balanced distribution of both popular and unpopular podcasts as the \"anchor\" podcast. (See Section 5.1)", |
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 366, |
|
"text": "(Hoffer and Ailon, 2015)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In our study, we use the dataset collected by Yang et al. 2019as a part of their podcast popularity prediction task. The dataset consists of 6511 episodes among which, there are 837 popular and 5674 unpopular (long-tail) podcasts. Based on the iTunes chart ranking, channels corresponding to the top 200 podcasts were treated as \"top channels\" and episodes from these top channels were then labelled as popular. provide a random 60-40 split of the dataset as a training and testing set. The average duration of the podcasts is 9.83 minutes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In this work, we only use the transcripts that are provided with the podcast audio. Each transcript contains the start and end timestamps (in milliseconds) along with every spoken token in a new line. We remove the timestamps and stop words for all transcripts. We also do not consider nonverbal vocalisations in the transcript (for example, \"ooooo\", \"ahhh\", etc.) for our analysis. After pre-processing, the podcast transcriptions contain 1557 tokens on an average.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
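A minimal sketch of this pre-processing step, assuming a per-line layout of "<start_ms> <end_ms> <token>" and an illustrative non-verbal filter; this is not the authors' released code.

```python
import re
from nltk.corpus import stopwords

STOP_WORDS = set(stopwords.words("english"))        # requires nltk.download("stopwords")
NONVERBAL = re.compile(r"^(o+h*|a+h+|u+m+|h+m+)$")   # illustrative pattern for "ooooo", "ahhh", ...

def preprocess_transcript(lines):
    """Strip timestamps, stop words and non-verbal vocalisations from one transcript."""
    tokens = []
    for line in lines:
        parts = line.strip().split()
        if len(parts) < 3:                # skip malformed lines; assumed "<start_ms> <end_ms> <token>"
            continue
        token = parts[-1].lower()         # keep only the spoken token
        if token in STOP_WORDS or NONVERBAL.match(token):
            continue
        tokens.append(token)
    return tokens
```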
|
{ |
|
"text": "In order to understand the general polarity and sentiment across popular and unpopular podcasts, we extract the polarity scores of each podcast using TEXTBLOB 1 , which is calculated by averaging the polarity of pre-defined lexicons, inferred from the words in the podcast. The polarity values range between \u22121 to 1, where anything above 0 is considered to be 'positive'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Polarity Analysis", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We average the obtained polarity scores for all the podcasts, for each of the popular and unpopular categories. It was observed that the overall polarity of popular and unpopular podcasts is roughly the same -as the average polarity score for the popular class was 0.14 and for the unpopular class was 0.15. In order to understand how polarity varies over time, we split each podcast into four time-chunks based on the three quartiles (Q1, Q2 and Q3), which we call T 1, T 2, T 3 and T 4, in order, with the help of the timestamps provided with the podcast transcripts. Figure 1 shows the density distributions for raw polarity scores over the four splits (based on timestamps) for the two categories. It is observed that both popular and unpopular podcasts start-off with a positive tone, slowly transitioning into neutral content. However, there is limited observable distinction between popular and unpopular podcasts based on polarity.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 570, |
|
"end": 578, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Polarity Analysis", |
|
"sec_num": "4.1" |
|
}, |
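A minimal sketch of the TEXTBLOB scoring used in Sections 4.1 and 4.2; here the four chunks are approximated by splitting the token sequence evenly, rather than by the timestamp quartiles used in the paper.

```python
from textblob import TextBlob

def chunk_scores(tokens, n_chunks=4):
    """Return (polarity, subjectivity) for four consecutive chunks (T1..T4) of one transcript."""
    size = max(1, len(tokens) // n_chunks)
    scores = []
    for i in range(n_chunks):
        # the last chunk absorbs any remainder tokens
        chunk = tokens[i * size:] if i == n_chunks - 1 else tokens[i * size:(i + 1) * size]
        blob = TextBlob(" ".join(chunk))
        scores.append((blob.sentiment.polarity, blob.sentiment.subjectivity))
    return scores
```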
|
{ |
|
"text": "Similar to Polarity analysis, we looked into subjectivity scores for each podcast using TEXTBLOB, which is calculated by averaging the subjectivity of pre-defined lexicons, inferred from the words in the podcast. The values vary between 0 and 1 such that, the higher the score the more 'opinion based' (subjective) the text is.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subjectivity Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "As was observed for polarity, the overall subjectivity of popular and unpopular podcasts is exactly the same -as the average subjectivity score obtained across all podcasts was 0.48 for both popular and unpopular classes.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subjectivity Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "1 https://textblob.readthedocs.io/en/dev/ To capture how subjectivity varies over time we used the same four timestamp based podcast chunks as was used for Polarity analysis. Figure 2 shows the density distributions for raw subjectivity scores over the four splits for the two categories.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 175, |
|
"end": 183, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Subjectivity Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "It can again be observed that both popular and unpopular podcasts maintain their subjectivity over time with no significant differences across categories.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subjectivity Analysis", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "We use EMPATH (Fast et al., 2016) to analyse the topical signals with the help of 194 pre-defined lexicons (for example -'social media', 'war', 'violence', 'money', 'alcohol', 'crime' to name a few) that highly correlate with LIWC (Tausczik and Pennebaker, 2010).", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 33, |
|
"text": "(Fast et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexical Analysis", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We extract the scores from EMPATH for each category, for each podcast. The most and the least relevant lexical categories for popular podcasts, ordered by their significance values are given in Table 1 ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 194, |
|
"end": 201, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Lexical Analysis", |
|
"sec_num": "4.3" |
|
}, |
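The EMPATH scores can be obtained as sketched below; aggregating the per-category scores over the popular and unpopular groups and testing for significance (as summarised in Table 1) is omitted for brevity, and the example sentence is purely illustrative.

```python
from empath import Empath

lexicon = Empath()

def empath_scores(transcript_text):
    # normalize=True divides each category count by the number of tokens,
    # giving comparable scores across transcripts of different lengths
    return lexicon.analyze(transcript_text, normalize=True)

scores = empath_scores("the senate passed the crime bill before the election")
print(scores["government"], scores["crime"])
```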
|
{ |
|
"text": "We also study what kind of keywords are present in popular and unpopular podcasts. We rank bigrams based on their Pointwise Mutual Information (PMI) scores and report the top 10 in Table 2 . It can be observed that in podcasts belonging to the popular class, keyword pairs like 'Hillary Clinton', 'Donald Trump', or 'Gordon Hayward' outshine highlighting the possibility of domain areas such as 'Politics', 'Sports', or 'Celebrities' to be responsible for making a podcast popular. This can also be seen in Section 4.3, which shows that 'Government' related topics are widely present in popular podcasts.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 189, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Keyword co-occurrence", |
|
"sec_num": "4.4" |
|
}, |
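A sketch of the bi-gram PMI ranking using NLTK's collocation utilities; the frequency threshold is an assumption added to suppress noisy one-off pairs and is not stated in the paper.

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

def top_bigrams_by_pmi(tokens, k=10, min_freq=5):
    """Return the k bi-grams with the highest PMI; rare pairs are filtered out first."""
    finder = BigramCollocationFinder.from_words(tokens)
    finder.apply_freq_filter(min_freq)     # min_freq is an assumed threshold
    return finder.nbest(BigramAssocMeasures.pmi, k)
```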
|
{ |
|
"text": "On the other hand the top keyword pairs extracted from unpopular podcasts belong to more generic domains like 'Cities', 'Lifestyle', etc., to name a few. Table 2 : Top 10 bi-grams (ranked by their PMI values) for Popular vs. Unpopular podcasts: The keyword bi-grams in bold are encompassed by topics that are shown to be highly relevant for popular podcasts in Section 4.3.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 161, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Keyword co-occurrence", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "Owing to the lack of a balanced dataset for popularity prediction, we use the Triplet Training strategy. In this method, instead of having class labels like 'popular' or 'unpopular' for the podcasts, we group the podcasts into triplets -each triplet has an anchor a podcast, which is often the reference for comparison, a positive podcast p which belongs to the same class as a, and a negative podcast n which belongs to the other class. The intuition is to reduce the distance between the representations of podcasts belonging to the same class and vice versa. After extracting the representation of all the three podcasts in a triplet from a network with shared weights, we use the Triplet loss given below, as introduced by Schroff et al. (2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 727, |
|
"end": 748, |
|
"text": "Schroff et al. (2015)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "L(a, p, n) = N i=1 f (ai) \u2212 f (pi) 2 2 \u2212 f (ai) \u2212 f (ni) 2 2 + \u03b1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "where a i , p i and n i are the anchor, positive and negative podcast samples in the i th triplet, f is a function that outputs an embedding for the podcasts and \u03b1 is the margin between the positive and negative podcast samples.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "5.1" |
|
}, |
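The loss above can be written in a few lines of PyTorch, as in this sketch; the margin value is an assumption, and the clamp at zero follows the formulation of Schroff et al. (2015). PyTorch's built-in torch.nn.TripletMarginLoss offers a similar alternative, though it uses unsquared distances.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    """f_a, f_p, f_n: (N, d) embeddings of the anchor, positive and negative podcasts."""
    pos = (f_a - f_p).pow(2).sum(dim=1)     # ||f(a_i) - f(p_i)||_2^2
    neg = (f_a - f_n).pow(2).sum(dim=1)     # ||f(a_i) - f(n_i)||_2^2
    return F.relu(pos - neg + alpha).sum()  # hinge at zero, following Schroff et al. (2015)
```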
|
{ |
|
"text": "We use a pre-trained DISTILBERT ) model 2 to create initial representations for the podcasts, followed by two fully connected layers, which shared weights during the triplet training phase. The architecture can be seen in Figure 3. The output of the final layer is a 128dimensional vector, that is used as an embedding for the downstream popularity prediction task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 222, |
|
"end": 228, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Proposed Method", |
|
"sec_num": "5.1" |
|
}, |
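A sketch of the embedding network in Figure 3, assuming the Hugging Face DistilBERT base checkpoint and an inner hidden size of 256 (neither is specified here); the 128-dimensional output and the ReLU between the two fully connected layers follow the figure. Because the same encoder instance is applied to the anchor, positive and negative podcasts, the weights are shared by construction.

```python
import torch
import torch.nn as nn
from transformers import DistilBertModel

class PodcastEncoder(nn.Module):
    """DISTILBERT followed by two fully connected layers, as in Figure 3."""
    def __init__(self, hidden=256, out_dim=128):
        super().__init__()
        self.bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
        self.fc1 = nn.Linear(self.bert.config.hidden_size, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]              # [CLS] token representation
        return self.fc2(torch.relu(self.fc1(cls)))     # 128-dimensional podcast embedding
```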
|
{ |
|
"text": "The following methods are used to extract the representations of podcasts to predict their popularity:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 TF-IDF: TF-IDF weights (Ramos et al., 2003) corresponding to each word in a podcast are used to fill a vector, the size of which equals the size of training set's vocabulary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 45, |
|
"text": "(Ramos et al., 2003)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 WORD2VEC (WV): WORD2VEC (Mikolov et al., 2013) embeddings for each word in a podcast are averaged to create a single embedding representing the podcast.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 48, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Method Macro-Avg F1 TF-IDF 0.61 WV 0.59 DB 0.73 DB-T 0.82 Table 3 : Popularity Prediction: Macro-average F1 score for the baselines and the proposed Triplet training strategy for the popularity prediction task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 58, |
|
"end": 65, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 DISTILBERT (DB): The embedding corresponding to the [CLS] token in a pre-trained DISTILBERT is taken as an embedding for a podcast.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "\u2022 DISTILBERT-Triplet (DB-T): The embedding corresponding to the [CLS] token in a pre-trained DISTILBERT is trained in a Triplet manner as shown in the proposed method ( Figure 3) , and the output of the final neural network is a 128-dimensional embedding for the podcast.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 178, |
|
"text": "Figure 3)", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "For each of the methods listed above, embeddings corresponding to every podcast are extracted. We use a supervised classifier like XG-BOOST (Chen and Guestrin, 2016) with binary labels for popularity. Results for the various methods are given in Table 3 . Appropriate hyperparameter tuning is done over 5-fold cross validation, including adding penalties for misclassifying the minority (Popular) class. It can be seen that our proposed method (DB-T) significantly outperforms the others, achieving a relative improvement over the best baseline (DB) by 12.3%. 3", |
|
"cite_spans": [ |
|
{ |
|
"start": 140, |
|
"end": 165, |
|
"text": "(Chen and Guestrin, 2016)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 253, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation and Results", |
|
"sec_num": "5.2" |
|
}, |
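A sketch of the downstream classification step; the hyperparameter values are illustrative rather than the tuned ones from the paper, the class-imbalance penalty is expressed through XGBOOST's scale_pos_weight, and macro-F1 is estimated here with 5-fold cross-validation for brevity instead of the paper's fixed 60-40 split.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def evaluate(embeddings, labels):
    """5-fold cross-validated macro-F1 of XGBOOST on podcast embeddings (labels: 1 = popular)."""
    labels = np.asarray(labels)
    # weight errors on the minority (popular) class by the class ratio
    pos_weight = (labels == 0).sum() / max(1, (labels == 1).sum())
    clf = XGBClassifier(n_estimators=300, max_depth=6,
                        scale_pos_weight=pos_weight, eval_metric="logloss")
    return cross_val_score(clf, embeddings, labels, cv=5, scoring="f1_macro")
```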
|
{ |
|
"text": "In this work, we explore how textual cues like polarity, subjectivity, lexicons and keywords differ in popular and unpopular podcasts. We then employ a triplet-based training procedure to counter the class imbalance problem in our data, which yields a relative improvement of 12.3% over the best performing baseline. In future work, we plan to explore this problem in a multi-modal setting, by constructing multi-modal embeddings that leverage both audio and textual data. We also plan to leverage temporal information associated with the transcripts, in the form of timestamps of the spoken words, for the task of popularity prediction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We use the DISTILBERT BASE model provided by huggingface's transformers library", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Xgboost. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
|
"authors": [ |
|
{ |
|
"first": "Tianqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlos", |
|
"middle": [], |
|
"last": "Guestrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2939672.2939785" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tianqi Chen and Carlos Guestrin. 2016. Xgboost. Pro- ceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Automatic prediction of hit songs", |
|
"authors": [ |
|
{ |
|
"first": "Ruth", |
|
"middle": [], |
|
"last": "Dhanaraj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Logan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ruth Dhanaraj and Beth Logan. 2005. Automatic pre- diction of hit songs. In ISMIR.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Empath. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems", |
|
"authors": [ |
|
{ |
|
"first": "Ethan", |
|
"middle": [], |
|
"last": "Fast", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Binbin", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Bernstein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/2858036.2858535" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ethan Fast, Binbin Chen, and Michael S. Bernstein. 2016. Empath. Proceedings of the 2016 CHI Con- ference on Human Factors in Computing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Deep metric learning using triplet network", |
|
"authors": [ |
|
{ |
|
"first": "Elad", |
|
"middle": [], |
|
"last": "Hoffer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nir", |
|
"middle": [], |
|
"last": "Ailon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Similarity-Based Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "84--92", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Elad Hoffer and Nir Ailon. 2015. Deep metric learning using triplet network. In Similarity-Based Pattern Recognition, pages 84-92, Cham. Springer Interna- tional Publishing.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of the 26th International Con- ference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Using tf-idf to determine word relevance in document queries", |
|
"authors": [ |
|
{ |
|
"first": "Juan", |
|
"middle": [], |
|
"last": "Ramos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the first instructional conference on machine learning", |
|
"volume": "242", |
|
"issue": "", |
|
"pages": "133--142", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learn- ing, volume 242, pages 133-142. New Jersey, USA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Hit song detection using lyric features alone", |
|
"authors": [ |
|
{ |
|
"first": "Abhishek", |
|
"middle": [], |
|
"last": "Sanghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [ |
|
"G" |
|
], |
|
"last": "Brown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 15th International Society for Music Information Retrieval Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhishek Sanghi and Daniel G. Brown. 2014. Hit song detection using lyric features alone. In Proceedings of the 15th International Society for Music Informa- tion Retrieval Conference 2014 (ISMIR 2014).", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", |
|
"authors": [ |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Facenet: A unified embedding for face recognition and clustering", |
|
"authors": [ |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Schroff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Kalenichenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Philbin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1109/cvpr.2015.7298682" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. 2015 IEEE Confer- ence on Computer Vision and Pattern Recognition (CVPR).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The psychological meaning of words: Liwc and computerized text analysis methods", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Yla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Tausczik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Pennebaker", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Journal of Language and Social Psychology", |
|
"volume": "29", |
|
"issue": "1", |
|
"pages": "24--54", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1177/0261927X09351676" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yla R. Tausczik and James W. Pennebaker. 2010. The psychological meaning of words: Liwc and comput- erized text analysis methods. Journal of Language and Social Psychology, 29(1):24-54.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Exploiting surface features for the prediction of podcast preference", |
|
"authors": [ |
|
{ |
|
"first": "Manos", |
|
"middle": [], |
|
"last": "Tsagkias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Larson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Advances in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "473--484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manos Tsagkias, Martha Larson, and Maarten de Ri- jke. 2009. Exploiting surface features for the predic- tion of podcast preference. In Advances in Informa- tion Retrieval, pages 473-484, Berlin, Heidelberg. Springer Berlin Heidelberg.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Podcred: A framework for analyzing podcast preference", |
|
"authors": [ |
|
{ |
|
"first": "Manos", |
|
"middle": [], |
|
"last": "Tsagkias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Larson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wouter", |
|
"middle": [], |
|
"last": "Weerkamp", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maarten", |
|
"middle": [], |
|
"last": "De Rijke", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 2nd ACM Workshop on Information Credibility on the Web, WICOW '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "67--74", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/1458527.1458545" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manos Tsagkias, Martha Larson, Wouter Weerkamp, and Maarten de Rijke. 2008. Podcred: A framework for analyzing podcast preference. In Proceedings of the 2nd ACM Workshop on Information Credibility on the Web, WICOW '08, page 67-74, New York, NY, USA. Association for Computing Machinery.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Huggingface's transformers: State-of-the-art natural language processing", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Wolf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lysandre", |
|
"middle": [], |
|
"last": "Debut", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Victor", |
|
"middle": [], |
|
"last": "Sanh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julien", |
|
"middle": [], |
|
"last": "Chaumond", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Clement", |
|
"middle": [], |
|
"last": "Delangue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anthony", |
|
"middle": [], |
|
"last": "Moi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierric", |
|
"middle": [], |
|
"last": "Cistac", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Rault", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R'emi", |
|
"middle": [], |
|
"last": "Louf", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Morgan", |
|
"middle": [], |
|
"last": "Funtowicz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jamie", |
|
"middle": [], |
|
"last": "Brew", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Revisiting the problem of audio-based hit song prediction using convolutional neural networks", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "621--625", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L. Yang, S. Chou, J. Liu, Y. Yang, and Y. Chen. 2017. Revisiting the problem of audio-based hit song pre- diction using convolutional neural networks. In 2017 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 621-625.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "More than just words: Modeling non-textual characteristics of podcasts", |
|
"authors": [ |
|
{ |
|
"first": "Longqi", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yu", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Drew", |
|
"middle": [], |
|
"last": "Dunne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Sobolev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mor", |
|
"middle": [], |
|
"last": "Naaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deborah", |
|
"middle": [], |
|
"last": "Estrin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "276--284", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.1145/3289600.3290993" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Longqi Yang, Yu Wang, Drew Dunne, Michael Sobolev, Mor Naaman, and Deborah Estrin. 2019. More than just words: Modeling non-textual charac- teristics of podcasts. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, WSDM '19, page 276-284, New York, NY, USA. Association for Computing Ma- chinery.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Hit song prediction: Leveraging low-and high-level audio features", |
|
"authors": [ |
|
{ |
|
"first": "Eva", |
|
"middle": [], |
|
"last": "Zangerle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramona", |
|
"middle": [], |
|
"last": "Huber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael V\u00f6tter Yi-Hsuan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 20th International Society for Music Information Retrieval Conference 2019 (ISMIR 2019)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eva Zangerle, Ramona Huber, and Michael V\u00f6tter Yi- Hsuan Yang. 2019. Hit song prediction: Leveraging low-and high-level audio features. In Proceedings of the 20th International Society for Music Informa- tion Retrieval Conference 2019 (ISMIR 2019), pages 319-326.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Density distribution of raw polarity scores for (a) Popular and (b) Unpopular podcasts over four time intervals.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Density distribution of raw subjectivity scores for (a) Popular and (b) Unpopular podcasts over four time intervals.", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Triplet Training Architecture: The podcast triples are first passed through a DISTILBERT model, followed by a 2-layer Neural Network, with a RELU non-linearity in between. The weights are shared across the triplet during training.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": ".", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">Rank Lexical Categories</td></tr><tr><td>1</td><td>Government</td></tr><tr><td>2</td><td>Crime</td></tr><tr><td>3</td><td>Politics</td></tr><tr><td>4</td><td>Money</td></tr><tr><td>5</td><td>Law</td></tr><tr><td>190</td><td>Hygiene</td></tr><tr><td>191</td><td>Social Media</td></tr><tr><td>192</td><td>Urban</td></tr><tr><td>193</td><td>Worship</td></tr><tr><td>194</td><td>Swimming</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": "Lexical Categories that are more likely to be present in popular podcasts, than unpopular podcasts:", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |