{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:27:22.052369Z" }, "title": "Developing Conversational Data and Detection of Conversational Humor in Telugu", "authors": [ { "first": "Vaishnavi", "middle": [], "last": "Pamulapati", "suffix": "", "affiliation": {}, "email": "vaishnavi.p@research.iiit.ac.in" }, { "first": "Radhika", "middle": [], "last": "Mamidi", "suffix": "", "affiliation": {}, "email": "radhika.mamidi@iiit.ac.in" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In the field of humor research, there has been a recent surge of interest in the sub-domain of Conversational Humor (CH). This study has two main objectives. (a) develop a conversational (humorous and non-humorous) dataset in Telugu. (b) detect CH in the compiled dataset. In this paper, the challenges faced while collecting the data and experiments carried out are elucidated. Transfer learning and non-transfer learning techniques are implemented by utilizing pre-trained models such as FastText word embeddings, BERT language models and Text GCN, which learns the word and document embeddings simultaneously of the corpus given. State-of-the-art results are observed with a 99.3% accuracy and a 98.5% f1 score achieved by BERT. 042 course relation present in humorous instances (Liu 043 et al., 2018). However, in conversations, partici-044 pants' personalities, their sense of humor, and the 045 relationship between the participants, add unique 046 complexities to the task of detection of CH. The 047 following example (translated) from a Telugu stage 048 play, Kanyasulkam.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In the field of humor research, there has been a recent surge of interest in the sub-domain of Conversational Humor (CH). This study has two main objectives. (a) develop a conversational (humorous and non-humorous) dataset in Telugu. (b) detect CH in the compiled dataset. In this paper, the challenges faced while collecting the data and experiments carried out are elucidated. Transfer learning and non-transfer learning techniques are implemented by utilizing pre-trained models such as FastText word embeddings, BERT language models and Text GCN, which learns the word and document embeddings simultaneously of the corpus given. State-of-the-art results are observed with a 99.3% accuracy and a 98.5% f1 score achieved by BERT. 042 course relation present in humorous instances (Liu 043 et al., 2018). However, in conversations, partici-044 pants' personalities, their sense of humor, and the 045 relationship between the participants, add unique 046 complexities to the task of detection of CH. The 047 following example (translated) from a Telugu stage 048 play, Kanyasulkam.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Humor as a phenomenon has interested scholars from diverse fields since time immemorial (Morreall, 2012). The abundance of research studies dedicated to humor is not only due to the fascinating nature of the domain but also due to its impact in everyday life (Martin and Lefcourt, 1983) (McGee and Shevlin, 2009) . (Sacks et al., 1978) . Sequence organization is the organization of these turns. 
If the conversational goal is to seek information, the first turn is the question, and the second turn is the answer (Schegloff et al., 1977).", "cite_spans": [ { "start": 259, "end": 286, "text": "(Martin and Lefcourt, 1983)", "ref_id": "BIBREF4" }, { "start": 287, "end": 312, "text": "(McGee and Shevlin, 2009)", "ref_id": "BIBREF5" }, { "start": 315, "end": 335, "text": "(Sacks et al., 1978)", "ref_id": "BIBREF10" }, { "start": 513, "end": 537, "text": "(Schegloff et al., 1977)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "After filtering, a total of 2,047 conversational jokes were compiled. These jokes did not follow a standard structural format. Therefore, manual intervention was applied to make a homogeneous conversational humorous dataset (translated example given below), as otherwise the model could distinguish humorous and non-humorous data based on structural features rather than semantic ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Humorous Data", "sec_num": "3.1" }, { "text": "\"Hey! Your dog is exactly like a tiger!\", said Suresh. \"That is (emphasis) a tiger. It has been going around talking and thinking about love and has turned into a dog!\", replied Mahesh.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Original Conversational Joke:", "sec_num": null }, { "text": "Suresh: Hey! Your dog is exactly like a tiger! Mahesh: That is (emphasis) a tiger. It has been going around talking and thinking about love and has turned into a dog!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manually Converted to:", "sec_num": null }, { "text": "User456: It seems to be very sunny. User123: Oh, wonderful!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manually Converted to:", "sec_num": null }, { "text": "In both instances, 'User123' is replaced by the same common Telugu name. This resulted in a corpus of 10,156 conversations. Tweets that contained hashtags such as '#funny', '#joke', '#hahaha', or smiling/laughing emojis were removed so that humorous tweets would not contaminate the non-humorous data. After checking the corpus, conversations that contained profanity were removed to avoid ambiguity as to whether the conversation was humorous (F\u00e4gersten, 2012), finally resulting in 6,202 non-humorous conversations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manually Converted to:", "sec_num": null }, { "text": "To make the non-humorous data and the humorous data as similar as possible in structure, several changes were made to the collected jokes and non-humorous conversations (statistics in Table 1): \u2022 Preprocessing steps such as removal of URLs, hashtags, and emojis were performed. There are two approaches in our proposed methodology (Fig. 1). The first is the use of pre-trained models, which are then further used for our downstream task of Conversational Humor detection. Table 1: Collected Data: Humorous 6,107, Non-Humorous 10,156; Post Filtration: Humorous 2,047, Non-Humorous 6,202. During pre-training, BERT maximizes the probability of the word appearing in its entire surrounding context over iterations (Devlin et al., 2018). 9 https://fasttext.cc/docs/en/crawl-vectors.html", "cite_spans": [ { "start": 661, "end": 681, "text": "(Devlin et al., 2018", "ref_id": null } ], "ref_spans": [ { "start": 179, "end": 187, "text": "Table 1)", "ref_id": "TABREF2" }, { "start": 307, "end": 315, "text": "(Fig. 1)", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "This pre-trained model can be plugged in for many downstream tasks by taking the pooled output of BERT and passing it to a neural network suitable for the task at hand. 
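To make this plug-in step concrete, a minimal sketch in Python (assuming the HuggingFace transformers and torch packages; the model name and the two-class head below are illustrative assumptions, not the exact training setup of this study):

# Sketch: BERT's pooled output fed to a task-specific linear head.
# The model name and the 2-class head are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
classifier = torch.nn.Linear(encoder.config.hidden_size, 2)  # humorous / non-humorous

batch = tokenizer(["Speaker 1: utterance 1 Speaker 2: utterance 2"],
                  return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    pooled = encoder(**batch).pooler_output   # shape: [batch, hidden_size]
logits = classifier(pooled)                   # unnormalized class scores
probs = torch.softmax(logits, dim=-1)         # softmax layer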
An important feature of BERT to be noted is that it does not consider whole words like Word2Vec but rather 'word pieces', and hence is useful for agglutinative languages such as Telugu and can handle unknown or erroneously spelled words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "After collecting and pre-processing the tweets and their replies to form non-humorous conversations, experiments were run with Text GCN, FastText, and various BERT models (refer to Table 3). The respective model is trained on 80% of the data and is tested on 20% of unseen data. As there are 2,047 instances of humorous conversations but 6,202 instances of non-humorous conversations, the dataset is unbalanced. The weights of the ", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 187, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "8" }, { "text": "3 As the Telugu script was abundantly mixed with Roman script in the movie dialogues", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4 https://github.com/twintproject/twint 5 https://developer.twitter.com/en/docs/twitter-api/earlyaccess", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "6 https://github.com/shuyo/language-detection 7 https://pypi.org/project/google-transliteration-api/ 8 https://en.wikipedia.org/wiki/Sardarji_joke", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "For the first approach we use FastText and BERT's pre-trained models (non-English or dedicated to Indian languages, such as MuRIL) as they have been trained on a large quantity of data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "Using BERT sentence embedding for humor detection; Humor detection in English-Hindi code-mixed social media content: Corpus and baseline system", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.05513" ] }, "num": null, "urls": [], "raw_text": "Using BERT sentence embedding for humor detection. Ankush Khandelwal, Sahil Swami, Syed S. Akhtar, and Manish Shrivastava. 2018. Humor detection in English-Hindi code-mixed social media content: Corpus and baseline system. arXiv preprint arXiv:1806.05513.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Modeling sentiment association in discourse for humor recognition", "authors": [], "year": null, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "586--591", "other_ids": {}, "num": null, "urls": [], "raw_text": "Modeling sentiment association in discourse for humor recognition. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 586-591.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Sense of humor as a moderator of the relation between stressors and moods", "authors": [ { "first": "A", "middle": [], "last": "Rod", "suffix": "" }, { "first": "", "middle": [], "last": "Martin", "suffix": "" }, { "first": "M", "middle": [], "last": "Herbert", "suffix": "" }, { "first": "", "middle": [], "last": "Lefcourt", "suffix": "" } ], "year": 1983, "venue": "Journal of personality and social psychology", "volume": "45", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rod A Martin and Herbert M Lefcourt. 1983. Sense of humor as a moderator of the relation between stres- sors and moods. Journal of personality and social psychology, 45(6):1313.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Effect of humor on interpersonal attraction and mate selection", "authors": [ { "first": "Elizabeth", "middle": [], "last": "Mcgee", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Shevlin", "suffix": "" } ], "year": 2009, "venue": "The Journal of psychology", "volume": "143", "issue": "1", "pages": "67--77", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizabeth McGee and Mark Shevlin. 2009. Effect of humor on interpersonal attraction and mate selection. The Journal of psychology, 143(1):67-77.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Benign violations: Making immoral behavior funny. Psychological science", "authors": [ { "first": "Peter", "middle": [], "last": "Mcgraw", "suffix": "" }, { "first": "Caleb", "middle": [], "last": "Warren", "suffix": "" } ], "year": 2010, "venue": "", "volume": "21", "issue": "", "pages": "1141--1149", "other_ids": {}, "num": null, "urls": [], "raw_text": "A Peter McGraw and Caleb Warren. 2010. Benign vi- olations: Making immoral behavior funny. Psycho- logical science, 21(8):1141-1149.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Making computers laugh: Investigations in automatic humor recognition", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "531--538", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recognition. In Proceedings of Human Language Technology Conference and Conference on Empiri- cal Methods in Natural Language Processing, pages 531-538.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Philosophy of humor", "authors": [ { "first": "John", "middle": [], "last": "Morreall", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Morreall. 2012. 
Philosophy of humor.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A novel annotation schema for conversational humor: Capturing the cultural nuances in kanyasulkam", "authors": [ { "first": "Vaishnavi", "middle": [], "last": "Pamulapati", "suffix": "" }, { "first": "Gayatri", "middle": [], "last": "Purigilla", "suffix": "" }, { "first": "Radhika", "middle": [], "last": "Mamidi", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 14th Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "34--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaishnavi Pamulapati, Gayatri Purigilla, and Radhika Mamidi. 2020. A novel annotation schema for con- versational humor: Capturing the cultural nuances in kanyasulkam. In Proceedings of the 14th Linguistic Annotation Workshop, pages 34-47.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A simplest systematics for the organization of turn taking for conversation", "authors": [ { "first": "Harvey", "middle": [], "last": "Sacks", "suffix": "" }, { "first": "A", "middle": [], "last": "Emanuel", "suffix": "" }, { "first": "Gail", "middle": [], "last": "Schegloff", "suffix": "" }, { "first": "", "middle": [], "last": "Jefferson", "suffix": "" } ], "year": 1978, "venue": "Studies in the organization of conversational interaction", "volume": "", "issue": "", "pages": "7--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harvey Sacks, Emanuel A Schegloff, and Gail Jeffer- son. 1978. A simplest systematics for the organiza- tion of turn taking for conversation. In Studies in the organization of conversational interaction, pages 7- 55. Elsevier.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The preference for self-correction in the organization of repair in conversation", "authors": [ { "first": "A", "middle": [], "last": "Emanuel", "suffix": "" }, { "first": "Gail", "middle": [], "last": "Schegloff", "suffix": "" }, { "first": "Harvey", "middle": [], "last": "Jefferson", "suffix": "" }, { "first": "", "middle": [], "last": "Sacks", "suffix": "" } ], "year": 1977, "venue": "", "volume": "53", "issue": "", "pages": "361--382", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emanuel A Schegloff, Gail Jefferson, and Harvey Sacks. 1977. The preference for self-correction in the organization of repair in conversation. Lan- guage, 53(2):361-382.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Baseline needs more love: On simple wordembedding-based models and associated pooling mechanisms", "authors": [ { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Renqiang Min", "suffix": "" }, { "first": "Qinliang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Henao", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.09843" ] }, "num": null, "urls": [], "raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018. 
Baseline needs more love: On simple word- embedding-based models and associated pooling mechanisms. arXiv preprint arXiv:1805.09843.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Computational morphology for telugu", "authors": [ { "first": "B", "middle": [], "last": "Srinivasu", "suffix": "" }, { "first": "", "middle": [], "last": "Manivannan", "suffix": "" } ], "year": 2018, "venue": "Journal of Computational and Theoretical Nanoscience", "volume": "15", "issue": "6-7", "pages": "2373--2378", "other_ids": {}, "num": null, "urls": [], "raw_text": "B Srinivasu and R Manivannan. 2018. Computational morphology for telugu. Journal of Computational and Theoretical Nanoscience, 15(6-7):2373-2378.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Joint embedding of words and labels for text classification", "authors": [ { "first": "Lawrence", "middle": [], "last": "Henao", "suffix": "" }, { "first": "", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1805.04174" ] }, "num": null, "urls": [], "raw_text": "Henao, and Lawrence Carin. 2018. Joint embedding of words and labels for text classification. arXiv preprint arXiv:1805.04174.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Humor detection: A transformer gets the last laugh", "authors": [ { "first": "Orion", "middle": [], "last": "Weller", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Seppi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.00252" ] }, "num": null, "urls": [], "raw_text": "Orion Weller and Kevin Seppi. 2019. Humor detection: A transformer gets the last laugh. arXiv preprint arXiv:1909.00252.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Graph convolutional networks for text classification", "authors": [ { "first": "Liang", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Chengsheng", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. Graph convolutional networks for text classification.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Proceedings of the AAAI Conference on Artificial Intelligence", "authors": [], "year": null, "venue": "", "volume": "33", "issue": "", "pages": "7370--7377", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7370-7377.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "In a conversational discourse between interlocutors, their attitude, present state of mind, psychological distance between the topic of humor and the individual (McGraw and Warren, 2010) determine whether what is intended to be humorous is perceived so. Conversational Humor (CH) is a subset of verbal humor. Verbal humor exists in the verbality of what is being spoken. Canned jokes (such as light-bulb jokes, knock-knock jokes) are a part of verbal humor, where they are not contextdependent, i.e., they can be removed from a conversation and still perceived as humorous. 
On the other hand, CH is heavily dependent on various factors including speakers' personalities, the relevant culture that is referenced, and current events. Numerous studies that focus on the detection of humor in short jokes/tweets rely on the contrastive discourse relation present in humorous instances (Liu et al., 2018). However, in conversations, participants' personalities, their sense of humor, and the relationship between the participants add unique complexities to the task of detection of CH. The following example (translated) from a Telugu stage play, Kanyasulkam, is in a conversational format." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "" }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Numerous pre-trained models are available on HuggingFace. Instead of restricting the study to BERT pre-trained models, BERT's cousins ALBERT and DistilBERT are also experimented with. ALBERT and DistilBERT strive to reduce the computational complexity. Specified below are the pre-trained models used for the purposes of this study: \u2022 BERT: Multilingual base model (cased, trained on the top 104 languages with the largest Wikipedias), MuRIL (Multilingual Representations for Indian Languages, trained on 17 Indian languages), \u2022 ALBERT: Indic-BERT (an ALBERT model pretrained only on 12 major Indian languages), \u2022 DistilBERT: a distilled version of the Multilingual base model (cased). 7.2 Linear Classifier The pooled output of the BERT (or ALBERT or DistilBERT) encoder is directly sent to a classifier and, after that, to a softmax layer. Class weights are considered here, too, due to the imbalanced dataset (2,047 humorous conversations versus 6,202 non-humorous conversations). A PyTorch classifier that applies a linear transformation to the given data is used. Subsequently, the Cross Entropy loss function is chosen." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "coupled with different classifiers are used to solve the task at hand. The performances of the various models are evaluated and analyzed to glean insights regarding the mechanisms employed. For low-resource Indian languages such as Telugu, the hurdles posed by the lack of data are avoided, as models pre-trained on a substantial amount of Telugu data are used effectively. The BERT model trained on 104 languages, Multilingual BERT base (cased) by Google, delivered the best performance with an accuracy of 99.3% and an f1 score of 98.5%. Comparatively, FastText comes close with merely a 2% difference in accuracy, 97.3%, and an f1 score of 94.6%. State-of-the-art results are thus produced by utilizing transfer learning techniques and methodologies. Telugu movie scripts could be analyzed to comprehend the trends in the types of humor used in Telugu culture, the influencing factors, and the importance of shared knowledge of culture in the perception of humor (Pamulapati et al., 2020). Instead of using premeditated conversations, real-time conversations transcribed from humorous Telugu interviews would capture the essence of conversational humor better. Detection of humor in conversations could be taken one step further to detect a particular technique(s) or type(s) of Conversational Humor.
" }, "TABREF0": { "type_str": "table", "text": "1 https://telugu.samayam.com/telugu-jokes/funny-jokes/articlelist/49228696.cms 2 https://bit.ly/3yL2HuX Though extensive attempts were made to obtain conversational data from movies or TV shows, this direction proved to be unfeasible for reasons such as the unavailability of transcripts, the need for manual transcription, and the unavailability of multilingual OCRs 3. Despite jokes being used, several conversational features remain intact in the final dataset.", "content": "
All instances of the final humorous data used are of the same format as:
Speaker 1: utterance 1
Speaker 2: utterance 2
...
Speaker n: utterance n.
Therefore, features such as turn taking and sequence organization are present. Turn-taking organization is where participants alternate their utterances, minimizing the noise arising from clashing utterances, so as to have smooth and effective communication.
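As a sketch of the homogenized format and of this turn-alternation property (the helper below is hypothetical, not part of the dataset tooling):

# Hypothetical helper: a conversation as an ordered list of (speaker, utterance)
# turns, with a check that no speaker takes two consecutive turns.
Conversation = list[tuple[str, str]]

def alternates(conv: Conversation) -> bool:
    return all(a != b for (a, _), (b, _) in zip(conv, conv[1:]))

joke: Conversation = [
    ("Suresh", "Hey! Your dog is exactly like a tiger!"),
    ("Mahesh", "That is (emphasis) a tiger. It has been going around talking "
               "and thinking about love and has turned into a dog!"),
]
assert alternates(joke)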
", "html": null, "num": null }, "TABREF2": { "type_str": "table", "text": "of data. Subsequently, for the second approach,", "content": "
Subsequently, for the second approach, the Text Graphical Convolutional Network (GCN) framework proposed by Yao et al. (2019) is implemented and fine-tuned.
5 Text GCN
5.1 Heterogeneous Graph
There have been several attempts at learning word representations by mapping words and the documents they are a part of to a graph and learning the word-word and word-document relations using different features. In the paper by Yao et al. (2019), using unsupervised learning, they build a heterogeneous graph of words and documents. First, this graph is built using word co-occurrence and document-word relations, after which a Text Graphical Convolutional Network is learnt on the corpus.
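A condensed sketch of that construction, after Yao et al. (2019): word-word edges are weighted by positive PMI over sliding windows, while word-document edges (omitted here) carry TF-IDF weights, e.g., via sklearn's TfidfVectorizer. The window size and whitespace tokenization are illustrative assumptions:

import math
from collections import Counter
from itertools import combinations

def pmi_word_edges(docs, window=20):
    # Count single words and word pairs over sliding windows of the corpus.
    word_count, pair_count, n_windows = Counter(), Counter(), 0
    for doc in docs:
        toks = doc.split()
        for i in range(max(1, len(toks) - window + 1)):
            win = sorted(set(toks[i:i + window]))
            n_windows += 1
            word_count.update(win)
            pair_count.update(combinations(win, 2))
    # PMI(i, j) = log( #W(i,j) * #W / (#W(i) * #W(j)) ), where #W counts windows.
    edges = {}
    for (wi, wj), n in pair_count.items():
        pmi = math.log(n * n_windows / (word_count[wi] * word_count[wj]))
        if pmi > 0:                 # keep only positively associated word pairs
            edges[(wi, wj)] = pmi
    return edges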
", "html": null, "num": null }, "TABREF4": { "type_str": "table", "text": "", "content": "
Table 2: FastText nearest neighbors of the word 'aNxulO'
classes are taken into consideration. Accuracy and
F1 score are used as evaluation metrics.
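Given the 2,047 versus 2,047/6,202 class imbalance noted above, a minimal sketch of the class-weighted Cross Entropy loss and the two metrics (PyTorch and scikit-learn; the inverse-frequency weighting scheme is an assumption, as the paper states only that class weights are taken into consideration):

import torch
from sklearn.metrics import accuracy_score, f1_score

counts = torch.tensor([6202.0, 2047.0])   # class 0: non-humorous, class 1: humorous
weights = counts.sum() / (2.0 * counts)   # rarer class receives the larger weight
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)

# After training on the 80% split, evaluate on the held-out 20%:
# y_true, y_pred = ...
# print(accuracy_score(y_true, y_pred), f1_score(y_true, y_pred))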
The highest accuracy results are obtained by Multilingual BERT by Google. Multilingual BERT is trained on 104 languages using Wikipedia dumps of the respective languages. This model is pre-trained with both objectives: Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).
Additionally, it is observed that the FastText and BERT models perform considerably better than Text GCN. Text GCN's low F1 score implies both low precision and low recall: the model does not identify humorous conversations well. In Section 5.1, it is specified that the edges between word-word nodes are given weights based on PMI (point-wise mutual information). The word pair with the highest PMI (12.34) is
<aMxulO, kaxA>
Translation: <in that, right> (as in the sentence 'You finished your homework, right?')
It is evident that the two words have no syntactic or semantic relation. Thus, the heterogeneous graph does not capture word relations well. In comparison, FastText's nearest neighbors defined in the pre-trained model for the word 'aNxulO' (translates to 'in that') are shown in Table 2.
By inspecting the nearest neighbors of the queried word, FastText captures syntactic relations (the plural of 'in that' is 'in those') and semantic relations (the antonym of 'in that' is 'in this'). This highlights the importance of word embeddings for the model's overall performance on the task to be carried out. Word representation is a key aspect that contributes to the effectiveness and performance of text classification (Shen et al., 2018) (Wang et al., 2018).
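Such an inspection can be reproduced with the pre-trained Telugu CommonCrawl vectors, as a sketch (assuming the fasttext package and the cc.te.300.bin model from the FastText crawl-vectors page; the query word is given in Telugu script, since the paper's 'aNxulO' is a WX transliteration):

import fasttext

# Pre-trained Telugu vectors: https://fasttext.cc/docs/en/crawl-vectors.html
model = fasttext.load_model("cc.te.300.bin")
for score, neighbor in model.get_nearest_neighbors("అందులో", k=5):
    print(f"{score:.3f}\t{neighbor}")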
As mentioned in Section 3.3, conversational text written in Roman script was transliterated to Telugu script using the Google Transliterate API. The API produces an array of the most probable transliterations of the input given, after which the first element is taken.
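The footnote points to the google-transliteration-api package; the sketch below instead calls Google's public Input Tools endpoint directly, and both the endpoint parameters and the response shape shown are assumptions for illustration:

import requests

def transliterate_te(text: str, num: int = 5) -> list[str]:
    # 'te-t-i0-und' requests Roman-to-Telugu transliteration candidates.
    resp = requests.get(
        "https://inputtools.google.com/request",
        params={"text": text, "itc": "te-t-i0-und", "num": num},
        timeout=10,
    )
    data = resp.json()
    # Assumed shape: ["SUCCESS", [[input, [candidate1, candidate2, ...], ...]]]
    return data[1][0][1] if data and data[0] == "SUCCESS" else []

candidates = transliterate_te("andulo")
best = candidates[0] if candidates else None   # the first element is taken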
", "html": null, "num": null }, "TABREF5": { "type_str": "table", "text": "", "content": "
Table 3: Performance of architectures implemented for Conversational Humor recognition
Architecture | Accuracy | F1 Score
Text GCN | 0.592 | 0.374
FastText | 0.973 | 0.946
Multilingual BERT | 0.993 | 0.985
MuRIL | 0.988 | 0.977
Indic-BERT | 0.992 | 0.982
Multilingual DistilBERT | 0.990 | 0.980
9 Conclusion and Future Work
In this work, the problem of conversational humor detection in Telugu is addressed. Different word embedding algorithms or language models,
", "html": null, "num": null } } } }