{ "paper_id": "S18-1047", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:44:14.619656Z" }, "title": "Amrita student at SemEval-2018 Task 1: Distributed Representation of Social Media Text for Affects in Tweets", "authors": [ { "first": "Nidhin", "middle": [ "A" ], "last": "Unnithan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Coimbatore Amrita Vishwa Vidyapeetham", "location": { "country": "India" } }, "email": "" }, { "first": "Barathi", "middle": [], "last": "Ganesh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Coimbatore Amrita Vishwa Vidyapeetham", "location": { "country": "India" } }, "email": "bharathiganesh.hb@gmail.com" }, { "first": "M", "middle": [ "Anand" ], "last": "Kumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Coimbatore Amrita Vishwa Vidyapeetham", "location": { "country": "India" } }, "email": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "", "affiliation": { "laboratory": "", "institution": "Coimbatore Amrita Vishwa Vidyapeetham", "location": { "country": "India" } }, "email": "kpsoman@amrita.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we did an analysis of \"Affects in Tweets\" which was one of the task conducted by SemEval 2018. Task was to build a model which is able to do regression and classification of different emotions from the given tweets data set. We developed a base model for all the subtasks using distributed representation (Doc2Vec) and applied machine learning techniques for classification and regression. Distributed representation is an unsupervised algorithm which is capable of learning fixed length feature representation from variable length texts. Machine learning techniques used for regression is 'Linear Regression' while 'Random Forest Tree' is used for classification purpose. Empirical results obtained for all the subtasks by our model are shown in this paper.", "pdf_parse": { "paper_id": "S18-1047", "_pdf_hash": "", "abstract": [ { "text": "In this paper we did an analysis of \"Affects in Tweets\" which was one of the task conducted by SemEval 2018. Task was to build a model which is able to do regression and classification of different emotions from the given tweets data set. We developed a base model for all the subtasks using distributed representation (Doc2Vec) and applied machine learning techniques for classification and regression. Distributed representation is an unsupervised algorithm which is capable of learning fixed length feature representation from variable length texts. Machine learning techniques used for regression is 'Linear Regression' while 'Random Forest Tree' is used for classification purpose. Empirical results obtained for all the subtasks by our model are shown in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Most basic form of communication between humans is through language. Thus it can act as a medium of how we are feeling at any particular instance. For example, if we are angry at someone rather than just hitting him first we would express our feeling through our words. Thus from a conversion we can make out the different emotions a person is going through at that time. Apart from this social media texts can be used for determining the class of a person as described in (Ganesh H. B. et al., 2016b) . 
In this work we are doing 2 ordinal classification, 1 classification and 2 regression of different emotions that people exhibits through tweets obtained from twitter Bravo-Marquez et al., 2014; Mohammad et al., 2013) for three different languages namely Arabic, English and Spanish. The data set given has tweets from all the three languages for each subtask . There is a total of five subtask an emotion intensity regression task, an emotion intensity ordinal classification task, a sentiment intensity regression task, a sentiment analysis ordinal classification task and an emotion classification task.", "cite_spans": [ { "start": 473, "end": 501, "text": "(Ganesh H. B. et al., 2016b)", "ref_id": "BIBREF2" }, { "start": 670, "end": 697, "text": "Bravo-Marquez et al., 2014;", "ref_id": "BIBREF0" }, { "start": 698, "end": 720, "text": "Mohammad et al., 2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We used distributed representation (Le and Mikolov, 2014; Ganesh H. B. et al., 2016a) to create feature vector which can be feed as input to machine learning algorithms for classification and regression. Bag-of-words is one of the most common method used to create fixed length feature vectors but the ordering and semantics of the words are ignored in this method. By using Doc2Vec, an unsupervised learning algorithm, we can create fixed length features from variable length data. Thus by using Doc2Vec we can preserve the ordering as well as the semantics of data. Another method for word representation is distributional representation (Ganesh H. B. et al., 2018) which is an extension of co-occurrence based representation and have the same disadvantages as cooccurrence based methods.", "cite_spans": [ { "start": 35, "end": 57, "text": "(Le and Mikolov, 2014;", "ref_id": "BIBREF4" }, { "start": 58, "end": 85, "text": "Ganesh H. B. et al., 2016a)", "ref_id": "BIBREF1" }, { "start": 640, "end": 667, "text": "(Ganesh H. B. et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Once the feature vector is created it is pushed into machine learning algorithm for classification and regression. We have used Random Forest Tree for classification which is an ensemble learning method that creates a number of decision trees during training and gives an output class which appears most often. For regression we used Linear Regression which tries to fit a line between the actual and predicted values by minimizing the error sum of squares between them. The final model is obtained after doing hyper parameter tuning for Doc2Vec size and n estimator, max depth for Random Forest Tree which are fixed through a grid search method before pushing to machine learning algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Section 2 of this paper gives a brief introduction to corpus. Section 3 describes the theory of different methods used. Section 4 describes the method-ology used. Section 5 covers result and discussion. Section 6 talks about our conclusion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The given corpus consists of tweets from three different languages for all five subtasks. The languages are English, Arabic and Spanish. Each language have training, development and test data set . 
While building the model training data set was splitted into 80% for training and 20% for testing. Training and development data set consist of tweet id, tweet, affect dimension and intensity score while test data set has entries as none at intensity scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus", "sec_num": "2" }, { "text": "Doc2Vec is an unsupervised learning algorithm which gives a fixed length vector representation of a variable length text. The text can be a sentence, paragraph or document. It is an extension of Word2Vec in which a vector representation of words are given inorder to predict a word given the vector representation of context words are given. Word2Vec is inspired because it can be used to predict the next word in a sentence given the context word vectors thus capturing the semantics of the sentence even though the word vectors are randomly initialized. Instead of word vector we will use document vector to predict next word given context from a document in Doc2Vec. In document vector every document is represented by a column of unique vector called document matrix and words are represented by unique vectors called word matrix. Next word in a context is predicted by the concatenation or averaging of document and word vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Representation", "sec_num": "3.1" }, { "text": "In Doc2Vec the document vector is same for all context generated from same document but differs across documents. However word vector matrix is same for different document, i.e., the vector representation of same word across different document have the same vector representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Representation", "sec_num": "3.1" }, { "text": "For regression tasks Linear Regression was used. Linear Regression tries to fit a line between the actual and predicted values by minimizing the error sum of squares between them. In a Linear Regression problem there will be one dependent variable and an independent variable. A regression tries to verify two objective, firstly whether a satisfactory prediction can be made by a set of predictor variables and secondly which all variables play an important role in predicting the outcome variable. The estimated regression outputs are used to explain the connection between independent and dependent variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Regression", "sec_num": "3.2" }, { "text": "For classification problem we used Random Forest Tree. It is an ensemble learning method that creates a number of decision trees during training and gives an output class which appears most often. Advantage of Random Forest Tree is its ability to control over-fitting by taking an average of all the decision trees for prediction. If more than one algorithm of same or different kind are combined to classify an object such an algorithm is called ensemble algorithm. For example it may run a prediction on SVM, Naive Bayes and Decision Tree before taking the vote for classification of test object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Forest Tree", "sec_num": "3.3" }, { "text": "The corpus was obtained from SemEval2018 website. Once the data was obtained the first process was to extract tweets from the data for all the languages. 
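As a rough illustration of how the components described in Section 3 fit together, the sketch below (ours, not the authors' submitted code) builds Doc2Vec features for a few placeholder tweets and feeds them to Linear Regression and Random Forest models from scikit-learn; all variable names and parameter values here are illustrative assumptions.

```python
# Illustrative sketch of the representation + learning pipeline (Sections 3.1-3.4).
# Placeholder data; real runs would use the extracted tweets and gold annotations.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier

train_tweets = ["I am so happy today", "this traffic makes me angry"]  # hypothetical tweets
intensity_scores = [0.80, 0.65]        # hypothetical regression targets (emotion intensity)
emotion_labels = ["joy", "anger"]      # hypothetical classification targets

# One TaggedDocument per tweet so Doc2Vec learns a document vector for each.
tagged = [TaggedDocument(words=t.lower().split(), tags=[i]) for i, t in enumerate(train_tweets)]
d2v = Doc2Vec(tagged, vector_size=140, min_count=1, epochs=40)  # vector_size = the "size" parameter

# Fixed-length feature vectors for variable-length tweets.
X = [d2v.infer_vector(t.lower().split()) for t in train_tweets]

# Regression subtasks: Linear Regression on the Doc2Vec features.
reg = LinearRegression().fit(X, intensity_scores)

# Classification subtasks: Random Forest on the same features.
clf = RandomForestClassifier(n_estimators=40, max_depth=17).fit(X, emotion_labels)

test_vec = d2v.infer_vector("feeling sad and tired".split())
print(reg.predict([test_vec]), clf.predict([test_vec]))
```

In an actual run the placeholder lists would be replaced by the tweets extracted above, and the Doc2Vec size and forest settings would come from the hyper-parameter tuning described next.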
Once every thing was extracted from the document next step was to build a Doc2Vec model from the extracted tweets which will produce feature vectors which can be used as inputs for our machine learning techniques for regression and classification tasks. Gensim library was used to build the Doc2Vec model. Sklearn library was used for Random Forest Tree and Linear Regression.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment", "sec_num": "3.4" }, { "text": "Before fixing the Doc2Vec base model we did hyper parameter tuning for all subtasks in all languages. The parameters tuned for regression tasks was Doc2Vec size and for classification were Doc2Vec size and n estimator, max depth for Random Forest Tree. size of Doc2Vec means the dimensionality of the feature vector, i.e., in which dimension each document in a corpus is represented as. n estimator of Random Forest Tree means the number of decision trees used in the forest, i.e., before taking vote of a class how many different algorithms are to be run. max depth of Random Forest Tree gives the maximum depth of Tasks size n estimator max depth Task 1 140 --Task 2 250 40 17 Task 3 280 --Task 4 820 30 12 Task 5 150 10 8 the tree in algorithm. We did a grid search method to find out the optimum parameter values for each subtasks. For emotion intensity regression task (Task 1) and sentiment intensity regression task (Task 3) Doc2Vec size was varied from 10 to 1000 with an increment of 10 in each iteration. For emotion intensity ordinal classification task (Task 2), sentiment analysis ordinal classification task (Task 4) and emotion classification task (Task 5) Doc2Vec size was varied from 10 to 1000 with an increment of 10 in each iteration, n estimator of Random Forest Tree was varied from 10 to 150 with an increment of 10 in each iteration and max depth of Random Forest Tree was varied from 2 to 20 with an increment of 1 in each iteration. Variables used to estimate the ideal parameters for regression tasks were mean square error (MSE) and variance of Linear Regression algorithm. We selected those parameters that gave the least MSE value ans large variance value. Variables used to estimate the ideal parameters for classification tasks was accuracy of the Random Forest Tree algorithm. Once the parameters were fixed we build the model for each subtask and used it to predict the values for test data. Development data was used for hyper-parameter tuning while training data was used for building Doc2Vec model.", "cite_spans": [], "ref_spans": [ { "start": 649, "end": 727, "text": "Task 1 140 --Task 2 250 40 17 Task 3 280 --Task 4 820 30 12 Task 5 150", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiment", "sec_num": "3.4" }, { "text": "The ideal parameters obtained after hyperparameter tuning for each subtask for English is consolidated in Table 1 , Arabic is consolidated in Table 2 and Spanish is consolidated in Table 3 . 
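A compact sketch of the grid search described above is given next for the classification subtasks; it assumes tokenised tweets and labels are already in memory, and the helper names (doc2vec_features, tune_classifier) are our own. The regression subtasks would follow the same loop over the Doc2Vec size only, selecting on MSE and explained variance with Linear Regression instead of accuracy with Random Forest.

```python
# Sketch of the exhaustive grid search over Doc2Vec size and Random Forest
# n_estimators / max_depth, keeping the combination with the best
# development-set accuracy. The full grid is deliberately exhaustive and slow.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def doc2vec_features(train_tokens, dev_tokens, size):
    tagged = [TaggedDocument(words=t, tags=[i]) for i, t in enumerate(train_tokens)]
    d2v = Doc2Vec(tagged, vector_size=size, min_count=1, epochs=20)
    return ([d2v.infer_vector(t) for t in train_tokens],
            [d2v.infer_vector(t) for t in dev_tokens])

def tune_classifier(train_tokens, y_train, dev_tokens, y_dev):
    best_acc, best_params = 0.0, None
    for size in range(10, 1001, 10):        # Doc2Vec size: 10 .. 1000, step 10
        X_train, X_dev = doc2vec_features(train_tokens, dev_tokens, size)
        for n_est in range(10, 151, 10):    # n estimator: 10 .. 150, step 10
            for depth in range(2, 21):      # max depth: 2 .. 20, step 1
                clf = RandomForestClassifier(n_estimators=n_est, max_depth=depth)
                clf.fit(X_train, y_train)
                acc = accuracy_score(y_dev, clf.predict(X_dev))
                if acc > best_acc:
                    best_acc, best_params = acc, (size, n_est, depth)
    return best_acc, best_params
```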
The control parameter values obtained for the optimum parameters which in turn are used to build the model is consolidated in Table 4 for task 1 Table 5 for task 3 Table 6 for task 2 Table 7 for task 4 Table 8 for task 5", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 113, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 142, "end": 149, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 181, "end": 188, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 317, "end": 324, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 355, "end": 401, "text": "Table 6 for task 2 Table 7 for task 4 Table 8", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiment", "sec_num": "3.4" }, { "text": "The output of test data obtained by our model was compared with golden label available with SemEval2018 and the following results were ob-Tasks size n estimator max depth Task 1 20 --Task 2 50 90 19 Task 3 110 --Task 4 90 140 17 Task 5 80 1 18 tained. The metric used for evaluation is macro average F-Score and Pearson correlation coefficient. In macro average method precision and re-call on different sets of system is averaged. The harmonic mean of precision and recall will give us the F-Score. Such an obtained value is called macro F-Score. In Pearson correlation coefficient the linear correlation between two variables X1 and X2 is calculated. For emotion intensity regression task, emotion intensity ordinal classification task, sentiment intensity regression task and sentiment analysis ordinal classification task Pearson correlation coefficient is used as metric while for emotion classification task macro average F-Score is used as metric.", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 246, "text": "Task 1 20 --Task 2 50 90 19 Task 3 110 --Task 4 90 140 17 Task 5 80", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "For emotion intensity regression task on English tweets our model obtained an accuracy of 20.0% when compared with the golden label under Pearson correlation coefficient. When compared for individual emotions we got an accuracy of 21.6%, 21.0%, 11.2%, 26.2% for anger, fear, joy and sadness respectively. On Arabic tweets our model obtained an accuracy of 22.1% when compared with the golden label under Pearson correlation coefficient. When compared for individual emotions we got an accuracy of -0.3%, 17.9%, 31.5%, 39.3% for anger, fear, joy and sadness respectively. On Spanish tweets our model obtained an accuracy of 21.8% when compared with the golden label under Pearson correlation coefficient. When compared for individual emotions we got an accuracy of 24.1%, 21.4%, 14.2%, 27.3% for anger, fear, joy and sadness respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "For emotion intensity ordinal classification task on English tweets our model obtained an accuracy of 3.7% when compared with the golden label under Pearson correlation coefficient. When compared for individual emotions we got an accuracy of 2.6%, -0.2%, 6.7%, 5.5% for anger, fear, joy and sadness respectively. On Arabic tweets our model obtained an accuracy of 13.8% when compared with the golden label under Pearson correlation coefficient. When compared for individual emotions we got an accuracy of -6.2%, 5.0%, 28.7%, 27.5% for anger, fear, joy and sadness respectively. 
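For reference, the two evaluation metrics used in this comparison can be computed as in the sketch below; the gold and predicted values are made-up examples, with pearsonr taken from SciPy and f1_score from scikit-learn.

```python
# Illustrative computation of the evaluation metrics: Pearson correlation for the
# intensity tasks and macro-averaged F-score for emotion classification.
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

gold_intensity = [0.70, 0.30, 0.55, 0.90]   # made-up gold intensity scores
pred_intensity = [0.62, 0.41, 0.50, 0.77]   # made-up system predictions
r, _ = pearsonr(gold_intensity, pred_intensity)           # Pearson correlation coefficient

gold_labels = ["joy", "anger", "sadness", "joy"]          # made-up gold emotion labels
pred_labels = ["joy", "fear", "sadness", "anger"]         # made-up predicted labels
macro_f1 = f1_score(gold_labels, pred_labels, average="macro")  # macro-averaged F-score

print(round(r, 3), round(macro_f1, 3))
```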
On Spanish tweets our model obtained an accuracy of 2.5% when compared with the golden label under Pearson correlation coefficient. When compared for individual emotions we got an accuracy of 2.0%, -5.2%, 6.3%, 6.8% for anger, fear, joy and sadness respectively. For sentiment intensity regression task on English tweets our model obtained an accuracy of 28.1% when compared with the golden label under Pearson correlation coefficient. On Arabic tweets our model obtained an accuracy of 47.0% when compared with the golden label under Pearson correlation coefficient. On Spanish tweets our model obtained an accuracy of 19.3% when compared with the golden label under Pearson correlation coefficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "For sentiment analysis ordinal classification task on English tweets our model obtained an accuracy of 12.5% when compared with the golden label under Pearson correlation coefficient. On Arabic tweets our model obtained an accuracy of 38.3% when compared with the golden label under Pearson correlation coefficient. On Spanish tweets our model obtained an accuracy of 12.7% when compared with the golden label under Pearson correlation coefficient.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "For emotion classification task on English tweets our model obtained an accuracy of 14.8% when compared with the golden label under macro average F-Score. On Arabic tweets our model obtained an accuracy of 25.0% when compared with the golden label under macro average F-Score. On Spanish tweets our model obtained an accuracy of 6.0% when compared with the golden label under macro average F-Score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4" }, { "text": "The task was to analyze the 'Affects of Tweets' from tweets comprising of different emotions from three different languages. We used distributed representation (Doc2Vec) for creating feature vector which was passed as the input to machine learning algorithm such as Linear Regression for regression tasks and Random Forest Tree for classification tasks. The model was fixed after doing hyperparameter tuning and the results obtained using the model on test data was evaluated using golden label by SemEval2018. The results obtained with the model after comparing with the golden label using some evaluation metric have been discussed in the paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Meta-level sentiment models for big social data analysis. Knowledge-Based Systems", "authors": [ { "first": "Felipe", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Mendoza", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Poblete", "suffix": "" } ], "year": 2014, "venue": "", "volume": "69", "issue": "", "pages": "86--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felipe Bravo-Marquez, Marcelo Mendoza, and Bar- bara Poblete. 2014. Meta-level sentiment models for big social data analysis. 
Knowledge-Based Systems, 69:86-99.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Amrita CEN at SemEval-2016 Task 1: Semantic relation from word embeddings in higher dimension", "authors": [ { "first": "Barathi", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "H", "middle": [ "B" ], "last": "", "suffix": "" }, { "first": "M", "middle": [ "Anand" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "706--711", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barathi Ganesh H. B., M. Anand Kumar, and K. P. So- man. 2016a. Amrita CEN at SemEval-2016 Task 1: Semantic relation from word embeddings in higher dimension. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 706-711.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical semantics in context space : Amrita CEN@Author Profiling", "authors": [ { "first": "Barathi", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "H", "middle": [ "B" ], "last": "", "suffix": "" }, { "first": "M", "middle": [ "Anand" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2016, "venue": "CEUR Workshop Proceedings, 1609", "volume": "", "issue": "", "pages": "881--889", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barathi Ganesh H. B., M. Anand Kumar, and K. P. So- man. 2016b. Statistical semantics in context space : Amrita CEN@Author Profiling. In CEUR Work- shop Proceedings, 1609, pages 881-889.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "From vector space models to vector space models of semantics", "authors": [ { "first": "Barathi", "middle": [], "last": "Ganesh", "suffix": "" }, { "first": "H", "middle": [ "B" ], "last": "", "suffix": "" }, { "first": "M", "middle": [ "Anand" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [ "P" ], "last": "Soman", "suffix": "" } ], "year": 2018, "venue": "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10478 LNCS", "volume": "", "issue": "", "pages": "50--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barathi Ganesh H. B., M. Anand Kumar, and K. P. Soman. 2018. From vector space models to vec- tor space models of semantics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10478 LNCS, pages 50-60. Springer.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. 
In Inter- national Conference on Machine Learning, pages 1188-1196.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "SemEval-2018 Task 1: Affect in tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Understanding emotions: A dataset of tweets to study interactions between affect categories", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Pro- ceedings of the 11th Edition of the Language Re- sources and Evaluation Conference (LREC-2018), Miyazaki, Japan.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Nrc-canada: Building the stateof-the-art in sentiment analysis of tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1308.6242" ] }, "num": null, "urls": [], "raw_text": "Saif M Mohammad, Svetlana Kiritchenko, and Xiao- dan Zhu. 2013. Nrc-canada: Building the state- of-the-art in sentiment analysis of tweets. arXiv preprint arXiv:1308.6242.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "content": "", "text": "Tuned parameters for English.", "html": null, "type_str": "table" }, "TABREF1": { "num": null, "content": "
Tasks size n estimator max depth
Task 1 190 - -
Task 2 160 40 18
Task 3 120 - -
Task 4 320 140 16
Task 5 180 10 11
", "text": "Tuned parameters for Arabic.", "html": null, "type_str": "table" }, "TABREF2": { "num": null, "content": "
Variable English Arabic Spanish
MSE 0.03 0.03 0.04
Variance 0.03 0.03 0.08
", "text": "Tuned parameters for Spanish.", "html": null, "type_str": "table" }, "TABREF3": { "num": null, "content": "
Variable English Arabic Spanish
Accuracy 0.4883 0.4039 0.4047
", "text": "Control variable value for optimum parameters for Task 1.", "html": null, "type_str": "table" }, "TABREF4": { "num": null, "content": "
Variable English Arabic Spanish
MSE 0.03 0.04 0.05
Variance 0.06 0.06 0.02
", "text": "Control variable value for optimum parameters for Task 2.", "html": null, "type_str": "table" }, "TABREF5": { "num": null, "content": "
Variable English Arabic Spanish
Accuracy 0.28 0.27 0.28
", "text": "Control variable value for optimum parameters for Task 3.", "html": null, "type_str": "table" }, "TABREF6": { "num": null, "content": "
Variable English Arabic Spanish
Accuracy 0.9525 0.9550 0.9401
", "text": "Control variable value for optimum parameters for Task 4.", "html": null, "type_str": "table" }, "TABREF7": { "num": null, "content": "", "text": "Control variable value for optimum parameters for Task 5.", "html": null, "type_str": "table" } } } }