{
"paper_id": "D10-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:36.602314Z"
},
"title": "Improving Gender Classification of Blog Authors",
"authors": [
{
"first": "Arjun",
"middle": [],
"last": "Mukherjee",
"suffix": "",
"affiliation": {},
"email": "amukherj@cs.uic.edu"
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "liub@cs.uic.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The problem of automatically classifying the gender of a blog author has important applications in many commercial domains. Existing systems mainly use features such as words, word classes, and POS (part-ofspeech) n-grams, for classification learning. In this paper, we propose two new techniques to improve the current result. The first technique introduces a new class of features which are variable length POS sequence patterns mined from the training data using a sequence pattern mining algorithm. The second technique is a new feature selection method which is based on an ensemble of several feature selection criteria and approaches. Empirical evaluation using a real-life blog data set shows that these two techniques improve the classification accuracy of the current state-ofthe-art methods significantly.",
"pdf_parse": {
"paper_id": "D10-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "The problem of automatically classifying the gender of a blog author has important applications in many commercial domains. Existing systems mainly use features such as words, word classes, and POS (part-ofspeech) n-grams, for classification learning. In this paper, we propose two new techniques to improve the current result. The first technique introduces a new class of features which are variable length POS sequence patterns mined from the training data using a sequence pattern mining algorithm. The second technique is a new feature selection method which is based on an ensemble of several feature selection criteria and approaches. Empirical evaluation using a real-life blog data set shows that these two techniques improve the classification accuracy of the current state-ofthe-art methods significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Weblogs, commonly known as blogs, refer to online personal diaries which generally contain informal writings. With the rapid growth of blogs, their value as an important source of information is increasing. A large amount of research work has been devoted to blogs in the natural language processing (NLP) and other communities. There are also many commercial companies that exploit information in blogs to provide value-added services, e.g., blog search, blog topic tracking, and sentiment analysis of people's opinions on products and services. Gender classification of blog authors is one such study, which also has many commercial applications. For example, it can help the user find what topics or products are most talked about by males and females, and what products and services are liked or disliked by men and women. Knowing this information is crucial for market intelligence because the information can be exploited in targeted advertising and also product development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the past few years, several authors have studied the problem of gender classification in the natural language processing and linguistic communities. However, most existing works deal with formal writings, e.g., essays of people, the Reuters news corpus and the British National Corpus (BNC). Blog posts differ from such text in many ways. For instance, blog posts are typically short and unstructured, and consist of mostly informal sentences, which can contain spurious information and are full of grammar errors, abbreviations, slang words and phrases, and wrong spellings. Due to these reasons, gender classification of blog posts is a harder problem than gender classification of traditional formal text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work has also attempted gender classification of blog authors using features such as content words, dictionary based content analysis results, POS (part-of-speech) tags and feature selection along with a supervised learning algorithm (Schler et al., 2006; Argamon et al., 2007; Yan and Yan, 2006) . This paper improves these existing methods by proposing two novel techniques. The first technique adds a new class of pattern based features to learning, which are not used in any existing work. The patterns are frequent sequences of POS tags which can capture complex stylistic characteristics of male and female authors. We note that these patterns are very different from the traditional n-grams because the patterns are of variable lengths and need to satisfy some criteria in order for them to represent significant regularities. We will discuss them in detail in Section 3.5.",
"cite_spans": [
{
"start": 241,
"end": 262,
"text": "(Schler et al., 2006;",
"ref_id": "BIBREF26"
},
{
"start": 263,
"end": 284,
"text": "Argamon et al., 2007;",
"ref_id": "BIBREF2"
},
{
"start": 285,
"end": 303,
"text": "Yan and Yan, 2006)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second technique is a new feature selection algorithm which uses an ensemble of feature selection criteria and methods. It is well known that each individual feature selection criterion and method can be biased and tends to favor certain types of features. A combination of them should be able to capture the most useful or discriminative features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experimental results based on a real life blog data set collected from a large number of blog hosting sites show that the two new techniques enable classification algorithms to significantly improve the accuracy of the current stateof-the-art techniques (Argamon et al., 2007; Schler et al., 2006; Yan and Yan, 2006) . We also compare with two publicly available systems, Gender Genie (BookBlog, 2007) and Gender Guesser (Krawetz, 2006) . Both systems implemented variations of the method given in (Argamon et al., 2003) . Here, the improvement of our techniques is even greater.",
"cite_spans": [
{
"start": 258,
"end": 280,
"text": "(Argamon et al., 2007;",
"ref_id": "BIBREF2"
},
{
"start": 281,
"end": 301,
"text": "Schler et al., 2006;",
"ref_id": "BIBREF26"
},
{
"start": 302,
"end": 320,
"text": "Yan and Yan, 2006)",
"ref_id": "BIBREF32"
},
{
"start": 389,
"end": 405,
"text": "(BookBlog, 2007)",
"ref_id": null
},
{
"start": 425,
"end": 440,
"text": "(Krawetz, 2006)",
"ref_id": "BIBREF20"
},
{
"start": 502,
"end": 524,
"text": "(Argamon et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been several recent papers on gender classification of blogs (e.g., Schler et al., 2006 , Argamon et al., 2007 Yan and Yan, 2006; Nowson et al., 2005) . These systems use function/content words, POS tag features, word classes (Schler et al., 2006) , content word classes (Argamon et al., 2007) , results of dictionary based content analysis, POS unigram (Yan and Yan, 2006) , and personality types (Nowson et al., 2005) to capture stylistic behavior of authors' writings for classifying gender. (Koppel et al. 2002 ) also used POS n-grams together with content words on the British National Corpus (BNC). (Houvardas and Stamatatos, 2006) even applied character (rather than word or tag) n-grams to capture stylistic features for authorship classification of news articles in Reuters.",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "Schler et al., 2006",
"ref_id": "BIBREF26"
},
{
"start": 99,
"end": 121,
"text": ", Argamon et al., 2007",
"ref_id": "BIBREF2"
},
{
"start": 122,
"end": 140,
"text": "Yan and Yan, 2006;",
"ref_id": "BIBREF32"
},
{
"start": 141,
"end": 161,
"text": "Nowson et al., 2005)",
"ref_id": "BIBREF23"
},
{
"start": 237,
"end": 258,
"text": "(Schler et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 282,
"end": 304,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 365,
"end": 384,
"text": "(Yan and Yan, 2006)",
"ref_id": "BIBREF32"
},
{
"start": 409,
"end": 430,
"text": "(Nowson et al., 2005)",
"ref_id": "BIBREF23"
},
{
"start": 506,
"end": 525,
"text": "(Koppel et al. 2002",
"ref_id": "BIBREF19"
},
{
"start": 616,
"end": 648,
"text": "(Houvardas and Stamatatos, 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, these works use only one or a subset of the classes of features. None of them uses all features for classification learning. Given the complexity of blog posts, it makes sense to apply all classes of features jointly in order to classify genders. Moreover, having many feature classes is very useful as they provide features with varied granularities and diversities. However, this also results in a huge number of features and many of them are redundant and may obscure classification. Feature selection is thus needed. Following the idea, this paper proposes a new ensemble feature selection method which is capable of extracting good features from different feature classes using multiple criteria.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We also note some less relevant literature. For example, (Tannen, 1990 ) deals with gender differences in \"conversational style\" and in \"formal written essays\", and (Gefen and Straub, 1997) reports differences in perception of males and females in the use of emails.",
"cite_spans": [
{
"start": 57,
"end": 70,
"text": "(Tannen, 1990",
"ref_id": "BIBREF29"
},
{
"start": 165,
"end": 189,
"text": "(Gefen and Straub, 1997)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our new POS pattern features are related to POS n-grams used in (Koppel et al., 2002; Argamon et al., 2007) , which considered POS 3-grams, 2-grams and unigrams as features. As shown in (Baayen et. al. 1996) , POS n-grams are very effective in capturing the fine-grained stylistic and heavier syntactic information. In this work, we go further by finding POS sequence patterns. As discussed in the introduction, our patterns are entirely different from POS n-grams. First of all, they are of variable lengths depending on whatever lengths can catch the regularities. They also need to satisfy some constraints to ensure that they truly represent some significant regularity of male or female writings. Furthermore, our POS sequence patterns can take care of n-grams and capture additional sequence regularities. These automatically mined pattern features are thus more discriminating for classification.",
"cite_spans": [
{
"start": 64,
"end": 85,
"text": "(Koppel et al., 2002;",
"ref_id": "BIBREF19"
},
{
"start": 86,
"end": 107,
"text": "Argamon et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 186,
"end": 207,
"text": "(Baayen et. al. 1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There are different classes of features that have been experimented for gender classification, e.g., F-measure, stylistic features, gender preferential features, factor analysis and word classes (Nowson et al., 2005; Schler et al., 2006; Corney et al., 2002; Argamon et al., 2007) . We use all these existing features and also propose a new class of features that are POS sequence patterns, which replace existing POS n-grams. Also, as mentioned before, using all feature classes gives us features with varied granularities. Upon extracting all these classes of features, a new ensemble feature selection (EFS) algorithm is proposed to select a subset of good or discriminative features.",
"cite_spans": [
{
"start": 195,
"end": 216,
"text": "(Nowson et al., 2005;",
"ref_id": "BIBREF23"
},
{
"start": 217,
"end": 237,
"text": "Schler et al., 2006;",
"ref_id": "BIBREF26"
},
{
"start": 238,
"end": 258,
"text": "Corney et al., 2002;",
"ref_id": "BIBREF8"
},
{
"start": 259,
"end": 280,
"text": "Argamon et al., 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering and Mining",
"sec_num": "3"
},
{
"text": "Below, we first introduce the existing features, and then present the proposed class of new pattern based features and how to discover them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Engineering and Mining",
"sec_num": "3"
},
{
"text": "The F-measure feature was originally proposed in (Heylighen and Dewaele, 2002) and has been used in (Nowson et al., 2005) with good results. Note that F-measure here is not the F-score or Fmeasure used in text classification or information retrieval for measuring the classification or retrieval effectiveness (or accuracy).",
"cite_spans": [
{
"start": 49,
"end": 78,
"text": "(Heylighen and Dewaele, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 100,
"end": 121,
"text": "(Nowson et al., 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "F-measure",
"sec_num": "3.1"
},
{
"text": "F-measure explores the notion of implicitness of text and is a unitary measure of text's relative contextuality (implicitness), as opposed to its formality (explicitness). Contextuality and formality can be captured by certain parts of speech. A lower score of F-measure indicates contextuality, marked by greater relative use of pronouns, verbs, adverbs, and interjections; a higher score of Fmeasure indicates formality, represented by greater use of nouns, adjectives, prepositions, and articles. F-measure is defined based on the frequency of the POS usage in a text (freq.x below means the frequency of the part-of-speech x): (Heylighen and Dewaele, 2002) applied the Fmeasure to a corpus with known author genders and found a distinct difference between the sexes. Females scored lower preferring a more contextual style while males scored higher preferring a more formal style. F-measure values for male and female writings reported in (Nowson et al., 2005) also demonstrated a similar trend. In our work, we also use F-measure as one of the features.",
"cite_spans": [
{
"start": 631,
"end": 660,
"text": "(Heylighen and Dewaele, 2002)",
"ref_id": "BIBREF14"
},
{
"start": 943,
"end": 964,
"text": "(Nowson et al., 2005)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "F-measure",
"sec_num": "3.1"
},
{
"text": "F = 0.5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F-measure",
"sec_num": "3.1"
},
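To make the definition concrete, here is a minimal Python sketch of the F-measure computed from a POS-tagged text; the Penn Treebank tag-to-category mapping and the use of token-percentage frequencies are assumptions made for this illustration, not details taken from the paper.

```python
# Sketch: F-measure of Heylighen and Dewaele (2002) from POS tags.
# The tag-to-category mapping below is an illustrative assumption.
from collections import Counter

COARSE = {
    "NN": "noun", "NNS": "noun", "NNP": "noun", "NNPS": "noun",
    "JJ": "adjective", "JJR": "adjective", "JJS": "adjective",
    "IN": "preposition", "DT": "article",
    "PRP": "pronoun", "PRP$": "pronoun", "WP": "pronoun",
    "VB": "verb", "VBD": "verb", "VBG": "verb", "VBN": "verb",
    "VBP": "verb", "VBZ": "verb",
    "RB": "adverb", "RBR": "adverb", "RBS": "adverb",
    "UH": "interjection",
}

def f_measure(pos_tags):
    """F = 0.5 * [(freq.noun + freq.adj + freq.prep + freq.article)
                  - (freq.pron + freq.verb + freq.adverb + freq.interj) + 100],
    where freq.x is the percentage of tokens in category x."""
    if not pos_tags:
        return 50.0
    counts = Counter(COARSE.get(t) for t in pos_tags)
    freq = {cat: 100.0 * counts[cat] / len(pos_tags) for cat in set(COARSE.values())}
    formal = freq["noun"] + freq["adjective"] + freq["preposition"] + freq["article"]
    contextual = freq["pronoun"] + freq["verb"] + freq["adverb"] + freq["interjection"]
    return 0.5 * (formal - contextual + 100.0)

# Higher values indicate a more formal style, lower values a more contextual one.
print(f_measure(["DT", "NN", "IN", "DT", "JJ", "NN", "VBZ", "RB", "JJ"]))
```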
{
"text": "These are features which capture people's writing styles. The style of writing is typically captured by three types of features: part of speech, words, and in the blog context, words such as lol, hmm, and smiley that appear with high frequency. In this work, we use words and blog words as stylistic features. Part of speech features are mined using our POS sequence pattern mining algorithm. POS n-grams can also be used as features. However, since we mine all POS sequence patterns and use them as features, most discriminative POS ngrams are already covered. In Section 5, we will also show that POS n-grams do not perform as well as our POS sequence patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stylistic Features",
"sec_num": "3.2"
},
{
"text": "Gender preferential features consist of a set of signals that has been used in an email gender classification task (Corney et al., 2002) . These features come from various studies that have been undertaken on the issue of gender and language use (Schiffman, 2002) . It was suggested by these studies and also various other works that women's language makes more frequent use of emotionally intensive adverbs and adjectives like \"so\", \"terribly\", \"awfully\", \"dreadfully\" and women's language is more punctuated. On the other hand, men's conversational patterns express \"independence\" (Corney et al., 2002) . In brief, the language expressed by males is more proactive at solving problems while the language used by females is more reactive to the contribution of others -agreeing, understanding and supporting. We used the gender preferential features listed in Table 1, which indicate adjectives and adverbs based on the presence of suffixes and apologies as used in (Corney et al., 2002) . The feature value assignment will be discussed in Section 5. ",
"cite_spans": [
{
"start": 115,
"end": 136,
"text": "(Corney et al., 2002)",
"ref_id": "BIBREF8"
},
{
"start": 246,
"end": 263,
"text": "(Schiffman, 2002)",
"ref_id": "BIBREF25"
},
{
"start": 583,
"end": 604,
"text": "(Corney et al., 2002)",
"ref_id": "BIBREF8"
},
{
"start": 967,
"end": 988,
"text": "(Corney et al., 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Preferential Features",
"sec_num": "3.3"
},
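As a rough illustration (not the paper's exact feature set, which is listed in Table 1), suffix-based gender preferential features could be extracted as follows; the suffix list, the apology words, and the function name are assumptions made for this example.

```python
# Illustrative sketch of suffix-based gender preferential features
# (in the spirit of Corney et al., 2002).  The suffix and apology lists
# below are assumed for this example; the actual list is in Table 1.
SUFFIXES = ["able", "al", "ful", "ible", "ic", "ive", "less", "ly", "ous"]
APOLOGIES = {"sorry", "apology", "apologies", "apologize", "apologise"}

def gender_preferential_features(tokens):
    tokens = [t.lower().strip(".,!?") for t in tokens]
    feats = {"ends_with_" + s: sum(1 for t in tokens if len(t) > len(s) and t.endswith(s))
             for s in SUFFIXES}
    feats["apology"] = sum(1 for t in tokens if t in APOLOGIES)
    return feats

print(gender_preferential_features("I am terribly sorry , it was so awfully careless of me".split()))
```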
{
"text": "Factor or word factor analysis refers to the process of finding groups of similar words that tend to occur in similar documents. This process is referred to as meaning extraction in (Chung and Pennebaker, 2007) . Word lists for twenty factors, along with suggested labels/headings (for reference) were used as features in (Argamon et al., 2007) . Here we list some of those features (word classes) in Table 2 . For the detailed list of such word classes, the reader is referred to (Argamon et al., 2007) . We also used these word classes as features in our work. In addition, we added three more new word classes implying positive, negative and emotional connotations and used them as features in our experiments. These are listed in Table 3 .",
"cite_spans": [
{
"start": 182,
"end": 210,
"text": "(Chung and Pennebaker, 2007)",
"ref_id": "BIBREF7"
},
{
"start": 322,
"end": 344,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 481,
"end": 503,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 401,
"end": 408,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 734,
"end": 741,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Factor Analysis and Word Classes",
"sec_num": "3.4"
},
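A small sketch of how such word-class features can be turned into document features (Boolean presence of any class member); the abbreviated word lists are excerpts from Tables 2 and 3, and the function name is ours.

```python
# Sketch: Boolean word-class (factor) features.  Only a few words per class
# are shown; the full lists are in Tables 2 and 3.
WORD_CLASSES = {
    "Conversation": {"know", "people", "think", "tell", "feel", "friends"},
    "Home": {"woke", "home", "sleep", "today", "eat", "tired"},
    "Family": {"years", "family", "mother", "children", "father"},
    "Positive": {"amazing", "beautiful", "great", "excellent", "enjoy"},
    "Negative": {"wrong", "stupid", "bad", "awful", "horrible"},
    "Emotion": {"angry", "happy", "sad", "excited", "lonely"},
}

def word_class_features(tokens):
    present = {t.lower() for t in tokens}
    # A class feature fires if any word of that class occurs in the document.
    return {cls: int(bool(words & present)) for cls, words in WORD_CLASSES.items()}

print(word_class_features("Today I woke up feeling amazing and happy".split()))
```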
{
"text": "Factor Words tion know, people, think, person, tell, feel, friends, talk, new, talking, mean, ask, understand, feelings, care, thinking, friend, relationship, realize, question, answer, saying Home woke, home, sleep, today, eat, tired, wake, watch, watched, dinner, ate, bed, day, house, tv, early, boring, yesterday, watching, sit Family years, family, mother, children, father, kids, parents, old, year, child, son, married, sister, dad, brother, moved, age, young, months, three, wife, living, college, four, high, five, died, six, baby, boy, spend, Christmas Food / Clothes food, eating, weight, lunch, water, hair, life, white, wearing, color, ice, red, fat, body, black, clothes, hot, drink, wear, blue, minutes, shirt, green, coffee, total, store, shopping Romance forget, forever, remember, gone, true, face, spent, times, love, cry, hurt, wish, loved agree, amazing, appealing, attraction, bargain, beaming, beautiful, best, better, boost, breakthrough, breeze, brilliant, brimming, charming, clean, clear, colorful, compliment, confidence, cool, courteous, cuddly, dazzling, delicious, delightful, dynamic, easy, ecstatic, efficient, enhance, enjoy, enormous, excellent, exotic, expert, exquisite, flair, free, generous, genius, great, graceful, heavenly, ideal, immaculate, impressive, incredible, inspire, luxurious, outstanding, royal, speed, splendid, spectacular, superb, sweet, sure, supreme, terrific, treat, treasure, ultra, unbeatable, ultimate, unique, wow, zest Negative wrong, stupid, bad, evil, dumb, foolish, grotesque, harm, fear, horrible, idiot, lame, mean, poor, heinous, hideous, deficient, petty, awful, hopeless, fool, risk, immoral, risky, spoil, spoiled, malign, vicious, wicked, fright, ugly, atrocious, moron, hate, spiteful, meager, malicious, lacking Emotion aggressive, alienated, angry, annoyed, anxious, careful, cautious, confused, curious, depressed, determined, disappointed, discouraged, disgusted, ecstatic, embarrassed, enthusiastic, envious, excited, exhausted, frightened, frustrated, guilty, happy, helpless, hopeful, hostile, humiliated, hurt, hysterical, innocent, interested, jealous, lonely, mischievous, miserable, optimistic, paranoid, peaceful, proud, puzzled, regretful, relieved, sad, satisfied, shocked, shy, sorry, surprised, suspicious, thoughtful, undecided, withdrawn We now present the proposed POS sequence pattern features and the mining algorithm. This results in a new feature class. A POS sequence pattern is a sequence of consecutive POS tags that satisfy some constraints (discussed below). We used (Tsuruoka and Tsujii, 2005) as our POS tagger.",
"cite_spans": [
{
"start": 13,
"end": 859,
"text": "tion know, people, think, person, tell, feel, friends, talk, new, talking, mean, ask, understand, feelings, care, thinking, friend, relationship, realize, question, answer, saying Home woke, home, sleep, today, eat, tired, wake, watch, watched, dinner, ate, bed, day, house, tv, early, boring, yesterday, watching, sit Family years, family, mother, children, father, kids, parents, old, year, child, son, married, sister, dad, brother, moved, age, young, months, three, wife, living, college, four, high, five, died, six, baby, boy, spend, Christmas Food / Clothes food, eating, weight, lunch, water, hair, life, white, wearing, color, ice, red, fat, body, black, clothes, hot, drink, wear, blue, minutes, shirt, green, coffee, total, store, shopping Romance forget, forever, remember, gone, true, face, spent, times, love, cry, hurt, wish, loved",
"ref_id": null
},
{
"start": 860,
"end": 2330,
"text": "agree, amazing, appealing, attraction, bargain, beaming, beautiful, best, better, boost, breakthrough, breeze, brilliant, brimming, charming, clean, clear, colorful, compliment, confidence, cool, courteous, cuddly, dazzling, delicious, delightful, dynamic, easy, ecstatic, efficient, enhance, enjoy, enormous, excellent, exotic, expert, exquisite, flair, free, generous, genius, great, graceful, heavenly, ideal, immaculate, impressive, incredible, inspire, luxurious, outstanding, royal, speed, splendid, spectacular, superb, sweet, sure, supreme, terrific, treat, treasure, ultra, unbeatable, ultimate, unique, wow, zest Negative wrong, stupid, bad, evil, dumb, foolish, grotesque, harm, fear, horrible, idiot, lame, mean, poor, heinous, hideous, deficient, petty, awful, hopeless, fool, risk, immoral, risky, spoil, spoiled, malign, vicious, wicked, fright, ugly, atrocious, moron, hate, spiteful, meager, malicious, lacking Emotion aggressive, alienated, angry, annoyed, anxious, careful, cautious, confused, curious, depressed, determined, disappointed, discouraged, disgusted, ecstatic, embarrassed, enthusiastic, envious, excited, exhausted, frightened, frustrated, guilty, happy, helpless, hopeful, hostile, humiliated, hurt, hysterical, innocent, interested, jealous, lonely, mischievous, miserable, optimistic, paranoid, peaceful, proud, puzzled, regretful, relieved, sad, satisfied, shocked, shy, sorry, surprised, suspicious, thoughtful, undecided, withdrawn",
"ref_id": null
},
{
"start": 2570,
"end": 2597,
"text": "(Tsuruoka and Tsujii, 2005)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Factor Analysis and Word Classes",
"sec_num": "3.4"
},
{
"text": "As shown in (Baayen et. al., 1996) , POS ngrams are good at capturing the heavy stylistic and syntactic information. Instead of using all such n-grams, we want to discover all those patterns that represent true regularities, and we also want to have flexible lengths (not fixed lengths as in n-grams). POS sequence patterns serve these purposes. Its mining algorithm mines all such patterns that satisfy the user-specified minimum support (minsup) and minimum adherence (minadherence) thresholds or constraints. These thresholds ensure that the mined patterns represent significant regularities.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Baayen et. al., 1996)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "The main idea of the algorithm is to perform a level-wise search for such patterns, which are POS sequences with minsup and minadherence. The support of a pattern is simply the proportion of documents that contain the pattern. If a pattern appears too few times, it is probably spurious. A sequence is called a frequent sequence if it satisfies minsup. The adherence of a pattern is measured using the symmetrical conditional probability (SCP) given in (Silva et al., 1999) . The SCP of a sequence with two elements |xy| is the product of the conditional probability of each given the other,",
"cite_spans": [
{
"start": 453,
"end": 473,
"text": "(Silva et al., 1999)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": ") ( ) ( ) , ( ) | ( ) | ( ) , ( 2 y P x P y x P x y P y x P y x SCP = =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "Given a consecutive sequence of POS tags |x 1 \u2026x n |, called a POS sequence of length n, a dispersion point defines two subparts of the sequence. A sequence of length n contains n-1 possible dispersion points. The SCP of the sequence |x 1 \u2026x n | given the dispersion point (denoted by *) |x 1 \u2026x n-1 *x n | is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": ") ( ) ... ( ) ... ( ) ), ... (( 1 1 2 1 1 1 n n n n n x P x x P x x P x x x SCP \u2212 \u2212 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "The SCP measure can be extended so that all possible dispersion points are accounted for.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "Hence the fairSCP of the sequence |x 1 \u2026x n | is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "\u2211 \u2212 = + \u2212 = 1 1 1 1 2 1 1 ) ... ( ) ... ( 1 1 ) ... ( ) ... ( n i n i i n n x x P x x P n x x P x x fairSCP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "fairSCP measures the adherence strength of POS tags in a sequence. The higher the fairSCP value, the more dominant is the sequence. Our POS sequence pattern mining algorithm is given below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
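The sketch below shows how fairSCP could be computed for a candidate POS sequence; estimating P(.) as the proportion of documents containing the subsequence is an assumption made here for consistency with the document-level support defined above.

```python
# Sketch: fairSCP of a POS tag sequence.  P(.) is estimated as the proportion
# of documents containing the contiguous subsequence (an assumption consistent
# with the document-level support used above).
def doc_prob(docs, seq):
    seq, n = tuple(seq), len(seq)
    hits = sum(1 for tags in docs
               if any(tuple(tags[i:i + n]) == seq for i in range(len(tags) - n + 1)))
    return hits / float(len(docs))

def fair_scp(docs, seq):
    n = len(seq)
    if n < 2:
        return 1.0  # adherence is not defined for a single tag
    num = doc_prob(docs, seq) ** 2
    den = sum(doc_prob(docs, seq[:i]) * doc_prob(docs, seq[i:]) for i in range(1, n)) / (n - 1)
    return num / den if den > 0 else 0.0

docs = [["PRP", "VBP", "RB", "JJ"], ["PRP", "VBP", "JJ"], ["DT", "NN", "VBZ", "RB", "JJ"]]
print(fair_scp(docs, ["PRP", "VBP"]))   # strong adherence: PRP is always followed by VBP here
print(fair_scp(docs, ["RB", "NN"]))     # never occurs together: fairSCP = 0
```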
{
"text": "Input: Corpus D = {d | d is a document containing a sequence of POS tags}, Tagset T = {t | t is a POS tag}, and the user specified minimum support (minsup) and minimum adherence (minadherence). Output: All POS sequence patterns (stored in SP)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "mined from D that satisfy minsup and minadherence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "Algorithm mine-POS-pats(D, T, minsup, minadherence)\n1.  C_1 \u2190 count each t (\u2208 T) in D;\n2.  F_1 \u2190 {f \u2208 C_1 | f.count / n \u2265 minsup};   // n = |D|\n3.  SP_1 \u2190 F_1;\n4.  for (k = 2; k \u2264 MAX-length; k++)\n5.      C_k \u2190 candidate-gen(F_{k-1});\n6.      for each document d \u2208 D\n7.          for each candidate POS sequence c \u2208 C_k\n8.              if (c is contained in d)\n9.                  c.count++;\n10.         endfor\n11.     endfor\n12.     F_k \u2190 {c \u2208 C_k | c.count / n \u2265 minsup};\n13.     SP_k \u2190 {f \u2208 F_k | fairSCP(f) \u2265 minadherence};\n14. endfor\n15. return SP \u2190 union of all SP_k;\n\nFunction candidate-gen(F_{k-1})\n1.  C_k \u2190 \u2205;\n2.  for each POS n-gram c \u2208 F_{k-1}\n3.      for each t \u2208 T\n4.          c\u2032 \u2190 addsuffix(c, t);   // adds tag t to c as suffix\n5.          add c\u2032 to C_k;\n6.      endfor\n7.  endfor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "We now briefly explain the mine-POS-pats algorithm. The algorithm is based on level-wise search. It generates all POS patterns by making multiple passes over data. In the first pass, it counts the support of individual POS tags and determines which of them have minsup (line 2). Multiple occurrences of a tag in a document are counted only once. Those in F 1 are called length 1 frequent sequences. All length 1 sequence patterns are stored in SP 1 . Since adherence is not defined for a single element, we have SP 1 = F 1 (line 3). In each subsequent pass k until MAX-length (which is the maximum length limit of the mined patterns), there are three steps: 1. Using F k-1 (frequent sequences found in the (k-1) pass) as a set of seeds, the algorithm applies candidate-gen() to generate all possibly frequent POS k-sequences (sequences of length k) (line 5). Those infrequent sequences (which are not in F k-1 ) are discarded as adding more POS tags will not make them frequent based on the downward closure property in (Agrawal and Srikant, 1994) . 2. D is then scanned to compute the actual support count of each candidate in C k (lines 6-11). 3. At the end of each scan, it determines which candidate sequences have minsup and minadherence (lines 12 -13). We compute F k and SP k separately because adherence does not have the downward closure property as the support. Finally, the algorithm returns the set of all sequence patterns (line 15) that meet the minsup and minadherence thresholds.",
"cite_spans": [
{
"start": 1020,
"end": 1047,
"text": "(Agrawal and Srikant, 1994)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
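A compact Python sketch of the mine-POS-pats procedure under the same assumptions as the pseudocode above (document-level support, fairSCP adherence); the helper names and data structures are ours, not the authors'.

```python
# Sketch of mine-POS-pats: level-wise (Apriori-style) mining of POS sequence
# patterns satisfying minsup and minadherence.  Patterns are tuples of tags.
from collections import defaultdict

def mine_pos_pats(docs, tagset, minsup, minadherence, max_length=7):
    n_docs = float(len(docs))
    support = defaultdict(int)   # pattern -> number of documents containing it

    def count(cands):
        for tags in docs:
            tags = tuple(tags)
            for c in cands:
                if any(tags[i:i + len(c)] == c for i in range(len(tags) - len(c) + 1)):
                    support[c] += 1

    def prob(seq):
        return support[seq] / n_docs

    def fair_scp(seq):
        k = len(seq)
        den = sum(prob(seq[:i]) * prob(seq[i:]) for i in range(1, k)) / (k - 1)
        return (prob(seq) ** 2) / den if den > 0 else 0.0

    # Pass 1: frequent single tags; SP_1 = F_1 since adherence needs length >= 2.
    count([(t,) for t in tagset])
    F = [(t,) for t in tagset if prob((t,)) >= minsup]
    SP = list(F)
    for k in range(2, max_length + 1):
        cands = [f + (t,) for f in F for t in tagset]   # candidate-gen: extend by one tag
        count(cands)
        F = [c for c in cands if prob(c) >= minsup]
        SP.extend(c for c in F if fair_scp(c) >= minadherence)
        if not F:
            break
    return SP

# Toy usage with the thresholds used in the paper (minsup = 30%, minadherence = 20%).
docs = [["PRP", "VBP", "RB", "JJ"], ["PRP", "VBP", "JJ"], ["DT", "NN", "VBZ", "RB", "JJ"]]
print(mine_pos_pats(docs, {"PRP", "VBP", "RB", "JJ", "DT", "NN", "VBZ"}, 0.3, 0.2))
```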
{
"text": "The candidate-gen() function generates all possibly frequent k-sequences by adding each POS tag t to c as suffix. c is a k-1-sequence in F k-1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "In our experiments, we used MAX-length = 7, minsup = 30%, and minadherence = 20% to mine all POS sequence patterns. All the mined patterns are used as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "Finally, it is worthwhile to note that mine-POS-pat is very similar to the well-known GSP algorithm (Srikant and Agrawal, 1996) . Likewise, it has linear scale up with data size. If needed, one can use MapReduce (Dean and Ghemawat, 2004) with suitable modifications in mine-POS-pats to speed things up by distributing to multiple machines for large corpora. Moreover, mining is a part of preprocessing of the algorithm and its complexity does not affect the final prediction, as it will be later shown that for model building and prediction, standard machine learning methods are used.",
"cite_spans": [
{
"start": 100,
"end": 127,
"text": "(Srikant and Agrawal, 1996)",
"ref_id": "BIBREF28"
},
{
"start": 212,
"end": 237,
"text": "(Dean and Ghemawat, 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversa",
"sec_num": null
},
{
"text": "Since all classes of features discussed in Section 3 are useful, we want to employ all of them. This results in a huge number of features. Many of them are redundant and even harmful. Feature selection thus becomes important. There are two common approaches to feature selection: the filter and the wrapper approaches (Blum and Langley, 1997; Kohavi and John, 1997) . In the filter approach, features are first ranked based on a feature selection criterion such as information gain, chisquare (\u03c7 2 ) test, and mutual information. A set of top ranked features are selected. On the contrary, the wrapper model chooses features and adds to the current feature pool based on whether the new features improve the classification accuracy.",
"cite_spans": [
{
"start": 318,
"end": 342,
"text": "(Blum and Langley, 1997;",
"ref_id": "BIBREF4"
},
{
"start": 343,
"end": 365,
"text": "Kohavi and John, 1997)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Feature Selection",
"sec_num": "4"
},
{
"text": "Both these approaches have drawbacks. While the wrapper approach becomes very time consuming and impractical when the number of features is large as each feature is tested by building a new classifier. The filter approach often uses only one feature selection criterion (e.g., information gain, chi-square, or mutual information). Due to the bias of each criterion, using only a single one may result in missing out some good features which can rank high based on another criterion. In this work, we developed a novel feature selection method that uses multiple criteria, and combines both the wrapper and the filter approaches. Our method is called ensemble feature selection (EFS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Feature Selection",
"sec_num": "4"
},
{
"text": "EFS takes the best of both worlds. It first uses a number of feature selection criteria to rank the features following the filter model. Upon ranking, the algorithm generates some candidate feature subsets which are used to find the final feature set based on classification accuracy using the wrapper model. Since our framework generates much fewer candidate feature subsets than the total number of features, using wrapper model with candidate feature sets is scalable. Also, since the algorithm generates candidate feature sets using multiple criteria and all feature classes jointly, it is able to capture most of those features which are discriminating. We now detail our EFS algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFS Algorithm",
"sec_num": "4.1"
},
{
"text": "The algorithm takes as input, a set of n features F = {f 1 , \u2026, f n }, a set of t feature selection criteria \u0398 = {\u03b8 1 , \u2026, \u03b8 t }, a set of t thresholds \u03a4 = {\u03c4 1 , \u2026, \u03c4 t } corresponding to the criteria in \u0398, and a window w. \u03c4 i is the base number of features to be selected for criterion \u03b8 i . w is used to vary \u03c4 i (thus the number of features) to be used by the wrapper approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFS Algorithm",
"sec_num": "4.1"
},
{
"text": "Algorithm: EFS(F, \u0398, \u03a4, w)\n1.  for each \u03b8_i \u2208 \u0398\n2.      rank all features in F based on criterion \u03b8_i and let \u03be_i denote the ranked features\n3.  endfor\n4.  for i = 1 to t\n5.      C_i \u2190 \u2205\n6.      for \u03c4 = \u03c4_i - w to \u03c4_i + w\n7.          select the first \u03c4 features \u03b6_i from \u03be_i and add \u03b6_i to C_i in order\n8.      endfor\n9.  endfor\n10. // C_i = {\u03b6_1, ..., \u03b6_{2w+1}}, the ordered feature subsets for criterion \u03b8_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFS Algorithm",
"sec_num": "4.1"
},
{
"text": "We now explain our EFS algorithm. Using the set of feature selection measures \u0398, we rank all features in our feature pool F under each criterion (lines 1-3); this is the filter step. In lines 4-9, we generate a collection C_i, 1 \u2264 i \u2264 t, for each of the t criteria. Each C_i contains feature subsets, and each subset \u03b6_i is the set of the top \u03c4 features in \u03be_i ranked by criterion \u03b8_i in lines 1-2. \u03c4 varies from \u03c4_i - w to \u03c4_i + w, where \u03c4_i is the threshold for criterion \u03b8_i and w is the window size. We vary \u03c4, generate 2w + 1 feature subsets, and add them to C_i in order (lines 6-8). We do so because it is difficult to know the optimal threshold \u03c4_i for each criterion \u03b8_i. Note that \"adding in order\" ensures the ordering of the feature subsets \u03b6_i shown in line 10, which is later used to \"select and remove in order\" in line 15. In lines 11-20, we generate candidate feature sets from the C_i and add each such candidate feature set \u039b to OptCandFeatures. Each candidate feature set \u039b is a collection of top-ranked features based on multiple criteria; it is generated by unioning the features in the first feature subset \u03b6_i, which is then removed from C_i, for each criterion \u03b8_i (lines 14-17). Each candidate feature set is added to OptCandFeatures in line 18. Since each C_i has 2w + 1 feature subsets \u03b6_i, there are a total of 2w + 1 candidate feature sets \u039b in OptCandFeatures. Lines 21-23 assign an accuracy to each candidate feature set \u039b in OptCandFeatures by running 10-fold cross-validation on the training data using a chosen classifier with the features in \u039b. Finally, the optimal feature set \u039b in OptCandFeatures is returned in line 24.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFS Algorithm",
"sec_num": "4.1"
},
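For concreteness, here is a minimal Python sketch of EFS covering both the subset-generation step (lines 1-10) and the wrapper step described above. The names `rankings` and `evaluate` are ours: `rankings[crit]` is assumed to hold the full best-first feature ranking for criterion crit, and `evaluate(features)` is assumed to return 10-fold cross-validation accuracy on the training data. A single `tau` is used for all criteria to keep the sketch short; the algorithm itself allows a per-criterion threshold.

```python
# Sketch of ensemble feature selection (EFS).  `rankings[crit]` is the list of
# all features ranked best-first by criterion crit; `evaluate(features)` is
# assumed to return 10-fold cross-validation accuracy on the training data.
def efs(rankings, evaluate, tau, window):
    # Filter step: for each criterion, take the top (tau - w) ... (tau + w)
    # features, giving 2w + 1 ordered feature subsets per criterion.
    subsets = {crit: [ranked[:size] for size in range(tau - window, tau + window + 1)]
               for crit, ranked in rankings.items()}
    # Candidate generation: union the j-th subset across all criteria.
    candidates = []
    for j in range(2 * window + 1):
        merged = set()
        for crit in subsets:
            merged |= set(subsets[crit][j])
        candidates.append(merged)
    # Wrapper step: keep the candidate feature set with the best accuracy.
    return max(candidates, key=evaluate)
```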
{
"text": "An interesting question arising in the EFS algorithm is: How does one select the threshold \u03c4 i for each criterion \u03b8 i and the window size w? Intuitively, suppose that for criterion \u03b8 i , the optimal subset of features is S opt_i based on some optimal threshold \u03c4 i . Then the final feature set is a collection of all features f S opt_i \u2200 i. However, finding such optimal feature set S opt_i or optimal threshold \u03c4 i is a difficult problem. To counter this, we use the window w to select various feature subsets close to the top \u03c4 i features in \u03be i . Thus, the threshold values \u03c4 i and window size w should be approximated by experiments. In our experiments, we used \u03c4 i = top 1/20 th of the features ranked in \u03be i for \u2200 i and window size w = |F|/100, and got good results. Fortunately, as we will see in Section 6.2, these parameters are not sensitive at all, and any reasonably large size feature set seems to work equally well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EFS Algorithm",
"sec_num": "4.1"
},
{
"text": "Finally, we are aware that there are some existing ensemble feature selection methods in the machine learning literature (Gargant\u00e9 et al., 2007; Tuv et al., 2009) . However, they are very different from our approach. They mainly use ensemble classification methods to help choose good features rather than combining different feature selection criteria and integrating different feature selection approaches as in our method.",
"cite_spans": [
{
"start": 121,
"end": 144,
"text": "(Gargant\u00e9 et al., 2007;",
"ref_id": "BIBREF11"
},
{
"start": 145,
"end": 162,
"text": "Tuv et al., 2009)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EFS Algorithm",
"sec_num": "4.1"
},
{
"text": "The set of feature selection criteria \u0398 = {\u03b8 1 \u2026\u03b8 t } used in our work are those commonly used individual selection criteria in the filter approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection Criteria",
"sec_num": "4.2"
},
{
"text": "Let C ={c 1 , c 2 , \u2026, c m } denotes the set of classes, and F = {f 1 , f 2 , \u2026, f n } the set of features. We list the criteria in \u0398 used in our work below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Selection Criteria",
"sec_num": "4.2"
},
{
"text": "This is perhaps the most commonly used criterion, which is based on entropy. The scoring function for information gain of a feature f is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "\u2211 \u2211 \u2211 = = + \u2212 = f f m i i i i m i i f c P f c P f P c P c P f IG , 1 1 ) | ( log ) | ( ) ( ) ( log ) ( ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "Mutual Information (MI): This metric is commonly used in statistical language modeling. The mutual information MI(f, c) between a class c and a feature f is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "\u2211\u2211 = f f c c c P f P c f P c f P c f MI , , ) ( ) ( ) , ( log ) , ( ) , (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "The scoring function generally used as the criterion is the max among all classes. MI(f) = max i {MI (f, c i )} (which we use). The weighted average over all classes can also be applied as the scoring function. \u03c7 2 Statistic: The \u03c7 2 statistic measures the lack of independence between a feature f and class c, and can be compared to the \u03c7 2 distribution with one degree of freedom. We use a 2x2 contingency table of a feature f and a class c to introduce In the table, W denotes the number of documents in the corpus in which feature f and class c cooccur, X the number of documents in which f occurs without c, Y the number of documents in which c occurs without f, and Z the number of documents in which neither c nor f occurs. Thus, N = W + X + Y + Z is the total number of documents in the corpus. \u03c7 2 test is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "\u03c7 2 test. c c f W X f Y Z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": ") )( )( )( ( ) ( ) , ( 2 2 Z Y X W Z X Y W YX WZ N c f + + + + \u2212 = \u03c7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "The scoring function using the \u03c7 2 statistic is either the weighted average or max over all classes. In our experiments, we use the weighted average:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "\u03c7 2 (f) = \u2211 = m i i i c f c P 1 2 ) , ( ) ( \u03c7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "Cross Entropy (CE): This metric is similar to mutual information (Mladenic and Grobelnik, 1998) :",
"cite_spans": [
{
"start": 65,
"end": 95,
"text": "(Mladenic and Grobelnik, 1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "\u2211 = = m i i i f P f c P f c P f P f CE 1 ) ( ) | ( log ) | ( ) ( ) (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Information Gain (IG):",
"sec_num": null
},
{
"text": "This criterion is based on the average absolute weight of evidence (Mladenic and Grobelnik, 1998) ",
"cite_spans": [
{
"start": 67,
"end": 97,
"text": "(Mladenic and Grobelnik, 1998)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Evidence for Text (WET):",
"sec_num": null
},
{
"text": ": | )) | ( 1 )( ( )) ( 1 )( | ( log | ) ( ) ( ) ( 1 f c P c P c P f c P f P c P f WET i i i i m i i \u2212 \u2212 = \u2211 =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weight of Evidence for Text (WET):",
"sec_num": null
},
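As an illustration of how these criteria can be computed from document counts, the sketch below implements the chi-square statistic and mutual information from the W/X/Y/Z contingency counts defined above; the example counts are made up purely for illustration.

```python
# Sketch: chi-square and mutual information from the W/X/Y/Z contingency counts.
import math

def chi_square(W, X, Y, Z):
    N = W + X + Y + Z
    num = N * (W * Z - Y * X) ** 2
    den = (W + X) * (Y + Z) * (W + Y) * (X + Z)
    return num / den if den else 0.0

def mutual_information(W, X, Y, Z):
    """MI(f, c) summed over the four outcomes (f / not-f, c / not-c)."""
    N = float(W + X + Y + Z)
    mi = 0.0
    for joint, pf, pc in ((W, W + X, W + Y), (X, W + X, X + Z),
                          (Y, Y + Z, W + Y), (Z, Y + Z, X + Z)):
        if joint:
            mi += (joint / N) * math.log((joint * N) / (pf * pc))
    return mi

def chi_square_score(tables, class_priors):
    """Weighted-average chi-square over classes; tables[c] = (W, X, Y, Z)."""
    return sum(class_priors[c] * chi_square(*tables[c]) for c in tables)

# Example with made-up document counts for a single feature and two classes.
tables = {"male": (120, 380, 80, 420), "female": (80, 420, 120, 380)}
priors = {"male": 0.512, "female": 0.488}
print(chi_square_score(tables, priors), mutual_information(*tables["male"]))
```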
{
"text": "After selecting features belonging to different classes, values are assigned differently to different classes of features. There are three common ways of feature value assignments: Boolean, TF (Term Frequency) and TF-IDF (product of term and inverted document frequency). For details of feature value assignments, interested readers are referred to (Joachims, 1997) . While the Boolean scheme assigns a 1 to the feature value if the feature is present in the document and a 0 otherwise, the TF scheme assigns the relative frequency of the number of times that the feature occurs in the document. We did not use TF-IDF as it did not yield good results in our preliminary experiments.",
"cite_spans": [
{
"start": 349,
"end": 365,
"text": "(Joachims, 1997)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Value Assignments",
"sec_num": "5"
},
{
"text": "The feature value assignment to different classes of features is done as follows: The value of F-measure was assigned based on its actual value. Stylistic features such words, and blog words were assigned values 1 or 0 in the Boolean scheme and the relative frequency in the TF scheme (we experimented with both schemes). Feature values for gender preferential features were also assigned in a similar way. Factor and word class features were assigned values according to the Boolean or TF scheme if any of the words belonging to the feature class exists (factor or word class appeared in that document). Each POS sequence pattern feature was assigned a value according to the Boolean (or TF) scheme based on the appearances of the pattern in the POS tagged document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Value Assignments",
"sec_num": "5"
},
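A small sketch of the Boolean and TF value assignment for word-type features (the same idea applies to blog words and POS pattern features); the function name is ours.

```python
# Sketch: Boolean vs. TF feature value assignment for one document.
from collections import Counter

def feature_vector(doc_tokens, selected_features, scheme="boolean"):
    counts = Counter(t.lower() for t in doc_tokens)
    total = float(len(doc_tokens)) or 1.0
    if scheme == "boolean":
        return {f: int(counts[f] > 0) for f in selected_features}
    return {f: counts[f] / total for f in selected_features}   # TF: relative frequency

print(feature_vector("i was so so tired today lol".split(), ["so", "tired", "lol", "hmm"]))
print(feature_vector("i was so so tired today lol".split(), ["so", "tired", "lol", "hmm"], scheme="tf"))
```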
{
"text": "This section evaluates the proposed techniques and sees how they affect the classification accuracy. We also compare with the existing state-ofthe-art algorithms and systems. For algorithms, we compared with three representatives in (Argamon et al., 2007) , (Schler et al., 2006) and (Yan and Yan, 2006) . Since they do not have publicly available systems, we implemented them. Each of them just uses a subset of the features used in our system. Recall our system includes all their features and our own POS pattern based features. For systems, we compared with two public domain systems, Gender Genie (BookBlog, 2007) and Gender Guesser (Krawetz, 2006) , which implemented variations of the algorithm in (Argamon et. al, 2003) .",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 258,
"end": 279,
"text": "(Schler et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 284,
"end": 303,
"text": "(Yan and Yan, 2006)",
"ref_id": "BIBREF32"
},
{
"start": 602,
"end": 618,
"text": "(BookBlog, 2007)",
"ref_id": null
},
{
"start": 638,
"end": 653,
"text": "(Krawetz, 2006)",
"ref_id": "BIBREF20"
},
{
"start": 705,
"end": 727,
"text": "(Argamon et. al, 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "We used SVM classification, SVM regression, and Na\u00efve Bayes (NB) as learning algorithms. Although SVM regression is not designed for classification, it can be applied based on the output of positive or negative values. It actually worked better than SVM classification for our data. For SVM classification and regression, we used SVMLight (Joachims, 1999) , and for NB we used (Borgelt, 2003) . In all our experiments, we used accuracy as the evaluation measure as the two classes (male and female) are roughly balanced (see the data description below), and both classes are equally important.",
"cite_spans": [
{
"start": 339,
"end": 355,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF16"
},
{
"start": 377,
"end": 392,
"text": "(Borgelt, 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
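The snippet below is an illustrative stand-in for this evaluation setup: 10-fold cross-validation accuracy with linear SVM classification, SVM regression (with the sign of the output used as the predicted class), and Naive Bayes. The paper used SVMLight and Borgelt's NB implementation; scikit-learn and the synthetic data here are substitutions made purely for illustration.

```python
# Illustrative stand-in: 10-fold CV accuracy for SVM, SVM regression, and NB.
import numpy as np
from sklearn.svm import LinearSVC, LinearSVR
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import KFold

def cv_accuracy(model, X, y, regression=False, folds=10):
    accs = []
    for train, test in KFold(n_splits=folds, shuffle=True, random_state=0).split(X):
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        if regression:                       # map positive/negative outputs to +1/-1
            pred = np.where(pred >= 0, 1, -1)
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

# Synthetic Boolean feature matrix and labels (+1 male / -1 female), for illustration only.
X = np.random.randint(0, 2, size=(200, 50)).astype(float)
y = np.where(np.random.rand(200) > 0.5, 1, -1)
print(cv_accuracy(LinearSVC(), X, y))
print(cv_accuracy(LinearSVR(), X, y, regression=True))
print(cv_accuracy(BernoulliNB(), X, y))
```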
{
"text": "To keep the problem of gender classification of informal text as general as possible, we collected blog posts from many blog hosting sites and blog search engines, e.g., blogger.com, technorati.com, etc. The data set consists of 3100 blogs. Each blog is labeled with the gender of its author. The gender of the author was determined by visiting the profile of the author. Profile pictures or avatars associated with the profile were also helpful in confirming the gender especially when the gender information was not available explicitly. To ensure quality of the labels, one group of students collected the blogs and did the initial labeling, and the other group double-checked the labels by visiting the actual blog pages. Out of 3100 posts, 1588 (51.2%) were written by men and 1512 (48.8%) were written by women. The average post length is 250 words for men and 330 words for women.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blog Data Set",
"sec_num": "6.1"
},
{
"text": "We used all features from different feature classes (Section 3) along with our POS patterns as our pool of features. We used \u03c4 and w values stated in Section 4.1 and criteria mentioned in Section 4.2 for our EFS algorithm. EFS was compared with three commonly used feature selection methods on SVM classification (denoted by SVM), SVM regression (denoted by SVM_R) and the NB classifier. The results are shown in Table 5 . All results were obtained through 10-fold cross validation.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "Also, the total number of features selected by IG, MI, \u03c7 2 , and EFS were roughly the same. Thus, the improvement in accuracy brought forth by EFS was chiefly due to the combination of features selected (based on multi-criteria).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "To measure the accuracy improvement of using our POS patterns over common POS n-grams, we also compared our results with those from POS ngrams (Koppel et al., 2002) . The comparison results are given in Table 6 . Table 6 also includes results to show the overall improvement in accuracy with our two new techniques. We tested our system without any feature selection and without using the POS sequence patterns as features.",
"cite_spans": [
{
"start": 143,
"end": 164,
"text": "(Koppel et al., 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 203,
"end": 210,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 213,
"end": 220,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "The comparison results with existing algorithms and public domain systems using our reallife blog data set are tabulated in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 124,
"end": 131,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "Also, to see whether feature selection helps and how many features are optimal, we varied \u03c4 and w of the EFS algorithm and plotted the accuracy vs. no. of features. These results are shown in Figure 1 . (Argamon et al., 2007) 77.86 (Schler et al., 2006) 79.63 (Yan and Yan, 2006) 68.75 Our method 88.56 Table 7 : Accuracy comparison with other systems ",
"cite_spans": [
{
"start": 204,
"end": 226,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 233,
"end": 254,
"text": "(Schler et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 261,
"end": 280,
"text": "(Yan and Yan, 2006)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 192,
"end": 201,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 304,
"end": 311,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "Based on the results given in the previous section, we make the following observations: \u2022 SVM regression (SVM_R) performs the best (Table 5) . SVM classification (SVM) also gives good accuracies. NB did not do so well. \u2022 Table 5 also shows that our EFS feature selection method brings about 6-10% improvement in accuracy over the other feature selection methods based on SVM classification and SVM regression. The reason has been explained in the introduction section. Paired t-tests showed that all the improvements are statistically significant at the confidence level of 95%. For NB, the benefit is less (3%).",
"cite_spans": [],
"ref_spans": [
{
"start": 131,
"end": 140,
"text": "(Table 5)",
"ref_id": null
},
{
"start": 219,
"end": 228,
"text": "\u2022 Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
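A minimal sketch of the paired t-test over the 10 cross-validation folds that such a significance claim would rest on; the fold accuracies below are placeholders, not values from the paper.

```python
# Sketch: paired t-test over 10 CV folds (placeholder accuracies, not the paper's).
from scipy import stats

efs_folds = [0.89, 0.88, 0.90, 0.87, 0.89, 0.88, 0.90, 0.89, 0.88, 0.89]
ig_folds  = [0.82, 0.80, 0.83, 0.81, 0.82, 0.80, 0.83, 0.82, 0.81, 0.82]
t_stat, p_value = stats.ttest_rel(efs_folds, ig_folds)
print(p_value < 0.05)   # True -> the difference is significant at the 95% confidence level
```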
{
"text": "\u2022 Keeping all other parameters constant, Table 5 also shows that Boolean feature values yielded better results than the TF scheme across all classifiers and feature selection methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
{
"text": "\u2022 Row 1 of Table 6 tells us that feature selection is very useful. Without feature selection (All features), SVM regression only achieves 70% accuracy, which is way inferior to the 88.56% accuracy obtained using EFS feature selection. Row 2 shows that without EFS and without POS sequence patterns, the results are even worse.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
{
"text": "\u2022 Keeping all other parameters intact, Table 6 also demonstrated the effectiveness of our POS pattern features over POS n-grams. We have discussed the reason in Section 3.2 and 3.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
{
"text": "\u2022 From Tables 5 and 6, we can infer that the overall accuracy improvement using EFS and all feature classes described in Section 3 is about 15% for SVM classification and regression and 10% for NB. Also, using POS sequence patterns with EFS brings about a 5% improvement over POS n-grams (Table 6 ). The improvement is more pronounced for SVM based methods than NB.",
"cite_spans": [],
"ref_spans": [
{
"start": 288,
"end": 296,
"text": "(Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
{
"text": "\u2022 Table 7 summarizes the accuracy improvement brought by our proposed techniques over the existing state-of-art systems. Our techniques have resulted in substantial (around 9%) accuracy improvement over the best of the existing systems. Note that (Argamon et al., 2007) used Logistic Regression with word classes and POS unigrams as features. (Schler et al., 2006) used Winnow classifier with function words, content word classes, and POS features. (Yan and Yan, 2006) used Naive Bayes with content words and blog-words as features. For all these systems, we used their features and ran their original classifiers and also the three classifiers in this paper and report the best results. For example, for (Argamon et al., 2007) , we ran Logistic Regression and our three methods. SVM based methods always gave slightly better results. We could not run Winnow due to some technical issues. SVM and SVM_R gave comparable results to those given in their original papers. These results again show that our techniques are useful. All the gains are statistically significant at the confidence level of 95%.",
"cite_spans": [
{
"start": 247,
"end": 269,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 343,
"end": 364,
"text": "(Schler et al., 2006)",
"ref_id": "BIBREF26"
},
{
"start": 449,
"end": 468,
"text": "(Yan and Yan, 2006)",
"ref_id": "BIBREF32"
},
{
"start": 705,
"end": 727,
"text": "(Argamon et al., 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 2,
"end": 9,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
{
"text": "\u2022 From Figure 1 , we see that when the number of features selected is small (<100) the classification accuracy is lower than that obtained by using all features (no feature selection). However, the accuracy increases rapidly as the number of selected features increases. After obtaining the best case accuracy, it roughly maintains the accuracy over a long range. The accuracies then gradually decrease with the increase in the number of features. This trend is consistent with the prior findings in (Mladenic, 1998; Rogati and Yang, 2002; Forman 2003; Riloff et al., 2006; Houvardas and Stamatatos, 2006) .",
"cite_spans": [
{
"start": 500,
"end": 516,
"text": "(Mladenic, 1998;",
"ref_id": "BIBREF21"
},
{
"start": 517,
"end": 539,
"text": "Rogati and Yang, 2002;",
"ref_id": "BIBREF24"
},
{
"start": 540,
"end": 552,
"text": "Forman 2003;",
"ref_id": "BIBREF10"
},
{
"start": 553,
"end": 573,
"text": "Riloff et al., 2006;",
"ref_id": "BIBREF24"
},
{
"start": 574,
"end": 605,
"text": "Houvardas and Stamatatos, 2006)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 7,
"end": 15,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
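{
"text": "The shape of this curve can be reproduced with a generic feature-count sweep. The sketch below is illustrative only: it uses synthetic stand-in data and a single chi-square criterion in place of our EFS ensemble and blog corpus, but it shows how accuracy can be recorded as the number of selected features k is varied.\n\nimport numpy as np\nfrom sklearn.feature_selection import SelectKBest, chi2\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import LinearSVC\n\n# Stand-in data: 300 'documents' with 5000 non-negative count features\nrng = np.random.default_rng(0)\nX = rng.poisson(0.3, size=(300, 5000))\ny = rng.integers(0, 2, size=300)\n\n# Sweep the number of selected features and record 5-fold cross-validated accuracy\nfor k in [50, 100, 500, 1000, 2000, 5000]:\n    clf = make_pipeline(SelectKBest(chi2, k=k), LinearSVC())\n    acc = cross_val_score(clf, X, y, cv=5, scoring='accuracy').mean()\n    print(k, round(acc, 3))\n\nOn real data, plotting accuracy against k produces the rise, plateau and slow decline described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},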
{
"text": "It is important to note here that over a long range of 2000 to 20000 features, the accuracy is high and stable. This means that the thresholds of EFS are easy to set. As long as they are in the range, the accuracy will be good.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
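{
"text": "For readers who wish to experiment with ensemble-style selection on other data, the following generic sketch (illustrative only; it combines just two standard criteria by rank averaging and is not our exact EFS procedure) keeps the k features with the best combined rank, where k plays the role of the threshold discussed above.\n\nimport numpy as np\nfrom sklearn.feature_selection import chi2, mutual_info_classif\n\ndef ensemble_select(X, y, k):\n    # Score features under each criterion (X must be non-negative for chi2)\n    scores = [chi2(X, y)[0], mutual_info_classif(X, y, random_state=0)]\n    # Convert scores to rank positions (0 = best) and average them\n    ranks = [np.argsort(np.argsort(-s)) for s in scores]\n    mean_rank = np.mean(ranks, axis=0)\n    # Indices of the k features with the lowest mean rank\n    return np.argsort(mean_rank)[:k]\n\n# selected = ensemble_select(X, y, k=2000); X_reduced = X[:, selected]\n\nIn such a scheme, k simply controls how far down the combined ranking one goes, so any k inside the stable range retains all of the strongest features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},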
{
"text": "Finally, we would like to mention that (Herring and Paolillo, 06) has used genre relationships with gender classification. Their finding that subgenre \"diary\" contains more \"female\" and subgenre \"filter\" having more \"male\" stylistic features independent of the author gender, may obscure gender classification as there are many factors to be considered. Herring and Paolillo referred only words as features which are not as fine grained as our POS sequence patterns. We are also aware of other factors influencing gender classification like genre, age and ethnicity. However, much of such information is hard to obtain reliably in blogs. They definitely warren some future studies. Also, EFS being a useful method for feature selection in machine learning, it would be useful to perform further experiments to investigate how well it performs on a variety of classification datasets. This again will be an interesting future work.",
"cite_spans": [
{
"start": 39,
"end": 61,
"text": "(Herring and Paolillo,",
"ref_id": null
},
{
"start": 62,
"end": 65,
"text": "06)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Observations and Discussions",
"sec_num": "6.3"
},
{
"text": "This paper studied the problem of gender classification. Although there have been several existing papers studying the problem, the current accuracy is still far from ideal. In this work, we followed the supervised approach and proposed two novel techniques to improve the current state-of-the-art. In particular, we proposed a new class of features which are POS sequence patterns that are able to capture complex stylistic regularities of male and female authors. Since there are a large number features that have been considered, it is important to find a subset of features that have positive effects on the classification task. Here, we proposed an ensemble feature selection method which takes advantage of many different types of feature selection criteria in feature selection. Experimental results based on a real-life blog data set demonstrated the effectiveness of the proposed techniques. They help achieve significantly higher accuracy than the current state-of-the-art techniques and systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fast Algorithms for Mining Association Rules. VLDB",
"authors": [
{
"first": "R",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Srikant",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "487--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agrawal, R. and Srikant, R. 1994. Fast Algorithms for Mining Association Rules. VLDB. pp. 487-499.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Gender, genre, and writing style in formal written texts",
"authors": [
{
"first": "S",
"middle": [],
"last": "Argamon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fine",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shimoni",
"suffix": ""
}
],
"year": 2003,
"venue": "Text-Interdisciplinary Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Argamon, S., Koppel, M., J Fine, AR Shimoni. 2003. Gender, genre, and writing style in formal written texts. Text-Interdisciplinary Journal, 2003.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mining the Blogosphere: Age, Gender and the varieties of self-expression",
"authors": [
{
"first": "S",
"middle": [],
"last": "Argamon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "J",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Schler",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Argamon, S., Koppel, M., Pennebaker, J. W., Schler, J. 2007. Mining the Blogosphere: Age, Gender and the varieties of self-expression, First Monday, 2007 -firstmonday.org",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Outside the cave of shadows: Using syntactic annotation to enhance authorship attribution, Literary and Linguistic Computing",
"authors": [
{
"first": "H",
"middle": [],
"last": "Baayen",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Van Halteren",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tweedie",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "11",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baayen, H., H van Halteren, F Tweedie. 1996. Outside the cave of shadows: Using syntactic annotation to enhance authorship attribution, Literary and Lin- guistic Computing, 11, 1996.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Selection of relevant features and examples in machine learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Langley",
"suffix": ""
}
],
"year": 1997,
"venue": "Artificial Intelligence",
"volume": "97",
"issue": "1-2",
"pages": "245--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blum, A. and Langley, P. 1997. Selection of relevant features and examples in machine learning. Artifi- cial Intelligence, 97(1-2):245-271.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Revealing people's thinking in natural language: Using an automated meaning extraction method in open-ended self-descriptions",
"authors": [
{
"first": "C",
"middle": [
"K"
],
"last": "Chung",
"suffix": ""
},
{
"first": "J",
"middle": [
"W"
],
"last": "Pennebaker",
"suffix": ""
}
],
"year": 2007,
"venue": "J. of Research in Personality",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chung, C. K. and Pennebaker, J. W. 2007. Revealing people's thinking in natural language: Using an au- tomated meaning extraction method in open-ended self-descriptions, J. of Research in Personality.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Gender Preferential Text Mining of E-mail Discourse",
"authors": [
{
"first": "M",
"middle": [],
"last": "Corney",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Vel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Mohay",
"suffix": ""
}
],
"year": 2002,
"venue": "18th annual Computer Security Applications Conference (ACSAC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Corney, M., Vel, O., Anderson, A., Mohay, G. 2002. Gender Preferential Text Mining of E-mail Dis- course. 18th annual Computer Security Applica- tions Conference (ACSAC), 2002.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mapreduce: Simplified data processing on large clusters, Operating Systems Design and Implementation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ghemawat",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Dean and S. Ghemawat. 2004. Mapreduce: Simpli- fied data processing on large clusters, Operating Systems Design and Implementation, 2004.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An extensive empirical study of feature selection metrics for text classification",
"authors": [
{
"first": "G",
"middle": [],
"last": "Forman",
"suffix": ""
}
],
"year": 2003,
"venue": "JMLR",
"volume": "3",
"issue": "",
"pages": "1289--1306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Forman, G., 2003. An extensive empirical study of fea- ture selection metrics for text classification. JMLR, 3:1289 -1306 , 2003.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Genetic Algorithm to Ensemble Feature Selection",
"authors": [
{
"first": "R",
"middle": [
"A"
],
"last": "Gargant\u00e9",
"suffix": ""
},
{
"first": "T",
"middle": [
"E"
],
"last": "Marchiori",
"suffix": ""
},
{
"first": "S",
"middle": [
"R W"
],
"last": "Kowalczyk",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gargant\u00e9, R. A., Marchiori, T. E., and Kowalczyk, S. R. W., 2007. A Genetic Algorithm to Ensemble Fea- ture Selection. Masters Thesis. Vrije Universiteit, Amsterdam.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Gender differences in the perception and use of e-mail: An extension to the technology acceptance model",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gefen",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Straub",
"suffix": ""
}
],
"year": 1997,
"venue": "MIS Quart",
"volume": "21",
"issue": "4",
"pages": "389--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gefen, D., D. W. Straub. 1997. Gender differences in the perception and use of e-mail: An extension to the technology acceptance model. MIS Quart. 21(4) 389-400.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Gender and genre variation in weblogs",
"authors": [
{
"first": "S",
"middle": [
"C"
],
"last": "Herring",
"suffix": ""
},
{
"first": "J",
"middle": [
"C"
],
"last": "Paolillo",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Sociolinguistics",
"volume": "10",
"issue": "4",
"pages": "439--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herring, S. C., & Paolillo, J. C. 2006. Gender and ge- nre variation in weblogs, Journal of Sociolinguis- tics, 10 (4), 439-459.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Variation in the contextuality of language: an empirical measure",
"authors": [
{
"first": "F",
"middle": [],
"last": "Heylighen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Dewaele",
"suffix": ""
}
],
"year": 2002,
"venue": "Foundations of Science",
"volume": "7",
"issue": "",
"pages": "293--340",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heylighen, F., and Dewaele, J. 2002. Variation in the contextuality of language: an empirical measure. Foundations of Science, 7, 293-340.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "N-gram Feature Selection for Authorship Identification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Houvardas",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Stamatatos",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the 12th Int. Conf. on Artificial Intelligence: Methodology, Systems, Applications",
"volume": "",
"issue": "",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houvardas, J. and Stamatatos, E. 2006. N-gram Fea- ture Selection for Authorship Identification, Proc. of the 12th Int. Conf. on Artificial Intelligence: Me- thodology, Systems, Applications, pp. 77-86.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Making large-Scale SVM Learning Practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T. 1999. Making large-Scale SVM Learning Practical. Advances in Kernel Methods -Support Vector Learning, B. Sch\u00f6lkopf and C. Burges and A. Smola (ed.), MIT-Press, 1999.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Text categorization with support vector machines",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, T. 1997. Text categorization with support vector machines, Technical report, LS VIII Number 23, University of Dortmund, 1997",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Wrappers for feature subset selection",
"authors": [
{
"first": "R",
"middle": [],
"last": "Kohavi",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "John",
"suffix": ""
}
],
"year": 1997,
"venue": "Artificial Intelligence",
"volume": "97",
"issue": "1-2",
"pages": "273--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kohavi, R. and John, G. 1997. Wrappers for feature subset selection. Artificial Intelligence, 97(1- 2):273-324.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatically Categorizing Written Text by Author Gender. Literary and Linguistic Computing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Argamon",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Shimoni",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koppel, M., Argamon, S., Shimoni, A. R.. 2002. Auto- matically Categorizing Written Text by Author Gender. Literary and Linguistic Computing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Gender Guesser. Hacker Factor Solutions",
"authors": [
{
"first": "N",
"middle": [],
"last": "Krawetz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krawetz, N. 2006. Gender Guesser. Hacker Factor Solutions. http://www.hackerfactor.com/ Gender- Guesser.html",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Feature subset selection in text learning",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mladenic",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of ECML-98",
"volume": "",
"issue": "",
"pages": "95--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mladenic, D. 1998. Feature subset selection in text learning. In Proc. of ECML-98, pp. 95-100.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Feature selection for classification based on text hierarchy",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mladenic",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Grobelnik",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Workshop on Learning from Text and the Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mladenic, D. and Grobelnik, D.1998. Feature selection for classification based on text hierarchy. Proceed- ings of the Workshop on Learning from Text and the Web, 1998",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Gender, Genres, and Individual Differences",
"authors": [
{
"first": "S",
"middle": [],
"last": "Nowson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Oberlander",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Gill",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 27th annual meeting of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "1666--1671",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nowson, S., Oberlander J., Gill, A. J., 2005. Gender, Genres, and Individual Differences. In Proceedings of the 27th annual meeting of the Cognitive Science Society (p. 1666-1671). Stresa, Italy.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "High performing and scalable feature selection for text classification",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Rogati",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2002,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "659--661",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riloff, E., Patwardhan, S., Wiebe, J.. 2006. Feature Subsumption for opinion Analysis. EMNLP, Rogati, M. and Yang, Y.2002. High performing and scalable feature selection for text classification. In CIKM, pp. 659-661, 2002.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bibliography of Gender and Language",
"authors": [
{
"first": "H",
"middle": [],
"last": "Schiffman",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schiffman, H. 2002. Bibliography of Gender and Lan- guage. http://ccat.sas.upenn.edu/~haroldfs/ pop- cult/bibliogs/gender/genbib.htm",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Effects of age and gender on blogging",
"authors": [
{
"first": "J",
"middle": [],
"last": "Schler",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Argamon",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pennebaker",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the AAAI Spring Symposium Computational Approaches to Analyzing Weblogs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schler, J., Koppel, M., Argamon, S, and Pennebaker J. 2006. Effects of age and gender on blogging, In Proc. of the AAAI Spring Symposium Computa- tional Approaches to Analyzing Weblogs.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Using LocalMaxs Algortihm for the Extraction of Contiguous and Noncontiguous Multiword Lexical Units",
"authors": [
{
"first": "J",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Guillore",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lopes",
"suffix": ""
}
],
"year": 1999,
"venue": "Springer Lecture Notes in AI 1695",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silva, J., Dias, F., Guillore, S., Lopes, G. 1999. Using LocalMaxs Algortihm for the Extraction of Conti- guous and Noncontiguous Multiword Lexical Units. Springer Lecture Notes in AI 1695, 1999",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Mining sequential patterns: Generalizations and performance improvements",
"authors": [
{
"first": "R",
"middle": [],
"last": "Srikant",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. 5th Int. Conf. Extending Database Technology (EDBT'96)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srikant, R. and Agrawal, R. 1996. Mining sequential patterns: Generalizations and performance im- provements, In Proc. 5th Int. Conf. Extending Data- base Technology (EDBT'96), Avignon, France.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "You just don't understand",
"authors": [
{
"first": "D",
"middle": [],
"last": "Tannen",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tannen, D. (1990). You just don't understand, New York: Ballantine.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Bidirectional Inference with the Easiest-First Strategy for Tagging Sequence Data",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "467--474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsuruoka, Y. and Tsujii, J. 2005. Bidirectional Infe- rence with the Easiest-First Strategy for Tagging Sequence Data, HLT/EMNLP 2005, pp. 467-474.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Feature selection with ensembles, artificial variables, and redundancy elimination",
"authors": [
{
"first": "E",
"middle": [],
"last": "Tuv",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Borisov",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Runger",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Torkkola",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuv, E., Borisov, A., Runger, G., and Torkkola, K. 2009. Feature selection with ensembles, artificial variables, and redundancy elimination. JMLR, 10.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Gender Classification of Weblog Authors",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Yan",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Approaches to Analyzing Weblogs, AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan, X., Yan, L. 2006. Gender Classification of Web- log Authors. Computational Approaches to Analyz- ing Weblogs, AAAI.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Accuracy vs. no. of features using EFS",
"num": null,
"type_str": "figure"
},
"TABREF2": {
"html": null,
"content": "
",
"type_str": "table",
"num": null,
"text": "Gender preferential features"
},
"TABREF3": {
"html": null,
"content": "",
"type_str": "table",
"num": null,
"text": "Words in factors"
},
"TABREF4": {
"html": null,
"content": "3.5 Proposed POS Sequence Pattern Fea- |
tures |
",
"type_str": "table",
"num": null,
"text": "Words implying positive, negative and emotional connotations"
},
"TABREF6": {
"html": null,
"content": "",
"type_str": "table",
"num": null,
"text": "Two-way contingency table of f and c"
},
"TABREF8": {
"html": null,
"content": "System | Accuracy (%) |
Gender Genie | 61.69 |
Gender Guesser | 63.78 |
",
"type_str": "table",
"num": null,
"text": "Accuracies of POS n-grams and POS patterns with or without EFS (Boolean value assignment)"
}
}
}
}