{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:31:14.152096Z" }, "title": "Psychotherapy is Not One Thing: Simultaneous Modeling of Different Therapeutic Approaches", "authors": [ { "first": "Maitrey", "middle": [], "last": "Mehta", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "" }, { "first": "Derek", "middle": [ "D" ], "last": "Caperton", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Katherine", "middle": [], "last": "Axford", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "" }, { "first": "Lauren", "middle": [], "last": "Weitzman", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "" }, { "first": "David", "middle": [], "last": "Atkins", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "" }, { "first": "Zac", "middle": [ "E" ], "last": "Imel", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Utah", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "There are many different forms of psychotherapy. Itemized inventories of psychotherapeutic interventions provide a mechanism for evaluating the quality of care received by clients and for conducting research on how psychotherapy helps. However, evaluations such as these are slow, expensive, and are rarely used outside of well-funded research studies. Natural language processing research has progressed to allow automating such tasks. Yet, NLP work in this area has been restricted to evaluating a single approach to treatment, when prior research indicates therapists used a wide variety of interventions with their clients, often in the same session. In this paper, we frame this scenario as a multi-label classification task, and develop a group of models aimed at predicting a wide variety of therapist talk-turn level orientations. Our models achieve F1 macro scores of 0.5, with the class F1 ranging from 0.36 to 0.67. We present analyses which offer insights into the capability of such models to capture psychotherapy approaches, and which may complement human judgment.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "There are many different forms of psychotherapy. Itemized inventories of psychotherapeutic interventions provide a mechanism for evaluating the quality of care received by clients and for conducting research on how psychotherapy helps. However, evaluations such as these are slow, expensive, and are rarely used outside of well-funded research studies. Natural language processing research has progressed to allow automating such tasks. Yet, NLP work in this area has been restricted to evaluating a single approach to treatment, when prior research indicates therapists used a wide variety of interventions with their clients, often in the same session. In this paper, we frame this scenario as a multi-label classification task, and develop a group of models aimed at predicting a wide variety of therapist talk-turn level orientations. Our models achieve F1 macro scores of 0.5, with the class F1 ranging from 0.36 to 0.67. 
We present analyses which offer insights into the capability of such models to capture psychotherapy approaches, and which may complement human judgment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A typical psychotherapy session involves a clienttherapist dialog with the aim of diagnosing and assuaging a client's mental health condition. Psychotherapists, generally, rely on certain approaches (e.g., Cognitive Behavioral or Interpersonal Therapy) and interventions differ across these approaches. 1 . For example, a therapist might focus on a client's interpersonal relationships, their emotions, or help develop behavioral activities designed to reduce symptoms (or all of the above). A key goal of psychotherapy research is to categorize such approaches and study them to determine the effectiveness of each approach in any given scenario. We refer to this process of categorizing and detecting approaches based on an overarching theory as 'evaluation'.", "cite_spans": [ { "start": 303, "end": 304, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we study an application of Natural Language Processing (NLP) to mental health, and focus on therapists' approach to psychotherapy (Imel et al., 2015) . Past NLP research has developed tools for evaluating specific types of interventions like Motivational Interviewing (Cao et al., 2019) or Cognitive Behavioral therapy (Flemotomos et al., 2021) . However, psychotherapists differ from each other in the approaches they take. Furthermore, they can also vary in the interventions they use within and between sessions. The lines of work mentioned before assume that a session is comprised of exactly one approach, and consequently do not attempt to automatically evaluate different psychotherapy approaches that may coexist in the same session.", "cite_spans": [ { "start": 145, "end": 164, "text": "(Imel et al., 2015)", "ref_id": "BIBREF11" }, { "start": 283, "end": 301, "text": "(Cao et al., 2019)", "ref_id": "BIBREF1" }, { "start": 334, "end": 359, "text": "(Flemotomos et al., 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "McCarthy and Barber (2009) proposed one multiple-approach evaluation methodology-the Multitheoretical List of Therapeutic Interventions (MULTI), which is a list of 60 interventions (or, items) against which a psychotherapy session as a whole is evaluated post-session. The MULTI items are grouped into eight approaches. Note the MULTI is a session-level measure and thereby limited in specificity because it does not record therapist language that informs a given item's presence. Caperton (2021) extend the scheme to the evaluation of therapist monologues, talk-turn by talk-turn, in addition to the session-level evaluation. Such a scheme provides additional detail over time in a session.", "cite_spans": [ { "start": 13, "end": 26, "text": "Barber (2009)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Evaluating sessions with the MULTI requires a certain amount of time to be set aside postsession. Evaluating talk-turns manually for every session would be even more onerous and inefficient. This calls for a better automatic/semi-automatic method(s) to evaluate talk-turns. 
These methods serve two advantages: i) reducing the amount of effort required in manual classification for research and quality assurance, and ii) creating applications to analyze approaches deemed helpful on out-ofsession platforms (e.g., social media).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To that end, we present a neural machine learning model which aims to automate talk-turn level approach annotation. The task is set up in the following fashion: Given a therapist input talk-turn, does the input (or part of the input) correspond to one or more approaches. A talk-turn might only represent one approach, or might have different parts that correspond to different approaches. It is also possible that a therapist talk-turn does not fall within a specific therapeutic approach (e.g., minimal encouragers, small talk, etc.). Examples are shown in Table 1 . This problem posits itself perfectly as a multi-label classification task.", "cite_spans": [], "ref_spans": [ { "start": 559, "end": 566, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The state-of-the-art in natural language processing (NLP) has seen significant improvements with the advent of transformer-based models (Vaswani et al., 2017; Devlin et al., 2019) . In this paper, we show the performance of one such pre-trained transformer based language model on three paradigms, and experiment with changing context windows. Our models achieve around 0.5 F1 macro scores with the class F1 ranging from 0.36 to 0.67. Our analyses reveal that while our models mispredict on certain talk-turns during a session, they capture the dominant approaches when viewed from a session-level perspective. Furthermore, we show that certain decisions rely on inter-session context, and even common-sense knowledge which sets up a challenge for current models.", "cite_spans": [ { "start": 136, "end": 158, "text": "(Vaswani et al., 2017;", "ref_id": "BIBREF25" }, { "start": 159, "end": 179, "text": "Devlin et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Talk-turn Level MULTI-30 Coding MULTI-60 and MULTI-30. The Multitheoretical List of Therapeutic Interventions (MULTI) was originally developed as a list of 60 interventions (McCarthy and Barber, 2009) . The 60-items belonged to eight different coarse-grained subscales, each representing a therapeutic approach. Each item was rated on a 5-point Likert scale for how prevalent the intervention was over the course of a psychotherapy session. The MULTI-60 was later re-evaluated through an item reduction procedure to create the more parsimonious MULTI-30 (Solomonov et al., 2019) , comprised of the same eight subscales. In this work, we use focus on the eight coarse-grained approaches.", "cite_spans": [ { "start": 175, "end": 202, "text": "(McCarthy and Barber, 2009)", "ref_id": "BIBREF18" }, { "start": 556, "end": 580, "text": "(Solomonov et al., 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Each subscale was defined by a psychotherapeutic theoretical orientation. We describe each subscale briefly here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. Psychodynamic (PD) items focus on addressing nonconscious content from the client's psyche to alleviate distress. 
coders met together with their team leader every two weeks to discuss difficult talk-turns, items, and areas of disagreement. Coders were tasked with identifying the presence or absence of theory-derived content in therapists' language at every therapist talk-turn (i.e., a string of words or statements uninterrupted by client speech). A given talk-turn could be identified with one, multiple, or no interventions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Of the 243 unique sessions, 102 were annotated by multiple coders, resulting in 270 codings for interrater analysis. The statement-level interrater reliability of the eight theoretical orientations (subscales) was calculated using Cohen's kappa. Kappa was calculated for every possible coder pair who rated the same session and weighted according to the number of comparisons. Subscale kappa scores ranged from .37 ('fair' reliability; Landis and Koch (1977) ) to .63 ('substantial').", "cite_spans": [ { "start": 436, "end": 458, "text": "Landis and Koch (1977)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dataset was split by client randomly into train/dev/test sets containing 70%, 15% and 15% of the clients respectively. The splits contain 338, 66, and 76 sessions respectively containing 74k, 14k, and 17k talk-turns in total. Dataset statistics for the training split are presented in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 289, "end": 296, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While we want to model the eight subscales (plus the 'No Code' class) conventionally used in literature, we deviate from these eight classes for the implementation. The Behavioral, Cognitive, and Dialectical-Behavioral subscales contain overlapping items (e.g., items 1 and 10 are shared by all three subscales). We break these subscales into four categories such that each of these categories contains mutually exclusive items. Note that the other subscales(Psychodynamic, Interpersonal, etc.) remain the same. Hence, in total, we obtain ten modified model classes (including the 'No Code' class). We refer the reader to Tables 7 and 8 in Appendix A for further details on the breakdown. This method can aid downstream analysis by allowing credit/blame assessment on a smaller set of items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "In our setup, each therapist talk-turn u i has a corresponding binary label vector y i . The binary label vector is ten dimensional, one decision each for the nine model classes (i.e., modified subscales) and one additional class indicating the absence of any code (NC). For all our experiments, we consider the RoBERTa-base (Liu et al., 2019b) model as the language model of choice. This model takes in a talk-turn u i as input to produce contextual representations for its words. 
We take the pooler output of these contextual representations which gives us a vector representation h i for the talk-turn.", "cite_spans": [ { "start": 325, "end": 344, "text": "(Liu et al., 2019b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h i = P ooler(RoBERT a(u i ))", "eq_num": "(1)" } ], "section": "Models", "sec_num": "3" }, { "text": "We consider three modeling paradigms for our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Stand-Alone (SA) Model. This model is the vanilla multi-label classifier. Talk-turn representations are passed through a linear layer with the number of output nodes equal to the model classes. The result is passed through a sigmoid layer resulting in a vector of presence probabilities for each label\u0177 i . That is\u0177", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i = \u03c3(w T h i + b)", "eq_num": "(2)" } ], "section": "Models", "sec_num": "3" }, { "text": "where w and b are the weights of the linear layer. For inference, a probability of 0.5 or above indicates label presence for a particular class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Pipeline Model. A heavily imbalanced dataset can hinder model performance for the underrepresented categories. As seen in Table 2 , the number of examples with a \"No Code\" class highly skews the dataset, potentially leading to performance bias towards the class. To alleviate this problem, we define a pipeline model that uses a separate binary classifier to determine whether a talk-turn deserves an orientation category or not. If this binary classifier predicts that the talk-turn supports at least one orientation, then the talk-turn is given to a multi-label model to predict over the nine model classes. The multi-label model will be similar to the one mentioned in the Stand-Alone Model, except nine classes are considered since predicting a \"No Code\" would be redundant. The multi-label model has the flexibility, nonetheless, to predict an absence of orientation by predicting that none of the codes are present (i.e., a zero vector). Note that two separate RoBERTa models are used for the binary and the multi-label classifiers.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Multi-Task Model. The Pipeline Model trains two separate RoBERTa models -one for the binary classifer and one for the multi-label model. A major drawback of this system is that training two RoBERTa models is computationally expensive and memory-intensive. An alternative method is to share the RoBERTa layer between the two tasks and have two separate linear layers for the respective binary and multi-label classification. This strategy of multi-task or joint learning has shown to be of promise in literature (Liu et al., 2019a; Stickland and Murray, 2019) and allows for better shared representation. The losses for both the tasks are combined as a weighted sum for learning. 
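As a concrete illustration (and not the implementation used in our experiments), the following is a minimal PyTorch sketch of this shared-encoder setup: a RoBERTa encoder feeding a binary "any code" head and a nine-way multi-label head whose losses are combined as a weighted sum. The class name, the binary task weight, and the use of BCEWithLogitsLoss are assumptions for the sketch, and the per-class loss weighting used in our experiments is omitted for brevity.

```python
# Minimal sketch of the shared-encoder multi-task setup (assumed details noted above).
import torch
import torch.nn as nn
from transformers import RobertaModel

class MultiTaskTalkTurnClassifier(nn.Module):
    def __init__(self, num_approach_classes=9, binary_loss_weight=2.0):
        super().__init__()
        # Shared RoBERTa-base encoder for both tasks.
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        self.binary_head = nn.Linear(hidden, 1)                  # any-code vs. no-code
        self.multilabel_head = nn.Linear(hidden, num_approach_classes)
        self.alpha = binary_loss_weight                          # assumed higher weight on the binary task
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, input_ids, attention_mask, has_code=None, labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.pooler_output                                    # talk-turn representation h_i
        binary_logit = self.binary_head(h).squeeze(-1)
        label_logits = self.multilabel_head(h)
        loss = None
        if has_code is not None and labels is not None:
            # Weighted sum of the binary-task and multi-label-task losses.
            loss = self.alpha * self.bce(binary_logit, has_code) + self.bce(label_logits, labels)
        return loss, torch.sigmoid(binary_logit), torch.sigmoid(label_logits)
```

At inference time, mirroring the pipelined procedure, the multi-label probabilities would be consulted only when the binary head's probability is at least 0.5, and a label would be marked present when its own probability is at least 0.5.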
We consider two variants of the model based on the number of output classes for the multi-label classifier. The MultiTask 10 variant considers all the classes including \"No Code\" while MultiTask 9 excludes the \"No Code\" class. The inference is identical to the Pipeline Model. The Multi-Task and Stand-Alone model paradigm can be thought of as fairly similar architectures. However, the Multi-Task model assigns a higher loss weight to the binary classifier, uses a different optimization metric and utilizes a pipelined inference approach as opposed to the one-shot prediction by the Stand-Alone model. So far, we explained that we break the conventional eight subscales into nine which have mutually exclusive items. While this approach allows us for better analysis, it is essential to present performance on the original theoretical subscales. To that end, we aggregate binary vector model predictions to the conventional eight MULTI subscales during evaluation. The output of the model, a ten-dimensional vector, will be mapped to a nine-dimensional vector(eight subscales plus 'No Code'). We use these nine-dimensional vectors to perform model evaluation. Table 8 is a guide for mapping model classes to the MULTI subscales.", "cite_spans": [ { "start": 511, "end": 530, "text": "(Liu et al., 2019a;", "ref_id": "BIBREF16" }, { "start": 531, "end": 558, "text": "Stickland and Murray, 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1841, "end": 1848, "text": "Table 8", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "All the models use the RoBERTa-base implementation in HuggingFace's Transformers library (Wolf et al., 2020) for obtaining contextual representations. We utilize the pooler output as defined by the library which uses the embedding of the classification token passed through a pre-trained linear layer followed by a tanh activation. We use weighted losses to account for class-imbalance in all cases. The loss weight for a label i is determined by 1 \u2212 n i n , where n i are the number of talk-turns where label i is coded, and n is the total number of talk-turns in the training data. This choice ensures that rarer classes are given greater importance during learning.", "cite_spans": [ { "start": 89, "end": 108, "text": "(Wolf et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "Hyperparameters. All the models use a learning rate of 10 \u22125 and the RoBERTa layer is fine-tuned is each case. We use the early stopping mechanism set at 5 epochs to avoid overfitting. The macro-averaged F1 score on a held-out validation is used to choose the best multi-label classification model. We use macro-averaged F2 score, instead, for the binary classification models since it favors recall on the positive label. This metric is ideal since the multi-label classifier would have the opportunity to correct false positives leaking from the binary classifier. Hyperparameters are tuned based on experimental results on a smaller dataset. All results are averages across three random seeds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.1" }, { "text": "How do our models perform on the dataset? The comparative performance of our models are shown in Table 3 . We report the model performance in terms of exact accuracy, micro and macroaveraged F1 scores across the label set, including the No Code (NC) label, and excluding it. 
We see that all the modeling paradigms perform almost similarly and to our surprise, the Pipeline or the MultiTask models do not produce substantial gains. Furthermore, we investigate the performance of the models on individual approach categories to understand the results further. These are reported in Table 4 . We observe that model performances for categories do not deviate substantially between paradigms. By comparing to the number of training examples per label in Table 2 , we observe that the performance closely correlates to the amount of data seen by the model.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 104, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 580, "end": 587, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 749, "end": 756, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "4.2" }, { "text": "For the results in Table 3, we consider just the therapist talk-turn and not the context surrounding it, i.e., the client and therapist talk-turns before or after it. We investigate whether adding additional context helps. We consider the following two approaches in addition to the previously shown approach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does added context help?", "sec_num": null }, { "text": "1. Client talk-turn immediately preceding the therapist talk-turn in question can help determine the subscale. Take, for example, the Person-Centered subscale items. In these interventions, therapists often paraphrase statements which clients had just made. Hence, we concatenate the previous client (PrevC) talk-turn to the therapist talk-turn. 2. We observe from the training data that subscales tend to occur in chunks with the therapist opting for a certain orientation for a period of the session. We experiment with added therapist talk-turn context (TC) preceding and following the talk-turn in question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Does added context help?", "sec_num": null }, { "text": "We choose the MultiTask 9 model for this comparison which achieves the best performance. The results are in Table 5 . We see that there is a small increase observed when therapist contexts are added. However, these gains are not substantial (< 2%). Client context does not help the performance. We also show some example predictions of a session snapshot in Table 6 .", "cite_spans": [], "ref_spans": [ { "start": 108, "end": 115, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 358, "end": 365, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Does added context help?", "sec_num": null }, { "text": "In this section, we present analyses on the development set. We choose the best performing Multi-Task 9 model for our analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "Do our models capture the global prevalence of approaches? The MULTI, to begin with, was intended to capture approaches at the session level. We investigate whether our models replicate the trends at a session-level. The comparative analysis for a randomly chosen session is shown in Figure 1 . We see that despite making mistakes locally, the model captures approaches over therapist talk-turns. In this case, we see that the therapist scarcely uses a Psychodynamic or Interpersonal intervention and the model prediction shows similar behavior. 
On the other hand, the other subscale interventions are used almost uniformly over the length of the session. The model again captures this pattern.", "cite_spans": [], "ref_spans": [ { "start": 284, "end": 292, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "Which categories are confused with each other? Figure 2a presents which categories tend to cooccur with each other. We observe a category Process-Experiential (PE) co-occurs with Person-Centered (PC) almost every third instance. Similarly, Psychodynamic (PD) approach almost always co-occurs with Process-Experiential (PE). Note that this is not commutative, i.e., PE co-occurs with PD about every fourth instance. Figure 2b shows the same, however, between gold labels and model prediction. Here we ask the question: for a certain category that exists in the gold data, what are the categories predicted by the model ? Figures 2a and 2b should be identical if our model is ideal. Studying these figures in conjunction, gives us an idea of where the model confuses predictions the most. For example, a lot of Cognitive (CT) instances get misclassified as Person-Centered (PC), a trend which is not reflected in Figure 2a . We also observe that Psychodynamic (PD) items get significantly mispredicted as Process-Experiential (PE). A large number of approach-labeled instances get classified as 'No Code'. We expected this observa- tion given the skew in the training data.", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 56, "text": "Figure 2a", "ref_id": "FIGREF1" }, { "start": 415, "end": 424, "text": "Figure 2b", "ref_id": "FIGREF1" }, { "start": 618, "end": 637, "text": "? Figures 2a and 2b", "ref_id": "FIGREF1" }, { "start": 911, "end": 920, "text": "Figure 2a", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5" }, { "text": "F1 scores and Cohen's kappa scores cannot be compared directly. We analyze some model error examples to assess examples in a fair manner. We selected 22 examples at random with the constraint of selecting different combinations of labels. Out of the 22 examples chosen, five were ones which had an 'NC' gold label and a non-'NC' model prediction, while five had the opposite. The remaining twelve examples were mis-predictions between approach classes. Of the twelve, four were cases in which the talk-turn had a single gold approach and a single model prediction which did not match, while four each were cases in which there were multiple gold approaches but a single model predicted approach, and vice-versa. We made sure that the cases were diverse. We present five of these examples. We consider the best MutliTask 9 model which is trained on just the therapist talk-turn (Va) for this analysis. Example 1 \"Interaction with your ex, like that's better for you\" Human Annotation: NC Model Prediction: IP Here the human assessed that the talk-turn was not structured or specific enough to earn a code, despite the presence of interpersonal content. However, the model identified interpersonal language which may or may not be linked to client distress. In this case, the human seems to have been more conservative than the model in applying a code. Example 2 \"And did you journal? Or keep a log?\" Human Annotation: BT, CT, DBT Model Prediction: NC Here, journaling and log-keeping likely refers to reviewing homework, so the annotator marked an Item 10. 
This item, subsequently, maps onto three subscales (BT, CT, and DBT). The model, in contrast, would not have known the homework context from this statement alone, resembling a case of atheoretical information gathering, hence an NC. Example 3 \"Yeah and it sound sounds to me like you've already been incredibly patient with him, waiting for him to do those things, and recently he's just been letting you down over and over.\" Human Annotation: IP Model Prediction: IP, PC, CF Both human and model identify clear evidence of client distress linked to an interpersonal relationship. However, the model detects justifiable PC and CF codes, explained by the emotion-added paraphrase and support for the client.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "6" }, { "text": "Example 4 \"I would guess that, I mean, that that's a really hard place for her to figure out.\" Human Annotation: PE, PC Model Prediction: CF There is no clear argument for PE with only the context from this talk-turn. The human coder likely saw that the therapist made a paraphrase to justify the PC code. The model's CF coding is likely linked to the phrase 'really hard', which often arises from therapists providing empathic support for their client.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "6" }, { "text": "Example 5 \"So how was that experience, this last week of paying attention to your thoughts?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "6" }, { "text": "Human Annotation: PC, BT, CT, DBT Model Prediction: PC The therapist clearly asks about the client's experience, justifying a PC label. The phrase \"last week of paying attention to your thoughts\", however, sounds like a homework check-in (Item 10). Similar to example 2, Item 10 triggers three subscales and the human annotation of BT, CT, and DBT subscales seems appropriate and highlights a case which the model does not capture. This is an interesting case of annotation based on common-sense knowledge with which NLP models still struggle. We should emphasize again that the humans do not annotate eight subscales directly; rather, they annotate based on the 30-item inventory. For instance, in example 2, the human annotator does not annotate the BT, CT, and DBT categories individually. They, instead, might have just annotated a single item (item 10) which maps to the three subscales. Hence, it should not be misconstrued that the human has over-labeled in that scenario.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "6" }, { "text": "In general, after analyzing the 22 examples, we find that in many such erroneous cases, prior intra-session (short or long range) and even inter-session contextual information might be relevant to determine the correct context. We leave this as a possible direction for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "6" }, { "text": "Artificial Intelligence and its sub-domains are being increasingly discussed as possible sources of improvements in mental health conversations (Lee et al., 2021; Aafjes-van Doorn et al., 2021) . 
Moreover, transcribed therapy data from counselling centres, and public mental health forums have encouraged interest in the NLP community (Goharian et al., 2021; Le Glaz et al., 2021) . NLP tools have since been used to help automate Motivational Interviewing (Tanana et al., 2016; P\u00e9rez-Rosas et al., 2017) , suicide ideation detection (Huang et al., 2014; Sawhney et al., 2018) , etc. to name a few. More recently, pre-trained language models have been increasing finding use in various facets like qualitative session content analysis (Grandeit et al., 2020) , detecting (Wu et al., 2021) and determining the direction of empathy (Hosseini and Caragea, 2021b,a) . Li et al. (2022) tions from a client perspective. Client talk-turn responses to therapist interventions are evaluated based on 3-class response type and a 5-class experience type adapted from TCCS (Ribeiro et al., 2013) .", "cite_spans": [ { "start": 144, "end": 162, "text": "(Lee et al., 2021;", "ref_id": "BIBREF14" }, { "start": 163, "end": 193, "text": "Aafjes-van Doorn et al., 2021)", "ref_id": "BIBREF0" }, { "start": 335, "end": 358, "text": "(Goharian et al., 2021;", "ref_id": null }, { "start": 359, "end": 380, "text": "Le Glaz et al., 2021)", "ref_id": null }, { "start": 457, "end": 478, "text": "(Tanana et al., 2016;", "ref_id": "BIBREF24" }, { "start": 479, "end": 504, "text": "P\u00e9rez-Rosas et al., 2017)", "ref_id": "BIBREF19" }, { "start": 534, "end": 554, "text": "(Huang et al., 2014;", "ref_id": "BIBREF9" }, { "start": 555, "end": 576, "text": "Sawhney et al., 2018)", "ref_id": "BIBREF21" }, { "start": 735, "end": 758, "text": "(Grandeit et al., 2020)", "ref_id": "BIBREF6" }, { "start": 771, "end": 788, "text": "(Wu et al., 2021)", "ref_id": "BIBREF27" }, { "start": 830, "end": 861, "text": "(Hosseini and Caragea, 2021b,a)", "ref_id": null }, { "start": 864, "end": 880, "text": "Li et al. (2022)", "ref_id": null }, { "start": 1061, "end": 1083, "text": "(Ribeiro et al., 2013)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "To the best of our knowledge, this will be the first work to automate the MULTI subscale assignment of therapist talk-turns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "7" }, { "text": "The expanding awareness and need for mental health improvement demands the ubiquity of such resources. Therapeutic evaluation becomes increasingly important as more people leverage mental health resources. We consider one such evaluation strategy -a talk-turn level adaptation of the MULTI -which evaluates therapist orientations. A major downside of such strategies remains their time-intensive nature. In this paper, we propose using pre-trained language models, which have proven to be high performance systems, to automate this evaluation. We experiment across three modeling paradigms using a pre-trained language model -RoBERTa. In addition, we show substantial analyses to understand the results. Our experiments are encouraging, however, we stress that substantial gaps in performance remain. We see this work as a significant stepping stone towards improving therapeutic feedback using NLP tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "We note that the gold data used for this project was collected at a university counseling center at a university in the western United States. This induces a demographic bias in the data. 
It is highly possible that this data is neither representative of the various dialects of the English language spoken around the globe, nor of mental health concerns in the broader population. Our models are built using pre-trained language models, which, by design, are opaque. Consequently, our results are not interpretable. The data was anonymized to protect information disclosures. Text snippets have been paraphrased by a Psychology graduate to mask stylistic cues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ethics Statement", "sec_num": "9" }, { "text": "We would like to thank the members of the Utah NLP group, and Utah Laboratory for Psychotherapy Science for their invaluable suggestions through the course of this work. We would also like to thank the anonymous reviewers for their insightful feedback. The authors acknowledge the support of NIH/NIAAA R01 AA018673 and NSF award #1822877 (Cyberlearning). Conflict of Interest. Drs. Imel and Atkins are co-founders and have minority equity stakes in a technology company -Lyssn.io that is focused on developing computational models that quantify aspects of patient-provider interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements and COI Declarations", "sec_num": "10" }, { "text": "We use the words 'approach' and 'orientation' interchangeably. Later in this paper, we use 'subscales' to align with practical usage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "A Subscales, Items and Model Classes 6, 12, 14, 15 Psychodynamic 5, 7, 18, 26, 27, 30 Interpersonal 4, 21, 11, 16, 17 Common Factors 8, 9, 19 Behavioral only 13,20,24Cognitive only 28,29Dialectical-Behavioral only 1,10Cognitive-Behavioral shared Table 7 are shown. 
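To make the aggregation concrete, the sketch below (with illustrative class names; Table 8 remains the authoritative guide) maps the mutually exclusive model classes back to the conventional MULTI subscales plus 'No Code' with a logical OR.

```python
# Illustrative sketch of collapsing model-class predictions to the eight MULTI
# subscales plus 'No Code'. The "_only"/"_shared" names are hypothetical stand-ins.
MODEL_CLASS_TO_SUBSCALES = {
    "PD": ["PD"], "IP": ["IP"], "PC": ["PC"], "PE": ["PE"], "CF": ["CF"],
    "BT_only": ["BT"], "CT_only": ["CT"], "DBT_only": ["DBT"],
    "CBT_shared": ["BT", "CT", "DBT"],  # items 1 and 10, shared by all three subscales
    "NC": ["NC"],
}

def aggregate_to_subscales(prediction: dict) -> dict:
    """Collapse a {model_class: 0/1} vector into {subscale: 0/1} via logical OR."""
    subscales = {s: 0 for targets in MODEL_CLASS_TO_SUBSCALES.values() for s in targets}
    for model_class, present in prediction.items():
        if present:
            for subscale in MODEL_CLASS_TO_SUBSCALES[model_class]:
                subscales[subscale] = 1
    return subscales

# e.g., a talk-turn predicted as Cognitive-only plus the shared class yields
# BT, CT, and DBT at the subscale level.
```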
Note that all our evaluations are presented on the conventional MULTI sub-scales by aggregating performance on their constituent model classes.", "cite_spans": [ { "start": 37, "end": 39, "text": "6,", "ref_id": null }, { "start": 40, "end": 43, "text": "12,", "ref_id": null }, { "start": 44, "end": 47, "text": "14,", "ref_id": null }, { "start": 48, "end": 67, "text": "15 Psychodynamic 5,", "ref_id": null }, { "start": 68, "end": 70, "text": "7,", "ref_id": null }, { "start": 71, "end": 74, "text": "18,", "ref_id": null }, { "start": 75, "end": 78, "text": "26,", "ref_id": null }, { "start": 79, "end": 82, "text": "27,", "ref_id": null }, { "start": 83, "end": 102, "text": "30 Interpersonal 4,", "ref_id": null }, { "start": 103, "end": 106, "text": "21,", "ref_id": null }, { "start": 107, "end": 110, "text": "11,", "ref_id": null }, { "start": 111, "end": 114, "text": "16,", "ref_id": null }, { "start": 115, "end": 135, "text": "17 Common Factors 8,", "ref_id": null }, { "start": 136, "end": 138, "text": "9,", "ref_id": null }, { "start": 139, "end": 141, "text": "19", "ref_id": null } ], "ref_spans": [ { "start": 246, "end": 253, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Scoping Review of Machine Learning in Psychotherapy Research", "authors": [ { "first": "Katie", "middle": [], "last": "Aafjes-Van Doorn", "suffix": "" }, { "first": "C\u00e9line", "middle": [], "last": "Kamsteeg", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Bate", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Aafjes", "suffix": "" } ], "year": 2021, "venue": "Psychotherapy Research", "volume": "31", "issue": "1", "pages": "92--116", "other_ids": { "DOI": [ "https://www.tandfonline.com/doi/epub/10.1080/10503307.2020.1808729?needAccess=true" ] }, "num": null, "urls": [], "raw_text": "Katie Aafjes-van Doorn, C\u00e9line Kamsteeg, Jordan Bate, and Marc Aafjes. 2021. A Scoping Review of Ma- chine Learning in Psychotherapy Research. Psy- chotherapy Research, 31(1):92-116.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes", "authors": [ { "first": "Jie", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Tanana", "suffix": "" }, { "first": "Zac", "middle": [], "last": "Imel", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Poitras", "suffix": "" }, { "first": "David", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Srikumar", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5599--5611", "other_ids": { "DOI": [ "10.18653/v1/P19-1563" ] }, "num": null, "urls": [], "raw_text": "Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, and Vivek Srikumar. 2019. Observing Di- alogue in Therapy: Categorizing and Forecasting Behavioral Codes. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5599-5611, Florence, Italy. 
Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Development of a Multitheoretical, Statement-level Measure of Psychotherapeutic Interventions", "authors": [ { "first": "", "middle": [], "last": "Derek D Caperton", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Derek D Caperton. 2021. Development of a Multitheo- retical, Statement-level Measure of Psychotherapeu- tic Interventions. . Dissertation.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": { "DOI": [ "10.18653/v1/N19-1423" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automated Quality Assessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Representations", "authors": [ { "first": "Nikolaos", "middle": [], "last": "Flemotomos", "suffix": "" }, { "first": "R", "middle": [], "last": "Victor", "suffix": "" }, { "first": "Zhuohao", "middle": [], "last": "Martinez", "suffix": "" }, { "first": "Torrey", "middle": [ "A" ], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Creed", "suffix": "" }, { "first": "C", "middle": [], "last": "David", "suffix": "" }, { "first": "Shrikanth", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "", "middle": [], "last": "Narayanan", "suffix": "" } ], "year": 2021, "venue": "PloS one", "volume": "16", "issue": "10", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolaos Flemotomos, Victor R Martinez, Zhuohao Chen, Torrey A Creed, David C Atkins, and Shrikanth Narayanan. 2021. Automated Quality As- sessment of Cognitive Behavioral Therapy Sessions Through Highly Contextualized Language Represen- tations. 
PloS one, 16(10):e0258639.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access", "authors": [ { "first": "Nazli", "middle": [], "last": "Goharian", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Yates", "suffix": "" }, { "first": "Molly", "middle": [], "last": "Ireland", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Niederhoffer", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Resnik", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nazli Goharian, Philip Resnik, Andrew Yates, Molly Ireland, Kate Niederhoffer, and Rebecca Resnik, edi- tors. 2021. Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access. Association for Computational Linguistics, Online.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using BERT for Qualitative Content Analysis in Psychosocial Online Counseling", "authors": [ { "first": "Philipp", "middle": [], "last": "Grandeit", "suffix": "" }, { "first": "Carolyn", "middle": [], "last": "Haberkern", "suffix": "" }, { "first": "Maximiliane", "middle": [], "last": "Lang", "suffix": "" }, { "first": "Jens", "middle": [], "last": "Albrecht", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Lehmann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science", "volume": "", "issue": "", "pages": "11--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Grandeit, Carolyn Haberkern, Maximiliane Lang, Jens Albrecht, and Robert Lehmann. 2020. Using BERT for Qualitative Content Analysis in Psy- chosocial Online Counseling. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science, pages 11-23.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distilling Knowledge for Empathy Detection", "authors": [ { "first": "Mahshid", "middle": [], "last": "Hosseini", "suffix": "" }, { "first": "Cornelia", "middle": [], "last": "Caragea", "suffix": "" } ], "year": 2021, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2021", "volume": "", "issue": "", "pages": "3713--3724", "other_ids": { "DOI": [ "10.18653/v1/2021.findings-emnlp.314" ] }, "num": null, "urls": [], "raw_text": "Mahshid Hosseini and Cornelia Caragea. 2021a. Dis- tilling Knowledge for Empathy Detection. In Find- ings of the Association for Computational Linguis- tics: EMNLP 2021, pages 3713-3724, Punta Cana, Dominican Republic. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "It Takes Two to Empathize: One to Seek and One to Provide", "authors": [ { "first": "Mahshid", "middle": [], "last": "Hosseini", "suffix": "" }, { "first": "Cornelia", "middle": [], "last": "Caragea", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "35", "issue": "", "pages": "13018--13026", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mahshid Hosseini and Cornelia Caragea. 2021b. It Takes Two to Empathize: One to Seek and One to Provide. 
Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):13018-13026.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Detecting Suicidal Ideation in Chinese Microblogs with Psychological Lexicons", "authors": [ { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "David", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Tianli", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tingshao", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2014, "venue": "2014 IEEE 11th Intl Conf on Ubiquitous Intelligence and Computing and 2014 IEEE 11th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaolei Huang, Lei Zhang, David Chiu, Tianli Liu, Xin Li, and Tingshao Zhu. 2014. Detecting Suicidal Ideation in Chinese Microblogs with Psychological Lexicons. In 2014 IEEE 11th Intl Conf on Ubiqui- tous Intelligence and Computing and 2014 IEEE 11th", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Intl Conf on Autonomic and Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "844--849", "other_ids": {}, "num": null, "urls": [], "raw_text": "Intl Conf on Autonomic and Trusted Computing and 2014 IEEE 14th Intl Conf on Scalable Computing and Communications and Its Associated Workshops, pages 844-849. IEEE.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Computational Psychotherapy Research: Scaling up the Evaluation of Patient-Provider Interactions", "authors": [ { "first": "E", "middle": [], "last": "Zac", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Imel", "suffix": "" }, { "first": "David C", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "", "middle": [], "last": "Atkins", "suffix": "" } ], "year": 2015, "venue": "Psychotherapy", "volume": "52", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zac E Imel, Mark Steyvers, and David C Atkins. 2015. Computational Psychotherapy Research: Scaling up the Evaluation of Patient-Provider Interactions. Psy- chotherapy, 52(1):19.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The Measurement of Observer Agreement for Categorical Data", "authors": [ { "first": "Richard", "middle": [], "last": "Landis", "suffix": "" }, { "first": "Gary G", "middle": [], "last": "Koch", "suffix": "" } ], "year": 1977, "venue": "Biometrics", "volume": "", "issue": "", "pages": "159--174", "other_ids": {}, "num": null, "urls": [], "raw_text": "J Richard Landis and Gary G Koch. 1977. The Measure- ment of Observer Agreement for Categorical Data. 
Biometrics, pages 159-174.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom", "authors": [ { "first": "E", "middle": [], "last": "Ellen", "suffix": "" }, { "first": "John", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Munmun", "middle": [ "De" ], "last": "Torous", "suffix": "" }, { "first": "Colin", "middle": [ "A" ], "last": "Choudhury", "suffix": "" }, { "first": "Sarah", "middle": [ "A" ], "last": "Depp", "suffix": "" }, { "first": "Ho-Cheol", "middle": [], "last": "Graham", "suffix": "" }, { "first": "", "middle": [], "last": "Kim", "suffix": "" }, { "first": "P", "middle": [], "last": "Martin", "suffix": "" }, { "first": "", "middle": [], "last": "Paulus", "suffix": "" }, { "first": "H", "middle": [], "last": "John", "suffix": "" }, { "first": "Dilip V", "middle": [], "last": "Krystal", "suffix": "" }, { "first": "", "middle": [], "last": "Jeste", "suffix": "" } ], "year": 2021, "venue": "Biological Psychiatry: Cognitive Neuroscience and Neuroimaging", "volume": "6", "issue": "9", "pages": "856--864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen E Lee, John Torous, Munmun De Choudhury, Colin A Depp, Sarah A Graham, Ho-Cheol Kim, Martin P Paulus, John H Krystal, and Dilip V Jeste. 2021. Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Ar- tificial Wisdom. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 6(9):856-864.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Pengfei Fang, Hongliang He, and Zhenzhong Lan. 2022. Towards Automated Real-time Evaluation in Text-based Counseling", "authors": [ { "first": "Anqi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jingsong", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Lizhi", "middle": [], "last": "Ma", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2203.03442" ] }, "num": null, "urls": [], "raw_text": "Anqi Li, Jingsong Ma, Lizhi Ma, Pengfei Fang, Hongliang He, and Zhenzhong Lan. 2022. Towards Automated Real-time Evaluation in Text-based Coun- seling. arXiv preprint arXiv:2203.03442.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multi-Task Deep Neural Networks for Natural Language Understanding", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pengcheng", "middle": [], "last": "He", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4487--4496", "other_ids": { "DOI": [ "10.18653/v1/P19-1441" ] }, "num": null, "urls": [], "raw_text": "Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-Task Deep Neural Networks for Natural Language Understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496, Flo- rence, Italy. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Roberta: A Robustly Optimized BERT Pretraining Approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Multitheoretical List of Therapeutic Interventions (MULTI): Initial Report", "authors": [ { "first": "S", "middle": [], "last": "Kevin", "suffix": "" }, { "first": "Jacques P", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "", "middle": [], "last": "Barber", "suffix": "" } ], "year": 2009, "venue": "Psychotherapy research", "volume": "19", "issue": "1", "pages": "96--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kevin S McCarthy and Jacques P Barber. 2009. The Multitheoretical List of Therapeutic Interventions (MULTI): Initial Report. Psychotherapy research, 19(1):96-113.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Predicting Counselor Behaviors in Motivational Interviewing Encounters", "authors": [ { "first": "Ver\u00f3nica", "middle": [], "last": "P\u00e9rez-Rosas", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Resnicow", "suffix": "" }, { "first": "Satinder", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "An", "suffix": "" }, { "first": "J", "middle": [], "last": "Kathy", "suffix": "" }, { "first": "Delwyn", "middle": [], "last": "Goggin", "suffix": "" }, { "first": "", "middle": [], "last": "Catley", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1128--1137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ver\u00f3nica P\u00e9rez-Rosas, Rada Mihalcea, Kenneth Resni- cow, Satinder Singh, Lawrence An, Kathy J Goggin, and Delwyn Catley. 2017. Predicting Counselor Be- haviors in Motivational Interviewing Encounters. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 1, Long Papers, pages 1128-1137.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "How Collaboration in Therapy Becomes Therapeutic: The Therapeutic Collaboration Coding System. 
Psychology and Psychotherapy: Theory, Research and Practice", "authors": [ { "first": "Eug\u00e9nia", "middle": [], "last": "Ribeiro", "suffix": "" }, { "first": "P", "middle": [], "last": "Antonio", "suffix": "" }, { "first": "Miguel", "middle": [ "M" ], "last": "Ribeiro", "suffix": "" }, { "first": "Adam", "middle": [ "O" ], "last": "Gon\u00e7alves", "suffix": "" }, { "first": "William B", "middle": [], "last": "Horvath", "suffix": "" }, { "first": "", "middle": [], "last": "Stiles", "suffix": "" } ], "year": 2013, "venue": "", "volume": "86", "issue": "", "pages": "294--314", "other_ids": { "DOI": [ "https://bpspsychub.onlinelibrary.wiley.com/doi/10.1111/j.2044-8341.2012.02066.x" ] }, "num": null, "urls": [], "raw_text": "Eug\u00e9nia Ribeiro, Antonio P Ribeiro, Miguel M Gon\u00e7alves, Adam O Horvath, and William B Stiles. 2013. How Collaboration in Therapy Becomes Ther- apeutic: The Therapeutic Collaboration Coding Sys- tem. Psychology and Psychotherapy: Theory, Re- search and Practice, 86(3):294-314.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Exploring and Learning Suicidal Ideation Connotations on Social Media with Deep Learning", "authors": [ { "first": "Ramit", "middle": [], "last": "Sawhney", "suffix": "" }, { "first": "Prachi", "middle": [], "last": "Manchanda", "suffix": "" }, { "first": "Puneet", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Rajiv", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Raj", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "167--175", "other_ids": { "DOI": [ "10.18653/v1/W18-6223" ] }, "num": null, "urls": [], "raw_text": "Ramit Sawhney, Prachi Manchanda, Puneet Mathur, Rajiv Shah, and Raj Singh. 2018. Exploring and Learning Suicidal Ideation Connotations on Social Media with Deep Learning. In Proceedings of the 9th Workshop on Computational Approaches to Sub- jectivity, Sentiment and Social Media Analysis, pages 167-175, Brussels, Belgium. Association for Compu- tational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "The Multitheoretical List of Therapeutic Interventions-30 Items (MULTI-30)", "authors": [ { "first": "Nili", "middle": [], "last": "Solomonov", "suffix": "" }, { "first": "S", "middle": [], "last": "Kevin", "suffix": "" }, { "first": "", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "S", "middle": [], "last": "Bernard", "suffix": "" }, { "first": "Jacques P", "middle": [], "last": "Gorman", "suffix": "" }, { "first": "", "middle": [], "last": "Barber", "suffix": "" } ], "year": 2019, "venue": "Psychotherapy Research", "volume": "29", "issue": "5", "pages": "565--580", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nili Solomonov, Kevin S McCarthy, Bernard S Gorman, and Jacques P Barber. 2019. The Multitheoretical List of Therapeutic Interventions-30 Items (MULTI- 30). 
Psychotherapy Research, 29(5):565-580.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning", "authors": [ { "first": "Asa", "middle": [ "Cooper" ], "last": "Stickland", "suffix": "" }, { "first": "Iain", "middle": [], "last": "Murray", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "5986--5995", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asa Cooper Stickland and Iain Murray. 2019. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning. In International Conference on Machine Learning, pages 5986-5995. PMLR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A Comparison of Natural Language Processing Methods for Automated Coding of Motivational Interviewing", "authors": [ { "first": "Michael", "middle": [], "last": "Tanana", "suffix": "" }, { "first": "Kevin", "middle": [ "A" ], "last": "Hallgren", "suffix": "" }, { "first": "E", "middle": [], "last": "Zac", "suffix": "" }, { "first": "", "middle": [], "last": "Imel", "suffix": "" }, { "first": "C", "middle": [], "last": "David", "suffix": "" }, { "first": "Vivek", "middle": [], "last": "Atkins", "suffix": "" }, { "first": "", "middle": [], "last": "Srikumar", "suffix": "" } ], "year": 2016, "venue": "Journal of substance abuse treatment", "volume": "65", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Tanana, Kevin A Hallgren, Zac E Imel, David C Atkins, and Vivek Srikumar. 2016. A Comparison of Natural Language Processing Methods for Auto- mated Coding of Motivational Interviewing. Journal of substance abuse treatment, 65:43-50.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Attention Is All You Need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. 
In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Transformers: State-of-the-Art Natural Language Processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Towards Low-Resource Real-Time Assessment of Empathy in Counselling", "authors": [ { "first": "Zixiu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rim", "middle": [], "last": "Helaoui", "suffix": "" }, { "first": "Diego", "middle": [ "Reforgiato" ], "last": "Recupero", "suffix": "" }, { "first": "Daniele", "middle": [], "last": "Riboni", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access", "volume": "", "issue": "", "pages": "204--216", "other_ids": { "DOI": [ "10.18653/v1/2021.clpsych-1.22" ] }, "num": null, "urls": [], "raw_text": "Zixiu Wu, Rim Helaoui, Diego Reforgiato Recupero, and Daniele Riboni. 2021. Towards Low-Resource Real-Time Assessment of Empathy in Counselling. In Proceedings of the Seventh Workshop on Computa- tional Linguistics and Clinical Psychology: Improv- ing Access, pages 204-216, Online. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Predictions over a therapy session. Session proceeds left to right with a colored bar indicating the presence of an approach (or lack thereof) for the respective category. Plot (a) shows the approaches in gold annotations, (b) shows the same for model predictions.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "Co-occurrence Statistics. Figure (a) describes co-occurrence between approaches in therapist talkturns in the human-annotated gold data. E.g., out of 227 talk-turns where PD is annotated, 173 talk-turns also had PE annotation.Figure (b)describes co-occurrence between approaches in the gold data and the model predictions. E.g., out of 227 talk-turns where PD is annotated as mentioned in (a), only 51 talk-turns had a PD model prediction. The color gradients are normalized on rows.", "num": null, "type_str": "figure" }, "TABREF1": { "html": null, "text": "Examples of talk-turns which have a single, multiple or no approach categories assigned. In the Multiple Approaches examples, colored text snippets correspond to their respective approach categories with the same color.", "num": null, "content": "", "type_str": "table" }, "TABREF3": { "html": null, "text": "", "num": null, "content": "
", "type_str": "table" }, "TABREF4": { "html": null, "text": "Test Labels Metrics (in %)SA Pipeline MultiTask 9 MultiTask 10", "num": null, "content": "
Exact Accuracy            76.84   74.63   78.14   75.86
All       F1 Macro        48.24   48.52   49.35   47.79
          F1 Micro        79.06   75.43   78.63   78.17
Non-NC    F1 Macro        42.79   43.32   44.06   42.32
          F1 Micro        47.03   46.90   47.64   46.26
", "type_str": "table" }, "TABREF5": { "html": null, "text": "Experimental results for all the classes (top half) and the eight subscales excluding'No Code' (bottom-half)", "num": null, "content": "
Class                    Class Abbrv.   SA      Pipeline   MultiTask 9   MultiTask 10
No Code                  NC             91.88   90.07      91.65         91.55
Psychodynamic            PD             32.11   32.64      30.65         32.97
Process-Experiential     PE             67.20   65.30      67.53         67.32
Interpersonal            IP             33.25   35.21      38.16         34.34
Person-centered          PC             43.95   44.86      43.13         43.77
Common Factors           CF             48.99   48.87      48.06         47.30
Behavioral               BT             41.25   43.00      43.90         38.96
Cognitive                CT             33.95   33.12      36.41         34.26
Dialectical-Behavioral   DBT            41.62   43.60      44.65         39.67
", "type_str": "table" }, "TABREF6": { "html": null, "text": "", "num": null, "content": "
Labels    Metrics    Va      PrevC   TC
          Acc        78.14   78.34   78.69
All       F1 Macro   49.35   49.12   50.17
          F1 Micro   78.63   78.67   79.00
Non-NC    F1 Macro   44.06   43.80   44.96
          F1 Micro   47.64   47.25   48.00
", "type_str": "table" }, "TABREF7": { "html": null, "text": "", "num": null, "content": "
Comparison of model performance (in %) with added contexts, compared to the MultiTask 9 model with just the therapist talk-turn (Va). This table shows results for all labels (top half) and the eight subscales excluding 'No Code' (bottom half).
", "type_str": "table" } } } }