AI based approach to Trailer Generation for Online Educational Courses

1st Prakhar Mishra, IIIT Bangalore, India, prakhar.mishra@iiitb.ac.in
2nd Chaitali Diwan, IIIT Bangalore, India, chaitali.diwan@iiitb.ac.in
3rd Srinath Srinivasa, IIIT Bangalore, India, sri@iiitb.ac.in
4th G. Srinivasaraghavan, IIIT Bangalore, India, gsr@iiitb.ac.in

Abstract—In this paper, we propose an AI based approach to Trailer Generation in the form of short videos for online educational courses. Trailers give an overview of the course to the learners and help them make an informed choice about the courses they want to learn. They also help to generate curiosity and interest among the learners and encourage them to pursue a course. While it is possible to manually generate the trailers, it requires extensive human effort and skills over a broad spectrum of design, span selection, video editing, domain knowledge, etc., thus making it time-consuming and expensive, especially in an academic setting.
The framework we propose in this work is a template based method for video trailer generation, where most of the textual content of the trailer is auto-generated and the trailer video is automatically composed, by leveraging Machine Learning and Natural Language Processing techniques. The proposed trailer is in the form of a timeline consisting of various fragments created by selecting, paraphrasing or generating content using various proposed techniques. The fragments are further enhanced by adding voice-over text, subtitles, animations, etc., to create a holistic experience. Finally, we perform a user evaluation with 63 human evaluators of the trailers generated by our system, and the results obtained are encouraging.

Index Terms—Video Trailer Generation, Machine Learning, Natural Language Processing

I. INTRODUCTION

The growth of the internet has significantly increased the amount of free instructional content. These resources are offered not only by big institutions but also by individual content creators over various platforms such as Coursera, Udemy, YouTube, etc. This increase in content production rate has resulted in the creation of redundant courses and tutoring videos for many topics over time. In spite of advantages like on-demand accessibility, the abundance of options has increased confusion and made it more challenging to select a course that might be in line with a learner's interests. And often, enrolling in a course that does not meet the learner's expectations for its curriculum and other aspects, such as the expected level of commitment, the availability of support, etc., causes the learner to lose motivation and eventually drop the course [1], [2].
This problem can be tackled to a certain extent by presenting a video trailer to the learners before the start of the course (learning pathway) to help them quickly glance through the pathway and get an overall idea of the course content and its format [3]–[5]. The idea of trailers is not brand-new, and the film industry has been using them extensively for a while. Trailers, in the context of movies, are mostly about advertising. They notify viewers about an upcoming movie while generating interest among them. Often the effectiveness of a trailer affects the perception of the movie, even before it is released publicly. Course trailers serve a greater purpose in the educational context than simple course promotion. Before beginning the learning journey, they help learners set realistic expectations for their learning outcomes and competency mastery.

The concept of trailers might resemble that of summarization [6]–[8], but apart from incorporating a few elements of summarization, like shortening and abstracting out information from a substantially sized input source, trailers are different in terms of their motivation, purpose and the impact they create on the end users. Unlike summaries, trailers need not be complete in their coverage. Also, they are designed to give glimpses of a few interesting segments of the narrative without revealing the main plot or climax of the underlying narrative [9]. Although there is no clear demarcation of what a climax is in academic narratives, based on our analysis of many academic course trailers in popular MOOCs (Massive Open Online Courses) such as Udemy (https://www.udemy.com) and Coursera (https://www.coursera.org), we see the prevalence of a common pattern in trailer timelines. The timeline starts with an introduction about the course and the instructor and ends with a call-to-action (CTA) which offers an opportunity to the learners to take action or start the course.
In between, there are several elements and factoids about the course and its contents that aim to arouse viewer interest. The current approach of generating trailers is manual, cumbersome and time-consuming; it requires someone with relevant skills like designing and video editing, and a subject matter expert to help in curating the trailer content. Although there are software products like Apple iMovie (https://www.apple.com/in/imovie), Windows Movie Maker (https://www.microsoft.com/en-us/p/movie-maker-video-editor/9mvfq4lmz6c9) and others that people can use for generating trailers by performing basic editing like cuts, merging frames, etc., the content to be placed in the trailer has to be curated entirely by a human expert.

Fig. 1. Trailer Structure

In our work, we propose a semi-automatic template based framework for generating video trailers for learning pathways, which are sequences of related educational documents of various forms [10]–[12].
Here, most of the content that is placed in the trailer is auto-generated, with scope for taking inputs from the creator. The framework for trailer generation consists of various essential trailer fragments arranged as a timeline of the trailer. Each fragment is composed of a sequence of frames that are coherent within themselves in terms of the topical information they present. In turn, each frame is composed of various types of elements and their properties like font size, text styling, image size, etc. Fig. 1 shows an illustration of this structure. Once all the elements are generated and placed at their respective positions within a frame of a trailer fragment, a template is applied to it. The template consists of the multi-modal experiences such as voice-over, subtitles, sounds, animations, etc. It also determines the elements of the trailer design such as the number and ordering of fragments, frames and elements. Fig. 2 shows a visual view of some of the frames for one of the templates with its corresponding elements and their positioning in the frames.

II. RELATED WORK

There are studies that discuss the idea, use and motivation of having trailers for academic courses [3]–[5]. Also, there are online educational platforms like Coursera and Udemy which have course trailers. However, we could not find literature on approaches to generating trailers for academic courses.
Hence, in the following paragraphs we discuss some of the pioneering works on trailer generation in general across other domains. Trailer generation can also be seen as a special case of the larger research interest of adding an element of surprise to engage the receiver's attention amid information overload [13], [14]. The authors in [15]–[18] present approaches for automatic trailer generation from movies as input. Hermes et al. [16] create trailers for action movies by analyzing the audio and video signals present in movies, automatically detecting features like faces, scene cuts, sound volume, etc., and using an ontology of the corresponding domain for producing trailers. Irie et al. [17] propose a movie trailer generation method which extracts symbols like the title logo and main theme music, and selects impressive shot or speech segments based on clustering methods and the EM algorithm. Brachmann et al. [15] propose an approach for generating action movie trailers using the concept of a trailer grammar, a knowledge base and various ML techniques for analyzing the audio and images present in the movie. Smith et al. [18] propose a system that understands and encodes the patterns and emotions present in horror movies using Convolutional Neural Networks (CNN). All the above methods use visual and audio cues to derive the trailer frames, whereas we use raw text data and build the necessary discriminative and generative Neural Network models to create the frames and the elements to be placed in the trailer. Hesham et al. [19] explore the idea of creating movie trailers from their subtitles.
They first classify the movie by genre, identify important keywords and then rank important subtitles. The trailer is then generated by stacking the movie time-frames corresponding to the important subtitles. Gaikwad et al. [20] propose a technique to create previews of movies by utilizing subtitles and finding the most representative scenes by matching them with the plot summaries. Chi et al. [21] propose an approach to automatically create marketing-style short videos for a given product page URL by extracting elements and their styles present in the product HTML page under specified tags. Unlike the aforementioned works, which primarily focus on generating trailers based on extractive strategies, in our work we develop various modules that comprehend the input document and generate content for the trailer either by paraphrasing or by using a Natural Language Generator based model. As far as we know, automatic or semi-automatic generation of video trailers for learning pathways is unexplored. Our proposed approach of video trailer generation using Machine Learning, Natural Language Processing and Generation techniques is also unique.

III. PROPOSED SYSTEM

We propose a framework for trailer generation consisting of different trailer fragments that form a trailer timeline, generation of the trailer fragments, and finally the application of templates that determine the look and feel of the trailer. Based on our analysis of multiple trailers presented for various online courses offered on educational platforms like Coursera and Udemy, we designed and structured our trailer elements, fragments and the overall flow of the trailer.
We propose a trailer timeline consisting of 7 trailer fragments, namely Splash, Trailer Title, Author Details, Outline, Meta-Information, Social Proof and finally the Call-to-Action. Fig. 3 shows the timeline of all the above-mentioned fragments in the trailer. Each of these fragments defines a specific part of the trailer, its purpose and its importance in the trailer. We define the fragments in detail further in this section. As discussed earlier, fragments are composed of a sequence of frames, and each frame is composed of various types of elements and their properties.

Fig. 2. Illustration of Frames

Fig. 3. Trailer Timeline

The overall approach for trailer generation is illustrated in Fig. 4. All the resources mapped to a learning pathway form the input to our Fragment Data Generator (FDG) module. Template constraints that define the elements, fragments and frames also form the input to the FDG.
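As a rough illustration of this nesting (trailer, fragments, frames, elements), the structure can be represented as plain containers; the class names and fields below are ours, not the paper's, and are only a minimal sketch.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Element:
    # a single on-screen item (text, image, ...) with its styling properties
    kind: str                                   # e.g. "text" or "image"
    content: str                                # text string or image path
    properties: Dict[str, str] = field(default_factory=dict)  # font size, position, ...

@dataclass
class Frame:
    # one screen of the trailer, topically coherent, made of elements
    elements: List[Element] = field(default_factory=list)
    voice_over: str = ""                        # narration spoken while the frame is shown

@dataclass
class Fragment:
    # a timeline unit such as Splash, Trailer Title, Outline, Call-to-Action, ...
    name: str
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Trailer:
    # the full trailer: an ordered list of fragments
    fragments: List[Fragment] = field(default_factory=list)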
The content produced by the FDG, along with inputs from other sources such as the creator, any images or information from the web, or knowledge bases, is incorporated into the frames and the fragments. Once the elements for all the frames across all the fragments are generated, we pass them to the composition module, which adds other important aspects of the trailer like voice-over, subtitles, sounds, etc., to add to its multi-modal experience.

A. Fragment Data Generation

The following are the proposed trailer fragments, arranged in the order of their appearance in the trailer timeline.

Splash Fragment: The idea of the splash fragment is to display any introductory information related to the trailer such as credits, software logo, etc., mostly obtained from the creator's input. This optional fragment could also be the last fragment in the trailer depending on the creator's preference.

Trailer Title Fragment: In this fragment we generate a short yet representative title for the entire trailer, giving a quick idea about the topic that summarizes the underlying pathway or set of resources. We apply the Hierarchical Title Generation model [22] over the resources mapped to the learning pathway to get a list of candidate trailer titles. We select a title among them based on their Term Frequency. In case none of the titles is above a threshold, we fall back on the fact that the first resource in the pathway is a proxy for the introductory resource, and we generate the trailer title for it by applying the Single Document Title Generator [23], [24]. Fig. 5 shows the trailer title fragment generation flow.
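The paper selects among the generated titles "based on their Term Frequency" without further detail. The sketch below is one possible reading, assuming the score of a candidate title is the average frequency of its terms in the pathway text and that a failed threshold triggers the single-document fallback; the scoring rule and the threshold value are our assumptions.

import re
from collections import Counter

def pick_trailer_title(candidate_titles, pathway_text, min_avg_count=3.0):
    # score each candidate by the average frequency of its terms in the pathway text;
    # return None if nothing clears the (illustrative) bar, so the caller can fall
    # back to the single-document title generator on the first resource
    counts = Counter(re.findall(r"[a-z]+", pathway_text.lower()))
    best_title, best_score = None, 0.0
    for title in candidate_titles:
        terms = re.findall(r"[a-z]+", title.lower())
        if not terms:
            continue
        score = sum(counts[t] for t in terms) / len(terms)   # average term frequency
        if score > best_score:
            best_title, best_score = title, score
    return best_title if best_score >= min_avg_count else None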
Author Details Fragment: A quick introduction about the author or instructor of the learning pathway could help the learners build an implicit connection and trust. The majority of the elements in the Author Details Fragment, like author names, affiliations and the author's image, are expected from the creator while creating the trailer. Template constraints, such as addressing multiple authors with different frame elements, and handling and obtaining relevant images to be put in this fragment, are also obtained from the trailer creator. These inputs and template constraints are plugged into the automation system to fill the overall author frame. Additionally, we crawl the web to get relevant images; for example, we crawl the web for relevant affiliation images and place them at the desired coordinates as defined by the template. Also, for the templates that allow only the frontal face of the author, we make use of an open-sourced face detection model (https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifier.html) to crop the face from the uploaded author image. In case no author image is provided to the system by the creator, we place a dummy caricatured image of the relevant size. Similarly, we have defined defaults for the features, frames and templates in case there is no input from the trailer creator. For example, when multiple authors exist, we display information w.r.t. the first author entered by the creator, treat him/her as the primary instructor, and abstract all the other authors by placing them under the "and others" category.
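A minimal sketch of this face-cropping step using the OpenCV Haar cascade referenced above; the padding margin and detector parameters are illustrative, not values from the paper.

import cv2

def crop_author_face(image_path, output_path, margin=0.2):
    # detect the largest frontal face and save a padded crop; returns False if no
    # face is found, in which case the caller falls back to the default caricature
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])      # keep the largest face
    pad_w, pad_h = int(w * margin), int(h * margin)
    x0, y0 = max(x - pad_w, 0), max(y - pad_h, 0)
    x1, y1 = min(x + w + pad_w, img.shape[1]), min(y + h + pad_h, img.shape[0])
    cv2.imwrite(output_path, img[y0:y1, x0:x1])
    return True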
Fig. 4. Trailer Generation Flow

Fig. 5. Trailer Title Fragment Generation Flow

Outline Fragment: This fragment gives an idea about the specific topics that will be covered in the learning pathway. This could help in setting learners' expectations in terms of the topics covered and in deciding whether the content aligns with their end goals. For this we use the Single Document Title Generator [23], [24] model to generate titles for all the resources in the learning pathway, which together represent the outline of the learning pathway. Every template under the outline fragment limits the number of text elements to be listed on the screen, with the aim of balancing aesthetics and information at the same time. To adhere to this prior constraint, we design a multi-step process to select a diverse, yet impactful set of elements from the relatively larger list of outlines generated in the previous step. Fig. 6 shows the entire pipeline of Outline Text Selection. Let K be the number of text elements that the frame requires and N be the total number of resources we have as input, with K < N. We start with all the resources (N) given by the user and remove any instance of assessments and short documents, under the assumption that such documents will not hold much informational content.
After this we remove any occurrence of exact duplicates and near duplicates in the remaining set and pass the remaining resource list to the title generator to generate a title for every resource. Following this, we fix the first and the last position of the outline with the first and last resource titles. We specifically do this because of the inherent ordering present in the input resources as part of a learning pathway. Also, intuitively, picking the first and last titles sets a bound over the topic space to be covered under a particular course. Finally, on this reduced set, we divide the space into bins of equal size, from which we randomly sample one outline element per bin to fill the remaining K−2 positions in the outline list. We use threshold based Jaccard and cosine similarity for filtering syntactic and semantic duplicates respectively. The Jaccard similarity between any two documents is calculated as the intersection over union of the word sets of both documents. It gives us a sense of the syntactic similarity between the documents.
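A minimal sketch of this selection step and the Jaccard measure; the function names and the exact sampling details are ours.

import random

def jaccard(a: str, b: str) -> float:
    # Jaccard similarity of the two documents' word sets (intersection over union)
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def select_outline(titles, k):
    # keep the first and last titles, then draw one title from each of k-2
    # equal-sized bins over the middle of the (ordered) list
    assert k >= 2 and len(titles) >= k
    middle = titles[1:-1]
    if k == 2:
        return [titles[0], titles[-1]]
    bin_size = len(middle) / (k - 2)
    sampled = []
    for i in range(k - 2):
        lo = int(i * bin_size)
        hi = max(int((i + 1) * bin_size), lo + 1)
        sampled.append(middle[random.randrange(lo, min(hi, len(middle)))])
    return [titles[0]] + sampled + [titles[-1]]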
For calculating cosine similarity, we vectorise our inputs using pre-trained Sentence Transformers [25] and then measure the semantic closeness between them using cosine similarity.
Algorithm 1 Duplicates Filter
1: resources = Array(1, 2, ..., N − 1, N)
2: remaining_resources = Array(1, N)
3: for i ← 2 to N − 1 do
4:   scores = Array()
5:   for r ← remaining_resources do
6:     scores ← calculate_similarity(i, r)
7:   end for
8:   if max(scores) < threshold then
9:     remaining_resources ← i
10:  end if
11: end for
12: return remaining_resources

Since every pathway is composed of different resources with various properties like length, style, etc., having one threshold that fits all does not work. Hence, our threshold is adaptive, in a way that guarantees at least K items are selected after any of the syntactic or semantic pruning steps. The threshold search space is between 0 and 1, where for efficiency and tractability we quantize it at 0.1. Then for each threshold we get the remaining resources as defined in Algorithm 1. Finally, the threshold that guarantees at least K items and reduces the input set by the maximum is chosen as the final threshold.
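A minimal Python rendering of Algorithm 1 and the adaptive threshold search described above; the similarity callback can be the jaccard() helper from the earlier sketch for the syntactic pass, or a cosine similarity over Sentence Transformer embeddings for the semantic pass. The helper names are ours.

def duplicates_filter(resources, similarity, threshold):
    # Algorithm 1: always keep the first and last resources, then greedily keep a
    # resource only if it is not too similar to anything already kept
    remaining = [resources[0], resources[-1]]
    for item in resources[1:-1]:
        scores = [similarity(item, kept) for kept in remaining]
        if max(scores) < threshold:
            remaining.append(item)
    return remaining

def adaptive_filter(resources, similarity, k):
    # try thresholds 0.1, 0.2, ..., 1.0 and pick the one that still leaves at least
    # k items while pruning the input the most; fall back to no pruning otherwise
    best = resources
    for step in range(1, 11):
        remaining = duplicates_filter(resources, similarity, step / 10)
        if len(remaining) >= k and len(remaining) < len(best):
            best = remaining
    return best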
Meta-Information Fragment: The idea of the Meta-Information Fragment is to inform learners about other important aspects of the course, like the course structure, total reading time, total number of resources, etc. We believe this helps learners understand more about the learning pathway or resources apart from just knowing the topics that will be covered. Also, such information can be used by learners in charting out their learning hours and estimating the effort it would take to successfully complete the course. Some of the elements that we generate automatically as part of this fragment are: topical word clouds (https://pypi.org/project/wordcloud/) based on word frequencies after pre-processing such as stop-word removal, an estimate of the total reading time based on average reading speed statistics, and other pathway-level derived statistics like the total number of resources, the availability of a discussion forum, etc.
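A minimal sketch of this meta-information computation using the wordcloud package referenced above; the 200 words-per-minute reading speed is our assumption, not a figure from the paper.

from wordcloud import WordCloud, STOPWORDS

AVG_WORDS_PER_MINUTE = 200  # assumed average reading speed (not specified in the paper)

def meta_information(pathway_texts, out_image="wordcloud.png"):
    # build a topical word cloud and a rough total-reading-time estimate
    # for the resources mapped to a learning pathway
    full_text = " ".join(pathway_texts)
    cloud = WordCloud(width=800, height=400, stopwords=STOPWORDS,
                      background_color="white").generate(full_text)
    cloud.to_file(out_image)

    total_words = len(full_text.split())
    return {"num_resources": len(pathway_texts),
            "estimated_reading_minutes": round(total_words / AVG_WORDS_PER_MINUTE),
            "wordcloud_image": out_image}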
Social Proof Fragment: Social proof is one of the most prominent forms of social influence and is based on the heuristic that users follow others similar to them when uncertain [26]. We collect these statistics from the deployed learning environments. This information is added to the video trailer over time, as different learners take the course and the analytical data becomes available.

Call-to-Action Fragment: CTA is a marketing term for content designed to push the audience towards taking a desired action. It is an important aspect of any trailer, because all of the enthusiasm built up in a learner while watching the trailer is of no use if the learner is not clear on the next actionable item [27], [28]. In our system, we randomly select phrases from a pre-defined list of potential key-phrases to be placed on the screen at a pre-defined location in this fragment. Some of the phrases we use are 'Start your learning today', 'Let's get started', 'Are you ready?', etc., along with the action that will take the learner to the learning pathway.

B. Additional Elements

In this subsection, we discuss two other interesting elements that we propose to be added to the trailers, namely the Definition Extractor and the Paraphraser. These are shown as suggestions to the trailer creator, and it is up to the creator to include them and decide their placement in the trailer.

Definition Extractor: Definitions are descriptive elements that we believe can help in the introduction of concepts. To select definitions from the learning resource, we propose a discriminative model that classifies a given piece of text into a Definition or Non-Definition class. For building the classifier, we use a dataset (http://nlp.uniroma1.it/wcl/) that contains positive and negative definition candidates extracted from Wikipedia for various topics. Our best performing model is a fine-tuned DistilBERT-base-uncased (https://huggingface.co/distilbert-base-uncased) model with a Definition class F1-score of 0.96 and a Non-Definition class F1-score of 0.97 on the test set.
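The paper does not release the fine-tuned checkpoint; the sketch below only shows how such a DistilBERT classifier could be applied at inference time with the Hugging Face transformers pipeline. The model path and the "DEFINITION" label name are placeholders, not artifacts from the paper.

from transformers import pipeline

# hypothetical path to a fine-tuned DistilBERT definition classifier
classifier = pipeline("text-classification",
                      model="path/to/definition-classifier-distilbert")

def extract_definitions(sentences, min_score=0.9):
    # keep only the sentences the classifier labels as definitions with high confidence
    results = classifier(sentences, truncation=True)
    return [s for s, r in zip(sentences, results)
            if r["label"] == "DEFINITION" and r["score"] >= min_score]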
Paraphraser: We believe this is a useful utility that can be used in the Outline and Trailer Title fragments. It gives the creator the ability to re-write concisely any substantially larger textual content present in any frame. We use a publicly available pre-trained model (https://github.com/ramsrigouthamg/Questgen.ai) for this task, which fine-tunes a large T5 (Text-to-Text Transfer Transformer) [7] model on a parallel corpus of sentences and their corresponding paraphrases.

C. Video Composition

The Video Composition module is responsible for stitching together all the elements that need to be part of the trailer, such as the frame data, voice-over text, Text-to-Speech (TTS) audio, etc., into a trailer video. Fig. 4 pictorially shows the overall flow of the various components that are part of the video composition. We use Python's MoviePy library (https://zulko.github.io/moviepy) as our choice for video editing and composition of the templates, as it provides all the necessary editing functions like inserting text, concatenations and cuts, which we use to draft our templates. After the frame-level data elements are in place, the next step is to generate voice-over text for each of the frames. Voice-over text is defined as the spoken text that the narrator speaks while a frame is displayed on the screen. For this, we select a grammar from a pre-defined set of slot based text grammars which we define per frame. The slots in the grammar are simply the screen's text elements.
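A minimal sketch of how frames could be stitched together with MoviePy 1.x (TextClip requires ImageMagick); the narration audio files are assumed to be pre-generated by the TTS step, and all styling values are illustrative rather than the paper's actual templates.

from moviepy.editor import AudioFileClip, TextClip, concatenate_videoclips
from moviepy.video.fx.all import fadein, fadeout

def compose_trailer(frames, output="trailer.mp4", size=(1280, 720)):
    # render each frame's text as a clip, attach its narration audio, add
    # fade-in/out, and concatenate everything into the final trailer video;
    # `frames` is a list of dicts: {"text": ..., "audio": path or None}
    clips = []
    for frame in frames:
        audio = AudioFileClip(frame["audio"]) if frame.get("audio") else None
        duration = audio.duration if audio else 4   # keep visuals and narration in sync
        clip = (TextClip(frame["text"], fontsize=48, color="white",
                         size=size, bg_color="black", method="caption")
                .set_duration(duration))
        if audio:
            clip = clip.set_audio(audio)
        clips.append(clip.fx(fadein, 0.5).fx(fadeout, 0.5))
    concatenate_videoclips(clips, method="compose").write_videofile(output, fps=24)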
Fig. 6. Outline Text Selection

C. Video Composition

The Video Composition module is responsible for stitching together all the elements that need to be part of the trailer, such as the frame data, voice-over text, text-to-speech (TTS) audio, etc., into a trailer video. Fig. 4 pictorially shows the overall flow of the various components that are part of the video composition. We use Python's MoviePy library10 for video editing and for composing the templates, as it provides all the necessary editing functions, such as inserting text, concatenations, and cuts, which we use to draft our templates. After the frame-level data elements are in place, the next step is to generate the voice-over text for each frame. Voice-over text is defined as the spoken text that the narrator speaks while a frame is displayed on the screen. For this, we select a grammar from a pre-defined set of slot-based text grammars defined per frame; the slots in the grammar are simply the frame's on-screen text elements. Finally, once the voice-over text is generated for every frame, we pass it through IBM Watson's Text-to-Speech (TTS) API11, with relevant parameters such as voice type, gender, etc., chosen from a list of speaker profiles, to obtain the audio file for every frame. Fig. 7 illustrates the flow from grammar selection to voice generation for the Trailer Title fragment.

Fig. 7. Flow of Grammar selection to Voice-over generation

We then derive the frame duration accordingly, to make sure that the visual and audio aspects of each frame are in sync and to minimize any lag on either end. Finally, along with all the above details, we supply template constraints such as the positioning of elements and styles, user preferences, and basic animations like fade-in and fade-out settings, to produce the final trailer.
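The sketch below illustrates this pipeline end-to-end for a single Trailer Title frame: a slot-based grammar is filled with the generated title, the voice-over is synthesised with the ibm-watson Python SDK, and MoviePy renders the frame with its duration tied to the narration length. The grammar variants, credentials, voice name, and styling values are illustrative placeholders rather than the exact templates used here.

import random
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from moviepy.editor import (AudioFileClip, ColorClip, CompositeVideoClip,
                            TextClip, vfx)

# Slot-based grammar for the Trailer Title fragment; the real grammars are
# defined per frame and these variants are only examples.
GRAMMAR = [
    "Hi, welcome to this course on {title}.",
    "Hello, welcome to the {title} course.",
]
title = "Graph Neural Networks"
voice_over = random.choice(GRAMMAR).format(title=title)

# Synthesise the voice-over with IBM Watson TTS (placeholder credentials).
tts = TextToSpeechV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
tts.set_service_url("YOUR_SERVICE_URL")
audio_bytes = tts.synthesize(voice_over, voice="en-US_AllisonV3Voice",
                             accept="audio/mp3").get_result().content
with open("title_frame.mp3", "wb") as f:
    f.write(audio_bytes)

# Compose the frame with MoviePy (TextClip needs ImageMagick installed) and
# keep video and audio in sync by deriving the frame duration from the audio.
narration = AudioFileClip("title_frame.mp3")
background = ColorClip(size=(1280, 720), color=(18, 24, 64),
                       duration=narration.duration)
headline = (TextClip(title, fontsize=70, color="white")
            .set_position("center").set_duration(narration.duration))
frame = (CompositeVideoClip([background, headline])
         .set_audio(narration)
         .fx(vfx.fadein, 0.5).fx(vfx.fadeout, 0.5))
frame.write_videofile("trailer_title_fragment.mp4", fps=24)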
IV. EXPERIMENTS

In this section, we describe the dataset, the evaluation strategy, and the results obtained for the trailers generated by our proposed system.

Dataset: Apart from the datasets used for training and evaluating the specific modules responsible for generating fragment-relevant data, we created three different learning pathways for our experiments and for the evaluation of the generated trailers. The learning pathways differ from one another in the number of resources and in stylometry. Two of the pathways are based on textbook chapters, with a difference in the number of resources mapped, and one pathway consists of video lectures. We deliberately took different kinds of pathways to evaluate our model's flexibility across types of learning pathways. The first was created by sampling chapters sequentially from a freely available Machine Learning textbook [29]. For the second, we chose the speech-to-text transcription of a week's video lectures from an academic course on NLP. Our third learning pathway is the entire ML textbook [29]12. All three corpora are analogous to learning pathways, as they are semantically coherent, progressive, and share the same global topic.

Evaluation and Results: Trailer generation can be seen as a generative task with an inherent notion of creativity. Objective evaluation is therefore not straightforward, because the effectiveness of a trailer is highly subjective and relies on human perception. However, we think that human evaluation of the various generated trailers can give us a good perspective on their quality.

11 https://cloud.ibm.com/catalog/services/speech-to-text
12 Datasets can be found at: https://bit.ly/3ro3JLO
TABLE I
POSITIVE COMMENTS
1. The first trailer looked more catchy compared to the second one. Being generated by an AI agent, both seem to be good.
2. Looks amazing. Great work!
3. You guys have truly done a remarkable work!
4. Good job, keep it up!
5. Great!

TABLE II
IMPROVEMENTS SUGGESTED BY USERS
1. Maybe I just felt that he was conveying info too fast.
2. As of now, it sounds a bit robotic. Some improvements w.r.t. the TTS can help make it better.
3. Slowing the video when the information being conveyed is relatively dense would be helpful. For example, when going through the list of topics, speaking slowly helps. When giving instructor names, one can be fast.
4. Also, if there's some way to bring the viewer's attention to the part of the slide that's being mentioned, that would be better where the content is not sequential.
5. Remove the date from the frame. Add something about what they can do once they learn the course (what type of problems they can solve).

We had 63 human evaluators, consisting of Engineering graduates, post-graduates, and PhD students well versed in the technical domain represented by our dataset. We evaluated 6 trailers13 in total, generated from the 3 learning pathways discussed above, i.e., 2 trailers per learning pathway. The two trailers per pathway are based on two templates, T1 and T2, created by us.
The templates differ in aesthetics and level-of-detail (LOD). The evaluation of each trailer was done on a set of 8 questions on a Likert scale from 1 to 5, where 1 means very poor and 5 means very good. There were three separate groups of evaluators, and each group was shown the 2 trailers (one per template) for the same pathway. We deliberately performed this diversification to simulate a cluster sampling procedure, since showing all 6 trailers to the same evaluators would have induced boredom and resulted in less accurate evaluations.

13 Sample Trailers: https://bit.ly/3Hscie9
We also encouraged the evaluators to give free-text comments on the trailers they evaluated, as this will help us improve the system in future iterations. Tables I and II list some of the positive comments and the improvements suggested by the users. Fig. 8 shows some of the trailer fragments generated by our proposed system14. The following is the list of 8 questions asked of the evaluators; the text in italics highlights the broader aspect evaluated by each question.
Q1. Did you find the trailer to be self-contained?
Q2. How were the fonts and styles used in the trailer in terms of readability?
Q3. How did you find the length and pace of the trailer?
Q4. As a user, how impressed are you with this trailer overall?
Q5. Could this trailer evoke interest in someone taking this course? (Ignoring any prior inclination to the topic)
Q6. How was the average duration of each frame?
Q7. Based on the trailer you just saw, do you think you have a good impression of the course now?
Q8. How did you find the sync between the audio and visuals you saw?
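For completeness, the per-question averages plotted in Fig. 9 can be computed from the raw responses with a simple aggregation. The sketch below assumes the ratings have been exported to a CSV with one row per evaluator, trailer, and question; the file name and column names are assumptions for illustration rather than a released artefact.

import pandas as pd

# Expected columns: evaluator_id, pathway (P1-P3), template (T1/T2),
# question (Q1-Q8), rating (Likert value from 1 to 5).
responses = pd.read_csv("trailer_survey_responses.csv")

# Mean Likert rating per question for every pathway-template pair,
# i.e. the quantities shown as bars in Fig. 9.
avg_scores = (responses
              .groupby(["pathway", "template", "question"])["rating"]
              .mean()
              .unstack("question")
              .round(2))
print(avg_scores)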
Fig. 8. Trailer Fragments

Fig. 9. Average scores per survey question for all 3 pathways and both templates. Here P1, P2, P3 represent the 3 pathways and T1, T2 represent the templates.

As can be seen in Fig. 9, the scores obtained for each of the survey questions are good and well above the mid-point (a score of 3) for almost all the trailers generated by our approach. We also found that both templates performed equally well. However, for Q5, the average score is relatively lower compared to the other questions. On digging deeper, we found that some of the 24 free-text comments we received cited the perceived difficulty of the course as the reason for not being interested in it, which could mean that this question (Q5) is more subjective.

14 Detailed demo walk-through: https://www.youtube.com/watch?v=06VVuAlFhTk

V. CONCLUSIONS AND FUTURE WORK

In this paper, we presented a novel framework for automatically generating video trailers for a learning pathway using ML and NLP techniques. We validated our trailers on multiple corpora of varied granularity through human evaluation, and the results obtained were encouraging. The approach can be adapted to different domains, given enough data to train the models involved. We believe this approach can lay the foundation for building more advanced versions of trailers.
In the future, we plan to improve the existing system by incorporating the suggestions obtained in the user evaluation and by adding more interesting themes, such as automatically detecting learning outcomes from the given resources. We also intend to create an interactive dashboard to take inputs from the creator and allow the creator to edit the auto-generated content.

ACKNOWLEDGMENT

We thank the Center of Excellence on Cognitive Computing, funded by the Mphasis F1 Foundation, for funding this research. We also thank Dr. Prasad Ram and the Gooru team (https://gooru.org) for the topical discussions and encouragement.

REFERENCES

[1] O. Simpson, "Student retention in distance education: are we failing our students?" Open Learning: The Journal of Open, Distance and e-Learning, vol. 28, no. 2, pp. 105–119, 2013.
[2] M. Hartnett, A. St George, and J. Dron, "Examining motivation in online distance learning environments: Complex, multifaceted, and situation-dependent," International Review of Research in Open and Distributed Learning, vol. 12, no. 6, pp. 20–38, 2011.
[3] L. Gayoung, K. Sunyoung, K. Myungsun, C. Yoomi, and R. Ilju, "A study on the development of a MOOC design model," Educational Technology International, vol. 17, no. 1, pp. 1–37, 2016.
[4] B. T.-m. Wong, "Factors leading to effective teaching of MOOCs," Asian Association of Open Universities Journal, 2016.
[5] P. Stacey, "Pedagogy of MOOCs," for Innovation and Quality in Learning, p. 111, 2014.
[6] J. Zhang, Y. Zhao, M. Saleh, and P. Liu, "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization," in International Conference on Machine Learning. PMLR, 2020, pp. 11328–11339.
[7] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," arXiv preprint arXiv:1910.10683, 2019.
[8] R. Mihalcea and P. Tarau, "TextRank: Bringing order into text," in Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, 2004, pp. 404–411.
[9] R. Lienhart, S. Pfeiffer, and W. Effelsberg, "Video abstracting," Communications of the ACM, vol. 40, no. 12, pp. 54–62, 1997.
[10] C. Diwan, S. Srinivasa, and P. Ram, "Automatic generation of coherent learning pathways for open educational resources," in European Conference on Technology Enhanced Learning. Springer, 2019, pp. 321–334.
[11] Y.-L. Chi, "Ontology-based curriculum content sequencing system with semantic rules," Expert Systems with Applications, vol. 36, no. 4, pp. 7838–7847, 2009.
[12] V. Shmelev, M. Karpova, and A. Dukhanov, "An approach of learning path sequencing based on revised Bloom's taxonomy and domain ontologies with the use of genetic algorithms," Procedia Computer Science, vol. 66, pp. 711–719, 2015.
[13] L. R. Varshney, "To surprise and inform," in 2013 IEEE International Symposium on Information Theory. IEEE, 2013, pp. 3145–3149.
[14] ——, "Must surprise trump information?" IEEE Technology and Society Magazine, vol. 38, no. 1, pp. 81–87, 2019.
[15] C. Brachmann, H. I. Chunpir, S. Gennies, B. Haller, P. Kehl, A. P. Mochtarram, D. Möhlmann, C. Schrumpf, C. Schultz, B. Stolper et al., in Digital Tools in Media Studies. transcript-Verlag, 2015, pp. 145–158.
[16] T. Hermes and C. Schultz, "Automatic generation of Hollywood-like movie trailers," eCulture Factory, 2006.
[17] G. Irie, T. Satou, A. Kojima, T. Yamasaki, and K. Aizawa, "Automatic trailer generation," in Proceedings of the 18th ACM International Conference on Multimedia, 2010, pp. 839–842.
[18] J. R. Smith, D. Joshi, B. Huet, W. Hsu, and J. Cota, "Harnessing AI for augmenting creativity: Application to movie trailer creation," in Proceedings of the 25th ACM International Conference on Multimedia, 2017, pp. 1799–1808.
[19] M. Hesham, B. Hani, N. Fouad, and E. Amer, "Smart trailer: Automatic generation of movie trailer using only subtitles," in 2018 First International Workshop on Deep and Representation Learning (IWDRL). IEEE, 2018, pp. 26–30.
[20] B. Gaikwad, A. Sontakke, M. Patwardhan, N. Pedanekar, and S. Karande, "Plots to previews: Towards automatic movie preview retrieval using publicly available meta-data," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 3205–3214.
[21] P. Chi, Z. Sun, K. Panovich, and I. Essa, "Automatic video creation from a web page," in Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 2020, pp. 279–292.
[22] P. Mishra, C. Diwan, S. Srinivasa, and G. Srinivasaraghavan, "Automatic title generation for learning resources and pathways with pre-trained transformer models," International Journal of Semantic Computing, vol. 15, no. 04, pp. 487–510, 2021.
[23] ——, "Automatic title generation for text with pre-trained transformer language model," in 2021 IEEE 15th International Conference on Semantic Computing (ICSC). IEEE, 2021, pp. 17–24.
[24] J. Tan, X. Wan, and J. Xiao, "From neural sentence summarization to headline generation: A coarse-to-fine approach," in IJCAI, vol. 17, 2017, pp. 4109–4115.
[25] N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using siamese BERT-networks," arXiv preprint arXiv:1908.10084, 2019.
[26] R. B. Cialdini and L. James, Influence: Science and Practice. Pearson Education, Boston, MA, 2009, vol. 4.
[27] "Call-to-action (CTA)," https://bit.ly/3DDUBp4, accessed: 2021-12-08.
[28] "3 reasons a call to action is important," https://bit.ly/33c7WbO, accessed: 2021-12-08.
[29] G. James, D. Witten, T. Hastie, and R. Tibshirani, An Introduction to Statistical Learning: with Applications in R. Springer, 2013.