A Survey on Protein Representation Learning: Retrospect and Prospect

Lirong Wu 1,2∗, Yufei Huang 1,2∗, Haitao Lin 1,2, Stan Z. Li 1†
1 AI Lab, Research Center for Industries of the Future, Westlake University
2 College of Computer Science and Technology, Zhejiang University
{wulirong, huangyufei, linhaitao, stan.zq.li}@westlake.edu.cn

Abstract

Proteins are fundamental biological entities that play a key role in life activities. The amino acid sequences of proteins can be folded into stable 3D structures in the real physicochemical world, forming a special kind of sequence-structure data.
With the development of Artificial Intelligence (AI) techniques, Protein Representation Learning (PRL) has recently emerged as a promising research topic for extracting informative knowledge from massive protein sequences or structures. To pave the way for AI researchers with little bioinformatics background, we present a timely and comprehensive review of PRL formulations and existing PRL methods from the perspective of model architectures, pretext tasks, and downstream applications. We first briefly introduce the motivations for protein representation learning and formulate it in a general and unified framework. Next, we divide existing PRL methods into three main categories: sequence-based, structure-based, and sequence-structure co-modeling. Finally, we discuss some technical challenges and potential directions for improving protein representation learning. The latest advances in PRL methods are summarized in a GitHub repository: https://github.com/LirongWu/awesome-protein-representation-learning.
∗Equal contribution, †Corresponding author

1 Introduction

Proteins perform specific biological functions that are essential for all living organisms and therefore play a key role when investigating the most fundamental questions in the life sciences. Proteins are composed of one or several chains of amino acids that fold into a stable 3D structure to enable various biological functionalities. Therefore, understanding, predicting, and designing proteins for biological processes is critical for medical, pharmaceutical, and genetic research. Previous approaches to protein modeling are mostly driven by biological or physical priors; they explore complex sequence-structure-function relationships through energy minimization [Rohl et al., 2004; Xu and Zhang, 2011], dynamics simulations [Hospital et al., 2015; Karplus and Petsko, 1990], etc.
With the development of artificial intelligence and low-cost sequencing technologies, data-driven Protein Representation Learning (PRL) [Jumper et al., 2021; Rao et al., 2019; Rives et al., 2021; Hermosilla and Ropinski, 2022; Jing et al., 2020] has made remarkable progress due to its superior performance in modeling complex nonlinear relationships.
The primary goal of protein representation learning is to extract transferable knowledge from protein data with well-designed model architectures and pretext tasks, and then generalize the learned knowledge to various protein-related downstream applications, ranging from structure prediction to sequence design. Despite this great progress, it is still tricky for AI researchers without a bioinformatics background to get started with protein representation learning; one obstacle is the vast amount of physicochemical knowledge involved behind proteins. Therefore, a survey on PRL methods that is friendly to the AI community is urgently needed. Existing surveys related to PRL [Iuchi et al., 2021; Unsal et al., 2020; Hu et al., 2021; Torrisi et al., 2020] are mainly developed from the perspective of biological applications, but do not go deeper into other important aspects, such as model architectures and pretext tasks. Overall, our contributions can be summarized as follows: (1) Comprehensive review. Our survey provides a comprehensive and up-to-date review of existing PRL methods from the perspective of model architectures and pretext tasks. (2) New taxonomy. We divide existing PRL methods into three categories: sequence-based, structure-based, and sequence-structure co-modeling. (3) Detailed implementations. We summarize the paper lists and open-source codes in a public GitHub repository, setting the stage for the development of more future works.
(4) Future directions. We point out the technical limitations of current research and discuss several promising directions.

arXiv:2301.00813v1 [cs.LG] 31 Dec 2022

2 Notation and Problem Statement

The sequence of amino acids can be folded into a stable 3D structure, forming a special kind of sequence-structure data, which determines the protein's properties and functions. Therefore, we can model each protein as a graph G = (V, E, X, F), where V is the ordered set of N nodes in the graph representing amino acid residues and E ⊆ V × V is the set of edges that connects the nodes. Each node u ∈ V in graph G can be attributed with a scalar-vector tuple x_u = (s_u, V_u), where s_u ∈ R^O and V_u ∈ R^{3×P}. Each edge e ∈ E can be attributed with a scalar-vector tuple f_e = (s_e, V_e), where s_e ∈ R^T and V_e ∈ R^{3×D}.
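As a concrete illustration of this graph view of a protein, the sketch below builds a residue graph with kNN edges and simple scalar-vector edge features (inter-residue distance as s_e and the unit direction as one column of V_e). The choice of k, the coordinate source, and the feature dimensions are illustrative assumptions, not part of any specific method.

```python
import numpy as np

def build_protein_graph(coords, k=3):
    """Connect each residue to its k nearest neighbors in 3D space.

    coords: (N, 3) array of residue coordinates (e.g. C-alpha atoms).
    Returns an edge list E plus per-edge scalar features s_e (distance)
    and per-edge vector features V_e (unit direction between residues).
    """
    n = len(coords)
    diff = coords[:, None, :] - coords[None, :, :]      # diff[u, v] = coords[u] - coords[v]
    dist = np.linalg.norm(diff, axis=-1)                # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)                      # forbid self-loops
    edges, edge_scalars, edge_vectors = [], [], []
    for u in range(n):
        for v in np.argsort(dist[u])[:k]:               # k nearest neighbors of u
            v = int(v)
            edges.append((u, v))
            edge_scalars.append([dist[u, v]])           # s_e: scalar distance
            edge_vectors.append(diff[v, u] / dist[u, v])  # V_e: unit vector u -> v
    return edges, np.array(edge_scalars), np.array(edge_vectors)
```

Thresholding on `dist` instead of `argsort` gives the radius-graph variant; both constructions appear later in the structure-based encoders.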
Given a model architecture f_θ(·) and a set of K pretext-task losses {L_pre^(1)(θ, η_1), L_pre^(2)(θ, η_2), ..., L_pre^(K)(θ, η_K)} with projection heads {g_{η_k}(·)}_{k=1}^K, Protein Representation Learning (PRL) usually works in a two-stage manner: (1) pre-training the model f_θ(·) with pretext tasks; and (2) fine-tuning the pre-trained model f_{θ_init}(·) with a projection head g_ω(·) under the supervision of a specific downstream task L_task(θ, ω). The learning objective can be formulated as

$$
\theta^{*}, \omega^{*} = \arg\min_{(\theta,\omega)} \mathcal{L}_{\mathrm{task}}(\theta_{\mathrm{init}}, \omega), \quad \text{s.t.} \quad \theta_{\mathrm{init}}, \{\eta_k^{*}\}_{k=1}^{K} = \arg\min_{\theta,\{\eta_k\}_{k=1}^{K}} \sum_{k=1}^{K} \lambda_k \mathcal{L}_{\mathrm{pre}}^{(k)}(\theta, \eta_k) \qquad (1)
$$

where {λ_k}_{k=1}^K are trade-off task hyperparameters. A high-level overview of the PRL framework is shown in Fig. 1. In practice, if we set K = 1 and ω = η_1, i.e., L_pre^(1)(θ, η_1) = L_task(θ, ω), it is equivalent to learning task-specific representations directly under downstream supervision, which in this survey can be considered as a special case of Eq. (1).

[Figure 1: A general framework for protein representation learning. Step 1 (Pre-train): an encoder with prediction heads is trained on pretext tasks. Step 2 (Fine-tune): the pre-trained encoder with a prediction head is trained on the downstream task.]

In this survey, we mainly focus on the model architecture f_θ(·) and pretext tasks {L_pre^(k)(θ, η_k)}_{k=1}^K for protein representation learning, and defer the discussion on downstream applications until Sec. 5. A high-level overview of this survey with some representative examples is shown in Fig. 2.

3 Model Architectures

In this section, we summarize some commonly used model architectures for learning protein sequences or structures.
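The two-stage objective in Eq. (1) can be sketched end to end on a toy problem. Here a linear "encoder" f_θ(x) = xθ stands in for a deep model, and the two quadratic pretext losses, their weights λ_k, and all shapes are illustrative assumptions chosen only to make the pre-train/fine-tune structure visible.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))            # toy inputs standing in for protein data
y = rng.normal(size=(32, 1))            # downstream targets
lambdas = [0.5, 0.5]                    # trade-off hyperparameters lambda_k

theta = 0.1 * rng.normal(size=(8, 4))   # encoder parameters theta

def pretext_loss(theta):
    Z = X @ theta
    # Two hypothetical quadratic pretext objectives, combined as sum_k lambda_k L_k.
    return lambdas[0] * np.mean(Z ** 2) + lambdas[1] * np.mean((Z - 1.0) ** 2)

pre_before = pretext_loss(theta)

# Stage 1: pre-train theta by gradient descent on the weighted pretext objective.
for _ in range(200):
    Z = X @ theta
    grad = X.T @ (2 * lambdas[0] * Z + 2 * lambdas[1] * (Z - 1.0)) / Z.size
    theta -= 0.05 * grad
theta_init = theta.copy()               # the pre-trained initialization theta_init

# Stage 2: fine-tune theta (from theta_init) plus a task head omega on L_task.
omega = 0.1 * rng.normal(size=(4, 1))

def task_loss(theta, omega):
    return np.mean((X @ theta @ omega - y) ** 2)

task_before = task_loss(theta_init, omega)
for _ in range(200):
    Z = X @ theta
    err = Z @ omega - y
    omega -= 0.05 * 2 * (Z.T @ err) / err.size
    theta -= 0.05 * 2 * (X.T @ (err @ omega.T)) / err.size
```

Setting K = 1 and reusing the downstream loss as the single "pretext" loss collapses the two stages into direct supervised training, matching the special case noted above.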
3.1 Sequence-based Encoder

The sequence encoder takes (V, X) as input and aims to capture the dependencies between amino acids. [Wang et al., 2019] treats protein sequences as a special "biological language" and then establishes an analogy between such "biological language" and natural (textual) language. Inspired by this, many classical model architectures developed for natural language processing can be directly extended to handle protein sequences [Asgari et al., 2019]. Depending on whether a single sequence or multiple sequences are to be encoded, there are a variety of different sequence-based encoders.

Single Sequences
The commonly used sequence encoders for modeling single sequences include Variational Auto-Encoders (VAEs) [Sinai et al., 2017; Ding et al., 2019], Recurrent Neural Networks (RNNs) [Armenteros et al., 2020], Long Short-Term Memory (LSTM) [Hochreiter and Schmidhuber, 1997], BERT [Devlin et al., 2018], and the Transformer [Vaswani et al., 2017]. Based on the vanilla Transformer, [Wu et al., 2022] proposes a novel geometry-inspired transformer (Geoformer) to further distill the structural and physical pairwise relationships between amino acids into the learned protein representation. If we do not consider the ordering of amino acids in the sequences, we can also directly apply Convolutional Neural Networks (CNNs) [LeCun et al., 1995] or ResNet [He et al., 2016] to capture the local dependencies between adjacent amino acids.

MSA Sequences
The long-standing practice in computational biology is to make inferences from a family of evolutionarily related sequences [Weigt et al., 2009; Thomas et al., 2005; Lapedes et al., 1999]. Therefore, several multiple-sequence encoders have been proposed to capture co-evolutionary information by taking as input a set of sequences in the form of a multiple sequence alignment (MSA). For example, MSA Transformer [Rao et al., 2021] extends the self-attention mechanism to the MSA setting, interleaving self-attention across rows and columns to capture dependencies between amino acids and between sequences. As a crucial component of AlphaFold2, Evoformer [Jumper et al., 2021] alternately updates MSA and Pair representations in each block, which encode co-evolutionary information in sequences and relations between residues, respectively.

3.2 Structure-based Encoder

Despite the effectiveness of sequence-based encoders, the power of pre-training with protein structures has been rarely explored, even though protein structures are known to be determinants of protein functions. To better utilize this critical structural information, a large number of structure-based encoders have been proposed to model structural information, which can be mainly divided into three categories: feature map-based methods, message-passing GNNs, and geometric GNNs.
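The scaled dot-product self-attention shared by these Transformer-style sequence encoders can be sketched in a few lines: every residue position attends to every other position over the sequence. The one-hot embedding of the 20 standard amino acids, the random projection matrices, and the single head are illustrative assumptions; real models use learned embeddings, many heads, and many layers (and the MSA variants interleave this operation across alignment rows and columns).

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"             # the 20 standard amino acids

def embed(seq):
    """One-hot embed an amino acid sequence: (L, 20)."""
    return np.eye(len(AA))[[AA.index(a) for a in seq]]

def self_attention(H, Wq, Wk, Wv):
    """One head of scaled dot-product attention over residue embeddings H."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # (L, L) pairwise scores
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)            # softmax over key positions
    return A @ V                                     # (L, d_v) updated embeddings

rng = np.random.default_rng(0)
H = embed("MKTAYIAK")                    # a toy 8-residue peptide
Wq, Wk, Wv = (rng.normal(size=(20, 8)) for _ in range(3))
out = self_attention(H, Wq, Wk, Wv)      # (8, 8): one representation per residue
```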
Feature map-based Methods
The use of deep learning to model protein 3D structures can be traced back more than a decade [Zhang and Zhang, 2010; Schaap et al., 2001]. Early methods directly extracted several hand-crafted feature maps from protein structures and then applied 3D CNNs to model the geometric information of proteins [Derevyanko et al., 2018; Amidi et al., 2018; Townshend et al., 2019]. Later work extended 3D CNNs to spherical convolutions for identifying interaction patterns on protein surfaces [Sverrisson et al., 2021; Gainza et al., 2020].

Message-passing GNNs
To further capture the geometric relationships and biomedical interactions between amino acids, it has been proposed to first construct a graph from the extracted feature maps by thresholding or k-Nearest Neighbors (kNN) [Preparata and Shamos, 2012]. Then, many existing message-passing Graph Neural Networks (GNNs) can be directly applied to model protein structures, including Graph Convolutional Networks (GCN) [Kipf and Welling, 2016], Graph Isomorphism Networks (GIN) [Xu et al., 2018], and GraphSAGE [Hamilton et al., 2017].

[Figure 2: A high-level overview of this survey with representative examples. The taxonomy shown in the figure:
- Preliminaries: Notation and Problem Statement
- Architectures
  - Sequence-based
    - Single Sequence: LSTM [Hochreiter and Schmidhuber, 1997], Transformer [Vaswani et al., 2017], CNNs [LeCun et al., 1995]
    - MSA Sequence: MSA Transformer [Rao et al., 2021], Evoformer [Jumper et al., 2021]
  - Structure-based
    - Feature map-based: 3D CNNs [Derevyanko et al., 2018], Spherical CNNs [Sverrisson et al., 2021]
    - Message-passing GNNs: GCNs [Kipf and Welling, 2016], IEConv [Hermosilla et al., 2020], GearNet [Zhang et al., 2022]
    - Geometric GNNs: GVP [Jing et al., 2020], GBP [Aykent and Xia, 2022], DWP [Li et al., 2022]
  - Sequence-structure Co-modeling: DeepFRI [Gligorijević et al., 2021], LM-GVP [Wang et al., 2021]
- Pretext Tasks
  - Sequence-based
    - Supervised: PLUS [Min et al., 2021], Profile Prediction [Sturmfels et al., 2020], Progen [Madani et al., 2020]
    - Self-Supervised: MLM [Rao et al., 2019], PMLM [He et al., 2021], NAP [Alley et al., 2019], CPC [Lu et al., 2020]
  - Structure-based
    - Contrastive: Multiview Contrast [Hermosilla and Ropinski, 2022; Zhang et al., 2022]
    - Predictive: Distance and Angle Prediction [Chen et al., 2022], Dihedral Prediction [Hermosilla and Ropinski, 2022]
  - Sequence-structure Co-modeling: Full-atomic Structure Prediction [Jumper et al., 2021; Hu et al., 2022]
- Applications
  - Property Prediction: Stability [Rao et al., 2019], Fold Quality [Baldassarre et al., 2021], Mutation Effect [Meier et al., 2021], PPI [Wang et al., 2019]
  - Structure Prediction: Full-atomic or Backbone Prediction [Hiranuma et al., 2021; Wu et al., 2022], Structure Inpainting [McPartlon and Xu, 2022]
  - Protein Design: Template-based [Ingraham et al., 2019], De Novo [Huang et al., 2016; Koepnick et al., 2019]
  - Structure-Based Drug Design: Auto-regressive [Liu et al., 2022a; Peng et al., 2022], Diffusion [Lin et al., 2022; Schneuing et al., 2022]]
However, the edges in the protein graph may have some key properties, such as dihedral angles and directions, which determine the biological function of proteins. With this in mind, several structure-based encoders have been proposed to simultaneously leverage the node and edge features of the protein graph. For example, [Hermosilla et al., 2020] proposes IE convolution (IEConv) to simultaneously capture the primary, secondary, and tertiary structures of proteins by incorporating intrinsic and extrinsic distances between nodes. Besides, [Hermosilla and Ropinski, 2022] adopts a similar architecture to IEConv but introduces seven additional edge features to efficiently describe the relative position and orientation of neighboring nodes. Furthermore, GearNet [Zhang et al., 2022] proposes a simple structure encoder, which encodes spatial information by adding different types of sequential or structural edges and then performs both node-level and edge-level message passing simultaneously.
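To make the multi-edge-type idea concrete, here is a minimal sketch of GearNet-style graph construction with typed edges. The helper name, window, and radius cutoff are our illustrative assumptions (GearNet's actual construction uses more edge types); we assume only Cα coordinates are available.

```python
import numpy as np

def build_protein_graph(ca_coords, seq_window=2, radius=8.0):
    """Build typed edge lists from C-alpha coordinates (illustrative).

    Sequential edges connect residues within `seq_window` positions
    (typed by their sequence offset); radial edges connect the remaining
    residue pairs whose C-alpha atoms lie within `radius` angstroms.
    """
    n = len(ca_coords)
    dist = np.linalg.norm(ca_coords[:, None] - ca_coords[None, :], axis=-1)
    seq_edges, radial_edges = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if abs(i - j) <= seq_window:
                seq_edges.append((i, j, j - i))  # edge type = sequence offset
            elif dist[i, j] < radius:
                radial_edges.append((i, j))
    return seq_edges, radial_edges
```

A message-passing layer can then aggregate over each edge type separately, which is what allows the encoder to distinguish sequence neighbors from purely spatial contacts.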
Geometric GNNs
The above message-passing GNNs incorporate the 3D geometry of proteins by encoding the vector features Vu/Ve into rotation-invariant scalars su/se. However, reducing this vector information directly to scalars may not fully capture complex geometry. Therefore, geometry-aware neural networks are proposed to bake 3D rigid transformations into network operations, leading to SO(3)-invariant and equivariant GNNs. For example, [Jing et al., 2020] introduces Geometric Vector Perceptrons (GVPs), which replace standard multi-layer perceptrons (MLPs) in feed-forward layers and operate directly on both scalar and vector features under a global coordinate system. Besides, [Aykent and Xia, 2022] proposes Geometric Bottleneck Perceptrons (GBPs) to integrate geometric features and capture complex geometric relations in the 3D structure, based on which a new SO(3)-equivariant message-passing neural network is proposed to support a variety of geometric representation learning tasks.
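The key mechanism, in the spirit of GVP (not its exact formulation), is that vector channels are mixed only linearly and gated by invariant scalars, so rotating the input rotates the output vectors identically while the scalar track is unchanged. A minimal sketch, with all weight shapes our own assumptions:

```python
import numpy as np

def gvp_like(s, V, Wv, Ws, Wg):
    """Illustrative GVP-flavoured update on scalar features s (ds,)
    and vector features V (dv, 3): vectors are linearly mixed
    (equivariant), their norms feed the scalar track (invariant),
    and scalars gate the vector channels."""
    Vh = Wv @ V                                  # (k, 3) equivariant mix
    norms = np.linalg.norm(Vh, axis=-1)          # (k,) rotation-invariant
    s_out = np.tanh(Ws @ np.concatenate([s, norms]))
    gate = 1.0 / (1.0 + np.exp(-(Wg @ s_out)))   # (k,) sigmoid gates
    V_out = gate[:, None] * Vh                   # gating preserves equivariance
    return s_out, V_out
```

Applying any rotation R to the rows of V rotates V_out by the same R and leaves s_out untouched, which is exactly the SO(3) behavior the paragraph above describes.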
To achieve more sensitive geometric awareness in both global transformations and local relations, [Li et al., 2022] proposes Directed Weight Perceptrons (DWPs), extending not only the hidden neurons but also the weights from scalars to 2D/3D vectors, naturally saturating the network with 3D structures in Euclidean space.

3.3 Sequence-structure Encoder
Compared to sequence- and structure-based encoders, far less work has focused on the co-encoding of protein sequences and structures. The mainstream model architecture is to extract amino acid representations as node features by a language model and then capture the dependencies between amino acids using a GNN module. For example, [Gligorijević et al., 2021] introduces DeepFRI, a Graph Convolutional Network (GCN) for predicting protein functions by leveraging sequence representations extracted from a protein language model (LSTM) and protein structures.
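The mainstream co-modeling pipeline above (language-model features on the nodes, structure as the graph) can be sketched end to end. Everything here is a toy stand-in: `lm_embed` is a random lookup table playing the role of a pretrained protein LM, and the adjacency matrix plays the role of contacts derived from the 3D structure.

```python
import numpy as np

def lm_embed(sequence, d=8, seed=0):
    """Stand-in for a protein language model: maps each amino acid
    to a d-dim embedding via a fixed random lookup table."""
    rng = np.random.default_rng(seed)
    table = {aa: rng.normal(size=d) for aa in "ACDEFGHIKLMNPQRSTVWY"}
    return np.stack([table[aa] for aa in sequence])

def gcn_layer(H, adj, W):
    """One mean-aggregation message-passing step over the protein graph."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    return np.tanh(((adj @ H) / deg) @ W)

# Co-modeling: sequence supplies node features, structure supplies edges.
seq = "ACDG"
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)  # toy contact map
H = lm_embed(seq)
Z = gcn_layer(H, adj, np.eye(8))
```

The point of the design is separation of concerns: the LM captures evolutionary and sequential context, while the GNN propagates it along spatial contacts.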
Besides, LM-GVP [Wang et al., 2021] combines a protein language model (a stack of Transformer blocks) with a GVP network, where the protein LM takes protein sequences as input to compute amino acid embeddings, and the GVP network makes predictions about protein properties on a graph derived from the protein 3D structure. Moreover, [You and Shen, 2022] applies a hierarchical RNN and a GAT to encode protein sequences and structures, respectively, and proposes a cross-interaction module to enforce a learned relationship between the encoded embeddings of the two protein modalities.

4 Pretext Task
Pretext tasks are designed to extract meaningful representations from massive data by optimizing some well-designed objective functions. In this section, we summarize some commonly used pretext tasks for learning on proteins.

4.1 Sequence-based Pretext Task
Many pretext tasks have been proposed for pre-training language models, including Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) [Devlin et al., 2018], which can be naturally extended to pre-train protein sequences. We divide existing sequence-based pretext tasks into two main categories: self-supervised and supervised.

Self-supervised Pretext Task
Self-supervised pretext tasks utilize the training data itself as the supervision signal, without the need for additional annotations. If we consider an amino acid in a sequence as a word in a sentence, we can naturally extend masked language modeling to protein sequences. For example, we can statically or dynamically mask out a single amino acid or a set of contiguous amino acids and then predict the masked amino acids from the remaining sequence [Rao et al., 2019; Elnaggar et al., 2020; Rives et al., 2021; Rao et al., 2021; Nambiar et al., 2020; Xiao et al., 2021]. Besides, [McDermott et al., 2021] combines adversarial training with MLM and proposes to mask amino acids in a learnable manner. Taking into account the dependence between masked amino acids, Pairwise MLM (PMLM) [He et al., 2021] proposes to model the probability of a pair of masked amino acids instead of predicting the probability of a single amino acid.
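The masking step described above can be sketched in a few lines. This is a simplified illustration of BERT-style masking (real pipelines such as TAPE or ESM also apply random-replacement/keep corruption and span masking); the helper name and mask token are ours.

```python
import numpy as np

def mask_sequence(seq, mask_rate=0.15, mask_token="#", seed=0):
    """Hide a random subset of residues; a model would be trained to
    recover the hidden residues from the remaining sequence."""
    rng = np.random.default_rng(seed)
    seq = list(seq)
    n_mask = max(1, int(len(seq) * mask_rate))
    idx = rng.choice(len(seq), size=n_mask, replace=False)
    targets = {int(i): seq[i] for i in idx}   # ground-truth residues
    for i in idx:
        seq[i] = mask_token
    return "".join(seq), targets
```

The returned `targets` dictionary supplies the per-position labels for the MLM cross-entropy loss.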
Besides, Next Amino acid Prediction (NAP) [Alley et al., 2019; Elnaggar et al., 2020; Strodthoff et al., 2020] aims to predict the type of the next amino acid based on a set of given sequence fragments. Different from the above methods, Contrastive Predictive Coding (CPC) [Lu et al., 2020] applies different augmentation transformations to the input sequence to generate different views and then maximizes the agreement of two jointly sampled pairs against that of two independently sampled pairs.

Supervised Pretext Task
Supervised pretext tasks use additional labels as auxiliary information to guide the model to learn knowledge relevant to downstream tasks.
For example, PLUS [Min et al., 2021] devises a protein-specific pretext task, namely Same-Family Prediction (SFP), which trains a model to predict whether a given protein pair belongs to the same protein family. The protein family labels provide weak structural information and help the model learn structurally contextualized representations. Besides, [Sturmfels et al., 2020] proposes to use HMM profiles derived from MSAs as labels and takes Profile Prediction as a pretext task to help the model learn information about protein structures. In addition, to leverage the exponentially growing number of protein sequences that lack costly structural annotations, Progen [Madani et al., 2020] trains a language model with conditioning tags that encode various annotations, such as taxonomic, functional, and locational information.

4.2 Structure-based Pretext Task
Despite the great progress in the design of structure-based encoders and graph-based pretext tasks [Wu et al., 2021; Xie et al., 2022; Liu et al., 2022b], there are few efforts focusing on the structure-based pre-training of proteins. Existing structure-based pretext tasks for proteins can be mainly classified into two branches: contrastive and predictive methods.

Contrastive Pretext Task
The primary goal of contrastive methods is to maximize the agreement between two jointly sampled positive pairs.
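This objective is commonly implemented as an InfoNCE-style loss over a batch: two encoded views of the same protein form the positive pair, and views of other proteins in the batch act as negatives. A minimal sketch, assuming the views have already been encoded into fixed-size embeddings (function name and temperature are our assumptions):

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE over a batch: z1[i] and z2[i] are two views of protein i.
    Returns the mean cross-entropy of matching each z1[i] to z2[i]
    against all other proteins in the batch."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                     # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)            # row-wise softmax
    return -np.mean(np.log(np.diag(p) + 1e-12))  # diagonal = positive pairs
```

The loss is small when each protein's two views agree more with each other than with any other protein in the batch.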
For example, Multiview Contrast [Hermosilla and Ropinski, 2022] proposes to randomly sample two sub-structures from each protein, encode them into two representations, and finally maximize the similarity between representations from the same protein while minimizing the similarity between representations from different proteins. Besides, [Zhang et al., 2022] adopts almost the same architecture as Multiview Contrast but replaces IEConv with GearNet as the structure encoder.

Predictive Pretext Task
Contrastive methods deal with inter-data information (data-data pairs). In contrast, predictive methods aim to self-generate informative labels from the data as supervision and handle data-label relationships. Categorized by different types of pseudo-labels, predictive methods have different designs that can capture different levels of structural protein information. For example, [Chen et al., 2022] proposes two predictive tasks, namely Distance Prediction and Angle Prediction, which take hidden representations of residues as input and aim to predict the relative distance between pairwise residues and the angle between two edges, respectively, which helps to learn structure-aware protein representations. Furthermore, [Hermosilla and Ropinski, 2022] proposes Residue Type Prediction and Dihedral Prediction based on geometric or biochemical properties. Specifically, Residue Type Prediction randomly masks the node features of some residues and then lets the structure-based encoder predict these masked residue types. Instead, Dihedral Prediction constructs a learning objective by predicting the dihedral angle between three consecutive edges. Besides, [You and Shen, 2022] proposes graph completion (GraphComp), which takes as input a protein graph with partially masked residues and then makes predictions for those masked tokens.

4.3 Sequence-structure Pretext Task
Most existing methods design pretext tasks for a single modality but ignore the dependencies between sequences and structures. If we can design pretext tasks based on both protein sequences and structures, they should capture richer information than using single-modality data. In practice, there is no clear boundary between pretext tasks and downstream tasks. For example, AlphaFold2 [Jumper et al., 2021] takes full-atomic structure prediction as a downstream task. However, if we are concerned with protein property prediction, structure prediction can also be considered as a pretext task that enables the learned sequence representations to contain sufficient structural information. It was found by [Hu et al., 2022] that the representations from AlphaFold2's Evoformer work well on various protein-related downstream tasks, including fold classification, stability prediction, etc. Moreover, [Yang et al., 2022b] proposes a novel pre-training pretext task, namely Masked Inverse Folding (MIF), which trains a model to reconstruct the original amino acids conditioned on the corrupted sequence and the backbone structure.

Table 1: Summary of representative protein representation learning methods.
Method | Category | Architecture | Pretext Task | Year
Bio2Vec-CNN [Wang et al., 2019] | Sequence-based | CNN | - | 2019
TAPE [Rao et al., 2019] | Sequence-based | ResNet, LSTM, Transformer | Masked Language Modeling, Next Amino Acid Prediction | 2019
UniRep [Alley et al., 2019] | Sequence-based | Multiplicative LSTM | Next Amino Acid Prediction | 2019
TripletProt [Nourani et al., 2020] | Sequence-based | Siamese Networks | Contrastive Predictive Coding | 2020
PLP-CNN [Shanehsazzadeh et al., 2020] | Sequence-based | CNN | - | 2020
CPCProt [Lu et al., 2020] | Sequence-based | GRU, LSTM | Contrastive Predictive Coding | 2020
MuPIPR [Zhou et al., 2020] | Sequence-based | GRU, LSTM | Next Amino Acid Prediction | 2020
ProtTrans [Elnaggar et al., 2020] | Sequence-based | Transformer, BERT, XLNet | Masked Language Modeling | 2020
DMPfold [Kandathil et al., 2020] | Sequence-based | GRU, ResNet | - | 2020
Profile Prediction [Sturmfels et al., 2020] | Sequence-based | Transformer | HMM Profile Prediction | 2020
PRoBERTa [Nambiar et al., 2020] | Sequence-based | Transformer | Masked Language Modeling | 2020
UDSMProt [Strodthoff et al., 2020] | Sequence-based | LSTM | Next Amino Acid Prediction | 2020
ESM-1b [Rives et al., 2021] | Sequence-based | Transformer | Masked Language Modeling | 2021
PMLM [He et al., 2021] | Sequence-based | Transformer | Pairwise Masked Language Modeling | 2021
MSA Transformer [Rao et al., 2021] | Sequence-based | MSA Transformer | Masked Language Modeling | 2021
ProteinLM [Xiao et al., 2021] | Sequence-based | BERT | Masked Language Modeling | 2021
PLUS [Min et al., 2021] | Sequence-based | Bidirectional RNN | Masked Language Modeling, Same-Family Prediction | 2021
Adversarial MLM [McDermott et al., 2021] | Sequence-based | Transformer | Masked Language Modeling, Adversarial Training | 2021
ProteinBERT [Brandes et al., 2022] | Sequence-based | BERT | Masked Language Modeling | 2022
CARP [Yang et al., 2022a] | Sequence-based | CNN | Masked Language Modeling | 2022
3DCNN [Derevyanko et al., 2018] | Structure-based | 3DCNN | - | 2018
IEConv [Hermosilla et al., 2020] | Structure-based | IEConv | - | 2020
GVP-GNN [Jing et al., 2020] | Structure-based | GVP | - | 2020
GraphMS [Cheng et al., 2021] | Structure-based | GCN | Multiview Contrast | 2021
DL-MSFM [Gelman et al., 2021] | Structure-based | GCN | - | 2021
PG-GNN [Xia and Ku, 2021] | Structure-based | PG-GNN | - | 2021
CRL [Hermosilla and Ropinski, 2022] | Structure-based | IEConv | Multiview Contrast | 2022
DW-GNN [Li et al., 2022] | Structure-based | DWP | - | 2022
GBPNet [Aykent and Xia, 2022] | Structure-based | GBP | - | 2022
GearNet [Zhang et al., 2022] | Structure-based | GearNet | Multiview Contrast, Distance and Dihedral Prediction, Residue Type Prediction | 2022
ATOMRefine [Wu and Cheng, 2022] | Structure-based | SE(3) Transformer | - | 2022
STEPS [Chen et al., 2022] | Structure-based | GIN | Distance and Dihedral Prediction | 2022
GraphCPI [Quan et al., 2019] | Co-Modeling | CNN, GNN | - | 2019
MT-LSTM [Bepler and Berger, 2019] | Co-Modeling | Bidirectional LSTM | Contact Prediction, Pairwise Similarity Prediction | 2019
LM-GVP [Wang et al., 2021] | Co-Modeling | Transformer, GVP | - | 2021
AlphaFold2 [Jumper et al., 2021] | Co-Modeling | Evoformer | Masked Language Modeling, Full-atomic Structure Prediction | 2021
DeepFRI [Gligorijević et al., 2021] | Co-Modeling | LSTM, GCN | - | 2021
HJRSS [Mansoor et al., 2021] | Co-Modeling | SE(3) Transformer | Masked Language Modeling, Graph Completion | 2021
GraSR [Xia et al., 2022] | Co-Modeling | LSTM, GCN | Momentum Contrast | 2022
CPAC [You and Shen, 2022] | Co-Modeling | Hierarchical RNN, GAT | Masked Language Modeling, Graph Completion | 2022
MIF-ST [Yang et al., 2022b] | Co-Modeling | CNN, GNN | Masked Inverse Folding | 2022
OmegaFold [Wu et al., 2022] | Co-Modeling | Geoformer | Masked Language Modeling, Full-atomic Structure Prediction | 2022
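Predictive pretext tasks such as Distance and Dihedral Prediction self-generate their supervision directly from coordinates. A sketch of the pseudo-label computation, assuming Cα coordinates (helper names are ours; real pipelines typically bin the continuous values into classes):

```python
import numpy as np

def pairwise_distances(coords):
    """Residue-residue distance labels for Distance Prediction."""
    return np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (radians) defined by four consecutive points,
    usable as a Dihedral Prediction pseudo-label."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1   # component of b0 normal to b1
    w = b2 - np.dot(b2, b1) * b1   # component of b2 normal to b1
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))
```

Because these labels are invariant to rigid transformations of the input structure, they can supervise any of the structure encoders above without special equivariance machinery in the output head.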
5 Downstream Tasks (Applications)

In the above, we have presented a variety of commonly used model architectures and pretext tasks for protein representation learning, based on which we summarized the surveyed works in Table 1, listing their categories, model architectures, pretext tasks, and publication years. In this section, we divide existing downstream tasks for protein representation learning into the following four main categories: protein property prediction, protein (complex) structure prediction, protein design, and structure-based drug design. It is worth noting that some downstream tasks have labels (i.e., model outputs) that do not change under rigid-body transformations of the inputs (when the inputs can be so transformed, e.g., protein structures).
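This invariance can be checked numerically. In the toy example below, random coordinates stand in for residue positions (not tied to any particular model): a random rigid transformation leaves a pairwise-distance feature unchanged, while the raw coordinates move equivariantly.

```python
import numpy as np

def random_rigid_transform(rng):
    """Sample a random rotation (via QR) and translation: an SE(3) action."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:      # flip one axis to get a proper rotation
        q[:, 0] *= -1.0
    return q, rng.standard_normal(3)

def pairwise_distances(x):
    """An SE(3)-invariant feature of a point cloud."""
    diff = x[:, None, :] - x[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
coords = rng.standard_normal((8, 3))   # toy "residue" coordinates
rot, trans = random_rigid_transform(rng)
moved = coords @ rot.T + trans         # rigid-body transform of the input

# Invariant label: unchanged. Equivariant label (the coordinates): changed.
assert np.allclose(pairwise_distances(coords), pairwise_distances(moved))
assert not np.allclose(coords, moved)
```

Property-prediction labels behave like the distance matrix here; structure-related labels behave like the coordinates, which is why the corresponding models must be SE(3)-equivariant.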
For example, various protein property prediction tasks take a transformable protein structure as input and output a constant prediction, usually modeled as a simple multi-label classification problem or multiple binary classification problems. However, the labels of some downstream tasks change equivariantly with the inputs, and these tasks are attracting more and more attention. Typically, the learning objectives of these tasks are structure-related, and they usually place higher requirements on the model architecture, requiring the model to be SE(3)-equivariant. We believe that, from the perspective of protein representation learning, the approaches to different downstream tasks can also learn from each other.

5.1 Protein Property Prediction

Protein property prediction aims to regress or classify important properties of protein sequences or structures that are closely related to biological functions, such as the type of secondary structure, the strength of connections between amino acids, the type of protein folding, fluorescence intensity, and protein stability [Rao et al., 2019]. Besides, several protein-specific prediction tasks can also be grouped into this category, including quality evaluation of protein folding [Baldassarre et al., 2021], predicting the effect of mutations on protein function [Meier et al., 2021], and predicting protein-protein interactions [Wang et al., 2019].

5.2 Protein (Complex) Structure Prediction

The primary goal of protein structure prediction is to predict the structural coordinates from a given set of amino acid sequences. Some approaches aim to predict only backbone coordinates [Baek et al.
, 2021; Si et al., 2020], while others focus on the more challenging full-atomic coordinate predictions [Jumper et al., 2021; Wu et al., 2022; Rao et al., 2021]. On the other hand, protein structure refinement [Hiranuma et al., 2021; Wu and Cheng, 2022] proposes to update a coarse protein structure iteratively to generate a more fine-grained structure. Besides, the task of protein structure inpainting aims to reconstruct the complete protein structure from a partially given sub-structure [McPartlon and Xu, 2022] or distance map [Lee and Kim, 2022].

5.3 Protein Design

Deep learning-based protein design has made tremendous progress in recent years, and the major works can be divided into three categories. The first is to pre-train a model with a large number of sequences from the same protein family and then use it to generate new homologous sequences [Smith and Smith, 1990]. The structure-based methods aim to directly generate protein sequences conditioned on a given protein structure [Ingraham et al., 2019]. The last and most challenging one is de novo protein design [Huang et al., 2016; Korendovych and DeGrado, 2020; Koepnick et al., 2019], which aims to generate both protein sequences and structures conditioned on taxonomic and keyword tags such as molecular function and cellular component.

5.4 Structure-Based Drug Design

Structure-Based Drug Design (SBDD) is a promising direction for fast and cost-efficient compound discovery. Specifically, SBDD designs inhibitors or activators (usually small molecules, i.e., drugs) directly against protein targets of interest, which promises a high success rate and efficiency [Kuntz, 1992; Drews, 2000]. In the past two years, a line of auto-regressive methods has been proposed for SBDD [Liu et al., 2022a; Peng et al., 2022; Masuda et al., 2020], which generate molecule atoms one by one conditioned on the given structural context of protein targets. Recently, there are also works based on the Denoising Diffusion Probabilistic Model (DDPM) [Lin et al., 2022; Schneuing et al., 2022]. Targeting specific protein pockets, these diffusion-based methods generate molecule atoms as a whole from random Gaussian noise.
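As a minimal illustration of the diffusion view, the snippet below applies the standard DDPM closed-form forward (noising) process to toy 3D atom positions; the noise schedule, shapes, and names are illustrative, not taken from the cited SBDD methods. The learned reverse process must undo exactly this corruption.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form DDPM forward step:
    q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
    """
    alpha_bar = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
atoms = rng.standard_normal((10, 3))    # toy ligand-atom coordinates
betas = np.linspace(1e-4, 0.02, 1000)   # a common linear noise schedule
noisy = forward_diffuse(atoms, t=999, betas=betas, rng=rng)

# At the final step almost no signal remains, so generation can start from
# pure Gaussian noise and denoise conditioned on the protein pocket.
assert np.cumprod(1.0 - betas)[-1] < 1e-3
```

Autoregressive SBDD methods instead factorize the joint distribution over atoms and sample them one at a time, each conditioned on the pocket and the atoms placed so far.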
The above methods all depend on a proper representation module for the protein, especially the protein structure. An early attempt at deep generative models in this field [Luo et al., 2021] uses a 3D CNN as the protein structure context encoder to obtain meaningful, roto-translation invariant features. With the development of protein structure representation methods, particularly geometry-aware models, subsequent methods widely use geometric-(equi/in)variant networks, such as EGNN [Gong and Cheng, 2019], GVP [Jing et al., 2020], and IPA [Jumper et al., 2021], as backbones. It is worth noting that protein representation models are not only common in various protein structure context encoders; many generative decoders can also adopt their architectural designs. From this example, we can see that protein representation is a fundamental problem and that many downstream tasks involving proteins can benefit from advances in protein representation research in various aspects, including better embeddings and stronger model architectures.

6 Deep Insights and Future Outlooks

6.1 Deeper Insights

On the basis of a detailed review of the model architectures, pretext tasks, and downstream tasks, we would like to provide some deeper insights into protein representation learning.

Insight 1: PRL is the core of deep protein modeling. With the development of deep learning, deep protein modeling is becoming a popular research topic, and one of its cores is how to learn "meaningful" representations for proteins. This involves three key issues: (1) feature extraction: model architectures; (2) pre-training: pretext tasks; and (3) application: downstream tasks. An in-depth investigation of these three key issues is of great importance for the development of more deep protein modeling methods.

Insight 2: Task-level convertibility. Throughout this survey, one of the main points we have emphasized is the convertibility between downstream tasks and pretext tasks. We believe we are the first to explain the role of pretext tasks from this perspective, which has rarely been addressed in previous work. For example, we directly categorize some well-known downstream tasks, such as full-atomic structure prediction, as specific kinds of pretext tasks. The motivation behind this understanding is that the definition of a task is itself a relative concept and that different tasks help the model extract different, potentially complementary, aspects of information. For example, full-atomic structure prediction helps the model capture rich structural information, which is also beneficial for various protein property prediction tasks, such as fold prediction, since protein structure often determines protein function. This suggests that whether a specific task is a downstream task or a pretext task usually depends on what we are concerned about, and the role of a task may keep changing from application to application.

Insight 3: Data-specific criteria for design selection. It is tricky to discuss the advantages and disadvantages of different methods or designs because their effectiveness depends heavily on the size, format, and complexity of the data. For example, on simple small-scale data, the Transformer is not necessarily more effective than a traditional LSTM for sequence modeling, while the situation may be completely reversed on large-scale complex data. Therefore, there is no "optimal" architecture or pretext task that works for all data types and downstream tasks, and the criterion for selecting an architecture and pretext task is data-specific.

6.2 Future Outlooks

Despite the great progress of existing methods, challenges still exist due to the complexity of proteins.
In this section, we suggest some promising directions for future work.

Direction 1: Broader application scenarios. The biological research topics on proteins are diverse, but most existing work has delved into only a small subset of them, because these topics have been well formalized by representative works, such as AlphaFold2 [Jumper et al., 2021] for protein structure prediction and TAPE [Rao et al., 2019] for protein property prediction. As a result, it is more worthwhile to explore the role of protein representation learning in a wider range of biological application scenarios than to design overly complex modules for subtle performance gains in a well-formalized application.

Direction 2: Unified evaluation protocols. Research in protein representation learning is now in an era of barbarism. While a great deal of new work emerges every day, much of it rests on unfair comparisons, such as with different datasets, architectures, metrics, etc. For example, some MSA-based works on structure prediction have been blatantly compared with single-sequence-based works and claimed to be better. To promote the health of the field, there is an urgent need to establish unified evaluation protocols for various downstream tasks to enable fair comparisons.

Direction 3: Protein-specific designs. Previous PRL methods directly take mature architectures and pretext tasks from the natural language processing field to train on proteins. For example, modeling protein sequences using LSTM may be a major innovation, but replacing LSTM with Bi-LSTM for subtle performance improvements makes little sense. Now it is time to step out of this comfort zone, and we should no longer be satisfied with simply extending techniques from other domains to the protein domain. PRL is not only a machine learning problem but also a biological problem, so we should consider designing more protein-specific architectures and pretext tasks by incorporating protein-related domain knowledge. In particular, most existing work on PRL is based on unimodal protein sequences or structures, and more work on sequence-structure co-modeling is required to fully exploit the correspondence between 1D sequences and 3D structures.

Direction 4: Margin from pre-training to fine-tuning. Currently, tremendous efforts are focused on protein pre-training strategies. However, how to fine-tune these pre-trained models for specific downstream tasks is still under-explored. Though numerous strategies have been proposed to address this problem in computer vision and natural language processing [Zhuang et al., 2020], they are difficult to apply directly to proteins. One obstacle to knowledge transfer is the huge variability between different protein datasets, in terms of both sequence length and structural complexity. Another is the poor generalization of pre-trained models, especially for tasks where collecting labeled data is laborious. Therefore, it is an important issue to design protein-specific techniques that minimize the margin between pre-training and downstream tasks.

Direction 5: Lack of explainability. While existing protein representation learning methods have achieved promising results on a variety of downstream tasks, we still know little about what the model has learned from protein data. Which feature patterns, sequence fragments, or sequence-structure relationships have been learned? These are important questions for understanding and interpreting model behavior, especially for privacy-sensitive tasks such as drug design, but they are missing in current PRL works. Overall, the interpretability of PRL methods remains to be explored in many respects; such work would help us understand how models work and provide guidance for better usage.

7 Conclusions

A comprehensive survey of the literature on protein representation learning is conducted in this paper. We develop a general and unified framework for PRL methods.
Moreover, we systematically divide existing PRL methods into three main categories, sequence-based, structure-based, and sequence-structure co-modeling, and examine them from three perspectives: model architectures, pretext tasks, and downstream applications. Finally, we point out the technical limitations of current research and provide promising directions for future work on PRL. We hope this survey paves the way for follow-up AI researchers with no bioinformatics background, setting the stage for the development of more future works.

References

[Alley et al., 2019] Ethan C Alley, Grigory Khimulya, Surojit Biswas, Mohammed AlQuraishi, and George M Church. Unified rational protein engineering with sequence-based deep representation learning. Nature Methods, 16(12):1315-1322, 2019.

[Amidi et al., 2018] Afshine Amidi, Shervine Amidi, Dimitrios Vlachakis, Vasileios Megalooikonomou, Nikos Paragios, and Evangelia I Zacharaki. EnzyNet: enzyme classification using 3D convolutional neural networks on spatial representation. PeerJ, 6:e4750, 2018.

[Armenteros et al., 2020] Jose Juan Almagro Armenteros, Alexander Rosenberg Johansen, Ole Winther, and Henrik Nielsen. Language modelling for biological sequences: curated datasets and baselines. BioRxiv, 2020.

[Asgari et al., 2019] Ehsaneddin Asgari, Nina Poerner, Alice C McHardy, and Mohammad RK Mofrad. DeepPrime2Sec: deep learning for protein secondary structure prediction from the primary sequences. BioRxiv, page 705426, 2019.

[Aykent and Xia, 2022] Sarp Aykent and Tian Xia. GBPNet: universal geometric representation learning on protein structures. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4-14, 2022.

[Baek et al., 2021] Minkyung Baek, Frank DiMaio, Ivan Anishchenko, Justas Dauparas, Sergey Ovchinnikov, Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch, R Dustin Schaeffer, et al. Accurate prediction of protein structures and interactions using a three-track neural network. Science, 373(6557):871-876, 2021.

[Baldassarre et al., 2021] Federico Baldassarre, David Menéndez Hurtado, Arne Elofsson, and Hossein Azizpour. GraphQA: protein model quality assessment using graph convolutional networks. Bioinformatics, 37(3):360-366, 2021.

[Bepler and Berger, 2019] Tristan Bepler and Bonnie Berger. Learning protein sequence embeddings using information from structure. arXiv preprint arXiv:1902.08661, 2019.

[Brandes et al., 2022] Nadav Brandes, Dan Ofer, Yam Peleg, Nadav Rappoport, and Michal Linial. ProteinBERT: a universal deep-learning model of protein sequence and function. Bioinformatics, 38(8):2102-2110, 2022.

[Chen et al., 2022] Can Chen, Jingbo Zhou, Fan Wang, Xue Liu, and Dejing Dou. Structure-aware protein self-supervised learning. arXiv preprint arXiv:2204.04213, 2022.

[Cheng et al., 2021] Shicheng Cheng, Liang Zhang, Bo Jin, Qiang Zhang, Xinjiang Lu, Mao You, and Xueqing Tian. GraphMS: drug target prediction using graph representation learning with substructures.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Applied Sciences, 11(7):3239, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Derevyanko et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2018] Georgy Derevyanko, Sergei Gru- dinin, Yoshua Bengio, and Guillaume Lamoureux.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Deep convolutional networks for quality assessment of protein folds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Bioinformatics, 34(23):4046–4053, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Devlin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2018] Jacob Devlin, Ming-Wei Chang, Ken- ton Lee, and Kristina Toutanova.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Bert: Pre-training of deep bidirectional transformers for language understand- ing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:1810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='04805, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Ding et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2019] Xinqiang Ding, Zhengting Zou, and Charles L Brooks III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Deciphering protein evolution and fitness landscapes with latent space models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature com- munications, 10(1):1–13, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Drews, 2000] J¨urgen Drews.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Drug discovery: A historical perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Science, 287(5460):1960–1964, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Elnaggar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Ahmed Elnaggar, Michael Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang, Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer, Martin Steinegger, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Prottrans: towards cracking the language of life’s code through self-supervised deep learning and high performance computing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2007.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='06225, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Gainza et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Pablo Gainza, Freyr Sverrisson, Fred- erico Monti, Emanuele Rodola, D Boscaini, MM Bron- stein, and BE Correia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Deciphering interaction finger- prints from protein molecular surfaces using geometric deep learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature Methods, 17(2):184–192, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Gelman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] Sam Gelman, Sarah A Fahlberg, Pete Heinzelman, Philip A Romero, and Anthony Gitter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Neu- ral networks to learn protein sequence–function relation- ships from deep mutational scanning data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Proceedings of the National Academy of Sciences, 118(48):e2104878118, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Gligorijevi´c et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] Vladimir Gligorijevi´c, P Douglas Renfrew, Tomasz Kosciolek, Julia Koehler Leman, Daniel Berenberg, Tommi Vatanen, Chris Chandler, Bryn C Tay- lor, Ian M Fisk, Hera Vlamakis, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Structure-based protein function prediction using graph convolutional net- works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature communications, 12(1):1–14, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Gong and Cheng, 2019] Liyu Gong and Qiang Cheng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Ex- ploiting edge features for graph neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' In Pro- ceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9211–9219, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hamilton et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2017] Will Hamilton, Zhitao Ying, and Jure Leskovec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Inductive representation learning on large graphs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' In Neural information processing systems, pages 1024–1034, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [He et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Deep residual learning for image recog- nition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [He et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] Liang He, Shizhuo Zhang, Lijun Wu, Huanhuan Xia, Fusong Ju, He Zhang, Siyuan Liu, Yingce Xia, Jianwei Zhu, Pan Deng, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Pre-training co- evolutionary protein representation via a pairwise masked language model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='15527, 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hermosilla and Ropinski, 2022] Pedro Hermosilla and Timo Ropinski.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Contrastive representation learning for 3d protein structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='15675, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hermosilla et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Pedro Hermosilla, Marco Sch¨afer, Matˇej Lang, Gloria Fackelmann, Pere Pau V´azquez, Barbora Kozl´ıkov´a, Michael Krone, Tobias Ritschel, and Timo Ropinski.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Intrinsic-extrinsic convolution and pool- ing for learning on 3d protein structures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='06252, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hiranuma et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] Naozumi Hiranuma, Hahnbeom Park, Minkyung Baek, Ivan Anishchenko, Justas Dau- paras, and David Baker.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Improved protein structure refinement guided by deep learning based accuracy estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature communications, 12(1):1–11, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hochreiter and Schmidhuber, 1997] Sepp Hochreiter and J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Long short-term memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Neural computation, 9(8):1735–1780, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hospital et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2015] Adam Hospital, Josep Ramon Go˜ni, Modesto Orozco, and Josep L Gelp´ı.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Molecular dynamics simulations: advances and applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Advances and ap- plications in bioinformatics and chemistry: AABC, 8:37, 2015.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] Lun Hu, Xiaojuan Wang, Yu-An Huang, Pengwei Hu, and Zhu-Hong You.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' A survey on computa- tional models for predicting protein–protein interactions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Briefings in Bioinformatics, 22(5):bbab036, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Hu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2022] Mingyang Hu, Fajie Yuan, Kevin K Yang, Fusong Ju, Jin Su, Hui Wang, Fei Yang, and Qiuyang Ding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Exploring evolution-based &-free protein language models as protein function predictors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='06583, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Huang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2016] Po-Ssu Huang, Scott E Boyken, and David Baker.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' The coming of age of de novo protein de- sign.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature, 537(7620):320–327, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Ingraham et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2019] John Ingraham, Vikas Garg, Regina Barzilay, and Tommi Jaakkola.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Generative models for graph-based protein design.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Advances in neural informa- tion processing systems, 32, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Iuchi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] Hitoshi Iuchi, Taro Matsutani, Keisuke Yamada, Natsuki Iwano, Shunsuke Sumi, Shion Hosoda, Shitao Zhao, Tsukasa Fukunaga, and Michiaki Hamada.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Representation learning applications in biological se- quence analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Computational and Structural Biotech- nology Journal, 19:3198–3208, 2021.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Jing et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael JL Townshend, and Ron Dror.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Learning from protein structure with geometric vector perceptrons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='01411, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Jumper et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2021] John Jumper, Richard Evans, Alexan- der Pritzel, Tim Green, Michael Figurnov, Olaf Ron- neberger, Kathryn Tunyasuvunakool, Russ Bates, Au- gustin ˇZ´ıdek, Anna Potapenko, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Highly accu- rate protein structure prediction with alphafold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature, 596(7873):583–589, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Kandathil et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Shaun M Kandathil, Joe G Greener, Andy M Lau, and David T Jones.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Deep learning-based prediction of protein structure using learned representa- tions of multiple sequence alignments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Biorxiv, pages 2020–11, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Karplus and Petsko, 1990] Martin Karplus and Gregory A Petsko.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Molecular dynamics simulations in biology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Na- ture, 347(6294):631–639, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Kipf and Welling, 2016] Thomas N Kipf and Max Welling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Semi-supervised classification with graph convolutional networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:1609.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='02907, 2016.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Koepnick et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2019] Brian Koepnick, Jeff Flatten, Tamir Husain, Alex Ford, Daniel-Adriano Silva, Matthew J Bick, Aaron Bauer, Gaohua Liu, Yojiro Ishida, Alexander Boykov, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' De novo protein design by citizen scientists.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Nature, 570(7761):390–394, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Korendovych and DeGrado, 2020] Ivan V Korendovych and William F DeGrado.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' De novo protein design, a retrospective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Quarterly reviews of biophysics, 53, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Kuntz, 1992] Irwin D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Kuntz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Structure-based strategies for drug design and discovery.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Science, 257(5073):1078– 1082, 1992.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Lapedes et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 1999] Alan S Lapedes, Bertrand G Giraud, LonChang Liu, and Gary D Stormo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Correlated mutations in models of protein sequences: phylogenetic and struc- tural effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Lecture Notes-Monograph Series, pages 236– 256, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [LeCun et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 1995] Yann LeCun, Yoshua Bengio, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Convolutional networks for images, speech, and time se- ries.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' The handbook of brain theory and neural networks, 3361(10):1995, 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Lee and Kim, 2022] Jin Sub Lee and Philip M Kim.' 
Proteinsgm: Score-based generative modeling for de novo protein design. bioRxiv, 2022.
[Li et al., 2022] Jiahan Li, Shitong Luo, Congyue Deng, Chaoran Cheng, Jiaqi Guan, Leonidas Guibas, Jian Peng, and Jianzhu Ma. Directed weight neural networks for protein structure representation learning. arXiv preprint arXiv:2201.13299, 2022.
[Lin et al., 2022] Haitao Lin, Yufei Huang, Meng Liu, Xuanjing Li, Shuiwang Ji, and Stan Z Li. Diffbp: Generative diffusion of 3d molecules for target protein binding. arXiv preprint arXiv:2211.11214, 2022.
[Liu et al., 2022a] Meng Liu, Youzhi Luo, Kanji Uchino, Koji Maruhashi, and Shuiwang Ji. Generating 3d molecules for target protein binding. In International Conference on Machine Learning, 2022.
[Liu et al., 2022b] Yixin Liu, Ming Jin, Shirui Pan, Chuan Zhou, Yu Zheng, Feng Xia, and Philip Yu. Graph self-supervised learning: A survey. IEEE Transactions on Knowledge and Data Engineering, 2022.
[Lu et al., 2020] Amy X Lu, Haoran Zhang, Marzyeh Ghassemi, and Alan Moses. Self-supervised contrastive learning of protein representations by mutual information maximization. bioRxiv, 2020.
[Luo et al., 2021] Shitong Luo, Jiaqi Guan, Jianzhu Ma, and Jian Peng. A 3D generative model for structure-based drug design. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021.
[Madani et al., 2020] Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R Eguchi, Po-Ssu Huang, and Richard Socher. Progen: Language modeling for protein generation. arXiv preprint arXiv:2004.03497, 2020.
[Mansoor et al., 2021] Sanaa Mansoor, Minkyung Baek, Umesh Madan, and Eric Horvitz. Toward more general embeddings for protein design: Harnessing joint representations of sequence and structure. bioRxiv, 2021.
[Masuda et al., 2020] Tomohide Masuda, Matthew Ragoza, and David Ryan Koes. Generating 3d molecular structures conditional on a receptor binding site with deep generative models. arXiv preprint arXiv:2010.14442, 2020.
[McDermott et al., 2021] Matthew McDermott, Brendan Yap, Harry Hsu, Di Jin, and Peter Szolovits. Adversarial contrastive pre-training for protein sequences. arXiv preprint arXiv:2102.00466, 2021.
[McPartlon and Xu, 2022] Matthew McPartlon and Jinbo Xu. Attnpacker: An end-to-end deep learning method for rotamer-free protein side-chain packing. bioRxiv, 2022.
[Meier et al., 2021] Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu, and Alex Rives. Language models enable zero-shot prediction of the effects of mutations on protein function. Advances in Neural Information Processing Systems, 34:29287–29303, 2021.
[Min et al., 2021] Seonwoo Min, Seunghyun Park, Siwon Kim, Hyun-Soo Choi, Byunghan Lee, and Sungroh Yoon. Pre-training of deep bidirectional protein sequence representations with structural information. IEEE Access, 9:123912–123926, 2021.
[Nambiar et al., 2020] Ananthan Nambiar, Maeve Heflin, Simon Liu, Sergei Maslov, Mark Hopkins, and Anna Ritz. Transforming the language of life: transformer neural networks for protein prediction tasks. In Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, pages 1–8, 2020.
[Nourani et al., 2020] Esmaeil Nourani, Ehsaneddin Asgari, Alice C McHardy, and Mohammad RK Mofrad. Tripletprot: Deep representation learning of proteins based on siamese networks. bioRxiv, 2020.
[Peng et al., 2022] Xingang Peng, Shitong Luo, Jiaqi Guan, Qi Xie, Jian Peng, and Jianzhu Ma. Pocket2mol: Efficient molecular sampling based on 3d protein pockets. In International Conference on Machine Learning, 2022.
[Preparata and Shamos, 2012] Franco P Preparata and Michael I Shamos. Computational geometry: an introduction. Springer Science & Business Media, 2012.
[Quan et al., 2019] Zhe Quan, Yan Guo, Xuan Lin, Zhi-Jie Wang, and Xiangxiang Zeng. Graphcpi: Graph neural representation learning for compound-protein interaction. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 717–722. IEEE, 2019.
[Rao et al., 2019] Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Peter Chen, John Canny, Pieter Abbeel, and Yun Song. Evaluating protein transfer learning with tape. Advances in neural information processing systems, 32, 2019.
[Rao et al., 2021] Roshan M Rao, Jason Liu, Robert Verkuil, Joshua Meier, John Canny, Pieter Abbeel, Tom Sercu, and Alexander Rives. Msa transformer. In International Conference on Machine Learning, pages 8844–8856. PMLR, 2021.
[Rives et al., 2021] Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, et al. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. Proceedings of the National Academy of Sciences, 118(15):e2016239118, 2021.
[Rohl et al., 2004] Carol A Rohl, Charlie EM Strauss, Kira MS Misura, and David Baker. Protein structure prediction using rosetta. In Methods in enzymology, volume 383, pages 66–93. Elsevier, 2004.
[Schaap et al., 2001] Marcel G Schaap, Feike J Leij, and Martinus Th Van Genuchten. Rosetta: A computer program for estimating soil hydraulic parameters with hierarchical pedotransfer functions. Journal of hydrology, 251(3-4):163–176, 2001.
[Schneuing et al., 2022] Arne Schneuing, Yuanqi Du, Charles Harris, Arian Jamasb, Ilia Igashov, Weitao Du, Tom Blundell, Pietro Liò, Carla Gomes, Max Welling, et al. Structure-based drug design with equivariant diffusion models. arXiv preprint arXiv:2210.13695, 2022.
[Shanehsazzadeh et al., 2020] Amir Shanehsazzadeh, David Belanger, and David Dohan. Is transfer learning necessary for protein landscape prediction? arXiv preprint arXiv:2011.03443, 2020.
[Si et al., 2020] Dong Si, Spencer A Moritz, Jonas Pfab, Jie Hou, Renzhi Cao, Liguo Wang, Tianqi Wu, and Jianlin Cheng. Deep learning to predict protein backbone structure from high-resolution cryo-em density maps. Scientific reports, 10(1):1–22, 2020.
[Sinai et al., 2017] Sam Sinai, Eric Kelsic, George M Church, and Martin A Nowak. Variational auto-encoding of protein sequences. arXiv preprint arXiv:1712.03346, 2017.
[Smith and Smith, 1990] Randall F Smith and Temple F Smith. Automatic generation of primary sequence patterns from sets of related protein sequences. Proceedings of the National Academy of Sciences, 87(1):118–122, 1990.
[Strodthoff et al., 2020] Nils Strodthoff, Patrick Wagner, Markus Wenzel, and Wojciech Samek. Udsmprot: universal deep sequence models for protein classification. Bioinformatics, 36(8):2401–2409, 2020.
[Sturmfels et al., 2020] Pascal Sturmfels, Jesse Vig, Ali Madani, and Nazneen Fatema Rajani. Profile prediction: An alignment-based pre-training task for protein sequence models. arXiv preprint arXiv:2012.00195, 2020.
[Sverrisson et al., 2021] Freyr Sverrisson, Jean Feydy, Bruno E Correia, and Michael M Bronstein. Fast end-to-end learning on protein surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15272–15281, 2021.
[Thomas et al., 2005] John Thomas, Naren Ramakrishnan, and Chris Bailey-Kellogg. Graphical models of residue coupling in protein families. In Proceedings of the 5th international workshop on Bioinformatics, pages 12–20, 2005.
[Torrisi et al., 2020] Mirko Torrisi, Gianluca Pollastri, and Quan Le. Deep learning methods in protein structure prediction. Computational and Structural Biotechnology Journal, 18:1301–1310, 2020.
[Townshend et al., 2019] Raphael Townshend, Rishi Bedi, Patricia Suriana, and Ron Dror. End-to-end learning on 3d protein structure for interface prediction. Advances in Neural Information Processing Systems, 32, 2019.
[Unsal et al., 2020] Serbulent Unsal, Heval Ataş, Muammer Albayrak, Kemal Turhan, Aybar C Acar, and Tunca Doğan. Evaluation of methods for protein representation learning: a quantitative analysis. bioRxiv, 2020.
[Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[Wang et al., 2019] Yanbin Wang, Zhu-Hong You, Shan Yang, Xiao Li, Tong-Hai Jiang, and Xi Zhou. A high efficient biological language model for predicting protein–protein interactions. Cells, 8(2):122, 2019.
[Wang et al., 2021] Zichen Wang, Steven A Combs, Ryan Brand, Miguel Romero Calvo, Panpan Xu, George Price, Nataliya Golovach, Emannuel O Salawu, Colby J Wise, Sri Priya Ponnapalli, et al. Lm-gvp: A generalizable deep learning framework for protein property prediction from sequence and structure. bioRxiv, 2021.
[Weigt et al., 2009] Martin Weigt, Robert A White, Hendrik Szurmant, James A Hoch, and Terence Hwa. Identification of direct residue contacts in protein–protein interaction by message passing. Proceedings of the National Academy of Sciences, 106(1):67–72, 2009.
[Wu and Cheng, 2022] Tianqi Wu and Jianlin Cheng. Atomic protein structure refinement using all-atom graph representations and se(3)-equivariant graph neural networks. bioRxiv, 2022.
[Wu et al., 2021] Lirong Wu, Haitao Lin, Cheng Tan, Zhangyang Gao, and Stan Z Li. Self-supervised learning on graphs: Contrastive, generative, or predictive. IEEE Transactions on Knowledge and Data Engineering, 2021.
[Wu et al., 2022] Ruidong Wu, Fan Ding, Rui Wang, Rui Shen, Xiwen Zhang, Shitong Luo, Chenpeng Su, Zuofan Wu, Qi Xie, Bonnie Berger, et al. High-resolution de novo structure prediction from primary sequence. bioRxiv, 2022.
[Xia and Ku, 2021] Tian Xia and Wei-Shinn Ku. Geometric graph representation learning on protein structure prediction. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 1873–1883, 2021.
[Xia et al., 2022] Chunqiu Xia, Shi-Hao Feng, Ying Xia, Xiaoyong Pan, and Hong-Bin Shen. Fast protein structure comparison through effective representation learning with contrastive graph neural networks. PLoS computational biology, 18(3):e1009986, 2022.
[Xiao et al., 2021] Yijia Xiao, Jiezhong Qiu, Ziang Li, Chang-Yu Hsieh, and Jie Tang. Modeling protein using large-scale pretrain language model. arXiv preprint arXiv:2108.07435, 2021.
[Xie et al., 2022] Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, and Shuiwang Ji. Self-supervised learning of graph neural networks: A unified review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.
[Xu and Zhang, 2011] Dong Xu and Yang Zhang. Improving the physical realism and structural accuracy of protein models by a two-step atomic-level energy minimization. Biophysical journal, 101(10):2525–2534, 2011.
[Xu et al., 2018] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
[Yang et al., 2022a] Kevin K Yang, Alex X Lu, and Nicolo K Fusi. Convolutions are competitive with transformers for protein sequence pretraining. bioRxiv, 2022.
[Yang et al., 2022b] Kevin K Yang, Niccolò Zanichelli, and Hugh Yeh. Masked inverse folding with sequence transfer for protein representation learning. bioRxiv, 2022.
[You and Shen, 2022] Yuning You and Yang Shen.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Cross- modality and self-supervised protein embedding for compound–protein affinity and contact prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Bioin- formatics, 38(Supplement 2):ii68–ii74, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Zhang and Zhang, 2010] Jian Zhang and Yang Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' A novel side-chain orientation dependent potential derived from random-walk reference state for protein fold selec- tion and structure prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' PloS one, 5(10):e15386, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Zhang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2022] Zuobai Zhang, Minghao Xu, Arian Ja- masb, Vijil Chenthamarakshan, Aurelie Lozano, Payel Das, and Jian Tang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Protein representation learn- ing by geometric structure pretraining.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' arXiv preprint arXiv:2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content='06125, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Zhou et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Guangyu Zhou, Muhao Chen, Chelsea JT Ju, Zheng Wang, Jyun-Yu Jiang, and Wei Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Muta- tion effect estimation on protein–protein interactions us- ing deep contextualized representation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' NAR ge- nomics and bioinformatics, 2(2):lqaa015, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' [Zhuang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=', 2020] Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, and Qing He.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' A comprehensive survey on transfer learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'} +page_content=' Proceedings of the IEEE, 109(1):43–76, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/I9AyT4oBgHgl3EQf5_o0/content/2301.00813v1.pdf'}